Dataset schema: id (string, length 3–9), source (string, 1 class), version (string, 1 class), text (string, length 1.54k–298k), added (date, 1993-11-25 05:05:38 to 2024-09-20 15:30:25), created (date, 1-01-01 00:00:00 to 2024-07-31 00:00:00), metadata (dict).
15702120
pes2o/s2orc
v3-fos-license
Q_T-Resummation for Polarized Semi-Inclusive Deep Inelastic Scattering We study the transverse-momentum distribution of hadrons produced in semi-inclusive deep-inelastic scattering. We consider cross sections for various combinations of the polarizations of the initial lepton and nucleon or the produced hadron, for which we perform the resummation of large double-logarithmic perturbative corrections arising at small transverse momentum. We present phenomenological results for the process $ep\to e\pi X$ for the typical kinematics in the COMPASS experiment. We discuss the impact of the perturbative resummation and of estimated non-perturbative contributions on the corresponding cross sections and their spin asymmetry. Semi-inclusive deep inelastic scattering (SIDIS) with polarized beams and target, $ep \to ehX$, for which a hadron h is detected in the final state, has been a powerful tool for investigating the spin structure of the nucleon. It also challenges our understanding of the reaction mechanisms in QCD. The bulk of the SIDIS events provided by experiments are in a kinematic regime of large virtuality $Q^2$ of the exchanged virtual photon and relatively small transverse momentum $q_T$. In our recent paper 1, we have studied the transverse-momentum dependence of SIDIS observables in this region, applying the resummation technique of 2. The processes we considered were the leading-twist double-spin reactions: (i) $e + p \to e + \pi + X$, (ii) $e + p \to e + \Lambda + X$, (iv) $e + p \to e + \pi + X$, (v) $e + p \to e + \Lambda + X$. Here arrows to the right (upward arrows) denote longitudinal (transverse) polarization. Needless to say, the final-state pion could be replaced by any hadron. The same is true for the Λ, as long as the observed hadron is spin-1/2 and its polarization can be detected experimentally. Here we present a brief summary of the main results of 1. There are five Lorentz invariants for SIDIS, $e(k) + A(p_A, S_A) \to e(k') + B(p_B, S_B) + X$: the center-of-mass energy squared for the initial electron and the proton, To write down the cross section, we use a frame where $p_A$ and $q$ are collinear, and we call the azimuthal angle between the lepton plane and the hadron plane φ. In this frame, the transverse momentum of the final-state hadron B with respect to $p_A$ and $q$ is given by The lowest-order (LO) cross section differential in $q_T$ (or $p_T$) is of $O(\alpha_s)$ and has been derived in 3. It can be decomposed into several pieces with different dependences on φ: for processes (i) and (ii) in (1), for (iv) and (v), and for (iii). Here $\Phi_A$ ($\Phi_B$) is the azimuthal angle of the transverse spin vector of A (B) as measured from the hadron plane around $p_A$ ($p_B$) in the so-called hadron frame for which $q = (0, 0, 0, -Q)$. At small $q_T$, $\sigma_0$ and $\sigma_0^T$ develop the large logarithmic contribution $\alpha_s \ln(Q^2/q_T^2)/q_T^2$.
At yet higher orders, corrections as large as $\alpha_s^k \ln^{2k}(Q^2/q_T^2)/q_T^2$ arise in the cross section. We have worked out the NLL resummation of these large logarithmic corrections in $\sigma_0$ and $\sigma_0^T$ for all the processes in (1) within the b-space resummation formalism of 2, extending the previous studies on the resummation for unpolarized SIDIS 4. The φ-dependent contributions to the cross sections in general also develop large logarithms 5; their resummation would require an extension of the formalism. In order to study the impact of resummation, we have carried out a numerical calculation for the process $ep \to e\pi X$. The resummed cross section takes the form of an inverse Fourier transform into $q_T$ space. To carry out the Fourier integral, one needs a recipe for treating the Landau pole present in the perturbatively calculated Sudakov form factor. We have followed the method of 6, which deforms the b-integral to a contour integral in the complex b-plane. This method introduces no new parameter and is identical to the original b-integral for any finite-order expansion of the Sudakov exponent. For comparison, we have also used the $b_*$-method proposed in 2. In order to incorporate possible nonperturbative corrections, we introduce a Gaussian form factor by shifting the Sudakov exponent as $e^{S(b,Q)} \to e^{S(b,Q) - g b^2}$, where the coefficient g may be determined by comparison with data. In order to obtain an adequate description also at large $q_T \sim Q$, we "match" the resummed cross section to the fixed-order (LO, $O(\alpha_s)$) one. This is achieved by subtracting from the resummed expression its $O(\alpha_s)$ expansion and then adding the full $O(\alpha_s)$ cross section 6,7. As an example, we show in Fig. 1 the $z_f$-integrated cross sections and their spin asymmetry for the typical kinematics of the COMPASS experiment, $S_{ep} = 300\,\mathrm{GeV}^2$, $Q^2 = 10\,\mathrm{GeV}^2$, $x_{bj} = 0.04$. As expected, the resummation tames the divergence of the LO cross section at $q_T \to 0$ and enhances the cross section in the region of intermediate and large $q_T$. The nonperturbative Gaussian makes this tendency stronger. Although the cross sections vary slightly when different treatments of the b-integral and different values of g are chosen, the effects of resummation and the nonperturbative Gaussian are mostly common to both the unpolarized and the polarized cases. Accordingly, the spin asymmetry is relatively insensitive to these effects. It will be interesting to compare our results with forthcoming data from COMPASS and HERMES, and also to extend the analysis to the reaction $ep \to e\Lambda X$ which is accessible at HERA.
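As an illustration of the two numerical ingredients just described (the inverse Fourier, i.e. Bessel, transform of the b-space form factor with a nonperturbative Gaussian, and the matching to the fixed-order result), here is a minimal Python sketch. It is not the authors' code: the Sudakov exponent below is a toy leading-double-log expression with an assumed b*-type regularisation, and resummed_expanded and fixed_order are placeholders for the perturbative pieces.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

# Toy Sudakov exponent (illustrative double-log behaviour only; the paper's NLL
# expression involves the full A and B coefficients and the running coupling).
def sudakov(b, Q, alpha_s=0.2, CF=4.0 / 3.0):
    b_star = b / np.sqrt(1.0 + 0.25 * b**2)          # assumed b*-type regularisation
    return -alpha_s * CF / (2.0 * np.pi) * np.log(1.0 + (Q * b_star) ** 2) ** 2

# q_T distribution as the Bessel transform of the b-space form factor,
# including the nonperturbative Gaussian exp(-g b^2).
def resummed_qT(qT, Q, g=0.5, b_max=50.0):
    integrand = lambda b: b * j0(qT * b) * np.exp(sudakov(b, Q) - g * b**2)
    value, _ = quad(integrand, 0.0, b_max, limit=400)
    return value / (2.0 * np.pi)

# Matching: resummed minus its O(alpha_s) expansion plus the full O(alpha_s) result.
# `resummed_expanded` and `fixed_order` are user-supplied callables (placeholders here).
def matched_qT(qT, Q, resummed_expanded, fixed_order, g=0.5):
    return resummed_qT(qT, Q, g) - resummed_expanded(qT, Q) + fixed_order(qT, Q)

if __name__ == "__main__":
    Q = np.sqrt(10.0)                                 # Q^2 = 10 GeV^2 as in Fig. 1
    for qT in (0.5, 1.0, 2.0, 4.0):
        print(f"qT = {qT:3.1f} GeV  ->  resummed form factor {resummed_qT(qT, Q):.4e}")
```

The Gaussian coefficient g and the b* prescription here only mimic the qualitative effect discussed in the text: suppressing the large-b region and taming the small-q_T divergence.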
2014-10-01T00:00:00.000Z
2006-06-29T00:00:00.000
{ "year": 2006, "sha1": "91ba8811097489295215793a9d81db70b37f3f64", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ph/0606295", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "91ba8811097489295215793a9d81db70b37f3f64", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
59190556
pes2o/s2orc
v3-fos-license
Killing tensors in stationary and axially symmetric space-times We discuss the existence of Killing tensors for certain (physically motivated) stationary and axially symmetric vacuum space-times. We show nonexistence of a nontrivial Killing tensor for a Tomimatsu-Sato metric (up to valence 7), for a C-metric (up to valence 9) and for a Zipoy-Voorhees metric (up to valence 11). The results are obtained by mathematically completely rigorous, nontrivial computer algebra computations with a huge number of equations involved in the problem. Introduction Let (M, g) be a 4-dimensional manifold with Lorentzian metric g of signature (+,+,+,-). A Killing tensor of valence d on M is a symmetric tensor field K whose symmetrized covariant derivative vanishes, ∇_{(i_0} K_{i_1...i_d)} = 0. (1) Here, ∇ denotes the Levi-Civita connection of g and the components K_{i_1...i_d} smoothly depend on the position coordinates. The metric g itself trivially is a Killing tensor of valence 2. Killing vectors with lowered indices are valence-1 Killing tensors. Given two Killing tensors, we can construct a new Killing tensor by forming their symmetrized product. Since Equation (1) is linear in the components of K, a linear combination of Killing tensors of the same valence is also a Killing tensor. Killing tensors correspond to first integrals of the geodesic flow that are homogeneous polynomials in the momenta: An integral of the geodesic flow is a function I : T*M → R such that its Poisson bracket with the Hamiltonian H : T*M → R, H = g^{ij} p_i p_j, vanishes, i.e. {I, H} = 0. It is well known that for a Killing tensor K the function I = K^{i_1...i_d} p_{i_1} ··· p_{i_d} is an integral (here the K^{i_1...i_d} denote components of a Killing tensor with raised indices). Two Killing tensors are in involution if their corresponding integrals commute w.r.t. the standard Poisson bracket on T*M. The requirement (3) is equivalent to a system of partial differential equations (PDEs) on the coefficients of I, i.e. on the K^{i_1...i_d}. These equations are coefficients of {I, H} = 0 and therefore obviously polynomial in the momenta. Killing tensors appear in many contexts, e.g. in mechanics, mathematical relativity, integrability or differential geometry. In general relativity, the geodesic flow determines trajectories of free-falling particles. Integrals of the geodesic flow provide constants of the motion of such particles. Hamiltonian systems on 4-dimensional manifolds with four functionally independent integrals in involution are Liouville integrable. For such systems the orbits are restricted to tori (compact case) or cylinders, and the equations of motion can be solved by quadrature. It has recently been shown by Kruglikov & Matveev that a generic metric does not admit non-trivial Killing tensors [KM15]. However, many examples in physics do admit nontrivial Killing tensors. Such examples are in some sense the most important ones, e.g. the classical Kepler problem. In the context of stationary and axially symmetric vacuum metrics, the most prominent example is the family of Kerr metrics. These metrics are used as a model for the space-time around rotating neutron stars and black holes. In particular, the Schwarzschild metric is the static limit of the Kerr family. A metric satisfies the vacuum condition if it is Ricci-flat, i.e. if its Ricci tensor vanishes. This requirement is a system of partial differential equations on the components of the metric. For our examples, this requirement is automatically satisfied.
See [Vol15] for a result on Killing tensors of valence 3 on arbitrary static and axially symmetric vacuum space-times. This reference makes explicit use of the vacuum condition. Physically, the vacuum condition Ric(g) = 0 is a fair assumption in the exterior region of stationary and axially symmetric astrophysical objects when we ignore electromagnetic fields. The present paper generally follows the method used by Kruglikov & Matveev in [KM12], which is based on an algorithm. We improve this algorithm and achieve considerably higher computational efficiency. The method can prove non-existence of nontrivial Killing tensors. In case there are additional Killing tensors, the method finds them. In our computations, we have to deal with more than 10,000 equations and unknowns, which necessitates the use of computer algebra. This has also been done in [KM12] where nonexistence of a nontrivial Killing tensor of valence up to 6 is proven for the Darmois metric (in [KM12] it is called the Zipoy-Voorhees metric with δ = 2). The reachable valence of the Killing tensors is, however, only restricted by computer strength. Our code is based on [KM12], but we employ additional tricks and achieve for the same metric a nonexistence proof up to valence 11. Note that our result on the Darmois metric does not follow from [MPS13] where only analytic integrals are considered. See also Section 3.1.3. Moreover, we implement the method for the first time for a non-static metric. Specifically, we prove for a certain Tomimatsu-Sato metric that there are no additional Killing tensors up to valence 7. The paper is organized as follows. In Section 2, we give a description of the method we use. The algorithm is summarized on page 7. Then, in Section 3, examples are given to exemplify application of the method. We investigate a (non-static) Tomimatsu-Sato metric in Theorem 1, a Zipoy-Voorhees metric (the Darmois solution) in Theorem 2 and a C-metric in Theorem 3. As an example for a case of existence of a nontrivial Killing tensor, we discuss the Kerr metric in Section 3.1.2. Method The general idea is classical and sometimes called Cartan-Kähler method or prolongation-projection method. It allows us to deal with a nontrivial overdetermined system of PDEs, say S, by straightforward, but computationally challenging, algebraic calculations. Actually, the system S is linear in our cases, and therefore our computations are also linear-algebraic. The basic procedure is as follows. Consider the differential consequences of S, i.e. differentiate the equations w.r.t. the independent coordinate variables. The system of equations resulting after k differentiations is called the k-th prolongation. An overdetermined system S of PDEs is called finite or of finite type if highest derivatives of the unknowns can be expressed through lower derivatives after a finite number of differentiations. The existence of Killing tensors is equivalent to the existence of solutions for an overdetermined system of PDEs of finite type. Consider derivatives of the unknown functions (in our case the coefficients in I and their derivatives) as new, independent unknowns. The equations are then algebraic equations on the unknowns. This algebraic system admits at least the solutions corresponding to solutions of the initial differential problem. Solving the algebraic problem therefore leads to an upper bound to the number of Killing tensors.
Nonexistence of nontrivial Killing tensors is proven if this upper bound coincides with the number of trivial Killing tensors. In general, stationary and axially symmetric vacuum metrics can be written in so-called Lewis-Papapetrou coordinates (x, y, φ, t), with three parametrizing functions U(x, y), γ(x, y) and A(x, y). For the metrics in question, these parametrizing functions are fixed. In our examples, it is actually convenient not to use the coordinates of (4). Instead, we use slightly different coordinate choices, see Equations (8), (13) and (14). All our coordinate choices have the following property: The coordinates are adapted to the obvious symmetries of the metric. We clearly see that the metrics are invariant under rotations φ → φ + φ_0 and time translations t → t + t_0. These symmetries correspond to the Killing vectors ∂_t and ∂_φ. Consider level surfaces of the integrals p_t and p_φ corresponding to the two Killing vectors ∂_t and ∂_φ, respectively. For regular values, these level surfaces are submanifolds. They are endowed with the coadjoint action of the symmetry group, and the quotient space M_red is known as the symplectic quotient. It inherits a natural symplectic form and a Hamiltonian from the original manifold [Whi04; Mar92]. The Hamiltonian on this reduced space is, however, inhomogeneous, and therefore the initial problem of finding Killing tensors turns into the problem of finding inhomogeneous integrals of the geodesic flow for a reduced Hamiltonian of the form H = K_g + V. The first term K_g is called the kinetic term and corresponds to a Killing tensor on M_red. The function V : M → R is called the potential. The coordinates x, y (and the respective momenta) are called non-ignorable, while φ and t (and their momenta) are ignorable. We are interested in functions I : T*M_red → R such that the Poisson equation (2) for the Hamiltonian flow defined by H is satisfied, i.e. such that {H, I} = 0. This equation is an inhomogeneous polynomial in the momenta (p_x, p_y). We introduce the following notion of parity for such polynomials: We say that the polynomial is of even (odd) parity if all its homogeneous components have even (odd) degree in the momenta (p_x, p_y). Since H is even in the momenta (p_x, p_y), we can consider integrals I of odd and even parity in these momenta separately, cf. [KM12; Hie87]. Considering homogeneous components of (2), one obtains a list of polynomial equations. Here, we denote by I^(r) the homogeneous polynomial component of I of degree r w.r.t. the momenta (p_x, p_y). The polynomials E_k can be further decomposed w.r.t. the ignorable momenta p_φ and p_t. We have two ignorable momenta, and thus we can decompose the k-th equation E_{k-1} into 2k − 1 new polynomial equations. We can consider every polynomial equation E_l as corresponding to a collection of equations provided by the coefficients w.r.t. p_φ and p_t. Denote these equations by E_l as well. We follow the Cartan-Kähler method as explained above. We differentiate each E_l w.r.t. (x, y) and consider derivatives of the components K_{i_1...i_d} of I (i.e. the unknown functions) as new, independent unknowns. Since the metric is given explicitly in our examples, we obtain a linear-algebraic problem. The number of solutions of the linear problem can be determined by computing the rank of a (huge) matrix. Solutions of the system of PDEs corresponding to (3) are equivalent to solutions of the linear problem. Therefore the matrix rank is an upper bound to the number of Killing tensors that the metric admits.
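To make the starting point concrete, the following small sympy sketch shows how the condition {I, H} = 0 produces PDEs on the coefficients of I as coefficients of momentum monomials. It uses a valence-1 ansatz on a toy 2-dimensional metric; both are assumptions chosen for illustration only and are not one of the space-times studied in the paper.

```python
import sympy as sp

x, y, px, py = sp.symbols('x y p_x p_y')
coords, momenta = (x, y), (px, py)

# Toy inverse metric g^{ij} on a 2-dimensional manifold (illustration only).
g_inv = sp.diag(1, 1 / (1 + x**2))
H = sum(g_inv[i, j] * momenta[i] * momenta[j] for i in range(2) for j in range(2))

# Valence-1 ansatz: I = K^1(x, y) p_x + K^2(x, y) p_y (a candidate Killing vector).
K1, K2 = sp.Function('K1')(x, y), sp.Function('K2')(x, y)
I = K1 * px + K2 * py

def poisson(f, h):
    """Canonical Poisson bracket on T*M for two coordinates and two momenta."""
    return sum(sp.diff(f, coords[k]) * sp.diff(h, momenta[k])
               - sp.diff(f, momenta[k]) * sp.diff(h, coords[k]) for k in range(2))

bracket = sp.expand(poisson(I, H))

# {I, H} is a homogeneous polynomial of degree 2 in (p_x, p_y); its coefficients
# are the PDEs (the Killing equations) on K^1 and K^2.
pdes = [sp.simplify(bracket.coeff(px, i).coeff(py, 2 - i)) for i in range(3)]
for eq in pdes:
    sp.pprint(sp.Eq(eq, 0))
```

After prolongation, the derivatives of K^1 and K^2 appearing in these PDEs are treated as new, independent unknowns, which is exactly the passage to the linear-algebraic problem described in the text.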
Since the matrix dimensions are huge in our cases, we need to find tricks that can speed up the computations, particularly the required rank computation. For the metrics in question, highest derivatives of the unknowns can be expressed through lower derivatives after differentiating the equations E_l for d + 1 times [Wol97]. We need d differentiations to find the lowest possible upper bound [KM12; Vol]. Specifically, write the Poisson bracket as a polynomial in the momenta with coefficients P^{(i,j)}_k. We denote by [a, b] the integers between (and including) a and b. The indices run over the following values: i ∈ [0, d + 1], j ∈ [0, i], k ∈ [0, d + 1 − i], and µ ∈ [0, m]. For integrals of pure parity w.r.t. (p_x, p_y), many P^{(i,j)}_k are zero. Particularly, if we consider integrals of odd (even) (p_1, p_2)-parity, then only P^{(i,j)}_k with even (odd) value of i can be non-zero. Now, the unknown functions are the coefficients in the polynomial that represents I. Here, m denotes the order of partial differentiation, and µ is the order of differentiation w.r.t. the coordinate x. Structuring the equations and unknowns. With the considerations just made, we can organize the equations and unknowns into a tabular structure. For the equations, consider all sets E_l and denote them in one column with l indexing the rows. Then, put their differential consequences in columns to the right, i.e. the first derivatives of E_0 w.r.t. (x, y) are in the first row of the second column and so forth (cf. Figure 1). The unknowns are the coefficient functions of I w.r.t. momenta. They can be organized in a way similar to the equations. Let 2l = i − d + ẽ, where ẽ ∈ {0, 1} is the parity of d + e. Then, we first arrange the I^{(i,j,m,µ)}_k according to the value of l. For equal values of l, we then arrange the unknowns according to the order m of differentiation. Note that the resulting table for the unknowns has one column more than the table for the equations. The reason is that the system of PDEs following from (1) is of first order. Elimination scheme Let us have a closer look at the structure of these tables. We regard derivatives of the unknown functions as new, independent unknowns. We observe that in the table of equations the (l, m)-cell shares the unknowns I^{(i,j,m+1,µ)}_k with the (l, m + 1)-cell if i = 2l + d − ẽ. Together with the structuring of the unknowns obtained above, this pattern suggests to solve the linear system of equations stepwise. In principle, one can handle one cell of the table of equations at a time, and iteratively replace unknowns. However, we are not going to follow this prescription entirely. Instead, only those equations P^{(i,j,m,µ)}_k that are monomial in the respective unknowns I^{(i,j,m+1,µ)}_k will be taken into account. Yet, this will be done iteratively, so a maximal number of substitutions can be achieved. This partial solution of the system reduces the number of equations and unknowns considerably and therefore improves performance in the following steps. Computing the number of Killing tensors We need to identify the number of solutions of the obtained linear-algebraic system. This system is described by a matrix and we have to compute the rank of this matrix. On a computer, we can do this by choosing a point of reference in which we complete the computations. Actually, a little caution is necessary since we need the number of solutions for the generic matrix system; in non-generic points, the rank of the matrix may drop. We also restrict to rational reference points.
In case the expressions are rational in the coordinates x and y, we can rewrite the equations such that the coefficients become integer numbers. After choosing a reference point, our freedom to add arbitrary multiples of known integrals can be made use of to further eliminate unknowns from the system. Specifically, we can set all I^{(i,0,0,0)}_k = 0. For the remaining matrix problem, we can determine the number of solutions from the dimensionality of the matrix kernel. We use usual Gauß elimination for this computation. Better algorithms might be available for particular situations. The computation of an upper bound to the number of involutive Killing tensors can now be performed algorithmically: Algorithm I. (i) Consider the two pure-parity integrals w.r.t. (p_x, p_y) separately. Compute the differential consequences of the corresponding differential systems up to d-th prolongation. Consider the corresponding algebraic problem. (ii) Choose a generic point P and evaluate the algebraic system at this point. Add multiples of the known integrals to set as many of the unknowns as possible to zero (in P). (iii) Perform the elimination scheme as discussed above. (iv) If possible, rewrite the matrix system such that the coefficients are integers. Determine the kernel dimension. The algorithm confirms nonexistence of an additional integral if the matrix has full rank. Examples Let us explore some examples with the method we developed in the previous section. Coordinates are chosen according to the specific problem and are not identical to those in Equation (4). However, they are still adjusted to the symmetries, i.e. to stationarity and axial symmetry. The computation times have been achieved on a desktop computer with a 3.4 GHz processor and 32 GB RAM. The computations were performed using Maple 18. Tomimatsu-Sato metrics The Tomimatsu-Sato family generalizes the Kerr metric. Its static subclass is the Zipoy-Voorhees family, which contains the Schwarzschild metric as its Kerr limit. A non-static example We begin with a non-static case and consider a Tomimatsu-Sato metric with perturbation parameter δ = 2. In the Ernst-Perjés representation, it has the general form [Ern76; Per89; Man12], where the functions f, γ and ω are defined in terms of the polynomials µ, ν, σ and τ. In addition, p and q have to obey the restriction p² + q² = 1. The other free parameter, κ, can in principle be removed through redefinition of some quantities, but we keep it in view of [Man12]. We study the particular example with parameter values δ = 2, κ = 2, and p = 3/5 (q = 4/5). These parameters have also been chosen in [Man12], where some physical properties of the Tomimatsu-Sato metric for δ = 2 are discussed. Theorem 1. For this Tomimatsu-Sato metric, there is no additional independent Killing tensor of valence d ≤ 7 that is in involution with the trivial Killing tensors dφ, dt, and the metric. Indeed, the following table shows that after d = 7 steps of prolongation the algorithm yields the upper bound 20 (sum of the Λ^{(d)}_d, which denote the obtained upper bound for degree d and after d prolongation steps). The number Λ^0_d of trivial integrals, given by a counting formula, is also 20. Both numbers coincide and this confirms the nonexistence of an additional Killing tensor. Results (table): Tomimatsu-Sato metric, δ = 2, κ = 2, p = 3/5 (q = 4/5); columns d, e, Λ. Recall that e is the parity of the integral w.r.t. the momenta (p_x, p_y). Thus, for given valence d, we have to compute two separate branches e = 0 and e = 1.
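Step (iv) of Algorithm I boils down to exact linear algebra over the integers. The following sympy sketch illustrates that step on a tiny assumed matrix (the real systems have thousands of rows): clear the denominators row by row, then determine the kernel dimension by exact elimination.

```python
from sympy import Matrix, Rational, ilcm

# Toy coefficient matrix evaluated at a rational reference point (assumed data;
# the actual Killing-tensor systems are far larger).
A = Matrix([
    [Rational(1, 2), Rational(3, 4), 1],
    [Rational(1, 4), Rational(3, 8), Rational(1, 2)],
    [0,              1,              2],
])

# Rescale each row by the LCM of its denominators: the kernel is unchanged,
# but all coefficients become integers, which keeps the elimination exact.
rows = []
for i in range(A.rows):
    scale = ilcm(*[A[i, j].q for j in range(A.cols)])
    rows.append([int(A[i, j] * scale) for j in range(A.cols)])
A_int = Matrix(rows)

kernel = A_int.nullspace()   # exact Gaussian elimination over the rationals
print("rank =", A_int.rank(), " kernel dimension =", len(kernel))
# If the kernel dimension equals the number of trivial (known) Killing tensors,
# nonexistence of an additional involutive Killing tensor is established.
```

The same logic, applied to the full prolonged system at a generic rational point, gives the upper bounds quoted in the theorems.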
By M, we denote the matrix obtained after Step (i) of the algorithm. The symbols m_{d,d} and n_{d,d} denote, respectively, the number of equations and unknowns of the initial matrix system obtained after Step (i) of the algorithm, i.e. for degree d and after d prolongation steps. The point of reference for the computations is (x, y) = (1/2, 2). The last column provides the (approximate) computation times, cf. also Section 3.1.3. One might wonder why the number n_{d,d} coincides for both branches. The reason is that for the number of unknowns the following formula holds (for degree d after M prolongations) [Vol]: where we define Σ = e + ẽ. Hence, if d is odd, then e cancels from (10), because ẽ + e = 1 and thus Σ = 1 (recall that ẽ = ℘(d + e)). Similarly, for the number of equations, we find with ∆ = e − ẽ. Thus, e cancels from (11) if d is even, because ẽ = e and ∆ = 0. In second degree we find an upper bound of 5. This is one above the number of trivial Killing tensors of valence 2. And indeed, the Kerr metric has an additional Killing tensor of valence 2 that commutes with the trivial Killing tensors. This is the Carter constant [Car68a; Car68b]. The Darmois solution The Darmois metric is a particular Zipoy-Voorhees metric [Ste03; Dar27]. It is therefore a static Tomimatsu-Sato metric. In prolate spheroidal coordinates it has the following form: Existence of a nontrivial Killing tensor for the Darmois solution has been suggested by [Bri08; Bri11], but later studies challenged this claim [KM12; LG12; MPS13]. In [KM12], Kruglikov & Matveev study the number of Killing tensors for this metric up to valence 6, and the method we discuss here is based on this work. It is therefore interesting to compare the computer performance of both methods (see below). For static and axially symmetric metrics, the algorithm can be improved further. It is possible to restrict to integrals of even parity in p_φ. Their parity w.r.t. (p_x, p_y) can be taken equal to the parity of d. In order to make a statement for valence d, we need to apply the algorithm for d, d − 1 and d − 2. The reason for this simplification is the following observation: The Hamiltonian is not only of even parity w.r.t. (p_x, p_y), but also w.r.t. p_φ or p_t. Thus, components of the integral of even and odd parity w.r.t. p_φ can be considered separately. Now, consider only integrals of even parity in p_φ and such that their parity w.r.t. (p_x, p_y) equals the parity of d as an integer number. Let S_d be the system of equations obtained from the Poisson equation by considering coefficients with respect to momenta. For Weyl metrics, the system of equations S_d splits into four separate subsystems, see Figure 2. Each of the subsystems can be solved independently. Unknowns of one of the subsystems do not appear in the other subsystems. The crucial observation is that each of these subsystems corresponds to one of the described types, but possibly with another value for d. (Figure 2, refined parity split for the Darmois metric and the C-metric: if the parity of the integral w.r.t. (p_x, p_y) equals ℘(d), the subsystem of even parity in p_φ is S_d itself and the subsystem of odd parity in p_t corresponds to S_{d−2}; if the parity is opposite to ℘(d), both subsystems correspond to S_{d−1}.) For details, see [Vol]. The upper bound to the number of Killing tensors is therefore 42, and this equals the number of minimally expected Killing tensors (note that S_{d−1} contributes twice). The point of reference in this computation is (1/2, 2).
Computational Efficiency Let us quickly contrast the performance of the method presented in Section 2 with the original method from [KM12]. While on our computer the algorithm from [KM12] takes about 3 minutes for all the necessary computations for valence d = 4, the modified algorithm takes only around 3 seconds. For valence d = 6, we needed approximately 47 minutes with the algorithm from [KM12]. We needed a little more than a minute (67 seconds) with the improved method. These numbers include both branches for [KM12] and all three branches for the method of Algorithm I in combination with the decomposition according to Figure 2. C-metrics The C-metric is used in relativity as a model for systems of two black holes accelerating in opposite directions under the action of certain forces. The additional techniques described in this paper make it possible to speed up the computations and to reach higher degrees of the integrals. Moreover, we saw that for static space-times the Killing tensors partially come from lower degrees, and that this fact can significantly improve computational performance. The additional techniques were implemented for several examples, namely the Darmois solution, a C-metric and a Tomimatsu-Sato metric. The extreme Kerr solution was discussed as an example that admits a nontrivial Killing tensor. We saw that the new techniques also work fine with axially symmetric metrics that are stationary (instead of only static). Obviously, the techniques applied in our method rely on the structural properties of the problem rather than its physical properties. The method is applicable not only to stationary and axially symmetric metrics, and it can produce results in other contexts as well; see [KVL15] for an application of a similar method in sub-Riemannian geometry.
2016-02-29T14:12:04.000Z
2016-02-29T00:00:00.000
{ "year": 2016, "sha1": "0eaa31a6a7eb2450b2322c940e59b9813b84943e", "oa_license": "elsevier-specific: oa user license", "oa_url": "https://doi.org/10.1016/j.geomphys.2016.09.009", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "0eaa31a6a7eb2450b2322c940e59b9813b84943e", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
1612152
pes2o/s2orc
v3-fos-license
Daily electronic self-monitoring of subjective and objective symptoms in bipolar disorder—the MONARCA trial protocol (MONitoring, treAtment and pRediCtion of bipolAr disorder episodes): a randomised controlled single-blind trial Introduction Electronic self-monitoring of affective symptoms using cell phones is suggested as a practical and inexpensive way to monitor illness activity and identify early signs of affective symptoms. It has never been tested in a randomised clinical trial whether electronic self-monitoring improves outcomes in bipolar disorder. We are conducting a trial testing the effect of using a Smartphone for self-monitoring in bipolar disorder. Methods We developed the MONARCA application for Android-based Smartphones, allowing patients suffering from bipolar disorder to do daily self-monitoring—including an interactive feedback loop between patients and clinicians through a web-based interface. The effect of the application was tested in a parallel-group, single-blind randomised controlled trial so far including 78 patients suffering from bipolar disorder in the age group 18–60 years who were randomised to the use of a Smartphone with the MONARCA application (intervention group) or to the use of a cell phone without the application (placebo group) during a 6-month study period. The study was carried out from September 2011. The outcomes were changes in affective symptoms (primary), social functioning, perceived stress, self-rated depressive and manic symptoms, quality of life, adherence to medication, stress and cognitive functioning (secondary and tertiary). Analysis Recruitment is ongoing. Ethics Ethical permission has been obtained. Dissemination Positive, neutral and negative findings of the study will be published. Registration details The trial is approved by the Regional Ethics Committee in The Capital Region of Denmark (H-2-2011-056) and The Danish Data Protection Agency (2013-41-1710). The trial is registered at ClinicalTrials.gov as NCT01446406. INTRODUCTION Bipolar disorder is a common and complex mental disorder with a prevalence of 1-2% 1 2 and is one of the most important causes of disability at age 15-44 years worldwide. 1 Bipolar disorder is a long-term and persistent illness with need for treatment over many years. 3 The disorder is associated with a high risk of relapse and hospitalisation, and the risk of relapse increases along with the number of previous episodes. [4][5][6] Many patients do not regain their previous level of psychosocial functioning, and cognitive disturbances are also prevalent during remitted phases. 7 It is well documented from randomised clinical trials (RCT) that the risk of a new episode in bipolar disorder can be reduced significantly by treatment with lithium or other mood stabilisers. 8 Further, the prophylactic effect of medical treatment may be enhanced by psychoeducation or cognitive behavioural therapy. 9 However, results from naturalistic follow-up studies suggest that the progressive development of the disease is not prevented in clinical practice with the present treatments. 4-6 10 The major reasons for the decreased effect of interventions in clinical practice are delayed intervention for prodromal depressive and manic episodes 11 12 as well as decreased medical adherence.
[13][14][15] During the last decades, there has been an organisational shift in paradigm from inpatient to outpatient treatment in healthcare, and in bipolar disorder there is an emerging shift in illness paradigm from a focus on mood episodes to a focus on the interepisodic mood instability. 16 However, current monitoring of bipolar disorder illness activity is based on the identification and analysis of mood episodes at different intervals of time, often on a monthly basis during outpatient facility visits. Recently, electronic self-monitoring of affective symptoms using cell phones to prompt patients to respond to weekly text messages was proposed as an easy and inexpensive way to monitor and identify early signs of emerging affective episodes so that providers could intervene shortly after prodromal symptoms appeared. 17 However, the electronic devices used have been rather simple, not including a bidirectional feedback loop between patients and providers and without electronic data on 'objective' measures of the affective psychopathology. It has never been tested in a randomised trial whether the continued use of an electronic device, including a feedback loop, improves affective symptoms and other outcomes in bipolar disorder. In the MONitoring, treAtment and pRediCtion of bipolAr disorder episodes (MONARCA) study, we developed and are currently testing in a randomised controlled trial (RCT) software for Android Smartphones to monitor the subjective and objective activities of bipolar disorder along with treatment adherence in a bidirectional feedback loop between patients and providers. The software system includes the recording of subjective items such as mood/irritability, 17 sleep 18 19 and alcohol 20 that may reflect or correlate with illness activity in bipolar disorder. As the ability of these subjective measures to detect prodromal symptoms of depression and mania may not be sufficient, we have also included objective measures such as speech, social and physical activity in the software system. Decreased activity in speech (paucity of speech) seems to be a sensitive and valid measure of prodromal symptoms of depression 21 22 and conversely increased speech activity (talkativeness) predicts a switch to hypomania. 19 23 24 Similarly, social activity, 25 that is, engaging in relations with others, as well as physical activity 26 27 represent central and sensitive aspects of illness activity in bipolar disorder. Hypotheses Daily electronic monitoring using an online interactive Smartphone including a feedback loop between patients and clinicians reduces the severity of depressive and manic symptoms and stress and increases social functioning, quality of life, adherence to medication and cognitive functioning. Objectives To investigate in a randomised controlled trial whether the use of an online monitoring system including a feedback loop in patients suffering from bipolar disorder reduces symptoms of affective disorder and stress and increases social functioning, quality of life, adherence to medication and cognitive functioning. METHODS This protocol is reported according to the CONsolidated Standards Of Reporting Trials (CONSORT) statement and Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT). [28][29][30] This protocol describes a randomised controlled trial comparing the effect of using a Smartphone with the MONARCA system including a feedback loop with the use of a placebo Smartphone without an active MONARCA system.
Trial design and study organisation The trial is a single-blind, placebo-controlled, parallel-group study stratified on age (18-29 and 30-60 years) and former hospitalisation (yes and no) with balanced randomisation of bipolar disorder patients (1:1) to either the active use of the MONARCA application on a Smartphone (intervention group) or a placebo MONARCA Smartphone. The study is conducted at The Clinic for Affective Disorders, Psychiatric Center Copenhagen, Rigshospitalet, Copenhagen, Denmark. There are no changes in the design or methods after the start of the trial. Participants and setting All patients were recruited from The Clinic for Affective Disorders, Psychiatric Center Copenhagen, Rigshospitalet, Copenhagen, Denmark. Recruitment started in September 2011. The Clinic for Affective Disorders is a specialised outpatient clinic that covers a recruitment area of the Capital Region, Denmark, corresponding to 1.4 million people. The staff consists of full-time specialists in psychiatry with specific clinical experience and knowledge about the diagnosis and treatment of bipolar disorder as well as certified psychologists, nurses and a social worker with experience in bipolar disorder. Patients with bipolar disorder are referred to the clinic from secondary healthcare when a diagnosis of a single mania or bipolar disorder is made for the first time 31 or in the case of occurrence of treatment resistance, that is, persistent affective symptoms or recurrences despite treatment in standard care. The physicians at the clinic follow the patients with evidence-based pharmacological treatment and regular appointments depending on their clinical status and needs. Treatment at the clinic comprises combined psychopharmacological treatment and supporting therapy for a 2-year period. Bipolar patients are referred to the clinic after the first, second or third admission and asked to participate after initial assessment by a psychiatrist. Following referral to the clinic, the clinicians make the diagnosis of bipolar disorder and subsequently introduce the MONARCA study to all patients except those who are either pregnant, older than 60 years or have a lack of Danish language skills. Inclusion criteria: bipolar disorder diagnosis according to ICD-10 using the Schedules for Clinical Assessment in Neuropsychiatry (SCAN), 32 a Hamilton Depression Rating Scale (HDRS, 17 items) score ≤17 33 and a Young Mania Rating Scale (YMRS) score ≤17 34 at the time of inclusion and age between 18 and 60 years. Exclusion criteria: significant physical illness, schizophrenia or other F2 diagnoses according to the SCAN interview, unwillingness to use the project Smartphone as the primary cell phone, inability to learn the necessary technical skills for being able to use the Smartphone, lack of Danish language skills and pregnancy. Patients meeting the inclusion criteria and having none of the exclusion criteria were enrolled in the study (a simple sketch of this eligibility rule is given below). Study procedure Following referral to the MONARCA trial, potential participants were screened and if they met the criteria for participating in the trial, they were included. Following inclusion in the trial, a baseline assessment was performed on all patients (table 1). Immediately after this baseline assessment, the study nurse got the allocation envelope and patients met with her and were randomised to receive either an intervention MONARCA Smartphone or a placebo MONARCA Smartphone for the 6-month study period.
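The eligibility rule referred to above can be summarised as a simple predicate. The following sketch is only an illustration of the stated inclusion and exclusion criteria; the function and parameter names are invented for this purpose and are not part of the MONARCA software.

```python
def eligible(age: int, icd10_bipolar: bool, hdrs17_score: int, ymrs_score: int,
             pregnant: bool, schizophrenia_spectrum: bool,
             significant_physical_illness: bool, speaks_danish: bool,
             willing_to_use_study_phone: bool, able_to_learn_phone: bool) -> bool:
    """Mirror of the stated inclusion/exclusion criteria (illustrative only)."""
    inclusion = (icd10_bipolar and 18 <= age <= 60
                 and hdrs17_score <= 17 and ymrs_score <= 17)
    exclusion = (pregnant or schizophrenia_spectrum or significant_physical_illness
                 or not speaks_danish or not willing_to_use_study_phone
                 or not able_to_learn_phone)
    return inclusion and not exclusion
```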
Interventions All patients received standard treatment at The Clinic for Affective Disorders, Psychiatric Center Copenhagen, Rigshospitalet, Copenhagen, Denmark as described above. The Smartphone In MONARCA, the 'HTC Desire' and 'HTC Desire S' Smartphones running the Android operating system were used and all patients received a Smartphone free of charge for the 6-month study period. The placebo group had to use the MONARCA Smartphone for normal communicative purposes and the intervention group had to use the application for self-monitoring once a day, every day, for 6 months (figure 1). Pilot study As part of the clinical assessment at The Clinic for Affective Disorders, a paper version with daily monitoring of subjective items such as mood and medication was used for 4 years. Based on an interactive process between four patients suffering from bipolar disorder, the clinicians, bipolar researchers with clinical and scientific experience of bipolar disorder and IT researchers involved in the study, we developed an Android application for monitoring bipolar disorder prior to this RCT (figures 2-5). During this interactive user-centred design process, the system was developed and the items to monitor and the corresponding scoring system were selected. Subsequently, the application was tested in a pilot trial with 12 patients for 3 months to test the usability and relevance of the selected monitoring items and to validate the technical part of the software. 35 Following the pilot study, minor adjustments were made and thereafter the system was 'locked' into a final version to be tested in the present trial. Subjective items for monitoring in the active intervention group Patients in the active intervention group entered the following subjective items every evening: mood (scored from depressive to manic: −3, −2, −1, 0, +1, +2 and +3), sleep duration (number of hours per night, measured in half-hour intervals), medicine (taken as prescribed: yes, no; if changed, the patient was asked to specify the changes), activity (scored on a scale of −3, −2, −1, 0, 1, 2 and 3), irritability (yes or no), mixed mood (yes or no), cognitive problems (yes or no), alcohol consumption (number of units per day), stress (scored on a scale of 0, 1, 2, 3, 4 and 5), menstruation for women (yes or no) and individualised early warning signs (yes or no). Patients were prompted by a reminder in the Smartphone to evaluate these items every evening at a chosen time. After midnight, the entered data were 'locked' and no further changes could be made. If the patients forgot to evaluate the subjective items, it was possible to retrospectively enter data for 2 days. It was then noted in the system that the data were collected retrospectively. Screenshots from the software can be seen in figures 2-5. A user's guide for the MONARCA system was developed and handed out to all patients in the intervention group (can be obtained by contacting the author). Objective parameters monitored in the intervention and placebo arms All the Smartphones in the study automatically collected objective data every day for the intervention group as well as the placebo group.
The following objective items were chosen: speech duration (minutes of speech per 24 h on the Smartphone), social activity measured as the numbers of outgoing and incoming calls per day and the numbers of outgoing and incoming text messages per 24 h, and physical activity, that is, the amount of physical movement measured through the accelerometer installed in the Smartphone (sampled every 5 min). Thus, we can investigate the correlation between the activity on the Smartphone and affective symptoms based on HDRS and YMRS. A study nurse from the clinic (HSN) with experience with bipolar disorder was assigned to the patients allocated to the active intervention arm of the MONARCA study. She monitored on a daily basis all self-reported subjective electronic patient data and when these data suggested upcoming or deteriorating depressive or manic symptoms, she contacted the patients by text messages, telephone or email as part of the feedback loop during the entire period of this study (see later). Patients allocated to the placebo arm were similarly assigned a nurse (other than HSN, but similarly experienced with bipolar disorder) on clinical indication as part of the standard treatment in the clinic, for example in case of upcoming or deteriorating depressive or manic symptoms, but this nurse did not have access to the electronic daily data of the patient. Identification of the early warning signs and triggers, and the interactive feedback loop in the active intervention group In the intervention group, a personal homepage for each patient was set up on a server and the patient could connect to the homepage using secure codes. By giving informed consent to participate in the MONARCA trial, patients allowed clinicians to connect to the homepage. The homepage presents all the monitored items graphically. A standard set of scoring thresholds on the subjectively monitored items, defining when the study nurse should contact patients, was made. For example, the patients had to be contacted if they registered a mood of −2 or below, or of +2 or above, for 2 days, if they registered changes in their sleep patterns of 1 h more or less for 3 days, if medication was not taken or was changed for more than 2 days, if the registered activity level was −2 or below or +2 or above for 2 days, if mixed mood was registered for more than 3 days and if alcohol intake was >2 units for more than 3 days (the full version of the standard scoring thresholds can be obtained from the authors on request). These thresholds were individualised for every patient within the first 4 weeks of the trial. The study nurse reviewed the monitored data for all the patients in the intervention group every day and in case of signs of bipolar disorder instability, she contacted the patient. The patients could also contact the study nurse by phone or email in case of subjective signs of bipolar disorder instability.
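The default alert rule described above is essentially a check for threshold crossings on consecutive days. The sketch below is a hypothetical re-implementation for illustration; the field names, the data structure and the exact consecutive-day logic are assumptions and not the MONARCA system's actual code.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class DailyEntry:
    mood: int              # -3 .. +3
    sleep_hours: float
    medication_taken: bool
    activity: int          # -3 .. +3
    mixed_mood: bool
    alcohol_units: int

def consecutive(entries: List[DailyEntry],
                pred: Callable[[DailyEntry], bool], days: int) -> bool:
    """True if `pred` holds on each of the last `days` consecutive entries."""
    return len(entries) >= days and all(pred(e) for e in entries[-days:])

def should_contact(entries: List[DailyEntry], baseline_sleep: float) -> bool:
    """Default (non-individualised) alert rule, mirroring the thresholds above."""
    return (
        consecutive(entries, lambda e: abs(e.mood) >= 2, 2)
        or consecutive(entries, lambda e: abs(e.sleep_hours - baseline_sleep) >= 1.0, 3)
        or consecutive(entries, lambda e: not e.medication_taken, 3)   # "more than 2 days"
        or consecutive(entries, lambda e: abs(e.activity) >= 2, 2)
        or consecutive(entries, lambda e: e.mixed_mood, 4)             # "more than 3 days"
        or consecutive(entries, lambda e: e.alcohol_units > 2, 4)
    )
```

In the trial these thresholds were subsequently individualised per patient during the first 4 weeks, so the fixed numbers above only represent the starting point.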
Following a run-in monitoring period of approximately 4 weeks, the patient and the study nurse, in collaboration with the clinicians, and relatives (if accepted by the patient), agreed on a concordance status comprising (1) his/her most important items for identifying prodromal symptoms of mania (eg, sleep or alcohol consumption) as well as depression (eg, social activity); (2) the threshold for future signal warnings of prodromal symptoms (eg, slept 1 h less than the average monitored historic sleep time for three consecutive nights, had been drinking alcohol for three consecutive days, did not call anyone on the Smartphone for four consecutive days, did not take medication as prescribed for three consecutive days, etc) and (3) actions to be taken (eg, the patient contacts the caregiver within 3 days following the alarm signal; if he does not, the caregiver contacts the patient for clinical evaluation and intervention, for example an increase of the dose of the mood stabiliser). Assessments All assessments were carried out by two physicians (MFJ and ASJ) who were not involved in the treatment of the patients. The patients were enrolled in the trial for a 6-month study period and assessed every month (table 1). The bipolar diagnosis was confirmed by a SCAN interview before inclusion of the patient. 32 Every month the affective symptoms were clinically rated using the HDRS 33 and the YMRS. 34 The following questionnaires were completed every month when visiting the researcher: Psychosocial Functioning (Functioning Assessment Short Test, FAST), 36 Cohen's Perceived Stress Scale, 37 quality of life (WHOQOL), 38 coping strategies (CISS), 39 self-rated depressive [40][41][42] and manic symptoms 43 and cognitive functioning. 44 Biological samples of awakening salivary cortisol, 45 46 urinary oxidative stress, 47 48 plasma BDNF 49 and adherence to medication as measured by plasma concentration of the patient-prescribed medicine (mood stabilisers, antipsychotics, antidepressants) were taken at baseline, after 3 and 6 months. Cognitive function according to the Screen for Cognitive Impairment in Psychiatry (SCIP-S) 50 51 was assessed at baseline and after 3 and 6 months. Primary outcomes Clinically rated affective symptoms based on the HDRS, 17 items, 33 and the YMRS. 34 These were assessed every month for 6 months (table 1). Tertiary outcomes Awakening salivary cortisol, 45 46 urinary oxidative stress, 47 48 plasma BDNF, 49 cognitive function according to the Screen for Cognitive Impairment in Psychiatry (SCIP-S) 50 51 and adherence to medication as measured by plasma concentration of the prescribed medicine (mood stabilisers, antipsychotics, antidepressants). These were measured at baseline and after 3 and 6 months (table 1). No changes in trial outcomes were made after the start of the trial. Sample size The statistical power and sample size were calculated using http://stat.ubc.ca/~rollin/stats/ssize/n2.html. The primary outcome was differences in the level of affective symptoms based on the HDRS score and the YMRS score, respectively. The clinically relevant difference was defined as a minimum of three points, and the SD was set to four, with a mean score of 10 vs 7 in the two groups. The statistical power to detect a three-point difference in the areas under the curves between the intervention and the control groups on the HDRS score or the YMRS score, respectively, was 80% with α=0.05 for a two-sample comparison of means including 28 patients in the intervention group and 28 patients in the placebo group.
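The stated sample size can be reproduced with the usual normal-approximation formula for a two-sample comparison of means; the short check below is an illustrative re-derivation, not the calculator actually used (the protocol used the UBC online tool cited above).

```python
import math
from scipy.stats import norm

# Two-sample sample-size check: detect a 3-point difference (SD 4) with
# 80% power at a two-sided alpha of 0.05.
alpha, power = 0.05, 0.80
delta, sd = 3.0, 4.0

z_alpha = norm.ppf(1 - alpha / 2)        # ~1.96
z_beta = norm.ppf(power)                 # ~0.84
n_per_group = 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2
print(math.ceil(n_per_group))            # -> 28 patients per arm, as in the protocol
```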
The dropout rate is estimated to be around 25%. Sequence generation A computer-generated list of random allocation numbers was produced by an independent researcher (KM) using randomisation.com. Since the course of illness and the effect of the intervention could be influenced by age and previous hospitalisation, stratification is carried out on age (18-30 vs >30) and previous hospitalisation (yes or no). Stratification is carried out to ensure a good balance of these patient characteristics in each randomisation group so that the number of patients receiving the intervention MONARCA Smartphone or the placebo MONARCA Smartphone was balanced within each stratum. Allocation was 1:1. Within each stratum, a fixed randomisation block size of 10 is used. The block size was unknown to all the clinicians recruiting patients to the trial and to the study nurse allocating participants to their correct randomisation arm. Allocation concealment and implementation The allocation sequence was concealed from the researchers (MFJ and ASJ) enrolling and assessing patients. Allocation was concealed in numbered, opaque and sealed envelopes stored in a securely locked cabinet by a secretary until the moment of randomisation. Allocation was identified by the letter A or B written on the paper in the envelopes, and this indicated the type of intervention. The translation of the allocation letters A and B was made and known only to LVK and the study nurse. A paper with this translation was kept in a securely locked cabinet, unknown to others than LVK. The secretary gave the envelope to the study nurse. Corresponding envelopes were opened only after all baseline assessments were performed and the patient's name was written on the envelope. The study nurse assigned patients to their allocated intervention. Blinding Owing to the type of intervention in this trial, the patients and the study nurse were aware of the allocation arm. The researchers responsible for outcome assessments (MFJ and ASJ) and data analysis (MFJ) were kept blinded to allocation at all times during the trial. The trial was therefore single-blinded. The study nurse did not collect any kind of outcome measures. All patients were thoroughly and repeatedly instructed not to mention anything about allocation to intervention at each visit with the researcher. The risk of unblinding due to simply seeing the type of mobile phone in the patient's hands was minimised since all patients received the same type of mobile phone. Statistical methods Data will be managed by MFJ and entered using Epidata. All analyses will be done using the Statistical Package for the Social Sciences (SPSS). Data from all randomised patients will be collected until dropout or the end of the study period. The outcome is changes in affective symptoms measured as HDRS and YMRS scores during the 6-month study period. We will employ a linear mixed effects model with a random intercept for each participant. Differences between outcomes of the interventions during the 6-month study period will be analysed, first unadjusted and then adjusted for age, previous psychiatric hospitalisation (yes/no) and sex, if these variables present with a p≤0.1 in univariate analyses. Analysis will be carried out according to the intention-to-treat (ITT) principle. The statistical threshold for significance is p≤0.05 (two-tailed). Ethical considerations Ethical permission for the MONARCA study has been obtained from the Regional Ethics Committee in The Capital Region of Denmark (H-2-2011-056) and The Danish Data Protection Agency (2013-41-1710).
The trial is registered at ClinicalTrials.gov as NCT01446406. All positive, neutral and negative findings of the study will be published according to the CONSORT guidelines. 28 All electronically monitored data are stored on a secure server at Concern IT, Capital Region, Copenhagen, Denmark (I-suite number RHP-2011-03). All potential participants are invited to be informed about the trial and the information is given in a quiet and undisturbed office. All information is presented in both written and verbal form and participants can bring a friend or relative to the introduction conversation. Participants are informed that participation is voluntary and that consent can be withdrawn at any time during the study without any consequences for future treatment possibilities. All participating patients sign a consent form and receive a copy of it together with information on their rights as participants in clinical trials. All Smartphones are provided by the project and economic costs from data traffic due to the MONARCA project are refunded. Participants do not receive any economic compensation for participating in the MONARCA trial. RESULTS Until the time of submission, a total of 141 patients suffering from bipolar disorder had been identified, but 11 of these were over 60 years of age and seven were pregnant. This left 123 patients to be assessed for eligibility for the trial. Of these, three patients had an HDRS score ≥17 for a prolonged period of time and two were unable to speak Danish. Thus, so far a total of 118 patients have been eligible, but 32 declined to participate, four were unwilling to use our Smartphone as their primary Smartphone and we could not contact four patients. Until the time of submission, the participation rate was 66.1% and the dropout rate during the 6-month follow-up period was 12.8%. Until the time of submission, a total of eight patients dropped out at baseline before knowledge of their allocation to intervention and two patients dropped out during the 6-month study period. DISCUSSION This is the first randomised trial to test whether electronic monitoring may improve long-term outcome in mental illness, in this case bipolar disorder. A major advantage in the MONARCA trial is that the system was developed and tested in a pilot study in a close collaboration between patients suffering from bipolar disorder, clinicians (specialists in psychiatry and nurses with specific clinical expertise within bipolar disorder) as well as clinical researchers within bipolar disorder and IT researchers. The intervention We decided to investigate the effect of a total system combining electronic self-monitoring and a feedback system between patients and clinicians in order to help patients acknowledge illness activity and identify and react more adequately to early warning signs and triggers of affective episodes. The study is designed to investigate the total effect of this intervention versus a placebo intervention and, consequently, we will not be able to address more specifically the effect of the individual elements of the intervention, such as, for example, the effect of subjective self-monitoring on its own. Control group It is a major challenge in any non-medical trial to define a proper control group. We decided to include a control group of patients who received the same Smartphone but without the MONARCA software system, that is, a placebo Smartphone.
Patients in the placebo group did not make any subjective electronic self-monitoring of symptoms or behaviour and they were not monitored with the feedback loop, but their illness activity was monitored 'objectively' in the same way as for the intervention group using Smartphone data to monitor speech duration, social activity and physical activity, and they followed treatment as usual in the clinic. Objective measures of illness activity? Possible electronic objective measures of illness activity have never been studied, as electronic monitoring in healthcare is a new and unstudied area. If successful, this may be a major breakthrough for the treatment of bipolar disorder and for research in bipolar disorder. We will be able to validate Smartphone-generated data of speech duration, social activity and physical activity against repeated measures of the HAM D-17 and YMRS score over a 6-month period. However, as this is the first trial to investigate electronic monitoring, we were not able to provide feedback to the patients allocated to the active intervention arm on these objective data. We are currently transferring the Smartphone-generated data on these objective items into useful simple information that can be provided to the patients in a future revised MONARCA application. Generalisability The study was carried out in a tertiary specialised mood disorder clinic. However, the trial has a pragmatic design with few exclusion criteria and few patients were excluded. The majority of patients entering the trial are in an early course of the illness with a new diagnosis of single mania or bipolar disorder. Further, as the MONARCA system is easy to use for both patients and clinicians, with high appeal and a low dropout rate, we believe that the findings of the trial can be generalised to patients with bipolar disorder in general. Perspectives If the Smartphone self-monitoring system proves to be effective in preventing mood symptoms and improving psychosocial functioning and quality of life in the present study, there will be a basis for extending the use of the system to the treatment of patients with bipolar disorder in clinical practice in other clinical settings (eg, community psychiatric centres) and on a larger scale. Using electronic self-monitoring may improve patient empowerment in relation to bipolar disorder and treatment. Potentially, electronic self-monitoring may be applied in relation to patients suffering from other psychiatric disorders with development of other software systems. In this way, it is possible that outpatient treatment can be optimised in general and that the frequency of physician and other clinical visits can be decreased.
Reference values of fat mass index and fat-free mass index in healthy Spanish adolescents. Background. Body mass index (BMI) does not allow one to discriminate the composition of the different body compartments. The aim of this study is to elaborate reference values of the fat mass index (FMI) and fat-free mass index (FFMI) in healthy adolescents of both sexes, derived from skinfold measurements using anthropometric techniques, so that they are available as reference standards in daily clinical practice. A normal nutritional status was the condition sine qua non for inclusion in this study; that is, BMI had to range between +1 and -1 standard deviations (Z-score). In addition, non-Caucasian adolescents and those diagnosed with chronic pathologies that might affect growth, body composition, food ingestion or physical activity were excluded. The response rate after the exclusions was 85.5%. Weight and height measurements were taken with participants in underwear and barefoot. An Año-Sayol scale was used for weight measurement (reading interval 0 to 120 kg and a precision of 100 g), and a Holtain wall stadiometer for height measurement (reading interval 60 to 210 cm, precision 0.1 cm). BMI was calculated according to the following formula: weight (kg) / height² (m²). Skinfold-thickness measurements were performed in triplicate at the biceps, triceps, subscapular, and suprailiac sites, the mean of the 3 values was used, and all measurements were performed by the same trained individual. Skinfold thickness values were measured to a precision of 0.1 mm on the left side of the body with Holtain skinfold calipers (CMS Weighing Equipment, Crymych, United Kingdom). The percentage of total body fat, fat mass (kg) and fat-free mass (kg) were calculated using the equations reported by Slaughter et al. [16], adjusted for sex and age. In the same way, the fat mass index (FMI) and the fat-free mass index (FFMI) were estimated using the following formulas: fat mass (kg) / height² (m²) and fat-free mass (kg) / height² (m²), respectively [17]. Statistical analysis. Results are displayed as means (M) with corresponding standard deviations (SDs). The statistical analysis (descriptive statistics, percentile calculation, Student's t test, and analysis of variance) was executed using the Statistical Package for the Social Sciences version 20.0 (SPSS, Chicago, IL, USA). The condition for statistical significance was a P-value <0.05. Parents and/or legal guardians were informed and provided written consent for participation in this study in all cases. This study was approved by the Ethics Committee for Human Investigation of Navarra Hospital Complex (in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and later amendments). Table 1 lists and compares the mean values of anthropometric and body composition characteristics according to age in adolescent males. A significant increase in the mean values of weight, height, BMI, fat mass, fat-free mass and FFMI is observed (P<0.05). In contrast, mean values of body fat, skinfold thickness (triceps) and FMI significantly decreased (P<0.05). There are no significant differences in mean values of BMI z-score and skinfold thickness (biceps, subscapular and suprailiac). Table 3 shows and compares the mean values of anthropometric and body composition characteristics related to age group in adolescent females.
Mean values of weight, height, BMI, skinfold thickness (subscapular and suprailiac), body fat, fat mass, fat-free mass, FMI and FFMI significantly increased (P<0.05). No significant differences in mean values of BMI z-score and skinfold thickness (biceps and triceps) were detected. Discussion. The analysis of the evolutionary changes in the body compartments (fat mass and fat-free mass) in healthy adolescents -between 10 and 14 years of age- with a normal BMI adjusted for age and sex reveals a different pattern in relation to sex. There is a progressive and significant increase in the FFMI in both sexes, and males show significantly higher values than females; in addition, there is a progressive and significant decrease in the FMI in males, in contrast to the progressive and significant increase in the FMI in females. It should be stressed that these changes take place simultaneously with a progressive increase in BMI in both sexes in this period of life, in the absence of significant differences in BMI values between the sexes at the different ages considered. In this study, BMI was applied for the classification of the nutritional status of the children who were included. However, although it may be useful to define overweight and obesity [1,7,18,19], it provides limited information since it denotes excessive weight in relation to height rather than excessive body fat; that is, it does not allow one to discriminate the relative composition of the different body compartments: fat mass and fat-free mass [2][3][4][5]20]. This limitation becomes more evident in adolescence, when a series of physiological changes occur [21,22] and an increase in weight might be erroneously identified as excessive fat accumulation [23,24]. Therefore, having in place standardized values of FMI and FFMI in healthy adolescents would make it possible to distinguish between those individuals that, for example, present with high values of BMI and, simultaneously, show a low FFMI and high FMI (a situation that corresponds with overweight or obesity), and those that also present with high BMI but show high FFMI and low FMI (a situation that would be identified as muscle hypertrophy, which is quite frequent in adolescent males). Few reference charts of FMI and FFMI at pediatric ages have been published to date, and they are usually based on sophisticated methodologies that are poorly accessible in clinical practice, such as bioelectrical impedance analysis, dual-energy X-ray absorptiometry or isotope dilution [10,11,25]; their use is essentially limited to research. However, there is ample evidence that the values obtained by using anthropometric measurements correlate extremely well with those collected with these sophisticated and high-cost techniques [7,[11][12][13][14][15]26]; even the simpler models which divide the body into FM and FFM are as valid as the more complex models that subdivide FFM into its different components (water, proteins, minerals) [25]. Parents and/or legal guardians were informed and provided consent for participation in this study in all cases. Availability of data and material: the datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Competing interests: the authors declare that they have no competing interests. Figure 1: Gender differences for FMI at each age. Figure 2: Gender differences for FFMI at each age.
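The index definitions used in the methods above reduce to a few lines of arithmetic. As a minimal illustration (not the authors' analysis code), the following sketch computes BMI, FMI and FFMI from weight, height and a body-fat percentage estimated separately, for example with the Slaughter skinfold equations; the function name and the example numbers are ours. Note that FMI + FFMI = BMI by construction, which gives a quick consistency check on any tabulated values.

```python
def body_composition_indices(weight_kg, height_m, percent_body_fat):
    """BMI, FMI and FFMI from weight, height and an externally estimated
    body-fat percentage (e.g. from the Slaughter skinfold equations)."""
    bmi = weight_kg / height_m ** 2
    fat_mass_kg = weight_kg * percent_body_fat / 100.0
    fat_free_mass_kg = weight_kg - fat_mass_kg
    fmi = fat_mass_kg / height_m ** 2        # fat mass index, kg/m^2
    ffmi = fat_free_mass_kg / height_m ** 2  # fat-free mass index, kg/m^2
    return {"BMI": bmi, "FMI": fmi, "FFMI": ffmi,
            "fat_mass_kg": fat_mass_kg, "fat_free_mass_kg": fat_free_mass_kg}

# Illustrative values only: a 45 kg, 1.55 m adolescent with 20% body fat
print(body_composition_indices(45.0, 1.55, 20.0))
```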
Interactions of Delta Shock Waves for Zero-Pressure Gas Dynamics with Energy Conservation Law We study the interactions of delta shock waves and vacuum states for the system of conservation laws of mass, momentum, and energy in zero-pressure gas dynamics. The Riemann problems with initial data of three piecewise constant states are solved case by case, and four different configurations of Riemann solutions are constructed. Furthermore, numerical simulations completely coinciding with the theoretical analysis are shown. Introduction As is well known, the system of zero-pressure gas dynamics consisting of the conservation laws of mass and momentum, which is also called the transport equations, or the Euler equations for pressureless fluids, has been extensively investigated since the 1990s. It is derived from Boltzmann equations [1] and the flux-splitting scheme of the full compressible Euler equations [2,3] and can be used to describe the motion process of free particles sticking together under collision [4] and the formation of large-scale structures in the universe [5,6]. However, we have to mention that, since the considered media carry no pressure, the energy transport must be taken into account. Therefore, it is very necessary to consider the conservation law of energy in zero-pressure gas dynamics. To this end, we study the one-dimensional zero-pressure gas dynamics governed by the conservation laws of mass, momentum, and energy, system (1), whose unknowns are the density and the velocity; the internal energy, equal to the density times the internal energy per unit mass, is assumed to be nonnegative. The regions in the physical space where the density and the internal energy vanish are identified with the vacuum regions of the flow. Here, the internal energy is considered as an independent variable just for convenience. System (1) was studied early on by Kraiko [7]. In contrast to the traditional zero-pressure gas dynamics system, which contains only the conservation laws of mass and momentum, in order to construct the solution of (1) for arbitrary initial data a new type of discontinuities, different from the classical ones and carrying mass, impulse, and energy, is needed. In [8,9], system (1) was further discussed. Some special integral identities were introduced to define the delta-shock solutions and construct the Rankine-Hugoniot relation for delta shock waves. Moreover, using these integral identities, the balance laws describing mass, momentum, and energy transport from the area outside the delta shock wave front onto its front were derived. What is more, delta shock wave type solutions for multidimensional zero-pressure gas dynamics with the energy conservation law were defined in [10].
A delta shock wave is a generalization of an ordinary shock wave. Roughly speaking, it is a kind of discontinuity on which at least one of the state variables may develop an extreme concentration in the form of a weighted Dirac delta function with the discontinuity as its support. It is more compressive than an ordinary shock wave, and more characteristics enter the discontinuity line. Physically, delta shock waves describe the process of formation of the galaxies in the universe and the process of concentration of particles. As for delta shock waves, there are numerous excellent papers; see [11][12][13][14][15][16][17][18][19][20][21] and so forth. Nevertheless, compared to these results, a distinctive feature of (1) is that the Dirac delta functions develop in two state variables simultaneously, which is quite different from those aforementioned, in which only one state variable contains the Dirac delta function. In fact, the theory of delta shock waves with Dirac delta functions developing in both state variables has been established by Yang and Zhang [22,23] for a class of 2 × 2 nonstrictly hyperbolic systems of conservation laws. Over the past two decades, the investigation of interactions of delta shock waves has been increasingly active. This is important not only because of their significance in practical applications but also because of their basic role as building blocks for the general mathematical theory of quasilinear hyperbolic equations. The results on interactions are also touchstones for numerical schemes. Specifically, Sheng and Zhang [18] discussed the overtaking of delta shock waves and vacuum states in one-dimensional zero-pressure gas dynamics. By solving the two-dimensional Riemann problems for zero-pressure gas dynamics with three constant states, Cheng et al. [24] studied the interactions among delta-shock waves, vacuums, and contact discontinuities. In addition, with the help of a generalized plane wave solution, Yang [25] studied a type of generalized plane delta-shock wave for the n-dimensional zero-pressure gas dynamics and investigated the overtaking of two plane delta shocks. For more works on the interactions of delta shock waves, we refer to [26][27][28][29] and so forth. Motivated by the discussions above, in the present paper we are concerned with the interactions among delta shock waves, vacuum states, and contact discontinuities in solutions. Therefore, we study the Riemann problem of (1) with initial data (2) consisting of three piecewise constant states, where the three constant states are arbitrary and x01, x02 are any two fixed points on the x-axis. We will deal with the Riemann problem (1), (2) case by case along with constructing the solutions. For this purpose, it is necessary to consider whether two adjacent waves intersect and interact with each other when constructing the global solution. However, it is often not so easy to see whether two delta shock waves meet and how they interact with each other. Therefore, some technical treatments are needed. This paper is arranged as follows. In Section 2, the delta shock solution of (1) is reviewed and a general case, when the delta shock wave is emitted at the beginning with nonzero initial data, is considered. Section 3 discusses the interactions of the delta shock waves and vacuum states. The Riemann solutions of (1), (2) are constructed globally. Finally, four kinds of numerical simulations coinciding with the theoretical analysis are presented in Section 4.
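Before turning to the constructions, it may help to recall the corresponding one-wave result in the simplest setting. The sketch below evaluates the classical delta-shock speed and the linear-in-time weight for the two-equation pressureless system (mass and momentum only) with constant left/right Riemann data, following the well-known formulas of Sheng and Zhang [18]; the three-equation system (1) studied here carries an additional weight for the energy variable, which is not computed in this sketch, and the variable names are ours.

```python
import numpy as np

def delta_shock_two_state(rho_l, u_l, rho_r, u_r):
    """Delta-shock speed sigma and weight growth rate for the 2x2 pressureless
    system with constant Riemann data and overlapping characteristics (u_l > u_r).
    The weight of the Dirac mass grows as w(t) = w_rate * t."""
    assert u_l > u_r, "a delta shock forms only when u_l > u_r"
    sl, sr = np.sqrt(rho_l), np.sqrt(rho_r)
    sigma = (sl * u_l + sr * u_r) / (sl + sr)   # shock speed
    w_rate = sl * sr * (u_l - u_r)              # sqrt(rho_l*rho_r) * (u_l - u_r)
    assert u_r < sigma < u_l                    # entropy condition: both families enter
    return sigma, w_rate

print(delta_shock_two_state(1.0, 2.0, 0.5, -1.0))
```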
For the case − ≤ + , the solution containing two contact discontinuities and a vacuum state besides two constants is expressed as where () is a smooth function satisfying ( − ) = − and ( + ) = + . For the case − > + , the singularity of solutions must develop because of the overlap of characteristic lines.Therefore, the solution involving a delta shock wave is introduced. Let (, , ; , , ℎ) be the delta shock solution of the form and then the following generalized Rankine-Hugoniot relation holds where [] = − − + .In order to ensure the uniqueness, the delta shock wave should satisfy the entropy condition which means that the characteristics on both sides of the discontinuity are in-coming.Under the entropy condition (7), by solving the ordinary differential equations ( 6) with the initial data = 0: (0) = 0, (0) = 0, ℎ(0) = 0, (0) = 0, one has For convenience, we now consider a special case when a delta shock wave is emitted at the beginning with the initial data satisfying − > 0 > + .It yields from ( 6) and (9) that One can check that the delta shock solution (10) satisfies the following: (1) () is a monotone function of . ( While if − = + = 0, then = 0 .(3) + < < − . Interactions of Delta Shock Waves In this section, we analyze the interactions of delta shock waves.To ensure that all the cases are covered completely, according to the relation among − , , + , our discussion is divided into four cases: Case 1 ( − > > + ).In this case, two delta shock waves 1 and 2 will be emitted from ( 01 , 0) and ( 02 , 0), respectively, as shown in Figure 1. According to what has been discussed in Section 2, these two delta shock waves are uniquely determined by We have + < 2 < < 1 < − by entropy condition (6), which means that 1 will overtake 2 at a finite time.The intersection point ( 0 , 0 ) is calculated by which yields that At the intersection ( 0 , 0 ), the new initial data are formed as follows: satisfying 1 > 0 > 2 .In view of − > + , a new delta shock wave will generate after interaction and we denote it with : = ().The trajectory, velocity, and weights ((), (), (), ℎ()) of can be uniquely obtained by solving the ordinary differential equations ( 6) with the initial date (16).The detail is omitted. Thus, the result of interaction of two delta shock waves is still a single delta shock wave.This fact can be formulated as Case 2 ( − > + > (when > + > − , the structure of solution is similar)).In this situation, a delta shock wave 1 determined by ( 12) is emitted from ( 01 , 0) and two contact discontinuities 1 : = and 2 : = + with a vacuum in between are emitted from ( 02 , 0), as shown in Figure 2. Since the propagating speed of 1 satisfies < 1 < − , so 1 must meet the contact discontinuity 1 : = at 1 = ( 02 − 01 )/( 1 − ), and a new delta shock wave 2 : = 2 () forms, which is subjected to the generalized Rankine-Hugoniot relation with the initial data Therefore, by solving ( 18) and ( 19), we have It is clear that 2 will cross the vacuum region with a varying propagation speed.Noting that lim →+∞ 2 () = − > + , so 2 will penetrate over the whole vacuum region and then meet 2 : = + at a finite time.The intersection ( 2 , 2 ) is determined by At ( 2 , 2 ), a new initial value problem is formed and can be solved similar to Case 1.We denote the delta shock wave connecting two constant states ( − , − , − ) and ( + , + , + ) with 3 after the interaction of 2 and 2 . 
The conclusion of this case is that the delta shock wave will penetrate over the whole vacuum region between two contact discontinuities.This fact is expressed as Case 3 ( + > − > (when > − > + , the structure of solution is similar)).Similar to Case 2, there are a delta shock wave, two contact discontinuities, and a vacuum near = 0 on the (, )-plane, as shown in Figure 3. Vac Vac Figure 4: The delta shock wave 1 collides with 1 at first and a new delta shock wave 2 generates.However, since lim →+∞ 2 () = − < + , 2 cannot penetrate over the vacuum region and finally has 2 () = − + 02 as its asymptote.This fact is symbolized as Case 4 ( + > > − ).In this situation, both the contact discontinuities with a vacuum state in between are emitted from ( 01 , 0) and ( 02 , 0), respectively.Noting that the contact discontinuities 2 and 3 own the same propagating speed, thus there is no collision of waves and the solution is expressed as which is called a collisionless solution, as shown in Figure 4. Numerical Simulations In order to verify the validity of the interactions of delta shock waves and vacuum states mentioned in Section 3, we present some representative numerical simulations in this section.Many more numerical tests have been performed to make sure that what are presented are not numerical artifacts.To discretize the system, we employ the second-order nonoscillatory central schemes [31] with 300 × 300 cells and CFL = 0.475.In what follows, by taking 01 = −0.2 and 02 = 0.2, we simulate the interaction of waves by four cases.For convenience, each situation will be simulated at two different times. Case 1 ( − > > + ).We take the initial data as follows: The numerical results are presented by Figures 5-7.We observe from Figures 5-7 that when = 1, two delta shock waves appear at (−0.2, 0) and (0.2, 0), respectively.As increases, they will overtake each other and finally unify into a new delta shock wave at = 6.5. Case 2 ( − > + > ).We choose the following initial data The numerical results are shown in Figures 8-10. From Figures 8-10, we can clearly see that, at = 0.8, a delta shock wave and two contact discontinuities with a vacuum state in between are emitted from (−0.2, 0) and (0.2, 0), respectively.However, at = 2.9, the delta shock wave penetrates over the whole vacuum region, and a new delta shock wave generates. The numerical results are shown in Figures 11-13.Figures 11-13 imply that a delta shock wave is emitted from (−0.2, 0), and two contact discontinuities with a vacuum in between are emitted from (0.2, 0) at = 0.5.But the delta shock wave can not penetrate over the whole vacuum region even though time is on the increase.In this process, the region of vacuum state keeps expanding. Advances in Mathematical Physics Case 4 ( + > > − ).We select the initial data to be The numerical results are presented by Figures 14-16. From Figures 14-16, we observe that both the contact discontinuities with a vacuum state in between are emitted from (−0.2, 0) and (0.2, 0) at = 0.8, respectively.As time goes on, the vacuum state keeps continuously expanding and never disappears. To sum up, all of the above numerical results clearly reveal the interactions of delta shock waves and vacuum states discussed in Section 3. We also indicate that because of the occurrence of singularity as the weighted Dirac delta functions, some oscillations appear in the numerical t
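The figures above were produced with the second-order non-oscillatory central scheme of [31]. As a much cruder, purely illustrative alternative, the sketch below advances the one-dimensional three-equation pressureless system with a first-order Lax-Friedrichs step on periodic boundaries; the Riemann-type initial data, grid size and variable names are ours, and it is meant only to show the structure of such a computation, not to reproduce Figures 5-16.

```python
import numpy as np

def flux(U, eps=1e-12):
    rho, m, E = U                       # density, momentum, internal energy
    u = m / np.maximum(rho, eps)        # velocity, guarded against vacuum
    return np.array([m, m * u, E * u])  # fluxes of mass, momentum, energy

def lax_friedrichs_step(U, dx, dt):
    """One first-order Lax-Friedrichs step for rho_t + (rho u)_x = 0,
    (rho u)_t + (rho u^2)_x = 0, E_t + (E u)_x = 0 (periodic boundaries)."""
    F = flux(U)
    Up, Um = np.roll(U, -1, axis=1), np.roll(U, 1, axis=1)
    Fp, Fm = np.roll(F, -1, axis=1), np.roll(F, 1, axis=1)
    return 0.5 * (Up + Um) - dt / (2.0 * dx) * (Fp - Fm)

# Two jumps at x = -0.2 and x = 0.2, loosely mimicking Case 1 above
x = np.linspace(-1.0, 1.0, 400)
dx = x[1] - x[0]
rho = np.where(x < -0.2, 1.0, np.where(x < 0.2, 0.5, 1.0))
u = np.where(x < -0.2, 2.0, np.where(x < 0.2, 0.0, -2.0))
U = np.array([rho, rho * u, 0.5 * rho])   # take e = 0.5 initially, so E = 0.5*rho
t, cfl = 0.0, 0.475
while t < 0.3:
    vel = U[1] / np.maximum(U[0], 1e-12)
    dt = cfl * dx / max(np.max(np.abs(vel)), 1e-12)
    U = lax_friedrichs_step(U, dx, dt)
    t += dt
# U[0] now shows sharp density concentrations where the delta shock waves have formed
```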
All-derivable points in nest algebras Suppose that $\mathscr{A}$ is an operator algebra on a Hilbert space $H$. An element $V$ in $\mathscr{A}$ is called an all-derivable point of $\mathscr{A}$ for the strong operator topology if every strong operator topology continuous derivable mapping $\phi$ at $V$ is a derivation. Let $\mathscr{N}$ be a complete nest on a complex and separable Hilbert space $H$. Suppose that $M$ belongs to $\mathscr{N}$ with $\{0\}\neq M\neq\ H$ and write $\hat{M}$ for $M$ or $M^{\bot}$. Our main result is: for any $\Omega\in alg\mathscr{N}$ with $\Omega=P(\hat{M})\Omega P(\hat{M})$, if $\Omega |_{\hat{M}}$ is invertible in $alg\mathscr{N}_{\hat{M}}$, then $\Omega$ is an all-derivable point in $alg\mathscr{N}$ for the strong operator topology. Introduction Let K and H be complex and separable Hilbert spaces of dimensions greater than one. Suppose that A is a subalgebra of B(H) and V is an operator in A . A linear mapping ϕ from A into itself is called a derivable mapping at V if ϕ(S T ) = ϕ(S )T + S ϕ(T ) for any S , T in A with S T = V. Operator V is called an all-derivable point in A for the strong operator topology if every strong operator topology continuous derivable mapping ϕ at V is a derivation. In recent years the study of all-derivable points in operator algebras has attracted many researchers' attentions. Jing, Lu, and Li [4] proved that every derivable mapping ϕ at 0 with ϕ(I) = 0 on nest algebras is a derivation. Li, Pan, and Xu [5] showed that every derivable mapping ϕ at 0 with ϕ(I) = 0 on CSL algebras is a derivation. Zhu and Xiong proved the following results in [6,7,8,9,10] is an all-derivable point in nest algebra algN for the strong operator topology. The following three lemmas will be used to prove the main result of this paper in Section 3. Lemma 2.1. Let H be a complex and separable Hilbert space and let N be a complete nest in H. Suppose that δ is a strong operator topology continuous linear mapping from algN into itself and Γ is an invertible operator in algN . If the following equation holds for any S 1 , S 2 in algN with S 1 S 2 = I, then δ is an inner derivation. Proof. Put S 1 = S 2 = I in Eq. (2.1), we have S 1 S 2 = I. It follows that Γδ(I) = 0. That is, δ(I) = 0 since Γ is invertible in algN . Put S 1 = I − aP and S 2 = I − bP in Eq. (2.1), where P is an idempotent in algN and a, b are two complex numbers such that a + b = ab = 1. Thus we get that S 1 S 2 = I. Thus L x = L u = L v since u and v are linearly independent. This implies that L x is independent of x for any x ∈ H. If we write L = L x , then ϕ(x ⊗ g) = x ⊗ Lg for any x in H and g in K. Next we shall prove that L is in B(K). In fact, for arbitrary sequence (g n ) in K with g n → g and Lg n → h, we have Therefore L is a closed operator. By the Closed Graph Theorem, we obtain that L is a bounded linear operator on K. for any S in B(K, H) and finite rank operator F in algN . Since ϕ is a strong operator topology continuous linear mapping, it follows from Erdös Density Theorem that ϕ(S ) = S D for any S in B(K, H). } strongly converges to 0 as n → +∞. It is obvious that N 1 N 2 · · · N j N j+1 · · · H and the sequence {P(N n )} strongly converges to the unit operator I H as n → +∞. For an arbitrary integer n and x in N n , by imitating the proof of case 1, we can find a linear mapping D N n on K such that ϕ(x ⊗ g) = x ⊗ gD N n for any x in N n and g in K. Note that N n ⊆ N m (m > n) and ϕ(x ⊗ g) = x ⊗ gD N m for any x in N m and g in K. 
So x ⊗ gD N n = x ⊗ gD N m for any x ∈ N n and g ∈ K. It follows that D N n = D N m . Hence D N n is independent of N n . We write D as D N n . Thus ϕ(x ⊗ g) = x ⊗ gD for any x in N n and g in K. For any x in H, put x n = P(N n )x. Then we get that That is, Since ϕ is a strong operator topology continuous linear mapping and P(N n ) strongly converges to I H as n → +∞, taking limit on both sides in the above equation, we obtain that ϕ(x ⊗ g) = x ⊗ gD for any x in H and g in K. The rest of the proof is similar to case 1. The lemma is proved. Proof. We only need to prove that φ(T ) = 0 for any operator T in A . Take a complex number λ with All-derivable points in algN In this section, we always assume that M belongs to N with where W is an invertible operator in algN M . The proof are divided into the following five steps: Step 1. For arbitrary X 1 , then S T = Ω. Since ϕ is a derivable mapping at Ω on algN , we have 2) A 22 (W) = 0 for any X 1 , X 2 in algN M with X 1 X 2 = I M . By Lemma 2.1, we get that A 11 is an inner derivation on algN M . Then there exists an operator A ∈ algN M such that for any X in algN M . Step 3. For arbitrary Y in B(M ⊥ , M) and X 1 , Since A 11 is a inner derivation and X 2 is an invertible operator in algN M , we have Step [4]). Thus C 22 is inner, and so there is an operator C ∈ algN M ⊥ such that Step 5 . For arbitrary idempotent Q in algN M and Y in B(M ⊥ , M), we write Q λ for Q + λI M . It follows that Since every rank one operator in algN M can be represented as a linear combination of at most four idempotents in algN M (see [3]), we get that the above equation is valid for each rank-one operator in algN M . Furthermore, it is valid for every finite rank operator in algN M (see [2]). Therefore, by the Erdös Density Theorem(see [2]), we have In summary, we get that Thus ϕ is an inner derivation. Then Ω may be represented as the following operator matrices relative to the orthogonal decomposition H = M ⊕ M ⊥ : where W is an invertible operator in algN M ⊥ . Since the proof is similar to case 1, the sketch of the proof is given below. The proof is divided into the following six steps: Step 1. For arbitrary Z 1 , Since ϕ is derivable at Ω, by imitating the proof of Case 1, we get that . It follows from Lemma 2.1 that there exists Step 2. For arbitrary Z 1 , , then S T = Ω. By Lemma 2.3 and imitating the proof of case 1, we may get that C 11 (Z) = 0 for any Z in algN M ⊥ . Since C 11 vanishes on algN M ⊥ , we obtain that A 11 is derivable at 0. It follows from the expression of C 12 that A 12 (X) = −XB ′ for any X in algN M . We also get that A 22 (X) = 0 for any X in algN M . Thus ϕ is an inner derivation. This completes the proof.
Spectral representation of the Casimir Force Between a Sphere and a Substrate We calculate the Casimir force in the non-retarded limit between a spherical nanoparticle and a substrate, and we found that high-multipolar contributions are very important when the sphere is very close to the substrate. We show that the highly inhomegenous electromagnetic field induced by the presence of the substrate, can enhance the Casimir force by orders of magnitude, compared with the classical dipolar approximation. Recent advances in micro and nano devices have opened the possibility of studying quantum phenomena that occur at these length scales. Such is the case of the Casimir force [1] that is a macroscopic manifestation of the quantum vacuum fluctuations, as predicted by quantum electrodynamics. The textbook example [2,3,4] consists of two parallel neutral conducting plates which attract each other. The first experimental measurements were done in 1951 using dielectric materials [5], and in 1958 using conductors [6]. These measurements have large errors, and up to recently, it was possible to perform measurements with about 15% of precision on truly parallel metal surfaces [7]. The difficulty of keeping the two plates parallel at separations of few nanometers makes it easier to measure the Casimir force between a sphere and a plane [8,9,10,11,12]. In this case, the Casimir theory for parallel plates can be extended using the proximity theorem [5]. The approximation is valid when the minimum separation between the sphere and the plane is much smaller than the radius of the sphere. This theorem was employed to corroborate experimental measurements of the Casimir force between a plane and a large sphere [10,11,12]. However, it is well known that quantum effects become more evident as the size of the system decreases. Thus, the question of how important are the Casimir effects on nanometric-size spheres is still an open question of fundamental importance. In 1948 Casimir and Polder [13] calculated the force of a polarizable atom near a perfect conductor plane considering the influence of retardation, and finding a correction to the London or van der Waals forces. Retardation effects are important if we consider that the distance between the atom and the plane is larger than the characteristic length of the system. Complementary theories are necessary to handle nanometer-size systems with real dielectric properties. Within this context Ford [14] calculated the force between a perfectly conducting wall and a sphere with a Drude dielectric function. After a delicate cancelation of terms in the equations, he obtained a force that changes from attractive to repulsive in an oscillatory fashion depending on the relative distance between the sphere and the surface. However, this oscillatory behavior has not been observed experimentally [8,9,10,11,12], and has not been predicted by other theories [5,13]. In this work, we develop a spectral representation formalism to calculate the force between a sphere and a substrate. The advantage of this spectral representation is that we can separate the contribution of the dielectric properties of the sphere and substrate from the contribution of its geometrical properties. Since results for large spheres [5] and large distances [13] are known, we restrict ourselves to the case of nanometric-size spheres and distances of few nanometers. 
In this case, it is not necessary to consider retardation effects, therefore, we work in the quasi-static limit such that the radius of the sphere and the minimum separation between the sphere and the plane, are smaller than the characteristic length of the system [15]. In this regime, the Casimir force is commonly known as the van der Waals or London force [4]. We consider a homogeneous sphere of radius R, electrically neutral and with a local dielectric function ǫ sph (ω). The sphere is suspended at a minimum distance z above a substrate (see Fig. 1) which is also neutral and has a local dielectric function ǫ sub (ω). The space or ambient between the sphere and substrate is vacuum (ǫ amb = 1). The quantum fluctuations of the electromagnetic field induce a polarization in the sphere which can be described by a point dipole located at its center, where α(ω) = [ǫ sph (ω) − ǫ amb ]/[ǫ sph (ω) + 2ǫ amb ]R 3 , is the polarizability of the sphere that is assumed to be polarized uniformly [16], and E vac (ω) is the electromagnetic field associated to the vacuum fluctuations. We can rewrite the polarizability as where n 0 = 1/3 is a constant and u(ω) = [1 − ǫ sph (ω)/ǫ amb ] −1 is a variable that only depends on the dielectric properties of the sphere and the ambient. When the sphere is near a substrate, it induces a charge distribution on the substrate that can be seen as a dipole image, such that where the term f c M satisfies the boundary conditions of the system, being M a diagonal matrix whose elements depend on the choice of the coordinate system, and f c (ω) = [ǫ amb − ǫ sub (ω)]/[ǫ amb + ǫ sub (ω)] a the contrast factor that only depends on the dielectric properties of the substrate and the ambient. The induced charge distribution on the substrate produces a field which also modifies the sphere's dipole moment through a local field. Thus, the total induced dipole-moment on the sphere is where T is the dipole-dipole interaction tensor and, in the non-retarded limit, it takes the form where 1 is a unitary matrix, r = (0, 0, 2(z + R)) is the vector from the center of the image dipole to the center of the sphere,r = r/r, and r = |r|. Given the symmetry of the system the diagonal components of M are (−1, −1, 1), and there are only three independent components of T, one perpendicular to the surface plane and two parallel to this plane. The frequencies that satisfy the boundary conditions of the system are those at which the sphere is polarized. These frequencies are known as the proper electromagnetic modes of the system, and we denote them like ω s . Then, the total energy of the system is E = s 1/2 ω s . A convinient way of determining these proper electromagnetic modes is using a spectral representation formalism that we derive as follows. First we rewrite Eq. (4) using the expression of the polarizability from Eq. (2), as where is a dimensionless matrix that only depends on the geometry of the system. To find the solution of Eq. (6), consider the case when f c (ω) is real, then H is a real and symmetric matrix. In this case, we can always find a unitary transformation that diagonalizes it, U −1 HU = n s , being n s the eigenvalues of H. Furthermore, the solution of Eq. (6) is given by where G(u) = [−u(ω)1 + H] −1 is a Green's operator. The ijth element of G(u) can be written in terms of the unitary matrix U as [17] G The poles of G(u), that is u(ω) = n s , give the frequencies of the proper electromagnetic modes, ω s , of the system [18]. 
We now calculate the Casimir interaction energy as the difference between the energy when the sphere is at a distance z from the substrate and the energy when z → ∞, that is, The eigenfrequencies ω s ′ are obtained from the poles of Eq. (8) when z → ∞, or by substituting ǫ sub (ω) = ǫ amb in f c . Note that it is not necessary to do any renormalization or any delicate cancelation to calculate the energy. Alternatively, we can also find the density of states using the Green's function definition and then calculate the energy of the system, as we show in the appendix. The advantage of the spectral representation is that we can separate the contribution of the dielectric properties of the sphere from the contribution of its geometrical properties. As we mentioned, the material properties of the sphere are contained in the spectral variable u, while the geometrical properties of the system, like the radius of the sphere and the separation of the sphere to the substrate are in the matrix H. Furthermore, H is a dimensionless matrix that depends on the ratio z/R. Its eigenvalues are independent of V vac and of the dielectric properties of the sphere. And the dielectric properties of the substrate are in f c which is a real function even for dispersive materials [19]. A similar spectral representation was proposed years ago to study the effective dielectric properties of granular composites [20]. The results discuss here are calculated as follows. First, we construct the matrix H for a given z/R, and we diagonalize it numerically to find its eigenvalues n s . Considering an explicit expression for the dielectric function of the sphere, we calculate the proper electromagnetic modes ω s trough the relation u(ω s ) = n s . Once we have ω s , we calculate the energy according with Eq. (9). Here, we use the Drude model, such that ǫ sph (ω) = 1 − ω 2 p /[ω(ω + i/τ )], where ω p is the plasma frequency and τ is the relaxation time. We present results for potassium (K), gold (Au), silver (Ag) and aluminum (Al) spheres with ω p = 3.80, 8.55, 9.60, and 15.80 eV, and (τ ω p ) −1 = 0.105, 0.0126, 0.00188, and 0.04, respectively. We have considered substrates whose dielectric function is real and constant in a wide range of the electromagnetic spectrum as sapphire (Al 3 O 2 ), and titanium dioxide (TiO 2 ), with ǫ sub = 3.13, and 7.81, respectively. Then, the corresponding contrast factors are f c = -0.516, and -0.773. We have also considered the case of a perfect conductor substrate (denoted by Inf) with ǫ sub → ∞ and f c = −1. In Fig. 2, we show the energy as a function of z/R. In general, we observe that the energy shows a power law of (z/R) −3 . This behavior is independent of the material properties, and it is inherent to the dipole-dipole interaction model. This is consistent with the result found by Casimir and Polder [13] for a polarizable atom, and with the measurements by Mohideen et al. [11], but it is contrary to the oscillatory behavior calculated by Ford [14]. The value of the energy varies with the substrate, for example, at small distances it is about two times larger for a perfect conductor substrate than for Al 3 O 2 , while the TiO 2 case is between them. This is easily explained if we look at the contrast factor values for each substrate, where one can see that as f c → −1, the energy is larger. For all the substrates, we found that V becomes larger as the plasma frequency of the metal also does. 
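To make the last step of this procedure concrete, the sketch below converts a given set of eigenvalues n_s of the geometric matrix H into proper-mode energies and into the interaction energy of Eq. (9), for a Drude sphere with negligible damping, so that u(omega) is approximately (omega/omega_p)^2 and the mode energy is omega_p*sqrt(n_s) when omega_p is quoted in eV. Building H itself requires the multipolar machinery described above and is not reproduced here; the near-substrate eigenvalues in the example are invented placeholders, and the function names are ours.

```python
import numpy as np

def mode_energies(ns, omega_p_eV):
    """Mode energies hbar*omega_s = omega_p * sqrt(n_s) for a Drude sphere with
    negligible damping, with the plasma frequency omega_p given as an energy in eV."""
    return omega_p_eV * np.sqrt(np.clip(np.asarray(ns, dtype=float), 0.0, None))

def interaction_energy(ns_at_z, ns_at_infinity, omega_p_eV):
    """V = (1/2) * sum_s [hbar*omega_s(z) - hbar*omega_s(infinity)], Eq. (9), in eV."""
    return 0.5 * np.sum(mode_energies(ns_at_z, omega_p_eV)
                        - mode_energies(ns_at_infinity, omega_p_eV))

# An isolated sphere has the threefold-degenerate dipole eigenvalue n_0 = 1/3; near
# the substrate the eigenvalues split.  The values below are invented placeholders,
# chosen only so that V comes out negative (attractive), as found in the paper.
ns_far = [1.0 / 3.0] * 3
ns_near = [0.30, 0.30, 0.27]
print(interaction_energy(ns_near, ns_far, omega_p_eV=8.55))   # Au sphere
```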
In conclusion, we found that V is large when f c → −1 and ω p is large, recovering the limit for perfect conductors. Therefore, the energy is largest (smallest) for an Al (K) particle over a perfect conductor (Al 3 O 2 ) substrate. When the sphere is at a distance larger than 2R, the energy is very similar, independently of the dielectric properties of the sphere and substrate. Let us first analyze the force as a function of the geometrical properties, that is, as a function of R and z. In Fig. 3 we show the Casimir force calculated as In all cases, we obtain an attractive force such that, as R is smaller the force increases. When the sphere is almost touching the substrate (z ∼ 0 nm) the force is ten times larger for a sphere of R = 10 nm than the one of R = 100 nm, and increases fifty times for a sphere of R = 500 nm. As a function of z, the force for the sphere with R = 500 nm seems to be almost constant from 0 to 40 nm compared with the other curves. This is an artifact of the scale since all curves have a power law of z −4 , and they are proportional to R 2 . This implies that for a distance z ≤ 10 nm the force is larger for the smallest sphere; however, at a larger distance the force is larger for larger spheres, while for very large distances, the force is independent of R. Furthermore, the force for the sphere with R = 10 nm decreases about three orders of magnitude as the separation of the sphere goes from 0 to 40 nm, independently of the dielectric properties of the system. On the other hand, with the proper combination of dielectric functions of the sphere and substrate it is possible to modulate the magnitude of the Casimir force. Here, we show the force for an Al sphere over a perfect conductor which is one order of magnitude larger than the force between the K sphere over Al 3 O 2 , as it is expected. From Fig. 4 we analyze the force as a function of the dielectric properties of the particles and substrate. In all cases, we found the same dependence of the force as a function of z, independently of the dielectric functions of both, sphere and substrate. Although the dielectric function of the substrate is important, the dependence on the dielectric function of the sphere is more critical in the magnitude of the Casimir force. Indeed, the force is larger for increasing values of ω p . In particular, we found that the force for an Al sphere is almost ten times larger than for a K sphere. On the other hand, for a given sphere the Casimir force increases at most by a factor of three, when the substrate is changed from Al 3 O 2 to a perfect conductor. The spectral representation formalism allows us to calculate the force between a sphere and a substrate in a range of sizes and separations where the proximity theorem is not applicable. value of the Casimir force using the proximity theorem yields a force about three orders of magnitude of the values obtained in this paper. This is due to the linear dependence of the proximity theorem with the radius of the sphere and that the geometrical effects of the sphere are not included. On the other hand, systems where the proximity theorem is used, such as the experiments by the group of Mohideen and collaborators [11,12] do not employ homogeneous metallic spheres, rather coated dielectric spheres, making it difficult to employ the spectral representation formalism. In conclusion, we have developed a spectral representation formalism within the van der Waals approximation to calculate the Casimir force between a sphere and a substrate. 
This spectral formalism separates the geometrical properties contributions from dielectric properties contributions on the Casimir effect in the non-retarded limit. We found that at very small distances, the force can increase orders of magnitude as the size of the particle becomes smaller. We have also observed that the correct choice of the dielectric properties of both, sphere and substrate, can increase or decrease the force by orders of magnitude. This work has been partly financed by CONACyT grant No. 36651-E and by DGAPA-UNAM grants No. IN104201 and IN107500.
Scaling Up Antiretroviral Treatment Services in Karnataka, India: Impact on CD4 Counts of HIV-Infected People Setting Twelve antiretroviral treatment centres under National AIDS Control Programme (NACP), Karnataka State, India. Objective For the period 2004-2011, to describe the trends in the numbers of people living with HIV (PLHIV) registered for care and their median baseline CD4 counts, disaggregated by age and sex. Design Descriptive study involving analysis of routinely captured data (year of registration, age, sex, baseline CD4 count) under NACP. Results 34,882 (97% of total eligible) PLHIV were included in analysis. The number registered for care has increased by over 12 times during 2004-11; with increasing numbers among females. The median baseline CD4 cell count rose from 125 in 2004 to 235 in 2011 – the increase was greater among females as compared to males. However, about two-thirds still presented at CD4 cell counts less than 350. Conclusion We found an increasing trend of median CD4 counts among PLHIV presenting to ART centres in Karnataka, an indicator of enhanced and early access to HIV care. Equal proportion of females and higher baseline CD4 counts among them allays any fear of differential access by gender. Despite this relative success, a substantial proportion still presented at low CD4 cell counts indicating possibly delayed HIV diagnosis and delayed linkage to HIV care. Universal HIV testing at health care facilities and strengthening early access to care are required to bridge the gap. Introduction With an estimated 2.5 million people living with HIV (PLHIV), India has the third highest HIV burden in the world, after South Africa and Nigeria [1]. The introduction of antiretroviral therapy (ART) to people living with HIV⁄AIDS has been credited with significantly improving quality of life and reducing mortality [2]. However, a large proportion (15-43%) of HIV-infected individuals in developing countries present themselves for care when CD4 lymphocyte count has fallen below ≤ 200 cells/mm 3 [1,3]. In 2005, up to 44% individuals presenting at the ART centres of India had baseline CD4 count <100 cells/mm 3 [4]. Furthermore, till March 2008, 85% had registered for HIV care with baseline CD4 cell count less than 200 cells/mm 3 , indicating advanced stages of immunosuppression due to delays in diagnosis and access to care [5]. Pre-treatment CD4 cell count is one of the important criteria for categorizing the degree of immunosuppression in order to determine eligibility for initiation of antiretroviral therapy. It is well documented that survival rates are longer if ART is started as soon as possible rather than waiting till CD4 counts reach a nadir of <250 cells/mm 3 [4,[6][7][8]. The consequences of presenting with a low CD4 cell count are multiple; patients are more likely to be diagnosed with severe opportunistic infections, the risk of death may be higher [9], the rate of immunological improvement may be slower [10], the likelihood of transmitting the virus to other individuals is higher, and overall probability of posing a higher financial strain on national health services [11]. Karnataka is one of six states in India considered to have high HIV prevalence with an estimated 0.25 million people living with HIV [12]. There has been a massive scale-up of HIV testing and treatment services since 2004. The number of stand-alone HIV testing centres increased from 40 in 2004 to 565 in 2011( Figure 1). 
This, along with policies to routinely offer HIV testing to TB patients and pregnant women in addition to other high-risk groups were introduced in 2007 which led to an exponential increase in the number of people tested for HIV and found HIV positive ( Figure 2). The free ART program for HIV-infected people was launched in Karnataka in four centres in 2004. At present, the state has 49 ART centres with at-least one centre in every district providing multidisciplinary care, counseling and dispensing of ART medications ( Figure 1). We hypothesize that this scale-up would have led to early access to care for PLHIV reflected by an increasing trend in average CD4 counts at the time of registration. However, this has not been studied systematically till date. Hence, the specific objectives of this study were: For the period 2004-2011, in selected 12 ART centres of Karnataka State: 1 To describe the trends in the numbers of PLHIV registered for HIV care, disaggregated by age and sex 2 To describe the trends in median CD4 counts of PLHIV at the time of registration for HIV care, disaggregated by age and sex Ethics considerations Since this study was a review of the routinely recorded data and did not involve patient interaction, informed consent was deemed unnecessary. The entire protocol was reviewed for ethical issues by the Ethics Advisory Group of International Union Against Tuberculosis and Lung Diseases (The Union), Paris, France and approved including a waiver of informed consent. Appropriate administrative approvals were obtained by National AIDS Control Organization, New Delhi, India. Study Design This is a descriptive study involving secondary analysis of data routinely recorded under the National AIDS Control Programme (NACP). Setting Karnataka, with 30 districts and a population of 61 million, is one of four large states in South India facing a relatively advanced HIV epidemic, with the adult HIV prevalence in some districts exceeding 1%. As per the NACO report of 2009, Karnataka had a prevalence of 0.63% amounting to 0.25 million persons living with HIV [12]. There are 565 stand-alone HIV testing facilities, 1050 facility integrated HIV testing facilities and 49 ART centres in of the State. Nearly 27 new ART centres in district and sub-district level hospitals were established in the past 4 years. The primary aim of the HIV testing facilities is to provide information, counseling and HIV testing services. All HIV positive persons diagnosed at testing centres are referred to the nearest ART centre for further management. HIV positive patients, who reach ART centres are registered for HIV care, are assessed clinically including CD4 count assessments and if found eligible for ART initiation as per the national guidelines, are initiated on ART [13]. India currently follows WHO 2010 ART guidelines [14]. Since 2008, in its plan to decentralize the monitoring of services, Government of India has established District AIDS Prevention and Control units (DAPCU) headed by a district level officer with support staff for supervision and monitoring in selected districts with high prevalence. Study site, Study population, Study period Twelve ART centres representative of the three geographical zones of the state: north, south and coastal regions of Karnataka were purposively selected for the study based on completeness of data on baseline CD4 count. 
All PLHIV aged 15 years and above, newly diagnosed and registered to receive HIV care and treatment at each of the selected centres in the state of Karnataka, between April 2004 and December 2011 constituted the study population. Data variables and Source The data were extracted out of the electronic databases maintained in the ART centres during the month of November 2012. Original data sources included the pre-ART patient register and patient treatment cards kept at each centre. The key variables included pre-ART number, year of registration, age in completed years, sex and CD4 lymphocyte count at the time of registration. Data management and analysis Since the data were already present in the electronic format, double data entry and validation was not considered. Abstracted data from the Microsoft Excel database were imported into EpiData [15] software and analyzed. The following key indicators were calculated: 1 Trends in numbers of PLHIV registered for care, year-wise 2 Trends in numbers (proportion) of PLHIV by sex and age groups, year-wise 3 Trends in median (interquartile range) CD4 lymphocyte counts of PLHIV, disaggregated by sex and age groups, yearwise Results Between April 2004 and December 2011, 38,245 newly diagnosed HIV-infected individuals were registered in 12 selected ART centres in north, south, and coastal Karnataka. Of these, 2,367 (6%) were aged <15 years and excluded as per the study criteria. Of the remaining, 34,882 (97%) were included in the final analysis after excluding records with missing information on age, sex and CD4 count. The key characteristics of the study population and their trends from 2004-11 are shown in Table 1 Discussion This is the first study from India systematically examining the scale-up of anti-retroviral treatment services and its potential impact on the trends in average CD4 lymphocyte counts of people living with HIV. The strength of the study was that we collected data from 12 ART centres spread across the state of Karnataka and included all the PLHIV registered for HIV care. We found that there has been a massive scale-up of ART services in the state over the past 8 years with a 12-fold increase in the number of PLHIV registered for care in 2011 as compared to 2004. The proportion of females living with HIV registered for care has consistently increased over the years to reach more than 50% in 2011 allaying any concerns of genderbased inequity in accessing services. Increasing trend in median age of PLHIV registered for care indicates a right shift in age distribution and may be an early indicator of declining HIV epidemic in the state. This fact is confirmed by recent estimates from NACO which indicate a nationally declining trend in HIV incidence and the number of people living with HIV. HIV incidence is declining in both males and females [16]. The rising number of females accessing HIV care in the background of falling incidence provides additional evidence of improved access among females. It was encouraging to note the increasing trend of median CD4 cell counts from 125 in 2004 to 235 in 2011 indicating improvements in early access to HIV care services. There was a sudden jump in median CD4 count noted from 2007 to 2008 (Figures 1 and 2). The increase in median CD4 counts was greater among females and among younger age-groups; again a very significant finding indicating improved and early access to HIV care among females. 
The other possible reasons for the improved access include -widespread advocacy, communication and social mobilization programs, increased support from community programs for HIV counseling and testing and strengthening linkages between patients and care systems. Despite the increasing trends, more than sixty-five percent of patients continue to first present for HIV care with a CD4 cell count below 350 cells/mm 3 , the level at which initiation of antiretroviral therapy is recommended by national guidelines. The high proportion of patients presenting with low CD4 cell count at their initial clinic visit indicates delayed diagnosis which can lead to high morbidity and mortality [4], higher transmissibility at the community level [17] and steeper treatment costs [18][19][20]. Few Indian studies in the past have analyzed CD4 trends on such a large state-wide scale. Published national program data of 972 patients at three government ART centres from 2004 to 2005 showed nearly 75% of patients had CD4 cell count <200 cells/mm 3 at the time of initiation of ART [4]. In New Delhi, India from 2001 to 2007, 33% (n=3680) of patients first presented at CD4 cell count below 200 cells/mm 3 with 9.5% subjects having CD4 cell count below 50 cells/uL [21]. According to published national program data reporting baseline CD4 cell count of 116,225 registered HIV-infected persons from 2005 to 2008, 85% registered for ART with baseline CD4 cell count less than 200 cells/mm 3 [5]. Studies have found that male gender and older age are significant determinants of presenting late [22][23][24]. These have been confirmed in our study as well with a higher median CD4 cell count among females as compared to males. The proportion of females accessing care increased from 2004 to 2011, particularly in 2008 and then after till 2011. The reason for females presenting to care earlier over time may be explained by the fact that females may be getting tested for HIV earlier now through expanded HIV testing programs in pregnancy, or through expanded partner testing programs after a spouse tests positive. Similarly the median CD4 counts among younger age groups were greater when compared to that among older age groups. This can be explained due to the fact that older individuals were likely to have got infected at younger ages, but had a delayed diagnosis [25]. Another study which found similar results explained that younger persons were likely to have been more recently infected compared to older persons, and hence less likely to progress rapidly to develop severe immunosuppression [26]. Our findings have several important programmatic implications for the country. First, increasing average CD4 counts with decentralized access to HIV care is very encouraging and needs to be continued and strengthened. Second, as men are at increased risk of presenting late; further efforts to enroll men into care must be focused. Many untested persons have a perception that they may be at low risk of HIV infection or they are fearful of being aware of their HIV status [27]. Such persons are more likely to present at late stage or have an illness-triggered HIV diagnosis. Our findings reinforce the need to establish universal routine HIV testing as standard of care for all adolescents and adults seen in private and public care settings, regardless of patient reported HIV risk [28]. It is only under such circumstances that late-stage or illnesstriggered HIV diagnoses will be reduced. 
Third, this study provides valuable information useful for program planning. The information on proportions of PLHIV in several CD4 groups helps the programme manager to assess the possible increase in workload at ART centres with changes in ART initiation criteria from a CD4 threshold of 200 to 350, or from 350 to 500. In light of the recently released ART guidelines by the WHO [29,30], this information helps in planning for drug procurement and distribution of antiretroviral drugs. As with any operational research, there were a few limitations. While we selected 12 ART centres representing the three regions of Karnataka and have no reason to believe that they are any different from the rest of the ART centres in the state, we have no data to demonstrate the same. Similarly, we had to exclude children from the study owing to incomplete data among them. We also acknowledge the limitations of using ecologic data in measuring access to HIV services. In conclusion, we have found that in Karnataka, there has been a massive scale-up of HIV diagnostic and treatment services and it has improved the median CD4 cell counts of PLHIV at the time of registration, an indicator of early access to care. However, about two-thirds were diagnosed with CD4 cell counts ≤ 350cells/ mm 3 , the threshold at which initiating ART is unequivocally recommended as per national guidelines [13,14]. These findings suggest that further expanding HIV testing and reducing late HIV diagnosis needs to be a priority, if the programs related to improving linkage to care and earlier antiretroviral treatment initiation are to reach patients and potentially alter the trajectory of the HIV epidemic in India.
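As a companion to the indicator list in the data-analysis section above, the following sketch shows one way the year-wise tabulations (numbers registered, median and interquartile-range baseline CD4 counts by sex, and the share presenting at CD4 counts of 350 cells/mm3 or below) could be computed. The file and column names are hypothetical, and this is not the study's actual analysis script.

```python
import pandas as pd

# Hypothetical pre-ART register extract; real NACP field names may differ.
df = pd.read_csv("pre_art_register.csv")   # columns: year, sex, age, cd4

adults = df[(df["age"] >= 15) & df["cd4"].notna()]

# 1. Trend in numbers registered for care, by year and sex
registrations = adults.groupby(["year", "sex"]).size().unstack(fill_value=0)

# 2. Median (IQR) baseline CD4 count, by year and sex
cd4_summary = (adults.groupby(["year", "sex"])["cd4"]
                     .quantile([0.25, 0.50, 0.75])
                     .unstack())

# 3. Proportion presenting with baseline CD4 <= 350 cells/mm^3, by year
late_presentation = (adults.assign(late=adults["cd4"] <= 350)
                           .groupby("year")["late"].mean())

print(registrations, cd4_summary, late_presentation, sep="\n\n")
```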
Polytopal surfaces in Fuchsian manifolds Let $S_{g,n}$ be a surface of genus $g$ with $n$ punctures equipped with a complete hyperbolic cusp-metric. Then it can be uniquely realized as the boundary metric of a symmetric Fuchsian polytope. In the present paper we give a new variational proof of this result. Our proof is based on the discrete Hilbert-Einstein functional and on an interpretation of the Epstein--Penner convex hull construction. Theorems of Alexandrov and Rivin Consider a convex polytope P ⊂ R 3 . Its boundary is homeomorphic to S 2 and carries a metric induced from the Euclidean metric on R 3 . What are the intrinsic properties of this metric? A metric on S 2 is called polyhedral Euclidean if it is locally isometric to the Euclidean metric on R 2 except finitely many points, which have neighborhoods isometric to an open subset of a cone (an exceptional point is mapped to the apex of this cone). If the angle at every exceptional point is less than 2π, then this metric is called convex. It is clear that the induced metric on the boundary of a convex polytope is a convex polyhedral Euclidean metric. One can ask a natural question: is this description complete, in the sense that every convex polyhedral flat metric can be realized as the induced metric of a polytope? This question was answered positively by Alexandrov in 1942, see [1], [2]. Theorem 1.1. For every convex polyhedral Euclidean metric d on S 2 there is a convex polytope P ⊂ R 3 such that (S 2 , d) is isometric to the boundary of P . Moreover, such P is unique up to an isometry of R 3 . Note that P can degenerate to a polygon. In this case P is doubly covered by the sphere. The uniqueness part is easy and follows from the modified version of Cauchy's global rigidity of convex polytopes. The original proof by Alexandrov of the existence part was not constructive. It was based on some topological properties of the map from the space of convex polytopes to the space of convex polyhedral Euclidean metrics. Another proof was done by Volkov in [23], a student of Alexandrov, by considering a discrete version of the total scalar curvature. A new proof of Theorem 1.1 was proposed by Bobenko and Izmestiev in [3]. For a fixed metric they considered a space of singular polytopes realizing this metric at their boundary. In order to remove singularities they constructed a functional over this space and investigated its behavior. Such a proof can be turned into a practical algorithm of finding a polytopal realization of a given metric. It was implemented by Stefan Sechelmann. One should note that this algorithm is approximate as it uses numerical methods of solving variational problems, but it works well for all practical needs. We turn our attention to hyperbolic metrics on surfaces. Let S g,n be a surface of genus g endowed with a complete hyperbolic metric of a finite volume with n cusps. In [17] Rivin proved a version of Theorem 1.1 for cusp-metrics on a sphere S 2 with punctures. Theorem 1.2. For every cusp-metric d on S 0,n there exists a convex ideal polytope P ⊂ H 3 such that (S 0,n , d) is isometric to the boundary of P . Moreover, such P is unique up to an isometry of H 3 . Rivin gave a proof in the spirit of Alexandrov's original proof. Very recently, in [21] Springborn gave a variational proof of Theorem 1.2. Ideal Fuchsian Polyhedra and Alexandrov-type results It is of interest how can we generalize these results to surfaces of higher genus. We restrict ourselves to the case g > 1 and to metrics with cusps. 
By G denote the fundamental group of S g . Let ρ : G → Iso + (H 3 ) be a Fuchsian representation: an injective homomorphism such that its image is discrete and there is a geodesic plane invariant under ρ(G). Then F := H 3 /ρ(G) is a complete hyperbolic manifold, homeomorphic to S g ×(−∞; +∞). The image of the invariant plane is the so-called convex core of F and is homeomorphic to S g . The manifold F is symmetric with respect to its convex core. A subset of F is called convex if it contains every geodesic between any two of its points. It is possible to consider convex hulls with respect to this definition. An ideal Fuchsian polytope P is the convex hull of a finite point set in ∂ ∞ F invariant under the reflection with respect to the convex core. The boundary of P consists of two isometric copies of S g,n . Now we can establish our main result. Theorem 1.3. For every cusp-metric d on S g,n , g > 1, n > 0, there exists a Fuchsian manifold F and an ideal Fuchsian polytope P ⊂ F such that (S g,n , d) is isometric to each of two components of the boundary of P . Moreover, F and P are unique up to isometry. This theorem was first proved by Schlenker in his unpublished manuscript [19]. Another proof was given by Fillastre in [8]. Both these proofs were nonconstructive following the original approach of Alexandrov. The aim of the present paper is to give a variational proof of Theorem 1.3 in the spirit of papers [3], [9] and [21]. Several authors studied Alexandrov-type questions for hyperbolic surfaces of genus g > 1 in more general sense. They were collected in the following result of Fillastre [8]. Consider a complete hyperbolic metric d on S g,n with cusps, conical points and complete ends of infinite area. Complete ends of infinite area are boundary components "at infinity". One can see an example in the projective model as the intersection of a cone with the apex outside of H 3 (such a point is called hyperideal point) with H 3 . Such a metric can be uniquely realized as the induced metric at the boundary of a generalized Fuchsian polytope. Some vertices of this polytope may be hyperideal, which corresponds to complete ends of infinite area. The case, when the surface is a sphere with punctures and holes, was proved by Schlenker in [18]. Some vertices may be in the interior of a Fuchsian manifold and correspond to conic points. The case of g > 1 with only conical singularities was first proved in an earlier paper of Fillastre [7] and with only cusps and infinite ends in the paper [20] by Schlenker. The torus case with only conical singularities was the subject of the paper [9] by Fillastre and Izmestiev. The last paper also followed the scheme of variational proof. All other mentioned works were done in the direction of the original Alexandrov approach. Recently another realization result of metrics on surfaces with conical singularities was obtained by Brunswic in [5]. We would like to thank Boris Springborn, who pointed that another proof of Theorem 1.3 follows from the paper [10]. The authors do not consider Alexandrov-type statements, but Theorem 1.3 can be deduced from their results using several geometric lemmas. The proof in this paper is also nonconstructive and investigates the correspondence between discrete Euclidean structures on surfaces up to discrete conformality and hyperbolic cusp-metrics up to isometry first noted in [4]. The variational approach is discussed in the end, but the functional is not provided directly. 
There is an interesting interpretation of Theorem 1.3 in terms of the Teichmüller space. LetT g,n be the cusped Teichmüller space, i.e. the space of all cusp-metrics on S g,n up to isometry isotopic to the identity. Also, let T g,n be the Teichmüller space with n marked points, i.e. the space of all complete hyperbolic metrics on S g with n marked points up to isometry isotopic to the identity. Theorem 1.4. There is a natural bijection betweenT g,n, and T g,n . Overview of the proof In general, we follow the road landmarked by the previous works [3], [9] and [21] contained variational proofs of Alexandrov-type problems. But in the new setting we encounter different obstacles. We highlight the connection (noted also in [21]) with Epstein-Penner decompositions (see [6], [14], [15]). For the definitions we refer to Subsection 2.4. In our paper for every Epstein-Penner decomposition of S g,n we construct a Fuchsian polytope with singularities whose boundary structure coincides with the given decomposition. This is similar to the connection of the Euclidean case with the weighted Delaunay triangulations noted in [3] and may lead to a further comprehension of these objects. We would like to highlight that our proof can be turned to an effective algorithm of finding a realization as a Fuchsian polytope of a given cuspmetric. This is is a big difference comparing with the previous indirect proofs by Schlenker and Fillastre. Our strategy of proof is as follows. A Fuchsian polytope P can be cut into two symmetric halves. Its boundary has a polytopal decomposition into faces. This decomposition can be projected to the convex core of P and provides a decomposition of a half of P into basic geometric objects: semi-ideal rectangular prisms. We go in the opposite direction: we glue several prisms altogether according to a geodesic triangulation of S g,n . We obtain a complex that looks like a Fuchsian polytope, but has conical singularities on the inner edges of gluing. A variational argument shows that these singularities can be removed. In Section 2 we overview and establish several necessary results from elementary hyperbolic geometry. In Section 3 we define our basic objects (called prismatic complexes) and study their properties. In Section 4 we introduce the space of all prismatic complexes. In Section 5 we represent our problem as a variational one and finish the proof of Theorem 1.3. Related work and perspectives It may be of interest to consider the following generalization of our statement. Define a non-symmetric ideal Fuchsian polytope P as the convex hull of n > 0 ideal points belonging to one component of ∂ ∞ F and m > 0 ideal point belonging to the other one. The boundary of P consists of S g,n with a cusp-metric and S g,m equipped with another cusp-metric. One can ask if we take two such metrics, is there a non-symmetric ideal Fuchsian polytope realizing both metrics at its boundary? The answer to this naive question is no. From the statement of Theorem 1.3 we can see that a cusp-metric d on S g,n determines uniquely a metric on the convex core of the Fuchsian manifold F such that (S g,n , d) can be convexly realized in F . If for two cusp metrics the corresponded metrics on the convex cores are different, then these cuspmetric can not be boundary components of the same non-symmetric ideal Fuchsian polytope (and, clearly, Theorem 1.3 implies that otherwise such a polytope exists). But we may consider polytopes in so-called quasifuchsian manifolds. 
A representation ρ of G = π 1 (S g ) in Iso + (H 3 ) is called quasifuchsian if it is discrete, faithful and the limit set at the boundary at infinity of its action is a Jordan curve. A manifold F is quasifuchsian if it is isometric to H 3 /ρ(G). As in the Fuchsian case, F is homeomorphic to S g × R and has the well-defined boundary at infinity. The convex core of F is the image of the convex hull of the limit set under the projection of H 3 onto F , which is 3-dimensional if F is not Fuchsian. A non-symmetric ideal quasifuchsian polytope is the convex hull of n > 0 ideal points belonging to one component of ∂ ∞ F and m > 0 ideal point belonging to the other one. To state an analog of the uniqueness part of Theorem 1.3 we need a way to connect Teichmüller spaces for surfaces with different number of punctures. A marked cusp-metric is a cusp-metric on S g,n together with a marking monomorphism π 1 (S g ) → π 1 (S g,n ). A quasifuchsian manifold F has a canonical identification π 1 (F ) π 1 (S g ). For every quasifuchsian polytope P it induces monomorphisms ι + and ι − of π 1 (S g ) to the fundamental groups of the upper and lower boundary components of P respectively. Conjecture 1.5. Let d 1 and d 2 be two marked cusp-metrics on S g,n and S g,m respectively, n, m > 0. Then, there is a unique non-symmetric ideal quasifuchsian polytope P such that one component of its boundary is isometric to (S g,n , d 1 ), the other one is isometric to (S g,m , d 2 ) and the compositions of marking monomorphisms with the maps induced by these isometries coincide with ι + and ι − . We think that our proof of Theorem 1.3 can be adapted to a proof of this conjecture. It is a perspective direction of a further research. In order to remove singularities we use the so-called discrete Hilbert-Einstein functional. Another perspective direction for further research is determining its signature for triangulations of Euclidean and hyperbolic manifolds. This may lead to new proofs of various geometrization and rigidity results. We refer the reader to [11] for a survey of mentioned ideas. Acknowledgments. The author would like to thank Ivan Izmestiev for numerous useful discussions and his constant attention to this work. 2 Some hyperbolic geometry Hyperbolic and de-Sitter spaces and duality In this section we fix some notation and mention results from basic hyperbolic geometry that will be used below. For several proofs we deal with the hyperboloid model for H 3 . Consider R 1,3 with the scalar product x, y = −x 1 y 1 + x 2 y 2 + x 3 y 3 + x 4 y 4 . By letters with lines above like x we denote points of R 1,3 . Define By R 1,2 we denote the plane {x : x 4 = 0} and by H 2 denote H 3 ∩ R 1,2 . Define the three-dimensional de Sitter space and a half of the cone of light-like vectors There is a natural correspondence between ideal points of H 3 and generatrices of L. Horospheres are intersections of H 3 with affine planes parallel to generatrices of L. Slightly abusing the notation we will use the same letters both for these planes and for the horosphere defined by them. For such a plane L define its polar dual l ∈ L by the equation for all x ∈ L. A hyperbolic plane M in H 3 is the intersection of H 3 with a two-dimensional subspace of R 1,3 such that its normal m is space-like. Again, in our notation we will not distinguish these planes in R 1,3 from the corresponding planes in for all x ∈ M . These two normals naturally correspond to halfspaces defined by the plane M . 
If m ∈ dS 3 is chosen, then denote these halfspaces by Computations in hyperbolic space We will need the following interpretation of scalar products between vectors of R 1,3 in terms of distances (see [12], [16], [22]) where the distance is signed: it is nonnegative if a ∈ M + and negative otherwise. 2. If a ∈ H 3 and l ∈ L, then where the distance is also signed: it is positive if a is outside the horoball bounded by L and negative otherwise. 3. If m ∈ dS 3 and l ∈ L, then where the distance between a plane and a horosphere is the length of the common perpendicular taken with the minus sign if the line intersects the horosphere. The sign of the right hand side depends on at which halfspace with respect to M the center of L lies. 4. If l 1 ∈ L and l 2 ∈ L, then where the distance between two horospheres is the length of the common perpendicular taken with the minus sign if these horospheres intersect. Now we establish some formulas. The proof is a straightforward computation. Since this point we will always suppose that they are equipped with horospheres. Under this agreement, it is always possible to define the distance between two points. For two ideal points the distance is the distance between corresponding horospheres. This distance is signed: we write it with the minus sign if horospheres intersect. For one ideal and one point in H 3 , the distance means the signed distance from the point in H 3 to the corresponded horosphere at ideal point. A proof can be found in [14]. We will need a semi-ideal version of this lemma. It is less known, hence, we provide a full proof. Lemma 2.4. Let ABC be a hyperbolic triangle with ideal vertices A and B equipped with horospheres. By a and b denote the distances from C to the horospheres at B and A respectively, by c denote the distance between these horospheres, by α A denote the length of the part of the horocycle centered at A that is inside this triangle. Then Proof. Consider the hyperboloid model. LetC be an intersection of the ray AC with boundary at infinity and put a horocycle atC such that it passes through C (see Figure 1). Denote the side lengths of this new ideal decorated triangle byã,b = b andc = c. From Lemma 2.3 it follows that Hence, we need to calculateã. By l A , l B , lC denote the polar light-like vectors corresponded to all horocycles; by x C denote the vector of the point C at hyperboloid. Then We have Hence, we obtain that λ = 2. Now calculate We obtain µ = −e −b . We need only to evaluate Fuchsian manifolds and polytopes In this subsection we briefly discuss Fuchsian polytopes and different approaches to them. Consider a surface S g of genus g > 1. Now we formulate precisely some concepts touched in the introduction. is called a Fuchsian representation if it is discrete, faithful and there is a geodesic plane P fixed by ρ(G). The action of such Γ can be extended to the boundary at infinity of H 3 . The factor of this action can be naturally seen as the boundary at infinity of F . It is homeomorphic to two copies of S g and inherits a conformal structure. The plane P projects in F onto a geodesic surface homeomorphic to S g . As it was noted in the introduction, it is called the convex core of F . The manifold F has a natural isometric involution preserving its convex core coming from the symmetry of H 3 with respect to P . Definition 2.7. A set P ⊂ F is called convex if it contains every geodesic between any pair of points of P . The convex hull of a set Q is the inclusionminimal convex set containing Q. Definition 2.8. 
A set P ⊂ F is called a Fuchsian polytope if it is the convex hull of a finite point set in F that is symmetric with respect to the convex core. It is called ideal if all its vertices belong to the boundary at infinity of F . Intrinsically, an ideal Fuchsian polytope is a complete hyperbolic 3-manifold with an isometric involution with respect to a geodesic embedding of S g and with convex piecewise-geodesic boundary isometric to two copies of S g,n equipped with a cusp-metric. This description is full as shown by the following lemma. Lemma 2.9. Let Q be a manifold satisfying the description above. Then there is a Fuchsian manifold F and an ideal Fuchsian polytope P isometric to Q. We will give a proof of this lemma in Subsection 3.1 after introducing the necessary machinery. Our proof can be easily extended to non-ideal case, but we do not need it. Now we are able to speak about Fuchsian polytopes without referring to ambient Fuchsian manifolds. Consider a Fuchsian polytope P in F = H 3 /Γ. A boundary component of P can be lifted to H 3 . This lift is a polytopal surface in H 3 that is invariant under the action of Γ. It is clear that this construction also works backwards. which is the image of a Fuchsian representation of π 1 (S g ). Epstein-Penner decompositions We need to remind the concept of Epstein-Penner ideal polygonal decomposition of a decorated cusped hyperbolic surface S g,n . Fix a decoration of S g,n , i.e. a horosphere at every cusp. Then, the space of all decorations of S g,n can be identified with R n . A point r ∈ R n corresponds to the choice of horospheres at the distances r 1 , . . . , r n from the fixed ones. Consider the hyperboloid model of H 2 . Develop S g,n as H 2 /Γ in H 2 ⊂ R 1,2 , where Γ is a discrete subgroup of Iso + (H 2 ) isomorphic to π 1 (S g ). Take r ∈ R n and the corresponding decoration. By L 1 i , L 2 i , . . . denote the horoballs in the orbit of the horoball at i-th cusp under the action of Γ. As in Section 2 we denote by L k i also the affine plane such that the corresponded horoball is the intersection of H 2 and this plane. Also, recall that l k i denotes the polar vector to the plane L k i . By L denote the union of all vectors l k i . Let C be the convex hull of the set {l j i }. Its boundary ∂C is divided in two parts ∂ l C ∂ t C consisting of lightlike points and timelike points. Below we describe well-known properties of this construction. For proofs we refer to [13]. • The convex hull C is 3-dimensional. • The set C ∩ L is the set of points αL k i for some α 1. • Every time-like ray intersects ∂ t C exactly once. • The boundary ∂ t C is decomposed into countably many Euclidean polygons. This decomposition is Γ-invariant and projects to a Γ-invariant decomposition of H 2 which provides a decomposition of S g,n into finitely many ideal polygons. Definition 2.12. The Epstein-Penner decomposition is the decomposition of decorated S g,n obtained in the above way. This decomposition depends on a choice of decorations. 13. An Epstein-Penner triangulation is a triangulation that refines the Epstein-Penner decomposition. The space R n is subdivided into cells corresponding to Epstein-Penner decompositions. Each n-dimensional cell corresponds to a decomposition that is a triangulation. Epstein-Penner triangulations and cells are well studied, see [14], [15]. Prisms and complexes In this section we are going to introduce our main objects of study: prismatic complexes. They are metric spaces glued from basic building blocks. 
Hence, first we define these blocks and study their properties. Prisms and their properties Definition 3.1. A rectangular prism is the convex hull of a triangle A 1 A 2 A 3 (with possibly ideal vertices) and its orthogonal projection to a plane such that A 1 A 2 A 3 does not intersect this plane. By B 1 , B 2 and B 3 denote the images of A 1 , A 2 and A 3 under the projection. Such a prism has nine edges. We call the edges A 1 A 2 , A 2 A 3 and A 3 A 1 upper edges, the edges B 1 B 2 , B 2 B 3 and B 3 B 1 lower edges and edges A 1 B 1 , A 2 B 2 and A 3 B 3 lateral edges. In the same way, we call the face A 1 A 2 A 3 the upper face, the face B 1 B 2 B 3 the lower face and other faces the lateral faces. The dihedral angles of edges B 1 B 2 , B 2 B 3 and B 3 B 1 are equal π/2. The dihedral angles A 1 A 2 , A 2 A 3 and A 3 A 1 are denoted by φ 3 , φ 1 and φ 2 respectively. The dihedral angles A 1 B 1 , A 2 B 2 and A 3 B 3 are denoted by ω 1 , ω 2 and ω 3 . Note that the points A i may be ideal, but B i may not. An important special case is a singular rectangular prism, when the points B 1 , B 2 and B 3 are collinear. For the sake of brevity, until the end of the article we will use the word prism instead of rectangular prism. Some of the points A i can be ideal. In this case as we agreed in Section 2, we will always suppose that they are equipped with horospheres. Then the lengths of upper edges and lateral edges are defined as in Section 2. Mainly we will deal with such prisms that all A i are ideal. Figure 2) is a prism with an ideal upper face. It is clear that in a semi-ideal prism the lines 2 be a triangle and r 1 , r 2 , r 3 be three real Then there exists at most one prism up to isometry such that its upper face is isometric to A 1 A 2 A 3 and the lengths of the corresponding lateral edges are equal to r i . If A i is ideal, then it is equipped with a horosphere L i . As in Section 2, denote by l i its polar dual. We need to find three points B 1 , B 2 and B 3 such that A 1 A 2 A 3 B 3 B 2 B 1 will be a prism with lateral edges equal to r 1 , r 2 and r 3 . Equivalently, we need to find a 2-plane M such that dist(A i , M ) = r i (the distance from an ideal vertex to hyperplane is naturally the distance from corresponding horosphere). By Lemma 2.1 we can see that the existence of the desired prism is equivalent to the existence of a vector m ∈ dS 3 such that m, l i = e r i for every ideal A i and m, a i = sinh(r i ) otherwise. Fore ideal points we take plus signs because we can look only for such m that Every a i and l i is in R 1,2 . Therefore, it has the last coordinate equal to zero. Hence, the system of equations is a linear system for the first three coordinates. The vectors a i are linearly independent which implies that this system has a unique solution. Now we need to find the last coordinate for m using the equation This is a quadratic equation. It is clear that if it has only one solution, then we obtain a singular prism, and if it has two solutions, then we have two prisms corresponding to two possible choices of orientation (and differing by the symmetry with respect to the plane A lateral face of a prism is a special hyperbolic quadrilateral, which we naturally call a trapezoid. its orthogonal projection to a line such that A 1 A 2 does not intersect this line. By B 1 and B 2 denote the images of A 1 and A 2 under the projection. The notions of lateral, upper and lower edges are similar. We need also the following planar version of Lemma 3.3. 
2 be a segment, r 1 and r 2 be two real numbers. If A i is an ideal point, then we suppose that it is equipped with a horocycle. If A i ∈ H 2 , we assume that r i > 0. Then there exists at most one trapezoid up to isometry such that its upper edge is isometric to A 1 A 2 and the lengths of the corresponding lateral edges are equal to r i . (See Figure 3.) The proof goes along the lines of the proof of Lemma 3.3 and is quite straightforward, hence, we omit it. Now let us establish several formulas for trapezoids. Proof. Direct computation using the cosine law for a de-Sitter triangle. Lemma 3.7. Let A 1 A 2 B 2 B 1 be a semi-ideal trapezoid, l 12 be the length of the upper edge, a 12 be the length of the lower edge and r 1 , r 2 be the lengths of lateral edges.Then cosh(a 12 ) = 1 + 2e l 12 −r 1 −r 2 . Proof. Assume that B 1 and B 2 are outside of horospheres at A 1 and A 2 and these horospheres are disjoint. Consider a sequence of points {A 1(i) } ⊂ A 1 B 1 tending to A 1 and a sequence of points {A 2(i) } ⊂ A 2 B 2 tending to A 2 . Let C 1 and C 2 be the intersection points of A 1 B 1 and A 2 B 2 with horospheres at A 1 and A 2 respectively and D 1(i) and D 2(i) be the intersection points of A 1(i) A 2(i) with these horospheres. Observe that Using it and the fact that cosh(x) sinh(x) → 1 as x grows we can see that our formula follows from Lemma 3.6 as a limiting case. Proof. Direct computation. Prismatic complexes Let S g,n be a surface of genus g with n punctures equipped with a complete hyperbolic cusp-metric d. Consider a decoration of S g,n . Let T be an ideal geodesic triangulation of S g,n . By E(T ) and F (T ) denote its sets of edges and faces respectively. Then d can be fully described in Penner coordinates: the lengths of decorated ideal edges of T . Cusps are denoted by A 1 , . . . , A n . Note that E(T ) may contain loops and multiple edges. It is also possible that is some triangles there are edges glued together. But without loss of generality, when we consider a particular triangle (or a pair of distinctive adjacent triangles), we will denote it as Suppose that to every cusp A i some real weight r i is assigned. Denote the weight vector by r ∈ R n . Let (T, r) be an admissible pair. For each ideal triangle A i A j A h ∈ F (T ) consider a prism from the last definition. Definition 3.11. A prismatic complex K(T, r) is a metric space obtained by identifying all these prisms via isometries of lateral faces: if two triangles of T have a common edge A i A j , then we isometrically identify the faces This definition is correct because of Lemma 3.5. Note that the decorations at ideal vertices are identified with decorations. For the sake of brevity, we will omit the word prismatic. We also write K instead of K(T, r) when it does not bring an ambiguity. Every prismatic complex is a complete cone-manifold with polyhedral boundary. The boundary consists of two components. The union of upper faces forms the upper boundary isometric to S g,n with d. The union of lower faces forms the lower boundary which is isometric to S g equipped with a hyperbolic metric with cone-singularities at points B i . We can consider T as a triangulation of both components. There are well-defined total dihedral angles of edges of triangulations A i A j and B i B j equal to the sum of corresponding dihedral edges in both glued prisms. In the same way we can define the total dihedral angle of every inner edge A i B i as the sum of corresponding dihedral angles of all prisms containing A i B i . 
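Concretely, each such prism is pinned down by the decorated ideal triangle together with the three lateral lengths r_i, following the proof of Lemma 3.3: a linear system determines the first three coordinates of the dual normal m from the relations ⟨m, l_i⟩ = e^{r_i}, and a quadratic equation determines the last coordinate from ⟨m, m⟩ = 1. The numerical sketch below (Python) illustrates this recipe in the all-ideal case; the light-like vectors and lateral lengths are hypothetical sample data, not taken from the paper.

# Sketch of the construction in the proof of Lemma 3.3 (all upper vertices ideal).
# The light-like vectors l_i and the lateral lengths r_i below are sample data.
import numpy as np

G = np.diag([-1.0, 1.0, 1.0])            # Minkowski form on R^{1,2}

# Polar light-like vectors of the three decorated ideal vertices (<l, l> = 0).
l = np.array([[1.0,  1.0, 0.0],
              [1.0, -1.0, 0.0],
              [1.0,  0.0, 1.0]])
r = np.array([0.3, 0.1, 0.5])            # prescribed lateral edge lengths

# Linear system <m, l_i> = exp(r_i) for the first three coordinates of m.
m3 = np.linalg.solve(l @ G, np.exp(r))

# Last coordinate from <m, m> = 1 (m must lie in the de Sitter space dS^3).
m4_sq = 1.0 + m3[0]**2 - m3[1]**2 - m3[2]**2
if m4_sq < 0:
    print("no prism realizes these data")
elif np.isclose(m4_sq, 0.0):
    print("singular prism, normal:", np.append(m3, 0.0))
else:
    m4 = np.sqrt(m4_sq)
    print("two mirror-symmetric prisms, normals:",
          np.append(m3, m4), "and", np.append(m3, -m4))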
Definition 3.12. A prismatic complex K is called convex if for every upper edge A i A j ∈ E(T ) its dihedral angle is at most π. If K = K(T, r), then the pair (T, r) is also called convex. Note that in every prism either the plane containing the upper face intersects the plane containing the lower face, or they are asymptotically parallel, or they are ultraparallel. The following lemma will be important. Lemma 3.13. Let K be a convex prismatic complex. Then for every prism Assume the contrary. Let these two planes intersect and l is the line of intersection. The intersection of M 1 with ∂ ∞ H 3 is a circle. The line l divides it into two arcs. All points A i , A j and A h belong to the same arc and one of them lies between the two others. Suppose that this point is A i . Then we call the edge A j A h "heavy" and two other edges "light" (see Figure 4). Let χ be the dihedral angle between M 1 and M 2 . For every x ∈ M 1 , we have sinh dist(x, M 2 ) = sinh dist(x, l) sin(χ), by the law of sines in a right-angled hyperbolic triangle. It follows that the distances from the light edges to M 2 are both strictly bigger than the distance from the heavy edge. For the dihedral angles of the upper edges we have φ i > π/2 and φ j , φ h < π/2. Indeed, let x ∈ A j A h be the nearest point from this edge to M 2 ; x ∈ M 2 and x ∈ l be the bases of perpendiculars from x to M 2 and l. Then ∠xx x = π/2, ∠x xx < π/2 and φ i = π − ∠x xx > π/2. Next, we can consider the ideal vertex A j . Using that the sum of three dihedral angles at one vertex is equal π we obtain It implies that φ h < π/2. Similarly, φ j < π/2. Edge A j A h can not be glued in T neither with the edge A i A j nor with A i A h because these edges have different distances to the lower face. Therefore, there is another triangle in such a way that it is glued with the former prism over the face A j A h B h B j via an orientation-reversing isometry. Then B g belongs to the plane B i B j B h . The total dihedral angle at A j A h is less or equal than π. Since that, the line l and the triangle B j B h B g lie on opposite sides with respect to the plane A j A h A g and hence the plane A j A h A g also intersects M 2 . Therefore, the light edges and the heavy edge are well-defined for the new prism. Moreover, it is clear that in this prism A j A h is light. Hence, we can see that the distance from the new heavy edge to M 2 is strictly less than the distance from A j A h . Now for this edge we can choose the next prism containing it and continue in the same way. The distances from the heavy edges to M 2 are strictly decreasing. But the number of edges in K is finite. We get a contradiction. Now we need to consider the case when the upper face is asymptotically parallel to the lower face. The proof is very similar. We also have one heavy edge, two light edges and all other details remain the same. Corollary 3.14. Let K be a convex prismatic complex. Then all prisms in K are non-singular. Assume that for some complex K all total dihedral angles at inner edges A i B i equal to 2π. Double this complex and glue two copies together along their lower boundaries. We obtain a complete hyperbolic manifold P with convex ideal polyhedral boundary. Each component of the boundary is isometric to (S g,n , d), there is a canonical embedding of S g in P and an isometric involution permuting components of the boundary and preserving the canonical embedding. We call it the canonical surface of P . 
This is precisely the definition of an ideal Fuchsian polytope in intrinsic terms as discussed in Subsection 2.3. Now we are ready to prove Lemma 2.9 from that subsection. Proof. The manifold P can be decomposed into semi-ideal prisms according to the polyhedral structure of its boundary. The canonical surface S g in P inherits the induced complete hyperbolic metric d on S g and a triangulation T on it. There is a unique up to isometry Fuchsian manifold F such that (S g , d ) is isometric to the convex core of F . The canonical surface of P is isometric to the convex core of F . Consider such an isometry. It induces the triangulation T on the convex core of F . Let B 1 , . . . , B n be its vertices. There is a unique line passing through every point B i orthogonal to the convex core. Let A 1 , . . . , A n be the intersections of these lines with one of the components of the boundary at infinity. Choose a horosphere at A i such that the distance between this horosphere and the convex core is equal to r i -the respective distance in P . Now to finish the proof we need to note that every semi-ideal prism is uniquely defined by its lower boundary and the lengths of lateral edges. We omit the proof of this claim as it follows the proof of Lemma 3.3 line by line. This lemma shows that to prove Theorem 1.3 it is enough to show that there exists a complex K with the total dihedral angle of every inner edge A i B i equal to 2π. The space of convex complexes Denote by K the set of all convex prismatic complexes K up to isometry such that the upper boundary of K is isometric to (S g,n , d). In this section we are going to completely describe it. Every K ∈ K can be represented as K(T, r). Clearly, if K = K(T , r ), K = K(T , r ) and r = r , then complexes K and K are not isometric. This defines a map which we denote by r : K → R n abusing the notation. The plan of this section as follows. In Subsection 4.1 we prove Lemma 4.1. Let (T , r) and (T , r) be two convex pairs. Then the corresponding complexes K and K are isometric. Corollary 4.2. The map r : K → R n is injective. Hence, K can be identified with a subset of R n . In Subsection 4.2 we show that The proof of Lemma 4.1 First, we need to introduce some machinery. For every convex prismatic complex K = (T, r) we can define a function Here S g,n is identified with the upper boundary of K. , let ρ(x) be the distance from x to the plane containing the lower face of the prism A i A j A k B k B j B i after some (and any) embedding in H 3 . Definition 4.4. This function is called the distance function of K. We omit the subscript K where it is redundant. It follows from Lemma 2.2 that for any A i A j A h ∈ F (T ) if we embed the corresponding prism in H 3 , then for some b ∈ R and a point a from the plane A i A j A h the restriction of ρ to this triangle has the form Moreover, let s : [x 0 ; x 1 ] → S g,n be a geodesic segment with an arc-length parametrization such that its image is contained in A i A j A h . Then ρ • s has a form arcsinh(b cosh(x − a)) for some real numbers a and b. Now consider any geodesic segment s : [x 0 ; x 1 ] → S g,b with natural parametrization that is transversal to every edge of T . Let (x 0 = x 0 and x k = x 1 ) be a subdivision induced by intersections with the edges of T . The restriction of ρ • s to [x l ; x l+1 ] is arcsinh(b l cosh(x − a l )). (2) Figure 5: Graphics of distance and piecewise distance functions in the projection to a geodesic. 
The points x l corresponding to strictly convex edges of T are singular points of ρ • s in the sense that ρ • s is not differentiable at these points, but both the left derivative and the right derivative exist. It is clear that convexity of K means that at every singular point x l the left derivative of ρ • s is greater or equal than the right derivative. (x 0 = x 0 and x k+1 = x 1 ) such that the restriction ofρ to [x l ; x l+1 ] is equal to arcsinh(b l cosh(x − a l )); (iii) at every point x l the left derivative ofρ(x) is greater or equal than the right derivative. Proof. We can assume without loss of generality that at [x 0 ; x 1 ] and [x k ; x k+1 ] ρ(x) andρ do not coincide and for every l the pairs (a l ; b l ) and (a l+1 ; b l+1 ) are different. Under these assumptions we will prove a stronger statement: for every x ∈ (x 0 ; x 1 ), ρ(x) >ρ(x). Indeed, just compute its derivative. Claim 2: if the pairs (a 1 ; b 1 ) and (a 2 ; b 2 ) are different, then the functions arcsinh(b 1 cosh(x − a 1 )) and arcsinh(b 2 cosh(x − a 2 )) can not coincide at more than one point. It is a direct implication of Claim 1. Induction over m − l. The base case m = l + 1 follows from Claim 2 and the fact that in the singular point x l+1 the derivative of f l is greater or equal than the derivative of f l+1 . The inductive step is obvious. It implies that the differenceρ(x) − ρ(x) is strictly positive over the interval (x 0 ; x 1 ). Now we can prove Lemma 4.1. Proof. Let A be the point of intersection of an edge e of T with an edge e of T . The edge e is a geodesic in (S g , d), we can consider it in the upper boundary of K and look at the restriction of the distance function ρ of the complex K at e . Since K is convex, A is a singular point for this function and the left derivative is greater or equal than the right derivative. Consider also the restriction of the distance function ρ of the complex K . It has the form (2). From Lemma 4.5 we infer that that ρ (A) ρ (A), where . Similarly, we obtain that ρ (A) ρ (A). Therefore, ρ (A) = ρ (A). Let A be the union of the set of all cusps and of the set of all intersection points of the edges of T with the edges of T . Edges of T ∪ T decompose S g,n into simply-connected geodesic polygons. We subdivide every polygon into geodesic triangles and obtain a triangulation T with the vertex set A such that T refines both T and T . This triangulation induces a subdivision of both K and K into prisms. Two corresponding prisms are isometric because of Lemma 3.3. It follows that K is isometric to K . We would like to note one easy corollary. For a convex pair (T, r) denote by E s (T, r) the union of all strictly convex edges of the corresponding complex K. Corollary 4.6. If (T, r) and (T , r) are two convex pairs then E s (T, r) = E s (T , r). Hence, we can denote it by E s (r). It means that two different triangulations of the same convex complex may be different only in "flat" edges. Let (T, r) be a convex pair and K be the corresponding convex prismatic complex. Definition 4.7. For a triangle ∆ ∈ F (T ), the face of K containing ∆ is the union of all triangles ∆ such that there exists a path from an interior point of ∆ to an interior point of ∆ that intersects only edges of T with dihedral angles equal to π. Clearly, for two triangles the relation "to be in one face" is an equivalence relation. Hence, we obtain a decomposition of the upper boundary of K into faces. A face Π may not be simply-connected. 
Below we prove that in this case some inner edges of Π are strictly convex and if we delete the union of all such edges, then we obtain a simply-connected set (see Figure 6). Proof. Let Π be an open face. We prove that if Π is not simply-connected, then there is a closed geodesic in Π that does not intersect strictly convex edges. Consider a simple homotopically nontrivial closed curve ψ in Π that does not intersect strictly convex edges and is transversal to every edge. Lift ψ to H 2 and develop all triangles of T that intersect ψ. We obtain an ideal polygon P . Some edges of P are glued. The triangulation T is lifted to a triangulation of P . All inner edges P are lifts of flat edges of Π (the dihedral angles are equal to π). Let τ : P → Π be a projection. Suppose that AB and CD are two glued ideal edges: τ (AB) = τ (CD), τ (A) = τ (C) and τ (B) = D (note that A may coincide with C and B may coincide with D). We remind that ideal points are decorated and the decoration defines the gluing map between AB and CD. For a point X ∈ AB there is Y ∈ CD such that τ (Y ) = τ (X). A hyperbolic segment XY corresponds to a geodesic loop in Π. It can have a singular point at τ (X) = τ (Y ). Clearly, τ (XY ) is a closed geodesic if and only if ∠BXY + ∠XY D = π. It is clear that as X tends to B, the point Y tends to D and this sum tends to 2π. Similarly, as X tends to C, this sum tends to 0. Therefore, for some X this sum will be equal to π. In this case τ (XY ) is a closed geodesic ψ ⊂ Π. It intersects only edges of T that were lifted to inner edges of P . Therefore, it does not intersect any strictly convex edges. Consider a distance function ρ K . Its restriction to ψ must be periodic, because ψ is closed. On the other hand ψ intersects no strictly convex edges. Therefore, the restriction of ρ to ψ has a form (2) which is not periodic. We obtain a contradiction. More precisely, we have the following. Corollary 4.11. If r ∈ R n is such that at least one convex complex (T, r) exists, then r determines a decomposition of S g,n into faces. Every such T can be obtained as a refinement of this decomposition. For a triangulation T denote by K(T ) ⊂ R n the set of all r ∈ R n such that the pair (T, r) is convex. This defines a subdivision of K into cells corresponding to different triangulations. An inner points of a cell K(T ) to r such that the decomposition in Corollary 4.11 is the triangulation T itself. Boundary points of K(T ) have the property that there are ideal polygons in this decomposition that are not triangles. Proof of Lemma 4.3 We prove the following Clearly, this lemma implies Lemma 4.3. Moreover, the decomposition described in Corollary 4.11 is exactly the Epstein-Penner decomposition for r and the subdivision K = K(T ) is the Epstein-Penner subdivision of R n . Proof. Let r ∈ R n and let T be one of its Epstein-Penner triangulations. Further, consider S g,n as H 2 /Γ and letT be the lift of T to a triangulation of H 2 and∆ = A i A j A h ∈ F (T ) be a triangle of T . Let L i , L j and L h be the By B i , B j and B h denote the tangent points of M with L i , L j and L h respectively. We see that the prism A i A j A h B h B j B i is a semi-ideal prism with lateral edges r i , r j and r h . It follows that the pair (T, r) is possible. From this construction we obtain the prismatic complex K. Now we should check the convexity. Take two adjacent triangles ∆ = A i A j A h , ∆ = A j A h A g ∈ F (T ) and corresponding semi-ideal prisms. Geometrically, it is clear that φ i + φ g π/2. 
Indeed, we need to bend this two prisms around the edge A j A h . Lemma 4.13 implies that we bend in the right direction and will obtain angle less or equal than π in the end. Below we give more rigorous analytical proof of this statement. Clearly, l g ∈ L M (∆ 1 ) if and only if the plane M (∆ 1 ) coincides with the plane M (∆ 2 ) which is equivalent to the condition φ i + φ g = π/2 (edge A j A h is "flat"). Also, φ i + φ g < π/2 is equivalent to the condition that for some (and hence for every) geodesic ψ intersecting A j A h transversely at a point X, the left derivative of ρ K (X)| ψ is strictly greater than the right. Assume that it l g ∈ L M (∆ 1 ) . Take ψ := A i A g and parametrize it by length over R. Let X = A j A h ∩ A i A g and x ∈ R be its coordinate. The distance from X to M (∆ 1 ) is equal to the distance from X to M (∆ 2 ) (see Figure 7). The point x is a kink point of ρ K | ψ , hence (a 1 , b 1 ) = (a 2 , b 2 ) and by Claim 2 of Lemma 4.5 the sign of f 1 (x )−f 2 (x ) is constant over the halfrays (−∞, x) and (x, +∞). Consider x tending to +∞. Take a sphere S(x ) centered at the corresponded point X ∈ XA g tangent to the plane B j B h B g . The tangent point tends to B g and the sphere tends to the horosphere centered at A g . This horosphere does not intersect the plane B i B j B h hence for some sufficiently large x , S(x ) does not intersect B i B j B h and in turn it implies that the distance from X to B i B j B h is greater than the distance to B j B h B g . It implies that f 1 (x ) − f 2 (x ) > 0 over (x; +∞) and the left derivative of ρ K (x)| ψ is greater than the right. Therefore, φ i + φ g < π/2. We proved the "if" part. If T is an Epstein-Penner triangulation for r, K is the corresponding complex and it can be represented as (T , r), then according to Corollary 4.6 can be different only in flat edges. By Lemma 4.9, faces of K are simply-connected ideal polygons, hence, T and T can be connected with a sequence of flips of flat edges. If T k and T k+1 are two consequent triangulations in this sequence and T k is an Epstein-Penner triangulation, then T k+1 also is another Epstein-Penner triangulation. Indeed, for T k we consider the construction as in proof of the "if" part. Let ∆ 1 = A i A j A h and ∆ 2 = A j A h A g are the triangles, where the flip were made. It was made over a flat edge, therefore, four points B i , B j , B h and B g are in the same plane. Then, M (∆ 1 ) = M (∆ 2 ), which means that T k+1 is just another triangulation refining the Epstein-Penner decomposition. The variational approach In this section we show that there is r ∈ R n such that the total dihedral angles around all inner edges of the complex corresponding to r are equal 2π. Such a point is a critical point of some functional over R n that we will introduce. Figure 7: The section orthogonal to the geodesic A i A g and its translation in terms of the graphic of distance function. The discrete Hilbert-Enstein functional For a complex K = (T, r) defineω i to be the total dihedral angle around the i-th edge and κ i = 2π −ω i . For e ∈ E(T ) letφ e be the total dihedral angle at e and θ e = π −φ e . Introduce the discrete Hilbert-Einstein functional over the space of all complexes Let us show that S(r) is well defined. Indeed, if M can be represented as (T, r) and (T , r), then T and T can be different only in flat edges, so for such edges θ e = 0. Lemma 5.1. For every r ∈ R n , S(r) is twice differentiable and Proof. 
Assume that r is an inner point of K(T ) for some triangulation T and K is the corresponding complex. Then T is precisely the face-decomposition of the upper boundary of K. The same holds for every r that is sufficiently close to r. Hence, combinatorics of complexes does not change in some neighborhood of r and every total angle can be written as the sum of dihedral angles in the same prisms. Clearly, every dihedral angle in every prism is differentiable. Moreover, by Schläffli's differential formula for a prism Summing these equalities over all prisms we obtain (4). Since dihedral angles are differentiable we obtain that S is twice differentiable at r. Now consider the case when r belongs to the boundary of some K(T ) (see the end of Subsection 4.1 for the definition of K(T )). First, we show that this boundary is piecewise-analytic. Remind that K (T ) is the set of all r for which the pair (T, r ) is admissible and r is an inner point of K (T ) (as K does not contain singular prisms). Then for every e ∈ E(T ) consider the total dihedral angle of a complex (T, r ) as a function of r . It is analytic over the interior of K (T ). Hence, the conditionφ e = π is analytic in a neighborhood of r. The boundary of K(T ) consists of different pieces corresponded to different flat edges of T and is piecewise-analytic. Consider a coordinate vector e i . As every boundary is piecewise-analytic, we have r +λe i is in the interior of some K(T ) for small enough λ. Therefore, we can compute the directional derivative of S(r) in the direction e i using the formula (4). For every coordinate direction they are continuous, hence S is differentiable. Below we compute the derivatives of κ i and show that they are also continuous, which finishes the proof that S is twice differentiable in this case. Lemma 5.1 implies that if r is a critical point of S, then every inner dihedral angle is equal to 2π. In order to find such a point we should consider the second partial derivatives of S. We saw that it is sufficient to calculate them for a fixed triangulation T . Lemma 5.2. For every 1 i n (i) X ii < 0, (ii) for i = j, X ij > 0, (iii) for every 1 i n, (iv) the second derivatives are continuous at every point r ∈ R n . In particular, it implies that X ij = X ji . Note that a matrix satisfying the properties (i)-(iii) is a particular case of so-called diagonally dominated matrices. Corollary 5.3. The function S is strictly concave over R n . Proof. We prove that the Hessian X of S is negatively definite over R n . Indeed, The intersection of the trihedral angle at the vertex A and the horosphere centered at it is a Euclidean triangle with side lengths equal to α 12 , α 13 and λ; corresponding angles are φ 12 , φ 13 and ω 1 . Then by the cosine theorem we have cos(ω 1 ) = α 2 12 + α 2 13 − λ 2 2α 12 α 13 . We calculate the derivatives of ω 1 Calculate the derivatives of α 12 from Lemma 3.8: If the ideal face is fixed then this prism is uniquely determined by lengths r 1 , r 2 and r 3 . Consider a deformation of this prism with preserved upper face. Then (−α 2 12 + e −2r 1 ), Now consider a complex K = (T, r). Consider the set E or (T ) of oriented edges of T . Every edge e ∈ E(T ) gives rise to two oriented edges in E or (T ). By E orp i (T ) ⊂ E or (T ) denote the set of oriented edges starting at A i , but ending not in A i . By E orl i (T ) ⊂ E or (T ) denote the set of oriented loops from A i to A i (every non-oriented loop is counted twice). 
By E or i denote the union E orp i (T ) ∪ E orl i (T ) For an oriented edge e ∈ E or i (T ) denote by α e the length of the arc of horosphere at A i between A i B i and e. To calculate ∂ω i ∂r i we considerω i as the sum of angles in all prisms incident to A i and take their derivatives. If there are no loops among the upper edges of a prism, then this prism makes a contribution of the form (5). If there are some loops, we should also add contributions of the form (6). Combining the summands containing the terms α e for the same e we get where φ e+ and φ e− are the dihedral angles at e in two prisms containing e. For every e ∈ E(T ) we have where e is e forgetting orientation. Hence (cot φ e+ + cot φ e− ) 0 and ∂ω i ∂r i = −X ii 0. Also, equality here means that the total dihedral angle of every edge starting at A i is equal to π. But in this case we obtain a non simplyconnected open face of K, which is impossible by Lemma 4.9. Similarly, for i = j denote by E orp ij (T ) ⊂ E orp i the set of all oriented edges starting at A i and ending at A j . Then, From this we obtain for every i which is greater than zero for similar reasons. It finishes the proof of Lemma 5.2. We know that S(r) is strictly concave over R n . Therefore, it has at most one maximum point. We want to prove that such a point exist. To do it we need to study what happens with complexes when the absolute values some coordinates are large. The plan is as follows. First, we study the case when all coordinates are sufficiently negative. Second, we deal with the case when there is at least one sufficiently positive coordinate. Then we combine these results and get the desired conclusion. Lemma 5.4. For every ε > 0 there exists C > 0 such that if in K = (T, r) we have r i < −C for some i, thenω i < ε. The behavior of S near infinity Proof. Fix ε > 0. Remind that for e ∈ E or i (T ) ending at A j (not necessarily different from A i ) Lemma 3.8 gives the following expression of α e α 2 e = e r j −r i −le + e −2r i . Consider two consecutive edges e 1 and e 2 ∈ E or i (T ). Together with the line A i B i they cut a triangle at the horosphere of A i with the side length α e 1 , α e 2 and λ. If r i < −C, then both the lengths α e 1 and α e 2 are at least e C and λ is bounded from above by the total length of the horocycle at A i on S g,n . The angle ω between α e 1 and α e 2 in this triangle is sufficiently small. We can choose large enough C > 0 such that if r i < −C, then the angle at A i B i in every such triangle, which is the dihedral angle of A i B i in the prism containing e 1 and e 2 , is less than ε/(12(n + g − 1)). Note that the number of triangles incident to one cusp is bounded from above by the total number of triangles which can be calculated from the Euler characteristic and is equal to 4(n + g − 1). Therefore, the total anglẽ ω i < ε. Lemma 5.5. For every ε > 0 there exists C > 0 such that if in K = (T, r) for some i and every j we have r i + r j C, then at every point x ∈ S g,n the value of distance function ρ K (x) ε. For any t ∈ R consider the following two sets (remind that we measure distances from horospheres with signs): Note D 1 (t) is a horoball centered at the i-th cusp and containing B i for t 0. D 2 (t) is the union of horoballs centered at other cusps and is contained inB. If x ∈ D 1 (t) then ρ K (x) r i − t. Indeed, we can connect x with point y ∈ G i by a geodesic such that the length of this geodesic is at most t. 
ρ K (y) r i and ρ K (x) cannot differ from ρ K (y) more than by the length of this geodesic. If x ∈ D 2 (t) then ρ K (x) r m + t because for some j, x belongs to the horoball at the j-th cusp which is at the distance t from our fixed cusp. Take t 1 = r 1 − ε and t 2 = ε − r m . We have that if x ∈ D(t 1 ) ∪ D(t 2 ), then ρ K (x) ε. Consider H = S g,n \ 1 j n B j . It is compact. Define It is clear that for every x ∈B, then dist(x, G i ) dist(x,G) + p. Therefore, if t 1 t 2 + p then S g,n = D(t 1 ) ∪ D(t 2 ). The inequality t 1 t 2 + p is equivalent to the inequality r i + r m 2ε + p. Take C = 2ε + p. Hence, the condition r i + r m C implies that ρ K (x) ε for every x ∈ S g,n as desired. Next two lemmas are straightforward. Lemma 5.6. For every ε > 0 there exists δ > 0 such that if in a hyperbolic triangle every edge length is less than δ, then its sum of angles is bigger than π − ε. From Lemma 3.9 we see that Lemma 5.7. For every ε > 0 there exists C > 0 such that if the distance ρ e from an oriented edge e to the lower boundary is greater than C, then α 2 e < ε. Corollary 5.8. For every ε > 0 there exists C > 0 such that if in K = (T, r) for some i and every j we have r i + r j > C, then 1 i n ω i 4π(n + g − 1) − ε. Now we are able to prove that S attains its maximal point at R n . Lemma 5.9. Consider a cube T in R n : T = {r ∈ R n : max(|r i |) q}. Then for sufficiently large q, the maximum of S(r) over T is attained at an interior point of T . Proof. Indeed, consider q > C 1 + C 2 , where C 1 is taken from Lemma 5.4 for ε = 2π and C 2 is taken from Corollary 5.8 for some small enough ε = ε 0 . The cube T is convex and compact, S is concave, therefore S reaches its maximal value over T at some point r 0 ∈ T . Assume that r 0 ∈ bdT . Then there are two possibilities: either there is i such that r 0 i < −C 1 < 0 or for every i we have r 0 i −C 1 . In the first case, by Lemma 5.4, ω i < 2π. Therefore, κ i = ∂S ∂r i | r=r 0 > 0. Let v i be the i-th coordinate vector. We can see that for small enough µ > 0, S(r 0 + µv i ) > S(r 0 ) and r 0 + µv i ∈ T , which is a contradiction. In the second case, consider i such that |r 0 i | = q. Then r 0 i = q > 0 (because otherwise the first case holds) and for every j we have r 0 i + r 0 j > C 2 . Therefore, by Corollary 5.8 we have 1 i n ω i 4π(n + g − 1) − ε 0 . Hence, for some i ∈ I and small enough ε 0 we obtain ω i > 2π and so κ i < 0. Therefore, for small enough µ > 0, S(r 0 − µv i ) > S(r 0 ) and r 0 − µv i ∈ T , which is a contradiction.
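To summarize the variational argument of this section in a single display (a restatement of Lemmas 5.1-5.9 in the notation above, not a new claim):
$$\frac{\partial S}{\partial r_i}=\kappa_i=2\pi-\tilde\omega_i,\qquad \operatorname{Hess} S<0 \ \text{on } \mathbb{R}^n,$$
so the strictly concave functional $S$ attains its maximum at a unique interior point $r^*$, at which $\kappa_i(r^*)=0$ for every $i$. All total dihedral angles at the inner edges $A_iB_i$ of the complex $K(r^*)$ are then equal to $2\pi$, and doubling $K(r^*)$ across its lower boundary yields the ideal Fuchsian polytope realizing $(S_{g,n},d)$, completing the proof of Theorem 1.3.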
Modeling Chemotherapeutic Neurotoxicity with Human Induced Pluripotent Stem Cell-Derived Neuronal Cells There are no effective agents to prevent or treat chemotherapy-induced peripheral neuropathy (CIPN), the most common non-hematologic toxicity of chemotherapy. Therefore, we sought to evaluate the utility of human neuron-like cells derived from induced pluripotent stem cells (iPSCs) as a means to study CIPN. We used high content imaging measurements of neurite outgrowth phenotypes to compare the changes that occur to iPSC-derived neuronal cells among drugs and among individuals in response to several classes of chemotherapeutics. Upon treatment of these neuronal cells with the neurotoxic drug paclitaxel, vincristine or cisplatin, we identified significant differences in five morphological phenotypes among drugs, including total outgrowth, mean/median/maximum process length, and mean outgrowth intensity (P < 0.05). The differences in damage among drugs reflect differences in their mechanisms of action and clinical CIPN manifestations. We show the potential of the model for gene perturbation studies by demonstrating decreased expression of TUBB2A results in significantly increased sensitivity of neurons to paclitaxel (0.23 ± 0.06 decrease in total neurite outgrowth, P = 0.011). The variance in several neurite outgrowth and apoptotic phenotypes upon treatment with one of the neurotoxic drugs is significantly greater between than within neurons derived from four different individuals (P < 0.05), demonstrating the potential of iPSC-derived neurons as a genetically diverse model for CIPN. The human neuron model will allow both for mechanistic studies of specific genes and genetic variants discovered in clinical studies and for screening of new drugs to prevent or treat CIPN. Introduction The number of cancer survivors in the United States has risen to an estimated 12 million in 2012 resulting in a heightened awareness of long-term toxicities and the impact of treatment on quality of life [1]. CIPN is one of the most common and potentially permanent side effects for many anti-cancer agents and its incidence has been reported to be as high as 20-40% among all cancer patients undergoing chemotherapy [2]. General symptoms start in the fingers and toes and spread progressively up the extremities as CIPN worsens and include numbness, tingling, burning, loss of tendon reflexes and vibration sensation, and spontaneous or evoked pain [3]. There is substantial inter-patient and drug-dependent variability in time to symptom onset, time to peak symptoms, severity of peak symptoms, and reversibility [4][5][6][7]. Management is complicated by the lack of reliable means to identify at-risk patients. If patients at high risk could be identified, alternative chemotherapy regimens with similar efficacy could be considered. In efforts to identify genetic variants associated with chemotherapeutic toxicities including CIPN, researchers have performed genome-wide association studies (GWAS) in clinical trials [8][9][10]. The challenges of clinical GWAS, including accurately phenotyping large patient cohorts receiving the same drug regimen and obtaining replication cohorts, have led to the development of cell based models as a complementary method to identify variants and functionally validate findings resulting from the clinical studies [11][12][13][14]. 
The extensively genotyped International HapMap lymphoblastoid cell line (LCL) model has been useful for this purpose and significant overlap between genetic variants associated with cellular sensitivity to paclitaxel and paclitaxel-induced clinical neuropathy has been demonstrated [15]. Follow up studies have utilized either LCLs or Neuroscreen-1 (rat pheochromocytoma) cells to functionally validate the involvement of GWAS findings in response to chemotherapeutics [15,16]. Neither cellular model represents genetically diverse human peripheral neurons, the tissue of CIPN toxicity. In addition to clinical studies and cell line models, several rodent models have been developed to elucidate the mechanisms of CIPN and identify potential therapies, including those that measure pathological, electrophysiological, and behavioral outcomes that mimic CIPN in patients [17][18][19][20]. In particular, studies in cultured rat dorsal root ganglion (DRG) neurons have provided insight into underlying mechanisms of CIPN [21][22][23][24][25]. However, clinical trials that relied on preclinical animal data have not resulted in consistent benefits of candidate CIPN treatments [17,18]. Although pain reduction was observed in a recent trial of duloxetine in patients with CIPN [2], there are currently no FDA approved treatments for CIPN [3,4,26,27]. Due to the rapid advances in stem cell technology, the ability to differentiate human neurons (and other tissues) from iPSCs provides an opportunity to create panels of genetically diverse human neurons. Large quantities of neurons from one iPSC line (iCell Neurons) are commercially available for preliminary assay development, drug screens, siRNA screens or functional studies of candidate genes. Upon treatment of iCell Neurons with increasing concentrations of representative neurotoxic agents (paclitaxel, vincristine or cisplatin), we identified reproducible decreases in neurite outgrowth phenotypes. As a proof of concept, we show that decreased expression of the paclitaxel target TUBB2A by siRNA transfection causes decreased neurite outgrowth after paclitaxel treatment, as expected based on a previous patient study [28]. We show that the variance in neurite outgrowth phenotypes is greater between individuals than the experimental variance within individuals, demonstrating that larger genetic association studies are possible with iPSC-derived neurons. iCell neuron culture Neurons derived from human induced pluripotent stem cells (iCell Neurons and MyCell Neurons reprogrammed from LCLs) were purchased from Cellular Dynamics International (Madison, WI, USA). Neurons were maintained according to the manufacturers protocol. Depending on the experiment, 1.33 x 10 4 cells/well or 4 x 10 4 cells/well were seeded using either a single coating or double coating plating method. Drug treatment only experiments used the single coating method where 96-well black, clear bottom Greiner Bio-One plates (Monroe, NC, USA) were pre-treated with 0.01% poly-L-ornithine and coated with 3.3 μg/ml laminin (Sigma-Aldrich; St.Louis, MO, USA) prior to seeding. siRNA experiments used the double coating method where cells were mixed with 3.3 μg/ml laminin prior to seeding on poly-D-lysine coated 96-well Greiner Bio-One plates at a density of 1.33 x 10 4 cells/well. For experiments presented in the Results, cells were either treated with drug or transfected with siRNA 4 h after plating. 
Some experiments presented in the Supporting Information evaluated treatment of drugs at 1, 2, 3, 5, or 11 d following plating. Drug preparation Paclitaxel (Sigma-Aldrich) was prepared in the dark by dissolving powder in 100% DMSO and filtered to obtain a stock solution of 58.4 mM. Stock drug was serially diluted in media for final dosing concentrations ranging from 0.001 μM to 100 μM, increasing by factors of ten. Control wells were treated with 0.17% final concentration of DMSO to match drug treatments. Vincristine (Development Therapeutics Program at NCI or Sigma-Aldrich) was prepared on ice in the dark by dissolving powder in cold PBS and filtered to obtain a stock solution of 100 mM. Stock drug was individually diluted in media and added to the cells in a range of 0.001 μM to 100 μM. Cisplatin (Sigma-Aldrich) was prepared in the dark by dissolving powder in 100% DMSO and filtered to obtain a stock solution of 20 mM. Stock drug was serially diluted in media for final dosing concentrations ranging from 0.001 μM to 100 μM. Control wells were treated with 0.2% or 0.5% final concentration of DMSO to match drug treatment. Hydroxyurea (Sigma-Aldrich) was prepared by dissolving powder in PBS and filtered to obtain a stock solution of 1 M. Stock drug was serially diluted in media for final dosing concentrations ranging from 0.001 μM to 100 μM. Cell viability and apoptosis assays Cell viability 72 h after drug treatment was assessed using the CellTiter-Glo assay (Promega, Madison, WI, USA), which measures ATP levels. Apoptosis induction 48 h after drug treatment was assessed using the Caspase-Glo 3/7 assay (Promega). Four replicates of the viability assay and five replicates of the apoptosis assay were performed. At least two wells per drug dose were measured in each replicate. High content imaging and neurite outgrowth analysis After drug treatments of 48 or 72 h, neurons were stained for 15 minutes at 37°C with 1 μg/ml Hoechst 33342 (Sigma-Aldrich), 2 μg/ml Calcein AM and 1 μM Ethidium Homodimer-2 (Molecular Probes, Life Technologies Inc., Carlsbad, CA, USA) then washed twice using dPBS without calcium or magnesium (LifeTechnologies). Imaging was performed at 10x magnification using an ImageXpress Micro (Molecular Devices, LLC, Sunnyvale, CA, USA) at the University of Chicago Cellular Screening Center. Individual cell measurements of mean/median/ maximum process length, total neurite outgrowth (sum of the length of all processes), number of processes, number of branches, cell body area, mean outgrowth intensity, and straightness were calculated using the MetaXpress software Neurite Outgrowth Application Module (Molecular Devices, LLC). At least 500 cells per dose (except 100 μM) were quantified and three replicates of each drug treatment were performed. The mean number of cells quantified per well (3-4 wells/dose) is shown for each experiment in S1 Table. Cell level outgrowth data is available for every experiment presented in the Results in S2-S6 Tables. For each replicate, the mean value relative to control at each dose was used to calculate the area under the concentration curve (AUC) for each of the nine phenotypes. For analyses comparing the four drugs, AUCs were calculated from 0.001-10 μM or from 0.001-100 μM and for analyses comparing the four LCL-iPSC-derived neuronal cells, AUCs were calculated from 0.001-100 μM. 
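As a quick arithmetic check of the vehicle matching described in the drug-preparation paragraph above (a sketch of the calculation, not part of the original protocol), the DMSO fraction carried over at the highest drug dose can be computed from the stock concentrations; the numbers reproduce the 0.17% and 0.5% vehicle controls quoted there.

```python
# Sanity check (not from the paper) of the vehicle-matching arithmetic:
# the DMSO carried over at the highest drug dose should equal the DMSO
# percentage used in the matched control wells.
stock_mM = {"paclitaxel": 58.4, "cisplatin": 20.0}   # stock concentrations in 100% DMSO
top_dose_uM = 100.0                                   # highest final dose tested

for drug, stock in stock_mM.items():
    # fraction of the final volume coming from the DMSO stock at the top dose
    dmso_fraction = (top_dose_uM / 1000.0) / stock    # convert uM -> mM before dividing
    print(f"{drug}: {100 * dmso_fraction:.2f}% DMSO at {top_dose_uM} uM")

# paclitaxel: 0.17% DMSO -> matches the 0.17% vehicle control stated above
# cisplatin:  0.50% DMSO -> matches the 0.5% vehicle control stated above
```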
The differences in AUCs among the four drugs or among the four individuals for each phenotype were tested for significance by one-way ANOVA using the oneway.test function in R version 3.0.2, not assuming equal variances. For each replicate, we fit a linear model (outgrowth = log 10 (dose)) to determine the effect sizes of the dose response for relative total outgrowth using the lm function in R. Time-lapse video collection iCell Neurons were plated at a density of 1.33 x 10 5 cells/ml into MatTex 35mm glass-bottom dishes (MatTek, Ashland, MA, USA) with 14 mm insert coated with 3.3 μg/ml laminin. Imaging was performed at the University of Chicago Integrated Light Microscopy Facility. Images were captured with an Olympus VivaView incubator-based, epifluorescence microscope (Olympus Corporation of the Americas, Center Valley, PA, USA) run by MetaMorph software (Molecular Devices, LLC) using the 40x objective at 10 min intervals for 28 h. The cells were allowed to grow for 4 h before 10 μM of each drug or 0.17% DMSO vehicle control was added. Imaging began 1 h prior to drug treatment. The image stacks were prepared into videos using ImageJ software [29]. siRNA iCell Neurons underwent siRNA transfection with Dharmacon Accell technology (Thermo Fisher Scientific Inc., Waltham, MA, USA). Accell human TUBB2A SMARTpool siRNA was diluted to a stock concentration of 100 μM in 1X siRNA Buffer (ThermoScientific) and then diluted to 1 μM in Accell siRNA delivery media (ThermoScientific). A non-targeting siRNA pool from ThermoScientific was used as a negative control. Four hours after plating, the iCell neuron maintenance media was removed and the Accell delivery media was added for 24 h. At the 24 h timepoint, cells were then treated with 0.1 μM paclitaxel or DMSO vehicle control as described above. High content imaging was performed 24 h after paclitaxel treatment. The entire experiment was replicated three times. Quantitative reverse transcription polymerase chain reaction (qRT-PCR) was performed to measure the level of expression of TUBB2A (Hs00742533_s1) 24 and 48 h after transfection. Two wells of 1.33 x 10 4 cells/well were lysed and prepared for qRT-PCR using the Cells to CT kit from Life Technologies. A comparative delta delta CT method was used with human beta-2-microglobulin (NM_004048.2) as the endogenous control to determine the percent of knockdown at each given timepoint compared to non-targeting control (NTC). Each sample was reverse transcribed twice and run in triplicate on the Life Technology Viia7 PCR machine. The differences in mean relative total outgrowth (0.1 μM paclitaxel:vehicle) between TUBB2A siRNA and NTC were tested for significance by Welch's t-test. LCL reprogramming and neuronal differentiation The two most paclitaxel-sensitive (GM12814, GM12892) and two most paclitaxel-resistant (GM07022, GM12752) unrelated LCLs from the CEU HapMap population (HAPMAPPT01, Northern and Western European ancestry from Utah) were chosen for reprogramming into iPSC cells. The lines were chosen by ranking all CEU LCLs with paclitaxel-induced cytotoxicity (viability) and caspase-3/7 measurements [30] by sensitivity and choosing the two with the highest mean rank and the two with the lowest mean rank (S7 Table). These four LCLs were purchased and sent directly from the Coriell Cell Repository (Camden, New Jersey, USA) to Cellular Dynamics International (CDI) for reprogramming and differentiation into neurons via their MyCell Neurons product. 
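The comparative delta-delta-Ct calculation used above to quantify TUBB2A knockdown can be sketched as follows. This is a generic illustration of the method only: the function name and the Ct values are hypothetical and are not taken from the study.

```python
def percent_knockdown(ct_target_si, ct_ref_si, ct_target_ntc, ct_ref_ntc):
    """Comparative delta-delta-Ct: expression of the siRNA-treated sample relative
    to the non-targeting control (NTC), normalised to the endogenous reference
    gene (here beta-2-microglobulin)."""
    ddct = (ct_target_si - ct_ref_si) - (ct_target_ntc - ct_ref_ntc)
    relative_expression = 2.0 ** (-ddct)
    return 100.0 * (1.0 - relative_expression)

# Hypothetical Ct values (not measured data): a delta-delta-Ct of one cycle
# corresponds to ~50% remaining expression, i.e. ~50% knockdown.
print(percent_knockdown(ct_target_si=26.0, ct_ref_si=20.0,
                        ct_target_ntc=25.0, ct_ref_ntc=20.0))  # -> 50.0
```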
CDI generated EBV-free iPSCs from the LCLs using their feeder-free episomal method [31] and subsequently differentiated the iPSCs into neuronal cells for use in the chemotherapy treatment experiments. The neuronal cells are differentiated along a developmental pathway to produce cortical neurons (verified by DACH1, FOXG1 and OTX2 gene expression). Their purity was assessed by an intracellular flow cytometry assay for Tuj1 (βIII-tubulin) positive and nestin negative cells (>97% for all lines, see S7 Fig.). The neuronal cells from CDI do not contain glia or any other proliferating cell type. The differentiated neurons were named N12814, N12892, N07022 and N12752, according to their LCL identification number. We verified the identity of the neurons, the LCLs from Coriell used to generate iPSCs, and the four LCLs stored in our laboratory by genotyping the 47 informative SNPs included in the Sequenom iPLEX Sample ID Plus Panel (Sequenom, Inc., San Diego, CA). DNA was extracted from each sample using the DNeasy Blood and Tissue Kit (Qiagen, Valencia, CA). Genotyping was performed following the iPLEX Pro application guide and the iPLEX Pro reaction products were dispensed onto a 384-sample SpectroCHIP and run on a Sequenom MassAR Differences in neurite outgrowth across chemotherapeutics Several preliminary studies were done to optimize plating density, time of outgrowth prior to drug treatment, drug treatment length and drug concentrations in order to achieve reproducible dose-response curves for neurite outgrowth phenotypes for 4 distinct chemotherapeutic drugs: vincristine, paclitaxel, cisplatin and hydroxyurea (S1-S3 Figs.). To assess morphological changes of 4 different chemotherapeutics, we treated human neurons derived from iPSCs (iCell Neurons) with increasing concentrations of drug for 72 h and measured neurite outgrowth phenotypes by high content imaging. For the three drugs that cause peripheral neuropathy (paclitaxel, vincristine, cisplatin), we observed variable decreases in the neurite process phenotypes upon increasing drug concentration (Fig. 1). As illustrated in Fig. 2A, across three replicate experiments, vincristine caused the largest decrease in relative total outgrowth (β = -0.14 ± 0.021), followed by (paclitaxel β = -0.11 ± 0.0053), then cisplatin (β = -0.069 ± 0.013). When iCell Neurons were treated with a fourth chemotherapeutic drug, hydroxyurea, a negative control not known to cause neuropathy, there was no decrease in relative total outgrowth (β = 0.0042 ± 0.0023). Among the neurotoxic drugs, in addition to these differences in linear regression effect sizes, we observed that the cisplatin dose-response was the least linear. Neurite outgrowth phenotypes remained constant until the 1 μM cisplatin dose and then dropped sharply (Fig. 2). Relative mean outgrowth intensity increased upon iCell Neuron treatment with paclitaxel doses from 0.01-10 μM (Fig. 2I), which is represented by thicker, brighter neurites in paclitaxel-treated cells compared to those treated with the other drugs (also see Fig. 1). This thickening of neurites was visible in time-lapse photography of the iCell Neurons upon treatment with 10 μM paclitaxel (S1 Movie). While the addition of cisplatin or paclitaxel appeared to slow the outgrowth of iCell Neurons over 28 h compared to vehicle or hydroxyurea (negative control), a visible retraction of neurite processes was observed upon treatment with 10 μM vincristine (S1 Movie). 
For the nine phenotypes measured by the MetaXpress high content imaging software, we calculated the area under the concentration curve (AUC) for each of the three replicates and used these values to test for differences among the four drugs by ANOVA. When all data points are included (up to100 μM) in the AUC calculation, all nine phenotypes showed a significant difference among the four drugs and seven phenotypes significantly differed among the three neurotoxic drugs (P < 0.05, S8 Table). A more conservative approach excludes the 100 μM dose in the AUC calculation because of the greatly reduced viability among the neurotoxic drugs, cisplatin in particular, at this dose ( Fig. 3A-C). In this case, five of the nine phenotypes, including relative total outgrowth, mean/median/maximum process length, and mean outgrowth intensity, showed a significant difference in mean AUC among drugs (P < 0.05), while relative number of processes, number of branches, straightness, and cell body area did not differ among drugs (Table 1, Fig. 2). For paclitaxel and vincristine, the observed decrease in total neurite outgrowth over 72 h of treatment did not coincide with a dramatic decrease in cell viability as measured by ATP levels (Fig. 3A-B). However, overlapping curves depicting the decrease in total outgrowth and the decrease viability upon cisplatin treatment were observed, indicating cell death is the likely cause of the cisplatin-induced neurite outgrowth reduction (Fig. 3C). Apoptosis following DNA damage as demonstrated by the 13-fold increase in caspase-3/7 activity 48 h after treatment with 10 μM cisplatin dose plays an important role in cisplatin-induced cell death (Fig. 3D). A smaller, approximately 3-fold increase in caspase-3/7 activity was observed at doses of vincristine above 0.01 μM 48 h after treatment, corresponding to a slight decrease in viability at 72 h (Fig. 3B,D). Interestingly, paclitaxel did not cause caspase-3/7 activation (Fig. 3D). Reduced TUBB2A expression sensitizes neurons to paclitaxel We tested several variables including transfection reagents, plating density and plate coating in an attempt to optimize conditions for siRNA transfection in iCell Neurons (S4-S5 Figs.). Paclitaxel binds to β-tubulin to exert its cytotoxic effect and genetic variants within the promoter of TUBB2A have previously been shown to be associated with increased expression of the gene and reduced risk of paclitaxel-induced peripheral neuropathy [28]. Using optimized transient siRNA transfection conditions, we decreased expression of TUBB2A resulting in increased sensitivity of iCell Neurons to 0.1 μM paclitaxel, as measured by reduced total neurite outgrowth (P = 0.011, Fig. 3E). This decreased expression of TUBB2A resulted in a 0.23 ± 0.06 decrease in relative total neurite outgrowth. Differences in neurite outgrowth across genetically distinct cell lines To determine whether phenotypes for a given drug differed among genetically diverse neurons, iPSCs reprogrammed from four LCLs were differentiated into neurons and neurite outgrowth phenotypes were measured via high content imaging for each neuronal line upon treatment for 72 h with paclitaxel, vincristine, or cisplatin. The two most sensitive and two most resistant unrelated LCLs from the CEU HapMap population based on paclitaxel-induced cytotoxicity and caspase-3/7 data [30] were chosen for reprogramming into iPSCs. 
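The AUC and effect-size summaries used in these comparisons were computed in R (oneway.test, lm); a rough Python equivalent is sketched below. The dose grid matches the text, but the outgrowth values and the per-drug AUCs are made-up illustrations, and scipy's f_oneway assumes equal variances, unlike the Welch-type oneway.test used above.

```python
import numpy as np
from scipy import stats

# Doses as described above: 0.001-100 uM, increasing by factors of ten.
doses_uM = np.array([0.001, 0.01, 0.1, 1.0, 10.0, 100.0])
log_dose = np.log10(doses_uM)

def dose_response_summary(rel_outgrowth):
    """AUC over log10(dose) and the linear-model slope beta for one replicate."""
    auc = np.trapz(rel_outgrowth, log_dose)                   # area under the concentration curve
    beta, intercept = np.polyfit(log_dose, rel_outgrowth, 1)  # outgrowth ~ log10(dose)
    return auc, beta

rel = np.array([1.00, 0.98, 0.90, 0.75, 0.55, 0.20])          # hypothetical replicate means
auc, beta = dose_response_summary(rel)

# One AUC per replicate and drug (placeholder numbers), compared by ANOVA.
auc_by_drug = {
    "vincristine": [2.1, 2.0, 2.2],
    "paclitaxel":  [3.0, 3.1, 2.9],
    "cisplatin":   [3.8, 3.7, 3.9],
}
F, p = stats.f_oneway(*auc_by_drug.values())
# Note: f_oneway assumes equal variances; the analysis above used R's oneway.test,
# i.e. a Welch-type ANOVA that does not make this assumption.
print(auc, beta, F, p)
```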
For the nine phenotypes measured by the MetaXpress high content imaging software, we calculated the area under the concentration curve (AUC) from 0.001-100 μM for each of the three replicates and used these values to test for greater variance among than within individuals per drug by ANOVA. For paclitaxel-treated neurons, five of the nine phenotypes, including relative total outgrowth, mean/median/maximum process length, and number of branches, significantly differed among individuals (ANOVA P < 0.05, Table 2, Fig. 4A-E). For vincristine-treated neurons, the relative number of processes significantly differed among individuals (ANOVA P < 0.05, Table 2, Fig. 4F). None of the nine outgrowth phenotypes significantly differed among cisplatin-treated neurons (Table 2). Negligible (less than 2-fold change) caspase-3/7 activity was detected in the paclitaxel-treated neurons (Fig. 4G), consistent with the apoptosis results in the iCell Neurons (Fig. 3D). Mean caspase-3/7 activity across the neurons was greater than 2-fold higher than the control upon treatment with 0.01-100 μM vincristine and significantly differed among individuals at these doses (ANOVA P < 0.05, Fig. 4H). Similar to that observed with iCell Neurons, cisplatin-induced caspase-3/7 activity was highest at the 10 μM dose in the LCL-derived neurons, ranging from 4.5-fold to 14.5-fold increases relative to control, which represents a significant difference among individuals (ANOVA P < 0.001, Fig. 4I).

[Fig. 4 caption] Phenotypes measured after 72 h of drug exposure that significantly differed across individuals (ANOVA of AUC calculated from 0.001-100 μM) include relative paclitaxel-induced (A) total outgrowth (P = 0.005), (B) mean process length (P = 0.019), (C) median process length (P = 0.022), (D) maximum process length (P = 0.009), (E) number of branches (P = 0.018), and vincristine-induced (F) number of processes (P = 0.022). Relative caspase 3/7 activity measured by Caspase-Glo 3/7 after 48 h treatment of (G) paclitaxel, (H) vincristine, and (I) cisplatin. Doses of drug that caused a mean greater than 2-fold increase in caspase activity relative to control were tested for significant differences among individual cell lines by one-way ANOVA, *P < 0.05 and **P < 0.001. Error bars represent the standard error of the mean from three independent experiments of >500 cells per dose. doi:10.1371/journal.pone.0118020.g004

Discussion

We have applied a human neuronal cell model to the study of chemotherapeutic neurotoxicity. We demonstrate reproducible differences in morphological changes including neurite outgrowth phenotypes, cellular viability and apoptosis among four distinct chemotherapeutic drugs. Importantly, we identified differences among genetically distinct iPSC-derived neurons in the degree of apoptosis for vincristine and cisplatin, relative number of processes for vincristine, and relative total outgrowth, process length, and number of branches for paclitaxel. The iPSC-derived neurons are a highly relevant human model currently available for neurotoxicity studies and a marked improvement over the LCL model used previously for screening and validation. In the human neuronal model, vincristine was the most neurotoxic as measured by morphological changes following treatment. Similarly, in patients, neurotoxic doses of vincristine are approximately 40-fold lower than those for paclitaxel and 75-fold lower than those for cisplatin [32]. Cisplatin-induced neuropathy is also known to have delayed onset, often not appearing until several months after treatment has been completed, and is thought to be due to an accumulation of drug [32][33][34]. Importantly, we see phenotypic changes in our human neuronal model at physiologically relevant plasma concentrations seen in patients: 10-100 nM for paclitaxel [35], 1-100 nM for vincristine [36], and 1-10 μM for cisplatin [37].

The human neurons used in this study are not peripheral neurons, but are predominantly glutamatergic and GABAergic cortical neuronal subtypes. Researchers have created iPSC-derived neurons to study neuronal diseases and have found these cells to be representative of the neuronal disease [38][39][40]. The expectation is that large quantities of peripheral neurons will eventually be produced in the same way. Despite this shortcoming, having a highly pure, readily available human neuron is a significant advance relative to the tools of the past. Furthermore, the human neuronal model complements the rat DRG model and offers the advantage that the human model will better reflect the complex genetic interactions that result in neurotoxicity in humans. Prior animal studies reveal similar differences among drugs compared to our results. For instance, in a study of rat DRG neurons treated with the same three neurotoxic drugs, vincristine had the lowest IC50 for neurite outgrowth, the paclitaxel IC50 was intermediate, and the cisplatin IC50 was the highest [23]. Consistent with vincristine being the most severe, vincristine treatment decreased both anterograde and retrograde fast axonal transport in isolated squid axoplasm, whereas paclitaxel only decreased anterograde transport [41]. Also, only higher doses of cisplatin reduced caudal nerve conduction velocity in BALB/c mice, consistent with our finding of no outgrowth reduction in human neurons until high doses of cisplatin [42].

Our findings support different mechanisms of action among the three neurotoxic drugs examined. The known cisplatin mechanism of action of DNA platination with eventual apoptosis is consistent with our findings in iPSC-derived neurons of morphological changes concomitant with caspase 3/7 activation. We saw the largest caspase 3/7 activity at the 10 μM cisplatin dose, but did not detect any caspase 3/7 activity at the 100 μM dose, likely because the activation had already occurred prior to the 48 h time point, leaving few living cells for caspase measurement (S1 Table). Consistent with our observation that cisplatin-treated iPSC-derived neurons have the largest apoptotic response among drugs tested, post-mitotic rat dorsal root ganglion neurons attempted to re-enter the cell cycle and underwent apoptosis upon cisplatin treatment [21]. Paclitaxel may be disrupting mitochondrial function more than the other two drugs [43,44], suggesting mitochondrial function in iPSC-derived neurons might be a worthwhile phenotype to investigate. Because we observed large, consistent dose-response effects with little cell death, the neurite total outgrowth and process length phenotypes are most appropriate for future studies of paclitaxel- and vincristine-induced peripheral neuropathy in the human iPSC-derived neuron model. Cisplatin, on the other hand, may be best studied in the neuronal model using apoptotic or additional, yet untested, phenotypes. The induced pluripotent stem cell derived human neuronal model is one of the most appropriate cell types available for follow-up functional studies from patient CIPN GWAS.
As a proof of concept, we show that decreased expression of the gene encoding paclitaxel target TUBB2A causes decreased neurite outgrowth after paclitaxel treatment. This result is concordant with findings from a previous patient study connecting decreased expression of the gene with promoter alleles that associate with increased risk of paclitaxel-induced peripheral neuropathy [28]. In a recent GWAS of vincristine-induced peripheral neuropathy in pediatric acute lymphoblastic leukemia patients, a promoter SNP of CEP72 was genome-wide significantly associated with neuropathy risk [45]. In that study, we used the human iPSC-derived neuron model to show that decreased expression of CEP72 decreased the relative total outgrowth, number of processes, and number of branches upon vincristine treatment, which greatly supported additional functional findings [45]. Importantly, this model will have utility for functional validation of other genes associated with CIPN found in GWAS or other genomic studies. While GWAS for CIPN have revealed a few promising associations [8,10,45], the problem has been in identifying large replication patient cohorts receiving the same drug regimen for replication. Indeed, replication studies in oncology are extremely challenging because a large, well-controlled trial using the same drug regimen is rarely performed twice [46]. Here, we show that the variance in neurite outgrowth phenotypes is greater between than within individuals, demonstrating the potential of larger genetic association studies using the human iPSC-derived neuron model. A recent study demonstrated that genetic background was the major cause of transcriptional variation among iPSC lines, suggesting that future studies should focus on collection of a large number of donors, rather than generating large numbers of lines from the same donor [47]. While the two most paclitaxel-sensitive and two most paclitaxel-resistant LCLs [30] were chosen for reprogramming into iPSCs, the differences among neurite outgrowth phenotypes in derived neurons, while statistically significant, were not so dramatic. Indeed, one of the resistant lines in LCLs (GM07022) was one of the most paclitaxel-sensitive lines in neurons (N07022), while GM12752/N12752 was paclitaxel-resistant in both cell types (Fig. 4A, S6 Fig.). These differences suggest there may be neuron-specific mechanisms of drug sensitivity not present in the LCL model, which should be studied in larger populations of neurons. As there is a serious need for a more relevant and genetically diverse human cellular model for studies of drug toxicity, the use of differentiated cells (cardiomyocytes, hepatocytes, neurons) will have great implications for the field of pharmacogenomics. These studies provide a framework for the discovery, validation, and identification of 1) individuals at high genetic risk for neurotoxicity; 2) genetic components and genes contributing to CIPN and; 3) druggable targets to treat or prevent this devastating side effect of chemotherapy. The Accell siRNA transfection method showed improved knockdown efficiency over the Dharmafect1 method. iCell Neurons were allowed to grow for 11 days prior to adding either Dharmafect1 (ThermoFisher) transfection media for 5 h (earlier experiments) or Accell (ThermoFisher) transfection media for 24 h (later experiments). The percent of targeted gene remaining was measured 24 h post-transfection in each experiment by qPCR. 
However, the Dharmafect1 experiments used the RNeasy kit (Qiagen, 10 6 cells required) and the Accell experiments used the Cells-to-CT kit (Life Technologies, 10 4 cells required) for RNA isolation. After Dharmafect1 transfection, less than the required number of cells remained for RNA extraction, so the differences between the two methods may be exaggerated. However, since the Accell method was consistently successful, we used it for additional experiments. Each method represents 1 experiment but Accell RNA was in abundance to allow for 3 independent preparations of cDNA for qPCR, as shown. Table. LCL paclitaxel-induced cytotoxicity and caspase-3/7 activity data used to select LCLs for reprogramming into iPSCs. (XLSX) S8 Table. ANOVA results comparing the AUCs of relative neurite outgrowth phenotypes among all 4 drugs or the 3 neurotoxic drugs (Paclitaxel, Vincristine, Cisplatin) after 72 h treatment of iCell Neurons. (DOCX)
Zoology of a non-local cross-diffusion model for two species We study a non-local two species cross-interaction model with cross-diffusion. We propose a positivity preserving finite volume scheme based on the numerical method introduced in Ref. [15] and explore this new model numerically in terms of its long-time behaviours. Using the so gained insights, we compute analytical stationary states and travelling pulse solutions for a particular model in the case of attractive-attractive/attractive-repulsive cross-interactions. We show that, as the strength of the cross-diffusivity decreases, there is a transition from adjacent solutions to completely segregated densities, and we compute the threshold analytically for attractive-repulsive cross-interactions. Other bifurcating stationary states with various coexistence components of the support are analysed in the attractive-attractive case. We find a strong agreement between the numerically and the analytically computed steady states in these particular cases, whose main qualitative features are also present for more general potentials. Introduction. Multi-agent systems in nature oftentimes exhibit emergent behaviour, i.e. the formation of patterns in the absence of a leader or external stimuli such as light or food sources. The most prominent examples of these phenomena are probably fish schools, flocking birds, and herding sheep, reaching across scales from tiny bacteria to huge mammals. While self-interaction models for one particular species have been extensively studied, cf. Refs. [26,23,20,37,46] and references therein, there has been a growing interest in understanding and modelling interspecific interactions, i.e. the interaction among different types of species. One way to derive macroscopic models from microscopic dynamics consists in taking suitable scaling limits as the number of individuals goes to infinity. Minimal models for collective behaviour include attraction and/or repulsion between individuals as the main source of interaction, see [46,19,20,35] and the references therein. Attraction and repulsion are normally introduced through effective pairwise potentials whose strength and scaling properties determine the limiting continuum equations, see [39,9,8,16]. Usually localised strong repulsion gives rise to non-linear diffusion like those in porous medium type models [39], while longrange attraction remains non-local in the final macroscopic equation, see [16] and the references therein. In this paper we propose a finite-volume scheme to study two-species systems of the form ∂ t ρ = ∇ · ρ∇ W 11 ⋆ ρ + W 12 ⋆ η + ǫ(ρ + η) , (1a) with given initial data ρ(x, 0) = ρ 0 (x), and η(x, 0) = η 0 (x). (1c) Here, ρ, η are two unknown mass densities, W 11 , W 22 are self-interaction potentials (or intraspecific interaction potentials), W 12 , W 21 are cross-interaction potentials (or interspecific interaction), and ǫ > 0 is the coefficient of the cross-diffusivity. The non-linear diffusion term of porous medium type can be considered as a mechanism to include volume exclusion in cell chemotaxis [32,40,12], since it corresponds to very concentrated repulsion between all individuals. This model can also be easily understood as a natural extension of the well-known aggregation equation (cf. [37,46,3,18] ) to two species including a cross-diffusion term. Common interaction potentials for the one species case include power laws W (x) = |x| p /p, as for instance in the case of granular media models, cf. [2,47]. 
Another choice is a combination of power laws of the form W (x) = |x| a /a − |x| b /b, for −N < b < a where N is the space dimension. These potentials, featuring short-range repulsion and long-range attraction, are typically chosen in the context of swarming models, cf. [36,1,28,29,4,21,17]. Other typical choices include characteristic functions of sets (like spheres) or Morse potentials W (x) = −c a exp(−|x|/l a ) + c r exp(−|x|/l r ), or their regularised versions W p (x) = −c a exp(−|x| p /l a ) + c r exp(−|x| p /l r ), where c a , c r and l a , l r denote the interaction strength and radius of the attractive (resp. repulsive) part and p ≥ 2, cf. [26,22,21]. These potentials display a decaying interaction strength, e.g. accounting for biological limitations of visual, acoustic or olfactory sense. The asymptotic behaviour of solutions to one single equation where the repulsion is modelled by non-linear diffusion and the attraction by non-local forces has also received lots of attention in terms of qualitative properties, stationary states and metastability, see [11,15,27,13,14] and the references therein. Systems without cross-diffusion, ǫ = 0, were proposed in [24] as the formal mean-field limit of the following ODE systeṁ For symmetrisable systems, i.e. systems such that there exists some positive constant α > 0 with W 12 = αW 21 , they show the system can be assigned an interaction energy functional. As a result, the system admits a gradient flow structure and variational schemes can be applied to ensure existence of solutions, cf. [24,33]. However, in many contexts such a condition is too exclusive in the sense that lots of applications exhibit a lack of symmetry in the interactions between different species. In order to treat the system for general, and possibly different, cross-interactions W 12 , W 21 , they modify the well-known variational scheme to prove convergence even in the absence of gradient flow structure. These systems without cross-diffusion appear in modelling cell adhesion in mathematical biology with applications in zebrafish patterning and tumour growth models, see [30,25,41,48] for instance. In this paper we extend their cross-interaction model by a cross-diffusion term which is used to take into account the population pressure, i.e. the tendency of individuals to avoid areas of high population density. As cross-diffusion we choose the form introduced by Gurtin and Pipkin in their seminal paper [31]. Although their work is antedated by results of mathematicians and biologists interested in density segregation effects of biological evolution equations, cf. [44,43] and references therein, the particularity about their population pressure model is the occurrence of strict segregation of densities under certain circumstances, cf. [31,6,7]. This cross-diffusion term has been the basis to incorporate volume exclusions in models for e.g. tumour growth [5] or cell adhesion [38]. Hence, our model is of particular interest from a modelling point of view taking into account non-local interactions between the same species and different species as well as the urge of both species to avoid clustering. We discover a rich asymptotic behaviour including phenomena such as segregation of densities, regions of coexistence, travelling pulses -all of which are observed in biological contexts, cf. [42,45]. Existence of segregated stationary states under certain assumptions on the interaction potentials for small cross-diffusivity has been very recently obtained in [10]. 
Here we show that it is in fact possible to find explicit stationary states and travelling pulses for certain singular not necessarily decaying interaction potentials showing coexistence and segregation of densities. The rest of this paper is organised as follows: in Section 2 we discuss the basic properties of the system (1) in one dimension, in Section 3 we propose our numerical scheme which is used in Section 4 to explore the model and its long-time behaviour numerically. These insights are used to make reasonable assumptions on the support of the asymptotic solutions in order to derive analytic expressions for their shape and give a first classification of the zoology of the different stationary states. Finally we discuss in Section 5 how generic these phenomena are for different potentials and we draw the final conclusions of this work in Section 6. 2. A non-local cross-diffusion model for two species. Throughout this paper we consider system (1) in one spatial dimension. Then the model reads for some initial data ρ(x, 0) = ρ 0 (x), and η(x, 0) = η 0 (x), and radially symmetric potentials W ij , for i, j = 1, 2. We can obtain some apriori estimates on solutions by using the following energy We note that for W ij ∈ W 2,∞ (R), along any solution (ρ, η) of system (2), there holds In the case of W ii = C ii x 2 /2 and W ij = ±C ij |x| for i = j with non-negative constants C ij , the estimate is also true, since and similarly for the terms in (2b), as long as ρ, η ∈ L ∞ (0, T ; L ∞ (R)). Thus the terms implying that ρ + η ∈ L 2 (0, T ; H 1 (R)). We deduce that the sum of both species remains continuous for almost all positive times -a property we will make use of later. Now, let us introduce our notion of steady states. Proof. Clearly, the characterisation is sufficient, since the velocity field vanishes in each connected component of their supports if there exist constants c 1 , c 2 such that Eqs. (3) are satisfied. Conversely, if there holds we note that ρ, η, ∂ x (ρ + η) ∈ L 2 (R) by the definition of steady state, and therefore the right-hand sides are distributional derivatives of L 1 functions. By a well-known result (cf. e.g. [34], Lemma 1.2.1.), we deduce that there exist constants K 1 , K 2 ∈ R such that K 1 = ρ∂ x (W 11 ⋆ ρ + W 12 ⋆ η + ǫ(ρ + η)), Due to the integrabilty properties of the right-hand sides above, we infer that K 1 = K 2 = 0, and thus in the interior of any connected component of the supports of ρ and η, we obtain that there exist constants c 1 , c 2 ∈ R such that using the same argument as above. Note that the assumption on the interiors of the supports of the species is purely technical and due to the regularity assumptions on our definition of stationary states. This avoids pathological cases such as functions supported on a fat Cantor set. 3. Numerical scheme. In order to solve system (2), we introduce a finite volume scheme based on Ref. [15]. The problem is posed on the domain Ω : and uniform size ∆x := x i+1/2 − x i−1/2 . Finally, the time interval [0, T ] is discretised by t n = n∆t, for n = 0, . . . , ⌈T /∆t⌉. We define the discretised initial data via We integrate system (2) over the test cell [t n , t n+1 ] × C i to obtain whereF n i+1/2 ,Ḡ n i+1/2 denote the flux on the boundary of cell C i , i.e. 
Then the finite volume scheme for the cell averages ρ n i and η n i reads where we approximate the fluxes on the boundary, Eqs.(4), by the numerical fluxes using (·) + := max(·, 0) and (·) − := min(·, 0) to denote the positive part and the negative part, respectively. The velocity is discretised by centred differences: Here we have set where W l−k ij = W ij (x l − x k ), for i, j = 1, 2. This scheme has proven very robust for one species, and under a (more restrictive) CFL condition we can also prove the following result. Proposition 3.1 (Non-negativity preservation). Consider system (2) with initial data ρ 0 , η 0 ≥ 0. Then for all n ∈ N the cell averages obtained by the finite volume method (5) satisfy ρ n i , η n i ≥ 0, granted that the following CFL condition is satisfied . Proof. Let us assume that ρ n i , η n i ≥ 0, and we need to show that then ρ n+1 We can rearrange the terms so that Clearly, all terms in the second line are non-negative. The first line is non-negative if the CFL condition is satisfied. Application of the same procedure to η n+1 i yields the statement. 4. Numerical study. In this section we study system (2) numerically with emphasis on its long time behaviour. Throughout this chapter we use the self-interaction potentials and the cross-interaction potentials for the interspecific attractive-attractive and attractive-repulsive case, respectively. This choice of potentials allows us to compute steady states of system (2) explicitly. We find a wide range of different behaviours and properties, including segregation phenomena, for different cross-diffusivities and cross-interactions. Notice that the system is translationally invariant and therefore, if for symmetry considerations we can show that the centres of mass of both species in a stationary state are fixed and equal to some particular value, we can suppose that value to be zero without loss of generality. From numerical simulations we observe that steady states are compactly supported which motivates this ansatz when computing the profiles analytically. This is also due to the non-linear diffusion of porous medium type in the volume exclusion term. This chapter is subdivided into two sections addressing the mutually attractive case and the attractive-repulsive case, respectively. 4.1. Attractive-attractive case. Let us begin with the case of attractive interaction between both species, i.e. W 12 = W 21 = |x|. Upon exploring the system numerically, we find a vast variety of stationary patterns, including both symmetric and non-symmetric profiles whose occurrence and stability depends on the crossdiffusivity. In fact, the coefficient ǫ of the cross-diffusivity plays a crucial role in the bifurcations of these profiles, and will be discussed in the next section. Then, we study the system as the cross-diffusivity tends to zero and the stability of the steady states -a matter that seems closely intertwined with the bifurcations. Steady states and behavioural bifurcation. We begin by introducing the two types of symmetric steady states observed in the attractive-attractive case. Motivated by numerical simulations, we assume that the stationary distributions are compactly supported, i.e., is then only inhabited by the first species, but not η. Upon rearranging Eq. (3), we obtain The two non-local terms W 11 ⋆ ρ and W 12 ⋆ η can be computed individually. First the self-interaction terms becomes where are the mass and the first two moments of ρ, respectively. 
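As an aside, to make the scheme of Section 3 concrete, here is a minimal, self-contained sketch of one possible implementation in Python: uniform grid, dense discrete convolutions for the non-local terms, upwind interface fluxes, and explicit Euler in time with a heuristic time-step restriction. The potentials, parameter values and the precise CFL constant are illustrative assumptions, not the exact choices of Ref. [15] or of the simulations reported below. With the upwind fluxes and a sufficiently small time step the update conserves the mass of each species exactly and keeps the densities non-negative (cf. Proposition 3.1).

```python
import numpy as np

# Assumed potentials for illustration: quadratic self-interaction and |x|
# cross-interaction (use -np.abs(r) for W21 in the attractive-repulsive case).
W11 = W22 = lambda r: 0.5 * r**2
W12 = lambda r: np.abs(r)
W21 = lambda r: np.abs(r)
eps = 0.5                                   # cross-diffusivity (illustrative)

L, n = 3.0, 200
x = np.linspace(-L, L, n)
dx = x[1] - x[0]

def conv(W, f):
    """Discrete convolution (W * f)(x_i) = sum_j W(x_i - x_j) f(x_j) dx."""
    return (W(x[:, None] - x[None, :]) @ f) * dx

def upwind_flux(f, v):
    """Upwind flux at the n-1 interior interfaces (no-flux boundaries elsewhere)."""
    return np.maximum(v, 0.0) * f[:-1] + np.minimum(v, 0.0) * f[1:]

def step(rho, eta, cfl=0.4):
    xi1 = conv(W11, rho) + conv(W12, eta) + eps * (rho + eta)
    xi2 = conv(W22, eta) + conv(W21, rho) + eps * (rho + eta)
    u = -np.diff(xi1) / dx                  # interface velocity of species 1
    w = -np.diff(xi2) / dx                  # interface velocity of species 2
    # Heuristic time step: advective CFL plus a parabolic-type cap for the
    # eps-term; the paper's Proposition 3.1 gives the precise restriction.
    dt = cfl * dx / max(np.abs(u).max(), np.abs(w).max(), 1e-12)
    dt = min(dt, 0.2 * dx**2 / max(eps * (rho + eta).max(), 1e-12))
    F = np.zeros(n + 1)
    G = np.zeros(n + 1)                     # interface fluxes, zero at the boundary
    F[1:-1] = upwind_flux(rho, u)
    G[1:-1] = upwind_flux(eta, w)
    rho = rho - dt / dx * np.diff(F)
    eta = eta - dt / dx * np.diff(G)
    return rho, eta, dt

# Indicator-function initial data normalised to masses m1 and m2.
m1, m2 = 0.4, 0.6
rho = np.where(np.abs(x + 0.25) < 0.25, 1.0, 0.0); rho *= m1 / (rho.sum() * dx)
eta = np.where(np.abs(x - 0.25) < 0.25, 1.0, 0.0); eta *= m2 / (eta.sum() * dx)

t = 0.0
while t < 2.0:                              # short demonstration run
    rho, eta, dt = step(rho, eta)
    t += dt
```

For production runs one would typically replace the dense convolution by an FFT-based one and adopt the CFL condition stated above, but the structure of the update is the same.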
Then the cross-interaction term becomes where m 2 , M 2 denote the mass and the centre of mass of the second species. Due to symmetry and translational invariance of the solution, both M 1 and M 2 can be taken as zero without loss of generality. Upon substitution of the non-local terms in (7) and (8), Eq. (6) is simplified into , respectively. Using ρ(±c) = 0 at the boundary (where ρ + η vanishes identically), we get Finally, let us consider the interval [−b, b] where both species coexist. Again, ρ satisfies where the cross-interaction term W 12 ⋆ η can be further reduced, according to Notice that all terms on the right side are twice differentiable. Therefore from (10), ρ + η is twice differentiable in (−b, b), and upon differentiating Eq. (10) twice we obtain and similarly from the second equation in (3) The system of equations (11) and (12) can be solved by first introducing the decoupled system for u := ρ + η and v := ρ − η, giving by Thus, the solutions ρ and η are obtained as In fact, due to symmetry there holdsû 1 = 0, and Eqs. (13,14) can be simplified to Hence the symmetric steady states are determined uniquely by three parameters,û 2 , b and c, which are governed by algebraic equations. Since η is only supported on [−b, b], the condition for the total mass of η becomes From Eqs. (9,15), the condition for the total mass of ρ becomes Whenû 2 is eliminated, Eq. (16) provides a relation between b and c, i.e., Finally, consider the continuity of the sum of the densities ρ+η at Therefore b and c are in the zero locus of Eqs. (17,18) that are numerically solved, cf. Figure 1(a). Then the shape of the steady state is given by two parabola profiles on the parts only inhabited by the first species and cosine profiles where both species coexist: Figure 1(b) shows an excellent agreement between numerical and analytical steady states. Let us remark that Eq. (17) implies b = c in the case of m 1 = m 2 . As a consequence both species completely overlap and the profile is just that of a cosine, cf. Figure 2. Numerical simulations show that the Batman profiles are the only symmetric stationary distribution in a certain range of cross-diffusivities, namely (0, ǫ (1) ]. For ǫ ∈ (ǫ (1) , ǫ (2) ], a new family of profiles (called the second kind ) emerges coexisting with the Batman profiles in this range, cf. Figure 3. Finally, for ǫ > ǫ (2) only profiles of the second kind prevail. Since the steady states are a state of balance between diffusion and attractive interactions, the second kind of profiles can be seen as states in which the attractive force is not strong enough to ensure the formation of a single group for η as observed in the Batman profiles. Similarly to the Batman profiles, we may determine parameters and their governing equations for profiles of second kind. In the symmetric case, using (3) the profiles are given by , and p is the fraction of mass in the corners of η, cf. Figure 3, (areas filled in red). Similarly, It is apparent that there are five unknowns b, c, d for the support, B for the amplitude in regions of coexistence, and p for the mass fraction. Correspondingly, we find four conditions in order to determine all parameters but p: for the mass of ρ and the continuity of the sum σ = ρ + η at x = c and x = b. Since p parameterises a family of solutions and describes both branches (as envelope) of the bifurcation diagram, cf. Figure 4, we are interested in finding the conditions leading to p min (ǫ), p max (ǫ) in the diagram, Figure 4. 
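A simple way to cross-check the agreement between numerical and analytical profiles reported here is to test the characterisation of steady states from Section 2 directly: on the support of ρ the quantity W11⋆ρ + W12⋆η + ǫ(ρ+η) should be constant (and analogously for η). The sketch below measures its variation on the support of ρ, assuming, as in this section, a quadratic self-interaction and the |x| cross-interaction; the profiles used in the example are toy placeholders, not actual steady states.

```python
import numpy as np

def convolve_grid(W, f, x, dx):
    """(W * f)(x_i) = sum_j W(x_i - x_j) f(x_j) dx."""
    return np.array([np.sum(W(xi - x) * f) * dx for xi in x])

def steady_state_residual(rho, eta, x, dx, W11, W12, eps, tol=1e-8):
    """Variation of W11*rho + W12*eta + eps*(rho+eta) on supp(rho).
    Note: the constant may differ between disjoint components of the support,
    so multi-component states should be checked component by component."""
    lam1 = convolve_grid(W11, rho, x, dx) + convolve_grid(W12, eta, x, dx) + eps * (rho + eta)
    on_support = rho > tol
    return np.ptp(lam1[on_support])          # max - min over the support of rho

W11 = lambda r: 0.5 * r**2                   # assumed self-interaction
W12 = lambda r: np.abs(r)                    # attractive cross-interaction

x = np.linspace(-2, 2, 401); dx = x[1] - x[0]
rho = np.maximum(0.0, 0.5 - x**2)            # toy profiles, not computed steady states
eta = np.maximum(0.0, 0.3 - 2 * x**2)
print(steady_state_residual(rho, eta, x, dx, W11, W12, eps=0.5))
```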
In order to determine the bifurcation diagram we run simulations with two different types of initial data -on the one hand we start the system with supp(η) ⊂ supp(ρ), on the other hand we initialise the system such that η is supported around ρ, cf. first row of Figure 5. The second row shows the stationary distribution asymptotically achieved with the respective initial data. We note that the mass fraction of η in the corners is different for both simulations albeit having used the same cross-diffusivity. The mass fraction in the left graph corresponds to p = p min and the mass fraction in the right graph to p = p max , respectively. Now we want to give conditions determining the envelopes p min (ǫ), p max (ǫ) of Figure 4. Let us impose non-negativity of η at x = b, i.e. η(b) ≥ 0. This is a reasonable assumption which is also reflected in the numerical simulations, cf. Figure 6(a). The figure shows steady states corresponding to the left initial data in Figure 5 as ǫ increases. While we observe a discontinuity of η at x = b for small ǫ, there is a critical value where η(b) = 0, for all ǫ > ǫ (1) . For the upper envelope we impose that the velocity field u 2 is non-negative at x = c since otherwise any small perturbation will render the stationary state unstable, i.e. mass would get transported into the interior, cf. Figure 6(b). These two conditions describe both envelopes in Figure 4. In all four graphs the masses are m 1 = 0.1, m 2 = 0.6, and the cross-diffusivity is ǫ = 1.7. The first row depicts two different initial data -one (left) where η is included in ρ, and one (right) where η surrounds ρ. In the second row we present the corresponding steady states. Albeit having a similar make-up, they differ in their respective mass fraction of the corner, p. The left graph gives the minimal mass fraction p min while the right graph gives the maximal, pmax, respectively, cf. Vanishing diffusion regime. In this section we study the case of Batman profiles as ǫ → 0. Recall the two equations for b and c, When ǫ is small, both b and c are O( √ ǫ), suggesting b = ǫ 1/2 b 0 + ǫ 1/2 b 1 + ǫb 2 + · · · , and c = ǫ 1/2 c 0 + ǫ 1/2 c 1 + ǫc 2 + · · · . Upon substitution of the asymptotic expansions into Eq. (19) and (20), the leading order coefficients b 0 and c 0 satisfy Notice that both densities in the Batman profiles will converge to a Dirac Delta at zero with the respective masses while keeping their shape with this described asymptotic scaling for their supports. Asymmetric profiles. So far we only discussed symmetric steady states. However, there is an equally rich variety of non-symmetric stationary states, cf. Figure 7 and Figure 8. In Figure 7 we display the cases where the support only consists of metric profiles for 0 < ǫ < ǫ (1) independent of the masses m 1 and m 2 . Only for larger cross-diffusivities, ǫ > ǫ (1) , asymmetric profiles can be observed. Moreover, there is a whole family of asymmetric profiles as can be seen in Figure 8. This is similar to the case of symmetric stationary states, parameterised by the mass fraction p. Stability of steady states and symmetrising effect. Let us now discuss the numerical stability of the symmetric steady states. Here the bifurcation point ǫ (1) plays an important role, for the system exhibits a symmetrising effect whenever the crossdiffusivity lies below the critical one, in the sense that there is only one symmetric steady state attracting any initial data. 
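As a brief aside before turning to stability: the O(√ǫ) scaling of the support predicted by the vanishing-diffusion expansion above can be checked against simulations by a log-log fit of the measured half-width against ǫ. The numbers below are placeholders for illustration, not simulation output.

```python
import numpy as np

# Hypothetical (eps, half-width) pairs extracted from computed steady states.
eps_values = np.array([0.4, 0.2, 0.1, 0.05, 0.025])
b_values   = np.array([0.63, 0.45, 0.32, 0.22, 0.16])   # support half-width of eta

slope, intercept = np.polyfit(np.log(eps_values), np.log(b_values), 1)
print(f"fitted exponent: {slope:.2f}  (the expansion above predicts 1/2)")
```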
We fixed ǫ ∈ (0, ǫ (1) ) and chose ρ 0 = 2m 1 ½ [−0.5,0] and η 0 = 2m 2 ½ [0,0.5] for all combinations of masses of the form (m 1 , m 2 ) = 0.1 · (i, j) for i, j = 1 . . . 10. In all cases we observe that there is only one attractor, namely the Batman profile of the form given in Figure 1(b) and Figure 2(b) in the case m 1 = m 2 , respectively. For ǫ > ǫ (1) the system is not symmetrising anymore and small perturbations lead to different stationary states. This can be seen if p is varied in [p min , p max ], for it leads to different states. A similar argument holds for the asymmetric states, by shifting mass from one corner into the other, cf. Figure 8. 4.2. Attractive-repulsive case. In this section we present the attractive -repulsive case, i.e. W 12 = |x| = −W 21 . Then the steady states have segregated densities, as asserted by the following proposition. Proof. Suppose the interior of a connected componente of supp(ρ) ∩ supp(η) is not empty. We know that both species satisfies Eqs. (3) in that connected component: Similar arguments as above imply that the interaction terms are twice differentiable in this interval, thus we differentiate twice and get 0 = m 1 + 2η + ǫ(ρ + η) ′′ and 0 = m 2 − 2ρ + ǫ(ρ + η) ′′ . Steady states. This section is dedicated to studying the steady states of the system with attractive-repulsive cross-interactions. Due to numerical simulations and the previous proposition we make the following assumption on the support where a < b ≤ −c < c ≤ d < e are some real numbers. Using Eqs. (3) we proceed similar as above, cf. Eqs. (7,8), to obtain for shape of the second species on the left part of the support and for the right part, respectively. Similar as above, we can see that the interaction terms are twice differentiable, therefore differentiating Eq. (3) in the support of ρ twice yields 0 = m 1 + ǫ(ρ) ′′ , and thus and analogously Concerning the first species, the parameters β, γ are determined by the continuity condition Eq. (22) and we obtain We can see that there are six unknowns, namely a, b, c, d, e, and M 2 with a total of five conditions: by imposing half of the mass of η to each side of ρ. Case of strict segregation. Let us start by discussing the case Then the condition on the mass yields We can solve η L (b) = 0 for a, Since half of the mass is located to the left of the first species, we get where we used Eq. (23a). Similarly, we solve η R (d) = 0 for e to obtain Using this expression we compute So we have determined c, b, d depending only on the masses and the second order moments of the second species, M 2 . We can substitute the values into Eqs. (23a, 23c) to determine a and e. Critical ǫ and maximal M 2 . We are interested in a condition determining as to when segregation of species occurs. In fact there is a critical value of the crossdiffusivity, ǫ c , such that there only exist adjacent steady states for ǫ > ǫ c . For 0 < ǫ < ǫ c strictly segregated steady states occur if |M 2 | < M 2,max , where M 2 = 0 corresponds to the symmetric case. Figure 9 displays this behaviour. Let us derive an expression for ǫ c and M 2,max . For a fixed ǫ we may compute M 2,max . We begin with the case c = −b. We can solve for the critical M 2 , i.e. Similarly, we can solve equation c = d for M 2 , which gives (c) ǫ = 1/2. If ǫ = ǫ c both species touch at the points {−c, c} or are partially adjacent. If ǫ < ǫ c but we choose M 2 outside of the aforementioned range we observe steady states consisting of (partially) adjacent bumps. 
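When scanning ǫ and the centre of mass of the second species numerically, a convenient way to decide between the segregated and the (partially) adjacent regimes described above is to monitor the overlap mass ∫ρη dx of the computed steady state. This diagnostic is our own bookkeeping, not part of the analysis above, and the profiles in the example are placeholders.

```python
import numpy as np

def classify_steady_state(rho, eta, x, dx, tol=1e-10):
    """Overlap mass of the two densities and centre of mass of the second species."""
    overlap = np.sum(rho * eta) * dx          # ~0 for strictly segregated densities
    m2 = np.sum(eta) * dx
    M2 = np.sum(x * eta) * dx / m2            # M2 = 0 in the symmetric case
    return ("segregated" if overlap < tol else "coexisting/adjacent"), overlap, M2

# Toy segregated bumps (placeholders, not computed steady states):
x = np.linspace(-2, 2, 401); dx = x[1] - x[0]
rho = np.maximum(0.0, 0.25 - (x + 0.7) ** 2)
eta = np.maximum(0.0, 0.25 - (x - 0.7) ** 2)
print(classify_steady_state(rho, eta, x, dx))
```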
Figure 10 displays the steady states in the symmetric case, i.e. M 2 = 0, for attractiverepulsive cross-interactions. We observe a transition of behaviour for different values of ǫ, ranging from strictly segregated states to completely adjacent states. The numerical results agree perfectly with the results obtained analytically. Vanishing diffusion regime. As we have seen in Figure 9, there is an ǫ c such that the steady states parameterised by M 2 ∈ [−M 2,max , M 2,max ] are segregated. In this section we consider the case of vanishing cross-diffusion. We can assume that ǫ < ǫ c and This is indeed a measure solution of system. To see this let us consideṙ Since we are looking for a steady state we observė We assume without loss of generality that Fixing X = 0 we get Y 2 = Y 1 + 2m 1 /m 2 and Y 1 ∈ [−2 m1 m2 , 0]. This is exactly the solution of the system as ǫ → 0, cf. Eq. (26). Stability of steady states. Here we want to discuss the stability of the stationary states of the attractive-repulsive system. In general, the stationary states are not stable as small perturbations may lead to a completely different stationary state. It becomes clear in Figure 9, that perturbing η by shifting it to either side leads to a completely different stationary state. Although this is an arbitrarily small perturbation in any L p -norm, the translated profile is another stationary state. This is why these profiles are not stable. The same argument holds for symmetric stationary states. However, they are stable under symmetric perturbations since any symmetric initial data is attracted by the symmetric profile. Characterising fully the basin of attraction for each stationary state seems difficult. For perturbations shifting mass from η L to η R (or vice versa) there is no stationary state but the profile is then attracted by a travelling pulse solution. Travelling pulses. In addition to the convergence to steady states we observe travelling pulse solutions in the case of attractive-repulsive cross-interactions. There are two types of travelling pulses -those consisting of two bumps and those consisting of three. In our numerical study we do not observe more than three bumps, even in the case of exponentially decaying potentials. There are however metastable states where more bumps exist but after a sufficiently long time the collapse into two or three. Two pulses. In order to compute these profiles, we assume [−a, a] denotes the initial support of u = η(0) and therefore [−a − x 0 , a − x 0 ] the initial support of ρ(0). We transform the system into co-moving coordinates, z = x − vt, and obtain the following conditions for the pulse profiles similarly to Eqs. (3). A computation similar to Eqs. (7,8), leads to the explicit form of the pulse on [−a, a] for some constantc 1 . Since u(z) is a parabola with roots ±a, u is symmetric. As a consequence we obtain M = v − m. By definition of M = zu(z)dz = 0, whence v = m. Hence the shape is given by Then the following consideration determines the boundary of the support, a, If there were travelling pulse solutions of this form they would satisfy the same equations as above. Then, The continuity of the sum suggests that u 1 (0) = u 2 (0) implies m = v. But then We solve this expression for a > 0 and find a = 3 √ 12ǫ. A comparison of the support of the adjacent solutions and the support of segregated solutions, cf. Eq. (28), shows that the adjacent solutions in fact only touch. Figure 11 shows the formation of two travelling pulses. 
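The propagation speed of the travelling pulses discussed above can be estimated from the drift of the centre of mass between two snapshots of the numerical solution and compared with the prediction v = m_R − m_L quoted for the triple pulse (or v = m for the two-pulse case). The helper below is a sketch; names and usage are illustrative.

```python
import numpy as np

def centre_of_mass(f, x):
    return np.sum(x * f) / np.sum(f)

def pulse_speed(f_early, f_late, x, t_early, t_late):
    """Propagation speed estimated from the drift of the centre of mass."""
    return (centre_of_mass(f_late, x) - centre_of_mass(f_early, x)) / (t_late - t_early)

# For the triple-pulse example above (m_L = 1/3, m_R = 2/3) one would expect the
# estimate to be close to v = m_R - m_L = 1/3.
```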
We start with two indicator functions as initial data and let the system evolve. At about time t ≈ 2 we observe a fully established pulse profile. We let the system evolve further and compare the We transform to co-moving coordinates, z = x−vt, and obtain the following conditions for the profile whence we obtain Here Similarly, the profiles of the second species are given by and Again, we use the fact that the sum of both densities has to be continuous, i.e. as well as Eqs. (30) hold. We consider the case of strictly segregated solutions first, i.e. b < −c, and c < d. Since then ρ(±c) = 0, we may deduce from Eq. (29) that v = m R − m L for the speed of propagation and for the shape of the first species (c is determined by the mass condition, Eq. (31)). Furthermore we obtain in terms of b. Similarly, we can get an expression for e in terms of d, i.e. Using the expression for a, we obtain Note that Eqs. (32) completely determine the support and the profiles of the pulses. Figure 12 shows the formation of a triple pulse solution. We choose characteristic functions as initial data (dotted). The mass on the left is m L = 1/3 and, respectively, m R = 2/3 on the right. After some time the pulse profile is established. We compare the system (blue and red) at time t = 9 and time t = 24 with the analytical expression derived above (black). The figure displays a great agreement between our numerical result and the analytical. Once the profile is fully established it moves to the right at a constant speed. The numerical velocity is given by ∆x/∆t = 5/15 = 1/3. This is in perfect agreement with the analytically obtained results, i.e. v = m R − m L = 2/3 − 1/3 = 1/3. At this stage, let us draw our attention to two special cases. Remark 3 (Maximal M 2 ). Let us get back to the general case. We study the interval of M 2 . Assuming ǫ fixed, b = −c yields On the other hand, c = d gives where v = m R − m L , as above. It is worthwhile noting that in the case m L = m R both M 2,max and M 2,min coincide with Eqs. (24,25) for the stationary state. Parallel to the consideration for (partially) adjacent steady states of the attractiverepulsive system we also find the existence of adjacent travelling pulse solutions. 5. Generality. This section is dedicated to the study of more general or realistic potentials to understand whether the behaviours observed above are specific to our interaction potentials. Different cross-interaction and self-interaction potentials will be investigated. Even though analytic expressions for the steady states and travelling pulses seem no longer avaiablable, the behaviours are indeed generic and, in fact, even richer than the above particular model. Different cross-interactions. Let us begin by considering different crossinteraction potentials. We regard two types of potentials -power-laws and Morse-like potentials decaying at infinity, i.e. where p ∈ {1/2, 1, 3/2}. This choice of potentials is motivated as they are similar to the Newtonian cross-interaction. In both cases, we observe a very similar behaviour both in the mutually attractive case and the attractive-repulsive case, respectively. Figure 13 displays the Batman profile for different cross-interaction potentials. In all simulations the same initial data, mass, and cross-diffusivity were used. Each steady state features the salient characteristics observed in the case W cr = |x|, i.e. a region of coexistence surrounded by regions inhabited by only one species. 
From the steady states we can also infer another piece of information: profiles of the second type exist, and the point of bifurcation depends on the potential, since only Figure 13(d) exhibits a profile of the second type. Similarly, we observe a symmetrising effect for small cross-diffusivities, as well as asymmetric profiles. Different self-interactions. Here, we keep the cross-interaction potentials fixed as W12 = |x| = ±W21 and consider different self-interaction potentials of the form W(x) = |x|^p/p, for p ∈ {3/2, 2, 4}. In each case we observe very similar behaviour. We obtain the same variety, including both Batman profiles and profiles of the second type. Again we observe that the system is symmetrising, although for a different ǫ^(1). In the attractive-repulsive case we likewise observe the characteristic profiles and the formation of pulses. 6. Conclusions. In this paper we introduced a system of two interacting species with cross-diffusion. We used a positivity-preserving finite-volume scheme in order to study the system numerically. For a specific choice of potentials, the steady states can be constructed with parameters governed by algebraic equations. The numerically simulated and the analytically constructed stationary states and travelling pulses were found to agree with each other. Using the same scheme, the model was explored for related potentials, and the behaviours observed for the specific potentials turned out to be generic when the cross-interaction potentials or the self-interaction potentials were exchanged. While this paper gives a first insight into what qualitative properties can be expected from models taking the general form (1), there is still a lot of analytical work to be done. First and foremost, it is still an open problem to show existence of solutions to such systems. The formal gradient flow structure is lost when the cross-interaction potentials W12 and W21 are not proportional to each other, and the main problem is to find the right estimates for the individual species, since we only control the gradient of the sum of the densities.
Age Structural Transitions and Copayment Policy Effectiveness: Evidence from Taiwan’s National Health Insurance System Background: Population ageing is a worldwide phenomenon that could influence health policy effectiveness. This research explores the impact of age structural transitions on copayment policy responses under Taiwan’s National Health Insurance (NHI) system. Methods: The time-varying parameter vector autoregressive model was applied to create two measures of the copayment policy effectiveness, and multiple linear regression models were used to verify the nonlinear effect of age structural transitions on copayment policy responses. Results: Our results show that copayment policy effectiveness (in terms of the negative response of medical center outpatient visits to upward adjustments in copayment) is positively correlated with the proportions of the population in two older age groups (aged 55–64 and ≥ 65) and children (age < 15), but negatively correlated with the proportion of the population that makes up most of the workforce (aged 15‒54). These tendencies of age distribution, which influence the responses of medical center outpatient visits to copayment policy changes, predict that copayment policy may have a greater influence on medical center outpatient utilization in an ageing society. Conclusions: Policymakers should be concerned about the adverse effects of copayment adjustments on the elderly, such as an increasing financial burden and the effect of pricing some elderly patients out of Taiwan’s NHI system. Background Taiwan's National Health Insurance (NHI) system is a government-implemented social insurance program providing universal healthcare coverage (based on pay-as-you-go principles) to all residents in Taiwan. This system has encountered tremendous financial difficulties over the past two decades due to the nature of publicly financed healthcare systems [1]. A sizeable proportion of total medical care expenditures was spent on outpatient care services in Taiwan's NHI system since it was implemented in 1995. In 2016, approximately 70% of total medical expenditures were for outpatient care services, with 58.25% of this going to medical centers (23.51%), regional hospitals (23.84%) and district hospitals (10.90%) [2]. These three levels of hospitals have different responsibilities within Taiwan's NHI system [3,4]. In particular, medical centers are oriented to deal with the most complex diseases, and to support teaching and research in clinical practices. District hospitals and regional hospitals are responsible for secondary and tertiary care, respectively. In addition to hospitals, local clinics in Taiwan are built to deal with primary care. Over 80% of children (age < 15) receive their outpatient care from the local clinics, and the elderly (aged 65 and older) and youth (aged [15][16][17][18][19][20][21][22][23][24] contribute the largest (approximately 36.88-38.90%) and smallest shares (approximately 3.03-3.83%) of total outpatient care visits, respectively, to medical centers, regional hospitals, and district hospitals [2,5]. It is important to note that the reimbursement payments per outpatient visit to district hospitals (NT$ 1770 or about USD 59), regional hospitals (NT$ 2445 or about USD 82) and medical centers (NT$ 3261 or about USD 109) were 2.37-4.36 times higher than those made to local clinics (NT$ 748 or about USD 25) in 2016 [2]. 
With the ageing population in Taiwan, the referral system becomes an important issue to consider, with regard to the financial difficulties of Taiwan's NHI system. In order to strengthen the referral system in a way that could reduce outpatient care expenditure in hospitals (particularly in medical centers), Taiwan's Ministry of Health and Welfare (MOHW) has adjusted copayments for outpatient care multiple times since 1995 [6,7]. During our study period, from January 1998 to December 2015, Taiwan's MOHW revised the copayment policies for outpatient care utilization in 1999, 2002 and 2005 [6,7]. The 1999 co-payment policy introduced co-payments (up to NT$ 100 or about USD 3.33) for prescription drugs, and an additional NT$ 50 (about USD 1.67) copayment fee per visit for excessive outpatient visits (more than 24 visits per year), while the 2002 copayment policy further increased the copayment fees for outpatient care visits in medical centers and regional hospital from NT$ 150 (about USD 5) to NT$ 210 (about USD 7), and from NT$ 100 (about USD 3.33) to NT$ 140 (about USD 4.67), respectively. In 2005, the design of the copayment policy for outpatient care utilization under Taiwan's NHI system offered a price-differentiating mechanism, leading patients to select healthcare providers from a local clinic for their first outpatient visit, instead of going directly to a hospital. This price-differentiating mechanism was designed to raise the copayment fees for outpatient visits at medical centers, regional hospitals and district hospitals to NT$ 360 (about USD 9), NT$ 240 (about USD 8) and NT$ 80 (about USD 2.67), from NT$ 210 (about USD 7), NT$ 140 (about USD 4.67) and NT$ 50 (about USD 1.67), respectively, if the patient was not referred from a local clinic. Those who were referred to hospitals from a local clinic paid copayment fees at the pre-2005 rates at hospitals (i.e., medical centers, regional and district hospitals) [6,7]. The design of the price-differentiating mechanism, intended to change patients' behavior when seeking outpatient care, was also used for the new copayment policy for outpatient care (effective 15 April 2017). Specifically, the new co-payment fees for direct visits to medical centers, regional hospitals and district hospitals are 2.47, 2.40 and 1.6 times higher, respectively, than for those with referrals, and this policy also creates a price gap of NT$ 250 (or about USD 8.33) per outpatient visit, between the price for direct visits and the price for those with referrals to medical centers, an amount much higher than the price for regional hospitals (NT$ 140 or about USD 4.67) and district hospitals (NT$ 30 or about USD 1.00) [6]. The effectiveness of the new copayment policy in reducing outpatient care expenditure depends on how patient behavior in seeking outpatient care responds to the changes in medical center outpatient visit prices under Taiwan's NHI system. In this study, we applied the time-varying parameter vector autoregressive model (developed by Nakajima [8]) and multiple linear regression models, in order to specifically explore the impact of age structural transitions on copayment policy effectiveness in medical center outpatient care. 
This was done because the recent nonreferral copayment policy under Taiwan's NHI system focused on decreasing medical center outpatient care utilization, due to its significant contribution to total outpatient care expenditure, and a substantial reimbursement-payment-per-visit gap between medical center and clinic outpatient visits [2,6]. In addition, we chose Taiwan as our study area for examining the impact of age structural transitions on copayment policy effectiveness because the Taiwanese population is now experiencing rapid population ageing, and it is predicted that it will take only 33 years to move from an ageing to a hyper-aged society [9]. Since our time-varying parameter vector autoregressive model is capable of dealing with the changes in outpatient care utilization related to the changes in copayment policies made during our study period, the results generated in this study can provide insight into the influence of population ageing on the healthcare system. Literature Reviews Recent studies on the effectiveness of copayment policy in curbing healthcare expenditure suggest a significantly negative relationship between copayment and healthcare utilization. For example, Kill and Houlberg conducted a systematic literature review investigating the influence of copayments on healthcare demand. Their review identified 47 eligible studies on the behavioral effects of copayment, and the majority of their reviewed studies suggested that copayments reduce outpatient care utilization [10]. Recent systematic literature reviews, such as Kolasa and Kowalczyk [11] and Sensharma and Yabroff [12], investigating the relationships between patient cost-sharing of prescription drugs, healthcare utilization and health outcomes, suggested that an increase in patient cost-sharing of prescription drugs not only decreases prescription drug utilization, but also increases the risk of worsening health outcomes (in terms of deteriorating adherence to prescription drugs), and increases the demand for healthcare services, such as the emergency room, outpatient services and inpatient care services. Another strand of the literature, on the effect of user fee (consisting of copayment and cost-sharing) changes on healthcare utilization, applied a difference-in-differences (DID) regression model to evaluate the effect of user fees on the utilization of various healthcare services. In general, a negative association between user fees and healthcare utilization was found for inpatient care services [13][14][15], outpatient care services [14,[16][17][18][19][20][21][22], long-term care utilization [23,24], psychiatric care services [25,26], rehabilitation care services [27] and prescription drug usage [28][29][30]. Taiwan's MOHW has modified the copayment policies for outpatient care utilization many times since Taiwan's NHI program was implemented in 1995 [6,7]. Many studies have focused on the response of outpatient care utilization to the user fee change under Taiwan's NHI system. For instance, Yang, Tsai and Tien investigated the effect of persistent behavior and cost-sharing policy on outpatient care utilization by the elderly in Taiwan, and the results obtained from their dynamic panel count data model indicated that the elderly Taiwanese population is more price-sensitive in the long run than in the short run, and that therefore the effects of copayment intervention on elderly outpatient care utilization would be more effective in the long run than in the short run [31]. 
Liu, Hsu and Huang adopted a conventional time series model to examine the determinants of health expenditure in Taiwan; their results show that upward adjustments in copayments for healthcare services covered by the NHI in 1999 had a significant impact, in terms of curbing healthcare expenditure [32]. A significantly negative relationship between the 1999 copayment adjustments and outpatient care utilization was also found in Huang and Tung's study [33]. Additionally, Chen, Schafheutle and Noyce [34], and Chen, Bermell and McMullen [7], evaluated the impact of nonreferral outpatient copayments on outpatient care utilization (effective July 15, 2005), based on the estimation of the segment time series model and the sample selection model for count data, respectively. The results generated by these two studies also suggest a negative relationship between copayments and outpatient care utilization. Although the empirical findings from the aforementioned studies suggest that copayment policy may be effective in decreasing healthcare utilization, the link between age structural transitions and copayment policy effectiveness remains unclear in the literature on health economics and policy. This is becoming a particularly important issue as population ageing becomes a worldwide phenomenon. In fact, recent studies, such as those by Chen [35] and Imam [36], have begun to establish the influence of age structural transitions on monetary policy, in the form of making adjustments to the money supply by manipulating interest rates in order to achieve relevant economic objectives, such as economic growth or the reduction of unemployment rates. In order to make up for the deficit in studies on how population ageing impacts copayment policy effectiveness, we closely followed the empirical procedures suggested by Chen [35] to examine the impact of age structural transitions on copayment policy effectiveness. Accordingly, a time-varying parameter vector autoregressive model was employed to create time-varying impulse responses of outpatient care utilization to copayment adjustments, and this allowed us to construct two measures of copayment policy effectiveness (i.e., maximum and accumulative maximum responses of medical center outpatient visits to upward adjustments in copayments). Time-Varying Parameter Vector Autoregressive Model Since the patient has complete freedom to choose healthcare providers, and the recent nonreferral copayment policy (effective 15 April 2017) focuses on decreasing outpatient visits to medical centers, our investigation into the impact of age structural transitions on the response of copayment change to outpatient utilization is based on the demand for medical center outpatient visits, as follows: where q m t denotes outpatient visits per capita in medical centers, and p m t , p r t , p d t and p c t represent copayments per outpatient visit in medical centers, regional hospitals, district hospitals and local clinics, respectively. W t is monthly regular earnings, and ζ t is the residuals. It is important to note that copayments per outpatient visit include the copayments for outpatient care services and prescription drugs [6]. Prior research on the influence of age structural transitions on policy responses utilizes time-varying impulse response functions to measure the responses of policy adjustments to variables of interest [35,36]. 
Following this line of research, we also applied the time-varying parameter vector autoregressive model (developed by Nakajima [8]) to estimate the time-varying impulse response functions that describe the response of medical center outpatient visits to upward adjustments in copayments. Thus, the time-varying impulse responses of medical center outpatient visits to one standardized unit shock in copayment, across different types of hospitals and local clinics (generated by the time-varying impulse response functions), were used to measure copayment policy effectiveness. For the sake of brevity, we will skip the technical details of the specification and estimation process of the time-varying parameter vector autoregressive model, and refer any interested reader to the online Supplementary Materials. Several model specification tests, such as the Hansen instability Lc, Exp-F, Ave-F and Sup-F tests, were employed to test for the null hypothesis of parameter stability within the vector autoregressive system for our time series data used in this study (see Chen [35] for details). Effect of Age Structural Transitions on Copayment Responses Since the effect of age structural transitions on policy effectiveness has been explained by previous studies [35,36], we specify the nonlinear relationship between age distribution and copayment policy effectiveness as follows: where imq g it (g = m or a) was generated from the time-varying impulse response functions by estimating the time-varying parameter vector autoregressive model. It represents the effect of a change in copayment per medical center outpatient visit to various types of healthcare providers on medical center outpatient visits at time t, and its impacting time scale i (1, 2, 3, . . . , 12 months). imq m it denotes the maximum (based on the minimal negative principle) response of medical center outpatient visits to one standardized unit change in the copayment per medical center outpatient visit, over the succeeding 12 months. Conventional microeconomic theory predicts that, other things being equal, an increase in copayment per medical center outpatient visit should decrease the number of medical center outpatient visits. Thus, negative values of imq m it suggest that the copayment policy is effective in terms of an increase in the copayment per medical center outpatient visit. Previous studies (such as Chen, Chi and Lin [3], and Chen, Liang and Lin [4], which modeled the discrete choice demand for outpatient care under Taiwan's NHI system) suggested that regional hospitals and medical centers could be classified as one group of healthcare providers, due to the minor differences in the first contact user fees between these two providers. Following this line of classification, regional hospital outpatient care could be substituted for medical center outpatient care. In addition, one may expect that the outpatient care provided by regional hospitals, district hospitals and local clinics is complementary to that provided by medical centers, because local clinics (emphasizing primary care), district hospitals (dealing with secondary care), regional hospitals (responsible for tertiary care) and medical centers (treating the most complex diseases) were designed to constitute a (noncompulsory) referral system under Taiwan's NHI system. 
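For readers who want to reproduce the general shape of this machinery, the following is a minimal sketch that uses a constant-parameter VAR as a stand-in for Nakajima's Bayesian time-varying parameter VAR with stochastic volatility (which is what the paper actually estimates). The data file and column names are hypothetical; the sketch only illustrates how 12-month responses of medical-center visits to a one-standard-deviation copayment shock are read off an estimated system.

```python
# Simplified stand-in for the TVP-VAR: a constant-parameter VAR with one lag,
# estimated on standardized series, followed by orthogonalized impulse responses.
# The file "monthly_series.csv" and its column names are hypothetical.
import pandas as pd
from statsmodels.tsa.api import VAR

cols = ["visits_mc", "copay_mc", "copay_rh", "copay_dh", "copay_clinic", "earnings"]
df = pd.read_csv("monthly_series.csv", parse_dates=["date"], index_col="date")

z = (df[cols] - df[cols].mean()) / df[cols].std()   # standardized variables, as in the paper

res = VAR(z).fit(1)                                  # one lag, as selected in the paper
irf = res.irf(12)                                    # responses over the following 12 months

# Response of medical-center visits to a one-standard-deviation shock in its own copayment
resp = irf.orth_irfs[1:, cols.index("visits_mc"), cols.index("copay_mc")]
print(resp)                                          # months 1..12 after the shock
```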
In order to examine the possible relationships between outpatient care provided by different healthcare providers, we considered the accumulative maximum responses, of medical center outpatient visits (symbolized by imq a it ) to the simultaneous change in copayment per visit by one standardized unit, from four providers (namely, medical centers, regional hospitals, district hospitals and local clinics). This procedure incorporates the same definition of copayment policy effectiveness as was used in generating the maximum response of medical center outpatient visits to the change in the copayment per medical center outpatient visit over the succeeding 12 months ( imq m it ), with negative values of imq a it indicating copayment policy effectiveness, with regard to a simultaneous increase in the copayment per outpatient visit from all providers. Furthermore, the proportion of the population in each individual age group w at time t (t = 1, 2, 3 . . . , T) was denoted by p wt (w = 1, 2, 3, . . . , W), and cv t indicates the control variables. α g 0 , φ g w , and α g 1 are the parameters corresponding to the constant term, the proportion of the population in age group w, and control variables, respectively. ξ g t represents residuals. It is important to note that the model specification in Equation (2) includes proportions of the population from all age groups ( p wt , w = 1, 2, . . . , W), so we were unable to estimate our empirical model due to the perfect collinearity problem of age distribution. To avoid this problem, we imposed some parametric restrictions on the φ g w parameters in Equation (2), based on Fair and Dominquez's method, to estimate coefficients of φ g w (w = 1, 2, 3, . . . , W) [37]. In this way, we could apply the delta method to obtain the standard errors of φ g w (w = 1, 2, 3, . . . , W), and 90% confidence intervals for the estimated coefficients of φ g w (w = 1, 2, 3, . . . , W) could be established accordingly. Those estimated coefficients enabled us to portray the effects of age structural transitions on copayment policy responses. The validation of statistical inferences generated from Equation (2) relies on the stationarity of time series data. In this study, we utilized the newly developed Fourier unit root test proposed by Chang, Lee and Chou [38]. The Fourier unit root test has been proven to perform better than conventional unit root tests, such as the Dickey-Fuller test (a special case of the Fourier unit root test), in terms of the size and power properties of test statistics [38,39]. The results generated from the Fourier unit root tests suggest that most of our time series data support a rejection of the null hypothesis of the conventional Dickey-Fuller unit root specification, in favor of the alternative hypothesis of the Fourier specification, and the stationarity of all variables used in this study is confirmed. For the sake of brevity, we once again skip the technical details of the model specifications of Fair and Dominquez's method [37] and the Fourier unit root test [38,39], as well as their estimated empirical results. We refer any interested reader to the online Supplementary Materials. Data and Variables Data for this research came from Taiwan's National Health Insurance Research Database (NHIRD) [40], the Demographic Statistics Database (DSD) [41], and the Macroeconomics Statistics Database (MSD) [42], administered by the Taiwanese government. 
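A sketch of the Fair–Dominguez device mentioned above may be helpful. Because the seven age shares sum to one, their coefficients cannot be estimated freely; the restriction places the φ_w on a low-order polynomial in the group index with coefficients summing to zero, which collapses the shares into a small number of constructed regressors. The polynomial order the authors used is given only in their supplementary material, so the quadratic case below is an assumption, and all inputs are simulated placeholders.

```python
# Fair-Dominguez-style restriction (quadratic case assumed):
#   phi_w = g0 + g1*w + g2*w^2 with sum_w phi_w = 0, w = 1..7,
# which removes g0 and turns the seven collinear shares into two regressors Z1, Z2.
import numpy as np
import statsmodels.api as sm

W = 7
w = np.arange(1, W + 1)                      # age-group index 1..7

def age_regressors(P):
    """P: (T, 7) array of age-group population shares (rows sum to one)."""
    z1 = P @ w      - w.mean()      * P.sum(axis=1)
    z2 = P @ w**2   - (w**2).mean() * P.sum(axis=1)
    return np.column_stack([z1, z2])

# Hypothetical inputs: y = effectiveness measure, P = age shares, C = controls (CHR, UR, FLPR)
rng = np.random.default_rng(0)
T = 72
P = rng.dirichlet(np.ones(W), size=T)
C = rng.normal(size=(T, 3))
y = rng.normal(size=T)

X = sm.add_constant(np.hstack([age_regressors(P), C]))
fit = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})   # Newey-West standard errors

g1, g2 = fit.params[1], fit.params[2]
g0 = -(g1 * w.sum() + g2 * (w**2).sum()) / W   # implied by the summing-to-zero restriction
phi = g0 + g1 * w + g2 * w**2                  # recovered age-group coefficients
print(phi)
```

Standard errors and 90% confidence intervals for the recovered φ_w would then follow from the delta method applied to the estimated (γ1, γ2), as described above.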
This study uses secondary data (i.e., monthly aggregate healthcare utilization data for all residents in Taiwan), and did not involve any human participants and/or tissue. The data collection process was approved by the Research Ethics Committee of Taichung Tzu Chi Hospital, with the Certificate of Exempt Review ID: REC106-28. Prior to modeling the responses of medical center outpatient visits to changes in copayment per outpatient visit for various types of healthcare providers, we defined and calculated the variables used in this study as follows: First, monthly outpatient visits per capita were computed as the monthly total outpatient visits to medical centers divided by the monthly total population. Second, copayments per outpatient visit to various providers (medical centers, regional hospitals, district hospitals and local clinics) were calculated as the monthly total copayment divided by the monthly total outpatient visits. Third, data for total outpatient visits and total copayments to various providers were retrieved from the NHIRD. Fourth, monthly regular earnings and total population per month were obtained from the MSD and DSD, respectively. Fifth, all the price variables were transformed into real price variables, at the 2011 price level, using the appropriate price indices (such as medical price index and labor wage index), and monthly medical center outpatient visits per capita were also transformed into annual outpatient visits by multiplying by 12. The responses of medical center outpatient visits to changes in copayment per outpatient visit for various types of healthcare providers, obtained by the time-varying parameter vector autoregressive model, were utilized to generate two measures of copayment policy effectiveness. The first is the maximum response (based on the minimal negative principle) of medical center outpatient visits over the 12 months to a change, by one standardized unit shock, in copayment per medical center outpatient visit. The other is the accumulative maximum response of medical center outpatient visits per capita to the simultaneous increase in copayment per outpatient visit, by one standardized unit of copayment, from all providers (i.e., medical centers, regional hospitals, district hospitals and local clinics). These unique measures were first generated using a total of 216 monthly observations over the period of January 1998-December 2015. They were then transformed into quarterly data (by average, resulting in a total of 72 items of quarterly data, from the first quarter of 1998 to the fourth quarter of 2015) to serve as the dependent variable in the multiple linear regression model, while matching the quarterly frequency of some control variables used in the multiple linear regression model. The independent variables used in the multiple linear regression model are quarterly age distribution data, which include proportions of the population in seven age-specific groups (i.e., under 15, aged 15-24, aged 25-34, aged 35-44, aged 45-54, aged 55-64 and 65 or over). These data were obtained from the DSD and MSD. Control variables, such as the contributions of the healthcare and social service sector to economic growth (measuring the prosperity of the healthcare industry), the unemployment rate (measuring business cycles), and the female labor participation rate (measuring socioeconomic transitions), were all retrieved from the MSD. 
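The five construction steps listed above can be written compactly. The sketch below is illustrative only, with hypothetical file and column names standing in for the NHIRD, DSD and MSD extracts.

```python
# Illustrative variable construction; "monthly_raw.csv" and its columns are hypothetical.
import pandas as pd

raw = pd.read_csv("monthly_raw.csv", parse_dates=["date"], index_col="date")
# expected columns: visits_mc/rh/dh/clinic, copay_total_mc/..., population,
# wage, mpi (medical price index), wage_index

base_mpi = raw.loc["2011", "mpi"].mean()            # 2011 base for real prices
base_wix = raw.loc["2011", "wage_index"].mean()

out = pd.DataFrame(index=raw.index)
out["visits_pc"] = raw["visits_mc"] / raw["population"] * 12         # annualized per-capita visits
for p in ["mc", "rh", "dh", "clinic"]:
    per_visit = raw[f"copay_total_{p}"] / raw[f"visits_{p}"]         # copayment per visit
    out[f"copay_{p}"] = per_visit * base_mpi / raw["mpi"]            # deflated to 2011 prices
out["earnings"] = raw["wage"] * base_wix / raw["wage_index"]

quarterly = out.resample("QS").mean()               # monthly -> quarterly averages
print(quarterly.head())
```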
Time-Varying Parameter Vector Autoregressive Model The upper part of Table 1 shows that the mean of medical center outpatient visits per capita was about 1.143 visits, and the average copayments per outpatient visit (at the 2011 price level) to medical centers, regional hospitals, district hospitals and local clinics were approximately NT$ 189, NT$ 157, NT$ 95 and NT$ 58, respectively. Monthly regular earnings at the 2011 price level are about NT$ 37,463. In addition, four parameter stability tests for the null hypothesis of the time-invariant parameter vector autoregressive model against the time-varying parameter vector autoregressive model are shown in the lower part of Table 1. The Sup-F, Ave-F, Ave-F and Hansen instability L c statistics [43,44] generated p values lower than the 5% significance level in all six equations within the vector autoregressive system. Therefore, the null hypothesis of parameter stability in the vector autoregressive system was soundly rejected, and these results validated the use of the time-varying parameter vector autoregressive model to evaluate the responses of medical center outpatient visits to the changes in copayment per outpatient visit for various types of healthcare providers. 2.395 (0.00) † Note: 1 USD = 30 NT$. The whole sample period spanned from January 1998 to December 2015, generating a total of 216 monthly observations. ‡ Standardized variables were used to estimate the time-varying parameter vector autoregressive (TVP-VAR) model. One lag was selected by the convergence of TVP-VAR model; VAR is the abbreviation for "vector autoregressive", and VAR(1) means the VAR model with one lag period. The p values for the Sup-F Ave-F and Exp-F tests were calculated based on Hansen [44]. The p values for L c were calculated based on Hansen [43]. The propagation mechanisms of copayment impact, over the time scale of 1-12 months during the period from January 1998 to December 2015, are displayed in Figure 1. The impulse responses of medical center outpatient visits to a positive shock of copayment per medical center outpatient visit, for the 3-to 12-month time scale, were all negative during our study period (see Figure 1a). These findings result in negative values for the maximum (based on the minimal negative principle) responses of medical center outpatient visits, over the 12 months following a positive shock to copayment per medical center outpatient visit (see Figure 1b). These results imply a negative price elasticity of demand for outpatient care in medical centers. Contrarily, the impulse responses of medical center outpatient visits to a positive shock of copayment per regional hospital outpatient visit, for the 3-to 12-month time scale, were all positive during the period from January 1998 to December 2015 (see Figure 1c). These findings show positive values in maximum (based on the maximal positive principle) responses of medical center outpatient visits, over the 12 months following a positive shock to copayment per regional hospital outpatient visit (see Figure 1d). These results indicate a positive cross-elasticity between medical center outpatient care and regional hospital outpatient care, and hence, the outpatient care provided by these two providers is interchangeable. As indicated in Figure 1e, the impulse responses of medical center outpatient visits to a positive shock of copayment per district hospital outpatient visit tend to be negative, for the 3-to 12-month time scale, during our study period. 
Therefore, we used the minimal negative principle to define maximum responses of medical center outpatient visits, over the 12 months following a positive shock to copayment per district hospital outpatient visit. It follows that maximum responses of medical center outpatient visits, over the 12 months following a positive shock, to copayment per district hospital outpatient visit were negative during most of our study period (see Figure 1f). Moreover, the impulse responses of medical center outpatient visits to a positive shock of copayment per local clinic outpatient visit, from the 3-to 12-month time scale, were all negative during our observed period (see Figure 1g). These findings show negative values in maximum (based on the minimal negative principle) responses of medical center outpatient visits, over the 12 months following a positive shock, to copayment per local clinic outpatient visit (see Figure 1h). These results suggest a negative cross-elasticity between medical center outpatient care and local clinic outpatient care, and therefore, the outpatient care provided by these two providers is likely to be complementary. The relationship between medical center outpatient visits and copayment per outpatient visit, for the four providers, was summarized as the accumulative maximum response of medical center outpatient visits to a simultaneous increase in copayment per visit by one standardized unit, for medical centers, regional hospitals, district hospitals and local clinics (see the red dotted line in Figure 1b). The negative values in the accumulative maximum response of medical center outpatient visits to copayment adjustments reveal that patients were responsive, and medical center outpatient visits reduced as the prices of outpatient care per visit were adjusted upwards under Taiwan's NHI system. Effect of Age Structural Transitions on Copayment Responses As shown in Table 2, the average maximum impulse responses of medical center outpatient visits to one standardized unit change of copayment per outpatient visit, for medical centers, regional hospitals, district hospitals and local clinics, were −0.036, 0.015, −0.015 and −0.055, respectively, and the means of accumulative maximum impulse responses of medical center outpatient visits to simultaneously increasing copayments per outpatient visit, by one standardized unit, for the four providers was −0.090, during the period from the first quarter of 1998 to the fourth quarter of 2015. These results are accordant with the interrelationship between medical center outpatient visits and copayments per outpatient visit from the four providers, illustrated in Figure 2. The average proportions of the population in seven age-specific groups during our observed period ranged from 9.9% (aged 65 and above) to 19.9% (age < 15). The average contribution of the health and social services sector to economic growth was about 8.3% over the period from the first quarter of 1998 to the fourth quarter of 2015. This result shows that Taiwan's healthcare industry was prosperous during our study period, considering the average 4.07% economic growth rate during the same period [45]. The means of unemployment rate and female labor participation rate were approximately 4.19% and 48.48%, respectively. The former result (unemployment rate less than 5%) shows that Taiwan was in full employment status, and the latter result indicates that females played an important role in productivity during our study period. 
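Given the impulse responses at a single date, the two effectiveness measures can be read off as in the following sketch, where the (12 × 4) array `irf` is hypothetical: rows are the 1-to-12-month horizons and columns are shocks to copayment at medical centers, regional hospitals, district hospitals and local clinics. The minimal-negative principle is applied to the own-, district- and clinic-copayment columns and the maximal-positive principle to the regional column, matching the conventions above; the accumulative measure is read here as the sum of the four extremes, which is consistent with the Table 2 means (−0.036 + 0.015 − 0.015 − 0.055 ≈ −0.090).

```python
# Sketch of the two effectiveness measures from one date's impulse responses.
import numpy as np

PROVIDERS = ["medical_center", "regional", "district", "clinic"]
PRINCIPLE = {"medical_center": "min", "regional": "max", "district": "min", "clinic": "min"}

def effectiveness_measures(irf):
    """irf: hypothetical (12, 4) array of monthly responses of medical-center visits
    to a one-standard-deviation copayment shock at each of the four providers."""
    extremes = {}
    for j, p in enumerate(PROVIDERS):
        path = irf[:, j]
        extremes[p] = path.min() if PRINCIPLE[p] == "min" else path.max()
    mrm = extremes["medical_center"]      # maximum response, minimal-negative principle
    cum_max = sum(extremes.values())      # accumulative maximum across all four providers
    return mrm, cum_max
```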
The empirical results for the multiple linear regression model are presented in Table 3. It is important to note that the negative values of the (accumulative) maximum responses of medical center outpatient care visits to the (simultaneous) change of copayment per outpatient visit for providers (medical centers, regional hospitals, district hospitals and local clinics) indicate copayment policy effectiveness.
As shown in Table 3, the proportions of the population in seven age-specific groups have statistically significant impacts on copayment policy effectiveness. Figure 2a further illustrates the inverse U shape of the effect of age distribution on maximum responses of medical center outpatient care visits to the change of copayment per medical center outpatient visit. Figure 2b demonstrates the same shape for the effect of age distribution on accumulative maximum responses of medical center outpatient care visits to the simultaneous change of copayment per outpatient visit for all providers (namely, medical centers, regional hospitals, district hospitals and local clinics). Specifically, the proportions of the population in four major working age groups (i.e., aged 15-24, aged 25-34, aged 35-44 and aged 45-54 groups) are positively correlated with these two measures of copayment policy effectiveness, but the proportions of the population in the two older age groups [the elderly (age ≥ 65) and aged 55-64] and the children group (age < 15) have negative effects on the accumulative maximum and maximum responses of medical center outpatient visits to copayment policy change. In addition, the prosperity of the healthcare and social services industries could reinforce copayment policy effectiveness, since the estimated coefficients of the contribution of the healthcare and social services sector to economic growth is significantly negative. The estimated coefficients of the female labor participation rate are significantly positively at the 1% significance level, indicating that female participation in productivity would mitigate the effectiveness of copayment policy, possibly due to increased income resources coming into the household. .890 *** −3.160 −6.791 *** † MRM represents the maximal (based on minimal negative principle) response of medical center outpatient visits per capita to a standardized unit change of the copayment per medical center outpatient visit within a 12-month period. Cum-Max denotes the accumulative maximal response of medical center outpatient visits per capita to a simultaneous increase in copayment per outpatient visit by a standardized unit for medical centers, regional hospitals, district hospitals and local clinics. *, **, *** represent the 10%, 5% and 1% significance levels, respectively. T-values were estimated through the delta method. ‡ T-values were computed by dividing the estimated coefficients by the Newey-West standard errors. CHR represents the contribution of the healthcare and social services sector to economic growth. UR and FLPR denote the unemployment rate and female labor participation rate, respectively. Discussion Since the proportions of the population making up the bulk of the workforce (ages 15 to 54) have significantly positive effects on the accumulative maximum and maximum responses of medical center outpatient visits to upward adjustments in copayment, one may expect that copayment policy would be less effective when these major working populations expand. However, the proportions of the population in the two older age groups [55-64 and the elderly (age ≥ 65)] and the children group (age < 15) generate significantly negative effects on these two measures of copayment policy effectiveness, implying that copayment policy would be more effective when the populations of children, and those aged 55 years or above, grow. 
These findings are different from previous studies investigating the impact of age structural transitions on monetary policy responses, wherein population ageing is seen to attenuate the effectiveness of monetary policy (i.e., the responses of economic growth or unemployment rates to the adjustment of interest rates) [35,36]. It is worth noting that copayments put a direct price on a very specific activity-obtaining outpatient care services from different providers-whereas the monetary policy (in terms of adjusting interest rates to accomplish some economic targets) is an increase in the price of borrowing, and has effects on a range of economic activities. Population ageing mitigates the effects of monetary policy because the elderly have fewer choices than the young. That is, the young consume, save, and invest in human capital. The elderly are likely to just consume. In the case of healthcare services, the elderly consume a lot more healthcare services, thereby making them more price-sensitive, which would discourage consumption. A recent study conducted by Nillsson and Paul showed the elastic demand for children's and adolescents' healthcare services; the response of cost-sharing to children's and adolescents' healthcare utilization is negatively associated with their parental income [19]. Note that the real wage in Taiwan continuously decreased during our study period. The real wage in 2015 was around the same level as in 1999 [46]. More severe family income constraints in recent years, due to economic fluctuation, may be one of the crucial factors that explain why copayment policy effectiveness is positively correlated with children (age < 15). In addition, there is more dependency with respect to decision-making in the elderly (age ≥ 65) and children (age < 15) age groups. One might presume that people making decisions (likely those aged 16-54) are not necessarily "perfect agents", and might be making health decisions that are not in the best interests of the dependents (age < 15, and age ≥ 65) they have charge over. This might be another reason why children (age < 15) and the two older age groups [aged 55-64 and the elderly (age ≥ 65)] are more sensitive to copayment adjustments than their counterparts. In addition, despite increases in copayment fees for outpatient visits having a lower impact on working people (aged 15-54), based on Figure 2a,b, a policy of reducing copayment fees is likely to increase health and long-term care expenditure for the elderly, and thus put a financial burden on the working groups (aged 15-54). Furthermore, evidence from our time-varying impulse response plots in Figure 1 indicated that medical center outpatient care is complementary with the outpatient care provided by district hospitals and local clinics, but could be replaced by regional hospital outpatient care. If the relationships between medical center outpatient care and outpatient care provided by other healthcare providers (such as local clinics, regional hospitals and district hospitals) are taken into account, the proportions of the population in the two older age groups [aged 55-64 and the elderly (age ≥ 65)] and children (age < 15) could be seen to have a much stronger effect on copayment policy effectiveness. This is because copayment adjustment in medical center outpatient care is most likely to raise the price of outpatient care provided by other healthcare providers, based on the expectations imposed on the economic agents. 
This is supported by the fact that the accumulative maximum responses of medical center outpatient visits to copayment policy change are much more significant than the maximum responses of medical center outpatient visits to copayment policy change, as we can see by noting that the scale of the vertical axis in Figure 2b is at least five times higher than that in Figure 2a. As a result, copayment policy would be even more effective in decreasing outpatient care utilization in medical centers, from the perspective of the dynamic interaction effect among different healthcare providers under Taiwan's NHI system. The discussion in the previous paragraph should be of particular concern, because Taiwan is now experiencing rapid population ageing in terms of its historical longevity (life expectancy above 80) and low fertility (around one child per woman) [35]. Because of the ageing trend of the population, copayment policy will influence the elderly population the most in the near future. Consequently, an effective copayment policy is most likely to increase the financial burden of the elderly, and possibly force some elderly people, particularly the poor and the sick, who have an urgent need for healthcare services, to seek alternative treatment (or self-treatment) rather than the orthodox healthcare services provided by Taiwan's NHI system [3,4,7]. It follows that fundamental equality of access to healthcare services under Taiwan's NHI would gradually diminish as the elderly population rises. Therefore, policymakers should be warned against a possible pricing-out effect (meaning that the price of healthcare services is high enough to create a barrier preventing disadvantaged groups, such as the elderly, from accessing healthcare services) due to copayment policy change. A strategic intervention to combat the adverse effects of an increase in copayment fees could involve subsidizing healthcare services for elderly people with chronic diseases or low income. This study provides two innovative contributions, beyond those of prior studies on the effect, of copayment on healthcare utilization. First, we adopted the time-varying parameter vector autoregressive model developed by Nakajima [8], coupled with the multiple linear regression model, to establish a link between age distribution and copayment policy effectiveness (in terms of the negative response of medical center outpatient visits to upward copayment adjustment) for the first time. Second, the stationarity of time series variables used in this study was proved by the newly developed Fourier unit root test, which has better power and size properties than conventional unit root tests [38,39]. Third, our multiple linear regression model includes the whole range of age distribution, rather than a single measure of population ageing, so it can assess the nonlinear effect of age structural transitions on copayment policy responses. Nevertheless, a limitation inherent in this study is the time series methodology used for our investigation into the effect of age structural transitions on policy effectiveness. We are aware that the age distribution and medical center outpatient care utilization data used for our analyses belong to aggregated data, and the empirical results obtained from our empirical models should not refer to individual behavioral change, exhibited at one's specific age in response to copayment adjustments on medical center outpatient visits, in order to avoid the ecological fallacy. 
Additionally, we did not apply other time series methodologies (such as the interrupted time series analyses for multiple groups) due to the lack of outpatient care utilization by multiple groups. Finally, the statistical inferences obtained from our time series analyses are confined to the long-term impact of age structural transitions on copayment policy effectiveness, because the change in age distribution is most likely to be a long-term process. Conclusions Copayment, referring to the specified amount that the patient has to pay for each healthcare service received, has served as a means by which to limit healthcare utilization under Taiwan's NHI system [6,7]. Previous studies of the relationship between copayment adjustment and healthcare utilization suggest that copayment adjustments have the positive effect of controlling healthcare utilization under Taiwan's NHI system [7,[31][32][33][34]. Nevertheless, Taiwanese population ageing is notoriously fast. It will take only 33 years for Taiwan's demographic to transition from an ageing to a hyper-aged society [9]. As the Taiwanese population ages, an understanding of the effect of age structural transitions on copayment policy will become important in assisting policymakers in preparing strategic interventions to decrease the adverse impact of population ageing on the healthcare system. Hence, this study enriches the health economic literature, focusing on the impact of an ageing population on the healthcare system by demonstrating the time-varying impulse responses of medical center outpatient visits to copayment policy adjustments, and the impact of age structural transitions on copayment policy effectiveness. Specifically, the results obtained from our empirical models suggest that copayment policy effectiveness (in terms of the negative response of medical center outpatient visits to upward copayment adjustment) is positively correlated with the proportions of the population in the two older age groups [aged 55-64 and the elderly (age ≥ 65)] and the children group (age < 15), but is negatively correlated with the proportion of the population aged 15-54. The tendency of age distribution to affect the responses of medical center outpatient visits to copayment policy adjustments suggests that a copayment policy could be more effective in an ageing society. Therefore, policymakers should be concerned about the adverse effects copayment adjustments will have on the elderly population, such as deteriorating health, increasing financial burdens, and pricing some elderly patients out of Taiwan's NHI system. One of the strategic schemes to limit these adverse effects of copayment adjustments is to provide a subsidy of healthcare services for the poor and those in greater need of healthcare services. We encourage future research into a tiered copayment system, and into projecting the extent of the pricing-out effect in relation to Taiwanese age structural transitions. Supplementary Materials: The following are available online at http://www.mdpi.com/1660-4601/17/12/4183/s1. This section consists of model specifications for the time-varying parameter vector autoregressive (TVP-VAR) model with stochastic volatility, the parametric age response function and Fourier unit root tests, and results obtained from the Fourier unit root tests and the parametric age response function.
Analysis of lacrimal duct morphology from cone-beam CT dacryocystography in a Japanese population Dacryocystorhinostomy (DCR) is the rst-line treatment for lacrimal duct stenosis and obstruction in western countries. Endoscopic-assisted nasolacrimal duct intubation (ENDI) is spreading steadily as a minimally invasive treatment in Northeast Asia. ENDI is prevalent in this area because Northeast Asians have relatively at facial features, with a less elevated superior orbital rim than other ethnic groups. This allows for relatively easy manipulation of a dacryoendoscope. Evidence has accumulated that the morphology and inclination of the lacrimal duct differ among individuals and ethnic groups. In this study, we collected anthropometric data from a Japanese population of 100 samples—the parameters vital for designing a dacryoendoscope probe. The data we provided was essential in designing the overall length, bending point, and curve-line of the dacryoendoscope probe. Although the Japanese data measured in this study would not be directly applicable to other ethnic groups, we hope that the parameters provided by this study will contribute to the accumulation of valuable anthropometric data for the design of endoscopic probe morphologies and the development of therapeutic devices for lacrimal tract diseases—in terms of designing optimal morphologies, specic to ethnic groups and populations. Introduction The lacrimal duct extends from the lacrimal punctum to the lower opening of the nasolacrimal duct (NLD) on the lateral wall of the inferior nasal meatus. It passes through upper and lower punctum, the superior and inferior canaliculi, and the common canaliculus to reach the internal common punctum (ICP) in the lacrimal sac. The pathway to this point passes through eyelid tissue that is mobile and elastic. The lacrimal sac (LS) is located in the lacrimal fossa. The interosseous and meatal parts of the NLD are xed tissues. Primary acquired nasolacrimal duct obstruction (PANDO) is an organic obstruction of the lacrimal duct that can occur anywhere from the punctum to the NLD opening. 1 Cases with obstruction from the punctum to the ICP are classi ed as lacrimal canaliculus obstruction, while cases with obstruction from the LS to the NLD opening are classi ed as nasolacrimal duct obstruction. Dacryocystorhinostomy (DCR) is the rst-line treatment for PANDO. Endoscopic-assisted nasolacrimal duct intubation (ENDI) is widely used as a minimally invasive treatment for lacrimal duct stenosis and obstruction in Northeast Asia. [2][3][4][5] The ENDI procedure is performed while directly observing the obstructed area in the lacrimal duct with a dacryoendoscope and observing the nasal cavity with a nasal endoscope. This reduces complications from false passage formation. Because ENDI can usually be performed under local anesthesia, it has evolved into a less invasive and safer procedure, which is one of the main reasons for its increasingly widespread use in Northeast Asia. Another reason is that Northeast Asians have relatively at facial features, with a less elevated superior orbital rim (SOR) than other ethnic groups. This allows for relatively easy manipulation of a dacryoendoscope. 6 In general, the long-term therapeutic outcomes of ENDI are not equivalent to DCR. Nevertheless, evidence has accumulated that the outcomes of ENDI are almost as effectual as DCR for canaliculus obstruction and PANDO (in cases of non-in ammatory or partial obstruction). 
2,4,5,7−9 Since ENDI is a minimally invasive procedure for the treatment of PANDO, which can be performed under local anesthesia, further studies are needed to compare the long-term treatment outcomes of DCR and ENDI in PANDO, in terms of pathological conditions (e.g., site of obstruction, cause of obstruction, and duration of obstruction). Since it was first reported in 1909, dacryocystography (DCG) has undergone improvements in contrast media, injection method (using a cannula), and imaging method. DCG is still an essential preoperative evaluation for PANDO. 10,11 Since its first application in dentistry in 1998, clinical applications of cone-beam computed tomography (CBCT) have gradually increased in the head and neck regions; CBCT is now widely used in medical facilities for dentistry, oral surgery, and otorhinolaryngology. [12][13][14][15] There are few reports on CBCT in the field of ophthalmology. Nonetheless, CBCT-DCG is a valuable test for evaluating PANDO. It has the advantage of much lower radiation exposure than conventional multi-slice CT-DCG. [16][17][18] The length and inclination of the LS and NLD differ among individuals; there are also differences between races and ethnic groups. This is the first study to report measurements of various parameters of the lacrimal duct in a Japanese population, based on CBCT-DCG. We hope that the measurements provided by this study will contribute to the accumulation of valuable anthropometric data for the design of endoscopic probe morphologies and the development of therapeutic devices for lacrimal tract diseases (in terms of designing optimal morphologies, specific to ethnic groups and populations). Results The mean age of the 102 cases was 71.3 ± 11.7 years. Among them, 74 cases were female and 28 cases were male. There were 51 cases of right-side PANDO and 51 cases of left-side PANDO. The maximum, minimum, and average values of the measured parameters are shown in Table 1. The angle formed by SOR-ICP-NLD opening The maximum value of the angle was 27° and the minimum value was −11°. The mean value was 10.2 ± 7.8°. The angle was positive in 92% (93/101) of cases, while 8% (8/101) of the subjects had a negative angle. An example image of a case with a large SOR-ICP-NLD opening angle is shown in Fig. 3. The large angle was due to the elevation of the SOR and anterior inclination of the NLD. The Shapiro-Wilk test gave a value of 0.55, indicating a normal distribution (Fig. 4A). For females, the mean was 9.9 ± 8.2°; for males, it was 10.8 ± 6.7°. There was no significant difference between males and females (p = 0.67). The length of SOR-ICP The length of LS The maximum value was 17.1 mm and the minimum value was 4.3 mm. The mean was 8.9 ± 2.3 mm. The Shapiro-Wilk test gave a value of 0.0002, indicating a non-normal distribution (Fig. 4D). For females, the mean was 8.7 ± 2.1 mm; for males, it was 9.6 ± 2.6 mm. There was no significant difference between females and males (p = 0.079). The length of NLD The maximum value was 20.7 mm and the minimum value was 5.7 mm. The mean was 13.2 ± 2.7 mm. The Shapiro-Wilk test gave a value of 0.39, showing a normal distribution (Fig. 4E). For females, the mean was 13.0 ± 2.4 mm; for males, it was 13.7 ± 3.3 mm. There was no significant difference between females and males (p = 0.17). The LS-NLD angle The maximum angle was 40° and the minimum was −43°. The mean was −6.3 ± 14.1°. The Shapiro-Wilk test gave a value of 0.30, indicating a normal distribution (Fig. 4F).
The anterior bending type represented 33.3% (31/93) of cases; 66.7% (62/93) were of the posterior bending type. Examples of cases with anterior and posterior bending are shown in Figs. 5 and 6. For females, the mean was −6.9 ± 14.5°; for males, it was −4.6 ± 12.9°. There was no significant difference between males and females (p = 0.29). Discussion The lengths of the LS and the NLD and the inclination of the LS-NLD vary among individuals and between ethnic groups. [19][20][21][22][23] In this study, we measured various parameters of the Japanese lacrimal duct using CBCT-DCG images. The average angle formed by the SOR-ICP-NLD opening was 10.2 ± 7.8°. The line formed by the SOR-ICP is the anatomical limit where the tip of a straight probe can reach most anteriorly after entering the NLD through the ICP. We confirmed that, in 92% of subjects, the line formed by the ICP-NLD opening was anteriorly inclined to the line formed by the SOR-ICP. This suggests that blind probing with a straight bougie, or manipulating a dacryoendoscope with a straight probe, is more likely to form a false passage posterior to the original lacrimal duct. Therefore, a probe with a bent anterior tip, or a curved probe, is more appropriate. In 8% of the subjects, the SOR-ICP-NLD angle was zero or negative. In such cases, a straight probe is considered more suitable than a curved one. The mean length of the SOR-ICP was 24.3 ± 3.2 mm. The mean length of the ICP-NLD opening was 21.8 ± 2.7 mm. These new parameters could be measured because CBCT-DCG depicts the ICP. Although these parameters did not follow a normal distribution, they may be helpful for optimizing the length and the curve line settings of the dacryoendoscope probe. Generally, Northeast Asians have a low development of the SOR and a relatively flat facial appearance. The angles and lengths in other ethnic groups that have a well-developed SOR may be different from our study's results. By measuring the angles and lengths of several other races and ethnic groups, it will be possible to develop dacryoendoscope probes that are more suitable for the anthropometric structure of the target population. Based on anatomical measurements in Japanese cadavers, the average length from the lacrimal punctum to the ICP is 11 mm. The average axial length of the LS is 12-15 mm, with the average diameter being 3 mm (the lumen was 1-2 mm). The average length of the NLD is 12-17 mm. 21,24 In our study, the length from the ICP to the LS-NLD transition was 8.9 ± 2.3 mm; the length from the LS-NLD transition to the NLD opening was 13.2 ± 2.7 mm. In fact, as the length from the ICP to the LS-NLD transition refers to the length of the LS body, the actual axial length of the LS can be assumed to be 2-3 mm longer (the length of the fundus of the LS). It was challenging to measure the total length of the LS because the contrast medium had already flowed out of the LS at the time of imaging. Thus, the fundus of the LS was often poorly visualized. Therefore, we measured the distance between the ICP and the LS-NLD transition, which was clearly delineated. The distance from the LS-NLD transition to the NLD opening was, in fact, the length of the bony nasolacrimal canal. The interosseous part of the NLD does not have an entirely linear structure but sometimes has a complicated and diverse curvature. Therefore, our measurements do not represent the actual length of the NLD. We acknowledge the necessity of developing a more accurate method for evaluating the length of the NLD.
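The normality checks and male/female comparisons quoted in the Results can be reproduced with standard statistical tools. The following is a minimal Python sketch, not the authors' analysis script: the measurement arrays are made up, and the use of Welch's t-test is our choice, since the paper does not state which t-test variant was applied.

# Minimal sketch of the normality check and sex comparison for one parameter
# (e.g. NLD length in mm); placeholder arrays stand in for the real CBCT-DCG data.
import numpy as np
from scipy import stats

nld_female = np.array([12.1, 13.4, 11.8, 14.0, 12.9, 13.6, 12.2, 13.1])
nld_male = np.array([13.9, 14.2, 12.8, 15.1, 13.3, 14.6])

# Shapiro-Wilk: a p-value above 0.05 is consistent with normality, which is how
# "values" such as 0.39 are interpreted in the text.
w_stat, p_norm = stats.shapiro(np.concatenate([nld_female, nld_male]))
print(f"Shapiro-Wilk p = {p_norm:.3f}")

# Two-sample (Welch) t-test for the female/male difference (p = 0.17 in the paper).
t_stat, p_sex = stats.ttest_ind(nld_female, nld_male, equal_var=False)
print(f"female vs male: t = {t_stat:.2f}, p = {p_sex:.3f}")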
Several studies have investigated the LS-NLD angle. We found a mean LS-NLD angle of −6.3 ± 14.1° (range, −43° to +40°). The average angle of the anterior bending type (33.3% of cases) was 8. Second, our measurements were obtained from two-dimensional images; in essence, we need to obtain measurements in three dimensions. It has been reported that, in coronal section, the LS is inclined laterally to the midline and the NLD is inclined medially to the LS. Approximately one-third of the NLD is medially inclined, and two-thirds are laterally inclined, relative to the midline. 25 Furthermore, the NLD does not have a linear structure but bends in complicated and diverse ways. Although we used planimetric data in sagittal section, the actual lengths of LS and NLD (and their constituent angles) should be represented in three dimensions. In future research to evaluate the parameters of the lacrimal duct, DCG images should be converted into three dimensions. Patient selection The subjects of this study were patients diagnosed with unilateral PANDO at Ehime University Hospital from December 2015 to April 2021. Diagnosis was obtained through the irrigation test, dacryoendoscopic examination, and CBCT-DCG. We retrospectively analyzed the CBCT-DCG images of the contralateral side of 102 cases diagnosed with unilateral PANDO. There were no abnormalities on the contralateral side in all the above tests. A typical example of a CBCT-DCG image, sectioning the lacrimal duct, is shown in Fig. 1. The patient had been diagnosed with left-sided unilateral PANDO. Fig. 1 shows a DCG image of the right side, contralateral to the obstructed side. In CBCT-DCG images of a sagittal section, the following parameters were evaluated: 1) the angle formed by SOR-ICP-NLD opening (Fig. 2A); 2) the length of SOR-ICP (Fig. 2B); and 3) the length of ICP-NLD opening (Fig. 2B). To measure these parameters, the following method was applied. A straight line starting from the ICP was drawn in the direction of the SOR; the tangent point on the SOR was determined. The distal end of the interosseous NLD was defined as the NLD opening. The angle formed by the line connecting the tangent point of SOR-ICP and the line connecting ICP-NLD opening was measured. Figure 5 Example in which the NLD is inclined anteriorly to the LS (anterior bending type). On the left is the original image. On the right side, the long axis of the NLD was anteriorly inclined by +14° relative to that of the LS. Abbreviations: NLD, nasolacrimal duct; LS, lacrimal sac. Figure 6 Example in which the NLD is inclined posteriorly to the LS (posterior bending type). On the left is the original image. On the right side, the long axis of the NLD was posteriorly inclined by −21° relative to the LS. Abbreviations: NLD, nasolacrimal duct; LS, lacrimal sac.
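The measurement method described above reduces to simple plane geometry once the three landmarks (the SOR tangent point, the ICP, and the NLD opening) are marked on a sagittal slice. The sketch below uses hypothetical coordinates, and the sign convention (positive when the ICP-NLD opening line is inclined anteriorly to the SOR-ICP line) is our reading of the text; it depends on the axis orientation chosen.

# Minimal sketch of the SOR-ICP-NLD opening angle and the two lengths,
# from hypothetical landmark coordinates (mm) on a sagittal slice,
# with +x pointing anteriorly and +y pointing superiorly.
import numpy as np

sor_tangent = np.array([4.0, 40.0])   # tangent point on the superior orbital rim
icp = np.array([6.0, 18.0])           # internal common punctum
nld_opening = np.array([11.0, -2.0])  # distal end of the interosseous NLD

len_sor_icp = np.linalg.norm(icp - sor_tangent)   # length of SOR-ICP
len_icp_nld = np.linalg.norm(nld_opening - icp)   # length of ICP-NLD opening

u = icp - sor_tangent    # continuation of the SOR-ICP line beyond the ICP
v = nld_opening - icp    # ICP -> NLD opening
cross = u[0] * v[1] - u[1] * v[0]
dot = u[0] * v[0] + u[1] * v[1]
angle_deg = np.degrees(np.arctan2(cross, dot))    # signed SOR-ICP-NLD opening angle

print(round(len_sor_icp, 1), round(len_icp_nld, 1), round(angle_deg, 1))  # 22.1 20.6 8.8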
2021-12-08T16:19:50.583Z
2021-12-06T00:00:00.000
{ "year": 2021, "sha1": "94b74465c78a7d4d1f59db11df3c5e84b07bc5fa", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-1137390/latest.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "dff54c9802b6d35d3e331670ae0e9adb0a8eec81", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
229500115
pes2o/s2orc
v3-fos-license
Lockdown and Insomnia among Undergraduate Healthcare Students: A Cross-Sectional Study INTRODUCTION: Insomnia is a risk factor for various physical and mental disorders and can also affect the academic performance of students. AIM: To assess the prevalence of insomnia among university-going students (medical, dental and nursing streams) in the South Asian continent during the lockdown due to the COVID-19 pandemic. MATERIALS AND METHOD: The present study was conducted amongst 743 medical, dental and nursing undergraduate students residing in South Asia using convenience sampling. Data were collected using a pre-tested and pre-validated questionnaire [the Athens Insomnia Scale (AIS)] administered through Google Forms; the questionnaire had a total of 8 questions (each scored 0-3). Final scores (the individual AIS score) were obtained by adding the scores for each question (range 0-24). The higher the score, the worse the sleep quality; students with a score of ≥ 6 were considered insomniac. Data analysis was done using SPSS version 21.0 12 with the independent samples t-test and multiple logistic regression. RESULTS: A total of 921 entries were recorded, out of which 743 were complete and hence were included in the study (response rate: 80.7%). Insomnia was reported in 421 (56.7%) students, out of which the highest was seen among dental (62.7%), followed by medical (59.8%) and nursing (45.3%) undergraduates. The highest range of AIS was observed among females (6-22) and dental students (6-21). Gender differences revealed a significant association among females both in the range obtained (t-test) (p=0.03) and in the multiple linear regression analyzing insomnia in relation to gender (p=0.03). CONCLUSION: There is a need to regularly assess insomnia among students and to take preventive measures in case a high prevalence is found among them, especially while pursuing academics online and from their homes due to the pandemic. INTRODUCTION Insomnia, a serious public health issue, is a risk factor associated with various physical and mental disorders. 1 It is classified as a disease characterized by difficulty falling and/or remaining asleep, can be accompanied by early morning awakening, daytime impairment, and/or non-restorative sleep, and may be associated with a variety of psychiatric conditions (especially depression and anxiety). 2,3 In the Asian subcontinent, the prevalence of insomnia in a study among Chinese adolescents was reported to be 16.9%, with factors associated with insomnia being age, lack of physical exercise/poor physical health, self-selected diet, longer distance from home to school, and life stresses. 4 Another cross-sectional study, drawn from the general population of England, Wales, and Scotland, revealed that 37% of the sample had insomnia. 5 Researchers have documented that insomnia is a common problem in young adults, including university students, and its prevalence varies by geographic location. 6,7 This can have detrimental effects on daytime activities, including studying, weakened physical and mental functioning, and lowered work productivity. It can also lead to anxiety and depression among students. 2,8 It has been observed that the Grade Point Average (GPA) of a student is significantly associated with the duration of sleep, and insomnia decreases the ability to perform basic academic activities such as solving mathematical problems among students.
9 As university students were already under immense stress, the COVID-19 pandemic wreaked havoc on their lives, especially for international students who were forced to remain in their native countries due to travel bans. While teaching shifted from the classroom to the online medium, many students/parents lost their jobs (part-time/full-time), students experienced laggy internet speeds, and it is possible that they were attending their classes using outdated laptops/PCs (a sudden lockdown might not have given students the chance to upgrade their equipment). This was in addition to the need to meet the deadlines of their assignments/projects. Hence, the aim of this study was to assess the prevalence of insomnia among university-going students (medical, dental and nursing streams) in the South Asian continent during the lockdown due to the COVID-19 pandemic. MATERIALS AND METHOD The present study was conducted amongst medical, dental and nursing undergraduate students residing in South Asia using convenience sampling from 1st June 2020 to 31st August 2020, after obtaining all necessary approvals (including ethical clearance) prior to the start of the study. Data were collected using a pre-tested and pre-validated questionnaire [the Athens Insomnia Scale (AIS), Soldatos et al (2000)] 10 administered through Google Forms; it is an 8-item questionnaire, with each item consisting of 4 response options showing insomnia severity from none to very severe levels (0-3). Scores from each question are added to get the individual AIS score (range 0-24). The higher the score, the worse the sleep quality. As per the scale, students with a score of ≥ 6 were considered insomniac. The questionnaire was distributed as a link on various social media websites to ensure maximum participation. The first page informed the students about the study objectives and that participation in the study was voluntary, and they could stop filling in the questionnaire at any point. By clicking on the "next" button, the respondent gave consent to participate in the study. No personal particulars were collected, to keep the data confidential. Sample Size and Statistical analysis: Based on a pilot study among 25 students, the minimum sample required was 287 (OpenEpi Software) 11 and, to compensate for incomplete responses, the maximum sample was sought. Data analysis was done using SPSS version 21.0 12 with the independent samples t-test and multiple logistic regression. Demographic details of the study population (table 1) A total of 921 entries were recorded, out of which 743 were complete and hence were included in the study (response rate: 80.7%). There were 219 (29.5%) medical students and 301 (40.5%) dental students, while 223 (30%) belonged to the nursing sciences. Their gender-wise distribution is described in table 1. Insomnia was reported in 421 (56.7%) students, out of which the highest was seen among dental (62.7%), followed by medical (59.8%) and nursing (45.3%) undergraduates. Responses to the Athens insomnia questionnaire (table 2) The range of scores obtained by the respondents is shown in table 2. The highest range was observed among females (6-22) and dental students (6-21). Gender differences revealed a significant association among females (p=0.03) as compared to their male counterparts.
Relationship between insomnia, gender and course pursued using multivariate regression analysis (table 3) The multiple linear regression model to analyze insomnia in relation to gender and course pursued revealed a statistically significant association in relation to gender, with females being more affected (p=0.03), while no significant differences were observed in relation to the specialization of the student (table 3). DISCUSSION The results of the present study revealed a 56.7% prevalence of insomnia among medical, dental and nursing undergraduate students, which is on the higher end compared to medical students of countries including Pakistan (40.74%), 13 Brazil (28.15%), and Iran (42%). 14 In contrast, Sing CY et al. reported a 68.8% prevalence of insomnia among a sample of Chinese college students. These differences can be due to socio-demographic and cultural differences between the study populations. The multiple linear regression model revealed that females were significantly more likely to have insomnia, and these findings are in agreement with previous studies by various authors. 2,16,17 In contrast, Pallos et al. 18 reported that males had a higher rate of insomnia as compared to females. The higher figures among females in the present study could be due to the fact that during lockdown, apart from studies, most females had to assist in household work, which had significantly increased during the lockdown. The results of the present study are consistent with the findings that students belonging to medical and allied sciences appear to be vulnerable to poor sleep due to the duration and intensity of their curriculum, clinical duties, and both pre-clinical and clinical assignments. 19 This burden can be gauged from the fact that, while researchers have estimated sleep disorders in the general population to be around 15-35%, medical students showed a prevalence of insomnia of 30%. 20 As per Jiang et al., who documented the prevalence of insomnia among university students as ranging from 9.4% to 38.2%, the prevalence of insomnia among medical, dental and nursing students is on the higher side, as stated above and as per the findings of the present study. It is also to be noted that the present study was done during the lockdown period, which makes it unique in nature, as a comprehensive literature search did not reveal any such study(ies) during this time. Therefore, the prevalence of insomnia observed in the present study could be higher as compared to pre-COVID times. However, the study meets its aim and objectives, and respective councils and student bodies can put regulations in place to reduce stress among students in the wake of another, god forbid, global pandemic. The limitations of the present study could be the under/over-reporting of data by the students, and social desirability bias. However, no personal details were collected and students were assured of the confidentiality of their data to reduce the probability of such a bias, and it can be safely stated that the results of the present study can be generalized for the medical, dental and nursing students belonging to the South Asian continent. CONCLUSION Based on the results of the present study, a high prevalence of insomnia was found among medical, dental and nursing undergraduate students, and there needs to be regular assessment of insomnia and stress among the students and preventive measures adopted in case a high prevalence of insomnia is found among them.
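As a concrete illustration of the AIS scoring rule described in the Methods (eight items scored 0-3, summed to a 0-24 total, with a total of ≥ 6 taken as insomnia), here is a minimal sketch with made-up responses; only the cutoff and the score ranges come from the paper.

# Minimal sketch of AIS scoring and insomnia classification.
def ais_score(item_scores):
    # Eight items, each scored 0-3; the total therefore ranges from 0 to 24.
    assert len(item_scores) == 8 and all(0 <= s <= 3 for s in item_scores)
    return sum(item_scores)

def is_insomniac(item_scores, cutoff=6):
    # A total of >= 6 is classified as insomnia, per the scale used in the study.
    return ais_score(item_scores) >= cutoff

respondents = [
    [1, 2, 1, 0, 1, 2, 1, 1],  # total 9  -> insomniac
    [0, 1, 0, 0, 1, 0, 1, 0],  # total 3  -> not insomniac
]
n_insomniac = sum(is_insomniac(r) for r in respondents)
print(f"prevalence: {100 * n_insomniac / len(respondents):.1f}%")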
2020-11-26T09:07:40.092Z
2020-11-19T00:00:00.000
{ "year": 2020, "sha1": "96e2a47cd30a90598f9eb46c7b9e0bd09f5c019a", "oa_license": "CCBYNC", "oa_url": "https://ihrjournal.com/ihrj/article/download/286/838", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "81eb756a016e2d3c12544728c834c81c505738b7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
221346243
pes2o/s2orc
v3-fos-license
First-step validation of a text message-based application for newborn clinical management among pediatricians Background Neonatal mortality is high in developing countries. Lack of adequate training and insufficient management skills for sick newborn care contribute to these deaths. We developed a phone application dubbed Protecting Infants Remotely by Short Message Service (PRISMS). The PRISMS application uses routine clinical assessments with algorithms to provide newborn clinical management suggestions. We measured the feasibility, acceptability and efficacy of PRISMS by comparing its clinical case management suggestions with those of experienced pediatricians as the gold standard. Methods Twelve different newborn case scenarios developed by pediatrics residents, based on real cases they had seen, were managed by pediatricians and PRISMS®. Each pediatrician was randomly assigned six of twelve cases. Pediatricians developed clinical case management plans for all assigned cases and then obtained PRISMS suggested clinical case managements. We calculated percent agreement and kappa (k) statistics to test the null hypothesis that pediatrician and PRISMS management plans were independent. Results We found high level of agreement between pediatricians and PRISMS for components of newborn care including: 10% dextrose (Agreement = 73.8%), normal saline (Agreement = 73.8%), anticonvulsants (Agreement = 100%), blood transfusion (Agreement =81%), phototherapy (Agreement = 90.5%), and supplemental oxygen (agreement = 69.1%). However, we found poor agreement with potential investigations such as complete blood count, blood culture and lumbar puncture. PRISMS had a user satisfaction score of 3.8 out of 5 (range 1 = strongly disagree, 5 = strongly agree) and an average PRISMS user experience score of 4.1 out of 5 (range 1 = very bad, 5 = very good). Conclusion Management plans for newborn care from PRISMS showed good agreement with management plans from experienced Pediatricians. We acknowledge that the level of agreement was low in some aspects of newborn care. Background Over 90% of the global burden of neonatal mortality occurs in countries within resource limited settings [1]. Neonatal mortality accounted for about 40% of the under 5 mortality in 2015 [2]. Most neonatal deaths can be prevented by administration of proven interventions for newborn survival [3][4][5][6]. These interventions require the presence of skilled health workers to recognize a newborn in need of additional care, conduct a timely assessment, and establish an appropriate management plan [7]. Many health facilities in resource limited settings are understaffed and/or lack skilled manpower to provide appropriate health care including managing a sick newborn [8,9]. In resource rich settings, neonatal mortality rate is low and neonatal care is a highly specialized discipline [7,10]. Decisions regarding sick newborn care management in resource rich settings are most often made by highly qualified pediatricians or neonatologists [7]. However, in resource limited settings, the bulk of sick newborn care management decisions are made by frontline health workers (FLHW) including medical officers, nurses, and or midwives with no specialized neonatology training [9,11,12]. Some of these frontline cadres have not only inadequate training or experience to make management decisions for sick newborn care, but also have no access to a specialist for consultation [3,13,14]. 
Telemedicine has been used for several decades to connect lower cadre health workers in remote areas to specialists far away [15,16]. However, this service requires significant resources to function in a sustainable manner. Mobile health (mHealth) applications are cheaper and may have the same potential to bridge the knowledge and skills gap among FLHW to save lives [17]. Various mHealth applications designed to improve management of sick newborns have been tested and show promise [18][19][20]. Applications have also been extended to include training of FLHW in retention of knowledge and skills for managing newborns [21], patient follow-up, and communication of critical laboratory results [22,23], creating a vibrant and innovative landscape in mHealth. Most of these interventions target the patient, with few directed towards capacity development of practicing health workers [24][25][26][27]. Smart phones are now widely available in resource limited settings and, for the health workers in sub Saharan Africa [28,29], this presents an opportunity to support mHealth applications. However, there are few innovations on the continent that have been developed to take advantage of these advancements. We hypothesized that a tool to aid FLHW in providing care for sick newborns might perform comparably to a specialist pediatrician. Therefore, we developed and tested an automated text message system called PRISMS (Protecting Infants Remotely by Short Message Service (SMS)). PRISMS is a cellphone-based platform with management algorithms designed to mimic those of a specialist pediatrician. PRISMS uses routine clinical assessment findings to provide newborn care management suggestions to frontline health workers by text message. The purpose of this study was to determine the feasibility, acceptability and efficacy of PRISMS in terms of its performance in diagnosis and management of newborns compared to specialist pediatricians, using simulated newborn scenarios as an initial step to PRISMS validation. Development and functionality of PRISMS PRISMS is composed of a remote automated server and a phone application that runs on an Android device. The phone application is comprised of a phone-based form into which clinical assessment findings are entered. All fields on the form have to be completed for the message "send button" at the bottom of the form to become active. The health worker will not be able to send assessment findings to the server without entering missing information. The clinical assessment findings are entered as raw numerical data in the case of age, gestational age, temperature, respiratory rate, and heart rate. The rest of the parameters are entered as a selection from a dropdown list of predetermined response categories. Once an assessment form is completely filled and the send button clicked to submit findings, the PRISMS application utilizes native functionalities of the Android device to send a formatted text via SMS to the PRISMS server. At the server (available 24 h a day), the formatted text is received by a 2-Way SMS Gateway and sent to an algorithm script. Feedback from the algorithm script is processed, prepackaged and sent via the same SMS Gateway to the PRISMS user as proposed clinical management plans. These clinical management plans are based on predetermined server algorithms extensively tested in lab settings by the study team.
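A hypothetical illustration of the server-side flow described above: parse the formatted SMS into fields, run a rule script, and send back a plan. This is not the actual PRISMS algorithm (which is adapted from the ACoRN Primary Survey and WHO newborn guidelines); the message format and field names are invented, and the only thresholds and actions used are ones quoted elsewhere in the paper (fever above 37.5 °C or hypothermia below 36.5 °C with a temperature recheck after one hour, fast breathing above 60 breaths per minute, and the "consider full blood count, blood culture and lumbar puncture" wording for danger signs).

# Hypothetical sketch of the algorithm script behind the 2-Way SMS Gateway.
def parse_assessment(sms_text):
    # e.g. "AGE_H=6;GA_WK=38;TEMP=35.9;RR=68;HR=150;COLOR=pink" (invented format)
    fields = dict(pair.split("=") for pair in sms_text.split(";"))
    return {
        "age_hours": float(fields["AGE_H"]),
        "temp_c": float(fields["TEMP"]),
        "resp_rate": float(fields["RR"]),
    }

def suggest_management(a):
    plan = []
    if a["temp_c"] > 37.5:
        plan.append("Fever: reduce clothing and recheck temperature in 1 hour.")
    elif a["temp_c"] < 36.5:
        plan.append("Hypothermia: remove wet clothing, cover with dry warm clothing, "
                    "skin-to-skin care, recheck temperature in 1 hour.")
    if a["resp_rate"] > 60:
        plan.append("Fast breathing (danger sign): consider full blood count, "
                    "blood culture and lumbar puncture.")
    return plan or ["No danger signs on these fields; continue routine newborn care."]

reply = suggest_management(parse_assessment("AGE_H=6;GA_WK=38;TEMP=35.9;RR=68;HR=150;COLOR=pink"))
print("\n".join(reply))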
Our study team included four experienced medical doctors (two Canadian neonatologists, one Ugandan pediatrician and an epidemiologist) and a Ugandan computer programmer. The pediatricians on the study team did not participate in assessing the newborn cases using PRISMS in this study. PRISMS uses an algorithm for clinical assessment adapted from the Canadian Acute Care of at Risk Newborns (ACoRN) Primary Survey [30], and World Health Organization Newborn Guidelines [31]. Development of newborn case scenarios A group of four postgraduate trainees in the Masters of Pediatrics program at Mbarara University of Science and Technology (MUST) Department of Pediatrics developed 12 different newborn case scenarios based on clinical cases they had seen on the neonatal unit in Mbarara Regional Referral Hospital (MRRH). MRRH is a tertiary health care facility with a catchment area of approximately 5 million people. The study team checked all cases for completeness. A case was considered complete if it contained at least a short descriptive clinical history, patient age, weight, gestational age, temperature, skin color, heart rate, capillary refill time, degree of dehydration, respiratory rate, presence or absence of chest-in-drawing, presence or absence of noisy breathing, convulsions at the time of clinical examination, breast feeding ability and jaundice assessment (Additional file 1, details all 12 case scenarios). The results for jaundice assessment were provided and classified as absent, mild jaundice or deep jaundice. Presence of jaundice within 24 h of birth and persistence of jaundice after 3 weeks of birth were made as other selectable jaundice characteristics. The ability to breastfeed was categorized as breast feeding well, breastfeeding poorly or unable to breastfeed. PRISMS recommended clinical management suggestions to different assessment-findingcombinations were reviewed for alignment to existing newborn care guidelines by two Canadian neonatologists and one Ugandan Pediatrician. Participant recruitment and familiarization to PRISMS Using convenience sampling, we recruited volunteer pediatricians involved in regular clinical management of newborn babies from four referral hospitals in southwestern, central and eastern Uganda, regardless of the time since their training. We used a convenience sample because of the limited number of pediatricians in the country. We selected our study participants from a pool estimated to be 16 pediatricians at the hospitals we contacted. We used a computer random number generator to assign each pediatrician six of the twelve newborn cases. Each of the twelve cases was equally likely to be selected. Pediatricians were requested to develop comprehensive clinical case management plans for each of the six randomly selected newborn case scenarios on a casespecific hardcopy clinical management form. Each pediatrician then received a 10-min orientation and training on how to use the PRISMS platform. We enhanced familiarity with the PRISMS phone application by allowing each pediatrician to input the assessment findings from the other six of the 12 case scenarios that were not randomly selected for pediatrician management into PRISMS to obtain PRISMS suggested clinical management plans. Pediatricians were then asked to use the PRISMS application to obtain clinical management plans for the six cases that they had previously managed without PRISMS. 
We categorized PRISMS and pediatrician suggested clinical case managements into four broad classes: 1) thermal care interventions, 2) laboratory investigations, 3) medical treatment, and 4) other management interventions. Data were entered into EpiInfo and analyzed using Stata version 12 (College Station, Texas). We determined agreement between pediatrician and PRISMS suggested clinical management plans using the percentage agreement and the kappa statistic. We used the two approaches to assess agreement because the percentage agreement alone, although easy to interpret, has potential to overestimate agreement to include that due to chance. The kappa statistic is adjusted to measure agreement beyond that expected due to chance and a kappa below 0.4 is considered to be poor [32][33][34]. The feasibility and acceptability of PRISMS among the users was measured with user experience and satisfaction surveys with a number of items on the Likert scales developed by the research team. The Likert scale scores ranged from 1 to 5 with 1 = very bad and 5 = very good. We used Cronbach's alpha to measure the internal consistence of these scales and report the scores. Human subject issues All pediatricians enrolled in the study provided written informed consent. No personal identifiers were collected. The study was approved by both Mbarara University of Science and Technology Research Ethics Committee and the Uganda National Council of Science and Technology. Results Seven pediatricians, two males and five females, conducted a total of 42 newborn case scenario assessments and made managements plans for them. All pediatricians received their pediatric training in Uganda and had a mean pediatrics clinical care experience of 5.9 years (95% CI: 2.63 -9.08). All pediatricians (7/7) had been exposed to Helping Babies Breathe (HBB) and Essential Care for Every Baby (ECEB) [35] as trainees and trainers. Case scenario characteristics The 42 cases (Table 1) had different combinations of clinical signs and symptoms. Fever (axillary temperature > 37.5°C) and hypothermia (temperature < 36.5) was present in 35.7% (15/42) and 45.2% (19/42) of cases respectively. Fast breathing (respiratory rate greater than 60 breathes per minute) was present among 52.3% of all case scenarios. Half of cases with jaundice had deep jaundice and the rest of jaundiced cases had mild jaundice. Although we had 12 independent cases, repeated assessments were done. In the results, we present in Table 1, the details of frequency of occurrence of different clinical signs among the 42 case scenario assessments selected from the pool of 12 cases managed by the 7 pediatricians. User experience Overall, PRISMS was rated as feasible based on the user experience and satisfaction. The overall mean score for user experience (Table 2) was 4.1 out of a potential maximum of 5 indicating an overall good experience. The scores on the individual items ranged between 3.8 for the item on time to complete filling information into PRISMS application form and 4.3 for ease of use of PRISMS. Pediatrician satisfaction with PRISMS We assessed satisfaction using 8 items as shown in Table 3. The item with the maximum score was "Investigations provided by PRISMS were adequate" with a score of 4.1 out of a maximum score of 5. The lowest score was 3.4 for the item "PRISMS provides comprehensive newborn management". The overall mean score was 3.8 out of a maximum score of 5. 
When asked whether "PRISMS can only be used outside hospitals", the mean Likert score for this question was 2.3 (SD = 1.1). Respondents' disagreement with restricting use suggests support for use across a variety of health facility settings. Clinical management agreement is seen in Table 4 Statistically significant concordance in pediatrician and PRISMS for clinical management was obtained for prolonged skin to skin care, intravenous (IV) 10% dextrose administration, blood transfusion, phototherapy, exchange transfusion, and investigations for jaundice. However, there was lack of agreement with certain components of management namely: decision to reduce clothing, doing a complete blood count, blood culture, lumbar puncture and use of antibiotics. Discussion We designed and tested a novel cell phone platform (PRISMS) to assist health workers with no specialty training in neonatal care to manage sick newborns in a resource limited setting. Our results also show there was a good level of agreement in the management plans proposed by PRISMS and the pediatrician, and there were areas where the pediatrician felt PRISMS enhanced their prior clinical management plans. For many countries in resource limited settings, majority of patients seek health care at lower level health facilities. In these facilities they often receive care from non-specialized FLHWs [36]. Our next step will be to investigate use of PRISMS in these frontline health workers with an aim to strengthen their ability to provide newborn care. We chose to start with a higher level of specialty in order to test the performance of the tool against these specialists as our stated gold standard to examine its validity. We assessed PRISMS to ensure its functionality to established standards of care. This care standards included validated newborn danger signs predictive of severe illness as detailed by the Young Infants Clinical Signs Study Group [37]. We noted that for interventions related to thermal care, PRISMS and the pediatricians were more likely to disagree compared to other components of management. For two aspects of thermal care management (reducing clothing and rechecking temperature after one hour), there was total disagreement between PRISMS and Pediatrician. All case scenarios with fever (15/42) had no pediatrician recommendation for reduction of clothing while PRISMS recommended clothing reduction for all. None of the pediatricians recommended a recheck of temperature one hour following any thermal intervention provided to febrile or hypothermic cases. These thermal care management disagreements were reported by pediatricians as management omissions when they compared their suggested care to that of PRISMS. The management of febrile babies with exposure/ reduction of clothing, and of hypothermic babies with removal of any wet clothing, covering with dry warm clothing and use of skin-to-skin contact followed by a repeat temperature measurement in one hour is a recommended thermal care measure [35]. PRISMS was more adherent to these thermal recommendations than the Pediatricians. We observed management options where pediatricians had complete agreement with PRISMS. The item with complete agreement was exchange transfusion although it should be noted that this is a relatively uncommon aspect of clinical care which will not be able to be carried out without patient transfer when PRISMS is next tested in smaller health centers. 
The complete agreement could be explained by the fact that we enrolled pediatricians from tertiary referral centers where exchange transfusion is commonly offered as a specialist's procedure. The pediatricians are expected to be familiar with the procedure. There were pediatricians who recommended investigations such as C-reactive protein (CRP) measurement for babies with suspected infections that PRISMS was not recommending. Though CRP may indicate likelihood of sepsis, PRISMS did not recommend its use for patients with danger signs. The developers of the algorithm felt CRP was not critical to recommend, as the majority of newborn care facilities in developing countries lack the capacity to perform it [37]. (Table 3: the item "The cases were easy to manage" scored 3.6, SD 0.8; the overall mean score for this scale was 3.8, SD 0.6; Cronbach's alpha for the scale was 0.83; SD, standard deviation.) Some pediatricians, on the other hand, were cautious about recommending antibiotics before investigation results were available, such as for CRP, when signs predictive of severe illness were present. These differences in approach contributed to the level of agreement observed between PRISMS and pediatricians for administration of antibiotics. Mobile applications have been used to improve skilled attendance at delivery [25], and to follow up infants for other outcomes such as breastfeeding and perinatal mortality [24,38]. Existing interventions have targeted the patients, but very few have targeted the health worker [24][25][26][27]. Health worker targeted electronic interventions have mainly been for management of childhood illnesses with limited focus on newborn care [39][40][41]. A strength of our study is that our mobile application is built on the Android platform, allowing wide-scale deployment due to increasing Android device availability. Our study sets the pace for quality of care improvement and standardization of newborn care assessment and care planning. Such care benefits have been realized with the use of electronic systems for Integrated Management of Childhood Illnesses and Community Case Management of Malaria, Pneumonia and Diarrhea [39,41]. These have demonstrated better adherence to protocol, and improved clinical care outcomes for infants and under-five children both at facility and community levels compared to paper-based versions [40,42]. The time taken to receive clinical management plans after completing the PRISMS assessment form had an average satisfaction score of 4. There were times when text messages from the server were delayed in reaching PRISMS users due to telephone network challenges. We have already implemented an inbuilt server algorithm that guarantees provision of clinical management plans in less than 8 s independent of internet and telephone networks. Therefore, PRISMS use in health facilities for the generation of clinical management plans no longer requires internet or telephone network connectivity. However, for remote synchronization of data from PRISMS devices to the backend server, internet connectivity is required. With the 4.1 average score on the item "PRISMS can be used in hospitals", it seems likely that PRISMS will be a successful addition to clinical care in these settings. Hospitals are associated with greater investigative capacity that is seldom available in lower-level health facilities. We have restructured the clinical management suggestions provided by PRISMS to be applicable in higher-level facilities with more investigative capacity.
For example, we would state "consider full blood count, blood culture and lumbar puncture" for all babies with danger signs. We intend clinical investigation suggestions that are preceded by the word "consider" to refer to management suggestions that are desired if the health facility in which the baby is managed has the ability to provide such investigations. Limitations Our study has limitations. We have tested this application among pediatricians and not among the non-pediatrician frontline health workers such as midwives, nurses, clinical officers and medical officers who provide the greatest bulk of newborn care decisions in Sub-Saharan Africa, especially at the lower-level health facilities. The lower-level facility staff are the ones more likely to need assistance in management of sick newborns. We have demonstrated feasibility, but we now need to test this application using a randomized controlled design among the likely end users to determine its effect on quality of newborn care and newborn care outcomes. A randomized cluster trial for this inquiry is ongoing. This application assumes that the health worker has adequate clinical skills to identify the key clinical signs and symptoms upon which the clinical management algorithm is based. We are aware of some limitations in clinical skills among lower-level cadres and even pediatricians due to knowledge and skills decay. One way to overcome this is to provide refresher training in clinical assessment prior to implementation of the intervention. These findings are based on case assessments sampled from twelve different case scenarios, and these may not be representative of the entire breadth of different newborn cases. In addition, recommendations for clinical care change with time and the algorithm will need to be kept up to date. Conclusion We have successfully developed, tested and demonstrated feasibility and acceptability of a mobile platform to manage sick newborns. This application has demonstrated a reminder function and an acceptable level of agreement with pediatrician-suggested clinical case managements. We acknowledge that the level of agreement was low in some aspects of management. We plan to test the acceptability and utilization of this application on a larger scale with more frontline healthcare workers. On this large scale, we also propose to assess the impact of this intervention on clinical endpoints such as neonatal mortality. Additional file 1. List of case scenarios used in the comparative study of clinical case managements between 7 pediatricians and PRISMS.
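The agreement statistics used in this study (percent agreement and the kappa statistic, with kappa below 0.4 read as poor) are straightforward to compute for any single yes/no management decision. A minimal sketch with made-up recommendation vectors, not the study's Table 4 data:

# Percent agreement and Cohen's kappa for one binary decision (e.g. phototherapy)
# across a set of case assessments; 1 = recommended, 0 = not recommended.
def percent_agreement(a, b):
    return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n           # observed agreement
    pa, pb = sum(a) / n, sum(b) / n                       # marginal "yes" rates
    p_e = pa * pb + (1 - pa) * (1 - pb)                   # chance agreement
    return (p_o - p_e) / (1 - p_e)

pediatrician = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
prisms       = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
print(percent_agreement(pediatrician, prisms))   # 80.0
print(round(cohens_kappa(pediatrician, prisms), 2))   # 0.6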
2020-08-28T14:23:24.520Z
2020-08-27T00:00:00.000
{ "year": 2020, "sha1": "3aa94af288da8794a6987ccf21fc563488f0da25", "oa_license": "CCBY", "oa_url": "https://bmcpediatr.biomedcentral.com/track/pdf/10.1186/s12887-020-02307-2", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3aa94af288da8794a6987ccf21fc563488f0da25", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119313727
pes2o/s2orc
v3-fos-license
Jordan constant for Cremona group of rank 3 We give explicit bounds for Jordan constants of groups of birational automorphisms of rationally connected threefolds over fields of zero characteristic, in particular, for Cremona groups of ranks 2 and 3. The Cremona group of rank n is the group Cr_n(k) of birational transformations of the projective space P^n over a field k. It has been actively studied from various points of view for many years (see [Hud27], [CL13], [Dés12], [DI09a], [Ser09a], [Can16], and references therein). One of the approaches to this huge group is to try to understand its finite subgroups. It turned out that it is possible to obtain a complete classification of finite subgroups of Cr_2(k) over an algebraically closed field k of characteristic 0 (see [BB00], [BB04], [Bla09], [DI09a], [Tsy11], and [Pro15c]), and to obtain partial classification results for Cr_3(k) (see [Pro12], [Pro11], [Pro14], [Pro13a], and [Pro15b]). Some results are also known for non-algebraically closed fields, see e.g. [Ser09b], [DI09b], and [Yas16]. In general, it is partially known and partially expected that the collection of finite subgroups of a Cremona group shares certain features with the collection of finite subgroups of a group GL_m(k). Definition 1.1.2. A group Γ is called Jordan (alternatively, we say that Γ has Jordan property) if there is a constant J such that for any finite subgroup G ⊂ Γ there exists a normal abelian subgroup A ⊂ G of index at most J. Theorem 1.1.1 implies that all linear algebraic groups over an arbitrary field k of characteristic 0 are Jordan. The Jordan property was also studied recently for groups of birational automorphisms of algebraic varieties. The starting point here was the following result of J.-P. Serre. The first of these bounds becomes an equality if k is algebraically closed. The main goal of this paper is to present a bound for Jordan constants of the groups of birational automorphisms of rationally connected threefolds, in particular, for the group Cr_3(k) = Bir(P^3). Theorem 1.2.4. Let X be a rationally connected threefold over a field k of characteristic 0. Then one has J̄(Bir(X)) ≤ 10 368 and J(Bir(X)) ≤ 107 495 424. If moreover X is rational and k is algebraically closed, then the first of these bounds becomes an equality. It is known (see [PS16a, Theorem 1.10]) that if X is a rationally connected threefold over a field of characteristic 0, then there is a constant L such that for any prime p > L any finite p-group G ⊂ Bir(X) is abelian. An immediate consequence of Theorem 1.2.4 is an explicit bound for the latter constant L. Corollary 1.2.5. Let X be a rationally connected threefold over a field k of characteristic 0, and let p > 10 368 be a prime. Let G ⊂ Bir(X) be a finite p-group. Then G is abelian. We believe that one can significantly improve the bound given by Corollary 1.2.5. Remark 1.2.6. J.-P. Serre showed (see the remark made after Theorem 5.3 in [Ser09b]) that any finite subgroup G of the Cremona group Cr_2(k) over a field k of characteristic 0 has a normal abelian subgroup A ⊂ G such that the index [G : A] divides the number 2^10 · 3^4 · 5^2 · 7. The result of Theorem 1.2.4 is not that precise: we cannot say much about the primes that divide the index [G : A] in our case. This is explained by the fact that to obtain the bound we have to deal with terminal singularities on threefolds as compared to smooth surfaces.
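The excerpt uses both a Jordan constant J and a weak Jordan constant J̄ without quoting their formal definitions. As a paraphrase of the usual conventions (our reading, not a quotation from the paper), the two constants of a group Γ can be written as

J(\Gamma) \;=\; \sup_{G} \, \min\bigl\{\,[G:A] \;:\; A \trianglelefteq G,\ A\ \text{abelian}\,\bigr\},
\qquad
\bar{J}(\Gamma) \;=\; \sup_{G} \, \min\bigl\{\,[G:A] \;:\; A \leqslant G,\ A\ \text{abelian}\,\bigr\},

where the supremum runs over all finite subgroups G of Γ; thus Γ is Jordan in the sense of Definition 1.1.2 exactly when J(Γ) is finite, and J̄(Γ) ≤ J(Γ). On this reading the two numbers in Theorem 1.2.4 are consistent with each other, since 107 495 424 = 10 368^2: a finite group containing an abelian subgroup of index n also contains a normal abelian subgroup of index at most n^2, so J(G) ≤ J̄(G)^2 for finite groups, which we take to be the content of the Remark 1.2.2 invoked later in the text.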
See Remark 8.2.1 for our expectations on possible improvements of the bounds given by Proposition 1.2.3 and Theorem 1.2.4, and Remark 8.2.2 for a further disclaimer in higher dimensions. The plan of the paper is as follows. In §2 we compute weak Jordan constants for some linear groups. In §3 we compute certain relevant constants for rational surfaces, and in particular prove Proposition 1.2.3. In §4 we study groups of automorphisms of three-dimensional terminal singularities and estimate their weak Jordan constants; then we use these estimates to bound weak Jordan constants for groups of automorphisms of non-Gorenstein terminal Fano threefolds. In §5 we estimate weak Jordan constants for groups acting on three-dimensional G-Mori fiber spaces. In §6 and §7 we bound weak Jordan constants for groups of automorphisms of Gorenstein terminal (and in particular smooth) Fano threefolds. Finally, in §8 we summarize the above partial results and complete the proof of Theorem 1.2.4, and also make concluding remarks. In Appendix A we collect some information about automorphism groups of two particular classes of smooth Fano varieties: complete intersections of quadrics, and complete intersections in weighted projective spaces; these results are well known to experts, but we decided to include them because we do not know proper references. Proof. Note that Γ is contained in the center of the group G. We may assume that G is finite. By the well-known classification of finite subgroups in PGL 2 (k), we know that the groupḠ = φ(G) is either cyclic, or dihedral, or isomorphic to one of the groups A 4 , S 4 , or A 5 . IfḠ is cyclic, then the group G is abelian. IfḠ is dihedral, then the group G contains an abelian subgroup of index 2. IfḠ ∼ = A 4 , thenḠ contains a cyclic subgroup of order 3, so thatJ(G) 4; the inequality here is due to the fact that in the case when G ∼ = µ 2 × A 4 one hasJ(G) = 3, but for a non-trivial central extension G ∼ = 2.A 4 one hasJ (G) = 4. As an easy application of Lemma 2.2.1, we can find the weak Jordan constants of the groups GL 2 (k) and Aut(P 1 ) ∼ = PGL 2 (k). Proof. Let V be a two-dimensional vector space over k, and let G ⊂ GL(V ) be a finite subgroup. It is enough to study the weak Jordan constantJ(G). Moreover, for this we may assume that G ⊂ SL(V ) ∼ = SL 2 (k), and that G contains the scalar matrix acting by −1 on V . Therefore, the boundJ(G) 12 follows from Lemma 2.2.1, so thatJ(GL 2 (k)) 12. The inequalityJ PGL 2 (k) J GL 2 (k) holds by Remark 2.1.1. The valueJ(PGL 2 (k)) = 12 is given by the group A 5 ⊂ PGL 2 (k), and the valueJ(GL 2 (k)) = 12 is given by the group 2.A 5 ⊂ GL 2 (k). Remark 2.2.3. Suppose that C is an irreducible curve such that the normalizationĈ of C has genus g. Since the action of the group Aut(X) lifts toĈ, one has J Aut(C) J Aut(Ĉ) . We can use a classification of finite subgroups in PGL 2 (k) to find the weak Jordan constant of the automorphism group of a line and a smooth two-dimensional quadric. More precisely, we have the following result. Lemma 2.2.4. The following assertions hold. (i) Let G ⊂ Aut(P 1 ) be a finite group. Then there exists an abelian subgroup A ⊂ G of index at most 12 acting on P 1 with a fixed point. (ii) Let G ⊂ Aut P 1 × P 1 be a finite group. Then there exists an abelian subgroup A ⊂ G of index at most 288 that acts on P 1 × P 1 with a fixed point, and does not interchange the rulings of P 1 × P 1 . (iii) One hasJ Aut P 1 × P 1 = 288. Let us use the notation introduced in the proof of Lemma 2.3.1. 
If G is a group of type (i) or (ii), thenJ(G) 12. If G is a group of type (iii) or (iv), then |G| |3.A 6 | = 1080, and |Ḡ| 360. Finally, if G is a group of type (v), then |G| |H 3 ⋊ SL 2 (F 3 )| = 648, and |Ḡ| 216. Lemma 2.3.3. Let B be a (non-trivial) finite abelian subgroup of PGL 3 (k). Then B is generated by at most three elements. Proof. Recall that a finite abelian subgroup of GL n (k) is generated by at most n elements. LetB ⊂ SL 3 (k) be the preimage of B with respect to the natural projection SL 3 (k) → PGL 3 (k). Letà ⊂B be a maximal abelian subgroup and let A ⊂ B be its image. Then A has an isolated fixed point on P 2 , and the number of its isolated fixed points is at most 3. Therefore, the group B has an orbit of length at most 3 on P 2 . Let P be a point of such orbit, and let B ′ ⊂ B be the stabilizer of P . By Lemma 2.1.2 there is a faithful representation of the group B ′ in the Zariski tangent space T P (P 2 ) ∼ = k 2 , so that B ′ is generated by at most two elements. The group B is generated by its subgroup B ′ and an arbitrary element from B \ B ′ , if any. Proof. We may assume that G is finite. If the order of the group φ(G) ⊂ PGL 3 (k) is at most 360, then one hasJ (G) [G : Γ] = |φ(G)| 360. Therefore, we may assume that |φ(G)| > 360. By Lemma 2.3.2 we can find an abelian so that by Remark 1.2.2 we are left with the task to boundJ(G). We have an exact sequence of groups For an element g ∈G denote by Z(g) the centralizer of g inG. Since B is an abelian quotient ofG, we see that the commutator subgroup ofG has order at most |Γ|, so that for any g ∈G one has [G : Z(g)] |Γ|. Since B is an abelian subgroup of PGL 3 (k), it is generated by at most three elements by Lemma 2.3.3. Choose three generators of B, and let g 1 , g 2 and g 3 be elements ofG that project to these three generators. Put Let C be the centralizer of Γ inG. Since Γ is a normal subgroup ofG, we see that C is a normal subgroup ofG as well. Moreover, since Γ ⊂ C, we have an inclusionG/C ⊂ B, so thatG/C is an abelian group generated by three elements. Also, one has an inclusioñ Therefore, we conclude that |G/C| 3. Let Z be the center ofG. Then Z contains the intersection C ∩ I, so that Proof. Let V be a four-dimensional vector space over k, and let G ⊂ GL(V ) be a finite subgroup. It is enough to study the weak Jordan constantJ(G). Moreover, for this we may assume that G ⊂ SL(V ) ∼ = SL 4 (k). Then there are the following possibilities for the group G (see [Bli17,Chapter VII] or [Fei71, §8.5]): (i) the G-representation V is reducible; (ii) there is a transitive homomorphism h : G → S k such that V splits into a sum of k representations of the subgroup H = Ker(h) of dimension 4/k for some k ∈ {2, 4}; (iii) the group G contains a subgroup H of index at most 2, such that H is a quotient by a certain central subgroup of a group Γ 1 × Γ 2 , where Γ 1 and Γ 2 are finite subgroups of GL 2 (k); (iv) the group G is generated by some subgroup of scalar matrices in SL 4 (k) and a groupĜ that is one of the groups A 5 , S 5 , 2.A 5 , 2.S 5 , or SL 2 (F 7 ); (v) the group G is generated by some subgroup of scalar matrices in SL 4 (k) and a groupĜ that is one of the groups 2.A 6 , 2.S 6 , 2.A 7 , or Sp 4 (F 3 ); (vi) the group G contains an extra-special group H 4 of order 32 and is contained in the normalizer of H 4 in SL(V ). In case (i) there is an embedding G ֒→ Γ 1 × Γ 2 , where Γ i is a finite subgroup of GL n i (k) for i ∈ {1, 2}, and n 1 n 2 are positive integers such that n 1 + n 2 = 4. 
One has If k = 2, this givesJ (G) 288 by Corollary 2.2.2. If k = 4, this givesJ(G) 24. In case (iii) we obtain the boundJ(G) 288 in a similar way. In case (iv) one hasJ In case (vi) one hasJ(G) J (N), where N is the normalizer of H 4 in SL(V ). The group N fits into the exact sequence whereH 4 is a group generated by H 4 and a scalar matrix √ −1 · Id ∈ SL(V ). Therefore, we see thatJ(G) 960, and thusJ GL 4 (k) 960. The inequalitȳ Remark 2.4.2. The group 2.S 5 listed in case (iv) of Lemma 2.4.1 is omitted in the list given in [Fei71,§8.5]. It is still listed by some other classical surveys, see e.g. [Bli17,§119]. Recall that for a given group G with a representation in a vector space V a semiinvariant of G of degree n is an eigen-vector of G in Sym n V ∨ . Lemma 2.4.3. Let V be a four-dimensional vector space over k, and let G ⊂ GL(V ) be a finite subgroup. If G has a semi-invariant of degree 2, thenJ(G) 288. Proof. Let q be a semi-invariant of G of degree 2. We consider the possibilities for the rank of the quadratic form q case by case. Suppose that V has a one-dimensional subrepresentation of G. Then G ⊂ k * × GL 3 (k), so thatJ(G) 72 by Lemma 2.3.1. Therefore we may assume that the rank of q is not equal to 1 or 3. Suppose that the rank of q is 2, so that q is a product of two linear forms. Then there is a subgroup G 1 ⊂ G of index at most 2 such that these linear forms are semi-invariant with respect to G 1 . Hence V splits as a sum of a two-dimensional and two one-dimensional representations of G 1 . This implies that G 1 ⊂ k * × k * × GL 2 (k), so that Finally, suppose that the rank of q is 4, so that the quadric Q ⊂ P(V ) ∼ = P 3 given by the equation q = 0 is smooth, i.e. Q ∼ = P 1 × P 1 . By Lemma 2.2.4 there is a subgroup H ⊂ G of index [G : H] 288 that acts on Q with a fixed point P and does not interchange the lines L 1 and L 2 passing through P on Q. As the representation of H, the vector space V splits as a sum of the one-dimensional representation corresponding to the point P , two one-dimensional representations arising from the lines L 1 and L 2 , and one more onedimensional representation. Therefore, H is an abelian group (note that Lemma 2.2.4 asserts only that the image of H in PGL 4 (k) is abelian). This shows thatJ (G) 288 and completes the proof of the lemma. Proof. Let V be a five-dimensional vector space over k, and let G ⊂ GL(V ) be a finite subgroup. It is enough to study the weak Jordan constantJ(G). Moreover, for this we may assume that G ⊂ SL(V ) ∼ = SL 5 (k). Recall that there are the following possibilities for the group G (see [Bra67] or [Fei71, §8.5]): (i) the G-representation V is reducible; (ii) there is a transitive homomorphism h : G → S 5 such that V splits into a sum of five one-dimensional representations of the subgroup H = Ker(h); (iii) the group G is generated by some subgroup of scalar matrices in SL 5 (k) and a groupĜ that is one of the groups A 5 , S 5 , A 6 , S 6 , PSL 2 (F 11 ), or PSp 4 (F 3 ); (iv) one has G ∼ = H 5 ⋊ Σ, where H 5 is the Heisenberg group of order 125, and Σ is some subgroup of SL 2 (F 5 ). In case (i) there is an embedding G ֒→ Γ 1 × Γ 2 , where Γ i is a finite subgroup of GL n i (k) for i ∈ {1, 2}, and n 1 n 2 are positive integers such that n 1 + n 2 = 5. One has In case (iii) it is easy to check thatJ (G) =J(Ĝ) 960, cf. the proof of Lemma 2.4.1. We summarize the main results of § §2.2-2.5 in Table 1. In the first column we list the dimensions we will need in the sequel. 
In the second column we give the values of the weak Jordan constantsJ(PGL n (k)), and in the third column we give the groups that attain these constants. Similarly, in the fourth column we give the values of the weak Jordan constantsJ(GL n (k)), and in the fifth column we give the groups that attain the constants. In the sixth column we list the actual values of the usual Jordan constantsJ(GL n (k)) which can be found in [Col07,Proposition C]. nJ(PGL n (k)) groupJ(GL n (k)) group J(GL n (k)) Lemma 2.6.1. Let G be a group, andΓ ⊂ G be a normal finite abelian subgroup. Suppose thatΓ cannot be generated by less than m elements. Let V be an N-dimensional vector space over k. Suppose that V is a faithful representation of G. Then there exist positive integers t, m 1 , . . . , m t , d 1 , . . . , d t such that be the splitting of V into isotypical components with respect toΓ. Since V is a faithful representation ofΓ, andΓ is an abelian group, we have an injective homomor-phismΓ ֒→ (k * ) s . By assumption one has s m. Suppose that the splitting (2.6.2) contains m 1 summands of dimension d 1 , m 2 summands of dimension d 2 , . . . , and m t summands of dimension d t . Then one has m 1 d 1 + . . . + m t d t = N. Moreover, the total number of summands in (2.6.2) equals m 1 + . . . + m t = s m. SinceΓ ⊂ G is a normal subgroup, the group G interchanges the summands in (2.6.2). Moreover, G can interchange only those subspaces V i and V j that have the same dimension. Therefore, we get a homomorphism Let ∆ ⊂ G be the kernel of the homomorphism ψ. Then each summand of (2.6.2) is invariant with respect to ∆. Since V is a faithful representation of ∆, one has an inclusion Note that Recall that the groups GL d i (k) are Jordan by Theorem 1.1.1. Thus the group G is Jordan withJ Lemma 2.6.1 allows us to provide a bound for Jordan constants of some subgroups of GL 7 (k). This bound will be used in the proof of Lemma 7.1.2. Lemma 2.6.3. Let G be a group, andΓ ⊂ G be a normal finite abelian subgroup such thatΓ ∼ = µ m 2 with m 4. Suppose that G has a faithful seven-dimensional representation. Then G is Jordan withJ (G) 10368. Surfaces The goal of this section is to estimate weak Jordan constants for automorphism groups of rational surfaces, as well as some other constants of similar nature. In the sequel for any variety X we will denote by Φ(X) the minimal positive integer m such that for any finite group G ⊂ Aut(X) there is a subgroup F ⊂ G with [G : F ] m acting on X with a fixed point. If there does not exist an integer m with the above property, we put Φ(X) = +∞. Note that Φ(X) is bounded by some universal constant for rationally connected varieties X of dimension at most 3 by [PS16a, Theorem 4.2]. 3.1. Preliminaries. We start with the one-dimensional case. Lemma 3.1.1. One has Φ(P 1 ) = 12. Moreover, if T is a finite union of rational curves such that its dual graph T ∨ is a tree, then Φ(T ) 12. Let T be a finite union of rational curves such that its dual graph T ∨ is a tree. Then there is a natural homomorphism of Aut(T ) to the finite group Aut(T ∨ ). It is easy to show by induction on the number of vertices that either there is an edge of T ∨ that is invariant under Aut(T ∨ ), or there is a vertex of T ∨ that is invariant under Aut(T ∨ ). In the former case there is a point P ∈ T fixed by Aut(T ), so that Φ(T ) = 1. In the latter case there is a rational curve C ⊂ T that is invariant under Aut(T ), so that Φ(T ) Φ(C) 12. Now we proceed with the two-dimensional case. 
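The following short computation is our own illustration (it is not part of the source text) of why the value 12 in Lemma 3.1.1 is attained, assuming the standard classification of finite subgroups of PGL_2(k) (cyclic, dihedral, A_4, S_4 and A_5). In characteristic 0, any finite subgroup of Aut(P^1) ≅ PGL_2(k) that fixes a point of P^1 lies in the stabilizer of that point and is therefore cyclic; the largest cyclic subgroup of A_5 is µ_5, so for G = A_5 a subgroup with a fixed point has index at least
\[
[A_5 : \boldsymbol{\mu}_5] \;=\; \frac{60}{5} \;=\; 12,
\]
which shows Φ(P^1) ≥ 12. Conversely, each group in the classification contains a cyclic subgroup of index at most 12 (index 2 for dihedral groups, 4 for A_4, 6 for S_4, 12 for A_5), and a cyclic subgroup always has a fixed point on P^1, giving Φ(P^1) ≤ 12.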
In a sense, we are going to do in a more systematic way the same things that were done in Lemma 2.2.4. For a variety X with an action of a finite group G, we will denote by Φ a (X, G) the minimal positive integer m such that there is an abelian subgroup A ⊂ G with [G : A] m acting on X with a fixed point. The main advantage of this definition is the following property. Lemma 3.1.2. Let X and Y be smooth surfaces acted on by a finite group G. Suppose that there is a G-equivariant birational morphism π : Proof. The assertion is implied by the results of [KS00] in arbitrary dimension. We give the proof for dimension 2 for the readers convenience. The inequality Φ a (Y, G) Φ a (X, G) is obvious. To prove the opposite inequality choose an abelian subgroup A ⊂ G such that there is a point P ∈ X fixed by A. We are going to produce a point Q ∈ Y fixed by A such that π(Q) = P . The birational morphism π is a composition of blow ups of smooth points. Since π is Gequivariant and thus A-equivariant, we may replace X by a neighborhood of the point P and thus suppose that π is a sequence of blow ups of points lying over the point P . If π is an isomorphism, then there is nothing to prove. Otherwise, by induction in the number of blow ups, we see that it is enough to consider the case when π is a single blow up of the point P . In this case the exceptional divisor E = π −1 (P ) is identified with the projectivization of the Zariski tangent space T P (X), and the action of A on E comes from a linear action of A on T P (X). Since the group A is abelian, it has a one-dimensional invariant subspace in T P (X), which gives an A-invariant point Q ∈ E ⊂ Y . Proof. One has Aut(X) ∼ = PGL 3 (k). By the holomorphic Lefschetz fixed-point formula any cyclic group acting on a rational variety has a fixed point. Now the required bound is obtained from the classification of finite subgroups of GL 3 (k) (see [MBD16,Chapter XII] or [Fei71, §8.5], and also the proof of Lemma 2.3.1). Remark 3.2.2. Note that the bound given by Lemma 3.2.1 is actually attained for the group A 6 ⊂ PGL 3 (k) whose abelian subgroup of maximal order acting on P 2 with a fixed point is µ 5 . Lemma 3.2.3. Let X be a smooth del Pezzo surface. Let G ⊂ Aut(X) be a finite group. Then one has Φ a (X, G) 288. Moreover, if X is not isomorphic to P 1 × P 1 , then Φ a (X, G) 144. Suppose that X ∼ = P 1 × P 1 . Then one has Φ a (X, G) 288 by Lemma 2.2.4(ii). Note that this value is attained for the group Suppose that X is a blow up π : X → P 2 at one or two points. Then π is an Aut(X)equivariant birational morphism, so that Φ a (X, G) 72 by Lemmas 3.2.1 and 3.1.2. Put d = K 2 X . We may assume that d 6. Suppose that d = 6. Then where D 6 is the dihedral group of order 12 (see [Dol12,Theorem 8.4.2]). The subgroup k * × k * ⊂ Aut(X) acts on X with a fixed point by Borel's theorem (see e. g. [Hum75, VIII.21]). From this one can easily deduce that Φ a (X, G) 12 for any finite subgroup G ⊂ Aut(X). 10 (see [Dol12, Theorem 8.6.6]). Representing X as an intersection of two quadrics with equations in diagonal form, one can see that there is a subgroup µ 2 2 ⊂ Aut(X) acting on X with a fixed point. Therefore, one has Suppose that d = 3. Then either Aut(X) ∼ = µ 3 3 ⋊ S 4 and X is the Fermat cubic, or |Aut(X)| 120 (see [Dol12, Theorem 9.5.6]). In the former case it is easy to see that there is a subgroup µ 2 3 ⊂ Aut(X) acting on X with a fixed point, so that In the latter case one has Φ a X, Aut(X) |Aut(X)| 120. Table 8.9]). 
In the latter case one has Φ a X, Aut(X) |Aut(X)| 120. To estimate Φ a X, Aut(X) in the former two cases, recall that the anticanonical linear system | − K X | defines a double cover branched over a smooth quartic curve C ⊂ P 2 . The subgroup µ 2 acts by the Galois involution of the corresponding double cover. In particular, the curve ϕ −1 , then the group PSL 2 (F 7 ) ⊂ Aut(X) contains a subgroup µ 7 , and µ 7 acts on the curve ϕ −1 |−K X | (C) ∼ = C with a fixed point (this can be easily seen, for example, from the Riemann-Hurwitz formula since C is a smooth curve of genus 3). Thus Finally, suppose that d = 1. Then Table 8.14]). Remark 3.2.4. In several cases (say, for a del Pezzo surface of degree d = 5) one can produce better upper bounds for Φ a (X, G) than those given in the proof of Lemma 3.2.3, but we do not pursue this goal. Lemma 3.2.3 immediately implies the following. 3.3. Rational surfaces. Now we pass to the case of arbitrary rational surfaces. Proof. Let Y be a smooth projective rational surface, and G ⊂ Aut(Y ) be a finite group. Let π : Y → X be a result of a G-Minimal Model Program ran on Y . One has by Lemma 3.1.2. Moreover, X is either a del Pezzo surface, or there is a G-equivariant conic bundle structure on X (see [Isk80b,Theorem 1G]). If X is a del Pezzo surface, then Φ a (X, G) 288 by Lemma 3.2.3, so that Φ a (Y, G) 288. Therefore, we assume that there is a G-equivariant conic bundle structure There is an exact sequence of groups where G φ acts by fiberwise automorphisms with respect to φ, and G B ⊂ Aut(P 1 ). By 12 acting on P 1 with a fixed point P ∈ P 1 . The group ⊂ G acts by automorphisms of the fiber C = φ −1 (P ). Note that C is a reduced conic, i.e. it is either isomorphic to P 1 , or is a union of two copies of P 1 meeting at one point. Suppose that C ∼ = P 1 . Then there is a point Q ∈ C that is invariant with respect to some 12 by Lemma 3.1.1. The morphism φ : X → B is smooth at Q. Hence the map dφ : T Q (X) → T P (B) is surjective. By Lemma 2.1.2 the group G ′′ acts faithfully on the Zariski tangent space T Q (X), and the group G ′ B acts faithfully on the Zariski tangent space T P (B). The map dφ is G ′′ -equivariant and so G ′′ has one-dimensional invariant subspace Ker(dφ) ⊂ T Q (X) ∼ = k 2 . In this case G ′′ must be abelian with [G : G ′′ ] 12 · 12 = 144. Now consider the case when C is a reducible conic, i.e. it is a union of two copies of P 1 meeting at one point, say Q. Corollary 3.3.2. Let X be a smooth rational surface. Then one hasJ Aut(X) 288. Proof. Let G ⊂ Cr 2 (k) be a finite group. It is enough to study the weak Jordan con-stantJ(G). Regularizing the action of G and taking an equivariant desingularization (see e. g. [PS14, Lemma-Definition 3.1]), we may assume that G ⊂ Aut(X) for a smooth rational surface X. Now the boundJ Cr 2 (k) 288 follows from Corollary 3.3.2. The equality is due to Lemma 2.2.4(iii). A direct consequence of Corollary 3.3.3 is that the weak Jordan constant of the Cremona group of rank 2 is bounded by 288 for an arbitrary (not necessarily algebraically closed) base field. Together with Remark 1.2.2 this gives a proof of Proposition 1.2.3. 3.4. Non-rational surfaces. We conclude this section by three easy observations concerning automorphism groups of certain non-rational surfaces. Lemma 3.4.1. Let C be a smooth curve of genus g 2, and let S be a ruled surface over C. Then the group Aut(S) is Jordan withJ Aut(S) 1008(g − 1). Proof. Let G ⊂ Aut(S) be a finite group. It is enough to prove the corresponding bound forJ(G). 
There is an exact sequence of groups where G φ acts by fiberwise automorphisms with respect to φ, and G C ⊂ Aut(C). One has by the Hurwitz bound. On the other hand, the group G φ is a subgroup of Aut(P 1 ) ∼ = PGL 2 (k), so that G φ contains an abelian subgroup H of index To obtain a bound for a weak Jordan constant in the last case we will use some purely group-theoretic facts. Proposition 3.4.3 (see Corollary 2 of Theorem 1.17 in Chapter 2 of [Suz82]). Let p be a prime number, G be a group of order p n , and A ⊂ G be an abelian normal subgroup of maximal possible order p a . Then 2n a(a + 1). Lemma 3.4.4. Let G be a finite group with |G| 79380. Then Proof. Suppose that |G| is divisible by a prime number p. Then G contains a cyclic subgroup of order p, so thatJ In particular, if |G| is divisible by a prime p 11, then Similarly, suppose that p is a prime such that |G| is divisible by p 2 . Let G p ⊂ G be a Sylow p-subgroup. Then |G p | p 2 . If |G p | = p 2 , then G p is abelian, so that If |G p | p 3 , then G p contains an abelian subgroup A of order |A| p 2 by Proposition 3.4.3, and we again haveJ In particular, if there is a prime p 3 such that |G| is divisible by p 2 , then Now suppose that |G| is not divisible by any prime greater than 7, and |G| is not divisible by a square of any prime greater than 2. This means that Thus we assume that α 4. Let G 2 ⊂ G be a Sylow 2-subgroup. Applying Proposition 3.4.3 once again, we see that G 2 contains an abelian subgroup A of order |A| 8. Hence one hasJ Now we are ready to bound a weak Jordan constant for automorphism groups of surfaces of general type of low degree. Terminal singularities In this section we study Jordan property for automorphism groups of germs of threedimensional terminal singularities, and derive some conclusions about automorphism groups of non-Gorenstein terminal Fano threefolds. 4.1. Local case. Recall from §2.1 that for an arbitrary variety U and a point P ∈ U we denote by Aut P (U) the stabilizer of P in Aut(U). Now we are going to estimate a weak Jordan constant of a group Aut P (U), where P ∈ U is a three-dimensional terminal singularity. Lemma 4.1.1. Let U be a threefold, and P ∈ U be a terminal singular point of U. Let G ⊂ Aut P (U) be a finite subgroup. Then for some positive integer r there is an extension such that the following assertions hold. (i) There is an embeddingG ⊂ GL 4 (k), and the groupG has a semi-invariant of degree 2. (ii) If (U, P ) is a cyclic quotient singularity, then there is an embeddingG ⊂ GL 3 (k). (iii) Let D be a G-invariant boundary on X such that the log pair (U, D) is log canonical and such that there is a minimal center C of log canonical singular- Proof. Let r 1 be the index of U ∋ P , i. e. r equals the minimal positive integer t such that tK U is Cartier at P . Replacing U by a smaller G-invariant neighborhood of P if necessary, we may assume that rK U ∼ 0. Consider the index-one cover (see [Rei87, Proposition 3.6]). Then U ♯ ∋ P ♯ is a terminal singularity of index 1, and U ∼ = U ♯ /µ r . Note that U ♯ ∋ P ♯ is a hypersurface singularity, i. e. dim T P ♯ (U ♯ ) 4 (see [Rei87, Corollary 3.12(i)]). Moreover, U ♯ is smooth at P ♯ if (U, P ) is a cyclic quotient singularity. By construction of the index one cover every element of Aut P (U) admits r lifts to Aut(U ♯ , P ♯ ). Thus we have a natural exact sequence (4.1.2), whereG is some subgroup of Aut P ♯ (U ♯ ). Furthermore, by Lemma 2.1.2 we know thatG ⊂ GL 3 (k) if U ♯ is smooth at P ♯ . This gives assertion (ii). 
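As a concrete illustration of assertion (ii) above (our own addition; the source does not spell this out), recall the well-known classification of three-dimensional terminal cyclic quotient singularities: such a point is of type (1/r)(1, a, r−a) with gcd(a, r) = 1, that is,
\[
(U, P) \;\cong\; \bigl(\mathbb{A}^3 / \boldsymbol{\mu}_r,\ 0\bigr), \qquad \varepsilon \cdot (x, y, z) \;=\; \bigl(\varepsilon x,\ \varepsilon^{a} y,\ \varepsilon^{r-a} z\bigr), \quad \varepsilon^r = 1 .
\]
Its index-one cover is U♯ = A^3, which is smooth at P♯ = 0, so the lifted group G̃ acts linearly on the Zariski tangent space T_{P♯}(U♯) ≅ k^3 and embeds into GL_3(k), exactly as claimed in assertion (ii).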
Now suppose that dim T P ♯ (U ♯ ) = 4. By Lemma 2.1.2 one has an embed-dingG ⊂ GL 4 (k). Moreover, U ♯ ∋ P ♯ is a hypersurface singularity of multiplicity 2 by [KM98,Corollary 5.38]. This means that the kernel of the natural map is generated by an element of degree 2. Therefore, the groupG has a semi-invariant polynomial of degree 2. This completes the proof of assertion (i). Corollary 4.1.3. Let U be a threefold, and P ∈ U be a terminal singularity. Then the following assertions hold. (i) The group Aut P (U) is Jordan with J(Aut P (U)) 288. (ii) If (U, P ) is a cyclic quotient singularity, then Aut P (U) is Jordan with J(Aut P (U)) 72. (iii) Let C ∋ P be a curve contained in U and Γ ⊂ Aut P (U) be a subgroup such that C is Γ-invariant. Assume that C is a minimal center of log canonical singularities of the log pair (U, D) for some Γ-invariant boundary D. Proof. We use the methods of [Pro12,§6]. Let P 1 be a non-Gorenstein point and P 1 , . . . , P N ∈ X be its Aut(X)-orbit. Let r be the index of points P 1 , . . . , P N ∈ X. By the orbifold Riemann-Roch theorem and Bogomolov-Miyaoka inequality we have (see [Kaw92], [KMMT00]). This immediately implies that N 16. The subgroup Aut P 1 (X) ⊂ Aut(X) stabilizing the point P 1 has index [Aut(X) : Aut P 1 (X)] N. Remark 4.2.2. It is known that terminal non-Gorenstein Fano threefolds are bounded, i.e. they belong to an algebraic family (see [Kaw92], [KMMT00]). However it is expected that the class of these varieties is huge [B + ]. There are only few results related to some special types of these Fanos (see e.g. [BS07], [Pro16]). Mori fiber spaces Recall that a G-equivariant morphism φ : X → S of normal varieties acted on by a finite group G is a G-Mori fiber space, if X has terminal GQ-factorial singularities, one has dim(S) < dim(X), the fibers of φ are connected, the anticanonical divisor −K X is φ-ample, and the relative G-invariant Picard number ρ G (X/S) equals 1. If the dimension of X equals 3, there are three cases: • S is a point, −K X is ample; in this case X is said to be a GQ-Fano threefold, and X is a G-Fano threefold provided that the singularities of X are Gorenstein; • S is a curve, a general fiber of φ is a del Pezzo surface; in this case X is said to be a GQ-del Pezzo fibration; • S is a surface, a general fiber of φ is a rational curve; in this case X is said to be a GQ-conic bundle. The goal of this section is to estimate weak Jordan constants for the automorphism groups of varieties of GQ-conic bundles and GQ-del Pezzo fibrations. 5.1. Conic bundles. We start with automorphism groups of GQ-conic bundles. Lemma 5.1.1. Let G be a finite group, and φ : X → S be a three-dimensional Gequivariant fibration into rational curves over a rational surface S. ThenJ(G) 3456. Proof. By [Avi14] we may assume that X and S are smooth, and any fiber of φ is a (possibly reducible or non-reduced) conic. There is an exact sequence of groups where G φ acts by fiberwise automorphisms with respect to φ, and G S ⊂ Aut(S). By Lemma 3.3.1 there is an abelian subgroup G ′ S ⊂ G S of index [G S : G ′ S ] 288 such that G ′ S acts on S with a fixed point. Let P ∈ S be one of the fixed points of G ′ S , and let The fiber C is a reduced conic, so that it is either isomorphic to P 1 , or is a union of two copies of P 1 meeting at one point. In the former case there is a point Q ∈ C that is invariant with respect to some by Lemma 3.1.1. 
In the latter case the intersection point Q of the irreducible components C 1 and C 2 of C is invariant with respect to the group G ′ , and there exists a subgroup G ′′ ⊂ G ′ of index [G ′ : G ′′ ] 2 such that C 1 and C 2 are invariant with respect to G ′′ . By Lemma 2.1.2 the group G ′′ acts faithfully on the Zariski tangent space T Q (X), and the group G ′ S acts faithfully on the Zariski tangent space T P (S). As we have seen, the group G ′′ preserves the point Q and a tangent direction v ∈ T Q (X) ∼ = k 3 that lies in the kernel of the natural projection T Q (X) → T P (S). Moreover, there is an embedding where Γ 1 ⊂ k * , and Γ 2 ⊂ G ′ S . Since G ′ S and k * are abelian groups, we conclude that so is G ′′ . Therefore, one has 5.2. Del Pezzo fibrations. Before we pass to the case of GQ-del Pezzo fibrations we will establish some auxiliary results. Recall [KSB88, Definition 3.7] that a surface singularity is said to be of type T if it is a quotient singularity and admits a Q-Gorenstein one-parameter smoothing. Lemma 5.2.1. Let X be a normal threefold with at worst isolated singularities and let S ⊂ X be an effective Cartier divisor such that the log pair (X, S) is purely log terminal (see [KM98,§2.3]). Then S has only singularities of type T . Proof. Regard X as the total space of a deformation of S. By our assumptions divisors K X + S and S are Q-Cartier. Hence X is Q-Gorenstein. By the inversion of adjunction (see [KM98,Theorem 5.20]) the surface S has only Kawamata log terminal (i. e. quotient) singularities (see [KM98,Theorem 5.50]). Hence the singularities of S are of type T . Lemma 5.2.2. Let S be a singular del Pezzo surface with T -singularities. Assume that S has at least one non-Gorenstein point. Then Aut(S) has an orbit of length at most 2 on S. Proof. Assume that Aut(S) has no orbits of length at most 2 on S. By [HP10, Proposition 2.6] one has where |M| is a linear system without fixed components and F is the fixed part of | − K S |, so that First assume that F = 0. By Shokurov's connectedness theorem (see e.g [KM98, Theorem 5.48]) we know that F is connected. Hence F is a connected chain of rational curves. In this situation Aut(S) acts on F so that there exists either a fixed point P ∈ Sing (F ) or an invariant irreducible component F 1 ⊂ F (cf. the proof of Lemma 3.1.1). In the first case we have a contradiction with our assumption and in the second case Aut(S) permutes two points of intersection of F 1 with Supp (F − F 1 ), again a contradiction. Thus F = 0 and so Sing ′ (S) ⊂ Bs |M| and Sing ′ (S) ⊂ Sing (M). Since Sing ′ (S) contains at least three points and p a (M) = 1, the divisor M is reducible. By Bertini's theorem the linear system |M| is composed of a pencil, which means that there is a pencil |L| such that |M| = n|L| for some n 2, and Sing ′ (S) ⊂ Bs |L|. Since the log pair (S, M) is log canonical, there are exactly two irreducible components of M passing through any point P ∈ Sing ′ (S), see [KM98,Theorem 4.15]. Since Sing ′ (S) contains at least three points, the dual graph of M cannot be a combinatorial cycle, a contradiction. Lemma 5.2.3. Let X be a threefold, and G ⊂ Aut(X) be a finite subgroup. Suppose that there is a G-invariant smooth del Pezzo surface S contained in the smooth locus of X. ThenJ(G) 288. Proof. There is an exact sequence of groups Hence G Q ⊂ A × H Q , where A ⊂ k * is some cyclic group. Therefore, the group G Q is abelian, so that one hasJ (G) [G : G Q ] 288. Remark 5.2.4. 
Let G ⊂ Aut(X) be a finite subgroup, and Σ ⊂ X be a non-empty finite subset. Then a stabilizer G P ⊂ G of a point P ∈ Σ has index [G : G P ] |Σ|, so that by Remark 1.2.2 one hasJ (G) |Σ| ·J(G P ) |Σ| ·J Aut P (X) . Now we are ready to finish with weak Jordan constants of rationally connected threedimensional GQ-del Pezzo fibrations. Proof. There is an exact sequence of groups where G φ acts by fiberwise automorphisms with respect to φ, and acts on B with a fixed point. Let P ∈ B be one of the fixed points of G ′ B , let F = φ * (P ) be the scheme fiber over P , ] 12, and the fiber S is G ′ -invariant. In particular, one has Suppose that F is a multiple fiber of φ, i.e. S = F . Then by [MP09] there is a G-invariant set Σ ⊂ S of singular points of X such that either |Σ| 3, or |Σ| = 4 and Σ consists of cyclic quotient singularities. In the former case Remark 5.2.4 and Corollary 4.1.3(i) imply thatJ (G) 12 · 3 · 288 = 10368. In the latter case Remark 5.2.4 and Corollary 4.1.3(ii) imply that J(G) 12 · 4 · 72 = 3456. Therefore, we can assume that S is not a multiple fiber of φ. In particular, S = F is a Cartier divisor on X. Suppose that the log pair (X, S) is not purely log terminal (see [KM98,§2.3]). Let c be the log canonical threshold of the log pair (X, S) (cf. the proof of [PS16a, Lemma 3.4]). Let Z 1 ⊂ S be a minimal center of log canonical singularities of the log pair (X, cS), see [Kaw97, Proposition 1.5]. Since (X, S) is not purely log terminal, we conclude that c < 1, so that dim(Z) 1. It follows from [PS16a, Lemma 2.5] that Z is G ′ -invariant. If Z is a point, thenJ (G) [G : G ′ ] ·J(G ′ ) 12 · 288 = 3456 by Corollary 4.1.3(i). Thus we assume that Z is a curve. Using [PS16a, Lemma 2.5] once again, we see that Z is smooth and rational. By Gorenstein Fano threefolds Let X be a Fano threefold with at worst terminal Gorenstein singularities. In this case, the number g(X) = 1 2 (−K X ) 3 + 1 is called the genus of X. By Riemann-Roch theorem and Kawamata-Viehweg vanishing one has dim | − K X | = g(X) + 1 (see e. g. [IP99, 2.1.14]). In particular, g(X) is an integer, and g(X) 2. The maximal number ι = ι(X) such that −K X is divisible by ι in Pic (X) is called the Fano index, or sometimes just index, of X. Recall that Pic (X) is a finitely generated torsion free abelian group, see e.g. [IP99, Proposition 2.1.2]. The rank ρ(X) of the free abelian group Pic (X) is called the Picard rank of X. Let H be a divisor class such that −K X ∼ ι(X)H. The class H is unique since Pic (X) is torsion free. Define the degree of X as d(X) = H 3 . The goal of this section is to bound weak Jordan constants for automorphism groups of singular terminal Gorenstein Fano threefolds. 6.1. Low degree. We start with the case of small anticanonical degree. We will use notation and results of §A.2. (iii) ι(X) = 1 and g(X) = 2; (iv) ι(X) = 1, g(X) = 3, and X is a double cover of a three-dimensional quadric. Suppose that G ⊂ Aut(X) is a finite group. Then for some positive integer r there is a central extension 1 → µ r →G → G → 1 such that one has an embeddingG ⊂ GL 3 (k) × k * in case (i), an embeddingG ⊂ GL 4 (k) in cases (ii) and (iii), and an embeddingG ⊂ GL 5 (k) in case (iv). respectively. In case (iv) our X is naturally embedded as a weighted complete intersection of multidegree (2, 4) in P = P(1 5 , 2). Let O X (1) be the restriction of the (non-invertible) divisorial sheaf O P (1) to X (see [Dol82,1.4.1]). 
Since X is Gorenstein, in all cases it is contained in the smooth locus of P, and thus O X (1) is an invertible divisorial sheaf on X. Moreover, under the above embeddings we have in cases (iii) and (iv), while O X (1) = O X (− 1 2 K X ) in cases (i) and (ii). Since the group Pic (X) has no torsion, in all cases the class of O X (1) in Pic (X) is invariant with respect to the whole automorphism group Aut(X). Also, the line bundle O X (1) is ample, so that the algebra R(X, O X (1)) is finitely generated. Therefore, by Lemma A.2.13 for any finite subgroup Γ ⊂ Aut(X) the action of Γ on X is induced by its action on P ∼ = Proj R X, O X (1) . Thus the assertion follows from Lemma A.2.8. Remark 6.1.2. Assume the setup of Proposition 6.1.1. Then using the notation of the proof of Lemma A.2.13 one can argue that a central extension of the group G acts on the vector space which immediately gives its embedding into GL k 1 +...+k N (k). This would allow to avoid using Lemma A.2.8, but would give a slightly weaker result. Using a more explicit geometric approach, one can strengthen the assertion of Proposition 6.1.1(i). Corollary 6.1.3. In the assumptions of Proposition 6.1.1(i) one has G ⊂ GL 3 (k). Proof. The base locus of the linear system |H| is a single point P which is contained in the smooth part of X (see e.g. [Shi89, Theorem 0.6]). Clearly, the point P is Aut(X)invariant. Therefore, Lemma 2.1.2 implies that G ⊂ GL 3 (k). Then the group Aut(X) is Jordan withJ Aut(X) 960. Lemma 6.1.5. Let X ⊂ P 4 be a hypersurface of degree at least 2. Then the group Aut(X) is Jordan withJ Aut(X) 960. 6.2. Complete intersection of a quadric and a cubic. Now we will describe some properties of finite subgroups of automorphisms of a complete intersection of a quadric and a cubic in P 5 . Lemma 6.2.1. Let X ⊂ P 5 be a Fano threefold with terminal Gorenstein singularities such that ρ(X) = 1, ι(X) = 1, and g(X) = 4, i. e. X is a complete intersection of a quadric and a cubic in P 5 (see [Isk80a, Proposition IV.1.4], [PCS05, Theorem 1.6 or Remark 4.2]). Let Q ⊂ P 5 be the (unique) quadric passing through X. Then one of the following possibilities occurs: (i) the quadric Q is smooth; in this case there is a subgroup Aut ′ (X) ⊂ Aut(X) of index at most 2 such that Aut ′ (X) ⊂ PGL 4 (k); (ii) the quadric Q is a cone with an isolated singularity; in this case for any finite subgroup G ⊂ Aut(X) there is an embedding (iii) the quadric Q is a cone whose singular locus is a line; in this case for any finite subgroup G ⊂ Aut(X) there is a subgroup F ⊂ G of index [G : F ] 3 such that there is an embedding Proof. The embedding X ֒→ P 5 is given by the anticanonical linear system on X. Hence there is an action of the group Aut(X) on P 5 that agrees with the action of Aut(X) on X, see e.g. [KPS16, Lemma 3.1.2]. The quadric Q is Aut(X)-invariant, and the action of Aut(X) on Q is faithful. Since the singularities of X are terminal and thus isolated, we see that the singular locus of Q is at most one-dimensional. Suppose that Q is non-singular. Then Q is isomorphic to the Grassmannian Gr(2, 4), so that Aut(Q) ∼ = PGL 4 (k) ⋊ µ 2 , which gives case (i). Therefore, we may assume that Q is singular. Then Sing (Q) is a linear subspace of P 5 of dimension δ 1. Suppose that δ = 0, so that Sing (Q) is a single point P . Then the point P is Aut(Q)invariant, and thus also Aut(X)-invariant. Let G ⊂ Aut(X) be a finite subgroup. 
By Lemma 2.1.2 there is an embedding Moreover, the group G acts by a character on a quadratic polynomial on T P (P 5 ) that corresponds to the quadric Q. Hence G is contained in the subgroup π −1 (PSO 5 (k)) ⊂ GL 5 (k), where π : GL 5 (k) → PGL 5 (k) is the natural projection. This gives case (ii). Finally, suppose that δ = 1. Let L ∼ = P 1 be the vertex of Q. Then L is Aut(Q)-invariant, and thus also Aut(X)-invariant. Let G ⊂ Aut(X) be a finite subgroup. Note that X ∩ L is non-empty and consists of at most three points. Hence there is a subgroup F ⊂ G of index [G : F ] 3 such that F has a fixed point on L. Denote this point by P . By Lemma 2.1.2 there is an embedding F ֒→ GL T P (P 5 ) ∼ = GL 5 (k). Moreover, the representation of F in T P (P 5 ) splits as a sum of a one-dimensional and a four-dimensional representations since F preserves the tangent direction T P (L) to L. Put Then there is an embedding F ֒→ F 1 × F 2 , where F 1 is a finite cyclic group, and F 2 is a finite subgroup of GL(V ) ∼ = GL 4 (k). The last thing we need to observe is that F 2 preserves a quadric cone in P(V ) corresponding to an intersection of the tangent cone to Q at P with the subspace V ֒→ T P (P 5 ). Therefore, F 2 is contained in the subgroup where π : GL 4 (k) → PGL 4 (k) is the natural projection. Since π −1 (PSO 4 (k)) ∼ = SO 4 (k) × k * /µ 2 , this gives case (iii) and completes the proof of the lemma. Proof. By Lemma 6.2.1 one of the following possibilities holds: (i) there is a subgroup Aut ′ (X) ⊂ Aut(X) of index at most 2 such that Aut ′ (X) ⊂ PGL 4 (k); (ii) for any finite subgroup G ⊂ Aut(X) there is an embedding G ⊂ GL 5 (k); (iii) for any finite subgroup G ⊂ Aut(X) there is a subgroup F ⊂ G of index [G : F ] 3 such that there is an embedding In particular, the group Aut(X) is Jordan. In case (i) one has Proof. Recall that g(X) 2. If g(X) = 2, then Aut(X) is Jordan withJ Aut(X) 960 by Lemma 6.1.4. If g(X) = 3 and −K X is not very ample, then Aut(X) is also Jordan withJ Aut(X) 960 by Lemma 6.1.4. If g(X) = 3 and −K X is very ample, then X is a smooth quartic in P 4 (because dim | − K X | = 4 and −K 3 X = 4), so that Aut(X) is Jordan withJ Aut(X) 960 by Lemma 6.1.5. Finally, if g(X) = 4, then the group Aut(X) is Jordan withJ Aut(X) 1920 by Corollary 6.2.2. Now we are ready to study automorphism groups of arbitrary singular Gorenstein G-Fano threefolds. Lemma 6.3.2. Let G be a finite group, and let X be a singular Gorenstein G-Fano threefold. Then the group Aut(X) is Jordan with J Aut(X) 9504. Proof. Let P 1 , . . . , P N ∈ X be all singular points of X. The group Aut(X) acts on the set {P 1 , . . . , P N }. The subgroup Aut P 1 (X) ⊂ Aut(X) stabilizing the point P 1 has index [Aut(X) : Aut P 1 (X)] N. According to [Nam97] there exists a smoothing of X, that is a one-parameter deformation such that a general fiber X b is smooth and the central fiber X 0 is isomorphic to X. One has by [Nam97,Theorem 13]. Moreover, there is an identification Pic (X b ) ∼ = Pic (X), see [JR11,Theorem 1.4]. Suppose that ρ(X) 2. Smooth Fano threefolds V whose Picard group admits an action of a finite group G such that ρ(V ) G = 1 and ρ(V ) > 1 are classified in [Pro13b]. Applying this classification to V = X b we obtain h 1,2 (X b ) 9. Therefore, we are left with several possibilities with h 1,2 (X b ) 14. In this case (6.3.3) implies that N 33 (and in some cases this bound can be significantly improved, see [Pro17]). Now Corollary 4.1.3(i) implies that Aut(X) is Jordan with J Aut(X) 33 · 288 = 9504. 
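To summarize the arithmetic behind the bound just obtained (this recap is ours and only restates numbers already present in the proof of Lemma 6.3.2): the stabilizer of the singular point P_1 has index at most N in Aut(X), the number N of singular points satisfies N ≤ 33 by the h^{1,2}-estimate, and Corollary 4.1.3(i) bounds the weak Jordan constant of the stabilizer by 288, so
\[
\bar{J}\bigl(\mathrm{Aut}(X)\bigr) \;\le\; \bigl[\mathrm{Aut}(X) : \mathrm{Aut}_{P_1}(X)\bigr] \cdot \bar{J}\bigl(\mathrm{Aut}_{P_1}(X)\bigr) \;\le\; N \cdot 288 \;\le\; 33 \cdot 288 \;=\; 9504 .
\]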
Smooth Fano threefolds In this section we bound weak Jordan constants for automorphism groups of smooth Fano threefolds. 7.1. Complete intersections of quadrics. It appears that we can get a reasonable bound for a weak Jordan constant of an automorphism group of a smooth complete intersection of two quadrics of arbitrary dimension. Here we will use the results of §A.1. In dimension 3 we can also bound weak Jordan constants for automorphism groups of smooth complete intersections of three quadrics. Lemma 7.1.2. Let X ⊂ P 6 be a smooth complete intersection of 3 quadrics. Then the group Aut(X) is Jordan withJ Aut(X) 10368. so that P 6 is identified with P(V ). Since the anticanonical class of X is linearly equivalent to a hyperplane section of X in P 6 , the group Aut(X) acts on V , see e.g. [KPS16, Corollary 3.1.3]. Thus we may assume that Aut(X) ⊂ GL(V ). Let − Id ∈ GL(V ) be the scalar matrix diag(−1, . . . , −1). LetΓ ⊂ GL(V ) be a group generated by Γ and − Id, and let G ⊂ GL(V ) be a group generated by Aut(X) and − Id. Since Aut(X) ⊂ GL(V ) acts faithfully on P(V ) and thus does not contain scalar matrices, we see thatΓ with m ′ = m + 1 4. We conclude that Aut(X) is Jordan with by Lemma 2.6.3. Remark 7.1.3. Let X ⊂ P 6 be a smooth complete intersection of 3 quadrics. Then X is non-rational, see [Bea77a, Theorem 5.6]. Therefore, automorphism groups of varieties of this type cannot provide examples of subgroups in Cr 3 (k) whose Jordan constants attain the bounds given by Theorem 1.2.4, cf. Remark 8.2.1 below. 7.2. Fano threefolds of genus 6. Recall that a smooth Fano threefold X with ρ(X) = 1, ι(X) = 1, and g(X) = 6 may be either an intersection of the Grassmannian Gr(2, 5) ⊂ P 9 with a quadric and two hyperplanes, or a double cover of a smooth Fano threefold Y = Gr(2, 5) ∩ P 6 ⊂ P 9 with the branch divisor B ∈ | − K Y | (see [Gus82]). We will refer to the former varieties as Fano threefolds of genus 6 of the first type, and to the latter varieties as Fano threefolds of genus 6 of the second type. Remark 7.2.1. In [DK15] these were called ordinary and special varieties, respectively. . Let X be a smooth Fano threefold with ρ(X) = 1, ι(X) = 1, and g(X) = 6. If X is of the first type, then there is an embedding If X is of the second type, then there is a normal subgroup Γ ⊂ Aut(X) such that Γ ∼ = µ 2 and there is an exact sequence Proof. By definition, we have a natural morphism γ : X → Gr(2, 5). By [DK15, Theorem 2.9] the morphism γ is functorial. Note that γ is completely determined by what is called GM data in [DK15], in particular it is equivariant with respect to the action of the group Aut(X). Consider the corresponding map θ : Aut(X) → Aut(Gr(2, 5)) ∼ = PGL 5 (k). 32 Suppose that X is a Fano threefold of genus 6 of the first type. Then functoriality of γ implies that θ is an embedding. This proves the first assertion of the lemma. Now suppose that X is a Fano threefold of genus 6 of the second type. Then the morphism γ is a double cover, and its image is a Fano threefold Y with ρ(Y ) = 1, ι(Y ) = 2, and d(Y ) = 5, see [DK15,Proposition 2.20]. Let Γ ⊂ Aut(X) be the subgroup generated by the Galois involution of the double cover γ : X → Y . Then Γ ∼ = µ 2 is a normal subgroup of Aut(X), and Aut(X)/Γ embeds into Aut(Y ). On the other hand, one has Aut(Y ) ∼ = PGL 2 (k), see e.g. [Muk88,Proposition 4.4] or [CS16, Proposition 7.1.10]. This gives the second assertion of the lemma. Proof. Suppose that X is a Fano threefold of genus 6 of the first type. 
Then there is an embedding Aut(X) ⊂ PGL 5 (k) by Lemma 7.2.2, so that Aut(X) is Jordan withJ(Aut(X)) 960 by Lemma 2.5.1. Now suppose that X is a Fano threefold of genus 6 of the second type. Then there is an exact sequence 1 → Γ → Aut(X) → PGL 2 (k) by Lemma 7.2.2. Therefore, Aut(X) is Jordan withJ(G) 12 by Lemma 2.2.1. 7.3. Large degree and index. Now we consider the cases with large anticanonical degree and large index. Proof. It is known that ι(X) 4. Moreover, ι(X) = 4 if and only if X ∼ = P 3 , and ι(X) = 3 if and only if X is a quadric in P 4 (see e. g. [IP99, 3.1.15]). In the former case one has Aut(X) ∼ = PGL 4 (k), so that the group Aut(X) is Jordan withJ Aut(X) = 960 by Lemma 2.4.1. In the latter case the group Aut(X) is Jordan withJ Aut(X) 960 by Lemma 6.1.5. Lemma 7.4.1. Let G be a finite group, and X be a smooth G-Fano threefold. Suppose that ρ(X) > 1. Then Aut(X) is Jordan withJ Aut(X) 10368. Proof. By [Pro13b] we have the following possibilities. In case (iv) one has ρ(X) = 2, so that two conic bundles are all possible Mori contractions from X. Thus there is a subgroup Aut ′ (X) ⊂ Aut(X) of index at most 2 such that the conic bundle π 1 : X → P 2 is Aut ′ (X)-equivariant. Let G ⊂ Aut ′ (X) be a finite subgroup. Then one has ρ(X/P 2 ) G = ρ(X/P 2 ) = 1, so that π 1 : X → P 2 is a G-equivariant conic bundle. Thus Aut(X) is Jordan with J Aut(X) [Aut(X) : Aut ′ (X)] ·J Aut ′ (X) 2 · 3456 = 6912 by Lemma 5.1.1. In case (v) one has ρ(X) = 2, so that the contraction π : X → P 3 is one of the two possible Mori contractions from X. Hence there is a subgroup Aut ′ (X) of index at most 2 such that π is Aut ′ (X)-equivariant. In particular, Aut ′ (X) acts on P 3 faithfully, and since the curve C ⊂ P 3 is not contained in any plane, Aut ′ (X) acts faithfully on C as well. Therefore, Aut(X) is Jordan with by Remark 2.2.3. In case (vi) one has ρ(X) = 2, so that the contraction π : X → Q is one of the two possible Mori contractions from X. Hence there is a subgroup Aut ′ (X) of index at most 2 such that π is Aut ′ (X)-equivariant. In particular, Aut ′ (X) acts on Q faithfully. Since all automorphisms of Q are linear, and the curve C ⊂ Q ⊂ P 4 is not contained in any hyperplane, Aut ′ (X) acts faithfully on C as well. Therefore, Aut(X) is Jordan with J Aut(X) [Aut(X) : Aut ′ (X)] ·J Aut ′ (X) 2 ·J PGL 2 (k) = 24 by Corollary 2.2.2. Remark 7.4.2 (cf. Remark 7.1.3). Let X be a smooth G-Fano threefold with ρ(X) > 1, and assume the notation of the proof of Lemma 7.4.1. Then one hasJ Aut(X) < 10368 with an exception of case (ii), and with a possible exception of case (vii). However, if X is like in case (vii), then it is non-rational, see [AB92]. Therefore, automorphism groups of varieties of this type cannot provide examples of subgroups in Cr 3 (k) whose Jordan constants attain the bounds given by Theorem 1.2.4, cf. Remark 8.2.1 below. Remark 7.4.3. In general, studying Fano varieties with large automorphism groups is an interesting problem on its own. In many cases such varieties exhibit intriguing birational properties, see e.g. [CS11], [CS16], [PS16b]. Proof of the main theorem In this section we complete the proof of Theorem Proof. If X is singular, the group Aut(X) is Jordan with J Aut(X) 9504 by Lemma 6.3.2. Therefore, we assume that X is smooth. If ι(X) > 1, then the group Aut(X) is Jordan withJ Aut(X) 960 by Lemma 7.3.2. It remains to consider the case when X is a smooth Fano threefold with Pic (X) = Z · K X . According to the classification (see e. g. 
[IP99, §12.2]), one has either 2 g(X) 10, or g(X) = 12. If g(X) 4, then the group Aut(X) is Jordan with by Lemma 7.4.1. Therefore, we may assume that ρ(X) = 1, so that the group Aut(X) is Jordan withJ Aut(X) 10368 by Proposition 8.1.1. Remark 8.1.3 (cf. Remark 3.2.4). In several cases one can produce better bounds for weak Jordan constants of certain Fano threefolds applying a bit more effort. We did not pursue this goal since the current estimates are already enough to prove our main results. 8.2. Proof and concluding remarks. Now we are ready to prove Theorem 1.2.4. Proof of Theorem 1.2.4. Let X be a rationally connected threefold over an arbitrary field k of characteristic 0, and let G ⊂ Bir(X) be a finite group. It is enough to establish the upper bounds forJ (G) and J(G). Moreover, to prove the bounds we may assume that k is algebraically closed. Regularizing the action of G and taking an equivariant desingularization (see e. g. [PS14, Lemma-Definition 3.1]), we may assume that X is smooth and G ⊂ Aut(X). Applying G-equivariant Minimal Model Program to X (which is possible due to an equivariant version of [BCHM10, Corollary 1.3.3] and [MM86, Theorem 1], since rational connectedness implies uniruledness), we may assume that either there is a GQ-conic bundle structure φ : X → S for some rational surface S, or there is a GQ-del Pezzo fibration φ : X → P 1 , or X is a GQ-Fano threefold. Therefore, we havē J (G) 10368 by Lemmas 5.1.1 and 5.2.5 and Corollary 8.1.2. Applying Remark 1.2.2, we obtain the inequality J(G) 10368 2 = 107495424. If k is algebraically closed, then the group Cr 3 (k) contains a group Aut P 1 × P 1 × P 1 ⊃ A 5 × A 5 × A 5 ⋊ S 3 , and the largest abelian subgroup of the latter finite group has order 125. Therefore, one hasJ Cr 3 (k) = 10368. Remark 8.2.1. We do not know whether the bound for the (usual) Jordan constant for the group Cr 3 (k) over an algebraically closed field k of characteristic 0 provided by Theorem 1.2.4 is sharp or not. The Jordan constant of the group Aut(P 1 × P 1 × P 1 ) is smaller than that, but there may be other automorphism groups of rational varieties providing this value, cf. Lemma 5.2.5. We also do not know the actual value of J Cr 2 (k) , but we believe that it can be found by a thorough (and maybe a little bit boring) analysis of automorphism groups of del Pezzo surfaces and two-dimensional conic bundles, since in dimension 2 much more precise classification results are available. Remark 8.2.2. In dimension 4 and higher we cannot hope (at least on our current level of understanding the problem) to obtain results similar to Theorem 1.2.4. The first reason is that in dimension 3 we have a partial classification of Fano varieties, which gives a much more detailed information than the boundedness proved in [KMMT00] and [Bir16]; this gives us a possibility to (more or less) establish an alternative proof of Theorem 1.1.6 by repeating the same steps as in [PS16a] and using this information instead of boundedness. Another (and actually more serious) reason is that we use a classification of three-dimensional terminal singularities to obtain bounds for Jordan constants of automorphism groups of terminal Fano varieties and Mori fiber spaces. The result of [Kol11, Theorem 1] shows that a "nice" classification of higher dimensional terminal singularities is impossible, at least in the setup we used in Lemma 4.1.1 and Corollary 4.1.3, due to unboundedness of the dimensions of Zariski tangent spaces of their index one covers. Appendix A. 
Automorphisms of some complete intersections In this section we collect some (well-known) results about automorphisms of complete intersections of quadrics, and complete intersections in weighted projective spaces. A.1. Complete intersections of quadrics. Let X ⊂ P n = P(V ) be a smooth complete intersection of r quadrics. Let I X be the ideal sheaf of X, so that W = H 0 (P(V ), I X (2)) is the r-dimensional vector space of quadrics passing through X. Let q : W ֒→ Sym 2 V ∨ be the natural embedding. Lemma A.1.1. Suppose that n 2r. Then any automorphism of X is induced by an automorphism of P(V ) and induces an automorphism of P(W ). Proof. By adjunction formula one has −K X ∼ (n + 1 − 2r)H, where H is the class of a hyperplane section of X. Thus X is Fano, and in particular there is no torsion in the Picard group of X. Therefore, the class of H in Pic (X) is Aut(X)-invariant, and there is a natural embedding Aut(X) ֒→ PGL(V ). Furthermore, the twisted ideal sheaf I X (2) is invariant, hence the subspace is invariant under the action of Aut(X), and so we also have a map Aut(X) → PGL(W ). In the remaining part of this section we denote by Aut W (X) the image of the morphism Aut(X) → PGL(W ) constructed in Lemma A.1.1, and by Γ(X) its kernel. Thus we have an exact sequence Since Γ(X) preserves every quadric passing through X, it also preserves the quadrics in the pencil generated by Q and Q ′ , hence Γ(X) ⊂ Γ(Y ). So, the claim follows from Proposition A.1.3. We can describe the cases when Γ(X) = 1. This, in fact, is equivalent to "strict semistability" of q, i.e. to the situation when In the example below all V i are one-dimensional. Example A.1.8. Let X ⊂ P n be given by r n equations λ 10 x 2 0 + λ 11 x 2 1 + . . . + λ 1n x 2 n = 0, . . . , λ r0 x 2 0 + λ r1 x 2 1 + . . . + λ rn x 2 n = 0, where λ ij ∈ k are sufficiently general. Then X is a smooth complete intersection of r quadrics, and clearly all diagonal matrices with entries ±1 preserve each of the quadrics. Therefore, in this case one has Γ(X) ∼ = µ n 2 . This shows that the group Γ(X) may be nontrivial for any r. Now we will consider intersections of three quadrics. Let ∆ be a reduced connected curve. Recall that ∆ is said to be stable if its singularities are nodes, and ∆ has no infinitesimal automorphisms. The automorphism group of a stable curve is finite [DM69, Theorem 1.11]. Note also that any nodal plane curve of degree at least 4 is stable (see e.g. [Has99, Proposition 2.1]). Lemma A.1.9. Let X ⊂ P n , n 6, be a smooth complete intersection of three quadrics. Then Aut W (X) acts faithfully on a stable curve. In particular, the group Aut(X) is finite. Proof. Let ∆ ⊂ P 2 = P(W ) be the curve that parameterizes degenerate quadrics passing through X. This curve is usually called the Hesse curve of X (see [Tyu75, §2.2]). One has deg ∆ = n + 1 7. The curve ∆ is Aut(X)-invariant. Since it is not a line, we conclude that Aut W (X) acts faithfully on ∆. It is well known that the curve ∆ is nodal; this follows, for example, from [Bea77a, Proposition 1.2(iii)] applied to the quadric bundle over P 2 that is obtained by blowing up P n along X. Thus, the curve ∆ is stable. As we noticed above, stability of ∆ implies finiteness of Aut W (X). On the other hand, Γ(X) is finite by Corollary A.1.7. So, finiteness of Aut(X) follows from exact sequence (A.1.2). In a standard way, a weighted projective space P = P(a 0 , . . . , a n ) is equipped with rank 1 coherent sheaves O P (m), m ∈ Z. 
These sheaves are divisorial but non-invertible in general (see [Dol82,§1]). Any weighted projective space is isomorphic to a well-formed weighted projective space, i.e. a weighted projective space P(a 0 , . . . , a n ) such that the greatest common divisor of any n among the n + 1 weights a 0 , . . . , a n equals 1 (see [Dol82,1.3 Lemma A.2.1. Suppose that the weighted projective space P = P(a 0 , . . . , a n ) is wellformed. Then the following assertions hold. (i) The group Cl (P) ∼ = Z is generated by the class of O P (1). (ii) One has a canonical isomorphism of Z-graded rings where the weight of the variable x i is defined to be a i . Proof. We have the standard exact sequence and the action of µ a 0 on A n is diagonal with weights (a 1 , . . . , a n ). By our well-formedness assumption gcd(a 1 , . . . , a n ) = 1, i.e. the action of µ a 0 on A n is free in codimension 1. Therefore, Cl (U 0 ) ∼ = Z/a 0 Z and Cl (P) ∼ = Z ⊕ T , where T is a finite cyclic group whose order divides a 0 . By symmetry the order of T divides a i for all i and again by our well-formedness assumption T = 0. Thus Cl (P) ∼ = Z. Let D be the positive generator of Cl (P) and let D i be the effective Weil divisor given by x i = 0. Since Cl (U 0 ) ∼ = Z/a 0 Z, the sequence (A.2.2) shows D 0 ∼ a 0 D and similarly D i ∼ a i D for all i. Consider the polynomial ring R = k[x 0 , . . . , x n ] as a graded k-algebra R = i 0 R i with grading given by deg x i = a i > 0. In particular, one has R 0 = k. Denote by R m the graded vector subspace i m R i ⊂ R. Lemma A.2.4. Let U m ⊂ R m be the intersection of R m with the subalgebra of R generated by R m−1 , and put k m = dim R m − dim U m . Suppose that R is finitely generated, so that there is a positive N such that k m = 0 for m > N . Put Γ = GL k 1 (k) × . . . × GL k N . Then Aut(R), regarded as the automorphism group of the graded algebra R, contains the group Γ, and any reductive subgroup of Aut(R) is isomorphic to a subgroup of Γ. Proof. The group Aut(R) acts on every vector space R m so that the subspace U m is Aut(R)-invariant. Choose V m ⊂ R m to be a vector subspace such that One has k m = dim V m . This gives an obvious action of Γ on R. Now let G ⊂ Aut(R) be a reductive subgroup. Then one can choose a G-invariant vector subspace V ′ m ⊂ R m such that U m ⊕ V ′ m = R m . Moreover, the action of G on R is recovered from its action on V ′ m . Since V ′ m ∼ = V m , this gives the second assertion of the lemma. We will use the abbreviation where k 1 , . . . , k N are allowed to be any non-negative integers. Proposition A.2.5. Suppose that the weighted projective space P = P(a k 1 1 , . . . , a k N N ) is well-formed. Let R U be the unipotent radical of the group Aut(P), so that the quotient Aut red (P) = Aut(P)/R U is reductive. Then Aut red P ∼ = GL k 1 (k) × . . . × GL k N (k) /k * , where k * embeds into the above product by (A.2.6) t → (t a 1 Id k 1 , . . . , t a N Id k N ). Here Id k denotes the identity k × k-matrix. Then Γ naturally acts on Cox(P), that is identified with the ring of regular functions on A k 1 +...+k N \ {0}. Moreover, Aut(P) is actually a centralizer of k * in Γ, since all weights of the action of k * on Cox(P) are positive (and Cox(P) splits into a sum of eigen-spaces of k * ). Thus Aut(P) is isomorphic to the group of graded automorphisms of the ring Cox(P) by [Cox95,Theorem 4.2(iii)]. Remark A.2.7. The assertion of Proposition A.2.5 fails without the assumption that the weighted projective space P is well-formed. 
One can take the weighted projective line P(1, 2) ∼ = P 1 as a counterexample. Proof. Let R U ⊂ Aut(P) be the unipotent radical of Aut(P). Since any nontrivial element of a unipotent group has infinite order, the intersection G ∩ R U is trivial, hence G embeds into the reductive quotient Aut red (P) = Aut(P)/R U . Let Y be a normal projective variety and let A be a Weil divisor on Y . Put has a natural structure of a graded k-algebra. Remark A.2.10. If the divisor A is ample, then the algebra R(Y, A) is finitely generated, and Y ∼ = Proj(R(Y, A)). As before, define a (graded) vector subspace Lemma A.2.11. Let Y be a normal projective variety with an action of a group Γ, and A be an ample Weil divisor on Y . Suppose that the class of A in Cl (Y ) is Γ-invariant. Then for some positive integer r there is a central extension (A.2.12) 1 → µ r →Γ → Γ → 1 such thatΓ acts on the algebra R(Y, A), and this action induces the initial action of Γ on Y . The algebra R(Y, A) is generated by its vector subspace R N (Y, A) for some N. Now it suffices to define an action of an appropriate central extension (A.2.12) on R m (Y, A) for each 1 m N (see e.g. [KPS16, §3.1]). Taking r to be sufficiently divisible, we may assume thatΓ acts on the whole vector space R N (Y, A), which gives the desired action on the algebra R(Y, A). Lemma A.2.13 (cf. Lemma A.2.4). Let Y be a normal projective variety with an action of a finite group Γ, and A be an ample Weil divisor on Y . Suppose that the class of A in Cl (Y ) is Γ-invariant and R(Y, A) is generated by R N (Y, A). For 1 m N let U m ⊂ R m (Y, A) be the intersection of R m (Y, A) with the subalgebra of R(Y, A) generated by R m−1 (Y, A), and put Then there is a natural embedding Y ֒→ P = P 1 k 1 , . . . , N k N and an action of Γ on P that induces the initial action of Γ on Y . Proof. By Lemma A.2.11 there is an action of a finite central extensionΓ of Γ on R(Y, A) that induces the initial action of Γ on Y . In particular, the groupΓ acts on every vector space R m (Y, A). Obviously, the subspace U m isΓ-invariant. Choose V m ⊂ R m (Y, A) to be aΓ-invariant vector subspace such that One has k m = dim V m . Let x Note that the action ofΓ on P factors through the action of Γ on P, and this action clearly induces the initial action of Γ on Y .
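As a quick sanity check of Proposition A.2.5 above (our own remark, not contained in the source), take the ordinary projective space P^n = P(1^{n+1}): here N = 1, a_1 = 1 and k_1 = n + 1, the unipotent radical is trivial, and the embedding (A.2.6) sends t to the scalar matrix t · Id_{n+1}, so the proposition gives
\[
\mathrm{Aut}\bigl(\mathbb{P}^n\bigr) \;=\; \mathrm{Aut}_{\mathrm{red}}\bigl(\mathbb{P}^n\bigr) \;\cong\; \mathrm{GL}_{n+1}(k)\,/\,k^{*} \;\cong\; \mathrm{PGL}_{n+1}(k),
\]
recovering the classical description of the automorphism group of projective space.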
2017-06-08T18:58:27.000Z
2016-08-02T00:00:00.000
{ "year": 2016, "sha1": "0ffad8b7e24da3f7aa342a9eb025779bd2b732f8", "oa_license": null, "oa_url": "https://doi.org/10.17323/1609-4514-2017-17-3-457-509", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "0ffad8b7e24da3f7aa342a9eb025779bd2b732f8", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
265287332
pes2o/s2orc
v3-fos-license
Significant Benefits of Environmentally Friendly Hydrosols from Tropaeolum majus L. Seeds with Multiple Biological Activities Tropaeolum majus L. is a traditional medicinal plant with a wide range of biological activities due to the degradation products of the glucosinolate glucotropaeolin. Therefore, the goals of this study were to identify volatiles using gas chromatography–mass spectrometry analysis (GC-MS) of the hydrosols (HYs) isolated using microwave-assisted extraction (MAE) and microwave hydrodiffusion and gravity (MHG). Cytotoxic activity was tested against a cervical cancer cell line (HeLa), human colon cancer cell line (HCT116), human osteosarcoma cell line (U2OS), and healthy cell line (RPE1). The effect on wound healing was investigated using human keratinocyte cells (HaCaT), while the antibacterial activity of the HYs was tested against growth and adhesion to a polystyrene surface of Staphylococcus aureus and Escherichia coli. Antiphytoviral activity against tobacco mosaic virus (TMV) was determined. The GC-MS analysis showed that the two main compounds in the HYs of T. majus are benzyl isothiocyanate (BITC) and benzyl cyanide (BCN) using the MAE (62.29% BITC and 15.02% BCN) and MHG (17.89% BITC and 65.33% BCN) extraction techniques. The HYs obtained using MAE showed better cytotoxic activity against the tested cancer cell lines (IC50 value of 472.61–637.07 µg/mL) compared to the HYs obtained using MHG (IC50 value of 719.01–1307.03 μg/mL). Both concentrations (5 and 20 µg/mL) of T. majus HYs using MAE showed a mild but statistically non-significant effect in promoting gap closure compared with untreated cells, whereas the T. majus HY isolated using MHG at a concentration of 15 µg/mL showed a statistically significant negative effect on wound healing. The test showed that the MIC concentration was above 0.5 mg/mL for the HY isolated using MAE, and 2 mg/mL for the HY isolated using MHG. The HY isolated using MHG reduced the adhesion of E. coli at a concentration of 2 mg/mL, while it also reduced the adhesion of S. aureus at a concentration of 1 mg/mL. Both hydrosols showed excellent antiphytoviral activity against TMV, achieving100% inhibition of local lesions on the leaves of infected plants, which is the first time such a result was obtained with a hydrosol treatment. Due to the antiphytoviral activity results, hydrosols of T. majus have a promising future for use in agricultural production. Introduction Tropaeolum majus L. (Indian cress) is a traditional, medicinal plant belonging to the Tropaeolaceae family (order Brassicales) [1].It is considered an extremely valuable plant primarily due to the presence of polyphenols, fatty acids, and the glucosinolate glucotropaeolin and its degradation products (isothiocyanate) [2,3].Isothiocyanates have become the focus of research in recent years, precisely because of their significant pharmacological, anticancer, antibacterial, antiadhesive and anti-inflammatory effects [2,[4][5][6][7]. The antiproliferative and antibacterial activities of T. 
majus can be attributed to benzyl isothiocyanate (BITC), which is the major degradation product of the glucosinolate glucotropaeolin [2,8].The major degradation product of glucotropaeolin, BITC, is a volatile compound that suppresses the action of chemical carcinogenesis in various preclinical cancer models and is effective against various sensitive and resistant bacteria [7].Gramnegative bacteria, such as Salmonella typhi, Escherichia coli, and Pseudomonas aeruginosa, and Gram-positive bacteria, including Staphylococcus aureus and Bacillus cereus, have been associated with food poisoning or food spoilage [9].Although BITC has excellent biological activity, due to its low water solubility, volatility, and unpleasant odor, its application in the food industry is still limited [10].The scratch test assay is a standard technique for testing the wound healing potential of different compounds.Although a variety of plant extracts have traditionally been considered to have positive wound healing potential, wound healing assays using plant materials have not been commonly performed [11,12].Among plant diseases, viral infections are of particular concern because the arsenal of available means to combat these pathogens is very limited.Modern phytotherapy and innovative approaches are promoting the use of essential oils, hydrosols, and other plant extracts to treat various human diseases and also to protect plants from pathogens.Since plants can tolerate high levels of infection in some cases and have understandably evolved strategies to deal with viral pathogens, some of their specialized metabolites could be effective antiviral compounds.In some previous studies, we identified volatile plant constituents as metabolites that, among numerous other functions, protect plants against viral infections [13,14].Plant isolates are used to prevent a large number of plant diseases caused by phytopathogenic bacteria, fungi, plant parasitic nematodes, and parasitic and non-parasitic weeds [15].In addition, natural extracts are increasingly becoming the focus of scientific research, as green prevention strategies support the use of natural resources that can replace synthetic remedies and herbicides in crop protection.Therefore, in recent years, modern advanced techniques such as microwave-assisted extraction (MAE) and microwave hydrodiffusion and gravity (MHG) have been used to isolate biologically active compounds.These extraction methods are more environmentally friendly due to their simplicity, lower energy consumption, and shorter processing time compared to conventional extraction methods [16,17].The microwave extraction technique applies heat to the extracted plant material indirectly via in situ water, while classical techniques apply heat directly [17].Using MAE/MHG techniques, three types of samples can be obtained: essential oils/extracts and hydrosols (HYs).As for biologically active compounds, the emphasis is mainly placed on their presence in essential oils and extracts [16].Hydrosols, also called floral and aromatic waters, are complex mixtures that are mostly used in cosmetics and contain trace amounts of essential oils and other compounds that are soluble in water [18,19].HYs, as by-products of extraction techniques, are often misclassified as wastewater and neglected/discarded, although they contain very valuable bioactive components [20].Isolated chemical components from the flowers and leaves of T. 
majus are usually identified by colorimetric reactions and gas chromatographic analysis combined with mass spectrometry [21]. Therefore, because of all the above, the novelty of this research is, to the best of our knowledge, the first detailed investigation of HYs obtained from T. majus seeds. Consequently, the aim was the characterization and quantification of the volatile compounds present in HYs obtained using two modern extraction techniques: MAE and MHG. The main goal of this research was to test the HYs' biological activities: cytotoxic activity against three cancer cell lines (cervical cancer cell line (HeLa), human colon cancer cell line (HCT116), human osteosarcoma cell line (U2OS)) and a healthy cell line (RPE1); scratch assays (to measure effects on wound healing) using human keratinocyte cells (HaCaT); antibacterial activity against the growth and adhesion to a polystyrene surface of S. aureus and E. coli; and antiphytoviral activity against TMV.

Identification of Volatile Components in Hydrosols from T. majus Seeds The volatile components present in the HYs isolated using the MAE and MHG extraction techniques from T. majus seeds were identified by GC-MS (Table 1). The stock solutions of volatile compounds (VCs) of the HYs obtained using the MAE and MHG techniques were 1.05 mg/mL and 3.95 mg/mL, respectively. The compounds benzaldehyde (9.91%), BITC (62.29%), and benzyl cyanide (BCN; 15.02%) were the main components detected in the HY of T. majus isolated using the MAE technique. In the HY of T. majus obtained using the MHG extraction technique, the main compounds were α-thujene (5.25%), BITC (17.89%), and BCN (65.33%). Interestingly, the two main components present in the HYs of T. majus had a reciprocal relationship when comparing the MAE and MHG extraction techniques. Retention indices (RIs) were determined relative to a series of n-alkanes (C8-C40) on the VF5-ms capillary column (RI). Identification method: RI, comparison of RIs with those in a self-generated library reported in the literature [22] and/or with authentic samples; comparison of mass spectra with those in the NIST02 and Wiley 9 mass spectral libraries. * injection of reference compounds; -, not identified.

Cytotoxic Activity The cytotoxic activity of the T. majus HYs on the HeLa, HCT116, and U2OS cancer cell lines was determined for the first time. The results showed that both HYs exhibited moderate activity against the cancer cell lines (Figure 1a).
The HY of T. majus isolated using MAE (HY after MAE) showed the highest cytotoxic activity, with an IC50 value of 472.61 µg/mL. The HY obtained using MHG (HY after MHG) had slightly weaker activity on U2OS (IC50 = 719.01 µg/mL). The HY isolated using MAE had a similar ability to inhibit the growth of HeLa and HCT116 cells, with IC50 values of 637.07 µg/mL and 636.13 µg/mL, respectively. In contrast, the HY isolated using MHG was less effective at inhibiting the growth of HeLa and HCT116 cancer cells (IC50 = 960.79 µg/mL and IC50 = 1307.03 µg/mL, respectively). The healthy cell line (RPE1) showed significant resistance; extremely high concentrations of both HYs were required to inhibit the growth of 50% of the cells (IC50 for HY after MAE = 1496.09 µg/mL and IC50 for HY after MHG = 8779.88 µg/mL) (Figure 1b).

Scratch Assay The scratch assay (to measure effects on wound healing) was performed by treating HaCaT cells with hydrosols isolated using the MAE and MHG extraction techniques, at concentrations of 5 µg/mL and 20 µg/mL for the HY isolated using MAE and 15 µg/mL for the HY isolated using MHG. Both concentrations of the T. majus HY isolated using the MAE technique (5 and 20 µg/mL) showed mild and statistically non-significant effects in promoting gap closure compared to untreated cells (Table 2, Figure 2). The T. majus HY isolated using MHG at a concentration of 15 µg/mL showed a statistically significant negative effect on wound healing.

Antibacterial Activity Table 3 presents the results of the antibacterial activity assays of the HYs isolated using the MAE and MHG extraction techniques against Gram-positive S. aureus and Gram-negative E. coli. The stock solutions of the HY isolated using MAE and the HY isolated using MHG were at concentrations of 1.05 mg/mL and 3.95 mg/mL, respectively. The obtained results showed that the real MIC for the HY isolated using MAE was above 0.5 mg/mL, and for the HY isolated using MHG, it was above 2 mg/mL (Table 3).
Bacterial Growth Kinetics To test their effects on bacterial growth, S. aureus and E. coli were exposed to the HYs isolated from T. majus seeds using the MAE and MHG extraction techniques for 24 h, at a concentration of 0.5 mg/mL for the HY isolated using MAE and at concentrations ranging from 2 mg/mL to 0.5 mg/mL for the HY isolated using MHG (Figure S1). The lack of growth inhibition was visible in the growth curves. The concentration did not significantly affect the growth curve, only slightly lengthening the lag phase.

Antiadhesion Activity The HYs obtained from T. majus seeds using the MAE and MHG extraction techniques were tested for their ability to inhibit the adhesion of S. aureus and E. coli to a polystyrene surface. The HY isolated using MHG reduced the adhesion of E. coli at a concentration of 2 mg/mL, while it also reduced the adhesion of S. aureus at a concentration of 1 mg/mL (Figure 3). Due to the stock solution of the HY isolated using MAE (1.05 mg/mL), it was not possible to determine if there was a reduction in adhesion (Figure 3).

Antiphytoviral Activity With the aim of improving our knowledge of the biological activities of plant volatiles in general, the species T. majus, which is still poorly studied in terms of its biological activities, became the focus of our scientific interest. To investigate its antiphytoviral activity, D. stramonium plants infected with TMV were pretreated with the HYs of T. majus isolated using the MAE and MHG extraction methods. Treatment of the host plants prior to virus inoculation with HYs obtained using both methods significantly inhibited infection, indicating that T. majus is a highly effective natural source of antiphytoviral compounds. A comparison of the control and treated plants showed a statistically significant reduction in the number of local lesions (LLN) on HY MAE- and HY MHG-treated plants. The average LLN on the leaves of the control plants on day 14 post inoculation (dpi) was 12.97, whereas the HY MAE-treated plants developed a significantly lower number of lesions during the same period (1.67 lesions per leaf). In addition, HY MHG-treated plants in all experimental groups completely suppressed the development of lesions, and the leaves of all treated plants showed no symptoms (Table 4). To date, extensive research has been conducted to control TMV, and our results are the first report of complete inhibition of local TMV infection by treatment with T. majus HYs. The percent inhibition of virus infection on leaves of HY MAE-treated plants at the 4th, 7th, and 14th dpi was 91.65%, 85.89%, and 84.15%, respectively (Figure 4). HY MHG-treated plants showed 100% inhibition of virus infection during the same period. Since TMV is one of the most important pathogens of agricultural crops, affecting more than 200 species of herbaceous and, to a lesser extent, woody plants, these results suggest that T. majus could be a new and effective source of antiviral compounds in the form of environmentally friendly aqueous solutions.
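The antiphytoviral effect above is quantified as the percentage inhibition of local lesion numbers relative to untreated control plants. The short sketch below shows that calculation; the function name and the round numbers are illustrative only, and the published percentages were presumably obtained by averaging per-plant inhibition values, so they need not match a value computed directly from the group means.

```python
# Sketch: percentage inhibition of TMV local lesions relative to untreated controls.
# The numbers below are illustrative; they are not the raw lesion counts of this study.
def percent_inhibition(lln_control: float, lln_treated: float) -> float:
    """Inhibition (%) = (control lesions - treated lesions) / control lesions * 100."""
    return (lln_control - lln_treated) / lln_control * 100.0

# In the spirit of the 14 dpi comparison (mean LLN 12.97 on control leaves vs 1.67
# on HY MAE-treated leaves), the group means alone give roughly 87% inhibition:
print(f"{percent_inhibition(12.97, 1.67):.1f}%")
# Complete suppression of lesions, as seen on HY MHG-treated plants, gives 100%:
print(f"{percent_inhibition(12.97, 0.0):.1f}%")
```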
Discussion Many researchers and manufacturers are interested in investigating and including plant extracts in formulations, due to their many and varied biological properties, such as reducing skin pigmentation, skin softening, moisturizing, and promoting wound healing [19]. Therefore, the chemical composition and biological activity of HYs obtained using two extraction methods, MAE and MHG, were studied. Two HYs were obtained and analyzed by GC-MS; the data obtained are shown in Table 1. In this paper, the cytotoxic activity of the HYs against the HeLa, HCT116, and U2OS cancer cell lines, and against a healthy cell line (RPE1), was also investigated; the scratch assay (to measure effects on wound healing) was performed using HaCaT cells; the antibacterial activity of the HYs was evaluated against the growth and adhesion to a polystyrene surface of S. aureus and E. coli; and their antiphytoviral activity was determined against TMV. The biological activities investigated in this study focus for the first time exclusively on HYs derived from the seeds of T. majus.

The obtained results show that the two main components in the HYs, BITC and BCN, have a reciprocal relationship. These two volatiles are breakdown products of the glucosinolate glucotropaeolin [23]. Benyelles et al. reported that glucotropaeolin is the only glucosinolate present in T. majus. BITC (82.5%) is the most abundant volatile compound from the aerial parts (orange flowers, stems, leaves) of T. majus from northwest Algeria [24]. The higher BCN content in the HY isolated using MHG can be explained by the presence of an epithiospecific protein, due to its interaction with the enzyme myrosinase, which redirects the reaction toward the formation of an epithionitrile or a nitrile, depending on the glucosinolate structure [25]. The epithiospecific protein is heat sensitive and its activity decreases significantly at high temperatures (>50 °C), which is probably the reason for the lower content of BCN and higher content of BITC in the HY isolated using the MAE extraction technique [26]. The obtained results agree with those published so far [2,8,27].

Earlier studies by Vrca et al. [2] with an essential oil (EO) and extracts of T. majus showed exceptional cytotoxic activity for the EO and slightly lower activity for the extract on the HeLa, U2OS, and HCT116 cancer cell lines. A chemical analysis revealed that the main volatile components of the EO and extract of T. majus are BITC and BCN [2]. BITC showed significantly higher cytotoxic activity on the tested cells compared to BCN. The presence of these compounds is responsible for the biological activity attributed to T. majus.
Numerous studies have shown that BITC, which is enzymatically hydrolyzed from glucotropaeolin (benzyl glucosinolate), has anti-inflammatory, antioxidant, antiangiogenic, and anticancer effects on various cancers [28,29]. Recently, there has been an increased awareness of the use of natural products and medicines, including treatments for numerous diseases such as cancer, dietary prescriptions, and nutritional supplements. The current treatment options for cancer patients face a number of obstacles, such as high toxicity to normal cells and many other side effects associated with the treatment. On the other hand, biomolecules derived from natural products, such as BITC, offer great potential for cancer treatment [30]. Han et al. demonstrated that BITC can induce apoptosis in gastric adenocarcinoma (AGS) cell lines via a pathway involving ROS-promoted mitochondrial dysfunction and death receptor activation. Kim et al. injected 4T1 breast cancer cells into albino mice (BALB/c) to investigate the effect of oral administration of BITC on tumor growth and metastasis [31]. Significant decreases in tumor growth, hemoglobin content, and vascular endothelial growth factor (VEGF) expression in tumor tissue were observed. BITC also led to a decrease in Bcl-2 protein, an apoptosis inhibitor. Owis et al. prepared an alcoholic extract from the leaves of T. majus and tested its potential activity against diethylnitrosamine-induced liver cancer (HCC) in vivo [32]. Oral administration of the extract significantly reduced levels of the inflammatory marker NF-κB and suppressed HCC progression. Oral therapy was combined with 0.5 Gy gamma radiation via the EGF-HER-2 pathway. A histopathological analysis showed a restoration of the liver structure, while an immunohistochemical analysis revealed an increase in pro-apoptotic markers and inhibition of anti-apoptotic factors. The authors proved that T. majus can mediate the defense against HCC carcinogenesis in such a way that, in combination with a low dose of gamma radiation, it can stop the further development of the HCC cancer. Phenolic extracts of T. majus flowers rich in caffeic acid, coumaric acid, chlorogenic acid, and rutin showed significant cytotoxic activity in synergy with 5-FU (fluorouracil) on the tested MCF-7 breast cancer cell line [33]. Pintão et al. reported the in vitro anticancer properties of BITC against a number of human ovarian cancer cell lines (SKOV-3, 41-M-CHI, CHIcisR), a human lung tumor (H-69), murine leukemia L-1210, and a murine plasmacytoma (PC6/sens) [34]. BITC showed significant cytotoxicity at low molar concentrations (0.86 to 9.4 µM) in all the cell lines tested. In the present study, the HYs of T. majus isolated using two microwave-assisted extraction methods also predominantly contained glucosinolate degradation products, namely the volatile compounds BITC and BCN. The amount of these compounds in the HYs depended on their water solubility. It is known that BITC and BCN are poorly soluble in water [10], so their concentration in the hydrosols was much lower than in the EOs or in the extract, and thus their ability to inhibit the growth of cancer cells as well as healthy cells was lower. Considering the obtained results, the hydrosols of T. majus isolated using the modern extraction techniques MAE and MHG can be considered a safe natural product that could find application in the cosmetic and food industries.

To our knowledge, there are no previous studies on the wound healing properties of T.
majus extracts.Only one research study on wound healing using the genus Tropaeolum was found.That study was conducted on another species of Tropaeolum, namely Tropaeolum tuberosum.The authors tested topical preparations containing 1% acidic ethanolic extract of tubers of T. tuberosum on mice and demonstrated an improved wound healing activity [35].Therefore, our research is a pioneer testing of the wound healing potential of T. majus HYs.The results of the wound healing test on the HaCaT cell line using different concentrations and extraction methods did not show a positive influence of the T. majus HYs; moreover, 15 µg/mL of the T. majus HY isolated using MHG showed a negative impact.The HY obtained using MAE extraction at concentrations of 5 µg/mL and 20 µg/mL showed mild and statistically non-significant wound healing effects.These unexpected results may indicate that the method of extraction could be responsible for different effects of HYs on wound healing performance.The concentration of free VCs in both extracts (MAE and MHG) was the main determinant of HY potency.The concentration range used in the wound healing assay of both HYs was similar, and it was determined based on the cytotoxicity results obtained in our study.However, the detailed composition of the MAE and MHG HYs showed several different chemical identities.We believe that the reason for the different healing effects of the HYs should due to the specific composition of each extract.Moreover, the negative impact of the T. majus MHG HY on wound healing could be an interesting effect, because other substances that show a negative effect on wound healing, like caffeine [36] and allicin, are considered to be a promising therapeutic candidates for keloid scars [37].Nevertheless, the reason for the negative effect of 15 µg/mL of the T. majus HY isolated using MHG on wound healing remains unclear, and it should be a subject for further studies. S. aureus is the bacterium that is most commonly associated with hospital-acquired wound infections, while E. coli is one of the dominant bacteria associated with burn wounds [38].Due to the increase in bacterial resistance to antibiotics, the focus of research is on biologically active components isolated from plant species used as herbal medicines, due to their ability to produce powerful antibacterial and antifungal compounds [39].Certain extracts of plant species such as garlic, basil, ginger, sage, mustard, etc., show antimicrobial activity against a wide range of Gram-positive and Gram-negative bacteria [40].Plants are a rich source of secondary metabolites such as tannins, phenolic compounds, alkaloids, and flavonoids, which have been proven to have antimicrobial properties [41].Vrca et al. [2] reported antibacterial and antiadhesion activities for EOs and extracts of T. majus plants, and for pure BITC and BCN.According to Vrca et al., the EO obtained using MAE and the extract obtained using MHG extraction techniques showed extremely strong antimicrobial activity against S. aureus and E. coli due to degradation products of gluotropaeolin: BITC and BCN [2].Consequently, the by-products or HYs obtained from the seeds of T. majus were tested for antibacterial activity, bacterial growth kinetics, and antiadhesion activity against S. aureus and E. 
coli.The stock solution of the HY isolated using MAE was lower than that isolated using MHG (1.05 mg/mL), and due to the impossibility of testing at higher concentrations, we assume that the MIC concentration is above 0.5 mg/mL.According to the obtained results, the HYs were not effective enough to achieve the true MIC, but the INT color was less intense for 2 mg/mL of the HY isolated using MHG.The HY isolated using MHG at 2 mg/mL reduced the adhesion of E. coli, while it also reduced the adhesion of S. aureus at 1 mg/mL.Due to the concentration of the stock solution of the HY isolated using MAE (1.05 mg/mL), it was not possible to achieve a reduction in adhesion.The HY isolated using MAE had higher amount of BITC than the HY isolated using the MHG extraction technique, where BCN dominates; according to previous research, ITCs have been proven to be the most biologically active degrading components of glucosinolates [2,42].Although previous research on HYs has been insignificant in terms of antibacterial activity, according to the research of Nazlić et al., essential oils have a stronger effect compared to HYs, which is consistent with the results of this research [20].According to Kuete [43], the activity of plant extracts was classified as significant (MIC < 100 µg/ mL), moderate (100 < MIC ≤ 625 µg/mL), or weak (MIC > 625 µg/ mL); thus, T. majus HYs have a weak activity.Despite the activity of HYs being weaker than those of EOs when it comes to antimicrobial activity, they definitely have their advantages such as availability, quantity, non-toxicity, and environmentally friendly.Despite the fact that the HYs did not show significant antibacterial activity, they showed excellent antiphytoviral activity. Plant viruses are important pathogens for agricultural crops, and new antiviral agents are welcome for economic and environmental reasons.Therefore, one of our goals was the investigation of the antiphytoviral activity of T. majus HYs, as a continuation of research on the biological activities of this promising and understudied plant species.Our hypothesis of the T. majus volatile compounds possessing antiphytoviral activities is supported by the fact that volatiles of various aromatic plant species can stimulate plant defense responses against infections by various pathogens, including viruses [13,14]. 
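The Kuete potency categories cited in the antibacterial discussion above amount to a simple classification rule, sketched below. The function is purely illustrative; because the true MICs of the HYs were not reached in this study, applying the rule to the highest tested concentrations gives lower bounds rather than definitive categories.

```python
# Sketch: Kuete's potency categories for plant-extract MICs, in µg/mL.
def classify_mic(mic_ug_per_ml: float) -> str:
    """Significant < 100, moderate 100-625, weak > 625 µg/mL."""
    if mic_ug_per_ml < 100:
        return "significant"
    if mic_ug_per_ml <= 625:
        return "moderate"
    return "weak"

# The true MICs were not reached here, so only lower bounds are known:
# HY (MAE) > 500 µg/mL and HY (MHG) > 2000 µg/mL. The MHG bound already falls
# in the 'weak' category; the MAE bound sits near the moderate/weak border.
print(classify_mic(2000))  # -> 'weak'
```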
Although HYs have emerged in recent years as new potential bioactive candidates to protect plants against viruses [13,44], the use of these environmentally friendly natural products has not been sufficiently explored.In addition to the historical use of HYs in the traditional medicine of Mediterranean countries, they have recently been used in cosmetics and in the food industry to prevent the growth of pathogenic and harmful microorganisms in foods and in the working environment.Glucosinolates (GSLs), water-soluble metabolites found in almost all plants of the order Brassicales, are among the natural volatile chemicals that most likely contribute to plant defenses against pests and diseases.The described antiviral activity of GSLs is mainly focused on their activity against animal viruses [45,46], and a limited number of publications describe GSLs in the context of viral infections of plants [47].In some of our previous studies and in studies by other authors, plants were treated with HYs to control viral infections, but 100% inhibition of local TMV lesions has not yet been reported.Considering the results already published in the literature [13,48,49], the present results undoubtedly indicate that HYs containing degradation products of the GSL glucotropaeolin as the dominant compound are new and very promising natural sources of antiphytoviral agents. Although chemical control methods remain essential in the broader context of plant disease control for economic reasons, natural sources should be explored in the development of effective yet environmentally friendly antivirals to reduce the harmful effects of chemicals on the environment.The promising results presented here confirmed our hypothesis about the antiphytoviral activity of the GSLs contained in the HYs of T. majus.The reported antiphytoviral activity of both HYs obtained using MAE and MHG extraction techniques deserves more detailed analysis in the future and opens new research areas related to this unexplored bioactivity of GSLs and HYs of T. majus. Based on all the obtained results, in the category of HYs, we can say without a doubt that T. majus HYs are one of the most biologically active when compared to other results reported for different plants.Due to their ecological acceptability and the fact that they show biological activity, these HYs have potential applications in the near future in the medicine, pharmaceutical, food, agricultural, and cosmetic industries. Plant Material and Reagents The plant materials (T.majus L. seeds) were purchased from Marcon d.o.o.(Novi Marof, Croatia).Before the isolation of the volatile compounds present in the HYs, the seeds of T. majus were grounded to a fine powder using a coffee grinding machine.Afterward, the milled seeds (ca.50 g) were soaked in distilled water directly before MAE, and approximately 1 h before MHG extraction.HYs of T. majus were obtained using MAE and MHG extraction techniques and an ETHOS X device (Milestone, Milan, Italy) and applying a microwave power of 500 W for a duration of 30 min, as previously described [2,8].The temperature inside the microwave oven was ca.98 • C. The HYs were stored at 4 • C until further analysis. Preparation of the Samples and Analyses of Hydrosols The T. majus HYs (2 samples) obtained using MAE and MHG extraction techniques were prepared for analysis according to Nazlić et al. 
[50].Briefly, 2 mL of the HY was added to a glass bottle and capped with a metal cap.The prepared sample was placed in a water bath at a temperature of 40 • C for 20 min due to the fact that the volatile compounds (VCs) evaporate from the water.The process took an additional 20 min to allow the VCs to adsorb to the resin filament of the headspace needle that was injected through the septum of the bottle cap at the beginning of sample preparation.The injection of the HYs was carried out with a headspace injection needle and there was no split ratio (splitless mode).The injection needle collected the volatile compounds from the HY and was then inserted into a GC inlet and left there for 20 min to ensure that all the volatile compounds from the resin filament were resorbed into the injection liner. Gas Chromatography and Mass Spectrometry The gas chromatographic analyses of the HY fraction were performed using a gas chromatograph (model 3900; Varian Inc., Lake Forest, CA, USA) equipped with a flame ionization detector and a mass spectrometer (model 2100T; Varian Inc., Lake Forest, CA, USA), capillary column VF-5 ms (30 m × 0.25 mm i.d., coating thickness 0.25 µm, Palo Alto, CA, USA), according to the method described in [50].The chromatographic conditions were the same as detailed by Dunkić et al. [51].Briefly, the conditions for the VF-5-ms column were a temperature of 60 • C (isothermal) for 3 min, which was then increased to 246 • C at a rate of 3 • C min −1 and maintained for 25 min (isothermal).The conditions for the CP Wax 52 column were a temperature of 70 • C (isothermal) for 5 min, which was then increased to 240 • C at a rate of 3 • C min −1 and maintained for 25 min (isothermal).The injection volume was 2 µL and the split ratio was 1:20.The MS conditions were: ion source temperature, 200 • C; ionization voltage, 70 eV; mass scan range, 40-350 mass units.The individual peaks for both HY samples were identified by a comparison of their retention indices of n-alkanes to those of authentic samples and from previous studies [22,52].The results are expressed as the mean value of three analyses. Cytotoxic Activity The cytotoxic activity of the T. majus HYs was determined on three cancer cell lines (kindly donated by Prof. Janoš Terzić from the School of Medicine, University of Split, Split, Croatia), HeLa, HCT116, and U2OS, and one healthy cell line, retinal pigmented epithelial cells (RPE1), using the MTS-based CellTiter 96 ® Aqueous Assay (Promega, Madison, WI, USA) as described in detail by Fredotović et al. [53].The cells were grown in a CO 2 incubator at 37 • C and 5% CO 2 until they reached 80% confluency.The cells were counted using an automated handheld cell counter (Merck, Darmstadt, Germany), seeded into 96-well plates, and treated with serially diluted HYs of T. majus.The cells were grown for an additional 48 h, then 20 µL of MTS tetrazolium reagent (Promega, Madison, WI, USA) was added to each well and the plates were left in the incubator at 37 • C and 5% CO 2 for 3 h.Absorbance was measured at 490 nm using a 96-well plate reader (Infinite M Plex, Tecan, AG, Switzerland).IC 50 values were calculated from three independent experiments using GraphPad Software Prism 9 (GraphPad, Boston, MA, USA). 
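The cytotoxicity protocol above ends with IC50 values calculated in GraphPad Prism from the MTS absorbance readings. The sketch below shows one common way such an IC50 can be estimated, by fitting a four-parameter logistic (Hill) curve to viability-versus-concentration data; the concentrations and viability values are placeholders rather than measurements from this study, and Prism's internal parameterization may differ.

```python
# Sketch: estimating an IC50 from dose-response data with a four-parameter
# logistic (Hill) fit. The data points below are illustrative placeholders,
# not measurements from this study.
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Viability (%) as a function of hydrosol concentration (µg/mL)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical serial dilutions (µg/mL) and mean viabilities (%) from an MTS assay.
conc = np.array([62.5, 125, 250, 500, 1000, 2000])
viability = np.array([98, 93, 81, 52, 24, 9])

# Bounded fit keeps the parameters physically meaningful (positive IC50 and slope).
popt, _ = curve_fit(four_param_logistic, conc, viability,
                    p0=[5, 100, 500, 1.0],
                    bounds=([0, 50, 10, 0.1], [30, 110, 5000, 5]))
bottom, top, ic50, hill = popt
print(f"Estimated IC50 ≈ {ic50:.1f} µg/mL (Hill slope {hill:.2f})")
```

In practice, the fit would be repeated for each cell line and each of the three independent experiments, and the resulting IC50 estimates averaged, as reported in the Results.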
Scratch Assay Human keratinocyte cells (HaCaT) were cultured in Dulbecco's Modified Eagle's Medium (DMEM) containing 10% fetal bovine serum (FBS) and 0.1% of an antibiotic-antimycotic solution (penicillin, streptomycin, amphotericin B). HaCaT cells were seeded at a density of 1 × 10⁵ cells/well into standard six-well culture plates in complete cell culture medium and maintained at 37 °C and 5% CO₂ until 80-90% confluency was reached. Prior to the experiment, the cells were starved for 24 h in serum-free media (to inhibit cell proliferation). For the scratch assay, a sterile 200 µL pipette tip was used to make a straight scratch in the monolayer of cells, simulating an epithelial wound. After the scratch was made, the cellular debris was washed out with Dulbecco's phosphate-buffered saline (DPBS). Growth medium containing different concentrations of the two T. majus HYs (5 µg/mL of the HY isolated using MAE, 15 µg/mL of the HY isolated using MHG, and 20 µg/mL of the HY isolated using MAE) was added to each well, followed by incubation for 24 h. Negative control wells were kept in HY-free growth media. Positive control wells were kept in complete growth media containing 5% FBS. An inverted microscope (Olympus IX73, Olympus, Tokyo, Japan) equipped with a digital camera was used to obtain images of the wound healing assay. Two representative images from different parts of the scratched area for each replicate well were digitally photographed. Wound closure was monitored at time 0 h and 24 h after the scratch. ImageJ software 1.8.0 (NIH, Bethesda, Rockville, MD, USA) was used to measure the size of the wound area. The closure of the wound area was measured and expressed as a percentage of the initial wound area (determined at 0 h) using the following formula: S(cell-free area) = [(Area(t0) − Area(t24))/Area(t0)] × 100. For the statistical analysis, the Mann-Whitney U test was used (Statistica 14.1.0, TIBCO Software Inc., Palo Alto, CA, USA). Significance was defined as p < 0.05.

Antibacterial Susceptibility To determine the minimal inhibitory concentrations (MICs), the microdilution method was used according to Klančnik et al. and the EUCAST guideline [54,55]. Briefly, the HYs were dissolved and filtered through 0.22 µm filters (Sartorius, Croatia) to ensure that the HYs were sterile. Two-fold serial dilutions of the HYs were performed in a 96-well microtiter plate to achieve concentrations from 2 mg/mL to 0.5 mg/mL in a final volume of 50 µL. A 50 µL volume of prepared inoculum (10⁵ CFU/mL) was added to each well and mixed. A 10 µL volume of 2-p-iodophenyl-3-p-nitrophenyl-5-tetrazolium chloride (INT, Sigma Aldrich, St. Louis, MO, USA) was added after incubation and was used as an indicator of bacterial metabolic activity [54].

Bacterial Growth Kinetics The HYs were added to 5 mL of growth medium to give final concentrations of 2 mg/mL to 0.5 mg/mL for S. aureus ATCC 25923 and E. coli ATCC 11229. S. aureus or E. coli cultures without the addition of T. majus HYs were used as a positive control. As a negative control, MH broth with or without the addition of the T.
majus HYs at different concentrations was used and was deducted from the results obtained from the experimental samples.The inocula were prepared as described above.A total of 100 µL of the prepared cultures and negative controls, with or without the addition of HYs, were added to 96-well microtiter plates (Nunc 266 120 polystyrene plates; Nunc, Denmark).The absorbance was measured at 600 nm using a Multiskan reader (Thermo Scientific, Waltham, MA, USA) every 30 min over 24 h at 37 • C to obtain growth curves. Antiadhesion Assay The adhesion of S. aureus ATCC 25,923 and E. coli ATCC 11,229 was analyzed with after treatment with the T. majus HYs obtained using MAE and MHG extraction techniques.The inocula were prepared as described above and treated with HYs at MIC, 1 /2 MIC, and 1 /4 MIC concentrations.The treated inocula (200 µL) were then transferred to 96well polystyrene microtiter plates (Nunc 266 120 polystyrene plates; Nunc, Denmark) and incubated at 37 • C under aerobic conditions for 24 h.To remove non-adherent cells, each well in the microtiter plate was rinsed three times with phosphate-buffered saline (PBS) (Oxoid, Hampshire, UK); afterward, 200 µL of PBS was added to each well and the plates were sonicated for 10 min (28 kHz, 300 W; IskraPIo, Šentjernej, Slovenia).CFU/mL was used to measure the adhesion of the cells, as previously described by Šikić Pogačar et al. [56].The negative control was an untreated culture.The experiments were carried out in triplicate as three or more independent experiments.The data are presented as means ± standard deviation (SD); the analysis was performed using GraphPad Software Prism 9 (Boston, MA, USA).IBM SPSS Statistics 23 (Statsoft Inc., Tulsa, OK, USA) was used to the perform statistical analyses.The Kolmogorov-Smirnov test of normality was used to determine the distribution of the data and statistical significance was determined using T-tests for two independent means.Data with a p-value < 0.05 were considered significant. Antiphytoviral Activity Assay The antiphytoviral activity of T. majus HYs (HY isolated using MAE-HY MAE ; HY isolated using the MHG techniques-HY MHG ) was tested using a local host plant, the species Datura stramonium L., which was infected with tobacco mosaic virus (TMV).An inoculum prepared from systemically infected leaves of Nicotiana tabacum L. cv.Samsun was diluted with phosphate buffer to obtain 10 to 20 lesions per inoculated leaf of the local host plant.HY (undiluted) was applied as a spray solution to the leaves of D. stramonium on two consecutive days before virus inoculation.Antiphytoviral activity was evaluated as the percentage inhibition of the number of local lesions on the leaves of treated and control plants [52].Statistical analysis was performed using GraphPad Prism version 9.All data are presented as mean ± SD (n = 4).Statistical significance was determined by multiple t-tests [14]. Conclusions HYs are environmentally friendly, non-toxic by-products of advanced and conventional extraction techniques that are often unjustly neglected despite possessing biologically active constituents.The HYs of T. 
majus obtained using the MAE and MHG extraction techniques are enriched with volatile components: α-thujene, benzaldehyde, benzyl isothiocyanate (BITC), and benzyl cyanide (BCN). The HY obtained using MAE showed better cytotoxic activity against three cancer cell lines (HeLa, HCT116, and U2OS) than the HY isolated using MHG. The healthy cell line, retinal pigmented epithelial cells (RPE1), showed extremely high resistance to both HYs. The MIC of the T. majus HY isolated using MAE was above 0.5 mg/mL, and for the T. majus HY isolated using MHG, it was above 2 mg/mL. The HY isolated using MHG reduced the adhesion of E. coli at a concentration of 2 mg/mL, while it also reduced the adhesion of S. aureus at a concentration of 1 mg/mL. Due to the stock solution of the HY isolated using MAE, it was not possible to determine the concentration at which it reduces adhesion. Moreover, both HYs showed high antiphytoviral activity against TMV, achieving 100% inhibition of local lesions, which is the first such result of inhibition of TMV infection by an HY. Due to the significant and important results on antiphytoviral activity, the HYs of T. majus have a potential future in agricultural production.

The advantages of HYs, in addition to their wide spectrum of biological activity, are their ecological acceptability due to their non-toxicity, the lack of organic solvents in the extraction process, and a larger volume compared to essential oils and extracts, which makes them more desirable for use in the medicine, pharmaceutical, food, agriculture, and cosmetic industries. In the category of HYs, we can say without a doubt that T. majus HYs are one of the most biologically active.

Figure 1. Cytotoxic activity of T. majus hydrosols (HYs) isolated using advanced microwave-assisted extraction techniques, MAE and MHG, on HeLa, HCT116, and U2OS cancer cell lines (a) and the RPE1 cell line (b). Statistical analysis was performed using two-way ANOVA followed by Sidak's multiple comparisons test. The presented IC50 values are mean values of three independent experiments performed in quadruplicate ± SD (standard deviation), and significantly different levels between the two extraction methods are marked as **** p < 0.0001.

Figure 2. Microphotographs of the wound healing assay at timepoints 0 and 24 h after treatment.

Figure 3. Effects of hydrosols of T. majus isolated using the MAE and MHG extraction techniques at concentrations of 0.5 mg/mL, 1 mg/mL, and 2 mg/mL on the adhesion of S. aureus (a) and E. coli (b) to a polystyrene surface. The results are expressed as means ± SD; * p-value < 0.05. PC-positive control; HY MAE-hydrosol isolated using MAE extraction; HY MHG-hydrosol isolated using MHG extraction.
Figure 4. Percentage of inhibition of TMV infection on leaves of treated host plants compared to control plants on the 4th, 7th, and 14th days post inoculation (dpi). Prior to virus inoculation, plants were treated with HYs of T. majus for two consecutive days (HY isolated using microwave-assisted extraction (MAE)-HY MAE; HY isolated using the microwave hydrodiffusion and gravity (MHG) technique-HY MHG). The error bars show the standard deviation of the quadruplicate analyses.

Table 1. Chemical composition of volatile compounds in hydrosols of T. majus isolated using microwave-assisted extraction (MAE) and microwave hydrodiffusion and gravity (MHG).

Table 2. Percentage of cell-free area 24 h after treatment with T. majus hydrosols isolated using microwave-assisted isolation (MAE) and microwave hydrodiffusion and gravity (MHG).

Table 4. Number of local lesions on leaves of local host plants treated with hydrosols (HY MAE and HY MHG) of T. majus before virus inoculation.
2023-11-20T16:05:35.452Z
2023-11-01T00:00:00.000
{ "year": 2023, "sha1": "7ed13249859a5e7014367b2d0f21cb8e13028124", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2223-7747/12/22/3897/pdf?version=1700299504", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7bcef39fed3ac45b4bd612f3f91b88f788bf3615", "s2fieldsofstudy": [ "Environmental Science", "Medicine", "Chemistry" ], "extfieldsofstudy": [] }
220608463
pes2o/s2orc
v3-fos-license
Significant intraocular pressure associated with open-angle glaucoma: Korea National Health and Nutrition Examination Survey 2010-2011 Objectives To investigate significant intraocular pressure (IOP) levels associated with the risk of open-angle glaucoma (OAG) in the treatment-naïve Korean population. Methods Participants ≥20 years of age in Korea National Health and Nutrition Examination Survey 2010–2011 were divided into two groups, those with higher and lower IOP values, compared with the reference IOP value. We compared the risk of OAG in each group using regression analyses. The IOP value that yielded the highest statistical significance was determined as an IOP significantly associated with the OAG risk. Results A total of 7,650 participants (7,292 control, 358 OAG) were included. The mean IOP was significantly higher in OAG group (14.4 ± 2.9 mmHg), compared to control group (13.9 ± 2.7 mmHg, P = 0.022). In association with an increased risk of OAG, the significant IOP value was 18 mmHg (Odds ratio [OR] = 1.79, 95% confidence interval [CI] 1.14–2.80, P = 0.011). Additionally, sex-difference was identified and they were 19 mmHg (OR = 2.79, 95% CI 1.27–6.16, P = 0.011) in men and 18 mmHg (OR = 2.65, 95% CI 1.32–5.33, P = 0.006) in women. The IOP values associated with significantly decreased risk of glaucoma were determined to be 14 mmHg in men (OR = 0.68, 95% CI 0.47–0.99, P = 0.042) and 16 mmHg in women (OR = 0.47, 95% CI 0.27–0.81, P = 0.007). Conclusions In consideration of the risk to benefit ratio, the reference IOP level for screening or setting the target IOP for treatment could be considered different from traditional 21 mmHg in Korean population. Introduction Glaucoma is a chronic, progressive optic neuropathy characterized by change in the optic nerve head and corresponding visual field loss [1]. High intraocular pressure (IOP) has been considered to be one of the most important risk factors for developing glaucoma [1][2][3]. The normal IOP range, defined as the mean IOP within 2 standard deviations (SDs) has been considered to be 10-21 mmHg [3]. Thus, traditional criterion for an "abnormal" or "high" IOP has been regarded as an IOP greater than 21 mmHg, an IOP level that exceeds the 97.5 th percentile value. However, there are limitations to this criterion for assessing the risk of glaucoma in the real world. A number of previous studies reported that the IOP level in the general population does not represent a Gaussian distribution [3]. In addition, the IOP distribution curves in glaucomatous and control eyes overlap to a great extent, and thus, they cannot be simply divided by one definite IOP level. Moreover, in Asian countries, OAG patients with a baseline IOP of �21 mmHg are more prevalent than those with a baseline IOP of >21 mmHg [4,5], which suggests different population groups may require different IOP criteria. In this regard, an abnormal IOP value of >21 mmHg, may have limited clinical relevance for a generalized application of screening eyes at risk of glaucoma or ocular hypertension. These differences suggest that the normal IOP range, as well as the significant IOP value associated with the risk of glaucoma should be applied distinctively, with consideration of baseline IOP characteristic values in various populations. This is also important in the perspective of establishing a target IOP and evaluating the efficacy of glaucoma treatment. 
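The paragraph above contrasts two conventions for the upper limit of "normal" IOP: the mean plus two standard deviations and the 97.5th percentile cut-off. The sketch below computes both from a sample of untreated IOP readings to make the distinction concrete; the simulated values are placeholders rather than KNHANES data, and the point is simply that for a skewed distribution the two limits need not agree.

```python
# Sketch: two common definitions of the upper limit of "normal" IOP, computed
# from a sample of untreated readings. The simulated data are placeholders,
# not KNHANES measurements.
import numpy as np

rng = np.random.default_rng(0)
# Right-skewed toy IOP sample (mmHg), centred near 14 mmHg.
iop = rng.gamma(shape=25, scale=0.56, size=5000)

mean, sd = iop.mean(), iop.std(ddof=1)
upper_2sd = mean + 2 * sd                # "mean + 2 SD" definition
upper_p975 = np.percentile(iop, 97.5)    # 97.5th percentile definition

print(f"mean + 2 SD upper limit: {upper_2sd:.1f} mmHg")
print(f"97.5th percentile:       {upper_p975:.1f} mmHg")
# For a skewed distribution the two limits differ, which is one reason a single
# universal 21 mmHg threshold may not transfer across populations.
```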
In light of these, the purpose of this study was to investigate the IOP level that is significantly associated with the risk of glaucoma in treatment-naïve population based on the Korea National Health and Nutrition Examination Survey (KNHANES) 2010-2011 data. Moreover, we examined the range and distribution of IOP in healthy and OAG groups. Materials and methods The KNHANES is a nationwide population-based cross-sectional survey of the South Korean population that is conducted by the Korea Centers for Disease Control and Prevention and the Korean Ministry of Health and Welfare [5][6][7]. Using a multistage, stratified, probability-clustered sampling method and weighting scheme, the KNHANES provides estimated health statistics that are representative of the civilian, non-institutionalized South Korean population. This survey adhered to the tenets of the Declaration of Helsinki for human research, and all participants provided written informed consent. The survey protocol was approved by the Institutional Review Board of the Korea Center for Disease Control and Prevention. Since all of the KNHANES data are anonymized, the Institutional Review Board of Kangbuk Samsung Hospital agreed that this study was exempt from requiring subject approval. Study design and examinations All subjects had a health interview survey that included standardized questionnaires on demographic variables, as well as current and past medical history, health-influencing behaviors, and socioeconomic status. They also had a health examination survey that included physical and ophthalmologic examinations. The comprehensive ophthalmologic examinations were performed by ophthalmologists trained by the Korean Ophthalmology Society National Epidemiologic Survey Committee. After a health interview that included previous ophthalmic disease-related history, a visual acuity by Snellen chart, the IOP by Goldmann applanation tonometry (GAT), and spherical equivalent (SE), using an automatic refractometer (KR-8800; Topcon, Tokyo, Japan), were measured. A slit-lamp examination (Haag-Streit model BQ-900; Haag-Streit AG, Koeniz, Switzerland) was performed to evaluate the anterior segment and peripheral anterior chamber depth. A peripheral anterior chamber depth of >1/4 peripheral corneal thickness by the Van Herick method was defined as an open angle. Retinal examinations were performed by obtaining a nonmydriatic digital fundus photograph (TRCNW6S; Topcon) of each eye from all of the subjects in a dark room. Visual field testing was performed using the frequency doubling technology (FDT; Humphrey Matrix FDT perimetry; Carl Zeiss Meditec, Inc., Dublin, CA, USA) with the N30-1 screening program on subjects who showed elevated IOP (�22 mm Hg) or glaucomatous optic discs. Glaucoma diagnosis A glaucoma diagnosis was made based on the fundus photography and FDT perimetry findings, according to the International Society of Geographical and Epidemiological Ophthalmology criteria [8] and the findings from previous studies [5,7]. After the preliminary grading based on the glaucoma reading by a committee comprised of glaucoma specialists, the detailed grading was independently performed by another group of glaucoma specialists who were blind to the participants' other information. Any discrepancies between the preliminary and detailed grading were adjudicated by a third group of glaucoma specialists. The glaucoma group was defined based on the ISGEO criteria category I or II [8]. 
Category I requires a visual field defect consistent with glaucoma and either a vertical cup-to-disc ratio (VCDR) of ≥0.7 (97.5th percentile) or VCDR asymmetry of ≥0.2 between the right and left eyes (97.5th percentile). Category II indicates that the visual field results are not definitive, requiring a VCDR of ≥0.9 (99.5th percentile) or VCDR asymmetry of ≥0.3 (99.5th percentile).

Systemic variable definition Physical measurements included height, weight, systolic and diastolic blood pressures, waist circumference, and body mass index (BMI, the ratio of weight divided by height squared). A morning blood sample was collected after at least 12 hours of fasting. Impaired fasting glucose was defined as fasting blood glucose between 100 mg/dl and 126 mg/dl. Diabetes mellitus (DM) was defined as a fasting glucose value of ≥126 mg/dl, use of oral hypoglycemic agents or insulin, or a history of DM. Prehypertension was defined as systolic blood pressure between 120 mmHg and 140 mmHg or diastolic blood pressure between 80 mmHg and 90 mmHg. Hypertension was defined as systolic blood pressure greater than 140 mmHg, diastolic blood pressure greater than 90 mmHg, or use of antihypertensive medication.

Statistical analysis All data were analyzed using IBM SPSS Statistics for Windows, version 24.0 (IBM Corp., Armonk, NY) to account for the complex sampling design. Strata, sampling units, and sampling weights were used to obtain point estimates and standard errors (SEs) of the mean. All data were analyzed with weighted data, and the SEs of mean population estimates were calculated by Taylor linearization methods. Participant characteristics were summarized as means and SEs for continuous variables and as frequencies and percentages for categorical variables. Demographic information and clinical parameters were compared between groups using the Pearson chi-square test for categorical variables and the general linear model for continuous variables. The right eye was used for controls and bilateral glaucoma patients, and the affected eye for monocular glaucoma patients. Participants were divided into two subgroups, those with higher versus lower IOP values compared with each reference IOP value. We analyzed the risk of OAG (presented as an odds ratio [OR] with 95% confidence interval [CI]) for each group using univariate and multivariate regression analyses adjusted for age, sex, DM, systemic hypertension, BMI, and serum cholesterol. The optimal reference IOP value that yielded the highest statistical significance was then determined as the IOP level significantly associated with an increased or decreased risk of OAG.

Results During 2010-2011, a total of 12,356 non-institutionalized South Koreans ≥20 years of age participated in the KNHANES. Exclusion of 3,147 subjects who did not undergo ophthalmic examinations left 9,209 eligible subjects. Participants were excluded from the study if they had any history of cataracts (n = 93), retinal (n = 48) or refractive surgeries (n = 376), showed evidence of retinal detachment or age-related macular degeneration (n = 22), or had any missing data (n = 927). Participants who were diagnosed and treated for glaucoma were also excluded (n = 93). Finally, a total of 7,650 participants (7,292 controls and 358 OAG patients) were included in the analysis. The OAG group had significantly older mean age values and higher rates of systemic hypertension and DM compared to the control group.
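The ISGEO categories quoted in the Methods above are, in effect, a small decision rule. The sketch below encodes only the two categories used for the OAG definition in this survey; the field-related flags are simplifications of how the FDT result was graded, and other ISGEO provisions (for example, for eyes that could not be assessed) are omitted.

```python
# Sketch: an ISGEO-style classification following the thresholds quoted above.
# Only categories I and II are implemented, as used for the OAG definition here.
from dataclasses import dataclass

@dataclass
class Eye:
    vcdr: float                 # vertical cup-to-disc ratio
    vcdr_asymmetry: float       # |VCDR right - VCDR left|
    glaucomatous_field: bool    # field defect consistent with glaucoma (FDT)
    field_definitive: bool      # whether the visual field result is definitive

def isgeo_category(eye: Eye) -> str:
    if eye.field_definitive and eye.glaucomatous_field and (
            eye.vcdr >= 0.7 or eye.vcdr_asymmetry >= 0.2):
        return "Category I (structural plus functional evidence)"
    if not eye.field_definitive and (eye.vcdr >= 0.9 or eye.vcdr_asymmetry >= 0.3):
        return "Category II (severe structural evidence, field not definitive)"
    return "Not classified as glaucoma by categories I/II"

print(isgeo_category(Eye(vcdr=0.75, vcdr_asymmetry=0.1,
                         glaucomatous_field=True, field_definitive=True)))
```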
A separate analysis in women revealed that the OAG group had significantly larger waist circumferences and BMI values, higher total cholesterol and triglycerides, and diastolic blood pressure than the control group (Table 1). Older age (P <0.001), male (P = 0.012), and higher IOP (P = 0.021) were significantly associated with OAG, but hypertension (P = 0.278) and diabetes mellitus (P = 0.343) were not after univariate and multivariate logistic regression analyses. The mean IOP was significantly higher in the OAG group (14.4 ± 2.9 mmHg, range 7-22 mmHg) compared to the control group (13.9 ± 2.7 mmHg, range 6-21 mmHg, P = 0.022, Fig 1). The IOP measurement distribution showed a right-sided skew with a skewness of 0.16 (SE 0.03) and -0.02 (SE 0.13) and kurtosis of 2.68 (SE 0.03) and 2.54 (SE 0.11) in the control and OAG groups, respectively. The IOP ranges within the mean ± 2SD were 8.7-19.3 mmHg in the control group and 8.7-20.2 mmHg in the glaucoma group (Fig 2). The risk of glaucoma significantly increased as the reference IOP level was set at 18 mmHg (OR = 1.79, 95% CI 1.14-2.80, P = 0.011, Table 2). The IOP value that was significant for an increased risk of glaucoma was calculated as 19 mmHg in men (OR = 2.79, 95% CI 1.27-6.16, P = 0.011) and 18 mmHg in women (OR = 2.65, 95% CI 1.32-5.33, P = 0.006). In comparison, the IOP values associated with a significantly decreased (protective) risk of glaucoma were determined to be 14 mmHg in men (OR = 0.68, 95% CI 0.47-0.99, P = 0.042) and 16 mmHg in women (OR = 0.47, 95% CI 0.27-0.81, P = 0.007). Discussion The main pathophysiology of glaucoma has long been attributed to a high IOP of more than 21 mmHg, which represents an IOP greater than the 97.5 th percentile value in the general population. However, the IOP value reflecting the risk of OAG has not been sufficiently investigated with evidence-based research. Moreover, a clear basis for the upper pressure threshold for glaucomatous damage has not yet been defined for different ethnicities. In this regard, based on our population-based survey, we investigated the clinically meaningful IOP values associated with the risk of glaucoma, independent of age, sex, and systemic variables including DM, systemic hypertension, BMI, and serum cholesterol. The significant IOP level that indicated a higher risk of glaucoma was 18 mmHg in the treatment-naïve Korean population based on the KNHANES 2010-2011. Moreover, a sex-difference was identified, indicating risk values of 19 mmHg in men and 18 mmHg in women. Therefore, we concluded that at least in the Korean population, the reference IOP level for screening or setting the target IOP for treatment cannot always be set as 21 mmHg. These further indicate that in populations with higher proportion of patients with lower untreated IOP, different IOP criteria can be considered and evaluating the risk of glaucoma cannot be solely dependent on the IOP itself. A number of previous population-based studies have reported IOP measurements that were within a mean ± SD and corresponding ranges for a healthy population. For example, the values were 15.4 ± 3.3 in the Beaver-Dam study [9], 13.6 ± 3.4 mmHg in studies in Central India, 14.3 ± 3.3 mmHg in South India, and 13.6 ± 3.8 mmHg in the Ural Eye and Medical Study (Russian population; [10]. Similarly, the mean IOP was 13.9 ± 2.7 mmHg and the range within the 97.5 th percentile was 8.0-20.0 mmHg in the healthy population of the KNHANES 2010-2011. 
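The threshold-specific odds ratios reported in the Results above follow from the procedure in the Methods: dichotomize participants at each candidate reference IOP and fit a covariate-adjusted logistic regression. The sketch below outlines that scan with statsmodels; the data frame and its column names are assumptions standing in for the KNHANES variables, the synthetic data are placeholders, and the survey weights, strata, and clustering used in the actual analysis are omitted for brevity.

```python
# Sketch: scanning candidate reference IOP values and estimating the adjusted
# odds ratio of OAG for "IOP >= cutoff" vs "IOP < cutoff" at each one.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def scan_reference_iop(df: pd.DataFrame, cutoffs=range(14, 21)) -> pd.DataFrame:
    rows = []
    for cutoff in cutoffs:
        data = df.assign(high_iop=(df["iop"] >= cutoff).astype(int))
        model = smf.logit(
            "oag ~ high_iop + age + C(sex) + C(dm) + C(htn) + bmi + cholesterol",
            data=data).fit(disp=False)
        odds_ratio = np.exp(model.params["high_iop"])
        ci_low, ci_high = np.exp(model.conf_int().loc["high_iop"])
        rows.append((cutoff, odds_ratio, ci_low, ci_high, model.pvalues["high_iop"]))
    return pd.DataFrame(rows, columns=["cutoff_mmHg", "OR", "CI_low", "CI_high", "p"])

# Tiny synthetic example so the sketch runs end-to-end (not KNHANES data):
rng = np.random.default_rng(1)
n = 4000
df = pd.DataFrame({
    "iop": rng.normal(14, 2.8, n).round(),
    "age": rng.integers(20, 80, n),
    "sex": rng.integers(0, 2, n),
    "dm": rng.integers(0, 2, n),
    "htn": rng.integers(0, 2, n),
    "bmi": rng.normal(24, 3, n),
    "cholesterol": rng.normal(190, 35, n),
})
logit_p = -3.3 + 0.12 * (df["iop"] - 14) + 0.03 * (df["age"] - 50)
df["oag"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
print(scan_reference_iop(df))
# The cutoff whose "high IOP" term is most strongly and significantly associated
# with OAG would be reported as the clinically meaningful IOP level, as done
# here for 18 mmHg overall.
```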
Then, the upper threshold of abnormal IOP level in this study would be 20 mmHg, when applying the traditional concept. However, we speculated that it would be more reasonable to set the contemporary definition of abnormal IOP as a "clinically meaningful IOP", which would be significantly associated with an increased risk of glaucoma. As a result, the values in this study were 19 mmHg in men and 18 mmHg in women in the Korean population, after adjusting for important systemic variables. A higher IOP has been associated with a higher likelihood of developing glaucoma. However, in the present study, the risk of glaucoma did not continuously increase with corresponding increases in the IOP elevation. This is partially consistent with the results from the Namil study, which was another epidemiological study conducted in South Korea [11]. Although the Namil study presented a general trend of increasing POAG prevalence in subjects with high IOP, the prevalence did not reach the highest point in subjects with the highest IOP. These results are also in agreement with Tajimi study [12] from Japan and Handan study [13] from China, where up to 90% of OAG patients had an IOP �21 mmHg. Thus, we speculate that these results are attributable to a large proportion of subjects with an IOP of �21 mmHg in Korea [5], despite the possibility of insufficient statistical power due to the low frequency of patients with an IOP >21 mmHg identified in the KNHANES. The main purpose of our study was to investigate clinically meaningful IOP values that could suggest the risk of glaucoma, based on our population-based survey. Although our results may have limitations in representing the whole population, these are important for the following reasons. First, the clinically meaningful IOP can be important for establishing the appropriate IOP level for screening glaucoma or ocular hypertension. Although the IOP cannot be a standalone screening tool for glaucoma [14], a rationale is needed to identify IOP measurements that indicate potential glaucoma development. Currently, the upper limit of the normal IOP level worldwide has been set as 21 mmHg, but this criterion may require population-specific revisions, especially for those with a large proportion of glaucoma patients that have lower pre-treatment IOP values. Second, our results can provide guidance for determining the appropriate amount of treatment to reduce the IOP as well as insight into optimal levels that will not increase the likelihood of glaucoma development. In an advanced glaucoma intervention study [15], patients were classified into 3 groups according to IOP levels of 14, 14.5, and 17.5 mmHg and the conclusion was that not only lowering IOP, but maintaining an IOP less than 17.5 mmHg could effectively lower the probability of glaucoma progression. Although the present study was a cross-sectional study, we believe that our data can provide additional insight into the target IOP to be considered as below 18 mmHg and furthermore, less than 14 mmHg for the significantly beneficial effects. This information can also be considered when evaluating the effectiveness of the glaucoma treatment. Third, the primary challenge for initiating glaucomatous damage, especially for those with lower baseline IOP, has been associated with a low threshold for stress tolerance at a certain pressure level rather than the absolute IOP level [16,17]. In addition, the threshold for stress tolerance can differ, depending on various factors including age, sex, and ethnicity. 
Since the majority of OAG patients in Korea have lower baseline IOP values, we speculated that different pressure criteria would potentially provide new insights for clinicians to better understand such thresholds in Koreans. Studies have reported different results on the association between IOP and sex: some studies have reported a higher IOP in women than in men [9,10,18,19], and others have reported the opposite [5,20-23]. Based on the KNHANES from 2009-2010, the mean IOP was significantly higher in men than in women, and the higher IOP was also significantly correlated with male sex after multivariate analysis [21]. Another study that used a large-scale database of Korean subjects (n = 155,198) also reported the same trend [20]. These studies may account for the higher IOP value identified in men (19 mmHg) compared with that in women (18 mmHg) in the present study. Sex-hormone-related factors, such as the IOP-lowering effect of estrogen and the IOP increase associated with a relative increase in testosterone levels, in addition to genetic factors, have been suggested as possible mechanisms for the sex-associated IOP differences [24-26]. However, these results remain controversial, as conflicting findings have been reported in different studies depending on the covariate adjustment. Therefore, further investigations are needed to elucidate the mechanisms underlying sex-associated IOP differences. Several limitations should be considered when interpreting our study. First, the KNHANES included only a single IOP measurement, which limits our ability to explore the association between the peak or fluctuating IOP and the risk of glaucoma. Second, the present study was based on treatment-naïve patients. This may have resulted in the low frequency of patients with an IOP >21 mmHg. However, information on the baseline IOP was unavailable for treated glaucoma patients (n = 93), and they were therefore not included in the present study. Third, the FDT was used for the functional examination, which does not meet the standard criteria for a glaucoma diagnosis. Nevertheless, the FDT is a fast, reliable, large-scale screening method frequently used in population-based studies. Moreover, since it can detect glaucomatous visual field defects earlier than standard automated perimetry, it was well suited to ensuring that patients at risk of glaucoma were included in our study [27]. Lastly, the angle was assessed using the Van Herick method and not a gonioscopic examination; thus, subjects with angle closure may have been included in our OAG population. Despite these limitations, our study population had a relatively large sample size and a high participation rate, and was representative of the whole population of South Korea. In conclusion, the IOP value associated with a significantly increased risk of OAG was 18 mmHg; the value was 19 mmHg in men and 18 mmHg in women. Therefore, in consideration of the risk-to-benefit ratio, the reference IOP level for screening or for setting the target IOP for treatment could be considered different from the traditional 21 mmHg in the Korean population. Additional clinical studies are needed to further elucidate the applications of our results in Koreans.
The Impact of Ecological Civilization Theory on University Students’ Pro-environmental Behavior: An Application of Knowledge-Attitude-Practice Theoretical Model In environmental education, environmental knowledge is considered to be one of the most important factors affecting university students’ pro-environmental behavior. First, in this paper, the ecological civilization theory (ECT) was understood as a new kind of environmental knowledge. Based on this, a new theoretical model for analyzing the relationships among environmental knowledge, environmental attitude, and environmental behavior was designed in this paper according to ECT and the Knowledge-Attitude-Practice (KAP) theoretical model. Second, from the perspective of students, a questionnaire was designed for students according to ECT, so as to understand the level of ECT of students. On this basis, an empirical test of the relationship between the ECT level, pro-environmental attitude level, and pro-environmental behavior level was carried out. This research shows that ECT as environmental knowledge is as important as science-oriented environmental knowledge (SEK) in environmental education. As a result, the role of environmental knowledge in environmental education should not be ignored but environmental knowledge should be enriched by adding ECT to the environmental knowledge system and improving the environmental knowledge education curriculum, contributing to environmental education in China. INTRODUCTION In recent years, not much has improved in the Chinese ecological environment, which has attracted the attention of the Chinese government to formulate systematic environmental protection strategies and policies, design environmental education programs, and encourage university students' pro-environmental behavior (Wang et al., 2021). It is widely agreed that current human behavior has a negative impact on the natural environment (Edgell and Nowell, 1989;IPCC, 2014). Thus, avenues for increasing pro-environmental behavior are required. Environmental education can serve as a critical tool in increasing pro-environmental behavior as it strives toward the goal of environmental protection (Potter, 2009;Ariffin and Wan, 2017). Environmental education is the comprehensive education of environmental knowledge, environmental attitude, and ecological behavior. Environmental education aims to motivate people to perform appropriate real-life proenvironmental behavior (Carmi et al., 2015;Roczen et al., 2015). Indeed, environmental education is regarded as an indispensable requirement if we want to increase pro-environmental behavior and protect the natural environment successfully (Fortner and Teates, 1980;Michelsen and Fischer, 2017). In environmental education, environmental knowledge is important in producing pro-environmental behavior because an individual must know what type of actions needs to be taken. Thus, environmental knowledge is an intellectual prerequisite to performing pro-environmental behavior (Stern and Gardner, 2002;Frick et al., 2004;Fujii, 2006;Kaiser et al., 2014;Nordin and Saliluddin, 2016). People who have greater knowledge of environmental problems are more prone to behave in a proenvironmental way, ceteris paribus (Oguz et al., 2010). On the contrary, a shortage of environmental knowledge or the holding of wrong environmental perception might limit proenvironmental behavior. 
A study in Hungary found that more than 50% of the respondents felt that their pro-environmental behavior was often constrained by a shortage of environmental knowledge (Zsóka et al., 2012). The reality, though, is more complicated than it seems. The relation between environmental knowledge and pro-environmental behavior has been disputed (Frick et al., 2004;Geiger et al., 2014;Gharagozlou et al., 2019), and may be influenced by several factors, such as motivational components in the form of personal attitude (Gatersleben et al., 2002;Gifford and Nilsson, 2014). Previous research investigating the relationship between environmental knowledge and pro-environmental behavior shows that environmental knowledge, more often than not, fails to directly influence pro-environmental behavior (Hines et al., 1986-1987;Kals et al., 1999;Kaplowitz and Levine, 2005;Steg and Vlek, 2009;Levy and Marans, 2012;Kastanakis and Voyer, 2014). In other words, in environmental education, environmental knowledge and environmental attitude work together to produce pro-environmental behavior. As a result, a more complex theoretical model of environmental education has been produced. Knowledge-Attitude-Practice (KAP) is a behavioral intervention theory and one of the common models used to explain how environmental knowledge affects pro-environmental behavior (Milfont, 2009;Paço and Lavrador, 2017). It was first proposed by the British scientist John Coster in the 1960s. At present, environmental education in Chinese universities still pays attention to the influence of environmental knowledge on environmental behavior, and mainly focuses on the teaching of environmental knowledge. Environmental knowledge can be defined as a general knowledge of facts, concepts, and relationships concerning the natural environment and its major ecosystems (Fryxell and Lo, 2003;Liobikienė et al., 2016). Environmental knowledge involves what people know about the environment, key relationships leading to environmental aspects or impacts, an appreciation of "whole systems," and the collective responsibilities necessary for sustainable development. Environmental knowledge is usually divided into three categories: system knowledge, action knowledge, and efficiency knowledge (McGorry, 2000;Geiger et al., 2018). However, to date, most research studies on environmental knowledge have examined only one or, at most, three forms of environmental knowledge. In the course of daily environmental education, an interesting phenomenon was observed in this paper: students who took courses that included the ecological civilization theory (ECT) of the Chinese government showed more pro-environmental behavior. These courses are mainly ideological and political theory courses. Will the ECT of the Chinese government, then, become a new type of knowledge that affects pro-environmental behavior? As a result, a new theoretical model for analyzing the relationships among environmental knowledge, environmental attitude, and environmental behavior was designed in this study based on ECT and the KAP theoretical model, contributing to environmental education in China. LITERATURE REVIEW Regarding the current situation of environmental education in Chinese universities, after nearly 20 years of difficult exploration some achievements have been made, but there are still some shortcomings, mainly manifested in the insufficient education of ECT. There are not many theoretical works on it in China, and the understanding in practice is still insufficient.
It is necessary to conduct a further study on it to realize its guidance and promotion in practice. The ultimate goal of environmental education is to achieve the development of pro-environmental behavior and the formation of real problem-solving capabilities (Mostafa, 2007;Ottoa and Pensinib, 2017). Therefore, environmental education as an effective way to implement the development of students' core qualities will inevitably become the trend of curriculum reform and development. For environmental education, a large number of worldwide scholars have conducted research. Their main research focus is to explore the potential factors that influence students' pro-environmental behavior, so as to actively explore such potential factors and help students improve their pro-environmental behavior. Environmental knowledge and attitude are considered to be the most important factors affecting pro-environmental behavior. Therefore, the main content of environmental education is environmental knowledge teaching. Through an increase of environmental knowledge, pro-environmental attitude is improved, and then pro-environmental behavior is improved. Zsóka et al. (2012) explore the relationship between environmental education and environmental knowledge, attitude, and the reported actual behavior of university and high school students. Kaiser et al. (2014) begin by distinguishing the three forms of environmental knowledge and go on to predict that people's attitude toward nature represents the force that drives their proenvironment behavioral engagement. Based on the data from 1,907 students, Kaiser et al. (2014) calibrated the previously established instruments to measure pro-environmental behavior, environmental knowledge, and attitude toward nature with Rasch-type models. Vicente- Molina et al. (2013) analyze the influence of environmental knowledge on pro-environmental behavior among university students from countries with different levels of economic development. The results suggest that motivation and perceived effectiveness are not only significant variables in both groups but also the most important ones in explaining pro-environmental behavior. Genovaite and Mykolas (2019) have transformed Value-Belief-Norm (VBN) by including environmental knowledge as an external factor. The results showed that action-related environmental knowledge was related to an ecological worldview and directly influenced the private sphere behavior. Thus, Genovaite and Mykolas (2019) revealed how specific environmental knowledge influenced proenvironmental behavior. Based on the current study, new research was designed in this paper. In this study, ECT was understood as a new kind of environmental knowledge. From the perspective of students, a questionnaire was designed for students according to ECT to understand students' ECT level once again. On this basis, an empirical test of the relationship between the ECT level, proenvironmental attitude level, and pro-environment behavior level was carried out, and then the validity of the KAP theory in environmental education was proved. Based on the conclusion of this study, we will evaluate the existing environmental education in China and put forward suggestions for improvement. Knowledge-Attitude-Practice in Environmental Education The theory divides the change in human practice into three continuous processes: acquiring knowledge, producing attitude, and forming practice (Prothero, 1990). 
Among them, "knowledge" is an understanding of relevant information, "attitude" is correct belief and positive attitude, and "practice" is behavior. Knowledge produces attitude, and attitude is the result of knowledge. University students' environmental knowledge can foster positive environmental attitude (Otto and Kaiser, 2014;Tam and Chan, 2018). Environmental attitude changes positively as environmental knowledge increases. People with more knowledge of natural resources have more positive attitude toward environmental protection (Cappetta and Magni, 2015). For example, aquarium visits and publications related to marine protected areas can greatly improve the marine environmental protection attitude of visitors by increasing their knowledge (Ting and Cheng, 2017). Behavior is the external expression of attitude, and attitude has a positive influence on behavior (Taufique et al., 2016;Sarkis, 2017). Environmental attitude is essential to environmental behavior research, and environmental education often seeks ways to determine and modify environmental attitude about the relationship between humans and nature. The main focus has been that, by understanding attitude, environmental education research can better predict the student's behavior, thereby changing students' attitude to elicit appropriate environmental behavior (Barber and Taylor, 2009;Yadav and Pathak, 2016). Based on this, the classical KAP theoretical model forms two hypotheses with inherent relevance, and the two hypotheses are proved to be valid through empirical methods. Hypothesis 1 (H1): Environmental knowledge positively influences proenvironmental attitude. Hypothesis 2 (H2): Pro-environmental attitude positively influences pro-environmental behavior. Therefore, the classical KAP theoretical model supports environmental education with environmental knowledge as the main teaching content being carried out in China. Knowledge is the premise and foundation for a person to form a psychological tendency of liking or disliking something. Environmental knowledge can promote a pro-environmental attitude. For example, consumers with environmental knowledge are more likely to understand the significance and value of buying green products. Attitude is a person's evaluation of an object. It expresses a psychological tendency to like or dislike something, or a specific emotional tendency toward something (Paço and Lavrador, 2017), as a significant predictor for interpreting and promoting behavioral intentions. A pro-environmental attitude can promote pro-environmental behavior (Fishbein and Ajzen, 2010). For example, consumers with a positive attitude are more likely to buy "green" and energy-efficient products, they do not find it inconvenient to buy green products. According to the representation of environmental education in the classical KAP model, the relationship among environmental knowledge, pro-environmental attitude, and pro-environmental behavior is shown in Figure 1. In addition to the classical KAP theoretical model, some scholars proposed an improved theory of the KAP theoretical model (Pisano and Lubell, 2017;Geiger et al., 2018). In the improved KAP theoretical model, the influence of environmental knowledge on pro-environmental behavior is strengthened. The transfer of environmental knowledge to enable people to reflect on their actions rationally and then to act intentionally based on this has been a dominant model of environmental education. 
At the same time, in the improved KAP theoretical model, a direct influence of environmental knowledge on pro-environmental behavior is also further revealed and verified by empirical research. In the improved KAP theoretical model, environmental knowledge [science-oriented environmental knowledge (SEK)] is divided into three types: system knowledge, action knowledge, and efficiency knowledge (Razak et al., 2015;Geiger et al., 2018). System knowledge refers to the knowledge about the basic operating state and internal laws of the natural ecosystem. For example, global warming is due to the emission of CO 2 , less forest will lead to soil erosion, and shrinking lakes will lead to climate drought. Action knowledge refers to the knowledge about what actions are best for environmental protection. For example, using recyclable bags helps protect the environment, proper disposal of waste batteries helps protect the environment, and rejecting the use of wildlife products helps protect biodiversity. Efficiency knowledge refers to the knowledge about saving energy and saving resources. For example, waste recovery can effectively save resources, taking public transport is conducive to energy conservation, and using energy-saving bulbs can save energy. Based on this, the improved KAP theoretical model forms three hypotheses. Hypothesis 1 (H1): SEK positively influences pro-environmental attitude. Hypothesis 2 (H2): Proenvironmental attitude positively influences pro-environmental behavior. Hypothesis 3 (H3): SEK positively influences proenvironmental behavior. According to the representation of environmental education in the improved KAP theoretical model, the relationship among environmental knowledge, pro-environmental attitude, and pro-environmental behavior is shown in Figure 2. Based on the abovementioned research, a new KAP theoretical model is studied in this paper. According to the representation of environmental education in the new KAP theoretical model, the types of environmental knowledge are further increased. Because the environmental education experience from China shows that a politics-oriented ECT is an important factor affecting students' pro-environmental behavior. In the new KAP theoretical model, ECT, as a new type of environmental knowledge, is regarded as an equally important variable as SEK. Ecological Civilization Theory in Environmental Education "Ecological prosperity leads to civilization, while ecological decline leads to civilization decline" reflects the importance China attaches to ecological civilization construction. Ecological civilization is a major issue related to the sustainable development of China. Therefore, the theory of ecological civilization is regarded as environmental knowledge that Chinese university students must learn and is an important part of environmental education in China. To implement the ecological environment protection strategy, China's environmental education was divided into two groups: science-and politics-oriented environmental education (Wang et al., 2021). Science-oriented environmental education has been continuing since the 1990s and mainly includes elementary, middle, and high school basic education on environmental pollution, ecosystem function, saving resources, and other basic environmental topics. The main teaching contents are basically consistent with the environmental knowledge system in the classical KAP theoretical model. 
Politics-oriented environmental education has been intensified since 2012 through vigorously promoted strategies, such as ecological civilization, "beautiful China, " and "lucid waters and lush mountains are invaluable assets" via various media channels. The main teaching content is an environmental knowledge system different from the environmental knowledge system in the classical KAP theoretical model. To be precise, this environmental knowledge system is ECT. The ecological civilization theory, as a new type of environmental knowledge, covers political, economic, cultural, social, environmental, and other aspects, mainly including China's environmental governance strategies, systems, and programs. ECT includes three basic contents: the basic standpoint, the basic viewpoint, and the basic method. The basic standpoint of ECT is "the people-centered philosophy of development," which clearly points out that the fundamental purpose of the construction of ecological civilization is not to maintain the intrinsic value of abstract nature, nor to realize sustainable development of economy and society, but to "meet the ecological needs of the people" and realize the ecological happiness of the people. The basic standpoint of ECT reflects a kind of value-related environmental knowledge. The basic viewpoints of ECT can be summarized as the following six basic viewpoints: the ecological view of "harmonious coexistence of human and nature"; the ecological economic view of "lucid waters and lush mountains are invaluable assets"; the ecological ethics view of "man and nature are life community"; the ecological system view of "accelerating the reform and innovation of the system and mechanism of ecological civilization"; the ecological culture view of "bringing ecological culture into the mass spiritual civilization creation activities"; and the international ecological view of "building an international ecological governance cooperation mechanism featuring extensive consultation, joint contribution, and shared benefits." The basic viewpoints of ECT reflect a kind of significance-related environmental knowledge. The basic method of ECT is the means of China to analyze and deal with China's environmental problems, including historical analysis, contradiction analysis, and system analysis. According to the historical analysis, the construction of ecological civilization in China is a long process, which needs to be handed down from generation to generation and gradually realizes a harmonious coexistence between man and nature. According to the analysis of contradiction, the main task of ecological civilization construction in China is to solve the contradiction between economic development and environmental protection, which is to realize social development and protect the natural environment as well. According to the system analysis method, China's ecological civilization construction is a systematic project, which requires us to start from China's reality, consider environmental governance as a whole, and deal with the wholeness, complexity, and coordination of ecological civilization construction. The basic method of ECT reflects a kind of method-related environmental knowledge. The relationship between the basic content of ECT and ECT as a new type of environmental knowledge is shown in Figure 4. 
At present, in China's environmental education system, there is no specialized course on the theory of ecological civilization, and in environmental education, the theory of ecological civilization as environmental knowledge has not been paid enough attention. The theory of ecological civilization is being taught to university students as part of the ideological and political theory course. However, students who have taken ideological and political theory courses show stronger pro-environmental behavior, which will provide important implications for environmental education in China. A questionnaire about university students' level of ECT is designed in this study, which is the basis of all the research studies in this paper. Participants The data in the valid questionnaires were analyzed and processed to obtain the situation of students' environmental education. Because the environmental education courses in different universities in China are not completely consistent. Therefore, the participants of this study come from the two universities in the Liaoning Province, China, who receive different types of environmental education courses and meet the special requirements of this study. Some of the students (n = 105) had only taken environmental education courses related to ECT. Some of the students (n = 107) had only taken environmental education courses related to SEK. Some of the students (n = 95) had taken both. Some of the students (n = 114) were yet to take any environmental education courses. Among the 421 questionnaires that were completed and returned, 16 were invalid and thus excluded based on the data verification requirements, hence 96.1% of the returned questionnaires (405 questionnaires) were valid and used for research. General Environmental Behavior Scale General environmental behavior was measured by a comprehensively tested and validated eight-item self-report instrument (Midden et al., 2007). The instrument originally had 40 items, but it was reduced to eight items for this study. For eight of the environmental behavioral self-report items, we used a 5-point scale ranging from 1 (never) to 5 (always). General Environmental Attitude Scale The general environmental attitude was measured by a comprehensively tested and validated eight-item self-report instrument (Brügger et al., 2011). The instrument originally had 40 items, but it was reduced to eight items for this study. For eight of the environmental attitude self-report items, we used a 5-point scale ranging from 1 (totally disagree) to 5 (in full agreement). General Environmental Knowledge Scale General environmental knowledge (SEK) was measured by a comprehensively tested and validated eight-item self-report instrument (Frick et al., 2004). The instrument originally had 48 items, but it was reduced to eight items for this study. For eight of the environmental knowledge self-report items, we used a 5-point scale ranging from 1 (no knowledge) to 5 (biggest knowledge). Ecological Civilization Theory Scale The level of ECT was measured by a comprehensively tested and validated 20-item self-report instrument (Wang et al., 2020). The 20 questions are newly developed for this particular research. According to the opinions of experts in environmental education and ideological and political courses, we design a set of questionnaires reflecting ECT. The questionnaire is designed according to the basic content of ECT. 
It reflects the value-related environmental knowledge of ECT, the significance-related environmental knowledge of ECT, and the method-related environmental knowledge of ECT. The items of the questionnaire were emailed to eight relevant experts in China who studied environmental education and ideological and political courses, and the experts were asked to score the items of the questionnaire to ensure the accuracy of the measurement. The experts gave their scores based on their years of experience and relevant published research questionnaires. All 20 items were measured using a 5-point scale ranging from 1 (totally disagree) to 5 (in full agreement). The experts gave scores (M = 4.750, SD = 0.463) well above 4, which indicates that they agreed with the evaluation system of this paper, that the ECT evaluation method is feasible, and that the opinions of the experts were almost unified. For the 20 ECT self-report items, we used a 5-point scale ranging from 1 (not at all) to 5 (very clear). The ECT scale is shown in Table 1 (for example, item (2) asks: "Do you know the fundamental goal of China's ecological progress is to realize the ecological happiness of the people?"). First, we divided the students into four groups. Group A (n = 105) only received ECT education. Group B (n = 107) only received SEK education. Group C (n = 95) received both SEK education and ECT education. Group D (n = 114) received no environmental education. Second, the general environmental knowledge scale and the ECT scale were used, respectively, to test the four groups of university students and examine their environmental knowledge levels. Next, the general environmental behavior scale and the general environmental attitude scale were used, respectively, to test the four groups of university students and examine their pro-environmental behavior and pro-environmental attitude levels. Finally, the correlation coefficients among SEK, ECT, pro-environmental attitude, and pro-environmental behavior for the four groups of students were calculated to demonstrate the important role of ECT in environmental education. RESULTS We will present the details of our confirmatory test of the theoretically anticipated relations between environmental knowledge (SEK and ECT), pro-environmental attitude, and pro-environmental behavior. Descriptive Analysis of the Obtained Data The pro-environmental attitude level of groups A, B, and C, who attended environmental education courses (M = 3.548, SD = 1.114), is much higher than that of group D (M = 2.579, SD = 1.512), who did not receive environmental education (Figure 5). Group C (M = 3.814, SD = 0.859), whose students took both environmental knowledge courses at the same time, had the highest level of pro-environmental attitude among all four groups, while students in group D (M = 2.579, SD = 1.412), who had not taken any environmental education courses, had the lowest pro-environmental attitude level. The pro-environmental attitude level across all four groups (M = 3.305, SD = 1.226) is higher than 3, which is a high level. This shows that Chinese university students generally have a high pro-environmental attitude, which again indicates that China's environmental education has made certain achievements. The pro-environmental behavior level of groups A, B, and C, who attended environmental education courses (M = 3.262, SD = 1.119), is still much higher than that of group D (M = 2.159, SD = 1.865), who received no environmental education. The pro-environmental behavior level of group C (M = 3.377, SD = 0.797) is still the highest among the four groups.
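The group comparison just described is a per-group summary of scale scores. As a hedged illustration only, the sketch below shows how such a summary, together with one plausible significance check (a one-way ANOVA, which the paper does not explicitly name as its test), might be computed; the group sizes match those reported above, but the scores themselves are simulated placeholders rather than the study's data.

```python
# Hedged sketch: per-group descriptive statistics and one plausible test of group
# differences (one-way ANOVA). The records below are made up; only the group sizes
# follow the text, so this illustrates the general workflow, not the actual analysis.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(2)
sizes = {"A": 105, "B": 107, "C": 95, "D": 114}       # group sizes reported in the text
means = {"A": 3.55, "B": 3.50, "C": 3.81, "D": 2.58}  # rough attitude means, for illustration

frames = []
for g, n in sizes.items():
    scores = np.clip(rng.normal(means[g], 1.1, n), 1, 5)
    frames.append(pd.DataFrame({"group": g, "attitude": scores}))
df = pd.concat(frames, ignore_index=True)

# Descriptive summary per group (cf. the M and SD values quoted above).
print(df.groupby("group")["attitude"].agg(["mean", "std"]).round(3))

# One-way ANOVA across the four groups.
samples = [grp["attitude"].to_numpy() for _, grp in df.groupby("group")]
f_stat, p_val = stats.f_oneway(*samples)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.3g}")
```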
However, there were no significant differences among group C (M = 3.377, SD = 0.797), group A (M = 3.263, SD = 0.959), and group B (M = 3.147, SD = 1.231). The pro-environmental behavior level across all four groups (M = 2.986, SD = 1.445) is almost equal to 3, which is still a good level. The levels of pro-environmental attitude and pro-environmental behavior of groups A, B, C, and D are shown in Figure 6. Correlation Analysis Between Ecological Civilization Theory, Science-Oriented Environmental Knowledge, Pro-environmental Attitude, and Pro-environmental Behavior The correlation coefficient between ECT and pro-environmental attitude is 0.571 (p < 0.05), which is a high degree of correlation. This supported hypothesis H1. This shows that ECT does have a significant impact on pro-environmental attitude, and the more ECT, the stronger the pro-environmental attitude will be. The correlation coefficient between pro-environmental attitude and pro-environmental behavior is 0.214 (p < 0.05), which is a weak correlation. However, this still supported hypothesis H2. This shows that the ECT level of university students does not necessarily translate fully into pro-environmental behavior through pro-environmental attitude; pro-environmental behavior is also influenced by objective factors such as economic factors, which was also confirmed in the interviews with students. This has created an opportunity for the further refinement of the KAP theoretical model in environmental education. The correlation coefficient between SEK and pro-environmental attitude is 0.561 (p < 0.05), indicating a high degree of correlation. This supported hypothesis H4. This once again proves the effectiveness of the KAP theoretical model in environmental education. The correlation coefficient (r = 0.571) between ECT and pro-environmental attitude is almost equal to the correlation coefficient (r = 0.561) between SEK and pro-environmental attitude. This shows that ECT, like SEK, had a positive impact on pro-environmental attitude. ECT as environmental knowledge is as important as SEK in environmental education. The correlation coefficient between ECT and pro-environmental behavior is 0.447 (p < 0.05), which is still relatively high. This supported hypothesis H3. This shows that ECT does have a direct impact on pro-environmental behavior: the more ECT, the more frequent pro-environmental behavior is. The correlation coefficient between SEK and pro-environmental behavior is 0.344 (p < 0.05), which is a medium correlation. This supported hypothesis H5. The correlation coefficient (r = 0.447) between ECT and pro-environmental behavior is greater than that (r = 0.344) between SEK and pro-environmental behavior. This shows that SEK produces pro-environmental behavior less quickly than ECT does. This is probably because the spread of ECT has been strongly supported by the Chinese government, and students have learned ECT quickly through various channels. It is worth mentioning that the correlation coefficient between SEK and ECT is 0.618 (p < 0.05), a high degree of correlation, which means that they interact with each other, thereby showing the fusion of the knowledge groups. This supported hypothesis H6. The significant correlations among ECT, SEK, pro-environmental attitude, and pro-environmental behavior are shown in Table 2.
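The correlation structure just reported (pairwise Pearson coefficients among ECT, SEK, pro-environmental attitude, and pro-environmental behavior, each tested at p < 0.05) can be outlined with a small sketch. The data below are simulated and the column names are placeholders; the study's actual analysis was performed on its own questionnaire data, so this only illustrates the kind of computation behind a table such as Table 2.

```python
# Hypothetical sketch: pairwise Pearson correlations with p-values, analogous to the
# Table 2 analysis described above. The scores are simulated for demonstration only.
from itertools import combinations

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
n = 405  # same order of magnitude as the 405 valid questionnaires

# Toy 1-5 scale scores with some built-in association between the constructs.
ect = rng.uniform(1, 5, n)
sek = 0.6 * ect + 0.4 * rng.uniform(1, 5, n)
attitude = 0.5 * ect + 0.5 * rng.uniform(1, 5, n)
behavior = 0.4 * attitude + 0.6 * rng.uniform(1, 5, n)

df = pd.DataFrame({"ECT": ect, "SEK": sek, "attitude": attitude, "behavior": behavior})

for a, b in combinations(df.columns, 2):
    r, p = stats.pearsonr(df[a], df[b])
    print(f"{a:>8s} vs {b:<8s}: r = {r:.3f}, p = {p:.3g}")
```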
Case Analysis of Students In this paper, 10 students from group C, who received SEK education and ECT education were selected for interview. The interview results are shown as follows. Through interviews with students, it was found that the students with more environmental knowledge (SEK and ECT) were marked with stronger pro-environmental attitude and pro-environmental behavior. Students who were interviewed pointed out, that compared with SEK, the theory of ecological civilization can better enable students to understand the value, significance, and the method of environmental governance so that students can clearly understand the significance of environmental protection to national development and people's happiness. This is very useful for improving their pro-environmental attitude and pro-environmental behavior. Therefore, as a kind of environmental knowledge, ECT plays a very important role in environmental education. At the same time, they actively link SEK with the theory of ecological civilization to form a new environmental knowledge system. This is much greater than pro-environmental attitude and pro-environmental behavior that come from just having a kind of environmental knowledge (SEK or ECT). They also suggest that the current courses on the theory of ecological civilization are not very attractive, and students often acquire relevant knowledge from outside the courses. They have obtained a considerable amount of information through various information channels such as the news media. The students also said that, although they have developed pro-environmental attitude through learning environmental knowledge (SEK and ECT), it does not mean that they adopt pro-environmental behavior every time because pro-environmental behavior is also influenced by some objective conditions, such as economic conditions and technological conditions. On the whole, students pay close attention to the significance conveyed by the theory of ecological civilization. It is because students feel the great significance of ecological civilization through learning that they are more willing to take proenvironmental behavior. At the same time, students also think that the relevant courses are not attractive. DISCUSSION The core of modern environmental education has changed from allowing students to master environmental knowledge to adapting them to life-long pro-environmental behavior (Bhattacharya, 2019). Thus, this paper studies and analyzes the relationship between environmental knowledge, environmental attitude, and environmental behavior of university students, and puts forward that the theory of ecological civilization can be studied as a kind of environmental knowledge that can affect pro-environmental attitude and pro-environmental behavior. In our research, we confirmatory tested the anticipated proenvironmental behavior structure that was originally proposed in the KAP theory model. Specifically, we found that there is a positive correlation between environmental knowledge (SEK and ECT) and environmental attitude, and the influence of proenvironmental attitude on pro-environmental behavior is also significant. In the structure of environmental knowledge, the influence of ECT on pro-environmental attitude is significant, which can lead to pro-environmental behavior, which again proves the hypothesis of this study. However, the conclusion of this study is in conflict with the conclusion of some other studies. 
For example, some studies that examined systems or knowledge of environmental issues found an insignificant relationship between knowledge and pro-environmental behavior (Rhead et al., 2018). Some studies suggest that environmental knowledge does not necessarily result in pro-environmental actions (Ahmad et al., 2018). From our results, environmental knowledge (SEK and ECT) does have a direct impact on pro-environmental behavior. However, the correlation between environmental knowledge (SEK and ECT) and pro-environmental behavior is indeed weaker than that between environmental knowledge (SEK and ECT) and pro-environmental attitude. There is, however, a possible explanation for this finding. Because our students knew so little about environmental issues, how systems work, behavioral remedies, especially the significance of ecological civilization construction for national development, range restrictions due to floor effects seemed to occur. In other words, we found the seemingly small knowledge effects that might have been due to our students who extremely restricted the level of environmental knowledge because the restricted variances of variables often lead to artificially deflated correlations with other variables as well (Tabachnick and Fidell, 2006). From our results, we should not conclude that environmental knowledge can be abandoned. A general increase in environmental knowledge, especially an increase of ECT, might in fact already be able to alleviate the weak relations between knowledge and behavior (Kaiser and Frick, 2002). Thus, the important thing is not to abandon environmental knowledge but to strengthen environmental knowledge. It is important to consider the type of environmental knowledge as well. Two suggestions can be made for China's environmental education. First, it is suggested that the curriculum system of environmental education should be further improved to improve the environmental knowledge level of university students. More importantly, all students are required to receive environmental education, and SEK and ECT must be taken into account at the same time to achieve the integration of different types of environmental knowledge, so as to achieve a better environmental education effect. In addition, efforts should be made to make the courses of environmental education more vivid and interesting, which should be close to life and arouse students' interest in learning. Second, it is suggested that the relevant education of ECT is strengthened so that university students can fully understand the significance of ecological civilization construction and its importance for national development. Value-related environmental knowledge, signification-related environmental knowledge, and method-related environmental knowledge provided by ECT, functioning as a heuristic, could reduce the cognitive load needed to make decisions, thus potentially having a direct effect on behavior. There are a few studies showing that people who know about behavior significance and value are more confident and inclined to behave accordingly (Cappetta and Magni, 2015). Thus, there is evidence that significance-related knowledge and value-related environmental knowledge enabling individuals to make concrete and informed decisions might more easily be translatable into behavior. CONCLUSION In our research, we have demonstrated that both the classical KAP theoretical model and the new theoretical model proposed in this paper are effective in environmental education in China. 
China's environmental education has made some achievements, but it is still far from the ideal level. Both ECT and SEK have a significant impact on pro-environmental attitude, and the more ECT and SEK, the stronger pro-environmental attitude will be. Similarly, both ECT and SEK have a direct impact on pro-environmental behavior. The more ECT and SEK, the more frequent pro-environmental behavior is. This shows that ECT as environmental knowledge is as important as SEK in environmental education. However, we also found that the ECT level of university students does not necessarily translate into pro-environmental behavior 100% through pro-environmental attitude, and pro-environmental behavior is also influenced by objective factors such as economic factors, which is also confirmed in the interviews with students. At the same time, we found that SEK in the classical KAP theoretical model produces pro-environmental behavior less quickly than ECT does. This is probably due to the spread of ECT that has been strongly supported by the Chinese government, and students have learned ECT quickly through various channels. In short, the role of environmental knowledge in environmental education should not be ignored but should enrich environmental knowledge by adding ECT to the environmental knowledge system and improving the environmental knowledge education curriculum. Although some achievements have been made in this research, there are still some shortcomings. The research data included in this paper are few and not comprehensive. For an impact of ECT on university students' pro-environmental behavior, different schools are likely to have different characteristics, but this paper only studies a small number of students in two universities. The results are not applicable to every university or every region. Therefore, the next step is to carry out studies in other universities and other regions based on the new KAP theoretical model of this paper to provide more data support for the research results of this paper. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s. AUTHOR CONTRIBUTIONS KW conceived the idea for this study. LZ conducted the statistical analysis. Both authors contributed to the final write-up and reviewed and approved the submission.
Online Teaching: A Relational Study of Perception and Satisfaction Distance learning offers an affordable and convenient way to study and improve one's knowledge in one's spare time. This trend has been accelerated by information and communication technologies that have pushed to new boundaries the ways in which online learning is undertaken. The prevalence of such learning has greatly increased during the COVID-19 pandemic, enabling education in a relatively safe environment. This paper studies how satisfied learners are with such learning. It also looks at interactivity and communication self-efficacy and their effects on student satisfaction with online courses. Analysis of these factors and their cross-effects was undertaken using a case study in a virtual online classroom of 75 students. A questionnaire was designed (with a reliability coefficient of 0.93) and the results were analysed using correlation analysis and ANOVA in SPSS. Satisfaction of students with the course significantly correlated with satisfaction with the online discussion, and positively correlated with satisfaction with the course content. The student perception had a significant impact on communication self-efficacy and interactivity. The results revealed that the key indicating factors for satisfaction were course content and structure, and the quality of online discussions. Introduction Online teaching and learning is a digitised version of distance education that goes back to the 1700s (Harting & Erthal, 2005). On March 20, 1728, Caleb Phillips placed an advertisement in the Boston Gazette newspaper in Massachusetts about shorthand lessons that would be sent weekly to prospective students (Holmberg, 2005). In the 1800s, Anna Eliot Ticknor had a correspondence school in Boston in which she gave instruction on 24 subjects. In the mid-1800s Oxford and Cambridge universities in England were offering a version of distance learning called extension services that included lectures and a system of instruction by correspondence (Isman et al., 1999). The Open University in the United Kingdom was the world's first university to teach fully at a distance, and in 1971 more than 24,000 students were enrolled for its courses. In 2003 it admitted about 200,000 students, including 7,653 students with disabilities, and provided more than 150 courses delivered by instructors using the internet to conduct tutorials and discussion groups, and to take electronic submissions (Gibbs et al., 2006). With the advance of technology, distance education has evolved through several stages. The evolution of non-face-to-face (distance) education is illustrated in Figure 1 (Bozkurt, 2019). It can be seen that hard-copy printed learning moved into an audio-visual format as the technology evolved, taking advantage of the many benefits it offers. Figure 2. Inclusivity of terms used to define learning approaches (adapted from Anohina, 2005). Throughout the literature there is discussion as to which category belongs to which method of teaching/learning and which term adequately defines or describes teaching/learning delivery methods. Scholars have been trying to frame a clear definition and delimitation, with specific terminologies describing the modes and methods of teaching/learning (Moore et al., 2010; Anohina, 2005). In Figure 1, the evolution of non-face-to-face teaching is shown in stages of progression, whereas Figure 2 illustrates the interrelationships and inclusivity of the terms used to define approaches.
It can be seen in Figure 2 that web-based learning is a subcategory of internet-based learning. Background to Online Teaching and Learning Satisfaction is one of the key performance factors that promote motivation in learning (Keller, 1987). Motivation to learn becomes particularly important in online learning, where there is no physical supervision, and the learner is left as an autodidact. Eichelberger & Ngo (2018) and Li et al. (2016) referred to satisfaction with online courses as a complex structure that interlinks the course content and its structure, the educational activities and instructor support. Numerous researchers have investigated the indicators of learner satisfaction with online courses. Anderson (2003) pointed out that interactivity is one of the important factors affecting satisfaction. This accords with the findings of Allen et al. (2002), who said that teacher-student and peer-to-peer interactions are the premise of course satisfaction. Similarly, it was found that factors including student perception of teacher-to-student and student-to-student interactions, and discussion board features, significantly affected learner satisfaction (Lee et al., 2011;McFarland & Hamilton, 2005;Paechter et al., 2010). Some studies have shown that students considered communication to be the most important factor when they were assessing their satisfaction with a course, so instructors were encouraged to keep frequent contact and a regular presence in face-to-face classrooms (Dennen et al., 2007). Equally, the competence of teachers and their support were considered to be among the most important indicators of learner satisfaction with online courses (Zhu, 2017). In addition, Sahin & Shelley (2008) showed that the student perception of online learning as a beneficial and flexible way to learn, to communicate and to share knowledge was significantly related to their satisfaction with online courses. Similarly, internet self-efficacy was found to be an indicator of satisfaction in an online learning context (Kuo et al., 2013). So the factors influencing satisfaction with online courses include the overall interactions between instructors and learners, and the perceptions of online learning (Wei & Chou, 2020). An interaction refers to a reciprocal event or exchange between the knowledge provider and the knowledge recipient and/or between the learners. It is a key factor in the learning environment that helps learners realise their educational goals (Wagner, 1994). Moore (1989) breaks down interactivity into three basic types: student-student, student-instructor and student-content. Other classifications include formal and informal interactions (Rhode, 2007), and synchronous interaction (such as online chat and video conferencing) versus asynchronous interaction (including email, online discussion boards and blogs) (Hines & Pearl, 2004;Croxton, 2014). In addition, a new concept of purposeful interpersonal interaction has been put forward to emphasize the quality of online interaction (Mehall, 2020). An increasing number of interactions does not necessarily lead to a higher quality of online learning; it is therefore important to provide a moderate quantity of high-quality interaction. There are different typologies that underline the complexity of the concept of online interaction. Interpersonal interactions, including student-student and student-instructor ones, are regarded as crucial for all educational settings (York & Richardson, 2012).
Many studies have confirmed the positive effect of online interactions on the following aspects: perceived learning (Sher, 2009), student satisfaction with the course (Fedynich, Bradley & Bradley, 2015;Khalid & Quick, 2016), faculty satisfaction with the course (Su et al., 2005), student academic achievement (Long et al., 2011) and second-language acquisition (Ajabshir, 2019;Xu & Yu, 2018;Sauro, 2011;Mackey & Goo, 2007;Smith, 2005). West & Jones (2007) and McBrien et al. (2009) also found a positive relationship between real-time interaction and student satisfaction. Several studies have been conducted to compare face-to-face (FTF) interactions and synchronous computer-mediated communication (SCMC). In some studies it became apparent that high-quality real-time online interaction could increase learner output (Chun, 1994;Kelm, 1992) and improve the quality of language acquisition (Chun, 1994;Kern, 1995) in comparison to face-to-face interaction. It was observed that SCMC provides a more relaxed environment for students than do FTF interactions, which carry elements of internal tension in shy learners (Chun, 1998). Therefore, SCMC is a good tool for encouraging and enabling passive students to become actively involved in the classroom (Chun, 1994;Kern, 1995). Apart from the benefits of online interaction, a lack of feedback from instructors and peer learners is one of the major challenges perceived by students (Muuro et al., 2014). Therefore, securing active interactions is a critical element in online learning. However, there is strong evidence that, to improve students' learning experience, the quality of interactions is paramount (Garrison & Cleveland-Innes, 2005). Self-efficacy is an important factor that affects student satisfaction. It refers to one's confidence in completing specific tasks and activities (Alqurashi, 2016). The research on self-efficacy mainly focuses on technical aspects, such as the internet, computers and allied systems. However, some studies have shown that internet self-efficacy has little relationship with satisfaction in online learning (Kuo et al., 2014). Other studies have revealed that systems and computer self-efficacy cannot predict student satisfaction with online courses (Liaw, 2008;Jan, 2015). Lim (2001) and other scholars say there is a correlation between computer self-efficacy and student satisfaction, but this may need further investigation. A lot of current teaching practice shows that students are becoming more and more familiar with computers, and using the technology is easier than it used to be. It can thus be inferred that the detrimental effect of technology on the satisfaction of students is declining, and research should seek to look at other aspects, such as learning and communication self-efficacy and how they affect student satisfaction. There are four aspects to student perceptions of online learning, including flexibility, adaptability, convenience and interactivity (Wei, 2019). The perception of online learning affects learning behaviour, which implies that if students have a positive view of online learning they are more likely to choose online courses as a way of learning (Duggan et al., 2001). There are studies on the relationship between the perception of online learning, learning behaviour and the learning outcome which show that a positive perception of online learning increases the frequency of students learning online and their scores in online discussion.
However, the test scores are not affected by perception (Wei, 2019). Furthermore, some scholars predict that if students have the necessary online learning skills and think that online learning is effective and flexible, their satisfaction with the course will be higher than that of other students (Sahin & Shelly, 2008). Purposes and Motives As shown in the above covered literature, the notion of satisfaction is highly subjective. However, in the context of this investigation, it would be ideal to acquire some basic understanding of the judgement and feelings of learners about how they perceive a course, its structure and content, and their appreciation of the discussions along with their interactions with peers and with the teacher. This work follows the aforementioned characteristics of online learning and attempts to comprehend student appreciation of the course under investigation and to gain some insight into how to improve the learner experience using the findings to develop better delivery. Therefore, this work explores whether the learners are satisfied with the online courses, including factors such as interactivity, communication self-efficacy and the perception of online learning along with the cross-effect of these factors. There is a limited amount of research on the impact of interactivity and the perception on learner satisfaction within the context of public high schools in China. In addition, most research on self-efficacy has focused on technological aspects, so further studies need to be conducted to explore the aspects of self-efficacy, e.g. communication self-efficacy. Consequently, this work was conducted in order to investigate the impact that interactivity, communication self-efficacy and student perception have on satisfaction in online learning. Here an English-language course in a high school is taken as the subject of the study in an online learning environment in secondary education. It was hypothesised that the learner satisfaction with online courses would have a positive and significant relationship with interactivity, the perception of online learning and communication selfefficacy. The following research questions were thus set: (1) To what extent does interactivity predict student satisfaction with online courses? ( 2) To what extent does student perception of online learning and communication self-efficacy affect student satisfaction? (3) To what extent do interactivity, perception of online learning, communication self-efficacy and student satisfaction affect one another? Methodology The study was undertaken in Beijing No. 4 High School using an online platform that was implemented as a response to the COVID-19 confinement to deliver all the teaching materials online. The platform has several functions, including real-time online teaching, a discussion board, testing, assignment and a resource base. In the real-time teaching there are functions such as audio-visual interaction between the teachers and the students, real-time text interaction, a whiteboard for sharing, questions and answers, and a voting option. The students can undertake a guided self-study in the morning preparing for the upcoming lessons, and then they attend the real-time lessons in the afternoon so that their questions and doubts can be addressed. The platform offers most of the functionality and activities similar to those in traditional teaching but in a virtual environment. 
The cohort involved in this study were students aged 16-18 who studied a range of subjects over the year, including, but not only, Chinese, mathematics, English, physics, biology, chemistry, history, politics and geography. However, this research focused on the English subject as it is not the native language of the students, and hence, more difficulties were encountered in the transfer to the online mode of teaching and learning. There was only one questionnaire containing sections on student satisfaction, interactivity, communication self-efficacy and the perception of online learning, which was sent to all participants simultaneously. Participants In the study there were more than 75 students in two classes in senior II, 44 girls and 31 boys. The selection of the participants was designed to be representative of students at different levels in terms of motivation and academic performance. The English level ranged from medium to advanced and the participants had taken the online English course for about 10 weeks, so they had some basic experience with the online system to adequately appreciate the survey questionnaire. Tools The questionnaire was compiled using the material and guidance available in the published peer reviewed papers, and the questions were grouped to provide an understanding of the interaction, the perception of online learning, communication self-efficacy and satisfaction of students with online English learning. To ensure that the questionnaire worked well with the survey platform it was pre-tested by a neutral group of other students who were taking other online courses. After careful analysis, some adjustments were made, and the final questionnaire achieved a reliability coefficient of 0.93. Some questions offered multiple choice in section (1) and others were open questions for course improvement in terms of what the students would like to see more in the online lessons and what should be removed as illustrated in section (4). For sections (2) and (3) the Likert five-level scale was used to gauge the answers. Figure 3 depicts a snapshot of the survey platform where 1 point means "strongly disagree" and 5 points means "strongly agree", and the participants had to tick the corresponding answer of their choice. (2) and (3) There were altogether 51 questions in the questionnaire, categorized in four sections: (1) demographic questions (gender, age, computer and online learning experience); (2) student satisfaction (satisfaction with the online discussion, course content and structure); (3) interactivity (teacher-student interaction, student-student interaction) and communication selfefficacy; (4) comments and suggestions (what the students liked and disliked in the course; suggestions). The demographic section allowed identification of the proportion of the population that is at ease using computers, as well as experience in any online learning and the type of equipment used to access the lessons. Standard statistical methods including correlation and ANOVA were used to process the data. The data processing and analysis were undertaken using a SPSS software package, where correlation analysis was conducted to explore the relationship between factors, along with ANOVA to study the significance of the correlation. 
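To make the analysis pipeline concrete, the sketch below mirrors these steps in Python (the study itself used SPSS). The column names, the small data frame and the grouping used for the ANOVA are hypothetical placeholders rather than the actual questionnaire data; only the type of computation (reliability coefficient, descriptives, Pearson correlation, one-way ANOVA) follows the description above.

```python
# Illustrative sketch only: the study used SPSS; this mirrors the same steps in Python.
# Column names and the tiny data set are hypothetical, not the real survey responses.
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal-consistency reliability for a block of Likert items (columns = items)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

survey = pd.DataFrame({                      # hypothetical 1-5 Likert responses
    "discussion_satisfaction": [4, 5, 3, 4, 5, 4, 3, 5],
    "overall_satisfaction":    [4, 5, 3, 4, 5, 4, 3, 4],
    "interactivity":           [4, 4, 3, 5, 5, 4, 3, 4],
    "communication_self_eff":  [3, 4, 2, 5, 4, 3, 3, 4],
})

print("alpha:", round(cronbach_alpha(survey), 2))       # reliability of the item block
print(survey.agg(["mean", "std"]).T.round(2))           # Table 1-style descriptives

r, p = stats.pearsonr(survey["discussion_satisfaction"], survey["overall_satisfaction"])
print(f"r = {r:.3f}, p = {p:.3f}")                      # correlation between two factors

# One-way ANOVA, e.g. overall satisfaction across three hypothetical IT-experience groups.
g1 = survey["overall_satisfaction"][:3]
g2 = survey["overall_satisfaction"][3:6]
g3 = survey["overall_satisfaction"][6:]
f, p_f = stats.f_oneway(g1, g2, g3)
print(f"F = {f:.2f}, p = {p_f:.3f}")
```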
Results and Discussion The initial analysis of the data revealed that in the gender section, 86.4% of female participants and 77.5% of male participants disclosed their gender, whereas those who did not state their gender accounted for 13.6% among females and 22.5% among males. Figure 4a shows that the female participants were somewhat more willing to state their gender than the male group. Figure 4b shows that about 64% of the cohort had good experience with IT and computers, whereas nearly a quarter were new to using them and about 4% had no experience at all. This result shows that more than a quarter of the participants will experience various difficulties in effectively following the lessons online because their attention will be diverted to sorting and handling technical issues at home, with limited technical support. The instructor may not be fully aware of these issues while delivering the lesson since it is impossible to see all students on the computer at the same time and to be aware of difficulties any may be having. There is a risk that a quarter of the students may be left behind due to a lack of familiarity with IT and computer issues, such as WiFi, poor connections and bandwidth, a badly functioning video system, sound and microphone, poor audio quality and slow computer response, along with many other minor issues. Referring to the results of the questionnaire, a mean (M) and standard deviation (SD) were calculated for all variables, and the outputs are given in Table 1. Here the overall satisfaction covers course content, its structure, online discussion and the learners' general judgement of their experiences in the online English course. It is observed that satisfaction with the course content and structure had the highest mean value (M=4.03), followed by overall satisfaction with a mean of 3.96, and online discussion with a mean of 3.94. These three factors are key indicators and have a high mean of about 4, whereas communication self-efficacy is relatively low (M=3.62). The average value for overall interactivity was 3.93, for instructor-learner interaction 3.96, and for learner-learner interaction 3.91, which demonstrates that there was a high level of interactivity in the course. The perception of online learning scored an average of 3.86, suggesting that perception may need some improvement.
Table 1. Interactivity, Communication Self-efficacy, Perception and Satisfaction
Factors                               Mean    Stand. dev.
Interactivity                         3.93    .56
Instructor-learner interaction        3.96    .55
Learner-learner interaction           3.91    .72
Communication self-efficacy           3.62    .95
Perception of online learning         3.86    .57
Overall satisfaction                  3.96    .53
Online discussion satisfaction        3.94    .58
Content and structure satisfaction    4.03    .63
Correlation of online discussion satisfaction with overall satisfaction The correlation analysis shows a strong positive interdependence between the variables, which implies that the online discussion directly influenced the satisfaction of the students, their perception of the online learning and interactivity. However, the online discussion had no direct relationship with communication self-efficacy. Figure 5 shows there is a high positive correlation between satisfaction with online discussion and overall satisfaction with r = 0.949. This indicates that students who had a good experience in the online discussion were overall highly satisfied with the course. Figure 5.
Correlation between online discussion satisfaction and overall satisfaction In addition, satisfaction with the online discussion positively correlated with the perception of online learning (r= .3921) and with interactivity (r= .4001), but somehow did not correlate with communication self-efficacy (r= .2750). Therefore, the participation of students in the online interaction is not closely related to their satisfaction with the online discussion. This indicates that students who did not frequently take part in discussion may still have enjoyed various forms of online discussion, and may still have benefited in listening to the opinions of teachers and other students. Similarly, when the learners were asked "What do you like most in the online English course?", most mentioned activities such as "free discussion" and "debate". This shows that the students preferred a form of free discussion and debate along with other similar activities that gave them the chance to learn by exercising their language skills and by exchanging ideas. This suggests that teachers need to select topics that are attractive to students and turn them into high-quality and effective online discussions on a regular basis. This will enhance the efficiency of the exchange of views between teachers and students, and subsequently lead to more learner satisfaction with the online discussion and overall satisfaction with the course. Figure 6 depicts results of analysis showing that satisfaction with content and structure of the course had a significant positive correlation with overall satisfaction (r= .7395). Satisfaction with course content and structure When the participants were asked "What did you like most in the online English course?", most respondents referred to the online courses benefits such as "rich content" and "high degree of freedom". The students put a stress on the statements "The design of the curriculum should ensure that all questions are answered in a timely manner" and "The learning content should be clearly explained". This shows that students paid attention to elements related to course content and structure, including learning materials, curriculum structure, the variety and richness of activities and the question/answer sessions. In addition, there was a significant positive correlation between satisfaction with course content and structure, and satisfaction with the online discussion (r= .6204). In the teaching practice undertaken within this investigation it was observed that the enjoyable online discussions could stimulate the interest of students in the topic covered, and this could eventually improve the satisfaction with the content of the lesson. Therefore, it is possible for the instructors to increase the overall satisfaction of students by including a good variety of interactivity along with rich course content and well-designed, high-quality teaching material. The influence of student perception of online learning It was found that the student perception of online learning affected their interaction, communication selfefficacy and their satisfaction with the course, especially interaction and communication. There was a high positive correlation between the perception and the interactivity (r= .8935). This indicates that students who appreciated the effectiveness of online learning more actively interacted with the instructors and their peers. In addition, student perception had a high positive correlation with communication self-efficacy (r= .8544). 
This suggests that students who take a positive view of online learning have strong confidence in online communication. However, there was a weak positive correlation between perception and satisfaction with the online course (r= .3915), which suggests that student perception had a minor impact on satisfaction. This has two implications for teaching practice, namely: (1) Educators may need to guide students in developing a positive view of online learning by helping them understand its benefits. This may greatly promote their participation in class activities and their communication self-efficacy; consequently, it may help students to integrate gradually into the learning community and reduce the risk of exclusion and being left behind. (2) The view of students about online learning has little impact on their satisfaction; consequently, educators could improve the satisfaction of students by designing a range of attractive online discussions to engage with students and by improving course content and structure. Student satisfaction indicators Among the factors studied in this work, online discussion has the greatest influence on student satisfaction with the online course (r= .9491). This is followed by interactivity (see Figure 7), which has a medium impact on satisfaction (r= .3948), and communication self-efficacy, which has a minor effect (r= .2795). This shows that there are two key indicators for student satisfaction: (1) In online discussions students are given opportunities to practice their language skills and voice their opinions; (2) Course content and structure help students gain knowledge and interact with teachers and peers. In addition, the two factors help foster in the learners a sense of belonging to the learning environment, which may contribute to satisfaction. Figure 7. Correlation between student-teacher interaction and satisfaction Though interactivity is a key indicator of student satisfaction, the quantity of interactions needs to be controlled, since some respondents to the questionnaire stated that "interactions sometimes escalate and may waste valuable time". Interactivity indicators Student perception of online learning has a strong impact on interactivity (r= .8935); this is followed by satisfaction with discussion (r= .4001). However, satisfaction with course content/structure has the smallest influence (r= .3653). This indicates that an effective way of increasing interactivity in online teaching is to improve the way it is perceived. This may be done by clearly conveying to students the benefits of online learning. However, it is undeniable that student perceptions are influenced by a variety of factors, including experience and personal preferences, so teachers will need to engage in activities that may improve student perception in this regard. Communication self-efficacy indicators In terms of communication self-efficacy, student perception of online learning (r= .8544) is the most influential factor; satisfaction with the course (r= .2795) has a moderate effect, and satisfaction with content and structure (r= .0941) has essentially no effect. As noted above, student perception has an overall influence on various factors including communication self-efficacy. This suggests that learners who consider online learning to be effective may be strongly motivated and confident as they communicate with others online.
Conclusions This work is a case study that explores the key indicators of student satisfaction with online courses. It focuses on a given cohort of young students in a particular online course. The research has brought forward some findings that could be used to improve the teaching/learning performance of this particular course. The main aim was to understand how the teachers could improve the learning experience of students and the success of the English module. (1) It is understandable that not all schools would be able to conduct immediate research of this kind to reveal issues of OLL to improve teaching/learning performance. However, this study investigated the above covered aspects of online teaching and learning, and the outcomes have been used to adjust some aspects of the online delivery of this course. Nevertheless, the results are crude and ought not be overgeneralised. This work is an initial attempt to understand the effect of OLL on students using one single group in a specific course, and some studies are being undertaken to quantify the impact of OLL on teaching/ learning. Therefore, further work is planned to extend this study onto the entire school to find a general trend that could be applied to others. However, it is hoped that this paper will encourage other schools to undertake this kind of study to identify adequate support to students because each school has its own peculiarities and needs over a range of subjects delivered online. Satisfaction with the online discussion was found to have a highly positive correlation with online course satisfaction. Therefore, teachers may need to devote themselves to organizing high-quality online discussions to improve the satisfaction of learners. It appears that students who do not often take part in discussion are also satisfied with effective online discussion. There is a significant positive correlation between satisfaction with course content and its structure and overall satisfaction. With reference to instructor-learner and learner-learner interaction, students pay more attention to content and structure of courses. Therefore, teachers may have to pay more attention to content along with an active engagement delivery, which are important in improving learner experience and satisfaction. Interactivity positively affects student satisfaction, and this points to the fact that one must ensure a sufficient amount and frequency of interactions, but the interaction should be controlled and moderate so it does not affect the content/structure of the course and its delivery. Student perception of online learning significantly affects interactivity and communication selfefficacy, and to some extent it affects the satisfaction of the learner with online courses. This means that one should focus on improving perception of online learning and on supporting students to develop a positive view of online learning. Perception of online learning 18. With OLL I get a good variety of multimedia resources. 19. In OLL I can extract important information from the provided resources. 20. OLL provides a good flexibility for interacting directly with other students. 21. OLL removes the distance and the barriers between the teacher and students. 22. It is a very good and convenient way to communicate with friends and other students 23. In OLL there is less limit in time and place to study and thus I have more freedom. 24. With OLL I have increased the range of my general knowledge. 25. Using OLL my academic performance has improved. 26. 
OLL is an effective and personalised way of learning. 27. In OLL it is easier to follow and keep up with the teaching plan pace and timing. 28. OLL brings a relaxing atmosphere and less anxiety. 29. I have a lot fewer difficulties and a smaller workload in OLL.
2020-12-31T09:06:20.800Z
2020-12-25T00:00:00.000
{ "year": 2020, "sha1": "b4d7d28b5822b3194e0e281d06f69ba06bc07921", "oa_license": null, "oa_url": "https://doi.org/10.46451/ijts.2020.12.12", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "fec9663f259cc148e15f8e4f091f13487f80ad1d", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
4793892
pes2o/s2orc
v3-fos-license
Anatomy of Mississippi Delta growth and its implications for coastal restoration Prehistoric rates of land gain in a large portion of the Mississippi Delta are significantly outpaced by present-day rates of land loss. INTRODUCTION Many of the world's largest deltas undergo rapid transformations due to reductions in sediment supply (1), accelerating rates of sea-level rise (2), plus some of the world's highest subsidence rates (3). The Holocene stratigraphic record contains abundant information on the ability of delta plains to grow within the constraints of these controls. However, this archive has only partially been explored, in part due to a historic lack of geochronological tools that are necessary to quantify rates of change. Previous studies have assessed the timing of delta lobe (subdelta) activity through radiocarbon dating of bounding peat (4) and shoreline progradation through optically stimulated luminescence (OSL) dating of beachridge deposits (5). However, delta growth is fundamentally driven by distributary channel activity. Currently available records of delta growth rely largely on instrumental data obtained over the recent decades. For example, the mean land growth rate of the Wax Lake Delta, a recent bayhead delta within the Mississippi Delta, United States (Fig. 1, A and B), has been reported at 0.8 to 3.1 km 2 /year (6,7). However, the assessment of delta growth over small temporal and spatial scales may reveal little about how river-dominated deltas operate over longer time scales. Understanding the rates and patterns of delta growth through distributary channel activity is essential for predicting future deltaic land change (8), managing sediment resources (9), and understanding the effects of human perturbations on deltas (4). This information will be of paramount importance in the 21st century as major population and economic centers in large deltas struggle with rapid environmental change. These issues are exemplified well by the Mississippi Delta, where the deposition of clastic sediment by the trunk channel of the Mississippi River (the primary population and infrastructure corridor) is severely hampered by flood protection levees. Despite the growth of new land in the Wax Lake and Atchafalaya deltas and, to a lesser extent, in the birdfoot delta (Fig. 1B), net land loss rates for the delta plain are about 45 km 2 /year, averaged over the past century (10). The postindustrial sea-level acceleration is likely a relatively small factor herein. Direct human activities-including reduced sediment delivery, dredging of canals, subsequent saltwater intrusion and wave erosion, and fluid extraction-have played a primary role in the recent degradation of the delta plain (11). Land loss in deltas can be offset by the controlled delivery of new sediment to the delta plain (9,12,13). For example, a $50 billion management plan for coastal Louisiana includes proposals to create new land by the year 2065 through engineered river diversions (13) that would reintroduce clastic deposition by means of sediment-laden river water. Developing realistic expectations for the efficacy of these strategies requires an understanding of the natural deltaic processes (for example, distributary channel growth rates and drivers of avulsion) that govern land growth over time scales well beyond decadal-scale instrumental records and the slightly longer historical records (~165 years) (14). 
In addition to information on fluvial sediment loads (15) and deltaic sediment retention efficiency (16), centennial- to millennial-scale records of rates of land growth in the Mississippi Delta are needed to evaluate whether it is possible to significantly offset the high rates of present-day land loss by means of river diversions. There is currently a lack of field data to answer these questions. Here, we use OSL dating of mouth bar deposits from the Lafourche subdelta (Fig. 1B) to determine the rates and patterns of growth in the Mississippi Delta. Luminescence techniques enable the direct dating of both subaqueous and subaerial fluviodeltaic deposits (17) and have proven successful for dating the deposition of Mississippi Delta sediment (18). Mouth bars form as distributaries deliver their sediment load to a receiving basin and reflect deposition of the coarsest sediment fractions as flow decelerates when it meets a standing water body. This results in a sand-dominated deposit that progrades and aggrades to fill the basin (19). Vertical accretion of mouth bars occurs more rapidly than can be resolved by OSL. OSL samples taken from any depth within Lafourche subdelta mouth bars therefore reveal the timing of both mouth bar formation and land emergence. Other chronometers, such as radiocarbon dating of peats (20), may provide chronologies for the initiation and termination of subdelta activity, but they are less powerful for the direct dating of fluviodeltaic clastic strata. OSL dating of mouth bar sand is therefore the preferred tool to directly capture the time of emergence of new land and thus the progradation of the Lafourche subdelta shoreline.
[Fig. 1 caption fragment: (A) Cross section; Mississippi River; Lafourche; Atchafalaya River (43). (B) Mississippi Delta, including the Lafourche subdelta, the Modern (Balize) subdelta with the birdfoot delta, and the Atchafalaya subdelta with the Wax Lake and Atchafalaya deltas. Trunk channels that feed these subdeltas branch into multiple distributaries at polyfurcation points, which define the landward limit of bayhead deltas. The two most recent deltaic avulsion sites are the Lafourche-Modern (L-M) and Modern-Atchafalaya (M-A) avulsions. Previous work (18) was conducted at Paincourtville (PV) and Napoleonville (NV). (C) Location of cross sections, with distance in river kilometers from the Lafourche subdelta polyfurcation point shown in parentheses.]
The Mississippi Delta is composed of a series of subdeltas that formed when quasi-periodic avulsions of major distributaries relocated the depocenter (21). The 10,000-km 2 Lafourche subdelta was active from about 1.6 to 0.6 thousand years (ka) ago (18,20) under conditions of fairly constant relative sea-level rise. Water and sediment discharge was shared with the Modern (Balize) subdelta after 1.4 to 1.0 ka ago (22). The abandonment of the Lafourche subdelta likely preceded the initiation of the Atchafalaya subdelta (22,23), so river discharge was never shared between these two subdeltas. We selected the Lafourche subdelta for this study because it is the most recently abandoned subdelta in the Mississippi Delta. The Lafourche subdelta has experienced a complete delta cycle (24) and therefore provides an archive for river-dominated delta growth from initiation to termination, yet it has experienced limited reworking compared with older subdeltas. In addition, this system has a well-constrained sea-level history with a long-term sea-level rise trend of 0.6 mm/year (25). In the uppermost reach (about 55 river km long),
the Lafourche system essentially features one trunk distributary channel that fed sediment to the surrounding delta plain through episodic overbank deposition, including abundant crevassing on top of a widespread wood peat bed (18). This demonstrates that the region between the avulsion site (L-M) and the furcation of the trunk channel (Fig. 1B) was subaerial before the initiation of the Lafourche subdelta (20). Here, we focus on the lower reach of the subdelta (seaward of the trunk channel) based on 10 cross sections roughly perpendicular to both the main distributary (Bayou Lafourche) and the lesser distributaries (Fig. 1C). Ages are presented in ka relative to 2010. Stratigraphy The Lafourche trunk channel splits into multiple smaller distributaries at 55 river km downstream of its divergence from the modern Mississippi River. This polyfurcation (that is, a furcation of the distributary network resulting in more than two channels) marks the pre-Lafourche shoreline and produced a distributary network that geomorphologically resembles a bayhead delta. Similar polyfurcations mark the antecedent shorelines of modern bayhead deltas, such as the Wax Lake and Atchafalaya deltas, and give rise to the birdfoot shape of the Modern (Balize) subdelta (Fig. 1B). Downstream of the Lafourche polyfurcation point, the Lafourche distributary system built new land by prograding into a shallow bay (Fig. 2). We refer to the area of new land created during Lafourche activity as the "bayhead delta" (~6000 to 8000 km 2 ) and the broader area in which Lafourche sedimentation occurred as the "subdelta" (~10,000 km 2 ). The bayhead delta exhibits a common succession of shell-rich bayfloor muds overlain by 1.3 ± 0.5-m-thick laminated delta front silts and then 2.1 ± 0.8-m-thick mouth bar sands, capped by overbank sediments of varying textures that thin both seaward and away from the channel (Figs. 3 and 4 and fig. S1). Overbank deposits are relatively fine-grained and somewhat organic near the base. In the more mature regions of the subdelta, the overbank unit grades vertically into a patchwork of relatively coarse deposits that pinch out coastward and away from the channel (Fig. 4). This shows that initial, channel-proximal elevation gain in the newly formed bayhead delta was dominated by the deposition of clays, likely through annual flooding. Later, elevation gain was characterized by deposition of predominantly silts associated with crevasse channels. The thickness of bayhead delta strata is similar between the main and lesser distributaries (fig. S2). The combined thickness of mouth bar and delta front deposits (referred to as "foundation deposits") that aggraded to sea level and subsequently supported the growth of the subaerial delta through overbank deposition is consistent throughout the bayhead delta (Fig. 4).
[Figure caption fragment: (18,20). Note that the uppermost portion of the overbank unit is highly generalized; for details, see the study of Shen et al. (18). All ages are presented as thousands of years (ka) ago, relative to 2010.]
Growth patterns Modern bayhead deltas have been shown to prograde in a radially symmetric pattern at their onset (26). This finding is consistent with observational and modeling studies demonstrating that the most seaward portion of a delta is characterized by bifurcations that produce coeval distributaries (27,28).
However, other studies suggest that radial growth of deltas may be restricted to these early stages, whereas more mature systems may prograde in succession by means of repeated avulsions within the subdelta distributary network. Such a mechanism has found support from a widely used Holocene Mississippi Delta radiocarbon chronology (29), as well as historical records of the human-modified Po (30) and Huanghe (31) deltas that feature distributary avulsions within 20 and 100 km of the present-day shoreline, respectively. Our results show that distributary mouth bars of the Lafourche subdelta at similar distances from the polyfurcation point have matching OSL ages (see Materials and Methods), indicating that growth was characterized by coeval distributary channels throughout its period of activity (Fig. 5). Contrary to what has been proposed by the previous work of Frazier (29), there is no evidence for avulsions within the distributary network of the Lafourche bayhead delta. We therefore conclude that the Lafourche distributaries formed by means of bifurcation. This demonstrates that radial growth through distributary channel progradation can persist in river-dominated deltas for nearly a millennium. These data also underscore a principle of distributary evolution evident in both modern and past landscapes of the Mississippi Delta: River-dominated delta systems branch at polyfurcation points associated with the paleoshoreline (Fig. 1B). Growth rates It has been previously hypothesized that progradation slows and ultimately reverses with delta maturity because the area of the delta plain becomes too large to be supported by a constant sediment supply under conditions of constant accommodation creation (32). This process of "autoretreat" has been replicated in laboratory and model experiments (33) and has been offered as a possible explanation for transgressive successions found in the ancient stratigraphic record (34). Autoretreat has been proposed as a fundamental element of any deltaic system where the evolution of the system may be described by the ratio of accommodation creation to sediment supply rates (35). Progradation rates of deltas during the late Holocene have been assessed elsewhere (4,36); however, the autoretreat concept has never been tested in a real-world setting with a well-constrained sea-level history and geochronology. The Lafourche bayhead delta grew at an average rate of 6 to 8 km 2 /year, associated with distributary mouth bar progradation at a relatively constant rate of 100 to 150 m/year (r 2 = 0.89; see Materials and Methods) throughout most of the Lafourche activity (Fig. 5). This is a surprising result, considering that discharge was shared between the Lafourche distributaries and the modern Mississippi River after 1.4 to 1.0 ka ago (22). Furthermore, at least one major crevasse splay in the upstream reach of the Lafourche subdelta extracted a considerable amount of sediment from 0.8 to 0.6 ka ago (18). The constant progradation rate of the Lafourche shoreline indicates that autoretreat did not occur in this system during the time period of interest. Avulsions Avulsions constitute the principal mechanism that shift the depocenter within deltas, thereby driving delta evolution over centennial to millennial time scales. Our new results show that avulsions did not occur within the Lafourche subdelta, suggesting that subdeltas function fundamentally differently and should not be seen as miniature versions of the broader delta. 
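As a side note before zooming out to avulsions, the growth-rate figures quoted above lend themselves to a simple worked sketch: regressing cross-section distance against OSL emergence age gives a progradation rate in m/yr, and dividing the bayhead delta area by the duration of activity gives the land-growth rate. The (age, distance) pairs below are illustrative placeholders, not the published Lafourche data.

```python
# Sketch of how a shoreline progradation rate can be estimated from mouth bar OSL ages.
# The (age, distance) pairs are illustrative placeholders, not the published data set.
import numpy as np
from scipy import stats

age_ka = np.array([1.55, 1.35, 1.20, 1.05, 0.90, 0.74])        # OSL age of land emergence
distance_km = np.array([5.0, 25.0, 45.0, 60.0, 80.0, 101.0])   # river km from polyfurcation

fit = stats.linregress(age_ka, distance_km)
progradation_m_per_yr = -fit.slope          # km/ka equals m/yr; sign flips because age decreases seaward
print(f"progradation rate ~ {progradation_m_per_yr:.0f} m/yr, r^2 = {fit.rvalue**2:.2f}")

# Land growth rate follows from the bayhead delta area divided by the duration of activity.
area_km2 = (6000, 8000)
duration_ka = 1.0
print("land growth:", [round(a / (duration_ka * 1000), 1) for a in area_km2], "km^2/yr")
```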
Here, we zoom out to the entire Mississippi Delta and Lower Mississippi Valley to identify avulsion sites and to test the degree to which avulsions are preferentially located near a single node (37, 38) versus a broader zone (39,40), corresponding to the backwater transition where channel-bed deposition is relatively rapid (39,41). The link between backwater dynamics, bed aggradation, and avulsion has been described by the backwater number. The backwater number is defined as the backwater length divided by the avulsion length (the channel length between the avulsion site and the shoreline at the time of avulsion) (42) and is reported to range from 0.5 to greater than 4 (39,40). The two most recent avulsion sites within the Mississippi Delta include the partial shift of the modern Mississippi River to the Atchafalaya River (M-A avulsion) and the partial shift of Bayou Lafourche to the modern river (L-M avulsion) (Fig. 1, A and B). The M-A avulsion was initiated at 0.5 to 0.3 ka ago (22). The M-A avulsion length is 490 river km (see Materials and Methods), comparable to the backwater length of the modern Mississippi River, yielding a backwater number of roughly 1 (41). The L-M avulsion occurred between 1.4 and 1.0 ka ago (22). Our data show that the Lafourche bayhead delta had prograded between 20 and 70 km beyond the polyfurcation point at this time, yielding an avulsion length of 75 to 125 km (see Materials and Methods), significantly shorter than the M-A avulsion length. Assuming similar backwater dynamics as in the modern system, the L-M backwater number is roughly 5. The backwater numbers of the two well-constrained avulsions within the Holocene Mississippi Delta are therefore generally compatible with backwater theory, but not with the concept of repeated avulsion around a single, backwater-mediated node. Evidence of other Holocene Mississippi River avulsions, in the form of relict channel belts, can be found more than 700 linear km inland, within the uppermost reaches of the Lower Mississippi Valley (Fig. 1A) (43). Assuming a sinuosity of 1.9 (44), this corresponds to avulsion lengths greater than 1300 km. This region has seen considerable (10 m or more) (43) Holocene aggradation, making avulsions almost inevitable. The locations of the two most recent avulsion sites in this region are relatively well defined, yet three or more older avulsions likely occurred within an~250-km linear zone centered around Memphis, TN (see Materials and Methods). From this evidence, we conclude that avulsions of the Mississippi River are at least partially dictated by fluvial processes that occur far landward of the delta and extend well beyond the backwater transition. Our findings are consistent with observations of avulsion nodes occurring over an~80-km linear distance and extending beyond the backwater transition in the Rhine-Meuse Delta (45), Netherlands, an area with significantly more data to address this problem (46). Within the Mississippi Delta, as well as in other muddy, river-dominated deltas, avulsions may be partly steered by factors such as sediment cohesion (39), which may drive the river to reoccupy easily erodible (sandy) channel belts (47) rather than forging new tracks through cohesive, muddy overbank strata. DISCUSSION The consistent thickness of foundation deposits indicates that the pre-Lafourche bay floor depth was fairly uniform (3.4 ± 0.8 m) and remarkably similar to basin water depths of modern incipient bayhead deltas of the Atchafalaya subdelta (48). 
The Lafourche subdelta is therefore a S C I E N C E A D V A N C E S | R E S E A R C H A R T I C L E good analog for present-day processes of bayhead delta growth, such as the proposed river diversions that are planned to convert open water into land. This similarity to the modern system enables a direct evaluation of the ability of present-day depositional systems (that is, incipient bayhead deltas and engineered diversions) to offset contemporary rates of Mississippi Delta land loss. Our finding that distributary networks polyfurcate at the coeval shoreline provides a framework by which the antecedent shoreline and stratigraphy of other river-dominated deltas may be inferred. On the basis of this, we hypothesize that the paleo-shoreline of the Modern (Balize) subdelta may have been positioned near the polyfurcation point of the birdfoot delta (Fig. 1B) at the time of Modern subdelta initiation. Although our work tests many fundamental principles of delta growth, our results are limited to describing deposits immediately proximal (at most a few kilometers) to distributary channels. The timing of land emergence in the distal, interdistributary flood basins of the Lafourche subdelta was not tested with our approach. It is possible that progradation and land creation rates varied over decadal or even centennial time scales. However, the precision of the OSL ages does not allow for confidently inferring this higher frequency variability. Furthermore, the nature of the discharge split between the Lafourche subdelta and the Modern (Balize) subdelta from 1.4 to 1.0 ka onward is not known. Despite these limitations, our work makes considerable contributions to the understanding of delta growth, which are relevant to the management of deltas. Avulsions of the Mississippi River are shown to most likely occur over a broad spatial zone that is only partly mediated by backwater dynamics, with a considerable density of avulsion sites 450 to 700 linear km inland that are unrelated to backwater hydraulics. In contrast, because no evidence was found for avulsions in prograding distributary channels, it seems unlikely that new bayhead deltas associated with river diversions will exhibit avulsions. Rather, they can be expected to grow radially by means of bifurcation. There are a number of potential reasons why autoretreat is not observed in the Lafourche subdelta, including a relatively slow rate of sealevel rise and a relatively high sediment supply, which may reduce the efficacy of autoretreat (35). It is also possible that other mechanisms, for example, higher sediment retention efficiency with increasing delta maturity, exert a primary control over delta growth. Alternatively, deltas situated on relatively open coasts and unconstrained by topography may avulse before they enter a state of autogenic decline. Regardless of the mechanism(s) that may enable sustained progradation, our findings raise questions about the applicability of the autoretreat concept to large deltas and their stratigraphic records. We document high average progradation rates of 100 to 150 m/year and land area creation rates of 6 to 8 km 2 /year within the Lafourche subdelta, sustained for nearly a millennium, that is, rates that are at least two times higher than present-day growth rates in the Wax Lake Delta (6). These rates are especially noteworthy considering that the sediment input was shared between the Lafourche subdelta and the Modern (Balize) subdelta (at least during the latter part of its existence). 
This finding is relevant to coastal planning because it shows that channels with diminished sediment flux, including the proposed river sediment diversions that siphon only a fraction of modern Mississippi River discharge during relatively short time periods, can be very effective in building new land. However, the average prehistoric rates of land growth are several times (by a factor of about 5 to 7) lower than the recent human-enhanced rates of Mississippi Delta land loss (10). Although areas beyond the Lafourche subdelta such as the Modern (Balize) subdelta may have also experienced growth during the time period of concern, there was undoubtedly significant decline in other portions of the Mississippi Delta (that is, pre-Lafourche subdeltas); thus, it is unlikely that net growth of the delta plain exceeded 6 to 8 km 2 /year. Furthermore, land building by the Lafourche subdelta occurred under the lowest rates of relative sea-level rise experienced by the Mississippi Delta throughout the Holocene (25). Considering recent land loss rates (~45 km 2 /year) (10) in combination with the global sea-level rise acceleration (49), net land loss in the modern delta will likely continue regardless of coastal restoration strategies, ultimately producing a deltaic landscape that will be very different from the present one. MATERIALS AND METHODS This study used stratigraphic data obtained through hand coring and OSL dating through a combination of well-established and novel methods. Boreholes were drilled using an Edelman hand auger and gouge. Cores were discretized to 10-cm intervals and described in the field with attention to grain size following the U.S. Department of Agriculture texture classification scheme, sedimentary structures, and fossil content, which informed the interpretation of lithogenetic units (see table S1). The surface elevation at borehole sites was obtained from publicly available LiDAR (light detection and ranging) data. OSL samples were captured using a stainless steel Eijkelkamp sampler that prevents light exposure. Below, we describe the OSL dating approach, as well as the calculation of progradation and land change rates, and avulsion lengths. OSL sample preparation and measurement OSL samples were prepared under amber light at Tulane University following standard procedures (50,51). Luminescence measurements were performed at the University of Liverpool using 1-to 2-mm aliquots of 75 to 125 mm (~110 grains) or 125 to 180 mm (~50 grains) purified quartz sand, adhered to 10-mm stainless steel discs. The coarsest grain-size fraction for which sufficient sediment was available was used. Descriptions of measurement facilities are given in the previously published work (52). A standard single-aliquot regenerative-dose (SAR) protocol (53, 54) with a 200°or 220°C preheat, 180°C cut heat, three to four regenerative points, one recuperation point, and recycling checks including infrared (IR) depletion of the OSL signal (table S3) (55) was used to extract the equivalent dose (D e ). Note that D e herein refers solely to the absorbed radiation dose estimated from luminescence measurement for a single aliquot. Luminescence measurements were made for 40 s over 250 channels. The OSL signal was integrated over the first 0.48 s, and an early background interval, integrated over 0.48 to 1.76 s, was subtracted (56). Aliquot acceptance criteria included recycling and OSL IR depletion ratios of 10% (55), a maximum test dose error of 20%, and recuperation of 5% relative to the natural signal. 
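A minimal sketch of the early-background subtraction described here is given below. Only the integration windows (signal over the first 0.48 s, background over 0.48 to 1.76 s) and the 40 s / 250-channel binning are taken from the text; the decay curve itself is synthetic, so the numbers produced are purely illustrative.

```python
# Sketch of the early-background subtraction used for the SAR measurements.
# The decay curve below is synthetic; only the integration windows and binning follow the text.
import numpy as np

channel_width = 40.0 / 250.0                       # 0.16 s per channel
counts = np.random.default_rng(0).poisson(
    2000 * np.exp(-np.arange(250) * channel_width / 0.3) + 50)  # fake OSL decay plus background

sig_ch = slice(0, int(round(0.48 / channel_width)))             # channels 0-2 (first 0.48 s)
bkg_ch = slice(int(round(0.48 / channel_width)),
               int(round(1.76 / channel_width)))                # channels 3-10 (0.48-1.76 s)

signal = counts[sig_ch].sum()
background = counts[bkg_ch].sum() * (sig_ch.stop - sig_ch.start) / (bkg_ch.stop - bkg_ch.start)
net_signal = signal - background                                # background scaled to the signal window
print(f"net OSL signal: {net_signal:.0f} counts")
```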
OSL age calculation D e data sets were cleaned to remove potential outliers before age modeling (see the Supplementary Materials) and then treated with a bootstrap minimum age model (bootMAM) (57,58) to obtain the paleodose for each sample. The paleodose is defined as the best estimate of the true burial dose (the average dose absorbed by the dated quartz sand grains within the sample since burial). The bootstrap approach provides the benefit of incorporating uncertainty on the width of the D e distribution (sigma_b) expected for well-bleached sands in this setting (57). To define the sigma_b input to bootMAM, this study used a new method for quantifying overdispersion based on the assumption that at least some samples contain only well-bleached quartz grains. This assumption was supported by initial tests, which showed that some samples (n = 5) had overdispersion values equal to or less than those considered characteristic of well-bleached Mississippi Delta sands by previous studies (18,52). First, each D e data set (n = 23; see the Supplementary Materials) was analyzed with a central age model (CAM) (58), which gives a central value and overdispersion of the D e distribution of the sample (table S4). The values for overdispersion obtained through the CAM were grouped by grain size (75 to 125 mm or 125 to 180 mm) and input with their uncertainties into bootMAM (57) with sigma_b = [0,0]. The output revealed the overdispersion that is characteristic of the best-bleached samples within a given grain-size fraction. Overdispersion quantified with this approach was 11 ± 3% for 75-to 125-mm sand and 11 ± 4% for 125-to 180-mm sand. The exclusion and addition of samples to the overdispersion analysis are discussed in the Supplementary Materials. The natural radiation of bulk sediment was determined using activity concentrations of 40 K and several radionuclides from the uranium and thorium series, measured using a gamma spectrometer at Tulane University (table S5). The dose rate was calculated using standard dose rate conversion (59) and cosmogenic contribution (60) factors (table S5). No external alpha contribution was included because the outer layer of the quartz grains was removed by etching. Beta dose attenuation was corrected for grain size (61), and attenuation due to pore water was calculated (62). Water content was measured by drying bulk sediment for each sample in a low-temperature oven, with 5% uncertainty added. OSL ages were calculated by dividing the paleodose obtained from the bootMAM by the dose rate shown in table S5. Two samples were dated per cross section, and paired ages that agreed within 2s unshared uncertainty were accepted. One age (St. Charles I-2) was rejected; this is discussed further in the Supplementary Materials. Paired ages and their unshared uncertainties were treated with a weighted mean following the separation of shared (that is, instrument source calibration, dose rate conversion factors, and gamma spectrometer calibration) and unshared (that is, the spread of the D e distribution assigned by the age models, dose rate measurement error due to counting statistics, and water content) errors (63) to obtain a single age for land emergence at each cross section. Shared errors were returned in quadrature to the uncertainty of the weighted mean ages after application of the weighted mean. 
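The age arithmetic and the error-weighted combination of paired ages can be sketched as follows. The paleodoses, dose rates and error magnitudes are placeholders, and the split into shared and unshared terms follows the description above rather than the authors' actual code.

```python
# Minimal sketch of the age calculation and the weighted mean of paired ages.
# All numbers are placeholders; only the procedure follows the description in the text.
import numpy as np

def osl_age(paleodose_gy: float, dose_rate_gy_per_ka: float) -> float:
    """Age in ka = paleodose (Gy) / environmental dose rate (Gy/ka)."""
    return paleodose_gy / dose_rate_gy_per_ka

ages = np.array([osl_age(1.30, 1.10), osl_age(1.25, 1.02)])   # two samples per cross section
unshared_err = np.array([0.08, 0.09])                          # ka, sample-specific errors only

w = 1.0 / unshared_err**2                                      # inverse-variance weights
mean_age = np.sum(w * ages) / np.sum(w)
mean_unshared = np.sqrt(1.0 / np.sum(w))
shared_err = 0.04                                              # ka, e.g. source calibration (shared term)
total_err = np.hypot(mean_unshared, shared_err)                # shared error returned in quadrature
print(f"emergence age: {mean_age:.2f} +/- {total_err:.2f} ka")
```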
Progradation and land change rates The range of the Lafourche bayhead delta progradation rates (100 to 150 m/year) was obtained by dividing the distance between the most landward (St. Charles) and most seaward (Fourchon) cross sections (101 river km) by the minimum and maximum time span between emergence at these localities (0.65 to 0.97 ka, based on the 1s uncertainty of the weighted mean OSL ages). The land area produced by the Lafourche bayhead delta was obtained by estimating different shoreline positions at the time of Lafourche subdelta abandonment; other boundaries are better constrained (Fig. 5). The minimum area (6000 km 2 ) was calculated using the current position of the transgressive barrier island chain. The maximum area (8000 km 2 ) was estimated by projecting the Lafourche subdelta beyond the most seaward cross section (Fourchon, 0.74 ± 0.06 ka), assuming a progradation rate of 150 m/year sus-tained by all distributaries for the final~150 years of subdelta activity. The contemporary rate of land loss for the deltaic plain was calculated as the sum of areas lost from the Atchafalaya Delta, Barataria, Breton Sound, Mississippi River Delta, Pontchartrain, Teche-Vermilion, and Terrebonne basins over the time period of 1932 to 2016 (10). Avulsion lengths Avulsion lengths in the Mississippi Delta are presented in river kilometers, obtained along the center of river channels using Google Earth. The avulsion length range associated with the establishment of the presentday Mississippi River in the Mississippi Delta was obtained from the distance between the L-M avulsion site and the most seaward and landward positions possible for the Lafourche paleo-shoreline at the time of the avulsion (1.4 to 1.0 ka ago) and by placing the timing of Lafourche subdelta initiation at 1.6 ka ago. The most landward position was determined by multiplying the minimum time that the Lafourche subdelta had been active when the L-M avulsion occurred (0.2 ka ago) by the minimum rate of progradation (100 m/year). Multiplying the maximum time (0.6 ka ago) by the maximum rate of progradation (150 m/year) projected the most seaward position of the paleo-shoreline beyond the realistic region constrained by the OSL ages, and so we established this boundary by using the 1-ka isochron (Fig. 5). Holocene channel belts and their relative chronology have been mapped by Saucier (43). Avulsion sites associated with the creation of new channel belts were identified on the basis of the following criteria: (i) likely redirection of all flows to form a new channel belt, rather than partial redirection of flow via bifurcation; and (ii) the most inland departure between two sequential channel belts, rather than a point where channel belts may cross-cut downstream. Distinction was made between avulsion sites that unequivocally met these criteria versus those that were classified as plausible avulsion sites (Fig. 1A). Other avulsions within this region have been suggested by previous work (47). However, those phenomena cannot be ruled out as instances of cross-cutting, given the lack of chronologic data. Holocene channel belt avulsion sites were estimated in linear kilometers relative to the modern shoreline using Google Earth and rounded to the nearest 50 km. The sinuosity of the entire Lower Mississippi River is 1.9 (44); this value was used to approximate the avulsion lengths as measured along channels. 
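The rate and length estimates described in this section reduce to simple arithmetic, reproduced below. All inputs appear in the text; using the ~490 river km backwater length of the modern Mississippi River for the L-M backwater number repeats the assumption stated earlier.

```python
# Worked arithmetic for the rate and length estimates described in this section.
distance_km = 101.0                      # St. Charles to Fourchon, river km
dt_min_ka, dt_max_ka = 0.65, 0.97        # 1-sigma span between the two emergence ages

# km/ka is numerically equal to m/yr, giving roughly 100 to 155 m/yr.
print(f"progradation: {distance_km / dt_max_ka:.0f} to {distance_km / dt_min_ka:.0f} m/yr")

# Minimum L-M avulsion length: 55 river km of trunk channel plus 0.2 ka of progradation
# at the minimum rate of 100 m/yr (= 20 km).
trunk_km = 55.0
min_avulsion_km = trunk_km + 0.2 * 1000 * 0.100
print(f"minimum L-M avulsion length: {min_avulsion_km:.0f} river km")       # 75 km

# Backwater numbers (backwater length / avulsion length) for the two avulsions,
# assuming the modern ~490 river km backwater length applies in both cases.
backwater_km = 490.0
print("M-A:", round(backwater_km / 490.0, 1))                               # ~1
print("L-M:", [round(backwater_km / L, 1) for L in (75.0, 125.0)])          # ~6.5 to ~3.9, i.e. roughly 5
```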
SUPPLEMENTARY MATERIALS Supplementary material for this article is available at http://advances.sciencemag.org/cgi/ content/full/4/4/eaar4740/DC1 Stratigraphic data for all cross sections Lithogenetic unit thickness calculation OSL dating approach Sample exclusions and additions to analyses Cleaning of outlying aliquots Sample rejection Comparison with previous OSL approach fig. S1. Cross sections illustrating the stratigraphy and OSL ages for all study sites. fig. S2. Thickness of lithogenetic units at main and lesser distributary cross sections. fig. S3. Comparison of mouth bar sand ages estimated using two approaches.
2018-04-26T23:46:28.703Z
2018-04-01T00:00:00.000
{ "year": 2018, "sha1": "ca212bfb8034e2ddda1ebf9c5f0dfd9f2a2b5b2f", "oa_license": "CCBY", "oa_url": "https://advances.sciencemag.org/content/advances/4/4/eaar4740.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ca212bfb8034e2ddda1ebf9c5f0dfd9f2a2b5b2f", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geology", "Medicine" ] }
211472888
pes2o/s2orc
v3-fos-license
The “Snail Flap”: A Rotation Flap in Scalp Reconstruction The scalp rotation flap is still the flap of choice for scalp defects as it provides hair-bearing skin, replacing “like with like,” and can be designed to respect hairlines and patterns.1 Conventionally, these flaps are planned to be up to 8 times the diameter of the defect to allow for sufficient recruitment of scalp laxity and allow for primary closure of the secondary defect. Nevertheless, its use is limited by large flap to defect ratios (See Video [online], which displays the surgical technique of the “snail flap”). Video 1. This video displays the snail flap technique demonstrated in the article. From “The "Snail Flap": A Rotation Flap in Scalp Reconstruction” Following the creation of the scalp defect, the thickness of the scalp is measured. If it is less than 5 mm, a snail flap is raised, as shown in the video, with a “flap:defect” ratio of 2:1. The tip of the flap is folded onto itself and advanced into the primary defect first. The secondary defect is then closed by spreading the tension across the entire arc. In younger patients, where the scalp thickness tends to be greater than 5 mm and there is increased scalp laxity, a “flap:defect” ratio of 1.5:1 may be chosen as the greater scalp laxity accords ease of closure (Fig. 1). INTRODUCTION The scalp rotation flap is still the flap of choice for scalp defects as it provides hair-bearing skin, replacing "like with like," and can be designed to respect hairlines and patterns. 1 Conventionally, these flaps are planned to be up to 8 times the diameter of the defect to allow for sufficient recruitment of scalp laxity and allow for primary closure of the secondary defect. Nevertheless, its use is limited by large flap to defect ratios (See Video [online], which displays the surgical technique of the "snail flap"). Following the creation of the scalp defect, the thickness of the scalp is measured. If it is less than 5 mm, a snail flap is raised, as shown in the video, with a "flap:defect" ratio of 2:1. The tip of the flap is folded onto itself and advanced into the primary defect first. The secondary defect is then closed by spreading the tension across the entire arc. In younger patients, where the scalp thickness tends to be greater than 5 mm and there is increased scalp laxity, a "flap:defect" ratio of 1.5:1 may be chosen as the greater scalp laxity accords ease of closure ( Fig. 1). DISCUSSION Several local flaps can be safely utilized in scalp reconstruction providing "like with like" tissue in nonradiated patients with moderate and large defects. 2 In general, rotation flaps are more commonly chosen as they match the natural convexity of the scalp. 3 A typical rotation flap as presented by Costa et al is presented in Figure 2. 4 Tensionfree closure, inclusion of a reliable vascular pedicle, and maintenance of anterior hairline, brow, and sideburn symmetry are critical aspects of the flap design. 5 Types of described flaps include the "yin-yang" and "pinwheel" flaps for vertex defects 6 or some more complex designs such as the "banana peel" flap for occipital defects 7 and the "Juri" flap for anterior scalp defects. 8 Efforts to improve upon the efficiency of the scalp rotation flap have long been afoot. Ahuja 9 designed a modified rotation flap by placing the isosceles triangle in an imaginary circle of tissue to gain more effective movement flap tissue into the defect. 
This involved raising a tongue of extra tissue on the leading edge of the flap (above the base of the isosceles triangle) and then discarding it before flap inset. Subsequent modifications of this particular technique involved utilizing this extra tongue of tissue to further fill in the defect. 10 This modification allows tissue movement from an area almost diametrically opposite to the defect instead of shortening the line of maximum extensibility, a paradigm shift over conventional rotation flap philosophy. A similar concept was propounded, termed "the divine rotation flap" 11 but a comparative study between all these designs above concluded that the conventional rotation flap design was superior based on the tension resulting after wound closure and calculating the length of the scar. 12 In this article, we challenge that notion and introduce an advanced technique without any related complicated mathematical type but inspired by the well-known golden spiral, which is abundant in nature. 13 According to our practical experience, scalp thickness changes by age. Provided that one takes scalp thickness which represents the intrinsic scalp laxity 14 into account and modifies flap:defect ratios as graphically illustrated in Figure 1, closure of both primary and secondary defects is seamless. This is a useful generic rule that must be evaluated in every case allowing larger scalp defects to be reconstructed without resorting to skin grafts or transposition flaps, which again require skin grafting of secondary defect. This is an ideal scalp reconstructive tool, especially in women and younger patients, with most reconstructions possible as an office procedure. Our experience in the Queen Victoria Hospital (East Grinstead, UK) reiterates this conclusion. More specifically, we have used the "snail flap" for the reconstruction of scalp defects during a 2-year period in 18 patients (10 women and 8 men) with age ranging from 45 to 85 years and a postoperative follow-up from 5 to 12 months. Nineteen malignant skin lesions (13 basal cell carcinomas [BCC] and 6 squamous cell carcinomas [SCC]) have been initially excised leaving circular defects with a diameter up to 7.5 cm. The flap survival rate was 100% with minor complications including 2 incidents of minimal flap necrosis and 1 overgranulating scar. Alopecia was practically undetectable and confined just over the scar. Generally, the aesthetic outcome was deemed very satisfactory from the surgeon and the patient in all cases. Limitations One limitation of our study is that we cannot provide any experience of the flap use for other regions of the body except from scalp. Additionally, the video from the operating room is quite shaky in some instances and in the second case presented we could not provide a long-term follow-up image. Georgios Christopoulos, MD, PhD, MSc Plastic Surgery Department Queen Victoria Hospital NHS Trust East Grinstead, UK E-mail: gdchristopoulos@gmail.com
2020-01-30T09:06:31.085Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "0dba96850ba73d996b0bd4109bde54fbdc329cd0", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1097/gox.0000000000002599", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "40166a14fc773560ae53409f8f7ad214a51d502f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
196406001
pes2o/s2orc
v3-fos-license
Methylprednisolone for the Treatment of Patients with Acute Spinal Cord Injuries: A Propensity Score-Matched Cohort Study from a Canadian Multi-Center Spinal Cord Injury Registry Abstract In prior analyses of the effectiveness of methylprednisolone for the treatment of patients with acute traumatic spinal cord injuries (TSCIs), the prognostic importance of patients' neurological levels of injury and their baseline severity of impairment has not been considered. Our objective was to determine whether methylprednisolone improved motor recovery among participants in the Rick Hansen Spinal Cord Injury Registry (RHSCIR). We identified RHSCIR participants who received methylprednisolone according to the Second National Spinal Cord Injury Study (NASCIS-II) protocol and used propensity score matching to account for age, sex, time of neurological exam, varying neurological level of injury, and baseline severity of neurological impairment. We compared changes in total, upper extremity, and lower extremity motor scores using the Wilcoxon signed-rank test and performed sensitivity analyses using negative binomial regression. Forty-six patients received methylprednisolone and 1555 received no steroid treatment. There were no significant differences between matched participants for each of total (13.7 vs. 14.1, respectively; p=0.43), upper extremity (7.3 vs. 6.4; p=0.38), and lower extremity (6.5 vs. 7.7; p=0.40) motor recovery. This result was confirmed using a multivariate model and, as predicted, only cervical (C1–T1) rather than thoracolumbar (T2–L3) injury levels (p<0.01) and reduced baseline injury severity (American Spinal Injury Association [ASIA] Impairment Scale grades; p<0.01) were associated with greater motor score recovery. There was no in-hospital mortality in either group; however, the NASCIS-II methylprednisolone group had a significantly higher rate of total complications (61% vs. 36%; p=0.02). NASCIS-II methylprednisolone did not improve motor score recovery in RHSCIR patients with acute TSCIs in either the cervical or thoracic spine when the influence of anatomical level and severity of injury was included in the analysis. There was a significantly higher rate of total complications in the NASCIS-II methylprednisolone group. These findings support guideline recommendations against routine administration of methylprednisolone in acute TSCI. Introduction Traumatic spinal cord injuries (TSCIs) affect up to 500,000 people worldwide each year, and their high morbidity is associated with substantial individual and societal burden and socioeconomic impact. 1,2 Patients with TSCIs often experience devastating neurological impairments, and they frequently require complex long-term multidisciplinary care. 3,4 Total health care costs related to TSCIs exceed $10 billion annually in the United States alone, and lifetime per person direct and indirect costs can exceed $3 million. 5,6 TSCIs most commonly affect young males and result from road traffic accidents, but recent reports also highlight their increasing incidence in older adults as a result of low-energy falls. 2,[7][8][9] The identification of novel interventions to reduce the morbidity of TSCIs is an urgent ongoing research priority. 3,10 Methylprednisolone is a corticosteroid that was proposed to inhibit the inflammatory cascades contributing to secondary spinal cord damage after TSCIs, but its clinical utility remains controversial.
11,12 Considerable debate has centered on the validity of results from the landmark Second National Spinal Cord Injury Study (NASCIS-II), which was published in 1990. 11,13,14 In NASCIS-II, 487 patients with acute TSCIs were randomized to an initial bolus of 30 mg/kg of methylprednisolone followed by an infusion of 5.4 mg/kg per h for 23 h versus either naloxone or placebo. The primary analysis among the 487 patients enrolled within 12 h in NASCIS-II failed to demonstrate a significant neurological benefit in the 162 patients randomized to methylprednisolone. However, a secondary analysis of 65 of these patients who received methylprednisolone within 8 h of injury suggested that this subgroup experienced improved neurological recovery at 6 months. 13,15 Critics of NASCIS-II highlight the limited credibility of subgroup testing, the potential importance of losses to follow-up, the small magnitude of observed treatment effects, and the arbitrary nature of an 8-h threshold. 14,[16][17][18][19] Advocates discuss a lack of otherwise high-quality evidence and cite indirect support elsewhere in the literature. 15,20 The use of methylprednisolone has decreased dramatically in many centers, but some clinicians still report a belief in its efficacy or concerns about medical-legal pressure. [21][22][23][24][25] Potential harms include increased risks for respiratory, urinary tract, and wound infections, hyperglycemia, gastrointestinal hemorrhage, steroid-induced myopathy, and all-cause mortality. 17,26,27 Early critical reviews of the NASCIS studies recommended that methylprednisolone administration not be considered a "standard of care" for acute TSCI, but rather, a treatment option. More recently, the 2013 "Guidelines for the Management of Acute Cervical Spine and Spinal Cord Injuries" recommended against the routine administration of methylprednisolone for the treatment of acute TSCIs. [28][29][30] Recent evidence from the Rick Hansen Spinal Cord Injury Registry (RHSCIR) suggests that the prognostic importance of patients' neurological level of injury in combination with the baseline severity of their neurological impairments may have been previously overlooked. 3 Controlling for the joint distribution of these two variables in TSCI research might increase the likelihood of detecting true treatment effects while simultaneously avoiding spurious or misleading results. 31 In this study, our primary objective was to determine whether the NASCIS-II regimen of methylprednisolone started within 8 h of injury improved motor recovery in comparison with no steroid treatment among RHSCIR patients with acute TSCIs. Our secondary objectives were to consider the effect of patients' neurological level of injury and the baseline severity of their neurological impairments on motor recovery, and to compare rates of complications between groups. Study design We performed a propensity score-matched cohort study using patient data that were prospectively collected in RHSCIR. RHSCIR is an ongoing multi-center observational study of patients with acute TSCIs who are admitted to major trauma centers and accompanying rehabilitation centers in Canada. 32 There are currently 31 participating study sites in the RHSCIR network, which are located across 16 cities from 9 out of 10 Canadian provinces. This article's primary objective was specified a priori during the development of RHSCIR, along with several other research objectives.
32 Each participating site obtained local Research Ethics Board or Institutional Review Board approval prior to enrolling patients and collecting data. Participants Patients were eligible for this study if they were 18 years of age or older and they presented to a participating site following an acute TSCI. Patients with non-traumatic etiologies of SCI such as infection, neoplasm, iatrogenic, or acute vascular causes were ineligible, but no exclusions were made on the basis of age, sex, medical co-morbidities, associated injuries, or planned treatment. According to the RHSCIR protocol, approximately 265 data elements were collected during participants' pre-hospital, acute, and rehabilitation phases of care. Further descriptions of the RHSCIR data elements, procedures, governance structure, and patient privacy and confidentiality framework are available elsewhere. 3,32,33 We used the RHSCIR database to identify all patients from May 2004 to March 2014 who received either the NASCIS-II regimen of methylprednisolone started within 8 h of their acute injury or no steroid treatment. Patients who received regimens of methylprednisolone other than NASCIS-II, patients who received steroids other than methylprednisolone, and patients whose steroid status was indeterminate were excluded. Patients who received the NASCIS-II regimen followed by an additional 24 h of methylprednisolone were included. 15 The indications for NASCIS-II methylprednisolone were not standardized across the participating sites, and patients could have received NASCIS-II methylprednisolone at RHSCIR acute care sites or at non-participating community hospitals prior to being transferred to an RHSCIR acute care site. Data sources Motor function scores were measured by trained physicians, nurse practitioners, or physiotherapists according to the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI). 34 ISNCSCI total motor scores (TMS) can range from 0 (absent motor function) to 100 (intact motor function) and comprise component upper extremity motor scores (UEMS; range 0-50), and lower extremity motor scores (LEMS; range 0-50). We considered patients' baseline motor scores to be those obtained on their admission to acute care and we considered patients' final motor scores to be those obtained at the time of their discharge to the community from acute care or inpatient rehabilitation. 31 Each ISNCSCI record was processed through a customized electronic algorithm that maintained consistency and high quality. 32 We also retrieved the following variables from the RHSCIR database for each patient: age, sex, Body Mass Index, Glasgow Coma Scale and Injury Severity Score at admission, injury mechanism, Charlson Comorbidity Index, 35 whether or not patients underwent surgery, and RHSCIR study site. These data elements were collected by trained research personnel and entered into standardized local RHSCIR databases before being exported to the RHSCIR national office for centralized quality checks. 32 Missing or ambiguous data were reconciled with local research coordinators, hospital health records, and medical chart abstraction whenever possible. We collected rates of in-hospital mortality, urinary tract infections (UTIs), pneumonias, decubitus ulcers, deep vein thrombosis or pulmonary embolism, surgical site infections, and sepsis using International Classification of Diseases, 10th Revision (ICD-10) codes from the Canadian Institute for Health Information's Discharge Abstract Database. 
36 Statistical analysis We used 1:1 propensity score matching based on logistic regression to match patients who received NASCIS-II methylprednisolone with controls who received no steroid treatment. To control for potential confounding, we matched according to varying neurological level of injury (cervical: C1-T1, or thoracic: T2-L3) and baseline severity of neurological impairments (ISNCSCI ASIA Impairment Scale A, B, C, or D), as well as age, sex, and time from injury to first neurological examination (<72 h, 72 h to one week, greater than one week, or unknown). 3,[37][38][39] Jitter plots and propensity histograms were used to verify the distribution of propensity scores in each group. Sensitivity analyses were performed to control for any residual imbalance by (i) comparing the matched groups while adjusting for the matched variables using negative binomial regression; and (ii) comparing the NASCIS-II methylprednisolone group against the full cohort of unmatched potential controls while adjusting for the same variables and RHSCIR site using negative binomial regression. 40 Goodness of fit was confirmed using the Akaike information criterion and the Bayesian information criterion. Discrete variables are reported as counts or proportions, normally distributed continuous variables as means with standard deviations (SD), and skewed continuous variables as medians with interquartile ranges (IQR). We used parametric tests for data with normal distributions and non-parametric tests for data without normal distributions. 3,31 We compared unmatched groups with the independent samples t test, using Levene's test to assess the equality of variance, or the Mann-Whitney U test, and matched groups with the paired t test or the Wilcoxon signed-rank test. We used Pearson's χ2 or Fisher's exact test for categorical data, depending upon the sample size in each cell. Direct correlations were evaluated using Pearson's correlation coefficient. Participants with missing data were excluded from each analysis and imputations were not performed. 18,41 Extreme outliers were removed from each group when comparing lengths of stay. All tests of significance were two-tailed and p values of less than 0.05 were considered significant. All analyses were performed using R 3.
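To make the matching procedure concrete, the following is a minimal sketch of 1:1 propensity score matching followed by the paired Wilcoxon comparison described above. It is an illustration only, not the study's actual analysis code (which was written in R); the data-frame layout, the greedy nearest-neighbour pairing without a caliper, and all variable names are assumptions.

```python
# Minimal, hypothetical sketch of 1:1 propensity-score matching (the study used R;
# this only illustrates the procedure, not the authors' code).
# 'df' is assumed to have one row per patient, a binary 'treated' column,
# the matching covariates, and the change in total motor score 'tms_change'.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from scipy.stats import wilcoxon

def match_and_compare(df, covariates):
    # estimate each patient's probability (propensity) of receiving methylprednisolone
    X = pd.get_dummies(df[covariates], drop_first=True)
    model = LogisticRegression(max_iter=1000).fit(X, df["treated"])
    df = df.assign(ps=model.predict_proba(X)[:, 1])

    treated = df[df["treated"] == 1]
    controls = df[df["treated"] == 0].copy()

    pairs = []
    for _, row in treated.iterrows():
        # greedy nearest-neighbour match on the propensity score, without replacement
        j = (controls["ps"] - row["ps"]).abs().idxmin()
        pairs.append((row["tms_change"], controls.loc[j, "tms_change"]))
        controls = controls.drop(index=j)

    t, c = (np.array(v, dtype=float) for v in zip(*pairs))
    # paired, non-parametric comparison of motor-score recovery
    stat, p = wilcoxon(t, c)
    return t.mean(), c.mean(), p
```

In practice the choice of matching algorithm (greedy versus optimal, with or without a caliper) can change which controls are selected, which is one reason regression-based sensitivity analyses were also reported.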
Participants There were 2009 patients with acute TSCIs who consented to RHSCIR enrollment and were discharged to the community from acute care or inpatient rehabilitation (Fig. 1). Of these, we excluded 318 because their steroid administration status was indeterminate, 72 because they received dexamethasone, 5 because they received non-NASCIS-II methylprednisolone, and 14 because they received steroid regimens that were not further specified. In total, 46 consecutive patients were included who received the NASCIS-II regimen of methylprednisolone within 8 h of their acute injury, 5 of whom received the NASCIS-II regimen followed by an additional 24 h of methylprednisolone. There were 1555 included patients who received no steroid treatment. Of the 46 patients who received NASCIS-II methylprednisolone, 20 were enrolled between 2004 and 2006, 25 between 2007 and 2010, and one was enrolled between 2011 and March 2014. NASCIS-II methylprednisolone was initiated at least once at 7 of the 18 acute care RHSCIR sites, but 25 of the 46 patients who received NASCIS-II methylprednisolone did so at a non-RHSCIR community hospital prior to being transferred to a RHSCIR site. These patients received their NASCIS-II methylprednisolone prior to their baseline neurological examinations, which were performed upon arrival at the RHSCIR site. Baseline characteristics There were no significant baseline differences between the group of patients who received NASCIS-II methylprednisolone (n = 46) and the cohort of potential controls who received no steroid treatment (n = 1555) except that those who received NASCIS II methylprednisolone had a significantly longer time from injury to first ISNCSCI examination (median 72 vs. 56 h, p = 0.01; see Table 1). Propensity score matching Two of the 46 patients who received NASCIS-II methylprednisolone were excluded from the matched analysis because they had incomplete motor score outcome data. The remaining 44 were matched in a 1:1 ratio with controls who received no steroid treatment. The propensity score distributions within each group were similar (Fig. 2), and there were no significant differences in the proportions of patients with each combination of neurological level (cervical/thoracic) and ASIA Impairment Scale (A, B, C, or D), or any of the other baseline characteristics (Table 2). The median interval from injury to baseline neurological exam was 44 h (IQR 152) in the matched NASCIS-II methylprednisolone group and 31 h (IQR 170) in the matched no steroids group (p = 0.47), whereas the median interval from injury to final neurological exam was 127 days (IQR 142) in the matched NASCIS-II methylprednisolone group and 117 days (IQR 138) in the matched no steroids group (p = 0.78). Surgery was performed in 91% of the matched NASCIS-II methylprednisolone group and 82% of the matched no steroids group (p = 0.29). Motor score recovery There were no significant differences in motor recovery between the matched NASCIS-II methylprednisolone group and the matched no steroids group. There was also no significant difference in motor recovery when we performed sensitivity analyses to compare the matched groups while adjusting for the matched variables using negative binomial regression (Table 3), or when we compared the NASCIS-II methylprednisolone group against the full cohort of unmatched potential controls (n = 1555) while adjusting for the same variables and RHSCIR site (Table 4). When analyzing cervical and thoracic injuries separately, the methylprednisolone group and the matched groups had near identical mean motor score recovery. Using the Mann-Whitney U test to compare cervical patients treated with methylprednisolone versus matched patients, and thoracic patients treated with methylprednisolone versus matched patients, no significant differences were found. In the analysis of the full cohort of unmatched potential controls, cervical rather than thoracic injury levels (p < 0.01) and reduced baseline injury severity (ASIA Impairment Scale A, B, C, or D; p < 0.01) were each significantly associated with greater TMS recovery. Complications and length of stay The most common complications in either matched group were urinary tract infections, decubitus ulcers, and pneumonias. None of the patients in either group experienced in-hospital mortality and there were no surgical site infections. The NASCIS-II methylprednisolone group had a significantly higher rate of total complications (61% vs. 36%; p = 0.02), but there were no significant differences in the rates of specific complications between groups (Table 5). Patients in the NASCIS-II methylprednisolone group experienced a significantly shorter mean length of stay in acute care (34.4 days vs.
48.4 days; p = 0.02), but there were no significant differences in the lengths of stay at inpatient rehabilitation (106.7 vs. 117.9 days; p = 0.45) or the total lengths of stay, which is a combination of the acute care and inpatient rehabilitation lengths (mean 143.6 days vs. 152.9 days; p = 0.28). Discussion Using data prospectively collected in the RHSCIR, we performed a propensity-matched cohort study and found that the NASCIS-II regimen of methylprednisolone started within 8 h of injury did not improve motor recovery in comparison with no steroid treatment in patients with acute cervical and thoracic TSCIs. In a sensitivity analysis, cervical rather than thoracic injury level and reduced baseline injury severity were each associated with greater recovery. The NASCIS-II methylprednisolone group did not demonstrate a difference in motor recovery in cervical or thoracic patients when analyzed separately, but the methylprednisolone patients had a higher rate of total complications. There were no differences between groups for the rates of individual complications or for total length of stay. Strengths and limitations RHSCIR is part of the Translational Research Program of the Rick Hansen Institute, and it was created with the explicit purpose of facilitating clinical research to improve patient outcomes. Each data element was developed according to a priori research objectives and was standardized to optimize quality and accuracy 32 ; ISNCSCI motor scores for this study were collected by trained clinical research staff and were verified using a customized electronic algorithm. 42 Administration of the NASCIS-II bolus and infusion of methylprednisolone were confirmed to begin within 8 h of patients' injuries, as per this protocol. The timing of ISNCSCI examinations was not standardized, and differences in timing could have introduced bias in the results. Early baseline examinations risk confounding due to spinal shock, and delayed baseline examinations risk missing early recovery. For example, the median time from injury to baseline examination was longer in the methylprednisolone group, and those patients who received methylprednisolone prior to their baseline examinations could have experienced some neurological recovery that was not captured. Nonetheless, Marino and colleagues showed that delays in baseline examinations are of minimal importance as long as they are conducted within 7 days. 43 Neurological improvement may continue up to or beyond one year, 44 but Pollard and Apple reported that more than 70% of neurological recovery occurs before discharge from rehabilitation. 45 We identified only 46 patients who received the NASCIS-II regimen of methylprednisolone within 8 h of their injuries since 2005. This sample size is small and may limit confidence in our results, particularly the rates of complications. However, it is unlikely to reflect selection bias because RHSCIR includes all of the specialized acute care spine centers in Canada and methylprednisolone use is known to have sharply declined. 21,23 For comparison, it is worthwhile to note that the analyses of the NASCIS II motor score improvements reported in a 2012 Cochrane Review rely on only 65 patients who received the NASCIS II protocol within 8 h. 15 Our finding that the frequency of NASCIS-II methylprednisolone administration has decreased over time suggests that the NASCIS-II protocol has fallen into widespread disfavor in Canada. 
We excluded 318 patients whose steroid administration status was indeterminate because many of these patients received various steroid preparations peri-operatively for off-label neuro-protective indications, and we chose not to impute missing data in order to avoid introducing extra variability. 46 We also excluded patients who received steroid regimens other than NASCIS-II methylprednisolone in order to minimize confounding. 47 Propensity score matching is an analytical technique that pairs treated and untreated patients on the basis of their conditional probability of receiving an intervention according to a set of observed co-variates. 37,38 Propensity score matching is more efficient than conventional multivariable regression when there are large differences in important prognostic characteristics between treatment groups, but its validity depends on the appropriate selection of covariates, matching techniques, and methods of final data analysis. 39 Our propensity scores controlled for patients' neurological levels of injury and the baseline severity of their impairments, but our small sample precluded further differentiation according to high (C1-C4) versus low (C5-T1) cervical injuries or thoracic (T2-T10) versus thoracolumbar (T11-L2) injuries. 3 We were also unable to control for potential clustering due to local co-interventions at each RHSCIR site because more than half of the patients who received NASCIS-II methylprednisolone did so before arriving at a RHSCIR site. Propensity score matching cannot adjust for unknown confounders. 48 Our approach to collecting complications data according to ICD-10 codes from a national database is known to be at risk for underreporting, and ICD-10 codes may have been applied differently across the sites. Street and associates showed that nearly twice as many adverse events per person can be identified by prospectively applying the Spine Adverse Events Severity System. 36 Our use of a composite endpoint for total complications was justified because the component endpoints are likely to be of similar importance to patients, occurred with similar frequency, and are likely to share similar underlying biological plausibility. 49,50 The time from injury to first neurological examination was significantly longer in the group of patients who received NASCIS-II methylprednisolone in comparison with the larger cohort of potential controls who received no steroids, which may suggest that the patients who received NASCIS-II methylprednisolone had greater injury severity. However, we used propensity score matching and negative binomial regression to control for this potential confounder, and the times from injury to first neurological examination were not significantly different from those of the matched group of patients who received no steroids. There were also no significant differences between the matched groups for Injury Severity Score, Glasgow Coma Scale, ASIA Impairment Scale, or neurological level of injury. We prospectively verified whether the patients who received NASCIS II methylprednisolone did so within 8 h of their injuries, but it is possible that the effect of NASCIS-II methylprednisolone might further vary according to whether patients received it earlier or later within 8 h of their injuries. In NASCIS-III, the 24-h regimen of methylprednisolone begun within the first 3 h after injury was not as effective if its initiation was delayed until between 3 and 8 h.
26 Our study was not designed to investigate this issue, however, and we did not collect exact timing data to explore it. Surgical timing may be an important modifiable determinant of the outcomes in the management of patients with TSCIs. Decompression prior to 24 h was associated with improved neurological outcomes among RHSCIR patients with ASIA B, C, or D cervical, thoracic, or thoracolumbar injuries, 31 and it was also associated with improved outcomes in the Surgical Timing in Acute Spinal Cord Injury Study (STASCIS). 51 A multivariate analysis of the STASCIS data suggested that methylprednisolone could have a synergistic effect with early decompression, and the incidence of wound infections among patients who received NASCIS-II methylprednisolone was lower in STASCIS than in the NASCIS-II trial. 52 However, STASCIS included only patients with cervical SCIs, who were more likely to undergo anterior surgery rather than posterior surgery, which may explain the reduced infection rates. 20 It is unlikely that surgical timing was a confounder in our study because the difference in the timing of surgery between the matched groups was not significant. Relation to previous literature Our results support a considerable body of literature that fails to demonstrate a benefit attributable to methylprednisolone for neurological functional recovery in patients with acute TSCIs, and our study is the first to adjust for patients' neurological level of injury and the baseline severity of their impairments. 16,28,53,54 The original NASCIS-I trial found no significant differences in motor recovery at 6 months among 330 patients who were randomized to high- versus low-dose 10-day regimens of methylprednisolone, 55 the primary analysis of NASCIS-II found no significant differences in motor recovery at 6 months among 487 patients who were randomized to a 24-h regimen of methylprednisolone versus either naloxone or placebo, 13 and the primary analysis of NASCIS-III found no significant differences in motor recovery at 6 months among 499 patients who were randomized to 24 h of methylprednisolone, 48 h of methylprednisolone, or tirilazad. 26 A secondary analysis of 65 NASCIS-II patients who received methylprednisolone within 8 h of injury found that this subgroup experienced significantly improved sensory and motor recovery at 6 months. 13 More recently, Chikuda and colleagues compared methylprednisolone against no steroid treatment in a propensity-matched analysis of their nationwide administrative database in Japan. 56 They matched 824 pairs of patients with cervical SCIs and found significantly higher rates of major complications including respiratory complications, urinary tract infections, sepsis, gastrointestinal bleeding, and pulmonary emboli in patients who received high doses of methylprednisolone, as well as longer lengths of stay. Their study did not specify whether lengths of stay included inpatient rehabilitation, did not include motor scores, did not control for levels of injury or severity of impairment, and did not verify that all patients received the NASCIS-II regimen within 8 h of their injuries. Three other small randomized trials and several earlier observational studies have been previously reviewed.
16,28,53,54 Implications Evidence-based medicine describes the careful integration of patient preferences and clinician expertise with the best available external evidence to facilitate decision-making, and clinicians, researchers, and other evidence users should consider the totality of relevant evidence before applying results to patient care. 57 Meta-analyses are systematic reviews in which the results from similar studies are combined using statistical tests to produce pooled treatment effects, and they are powerful tools that can synthesize conflicting literature and evaluate bias. However, they require high methodological credibility in order to avoid misleading conclusions. 58 Bracken and Botelho and colleagues have each reported on meta-analyses that evaluate the effect of methylprednisolone against placebo in patients with TSCIs, but the conclusions from these studies are conflicting and each is limited by poor methodological credibility. 15,59 Neither ensured that the selection of studies was reproducible, neither explored possible explanations for between-studies differences in results, and neither study addressed the overall quality of the evidence or confidence in the pooled estimates. 58 According to the Grades of Recommendation, Assessment, Development and Evaluation (GRADE) approach, confidence in pooled effect estimates depends on study design, risk of bias, imprecision, inconsistency, indirectness, publication bias, and other factors. 60 An updated independent meta-analysis could help resolve any ongoing controversy, and the open release of individual participant data for this purpose would allow adjustments for the prognostic importance of patients' neurological levels of injury and the baseline severity of their impairments. [61][62][63] The clinical validation of novel interventions to treat patients with acute TSCIs remains an urgent ongoing research priority. 64,65 Randomized controlled trials are the most rigorous clinical research studies for investigating treatment effects and establishing causality, but their design and conduct for interventions in patients with acute TSCIs are challenging. The number of patients who might be eligible for enrollment at individual institutions is surprisingly small, and complex stratification is required to account for variability in baseline prognostic factors. 3,10 Multi-center trials can achieve sufficient power, but they require extensive coordination, collaboration, and resources. 66 Large observational studies can overcome some of these challenges, but they must also be appropriately designed and implemented in order to minimize bias. 48 The Joint Section on Spine and Peripheral Nerves of the American Association of Neurological Surgeons and Congress of Neurological Surgeons recommended against the routine administration of methylprednisolone for the treatment of acute TSCIs in 2013. 28 Their guidelines highlight that methylprednisolone is not approved by the Food and Drug Administration for use in TSCIs, there is no Class I or Class II medical evidence supporting clinical benefit, and there is Class I, II, and III evidence suggesting harmful side effects including death. The Canadian Neurosurgical Society, the Canadian Spine Society, and the Canadian Association of Emergency Physicians have previously contributed to position statements recognizing insufficient evidence to support the use of high-dose methylprednisolone in acute TSCIs.
21,29 Conclusions NASCIS-II methylprednisolone started within 8 h of injury did not improve motor score recovery in RHSCIR patients with acute cervical or thoracic TSCIs. These findings support guideline recommendations against its routine administration, and validate trends toward decreasing utilization. Clinicians, researchers, and other evidence users should consider these results in the context of a considerable body of evidence, and should recognize that patients' neurological levels of injury and the baseline severity of their impairments are important prognostic factors that warrant further consideration.
2016-05-12T22:15:10.714Z
2015-10-26T00:00:00.000
{ "year": 2015, "sha1": "494df788e316d38a9c6c01b32ca3cd93ff30d37a", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc4638202?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "494df788e316d38a9c6c01b32ca3cd93ff30d37a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257663679
pes2o/s2orc
v3-fos-license
A Cycle-Accurate Soft Error Vulnerability Analysis Framework for FPGA-based Designs Many aerospace and automotive applications use FPGAs in their designs due to their low power and reconfigurability requirements. Meanwhile, such applications also impose high standards on system reliability, which makes early-stage reliability analysis for FPGA-based designs very critical. In this paper, we present a framework that enables fast and accurate early-stage analysis of soft error vulnerability for small FPGA-based designs. Our framework first extracts the post-synthesis netlist from an FPGA design. Then it inserts the bit-flip configuration faults into the design netlist using our proposed interface software. After that, it seamlessly feeds the golden copy and fault copies of the netlist into the open source simulator Verilator for cycle-accurate simulation. Finally, it generates a histogram of vulnerability scores of the original design to guide the reliability analysis. Experimental results show that our framework runs up to 53x faster than the Xilinx Vivado fault simulation with cycle-level accuracy, when analyzing the injected bit-flip faults on the ITC'99 benchmarks. I. INTRODUCTION An FPGA usually contains a set of configurable logic blocks (CLBs) and programmable interconnect resources, making it possible to implement flexible digital systems in many areas such as aerospace, automotive, machine learning, and datacenters [1]. In order to have high density and fast configuration speed, the majority of FPGAs use SRAM (Static RAM) technology to store the configuration bitstream. However, SRAM-based FPGAs are vulnerable to radiation-induced soft errors [2], [3]. Soft errors may occur when a particle strikes a sensitive region of a transistor in an SRAM cell, resulting in an inversion of the logical value, which is called a bit-flip. Bit-flips occurring in the FPGA configuration memory remain unchanged during the following cycles. Such bit-flips are permanent and can cause failure in an FPGA-based design. Therefore, it is very important to analyze such bit-flips in FPGA-based designs and provide corresponding protection and/or correction, especially for highly reliable systems used in aerospace and automotive applications. Modern FPGAs often use error correction methods, such as scrubbing, to correct configuration memory bit-flips [1], [4]. As shown in Fig. 1, memory bit-flips are usually corrected using coding mechanisms, or golden models, together with partial reconfiguration [5], [6]. As a result, single event upsets (SEUs) need not be permanent and their consequences can be classified as a transient fault that lasts until a scrubbing occurs. Moreover, if a bit-flip effect is further masked in the circuit design, this error will not be propagated to the circuit outputs, and thus the effect of that bit-flip will be completely transparent to the system. Therefore, to implement a highly reliable design that still achieves high performance, it is essential to precisely analyze which parts of the FPGA-based design are most vulnerable to soft errors (bit-flips) and thus provide lightweight error protection and/or correction. As presented in Section II, there are many methods developed to analyze the reliability of FPGA-based designs. However, there is still a need for a fast, accurate, and open-source toolflow to perform early-stage analysis of soft error vulnerability for FPGA-based applications in the design phase.
In this paper, we present a framework that enables fast and cycle-accurate soft error vulnerability analysis of an FPGA-based design using the post-synthesis netlist as input. First, given an initial FPGA design or partial design in the early design stage, our framework extracts the post-synthesis netlist. Based on the extracted netlist, it automatically inserts the bit-flip configuration faults. Then it seamlessly feeds the golden copy and fault copies of the netlist into the open source simulator Verilator [7] to perform cycle-accurate simulation. According to the simulation results, it generates a histogram of vulnerability scores for each component in the original design. To demonstrate the usefulness of our framework, we use it to analyze the reliability of the widely used ITC'99 benchmarks [8] with injected bit-flip faults and show which components are more vulnerable to bit-flip errors. Compared to the Xilinx Vivado fault simulation, our framework runs up to 53x faster while achieving cycle-level accuracy. II. REVIEW OF SOFT ERROR ANALYSIS FOR FPGAS To analyze the vulnerability of soft errors for an FPGA-based design, a common technique (called fault injection) is to intentionally inject faults to disturb some parts of the design and analyze the probability of changes in its normal behavior [9]. There are many different means to insert artificial faults into an FPGA-based design, and we classify them as follows. 1. Using radiation techniques to induce upsets in an actual physical FPGA design [2], [10], [11]: These methods are fast and realistic. But they need a prototype version of the design and cannot be applied in early stages of the design flow. The lack of controllability of fault locations is another weakness. In addition, these methods destroy the equipment and are quite expensive. 2. Modifying configuration memory to force upsets [12]-[17]: Some studies try to modify an FPGA's configuration bitstream file directly, whereas others modify an FPGA's configuration memory at run-time using dynamic partial reconfiguration. According to the different configuration ports, these methods can be divided into two categories: external and internal fault injections. Note that they need the prototype version of the design. 3. Emulating faults in FPGAs [18]-[22]: These studies add some extra hardware (saboteurs) to the original design and run the modified version on the board to emulate the behaviour of bit-flips. Existing techniques in this subject manipulate the design's description in RTL (Register-Transfer Level) or post-implementation netlist. These methods need complex internal or external components for controlling the fault injection and inference processes. Therefore, they either need dedicated external hardware platforms or impose considerable area overhead. In addition, they need the prototype version of the design. 4. Analytical modelling that applies probabilistic and/or statistical analysis to model the behavior of fault masking in an FPGA design [23]-[27]: These methods often use worst case analysis. Hence they usually lead to inaccurate results. 5. Fault simulation that uses a software simulator to simulate the behaviour of memory upsets for a given design described in hardware description languages (HDLs) [19], [28]-[38]: These methods use either code modification to add mutants and saboteurs or simulator commands to change variable and signal values. Fault simulation offers important advantages over other techniques.
They have both high observability and controllability of the faults, and can be applied early at various levels of design abstraction. Hence they are very useful for reliability-aware CAD (computer-aided design) tool development. However, fault simulation methods suffer from long run-time. We summarize the fault injection techniques in Table I. Among these fault injection methods, fault simulation is a good choice to analyze the reliability of an FPGA design accurately at early design stages. On one hand, high-level fault simulation suffers from high inaccuracy compared to low-level fault simulation [33]. On the other hand, low-level fault simulation tools [38], [39] use event-driven simulation, and thus suffer from long execution time and large memory consumption. In this paper, we explore a cycle-accurate fault simulation framework to enable accurate yet fast soft-error vulnerability analysis for FPGA-based designs. We decide to explore the fault simulation at the post-synthesis netlist level, which can be considered precise because it provides accurate fault injection locations in an already optimized netlist. III. PROPOSED FRAMEWORK FOR SOFT ERROR VULNERABILITY ANALYSIS The overall structure of our proposed framework is presented in Figure 2. Given an input FPGA circuit described in VHDL or Verilog, it can automatically extract the post-synthesis netlist and insert bit-flips at the basic element (BEL) level, and seamlessly feed the golden and fault copies to the open source simulator Verilator [7] for fast and cycle-accurate simulation. Finally, it will generate a histogram of vulnerability scores for the input circuit. Next, we present more details of each component of our framework. A. Post-Synthesis Design Netlist Extraction During the synthesis stage, a digital circuit expressed in synthesizable VHDL or Verilog code is synthesized to a lower-level Verilog netlist, with the TCL (Tool Command Language) command 'write verilog' in a TCL script to automate the process. This exported Verilog netlist describes the digital circuit in terms of the FPGA's basic elements (BELs). B. Interface Software To seamlessly simulate the post-synthesis netlists in the open source simulator Verilator, we develop an interface software that connects them to facilitate the tool automation. The interface software is a C++ program that performs four major tasks. 1. The first task is to open the exported netlist, and add and/or remove some lines of Verilog code to obtain a synthesizable structural description that can be compiled by the Verilator tool. This task only includes the Verilog modules for the BELs in the structural description, generating the minimum code necessary for the Verilator compilation process. The generated Verilog file is going to be used in the fault injection campaign as the "golden copy". 2. The second task is to generate the input stimuli that will be used by the Verilator testbench during the fault injection campaign. The input stimuli can be generated in two different ways. First, for an extensive fault injection campaign, it generates all combinations of input stimuli. Second, when the extensive campaign is not feasible due to the long simulation time, it generates pseudo-random input stimuli. In the pseudo-random case, the C++ software uses the boost C++ library to generate pseudo-random integer numbers in a uniform distribution that are converted to their binary representation. 3. The third task is to automatically generate the testbench that wraps the C++ model of the Verilog structural description.
This testbench file, when executed, receives two parameters: the stimuli input file generated in the second task and the desired output file name to write the testbench results. The testbench algorithm can be summarized into three different stages. The first stage is the input stimuli read; the second is the input injection (including the clock event generation); and the last stage is reading the output from the Verilator simulator and writing the results to the output file. These three stages are repeated for the intended number of simulation cycles. 4. The fourth task is to generate all the faulty copies of the golden circuit by inverting only the logic function programmed at each lookup-table (LUT) for each file. Each copy has only one faulty component and, in this case, the total number of faulty copies is equal to the total number of LUTs. In our current tool, only one LUT function is inverted in each file. But it can be easily extended to support fault injection at multiple LUTs and other types of BELs, which is left for future work. C. Cycle-Accurate Fault Simulation Verilator is a cycle-accurate open source simulator [7]. Compared to an event-driven simulator, such as the Xilinx Vivado logic simulator, which executes processes based on event triggering, Verilator executes the whole design in a topological order at every simulation cycle. Thus, the number of simulator-specific variables is significantly reduced and the simulation time is much faster. In our framework, Verilator takes the Verilog description and compiles the code into a much faster optimized and optionally thread-partitioned model. This model is then wrapped into a C++ module, all using parameters similar to GCC or Synopsys's VCS that can be easily automated with Makefiles and the make command, available in any Unix and Unix-like operating systems. D. Vulnerability Score Histogram Generation To better guide the soft error vulnerability analysis, we also calculate a normalized vulnerability score for each LUT and generate a histogram of vulnerability scores for all the LUTs in the design. The vulnerability score for each LUT is calculated as follows. First, for each output bit of a LUT, we multiply 1) the error possibility based on simulation results by 2) its weight, which can be configured by users (the default weight value is 1 for all output bits). Second, we divide the sum of all products calculated in the first step by the sum of all weights of output bits to get the vulnerability score for that LUT. The histogram of vulnerability scores for all the LUTs demonstrates which LUTs are more vulnerable to bit-flip errors. IV. FAULT SIMULATION TOOL COMPARISON To demonstrate the performance gain of cycle-accurate simulation using Verilator in our framework, we compare it to the Xilinx Vivado ISIM simulator. Both simulators are evaluated with the widely used ITC'99 benchmark subset circuits [8]. Both simulators are parallelized using the make command with the -j option and use 12 cores. Ideally, each circuit should be simulated with all possible combinations of inputs. But as this number grows exponentially with the product of the number of input bits and the number of intended simulation cycles, we decided to use 1,024 pseudo-random input combinations due to the long run time. It already takes several hours to run the 1,024 input stimuli for the biggest circuit.
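As a rough illustration of how the interface software's stimulus generation (task 2) and faulty-copy generation (task 4) could be automated, consider the following sketch. The actual tool is a C++ program; the binary INIT-string convention, file naming, and helper names below are assumptions made only for illustration, not the tool's real implementation.

```python
# Hypothetical sketch of interface-software tasks 2 and 4 (the real tool is C++;
# file naming and the binary INIT convention below are assumptions).
import random
import re

def write_random_stimuli(path, n_inputs, n_vectors=1024, seed=0):
    """Task 2: emit one random bit vector (one bit per primary input) per line."""
    rng = random.Random(seed)
    with open(path, "w") as f:
        for _ in range(n_vectors):
            f.write("".join(rng.choice("01") for _ in range(n_inputs)) + "\n")

# Assumed convention: each LUT in the netlist carries an INIT attribute written as a
# binary string, e.g. (* INIT = "1011000110100101" *); inverting every bit of the
# string inverts the logic function programmed into that LUT.
INIT_RE = re.compile(r'INIT\s*=\s*"([01]+)"')

def write_faulty_copies(golden_path, out_prefix):
    """Task 4: one faulty netlist copy per LUT, with only that LUT's function inverted."""
    golden = open(golden_path).read()
    matches = list(INIT_RE.finditer(golden))
    for k, m in enumerate(matches):
        inverted = "".join("1" if b == "0" else "0" for b in m.group(1))
        faulty = golden[:m.start(1)] + inverted + golden[m.end(1):]
        with open(f"{out_prefix}_lut{k}.v", "w") as f:
            f.write(faulty)  # exactly one faulty LUT per copy
    return len(matches)      # number of faulty copies written
```

Because each faulty copy differs from the golden netlist in exactly one LUT's programmed function, comparing its simulation outputs against the golden run isolates the effect of a single bit-flipped LUT, which is the fault model used throughout this work.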
Table II presents the simulation performance comparison for the Verilator and Vivado simulators. As shown in column 5, the cycle-accurate Verilator simulator is from 9.7 up to 53.8 times faster than the Vivado ISIM simulator. This is because the Verilator simulator compiles the Verilog code into a much faster optimized model and does not model intra-cycle events. V. USE CASES OF OUR FRAMEWORK To demonstrate the usefulness of our framework, we conduct two case studies. A. Case Study #1: Cycle-Level Error Possibility In this case study, we demonstrate how one can use our framework to analyze the possibility that each LUT output bit can be affected by an input bit-flip error, cycle by cycle. We choose this granularity because sometimes a certain output bit may represent vital output information, for example, whether the circuit is ready or not ('1' or '0'). To calculate the error possibility for each output bit of a LUT, we simulate the circuit with each faulty input combination. If there is an error in the output bit (compared to the golden copy), its corresponding error counter (per output bit per cycle) is incremented by one. The corresponding error possibility is computed by dividing the total number of counted errors by the total number of simulation runs (i.e., input combinations). This process is repeated for every cycle. At the end, we generate an error possibility distribution for each LUT output bit cycle by cycle. As an example, let us consider the circuit b01 of the ITC'99 benchmark. Table III presents the detailed error possibility for each output bit of each LUT from cycle 1 to 4 by simulating the b01 circuit using our framework. Each row of the table presents the error possibility observed in each output bit of each LUT at each cycle. The 'Total' column presents the error possibility considering at least one error in the total run. The total error metric is adjustable so that application designers can create and utilize different approaches to calculating the total error value, as appropriate to their knowledge of the application. As shown in Table III, there is one LUT (FSM st [1]) that has no errors at the outputs. This is because the circuit topology has a reconvergence path. The fault propagates through two different paths and converges back to a specific component, where the bit flips twice resulting in an always masking condition. There are three LUTs that have a total error value of 100%. Two of them (overflw i 1 and outp i 1) are very close to the outputs and the third is the most significant bit of the circuit FSM (Finite State Machine) state register (the FSM st [2]). Such analysis gives designers some guidance on which LUTs they should pay the most attention to for better system reliability. This works well for small circuits that do not have too many LUTs, or a partial circuit that designers want to dive into for more details. Note that one can also perform such analysis per LUT, instead of per output bit of each LUT. B. Case Study #2: Histogram of Vulnerability Scores In this case study, we demonstrate how one can use our framework to analyze which LUTs are more vulnerable to bit-flip errors in a circuit at a higher level, by analyzing the histogram of vulnerability scores of all LUTs. For illustrative purposes, we consider that all LUT output bits have the same importance (i.e., same weight 1) for the overall circuit functionality.
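The two metrics used in these case studies, the per-cycle error possibility of case study #1 and the weighted, normalized vulnerability score defined earlier, can be made concrete with a short sketch. This is a hypothetical illustration; the array layout and function names are assumptions rather than part of the released framework.

```python
# Hypothetical sketch of the error-possibility and vulnerability-score metrics
# (for illustration only; not the released tool's code).
# 'golden' and 'faulty' are assumed to be arrays shaped (runs, cycles, bits),
# holding the sampled output bits of one LUT for every simulated input vector.
import numpy as np

def error_possibility(golden, faulty):
    g, f = np.asarray(golden), np.asarray(faulty)
    # fraction of runs in which the faulty copy differs from the golden copy,
    # reported per cycle and per output bit
    return (g != f).mean(axis=0)                      # shape: (cycles, bits)

def total_error(golden, faulty):
    g, f = np.asarray(golden), np.asarray(faulty)
    mismatch = (g != f).reshape(g.shape[0], -1)       # flatten cycles and bits
    # default "Total" metric: at least one mismatch anywhere during the run
    return mismatch.any(axis=1).mean()

def vulnerability_score(per_bit_error, weights=None):
    e = np.asarray(per_bit_error, dtype=float)        # one value per LUT output bit
    w = np.ones_like(e) if weights is None else np.asarray(weights, dtype=float)
    # weighted sum of per-bit error possibilities, normalized by the total weight
    return float((e * w).sum() / w.sum())
```

Binning the resulting per-LUT scores into 0.1-wide intervals (for example with numpy.histogram) then yields a histogram of the kind discussed below for Figure 3.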
We run the fault simulation for the ITC'99 benchmark subset circuits for 10 cycles to calculate the total error possibility for each LUT output bit as explained in case study #1. Designers can configure the weights in our framework and decide how many cycles to simulate when they have better knowledge of the application. Figure 3 presents the histograms for some circuits in the ITC'99 benchmark subset. Each sub-figure shows the number of LUTs (y-axis) that fit within a 0.1 normalized vulnerability score range (x-axis) for a circuit. As shown in Figure 3, for some circuits, the majority of LUTs have normalized vulnerability scores below or equal to 0.5, e.g., circuits b07, b08, b11, b12, b13 and b14. Such circuits are less vulnerable to bit-flip errors. For the small portion of LUTs that have normalized vulnerability scores above 0.5, one can afford to protect them using an expensive yet effective method like the TMR (Triple Modular Redundancy) technique [40]. On the other hand, some circuits, such as b06, b09 and b10, have a significant number of LUTs with normalized vulnerability scores above 0.5. Such circuits are more vulnerable to bit-flip errors and need more protection. In order to reduce the overhead of the error protection circuit, one may consider applying the more lightweight DWC (Duplication With Comparison) technique [41] or even redesigning the circuit to reduce the number of vulnerable LUTs. VI. APPLICATION FOR LARGE-SCALE CIRCUITS The proposed fault simulation is not limited to small circuits, but it can be challenging to perform the fault simulation on large designs due to the computational complexity involved. Large FPGA designs can have thousands or even millions of BELs, making it impractical to simulate every possible fault (even random faults) in the circuit. To address this issue, partitioning techniques can be used to divide the large design into smaller, more manageable parts that can be simulated individually via the proposed accurate method. The design can be partitioned based on various criteria, such as functional blocks, critical parts, or input/output interfaces. Each partition can then be simulated separately using the proposed cycle-accurate fault simulation techniques, and the results can be combined to obtain the overall reliability of the design. In addition, it can also help identify potential problems in specific parts (i.e., critical parts) of a large design, allowing for targeted optimization and testing. VII. CONCLUSION In this paper, we have presented a framework that enables fast and cycle-accurate analysis of soft error vulnerability for FPGA-based circuits in the early design stage. The fault model is at the basic element (BEL) level in an already optimized post-synthesis netlist. In our framework, a post-synthesis netlist is first extracted for a given FPGA design or partial design. Then, bit-flip errors are inserted at the BEL level. Both the golden copy and faulty copies run through the cycle-accurate simulator Verilator to identify the effects of faulty LUTs. By applying user-defined weights to each output bit, a histogram of vulnerability scores will be generated to guide designers to identify and rank the more vulnerable LUTs in the design. Two case studies have been conducted on the widely used ITC'99 benchmark circuits to demonstrate the usefulness of our framework: one on cycle-level error possibility analysis for each LUT output, and the other on the histogram of vulnerability scores of all LUTs.
In general, our framework is up to 53x faster than the Xilinx Vivado simulation, while achieving cycle-level accuracy.
2023-03-23T01:15:43.364Z
2023-03-22T00:00:00.000
{ "year": 2023, "sha1": "96fef40d4a19df11c3642d86ea62c85118bafc91", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "96fef40d4a19df11c3642d86ea62c85118bafc91", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
18976612
pes2o/s2orc
v3-fos-license
The Influence of Host Stress on the Mechanism of Infection: Lost Microbiomes, Emergent Pathobiomes, and the Role of Interkingdom Signaling Mammals constantly face stressful situations, be it extended periods of starvation, sleep deprivation from fear of predation, changing environmental conditions, or loss of habitat. Today, mammals are increasingly exposed to xenobiotics such as pesticides, pollutants, and antibiotics. Crowding conditions such as those created for the purposes of meat production from animals or those imposed upon humans living in urban environments or during world travel create new levels of physiologic stress. As such, human progress has led to an unprecedented exposure of both animals and humans to accidental pathogens (i.e., those that have not co-evolved with their hosts). Strikingly missing in models of infection pathogenesis are the various elements of these conditions, in particular host physiologic stress. The compensatory factors released in the gut during host stress have profound and direct effects on the metabolism and virulence of the colonizing microbiota and the emerging pathobiota. Here, we address unanswered questions to highlight the relevance and importance of incorporating host stress into the field of microbial pathogenesis. When modeling infection in small animals, we attempt to control as many variables as possible, such as breeding history, diet, housing conditions, etc. (Stappenbeck and Virgin, 2016). Nonetheless, unaccounted variability still exists frequently. Traditionally, experiments are designed to detect "between-group" differences while manipulating genes in the infecting agent or host. Yet rarely are "within-group" differences of infection rates or mortality accounted for among the treatments so long as the between-group differences are robust and statistically significant. What drives this heterogeneity of response within a highly homogeneously treated group in a highly controlled environment? Here, we posit that the degree of physiologic stress of an individual subject plays a key, and regularly dismissed, role in the variability of infection-related outcome. To fully match all animals in a study, hourly measurements of numerous parameters (e.g., sleep, hunger, fear, anxiety, handling, etc.) would be necessary and integrated responses over the course of the experiment would need to be calculated. This is obviously not routinely performed and would be costly, if not impossible to achieve. Yet in virtually every small animal experiment in which infection or mortality is used as the endpoint, there exists a high degree of variability in outcome that is rarely, if ever, reported or studied (Stough et al., 2016). What is often overlooked may be the emergent properties that develop in the infecting agent and the host as they interact with each other over the entire course of the host-pathogen relationship. Pathogen phenotypes are highly dynamic over the course of this interaction, as is the host physiologic response (hormones, cytokines, metabolome) before, during, and after the infectious inoculum is introduced. A complex molecular dialog develops as these two living organisms interact, exchange signals, and behave as one multi-cellular system (Rhee et al., 2009). Such dynamism will have a profound effect in shaping the social behavior of colonizing microbes. In order to model more precisely the host-pathogen interaction, reductionist experiments with small animal models (i.e., C. elegans) and laboratory pathogens are used (Yuen and Ausubel, 2014).
While much is to be gained from these reductionist models, they do not reflect some of the most challenging infections in humans, such as those that occur in modern intensive care units in the developing world (Rasigade et al., 2012). Patients in an ICU, for example, are highly traumatized by procedural medicine, cared for under the most physiologically stressful conditions, and confined to the most hostile microbial environment (Zakharkina et al., 2017). Such patients are regularly exposed to healthcare-associated pathogens that harbor unique antibiotic resistance patterns and highly virulent phenotypes (Busani et al., 2017). In addition, because of the promiscuous use of antibiotics to care for ICU patients, the protective action of the normal microbiota is essentially eliminated (Arrieta and Finlay, 2012). Hosts are vulnerable on two fronts: loss of the microbiome and the emergence of a virulent and resistant pathobiome. There is also evidence that physiologic or traumatic stress alone causes depletion of the host's intestinal microbiome by an unknown mechanism (Alverdy and Krezalek, 2017). Thus, at the same time that compensatory host-derived signaling molecules are released during stress, which shift the phenotype of the colonizing flora, the normal microbiota are collapsing in abundance and function (Hayakawa et al., 2011). Such a scenario begs investigators to understand the role of physiologic and traumatic stress on infection-related outcome beyond their direct effects on the immune system and to apply a more holistic, systems biology approach to model infection as it likely occurs in vivo (Figure 1). HOW DOES ACUTE HOST STRESS AFFECT THE ABUNDANCE AND FUNCTION OF THE MICROBIOTA? It is now well established that following a sudden insult to the host, such as acute trauma, myocardial infarction, or burn injury, the intestinal microbiota decrease in abundance and function by greater than ninety percent (Shimizu et al., 2015). This observation may play an unappreciated role in the general consensus that a stressed host is more vulnerable to infection (Guyton and Alverdy, 2016). The scope and molecular details by which physiologic stress interacts with the intestinal microbiota and causes immunosuppression remain incompletely elucidated. However, ongoing investigations are beginning to shed some light on the mechanisms. In hospitalized patients who are critically ill, we often see a near complete ecological collapse of their endogenous microbiota, which is likely the result of both the patient's active disease state and the selective pressure imposed upon them by modern intensive care efforts (Modi et al., 2014). Not only does the abundance of the microbiota become reduced in these patients, but low-diversity communities, often difficult to detect, tend to proliferate and are represented by highly resistant and virulent organisms such as Candida albicans, Enterococcus spp., Staphylococcus spp., and Enterobacteriaceae (Zaborin et al., 2014b). In one recent study by our group, Zaborin et al. (2014b) found that 30% of the critically ill patients had "ultra-low-diversity" microbial communities consisting of four or fewer bacterial taxa. One of the most obvious and intuitive drivers of this ecologic collapse is the profound selection pressure imposed by the promiscuous use of antimicrobial agents. Extensive work has been reported to understand the effects of antibiotics on the microbiota (Modi et al., 2014). 
In 2010, more than 70 billion individual doses of antibiotics were consumed worldwide (Blaser, 2016). Broad-spectrum antibiotics can impact up to 30% of the bacteria among the human microbiota, resulting in severe loss of taxonomic and functional diversity (Francino, 2016). This dramatic shift in the microbiota can develop immediately following antibiotic administration, and can sometimes last for years after its cessation (Jakobsson et al., 2010). The perturbation of the endogenous flora has been linked to many disease states, including obesity and autoimmunity (Francino, 2016). While the effects of antibiotics are well studied and appreciated, the microbial collapse associated with critical illness is much more profound and broad when compared to exposure to antimicrobials alone. [FIGURE 1 | The microbiome affects everything and everything affects the microbiome. Multiple converging lines of bidirectional signaling between the host and microbiota and between the microbiota and pathobiota demonstrate that host circumstances directly affect both the microbiota and the immune system.] Many forms of host stress, independent of antimicrobial administration, have been shown to affect the composition and function of the microbiota (Mackenzie et al., 2017). For example, in patients undergoing gastrointestinal surgery, the use of opioid analgesics, withholding of enteral nutrition, and gastric acid suppression have all been shown to have profound effects on the microbiome (Levesque et al., 2016). Reuland et al. (2016) reported that the use of antacids is associated with increased risk of carriage of extended-spectrum β-lactamase-producing Enterobacteriaceae. Even surgical procedures themselves, such as colonic resection and reconnection, can be associated with a 500-fold increase in the abundance of Enterococcus faecalis. This dynamic reality of microbiome stability further highlights the importance of understanding the complex host-microbiota interaction. HOW DOES HOST STRESS ACTIVATE PATHOGENS TO CAUSE INFECTION? Attempts to elucidate the mechanistic details of this microbial shift have aimed mainly at the hypothesis that host stress causes immunosuppression (Vanzant et al., 2014). Less well explored, however, is the possibility that host stress diminishes the protective intestinal microbiota, in both abundance and function, and that host stress signals activate colonizing "pathobiota" to express enhanced virulence (Lupp et al., 2007; Alverdy and Krezalek, 2017). It could be postulated that the intestinal microbiota "sense" that the host is under duress and decrease their growth rate and metabolism, both anticipating that resources will be limited and that the host cannot tolerate activation of its immune system by the intestinal microbiota (Babrowski et al., 2013). Alternatively, and in concert with this mechanism, could be the activation of intestinal antimicrobial peptides via IL-22, which is known to be elevated following traumatic and physiologic stress (Bingold et al., 2010; Rendon et al., 2013; Behnsen et al., 2014). In this way, the host keeps its intestinal microbiome "at bay" until recovery is established and homeostasis returns. The temporal dynamics of this response, the period of diminution, the refaunation process, and the species and community structure involved in this response remain to be clarified. 
Some elucidation of this mechanism has been reported with the foodborne pathogen Salmonella; importantly, however, no host stress was imposed in the experimental model (Behnsen et al., 2014). Although such elegant and insightful models of Salmonella inform the mechanisms of its pathogenesis, they fall short in explaining why most humans exposed to the pathogen never develop an infection (Barak et al., 2009; Spencer et al., 2010). Several key questions remain unanswered. What are the mechanisms by which ingested isolates shift their phenotype to adapt to their new environment so they can express virulence factors that allow them to induce host cytokines (i.e., IL-22) that eliminate the microbiota? Do humans (and mice) who are stressed release host stress-derived compensatory factors that induce Salmonella to express these virulence factors that then determine if and how infection occurs (Krueger and Opp, 2016; Poroyko et al., 2016)? The last is a particularly important question given that we know that host stress depletes the microbiota, activates IL-22 (Bingold et al., 2010) (which can further deplete the microbiota), releases cytokines that directly signal bacteria to activate their quorum sensing circuits (Wu et al., 2005), and diminishes local resources (i.e., phosphate) in the local milieu (Long et al., 2008; Rendon et al., 2013). [FIGURE 2 | Rate and degree of microbiota refaunation on recovery from severe catabolic stress such as following human critical illness (Guyton and Alverdy, 2016; Alverdy and Krezalek, 2017). (A) Demonstrates the typical response to successful modern medical care with limited antibiotic exposure and rapid resolution of the infection or injury (Modi et al., 2014). (B) Represents the aging patient population with multiple exposures to western diet, smoking, etc., who have fragile microbiomes that cannot recover when invasive surgery and toxic agents are used to treat complex disorders such as cancer (Shakhsheer et al., 2016).] Through a process recently termed "telesensing" (Roux et al., 2009), certain bacteria can not only sense their population density via quorum sensing, but can also detect and respond to host stress-derived signals such as opioids, cytokines, end-products of ischemia, immune cell environments, etc., that are unique to host tissues exposed to stressful conditions (Sansonetti, 2004). This type of interkingdom signaling has traditionally received little attention in the microbial pathogenesis field (Kendall and Sperandio, 2016). While certain physio-chemical cues, such as pH, redox state, phosphate, etc., are well known to influence bacterial virulence activation, an emerging area of interest is how host stress-derived compensatory "cues" drive colonization, invasion, virulence activation, and ultimately the continuum of infection from symptom development to lethality (Mekalanos, 1992). We and others have described many of these host-stress compensatory elements, the receptors on bacteria to which they bind, and the downstream pathways that become activated leading to a shift in virulence (Seal et al., 2010). For example, the Gram-negative pathogen Pseudomonas aeruginosa can detect host physiologic disturbance by sensing opioids in the host environment and, in response, activate its quorum sensing virulence machinery. This process involves a complex and constant dialog between the pathogen and its host (Alverdy et al., 2000). 
The host secretes factors in response to microbial presence; the microbe in turn detects these signals and adjusts its virulence accordingly (Patel et al., 2007; Zaborin et al., 2014a). Many commonly encountered bacterial virulence mechanisms are subject to this additional level of host-derived signaling: biofilm formation, swarming, luminescence, toxin production, etc. (Palmer and Blackwell, 2008). Host-microbe interkingdom signaling and telesensing are not novel developments. Because the microbiota and its human host co-evolved over tens of thousands of years, an elaborate signaling system exists between them (Sansonetti, 2004). It is well known that host catecholamines released during stress can induce bacterial growth, enhance colonization of host tissue, and upregulate virulence (Freestone and Lyte, 2008). In addition, the human "gut-brain axis" is an active area of investigation. We are just beginning to appreciate the role that the microbiota plays in the development of the human nervous system (Lyte, 2013). So far, we know that this gut-brain axis is a bidirectional dialog involving neural (e.g., GABA), endocrine (e.g., amines), immune, and humoral signals (Carabotti et al., 2015). In addition to host-produced signals, release and sequestration of inorganic compounds, such as phosphate, copper, and iron, have all been implicated in this host-microbe interkingdom signaling (Schaible and Kaufmann, 2004; Zaborin et al., 2014a). These complex mechanisms of communication help to maintain the mutualistic human-microbiome relationship, and are the product of millennia of co-evolution. As such, the occurrence, course, and outcome of infection may be highly influenced by the degree of host stress, not only because stress has a direct effect on immune function, but because physiologic stress has a direct effect on bacterial behavior. In the context of human infection, rarely, if ever, is host stress adequately incorporated into experimental models. Host genes are manipulated, as are microbial genes, and pathogenicity is described. However, a major flaw in this approach is the dismissal of the "within-group" variability in infection occurrence and outcome that may be the most informative of the host-pathogen dialog that must first occur for the process of infection to be initiated. As can be seen in Figure 1, converging lines of host-pathogen interactions make it extremely challenging to organize and study such a dynamic and fluid system in the context of a critically ill patient. It may be for this reason that no new therapies for sepsis in the critically ill have emerged in decades. Yet understanding how the microbiota collapse following host stress, how the pathobiota emerge to achieve a new state of equilibrium with the host, and whether the resilience of the host to achieve recovery depends on the ability of the microbiota to refaunate, remains a challenging but important line of inquiry (Sansonetti, 2004). IS HOST RECOVERY FROM STRESS DEPENDENT ON THE ABILITY OF THE MICROBIOTA TO REFAUNATE? While resilience to host injury and recovery from infection are generally attributed to a robust host immune clearance mechanism, emerging knowledge in microbiome science suggests that the intestinal microbiome plays a key role in driving a recovery-directed immune response (Shen et al., 2012). 
As described above, when a human is injured, both from the injury response itself and from its treatment by modern medicine, the intestinal microbiome can collapse in abundance and function (Shimizu et al., 2006). Yet as injuries are repaired and infections cleared with antibiotics, the ability of the host microbiome to refaunate is often considered to lag behind recovery, rather than to drive it (Jakobsson et al., 2010). However, equally plausible is the possibility that a previously healthy host (no smoking, limited previous antibiotic use, lean diet, regular exercise) may have a capacity to refaunate his/her microbiome to a greater degree than a previously unhealthy patient (Carlisle and Morowitz, 2011;Cosnes, 2016;Munck et al., 2016). The dynamics of refaunation and its correlation to recovery is poorly explored, however, with sequencing and metabolomics becoming more widely available and less costly, this can now be determined. Enhancing the refaunation process with fecal transplantation alongside therapies that are highly catabolic (bone marrow and solid organ transplantation) are underway and may further reinforce the plausibility of this concept (Kazerouni and Wein, 2017). The near disappearance of the intestinal microbiome following severe catabolic stress and injury, while adaptive prior to modern medicine, may be considered maladaptive in the present era, where highly toxic and invasive therapies (chemotherapy, radiation, severe burn injury) are needed to treat life-threatening diseases (Jenq et al., 2012;Pamer, 2016;Taur and Pamer, 2016). Figure 2 depicts our theory involving the uncharted space in the intestinal tract that may play an unappreciated role in recovery from severe host stress. CONCLUSION Pathogens bring their own unique life histories when they colonize or infected a new host. The complex dynamics of physiologic stress in the host drives these pathogens, and the microbial communities in which they co-exist, into a pathoadaptive process where genes are lost and found, and where new phenotypes emerge. Under such circumstances, emergent phenotypes among the colonizing pathobiota increase in frequency and compete for colonization sites and local resources. As stress becomes a persistent state and antibiotics are added to treat infections, microbial evolution speeds up as the emergent "pathobiome" enters an evolutionarily uncharted environment. As these pathobiomes compete for fixation niches, they become hidden from clinicians in protected sites where they do their dirty work at arms' length from the immune system. Uncovering the dynamics of this host-pathogen interactome and the sites in which it occurs will lead to novel lines of inquiry and hypotheses to explain more completely the occurrence, course, and outcome of life-threatening infections that develop in the critically ill and all around the world.
2017-05-04T00:08:46.492Z
2017-03-02T00:00:00.000
{ "year": 2017, "sha1": "45f4157e1948cded9965df503061b614475225ec", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2017.00322/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "45f4157e1948cded9965df503061b614475225ec", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
265482605
pes2o/s2orc
v3-fos-license
Anatomo-Radiological Correlation Comparing Anatomopathological Data from Radical Prostatectomy and Multiparametric Prostate MRI Abstract INTRODUCTION Prostate cancer is the most frequently diagnosed cancer in Morocco, after lung cancer [1]. Diagnosis is based on rectal examination, PSA measurement, and ultrasound-guided biopsy. Despite the reliability of this approach, it has limitations that can lead to unwarranted invasive diagnosis and treatment. So, to characterize the clinical behavior of CaP, from silent to invasive and aggressive tumors, other diagnostic means can be employed for better management. Multiparametric magnetic resonance imaging (mpMRI) could meet this need, with its ability to detect the most aggressive cancers. The American College of Radiology (ACR) has therefore developed a score, PIRADS, to help improve early diagnosis of clinically significant prostate cancer and reduce unnecessary biopsy and treatment of benign and sub-clinical tumors. Histopathological study of the prostate after radical prostatectomy has undeniable predictive value. We might therefore ask whether correlating anatomopathological data from radical prostatectomy with those from multiparametric prostate MRI would be a prognostic factor that would enable better stratification and management of CaP. MATERIALS AND METHODS This is a retrospective study including 44 patients collected in the urology department of the Hôpital Militaire d'Instruction Mohammed V de Rabat (HMIMV) over a 22-month period, from January 2020 to October 2021. The following data were collected: (1) clinical data: patient age, PSAt, PSAl; (2) mpMRI data: prostate volume, lesion size, lesion side, lesion location, number of lesions, PIRADS score; (3) approach; (4) histopathological specimen data: tumour volume, histological type, Gleason score, extracapsular extension according to side of extension, positive margins, Gleason grade at margins, peri-neural invasion, lympho-vascular invasion, seminal vesicle invasion, lymphatic invasion during lymph node dissection; (5) PI-RADS and Gleason scores. Inclusion criteria: all patients who had mpMRI prior to radical prostatectomy. Exclusion criteria: all patients with radical prostatectomy without mpMRI, and all patients with mpMRI but without a PIRADS score. Study population: of 55 files, we selected 44 according to the above criteria. The literature search was carried out using the following library databases: PubMed, ScienceDirect, and ClinicalKey, with the following keywords: prostate cancer, prostate mpMRI, radical prostatectomy, PIRADS score. Data entry and analysis were carried out using SPSS 23 for IOS (IBM Corporation, Armonk, New York, U.S.). Two types of analysis were chosen for the data analysis. Univariate analysis: categorical variables were described in terms of numbers and percentages, and the comparative study was carried out using Pearson's chi-square method of comparing percentages, or Fisher's exact method (in cases where the expected number of participants was less than 5 for the chi-square method). A P < 0.05 was considered statistically significant. 
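For readers who wish to reproduce this kind of categorical comparison outside SPSS, the sketch below illustrates the analysis described above: Pearson's chi-square test on a contingency table, with a fall-back to Fisher's exact test when an expected cell count is below 5. It is a minimal Python/SciPy sketch; the 2×2 table, the grouping of PI-RADS into low and high categories, and the counts themselves are illustrative assumptions and are not taken from the study's tables.

# Hypothetical sketch of the categorical analysis described above: Pearson's
# chi-square test on a contingency table, falling back to Fisher's exact test
# when any expected cell count is below 5. The counts are illustrative only.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Rows: PI-RADS 2-3 vs PI-RADS 4-5; columns: extracapsular extension absent/present.
table = np.array([[22, 2],
                  [10, 10]])

chi2, p_chi2, dof, expected = chi2_contingency(table)

if (expected < 5).any() and table.shape == (2, 2):
    # Small expected counts: use Fisher's exact test instead (2x2 tables only).
    _, p_value = fisher_exact(table)
    method = "Fisher's exact test"
else:
    p_value = p_chi2
    method = "Pearson's chi-square test"

print(f"{method}: p = {p_value:.4f} "
      f"({'significant' if p_value < 0.05 else 'not significant'} at alpha = 0.05)")

Run on real data, the same pattern of expected-count checking and test selection applies to each cross-tabulation of PI-RADS against a histopathological factor.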
RESULTS Review of our archive over a 22-month period, from January 2020 to October 2021, yielded 44 radical prostatectomy files meeting the predefined inclusion criteria. The mean age of this population was 65.7 ± 6.31 years, with a maximum of 72 years. The mean PSAt was 10.1 ng/ml, with 59% of patients having a PSAt between 4 and 10 ng/ml. PSAt > 20 ng/ml was found in only 5% of patients. Analysis of mpMRI reports revealed a mean prostate volume of 44.6 ± 7.6 ml. Tumor size was between 2.1 and 3 mm in half (50%) of patients, and rarely > 3.1 mm (2%). PIRADS 3 was in the majority with 47.7%, followed by PIRADS 5 with 25%. PIRADS 2 and 4 were in the minority, with 6.8% and 20.45%, respectively. Table 1 summarizes all our patients' clinico-radiological data. Laparoscopy was standard except in cases of contraindication or technical difficulty. Laparoscopy was performed in two-thirds of our patients (68%), divided between the transperitoneal route (45.4%) and the subperitoneal route (22.7%). Gleason 7 was the predominant score, found in 62.4% of patients, followed by Gleason 6 in 21.1% of patients and Gleason 9 in 11.2%. Finally, Gleason 8 was in the minority, representing only 4.1% of patients. Extracapsular extension was found in 27.2% of patients, with no predominance of one side over the other, while positive margins were present in 34.5%. Tumor size greater than 1 mm accounted for 62% of cases. Gleason grade at the margins is often unavailable. Pejorative factors such as peri-neural invasion were present in 72.7% of patients. In contrast, lympho-vascular invasion was rarely found, in only 4.5% of cases. Invasion of the seminal vesicles was present in only 11.3% of cases, with no predominance of one side over the other. Lymph node dissection was performed in 56.8% of patients, with positive lymph node involvement in 4%. All these histological data are listed in Table 2. Univariate analysis showed that the post-operative Gleason score for PIRADS 3 and 5 lesions was statistically different, with a significant p-value (p < 0.005), as was that for PIRADS 4 and 5 lesions. There was also a significant correlation between PIRADS and other factors such as extracapsular extension, lymphovascular invasion, and seminal vesicle invasion (p < 0.001, 0.032, and 0.007, respectively). However, this correlation was not found with age, surgical margins, peri-neural invasion, or the number of positive lymph nodes at lymph node dissection. These data are summarized in Table 3. Multivariate analysis of the correlation of PIRADS with these different histopathological factors clearly shows a correlation of high PIRADS with high Gleason score, extracapsular extension, and seminal vesicle invasion. A summary of this correlation with histoprognostic factors is given in Table 4. 
DISCUSSION Conventional diagnostic tools such as digital rectal examination (DRE), PSAt (prostate-specific antigen), and transrectal prostate ultrasound (TRUS) helped detect the disease, but with no distinction between significant and non-significant cancer. In fact, the consequence was over-diagnosis and over-treatment. In other words, cancers were diagnosed that should not have been detected, and others were operated on that should not have been treated. mpMRI has become an essential element in the management of prostate cancer, acting as a filter that detects significant cancer at risk of progression and requiring treatment. However, its role is still expanding, as it moves beyond the traditional diagnostic framework towards another, that of predicting histoprognostic factors. According to our study, the histoprognostic factors that correlate with PI-RADS are: Gleason score, extra-capsular extension, invasion of the seminal vesicles, and lymphovascular invasion. The histopathological data obtained from the surgical specimen and used to stratify patients into groups at risk of recurrence and/or specific survival are the Gleason score, pathological stage, and status of surgical excision margins. These criteria are widely validated in the literature, and are currently included in the tables of Partin et al. and the nomograms of Kattan et al. [2]. Assessing the status of the limits of resection is an important step. It is based on the presence or absence of tumour in contact with the indelible ink. This status is an independent prognostic factor, predictive of both local and systemic recurrence. However, the impact of positive surgical margins on specific survival remains difficult to assess, as it depends on other prognostic factors and on the initiation of adjuvant or salvage therapy [3]. Tumour volume at the margins and the number of positive sites are only weak prognostic elements and should be mentioned by uro-anatomopathologists. On the other hand, the location of the positive margin does not have an independent prognostic impact, but it remains useful information for urologists to improve their surgical technique. In our series, analysis of the location of the margins did not reveal any particular pattern. It is legitimate to say that a poorly differentiated residual tumour is more likely to progress than a well-differentiated tumour, but this logical link has not been clearly demonstrated by studies and remains optional in prostatectomy specimen reports [4]. In our study, positive margins were seen in 15 patients, corresponding to a rate of 34.5%, which is in line with international centers (33.5%) [3]. In our cohort, 82.9% of positive margins were found in patients with PIRADS 4 and 5, while 17.1% of positive margins were found in patients with PIRADS 2 and 3. However, in univariate and multivariate analysis, no significant relationship was found between high PIRADS and the risk of positive margins. Nonetheless, a prospective study including 154 patients has shown that PIRADS can help in decision-making regarding the extent of resection during radical prostatectomy without increasing the risk of positive surgical margins [5]. Involvement of the seminal vesicle is an important histoprognostic factor with a direct impact on the management of CaP. It enables us to distinguish localized from locally advanced cancer, and thus to assess the risk of recurrence. The study by Kwong Kim et al. on the prognostic value of seminal vesicle invasion on preoperative mpMRI retrospectively analyzed data from 159 patients and found a direct relationship 
between seminal vesicle invasion and biochemical recurrence (p = 0.049) [6]. A correlation between PI-RADS and seminal vesicle invasion is therefore relevant to our study. Analysis of our data shows a direct relationship between high PI-RADS and seminal vesicle invasion (p = 0.007). The De Cobelli et al. study, which included 223 patients, found no relationship between PI-RADS and seminal vesicle invasion (p = 0.41) [7]. Another retrospective Korean study (Lim et al.), published in 2021 in the Scandinavian Journal of Urology, included 569 patients and corroborated our study by finding a direct relationship between PI-RADS and seminal vesicle invasion (p < 0.001) [8]. The Gleason score is one of the most important histoprognostic factors. Adenocarcinomas form a broad spectrum of lesions, ranging from very well-differentiated to clinically significant, poorly differentiated cancers. The higher the Gleason score, the more severe the prognosis in terms of biological progression. The relationship between PI-RADS and Gleason score is therefore of prime importance in demonstrating the relationship between mpMRI and pathology data. In our series, the Gleason 7 score was the most represented at 62.7%, and was evenly distributed across all PIRADS scores, apart from a non-significant increase for PIRADS 3. Gleason 6 was seen mainly (70%) in patients with a low PIRADS score of 2 or 3. However, Gleason scores 8 and 9 were seen exclusively in patients with a high PIRADS score of 4 or 5. Univariate and multivariate analyses confirm the correlation between a high PIRADS score and a high Gleason score and vice versa, with a significant p-value < 0.001 (95% CI 0.821-1.179). The Sahin et al. study, which examined the relationship between mpMRI and histopathology prior to radical prostatectomy, retrospectively pooled data from 93 patients and found a significant relationship between PIRADS and Gleason score (P < 0.001), in line with the results found in our study [9]. A multicenter American meta-analysis, involving 3349 patients and aimed at demonstrating the PPV of PI-RADS for the detection of high-grade prostate cancer, presented a result that was low and varied considerably from center to center [10]. This may be due to inter-reader variability among pathologists and radiologists, or to false negatives. The value of PI-RADS lies in its ability to distinguish clinically significant cancers. A retrospective study of 56 patients sought to demonstrate the role of PI-RADSv2 in patients with Gleason 6 (3+3) biopsy. It demonstrated that PI-RADSv2 and the measurement of periprostatic fat using mpMRI can be correlated with pathological upgrading on the radical prostatectomy specimen and, consequently, accurately identify and monitor patients who are candidates for active surveillance [11]. Extracapsular extension of prostate cancer is a poor prognostic factor associated with progression, post-treatment recurrence, and increased prostate cancer mortality. Accurate staging prior to radical prostatectomy is crucial in deciding whether or not to preserve the neurovascular bundles and possibly avoid positive margins. In our study, 12 patients, i.e., 27.2%, presented with extracapsular extension. This value is in line with that of other international studies (32.4%) [12]. We found that 95.1% of patients with extracapsular extension had PI-RADS 4. 
Uni- and multivariate analysis of these data revealed a clear relationship between extracapsular extension and PI-RADS (p < 0.001). This is in line with the retrospective study by De Cobelli et al., which included 223 patients and showed a correlation between PI-RADS and extra-capsular extension (p < 0.001). Another prospective study involving 154 patients also corroborates ours, with a relationship between PI-RADS and extracapsular extension (p < 0.05) [5]. Perineural invasion corresponds to isolated colonization of a nerve located in the periprostatic space, without invasion of the fat surrounding this nerve section [2]. It is predictive of lymphatic and vascular spread and can therefore be considered a poor prognostic factor. Perineural invasion is not one of the prognostic factors that PI-RADS can highlight in our study (p = 0.379). This is in line with the prospective study of the impact of uni- or multifocal perineural invasion in prostate cancer during radical prostatectomy, which included 288 patients and found no correlation between PI-RADS and perineural invasion (p = 0.258) [13]. It has been reported that lymphatic metastasis frequently indicates a poor prognosis and increases the postoperative probability of biochemical recurrence. To our knowledge, lymph node dissection is the most direct and standard method for determining the presence of lymphatic metastases [14]. This raises the question of whether PI-RADS would enable a better assessment of lymphatic invasion. In our study, PI-RADS was associated with the risk of lymphovascular invasion (p = 0.032), but not with the number of positive lymph nodes at lymph node dissection (p = 0.611). This contrasts with the Chinese study retrospectively pooling data from 316 patients with T2N0M0 disease and a Gleason score ≥ 3, which asserts that PI-RADSv2 was relevant in predicting the number of positive nodes (p < 0.001) [14]. The limitations of our series are its retrospective nature, the small number of patients included in the study, and the lack of centralized reading of the mpMRI, which presents a real problem in view of the inter-reader variability proven in the literature. However, some authors report that this bias has a minimal impact on the results [15]. The small sample size of this series also explains the chance absence of PIRADS 1 lesions in our series.
2023-11-29T16:19:23.929Z
2023-11-24T00:00:00.000
{ "year": 2023, "sha1": "8df209ac458bd5836c8aaaf61eea874bba379cc2", "oa_license": null, "oa_url": "https://doi.org/10.36347/sasjm.2023.v09i11.022", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "4038f8390d8bdb6421895a8786ab331043fc105a", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [] }
13971711
pes2o/s2orc
v3-fos-license
Ribosome protection by antibiotic resistance ATP-binding cassette protein Significance ARE ABC-F genes have been found in numerous pathogen genomes and multi-drug resistance conferring plasmids. Further transmission will challenge the clinical use of many antibiotics. The development of improved ribosome-targeting therapeutics relies on the elucidation of the resistance mechanisms. Characterization of MsrE protein bound to the bacterial ribosome is first of its kind for ARE ABC-F members. Together with biochemical data, it sheds light on the ribosome protection mechanism by domain linker-mediated conformational change and displacement leading to drug release, suggesting a mechanism shared by other ARE ABC-F proteins. These proteins present an intriguing example of structure-function relationship and a medically relevant target of study as they collectively mediate resistance to the majority of antibiotic classes targeting the peptidyl-transferase center region. M ore than one-half of the antibiotics in clinical use target bacterial ribosome and protein synthesis, particularly the elongation step (1). The peptidyl-transferase center (PTC) and the adjacent nascent peptide exit tunnel (NPET) in the ribosomal large subunit are the key players in protein elongation, with functions in catalyzing the peptide bond formation and the emergence of the nascent chain, respectively. PTC-targeting antibiotics, such as chloramphenicol, group A streptogramins, lincosamides, and pleuromutilins, inhibit protein synthesis by interfering with the correct positioning of the tRNA substrates (1,2). In contrast, macrolides and group B streptogramins bind to a site within the NPET adjacent to the PTC and immediately before the constriction point at which ribosomal proteins (r-proteins) L4 and L22 narrow the tunnel width to approximately 10 Å (3,4). The primary mechanism of macrolide action is believed to be the context-specific inhibition of peptide bond formation rather than the indiscriminate obstruction of nascent chain passage through the NPET. Namely, ribosome-profiling analyses have revealed that translation of most genes proceeds past the first six to eight codons and can be arrested at any point during the translation when the ribosome encounters specific short-sequence motifs (5)(6)(7). The problematic sequence motifs are confined to the nascent peptide residues in the PTC, not the peptide segment in contact with the macrolide further down the NPET (5). Therefore, it appears that the general mode of macrolide action involves selective inhibition of peptide bond formation between specific combinations of donor and acceptor substrates. In some cases, the macrolide-induced and leader peptide-mediated translational arrest is used to regulate the expression of downstream macrolide resistance methyltransferase genes, such as ermB and ermC (8)(9)(10)(11)(12). Structural characterization of the erythromycin-ErmBL leader peptide-ribosome complex reveals that the drug redirects the path of the peptide in the tunnel and leads to conformational changes in PTC and tRNA substrates unable to participate in the peptide bond formation that underlies translation arrest (10,11). However, certain oligopeptides are believed to lead to drug resistance by "flushing out" the macrolides while passing through the NPET (13,14). Therefore, the fate of the macrolide-bound ribosome is determined by the dynamic interactions among the bound drug, the PTC, and the sequence specificity of the emerging oligopeptide chain (5,15). 
It should be noted, however, that macrolides could induce ribosomal arrest by allosterically altering the PTC even without forming significant contacts with the nascent chain, demonstrating the existence of a functional link between the NPET and the PTC (16). A wide range of mechanisms mediate antibiotic resistance, one of the greatest threats to public health and food security worldwide. In the case of macrolides, various bacterial species are intrinsically insensitive due to chromosomal mutations in ribosomal genes causing reduced drug-binding efficiency (17,18). The most common acquired resistance mechanism is the posttranscriptional methylation of the 23S rRNA by methyltransferases (e.g., Erm family), which also results in decreased drug-binding efficiency (1, Significance ARE ABC-F genes have been found in numerous pathogen genomes and multi-drug resistance conferring plasmids. Further transmission will challenge the clinical use of many antibiotics. The development of improved ribosome-targeting therapeutics relies on the elucidation of the resistance mechanisms. Characterization of MsrE protein bound to the bacterial ribosome is first of its kind for ARE ABC-F members. Together with biochemical data, it sheds light on the ribosome protection mechanism by domain linker-mediated conformational change and displacement leading to drug release, suggesting a mechanism shared by other ARE ABC-F proteins. These proteins present an intriguing example of structure-function relationship and a medically relevant target of study as they collectively mediate resistance to the majority of antibiotic classes targeting the peptidyl-transferase center region. 19). Ribosome protection mechanism, by which the drug is actively removed from the ribosome, has recently become of great interest. The ATP-binding cassette F (ABC-F) family proteins confer resistance to a number of clinically relevant antibiotics targeting the ribosome PTC/NPET region (20). These proteins are collectively referred to as antibiotic resistance (ARE) ABC-F proteins (21). Unlike other ARE ABC family members that are shown to actively pump drugs out of the cells, ARE ABC-F proteins lack the transmembrane domain characteristic to transporters and are believed to confer antibiotic resistance via ribosomal protection mechanism by interacting with the ribosome and displacing the bound drug (17,20,(22)(23)(24). ARE ABC-F proteins have been classified into three groups based on antibiotic resistance: (i) Msr homologs (macrolides and streptogramin B), (ii) Vga/Lsa/Sal homologs (lincosamides, pleuromutilins, and streptogramin A), and (iii) OptrA homologs (phenicols and oxazolidinones) (20) (SI Appendix, Fig. S1). Notably, these proteins compose two ATP-binding domains connected by a linker of various lengths, which appears to be crucial for the efficiency and specificity of antibiotic resistance (20). However, an understanding of the molecular mechanism of how ARE ABC-F proteins interact with ribosomes to mediate antibiotic resistance requires a high-resolution structure of the complex. Here we report the cryo-EM structure of the ARE ABC-F protein MsrE bound to the bacterial ribosome at 3.6-Å resolution. Note that the Msr homologs (divided into four classes: A, C, D, and E) have the longest domain linkers among the ARE ABC-F members (SI Appendix, Fig. S1). 
Furthermore, we have previously identified a Pseudomonas aeruginosa clinical isolate carrying the msrE gene obtained through horizontal gene transfer (NCBI accession ID: CP020704), which highlights the importance of the ARE ABC-F proteins in clinical practice. Therefore, our findings offer insight into the ribosomal protection mechanism underlying antibiotic resistance of ABC-F proteins, and demonstrate the huge potential for the future antibacterial drug development to counteract this resistance. Results MsrE Rescues AZM Affected Translation. The msr genes have mostly been identified in staphylococci, streptococci, and enterococci, and have recently spread to P. aeruginosa (18). We have previously observed that the exogenous expression of MsrE protein from its putative promoter significantly increases the azithromycin (AZM; a second-generation derivative of erythromycin) resistance of P. aeruginosa laboratory strains. Furthermore, the expression of MsrE from the arabinose-inducible promoter P BAD confers AZM resistance to Escherichia coli in a dose-dependent manner (SI Appendix, Table S1). The induction of MsrE expression by 0.2% arabinose increases the minimum inhibitory concentration of AZM by 16-fold compared with the uninduced condition. Overexpression of MsrE does not significantly affect the fitness of E. coli, as revealed by the comparison of growth curves of E. coli/P BAD -msrE in the presence of 0.2% arabinose and 0.2% glucose (which reduces leaky expression from the arabinose-inducible promoter) (SI Appendix, Fig. S2). Moreover, purified MsrE protein rescues the AZM-inhibited translation process in a dose-dependent fashion, as shown by our in vitro transcription/translation assay (SI Appendix, Fig. S3A). These results clearly support a ribosomal protection mechanism for MsrE and ARE ABC-F proteins in general, which is also in agreement with the recent bacteriological and biochemical studies on VgaA and LsaA proteins (ARE ABC-F members conferring resistance to lincosamides, pleuromutilins, and streptogramin A) (SI Appendix, Fig. S1) (20,22). Structure of Ribosome-Bound MsrE. Purified MsrE protein can bind to Thermus thermophilus ribosome with a stoichiometry of 1:1 (SI Appendix, Fig. S3B). The addition of AZM, tRNA fMet , and its corresponding mRNA did not significantly affect the binding stoichiometry. Furthermore, the complex was reconstituted in the presence of ATP or its nonhydrolyzable analog AMP-PNP (SI Appendix, Fig. S3B), indicating that ATP hydrolysis is not required for MsrE ribosome binding. We next determined the cryo-EM structure of the MsrEribosome complex with an mRNA and its cognate P-tRNA at 3.6-Å resolution ( Fig. 1A and SI Appendix, Fig. S4). Local resolution analysis revealed that the PTC/NPET of the 50S and the decoding center region of the 30S, as well as the tRNA anticodon stem loop and MsrE domain linker region, could be visualized at approximately 3.5-Å resolution. The local resolution was lower for the globular domains of the MsrE protein and the acceptor stem of the tRNA ( Fig. 1 B and C), indicating higher structural flexibility. As no structures of ARE ABC-F proteins are available, the ribosome-bound MsrE model is of great interest. 
The MsrE protein has a needle-like overall structure, with two ABC transporter domains (ABC1 and ABC2) carrying highly conserved nucleotide-binding sites (NBSs) (residues 37-44 and 329-336) assembled as its base and a domain linker (residues 180-279) that form two long crossed helices connected by an extended loop) assembled as the needle and tip ( Fig. 2A). Both NBSs reveal electron density for the bound nonhydrolyzable ATP analog (AMP-PNP) ( Fig. 2 B and C). Despite structural similarity to the ribosome-bound ATP form of the non-ARE ABC-F protein EttA (25), ATP hydrolysis appears to play a different role in the functioning of MsrE. ATP hydrolysis is essential for the release of EttA from the ribosomes, as revealed by the finding that its deficient mutant (double substitutions E188Q/ E470Q of the catalytic residues) affects cell growth by inhibiting protein synthesis through trapping ribosomes (26). In contrast, the cell growth of E. coli is not significantly affected by overexpression of the corresponding ATP hydrolysis deficient MsrE mutant (E104Q/E413Q) (SI Appendix, Fig. S2). However, the MsrE E104Q/E413Q mutant in vitro ribosome binding (SI Appendix, Fig. S3B) and in vivo AZM resistance (SI Appendix, Table S2) efficiencies are significantly reduced, indicating that ATP hydrolysis is crucial for mediating macrolide resistance. Analyzing the aforementioned mutations one by one reveals that both residues contribute to antibiotic resistance, but the one in NBS1 (E413Q) has a more significant effect (SI Appendix, Table S2). In line with this, ATPase activity has been reported to be crucial for ARE ABC-F protein VgaA, since mutation of the catalytic glutamines results in abolished antibiotic resistance in vivo (27) and rescue activity in the in vitro translation system (20). It was recently reported that the hydrolytically inactive Staphylococcus haemolyticus VgaA LC and Enterococcus faecalis LsaA mutants inhibit peptidyl transferase activity of the ribosome in a reconstituted translation system (28). The long and flexible domain linker is a conserved feature in the ARE ABC-F family and likely provides an explanation for the lack of available structures. The sole identified crystal structure of ABC-F protein is the E. coli EttA (non-ARE) mentioned above (26), which has a significantly shorter linker region. Furthermore, the EttA crystal structure reveals a dimer formation mediated by the linker that might have facilitated the crystallization, yet the monomer state is favored in solution and is likely the active form of EttA (26). MsrE likely also functions as a monomer in cells, as revealed by molecular weight analysis. Surprisingly, in our structure, the entire backbone of the MsrE domain linker can be traced, demonstrating two crossed helices, longer α5, and shorter α6 (referred to as αL and αS, respectively) connected by an extended loop ( Fig. 2A), with the majority of the amino acid side chains visualized in the electron density map (Fig. 1C). MsrE Binds to Ribosomal E-Sites. MsrE is held in a position known as the ribosomal exit (E) site (Fig. 3A). The ABC1 domain of MsrE faces the L1 stalk of 50S, whereas the ABC2 domain contacts the 30S head (h41 and h42 of 16S rRNA and the r-protein S7), the r-protein L5 in the P-site finger region of 50S, and the elbow of P-tRNA (Fig. 3B). 
In contrast to the few ribosome contacts observed for the two ABC domains, the elongated domain linker establishes extensive contacts with ribosomes as it stretches in parallel with the acceptor arm of P-tRNA toward the NPET (Fig. 3C). The region at the interface between ABC1 and the linker (foremost the residues Glu191 and His194) contacts the H68 of 23S rRNA, which is involved in ribosomal intersubunit bridge as well as E-site formation and is important for translation activity (29) (Fig. 3C). Starting from the middle of domain linker and heading toward the terminal loop, MsrE αS helix (Lys264-Thr250 region) forms extensive contacts with H74 (A2435-U2438 and C2064-C2065 regions), H80 (G2252-G2253), and H93 (C2601) of 23S rRNA (mainly the backbone) (Fig. 3C). From the P-tRNA side, the residues Pro275 (conserved in the Msr subfamily) and Glu276 at the C terminus of the domain linker interact with the P-site tRNA elbow region (D-loop nucleotides 17-19) (Fig. 3C). The residues Arg217, Lys216, and Gln214 in αL directly contact the acceptor stem of P-tRNA (Fig. 3C). The orientation and detailed interactions of the extended loop in the MsrE domain linker with ribosome are discussed below. Taken together, the foregoing findings indicate that MsrE interacts with the ribosome mainly through nonspecific interactions, in agreement with its cross-species specificity. Therefore, MsrE can likely function on drug-affected ribosomes across a variety of bacterial species, making it an intriguing target for investigations of antiresistance strategies. MsrE binding causes dramatic conformational changes in both the ribosome and the P-tRNA. Overall, the 30S subunit of the MsrE-bound ribosome is rotated counterclockwise with respect to the 50S subunit by 2.5 Å, with its head rotated by 3.9°(SI Appendix, Fig. S5A) compared with the post-peptidyl transfer state ribosome (30,31). Concurrently, MsrE affects the positioning of the P-tRNA; the acceptor stem shifts by 30 Å toward a site usually occupied by the acceptor stem of the fully accommodated A-tRNA ( Fig. 4A and SI Appendix, Fig. S5B), and the anticodon stem loop moves by approximately 5 Å coupled to the ribosome rotation (SI Appendix, Fig. S5B) but is seen to maintain the codon-anticodon interaction with its cognate mRNA (SI Appendix, Fig. S5C). Other notable changes include a more open positioning of the L1 stalk and displacement of the N-terminal extension of the r-proteins L27 and L16 from their usual positions in close proximity to the PTC to accommodate the MsrE protein (Fig. 4A). Since the N-terminal extension of L27 is known to be crucial for tRNA binding and ribosome functioning (32)(33)(34), the MsrE-induced conformation is likely transient, and the original state is restored on MsrE dissociation from the ribosome to resume normal translation. While both MsrE and EttA belong to the ABC-F family and bind to ribosome E-sites, their interactions with the ribosomes vary significantly. MsrE has the elongated domain linker but lacks the insertions called "arm" and "toe" observed in EttA (SI Appendix, Fig. S6A). The EttA arm makes additional contacts with the L1 stalk, resulting in an even more open form than that seen in the present structure (SI Appendix, Fig. S6B). Note that EttA arm has been shown to restrict the ribosome and tRNA dynamics required for translation elongation in response to the availability of ATP (25). The EttA toe region interacts with the r-protein L5 and positions it away from the 30S (SI Appendix, Fig. S6B). 
The different positions of MsrE and EttA in the ribosome likely reflects their diverse functions. MsrE Extended Loop Displaces AZM from Ribosome. As mentioned above, the extended loop of the MsrE domain linker binds deep into the PTC/NPET region and causes deformation of the r-protein L16, the N-terminal extension of the r-protein L27, and the acceptor stem of P-tRNA (Fig. 4A). Unexpectedly, we observed the positions of the acceptor stems of both classical A-and P-tRNA occupied by the domain liker of MsrE (SI Appendix, Fig. S5B). The shifted P-site tRNA acceptor stem is likely stabilized by interactions with MsrE residues Asp224-Lys226 (Fig. 4B). Furthermore, the residues Arg241, Leu242, and His244 at the tip insert deep even beyond the PTC, with their side chains approaching the macrolidebinding site (Fig. 5 A-C). In particular, the Leu242 clashes (within 1.8 Å) with AZM when our structure is compared with that of AZM bound to the ribosome (2, 3) (Fig. 5C). The 23S rRNA residue A2062, located in the NPET, is believed to relay the drug-induced stalling signal to the PTC, as mutations of this nucleotide can eliminate stalling (35). The base of A2062 protrudes into the tunnel lumen in drug-free ribosomes (30), whereas it moves closer to the tunnel wall as macrolide and nascent peptide chain fill the tunnel (9,35). Such a movement would clash with MsrE His244 residue, however (Fig. 5C). Instead, A2062 nucleotide is reoriented in the tunnel lumen in the presence of MsrE compared with that in drug-free ribosomes (Fig. 5C). On AZM binding to ribosomes, the nucleotide U2506 was observed to shift toward AZM and to be involved in its binding, corroborating the role of U2506 in macrolide drug action (2,3). In contrast, the U2506 in the present complex is seen to undergo a significant conformational change away from both the PTC and the macrolide-binding site (Fig. 5C). Furthermore, the neighboring U2504, which has been implicated in determining the species-specificity of several PTC A-site-binding antibiotics (e.g., tiamulin) (36), shifts away by approximately 2 Å (Fig. 5C). Simultaneously, the next nucleotide, A2503, whose identity was found to be critical for programmed translational arrest (8,9) and whose C8 methylation by Cfr methyltransferase is known to cause resistance to some macrolides (37,38), is also deformed from AZM-binding positioning (SI Appendix, Fig. S7A). These conformational changes in the A-site side of PTC appear to be caused by MsrE extended-loop tip residues, especially Arg241 (Fig. 5C). As for the P-site side, U2585, a key player in PTC activity, is displaced away from PTC (>90°flip and a 5-Å movement), consequently forming a stacking interaction with A2602, which is also rearranged from its conformation in ribosomes with AZM (2, 3) and without AZM (30) (Fig. 5C). The inherent flexibility of the universally conserved residues U2506 and U2585 at active sites is essential for their pivotal role in peptide transfer (30,39,40). For instance, the 180°flip of U2585 caused by the ErmCL nascent chain and macrolide interplay in the NPET prevents the stable binding and accommodation of A-site tRNA, leading to inhibition of peptide bond formation and the translation of downstream macrolide resistance protein ErmC (41). The movement of U2585 caused by macrolide erythromycin binding is apparently accompanied by repositioning of A2602 (16). 
The function of the universally conserved A2602, essential for peptidyl-tRNA hydrolysis to release nascent peptide, is highly dependent on its positioning (42). Here the residue Lys233 in MsrE is likely involved in stabilizing the reorientation of A2602 (Fig. 5B). Our mutagenesis study on MsrE revealed that mutation of Arg241, Leu242, or His244 to Ala results in a significantly diminished ability of MsrE to confer AZM resistance in vivo (SI Appendix, Table S2) and to rescue AZM-affected translation in vitro (Fig. 5D). This finding demonstrates their crucial role in mediating macrolide resistance, which is consistent with our structural information. Furthermore, truncation of this extended loop region (residues Lys216-Lys254, Δ loop) or even mutation of only the two residues (Arg241Ala/His244Ala) completely abolished its ability to mediate AZM resistance ( Fig. 5D and SI Appendix, Table S2). This finding demonstrates that the extended loop region, particularly the two residues Arg241 and His244, is essential for displacing AZM from its binding site through strong interactions with ribosomes. Interestingly, Lys233, Arg241, Leu242, and Lys246 are universally conserved in the Msr subfamily, whereas His244 is found only in MsrE and MsrD, substituted by small residues Ala and Ser in MsrA and MsrC, respectively (SI Appendix, Fig. S1). The Arg241Ala and His244Ala mutations or even loop deletion had no significant effect on the ribosome binding efficiency (SI Appendix, Fig. S3B), but did render the MsrE protein unable to recover the AZM-affected translation in vitro (Fig. 5D). Finally, we tested whether binding of MsrE could displace AZM and release it from the ribosome. Quantification of AZM copelleting with ribosomes revealed that incubation of the AZM-ribosome complex with MsrE indeed reduced the amount of AZM associated with ribosomes (Fig. 5E). Furthermore, our data demonstrate the importance of the loop residues Arg241 and His244, as mutating either to Ala resulted in complete loss of AZM release activity (Fig. 5E). Curiously, when the lesser binding efficiency of the ATP hydrolysis-deficient mutant E104Q/E413Q (SI Appendix, Fig. S3B) is taken into account, its drug displacing effect is comparable to that of WT MsrE (Fig. 5E). Therefore, the role of ATP hydrolysis in MsrE resistance activity is likely due to its importance in turnover. Regardless of the size of the macrolactone ring, all macrolides are oriented in the ribosomal tunnel in a similar manner involving a hydrogen bond between their desosamine hydroxyl and the N1 atom of A2058 in the 23S rRNA (2,3). Furthermore, the binding of macrolides to ribosomes is stabilized by the tight hydrophobic packing of the lactone ring against the conserved U2611 and A2057, as well as the ionic interaction between desosamine and the phosphate group of G2505. In addition to the aforementioned notable changes for several crucial PTC residues on MsrE binding (Fig. 5C), some minor changes are observed in 23S rRNA (U2609 and U2611 on one side and A2058 and A2057 on the other side) and rprotein L22 (Arg90 at the tip of elongated hairpin) involved in formation of the macrolide-binding site in the tunnel (SI Appendix, Fig. S7A). As a result, the tunnel around the macrolide-binding site widens by approximately 1.5-2 Å. Taken together, the MsrE extended loop mediated allosteric relay of changes to the PTC and NPET in the vicinity of the macrolide-binding site likely contributes to the dislodgement of AZM from the tunnel. 
Consistent with this notion, many nucleotide modifications mediating antibiotic resistances do not involve direct interactions with the drug, but instead involve reshaping of the binding pocket to release the drug. It appears that the composition and length of the extended loop correlate with antibiotic resistance profiles of ARE ABC-F proteins (SI Appendix, Fig. S1). Indeed, detailed mutational analysis of VgaA linker residues identified a short stretch of residues 212-220 whose composition determines the efficiency as well as the specificity of antibiotic resistance (27,43). Based on sequence alignment (SI Appendix, Fig. S1), this short stretch corresponds to the region Arg241-Leu242, the importance of which can be rationalized by our structural information. The diverse nature of ARE ABC-F domain linker extended loops correlating with the differences in their drug specificities suggests that bacteria have evolved a plethora of mechanisms to protect the ribosome from PTC-and NPET-targeting antibiotics. Discussion We propose the following mechanism for MsrE, which appears to be universally conserved for ARE ABC-F proteins that confer resistance to translation elongation inhibitors, which trap the ribosome with a tRNA in the P-site (1). The ATP form MsrE recognizes the stalled ribosome with a peptidyl-tRNA in P-site and a bound AZM blocking the NPET. MsrE subsequently binds to the ribosomal E-site, with its domain linker inserting into the PTC/ NPET region and stabilizing the P-tRNA to prevent its drop-off. To accommodate the MsrE extended loop into the PTC region, the tRNA is shifted away from 50S, and its acceptor stem is bent toward the A-site. The extended loop approaches the bound AZM and causes its release allosterically through a combinative effect of structural displacement and ribosomal conformational changes taking place in PTC and NPET. The drug likely leaves the ribosome through the PTC rather than the NPET given the structural constrictions, especially when the nascent chain is present. Interestingly, a comparison of the present complex, the ErmBL nascent peptide trapped ribosome complex (11), and the post-peptidyl transfer ribosome structure (30) reveals that the conformation of the P-site tRNA is similar in the latter two but undergoes a significant conformational shift when MsrE is bound to the ribosome (Fig. 4A). There is a possible passageway for the nascent chain of the P-tRNA distorted by MsrE given the available space and structural flexibility around this area in the present complex (SI Appendix, Fig. S7B); however, understanding the detailed interactions of MsrE with the nascent chain of P-tRNA requires a high-resolution structure of MsrE-bound ribosome with a P-tRNA carrying a nascent chain. ATP hydrolysis on MsrE likely drives its two ABC domains apart into a conformation that is no longer compatible with ribosome binding, thereby triggering the release of MsrE. With MsrE and the drug released, peptidyl tRNA is presumably returned to the P/Psite, and the nascent chain can likely proceed to the tunnel without any obstacles, so that the translation can resume. The nascent chain likely blocks drug rebinding, and MsrE rebinding is hindered by deacylated tRNA progression into the E-site when translation proceeds, as has been proposed for EttA (26). 
In a sense, ARE ABC-F proteins are similar to ribosome protection proteins TetM and TetO, which are homologous to EF-G and bind to the ribosome A-site to displace tetracycline from ribosomes, leading to drug resistance (44,45). Materials and Methods The MsrE-ribosome complex was reconstituted and used for cryo-EM grid preparation. Grids were analyzed with an FEI Titan Krios cryo-transmission electron microscope (Thermo Fisher Scientific) operated at 300 kV. Data were acquired automatically in movie mode as sets of 20 frames at a nominal magnification of 75,000×. Particle picking and data processing were done in Relion 2.0. A total of 310,270 particles were used for reference-free 2D classification, and nonribosome particles were removed. The remaining 186,654 particles were used for 3D classification using an empty 70S ribosome as a reference. Finally, 127,778 particles with homogeneous density for MsrE and P-tRNA were used for final reconstruction with statistical movie processing. Final reconstruction yielded a map of 3.6-Å resolution, as determined using the gold-standard Fourier shell correlation (FSC) criterion in Relion (SI Appendix, Fig. S4). The MsrE model was built using a sequence from P. aeruginosa PASGNDM699. Initial docking of the 50S, 30S, and tRNA (Protein Data Bank ID code 5AA0) structures and the MsrE model into cryo-EM maps was performed in Chimera. Structures were subsequently rigid-fitted manually and refined in Coot. More details on the study methodology are provided in SI Appendix, Materials and Methods.
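For readers unfamiliar with the convention, the quoted resolution is read off the point where the half-map FSC curve falls below the gold-standard threshold of 0.143. The following is a purely illustrative R sketch of that step, assuming an FSC table with columns for spatial frequency (1/Å) and correlation such as Relion exports; it is not part of the study's own processing pipeline.

```r
# Illustrative only: estimate resolution from a gold-standard FSC curve.
# 'freq' (spatial frequency, 1/Angstrom) and 'fsc' are assumed to come from a
# Relion postprocessing table; 0.143 is the conventional gold-standard threshold.
fsc_resolution <- function(freq, fsc, threshold = 0.143) {
  i <- which(fsc < threshold)[1]               # first shell falling below the threshold
  if (is.na(i) || i == 1) return(NA_real_)     # curve never crosses (or starts below)
  # linearly interpolate the crossing frequency between shells i-1 and i
  f_cross <- approx(x = fsc[c(i - 1, i)], y = freq[c(i - 1, i)], xout = threshold)$y
  1 / f_cross                                  # resolution in Angstrom
}
# e.g., a curve crossing 0.143 near 0.28 1/Angstrom corresponds to roughly 3.6 Angstrom
```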
Large volume quantum correction in loop quantum cosmology: Graviton illusion? The leading quantum correction to the Einstein-Hilbert Hamiltonian coming from large-volume vacuum isotropic loop quantum cosmology is independent of quantization ambiguity parameters. It is shown here that this correction can be viewed as a finite-volume gravitational Casimir energy due to one-loop 'graviton' contributions. In the vacuum case the sub-leading quantum corrections, and in the non-vacuum case even the leading quantum correction, depend on ambiguity parameters. It may be recalled that these are in fact analogous features of perturbative quantum gravity, where it is well known that pure gravity (on-shell) is one-loop finite whereas higher-loop contributions are not even renormalizable. These features of the quantum corrections coming from non-perturbative quantization shed new light on a major open issue: how to communicate between non-perturbative and perturbative approaches to quantum gravity. PACS numbers: 04.60.Pp, 04.60.Kz, 98.80.Jk. The Standard Model of particle physics, a description of matter fields based on perturbative quantum field theory, has been shown to be one of the most predictive physical theories ever constructed. The physical predictions of this theory have been verified experimentally with outstanding accuracy. Unfortunately, the techniques of perturbative quantum field theory, when applied to the theory of gravity, fail quite miserably. Thus, in the quest for a quantum theory of gravity one is compelled to take different courses. In the most popular approach, known as string theory [1], one postulates the basic constituents of the model to be one-dimensional objects. The other leading candidate theory of quantum gravity is loop quantum gravity (LQG) [2], where one attempts to formulate a background-independent, non-perturbative quantum theory of gravity. Being very different in its formulation, a major issue that haunts non-perturbative quantum gravity is its relation to the 'low-energy' world [3]. In particular, how do the results of the non-perturbative approach compare with the results of the perturbative approach in the regime where the perturbative approach should be a good effective description? To explore this issue, here we use the framework of loop quantum cosmology (LQC) [4,5]. It is a quantization of cosmological (homogeneous) models using techniques of loop quantum gravity. Thanks to its simplicity, it allows explicit calculations to study its consequences. Loop quantum cosmology has so far led to several impressive results. 
It has been shown that loop quantum cosmology cures the problem of the classical singularity [6], along with quantum suppression of classical chaotic behaviour near singularities in Bianchi-IX models [7]. Further, it has been shown that this model has an in-built generic phase of inflation [8]. The corresponding power spectrum of density perturbations contains a distinguishing feature [9]. However, the issues related to physical observables, external time evolution, and the physical Hilbert space are still at a nascent stage [4,10]. Nevertheless, it is possible to derive an effective Hamiltonian using WKB techniques [11,12]. We consider here spatially flat isotropic loop quantum cosmology, as we are interested in its vacuum solution. The spatially closed model does not have a vacuum solution. In loop quantum cosmology, a kinematical state is written as $|s\rangle = \sum_\mu s_\mu |\mu\rangle$, where the $|\mu\rangle$'s are eigenstates of the volume (densitized triad) operator. It is important to emphasize the meaning of volume in this context. In particular, the volume $V = \int d^3x\, \sqrt{-g}$ of the space is infinite, as it is non-compact. To avoid this trivial divergence in loop quantum cosmology, one considers the volume of a finite cell of the universe (see Fig. 1) and studies its evolution. This feature plays the central role in the arguments presented here. In loop quantum cosmology, the underlying (internal time) dynamics is described by a difference equation. This discrete evolution faithfully represents the underlying discrete geometry, a feature of the full theory of loop quantum gravity. In the effective description of loop quantum cosmology, one tries to understand the dynamics from a perspective based on continuum geometry. In other words, one tries to approximate the fundamentally discrete dynamics by a continuum dynamics. In this process, the discrete dynamics effectively provides a new potential term in the continuum description. This feature can be seen rather easily by considering the gravitational term in the difference equation. In deriving the effective Hamiltonian, in the first step one approximates the solution of the difference equation, $s_\mu$, by a smooth (differentiable) function, say $\psi(p := \gamma\mu l_p^2/6)$. Using the WKB approximation, one derives a Hamilton-Jacobi equation from it. The corresponding Hamiltonian then contains an effective potential term, referred to as the quantum geometry potential [11]. Thus, the quantum geometry potential term arises when one tries to view a fundamentally discrete dynamics through a continuum description. In the effective description, the quantum geometry potential leads to a generic bounce [13]. In the regime of large volume $V = p^{3/2}$ ($p$ is the densitized triad) and small extrinsic curvature $K$ (the variable conjugate to $p$), the gravitational part of the effective Hamiltonian [11,12] can be expanded as in Eq. (1), where $H_{EH}$ is the Einstein-Hilbert Hamiltonian for the homogeneous and isotropic spacetime. In natural units ($c = \hbar = 1$), $\kappa = 16\pi G = l_p^2$ is the gravitational coupling constant, and $\gamma$ is the Barbero-Immirzi parameter. $\mu_0$ here is viewed as a quantization ambiguity parameter. $\mu_0$ appears as the length of the edges when expressing the curvature tensor in terms of holonomies around a square. It essentially plays the role of a regulator [5]. Both of these parameters are generally assumed to be order-unity numbers, but there is no unique way to fix their values within loop quantum cosmology itself. The accuracy of the WKB approximation here increases with increasing volume. 
So in the large-volume regime (scale set by the step-size of the difference equation), the effective Hamiltonian (1) is quite trustworthy. Let us now consider the pure gravity case, i.e., without any matter field. The Einstein-Hilbert Hamiltonian $H_{EH}$ vanishes (on-shell) for pure gravity. It is then clear from the expression (1) that the leading quantum correction is independent of the parameters $\mu_0$ and $\gamma$. This quantum correction comes solely from the quantum geometry potential. In the limit of infinite volume this quantum correction vanishes. We will refer to this term as the gravitational Casimir energy. Later we will show that this term can indeed be viewed as a gravitational Casimir energy due to the finite volume of the system. We now re-write the term as in Eq. (2), where the volume $d^3 = p^{3/2}$. Traditionally, one computes the Casimir energy by computing the shift of the vacuum polarisation energy due to the imposition of an external boundary condition. In particular, the quantum electrodynamic Casimir energy between two conducting plates of surface area $\tilde{A}$ separated by a distance $d$ is $-(\pi^2/240)(\tilde{A}/d^3)$. Surprisingly, this expression has no explicit dependence on the fine structure constant. Although in principle possible, in reality there is no conductor that can enforce such boundary conditions for modes of all wavelengths. On the other hand, the experimental results seem to agree with the traditional expression extremely well [14]. Thus, reconciling these two facts may appear to be a conceptually difficult task. However, in a recent approach to computing the Casimir energy [15,16], instead of imposing a boundary condition one considers the plates as a classical static background field. Then one introduces an interaction of the type $-\mathcal{L}_{\rm int} = \lambda\,\sigma(x)\phi^2(x)$, where $\sigma(x)$ is the classical (non-dynamical) background field and $\phi(x)$ is the dynamical field whose vacuum fluctuations contribute to the Casimir energy. The background field $\sigma(x)$ is represented by delta functions peaked around the positions of the conducting plates. Using the techniques of perturbative quantum field theory, one computes order-by-order contributions of this interaction. It is possible to sum up the contributions to all orders to give a closed-form expression for the Casimir energy. In the strong-coupling ($\lambda \to \infty$) limit, the explicit dependence on the coupling constant drops out from this expression and it reduces exactly to the traditional expression for the Casimir energy. In the real experimental systems that one considers for measuring the Casimir force, the fine structure constant effectively appears as a strong coupling [16]. Thus, the recent approach to computing the Casimir energy addresses the mentioned conceptual difficulty quite well. On the other hand, in the weak-coupling ($\lambda \to 0$) limit, the Casimir energy computed using this method scales as $\sim \lambda^2/d$. It is essentially the contribution from the one-loop diagram with a two-point insertion. The expression of the effective Hamiltonian (1) is valid in the large-volume ($d^2 \gg l_p^2$) regime. Naturally, in the regime of interest the gravitational coupling constant $l_p$ is very weak. We have already mentioned that in isotropic loop quantum cosmology one essentially studies the evolution of a finite cell of the universe. So one can expect to get a finite-volume Casimir energy due to quantum fluctuations of geometry. The homogeneous and isotropic vacuum solution of the Einstein equation is Minkowski spacetime. Since we are considering vacuum isotropic loop quantum cosmology, the system is essentially a finite patch of Minkowski spacetime. 
This feature makes it conceptually easier to use the techniques of perturbative quantum field theory. To compute the gravitational Casimir energy for the system i.e. a finite cell of the universe, here we use the recent method. Essential difference in this case is that one should consider the vacuum fluctuations of spin 2 field. Also, instead of one pair of boundary, here one has three pairs of boundary, one pair in each spatial direction. For simplicity, however we will compute Casimir energy due to a massless spin 0 field (massless Klein-Gordon field). The computational scheme can be extended for the spin 2 field as well. The result will differ by a numerical factor of 2 because of its two helicities. So to study the qualitative behaviour (as it is not expected to have quantitative match; in loop quantum cosmology one consider only the temporal fluctuations of geometry. Imposition of high symmetry essentially freezes the spatial fluctuations.), use of spin 0 field suffices. We consider the background field σ(x) to be represented by three dimensional delta functions, as there are boundaries along all three spatial directions. We 'normalize' non-dynamical background field σ(x) as where d = {d, d, d}. It is worth pointing out that a different 'normalization' essentially alters the coupling constant λ. In this approach [15,16] Casimir energy is read off from the one-loop effective action computed using the background field method [17]. Naturally, the Casimir energy is defined as where L is the full interacting Lagrangian. The functional determinant can be expressed in terms of Feynman diagrams. Being independent of d, the one-loop diagram with one-point insertion does not contribute to Casimir force which is a physically measurable quantity. The contribution from one-loop diagram with two-point insertion (see Fig 1.) can be computed in a straightforward manner, leading to where In the calculation, the d independent contributions (formally divergent) have been dropped, as they do not contribute to Casimir force. Before we compare the expression (5) of Casimir energy computed using perturbative quantum field theory, with the expression (2) extracted from isotropic loop quantum cosmology, a caution is appropriate. In strong coupling limit the method used here gives unambiguous expression for Casimir energy as coupling constant dependence drops out. However in weak coupling limit, it depends on the choice of 'normalization' (3) which needs to be provided from outside. Also, the contribution from isotropic loop quantum cosmology itself may not account for the full gravitational Casimir energy. So here we restrict to the qualitative comparison of these two expressions. Comparing the expressions (2) and (5), it is clear that the expression (2) can indeed be viewed as contributions from virtual quantas whose coupling strength is Planck constant l p . So we refer these quantas as 'gravitons'. However, due to the 'normalization' uncertainty involved in the perturbative method used here, it is not yet possible to conclude definitively about the spin degrees of freedom of these quantas. Let's now go back to the expression (1) of the effective Hamiltonian. In vacuum case, the leading quantum correction is unambiguous. However, the sub-leading corrections depends on ambiguity parameters. With inclusion of matter, the Einstein-Hilbert Hamiltonian H EH does not vanish. Clearly, the leading quantum correction then also becomes ambiguous. 
There will also be contributions from the direct coupling of matters with gravity. It is important to observe that these features of large volume quantum corrections coming from non-perturbative quantization, in fact closely resemble the features of perturbative quantum gravity. It is well-known through the work of 't Hooft and Veltman [18] that pure gravity (onshell) is one-loop (order l 2 p ) finite. In other words, oneloop contributions from perturbative quantum gravity without matter, is unambiguous. However, higher-loops contributions from pure gravity are not even renormalizable i.e. it is not possible to obtain unambiguous results from such computations. With inclusion of matter, perturbative quantum gravity is not even one-loop renormalizable. Now, as we have already mentioned that a severe criticism that often haunts the advocates of nonperturbative quantum gravity, is its relation to the 'low energy' world. Although for symmetric models as shown here, the quantum cosmology based on loop quantum gravity not only reproduces the Einstein-Hilbert Hamiltonian as the leading term but also its quantum corrections resemble the qualitative features of perturbative quantum gravity in the regime where the later should be a reasonable effective description. In computing Casimir energy using perturbative quantum field theory, the interaction term was introduced rather by hand. We now argue that the form of the interaction used in the calculation arises quite naturally. The gravitational Lagrangian involves term of the form g(x)g(x)∂g(x)∂g(x). In the background field method of quantum field theory, one expands the field g(x) around a given classical background say η(x); g(x) = η(x)+h(x), where h(x) is the fluctuating field. Inserting the decomposition into the gravitational Lagrangian, it is easy to see that it contains a term of the form σ(x)h(x)h(x). The σ(x) ∼ ∂η(x)∂η(x), can indeed be treated as a classical background field. For small extrinsic curvature regime, one can simply consider background η(x) as static while computing perturbative corrections. For a finite cell of the universe, the use of delta function potential peaked around the boundaries is also well-motivated. For example, one crude way to make the volume of flat space finite, is by multiplying the metric component with a Heaviside step function say θ(d µ /2 − |x µ |) where d µ = {∞, d}. It is easy to see that the background field σ(x) then involves delta functions peaked around boundaries. However, the mentioned term need not be the only boundary interaction term that can contribute to the Casimir energy. So, it is necessary to perform a 'first principle' computation of gravitational Casimir energy for the system. It may also help to eventually settle the issue of spin degrees of freedom through quantitative comparison or at least to specify what to expect from a computation using the full theory of loop quantum gravity. To summarize, the leading quantum correction to Einstein-Hilbert Hamiltonian coming from vacuum isotropic loop quantum cosmology is unambiguous and can be viewed as gravitational Casimir energy due to one-loop 'graviton' contributions. However, based on arguments presented here, it is not yet possible to conclude definitively about the spin degrees of freedom of these quantas. The sub-leading quantum corrections depend on quantization ambiguity parameters. In non-vacuum case even leading quantum correction depends on ambiguity parameters. 
Importantly, these are analogous features of perturbative quantum gravity. In other words, the quantum corrections coming from loop quantum cosmology, whose quantization relies on non-perturbative techniques, closely resemble the qualitative features of perturbative quantum gravity in the regime where the latter should be a reasonable effective description. Acknowledgements: I thank Ghanashyam Date, Romesh Kaul and Martin Bojowald for careful reading and critical comments on the manuscript. It is a pleasure to thank Ghanashyam Date and Romesh Kaul for helpful, illuminating discussions. I thank Romesh Kaul for encouragement.
Relative Validity and Reproducibility of a Food Frequency Questionnaire for Assessing Dietary Intakes in a Multi-Ethnic Asian Population Using 24-h Dietary Recalls and Biomarkers The assessment of diets in multi-ethnic cosmopolitan settings is challenging. A semi-quantitative 163-item food frequency questionnaire (FFQ) was developed for the adult Singapore population, and this study aimed to assess its reproducibility and relative validity against 24-h dietary recalls (24 h DR) and biomarkers. The FFQ was administered twice within a six-month interval in 161 adults (59 Chinese, 46 Malay, and 56 Indian). Fasting plasma, overnight urine, and 24 h DR were collected after one month and five months. Intra-class correlation coefficients between the two FFQ were above 0.70 for most foods and nutrients. The median correlation coefficient between energy-adjusted deattenuated FFQ and 24 h DR nutrient intakes was 0.40 for FFQ1 and 0.39 for FFQ2, highest for calcium and iron, and lowest for energy and carbohydrates. Significant associations were observed between urinary isoflavones and soy protein intake (r = 0.46), serum carotenoids and fruit and vegetable intake (r = 0.34), plasma eicosapentaenoic acid and docosahexaenoic acid (EPA + DHA) and fish/seafood intake (r = 0.36), and plasma odd chain saturated fatty acids (SFA) and dairy fat intake (r = 0.25). Associations between plasma EPA + DHA and fish/seafood intake were consistent across ethnic groups (r = 0.28–0.49), while differences were observed for other associations. FFQ assessment of dietary intakes in modern cosmopolitan populations remains feasible for the purpose of ranking individuals’ dietary exposures in epidemiological studies. Introduction The prevalence of obesity and chronic diseases is rising rapidly in Asia [1,2]. In every Asian country, diet-related risk factors such as overweight, hypertension, and hyperglycemia are among the top contributors to early death and disability [3]. Effective interventions require a clear understanding of food consumption trends and diet-disease relationships. However, the reliable assessment of dietary intakes is increasingly challenging, as Asian diets become increasingly varied, consisting of both traditional fresh foods and ultra-processed products [4,5]. In addition, large proportions of populations reside in multi-ethnic cosmopolitan settings, where meals prepared outside the home are frequently consumed. Singapore is one such Asian multi-ethnic cosmopolitan setting (74% Chinese, 13% Malay, 9% Indian, and 3% others) [6], with a wide variety of traditional ethnic and international cuisines, and a strong eating-out culture [7]. Long-term dietary exposures in chronic disease epidemiology are usually assessed by self-report methods such as food frequency questionnaires (FFQ). In the 1990s, a FFQ for the Singapore population was developed and validated [8]. Subsequently, new food trends have emerged, and there is growing interest in the health effects of a wider variety of nutrients. Other FFQs developed for multi-ethnic populations in the Asia region did not stratify by ethnic group in either FFQ development or validation [9,10]. Hence, a new FFQ was recently developed that incorporated ethnic stratification into the design methodology [11]. Since all dietary assessment methods based on self-reported intake are prone to some degree of measurement error, evaluating the magnitude of this error is required before use. 
Although there is no 'gold standard' for measuring dietary intakes, studies on validity relative to other dietary assessment methods can offer valuable insights. Dietary recalls or records are often used as a reference method because they are open-ended and thus do not have the same restrictions as a semi-quantitative FFQ related to a limited food list or fixed portion size. While biochemical markers are not available for all nutrients, they are also valuable as reference instruments because they are not affected by errors in recall or inaccuracies in food composition data [12]. In addition to the relative validity of an FFQ administered at a single time point, assessing the reproducibility of an FFQ over a period of time indicates the stability of the estimates of long-term dietary exposures. This study aimed to assess the reproducibility and relative validity of a newly-developed FFQ for a multi-ethnic Asian population against two 24-h dietary recalls and nutritional biomarkers. Participants and Study Design Demographic quotas were constructed to reflect the age distribution of the adult Singapore population (aged 18-79 years), and to include equal numbers of each gender-ethnic group to enable the assessment of FFQ performance by ethnic group. We contacted participants of two population-based studies, the Singapore Population Health Study, and a national survey conducted by the Singapore Health Promotion Board. All participants had given consent to be re-contacted for future research studies. Telephone recruitment was conducted by trained interviewers. Interested participants were asked a series of screening questions to assess eligibility for the study. Participants were considered ineligible if they were residing in a household with another study participant, living in an institution, pregnant or breastfeeding, practicing/going to practice a special diet (e.g., for weight loss), unable to communicate in English or Mandarin, taking diuretic medication, diagnosed with kidney disease or a severe mental illness, or had recently changed their diet due to chronic medical conditions. We called 780 participants; 241 were uncontactable, 322 refused to participate, and 25 were ineligible. As a result, 192 participants were visited at their homes or a place of their convenience, where they provided written informed consent to take part in the study. Figure 1 summarizes the sequence of measurements. During the first study visit, the FFQ was interviewer-administered, and participants were issued with plastic screw-capped bottles (500 mL) and instructions on how to collect an overnight urine sample. During a second visit approximately one month later (median 35 days; interquartile range (IQR) 30-42 days), a fasted blood sample was drawn, filled urine bottles were collected, and a 24-h dietary recall interview was conducted. The third visit took place approximately four months after the second (median 3.75 months; IQR 3.1-4.7 months), and included the same measurements as the second visit. The fourth study visit took place approximately one month after the third (median 36 days; IQR 33-41 days) and approximately six months after the first (median 6.25 months; IQR 5.6-7.2 months). At this visit, the FFQ was interviewer-administered again, and socio-demographic information was collected. The study methodologies, protocols, and procedures were approved by the National University of Singapore Institutional Review Board (NUS IRB, reference code: B-14-082). 
Food Frequency Questionnaire The development of the FFQ has been described previously [11]. Briefly, in order to develop the food list, a data-driven approach was adopted, as described by Block et al. [13] using data from a nationally representative two-day 24-h dietary recall survey (n = 805) conducted in 2010 in adult Singapore residents aged 18-79 years. The FFQ consisted of a list of 163 food/beverage items with additional sub-questions on food sub-types and cooking methods. For each FFQ item, participants were asked how often they consumed one serving of the item, and were requested to provide the number of times either 'per day', 'per week' or 'per month'. For items consumed less than once per month, the response category 'Never/Rarely' was used. Participants were asked to consider their intake over the past year when answering. For seasonal foods, interviewers converted consumption frequency during the season to an average consumption frequency over a year. A standard portion size was given for each food item, which interviewers read out for every question. Visual aids relating to the standard portion sizes were shown. 
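As a rough, hypothetical illustration of how such frequency responses translate into the average daily amounts used in the calculations described in the next paragraph, consider the following R sketch. The items and portion sizes shown (apart from the 200 g rice bowl mentioned later in the paper) and the treatment of 'Never/Rarely' as zero are invented for the example and are not taken from the study.

```r
# Illustrative sketch (not the study's own code): convert reported FFQ frequencies
# to average daily gram intakes using standard portion sizes.
ffq <- data.frame(
  item      = c("apple", "plain rice", "full-cream milk", "durian"),
  frequency = c(3, 2, 1, 0),                              # reported number of times
  unit      = c("per week", "per day", "per month", "never/rarely"),
  serving_g = c(130, 200, 250, 150)                       # standard portion size, grams
)

per_day <- c("per day" = 1, "per week" = 1 / 7,
             "per month" = 1 / 30.4, "never/rarely" = 0)  # standardize to 'per day'

ffq$freq_per_day  <- ffq$frequency * per_day[ffq$unit]    # frequency per day
ffq$grams_per_day <- ffq$freq_per_day * ffq$serving_g     # average daily amount (g)
ffq
```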
A nutrient database for the FFQ was constructed using the nationally representative 24-h dietary recall data that was used for FFQ development. Each food/drink was tagged to an FFQ line item, then data were averaged to obtain nutrient profiles for each FFQ line item that reflected the relative consumption frequencies of each food subtype covered by the line item. FFQ response data were entered into in-house data entry software. Following data extraction and cleaning, frequencies were standardized to 'per day' and multiplied by standard serving sizes (grams). The intake frequencies of individual fruits and vegetables were scaled up or down to align with the response to summary questions on total fruit and vegetables. For example, the intake frequency of apples was multiplied by the intake frequency of total fruit (as reported in the summary question), then divided by the sum of intake frequencies of all of the individual fruit items. All of the food intake frequencies were then merged with the FFQ nutrient database. Daily totals for energy and nutrients were calculated, followed by macronutrient intakes as a percentage of energy and micronutrient intakes (and sugar and fiber) as amounts per 1000 kcal. 24-h Dietary Recall The United States Department of Agriculture (USDA) five-step multiple-pass approach [14] was adapted to include four rather than five passes; a quick list, detailed pass, forgotten foods, and final review. In the first pass, participants were asked to list all of the foods and drinks consumed in the previous 24 h from midnight to midnight. In the second pass, the interviewer went through the list chronologically, probing for food description details, preparation methods, and amounts consumed. Visual aids, which displayed photographs of utensils, were used to assist with portion size descriptions. The third pass attempted to elicit any forgotten items such as condiments or beverages. The fourth pass reviewed all of the information. The USDA's 'Time and occasion' step was omitted because it was found to be redundant in pre-tests, as the information was provided either spontaneously during the first pass, or in response to probing in the second pass. Interviewers aimed to conduct one recall on a working day and the other on a non-working day, to account for intra-individual variation between day type. Interviews were audio recorded for quality control purposes. Data were entered into in-house software that contained the Singapore Health Promotion Board's food composition database [15]. Biological Samples Participants were twice instructed to collect an overnight urine sample that included the first morning urine and any urine passed during the night. At the start of the study visit, participants were asked if their urine corresponded to the required collection period. If it did not, new urine bottles were issued and the visit was rearranged. Fasting venous blood was drawn and collected in ethylenediaminetetraacetic acid (EDTA) tubes and plain tubes. Both blood and urine samples were stored at −4 • C for a maximum of 24 h before processing, aliquoting into 1 mL tubes, and freezing at −80 • C. Blood was centrifuged to separate the plasma, and ascorbic acid preservative 2% was added to urine samples before freezing. Each participant's two plasma/urine samples collected at the two time points (i.e., one month and five months) were analyzed pairwise in the same batch to minimize the effect of inter-batch variation on biomarker measurements. 
For each analyte and each batch (which corresponded to one day of laboratory measurements), split blinded quality control samples were used to assess the coefficient of variation within and between batches. In order to evaluate the FFQ's ability to assess soy intake, the concentration of urinary metabolites daidzein, glycitein, genistein, and equol were measured using a HPLC ( Dionex UltiMate 3000 LC system, Thermo Fisher Scientific, Waltham, MA, USA) from an established LC-MS/MS method [18] using phenyl C6 chromatography coupled with photodiode array and fluorometric detections. The within-day and between-day CVs were as follows: daidzein (4.0 and 17.0%), equol (17.9 and 30.2%), glycitein (8.4 and 18.4%), and genistein (4.5 and 12.7%). Urinary creatinine was measured using the Jaffe method (reaction with alkaline picrate using an auto-analyzer). Urinary metabolite concentrations were expressed as nmol/mg creatinine. Statistical Analysis Correlation coefficients in the range of 0.4-0.6 have been reported between FFQ nutrient intake estimates and reference instruments [12]. To estimate the sample size required to distinguish between these values, the formula: was used, with Fisher's Z transformation of correlation coefficients, where σ 2 = 1 for the Z-scale [19]. For α = 0.05 and (1 − β) = 0.80 (i.e., 80% power), the number of required participants was 110. Within-person variation in the reference instrument indicates the need for a larger sample size [12]; therefore, we estimated n = 150 to be the minimum number of participants required. To account for an expected 20% attrition rate, we increased the target sample size to n = 192. Dietary recall data were checked for errors by examining outlying values. Weightings of 5.5/7 were applied to work days, and 1.5/7 to non-working days to reflect typical work patterns, before calculating average daily intakes. Macronutrients for both FFQ and dietary recall data were expressed as a percentage of total energy (%E), while micronutrients were expressed as amount per 1000 kcals. Both energy and nutrient variables were transformed using natural logs to obtain a normal distribution. For food groups, 1 g was added to remove zeros before applying transformations. Either natural log or square root transformations were used in analyses depending on which best improved normality. Plasma polyunsaturated fatty acids (PUFA), odd-chain saturated fatty acids (SFA) and EPA + DHA as a percentage of total plasma fatty acids (% total FA) were calculated, in order to evaluate the relative validity of the FFQ in assessing PUFA intake, dairy fat intake [20], and fish and seafood intake [21], respectively. Values were transformed using natural logs. All of the values were within four standard deviations of the mean. The mean concentration of analytes from the two time points were used in subsequent analyses. Descriptive statistics were calculated where ANOVA was used to compare continuous variables, and chi-squared was used to compare categorical variables. Pearson correlation coefficients were calculated to examine the associations between FFQ and dietary recall nutrient intakes, and between selected FFQ measures and urinary isoflavones, serum carotenoids, and plasma fatty acids. Users of carotenoid-containing supplements were excluded from analyses involving serum carotenoids, and users of phytoestrogen-containing supplements were excluded from analyses involving urinary isoflavones, because the supplement contents could not be quantified. 
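As a sketch of the sample-size calculation described at the start of this section — distinguishing correlations of 0.4 and 0.6 on Fisher's Z scale at α = 0.05 and 80% power — one standard formulation is shown below in R. It is a plausible reconstruction consistent with the quoted figure of about 110 participants, not necessarily the exact formula used in the study.

```r
# Hedged reconstruction: sample size needed to distinguish two correlation
# coefficients after Fisher's Z transformation (Var(Z) ~ 1/(n - 3)).
n_to_distinguish <- function(r1, r2, alpha = 0.05, power = 0.80) {
  z1 <- atanh(r1); z2 <- atanh(r2)                # Fisher's Z transforms
  z_crit <- qnorm(1 - alpha / 2) + qnorm(power)   # normal quantiles for alpha and power
  ceiling((z_crit / (z1 - z2))^2 + 3)
}
n_to_distinguish(0.4, 0.6)  # ~112; the ~110 quoted above differs only by rounding/convention
```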
Partial correlations were calculated, and then adjusted for ethnicity, age, sex, energy intake, and additionally total fat (as a percentage of energy intake) for carotenoid associations only. Values were adjusted for intra-individual variation using the intra-class correlation coefficient (ICC) and adopting the formula described by Rosner and Willett [22]. The ICCs between the two time points were calculated by ANOVA for biomarker concentrations and 24-h dietary recalls (using transformed energy-adjusted nutrient intakes) in order to deattenuate the correlation coefficients for intra-individual variation. The ICCs between the two time points were also calculated for the FFQ to examine reproducibility. IBM SPSS version 23 (IBM Corp, Armonk, NY, USA) was used for statistical analyses, with the level of significance set at 5%. Results Even distributions of ethnicity, gender and age groups were recruited, and most participants had education beyond secondary school level (Table 1). Of the 192 participants enrolled, 31 (16%) dropped out (6 Chinese, 8 Indians, and 17 Malays), and 161 completed the study. The ICC of the FFQ estimates, a measure of reproducibility, was above 0.70 for most foods and nutrients (Table 2), and values were similar among the ethnic groups except in Malays, where the ICCs for vitamin A, vitamin C, and fruit intake were below 0.40. The median correlation coefficient for the association between FFQ and 24-h dietary recall nutrient intakes was 0.40 for FFQ1 and 0.39 for FFQ2. Associations of the FFQ protein (r = 0.45-0.53) and total fat intakes (r = 0.35-0.39) as a percentage of energy were higher than for carbohydrates (r = 0.20-0.25) ( Table 3). For PUFA, there was a stronger association between FFQ2 and dietary recall values (r = 0.31) as compared with FFQ1 (r = 0.09). The magnitude of correlations for other nutrients was similar for FFQ1 and FFQ2. The highest correlation coefficients were observed for micronutrients, for example, between the FFQ and dietary recall values for calcium (r = 0.57-0.68) and iron (r = 0.50-0.64). Table 4 shows correlations between FFQ intake estimates and related biomarkers. Significant associations were observed between urinary isoflavone concentration and FFQ soy protein intake, serum total carotenoid concentration and FFQ fruit and vegetable intake, plasma EPA+DHA concentration and FFQ fish/seafood intake, and plasma odd chain SFA and FFQ dairy fat intake. Correlations for dairy fat, fruit, and soy protein with relevant biomarkers were higher for FFQ2 than for FFQ1, whereas correlations for fish/seafood were higher for FFQ1. Adjustments for demographic factors and correction for within-person variation in biomarkers did not substantially change the observed associations. The exclusion of fish oil supplement users did not affect the magnitude of associations (data not shown). Based on FFQ2, the weakest correlation was observed between intake and plasma levels of PUFA (r = 0.12), and the highest correlations were found between fish/seafood and plasma EPA + DHA (r = 0.36), and soy protein and isoflavones (r = 0.46). FA, fatty acid. Values were transformed using natural logs or square roots before analysis. 1 Adjusted for energy intake, ethnicity, age, and sex. Carotenoid associations additionally adjusted for total fat intake (as a % of energy). 2 Adjusted for energy intake, ethnicity, age, and sex, and corrected for intra-individual variation between two biomarker measurements. 
Carotenoid associations additionally adjusted for total fat intake (as a % of energy). 3 Phytoestrogen supplement users were excluded (FFQ1 n = 18; FFQ2 n = 17). 4 Carotenoid supplement users were excluded (FFQ1 n = 18; FFQ2 n = 17). * Correlation is statistically significant, p < 0.05. Stratified analyses highlighted differences between ethnic groups for correlations between FFQ estimated intakes and biomarkers (Table 5 and Supplementary Table S1). For instance, fruit and vegetable associations with total carotenoid concentration and with individual carotenoids was present among Chinese participants (from r = 0.30 for lutein, to r = 0.52 for alpha-carotene) and to some extent Malays although, besides lycopene, associations did not reach statistical significance, but were absent among Indians. There was some indication of an association between curry gravy and total carotenoid concentration in Chinese (r = 0.19) and Malays (r = 0.12), but this was not statistically significant (data not shown). The association between soy protein intake and urinary isoflavones was present only among Malays (r = 0.58) and Indians (r = 0.64), but was not significant among Chinese (r = 0.31). The association between plasma EPA + DHA concentrations with fish/seafood intake was consistent across ethnic groups (r = 0.28-0.49). Differences were also noted between males and females, with a stronger correlation for fruit and serum carotenoids in women and a stronger correlation for dairy fat and plasma odd chain SFA in men (Supplementary Table S2). Discussion The purpose of this study was to assess the reproducibility and relative validity of a new multi-ethnic FFQ in an urban Asian setting, using repeat FFQ administrations to assess reproducibility and 24-h dietary recalls and plasma and urine concentration biomarkers to assess relative validity. Our results suggest reasonable accuracy and good reproducibility for evaluated FFQ assessments of dietary intakes. In comparison with reproducibility studies of other FFQs, in which values ranged from 0.26-0.91 [23][24][25], we observed relatively high ICCs, which indicated the good reproducibility of our FFQ assessment. This implies that a single FFQ is likely to be sufficient to capture habitual dietary intake, although repeated FFQ administration may be required to accurately reflect dietary changes for cohort studies with long follow-up periods. It should be noted, however, that the timeframe of six months used in this study was shorter than that of other studies, which often used a 12-month interval. Although a six-month timeframe can introduce seasonal variation, such variation is minimal in Singapore for most foods. However, there may have been some effect related to various cultural celebrations that take place throughout the year, which may have resulted in minor attenuation of ICCs. Poorer reproducibility was observed in Malays for vitamins A and C, which may be explained by the low reproducibility for fruit intake. This could be related to some seasonal variation for fruit intake such as durian, or because of the Ramadan fasting month, which took place during the study period, as ethnic Malays are generally Muslims, and changes in dietary intakes have been observed [26]. 
Despite soy intakes in Singapore being lower than in some other Asian populations, such as Japan and parts of China [27], the strength of association observed between soy protein intake and urinary isoflavones (r = 0.46) was of a similar magnitude to that observed in these populations, for example in Shanghai men (r = 0.48) [28] and Japanese adults (r = 0.30-0.40) [29]. However, results may not be directly comparable, since we used soy protein intake as a proxy for total isoflavone intake, while other studies estimated the isoflavone contents of FFQ items. Although, with the limitation of wide variability in the isoflavone content of foods [30], it is not clear which approach is superior, as demonstrated in populations such as the United States, where both approaches have been used [31][32][33][34]. The relative validity of our FFQ in assessing fish and seafood intake, as indicated by the association with plasma EPA + DHA, was comparable to observations in other studies [21,35]. The associations between dairy fat intake and odd-chain SFAs were also comparable to the results of other studies [36,37]. The stronger association in males as compared with females, and Chinese and Indians as compared with Malays may be explained by differences in data capture and/or reporting accuracy by dairy product type. For instance, in Malays, condensed milk consumed in beverages was the top contributor (22%) to total dairy fat intake, whereas this item only contributed to 11% (Chinese) and 12% (Indian) of dairy fat intake in other groups (data not shown). In females, ice cream was the top contributor to dairy fat intake (22%, as compared with 13% in males). It may be that the intake frequency of ice cream is less well recalled than the intake frequency of milk, for example. On the other hand, it may be that a sub-type question on whether the ice cream was low fat would have allowed a more accurate assignment of a fat value, and thus would have improved the correlation. The magnitude of associations between serum carotenoids and FFQ fruit and vegetable intake are consistent with results observed in other studies [38,39]. However, substantial differences were observed between ethnic groups in our study, with high correlations in Chinese and the absence of association between FFQ fruit and vegetable intake and individual and total carotenoids in Indians. This lack of association in Indians is difficult to explain. It may be related to co-ingestion of carotenoid absorption enhancers such as fat or inhibitors such as fiber [12]. Fiber intake was significantly higher in Indians as compared with other ethnic groups. The result could also be related to the types of fruit and vegetables consumed by this group, although data on intakes did not indicate as such. Another possible explanation is the consumption of other carotenoid-containing foods by this group such as powdered spices or food colourants. Although no association between curry gravies and serum carotenoids was observed in Indians, this item represents only a fraction of the foods that potentially contain carotenoids. Cooking method may also play a role in these observations because of its effect on carotenoid bioavailability [12]. Another explanation could be related to the assessment of portion sizes for specific fruits and vegetables consumed by this group. Generally, the second FFQ performed better than the first FFQ, which is likely to be related to the matching timeframe of FFQ with the biomarker measurement. 
With the exception of weak correlations for carbohydrate and energy intakes, the relative validity of the FFQ in assessing nutrient intakes compared with the repeat 24-h dietary recalls was similar to results observed in other studies in Asia [23,40], and in a multi-ethnic Western population [41]. It is unclear why carbohydrate was less well assessed by the FFQ in comparison to nutrients, but may be related to portion size assessment. A study in China using portion size photographs along with an FFQ reported a correlation coefficient for carbohydrate of r = 0.53 between FFQ estimates and repeat 24-h dietary recalls [23]. The portion sizes used in this study were based on a combination of standard serving sizes, national level intake data, and weighed samples from food outlets. For example, the portion size used for plain rice was 1 rice bowl (200 g), the amount commonly served in food outlets. Although eating out is common, with 60% of Singapore residents eating out at least four times per week [7], serving sizes of rice at home may be variable. As other studies have indicated that collecting information on portion size does not necessarily improve FFQ estimates [42][43][44], the FFQ was not designed to collect this information. However, the importance of portion size assessment may warrant further investigation in this population. The strengths of this study include the diverse sample size and the collection of repeat reference measurements covering a range of dietary components. FFQ and 24-h dietary recall share some biases, such as the use of food composition data, dependency on participants recall, and susceptibility to desirability bias. Biological markers represent a more independent measure, although they too are not a 'gold standard' since they are affected by varying bioavailability. The biomarkers used were concentration biomarkers, and we did not use biomarkers that assess absolute intakes such as 24-h urinary nitrogen for study feasibility reasons, the implication being a reliance on the two-day 24-h dietary recall data to evaluate FFQ estimates for most nutrients. Smoking [45,46] and body weight status [47] may modulate serum carotenoid concentrations, but information on these factors was not collected in this study. We would expect stronger correlations between dietary fruit/vegetable consumption and carotenoids than what we observed had we accounted for smoking and body weight. Due to budget constraints, our sample size within each ethnic group was limited, meaning that some of our analyses may have been underpowered, and more weight should be placed on analyses of the whole sample. Another limitation that may have attenuated the observed correlation coefficients was the absence of data on the carotenoid content of individual foods. This results in a loss of precision due to the varying contents of various carotenoids in different fruits and vegetables; for example, some participants may have consumed a high intake of fruit, but only items with low carotenoid content. We noted good reproducibility of biomarker measurements apart from isoflavones, but poor reproducibility of all nutrients assessed by the 24-h dietary recall, and extremely poor when analyses were stratified by ethnic group, particularly in Malay participants (data not shown). This suggests that more than two repeats of the 24-h dietary recall may be required to obtain a better estimate of habitual intake using this measure, and two days of data was not a suitable reference method for all nutrients. 
Conclusions This study evaluated the relative validity and reproducibility of a 163-item semi-quantitative FFQ developed specifically for a multi-ethnic Asian population. The results showed that the FFQ performed similarly to other FFQs in the literature for most nutrients, although we identified exceptions for specific dietary intakes in certain ethnic groups. The Singaporean population is characterized by the consumption of a wide variety of foods from different Asian and international cuisines and a high frequency of eating out. Our results suggest that the assessment of dietary intakes in such modern cosmopolitan populations remains feasible for the purpose of ranking individuals' dietary exposures in epidemiological studies.
Detecting Associations between Archaeological Site Distributions and Landscape Features: A Monte Carlo Simulation Approach for the R Environment : Detecting association between archaeological sites and physical landscape elements like geological deposits, vegetation, drainage networks, or areas of modern disturbance like mines or quarries is a key goal of archaeological projects. This goal is complicated by the incomplete nature of the archaeological record, the high degree of uncertainty of typical point distribution patterns, and, in the case of deeply buried archaeological sites, the absence of reliable information about the ancient landscape itself. Standard statistical approaches may not be applicable (e.g., X 2 test) or are di ffi cult to apply correctly (regression analysis). Monte Carlo simulation, devised in the late 1940s by mathematical physicists, o ff ers a way to approach this problem. In this paper, we apply a Monte Carlo approach to test for association between Lower and Middle Palaeolithic sites in Hampshire and Sussex, UK, and quarries recorded on historical maps. We code our approach in the popular ‘R’ software environment, describing our methods step-by-step and providing complete scripts so others can apply our method to their own cases. Association between sites and quarries is clearly shown. We suggest ways to develop the approach further, e.g., for detecting associations between sites or artefacts and remotely-sensed deposits or features, e.g., from aerial photographs or geophysical survey. Introduction Associating archaeological sites with elements of the physical landscape (geological deposits, vegetation, drainage networks) are key goals of archaeological projects. This information is desirable for a variety of reasons, e.g., to understand the relationship of past human settlements to agricultural land in their territory (e.g., [1]), to trace patterns of hominid colonisation (e.g., [2]) or to identify key landforms or landscape types for future research or preservation from development (e.g., [3]). However, the task is not an easy one and a wide range of difficulties are typically encountered. A key problem relates to the incompleteness of the archaeological record as a result of differential survival or detection, so apparent associations with particular landscape types or features may not be what they seem-for example, sites discovered by aerial photography are often strongly biased towards geological deposits where cropmarks and soilmarks are most easily formed (see, e.g., [4]), which may not relate to the past human pattern of exploitation of the landscape. For older archaeological periods, the problems are particularly acute. The absence of lasting structural remains throughout most of Prehistory means that locating traces of Prehistoric archaeology in the landscape is typically challenging. Increasingly, archaeologists are turning to geoarchaeology, deposit modelling [5] and geophysics in order to ascertain what the relationships are between artefacts and the contexts in which they are found. The basic premise on which such an approach is formulated is that the sites/artefacts lie within, and are associated with, palaeolandscapes [6]. Today these palaeolandscapes only exist as fragments of once extant geographies. Consequently, an understanding of the palaeolandscape context of the archaeological remains allows our interpretive horizons for recovered material to be enhanced and developed. 
Insight into the likely locations in the landscape in which we may expect to find evidence for our earliest ancestors [7] is highlighted as an important analytical tool. However, undertaking such investigations is far from easy. Besides the difficulty of reconstructing lost geographies from fragmentary archaeological remains lies the major fundamental issue of relating artefacts to sequences. Statistical analysis and modelling approaches like regression that explore the correlation between sites and explanatory variables are not intuitive to researchers inexperienced in statistics, and may feel like overkill for simple cases. Standard statistical tests of association (e.g., the X 2 test) do not seem suitable, as the expected frequency of samples in each class is a function of the proportional size of the class, rather than the proportion of landscape area occupied by the landscape features. A further problem, which frequently confounds analysis of archaeological site distribution data, is the low degree of accuracy and precision of the point data within a dataset. This is especially a problem with large, integrated databases, like those maintained under the Valetta Convention, for example, the UK's county-level Historic Environment Records (HER). This is usually due to the low degree of precision of the original recorded coordinates (e.g., UK six-figure grid references), which are often derived from record cards or paper maps, or simply from the lack of precise knowledge about the location. While these problems are well known, satisfactory solutions remain elusive. In this paper, we propose an approach which addresses these difficulties using Monte Carlo simulation to determine the probability that the observed patterns can be replicated by chance. This involves generating large numbers of randomly located simulated sites incorporating the appropriate level of spatial uncertainty and computing the coincidence between the feature layer of interest and the simulated sites. Though we code the problem in the popular Free and Open Source "R" software [8], the procedure is straightforward enough to be carried out with a desktop GIS program and a spreadsheet. The scripts used for the analysis are provided in the appendices and as supplementary data to the paper so that interested readers can use them to carry out their own analyses. Overview Archaeological site distributions are significant because they can tell us about past human activity in the landscape. However, making robust inferences from archaeological site distribution data is not straightforward. The problems are discussed in classic texts (e.g., [9,10]). Of the wide range of potentially useful statistical approaches, few seem immediately applicable to detecting association between archaeological point patterns and landscape features-for example, permutation tests, while useful for intra-site spatial analysis [11], would seem to require pairs of points (e.g., two different types of artefact) rather than points and features with which they might be associated, though a customized test could no doubt be developed. 
Likewise, cluster analysis, both in its simplest forms (Clark and Evans test, quadrat analysis; see, e.g., [12]) and in more advanced approaches capable of dealing with uncertainty at spatial scales, like local density analysis [13] and kernel density analysis [14], is more suitable for developing or testing hypotheses about the spatial organisation of archaeological occupation areas than for testing the relationship between sites and particular deposits or features. What remains, once the range of applicable techniques has been carefully filtered, are very simple, classic tests of association like the χ² test, which are not really appropriate for spatial distributions, and more complicated approaches, like regression analysis, which require experience and care in application in order not to fall into one of a number of well-documented traps. Monte Carlo simulation, difficult before the advent of powerful computers, is something of an intermediate approach, as it allows a test of association of one spatial data set against another in a way that is fast, intuitive and robust, and does not require immersion in statistical modelling. Monte Carlo approaches have been variously applied in archaeological contexts (see, e.g., [15,16]) but are not as widely used as they might be, probably because the procedure is not integrated as standard in GIS software, and a straightforward description of the process with worked examples has so far not been published. In this paper, we seek to address this gap by showing the utility of the approach through application to a simple research question: whether the spatial distribution of Lower and Middle Palaeolithic sites in Hampshire and Sussex can be shown to be associated with 19th- and early 20th-century quarries recorded on historical topographic maps. The paper is structured as follows. In the next section, we discuss the origins of Monte Carlo simulation and explain its utility for investigation of the association of archaeological sites with physical landscape features. We then describe how the method can be applied by taking the reader carefully through the steps in the R software environment, which is increasingly popular for spatial analysis [8]. We then link these steps into an R script (see Appendices A-C) and apply the script to a case study example dataset, demonstrating a clear association between Palaeolithic sites and quarries. Finally, we discuss the potential of applying this approach to other cases, and finish with some simple recommendations to enable our method to be improved.

Monte Carlo Simulation
The origins of the Monte Carlo method in modern science can be found in the work of Metropolis and Ulam [17], and Ulam [18]. Key to the work of these authors was the observation that complex phenomena could not usefully be described using ordinary methods of mathematical analysis, as the resulting system of integral or integro-differential equations, with numerous interdependent variables, would quickly become too complicated to solve. Alternatively, overarching theoretical approaches, such as statistical mechanics, suitable for analysis of systems comprising enormous numbers of objects, were insufficiently detailed to be of general use for description of smaller systems. Ulam and his colleagues therefore proposed an alternative approach, called the Monte Carlo method [17], in which the application of probability theory allowed the outcome of a very complex set of circumstances to be determined correctly with great probability [18].
Though the approach was initially proposed for problems of mathematical physics, Monte Carlo-type approaches are clearly applicable to geographical problems. Just as particle collisions inside cosmic ray showers would be expected to occur with greater or lesser probability dependent on particular sets of circumstances, so too will the intersection or coincidence of geographical or archaeological features expressed by map geometry inside a GIS (a fundamental operation of geographical analysis) be less or more likely, depending on the characteristics of the map space and the individual map objects. [19] provide examples of solutions to intersection probability problems for map object types such as lines, circles and rectangles, in which probabilities can be calculated through relatively simple equations. More complex feature types, such as the irregular polygon areas derived from map representations of natural or landscape formations like vegetation, soils or geology, are much more difficult to express through equations. In such cases, as these authors point out, Monte Carlo simulations, "playing a game of chance" in the words of [17], offer the possibility of obtaining estimates for intersection probabilities. Monte Carlo approaches have been used in archaeological site location analyses in various ways. [15] used a Monte Carlo simulation to compare the area of visible landscape, known as the viewshed, from Bronze Age cairns on the Isle of Mull, Scotland, with the viewsheds from randomly generated non-sites; such approaches are essential in order to confirm that the alleged superior visibility or intervisibility of archaeological sites cannot have arisen by chance alone [20]. In a highly innovative study, [16] used a Monte Carlo approach to test hypotheses about changes in mid-Holocene hunter-gatherer settlement patterns in Japan by comparing spatio-temporal patterns based on real data with large numbers of randomly generated hypothetical spatio-temporal patterns. [21] used a pair correlation function to compare clustering of archaeological finds in highly irregular spaces (a tomb complex) with clusters of finds that had been randomly assigned through a Monte Carlo procedure. In addition to these highly developed approaches, Monte Carlo simulation seems ideally suited to the apparently simpler task of detecting associations between archaeological sites and mapped landscape features like geology, land cover or vegetation types. It does not seem easily possible to directly calculate the probability that discrete circles representing archaeological site locations will intersect with the many irregular polygons resulting from mapping these kinds of landscape features. This is a significantly more complex problem than any of the examples given by [19], which deal entirely with probabilities of intersection of regular geometrical features such as lines, circles and rectangles, within other regular objects (such as map tile boundaries) in Cartesian space. Of course, any complex polygon in a vector dataset can be generalised as a series of smaller rectangles, circles or triangles, but to do this across the whole map area, without significant loss in accuracy (even before calculation of all individual intersections and production of summed probabilities), seems like a very time-consuming exercise. Instead, the approach we demonstrate in this paper is to discover the likelihood that a given set of archaeological sites will intersect the irregular map of features we are interested in by "playing a game of chance".
The approach involves generating, at random, the same number of sites as there are real archaeological sites within the same study area, counting the number of times the sites intersect the features layer, and repeating the analysis multiple times, in order to obtain a satisfactory degree of confidence in the outcome. For example, by repeating the simulation 100 times, we obtain an estimate of the probability that the coincidence frequency of the real sites could have arisen by chance, in the form of a p-value familiar to users of conventional statistical tests; i.e., if, after 100 runs, it is not possible to attain the same, or a larger, number of sites coincident with the relevant landscape features as in the real dataset, we can conclude that our sites are likely to be associated with the features of interest at a significance level of p = 0.01 [22].

Materials and Methods
To apply the Monte Carlo approach described above to test the association of archaeological sites with particular landscape features or units, we need to answer two key questions: (1) How many archaeological sites would we expect to find per landscape unit, in the case that there is no association between the sites and the landscape element we wish to investigate? (2) Does our sample differ from such an expected distribution in a way that would lead us to conclude that our sites are associated with or attracted to the landscape features of interest?

Study Area
In line with the above discussion, we demonstrate the application of our approach with reference to the Lower and Middle Palaeolithic findspots database compiled by the Palaeolithic Archaeology of the Sussex/Hampshire Coastal Corridor project (PASHCC; [23]) and quarries extant between 1866 and 1924, digitised from historic editions of the Ordnance Survey maps. The original PASHCC project area comprised the lowland coastal corridor zone between Southampton (Hampshire) and Brighton (East Sussex) (Figure 1a,b), UK. For the purposes of the investigation presented here, the study area comprised the western half of the PASHCC project area, the Eastern Solent Basin (Figure 1c), which contains the densest concentration of Palaeolithic sites in the PASHCC project area. Evans [24] recorded several Palaeolithic findspots from the Pleistocene terrace gravels of the drowned Solent River around Southampton and east towards Gosport ([25], p. 26), and throughout the second half of the nineteenth and first part of the twentieth centuries the activities of observant amateur antiquarians brought Lower and Middle Palaeolithic artefacts to light in increasing numbers from the then hand-worked gravel quarries and brick pits of the Eastern Solent Basin. However, since around 1950, the recovery of artefacts and investigation of artefact-bearing sequences have dramatically declined (though see, e.g., [26]), something that is, in the opinion of most (e.g., [27], p. 49), a direct result of the increasing mechanisation of extractive industry after ca. 1930. In the Eastern Solent study area investigated here, the PASHCC project documented 57 discrete Lower or Middle Palaeolithic sites (out of a total of 98 in the whole PASHCC study area) recorded in museum collections and Historic Environment Records databases. Sites ranged from single lithic findspots with no accompanying information to important sites like Warsash, where large lithic assemblages have been recovered and stratigraphic sequences have lately been reconstructed [28] (Figure 1d).
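As a rough first answer to question (1) above, when sites are located independently of quarrying, the expected number of coincident sites is approximately the number of sites multiplied by the fraction of the study area occupied by quarry polygons. The following is a minimal R sketch of this calculation (our own illustration, not part of the published MCSites scripts), using the example file names from Appendix A and ignoring, for the moment, the uncertainty buffers introduced below:

library(sf)
study <- st_read("data/esolent_studyarea.shp")      ## study area boundary
feats <- st_read("data/all_clipped_quarries.shp")   ## digitised quarry polygons
sites <- st_read("data/example_sites.shp")          ## recorded findspots
## Fraction of the study area covered by quarries, and the number of findspots
## expected to fall on quarries if site locations were random.
frac_quarried <- as.numeric(sum(st_area(feats)) / sum(st_area(study)))
expected_hits <- nrow(sites) * frac_quarried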
For the purposes of the analysis conducted in this paper, the terms "site" and "findspot" are used interchangeably throughout the manuscript, and refer only to the presence of one or more lithic artefacts recorded in the PASHCC database. The whole lithic artefact database is freely available for download from the Archaeology Data Service (see Supplementary Materials); sample sites for the study region and the polygon layer of digitised quarries are also freely available for download at the link provided by the authors (see Supplementary Materials).

Study Aims
Given that the sites recorded in the PASHCC database are known to be mostly derived from quarrying activities in the region prior to mechanisation of the industry, we would expect to find an association between known locations of quarries that were worked during the period when the artefacts were collected and the recorded locations of the artefacts. However, the expectation was that such a hypothesis would be difficult to prove using a simple test of spatial coincidence due to the large number of confounding factors involved. Briefly, these can be defined as follows:
• The location of the Palaeolithic finds is, in many cases, uncertain.
• Many quarries are small; the fairly low precision of the recording of findspots means that finds might not coincide with the mapped location of the quarry from which they probably came.
• Not all quarrying activities are recorded on the maps.
• The fourth edition map was only partially available in digital form at the time the digitising work was undertaken. Many tiles and areas of this map were missing from the dataset used, so later quarries are likely to have been omitted.
• Not all Palaeolithic sites are directly derived from quarries. Some have arrived in the hands of collectors as a result of natural processes, such as erosion.
• Quarries have been digitised without consideration for their depth or purpose, as determining this information would have been unreasonably time-consuming. Consequently, the dataset includes a number of quarries which do not impact any Pleistocene deposits (e.g., hilltop chalk quarries).
In this sense, the Monte Carlo simulation approach provides a useful way of testing the widely shared assumption that the spatial distribution of Palaeolithic sites of this period in the PASHCC study area (Figure 1) is mostly due to the activities of collectors or fieldworkers in areas of quarrying, without amassing a vast amount of knowledge about the specific context of each site. If we find no association between known locations of quarries that were worked during the period when the artefacts were collected and the recorded locations of the artefacts, we can suspect that the provenance of the finds is less exact than expected, or that the PASHCC collection contains many artefacts recovered through processes other than quarrying (river or coastal erosion, subsidence, urban development, etc.). However, if we do find a strong association, we provide good confirmation that, as a whole, the location of the recorded finds is broadly accurate, despite the clear confounding factors noted above. The aim of our investigation was, therefore, to test empirically this widely shared assumption about the spatial distribution of the Palaeolithic sites in the study area, and by doing so, to demonstrate the usefulness of our proposed method. In line with conventional statistical approaches, we can formulate a null hypothesis, which our analysis can test. This hypothesis broadly states that Palaeolithic sites will coincide no more often with quarrying areas than the same number of randomly-located sites. In the next sections, we describe in detail the steps undertaken to test this hypothesis using Monte Carlo simulation in R software.

Workflow
The procedure comprises five stages: (1) the relevant features of interest are digitised in suitable GIS software, e.g., ArcGIS, QGIS or similar; (2) buffers are added to the points to reflect the uncertainty derived from the degree of precision associated with the coordinates of the points; (3) a set of random points is generated within the same study area; (4) the random points are overlain onto the features map and the number of overlapping points is counted; and (5) the simulation is repeated 100 times, each time using a different set of random points. The analysis can be implemented in two ways, according to the user's preference: (i) by simply using the R software to generate the random sites and export them in a GIS-compatible format, and then carrying out the overlay operations manually in GIS, e.g., using Select By Location in ArcGIS or (for raster data) the r.coin module in GRASS software; this is repeated as many times as required, depending on the significance value chosen as a cutoff; or (ii) by carrying out the entire analysis from start to finish inside R. While the second of these two options is very much quicker and more efficient, the first may be preferred by users of GIS who are unfamiliar with script programming or the R environment, and also serves to illustrate the procedure more clearly. For users who prefer to avoid R entirely, a prototype of the procedure described here programmed in Microsoft Windows' Visual Basic scripting language (VBScript) can be found in the Appendices of [29]. For advanced GIS users, scripting can be avoided entirely. For example, in ArcGIS Pro, the "Create Random Points (Data Management)" tool can be used to create random points within the study area boundary. The "Select by Location" tool can then be used to select the recorded Palaeolithic sites and the random sites that intersect with the quarries layer.
The ModelBuilder can be used to build these two operations into a loop to run the Monte Carlo test the required number of times.

Georeferencing and Digitising of Quarries
This first phase of the analysis involved on-screen (heads-up) digitising of all identifiable quarries recorded on four editions of the Ordnance Survey map (Figure 1): First or Old edition (1866-9), Second or New edition (1896-7), Third edition (1909-11), and Fourth edition (1924), at an approximate scale of 1:10,000 (6" to 1 mile). Digitising was carried out in vector software and exported to raster format. We used ArcView GIS 3.2, an outdated package which nonetheless remains useful for simple GIS operations.

Adding Buffers to Account for Uncertainty
Uncertainty in location is a very common issue with archaeological data, and can be especially problematic when imprecise coordinate information is used at large spatial scales. Findspot locations in our study were recorded as standard Ordnance Survey of Great Britain (OSGB) six-figure grid references (e.g., #29, 645,065), expanded to enable them to be plotted in map space by adding the easting grid offset of 400,000 to the first three figures (giving 464,500) and the northing grid offset of 100,000 to the last three figures (giving 106,500). Six-figure grid references do not determine a point at all; rather, they define a square of 100 m sides within which the findspot in question must lie. In fact, the findspot must lie not only within this 100 m square, but inside a circle of 100 m diameter within the square, whose centre point is defined by the intersection of each 100 m gridline. The region must be a 100 m-diameter circle, as any diagonal of a 100 m square would fall within the interval 100 ≤ X ≤ 141.42, and the findspot cannot lie further than 100 m from the intersection of the gridlines. Thus, to account for this in the analysis, 100 m diameter buffers were generated around the Palaeolithic sites (Figure 2). Though this was done in R (Appendices A and B), it can also easily be accomplished in any standard GIS software.

Random Sites Generation
Through the st_sample operation in the sf package [30], we generate the same number of random points as there are sites. The st_sample operation uses the uniform distribution (the generated pseudorandom values are uniformly, not normally, distributed), in line with the csr command from the splancs package [31] and standard reference texts on Monte Carlo simulation (e.g., [32]). To create our random sites with the same degree of uncertainty as our real sites, we add 100 m diameter buffers to the randomly generated points (Figure 2).
Figure 2. Observed sites, random sites and uncertainty buffers. Inner (100 m diameter) buffers were used in this analysis. The greater the uncertainty, the larger the buffer, and hence the higher the probability of intersection between sites and quarries.

Overlay and Frequency Determination
The final part of the analysis involves overlaying the set of 100 m diameter buffers derived from the recorded sites onto the quarries layer and then counting the number of times the site buffers intersect the quarries. The process is then repeated, but this time using a set of 100 m diameter buffers derived from the random sites generated as described above. To turn this procedure into a Monte Carlo simulation, it is now only necessary to build the random point generation procedure into a loop that simulates sites enough times for the appropriate statistical confidence level to be reached.
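A minimal R sketch of the buffering, random generation and counting steps just described (our own illustration using the sf package, not the MCSites tool itself; it assumes the layers sites, feats and study have been read with st_read() as in Appendix A, with distances in metres of the OSGB grid, so a 50 m radius gives the 100 m diameter buffers used here):

library(sf)

## Stage 2: 100 m diameter uncertainty buffers around the recorded findspots
## (a six-figure reference such as 645 065 expands to easting 400000 + 64500
## and northing 100000 + 6500, as described above).
buf_radius <- 50
obs_buf  <- st_buffer(sites, dist = buf_radius)
obs_hits <- sum(lengths(st_intersects(obs_buf, feats)) > 0)

## Stages 3-5: the same number of random points, buffered in the same way,
## counted against the quarries layer, and repeated n_runs times.
n_runs   <- 100
sim_hits <- replicate(n_runs, {
  rnd     <- st_sample(study, size = nrow(sites))   ## uniform random points
  rnd_buf <- st_buffer(rnd, dist = buf_radius)
  sum(lengths(st_intersects(rnd_buf, feats)) > 0)
})

obs_hits       ## coincidences for the recorded Palaeolithic sites
max(sim_hits)  ## largest coincidence count from any random run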
In this case, we choose to run the analysis 100 times, equivalent to p = 0.01.

Script Programming
The above-described steps 3.3.1-3.3.4 were incorporated into two R scripts which run successively and together comprise the MCSites tool. The tool, together with example data, can be accessed at https://drive.google.com/uc?export=download&id=1m-7KcIUCD-Zx3dQV_NCTBMGYT3Sr_mT4. Copy and paste this link into the address bar of your browser and you will be able to download the "MCSites.zip" file. All that is needed to run the tool is the multi-platform software R, which can be installed from https://www.r-project.org/. Once R is installed, the R software package sf will also need to be installed from the command line, as follows:
install.packages("sf")
If you copy and paste from this document, take care to replace the inverted commas with ones from your keyboard. The package tcltk is also needed, but is usually available as part of base R and does not need to be installed separately. Installed packages can be loaded with the commands:
library(sf)
library(tcltk)
However, the necessary packages, once installed as described above, will be automatically loaded by the script file. To run the analysis from the script file provided at the link described above, open a new R window (or instance of RStudio), set the R working directory to the MCSites directory and execute the command source("scripts/main.R"):
setwd("C:\\Users\\rh\\Documents\\MCSites")  ## Windows system
setwd("/home/oct/MCSites")                  ## Linux system
source("scripts/main.R")
The script will run giving a series of prompts to import the relevant data and carry out the Monte Carlo simulation. To execute the Monte Carlo analysis successfully, the user needs to do only what is described in this Section 3.3.5. More details are provided for advanced users, or those interested in the process of code development, in Appendix A.

Results
Given the level of uncertainty expected in the dataset (Section 3.3.2), the results are surprisingly unambiguous. Figure 3 shows clearly that the null hypothesis (Section 3.2) can be firmly rejected. For the 100-run test described in Section 3.3.4, the highest number of coincident random sites was 6, less than one third of the number of Palaeolithic sites that are coincident with the same map areas. On this basis, we can suggest at the 1% significance level (a result of this magnitude would not be expected to occur by chance more often than once in every 100 passes) that there is a significant association between Palaeolithic sites within the study area and quarries. To confirm this result, the test was repeated for 1000 runs on five separate occasions (Table 1). This had the effect of increasing the maximum number of hits in the random sites datasets to 8, but still did not approach the number of recorded sites coincident with the quarries layer. To show the effect on the test of a higher level of uncertainty in site location, the buffer diameter was increased from 100 m to 200 m, and a further three 1000-run tests were carried out (Table 1). Even allowing for a much-higher-than-expected uncertainty, the results still firmly suggest an association between recorded Palaeolithic sites and quarries.
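For readers reproducing the analysis, the significance level quoted above can be recovered directly from the simulated counts. A minimal sketch, reusing the hypothetical obs_hits and sim_hits objects from the earlier sketch (the +1 terms count the observed dataset itself as one realisation of the null):

p_value <- (sum(sim_hits >= obs_hits) + 1) / (length(sim_hits) + 1)
p_value  ## with 100 runs and no simulated count reaching obs_hits, p is about 0.01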
Figure 3. Monte Carlo simulation results (see Table 1, below). Test No. 1001 is the 57 recorded Palaeolithic sites. Site/quarries intersection counts for the Monte Carlo simulation are shown in blue, recorded sites in red.
Table 1. Monte Carlo simulation test results for 8 simulation passes of 1000 runs. Note how increasing the size of the uncertainty buffer increases the number of hits for both recorded sites and randomly simulated sites.
While we cannot prove for certain that the location of the quarries is the cause of the spatial pattern (correlation is not causation), we show clearly that it is not likely to be independent from it. This offers a useful starting point for more in-depth examination of the individual sites, as well as offering a reminder that, for this ancient archaeological period, the location of archaeological finds is only very rarely an indicator of nearby hominin settlement activity.

Concluding Discussions
Though we have demonstrated our Monte Carlo testing approach with reference to quarries, the method is clearly applicable to any other kind of landscape feature for which an association with archaeological sites could be postulated. This might include other archaeological features like ditches or boundaries (i.e., testing to see if finds are associated with them) and environmental variables like superficial geology, soil formations, or vegetation. It provides a bridge between somewhat antiquated aspatial techniques like the χ² test, which are difficult to successfully apply in a spatial context, and more developed analyses involving various kinds of regression. For testing association between archaeological sites and multiple explanatory variables, regression approaches are likely to be superior to this simple Monte Carlo test. However, the need to obtain and process suitable data, and the absence of obviously applicable explanatory variables for archaeological periods where climate, vegetation and landforms are all unrecognisably different from today, mean that the Monte Carlo simulation approach described here is useful in a very wide variety of analysis contexts. For example, one very simple application, analogous to the case presented here, might be to investigate the association between cropland and archaeological sites detected by aerial photography, e.g., in the UK (e.g., [33]). As hot, dry years are known to provide ideal conditions for detection of archaeological sites from cropmarks on arable land, a clear association between arable land and archaeological sites might be expected. On the other hand, this would be expected to show some variation, depending on the type of crop and the type of site. A very strong association between sites and arable land might indicate a significant under-representation of archaeological sites on different land cover types, e.g., pasture, allowing different kinds of aerial remote sensing techniques (e.g., LIDAR) to be targeted to these areas. The approach described also offers a way to account for spatial uncertainty in the form of imprecision in site location.
Clearly, this is a less-than-perfect approximation because, in the absence of more precise coordinate information, the site must be represented as a probability zone rather than a single point, which makes it more likely that it will coincide with the selected features. However, if the same method is applied to the random points, they are also more likely to coincide with the selected features, and the two effects would be expected to cancel each other out. Despite the imperfect nature of the proposed solution, we feel that this approach is more honest than simply ignoring the issue. It is especially relevant for modern digital datasets which superimpose many objects recorded at differing scales and degrees of precision, like the point patterns produced by plotting HER records held by county archaeological offices in the UK. Finally, the approach described accounts only for spatial uncertainty. However, other kinds of uncertainty in archaeological data, such as temporal uncertainty, are also amenable to Monte Carlo approaches [34], and the development of a hybrid method that simultaneously addresses both spatial and temporal uncertainty would be an interesting direction for future work. GIS approaches are widely used by archaeologists, and the R environment is an increasingly popular analysis tool. However, the GIS capability of R, recently enhanced with the addition of the simple features (sf) package [30], which we make use of in this analysis, remains under-appreciated. R is continually expanding and improving, and operations that were once convoluted or complex have become much easier over time. For example, Orton [35], describing the difficulty in finding software for point pattern analysis, lamented being unable to use the splancs package [36], because it required the purchase of S+, then an expensive proprietary statistics package. splancs has since been ported to R, and an early version of the Monte Carlo script described here made use of it. Mostly, the barrier to entry for archaeologists remains the perception that R is difficult to use (see, e.g., [37]). We hope to have dispelled this impression with the simple step-by-step "routemap" approach given here. For readers who wish to explore the analysis of archaeological point distributions in more detail, we recommend the spatstat package [38]. A final consideration, of key relevance to the topic of this Special Issue, is the high potential applicability of the proposed method as a rapid analysis tool for detection of association between archaeological surface or near-surface finds (e.g., recorded by an HER database or recovered from topsoil by metal detector survey or fieldwalking) and geophysical survey data. In its simplest form, as demonstrated here, the approach allows for very rapid comparison of a finds distribution against a map of geophysical anomalies. Clearly, if we are looking to identify an important archaeological feature, for example the vicus associated with a Roman fort, being able to show association between particular finds and geophysical survey results is an important first step in any analysis. However, a more sophisticated application of the approach might seek to match scatters of finds of different dates with different geophysical plots corresponding to different techniques (ground-penetrating radar, magnetometer, etc.), or different depths.
Though we do not claim great sophistication (for more advanced applications of Monte Carlo simulation in archaeology, see, e.g., [16,21]), the approach we have demonstrated in this paper is statistically robust, free of the complexities associated with regression analysis, and simple to apply. This paper, therefore, hopes to serve as a starting point for the development of more detailed studies incorporating Monte Carlo testing approaches for analysing patterns of association between archaeological sites and landscapes. Its potential seems particularly strong where landscapes are deeply buried or multiple uncertain interpretations exist, e.g., from palaeo-environmental reconstructions or results of geophysical surveys. We call for this approach to be more widely used in these contexts. The difficulties identified in the introduction to this article, especially around the potential of regional archaeological data to elucidate patterns of human activity in the landscape given the incompleteness of the archaeological record, deserve closer attention. Future work might usefully concentrate on the development of a toolbox of statistical approaches to address these issues more broadly.

Funding: The archaeological data used in this analysis were collected by the Palaeolithic Archaeology of the Sussex/Hampshire Coastal Corridor project, which ran between 2005 and 2007 and was funded by the Aggregates Levy Sustainability Fund (ALSF) as distributed by English Heritage (now Historic England). No funding was received for the research described in this paper, for the preparation of this manuscript for publication, or for the publication costs incurred.
Acknowledgments: The author(s) are grateful to COST Action SAGA: The Soil Science & Archaeo-Geophysics Alliance (CA17131, www.saga-cost.eu), supported by COST (European Cooperation in Science and Technology), for the opportunity to participate in the Special Issue in which this paper appears.
Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Technical Description of the MCSites Tool
Appendix A.1. Data Import and Analysis Preparation in R
Geographical vector data, e.g., in shapefile or geopackage format, can be imported into R in several ways. Here, we choose to do it with the sf package, as this package has recently been developed with the aim of unifying the GIS functionality provided across multiple packages and is now widely recommended as a replacement for earlier packages like maptools. The sample dataset provided with this paper (see additional data) is an ESRI shapefile of points expressed as xy coordinates under the UK Ordnance Survey national grid coordinate system (OSGB36). Points representing archaeological sites are imported as follows:
sites <- st_read("data/example_sites.shp")
plot(st_geometry(sites), pch = 18, col = "red")
The same process is repeated for the study area and for the features with which the sites will be overlapped, i.e., the quarries digitised from historic Ordnance Survey mapping:
study <- st_read("data/esolent_studyarea.shp")
feats <- st_read("data/all_clipped_quarries.shp")
These operations are carried out through a series of dialogs (Figure A1) so the user does not need to execute the code by hand.
Figure A1. "Open" dialog, with shapefile selected.
Once each of the three necessary layers (study area boundary, features to intersect with archaeological sites, archaeological sites) has been loaded, the "main.R" script file will plot each of the layers together, with sites shown in red, and prompt the user to continue. If the "OK" option is selected, R will plot the selected layers (Figure A2) and proceed to the generation of random sites through the script file "rangen.R". If the "Cancel" option is selected, the dialog will close.
Figure A2. MCSites tool overlay maps and "Proceed" dialog.

Appendix A.2. Generation of Random Points Inside the Study Area
Next, we generate the same number of random points as there are sites inside the study area boundary. This is accomplished using the st_sample command from the sf package, which applies a uniform sampling strategy, as follows:
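The command itself is not reproduced in this extract; a minimal sketch of what it plausibly looks like, based on the description in the Random Sites Generation section above (our reconstruction, not necessarily the exact contents of rangen.R):

rnd     <- st_sample(study, size = nrow(sites))  ## same number of uniform random points as sites
rnd_buf <- st_buffer(rnd, dist = 50)             ## 100 m diameter buffers, as for the recorded findspots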
2020-08-20T10:08:16.068Z
2020-08-01T00:00:00.000
{ "year": 2020, "sha1": "c98b901a6811a57230cf0ed45e0ce3ed89dfe60e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3263/10/9/326/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "4dbd104138903a58f2c6720bd5789c8aa105dd3e", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Geology" ] }
36828018
pes2o/s2orc
v3-fos-license
Mechanism of Molecular Orientation by Single-cycle Pulses Significant molecular orientation can be achieved by time-symmetric single-cycle pulses of zero area, in the THz region. We show that in spite of the existence of a combined time-space symmetry operation, not only large peak instantaneous orientations but also nonzero time-average orientations over a rotational period can be obtained. We show that this unexpected phenomenon is due to interferences among eigenstates of the time-evolution operator, as was described previously for transport phenomena in quantum ratchets. This mechanism also works for sequences of identical pulses, spanning a rotational period. This fact can be used to obtain a net average molecular orientation regardless of the magnitude of the rotational constant. As far as we know, only a paper by Sugny et al. [27], and a recent work by Fleischer et al. [28] have studied molecular orientation with pulses of zero area. Numerical evidence was presented in [27] showing that a significant instantaneous orientation can be produced during and after the pulse is over. The main purpose of the work by Sugny et al. [27] was testing the efficiency of a time-dependent unitary perturbation theory, and therefore details of the wave packets involved in the orientation phenomenon were not analyzed. On the other hand, a fairly complete numerical analysis of the molecule and field parameters needed to produce maximum orientation was given. Recently, it has been experimentally demonstrated [28], for the first time, that intense single-cycle THz pulses induce field-free orientation that survives thermal averaging. The emphasis of the work by Fleischer et al. [28] was the experimental observation of the phenomenon. These authors calculated and measured the degree of orientation and alignment that a THz pulse is able to produce in a sample of OCS molecules. The aim of the present work is to study the mechanism for which molecules can get oriented with symmetric pulses in the light of previous studies in quantum ratchets. We have found that a common explanation can be given for both phenomena, based on the existence of interferences between pairs of eigenstates of the unitary time propagator. Here, by considering that the orienting pulse is a member of a periodic sequence of identical pulses, we study in Sec. II the symmetries of the Floquet Hamiltonian instead of the more complicated unitary time propagator, as the eigenvectors of both operators are related [29]. From this symmetry analysis we show that a nonzero average orientation during a pulse can be achieved. In Sec. III we present numerical evidence of this phenomenon, and study its dependence on the duration and strength of the electric field. We show that not only large peak orientations but also a nonzero time average orientation during a rotational period can be obtained. In Sec. IV we present our general conclusions. Specifically, we discuss the advantages of using an unique pulse whose duration is comparable to a rotational period or a pulse sequence that span such a time length. Also, we argue that the suitability of each approach depends on the magnitude of the rotational constant. Finally, a brief summary of the Floquet approach is presented in an Appendix. SINGLE-CYCLE PULSES A. Symmetry properties of quasienergy eigenfunctions The electromagnetic field for a periodic field formed by single-cycle pulses is an odd function of time, E(t) = −E(−t). 
Due to this property, the eigenvalue equation for the Floquet Hamiltonian is invariant under the combined transformation t → −t, θ → π − θ (together with complex conjugation). This anti-linear transformation is the rotational version of the so-called time-inversion parity S_TP, which generalizes the notion of parity to the extended Hilbert space [22,23]. The transformation θ → π − θ is a rotational-restricted version of the operation E*, which in molecules inverts the spatial coordinates of all the nuclei and electrons through the molecular center of mass [30]. Eigenstates χ can be symmetric (+) or antisymmetric (−) with respect to time-inversion parity. This symmetry, along with the relation between the coefficients at times t and −t and the selection rule for the matrix elements of cos θ between rotational eigenstates, ⟨J, M|cos θ|J′, M′⟩ ≠ 0 only for ∆J = ±1, ∆M = 0, gives, for the matrix elements between Floquet eigenfunctions in the standard Hilbert space, the identities of Eqs. (6)-(8). Eq. (6) immediately implies ... Note that the matrix elements, Eqs. (7) and (8), can be complex. Thus, the integrals ∫_{−T/2}^{T/2} ⟨χ±_n(t)|cos θ|χ±_m(t)⟩ dt for n ≠ m are, in general, nonzero. However, linear combinations of functions with the same time-inversion parity give a vanishing integral, Eq. (10), if b_n, b_m are both real or purely imaginary. In the same way, for linear combinations of functions with different symmetry, the integral vanishes, Eq. (11), if one of the coefficients b is real and the other imaginary. When b_n and b_m are both complex, the integrals are nonzero.

B. Symmetry properties of time-dependent wave packets
The average orientation over the pulse duration, for an arbitrary initial state Ψ(−T/2), is given by Eq. (12), where b_n = ⟨χ_n(t = −T/2)|Ψ⟩. When the initial state is a Floquet eigenstate, χ_n(−T/2), only one term arises in the summation, Eq. (A7), and from Eqs. (9) and (12) we obtain a vanishing time-average orientation for any number of pulses m. Thus, if a molecule, initially described by a given Floquet eigenstate, is oriented (well localized at θ_0) at time −t, it becomes antioriented (well localized at π − θ_0) at time t, producing a zero time-average orientation during a pulse or sequence of pulses. Using relations (6)-(8), we can rewrite Eq. (12) in terms of a function Θ defined over the second half of the pulse. This expression shows that the time average over the pulse duration of ⟨cos θ⟩(t) can be obtained by integrating exclusively over the second half of the pulse. Since there is no symmetry relation between ⟨cos θ⟩(t) at any two different times greater than zero, the integral in this last expression is, in general, nonzero. From a classical point of view, for each classical trajectory with instantaneous orientation θ(t) there exists another one with orientation π − θ(t). Thus, the average orientation in the classical ergodic limit is zero [22]. Although time-reversal parity does not forbid a net time-average orientation for an arbitrary initial state Ψ(t) during a single pulse, only nondiagonal matrix elements between Floquet eigenstates contribute to the average, and, in the long-time limit, the quantum time average goes to zero too. Indeed, for a sequence of l_max pulses with repetition period T, the average orientation for an arbitrary initial state Ψ can be written, thanks to the periodicity of the χ functions [25], as Eq. (18). This expression goes to zero when l_max → ∞, since the oscillating exponential factors average out. There is an exception to this behavior. When the expansion of Ψ(t) contains two degenerate Floquet eigenstates, the average orientation may be nonzero even in the long-time limit, since the exponentials cancel. Then, the average is zero only for the two special cases given in Eqs. (10) and (11).
Note that, if the initial state is a rotational eigenstate, the coefficients satisfy the restrictions for Eq. (10) to hold if χ_n, χ_m have the same symmetry, or the restrictions for Eq. (11) to hold if they have different symmetry. Therefore, the long-time-limit average orientation will be zero when the initial states are field-free eigenstates, regardless of the existence of degenerate quasienergy eigenstates.

A. Molecular orientation by a single-cycle pulse
In this section we present calculations of the instantaneous and average orientation as a function of various parameters. The time-evolved wave function can be obtained from Eq. (A7) after diagonalizing the Floquet Hamiltonian in the extended Hilbert space, which gives eigenpairs ǫ and χ. This Floquet matrix can be very large. A more efficient method consists of dividing (t, t_0) into smaller time intervals of duration τ, which allows writing the propagator U as a product of short-time propagators [31,32], where [F(θ)]_{n,0} is the time integral of the matrix representation of the Floquet Hamiltonian in the rotational basis set. Sugny et al. [27] showed that a zero-area pulse can produce large instantaneous orientation ... efficiency-duration compromise [33]. Again, the full duration of each sequence is exactly one rotational period. With current technology, strong enough single-cycle pulses can last no longer than a few ps. Then, a single pulse produces a net average orientation over a time of the order of a rotational period only if the rotational constant B is large (for very light molecules). However, the average orientation during a sequence of pulses is controlled by the differences between quasienergies, unlike the average orientation over a single pulse, which is controlled by the Floquet eigenvectors [23]. For a sequence of pulses, Eq. (18) shows that the average orientation is primarily controlled by exponential factors that depend on quasienergy differences multiplied by increasing time, while for a single pulse the orientation primarily depends on the factors ⟨Ψ(t)|cos θ|Ψ(t)⟩ in Eq. (18), which in turn depend on the composition of the Floquet eigenvectors. Thus, a nonzero time average can be achieved by using periodic sequences that span a rotational period, as shown in panel a of Fig. 4. Also, a more stable average orientation is obtained with a sequence of pulses, since the Floquet eigenvalues corresponding to the states that contribute most to the instantaneous wave packets remain fairly constant with the field strength. On the other hand, for longer pulses frequent avoided crossings exist. At these crossings, the character of the Floquet eigenstates forming the wave packets may change, giving rise to larger variations in the average orientation.

IV. DISCUSSION
Very little attention has been paid in the existing literature to the use of single-cycle pulses of zero area for orienting molecules. Although molecules get sequentially oriented and antioriented as the dipole force breaks rotational parity, the existence of a combined time-space symmetry operation (time-reversal parity [22,23]) seems to imply that the average orientation over the full duration of the pulse should be zero. Based on this notion, it is frequently argued that the achievement of molecular orientation requires electromagnetic fields asymmetric in time. This is strictly true only in the asymptotic long-time limit. Such a limit is easily reached when high-frequency driving fields are used, but it is not met for low-frequency fields. Nowadays, THz pulses of zero area can be used with exquisite control.
Therefore, experiments can be done that use only one pulse or a short sequence of pulses for which the asymptotic limit is far away. We have shown that significant nonzero molecular orientation can be obtained with such fields. The mechanism at work is the existence of nonzero matrix elements for the operator cos θ between pairs of Floquet states. These symmetries can be analyzed more conveniently by considering that the single pulse or the finite sequence of pulses is a member of an infinite periodic train. For a periodic Hamiltonian H(t + T) = H(t), the propagator U(t, t_0) satisfies U(t + T, t_0 + T) = U(t, t_0). Operators U(t + T, t) are called Floquet operators, and their spectral properties are related to those of the Floquet Hamiltonian, F(t) = H(t) − i∂/∂t. Specifically, eigenfunctions χ of F are related to those of U(t_0 + T, t_0), ψ, as follows [29]: if ..., then, for any n, ... is an eigenfunction of F with eigenvalue ǫ + n. The Floquet Hamiltonian acts in an enlarged Hilbert space where time, designated by t′, is treated like a spatial coordinate [32,34,35]. This operator may have no normalizable eigenvectors if its spectrum is continuous. However, heuristic evidence has been given that the Floquet Hamiltonian for a rotating molecule interacting with external fields has a pure point spectrum [36]. Eigenfunctions χ of F in this space can be expanded in terms of a Fourier time basis and spatial basis functions φ_k(x). Functions χ(t), Eq. (A3), can be obtained from Eq. (A4) by taking the projection t′ = t, which gives an expansion with coefficients c_k(t) = Σ_n d_{kn} e^{2πint/T}. The time-evolved wave function, for an initial Ψ(t_0), can be expanded as in Eq. (A7). The Floquet Hamiltonian (in reduced units, B/ [37]) for a rotating linear molecule in the presence of a periodic train of linearly polarized pulses with repetition period T takes the form given in [38], where ω is the carrier frequency, J is the angular momentum vector, B the rotational constant, µ the permanent dipole moment, θ the angle between the polarization vector of the field and the internuclear axis, and σ ≈ 3δ/5, where δ is the Gaussian half-width at half-maximum. The carrier-envelope phase, φ, is zero for single-cycle pulses of zero area.
2012-10-19T10:30:33.000Z
2011-08-19T00:00:00.000
{ "year": 2011, "sha1": "15fb736139d62d9044f7679eb584c974b13e9ba7", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1108.3991", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "15fb736139d62d9044f7679eb584c974b13e9ba7", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
225232026
pes2o/s2orc
v3-fos-license
Efficacy and Safety of Thirst-Quenching Lozenges for Xerostomia in Patients Undergoing Hemodialysis: A Prospective, Single-Arm, Open-Label Study

Introduction
Xerostomia is a subjective complaint of dryness of the oral cavity and is frequently reported (28% to 67%) in patients with end-stage kidney disease (ESKD), including those on chronic hemodialysis [1]. Xerostomia may be attributed to the reduced salivary flow as a result of atrophy and fibrosis of the salivary glands, as well as certain medications which are commonly used in patients on hemodialysis [2][3][4]. Further, xerostomia and hyposalivation together augment the sensation of thirst in patients on hemodialysis. In these patients, xerostomia is associated with clinical consequences like increased risk of oral infections and diseases, difficulty in chewing, swallowing and speaking, and may contribute to increased interdialytic weight gain (IDWG), which may reduce the overall quality of life (QoL) [1,5]. The oral diseases related to xerostomia include mucosal, gingival and tongue lesions, candidiasis, dental caries, periodontal disease, and oral fungal and bacterial infections [6]. Furthermore, patients on chronic hemodialysis most often receive multiple drugs concomitantly, and xerostomia is exacerbated by such polypharmacy [7]. Drugs causing xerostomia are often those with anticholinergic activity or those acting through mechanisms on brain centers to reduce fluid secretion [8]. Current treatment strategies for xerostomia generally target stimulation of the salivary glands either mechanically (e.g., sugar-free chewing gums, mouthwash, or acupressure) or pharmacologically (e.g., pilocarpine and cevimeline) [1]. Irrespective of mechanical and/or pharmacological actions, saliva substitutes are recommended for these patients with insufficient salivary secretion. These substitutes are available as different formulations containing either mucin, xanthan gum, carboxymethyl cellulose, hydroxyethyl cellulose or polyethylene glycol, and have shown limited success [9][10][11]. Hence, novel strategies which stimulate salivary secretions, addressing xerostomia in patients with ESKD undergoing hemodialysis, are warranted [1]. Most patients on hemodialysis need to maintain a fluid-restricted diet to prevent a high IDWG [12]. The prevalence of xerostomia is higher in these patients than in controls, thus necessitating an alternative treatment that can stimulate salivary secretions and keep the oral mucosa moist. To address this unmet need in patients with chronic kidney disease suffering from dry mouth and thirst, a sugar-free, xylitol-based thirst-quenching lozenge (TQL) with a unique release profile (proprietary patented technology) has been developed by Dr. Reddy's Laboratories. Xylitol is a natural sweetener and was proven to be effective in relieving symptoms of drug-induced xerostomia [13]. This study evaluated the efficacy and safety of TQL in reducing dryness of mouth and thirst in patients with chronic kidney disease stage 5 undergoing hemodialysis. Moreover, patient satisfaction with the use of TQL in reducing thirst was determined.

Study design
This was a prospective, open-label, single-arm study to evaluate the efficacy and safety of TQL in patients with chronic kidney disease stage 5 who were on hemodialysis at two centers in India between 2018 and 2019.

Ethics statement
The study protocol and informed consent form were reviewed and approved by an Independent Ethics Committee at each study center. A written informed consent was obtained from all the patients prior to the study initiation after explaining to them the study protocol in the language that they understood. The study was conducted in accordance with the International Council for Harmonization good clinical practice guidelines and the ethical principles of the Declaration of Helsinki.

Study population
Adult (aged ≥ 18 years) patients with ESKD undergoing hemodialysis thrice a week for at least 3 months with daily urine output of < 200 mL were included in the study. Key exclusion criteria were patients who were scheduled for kidney transplantation or immunosuppressant therapy, those who exceeded an average weekly IDWG of ~2.5 kg, those admitted for fluid overload in the 3 months prior to screening, or those who had heart failure (New York Heart Association class IV). Other exclusion criteria included patients with active systemic infections, autoimmune conditions causing dryness of mouth like Sjögren's syndrome, uncontrolled diabetes, hypertension, and drugs causing dry mouth. Patients receiving drugs (sympathomimetic, antihypertensive, cytotoxic, anti-HIV drugs, opioids, benzodiazepines, and anti-migraine agents) which cause xerostomia were excluded.

Study endpoints and assessments
Baseline demographics (age, sex and body mass index) and clinical characteristics (medical history and comorbidities) were recorded. The primary efficacy endpoints included pre and post changes in xerostomia inventory (XI) and dialysis thirst inventory (DTI) scores from baseline to the end of study (EOS; i.e., 2 weeks). The XI and DTI are standard validated questionnaires for xerostomia and thirst, respectively, and were assessed at screening, baseline, and at the EOS (week 2). These included several questions to evaluate XI (Supplementary Table 1) and DTI (Supplementary Table 2) with the scale of 'never/almost never', 'occasionally', 'fairly often/very often'. The secondary efficacy endpoint included the proportion of patients satisfied with TQL in reducing dry mouth and thirst. The thirst severity was recorded by patients on a visual analogue scale of 0-10 thrice a day (morning, afternoon and night). Safety was evaluated by observing adverse events (AEs), recorded as per the Common Terminology Criteria for AEs (CTCAE; version 4.0), physical examination, vital signs and clinical laboratory investigations. Both patient-reported and investigator-observed AEs were recorded at all visits. Clinical laboratory investigations included blood urea, serum creatinine, blood glucose, serum uric acid, hemoglobin, glycated hemoglobin, and serum albumin at baseline and at the EOS. Patients were provided with diaries to record their fluid intake throughout the day, and data were collected during each visit.
In context with XI questionnaires, percentage patients opted for 'fairly often' as a response for following questions (baseline vs. EOS): required sip of liquid to swallow food (39.3% vs. 6.0%), dry mouth while eating food (41.7% vs. 10 Table 3. Most patients responded as satisfied versus unsatisfied with the study treatment (91.3% vs. 8.7%; Table 4). Patients' response on overall treatment satisfaction questionnaires showed that a greater number of patients were satisfied in terms of relief in condition, relief in symptoms, side effects, timing of medication, and overall confidence on medication and Study treatment The study consisted of a 1-day screening phase, 1-week stabilization phase, and a 2-week treatment phase with a 1-week follow-up. Overall, patients had 8 visits: screening (Visit 1: Day 0), baseline (Visit 2: Day 7 ± 1), treatment period (Visit 3: Day 9; Visit 4: Day 11; Visit 5: Day 14; Visit 6: Day 16; Visit 7: Day 18) and end of study (Visit 8: Day 21 ± 1). Patients were administered TQL weighing 1650 mg (manufactured by Dr. Reddy's Laboratories) thrice a day for 2 weeks. Each TQL contained xylitol, lactose monohydrate, isomalt, acacia, hydroxypropyl cellulose, sucralose and magnesium stearate. There was no relation of the drug dosage to the meals and patients were advised to continue with the permitted concomitant medications. Statistical analysis The effect of treatment was compared with the baseline variables using the general linear model of Chi-square tests. The XI scale, DTI scale and the patient satisfaction scales were recorded on a 5 and 7 points Likert scale, respectively. The responses were cumulatively grouped as the worst response together and the best responses together and then the analysis was carried out using Chi-square for the frequency outcome comparison of before and after the intervention for the overall population. Patients A total of 90 patients were enrolled and received TQL, of which 89 patients completed the study and one patient withdrew consent. At the discretion of the principal investigator, 6 patients were excluded from the study due to non-compliance and final data set evaluation was done for 83 participants. The mean age of the patients was 45.3 years and the majority were men (69.0%). Half of the patients had normal body weight (Table 1). Efficacy Patients displayed favourable outcomes to the TQL with a reduction in the mean (SD) XI (Baseline: 38. 2 How satisfied or dissatisfied are you with the ability of the medication to prevent or treat your condition? How convenient or inconvenient is it to take the medication as instructed? Overall, how confident are you that taking this medication is a good thing for you? QoL due to complications in chewing/swallowing and an increased risk of oral disease [1]. Treatment with sugar-free TQL showed a significant reduction in xerostomia and thirst and a decrease in consumption of fluid. Patients also showed satisfaction with TQL use in reducing dry mouth. Available treatment options have limited success in terms of alleviating thirst and associated comorbidities in these patients. A few studies, which were conducted to evaluate the efficacy of treatments with pharmacological as well mechanical actions such as pilocarpine, artificial saliva, chewing gum, and acupressure in patients with xerostomia on hemodialysis, were inconclusive and/or contradictory for xerostomia symptoms (such as salivary flow and impact on thirst) [14][15][16][17][18]. 
Clinical benefits have been reported with saliva substitutes in patients with radiotherapy-related xerostomia or with Sjögren syndrome [9][10][11]. A cross-sectional study in patients on hemodialysis has demonstrated that 2-week treatment with chewing gum and saliva substitute significantly reduced XI (p = 0.024) and DTI (p = 0.015) scores compared to baseline [19]. In contrast, in another study, regular use of sugarless chewing gum for 3 months did not alleviate xerostomia symptoms and thirst in a group of 38 patients on hemodialysis [15]. In a randomized study, patients on chronic hemodialysis who had received liquorice mouthwash showed significant lowering of XI scores at day 5 and day 10 compared to baseline [20]. Another study reported that xerostomia improved in terms of reduced XI score with 4 weeks of treatment with auricular acupressure [18]. Additionally, the other treatment options used for the management of dry mouth and to prevent damage of salivary glands including pilocarpine and cevimeline, reported severe AEs (sweating, vomiting, and diarrhea) [1]. Above studies on xerostomia have shown efficacy with different treatment strategies but not proven as a complete standard-of-care for xerostomia in patients with ESKD undergoing hemodialysis. The current study showed clinical benefits of the TQL for xerostomia, patients' answers to individual questions, which its form. There was a significant reduction in the mean (SD) fluid consumption from 718.9 (178.4) mL at baseline to 568.1 mL (260.2; p = 0.001) at the EOS. Visit to visit comparison showed a reduction in fluid consumption at every visit until EOS ( Table 5). The IDWG was monitored throughout the study. Mean (SD) weight was 62.2 kg (7.9) at the baseline and 62.4 kg (7.9) at the EOS with no significant change ( Table 6). Safety No serious AEs or deaths were reported in the study. One patient discontinued the study due to diarrhea; however, this resolved later. No major changes were observed in blood glucose levels. The mean (SD) fasting blood glucose was 101.1 (85.1) mg/dL at the baseline and was 90.8 (22.9) mg/dL at the EOS. The mean postprandial glucose level was 124.8 (20.8) mg/dL at baseline and 126.6 (20.8) mg/dL at the EOS. Discussion Xerostomia, often encountered in patients on chronic hemodialysis, negatively impacts the patients' altogether evaluated overall XI scores, showed that at baseline, higher proportion of the patients required sip of liquid for swallowing food, felt dryness of mouth when eating a meal, frequently drank water during the night, had dryness of mouth, and difficulties in eating dry food. Reduction in all the above individual complications was reported with the lozenges and at the EOS, a high number of patients answered 'never' or 'almost never' for these questions. Similarly, in the DTI questionnaires at baseline, patients opted for 'often' or 'fairly often' in response to the following questions-thirst was a problem, thirsty during the day, thirsty during the night, social life influenced by thirst, thirst before, after, and during the dialytic session. However, at EOS, a high proportion of patients treated with lozenges answered 'never' or 'almost never' for the above complications. Thirst and dry mouth impact the QoL of patients who are on hemodialysis. Thirst can be a sign of disorientation and discomfort in these patients [21]. 
Previously published studies have reported that approximately 80% of patients are non-adherent to restrictions on liquids, which eventually contributes to IDWG and reduced QoL [22]. Studies have reported a positive correlation among xerostomia, thirst and IDWG; in general, patients on hemodialysis with high levels of thirst and xerostomia gain more weight between hemodialysis sessions [12,23]. The IDWG results from consumption of salt and liquids between two hemodialysis sessions [24,25]. Non-adherence to the fluid-restricted diet may result in complications such as congestive heart failure and cardiovascular comorbidity, hypertension, and acute pulmonary edema [12,22]. In the current study, a significant reduction in mean fluid consumption was observed from baseline to the EOS; mean body weight was consistent throughout the study, and no meaningful increase in IDWG was observed. The consistency of body weight during the study, despite a significant change in fluid consumption, might be due to several other factors, including diet, which were not controlled during the study. A strong association between xerostomia and reduced QoL has been reported earlier [26,27]. Our study reported enhanced QoL with the TQL. Overall, a significant difference was observed between the patients who were satisfied with the study treatment and those who were not satisfied; patients' satisfaction was in terms of relief in condition, relief in symptoms, side effects, timing of medication, and overall confidence on medication and its form. This study is limited by its open-label, single-arm design, short duration, and small sample size. Further long-term, head-to-head studies with larger sample sizes are warranted to corroborate these results.
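To make the Statistical analysis section above concrete, the Chi-square comparison of grouped questionnaire responses (baseline versus end of study) could be sketched as follows. The item and the counts are hypothetical placeholders rather than the study's actual data, and scipy is used only for illustration.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts for one XI item, grouped as described in the Statistical
# analysis section: "best" responses ('never/almost never') vs. "worst"
# responses ('occasionally' or more often), at baseline and at end of study.
#                      best  worst
observed = np.array([[ 20,   63],    # baseline (n = 83 evaluable patients, invented split)
                     [ 70,   13]])   # end of study

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p_value:.3g}")
```

For paired before-and-after responses from the same patients, a McNemar-type test would be a common alternative, but the sketch mirrors the Chi-square approach the section describes.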
2020-09-10T10:17:35.478Z
2020-08-26T00:00:00.000
{ "year": 2020, "sha1": "225d6edbd0017a703c15192793bdf7cab75aa47a", "oa_license": "CCBY", "oa_url": "https://www.clinmedjournals.org/articles/jcnrc/journal-of-clinical-nephrology-and-renal-care-jcnrc-6-056.pdf?jid=jcnrc", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f13afbd4c67e1ced2a5335cb00cd6c3385f2ce7a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
2435997
pes2o/s2orc
v3-fos-license
Comparison results for conjugate and focal points in semi-Riemannian geometry via Maslov index We prove an estimate on the difference of Maslov indices relative to the choice of two distinct reference Lagrangians of a continuous path in the Lagrangian Grassmannian of a symplectic space. We discuss some applications to the study of conjugate and focal points along a geodesic in a semi-Riemannian manifold. INTRODUCTION Classical comparison theorems for conjugate and focal points in Riemannian or causal Lorentzian geometry require curvature assumptions, or Morse theory (see [1,5,6,10,11]). When passing to the general semi-Riemannian world this approach does not work. Namely, the curvature is never bounded (see [4]) and the index form has always infinite Morse index. In addition, it is well known that singularities of the semi-Riemannian exponential map may accumulate along a geodesic (see [17]), and there is no hope to formulate a meaningful comparison theorem using assumptions on the number of conjugate or focal points. There are several good indications that a suitable substitute of the notion of size of the set of conjugate or focal points along a semi-Riemannian geodesic is given by the Maslov index. This is a symplectic integer valued invariant associated to the Jacobi equation, or more generally to the linearized Hamilton equations along the solution of a Hamiltonian system. This number replaces the Morse index of the index form, which in the general semi-Riemannian case is always infinite, and in some nondegenerate case it is a sort of algebraic count of the conjugate points. In the Riemannian or causal Lorentzian case, the Maslov index of a geodesic relative to some fixed Lagrangian coincides with the number of conjugate (or focal) points counted with multiplicity. The exponential map is not locally injective around nondegenerate conjugate points (see [20]), or more generally around conjugate points whose contribution to the Maslov index is non zero (see [14]). Inspired by a recent article by A. Lytchak [12], in this paper we prove an estimate on the difference between Maslov indices (Proposition 3.3), and we apply this estimate to obtain a number of results that are the semi-Riemannian analogue of the standard comparison theorems in Riemannian geometry (Section 4). These results relate the existence and the multiplicity of conjugate and focal points with the values of Maslov indices naturally associated to a given geodesic. It is very interesting to observe that Riemannian versions of the results proved in the present paper, which are mostly well known, are obtained here with a proof that appears to be significantly more elementary than the classical proof using Morse theory. The paper is organized as follows. In Section 2 we recall a few basic facts on the geometry of the Lagrangian Grassmannian Λ of a symplectic space (V, ω), and on the notion of Maslov index for continuous paths in Λ. We use a generalized notion of Maslov index, which applies to paths with arbitrary endpoints; note that, for paths with endpoints on the Maslov cycle, there are several conventions regarding the contribution of the endpoints. Here we adopt a convention slightly different from that in [19], (see (2.3), (2.4) and (2.5)), which is better suited for our purposes. Section 3 contains the estimate (3.1) on the difference of Maslov indices relatively to the choice of two arbitrarily fixed reference Lagrangians L 0 and L 1 . 
Using the canonical atlas of charts of the Grassmannian Lagrangian and the transtition map (2.1), the proof is reduced to studying the index of perturbations of symmetric bilinear forms (Lemma 3.1, Corollary 3.2). Several analogous estimates ((3.2), (3.3)) are obtained using the properties (2.4) and (2.6) of Hörmander's index. Applications to the study of conjugate and focal points along semi-Riemannian geodesics are discussed in Section 4. In Subsection 4.1 we describe how to obtain Lagrangian paths out of the flow of the Jacobi equation along a geodesic γ : [a, b] → M and an initial nondegenerate submanifold P of a semi-Riemannian manifold (M, g). In Lemma 4.1 we give a characterization of which Lagrangian subspaces of the symplectic space T γ(a) M ⊕T γ(a) M arise from an initial submanifold construction. The comparison results are proved in Subsection 4.2; they include comparison between conjugate and focal points, as well as comparison between conjugate points relative to distinct initial endpoints. We conclude the paper in Section 5 with a few final remarks concerning the question of nondegeneracy of conjugate and focal points. PRELIMINARIES 2.1. The Lagrangian Grassmannian. Let us consider a symplectic space (V, ω), with dim(V ) = 2n; we will denote by Sp(V, ω) the symplectic group of (V, ω), which is the closed Lie subgroup of GL(V ) consisting of all isomorphisms that preserve ω. A subspace X ⊂ V is isotropic if the restriction of ω to X × X vanishes identically; an n-dimensional (i.e., maximal) isotropic subspace L of V is called a Lagrangian subspace. We denote by Λ the Lagrangian Grassmannian of (V, ω), which is the collection of all Lagrangian subspaces of (V, ω), and is a compact differentiable manifold of dimension 1 2 n(n + 1). A real-analytic atlas of charts on Λ is given as follows. Given a Lagrangian decomposition (L 0 , L 1 ) of V , i.e., L 0 , L 1 ∈ Λ are transverse Lagrangians, so that V = L 0 ⊕ L 1 , then denote by Λ 0 (L 1 ) the open and dense subset of Λ consisting of all Lagrangians L transverse to L 1 . A diffeomorphism ϕ L0,L1 from Λ 0 (L 1 ) to the vector space B sym (L 0 ) of all symmetric bilinear forms on L 0 is defined by ϕ L0,L1 (L) = ω(T ·, ·)| L0×L0 , where T : L 0 → L 1 is the unique linear map whose graph in We will need the following expression for the transition map ϕ L1,L • ϕ −1 L0,L , where L 0 , L 1 , L ∈ Λ are three Lagrangians such that L ∩ L 0 = L ∩ L 1 = {0}. Note that the two charts ϕ L0,L and ϕ L1,L have the same domain. If η : L 1 → L 0 denotes the isomorphism defined as the restriction to L 1 of the projection L ⊕ L 0 → L 0 , then for all B ∈ B sym (L 0 ) the following formula holds (see for instance [15,Lemma 2.5.4]): If (L 0 , L 1 ) is a Lagrangian decomposition of V , there exists a bijection between Λ and the set of pairs (P, S), where P ⊂ L 1 is a subspace and S : P × P → R is a symmetric bilinear form on P (see [15,Exercise 1.11]). More precisely, to each pair (P, S) one associates the Lagrangian subspace L P,S defined by: L P,S = v + w : v ∈ P, w ∈ L 0 , ω(w, ·)| P + S(v, ·) = 0 . Maslov index. Let us recall a few notions related to symmetric bilinear forms. Given a symmetric bilinear form B on a (finite dimensional) real vector space W , the index of B is defined to be the dimension of a maximal subspace of W on which B is negative definite. The coindex of B is the index of −B, and the signature of B, denoted by sign(B) is defined to be the difference coindex minus index. 
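As a concrete, purely illustrative rendering of these definitions (not part of the original paper), the index, coindex and signature of a symmetric bilinear form represented by a matrix B in some basis can be read off from the signs of its eigenvalues:

```python
import numpy as np

def index_coindex_signature(B, tol=1e-9):
    # B: matrix of a symmetric bilinear form in some basis.
    # index   = number of negative eigenvalues (max dim of a negative-definite subspace)
    # coindex = index of -B = number of positive eigenvalues
    # sign(B) = coindex - index
    evals = np.linalg.eigvalsh((B + B.T) / 2.0)   # symmetrize for numerical safety
    n_minus = int(np.sum(evals < -tol))
    n_plus = int(np.sum(evals > tol))
    return n_minus, n_plus, n_plus - n_minus

B = np.diag([2.0, -1.0, -3.0])            # toy example of signature -1
print(index_coindex_signature(B))         # (2, 1, -1)
```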
We will now recall briefly the notion of Maslov index for a continuous path ℓ : [a, b] → Λ. For a fixed Lagrangian L 0 ∈ Λ, the L 0 -Maslov index µ L0 (ℓ) of ℓ is the integer characterized by the following properties: (see [7] for a similar discussion). Let us denote by µ − L0 the L 0 -Maslov index function relatively to the opposite symplectic form −ω on V . The relation between the functions µ L0 and µ − L0 is given by the following identity: Let us emphasize that, for curves ℓ whose endpoints are not transverse to L 0 , there are several conventions as to the contribution to the Maslov index of the endpoints. For instance, the definition of L 0 -Maslov indexμ L0 in [19] is 1 obtained by replacing (2.3) with: , in which case the Maslov index takes values in 1 2 Z. Given any continuous path ℓ : [a, b] → Λ and any two Lagrangians L 0 , L ′ 0 ∈ Λ, the difference µ L0 (ℓ) − µ L ′ 0 (ℓ) depends only on L 0 , L ′ 0 and the endpoints ℓ(a) and ℓ(b) of ℓ. This quantity will be denoted by q L 0 , L ′ 0 ; ℓ(a), ℓ(b) , and it coincides (up to some factor which is irrelevant here) with the so called Hörmander index (see [8]). The Hörmander index satisfies certain symmetries; we will need the following: The quantity: coincides (again up to some factor) with the Kashiwara index (see [13]). The Kashiwara index function determines completely the Hörmander index, by the identity: which is easily proved using the concatenation additivity property of the Maslov index. AN ESTIMATE ON THE DIFFERENCE OF MASLOV INDICES Our analysis is based on the following elementary result: Proof. It suffices to prove the inequality n + (B + C) − n + (B) ≤ n + (C); if this holds for every B and C, replacing C with −C and B with B + C will yield the other inequality −n − (C) ≤ n + (B + C) − n + (B). Choose W ⊂ V a maximal subspace of V on which B + C is positive definite, so that dim(W ) = n + (B + C), and write W = W + ⊕ W − , where B| W+×W+ is positive definite and B| W−×W− is negative semi-definite. Since B + C is positive definite on W , it follows that C| W−×W− must be positive definite, so that n + C| W ×W ≥ dim(W − ). Then: Corollary 3.2. Given a fixed symmetric bilinear form Proof. Since the quantity µ L0 (ℓ)−µ L1 (ℓ) depends only on the endpoints ℓ(a) and ℓ(b), we can assume the existence of a Lagrangian (these are dense opens subsets of Λ, hence their intersection is non empty!), and replace ℓ by any continuous curve in Λ 0 (L) from ℓ(a) to ℓ(b). Once we are in this situation, then the Maslov indices of ℓ are given by: . Now consider the isomorphism η : L 1 → L 0 obtained as the restriction to L 1 of the projection L ⊕ L 0 → L 0 ; using formula (2.1) of transition function for the charts ϕ L0,L and ϕ L1,L , for all α ∈ Λ 0 (L) we have: and so: n + ϕ L1,L (α) = n + ϕ L0,L (α) + C , where: C = η * ϕ L1,L (L 0 ) does not depend on α. Note that: Inequality (3.1) is obtained easily from Corollary 3.2 by setting Using the symmetry property (2.6) of Hörmander index, we also get the following estimate: Moreover, changing the sign of the symplectic form and using (2.4), one obtains easily the following inequalities: Consider the flow of the Jacobi equation, which is the family of isomorphisms ). An immediate calculation shows that ℓ(t) is a Lagrangian subspace of (V, ω), and we obtain in this way a smooth curve ℓ : [a, b] → Λ(V, ω). Note that: ℓ(a) = L a 0 =: L 0 . 
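For readers who wish to experiment numerically, the Lagrangian condition and the nontransversality dim(ℓ(t) ∩ L_0) that the Maslov index keeps track of can be checked directly in coordinates, writing the symplectic form through its standard matrix J. The sketch below, with an arbitrary toy example, is an illustration and not taken from the paper.

```python
import numpy as np

n = 2                                                   # half-dimension of the symplectic space
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])          # standard symplectic form omega(x, y) = x^T J y

def is_lagrangian(A, tol=1e-9):
    # Columns of the 2n x n matrix A span a Lagrangian subspace iff they are
    # linearly independent and omega vanishes on their span (A^T J A = 0).
    return (np.linalg.matrix_rank(A, tol) == n
            and np.allclose(A.T @ J @ A, 0.0, atol=tol))

def intersection_dim(A, B, tol=1e-9):
    # dim(span A  intersect  span B) = rank A + rank B - rank [A | B]
    return (np.linalg.matrix_rank(A, tol) + np.linalg.matrix_rank(B, tol)
            - np.linalg.matrix_rank(np.hstack([A, B]), tol))

L0 = np.vstack([np.zeros((n, n)), np.eye(n)])           # {0} x R^n, playing the role of ell(a)
L1 = np.vstack([np.eye(n), np.zeros((n, n))])           # R^n x {0}, transverse to L0
print(is_lagrangian(L0), is_lagrangian(L1), intersection_dim(L0, L1))   # True True 0
```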
Now, consider a smooth connected submanifold P ⊂ M , with γ(a) ∈ P and 2γ (a) ∈ T γ(a) P ⊥ ; let us also assume that P is nondegenerate at γ(a), meaning that the restriction of the metric g to T γ(a) P is nondegenerate. We will denote by n − (g, P) and n + (g, P) respectively the index and the coindex of the restriction of g to P, so that n − (g, P) + n + (g, P) = dim(P). Let S be the second fundamental form of P at γ(a) in the normal directionγ(a), seen as a g-symmetric operator S : T γ(a) P → T γ(a) P, and consider the subspace L P ⊂ V defined by: which is precisely the construction of Lagrangian subspaces described abstractly in (2.2). If π 1 : T γ(a) M ⊕ T γ(a) M → T γ(a) M is the projection onto the first summand, then π 1 (L P ) = T γ(a) P is orthogonal toγ(a). Conversely: .2). Let P 0 ⊂ T γ(a) M be the submanifold given by the graph of the function P ∋ x → 1 2 S(x, x)γ(a) ∈ P ⊥ . The desired submanifold P is obtained by taking the exponential of a small open neighborhood of 0 in P 0 . It is easily seen that the tangent space to P 0 at 0 is P , and since d exp γ(a) (0) is the identity, T γ(a) P = P . Moreover, using the fact that the Christoffel symbols of the chart exp γ(a) vanish at 0, it is easily seen that the second fundamental form of P at γ(a) in the normal directionγ(a) is S. Let us also consider the space L 0 = {0} ⊕ T γ(a) M , which corresponds to the Lagrangian associated to the trivial initial submanifold P = {γ(a)}. Then, an instant t ∈ ]a, b] is P-focal along γ if and only if ℓ(t) ∩ L P = {0}, and the dimension of this intersection equals the multiplicity of t as a P-focal instant. In particular, t is a conjugate instant, i.e., γ(t) is conjugate to γ(a) along γ, if ℓ(t) ∩ L 0 = {0}. Note that: thus: For all t ∈ ]a, b], consider the space J is a P-Jacobi field along γ with J(t) = 0 , while for t = a we set: When the initial submanifold is just a point, we will use the following notation: It is well known that focal or conjugate points along a semi-Riemannian geodesic may accumulate (see [17]), however, nondegenerate conjugate or focal points are isolated. A P-focal point γ(t) along γ is nondegenerate when the restriction of the metric g to the space A P [t] is nondegenerate. This is always the case when g is positive definite (i.e., Riemannian), or if g has index 1 (i.e., Lorentzian) and γ is either timelike or lightlike. Also, the initial endpoint γ(a) which is always P-focal of multiplicity equal to the codimension of P, is always isolated. For all t ∈ [a, b], let us denote by n − (g, P, t), n + (g, P, t) and σ(g, P, t) respectively the index, the coindex and the signature of the restriction of g to A P [t]. Given a nondegenerate P-focal point γ(t) along γ, with t ∈ ]a, b[, then t is an isolated instant of nontransversality of the Lagrangians ℓ(t) and L P . Its contribution to the Maslov index µ LP (ℓ), i.e., µ LP (ℓ| [t−ε,t+ε] ) with ε > 0 sufficiently small, is given by the integer σ(g, P, t). The contribution of the initial point to the Maslov index µ LP (ℓ), which as observed is always nondegenerate, is given by n + (g, P, a): µ LP ℓ| [a,a+ε] = n + (g, P, a) = n + (g) − n + (g, P). In particular: Moreover, if γ(b) is a nondegenerate P-focal point along γ, then its contribution to the Maslov index µ LP (ℓ) is equal to −n − (g, P, b). Thus, when g is Riemannian the Maslov index µ LP ℓ| [a+ε,b] is the number of P-focal points along γ| [a,b[ counted with multiplicity. The same holds when g is Lorentzian (i.e., index equal to 1) and γ is timelike. 
More generally, if all P-focal points along γ are nondegenerate, the Maslov index µ LP (ℓ) is given by the finite sum: All this follows easily from the following elementary result: Lemma 4.2. Let B : I → B sym (V ) be a C 1 -curve of symmetric bilinear forms on a real vector space V . Assume that t 0 ∈ I is a degeneracy instant, and denote by B 0 the restriction to Ker B(t 0 ) of the derivative B ′ (t 0 ). If B 0 is nondegenerate, then t 0 is an isolated degeneracy instant, and for ε > 0 sufficiently small: Lemma 4.2 is employed in order to compute the Maslov index µ LP as follows. Given a P-focal instant t 0 ∈ [a, b] and a Lagrangian L 1 transversal to both L P and ℓ(t 0 ), then consider the smooth path t → ϕ LP ,L1 ℓ(t) of symmetric bilinear forms on L P . The kernel of B(t 0 ) is identified with the space A P [t 0 ], and the restriction of the derivative B ′ (t 0 ) to Ker B(t 0 ) with the restriction of the metric g to A P [t 0 ] (see for instance [16]). Comparison results. Having this in mind, let us now prove some comparison results for conjugate and focal instants. In particular, we have the following result concerning the existence of conjugate or focal instant along an arbitrary portion of a geodesic: Corollary 4.4. Given any interval The second statement is totally analogous. All the above statements have a much more appealing version in the Riemannian or timelike Lorentzian case, where the "Maslov index" can be replaced by the number of conjugate or focal instants. In this situation, focal and conjugate instants are always nondegenerate and isolated, and without using Morse theory one can prove nice comparison results of the following type: Corollary 4.6. Assume that either g is Riemannian or that g is Lorentzian and γ is timelike (in which case P is necessarily a spacelike submanifold of M ). Denote by t 0 and t P the following instants: Then, t P ≤ t 0 , and if t P = t 0 then the multiplicity of t P as a P-focal point is greater than or equal to its multiplicity as a conjugate point. Assume that t P = t 0 and that t P is a P-focal point. By possibly extending the geodesic γ to a slightly larger interval [a, b ′ ] with b ′ > b, we can assume the existence of t ′ > t P with the property that there are no conjugate or P-focal instants in ]t P , t ′ ]. Then: where mul(t P ) is the (possibly null) multiplicity of t P as a conjugate instant. Similarly: where mul P (t P ) is the multiplicity of t P as a P-focal instant. Then: which has to be less than or equal to dim(P), giving mul(t P ) ≥ mul P (t P ). It is known that the result of Corollary 4.6 does not hold without the assumption that the metric g is positive definite or that g is Lorentzian and γ timelike. A counterexample is exhibited by Kupeli in [11], where the author constructs a spacelike geodesic γ orthogonal to a timelike submanifold P of a Lorentzian manifold, with the property that γ has conjugate points but no focal point. In the following statements, ε will denote a small positive number with the property that there are no conjugate or P-focal instants in ]a, a + ε]. Proposition 4.7. The following inequalities hold: In particular, when g is Riemannian, or g is Lorentzian and γ timelike, Proposition 4.7 says that the number of P-focal points along γ is greater than or equal to the number of conjugate points along γ, and that their difference is less than or equal to the dimension of P. 
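The displayed formula in Lemma 4.2 above did not survive extraction, but the eigenvalue bookkeeping it encodes can be illustrated with a toy path of diagonal forms: across a degeneracy instant t_0 with nondegenerate B_0 = B'(t_0) restricted to Ker B(t_0), the small eigenvalues change sign according to B_0, so for this kind of path the coindex jumps by the coindex of B_0 minus its index. The path below is a made-up example, not a reconstruction of the paper's formula.

```python
import numpy as np

def coindex_and_index(B, tol=1e-9):
    ev = np.linalg.eigvalsh((B + B.T) / 2.0)
    return int(np.sum(ev > tol)), int(np.sum(ev < -tol))

# Toy C^1 path of symmetric forms, degenerate exactly at t0 = 0:
# B(t) = diag(t, 2t, 1), so Ker B(0) = span(e1, e2) and B0 = B'(0)|Ker = diag(1, 2).
B = lambda t: np.diag([t, 2.0 * t, 1.0])
eps = 1e-3
print(coindex_and_index(B(-eps)), coindex_and_index(B(+eps)))   # (1, 2) -> (3, 0): coindex jump = 2 = coindex(B0)
```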
For the following result we need to recall the definition of the space A 0 [t] given in (4.4); we will denote by n + (g, t) and n − (g, t) respectively the coindex and the index of the restriction of g to The estimate in Corollary 3.4 can be used to obtain results of the following type: ] is a conjugate instant such that either: mul(t 0 ) > n − (g) − µ L0 ℓ| [a+ε,t0] or µ L0 ℓ| [a+ε,t0] < −n + (g), then for every a ′ < a there is an instant t ′ ∈ [a, t 0 ] such that γ(t ′ ) is conjugate to γ(a) along γ. When the first conjugate point is nondegenerate, we can state a more precise result. Corollary 4.11. Let t 0 ∈ ]a, b] be the first conjugate instant along γ, and assume that it is nondegenerate and mul(t 0 ) > n − (g) + n − (g, t 0 ). Then for every a ′ < a there exists and instant t ′ ∈ [a, t 0 ] such that γ(t ′ ) is conjugate to γ(a ′ ) along γ. Note that if g is Riemannian, then n − (g) = n − (g, t 0 ) = 0 and the result of Corollary 4.11 holds without any assumption of the multiplicity of t 0 . FINAL REMARKS AND CONJECTURES If the semi-Riemannian manifold (M, g) is real-analytic, then conjugate and focal points do not accumulate along a geodesic, and higher order formulas for the contribution to the Maslov index of each conjugate and focal points are available (see [18]). In this case, the statement of all the above results can be given in terms of the partial signatures of the conjugate and the focal points, which are a sort of generalized multiplicities. It may also be worth observing that the nondegeneracy assumption for the conjugate and focal points is stable by C 3 -small perturbations of the metric, and generic, although a precise genericity statement seems a little involved to prove. We conjecture that, given a differentiable manifold M and a countable set Z ⊂ T M , then the set of semi-Riemannian metrics g on M having a fixed index and for which all the geodesics γ : [0, 1] → M withγ(0) ∈ Z have only conjugate points nondegenerate and of multiplicity equal to 1 is generic. In this situation, the comparison results proved in this paper would have a more explicit statement in terms of number of conjugate and focal points. A natural conjecture is also that in the case of stationary Lorentzian metrics, all geodesics have nondegenerate conjugate points whose contribution to the Maslov index is positive and equal to their multiplicity. This fact has been proved in the case of left-invariant Lorentzian metrics on Lie groups having dimension less than 6 (see [9]) and, recently, using semi-Riemannian submersions (see [2]), also for spacelike geodesics orthogonal to some timelike Killing vector field. If this conjecture were true in full generality, one would have Riemannian-like comparison results also for spacelike geodesics in stationary Lorentz manifolds.
2008-08-11T15:17:47.000Z
2008-08-11T00:00:00.000
{ "year": 2009, "sha1": "4054ad1e9742028dd8715e2ccef4496698c98ef2", "oa_license": null, "oa_url": "http://msp.org/pjm/2009/243-1/pjm-v243-n1-p02-s.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "73866f8041b29b830e5632f92f5725a7270b3771", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
21779621
pes2o/s2orc
v3-fos-license
Dominant Mutations of the TREX1 Exonuclease Gene in Lupus and Aicardi-Goutières Syndrome* TREX1 is a potent 3′→5′ exonuclease that degrades single- and double-stranded DNA (ssDNA and dsDNA). TREX1 mutations at amino acid positions Asp-18 and Asp-200 in familial chilblain lupus and Aicardi-Goutières syndrome elicit dominant immune dysfunction phenotypes. Failure to appropriately disassemble genomic DNA during normal cell death processes could lead to persistent DNA signals that trigger the innate immune response and autoimmunity. We tested this concept using dsDNA plasmid and chromatin and show that the TREX1 exonuclease locates 3′ termini generated by endonucleases and degrades the nicked DNA polynucleotide. A competition assay was designed using TREX1 dominant mutants and variants to demonstrate that an intact DNA binding process, coupled with dysfunctional chemistry in the active sites, explains the dominant phenotypes in TREX1 D18N, D200N, and D200H alleles. The TREX1 residues Arg-174 and Lys-175 positioned adjacent to the active sites act with the Arg-128 residues positioned in the catalytic cores to facilitate melting of dsDNA and generate ssDNA for entry into the active sites. Metal-dependent ssDNA binding in the active sites of the catalytically inactive dominant TREX1 mutants contributes to DNA retention and precludes access to DNA 3′ termini by active TREX1 enzyme. Thus, the dominant disease genetics exhibited by the TREX1 D18N, D200N, and D200H alleles parallel precisely the biochemical properties of these TREX1 dimers during dsDNA degradation of plasmid and chromatin DNA in vitro. These results support the concept that failure to degrade genomic dsDNA is a principal pathway of immune activation in TREX1-mediated autoimmune disease. Deoxyribonucleases are essential enzymes acting to degrade DNA polynucleotides in the orchestrated processes of dismantling dying cells and in defense from invading pathogens. Failure to efficiently degrade superfluous DNA macromolecules can result in persistent nucleic acids that activate the mammalian immune system (1). TREX1 is a 314-amino acid polypeptide containing a robust 3Ј-exonuclease that degrades ssDNA and dsDNA and is expressed ubiquitously in mammalian cells (2)(3)(4). The catalytic core of TREX1 is contained in the N-terminal 242 amino acids, and the C-terminal 72 amino acids contain a hydrophobic region that localizes TREX1 to the endoplasmic reticulum in the perinuclear space of cells (5). Located in the cytosol, TREX1 prevents the initiation of a cell-intrinsic autoimmune pathway by degrading ssDNA derived from endogenous retroelements (6). TREX1 also degrades HIV DNA generated during HIV-1 infection, preventing activation of intrinsic DNA sensors (7). Upon activation of a cell death pathway and treatment of cells with DNA-damaging agents, TREX1 relocates to the nucleus, where it acts on DNA 3Ј termini (5,8,9). Multiple mechanisms of dysfunction underlie the observed clinical phenotypes in patients carrying TREX1 mutations, reflecting the position of the mutation and the stable dimeric structure. Our structural studies of the TREX1 catalytic domain with bound DNA reveal the protein-polynucleotide interactions that explain the requirement for ssDNA in the active site and highlight the extensive interface contacts in the remarkably stable dimeric enzyme (21). The TREX1 AGS-causing mutations locate predominantly to the catalytic core region with a few notable exceptions positioned in the C-terminal hydrophobic region (22). 
Some TREX1 AGS-causing mutants exhibit dramatically lower levels of catalytic function, whereas others show more modest effects on the ssDNA degradation activities, yet all yield similar human pathologies (21,23). The TREX1 systemic lupus erythematosus-associated mutations are located mostly in the C-terminal region with a few variants positioned in the catalytic core. The heterozygous TREX1 mutations that cause retinal vasculopathy and cerebral leukodystrophy are all frameshifts in the C-terminal region (17). The TREX1 enzymes containing C-terminal mutations retain full catalytic function but fail to localize to the perinuclear space in cells (15,17). Thus, disruptions in catalytic function and in cellular trafficking are mechanisms of TREX1 dysfunction in autoimmunity. TREX1-mediated autoimmune disease exhibits both dominant and recessive genetics dependent upon the nature of the mutation. Heterozygous de novo and inherited mutations in the highly conserved TREX1 Asp-18 and Asp-200 metal-binding residues exhibit dominant FCL and AGS (12,14,19,24). The disease phenotypes in dominant FCL and AGS patients correlate best with the dsDNA degradation activities measured in the TREX1 D18N and D200N enzymes and not with the ssDNA degradation activities (23). We have proposed that the inability to perform chemistry of phosphodiester bond cleavage resulting from the D18N or D200N mutation might trap the TREX1 mutant enzyme onto the dsDNA in a nonproductive enzyme-DNA complex at the site of the nick. These data support the proposal that TREX1 degrades dsDNA by acting at 3Ј termini generated by the NM23-H1 endonuclease during cell death (5). The studies presented here show that the dsDNA degradation activities of TREX1 enzymes containing the FCL and AGS dominant D18N, D200N, and D200H mutations are defective and that these mutants inhibit the dsDNA degradation activity of TREX1 WT enzyme, likely present in cells of these patients. The dominant effect of these TREX1 mutant enzymes is dependent upon the functional DNA binding by residues Arg-174 and Lys-175 positioned adjacent to the active sites and the Arg-128 positioned in the catalytic cores (see Fig. 1). In addition, metal-dependent DNA binding in the active sites of the catalytically inactive dominant TREX1 mutants contributes to DNA retention and precludes access to the DNA 3Ј termini of nicked dsDNA by the TREX1 WT enzyme. EXPERIMENTAL PROCEDURES Materials-The synthetic 30-mer oligonucleotide 5Ј-ATAC-GACGGTGACAGTGTTGTCAGACAGGT-3Ј with 5Ј-fluorescein was from Operon. Plasmid pR01-250 is a derivative of pUC19 provided by J. Hays (Oregon State University), and plasmid 1 is a derivative of the pMYC plasmid (New England Biolabs). Both plasmids contain one Nt.BbvCI restriction enzyme site. Plasmids were purified from bacterial cultures and from restriction enzyme digests using Qiagen kits. Nuclei were prepared from hamster livers as described (25). Enzyme Preparation-The human recombinant TREX1 enzymes were expressed in bacteria and purified as stable homo-or heterodimers as described (23). Briefly, mutant TREX1-containing plasmids were produced using a PCR sitedirected mutagenesis strategy and confirmed by DNA sequencing. The TREX1 WT and homodimer enzymes (amino acids 1-242) were expressed in bacteria as N-terminal maltose-binding protein (MBP) fusions with a PreScission protease recognition sequence between the MBP and TREX1. 
The MBP-TREX1 fusion protein was bound to an amylose resin (New England Biolabs) and washed, and the TREX1 was separated from the MBP with PreScission protease (GE Biosciences). The TREX1 was collected and purified to homogeneity using phosphocellulose chromatography. To generate TREX1 heterodimers, one TREX1-containing plasmid was engineered to express MBP-TREX1, and a second was engineered to express His-NusA-TREX1. Co-expression of the two plasmids in the same bacterial cell generates a mixture of TREX1 homodimers containing only the MBP and only the His-NusA affinity tags and TREX1 heterodimers containing both affinity tags. The TREX1 heterodimers are separated from the homodimers by sequential chromatography using nickelnitrilotriacetic acid (Qiagen) and amylose resins (New England Biolabs). The TREX1 heterodimers were purified by phosphocellulose or MonoQ chromatography. Protein concentrations were determined by A 280 using the molar extinction coefficient for human TREX1 protomer ⑀ ϭ 23,950 M Ϫ1 cm Ϫ1 . Aliquots (3 g) of the TREX1 preparations were analyzed by 12% SDS-PAGE and visualized by Coomassie Blue staining (supplemental Fig. S1). Gel images were generated using a FluorChem 8900 imaging system (Alpha Innotech) and scanned using ImageQuant TL version7.0 to obtain densitometric profiles and to determine the final TREX1 protein concentrations. The NM23-H1 (26) and APE1 fragment (amino acids 32-318) (27) endonucleases were cloned into plasmids as N-terminal MBP fusions with a PreScission protease recognition sequence between the MBP and the endonuclease. The MBP fusion endonucleases were overexpressed in Escherichia coli BL21(DE3) Rosetta 2 cells (Novagen), bound to amylose resins (New England Biolabs), and washed, and the column resin was incubated at 4°C overnight with PreScission protease (GE Biosciences) to separate the endonuclease from MBP. The NM23-H1 and APE1 fragments were collected from the column flow-through, dialyzed against 50 mM Tris-HCl (pH 7.5), 50 mM NaCl, 1 mM EDTA, and 10% glycerol, and stored at Ϫ80°C. Exonuclease Assays-The exonuclease reactions contained 20 mM Tris-HCl (pH 7.5), 5 mM MgCl 2 , 2 mM dithiothreitol, 100 g/ml bovine serum albumin, 50 nM fluorescein-labeled 30-mer oligonucleotide (ssDNA assays) or 10 g/ml plasmid DNA (dsDNA assays), and TREX1 protein as indicated in the legends for Figs. 2-10. The TREX1 enzyme and variant mixtures in the competition assays were prepared on ice at 10ϫ the final concentrations to allow the addition of TREX1 dimer mixtures simultaneously to reactions yielding the indicated final concentrations. Reactions were incubated at 25°C for 30 min or as indicated. Reactions were quenched by the addition of 3 volumes of cold ethanol and dried in vacuo. For ssDNA assays, the reaction products were resuspended in 4 l of formamide and separated on 23% denaturing polyacrylamide gels. Fluorescently labeled bands were visualized and quantified using a Storm PhosphorImager (GE Healthcare). The fraction of oligomer at each position was multiplied by the number of dNMPs excised from the 30-mer and by the total fmol of 30-mer in the starting reaction to determine the activities for TREX1 WT and variants (fmol of dNMP/s/fmol of enzyme). For visualization of dsDNA reaction products, assays were resuspended in 10 l of TAE agarose gel running solution and electrophoresed on 0.8% agarose gels containing ethidium bromide. DNA was visualized using a FluorChem 8900 imaging system (Alpha Innotech). 
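The activity computation described above for the ssDNA assays (band fractions converted to fmol of dNMP excised per second per fmol of enzyme) can be sketched as follows. The densitometry fractions, substrate and enzyme amounts below are hypothetical placeholders, not data from the paper.

```python
import numpy as np

# Hypothetical densitometry for one ssDNA reaction: fraction of the 30-mer
# substrate recovered at each product length (30 = undegraded; shorter = excision).
lengths = np.arange(30, 22, -1)                          # 30-mer down to 23-mer (example range)
fractions = np.array([0.40, 0.20, 0.15, 0.10, 0.08, 0.04, 0.02, 0.01])
fractions = fractions / fractions.sum()                  # normalize band intensities

reaction_volume_L = 30e-6                                # 30 uL reaction
total_30mer_fmol = 50e-9 * reaction_volume_L * 1e15      # 50 nM substrate, in fmol
enzyme_fmol = 76e-12 * reaction_volume_L * 1e15          # e.g. 76 pM TREX1, in fmol
reaction_time_s = 30 * 60                                # 30 min incubation

# fmol of dNMP excised = sum over bands of (fraction) x (nucleotides removed) x (total substrate)
dNMP_excised_fmol = np.sum(fractions * (30 - lengths)) * total_30mer_fmol
activity = dNMP_excised_fmol / reaction_time_s / enzyme_fmol
print(round(activity, 2))                                # fmol dNMP / s / fmol enzyme
```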
For quantification of dsDNA degradation, samples (20 l) were removed at the indicated times and quenched in wells of a 96-well plate containing 20 l of 15ϫ SYBR Green (Invitrogen). Fluorescent emission at 522 nm was determined using a POLARstar Omega microplate reader (BMG LABTECH). The amount of dsDNA remaining was determined by comparing fluorescence values with those obtained from a standard curve of fluorescence emission using varied plasmid 1 concentrations (1-10 g/ml) stained with SYBR Green. The amount of dsDNA degraded was used to calculate dNMPs excised and activities for TREX1 WT and variants (fmol of dNMP/s/fmol of enzyme). RESULTS AND DISCUSSION The TREX1 exonuclease degrades ssDNA and dsDNA polynucleotide substrates containing available 3Ј termini. Structural studies of the TREX1-DNA complex provide direct evidence for ssDNA binding of at least four nucleotides in length in the active sites ( Fig. 1) (21). Amino acid residues Asp-18 and Asp-200 are two of the divalent metal ion Mg 2ϩ -coordinating aspartates in the TREX1 active site that contribute to DNA binding and are required for catalysis. Residues Arg-174 and Lys-175 located on a flexible loop adjacent to the active sites and Arg-128 in the catalytic core are positioned appropriately to function in DNA binding to locate available 3Ј termini and generate ssDNA for entry into the active sites. The TREX1 ssDNA Exonuclease Activities of Dominant Mutants-The dominant TREX1 D18N, D200N, and D200H de novo and inherited mutations have been identified in FCL and AGS (12,14,19). The dominant phenotypes caused by these TREX1 alleles and the metal binding functions of these residues suggest a common mechanism of dysfunction. TREX1 is a homodimer, so TREX1 dimers in cells of these heterozygous individuals could be TREX1 MUT/MUT and TREX1 WT/WT homodimers and TREX1 WT/MUT heterodimers. Therefore, the TREX1 D200H/D200H and TREX1 WT/WT homodimers and TREX1 WT/D200H heterodimers were prepared, and the ssDNA degradation activities of these enzymes were compared with the activities of the TREX1 D18N and D200N mutants that we had previously determined ( Fig. 2 and Table 1). The ssDNA exonuclease activities of the dominant TREX1 D18N, D200N, and D200H homodimers were reduced by more than 10 4 -fold when compared with TREX1 WT . The activities of the TREX1 D18N, D200N, and D200H heterodimers are reduced by only 1.5-, 2.6-, and 1.5-fold when compared with TREX1 WT (Fig. 2 and Table 1). The ϳ2-fold loss in activity of the TREX1 heterodimers indicates that the TREX1 WT protomer within the Standard exonuclease reactions (30 l) were prepared with a fluoresceinlabeled 30-mer oligonucleotide, and dilutions of the recombinant TREX1 WT , TREX1 D200H/D200H , and TREX1 WT/D200H were prepared at 10 times the final concentrations. Samples (3 l) containing the TREX1 enzymes to yield the final indicated concentrations were added to reactions. The reactions were incubated for 30 min at 25°C. A and B, the reaction products were subjected to electrophoresis on 23% urea-polyacrylamide gels (A) and quantified as described under "Experimental Procedures." B, to precisely quantify results, the relative exonuclease activities of TREX1 WT and TREX1 WT/D200H were assayed in triplicate at 38, 57, and 76 pM as described above. Plots of activity versus enzyme concentrations were used to confirm the linearity of the assay and to generate the enzyme activity values. 
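The SYBR Green standard-curve quantification described under the experimental procedures above amounts to a linear calibration followed by interpolation of each quenched time point; a rough sketch, with invented calibration values and sample reading, is given below.

```python
import numpy as np

# Invented calibration points: plasmid concentration (ug/mL) vs. SYBR Green
# fluorescence emission (arbitrary units) read on the plate reader.
std_conc = np.array([1.0, 2.5, 5.0, 7.5, 10.0])
std_fluor = np.array([1200.0, 2900.0, 5800.0, 8600.0, 11500.0])
slope, intercept = np.polyfit(std_conc, std_fluor, 1)    # linear fit F = slope * c + intercept

def dsDNA_remaining_ug_per_mL(fluorescence):
    # Interpolate dsDNA concentration from a sample's fluorescence reading.
    return (fluorescence - intercept) / slope

sample_fluor = 4300.0                                     # one quenched time point (hypothetical)
remaining = dsDNA_remaining_ug_per_mL(sample_fluor)
fraction_degraded = 1.0 - remaining / 10.0                # relative to the 10 ug/mL input plasmid
print(round(remaining, 2), round(fraction_degraded, 2))
```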
The average activities and standard errors were determined by regression analysis using SigmaPlot 8.02 (SPSS Science, Inc.). The relative activity was calculated as: relative activity ϭ 100 ϫ ((fmol of dNMP released/s/fmol of mutant enzyme)/(fmol of dNMP released/s/fmol WT enzyme)). The position of migration of the 30-mer is indicated. a Activities were derived from reactions in Fig. 2 or as previously reported. Relative activities are calculated as: relative activity ϭ 100 ϫ ((fmol of dNMP released/s/fmol of mutant enzyme)/(fmol of dNMP released/s/fmol WT enzyme)). b WT ϭ wild type. dimer retains fully functional ssDNA degradation activity. This ϳ50% reduction in ssDNA degradation activity has been demonstrated in patient cells carrying the D200N allele (14). The ssDNA degradation activities of the TREX1 dominant mutants suggest that TREX1 protomers within the dimer can act independently during the degradation of small ssDNA substrates. The TREX1 dsDNA Exonuclease Activities of Dominant Mutants-The dominant negative effects of the TREX1 D18N, D200N, and D200H alleles are apparent upon examination of the dsDNA degradation activities. The TREX1 D18N, D200N, and D200H protomers within the heterodimers exhibit a dominant inhibitory effect on the dsDNA degradation activities of the TREX1 WT protomer. Incubation of nicked plasmid dsDNA with TREX1 WT homodimer results in the degradation of the nicked polynucleotide strand and the accumulation of the unnicked ssDNA strand (Fig. 3A, lane 2). In contrast, the TREX1 WT/D18N , TREX1 WT/D200H , and TREX1 WT/D200N heterodimers do not degrade the nicked dsDNA plasmid (Fig. 3A, lanes [3][4][5]. Additions of up to 10-fold higher concentrations of the TREX1 mutant heterodimers resulted in no detectable dsDNA degradation (Ref. 23 and data not shown). These data indicate that the dominant TREX1 D18N, D200N, and D200H heterodimers exhibit at least a 200-fold decreased level of dsDNA degradation activity relative to TREX1 WT in contrast to the modest ϳ2-fold level of reduced ssDNA degradation activity by these mutant heterodimers ( Table 1). The TREX1 Dominant Mutants Inhibit the TREX1 WT dsDNA Degradation Activity-The TREX1 D200H protomers in the TREX1 D200H/D200H homodimers and TREX1 WT/D200H heterodimers exhibit a dominant inhibitory effect on the dsDNA degradation activity of TREX1 WT . The TREX1 WT enzyme was mixed with increased amounts of the TREX1 D200H/D200H and TREX1 WT/D200H enzymes and incubated with the nicked dsDNA plasmid (Fig. 3B). In these reactions, the TREX1 WT competes with the mutant TREX1 enzyme to degrade the nicked dsDNA plasmid. The amount of TREX1 WT (76 nM) added in these reactions is 10-fold higher than the amount required to degrade the nicked polynucleotide of the dsDNA (23). The presence of increased amounts of the TREX1 D200H/D200H (Fig. 3B, lanes 8 -12) and TREX1 WT/D200H (Fig. 3B, lanes 13-17) results in decreased dsDNA degradation activity by the TREX1 WT enzyme as evidenced by the increased amount of remaining nicked dsDNA. The inhibition of TREX1 WT dsDNA degradation activity by the TREX1 D200H/D200H and TREX1 WT/D200H enzymes is similar to that previously demonstrated with the TREX1 D18N and D200N mutations (23). The potent inhibition of TREX1 WT dsDNA degradation activity exhibited by TREX1 dimers containing D18N, D200N, and D200H protomers could explain the dominant phenotypes exhibited by these TREX1 mutant alleles described in FCL and AGS (12,14). 
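The dominant inhibition described above can also be put in rough quantitative terms. If one assumes, purely for illustration, equal expression of the two alleles and random pairing of protomers into dimers, the expected dimer mixture in a heterozygous cell follows a simple binomial split; combined with the observation that only TREX1 WT/WT degrades nicked dsDNA, this caps the dsDNA-competent fraction at about one quarter even before trans-inhibition by mutant-containing dimers is considered.

```python
# Expected dimer composition in a heterozygous cell, assuming (hypothetically)
# equal expression of both alleles and random pairing of protomers.
p_wt = 0.5
mix = {
    "WT/WT":   p_wt ** 2,              # 0.25
    "WT/MUT":  2 * p_wt * (1 - p_wt),  # 0.50
    "MUT/MUT": (1 - p_wt) ** 2,        # 0.25
}

# Per the dsDNA results above, only WT/WT homodimers degrade nicked dsDNA,
# so the dsDNA-competent fraction is at most mix["WT/WT"], even before the
# inhibitory effect of the mutant-containing dimers is taken into account.
print(mix, mix["WT/WT"])
```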
These data suggest that FCL and AGS TREX1 D18N, D200N, and D200H heterozygote patients likely have varying mixtures of TREX1 WT and mutant homo-and heterodimers. The TREX1 D18N-, D200N-, and D200H-containing dimers would likely inhibit the TREX1 WT dsDNA degradation activity in these cells. Identification of TREX1 Residues Contributing to DNA Binding-The TREX1 Arg-174, Lys-175, and Arg-128 residues are positioned within the enzyme to participate in DNA binding. To measure the contribution of these residues in the TREX1-catalyzed reaction, a series of variant enzymes was generated in which each of these residues was changed to alanine individually or in combinations. The TREX1 and variant proteins were tested to confirm the presence of nuclease activity using a 30-mer oligonucleotide and to establish the relative ssDNA excision activities ( Fig. 4 and Table 2). The TREX1 R174A and TREX1 K175A exhibit no loss of activity, and the TREX1 R174A,K175A shows a modest ϳ3-fold reduced excision activity relative to the TREX1 WT . These data indicate some contribution to ssDNA binding by the Arg-174 and Lys-175 that is satisfied by the presence of one of these residues on the flexible loop. The TREX1 R128A exhibits an ϳ2-fold reduced excision activity relative to the TREX1 WT , also indicating a modest contribution to ssDNA binding by the Arg-128 located in the catalytic core. The TREX1 R128A,R174A and TREX1 R128,K175A double mutants exhibit ϳ3-fold reduced excision activities, consistent with the requirement for one of the positively charged flexible loop residues (Arg-174 or Lys-175) and Arg-128 in the core for full ssDNA exonuclease activity. The TREX1 R128A,R174A,K175A triple mutant exhibits an ϳ30fold reduced excision activity, further demonstrating the requirement for a single positively charged residue positioned on the flexible loop and the Arg-128 within the catalytic core for full ssDNA degradation activity. Also, a steady-state kinetic analysis indicated an ϳ35-fold higher K m value for the TREX1 R128A,R174A,K175A when compared with the TREX1 WT protein and similar k cat values, confirming the structural integrity of the mutant enzyme and further supporting the diminished DNA binding potential (data not shown). However, the ϳ3-fold magnitude in loss of TREX1 catalytic function upon mutation of the flexible loop Arg-174 and Lys-175 residues contrasts sharply with the ϳ200-fold loss of TREX2 catalytic function upon mutation of the comparable Arg-163, Arg-165, and Arg-167 flexible loop residues (Refs. 28 and 29 and data not shown). Furthermore, the TREX1 Arg-128 is not conserved in TREX2, where this residue is Asp-121. These data point to a unique function for Arg-128, Arg-174, and Lys-175 in TREX1 DNA degradation activities. The TREX1 Arg-174 and Lys-175 flexible loop residues and the Arg-128 located in the catalytic core contribute to dsDNA degradation activity. Incubation of TREX1 WT with a singly nicked dsDNA plasmid results in the degradation of the nicked polynucleotide strand and the accumulation of the un-nicked ssDNA strand (23). To determine the effects of TREX1 DNAbinding mutations on dsDNA degradation activity, a quantitative fluorescence assay was developed (Figs. 5 and 6). The TREX1 WT and variants were incubated with nicked dsDNA in time course reactions, and DNA degradation was visualized in agarose gels (Figs. 5A and 6A) and quantified by SYBR Green fluorescence emission (Figs. 5B and 6B). 
The TREX1 R174A and TREX1 K175A variants exhibit modest but reproducibly reduced dsDNA degradation activities (Fig. 5, A and B, and Table 2). Further, the TREX1 R174A,K175A double mutant exhibited a much greater ϳ20-fold reduction in dsDNA degradation activity when compared with TREX1 WT (Fig. 5, A and B, and Table 2). This 20-fold reduction in dsDNA degradation activity contrasts with the more modest 3-fold reduction in ssDNA degradation activity, indicating that Arg-174 and Lys-175 positioned on the flexible loop region contribute more substantially to dsDNA degradation by TREX1. The TREX1 Arg-128 located in the catalytic core contributes to dsDNA degradation activity. Incubation of the TREX1 R128A variant with nicked dsDNA resulted in ϳ8-fold reduced dsDNA degradation activity when compared with TREX1 WT (Fig. 6, A and B, and Table 2), contrasting the modest ϳ2-fold reduction in ssDNA degradation activity of this variant. The TREX1 R128A,R174A and TREX1 R128A,K175A double mutants exhibited much greater ϳ60-fold reduced dsDNA degradation activities when compared with TREX1 WT (Fig. 6, A and B, and Table 2). These large reductions in dsDNA degradation activities contrast with the modest ϳ3-fold reduced ssDNA activities of these TREX1 mutants (Table 2). Finally, the TREX1 R128A,R174A,K175A variant exhibits ϳ500-fold reduced dsDNA degradation activity relative to TREX1 WT (Fig. 6, A and B, and Table 2). These data indicate that Arg-174 and Lys-175 positioned on the flexible loop adjacent to the active sites and Arg-128 positioned in the catalytic core contribute to dsDNA binding and subsequent generation of a partial duplex a ssDNA exonuclease activity assays were performed in triplicate at three different concentrations as described under "Experimental Procedures." Plots of activity versus enzyme concentrations were used to confirm linearity of the assay and to generate the enzyme activity values. The average activities and standard errors were determined by regression analysis using SigmaPlot 8.02 (SPSS Science, Inc.). b The relative activity was calculated as: Relative activity ϭ 100 ϫ ((fmol of dNMP released/s/fmol of mutant enzyme)/(fmol of dNMP released/s/fmol WT enzyme)). c dsDNA exonuclease activity assays were performed at enzyme concentrations linear during the time course. Samples were removed at X times and resolved on agarose gels (Fig. 5A) or DNA degradation quantified (Fig. 5B) D18N, D200N, and D200H mutations on the dsDNA degradation activity of TREX1 WT is dependent upon DNA binding contributions by Arg-174, Lys-175, and Arg-128. A collection of TREX1 D18N-, D200N-, and D200H-containing homodimers and het-erodimers with additional mutations of R174A, K175A, and R128A was prepared and tested for dominant inhibition of TREX1 WT dsDNA degradation ( Fig. 7 and supplemental Figs. S2 and S3). The TREX1 WT enzyme was mixed with increased amounts of the TREX1 D200H,R174A,K175A (Fig. 7A, lanes 3-7), TREX1 WT/D200H,R174A,K175A (Fig. 7A, lanes 8 -12), TREX1 D200H,R128A (Fig. 7B, lanes 3-7), TREX1 WT/D200H,R128A (Fig. 7B, lanes 8 -12), and TREX1 D200H,R128A,R174A,K175A (Fig. . Samples (20 l) were removed prior to enzyme addition (0 min) and after incubation for the indicated times (A-E). The reaction products were subjected to electrophoresis on agarose gels and visualized by ethidium staining (A-D) or quenched in 15ϫ SYBR Green, and dsDNA remaining was determined by emission at 520 nm (E) as described under "Experimental Procedures." No Enz, no enzyme. 
7C, lanes [3][4][5][6][7][8][9][10][11][12][13][14][15] and incubated with the nicked dsDNA plasmid. In these reactions, the catalytically inactive TREX1 D200Hcontaining enzymes compete with the TREX1 WT to bind the nicked dsDNA plasmid and inhibit degradation by TREX1 WT . Increased inhibition of TREX1 WT degradation of the nicked polynucleotide strand is apparent by the decreased accumulation of the un-nicked ssDNA upon the addition of increased amounts of the TREX1 D200H mutants containing R174A,K175A (Fig. 7A) and TREX1 D200H mutants containing R128A (Fig. 7B). However, the magnitudes of the TREX1 WT inhibition by TREX1 D200H,R174A,K175A and TREX1 D200H,R128A are dramatically reduced when compared with the levels of inhibition exhibited by the TREX1 D200H/D200H and TREX1 WT/D200H as apparent from the much lower concentrations of the D200H enzymes needed for complete TREX1 WT inhibition (compare Fig. 3B with Fig. 7, A and B). Complete inhibition of TREX1 WT (76 nM) is achieved in competition reactions at concentrations of ϳ19 nM for TREX1 D200H/D200H and ϳ38 nM for TREX1 WT/D200H (Fig. 3B). In contrast, it is apparent in these competition reactions that greater concentrations of TREX1 D200H,R174A,K175A (Fig. 7A) and TREX1 D200H,R128A (Fig. 7B) are required to completely inhibit TREX1 WT dsDNA degradation. Further, the TREX1 D200H,R128A,R174A,K175A quadruple mutant exhibits no detectable TREX1 WT dsDNA degradation inhibition upon additions of up to 230 nM (Fig. 7C). Thus, the Arg-174, Lys-175, and Arg-128 contribute to TREX1 dsDNA binding in the D18N, D200N, and D200H dominant mutations as evidenced by the diminished inhibitory effect on TREX1 WT dsDNA degradation activity upon mutation of these residues to alanine ( Fig. 7 and supplemental Figs. S1 and S2). The proficient DNA binding contributed by these positively charged residues contributes to the dominant phenotypes of these TREX1 alleles and highlights these three residues in the TREX1 dsDNA degradation mechanism. The dominant TREX1 D200N and D200H alleles cause a more aggressive AGS autoimmune disease than the TREX1 D18N allele that causes FCL (12,14,19). The more aggressive AGS disease phenotype correlates with a greater inhibitory effect of the mutations at Asp-200 on TREX1 WT dsDNA degradation activity when compared with the TREX1 mutation at Arg-18 (Fig. 8). To demonstrate the greater TREX1 WT inhibition by the TREX1 Asp-200 mutants, increased concentrations of TREX1 D200H,R174A,K175A (Fig. 8A, lanes 3-15), TREX1 D200N,R174A,K175A (Fig. 8B, lanes 3-15), and TREX1 D18N,R174A,K175A (Fig. 8C, lanes 3-15) were mixed with TREX1 WT , and dsDNA degradation was examined. The level of TREX1 WT inhibition by TREX1 D200H,R174A,K175A is similar to that of TREX1 D200N,R174A,K175A , and both of these TREX1 Asp-200-containing mutants exhibit a greater level of TREX1 WT inhibition when compared with the TREX1 D18N,R174A,K175A . The greater level of TREX1 WT dsDNA (Fig. 1). The Asp-18 coordinates metals A and B and Asp-200 the metal at position A (21). Thus, mutations at Asp-18 and Asp-200 to alanine are likely to compromise metal binding in the TREX1 active sites. The TREX1 D18A and D200A homo-and heterodimers were prepared and tested for inhibition of TREX1 WT dsDNA degradation to determine whether metal binding contributes to the TREX1 dominant mutant phenotype. The TREX1 WT enzyme was mixed with increased amounts of the TREX1 D18A and D200A enzymes and incubated with the nicked dsDNA plasmid (Fig. 9). 
In these reactions, the TREX1 WT competes with the increased amounts of the TREX1 D18A/D18A (Fig. 9A, lanes 3-7), TREX1 WT/D18A (Fig. 9A, lanes 8-12), TREX1 D200A/D200A (Fig. 9B, lanes 3-7), and TREX1 WT/D200A (Fig. 9B, lanes 8-12), resulting in varied levels of nicked dsDNA degradation by TREX1 WT as evidenced by the accumulated ssDNA product. The TREX1 WT dsDNA degradation activity is inhibited to a lesser extent by the TREX1 D18A-containing mutants than by the D18N-containing mutants (compare Fig. 9A with Fig. 6B in Ref. 23). In contrast, the TREX1 D200A-containing mutants inhibit the TREX1 WT dsDNA degradation activity similarly to the D200H- and D200N-containing mutants (compare Fig. 9B with Fig. 3B in the present work and with Fig. 6A in Ref. 23). Mutation of Asp-18 to alanine likely diminishes metal ion binding at positions A and B, whereas mutation of Asp-200 to alanine likely reduces metal ion binding at position A. Thus, these results suggest that metal binding at position B in the TREX1 active site contributes mostly to ssDNA binding, whereas metal binding at position A contributes more to the chemistry of phosphodiester bond cleavage.

TREX1 Degrades Genomic DNA. The DNA degradation properties of the TREX1 Asp-18 and Asp-200 dominant mutations using plasmid DNA suggest that an in vivo DNA substrate is nicked dsDNA. The association of TREX1 with the SET complex and granzyme A-mediated cell death further implicates TREX1 in nuclear DNA degradation after nicking by DNA endonucleases during cell death processes (5,8). To test this idea, we prepared hamster liver nuclei and examined the TREX1 WT and variants for dsDNA degradation activities using nicked chromatin DNA (Fig. 10). Incubation of nuclei with NM23-H1 or APE1 fragment endonucleases generates nicked chromatin DNA that is apparent from the more rapid migration of ethidium-stained DNA in the agarose gel and by the appearance of ~200-bp DNA laddering indicative of nucleosomal fragmentation (Fig. 10, lanes 3 and 7). The TREX1 WT degrades ~50% of the fragmented chromatin DNA as evidenced by the loss of ethidium-stained DNA (Fig. 10, lanes 4 and 8). When the TREX1 D18N/D18N and TREX1 WT/D18N dominant mutants were mixed with TREX1 WT and incubated with endonuclease-treated nuclei, the amount of chromatin DNA degradation was reduced, indicating inhibition of TREX1 WT by the dominant TREX1 D18N-containing mutants during chromatin DNA degradation (Fig. 10, lanes 5 and 9). In contrast, when the TREX1 WT was mixed with the TREX1 D18N,R128A,R174A,K175A quadruple mutant, the TREX1 WT degraded the chromatin DNA to a similar extent as that detected in reactions containing TREX1 WT only (Fig. 10, lanes 6 and 10). These TREX1 WT and variant chromatin DNA degradation activities using nuclei parallel the dsDNA degradation activities using nicked plasmids and further support the TREX1 exonuclease action to degrade the nicked polynucleotide strands of genomic DNA during cell death processes. In conclusion, the biochemical activities of the TREX1 dominant alleles support a direct role for TREX1 WT in the degradation of dsDNA to prevent autoimmune disease. The TREX1 D18N, D200N, and D200H homodimers are catalytically inactive with respect to ssDNA and dsDNA polynucleotide degradation.
However, this complete loss of TREX1 DNA degradation activity alone does not sufficiently explain the dominant genetics of the D18N, D200N, and D200H alleles because there are other TREX1 AGS alleles, such as the insertion mutations of aspartate at position 201 (D201ins) and alanine at position 124 (A124ins), that result in elimination of TREX1 DNA degradation activities but exhibit recessive genetics (14,21). The dominant genetic phenotypes exhibited by the TREX1 D18N, D200N, and D200H alleles likely result from the generation of TREX1 enzymes with dysfunctional catalytic potential and fully functional nicked dsDNA binding properties. This mechanism of TREX1 dominant negative inhibition of dsDNA degradation might also extend to inhibition of dsDNA degradation by other exonucleases acting at nicked dsDNA sites, such as perhaps TREX2. Mutations of the TREX1 Arg-174, Lys-175, and Arg-128 residues reduce the dsDNA binding potential and, thus, the inhibitory effect of the D18N, D200N, and D200H alleles on TREX1 WT dsDNA degradation activity. The TREX1 exonuclease activities during ssDNA and dsDNA degradation and the stable TREX1 dimeric structure lead to multiple mechanisms of dysfunction, helping to explain the spectrum of TREX1-related autoimmune disorders. The dominant effects exhibited by TREX1 Asp-18 and Asp-200 mutations provide insights into TREX1-mediated autoimmune disease in the heterozygous state.
Clinical Trial Using A Silver-Coated Screw-Rod System and One-Year Follow-Up of The First 50 Patients

Aim: The occurrence of implant-related infection in all surgical branches is one of the challenges for which a definitive solution has yet to be found. One way to reduce the incidence of implant-related infection is to use implants coated with antibacterial materials such as silver. The aim of this study is to investigate whether nanoparticle silver-coated spinal implants reduce implant-related infection rates and are safe for human use. Method: In this clinical trial performed with 50 patients, we investigated whether or not silver-coated titanium implants alter renal and/or hepatic functions and increase serum silver levels at one year postoperatively. The required stabilization procedure was performed using the "nanoparticle silver-coated transpedicular stabilisation system". Blood and urine samples were taken from each patient at six different time points for detection of any alteration in silver concentration. Silver levels of all samples were investigated spectrophotometrically. Additional serum samples were taken for monitoring liver and kidney functions. Results: All values measured were regarded as safe since they were lower than 5 μg/L. There was no alteration in renal and/or hepatic function, and the amount of silver in urine and serum was at undetectable levels using an atomic absorption spectrophotometer. No complication related to silver occurred, and no implant infection was detected during the one-year follow-up period. Conclusions: This study showed that nanoparticle silver-coated spinal implants are capable of reducing implant-related infection rates and that these types of implants are safe for human use.

INTRODUCTION

The onset of implant-related infections in vertebral and orthopedic implant surgery is one of the challenges for which a definitive solution has yet to be found. Infection rates in routine vertebral surgery applications such as discectomy and laminectomy, in which no implant is used, are around one percent (1,2). However, this rate rises to 2.1-8.5% in cases of implant use (3,4). The rate of primary infection for joint replacement is between 0.86% and 2.52% according to the National Nosocomial Infections Surveillance System (5), reflecting the increasing incidence of implant surgery. Antibiotic treatment alone is insufficient in nearly half of the patients; inevitably the implant must be surgically removed and, in some cases, a new implant system must be inserted. This situation necessitates studies aimed at the development of an implant that will "decrease the risk of infection". We aimed to investigate whether or not a silver-coated transpedicular screw-rod system alters renal and hepatic functions and reduces the implant-related infection rate during the postoperative period, and to determine the resultant silver levels in body fluids. Our previous in vitro studies have shown that silver-coated titanium implants have antibacterial characteristics as effective as pure silver metal (12). In another study, we also made the following observations: silver does not accumulate in vital organs; it does not have any toxic effect on these tissues; and serum silver values did not increase when silver-coated implants were used. All of these findings indicate that nanoparticle silver does not accumulate to detectable levels (unpublished data). We also demonstrated that silver-coated screws inhibit biofilm formation in rabbits (13).
Results derived from these studies encouraged us to perform a clinical trial on 50 patients participating on a voluntary basis, based upon an approval letter obtained from the Human Ethics Committee of ……… University, Faculty of Medicine (Approval number: 146-4612).

MATERIAL AND METHOD

a) Silver-coated implants. The standard transpedicular screw-rod system routinely used in the clinic was coated with nanoparticle silver using the dip coating technique, in a quantity sufficient for approximately 50 patients (29 [58%] female and 21 [42%] male). All elements of the system were autoclaved and taken into the operating room on the morning of the operation.

b) Patient selection. The trial included a total of 50 ASA class I-II patients (median age, 56.8 years; range, 28-80 years) with indications for posterior lumbar stabilization who stated in writing that they were participating in the trial on a voluntary basis. It was preferred that the patients included in the trial were particularly high-risk in terms of implant-related infection. For this purpose, patients who had undergone previous surgery in our clinic or another center and whose implants had been removed due to infection were especially included in the trial, in addition to patients with suspected infection detected by preoperative magnetic resonance imaging (MRI) or with clinical evidence of infection, diabetes, or a history of CSF leakage. Patients who did not belong to any of the above-mentioned risk groups were also included in the trial. Patient demographics and statistics are given in Tables 1 and 2.

c) Approaches and methods applied. On the morning of the trial, 5 cc of blood and 5 cc of urine were obtained from the patients admitted to the trial who had provided signed informed consent. Samples were stored in biochemistry tubes in order to detect the basal silver level in the blood. Blood and urine samples were taken from all patients and sent to the laboratory to determine the erythrocyte sedimentation rate (ESR) and C-reactive protein (CRP) for the follow-up and detection of any infection that might develop during the postoperative period. Leukocyte and platelet counts were investigated for any harmful hematopoietic effect of silver, and kidney and liver functions were evaluated for any alteration due to silver accumulation. The required stabilization procedure was performed using the "transpedicular screw-rod system" made of titanium alloy and coated with nanoparticle silver ions using the dip coating technique. All patients were discharged from the hospital within an average of 5 days. Oral ceftriaxone (2 x 750 mg) was recommended to all patients in the postoperative period. This application was not different from the routine protocol that had been used for several years in our clinic. Blood and urine samples (5 cc of each) were taken from each patient on the postoperative 10th day and at the 1st, 3rd, 6th, and 12th months for detection of the silver concentration in blood and urine, and were sent to the Ankara University, Faculty of Medicine, Physiopathology Department Laboratory, where the silver quantity in these fluids was measured on an atomic absorption spectrophotometer. Samples were also taken for complete blood count, blood biochemistry, ESR, and CRP on the same dates as stated above for detecting silver levels.

d) Detection of silver in blood and urine samples. For this purpose, 0.250 ml serum and 0.250 ml urine samples from the patients were taken and diluted with 5 ml of 2% nitric acid in the proportion of 1/5. The samples prepared were compared with the standards of 2.5, 5.0, 7.5, and 10 μg/L on a Perkin-Elmer Analyst 800 Atomic Absorption Spectrophotometer to determine their silver concentrations.
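Turning such an external calibration into sample concentrations amounts to a simple linear fit. The sketch below assumes a linear calibration and the 1/5 dilution stated above; the absorbance readings for the standards are hypothetical placeholders, and readings falling below the lowest standard would be reported as below the detection limit rather than extrapolated.

import numpy as np

# Standard concentrations from the text; absorbance readings are illustrative only.
standards_ug_per_L = np.array([2.5, 5.0, 7.5, 10.0])
standard_absorbance = np.array([0.010, 0.020, 0.030, 0.040])  # hypothetical

# Linear calibration: concentration as a function of absorbance
slope, intercept = np.polyfit(standard_absorbance, standards_ug_per_L, 1)

def silver_ug_per_L(sample_absorbance, dilution_factor=5):
    """Silver concentration (ug/L) in the undiluted serum or urine sample."""
    diluted_conc = slope * sample_absorbance + intercept
    return diluted_conc * dilution_factor

print(silver_ug_per_L(0.030))  # e.g. a sample read at 0.030 absorbance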
e) Statistical analysis. The erythrocyte sedimentation rate (ESR), C-reactive protein (CRP), blood urea nitrogen (BUN), creatinine (Crea), leukocyte (Leu), platelet (Plt), alanine transaminase (ALT), aspartate transaminase (AST), gamma-glutamyl transaminase (GGT), and silver (in blood and urine) values measured at six different time points were compared using the repeated-measures ANOVA test, and the Greenhouse-Geisser correction was applied if sphericity could not be assumed. The Bonferroni test was used as the post-hoc test. Statistical analysis was performed using the SPSS 11.5 programme.

RESULTS

In all samples analyzed, the silver quantity was determined as <0.125 μg/L at 0.005 absorbance. No difference was detected between the preoperative samples and the samples taken 12 months after the operation. All other values measured were regarded as safe since they were <5 micrograms/L (Table 3). Complete blood counts, renal and liver functions, ESR, and CRP values of the patients included in the trial were periodically followed up for one year (Graphics 1-9). No elevation in the white blood cell count, an indicator of infection, was detected in any patient during the postoperative period. Reactive platelet elevation, which lasted for the first three months of the postoperative period and is known to be secondary to the operation, was detected in all patients, and a mild elevation in the white blood cell count that had normalized at the end of the first month was detected in some patients. All these values normalized during the 12-month follow-up period. No deterioration in renal or liver functions compared with the preoperative period was determined in any of the patients in the trial. There was an increase in ESR and CRP values in the postoperative period, as expected; however, these values had normalized in all patients within the first month. A summary of all these data is given in Table 3. Cerebrospinal fluid leakage occurred during the operation in only 3 patients, and the dural defects in these patients were primarily sutured and repaired using tissue adhesives. Lumbar external drainage was used for one of these 3 patients, and these patients were followed up for 10 days. No surgical wound site problem was detected in the postoperative period in any other patient included in the trial. Body temperature above 38°C was detected in only 2 patients on the postoperative 2nd day, and in 1 patient on the postoperative 3rd day, in only one measurement. No intervention other than cold compresses was applied in these patients. No clinical or laboratory infection was detected in any of the 50 patients included in the trial after one year, whereas the infection rate had been 3% in our clinic before we used the silver-coated system. It has been stated in the literature that gray-blue skin discoloration at the site of the implant can occur after silver intoxication (14,15,16). We thus inspected the incision area for any such development when patients presented for their follow-up examination. No skin discoloration was observed in any patient.
It has also been reported in the literature that, especially in strabismus operations to shorten the ocular muscles, silver metal was used for reattaching the muscles to the bone, and gray-blue discoloration of the sclera was detected in association with exposure to silver metal (17). Each patient that presented for follow-up was reinspected in this regard, even though such discoloration develops due to the local rather than systemic impact of the implant. During further follow-up (at the 3rd, 6th, and 12th months), each patient was contacted by telephone or e-mail and queried regarding any discoloration of the skin or sclera. Patients were kept under follow-up for 12 months, during which period no such discoloration was reported. Some patients required a second lumbosacral MRI during their follow-up examinations. These MRIs showed that the artifact level was no different from that of classical titanium screws. Thus, the silver coating was also unproblematic in this respect and did not create any difficulties in postoperative follow-up imaging.

DISCUSSION

Clinical trials have shown that the presence of biomaterial in the surgical site renders the host tissue sensitive to infection in both the early and late periods (18). The bacterial biofilm layer formed on the surface of the implanted material is the most important factor in the development of resistance (8,13). This layer forms a serious barrier against the effect of antibiotics on the bacteria. Thus, infections occurring on biomaterials respond poorly to antibiotics, and the infection generally cannot be controlled until removal of the implant (19,20,21). Infections that occur following implantation may require long-term treatment, including replacement of the infected implant, resection arthroplasty, or amputation, depending on the severity of symptoms (22). Implant-related infections occurring after instrumented spinal surgery are among the most difficult problems for which there remains no proven solution. Antibiotic treatment alone is not sufficient in nearly 50% of the patients, and a surgical procedure is inevitable. This is undesirable both in terms of patient comfort and financial burden. Increased usage of metal implants in vertebral surgery, especially within the last decade, has brought about an increase in postoperative infection rates. The infection rate reported after surgical operations in which routine antibiotic prophylaxis is performed is 1% in cases where a metal implant is not used, and increases up to 2.1-8.5% in patients who undergo metal implantation (19,23,24). This demonstrates a strong correlation between instrument use and infection development. Silver has been used for centuries due to its antibacterial properties, and no evidence has been found thus far showing that it has any important function in animal or plant metabolism (25). Only small amounts of silver are resorbed by the intestine and transported as a complex with plasma proteins. Most silver is then excreted by the liver. The rest of the silver is stored and accumulates intracellularly in organs and tissues without any use (22). However, silver binds not only to proteins but also to bacterial DNA and RNA. In a study conducted using radioactive silver, it was found that silver formed covalent bonds with Pseudomonas aeruginosa DNA but did not change the structure of the DNA (26). The same experiment showed that silver binds to the RNA and other components of bacteria at a much lower rate.
Silver that penetrates the cell inhibits the energy metabolism of bacteria. It deactivates sulfhydryl enzymes and forms compounds with amino, imidazole, carboxyl, and phosphate groups (27). It disrupts DNA replication and prevents mitosis in prokaryotes, and it disrupts the selective permeability of the cell membrane, ultimately causing the cell to swell and die (28). It reacts with tissue proteins, which disrupts the medium required for the reproduction of proteolytic bacteria (27). It stops replication of P. aeruginosa by binding to its DNA in the logarithmic reproduction phase. It prevents the oxidation of glucose, glycerol, fumarate, succinate, D-lactate, and L-lactate in Escherichia coli and affects the oxidative phosphorylation of the cell and, therefore, ATP synthesis (28). It inhibits the β-galactosidase enzyme, thereby stopping the respiratory chain and causing cell death (27). Glucose, glycerol, fumarate, succinate, D-lactate, and L-lactate are oxidized, whereas the oxidation of free sulfhydryl groups and NADPH is inhibited (27,28). The toxic activity of silver is often local; its effect tends to remain localized since silver is absorbed very slowly. Silver ions bind to proteins and form sediments of silver chloride at the application site. A trace amount of silver is absorbed through mucous membranes or through the skin in burn patients in the form of silver nitrate. Absorbed silver finds a wide area of distribution in the body. It particularly accumulates in the subepithelial area of the skin. It causes blue-gray discoloration, also known as argyria, as a result of its subepithelial accumulation in greater amounts (14,15). This pigment consists of silver sulfide and metallic silver and causes only a permanent cosmetic problem. Argyria is most commonly observed in humans exposed to silver (22). This pathological finding was seen more commonly in the 19th century in association with occupational exposure in silversmiths, miners, and photographers (15,16,22). Argyria can also appear from the use of colloidal silver products and/or silver-containing medical agents (22). Research on silver accumulation in tissue and blood tends to show that the level of this metal in a normal population not affected by industrial exposure should be only at the level of nanograms per gram of tissue (29). In a spectrophotometric examination performed on a patient initially diagnosed with lead poisoning because of his gray-blue skin color, one study detected a blood silver level of 0.5 ug/ml; however, the authors stated that the normal silver content in blood should be at most 5 ng/ml (or 5 ug/L) (25,29). As the patient's history was further investigated, it was learned that he had taken silver nitrate capsules of 16 mg three times a day for his gastrointestinal symptoms. In another source, normal values of silver are given as follows (30): serum, 2.1±1.5 µg/L (19.5±13.9 nmol/L); plasma, 0.68±0.33 µg/L (6.3±5.8 nmol/L). According to Wan et al., blood silver levels lower than 200 ppb must be considered normal, because regular human diets include small amounts of silver and consumers take in silver via their diets (31). Oral silver intake from a typical diet has been estimated to range between 27 and 88 ug/day, although other researchers estimated a lower intake of 10-20 ug/day (29,32,33). A concentration of silver in the blood of more than 300 ppb has been reported to cause argyria and liver and kidney damage (34).
Drake and Hazelwood found that acute symptoms of overexposure to silver nitrate include a decrease in blood pressure, diarrhea, irritation of the stomach, and decreased respiration (35). Chronic symptoms resulting from intake of a low dose of silver salts are fatty degeneration of the liver and kidneys (35). Long-term inhalation or ingestion of soluble silver salts or colloidal silver may cause argyria. Usage of nanoparticle silver for its antibacterial and antifungal properties is increasing in frequency as a result of developments in nanoscience. The food, drug, and cosmetic industries have begun to use nano silver in their products. Thus water, food, cosmetics, drugs, and drug delivery devices can be routes for ingesting silver nanoparticles (36). It has also been demonstrated that silver ions can be liberated from ingested products into the blood; thus they may accumulate in visceral organs, leading to liver and kidney toxicity (37). However, acute oral or transdermal intake of nanoparticle silver (2,000 mg/kg body weight) has not caused any significant clinical signs, mortality, acute irritation, or corrosive reactions affecting the eyes and skin in rats, guinea pigs, or rabbits (38,39). Kawata et al. reported that nanoparticle silver may cause cytotoxicity in human hepatoma cells, but only at high doses (>1 mg/L) (40). Other investigators (41,42) reported that silver is nonmutagenic. Despite all these studies and the increase in usage of nano silver, most of these studies are still restricted to in vitro experiments, and a clinical trial is still needed. Our study is, to our knowledge, the first to focus on the clinical usage of nanoparticle silver-coated vertebral implants in human beings. Turning to our results: elevated ESR, CRP, platelet, and leukocyte levels during the first month after surgery were expected, because acute-phase reactants and reactive species respond to nano silver implant placement. All these values had normalized by the 3rd month of the study. Sedimentation, CRP, and leukocyte values were below baseline at the end of the 12th month. The decrease in the levels of these parameters suggested recovery of some preoperatively infected patients. Increased levels of BUN and creatinine in the early period of the trial (up to the 1st month) may be misleading. All patients received intravenous saline therapy throughout the night preoperatively as a clinical routine, and all blood samples were taken at the end of the night, just before the operation. This condition decreased the baseline parameters of renal function. A mild elevation of these parameters may be due to anesthesia and/or analgesic and antibiotic therapy after the operation, or to dehydration because of blood loss during surgery, as expected. Only hepatic function parameters, especially ALT measurements, deteriorated. AST rose suddenly at the 3rd month and returned to baseline by the end of the 12th month. While GGT decreased at the 3rd month, a mild elevation occurred at the 6th month, but it was close to baseline at the end. Despite all these fluctuations and alterations in hepatic functions, all parameters remained within normal ranges. The continuous elevation in ALT at all time points of the trial may be a sign of long-term harmful hepatic effects of nano silver, even though the measurements were still within the normal ranges.
Although an isolated ALT elevation has no clinical value by itself, ongoing elevation of ALT might be thought to be associated with exposure to nano silver, and it might cause hepatic dysfunction in the long run if serum or urine silver levels were elevated. However, our spectrophotometric analysis clearly demonstrated that serum and urine silver levels did not rise; thus these results could not be associated with serum and urine levels of silver. Gaul and Staud estimated that a 50-year-old man can store an average of 0.23-0.48 g of silver in his body (14). The literature has also revealed that a total accumulated intravenous dose of 8 g of silver arsphenamine (1.84 g of silver) is enough to cause argyria (14,43). It has also been reported that ingesting 30 mg/day of silver for 1 year elevates serum silver levels to 0.5 mg/L and may cause argyria (44). Olcott reported that consumption of 89 mg/kg/day of colloidal silver resulted in ventricular hypertrophy in rats after 218 days, and upon autopsy, advanced pigmentation was seen in visceral organs, but the ventricular hypertrophy was not attributed to silver deposition (45). Furchner et al. studied absorption and retention of silver (as silver nitrate) in mice, rats, monkeys, and dogs. In all species the cumulative amount of silver nitrate excreted ranged from 90 to 99%, and only 1 to 10% of it was retained (46). Nanoparticle silver may have a higher retention rate because of its nano scale, but even so, approximately 90% of the silver will be excreted after it is released from the implant surface. Another topic relevant to silver toxicity or biosafety concerns amalgam fillings. As is known, conventional amalgam is a powder of silver-tin alloy mixed with mercury. The silver proportion of an amalgam filling is 65% (approximately 2.6 g of silver). Although most of the articles about amalgam fillings focus on their mercury content, a few studies about their silver content and tissue dispersion can be found. Drasch et al. studied 173 cadavers that had more than 9 amalgam fillings (23 g of silver, as calculated by us). They found that the silver concentration was 5.41 ug/kg in the cerebral cortex, 4.25 ug/kg in white matter, 5.02 ug/kg in the cerebellum, 8.15 ug/kg in the liver, and 0.44 ug/kg in the renal cortex (47). The cadavers' mental and general health status while alive could not be determined, and the authors did not note any skin coloration which could reveal argyria. One might claim that toxic metal accumulation in the brain may cause mental disorders, but other clinical trials have maintained that amalgam fillings are safe enough and do not cause mental or neurological disorders (48,49,50). When the said study and the listed dental literature are interpreted together, we can claim that even 23 g of implanted silver will not cause argyria or mental or neurological disorders. Tsukamato et al. reported that the 2% silver hydroxyapatite coating of a femoral replacement prosthesis contains 1.14 mg of coating material (0.0228 mg of silver, as calculated by us) and that the surface area of such a prosthesis is 76 cm2; they claimed that this amount of silver would not cause argyria (22). The pedicle screws used in this study had a 26 cm2 surface area for a 6.5x45 mm sized implant (data provided by the manufacturer, Mikron Makine, Ankara, Turkey). The sol-gel chemical which was used for dip coating of the implants contained 100 ppm nano silver (100 mg/L).
The estimated thickness of the coating was 300 nanometers; the calculated volume of coating chemical and the amount of silver deposited on one screw were 0.00078 ml and 0.000078 mg, respectively. This means that if 10 screws are used for one patient, the total exposure to silver will be 0.78 ug distributed over the whole body. According to the literature listed above, a normal person takes in 10-20 ug/day of silver with the daily diet (29,32,33). Even this literature alone shows that the silver-coated implants are sufficiently safe. On the other hand, according to Furchner et al. (46), a minimum of 90% of the 0.78 ug of silver must be excreted. This means that only about 0.078 ug of silver would be retained in the whole body of a patient. According to Kawata et al. (40), 0.78 ug of silver will not cause any cytotoxic effect.
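The exposure estimate above follows from straightforward unit conversion; a quick arithmetic check (all input values are taken from the text, and 1 cm3 of coating solution equals 1 ml):

# Inputs stated in the text
surface_area_cm2 = 26.0          # per 6.5 x 45 mm pedicle screw
coating_thickness_cm = 300e-7    # 300 nm
silver_mg_per_ml = 0.1           # 100 ppm nano silver = 100 mg/L
screws_per_patient = 10
retained_fraction = 0.10         # upper bound; Furchner et al. report 90-99% excretion

coating_volume_ml = surface_area_cm2 * coating_thickness_cm         # ~0.00078 ml
silver_per_screw_mg = coating_volume_ml * silver_mg_per_ml          # ~0.000078 mg
total_exposure_ug = silver_per_screw_mg * screws_per_patient * 1e3  # ~0.78 ug
retained_ug = total_exposure_ug * retained_fraction                 # ~0.078 ug

print(coating_volume_ml, silver_per_screw_mg, total_exposure_ug, retained_ug)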
Aerobic power across positions – an investigation into women's soccer performance

INTRODUCTION

Soccer, also known as football, is a highly demanding sport that requires continuous physical activity and endurance during matches [1]. Aerobic capacity, commonly measured by the maximum amount of oxygen uptake during exercise (VO2 max), is a fundamental factor in assessing an athlete's fitness level and performance ability [2]. Soccer players are found to have higher aerobic capacities than the average person due to the nature of the sport [3]. However, studies show that there may be variations in position-specific VO2 max among soccer players, based on their roles on the field [4]. More than 4 million female soccer players are registered with soccer associations, and women's soccer is gradually gaining influence in the world [5]. Women's soccer has the same complex structure as men's soccer, and many other factors influence the game. However, a good female soccer player should have aerobic capacity, anaerobic capacity, speed, endurance, explosive power, coordination, and the ability to read the game [6,7]. The physical and technical requirements of players in different positions in the game are also different [8].

In recent years, researchers have started to study the influence of soccer players' physiology and morphology on playing position, factors that may influence the tactical arrangements made by team coaches [9]. The literature suggests that there may be differences in VO2 max levels between soccer players in different positions. One study found that midfielders have the highest VO2 max levels among all positions, while defenders and forwards may have slightly lower measurements [10]. Most studies have focused on male soccer players, and fewer studies have been conducted on female soccer players [11].

In normal game analysis, the dominant analytic item in women's soccer remains movement, which includes running distances, sprints, and direction changes throughout the game, or can be described as movement of different intensities [12]. In a regular game, female athletes run around 10,000 m in 90 minutes, with an aerobic-to-anaerobic work ratio of 9:1 [2]. From a physiological point of view, during such long matches athletes do not maintain constant high-intensity exercise but perform intermittent exercise alternating between aerobic and anaerobic work [13], with some studies indicating a change in activity every 4-6 seconds [14]. This exercise pattern may occur in both sprint-rest and jog-rest situations, in which the athlete's organism changes with the duration of the match, and good aerobic capacity affects exercise performance. For example, such intermittent exercise elicits a series of physiological responses, including decreased intramuscular pH, decreased ATP and PCr concentrations, depletion of aerobic metabolism, and increased lactic acid [6]. Since the lactate clearance rate is related to aerobic capacity, athletes with a higher aerobic capacity have a shorter recovery time and very strong regeneration of phosphocreatine. The average VO2 level of a team may even determine the league standings, with the first-place team having the highest VO2 and the second and third places ranked behind; therefore, good aerobic capacity is undoubtedly one of the factors in achieving results [15].
Different competition levels mean different load intensities, and in physiological terms the essence of the exercise load originates from the metabolic capacity of body functions to cope with the intensity of the match. VO2 max is the maximum amount of oxygen a person utilizes per unit of time, and oxygen uptake can represent oxygen utilization and exercise capacity [16,17]. Despite extensive research on aerobic capacity and variations in VO2 max among soccer players, there remains a notable research gap concerning female athletes. The existing studies predominantly focus on male soccer players, leaving a dearth of comprehensive research on position-specific aerobic capacities within women's soccer. Therefore, there is a compelling need to investigate the intricacies of the aerobic capacities of female soccer players in different positions.

OBJECTIVE

The aim of the study is to address the gap in knowledge about female aerobic capabilities in the game of soccer: 1) to meticulously assess and compare female soccer players' aerobic capacities, as measured by VO2 max, across distinct positions, including forwards, midfielders, and defenders; 2) to scrutinize and discover any significant disparities in the aerobic capacities exhibited by athletes occupying diverse positions within the female soccer team; 3) to discern and understand the implications and ramifications of position-specific aerobic capacities regarding training regimens and overall game performance among female soccer players; 4) to provide invaluable insights into the multi-faceted physical characteristics and individual demands inherent in different positions within women's soccer. A flowchart depicting the schematics of the study is shown in Figure 1.

MATERIALS AND METHOD

Participants. Cardiorespiratory fitness testing was performed on 25 female soccer players (age = 22.72 ± 2.69 years): 5 forwards, 10 midfielders, and 10 defenders (Table 1, Fig. 2). All players had participated in the Chinese Women's Super League and were active players at the time of the study. The testing of the players took place at the Shandong Sports Science Research Center, China. All tests were conducted during daytime hours. According to the terms of the study, the collected data were anonymous. All participants were informed about the purpose of the study, and participation was approved by both the club and the individual players. The study was approved by the Gdańsk University of Physical Education and Sport and the local Ethics Committee, and conducted according to the provisions of the 1964 Declaration of Helsinki.

Sample variables.
During all tests, a heart rate belt was used to monitor each athlete's heart rate (HR) at 5-s intervals. The GPS sampling frequency of this device is 10 Hz, the accelerometer 200 Hz, and the HR recording 1,000 Hz. Athletes were not permitted to exercise before the test, in order to obtain a resting heart rate (HRrest). The entire test was carried out on a professional sports treadmill (h/p/cosmos GmbH, Nussdorf-Traunstein, Germany). Oxygen uptake (VO2) and heart rate were measured by a spirometer (Oxytone Alpha, Jaeger, Germany); calibration was performed before and after each test according to the manufacturer's instructions. During the test, the spirometer measured VO2 every five seconds. To attain VO2 max, the following conditions had to be met: 1) the test subject's heart rate reaches 180 beats/min; 2) the respiratory quotient is greater than 1; 3) oxygen uptake reaches a relatively stable plateau with increasing exercise load. After the best effort, the participant could no longer continue exercising under load [18,19]. After the test, the data were uploaded and exported to a Microsoft Excel table. Each athlete wore a breathing mask connected to the gas analyzer throughout the test; the gas analyzer measured the athlete's oxygen uptake and ventilation (VE). A Polar heart rate belt was used to monitor the athlete's real-time heart rate and heart rate reserve, and the HP treadmill was used for the treadmill test.

Testing process. The athletes did not exercise vigorously the day before the experiment and had no injuries. The test process consisted of 2 parts: a preparation phase and an incremental load phase. The preparation phase was intended to give the athletes a certain amount of stimulus in advance and to adjust their state in readiness for the incremental load test: speed 4 km/h, slope 0%, held for 6 minutes, followed by speed 4 km/h, slope 0%, held for 1 minute. Incremental load stage: Level 1: speed 8.5 km/h, gradient 0.4%, held for 2 minutes; Level 2: speed 9 km/h, gradient 0.8%; Level 3: speed 9.5 km/h, gradient 1.2%; and so on, with the speed increasing by 0.5 km/h and the gradient by 0.4% every 2 minutes until the athlete could no longer continue.

Statistical analysis. SPSS 26 software was used for data processing (IBM SPSS Statistics for Windows, Version 26.0, Armonk, NY, USA). Statistical differences between the oxygen uptakes of the female soccer players were assessed using the Shapiro-Wilk test for normal distribution (p > 0.05, conforming to a normal distribution; p ≤ 0.05, not conforming) and one-way ANOVA, with Levene's test for homogeneity of variances; p < 0.05 was considered significant.

RESULTS AND DISCUSSION

The coefficients of variation (CV) for height (170.04 ± 5.42 cm), weight (58.64 ± 6.06 kg), VO2 max (59.37 ± 7.27 ml/kg/min) and the speed corresponding to VO2 max (13.26 ± 0.39 km/h) of the female soccer players were less than 0.15, the absolute values of skewness were less than 4, and the absolute values of kurtosis were less than 10 (Tab. 2). This indicates that the athletes are homogeneous and can be considered comparable in the same test, although they play in different positions.
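The homogeneity screen just described (CV below 0.15, absolute skewness below 4, absolute kurtosis below 10) can be reproduced with NumPy/SciPy; the VO2 max values below are illustrative placeholders, not the study's raw measurements:

import numpy as np
from scipy import stats

# Illustrative VO2 max values (ml/kg/min), not the study's individual data
vo2max = np.array([63.1, 58.4, 55.9, 61.0, 57.2, 66.3, 54.8, 60.5, 59.9, 62.7])

cv = np.std(vo2max, ddof=1) / np.mean(vo2max)   # coefficient of variation
skewness = stats.skew(vo2max)
kurt = stats.kurtosis(vo2max)                   # excess kurtosis by default

homogeneous = (cv < 0.15) and (abs(skewness) < 4) and (abs(kurt) < 10)
print(round(cv, 3), round(skewness, 2), round(kurt, 2), homogeneous)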
In Table 3, values of P > 0.05 in the Shapiro-Wilk test indicate normality in the distribution of the variables for all analyses. In the one-way ANOVA, the F and P values identify the differences in height, weight, VO2 max and the speed corresponding to VO2 max among the players in different positions. Levene's test (P > 0.05) further confirmed the homogeneity of the variances. In Table 4, in the data description of the players in different positions, all player positions were grouped in a one-way analysis of variance (ANOVA). There was no significant difference in weight between players in different positions; however, there was a significant difference (p < 0.05) between the height of the forwards (172.8 ± 4.79 cm) and the defenders (167.8 ± 3.22 cm), while there was no significant difference between the height of the forwards and the midfielders (170.9 ± 6.25 cm). In the comparison of VO2 max, there was a significant difference (p < 0.05) among the three playing positions; the speed corresponding to VO2 max also showed significant differences between each position (P < 0.05).

To the best of the authors' knowledge, this is the first study to examine the relationship between different positions and VO2 max in Chinese female professional soccer players. In the study, the results of maximal oxygen uptake by female soccer players in different positions showed that midfielders had the highest oxygen uptake; midfielders had 6.8% higher maximal oxygen uptake than forwards, and 11.9% higher maximal oxygen uptake than defenders. This is essentially the same as the finding of the study by Haugen et al. [20]. Players with a higher VO2 max make more rushes and often play a key role in the game; such players also delay lactate buildup. Players with a high VO2 max theoretically also have a higher lactate threshold, which means that such players do not accumulate lactate as quickly during high-intensity activities [21]. In the 2019 Women's World Cup, midfielders ran an average of 11,210 m, forwards an average of 10,979 m, and defenders an average of 10,369 m [22]. In addition to team injuries and athletes' skills in the tournament, fitness was an important factor that affected the game. If a team's average VO2 max is higher than the other team's, the result is equivalent to adding one more player to the game than the opponent [23]. This is one of the reasons why VO2 max is important for women's soccer. In terms of the speed corresponding to maximal oxygen uptake, it is again the midfielders who are the fastest; midfielders are 3.3% faster than strikers and 1.9% faster than defenders. Vescovi et al. performed the '20-meter beep test' for endurance assessment of female college soccer players, and found that players in different positions still showed a large gap in the distance run [24]. VO2 max is an important factor in repetitive sprinting ability and the total distance run [25]. VO2 max is also necessary for recovery after short periods of intense exercise [26]. This can be compared with the current study on the speed of players at maximal oxygen uptake, where midfielders were also faster than players in other positions under maximal oxygen uptake conditions, or where midfielders were more economical in their running exertion during the game. As a result, midfielders also cover the greatest area of the pitch during the game [27].
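The position-wise tests reported in Tables 3 and 4 (Shapiro-Wilk for normality, Levene's test for homogeneity of variances, and one-way ANOVA) can be run with SciPy along the following lines; the group values here are illustrative, not the study's raw data:

import numpy as np
from scipy import stats

# Illustrative VO2 max values by position (ml/kg/min)
forwards    = np.array([58.1, 66.5, 52.3, 60.0, 57.7])
midfielders = np.array([63.0, 70.1, 58.4, 61.9, 66.2, 55.3, 64.0, 68.8, 59.5, 65.2])
defenders   = np.array([55.0, 58.2, 51.9, 56.4, 60.1, 54.3, 57.0, 52.8, 59.4, 53.2])

# Shapiro-Wilk normality test within each group (p > 0.05: consistent with normality)
for name, group in (("forwards", forwards), ("midfielders", midfielders), ("defenders", defenders)):
    w_stat, p_norm = stats.shapiro(group)
    print(name, round(p_norm, 3))

# Levene's test for homogeneity of variances, then one-way ANOVA across positions
lev_stat, p_lev = stats.levene(forwards, midfielders, defenders)
f_stat, p_anova = stats.f_oneway(forwards, midfielders, defenders)
print(round(p_lev, 3), round(f_stat, 2), round(p_anova, 4))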
In terms of actual competition and training, the speed corresponding to maximal oxygen uptake is a factor that determines game performance. vVO2 max represents the athlete's aerobic capacity, and there is also a relationship between the athlete's speed and maximal oxygen uptake [28-30]. There was no significant difference in the weight comparison, which may be caused by the female athletes' fear of becoming too muscular through training, and by reduced dietary intake [31,32]. However, there were significant differences in body weight among male soccer players in different positions [33].

CONCLUSIONS

Evaluating the physical abilities of athletes in different positions is a key aspect of targeted training and of identifying the potential of athletes. This study assessed the maximum amount of oxygen uptake during exercise (VO2 max) and vVO2 max, as well as height and weight. Some of the data are similar for female professional athletes in different positions, but there were significant differences in VO2 max and vVO2 max; although in most cases the players performed the same training in the same training programme, the division of positions led to differences in oxygen uptake and speed. There are obvious differences in height and weight between female and male athletes, and training for female players cannot directly copy many male training methods. Evaluation of maximal oxygen uptake in soccer players facilitates the organization of training programmes; it is part of the future direction of the player, and maximal oxygen uptake is key to success in soccer. In the current study, patterns were found in the VO2 max of players in different positions; VO2 max ranked in the order of midfielders (63.24 ± 7.04 ml/kg/min), forwards (58.92 ± 7.70 ml/kg/min), and defenders (55.73 ± 4.40 ml/kg/min). The conclusion is that midfielders, who combine both offense and defense in the match, have more tasks to accomplish in 90 minutes; therefore, the midfielders also had higher running distances. The defenders, on the other hand, have fewer tasks to perform during the game, and therefore have the lowest oxygen uptake requirement in the whole team. This is a good indication that different positions in the team place different requirements on the players, as a result of training and games. Future research based on this study could include developing position-specific training programmes for soccer players, given the varied aerobic capacities observed across different roles. Additional research could also examine physiological differences between male and female athletes, track players' VO2 max changes over time, and study other physical characteristics for comprehensive performance insights. The findings could potentially be applied to other team sports with position-specific roles, maximizing player performance through tailored training strategies.

[Figure and table captions: Figure 1: Flowchart demonstrating the schematics of the current study. Table 1: Basic information of participants. Table 2: Parameters and their values used in the analysis. Table 3: Basic information about soccer players in different positions. Table 4: One-way ANOVA comparison of player information at different positions. Note: p ≤ 0.05, hypothesis not valid; p > 0.05, hypothesis valid; one-way ANOVA, p < 0.05 indicates significance.]
Isolation and genomic characterization of six endophytic bacteria isolated from Saccharum sp (sugarcane): Insights into antibiotic, secondary metabolite and quorum sensing metabolism

Six endophytic bacteria were isolated from Saccharum sp (sugarcane) grown in the parish of Westmoreland on the island of Jamaica in the West Indies. Whole genome sequencing and annotation of the six bacteria show that three were from the genus Pseudomonas and the other three were from the genera Pantoea, Pseudocitrobacter, and Enterobacter. A scan of each genome using the antibiotics and secondary metabolite analysis shell (antiSMASH 4.0) webserver showed evidence that the bacteria are able to produce a variety of secondary metabolites. In addition, we were able to show that one of the organisms, Enterobacter sp RIT418, produces N-acyl-homoserine lactones (AHLs), which is indicative of cell-cell communication via quorum sensing (QS).

Introduction

Sugarcane is a tall perennial monocot from the genus Saccharum. The plant is native to warm temperate and tropical climates. The plant is agriculturally important because of sugar production. Sugarcane is an important crop plant in many countries including the island of Jamaica, where it is estimated that 100,000 metric tons of raw sugar will be produced from 1.4 million metric tons of sugarcane in 2018 (1). Even though the plant is integral to the economy of Jamaica and other countries due to its role in sugar and ethanol production, studies related to endophytic and epiphytic bacterial-sugarcane interactions in sugarcane grown in Jamaica are very sparse. Our laboratory is interested in assessing the bacterial-sugarcane symbiotic relationship for two main reasons: firstly, to isolate and identify beneficial bacteria involved in nitrogen fixation; secondly, to isolate and identify phytopathogens that are detrimental to the growth and/or development of the plant. A previous study from our group isolated and identified the bacterium Enterobacter sp strain SST3. Whole genome sequencing and annotation of Enterobacter sp strain SST3 show that the bacterium employs an AHL synthase gene involved in quorum sensing signaling. The AHL synthase gene from Enterobacter sp strain SST3 shares 88% similarity to the CroI gene from Citrobacter rodentium strain CC168, which is also involved in quorum sensing. In addition, Enterobacter sp strain SST3 possesses the complete genetic and proteomic machinery required for the catabolism of sucrose as an energy source, in addition to an indoleacetamide hydrolase (iaaH) ortholog involved in the production of auxin-like compound/s, which have an integral role in plant growth and development (2). The work presented here is a continuation of the screening for additional beneficial and/or pathogenic endophytic bacteria. Here we present the isolation, genome sequencing and annotation of six endophytic bacteria isolated from the internal stem tissue of sugarcane grown in Jamaica. In addition, we present features of the isolates related to antibiotic production and other secondary metabolites, such as the production of compounds indicative of the quorum sensing cell-cell communication system.

Isolation of endophytic bacteria

Sugarcane was obtained from a farm located in the parish of Westmoreland on the island of Jamaica in the West Indies.
The external surface of the sugarcane was sterilized using 1% (v/v) Triton X-100 surfactant for 10 minutes, followed by 20% (v/v) sodium hypochlorite/1% Triton X-100 for 10 minutes, followed by five 10-minute washes with sterile distilled water. Following sterilization, the internal stem tissue was dissected under sterile conditions and 0.5 grams was used to inoculate 100 mL of 5 different media (tryptic soy, nutrient, R2A, Luria, and potato dextrose). The inoculated broths were allowed to incubate at 30°C for 48 hours with continuous shaking at 250 rpm. For isolation of pure colonies, serial dilutions (10⁻¹ to 10⁻¹⁰) were performed and 100 μL of the samples ranging from 10⁻⁵ to 10⁻¹⁰ were plated onto the five different agar media (Fig. 1A and 1B).

N-acyl-homoserine lactone signal separation and detection

N-acyl-homoserine lactones (AHLs) were prepared and concentrated using ethyl acetate extraction of growth supernatants as previously described by our laboratory (3,4). T-streak, disc diffusion and thin layer chromatography (TLC) bioassays were done as described in previous original publications by our laboratory (3-5) and in corresponding review articles on biosensors for AHL detection (6,7).

Genomic DNA isolation and PCR amplification of the 16S V3/V4 rDNA regions

Genomic DNA was extracted from 5 mL of individual bacteria grown in broth using the MolBio DNA extraction kit according to the manufacturer's instructions. For initial identification of isolates, the variable 3 and 4 (V3/V4) regions of the 16S rDNA were amplified using 12 picomoles of forward and reverse primer, 1 mM MgSO4, 0.5 mM of each of the four deoxynucleotide triphosphates, 0.2 ng of genomic DNA and 1 unit of Platinum Pfx DNA polymerase (Invitrogen) with the following PCR conditions: 1 cycle at 95°C for 2 minutes, followed by 25 cycles at 95°C for 30 seconds, 52°C for 30 seconds and 72°C for 1 minute. The forward and reverse primers used to amplify the V3/V4 region were 5'-CCTACGGGNGGCWGCAG-3' and 5'-GACTACHVGGGTATCTAATCC-3' (Fig. 1C). The ~500 bp V3/V4 amplicons were resolved by electrophoresis on a 0.8% (w/v) agarose gel, followed by gel extraction using the QIAquick Gel Extraction Kit (Qiagen) and Sanger nucleotide sequencing in both directions using the primers that were used for amplification. The individual genera were identified using the Basic Local Alignment Search Tool (BLAST) (8).

Genome sequencing and assembly

For whole genome sequencing, the extracted DNA was processed using the Nextera XT kit (Illumina), quantified using a NanoDrop spectrophotometer and sequenced on the MiSeq Illumina platform at the Rochester Institute of Technology Genomics Facility. Adapter trimming was performed on the raw paired-end reads using SeqPurge version 0.1. The trimmed reads were subsequently assembled de novo with Unicycler version 0.3.0b.

Strain identification and genome quality assessment

For each assembled genome, 43 conserved microbial marker genes were identified, concatenated and used to determine strain identity based on phylogenetic placement within a reference genome tree consisting of 5,656 trusted reference genomes (9). Lineage-specific marker genes were subsequently inferred for each genome based on the updated taxonomic assignment and were also used to estimate genome completeness and contamination. The taxonomic assignment of Pseudocitrobacter sp RIT415 to the family Enterobacteriaceae reflected strikingly high 16S rDNA identity (>99.5%) to members of the genus Pseudocitrobacter.
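The genus-level identification of the Sanger-sequenced V3/V4 amplicons was done with BLAST (8); one way to script the same lookup is through Biopython's NCBI interface. Biopython is our choice here rather than something stated in the paper, and the FASTA filename is a placeholder:

from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

# Read a trimmed V3/V4 Sanger read (placeholder filename) and BLAST it against nt
v3v4 = SeqIO.read("RIT418_16S_V3V4.fasta", "fasta")
result_handle = NCBIWWW.qblast("blastn", "nt", str(v3v4.seq))
record = NCBIXML.read(result_handle)

# Report the top hits with their percent identity, which suggests the genus
for alignment in record.alignments[:5]:
    hsp = alignment.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    print(f"{identity:5.1f}%  {alignment.title[:70]}")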
Since there were no published reference genomes from the genus as of June 28, 2018, a nucleotide BLAST search was done using its whole genome as the query against seven house-keeping genes (gyrB, rpoA, rpoB, trmE, recN, infB, atpD) of Pseudocitrobacter faecalis 25 CIT T and Pseudocitrobacter anthropi C138 T. The results of this comparison between the seven house-keeping genes of Pseudocitrobacter sp RIT415 and those of Pseudocitrobacter faecalis 25 CIT T and Pseudocitrobacter anthropi C138 T show >98% identity for each (data not shown).

Results and Discussion

Approximately 500 MB of paired-end reads were generated from the whole genome sequencing of each of the six isolates. De novo genome assembly followed by CheckM inspection indicates that the assembled genomes are of good quality, with high completeness (>98%) and negligible or possibly background contamination (<0.5%) (Table 1). CheckM, JSpecies and nucleotide BLAST analyses assigned the isolates to the genera Pseudomonas, Enterobacter, Pantoea or Pseudocitrobacter (Table 1). It is notable that this is the first reported genome representative for Pseudocitrobacter as of June 28, 2018. AntiSMASH results for the isolates in this study showed the production of various secondary metabolites, including antibiotics, as listed in Table 2. Interestingly, an N-acyl homoserine lactone synthase (luxI) homolog was identified in the Pantoea sp RIT413 genome. However, the strain did not accumulate AHLs that could activate the TraR or CviR receptor proteins (Fig. 2). In contrast, a luxI homolog could not be identified in Enterobacter sp RIT418 using both BLAST and antiSMASH queries. However, the strain was able to produce TraR- and CviR-detectable AHL signal/s (Table 2, Fig. 2). To be sure that this was not a result of human error during library preparation and culture deposition, the six isolated strains were re-streaked from our culture repository to assess purity. In addition, the V3/V4 region of the 16S rDNA was re-amplified, re-sequenced and analyzed. The reanalysis corroborated our original findings. Nucleotide and protein BLAST searches were done using the croI/luxI and easI homologs from Enterobacter sp strain SST3 and Enterobacter asburiae strain L1, respectively, as queries (2,10). The BLAST searches did not identify any luxI or luxI-like homologs in Enterobacter sp RIT418. A complete genome of Enterobacter sp RIT418 can be obtained more readily given the recent advancement of Nanopore and PacBio long-read technologies. Sequencing with Nanopore and PacBio will be instructive in corroborating the absence of luxI homolog(s) in the genome of Enterobacter sp RIT418 (11). It would also be interesting to perform future experiments to identify and characterize the gene/s and protein/s responsible for the production of AHL signals in Enterobacter sp RIT418 using forward and/or reverse genetic approaches. The lack of TraR- or CviR-detectable AHL signal(s) production by the luxI-ortholog-containing Pantoea sp RIT413 may be associated with the regulation of the luxI homolog. Investigating this will require recombinant cloning and heterologous expression of the luxI ortholog employing a broad host-range expression vector to detect AHL production (12). It is also possible that Pantoea sp RIT413 does accumulate AHLs, but that the compounds produced are structurally divergent from the typical TraR and CviR cognate AHL substrates and thus may not be able to activate luxR-type receptors.
Nucleotide sequence accession numbers
The genome sequences of the strains described in this study have been deposited in the GenBank database under the accession numbers, and with the annotation features, described in Table 1. The version described in this paper is the first version.
Two Novel CYP11B1 Gene Mutations in Patients from Two Croatian Families with 11β-Hydroxylase Deficiency
Steroid 11β-hydroxylase deficiency (11β-OHD) is the second most common cause of congenital adrenal hyperplasia. Mutations in the CYP11B1 gene, which encodes steroid 11β-hydroxylase, are responsible for this autosomal recessive disorder. Here, we describe the molecular genetics of two previously reported male siblings in whom the diagnosis of 11β-OHD was established based on their hormonal profiles displaying high levels of 11-deoxycortisol and hyperandrogenism. Both patients are compound heterozygous for a novel p.E67fs (c.199delG) mutation in exon 1 and a p.R448H (c.1343G>A) mutation in exon 8. We also report the biochemical and molecular genetic data of one new 11β-OHD patient. Sequencing of the CYP11B1 gene reveals that this patient is compound heterozygous for a novel, previously undescribed p.R141Q (c.422G>A) mutation in exon 3 and a p.T318R (c.953C>G) mutation in exon 5. All three patients are of Croatian (Slavic) origin and there is no self-reported consanguinity in these two families. The results of our investigation confirm that most CYP11B1 mutations are private. In order to elucidate the molecular basis of 11β-OHD in the Croatian/Slavic population, it is imperative to perform CYP11B1 genetic analysis in more patients from this region, since so far only four patients from three unrelated Croatian families have been analyzed.
Introduction
Congenital adrenal hyperplasia (CAH) is a group of autosomal recessive disorders caused by the loss of one of five steroidogenic enzymes involved in cortisol synthesis. Approximately 90-95% of all cases are due to steroid 21-hydroxylase deficiency, and about 3-8% are caused by steroid 11β-hydroxylase deficiency [1-3]. The deficiency of 11β-OH leads to reduced cortisol biosynthesis, increased ACTH secretion, and overproduction of steroid precursors. These precursors are shunted toward androgen synthesis, resulting in hyperandrogenism. Phenotypic expression of classic 11β-OHD leads to virilization of the external genitalia in newborn females. The overproduction of reactive androgen also causes precocious pseudopuberty, accelerated somatic growth, and premature epiphyseal closure in both sexes. The accumulation of 11-deoxycorticosterone and its metabolites causes hypertension in about two-thirds of these patients.
Steroid 11β-hydroxylase is encoded by the CYP11B1 gene, which is located on chromosome 8q22, approximately 40 kb from the highly homologous CYP11B2 gene that encodes aldosterone synthase. To date, over 90 disease-causing CYP11B1 mutations have been identified [1,4,5]. Notably, a high incidence of 11β-OHD, with a disease frequency of about 1 in 5,000-7,000 live births, has been reported in Israel among Jewish immigrants from Morocco, a relatively inbred population. Almost all alleles in this patient group carry the same p.R448H (c.1343G>A, rs28934586) missense mutation in exon 8 [6-8]. Recently, we described one 11β-OHD patient of Slavic origin who was compound heterozygous for this p.R448H mutation and a novel intron 7 (c.1200+4A>G) splice site mutation [9]. It was the first report of CYP11B1 genetic analysis in a Croatian patient with 11β-OHD. Here, we further report two novel CYP11B1 mutations in three more patients of Croatian descent, including two previously reported brothers [10] and one new patient.
Subjects
Family A.
Patient 1 is the older of two sons of healthy nonconsanguineous parents of Croatian descent. He was diagnosed at 2.5 years of age due to accelerated growth, skeletal maturation, pseudoprecocious puberty, elevated serum levels of 11-deoxycortisol, 17-hydroxyprogesterone (17-OHP), and androgens, and suppressed levels of cortisol, aldosterone, and plasma renin activity (PRA). His blood pressure was high normal. Patient 2 is the younger brother of Patient 1. He was diagnosed at 3 months of age, with a clinical presentation almost identical to that of his older brother, except for a normal blood pressure. Hydrocortisone treatment for both patients was introduced immediately after diagnosis. Patients 1 and 2, now 30 and 28 years old, respectively, have not been under our pediatric care or taking medication on a regular basis for the past 15 years. They came to us recently for genetic counseling. During this visit, blood was drawn for CYP11B1 gene analysis.
Family B.
Patient 3, the first child of healthy nonconsanguineous parents of Croatian descent, was born spontaneously at term after an uneventful pregnancy. At 2 years of age, he presented for the first time with growth acceleration and pseudoprecocious puberty. His height was 100 cm (+4.27 SD) and his weight was 19.5 kg (+3.9 SD). He had a deep voice, acne, and a large phallus (length 7 cm, circumference 4.5 cm). His testes were 2-3 cc, and his pubic hair was Tanner II. Blood pressure was normal (90/45 mmHg). Bone age according to Greulich and Pyle was 5 years. Plasma electrolytes were normal. Biochemical results confirmed the diagnosis of 11β-OHD (Table 1) and hydrocortisone treatment (12 mg/m2/day) was subsequently started. His younger brother was admitted at 6 months of age. He had no signs of accelerated growth or sexual development, and his plasma levels of 11-deoxycortisol, 17-OHP, androgens, aldosterone, and PRA were within the normal range for age, suggesting that he was not affected by CAH.
Hormonal Assays.
Blood for biochemical analyses was drawn after an overnight fast. Serum/plasma was either assayed immediately or frozen for later use. Standard recommended biochemical methods were employed for the measured parameters.
DNA Amplification and Sequence Analysis.
Genomic DNA was isolated from peripheral leukocytes. Three DNA fragments (exons 1-2, 3-5, and 6-9) of the CYP11B1 gene were amplified by polymerase chain reaction (PCR). Reactions were carried out in a 50 μL volume containing 100 ng genomic DNA, 10 mM Tris-HCl (pH 8), and the primers listed in Table 2. Thermocycling was performed in a Mastercycler (Eppendorf, Hauppauge, NY, USA) with an initial 3-minute denaturation step at 94°C followed by 35 cycles of 94°C for 45 seconds, 58°C for 30 seconds, and 72°C for 3 minutes. The expected amplicon sizes were 0.9 kb, 1.4 kb, and 1.5 kb for the exon 1-2, 3-5, and 6-9 fragments, respectively, and were confirmed by agarose gel electrophoresis. After purification with a QIAquick PCR Purification Kit (Qiagen, Valencia, CA, USA), direct sequencing of the amplicons was performed using the primers listed in Table 2. The sequencing results were compared to the 7.46 kb human CYP11B1 reference sequence on chromosome 8 (GenBank gi|224589820:c143961236-143953773).
Results
More than two decades ago, at a time before modern molecular diagnostic tools were available, we described two brothers (Patients 1 and 2) with 11β-OHD [9]. These two patients recently visited our clinic for genetic counseling. This gave us an opportunity to perform genetic analysis of their CYP11B1 gene.
DNA sequencing results showed that both patients carried a novel p.E67fs (p.E67Kfs*9, c.199delG) frameshift mutation in exon 1 and a previously reported p.R448H (c.1343G>A) missense mutation in exon 8 (Figures 1 and 3). The p.E67fs mutation causes a reading frame shift, resulting in premature translation termination at codon 75. For Patient 3, deficiency of 11β-OH was suspected on the basis of his clinical presentation. This was confirmed by hormonal analyses, which showed elevated serum levels of 11-deoxycortisol, androstenedione, testosterone, and 17-OHP and suppressed levels of cortisol, aldosterone, and plasma renin activity (Table 1). Genetic analysis of the CYP11B1 gene revealed that Patient 3 carried a novel, previously undescribed p.R141Q (c.422G>A, rs26701810) missense mutation in exon 3 and a previously reported p.T318R (c.953C>G) missense mutation in exon 5 (Figures 2 and 3) [11]. In both families, DNA of the parents was not available for genetic analysis. Since all three patients have clear clinical and biochemical characteristics of 11β-OHD, we assume that the patients are compound heterozygotes and that the mutations are located on different alleles.
Discussion
Patients 1, 2, and 3 presented at the ages of 2.5 years, 3 months, and 2 years, respectively, with features characteristic of boys with 11β-OHD. Accelerated somatic growth, skeletal maturation and pseudoprecocious puberty were found in all three patients. Although their blood pressure was normal, their hormonal profile was typical of 11β-OHD, with elevated serum levels of 11-deoxycortisol, 17-OHP, and androgens and suppressed levels of cortisol, aldosterone, and PRA.
At present, over 90 different CYP11B1 gene mutations have been described. Whereas the majority of these are missense and nonsense mutations, other mutations have also been reported, including splice site mutations, small and large deletions/insertions, and complex rearrangements. These mutations are distributed over the entire coding region but tend to cluster in exons 2, 6, 7, and 8 [1,2]. There is no consistent correlation between a specific CYP11B1 gene mutation and the clinical phenotype of 11β-OHD. Distinctive phenotypic variability exists in patients with the same mutation regarding the onset of symptoms, the age of diagnosis, the degree of virilization, and the severity of hypertension [1,2]. Nonetheless, based on in vitro expression data, the p.R448H mutation completely abolished 11β-hydroxylase enzymatic activity [12]. The c.199delG frameshift mutation should result in premature translational termination and production of a nonfunctional protein. The compound heterozygous c.199delG/p.R448H genotype therefore predicts a severe classic CAH phenotype in Patients 1 and 2. Similarly, the T318 residue is completely conserved in all known P450 enzymes. Any change at this position, for example either a p.T318M or a p.T318R mutation, causes loss of 11β-hydroxylase activity [1]. Although the functional consequence of the p.R141Q mutation is unknown, based on the severe clinical phenotype observed in Patient 3 we predict that the p.R141Q mutation causes a major loss of 11β-hydroxylase activity. The mutation of a positively charged arginine to an uncharged glutamine residue could break a salt bridge necessary for maintaining the conformational stability of the enzyme.
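The variant nomenclature above pairs each cDNA change with a protein-level codon (c.199delG with codon 67, c.422G>A with codon 141, c.953C>G with codon 318, c.1343G>A with codon 448). A minimal, purely illustrative Python sketch shows the arithmetic linking a coding-sequence position to its codon number, assuming c.1 is the A of the ATG start codon.

```python
import math

def codon_number(cdna_pos: int) -> int:
    """Codon containing a 1-based coding-sequence (c.) position,
    assuming c.1 is the A of the ATG start codon."""
    return math.ceil(cdna_pos / 3)

# Variants reported in this study: cDNA position -> codon named in the text.
variants = {
    "c.199delG (p.E67fs)": (199, 67),
    "c.422G>A  (p.R141Q)": (422, 141),
    "c.953C>G  (p.T318R)": (953, 318),
    "c.1343G>A (p.R448H)": (1343, 448),
}
for name, (pos, expected) in variants.items():
    print(f"{name}: codon {codon_number(pos)} (reported codon {expected})")
```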
In silico mutation analyses were performed using the PolyPhen-2 (http://genetics.bwh.harvard.edu/pph2/), SIFT (http://sift.jcvi.org/), and Provean (http://provean.jcvi.org/) software. All three programs predicted a damaging effect of the p.R141Q mutation on CYP11B1 function (Table 3). The fact that p.R141 is conserved in mammalian CYP11B1 proteins further suggests an important role for the residue at this position (Figure 4).
Most of the reported mutations are private, family-specific mutations. However, recurrent mutations have been reported in some ethnic groups, particularly in families with a high rate of consanguinity; the p.Q356X mutation, for example, has been associated with 11β-OHD patients from Tunisia, although some patients with the p.Q356X mutation originating from elsewhere in Africa have also been described [12,15]. Our families with 11β-OHD patients are all of Croatian (Slavic) origin, and there is no self-reported consanguinity. Mutation analysis of the CYP11B1 gene revealed that the p.R448H mutation was present not only in one previously reported Croatian patient [9] but also in Patients 1 and 2. Thus, three patients from two Croatian families carry this mutation, which is otherwise rarely reported in Caucasians [16,17]. Also, similar to this previously reported patient, all three patients in this study carry novel CYP11B1 mutations in the heterozygous form. This is consistent with former reports that most CYP11B1 mutations are private because they can only be found within the same family [13]. To the best of our knowledge, there are no other patients with 11β-OHD in the Slavic population in whom CYP11B1 gene analysis has been performed. Therefore, it is imperative to perform CYP11B1 genetic analysis in more patients from this region. Our investigation should benefit genetic counseling, prenatal and postnatal diagnosis, and treatment of 11β-OHD in Croatia.
Malaria and Under-Nutrition: A Community Based Study Among Under-Five Children at Risk of Malaria, South-West Ethiopia Background The interaction between malaria and under-nutrition is not well elucidated in Ethiopia. The objective of this study was to assess the magnitude of under-nutrition and its correlation with malaria among under-five children in south-west Ethiopia. Methods This cross-sectional study was undertaken during March–February, 2009 as part of the baseline assessment of a cluster randomized trial around Gilgel Gibe Hydroelectric dam, south-west Ethiopia. A total of 2410 under-five children were included for anthropometric measurement and blood investigation for the diagnosis of malaria and anemia. The nutritional status of children was determined using the International Reference Population defined by the U.S National Center for Health Statistics (NCHS). Blood film was used to identify malaria parasite and haemoglobin concentration was determined by Hemo Cue analyzer (HemoCue Hb 301, Sweden). Results Significant proportion (40.4%) of under-five children were stunted (height-for-age<−2SD). The prevalence of under-weight was 34.2%. One third and one tenth of the children had anemia and malaria parasite respectively. Older children were more likely to have under-nutrition. There was no association between malaria and under-nutrition. Children who had malaria parasite were 1.5 times more likely to become anaemic compare to children who had no malaria parasite, [OR = 1.5, (95% CI: 1.1–2.0)]. Conclusion In this study, there is no association between malaria and under-nutrition. Children who have malaria are more likely to be anaemic. Malaria prevention and control program should consider nutrition interventions particularly anemia. Introduction Malaria and under-nutrition are the two major causes of childhood mortality in sub-Saharan Africa [1]. Each year, malaria kills more than 800,000 people annually, of which 91% of them reside in Africa and 85% of them are under five children [2]. On the other hand, under-nutrition is considered to be the underlying cause for more than 50% of deaths of under-five children [3]. In Ethiopia, malaria and malnutrition are the top causes of morbidity and mortality in under-five children [2,4]. The relationship between malaria and under-nutrition is debatable. Although a number of observations have indicated a deleterious effect of malaria on nutritional status [5,6], it is still unclear whether and how nutritional status influences malariarelated morbidity. Earlier observational studies provide some evidence of protective effect of under-nutrition against malaria [7,8,9]. However, more recent studies have presented inconsistent findings. Deen et al in Gambia and Friedman et al in Kenya reported that under-weight was not associated with infection with malaria [10,11]. Another study in Gambia showed that nutritional status was not associated with the occurrence of malaria [12]. Results on the relationship between malaria and stunting are conflicting. Stunting was the risk factors of malaria in Gambia and Kenya [10,11]. In contrary, a study in Papua New Guinea showed that stunting protected children from malaria [13]. In Ethiopia, where malaria and malnutrition are the major public health problems, little is known about the interaction between the two diseases. 
The objective of this study was to assess the effect of malaria parasitaemia on the nutritional status of under-five children at risk of malaria around the Gilgel Gibe hydroelectric dam, south-west Ethiopia.
Materials and Methods
The study was conducted in the Gilgel Gibe Field Research Center (GGFRC). This research site was selected because malaria is the major health problem in the area owing to ecological disruption [14]. GGFRC was established in 2005 to serve as a Demographic Surveillance System and field attachment site of Jimma University. The research center comprises eight rural and two urban Kebeles (the lowest administrative unit in Ethiopia), which are located around the reservoir of the Gilgel Gibe hydroelectric dam. The ten Kebeles contain 52 Gots (villages), a population of 55,000 and 10,800 households.
This cross-sectional study was undertaken as part of the baseline assessment of a cluster randomized trial. The objective of the trial was to assess the effect of tailored training of heads of households in the use of long-lasting insecticide-treated nets (ITNs) on the burden of malaria in vulnerable groups. A detailed description of the trial methods is given elsewhere [15]. In brief, 22 Gots (11 intervention and 11 control) were selected and at least two ITNs were distributed to each household in all Gots. All heads of households in the intervention villages were trained in the proper use of ITNs. Proper ITN use was monitored in each household by trained village residents, who also monitored the occurrence of malaria in each household in the intervention and control villages. To evaluate the effect of the intervention, mass blood investigation for the diagnosis of malaria and anaemia among all under-five children and pregnant women in the 22 study Gots was undertaken three times a year. As part of the baseline survey, mass blood investigation and anthropometric measurements were done among 2410 under-five children in the study Gots.
Weight was measured using a UNICEF electronic scale (Item No. 0141015 Scale mother/child, electronic) and height was measured using a stadiometer (Holtain, UK). The nutritional status of the children was determined using the International Reference Population defined by the U.S. National Center for Health Statistics (NCHS) and the Centers for Disease Control and Prevention [16]. Height-for-age (HAZ), weight-for-height (WHZ), and weight-for-age (WAZ) Z-scores were calculated based on this recommendation. Children were classified as stunted, wasted, or under-weight if the HAZ, WHZ, or WAZ, respectively, was below −2 standard deviations (SD). They were categorized as severely stunted, severely wasted, or severely under-weight if the corresponding index was below −3 SD. Under-nutrition was defined as the presence of stunting, wasting or under-weight.
For the diagnosis of malaria and anemia, a drop of blood was taken from each child by finger prick. For malaria parasite identification, thick and thin films were prepared in the field and stained with Giemsa at Jimma Specialized Hospital. Each slide was read by experienced laboratory technicians. Absence of malaria parasites in 200 high-power fields of the thick film was considered negative. Haemoglobin (Hb) concentration was determined in the field using a HemoCue analyzer (HemoCue Hb 301, Sweden).
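The anthropometric classification described above rests on simple z-score arithmetic: z = (observed value − reference median) / reference SD, with cut-offs at −2 SD (moderate) and −3 SD (severe). The minimal Python sketch below illustrates the calculation; the reference median and SD are hypothetical stand-ins, not the actual NCHS table values.

```python
def z_score(observed: float, ref_median: float, ref_sd: float) -> float:
    """Anthropometric z-score relative to a reference population."""
    return (observed - ref_median) / ref_sd

def classify(z: float) -> str:
    """Cut-offs used in the study: below -2 SD = moderate, below -3 SD = severe."""
    if z < -3:
        return "severe"
    if z < -2:
        return "moderate"
    return "normal"

# Hypothetical example: a child's height against an assumed reference
# median of 95.0 cm and SD of 3.8 cm for that age and sex (illustrative only).
haz = z_score(observed=86.0, ref_median=95.0, ref_sd=3.8)
print(f"HAZ = {haz:.2f} -> {classify(haz)} stunting")
```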
Anaemia and moderately severe anaemia were defined as Hb concentrations below 11.0 g/dL and 7.0 g/dL, respectively. Malaria was defined as any asexual parasitaemia detected on a thick or thin blood smear. Data were entered into a computer, edited, cleaned, and analyzed using SPSS version 12 software. To calculate the anthropometric indices, the data were exported to Epi Info 2000 software (Atlanta, GA). Bivariate analysis was done to assess the association of socio-demographic variables and malaria with under-nutrition. To control for the effect of confounding variables, stepwise logistic regression was performed. The study received ethical clearance from Jimma University and the WHO ethical committee. Written consent was obtained from the caretakers of the under-five children. Children with anaemia, under-nutrition or malaria were treated by health extension workers or the nearby health centres.
Results
Nearly equal numbers of male (50.7%) and female (49.3%) children participated in the study. Infants (aged less than one year) constituted 21% of the children and one third of the children were above 47 months. The mean monthly family income was 1232 Ethiopian Birr (ETHB) (SD ± 833). More than half of the families earned a monthly income above 1000 ETHB (Table 1).
A significant proportion (40.4%) of the children were stunted (height-for-age below −2 SD) and 18% of them were severely stunted. The prevalence of under-weight was 34.2%. One hundred and twenty-two children (5.1%) were wasted (weight-for-height below −2 SD). The prevalence of anemia was 32.4% and one tenth of the children had malaria parasites (Table 2). There was no statistically significant association between malaria parasitaemia and under-weight [OR = 0.9, (95% CI: 0.7, 1.2)]. After controlling for the effect of potential confounding variables, children above one year of age were more likely to be under-weight compared with infants. Sex, birth order and family income did not have a statistically significant association with under-weight (Table 3). As the age of the children increased, the prevalence of stunting increased. Compared with boys, girls were less likely to be stunted (OR = 0.8, 95% CI = 0.7-0.9). The other independent variables, such as the presence of malaria parasites, family income and birth order, were not associated with stunting (Table 4). Children in the age group of 12-23 months were 2.2 times more likely to be wasted than infants [OR = 2.2, (95% CI: 1.1, 4.4)]. There was no statistically significant association of malaria, sex, income or birth order with wasting (Table 5). Children who had malaria parasites were 1.5 times more likely to be anaemic compared with children who had no malaria parasites [OR = 1.5, (95% CI: 1.1-2.0)]. Children above 12 months of age were less likely to be anaemic compared with infants (Table 6).
Discussion
We assessed the magnitude of under-nutrition, anemia and malaria, and the interaction of malaria with under-nutrition and anaemia, using a large sample of under-five children at risk of malaria. The magnitude of stunting and under-weight in this study is similar to the findings of the 2005 Ethiopian Demographic and Health Survey, which revealed that 47% of under-five children were stunted and 38% were under-weight [4]. The high prevalence of under-nutrition in our study can be explained by the high level of food insecurity in the area [17] and the caregivers' lack of knowledge about providing a balanced diet to their children.
Our findings on stunting and under-weight are also comparable to those of a previous report [18]. Phengxay and colleagues reported that a higher proportion of children in Laos were stunted (54%) and under-weight (35%) [19]. In previous literature, larger family size [20], maternal education [20], male gender [21], and poor feeding practices [22] were associated with child under-nutrition. In our study, older children were more likely to have under-nutrition compared with younger ones, which is similar to a previous report from Ethiopia [4]. As a result of the short birth intervals in the locality, caregivers may give more attention to younger children and neglect the older ones, which predisposes the latter to malnutrition. Similar to other studies [21,23], male gender was associated with stunting. Poor feeding practices may also contribute to under-nutrition of the children [4].
One third of the under-five children in this study had anemia, which is comparable to the finding of Wolde et al. in northern Ethiopia [24]. One in ten children had malaria parasitaemia. A detailed description of the findings related to malaria is given elsewhere [15]. In this study, we did not find an association between malaria and under-nutrition, which is consistent with several previous studies [10,11,25]. Protein-energy malnutrition might predispose children to malaria infection through a reduction in malaria-specific antibodies (IgG) [26]. Several previous reports indicated that deficiencies of micronutrients such as vitamin A and zinc are more important risk factors for the occurrence of malaria [5,27]. In our study, deficiencies of these micronutrients might have contributed to the occurrence of malaria. Malaria was strongly associated with anemia, which is consistent with previous reports [5,27,28]. The interaction of malaria and anemia is complex. Malaria could cause anemia through cytokine-mediated suppression of haematopoiesis or by predisposing the victim to other infections [5,28]. Iron deficiency and parasitic infestation may also have contributed to the occurrence of anemia in our study. A previous report indicated that anemia in Ethiopia is primarily due to parasitic infestation and malaria [24].
Although this study is the first of its kind in Ethiopia to assess the interaction of malaria and malnutrition in under-five children at risk of malaria, it has several limitations. First, we did not assess micronutrient levels, which might have an impact on malaria morbidity. Second, several behavioural factors related to malnutrition were not assessed. Third, a cause-effect relationship between under-nutrition and malaria could not be established. In conclusion, under-nutrition and malaria are very common in under-five children around the Gilgel Gibe hydroelectric dam. Malaria was not associated with under-nutrition but was strongly correlated with anemia. The Ministry of Health, in collaboration with other partners, should design nutrition and malaria intervention strategies for under-five children at risk of malaria. Children with malaria should be screened and treated for anemia.
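The malaria–anaemia association reported above (OR = 1.5, 95% CI: 1.1–2.0) follows from the standard odds-ratio calculation on a 2×2 table with a Woolf (log-scale) confidence interval. The Python sketch below uses hypothetical cell counts purely to illustrate the formula; they are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI for a 2x2 table:
                    anaemic   not anaemic
      malaria          a           b
      no malaria       c           d
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts, chosen only to illustrate the calculation.
or_, lower, upper = odds_ratio_ci(a=100, b=141, c=681, d=1488)
print(f"OR = {or_:.2f}, 95% CI: {lower:.2f}-{upper:.2f}")
```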
Everyday experiences of digital financial inclusion in India's 'micro-entrepreneur' paratransit services
Self-employed labour in transportation is a notoriously precarious form of employment that occurs throughout many developing countries. In order to offset high-cost and insecure vehicle procurement arrangements, paratransit fare structures are formulated on the basis of a set of logics designed to maximise revenues. Although entrepreneurial, when these logics conflict with public fare legislation they are pursued illegally, or informally, and are perceived as undesirable by policy makers and transport users. However, the underlying structures that necessitate these practices are seldom examined, despite their significant effect on mobilities and on the livelihood experiences of male entrepreneurs. This paper engages with critical literatures on the financialisation of poverty reduction to present financialisation as a class-based mechanism that, with the rapid increase of digital payment and 'alternative' credit scoring, structures micro-entrepreneurship and precarity in the neoliberal context of India. The paper argues that digitally enhanced financial inclusion techniques may steer low-income workers toward mainstream finance institutions modelled on the global economy. They enable profit to be generated by investors and private microfinance companies. However, new financial technologies do little to reduce the risk and expense of microfinance, nor do they increase micro-entrepreneurs' profit margins. Moreover, they threaten the informal practices entrepreneurs use to self-manage their financial precarity.
Introduction
'I took a loan to pay the interest on the older loan and that's how it started', begins one of Bengaluru's paratransit operators. He is recalling, during our interview, the difficulties of managing finance in a precarious occupation operated at the very low end of profitability. Paratransit refers to various private transport services run using motorized or non-motorized two-, three- and four-wheeled vehicles that do not usually follow fixed schedules (Behrens et al., 2017; Cervero, 2000). Services may or may not be regulated by the state, although the term paratransit is often used to convey services that operate partially, or entirely, informally. Paratransit is revered for its ability to keep up with the rate of expansion in cities of the global South and to adjust to varying demand in ways that formal, top-down, transport services cannot, or do not (Cervero and Golub, 2007; Finn, 2012). In some contexts, paratransit has been shaped by neoliberal economic policies that have cultivated multiple deregulated private services over comprehensive state-owned public transport systems on the basis of the market's superior efficiency to deliver services (Rizzo, 2017). It forms a significant segment of micro-entrepreneurship in contexts, such as India, where much employment has shifted from industrial waged labour to self-employment. Despite their key role in transportation systems, paratransit micro-entrepreneurs commonly have little control over their working conditions, suffer substantial precarity and disposability, and are paid insufficiently (Agbiboa, 2017; Doherty, 2017; Rekhviashvili and Sgibnev, 2018).
Enduring cycles of debt and financial exclusion feature in their lives and are an expression of their social class in relation to neoliberal policy, practice and discourse, and related processes of individualisation and financialisation (Chhachhi, 2020;Marron, 2013;Lai, 2018;Langley, 2008). A key challenge for paratransit operators is how to secure a vehicle as to the main asset to their business. Operators are often limited to high-cost and insecure finance and it is thought that improving access to mainstream and formal forms of finance will financially empower them (Chadhar, 2016;Garg et al., 2010;Harding et al., 2016). This is the central hypothesis of financial inclusion (FI), which now increasingly occurs through the expansion of finance technologies (fintech). These include digital payment mobile phone platforms and big data-driven credit scoring techniques that are promoted by international development institutions, governments and private companies (Demirgüç-Kunt et al., 2017;Owens and Wilhelm, 2017). Critical scholars have pointed out, however, that financial exclusion continues on the basis of social class with the entry of new actors and technologies involved in FI as a result of financialisation (Aitken, 2017;Bateman, 2018;Bernards, 2019;Boamah and Murshid, 2019;Gabor and Brooks, 2017;Lai and Samers, forthcoming;Langevin, 2019;Mawdsley, 2018). Financialisation is the mechanism through which financial motives, markets, actors, institutions and technologies, take an increasing role in the operation of economies (Epstein, 2005). The effects of class-based financial exclusion are particularly acute in the global South where financial technologies do not easily adapt to informal economies of micro-entrepreneurship and where the number of unbanked populations is large in scale (Rona-Tas and Guseva, 2018). Currently, the ways in which new financial technologies alter the practices and capabilities of the poor to manage their finance and micro-enterprises, and paratransit services, are not well understood. This is particularly the case for male entrepreneurs, whereas previous research has predominantly considered issues of micro-finance and FI for women. A common understanding in transportation policy is that digital technology, payment and data, will facilitate new understandings of paratransit and its reform for the benefit of transport users (Behrens et al., 2017;Tinka and Behrens, 2019). However, critical research that addresses the limitations and problems of FI for transport workers is much less common. This oversight occurs because in much of transportation research, planning and policy, there is an ongoing commitment to the application of technical solutions without adequate understanding of the social and political contexts of developing cities (Marsden and Reardon, 2017;Uteng and Lucas, 2018). This paper indicates that financialisation, in this case transitioning to digital mechanisms of finance and commerce to meet the objective of FI, is problematic for low income microentrepreneurs in a number of ways. Firstly, micro-entrepreneurs embedded in cash economies have difficulty in adapting to new financial technologies or have little incentive to use them, which limits their ability to develop digital data for credit scoring. Paratransit operators are an example of self-employed entrepreneurs who remain largely dependent on indigenous moneylenders, whom although exploitative, do offer easy entry toand exit from -finance. 
Secondly, through financialisation, low-income micro-entrepreneurs are increasingly exposed to high-cost subprime finance. They are brought into formal finance markets that enable the production of capital for an elite class of investors, however, financialisation is doing little to improve the financial insecurity of the poor who may benefit more from improvements to their employment, income and state welfare. Thirdly, fintech is accompanied by obligations for entrepreneurs to reform their business practices. In the case of paratransit entrepreneurs, in order to repay their finance while managing low and fluctuating income, operators often negotiate fares above fixed-fare rates set by governing institutions. Fares are differentiated according to time of day, appearance of passengers, road surface conditions, weather conditions, travel demand, speed, quality of service or time spent travelling, for example (Diaz Olvera et al., 2016;Khayesi et al., 2015;McCormick et al., 2013;Phun and Yai, 2016;Venter et al., 2014). The transparency of digital fare payment and online trip booking now threaten the possibility of informal entrepreneurship in paratransit. This creates resistance to fintech and further exclusion from financial institutions. This paper argues that financialisation is a mechanism cultivated by neoliberalism, which reproduces exclusions on the basis of class. The role of new financial companies and technologies in bolstering FI is therefore unable to offer a straightforward solution that financially empowers low-income entrepreneurs. Financialised development: entrepreneurship and risk Neoliberalism and the growth of micro-entrepreneurship Academics have documented how a neoliberal agenda was implemented in low-and middle-income economies since the 1980s under the condition of loans distributed to developing economies by the World Bank and the International Monetary Fund (Mitchell and Sparke, 2016;Natile, 2020;Rankin, 2001;Rizzo, 2017;Soederberg, 2004Soederberg, , 2013. Borrowing states agreed to undergo structural adjustment programmes designed to liberalise local and global trade and deregulate markets so that a self-regulating market based on competition could prevail. The structural adjustments would privatise state-owned resources, banks and industries, and balance government deficits through austerity measures. The influence of Western institutions continued under the norms and codes of the Washington Consensus from 1990 and, consequently, developing economies were restructured to fit into and compete within globalised markets. Economies were developed around ideologies of individual liberty and market flexibility (Amable, 2011;Carroll and Jarvis, 2014;Harvey, 2005;Peck, 2010). To achieve these objectives governments have had a reduced role in controlling economic and social spheres (Harvey, 2005;Soederberg, 2004). A significant aspect of neoliberalism is the transformation of people into individuals who proactively maximise the potential of their human and financial capital in competition with other individuals and thus contribute to a society of free enterprise (Amable, 2011;Foucault, 2008;Lazzarato, 2009). This notion has evolved in developing economies in the form of microentrepreneurship along with financialisation, a specific form of neoliberalism that facilitates and depends on the role of private financial actors. 
Financialisation is defined straightforwardly as a shift from economies of production to the increasing role of financial motives, markets, actors and institutions, in the operation of the economy (Epstein, 2005). Financialisation encourages individuals to take on debt to improve their financial wellbeing within private, for-profit, markets of finance that are inflated over social welfare and the provision of secure waged employment (Soederberg, 2014). Many scholars contend that financialisation has redistributed financial risks and the management of financial wellbeing to entrepreneurial subjects, often as part of their everyday lives (Chhachhi, 2020;Finlayson, 2009;Kear, 2018;Lai, 2018;Langley, 2008;Lapavitsas, 2013;Lazzarato, 2009;Marron, 2013;Martin, 2002;Mitchell and Sparke, 2016;Mulcahy, 2017). India has witnessed a shift away from the post-independence socialist ideologies that depicted industrial workers and rural villagers as archetypical citizens and objects of development (Gupta, 1998) to ideologies of enterprise, business, consumption and technology as the source of social mobility and economic growth (Fernandes, 2004;Gooptu, 2007). From 2014, the nation's ruling Bharatiya Janata Party (BJP) government accelerated neoliberalism with shifts in state regulations that have driven self-employment, micro-enterprise and market-based institutions resulting in further casualisation of labour, unemployment and a rise in precarious and informal work (Chhachhi, 2020). Self-employed micro-entrepreneurs now constitute ∼80% of all persons employed in South Asia (ILO, 2019). The role of FI in neoliberal development: a critical perspective FI is currently one of the most prevalent development orthodoxies currently used to counter worldwide poverty. It targets 1.7 billion unbanked adults worldwide, including those without an account at a financial institution or mobile money provider (Demirgüç-Kunt et al., 2017). For the World Bank, 'access to financial services has a critical role in reducing extreme poverty, boosting shared prosperity, and supporting inclusive and sustainable development' (2014: 1). The provision of (micro) finance to micro-entrepreneurs forms a significant facet of FI policy, practice and commerce, and has contributed to the financialisation of poverty reduction. Following the structural adjustments, regulatory reforms created environments that supported the commercialization of microfinance and encouraged a competitive lending market (Aitken, 2013;Roy, 2010;Soederberg, 2014;Weber, 2006). Financialisation has occurred, for example, through grassroots microcredit programmes seeking to empower (often female) entrepreneurs under the influence of development institutions that devolve responsibility for securing economic opportunity and social wellbeing to individuals (Rankin, 2001;Roy, 2010;Young, 2010). Critics are concerned that FI exposes previously invisible economies and vulnerable subjects to the workings of profit-driven, globalised, markets (Aitken, 2013;Mader, 2014;Rankin, 2013;Roy, 2010;Soederberg, 2013;Young, 2010). Private finance companies, along with their various technologies, have been institutionalised to supposedly achieve economic development by making credit more available to the poor in such a way that profit is derived (Aitken, 2013;Carroll and Jarvis, 2014;Gabor and Brooks, 2017;Mader, 2014;Natile, 2020;Roy, 2010). 
This has occurred by making visible, calculating and disciplining the finances of low-income subjects and commodifying their debt and risk (Roy, 2012). The FI approach has overlooked class-based power, exploitation and inequality in credit markets. Soederberg (2014) reveals unequal power relations in the microfinance industry using a Marxian framework to critique how surplus, low-waged or underemployed workers (required for capital accumulation and the growth of wealth held privately by a capitalist class) are identified as an unbanked population and are brought into the financial market to facilitate FI as a supposedly apolitical strategy against poverty. Mader (2014) is similarly concerned about how the lack of societal wealth among one class of peoplethe poorbecomes the basis for a financial contract with another class of people able to rent out capital wealth. Thus, rather than redistributing wealth, microcredit is thought to create new relationships of entitlement between lenders and borrowers and financialises relationships between the wealthy and the poor, enabling a market society that reshapes social relationships that contribute to capital accumulation (ibid). Critics indicate that financialisation and entrepreneurship are increasingly creating conditions for indebtedness and the transfer of financial responsibility to credit-seeking vulnerable individuals who can no longer access financial security through employment rights or state welfare to the effect that precarity is now a source of value and enterprise (Bowsher, 2019;Chhachhi, 2020;Lazzarato, 2009;Mitchell and Sparke, 2016). Meanwhile, the structural inequalities that have produced exclusion, exploitation and financial risk among the poor are not addressed (Brigg, 2006;Carroll and Jarvis, 2014;Gabor and Brooks, 2017;Lazzarato, 2009;Mader, 2014;Natile, 2020;Soederberg, 2014). FI in the era of big data and digital payment Public funded philanthropic organisations, development finance institutions and government institutions are partnering with private fintech companies to progress big data-driven credit scoring, which is used to scope, assess and govern previously invisible customers in low-income countries (Aitken, 2017;Boamah and Murshid, 2019;Gabor and Brooks, 2017;Langevin, 2019). This is occurring as international development institutions and fintech companies are promoting mobile phone payment to facilitate micro-entrepreneurship and self-employment (Natile, 2020). Big data are typically derived from various devices that continually and automatically generate large data of a scale, volume, variety and speed, previously impossible (Kitchin, 2014). Aligned with positivistic scholarship, big data are analysed computationally and algorithmically, to reveal and predict associations, behaviour patterns, and social trends utilising additional data points, or variables, previously unavailable (Sagiroglu and Sinanc, 2013). 'Alternative' credit scoring methods derive big data from non-bank and non-financial sources, such as social media data, mobile phone data, mobile money transactions, utility bill and home rent payment histories, online profile data (education, employment), retail spending histories and e-commerce data (Aitken, 2017;Hurley and Adebayo, 2016;Óskarsdóttir et al., 2019). 
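To make the preceding description more concrete, the sketch below shows, in a purely illustrative way, how behavioural features of the kind just listed can be fed into a simple statistical scoring model and converted into a lending decision. The feature names, synthetic data, coefficients and score bands are all hypothetical; they do not represent any actual lender's model or dataset.

```python
# Toy "alternative data" credit-scoring sketch (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Synthetic borrowers described by three hypothetical behavioural features.
X = np.column_stack([
    rng.poisson(20, n),        # monthly mobile-money transactions
    rng.poisson(4, n),         # monthly airtime top-ups
    rng.uniform(0, 1, n),      # share of utility bills paid on time
])

# Synthetic repayment outcomes loosely tied to the features (toy assumption).
logits = 0.05 * X[:, 0] + 0.2 * X[:, 1] + 2.0 * X[:, 2] - 3.0
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

model = LogisticRegression().fit(X, y)

# Score a new applicant and map the repayment probability to a pricing band,
# echoing how low-scoring borrowers are routed towards subprime terms.
applicant = np.array([[10, 2, 0.4]])
p_repay = model.predict_proba(applicant)[0, 1]
band = "prime" if p_repay > 0.8 else "near-prime" if p_repay > 0.6 else "subprime"
print(f"Estimated repayment probability: {p_repay:.2f} -> {band}")
```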
Big data can be used for psychometric and social network analyses that use machine learning techniques to seek associations between the behaviour of potential borrowers and their financial behaviour, categorising them and calculating and pricing their risk accordingly (Langevin, 2019). These methods are designed to overcome a 'credit catch-22' whereby to receive credit a person must first demonstrate their successful history of repaying credit (FICO, 2021). Since the techniques are able to extract data from non-bank sources they are considered to improve access to credit for the unbanked and those who rarely use bank accounts, the majority of whom reside in developing countries and belong to the poorest households in their economies (Demirgüç-Kunt et al., 2017). The use of alternative credit scoring techniques for FI valorises their use in developing countries by private companies (Donovan and Park, 2019). The United States analytic credit scoring company FICO is currently piloting their new product on data gathered from mobile money users in Sub-Saharan African countries where at least 70% of the population use a mobile phone. The area demonstrating the largest number of active accounts (72 million) and the largest value of money transactions is East Africa where Safaricom and Vodaphone launched the mobile money payment, transfer and credit service, M-pesa. The company is ensuring products are directed to where the most data and thus, profit, can be generated. This motivation is endorsed by the International Finance Corporation (IFC)a member of the World Bankwho suggest fintech can be used to tap into the markets of small and medium enterprises of developing markets; half of which have unmet credit needs valued at 'approximately US$2.1 to US$2.6 trillion' (Owens and Wilhelm, 2017: 1). The wide range of financial and non-financial devices that generate big data used in contemporary credit scoring require digital and technical know-how, and often the use of a smartphone. The exclusion of the poor from formal financial institutions is now more so conceptualised in terms of a technical issue than previously realised by Marron (2013). Financial: 'others' continue to be created who cannot conform with the everyday devices promoted for digitised FI. Increasing digital literacy is now, perhaps, becoming more significant to the FI market than financial education. That is because innovations in alternative data scoring allow 'bad' behaviour to be more accurately surveyed and priced, which offers greater security to lenders. In order to expedite digital know-how, governments are using government-to-person payments, such as social welfare payments, to force the production of digital data by those who would otherwise not have used, or rarely use, a bank account (Gabor and Brooks, 2017). Shifts in fintech use and big data analytics, however, have not overcome the existing issues outlined in the previous section of class-based exclusions. As Kear asserts, 'being inside the financial system' does not necessarily produce the emancipatory experience FI advocates assume (2012: 936). That is because making the poor visible to financiers in many cases demonstrates their risk with a poor credit score, which opens them to a subprime lending market in which low scoring borrowers are subjected to higher interest rates and more unfavourable terms in order to compensate for their risk. 
For example, a FICO credit scoring model using alternative data demonstratesamong a sample of consumers in the United Statesthat a majority (∼65%) produced a score below 620, whereas ∼30% scored between 620 and 699. A good score is considered to be 700 and 850 excellent. Because the latest methods open access to credit outside of the subprime category for some consumers, their utility in FI is legitimised despite that over half of consumers tested by the FICO model are scored to access only the subprime credit market. The potential for digital FI to radically alter the borrowing experiences of low-and precariously-waged workers is questionable. Researching auto-rickshaw operators in Bengaluru Auto-rickshaw taxi operators provide analytical insights into how financialisation is experienced by India's micro-entrepreneurs. Auto-rickshaw services are operated by self-employed drivers, who are from here referred to as auto operators, using three-wheeled motorised rickshaw vehicles. The services are aligned with the Indian state's definition of micro-enterprises: businesses in the service sector with investments in machinery under 1,000,000 INR/∼£10,000 including the selfemployed (Ministry of MMSEs, 2019). Operators have been brought into digital FI through their use of online smartphone payment platforms (e-wallets) and online trip booking platforms. In Bengaluru, 200,000 auto operators are now registered (TDGK, 2019). Autos account for ∼10-20% of urban transport mode shares (Mani et al., 2012). Due to the scale of their services, autorickshaw trips are a key market segmentation for digital payment companies (Redseer, 2019). The research was undertaken in Bengaluru based on the aspirations of the local government for its future as a smart city, which indicates an incentive for the digitisation of various services. Bengaluru is becoming younger and, in many areas, gentrified by technologically skilled migrants. A large proportion of paratransit customers demand the use of digital payment. The method aimed for an in-depth study on the particularities of finance and fare-setting of auto operators and therefore, a qualitative approach was taken. Data collection involved semi-structured interviews of 30-60 min undertaken over 12 weeks in 2019 including interviews with 42 individual operators (31 in central city locations, 11 in city periphery locations), four driver union representatives, five finance brokers, six financiers (two private moneylenders, one non-banking finance company (NBFC), two NBFC franchise private moneylenders, one saving co-operative), eight traffic police and four regional transport office (RTO) officials. Auto operators were recruited on the street using a purposive sampling technique that sought to collect an estimated representation of Hindu and Muslim operators of various ages and of varying socioeconomic dispositions. The later was achieved by recruiting operators working different routes and in areas that are marked with varying degrees of precarity; from central transit stations and shopping malls, to periphery locations. Those working in central and lucrative areas (with many exceptions) have procured access to spaces and associations that enable higher earnings or better working conditions, such as fewer hours worked each day or the opportunity for a day of rest each week. Financiers were recruited by snowball sampling, asking operators and brokers of their whereabouts. 
Most are legitimately working as licensed moneylenders under the Karnataka Money Lenders Act 1961 and some were operating as franchises of larger finance companies providing loans on commercial and domestic vehicles. One financier located in a jewellery shop was recruited on the basis of their previous involvement in finance, which eased a discussion of informal practices. I was sensitive to the prospect of participants placing themselves in vulnerable positions as they discussed their practices, experiences and concerns. Considering the ethics of these conversations I chose to position myself as an empathetic listener. It is possible, being a white British female researcher, that I was perceived as reasonably neutral, impartial and non-threatening to those who chose to participate in interviews (Fonow and Cook, 1991). I was visibly an outsider and did not gain access to participants through an influential gatekeeper, which can potentially obstruct trust (Mullings, 1999). Trust was gained with participants by approaching them with a local research assistant, offering anonymity and off-record interviews. I interviewed participants in public shared spaces (a street or coffee shop) rather than inviting them to a more formal space. Although in a privileged position of belonging to an influential Western research institution, in the field, I was someone who apparently knew little about the politics of the associated industry networks or the legality of activities I was researching. Interviews were either audio-recorded, transcribed and translated by a research assistant. When participants did not wish to be recorded, notes were taken during and/or after interviews. Illicit fare negotiations were discussed openly by auto operators. However, they did not disclose accurately the extent to which passengers were overcharged, of which I was aware based on my experiences of using autos throughout fieldwork, field observations at key locations, and interviews with passengers. In total 84 interviews were carried out with actors involved in the industry and by crossreferencing participants' narratives, it was possible to determine a convergence in the data collected that signified a credibility sufficient for the purpose of this analysis. For example, the costs and terms of informal and formal loans, the mechanisms and practices of lending and fare negotiating, the use of digital financial technologies and institutions, the roles of actors involved in the industry, the management of subjects, financial risk and operators precarity. These were themes that were identified during fieldwork and revisited later, undertaking a thematic analysis of transcripts and extracting representative participant quotes. Participants were coded for anonymous transcripts (e.g., DD1) based on their role (D = auto operator, B = broker), interview location (A, B, C, D) with an allocated a number (1, 2, 3). Financing vehicles and experiences of exclusion Based on knowledge derived from interviews Figure 1 maps the pathways to vehicle procurement available to operators and Table 1 details the documents and credit scores auto operators must provide to the different lenders available. Banks do not usually finance auto operators unless they can provide the surety of a government employee and a worthy credit score. Most auto operators purchase vehicles by entering into a hire-purchase contract with either a private moneylender or an NBFC. 
NBFCs are registered and monitored by the state's Reserve Bank of India (RBI) to receive deposits and to provide infrastructure for bank related services, including loans, without a banking license. The leading NBFCs providing auto loans are lenders that specialise in transport and logistic enterprises and vehicle finance. These relatively new companies are using existing moneylenders as franchise lenders who have access to a market of auto operators and who are not required to undertake credit score checks to assess borrowers. Moneylenders, instead, use specialist knowledge and their social and economic networks to manage defaulting operators. On rare occasions, drivers approach a bank directly to take a loan for a vehicle using a government employee as surety. Operator workers' unions sometimes assist operators to do this using their social networks. Private moneylenders are referred to as Saitu lenders by auto operators. The term Saitu is given to people belonging to the Marwari community (originating from Rajasthan), although in Bengaluru we also identified moneylenders with family lineage originating from Gujarat and Maharashtra. Indigenous lenders exist in other areas of India within key merchant communities, such as Marwari, Nat-tukottai Chettiar, Shikarpuri or Multani, and Gujarati (Martin, 2015). They are referred to as indigenous lenders having had a role in India's economy before the arrival of the British colonial government and global systems of finance to supply microfinance to the poor, by the government, as a means to facilitate entrepreneurship. A report of the Study Group on the Indigenous Bankers to the RBI in 1971 details how the government earmarked 'indigenous bankers' as a means to supply loans to millions of 'small borrowers' 'with comparable ease and informality' and 'on the basis of personal creditworthiness rather than tangible securities' in order to compliment the 'formal' banking system of credit provision (Gupta, 2007: 159). In this way, Saitu moneylenders are considered to operate outside of the formal system but are regulated in order to protect borrowers. Saitu moneylenders' original role as intermediaries between informal and formal economies continues today despite recent technological shifts in line with financialisation. Moneylenders are the most accessible lenders for operators, and, as Financier 4 describes, 'can do a loan on trust and not through CIBIL [credit] scores or all those paperwork requirements'. Moneylenders take on more risk than other lenders because they have developed social networks of 'informers' to reach and prompt defaulting operators to repay agreed equated monthly instalments (EMIs). Vehicles are seized from defaulters and withheld to prompt repayments and as a last resort are sold to recover loans and interest fees. Vehicles are easily sold in an active secondhand market, which is essential for operators who need to exit early from their hire-purchase contracts to release money for other needs. This is considered acceptable and normal among Saitu financiers and does not affect future access to finance for operators, at least to high-cost loans. Moneylenders and NBFCs are approached using a broker that functions as a one-stop shop for operators to procure finance, a vehicle (second-hand or new), and vehicle documentation including a permit and vehicle fitness certificate from their local RTO. A down payment of 60,000-80,000 INR (£630-£840), or about 40% of a vehicle's value, is paid to a broker. 
On a second-hand vehicle, this amounts to ∼25% of the vehicle's value (this figure differs between lenders). A copy of the hire-purchase contract is required in order to get a vehicle permit. The contract is held by the lender who is the agreed owner of the vehicle until the operator-driver pays off the loan and interest. A vehicle cannot be sold without a permit and, therefore, without permission of the lender. These loans are repaid in EMIs usually over 30-36 months for new vehicles and fewer for second-hand vehicles. Based on interviews with drivers, brokers and financiers, Table 1 further details the different finance options (giving two examples of each) available to drivers, their EMI repayment schedules and the effects of these on the daily expenditures of drivers, as well as the various requirements of operators seeking loans. The lowest-cost finance available for vehicle procurement is through banks and they offer terms allowing operators to make smaller EMIs over a longer period. NBFCs are the next best option although operators must pay back EMIs over a shorter period at higher interest rates. At the highest cost to operators are Saitu lenders. However, in terms of drivers' EMI repayments, they have remained competitive with NBFCs. The longer repayment schedules of Saitu lenders ensure their EMIs are of no extra cost to operators (Table 1). This means that loans acquired through NBFCs and Saitu lenders bear equal pressure on operators' daily income targets. Operators' experience of digital FI Auto operators are drawn into the global financial system through their engagement with a number of technologies that have been used in credit scoring. Most relevant to auto operators is a digital payment of fares demanded by customers and platform companies through smartphone 'e-wallets'. In future, it is hoped that digital payment will be used for operator's payment of utility bills, house rent, groceries, goods and loan repayments. The appeal of technology for FI to the government of India is evident in its 'Digital India' campaign that was announced during a speech by Narendra Modi on 1 July 2015 to an audience in California (India Times, 2015). The campaign seeks to increase digital literacy, access to the internet and digital infrastructure in order to shape the country's digital commerce and governance. The demonetisation ordinance of 2016 was thereafter used to steer the public towards digital transactions since it removed ₹500 and ₹1000 notes (∼£5 and £10) from circulation, creating a sharp shortage in cash notes (Athique, 2019). By means of increasing digital literacy, bank use and digital payment, the government's 2016 Aadhaar Act has linked digitised identification to the targeted delivery of government welfare subsidies and services. Possessing an Aadhaar card is not mandatory per se, however, it is mandatory for receipt of a wide range of government benefits and services. It is also mandatory for citizens in possession of a Permanent Account Number (PAN) card, issued by the Income Tax Department for filing income tax returns, to link it with their Aadhaar card. The Aadhaar and PAN cards feature in the government's FI strategy. For example, the Jan Dhan-Aadhaar-Mobile payment interface, used for welfare payments, has connected the Aadhaar to the government's FI Pradhan Mantri Jan Dhan Yojana scheme (PMJDY). The PMJDY allows citizens to open a savings account with no minimum balance and offers other benefits, such as accident insurance. 
Aadhaar has utility in producing new flows of income, bank, transaction and credit history data, linked with personal demographic data that enable credit scoring across a wider population than has previously been possible in India (Athique, 2019). Among other objectives, by increasing digital payment, the government has set out to advance the information lenders have about potential borrowers (KOAN, 2019). The state set up a credit rating bureau in 2000 consisting of the United States' global company TransUnion to assist in delivering the infrastructure required to undertake credit scores mandated for use by banks and NBFCs from 2007. TransUnion's members gain access to customer scores to assist them in managing investment risk and in devising 'lending strategies to reduce costs and increase portfolio profitability' (TransUnion CIBIL, 2020, n.p.). The introduction of credit scoring in India was part of wider global, neoliberal, financialisation of credit designed for efficiency that increases the market's productivity and reduces its outgoing costs by calculating risk and transferring it to individual borrowers (RBI, 2014). By means of highlighting this point I quote Shri R. Gandhi of the RBI who, during a keynote speech, states 'borrowers with a good credit history will be rewarded for their discipline while delinquent borrowers will no longer be subsidized by lower-risk consumers ' (2015, n.p.). Despite market and state intentions for fintech to increase FI, auto operators are continuing to experience exclusion from secure and affordable finance. Most operators interviewed are unaware of credit scores since their knowledge of finance is produced through the experiences of other operators and brokers. Financial literacy programmes are occurring through civil society and drivers' unions to counteract this problem. Digital payment companies seek out auto operators at gas stations to encourage them to adopt products that will enable their customers to pay using a quick response code from an e-wallet. Platform trip booking companies Ola and Uber have also been active in educating operators to use digital payment. However, despite the activities of new corporate actors to scale up the use of fintech, digital payments remain limited with only 6% of micro-enterprises in India, including transport workers, using them (KOAN, 2019). The literacy approach has not substantially contributed to auto operators' FI because a lack of knowledge does not entirely determine their exclusion. Financier 4 shared a common perception that 'there is such a demand for moneylenders because operators are not educated. They need agents to do their paperwork'. But, he then went on to say 'They [operators] move around and cannot keep up with their address proof. Then when they go to banks and Shriram or Bajaj [NBFCs] they have a problem tracing their repayment history. Then they come to us'. Financier 4 is describing one of the difficulties operators have in producing data applicable for credit scoring. This problem indicates the mobile and precarious lives of operators rather than their supposed lack of education. Operators living in low-cost rented housing often encounter problems with landlords, which add to their residential movements, commonly every 2 years (DQ2). Based on the research undertaken it seems unlikely that addressing technical issues in the short term will lead to increased autonomy for operators to access a wider range of financiers and lower cost loans that might address their economic precarity. 
That is primarily because operators on meagre incomes cannot create the consumer spending, income and household data that would generate a credit score rating deemed suitable for low-cost finance provided by banks. Additionally, their income falls within a tax-free allowance and, therefore, operators have had very little incentive, or need, to report it to the Income Tax Department and therefore do not use PAN cards. Furthermore, there are significant class-determined differentiations in the use of digital and cash monies in India that are not easily overcome through education because of the social meanings associated with cash (Zabiliūtė, 2020). This is compounded by a lack of trust in digital technologies among operators. Examples given were issues with activating ATM cards and not receiving fares into their digital wallets. There is a concern among operators that they might be taken advantage of by those who are more technically literate. Distrust was not limited to operators, and one broker thought that cash loan repayments reduced the possibility of an auto operator 'cheating' a lender by cancelling direct debits, whereas a transaction in cash offers a material confirmation. This is relevant when operators delay an EMI so that they can make an alternative and more urgent payment. A number of everyday inconveniences and costs associated with digital payments further influence operators' withdrawal from them. For example, DA1 states: 'It's very difficult to pay rent to the vehicle owner every day if fare payments are credited to a bank account'. The government has placed charges on ATM use and a minimum withdrawal limit of 100 rupees to deter the use of cash, which mostly affects those needing to withdraw small amounts frequently. Here, the 'unstable incomes of the poor' are in conflict with the 'financial stability stipulated by financial institutions and cashless infrastructures' (Zabiliūtė, 2020: 80). Furthermore, there are gender norms in saving and handling household expenses. Operators are expected to hand over a certain proportion of their earnings to their spouse, and doing so in cash on a daily basis forms an obligation on operators. Accruing cash 'under the mattress' is bound to its materiality and signifies distrust in financial institutions and the immateriality of digital money, whilst a sense of security, on the other hand, is initiated through the personal guarding of cash (Sjørslev, 2020). The significance of gender norms in managing household finances is demonstrated by DB4, who claims 'my wife is my safety'. The experiences of auto operators contrast with the efficiency of digital transactions as promoted by advocates. For example, operators are sensitive to the temporality of digital payments. DH1 commented that if his first ride was paid online, then the simple act of buying a morning cup of tea became uncertain. Those using trip booking platforms preferred the next-day payments of Ola compared with the weekly payments of Uber. These are issues faced by those who utilise all of their income on their outgoings and have little money to save in an account. These issues and inconveniences are significant barriers to operators in accepting digital fares and thus in creating the credit scores used by NBFCs and banks to determine their chances of timely loan repayment. Ultimately, though, if operators were to evidence their income, credit and consumer histories, the data would be unlikely to produce credit scores that satisfy the needs of banks.
That is because banks' low-cost interest fees cannot cover the costs and risks associated with customers who are more likely to default on payments. The credit scores of auto operators, given their current income and employment precarity, would likely only provide access to subprime NBFCs. Auto operators' fare setting practices Micro-entrepreneurs working as rickshaw taxi operators have a history of precarity in Southern and South East Asia. Colonial governments sought to restrict their operation through various policies, such as restricting permit allocation and introducing laws that relegate services away from main roads, city centres or terminals in favour of allocating space and travel demand to more advanced forms of motorised-vehicular traffic most relevant to the upper and middle classes (Notar et al., 2018; Pante, 2014; Warren, 1986). These class-based policies continue today and impact operators' ability to negotiate with local governments for improved terms of employment. The auto operators participating in the research consist almost entirely of working-class (self-identified) men embedded in cash-based economies. They tend to set themselves a target daily income of 1000 rupees (∼£10-£11). From this, operators deduct fuel costs, daily subsistence and vehicle rental fees or loan repayments (if applicable). Most take home 500-700 rupees each day. The Indian states do not intervene to subsidise the procurement of vehicles, nor to raise the wages of self-employed micro-entrepreneur operators. Drawing operators into formal finance markets using fintech does little to improve their working conditions, low incomes or the cost and precarity of their debt. However, in addition to the significant caveats of digitising FI, the transparency instilled in digital payment poses a problem for operators managing their economically precarious positions through informal fare negotiations. The fare setting strategies of paratransit services operate between binary categories of informal and formal, and have ensued, at least to some extent, because of neoliberal economic policies that have shaped their governance and precarity. In certain contexts, as Rizzo (2017) describes for Dar es Salaam, competition within and across various paratransit services has contributed to the low incomes of paratransit operators. Additionally, the deregulation of paratransit, pursued under neoliberalism, has resulted in governments' subsequent difficulty in re-regulating services and their operation between formal and informal mechanisms (Cervero, 2000; Rizzo, 2017). When working in areas they are familiar with, operators know the price of a trip according to their fare meter and 'based on that', explains DD2, 'ask 10-20 rupees more'. This is a common example of fare surcharging, although a modest one, whereas in lucrative locations (bus, metro and train stations, shopping malls and hospitals), fares can be negotiated up to as much as double the meter rate (Author, forthcoming). In these spaces, auto operators have divided into peer/kinship associations they call 'unions'. They often pay extortion money to the police, who either overlook their operation in spaces not legally allocated to autos or assist operators in keeping a space closed to non-members of an association. The additional fares charged in these areas substantially impact travellers who are unfamiliar with the city and cannot easily predict fares (ibid).
Tacit knowledge is used to calculate a customer's willingness to pay based on any urgency in their need to travel, for example if they are going to an evening event, accompanied by young children, or elderly. In particularly congested areas of the city, autos offer an advantage over other public transport as they weave through standing traffic. Operators ask for higher fares through congested areas to offset the time of being 'stuck in traffic' (DB1) and the costs of 'burning [additional] gas' (DF1). Congestion can also surge during heavy rain, elections and festivals. At these times operators charge passengers one and a half times the fare rate. Operators' tacit fare calculations are a set of entrepreneurial practices set to profit from geographical and temporal circumstances, as well as class distinctions and their conjunction with the city's geography. These practices are required to survive on low-profit margins and to compensate for costly and insecure finance. Given that most operators practise illegal fare setting techniques, there is resistance to having those transactions opened for monitoring, which is likely made possible through removing cash payment and the discretion it affords (ibid). Digitised forms of FI are threatening the techniques that low-income entrepreneurs currently use to self-manage their precarity, which remain necessary even as they are drawn into formal mechanisms, institutions and markets of finance. Conclusion Micro-entrepreneurship is imparted through neoliberal policies that have reduced opportunities for waged employment and have instilled an ethos for individuals to seek out livelihood opportunities that cater to market demands. Responsibility to manage the risks associated with debt is passed on to self-employed self-starters. Studies have suggested that formal sector finance has the potential to improve auto operators' income by overcoming the highest interest rates of indigenous moneylenders, which are not fully compensated, additional to a reasonable income, within the government's flat-fare rate (Chadhar, 2016; Garg et al., 2010; Harding et al., 2016). However, this solution cannot be actualised without first understanding the difficulties many micro-entrepreneurs experience with fintech, their exclusion from credit scoring and the fact that global processes of financialisation do not guarantee financial security. Rather, these processes tend to reproduce the precarity of paratransit labour (high monthly repayments and vehicle asset seizures). Beyond the case of transportation labour, questions arise about the implications of FI as a development ideology, its increasingly technological apparatus, and its private ownership and management. It is difficult to see how policies that seek to open a market of subprime borrowers to finance companies by evidencing their incomes and credit histories will achieve economic empowerment. That is because being on the inside of the financial system (in this case, gaining access to NBFCs) does not place micro-entrepreneurs in a position of greater financial autonomy and freedom (Kear, 2012). Fitting risky subjects into mainstream systems that calculate their risk based on digital data flows will not necessarily reduce the cost, or improve the conditions, of their credit. Nor will it ease the financial precarity of micro-entrepreneurs experiencing ongoing class-based exclusion from lower-cost and more secure bank loans. The lowering of interest charged by NBFCs may legitimate the adoption of technologies that facilitate alternative credit scoring for FI.
However, low-income auto operators have little to gain while they receive no additional security over loan repayments through NBFCs and have less flexibility to access finance or to sell their vehicles to release equity when necessary without it affecting their access to future credit. Rapid changes to payment technologies are also threatening the strategies devised to fulfil paratransit operators' hire-purchase finance contract obligations (among other household and business expenses). Meanwhile, private, formal financial institutions (NBFCs, e-wallet and other fintech companies) are profiting from a steady supply of low-income working-class entrepreneurs seeking credit and from whom capital can be extracted with greater efficiency under the guise of financial empowerment. This paper demonstrates that financialisation and related digital forms of FI are contributing to the continuation of inequitable structures of social class from which rentier capitalists profit. These profits circulate within private markets of finance. In the future of the less-cash Indian economy, self-employed and low-income workers, who are rarely visible as human agents, may well be steered into systems of income evidencing and credit scoring through FI programmes. They are yet to receive surety from governments that their right to work in the city is secure, and without it they are not moving far from the exploitation previously suffered through their dependence on indigenous and informal moneylenders. Unless conditions of employment are improved for the self-employed to the effect that they can gain access to terms and interest fees more closely aligned with those of bank credit, underbanked micro-entrepreneurs will not benefit as much from digital FI as its advocates promise.
Stabilization and destabilization of second-order solitons against perturbations in the nonlinear Schrödinger equation

We consider splitting and stabilization of second-order solitons (2-soliton breathers) in a model based on the nonlinear Schrödinger equation (NLSE), which includes a small quintic term, and weak resonant nonlinearity management (NLM), i.e., time-periodic modulation of the cubic coefficient, at the frequency close to that of shape oscillations of the 2-soliton. The model applies to light propagation in media with cubic-quintic optical nonlinearities and periodic alternation of linear loss and gain, and to BEC, with the self-focusing quintic term accounting for the weak deviation of the dynamics from one-dimensionality, while the NLM can be induced by means of the Feshbach resonance. We propose an explanation for the effect of the resonant splitting of the 2-soliton under the action of the NLM. Then, using systematic simulations and an analytical approach, we conclude that the weak quintic nonlinearity with the self-focusing sign stabilizes the 2-soliton, while the self-defocusing quintic nonlinearity accelerates its splitting. It is also shown that the quintic term with the self-defocusing/focusing sign makes the resonant response of the 2-soliton to the NLM essentially broader, in terms of the frequency. I. INTRODUCTION The control of soliton dynamics has been drawing a great deal of interest both as a fundamental problem and a topic with a vast spectrum of applications, in particular to optics and matter waves [1]. It is well known that the nonlinear Schrödinger equation (NLSE), being an integrable one, supports an infinite sequence of exact higher-order soliton solutions, which may be understood as bound states of strongly overlapping fundamental solitons [2]. However, within the framework of the integrable equation, the binding energy of the exact multi-soliton complex is always equal to zero [2], hence the higher-order solitons are unprotected (unstable) against perturbations of initial conditions, which can induce splitting into fundamental constituents. For example, the second- and third-order solitons (which are often briefly called 2-soliton and 3-soliton, respectively) readily split into sets of two and three fundamental solitons, with amplitude ratios 1 : 3 and 1 : 3 : 5. Different approaches were proposed to stimulate the splitting and make it a real physical effect. One possibility is to introduce a specific nonlinear dissipation into the model by adding nonconservative terms to the NLSE, such as the one accounting for the intrapulse Raman scattering in optical fibers [3]. Other physically relevant settings are those with a periodic modulation of either the group-velocity-dispersion (GVD) or nonlinearity coefficient in the NLSE, which are known as dispersion management (DM) and nonlinearity management (NLM), respectively [1]. The DM in optical fibers can be of both sign-alternating and sign-preserving types (in the former case, the GVD coefficient periodically changes between positive and negative, alias normal-GVD and anomalous-GVD values). The format of the DM can be piecewise-constant, built as an alternation of fiber segments with different values of the GVD coefficient, or sinusoidal. It was predicted that the latter format, with a mild modulation amplitude that does not imply a change of the sign of the GVD, can induce the splitting of both fundamental [13] and higher-order solitons [4,5].
Both effects have been experimentally demonstrated in Ref. [6], which made use of a specially fabricated fiber with the diameter subjected to the sinusoidal modulation along the fiber, thus inducing the modulation of the local GVD coefficient. Various effects of the NLM in the context of fiber-optic telecommunications were theoretically studied in Refs. [7,8,9], and the integration of the NLM with the DM was considered in Ref. [10]. In terms of Bose-Einstein condensates (BECs) in dilute atomic gases, which, in the mean-field approximation, is also described by the NLSE (called the Gross-Pitaevskii equation, in that context [11]), the NLM represents the application of the Feshbach-resonance technique to the BEC in the case when the resonance is induced by a variable (ac) magnetic field [12]. In the latter case, Ref. [14] has demonstrated that a small-amplitude variable part of the nonlinearity coefficient gives rise to resonant splitting of higher-order solitons into fundamental ones, provided that the NLM frequency is close to the frequency of free shape oscillations of the higher-order bound state. In this work we demonstrate that the addition of a very weak quintic nonlinearity dramatically changes the stability of 2-solitons under the action of the NLM. Namely, the 2-soliton's splitting time becomes significantly larger (smaller) in the presence of weak additional attraction (repulsion), represented by a self-focusing (defocusing) quintic term. These predictions may be tested experimentally in nonlinear optics and BEC. A challenging possibility is to create effectively stable 2-solitons using a weak self-focusing (attractive) quintic nonlinearity, and control their behavior by means of the weak NLM. While both the small quintic term and weak NLM represent perturbations that break the integrability of the underlying NLSE, the stabilization of the 2-soliton states under the combined action of both perturbations implies an intriguing possibility of their mutual compensation, leading to an effective extension of the quasi-integrable behavior of the solitons. In optics, the cubic-quintic (CQ) nonlinearity with different sign combinations of the two terms were predicted [15] and observed [16] in aqueous colloids, as well as in dye solutions [17] and, very recently, in thin ferroelectric films [18]. The same nonlinearity was also predicted, via the cascading mechanism, in two-level media [19]. The self-defocusing quintic nonlinearity, accounted for by a proximity to the resonant two-photon absorption, was also observed in other optical materials [20]. In the description of effectively one-dimensional BEC settings, a universal self-attractive quintic term in the respective Gross-Pitaevskii equation (as said above, the attractive quintic nonlinearity should facilitate the creation of stabilized 2-solitons) accounts for the deviation from the exact one-dimensionality, i.e., a finite transverse size of the corresponding trap for the atomic condensate [21,22,23]. Besides that, a self-defocusing quintic term may take into regard three-body collisions in the BEC, provided that the related losses are negligible [24]. As concerns the time-periodic modulation of the cubic coefficient, dealt with in this work, in optics it may naturally arise as a result of the periodic alternation of the linear loss and compensating gain (a well-known transformation removes the respective linear terms in the NLSE, mapping them into the effective NLM) [7]. 
As mentioned above, the same modulation in BEC represents the action of the Feshbach resonance controlled by the ac magnetic field. Thus, both the CQ nonlinearity and NLM are generic features of numerous physical settings. The paper is organized as follows. Results obtained by means of systematic simulations, that demonstrate the stabilization/destabilization of the 2-soliton, subject to the action of the resonant or near-resonant NLM, under the action of the weak quintic term, that corresponds, respectively, to the self-attraction/repulsion, are reported in Section II. Analytical approximations, which make it possible to explain the underlying effect of the resonant splitting of higher-order solitons under the action of the weak NLM, and also the stabilization/destabilization of the 2-soliton, induced by the quintic term, are presented in Section III. These approximations are based on analysis of the system's energy in the presence of the NLM and quintic term. Finally, Section IV concludes the paper. II. SPLITTING OF THE SECOND-ORDER SOLITONS A. Second-order soliton in nonlinear Schrödinger equation with cubic-quintic nonlinearity In this work, we take the one-dimensional NLSE for wave function φ (x, t) in the usual scaled form, where g > 0 is the coefficient accounting for the cubic self-attraction, which is set to be g ≡ 1 in the absence of the NLM. In physically relevant realizations of Eq. (1), the dimensionless quintic-interaction constant, which accounts for the higher-order self-attraction/repulsion in case of ǫ < 0/ǫ > 0, is a small parameter, |ǫ| ≪ 1. Energy E (the Hamiltonian), from which Eq. (1) can be derived as i∂φ/∂t = δE/δφ * , where δ/δφ * stands for the variational derivative with respect to the complex-conjugate field [2], is Exact fundamental-soliton solutions to Eq. (1) are known for either sign of ǫ [23,26]. In the case of the integrable cubic NLSE, with ǫ = 0, exact n-soliton solutions are generated by initial conditions with integer n ≥ 1 [25]. For all n ≥ 2, they are breathers whose shape oscillates with the frequency independent of n, (2π/ω sh is usually called the soliton period). The explicit solution for the 2-soliton is relatively simple: The energy of the 2-solution, taken as per Eq. (2) with g = 1 and ǫ = 0, is B. Resonant splitting of the second-order soliton We introduce the NLM, in the form of the time-periodic (ac) modulation of the nonlinearity, by setting in Eq. (1), where amplitude b of the perturbation is small, |b| ≪ 1, and the modulation frequency is kept in resonance with the shape oscillations, ω = ω sh . Figure 2 shows a typical example of the evolution of the wave function subject to the action of the resonant perturbation for ǫ = 0 (with the cubic nonlinearity only) and b = 5 × 10 −3 . The 2-soliton splits into fundamental solitons with amplitudes related as 1 : 3, as expected from the exact solution available at b = 0. The respective velocity ratio of the splinters is 3 : 1, in agreement with the prediction based on the momentum conservation (it follows from the fact that the effective masses of the two splinters are in the same ratio as their amplitudes [2], i.e., 1 : 3, and the total momentum must remain equal to zero [14]). The chain of open squares in Fig. 4 shows the splitting time as a function of the perturbation strength, b (in the case of ǫ = 0), revealing the divergence in the limit of b → 0. We fit this dependence to a simple power-law expression, where a, c and p < 0 are constant parameters and b is taken in percents. 
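The fitting expression itself (Eq. (8) of the original paper) is not reproduced in this excerpt. A form consistent with the description just given (constants a and c, exponent p < 0, perturbation strength b measured in percent), assumed here purely for orientation rather than quoted from the paper, is

```latex
T_{\mathrm{split}}(b) \;\approx\; a\, b^{\,p} + c , \qquad p < 0 ,
```

which reproduces the divergence of the splitting time in the limit b → 0 noted above.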
The best fit to this dependence yields p = −0.372 ± 0.007, a = (5.84 ± 0.09) × 10³, and c = 0. It is worth noting the differences in parameter c for both cases. For ǫ < 0, positive c in the fitting set (10) means that, even for a strong perturbation, a finite waiting time is required to observe the splitting of the 2-soliton, which is another manifestation of its stabilization by the quintic self-focusing term. On the contrary, for ǫ > 0, the best fit actually required choosing c < 0 (in the parameter region where formula (8) with c < 0 produces T > 0). Setting c = 0 in the fitting set (11) means that the simulations demonstrate that the 2-soliton starts splitting instantaneously under the action of the strong resonant perturbation. Actually, for large perturbation amplitudes (b > 0.02), the splitting produces fundamental solitons with the amplitude ratio different from 1 : 3, which may be explained by effects induced by the relatively strong perturbation on the constituents of the 2-soliton in the course of the splitting. In the case of the self-defocusing quintic nonlinearity, ǫ > 0, the splitting time shows saturation for extremely weak perturbations (see black circles in Fig. 4), i.e., the splitting time ceases to grow with further weakening of the perturbation. A plausible explanation for this feature, which demonstrates the fragility of the second-order soliton in this situation, is the fact that splitting is spontaneously initiated by the numerical noise. We also note that the quintic nonlinearity slightly changes the frequency of the shape oscillations of the 2-soliton, see Fig. 1. The modulation frequency was modified, accordingly, in the simulations, to maintain the resonance condition for all cases included in Fig. 4. On the other hand, |p| drops for ǫ > 0, showing the destabilization of the second-order soliton under the action of the quintic self-defocusing. C. The near-resonance response In Ref. [14] it was shown that the splitting of the 2-soliton (in the case of ǫ = 0) could also be caused by the temporal modulation of the coefficient in front of the cubic term with the frequency slightly different from the resonant value (4). Here, we aim to confirm this behavior for the case of the cubic NLSE and extend the analysis to the CQ model, with ǫ ≠ 0 (in the latter case, the resonant frequency should be first slightly adjusted, as mentioned above). The results are summarized in Fig. 6 (the broadening is weak in the case of the weakest quintic self-focusing, which corresponds to ǫ = −7.6 × 10⁻⁴ and is represented by the chain of black triangles). In the case of the repulsive quintic nonlinearity, ǫ > 0, the dependence shown by the circles in Fig. 6 is nearly flat, i.e., the second-order soliton readily splits even at a relatively large detuning from the resonance. On the contrary, the stabilization of the 2-soliton by the self-focusing quintic nonlinearity is seen to be robust also under the off-resonance conditions. III. ANALYTICAL ESTIMATES A qualitative explanation of some numerical findings reported above can be provided by an analytical consideration of the model based on Eq. (1). First, it is possible to explain the underlying effect of the resonant splitting of the 2-soliton.
Indeed, as mentioned above, the binding energy of the higher-order solitons is exactly zero in the integrable NLSE; therefore, the splitting may be explained by the fact that the resonant temporal modulation pumps energy into the bound 2-soliton state, causing its splitting into constituents, which carry away the excess energy in the form of their kinetic energies. In the presence of the small modulation term in the cubic coefficient given by Eq. (7), whose frequency is set to coincide with the resonant value (4), the exact evolution equation for energy E₀ of the unperturbed NLSE, i.e., Eq. (2) with g = 1 and ǫ = 0, can be derived; this is Eq. (12). In the lowest approximation, one can substitute the unperturbed 2-soliton solution, as given by Eq. (5), into the right-hand side of Eq. (12). Under the condition that the shape oscillations of the 2-soliton are synchronized with the temporal modulation in Eq. (7) (analysis of numerical results confirms this conjecture, in the case of the slowly developing splitting), the time averaging of Eq. (12) yields an effective energy-pump rate, Eq. (13), with the constant entering it given by an integral expression over the 2-soliton profile. This explanation of the gradual onset of the splitting of the 2-soliton through the pumping of the energy into it by the resonant NLM was not considered in Ref. [14], which was dealing with the (near-)resonant splitting in the cubic NLSE (with ǫ = 0). The stabilization/destabilization of the 2-soliton by the self-focusing/defocusing quintic nonlinearity may also be explained by means of the consideration of the energy. In expression (2), the term corresponding to the quintic nonlinearity in Eq. (1) yields an additional energy, given by expression (16), obtained by the substitution of the unperturbed 2-soliton solution (5) and averaging over the period of its shape oscillations. The fact that expression (16) is negative for ǫ < 0 and positive for ǫ > 0 explains the stabilization/destabilization of the 2-soliton by the self-focusing/defocusing quintic nonlinearity. According to Eq. (13), under the action of the resonant NLM the energy of the resonantly driven 2-soliton grows, on average, linearly in time according to Eq. (17), where the initial value, E₀(0), is energy (6). Comparing Eqs. (6) and (17), one can conclude that, as long as the resulting deviation of amplitude A from its initial value, A₀, remains small, the amplitude varies in time as A(t) ≈ A₀ − 0.8 A₀³ b·t. This variation leads to a detuning of the resonant driving, according to Eq. (4), but, on the other hand, Fig. 6 demonstrates that the detuning does not produce an essential effect for −ǫ > 10⁻³. IV. CONCLUSIONS In this work we have considered the influence of the weak quintic nonlinearity on the stability and splitting of second-order solitons (alias 2-solitons) in the NLSE-based model, under the action of the weak resonant NLM (nonlinearity management), i.e., periodic time modulation of the coefficient in front of the cubic term, with the frequency equal to or close to the frequency of the free shape oscillations of the 2-soliton. The model applies to the propagation of light in CQ optical media, taking into account the periodic action of the linear loss and compensating gain. The same model finds a natural application to BEC, where the self-focusing quintic term accounts for the effect of the residual three-dimensionality in the effective one-dimensional approximation, while the NLM may be induced by the Feshbach resonance controlled by an ac magnetic field.
By means of direct simulations and approximate analytical considerations, we have demonstrated that the additional weak self-focusing quintic nonlinearity stabilizes the 2-soliton, while the self-defocusing nonlinearity of the same type makes it fragile and accelerates its splitting. We have confirmed the resonant character of the splitting of the 2-soliton under the action of the NLM, and proposed an explanation for this effect, based on the consideration of the rate at which the energy is pumped into the bound state by the resonantly tuned ac drive. We have also studied the resonant NLM-induced splitting of the 2-soliton in the presence of the weak quintic nonlinearity. Depending on its sign, the self-defocusing/focusing higher-order nonlinearity gives rise to conspicuous broadening/sharpening of this resonant response. The results of the numerical and analytical considerations reported in this paper for 2-solitons can be readily extended to higher-order solitons.
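As a self-contained illustration of the kind of direct simulation discussed in Sec. II, the sketch below integrates a modulated cubic-quintic NLSE with the standard split-step Fourier method. The scaled equation, the 2-soliton initial condition φ(x, 0) = 2 sech(x), the drive frequency ω = 4, and all numerical parameters are assumptions made for this sketch; they are not guaranteed to match the conventions of Eqs. (1)-(7) of the paper.

```python
# Illustrative split-step Fourier integration of a modulated cubic-quintic NLSE.
# Assumed scaled form (may differ from the paper's conventions):
#   i dphi/dt = -(1/2) phi_xx - g(t) |phi|^2 phi + eps |phi|^4 phi,
#   g(t) = 1 + b sin(w t),  phi(x, 0) = 2 sech(x)  (2-soliton initial condition).
import numpy as np

N, L = 2048, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

eps = -1.0e-3          # quintic coefficient (eps < 0 = self-focusing in this convention)
b, w = 5.0e-3, 4.0     # NLM amplitude and drive frequency (w = 4 assumes this scaling)
dt, steps = 2.0e-3, 100_000

phi = 2.0 / np.cosh(x)                    # 2-soliton initial condition
half_disp = np.exp(-0.25j * k**2 * dt)    # dispersive factor for a half step dt/2

for n in range(steps):
    t = n * dt
    phi = np.fft.ifft(half_disp * np.fft.fft(phi))                        # dispersion, half step
    g = 1.0 + b * np.sin(w * (t + 0.5 * dt))                              # modulated cubic coefficient
    phi *= np.exp(1j * dt * (g * np.abs(phi)**2 - eps * np.abs(phi)**4))  # nonlinear step
    phi = np.fft.ifft(half_disp * np.fft.fft(phi))                        # dispersion, half step

print("peak amplitude at t = %.0f:" % (steps * dt), np.abs(phi).max())
```

Monitoring |φ(x, t)| along the run, rather than only the final profile, is what reveals the breather oscillations and an eventual separation into two fundamental solitons with unequal amplitudes.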
Equilibration of the planar modes of ultracold two-dimensional ion crystals in a Penning trap

Planar thermal equilibration is studied using direct numerical simulations of ultracold two-dimensional (2D) ion crystals in a Penning trap with a rotating wall. The large magnetic field of the trap splits the modes that describe in-plane motion of the ions into two branches: high-frequency cyclotron modes dominated by kinetic energy and low-frequency E × B modes dominated by potential energy associated with thermal position displacements. Using an eigenmode analysis, we extract the equilibration rate between these two branches as a function of the ratio of the frequencies that characterize the two branches and observe this equilibration rate to be exponentially suppressed as the ratio increases. Under experimental conditions relevant for current work at NIST, the predicted equilibration time is orders of magnitude longer than any relevant experimental timescales. We also study the coupling rate dependence on the thermal temperature and the number of ions. In addition, we show how increasing the rotating wall strength improves crystal stability. These details of in-plane mode dynamics help set the stage for developing strategies to efficiently cool the in-plane modes and improve the performance of single-plane ion crystals for quantum information processing. I. INTRODUCTION Single-plane crystals of several hundred ions in Penning traps provide an appealing platform for quantum information processing and quantum sensing. The large number of qubits in this system provides for the possibility of quantum simulations of paradigmatic spin and spin-boson models in a regime where classical simulation becomes intractable [1][2][3][4]. Experimental work to date has focused on all-to-all interactions between the ion qubits, studying the buildup of qubit correlations in a regime where experiment can be benchmarked with theory [5,6], but with improved control and the addition of techniques such as single-site addressability, more complex simulations and general information processing will be possible [7]. This promise has motivated recent efforts to improve the Penning trap platform and increase the control and tools available to the experimentalist. This includes efforts to develop miniaturized permanent-magnet systems that offer portability [8], traps with improved optical access [9], the incorporation of sideband cooling [10], and proposals for quantum computing and simulation in arrays of Penning traps [11]. In trapped-ion quantum information processing, strong interactions between the ion qubits (or spins) are generated by coupling the ion crystal spin degrees of freedom with the ion crystal motional (or mode) degrees of freedom through the application of a spin-dependent force. For single-plane crystals in Penning traps, this is routinely accomplished by coupling the ion spins to the drumhead modes that describe ion motion perpendicular to the plane of the crystal (or parallel to the magnetic field of the Penning trap) [12]. A single-plane crystal with N ions will support N drumhead modes, each of which can be described as a simple harmonic oscillator. The drumhead modes are efficiently cooled to near their ground state by a combination of Doppler and EIT (electromagnetically induced transparency) cooling [13,14].
In contrast, the in-plane ion motion is complicated by the presence of the strong magnetic field of the trap and has not to date been employed for quantum information processing tasks. The strong magnetic field splits the planar normal modes into a cyclotron branch containing N high-frequency modes and an E × B branch containing N low-frequency modes. Additionally, the planar modes do not undergo simple harmonic motion and their average potential and kinetic energies are not equal. The E × B modes are dominated by potential energy associated with thermal position displacements, while the cyclotron modes are dominated by kinetic energy associated with cyclotron motion. In contrast to the drumhead modes, efficient cooling of the in-plane modes has not been demonstrated experimentally or even clearly discussed theoretically for multi-ion crystals. Doppler cooling of the cyclotron modes to millikelvin temperatures appears feasible [15,16], but recent theoretical work indicates that observed frequency instabilities of the drumhead mode spectrum can be attributed to an elevated temperature of order 10 mK for the E × B modes [17]. A detailed understanding of the planar mode dynamics and the energy exchange between the different planar mode branches, besides being of fundamental importance [18][19][20][21], is an important first step in the design of efficient cooling techniques as well as quantum information protocols that utilize these modes. In this paper, we investigate the exchange of energy between the cyclotron and E × B branches of single-plane ion crystals in Penning traps using an eigenmode analysis of a first-principles molecular dynamics-type simulation [15]. We characterize the energy exchange as a function of the ratio R of the ion crystal cyclotron and E × B center-of-mass mode frequencies (see Eq. (16)). The center-of-mass frequencies provide a convenient characterization for the frequency ratio between the two branches. From simulations performed with 5 < R < 10 we find that the exchange of energy between the two branches is exponentially suppressed as a function of R. A simplistic extrapolation to R ≈ 735, relevant for the current NIST experimental set-up, gives an equilibration time many orders of magnitude longer than the age of the universe. In addition, we also study the less-sensitive dependence of the rate of energy exchange between the branches on the initial energy and the number of trapped ions. Finally, for large R where the energy exchange between branches is negligible, we study the exchange of energy between modes within a given branch and observe a significantly faster equilibration within the E × B branch than the cyclotron branch. In the course of the above studies, we also show that increasing the rotating wall strength leads to improved crystal stability. These observations improve our understanding of the in-plane mode dynamics, setting the stage for developing strategies for efficiently cooling the E × B modes. The isolation of the cyclotron modes suggests their potential use and efficacy in quantum information processing protocols. The organization of the paper is as follows. In Sec. II, we review the governing equations for the rotating-wall Penning trap configuration at NIST. The model equations are the starting point for both direct numerical simulation and the linear eigenmode analysis. In Sec. III, we present both an eigenmode and band-pass filter technique for determining the energies of the two mode branches.
The eigenmode technique is based on linearizing the system, details of which are presented in Appendix A. In Sec. IV we discuss Penning trap and ion crystal parameters that affect the coupling rate and develop a systematic procedure for obtaining different crystal configurations characterized by the desired parameters. In Sec. V, we study the influence of the rotating wall strength on the ion crystal stability. We find that a strong rotating wall improves the crystal stability and the effectiveness of the eigenmode measurement. In Sec. VI, we present the first-principles simulation results. We begin by showing a thermalization process of the modes for R = 5, where equipartition of the mode energies is reached after 10 ms evolution. We then study the dependence of the equilibration rate between the cyclotron and E × B modes on several parameters in Sec. VI B. For large R, where the inter-branch coupling is very weak, we also examine coupling among the modes within each branch. Finally, in Sec. VII, we summarize with a discussion and concluding remarks. II. THEORETICAL FORMULATION We have developed an N-particle classical simulation of ultracold ions in a Penning trap, including a rotating wall and axial and planar Doppler cooling [15]. The code includes a fairly realistic implementation of the experimental configuration employed at NIST [1,5,22]. Here we use this code (without implementing the laser cooling) to simulate the equilibration of the planar modes. We analyze the simulation through an eigenmode decomposition. In this section, we introduce the model and parameters relevant for single-plane crystals in Penning traps and describe the planar normal modes of motion [17,23]. Details of the normal mode analysis are given in Appendix A. We treat N ions, all with the same mass m and charge q, as classical point particles confined in a rotating-wall Penning trap. The Penning trap confining fields consist of a magnetic field B = Bẑ, a quadrupole electrostatic potential ϕ_trap(x) = (1/4) k_z (2z² − x² − y²), and a time-dependent potential ϕ_wall(x, t) = (1/2) k_z δ (x² − y²) cos[2(θ + ω_R t)] called the rotating wall. The dimensionless parameter δ characterizes the strength of the rotating wall potential relative to that of ϕ_trap. The parameters θ and ω_R are the azimuthal angle and the rotating wall frequency. Further details of the simulation model are given in Ref. [15]. Experimentally [1,5,22], the ions are cooled to a regime where the ions are strongly correlated, with a correlation coefficient Γ = q²/(4πε₀ a k_B T_p) ≫ 1 [24] (a is the typical inter-ion spacing and T_p is the temperature). The strongly correlated ions form a crystal that rigidly rotates at the frequency ω_r = ω_R [15] as the rotating wall potential locks the ion crystal rotation frequency. In the rotating frame of the crystal, the potential energy of N ions with coordinates x_i = (x_i, y_i, z_i) is time independent and is given by [15] where we parametrize the trap strength with axial trapping frequency ω_z = (q k_z/m)^{1/2} and bare cyclotron frequency Ω = qB/m. A stationary equilibrium ion crystal state with positions x_0i satisfying ∂Ψ_r/∂x_0i = 0 (i = 1, 2, ..., N) can be found numerically by minimizing the potential energy Ψ_r [3]. In this work, we study single-plane ion crystals that have a two-dimensional structure. There are two requirements on the radial confinement strength for an ion crystal to maintain a single-plane structure in a Penning trap.
The strength of the radial confinement (second line in Eq. (1)) relative to the strength of the axial confinement (first line in Eq. (1)) is characterized by the parameter β. For a single-plane crystal of N ions, the trap asymmetry β must be less than a critical value β_c(N) [24]. Second, the radial confinement strength must be stronger than the rotating wall strength. The force along the y axis is such that trapping along the y direction requires β > δ. At ultracold temperatures, ion displacements relative to the equilibrium inter-ion spacing are small. This feature allows us to linearize the ion motion and then solve for the normal modes. In a two-dimensional crystal, the linearized ion motion in the out-of-plane (z) direction decouples from that in the planar (x and y) direction. In this work, we solve for the normal modes in the planar direction. As presented in Appendix A, there are 2N normal modes in the planar direction with eigenvectors u_n and mode frequencies ω_n. In terms of these eigenvectors, we can express any small position and velocity displacements s_⊥ = (x_⊥, v_⊥)^T of the ions in the planar direction as a sum over the normal modes (Eq. (5)). Because of the Lorentz force arising from the magnetic field, the planar eigenvectors obey a generalized orthogonality relation with respect to a composite energy matrix E [17]. As shown in Eq. (A8), E is constructed out of the (diagonal) mass matrix of the ions and a stiffness matrix K_⊥ that is obtained by linearizing the ion equations of motion. As a result of the E-orthogonality of the eigenvectors, the complex amplitude a_n of each normal mode is given by Eq. (6), where the eigenvectors u_n have been normalized according to Eq. (A9). Among the 2N normal modes, N modes correspond to the low-frequency E × B branch and N modes correspond to the high-frequency cyclotron branch (which we will respectively denote by subscripts b and c in what follows). We arrange the 2N modes (n from 1 to 2N) in ascending order according to their frequencies. The E × B branch then contains modes 1 to N and the cyclotron branch contains modes N + 1 to 2N. A typical configuration of an ion crystal studied in this manuscript is shown in Fig. 1(a). The associated distribution of the mode frequencies is presented in Fig. 1(b). Here trap and ion parameters are chosen so that the frequencies of the two branches are separated by a small amount. III. DIAGNOSTIC TOOLS FOR IN-PLANE MODES In this section, we first describe the eigenmode analysis method that we use to measure the kinetic and potential energies of individual planar modes in the course of a molecular dynamics simulation. Under certain conditions, the linearization assumption giving rise to the mode picture can be marginal due to the significant potential energy (and displacements of the ions) associated with the E × B modes. Therefore, we subsequently also discuss a band-pass filter method, based on the Fourier transform of a time series of the ions' velocities, which is applicable regardless of the linearization assumption. We use the latter method to validate the results from our eigenmode analysis. A. Eigenmode measurement method We start by separating eigenvectors into their coordinate and velocity components as u_n = (r_n, v_n)^T. Using Eqs. (A7) and (5), we express the total in-plane thermal energy in terms of the planar modes in Eq. (7). Here, T_p is defined as a mean planar temperature while T_n describes the temperature of a single mode. In Eq.
(7), terms involving r_n and v_n respectively represent the potential (E_p^n) and kinetic (E_k^n) energies in a single mode. We replace a_n by Eq. (6) to obtain the potential and kinetic energies in a single mode, Eq. (8). Equation (8) allows measurement of the mode potential and kinetic energies of any instantaneous state s_⊥(t) based on the orthonormal eigenvector set {u_n}. To evaluate the energy distribution during an evolution process, we simulate the crystal evolution and record ion displacements s_i(n_s∆t) = (x_i(n_s∆t), v_i(n_s∆t)) in the rotating frame with the sampling period ∆t and total sample number N_s. Using the recorded velocities and displacements, we calculate the kinetic energies in the two branches based on Eq. (8), which yields Eq. (9). Similar expressions are obtained for the potential energies in the two branches by replacing m v_n* v_n with r_n* K_⊥ r_n in Eq. (9). B. Band-pass filter method A second method for measuring the kinetic energies in the two mode branches is by band-pass filtering the velocities as described below. For the same recorded velocities used in Eq. (9), we perform a Fourier transform on the velocity of ion j, with τ = N_s × ∆t the total recording time. Given the discretely sampled velocities, we approximate the Fourier transform by utilizing a discrete fast Fourier transform, with l ∈ {0, 1, ..., N_s − 1} and ∆ω = 2π/τ the frequency resolution. In order to accommodate the full frequency range of the planar modes, N_s∆ω/2 = π/∆t exceeds the maximum mode frequency ω_m. We then apply a band-pass filter to separate ṽ_j(l∆ω) with respect to mode frequency. We choose the band-pass filter frequency l_0∆ω, with l_0 a positive integer, to be located in the frequency gap of the two mode branches. With the help of l_0, we divide ṽ_j into a low-frequency part ṽ_j^b and a high-frequency part ṽ_j^c. Next, we apply inverse Fourier transforms to transform ṽ_j^b and ṽ_j^c back to v̂_j^b(n_s∆t) and v̂_j^c(n_s∆t) in the time domain. We repeat the above process for all ions (j = 1, ..., N) to calculate the kinetic energies, K_b(n_s∆t) and K_c(n_s∆t), in the two branches (Eq. (14)). We can then compare the results from Eqs. (14) and (9) in the simulation for validation purposes. In Sec. VI A, good agreement between the two methods is achieved when the displacements are small and no slippage or distortion of the crystal is observed. We have also found good agreement between the total kinetic and potential thermal energies obtained from the eigenmode analysis and those obtained from a direct evaluation using the ion coordinates in the simulation, again for small displacements. It is worth noting that the eigenmode method is not restricted by requirements on the sampling period or the size of the data collection. The band-pass filter method, however, requires an appropriate sampling period and enough data to cover the frequencies of all planar modes. While the band-pass filter method only measures the kinetic energy, it performs better than the eigenmode measurement when displacements are not extremely small. IV. PARAMETERS CONTROLLING EQUILIBRATION In this section, we identify important trap and crystal parameters that control the thermal equilibration of the planar modes. We also describe a procedure to tune the parameters and obtain similar crystal configurations whose equilibration rates can be meaningfully compared. Normal modes of trapped ion crystals are only decoupled in the limit of small-amplitude displacements.
In reality, anharmonic terms in the Coulomb interaction couple different modes and may eventually lead to equilibration [25]. Prior work with one-dimensional ion chains in an RF Paul trap showed that the equilibration rate between the high-frequency radial modes and the low-frequency axial modes is exponentially suppressed in the ratio of the characteristic frequencies of motion along these two directions [26]. This result can be understood via energy conservation in a phonon picture, wherein for a large separation of frequencies, several low-frequency phonons must be created in order to annihilate a single high-frequency phonon. Such multiple phonon processes arise as high-order terms in the Coulomb interaction with small effective rates. A natural measure of the characteristic frequency of motion for the cyclotron and E × B branches is provided by the center-of-mass (c.m.) frequencies ω_+, ω_− of each branch. The c.m. frequencies are independent of ion number and are the same as the single-ion motional frequencies. In the weak rotating wall limit (δ ≪ 1), we can solve analytically for the two frequencies in a frame rotating at a frequency ω_r; ω_+ is the c.m. mode frequency for the cyclotron branch and ω_− is the c.m. mode frequency for the E × B branch. We study the dependence of the equilibration between the two branches on the ratio R = ω_+/ω_− (Eq. (16)). Other important parameters that can impact the equilibration rate are the number N of trapped ions and the thermal temperature in the planar direction. With larger numbers of ions, one expects more available modes for satisfying the frequency match required for phonon-phonon coupling. When the temperature is higher, the ion displacements are larger and anharmonic Coulomb coupling is stronger [27]. To enable a study of the energy transfer between the planar modes, we develop a systematic procedure by which we can obtain similar crystal configurations that can be meaningfully compared while varying the frequency ratio parameter R. This is not trivial due to the large number of trap parameters. We obtain crystals with the same rotation frequency ω_r, magnetic field B, relative rotating wall strength δ, and relative radial confinement strength β. We study single-plane crystals with N ≤ 127. The critical trap asymmetry parameter for N = 127 ions is β_M(127) ≈ 0.059. We fix β = 0.05, δ = 0.0126, and ω_r = 2π × 400 kHz. The axial trapping frequency ω_z and the bare cyclotron frequency Ω can then be expressed as functions of R, β, and ω_r. The relation between ω_z, Ω, and R, with ω_r/(2π) = 400 kHz and β = 0.05, is plotted in Fig. 2(a). By fixing the rotation frequency and magnetic field we obtain crystals that have approximately the same ion density. Physically, the cyclotron frequency determines the ion mass through m = qB/Ω and the axial frequency the required trap voltage for that ion mass. In Fig. 2(b) we investigate the dependence of the frequency gap between the two branches, ∆ω = ω_{c,min} − ω_{b,max}, on R for N = 91 ions. The nearly linear relation indicates that R also provides a means of parameterizing the gap between the two branches. V. ROTATING WALL STRENGTH In order to apply the eigenmode measurement method, we need a stable crystal equilibrium. In this section, we show that a strong rotating wall is a way to achieve such a crystal configuration. (The parameters used in this section are those of the crystal shown in Fig. 1 and discussed in Sec. VI A; in particular, R = 5 and δ = 0.0126.)
As ions with significant displacements escape from the vicinity of their equilibrium positions, the crystal is not stable anymore and we cannot use the eigenmodes associated with the original equilibrium state to describe the system. The failure of the eigenmode method in such situations can be observed numerically from the lack of energy conservation when mode energies are computed using this method. Therefore, the effectiveness of the eigenmode measurement method relies on the crystal stability. To quantify the crystal stability, we consider the sum of the squared thermal displacements in the planar direction of all the ions in the rotating frame. This quantity can be written as a sum of mean-squared thermal displacements δr 2 n of the individual planar modes, which are given by [17] HereR n = E n p /E n k is the ratio of the potential to kinetic energy of the nth mode, 1 ω n is the mode frequency, T n is the mode temperature, and δr 2 n is obtained by summing the thermal fluctuations in mode n over all the ions. In Fig. 3, we plot the distribution of δr 2 n for N = 91 and T n = 1 mK. We observe that δr 2 1 of the rocking mode (with n = 1) is much larger than for the other modes. When the rocking mode temperature gets higher, (δr 2 1 /N ) 1/2 becomes comparable to the interparticle spacing of d = 12.1 µm. Since the contribution to the total crystal displacement is dominated by the rocking mode, we use the mean squared displacement δr 2 1 of this mode to characterize the crystal stability. We plot (δr 2 1 /N ) 1/2 versus rotating wall strength in Fig. 4. The behavior seen in Fig. 4 can be explained qualitatively as follows. A strong rotating wall causes a difference between the trapping potential in the x and y directions in the rotating frame For δ = 0, the trapping potential is azimuthally symmetric, resulting in a circular crystal with a zero-frequency rocking mode. With increasing δ, the asymmetry in the trapping potential leads to an elliptic crystal that is squeezed along the axis corresponding to the stronger trapping potential (in this case, the x-axis in the rotating frame). The breaking of the azimuthal symmetry is accompanied by the rocking mode acquiring a non-zero frequency that increases with δ. Correspondingly, the mean squared displacement δr 2 1 associated with this mode decreases resulting in improved crystal stability. 1 We note that R defined in Eq. (16) can be shown to be the ratio of potential to kinetic energy for the E × B c.m. mode [17]. For illustration, we show the time trace of two crystal configurations with normalized wall strength of δ = 3.5 × 10 −4 and δ = 0.0126 in Fig. 5. We first generate two equilibrium crystals with the respective rotating wall strengths and initialize their E × B branches with T p = 1 mK. We then track the trajectories of the ions in the rotating frame once in thermal equilibrium (after 50 ms). A stronger rotating wall leads to a more stable configuration with well localized ions. In Fig. 5, early times are represented by yellow dots and later times represented by blue dots. In Sec. VI, we set δ = 0.0126. In passing, we note that besides ensuring the validity of the eigenmode method, crystals produced with a strong rotating wall may also offer several experimental advantages. The improved localization of the ions may be beneficial for implementing schemes for single-site addressing. 
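A simple numerical form of the stability criterion used above, comparing the root-mean-squared planar displacement per ion with the interparticle spacing, might look as follows. It works directly from recorded rotating-frame positions rather than from the per-mode expression, whose displayed form was lost in extraction.

```python
import numpy as np

def rms_planar_displacement(x, x_eq):
    """(sum_i <|dr_i|^2> / N)^(1/2) from sampled rotating-frame positions.

    x    : array (N_s, N, 2), sampled positions
    x_eq : array (N, 2), equilibrium positions
    """
    dr = x - x_eq                                   # thermal displacements
    msd = np.mean(np.sum(dr**2, axis=2), axis=0)    # <|dr_i|^2> per ion
    return np.sqrt(np.mean(msd))

# crude stability check against the interparticle spacing quoted in the text
d = 12.1e-6   # m, N = 91 configuration
# crystal considered stable while rms_planar_displacement(x, x_eq) << d
```

In practice this quantity is dominated by the rocking mode, as discussed above.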
The strong wall may also improve Doppler cooling of the planar modes, since torque from the cooling laser [16] can be more effectively counterbalanced, thereby ensuring that the crystal does not slip during the cooling process. VI. SIMULATION OF PLANAR MODES COUPLING In this section, we perform molecular dynamics type simulations to study the coupling in the planar direction. During the thermal equilibration process, we validate the eigenmode measurement method by comparing the energy measurement results with the band-pass filter method. We then investigate the cyclotron-E × B coupling as we vary R, the planar thermal temperature and the number of ions. Finally, we study the coupling within the E × B branch and the cyclotron branch when the cyclotron-E × B coupling is prohibited by large R. A. Equilibration of the two branches Here, we present the thermal equilibration process using both energy measurement methods presented in Sec. III. We generate a crystal of N = 91 ions with charge q = e in a Penning trap with parameters R = 5, β = 0.05, ω r = 2π × 400 kHz, δ = 0.0126, and B = 4.4588 T. Accordingly, ω z = 2π × 0.704 MHz, Ω = 2π × 1.082 MHz, and m = 63.3 u. We generate an initial state far from the thermal equilibrium by initializing modes in only one of the two branches with non-zero thermal energy. Details of the initialization are discussed in Appendix B. We initialize the 91 modes in the E × B branch with a homogeneous temperature T n = 1 mK (n ∈ b), yielding T p = 0.5 mK. We then let the system evolve for 50 ms. In Fig. 6(a) and (b), we compare the kinetic energies in the two branches based on Eqs. (9) and (14). From the frequency ranges in Fig. 1, we use a filter with l 0 ∆ω = 320 kHz for the band-pass filter method. In Fig. 6(c) and (d), we compare the total kinetic and potential energies in the planar direction based on the eigenmodes method and a direct measurement using the position and velocity coordinates of the ions. For the latter, we utilize E k = m|v i | 2 /2 and Eq. (1) to directly measure the total kinetic and potential energies in the planar direction. The good agreement observed in Fig. 6 demonstrates that the eigenmode method is valid at the low planar temperatures used in this paper. Using the eigenmode method, we now plot the behavior of the total energies in the two branches in Fig. 7(a). We observe that the energies of the two branches approach T p , which indicates an equipartition between the two branches. The dependence of the equipartition rate between the two branches on the parameters discussed in Sec. IV is studied in the next section. To present details of the equipartition process, we compare the energy distribution in 2N = 182 modes at t = 0 and 50 ms, as shown in Fig. 7(b) and (c). The total energy for each E × B mode is initialized at 1 mK. At later times, e.g. at 50 ms as shown in Fig. 7(c), the system approaches equipartition. B. Dependencies of the cyclotron-E × B coupling We now proceed to study the cyclotron-E × B equilibration rate dependence on R, the initial temperature and the number of ions. We average every measurement over 10 realizations with random-phase initial conditions (Appendix B). We measure the equilibration rate by fitting the timedependent behavior of the temperatures in the two branches, T b = E b /N k B and T c = E c /N k B , to the following exponential functions where we define α as the cyclotron-E × B equilibration rate. 
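The explicit form of Eq. (20) did not survive extraction. A common choice, and the one assumed in the sketch below, is single-exponential relaxation of each branch temperature toward a shared equilibrium value with a common rate α; the parametrization and function names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def relax(t, T0, T_eq, alpha):
    """Assumed single-exponential relaxation toward equilibrium."""
    return T_eq + (T0 - T_eq) * np.exp(-alpha * t)

def fit_equilibration_rate(t, T_b, T_c):
    """Fit both branch temperatures with a shared rate alpha and shared T_eq."""
    def model(t, Tb0, Tc0, T_eq, alpha):
        return np.concatenate([relax(t, Tb0, T_eq, alpha),
                               relax(t, Tc0, T_eq, alpha)])
    p0 = [T_b[0], T_c[0], 0.5 * (T_b[0] + T_c[0]), 1.0 / t[-1]]
    popt, _ = curve_fit(model, t, np.concatenate([T_b, T_c]), p0=p0)
    return popt[-1]   # alpha, the cyclotron-ExB equilibration rate
```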
We will use this definition of α in what follows when we investigate the dependence of the equilibration rate on various parameters. To allow for a well-defined frequency gap between the two branches, we only investigate cases with R ≥ 5. The following parameters are held constant: β = 0.05, ω r = 2π ×400 kHz, δ = 0.0126, and B = 4.4588 T. With N = 91 we first vary R from 5 to 10 with the 91 modes in the E × B branch initialized with a homogeneous temperature T n ≡ 1 mK (n ∈ b). The time histories of the energies in the two branches are shown in Fig. 8. We observe that, with increasing R, the time to equipartition increases. The black lines are exponential fits based on Eq. (20) that determine the equilibration rate α. In Fig. 9, we display the relation between the fitted α and R. We find that α is exponentially suppressed with increasing R with a fitted exponential function (gray dashed line) dependence of α = exp(−0.765R + 9.608) s −1 . This exponential scaling, showing suppression of the coupling rate with increasing ratio of frequencies is similar to what is seen in Ref. [24]. Moreover, the relevant parameters in current NIST experiments are ω z = 2π × 1.585 MHz, ω r = 2π × 180 kHz, B = 4.4588 T, and m(Be + ) = 9.01 u, resulting in R = 735. For N = 91 and assuming T p = 0.5 mK, we have α ∼ 10 −242 s −1 ∼ 0 s −1 . Such a small prediction for α suggests extremely weak coupling under current operating conditions of the NIST Penning trap. Any coupling will probably be due to other mechanisms such as mode interactions with error fields in the trap potential, which is not accounted for in our current model. We now fix R = 5 and N = 91 and study the dependence of α on T p . We perform similar simulations and exponential fitting as in Fig. 8 to obtain the coupling rate for different T p . As shown in Fig. 10(a), as the planar temperature, T p , is varied from 0.05 to 0.5 mK we observe an approximate linear increase in α. Ions with higher temperature tend to have larger displacement, which increases the coupling rate. Finally, we vary the number of ions from 37 to 127, while fixing T p ≡ 0.5 mK, to determine the dependence of α on N . Figure 10 Figure 9 suggests an extremely weak cyclotron-E × B coupling for a high value of R, which is consistent with the slow multi-phonon coupling process qualitatively discussed in Sec. IV. The large frequency gap and relatively small frequency ranges of the two branches make it impossible for a low temperature (T p < 10 mK) state to reach thermal equilibration on experimentally relevant timescales. In the absence of the equilibration of the two branches on the time scale of 50 ms 1/α, we can study the effect of in-branch coupling. In this section, we set R = 100 while keeping other parameters (β, δ, ω r , and B) the same as in Sec. VI A. To study the coupling within either branch, we only initialize one single mode n e for each initial state with mode temperature T ne ≡ N ×1 mK in order that the mean thermal temperature in the planar direction is still T p = 0.5 mK. We first study the coupling among modes in the E × B branch. During each evolution process we measure the temperature of the single initialized mode. In Fig. 11(a) we present cases where individual modes with n e = 2, 4, 6, 8, 10 are initialized. Except for the case when the initialized mode is the center-of-mass mode (n e = 8, red line in Fig. 11(a)), the temperature of the initialized mode decreases within 1 ms of evolution. 
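As a numerical aside, the fitted scaling quoted above, α(R) = exp(−0.765 R + 9.608) s⁻¹, can be evaluated directly; working in log form avoids underflow at large R. The quoted value of roughly 10⁻²⁴² s⁻¹ at R = 735 presumably uses the unrounded fit coefficients, so the snippet below lands at the same order of magnitude rather than the identical number.

```python
import numpy as np

def log10_alpha(R):
    """Base-10 log of the fitted rate alpha(R) = exp(-0.765*R + 9.608) s^-1."""
    return (-0.765 * R + 9.608) / np.log(10.0)

for R in (5, 10, 735):
    print(R, log10_alpha(R))
# R = 5 and 10 give rates of a few hundred and a few s^-1, respectively,
# while R = 735 gives log10(alpha) near -240: zero for all practical purposes.
```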
The relative displacements of ions do not change under center-of-mass motion, making this mode immune to the Coulomb interaction. To investigate how the energy of the initialized mode is eventually distributed, we plot the energy distribution for the n e = 4 case at t = 50 ms in Fig. 11(b). We observe that the energy is approximately uniformly shared by the E × B modes, but, as expected, the cyclotron modes are well isolated and no energy trans- fer happens between the two branches. The results in Fig. 11 indicate a strong coupling between modes within the E × B branch. We now proceed to study the coupling between modes in the cyclotron branch. In contrast to the E × B branch, we find that the intrabranch coupling proceeds much more slowly. Figure 12 shows some characteristic examples. In Fig. 12, we plot the energy distribution at t = 50 ms for the cases where individual modes with n e = 94, 178, or 138 were initialized with an initial temperature of 91 mK. Some non-center-of-mass modes like the n e = 94 case shown in Fig. 12(a) are effectively decoupled from the other modes. On the other hand, Fig. 12(b) shows one of the simplest coupling mechanisms involving only three modes. In the n e = 178 case, the primary coupling involves two cyclotron modes (n = 178, ω 178 = 2π×4.191 MHz and n = 125, ω 125 = 2π × 4.076 MHz) and one E × B mode (n = 47, ω 47 = 2π ×0.115 MHz) that satisfy a resonance condition, i.e. ω 125 + ω 47 = ω 178 . Although an E × B mode is involved, this three-wave mixing pro-cess preserves the total phonon number in the cyclotron branch and hence cannot lead to thermal equilibration between the two branches [26]. We also present a multimode coupling in Fig. 12(c), in which several cyclotron and E × B modes are excited. To demonstrate that the coupling in the cyclotron branch is very slow, we measure and display in Fig. 13(a) the temperature of the single mode that was initialized in Figs. 12(a), (b), and (c). For n e = 178 and 138 the mode temperature slowly changes during a 50 ms evolution time. From such plots, we can measure the temperature range ∆T ne = max(T ne (t)) − min(T ne (t)) sampled by the initialized mode n e during an evolution of duration t. In Fig. 13(b) and (c), we choose two time cutoffs (t = 10 ms and t = 50 ms) and plot the distribution of ∆T ne when each cyclotron mode is separately initialized and allowed to evolve. For t = 10 ms, most excited cyclotron modes are still isolated with their energy not transferred to other modes. As t increases, more excited modes begin to exchange energy with other modes, but the intrabranch coupling is much slower compared to that within the E × B branch. In the case of the E × B modes, a single initialized mode is typically observed to lose energy in an exponential manner. The other modes in the E × B branch serve as an effective thermal reservoir leading to damping of the initialized mode on a timescale of a few tenths of a millisecond. However, in the case of the cyclotron branch, the time evolution of the energy in the initialized mode does not resemble exponential damping and instead shows signatures of revivals. In this case, the initialized cyclotron mode only couples to a few spectator modes on the timescale of the simulation, which is not sufficient to resemble an effective thermal reservoir of modes. The vast difference in the timescale of damping in the two branches may be attributed to the fact that the anharmonic terms in the Coulomb interaction scale with position fluctuations. 
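The three-wave resonance identified above, ω₁₂₅ + ω₄₇ = ω₁₇₈, suggests a simple systematic search for candidate couplings: scan mode-frequency pairs for sums that land near a third mode frequency. A sketch, assuming the mode frequencies from the eigenmode analysis are available as an array:

```python
import numpy as np
from itertools import combinations

def find_triads(omega, tol):
    """Return index triples (a, b, c) with |omega[a] + omega[b] - omega[c]| < tol."""
    triads = []
    for a, b in combinations(range(len(omega)), 2):
        s = omega[a] + omega[b]
        c = np.argmin(np.abs(omega - s))
        if abs(omega[c] - s) < tol and c not in (a, b):
            triads.append((a, b, c))
    return triads

# e.g. the case quoted in the text: 2*pi*(4.076 + 0.115) MHz ~ 2*pi*4.191 MHz
```

Whether such a triad actually exchanges energy still depends on the corresponding anharmonic coupling coefficient, which this frequency-only search does not evaluate.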
For large values of R, position fluctuations are almost exclusively associated with the E × B branch, and hence the in-branch equilibration is much faster here than in the cyclotron branch. VII. SUMMARY We have used an eigenmode analysis to study the thermal equilibration in the planar direction of a simulated two-dimensional ion crystal in a Penning trap with a rotating wall. We first solved for the eigenvectors and eigenvalues by linearizing the dynamics about the crystal equilibrium. We then validated the eigenmode analysis method by comparing with kinetic energies measured with a velocity filter technique and total energies calculated from a direct measurement of the ion positions and velocities. In the process, we discussed how a strong rotating wall helps reduce the amplitude of the rocking mode resulting in a more stable crystal structure. To study the thermalization process in the planar direction, we initialized the modes in the low frequency E × B branch with a specified temperature and performed firstprinciple simulations to measure the thermalization process. Finally, for large R we studied the thermalization process within each branch by initializing the energy of a single mode and simulating the resulting equilibration process. We investigated the dependence of the thermal equilibration rate between the cyclotron and E × B branches on several trap and ion crystal parameters. We found that this equilibration rate is exponentially suppressed as a function of the ratio R of the center-of-mass cyclotron to E × B mode frequencies. The parameter R provides a measure of the effective strength of the magnetic field on the dynamics of the in-plane motion [18]. We also investigated the dependence of the cyclotron-E × B equilibration rate on the planar temperature T p , and the number of ions, both of which exhibited an approximate linear dependence. In the simulations presented here, we fixed other aspects of the Penning trap, including the radial trapping strength β, rotating frequency ω r , rotating wall strength δ, and the magnetic field B. For large R (R = 100), where the coupling between cyclotron and E × B modes is very weak, we also investigated the internal coupling rate within the E × B branch and within the cyclotron branch. The E × B branch was observed to rapidly equilibrate on a time scale of a few tenths of a millisecond. The cyclotron branch equilibration time was more than two orders of magnitude longer and showed revivals instead of exponential damping. Understanding planar equilibration and coupling between planar modes provides a starting point for understanding Doppler and sub-Doppler cooling in the planar direction. Doppler cooling of the E × B modes is not well understood [17]. Current NIST Penning trap experiments [1,5,22] have R ∼ 735, indicating that the E × B branch is not cooled through a coupling to the cyclotron branch, which is efficiently cooled by Doppler laser cooling. The high frequency ratio R also results in unequal energy distributions [17], in which energies in E × B and cyclotron branches are predominantly potential and kinetic, respectively. An efficient cooling of the E × B branch requires a cooling technique that can remove potential energy fluctuations associated with the ion positions. Axialization, which provides such a technique and has been carefully studied for single and small numbers of trapped ions [28,29], may also work with many-ion crystals and will be the subject of future theoretical investigations. 
Finally, the long coherence time of the cyclotron modes motivates finding ways of employing these modes in quantum information processing. VIII. ACKNOWLEDGEMENT (δx i , δy i , δz i ). When the planar confinement is weak compared to that in the axial direction, the ion crystal is two dimensional [3]. The potential energy Ψ is expanded at x 0 using Taylor series to first order The out-of-plane (or axial) motion δz in such a twodimensional crystal is linearly decoupled from the planar motion (δx, δy). The axial motion is described as a collection of N simple harmonic normal modes [17]. The normal modes in the planar direction, however, are not simple harmonic due to the velocity-dependent form of the Lorentz force. In this work, we solve for the normal modes in the planar direction. We begin by writing down the linearized equations for x ⊥ = (δx 1 , ..., δx N , δy 1 , ..., δy N ) and v ⊥ =dx ⊥ /dt as Here, is a real symmetric matrix [3] and L is the antisymmetric Lorentz force matrix (2N × 2N ) given by We introduce the composite phase vector u ⊥ = (x ⊥ , v ⊥ ) T and rewrite Eq. (A2) as where D ⊥ is a composite matrix (4N × 4N ) We also combine the linearized potential energy and kinetic energy to obtain the total thermal fluctuation energy in the planar direction where is the energy matrix in the planar direction. Next, we solve Eq. (A5) as an eigenvalue problem. We apply ansatz u ⊥ = u ω e −iωt to transform Eq. (A5) into −iωu ω = D ⊥ u ω . We then obtain 4N eigenvalues ω n by solving the determinant equation det D ⊥ + iω n I 4N = 0. The elements of D ⊥ and ω n are all real [23] which results in pairs of complex conjugate eigenvectors, u n and u * n , associated with eigenvalues ω n and −ω n , respectively. Therefore, there are 2N positive and distinct eigenvalues ω n that represent the frequencies of 2N normal modes. The eigenvectors are E-orthogonal according to Ref. [17] and [23], which allows us to normalize the eigenvectors by means of u n → u n u * n Eu n . (A9) The orthonormal eigenvectors satisfy u * m Eu n = δ mn , where δ mn is the Kronecker delta. Appendix B: Initialization of ions To generate an initial state that is far from thermal equilibrium, we initialize one or several eigenmodes to create an inhomogeneous distribution of eigenmode energies. We perform the initialization in the lab frame, where a two-dimensional crystal in equilibrium is described by the coordinates X 0 = (X 1 , ..., X N , Y 1 , ..., Y N ), Z i = 0, and velocities V 0 = ω r × X 0 corresponding to the collective rotation of all the ions. We also utilize the corresponding orthonormal eigenvectors {u n , n = 1, ..., 2N } that are determined in the rotating frame. As an example of the procedure, suppose we initialize one mode. We multiply the associated eigenvector with a random phase e iψn . We then take the real part as U n = Re [exp(iψ r )u n ] (B1) and decompose into the position and velocity parts as Next, we give each ion an extra displacement λR ⊥ , where λ is a normalization factor producing a desired thermal temperature T n for mode n as The resulting positions and velocities are X = X 0 +λR ⊥ and V = ω r × X + λV ⊥ . The initialization introduces a rotation λω r × R and produces an initial thermal distribution in the chosen mode. In addition, if we initialize multiple modes, we multiply each selected eigenvector with a random phase e iψn and take the real part of the sum of the phase-multiplied eigenvectors We then decompose U ⊥ in order to give ions extra displacement and velocities similar to Eqs. 
(B2) and (B3) in the one mode case.
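The displayed matrices of Appendix A were lost in extraction. The sketch below shows one way to set up and solve the planar eigenvalue problem numerically; the block forms written for D⊥ and for the energy matrix E are reconstructed from the surrounding text (first-order equations in u⊥ = (x⊥, v⊥), a symmetric stiffness matrix K⊥, and an antisymmetric velocity-coupling matrix L), so they should be read as a sketch rather than as the paper's exact definitions.

```python
import numpy as np

def planar_modes(K_perp, L, mass):
    """Solve the linearized planar dynamics du/dt = D u for u = (x, v).

    K_perp : (2N, 2N) real symmetric stiffness matrix
    L      : (2N, 2N) antisymmetric velocity-coupling matrix
    Returns positive mode frequencies and E-orthonormalized eigenvectors.
    """
    two_n = K_perp.shape[0]
    eye = np.eye(two_n)
    zero = np.zeros((two_n, two_n))
    # assumed block form: dx/dt = v, m dv/dt = -K_perp x + L v
    D = np.block([[zero, eye],
                  [-K_perp / mass, L / mass]])
    E = 0.5 * np.block([[K_perp, zero],
                        [zero, mass * eye]])      # energy matrix (assumed form)

    lam, U = np.linalg.eig(D)                     # ansatz u e^{-i w t}: lam = -i w
    omega = 1j * lam                              # (nearly) real for a stable crystal
    keep = np.real(omega) > 0                     # one of each conjugate pair
    omega, U = np.real(omega[keep]), U[:, keep]

    # E-orthonormalization: u_n -> u_n / sqrt(u_n^* E u_n)
    for n in range(U.shape[1]):
        norm = np.sqrt(np.real(np.conj(U[:, n]) @ E @ U[:, n]))
        U[:, n] /= norm
    order = np.argsort(omega)
    return omega[order], U[:, order]
```

The returned eigenvectors can then be used for the mode-energy projection of Eq. (8) and for the mode initialization of Appendix B.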
2021-06-01T01:16:23.395Z
2021-05-29T00:00:00.000
{ "year": 2021, "sha1": "b0f42d97e9d04e41996400b83b54aba83ad73525", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2105.14330", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b0f42d97e9d04e41996400b83b54aba83ad73525", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
249207306
pes2o/s2orc
v3-fos-license
Abnormal Brain Oscillations in Developmental Disorders: Application of Resting State EEG and MEG in Autism Spectrum Disorder and Fragile X Syndrome Autism Spectrum Disorder (ASD) and Fragile X Syndrome (FXS) are neurodevelopmental disorders with similar clinical and behavior symptoms and partially overlapping and yet distinct neurobiological origins. It is therefore important to distinguish these disorders from each other as well as from typical development. Examining disruptions in functional connectivity often characteristic of neurodevelopment disorders may be one approach to doing so. This review focuses on EEG and MEG studies of resting state in ASD and FXS, a neuroimaging paradigm frequently used with difficult-to-test populations. It compares the brain regions and frequency bands that appear to be impacted, either in power or connectivity, in each disorder; as well as how these abnormalities may result in the observed symptoms. It argues that the findings in these studies are inconsistent and do not fit neatly into existing models of ASD and FXS, then highlights the gaps in the literature and recommends future avenues of inquiry. INTRODUCTION Autism Spectrum Disorder (ASD) and Fragile X Syndrome (FXS) are neurodevelopmental disorders with known comorbidity and share many features. The symptoms of both disorders include repetitive motions, sensory hypersensitivity, echolalia, attention deficits, anxiety, and impaired social interaction such as poor eye contact, perseveration in speech, and aggression (Belser and Sudhalter, 2001;Charman, 2003;Belmonte and Bourgeron, 2006;Poole et al., 2018;Chernenok et al., 2019). Both disorders occur more often in males than females, with a 3:1 bias in ASD (Loomes et al., 2017) and a 2:1 bias in FXS (Hunter et al., 2014). The estimated prevalence of ASD in FXS ranges from 5 to 60 percent (Belmonte and Bourgeron, 2006), and FXS is the leading monogenic cause of ASD, accounting for around 5 percent of cases (Simberlund and Veenstra-VanderWeele, 2018). However, there are crucial differences between the two disorders. FXS is caused by a mutation of the fragile X mental retardation type 1 (FMR1) gene that blocks its transcription. ASD, on the other hand, is an entirely behavioral diagnosis with polygenetic, epigenetic, and environmental roots (Persico and Bourgeron, 2006). Its two domains of diagnostic criteria in the DSM-5 are a) deficits in social communication and interaction and b) restricted, repetitive behaviors (Yaylaci and Miral, 2017). The two disorders show some subtle differences in their characteristic symptoms -for example, McDuffie et al. (2015) found that individuals with FXS displayed less impaired social smiling and more stereotyped motor behaviors than those with idiopathic ASD in a severity-matched analysis. Brain differences have also been observed in structural magnetic resonance imaging. Hoeft et al. (2011) found that, compared to typically developing controls (TD), the frontal and temporal areas involved in social cognition are larger in idiopathic ASD but smaller in FXS. ASD and FXS have been described as disorders of connectivity (Rippon et al., 2007;Haberl et al., 2015). Structural connectivity, or the physical connections of synapses and tracts, appears to be impaired with fewer connections, for example, between the amygdala and other brain regions in both disorders. However, the brain's functional connectivity, or the temporal correlations between the activity of spatially distinct regions, is also disrupted in ASD and FXS . 
This review examines functional connectivity through the lens of electroencephalography (EEG), quantitative electroencephalography (qEEG), and magnetoencephalography (MEG) recording techniques. Examining the large-scale organization of the brain in ASD and FXS can lend insight into the biomarkers and etiology of these disorders, for improved diagnosis and treatment. Compared to functional magnetic resonance imaging (fMRI), EEG is less expensive, more portable, and offers higher temporal resolution at the cost of some spatial resolution. EEG data can be interpreted using spectral band analysis, whereby the signal is decomposed into frequency bands: delta (1-3 Hz), theta (4-7 Hz), alpha (8-12 Hz), beta (13-30 Hz), and gamma (31-50 Hz). These bands are believed to be functionally distinct, though the upper and lower boundary frequencies that define each varies considerably in the literature (Newson and Thiagarajan, 2019). EEG "power" represents the amount of activity in a given frequency band of the signal (Nunez and Srinivasan, 2005). Functional connectivity is quantified using a variety of metrics including coherence, synchronization likelihood, phase lag index, and phase-amplitude coupling. EEG and MEG signals originate from the same neural sources and have high temporal resolution, but MEG is less affected by tissue properties. MEG is more sensitive to currents that are tangential to the surface of the scalp, whereas EEG is sensitive to both tangential and radial currents. MEG is more expensive and unportable (Singh, 2014), though there appear to be exciting new developments in the field of wearable optically pumped magnetometers (Boto et al., 2018). We focus on resting-state studies conducted when participants are not given any external stimuli nor instructed to engage in any particular task. Data collection is straightforward and at lower risk of being confounded by cognitive or motor impairments; the brain is spontaneously active even in this "resting" state, reflecting patterns similar to those generated under active task conditions. Building on the work of Devitt et al. (2015), we aim to compare resting-state EEG and MEG studies of ASD and FXS, to more thoroughly break down the frequency bands and brain areas implicated in each disorder and examine how these abnormalities in functional connectivity may contribute to the observed symptoms. RESTING STATE EEG AND MEG IN ASD AND FXS The tables in this section synthesize the findings of EEG and MEG resting state studies in ASD and FXS. The "higher" or "lower" results refer to the direction of the difference, in either power or functional connectivity, observed in ASD (Table 1) or FXS ( Table 2) compared to typically developing controls. The results are specific to brain regions (row) and frequency bands (column), with the exception of "global" results, where differences were noted throughout the entire brain. As shown in Tables 1, 2, power abnormalities, overconnectivity, and underconnectivity across frequency bands and brain regions are implicated in ASD and FXS. Yet these differences are far from consistent in the literature and do not appear to fall neatly into one model (e.g., the "U-shaped profile" of ASD to describe excessive power in low-frequency and high-frequency bands) (Wang et al., 2013). Only significant differences are reported in the tables, but many of the studies found no differences between the ASD/FXS and control groups for a given frequency band and brain area. 
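For orientation, the band-power quantities compared throughout the tables can be computed from a resting-state recording with standard spectral estimation. A minimal sketch follows, using the band edges defined above; the studies reviewed here differ in windowing, referencing, artifact rejection, and whether absolute or relative power is reported, so this is illustrative only.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 3), "theta": (4, 7), "alpha": (8, 12),
         "beta": (13, 30), "gamma": (31, 50)}   # Hz, as defined in the text

def band_powers(eeg, fs):
    """Absolute power per frequency band for a single-channel EEG trace."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))  # power spectral density
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs <= hi)
        powers[name] = np.trapz(psd[mask], freqs[mask])  # integrate PSD over band
    return powers
```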
It is particularly difficult to draw conclusions from the FXS data, as this review identified only three resting state studies in FXS. The following section will discuss some of the general patterns revealed in the literature, how these electrophysiological abnormalities may relate to ASD and/or FXS symptoms, possible reasons behind the (many) inconsistencies, and future avenues for research. Delta Delta power is elevated globally in ASD (and insufficiently studied in FXS). Enhanced delta power is commonly observed among low-functioning children with ASD in studies that involve doing a task (Wang et al., 2013), as well as children with learning disabilities (Fonseca et al., 2006) and those born preterm (Rommel et al., 2017). The delta band plays roles ranging from sustained attention to decision making to motivation, and it has been proposed that increased resting delta power is a general marker of brain trauma, pathology, or neurotransmitter disturbances (Başar-Eroglu et al., 1992;Kirmizi-Alsan et al., 2006;Knyazev, 2012;Rommel et al., 2017). In ASD, delta connectivity appears to be increased within the frontal lobe but decreased elsewhere. As slower oscillations are usually associated with longer range connections, this could reflect a failure of top-down synchronization and poorer inhibitory regulation. It is also reflective of hyperconnectivity seen within the frontal region more generally. Alpha Wang et al. (2017) found that alpha power was diminished in individuals with FXS; this decrease was correlated with greater social impairment and hypersensitivity to sensory stimuli observed clinically. These results are consistent with studies showing that the alpha band is involved in inhibitory control and correlated with lower arousal levels (Klimesch, 1996;Barry et al., 2004;Klimesch et al., 2007). Alpha oscillations may reflect a mechanism to suppress sensory information during selective attention (Foxe and Snyder, 2011). The data on alpha power in ASD are mixed. The U-shaped profile of power, whereby alpha power is reduced in individuals with ASD, is a popular model in the literature (Wang et al., 2013). However, several studies found an excess of alpha power instead. This, too, might be a compensatory mechanism similar to that proposed for beta, insofar as alpha power appears to increase for tasks demanding greater attentional control (Benedek et al., 2014;Mathewson et al., 2015). Furthermore, elevated alpha power is associated with greater autistic trait expression in the non-clinical general population. Moore and Franz (2017) found that increased relative alpha power in typically developing adults is associated with increased aloofness measured by the Broad Autism Phenotype Questionnaire (Moore and Franz, 2017);similarly, Carter Leno et al. (2018) found that among typically developing adults with subthreshold ASD trait expression, elevated resting-state alpha power was significantly correlated with behavioral rigidity in ASD. The suppression of alpha activity is an indicator of mirror neuron system activity, which is required for imitating behavior (Bernier et al., 2007). The elevated alpha power seen in ASD could well be linked to mirror neuron system dysfunction in ASD and the resulting social impairments. Beta Beta waves are associated with alertness, motor behavior, and the direction of attention (Neuper and Pfurtscheller, 2001;Güntekin et al., 2013). 
ADHD is characterized by reduced beta power (Newson and Thiagarajan, 2019), yet paradoxically, the attention deficits observed in ASD are coupled with an elevation in beta power. This may be a compensatory mechanism for the social deficits also seen in ASD -Palacios- García et al. (2021) found that psychosocial stress can evoke higher beta power, perhaps as a top-down modulator to redirect attention to the stressful task at hand. Beta connectivity, on the other hand, is generally lower in ASD as well as FXS. van der Molen et al. (2014) suggest this is an indicator of immature cortical networks, since over the course of typical development, low-frequency synchronization decreases and high-frequency synchronization increases. Ye et al. (2014) found that individuals with ASD showed reductions in beta synchronization during a face processing task, suggesting a role for this frequency in social-emotional processes in ASD. Wang et al. (2017) found that gamma power is elevated in FXS. Increased gamma power is correlated with social communication abnormalities, auditory hypersensitivity, and reductions in neurocognitive abilities in FXS (Ethridge et al., , 2019Wang et al., 2017). Orekhova et al. (2007) found that gamma power is positively correlated with degree of Frontiers in Neuroimaging | www.frontiersin.org developmental delay in boys with ASD. However, Maxwell et al. (2015) found decreased gamma power among individuals with ASD compared to controls, and lower power was correlated with increased autism severity as measured by the Social Responsiveness Scale. Wilkinson et al. (2019) found that gamma power was lower in high-risk toddlers without ASD than in lowrisk toddlers. Yet, among the high-risk group, reduced gamma was associated with improved language ability regardless of later ASD diagnosis. It is thus unclear whether lower gamma power is directly associated with cognitive deficits or is a compensatory mechanism for other processes that raise gamma power. The answer will likely vary between groups (ASD, high-risk, lowrisk) and depend on sex as well as stage of development, so further research is needed to elucidate the role of the gamma band in ASD. Gamma Gamma connectivity, unlike power, is generally increased across all brain regions in ASD. This broad difference is consistent with gamma's posited function as an elemental part of cortical computation, serving to segment and select between inputs (Fries, 2009). More specifically, Ye et al. (2014) found that the inferior frontal gyrus, orbitofrontal areas, amygdalae, and superior temporal gyrus, which are implicated in social cognition, were hyperconnected in the gamma band. Atypical connectivity could disrupt the interactions between these regions and other parts of the brain and lead to the socioemotional deficits seen in ASD. The frontal and temporal lobes appear to be the most heavily affected brain regions in ASD, with a general pattern of underconnectivity between these lobes and all other areas. Courchesne and Pierce (2005) hypothesize that these brain regions, responsible for higher-order cognitive and social functions, are later to mature and form synapses with a far greater number of neurons compared to the posterior cortices. Thus, their disproportionate disruption in ASD is consistent with the intact early development, followed by progressively greater abnormalities in the next few years, that is observed in the disorder. 
Specifically in the gamma band, though, individuals with ASD exhibit higher connectivity between the temporal lobe and other brain areas, which may account for the atypical language skills and memory difficulty seen in ASD. Interestingly, while frontal lobe connectivity is similarly affected in FXS and ASD, there appears to be little disruption in the temporal lobe in FXS . Interhemispheric Connectivity Previous reviews in this field have largely neglected to discuss disrupted connectivity between hemispheres observed in ASD or FXS, but such disruptions have been frequently reported. Cantor et al. (1986) proposed that the higher interhemispheric connectivity they measured in individuals with ASD was an indicator of lack of cerebral differentiation, which has been linked to lower cognitive capabilities. However, most other ASD studies found a decrease in interhemispheric connectivity, consistent with a decrease in the volume of the corpus callosum in some autism subtypes (Alexander et al., 2007). Interhemispheric underconnectivity may explain why lateralized speech and social communication functions are disrupted in ASD. It could also explain intellectual deficits, since it may be more efficient for two hemispheres to interact while processing information than for either one to do it alone (Belger and Banich, 1992). None of the FXS studies found differences in interhemispheric connectivity. Further research on abnormal interhemispheric connectivity in these disorders -the direction of the change and the potential causes -is warranted. LIMITATIONS AND FUTURE DIRECTIONS The often-contradictory findings reported may arise from the variety of experimental methods used. Studies differed in the definition of each frequency band, the age of the subjects and the metrics used to quantify functional connectivity. Though they all used resting-state paradigms, some involved eyes-open conditions, while others were done with eyes closed. The EEG and MEG recordings could vary in their accuracy and precision based on the number of sensors used, and their distance from the participant's head (the MEG helmet is one size fits all), subject movement during recording, and how the data were analyzed to map the source space to the signal space and filter artifacts. Not all of the studies on ASD explicitly excluded participants with comorbid FXS, nor vice versa, Furthermore, terms such as "local, " "long-distance, " and "short-range" connectivity appear frequently in the literature but are defined only ambiguously, if at all. The literature reflects the spatial resolution limitations of EEG and MEG; few of the power or connectivity differences are reported in greater specificity than the general cortical lobe that is implicated. Future research should capture more spatially specific sources, as potential nodes in a connectivity matrix. In the context of this paper thus far, "resting state" has been used as an experimental paradigm. However, the last decade has witnessed increasing attention being paid to resting state networks (RSNs), such as the default mode network and the dorsal attention network, as a feature of the brain. These networks are comprised of spatially distinct regions that are functionally connected when the brain is at rest. While there is not a one-to-one correspondence between RSNs and frequency bands, particular bands do appear to be implicated in each network; for example, gamma is elevated in the DMN at rest (Nair et al., 2018). 
Multimodal imaging combining EEG, MEG and fMRI could be used to integrate the spatial and temporal markers of these disorders. The inconsistencies may also reflect the inherent heterogeneity of these disorders, especially ASD. Future research may need to break down the ASD label into behavioral and/or genetic subtypes as well as take developmental changes into account. There remains a general dearth of research on neural oscillations in FXS -which, given its more straightforward nature as a monogenic disorder and the considerable overlap between the two disorders, may be an overlooked pathway to understanding many features of ASD. In conclusion, the EEG and MEG studies reveal interesting, if inconsistent, patterns in power and connectivity disruptions that hint at mechanisms underlying the symptoms in ASD and FXS. A more standardized analysis approach could help hone RS measures for use in targeted interventions. AUTHOR CONTRIBUTIONS MM conceived and designed the outline of this manuscript. SL prepared the manuscript assisted by MM. Both authors contributed equally to critically reviewing this work and approved the final version of the paper.
2022-06-01T13:29:15.853Z
2022-05-27T00:00:00.000
{ "year": 2022, "sha1": "d4b913e4f0a03b132fb7d40c5a7c747f53abe74f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "d4b913e4f0a03b132fb7d40c5a7c747f53abe74f", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [] }
202854363
pes2o/s2orc
v3-fos-license
Structure of the native supercoiled flagellar hook as a universal joint The Bacterial flagellar hook is a short supercoiled tubular structure made from a helical assembly of the hook protein FlgE. The hook acts as a universal joint that connects the flagellar basal body and filament, and smoothly transmits torque generated by the rotary motor to the helical filament propeller. In peritrichously flagellated bacteria, the hook allows the filaments to form a bundle behind the cell for swimming, and for the bundle to fall apart for tumbling. Here we report a native supercoiled hook structure at 3.6 Å resolution by cryoEM single particle image analysis of the polyhook. The atomic model built into the three-dimensional (3D) density map reveals the changes in subunit conformation and intersubunit interactions that occur upon compression and extension of the 11 protofilaments during their smoke ring-like rotation. These observations reveal how the hook functions as a dynamic molecular universal joint with high bending flexibility and twisting rigidity. The paper needs to be carefully edited by someone who speaks English as a native language, as it is filled with typos and language that is uninterpretable. For example (lines 79-81): "Since the passes of the protofilaments are nearly parallel to the tubular axis of the hook, the protofilament length varies from one to the other." I assume that what is meant is "paths". Or line 85: "we built on the map" should be "we built into the map". Or line 112: "are achieved also by well-designed intersubunit interactions." Who designed them? What do they mean here? On line 179: "This difference in the mechanical property" should be "This difference in the mechanical properties". Or lines 199-200: "is now within the reach" which should be "is now within reach". This sentence (lines 186-190) is not atypical: "The corresponding gap in the Salmonella hook is much smaller in the compressed protofilaments (Extended Data Fig. 7), suggesting that the insertion of the FlgG specific 18-residues forms the L-stretch just as that of Campylobacter hook to fill the gap to prevent protofilament compression in the hook made of the insertion mutant of FlgE." Reviewer #2 (Remarks to the Author): The manuscript from Takayuki et al. presents the first cryo-EM structure of the native supercoiled flagellar hook at 3.1 Angstrom using a mutant fliK of Salmonella, which can be as long as 1um. This hook was proposed by the same group as the universal joint. With this resolution and the supercoiled conformation, the author can model all the 11 pf and also additional residues that is not modelled in their previous crystallography work. With this model, the authors can discuss about the difference between the shortest and longest units and see the interaction with the adjacent units for the universal join mechanism. In my opinion, the results presented in the manuscript can attract broad interest and suitable for publication in Nature Communications. However, the manuscript, in particular, the figures needs a good modification in order to make the paper easy to read for the audience of Nat Comms. The manuscript is probably benefited a lot from a good cartoon to demonstrate what the readers are looking at, especially if they are not from the bacterial flagellar field. Comments and suggestions The 3 domains of FlgE were mentioned and discussed throughout the paper but none of this info is labelled in any main figures. 
As a colour-blind person, it is extremely difficult more me to look at Figure 2a to distinguish any colour there. In the text, it says "The superpositions of corresponding domains show almost no changes in their conformations as mentioned above (Extended Data Fig. 4). The extended data figure 4 is an RMSD table, which is informative but is terrible to convey the message. I would suggest having a superposition of each separate domain from PF 1,3,5 or 7,9,11 in Figure 2 to illustrate this point. Figure 3a is very hard to understand if you are not coming from the bacterial flagellar field. A good schematic cartoon would help the reader to understand what is presented here in the neighbouring unit. For general readers, sudden presentation of -5 start, 11-start and 6-start will confuse them since the information they got from the figure before is the 11-protofilament. In the text, there is a discussion about which patch of residues are interacting with which (Constant interactions are seen between residues Leu 101 -Glu 103 of subunit 0 with Ala 320 -Asn 321 and Gln 337 -Ser 339 of subunit 5 (Fig. 3d), but the interactions between Ser 87 -Asn 88 of subunit 6 and Gly 348 -Gly 350 of subunit 0 are present only in the compressed form). However, in Figure 3d, none of these residues are indicated in the figure. There should be a figure to compare with flgE with flgE in straight hook conformation from Campylobacter jejuni. Only a part of this is presented in Extended Data Figure 7 but a full flgE should be compared. For image Processing What is the resolution attained during the refinement in cryoSPARC before & after CTF refinement in Relion 3.0? Also, what is the 3 classes during classification look like? This info should be added in supplementary materials. Minor Comments: Why is the sampled incubated overnight at room temperature for prepare the native supercoiled hook? Responses to the Reviewers' comments To Reviewer #1: In general, this is a strong paper that clearly deserves publication. The structure shows how a continuous series of conformations (11-states) explains the supercoiling of the flagellar hook, while current models for the flagellar filament are based upon 2-state models. The paper is therefore of much more general interest than simply to those studying bacterial motility, as it also demonstrates the new powers of cryo-EM to reconstruct at near-atomic resolution structures where symmetry does not need to be applied. I had a few technical concerns that need to be addressed by the authors. Thank you very much for your recognition of the impact and strength of our study. We appreciate your technical concerns and made revisions in response to them as described below. It is stated in Extended Data Figure 1 that the resolution is 3.07 Å and in the paper this is rounded to 3.1 Å. Given that some people in the cryo-EM field describe resolution to a hundredth of an Å, it suggests that this precision is meaningful and significant. But if one actually looks at the figure shown, the first crossing of the FSC curve with the 0.143 threshold occurs at slightly worse than 1/(3.2 Å), as indicated by the arrow in my attached figure. More troubling, the curve never goes to 0.0, and plateaus to a value near 0.1 out to the Nyquist frequency. If this offset were to be subtracted from the curve, the stated resolution would be closer to 4.0 Å. 
The problem of the FSC curve not reaching 0.0 before the Nyquist frequency was due to redundant information by the 90% overlap of image segment boxes used for particle extraction. We therefore reprocessed images by editing the star file with RELION to avoid the use of redundant information in each half of the image data set for 3D image reconstruction. Now the FSC curve goes to 0.0, and the resolution is 3.6 Å at the FCS threshold of 0.143, as shown in Extended Data Fig. 1. Extended Data Table 1 has some problems. It is stated that the voltage was 300 kV. If this was the case, they need to explain in the Methods how they were able to get a 200 keV microscope to operate at 300 keV. More likely it is a mistake. "Does rate" should be "Dose rate". And no units are given for "Total exposure time". We apologize for the wrong and missing information in Extended Data Table 1. We revised it in a correct and complete form. The paper needs to be carefully edited by someone who speaks English as a native language, as it is filled with typos and language that is uninterpretable. For example (lines 79-81): "Since the passes of the protofilaments are nearly parallel to the tubular axis of the hook, the protofilament length varies from one to the other." I assume that what is meant is "paths". Or line 85: "we built on the map" should be "we built into the map". Or line 112: "are achieved also by well-designed intersubunit interactions." Who designed them? What do they mean here? On line 179: "This difference in the mechanical property" should be "This difference in the mechanical properties". Or lines 199-200: "is now within the reach" which should be "is now within reach". This sentence (lines 186-190) is not atypical: "The corresponding gap in the Salmonella hook is much smaller in the compressed protofilaments (Extended Data Fig. 7), suggesting that the insertion of the FlgG specific 18-residues forms the L-stretch just as that of Campylobacter hook to fill the gap to prevent protofilament compression in the hook made of the insertion mutant of FlgE." Thank you for the list of our typographic mistakes. We made revisions according to them and carefully went through the entire manuscript to make correction in English. To Reviewer #2: The manuscript from Takayuki et al. presents the first cryo-EM structure of the native supercoiled flagellar hook at 3.1 Angstrom using a mutant fliK of Salmonella, which can be as long as 1um. This hook was proposed by the same group as the universal joint. With this resolution and the supercoiled conformation, the author can model all the 11 pf and also additional residues that is not modelled in their previous crystallography work. With this model, the authors can discuss about the difference between the shortest and longest units and see the interaction with the adjacent units for the universal join mechanism. In my opinion, the results presented in the manuscript can attract broad interest and suitable for publication in Nature Communications. However, the manuscript, in particular, the figures needs a good modification in order to make the paper easy to read for the audience of Nat Comms. The manuscript is probably benefited a lot from a good cartoon to demonstrate what the readers are looking at, especially if they are not from the bacterial flagellar field. Thank you very much for your favorable comments on our study. We appreciate your comments and suggestions and made revisions in response to them as described below. 
Comments and suggestions The 3 domains of FlgE were mentioned and discussed throughout the paper but none of this info is labelled in any main figures. Domains D0, D1 and D2 are now labeled in Fig. 1 and 3. As a colour-blind person, it is extremely difficult more me to look at Figure 2a to distinguish any colour there. We apologize for the colors in Fig. 2 that are difficult to recognize. We change them to blue to orange. We hope they are all right. In the text, it says "The superpositions of corresponding domains show almost no changes in their conformations as mentioned above (Extended Data Fig. 4). The extended data figure 4 is an RMSD table, which is informative but is terrible to convey the message. I would suggest having a superposition of each separate domain from PF 1,3,5 or 7,9,11 in Figure 2 to illustrate this point. Superpositions of FlgE with each separate domain for PF 1,3,5 and 7,9,11 are presented as panels l and m of Extended Data Fig. 4. Figure 3a is very hard to understand if you are not coming from the bacterial flagellar field. A good schematic cartoon would help the reader to understand what is presented here in the neighbouring unit. For general readers, sudden presentation of -5 start, 11-start and 6-start will confuse them since the information they got from the figure before is the 11-protofilament. We added a figure as Fig. 3a showing a short hook segment with the subunit array on the surface by rainbow colored D2 domains, in which four neighboring subunits are labeled with numbers and the three major helical lines of -5, 11-start and 6-start are indicated as a guide for readers. In the text, there is a discussion about which patch of residues are interacting with which (Constant interactions are seen between residues Leu 101 -Glu 103 of subunit 0 with Ala 320 -Asn 321 and Gln 337 -Ser 339 of subunit 5 (Fig. 3d), but the interactions between Ser 87 -Asn 88 of subunit 6 and Gly 348 -Gly 350 of subunit 0 are present only in the compressed form). However, in Figure 3d, none of these residues are indicated in the figure. We put residue labels as many as possible in new Fig. 4, which was part of original Fig. 3. There should be a figure to compare with flgE with flgE in straight hook conformation from Campylobacter jejuni. Only a part of this is presented in Extended Data Figure 7 but a full flgE should be compared. We moved Extended Data Figure 7 to the main text as Figure 6 and added a figure panel comparing Salmonella FlgE with Campylobacter FlgE as Figure 6a. For image Processing What is the resolution attained during the refinement in cryoSPARC before & after CTF refinement in Relion 3.0? The resolution attained during the refinement in cryoSPARC before and after CTF refinement in RELION 3.0 was 3.3 Å and 3.1 Å, respectively. However, as mentioned in our response to Reviewer #1, in order to solve a problem of the FSC curve not reaching zero, we reprocessed image data by avoiding the use of redundant information caused by the 90% overlap of image segment boxes used for particle extraction, and the resolution is now 3.6 Å after CTF refinement in RELION 3.0. We described this in Methods and updated the FSC curve in Extended Data Fig. 1. Also, what is the 3 classes during classification look like? This info should be added in supplementary materials. We used 5 classes in redong the image processing and used 2 classes. They are shown in Extended Data Fig. 7. 
Minor Comments: Why is the sampled incubated overnight at room temperature for prepare the native supercoiled hook? The hook is supercoiled at room temperature but becomes straight at temperatures below 4 °C; cooling is how we obtain straight hooks for cryoEM helical image analysis, so the sample is incubated at room temperature to preserve the native supercoiled form.
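As an aside on the resolution estimate discussed in the response to Reviewer #1, the 0.143-threshold readout can be made concrete with a small sketch; it assumes the ring-averaged FSC between the two half-maps has already been computed, and the input names are illustrative.

```python
import numpy as np

def resolution_at_threshold(spatial_freq, fsc, threshold=0.143):
    """Resolution (Angstrom) at the first crossing of the FSC curve below
    `threshold`. spatial_freq is in 1/Angstrom, ascending; fsc[0] ~ 1."""
    below = np.where(fsc < threshold)[0]
    if below.size == 0:
        return None                     # curve never drops below threshold
    i = max(below[0], 1)
    # linear interpolation between the last point above and first point below
    f1, f2 = spatial_freq[i - 1], spatial_freq[i]
    c1, c2 = fsc[i - 1], fsc[i]
    f_cross = f1 + (threshold - c1) * (f2 - f1) / (c2 - c1)
    return 1.0 / f_cross
```

If overlapping segments are shared between the two half-sets, the FSC acquires a spurious positive floor and the crossing shifts to artificially high resolution, which is exactly the effect removed by the reprocessing described above.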
2019-09-17T01:03:56.195Z
2019-08-27T00:00:00.000
{ "year": 2019, "sha1": "49f8617309edce118b2edff2ce258c799cd3d75a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1038/s41467-019-13252-9", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e278ecece4b1127dc8ee4fa98e738b77f4609a57", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Biology", "Physics" ] }
251075085
pes2o/s2orc
v3-fos-license
AuAg Nanoparticles Grafted on TiO2@N-Doped Porous Carbon: Improved Depletion of Ciprofloxacin under Visible Light through Plasmonic Photocatalysis TiO2 nanoparticles (NPs) were modified to obtain photocatalysts with different composition sophistication and displaying improved visible light activity. All of them were evaluated in the photodegradation of ciprofloxacin. The band gap of TiO2 NPs was successfully tailored by the formation of an N-doped porous carbon (NPC)-TiO2 nanohybrid through the pyrolysis of melamine at 600 °C, leading to a slight red-shift of the absorption band edge for nanohybrid NPC-TiO2 1. In addition, the in-situ formation and grafting of plasmonic AuAg NPs at the surface of NPC sheets and in close contact with TiO2 NPs leads to AuAg-NPC-TiO2 nanohybrids 2–3. These nanohybrids showed superior photocatalytic performance for the degradation of ciprofloxacin under visible light irradiation, compared to pristine P25 TiO2 NPs or to AuAg-PVP-TiO2 nanohybrid 4 in which polyvinylpyrrolidone stabilized AuAg NPs were directly grafted to TiO2 NPs. The materials were characterized by transmission electron microscope (TEM), High Angle Annular Dark Field—Scanning Transmission Electron Microscopy—Energy Dispersive X-ray Spectroscopy HAADF-STEM-EDS, X-ray photoelectron spectroscopy and solid UV-vis spectroscopy. Moreover, the active species involved in the photodegradation of ciprofloxacin using AuAg-NCS-TiO2 nanohybrids were evaluated by trapping experiments to propose a mechanism for the degradation. Introduction Pharmaceuticals have been a milestone in medical care and eradication of diseases in recent decades but, at the same time, water quality has been worsened by continuous pharmaceuticals release into the environment. Consequently, pharmaceuticals have become emerging water pollutants. Among them, antibiotics stand out for their potential risk for many living beings to develop antibiotic resistance. Indeed, they pose a risk to the environment, ecology and the health of human beings [1], especially considering that wastewater treatment plants (WWTPs) were not a priori conceived to handle this kind of pollutants [2]. In this regard, ciprofloxacin (CIP), a widely used antibiotic, displays inert chemical bonds, which makes CIP very difficult to degrade by microorganisms and remains persistent in wastewater [3]. Advanced oxidation treatments have drawn great attention to tackle these issues. In particular, photocatalysis has gained increasing attention due to its non-toxicity, low-cost and non-polluting benefits to transform the bioactive molecule of pharmaceuticals into non-toxic by-products [4]. Semiconductor TiO 2 NPs are one of the most widely used photocatalysts. However, its limited visible light activity, characteristic of large band gap semiconductors, and the high recombination rate of photoexcited electron-hole pairs are the main drawbacks that limit its use for practical applications. In this sense, semiconductor doping or heterojunction formation between semiconductors with different band gaps [5] constitute widely studied approaches that improve the photocatalytic performance of TiO 2 NPs. On the other hand, a very interesting class of emerging materials is nitrogen-doped porous carbon (NPC) materials [6]. These materials display interesting characteristics such as porous structures, nitrogen heteroatom active sites and the ability to act as catalyst or catalyst support in hydrogenation and oxidation reactions. 
In addition, the formation of N-doped graphitic conjugated π-structures provides improved photoinduced charge separation when these materials are combined with semiconductors [7]. NPCs can be synthesized, for instance, by the direct carbonization of nitrogen-rich precursors such as urea, melamine or aniline. In addition, the use of plasmonic NPs displaying visible light harvesting properties is an interesting alternative to take advantage of the solar spectrum (ca. 43% visible light) and also to prevent fast charge recombination when hybrid nanomaterials combining plasmonic nanoparticles and wide band gap semiconductors, such as TiO2, are designed. The visible light photoexcited electrons (hot electrons) can migrate from the metal to the conduction band (CB) of the semiconductor, which restrains the recombination of charges [8,9]. Localized surface plasmon resonance (SPR) of noble metal nanostructures could enhance solar light harvesting by additional mechanisms such as optical near-field enhancement and photothermal heating [8]. We wondered whether the combination of two approaches, surface sensitization of TiO2 NPs in contact with NPC sheets and improved visible light absorption via plasmonic NPs, would be an efficient strategy to overcome the lack of photocatalytic activity of TiO2 NPs in the visible range. In this work, we propose a novel approach to tailor the visible light photocatalytic activity of a new type of hybrid nanostructure, by the purposeful combination of TiO2 semiconductor nanoparticles, nitrogen-doped porous carbon shells and plasmonic AuAg NPs. First, we obtained an NPC-TiO2 nanohybrid, which, in a second step, was decorated with in situ formed bimetallic AuAg plasmonic NPs. The bimetallic AuAg NPs were obtained by the mild decomposition of the organometallic complex [Au2Ag2(C6F5)4(OEt2)2]n, which has previously been used as a versatile precursor for the formation of plasmonic NPs in the absence of stabilizing ligands or polymers [10-14]. The photocatalytic activity of these nanohybrids was proved in the depletion of the antibiotic ciprofloxacin. The photocatalytic activity of a similar system, AuAg-PVP-TiO2 (4) (PVP = polyvinylpyrrolidone), in which the NPC component is not present, was checked and compared with the ones bearing NPC sheets. In addition, we investigated the role of the active species involved in the photocatalytic mechanism through scavenging experiments. All the synthesized nanomaterials were characterized by transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS) and solid UV-vis absorption spectroscopy.

Synthesis of Photocatalysts

NPC-TiO2 (1). The NPC-TiO2 (1) nanohybrid was prepared by mixing a 70:30 weight ratio of P25 and melamine. Initially, 2 g of TiO2 (P25) and 0.86 g of melamine were suspended in 30 mL of distilled water at room temperature. The mixture was sonicated for 10 min, and it was evaporated to dryness under reduced pressure. The powder obtained was dried in an oven at 100 °C overnight. Secondly, thermal polycondensation and carbonization were carried out in an alumina covered crucible, to prevent sublimation, at 600 °C for 3 h to obtain the nanohybrid. After that, the crucible was cooled to room temperature and the composite was crushed to obtain a homogeneous powder.

AuAg-NPC-TiO2 (2). In total, 0.2116 g of 1 were sonicated for 30 min in 5 mL of ethylene glycol.
Then, 1.5 mg (1.05 × 10 −3 mmoL, 1% wt of metal content) of the bimetallic precursor [Au 2 Ag 2 (C 6 F 5 ) 4 (OEt 2 ) 2 ] n were added and the resulting mixture was also introduced in an ultrasound bath for 3 min. The suspension was stirred and heated under reflux at 130 • C, in darkness, for 15 min. After that, the obtained nanohybrid 2 was washed and centrifuged twice with distilled water, collected in ethanol and evaporated to dryness under reduced pressure. AuAg-NPC-TiO 2 (3) Exactly 0.1047 g of 1 were sonicated for 30 min in 5 mL of ethylene glycol. Then, 1.5 mg (1.05 × 10 −3 mmoL, 2% wt of metal content) of the bimetallic precursor [Au 2 Ag 2 (C 6 F 5 ) 4 (OEt 2 ) 2 ] n were added and the resulting mixture was also introduced in an ultrasound bath for 3 min. The suspension was stirred and heated under reflux at 130 • C, in darkness, for 15 min. After that, the obtained nanohybrid 3 was washed and centrifuged twice with distilled water, collected in ethanol, and evaporated to dryness under reduced pressure. AuAg-PVP-TiO 2 (4) In order to prepare nanohybrid 4, PVP stabilized AuAg were first synthesized. For this, 100 mg of [Au 2 Ag 2 (C 6 F 5 ) 4 (OEt 2 ) 2 ] n were dissolved in an excess of PVP (500 mg) with 80 mL of THF under argon atmosphere at 66 • C with reflux for 4 h, giving rise to a dark brown solid. The solvent was evaporated under vacuum and the isolated bimetallic nanoparticles were dissolved in 20 mL of distilled water and were placed in an ultrasound bath for 5 min. Finally, the compound was obtained after evaporation. The material was stored at 4 • C [16]. Then, 40 mg of the previously prepared AuAg-PVP-NPs were added to 10 mL of distilled water and the mixture was sonicated for 5 min. Then, 951.40 mg of TiO 2 (Degussa P25) were added. The reaction mixture was stirred overnight at room temperature. The final product was washed and centrifuged three times with distilled water to remove the remaining PVP, collected in ethanol and evaporated to dryness under reduced pressure. Characterization of Photocatalysts Absorption UV−vis spectra of pressed powder samples diluted with silica were recorded on a Shimadzu (UV-3600 spectrophotometer with a Harrick Praying Mantis accessory, Kyoto, Japan). The absorption spectra were calculated from diffuse reflectance spectra and applying the Kubelka−Munk function. Transmission Electron Microscopy (TEM) samples were drop-casted from the ethanol dispersions (2-3 drops) to carboncoated Cu grids. The corresponding TEM micrographs were acquired with a JEOL JEM 2100 microscope (Tokyo, Japan). In addition, High Angle Annular Dark Field-Scanning Transmission Electron Microscopy (HAADF-STEM) images were registered with a Tecnai F30 (ThermoFisher Scientific, Waltham, MA, USA) at a working voltage of 300 kV, coupled with a HAADF detector (Fischione, Export, PA, USA). In this operation mode, the intensity of the signal is proportional to the square of the atomic number (Z 2 ), hence heavier elements such as gold or silver show a much brighter contrast than lighter elements, such as carbon or silicon. This is remarkably useful to localize metals in organic or light metal oxide matrixes. Furthermore, to analyze the chemical composition of the materials, X-ray Energy Dispersive Spectra (EDS) were registered in an EDAX detector or with an Ultim Max detector (Oxford, UK). XPS experiments were carried out with a Kratos AXIS Supra spectrometer (Manchester, UK), using a monochromatized Al Kα source (1486.6 eV) operating at 12 kV and 10 mA. 
Survey spectra were registered at analyzer pass energy of 160 eV, while narrow scans were acquired at constant pass energy of 20 eV and steps of 0.1 eV. The photoelectrons were detected at a take-off angle of F = 0 • with respect to the surface normal. Basal pressure in the analysis chamber was less than 5 × 10 −9 Torr. The spectra were obtained at room temperature. The binding energy (BE) scale was internally referenced to the C 1s peak (BE for C-C = 284.9 eV). The data treatment was carried out with the Casa XPS software using the specific relative sensitivity factor library that the software has for Kratos equipment. To calculate the atomic concentrations a Shirley-type background subtraction was used. The 4f and the 3d regions were used for Au and Ag, respectively. The RSF sensitivity factors used were Ti 2p = 2.001; C 1s = 0.278; O 1s = 0.780; N 1s = 0.477; Ag 3d = 5.987 and Au 4f = 6.25. Photodegradation Procedure In a typical procedure, 15.5 mg of the catalyst were suspended in 70 mL of a 4 µg mL −1 CIP aqueous solution in a 70-mL Schlenk glass reactor. Before the irradiation, the solution was treated in an ultrasound bath for 2 min approximately, and it was stirred for 60 min under dark conditions to reach adsorption/desorption equilibrium. Then, solutions were irradiated with visible LED light in a cooled lab-made setup. The assembly consists of four 10 W white light LED lamps (LED Engin, San Jose, CA, USA) placed equidistantly inside a cylindrical compartment. A constant temperature (25 • C) of the solution was maintained alongside the photodegradations thanks to a water recirculating coolant coil placed on the outside of the cylindrical compartment. During the degradation, the mixtures were stirred at 1200 rpm and 1.5 mL aliquots were collected at different times to monitor the reaction. All samples were filtered to remove the solid suspension of the catalyst and stored at 4 • C until analysis. In order to evaluate active species responsible for the photodegradation, the CIP degradations were carried out under three different conditions: 10 −3 M solution of tertbutanol and triethanolamine, to quench hydroxyl radicals (·OH) and photogenerated holes (h + ), respectively and under N 2 atmosphere to quench superoxide radicals (·O 2 − ) [17][18][19][20]. All the experiments were carried out at in ultrapure water at natural pH. Photodegradation Analysis The concentration of ciprofloxacin during the photodegradations was monitored by a high-performance liquid chromatography system. The equipment was an Agilent modular 1100/1200 liquid chromatography system (Agilent Technologies, Santa Clara, CA, USA) equipped with a G1379A degasser, a G1311A HPLC quaternary pump, a G1329A autosampler and a G1315D diode array detector. A Phenomenex Luna ® LC C18 100 Å (5 µm particle size, 150 mm × 4.6 mm i.d.) column was used for the separation of compounds. The mobile phase was a 75:25 mixture of 0.1% formic acid in methanol and 0.1% formic acid aqueous solution. Injection volume was 20 µL, the flow rate was 1.0 mL min −1 and the separation was performed at room temperature. Detection wavelength was 280 nm. Synthesis and Characterisation of Photocatalysts The first aim of this study was the design of a facile and straightforward approach for the preparation of AuAg-NPC-TiO 2 nanohybrids (2-3) with enhanced photocatalytic properties. The synthesis was a two-step process (see Scheme 1). 
First, nitrogen-doped porous carbon was formed in the presence of TiO2 NPs by the carbonization of melamine (initial 30:70 melamine:TiO2 weight ratio) at 600 °C, leading to nanohybrid NPC-TiO2 1. Then, the organometallic compound [Au2Ag2(C6F5)4(OEt2)2]n (1% wt. (2) or 2% wt. (3) of added Au-Ag content in the precursor) was decomposed over 1 to graft the plasmonic NPs, yielding nanohybrids 2 and 3. A closer inspection of the TiO2 NPs through HRTEM clearly shows the formation of an amorphous layer at the border of monocrystalline TiO2 NPs with 0.35 nm of interplanar spacing assigned to the (101) plane of the anatase phase in P25 TiO2. The EDS analysis of a selected area depicted in Figure 1 shows the homogeneous distribution of C, Ti and O along the whole selected area, Ti and O being the more abundant. The presence of N is not detected through EDS analysis, which points to the presence of very small amounts of this element in the nanohybrids, if present. This EDS analysis confirms the presence of both NPC and TiO2 nanomaterials at the same position, keeping a main composition of TiO2, in agreement with the lower added weight amount of melamine. The decomposition of [Au2Ag2(C6F5)4(OEt2)2]n over nanohybrid 1 leads to the formation of AuAg-NPC-TiO2 nanohybrids (2-3). This synthetic approach allows the direct stabilization of small-size Au-Ag NPs by the NPC surfaces, probably through interaction with the N-donor groups of this material, as previously reported for NPC-Pd NPs [21], avoiding the use of additional polymers or ligands as stabilizing agents for the plasmonic NPs (see Scheme 1). Indeed, the direct reduction of the Au(I)-Ag(I) organometallic precursor in the presence of TiO2 NPs, without the concurrence of additional stabilizing polymers or ligands, leads to the formation of bulk metals. The HAADF-STEM-EDS study of AuAg-NPC-TiO2 nanohybrid 2 was performed to confirm the pursued composition and distribution of each component of the nanohybrid. The larger images in Figure 3 depict HAADF-STEM images with low (upper image) or high (bottom image) magnification of 2. These images show small NPs with high Z-contrast and TiO2 NPs covered with NPC nanosheets (see Supplementary Materials). The EDS analysis of selected regions displays a main homogeneous composition of Ti and O, together with a lesser contribution from C, but also extended in the same region. A minor composition of Au and Ag is located at similar places, corresponding with the presence of AuAg alloyed NPs. This is particularly detailed in the high-magnification EDS images in Figure 3, where a bimetallic AuAg NP appears at the edge of a region with NPC-TiO2 composition.
The elemental composition and chemical states on the surface of nanohybrids 1-3 were studied through XPS. Figure 4 shows the Ti 2p, O 1s, C 1s, N 1s, Au 4f and Ag 3d narrow spectra for 2 and 3. The corresponding survey spectra for 1-3 and narrow XPS spectra for 1 are included in the Supplementary Materials (see Figures S3 and S4 and Tables S1 and S2). Intense peaks corresponding to Ti and O are observed in the survey XPS spectra of nanohybrids 1-3. Smaller peaks are assigned to C and, in the case of 2 and 3, Au and Ag peaks can also be found, whereas an almost negligible peak corresponding to N is detected for all nanohybrids. The atomic composition (narrow spectra) of Ti is 22.78 (1), 21.98 (2) and 21.52% (3), whereas that of O is 55.95 (1), 56.50 (2) and 56.77% (3); these elements are the most abundant in the samples, and their content decreases as the metal content increases (see below). The C composition is 21.11 (1), 20.98 (2) and 20.84% (3), and that found for N is only 0.16 (1), 0.32 (2) and 0.47% (3), confirming the formation of a nitrogen-doped carbon material from melamine, the high-temperature carbonization conditions leading to most of the nitrogen being burnt off from the nanohybrid surface. This result is similar to others previously reported when N-rich molecules such as melamine and TiO2 are pyrolyzed together [7,22]. Minor atomic compositions of Ag (0.13 (2) and 0.22% (3)) and Au (0.09 (2) and 0.17% (3)) are also found, which correspond to weight compositions of 0.62 (2) and 1.05% (3) for Ag and 0.79 (2) and 1.49% (3) for Au, respectively, slightly higher than the 1% wt of added metal in the organometallic precursor. This is probably due to the loss of some NPC-TiO2 1 nanoparticles in the preparation process of the 2 and 3 nanohybrids.
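The atomic percentages given above were obtained in CasaXPS by integrating each narrow-scan peak after Shirley background subtraction and weighting the areas with the relative sensitivity factors (RSF) listed in the methods. Purely as a hedged illustration of that bookkeeping step, the sketch below applies the same RSF weighting to invented peak areas; only the RSF values themselves are taken from the text.

```python
# Hypothetical illustration of RSF-weighted XPS quantification (the actual analysis used CasaXPS).
# Atomic fraction of element i:  x_i = (A_i / RSF_i) / sum_j (A_j / RSF_j),
# where A_i is the Shirley-background-subtracted peak area of the chosen core level.

RSF = {"Ti 2p": 2.001, "C 1s": 0.278, "O 1s": 0.780,
       "N 1s": 0.477, "Ag 3d": 5.987, "Au 4f": 6.25}   # values quoted in the methods

# Placeholder peak areas (arbitrary units) -- NOT measured data.
peak_areas = {"Ti 2p": 4400.0, "C 1s": 580.0, "O 1s": 4300.0,
              "N 1s": 15.0, "Ag 3d": 78.0, "Au 4f": 56.0}

def atomic_percent(areas, rsf):
    """Return RSF-corrected atomic percentages from peak areas."""
    corrected = {k: areas[k] / rsf[k] for k in areas}
    total = sum(corrected.values())
    return {k: 100.0 * v / total for k, v in corrected.items()}

for element, at_pct in atomic_percent(peak_areas, RSF).items():
    print(f"{element:6s} {at_pct:6.2f} at.%")
```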
The narrow XPS spectrum of the Ti 2p region for 2 and 3 displays a doublet at 458.8 and 464.6 eV (2) or 458.7 and 464.5 eV (3) with a separation of 5.7 eV that is assigned to the presence of TiO2 in the sample [23]. Ti-N and Ti-C bonds are discarded, ruling out N or C doping of the TiO2 crystal lattice. The presence of TiO2 is further confirmed by the narrow spectrum found for O 1s for 2 and 3, with binding energies of 529.8 and 530.9 eV (2) or 530.0 and 531.1 eV (3), which are assigned to oxygen anions in the TiO2 lattice (Ti-O) and to surface OH groups, respectively. The C 1s region for 2 and 3 is fitted into three main peaks at 285.0, 286.3 and 288.8 eV (2) or 285.0, 286.3 and 288.5 eV (3). The lowest binding energy peak at 285.0 eV corresponds to single C-C/C-H bonding; since the presence of N in the sample is almost negligible, the deconvoluted peaks at 286.3 and 288.5 eV could be assigned to C-O and C=O bonds, respectively [24]. These peaks would confirm the interaction between NPC and TiO2 and rule out TiO2 doping. The N 1s peak is very weak and appears at 400.4 (2) or 399.4 eV (3), corresponding to tertiary N-(C)3 bonding; the peak at ca. 398 eV corresponding to N=C-N bonding in the triazine groups of g-C3N4 is absent, in agreement with a high degree of carbonization of melamine and N-doping. The narrow XPS spectrum of the Au 4f region for 2 and 3 is depicted in Figure 4. The experimental signals are fitted to two spin-orbit doublets of different intensities, but equally separated in energy (ca. 3.7 eV). The most intense doublet appears at 83.5 and 87.2 eV (2); this low binding-energy state for Au atoms is usually found when sub-10 nm size particles are grafted onto C-based surfaces such as g-C3N4 or graphene [25]. The narrow XPS spectrum of the Ag 3d region for 2 and 3 shown in Figure 4 displays two signals that are fitted to one spin-orbit doublet with a separation in energy of ca. 6.0 eV, characteristic of the silver 3d region, at 367.6 and 373.6 eV (2) or 367.5 and 373.5 eV (3).
Considering the energy maxima of the doublet, these signals are attributed to metallic silver [26]. The low binding energy values for Au 4f and Ag 3d, usually between 84.3-84.5 eV and 368.1-368.3 eV, respectively, could be ascribed both to AuAg alloy formation and to AuAg NP-TiO2 substrate interactions, as previously reported [27,28].

Figure 5 shows the solid UV-vis absorption spectra of the samples. Comparing the TiO2 and AuAg-PVP-TiO2 (4) spectra, it is observed that the addition of PVP-stabilized bimetallic nanoparticles slightly red-shifts the absorption band edge of TiO2, and a plasmonic absorption band appears in the visible region at ca. 495 nm. The formation of the NPC layers on TiO2 NPs in nanohybrid 1 produces an important red-shift of the absorption band edge of TiO2. Additionally, when nanohybrid 1 is grafted with plasmonic AuAg nanoparticles, an even more pronounced band edge red-shift is produced, together with the appearance of a plasmonic absorption at 497 nm (2) and 512 nm (3), in the 400-650 nm range. The corresponding Tauc plots (Figure S5 in the Supplementary Materials) display the values of the TiO2 band gap in the nanohybrids. The metal-semiconductor heterojunction produces a slight decrease in the TiO2 band gap, from 3.10 eV in TiO2 NPs to 2.99 eV in TiO2-PVP-AuAg (4). The inclusion of NPC layers on TiO2 NPs also produces a band-gap decrease, to 2.97 eV, whereas the inclusion of AuAg NPs in nanohybrid 1 leads to even narrower band gaps of 2.92 (2) and 2.88 eV (3), respectively. These results confirm that the presence of both the NPC layers and the plasmonic AuAg NPs favors the band-gap reduction of TiO2. In addition, the plasmonic absorption produced by the presence of tiny amounts of AuAg NPs increases the visible light harvesting ability of these systems and boosts the LSPR effects.
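The band-gap values just quoted follow from the Kubelka-Munk transform of the diffuse reflectance data (described in the characterization methods) and a Tauc analysis of Figure S5. As a rough illustration of that workflow, and not the authors' actual procedure, the Python sketch below converts reflectance to F(R), builds the Tauc quantity (F(R)·hν)^(1/2) appropriate for an indirect allowed transition (a common assumption for anatase-rich P25), and extrapolates the linear region of the band edge to the energy axis; the synthetic reflectance curve and the fitting window are placeholders.

```python
import numpy as np

def kubelka_munk(reflectance):
    """Kubelka-Munk function F(R) = (1 - R)^2 / (2R) for diffuse reflectance R in (0, 1]."""
    r = np.clip(reflectance, 1e-6, 1.0)
    return (1.0 - r) ** 2 / (2.0 * r)

def tauc_band_gap(wavelength_nm, reflectance, fit_window_eV, n_exponent=0.5):
    """Estimate the optical band gap (eV) from a Tauc plot of (F(R)*hv)^n vs hv.

    n_exponent = 0.5 for an indirect allowed transition; use 2 for a direct one.
    """
    energy_eV = 1239.84 / np.asarray(wavelength_nm)           # photon energy hv
    tauc = (kubelka_munk(reflectance) * energy_eV) ** n_exponent
    lo, hi = fit_window_eV
    mask = (energy_eV >= lo) & (energy_eV <= hi)               # linear region near the band edge
    slope, intercept = np.polyfit(energy_eV[mask], tauc[mask], 1)
    return -intercept / slope                                   # x-axis intercept = Eg estimate

if __name__ == "__main__":
    # Placeholder data: a smooth absorption edge near ~3.1 eV (~400 nm), illustrative only.
    wl = np.linspace(300, 600, 301)
    ev = 1239.84 / wl
    r = 0.05 + 0.90 / (1.0 + np.exp((ev - 3.1) / 0.05))        # synthetic reflectance curve
    print(f"Estimated band gap: {tauc_band_gap(wl, r, (3.15, 3.4)):.2f} eV")
```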
Photocatalytic Activity

Figure 6 shows the evolution of the concentration of ciprofloxacin (CIP) under visible light in the presence of the previously described photocatalysts. The adsorption of CIP decreases in the nanomaterials that contain NPC shells, whereas the adsorption with TiO2 and nanohybrid 4 is around 30%. CIP degradation in the absence of a photocatalyst (photolysis) was negligible. About 70% of CIP was removed within 230 min of irradiation after the addition of TiO2 (P25). A partial absorption in the visible light region for TiO2 was also observed by other authors [5]. In addition, the white LED light used displays a relative spectral power in the 400-800 nm range (maxima at ca. 460 and 550 nm), the higher energy limit being very close to the TiO2 band-gap energy. The photodegradation of CIP is enhanced when plasmonic nanoparticles are grafted on TiO2, which corresponds to nanohybrid 4. As can be seen in Figure 6, the depletion rates achieved with the nanohybrids NPC-TiO2 (1), AuAg-NPC-TiO2 (2) and AuAg-NPC-TiO2 (3) were higher than the ones obtained for pristine TiO2 and AuAg-PVP-TiO2 (4). Total CIP degradation was achieved with catalyst 2 within 200 min of irradiation. Moreover, the degradation rates observed with nanohybrids 1 and 3 were similar to each other and lower than the one obtained with 2, pointing out that a higher percentage of metal did not enhance the photodegradation rate. Furthermore, the size of the NPs might play a role in the degradation effectiveness, since the mean size of the NPs in nanohybrid 2 was smaller than that in 3, the former thus displaying a higher surface/volume ratio.
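The comparison of depletion rates in Figure 6 is reported as residual CIP concentration versus irradiation time. The paper does not state how rate constants were extracted, so the sketch below is only an assumed, conventional treatment: fitting the post-adsorption data to pseudo-first-order kinetics, ln(C0/C) = k_app·t, and comparing the apparent rate constants. The concentration values are placeholders chosen to roughly mimic the reported ~70% removal with P25 and near-total removal with nanohybrid 2.

```python
import numpy as np

def apparent_rate_constant(t_min, conc):
    """Fit ln(C0/C) = k*t and return the apparent pseudo-first-order constant k (min^-1)."""
    t = np.asarray(t_min, dtype=float)
    c = np.asarray(conc, dtype=float)
    y = np.log(c[0] / c)                 # ln(C0/C)
    k, _ = np.polyfit(t, y, 1)           # slope of ln(C0/C) vs t
    return k

# Placeholder HPLC readings (ug/mL) after the dark adsorption step -- illustrative only.
time_min = [0, 30, 60, 120, 180, 230]
cip_tio2 = [4.0, 3.4, 2.9, 2.2, 1.6, 1.2]      # roughly ~70% removal at 230 min
cip_hybrid2 = [4.0, 2.8, 2.0, 0.9, 0.3, 0.05]  # near-total removal, as seen for nanohybrid 2

for name, c in [("P25 TiO2", cip_tio2), ("AuAg-NPC-TiO2 (2)", cip_hybrid2)]:
    print(f"{name}: k_app = {apparent_rate_constant(time_min, c):.4f} min^-1")
```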
Mechanism of Photocatalytic Activity

During the CIP degradation experiments, reactive oxygen species are generated upon visible light irradiation, and they are responsible for the CIP depletion. In order to study the photocatalytic mechanism with the catalyst AuAg-NPC-TiO2 (2) and identify which reactive species play a significant role in the photodegradation of CIP, three trapping experiments were performed. Figure 7 shows how the CIP depletion rates vary when the degradations are carried out in the presence of tert-butanol or triethanolamine and under an N2 atmosphere. The addition of triethanolamine greatly decreased, by around 90%, the degradation efficiency of CIP, suggesting that holes (h+) are the main reactive species in the photodegradation process. Moreover, the degradation was partially inhibited under the N2 atmosphere and, consequently, superoxide radicals (·O2−) also play an important role. However, only a slight suppression of the degradation of CIP is observed when tert-butanol is added, which means that hydroxyl radicals (·OH) had little contribution to the CIP photodegradation process with photocatalyst 2. These results agree with a recent report on the photocatalytic degradation of Rhodamine B using Au/TiO2 network-like nanofibers as photocatalysts [29]. Following the observations of the scavenger effects, the improved photocatalytic activity of nanohybrid 2 can be related to the visible light LSPR absorption of the bimetallic AuAg NPs grafted at the surface of the TiO2 NPs and stabilized by the NPC shells. The plasmonic absorption enables hot-electron injection from the alloyed NPs into the conduction band of the TiO2 semiconductor. This hot-electron injection produces electron-hole pair formation and further charge carrier separation [7]. In addition, the presence of the N-doped porous carbon (NPC) layers strongly contributes to the charge carrier separation. The photogenerated electrons are able to react with adsorbed O2, leading to superoxide radicals (·O2−) through a reduction process.
A schematic representation of the mechanism of photocatalysis is depicted in Figure 8. The almost negligible role played by hydroxyl radicals agrees with the unfavorable formation of photogenerated electrons upon TiO2 excitation. Indeed, the semiconductor band gap is slightly larger than the higher energy component of the visible light, ruling out the formation of holes (h+) at the valence band of TiO2 (1.65 eV vs. NHE), which would display a favorable potential for the formation of ·OH radicals from OH− groups (the ·OH/OH− potential is 2.38 eV vs. NHE) or from H2O (the ·OH/H2O potential is 2.72 eV vs. NHE).

Conclusions

We have developed a straightforward and efficient approach for the synthesis of N-doped porous carbon-titania nanohybrids displaying enhanced photocatalytic activity towards the degradation of CIP under visible light when this material is grafted with plasmonic AuAg alloyed NPs. The presence of AuAg nanoparticles provides the possibility of hot-electron injection into the CB of TiO2 upon visible light plasmonic absorption, thanks to an effective metal-semiconductor interface formation. The charge-carrier separation is clearly enhanced by the presence of the NPC layers and improves the photocatalytic efficiency of the nanohybrid photocatalysts towards the degradation of CIP, which is a persistent antibiotic in water.
Data Availability Statement: The data presented in this study are available on request from the corresponding authors.
HIF-2α and Oct4 have synergistic effects on survival and myocardial repair of very small embryonic-like mesenchymal stem cells in infarcted hearts Poor cell survival and limited functional benefits have restricted mesenchymal stem cell (MSC) efficacy for treating myocardial infarction (MI), suggesting that a better understanding of stem cell biology is needed. The transcription factor HIF-2α is an essential regulator of the transcriptional response to hypoxia, which can interact with embryonic stem cells (ESCs) transcription factor Oct4 and modulate its signaling. Here, we obtained very small embryonic-like mesenchymal stem cells (vselMSCs) from MI patients, which possessed the very small embryonic-like stem cells' (VSELs) morphology as well as ESCs' pluripotency. Using microarray analysis, we compared HIF-2α-regulated gene profiles in vselMSCs with ESC profiles and determined that HIF-2α coexpressed Oct4 in vselMSCs similarly to ESCs. However, this coexpression was absent in unpurified MSCs (uMSCs). Under hypoxic condition, vselMSCs exhibited stronger survival, proliferation and differentiation than uMSCs. Transplantation of vselMSCs caused greater improvement in cardiac function and heart remodeling in the infarcted rats. We further demonstrated that HIF-2α and Oct4 jointly regulate their relative downstream gene expressions, including Bcl2 and Survivin; the important pluripotent markers Nanog, Klf4, and Sox2; and Ang-1, bFGF, and VEGF, promoting angiogenesis and engraftment. Importantly, these effects were generally magnified by upregulation of HIF-2α and Oct4 induced by HIF-2α or Oct4 overexpression, and the greatest improvements were elicited after co-overexpressing HIF-2α and Oct4; overexpressing one transcription factor while silencing the other canceled this increase, and HIF-2α or Oct4 silencing abolished these effects. Together, these findings demonstrated that HIF-2α in vselMSCs cooperated with Oct4 in survival and function. The identification of the cooperation between HIF-2α and Oct4 will lead to deeper characterization of the downstream targets of this interaction in vselMSCs and will have novel pathophysiological implications for the repair of infarcted myocardium. Mesenchymal stem cells (MSCs) are multipotent, easily obtainable, have low immunogenicity, and secrete angiogenic factors that promote cardiac repair after myocardial infarction (MI). 1 However, the therapeutic potency of transplanted MSCs appears to be limited by low rates of engraftment, survival, and differentiation: 2 the percentage of transplanted MSCs in hearts declined from 34-80% immediately after administration to just 0.3-3.5% after 6 weeks; 3 in a swine model of chronic ischemic cardiomyopathy, 10% of MSCs participated in coronary angiogenesis, and 14% differentiated into cardiomyocytes. 4 Accordingly, researchers have developed methods to improve the survival and effectiveness of transplanted cells by genetically manipulating the expression of proteins that regulate antioxidant resistance, vascular growth and the apoptotic response to ischemic injury. 5,6 One problem that remains is whether the persistent expression of foreign proteins could lead to malignant transformation or transplantation failure, supporting the hypothesis that new strategies for exploring the endogenous cytoprotection and survival advantage to improve the effect of stem cell therapy would be more favorable. 
The primary transcriptional regulators of both cellular and systemic hypoxic adaptation in mammals are hypoxia-inducible factors (HIFs). HIFs regulate the expression of many genes involved in the survival and effects of transplanted cells, but which of these genes matter for cell therapy remains elusive. 7 Most of our current knowledge about these transcription factors is based on studies of HIF-1α and, to a lesser degree, HIF-2α. Forristal et al. found that silencing of HIF-2α resulted in a significant decrease in human embryonic stem cell (hESC) proliferation and in the protein expression of Oct4, SOX2 and NANOG. 8 Covello et al. showed that HIF-2α can regulate ESC function and/or differentiation through activation of Oct-4, 9 suggesting that HIFs in combination with Oct4 are essential for ESC survival. How the relation between Oct4 and HIFs under ischemia leads to MSC death or survival, and the attendant transcriptional activity, is unknown. MSCs produce a variety of cytokines, such as vascular growth factor (VEGF), basic fibroblast growth factor (bFGF), and angiopoietin-1 (Ang-1), which directly promote cell survival and have beneficial effects on myocardial repair following MI. 10,11 In some cases, MSC sorting based on markers appears to enrich subpopulations of MSCs with differing paracrine activity. 12 This led to our development of a population of vselMSCs from patients with acute MI, using hypoxic culture and ESC culture conditions in combination with our previously described methods. 11 The present study was designed to gain insights into the autologous expression of HIFs, Oct4, anti-apoptotic factors, and angiogenic cytokines in vselMSCs under hypoxic conditions. We then demonstrated the functional cooperation between HIFs and Oct4 in myocardial repair induced by autologous vselMSC therapy combined with HIF-2α or Oct4 overexpression.

Results

Comparison of the VSELs in circulating blood MNCs. Some data confirm that VSEL mobilization induced by acute MI differs according to age. 13 Our study shows the same trend: comparing the enrolled non-older patients with the older patients, we observed a statistically significant difference in VSEL numbers in the peripheral vein blood (PB) between the two groups (Figure 1a).

Figure 1. VSEL properties: (a) age-dependent frequency of CD133+Lin−CD45− VSELs per ml of PB in non-older (20-60 years) and older (>60-75 years) STEMI patients; (b) absolute numbers of circulating CD133+Lin−CD45− cells in the peripheral vein and the stenotic coronary artery; (c) flow-cytometric gating strategy for Lin−/CD133+/CD45− VSELs; (d, e) qRT-PCR and immunoblot evaluation of Oct4, Nanog, Klf4, and Sox2 in VSELs from SB and PB; (f) apoptotic cell death assessed by annexin V-PI staining. *P<0.05, n = 10 per group.

Figure 2. Characterization of vselMSCs: (a) morphology of vselMSCs, ESCs, and uMSCs; (b, c) mRNA and protein levels of the pluripotency markers Nanog, Klf4, Sox2, and Oct4, normalized to GAPDH; (d) flow cytometry of the MSC markers SH2 and SH3, the ESC marker SSEA, the VSEL markers CD133 and CXCR4, the matrix receptor CD44, and the endothelial marker CD147; (e, f) differentiation of vselMSCs into the three germ layers, assessed morphologically, by immunofluorescence (β-tubulin III, GFAP, troponin T, MHC, factor VIII, α-SMA, serum albumin, AFP), and by immunoblotting. *P<0.05, n = 10 per group.

Figure 3 (caption fragment): (d) apoptosis (annexin V) and cell death (propidium iodide, PI) evaluated in normoxia-cultured vselMSCs, ESCs, and uMSCs via flow cytometry.

The data suggested that patients aged 20-60 years had stronger mobilization of VSELs into the PB after AMI. Accordingly, we selected this age group for subsequent study. The number of circulating VSELs was significantly higher in the stenotic coronary arterial blood (SB) than in the PB (Figure 1b). The Lin−/CD133+/CD45− population number from gate R1 was greater in the SB than in the PB (Figure 1c). qRT-PCR and immunoblotting showed that the SB VSELs expressed higher levels of Oct4, Nanog, Klf4, and Sox2 mRNA and protein than the PB VSELs (Figures 1d and e). Compared with the PB VSELs, 4-h simulated hypoxia induced less apoptotic cell death in the SB VSELs (Figure 1f). These data suggest that the SB contains a larger pool of anti-apoptotic VSELs as compared to PB.
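The mRNA comparisons above (Figures 1d and 2b) are qRT-PCR measurements normalized to GAPDH. The paper does not spell out the arithmetic, but relative expression from such data is conventionally computed with the 2^(-ΔΔCt) method; the sketch below illustrates that normalization with invented Ct values, not the study's data.

```python
# Hypothetical 2^(-ddCt) calculation; all Ct values are invented placeholders.
def relative_expression(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """Fold change of a target gene versus a reference sample, normalized to GAPDH."""
    d_ct_sample = ct_target - ct_gapdh              # delta-Ct in the sample of interest
    d_ct_reference = ct_target_ref - ct_gapdh_ref   # delta-Ct in the reference sample
    return 2.0 ** -(d_ct_sample - d_ct_reference)

# Example: Oct4 in SB-derived VSELs relative to PB-derived VSELs (placeholder numbers).
fold = relative_expression(ct_target=24.1, ct_gapdh=18.0,
                           ct_target_ref=26.0, ct_gapdh_ref=18.2)
print(f"Oct4 fold change (SB vs PB): {fold:.2f}")
```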
Therefore, we chose blood MNCs from the affected coronary artery to isolate and purify VSELs. Next, we performed directed differentiation toward the ectoderm, endoderm, and mesoderm by growth factor supplementation and growth on defined matrices. 14,15 After induction, light microscopy showed characteristic morphologies of nerve cells, myocardiocytes, blood vascular cells, and hepatocytes (Figure 2eA). Immunofluorescence showed that the vselMSCs positively coexpressed the neuron marker β-tubulin III, the astrocyte-specific protein GFAP, the myocardiocyte markers troponin T and MHC, the blood vascular markers factor VIII and α-SMA, and the hepatocyte marker proteins human serum albumin and AFP (Figures 2eB-E). Western blotting revealed higher β-tubulin III, MHC, factor VIII, and AFP expression in vselMSCs as compared with uMSCs (Figure 2f). As Oct4 acts as a stem cell marker, 16 we evaluated the presence of HIF motifs around the Oct4-occupied regions in the data from vselMSCs and ESCs. There were 16 genes that were expressed at more than 2-fold relative to GAPDH in vselMSCs, and there were seven shared genes that were uniquely common to the vselMSC and hESC data sets (HIF-1, HIF-2, Oct4, bFGF, VEGF, Survivin, and Bcl2). HIF-2α motifs were enriched adjacent to the Oct4 motifs in vselMSCs, and were also detectable in hESCs (Figure 3a). The mRNA and protein expression levels of HIF-2, bFGF, VEGF, Survivin and Bcl2 were significantly higher in vselMSCs than in uMSCs, and slightly lower than in ESCs (Figures 3b and c), while HIF-1 expression in all three cell types was similar. The death rate was similar between vselMSCs and ESCs, and significantly lower in vselMSCs than in uMSCs (Figure 3d). The expression of the HIF-2 protein was negatively correlated with the apoptotic cell death ratio of vselMSCs, as assessed by FACS (r = −0.951, P<0.01), and positively correlated with the protein expression of Oct4, Bcl2, Survivin, bFGF, and VEGF (r = 0.929, 0.842, 0.930, 0.902, and 0.871, respectively; P<0.01 for all comparisons), with Oct4 showing the most significant positive correlation with HIF-2α expression among these interactors.

HIF-2α interacts with Oct4, and both are essential for vselMSC growth. Figures 4aA and B show that Oct4 and HIF-2α mRNA and protein levels were significantly upregulated by Oct4 or HIF-2α overexpression and were downregulated by HIF-2α or Oct4 siRNA inhibition. Co-overexpressing HIF-2α and Oct4 further increased HIF-2α and Oct4 expression, but overexpressing one transcription factor while silencing the other only caused a corresponding increase in the expression of the overexpressed gene and decreased the expression of the silenced gene. These changes were confirmed by immunofluorescence (Figure 4aC). Under hypoxic conditions, vselMSCs overexpressing HIF-2α or Oct4 were significantly more proliferative than WT vselMSCs, and co-overexpressing HIF-2α and Oct4 further promoted cell proliferation. HIF-2α or Oct4 siRNAs led to a greater antiproliferative effect, and overexpressing one transcription factor while silencing the other produced the same effects (Figure 4b). The apoptotic ratios were lowest in HIF-2α+Oct4+ cells, second lowest in cells overexpressing HIF-2α or Oct4 alone, and highest in cells treated with HIF-2α or Oct4 siRNAs with or without Oct4/HIF-2α overexpression (Figure 4c), suggesting that HIF-2α and Oct4 cooperatively protect vselMSCs against the apoptotic response to hypoxic injury.
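The correlation coefficients quoted above (e.g., r = −0.951 between HIF-2 protein and the apoptotic cell death ratio) are standard Pearson correlations across the patient-derived samples. A minimal sketch of how such an r value is computed, using placeholder arrays rather than the study's measurements:

```python
import numpy as np

# Placeholder paired measurements for n = 10 samples (NOT the study's data).
hif2_protein = np.array([1.8, 2.1, 2.4, 2.9, 3.1, 3.4, 3.8, 4.0, 4.3, 4.7])     # arbitrary units
apoptotic_ratio = np.array([28., 26., 25., 22., 20., 18., 15., 14., 12., 10.])  # percent

r = np.corrcoef(hif2_protein, apoptotic_ratio)[0, 1]   # Pearson correlation coefficient
print(f"Pearson r = {r:.3f}")                          # strongly negative, mirroring the reported trend
```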
There was high expression of bFGF, VEGF, Bcl2, and survivin mRNA and protein in HIF-2α+ vselMSCs and Oct4+ vselMSCs as compared to WT vselMSCs, and much more than in siHIF-2α vselMSCs and siOct4 vselMSCs, respectively; however, measurements in HIF-2α- or Oct4-deficient cells with or without Oct4/HIF-2α overexpression were similar and generally lower than in cells with unmodified Oct4 and HIF-2α expression. Caspase 3 expression was lower in WT vselMSCs than in siHIF-2α vselMSCs and siOct4 vselMSCs, and was downregulated by HIF-2α or Oct4 overexpression (Figures 4dA and B). These data all show that Oct4 and HIF-2α cooperatively share many anti-apoptotic transcriptional targets.

Figure 4. HIF-2α and Oct4 promote vselMSC growth. vselMSCs were transfected with vectors encoding HIF-2α, HIF-2α siRNA (siHIF-2α), Oct4, or Oct4 siRNA (siOct4) and cultured under hypoxic conditions. (aA, aB) qRT-PCR and western blot analysis showed that HIF-2α and Oct4 expression was increased by overexpressing either gene, was highest after co-overexpression, was reduced by silencing either gene and, when one factor was overexpressed while the other was silenced, only the overexpressed one increased (*P<0.05 for the indicated comparisons, n = 10 per group). (aC) Immunofluorescence with anti-Oct4 (green) and anti-HIF-2α (red) antibodies and DAPI counterstaining (blue) showed mainly nuclear localization and the same pattern of changes, consistent with the physical co-binding of HIF-2α and Oct4. Bars = 10 μm.

Oct4 collaborates with HIF-2α to regulate vselMSC pluripotency under hypoxia. Compared with WT vselMSCs, HIF-2α or Oct4 overexpression alone upregulated the mRNA and protein expression of Klf4, Nanog, and Sox2 in vselMSCs, and HIF-2α and Oct4 co-overexpression further improved this upregulation; siHIF-2α or siOct4 abolished the upregulation, and overexpressing one transcription factor while silencing the other elicited the same results (Figures 5a and b). Compared with those in WT vselMSCs, the mRNA and protein expression levels of MHC, troponin T and factor VIII were highest in vselMSCs co-overexpressing HIF-2α and Oct4, followed by vselMSCs overexpressing either transcription factor alone, and were significantly lower in HIF-2α- or Oct4-deficient cells combined with Oct4 or HIF-2α (Figures 5a and b). HIF-2α and Oct4 overexpression together showed the same change in the number of vselMSCs that expressed cardiomyocyte and/or vascular cell markers (Figure 5c).

HIF-2α and Oct4 cooperate to promote myocardial repair induced by vselMSC therapy. Echocardiography revealed significant deterioration in the LV function and structural indices in all MI animals that had received PBS injection or cell treatment (Figures 6a-d).
However, all functional and structural parameters were significantly better in animals treated with cells expressing unmodified levels of HIF-2α and Oct4 than in saline-treated animals (PBS group), in WT vselMSC (WT vsel)-treated animals than in the WT uMSC (WT uM)-treated group, and in HIF-2α vselMSC- or Oct4 vselMSC-treated animals than in siHIF-2α vselMSC- or siOct4 vselMSC-treated animals. The greatest improvement was seen with the vselMSCs transfected with both HIF-2α and Oct4, and overexpressing one transcription factor while silencing the other caused obviously decreased effects (Figures 6a-g).

Overexpression of HIF-2α and Oct4 enhances angiogenesis induced by vselMSC transplantation. Compared with SHAM and PBS injection, uMSC transplantation resulted in an increase of HIF-2α and Oct4 mRNA expression in the infarcted hearts, and vselMSC therapy significantly increased their expression. vselMSC transplantation combined with the transfection of either HIF-2α or Oct4 further increased the mRNA expression of both HIF-2α and Oct4, and vselMSC transplantation combined with HIF-2α and Oct4 transfection led to the greatest increase (Figures 7a and b). Furthermore, the expression level of HIF-2α induced by transplantation of vselMSCs transfected with HIF-2α and siOct4 was lower than that of vselMSC therapy combined with transfection of HIF-2α alone, and vice versa. The proangiogenic factors Ang-1, bFGF, and VEGF were expressed at higher levels in the group receiving vselMSCs alone than in the WT uMSC, PBS, and SHAM groups. HIF-2α or Oct4 overexpression further increased the mRNA expression of those factors, and the HIF-2α and Oct4 combination caused the greatest increase, which was significantly reduced by HIF-2α or Oct4 deficiency, or by siHIF-2α or siOct4 transfection combined with Oct4 or HIF-2α overexpression. Although Ang-1 and bFGF expression was not significantly different between HIF-2α- or Oct4-deficient cells and cells transfected with siHIF-2α or siOct4 combined with Oct4 or HIF-2α overexpression, VEGF mRNA expression was significantly higher in the latter group (Figures 7c-e). Immunoblots showed the same trends for those factors (Figure 7f). These results were confirmed by immunofluorescence (Figure 7h). There was no significant difference between the SHAM group and the PBS group. Collectively, these findings suggested that the initial upregulation of the expression of proangiogenic cytokines is induced by vselMSC transplantation, and overexpression of HIF-2α and Oct4 enhances this upregulation. The numbers of blood vessels stained with anti-factor VIII antibody were greater in the rats that received vselMSC therapy than in the uMSC and PBS groups, greatest in the rats that received vselMSCs transfected with HIF-2α and Oct4, followed by the rats that had received vselMSCs transfected with HIF-2α or Oct4 alone, and were significantly reduced in the rats that had received vselMSCs combined with siHIF-2α or siOct4 (Figures 7g and i).

Overexpression of HIF-2α and Oct4 protects vselMSCs against ischemia-induced injury. Both anti-apoptotic (survivin and Bcl2) mRNA and protein expression were higher, and caspase 3 expression was lower, in WT vselMSC- and WT uMSC-treated hearts than in hearts from the PBS group, and in WT vselMSC-treated hearts than in WT uMSC-treated hearts; this was improved in the HIF-2α+ vselMSC- and Oct4+ vselMSC-treated hearts, and the improvement was greatest in the HIF-2α+Oct4+ vselMSC-treated hearts (Figures 8a-d).
Immunofluorescence showed that the numbers of the cells that expressed these anti-apoptotic proteins increased in response to HIF-2α and Oct4 overexpression, while caspase 3 expression declined (Figure 8e). HIF-2α and Oct4 overexpression enhanced the proliferation and engraftment of transplanted cells. EGFPexpressing cells were significantly more common in WT vselMSC-treated hearts than in WT uMSC-treated hearts, and were more common in HIF-2α+Oct4+ vselMSC-treated hearts than in the hearts with HIF-2α or Oct4 overexpression alone, while HIF-2α or Oct4 deficiency significantly decreased engraftment as compared with that in WT vselMSCs (Figures 9a and e). More cells coexpressed EGFP and the proliferation marker Ki67 in WT vselMSCs than in WT uMSCs, which was significantly elevated by HIF-2α or Oct4 overexpression and was significantly reduced by HIF-2α or Oct4 deficiency (Figures 9b and f). This cooperation was reduced by siHIF-2α or siOct4 transfection combined with or without Oct4 or HIF-2α overexpression. Expression of MHC and VIII was higher in WT vselMSCtreated hearts than in WT uMSC-treated hearts, in vselMSC-and Oct4+ vselMSC-treated hearts than in WT vselMSC-treated hearts, and was further improved in hearts treated with vselMSCs co-overexpressing HIF-2α and Oct4 (Figures 9c, d and g-j). Discussion This is the first study to demonstrate that vselMSCs isolated and purified from the coronary arterial blood MNCs and MSCs have greater survival and enhanced functions under hypoxic conditions compared with uMSCs. In addition, HIF-2α and Oct4 signaling improves cardiac function and remodeling induced by vselMSCs therapy after MI, and this myocardial repair can be significantly altered by HIF-2α or Oct4 overexpression and HIF-2α or Oct4 deficiency. We also observed the collaborative induction of angiogenesis, differentiation, and anti-apoptosis in vselMSCs by HIF-2α and Oct4. Taken together, the findings of this study suggested that collaboration between HIF-2α and Oct4 promotes survival and myocardial repair by human vselMSCs in MI. Although human VSELs are enriched for CD133 + Lin − CD45 − cells, and express stem cell markers such as Oct4, Nanog, and stage-specific embryonic antigen-4, 17 the precise combination of markers can be affected by the isolation method and the presence of pathological conditions. 18,19 Some cells with morphological similarities to VSELs, such as those purified from umbilical cord blood of healthy patients with full-term pregnancies, 20 fail to respond to ESC culture conditions. In this report, we used micromagnetic bead selection, multiparameter flow cytometry, limited-dilution culture and ESC culture expansion to obtain a VSELs subpopulation isolated from blood of the affected coronary artery in patients with MI. The isolated cells expressed unique molecular characteristics of mesenchymal, ESC and adult stem/progenitor cell markers, but were negative for hematopoietic (CD34 and Lineage) and monocyte-macrophage (CD45) marker expression. These cells have the VSELs' morphology as well as ESCs' pluripotency, which can be induced into three germ layers. Therefore, we called this population very small embryonic-like mesenchymal stem cells (vselMSCs). Moreover, the number of these vselMSCs in the infarct related artery was much higher than in those from the peripheral vein. This is the first report of a VSELs content difference in circulating blood MNCs from the culprit coronary artery compared with the peripheral vein. 
Although myocardial transfection of HIF-1α and cotransplantation of mesenchymal stem cells could decrease the infarct size, and prevent post-infarction remodeling of the heart, 21 but the role of HIF-2α in cell-autonomous VSEL maintenance remains unknown. The present study was the first to observe that HIF-2α expression in vselMSCs isolated and purified from a hypoxic environment was distinct from the uMSCs cultured in a normoxic environment, where only HIF-1α is expressed. By contrast, HIF-1α and HIF-2α are simultaneously highly expressed in vselMSCs, and we identified Oct4 as a novel collaborative interacting partner protein of HIF-2α in vselMSCs. Recent studies suggest that HIF-2α positively regulates the transcriptional activity of Oct4 and enhances the physiological roles of Oct4. 22 We demonstrated that HIF-2α genome occupancy in vselMSCs was similar to that in hESCs, and HIF-2α motifs were found to be enriched adjacent to the Oct4 motifs in vselMSCs. HIF-2α and Oct4 both accumulate in and around the nucleus and the cytoplasm under hypoxia-like conditions, and were upregulated in vselMSCs transfected with HIF-2α or Oct4, which reflected cell number increase and proliferation, and correlated negatively with the apoptotic rate under hypoxic culture or in ischemic hearts. Cooverexpressing HIF-2α and Oct4 together further enhanced the expression of both transcription factors and cell proliferation in both cultured vselMSCs in vitro and transplanted vselMSCs in vivo; silencing one transcription factor while overexpressing the other greatly weakened these effects. This result could aid in clarifying the synergistic effects of HIF-2α and Oct4 in improving cell growth during hypoxic or ischemic conditions. Next, this coordination between HIF-2α and Oct4 was reflected in their regulation of vselMSC pluripotency and therapeutic potential. Overexpressing HIF-2α or Oct4 significantly increased the expression of the multipotency markers Klf4, Nanog, and Sox2; the cardiomyocyte markers MHC and troponin T; and the blood vascular endothelial cell marker factor VIII in the vselMSCs, and co-transfection with HIF-2α and Oct4 further enhanced this increase. Especially after transplantation into the ischemic hearts, HIF-2α and Oct4 expression were significantly higher in the infarcted hearts receiving vselMSCs therapy than in those receiving PBS injection or sham operation. These differences were consistently associated with improvements in cardiac function and left-ventricular structural remodeling of hearts treated with vselMSCs after MI. These effects were generally magnified by HIF-2α and Oct4 overexpression induced by HIF-2α or Oct4 transfection alone, and were further improved by the transfection of HIF-2α and Oct4 together. HIF-2α or Oct4 deficiency abolished these effects and significantly reduced the magnification induced by Oct4 or HIF-2α overexpression, respectively. Thus, the benefit of vselMSCs transplantation appears to be inextricably linked with the extent of HIF-2α and Oct4 coactivation, which is similar to the observation of Covello et al. 9 that Oct4, as a HIF-2α-specific target gene, can regulate embryonic primordial germ cell function, which in turn contributes to HIF-2α's tumor promoting activity. On the other hand, the cooperative relationship between HIF-2α and Oct4 upregulated their target genes, including that for the proangiogenic factors Ang-1, bFGF, and VEGF; and the anti-apoptotic (survivin and Bcl2) and pro-apoptotic (caspase-3) proteins. 
The upregulation of these cytokines induced by HIF-2α and Oct4 overexpression was associated with a significant promotion of the ratio of vselMSCs cultured in vitro differentiating into blood vascular endothelial cells (vasculogenesis) and of vselMSCs engrafted into the infarcted hearts developing new blood vessels (angiogenesis). These data demonstrate that HIF-2α and Oct4 jointly regulate the expression of endogenous vascular permeabilizing factors, which are the target genes of HIF-2α-mediated angiogenesis under ischemic conditions. 23,24 The interaction between HIF-2α and Oct4 in regulating the anti-apoptotic and pro-apoptotic proteins was consistent with the altered survival and apoptosis of vselMSCs after HIF-2α and Oct4 co-overexpression, overexpression of either transcription factor alone, or the silencing of both. Our results are similar to those of the study of Donskow-Łysoniewska et al. 25 in that cell proliferation and apoptosis were dependent on a low Bax/Bcl-2 ratio and upregulation of survivin, with inhibition of active caspase-3. [Figure legend (panels e-j): representative phenotypes of gated EGFP+ (e), Ki67+EGFP+ (f), MHC+EGFP+ (g), and factor VIII+EGFP+ (h) cells evaluated by FACS in WT uM, vehicle vselMSC, HIF-2α- or Oct4-overexpressing, and HIF-2α- or Oct4-silenced vselMSC groups; (i and j) immunofluorescence of transplanted EGFP-labeled cells (green; nuclei DAPI, blue) expressing MHC (i) or factor VIII (j), with the cytoplasm of myocardiocytes or blood endothelial cells stained red with anti-MHC or anti-factor VIII, respectively; engrafted EGFP-pre-labeled cells expressing MHC or factor VIII were the most numerous in the HIF-2α+siOct4+ vselMSCs, followed by cells overexpressing HIF-2α or Oct4, and were lowest in HIF-2α- or Oct4-silenced vselMSCs (arrows).] In conclusion, our findings underscore the likelihood that the collaboration of HIF-2α and Oct4 must be considered not only as part of the native physiological mechanisms that enhance stem cell survival and function, but also as a potential synergist with vselMSC therapy. In this manner, vselMSCs overexpressing HIF-2α and Oct4 may serve as an optimal donor for myocardial repair post-MI, and this area of physiology represents a potential therapeutic target for the future treatment of ischemic diseases. Materials and Methods An expanded Methods section containing details regarding the patient population, fluorescence-activated cell sorting analysis, vselMSC isolation, expansion, purification, and in vitro directed differentiation, immunocytofluorescence, microarray analysis, HIF-2α and Oct4 transfection, hypoxic treatment, cell viability and apoptosis analysis, green fluorescent protein (GFP) labeling, MI model and treatment, echocardiography, measurement of body weight ratio and infarct size, histology, immunofluorescence, real-time quantitative reverse transcription-PCR (qRT-PCR), immunoblotting, and statistics is available in the Online Data Supplement. Patient population. We studied ten 20- to 60-year-old patients with acute ST-segment elevation MI (STEMI) referred within 12 h after symptom onset for primary percutaneous coronary intervention (PCI). To evaluate whether vselMSCs decline with age in the peripheral blood (PB) of the enrolled patients with AMI, 10 patients with STEMI aged >60-75 years were enrolled as controls.
All patient-related procedures were performed with informed consent and in accordance with the guidelines of the Southern Medical University Committee on the Use of Human Subjects in Research. Fluorescence-activated cell sorting analysis of circulating blood mononuclear cells (MNCs). Immediately after PCI, 10 ml of circulating blood was collected from the peripheral vein and from the culprit coronary artery, respectively. Fluorescence-activated cell sorting (FACS) analysis was performed to determine the lineage−CD45−CD133+ cell content in these MNCs. Figure S1 shows the protocol of VSEL isolation and analysis. The vselMSCs were isolated and purified from the isolated MNCs as previously described. 26,27 As a control, uMSCs were cultured from the same patient's blood MNCs. The uMSCs were obtained from the above-mentioned MNCs via the adherent culture method. The hESC line H7 was purchased from SIDANSAI Biotechnology CO. (Shanghai, China, 0204-001) and used as the positive control. Microarray analysis. To analyze the anti-apoptotic genetic similarity between vselMSCs and hESCs, these cells were produced over three sequential independent passages and hybridized to six Affymetrix HG-U133A chips. We compared the similarities of HIF-mediated anti-apoptotic genes between vselMSCs and hESCs using GEarray Expression Analysis Suite software containing 112 genes. Hypoxic treatment. Cells were removed and exposed to hypoxic (1%) oxygen levels in a water-jacketed CO2 incubator. The hypoxic condition was maintained throughout the performance of all subsequent analyses. Analysis of cell proliferation and apoptosis. Cell proliferation was assessed by fluorescence staining for the proliferation marker Ki67 using FACS. Apoptotic cell death under normoxic and hypoxic conditions was evaluated using annexin V (Roche Diagnostic, Indianapolis, IN, USA) and propidium iodide (PI). Echocardiography. Thirty days later, cardiac function was evaluated by echocardiographic assessment of LVEF, LVFS, LV diastolic area (LVDa), and diastolic diameter (LVEDd), and the structural benefits of therapy were evaluated by measuring the LV infarct size as determined by echocardiography. Histology and immunofluorescence. The left ventricles of the remaining rats were weighed to calculate the ratio of left-ventricular weight to body weight. The size of the infarct was obtained by calculating the percentage of the infarcted area against the whole LV area using a digital imaging program (Scion Image 4.03, Bethesda, MD, USA). The tissues from the autopsy specimens were embedded in paraffin or frozen for cryostat sectioning and were then stained with hematoxylin and eosin or used in immunofluorescence assays. For immunocytofluorescence, cells were fixed with fresh 4% paraformaldehyde in PBS. QRT-PCR and immunoblotting. The cells and the autopsied tissues were collected and pulverized to extract RNA or protein for qRT-PCR and immunoblotting. Table 1 lists the sequences of the primers and probes used to analyze the expression of the human and rat genes. Statistical analysis. The results are expressed as the mean ± S.E.M. and were tested for significance using analysis of variance for multiple comparisons. Chi-square analysis was used to compare survival rates between groups. A P-value of <0.05 was considered statistically significant. Online data supplementary figures.
Supplementary Figure S1 in the online-only Data Supplement presents the experimental flow of the vselMSC development, and analysis of cooperation between HIF-2α and Oct4 in regulating vselMSC pluripotency, survival, proliferation, and therapeutic potential. Conflict of Interest The authors declare no conflict of interest.
2018-02-17T12:37:24.792Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "d5ed664e2bc4fbd8300909d911cad72804469f5c", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/cddis2016480.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d5ed664e2bc4fbd8300909d911cad72804469f5c", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
225142171
pes2o/s2orc
v3-fos-license
Correlation between adhesion strength and phase behaviour in solid-supported lipid membranes phase transitions observed in the layer of adsorbed lipid vesicles/membranes. It paves the way to explore structural changes on more complex biointerfaces by acoustic-based sensors. Introduction Lipid vesicles are self-assembled structures customarily used as model systems for cell membrane basic studies [1,2], as nanocontainers for bio-reactions [3] and in biotechnology applications such as drug delivery or biosensors [4,5].When supported on solid-surfaces, they might form intact supported vesicle layers (SVLs) or eventually rupture into planar supported lipid bilayers (SLBs).The latter are the result of spontaneous adsorption of small (diameter ≤ 200 nm) vesicles onto solid surfaces.The geometry of SVLs captures the volume to area ratio of the vesicles, strength of adhesion, membrane bending properties and osmotic stress within the supported layer making SVLs useful biomimetic platforms to probe membrane deformation.The latter plays an important role in biological processes such as adhesion, budding, lipid membrane exchange, fission and fusion [6][7][8].SVLs are mimics to endosomes or exosomes, which are important for chemical transport and intercellular communication [9,10]; however, these systems are optically inaccessible and the experimental investigation of their deformation is not straightforward.Adsorbed vesicles onto inorganic surfaces serve also as model systems relevant to biocompatibility studies.The shape of a vesicle upon adsorption to a surface is determined by the interplay of adhesion, bending and geometrical constraints.This interplay is theoretically studied starting from a simple model in which the membrane experiences a contact potential arising from the attractive surface.Let us recall the free energy F expression of an adsorbed vesicle in terms of a simple model taking into account the adhesion energy, the local bending energy term and the geometrical constraints [6,11]: The first two terms depend on the local bending modulus κ, on the Gaussian curvature modulus κ G and on C 1 , C 2 , and C 0 denoting the two principal curvatures, and the (effective) spontaneous curvature, respectively, and dA being an infinitesimal membrane area element.The third term is the adhesion free energy with W being the strength of adhesion and A c the contact area of the membrane and the surface.The last two terms represent the volume (V) and area (A) constraints with corresponding Lagrange multipliers P and Σ. Vesicle deformation upon adhesion depends strongly on the lipid organization of the vesicle membrane, which in turn is intimately linked to their phase behaviour [12][13][14][15].The study of vesicles within such a small size range typically requires surface-sensitive techniques, since vesicles with diameter ≤ 200 nm are optically inaccessible structures.Moreover, the study of these phenomena by traditional calorimetric approaches is often hindered by the instability of SUV dispersions, which tend to fuse into larger LUVs and sediment over long-term measurements [16,17]. 
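The explicit form of Eq. (1) referred to above did not survive the text extraction. A plausible reconstruction, assembled only from the terms enumerated in the surrounding text (local bending with modulus κ and spontaneous curvature C0, Gaussian curvature with modulus κG, adhesion of strength W over the contact area Ac, and the volume and area constraints with Lagrange multipliers P and Σ), is sketched below; the sign convention of the constraint terms is an assumption rather than a quotation of the original equation.

F = \frac{\kappa}{2}\oint \left(C_1 + C_2 - C_0\right)^2 \,\mathrm{d}A
    + \kappa_G \oint C_1 C_2 \,\mathrm{d}A
    - W A_c
    + P V + \Sigma A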
In this context, quartz crystal microbalance with dissipation monitoring (QCM-D) has recently emerged as a versatile technique to detect and characterize phase transformations of solid-supported lipid membrane geometries, namely SLBs and SVLs [18].QCM-D is an acousticbased, label-free technique used extensively in bio-interfacial science and has particularly contributed to the understanding of the kinetics of adsorption and formation of supported lipid bilayers [19][20][21][22] and probing their interactions with relevant biomolecules [23,24].QCM-D is highly sensitive to mass and energy dissipation changes at the solidlipid layer-liquid interface.In the particular case of adsorbed vesicles, it can detect changes in their geometry (shape) and membrane conformation when these systems undergo phase transitions [25][26][27].In this work we address the question whether and how the supported membrane shape affects the phase transitions in solid-supported membranes.Vesicles of dipamitoylphosphatidylcholine (DPPC) are chosen, since DPPC is a saturated phospholipid ubiquitous in eukaryotic cells and a well-known lung surfactant, whose main phase transition is well-characterized in bulk [28,29].Two solid surfaces bearing different adhesion levels are used, polycrystalline Au, an interesting transducer material owing to its good thermal and electrical conductivities, plasmon resonance and biocompatibility [30], and SiO 2 which bears a negative charge and additional electrostatic interactions [31][32][33] in determining adhesion [34,35].The correlation between surface wettability and membrane phase transitions has been explored as a function of vesicle size and adsorption temperature.Numerical calculations based on free energy minimization of Eq. ( 1) provide complementary information on the adsorbed vesicle systems. Vesicle preparation DPPC lipid in powder form was first dissolved in spectroscopic grade chloroform, and the solvent was evaporated under a mild flow of nitrogen in a round bottomed flask.The lipid film was kept under low pressure overnight to remove any traces of remaining solvent.The film was then hydrated with HEPES buffer to 1 mg/mL under continuous stirring in a temperature-controlled water bath at 55 °C (well above the melting temperature of DPPC T m ~41.5 °C).Large unilamellar vesicles (LUVs) and small unilamellar vesicles (SUVs) were formed by extrusion through filters with different pores sizes (100 nm or 30 nm) for a fixed number of passes.LUVs were formed after 25 passes through a 100 nm-pore filter, while SUVs were formed after 25 passes through a 100 nm-pore filter followed by 15 passes through a 30 nm-pore filter. Dynamic light scattering Vesicle effective sizes and polydispersity were determined by dynamic light scattering (DLS) (Malvern Zetasizer Nano ZS, Malvern, UK).The obtained mean diameters and polydispersity indexes of the samples used are displayed in Table 1. 
Quartz crystal microbalance with dissipation monitoring (QCM-D) A Qsense E4 instrument (Gothenburg, Sweden) monitoring the frequency and dissipation changes, Δf and ΔD was used.Q-sense E4 also enables heating or cooling temperature scans in the range between 15 °C and 65 °C.AT-cut quartz crystals with Au and SiO 2 coating (diameter 14 mm, thickness 0.3 mm, quoted surface roughness < 3 nm, and resonant frequency 4.95 MHz) were used.The Au-coated quartz sensors were cleaned with a 5:1:1 mixture of Milli-Q water (resistance of 18.2 MΩ cm at 25 °C), ammonia and hydrogen peroxide, and were UV-ozone treated with a UV-ozone cleaner (Bioforce Nanosciences, Germany) for 20 min, followed by rinsing in Milli-Q water and drying with N 2 .SiO 2 -coated quartz sensors were cleaned in a solution of sodium dodecyl sulfate (2% SDS) for several hours and UV-ozone treated for 20 min, followed by rinsing in Milli-Q water and drying with N 2. The changes in Δf/n and in ΔD were monitored at four different overtones (from 3rd to 9th).The lipid vesicles were inserted into the QCM-D cells with a flow rate of 50 μL/min.Vesicle adsorption experiments were carried out at two different temperatures, at 16 °C and at 50 °C, where DPPC is in phases possessing different bending rigidities, the gel phase and the liquid-disordered phase, respectively.The temperature stability is in the order of ±0.02 °C around the set value.First, a baseline with pure HEPES buffer was established and afterwards lipid vesicles were injected over the Au-coated or SiO 2 -coated sensor chips.After reaching a stable supported membrane layer the pump was switched off and the ensemble was left to stabilize for several hours.Subsequent temperature scans, upon heating and cooling, were performed at a rate of 0.4 °C/min, maintaining a 60 min stabilization time between successive temperature ramps.Measurements were repeated at least three times to check the reproducibility of the results. Contact angle measurements Contact angle (CA) measurements were carried out using an Attension ThetaLite from Biolin Scientific (Sweden) based on the sessile drop method.A small drop (3 μL) of Milli-Q water or diiodomethane was deposited onto clean, UV-ozone treated Au-coated or SiO 2 quartz surfaces, and the shape of the drop formed on the surface was analysed.The contact angle of the 3 μL droplet of either ultrapure water or diiodomethane was determined over a time period of 10 s using a recording speed of 20 frames/s and, afterwards, the average of several drops was calculated.The CA was measured at several points, and an average value was extracted.All contact angles were measured at a room temperature.Surface free energies of UV-ozoned Au and SiO 2 surfaces γ sv (polar γ sv p and dispersive γ sv d parts) were determined based on the Owens, Wendt, Rabel and Kaelble method [36] and are included in Table 2. Details on calculations are included in the Supplementary material.3. Results and discussion Vesicle adsorption and layer formation Fig. 1 shows the Δf and ΔD responses (represented for the third overtone) during vesicle adsorption and supported lipid layer formation on SiO 2 and Au at temperatures well below (16 °C) and above (50 °C) the melting temperature of DPPC.The mechanistic scenario of the observed Δf and ΔD changes is governed by a delicate balance between the adhesive energy from lipid-surface interactions (which tends to maximize the contact area between the vesicle membrane and the surface) and the opposing effect of bending the bilayer [37]. 
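The Owens-Wendt-Rabel-Kaelble (OWRK) evaluation of the surface free energies in Table 2 is only referenced above, with details deferred to the Supplementary material. As a rough illustration of that calculation, the sketch below solves the two OWRK equations for the dispersive and polar components of the solid surface energy from water and diiodomethane contact angles; the probe-liquid surface-tension components are commonly quoted literature values, and the contact angles used are placeholders rather than the measured ones.

import numpy as np

# Probe liquids: total, dispersive and polar surface tension components (mN/m).
# Commonly quoted literature values, used here only for illustration.
WATER = {"g": 72.8, "gd": 21.8, "gp": 51.0}
DIIODOMETHANE = {"g": 50.8, "gd": 50.8, "gp": 0.0}

def owrk(theta_water_deg, theta_dim_deg):
    """Return (gamma_s^d, gamma_s^p) of the solid in mN/m from two contact angles."""
    # OWRK: gamma_l*(1+cos(theta))/2 = sqrt(gd_s*gd_l) + sqrt(gp_s*gp_l)
    # Diiodomethane is essentially apolar, so it fixes the dispersive part alone.
    lhs_dim = DIIODOMETHANE["g"] * (1 + np.cos(np.radians(theta_dim_deg))) / 2
    gd_s = lhs_dim**2 / DIIODOMETHANE["gd"]
    # The water equation then yields the polar part.
    lhs_w = WATER["g"] * (1 + np.cos(np.radians(theta_water_deg))) / 2
    gp_s = (lhs_w - np.sqrt(gd_s * WATER["gd"]))**2 / WATER["gp"]
    return gd_s, gp_s

# Placeholder contact angles (degrees) for a UV-ozone-treated hydrophilic surface.
gd, gp = owrk(theta_water_deg=20.0, theta_dim_deg=40.0)
print(f"gamma_s^d = {gd:.1f} mN/m, gamma_s^p = {gp:.1f} mN/m, total = {gd + gp:.1f} mN/m")

Because the apolar liquid determines the dispersive part on its own, the polar part follows directly from the water measurement, which is the reason two probe liquids suffice.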
When adsorbed onto Au, a monotonic frequency decrease (mass increase) and dissipation increase can be observed reaching constant nonzero Δf and ΔD plateau values.Such time-dependent responses provide evidence that oxidized Au facilitates non-ruptured vesicle adsorption towards the formation of acoustically non-rigid vesicle layers with saturated coverage. Although the vesicle adsorption profiles are similar for small and large vesicles when adsorbed in both the gel and liquid disordered phases, it is worth exploring the size-and temperature-dependent differences.After initial adsorption both Δf and ΔD plateau values clearly increase (in absolute value) with increasing vesicle size.For LUVs on Au at T < T m , an overshoot behaviour was observed in the dissipation signal but not on the frequency signal.This peak has been seen in previous works and was ascribed to vesicles 'rocking and rolling motion' [38]. Larger vesicles carry more trapped aqueous buffer (larger frequency shifts) and thus are softer structures (larger dissipation shifts).When adsorbed at T > T m the same trend with vesicle size was observed, although the plateau values were significantly smaller.At T < T m the DPPC bilayer envelope of the adsorbed vesicles is in the gel phase and its bending modulus κ ~10 • 10 −19 J [39] renders the vesicle membrane more stiff.Above T m , the bilayer is in the liquid disordered phase and the modulus attains about a ten times smaller value of κ ~1 • 10 −19 J.This makes the membrane of vesicles softer and more deformable upon adsorption with larger contact area, yielding a smaller number of vesicles for similar surface coverage. When adsorbed onto SiO 2 , a completely different pattern of behaviour is observed below and above T m .At T < T m , monotonic frequency and dissipation changes take place reaching constant non-zero Δf and ΔD plateau values.The plateau values follow the same trend with vesicle size as observed in the case of Au.However, the reached plateau values are smaller on SiO 2 , indicating that the stronger SiO 2 adhesion might favour vesicle deformation, induce the formation of transient pores [40], local vesicle rupture events and formation of small bilayer patches (from the ruptured vesicles).It is worth noting that QCM-D is very sensitive to hydrodynamic (wet) mass and the local, partial formation of SLBs might be masked by the adsorption of vesicles on top or in between the bilayer patches [41].At T > T m an initial monotonic adsorption of vesicles is observed until a critical surface coverage is reached (minimum in Δf and maximum in ΔD), followed by vesicle fusion and rupture to form SLB [21,22,37,41].The surface-vesicle interactions on SiO 2 are stronger than on Au and the SiO 2 -adsorbed vesicles deform to a greater extent with higher contact area and higher membrane lateral tension, making them more prone to pore formation or rupture and fusion [42] and therefore less stable.Note that lipid bilayers in the gel phase can sustain higher lateral tensions and therefore rupture appears at higher values of critical tension (i.e.lysis tension) [43]. 
Au has been oxidized and is thus highly hydrophilic, and its isoelectric point is around 4.5 to 5 [44]. The isoelectric point of SiO2 is around 2.5 [45]; therefore both Au and SiO2 are negatively charged under the current experimental conditions (pH = 7.4), SiO2 even more so than Au. This confers on Au and SiO2 an attractive potential to support intact vesicles of zwitterionic lipids such as DPPC. The lateral tension arising from attractive forces between the adsorbing vesicles and the surface does not exceed the threshold for permanent membrane rupture in the case of Au. As a matter of fact, the polar component of the surface free energy of Au is half the value calculated for SiO2 (see Table 2), while the dispersive component is larger, in agreement with the large Hamaker constant of Au [46]. In addition, the formation of transient membrane pores [40] cannot be excluded, especially at T > Tm, since in the liquid disordered phase the lipid bilayer area stretching modulus is 3 to 4 times smaller than in the gel phase [47]. The theoretically predicted attraction of zwitterionic vesicles (PC headgroup) to negatively charged surfaces (Au or SiO2) is accompanied by a more perpendicular average orientation of the zwitterionic head groups [31][32][33], i.e. more tightly packed lipids, which may favour the gel phase of lipids in the adhered part of the vesicle membrane. The extent of vesicle deformation upon adsorption was estimated following the approach introduced by Tellechea et al. [38]. This method consists of plotting the −ΔD/Δf ratio vs −Δf for all overtones during initial adsorption (low vesicle surface coverage), which typically shows a linear decrease over a large range of frequency shifts. Extrapolation of this linear decrease to a frequency-independent intercept with the −Δf axis (where overtones intersect) provides a value of the thickness of the adsorbed vesicle layer h, referred to as the Sauerbrey thickness: h = −Δf C/ρ, where C = 18 ng/(cm2 Hz) and ρ = 1 g/cm3 is the density of the film [44]. This approach assumes a complete surface coverage at the end of the adsorption process, where the presence of trapped buffer has been diminished to occupy only the void spaces between densely packed vesicles (the −ΔD/Δf ratio is close to zero and the −Δf intercept values were the same on the extrapolation of a linear regression) [38,48,49]. Though this approach might not provide a correct absolute value of the real thickness, it provides valuable qualitative information. Sauerbrey layer thickness data are included in Table 3, and it is noteworthy that the obtained values are systematically smaller than those obtained for vesicles in bulk by DLS measurements. For LUVs and SUVs ruptured on SiO2 at T > Tm, the thicknesses obtained are 28 nm and 23 nm, respectively, and this overestimation is likely related to the fact that the layers are not fully homogeneous and might bear some degree of oligolamellarity, especially in the case of ruptured LUVs. The −Δf intercept values have been obtained from the average of four overtones. From the Sauerbrey thickness values, the extent of vesicle deformation Δd (in percent) was calculated as the relative change of size upon adsorption as compared to the original size of vesicles dispersed in buffer, Δd = [(d_DLS − h)/d_DLS] × 100. In Fig. 3, the extent of vesicle deformation is displayed for all cases.
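To make the arithmetic of the preceding paragraph concrete, the minimal sketch below converts a −Δf intercept into a Sauerbrey thickness using the constants quoted above (C = 18 ng/(cm2 Hz), ρ = 1 g/cm3) and then evaluates the deformation extent Δd; the intercept and DLS diameter used here are illustrative placeholders, not values from Table 3.

C_SAUERBREY = 18.0   # ng/(cm^2 Hz), value quoted in the text for the sensors used
RHO_FILM = 1.0       # g/cm^3, assumed film density

def sauerbrey_thickness_nm(minus_df_hz):
    """Sauerbrey thickness h = -dF * C / rho, converted to nanometres."""
    mass_per_area = minus_df_hz * C_SAUERBREY * 1e-9   # g/cm^2
    return mass_per_area / RHO_FILM * 1e7              # cm -> nm

def deformation_percent(d_dls_nm, h_nm):
    """Extent of deformation relative to the free-vesicle DLS diameter."""
    return (d_dls_nm - h_nm) / d_dls_nm * 100.0

# Placeholder inputs (not the measured intercepts): a -dF intercept of 155 Hz
# and a DLS diameter of 140 nm for LUVs.
h = sauerbrey_thickness_nm(155.0)
print(f"h = {h:.1f} nm, deformation = {deformation_percent(140.0, h):.1f} %")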
The time-independent ΔD vs Δf curves provide additional information on the structural properties of the vesicle adsorbed layers as a function of vesicle size and temperature, reflecting the interplay between bending, adhesion and steric contributions in the vesicle adsorption process.The ΔD-Δf curves in Fig. 4 exhibit steeper slopes with increasing vesicle size, indicating a larger increase in energy dissipation per adsorbing vesicle.At T > T m , LUVs vesicles are prone to larger deformation and are less efficiently packed (so-called steric effect) [50], resulting in a greater viscoelastic contribution per adsorbing vesicle both on Au and SiO 2 , the dissipation being smaller in the latter as a result of an increased adhesion interaction and lateral tension by the SiO 2 surface.When adhered onto SiO 2 , the fate of adsorbed vesicles depends strongly on temperature; intact but less dissipative vesicle layers are formed as compared to Au at T < T m , while a re-entrant behaviour in the ΔD-Δf curves is observed at T > T m .This results from the combination of three factors i) the increased lateral tension with increased surface adhesion, ii) the decreased bilayer bending modulus and iii) decreased critical tension for rupture upon increasing temperature, resulting in an easier vesicle fusion and rupture. Calculations of vesicle shapes upon adsorption Fig. 5 shows the axisymmetric shapes of the adsorbed vesicles corresponding to minimal free energy given by Eq. ( 1).In order to better illustrate the effect of the different contributions involved in vesicle adhesion, the vesicle shapes have been calculated for values of reduced volume v = 6π 1/2 V/A 3/2 [11,51] and parameter w = WR s 2 /κ, covering a broad range, namely v = 0.54, 0.75, 0.85, 0.95 and w = 0.4, 6.4, 64 and 640.A detailed description of the numerical procedure for the determination of the vesicles shapes [52] is included in the Supplementary material.For a constant value of the reduced volume v, the calculated shapes depend on the competition between the bending and adhesion contributions.At large w adhesion dominates and favours a large contact area.At small w the obtained shapes of free (non-adhered) membrane parts of the vesicles are more undulated (Fig. 5A and B), since adhesion is smaller and the bending energy takes over.Conversely, at large w, the shapes of the free non-adhered membrane parts become increasingly more spherical (Fig. 5C and D) to maximize the contact area, i.e., the shapes at large w resemble more and more a part of the spherical surface.For very large w (small bending modulus, large W and large vesicles) the shapes of the adhered vesicles approach the limiting shape composed of free (non-adhered) part of the membrane which is spherical, and another flat adhered part (Fig. 5D).As it can be seen in Fig. 
5D, the limiting shapes depend only on the value of the reduced volume v [53].The largest part of the vesicle mass and volume is in this case distributed closer to the sensor surface.The obtained nearly limiting shapes for very large values of w (strong adhesion limit) are predominantly determined by the tendency of the vesicles to achieve the maximal possible reduced contact area between the vesicle membrane and the surface (A c /A) at given reduced volume v.In this regime the influence of the bending energy is very weak and the limiting shapes would be the same also for the initially weakly adsorbed prolate vesicle shapes.It is worth noting that QCM-D experiments were carried out at a rather large vesicle concentration thus the packing efficiency of adhered vesicles and the vesicle contact area can be reduced due to steric effects [50]. In order to better visualize the adhesion strength W behind the results presented in Fig. 5, we have calculated W = wκ/R s 2 for κ ~10 • 10 −19 J at room temperature (i.e. in the gel phase) and for two values of R s : 70 nm (LUVs) and 30 nm (SUVs), relevant to this work.For w = 64 (corresponding to the shapes of panel C in Fig. 5) we obtain the values W ~15 mN/m (LUVs) and W ~70 mN/m (SUVs), corresponding to the range of measured surface energies of water on Au and SiO 2 surfaces given in Table 2. The first (local bending) energy term in Eq. ( 1) is scale invariant for C 0 = 0 [11], i.e., it is not dependent on the size of the vesicle.Since the second (adhesion) energy term in Eq. ( 1) is not scale invariant, it becomes increasingly more important for larger vesicles, i.e. larger w since parameter w is proportional to R s 2 (see also Eq. ( 12) in Supplementary material).As a consequence, the adhesion is more probable and energetically favourable for larger vesicles.Thus, the size of the vesicle plays an important role in the process of vesicle adhesion, as shown also in the experiments presented in this work (Table 3).The parameter w is proportional to R s 2 making LUVs more prone to adhere on the larger contact area and deform more.Their shape is closer to the limiting shape not only because they are larger, but also because they are strongly adhered (see Fig. 5D).The parameter w is on other hand inversely proportional to the bending modulus κ and since κ is smaller for T > T m , adhesion becomes stronger for T > T m and the vesicles shapes are closer to limiting shapes (see Fig. 5D).Moreover, w is larger for SiO 2 surface than for Au because of the larger strength of adhesion W of the SiO 2 surface.At constant reduced volume v the thickness (height) of the adhered vesicles should be increasing with w for large values of v (see Fig. 
5).However, this effect was not observed in the QCM-D experiments for the surface with the largest adhesion strength SiO 2 .Indeed, on the SiO 2 surface the reduced vesicle volume v is decreased most likely due to the formation of transient pores [40,54] and/or increase of the vesicle area due to membrane "hidden" pool of lipids, conserved in the form of membrane nanotubular protrusions [55], which is released due to increased membrane lateral tension in the strong adhesion regime.Regarding the formation of the transient pores, it is more plausible to think that they are formed at T > T m .This is because in the liquid disordered phase the lipid bilayer area stretching modulus is smaller than in the gel phase [47] and the lipid bilayer can sustain also lesser critical lateral tension [43] needed for the membrane rupture and formation of transient pores.Accordingly, it can be seen in Table 3 that the thickness h of the adhered vesicles is substantially decreased at T > T m , and, as expected, the effect is more pronounced for the SiO 2 surface. Phase transitions Fig. 6 shows an overview of the temperature dependence of the firstorder derivative of Δf shifts (3rd overtone) upon the first heating run for all LUVs and SUVs vesicles adsorbed at T < T m and T > T m on Au and SiO 2 surfaces.Upon heating, lipid bilayers change from a stiffer gel phase to a softer liquid-disordered phase.As shown in recent works [18,[25][26][27], these changes are reflected as anomalies in both frequency and dissipation shift signals and, in particular, in their first-derivatives with respect to temperature that display clear extrema.The anomalies are governed by the interplay between changes in thickness, stiffness and, in the case of large adsorbed vesicles, by the presence of hydrodynamic channels, which change the shape of the adsorbed vesicles [26]. The shape of the dΔf(T)/dT obtained from QCM-D measurements is reminiscent of that of the isobaric heat capacity C p (T) from calorimetry during a melting transition [29,56].In this respect, it is instructive to draw an analogy between these two well-differentiated experimental techniques.Calorimetry is well-established and based on changes in thermal properties when vesicles undergo a phase transition, in this case, the main transition (heat absorption or release).On the other hand, QCM-D is based on changes in the viscoelastic properties of adsorbed vesicles as a result of changes in bilayer thickness, rigidity and vesicle shape.The calorimetric signal yields a maximum in C p corresponding to a finite jump in enthalpy H(T) along the first order lipid melting transition.The size of this jump scales in this case with the mass; the larger the mass, the larger the heat absorbed or released.In order to draw conclusions on the size of the anomalies observed in dΔf(T)/dT let us comment different aspects of Fig. 6 regarding surface free energy, vesicle size and adsorption temperature. Main phase transition of SVLs on Au surfaces We start by supported vesicle layers formed on Au exposed to UVozone, where no global rupture of vesicles to form SLBs was observed. 
The main transition appears in all cases as a clear, single-peak anomaly and it takes place at the expected temperature range as compared to vesicles in bulk by calorimetric measurements [29,56,57].In some cases, a second maximum deviating from the linear, regular behaviour, can be observed at lower temperatures, which we ascribe to the pretransition.The appearance of this maximum will be discussed later.In order to obtain a more quantitative picture of temperature and size effects on the main phase transition, relevant parameters such as the area below the peak, the height of the maximum, peak width at half maximum, ΔT 1/2 , and the transition temperature (temperature corresponding to the peak maximum) have been extracted in each case and are displayed in Fig. 7.It is observed that the size of the peak, as well as the area below scale with effective vesicle size.Large adsorbed vesicles carry more trapped aqueous buffer and are more deformable structures, thus, stronger changes in Δf and ΔD occur along the main transition.First-order derivatives of the dissipation can be found in Fig. S2 of the Supplementary material.The width of the peak at half maximum is related to the cooperativity of the transition, i.e., measure of the degree of intermolecular cooperation during the main transition. ΔT 1/2 decreases as the size of the adsorbed vesicle layer, indicating that the melting takes place in a narrower temperature range as adsorbed vesicles deform less on the solid surface.ΔT 1/2 increases as size decreases to a greater extent when vesicles are adsorbed T > T m .For vesicles adsorbed at T < T m where deformation is very small, the transition temperature T m shifts to lower values when decreasing size, since the higher curvature in SUVs decreases the lateral pressure [29,58].In addition, adsorbed layers of LUVs might be a combination of mostly unilamellar and some oligolamellar vesicles.Lamellarity results in increased cooperativity during the membrane melting owing to the strengthened bilayer-bilayer interactions [59].In general, the T m values fall within the expected temperature range, although they are slightly higher than those observed for multilamellar vesicles (MLVs) and LUVs given the rather fast scanning rate (0.4 °C/min).At T > T m , the extent of vesicle deformation is larger, they lie closer to the surface, presumably because of the decreased reduced volume v (see Fig. 5) and interactions with the substrate broaden the transition and increase the transition temperature. Main phase transition of SVLs and SLBs on SiO 2 surfaces Before analysing the phase transition behaviour, let us first recall the type of layers formed on SiO 2 surfaces.At T < T m , DPPC vesicles were adsorbed and deformed to a larger extent as compared to Au; however, no global rupture and thus no SLB formation was observed.At T > T m , both LUVs and SUVs are adsorbed, fused and ruptured to formed SLBs.As it can be observed in Fig. 6, the phase transition behaviour of these layers is greatly affected by the strong interactions between SiO 2 and the vesicles, resulting into films lying closer to the surface.For SVLs formed at T < T m , the transition is greatly broadened and takes place in a double-peak manner.Fig. 
8 presents a closer view of the double-peak melting anomaly. Both peaks have been fitted to Gaussians in an effort to decouple their relative sizes. Decoupling effects have been experimentally observed for SLBs and double SLBs using AFM [60,61], DSC [62], neutron reflectometry [63] and single particle tracking [64]. They are ascribed to the stronger interaction between the lipid head groups of the proximal leaflet and the solid surface inducing a more tightly packed lipid distribution [31][32][33], which may favour the gel phase of lipids in the adhered part of the vesicle membrane [33], and/or to the highly confined and orientationally ordered water layer between the proximal leaflet and the solid surface [31,32]. As a matter of fact, the viscosity of the interfacial water layer is 10^4 times larger than that of bulk water [64]. In the present work, apart from in DPPC SLBs on SiO2, we also observe two peaks in the transitions of DPPC SVLs formed on SiO2. For the latter, the double peak is attributed to a decoupling effect in the melting between the lower part of the vesicle that is closer to the surface and the upper part at the vesicle-buffer interface. The low-temperature peak takes place at a temperature similar to the melting of vesicles adsorbed on Au (Tm ~42 °C). Hence, it confirms that this anomaly corresponds to the part of the vesicle envelope which is less affected by the solid surface. The global size of both peaks scales with the effective adsorbed vesicle layer thickness. The individual peak sizes at the third overtone show an opposite pattern of behaviour between SVLs and SLBs. SVLs on SiO2 are greatly deformed and a large part of their area is in contact with the surface, while the remaining part melts as in bulk. As we shall see in the following section, SVLs formed on SiO2 are quite unstable upon thermal cycling due to their largely deformed shape. In SLBs the peaks are less pronounced than in SVLs, since the former are thinner and stiffer layers. Figs. S3 to S6 in the Supplementary material provide a complete overview of the transitions observed for all the systems in terms of frequency and dissipation at two overtones. The peak shapes show a complex behaviour for SLBs formed from LUVs owing to the fact that these layers might be a combination of multilayers and single bilayers. SLBs formed from SUV precursors should be mostly single bilayers and show double peaks which are significantly broad. We aim to further explore the shapes of these two peaks in a follow-up, systematic study. Reversibility of the main transition and the pretransition The stability of the main transition upon successive heating and cooling cycles has also been investigated. Fig. 9 displays an example of such successive heating and cooling runs. In some cases, a broader transition at lower temperatures, attributed to the pretransition, is observed upon heating (see Fig.
6).The pretransition typically takes place between the ripple phase and the liquid-disordered phase, the former being linked to the formation of periodic ripples on the membrane surface [66].Ripples with periodicity ranging from 12 nm to 16 nm appear along the so-called stable ripple phase [67], subject to variations as a function hydration and thermal history [68][69][70].The stable ripple phase is formed upon heating the sample from the gel phase to the liquid phase; a metastable ripple phase typically appears on cooling from the liquid crystalline phase to the gel phase [70].From a biological viewing point, pretransition attracts interest arising from its potential to drive the membrane protein assembly via the so-called 'orderphobic' effect [71].The question whether the pretransition takes place in solid-supported small vesicles has not been yet clearly answered; note that its existence could be hampered by enhanced curvature or even by the presence of the solid support.Its study on solid-supported membranes is limited to AFM measurements of lipid multilayers [72] and SLBs [73][74][75].For single phospholipid SLBs, the formation of ripples is precluded due to lateral stress from the solid substrate, while in the presence of tris(hydroxymethyl) aminomethane (Tris) in a buffer solution, the ripples reappear [73].The mechanisms behind this phenomenon are not fully understood.To our knowledge, the pretransition in small solid-supported vesicles is restricted to systems supported on silica beads by differential scanning calorimetry [76], where no pretransition could be observed.In our experiments, when the formed layers are intact vesicles, the transition is more clearly visible for LUVs.The latter, as explained above, are a combination of mostly unilamellar and few oligolamellar structures and possess enough free area for the pre-transition to take place.Small vesicles are mostly unilamellar and in the absence of lamellar stacks (geometrical constraints) the pretransition is expected to be weaker and overlap with the main transition, which is already shifted towards lower temperatures for small vesicles [29].For SLBs, the pretransition is very clear for those formed from adsorbed LUVs that ruptured into thin rigid layers.It is thus tempting to state that, despite being rigid and thin, those SLBs consist of several bilayers (multilamellar) where the pretransition can take place and be detectable. Unlike the main transition, the pretransition is not detected upon cooling.The irreversibility of the pretransition has been observed by Tenchov et al. using time-resolved X-ray diffraction [77].After reaching the liquid disordered phase upon heating, the formation of ripples upon the following cooling might be strongly hindered and require a very long time, thus undercooling occurs. Conclusions We have examined the adhesion and phase behaviour of DPPC vesicles onto two types of surfaces, SiO 2 and Au bearing different adhesion levels.Relevant parameters associated with vesicle adhesion, such as vesicle radius and bending modulus, were also varied. 
On a given surface, vesicle deformation is promoted for large vesicles at temperatures above their melting (small bending modulus).SiO 2adsorbed vesicles deform to a greater extent than their counterparts on Au, resulting in higher contact area and higher membrane lateral tension, making them more prone to pore formation or rupture and fusion.Numerical calculations based on free energy minimization illustrate the interplay of bending and adhesion contributions into the vesicle shapes of different sizes and bending modulus on surfaces bearing different degrees of adhesion. The temperature derivatives dΔf/dT and dΔD/dT show clear signatures of the phase transitions during heating and cooling runs.The main transition is reversible and the transition peak size scales with vesicle size.When adsorbed on Au at T < T m , larger vesicles display a more cooperative transition, whereas the transition temperature T m shifts downwards for smaller vesicles, reflecting that a larger curvature decreases the lateral pressure.At T > T m , the extent of vesicle deformation is larger, the interactions with the substrate broaden the transition and increase the transition temperature.When adsorbed onto SiO 2 the main transition is greatly broadened and appears as a double-peak anomaly.The peak size appears larger for intact vesicles than for planar bilayers.The double peak can be explained as a decoupling effect in the melting between the lower part of the membrane that is closer to the surface and the upper part at the membrane-buffer interface.The stronger interaction between the lipid head groups of the proximal leaflet and the solid surface induces a more tightly packed lipid distribution.This combined with the highly confined and orientationally ordered water layer make the adhered part of the membrane melt at a higher temperature than in bulk. Fig. 1 . Fig. 1.Left column: the frequency (top panel) and dissipation shifts (bottom panel) are shown for vesicles adsorbed at 16 °C.Right column: the corresponding frequency (top panel) and dissipation shifts (bottom panel) are shown for vesicles adsorbed at 50 °C.Black colour: SUVs on SiO 2 ; red colour: SUVs on Au; blue colour: LUVs on SiO 2 ; green colour: LUVs on Au. Fig. 2 displays the extrapolated Sauerbrey thickness for LUVs adsorbed at 16 °C and 50 °C onto SiO 2 and Au surfaces.The corresponding figure for SUVs adsorbed at 16 °C and 50 °C onto SiO 2 and Au surfaces is included as Fig. S1 in the Supplementary material. Fig. 3 .Fig. 4 . Fig. 3. Extent of vesicle deformation upon adsorption on Au and SiO2 surface above and below the melting temperature of LUVs (left panel) and SUVs (right panel).The red colour corresponds to vesicles adsorbed above melting and blue to the ones below melting. Fig. 5 . Fig. 5.The shapes of the adsorbed oblate vesicles obtained by the minimization of the free energy given by Eq. (1), determined for different values of reduced volume v and parameter w.Each row represents a different value of w: 0.4 (row A), 6.4 (row B), 64 (row C) and 640 (row D).The spontaneous curvature C 0 was selected to be zero.The calculated shapes of nonadsorbed vesicles corresponding to w = 0 and the same reduced volumes v are shown in Fig. S10 of the Supplementary material. Fig. 6 .Fig. 7 . Fig. 
6.Temperature dependence of dΔf/dT (3rd overtone) for LUVs and SUVs adsorbed at T > T m and T < T m on Au-and SiO2-coated quartz surfaces.Black colour refers to supported membranes formed on Au and blue colour to supported membranes formed on SiO 2 .The inset at the bottom right panel shows a magnification of the peak for clarity. Fig. 8 . Fig. 8. Close view of the temperature dependence of dΔf/dT (3rd overtone) for SVLs and SLBs formed from precursor LUVs and SUVs adsorbed at T > T m and T < T m on SiO2-coated quartz surfaces.Black, red and green solid lines correspond to gaussian multiple peak fitting results. Fig. 9 . Fig. 9. dΔf 3 /dT vs temperature for the 3rd overtone upon successive heating and cooling runs for large DPPC vesicles adsorbed at T < T m .Left panel: LUVs adsorbed on Au, right panel: LUVs adsorbed on SiO 2 . Table 1 Hydrodynamic mean diameters and polydispersity indexes (PI) obtained by DLS for the DPPC vesicle dispersions used in this work.The number of performed measurements per sample is n = 4. Table 3 Sauerbrey thickness h and corresponding extent of vesicle deformation Δd values.
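The double-peak anomalies on SiO2 (Fig. 8) were decoupled by fitting two Gaussians to dΔf/dT. A minimal sketch of such a fit is given below; the temperature trace is synthetic and the peak positions, widths and amplitudes are illustrative placeholders, not the experimental values.

import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(T, a1, T1, w1, a2, T2, w2):
    """Sum of two Gaussian peaks as a function of temperature."""
    return (a1 * np.exp(-((T - T1) / w1) ** 2)
            + a2 * np.exp(-((T - T2) / w2) ** 2))

# Synthetic dDf/dT trace with two overlapping melting peaks (illustrative only).
T = np.linspace(35.0, 50.0, 300)
rng = np.random.default_rng(0)
data = two_gaussians(T, 1.0, 41.8, 0.8, 0.6, 44.5, 1.5) + 0.02 * rng.standard_normal(T.size)

p0 = [1.0, 42.0, 1.0, 0.5, 45.0, 1.0]   # initial guesses for the two peaks
popt, _ = curve_fit(two_gaussians, T, data, p0=p0)
a1, T1, w1, a2, T2, w2 = popt
print(f"peak 1: Tm = {T1:.2f} C, FWHM = {2*np.sqrt(np.log(2))*w1:.2f} C")
print(f"peak 2: Tm = {T2:.2f} C, FWHM = {2*np.sqrt(np.log(2))*w2:.2f} C")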
2020-10-28T19:09:41.449Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "0d11d106fff8f9845ab692653bda555bbd761ffe", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.molliq.2020.114492", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "280cd54e9e9c37d550775b20069603bbaee9fe37", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
251780503
pes2o/s2orc
v3-fos-license
Global Journal of Earth Science and Engineering Groundwater Classification by Using Fourier Analysis The article illustrates a statistical technique for the visual representation of geochemical data. Quaternary and Pre-Quaternary groundwater samples from Northern Sinai Peninsula, Egypt, were interpreted statistically using Andrews plots, which use Fourier analysis to transform and represent a set of multivariate data by a waveform pattern. The resulting waveform patterns were classified into low, middle, and high amplitudes, following up the increase in the total dissolved solids of the samples. Comparison with the traditional hydrochemical polygonal Stiff diagrams resulted in a complete matching. The proposed mixing between the Quaternary and Pre-Quaternary aquifers has been proved via the similarity of waveform patterns of the mixed water. The application of Andrews plots is investigated by comparison with the Stiff conventional diagrams. The correlation between different amplitudes and the TDS value of every sample indicates that the amplitude increases with the increase in the salinity. Introduction Mixing between Quaternary and Pre-Quaternary groundwater in Delta Wadi El Arish, Northern Sinai, has been studied by many authors such as [5]. They suggested that the increase of Total Dissolved Solids (TDS) of the groundwater samples collected from the wells to the north of El Aish airport was attributed to the inflow of saline water into the Quaternary aquifer by vertical movement from the deep aquifer along the Lehfan fault. Gomaa [8] has referred to the Early Cretaceous sandstone aquifer as an important charging source of the shallower Quaternary aquifer. Khalil [12] established a detailed geophysical, hydrochemical, and isotope hydrological study to elucidate the source of high salinity groundwater in the delta of Wadi El Arish. His study revealed that the high salinity is attributed to the inflow of Pre-Quaternary high salinity evaporite dissolved water into the shallower fresh Quaternary groundwater forming a mixed zone to the east of El Arish city. The estimated radiocarbon age of groundwater samples in the mixed zone ranges from 900 to 8800 Y.B.P. Tritium dating refers to the mixing between modern to sub-modern or old age. The present study is an approach to classify groundwater samples and illustrate groundwater mixing using Andrews plots, which have recently achieved significant recognition in different areas. Andrews plots is a technique that uses Fourier analysis to transform and represent a set of multivariate data by a waveform pattern. The mathematical details of the method are discussed by [13]. Andrews plots have many applications in different areas, such as robust design [15], correspondence analysis techniques [10], uncertainty analysis [6], and classification techniques for Landsat images [3]. Geological and Hydrogeological Setting The geological succession of the Northern Sinai coastal zone and the delta of Wadi El Arish are shown in table (1). The Quaternary aquifer consists of three hydraulically connected water-bearing formations (1) Holocene sand dune and Upper Pleistocene old beach sand, (2) Alluvium deposits, and (3) Lower Pleistocene calcareous sandstone (Kurkar). The TDS of the Quaternary aquifer ranges from 800 to 7000 ppm. The Quaternary aquifer is characterized by low potentiometric gradient, where the potentiometric surface ranges from +1.5 to -2 meters [9]. 
The Upper Cretaceous aquifer system consists mainly of chalky limestone and shale in the upper part (Senonian) and limestone, dolomite, dolomitic limestone, and marls in the lower part (Turonian and Cenomanian). The lower boundary of the Upper Cretaceous aquifer is a marly or shaly aquiclude changing into calcareous sandstone toward the south. The upper boundary is the base of the overlying Tertiary Formation, which dominates the major part of central and northeastern Sinai. The total salinity of the Upper Cretaceous aquifer ranges from 1000 ppm in middle Sinai to 10,000 ppm in northern Sinai. The potentiometric surface of the Upper Cretaceous aquifer ranges from +600 m in the middle of Sinai to +50 m in northern Sinai. The locations of the studied water samples in both the Upper Cretaceous and Quaternary aquifers are shown in Figures (1) and (2), respectively. The hydrochemical data of the Upper Cretaceous and Quaternary aquifers (Table 2) are collected from [1,9,14]. Interpretation Collected water samples representing the Quaternary and Upper Cretaceous aquifers are interpreted statistically using Andrews plots in association with the conventional polygonal Stiff diagram. Andrews plotting is a technique that uses Fourier analysis to transform the results of multivariate data and represents a set of multivariate data by a waveform pattern [4]. The Andrews plot or curve is a way to visualize structure in high-dimensional data [7]. Anderson [2] suggested that a P-dimensional vector of measurements (X1, X2 ... Xp) be represented by the finite Fourier series shown in equation (1): f(t) = X1/√2 + X2 sin(t) + X3 cos(t) + X4 sin(2t) + X5 cos(2t) + ..., plotted over −π ≤ t ≤ π (1). That is, the measurements become the coefficients in an expression whose graph is a periodic function. Plots of the Fourier series representations of the multivariate observations will be curves that can be visually grouped [11]. The application of the finite Fourier series to the Upper Cretaceous and Quaternary groundwater samples is plotted in Figures (3 and 4), where all samples of the Upper Cretaceous aquifer have the same waveform pattern with different amplitudes. They are classified into high, middle, and low amplitude in Figure 5 (a, b, and c, respectively). The correlation between the different amplitudes and the TDS value of every sample indicates that as the salinity increases, the amplitude increases. The same waveform pattern characterizes the Quaternary groundwater samples as in the Upper Cretaceous aquifer, with little difference in amplitude. A complete agreement between the three groups resulted from the Andrews plotting and the Stiff diagram of the Upper Cretaceous water in Figure 6 (a, b, and c, respectively), where all samples have the same ionic water type Cl, SO4, HCO3, Na+K, Mg, Ca, with a little exception in the samples of El Hamma, Libni-3, El Arish19b, and El Themd, where they have Ca more than Mg. Applying the Stiff diagram to the Quaternary water (Figure 7) emphasized the same water type as the Upper Cretaceous. This similarity is also reflected in the Andrews plots in Figures (3) and (4). Conclusion According to the present study, Andrews plots succeeded to a large degree in illustrating and classifying groundwater samples in both the Quaternary and Upper Cretaceous aquifers. The same waveform pattern characterizes the Quaternary groundwater samples as in the Upper Cretaceous aquifer, with little difference in amplitude. The similarity in the waveform pattern reflects the similarity in the chemical composition. This result is confirmed by the comparison with the Stiff conventional diagrams.
This agreement between the two groundwater types suggests a hydraulic connection and mixing between the Quaternary and the Upper Cretaceous aquifers. It is worth mentioning that the Quaternary aquifer lies in the most active valley in Northern Sinai and is subjected to a very high extraction rate, while the Upper Cretaceous aquifer is confined and under potentiometric pressure. The mixing between the Quaternary and Upper Cretaceous aquifers is confirmed by the radiocarbon ages estimated by [12]. The estimated radiocarbon ages of groundwater samples in the delta of Wadi El Arish range from 900 to 8800 Y.B.P., and tritium dating likewise indicates mixing between modern to sub-modern and old waters. As a result, transforming the chemical composition of groundwater samples, as a set of multivariate data, into a waveform pattern by means of the Andrews plot is a practical statistical tool for the visual grouping and classification of groundwater samples.
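To make the Andrews transformation used above concrete, here is a minimal sketch in Python of how each sample's major-ion composition can be turned into the waveform f_x(t) = X1/√2 + X2 sin(t) + X3 cos(t) + X4 sin(2t) + X5 cos(2t) + … and plotted for visual grouping. The well names and ion concentrations are hypothetical placeholders, not the study's measurements; pandas.plotting.andrews_curves offers the same transformation directly from a data frame.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical major-ion data (meq/L) for three groundwater samples;
# the values below are placeholders, not the measurements reported in the study.
samples = {
    "well_A": [4.2, 1.1, 3.5, 0.9, 2.2, 0.6],   # e.g. Na+K, Mg, Ca, HCO3, Cl, SO4
    "well_B": [8.0, 2.3, 6.1, 1.0, 5.4, 1.8],
    "well_C": [1.5, 0.4, 1.2, 0.8, 0.7, 0.3],
}

def andrews_curve(x, t):
    """Finite Fourier series of Andrews (1972):
    f_x(t) = x1/sqrt(2) + x2*sin(t) + x3*cos(t) + x4*sin(2t) + x5*cos(2t) + ..."""
    f = np.full_like(t, x[0] / np.sqrt(2.0))
    for i, xi in enumerate(x[1:], start=1):
        k = (i + 1) // 2                       # harmonic order: 1, 1, 2, 2, 3, ...
        f += xi * (np.sin(k * t) if i % 2 == 1 else np.cos(k * t))
    return f

t = np.linspace(-np.pi, np.pi, 400)
for name, x in samples.items():
    plt.plot(t, andrews_curve(np.asarray(x, dtype=float), t), label=name)

plt.xlabel("t (radians)")
plt.ylabel("f(t)")
plt.legend()
plt.title("Andrews curves of hypothetical groundwater samples")
plt.show()
```

Since the ion concentrations enter the series linearly, samples with higher overall salinity produce curves of larger amplitude, which is the behaviour exploited in the amplitude-based classification above.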
2022-08-25T15:12:21.804Z
2022-08-22T00:00:00.000
{ "year": 2022, "sha1": "e55a9da27ef7d5b9fda53cb3d14564560d878423", "oa_license": "CCBYNC", "oa_url": "https://avantipublishers.com/index.php/gjese/article/download/1240/837", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "c441f61a4500c3cf2c5f9dabf73891b55ee17cdd", "s2fieldsofstudy": [ "Environmental Science", "Geology" ], "extfieldsofstudy": [] }
55909721
pes2o/s2orc
v3-fos-license
Socio-economic benefits of the improvement of transport accessibility to the port of Szczecin The aim of the paper is to present the socio-economic effects which will appear in relation to the realization of the large-scale investments consisting in (1) the deepening of the fairway Świnoujście-Szczecin down to 12.5 m and (2) the adjustment of the infrastructure in the port of Szczecin (deepening of the port water area and modernization and deepening of the selected berths) to handle larger seagoing vessels. These are complementary projects and their combined realization will allow the handling of fully laden vessels with a deadweight of 40 thou. tonnes (at present it is 15 thou. tonnes). The realization of the investment projects will contribute to the creation of socio-economic benefits which will arise in: - the land section of the transportation chain running through the port of Szczecin; within the hinterland transport there will arise savings in the land transportation costs and in the external costs resulting from the shortening of the transportation distance and changes in the sectoral structure of transports (modal shift); - the sea section of the transportation chain running through the port of Szczecin; within the sea transport there will arise savings in the sea transportation costs and in the external costs resulting from the shortening of the transportation distance, and in the costs of exploitation of vessels resulting from the increase of their size (economies of seagoing vessel's size); - the port of Szczecin in the form of the increased gross value added (GVA) in port activity, i.e. the additionally created value of port services. The socio-economic analysis covered all planned complementary investments. The socio-economic effects have been identified with the use of cost-benefit analysis (CBA). 
Introduction The conditions under which the port in Szczecin comes to function currently are as follows: -the accessibility of the port from the sea is determined by the Świnoujście-Szczecin fairway.This is a transport infrastructure object connecting the port in Szczecin with the open sea (Pomorska Bay).It was built by the end of the XIX century.With the observation of the permissible technical depth of the fairway at the level of 10.5 m and the fairway's width of 90 m at its bed, it enables a safe navigation and entering the port in Szczecin for sea vessels with their maximum draft of 9.15 m or with maximum length of 215 m, however, the vessels with the maximum draft of 9.15 m cannot exceed the length of 160 m; -the accessibility of the port in Szczecin from the sea at the current stage of development of maritime trade and shipping is very low.Except for the ports in Kaliningrad and Lubeck, all other main ports situated at the Southern coast of the Baltic Sea poses better navigational conditions enabling handling seagoing vessels with tonnages and capacities greater than those in Szczecin (Hozer, Bernacki, Lis, Kuźmiński 2011); -the low accessibility of the port in Szczecin from the sea causes that the port of Szczecin can handle vessels of small tonnage (fully laden of up to 15,000 tonnes deadweight).Bigger vessels can call the port in Szczecin only under the condition that they are not fully laden (Kotowska, Mańkowska, Pluciński 2014).Among the Polish ports of the essential importance to the maritime economy, in the period of 2008-2014, the port in Gdańsk noted the highest dynamics of turnover.The transhipments in the port of Gdańsk in that period increased by 81.5%, i.e., they increased by the average of 10.4% a year.In the port of Świnoujście, the cargo turnover increased by 40.2%, i.e., by the average of 5.8% a year, whereas in the port of Gdynia, they increased by 25.5%, i.e., by the average of 3.9% a year. Against this background, the port in Szczecin did not note any evident increase of cargo turnover.In 2008, the turnover in the port in Szczecin amounted to 8.95 m tonnes whereas in 2014 -to 9.009 m tonnes, which means the increase by 0.7%, i.e., by the average of 0.1% a year.The consequence of that was the diminishing of the transport importance of the port in Szczecin among the Polish ports of the essential importance to the economy (Table 1).The changes of turnover in the Polish sea ports in the period of 2008-2014 are shown in Fig. 1.The share of the port in Szczecin in the total transhipments of Polish ports dropped from the level of 17.1% in 2008 down to 12.0% in 2014 (5.1 percentage points). 
During the period of 2008-2014 the Gross Domestic Product (GDP) expressed in basic prices increased in Poland by 16.7% and the average-annual rate of economic growth in Poland amounted to 2.6%.On the basis of the data on transhipments in the ports of the essential importance to the national economy during the years 2008-2014 and the GDP in basic prices (in 2014), the strength of the relation between the cargo turnover in Polish ports and the gross domestic product has been analysed with the use of the Pearson product-moment coefficient of correlation.The results are presented in Table 2.The total volume of cargo turnover in Polish ports shows a strong relation to the economic development of the country.The correlation coefficient for the port turnover in total against the GDP amounted to 0.94, similarly strong relations appeared in Gdańsk (0.93), Gdynia (0.87), and Świnoujście (0.91).The port in Szczecin is the exception where the transhipment dynamics was much lower than the dynamics of the economic growth in Poland.The coefficient of correlation between the volume of cargo transhipment in the port in Szczecin and the GDP was low and it amounted only to 0.28. In the case of the port in Szczecin, the unfavourable tendencies with reference to the growth rate of the cargo transhipments and the weak correlations between the volume of port turnover and the economic growth of the country result most of all from the low navigation parameters of the Świnoujście-Szczecin fairway limiting the access to the port in Szczecin for sea going vessels. The key factor determining the further development of the port in Szczecin is the improvement of the port's accessibility from the sea consisting in the modernization and deepening of the Świnoujście-Szczecin fairway.In relation to the improvement of the transport accessibility, in the port of Szczecin it will be necessary to adjust the existing quays to handling bigger seagoing vessels.The planned activities in favour of the improvement of Szczecin port's accessibility from the sea have been concretized in the form of the following investment projects: 1.The modernization of the Świnoujście-Szczecin fairway to the depth of 12.5 m.The investment consists in the deepening of the fairway down to the technical depth of 12.5 m on the distance of 62.5 km with the simultaneous widening of the fairway's bed from 110 m to 130 m on the selected sections, as well as in the construction and modernization of the enforcements of the river banks, regulating constructions, and silting fields.The investment is planned for realization in 2015-2021.2. Improvement of the accessibility to the port in Szczecin in the area of Kaszubski Basin.The investment consists in the adjustment of the port infrastructure at the bulk cargo transhipment area to handling bigger than presently bulk cargo vessels, and it encompasses the modernization and deepening down to the technical depth of 12.5 m of the three most used port quays.The investment is planned for realization in 2017-2020.3. 
The improvement of the accessibility to the port in Szczecin together with the extension of the port infrastructure in the area of Dębicki Canal.The investment consists in the adjustment of the port infrastructure in the port of Szczecin at the break bulk cargo transhipment area to handling bigger than presently vessels and the extension of the port infrastructure to handling intermodal units, unitized and conventional break bulk cargo and project cargo.The project encompasses modernization and deepening down to the technical depth of 12.5 m of two transhipment quays and the construction of a new deep-water general cargo transhipment quay.The investment is planned for realization in 2017-2020.4. The construction of a deep-water bulk cargo transhipment quay at Grabowski Islet by Przekop Mieleński.The investment is planned for realization in 2017-2020.The aim of the investment undertaking encompassing all the above-mentioned investment projects is to increase the efficiency of the sea-land transportation chain running through the port in Szczecin. The deepening of the fairway will allow merchant ships of bigger capacity and deadweight to enter the port in Szczecin, whereas the modernization and extension of the port infrastructure at the break bulk cargo transhipment area (Dębicki Canal), bulk cargo transhipment area (Kaszubski Basin), and at Grabowski Islet will enable handling bigger seagoing vessels.The combined and supplementary realization of the investment projects will cause the increase of the costs efficiency of sea-land transports and will guarantee proper conditions for handling vessels and cargo in the port of Szczecin. The aim of the paper is to present the socio-economic effects which will appear in connection with the realization of large scale investment projects consisting in: (1) deepening of the Świnoujście-Szczecin fairway down to 12.5 m, and (2) adjusting the port infrastructure in the port of Szczecin (deepening of harbor water area and modernization as well as deepening of selected berths) to handling bigger sea vessels.Those are complementary projects and their joint realization will enable handling fully laden ships of up to 40,000 tons (presently it is up to 15,000 tons) in the port of Szczecin. 
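As a small illustration of the correlation analysis between port turnover and GDP described in the introduction above, the sketch below computes the Pearson product-moment coefficient for a placeholder series; the annual figures used are invented for illustration and are not the data underlying Table 2.

```python
import numpy as np

# Placeholder annual series for 2008-2014 (not the actual data behind Table 2):
# total transhipments in a port (million tonnes) and Polish GDP in basic prices (index).
turnover = np.array([8.95, 8.60, 8.70, 8.80, 8.40, 8.50, 9.01])
gdp_index = np.array([100.0, 101.6, 105.3, 110.5, 112.3, 113.9, 116.7])

# Pearson product-moment coefficient of correlation.
r = np.corrcoef(turnover, gdp_index)[0, 1]
print(f"Pearson r = {r:.2f}")
```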
Methodology The calculation of the socio-economic benefits for the investment undertaking has been carried out on the basis of the cost-benefit analysis method, in accordance with the assumptions and guidelines elaborated for investment projects co-financed by the European Union's Cohesion Fund for the period of 2014-2020 (Regional and Urban Policy 2014, Ministry for Infrastructure and Development 2015). The investment undertaking consists of four investment projects which mutually supplement one another and jointly condition the reaching of the objective. The socio-economic benefits have been calculated for the complementary investments with the assumption that in the non-investment variant (W0) no investment project will be realized, whereas in the investment variant (WI) all the considered projects will be realized. The socio-economic effects of the investment undertaking include the transportation cost savings and external cost savings in maritime transport as well as in land transport, and the increase of the value added in the port of Szczecin, established for the difference between the cargo transhipment prognoses elaborated for the investment variant (WI) and for the non-investment variant (W0). The differential variant of the transhipment prognosis (WI-W0) covers the period of 2021-2041, which results from the assumed investment exploitation period. The year 1999 has been assumed as the basis for the determination of the cargo transhipment prognosis in the non-investment variant because that was the year when the change in the tendency in the development of cargo transhipments in the port of Szczecin occurred. On the basis of the time series of transhipments in the period of 1999-2014, the parameters of a hyperbolic trend line have been estimated. The trend function of the total turnover in the port of Szczecin during the years 1999-2014 was estimated as a hyperbolic function of t, where t = 1, 2, ..., 16 (subsequent years of the trend parameter estimation period). On the basis of the trend function, prognoses of transhipments in the port of Szczecin have been made for the non-investment variant. While making the prognoses of the transhipments of cargo groups, the structure of transhipments by cargo group in 2014 has been used. For the reference period of 2021-2041, the average rate of change of cargo turnover in the port of Szczecin amounted to -8.08%. 
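A minimal sketch of this estimation and extrapolation step is given below. The tonnage series is a placeholder, and the specific functional form y(t) = a + b/t is an assumption about what is meant by a "hyperbolic trend line", since the paper does not reproduce its estimated coefficients.

```python
import numpy as np

# Hypothetical annual total transhipments (million tonnes) for 1999-2014;
# placeholder values only -- the paper's underlying series is not reproduced here.
years = np.arange(1999, 2015)
tonnes = np.array([9.6, 9.4, 9.2, 9.3, 9.1, 9.0, 9.2, 9.0, 8.9, 8.95,
                   8.5, 8.7, 8.6, 8.3, 8.2, 9.0])

t = np.arange(1, len(years) + 1)               # t = 1..16, as in the paper

# Assumed hyperbolic form y(t) = a + b/t, fitted by ordinary least squares.
A = np.column_stack([np.ones_like(t, dtype=float), 1.0 / t])
(a, b), *_ = np.linalg.lstsq(A, tonnes, rcond=None)

# Extrapolate the non-investment variant over the 2021-2041 reference period.
t_future = np.arange(2021, 2042) - 1998        # continue the same time index
forecast = a + b / t_future
print(f"fitted trend: y(t) = {a:.3f} + {b:.3f}/t")
print("2021 and 2041 forecasts (million tonnes):",
      forecast[0].round(2), forecast[-1].round(2))
```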
The demand for the transhipment services in the port of Szczecin has been, in the investment variant, conditioned on the predicted economic situation in Poland.It has been assumed that in result of the realization of the investment undertaking, the port of Szczecin will return to the development path connected with the increase of the GDP and in a greater than so far degree it will participate in the predicted increase of the demand for the transhipment services.While determining the development of transhipments in the port in Szczecin, first of all, the transhipment prognosis for the Polish ports as prepared in the minimum and maximum variants for the period until the year 2030 (Kotowska, Mańkowska, Pluciński 2014) have been used.On this basis, the total turnover dynamics indexes for the port in Szczecin for the minimum and maximum prognosis variants until 2030 have been established.The turnover dynamics indexes for the period of 2031-2041 have been established on the basis of the mathematical functions of the transhipment prognosis trends assessed for the minimum and maximum variants.The dynamics indexes represent the average-annual rate of change in the transhipment in the port of Szczecin for the minimum and maximum variants in the period of 2021-2041.The geometric averages of the total turnover dynamics (chain) indexes in the port of Szczecin for the minimum and maximum variants are presented in Table 3.The established average dynamics indexes refer to the total turnover in the port of Szczecin.The rate of change for specific cargo groups will be diversified.The 12 benchmarks of turnover dynamics indexes have been established in such a manner where the obtained turnover dynamics indexes for the minimum and maximum variants amounting respectively to 2.66% and 3.06% have been decreased or increased by the value of the dynamics index's spread which amounted to h=0.41%. The development potential of cargo groups in the port of Szczecin is presented in Table 4. Using the turnover dynamics indexes of the specific cargo groups, the cargo turnover prognosis for the port of Szczecin has been made for the investment variant. The average-annual rate of increase of the total turnover in the port of Szczecin for the period of 2021-2041 amounted to 2.91%.It is being anticipated that after the deepening of the Świnoujście-Szczecin fairway down to 12.5 m, the transhipments in the port of Szczecin will amount to 15,561,136 tons in 2041, which means the increase of the turnover in 2041 by 6,938,652 tons in comparison with the non-investment variant.The predicted volume of cargo groups turnover in the port in Szczecin, for the investment undertaking in the differential variant (WI-W0), is presented in Table 5. 
Study results The realization of the investment undertaking will contribute to the origination of socio-economic benefits which will occur in: a) the land section of the sea-land transportation chain running through the port in Szczecin.The hinterland transport will note cost savings in cargo transportation by land means of transport and in external costs, resulting from the shortening of transportation distance and from the change in the sectoral structure of transports (the increase of the importance of the railway transport versus the road transport -modal shift); b) the sea section of the sea-land transportation chain running through the port in Szczecin.The sea transport will note cost savings in cargo transportation by sea and in external costs, resulting from the shortening of transportation distance and in the costs of exploitation of vessels resulting from their increased size (economies of vessel's size); and c) the sea port of Szczecin in the form of the increase of the (gross) value added produced by port activities, additionally produced value of port services caused by the increase in cargo turnover in the port.The socio-economic effects indicated in c) have been determined as the direct result of the investment undertaking at its exploitation phase (the effects appearing during the realization stage have been excluded), and have been limited to the activities of the port of Szczecin with disregard of the effects appearing in the surroundings of the port of Szczecin (The Centre for EU Transportation Projects 2014). The socio-economic benefits in land transport (in the hinterland of the port of Szczecin) resulting from the realization of the investment undertaking include: In the calculation, the following external costs coefficients were applied: for road freight transport -0.0552 PLN/tkm, for rail freight transport (electric fraction) -0.0134 PLN/tkm.Average freight rail transport cost applied in calculations amounted: for grains -0.0205 PLN/tkm, for dry bulk cargo -0.0177 PLN/tkm, for containerized break bulk -0.0249 PLN/tkm.The applied in calculation average road freight transport cost amounted to 0.3217 PLN/tkm. In order to determine the socio-economic benefits arising in the land transport in connection with the realization of the investment undertaking, the analysis has been performed of the hinterland of the port in Szczecin versus its neighboring sea ports and with reference to handling cargo transports in relations to/from the main points of the hinterland.The analysis has been performed for the following cargo groups: for containerized break bulk cargo (containers) and for grains and bulk cargoes (with the exclusion of crude oil).The socio-economic benefits have been determined, such as the savings in the transportation costs and in the external costs resulting from the shortening of the transportation distance to the port of Szczecin in comparison with alternative transport routes to the ports in Rostock, Gdynia, and Gdańsk, and such as the savings in the transportation costs and the external costs resulting from the expected changes in the sectoral structure (modal shift) of land transports caused by the realization of the investment undertaking.Modal shift from road to rail transport has been forecasted as a consequence of the raise in the shippment load and the increase in the shipment density.The socio-economic benefits have thus been expressed as the additional costs that may be avoided by the realization of the investment undertaking. 
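The following sketch illustrates the structure of these savings calculations for the land leg: a modal-shift saving computed from the unit cost coefficients quoted above, discounted over the 2021-2041 reference period at 5%, the rate applied later in the results. The tonnage, distance, modal-shift share, and constant annual stream are assumed for illustration only.

```python
# Minimal sketch of the cost-saving and discounting logic described above.
# Distances, tonnage and the share of traffic shifted are placeholder assumptions;
# the unit cost coefficients are those quoted in the text (PLN/tkm, 2014).
ROAD_COST, RAIL_COST = 0.3217, 0.0177          # average transport cost; rail = dry bulk
ROAD_EXT,  RAIL_EXT  = 0.0552, 0.0134          # external cost coefficients
DISCOUNT = 0.05

def annual_saving(tonnes, km, share_shifted_to_rail):
    """Saving from shifting part of the land leg from road to rail (modal shift)."""
    shifted_tkm = tonnes * km * share_shifted_to_rail
    transport = shifted_tkm * (ROAD_COST - RAIL_COST)
    external  = shifted_tkm * (ROAD_EXT  - RAIL_EXT)
    return transport + external

def discounted(savings_by_year, base_year):
    return sum(s / (1 + DISCOUNT) ** (y - base_year) for y, s in savings_by_year.items())

# Example: 1.0 m tonnes/year over a 300 km hinterland leg, 30% shifted to rail,
# held constant over 2021-2041 (all assumed figures).
stream = {year: annual_saving(1_000_000, 300, 0.30) for year in range(2021, 2042)}
print(f"undiscounted total: {sum(stream.values()):,.0f} PLN")
print(f"discounted to 2021 at 5%: {discounted(stream, 2021):,.0f} PLN")
```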
The socio-economic benefits in the sea transport (the foreland of the port of Szczecin) resulting from the realization of the investment undertaking include: a) external costs savings in the sea transport resulting from the shortening of transportation distance, b) the transportation costs savings in the sea transport resulting from the shortening of transportation distance, and c) the transportation costs savings on the sea leg resulting from economies of scale, decreasing the unit transportation cost due to carriage on a bigger sea vessel.These are the transportation costs savings resulting from the exploitation of big seagoing vessels (economies of vessel's size). External and transport costs for freight shipping have been calculated and depicted in Tables 9 and 10.The following average external cost have been applied in calculus: for general/dry bulk ships -0.0206 PLN/tkm, for container ships -0.0153 PLN/tkm.In order to determine the socio-economic benefits arising in the sea transport in connection with the realization of the investment undertaking, the analysis has been performed of the foreland serviced by the sea port in Szczecin versus its neighboring sea ports.The port of Malmoe has been selected as the reference point for the ship routing alternation for the investment variant (WI) and for the non-investment variant (W0).The analysis has been performed for cargo groups showing the largest increases in their turnover resulting from the realization of the investment undertaking, namely for containerized break bulk (containers), for grains and bulk cargoes.The socio-economic benefits have been determined as the savings in the transportation costs and in the external costs resulting from the shortening of the transportation distance to the port of Szczecin in comparison with alternative transport routes to the ports in Rostock, Gdynia, and Gdańsk, and also caused by the decrease of the unit costs of cargo transportation in big seagoing vessels. The average gross value added created during the transhipment of 1 tonne of cargo in the port in Szczecin during the years 2010-2013 amounted to 30.94 PLN.The unit gross value added multiplied by the volume of the transhipments predicted in the port of Szczecin in the non-investment variant (W0) and in the investment variant (WI) was used to calculate the increase of the gross value added in the differential variant (WI-W0).The gross value added in the differential variant (WI-W0) means the increase in economic results which will occur in the port of Szczecin in result of the realization of the investment undertaking. The total and discounted (at the discount rate of 5%) economic benefits for the investment undertaking are presented in Table 11.The structure of the economic benefits is presented in Fig. 3. 
Conclusions 1.The realization of the investment undertaking will contribute to the creation of the socio-economic benefits with the combined value of 3.448.517.318,90PLN, where the benefits in the sea transport (savings in the transportation costs and in the external costs) will in total amount to 1,183,462,929.23 PLN (34.40% of the total benefits obtained from the investment undertaking), in the land transport (savings in the transportation costs and in the external costs) -1,345,792,995.22PLN (39.0% of the total economic benefits), in the port of Szczecin, there will be value added created in the amount of 774,405,083.56PLN (22.5% of the total economic benefits).The residual value of the investment undertaking will amount to 144,856,310.90PLN (4.20% of the total economic benefits from the investment undertaking).2. In the sea and land transports, there economic benefits prevail connected with the decreasing of the cargo transportation costs.The savings in sea transportation costs have amounted to 1,091,739,404.87 PLN (31.7% of the total economic benefits from the investment undertaking), the savings in the land transportation costs have amounted to 1,172,726,456.33 PLN (34.0% of the total economic benefits from the investment undertaking).The external costs savings connected with the transportation by sea and land in total have amounted to 264,790,063.24PLN (in total 7.7% of the total economic benefits from the investment undertaking). a) the transportation costs savings in land transport resulting from the shortening of transportation distance in the hinterland, b) external costs savings resulting from the shortening of transportation distance in the hinterland, c) the transportation costs savings in land transport resulting from the change in the sectoral structure of transports (modal shift) in the hinterland, and d) external costs savings resulting from the change in the sectoral structure of transports in the hinterland.External costs for road and rail freight transport have been estimated with the use of Marco Polo calculator while average freight transport costs for rail and road have been calculated with the use of three sources, i.e., Social cost-benefit analysis Iron Rhine Final Report 2009; Delhaye, Breemersch, Vanherle, Kehoe, Liddane, Riordan 2010; and TREMOVE cost simulation model 2006.Respective costs have been indexed and through Purchasing Power Standard coefficient factor adjusted for Poland.Calculations of external and carriage costs for road and rail transport have been presented inTable 6, 7, and 8. Table 3 . Average cargo turnover dynamics indexes for the port in Szczecin in the period of 2021-2041 Table 4 . Predicted yearly dynamics of the development of transhipment of cargo groups in the port of Szczecin within the investment undertaking (WI) Table 5 . The predicted volume of cargo groups turnover in the port of Szczecin for the investment undertaking in the differential variant (WI-W0) in the period of2021-2041 (tons) Table 6 . Average external cost coefficients for the road and rail freight (electric traction) Table 8 . Average cost for road freight transport, year 2014 Table 9 . Average external cost coefficients for freight maritime transport (short sea shipping), year 2014 Table 10 . Average transport cost for grains, dry bulk, and container ships calling to the port of Szczecin, year 2014 (PLN/tkm) Table 11 . Total (discounted) socio-economic benefits of the investment undertaking (PLN)
2018-12-07T20:17:45.874Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "25ff942267611baf2bc0f50d8de76be63c51baf7", "oa_license": "CCBYSA", "oa_url": "https://wnus.edu.pl/epu/file/article/view/2872.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "25ff942267611baf2bc0f50d8de76be63c51baf7", "s2fieldsofstudy": [ "Economics", "Engineering" ], "extfieldsofstudy": [ "Business" ] }
30504775
pes2o/s2orc
v3-fos-license
An overview of Indian research in bipolar mood disorder the studies on bipolar mood disorders (Manic Depressive Psychosis) over the last many decades, published in Indian Journal of Psychiatry, conveys the feeling that over the years, though many aspects have been studied, there is no consistency in reports across the country. Reviewing research would find case reports to studies and reviews. ABSTRACT This review has been done after careful research of articles published in indian journal of psychiatry with the search words of manic depressive psychosis and bipolar mood disorder. Many articles in the following areas are included: 1) Etiology: genetic studies: 2) Etiology – neuro psychological impairment: 3) Adult bipolar disorder 4) Epidemological 5) Clinical picture – phenomenology: 6) Course of bipolar mood disorder: 7) Juvenile onset bipolar affective disorder 8) Secondary mania: 9) Clinical variables and mood disorders: 10) Disability: 11) Comorbidity: 12) Treatment: biological 13) Recent evidence: 14) Pharmacological evidence in special population. Though there seems to be significant contribution, there are still lot of areas which need careful intervention. The findings in various studies from the indian point of view are reviewed. INTRODUCTION In 1896, Kraeplin reported 'manic-depressive psychoses' as a circumscribed disease entity. Ever since, manic depressive psychosis, or the current term used nosologically as 'bipolar' mood disorder, has been studied in the Indian perspective. Though it seems like there is no orderliness in the research pursuit of understanding this disorder in the Indian context, one gets an impression that all aspects like nosology, clinical syndromes, course, pharmacological and in special populations as well, were attempted to be looked at from the Indian context. This review is an attempted peep into Indian context research. A review of the studies on bipolar mood disorders (Manic Depressive Psychosis) over the last many decades, published in Indian Journal of Psychiatry, conveys the feeling that over the years, though many aspects have been studied, there is no consistency in reports across the country. Reviewing research would find case reports to studies and reviews. first time in Indian literature, it commented on the genetic aspects of pharmacological response. Genomic imprinting in bipolar affective disorder was studied by R. Kumar et al. 2000 at Central Institute of Psychiatry (CIP), Ranchi; [3] in the first episode, bipolar affective disorder was diagnosed according to DSM IV 1994 criteria and without any comorbidity. This is the first study in bipolar mood disorder in genomics. Out of 79 conservative cases with first degree, the results of this study did not establish a phenomenon of imprinting in non mendelian patterns of inheritance leading the authors to conclude that bipolar disorder is heterogeneous. A very small sample size might have made it difficult to comment on the results. Etiology-neuro psychological impairment M Taj and Padmavathi R [4] have assessed neuro psychologicalneuro cognitive impairment in 30 patients of bipolar disorder and 30 matched controls for age, gender, education with no past, present or family history of psychiatric illness. This study reports that patients with bipolar disorder in remission and maintenance phases, with mood stabilizers, have impaired attention memory and executive functions. 
The authors conclude that cognitive dysfunctions contribute to social and occupational difficulties on one hand and reduced insight and increased non adherence and risk of relapse. The contribution of the drugs in the cognitive dysfunction could not be commented on. Psychoeducation and cognitive rehabilitation was suggested for increasing adherence to drugs-mood stabilizers for prevention of relapse. Adult bipolar disorder Epidemological Chopra HD et al. [5] have attempted to study the socio economic status and manic depressive psychosis, in Ranchi in a private psychiatric hospital setting, the out patients and in patients using Kuppuswamy scale for urban and Pareek and Trivedi scale for rural study and the largest per cent of patients fit into socio class III. In this study 100 patients were studied. Interestingly, this is one study which looked at the relation of socio economic classes to any psychiatric disorder and concluded that there is a higher representative in middle class as regards to manic depressive psychosis at that time. Clinical picture-phenomenology Chatterjee and Kulhara [6] have reported sympto matology and symptom resolution in short term (90 days). In this prospective study, present state examination schedule and Bach Rafaelson Mani scales were used in 40 patients diagnosed by DSM III criteria for manic episode. All patients were unitedly managed with chloropromazines equivalence ranging from 300 to 900 mgs. In this study, the behavior affect, speech, delusions and hallucinations was done using PSE Items and the periodic assessments helped to show the resolution time of each symptoms preponderance of male subjects was noted. Mean duration of current episode was shorter in males compared to females. Symptomatologically, Indian patients differed significantly as having distractibility as symptoms and more of embarrassing behavior. Hostile irritability is the dominant affect, 62.5% had one or more delusions. In this sample, which is more than many studies reported by other international studies notably Taylor and Abrams (1973) Carlson and Goodwin (1973) in terms of recovery by four weeks, delusions and hallucinations disappear. Authors comment that where some symptoms resolving quickly, where some resolve quickly, others much slowly. Only 15% remained hospitalized for 90 days, and majority got discharged early; authors conclude that in India Mania patients resolve early with treatment. This is one study which systematically studied the clinical symptomatology in the Indian context, at PGIMER Chandigarh. R Kumar et al. [7] have carried out a systematically studied the phenomenology of mania and used a factor analysis approach to derive the clusters and arrived at three cluster groups using ICD-10 criteria; and 100 consequent patients diagnosed with bipolar disorder or manic episode were the study subjects. Predominantly male (77%), all patients were rated on the scale of manic states (Cassidy et al. 1998). There were three factors which had significant variancefactor number one had motor activity, pressured speech, racing thoughts, increased sexuality, increased contact as the clinical symptoms and, in essence, picks up psychomotor acceleration as the main factor and has largest variance in the patient sample. The second factor picked up thought disorder and psychosis with grandiosity, lack of insight and paranoid factors. 
The third factor, which has about 13.8% variance, represented mood with large percent having irritability (82%), euphoria (51%), aggression (70%), anxiety (59%) as phenomenology, the study has a good sample size, good methodology representing Indian population. Course of bipolar mood disorder R Kumar et al. [8] have studied, in a 28-day study from day 0 of index diagnosis of bipolar mania diagnosed according to DSM IV, and attempted to study gender differences in the resolution of mania. Rao TSS, Rao V, Shivamoorthy S, Kuruvilla K (1993), carried out a genetic study using pedigree method in patient diagnosed to behaving bipolar affective disorder and evaluated 76 individuals. They opined that the gene borne influence was discernable in their analysis. [9] In contrast to the Chatterjee and Kulhara study [6] the methodology used in this study was by using scale of manic states by Cassody et al. and using survival plot of resolution. The authors attempted to see gender differences in resolution of manic symptoms, in sample of 40 (24 males and 16 females). The attempt included to see the differences both in severity and symptomatology across the genders. There were significant difference at Index rating on certain items in females-viz. increased sexuality and aggression. Srinivasan et al. [10] have attempted to identify the differences of phenomenology, family history between unipolar mania and bipolar mania. They did not find any differences regards to various phenomenologies between two diagnostic groups and concluded that from their results they are homogenous. R Kumar and Dayaram [7] studied the evolution of manic symptoms in the first time diagnosed fresh episode of mania in bipolar affective disorders, defining the evolution in a sample of 98 patients (81 males). No consistent pattern of evolution was identifiable in this study. Median duration of evolution was 45 days, however, females and patients with significant life events had a shorter evolution period. A naturalistic course of bipolar disorder in rural India [11] was reported by Chopra M P et al. This data is from the Primary Health Centre, Sakalwara, adapted at NIMHANS, 27 patients of 34 patients evaluated had not received any treatment at all, though there were many episodes 15% of patient has had rapid cycling, episodes of manic accounted for 72% of episodes. None of the variables examined could predict the total number of episodes. However, patients receiving psycho pharmacological agents are likely to develop rapid cycling. A mania predominant course was observed in this study cohort. Juvenile onset bipolar affective disorder There are very few studies in this area, published in Indian journal of Psychiatry, except for case reports. Narasimha Rao IV L et al. [12] have for the first time reported three case reports. They concluded that phenomenologically the three cases were similar to adult manic depressive psychosis and also lithium is likely to give good response just in adult manic depressive psychosis. Vijaysagar K J [13] has reported a case of juvenile onset bipolar affective disorder. It is pertinent to mention that there were reports of course and phenomenology published in other journals from NIMHANS and can be considered as the data from India, from National Institute of Mental Health and Neuro Sciences, Bangalore. This data suggests the occurrence of discreet, short lived episodes of mania, a high rate of recovery defined and low rate of chronicity. 
Interestingly, the above findings are in distinct contrast with the findings of western studies. Likewise, Indian studies report low incidence of rapid cycling, less psychosis and less of mixed symptoms as compared to the western studies. Secondary mania Tricyclic anti depressant-induced mania was presented in a case report of four mono polar depressed patients who developed mania after tricyclic anti depressant therapy. [14] This study was the first to report the anti depressant induced mania. There are many case reports of mania occurring due to other medical conditions or due to drugs were reported-Chronic mania due to polio encephalomyelitis, [15] Tertiary Syphilis. [16] Venilafaxine-induced mania, [17] anti depressantinduced mania, [18] Bupropion, [19] M Arora et al. [20] have tried to show, citing two cases, that significant life events can precipitate manic episodes. Yadav R and Pinto C [21] report a case of treatment emergent dyskinesia in a patient with mania occurring in Parkinson's disease. In this case there were two episodes of mania in a seven-year gap; first episodes occurring at the start of the treatment and in the second episode there were issues of dyskinesia due to Parkinson's disease treatment. Mania in HIV infection was reported by Venugopal D et al. [22] where the critical issues of the management and diagnosis were discussed. Chopra V K et al. [23] discussed bipolar disorder associated with tuberous sclerosis in a seven-year-old, as a secondary mania etipathogenous. There are also a few case reports of secondary mania following encephalitis, [24] seen following stroke, [25] Turners syndrome. [26] Mania starting during hypno therapy [27] into mania was reported in a single case report. Clinical variables and mood disorders Gurmeet Singh et al. [28] have attempted to see the relation of A, B, O blood groups with bipolar and unipolar affective disorders, in 200 consecutive patients. They found significantly increased frequency of blood group O in manic depressive psychosis and lesser frequency of blood group A in comparison to normal controls and unipolar group. The authors conclude that their findings might validate the distinction between bipolar and unipolar types of affective disorder. Singhal A K et al. [29] have investigated the role of stressful life events in mania, in 30 cases of acute mania. This work does show the relation of significant life stresses and family pathology in the genesis of mania. Significantly, death of a close relative/spouse, financial difficulties, disappointment of loss turned to be major life events. Tapas K A et al. [30] have attempted to see the relation in Adolescent mania, EEG abnormalities and response to anti convulsant medication in a three-year follow-up study. Significant finding is -a large per cent (43.75%) had moderate to severe EEG abnormalities. Patients who continued anti convulsive mood stabilizers had relapsed very less compared to those who discontinued. This study highlights the value of prophylaxis especially in those adolescents having EEG abnormalities. Disability H Taroor et al. [31] attempted to study the stability and quality of life in euthymic patients of bipolar affective disorder or recurrent depressive disorder with or without comorbid medical illnesses. This report shows the presence of chronic comorbid medical illness did not cause a difference in quality of life between the two groups during euthymia. 
Comorbidity A few case reports of comorbid, other psychiatric illnesses in bipolar mood disorders were reported in Indian literature. Dysmorphophobia as a comorbid disorder occurring in both depressive episodes and manic episodes in a case of bipolar mood disorder was described by Sengupta et al. [32] The difficulty associated with both diagnosis and management was discussed. H Kalra et al. [33] have discussed a case of bipolar affective disorder with obsessive compulsive disorder as a comorbidity in manic phase of the disorder, the difficulties in the management was discussed. Treatment: Biological There has been a large volume of research reported in the treatment aspects. There have been models to explain the treatment approaches and various clinical trials to show early evidence for various drugs both from experience and experimental. Interestingly, the first report on treatment of manic depressive psychosis treated with long term electro convulsive therapy was by Bhaskaran, [34] who described usage of long term benefits of Electro convulsive treatment in a case of manic depressive psychosis and Venkoba Rao [35] also described a case of rapid cycling. Lithium kinetics was studied by Pradhan N et al. [36] They have studied the differential pharmaco kinetics in 16 patients of manic depressive psychosis between serum and erythrocytes and explained based on their observations that there is a bi compartmental 'model' of plasma and erythrocyte reaching steady state and undergoing fluctuations and may exhibit more different half lines over a time. Essentially, a model of lithium kinetics was attempted to be explained. N Desai et al. [37] Gangadhar B N et al. [38] have, in case reports, demonstrated the benefits of lithium and carbamazepine in a treatment resistant manic depressive psychosis. Prakash H M and S Bharath [39] in a first controlled double blind study have shown the efficacy of valproate in acute mania. In this study, lithium was compared to valproate in acute mania and both showed to equal efficacy in controlling mania symptoms. This study is possibly the first paper for evidence of efficacy. I. P. Khalkho and Khess C. R. J [40] have attempted to identify the various factors of drug non compliance in mania patients, demographic clinical variables and personality variables using 16 personality factors questionnaire. Not so surprisingly, they found that commonest factor for non compliance was side effects of medicines followed by the feeling of 'well being'. Pretension, jealousy, suspiciousness are the personality factors responsible for poor drug compliance leading the authors to conclude that patients with non compliance use less of mature defenses and more of primitive defenses. Recent evidence The role of quetiapine monotherapy was presented in a case report by Khazaal Y [41] arguing for the building up evidence in double blind placebo control long term mood stabilizing studies. Solanki R K et al. [42] have, in a one-week open label trial, shown the benefits of injectable sodium valproate in patients with mania and concluded that substantial improvement was seen and no major side effects were noticed. In this short study, utility in the acute stage was of mania of using injectable sodium valproate was argued for. M Trivedi et al. [43] have, in a single case report, discussed the utility of resperidone mono therapy in the prophylaxis of bipolar affective disorder, further highlighting the need for double blind randomized placebo control studies. 
Pradeep R J [44] has highlighted, in three case reports, the different types of delirium encountered in valproate induced hyper ammonemia states and management issues. In one of the cases, Pradeep has rechallenged valpraote-caused hyperammonia state validating the risk. The author highlights the clinical significance of watching for hyperammonia state as a cause of delirium where valproate was used. In an interesting case report, V Agarwal and Tripathi [45] report of the utility of memantine as co pharmacy helped in a case of better tolerability and efficacy. R Balon [46] has in his psychiatric pearls contribution has argued on the evidence that lithium is a unique mood stabilizer and meets the rigorous standards for mood stabilization in various forms of the bipolar mood disorder. Pharmacological evidence in special population Khandelwal S K et al. [47] have reported prophylaxis benefits of lithium carbonate in children diagnosed as manic depressive psychosis. The work details in an open label study the utility, efficaciousness and tolerability of lithium carbonate in children diagnosed to have manic depressive psychosis and conclude that side effects are less and if serum levels of lithium are maintained between 0.6 and 1.2 m EQ/L. Mohandas E and Rajmohan V [48] have in a recent C.M.E. topic reviewed the use of Lithium in special populations in detail of various medical disorders, pregnancy and lactation, in elderly and child and adolescents. IN SUMMARY There has been a lot of research on bipolar mood disorders. Significantly, not many studies have been reported on biological, neuro imaging and genetic studies and long term course of bipolar disorders. There is also less replication of significant aspects of the bipolar mood disorder. Most studies have a small number as sample, the later studies methodologically improved. Multi centric studies done across the study with sound methodology and in various above areas of different areas will have to be done to generate the Indian data. Likewise, pharmacologically, the often repeated subjective to experience of optional dose of various medications will also have to be studied. Transcultural differences have to be highlighted by attempting to do research in these areas. More naturalistic studies done across rural and urban background will help us to understand the course of bipolar in Indian context. The specific factors of psycho social and compliance issues in drug and therapy in the Indian context need to be studied.
2018-04-03T03:49:57.482Z
2010-01-01T00:00:00.000
{ "year": 2010, "sha1": "2960e97dcc57d6bef05e654bac9e6f5cd239bc67", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4103/0019-5545.69230", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "b44493893b3034547c6c922c8707de684df7c3c3", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
52280171
pes2o/s2orc
v3-fos-license
Vasopressin Signal Inhibition in Aged Mice Decreases Mortality under Chronic Jet Lag Summary Chronic jet lag, a model of shiftwork, increases mortality in aged mice. One potential reason for this association is that the chronic desynchronization between the internal clock phase and the environmental light/dark (LD) cycle might increase the mortality rate. However, this hypothesis has not been examined because of the lack of an appropriate animal model to prove this speculation. Here, we found that rapidly entrainable vasopressin receptor V1a–/–V1b–/– mice showed lower mortality under a chronic jet lag condition. Moreover, we found that pharmacological inactivation of V1a and V1b signaling decreased mortality even in aged wild-type mice, thus providing a potential pharmaceutical intervention for shiftwork-related health problems. INTRODUCTION Approximately 20% of the working population in developed countries engage in some sort of shiftwork (Fritschi et al., 2011). Accumulating evidence reveals that shiftwork is a risk factor for cancers (Kubo et al., 2006;Schernhammer et al., 2001), obesity (Karlsson et al., 2001), diabetes (Pan et al., 2011), and heart diseases (Tenkanen et al., 1998). Studies using animals subjected to chronic jet lag (CJL) induced by shifting light-dark (LD) cycles repeatedly at regular intervals to mimic the environment of shift workers have reported that CJL is associated with rapid tumor progression (Filipski et al., 2004), obesity (Kettner et al., 2015), and heart diseases (Penev et al., 1998). Surprisingly, CJL was also shown to increase the mortality rate in aged mice (Davidson et al., 2006). However, the mechanisms underlying these correlations between shiftwork/CJL and deleterious health consequences are still unknown, even though the increase in the number of aged workers and their health problems remain unsettled. Twenty-four-hour rhythms in physiology, metabolism, and behavior are generated by endogenous self-sustained circadian oscillators present in virtually all the cells in the body (Hastings et al., 2008;Silver and Kriegsfeld, 2014;Takahashi et al., 2008), which are governed by the master pacemaker located in the suprachiasmatic nuclei (SCN) of the anterior hypothalamus (Mohawk et al., 2012;Moore and Eichler, 1972;Stephan and Zucker, 1972). Moreover, the SCN is the only site that receives the entraining signal from the environmental LD cycle (Mohawk et al., 2012); entrainment means that rhythmic behavioral or physiological events match their oscillation with that of an environmental cycle. Thus, the SCN is considered as the key site for entraining the circadian clock in a jet lag condition. One potential reason for the high risk of health problems in shift workers and experimental animals subjected to CJL, especially high mortality rate in aged mice, is the dissociation between the internal circadian rhythm (e.g., locomotor activity rhythms) and the external timing. In fact, compared with young mice, aged mice require more days to entrain to the new phase after an LD phase shift (Valentinuzzi et al., 1997). This strongly suggests that the circadian phase of behavior, which is controlled by the SCN, would be continuously desynchronized with the external time under CJL. However, the hypothesis that slow entrainment has a deleterious effect on health has not been examined because of the lack of an appropriate animal model to prove this speculation. 
RESULTS AND DISCUSSION Rapidly Entrainable V1a -/-V1b -/-Mice Showed Lower Mortality under a Chronic Jet Lag Condition We previously found that mice deficient in vasopressin receptor V1a and V1b (V1a -/-V1b -/mice) showed virtually no jet lag symptoms in behavior, clock gene expression, and body temperature rhythms after an LD shift despite they having a normally functional clock (Yamaguchi et al., 2013). Moreover, V1a -/-V1b -/mice developed normally and exhibited no gross abnormalities (Nakamura et al., 2009). Therefore, in this study, we aimed to use such rapidly entrainable V1a -/-V1b -/mice and investigate whether the absence of dissociation between the internal clock and the environmental timing could overcome CJL-induced death in aged mice. To induce CJL, i.e., a condition in which circadian oscillators will be recurrently forced to re-entrain to a new LD cycle, we placed 116-week-old wild-type (WT) and V1a -/-V1b -/mice under CJL, where the LD cycle was advanced by 8 hr every 5 days. Until this age, 2 of the 10 WT mice and 1 of the 10 V1a -/-V1b -/mice died of natural causes, suggesting minimum difference between the longevity of WT and V1a -/-V1b -/mice. We previously showed that circadian oscillations of clock genes not only in the SCN but also in the peripheral organs of V1a -/-V1b -/mice fully re-entrained on day 5 after the 8-hr LD advance, whereas those in WT mice remained disturbed (Yamaguchi et al., 2013). We found that the locomotor activities in WT mice were not re-entrained continuously under CJL. WT mice even showed two locomotor rhythms with different period lengths simultaneously: one in the phase-advance direction and the other in the phase-delay direction ( Figure 1A). Thus, WT mice showed a high locomotor activity in the light phase, which is not normal in nocturnal animals ( Figure S1). WT mice steadily died after the onset of CJL, and all the mice examined died within 49 days under CJL. This mortality rate is quite similar with that observed in the previous study (Davidson et al., 2006). In contrast, the locomotor activities in V1a -/-V1b -/mice quickly re-entrained after each LD advance and the mutant mice showed most of the locomotor activities in the dark phase ( Figures 1B and S1). Approximately half the V1a -/-V1b -/mice survived till day 61 after CJL initiation. Statistical analysis revealed that V1a -/-V1b -/mice showed significantly less mortality rate than WT mice ( Figure 1C). (C) Survival curves of aged WT and V1a -/-V1b -/mice under CJL. Log rank (Mantel-Cox) test revealed a significant difference between the survival rates of WT and V1a -/-V1b -/mice under CJL (n = 8 for WT and 9 for V1a -/-V1b -/-; p = 0.0191). On day 61, the survival rate was 0% in WT mice and 44.4% in V1a -/-V1b -/mice. Black and red colors indicate WT and V1a -/-V1b -/mice, respectively. See also Figure S1. Infusion of V1a and V1b Antagonists into the SCN Decreased Mortality in Aged Wild-Type Mice Next, we examined whether inhibition of V1a and V1b signaling in the SCN is a key to decrease mortality in aged mice subjected to CJL. To approach this issue, we placed a cannula on the skull of WT mice and infused a mixture of V1a and V1b antagonists on the SCN continuously using a micro-osmotic pump (see Transparent Methods for the details). In this pharmacological study, we started the CJL exposure in mice aged 80 weeks to avoid potential sudden death due to surgical burden. 
Similar to the locomotor activities in aged WT mice examined above, vehicle-infused WT mice mostly showed a splitting behavior and considerably high locomotor activities in the light phase (Figures 2A and S2). All the vehicle-treated mice examined died in 153 days after CJL initiation (Figures 2A and 2C). In contrast, WT mice infused with a mixture of V1a and V1b antagonists showed a faster re-entrainment under CJL ( Figures 2B and S2), and four of the seven mice examined survived till 165 days after CJL initiation ( Figures 2B and 2C). Statistical analysis confirmed that the survival rate in antagonists-treated mice was higher than that of the vehicletreated mice ( Figure 2C). The causal relationship between CJL and higher mortality rate in aged WT mice is still unclear. Chronic stress was probably not the main cause of this increased mortality since Davidson et al. reported that the total daily fecal corticosterone levels did not increase in aged WT mice subjected to chronic phase advances, which showed higher mortality (Davidson et al., 2006). However, our findings from experiments using V1a -/-V1b -/non-jet-lag mice strongly suggest that long-term circadian misalignment between the endogenous circadian rhythm and the external LD cycle has an adverse effect on health in aged WT mice. In contrast, V1a -/-V1b -/mice or V1a/V1b antagonists-infused mice showed less mortality rate under CJL. This higher survival rate could be attributed to the temporal alignment between the endogenous circadian rhythm and the environmental LD cycle via immediate resetting of the locomotor activity rhythm throughout the CJL period. It was previously shown that aged C57BL/6 mice require longer days for re-entrainment after an LD advance compared with younger controls (Valentinuzzi et al., 1997). Moreover, cohort studies on nurses report that most nurses found it more difficult to cope with shift work as their age increases (Muecke, 2005). Although the precise mechanisms underlying the V1a-and V1b-mediated prevention of CJL-induced death remain unclear, the significant increase in survival rate of V1a -/-V1b -/mice and V1a/V1b antagonists-infused mice may provide the initial steps for pharmaceutical intervention against shift work-related health concerns, which currently cannot be treated with any direct medication. METHODS All methods can be found in the accompanying Transparent Methods supplemental file. DATA AND SOFTWARE AVAILABILITY The numbers of activity counts of each mouse have been deposited in the Mendeley Data repository (https://doi.org/10.17632/xv4b7768xg.1). SUPPLEMENTAL INFORMATION Supplemental Information includes Transparent Methods and two figures and can be found with this article online at https://doi.org/10.1016/j.isci.2018.06.008. ACKNOWLEDGMENTS This research was supported by Core Research for Evolutional Science and Technology (CREST, JPMJCR14W3); Japan Science and Technology Agency (to H.O.); scientific grants from the Ministry of (C) Survival curves of aged vehicle-and antagonists-infused mice under CJL. Log rank (Mantel-Cox) test revealed a significant difference between the survival rates of vehicle-and antagonists-infused mice under CJL (n = 7 each; p = 0.0461). On day 165, the survival rate was 0% in vehicle-infused mice and 57.1% in antagonists-infused mice. Black and red colors indicate vehicle and antagonists treatments, respectively. See also Figure S2. DECLARATION OF INTERESTS The authors declare no competing interests. 
Mouse and behavioral activity monitoring for jet lag experiments. Wild-type mice (C57Bl/6 mice, male, 78-week-old) were purchased from Shimizu Laboratory Supplies (Kyoto, Japan). For comparing the mortality rate of WT mice and V1a -/-V1b -/mice (Yamaguchi et al., 2013) under CJL condition, the mice were housed in the same facility from the age of 78 to 115 weeks. Then, each mouse was housed individually in light-tight, ventilated closets within a temperature-and humiditycontrolled facility with ad libitum access to food and water. The animals were entrained on a 12-h-light (~200 lux fluorescent light)/12-h-dark cycle and the light-dark cycles were phase-advanced by 8-h once every 5 days. Locomotor activity was recorded in 5-min bins with a passive (pyroelectric) infrared sensor (FA-05 F5B; Omron), and the data obtained were analyzed using Clocklab software (Actimetrics) developed on MatLab (Mathworks). All the experiments were conducted in accordance with the ethical guidelines of the Kyoto University Animal Research Committee. For pharmacological inhibition of V1a and V1b signaling, a mixture of OPC-21268 (2.5 mM, Sigma-Aldrich), a V1a antagonist, and SSR 149415 (2.5 mM, Axon Medchem), a V1b antagonist, was continuously delivered to the SCN via a micro-osmotic pump (Model 1004; ALZET). A hole was drilled at 0.5 mm posterior from the bregma. Then, the cannula (5 mm length, Brain Infusion Kit 2, ALZET) was inserted and fixed to the skulls of 79-weekold WT mice, and the pump was subcutaneously placed in the interscapular region. After the surgery, the animals were returned to their home cages and the LD cycles were advanced by 8-h once every 5 days. The pump was replaced with a new one containing the antagonist mixture or vehicle from ZT5 to ZT8 every 4 weeks under anesthesia (ZT stands for zeitgeber time; ZT0 indicates lights-on and ZT12 lights-off). Data and Software Availability The numbers of activity counts of each mouse have been deposited in the Mendeley Data repository (http://dx.doi.org/10.17632/xv4b7768xg.1).
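The survival comparison described above, a log-rank (Mantel-Cox) test between WT and V1a−/−V1b−/− mice under CJL, can be reproduced from per-animal survival times and censoring indicators. The sketch below is illustrative only: the day values and event flags are hypothetical placeholders rather than the recorded data, and it assumes the open-source lifelines package for the test statistic.

```python
# Minimal sketch of a log-rank (Mantel-Cox) comparison of two survival curves.
# Survival times (days under CJL) and event flags are hypothetical placeholders,
# NOT the data reported in the study.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Days until death (or last observation) for each animal
wt_days = [12, 18, 23, 27, 31, 38, 44, 49]
ko_days = [25, 34, 40, 52, 61, 61, 61, 61, 61]
# 1 = death observed, 0 = censored (still alive at the end of observation)
wt_events = [1, 1, 1, 1, 1, 1, 1, 1]
ko_events = [1, 1, 1, 1, 1, 0, 0, 0, 0]

# Kaplan-Meier estimate for one group, for inspection or plotting
km = KaplanMeierFitter()
km.fit(wt_days, event_observed=wt_events, label="WT")
print(km.survival_function_)

# Mantel-Cox (log-rank) test between the two groups
result = logrank_test(wt_days, ko_days,
                      event_observed_A=wt_events,
                      event_observed_B=ko_events)
print(f"chi2 = {result.test_statistic:.3f}, p = {result.p_value:.4f}")
```

With the animals' actual survival times and censoring status in place of the placeholders, the same call would yield the reported p-values.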
Apoptosis of mesenchymal stem cells is regulated by Rspo1 via the Wnt/β-catenin signaling pathway Objective The aim of this study was to investigate the effect and possible mechanism of action of roof plate-specific spondin1 (Rspo1) in the apoptosis of rat bone marrow mesenchymal stem cells (BMSCs). Methods Osteogenic and adipogenic differentiation of BMSCs was identified by Alizarin Red and Oil Red O staining, respectively. BMSC surface markers (cluster of differentiation 29 [CD29], CD90, and CD45) were detected using flow cytometry. BMSCs were transfected with an adenoviral vector encoding Rspo1 (BMSCs-Rspo1 group). The expression levels of Rspo1 gene and Rspo1 protein in the BMSCs-Rspo1 group and the two control groups (untransfected BMSCs group and BMSCs-green fluorescent protein [GFP] group) were analyzed and compared by quantitative polymerase chain reaction and Western blot. The occurrence of apoptosis in the three groups was detected by flow cytometry and acridine orange-ethidium bromide (AO-EB) double dyeing. The activity of the Wnt/β-catenin signaling pathway was evaluated by measuring the expression levels of the key proteins of the pathway (β-catenin, c-Jun N-terminal kinase [JNK], and phospho-JNK). Results Osteogenic and adipogenic differentiation was confirmed in cultured BMSCs by the positive expression of CD29 and CD90 and the negative expression of CD45. Significantly increased expression levels of Rspo1 protein in the BMSCs-Rspo1 group compared to those in the BMSCs (0.60 ± 0.05 vs. 0.13 ± 0.02; t=95.007, P=0.001) and BMSCs-GFP groups (0.60 ± 0.05 vs. 0.10 ± 0.02; t=104.842, P=0.001) were observed. The apoptotic rate was significantly lower in the BMSCs-Rspo1 group compared with those in the BMSCs group ([24.06 ± 2.37]% vs. [40.87 ± 2.82]%; t = 49.872, P = 0.002) and the BMSCs-GFP group ([24.06 ± 2.37]% vs. [42.34 ± 0.26]%; t = 62.358, P = 0.001). In addition, compared to the BMSCs group, the protein expression levels of β-catenin (2.67 ± 0.19 vs. 1.14 ± 0.14; t = −9.217, P = 0.000) and JNK (1.87 ± 0.17 vs. 0.61 ± 0.07; t = −22.289, P = 0.000) were increased in the BMSCs-Rspo1 group. Compared to the BMSCs-GFP group, the protein expression levels of β-catenin (2.67 ± 0.19 vs. 1.44 ± 0.14; t = −5.692, P = 0.000) and JNK (1.87 ± 0.17 vs. 0.53 ± 0.06; t = −10.589, P = 0.000) were also upregulated in the BMSCs-Rspo1 group. Moreover, the protein expression levels of phospho-JNK were increased in the BMSCs-Rspo1 group compared to those in the BMSCs group (1.89 ± 0.10 vs. 0.63 ± 0.09; t = −8.975, P = 0.001) and the BMSCs-GFP group (1.89 ± 0.10 vs. 0.69 ± 0.08; t = −9.483, P = 0.001). Conclusion The Wnt/β-catenin pathway could play a vital role in the Rspo1-mediated inhibition of apoptosis in BMSCs. Introduction Bone marrow mesenchymal stem cells (BMSCs) have been successfully used in cell transplantation therapy and are considered safe and effective seed cells in the field of regenerative medicine. 1 However, despite positive results from preclinical studies, data from phase I/II clinical trials are inconsistent and the improvement of organ function has been found to be quite limited. The major issues that BMSC therapy faces include inefficient cell delivery to the site of injury, low cell retention, and ineffectiveness of the stem cells in tissue regeneration. 
2–6 Moreover, these studies showed that genetic modification significantly improved the regenerative capacity of transplanted stem cells,6 and that genetic strategies may play a key role in improving the survival and differentiation of mesenchymal stem cells.7–11 Therefore, it is essential to find a gene or a set of genes that can improve the effect of BMSCs in the treatment of diseases. Recently, Zhao et al12 found that BMSCs overexpressing midkine could reduce the apoptosis rate in H9C2 cells (a fetal rat cardiac cell line) and therefore improve cell survival. Compared with unmodified BMSCs, the improvement in heart function of rats with myocardial infarction was more obvious when treated with BMSCs overexpressing midkine. Moreover, microRNA-383, which enhances the expression of glial-derived neurotrophic factor, could improve the therapeutic effect of BMSCs on spinal cord injury.13–15 BMSCs overexpressing bone morphogenetic protein 2 (BMP-2) could also improve the biological function of the gastrocnemius tendon transplanted in the intramedullary cavity and promote tibia healing.16

Roof plate-specific spondin1 (Rspo1) is a member of the Rspo family; it has a molecular weight of 35 kDa and is associated with activation of the Wnt signaling pathway.17,18 This family regulates the growth and development of animals, including the formation of blood vessels, muscles, and bones, as well as the development of limbs and of the reproductive, digestive, and respiratory systems.19 Rspo1 can bind to leucine-rich repeat-containing G protein-coupled receptors (LGRs) 4–6 and, together with soluble Wnt3a, synergistically induce the phosphorylation of low-density lipoprotein receptor-related protein 6 (LRP6), as well as promote the cytoplasmic stabilization of β-catenin and its accumulation in the nucleus. The conformational changes of these proteins play an important role in cell proliferation, differentiation, and maintenance of stem cell function.20 A recent study reported that Rspo1 could promote the osteogenesis of BMSCs by activating the Wnt/β-catenin signaling pathway and rescue bone loss.21 It was recently reported that aspirin induced morphological apoptosis in rat tendon stem cells via the mitochondrial/caspase-3 pathway and induced cellular apoptosis in the Achilles tendon. Importantly, the Wnt/β-catenin pathway played a vital role in aspirin-induced apoptosis by regulating mitochondrial/caspase-3 function.22 Wang et al23 found that inhibition of the Wnt/β-catenin signaling pathway improved the therapeutic effect of transcatheter arterial chemoembolization by suppressing migration and invasion while promoting the apoptosis of transplanted hepatocellular carcinoma cells in rats. Moreover, Rspo1 activates the Wnt/β-catenin signaling pathway, which is involved in the development, proliferation, and differentiation of stem cells, as well as in the repair of tissue damage.17,18 Therefore, we first determined whether Rspo1 could indeed suppress BMSC apoptosis. We then analyzed the role of the Wnt/β-catenin pathway in the inhibitory effect of Rspo1 on the apoptosis of BMSCs. Here, to test the possible effect and mechanism of action of Rspo1 in the apoptosis of rat BMSCs, we transfected BMSCs with an adenovirus carrying the Rspo1 gene and measured the apoptosis rate and survival of BMSCs. The expression levels of β-catenin and c-Jun N-terminal kinase (JNK), which are key proteins in the Wnt/β-catenin signaling pathway, were further examined for their roles in apoptosis.
Animals

Healthy male Sprague–Dawley (SD) rats weighing 60 to 80 g were used to isolate the BMSCs. All rats were obtained from the Animal Research Center of Shanxi Medical University. The experiments were performed in adherence to the National Institutes of Health "Guide for the Care and Use of Laboratory Animals" (Publication No. 85-23, revised 1996) and were approved by the Shanxi Medical University Committee on Animal Care (Approval No. 2018026).

Isolation and culture of BMSCs

SD rats were sacrificed by cervical dislocation. The hind limbs up to the femoral heads were cut and soaked in 75% alcohol for 5 min, and the femur and tibia were soaked in phosphate-buffered saline (PBS) solution. The bone marrow tissue was exposed and the bone marrow suspension was collected by flushing the marrow cavity with the medium. The collected bone marrow suspension was cultured for 24 h in 5% CO2 at 37 °C and washed with PBS to remove non-adherent cells. The morphology of the primary and passaged BMSCs was monitored using an inverted microscope during the experiment.

Identification of BMSCs

The BMSC surface markers were identified by flow cytometry. Passage 3 (P3) cells were suspended in medium and the cell density was adjusted to 1 × 10⁵ cells/ml. CD29-allophycocyanin (APC), CD90-fluorescein isothiocyanate (FITC), and CD45-phycoerythrin (PE) antibodies were added to the suspension and incubated at room temperature for 30 min before flow cytometry.

Osteogenic and adipogenic differentiation of BMSCs

P3 cells at about 90% confluence were obtained and inoculated in a 6-well plate at 1 × 10⁵ cells/ml. The cells were induced with osteogenic and adipogenic differentiation media, and then stained with Alizarin Red and Oil Red O, respectively.

Transfection of BMSCs

Primary BMSCs (4 × 10⁵ cells/well) were seeded in 6-well plates in complete culture medium. To construct the adenoviral vector encoding Rspo1 (ADV-Rspo1 vectors), the complementary DNA (cDNA) encoding rat Rspo1 was synthesized and cloned into the restriction endonuclease sites of the ADV, a mammalian expression vector containing green fluorescent protein (GFP) and puromycin resistance genes (Shenggong, Shanghai, China). Twenty-four hours after seeding, BMSCs were infected with the recombinant ADV (ADV-Rspo1 vectors) or ADV control vectors. The recombinant ADV encoding GFP (BMSCs-GFP) was used as a control. The cells were cultured for 72 h, and the transfection efficiency was determined using fluorescence microscopy and flow cytometry.

Apoptosis assay

The apoptosis rate of BMSCs was measured with an Annexin V-APC/7-aminoactinomycin D (7-AAD) Apoptosis Kit (KeyGEN BioTECH, Nanjing, China). Briefly, the transfected BMSCs were collected by trypsin digestion without ethylenediaminetetraacetic acid (EDTA) and then washed with PBS by centrifugation at 2000 r/min for 5 min. To the cell suspension, 5 µl of Annexin V-APC and 5 µl of 7-AAD dye solution were added at room temperature, protected from light, for 5-15 min. The cells were counted by flow cytometry within 1 h. Detection of apoptosis by AO-EB double staining (Solarbio, Beijing, China) was also performed. The cells were cultured in a 96-well plate; 72 h after transfection, the residual medium and non-adherent cells were removed by washing with PBS, and fresh PBS was added to the cells. A volume of 20 µl of working solution per millilitre of PBS was added (the working solution was prepared by mixing AO solution and EB solution at a 1:1 ratio).
After incubation for 2-5 min at room temperature, BMSCs were observed using a fluorescence microscope (Nikon, Tokyo, Japan).

Detection of gene expression by quantitative polymerase chain reaction (qPCR)

Total RNA was isolated using RNAiso Plus (Takara, Tokyo, Japan) and converted to cDNA using the High Capacity cDNA RT Kit (Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's instructions. RNAs from three replicates of each treatment were pooled into a custom SYBR Array 48-Well FAST Plate. The fold changes of gene expression relative to β-actin (an endogenous control) were determined according to the 2^(−ΔΔCt) method. The PCR primers were as follows: β-actin-F:

Statistical analysis

Numerical data with normal distribution were reported as the mean ± standard deviation (relative expression of genes [Rspo1, β-catenin and JNK] and proteins [Rspo1, Bax, Caspase-3, cleaved Caspase-3, β-catenin, JNK and phospho-JNK] and the apoptotic rates of BMSCs). Statistical analysis was performed using Student's t-test for comparisons between two groups. A value of P < 0.05 was considered statistically significant. All statistical analyses were performed using SPSS 22.0 (SPSS Inc., Chicago, IL, USA).

Characterization of BMSCs

The passage 0 (P0) cells were small with protrusions from the edges and varied in shape: polygonal, long fusiform, and irregular. The P3 cells had an enlarged volume, had a long fusiform shape, were uniform in size, and were neatly arranged in a consistent direction (Fig. 1A, B). To identify the differentiation potential of BMSCs, orange-red round lipid droplets in the cytoplasm of Oil Red O-stained cells were observed after adipogenic differentiation for 20 days (Fig. 1E, F); after 21 days of osteogenic differentiation, the surface of Alizarin Red-stained cells showed reddish brown calcium deposits that formed round calcium nodules (Fig. 1C, D). Moreover, BMSC surface markers were detected by flow cytometry. The results showed that BMSCs were positive for CD29 ([97.10 ± 0.76]%) and CD90 ([95.83 ± 0.76]%), and negative for the hematopoietic stem cell surface marker CD45 ([3.93 ± 0.60]%) (Fig. 1G-I).

Rspo1 could inhibit the apoptosis of BMSCs

The Wnt/β-catenin pathway could play a vital role in the inhibitory effect of Rspo1 on the apoptosis of BMSCs

As Rspo1 protein is recognized as an agonist of Wnt/β-catenin signaling, we evaluated the signaling level in the ADV-Rspo1-infected BMSCs during apoptosis. As expected, the expression levels of Wnt target genes (CTNNB1, encoding β-catenin, and JNK) were significantly increased (P < 0.01), which indicated increased activity of Wnt/β-catenin signaling due to Rspo1 during apoptosis of BMSCs (Fig. 6). Consistent with the qPCR results, compared to the BMSCs group, the protein expression levels of β-catenin (2.67 ± 0.19 vs. 1.14 ± 0.14; t = −9.217, P = 0.000) and JNK (1.87 ± 0.17 vs. 0.61 ± 0.07; t = −22.289, P = 0.000) were increased in the BMSCs-Rspo1 group. Compared to the BMSCs-GFP group, the protein expression levels of β-catenin (2.67 ± 0.19 vs. 1.44 ± 0.14; t = −5.692, P = 0.000) and JNK (1.87 ± 0.17 vs. 0.53 ± 0.06; t = −10.589, P = 0.000) were also upregulated in the BMSCs-Rspo1 group (Fig. 7A-C). Moreover, the protein expression levels of phospho-JNK were increased in the BMSCs-Rspo1 group compared to those in the BMSCs group (1.89 ± 0.10 vs. 0.63 ± 0.09; t = −8.975, P = 0.001) and the BMSCs-GFP group (1.89 ± 0.10 vs. 0.69 ± 0.08; t = −9.483, P = 0.001) (Fig. 7A, D). This suggests that the inhibitory effect of Rspo1 on BMSC apoptosis is accompanied by activation of the Wnt/β-catenin signaling pathway.
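The 2^(−ΔΔCt) fold-change calculation referenced in the qPCR section above can be written out explicitly. The sketch below is a generic illustration assuming hypothetical Ct values; the gene and group names are placeholders, not the study's actual measurements.

```python
# Illustration of the 2^(-ΔΔCt) relative-quantification method with hypothetical Ct values.

def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression of a target gene (vs. a reference gene such as beta-actin)
    in a treated group relative to a control group, by the 2^(-ΔΔCt) method."""
    delta_ct_treated = ct_target_treated - ct_ref_treated   # ΔCt, treated group
    delta_ct_control = ct_target_control - ct_ref_control   # ΔCt, control group
    delta_delta_ct = delta_ct_treated - delta_ct_control    # ΔΔCt
    return 2.0 ** (-delta_delta_ct)

# Hypothetical mean Ct values (target = CTNNB1, reference = beta-actin)
print(fold_change(ct_target_treated=24.1, ct_ref_treated=17.0,
                  ct_target_control=25.6, ct_ref_control=17.2))  # ~2.5-fold up-regulation
```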
Discussion

Adenovirus (ADV) can be used as a gene vector because the ADV genome infects cells and integrates into the host chromosomes.27 One of the significant challenges in using ADV as a gene vector is that virus particles can induce inflammatory reactions and cause damage to host cells.28 In the present study, the ADV used for transfection carried GFP, and there was no significant difference in the expression of GFP between the BMSCs-GFP and BMSCs-Rspo1 groups. Since the spectral characteristics of GFP are similar to those of FITC, we used FITC channels to detect GFP expression in the flow cytometry experiments. Using ADV as a gene vector, Rspo1 was successfully expressed in transfected BMSCs. Our data showed that there was no significant difference in the expression of Rspo1 or in the apoptosis rate between the BMSCs group and the BMSCs-GFP group after ADV transfection. Similarly, ADV transfection did not affect the expression of the apoptosis-related proteins Caspase-3, Bax, and cleaved Caspase-3. Thus, our results revealed that expression of the ADV genome in BMSCs did not interfere with the expression and function of Rspo1 in BMSCs.

Rspo1 activates β-catenin through a mechanism similar to that of the ligand in the classical pathway and enhances the biological activity of the Wnt/β-catenin pathway.18 Several reports have shown that Rspo1 is involved in regulating cell proliferation and differentiation, as well as in the development of embryonic bone, blood vessels, muscles, and fingernails; it is also reported to have an effect on the development of the embryonic digestive, respiratory, and reproductive systems, as well as limb formation. In addition, it plays an important role in the occurrence of many diseases.7–11 For example, after acute injury, Rspo1 was reported to be necessary for myogenic precursor cell differentiation at the appropriate time; at the same time, classical Wnt/β-catenin signaling was activated during myogenic differentiation.29 It was recently reported that knockout of Wnt5a could inhibit the proliferation and promote the apoptosis of keratinocytes by suppressing Wnt/β-catenin or Wnt5a/Ca2+ signaling.30 Okumura et al31 have shown that Rspo1 regulates the proliferation and apoptosis of corneal endothelial cells by activating the Wnt/β-catenin signaling pathway, effectively maintaining the function of corneal endothelial cells. Consistent with previous findings, our results showed that Rspo1 had a similar effect. The apoptosis rate of the BMSCs-Rspo1 group was significantly lower than those of the BMSCs and BMSCs-GFP groups, but no statistical difference was observed between the BMSCs and BMSCs-GFP groups. Moreover, the activity of Caspase-3, the expression levels of cleaved Caspase-3 protein, and the expression levels of Bax protein were significantly decreased in the BMSCs-Rspo1 group. This indicates that Rspo1 inhibited the apoptosis of BMSCs. In addition, the expression levels of the CTNNB1 and JNK genes, as well as of the β-catenin, JNK, and phospho-JNK proteins, in the BMSCs-Rspo1 group were significantly higher than those in the BMSCs and BMSCs-GFP groups. However, no statistical difference was observed between the BMSCs and BMSCs-GFP groups. Thus, to our knowledge, our results are the first to show that the Wnt/β-catenin pathway may play a vital role in the Rspo1-mediated inhibition of apoptosis of BMSCs and the consequent improved survival of BMSCs.

Conflicts of interest

None.
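The group comparisons throughout this article are reported as mean ± SD together with t and P values. When only such summary statistics are available, the underlying two-sample Student's t-test can be reconstructed as sketched below; the per-group sample size (n = 3) is an assumed placeholder and is not stated in the text, so the output will not exactly match the reported values.

```python
# Two-sample Student's t-test reconstructed from summary statistics (mean, SD, n).
# The sample size n = 3 per group is an assumed placeholder for illustration.
from scipy.stats import ttest_ind_from_stats

# Example: beta-catenin protein level, BMSCs-Rspo1 vs. untransfected BMSCs
t_stat, p_value = ttest_ind_from_stats(mean1=2.67, std1=0.19, nobs1=3,
                                        mean2=1.14, std2=0.14, nobs2=3,
                                        equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```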
Loeffler’s Syndrome and Multifocal Cutaneous Larva Migrans Cutaneous larva migrans (CLM) is a zoonotic skin disease that is frequently diagnosed in tropical and subtropical countries. Loeffler’s syndrome (LS) is a transient respiratory ailment characterised by pulmonary infiltration along with peripheral eosinophilia and commonly follows parasitic infestation. We report a 33-year-old male patient who presented to a tertiary care hospital in eastern India in 2019 with LS that was attributed secondary to multifocal CLM. Treatment with seven-day course of oral albendazole (400 mg daily) coupled with nebulisation (levosalbutamol and budesonide) led to complete resolution of cutaneous lesions and respiratory complaints within two weeks. There was complete resolution of pulmonary pathology at four-weeks follow-up. he was afebrile, normotensive (126/78 mmHg) with a saturation of 97% on room air. Bi-basilar crackles were heard on chest auscultation. Cutaneous examination revealed multiple discrete thread-like skin-coloured to erythematous serpiginous tract of various sizes (4-12 cm in length) distributed over the chest and abdomen [ Figure 1]. Focal excoriation and pustules were noted over few lesions. Other mucocutaneous sites were uninvolved. Evaluation of other organ systems was uneventful. Laboratory examination was notable for peripheral eosinophilia (absolute eosinophil count = 2,200 cells/μL). Stool examination for ova, parasite and cyst presence was negative. Chest radiography showed ill-defined bilateral pulmonary infiltrates. A high-resolution computed tomography of his thorax revealed the presence of ground-glass opacities mainly in the mid and lower zones of both lungs with predominant peripheral distribution [ Figure 2A]. Based on suggestive history, characteristic clinical presentation, laboratory and radiological findings, the final diagnosis of Loeffler's syndrome secondary to multifocal cutaneous larva migrans was established. He was treated with oral albendazole (400 mg) once daily for seven consecutive days along with nebulisation with levosalbutamol and budesonide as required. His respiratory symptoms and cutaneous lesions completely subsided in two weeks. There was complete radiological resolution at four weeks followup [ Figure 2B]. Informed written consent was obtained from the patient after full explanation regarding his images C utaneous larva migrans (CLM) is a distinct cutaneous entity that is relatively common in the warmer tropical and subtropical regions. It is characterised by tortuous skin lesions attributed to epidermal burrowing by certain helminthic larvae. 1 Apart from the cutaneous affliction, this condition is rarely uneventful. On rare occasions, CLM can culminate in Loeffler's syndrome (LS), which is characterised by migratory pulmonary infiltrates and peripheral eosinophilia. 2 We present an interesting case of LS associated with multifocal cutaneous larva migrans and review the literature on this uncommon association. Case Report A 33-year-old male patient presented to a tertiary care hospital in eastern India in 2019 with intense, non-productive cough for the last seven days with occasional breathlessness on exertion; he was otherwise healthy. The pulmonary symptoms were accompanied by abrupt onset pruritic skin eruptions over chest and abdomen for the same duration. Recently, he had returned from a vacation to a nearby coastal town where he had spent a significant time on the sandy beaches. 
There was no history of fever, haemoptysis, wheeze, chest pain, allergic rhinitis or relevant drug intake (prescription, over the counter or illicit). His primary care physician had initiated a fiveday course of oral azithromycin (500 mg daily) without any significant improvement. His medical and family history was non-contributory. On general examination, being published for academic purposes. The patient did not have any objection regarding use of his images which may reveal his identity and gave permission to use them. Discussion LS is a transient respiratory illness associated with peripheral eosinophilia as a response to parasitic infestation or medications. 3 Ascaris lumbricoides is most commonly implicated with the condition followed by Trichuris, Strongyloides, Taenia saginata, Entamoeba histolytica and as a complication of chronic asthmatic states. However, it has rarely been reported with CLM. In 1946, Wright and Gold first described 26 patients with cutaneous larva migrans who developed Loeffler's syndrome. 4 Subsequently, this rare complication of CLM has been reported only in a few cases [ Table 1]. 3,[5][6][7][8][9][10][11][12][13][14][15] CLM, also termed ' creeping eruption' , is a parasitic infestation caused by the invasion and migration of parasitic larvae in the skin. The burrowing of the larva of Ancylostoma braziliense, Ancylostoma caninum, Necator americanus, Uncinaria stenocephala and Strongyloides stenocephala have been implicated in such creeping eruptions. 16 Adult hookworms infest the intestines of cats and dogs and their ova in excreta hatch under favourable conditions. These larvae then penetrate intact or abraded skin following exposure with soil contaminated with faeces. Humans act as an accidental dead-end host as the travelling parasite perishes and the cutaneous manifestations usually resolve uneventfully within months. Warm, sandy, humid and shady fields, sandpits or sea shores are particularly favoured areas. This makes barefoot walkers, farmers, gardeners, hunters, hod carrier or beach visitors particularly susceptible to acquire the infestation. Exposed anatomical sites such as hands and feet are usually affected. However, involvement of atypical locations such as the buttocks, genitalia, scalp and multifocal or disseminated lesions have been rarely reported in the literature. Clinically, an initial small reddish papule progresses to a serpiginous pruritic rash with a slow rate of progression from less than 1-2 cm/day. 1,[16][17][18][19] CLM may be complicated by secondary bacterial infection, allergic reaction, eczematisation or very rarely LS. Concurrently or subsequently, a patient may develop non-productive cough, exertional breathlessness, exacerbation of pre-existing asthma which should raise the clinical suspicion of LS. Interestingly, a unique case of asymptomatic LS in CLM has been reported recently. 12 The exact pathogenesis of pulmonary infiltrates in CLM remains poorly understood. The current understanding encompasses a systemic immunologic process in which hookworm in the skin leads to generalised sensitisation. The lung reacts with the soluble larval antigen and produces the eosinophilic pulmonary infiltration. The complete resolution of pulmonary infiltrates and skin eruptions with oral anti-helminths supports this proposed mechanism. 20 Associated eosinophilia is teleologically related to the role of eosinophils in parasitic destruction. 
In parasitic infestation such as CLM, eosinophilic chemotaxis may result from IgE-mediated reactivity against the infestant, direct chemotactic property of certain parasites, T-cell dependent mechanism or may be immune-complex related. 13 In the present case, the differential diagnoses for the cutaneous lesions included larva currens, migratory myasis, gnathosto-miasis, cercarial dermatitis, allergic contact dermatitis, inflammatory tinea or scabies. However, these were excluded based on history and clinical examination. Loeffler's syndrome should be considered early as a differential diagnosis for community acquired pneumonia and asthma unresponsive to classic antibiotic therapy in individuals with associated cutaneous pruritic eruption. Pulmonary fibrosis and respiratory failure may rarely complicate LS. 3,6,7,21 The condition is primarily self-limiting but appropriate pharmacological intervention leads to faster resolution. Veraldi et al. reported a new therapeutic regimen of oral albendazole (400/day for seven days) to be highly effective. 22 Single dose therapy of oral ivermectin (200 µg/kg) is equally effective with near 100% cure rates. Topical 10% thiabendazole may be used as an alternative. Opting for surgery or cryotherapy rarely proves to be effective. Sometimes supportive therapy such as oxygen inhalation, systemic or inhalational corticosteroids may be required to alleviate the respiratory symptoms. 4,8,9,23 Conclusion The current case highlights the occurrence of LS secondary to multifocal CLM and adds to the limited existing literature on this rarely documented association. LS should be considered early in the differential diagnosis for respiratory complaints in association with pruritic cutaneous eruption especially in an individual who recently returned from a vacation in a tropical destination. In this era of global migration, physicians should be aware of the uncommon systemic manifestation of this uncommon tropical infestation and provide prompt treatment to avoid long-term complication. a u t h o r s 'contribution AS, DBB and AC drafted the manuscript. AS and SKB contributed to patient management, review of literature and critical revision of the manuscript. All authors approved the final version of the manuscript.
Fruit Setting Behaviour of Passion Fruit Passion fruit has great prospective to fascinate fruit consumer for its taste and delicious fruit juice and improvement of the economic condition of the fruit grower in the developing countries. The self-incompatibility in the passion fruits is an imperative reason to be considered regarding fruit production. Pollination is an essential for self-sterile crops as passion fruit (Passiflora edulis Sims.). The experiment was conducted in the field and laboratories of the Bangabandhu Sheikh Mujibur Rahman Agricultural University, Salna, Gazipur, Bangladesh to investigate the fruit setting behavior of passion fruits at during five flashes. We estimated to study fruit setting behavior of passion fruit at different flashes and determine effective pollination method and suitable flashes among five flashes. Result revealed that percent of fruit set of passion fruit was recorded highest; Seeds per plant were recorded highest in third flash when flowers were pollinated by hand compared with self-, and natural pollinations. Length-breadth ratios of fruits in third flashes were recorded higher when pollinates by hand. Individual fruit weight was also recorded higher at third flash. Plants required minimum days from flower anthesis to full maturity during third flash. On the other hand, fruit growth behavior of hand pollinated flowers was recorded higher during third flash of passion fruit. Results indicated that all studied characteristics of fruit and seed of passion fruit of third flash performed best. Introduction The passion fruit belongs to family passionflowers is an allogamous species [1].In Passiflora genus contain more than 450 species but 12 species are cultivated.Only one species is Passiflora edulis Sims that grown as perennial, possess large flowers, and are cross-pollinated and vastly commercialized fruits [2].It largely distributed around the tropics and warm humid subtropics [3][4][5].It is an agronomically important delicious fruits [6] and cultivated for its ornamental, medicinal and nutritive characteristics [7].Purple and yellow passion fruits are commonly cultivated in northern region of India [8]. Passion fruit has hermaphrodite, solitary flowers, located in the leaf axils.There are usually five stamens and ovary is borne over the androgynophore.There are three styles united at base, and at the top of style there are three bifurcated stigmas [9].Passion fruits are protan-drous as anther dehiscence before stigma becomes receptive and stigma remains receptive from time of flower opening to closing.Rhythmic movement of the style of passion flower has been shown self-incompatibility because of the style is in upright position and it starts curving in due course of time.Due to it floral morphology yellow passion fruit is an allogamous plant and self-incompatibility of sporophytic type [10] and cultivated for its edible fruits [11].Self-incompatibility is an important factor for passion fruit production [12]. Pollination is important for fruit production on passion vines and extent of fruit set is dependent on effective pollination.Many yellow passion fruits do not set fruit unless their flowers are dusted with pollen from a different vine that is genetically compatible.Hand pollination is the easiest way to ensure fruit production.Hand pollination increases fruit yield in passion fruit.Artificial pollination is necessary in passion fruit because of its floral morphology where the anthers placed bellow the stigma. 
Anther of passion fruit is versatile in nature and turns upside down at the time of anthesis.Pollen grains are large, highly sticky and self-incompatible.The self-incompatibility in the passion fruits is an important factor to be considered regarding fruit production and studies on the heredity [10,12]. The flowers of passion fruit are large, attractive, colorful and fragrant.The flowers produce a plenty of pollen and nectar that facilitate insect pollination.The principal insects visiting passion flower include Apis mellifera (honey bee) and Xylocopa vanpuncta (carpenter bees).Carpenter bee is most effective pollinator as it has large body and its body brushes along the anther and stigma while collecting nectar.On the other hand honey bees are not effective pollinator because of their foraging habit [13].Though the passion flowers are hermaphrodite, they are self-sterile and self-incompatible which lead to poor fruit set [10].The anther at the top of filament is versatile in nature.The anther just at the time anthesis turns upset down.The pollens of passion flower are very sticky in nature which may be another cause of poor fruit set naturally.Wind is ineffective for pollination because of heaviness and stickiness of the pollen though flowers have fertile pollen. The passion flower usually opens around mid-day that is generally the warmest time of day, until the end of the afternoon.During this period pollinators collect nectar, transfer pollen from one flower to another.The effective pollination occurs in the period after the style curves completely [14]. Fruit setting behavior is an important criterion for plant breeder in the process of development of a variety.The development of variety has been associated with yield and fruit quality [15].The agro-ecological conditions mainly of hill tract regions of Bangladesh are amiable for passion fruit cultivations [16].Passion fruit in Bangladesh is grown popularly in home gardens for its lucrative color of flower and flavor and tasty yellow juice.Though the fruit setting behavior of passion fruit has been studied in details in other countries, very little work has been done in Bangladesh [16,17].Due to above mentioned natural constraints pollination and fruit set of passion fruit is hampered.Considering the above facts the present study was undertaken to find out suitable pollination method for successful fruit setting. Materials and Methods The study was conducted at the experimental farm of Bangabandhu Sheikh Mujibur Rahman Agricultural University, Gazipur, situated at 24.09˚ North latitude and 90.26˚East longitudes with an elevation of 8.4 meter from the sea level.The climate of experimental site is subtropical characterized by heavy rainfall during April to September and scanty during the rest of the year.Annual rainfall is favorable for passion fruit growing.The soil of the experimental field is clay loam in texture and acidic in nature with pH of around 5.8.The yellow passion fruit (Passiflora edulis Sims.) 
was used for the present experiment.Vine cuttings of two years old plant earlier collected from Bangladesh Agricultural Research Institute were used for the this study.Passion fruit were planted in trellised in rows 4.5 m apart and spaced 4.5 m apart within rows.The experimental plot was welldrained high land and where pit were prepared on the raised beds.Recommended fertilizers were applied in the pits (50 × 50 × 50 cm 3 ) 10 days before transplanting of cuttings.Flowers were pollinated by hand compared with self-, and natural pollinations. Natural pollination: For natural pollination 100 well developed flower buds were tagged for each flash from healthy plants.At 15th day after tagging, fruits were counted.Natural pollination is usually caused by wind, water and insects. Self-pollination: For self-pollination another 100 well developed flower buds were tagged and bagged from each flash on previous day of blooming from healthy plants.After 5 days the tags were removed.At 15th day after tagging, fruits were counted. Hand Pollination: For hand pollination another 100 well developed flower buds were tagged from each flash on previous day of blooming from healthy plants.On next day the tagged well bloomed flowers were pollinated by hand.At 15th day after hand pollination fruits were counted. Data on the Fruit weight, Fruit length, Fruit breadth, Fruit and seed setting and Number of seeds per fruits were recorded. Fruit and seed setting after open-, self-, and hand pollination: Flowers tagged for natural, self-and hand pollination were done at different times of the day.Twenty flowers of each type were considered for each treatment.The flowers were bagged 24 hrs before anthesis and they were rebagged for another 5 days after pollination.Open pollinated flowers were only tagged.Fruit set was observed by counting the fruits harvested at maturity.Number of seeds per fruits: To observe the seed setting ability of both types 40 -45 days old fruits were harvested and numbers of seeds per fruit were counted.Total numbers of seeds per fruit were recorded. The collected data were analyzed statistically using MSTAT-C computer package (Michigan State University, East Lansing, MI, USA) following the methods ofGomez and Gomez (1984) [18].The analysis of variance procedure (ANOVA), differences among treatment means were determined using the Least Significant Difference (LSD) at 5% level of significance. Fruit Set Fruit set using three pollination methods at different flashes of passion fruit was described here. Self-Pollination Fruit set percent at different flashes using self-pollination ranged from 1.71% to 4.51% (Table 1).Plants produced maximum (4.51%) fruit during third flash using selfpollination.Before and after third flash using self-pollination, fruit set percent was noticed to be declined gradually.Plants produced significantly higher fruits during third flash compared with other flashes using self-pollination.No significant change of fruit set was observed between third and fifth flashes using self-pollination. Natural Pollination At different flashes imposing natural pollination fruit set percent of passion fruit varied from 12.60 to 25.67.As self-pollination plants produced maximum fruit (25.67%) during third flash imposing natural pollination (Table 1).Fruit set percent during third flash was recorded highest (25.67) compared with other flashes by natural pollination.Higher rate of fruit set of passion fruit was recorded in all flashes during natural pollination compared with self-pollination. 
Hand Pollination Among the studied methods, hand pollination showed to set highest percent of fruit at all flashes.Imposing hand pollination fruit set percent ranged from 31.61% to 46.71% (Table 1).Among the three pollination methods, hand pollination was noticed to be best in respect of fruit set.As other two methods (self-pollination and natural pollination), hand pollination produced maximum fruit during third flash.Among the three pollination methods and five flashes, plant produced highest fruit imposing hand-pollination during third flash.Among the flashes, flowers pollinated by hand produced maximum fruits during third flash followed by fourth, second, fifth and first flashes. Seeds Per Fruit Effect of the pollination methods on seeds per fruit at different flashes was presented in Table 2. Self-Pollination Seeds per fruit in multiple seeded fruits depend upon number of pollens availability on the stigma of flower.Seeds per fruit of self-pollinated flower of passion fruit at different flashes were recorded variable.Seeds per fruit at different flashes ranged from 3.59 to 10.51 (Table 2).Seeds per fruit were recorded maximum (10.51) at third flashes followed by fourth, second, fifth and first flashes.From the finding it was noticed that imposing self-pol-lination early and late flowers produced minimum seeds per fruit. Natural Pollination From the findings it was observed that seeds per fruit imposing natural pollination at different flashes were recorded higher than that of self-pollination (Table 2).As self-pollination seeds per fruit in open pollinated flower at third flash were recorded maximum (16.51 seeds/fruit).The trend of seeds per fruit in open pollinated flowers at different flashes was noticed more or less as self-pollinated flowers.Seeds per fruit were recorded minimum (8.56) at fifth flash compared with self-pollination. Hand Pollination Among the three pollination methods, hand pollination produced maximum seeds per plant at all flashes (Table 2).It was recorded that among the three pollination methods and five flashes plant produced maximum seeds per fruit imposing hand-pollination at third flashes.Stigma of passion flower received maximum number of pollen which is the cause of maximum seeds per fruit.On the other hand, hot-humid weather was noticed favorable for better growth and development of passion fruit which prevailed during third flash.On the contrary average seeds per fruit imposing hand pollination were recorded maximum (Figure 1). Fruit Weight Effect of pollination methods on individual fruit weight (g) of passion fruit at different flashes at maturity on fresh weight basis was reported in Table 3. Self-Pollination Individual fruit weight of passion fruit at different flashes at maturity using self-pollination was noticed variable (Table 3).Individual fruit weight at different flashes ranged from 25.28 to 35.81 g.Individual fruit weight was noticed maximum at third flash compared with other flashes.At first and fifth flashes individual fruit weight was recorded minimum as compared with other flashes imposing self-pollination. 
Natural Pollination Imposing natural pollination individual fruit weight of passion fruit at different flashes varied from 34.18 to 42.67 grams at maturity (Table 3).Results revealed that plant produced largest fruit (42.67 grams) using natural pollination followed by fourth, second, fifth and first flashes.As self-pollination, plant produced largest fruit during third flash imposing natural pollination.Compar-ing with self-pollination, individual fruit weight at dif-ferent flashes imposing natural pollination was recorded higher.Individual fruit weight is associated with the number of seeds per fruit.Fruit set of passion fruit was mainly caused by Apis mellifera (data not shown). Hand Pollination Individual fruit weight imposing hand pollination at different flashes ranged from 30.41 to 39.40 grams (Table 3).As self and natural pollinations, individual fruit weight at third flash was also recorded higher.Individual fruit weight of passion fruit at all flashes was noticed comparatively lower than natural pollination.Individual fruit weight depends upon number of fruits per plant.As fruit set it was higher imposing hand pollination.So the individual fruit weight was recorded comparatively lower than natural pollination.Among the pollination methods at different flashes plant produced biggest fruit (42.67 g/fruit) during third flash pollinated with natural pollination.During third flash, plant showed vigorous growth which may be the cause of formation of longer fruit during third flash. Days Required from Anthesis to Maturity Fruit maturity of a crop is influenced by genetic make-up, physiological condition of the specific crop as well as environmental factors such as rainfall, humidity, temperature, day length etc. Fruit maturity of passion fruit at different flashes was noticed variable (Figure 2).Results revealed that plants required minimum days (42 days) from flower anthesis to full maturity during third flash.Days to maturity from anthesis was observed to be enhanced.On the other hand maturity of fruit was noticed to be delayed during first and fifth flashes of passion fruit.During third flash hot and humid weather as well as longest day-length prevailed that might be the cause of enhancement of fruit maturity. Fruit Length and Diameter Effect of pollination methods on fruit length and diameter of passion fruit at different flashes was presented in Table 4. Self-Pollination on Fruit Length Fruit length of passion fruit imposing self-pollination at different flashes was recorded not uniform (Table 4).Fruit length at different flashes ranged from 3.19 to 4.16 cm.Fruit length was recorded maximum (4.16 cm) at third flash followed by fourth, second, fifth and first flashes.No significant change of fruit length between third and fourth flashes imposes self-pollination. Natural Pollination on Fruit Length Similar trend of fruit length at different flashes imposing natural pollination was noticed.But fruit length at all flashes was recorded little bigger than that of self-pollinated fruits.As self -pollination, plant produced longest fruits (5.91 cm) imposing natural pollination at third flash.Among the flashes, plant produced smallest fruit (3.36 cm) at first flash.Fruit length of passion fruit between third and fourth flashes did not show any signify-cant change. 
Hand Pollination on Fruit Length Plants produced longest fruit at all flashes imposing handpollination compared with self-and open pollinations.Plant produced longest fruit imposing hand pollination as self and open pollination at third flash (Table 4).Among these methods of pollination at different flashes plant showed longest fruit imposing hand pollination at third flash.Fruit length of multiple seeded fruit depends upon number seeds set in the fruit. Fruit Diameter Fruit diameter of passion fruit imposing self-, natural and hand pollinations at different flashes was mentioned in Table 4.No noticeable change of fruit diameter of passion fruit was observed imposing the above mentioned pollination methods during five flashes.Fruit diameter was affected by pollination methods and flashes. Length-Breadth Ratio Length-breadth ratio of passion fruit imposing different methods of pollination at different flashes was shown in Figure 3.It was noticed that length-breadth ratios of fruits of all flashes were recorded higher when passion flowers were pollinated by hand pollination followed by natural and self-pollinations. Fruit Yield Fruits per plant ranged from 2 to 28 by natural pollination.Plant produced maximal number of fruit by natural pollination during third flash.Minimal number of fruits per plant was noticed in fifth flash by natural pollination (Table 5).Fruit set percent at different flashes varied from 7.41 to 20.00 by natural pollination.As fruits per plant, fruit set percent was recorded highest at third flash.Fresh weight of individual fruit (g) was recorded maximum during third flash though it ranged from 21 to 34 by natural pollination. Self-pollination Open Discussion Pollination is an important criterion for fruit set in passion fruit.Our main aim was to determine the extent to which pollination methods are effective for successful fruit production.Kishore et al. (2010) [19] reported that maximum number of bees was observed between 07:00 -08:00 h in purple and giant passion fruit, but between 13:00 -14:00 h in yellow passion fruit.In the present study the maximum number of bees was noticed in the field during 13:00 -14:00 h.For open pollination the most common pollinating bee for purple, giant and yel-low passion fruit was A. mellifera, while A. cerena was for P. foetida [19].In this study Apis mellifera was noted the most common pollination bee.Floral morphology of passion flower is the barrier for self-pollination due to pollen not reaching the stigma.Except self-pollination (0% fruit set), manual pollination gave greater fruit set than natural pollination (about 55% vs. 44.6%),resulting in 90-g fruits (20% -25% heavier than those of the control).Juice percentage and seed number were increased by hand pollination by about 40 and 10%.In general, cross pollinating at around 16.00 h gave the best results (Duarte and Sierra, 1997) [20] which was little contradictory with the present experiment.A high correlation between fruit weight and seed number, fruit length and diameter were noticed [21]. Though passion fruit (P.edulis f. 
flavicarpa) is capable of a very small degree of self-compatibility, cross-pollination results in increased fruit set, fruit weight, seed set, juice volume and juice sugar concentration. Although large bees such as Xylocopa spp. were expected as pollinators, honeybees (A. mellifera) were the only pollinators observed. The foraging habits of honeybees, not their size, may cause the lower than expected percentage fruit set [13]. Fruit set of passion fruit in open pollinated flowers in the present study was caused mainly by honey bees (data not shown), which is supported by Hammer (1987) [13]. Hand pollination alone or in combination with GA3 increased fruit set, fruiting and fruit quality (weight, juice weight and TSS) compared with open pollination. Hand pollinated fruits contained more seeds than open pollinated fruits. Hand pollination decreased rind weight and total acidity content [14]. Seeds per fruit by hand pollination in the present study were observed to be highest (35.18) compared with self- and natural pollination, which is supported by Saleh and Zarad (1996) [14]. Nazrul et al. (2003) [16] reported 30% fruit set by hand pollination in yellow passion fruit. In the present investigation, fruit set percent was recorded maximum (20) during the third flash by natural pollination, mainly caused by Apis mellifera. In another study, Kishore et al. (2010) [19] mentioned 42.2% fruit set of yellow passion fruit using A. mellifera as a natural pollinator. Photoperiod [22], air temperature [23], pesticides [24] and soil moisture are factors that determine the yield of yellow passion fruit. Due to high humidity at the time of anther opening, the cell content of pollen grains has high osmotic pressure as well as low resistance of their walls, which reduces pollen viability and as a result affects the fructification and fertilization percentage in pollination [12]. Biotic factors such as the physiological stage of the plant interfere with fructification and seed production [25]. Biochemical pathways control fruit setting and maturity in yellow passion fruit [15]. The low yield before and after the third flash is due to the lack of flowering and fructification of the plant because of climatic conditions. Flowering of the third flash occurred during June-July; at that time, because of rainfall and high humidity, the temperature was relatively low for summer. Relatively low temperatures are suitable for flowering and fruit set, and moderately high temperatures are favorable for fruit growth and quality in purple passion fruit [26].

Conclusion

Finally, it can be concluded that, among the pollination methods (self-, natural and hand pollination), passion fruit produced maximum fruits when flowers were pollinated by hand. Hand pollination showed maximum fruits per plant at all flashes. Seeds per plant were recorded highest when flowers were pollinated by hand. Individual fruit weight was recorded maximum with naturally pollinated flowers. Plants required the minimum number of days from anthesis to full maturity at the third flash compared with other flashes. Plants produced the longest fruit by natural pollination followed by hand pollination during the third flash.

Figure 2. Effect of different flashes on days to fruit maturity from anthesis (days) of passion fruit.

Figure 3. Length-breadth ratios at different flashes by applying different pollination methods.
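The analysis described in the statistical methods above, an ANOVA followed by mean separation with the least significant difference (LSD) at the 5% level, can be sketched as follows. The replicate values are hypothetical placeholders, and a balanced layout (equal replicates per flash) is assumed for the LSD formula.

```python
# One-way ANOVA followed by Fisher's LSD at the 5% level, on hypothetical replicate data.
import numpy as np
from scipy import stats

# Hypothetical fruit-set percentages (3 replicates per flash), NOT the study's data
flashes = {
    "flash1": [12.1, 13.0, 12.7],
    "flash2": [20.5, 21.8, 20.9],
    "flash3": [25.1, 26.3, 25.6],
}
groups = list(flashes.values())
k = len(groups)        # number of treatments
n = len(groups[0])     # replicates per treatment (balanced design assumed)

f_stat, p_value = stats.f_oneway(*groups)

# Mean square error from the within-group (error) sum of squares
df_error = k * (n - 1)
ss_error = sum(((np.array(g) - np.mean(g)) ** 2).sum() for g in groups)
mse = ss_error / df_error

# Fisher's LSD: two treatment means differing by more than this are significant at 5%
t_crit = stats.t.ppf(1 - 0.05 / 2, df_error)
lsd = t_crit * np.sqrt(2 * mse / n)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}, LSD(5%) = {lsd:.2f}")
```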
Tool Wear Rate and Surface Integrity Studies in Wire Electric Discharge Machining of NiTiNOL Shape Memory Alloy Using Diffusion Annealed Coated Electrode Materials Electrode material used in wire electric discharge machining (WEDM/wire EDM) plays a vital role in determining the machined component quality. In particular, when machining hard materials like nickel titanium/NiTi (NiTiNOL) shape memory alloy, the quality of electrode material is important as it may have adverse effects on the surface properties of the alloy. Different electrode materials give different performances, as each electrode material is made up of different conductivity, compositions and tensile strength. Therefore, detailed experimental studies have been carried out to understand the effect of diffusion annealed coated wires (X-type and A-type) on NiTiNOL SMA during the wire EDM process. The tool wear rate and surface roughness responses have been studied for both the electrode materials against different wire EDM variables such as pulse time, pause time, wire feed and spark gap set voltage. The impact of these process parameters on the stated output responses has been analyzed and further surface and subsurface analysis of the machined component has been carried out to understand the impact of diffusion annealed electrode materials during the wire EDM process. The investigation reveals that an A-type diffusion annealed coated wire is found to be most suitable in terms of tool wear rate, surface roughness and surface integrity during machining of NiTiNOL shape memory alloy compared to X-type and traditional brass-based electrode materials. Surface topographical properties were studied using confocal microscopic analysis and scanning electron microscope (SEM) with energy-dispersive spectroscopy (EDS) analysis. The subsurface analysis like microhardness and recast layer thickness was also studied for both the wires against different machining conditions. Introduction Nickel titanium/NiTi (NiTiNOL) is the special class of shape memory alloys (SMAs) used for biomedical and surgical applications. Biocompatibility is the crucial factor for the usage of NiTiNOL SMAs in the human body [1]. Some of the biomedical applications of NiTiNOL include orthodontic wires, braces, mandible fracture plates, spinal implants, vertebrae discs, etc. [2,3]. As NiTiNOL SMAs are used for biomedical applications, conventional machining such as milling, drilling, turning, etc., cause various problems such as formation of work-hardened layer, frequent tool breakage, burr formation, etc., as the NiTiNOL SMA is a hard and difficult to cut alloy and is characterized by various other peculiar properties [4,5]. Biomedical implants or surgical tools consist of intricate and complex shapes, which are hard to produce using conventional machining processes. Among the different non-conventional machining processes, wire electric discharge machining (WEDM/wire EDM) is found to be the most appropriate and sophisticated process to machine hard and difficult to cut conductive alloys such as NiTiNOL and produce the required intricate shapes for different applications [6,7]. Wire EDM uses a thin wire electrode material to cut the conductive work material or an alloy. The wire electrode is rapidly charged and hence it produces high-intensity sparks to melt the material from the workpiece, which is further flushed away by coolant, preferably deionized water. 
Along with different process variables in wire EDM such as pulse and pause times, wire feed rate, spark gap set voltage, current, etc., the wire electrode material plays a vital role in the quality of the finished parts. As the sparks are generated from the electrode materials, the type of electrode material used for the operations decides the finish of the machined component. Plain brass (PB) wire and zinc-coated brass wire (ZBW) are the most frequently used wire electrode materials during the wire EDM process and most of the works carried out by researchers during wire EDM of nickel titanium alloys have used either brass or zinc-coated brass wire as electrode material for their experimentation purpose [7][8][9][10]. As copper has high electrical conductivity, it was formerly supposed to be the EDM wire, but the flushability and tensile strength of copper wire electrode were found to be low and therefore, brass wire, which is an alloy of copper and zinc with good flushability, tensile strength and reasonable conductivity, is used as WEDM electrode. Plain brass wire electrode consisting of 65% copper and 35% zinc is categorized under the soft electrode category as its tensile strength is 420 MPa. Few researchers have worked on machining of NiTi SMAs using plain brass wire as electrode material. Liu and Guo [11] used plain brass wire as electrode for machining NiTiNOL SE508 work material and observed that during the machining process both the electrode and workpiece materials were eroded and various compounds such as titanium oxides and titanium carbides were formed due to complex chemical reactions. The top surface of the recast layer was characterized by large depositions of copper (Cu) and zinc (Zn). Elemental diffusion was mainly between electrode and workpiece and there was not much detection of diffusion from dielectric. Many microcracks were also observed on the machined surface. Brass wire of 0.25 mm diameter was used as electrode material in WEDM for machining an equiatomic NiTi SMA by Manjaiah et al. [12]. Much surface oxidation and carbonization were observed over the white layer. The surface hardness of the alloy was more due to the surface oxidation, which is because of disassociation of brass electrodes at higher temperatures. Confocal analysis showed many peaks and valleys, and huge debris on the machined surface was observed through scanning electron microscopic (SEM) analysis. Bisaria and Shandilya [13] used brass wire for WEDM of Ni-rich NiTi SMA and observed that higher pulse time caused higher discharge, resulting in larger material erosion. Huge craters, globules and microcracks were observed in SEM analysis. Compared to lower pulse time, the higher pulse time produced 1.2 times thicker white layer thickness and elements of brass wire such as Cu and Zn were found to be deposited on the machined surface. Soni et al. [14] studied the effects of WEDM process parameters on the machined ternary NiTiCu SMA with PB wire. Surface roughness (SR) increased with increase in pulse time, and as spark gap set voltage (SV) increases, SR was found to decrease. At higher cutting speed, the surface crack density of machined alloy was higher. Brass wire produced poor surface quality at higher pulse time. Kulkarni et al. [15] studied tool wear rate (TWR) of the brass wire electrode during WEDM of NiTiNOL alloy and observed that the wire topography of worn-out electrode showed many craters and crack formations on the surface of the wire electrode at higher TWR. 
At lower TWR, the craters were less and cracks were insignificant. If the zinc content in brass electrode is increased, it results in higher cutting speed, but due to the limitations of the cold drawing process, it is difficult to increase the Zn content above 40% and therefore, the half hard wire electrode with just above 440 MPa tensile strength was introduced, which has a coating of Zn over the core brass. Such electrode materials are termed as Zn-coated brass wire electrodes. Few researchers worked on machinability studies of NiTi SMA using ZBW as electrode material during WEDM. Daneshmand et al. [16] investigated WEDM of NiTi60 using ZBW and studied the SR output response. Depth craters with huge debris were observed on the machined alloy through SEM analysis. The SR was increased rapidly at higher pulse time and at the same time the electrode material was found to be worn out. The various deposited elements formed the layer of 10-20 microns on the machined surface and had their effect on change of base properties of smart alloy. Ali Akbar and Saeed [17] studied the effect of WEDM process parameters on NiTiNOL alloy using ZBW as electrode material. The microhardness of the machined surface was found to be increased several times compared to the base materials' microhardness and different hard oxides and metal oxides were formed. Manjaiah et al. [18] used both brass and zinc-coated brass wire for machining ternary NiTiCu SMA and concluded that SV is the most influential process parameter for minimizing SR, and ZBW electrode was found to be better than plain brass wire electrode in increasing material removal and minimizing SR. Kulkarni et al. [19] used ZBW electrode for machining NiTiNOL and observed that Cu and Zn depositions of electrode material were found to be very high on the machined surface for higher TWR compared to the machined surface of the lower TWR. Although some research work has been carried out in WEDM of different grades of NiTi SMA using brass wire and ZBW electrode materials, there is plenty of scope to carry out the experiments in WEDM of NiTiNOL SMA using the latest diffusion annealed coated electrode materials. Diffusion annealed coated electrode materials have many advantages over conventional plain brass and ZBW electrode materials. Therefore, in the present experimental studies, WEDM of NiTiNOL SMA has been carried out with two different types of diffusion annealed coated wire electrode materials. The electrodes TWR and machined alloys SR are studied using response surface methodology (RSM). Detailed surface integrity (SI) studies have been carried out using confocal, energy-dispersive spectroscopy (EDS), microhardness and recast layer thickness analysis. Further, the performances of two different diffusion annealed coated wire electrode materials are compared with each other and the performances of diffusion annealed electrodes are compared with the performances of plain brass and ZBW electrode materials to suggest the most feasible wire electrode material to machine medical grade NiTiNOL SMA. Work Material The medical grade NiTiNOL (Ni with 55.74% and Ti as remainder) shape memory alloy with ASTM F 2063 Standard has been used as work material for the present experimental studies. The work material has been procured from HongKong Hanz Material Technology Co., Ltd., Baoji, China. This material can be used for production of different implant applications such as bone plates, mandible fracture plates, etc. 
Wire Electrode Materials

In many cases, zinc from the ZBW blasts off the surface due to the higher spark intensity and the zinc coating does not live up to its potential. Therefore, the zinc needs to be metallurgically bonded to the core material and to have a high melting point. Such heat-treated Zn-coated wires are called diffusion annealed coated wires. Two types of diffusion annealed coated wire electrodes (X-type and A-type) have been used as wire electrode materials in the present experimental studies. X-type wire (DX) was the first of its kind among diffusion annealed wires. It is also marketed under names such as Bronco Cut-X or Beta Cut-X, and it consists of a coating of beta brass (a Cu-Zn alloy with 40 to 53% Zn) over a pure copper core. It offers an excellent combination of high conductivity and a tenacious zinc-rich coating. This type of wire is considered a half hard wire, as its tensile strength is 520 MPa. On the other hand, A-type wire (DA) has a higher tensile strength than the X-type diffusion annealed wire. It is well known under the brand name Cobra Cut. A-type wire consists of a brass core, an alloy of copper and zinc in 80:20 proportions, with a coating of beta brass. This type of wire is considered a hard wire, as it has a tensile strength of 900 MPa, and it combines the improved conductivity of the copper-rich (80:20) core with a tenacious zinc-rich coating.

Experimental Setup

The experiments were performed on an Electronica Ecocut Elpuls-15 CNC wire EDM machine at KLE Technological University's central MakerSpace facility. The NiTiNOL alloy plate of (800 × 160 × 2) mm³ was cut into (100 × 160 × 2) mm³ pieces. The alloy samples were annealed at 350 °C for one hour in an electric furnace under a controlled argon atmosphere, using a heating rate of 10 °C/minute. Two types of diffusion annealed wire electrodes (X-type Bronco Cut and A-type Cobra Cut), each of 0.25 mm diameter, were used for machining the NiTiNOL components on the wire EDM. The wire EDM used for the experiments, along with the medical grade SMA mounted on the wire EDM bed, can be seen in Figure 1.

Process Variables and Output Responses

The process variables pulse time (spark on time/pulse on time/T on) with three levels (105 µs, 115 µs and 125 µs), pause time (spark off time/pulse off time/T off) with three levels (25 µs, 40 µs and 55 µs), wire feed rate (WF) with three levels (4 m/min, 6 m/min and 8 m/min) and spark gap set voltage (servo voltage/SV) with three levels (20 V, 40 V and 60 V) were chosen for the present experimental studies. The levels and ranges of the process variables were identified based on earlier experimentation and studies by Kulkarni et al. [20,21]. TWR and SR are the output responses studied in the present work. TWR was determined from the difference between the weight of the electrode material before machining and its weight after machining (weight loss method). The SR of the machined NiTiNOL circular components was measured using a Zeiss SURFCOM 1500SD2 tester over a probe sampling length of 1.6 mm. Average SR (R a) was considered for the initial studies, and surface integrity (SI) studies were carried out to analyze the surface and subsurface of the machined component in the later sections.
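To make the weight-loss calculation concrete, the short sketch below computes a tool wear rate in g/min from before/after electrode weights. It assumes, as the g/min units reported later imply, that the weight loss is divided by the machining time; all numerical values and variable names are illustrative placeholders, not measured data from these experiments.

```python
def tool_wear_rate(weight_before_g, weight_after_g, machining_time_min):
    """Tool wear rate (g/min) by the weight-loss method: electrode mass
    lost per minute of machining."""
    if machining_time_min <= 0:
        raise ValueError("machining time must be positive")
    return (weight_before_g - weight_after_g) / machining_time_min

# Illustrative values only (not measured data): wire electrode weighed
# before and after cutting one 10 mm circular profile.
twr = tool_wear_rate(weight_before_g=152.40, weight_after_g=151.98,
                     machining_time_min=10.0)
print(f"TWR = {twr:.4f} g/min")   # -> TWR = 0.0420 g/min
```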
The surface analysis was carried out using confocal microscopy, SEM and EDS, whereas the subsurface analysis was carried out using recast layer thickness and microhardness testing procedures. The machined surface morphology was analyzed by scanning electron microscopy (SEM) from Zeiss (GEMINI FESEM technology). The SEM is equipped with an energy-dispersive X-ray spectrometer (EDS) to provide information on elemental identification and quantitative composition. Unlike an optical microscope, which uses light, the SEM uses a beam of electrons of much shorter wavelength to extract information from the specimens, based on the interaction of the beam with the atoms in the sample. When the samples placed in the sample holder are exposed to the electron beam in a vacuum, the X-ray detector of the EDS records the number of emitted X-rays as a function of their energy. The X-ray energy identifies the element from which it was emitted, enabling qualitative and quantitative analysis of the elements present through an evaluation of the energy spectrum of the detected X-rays. The SEM images and the quantitative analysis were acquired and stored using the preinstalled software. The Olympus confocal microscope offers numerous benefits over a wide-field optical microscope: it can control the depth of field, eliminate out-of-focus background information and collect serial optical sections from thick specimens. The confocal microscope was used to measure the 3D surface profile and the surface roughness parameters of the machined samples. In the confocal microscopic analysis, the nature of the surface topography of the machined NiTiNOL sample is described in terms of the average height deviation (surface average roughness, Sa), the maximum peak height over the entire area under study (Sz), skewness (S sk) and kurtosis (S ku). Skewness is a measure of the asymmetry of the surface deviations about the mean plane, and kurtosis is a measure of the peakedness of the surface height distribution [22,23]. Sa, Sz, S sk and S ku are considered for the present study, as Sa is required to understand the overall average deviation, and Sz, S sk and S ku are important factors for biomedical applications [24,25]. The microhardness (MH) samples were prepared using the standard metallographic procedure; a smooth surface helps to improve the accuracy of the microhardness values. MH measurements in the present study were carried out on a Vickers MH tester as per the ASTM E384 standard, and the hardness values are reported in HV 0.025, measured from the outside surface of the machined alloy. Indentations were made from a depth of 10 µm up to 120 µm below the machined surface. For every 10 µm step, the MH value was captured, and 12 readings were taken from each sample to determine the MH in the RLZ, HAZ and CLZ regions. The machine uses a pyramid-shaped diamond indenter to make indentations on the recast layer and heat-affected zone cross-section, using a load of 200 g and a dwell time of 15 s. Depending on the pulse time and other important process parameters, the microhardness in the recast layer zone (RLZ) and heat-affected zone (HAZ) changes, while it remains unaltered in the converted layer zone (CLZ). To corroborate the MH values, especially in the RLZ, field emission scanning electron microscopy (FESEM) of the same alloys used for the MH measurements was carried out to measure the recast layer thickness (RLT).
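As a point of reference for these quantities, the sketch below evaluates Sa, Sz, S sk and S ku from a two-dimensional height map using their standard areal (ISO 25178-style) definitions. The synthetic surface is only there to make the formulas concrete; it is not intended to reproduce the instrument software's processing (levelling, filtering, etc.).

```python
import numpy as np

def areal_roughness(z):
    """Areal roughness parameters from a 2D height map z (heights in µm).
    Sa : arithmetic mean of absolute deviations from the mean plane
    Sz : maximum peak height plus maximum valley depth (max - min)
    Ssk: skewness of the height distribution (asymmetry about the mean plane)
    Sku: kurtosis of the height distribution (peakedness; 3 for a Gaussian surface)
    """
    dev = z - z.mean()
    sq = np.sqrt(np.mean(dev ** 2))          # RMS deviation, used to normalise Ssk and Sku
    return {
        "Sa": np.mean(np.abs(dev)),
        "Sz": dev.max() - dev.min(),
        "Ssk": np.mean(dev ** 3) / sq ** 3,
        "Sku": np.mean(dev ** 4) / sq ** 4,
    }

# Synthetic example surface (not measured data), 256 x 256 points.
rng = np.random.default_rng(0)
z = rng.normal(0.0, 5.0, size=(256, 256))
print(areal_roughness(z))
```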
A Carl Zeiss FESEM (model NEON 40 35 33, Germany) has been used in the present work to determine the RLT. The performances of both diffusion annealed wire electrode materials are compared with the performances of plain and coated brass wire electrode materials. The surface and subsurface analyses are used for comparative studies wherever required. The SR, TWR and SI results of the experiments with the two diffusion annealed electrodes are compared, and the most feasible and suitable wire electrode for machining NiTiNOL SMA is identified in the concluding section. As most biomedical applications have round and curved profiles, a circular hole of 10 mm diameter was chosen as the machining profile. The number of experiments was planned based on a full factorial design (FFD) with four input variables at three levels each. Table 1 shows the detailed experimental plan for WEDM of NiTiNOL alloy using both diffusion annealed wire electrode materials. Three trials were conducted for each experimental condition and the average value was considered for the analysis.

Methodology

The empirical mathematical models are developed using the response surface methodology (RSM) approach [26]. The functional relationship between the chosen output responses (TWR and SR) and the process variables (pulse and pause time, wire feed and spark gap set voltage) is established by second-order quadratic mathematical models, and the adequacy of the models has been verified. The functional relationship between a performance criterion and the input process variables is expressed as:

Z = f(y1, y2, y3, . . . , yn) (1)

where Z = performance, f = response surface function and y1, y2, y3, . . . , yn = factors. The RSM-based second-order quadratic mathematical model for a performance criterion with four input process variables K, L, M and N has the general form [27]:

Z = c0 + c1K + c2L + c3M + c4N + c11K² + c22L² + c33M² + c44N² + c12KL + c13KM + c14KN + c23LM + c24LN + c34MN (2)

where c0, . . . , c34 = regression coefficients of the proposed model to be determined. The least squares method has been employed to determine the regression coefficients. The regression coefficients for the fitted model are calculated as shown in Equation (3),

C = (Y′Y)⁻¹ Y′Z (3)

where C = matrix of regression coefficient estimates, Y = matrix of process parameters, Y′ = transpose of Y and Z = matrix of responses (desired outputs). In the present experimental study, second-order quadratic models for TWR and SR for the X-type and A-type electrode experiments have been fitted with pulse time (T on), pause time (T off), wire feed (WF) and spark gap set voltage (SV) as the input process parameters. For the proposed WEDM characteristics, the mathematical models for both diffusion annealed electrode experiments (Equations (4)-(7)) were generated using Equation (2). The suitability of the developed mathematical models (Equations (4)-(7)) has been checked using analysis of variance (ANOVA). The ANOVA for all four models, namely TWR and SR of WEDM of NiTiNOL SMA using X-type (DX) and A-type (DA) wire electrodes, has been performed at the 95% confidence level.
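As a minimal sketch of how the second-order model of Equation (2), the least-squares estimator of Equation (3) and the subsequent adequacy checks can be implemented, the Python fragment below builds the 15-column quadratic design matrix for four coded factors, solves for the coefficients and reports R² together with the critical F value used in the ANOVA. The response values are placeholders rather than the measured TWR/SR data, the fitted Equations (4)-(7) are not reproduced, and the degrees of freedom (14 model terms, 66 residual) are inferred from the 3^4 = 81-run full factorial; numpy and scipy are assumed to be available.

```python
import numpy as np
from itertools import combinations
from scipy import stats

def quadratic_design_matrix(X):
    """Full second-order RSM design matrix for four factors K, L, M, N:
    intercept, 4 linear, 4 squared and 6 two-factor interaction columns."""
    n, k = X.shape
    cols = [np.ones(n)]                                                 # c0
    cols += [X[:, j] for j in range(k)]                                 # c1..c4
    cols += [X[:, j] ** 2 for j in range(k)]                            # c11..c44
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]   # c12..c34
    return np.column_stack(cols)

def fit_rsm(X, z):
    """Least-squares estimate C = (Y'Y)^-1 Y'Z of Equation (3), plus R^2."""
    Y = quadratic_design_matrix(X)
    C, *_ = np.linalg.lstsq(Y, z, rcond=None)
    residual = z - Y @ C
    r2 = 1.0 - (residual @ residual) / np.sum((z - z.mean()) ** 2)
    return C, r2

# Placeholder data: the 3^4 = 81 runs of the full factorial in coded units.
levels = np.array([-1.0, 0.0, 1.0])
X = np.array(np.meshgrid(levels, levels, levels, levels)).T.reshape(-1, 4)
rng = np.random.default_rng(1)
z = 0.05 + 0.01 * X[:, 0] - 0.008 * X[:, 3] + rng.normal(0, 0.002, len(X))  # fake TWR-like response
coeffs, r2 = fit_rsm(X, z)

# Model adequacy: 14 model terms and 81 - 15 = 66 residual degrees of freedom.
f_crit = stats.f.ppf(0.95, 14, 66)
print(f"R^2 = {r2:.3f}, critical F(0.05; 14, 66) = {f_crit:.2f}")
```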
The F-ratios for TWR DX (112.28), SR DX (141.26), TWR DA (237.68) and SR DA (88.13) are all greater than the critical value F0.05(14, 66) = 1.80, which indicates that the fitted mathematical models are significant at the chosen confidence level. The adequacy of each developed model is also confirmed by R², with values of 96% (TWR DX), 96.8% (SR DX), 98.1% (TWR DA) and 94.9% (SR DA), respectively.

Results and Discussions

Equations (4)-(7) are used to predict the characteristics of TWR and SR from the results obtained during WEDM of NiTiNOL using both X- and A-type diffusion annealed coated electrode materials. The impact of process parameters such as pulse time, pause time, WF and SV on the output responses TWR and SR, along with the related SEM analysis, is depicted in Figures 2-9. For analyzing the TWR and SR responses, 3D plots have been drawn as a function of SV for lower, middle and higher hold values of pulse and pause times at the three levels of WF. Figures 2, 4, 6 and 8 clearly show a strong influence of the input process variables on the chosen output responses.

Analysis of TWR of X-Type Diffusion Annealed Coated Wire Electrode

Interaction effects of SV on TWR with varying WF rates for different hold values of T on and T off are plotted for machining NiTiNOL SMA using X-type wire electrode material and are shown in Figure 2a-c. Irrespective of the hold values, TWR decreases with increasing SV for all WF rates. It can also be observed that TWR is higher for the higher WF rate for all combinational hold values. TWR increases from lower to higher combinational hold values of pulse and pause times for all WF rates of 4 m/min, 6 m/min and 8 m/min. A very low TWR of 0.039 g/min is seen for the combination of the lower pulse and pause hold values (T on = 105 µs and T off = 25 µs) with the low WF rate of 4 m/min and the higher SV of 60 V. A very high TWR of 0.07 g/min is observed for the combination of the higher hold values of pulse and pause times (T on = 125 µs and T off = 55 µs) with the higher WF rate of 8 m/min and the lower SV of 20 V. It is clearly evident that all the process parameters have a visible influence on the TWR of the X-type wire electrode material during WEDM of NiTiNOL SMA. In particular, when SV is high, the gap between electrode and work material is greater; therefore, if the pulse time is low and SV is high, the TWR is very low. The significance of higher TWR can be observed in Figure 3a,b, which shows the SEM images of unused and used X-type wire electrode material. It is observed from Figure 3b that there is severe wear and tear on the surface of the used wire sample at high TWR, caused by the intense spark generation at high pulse time. The wire ruptures quickly when the TWR is high, for which the smaller spark gap and higher pulse time are responsible. The exposed side of the X-type wire electrode, from which the fresh sparks are generated and hit the work material, has been torn due to heavy wear. Compared to machining of NiTiNOL using a brass wire electrode, the TWR of the X-type electrode material is very low, and the TWR of the X-type wire electrode is found to be almost the same as the TWR of the ZBW electrode material [15,19]. The ZBW and X-type electrodes have tensile strengths of 500 and 520 MPa, respectively, and are called half hard wires, whereas the plain brass wire electrode is a soft wire with a tensile strength of 420 MPa.
Analysis of SR of Machined NiTiNOL SMA Using X-Type Wire Electrode Interaction effects of SV against SR with varying WF rates for different hold values of T on and T off are plotted for machining NiTiNOL SMA using X-type wire electrode material and are shown in Figure 4a-c. In Figure 4a, for lower pulse and pause hold values, it can be observed that lower SR is witnessed at SV of 60 V and WF of 8 m/min. Figure 4b,c show that SR decreases with increased SV irrespective of WF rates. It is revealed that for lower and middle level hold values, WF of 8 m/min is found to be good for lower SR, whereas for higher hold values, WF of 4 m/min gives lower SR results. The highest SR is achieved at lower SV values and higher WF and higher hold values of pulse and pause times. When the SV is low, the spark generated from the wire electrode hits the workpiece with high intensity and thus damages the surface due to higher heat transfer on to the workpiece. Irrespective of WF rates, the higher hold values of pulse and pause times along with lower SV result in high SR. An almost similar kind of SR graph was observed while machining NiTiNOL SMA using brass wire electrode material [15]. Higher SR of 6.49 µm was observed at T on -125 µs, T off -55 µs, WF-8 m/min and SV of 20 V and the same can be witnessed in Figure 4. Figure 5 shows the SEM analysis for the machined NiTiNOL samples with higher and lower SR values. A huge number of cracks followed by very high irregular surfaces can be observed in Figure 5a. The wire depositions on the workpiece can also be observed in Figure 5a. The microcracks, irregular surfaces and extra depositions from the wire cause higher SR for higher hold values of pulse and pause times. On the other hand, for lower hold value combinations along with higher SV, the SEM analysis from Figure 5b shows a comparatively smooth surface without any cracks and very few microglobules can be observed. Very low SR of 1.52 µm was achieved at T on -105 µs, T off -55 µs, WF-8 m/min and SV of 60 V as spark intensity reduces at higher levels of SV, which allows good flushing, leading to better surface finish. Analysis of TWR of A-Type Diffusion Annealed Coated Wire Electrode Interaction effects of SV against TWR with varying WF rates for different hold values of T on and T off are plotted for machining NiTiNOL SMA using A-type wire electrode material and are shown in Figure 6a-c. For all the three different hold values of pulse and pause times, the nature of graphs as observed from Figure 6a-c almost remains the same. TWR is found to be increasing as the hold value increases from lower level to higher level, i.e., from 105-25 µs (T on -T off ) combinational value to 125-55 µs (T on -T off ) combinational value. In all the three cases, it is clearly evident that TWR decreases as SV increases. TWR and SV are inversely proportional to each other. The higher the SV, the lower and better the TWR. WF rates have their impact on TWR, which can be justified from Figure 6a-c. Irrespective of lower, middle and higher pulse and pause combinational hold values, in all three cases, TWR increases as WF rate increases. The lower the feed rate, the lesser the tool wear. WF of 4 m/min causes lesser tool wear compared to wire feed rate of 8 m/min. The wire experiences uncontrolled movement and vibrations during higher feed rates, causing the tool to wear out quickly during lower SV, as the spark gap between the work material and tool material is less. 
The higher pulse time is also responsible for high tool wear as spark generated during high pulse time of 125 µs removes material for a longer duration compared to pulse time of 105 µs. When the TWR of A-type wire electrode is compared with the performances of the other types of electrode materials, it can be observed that the performance w.r.t. TWR is found to be better in most cases and is almost in line with the performance of ZBW and X-type wire electrode materials in some combinational values. Figure 7a,b show the SEM analysis of A-type wire electrode samples with lower and higher TWR. The lower TWR for A-type wire electrode material is found to be 0.0054 g/min for the combinational value of T on = 105 µs, T off = 55 µs, WF = 4 m/min and SV = 60 V and this is because, when the spark generation time is at the lower level of 105 µs and the off time is at the higher level of 55 µs, the spark duration available for machining the material is less and also the intensity of heat generation is less. Along with this condition, the SV is large (60 V), where the distance between workpiece and electrode material results in lesser intensity. Therefore, less intensity of heat and more pause time along with large SV are responsible for lesser TWR. Lesser TWR results in lower SR. Figure 7a shows the SEM analysis of wire electrode samples with lower TWR. If the spark-generated side of the electrode is observed, it can be seen that the electrode has worn out to the minimum with lesser damage. The higher TWR using A-type electrode material is found to be 0.0694 g/min followed by 0.0678 g/min, which is achieved from high pulse time of 125 µs, larger off time of 55 µs and average SV of 40 V. Even the high tool wear rates can be observed for the combinational values of pulse time of 125 µs and lower SV value of 20 V. Figure 7b shows the SEM image of electrode material which has experienced high TWR. Compared to Figure 7a, the torn-out area and damage to the electrode are high in Figure 7b. However, when the SEM image of a higher TWR sample of A-type electrode is compared with other electrode materials, then there is very little deformation and less damage as the wire is considered to be hard wire with tensile strength of 900 MPa and it has beta brass coating. Analysis of SR of Machined NiTiNOL SMA Using A-Type Wire Electrode Interaction effects of SV against SR with varying WF rates for different hold values of T on and T off are plotted for machining NiTiNOL SMA using A-type wire electrode material and are shown in Figure 8a-c. SR is low for the lower combinational hold values of pulse and pause time compared to the other two hold values. As the hold value increases from middle level to higher level, the SR also increases proportionally. The increase in SV influences the SR of the machined alloy for middle and high hold values of T on and T off , whereas, interestingly, for lower hold values of T on and T off , SR increases as SV increases. This may be because even though there is less spark gap between workpiece and electrode material, the spark that is generated from electrode material is for a lesser time period, as pulse time is just 105 µs. The lesser the spark time, the better the flushing; SR is found to be lower. As the pulse time increases from 105 µs to 115 µs to 125 µs, the spark generation happens for longer duration, and therefore, the heat transferred on to the workpiece is greater, resulting in higher SR. When the SR is compared w.r.t. 
performance of wire feed rate, then moderate wire feed of 6 m/min has the upper hand, although there is not much difference between the SR performances w.r.t. different wire feed rates of 4 m/min, 6 m/min and 8 m/min. The nature of all three different WF rates is found to be the same against varying SV from all three different pulse and pause hold conditions. Figure 9a,b show the SEM analysis for higher and lower SR values. The lowest SR of 1.66 µm is attained at T on = 105 µs, T off = 25 µs, WF = 6 m/min and SV = 20 V. The machined sample with the lowest SR values has been analyzed using SEM in Figure 8a. It can be seen in the SEM image that the surface of the machined sample is free from cracks and craters. Even the redeposition of the melted metal after machining is very minimal due to lesser spark intensity generated from lower pulse time of 105 µs. The higher SR of 5.36 µm is seen in the machined sample produced from the combination of T on =125 µs, T off = 25 µs, WF = 4 m/min and SV = 20 V and the SEM analysis of the same sample has been depicted in Figure 9b. The SEM analysis reveals that the machined sample has many cracks and huge craters throughout the surface, which are formed due to high sparks hitting the workpiece at higher pulse time of 125 µs and lower SV of 20 V. The lower WF of 4 m/min provides low movement of electrode material, hence flushing reduces and therefore redepositions of melted material after machining can be seen in the form of huge debris all over the surface. Surface Integrity Studies of Machined NiTiNOL SMA Both surface and subsurface analysis of the wire electric discharge machined NiTiNOL samples has been carried out in the surface integrity studies. The details of the methods followed to conduct surface integrity studies have been mentioned in Section 2.3. Figure 10 shows the 3D topographical surface roughness analysis of machined NiTi-NOL samples and Figure 11 depicts the summary of EDS analysis results indicating the elemental analysis for the same set of machined NiTiNOL samples, where X-type diffusion annealed wire has been used as electrode material during the wire EDM process. Figure 10a indicates the surface analysis for the input process parameter combinations of T on = 105 µs, T off = 55 µs, WF = 4 m/min and SV = 20 V with Sa of 12 µm and comparatively higher Sz of 116.06 µm. The skewness (S sk ) and kurtosis (S ku ) values are found to be −0.418 and 2.633, respectively. The average SR and maximum peak heights are moderately high as the diffusion annealed wire electrode material is hard when compared with plain brass wire electrode material. Although there is sufficient time for flushing, the sparks produced in X-type electrodes are comparatively high and, most importantly, the gap between electrode and workpiece is very minimal. The effect of the intensity of the sparks because of the lesser gap between workpiece and electrode material can be observed in the form of valleys in Figure 10a. As seen in Figure 11a, interestingly, there are no depositions of the electrode contents on the machined surface. The time available for flushing is very good and the sparks generated from electrodes are low due to lower pulse time and as the wire is diffusion annealed, the machined surface is free from depositions of Cu and Zn. Figure 10b indicates the surface roughness analysis for the input process parameter combinations of T on = 125 µs, T off = 25 µs, WF = 6 m/min and SV = 20 V with higher Sa of 15.67 µm and higher Sz of 122.76 µm. 
The S sk and S ku values are found to be −0.555 and 2.383, respectively. Both average SR and maximum peak height are found to be very high for this particular combination, as pulse time is very high and pause time is very low along with lesser gap voltage. As the gap voltage is far less and pulse time is larger, the sparks generated from X-type diffusion annealed electrode materials are of high intensity. It can be seen in Figure 10b that the peaks and valleys are high and SR is not uniformly distributed. The high-intensity sparks hitting the workpiece for longer durations from a very close position is the main reason for higher average SR and higher peak height. This value of average SR is found to be the highest among all the combinations of process parameters across all the electrode materials [15,19]. Even though the wire is diffusion annealed, due to very high intensity and high quality of sparks generated because of higher pulse time, the Cu and Zn content from the electrode material splashed over the machined surface with 23.88 and 9.42 wt.%, respectively. The deposition of Cu and Zn can be seen in Figure 11b. Among the different input combinational values in wire EDM of NiTiNOL using X-type electrode material, this particular combination of higher pulse time, lower pause time and lower SV results in higher wear and tear of the wire, leading to higher depositions of Cu and Zn over the surface of the machined sample, consequently leading to higher SR. Figure 10c indicates the surface analysis for the input process parameter combinations of T on = 115 µs, T off = 40 µs, WF = 8 m/min and SV = 40 V with average Sa of 11.08 µm and lower Sz of 92.73 µm. The S sk and S ku values are found to be −0.581 and 2.647, respectively. The maximum peak height is comparatively low in this combination due to appropriate machining conditions of moderate pulse and pause time along with moderate spark gap set voltage. Except wire feed rate, all other parameters are maintained at middle value in this particular combination. As compared to Figure 10a,b, the valleys and peaks in Figure 10c are very negligible. It can also be observed in Figure 11c that the Zn content from the electrode material is not splashed onto the machined surface. Cu content with 13.41 wt.% can be observed, which is still a far better condition compared to the EDS results indicated in Figure 11b; this is because apart from WF, the other three process parameters are at middle range. Figure 10d indicates the surface roughness analysis for the input process parameter combinations of T on = 125 µs, T off = 55 µs, WF = 8 m/min and SV = 60 V with lower Sa of 10.76 µm and Sz of 103.37 µm. The S sk and S ku values are found to be −0.551 and 2.914, respectively. The skewness values for the samples machined with X-type electrode are almost fairly symmetrical and few of them are moderately skewed. The surface texture height of the machined samples is found to be platykurtically distributed. Surface Analysis of Machined NiTiNOL SMA-X-Type Electrode Material The average surface roughness is found to be minimal among all other input combinations as observed in Figure 10a-c. Although the pulse time and intensity of the sparks generated are higher in this particular input setting, the average SR is found to be minimal because the spark gap between the workpiece and electrode material is also high. Another main reason is the contribution of the pause time. 
There is sufficient time for the flushing as pause time is maintained for a higher period of 55 µs. Some peaks are observed nearer to the corners as feed rate is very high and there are chances that the wire electrode generates fresh sparks due to high feed rate and these sparks are not uniformly transferred to the workpiece material due to larger spark gap set voltage of 60 V. Figure 11d supports the average surface roughness results obtained in Figure 10d. It can be seen that the surface of the machined NiTiNOL alloy is completely free from the depositions of wire electrode material and some oxidation is observed because the wire EDM process happens in the dielectric water medium. More contact of workpiece material with deionized water results in the oxidation process. Figure 12 shows the 3D topographical surface roughness analysis of machined NiTi-NOL samples and Figure 13 depicts the results of EDS analysis for the same set of machined NiTiNOL samples, where A-type diffusion annealed wire has been used as electrode material during the wire EDM process. it has been seen that the average surface roughness in the case of brass, Zn-coated brass and X-type wire electrode machining was found to be low [19]. The same trend is followed in A-type electrode machining also. The pulse time generates sparks and as the SV is in the lower position, the heat generated by sparks affects the workpiece component. Cooling of machined components and debris removal are very low as pause time is just 25 µs, resulting in insufficient flushing and therefore, the peaks and valleys are created on the machined surface. Figure 12b indicates the surface roughness analysis for the input process parameter combinations of T on = 115 µs, T off = 40 µs, WF = 8 m/min and SV = 20 V with comparatively higher Sa of 11.98 µm and moderately lower Sz of 94.44 µm. The S sk and S ku values are found to be −0.493 and 2.545, respectively. The average surface roughness increases because of an increase in the pulse time from 105 µs to 115 µs. The increase in Sa is not just dependent on pulse time increase, but instead an increase in wire feed rate may also contribute as fresh sparks are generated on the wire. Surprisingly, the maximum peak height shows improvement in Figure 12b. This is because the pause time has been increased from 25 µs to 40 µs and therefore the workpiece has comparatively more time for flushing away the debris. The valleys are almost negligible in Figure 12b when compared with Figure 12a. Figure 12c indicates the surface roughness analysis for the input process parameter combinations of T on = 115 µs, T off = 40 µs, WF = 4 m/min and SV = 40 V with comparatively lower Sa of 10.99 µm and lower Sz of 86.22 µm. The S sk and S ku values are found to be −0.552 and 2.387, respectively. The change in this particular input parameters combination is that the WF has been decreased from 8 m/min to 4 m/min and SV has been increased from 20 V to 40 V. This increase in SV creates a greater gap between the electrode material and the workpiece, and then the decrease in wire feed rate results in consistent machining performance. Therefore, an overall improvement in both average surface roughness as well as maximum peak height can be seen in Figure 12c. Sz of 86.22 µm is almost the lowest value that has been achieved compared to the performances of other electrode materials. 
Figure 12d indicates the surface roughness analysis for the input process parameter combinations of T on = 125 µs, T off = 55 µs, WF = 8 m/min and SV = 40 V with comparatively lower Sa of 10.78 µm and moderate Sz of 104.49 µm. The S sk and S ku values are found to be −0.793 and 2.897, respectively. The skewness values for the samples machined with A-type electrode are almost moderately skewed. The surface texture height of the machined samples is found to be platykurtically distributed. The pulse time, pause time and wire feed rate are all high for this combination of input parameters, whereas spark gap is maintained at the middle level. The higher feed rate might be the possible reason for increase in the maximum peak height. However, it can be observed that for low, middle and high pulse time, there is not much deviation in the range of average surface roughness. The valleys and peaks in Figure 12a,d are comparatively high, but average surface roughness has not much variation. A-type electrode has shown some consistency in the average surface roughness for different input settings. Observations from Figure 13a-d indicate that the machined surface is completely free from Cu and Zn contents of electrode material. As the tensile strength of the A-type diffusion annealed wire electrode is high compared to other wire electrode materials such as X-type wire or plain brass or zinc-coated brass wire electrodes, the special alloy of copper and zinc with 80:20 proportions and the coating of beta brass in a diffuse manner allows the wire to perform well for the combinational values of different process parameters. As observed in Figure 12a-d, the average surface roughness performance is found to be stable over the combinations of all the different input process parameters and as depicted by Figure 13a-d, none of the machined surfaces have the depositions of Cu and Zn from the electrode material. The Cu and Zn deposition on the machined surfaces is seen more in the case of machined NiTiNOL samples from plain brass and zinc-coated brass electrode materials [15,19]. The Cu and Zn depositions further reduced when NiTiNOL was machined using diffusion annealed X-and A-types of electrode materials during the wire EDM process. Although both X-and A-type wire electrode materials are good in terms of surface topographic performance, A-type electrode material is found to be much better compared to X-type electrode material both in terms of 3D surface and EDS analysis. Figure 14 shows the microhardness (MH) analysis of machined NiTiNOL samples using X-type wire electrodes for three different pulse time conditions. The MH has been studied up to the depth of 120 µm for the varying pulse times of 105 µs, 115 µs and 125 µs. The highest microhardness (MH) of 460 Hv is attained at pulse time of 125 µs. The pattern of MH behavior against the depth of the machined alloy for all three different pulse times is found to be the same, but the MH values of alloy machined at 115 µs and 105 µs are lesser compared to the alloy machined at 125 µs. Recast layer zone (RLZ) is found up to the depth of 30 µm from the machined surface for all three pulse time conditions, which is less compared to the RLZ of alloys machined using plain brass wire and ZBW electrode materials [15,19]. The higher HAZ as observed from Figure 14 ranges from 40 µm to 70-80 µm depth of the machined alloy. From 80 µm onwards, as the depth of the alloy increases up to 120 µm, the HAZ tends to decrease but still does not reach the CLZ. 
The overall performance of the X-type electrode material in terms of MH values is found to be better than plain brass wire but is not as good as ZBW electrode material. However, although the MH values are higher compared to ZBW electrode performance, the RLZ and HAZ are lesser. This may be because of the quality of diffusion annealed wire. The temperature generated by X-type wire has comparatively less effect on the alloy to a certain depth. Subsurface Analysis of Machined NiTiNOL SMA-X-Type Electrode Material The average MH at RLZ is found to be 41.14%, 26.21% and 21.42% higher than the MH of the base material for pulse time of 125 µs, 115 µs and 105 µs, respectively. Average MH in the HAZ up to the depth of 70-80 µm is found to be 28.54%, 19.14% and 16.01% greater than the MH of the base material for pulse time of 125 µs, 115 µs and 105 µs, respectively. Further, the decline in MH values is observed in the HAZ up to the depth of 120 µm and the MH is found to be 15.72%, 12.66% and 9.97% greater than the MH of the base material (320 Hv) for pulse time of 125 µs, 115 µs and 105 µs, respectively. Figure 15 indicates the RLT analysis for the three different pulse time conditions during wire EDM of NiTiNOL using X-type wire as electrode material. The RLT is an electric spark melted material which is solidified and deposited on the surface of the workpiece without being evacuated and expelled by the dielectric fluids. The structure is extremely hard to remove, and it is examined at various magnification levels and the thickness is measured utilizing the SEM. The thermal zone lies below the recast layer. Figure 15a-c depict the varied recast layer thickness over the machined surface after the completion of the EDM process. RLT increases with increase in the pulse time parameter. The adhesion of melted molten material over the machined surface may be due to insufficient dielectric pressure, and it redeposits unevenly over the machined surface and rapid solidification creates shrinkage of molten material, which may induce high thermal tensile residual stresses resulting in microcracks. Figure 16 shows the MH analysis of machined NiTiNOL samples using A-type wire electrodes for three different pulse time conditions. The MH has been studied up to the depth of 120 µm for the varying pulse time of 105 µs, 115 µs and 125 µs. The highest MH of 420 Hv is attained at pulse time of 125 µs. Compared to all other electrode materials, MH of machined alloy using A-type wire electrodes is found to be the lowest. The RLZ in the machined alloy in this particular type of wire electrode machining is just up to 20 µm. The HAZ for pulse times of 125 µs and 115 µs is found up to the depth of 70 µm and for pulse time of 105 µs there is no much higher HAZ found. For pulse times of 125 µs and 115 µs, the HAZ declines from the depth of 80 µm to 120 µm, whereas, when the alloy is machined with 105 µs of pulse time, HAZ of the alloy ends at early depth of 60 µm and immediately the CLZ starts from the depth of 70 µm and, throughout the depth of 120 µm, the MH of the machined alloy matches the MH of the base alloy. It was observed in the EDS analysis from Figure 13 that the surface of the machined alloy which was machined using A-type electrode material did not have any carbide and oxide deposition. The average MH at RLZ is found to be 30.90%, 22.36% and 20.78% higher than the MH of the base material for pulse times of 125 µs, 115 µs and 105 µs, respectively. 
Average MH in the HAZ up to the depth of 50-60 µm is found to be 23.64%, 15.52% and 4.79% greater than the MH of the base material for pulse times of 125 µs, 115 µs and 105 µs, respectively. Further, the decline in MH values is observed in the HAZ up to the depth of 120 µm, where the MH is found to be 12.37% and 6.46% greater than that of the base material for pulse times of 125 µs and 115 µs, respectively, whereas the MH of the machined alloy equals the MH of the base alloy when machined with the 105 µs pulse time. Figure 17 indicates the RLT analysis for the three different pulse time conditions during wire EDM of NiTiNOL using A-type wire as electrode material.

Subsurface Analysis of Machined NiTiNOL SMA-A-Type Electrode Material

It can be seen in Figure 17a that the RLT formation is almost similar to the RLT seen in Figure 17b, whereas at the lower pulse time, as observed in Figure 17c, the RLT is very minimal and in the range of nanometres. When the pulse time decreases to 115 µs, the average RLT is found to be 2.36 µm, and at a pulse time of 105 µs, a minimal and negligible RLT of just 0.72 µm is observed. Microvoids, small droplets and resolidified material were also present on the recast layer. Figure 17a-c illustrate that not all of the melted material is removed, and a residue is left over even after ejection by the dielectric flushing. These residues are lightly welded to the machined surface and hence increase the surface hardness and residual stresses.

3.5.5. Comparing the Surface Integrity Performances of Brass, Zn-Coated Brass, DX and DA Electrodes

Figure 18 shows the average surface roughness (Sa) and maximum peak height (Sz) of the machined NiTiNOL for all four different electrode materials. The bar chart (Figure 18) clearly shows that the DA electrode gives better results in terms of both Sa and Sz compared to the other three electrode materials. When machining with a pulse time of 125 µs, the sparks are very intense and the highest erosion of the work alloy takes place during the machining process. When machined with the DA electrode, Sa and Sz are found to be 11.98 µm and 94.44 µm, respectively, which are the lowest values compared to the performances of the other electrode materials. The second-best performance considering the values of Sa and Sz is that of the Zn-coated brass wire electrode material, with Sa and Sz values of 12.20 µm and 96.65 µm, respectively, whereas the other two electrode materials, namely plain brass (PB) and DX wire electrodes, give very high Sa and Sz values when machining with the 125 µs pulse time condition. The values of Sa and Sz achieved with the DA and the other electrode materials can be justified using Figure 19, which shows the wire material deposition in terms of weight percentages for the different electrode materials at the 125 µs pulse time condition. It is very clear from Figure 19 that copper and zinc have not been deposited on the machined alloy when machined with the DA wire electrode, whereas large weight percentages of copper and zinc deposition can be observed for the PB, ZB and DX wire electrode materials. Figure 20 shows the MH values of the machined NiTiNOL alloy for all four electrode materials at a pulse time of 105 µs. All the MH values of the alloys machined with the four different electrode materials at 105 µs are considered and compared in Figure 20. Out of the four electrode materials, the MH values of the alloy machined with the PB and DX wire electrode materials are found to be high, the MH values of the alloys machined with the ZBW electrode are medium, whereas the MH values of the alloys machined with the DA electrode material are found to be the lowest of all. Although the MH value trends for all four electrode materials are found to be the same, the RLZ and HAZ are observed to be far smaller in the case of the alloys machined with the DA electrode. For a biomedical material, not only the surface roughness values matter; the surface and subsurface condition of the machined alloy is also important. In this respect, looking at the performances of all four different electrode materials, the diffusion annealed A-type (DA) electrode material is found to be the better choice. The selection can also be justified from Figure 21, which shows the average RLT of the machined NiTiNOL alloys for all four different electrodes and for all three different pulse time conditions. The average RLT in the alloys machined with the DA electrode for all three pulse time conditions of 125 µs, 115 µs and 105 µs is found to be the lowest compared to the average RLT of the remaining three electrode materials.

Conclusions

The wire EDM of NiTiNOL SMA with diffusion annealed coated wires has been studied in the present investigation. RSM models were developed to study the interaction effects of various input process variables on responses like TWR and SR. Detailed surface and subsurface analysis was carried out as a part of the surface integrity studies. The experimental deviations were within ±5%. After comparing the performances of the diffusion annealed wire electrodes w.r.t. TWR, SR and SI studies, the following are concluded. • For higher values of pulse and pause times with a higher WF rate and lower SV, a very high TWR of 0.07 g/min was observed when NiTiNOL was machined with the X-type electrode material. • Huge wire depositions and larger microcracks were observed on the machined NiTiNOL samples through SEM analysis when the X-type wire electrode material was used. The peaks and valleys during measurement of the average surface roughness of the machined alloy were greater when the alloy was machined with the X-type electrode material, with Sa of 15.66 µm and Sz of 122.76 µm, compared to the A-type electrode material, with Sa and Sz of 11.98 µm and 94.44 µm, respectively, for the higher pulse time condition of 125 µs. • For a pulse time of 125 µs, the RLT was found to be high; it was medium for 115 µs and low for the lower pulse time of 105 µs.
As the pulse time decreases, the RLT was found to be lower for all the samples, and the trend was the same for machining of NiTiNOL using the PB, ZBW, DX and DA electrode materials. • The average recast layer thickness of the NiTiNOL samples machined with the A-type electrode material was found to be much lower, almost 50-80% less than the average RLT of the samples machined with the other electrode materials. • The A-type electrode material showed overall better performance for TWR, SR and the surface integrity studies compared to all other electrode materials while machining the medical grade NiTiNOL SMA. In the present investigations, the TWR results of the A-type material were found to be 45%, 95% and 58% better than those of the plain brass, Zn-coated brass and X-type electrode materials, respectively. As shown in Figure 18, the Sa value of the sample machined with the A-type electrode was 37%, 2% and 31% better than the plain brass, Zn-coated brass and X-type electrode values, respectively. Further, as seen from Figure 21, the average recast layer thickness of the alloy machined with the A-type electrode was lower than that of the alloys machined with the plain brass, Zn-coated brass and X-type electrodes. Therefore, the A-type diffusion annealed electrode is found to be the better choice for WEDM of NiTiNOL alloy on the basis of the TWR, SR and surface integrity results and analysis.

Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: This study did not report any data.
Noticing grammar in L2 writing and problem-solving strategies

Noticing plays an important role in second language acquisition. Since the formulation of the output hypothesis (Swain, 1985), it has been shown that producing output can lead to noticing. Studies on noticing have revealed little focus on grammar, and an in-depth investigation of grammar noticing has not been conducted so far. Studies into problem-solving strategies applied to resolve noticing in writing have provided differing classifications. The current study investigates the noticing of ten young learners (15 to 16 years) of L2 English while performing a writing task, with a special focus on grammar. The problem-solving strategies these learners applied are analyzed. With regard to the linguistic areas, results suggest that verb forms, especially the use of modals, and the choice of prepositions are the main issues encountered in morphology. In syntax, learners mainly dealt with the length of sentences and the ways of connecting clauses. Learners relied on their intuition and existing knowledge, common sense and rephrasing as grammar problem-solving strategies. These results open a new area of study into noticing grammar and suggest some implications for teaching.

Introduction

Since the 1980s and early 1990s, the concept of noticing linguistic features in second language (L2) input and output has been investigated by a number of researchers (for a summary concerning noticing the gap while writing, see Williams, 2012). Due to its potential to facilitate second language acquisition (SLA), noticing and related concepts have found their way into SLA studies as well as pedagogically oriented research. The nature of noticing as well as possible ways to promote noticing of L2 features have been investigated (e.g., Hanaoka, 2007; Qi & Lapkin, 2001; Williams, 2001). The studies so far have revealed a strong focus on lexical issues and much less focus on morphosyntactic features of the L2 (e.g., Hanaoka & Izumi, 2012; Williams, 1999). Encountering a linguistic problem while producing L2 output may stimulate noticing when the learner is supplied with the respective L2 input (e.g., Qi & Lapkin, 2001). One of the possibilities to generate input is using problem-solving strategies such as dictionary search. Other problem-solving strategies which do not require external input are also available to learners. So far, not many studies have investigated noticing in L2 writing in connection with the application of problem-solving strategies. The current study sets out to analyze the linguistic problems learners encounter while producing a text in their L2 English and the problem-solving strategies they use in order to deal with these problems. The focus of the present study is on morphosyntactic issues, which have so far received little attention in related research. An in-depth qualitative analysis of the nature of learners' focus on grammar is provided in a small-scale study with ten German teenage learners of English.
Noticing and related concepts Learners' ability to reflect upon language and their own language use has been discussed and investigated using different concepts such as language and metalinguistic awareness, noticing, or learner-initiated focus on form.The concept of noticing dates back to Schmidt (1990), who pointed out the role of noticing in second language (L2) input for second language acquisition.Swain (1985) formulated the output hypothesis in which she states that noticing can also happen when learners produce output, which indicates that output also has a noticing/triggering function (see also Izumi & Bigelow, 2000, p. 244).According to the output hypothesis, noticing in output production can be triggered by external feedback (coming from an interlocutor) or internal feedback (initiated by the learners themselves).As a reaction to noticing, learners analyze the problem and either come up with a solution which leads to modified output, or they turn to additional input in order to find a solution to their problem.A number of studies have been conducted which have attempted to verify the output hypothesis by testing the effect of noticing in output on second language acquisition (Adams, 2003;Hanaoka, 2007;Izumi, 2002;Izumi & Bigelow, 2000;Izumi, Bigelow, Fujiwara, & Fearnow, 1999;Uggen, 2012).Although much research still has to be done to prove the hypothesis that noticing in the output leads to acquisition, the research conducted so far has at least confirmed that noticing can facilitate the process of second language acquisition (Williams, 2012).This insight stresses the importance of investigating the nature of noticing in producing L2 output. In a large body of research on noticing in L2 writing, the output hypothesis is tested by using external corrective feedback as linguistic input and investigating the learners' reaction to it.The point of interest is whether learners will notice the gap between their own formulations and some kind of input or feedback (implicit or explicit), be it error correction (Ellis, Sheen, Murakami, & Takashima, 2008;Heift & Rimrott, 2008;Varnosfadrani & Basturkmen, 2009), a native speaker reformulation of the learner text (Adams, 2003;Lázaro Ibarolla, 2009;Qi & Lapkin, 2001), or a text written by a native speaker of the target language on the same topic, but independently of the learner text (Hanaoka, 2007), and how this noticing will influence subsequent output and language acquisition.Another approach to investigating noticing after output production is presenting learners with input in the form of a reading text containing a certain target structure (Uggen, 2012).The possibility of consulting reference materials to resolve linguistic problems is mentioned by Williams (2012), but research so far has not investigated this option.The present study attempts to fill this gap by having learners deal with their noticing in the process of writing without teacher intervention and by using problem-solving strategies to generate additional linguistic input, including external resources such as dictionaries and the internet. 
A pedagogical approach related to the notion of noticing and language awareness is focus on form (e.g., Doughty & Williams, 1998), which is investigated as teacher-initiated and learner-initiated focus on form. Studies into learner-initiated focus on form in communicative tasks and its possible effects on SLA were conducted by Williams (1999, 2001). Williams (1999) analyzed the linguistic focus of learner-initiated attention to form (lexis or grammar), the ways learners draw attention to form, the activity types during which learners attend to form, and the influence of proficiency on learner-initiated focus on form. Williams (2001) investigated the effectiveness of spontaneous attention to form by using tailored tests and spontaneous production. In addition, she compared the effects of learner-initiated and teacher-initiated attention to form.

Noticing grammar

Studies into noticing linguistic features in the output so far have revealed that learners mostly focus on lexical and other surface levels of linguistic processing, with little focus on grammar (Hanaoka & Izumi, 2012; Qi & Lapkin, 2001; Swain & Lapkin, 1995; Whalen & Ménard, 1995; Williams, 1999). They also suggest that with increasing proficiency, the frequency of morphosyntactic language-related episodes (LREs) increases (Williams, 1999). An in-depth analysis of which grammar-related features learners spontaneously attend to has not been conducted so far. The current study aims to offer some insights into the nature and quality of morphosyntactic LREs and to shed light on the problem-solving strategies learners employ to resolve these LREs in the process of writing. In order to investigate learners' noticing of grammar forms, communicative tasks such as the dictogloss are used to push the learners towards the use of a specific grammatical item (Nassaji & Fotos, 2004). The focus of studies into noticing grammar or grammar teaching is often on a discrete grammar point such as conditionals, past tense, questions, the plural, or the use of articles (e.g., Izumi & Bigelow, 2000; Mackey, 2006; Song & Suh, 2008). The interest of the current study was to find out which grammatical features learners notice if they are confronted with a spontaneous written production task without selecting an explicit target form. Thus, it is possible to see which forms learners actually notice, and these can be compared with the forms used in studies on grammar.

Problem-solving strategies

Strategies have been classified in various ways in L2 research. Cohen (1996, p. 2) distinguishes between language learning and language use strategies. In contrast to language learning strategies, language use strategies do not have learning as their primary goal, but they can still lead to learning. In the area of spoken language use or communication strategies, reduction and achievement strategies are distinguished (Faerch & Kasper, 1983). When using reduction strategies, a learner changes the communicative goal (functional reduction) or the structure of the utterance (formal reduction) in order to avoid the problematic linguistic feature. When using achievement strategies, learners solve their problems by expanding their communicative resources (Faerch & Kasper, 1983, p. 45). A similar distinction is provided by Uzawa and Cumming (1989) for writing, who distinguish between keep-up-the-standard strategies (as compared to L1 writing) and lower-the-standard strategies.
To my knowledge, there have been two attempts to qualitatively classify problem-solving strategies used by L2 writers. Cumming (1989) distinguishes between knowledge-telling, which does not involve any problem-solving processes, and heuristic search strategies, which are applied when a problem has been encountered by the learner. The heuristic search strategies are further divided into the following strategies: engaging a search routine; directed translation or code-switching; generating and assessing alternatives; assessing in relation to a criterion, standard, explanation, or rule; relating parts to whole; and setting or adhering to a goal. Swain and Lapkin (1995) identified the following problem-solving behaviors in their qualitative study of young L2 writers: sounds right/doesn't sound right; makes sense/doesn't make sense; lexical search (via L2); lexical search (via L1 or both L1 and L2); translation (phrase or greater); and applying a grammatical rule. Some of the strategies identified in the two studies correspond to each other, but the differences in classification and terminology, as well as the fact that there are just two studies of this sort, suggest that further research into the use of strategies in the L2 writing process is needed.

The current study is an attempt to connect the above-mentioned areas of research, investigating grammar noticing in writing and, at the same time, linking it with the problem-solving strategies learners apply to resolve their problems.

Research questions

The aim of this study is to offer a qualitative investigation of how learners reflect on grammatical phenomena when asked to compose a text. Through an in-depth analysis of learner-initiated noticing in a writing task with a given topic, it can be seen which phenomena are noticed by the learners. The strategies learners use to deal with their grammar-related queries are investigated to see how they deal with their problems if there is no intervention, but sufficient linguistic resources (i.e., dictionaries, internet access) are available. The analysis is based on the following two research questions:

1. Which grammar-related features do young (15- to 16-year-old) L2 learners notice when writing in English?
2. Which problem-solving strategies do young (15- to 16-year-old) L2 learners of English use to deal with their grammar-related noticing in English?

Participants

The participants were ten 15- to 16-year-old learners of English at German schools who all shared German as their mother tongue. Two of the participants were growing up bilingually (German plus another language), and the number of foreign languages learned ranged from two to five. Most of the participants attended a German secondary school (called Gymnasium), and two of them attended the German Realschule. The participants' grades in English ranged from 1 to 4 (1 being the best grade, 6 the worst). Considering the expected proficiency level in this age group at German schools, the learners were at the B1 level of the Common European Framework of Reference for Languages (Council of Europe, 2001). There were five male and five female participants. The objective of the research study was explained to all participants and their parents, and they were asked to sign an informed consent form.
Think-aloud protocols and stimulated recall interviews An individual data collection session was conducted with each participant.In order to acquire rich data on noticing in the process of writing and problem-solving strategies, a combination of two data collection methods was used.First, the participants were asked to think aloud while composing a paragraph on the following topic: "If you could restrict the school subjects to two, which would you choose and why?"They were allowed to choose the language in which they verbalized or to switch between the two languages.They were provided with bilingual and monolingual dictionaries as well as a computer with internet access to use for any type of query.There was no time limit to the tasks and the participants were asked to write a paragraph which they would also hand in at school for grading.The think-aloud session was recorded on video which captured the task sheet.The video recording allowed the researcher to determine whether the participants were only verbalizing or also writing at the same time, and whether they were writing without verbalizing.Any nonverbal behavior which was not captured on the video recording (this was mainly dictionary and internet search and the retrieved results) was noted by the researcher and later included in the transcripts.The think-aloud protocols were chosen among the methods mentioned above because they have been the most widely used method to capture learners' mental processes (Uggen, 2012, p. 509). As recommended by Ericsson and Simon (1993), the concurrent thinkaloud protocols were combined with retrospective reports to counter the issue of incompleteness of the reports.For this reason, a stimulated recall interview took place immediately after the think-aloud session in which the video recording of the think-aloud session was used as a stimulus.The researcher stopped the video at points where the participant stopped verbalizing (suggesting that some thinking took place at this point which could be recalled and verbalized in the stimulated recall interview) and at points where some noticing was obvious, but it was not clear what was noticed and how the participant arrived at a specific decision.The participant was then asked to recall and verbalize their thoughts at that moment of the recording.The participants were also explicitly allowed to stop the recording at any time and comment on their thoughts.The stimulated recall interviews took place in German and they were audio-recorded. As there was no time limit set for the writing task to account for the fact that thinking aloud may slow down the execution of a task (Bowles, 2010), the duration of each session varied between 24 and 101 minutes (with 7 to 34 minutes for writing and 16 to 67 minutes for the stimulated recall interview), depending on the time the participants needed for writing. 
Transcription and coding The data was transcribed based on the VOICE transcription conventions (VOICE Project, 2007) which were adapted according to the requirements posed by the particular types of data.The coding procedure roughly corresponds to the grounded theory coding (Glaser & Strauss, 1967) and was conducted according to the recommendations specified by Kelle and Kluge (2010).Starting with open, data-driven coding, a system of categories was developed and a hierarchy created.The developed categories were compared with existing research and adapted to it to ensure comparability.In the think-aloud protocols, the coding unit was a language related episode (LRE) in line with most of the previous studies (see above).The stimulated recall interviews did not receive their own codings but served to identify the LRE types and problem-solving strategies in the think-aloud protocols. Grammar LREs This section presents results with regard to Research question 1 (Which grammar-related features do young [15 to 16-year-old] L2 learners notice when writing in English?).Among the LREs identified in the think-aloud protocols, morphological and syntactical LREs were selected for the analysis of grammar-related LREs, corresponding to the grammar or morphosyntactic episodes mentioned in the existing literature (Hanaoka, 2007;Swain & Lapkin, 1995;Williams, 1999).Pure lexical and spelling LREs were not considered as they involve only word choice (not word forms) decisions and LREs above the sentence level were also excluded due to the missing link to what is commonly subsumed under the term grammar.Of the 188 LREs produced by the ten participants, the majority (119) were related to lexis whereas only 36 were related to grammar. Morphological LREs were defined as LREs in which the participant looks for the right form of a word.The following example from a think-aloud protocol illustrates a morphological LRE: which is spoken spoke (.) spoken all over the world (.) speaken nein spoken English translation: which is spoken spoke (.) spoken all over the world (.) speaken no spoken There were altogether twelve morphological LREs in the whole data set.These were produced by five of the ten participants of the study, with two participants (M7 and F10) producing four morphological LREs each.The word classes which were the focus of the morphological LREs were verbs, nouns, prepositions, word class choice and one article.A list of the morphological LREs including precise information about the focus is shown in Table 1.As evident from the table, the forms and uses of modal verbs occurred three times.Other LREs which dealt with verbs concerned tense choice and the correct form of a past participle.Prepositions used together with specific nouns also were a matter of interest.Syntactical LREs were defined as questions of word order, sentence length, and punctuation.The following example from a think-aloud protocol illustrates a syntactical LRE: because i think that it's er late (.) necessary later (.) i think it's (2) later necessary erm (1) i think that it's (5) later necessary (.) 
necessary later (4) because i think that it's necessa-(.)later {adds "later" between "it's" and "necessary" } In the data set, 24 syntactical LREs were identified.Most participants produced between one and four syntactical LREs, but participant F10 produced eight syntactical LREs.The focus of the syntactical LREs was mainly on sentence length and connecting clauses.Some of these issues were also combined in one LRE (e.g., a learner decided to make his or her sentence longer, which is why he encountered the issue of how to link the new clause to the existing one).In addition, three LREs were concerned with word order and four LREs with other syntactical issues.For an overview, see Table 2. Problem-solving strategies This section presents results with regard to Research question 2 (Which problem-solving strategies do young [15 to 16-year-old] L2 learners of English use to deal with their grammar-related noticing in English?).The strategies used to resolve morphological LREs are listed in Table 1.For the LREs related to verb forms and nouns, reasoning was the preferred strategy in which learners used their common sense, background knowledge and their intended message to decide about the solution.Alternatively, or in addition to reasoning, the learners applied their explicit knowledge of rules, for example the knowledge of the infinitive, past, and past participle in verbs which are often learned together, or the knowledge about when a specific tense or verb form should be used.The questions about prepositions were solved either intuitively or the prepositional phrase was avoided and an alternative formulation was chosen (instead of opting for one out of several possible prepositions used with the word shower, the learner opted for the formulation take a shower). The strategies used to resolve syntactical LREs are listed in Table 2.The main strategy used to solve issues of sentence length was rephrasing which, in these specific cases, meant that the learner either finished a sentence and started a new one instead of using a conjunction to connect a new clause, or that they added a new clause to a sentence which they originally intended to finish.In one case, a learner applied explicit knowledge stating that long sentences are criticized at school.The rephrasing strategy was used in two different ways to deal with connecting clauses.The first way corresponds to the rephrasing strategy as used for the issues of sentence length.The second way is using rephrasing as functional reduction strategy, hereby changing the content of the utterance.For example, one participant wanted to say that it is important to read books, especially German literature, but he was not able to put all the information into one sentence.As a solution, he decided to leave out the information about German literature, finished the sentence and mentioned the skipped information later in his text.Applying a rule (e.g., that the word because should not be used at the beginning of a sentence) was another problemsolving strategy used to solve issues of connecting clauses.Two questions of word order were solved intuitively, one by using rephrasing as a functional reduction strategy (writing my favorite subjects and leaving out the numeral two, because the participant was not sure about its position in the sentence). 
The role of noticing in producing L2 output The finding that lexical issues are the most frequent ones corresponds to previous findings (Swain & Lapkin, 1995;Whalen & Ménard, 1995;Williams, 1999).However, as also noted in previous studies, noticing in other areas including grammar does take place.Based on the limited data gathered in the current study, it seems that learners are concerned more about syntax than about morphology.In addition, all learners encountered syntactical issues whereas just five learners encountered issues of morphology.A reason for this difference may lie in individual learner differences (e.g., their focus on fluency, accuracy or complexity, or their communicative confidence) which could be an area of future research. The current study has demonstrated the issues which were relevant to learners when they composed in L2 English.In the area of morphology, the choice of correct verb forms was an issue which occurred five times (out of twelve), with the main focus on the use of modal verbs.Interestingly, the choice of a correct tense was an issue that occurred just once in the whole data set.There are two possible reasons for this finding: (1) The task prompted the learners to use mostly the present simple tense or modals (with sentences such as "I would choose subject xy, because it is easy and I could concentrate on my hobbies"); (2) The learners have mastered the tenses to an extent which they perceive as sufficient, which enabled them to notice other issues such as the forms and meanings of modals. Another issue was the choice of the correct preposition.Even though it did not occur very often (three times in the whole data set), the fact that different participants encountered this issue speaks to its relevance.In two cases, the learners decided intuitively which preposition to use.In one case, the learner decided to choose a different phrasing in order to avoid using a preposition altogether.As the learners had dictionaries and the internet at their disposal, it is notable that they did not use them to clarify their problems, even though there was no time limit to the task.One reason can be that they were very confident about the solution they had come up with and another can be that the correct preposition was not so relevant for them. The prevailing focus of syntactical LREs was on sentence length and the ways clauses can be connected.Basically, the learners who encountered these issues decided to use either a comma or the conjunctions and, but and because to connect clauses.Two participants decided to use a non-finite construction instead of a finite one.The LREs the participants encountered did not prompt them to look for other possible ways to connect clauses. Comparing the findings with the foci of the studies into grammar noticing reveals that there was not much correspondence between the issues learners in this study spontaneously focused on while writing and the foci selected in studies into grammar noticing and into teaching grammar such as the use of articles (Bitchener, 2008;Ellis et al., 2008), questions, plurals, or past tense forms (Mackey, 2006).The only slight correspondence is the use of modals by the learners in this study and the use of conditionals in some studies (Izumi & Bigelow, 2000;Song & Suh, 2008). 
Regarding the use of problem-solving strategies, the data revealed that learners did not turn to additional resources to deal with their grammar LREs even though these would have been available and there was no time limit.Rather, they solved their problems intuitively, rephrased their utterances, or applied logical reasoning.The reason why grammar-related LREs are solved using the learner's own resources may lie in time management (finding a solution for a grammar issue may take longer than for a lexical issue), or in previous instruction (it cannot be excluded that the main focus in teaching how to use a dictionary is on finding single words). Explicit knowledge was used seven times to find a solution.This provides insights into some of the rules apparently taught at school, such as "do not use because at the beginning of a sentence" or "avoid long sentences." In morphological as well as in syntactical LREs, rephrasing occurred as avoidance strategy.In morphological LREs, it could be specified as formal reduction strategy where the content is kept, but a different formulation is chosen.In syntactical LREs, it was the functional reduction strategy where the originally planned utterance was not put on paper.However, the intended message was kept for later and used in a different sentence. A comparison to the strategies identified by Cumming (1989), and Swain and Lapkin (1995) reveals that using intuition and applying rules occurred in the current study as well as in the two previous studies on problem-solving strategies in writing.Rephrasing and reasoning are strategies unique to the current study. Limitations Even though the current study has been able to offer some new insights into grammar-related noticing and problem-solving strategies, it has got a number of limitations.First, the number of participants was too low to allow for any generalizations.Also the number of grammar-related LREs was very small due to the number of participants and due to the fact that the majority of the LREs were lexical.Thus, the detailed analysis only revealed tentative tendencies regarding the focus of grammar-related LREs and the problem-solving strategies.In addition, the methodology does not capture all mental processes and even though care was taken to elicit as much data as possible, some relevant LREs may have been missed due to them not being verbalized.Although caution was taken in the stimulated recall interviews to ask only about the thoughts at the time of writing, it cannot be ruled out completely that the participants also reported some new thoughts which only occurred to them during the stimulated recall interview and not during the writing process. Conclusions, further research and possible implications for instruction The current study has been able to open a small window onto the grammar focus of 15-to 16-year-old writers.It has revealed linguistic areas these learners were concerned with when writing in L2 English and shown that some of these areas are not yet represented in research on grammar noticing and teaching.The analysis of problem-solving strategies has shown that these learners relied mainly on their own resources when trying to resolve their grammar-related problems, not using the external resources available.The strategies identified in this study complement the strategies identified in the previous studies. 
The qualitative character of the current study with a low number of participants suggests that further research is needed to identify which grammatical features learners notice in a writing task. In addition, it would be interesting to see whether teaching the issues which the learners have encountered would bring about any change in their noticing and in their writing. Some phenomena may also be grounded in individual learner differences (for a study into the link between self-correction behaviors in speaking and individual learner differences, see Kormos, 1999). Kormos (2012) stresses the importance of investigating the role of individual differences in L2 writing. Therefore, further research is needed to see how individual learner differences influence noticing and self-correction behavior in writing.

In instruction, finding out which problems learners are concerned with in written language production may be a first step towards instruction which considers the learners' developmental stage (see the processability theory by Pienemann, 1998) and therefore is likely to be fruitful. As pointed out by Williams (2012), the relatively new approach of writing to learn looks at L2 writing as a possible instrument for L2 development. A grammar teaching approach which takes the learners' written output as the starting point for explaining grammar is the method of intelligent guessing (MIG) proposed by Angelovska and Hahn (2014). Focusing on the problems learners notice while writing, the teacher may provide them with strategies to deal with these problems, such as more sophisticated ways of connecting sentences or explicit instruction in the use of modals. As noted above, further research would be needed to find out whether there are more topics which the learners find relevant, and to what extent they are already considered in teaching.

Table 1. Focus of morphological LREs and the strategies used to resolve them. Note: the strategies relate to the broad focus of the LREs, not to the narrow focus or specific examples; the number of occurrences is shown in brackets.

Table 2. Focus of syntactical LREs and the strategies used to resolve them. Note: the strategies relate to the broad focus of the LREs, not to the narrow focus or specific examples; the number of occurrences is shown in brackets.
Improving Data-Efficiency and Robustness of Medical Imaging Segmentation Using Inpainting-Based Self-Supervised Learning

We systematically evaluate the training methodology and efficacy of two inpainting-based pretext tasks, context prediction and context restoration, for medical image segmentation using self-supervised learning (SSL). Multiple versions of self-supervised U-Net models were trained to segment MRI and CT datasets, each using a different combination of design choices and pretext tasks to determine the effect of these design choices on segmentation performance. The optimal design choices were used to train SSL models that were then compared with baseline supervised models for computing clinically-relevant metrics in label-limited scenarios. We observed that SSL pretraining with context restoration using 32 × 32 patches and Poisson-disc sampling, transferring only the pretrained encoder weights, and fine-tuning immediately with an initial learning rate of 1 × 10⁻³ provided the most benefit over supervised learning for MRI and CT tissue segmentation accuracy (p < 0.001). For both datasets and most label-limited scenarios, scaling the size of unlabeled pretraining data resulted in improved segmentation performance. SSL models pretrained with this amount of data outperformed baseline supervised models in the computation of clinically-relevant metrics, especially when the performance of supervised learning was low. Our results demonstrate that SSL pretraining using inpainting-based pretext tasks can help increase the robustness of models in label-limited scenarios and reduce worst-case errors that occur with supervised learning.

Introduction

Segmentation is an essential task in medical imaging that is common across different imaging modalities and fields such as cardiac, abdominal, musculoskeletal, and lung imaging, amongst others [1][2][3][4]. Deep learning (DL) has enabled high performance on these challenges, but the power-law relationship between algorithmic performance and the amount of high-quality labeled training data fundamentally limits robustness and widespread use [5]. Recent advances in self-supervised learning (SSL) provide an opportunity to reduce the annotation burden for deep learning models [6]. In SSL, a model is first pretrained on a "pretext" task, during which unlabeled images are perturbed and the model is trained to predict or correct the perturbations. The model is then fine-tuned for downstream tasks. Previous works have shown that such models can achieve high performance even when fine-tuned on only a small labeled training set [7][8][9]. While most SSL models in computer vision have been used for the downstream task of image classification, segmentation comparatively remains an under-explored task [10]. In this work, we systematically evaluate the efficacy of SSL for medical image segmentation across two domains: MRI and CT. We investigate "context prediction" [7] and "context restoration" [8], two well-known and easy-to-implement archetypes of restoration-based pretext tasks that produce image-level representations during pretraining for eventual fine-tuning. Context prediction sets pixel values in random image patches to zero, while context restoration randomly swaps pairs of image patches within an image while maintaining the distribution of pixel values (Figure 1). For both tasks, the model needs to recover the original image given the corrupted image, a process we refer to as "inpainting".
We consider these two tasks because they maintain the same input and output sizes, akin to segmentation. We hypothesize that such pretext tasks allow construction of useful, image-level representations that are more suitable for downstream segmentation.

Figure 1. Example ground truth segmentations for the MRI and CT datasets (both with dimensions 512 × 512), and example image corruptions for context prediction (zero-ing image patches) and context restoration (swapping image patches). Since image corruption happens after normalization, the zero-ed out image patches for context prediction were actually replaced with the mean of the image. The "Inpainting" section depicts image corruptions with four different patch sizes: 64 × 64, 32 × 32, 16 × 16, and 8 × 8. The locations of these patches were determined using Poisson-disc sampling to prevent randomly overlapping patches.

While context prediction and context restoration have been proposed before, the effects of the large space of design choices for these two pretext tasks, such as patch sizes for image corruption and learning rates for transfer learning, are unexplored. In addition, prior works exploring SSL for medical image segmentation have primarily focused on the accuracy of segmentation using metrics such as Dice scores [8,11], but have not investigated if SSL can improve clinically-relevant metrics, such as T2 relaxation times for musculoskeletal MRI scans and mean Hounsfield Unit (HU) values for CT scans. These metrics can provide biomarkers of biochemical changes in tissue structure prior to the onset of gross morphological changes [12,13]. Furthermore, within the context of empirical data scaling laws in DL, past SSL works have rarely explored the benefits of increasing the number of unlabeled images during pretraining [14]. Characterizing the efficiency of SSL methods with unlabeled data can lead to more informed decisions regarding data collection, an important practical consideration for medical image segmentation. In this work, we address the above gaps by (1) investigating how different design choices in SSL implementation affect the quality of the pretrained model, (2) calculating how varying unlabeled data extents affects SSL performance for downstream segmentation, (3) quantifying our results using clinically-relevant metrics to investigate if SSL can outperform supervised learning in label-limited scenarios, (4) evaluating where SSL can improve performance across different extents of labeled training data availability, and (5) providing detailed analyses, recommendations, and open-sourcing our code to build optimal SSL models for medical image segmentation (code available at https://github.com/ad12/MedSegPy).

MRI Dataset

We used 155 labeled knee 3D MRI volumes (around 160 slices per volume) from the SKM-TEA dataset [15] and 86 unlabeled volumes (around 160 to 180 slices per volume), each with slice dimensions of 512 × 512 (other scan parameters in [15]). All volumes were acquired using a 5-min 3D quantitative double-echo in steady-state (qDESS) sequence, which has been used for determining morphological and quantitative osteoarthritis biomarkers and for routine diagnostic knee MRI [16][17][18][19]. The labeled volumes included manual segmentations for the femoral, tibial, and patellar cartilages, and the meniscus. The labeled volumes were split into 86 volumes for training, 33 for validation, and 36 for testing, following the splits prescribed in [15].
The 86 training volumes were further split into additional subsets, consisting of 50% (43 volumes), 25% (22 volumes), 10% (9 volumes), and 5% (5 volumes) training data, to represent label-limited scenarios. All scans in smaller subsets were included in larger subsets. CT Dataset The 2D CT dataset consisted of 886 labeled and 7799 unlabeled abdominal CT slices at the L3 vertebral level. The unlabeled images were used in a prior study exploring the impact of body composition on cardiovascular outcomes [20]. The labeled slices included manual segmentations for subcutaneous, visceral, and intramuscular adipose tissue and muscle. These labeled slices were split into 709 slices for training, 133 for validation, and 44 for testing. The training set was split in a similar manner as the MRI volumes into 4 additional subsets of 50% (354 slices), 25% (177 slices), 10% (71 slices), and 5% (35 slices) training data. No metadata from the dataset were used in any models. Data Preprocessing All models segmented 2D slices for MRI and CT images. Each CT image was preprocessed at different windows and levels (W/L) of HU to emphasize different image contrasts, resulting in three-channel images: soft-tissue (W/L = 400/50), bone (W/L = 1800/40), and a custom setting (W/L = 500/50). All images were normalized to have zero mean and unit standard deviation, with MR images normalized by volume and CT images normalized per channel. Model Architecture and Optimization 2D U-Net models [21] with Group Normalization [22], weight standardization [23], and He random weight initializations [24] were used for inpainting and segmentation ( Figure 2). Both inpainting and segmentation used identical U-Nets, except for the final convolutional layer, which we refer to as the "post-processing" layer. For inpainting, the post-processing layer produced an output image with the same number of channels as the input image, whereas for segmentation, it produced a 4-channel image for the four segmentation classes in each dataset. Figure 2. The U-Net architecture used for both inpainting and segmentation, which includes layers grouped into three categories: the "encoder" (in red), the "decoder" (in blue), and the "postprocessing" layer (the final convolutional layer). Each dotted rectangular box represents a feature map from the encoder that was concatenated to the first feature map in the decoder at the same level. We used L2 norm loss for inpainting and Dice loss, aggregated over mini-batches per segmentation class, for segmentation. All training was performed with early stopping and the ADAM optimizer [25] (β 1 = 0.99 and β 2 = 0.995) with a batch size of 9 on an NVIDIA 2080Ti GPU. Additional details are in Appendix A.1. Image Corruption for Pretext Tasks We incorporated random block selection to select the square image patches to corrupt during pretraining. To ensure the amount of corruption per image was fixed and did not affect later comparison, the patches for each image were iteratively selected and corrupted until 1/4 of the total image area was corrupted. For context prediction, we selected and set random patches of dimensions K × K to zero in an iterative manner until the number of pixels set to zero equaled or exceeded 1/4 of the total image area. For context restoration, randomly selected pairs of non-overlapping K × K image patches were swapped in an iterative manner until the number of corrupted pixels equaled or exceeded 1/4 of the total image area. We refer to the result of both methods as "masks". 
The context prediction binary mask specified which pixels were zero and the context restoration mask was a list of patch pairs to be swapped. When pretraining with multi-channel CT images, the locations of the patch corruptions were identical across channels to avoid shortcut learning [26]. Example image corruptions are shown in Figure 1. To train the model to inpaint any arbitrarily corrupted image region without memorization of image content, we sampled a random mask every iteration for all images. For computational efficiency, we precomputed 100 random masks before training. We further randomly rotated the masks by either 0, 90, 180, or 270 • counter-clockwise to increase the effective number of masks used during training to 400. Design Choices for SSL Implementation Design choices for inpainting-based SSL segmentation revolving around pretraining task implementations [7,8] and transfer learning [27][28][29] have not been systematically compared. To overcome these shortcomings, we explored the following questions: 1. Which pretrained weights should be transferred for fine-tuning? 2. How should the transferred pretrained weights be fine-tuned? 3. What should be the initial learning rate when fine-tuning? 4. What patch size should be used when corrupting images for inpainting? 5. How should the locations of the patches be sampled when corrupting images for inpainting? 2.5.1. Design Choices for Transfer Learning (#1-3) For design choice #1 (which pretrained weights to transfer), we compared transferring only the U-Net encoder weights [7] with transferring both the encoder and decoder weights [8]. To compare different combinations of these three design choices, we performed a grid search and defined the best combination to be the one with the best segmentation performance on the MRI test set when trained with the MRI training subset with 5% training data. More details are in Appendix B.1. Design Choices for Pretraining (#4-5) For design choice #4, we compare patch sizes of 64 × 64, 32 × 32, 16 × 16, and 8 × 8 ( Figure 1). For design choice #5, we compare two sampling methods: (i) fully-random sampling where the location of each patch was selected at random and constrained to lie completely within the image [7,8], and (ii) Poisson-disc sampling that enforces the centers of all K × K patches to lie at least K √ 2 pixels away from each other to prevent overlapping patches [31]. To compare different combinations of design choices #4 and #5 and the two pretext tasks, we performed a grid search by training a model for each combination five times, each time using one of the five training data subsets, for both datasets. We also trained a fully-supervised model for each dataset and training data subset for a baseline comparison. All models were fine-tuned in an identical manner with the same random seed after pretraining, using the best combination of design choices #1-3. All inpainting models were compared by computing the L2 norm of the generated inpainted images. When computing the L2 norm value for each three-channel CT image, the L2 norm value was computed per channel and averaged across all channels. All segmentation models were compared by computing the Dice coefficient for each segmentation class in the test set, averaged across all available volumes/slices. 
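To make the patch corruption and the two sampling methods concrete, the sketch below generates patch locations and applies both corruptions to a single 2D array with NumPy. It is an illustrative simplification rather than the released MedSegPy implementation: the function names, the coverage bookkeeping, and the retry limit are our own choices, and the min_dist argument reproduces the K√2 center-distance constraint described above for Poisson-disc-style sampling. For the three-channel CT images, the same sampled centers would be reused across channels, as noted above.

```python
import numpy as np

def sample_patch_centers(shape, k, frac=0.25, min_dist=None, rng=None, max_tries=10000):
    """Sample centers of K x K patches lying fully inside the image until the
    union of the patches covers at least `frac` of the image area. If
    `min_dist` is set (e.g., k * sqrt(2)), candidates closer than `min_dist`
    to an accepted center are rejected, which guarantees non-overlapping
    patches (the Poisson-disc-style constraint)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = shape
    covered = np.zeros((h, w), dtype=bool)
    centers, tries = [], 0
    while covered.sum() < frac * h * w and tries < max_tries:
        tries += 1
        cy = int(rng.integers(k // 2, h - k + k // 2 + 1))
        cx = int(rng.integers(k // 2, w - k + k // 2 + 1))
        if min_dist and any((cy - y) ** 2 + (cx - x) ** 2 < min_dist ** 2
                            for y, x in centers):
            continue
        y0, x0 = cy - k // 2, cx - k // 2
        covered[y0:y0 + k, x0:x0 + k] = True
        centers.append((cy, cx))
    return centers

def corrupt_context_prediction(img, k, rng=None):
    """Context prediction: zero out random K x K patches (after per-image
    normalization, zero corresponds to the image mean)."""
    out = img.copy()
    for cy, cx in sample_patch_centers(img.shape, k, rng=rng):
        out[cy - k // 2:cy - k // 2 + k, cx - k // 2:cx - k // 2 + k] = 0.0
    return out

def corrupt_context_restoration(img, k, rng=None):
    """Context restoration: swap pairs of non-overlapping K x K patches,
    corrupting spatial structure while preserving the intensity distribution."""
    out = img.copy()
    centers = sample_patch_centers(img.shape, k, min_dist=k * np.sqrt(2), rng=rng)
    for (y1, x1), (y2, x2) in zip(centers[0::2], centers[1::2]):
        a = np.s_[y1 - k // 2:y1 - k // 2 + k, x1 - k // 2:x1 - k // 2 + k]
        b = np.s_[y2 - k // 2:y2 - k // 2 + k, x2 - k // 2:x2 - k // 2 + k]
        out[a], out[b] = img[b].copy(), img[a].copy()
    return out
```

A precomputed bank of 100 such masks, each randomly rotated by 0, 90, 180, or 270 degrees at load time, then yields the 400 effective masks described above.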
Optimal Pretraining Evaluation

We defined the optimal pretraining strategy as the strategy that provided the most benefit over supervised learning, across image modalities and training data extents, in the experiment described in Section 2.5.2. For each baseline (fully-supervised model) and SSL model trained in the experiment using 50%, 25%, 10%, and 5% training data, we computed class-averaged Dice scores for every test volume/slice in the MRI and CT datasets. For each pretraining strategy and dataset, we compared whether the set of Dice scores of the corresponding SSL models were significantly higher than that of the respective fully-supervised models using one-sided Wilcoxon signed-rank tests. As a heuristic, the pretraining strategies were sorted by their associated p-values, and the pretraining strategy that appeared in the top three for both the MRI and CT datasets was selected as the optimal pretraining strategy. We defined the optimally trained model for each dataset as the SSL model that was pretrained with this optimal pretraining strategy and fine-tuned for segmentation using the best combination of design choices #1-3.

Impact of Extent of Unlabeled Data

To measure the effect of the number of pretraining images on downstream segmentation performance, the optimally trained model was pretrained with the standard training set as well as two supersets of the training set containing additional unlabeled imaging data. We refer to the standard training set as 100% pretraining data (86 volumes for MRI and 709 slices for CT). For the MRI dataset, the second and third sets consisted of 150% (129 volumes) and 200% (172 volumes) pretraining data, respectively. For the CT dataset, the second and third sets consisted of 650% (4608 slices) and 1200% (8508 slices) pretraining data, respectively. After pretraining, all the pretrained models were fine-tuned with the five subsets of labeled training data, and a Dice score was computed for each fine-tuned model, averaged across all segmentation classes and all volumes/slices in the test set. To quantify the relationship between Dice score and the amount of pretraining data for each subset of labeled training data, a curve of best fit was found using non-linear least squares. The Residual Standard Error (S), the square root of the residual sum of squares divided by the residual degrees of freedom, was computed to quantify how well the curve of best fit describes the data. For MRI and CT, the pretraining dataset that led to the best average Dice score across the extents of labeled training data was chosen for further experiments.

Comparing SSL and Fully-Supervised Learning

We compared baseline fully-supervised models and the optimally trained models pretrained with the chosen pretraining dataset from the experiment described in Section 2.6. For each training data subset, models were evaluated using two clinically-relevant metrics for determining cartilage, muscle, and adipose tissue health status. For MRI, we computed mean T2 relaxation time per tissue and tissue volume [32]. For CT, we computed cross-sectional area and mean HU value per tissue. We calculated their percentage errors by comparing them to values derived from using ground truth segmentations to compute the metrics. To determine which images benefit maximally with SSL, we compared and visualized the percentage error in the clinically-relevant metrics between supervised learning and SSL. For both supervised learning and SSL, the percentage error for each test image was averaged over all classes and label-limited scenarios.
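A minimal sketch of these analyses, assuming a recent SciPy, is shown below: the power-law fit y = ax^k + c via non-linear least squares together with the residual standard error S, and the one-sided Wilcoxon signed-rank comparison of per-scan Dice scores against a supervised baseline. Variable names, starting values, and the n − p degrees-of-freedom convention are illustrative assumptions, not the exact analysis code used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import wilcoxon

def power_law(x, a, k, c):
    # Best-fit model for Dice score vs. amount of pretraining data.
    return a * np.power(x, k) + c

def fit_dice_vs_pretraining(pretrain_amount, dice):
    """Non-linear least-squares fit of y = a*x^k + c and residual standard error S.
    `pretrain_amount` is the amount of pretraining data (e.g., 0, 1.0, 1.5, 2.0
    times the standard training set); `dice` is the mean test Dice score."""
    x, y = np.asarray(pretrain_amount, float), np.asarray(dice, float)
    # Constrain k >= 0 so the model is well-defined at x = 0 (no pretraining).
    (a, k, c), _ = curve_fit(power_law, x, y, p0=(0.1, 0.5, 0.7),
                             bounds=([-np.inf, 0.0, -np.inf], np.inf))
    residuals = y - power_law(x, a, k, c)
    n, p = len(y), 3  # number of data points and fitted parameters
    s = np.sqrt(np.sum(residuals ** 2) / (n - p))
    return (a, k, c), s

def ssl_better_than_supervised(dice_ssl, dice_supervised):
    """One-sided Wilcoxon signed-rank test on paired per-scan Dice scores:
    H1 is that the SSL model scores higher than the supervised baseline."""
    _, p_value = wilcoxon(dice_ssl, dice_supervised, alternative="greater")
    return p_value
```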
Results

The subject demographics of all labeled and unlabeled volumes/slices are shown in Table 1.

Design Choices for Transfer Learning

We observed that all pretrained model variants had high performance when first fine-tuned with an initial learning rate of 1 × 10⁻³ and then fine-tuned a second time with an initial learning rate of 1 × 10⁻⁴. Transferring pretrained encoder weights only and fine-tuning once immediately with an initial learning rate of 1 × 10⁻³ achieved similar performance, with the added benefit of reduced training time. Consequently, we used these as the best combination of the three design choices for transfer learning. Additional details are in Appendix B.2.

Design Choices for Pretraining

The L2 norm consistently decreased as a function of patch size for all combinations of pretext tasks (context prediction and context restoration) and sampling methods (random and Poisson-disc) (Table 2). Furthermore, L2 norms for Poisson-disc sampling were significantly lower than those for random sampling (p < 0.05). Dice scores for fully-supervised baselines ranged from 0.67 to 0.88 across subsets of training data for MR images. Downstream segmentation performance for the MRI dataset was similar for all combinations of pretext task, patch size, and sampling method (Figure 3). All SSL models matched (within 0.01) or outperformed the fully-supervised model in low-label regimes with 25% training data or less for the femoral cartilage, patellar cartilage, and meniscus, and had comparable performance for higher data extents. For the tibial cartilage, all SSL models outperformed the fully-supervised model when trained on 5% training data and had comparable performance for higher data extents. The difference in Dice score between each self-supervised model and the fully-supervised model generally increased as the amount of labeled training data decreased. SSL pretraining also enabled some models to outperform the fully-supervised model trained with 100% training data in patellar cartilage segmentation.

Dice scores for fully-supervised baselines were consistently higher for CT images than for MR images, with the exception of intramuscular adipose tissue. Unlike with the MRI dataset, downstream SSL segmentation for CT in low-label regimes depended on the pretext task and the patch size used during pretraining (Figure 4). Models pretrained with larger patch sizes (64 × 64; 32 × 32) often outperformed those pretrained with smaller patch sizes (16 × 16; 8 × 8) for muscle, visceral fat, and subcutaneous fat segmentation, when trained with either 5% or 10% labeled data. Furthermore, when 25% training data or less was used, models pretrained with 32 × 32 patches using context restoration almost always outperformed fully-supervised models for muscle, visceral fat, and subcutaneous fat segmentation, but rarely did so when pretrained using context prediction. For intramuscular fat, all SSL models had comparable performance with fully-supervised models in low-label regimes. For high-label regimes (over 25% labeled data), all SSL models had comparable performance with fully-supervised models for all four segmentation classes.

Figure 4. The downstream segmentation performance on the CT dataset for the Context Restoration pretext task as measured by the Dice score for every combination of patch size and sampling method used during pretraining, evaluated in five different scenarios of training data availability.
In each scenario, every model is trained for segmentation using one of the five different subsets of training data as described in Section 2.1.2. The black dotted line in each plot indicates the performance of a fully-supervised model trained using all available training images. The light blue curve indicates the performance of a fully-supervised model when trained using each of the five different subsets of training data. Similar plots for the Context Prediction pretext task are given in Appendix C.

Optimal Pretraining Evaluation

The top 5 pretraining strategies for the MRI dataset and the top 3 pretraining strategies for the CT dataset led to significantly better segmentation performance compared to fully-supervised learning (p < 0.001) (Table 3). For MRI, the top 5 strategies all consisted of pretraining with context restoration, with minimal differences in p-value based on the patch size and sampling method used. For CT, the top 5 strategies used a patch size of at least 32 × 32 during pretraining. The strategy of pretraining with context restoration, 32 × 32 patches, and Poisson-disc sampling was in the top 3 for both datasets, and was therefore selected as the optimal pretraining strategy.

Table 3. Summary of the top five combinations of pretext tasks, patch sizes, and sampling methods for each dataset with the corresponding p-value for each combination, sorted by p-value in ascending order. The bolded pretext task, patch size, and sampling method were chosen as the best combination of the three design choices.

Impact of Extent of Unlabeled Data

For both datasets and for most subsets of labeled training data used during fine-tuning (except 25% and 10% labeled training data for MRI), the optimally trained model performed significantly better in downstream segmentation when pretrained on the maximum amount of data per dataset (200% pretraining data for MRI and 1200% pretraining data for CT) than when pretrained on only the training set (p < 0.05), as seen in Figure 5. When 25% or 10% labeled training data was used for MRI segmentation, the optimally trained model achieved a higher mean Dice score when pretrained on 200% pretraining data, but this was not statistically significant (p = 0.3 for 25% labeled training data and p = 0.02 for 10% labeled training data). For MRI, Dice scores almost always improved as the amount of pretraining data increased. This improvement was greatest when only 5% of the labeled training data was used for training segmentation. Improvements in segmentation performance were slightly higher for CT. For all extents of labeled training data, segmentation performance improved when the amount of pretraining data increased from 100% to 650%. There was limited improvement when the amount of pretraining data increased from 650% to 1200%. For both datasets, when 25%, 10%, or 5% of the labeled training data was used, the change in Dice score as a function of the amount of pretraining data followed a power-law relationship of the form y = ax^k + c (residual standard errors ≤ 0.005), where the value of k was less than 0.5. Pretraining on the maximum amount of data enabled the optimally trained models to surpass the performance of fully-supervised models for all extents of labeled training data, in both MRI and CT. For the MRI dataset, the highest improvement over supervised learning was observed when 5% labeled training data was used.
For CT, considerable improvements over supervised learning were observed when 5%, 10%, or 25% labeled training data was used. For both the MRI and CT datasets, the best average Dice score over all extents of labeled training data occurred when the maximum possible amount of pretraining data was used (200% pretraining data for MRI and 1200% pretraining data for CT). Figure 5. The downstream segmentation performance of the optimally trained model when pretrained with different amounts of pretraining data and fine-tuned using each of the five training data subsets. 100% pretraining data refers to the regular training set for each dataset. The data point for 0% pretraining data is the performance of a fully-supervised model. The black dotted line indicates the performance of a fully-supervised model trained on all available training data for the appropriate dataset. The other dotted lines are the best-fit curves for each of the training data subsets, modeled as a power-law relationship of the form y = ax k + c. The values of a, k, c, and the Residual Standard Error (S) for the best-fit curves are displayed in the two tables. Comparing SSL and Fully-Supervised Learning For each dataset, optimally trained models were pretrained with the maximum amount of pretraining data from Section 3.4. For all clinical metrics, using optimally trained models generally led to lower percent errors than using fully-supervised models in regimes of 10% and 5% labeled training data ( Figure 6). These differences were especially pronounced for CT tissue cross-sectional area, MRI tissue volume, and MRI mean T2 relaxation time. With 5% labeled training data for MRI, segmentations from optimally trained models more than halved the percent error for both tissue volume and mean T2 relaxation time of patellar cartilage, compared to segmentations from fully-supervised models. With 100% or 50% labeled training data, percent errors for all clinical metrics had lower improvement when optimally trained models were used. This was observed for CT tissue cross-sectional area, CT mean HU value, and MRI T2 relaxation time, where optimally trained models had similar or slightly worse performance than fully-supervised models when 100% or 50% labeled data was available. However, for MRI tissue volume, optimally trained models almost always outperformed the fully-supervised models, even in scenarios with large amounts of labeled training data. For both datasets, clinical metrics improved the most for the most challenging classes to segment. This included intramuscular adipose tissue for CT, where percent error decreased from around 3940% to 3600% for tissue cross-sectional area when 10% labeled training data was used, and patellar cartilage for MRI, where percent error decreased from around 30% to 12% for tissue volume when 5% labeled training data was used. Figure 6. A comparison of the percent error in calculating clinical metrics for the MRI and CT datasets between when the tissue segmentations are generated by fully-supervised models and when the tissue segmentations are generated by optimally trained models, pretrained using 200% data for MRI and 1200% data for CT. Each bar represents the median percent error across the test set for a particular tissue, clinical metric, and label regime. The percent error in the calculation of tissue cross-sectional area and mean HU for intramuscular fat extends beyond the limits of the y-axis when 10% and 5% labeled training data for segmentation is used. 
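The clinically-relevant metrics themselves reduce to simple reductions over the predicted masks. The sketch below shows one way to compute them and their percentage errors with NumPy; the function names, the unit conversions, and the use of an absolute percentage error are illustrative assumptions rather than the exact definitions used in the study.

```python
import numpy as np

def ct_tissue_metrics(mask, hu_image, pixel_area_mm2):
    """Cross-sectional area (cm^2) and mean HU inside a 2D binary tissue mask.
    `pixel_area_mm2` is the product of the in-plane pixel spacings (DICOM header)."""
    area_cm2 = mask.sum() * pixel_area_mm2 / 100.0
    mean_hu = float(hu_image[mask].mean()) if mask.any() else float("nan")
    return area_cm2, mean_hu

def mri_tissue_metrics(mask, t2_map, voxel_volume_mm3):
    """Tissue volume (cm^3) and mean T2 relaxation time (ms) inside a 3D mask."""
    volume_cm3 = mask.sum() * voxel_volume_mm3 / 1000.0
    mean_t2 = float(t2_map[mask].mean()) if mask.any() else float("nan")
    return volume_cm3, mean_t2

def percent_error(predicted, ground_truth):
    """Percentage error of a metric derived from a predicted segmentation,
    relative to the same metric derived from the manual segmentation."""
    return 100.0 * abs(predicted - ground_truth) / abs(ground_truth)
```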
On a per-image basis, using SSL consistently matched or reduced the percent errors of supervised learning across both datasets and all clinical metrics (Figure 7). Furthermore, when using SSL, the percent error for all clinical metrics improved more for test images with larger percent errors when using supervised learning. For tissue cross-sectional area and mean HU value for CT, the improvement in SSL percent error gradually increased as the supervised percent error increased beyond 10%. The same pattern existed for MRI tissue volume as the supervised percent error increased beyond 20%. For MRI mean T2 relaxation time, the improvement in percent error when using SSL increased for most test images as the supervised percent error increased beyond 5%, but this was not as consistent as for the other clinical metrics. On average, when excluding intramuscular fat for CT, SSL decreased per-image percent errors for CT tissue cross-sectional area, CT mean HU value, MRI tissue volume, and MRI mean T2 relaxation time by 4.1, 1.9, 4.1, and 2.2%, respectively. The relationship between the percent error when using supervised learning and the percent error when using SSL. Each blue point represents an image in the test set for the appropriate dataset. The percent error was averaged over all classes and label-limited scenarios. For CT, the intramuscular fat was excluded to prevent large percent error values. For MRI T2 relaxation time, one point with a high percent error for supervised learning was excluded to reduce the range of the x-axis. Discussion In this work, we investigated several key, yet under-explored design choices associated with pretraining and transfer learning in inpainting-based SSL for tissue segmentation. We examined the effect of inpainting-based SSL on the performance of tissue segmentation in various data and label regimes for MRI and CT scans, and compared it with fully-supervised learning. We quantified performance using standard Dice scores and four clinically-relevant metrics of imaging biomarkers. We observed that the crosstalk between the initial and fine-tuning learning rate was a design choice that most affected model performance. All model variants achieved optimal performance with an initial learning rate of 1 × 10 −3 and a fine-tuning learning rate of 1 × 10 −4 ( Figure A1). This suggests the need for not perturbing the pretrained representations from the pretext task with a large learning rate. Moreover, although freezing and then fine-tuning the transferred weights provided an improvement over fine-tuning immediately for this learning rate combination ( Figure A1), the improvement was very small. This result matches the findings of Kumar et al. [30], where the performance of linear probing (freezing) and then fine-tuning only slightly improved the performance of finetuning immediately after transferring. Additional details are provided in Appendix B.3. Here, we suggest some best practices for inpainting-based SSL for medical imaging segmentation tasks. We observed that downstream segmentation performance for MRI was similar for all combinations of pretext tasks, patch sizes, and sampling techniques. This observation remained consistent despite significant differences in the L2 norms of the inpainted images. While decreasing patch sizes and sampling patch locations via Poissondisc sampling to ensure non-overlapping patches both resulted in significantly lower L2 norms, they did not improve downstream segmentation performance. 
These observations suggest a discordance between learning semantically meaningful representations and the accuracy of the pretext task metric. Thus, simply performing good enough pretraining may be more important than optimizing pretext task performance. For both MRI and CT, segmentation performance usually increased in proportion to the amount of pretraining data. The highest improvements over supervised learning were observed in the context of very low labeled data regimes of 5-25% labeled data. These empirical observations across both MRI and CT demonstrate that pretraining with large enough datasets improves performance compared to only supervised training, especially when the amount of available training data is limited. Similar to supervised learning, improvements in SSL Dice scores tended to follow a power-law relationship of the form y = ax k + c as the size of the unlabeled corpora increased [5]. The observations that the value of k was less than 0.5 when 25%, 10%, or 5% labeled data was used for either dataset and pretraining on 650% and 1200% CT pretraining data led to similar improvements over supervised learning suggest a limit exists where the learning capacity of a model saturates and additional unlabeled data may not improve downstream performance. A good practice for future segmentation studies may be to create Figure 5 to evaluate the trade-off between the challenges of annotating more images and acquiring more unlabeled images. Compared to fully-supervised models, optimally trained models generally led to more accurate values for all clinical metrics in label-limited scenarios. We also observed that clinical metrics improved the most with SSL for tissue classes that had the highest percent error with fully-supervised learning-intramuscular adipose tissue in CT and patellar cartilage in MRI. This observation, combined with the Dice score improvement in low labeled data regimes, suggests that SSL may be most efficacious when the performance of the baseline fully-supervised model is low. A similar pattern was observed on a per test image basis. For all clinical metrics, the improvement in percent error when using optimally trained models was greater for test images that performed poorly when using fully-supervised models. This suggests that SSL pretraining can reduce worst-case errors that occur with traditional supervised learning. Moreover, our observation that SSL percent errors consistently either matched or were lower than supervised percent errors indicates SSL pretraining also increases the robustness of models in label-limited scenarios. However, we also observed that optimally trained models sometimes had similar or even worse performance than fully-supervised models for CT tissue cross-sectional area, CT mean HU value, and MRI T2 relaxation time in scenarios with 100% or 50% labeled data. This observation suggests that SSL does not have much benefit when the labeled dataset is large. In such cases, it may be more efficient to simply train a fully-supervised model, rather than spend additional time pretraining with unlabeled data. When training with 5% labeled data for all MRI classes and muscle on CT, our optimal pretraining strategy improved Dice scores by over 0.05, compared to fully-supervised learning. In such cases, the Dice score for fully-supervised learning was 0.8 or lower, which suggests a critical performance threshold where inpainting-based SSL can improve segmentation performance over supervised learning. 
SSL may be beneficial in these cases because the models still have the capacity to learn more meaningful representations, compared to models with Dice scores over 0.8 that may already be saturated in their capacity to represent the underlying image. Importantly, it should be noted that the improvement in segmentation performance with SSL pretraining in label-limited scenarios is of a similar order to prior advances that used complex DL architectures and training strategies [34][35][36]. Comparatively, our proposed SSL training paradigm offers an easy-to-use framework for improving model performance for both MRI and CT without requiring large and difficult-to-train DL models. Moreover, since we have already investigated different implementation design choices and experimentally determined the best ones, our proposed training paradigm will provide researchers with an implementation of inpainting-based SSL for their own work, without requiring them to spend resources/compute investigating these design choices again. This is especially important as we have shown that simply performing inpainting-based pretraining on the same data that is ordinarily only used for supervised learning improves segmentation accuracy compared to supervised learning only. Study Limitations There were a few limitations to this study. Although we investigated two different methods for selecting which pretrained weights to transfer, we did not conduct a systematic study across all possible choices due to computational constraints that made searching over the large search space too inefficient. We also leave other SSL strategies such as contrastive learning to future studies, since such methods require systematic evaluation of augmentations and sampling strategies. Furthermore, when we investigated the impact of unlabeled data extents on downstream segmentation performance, we did not pretrain our SSL models with equal extents of unlabeled MRI and CT data since we maximized the amount of available MRI data. In addition, our investigations in this work are limited to the U-Net architecture, though future work can explore other powerful segmentation architectures. Finally, we did not experiment with other optimizers potentially better than the Adam optimizer. Recent studies [37] have shown that there may be value in optimizers such as Stochastic Gradient Descent for better generalization in natural image classification and that there are potential trade-offs when choosing between optimizers. We leave the systematic investigation of this issue on medical imaging data for future follow-up work. Conclusions In this work, we investigated how inpainting-based SSL improves MRI and CT segmentation compared to fully-supervised learning, especially in label-limited regimes. We presented an optimized training strategy and open-source implementation for performing such pretraining. We described the impact of pretraining task optimization and the relationship between the sizes of labeled and unlabeled training datasets. Our proposed pretraining approach improves segmentation performance without requiring additional manual annotation, complex model architectures, or specialized model training techniques. Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board (or Ethics Committee) of Stanford University (protocol codes: 26840 and date of approval 14 October 2020, 58903 and date of approval 10 November 2020).
Informed Consent Statement: Informed consent was obtained for all subjects involved in the MRI analysis due to its prospective data acquisition. Informed consent was waived for all subjects in the CT analysis since it included retrospective analysis of already-acquired data. Data Availability Statement: All MRI data can be accessed via the publicly-shared SKM-TEA data repository (https://github.com/StanfordMIMI/skm-tea), CT data can be made available via request to the authors in a manner compliant with IRB approvals and human subjects research. Conflicts of Interest: The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. Appendix A. Implementation Details Appendix A. 1 . Model Architecture and Optimization The loss function for inpainting was the L2 loss, implemented as in Equation (A1) for a model output (X) and ground truth (Y), where N, H, W, and C denote the batch size, height, width, and number of channels of the images, respectively. The loss function for segmentation was a variant of the Dice loss, implemented as in Equation (A2) for a model output (X) and ground truth (Y), where N, H, W, and C denote the batch size, height, width, and number of channels of the images, respectively. Due to the instability of pixel-wise losses for sparse classes, we used a batch-aggregate Dice loss, where the Dice loss was computed over the aggregate of a mini-batch per segmentation class and the final loss was the mean Dice loss across segmentation classes. For inpainting, the learning rate was set to 1 × 10 −3 and decayed by a factor of 0.9 every 2 epochs. To prevent overfitting, early stopping [38] based on the validation L2 loss was used with a threshold of 50 and patience of 4 epochs. For a baseline fully-supervised segmentation, the initial learning rate was also set to 1 × 10 −3 and the learning rate followed the same schedule as inpainting. For self-supervised segmentation, the fine-tuning initial learning rate was considered a design choice and is described in Section 2.5.1, but the learning rate schedule was the same as for inpainting and fully-supervised segmentation. For both fully-supervised and self-supervised segmentation, early stopping based on the validation Dice loss was used to prevent overfitting, with a threshold of 1 × 10 −3 and a patience of 10 epochs. All inpainting and segmentation models were trained until the criteria for early stopping was achieved. The same random seed was used for all experiments. Appendix A.2. Design Choices for SSL Implementation For all experiments described in Section 2.5, the self-supervised models were pretrained on all of the training data for the appropriate dataset. Appendix B. Design Choices for Transfer Learning Appendix B.1. Grid Search Implementation As described in Section 2.5.1, we compared two methods for selecting which pretrained weights to transfer and two methods for fine-tuning the transferred pretrained weights. To compare the four combinations of these two design choices, we trained one model per combination. Since the first fine-tuning method, in which the pretrained weights are fine-tuned immediately, involves one training run, and the second fine-tuning method, in which the pretrained weights are first frozen, involves two training runs, we chose to train the two models in which the pretrained weights were fine-tuned immediately two times to ensure a fair comparison. 
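One possible PyTorch rendering of the two losses described in Appendix A.1 (the L2 inpainting loss of Equation (A1) and the batch-aggregate Dice loss of Equation (A2)) is sketched below; the tensor layout (N, C, H, W) and the smoothing constant `eps` are assumptions, not values taken from the released code.

```python
import torch

def l2_inpainting_loss(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Eq. (A1)-style loss: mean squared error over batch, spatial and channel dimensions."""
    return ((x - y) ** 2).mean()

def batch_aggregate_dice_loss(x: torch.Tensor, y: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Eq. (A2)-style loss: Dice computed over the aggregate of the mini-batch per class,
    then averaged across classes. x holds per-class probabilities, y holds one-hot masks;
    both have shape (N, C, H, W)."""
    dims = (0, 2, 3)  # aggregate over the batch and spatial dimensions, keep the class axis
    intersection = (x * y).sum(dim=dims)
    denominator = x.sum(dim=dims) + y.sum(dim=dims)
    dice_per_class = (2.0 * intersection + eps) / (denominator + eps)
    return 1.0 - dice_per_class.mean()
```

The learning-rate schedule described above (initial rate 1e-3, decayed by a factor of 0.9 every 2 epochs) would correspond to a step decay such as `torch.optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.9)`.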
To investigate the impact of the initial learning rate during fine-tuning, we trained each of the four models four times during the first training run, each time with one of the four possible initial learning rates, and then trained each of the sixteen trained models again four times, each time with one of the four possible initial learning rates. We selected the learning rates 1 × 10 −2 , 1 × 10 −3 , 1 × 10 −4 , and 1 × 10 −5 for specific reasons. 1 × 10 −2 was selected as an example of a large learning rate, to determine if finetuning with a large learning rate will destroy pretrained features. 1 × 10 −3 was selected as an example of a "common" learning rate, and was used as the initial learning rate for all our other experiments. Finally, 1 × 10 −4 and 1 × 10 −5 were selected arbitrarily as examples of small learning rates. All pretrained weights were derived from an inpainting model that was trained with context prediction with 16 × 16 patches and Poisson-disc sampling, and all models were fine-tuned for segmentation using the MRI training subset with 5% data. The same random seed was used when training each model. All models were compared by computing the Dice coefficient for each volume in the MRI test set, averaged across the four segmentation classes. Appendix B.2. Results For the first training run following pretraining, higher initial learning rates of 1 × 10 −2 and 1 × 10 −3 produced better results. The FEF, FBF, and FFEF models had similar performance for all initial learning rates, and consistently performed better than the FFBF models ( Figure A1). When each type of model was trained for a second time with an initial learning rate of 1 × 10 −2 , each model's performance was similar to its performance when trained only once with an initial learning of 1 × 10 −2 . This occurred regardless of the initial learning rate during the first training run. For example, the FEF, FBF, and FFEF models had relatively high performance when trained once with an initial learning rate of 1 × 10 −3 , but when these models were trained for a second time with an initial learning rate of 1 × 10 −2 , the performance of all three models dropped to the same level of performance as when each type of model was trained only once with an initial learning rate of 1 × 10 −2 . Similarly, but in the opposite way, the FEF, FBF, and FFEF models had relatively low performance when trained once with an initial learning rate of 1 × 10 −4 or 1 × 10 −5 , but when these models were trained for a second time with an initial learning rate of 1 × 10 −2 , the performance of all three models increased to the same level of performance as when each type of model was trained only once with an initial learning rate of 1 × 10 −2 . When each type of model was trained once with an initial learning rate of 1 × 10 −2 and then trained a second time with a smaller learning rate, the FFBS models outperformed the FES and FBS models when the initial learning rate of the second training run was 1 × 10 −3 or 1 × 10 −4 , and the FFES models outperformed the FES and FBS models for all initial learning rates smaller than 1 × 10 −2 . When each type of model was trained once with an initial learning rate of 1 × 10 −4 and then trained a second time with an initial learning rate of 1 × 10 −3 or lower, the FES and FFES models had similar performance and always outperformed the FBS and FFBS models. Figure A1. Box plots displaying the spread of Dice scores among the volumes in the MRI test set. 
The top row displays the spread of Dice scores after each type of model was trained once, with the initial learning rate set to the appropriate value on the x-axis. The remaining four rows display the spread of Dice scores after each model in the first row was trained again, with the initial learning rate set to the appropriate value on the x-axis. We used the following structure for acronyms to distinguish between the different types of models: ABC. If A is F, the pretrained weights were fine-tuned immediately, and if A is FF, the pretrained weights were first frozen and then fine-tuned. If B is E, only the pretrained encoder weights were transferred, and if B is B, both the pretrained encoder and decoder weights were transferred. If C is F, the model was trained only once (the first training run), and if C is S, the model was trained a second time (the second training run). When each type of model was trained once with an initial learning rate of 1 × 10 −3 , training each model again with an initial learning rate equal to or less than 1 × 10 −3 did not improve or only slightly improved the model's performance. The exception was the FFBF model, for which the performance always increased by a large amount during the second training run, regardless of the initial learning rate used during the second training run. The FEF, FES, FBF, and FBS models had similar performance when the initial learning rates for the first and second training runs were set to 1 × 10 −3 . We concluded that FEF (fine-tuning immediately after transferring only the pretrained encoder weights), trained with an initial learning rate of 1 × 10 −3 , was the best combination of design choices for transfer learning because this model achieved high segmentation performance with minimal training time. Appendix B.3. Discussion In this experiment, we determined the best combination of fine-tuning mechanism, weight loading strategy, and initial learning rate during fine-tuning. Overall, every model had high performance when first trained with an initial learning rate of 1 × 10 −3 and then trained a second time with an initial learning rate of 1 × 10 −4 , despite using different fine-tuning mechanisms and different methods for selecting which pretrained weights to transfer. This suggests choosing the initial learning rates for the first and second training runs is the design choice for transfer learning that most affects model performance. If the initial learning rate of the first training run is too large, like 1 × 10 −2 , the pretrained features are at risk of being destroyed. For example, the FEF and FBF models performed worse when trained with an initial learning rate of 1 × 10 −2 than when trained with an initial learning rate of 1 × 10 −3 . Furthermore, when the initial learning rate during the second training run is too large, a model has a risk of escaping out of an already found local minimum. For example, although the FBF model had high performance when trained once with an initial learning rate of 1 × 10 −3 , its performance dropped when trained again with an initial learning rate of 1 × 10 −2 . On the other hand, if the initial learning rate is too small, a model may not be able to learn during fine-tuning. This was suggested by the low performance of all four types of models when they were trained once with an initial learning rate of 1 × 10 −5 and then trained again with an initial learning rate of either 1 × 10 −4 or 1 × 10 −5 . 
Although the design choice for transfer learning that most affects model performance is the initial learning rate during fine-tuning, the results of this experiment suggest that transferring only the pretrained encoder weights may lead to better performance gains than transferring both the pretrained encoder and decoder weights. For instance, in the first training run, the FFEF models performed similarly to the FEF models for all initial learning rates, suggesting the frozen encoder features in the FFEF models were as good as the fine-tuned encoder features in the FEF models. In addition, when the four types of models were first trained with an initial learning rate of 1 × 10 −4 and then trained again with an initial learning rate of 1 × 10 −3 or lower, the FES and FFES models always outperformed the FBS and FFBS models. These results suggest that transferring only the pretrained encoder weights provides a better initialization point for segmentation fine-tuning than transferring both the pretrained encoder and decoder weights. Appendix C. Design Choices for Pretraining Below, we provide additional figures that illustrate the effect of design choices on the context prediction task for MRI ( Figure A2) and CT ( Figure A3). Figure A2. The downstream segmentation performance on the MRI dataset for the Context Prediction pretext task as measured by the Dice score for every combination of patch size and sampling method used during pretraining, evaluated in five different scenarios of training data availability. In each scenario, every model is trained for segmentation using one of the five different subsets of training data as described in Section 2.1.1. The black dotted line in each plot indicates the performance of a fully-supervised model trained using all available training images. The light blue curve indicates the performance of a fully-supervised model when trained using each of the five different subsets of training data. Figure A3. The downstream segmentation performance on the CT dataset for the Context Prediction pretext task as measured by the Dice score for every combination of patch size and sampling method used during pretraining, evaluated in five different scenarios of training data availability. In each scenario, every model is trained for segmentation using one of the five different subsets of training data as described in Section 2.1.2. The black dotted line in each plot indicates the performance of a fully-supervised model trained using all available training images. The light blue curve indicates the performance of a fully-supervised model when trained using each of the five different subsets of training data.
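To make the recommendation of Appendix B.3 concrete, transferring only the pretrained encoder weights amounts to a filtered `state_dict` copy. The sketch below illustrates this for a U-Net-style model whose encoder parameters share a common name prefix; the prefix `"encoder."` and the helper names are assumptions and do not refer to identifiers in the released implementation.

```python
import torch

def transfer_encoder_weights(pretrained_ckpt: str, seg_model: torch.nn.Module,
                             encoder_prefix: str = "encoder.") -> torch.nn.Module:
    """Copy only the encoder weights from an inpainting checkpoint into a segmentation model."""
    pretrained_state = torch.load(pretrained_ckpt, map_location="cpu")  # assumed to be a state_dict
    encoder_state = {k: v for k, v in pretrained_state.items() if k.startswith(encoder_prefix)}
    # strict=False leaves the randomly initialized decoder / segmentation head untouched
    seg_model.load_state_dict(encoder_state, strict=False)
    return seg_model

def freeze_encoder(seg_model: torch.nn.Module, encoder_prefix: str = "encoder.") -> None:
    """Optionally freeze the transferred encoder before a first fine-tuning run (the 'FF' variants)."""
    for name, param in seg_model.named_parameters():
        if name.startswith(encoder_prefix):
            param.requires_grad = False
```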
2023-02-08T16:15:07.777Z
2023-02-01T00:00:00.000
{ "year": 2023, "sha1": "621b1ed951786a01d0e079987a405c6c96d22eb3", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2306-5354/10/2/207/pdf?version=1675503817", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "25e1205d929adfa58e1cb52f31b5b0d2b8b5fd49", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
219401342
pes2o/s2orc
v3-fos-license
Current Status and Factors Affecting Knowledge Sharing Practices among Health Professionals in Hiwot Fana Specialized University Hospital in Ethiopia Knowledge sharing is about sharing relevant task (skills, experiences) among team members or with other people and making the shared knowledge reusable. Hence, the objective of this study was to assess the current status and factors affecting knowledge sharing practices among health professionals in Hiwot Fana Specialize University Hospital in Ethiopia. A Cross-sectional research method with questionnaire was employed for the study. The sample size was 152 categories of health professionals that were: 46 nurses, 18 doctors, 15 radiologists, 26 laboratory technologists, 22 health officers and 26 pharmacists; selected from a population of 268 professionals in the hospital. The sampling technique was stratified simple random sampling. The results showed that the current status of Knowledge Sharing practices by all categories of health professionals was ‘high’ in areas such as: formal training programs and workshops to share knowledge (3.43); individuals’ pleasure to share their know-how, information, working experience and knowledge to colleagues voluntarily (3.81); individuals’ pleasure to share freely information and knowledge that improves the hospital performance (4.00) and colleagues awareness of the importance of knowledge sharing in the hospital (3.90). Knowledge sharing was ‘very low’ based on the non-availability of motivational scheme in the hospital to motivate knowledge sharing (1.74) but ‘low’ on staffs feeling of motivation to share knowledge in the hospital (2.44). However, there were factors that affected knowledge sharing practices, which included: lack of willingness by colleagues to share their information with other colleagues at all times; lack of awareness on the importance of knowledge sharing in the daytoday work and lack of intrinsic motivation that staff would gain new ideas, technologies skills or techniques by sharing knowledge. The study concludes that there were variations in the opinions of the categories of health professionals on the current status and factors affecting knowledge sharing practices in the hospital. Introduction In today's global economy, knowledge is essential in everyday work. Knowledge is skills and practices, which is found in people, organizations and embedded in different artifacts, procedures; as well as stored in different media. Such media as print and non-print alike, as well as resources that employee should possess to effectively execute their tasks [1] & [2]. People rely on knowledge to perform daily activities and largely communicate ideas both at local and international levels for successful execution of routine responsibilities, activities and knowledge transfer. Knowledge transfer on the other hand is placed on a continuum that is passed from one generation to another. It improves communal activities and engagements, corporate cognitive experiences and competitive advantage, which includes scientific communications. Whereas, [3] stated that every accomplishment needs some sort of knowledge, because there is nothing which can be performed without knowledge. While knowledge as a source of resource and economy, it should therefore be managed through knowledge management. Knowledge management (KM) is a popular concept that plays an important role in an organization by improving performance and gaining competitive advantage. 
It helps in improving the systematic management of knowledge within healthcare organizations, industries, educational institutions, governmental and business organizations [4]; [5] & [6]. Healthcare organizations are increasingly becoming a knowledge-based community that depends critically on knowledge management to improve the healthcare delivery services through knowledge sharing (KS), to as well make or produce best practices and benefits to the organizations [7] & [8]. KS again, is a deliberate act that makes knowledge reusable between two or more parties such as individuals, organizations or parts of an organization; through the knowledge exchange. Organizational knowledge is primarily embedded in the minds of the employees (called tacit knowledge) and in the activities, procedures, routines, processes and norms of the organization (known as explicit knowledge) [9]. The sharing of both tacit and explicit knowledge is very important and essential for the competitiveness of any business organization including healthcare institutions. However, there are factors that hinder the KS among employees of a given organization. These factors might be categorized into three: individual factors, organizational factors and technological factors [10]. The individual factors, which are influenced by willingness, awareness, past mistakes, trust, motivation, job satisfaction and fear of loss of personal competitiveness are the building blocks of the success of KS practices in organizations. The success of any KS practice depends largely on the communication among individuals, particularly sharing knowledge among the individuals. KS practice is among other reasons, related to the readiness of individuals to share their knowledge with others. However, effective KS among individuals depend on the individual KS behavior. Therefore, organizations may focus on the individual factors that influence KS behavior of individuals to have successful KS initiatives. Trust, awareness, willingness, job satisfaction, and intrinsic factor of motivation are categorized as individual factors [11]. In organizations, there are many ways of motivating and promoting KS practices. Knowledge exists in organizations and is influenced by organizational culture, management support, organizational structure, group interaction, reward, incentives, recognition and perceived openness [12]. However, its existence does not guarantee its utilization and dissemination among employees. Organizations' management of its knowledge resources can only be successful if it goes a long way to effectively facilitate knowledge sharing; to achieve maximum competitive advantage or successes within the organization. Therefore, organizations are required to build and maintain organizational factors that will support a KS environment [13]. Technology is said to be one of the knowledge management (KM) infrastructures along with people and processes. [14] & [15] believed that it is necessary to find technical ways in order to find, disseminate and utilize the knowledge. Information technology is usually said to be a good way for inter-organizational KS practice; especially for companies that are dispersed but want an environment which motivates people to share information, knowledge and best practices. It is also important to note that KM software needs to be integrated into the organizational culture, human resource as well as information technology (IT) infrastructure [11]. 
Also, companies on their part should consider technology that fits well with their employees and the organization [10]. However, technological factors are influenced by ICT infrastructure, usage, training and compatibility of ICT tools. The concepts of knowledge and KS practices were described as a resource that employees should possess to effectively execute their tasks, while knowledge sharing practices/activities are sets of tasks that are used to share knowledge between knowledge owners and knowledge seekers [1]. The study revealed lack of teamwork, lack of communication channels, and lack of encouragement as hindering factors to KS practices. Besides, lack of skill and knowledge and lack of trust in peers were identified as major impediments to practicing the culture of KS. The lack of trust towards management was another hindrance. On the other hand, respondents in the study did not perceive lack of policies and guidelines to hinder KS practices. But [16], on barriers to KS and strategies to promote KS in an American-based multinational company in Malaysia, revealed that most of the respondents agreed that there was a KS strategy and a growing awareness of the benefit of KS in the organization. However, it was worrying to know that 22 percent of the study participants responded negatively to the statement that KS was important to the organization. Also, 27 percent of the respondents were not willing to share knowledge. The study pointed out that the most effective method to promote KS was to link it with rewards and performance appraisal. Top management support was considered vital to ensure the success of KS in the organization. Considering the Cheng and Wai et al. studies, it would be reasonable to note that individual and organizational factors can play a major role in knowledge sharing in organizations. However, [17] examined factors affecting employees' KS intention, KS behavior and innovation behavior in four top-ranked university hospitals in South Korea. The study used self-administered questionnaires to collect data. The researchers categorized factors influencing KS into individual factors (i.e. incentives, reciprocity, subjective norms and behavioral control) and organizational factors (i.e. organizational structure, administrative support, learning climate, IT systems, rewards systems and trust). The results generally revealed that behavioral control and trust were factors affecting hospital employees' KS intention, KS behavior and innovation behavior. A related facility-based cross-sectional study, employing both quantitative and qualitative methods, included a total of 196 health professionals working in the hospital studied. The study indicated that there was no frequent KS activity, due to lack of formal and informal KS opportunities. The hospital had no ICT infrastructure to help facilitate KS. Due to lack of incentives and poor management support, the respondents were not motivated to share knowledge. In the study, KS opportunity, communication channel, motivation, resource allocation and higher education were found to be independent predictors of KS practices. However, [29]; [20] & [21], on the assessment of KS practices of healthcare professionals, showed that work experience, willingness, KS opportunity and intrinsic motivation were common independent predictors of KS practice in public and private hospitals, with an association between KS practice and learning commitment in private hospitals.
Also, motivation to transfer knowledge, salary increment, supportive leadership and KS opportunity were significant predictors affecting healthcare professionals' KS practices, while job satisfaction, a very high level of motivation, extrinsic motivation, use of communication channels and presence of KS opportunity were independent predictors of KS in hospitals. But [22], on the status of KS among health professionals, indicated that the vast majority of the respondents (89%) said that there was no KS strategy in Assosa Hospital. 73% of the respondents disagreed that healthcare workers share their knowledge, work experience and ideas frequently through group discussions and review meetings, and 59% of the participants said there was no motivational scheme in the hospital for KS practices. In today's rapidly changing healthcare environment and health institutions (such as hospitals and clinics), chances and successes are optimized not by telling healthcare professionals what to do, but by enabling them to make informed decisions. No health professional knows everything he or she needs to know, nor where his or her limitations lie [23]. Health professionals need up-to-date health information from credible sources to improve their knowledge and provide evidence-based healthcare services to their clients. In that case, having KS habits within organizations will benefit health institutions and their customers by maximizing intellectual capital, minimizing costs, and making individuals and organizations stay competitive in their jobs and environments. Although KS practices among different healthcare sectors are acknowledged globally, KS is still poorly practiced [7]. Objective The objective of this study was to assess the current status and factors affecting knowledge sharing practices among health professionals in Hiwot Fana Specialized University Hospital (HFSUH) in Ethiopia. Methodology A cross-sectional research method was employed for the study. The study ran from December 2018 to June 2019. A questionnaire instrument was used as a data collection tool to collect quantitative data from the participants. The participants were categories of health professionals in the Hiwot Fana Specialized University Hospital (HFSUH) in Ethiopia that included: nurses, doctors, radiologists, laboratory technologists, health officers and pharmacists. A sample size of 152 health professionals, comprising 46 nurses, 18 doctors, 15 radiologists, 26 laboratory technologists, 22 health officers and 26 pharmacists, was taken from a target population of 268 for the study. The sampling technique used for the selection of the categories of health professionals as respondents was stratified simple random sampling. This was because the population was heterogeneous and currently working in the HFSUH. The target population was first divided into uniform strata and then, using a simple random sampling technique, the sample was selected [24]. The questionnaire items were prepared in English and used to collect relevant data from the 152 sampled nurses, doctors, radiologists, laboratory technologists, health officers and pharmacists in HFSUH. It comprised only closed-ended questions on a five-point Likert scale with the categories 'strongly agreed/very high'; 'agreed/high'; 'undecided/medium'; 'disagreed/low' and 'strongly disagreed/very low'. It also covered the collection of data on the current status and the factors affecting KS practices in the HFSUH.
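Under proportional stratified random sampling, each stratum's sample size is its share of the target population scaled to the overall sample. The short sketch below illustrates the arithmetic only; the per-stratum population counts are hypothetical, since the study reports only the totals (268 professionals, sample of 152) and the final per-stratum sample sizes.

```python
# Hypothetical stratum populations (for illustration only) summing to the target population
population = {"nurses": 81, "doctors": 32, "radiologists": 26,
              "laboratory technologists": 46, "health officers": 39, "pharmacists": 44}
total_sample = 152

total_population = sum(population.values())  # 268 in this illustration
allocation = {stratum: round(total_sample * size / total_population)
              for stratum, size in population.items()}
print(allocation)  # each stratum is then drawn by simple random sampling within that stratum
```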
The response rate for the study was 152(100%) of the total sample population of 152 (100%) of the study. The proportional distributions of the respondents were recorded as in Table 1 on the frequency and percentage response rate of participants in the study. Current Status of Knowledge Sharing Practices at HFSUH Knowledge sharing is about sharing relevant task (skills, experiences) among team members or with other people and making the shared knowledge reusable. Knowing the current status of KS practices of the categories of health professionals (i.e. Doctors, Nurses, Laboratory technologists, Radiologists, Health officers and Pharmacists) at the HFSUH becomes very necessary because it will help to determine the shortcomings of KS practices in the hospital and help in proposing a way to improve the practices. To analyze the data collected from the participants' questionnaire, the SPSS version 20, computer programme was used. In Table 2 below, eight (8) items/KS variables on the current status of knowledge sharing practices among health professionals at HFSUH were measured on a five point scale, using an equal interval of 0.80. The eight (8) items or KS variables were: i) Staffs feeling motivated to share knowledge in the hospital. ii) Availability of motivational scheme in the hospital to motivate knowledge sharing. iii) There is periodic meetings in which people working in different departments participate in the hospital. iv) In the hospital there are formal opportunities like training programs and workshops that allow employees to share knowledge v) It is individuals' pleasure to share their know-how, information, working experience and knowledge to their colleagues voluntarily. vi) It is individuals' pleasure to share freely information and knowledge that improves the Hospital performance. vii) Colleagues are willing to share information with other colleagues at all times in the hospital. viii) Colleagues are aware of the importance of knowledge sharing on a daily basis in the hospital. Thus the guideline below was used for interpreting the attitude mean scores of respondents in Table 2 on the current status of knowledge sharing practices among health professionals at HFSUH. The Table is divided into seven (7) columns as follows: serial numbering of the items/KS variables; the items/KS variables; categories of health professionals; responses; mean scores and decision column of the researchers, respectively. A mean score was considered 'Very Low (VL)', if it falls within the range of 1.00 -1.80; a mean score within the range 1.80 -2.60 was taken as 'Low (L)'; a mean within the range 2.60 -3.40 was considered 'Medium (M)', while a mean score within the range 3.40 -4.20 was taken as 'High (H)'; and a mean score within the range 4.20 -5.00 was considered 'Very High (VH)' for positive items [25]. Table 2 above was analyzed based on item analysis on the opinions of the categories of the health professionals that included: nurses, doctors, radiologists, laboratory technologists, health officers and pharmacists in the HFSUH. The items analysis showed that the current status of KS practices by all categories of health professionals was 'high' in items 4, 5, 6 and 8 (i.e. 
In the hospital there were formal opportunities like training programs and workshops within the hospital that allow employees to share knowledge (3.43); It was individuals' pleasure to share their know-how, information, working experience and knowledge to colleagues voluntarily (3.81); It is the individuals' pleasure to share freely information and knowledge that improves the Hospital performance (4.00) and Colleagues are aware of the importance of knowledge sharing on daily basis in the hospital (3.90), respectively). But the availability of motivational scheme in the hospital to motivate knowledge sharing was 'very low' in item 2, i.e. There were availability of motivational scheme in the hospital to motivate knowledge sharing (1.74); they responded low in item 1, i.e. Staffs feeling motivated to share knowledge in the hospital (2.44) and finally the response was 'medium' in item 3, i.e. In the hospital there were periodic meetings in which people working in different departments participate (3.32), respectively. A cursory look at the individual categories on the items showed that Radiologists had a 'very high' KS practice in item 6 (i.e. It is individuals' pleasure to share freely information and knowledge that improves the Hospital performance), while they had 'very low' experiences in KS practices in items 1(i.e. Staffs feel motivated to share knowledge in the hospital) and 2 (i.e. There were availability of motivational scheme in the hospital to motivate knowledge sharing), although other categories like: nurses, health officers, and pharmacists had 'very low' in item 2. Factors Affecting Knowledge Sharing Practices in Hiwot Fana Specialized University Hospital Seventeen (17) variables were identified as factors affecting the six (6) categories of health professionals, who were: nurses, doctors, radiologists, laboratory technologists, health officers and pharmacists. The seventeen identified factors were addressed under three (3) categories of factors that included: individual factors, which had seven (7) components (i.e. willingness: awareness: past mistakes, trust, job satisfaction, intrinsic motivation and fear of loss personal competitiveness); organizational factors, which had six (6) components (i.e. organizational culture, management support, organizational structure, group interaction, reward and recognition, perceived openness) and technological factors that had four (4) components (i.e. ICT infrastructure, ICT Usage, ICT training and compatibility of ICT tools). To analyze the data collected from the participants in questionnaire, the SPSS version 20, computer programme was used; as represented in Table 3 below on the factors affecting KS practices among health professionals in HFSUH. The factors were measured on a five point scale, using an equal interval of 0.80. Thus the guideline used for interpreting the attitude mean scores of respondents in Table 3 shows that, a mean score was considered 'strongly disagreed (SD)', if it falls within the range of 1.00 -1.80; a mean score within the range 1.80 -2.60 was taken as 'Disagreed (D)'; a mean within the range 2.60 -3.40 was considered 'undecided (UD)', while a mean score within the range 3.40 -4.20 was taken as 'Agreed (A)'; and a mean score within the range 4.20 -5.00 was considered 'strongly Agreed (SA)' for positive items [26]. 
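The interpretation guideline used for Tables 2 and 3 maps a mean score to one of five equal-width bands of 0.80. A small helper makes the mapping explicit; treating each upper bound as inclusive is an assumption here, since the published ranges share their endpoints.

```python
def interpret_mean(score: float) -> str:
    """Map a five-point Likert mean to the category label used in Tables 2 and 3."""
    bands = [(1.80, "Very Low / Strongly Disagreed"),
             (2.60, "Low / Disagreed"),
             (3.40, "Medium / Undecided"),
             (4.20, "High / Agreed"),
             (5.00, "Very High / Strongly Agreed")]
    for upper, label in bands:
        if score <= upper:
            return label
    raise ValueError("score outside the 1-5 Likert range")

print(interpret_mean(3.81))  # -> 'High / Agreed'
print(interpret_mean(1.74))  # -> 'Very Low / Strongly Disagreed'
```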
Table 3 above, shows that all categories of health professionals that include: nurses, doctors, radiologists, laboratory technologists, health officers and pharmacists in the HFSUH; 'agreed' that three (3) factors affect KS practices in the hospital. Such factors include items 1(i.e. Lack of willingness by my colleagues to share their information with other colleagues at all the time),2(i.e. Lack of awareness on the importance of knowledge sharing in the day-to-day work) and 3(i.e. Lack of intrinsic motivation that staff would gain new ideas, technologies skills or techniques by sharing knowledge). The items were categorized under individual factors, while the other two categories of organizational and technological factors did not have any item agreed upon. However, the table also revealed that all categories of health professionals were 'undecided' on eight (8) of the factors. The factors were items 4(i.e. In our organization past mistakes of employees are never solved by giving training for those who made that), 6(i.e. Lack of trust of my co-workers hampers knowledge sharing),10(i.e. Lack of organizational structure in the hospital knowledge sharing),11(i.e. Our organization does not encourage group interaction (team work) regarding knowledge sharing),12(i.e. Most of the people I work with are not cooperative and open to share knowledge), 13(i.e. In my organization individuals who share their knowledge are not recognized and acknowledgement), 14(i.e. Lack of ICT infrastructure (internet, intranet) in the hospital) and 15(i.e. There is lack of technical support and maintenance of integrated Information Technology system). Although, the KS factors cut across the three (3) categories of KS factors affecting the categories of health professionals, the perception had a greater weighting coming from the organizational category; that had four (4) items (i.e. items 10, 11, 12 and 13), while individual and technological factors had two (2) items each (i.e. 4 and 6 as well as 14 and 15 respectively). On the perceived 'undecided factors' they could as well be considered problems that could affect KS practices in the hospital. All categories of health professionals 'disagreed' on six (6) of the factors that fall in the three (3) categories of individual, organizational and technological factors as follows: two (2) items were in individual factors (i.e. items 5-'Fearing the loss of personal competitiveness on sharing knowledge in my organization' and item 7-'Lack of job satisfaction in the organization reduce knowledge sharing among colleagues', respectively); then two (2) items were from organizational factors (i.e. items 8 & 9; being 'Lack of organizational culture does not create knowledge sharing' and 'Lack of management support knowledge sharing with colleagues', respectively) and two (2) other items were from technological factors (i.e. items 16 & 17 that were: 'In the hospital, employees do not use knowledge networks such as (email, intranet, internet) to communicate with colleagues' and 'In our organization ICT tools are not easily used by employees of the organization', respectively). It is safe to conclude therefore, that the identified factors affecting KS practices in HFSUH were: Lack of willingness by my colleagues to share their information with other colleagues all the time; Lack of awareness on the importance of knowledge sharing in the day-to-day work and Lack of intrinsic motivation that staff would gain new ideas, technologies skills or techniques by sharing knowledge. 
The items were categorized under individual factors, without the inclusion of organizational and technological factors. To identify the degree of statistical significance of the factors affecting KS practices in HFSUH, inferential statistics were performed on the mean scores of respondents in Table 4. A one-way analysis of variance (one-way ANOVA) was used to test the significant differences that existed among the groups/categories of health professionals against the categories of factors, namely individual factors, organizational factors and technological factors. Table 4 shows that the mean difference of the items of factors affecting KS practices of the categories of health professionals, comparing 'between groups' and 'within groups', was significant on six (6) factors that included items 4, 5, 7, 11, 14 & 15 at the p < 0.05 level. The p-values of the six (6) items were 0.044, 0.021, 0.002, 0.048, 0.048 and 0.008, respectively. The items were: past mistakes of employees are never solved by giving training to those who made the mistakes in the organization; lack of trust of co-workers hampers knowledge sharing; lack of job satisfaction in the organization reduces knowledge sharing among colleagues; the organization does not encourage group interaction (team work) regarding knowledge sharing; lack of ICT infrastructure (internet, intranet) in the hospital; and lack of technical support and maintenance of an integrated information technology system. Discussion The healthcare sector is knowledge-intensive, and sharing knowledge is important to achieve the intended goals and the delivery of quality healthcare services. The objective of this study was to assess the current status of knowledge sharing practices and identify the factors affecting knowledge sharing practices among health professionals in Hiwot Fana Specialized University Hospital. The study found that the current status of knowledge sharing practices by all categories of health professionals, which include nurses, doctors, radiologists, laboratory technologists, health officers and pharmacists, varies according to the mean scores from 'high' to 'low' and 'very low'. It is 'high' in areas such as: formal training programs and workshops within the hospital that allow employees to share knowledge (3.43); individuals' pleasure to share their know-how, information, working experience and knowledge to colleagues voluntarily (3.81); individuals' pleasure to share freely information and knowledge that improves the hospital performance (4.00) and colleagues' awareness of the importance of knowledge sharing on a daily basis in the hospital (3.90). These findings corroborate [29]; [19]; [27]; [28] & [16] on knowledge sharing: in today's rapidly changing healthcare environment, health institutions (such as hospitals and clinics) optimize their chance of success not by telling healthcare professionals what to do, but by enabling them to make informed decisions. No health professional knows everything he or she needs to know; knowledge, as skills and resources, is found in employees and organizational processes, embedded in different artifacts and procedures, and stored in different media that are shared. It is 'low' on staff's feeling of motivation to share knowledge in the hospital (2.44) and 'very low' in the area of availability of a motivational scheme in the hospital to motivate knowledge sharing (1.74).
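The significance test summarized in Table 4 above is a standard one-way ANOVA comparing the six professional categories on each factor item. A minimal SciPy sketch is given below; the score arrays are hypothetical stand-ins for the per-respondent Likert ratings of a single item, not the study's data.

```python
from scipy import stats

# Hypothetical Likert ratings (1-5) of one factor item, grouped by professional category
nurses       = [4, 3, 5, 4, 4, 3, 2, 4]
doctors      = [2, 3, 3, 2, 4, 3]
radiologists = [4, 4, 5, 3, 4]
lab_techs    = [3, 3, 4, 2, 3, 4]
health_offs  = [2, 2, 3, 3, 4]
pharmacists  = [3, 4, 4, 5, 3, 4]

f_stat, p_value = stats.f_oneway(nurses, doctors, radiologists,
                                 lab_techs, health_offs, pharmacists)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # the groups differ on this item if p < 0.05
```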
These results also support the findings of [20] & [22], which revealed health professionals' need for up-to-date health information from credible sources to improve their knowledge and provide evidence-based healthcare services to their clients. Having KS habits within organizations will benefit health institutions and their customers by maximizing intellectual capital, minimizing costs, and making individuals and organizations stay competitive in their environment. Also, participants' motivation to share their knowledge with colleagues will be enhanced, based on their engagement, the level of motivation and the preferred motivational scheme in the organization. Identifying factors affecting KS practices in organizations in general, and in the hospital under study in particular, is important because it will increase the use of knowledge that already exists in the hospital, whether tacit or explicit. The factors affecting KS were categorized in this study into three groups: individual, organizational and technological factors [10]. From the one-way ANOVA tests of significance, it was found that the factors affecting KS practices of the categories of health professionals at HFSUH were: past mistakes of employees are never solved by giving training to those who made the mistakes in the organization; lack of trust of co-workers hampers knowledge sharing; lack of job satisfaction in the organization reduces knowledge sharing among colleagues; the organization does not encourage group interaction (team work) regarding knowledge sharing; lack of ICT infrastructure (internet, intranet) in the hospital; and lack of technical support and maintenance of an integrated information technology system. These results are consistent with previous studies [17] & [18] on hospital employees' KS intention, KS behavior and innovation behavior. Conclusions Based on the objective and findings of this study, which was to assess the current status of knowledge sharing practices and to identify the factors affecting knowledge sharing practices among health professionals in Hiwot Fana Specialized University Hospital, it is concluded that the current status and factors affecting knowledge sharing practices vary according to the feelings and opinions of the categories of health professionals in the hospital. These categories of health professionals include: nurses, doctors, radiologists, laboratory technologists, health officers and pharmacists in the HFSUH. The hospital authority should therefore address the factors that hinder health professionals in the hospital from sharing their knowledge, by developing an effective knowledge sharing framework and other knowledge sharing mechanisms.
2020-06-06T21:40:02.381Z
2020-07-01T00:00:00.000
{ "year": 2020, "sha1": "6f312af12b91e64e50b4451c49bd616a6e33e365", "oa_license": "CCBY", "oa_url": "http://www.hrpub.org/download/20200530/UJM5-12115594.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6f312af12b91e64e50b4451c49bd616a6e33e365", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Psychology" ] }
17678104
pes2o/s2orc
v3-fos-license
A Model-Based Fuzzy Control Approach to Achieving Adaptation with Contextual Uncertainties Self-adaptive system (SAS) is capable of adjusting its behavior in response to meaningful changes in the operational context and itself. Due to the inherent volatility of the open and changeable environment in which SAS is embedded, the ability of adaptation is highly demanded by many software-intensive systems. Two concerns, i.e., the requirements uncertainty and the context uncertainty are most important among others. An essential issue to be addressed is how to dynamically adapt non-functional requirements (NFRs) and task configurations of SASs with context uncertainty. In this paper, we propose a model-based fuzzy control approach that is underpinned by the feedforward-feedback control mechanism. This approach identifies and represents NFR uncertainties, task uncertainties and context uncertainties with linguistic variables, and then designs an inference structure and rules for the fuzzy controller based on the relations between the requirements model and the context model. The adaptation of NFRs and task configurations is achieved through fuzzification, inference, defuzzification and readaptation. Our approach is demonstrated with a mobile computing application and is evaluated through a series of simulation experiments. INTRODUCTION Self-adaptive system (SAS) is a novel computing paradigm in which the software is capable of adjusting its behavior in response to meaningful changes in the environment and itself [1]. The ability of adaptation is characterized by self-*properties, including self-healing, self-configuration, self-optimizing and selfprotecting [2]. Innovative technologies and methodologies inspired by these characteristics have already created avenues for many promising applications, such as mobile computing, ambient intelligence, ubiquitous computing, etc. Software-intensive systems are systems in which software interacts with other software, systems, devices, sensors and with people intensively. Such an operational environment may be inherently changeable, which makes self-adaptiveness become an essential feature. Context can be defined as the reification of the environment [3] that is whatever provides as a surrounding of a system at a time. It provides a manageable and manipulable description of the environment. Context is essential for the deployment of self-adaptive systems. As the environment is changeable, the context is unstable and ever changing and the system is desired to perform different behaviors according to different contexts. Therefore, engineers need to build effective adaptation mechanisms to deal with context changes. Requirements Engineering (RE) for self-adaptive systems primarily aims to identify adaptive requirements, specify adaptation logic and build adaptation mechanisms [4]. Conducting context analysis at requirements phase will be worthwhile at the design and development phases, because context may influence the decisions about what to build and how to build them. However, some kinds of uncertainty may occur in both context and requirements [5]. First, it is often infeasible to precisely detect, measure and describe all the context changes. This kind of imprecision in the context can be viewed as context uncertainty [6]. Second, the extent to which the non-functional requirements (NFRs) are satisfied, and the task configurations with which the system operates in the changing context, are also uncertain. 
These kinds of uncertainties are known as requirements uncertainties [7]. Thus, dealing with uncertainties both in the requirements and the context becomes a challenge for the research community of RE for SASs. Many research works in the literature have shown remarkable progress in providing solutions to mitigating the uncertainty. A research agenda towards tackling context uncertainties is provided in [8]. More recently, related works are fully synthesized and summarized in a roadmap paper [6]. Some of the existing works focus on modeling and specifying the requirements uncertainty. FLAGS [9] is proposed for mitigating the requirements uncertainty by extending the goal model with adaptive goals. RELAX [10], a formal requirements specification language, is introduced to relax the objective of SAS. Other works proposed approaches of architecture-based adaptation. FUSION [11] uses online learning to mitigate the uncertainty associated with the changes in context and tune behaviors of the system to unanticipated changes. POISED [12] improves the quality attributes of a software system through reconfiguration of its components to achieve a global optimal configuration for the software system. However, how to dynamically adapt NFRs according to the context changes and how to dynamically adjust task configurations to satisfy the changed NFRs are still lacking in quantitative studies, especially when the context uncertainties intertwine with the requirements uncertainties. To solve these issues, two difficulties should be addressed. First, before the adaptation of the task configurations, the system requirements may evolve according to the context changes and the evolution may modify the criteria on which the trade-off of adaptation decision is based. Second, due to the informal nature of RE activities, e.g., the inherent fuzziness and vagueness of human perception, understanding and communication of their desire in relation to the non-formal real world, we cannot precisely define the mathematical relations between changing contexts and the system requirements [13]. The objective of this paper is to provide SAS the capability of adapting NFRs and adjusting the task configurations with the context uncertainty. It is divided into two answerable research questions: (RQ1) how the desired satisfaction degrees of NFRs can be dynamically adapted with context uncertainties and (RQ2) how the task configurations can be dynamically adapted incorporating context uncertainties considering the trade-off among NFRs. To this end, we propose a model-based fuzzy control approach by integrating the requirements and the context into a feedforwardfeedback control mechanism. The feedforward controller is a fuzzy controller while the feedback controller is a crisp controller (only crisp values involved). The feedforward loop is mainly designed for solving RQ1, while RQ2 will be answered within both feedforward and feedback control loops. First, this approach is derived from the goal-oriented requirements model and a hierarchical context model. Then it identifies and represents the uncertainties of the requirements and the context with some linguistic variables and membership functions. The inference structure and heuristic rules of the fuzzy controller are designed based on types of relations among the uncertainties. 
The fuzzy controller takes the monitored context as input and makes decisions on the adaptation of desired satisfaction degrees of NFRs and the task configurations through fuzzification, inference and defuzzification. If the deviation between desired satisfaction degrees and the actual ones is above a given threshold, the feedback controller will readapt task configurations until the deviation falls below the threshold. The approach is demonstrated and evaluated with an application from the mobile computing domain. The rest of this paper is organized as follows. Section 2 provides preliminary knowledge followed by the approach overview and the motivating example in Section 3. Section 4 presents the concepts, models and representation of the uncertainty, followed by the design of the fuzzy controller in Section 5. Section 6 elaborates the adaptation process, followed by the evaluation and discussion in Section 7. Section 8 presents the related work, followed by conclusion and future work in Section 9. PRELIMINARIES This section introduces the background knowledge of the goal model, the feedforward-feedback control and the fuzzy controller. Goal model Goal model and the goal-oriented analysis are proposed in the RE literature to present the rationale of both humans and systems. A goal model describes the stakeholder's needs and expectations for the target system. Figure 1 presents a simple KAOS model [14]. The goals model stakeholders' intentions while the tasks model the functional requirements which can be used to achieve the goals. Goals can be refined through AND/OR decompositions into sub-goals or can be achieved by sub-tasks. For AND decomposition (e.g., 1 ∧ 2 → 2 ), a parent goal will be satisfied when all its sub-elements are achieved, while for OR decomposition (e.g., 3 ∨ 4 → 1 ), a parent goal can be satisfied by achieving at least one of its sub-elements. OR-decompositions incorporate and provide sets of alternatives which can be chosen flexibly to meet goals. Softgoals model the NFRs, which have no clear-cut criteria for their satisfaction and can be used to evaluate different choices of alternative tasks. Tasks can contribute to softgoals through the help or hurt contribution relation. Feedforward-feedback control Feedback control loop is proven to be an appropriate way of building adaptation mechanisms in adaptive systems [17]. We can both consider an adaptive system as a feedback control system [15] and conduct the requirements analysis from a feedback control standpoint [16]. The systematic survey [18] provides other control types that can be applied in designing adaptive systems. In this paper, we adopt feedforward-feedback control mechanism to underpin the entire adaptation process. Figure 2 presents a conventional feedforward-feedback control mechanism. Feedforward control loop measures the disturbances and adjusts the control input to reduce the impact of the disturbance on the system output. Thus, it is considered as a proactive control mechanism. On the other hand, feedback control loop adjusts the input according to the measured error and maintains the output sufficiently closed to what is desired. Therefore, it can be viewed as a retroactive control mechanism. Feedforward-feedback control mechanism has the advantage of both control schemes. First, it can tune system behavior based on the measured disturbances at runtime. Second, when deviations exist between the measured output and desired output, it can correct the behavior accordingly. 
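The complementary roles of the two loops can be summarized in a few lines of control-loop pseudocode. The sketch below is purely illustrative: the helper functions (`initial_state`, `measure_disturbance`, `feedforward`, `measure_output`, `feedback`, `plant`) are placeholders, not components defined in this paper.

```python
def run_control_loop(setpoint, steps=100):
    """Combined feedforward-feedback control: the feedforward term compensates measured
    disturbances proactively, while the feedback term corrects the residual output error."""
    state = initial_state()                        # placeholder
    for _ in range(steps):
        disturbance = measure_disturbance()        # e.g., a monitored context change
        u_ff = feedforward(disturbance)            # proactive control input
        error = setpoint - measure_output(state)   # deviation of the measured output
        u_fb = feedback(error)                     # retroactive correction
        state = plant(state, u_ff + u_fb)          # apply the combined control input
    return state
```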
Fuzzy control and fuzzy controller Fuzzy control is a practical alternative for achieving high-performance control of nonlinear, time-variant systems, since it provides a convenient method for constructing nonlinear controllers from heuristic rules. Heuristic rules may come from domain experts. Engineers incorporate these rules into a fuzzy controller that emulates the decision-making process of a human expert. A fuzzy controller, depicted in Figure 3, has four principal components [19]: (1) The rule base holds the knowledge, in the form of a set of control rules, of how best to control the system. (2) The fuzzification block maps the crisp inputs onto membership functions, so that they can be interpreted and compared according to the rules in the rule base. (3) The inference machine evaluates which control rules are relevant to the current input and decides what the membership degrees of the output to the plant should be. (4) The defuzzification block converts the conclusions, expressed as membership degrees, into the crisp output to the plant. A set of membership functions supports all of these transformations. OVERALL APPROACH AND MOTIVATING EXAMPLE This section presents the adaptation mechanism, elaborates the adaptation processes and describes our motivating example. Control mechanism Figure 4 presents the feedforward-feedback fuzzy control mechanism underpinning our approach. The mechanism consists of a feedforward control loop (the upper part of Figure 4) and a feedback control loop (the lower part of Figure 4). The inputs are the monitored context and the desired satisfaction deviation, while the outputs are the desired satisfaction degrees of NFRs and the task configurations. For RQ1, we consider that the adaptation of NFRs is achieved through feedforward control. Contexts are identified and integrated within a hierarchical model, while the objectives of the target system are represented within its requirements model. To achieve higher performance or lower cost, NFRs are resilient, and the desired satisfaction degrees of NFRs need to be adapted to the context changes at runtime. Under this circumstance, the context changes can be viewed as outside disturbances, whose values should be monitored and delivered to the feedforward controller. For dealing with context uncertainties, we use a fuzzy controller as the feedforward controller. After inference, the desired satisfaction degrees of NFRs are sent to the actuator as the control input. Meanwhile, they are also sent to the sensor to compute the deviation from the actual satisfaction degrees. For RQ2, the task adaptation covers both parametric adaptation and structural adaptation [20]. There are two aspects of the task adaptation. First, the system tasks should be adapted based on context changes at runtime; this kind of task adaptation is achieved through the feedforward control. Meanwhile, the actual satisfaction degrees of NFRs can be derived. Deviations between the desired satisfaction degrees and the actual ones are measured by a sensor. If the deviations are above the given threshold, the crisp controller readapts the task configurations; this re-adaptation is the second kind of task adaptation. The feedforward-feedback fuzzy control mechanism benefits from both the conventional feedforward-feedback control and the fuzzy control. It is a general mechanism that can be built into many types of SAS to support dynamic adaptation of the satisfaction degrees of NFRs and the configurations of system tasks.
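The interplay of the two loops can be summarized in the following minimal Python sketch. All callables, the threshold value and the iteration bound are placeholders assumed for illustration; they are not prescribed by the mechanism in Figure 4.

```python
def control_step(monitor_context, fuzzy_feedforward, apply_configuration,
                 measure_actual_satisfaction, crisp_readapt,
                 threshold=0.1, max_rounds=10):
    """One pass of a feedforward-feedback loop of the kind sketched in Figure 4.
    All callables stand in for components of the mechanism; the threshold and
    the iteration bound are assumed values."""
    # Feedforward path: the monitored context (the disturbance) drives the
    # fuzzy controller, which outputs desired NFR satisfaction degrees and an
    # initial task configuration.
    context = monitor_context()
    desired, configuration = fuzzy_feedforward(context)
    apply_configuration(configuration)

    # Feedback path: the crisp controller readapts the configuration while any
    # softgoal is under-satisfied by more than the tolerated deviation.
    for _ in range(max_rounds):
        actual = measure_actual_satisfaction()
        if all(a >= d - threshold for d, a in zip(desired, actual)):
            break
        configuration = crisp_readapt(configuration, desired, actual)
        apply_configuration(configuration)
    return desired, configuration
```

The feedforward call happens once per observed context change, while the feedback loop repeats until the deviation becomes tolerable or the iteration bound is reached.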
Processes towards adaptation Based on above control mechanisms, we design processes to achieve the adaptation in Figure 5, which includes uncertainty identification, fuzzy controller design, feedforward control-based adaptation and feedback control-based readaptation. The former two processes can be viewed as the preprocessing for the latter two. Each process consists of several sub-processes. Uncertainty identification process is composed of modeling the requirements and the context and specifying their uncertainties. The requirements are modeled with goal-oriented method, while the contexts are identified according to the requirements model. Then uncertainties of requirements and contexts are identified and specified with linguistic terms and membership functions. Fuzzy controller design process consists of three steps. First, we choose the appropriate input and output for the fuzzy controller based on our research questions. Then the inference structure is determined according to the input and output. Thereafter, the heuristic rules need to be built with the knowledge of domain experts. Feedforward control-based adaptation process is responsible for solving RQ1 and achieving the first-fold adaptation for RQ2. Fuzzy controller (Figure 3) is the first-class entity in this process, since it is used to complete all the three sub-processes, including fuzzification, inference and defuzzification. At last, feedback control-based adaptation process is responsible for achieving the second-fold adaptation for RQ2. The actual satisfaction degrees of NFRs are derived according to the adapted tasks. The final adaptation decisions can be made through an iterative process consisting of evaluating the deviation of satisfaction degree and readapting task configurations. Motivating example To illustrate the above processes and evaluate our approach, we take the Push Ambient Notification application from the mobile computing domain as the example. Similar examples of such applications based on the push notification technology include Prowl (http://www.prowlapp.com) and Pushover (https://pushover.net/). Typically, push notifications is a technique used by apps to alert smartphone owners on content updates, messages, and other events that users may want to be aware of. The objective of the Push Ambient Notification app is to notify users of surrounding information and events, such as traffic conditions, credit of restaurants, contact information of cinemas, etc., according to the location of the user in a certain district. To this end, the application should be capable of locating the user and receiving pushed ambient notifications. During the locating and receiving process, some quality attributes are expected to be kept. Users expect higher performance, such as quicker responding and receiving more information. Meanwhile, they desire a lower cost, e.g., lower energy cost. Consequently, system tasks should be performed according to several contexts, such as available memory, dump energy, etc. UNCERTAINTY IDENTIFICATION This section first introduces basic concepts and definitions and then presents the requirements model and the context model illustrated by the motivating example. Both the requirements uncertainty and the context uncertainty are identified and formally specified thereafter. Concepts and definitions The conceptual model is provided in Figure 6 for defining the entities and relations that should be considered in the following modeling process. 
We adopt four concepts of the KAOS method [14], namely goal, task, softgoal and decomposition. The entities shown with a dark background in Figure 6 (conceptual model) are newly proposed in this paper. Definition 1 (Atomic Context). An atomic context is a quantified context that does not consist of any sub-context. Definition 2 (Composed Context). A composed context is a context that consists of sub-contexts, each of which can be either a composed context or an atomic context. Definition 3 (Linguistic Variable). By a linguistic variable we mean a variable whose values are words or sentences in a natural or artificial language [21]. For instance, the bandwidth rate in our motivating example can be a linguistic variable. The crisp value refers to the monitored value of the bandwidth rate. A linguistic term is a value of the linguistic variable, which here can be low, mid or high, referring to low, mid and high bandwidth rate respectively. Each linguistic term is associated with a membership function that computes the membership degree of the crisp value with respect to that term. Figure 7 (linguistic terms and membership functions of bandwidth rate) depicts that, when the maximal bandwidth rate is 500 kbps, the membership degree of 400 kbps with the term high is 0.5, while that of 350 kbps is 0.25; similarly, the membership degree of 400 kbps with mid is 0.4 and that with low is 0. Three types of linguistic variables can be distinguished: monitored variable, configurable parameter and satisfaction degree. Their usage is compared in Table 1: a monitored variable describes a monitored atomic context (e.g., the bandwidth rate is high); a configurable parameter describes the extent to which a task is configured (e.g., the configured locating time is short); a satisfaction degree describes the extent to which a softgoal is satisfied (e.g., the desired satisfaction degree of time efficiency is high). In addition to the Decomposition relation, three other relations are defined: Update, Enable and Correlation. Definition 4 (Update). Update is a binary relation between atomic contexts and softgoals: the desired satisfaction degrees of softgoals are updated according to atomic contexts. Definition 5 (Enable). Enable is a binary relation between atomic contexts and tasks: the first kind of task adaptation is enabled based on atomic contexts. Definition 6 (Correlation). Correlation is a binary relation between tasks and softgoals. Positive correlation means that when the task parameter increases (decreases), the satisfaction degree of the related softgoal increases (decreases); negative correlation means that when the task parameter increases (decreases), the satisfaction degree decreases (increases). The actual satisfaction degrees of softgoals are derived from the correlated tasks. Correlation differs from the original Contribution relation in the KAOS method, because the effect of a Contribution relation (help or hurt) is fixed no matter how the task is performed. According to the above entities and relations, we formally define the entire model as a quadruple ℳ that combines the requirements model ℛ with the context model and the relations defined above. ℛ refers to the requirements model of a self-adaptive system and is defined as a quintuple that includes a set of goals G = {g1, …, gn} and a set of tasks T = {t1, …, tm}, together with the softgoals and the relations among these elements.
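To make Definition 3 and the Figure 7 example concrete, the following Python sketch evaluates assumed triangular and ramp membership functions for the bandwidth-rate variable. The breakpoints are chosen only so that the computed degrees reproduce the values quoted above; the actual shapes in Figure 7 may differ.

```python
# Membership functions for the bandwidth-rate linguistic variable of
# Definition 3. The breakpoints are assumed values chosen so that the computed
# degrees match the numbers quoted for Figure 7.

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

bandwidth_terms = {
    "low":  lambda x: tri(x, -1, 0, 250),                    # peak at 0 kbps
    "mid":  lambda x: tri(x, 0, 250, 500),                   # peak at 250 kbps
    "high": lambda x: min(1.0, max(0.0, (x - 300) / 200.0)), # ramp 300-500 kbps
}

for value in (350, 400):
    degrees = {term: round(mf(value), 2) for term, mf in bandwidth_terms.items()}
    print(value, degrees)
# 350 -> {'low': 0.0, 'mid': 0.6, 'high': 0.25}
# 400 -> {'low': 0.0, 'mid': 0.4, 'high': 0.5}
```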
Requirements model According to the motivating example in Section 3.3, the requirements model is presented in Figure 8 (requirements model of the motivating example). To achieve the root goal (g0), the system needs to achieve locating the user's position (g1) and receiving pushed notifications (g2). The tasks are associated with the softgoals through positive and negative Correlation relations, and the actual satisfaction degrees of the softgoals can be derived through inference over these relations. For example, to compute the actual satisfaction degree of sg1, inference should be conducted with t1, t2, t3 and t4. Context model To identify and model the relevant contexts, we adopt the context classification from the mobile computing domain [22]. According to the conceptual model and the requirements model, the contexts are identified within a hierarchical structure presented in Figure 9. In this paper, we do not attempt to exhaust all relevant contexts but focus on the computing context that can be monitored directly. Four atomic computing contexts can be captured from the motivating example: bandwidth rate (c1), network delay (c2), dump energy (c3) and available memory (c4). The relations between the requirements model and the context model, i.e., the Update and Enable relations, can be represented within a three-level topology structure depicted in Figure 10. For example, the desired satisfaction degree of sg1 should be derived through inference with c1, c2 and c3, and the first kind of adaptation of t3 should be achieved through inference with c2, c3 and c4. Uncertainty representation We categorize the uncertainties into three types: atomic context uncertainty, softgoal uncertainty and task uncertainty. For convenience of representation, we apply triangular membership functions to all the examples in this section. Atomic context uncertainty Atomic context uncertainty may be caused by outside noise, so the system may not be able to monitor the value of an atomic context accurately. To deal with this kind of uncertainty, engineers need not describe the precise value but can describe these contexts with linguistic terms, such as in short time, with low bandwidth, to a high satisfaction degree, etc. Thus, to quantitatively represent atomic context uncertainty, we map each linguistic term to a certain interval of monitored values, associated with a membership function. We formally define an atomic context as a tuple (v, x, {Ti}, {μi}, {di}), where v refers to the linguistic variable, x to the crisp value of the monitored variable, Ti to the i-th linguistic term, μi to the membership function of Ti, and di to the membership degree of x with respect to Ti. For example, the bandwidth rate (c1) can be represented in this form with the terms and membership functions shown in Figure 7. Softgoal uncertainty Softgoal uncertainty means that the extent to which the system satisfies a softgoal in the changing context is uncertain. We deal with this kind of uncertainty by describing the satisfaction degree with the terms low, mid and high. We formally define a softgoal analogously, with the crisp value being the value of its satisfaction degree. For instance, high time efficiency (sg1) can be represented in this form. Task uncertainty Task uncertainty means that the parametric or structural configurations with which the system operates in a changing context are uncertain. We represent parametric uncertainty by describing the configurable parameter with linguistic terms, and we formally define a task analogously, with the crisp value being the value of its configurable parameter. For example, t3 can be represented in this way: when the received data size is 350 KB, the membership degrees with the terms Small, Mid and Large data size are 0, 0.5 and 0.25 respectively (Figure 12, linguistic terms and membership functions of the parametric task t3).
For the structural uncertainty, we extend the conventional linguistic terms with the names of task options, such as Network and GPS. Assume that a goal has several alternative tasks. We formally define these alternatives together through the name of the goal, an indicator, and the alternative task options; as depicted in Figure 13 (linguistic terms and membership functions for structural tasks), the shapes of the membership functions are all congruent. Intuitively, the choice of an alternative task is crisp. To utilize fuzzy theory, we convert structural uncertainty into parametric uncertainty by assigning a certain configurable parameter to the indicator; the choice of an alternative task is then made according to the value of the indicator. For example, in Figure 13, we take the indicator to be the relative invoking time, and the membership functions represent the optimal invoking time of each alternative: the value of the indicator determines both which alternative task is chosen and its invoking time, while the corresponding membership degree expresses how close that invoking time is to the optimal one. In our motivating example, we formally define tasks t1 and t2 in this way, where the indicator refers to the invoking time and the membership functions represent the optimal invoking time of Network and GPS respectively, defined according to Figure 14. Consequently, when the indicator equals −7.5, the membership degrees with the optimal invoking time of Network and GPS are 0.75 and 0 respectively, and locating by network is chosen and invoked for 7.5 s. When the indicator equals 15, the membership degrees with the optimal invoking time of Network and GPS are 0 and 0.5 respectively, and locating by GPS is chosen and invoked for 15 s. FUZZY CONTROLLER DESIGN In this section, we present how to design the inference structure and the rules of the fuzzy controller. Choosing input and output We first choose appropriate inputs and outputs for the fuzzy controller. According to the research questions in Section 1, the feedforward-feedback control mechanism in Section 3.1 and the definition of relations in Section 4.1.2, the identified inputs and outputs are synthesized in Table 2. Designing inference structure According to the chosen inputs and outputs, the inference structure is presented in Figure 15, which distinguishes the desired satisfaction degree of a softgoal from its actual satisfaction degree. The F squares denote the fuzzification blocks and the DF squares denote the defuzzification blocks of a fuzzy controller. The inference machine incorporates three types of heuristic rules, presented in Table 3 (types of heuristic rules): UPD-rules are used for deriving the desired satisfaction degrees of softgoals, ENA-rules for achieving the first-fold adaptation of tasks, and COR-rules for computing the actual satisfaction degrees of softgoals. Designing rules According to Table 3, rules should be designed for each rule type. Assume that X = {x1, …, xn} is a set of inputs and Y = {y1, …, ym} is a set of outputs; a general rule of the rule set then takes the form "if x1 is A1 and … and xn is An, then y1 is B1 and … and ym is Bm", where the Ai and Bj are linguistic terms of the corresponding variables. We introduce an operator called Regulation, denoted by ℜ, which takes a set of inputs and a set of outputs and maps the elements in the sets to a set of rules.
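A possible in-memory encoding of such rules is sketched below in Python. The dictionary-based representation is an assumption made for illustration; the three sample rules mirror the instance rules presented in the following subsections.

```python
# Illustrative encoding of heuristic rules of the general form
# "if x1 is A1 and ... then y1 is B1 and ...". The dictionary-based
# representation is assumed; the sample rules follow the instances in the text.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Rule:
    antecedent: Dict[str, str]   # input linguistic variable -> linguistic term
    consequent: Dict[str, str]   # output linguistic variable -> linguistic term


rule_base: List[Rule] = [
    # UPD-rule: atomic contexts -> desired satisfaction degree of a softgoal.
    Rule({"BandwidthRate": "High", "NetworkDelay": "Low", "DumpEnergy": "High"},
         {"DesiredSatisfaction(high time efficiency)": "High"}),
    # ENA-rule: atomic contexts -> configurable parameter of a task.
    Rule({"BandwidthRate": "High", "NetworkDelay": "Low", "DumpEnergy": "High"},
         {"UpdateTimeInterval": "Short"}),
    # COR-rule: task configurations -> actual satisfaction degree of a softgoal.
    Rule({"LocatingOption": "GPS", "ReceivedDataSize": "Small",
          "UpdateTimeInterval": "Short"},
         {"ActualSatisfaction(high time efficiency)": "High"}),
]
```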
UPD-rules UPD-rules are derived by applying ℜ to a subset of the atomic contexts and a subset of the softgoals: an UPD-rule states that if the atomic contexts take certain linguistic terms, then the desired satisfaction degrees of the softgoals take certain terms. According to the UPD relations in Figure 10, an instance of the UPD-rules built on c1, c2, c3 and sg1 can be: If BandwidthRate is High AND NetworkDelay is Low AND DumpEnergy is High, then the DesiredSatisfactionDegree of high time efficiency is High. ENA-rules ENA-rules are derived by applying ℜ to a subset of the atomic contexts and a subset of the tasks: an ENA-rule states that if the atomic contexts take certain linguistic terms, then the configurable parameters of the tasks take certain terms. According to the ENA relations in Figure 10, an example of the ENA-rules built on c1, c2, c3 and t4 can be: If BandwidthRate is High AND NetworkDelay is Low AND DumpEnergy is High, then the UpdateTimeInterval is Short. COR-rules COR-rules are derived by applying ℜ to a subset of the tasks and a subset of the softgoals: a COR-rule states that if the tasks are configured with certain linguistic terms, then the actual satisfaction degrees of the softgoals take certain terms. According to the COR relations in the requirements model (Figure 8), an instance of the COR-rules built on t1 & t2, t3, t4 and sg1 can be: If LocatingOption is GPS AND ReceivedDataSize is Small AND UpdateTimeInterval is Short, then the ActualSatisfactionDegree of high time efficiency is High. CONTROL-BASED ADAPTATION This section describes how adaptation is achieved through the feedforward control and the feedback control. Fuzzification Fuzzification maps the crisp inputs onto membership degrees using membership functions. Figure 16 depicts an example of fuzzification with bell-shaped membership functions: the crisp value x0 is mapped to the membership degrees μ1, μ2 and μ3 by MF1, MF2 and MF3 respectively. According to Table 2, the inputs are the monitored values (mv) of the atomic contexts and the configurable parameters (cp) of the tasks; all of them need to be fuzzified. Fuzzification of the monitored atomic contexts is based on the process provided in Section 4.3.1, and fuzzification of the configurable parameters depends on the process presented in Section 4.3.3. Inference and defuzzification To demonstrate how the inference machine works, we take the inference with the ENA-rules built on c2, c3, c4 and t3 as an example. The related ENA relations are presented in Figure 10. For convenience of illustration, we simplify each input and output linguistic variable to two linguistic terms; the linguistic terms and membership functions are presented in Figure 17. Assume that the given ENA-rules are: Rule 1: If NetworkDelay is Short AND DumpEnergy is High AND AvailableMemory is Large, then ReceivingDataSize is Large. Rule 2: If NetworkDelay is Long AND DumpEnergy is Low AND AvailableMemory is Small, then ReceivingDataSize is Small. When the input vector is (x1, x2, x3), the inference process is as visualized in Figure 18. (μ1, μ2, μ3) is the vector of membership degrees of (x1, x2, x3) with the linguistic terms Short, High and Large, while (μ1′, μ2′, μ3′) is the vector of membership degrees of (x1, x2, x3) with the terms Long, Low and Small. From these, the two membership degrees of the output y with the linguistic terms Large and Small are obtained by combining the antecedent degrees of Rule 1 and Rule 2 according to the AND operator. We then defuzzify the membership degrees of y with the Centre of Gravity method [19] to obtain the crisp value of y. The same inference and defuzzification process can also be applied to the inference with the UPD-rules and the COR-rules. In this way, the desired satisfaction degrees of the softgoals, the task configurations and the actual satisfaction degrees of the softgoals are all derived.
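The following Python sketch works through this ENA example numerically under commonly used Mamdani conventions (minimum as the AND operator, clipping and maximum aggregation of the output sets, centre-of-gravity defuzzification) and with assumed membership functions; the toolbox settings and the actual shapes in Figure 17 may differ.

```python
# Worked ENA-rule inference: (NetworkDelay, DumpEnergy, AvailableMemory)
# -> ReceivingDataSize. Assumptions: min as AND, clipping plus max aggregation
# of output sets, centre-of-gravity defuzzification, and made-up triangular
# membership functions.

def tri(x, a, b, c):
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Assumed input membership functions (two terms per variable).
delay_short  = lambda x: tri(x, -1, 0, 200)     # ms
delay_long   = lambda x: tri(x, 0, 200, 201)
energy_high  = lambda x: tri(x, 0, 100, 101)    # %
energy_low   = lambda x: tri(x, -1, 0, 100)
memory_large = lambda x: tri(x, 0, 512, 513)    # MB
memory_small = lambda x: tri(x, -1, 0, 512)

# Assumed output membership functions for ReceivingDataSize (KB).
size_large = lambda y: tri(y, 0, 500, 501)
size_small = lambda y: tri(y, -1, 0, 500)

def infer_receiving_data_size(delay, energy, memory):
    # Rule firing strengths: AND realized as min over the antecedent degrees.
    w_large = min(delay_short(delay), energy_high(energy), memory_large(memory))
    w_small = min(delay_long(delay), energy_low(energy), memory_small(memory))
    # Aggregate the clipped output sets and defuzzify by centre of gravity.
    ys = list(range(0, 501))
    mu = [max(min(w_large, size_large(y)), min(w_small, size_small(y))) for y in ys]
    total = sum(mu)
    return sum(y * m for y, m in zip(ys, mu)) / total if total else 0.0

print(infer_receiving_data_size(delay=60, energy=80, memory=400))
```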
Readaptation Readaptation is conducted by the crisp controller. For a system with n softgoals, the individual deviation between the desired satisfaction degree Di and the actual satisfaction degree Ai of the i-th softgoal can be computed as Δi = Ai − Di. Assume that the threshold of the desired satisfaction deviation is ξ ∈ ℝ+. If Δi ≥ −ξ, the softgoal is rightly satisfied or over-satisfied; if Δi < −ξ, the softgoal is not fully satisfied and the task configurations should be readapted. There are many methods that can be utilized to compare the satisfaction deviations. We can compute the total deviation as the weighted sum Δ = Σi wi Δi, where wi is the weight of the i-th softgoal. If Δ ≥ −ξ, no readaptation of the task configurations is needed; if Δ < −ξ, readaptation should be performed. To achieve the readaptation, the configurable parameters can be modified within a certain range. We suggest adopting the simplex algorithm, a popular algorithm in the mathematical optimization field, with the total deviation Δ as the objective function. Due to space limitations, we do not elaborate on it here. EVALUATION To evaluate the proposed approach, we conduct a series of experiments with the MATLAB Fuzzy Toolbox. Experiment questions According to RQ1 and RQ2, we design four experiment questions: (Q1) Can the desired satisfaction degrees of NFRs be adapted to the changing context at runtime? (Q2) Can the parametric and the structural configurations of tasks be adapted to the changing context? (Q3) To what extent can the adapted tasks satisfy the NFRs? (Q4) When the satisfaction deviation is intolerable, can the system readapt the tasks to achieve the desired deviation? Experiment design The settings of the atomic contexts, the configurable parameters and the satisfaction degrees in our motivating example are provided in Table 4 and Table 5. The curves of the atomic contexts with Gaussian white noise over 200 time steps are presented in Figure 19. In total, we design 81 UPD-rules, 81 ENA-rules and 45 COR-rules; each rule has the same weight of 1. Sample rules are presented in Figure 20. Q1: Can the desired satisfaction degrees of NFRs be adapted to the changing context at runtime? Based on inference with the 81 UPD-rules, the adaptation of the desired satisfaction degrees of NFRs is presented in Figure 21 (desired satisfaction degrees at runtime). The three curves agree with common-sense expectations, which indicates that the knowledge represented by the rules is reasonable; the oscillations of the curves are caused by the attached noise. Energy efficiency is related to system cost, while time efficiency and information efficiency are related to system performance, so the trade-off among the three NFRs should be considered at runtime. The cross-point at 50 time steps indicates that the desired degrees are balanced among the NFRs at that time. When time is less than 50 time steps, the desired satisfaction degrees of high time efficiency and high information efficiency are higher than that of high energy efficiency, because the dump energy is still abundant. However, as both the dump energy and the available memory decrease, high energy efficiency becomes more important than the other NFRs. We attribute this to the fact that keeping a long battery life is more desired; the system has to degrade its performance in exchange for a lower system cost. Q2: Can the parametric and the structural configurations of tasks be adapted to the changing context? Figure 22 (a) presents the dynamically adapted configurations of tasks. As the time steps move forward, the received data size decreases. In the beginning, the update time interval is short, and it then increases slightly along the time axis.
For the locating option, we find that before 90 time steps the value of the locating option is either a positive or a negative number. Positive values indicate that GPS is chosen, while negative values indicate that the network is chosen. After 90 time steps, the network is always chosen for locating users, because using GPS hurts high energy efficiency. Q3: To what extent can the adapted tasks satisfy the NFRs? This question can be answered by computing the actual satisfaction degrees of NFRs. The results are depicted in Figure 23 (a). The actual satisfaction degrees of NFRs are 0.5 most of the time; this may be caused by the trade-off. We set the given threshold of the desired satisfaction deviation to 0.1. Figure 24 (a) depicts that 57% of the individual deviations are intolerable while 43% are acceptable. Thus, readaptation should be conducted on the task configurations to satisfy the threshold. Q4: When the satisfaction deviation is intolerable, can the system readapt the tasks to achieve the desired deviation? Figure 22 (b) presents the configurations after readaptation. Compared with Figure 22 (a), the invoking time of the network decreases, the received data size decreases, and the update time interval increases. Figure 23 (b) depicts that from 100 time steps the actual satisfaction degree of high information efficiency decreases while that of high energy efficiency increases. Figure 24 (b) depicts that after readaptation 92% of the individual satisfaction deviations are acceptable, which shows that the task configurations are well controlled through the feedback control loop. Feedforward-feedback control The above results show that the proposed approach is effective in achieving adaptation for SAS. It supports the adaptation of both NFRs and system tasks. Indeed, we suggest that it is better for an SAS to first adapt the NFRs according to context changes at runtime, because task adaptation always needs to consider the trade-off among the NFRs. This is also the reason why we propose to utilize feedforward control before feedback control. Fuzzy inference Though fuzzy control is widely applied in industry [23], it is still rarely utilized in the RE field. We use various linguistic terms to represent uncertainties, build inference rules based on the linguistic terms, and perform fuzzy inference with the rules. Thus, our approach supports the notion of inferring under uncertainty. Different from the label propagation algorithm used in [24], fuzzy inference is a quantitative approach. In addition, the results of the inference accord with our cognition and perception. Threats to validity Expert knowledge. Rules are designed based on expert knowledge. Sometimes a human expert cannot observe all system behaviors in the changing context. Thus, to use the proposed approach, engineers should first capture abundant and valid domain knowledge. Secondly, a rule can be assigned a weight to represent the expert's confidence or the importance of the rule, so sensitivity analysis may need to be conducted with different assigned weights. Excessive rules. For an atomic context, when the number of linguistic terms increases, the total number of rules increases rapidly. In this situation, flexibility is needed when designing rules. In our experiment, we map each linguistic term to a positive number in the interval [1,3]; rules are then designed by computing with the mapped values. Conflicting rules. The rules designed in the experiment are integrated with AND operators. However, rules can also be integrated with OR operators.
Under this circumstance, conflicts among rules may occur. Thus, when engineers intend to design more complex rules, conflicts need to be detected and eliminated before inference. RELATED WORK Dealing with uncertainty. The concept of uncertainty was described in detail in pioneering works [6][7][8]. Sawyer et al. [8] provided a research agenda for dealing with environmental uncertainty. Ramirez et al. [7] introduced a definition and taxonomy of uncertainty in the context of dynamically adaptive systems and identified existing techniques for mitigating different types of uncertainty. More recently, in the roadmap paper [6], Esfahani and Malek characterized the sources of uncertainty in SAS and discussed the state of the art for dealing with uncertainty. Baresi et al. [9] proposed FLAGS for mitigating requirements uncertainty by extending the goal model with adaptive goals; with this approach, requirements can be partially satisfied and the system gains a degree of fault tolerance. Whittle et al. [10] proposed RELAX, a formal requirements specification language, for specifying uncertain requirements in SAS. With RELAX, the boundaries of adaptive behavior can be established. In their follow-up work, they further addressed the development of requirements for dynamically adaptive systems when identifying uncertainty factors in the environment. FUSION was proposed by Elkhodary et al. [11]; the authors used online learning to mitigate the uncertainty associated with changes in context and tune system behaviors to unanticipated changes. Esfahani et al. [12] proposed POISED for improving the quality attributes and achieving a globally optimal configuration of a system by assessing both the positive and negative consequences of context uncertainty. In our approach, we consider both context uncertainty and requirements uncertainty. We quantitatively represent uncertainties with linguistic variables and membership functions; rules are built by integrating these uncertainties, and adaptation is achieved through inference with them. Building control mechanisms. Brun et al. [17] explored and elaborated how feedback loops can be utilized in engineering self-adaptive systems, especially the MAPE loop [26]. MAPE is a widely used feedback loop for building adaptation mechanisms in SAS. Wang et al. [24] focused on the monitoring and analysis aspects; they proposed a framework for diagnosing failures of software requirements by transforming the diagnostic problem into a propositional satisfiability problem. In [27], Wang and Mylopoulos proposed an autonomic architecture consisting of monitoring, diagnosing, reconfiguration and execution components. Vromant et al. [28] showed how MAPE computations in multiple loops can coordinate with one another. Souza and Mylopoulos [15] argued for a control-theoretic perspective on adaptive systems and provided a research agenda for applying control theory to the design of adaptive systems. In our research, we integrate feedforward control and feedback control; the approach benefits from both control types and supports the dynamic adaptation of both NFRs and system configurations. CONCLUSION AND FUTURE WORK In this paper, we proposed a model-based fuzzy control approach to achieve adaptation for self-adaptive systems under context uncertainty. Our approach is based on control theory and fuzzy set theory. The adaptation mechanism underpinning the approach is built with feedforward-feedback control loops.
To integrate the requirements with the context, we introduced three newly defined relations. To identify and specify the requirements uncertainty and the context uncertainty, we utilized linguistic terms and membership functions. The inference structure of the fuzzy controller is designed according to the defined relations. Heuristic rules are built with expert knowledge. Adaptation decisions are derived through fuzzification, inference, defuzzification and readaptation. We evaluated our approach through a series of simulation experiments. The results showed that our approach is effective to support dynamic adaptation of both satisfaction degrees of NFRs and parametric or structural configurations of tasks. In addition, adaptation of tasks is achieved through trade-off among NFRs. The results also depicted that the satisfaction deviations are well controlled through the feedback loop when the deviations are diagnosed intolerable. The key benefits and contributions of our approach to engineering self-adaptive systems are that the feedforward-feedback control mechanism can serve as a flexible adaptation mechanism and the fuzzy controller can perform reasonable inference with requirements uncertainty and context uncertainty. Meanwhile, our approach also provides ideas for representing uncertainty, trade-off among NFRs, reasoning with uncertainty and model-based selfadaptation and evolution. Our future work will focus on modeling and specifying requirements uncertainty and context uncertainty. We intend to develop tools to support quantitative modeling and reasoning, and investigate the performance of the approach with different expert knowledge and different expert confidence. We will also explore how other control mechanisms can be used in the context of selfadaptive systems, e.g., fuzzy adaptive control. These ideas motivate us to present more exciting research results to the community. ACKNOWLEDGMENTS This research is supported by the National Natural Science Foundation of China under Grant Nos 61232015 and 91318301.
APPLICATION OF DIFFERENT MANAGEMENT MODELS IN PRIVATE AND PUBLIC ENTERPRISES THROUGH THE PROCESS OF PROFESSIONALIZATION AT THE LEVEL OF LOCAL SELF-GOVERNMENT City economic structures on the territory of the Republic of Serbia should adapt and accelerate the process of balanced development between the urban and rural parts of the city on modern principles of management. On the other hand, the role of the process of professionalization of company management, as a factor of modernization at the level of local private and public companies should be explored, starting from the position that management is not only an economic category dominated by rational, financial, market principles and activities but also a sociological category, primarily reflected in the professionalization and democratization of labor relations. For the purpose of analysis, the forms of changes of the following should be considered: a) development strategies, b) production/property relations, c) changes in existing management models. INTRODUCTION P rofessionalization is a very complex process that takes place simultaneously at several levels of social and economic development. From a systemic point of view, it is a fundamental process of the social division of labor, and from an institutional point of view, it is the legitimization of the privileges of the profession; from a social point of view, it is a source of social stratification. Sociological theories of professions treat professionalization in most cases from the institutional point of view, equating the process of professionalization with the institutionalization of social position, status and privileges of certain groups of professions. Consideration of the degree of professionalization of management structures at the local government level aims to encourage the development and implementation of new functional management models that should include a specific angle of observation and modeling of the development strategy concept on the concept of management professionalization as a form of management system modernization in public and private sector on the local level in Serbia. The paper starts from the assumption that a different approach to changes and development in terms of recomposing the structure of relations in the field of management and executive work, provides an opportunity to identify or encourage the needs for new potential, functional, institutional and non-institutional solutions in the management process of socio-economic development on the local level. The analysis included the area of a local self-government of southern Serbia, an area that represents a true representative of the Serbian average, suitable for measuring potential functional management models in companies, both for selected underdeveloped area of Serbia and for other Serbian companies, which are more developed than the Serbian average, and they can use easily visible research experiences to select their strategic development goals, especially since similar research for the territory of Serbia has not been undertaken so far. PROFESSIONALISM -CONCEPTUAL DEFINITION The definitions of professional activity that we find today in sociology, mostly follow Parsons' basic understanding of the profession as a normatively universal and functionally specific activity. The most frequently mentioned and most criticized is Greenwood's definition of the profession. In his opinion, professional activity is an extremely complex phenomenon in society, which contains the following elements: 1. 
systemically grounded theory, 2. professional authority, 3. social sanctions, 4. code of ethics and 5. specific subculture (E. Greenwood, according to Rus and Arzensek, 1984). Barber's definition differs from Greenwood's in that it is somewhat more selective and also in that it emphasizes less the autonomy or authority of professional activity. Different definitions of professions, which we come across in the professional literature, for example, in E. A. Krause (Krause, 1971), are quite similar to Greenwoods. They differ from it in that they place less emphasis on professional autonomy and a monopoly on expertise. Sociologically speaking, the difference between craft and professional activity is that the former is focused on continuity and professional activity on innovation and creativity. Social changes as an environment are not inherent in craft activities, while professional activities take place and are maintained on constant changes, criticism and introduction of innovations. Theory cannot be replaced by experience, but only methodologically-empirically supplemented and refined. Any compensation of experience with theory is non-functional and methodologically wrong and thus poses a danger to the quality of the professional activity. PREREQUISITES AND LIMITATIONS ON THE PROFESSIONALIZATION OF MANAGEMENT IN LOCAL PRIVATE AND PUBLIC ENTERPRISES The focus of the paper is the role of the process of professionalization of management in enterprises as a factor of modernization and development. The relations that are analyzed are monitored through the analysis of the forms of changes of a) development strategy, b) production/ property relations and c) management models. The starting point is the assumption that the speed of transitional changes in post-socialist societies, which have found themselves in the process of restoration of the socio-economic capitalist system of various historical forms, from neoconservative to neoliberal, will depend on the degree and form of professionalization of management in the management process at the levels of work organizations. In order to discover the preconditions, basic obstacles and limitations of professionalization of management in local private and public enterprises, the theoretical starting point are two criteria that define the professional activity and they are professional activities and type of knowledge of the professional occupation. There are different needs and consequences that arise from the ownership relationship in the process of professionalization at work, so a different degree of importance is attached to certain professional activities and the type of knowledge of professional occu-pations. The results of research 2 conducted in the area of local self-government 3 which represents the Serbian average showed that in the private sector, professionalism in work, above all, is seen in adherence to the professional code of ethics and the first place in the public sector, professional approach has as an effect rationalization of work, productivity and democracy and, to a much lesser extent, that it is a professional activity that should adhere to a professional code of ethics. Confirmation of the different priorities in understanding professionalization at work in the public and private sectors is in declaring the type of knowledge of professional occupations. 
In the private sector, the development of new professional insights is in the first place and the second place is shared by: non-routine approach in the application of knowledge in solving professional problems and a professional approach as an effect of rationalization of work, productivity and democracy. In the public sector, when it comes to the type of knowledge of professional occupations, the answer is in the first place: professional approach has the effect of work rationalization, productivity and democracy; secondly, non-routine approach in applying knowledge in solving professional problems, and thirdly, what is in the first place in the private sector -developing new professional insights. From a comparative analysis of the answers obtained by both groups can be concluded that they reflect the level of priority of business activities in order to achieve business goals or successful business. In the private sector, new, fresh ideas and new products are important and a more innovative and creative approach to the work process is sought, which will enable a better market position and competitiveness. Unlike the organization and business in the public sector, which suffers from problems related to non-rationality, inefficiency, inability to make independent decisions by directors and chiefs, non-market orientation because the policy maker in public and public utility companies and their financier, is a local government unit or city which manages public companies through its executive and legislative bodies. RELATIONSHIP BETWEEN PUBLIC AND PRIVATE SECTOR REPRESENTATIVES TO THE FOUR APPROACHES TO THE STRATEGIC CONCEPT Starting from the position that different strategic approaches differently emphasize the advantages and disadvantages of the offered alternatives for problems solving, the theoretical starting point in the paper are four approaches, which start from the fact that we should first hear the pros and cons arguments, from several offered alternatives, and then approach the problem and tension solving. There are four general approaches to identifying and interpreting strategic tensions or dilemmas. It is viewed: 1. As a riddle. A riddle is a challenging problem with one optimal solution (a "riddle answer"); 2. As a dilemma. The dilemma is a troubling problem with two possible solutions. 3. As a compensatory relation. Compensatory relation (or trade-off) is a problem situation in which there are many possible solutions, each of which represents a different balance of conflicting pressures, where more than one always means the same amount less than the other, i.e. what is again for one player is a loss for another; 4. As a paradox. A paradox is a situation in which two, seemingly contradictory or even mutually exclusive factors (A and B) appear at the same time as true and valid. The paradox has no real solution because there is no way for the two opposites to logically integrate into a consistent understanding of the problem (Ocić, 2014). Based on the stated theoretical starting point, the research results *4 show that the strategy is mostly understood in both the private and public sectors as a dilemma, i.e. as a disturbing problem with two possible solutions, where each option has its advantages and disadvantages but is not unequivocally superior to the other. It should be emphasized that the results showed a high percentage of public sector representatives trying to give their views on the understanding of the strategy, i.e. 
that it is an instrument for the successful implementation of strategic goals and activities, almost equal to the choice of good business decisions that contribute to market competitiveness, profitability and financial gain. The positive attitude of private sector representatives towards the strategic approach is seen primarily in the view that it is very important as a condition for development, because it provides an instrument for a rational approach to business and for defining clear goals through setting a clear vision of enterprise development. The importance of the strategy in the private sector is also recognized in the sustainability of the family business, owing to medium-term, sound planning of material and all other resources, as well as in encouraging greater motivation of employees in the work process. The new paradigm raises several objections to the neoclassical analysis of economic development: a) the absence of historical specification, b) the absence of social analysis, c) disregard for the importance of the structure of space in the development process (Ocić, 2014). For the thematic framework of the paper, the objection related to the absence of social analysis is especially important. Regarding this shortcoming, the neoclassical paradigm starts from the assumption that the development and organization of the economy take place in an implicitly harmonious social order in which there are no internal structural contradictions (Mitrović, 2014). THE NEED FOR CHANGES IN MANAGEMENT AND DECISION MAKING The need for change in management and decision-making is the result of both the practical needs imposed on the business of modern enterprises and the scientific actualization of the form and degree of participation of employees in management processes, starting from the assumption that they are one of the partners in the working process. An integral part of the consideration of people management is the very concept of management. Management, according to the American social worker and management consultant Mary Parker Follett, is "the ability to get things done through people". Management, as a process, consists, according to the vast majority of authors, of four phases: planning, organizing, leading and control (Janićijević, 2008). Based on the results of the already mentioned research (Vukosavljević Pavlović, V. (2020), "Socio-ekonomski aspekti profesionalizacije menadžmenta kao oblik modernizacije sistema upravljanja u preduzećima na primeru grada Leskovca", pp. 201-205), the management process is not only an economic category dominated by rational, financial and market principles and activities but also a sociological one, which is primarily reflected in the professionalization and democratization of work relations. To understand the essence of management and its different forms, different management models are taken into account; at first glance, it is noticeable that an essential difference is made between individual models and the attitudes about their acceptability or unacceptability in the business of Serbian enterprises at the local government level. The analysis of the research results related to attitudes and opinions towards different models of management shows that the liberal-democratic model of management is, for the most part, characterized as positive.
The advantage of this concept is seen in the clearly defined responsibility, the ability of employees to participate in the decision-making process; it emphasizes its adaptation to change and respect for expertise and knowledge. A strong view of the democratic model of management is that it has been more acceptable in the past than in modern business conditions. The assumption is that it arises from the ideological identification with the former socio-economic concept of real-socialism during the SFR Yugoslavia and the model of workers' self-government, which proved to be not so successful and scientific and social controversies are still going on about the reasons for failure. The socio-democratic model is positively characterized because it respects the interests and needs of all employees, develops a sense of teamwork, provides the opportunity for employees to make decisions according to the position in the company, cares about people in a social sense. The only limitation of this model is that it is inapplicable in small firms (there is little room for greater employee participation in management because technical issues are mostly addressed). The fourth management model that is the subject of analysis is authoritarian. Its positive characteristics are recognized in the greater discipline in running the company and negative in the wrong management of the company, causing conflicts, in the blind execution of work orders by subordinates (the consequences are lack of new ideas, a characteristic model in small firms' management where the owners are managers at the same time). The attitude towards the authoritarian model of management in the public sector, in the largest percentage (80%) is positive, primarily because the decision is made by one person on behalf of all. A positive attitude towards the authoritarian model of management by respondents from the public sector can be understood if it is analyzed from the angle of socio-economic and political conditions in which public companies operate in Serbia. Directors, managers of public and public utility companies, do not have sufficient independence in management, so they may see a solution in the authoritarian model of management. The reasons for this positive attitude of public sector representatives, according to the authoritarian model of management can be found in the results of the survey, which aimed to examine, among other things, the characteristics of corporate governance in public enterprises in the Republic of Serbia and the business results showed to be non-efficient or insufficiently efficient in relation to the available resources of public enterprises; the work of public enterprises is very often politicized and there is no professional management that is socially responsible (Jokić, 2015). CONCLUSION In finding optimal solutions for a successful business, for private and public enterprises operating on the territory of local governments through efficient management models in transitional market economic conditions, first, further economic development should be harmonized with the needs of post-socialist society in transition which requires a different concept of development and management model for development, both at the level of society and at the level of work organizations. 
In encouraging and developing the professionalization of managers through the democratization of relations by involving employees in management processes, one should start with separate management modalities in private and public companies, taking into account various parameters, such as: ownership and management model regardless of the formal-legal, the management framework is the same for both private and public companies in Serbian legislation. There are differences in business policy makers and sources and methods of financing. Second: in finding a successful management model in companies through the professionalization of management and democratization of labor relations must be taken into account, in addition to ownership, applied management model and differences in priorities in understanding the concept of professionalization should not be an obstacle, but on the contrary, the ability to find the best management models.
Recent Development of Two Alternative Gases to SF 6 for High Voltage Electrical Power Applications † : For many years, SF 6 has been the preferred dielectric medium in electrical power applications, particularly in high voltage gas-insulated equipment. However, with the recognition that SF 6 has an extremely long atmospheric lifetime and very high global warming potential, governments have pursued emission reductions from gas-filled equipment. The electrical power industry has responded to this environmental challenge applying SF 6 -free technologies to an expanding range of applications which have traditionally used SF 6 , including gas-insulated switchgear, gas-insulated circuit breakers and gas-insulated lines or bus bars. Some of these SF 6 -free solutions include gas mixtures containing fluorinated compounds that have low climate impact, among them, a fluoroni-trile and a fluoroketone developed as 3M™ Novec™ 4710 Insulating Gas and 3M™ Novec™ 5110 Insulating Gas, respectively. Both fluoronitrile and fluoroketone mixtures are successfully used in gas-insulated equipment currently operating on the grid where they reduce greenhouse gas emissions by more than 99% versus SF 6 . This paper reviews these leading components of alternative-gas mixtures with updates on the performance, safety and environmental profiles in electrical power applications. Introduction Sulfur hexafluoride, SF 6 , has been a critical component in high voltage applications for several decades with the installed base of gas-filled equipment continuing to grow.Its combination of chemical, electrical, and physical properties has made SF 6 the preferred dielectric medium in gas-insulated switchgear (GIS), gas circuit breakers (GCB) and gas insulated lines (GIL).While much of the equipment operating on the electrical grid today depends upon the use of SF 6 , the industry has been searching for an alternative due to environmental concerns over the properties of this highly stable insulating gas.A long atmospheric lifetime of 3200 years results in a global warming potential (GWP) for SF 6 of 23,500, making it the most potent greenhouse gas identified to date. 
Identification of viable alternatives to SF 6 is complicated by the unique combination of properties required in dielectric applications.Unfortunately, the very properties that make SF 6 an ideal insulating gas, namely chemical inertness, are the same properties that make it exceptionally long lived in the atmosphere.Therefore, any replacement of SF 6 as an insulating gas must implicitly have some form of reactivity to facilitate degradation in the atmosphere and overcome the environmental concerns.The materials also need to be nonflammable and low enough in toxicity to allow for safe handling using practices similar to those currently used within the industry.Alternatives certainly need to have very high dielectric strength, providing performance as close to SF 6 as possible.Since the gas-filled equipment will be used in a variety of conditions, the materials must remain gaseous over the expected operating temperatures of these systems.The dielectric medium must also be stable over the working life of this equipment without contributing to corrosion or other adverse effects on the device.Most importantly, to be sustainable alternatives, new compounds need to have acceptable combinations of environmental properties, including no ozone depletion potential and significantly reduce the greenhouse gas emissions from these applications compared to SF 6 , since this is the principal reason for transitioning to new technology. Two compounds, a fluoronitrile and a fluoroketone, were found to combine the requisite properties for electric power applications.They both have been shown to function as a key dielectric component in insulating gas mixtures while providing significantly lower climate impact.As a result, the electric power industry has begun implementing SF 6 -alternative gas mixtures based upon these compounds over the last several years [1][2][3][4].The fluoronitrile and fluoroketone are recognized within the electric power industry as 3M™ Novec™ 4710 Insulating Gas and 3M™ Novec™ 5110 Insulating Gas, respectively [5].In some publications they are referred to as C4-FN or C5-FK or even simply C4 or C5.For the duration of this paper, these compounds will be identified as Novec 5110 gas and Novec 4710 gas.This paper is a review of these components in SF 6 -alternative gas mixtures covering material properties and performance in dielectric applications as well as safety and environmental considerations. Properties of Pure Novec Insulating Gases The Novec Insulating Gases exhibit several physical properties that are similar to SF 6 .They are highly fluorinated, nonflammable, high density gases with extremely low freezing points and excellent dielectric properties.At any given pressure, the pure Novec gases display dielectric breakdown voltages that are superior to that of SF 6 as shown in Figure 1.Table 1 provides a summary of these key physical properties.It also lists the environmental attributes of each gas.Like SF 6 , the Novec Insulating Gases are non-ozone depleting since they do not affect stratospheric ozone leading to an ozone depletion potential (ODP) of zero.However, their measurably shorter atmospheric lifetimes lead to significantly lower GWPs.As will be shown below, the shorter atmospheric lifetimes are also the key attribute that enables substantial reductions in the overall greenhouse gas (GHG) emissions resulting from gas-insulated equipment using these alternatives. 
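As a rough illustration of what the GWP difference means for emissions, the following calculation converts a hypothetical annual gas leakage into CO2-equivalent emissions. The leaked mass and the effective GWP assumed for an alternative mixture are illustrative placeholders, chosen only to be consistent with the more-than-99% reduction mentioned above.

```python
# CO2-equivalent emissions from insulating-gas leakage. The GWP of SF6 is from
# the text (23,500); the leaked mass and the effective GWP assumed for an
# alternative gas mixture are illustrative placeholders, not measured values.

GWP_SF6 = 23_500
GWP_MIXTURE = 210        # assumed effective GWP of an alternative gas mixture
leak_kg_per_year = 1.0   # assumed annual leakage from one gas compartment

def co2_equivalent_tonnes(leak_kg, gwp):
    """CO2-equivalent emissions in tonnes: leaked mass times GWP."""
    return leak_kg * gwp / 1000.0

sf6 = co2_equivalent_tonnes(leak_kg_per_year, GWP_SF6)       # 23.5 t CO2e
mix = co2_equivalent_tonnes(leak_kg_per_year, GWP_MIXTURE)   # 0.21 t CO2e
print(f"SF6: {sf6:.2f} t CO2e, mixture: {mix:.2f} t CO2e, "
      f"reduction: {100 * (1 - mix / sf6):.1f}%")            # ~99.1%
```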
Properties of Gas Mixtures Due to their higher boiling points and corresponding lower vapor pressures, the Novec gases are used in gaseous mixtures rather than as pure materials. Dilution in gaseous mixtures allows the equipment to operate at temperatures well below the boiling points of these materials without condensation. Once gases form a homogeneous mixture, they do not physically separate unless liquefied by cooling below the condensation temperature or compressed to very high pressures. Similarly, although gas density will vary with height in a vertical column, the mixture does not separate over time, with the higher-molecular-weight components concentrating at lower elevations. Figure 2 shows the change in gas density as a function of height in a column of gas for several gases. The pressure exerted by the column of gas above any point creates a greater density compared to higher elevations. Thus, the density of a gas decreases at higher elevations. Larger variations occur as the molecular weight of the gas increases since the greater mass produces higher pressures at the lower elevations. However, the concentrations of individual components in a gas mixture do not change with height. The pressure exerted by the column of gas mixture above a molecule of any component is the same, resulting from the density of the gas mixture above it rather than any individual pure gas. As a result, all components of a mixture are exposed to the same gravitational force and pressure. Therefore, no driving force is created to cause a separation. A similar conclusion was reached in the 1982 EPRI Report EL-2620 [6]: "In the absence of condensation, a gas mixture will not separate into its component gases over a short or long period of time even when the molecular weights of the component gases are markedly different." Accordingly, gas separation has not been observed experimentally [5][6][7]. For example, a gas mixture containing Novec 4710 gas and CO2 was stored in a 2-m vertical tube at −15 °C for 6 months with no change in composition detected over the height of the tube [7]. Table 2 shows a comparison of representative gas mixtures that are used in high voltage systems relative to pure SF6. The dielectric breakdown voltage of a gas mixture varies with the concentration of Novec gas as well as the total pressure of the mixture. As shown in Figure 3, it is possible to compensate for the lower dielectric strength of a dilute gas mixture by increasing the total gas pressure used within the system. In fact, that is the strategy often employed by manufacturers of gas-insulated equipment. Numerous systems using Novec gas mixtures are currently operating on the grid, including installations of GIS, GCB and GIL. These systems have been designed to deliver performance comparable to similarly rated SF6 equipment [8,9].
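To make the gas-column argument above concrete, the short sketch below uses the isothermal barometric relation to estimate how much the density of a pure gas falls over a 2 m column (the height used in the storage test cited from [7]). It is an illustrative calculation only, not taken from the paper; the molar masses are standard values and the "CO2-rich mixture" entry is a hypothetical average molar mass, not the composition of any commercial mixture.

```python
# Minimal sketch (not from the paper): ideal-gas barometric estimate of how gas
# density varies with height in an isothermal column, illustrating the text's
# point that heavier gases show a larger top-to-bottom density variation while a
# well-mixed gas behaves according to its single average molar mass.
import math

R = 8.314    # J/(mol K), universal gas constant
G = 9.81     # m/s^2, gravitational acceleration
T = 293.15   # K, assumed column temperature (20 C)

def relative_density(height_m: float, molar_mass_g_mol: float) -> float:
    """Density at `height_m` relative to the column bottom, rho(h)/rho(0)."""
    m_kg = molar_mass_g_mol / 1000.0
    return math.exp(-m_kg * G * height_m / (R * T))

# Molar masses in g/mol; the mixture value is an illustrative average only.
gases = {"N2": 28.0, "CO2": 44.0, "SF6": 146.1, "CO2-rich mixture (example)": 50.0}

for name, molar_mass in gases.items():
    drop_pct = (1.0 - relative_density(2.0, molar_mass)) * 100.0  # over a 2 m column
    print(f"{name:28s} density decrease over 2 m: {drop_pct:.4f} %")
```

Even for SF6 the estimated change over 2 m is a small fraction of a percent, consistent with the statement that no compositional separation was detected experimentally.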
Safety Considerations A key aspect for use of any SF6-alternative technology is the ability to use it safely within gas-filled equipment. Personnel may come into contact with an insulating gas through handling during initial filling and maintenance of the equipment, leakage during normal operation and when decommissioning the system. The safety of Novec Insulating Gases has been evaluated through a series of toxicological studies [10,11]. These 3M-sponsored studies were approved by the laboratories' Institutional Animal Care and Use Committees and animal care complied with all applicable national and local regulations. All toxicological studies that followed OECD guidelines (Organization for Economic Co-operation and Development) were performed under GLP conditions (Good Laboratory Practice). Both gases demonstrated low acute toxicity hazard as reflected in their Globally Harmonized System (GHS) classification of Category 4 or higher. Both Novec gases also presented a low hazard profile in repeat-dose inhalation toxicity studies, where irritant-associated effects were noted in tissues at the portal of entry (nose and mouth), the respiratory and gastrointestinal tracts, at the highest exposure concentrations. In addition, both gases have demonstrated no genotoxicity potential: Novec 4710 gas was found to be not mutagenic in both in vitro and in vivo assays, and Novec 5110 gas was shown to be not mutagenic in an in vitro genotoxicity assay. While Novec 5110 gas has not yet been evaluated in an in vivo study, the next nearest homologue (an analogous fluoroketone with chain length one carbon longer) has been shown to be not mutagenic through in vivo tests. Thus, based on all available data, the weight of evidence indicates that both Novec Insulating Gases would not be classified as CMR hazards (carcinogenicity, mutagenicity, reproductive toxicity).
As an additional step, the assessment of the available data and associated hazard classification recommendation for Novec 4710 gas was confirmed and validated in an independent, third-party assessment [12]. This technical assessment confirmed that "Based on the available data, no self-classification for the CMR hazard categories is currently warranted or anticipated in the future." A summary of the key results for both Novec gases is shown in Table 3. Considering the results from the full range of studies, the 3M Medical Department established occupational exposure limits (OEL) of 65 ppm and 225 ppmv (8-h time weighted averages) for Novec 4710 gas and Novec 5110 gas, respectively. Small releases of insulating gases can occur during filling, maintenance, and decommissioning operations when gastight connections are sealed and unsealed. However, airborne concentrations measured during gas transfer operations are normally less than 10 ppmv [5]. Workplace airborne SF6 concentrations observed in indoor gas-insulated switchgear applications are typically below 1 ppmv [13]. As a result, the OELs stated above provide a sufficient margin of safety in these applications and the observed airborne concentrations of Novec Insulating Gases described above are well below the action level of 1/2 the OEL as defined by the US Occupational Safety and Health Administration (OSHA). On this basis, risk analyses have established that gas mixtures containing the Novec Insulating Gases are safe to handle in gas-filled equipment under all expected operational conditions [1,2]. Independent groups have also conducted toxicological tests [14][15][16] with Novec 4710 gas using non-OECD test protocols. Variation in test parameters such as the animal species, exposure time and the condition of the gas will provide significantly different results. As a result, OECD and international standards such as GHS have standardized hazard testing criteria, requiring test methods that are scientifically sound and validated according to international procedures in order to provide information relevant to a human health assessment while minimizing the need for animal testing. The results from tests conducted using non-standard protocols have led to some confusion over the toxicological profile for the Novec gases. The data reported by Li and colleagues [14] for acute inhalation tests conducted in the rat over a 4-h time interval found an LC50 value of 15,000-20,000 ppmv, which is consistent with the LC50 value discussed above. Additional tests were conducted at high concentrations over a time interval of 24-h. Such an exceptionally long test period is far beyond the 4-h exposure required for acute inhalation testing that is used for GHS classification of a chemical and does not aid in performing a human health risk assessment. The alleged effects on various organ systems observed in the 24-h exposure were actually a result of pulmonary edema-induced hypoxia (insufficient oxygen reaching the internal organs) and not a direct response of the test material on these organs. The 3M-sponsored, 28-day inhalation toxicity study referenced above found the respiratory tract to be the target organ, exhibiting signs of an irritant-like effect, but no histopathological changes were noted in other organ systems. Overall, the results in the paper are consistent with the LC50 values published to date and do not contradict the recommended 65 ppmv occupational exposure limit.
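The headroom implied by the numbers quoted above can be checked with a few lines of arithmetic. The sketch below is purely illustrative: it takes the stated OELs (65 and 225 ppmv), the action level of half the OEL, and the reported airborne concentrations (<10 ppmv during gas transfer, <1 ppmv in indoor switchgear rooms) directly from the text.

```python
# Quick illustrative check of the margin between reported airborne concentrations
# and the stated occupational exposure limits (OELs); all values are taken from
# the text, not independently measured here.
oels_ppmv = {"Novec 4710": 65.0, "Novec 5110": 225.0}
observed_ppmv = {"gas transfer operations": 10.0, "indoor switchgear rooms": 1.0}

for gas, oel in oels_ppmv.items():
    action_level = oel / 2.0  # OSHA action level of 1/2 the OEL, as cited
    for scenario, conc in observed_ppmv.items():
        headroom = action_level / conc
        print(f"{gas}: {scenario} <= {conc} ppmv, "
              f"action level {action_level} ppmv (>= {headroom:.0f}x headroom)")
```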
Preve and colleagues [15] have repeatedly cited toxicological data developed outside of the recommended and validated testing protocols. The acute inhalation LC50 data used in their publications were derived using different animal models (mouse). The OECD protocols for acute inhalation toxicity (OECD 403, 433 and 436) all state that the preferred test species is the rat, as it has previously been demonstrated that mice are often more sensitive in acute inhalation studies than other mammals, a factor which complicates the use of data generated in mice for risk assessment purposes [17]. Similarly, the discussions in these papers regarding mutagenicity aspects appear to overlook both the available data on the Novec gases as well as the recommendations for the use of read-across techniques encouraged by regulatory bodies such as the European Chemicals Agency (ECHA). As a result, the data generated in those studies do not augment the information for a human health risk assessment. Zhang and colleagues published the results from a series of inhalation toxicity studies conducted in the mouse [16]. As expected, the results demonstrated the higher sensitivity of the mouse in acute inhalation studies compared to the rat but again did not demonstrate any additional relevance for a human health risk assessment. While the authors stated that there is still much work to be conducted on the toxicity of C4 nitrile and a need for an occupational exposure level, this assessment clearly does not reflect the significant amount of data readily available on this material, which includes GLP-conducted acute, sub-chronic, developmental and reproductive, and genetic toxicity studies. Based upon these studies, 3M has developed an occupational exposure limit of 65 ppm which is published on the 3M safety data sheets and product literature. Additional considerations apply when handling any insulating gas after arcing events. In the case of electrical arcing in equipment containing SF6, high-toxicity decomposition byproducts such as HF, S2F10 and SO2 can be generated. These byproducts are highly hazardous and pose a potential toxicity risk to those exposed. Depending on the nature of the arcing event, the Novec gas mixtures may also undergo some degree of decomposition. Even though testing demonstrated that arced Novec gas mixtures can be less hazardous than arced SF6 mixtures [1,2], similar precautions should be taken when handling such gas mixtures. Employees performing maintenance procedures on electrical switches containing arced SF6 are required to use proper handling procedures and wear personal protective equipment. Similar precautions should be taken with arced Novec gas mixtures.
Global Warming Potentials One metric for analyzing the potential environmental impact of SF6 alternatives is a comparison of the global warming potential (GWP) for the gases used within the different technologies. The GWP is an index that provides a relative measure of the possible climate impact of a compound which acts as a greenhouse gas in the atmosphere. It effectively calculates the amount of energy absorbed by a compound over a period of time relative to that of a reference compound, CO2. The GWP as defined by the Intergovernmental Panel on Climate Change (IPCC) [18] is calculated as the integrated radiative forcing due to the release of 1 kg of that compound relative to the warming due to 1 kg of CO2 over the same time interval (the integration time horizon (ITH)), as shown in Equation (1), where R is the radiative forcing per unit mass of a compound (the change in the flux of radiation through the atmosphere due to the infrared (IR) absorbance of the compound), C is the atmospheric concentration of a compound, τ is the atmospheric lifetime of a compound, t is time and i is the compound of interest. The commonly accepted ITH is 100 years. Only two variables in the GWP calculation are affected by the physical characteristics of the compound-the radiative forcing due to IR absorbance and the atmospheric lifetime. All fluorinated compounds absorb IR energy in the "window" at 8 to 12 µm which is largely transparent in the natural atmosphere. This IR absorbance, coupled with a long atmospheric lifetime, results in a high GWP for many perfluorinated compounds such as SF6. The most effective approach to producing a lower GWP alternative is to develop a compound with a significantly shorter atmospheric lifetime. For highly fluorinated compounds this means synthesizing a molecule containing functionality or structural features that allow it to decompose more readily in the natural atmosphere. This is precisely the approach that was taken with the Novec Insulating Gases. Novec 5110 gas incorporates a carbonyl group that undergoes direct photolysis when exposed to sunlight in the lower atmosphere, leading to a GWP value of less than 1 [19]. Novec 4710 gas contains a nitrile group that reacts with hydroxyl radicals in a process similar to the degradation mechanism for most organic compounds that enter the lower atmosphere. Multiple studies have reported an atmospheric lifetime and GWP value for Novec 4710 gas. At first glance, these values may appear to vary considerably. However, as the review below demonstrates, the results are consistent within recognized experimental uncertainty. The initial studies were performed in the 3M Environmental Laboratory to investigate the atmospheric lifetime of Novec 4710 gas. A series of experiments measured the rate of degradation for Novec 4710 gas due to reaction with hydroxyl radicals relative to methane or pentafluoroethane as a reference compound. Hydroxyl radicals were generated via photolysis of ozone in the presence of water vapor. Concentrations of the reactants were measured continuously by Fourier transform infrared spectroscopy (FTIR) using a 10-m pathlength within a 5.7 L gas cell maintained at 300 K. Additionally, gas samples were analyzed by gas chromatography with mass spectrometry during one of the experiments to confirm the concentrations of Novec 4710 gas. The average atmospheric lifetime calculated from four separate experiments was 30 years for Novec 4710 gas [20].
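The body of Equation (1) did not survive text extraction. Based on the in-text definitions of R, C, τ, t and i and the standard IPCC convention the text refers to, the equation can plausibly be reconstructed as below; the exact typography of the original may differ.

```latex
% Plausible reconstruction of Equation (1) from the surrounding definitions
% (R: radiative forcing per unit mass, C: atmospheric concentration,
%  tau: atmospheric lifetime, t: time, i: compound of interest).
\mathrm{GWP}_i(\mathrm{ITH}) =
  \frac{\int_{0}^{\mathrm{ITH}} R_i \, C_i(t)\, \mathrm{d}t}
       {\int_{0}^{\mathrm{ITH}} R_{\mathrm{CO_2}} \, C_{\mathrm{CO_2}}(t)\, \mathrm{d}t},
\qquad
C_i(t) = C_i(0)\, e^{-t/\tau_i}
% Note: for the CO2 reference, IPCC uses a multi-term impulse-response function
% rather than a single exponential decay.
```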
The radiative efficiency for Novec 4710 gas was calculated at 0.225 W m−2 ppbv−1 using the method of Pinnock et al. [21] with an IR cross-section measured using 0.5 cm−1 resolution. This radiative efficiency value takes into account the necessary stratospheric temperature adjustments and atmospheric lifetime corrections. The radiative efficiency combined with a 30-year lifetime results in a GWP of 2100 using the IPCC calculation method [18]. A study published by Sulbaek Andersen and colleagues conducted smog chamber experiments to investigate the atmospheric fate of Novec 4710 gas [22]. Experiments were performed within a 101 L photoreactor maintained at 296 K. Hydroxyl radicals were generated by photolysis of ozone in the presence of hydrogen gas. The atmospheric lifetime was determined from these experiments to be approximately 22 years. Combining this lifetime with the radiative efficiency they measured at 0.217 W m−2 ppbv−1 using an FTIR resolution of 0.25 cm−1 resulted in a GWP value reported as 1490. The lifetime reported in this study was calculated using the measured reaction rate constant and an average hydroxyl radical concentration in the atmosphere. For compounds considered to be well-mixed in the atmosphere (i.e., lifetimes more than a few months), it is more common to calculate the lifetime relative to a reference compound such as methyl chloroform, since there is a comprehensive analysis of its abundance in the atmosphere as well as its rate of emission and removal. The atmospheric lifetime calculated from this method is 32 years, resulting in a GWP of 2090. Another series of experiments was conducted by Blázquez and colleagues in which they examined the temperature dependence of the reaction of hydroxyl radical with Novec 4710 gas [23]. Hydroxyl radicals were produced by photolysis of HNO3. Measurements were made from 278 to 358 K. A linear equation (in the form of the Arrhenius equation) was fit to these kinetic data. The atmospheric lifetime was reported as 47 years using kinetics extrapolated to 272 K. The radiative efficiency was measured in this study to be 0.279 W m−2 ppbv−1 using a 1 cm−1 spectral resolution. These data combined to report a GWP value of 3646. While the lower temperature for the kinetic calculations is more representative of the average tropospheric temperature, a comparison of values across all studies requires data to be compared from equivalent conditions. The kinetic data measured at 298 K in this study result in an atmospheric lifetime of 31 years. Calculation of the GWP using this lifetime and the above radiative efficiency produces a value of 2620. While there is variability in the GWP values resulting from these independent studies, the values are well within the uncertainty reported by IPCC of ±35% [18], as shown in Figure 4. The average lifetime and GWP values from the 3 studies are 31 years and 2260, respectively, which agree well with the original values reported by 3M. On this basis, 3M continues to report the lifetime and GWP values derived from their internal studies of 30 years and 2100, respectively.
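As a quick cross-check of the reconciliation above, the sketch below averages the like-for-like lifetimes and GWP values quoted in the text (30, 32 and 31 years; 2100, 2090 and 2620) and tests whether each GWP lies within the cited ±35% IPCC uncertainty band around the mean. A simple arithmetic mean of the three GWPs gives roughly 2270; the 2260 quoted in the text presumably reflects rounding in the underlying values.

```python
# Sketch using simple arithmetic means: checks that the like-for-like GWP values
# quoted in the text for Novec 4710 gas fall within the +/-35% GWP uncertainty
# cited from IPCC. Values are taken directly from the text.
lifetimes_yr = [30, 32, 31]   # 3M; Sulbaek Andersen (methyl-chloroform ref.); Blazquez (298 K)
gwps = [2100, 2090, 2620]

mean_lifetime = sum(lifetimes_yr) / len(lifetimes_yr)
mean_gwp = sum(gwps) / len(gwps)
low, high = 0.65 * mean_gwp, 1.35 * mean_gwp

print(f"mean lifetime ~ {mean_lifetime:.0f} yr, mean GWP ~ {mean_gwp:.0f}")
for gwp in gwps:
    print(f"GWP {gwp}: within +/-35% band [{low:.0f}, {high:.0f}]? {low <= gwp <= high}")
```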
The GWP for a gas mixture is calculated using the GWP value for each individual component multiplied by its weight fraction in the mixture according to Equation (2), GWPmix = Σi (xi × GWPi), where xi and GWPi are the weight fraction and GWP of component i, respectively. Greenhouse Gas Emissions A comparison of GWP values for representative gas mixtures used as alternatives to SF6 is shown in Table 4. However, this type of comparison only provides a partial assessment of the environmental impact from insulating gas technologies. The mass of gas released, even from the same volumetric leakage rate, can be significantly different due to the considerably different gas densities. Table 4 also shows the GHG emission reductions achieved by the alternative-gas mixtures are even greater than would have been apparent through a simple comparison of GWPs. Another disadvantage to assessing the climate impact of gases solely through comparison of GWP values is the inherent limitation within the GWP calculation itself. It is important to note that the commonly recognized GWP for a substance is calculated over a 100-year ITH. This ITH is a compromise between shorter-term and longer-term effects [18]. However, this means that the full climate impact of a very long-lived gas, such as SF6, is not fully accounted for in the GWP calculation. Figure 5 displays a plot of the quantity of gas remaining in the atmosphere following a 1 kg release. A compound such as Novec 4710 gas with an atmospheric lifetime of 30 years is expected to be essentially fully degraded within the GWP calculation timeframe. Contrast that with SF6 which, due to its atmospheric lifetime of 3200 years, remains in the atmosphere far longer than the 100-year ITH. As a result, only a fraction of its potential impact on climate change is included in the GWP calculation. Installations of gas-filled electric power equipment are expected to remain in use for decades with low-level emissions occurring throughout this time due to leakage. Many regions require reporting of these GHG emissions on an annual basis, even though, as shown in Figure 5, a portion of the gas leaked in any year can remain in the atmosphere for far longer. An assessment of the cumulative GHG emissions would account for not only the mass of gas emitted annually but also the amount of the gas that accumulates in the environment during its use. Both factors can have a measurable influence on the overall climate impact of a technology.
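Returning to Equation (2) above, the mass-weighted sum is straightforward to compute. The sketch below is a minimal illustration of that calculation; the example composition is a hypothetical placeholder, not the composition of any mixture listed in Table 4.

```python
# Minimal sketch of Equation (2): GWP of a gas mixture as the mass-weighted sum
# of component GWPs. The composition below is an illustrative placeholder, NOT
# an actual mixture composition from the paper.
def mixture_gwp(components):
    """components: list of (weight_fraction, gwp) tuples; fractions must sum to 1."""
    total_fraction = sum(x for x, _ in components)
    assert abs(total_fraction - 1.0) < 1e-6, "weight fractions must sum to 1"
    return sum(x * gwp for x, gwp in components)

# Hypothetical example: a few percent by weight of Novec 4710 (GWP ~2100 per the
# text) in a CO2 carrier gas (GWP = 1).
example = [(0.10, 2100.0), (0.90, 1.0)]
print(f"example mixture GWP ~ {mixture_gwp(example):.0f}")
```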
Figure 6 compares the cumulative GHG emissions that would occur due to leakage of insulating gas over a 40-year lifetime of an installed base of gas-filled equipment. The comparison assumes volumetric emissions from the equipment equivalent to 1 T/year of SF6 over that lifetime. The calculations are carried out for 100 years corresponding to the timeframe used in GWP assessments in order to illustrate the limitation of relying on the GWP parameter alone. Results for alternative-gas mixtures with GWPs of 398 and 1 are plotted along with SF6. Comparison of the GWPs for these mixtures to SF6 suggests that these alternatives represent a 98.3% and >99.9% improvement, respectively. Additionally, if the different gas densities are factored into the calculation, the reduction in GHG emissions improves to 99.1% and >99.9%, respectively, as shown in Table 4. However, the shorter atmospheric lifetimes of the alternative gases mean that both materials degrade much more rapidly over time compared to SF6, preventing measurable accumulation of these alternatives in the environment. This limits the cumulative GHG emissions from the alternative-gas insulation technologies. When calculated over a 100-year timeframe both gas mixtures reduce GHG emissions by more than 99.9%, regardless of the GWP of the alternative-gas mixture.
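The accumulation argument behind Figures 5 and 6 can be illustrated with a simple exponential-decay model of each year's release. The sketch below is not a reproduction of the paper's calculation (it tracks the remaining atmospheric burden in release units rather than CO2-equivalent emissions), but it shows why a 30-year-lifetime gas barely accumulates while SF6, with its 3200-year lifetime quoted in the text, persists essentially undiminished over the 100-year window.

```python
# Illustrative sketch: each year's release decays exponentially with the gas's
# atmospheric lifetime; a short-lived alternative barely accumulates, while SF6
# persists essentially undiminished over the 100-year GWP window.
import math

def burden(years: int, emission_years: int, lifetime_yr: float, annual_release: float = 1.0):
    """Mass still airborne after `years`, for releases made in years 0..emission_years-1."""
    total = 0.0
    for y in range(min(years, emission_years)):
        age = years - y
        total += annual_release * math.exp(-age / lifetime_yr)
    return total

for name, tau in [("SF6", 3200.0), ("30-yr-lifetime alternative", 30.0)]:
    print(f"{name:28s} airborne after 100 yr (40 yr of 1 unit/yr releases): "
          f"{burden(100, 40, tau):.2f} units")
```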
A lifecycle assessment (LCA) comparing the climate impacts of these alternative-gas technologies came to similar conclusions [24]. The analysis compared 145 kV GIS bays operating with the alternative-gas mixtures to identical equipment designed for SF6 throughout the gas-use phases of the equipment lifecycle (filling, operation, decommissioning). The LCA demonstrated that the alternative-gas technologies result in large reductions of the carbon footprint of these applications with a climate impact that is negligible compared to SF6, confirming the results of the GHG calculations shown above. Conclusions Gas mixtures containing a fluoroketone or a fluoronitrile, Novec™ 5110 Insulating Gas and Novec™ 4710 Insulating Gas, respectively, are being implemented as low climate-impact alternatives to SF6. When used at higher pressure, these gas mixtures can deliver dielectric performance comparable to SF6 in high voltage systems. The safety of Novec gases has been evaluated through a series of toxicological studies, which demonstrate that gas mixtures containing these materials are safe to handle in gas-filled equipment. Both alternative gases have significantly lower GWPs than SF6. Moreover, their shorter atmospheric lifetimes prevent measurable accumulation of these gases in the atmosphere. This results in substantial reduction (>99.9%) in GHG emissions over the expected working life of equipment using these alternatives, irrespective of the GWP for the individual gas mixture components. As a result, these advanced materials enable insulation technologies that can make a meaningful contribution to reducing the environmental impact of high voltage applications. Therefore, limiting alternative-gas technologies based on GWP alone could be counterproductive to the goal of reducing the climate impact from electric power applications. In fact, the European Commission report in 2020 stated "In specific sites where the voltage rate must be maintained and space is restricted, such as substations at power plants or in urban areas, currently designs based on fluoronitriles may be the only viable alternative to SF6 based switchgear" [25]. Gas-insulated equipment containing Novec 4710 gas mixtures first started operating on the grid in 2017, while equipment containing Novec 5110 gas mixtures first started operating on the grid in 2015. More than 100 equipment bays containing alternative gas mixtures have now been installed by multiple utilities located primarily in Europe with recent installations in Asia and North America. Figure 2. Variation in gas density as a function of gas column height. Gas mixtures are described in Table 2. Figure 3. Dielectric breakdown voltage of gas mixtures compared to pure SF6. Figure 4. GWP values for Novec 4710 gas with uncertainty cited by IPCC.
Figure 5. Residence time of insulating gas in the atmosphere, assuming 1 kg release of each compound at time zero. Figure 6. Cumulative greenhouse gas emissions, assuming emission equivalent to 1 T/yr of SF6 for 40 years of operation. Table 1. Alternative gas properties compared to SF6. Table 2. Gas mixture properties compared to SF6. Table 3. Key toxicological results on pure Novec Insulating Gases. 1 Defined as a liquid under the Globally Harmonized System based upon vapor pressure. Table 4. Initial climate performance of alternative-gas mixtures compared to SF6.
2021-09-27T18:45:42.596Z
2021-08-17T00:00:00.000
{ "year": 2021, "sha1": "dbec704d6d43ba7dd5b0ee8bd2eb5590b3e6065e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/14/16/5051/pdf?version=1629201932", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "e9a6a75ab382ef8558fcd3c27d4599ce0ee974ff", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
231927030
pes2o/s2orc
v3-fos-license
Metabolomic Analysis of Skin Biopsies from Patients with Atopic Dermatitis Reveals Hallmarks of Inflammation, Disrupted Barrier Function and Oxidative Stress The main objectives of this study were to characterize the metabolomic profile of lesional skin of patients with atopic dermatitis, and to compare it with non-lesional skin of patients with atopic dermatitis and skin of controls with no dermatological disease. Skin-punch biopsies were collected from 15 patients and 17 controls. Targeted analysis of 188 metabolites was conducted. A total of 77 metabolites and their ratios were found, which differed significantly between lesional skin of atopic dermatitis, non-lesional skin of atopic dermatitis and skin of controls. The metabolites were members of the following classes: amino acids, biogenic amines, acylcarnitines, sphingomyelins or phosphatidylcholines, and the most significant differences between the groups compared were in the concentrations of putrescine, SM.C26.0 and SM.C26.1. The alterations in metabolite levels indicate inflammation, impaired barrier function, and susceptibility to oxidative stress in atopic skin. Atopic dermatitis (AD) is a chronic inflammatory skin disease (CISD) that has a significant negative impact on the physical, emotional and psychosocial wellbeing of affected patients (1)(2)(3). AD affects approximately 10-20% of children and 1-3% of adults worldwide (4). The prevalence of AD is increasing, especially in the younger age groups (5). The risk factors and pathophysiological mechanisms of AD involve skin barrier dysfunction, alterations in immune response, such as prevalent Th2 response during the acute phase, which leads to increased IgE synthesis, genetic factors, of which mutations in the filaggrin gene are most widely known, and environmental factors, such as reduced humidity and presence of pollutants (6)(7)(8)(9)(10). Clinically, patients with AD have dry skin, typical appearance and distribution of itchy rash, which is dependent on age (6). Skin lesion is usually the first manifestation of the "atopic march", which includes the appearance of asthma and allergic rhinitis at later stages. Patients with AD also have more frequent and severe skin infections, most often caused by Staphylococcus aureus and herpes simplex virus (1). To date, only a few metabolomic studies on AD have been conducted. Ottas et al. investigated the blood serum metabolome of patients with AD and found differences in the levels and ratios of acylcarnitines, phosphatidylcholines, and in a cleavage product of fibrinogen Aα (11). Huang et al. compared the serum of children with AD and healthy children and found decreased concentrations of glycine and taurine-conjugated bile acids, and increased levels of unsaturated fatty acids, leukotriene B4 (LTB4), prostaglandins including PGD2, PGB2, PGE2; 8-, 9-, 11-, 12-hydroxyeicosatetraenoic acid (HETE) as well as 13- and 9-hydroxyoctadecadienoic acid (HODE) in an AD group. In addition, differences between metabolomic profiles of patients with AD with elevated and normal IgE levels were identified (12). An analysis of blood sera of adult patients with AD who responded to omalizumab treatment showed increased baseline phosphatidylcholine concentrations compared with sera from nonresponders (13). Furthermore, levels of creatinine, creatine, citrate, formate, 2-hydroxybutyrate, dimethylglycine and lactate in the urine of infants with AD are increased, and the concentrations of betaine, glycine and alanine are decreased (14).
To our knowledge, this study analysed, for the first time, the metabolomic profiles of the skin of patients with AD, compared with the metabolomes of nonlesional skin samples obtained from the same patients and from healthy controls. Ethics approval The study was approved by the Research Ethics Committee of the University of Tartu (permission number 269/T9). The Declaration of Helsinki protocols were followed and patients provided their informed, written consent. Volunteer recruitment Adult patients with AD were recruited from the Tartu University Hospital at the Clinic of Dermatology between 2013 and 2015. Controls were recruited either from the Clinic of Traumatology and Orthopaedics or from the Clinic of Dermatology. The exclusion criterion for participants was any other concomitant skin disease. Fifteen patients with AD (11 women, 4 men, ages 20-50 years) and 17 controls (7 women, 10 men, ages 23-75 years) were enrolled in the study. Fourteen of the patients with AD had one or more known allergies (food allergies: 9 patients; allergy to medication: 3 patients; to dust or hair: 11 patients; to pollen: 11 patients), 11 patients had other concomitant atopic diseases (allergic rhinoconjunctivitis, bronchial asthma) and 7 had a positive family history of AD. Four people in the control group had one or more known allergies (food allergies: 3 patients; allergy to medication: 2 patients; to dust or hair: 2 patients) and 2 patients (who had no known allergies) had a positive family history of AD. All participants were Caucasians of Eastern European descent and all provided written informed consent. Skin biopsies Three-millimetre punch biopsies were taken from the visually well-defined AD lesional skin and adjacent (1-2 cm from lesions) nonlesional skin from the upper arm and torso of patients with AD and from similar locations of non-sun-exposed skin of controls. The biopsies were collected before the first meal of the day. Skin samples were frozen immediately in liquid nitrogen and stored at -80°C until needed. The samples were collected over a period of 3 years, after which metabolites were extracted, and samples lyophilized. The lyophilized samples were stored at -80°C until analysis, as described previously (15). Prior to measurements, the skin samples were weighed and a mix of 12 ml/g methanol and chloroform and 6 ml/g water was added according to the skin sample weight. Twelve-mm steel balls were added to the tube and milled using BulletBlender (NextAdvance). The sample was incubated for 1 h on ice, the supernatant was transferred to a clean tube and centrifuged at 16,000 × g and 4°C for 15 min. The methanol/water and chloroform phases were pipetted to separate tubes and lyophilized. Metabolomic analysis AbsoluteIDQ p180 kit (Biocrates Life Sciences AG, Innsbruck, Austria) was used for the targeted analysis of 188 metabolites and their ratios. An Agilent Zorbax Eclipse XDB C18, 3.0 × 100 mm, 3.5 µm with PreColumn SecurityGuard, Phenomenex, C18, 4 × 3 mm was used on a 1260 series HPLC (Agilent, Santa Clara, CA, USA) in tandem with a QTRAP 4500 (ABSciex, Framingham, MA, USA) mass spectrometer. The protocol is set out in the user's manual of the AbsoluteIDQ p180 kit. Briefly, lyophilized samples were thawed on ice, the 2 lyophilized phases were both dissolved in 85% methanol/15% water, according to their previous weight (15-25 μl added solvent) and both phases were added to the filter plate of the kit. Subsequently, 10 µl internal standards were added.
The samples were derivatized using phenylisothiocyanate, dried, and metabolites extracted using 40% methanol in water. Acetonitrile, chloroform, formic acid (FA), methanol and water were all HPLC grade and purchased from Sigma-Aldrich (Darmstadt, Germany). Data analysis Data were analysed using R version 3.5.1 (16). The nonparametric Kruskal-Wallis rank-sum test and the Wilcoxon rank-sum test were used when looking for phenotype-differentiating metabolites. Benjamini-Hochberg (false discovery rate; FDR 5%) corrected p-value < 0.05 was considered statistically significant. RESULTS A total of 77 metabolites and their ratios were found, which differed significantly between lesional skin of atopic dermatitis (ADL), nonlesional skin of atopic dermatitis (ADNL) and skin of controls (C) in targeted analysis (Kruskal-Wallis rank-sum test, Benjamini-Hochberg (FDR 5%) corrected p-value < 0.05; Table I). These metabolites belonged to amino acids (AAs), biogenic amines, acylcarnitines, sphingomyelins and phosphatidylcholines groups. While comparing the metabolomic profile of ADL skin with C skin, there were 21 metabolites that had significantly higher concentrations in ADL skin and 2 metabolite ratios (citrulline to arginine, and citrulline to ornithine) that had higher values in C skin (Fig. 1). When the profiles of ADL and ADNL skin were compared, 73 metabolites were found that had elevated concentrations in ADL skin (Fig. 2). As in the case of ADL and C comparison, the ratios of citrulline to arginine and citrulline to ornithine had higher values in ADNL skin. There were no statistically significant differences between the metabolite concentrations in ADNL and C skin. The concentrations of putrescine were higher in ADL skin compared with both ADNL (p = 0.0005, 3.7 median fold change) and C skin (p = 0.0004, 3.2 median fold change). The levels of asymmetric dimethylarginine (ADMA) were higher in ADL skin compared with ADNL skin (p = 0.0007, 2 median fold change). The only acylcarnitine that differed significantly between the groups was acetyl-L-carnitine (C2), the levels of which were elevated in ADL skin compared with ADNL skin (p = 0.0008, 1.2 median fold change). Regarding AAs, 8 metabolites and their ratios were found, which had statistically significant differences between the 3 phenotypic groups. The concentrations of glutamate (Glu) were higher in ADL skin compared with ADNL (p = 0.0002, 2.2 median fold change) and C skin (p = 0.0252, 2.4 median fold change). Similarly, methionine (Met) level in ADL skin was higher compared with both ADNL (p = 0.0007, 1.8 median fold change) and C (p = 0.0244, 2.8 median fold change) skin. In ADL skin there were also higher levels of glutamine (Gln) (p = 0.0008, 2 median fold change), asparagine (Asn) (p = 0.0013, 2.2 median fold change), arginine (Arg) (p = 0.0112, 1.9 median fold change) and lysine (Lys) (p = 0.003, 1.7 median fold change) compared with ADNL skin, but no significant differences were found among the levels of named metabolites when C skin was compared with ADL and ADNL skin. The ratios of citrulline to Arg (Cit…Arg) and Cit to ornithine (Cit…Orn) were higher in C skin compared with ADL skin (p = 0.007 and p = 0.0372, respectively) and higher in ADNL skin compared with ADL skin (p = 0.0119 and p = 0.0423, respectively). Furthermore, 14 sphingolipids, 46 phosphatidylcholines (PC) and 6 lysophosphatidylcholines (lysoPC) were found that had significant differences between the 3 groups.
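The group comparisons above follow the workflow described in the Data analysis subsection (Kruskal-Wallis across the three skin groups, pairwise Wilcoxon rank-sum tests, and Benjamini-Hochberg correction at FDR 5%, implemented by the authors in R 3.5.1). The sketch below re-expresses that workflow in Python with scipy and statsmodels purely for illustration; the metabolite values are synthetic placeholders, not the study's data.

```python
# Illustrative Python re-sketch of the statistical workflow described in the text
# (the authors used R 3.5.1): Kruskal-Wallis across the three skin groups per
# metabolite, Benjamini-Hochberg FDR correction at 5%, then pairwise Wilcoxon
# rank-sum tests. All numbers below are synthetic placeholders.
import numpy as np
from scipy.stats import kruskal, ranksums
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
metabolites = {  # synthetic concentrations: AD lesional, AD non-lesional, control
    "putrescine": (rng.lognormal(1.2, 0.3, 15), rng.lognormal(0.0, 0.3, 15), rng.lognormal(0.0, 0.3, 17)),
    "SM.C26.0":   (rng.lognormal(0.8, 0.3, 15), rng.lognormal(0.0, 0.3, 15), rng.lognormal(0.0, 0.3, 17)),
}

pvals = {name: kruskal(*groups).pvalue for name, groups in metabolites.items()}
reject, p_adj, _, _ = multipletests(list(pvals.values()), alpha=0.05, method="fdr_bh")

for (name, groups), p_a, sig in zip(metabolites.items(), p_adj, reject):
    print(f"{name}: BH-adjusted p = {p_a:.4f}, significant = {sig}")
    if sig:  # pairwise Wilcoxon rank-sum tests, as in the paper
        adl, adnl, ctrl = groups
        print(f"  ADL vs ADNL p = {ranksums(adl, adnl).pvalue:.4f}, "
              f"ADL vs C p = {ranksums(adl, ctrl).pvalue:.4f}")
```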
All sphingolipids had the highest concentrations in ADL skin and almost all of these (12 of 14) had significant differences between both ADL and ADNL skin and ADL and C skin. Two sphingolipids had higher concentrations in ADL skin only compared with ADNL skin. Four lysoPCs had significantly higher levels in ADL skin than in ADNL skin, and the levels of lysoPC.a.C20.4 and lysoPC.a.C18.1 did not differ between the groups in pairwise comparison. There were 40 PCs that had elevated concentrations in ADL skin compared with ADNL skin, and 6 PCs that had higher levels in ADL skin compared with both ADNL and C skin. DISCUSSION To our knowledge, this study addressed, for the first time, the wide range of metabolomic differences between ADL, ADNL and C skin. The levels of putrescine were elevated in ADL skin compared with both ADNL and C skin. Biogenic amines putrescine, spermidine and spermine have important roles in diverse cell functions, and they can be found in, and are produced by, all mammalian cell types. They take part in cell proliferation and growth, protect DNA from oxidative damage and regulate its conformational status, interact with ion channels, and participate in immune regulation (17)(18)(19)(20). In neoplastic diseases the metabolism of polyamines is altered and, consequently, their levels are elevated in cancer cells. This has led to the search for cancer treatment options that target different aspects of biogenic amines metabolism, such as inhibiting their synthesis (e.g. inhibition of ornithine decarboxylase I (ODC1), which acts as a primary rate-limiting enzyme for polyamine biosynthesis) or blocking their transport (17,21,22). In addition, polyamine levels are elevated during inflammation; however, their role in this process is not clear, as they seem to have both pro- and anti-inflammatory properties (23)(24)(25). The de novo synthesis of these biogenic amines is accomplished via a dedicated pathway. The first step in this pathway is the production of putrescine by ODC1. Importantly, Orn is produced from Arg in a separate set of reactions. To convert putrescine into other biogenic amines the key reactions are as follows: activation of Met and subsequent production of S-adenosyl-L-methionine (AdoMet), its decarboxylation by AdoMet decarboxylase (AMD1) and utilization of propylamine groups from decarboxylated AdoMet for the synthesis of spermidine by spermidine synthase and spermine by spermine synthase (26,27). Increased putrescine levels were found in inflammatory ADL skin compared with both ADNL and C skin. However, the concentrations of spermidine and spermine were not significantly increased and, accordingly, Met concentration was significantly increased in ADL skin. This can be explained by the continuous production of putrescine by ODC1 and potential downregulation of AMD1 in ADL skin. On the other hand, Lim et al. (27) found that AMD1 was upregulated at the wound edge cells after wounding human keratinocytes ex vivo and in scratch assays. As pruritus is one of the major symptoms in AD that leads to scratching, one would have expected this study to find higher levels of spermidine and spermine in ADL samples. Increased levels of ADMA were found in ADL skin compared with ADNL skin. ADMA and SDMA are related to Arg metabolism. ADMA inhibits nitric oxide synthase (NOS), and SDMA competes with L-arginine for transport into the cell via cationic amino acid transporter 2B, which results in decreased cellular L-arginine levels (28,29).
L-arginine is a substrate for NO synthesis, which facilitates normal endothelial function (30). SDMA and ADMA have been found to participate in inflammatory processes, and are increased in diseases, such as chronic kidney disease, atherosclerosis, rheumatoid arthritis and psoriasis (31)(32)(33)(34)(35). We have previously found elevated ADMA and total DMA levels in lesional skin of psoriatic patients (36). Thus, dimethylarginines may act as biomarkers for inflammation and metabolic imbalance. The ratios of Cit to Arg and Cit to Orn were decreased in ADL skin compared with ADNL and C skin. A possible reason for this is that more Arg and Orn are needed to produce putrescine via the previously described pathway by ODC1. Although the levels of Orn were higher in ADL skin than in ADNL and C skin, the differences were not statistically significant when comparing the 3 phenotypic groups. The level of Arg was elevated in ADL skin compared with ADNL skin, which is consistent with the findings of Dimitriades et al. (37), who also described decreased activity of arginase I and slightly higher levels of Arg in the plasma of paediatric patients with AD. In addition, as a result of increased ADMA levels in ADL skin, NOS is inhibited and biosynthesis of Cit from Arg is diminished. We found similar changes in psoriatic skin, where the ratio of Cit to Orn was decreased in lesional skin compared with nonlesional and control skin, which could have been due to lower activity of Orn carbamoyltransferase (36,38). The concentration of a nonessential AA, Asn, was elevated in ADL skin compared with ADNL skin. Asn is required for development of the brain and has an important role in protein synthesis and cellular responses to AA homeostasis. Asn can be converted into aspartate (Asp), which is also needed for the urea cycle and is related to production of Arg. Asp can be converted to Asn. The biosynthesis of Asn and Glu requires Asp and Gln, which are metabolized by the enzyme Asn synthetase (39)(40)(41). Glu is another nonessential AA. In addition to its importance in protein synthesis, it is also a crucial neurotransmitter. It has been found that, in depressive patients, increased plasma inflammatory markers are linked to elevated Glu concentration in basal ganglia and administration of ketamine, a glutamate receptor antagonist, gives a rapid positive effect in otherwise treatment-resistant depression (42,43). Increased levels of Glu have been found in the blood of patients with other diseases, such as rheumatoid arthritis, obesity and Alzheimer's disease; the conditions which, similar to AD, have a strong inflammatory component (44)(45)(46). In addition, the Glu median level in ADL samples was more than 2-fold higher than in ADNL or in C samples. Nevertheless, as in lymphocytes, one of the major intermediate compounds in glutamine (Gln) metabolism is Glu. The elevated levels of Glu in ADL skin might be connected to the increased need for Gln, which, in addition to its other functions, is an important energy source for immune cells (47,48). The concentrations of the essential AA Met were increased in ADL skin compared with ADNL and C skin. Met has important roles in protein biosynthesis and protection from oxidative stress. Met oxidation into methionine sulphoxide (Met.SO) and conversion via AdoMet into cysteine provide protective measures to neutralize oxygen radicals. Furthermore, cysteine, together with Glu, act as precursors for a potent cellular antioxidant glutathione (49,50).
Interestingly, the levels of Met.SO were undetectable in all the skin samples. The increased levels of Met and Glu and undetectable Met.SO level in ADL skin may be hallmarks of an impaired synthesis of antioxidative compounds, which, in turn, suggests the presence of increased oxidative stress in AD skin. This is substantiated by the notions that ADL skin contains less glutathione than normal skin, and environmental stress factors induce oxidative stress in the skin of patients with AD (51,52). The concentrations of another AA, Lys, were increased in ADL skin compared with ADNL skin. In addition to their other functions, Lys and Met are needed in the synthesis of carnitine. Carnitine has an important role in fatty acid metabolism, as it transports long-chain fatty acids through the inner mitochondrial membrane, after which β-oxidation takes place (53,54). Among acylcarnitine esters, the most significant is C2, which is formed after the acetyl moiety from acetyl-coenzyme A (acetyl-CoA) is transferred to carnitine. Thereby, C2 also regulates intracellular CoA homeostasis (55). Huang et al. compared the metabolites in patients with high and low serum IgE who had AD, and found elevated carnitine levels in the blood serum of patients with elevated levels of IgE. They concluded that the increased levels of fatty acids, carnitines, lactic acid and citric acid in children with high serum IgE AD point to impaired energy metabolism (12). Interestingly, lower levels of C2 were found in the serum of patients with AD (11). However, as in the current study, we have previously found elevated levels of C2 in psoriatic lesional skin compared with nonlesional skin, reflecting the metabolic and energetic status of the skin cells (36,54). The shift towards Th2-mediated immune responses might alter the lipid composition of AD skin (56). Ceramides are the precursors for sphingolipids, and facilitate preserving an intact skin barrier and reducing water loss. In AD skin, reduced ceramide content with altered chain lengths has been found (57)(58)(59). In addition to glucosylceramide, sphingomyelins (the most frequent sphingolipids in mammalian cells) can also be metabolized to ceramides by sphingomyelinase. The activity of sphingomyelinase is decreased in ADL as well as in ADNL skin and, subsequently, smaller amounts of ceramides are produced (57,60,61). These data are in agreement with the results of the current study, in which the levels of 12 sphingomyelins were elevated in ADL skin compared with ADNL and C skin, and 2 sphingomyelins had higher concentrations in ADL skin compared with ADNL skin, suggesting deficient ceramide synthesis and, consequently, a reduced skin barrier. In addition to being the main components of cell membranes, SM and PC are sources of bioactive compounds that are involved in different signalling pathways (62). The current study found 40 PCs that had elevated ratios in ADL skin compared with ADNL skin, and 6 PCs that had higher concentrations in ADL skin compared with both ADNL and C skin. Peiser et al. (63) have suggested that a defective phosphatidylcholine-sphingomyelin transacylase might play a role in AD and could also be a reason why elevated PC levels were found in ADL skin. As in psoriasis, the concentrations of 4 lysoPCs, which are derived from PCs, were increased in AD lesional skin compared with nonlesional skin, and similarly, the attraction of T-lymphocytes might be one of their roles in AD (64).
The number of reports on the metabolome of patients with AD is still very limited, and only single reports describing the metabolome of blood and urine of patients with AD could be retrieved. When the metabolomic profile of AD skin was compared with that of blood samples obtained from patients with AD, a few concomitant alterations in specific metabolite levels were found. Interestingly, the changes were mostly in opposite directions, suggesting a decrease in these metabolites in the bloodstream and concomitant increase in the skin tissue. For instance, the concentration of C2 was decreased in the blood samples of patients with AD compared with controls; however, in the current study, the level of C2 was significantly higher in ADL skin compared with ADNL skin (11). Similarly, the concentrations of 3 PCs were decreased and only one PC increased in the blood samples of patients with AD; however, the current study found that the concentrations of 40 PCs were increased in ADL skin compared with ADNL skin, and the concentrations of 6 PCs were higher in ADL skin compared with ADNL and C skin (11). Data regarding the metabolomic profile of the urine of patients with AD are also scarce; we found only one study, analysing the urine of infants with AD (14). However, there were no similarities to the current study with regard to altered metabolite concentrations. The reasons may lie in the specific properties of renal filtration, as well as the differences in the ages of patients (infants vs adults). Limitation A limitation of this study was the relatively small number of enrolled patients. Conclusion A wide metabolomic analysis of AD and control skin samples was conducted, and notable alterations in the concentrations of amino acids, biogenic amines and lipids identified, which indicate ongoing inflammatory response, disruption of the barrier function, and susceptibility to oxidative stress. These findings are in concordance
2021-02-16T06:16:19.448Z
2021-02-15T00:00:00.000
{ "year": 2021, "sha1": "32bcb67e58ee28cde5ca72f5654cce518942639f", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.2340/00015555-3766", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "51ddf85ebf1fe20ddff85ad70a4ad0933cdf9e22", "s2fieldsofstudy": [ "Medicine", "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
195820256
pes2o/s2orc
v3-fos-license
Contact Engineering High Performance n-Type MoTe2 Transistors Semiconducting MoTe2 is one of the few two-dimensional (2D) materials with a moderate band gap, similar to silicon. However, this material remains under-explored for 2D electronics due to ambient instability and predominantly p-type Fermi level pinning at contacts. Here, we demonstrate unipolar n-type MoTe2 transistors with the highest performance to date, including high saturation current (>400 µA/µm at 80 K and >200 µA/µm at 300 K) and relatively low contact resistance (1.2 to 2 kΩ·µm from 80 to 300 K), achieved with Ag contacts and AlOx encapsulation. We also investigate other contact metals, extracting their Schottky barrier heights using an analytic subthreshold model. High-resolution X-ray photoelectron spectroscopy reveals that interfacial metal-Te compounds dominate the contact resistance. Among the metals studied, Sc has the lowest work function but is the most reactive, which we counter by inserting monolayer h-BN between MoTe2 and Sc. These metal-insulator-semiconductor (MIS) contacts partly de-pin the metal Fermi level and lead to the smallest Schottky barrier for electron injection. Overall, this work improves our understanding of n-type contacts to 2D materials, an important advance for low-power electronics. Atomically-thin field-effect transistors (FETs) based on the sulfides and selenides of Mo and W have demonstrated significant current modulation, moderate carrier mobilities, 1,2 and the capability for low-power complementary logic. 3,4 In contrast, semiconducting 2H-(α-)MoTe2 remains relatively under-explored, despite moderate indirect band gaps EG ≈ 0.88 to 1.0 eV in bulk crystal [5][6][7] and direct EG ≈ 1.1 eV (optical) and 1.2 eV (electronic) in monolayers, [8][9][10] similar to bulk silicon. Such moderate band gaps may facilitate low-power transistors with tunable injection of either electrons or holes, enabling low-voltage complementary logic and optoelectronics in the visible-to-infrared range. 11,12 Moreover, 2H-MoTe2 is metastable with a semimetallic 1T' phase, and switching between the two is enabled by temperature, 13 strain, 14 or electrostatic gating, 15 with applications for ultra-low-power switches, phase-change memory, or phase-engineered transistor contacts. 16 Stable metallic nanowire formation has also been recently reported. 17 Despite these favorable properties, two major challenges have hindered the broader exploration and integration of MoTe2 devices. The first is ambient degradation, 18 as these tellurides are more prone to oxidation than sulfides or selenides, 19 as noted in the rapid oxidation of WTe2 exposed to atmosphere. 20,21
One study 22 tracked the decomposition of MoTe2 over hours to days in air, which was accompanied by a decline in photoluminescence yield attributed to oxidation around defect sites. The second challenge is that conventional metal contacts to MoTe2 exhibit highly variable Fermi level pinning across reported studies, with poor carrier selectivity and significant Schottky barriers. Previous multi-layer MoTe2 transistors have been predominantly p-type [23][24][25][26] or ambipolar, [27][28][29][30][31][32][33] ostensibly due to contact pinning or minute variations in MoTe2 stoichiometry (including doping from iodine flux agents 23 during crystal growth). Devices with midgap pinning may achieve more unipolar contacts with electrostatic gating or adsorbate doping, 28,[34][35][36] potentially enabling complementary logic, similar to ambipolar WSe2. 3,4 Besides studies based on initially n-type MoTe2 (ostensibly from growth-flux dopants, which diffuse out under thermal treatment), 37,38 the sole report of unipolar n-type transport by Cho et al. 16 used laser heating to pattern phase-engineered 1T' contacts. However, such techniques are difficult to scale up, in terms of local thermal budget and laser resolution. Further development of selective, low-resistance contacts is thus required for energy-efficient electronics. In this work, we address the dual challenges of air sensitivity and contact pinning in few-layer MoTe2, fabricating air-stable, AlOx-encapsulated n-type transistors with the highest drive currents reported to date for this layered semiconductor. Our previous method of air-free fabrication, 21,39 in which channel regions avoid any ambient exposure, is applied to MoTe2 transistors with multiple contact metals. We then obtain self-consistent estimates of electron and hole Schottky barriers, accounting for combined thermionic and tunneling mechanisms of carrier injection. 40 From the conventional metals investigated, Ag contacts yield the lowest n-type electron barriers and smallest contact resistance RC, obtained with transfer length measurement (TLM) extractions. 41 RC appears to be independent of metal deposition pressure, which is explained by the formation of Ag-Te compounds at the MoTe2-metal interface, profiled by high-resolution X-ray photoelectron spectroscopy (XPS). We found higher reactivity with ultra-low work function Sc contacts, which form a disordered metal-telluride complex consuming multiple MoTe2 layers. To mitigate this issue, we insert a chemically-grown monolayer of h-BN 42,43 as a diffusion barrier between Sc and MoTe2, preventing Sc-MoTe2 reactions and helping to de-pin the metal Fermi level. With these metal-insulator-semiconductor (MIS) contacts, we achieve the first completely unipolar n-type operation in MoTe2 transistors, demonstrating strongly suppressed reverse leakage current and high-field on-state current saturation.
Contact Pinning To first characterize the extent of Fermi level pinning at MoTe2 contacts, we extract effective Schottky barriers from electrical measurements for several common contact metals with work functions from ~4.3 to ~5.7 eV (see Table 1). We fabricated transistors using MoTe2 exfoliated from synthetic bulk crystals (Supporting Information, Section 1). Following our previous method for passivating oxygen-sensitive few-layer WTe2, 21 we performed all processing in the inert atmosphere of nitrogen gloveboxes (O2, H2O < 3 ppm), such that our MoTe2 was only exposed to ambient air for < 5 minutes prior to contact metal deposition. To protect the channel from oxygen and moisture, we performed metal lift-off in a glovebox connected to an atomic layer deposition (ALD) chamber, allowing us to immediately encapsulate our devices in situ with 200 Å of AlOx via benign, low-temperature (150 °C) ALD. 21 Figure 1a presents a schematic of a completed device, with further processing details described in Methods and Section 2 of the Supporting Information. Theoretical band alignments between our metal contacts and MoTe2 are presented in the left half of Figure 1b, based on the Schottky-Mott rule, which predicts the "Schottky barrier height" for electron injection into the conduction band, Φn = ΦM - χ, where ΦM is the metal work function and χ is the semiconductor electron affinity. This simple theory suggests that preferential electron or hole injection into either conduction or valence bands, respectively, is possible among the examined metals, though with larger Schottky barriers expected for electron injection due to the relatively low electron affinity (χ ≈ 3.85 eV) of MoTe2 compared to Si and MoS2. 44,45 However, true band alignments will be determined by the charge neutrality level ECNL established by semiconductor defects and penetrating metal gap states at the contact interface. We performed initial electrical measurements of multi-layer devices (channel thickness tch = 5-10 nm, corresponding to 7-14 layers) with 40-60 nm of contact metals (Ag, Au, Ni, Pt, and Ti/Au) deposited in a conventional high-vacuum (HV) evaporator at 1-2 Å/s, with deposition pressures of 0.2 to 5×10 -6 Torr (with gettering metals like Ti deposited at the lower end of this pressure range). Figure 1c displays measured room-temperature width-normalized drain current density (ID) vs. back-gate voltage (VGS) data for long-channel devices (Lch = 0.9-1.1 µm; only reverse sweeps shown for clarity) on 90 nm SiO2 global back-gate oxide at VDS = 1 V. We observe clear ambipolar transport for all contact metals, but with preferential n-type conduction by several orders of magnitude, and maximum Ion/Ioff ratios of 10 4 to 10 5 . Qualitatively, these ambipolar transport curves suggest contact pinning at or above mid-gap, consistent with calculations for a charge neutrality level ECNL set deep within the gap by Te-vacancy and possibly Te-interstitial defects at the metal/MoTe2 interface. 46,47 Our Ti/Au and Ni-contacted samples produce ambipolar transfer characteristics similar to published devices. 27,30,32 We do not observe any p-type dominant transport, even with high work function Pt contacts. This is ostensibly an effect of the AlOx encapsulation, which partly n-dopes the MoTe2, 48 comparable to reports of enhanced electron transport in similarly-capped WSe2 and MoTe2 FETs. 3,32,36
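To make the Schottky-Mott comparison above concrete, the short sketch below evaluates Φn = ΦM - χ with χ ≈ 3.85 eV, as quoted in the text. The metal work functions used here are representative literature values chosen only for illustration; they are assumptions, not the Table 1 entries of this work.

```python
# Minimal sketch: ideal (unpinned) Schottky-Mott electron barriers on MoTe2.
# Work functions below are approximate literature values (assumed), not
# the values tabulated in this study.

CHI_MOTE2 = 3.85  # eV, MoTe2 electron affinity quoted in the text

metal_work_function_eV = {
    "Sc": 3.5,
    "Ag": 4.3,
    "Ti": 4.3,
    "Cr": 4.5,
    "Ni": 5.0,
    "Au": 5.3,
    "Pt": 5.7,
}

for metal, phi_m in metal_work_function_eV.items():
    phi_n = phi_m - CHI_MOTE2  # ideal electron barrier, Phi_n = Phi_M - chi
    print(f"{metal}: Phi_M = {phi_m:.2f} eV -> ideal Phi_n = {phi_n:+.2f} eV")
```

Under this ideal rule, low work function metals such as Sc would even give a negative electron barrier; the strongly pinned barriers actually extracted below illustrate how far the real contacts depart from this picture.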
Similar to these reports, 32,36 the peak n-to-p current ratio is enhanced by less than an order of magnitude, although the exact doping amount is difficult to quantify due to significant hysteresis of the MoTe2 devices prior to capping. However, the off-state current minima are not shifted to as negative a VGS as in prior reports, 36 indicating only moderate doping and enabling study of both current branches. The AlOx passivation also prevents air exposure, limiting surface oxidation of MoTe2 into MoOx, 49,50 which could otherwise enhance hole injection (with p-type transport correlated to oxygen exposure in uncapped devices). 24,36,38 Prior extractions of effective Schottky barrier heights to MoTe2 used temperature-dependent Arrhenius analysis, 23-25, 29, 30, 33 which assumes reverse leakage current is entirely due to thermionic emission (see Supporting Information, Section 3). However, this approach is inaccurate for ambipolar transistors, where the contact Fermi level EF is pinned deep in the gap, as identified by Das and Appenzeller for WSe2 devices. 51 Deep EF pinning implies that the reverse current does not reach the exclusively thermionic regime due to significant tunneling injection. Arrhenius analysis will simply interpret this tunneling contribution as thermionic current, underestimating the barrier height significantly. 52,53 This effect may contribute to previous reports of very low barrier heights extracted via conventional Arrhenius analysis for both p- and n-type injection into MoTe2 with Ti contacts, including Φp ≈ 5-130 meV, 23,25,27 and Φn ≈ 50-190 meV, 30,33 despite clear evidence of ambipolar transport. For comparison, 50-150 meV Schottky barriers are obtained for unipolar n-type MoS2 devices, 41,52 with contacts well-known to pin just below the conduction band. Instead, we perform more comprehensive barrier extractions using an analytic Schottky contact model based on Landauer transport theory, developed by Penumatcha et al. for ambipolar black phosphorus FETs (more details in Section 3 of the Supporting Information). 40 The subthreshold electron current density takes the Landauer form Jn ∝ ∫ TC(E) MC(E) [fD(E) - fD(E + qVDS)] dE, where TC(E) is the electron tunneling transmission in the Wentzel-Kramers-Brillouin (WKB) approximation, MC(E) are electron modes in the conduction band, and fD(E) is the Fermi-Dirac distribution. A similar expression exists for the hole current. This model accounts for combined thermionic and tunneling current, the latter through a simplified WKB model. We extract Schottky barrier heights by using this model to fit electron and hole branches around the current minimum in the subthreshold regime of ID vs. VGS sweeps for long-channel MoTe2 transistors (Lch ≈ 1 µm) at low drain bias (VDS = 100 mV). This method also self-consistently yields the semiconductor band gap from the sum of electron and hole Schottky barriers, EG ≈ Φn + Φp. Table 1 presents a summary of extracted barrier heights for five contact metals (multilayer samples approaching the electronic bulk value; averaged over multiple extractions and forward/reverse current sweeps), listing each contact metal alongside its work function 54 and the extracted barriers. An additional extraction for (Au-capped) Cr, omitted in Figure 1c for the sake of clarity, is also included. The extracted band gap sums fall somewhat below established bulk values, [5][6][7] likely due to this model's simplifications as well as our underestimates of Φp from suppressed subthreshold hole branches in more unipolar, n-type Ag-contacted devices. Figure 1d further plots the extracted barrier heights vs. metal work function, along with extractions for pure Sc contacts.
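A minimal numerical sketch of the combined thermionic-plus-tunneling current described above is given below. This is not the published fitting code: the triangular barrier profile, the 5 nm band-bending length, and the simplification that the channel band edge sits at the source Fermi level are all illustrative assumptions, and the output is in arbitrary units.

```python
import numpy as np

KB = 8.617e-5      # Boltzmann constant, eV/K
HBAR = 1.055e-34   # J*s
Q = 1.602e-19      # C
M0 = 9.109e-31     # kg

def wkb_transmission(energy, phi_n, lam_nm=5.0, m_eff=0.49 * M0):
    """WKB transmission through an assumed triangular barrier of height phi_n (eV)
    that decays to zero over lam_nm (an illustrative band-bending length)."""
    if energy >= phi_n:
        return 1.0  # over-the-barrier (thermionic) injection
    lam = lam_nm * 1e-9
    exponent = (4.0 * lam * np.sqrt(2.0 * m_eff * Q) / (3.0 * HBAR * phi_n)
                * (phi_n - energy) ** 1.5)
    return float(np.exp(-exponent))

def subthreshold_current(phi_n, temp=300.0, vds=0.1):
    """Relative electron current (arbitrary units): Landauer-style integral of
    modes x transmission x (source - drain) Fermi occupations. Energies are
    measured from the source Fermi level, with the channel band edge placed
    there for simplicity (a simplification of this sketch)."""
    E = np.linspace(0.0, phi_n + 1.0, 4000)          # energy grid, eV
    modes = np.sqrt(E)                               # 2D mode count ~ sqrt(E - E_C)
    f_src = 1.0 / (1.0 + np.exp(E / (KB * temp)))
    f_drn = 1.0 / (1.0 + np.exp((E + vds) / (KB * temp)))
    trans = np.array([wkb_transmission(e, phi_n) for e in E])
    return np.trapz(modes * trans * (f_src - f_drn), E)

for phi in (0.1, 0.2, 0.3, 0.4):
    print(f"Phi_n = {phi:.1f} eV -> relative current {subthreshold_current(phi):.3e}")
```

The strong dependence of the computed current on Φn is what allows the barrier height to be fitted from subthreshold sweeps, with the tunneling term preventing the overestimate of thermionic current that plagues a pure Arrhenius analysis.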
We perform a simple linear fit for physically relevant metals (Table 1 data; ΦM > χ) to estimate the pinning factor S = dΦn/dΦM. The extracted S ≈ 0.06 is almost that of a completely pinned material (S = 0) rather than the ideal case of the charge neutrality level being set by ΦM (S = 1, following the Schottky-Mott rule). This is considerably lower than the idealized prediction of S ≈ 0.16 via density functional theory calculations with metal-induced gap states, 55 though it is consistent with a recent report of S = 0.07 on ambipolar monolayer MoTe2 devices (and comparable to that of Ge, S = 0.05, but pinned just above the valence band). 33,56 Clearly, additional physical mechanisms beyond the metal work function influence the charge neutrality level, which may include mid-gap chalcogenide defect states (i.e., Te vacancies or interstitials) 46,47 or chemical intermixing between reactive MoTe2 and various contact metals at their interfaces. Silver Contacts We further investigated Ag electrodes as n-type contacts, because they demonstrated the smallest barrier for electron injection as well as the highest current density and steepest subthreshold activation, as shown in Figure 1c. We fabricated transistors on 30 nm SiO2 global back-gate oxide (depicted in Figure 2a), facilitating induced sheet carrier densities n2D > 10 13 cm -2 . In pursuit of clean contact interfaces, we deposited Ag (25 nm, capped with 15 nm Au) in a load-locked, cryopump-driven chamber with evaporation pressures down to 2.5×10 -8 Torr, just above the crossover to the ultrahigh vacuum regime (UHV, ~10 -9 Torr and below). Figure 2b displays representative ID vs. VGS transfer curves of a multi-layer MoTe2 FET with Ag contacts (reverse sweep only; negligible hysteresis at positive gate bias), showing predominantly n-type transport with current minima around VGS = 0. Measurements down to 78 K via closed-loop nitrogen cooling reveal strong temperature-dependence, which is characteristic of a Schottky-barrier-dominated device, including increased on-state current densities at lower temperature due to reduced RC and enhanced mobility (see discussion below). From suppression of thermionic hole injection at 78 K, we saw a three orders of magnitude increase in the peak current Ion/Ioff ratio, reaching 10 6 -10 7 (Figure 2b inset), despite an onset of short-channel effects for Lch < 100 nm. We note the stronger temperature-dependence of hole leakage current (at negative VGS) above 150 K, indicating more thermionic rather than tunneling charge injection despite the relatively large Schottky barrier (Table 1). This is consistent with expectations of a dominant thermionic contribution at higher temperature and low lateral field in the long-channel device of Figure 2b, as described in the extraction model and resembling transfer curves of ambipolar black phosphorus transistors. 40 In this regime, the temperature evolution of off-state current resembles that of the strongly thermionic MIS devices presented in the following section. Per capacitive scale-length theory, ultra-thin body devices should be electrostatically "well-behaved" down to short channel lengths (relative to gate oxide thickness). Figure 2c presents both linear and logarithmic transfer characteristics of a thin 5-layer FET (tch ≈ 3.5 nm) with a channel length of ~100 nm that maintains predominantly n-type transport with peak Ion/Ioff ≈ 10 5 (~10 8 ) at 300 K (78 K). Forward/reverse sweeps demonstrate low hysteresis in these air-stable, AlOx-encapsulated devices. Drive currents double at 78 K to 300 µA/µm at VDS = 1 V, surpassing the previously highest reported ~100 µA/µm for p-type transport in substantially thicker, uncapped MoTe2 samples. 24
We extract the MoTe2 channel sheet resistance using TLM methodology, from which we estimate intrinsic channel electron mobilities µe (Figure 2d). 41 Electron mobilities saturate around 25-36 cm 2 V -1 s -1 at high carrier density at 300 K, matching the range of peak hole mobility from 4-point measurements on intrinsically p-type samples. 23 Our electron mobilities rise to 129-137 cm 2 V -1 s -1 at 78-80 K, decaying with a T -1 dependence softer than the canonical ~T -1.6 evolution expected from homopolar optical phonon scattering under low impurity concentrations. 57 This reduction in temperature coefficient is likely due to encapsulation with our high-κ AlOx, where the enhanced dielectric environment dampens low-lying optical phonons, limiting the energetic cross-section for carrier scattering. Short-channel devices experience high-field current saturation for VDS ~ 2.5 V, with record room-temperature current densities >200 µA/µm, increasing by a further ~50% to >400 µA/µm at cryogenic temperatures from reduced RC and enhanced µe (Figures S7 and S11 in the Supporting Information). Figure 2e presents a low-temperature ID vs. VDS sweep of a typical few-layer, short-channel (Lch ~ 80 nm) device, achieving a record saturation current density of 420 µA/µm at 78 K, approaching that of chemically doped or ultra-short channel MoS2. 58 Low-temperature current densities saturate just below 450 µA/µm in bulk samples (Supporting Information, Section 7). We note that maximum achievable saturation current densities are most likely limited by device self-heating, and could be further increased in transistors that are better heat sunk, or functioning in pulsed (digital) operation with low duty cycles. 21,59 We extract contact resistance via TLM measurements on samples containing 4-6 separate channel lengths, from Lch = 80 nm to over 2 µm. This followed prior methodology demonstrated for MoS2 transistors, 41 extracting RC from the intercept of net resistance vs. channel length at fixed channel sheet charge density n2D (derived from gate overdrive, accounting for channel quantum capacitance; details in Supporting Information, Section 4). Figure 3a presents extracted RC for two AlOx-encapsulated, multi-layer samples with metal contacts deposited in HV (2.2 × 10 -6 Torr, circular symbols) and just above UHV (2.2 × 10 -8 Torr, square symbols). Contact resistance ranges from 1.4-1.5 kΩ⋅µm at 78 K (n2D ≈ 10 13 cm -2 ), and up to ~2.0-2.25 kΩ⋅µm at 300 K. This is several times higher than the lowest reported RC for top metal contacts (UHV-deposited Au) on 2H-MoS2 (<750 Ω⋅µm), though it is consistent with projected values from Tsu-Esaki/Arrhenius models assuming comparable Schottky barrier heights on such devices (i.e., Φn ~ 300 meV for MoTe2 versus the extracted ≤ 150 meV for MoS2). 41 We also note that MoS2 is projected to have a local minimum in RC around 200 K, attributed to competing effects of reduced thermionic emission (less injected charge from metal into semiconductor) and lower access resistance due to less phonon scattering at lower temperatures. 41 In contrast, our contact resistance to MoTe2 monotonically decreases with cooling down to 78 K, implying that access resistance dominates our contacts, which is consistent with the larger Φn to MoTe2 suppressing the contribution of thermionic emission relative to field emission in the on-state. We extract room-temperature transfer lengths LT ≈ 120 nm from TLM analysis (Supporting Information, Section 4), a three-fold increase over Au/MoS2 contacts. 41
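The TLM extraction described above amounts to a linear fit of width-normalized total resistance versus channel length. The sketch below illustrates the procedure with synthetic data chosen only to resemble the quoted magnitudes (RC of roughly 2 kΩ·µm and mobility of a few tens of cm²/Vs); none of the numbers are measured values from this work.

```python
import numpy as np

# Minimal TLM sketch: R_TOT = R_S * L_ch + 2*R_C (width-normalized).
# Slope -> sheet resistance R_S; half the intercept -> contact resistance R_C;
# R_S plus an assumed sheet density -> effective mobility.
# All data points and n_2D below are synthetic placeholders.

Q = 1.602e-19                                     # C
L_ch_um = np.array([0.1, 0.3, 0.6, 1.0, 2.0])     # channel lengths (um)
R_tot = np.array([6.1, 10.2, 16.5, 24.8, 45.6])   # kOhm*um, synthetic "data"

slope, intercept = np.polyfit(L_ch_um, R_tot, 1)
R_sheet = slope * 1e3          # kOhm/sq -> Ohm/sq
R_c = intercept / 2.0          # kOhm*um per contact

n_2d = 1.0e13                  # cm^-2, assumed fixed carrier density
mu_eff = 1.0 / (Q * n_2d * R_sheet)   # cm^2 V^-1 s^-1

print(f"R_S ~ {R_sheet / 1e3:.1f} kOhm/sq")
print(f"R_C ~ {R_c:.2f} kOhm*um")
print(f"mu  ~ {mu_eff:.0f} cm^2/Vs at n_2D = {n_2d:.0e} cm^-2")
```

Repeating this fit at several fixed gate overdrives (i.e., fixed n2D) yields the carrier-density dependence of RC and µe plotted in Figures 2d and 3a.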
Our metal contact lengths LC = 700-800 nm ≫ LT ≈ 120 nm at 300 K and LT ≈ 380 nm at 77 K, so our TLM structures and RC extractions should not be significantly impacted by current crowding. 41 More unusual is the apparent independence of RC from the metal deposition pressure, with the RC curves in Figure 3a overlapping within extraction uncertainty despite two orders of magnitude difference in reactor pressure. This trend nominally persists for UHV-deposited Ag (≤ 5 × 10 -9 Torr; Supporting Information, Section 5). Such invariance contradicts expectations of lower RC with cleaner interfaces (from lower reactor pressure) for idealized, inert top metal contacts on a van der Waals crystal, suggesting significant chemical modification of the top MoTe2 layer(s) following metallization. However, this is not unexpected given earlier knowledge of MoS2-metal interactions, 60,61 with the extent and composition of interfacial compounds mediated by both metal-chalcogen reactivity and metal evaporator chamber pressure. In particular, the chemical instability of MoTe2 and the persistence of numerous silver-telluride compounds support this conjecture, with prior surface studies of high-temperature Ag nucleation on 1T'-(β-) MoTe2 under UHV detecting covalent bonding and growth of epitaxial Ag2Te islands. 62 However, the 1T' phase of MoTe2 exhibits greater instability than its 2H allotrope, and therefore chemical perturbation by Ag deposition may not manifest analogously on 2H-MoTe2. To verify such interfacial reactivity, we performed high-resolution X-ray photoelectron spectroscopy (XPS) on Ag/MoTe2 interfaces, which reveals interfacial Ag(Mo)-Te compounds forming upon annealing (Figure 3b,c; details in the Supporting Information, Section 6). However, it is unclear how the AlOx encapsulation on the devices fabricated in this work affects iodine diffusion. Nonetheless, the intermetallic formed at the Ag-MoTe2 interface via thermal annealing dominates interfacial chemistry, and thus contact resistance, which may explain why we see no dependence of RC on metallization pressure. This is in contrast to the case of Au on MoS2, whose interface remains inert even during UHV deposition, 61 exhibiting several times lower RC for Au contacts deposited at UHV compared to HV due to cleaner interfaces. 41 (Figure 4, partial caption: Spectra on the bottom are from exfoliated MoTe2 prior to processing, as well as pure Sc metal. The green signal overlaid on the measured spectra in black is the fitted envelope. We observe significant ScTex formation upon deposition, followed by progressive oxidation of Sc from trace O- and OH- species after annealing at 150 °C and 250 °C.) Scandium Contacts with and without h-BN Insulation Ultra-low work function metals could facilitate direct charge injection into the conduction band if the contacts are de-pinned. This has been demonstrated by n-type conduction in typically p-type carbon nanotubes and black phosphorus with Sc or Er electrodes, 64,65 and significant reduction of the Schottky barrier height in Sc-contacted n-type MoS2. 52
To address this first challenge, we evaluated scandium (Sc) contacts to MoTe2. The extreme sensitivity of Sc to trace oxygen necessitated deposition in a custom-built UHV chamber (e-beam evaporation pressure in the low ~10 -9 Torr range, idle base pressure ~5 × 10 -11 Torr). 25 nm of Sc was capped with 45 nm Cr and 30 nm Ag, all evaporated at 2.0 Å/s. Subsequent ALD AlOx capping protects both the MoTe2 and the contacting metals from ambient oxidation. Despite the apparent air-stability of encapsulated devices, initial results on multi-layer channels (Figure 4a, on 90 nm SiO2 back-gate oxide) were markedly inferior to the conventional metal contacts of Figure 1, with lower current densities, increased hysteresis, and ambipolarity suggesting mid-gap pinning. Analytical subthreshold analysis suggests a considerable Φn ≈ 360 meV for such devices (Figure 1d), exceeding values obtained for Ag. Moreover, no significant current above the measurement noise floor was recorded for thin-channel transistors, i.e., 5 layers and fewer. Thus, it appears that the poor device performance and the unexpectedly large electron Schottky barrier of Sc contacts to MoTe2 result from a severe chemical reaction at this interface, consuming multiple MoTe2 layers. 65 Using XPS, we find that metallic Sc reacts with MoTe2 when deposited at room temperature in either UHV or HV, forming ScTex, ScxOy, and Sc(OH)x (Figure 4b), with peaks at 399.69, 400.86, and 402.42 eV, respectively. Any metallic Sc remaining in the deposited film is below the limit of XPS detection, as evidenced by the absence of the expected metallic Sc chemical state at ~398.7 eV. 66 Even in UHV conditions at room temperature, Sc spontaneously reacts with adsorbed species on the MoTe2 surface and background gases within the deposition chamber to form scandium oxide. Fixed charges in oxygen-deficient ScxOy and/or Sc(OH)x incorporated within the deposited Sc contact could contribute to hysteresis. Formation of Sc oxides and/or hydroxides is thermodynamically favorable (Gibbs free energy ΔG °f,ScxOy = -629.94 kJ/mol, ΔG °f,Sc(OH)x = -411.15 kJ/mol), 67 presumably more favorable than the persistence of Sc-Te bonds (see Supporting Information Figure S10a for more details). Note the 0.29 eV shift to lower binding energy exhibited by the bulk MoTe2 states detected after annealing at 250 °C (Figure S10a). To mitigate both the interfacial reaction and the strong Fermi level pinning, we employ a metal-insulator-semiconductor (MIS) contact geometry. [69][70] The contact metal and semiconductor are physically offset by a nanometer-scale insulating tunnel barrier, limiting the impact of metal-induced gap states and surface dipoles on Fermi level pinning. Although typical insulator layers are often oxides, here we must avoid additional oxygen species, and instead we employ hexagonal boron nitride (h-BN) as an atomically-flat, wide-gap insulator (EG ≈ 6.0 eV), 71 as shown in Figure 5b,c. Recent investigations into the h-BN MIS contact structure have demonstrated improved RC and reduced Schottky barrier height to MoS2 and black phosphorus FETs, 42,72,73 including for nominally-inert Au and Ni electrodes, provided the interlayer thickness is restricted to mono- or bilayers (i.e., in the tunneling-dominated limit). For Co electrodes, recent measurements confirm theoretical predictions of an interfacial dipole effect further lowering the effective metal work function by >1 eV in the presence of an h-BN monolayer. 43,72,73 Moreover, h-BN may replicate the role of atomically-thin graphene as a metal diffusion barrier 74 and a chemically inert passivation layer for 2D electronics, 71 preventing Sc reactions from scavenging Te in the underlying MoTe2.
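As a rough illustration of why the h-BN interlayer must remain in the mono- to bilayer limit, the sketch below evaluates the WKB attenuation exp(-2κd) through an h-BN tunnel barrier. The barrier height (~3.5 eV), tunneling effective mass (0.5 m0), and 0.33 nm layer spacing are generic assumptions for metal/h-BN/TMD stacks, not values extracted in this work.

```python
import numpy as np

# Rough WKB estimate of tunneling attenuation through an h-BN interlayer.
# Barrier height, tunneling mass, and layer spacing are illustrative assumptions.

HBAR = 1.055e-34   # J*s
M0 = 9.109e-31     # kg
Q = 1.602e-19      # C

phi_bn = 3.5                # eV, assumed tunnel barrier height
m_tun = 0.5 * M0            # assumed tunneling effective mass
kappa = np.sqrt(2.0 * m_tun * phi_bn * Q) / HBAR    # decay constant, 1/m

for n_layers in (1, 2, 3):
    d = n_layers * 0.33e-9                           # assumed ~0.33 nm per layer
    print(f"{n_layers} layer(s): T ~ exp(-2*kappa*d) = {np.exp(-2.0 * kappa * d):.1e}")
```

Each additional layer costs roughly two orders of magnitude in transmission under these assumptions, consistent with the observation that thicker interlayers leave the tunneling-dominated regime and degrade the contact.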
Continuous, centimeter-scale h-BN monolayers were grown on re-usable Pt foils via low-pressure CVD (Supporting Information, Section 8). These were transferred onto exfoliated MoTe2 flakes on separate 90 nm SiO2 back-gates in a dry, polymer-stamp based process requiring <3 minutes of air exposure on an 80°C hot plate. After polymer removal and a 200°C anneal for 1-3 hours in nitrogen ambient, the Sc/Cr/Ag contact stack was deposited as before. The resulting h-BN/Sc MIS-contacted transistors exhibit unipolar n-type transport, with high-field saturation currents comparable to those with Ag contacts, despite a longer channel and thicker back-gate oxide (90 vs. 30 nm SiO2). The strong suppression ("flattening") of reverse leakage current and excellent high-field saturation represent the most unipolar MoTe2 transistors reported to date. In contrast, previous n-type MoTe2 devices formed by electrostatic or molecular doping displayed significant p-type reverse leakage as a function of negative VGS. 34,35 Conversely, transistors made from intrinsically n-type material operate in depletion mode and barely reach current minima across wide VGS sweeps, [36][37][38] indicative of a large negative threshold voltage shift rather than de-pinning at the contacts. We note that the strongly suppressed off-state (hole) current prevented reliable extraction of Schottky barrier heights by the model described earlier. However, such unipolar behavior suggests significantly de-pinned charge injection, with a predominantly thermionic character enabling conventional Arrhenius extrapolation of Φn (see Supporting Information, Section 3). Figure 6 presents sample fits at various VGS for a Lch ~1 µm, 7-layer unipolar n-type MoTe2 device with h-BN/Sc contacts; a good match is achieved for high-temperature data between 300-450 K, for which thermionic and thermally-assisted-tunneling injection is enhanced. An effective, high-temperature electron Schottky barrier height is extracted for the metal/h-BN/van-der-Waals gap system, 42 as determined from VGS = VFB, beyond which Φ(VGS) loses its linear dependence on gate voltage in conventional barrier models. The inset of Figure 6 presents such an extraction, with a conservative upper bound of Φn ≈ 90 meV. We consistently extracted a range of Φn ≈ 80-100 meV across several few-layer, long-channel samples. This represents an effective 200 meV reduction over the average electron barrier of Ag contacts, within the range of Schottky barriers for unipolar MoS2 n-FETs (Φn ≤ 150 meV). 41,52 Nonetheless, this represents a non-trivial barrier for electron injection, indicating de-pinning is incomplete in light of the theoretical band alignment of Figure 5d. Ultra-low work function metals can move the charge neutrality level of MoTe2 closer to the conduction band, but only when their high reactivity is mitigated by an inert barrier; further study of other metal/h-BN combinations may establish true pinning factors for this semiconductor, wherein the metal work function is not modified by the presence of a local metal-telluride compound. Summary In summary, we demonstrated air-stable, high-performance transistors of semiconducting MoTe2 in the few-layer limit. These were achieved by air-free processing and AlOx encapsulation, enabling a study of multiple contact metals informed by chemical profiling of metal/MoTe2 interactions at contact interfaces.
We achieved the highest performance with Ag contacts despite (or perhaps because of) the formation of an Ag-Te contact interlayer, achieving record current densities >400 µA/µm at 80 K. We also achieved the most unipolar n-type devices with Sc contacts employing an h-BN contact interlayer. This was required to prevent interfacial Sc-Te reactions, while also functioning as an MIS tunnel barrier to partially de-pin the Fermi level. Together, these are the highest-performance unipolar n-type MoTe2 transistors demonstrated to date, complementary to prior published p-type MoTe2 devices. More generally, we found strong evidence of metal-chalcogen reactivity as a key engineering parameter in contact design, demonstrating strategies for both the exploitation and prevention of such reactions at these interfaces. Methods We exfoliated MoTe2 flakes from synthetic bulk crystals, grown by chemical vapor transport with a molecular precursor source (see Supporting Information, Section 1), onto SiO2/p ++ Si substrates with oxide thickness tox = 30 (for some Ag-contacted devices) or 90 nm (other contacts). Following our previous method for passivating oxygen-sensitive chalcogenides, 21,39 we performed all processing in the inert atmosphere of nitrogen gloveboxes (O2, H2O < 3 ppm), coating samples in a protective PMMA layer that also served as resist for electron beam (e-beam) lithography. Following patterning of top-contacts, we developed our samples in air and quickly transferred them to e-beam metal evaporators, limiting the exposure of our contact regions to ambient atmosphere to only 1 to 5 minutes. To prevent channel oxidation, we performed metal lift-off in a nitrogen glovebox followed by in situ encapsulation with 200 Å of AlOx via benign, low-temperature atomic layer deposition (ALD; 150°C thermal process using H2O and trimethylaluminum precursor) to act as an oxygen and moisture barrier. We perform a final 250°C anneal in our vacuum probe station (~10 -5 Torr) to suppress hysteresis in electrical measurements. Devices were measured in a Janis cryogenic probe station with a Keithley 4200-SCS parameter analyzer. Further processing details are outlined in Section 2 of the Supporting Information, including Raman spectra of encapsulated flakes. XPS analysis and sample preparation are described in detail in Section 6 of the Supporting Information. h-BN growth, transfer and characterization are covered in Section 8. TEM cross-sections were prepared by Evans Analytical Group using a FEI Dual Beam FIB/SEM and both Tecnai TF-20 and Tecnai Osiris FEG/TEM units at 200 kV. ASSOCIATED CONTENT Supporting Information The Supporting Information is available free of charge on the ACS Publications website at DOI: XXXXX. Bulk Crystal Growth Mirroring a technique previously applied towards facile synthesis of WTe2 crystals, bulk MoTe2 exfoliation sources were grown by re-crystallizing a commercial molecular powder using closed-tube chemical-vapor transport (CVT).
1 Molybdenum ditelluride powder (ESPI Metals MoTe2, 99.9%) was sealed in quartz ampoules with elemental iodine as a transport agent (Alfa Aesar, 99.99+%) at 5 mg/cm 3 , evacuated under argon. To remain below the ~850-900 °C transition range for the 1T'-(β-) semimetallic polytype, 2,3 the central hot zone was kept at 800 °C, maintaining a 100 °C thermal gradient along a ~11 cm transport length during 14 days of growth. Despite a lower base temperature than WTe2 CVT, a high yield of millimetric crystal platelets was achieved across multiple growths (Figure S1a), with layered structure evident in scanning electron microscopy (SEM) micrographs of sheet-like gradations in edge terraces (Figure S1b). Scanning tunneling microscopy (STM) studies confirmed crystal composition and quality consistent with commercially-available synthetic samples, with trace levels of silicon and iodine near bulk surfaces incorporated from the growth ampoule and flux agent, respectively. Air-Free Device Fabrication and Characterization We fabricated top-contacted, back-gated MoTe2 field-effect transistors (FETs) avoiding any air exposure to channel regions, as outlined in a prior study on encapsulated WTe2. 1 We exfoliated flakes onto p ++ Si chips with 30 or 90 nm SiO2 (dry thermal growth) in an N2 glovebox with O2, H2O < 1 ppm using a low-residue thermal release tape (NittoDenko Revalpha series). After an acetone/2-propanol solvent bath, chips were coated in situ with a 300 nm layer of 950k polymethyl methacrylate (PMMA) (MicroChem A5), acting as a protective film while optically searching for suitable flakes to make devices, and as a resist layer for a two-step electron-beam lithography process (patterning alignment markers then contacts in the same resist layer; Raith 150 at 20 kV). We developed contacts in open air and rapidly transferred chips to any of several electron-beam metal evaporators, with deposition pressures spanning the high vacuum (HV) to ultra-high vacuum (UHV) range as outlined in the manuscript; only contact regions saw ambient atmosphere, for sub-5 minute periods. Following metallization, we performed lift-off by acetone/2-propanol soaking in another N2 glovebox (O2, H2O < 3 ppm) interfacing a Savannah thermal atomic layer deposition (ALD) reactor used to immediately encapsulate devices with 200 Å of amorphous AlOx. To minimize TMD oxidation during ALD, AlOx growth was conducted at a lower temperature (150 °C), using a less reactive H2O reagent, after first saturating surfaces with 10 leading pulses of trimethylaluminum (TMA) metal precursor. Finally, we vacuum-annealed devices for 1 hour at 250 °C in a Janis cryogenic probe station, cooling to ambient over several hours under vacuum levels below 5×10 -5 Torr prior to electrical measurement. Our FETs demonstrate stable performance over weeks of storage. Smooth nucleation of the capping alumina allowed us to determine MoTe2 channel layer-count from atomic force microscopy (AFM) topography profiles (Veeco Dimension 3100 in soft-tapping mode). We determine layer-count from channel thickness assuming an atomic interlayer spacing of ~0.7 nm with an extra ~0.2-0.3 nm offset for the first layer, consistent with recent reports and bulk lattice constants. 4,5
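A minimal sketch of the layer-count rule stated above (0.7 nm per layer plus a ~0.2-0.3 nm first-layer offset) is shown below; the example step heights are arbitrary illustrative values, not AFM data from this work.

```python
# Layer-count estimate from an AFM step height, following the rule above:
# ~0.7 nm per MoTe2 layer plus a ~0.25 nm offset for the first layer on SiO2.

def mote2_layer_count(t_ch_nm, interlayer_nm=0.7, first_layer_offset_nm=0.25):
    return max(1, round((t_ch_nm - first_layer_offset_nm) / interlayer_nm))

for t in (3.5, 5.0, 7.5, 10.0):   # example step heights in nm (illustrative)
    print(f"t_ch = {t:4.1f} nm -> ~{mote2_layer_count(t)} layers")
```

For example, a ~3.5 nm step maps to roughly 5 layers and a 5-10 nm channel to roughly 7-14 layers, matching the thickness-to-layer assignments quoted in the main text.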
Optical transparency of our alumina capping enabled us to verify layer-specific vibrational Raman modes, as presented in Figure S2 for encapsulated samples, collected using a 1.25 mW, 532 nm laser (Horiba LabRam; renormalized to peak intensity). The laser spot-size was confined to several µm, with no oxide or flake damage detected by subsequent optical and AFM profiling. Both the characteristic cross-plane A1g and in-plane E 1 2g modes appear at 173 and 234 cm -1 respectively, matching published studies on freshly-exfoliated few-layer crystals, 4,6 undergoing minimal softening or stiffening (<1 cm -1 ) approaching 5 layers (5L) as in previous reports. This includes a reduction of A1g intensity in the 5-layer limit (tch ≈ 3.5 nm) corresponding to the peak-splitting recorded across 3-10L samples. An additional mode around 289-291 cm -1 matches reports of a cross-plane B 1 2g vibration, in which metal and tellurium atoms oscillate in opposite directions perpendicular to the layer plane. 4 Thought to be inactive in both monolayer and bulk, it emerges with increasing relative intensity down to bilayer thickness, matching the general trend in Figure S2. A novel persistence of this mode is recorded here in bulk capped samples (tens to hundreds of nanometers thick), albeit with low intensity. An absence of surface disorder from ambient oxidation may contribute to this effect, as suggested by the rapid disappearance of Raman peaks in exposed WTe2, 7 alongside an enhanced local dielectric constant under the AlOx. Similar activation of nominally bulk-inactive modes has been observed in WSe2/h-BN stacks, 8 attributed to electron-phonon coupling at interfaces, suggesting an influence of encapsulating layers on the Raman spectra of 2D crystals. Schottky Barrier Extraction Schottky barrier extraction from 2D FETs has conventionally assumed that the reverse leakage (i.e., subthreshold off-state) current is purely thermionic. [10][11][12][13] Arrhenius analysis of thermionic emission is then used to estimate the effective Schottky barrier. The current density per unit width from thermionic emission at a metal-semiconductor junction is modelled as 14 J = A* T^2 exp(-qΦB/kBT) [exp(qV/(n kBT)) - 1] (S.1), where q is the unit electron charge, kB is the Boltzmann constant, T is temperature, A * is the effective Richardson's constant (A * = 4πm * qkB 2 /h 3 ), ΦB is the effective barrier height from the metal Fermi energy to the semiconductor conduction band, V is the voltage applied at the junction, and n is the metal-semiconductor ideality factor (accounting for non-idealities of the Schottky barrier height such as image force lowering). For a MOSFET in subthreshold, the voltage dropped at the metal-semiconductor contact is roughly the drain bias, V ≈ -VDS. ΦB is furthermore a function of the gate-source voltage, VGS. Multiplying together the terms with exponential qV/(kBT) and assuming an ideality factor n ≈ 1, we can rewrite (S.1) in the form commonly used to model subthreshold thermionic emission in a MOSFET: ID = A* T^2 exp(-qΦ(VGS)/kBT) [1 - exp(-qVDS/kBT)] (S.2). The polarity of ID is arbitrarily flipped here, as (S.1) is current flowing out while (S.2) is now current flowing into the contact. To extract the effective Schottky barrier height, the drain voltage is biased such that qVDS ≫ kBT, so [1 - exp(-qVDS/kBT)] ≈ 1 across the range of interest for T.
Then taking the natural log of ID/T 2 yields an equation linear in 1/T, where the slope is proportional to the effective Schottky barrier height Φ(VGS) at a specific gate bias: ln(ID/T 2 ) = const - qΦ(VGS)/(kB T) (S.3). The true Schottky barrier height ΦSB is obtained at the flat-band bias, VGS = VFB, beyond which Φ(VGS) is no longer linear with gate voltage due to non-negligible tunneling contributions to the reverse current. In the case of an ambipolar transistor, charge-neutrality level pinning sufficiently deep in the band gap implies that the reverse current will never reach the exclusively thermionic regime, and it will instead be dominated by tunneling injection. This is outlined in the band diagrams in Figure S3 across various gate biasing regimes, demonstrating joint contributions of thermionic and tunneling currents to the electron/hole barrier heights Φn / Φp. Arrhenius analysis will simply incorporate the tunneling contribution into a thermionic calculation, underestimating the barrier height significantly. 15 We therefore chose to perform more comprehensive barrier extractions using an analytic Schottky model based on Landauer transport theory, recently developed for ambipolar, 2D black phosphorus FETs. 16 This model assumes no voltage drop across the channel and that the current characteristics are dominated by the contacts. Fits are made using the subthreshold regime of ID vs. VGS sweeps of MoTe2 transistors, centered on the point of minimum off-current. Width-normalized hole and electron contributions to the current take the Landauer form I ∝ ∫ T(E) M(E) [fD(E) - fD(E + qVDS)] dE, integrating the energy E relative to the conduction and valence band edges, EC and EV. T(E) denotes the carrier tunneling transmission in the Wentzel-Kramers-Brillouin (WKB) approximation, fD(E) is the Fermi-Dirac distribution, and M(E) are electron/hole modes in the conduction/valence bands. The modes follow the carrier momentum along the valence and conduction bands, which becomes complex (evanescent) within the band gap and thereby defines the WKB tunneling decay. Both thermionic and tunneling injection of electrons and holes are thus considered across subthreshold sweeps. We simultaneously fit both Φn and Φp for ~1 µm long, AlOx-capped MoTe2 FETs on 90 nm SiO2, at low drain bias VDS = 100 mV. In-plane bulk MoTe2 carrier masses of me * ≈ 0.49m0 and mh * ≈ 0.61m0 were used, extracted from parabolic fits to a DFT band model, calculated in VASP PBE with spin-orbit coupling. Sample fits for Ti and Ag are shown in Figure S4a, in good agreement with subthreshold current data. Figure S4b presents the reconstructed band gap values EG ≈ Φn + Φp from Schottky barrier fits to silver-contacted devices of varying thickness. This model generally captures the trend of increasing electronic band gap with reduced layer count, albeit with a 100-150 meV underestimate of established values (~0.9 to 1.2 eV). For more n-type Ag-contacted devices, however, gradual subthreshold p-type activation may produce underestimates of Φp (with fitting to this weaker branch dominating uncertainty). Local variation in layer number across a sample may also make a smaller contribution to the extraction uncertainty. Figure S5 presents a summary of extracted barrier heights and the pinning factor S = dΦn/dΦM from Figure 1d in the manuscript, compared to prior published Arrhenius-based estimates. 9-11, 13, 17 This comparison reveals the prevalence of p-type transport in uncapped MoTe2 devices. Additionally, it demonstrates consistent underestimates of the true barrier height under the purely thermionic contact models employed in other studies, with reported Φn or Φp much closer to band edges despite clearly ambipolar FET transport.
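The Arrhenius procedure of Eq. (S.3) reduces to a linear fit of ln(ID/T²) against 1/T at each gate bias. The sketch below demonstrates that fit on synthetic, purely thermionic currents generated from an assumed 90 meV barrier, so it simply recovers the input value; real data additionally require identifying the flat-band bias as described above, and the temperatures and prefactor here are illustrative assumptions.

```python
import numpy as np

# Minimal Arrhenius-fit sketch for Eq. (S.3): slope of ln(I_D/T^2) vs. 1/T
# gives -q*Phi/k_B. Currents are synthetic, generated from an assumed barrier.

KB_EV = 8.617e-5                                # Boltzmann constant, eV/K
T = np.array([300.0, 350.0, 400.0, 450.0])      # measurement temperatures, K

phi_true = 0.09                                 # eV, assumed effective barrier
I_D = 1e-3 * T**2 * np.exp(-phi_true / (KB_EV * T))   # synthetic thermionic current

slope, _ = np.polyfit(1.0 / T, np.log(I_D / T**2), 1)
phi_extracted = -slope * KB_EV                  # eV

print(f"extracted Phi = {phi_extracted * 1e3:.0f} meV (input {phi_true * 1e3:.0f} meV)")
```

When a tunneling component is present, the same fit would attribute that extra current to thermionic emission and return a barrier smaller than the true one, which is the underestimate discussed above.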
Transfer Length Measurement Analysis We use Transfer Length Measurement (TLM) analysis to simultaneously measure contact resistance and intrinsic mobility in MoTe2 FETs, using multiple patterned channel lengths Lch (minimum 4-6 channels, from 80 nm to >2 µm in length). We use average channel widths for all calculations, determined from post-measurement SEM imaging. Following a methodology established for exfoliated MoS2 FETs, 18 we model the width-normalized net channel resistance RTOT as a linear function of channel length Lch: RTOT = RS Lch + 2 RC. Sheet resistance RS is extracted from the linear slope and contact resistance RC from half of the RTOT intercept in the TLM plots, such as Figure S6c. We perform extractions at fixed 2D sheet charge density, accounting for the quantum capacitance of the channel (Cdq) in series with the gate oxide capacitance (Cox). At high carrier density, the channel 2D electron sheet density is given by n2D ≈ (1/q)[Cox Cdq/(Cox + Cdq)](VGS - VT - Vcrit), 19 where VT is the device threshold voltage, Vcrit is the critical gate voltage above which this approximation is valid, Cox ≈ 116 nF/cm 2 for 30 nm SiO2 is the experimentally-verified capacitance (per area) of the global back gate, and Cdq is the degenerate limit of the channel quantum capacitance, given by Cdq = gS gV me* q 2 /(2π ħ 2 ) (S.11), where gV = 6 is the valley degeneracy for few-layer MoTe2 and me * ≈ 0.49m0 is the density-of-states effective mass. We calculate Cdq ≈ 197 μF/cm 2 , over ~1000x larger than Cox, and Vcrit ≈ 0.33 V. The conventional expression for n2D (no series Cdq and Vcrit = 0) is 30% larger at 10 12 cm -2 but only 2.5% larger at 10 13 cm -2 . Thus, our quantum capacitance correction is less than for single-layer MoS2, because few-layer MoTe2 has a smaller band gap and larger valley degeneracy, 19 which increases Cdq and reduces Vcrit. Furthermore, we perform all extractions at high carrier density (close to n2D ≈ 10 13 cm -2 ), where quantum capacitance has a minimal effect. Conventional bulk MOSFET theory poorly models VT in our devices due to their ultra-thin floating bodies. Traps and fixed charge in our AlOx encapsulation further complicate the observed VT. As such, we model the drain current with the most simplified first-order model of the linear VGS region, 14 ID = μeff Cox (W/Lch)(VGS - VT) VDS. This form is best for experimental fitting because it allows VT and the effective mobility μeff to act as fitting parameters, with fewer assumptions about precise channel physics. VT is estimated by linearly extrapolating ID(VGS) to the VGS-axis intercept, using a line tangent to the point of maximum transconductance ∂ID/∂VGS (using the reverse sweep). Figure S6c presents linear fits of width-normalized channel resistance at varying overdrive voltages for a 5-channel TLM sample (11-layer thick MoTe2) at room temperature. Both RC and RS are plotted in Figures S6d and S6e, with error bars delineating 95% confidence intervals of the linear least-squares fit. Extracted values saturate in the high carrier density limit (n2D ≥ 10 13 cm -2 ). 1 Converting RC to a specific contact resistivity ρC enables extraction of the transfer length LT, the characteristic length over which injected current decays into a contact: LT = sqrt(ρC/RS), with the width-normalized RC = sqrt(ρC RS) coth(LC/LT), where LC is the mean contact length, varied between 500-700 nm across devices. Sample extractions for a representative MoTe2 device in Figure S6h reveal room-temperature transfer lengths up to LT ≈ 115 nm, representing a nearly 3x increase over comparable Au/MoS2 contacts (LT ≈ 40 nm). 18
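Two of the numbers quoted above can be checked directly: the degenerate quantum capacitance Cdq of Eq. (S.11) and the transfer length LT. The sketch below reproduces Cdq ≈ 197 µF/cm² from the stated masses and degeneracies, and evaluates LT and the corresponding width-normalized RC for illustrative ρC and RS values chosen only to land near the quoted ~115 nm and ~2 kΩ·µm; those two inputs are assumptions, not extracted quantities.

```python
import numpy as np

Q = 1.602e-19      # C
HBAR = 1.055e-34   # J*s
M0 = 9.109e-31     # kg

# Degenerate quantum capacitance, Eq. (S.11): C_dq = g_s*g_v*m_e* q^2 / (2*pi*hbar^2)
g_s, g_v = 2, 6                  # spin and valley degeneracies used above
m_e = 0.49 * M0                  # density-of-states effective mass
C_dq = g_s * g_v * m_e * Q**2 / (2.0 * np.pi * HBAR**2)    # F/m^2
print(f"C_dq ~ {C_dq * 100:.0f} uF/cm^2")                   # 1 F/m^2 = 100 uF/cm^2

# Transfer length and width-normalized contact resistance from TLM quantities.
# rho_C and R_S below are illustrative assumptions, not extracted values.
R_S = 20e3                       # Ohm/sq, assumed on-state sheet resistance
rho_C = 2.6e-6                   # Ohm*cm^2, assumed specific contact resistivity
L_T = np.sqrt(rho_C / R_S)       # cm
L_C = 0.7e-4                     # cm, ~700 nm contact length
R_C = np.sqrt(rho_C * R_S) / np.tanh(L_C / L_T)             # Ohm*cm (per width)
print(f"L_T ~ {L_T * 1e7:.0f} nm,  R_C ~ {R_C * 1e4 / 1e3:.2f} kOhm*um")
```

Because LC ≫ LT for these illustrative values, coth(LC/LT) ≈ 1 and current crowding has little effect, mirroring the conclusion drawn from the measured devices.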
This value increases to over 300 nm below 80 K, nearly matching that of Au/MoS2 contacts in this lower temperature regime. However, there was a constant ~40% uncertainty in our LT extractions. As LT < LC across our entire measurement temperature range, we can assume current crowding is negligible in our electrical measurements. Ag Contacts Deposited at Ultra-high Vacuum (UHV) To fully elucidate the role of reactor pressure on Ag contacts to MoTe2, we fabricated devices with pure 40 nm Ag contacts deposited at rates exceeding 2 Å/s in a custom-built UHV metal evaporation chamber, at pressures of 5×10 -9 Torr and below. Figure S7 presents SEM micrographs (Figure S7a) and device characteristics (Figure S7b-e) of an encapsulated multilayer device following this process. A confluence of lower pressures, fewer incidental O- and H2O-species, and a lack of sample rotation in a line-of-sight e-beam evaporator contributed to poor metal nucleation across the 30 nm SiO2 substrate. This is evidenced in SEM profiles (Figure S7a) with localized discontinuities and agglomerations across the Ag leads. In contrast, all metal contacting the MoTe2 surfaces is continuous and more uniformly dispersed, indicating improved metal wettability and hinting at the interfacial chemical reactions outlined in the following sections. Despite metal constrictions potentially increasing lead resistance, TLM extractions produced room-temperature contact resistances matching the range of high-field RC values for contacts evaporated at one to three orders of magnitude higher pressure (contrast Figure S7d with Figure 3a of the manuscript). No measurable improvement in RC was found at 80 K. Short-channel saturation current densities (Figure S7c) are consistent with those of non-UHV devices. Room-temperature current densities in a Lch ~ 240 nm device reached ~230 µA/µm at VDS = 2.5 V, rising to ~350 µA/µm at T = 80 K. Shorter channel length patterning was limited by shadowing from the lack of sample rotation during deposition. Interface Chemistry Analysis as a Function of Deposition Chamber Ambient All electronegativity values below are reported according to the Pauling electronegativity scale, and all standard Gibbs free energy of formation (ΔG °f) values are reported per chalcogen or oxygen atom or single -OH group. Ag-MoTe2 The chemical states found at 227.94 and 40.11 eV in the Mo 3d and Te 4d core level spectra after exfoliation (not shown) and also following Ag deposition at room temperature (RT) are indicative of metallic behavior of a small concentration of MoTe2 within the probed region, associated with the Te excess typically observed in MoTe2 crystals. 21 These chemical states persist after Ag deposition regardless of reactor base pressure and are denoted in the corresponding spectra. No additional chemical states are detected following room-temperature Ag deposition under either ultra-high vacuum (UHV, base pressure < 2 × 10 -9 mbar) or high vacuum (HV, base pressure < 5 × 10 -6 mbar) conditions that would suggest the formation of reaction products. In addition, the bulk MoTe2 chemical states in these core levels do not exhibit shifts which would suggest Ag-induced band bending, which is in contrast with strong Fermi level pinning near the charge neutrality level of MoTe2 (4.77 eV) recently reported in the cases of other contact metals. 17 However, the devices discussed here are fabricated from multilayer MoTe2, whereas C. Kim et al.
17 observe strong Fermi level pinning in back gated, single layer MoTe2 devices.Therefore, direct comparison of contact performance may not be appropriate.In addition, a small concentration of native TeOx is detected on exfoliated MoTe2 (not shown) and persists throughout Ag deposition and post metallization annealing as evidenced by the chemical state detected between 43.70 and 44.13 eV in the Te 4d spectra obtained throughout the experiment. Following both Ag and Sc deposition, no evidence of e-beam related surface damage was observed across MoTe2 spectra, consistent with prior studies of comparable contact deposition on MoS2 and WSe2. 22,23 .The penetration depth of the most energetic characteristic X-rays of both metals exceeds 10 µm, far deeper than the <10 nm XPS probe depth and the thickness of relevant FET devices.Additionally, metal evaporation for device fabrication and XPS samples were all performed at relatively low rates at or below 0.1 nm/s, far lower than the rates (and resultant X-ray fluxes) associated with e-beam induced damage in thick gate oxides and underlying semiconductor channels.[24][25][26] Annealing under UHV at 150 °C drives reactions between Ag and MoTe2 resulting in the formation of a substantial concentration of intermetallic MoxAg1-xTe as evidenced by the high intensity chemical states at 227.86 and 40.06 eV regardless of reactor base pressure.MoxAg1-xTe interfaces with MoTe2, while unreacted metallic Ag lies on the surface according to comparisons of relevant core level spectra obtained at two different takeoff angles (not shown here).The MoxAg1-xTe chemical states detected at low binding energy from the bulk MoTe2 states in the Mo 3d and Te 4d core level spectra following the 250 °C anneal decrease in intensity by factors of 4 and 16 in the cases of Ag deposited under HV and UHV, respectively, presumably due to loss by desorption.This likely does not occur in the devices discussed here as they are fabricated with 40-60 nm thick Ag contacts and capped with a 20 nm AlOx film, which would both prevent contact metal desorption (as is observed in interface chemistry study with 1 nm Ag film employed) and limit perturbations at elevated temperatures to intermixing between Ag, I, and MoTe2.Much of the deposited Ag and related reaction products (MoxAg1-xTe, AgI) are desorbed in-situ during annealing.This is possibly due to the thermodynamically favorable formation of a substantial concentration of AgI during 150 °C anneal and desorption during 250 °C anneal made possible by the extremely thin Ag film. Iodine, originating from the growth process, is present in a concentration near the limit of XPS detection (~0.1 atomic %) following Ag deposition at RT under UHV conditions (Figure S8c).Post metallization annealing at 150 °C results in the accumulation of a substantial amount of iodine within the vicinity of the Ag-MoTe2 interface.The residual carrier gas primarily retains its original chemistry as evidenced by the binding energy of the most intense chemical state (619.67 eV), 27 however the elevated temperature drives increased concentration of AgI and the formation of bonds between iodine and oxygen based adsorbates (621.35eV) presumably at the Ag-MoTe2 interface. 
28 Interestingly, iodine is below the limit of detection following annealing at 250 °C, suggesting this step is enough to cause substantial iodine out-diffusion from the sample. The same is expected in the devices discussed here, as they experience a similar thermal history to the samples fabricated for interface chemistry analysis. The evolution of chemical states associated with iodine detected in the I 3d5/2 core level spectra obtained from the second MoTe2 sample throughout Ag deposition under HV and subsequent post-metallization annealing (Figure S8d) is analogous to that shown in Figure S8c. Annealing at 150°C catalyzes reactions between Ag and MoTe2, drives substantial formation of Ag-I bonds, and facilitates I migration to the Ag-MoTe2 interface. Iodine is presumably drawn to the Ag-MoTe2 interface during annealing at 150°C, as the intensities of the AgI chemical state in the 'UHV' and 'HV' Ag 3d core level spectra (Figure S9) exhibit marked increases compared with that detected following Ag deposition at room temperature. It is possible that the formation of intermetallic Ag1-xMoxTe is catalyzed by I2 migration to the Ag-MoTe2 interface at elevated temperature, where chemical perturbation of the interface increases in severity with increased concentration of migrating I2. The concentration of Ag1-xMoxTe relative to metallic Ag under UHV conditions is 3.4:1, while that under HV conditions is 1.1:1. This seemingly coincides with the 2.7× higher I2 concentration detected in the 'UHV' sample compared with that in the 'HV' samples after annealing at 150 °C. Interfacial Ag-OH bonds formed upon Ag deposition are not affected by annealing, as the intensity of the corresponding chemical states in the Ag 3d core level spectra does not change upon annealing relative to the total Ag 3d intensity. Sc-MoTe2 The MoTe2 employed here exhibits slightly less severe Te excess, with a Te:Mo ratio of 2.2 as indicated by the bulk MoTe2 states detected from the exfoliated sample. Low binding energy states denoted by '**' in both the Mo 3d and Te 4d core level spectra (228.08 and 40.15 eV, respectively) obtained from exfoliated MoTe2 correspond with metallic states associated with Te excess. 21 Sc aggressively reacts with MoTe2 when deposited under UHV conditions, completely reducing the top-most MoTe2 to form metallic Mo and ScTex, as evidenced by additional chemical states beyond those of bulk MoTe2 (Figure S10a). Interestingly, interactions between Sc and iodine are below the limit of detection by XPS following either deposition at RT or post-metallization annealing (Figure S10b). Residual iodine is detected at 619.80 eV following Sc deposition. Similar to Ag contacts, annealing at 150 °C drives iodine aggregation within the vicinity of the Sc-MoTe2 interface. However, dissimilar to Ag contacts, no additional chemical states in the I 3d5/2 core level spectrum are detected aside from that representing I2. This suggests reactions between Sc and excess Te in MoTe2 or adsorbed gaseous species on the MoTe2 surface are more energetically favorable than the formation of Sc-I bonds. Sc getters the adsorbed species initially present on the MoTe2 surface, preventing the formation of I-O bonds, which are observed at the Ag-MoTe2 interface following annealing at 150 °C. Interestingly, the concentration of iodine remains above the limit of XPS detection following annealing at 250 °C, unlike in the Ag-MoTe2 case. It is possible that the diffusion rate of iodine through the ScxOy/Sc(OH)x film formed in situ is far slower than that through Ag.
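The intensity ratios quoted above (e.g., 3.4:1 intermetallic to metallic Ag) come from deconvolving overlapping core-level components and comparing their fitted areas. The sketch below shows the generic idea on a synthetic two-component Ag 3d5/2-like spectrum; the binding energies, widths, and noise are made up for illustration, and the actual analysis in this work used dedicated fitting software (AAnalyzer) with more realistic line shapes and backgrounds.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch of two-component core-level deconvolution and area ratio.
# The synthetic spectrum, peak positions, and widths are illustrative only.

def gauss(E, amp, center, sigma):
    return amp * np.exp(-0.5 * ((E - center) / sigma) ** 2)

def two_components(E, a1, c1, s1, a2, c2, s2):
    return gauss(E, a1, c1, s1) + gauss(E, a2, c2, s2)

E = np.linspace(366.0, 370.0, 400)                        # binding energy axis, eV
rng = np.random.default_rng(0)
spectrum = (two_components(E, 3.4, 368.0, 0.35, 1.0, 368.9, 0.35)
            + 0.02 * rng.normal(size=E.size))             # synthetic "data"

p0 = [3.0, 368.0, 0.3, 1.0, 368.9, 0.3]                   # initial guesses
popt, _ = curve_fit(two_components, E, spectrum, p0=p0)

area_1 = popt[0] * popt[2]   # Gaussian area ~ amplitude * sigma (constant factor drops)
area_2 = popt[3] * popt[5]
print(f"component area ratio ~ {area_1 / area_2:.1f} : 1")
```

Converting such area ratios into relative concentrations additionally requires relative sensitivity factors and attenuation corrections, which are handled by the full XPS analysis rather than this sketch.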
XPS Experimental Details Interface chemistry at room temperature: Bulk MoTe2 flakes were mechanically exfoliated via Scotch Tape method.Ag and Sc source material with 99.99% purity employed in this work was purchased from Kurt J. Lesker. Interface chemistry formed under ultra-high vacuum (UHV, 5 × 10 -9 mbar) conditions: Exfoliated MoTe2 was loaded into a UHV cluster tool (base pressure 10 -9 mbar, described elsewhere) 29 as quickly as possible (<5 min air exposure) and the initial surface was characterized by X-ray photoelectron spectroscopy (XPS).Metal source outgassing and deposition to a thickness of 1 nm was performed using the same procedure as employed in similar studies and described elsewhere. 23The temperature of the sample did not exceed 30°C throughout the deposition process.The configuration of the deposition chamber requires the deposition rate to be determined with quartz crystal monitor prior to deposition on MoTe2. Interface chemistry formed under high vacuum (HV) conditions: The sample was exfoliated and immediately loaded into the cleanroom deposition tool (Temescal BJD-1800 e-beam evaporator).The metal source outgassing and deposition to a thickness of 1 nm (base pressure 5 × 10 -6 mbar) was performed using the same procedure as has been employed in similar studies and described elsewhere. 23The thickness of the Ag film was monitored in situ by quartz crystal monitor.After shutting off the e-beam, the sample was transferred as quickly as possible (<5 min air exposure) from the cleanroom deposition chamber to UHV cluster tool for XPS. Post metallization annealing After characterizing the interface chemistry of the samples fabricated at RT with XPS, each respective sample was transferred to the deposition chamber attached to the UHV cluster tool without breaking vacuum and annealed at 150°C and 250°C under UHV conditions for 1 hour each.Following each annealing step, the interface chemistry was characterized in-situ by XPS. X-ray Photoelectron Spectroscopy A monochromated Al Kα source (1486.7 eV), takeoff angle of 45° and Omicron EA125 hemispherical analyzer with ±0.05 eV spectral resolution, pass energy of 15 eV, and acceptance angle of 8° were employed during spectral acquisition.The analyzer was calibrated according to ASTM E2108. 30AAnalyzer was employed to deconvolve all spectra. 31 Saturation Current Density As discussed in the manuscript, cooling to 77-80 K yields reduced contact resistance and enhanced mobility, subsequently increasing saturation current density in Ag-contacted n-type transistors by 50-100%. Figure S11 presents sample ID vs. VDS sweeps for two short channel devices in this temperature range, representing both a thin-channel sample (Figure S11a; 5-layer, tch ≈ 3.5 nm) and one approaching the bulk limit (Figure S11b; 17-layer).Saturation current densities at ~80 K ambient increase from approximately 350 to 450 µA/µm across this thickness range.In the thicker sample in Figure S11b, the sub-linear ID vs. VDS at VDS < 0.5 V is likely due to increased access resistance from interlayer screening. h-BN Growth and Device Integration We grew continuous, large-area hexagonal boron nitride (h-BN) monolayers on re-usable Pt foils in a 2 inch furnace-based low-pressure chemical vapor deposition (CVD) process, using a borazine / hydrogen flux derived through thermal decomposition of an ammonia borane (BH3NH3) solid precursor. 
32 During the growth, the furnace temperature is set to 1100 °C while the precursor is heated to 100 °C, facilitating the decomposition of ammonia borane into hydrogen, polyaminoborane, and borazine. Hydrogen gas is flowed through the furnace so that the borazine gas can diffuse to the platinum foil and adsorb onto the surface. An electrochemical bubbling method is used to transfer the h-BN to the target substrate, so that the polycrystalline Pt growth substrate (15 mm x 24 mm) can be reused. A PMMA layer is spin coated onto the h-BN while still on the Pt foil, followed by a layer of polystyrene (PS). The PMMA is primarily for adhesion to the h-BN, while the PS is a more rigid polymer layer for ease of handling. The entire structure (PS/PMMA/h-BN/Pt) is attached to an electrode and submerged into a 1 M NaOH solution with a Pt mesh also submerged as the anode. Applying a voltage difference between the Pt foil substrate and the Pt mesh induces H2 bubbles, which delaminate the PS/PMMA/h-BN from the Pt. Centimeter-scale h-BN films were subsequently transferred onto exfoliated MoTe2 using these transfer stamps, applied directly on an 80 °C hot-plate, necessitating several minutes of flake exposure to ambient atmosphere. Transfer-stamp adhesion was achieved through a series of 80-130 °C bakes, on both hot-plates and in low-vacuum ovens, prior to polymer removal through a 2-hour, glovebox-based soak in N-methyl-2-pyrrolidone (NMP; commercially available as Remover PG resist stripper). A final solvent dip was followed by a 1-3 hour-long, 200 °C anneal in N2 ambient, prior to device fabrication with Sc/Cr/Ag top contacts as described in the manuscript. Select samples were subject to an hour-long, 400 °C high-vacuum anneal, which improved the yield of clean, measurable channels (though not device performance).

Figure S12 presents optical images, AFM micrographs and Raman spectra of transferred monolayer h-BN on SiO2/Si substrates. The presence of monolayer h-BN on 285 nm SiO2 is confirmed by the Raman peak at 1369 cm-1. The characteristic h-BN peak is centered at approximately 1366 cm-1 in bulk h-BN, but will exhibit blue shifts up to 4 cm-1 in monolayer, due to a hardening of the E2g phonon mode.

Figure S13 displays a representative TEM cross-section of a thin few-layer MoTe2 device with h-BN/Sc MIS contacts. We observe a consistent layer count between MoTe2 channel and contact regions, with the h-BN monolayer distinct in both images. This transferred layer maintains a sizeable van der Waals gap above the top-most MoTe2 layer within the device channel. Underneath the contacts, however, this gap is apparently reduced between the top 2-3 layers due to the pressure from the deposited metal stack (with each of the Sc/Cr/Ag layers evaporated at rates matching or exceeding 2 Å/s in the UHV reactor). We note no metal-telluride intermixing with only 7 layers of semiconductor; this can be compared to the electrical data from Figure 4a of the manuscript, where lack of conduction across 5-layer flakes with direct Sc contacts suggests local consumption of all MoTe2 at those contact regions.
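Since the monolayer assignment above rests on the position of the E2g Raman peak, a hedged sketch of extracting that position with a simple Lorentzian fit is given below. The spectrum and the 1368 cm-1 decision threshold are illustrative assumptions, not the analysis actually performed here.

```python
# Sketch: fit a Lorentzian to an h-BN Raman spectrum and compare the E2g peak
# position against the ~1366 cm^-1 (bulk) vs. ~1369 cm^-1 (monolayer, blue-shifted)
# values quoted above. The spectrum and the 1368 cm^-1 threshold are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, x0, gamma, offset):
    return amp * gamma**2 / ((x - x0)**2 + gamma**2) + offset

shift = np.linspace(1340, 1400, 300)                   # Raman shift axis (cm^-1)
spectrum = lorentzian(shift, 1.0, 1369.0, 8.0, 0.05)   # synthetic monolayer-like peak
spectrum += np.random.normal(0, 0.01, shift.size)      # add measurement noise

popt, _ = curve_fit(lorentzian, shift, spectrum, p0=[1, 1366, 10, 0])
peak = popt[1]
label = "monolayer-like (blue-shifted)" if peak >= 1368 else "bulk-like"
print(f"E2g peak at {peak:.1f} cm^-1 -> {label}")
```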
Figure 2 . Figure 2. Ag Contacts (a) False-colorized SEM micrograph of a typical Ag-contacted MoTe2 TLM test structure.(b) ID vs. VGS transfer curves of AlOx encapsulated, Ag-contacted MoTe2 FET on tox = 30 nm SiO2, measured at 78-300 K. Inset: Peak Ion/Ioff ratio for varying channel lengths fabricated on the same flake across this temperature range.(c) Measured ID vs. VGS curves of a thin-body, short-channel (tch = 3.5 nm, Lch = 100 nm) MoTe2 transistor with Ag-contacts (tox = 30 nm SiO2, dual-sweep from the origin).Black arrows indicate sweep direction and a small amount of hysteresis.(d) TLM-extracted channel electron mobility vs. temperature for several few-layer MoTe2 samples at high carrier density, n2D ~ 10 13 cm -2 .Mobility roughly follows a T -1 dependence.(e) Measured ID vs. VDS curves of a short-channel, few-layer Ag-contacted MoTe2 n-type FET (tox = 30 nm SiO2, forward and reverse sweeps, showing almost no hysteresis) demonstrating current saturation density exceeding 420 µA/µm at 79 K ambient temperature.The back-gate voltage decreases from VGS = 20 V (top curve) in -2.5 V steps. Figure 2b displays representative ID vs. VGS transfer curves of a multi-layer MoTe2 FET with Ag contacts (reverse sweep only; negligible hysteresis at positive gate bias) showing predominantly n-type Figure 3 . Figure 3. Ag Contact Chemistry (a) TLM-extracted contact resistance RC vs. carrier density at 78 and 300 K for electron injection at Ag-MoTe2 contacts.The extracted RC does not appear to depend on the metal deposition pressure, unlike in previous work with (less reactive) MoS2 contacts. 41(b) High-resolution XPS spectra of Ag 3d core levels for 1 nm Ag films deposited in situ under HV and UHV conditions.Interfacial Ag(Mo)-Te compounds emerge following a 250 °C UHV anneal, which replicates the thermal budget of fabricated devices.(c) High resolution TEM crosssection of an Ag/MoTe2 contact interface, revealing a gradual transition between the metal and layered semiconductor, corresponding to the identified chemical intermixing. 
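The TLM extraction referenced in Figures 2a and 3a (and described in detail for Figure S6 later in this document) reduces to a linear fit of width-normalized total resistance against channel length: the intercept gives twice the contact resistance and the slope gives the sheet resistance. A minimal sketch with placeholder resistance values follows.

```python
# Sketch of a transfer length method (TLM) fit: R_tot(L_ch) = 2*R_c + R_sheet * L_ch,
# with R_tot width-normalized (Ohm*um) and L_ch in um. Values below are placeholders.
import numpy as np

L_ch = np.array([0.1, 0.2, 0.5, 1.0, 2.0])            # channel lengths (um)
R_tot = np.array([14e3, 18.5e3, 31e3, 52.5e3, 95e3])  # width-normalized resistance (Ohm*um)

slope, intercept = np.polyfit(L_ch, R_tot, 1)
R_sheet = slope            # sheet resistance (Ohm per square)
R_c = intercept / 2.0      # contact resistance per contact (Ohm*um)
print(f"R_sheet ~ {R_sheet/1e3:.1f} kOhm/sq, R_c ~ {R_c/1e3:.2f} kOhm*um")
```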
( XPS) in situ on Ag evaporated onto MoTe2 under various reactor pressures, with measured spectra depicted in Figure3b(for experimental details, see Supporting Information, Section 6).Despite the inherent Terich nature of our MoTe2 (average Te:Mo ratio ≈ 2.3 for MoTe2 discussed here), any reaction products formed between MoTe2 and Ag as deposited at room temperature are below the limit of XPS detection, regardless of reactor base pressure.However, Ag reacts with iodine (residual from crystal growth) to form AgI as evidenced by the chemical states at low binding energy in the corresponding Ag 3d5/2 (368.10 eV) and I 3d5/2 (618.8 eV, FigureS8c,d) core levels.63Iodine is presumably drawn to the Ag-MoTe2 interface during Ag deposition, which is evidenced by the increased iodine concentration detected by XPS after Ag deposition on MoTe2 at room temperature.Annealing Ag-MoTe2 at 150 °C (temperature of our AlOx ALD process) drives substantial reactions at the interface with Ag, resulting in the formation of intermetallic MoxAg1.xTeacross multiple MoTe2 layers.Thermally activated intermetallic formation is accompanied by the appearance of an associated chemical state (red curve, 368.55 eV) in the "150°C anneal" Ag 3d core level spectrum (FigureS9), which persists in the "250°C anneal" Ag 3d spectrum (temperature of our post-ALD vacuum anneal, Figure3b).The binding energies of theMoxTe1-xAg chemical states in the Te 4d, Mo 3d, and Ag 3d core level spectra relative to bulk MoTe2 and metallic Ag suggest Ag reduces MoTe2, forming a compound with (Mo+Ag):Te ratio of ~1.Instability of the Ag-MoTe2 interface does not linearly depend on the post-metallization annealing temperature (up to 250 °C).The intensity ratio of metallic Ag and MoxAg1-xTe chemical states in the corresponding Ag 3d core level spectra remain virtually constant (~0.5) in this work.In addition, annealing at 250 °C drives complete dissociation of Ag-I bonds based on the decrease in intensity of AgI chemical states below the limit of XPS detection in the corresponding Ag 3d and I 3d5/2 core level spectra (Figure S8, S9). Figure 3c displays high resolution transmission electron micrographs (TEM) of Ag deposited on multilayer MoTe2, confirming the presence of a gradual transition region at this interface, consistent with local intermixing.Partially-visible outlines of MoTe2 layers blend into the Ag film, with spatial inhomogeneity suggesting varying degrees of interfacial reaction mediated by metal grain size and possible MoTe2 defects. Figure 4 . Figure 4. Sc Contacts (a) Measured ID vs. VGS curves of several MoTe2 FETs with Sc metal contacts (on tox = 90 nm SiO2, dual-sweep from the origin) showing poor, ambipolar performance and non-negligible hysteresis.We do not observe current modulation in thin-body devices (i.e.5-layers or less).(b) High resolution XPS of Sc 2p levels for metal following in situ evaporation of 1 nm Sc on MoTe2, and anneals replicating conditions during device fabrication.Spectra on the bottom are from exfoliated MoTe2 prior to processing, as well as pure Sc metal.The green signal overlaid on the measured spectra in black is the fitted envelope.We observe significant ScTex formation upon deposition, followed by progressive oxidation of Sc from trace O-and OH-after annealing at 150 °C and 250 °C. 
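The fitted envelopes referred to in the Figure 4 caption come from deconvolving each core level into individual chemical-state components. The sketch below shows one generic way to do this (two Voigt components on a linear background, fitted by least squares); it is not the AAnalyzer procedure used in this work, and all amplitudes, widths and noise levels are synthetic, with only the two Ag 3d5/2 binding energies borrowed from the text for illustration.

```python
# Generic core-level deconvolution sketch: two Voigt peaks plus a linear background.
# Peak centres echo the AgI (368.10 eV) and MoxAg1-xTe (368.55 eV) Ag 3d5/2 states
# mentioned above; everything else is synthetic.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

def two_component_model(be, a1, c1, a2, c2, sigma, gamma, b0, b1):
    """Two Voigt peaks with shared Gaussian/Lorentzian widths plus a linear background."""
    return (a1 * voigt_profile(be - c1, sigma, gamma)
            + a2 * voigt_profile(be - c2, sigma, gamma)
            + b0 + b1 * be)

be = np.linspace(366, 372, 400)                                     # binding energy (eV)
truth = two_component_model(be, 5.0, 368.10, 2.0, 368.55, 0.25, 0.15, 0.05, 0.0)
data = truth + np.random.normal(0, 0.05, be.size)                   # synthetic spectrum

p0 = [4, 368.0, 1, 368.6, 0.3, 0.1, 0.0, 0.0]                       # initial guesses
popt, _ = curve_fit(two_component_model, be, data, p0=p0)
print("fitted peak centres (eV):", round(popt[1], 2), round(popt[3], 2))
```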
thus have two unique challenges to realizing n-type MoTe2 devices: (1) the low electron affinity of MoTe2, χ ≈ 3.85 eV, resulting in a conduction band minimum EC 150-200 meV above that of Si or MoS2 in bulk, 44, 45 and (2) interfacial compounds from reactions between contact metals and MoTe2, which exhibit unknown work functions and may shift ECNL. This first challenge suggests significant Schottky barriers for conventional low work function metals, such as Ag and Ti, even in the ideal theoretical case of Figure 1b. However, ultra-low work function metals such as Er and Sc (ΦM ≈ 3.1 and 3.5 eV, respectively) may facilitate n-type contact formation.

[Panel labels (likely Figure 4a devices): tch = 16L, Lch ~ 500 nm; Lch ~ 700 nm; tch = 11L; Lch ~ 1000 nm; tch = 5L.]

Sc-MoTe2 band alignment. The initial Fermi level position of MoTe2 investigated in studying the Sc-MoTe2 interface indicates n-type pinning (0.7 eV above the valence band edge, valence band spectrum not shown in Figure 4b). However, the Fermi level shifts to ~0.4 eV above the valence band edge (near mid-gap) after the 250 °C anneal according to shifts exhibited by the MoTe2 chemical states. Therefore, the XPS results are consistent with the ambipolar transport observed in MoTe2 devices employing direct Sc contacts.

Figure 5: Sc MIS Contacts with h-BN. (a) Cartoon of bare Sc on MoTe2, showing interfacial reaction. (b) Cartoon of a 1L h-BN interlayer as a solid-state diffusion barrier between Sc and pristine MoTe2 layers. (c) High resolution TEM micrograph of a h-BN/Sc MIS contact, with underlying MoTe2 thickness consistent with that measured in channel regions. (d) Proposed band alignment for the h-BN/Sc MIS contacts on few-layer MoTe2. (e) Measured ID vs. VGS curves of a unipolar n-type MoTe2 FET with Sc/h-BN MIS contacts (AlOx encapsulated, back-gated on tox = 90 nm SiO2). Inset: Peak device Ion/Ioff ratio for various channels fabricated on the same flake. (f) Measured ID vs. VDS for the shortest channel device on the prior 10-layer sample, demonstrating saturation current densities exceeding 330 µA/µm at 78 K ambient temperature. The back-gate voltage decreases from VGS = 40 V (top curve) in -5 V steps.

and ALD AlOx capping were deposited by the prior methodology (primarily to prevent oxidation of the Sc-contact stack), followed by a final 250 °C vacuum anneal. High resolution TEM (Figure 5c) reveals a pristine contact interface, preserving distinct layers with no sign of chemical intermixing. Supporting Figure S13 confirms consistent MoTe2 thickness between channel and contact regions in thin, few-layer samples. Figure 5e displays representative transfer curves of a 10-layer device with Sc/h-BN MIS contacts, demonstrating the most unipolar n-type MoTe2 transport to date. The off-state (hole) current is clearly "flattened", indicating strong suppression of hole injection at these contacts. Steep subthreshold activation indicates significant reduction of the electron Schottky barrier Φn, despite electron tunneling through the wide band gap h-BN and the van der Waals gap. Minimum values of inverse subthreshold slope in short-channel devices are almost half the equivalent value in Ag-contacted FETs of similar channel thickness, when adjusted for gate oxide tox (90 nm vs. 30 nm SiO2). These unipolar MIS-contacted devices maintain Ion/Ioff ratios of 10^5 at room temperature (~10^8 below 80 K) down to electrostatically short channels (e.g., < 200 nm; Figure 5e inset), with significantly less ambipolar behavior than Ag-contacted devices in Figure 2. Figure 5f displays the measured low temperature ID vs.
VDS characteristics, Figure 6 . Figure 6.Sc/h-BN MIS Contacts.Arrhenius plot of drain current (normalized by width, and by temperature squared) for a long-channel, 7-layer MoTe2 FET with Sc/h-BN contacts on a 90 nm SiO2 back gate oxide, measured at VDS = 1 V. Effective Schottky barrier height is extracted assuming the subthreshold drain current is predominantly thermionic emission.Inset: effective electron Schottky barrier extraction for the Sc/h-BN/MoTe2 contact at the flat-band condition, in the high-temperature regime (300 to 450 K).Black line is a linear fit and gray arrow marks the flat-band point where Φn(VGS) is no longer linear, yielding Φn (VFB = 6.35V) ≈ 90 meV. Bulk crystal growth and characterization, Raman spectra.Full details of analytical Schottky barrier model and transfer length measurement fitting.Details of XPS sample preparation and measurements with supplementary spectra for Ag/MoTe2 and Sc/MoTe2 interactions.Additional transistor saturation curves.Methods of h-BN film growth, transfer and characterization.sponsored in part by the Air Force Office of Scientific Research (AFOSR) grant FA9550-14-1-0251, the National Science Foundation EFRI 2-DARE grant 1542883, Army Research Office grant W911NF-15-1-0570, the Stanford SystemX Alliance, and the US/Ireland R&D Partnership (UNITE) under the NSF award ECCS-1407765.TEM work was sponsored by the Applied Materials corporation.M.J.M. would like to acknowledge an NSERC PGS-D fellowship.RMW and EP acknowledge the support of the NEWLIMITS center in nCORE, a Semiconductor Research Corporation (SRC) program sponsored by NIST through award number 70NANB17H041, and of ASCENT, one of six centers in JUMP, a SRC program sponsored by DARPA. Figure S1 : Figure S1: (a) CVT-grown bulk MoTe2 crystals with mm increments for scale.(b) SEM micrograph displaying layered structure at edges. Figure S2 : Figure S2: Raman spectra of AlOx-capped MoTe2 flakes showing preservation of characteristic vibrational peaks, as well as novel persistence of a cross-plane B 1 2g mode in bulk samples (Horiba LabRam, 532 nm laser, 1.25 mW). Figure S3 : Figure S3: Band alignment of an ambipolar Schottky barrier FET (a) below, (b) at, and (c) above flat-band biasing in gate voltage with principal mechanisms of carrier injection and true barrier heights Φn, Φp demarcated for a contact Fermi level pinned deep within the band gap. Figure S4 : Figure S4: (a) Representative transfer curves of long-channel encapsulated MoTe2 FETs (300 K, VDS = 100 mV, on 90 nm SiO2) depicting subthreshold fits of an analytical Schottky barrier model (lines) against experimental data (symbols) for Ag and Ti contacts.(b) Extracted electron, hole Schottky barrier heights and electronic band gap EG vs. layer count for Ag-contacted MoTe2 transistors.Experimentally reconstructed EG increases with decreasing layer count as expected, but the values are 100-150 meV lower than expected. Figure S5 : Figure S5: Summary of estimated electron Schottky barrier heights for common contact metals on MoTe2, contrasting our values (analytical subthreshold model; solid squares) against prior estimates using Arrhenius analysis (open symbols).A fit of pinning factor (S) for our analytically fitted barrier heights is provided. Figure S6 : Figure S6: (a) Width-normalized ID vs. 
VGS characteristics of the longest and shortest channels of a 5-device, 11-layer MoTe2 TLM test structure (Ag contacts, 30 nm SiO2 back-gate). (b) Threshold VT extraction by extrapolating the linear ID regime to the VGS-intercept. Inset shows VT versus channel length Lch at temperatures T = 78 K and 300 K. (c) Width-normalized resistance RTOT vs. channel length with linear fits at varying channel carrier densities. The RTOT-axis intercept corresponds to 2RC and the slope is the sheet resistance RS. Extracted (d) RC and (e) RS for this structure at 78 K and 300 K, demonstrating saturation of both parameters at high induced charge density (large gate overdrive). (f) Intrinsic electron mobility, calculated from RS and the 2D carrier density. (g) Specific contact resistivity ρC and (h) transfer length LT vs. carrier density, at both temperatures. Inset graphic of (g) demonstrates the decay of current injection (red arrows) across LT, ideally only a portion of the total contact length LC.

Figure S7: (a) SEM micrographs of an AlOx capped MoTe2 FET on a 90 nm SiO2 back gate, with nominally 40 nm-thick Ag contacts deposited under UHV ambient. Note the Ag metal leads poorly nucleate on SiO2, whereas the Ag uniformly wets and deposits across the MoTe2 surface. (b) ID vs. VGS transfer characteristics for the longest channel (Lch ~ 975 nm) at 80 K and 300 K ambient. (c) ID vs. VDS dual sweeps presenting short-channel saturation currents at 80 and 300 K. TLM-extracted (d) contact resistance and (e) sheet resistance for the pictured device, matching high-field values of devices fabricated under HV ambient (as shown in the manuscript).

Metallic states associated with Te excess are denoted in Figures S8a and S8b by '**'. No chemical states appear in either the Mo 3d or Te 4d core level spectra following Ag deposition under either ultra-high vacuum (UHV, base pressure 5 × 10-9 mbar) or high vacuum (HV) conditions.

Figure S8: Mo 3d, Te 4d, and I 3d5/2 core level spectra obtained from MoTe2 after depositing 1 nm Ag under HV (a,c) and UHV (b,d) ambient and also following subsequent post metallization annealing at 150 °C and 250 °C.

Figure S9: Ag 3d core level spectra from MoTe2 metallized with 1 nm Ag under UHV and HV conditions (spectra labeled accordingly) and subsequently annealed at 150 °C under UHV and HV conditions (separate samples).

Figure S10: (a) Mo 3d and Te 4d core level spectra obtained from exfoliated MoTe2 and from MoTe2 after 1 nm Sc deposition at ~300 K under UHV conditions and following post metallization annealing at 150 °C and 250 °C; (b) the corresponding I 3d5/2 core level spectra.

[Panel labels (likely Figure S11 devices): tch = 17L, Lch = 90 nm; tch = 5L, Lch = 100 nm.]

Figure S12: (a) Optical images of dry-transferred monolayer h-BN on 285 nm (top) and 90 nm (bottom) SiO2/Si substrates. Arrows indicate large tears and gaps in an otherwise uniform mm-scale film. (b) Raman spectra of transferred monolayer h-BN on 285 nm SiO2 (532 nm laser). (c) AFM micrograph of an AlOx/h-BN capped MoTe2 device depicting complete coverage and some wrinkling/transfer residue across the encapsulating monolayer.
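The Arrhenius analysis described for Figure 6 assumes the subthreshold current is thermionic emission, ID ∝ T^2 exp(-qΦeff/kBT), so the slope of ln(ID/T^2) versus 1/(kBT) at fixed gate bias yields the effective barrier. A minimal sketch of that fit is given below; the currents are synthetic, with a 90 meV barrier chosen only to echo the flat-band value quoted in the Figure 6 caption.

```python
# Sketch of an Arrhenius (thermionic-emission) Schottky barrier extraction:
# ln(I_D / T^2) vs. 1/(k_B T) is linear with slope -Phi_eff (in eV).
# All current values below are synthetic.
import numpy as np

K_B = 8.617e-5                                              # Boltzmann constant (eV/K)
T = np.array([300., 325., 350., 375., 400., 425., 450.])    # measurement temperatures (K)

def effective_barrier(T, I_D):
    """Fit ln(I/T^2) against 1/(k_B T); the negative slope is Phi_eff in eV."""
    x = 1.0 / (K_B * T)
    y = np.log(I_D / T**2)
    slope, _ = np.polyfit(x, y, 1)
    return -slope

# Synthetic data for one gate bias, assuming a 90 meV effective barrier.
phi_true = 0.090                                            # eV
I_D = 1e-3 * T**2 * np.exp(-phi_true / (K_B * T))           # arbitrary prefactor
print(f"extracted Phi_eff ~ {effective_barrier(T, I_D) * 1e3:.0f} meV")
```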
[Table column headers (data rows not recoverable here): ... [eV], Φn [meV], Φp [meV], EG = Φn + Φp [eV].]

Ag achieves the smallest room temperature Φn ≈ 290 meV to multi-layer MoTe2. However, most n-type barriers fall between 400-500 meV, implying deep mid-gap EF pinning. Qualitatively, these barrier heights are more consistent with the observed ambipolar conduction, unlike previously reported barriers extracted via conventional Arrhenius methodology. Our bulk EG estimates fall 100-150 meV short of the commonly accepted values,
2019-07-04T20:55:01.000Z
2019-07-04T00:00:00.000
{ "year": 2019, "sha1": "8b999d945d6ad9165823ca8d72198af1d98c4dc5", "oa_license": null, "oa_url": "https://arxiv.org/pdf/1907.02587", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8b999d945d6ad9165823ca8d72198af1d98c4dc5", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics", "Medicine" ] }
104329325
pes2o/s2orc
v3-fos-license
Bacterial community mapping of the intestinal tract in acute pancreatitis rats based on 16S rDNA gene sequence analysis Numerous studies have revealed that the status of intestinal microbiota has a marked impact on inflammation, which may progressively aggravate the systemic inflammatory response caused by acute pancreatitis (AP). However, our understanding of microbial communities in the intestinal tract is still very limited. Therefore, the aim of this paper is deciphering bacterial community changes in AP rats. In this study, samples taken from AP rats were subjected to 16S rDNA gene sequence-based analysis to examine the characteristic bacterial communities along the rat intestinal tract, including those present in the small intestine, colon and feces, with samples from rats with a sham operation (SO) as the control group. Operational taxonomic units (OTUs) network analyses displayed that the small intestine and colon of the AP rats had a “core microbiota” composed of bacteria belonging to Firmicutes, Proteobacteria and Bacteroidetes, whereas the “core microbiota” of feces included Firmicutes, Bacteroidetes, Proteobacteria, Tenericutes and Actinobacteria. Bacterial diversity analysis showed that the species richness and diversity of the small intestine, colon and feces in AP rats were lower than those in the SO rats, and species difference between the AP and SO groups were observed. In addition, at different levels of bacterial classification, dramatic alterations in the predominant intestinal microbiota were observed along intestinal tracts in AP rats compared to the SO rats. COG and KEGG analyses indicated that the significantly differential flora were involved several clusters and signaling networks. Thus, this work systematically characterizes bacterial communities from the intestinal tract of AP rats and provides substantial evidence supporting a prospective strategy to alter the intestinal microbiota improving AP. Introduction The adult human intestine is home to microbial communities comprised of at least 1000 bacterial resident species and up to 10 14 microorganisms. 1 Host-microbe interaction is crucial for host normal physiology, ranging from metabolic activity to immune homeostasis; perturbation of microbial communities is now regarded as a promoter of secondary infection and systemic inammation in a range of diseases outside the gut, especially pancreatic disease. [2][3][4][5][6] Acute pancreatitis (AP) is an inammatory disorder of the pancreas encountered in emergency settings with an annual incidence range of 13-45/100 000 inhabitants. [7][8][9] Approximately 20% of AP patients develop necrotizing pancreatitis with a high mortality risk, mostly attributed to sepsis and infection of pancreatic and peripancreatic necrotic tissue induced by gutderived bacteria overgrowth and subsequent bacterial translocation. 10 Recently, it has been established that disruption of gut microora is signicantly correlated with "second hit summit" involved in the progression of AP, which demonstrated that gut microbiota is a promising modulator of inammatory cascade in AP pathophysiology. 11 Recent studies indicate that inammatory cascade in AP is dependent on damage-associated molecular patterns (DAMPs)-mediated cytokine activation (IL-1b, IL-6 and TNF-a) causing the translocation of intestinal ora into the circulation and their induction of innate immune responses in acinar cells. 
Watanabe, et al., have reviewed that the innate responses involve activation of responses by an innate factor, nucleotide-binding oligomerization domain 1 (NOD1), and that such NOD1 responses have a critical role in the activation/production of nuclear factor-kappa B (NF-kB) and type I interferon (IFN). 12 The balance of the intestinal microbiota through the administration of probiotics has been considered to be one of the reasonable treatment option for AP with the potential benet of lower costs and side effects. 13 However, evidence from previous studies concerning the effects of probiotics is inconclusive and sometimes, even contradictory due to lack of awareness of specic strains, dosages and clinical situations. Therefore, better insight into the diversity and composition of intestinal microbiota is urgently needed to develop adequate prophylaxis and treatment strategies for AP. Previous studies have provided a detailed description on the characteristic of bacterial species in patients and rats with AP. [14][15][16] In detail, the paper of Li et al., provided a detailed description on the features of bacterial species and the prevalence of bacteremia in patients with AP. 14 Şenocak et al., proved that prophylactic total colectomy was associated with elevated bacterial translocation to the pancreas in AP rats. 15 Zhang et al., found that the intestinal microbes of AP patients were different from those of healthy volunteers through detecting the fecal samples. 16 However, these studies have several limitations: (1) they usually focused on studying the microbiota in one intestinal site of AP; (2) the microbiota in colonic mucosal sites is quite different than fecal microbial communities according to previous studies, 17 however there was no enough researches about the colon microbiota; (3) the composition and functional analysis of the intestinal microbiota in AP had not been sufficiently explored. Consequently, our understanding of microbiota in different sections along with the mammalian gastrointestinal tract is still very limited, especially for rat, which is one of the most commonly used animals for studying the pathogenesis of AP. 18 Because rat models used to study the dysbiosis of gut microbiota involved in the progression of AP have major advantages over clinical studies, such as the availability of study subjects, ability to perform invasive tests, and extensive tissue sampling. Deciphering the comprehensive characterization of gut microbiota in AP and normal rats is a critical prerequisite to exploring and predicting these microbiota alterations in relation to disease, and it can also lay the new foundation for the study of the pathogenesis of AP. In this study, we characterized the bacterial community mapping in the small intestine, colon and feces obtained from AP rats induced by administering 5% sodium taurocholate using a high-throughput 16S rDNA sequencing technology, which contributes to elucidate the alterations of intestinal microbiota and its role in AP progression. AP induced pancreatic injury Since the rst case of experimental pancreatitis established by injection of bile into the pancreatic duct of a rat, AP rat model has been indispensable in providing insight in pathophysiology and therapy of this disease. Advantages of this model are the quick procedure of AP induction and the reproducibility of results. 
In order to establish possible associations between the intestinal ora changes and AP pathophysiology, we had conducted the rat with AP induced by retrograde injection of 5% sodium taurocholate into the pancreatic duct in this investigation, which simulates AP due to bile reux into the pancreas. According to the AP diagnosis and treatment guidelines, pancreatic digestive enzymes, especially amylase and lipase, are the most usually recommended as biochemical markers. 19 In the present work, we detected activities of these two enzymes by enzyme-linked immunosorbent assay (ELISA). As shown in 1A, compared with the sham operation (SO) group, the increased levels of amylase and lipase in plasma were detected when AP was induced. Furthermore, inammatory cascade activated by trypsin-mediated pancreatic autodigestion is a critical risk factor for AP. The ELISA results proved that the plasma levels of the tumor necrosis factor (TNF)-a, interleukin (IL)-1b and IL-6 in the AP group were markedly increased compared with the SO group (Fig. 1B). In addition, H&E staining was performed to observe the pathological changes of pancreatic tissue, the typical images shown none distinct pathological changes in the SO group, whereas pancreatic injury featured by large areas of interstitial edema and hemorrhage, acinar cell necrosis and vacuolization, together with leukocyte inltration (the red arrow) in AP group (Fig. 1C). The grading was based on the number of the presence of interstitial edema, inammation and vacuolization, and to what extent these characteristics affected the pancreas (0 being normal and 4 being severe), giving a maximum score of 12 (Table 1); and the results of histological scores indicated that the AP group (9.5 AE 1.6) was signicantly increased compared with the SO group (1.3 AE 0.5). Therefore, the rat model induced by retrograde injection of 5% sodium taurocholate into the pancreatic duct can be used to evaluate possible associations between the intestinal ora changes and AP pathophysiology. High-throughput 16S rDNA sequencing of the small intestine, colon and feces Previous studies reported gut-derived bacteria overgrowth and higher intestinal colonization of Gram negative bacteria occurs in AP rat models by conventional culture techniques. 20,21 However, 80% of gut bacteria are novel species that cannot be cultivated, and only 20% of intestinal microbiota can be iden-tied using culture-based methods. 22 Furthermore, although polymerase chain reaction (PCR) has been used to identication of bacterial DNA in blood samples, it fails to detect several microorganisms in a single specimen. 23 Thus far, the diversity and species composition of the intestinal microbiome in rats with AP have not been sufficiently explored, not to mention the molecular mechanism of intestinal ora destruction involved in the pathogenesis of AP. In recent years, 16S rDNA gene sequence-based analysis provides a high-throughput approach to investigate the biodiversity and abundance of gut microbiota in health and disease. 24 In this study, a total of 749 142 highquality clean tags were obtained from 36 samples (small intestine, colon and feces) in AP rats and the SO rats aer quality control: removed barcode and connector sequences; combined each paired-end reads into a longer tag; removed sequences with more than one ambiguous base; and ltered chimeric sequences. Each sample was covered by an average of 20, 809 reads. 
The value of Q 30% of all samples averaged 94.98 AE 2.30% (mean AE SD, ranging from 83.39% to 97.10%), which indicates that the probability of a base misalignment caused by the sequencing instrument is less than 0.1% in the sequencing result (ESI Table S1 †). The count of sequences with a length of less than 200 bp is 45, 917, 200-300 bp is 100, 095, 300-400 bp is 246, 016, and 400-500 bp is 357, 114, accounting for 6.13%, 13.36%, 32.84% and 47.67% of the total, respectively. Therefore, all samples met the requirements for library establishment, and they could be performed the subsequent operational taxonomic units (OTUs) analysis. OTUs analysis across different anatomic sites of the rat intestinal tract In order to facilitate the study of the species diversity information of the sample species, we clustered the effective sequences of all samples and clustered these sequences into OTUs according to 97% sequence similarity. 25 At the OTU level, the similarities and differences between different samples were statistically analyzed, and the common or unique OTUs between different samples were shown through the Venn diagrams ( Fig. 2A). OTU network analyses for the gut conrmed the existence of some core OTUs, a common microbial composition, among the different sections. Moreover, fecal samples shared more OTUs, both in terms of numbers and more diverse compositions, than the small intestine and colon did, and the selective pressures of unique physicochemical conditions of anatomical regions on microbiota may account for this difference, such as intestinal motility, pH, nutrient supplies and host secretions. 26,27 In addition, as shown in Fig. 2B, OTUs and small intestine, colon and feces of AP rats were labeled as nodes in bipartite network. OTUs were linked with the samples, and their sequences would be found in OTUnodes. 28 The OTUs network-based analyses displayed that some core OTUs were found in the intestinal tract site collected from different individuals. Different anatomical parts shared different common "core microbiota" both in amount and compositions, which might manipulate unique functions from an intestinal tract site to other sites. The small intestine (ESI Table S2 †) and colon (ESI Table S3 †) of the 6 individuals had a small "core microbiota" (43 and 31 OTUs) composed of bacteria belonging to Firmicutes, Proteobacteria and Bacteroidetes, whereas the feces of them had a relatively bigger "core microbiota" (202 OTUs) comprised of bacteria belonging to Firmicutes, Bacteroidetes, Proteobacteria, Tenericutes and In parenchyma (<50 of lobules) 3 In parenchyma (51-75 of lobules) 4 In parenchyma (>75 of lobules) Severe (>50) Actinobacteria (ESI Table S4 †). These results would provide supports for the hypothesis that the unique physicochemical environments of different anatomical regions play critical roles in the forming of intestinal microbiota due to selective pressures on microbiota. Although it was not clear that whether these shared microbiota were the "permanent residents" or the "passengers" from the foods, the "core microbiota" in the different intestinal tract sites should be paid more attentions. Furthermore, the unique microbiota along the intestinal tract is expected to be the gut microbial marker in the near future. Alpha diversity analyses of the bacterial community To characterize intestinal microbes in AP rats, bacterial diversity analysis was performed. 
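The shared "core microbiota" OTUs identified by the Venn and network analyses described earlier amount to set intersections across samples, with sample-specific OTUs given by set differences. A minimal sketch of that bookkeeping is shown below with hypothetical OTU identifiers; it does not reproduce the actual OTU tables in ESI Tables S2-S4.

```python
# Sketch of core/unique OTU identification across samples using set operations.
# OTU identifiers below are placeholders, not the OTUs reported in this study.
samples = {
    "small_intestine_rat1": {"OTU_1", "OTU_2", "OTU_7"},
    "small_intestine_rat2": {"OTU_1", "OTU_2", "OTU_9"},
    "small_intestine_rat3": {"OTU_1", "OTU_2", "OTU_4"},
}

core = set.intersection(*samples.values())        # OTUs present in every sample ("core")
per_sample_unique = {
    name: otus - set.union(*(o for n, o in samples.items() if n != name))
    for name, otus in samples.items()             # OTUs found only in that sample
}
print("core OTUs:", sorted(core))
print("unique OTUs per sample:", per_sample_unique)
```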
Chao1 and Observed species were estimated to reect the number of OTUs in a sample, and the values of which are positively correlated with the species richness of the sample. Moreover, Shannon and Simpson indicates reect the averaging or uniformity of the abundance of different species in a sample, which are positively correlated with the species diversity. Alpha diversity analyses including the values of Chao1, Observed species, Shannon and Simpson are the comprehensive indicators of species richness and uniformity in community ecology. In this paper, we implemented the alpha diversity analyses, and the results showed that the samples from feces had the highest Chao1, Observed species, Shannon and Simpson values, which indicated that the fecal bacteria had the highest level of species richness and diversity ( Fig. 3A-D). In addition, as shown in Fig. 3A and B, the alpha diversity indexes (Chao1 and Observed species) showed the species richness of the small intestine and feces in the AP group were signicantly reduced compared to the SO group (p < 0.05). However, there was no signicant differences in the species richness of colonic bacteria in SO and SAP groups. Furthermore, as shown in Fig. 3C and D, the alpha diversity indexes (Shannon and Simpson) showed that the species diversities of microbiota in SO and AP group had no notable differences (p > 0.05). Therefore, the microbiota in fecal samples had the greatest species richness, and the microbiota in colon samples had the least species richness; and these results indicated that fecal samples maybe the most suitable for the researches of gut microbiota. Moreover, AP induced the decrease in the species richness of gut microbiota in rats, which further proved that gut microbiota played an crucial role in AP pathophysiology. PCoA and RDA analyses Beta diversity refers to species differences between different environmental communities, and it also can be used to evaluate the overall heterogeneity of the species or the environmental community. Principal coordinate analysis (PCoA) is the most suitable presentation method for the beta diversity analysis. As shown in Fig. 4A, PCoA plots using weighted UniFrac distances clustered samples mainly by sites, the different colors in the result represent different anatomical sites and groups, and the closer the sample distance is, the more similar the microbial composition between the samples is, and the smaller the difference is. PCoA analysis revealed that bacterial communities in the feces samples clustered closely to one another while those from small intestinal and colon samples did not (Fig. 4A). These results showed that inter-rat variations of fecal microbiota were lower than those of small intestinal and colon samples, and it suggested that fecal microbiota had better individual similarity. Therefore, beta diversity analyses further proved the studies on intestinal micro-organisms in AP rats may give priority to the use of fecal samples. In addition, as shown in Fig. 4B, redundancy analysis (RDA) proved that bacterial species in fecal samples, such as the Escherichia, Bacteroides, Allobaculum and Prevotella, had a positive correlation with AP; and the severity of AP was also positively correlated with the plasma concentration of amylase, lipase, TNF-a, IL-1b and IL-6, and the histological scores. Therefore, there were signicant differences in species composition between different intestinal anatomy sites in SO and AP groups, which also supported the view of intestinal ora disorder in AP. 
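For reference, the alpha-diversity indices used in the comparison above (observed species, Chao1, Shannon, Simpson) can be computed directly from a per-sample OTU count vector, as in the sketch below. The counts are placeholders, and conventions vary between tools (for example, QIIME reports Shannon with log base 2 and Simpson as the Gini-Simpson index), so this is only an illustration of the definitions.

```python
# Sketch of alpha-diversity indices from one sample's OTU read counts (placeholders).
import numpy as np

counts = np.array([120, 80, 40, 10, 5, 1, 1, 0, 0])   # reads per OTU in one sample
counts = counts[counts > 0]

observed = counts.size                                 # observed OTUs (richness)
singletons = np.sum(counts == 1)
doubletons = np.sum(counts == 2)
# Bias-corrected Chao1: S_obs + F1*(F1 - 1) / (2*(F2 + 1))
chao1 = observed + singletons * (singletons - 1) / (2.0 * (doubletons + 1))

p = counts / counts.sum()
shannon = -np.sum(p * np.log(p))        # natural-log Shannon (some tools use log2)
simpson = 1.0 - np.sum(p**2)            # Gini-Simpson diversity

print(f"observed={observed}, chao1={chao1:.1f}, shannon={shannon:.2f}, simpson={simpson:.3f}")
```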
Analysis of bacterial composition in AP rats For taxonomy community analysis of the AP rat intestinal tract, eight different bacterial phyla were identied. As shown in Fig. 5A and ESI Table S5, † the majority of the sequences belonged to Firmicutes, Bacteroidetes, Proteobacteria, Actinobacteria, Tenericutes, Cyanobacteria, Fusobacteria and Candidatus Saccharibacteria. However, only Firmicutes, Bacteroidetes, Proteobacteria and Actinobacteria were found in all samples. The results indicated that the communities within the different anatomical regions differed largely in their compositions and proportions of the major bacteria. In addition, Firmicutes was the most abundant phyla in all samples, and the relative abundance of Firmicutes was signicantly higher (p < 0.05) in colon (91.89%) than that in the small intestine (74.19%) and feces (43.31%). Moreover, the increased relative abundance of Bacteroidetes in the feces (40.70%) was detected compared with the other two sites (small intestine, 3.49%; colon, 0.52%). These results showed that Firmicutes and Bacteroidetes were the "core microbiota" of colon and feces, respectively. In detail, as shown in Fig. 5B and ESI Table S6, † Lactobacillus (belonged to Firmicutes) was enriched in small intestine, and decreased from colon to feces. The relative abundance of Romboutsia (belonged to Firmicutes) was signicantly higher (p < 0.05) in colon than that in the small intestine and feces, and a larger proportion of Paraprevotella (belonged to Bacteroidetes) and Bacteroides (belonged to Bacteroidetes) were observed in feces than in the other sites. According to previous reference, the cause may be the higher oxygen availability of small intestine. Conversely, a larger proportion of Paraprevotella and Bacteroides were observed in feces where less oxygen is available. Therefore, the analysis of bacterial composition in AP rats indicated that the "core microbiota" of small intestine, colon and feces are as follow: Lactobacillus, Romboutsia, and Paraprevotella/Bacteroides. There are many differences between various gut regions, selection of the sampling site along the intestinal tract is therefore crucial for the investigation of microbiota-related health and disease issues. Bacterial taxonomic composition in AP rats compared to the SO rats To explore the effects of AP on the intestinal micro ecology, we compared bacterial communities in different anatomic sites of AP rats with SO rats. As shown in Fig. 6A and B, at phylum level, the relative abundances of Bacteroidetes in small intestine and Tenericutes in colon of the AP rats were all signicantly decreased compared with the SO group (p < 0.05). Moreover, the relative abundance of Fusobacteria and Proteobacteria were higher in feces samples from the AP group than those in the SO group, in contrary to the relative abundance of Tenericutes and Cyanobacteria (p < 0.05). In addition, because Firmicutes was the most abundant phyla in all samples, we compared its difference abundance between AP and SO rats, and the results showed that Firmicutes were decreased in the feces of the AP group compared to the SO group, but there is no statistically signicant differences. Furthermore, as shown in Fig. 7A and B, the relative abundances of 40 species at genus level were obviously changed in the small intestine of AP rats compared with the SO group (p < 0.05), while the numbers of genus changes in the colon and feces were 14 and 21, respectively. 
Linear discriminant analysis (LDA) is a linear classier which assigns objects into groups based on their Mahalanobis distance to group centres, 29 and it was usually used to evaluate the difference in relative abundances of intestinal ora. In this paper, we calculated the LDA scores of genus changes between AP and SO rats, and found that the relative abundances of Lactobacillus, Gemella and Bacteroides were signicantly increased in the small intestine and feces of AP rats, separately. These bacterial colonies maybe the targets for the treatment of AP. COG and KEGG analyses The COG database consists of proteins from complete genomes of bacteria, archaea and eukaryotes, and each categorized COG term represents an ancient conserved domain. 30 As shown in Fig. 8A, the results proved that COG terms of differential ora in small intestine (p < 0.05) were mainly contained "[F] Nucleotide transport and metabolism", "[G] Carbohydrate transport and metabolism" and "[J] Translation, ribosomal structure and biogenesis". The signicantly differential ora in faeces (p < 0.05) were included "[C] energy production and conversion", "[D] Cell cycle control, cell division, chromosome partitioning" and "[E] Amino acid transport and metabolism", etc. However, in the colonic tissue, there was no notably different COG functional distribution. In addition, KEGG database acts as an important knowledgebase to provide signicant details regarding the molecular networks and biological pathways associated with the given genes or transcripts. 31 As shown in Fig. 8B, the results indicated that differential ora in small intestine (p < 0.05) mainly involved in the signaling pathways of "Biosynthesis of Other Secondary Metabolites", "Cell Motility", "Endocrine System", "Environmental Adaptation", "Glycan Biosynthesis and Metabolism", "Immune System", "Infectious Diseases", "Replication and Repair", "Translation" and "Transport and Catabolism". Moreover, differential ora in colon (p < 0.05) contained the signaling pathways of "Carbohydrate Metabolism", "Cardiovascular Diseases", "Circulatory System", "Immune System Diseases" and "Signal Transduction". In the differential ora of faeces tissue (p < 0.05), the signaling pathways involved in the "Cancers", "Cellular Processes and Signaling", "Infectious Diseases", "Nervous System" and "Poorly Characterized". There results maybe prove key information for the treatment of AP through balancing intestinal ora. Animal and ethical approval Twelve Sprague-Dawley (SD) rats weighing 180-220 g (6 weeks old) were obtained from the Dalian Medical University Laboratory Animal Center (Dalian, China) and housed in groups with three rats per cage in the specic pathogen-free (SPF) experiment environment. The rats were free access to standard laboratory food and water (autoclaved before use) and housed at 24 AE 2 C with 65% AE 5% humidity on a 12 h light/dark cycle for adaptive feeding 1 week prior to the experiment. Randomization was used to assign samples to the experimental groups and to collect and process data. The experiments were performed by investigators blinded to the groups for which each animal was assigned. All animal care and experimental procedures were performed according to the guidelines of the Institutional Animal Care and local veterinary office and ethics committee of Dalian Medical University and were conducted in strict compliance with the People's Republic of China Legislation Regarding the Use and Care of Laboratory Animals. 
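The LDA scores mentioned above rank taxa by how strongly their relative abundances separate the AP and SO groups. The sketch below illustrates the idea with scikit-learn on a synthetic abundance table; it is a simplification and does not reproduce the LEfSe-style LDA effect-size pipeline usually behind such scores.

```python
# Sketch of LDA on genus-level relative abundances (synthetic data, 6 SO vs. 6 AP rats).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
so = rng.dirichlet([8, 4, 2, 1], size=6)     # relative abundances, SO group
ap = rng.dirichlet([4, 8, 2, 1], size=6)     # shifted composition, AP group
# Drop one column to avoid the exact sum-to-one collinearity of compositional data.
X = np.vstack([so, ap])[:, :-1]
y = np.array(["SO"] * 6 + ["AP"] * 6)

lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
scores = lda.transform(X).ravel()            # per-sample positions on the LDA axis
print("per-genus LDA coefficients:", lda.coef_.ravel())
print("group separation (mean AP - mean SO):",
      scores[y == "AP"].mean() - scores[y == "SO"].mean())
```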
Animal studies have been reported in compliance with the ARRIVE guidelines. 32 AP model establishment and sample preparation Twelve SD rats were randomly divided into 2 groups (n ¼ 6): group I, SO group; group II, AP model group. Experimental AP models were established as previously described. [33][34][35] In detail, the rats were fasted 12 h with free access to water before anaesthesia with pentobarbital sodium (40 mg kg À1 ). Hepatic duct was clamped by a small artery clip aer the pancreas was fully exposed by a midline incision. The biliopancreatic duct was cannulated through the duodenum, and a freshly prepared 5.0% sodium taurocholate (0.1 mL/100 g body weight, Sigma, Inc., USA) was then administered into the biliopancreatic duct through a standard retrograde infusion. Presenting as controls, rats in the SO group received only abdomen opened and closed. Aer 48 hours of duct infusion, fresh feces were collected in sterile tube and stored at liquid nitrogen immediately, then transferred to À80 C. Blood samples were obtained via the abdominal aorta of the rats for biochemical analyses aer anaesthesia. Pancreatic head samples were xed in 10% buffered formalin and embedded in paraffin for histopathological examination. The small intestine and distal colon were sampled under aseptic condition and immediately frozen in liquid nitrogen, then maintained at À80 C until DNA extraction. The lengths of the rat small intestine (including duodenum, jejunum and ileum) and colon were about $30 and $10 cm, respectively. Plasma amylase and inammatory cytokine levels Plasma amylase, lipase, IL-6, IL-1b and TNF-a levels were assayed with ELISA kits (Lengton Inc., China) following the manufacturer's instructions. The absorbance at 450 nm representing the relative level of amylase, lipase, IL-6, IL-1b and TNFa in each well was measured by Thermo Scientic Multiskan FC (Massachusetts, USA). Histopathological examination Formalin-treated tissue samples from each group were sliced into sections (5 mm), and then were dewaxed in graded alcohols, followed by stained with hematoxylin and eosin (H&E). Images were obtained using a light microscope (Leica DM4000B, Germany) at 200Â magnication. Moreover, we calculated the histological scores according Table 1 in order to standardize and detail the stage of AP induced by sodium taurocholate. 33,36,37 DNA extractions Total genomic DNA from different samples (100 mg) was extracted using the E.Z.N.A.® Stool DNA Kit (D4015, Omega, Inc., USA) according to manufacturer's instructions. The reagent which was designed to uncover DNA from trace amounts of sample has been shown to be effective for the preparation of DNA of most bacteria. Nuclear-free water was used for blank. The total DNA was eluted in 50 mL of elution buffer and stored at À80 C until measurement in the PCR by LC-Bio Technology Co. Ltd (Hangzhou, China). PCR amplication The V3-V4 region of the bacteria 16S ribosomal RNA genes were amplied genomic DNA with slightly modied versions of primers 338F (5 0 -ACTCCTACGGG AGGCAGCAG-3 0 ) and 806R (5 0 -GGACTACHVGGGTWTCTAAT-3 0 ). 38 The 5 0 ends of the primers were tagged with specic barcode sequences per sample and sequencing universal primers. PCR amplication was performed in a total volume of 25 mL reaction mixture containing 50 ng of template DNA, 12.5 mL of PCR Premix, 2.5 mL of each primer, and PCR-grade water to adjust the volume. 
The PCR conditions to amplify the bacteria 16S fragments consisted of an initial denaturation at 98 C for 30 s, followed by 35 cycles of denaturation at 98 C for 10 s, annealing at 54 C for 30 s, and extension at 72 C for 45 s; and then a nal extension at 72 C for 10 min. 16S rDNA sequencing The PCR products were conrmed with 2% agarose gel electrophoresis. Throughout the DNA extraction process, ultrapure water, instead of a sample solution, was used to exclude the possibility of false-positive PCR results as a negative control. The PCR products were puried using AMPure XT beads (Beckman Coulter Genomics, Danvers, MA, USA) and quantied by Qubit (Invitrogen, USA). The amplicon pools were prepared for sequencing and the size and quantity of the amplicon library were assessed on Agilent 2100 Bioanalyzer (Agilent, USA) according to the standard protocols and with the Library Quantication Kit for Illumina (Kapa Biosciences, Woburn, MA, USA), respectively. PhiX Control library (v3) (Illumina) was combined with the amplicon library (expected at 30%). The libraries were sequenced either on 300 PE MiSeq runs and one library was sequenced with both protocols using the standard Illumina sequencing primers, eliminating the need for a third (or fourth) index read. Data analysis Samples were sequenced on an Illumina MiSeq platform according to the manufacturer's recommendations, provided by LC-Bio. Paired-end reads was assigned to samples based on their unique barcode and truncated by cutting off the barcode and primer sequence. Paired-end reads were merged using FLASH. Quality ltering on the raw tags were performed under specic ltering conditions to obtain the high-quality clean tags according to the FastQC (V 0.1). Chimeric sequences were ltered using Verseach soware (v2. 3.4). Sets of sequences with $97% similarity were assigned to the same OTUs by Verseach (v2.3.4). Representative sequences were chosen for each OTU, and taxonomic data were then assigned to each representative sequence using the RDP (Ribosomal Database Project) classi-er. The differences of the dominant species in different groups, multiple sequence alignment were conducted using the PyNAST soware to study phylogenetic relationship of different OTUs. OTUs abundance information were normalized using a standard of sequence number corresponding to the sample with the least sequences. Alpha diversity is applied in analyzing complexity of species diversity for a sample through 4 indices, including Chao1, Shannon, Simpon and Observed species. All these indices in our samples were calculated with QIIME (Version 18.0). Beta diversity analysis was used to evaluate differences of samples in species complexity. Beta diversity were calculated by PCoA and cluster analysis by QIIME soware (Version 1.8.0). Moreover, RDA was analyzed by CANOCO soware; and the COG and KEGG analyses were performed by the STAMP soware. Statistical analysis Data were presented as mean AE Standard Deviation (SD). Statistical analysis was performed with the unpaired t-test when comparing two different groups. SPSS 18.0 (SPSS, Chicago, IL, USA) was used to handle these data and only when a minimum of n ¼ 5 independent samples was acquired. p < 0.05 was considered statistically signicant. The data and statistical analysis comply with the recommendations on experimental design and analysis in pharmacology. 
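The beta-diversity step described above projects a (weighted UniFrac) distance matrix into a few principal coordinates. A minimal, generic PCoA (classical multidimensional scaling) is sketched below for a toy distance matrix; QIIME's implementation differs in details but rests on the same double-centring and eigendecomposition.

```python
# Sketch of principal coordinate analysis (PCoA) on a symmetric distance matrix.
# The 4x4 distance matrix below is a toy placeholder, not a UniFrac matrix from this study.
import numpy as np

def pcoa(D, n_axes=2):
    """Classical PCoA: eigendecomposition of the double-centred squared distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                  # Gower double-centring
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1]            # largest eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    coords = eigvecs[:, :n_axes] * np.sqrt(np.clip(eigvals[:n_axes], 0, None))
    positive = np.clip(eigvals, 0, None)
    explained = positive / positive.sum()
    return coords, explained[:n_axes]

D = np.array([[0.0, 0.2, 0.7, 0.8],
              [0.2, 0.0, 0.6, 0.7],
              [0.7, 0.6, 0.0, 0.3],
              [0.8, 0.7, 0.3, 0.0]])
coords, explained = pcoa(D)
print(coords.round(3), explained.round(2))
```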
39

Conclusions

In conclusion, whereas numerous reports focus on fecal samples from animals and humans with AP, we have characterized for the first time the bacterial community map along the AP rat gut to identify bacterial communities associated with AP. Although experimental and clinical studies have suggested that probiotics might be an effective strategy to prevent and manage AP, the use of the microbiota as a biomarker of impending or fully manifest AP, within or outside of the gut, and for monitoring therapeutic responses still needs to be explored. Because the intestinal tract of mammals harbors microbial ecosystems that vary with location along the tract, attention should be paid to ensure that the proper gut samples are used to represent each bacterial community in microbiota-related research. These data will provide clinical guidance for the treatment of AP through regulation of bacterial communities, and offer a reference for studies of bacterial communities in different parts of the intestine.

Conflicts of interest

There are no conflicts to declare.
2019-04-10T13:13:20.180Z
2019-02-05T00:00:00.000
{ "year": 2019, "sha1": "16f3b59462211f508d060fbf974146f0e931524e", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2019/ra/c8ra09547g", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "460117e04b14180c3d416e67250f9e319cdf51ce", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
17951992
pes2o/s2orc
v3-fos-license
The Use of Recombinant Pseudotype Virus-Like Particles Harbouring Inserted Target Antigen to Generate Antibodies against Cellular Marker p16INK4A Protein engineering provides an opportunity to generate new immunogens with desired features. Previously, we have demonstrated that hamster polyomavirus major capsid protein VP1-derived virus-like particles (VLPs) are highly immunogenic and can be employed for the insertion of foreign epitopes at certain surface-exposed positions. In the current study, we have designed pseudotype VLPs consisting of an intact VP1 protein and VP2 protein fused with the target antigen—cellular marker p16INK4A—at its N terminus. Both proteins coexpressed in yeast were self-assembled to pseudotype VLPs harbouring the inserted antigen on the surface. The pseudotype VLPs were used for generation of antibodies against p16INK4A that represents a potential biomarker for cells transformed by high-risk human papillomavirus (HPV). The pseudotype VLPs induced in immunized mice a strong immune response against the target antigen. The antisera raised against pseudotype VLPs showed specific immunostaining of p16INK4A protein in malignant cervical tissue. Spleen cells of the immunized mice were used to generate monoclonal antibodies against p16INK4A protein. The specificity of antibodies was proven by the immunostaining of HPV-transformed cells. In conclusion, the current study demonstrates the potential of pseudotype VLPs with inserted target antigen as a new type of immunogens to generate antibodies of high diagnostic value. Introduction Gene and protein engineering provides an opportunity to generate novel chimeric proteins with desired features, such as enhanced immunogenicity. Structural proteins originating from human and animal viruses, for example, papilloma, hepatitis B, and parvo-and rotaviruses with their intrinsic capacity to self-assemble to highly organized structuresvirus-like particles (VLPs)-have been shown to possess high immunogenicity and therefore exploited as potential vaccines [1][2][3]. Moreover, recombinant VLPs can be employed as carriers for non immunogenic proteins or peptides in order to enhance their immunogenicity. Previous studies demonstrated that insertions/fusions of foreign protein segments at certain sites of VLP carriers derived from papilloma-, polyoma-, hepadna-, parvo-, and retroviruses did not influence protein folding and assembly of chimeric VLPs. The immunogenicity of foreign sequences presented on the surface of chimeric VLPs is enhanced making these VLPs promising vaccine candidates [4][5][6][7]. Recently, we have demonstrated that hamster polyomavirus (HaPyV) major capsid protein VP1-derived VLPs are highly immunogenic and tolerate inserts of different size and origin at certain surface-exposed positions. The chimeric HaPyV-VP1 VLPs have been shown to activate efficiently the antigenpresenting cells and induce strong insert-specific B-and Tcell responses in mice [8,9]. These studies demonstrated that chimeric VLPs represent promising novel immunogens to generate monoclonal antibodies (MAbs) of the desired epitope-specificity. The main advantage of chimeric VLPs 2 The Scientific World Journal over tradicional immunogens such as synthetic peptides chemically coupled to carrier proteins is the exposure of the target sequence on the surface of VLPs thus allowing its accessibility to the B cells [9]. 
Although chimeric VLPs tolerate inserts up to 120 amino acid (aa) residues, the insertion of longer protein sequences generally affects proper folding and self-assembly of VLPs (our unpublished observation). Therefore, new approaches for enhancing the immunogenicity of long protein segments or full-length proteins are needed. This is especially important for human cellular proteins that may be tolerogenic in mice because of high homology with murine proteins. Strong immunogens presenting the target protein sequence on a suitable carrier may break the tolerance barrier and increase the immunogenicity of non-immunogenic proteins or protein segments. In the current study, we designed novel recombinant immunogens based on pseudotype VLPs consisting of two HaPyV-derived capsid proteins-an intact VP1 protein and modified VP2 protein harbouring the target protein sequence at VP2 N terminus. As a target sequence, we have used full-length cellular protein of high diagnostic relevance p16 INK4A that is considered to be a potential marker for cells transformed by high-risk human papillomavirus (HPV). We have demonstrated that pseudotype VLPs consisting of an intact VP1 protein and VP2 protein fused with the p16 INK4A antigen at its N terminus induced a strong antibody response against the target sequence which allowed generation of p16 INK4A -specific MAbs. Production of Pseudotype VLPs Harbouring Full-Length p16 INK4A Protein. All DNA manipulations were carried out according to standard procedures [10]. Enzymes and kits for DNA manipulations were purchased from Thermo Scientific Fermentas (Vilnius, Lithuania). Recombinant plasmids were screened in E.coli DH10B cells. The synthetic gene encoding the full length p16 INK4A protein (synthesized by Integrated DNA Technologies, BVBA, Leuven, Belgium) was fused to hamster polyomavirus (HaPyV) VP2 gene modified at its N terminus in the plasmid pFGG3-VP1/VP2Bg. This plasmid was constructed by inserting HaPyV VP1 gene into GAL 7 expression cassette and modified HaPyV VP2 gene under GAL10-PYK1 hybrid promoter into yeast expression vector pFGG3 [11]. To construct the modified HaPyV VP2 gene, the sequence encoding 1-100 aa was deleted and GSS linker coding sequence and the BglII restriction site were introduced at its N terminus for a fusion with p16 INK4A coding sequence. The resulting plasmid pFGG3-VP1/VP2-p16 was used for the transformation of the Saccharomyces cerevisiae strain AH22-214 (a, leu2-3, 112, his4-519). Transformed yeast cells were grown in YEPD medium (yeast extract 1%, peptone 2%, and glucose 2%) supplemented with 5 mM formaldehyde at 30 • C. The production of the recombinant protein was induced after 24 h of cultivation by adding galactose until 3% final concentration. After 18 h growth the yeast cells were harvested by centrifugation and stored at −20 • C until use. The expression of recombinant VP1 and VP2 proteins was verified by gel electrophoresis and Western blot analysis of the yeast cell lysate as decribed hereinafter. Purification and Electron Microscopy Analysis of Pseudotype VLPs. S. cerevisiae yeast biomass harbouring recombinant proteins was resuspended and homogenized in DB 450 buffer (450 mM NaCl, 1 mM CaCl 2 , 0.001% Trition X-100, 0.25 M L-Arginine in 10 mM Tris/HCl-buffer, pH 7.2) containing 2 mM phenylmethylsulfonyl fluoride (PMSF) and EDTA-free Complete Protease Inhibitor Cocktail (Thermo Scientific Fermentas) and mechanically disrupted using French press. 
After centrifugation, the supernatant was collected and loaded onto a 20-60% sucrose gradient. After centrifugation at 25,000 rpm (Rotor SW28, Beckman, USA) overnight, fractions of 0.5 mL were collected and subjected to SDS-PAGE. The fractions containing proteins of 42 and 45 kDa, corresponding to VP1 and p16INK4A fused with VP2 (VP2-p16INK4A), respectively, were pooled and diluted in buffer DB 150 (150 mM NaCl, 1 mM CaCl2, and 0.001% Triton X-100 in 10 mM Tris/HCl buffer, pH 7.2). The mixture was subjected to ultracentrifugation overnight at 100,000 ×g (Beckman) on a CsCl gradient with densities from 1.23 to 1.42 g/mL. The collected fractions were analyzed as described previously. As the recombinant VP1 and VP2-p16INK4A proteins were almost identical in molecular mass, the presence of the VP2-p16INK4A protein was verified by Western blot using in-house produced murine polyclonal antibodies against VP2 protein. Fractions containing VP1/VP2-p16INK4A protein were diluted and precipitated by ultracentrifugation for 4 h, then dissolved in phosphate buffered saline (PBS) and dialyzed against PBS. The dialyzed VP1/VP2-p16INK4A protein was aliquoted and lyophilized. VLP formation was verified by examination of the purified proteins with a Morgagni 268 electron microscope (FEI Inc., Hillsboro, OR, USA). Protein samples were placed on 400-mesh carbon-coated palladium grids and negatively stained with 2% aqueous uranyl acetate. Production of GST-Fused p16INK4A Protein. To produce p16INK4A protein fused to glutathione S-transferase (GST) in E. coli, the DNA sequence encoding p16INK4A protein was cloned into the expression vector pGEX-5x (Amersham). The resulting plasmid pGEX-5x-p16 was used to transform E. coli strain BL1. The expression of the GST-fused p16INK4A protein was confirmed by SDS-PAGE and Western blot analysis with anti-GST antibodies (GE Healthcare, Uppsala, Sweden). The GST-p16INK4A fusion protein was purified using Glutathione Sepharose 4 Fast Flow (GE Healthcare Bio-Sciences AB, SE-751 84) following the manufacturer's recommendations. Immunization of Mice and Generation of Monoclonal Antibodies. BALB/c mice (obtained from a breeding colony at the Department of Immunology of the Center for Innovative Medicine, Vilnius, Lithuania) were immunized at days 0, 28, and 56 by a subcutaneous injection of 50 µg of either recombinant pseudotype VLPs harbouring p16INK4A protein or purified GST-p16INK4A fusion protein. For the initial immunization, the antigen was emulsified in complete Freund's adjuvant (Sigma). Subsequent immunizations were performed without an adjuvant, with the antigen dissolved in PBS. Antisera were collected two weeks after the second injection and tested for the presence of antibodies specific to p16INK4A protein. The mouse with the highest antibody titer against pseudotype VLPs was selected for the development of MAbs. Hybridomas were generated essentially as described by Kohler and Milstein [12]. Three days after the final injection, mouse spleen cells were fused with Sp2/0-Ag 14 mouse myeloma cells using polyethylene glycol 1500 (PEG/DMSO solution, HybriMax, Sigma). Hybrid cells were selected in growth medium supplemented with hypoxanthine, aminopterin, and thymidine (50x HAT media supplement, Sigma-Aldrich, St. Louis, USA). Samples of supernatant from wells with viable clones were screened by an indirect enzyme-linked immunosorbent assay (ELISA) using recombinant VLPs and GST-fused p16INK4A protein as described hereinafter.
Hybridomas secreting specific antibodies to p16 INK4A protein were subcloned twice by a limiting dilution assay. Hybridoma cells were maintained in complete Dulbecco's modified Eagle's medium (DMEM, Biochrom) containing 15% fetal calf serum (Biochrom) and antibiotics. Antibodies in hybridoma culture supernatants were isotyped using the Mouse Monoclonal Antibody Isotyping kit (ISO-2, Sigma) in accordance with the manufacturer's protocol. All procedures involving experimental mice were performed under controlled laboratory conditions in strict accordance with the Lithuanian and European legislation. SDS-PAGE. Proteins were analysed by electrophoresis on 12.5% sodium dodecylsulfate-polyacrylamide gels (SDS-PAGE) followed by Coomassie brilliant blue staining. The SDS-PAGE sample buffer (Thermo Scientific Fermentas) was added to the prepared protein samples, boiled for 5 min, applied to a polyacrylamide gel, and run in SDS-Trisglycine buffer. Protein bands were visualized by staining with Coomassie brilliant blue (Sigma). 2.6. Western Blot Analysis. The proteins were separated by SDS-PAGE and electrotransferred to Immobilon P membrane (Millipore). The membranes were blocked with 5% milk in PBS for 2 h at room temperature (RT). The membranes were then incubated for 1 h at RT with primary antibodies at working dilution and subsequently incubated with goat antimouse IgG conjugated to horseradish peroxidase (HRP) (Bio-Rad) diluted 1 : 2000 in PBS with 0.1% Tween 20 (PBST). The enzymatic reaction was developed using tetramethylbenzidine (TMB) ready-to-use chromogenic substrate (Sigma). As primary antibodies for the identification of VP1/VP2-p16 INK4A proteins, mouse MAb against HaPyV VP1, clone 3D10 [9], and polyclonal antibodies produced in-house against HaPyV VP2 protein were used (dilution 1 : 1000 in PBST). For analysing MAb specificity, undiluted hybridoma supernatants were used. Indirect ELISA. Polystyrene microtiter plates (Nerbe) were coated with 100 µl/well of the antigen diluted in coating buffer (0.05 M sodium carbonate, pH 9.6) to a concentration of 5 µg/mL by incubation overnight at 4 • C. The coated plates were blocked with 150 µL/well of 1% BSA for 2 h at RT. Plates were rinsed twice with PBST. Antiserum samples or hybridoma growth medium were diluted in PBST, added to the wells, and incubated for 1 h at RT. The plates were rinsed 3 times with PBST and incubated for 1 h with HRPconjugated goat antimouse IgG (Bio-Rad) diluted 1 : 2000 in PBST. The plates were rinsed 5 times with PBST. The enzymatic reaction was visualized by the addition of 100 µL of ready-to-use TMB substrate (Sigma) to each well. After 10 min of incubation at RT, the reaction was stopped by adding 50 µL/well of 10% sulphuric acid. The optical density (OD) was measured at 450 nm (reference filter 620 nm) in a microplate reader (Tecan, Groedig, Austria). Immunohistochemistry Analysis. Immunohistochemical staining was performed on paraffin-embedded samples of cervical squamous cell carcinomas and nonneoplastic cervical tissue selected from archival materials at the National Specialized Hospital for Active Treatment in Oncology, Sofia, Bulgaria. Haematoxylin and eosin-stained slides of all samples were reviewed by a pathologist and their histopathological diagnoses were reconfirmed. Sections (approximately 5 µm thick) were cut and mounted on poly-l-lysine coated microscope slides (Thermo Scientific). Samples were deparaffinized in xylene and rehydrated in graded alcohols. 
Antigen retrieval was performed in 0.01 M citrate buffer (pH 6.0) in a heating bath for 20 min at 97 • C. Endogenous peroxidase activity was blocked by incubating the sections in 3% H 2 O 2 for 5 min. After blocking the nonspecific binding with 3% BSA in PBS for 3 h at RT, slides were incubated with the primary antibody (mouse polyclonal antibody raised against pseudotype VLPs harbouring p16 INK4A protein; 1 : 300, or mouse polyclonal antibody raised against GST-p16 INK4A fused protein; 1 : 100) and left overnight in moist chambers at 4 • C. The bound antibody was visualized using a biotinylated secondary antibody, peroxidase-labelled streptavidin, and DAB substrate-chromogen (LSAB2 System-HRP, Dako, Denmark) according to the manufacturer's protocol. Sections were counterstained in hematoxylin, mounted and analyzed under light microscopy. As a negative control, irrelevant mouse polyclonal antibody raised against yeastexpressed hPIV3 nucleocapsid protein was used [13]. Flow-Cytometry. Adherent human cervical epithelial HeLa cells (ATCC Cat. No. CCL-2) were cultivated in RPMI-1640 growth medium (Biochrom, Berlin, Germany) supplemented with 10% fetal bovine serum (Biochrom) and antibiotics. The cells were grown at 37 • C and 5% CO 2 to approximately 70% confluence, harvested, resuspended in Fixation/Permeabilization solution (BD Biosciences, Franklin Lakes, USA), and incubated for 20 minutes at 4 • C. The cells were washed two times with BD Perm/Wash buffer (BD Biosciences) and transferred to plastic tubes for immunofluorescent staining, 10 6 cells per test. One hundred µL of BD Perm/Wash buffer containing 5 µg/mL of the MAb against p16 INK4A or appropriate positive and negative controls were added to the cells and incubated at 4 • C for 30 min. As a positive control, commercial anti-CDK2A/p16 INK4A MAb, clone DCS-50.1/H4 (Abcam, Cambridge, UK) was used (10 µg/mL). As a negative control, irrelevant MAb of IgG1 isotype against yeast-expressed hPIV3 nucleocapsid protein was used (10 µg/mL) [13]. After incubation, the cells were washed two times with BD Perm/Wash buffer and then incubated for 30 min at 4 • C in the dark with 50 µL of BD Perm/Wash buffer containing a predetermined optimal concentration of FITC-conjugated goat antimouse IgG (BD Pharmingen, Franklin Lakes, USA). Finally, the cells were washed two times with BD Perm/Wash buffer and resuspended in Staining Buffer (BD Pharmingen) prior to flow cytometric analysis. Cells were analyzed with CyFlow R space flow cytometer (Partec, Muenster, Germany). Not less than 20.000 events per test were evaluated with FloMax 2.7 software. Results Full-length human p16 INK4A protein (16 kDa, 133 aa-long) was selected as a target protein for the generation of pseudotype VLPs and further immunization experiments. The alignment of aa sequences of human and murine p16 INK4A proteins using ClustalLW and BLAST computer programs revealed 75% sequence homology (Figure 1). High number of identical and similar aa residues indicated the low immunogenicity of human p16 INK4A in mice; therefore, the antigen was considered to be suitable as a target sequence for presenting on pseudotype VLPs. For the construction of expression plasmids, synthetic gene encoding full-length p16 INK4A sequence was inserted into yeast expression vector pFGG3-VP1/VP2Bg designed for the coexpression HaPyV VP1 protein together with VP2 protein truncated until the 101 aa residue. The resulted plasmid pFGG3-VP1/VP2-p16 was used to transform yeast S.cerevisiae strain AH22-214. 
The SDS-PAGE analysis of crude lysates of transformed yeast cells revealed an overlapping protein band of approximately 42-45 kD because the molecular mass of full-length VP1 protein and VP2-p16 INK4A fused protein was very similar (42 and 45 kD, resp.) (Figure 2(a), lane 2). The corresponding protein band was not visible in the lysate of yeast cells transformed with empty vector pFGG3 used as a negative control (Figure 2(a), lane 1). Protein bands representing the VP1 protein and VP2-p16 INK4A fused protein were specifically immunostained with the respective antibodies against HaPyV VP1 and VP2 proteins (Figures 2(b) and 2(c), lane 2). The soluble fraction of the lysate of transformed yeast cells was subjected to ultracentrifugation in sucrose and CsCl density gradients. The purified recombinant VP1/VP2-p16 INK4A proteins were analyzed by SDS PAGE and Western blot. According to SDS-PAGE data, the purity of VP1/VP2-p16 INK4A proteins after the ultracentrifugation step was about 99% (Figure 2(a), lane 4). The identity of purified proteins was confirmed by Western blot analysis using specific antibodies (Figures 2(b) and 2(c), lane 4). Electron microscopy analysis of the purified negatively stained VP1/VP2-p16 INK4A proteins confirmed the formation of VLPs of about 45 nm in diameter (Figure 2(d)) similar in their size and shape to the nonmodified VP1/VP2 VLPs (Figure 2(e)) and to native viral capsids. The pseudotype VLPs were used to immunize BALB/c mice to generate antibodies against the inserted target sequence. In parallel, the BALB/c mice were immunized with purified GST-fused p16 INK4A protein. After 2 immunizations, the titers of antibodies specific to VP1/VP2-p16 INK4A determined by an indirect ELISA in the sera of mice immunized with pseudotype VLPs ranged from 1 : 16000 to 1 : 32000 (data not shown). To confirm the specificity of the antisera with p16 INK4A protein, their specificity was analyzed by Western blot using fused protein GST-p16 INK4A expressed in E.coli. Specific immunostaining of GST-p16 INK4A fuse was observed, which confirms that the antibodies raised against pseudotype VLPs recognize the p16 INK4A sequence (data not shown). In contrast, the antisera raised against GST-p16 INK4A fused protein recognized only the antigen used for immunization (titers after 2 immunizations ranged 1 : 4000-1 : 12000) and did not show any reactivity with the p16 INK4A sequence displayed on VLPs (data not shown). To investigate the reactivity of the antisera with cellular p16 INK4A protein, they were applied to the immunohistochemistry analysis (IHC) of cervical tissue specimens. The antisera raised against pseudotype VLPs showed specific immunostaining of malignant cervical tissue containing HPV-transformed cells and did not react with nonneoplastic cervical tissue (Figure 3). This demonstrates the reactivity of the antisera raised against pseudotype VLPs with the cellular p16 INK4A protein present in malignant cervical tissue. In contrast, the antisera raised against GST-fused p16 INK4A protein did not show any reactivity in IHC (data not shown). Therefore, no further experiments with mice immunized with GST-p16 INK4A fused protein were performed. Spleen cells of mice immunized with pseudotype VLPs were used to generate the MAbs against p16 INK4A protein. Three stable hybridoma cell lines producing p16 INK4Aspecific MAbs of IgG isotype (IgG1 subtype) were generated. 
The MAbs reacted specifically in Western blot both with pseudotype VP1/VP2-p16INK4A VLPs and with the GST-p16INK4A fusion protein but did not react with either yeast cell lysate or non-modified VP1/VP2 VLPs used as a negative control (Figure 4). To prove the reactivity of the MAbs with native intracellular p16INK4A protein, they were applied to flow-cytometry analysis of HeLa cells, which represent HPV-18-transformed cervical carcinoma cells. No specific immunostaining of HeLa cells was observed with an irrelevant MAb of the same isotype (Figure 5(c)). Thus, polyclonal and monoclonal antibodies raised against pseudotype VLPs harbouring the full-length p16INK4A sequence were reactive with cellular native p16INK4A protein. This is indirect evidence that the p16INK4A molecule displayed on the surface of pseudotype VLPs is natively folded. In conclusion, our results demonstrate that pseudotype VLPs represent a highly efficient carrier for cellular antigens and elicit a strong antibody response against the target protein presented on the surface of VLPs.

Discussion

The aim of the current study was to design a novel recombinant antigen capable of displaying surface-exposed foreign protein sequences and enhancing their immunogenicity. Such recombinant antigens may be applied for the generation of antibodies against cellular proteins of low immunogenicity. As a target protein for the construction of the recombinant antigen, we selected the cellular marker p16INK4A, which represents an indirect indicator of cell cycle dysregulation associated with high-risk HPV infection. The expression of p16INK4A is specifically induced in HPV-infected cells by the HPV E7 protein, which inactivates the regulatory protein pRb (retinoblastoma gene product) and upregulates the transcription factor E2F, thereby allowing cdk gene transcription. The product of the cdk gene is the p16INK4A protein. Several studies examined the p16INK4A protein by immunocytochemical analysis and confirmed its diagnostic relevance as a biomarker for dysplastic squamous and glandular cells of the cervix [14][15][16]. Based on the alignment of the aa sequences of human and murine p16INK4A proteins, which revealed a high degree of homology, the low immunogenicity of human p16INK4A in mice was predicted. To enhance its immunogenicity, we constructed pseudotype VLPs consisting of an intact HaPyV VP1 protein and a VP2 protein fused with the target antigen, the p16INK4A protein, at the VP2 N terminus. Both recombinant proteins coexpressed in yeast S. cerevisiae were able to self-assemble into pseudotype VLPs harbouring the inserted target antigen. The shape and size of the pseudotype VLPs were similar to those observed for native HaPyV capsids [17]. From the resolved crystal structures of the virions of SV-40 [18] and murine polyomavirus [19] it is known that the capsid of polyomaviruses mainly consists of 72 pentamers
Can Machine Learning Approaches Lead Toward Personalized Cognitive Training? Cognitive training efficacy is controversial. Although many recent studies indicate that cognitive training shows merit, others fail to demonstrate its efficacy. These inconsistent findings may at least partly result from differences in individuals’ ability to benefit from cognitive training in general, and from specific training types in particular. Consistent with the move toward personalized medicine, we propose using machine learning approaches to help optimize cognitive training gains. INTRODUCTION Cognitive training efficacy is controversial. Although many recent studies indicate that cognitive training shows merit, others fail to demonstrate its efficacy. These inconsistent findings may at least partly result from differences in individuals' ability to benefit from cognitive training in general, and from specific training types in particular. Consistent with the move toward personalized medicine, we propose using machine learning approaches to help optimize cognitive training gains. COGNITIVE TRAINING: STATE-OF-THE-ART FINDINGS AND DEBATES Cognitive training targets neurobiological mechanisms underlying emotional and cognitive functions. Indeed, Siegle et al. (2007) suggested that cognitive training can significantly improve mood, daily functioning, and cognitive domains. In recent years, various types of cognitive training have been researched. Frequently researched training types include cognitive bias modification (CBM) aims to modify cognitive processes such as interpretations and attention, making these more adaptive and accommodating to real-life demands (Hallion and Ruscio, 2011); inhibitory training seeks to improve inhibitory control and other executive processes, thus helping regulate behavior and emotion (Cohen et al., 2016;Koster et al., 2017); working memory training targets attentional resources, seeking to increase cognitive abilities by improving working memory capacities (Melby-Lervåg and Hulme, 2013). All these types demonstrated major potential in improving psychopathological symptoms or enhancing cognitive functions (Jaeggi et al., 2008;Hakamata et al., 2010). Despite the accumulating body of evidence suggesting that cognitive training is a promising research path with major clinical potential, questions remain regarding its efficacy, and generalizability. Recent meta-analyses further corroborate this (for a discussion, see Mogg et al., 2017;Okon-Singer, 2018). For example, several research groups tested CBM studies using meta-analyses. Hakamata et al. (2010) analyzed twelve studies (comprising 467 participants from an anxious population), reporting positive moderate effects of training on anxiety symptom improvement. Yet two other meta-analyses focusing on both anxiety and depression (49 and 45 studies, respectively) demonstrated small effect sizes and warned of possible publication bias (Hallion and Ruscio, 2011;Cristea et al., 2015). These inconsistent results raise important questions about training efficacy. Several factors have been suggested as potential sources of this variability in effect size, including differences in inclusion criteria and quality of the studies included (Cristea et al., 2015). As in the CBM literature, meta-analyses of working memory training also yielded divergent results. Au et al. (2015) analyzed twenty working memory training studies comprising samples of healthy adults and reported small positive effects of training on fluid intelligence. 
The authors suggested that the small effect size underestimates the actual training benefits and may result from methodological shortcomings and sample characteristics, stating that "it is becoming very clear to us that training on working memory with the goal of trying to increase fluid intelligence holds much promise" (p. 375). Yet two other meta-analyses of working memory (87 and 47 studies, respectively) described specific improvements only in the trained domain (i.e., near transfer benefits) and few generalization effects in other cognitive domains (Schwaighofer et al., 2015; Melby-Lervåg et al., 2016). As with CBM, these investigations did not include exactly the same set of studies, making it difficult to infer the reason for the discrepancies. Nevertheless, potential factors contributing to variability in intervention efficacy include differences in methodology and inclusion criteria (Melby-Lervåg et al., 2016). Some scholars suggested that the inconsistent results seen across types of training may result from the high variability in training features, such as dose, design type, training type, and type of control groups (Karbach and Verhaeghen, 2014). For example, some studies suggest that only active control groups should be used and that using untreated controls is futile (Melby-Lervåg et al., 2016), while others discovered no significant difference between active and passive control groups (Schwaighofer et al., 2015; Weicker et al., 2016). Researchers have also suggested that the type of activity assigned to the active control group (e.g., adaptive or non-adaptive) may influence effect sizes (Weicker et al., 2016). Adaptive control activity may lead to underestimation of training benefits, while non-adaptive control activity may yield overestimation (von Bastian and Oberauer, 2014). Training duration has also been raised as a potential source of variability. Weicker et al. (2016) suggested that the number of training sessions (but not overall training hours) is positively related to training efficacy in a brain-injured sample, while only studies with more than 20 sessions demonstrated a long-lasting effect. In a highly influential working memory paper, Jaeggi et al. (2008) compared different numbers of training sessions (8-19). Outcomes demonstrated a dose-dependency effect: the more training sessions participants completed, the greater the "far transfer" improvements. In contrast, in a 2014 meta-analytical review, Karbach and Verhaeghen reported no dose-dependency, as overall training time did not predict training effects. This is somewhat consistent with the findings of the Lampit et al. (2014) meta-analysis, which indicated that only three or fewer training sessions per week were beneficial in training healthy older adults in different types of cognitive tasks. Furthermore, even the time gaps between training sessions, when the overall number of sessions is fixed, may be influential. A study that specifically tested the optimal intensity level of working memory training revealed that distributed training (16 sessions in 8 weeks) was more beneficial than high-intensity training (16 sessions in 4 weeks) (Penner et al., 2012). In sum, literature reviews maintain that this large variability in training hampers attempts to evaluate the findings (Koster et al., 2017; Mogg et al., 2017).
So far, the majority of studies in the field of cognitive training have been concerned mainly with establishing the average effectiveness of various training methods, with studies based on combined samples comprising individuals who profited from training and those who did not. Therefore, the samples' heterogeneity might be too high to evaluate efficacy for the "average individual" in each sample. We contend that focusing on the average individual contributes to the inconsistent findings, as is also the case with other interventions aimed at improving mental health (Zilcha-Mano, 2018). We argue that the inconsistent findings and large heterogeneity in studies evaluating cognitive training efficacy do not constitute interfering noise but rather provide important information that can guide us in training selection. In addition to selecting the optimal training for each individual, achieving maximum efficacy also requires adapting the selected training to each individual's characteristics and needs (Zilcha-Mano, 2018). In line with this notion, studies of training games (i.e., online training platforms displayed in a game-like format) showcased different methods of personalizing cognitive training by (a) selecting the type of training according to a baseline evaluation of cognitive strengths and weaknesses or the intent of the trainee, and (b) adapting the ongoing training according to the individual's performance (Shatil et al., 2010; Peretz et al., 2011; Hardy et al., 2015). Until now, however, training personalization has relied on predefined criteria and rationales (e.g., the individual's weaknesses and strengths, or the individual's personal preference). An additional method of personalization, which has become increasingly popular in recent years, is data-driven personalization implemented by machine learning algorithms (Cohen and DeRubeis, 2018). The observed variation in efficacy found in cognitive training studies may serve as a rich source of information to facilitate both intervention selection and intervention adaptation, the two central approaches in personalized medicine (Cohen and DeRubeis, 2018). Intervention selection seeks to optimize intervention efficacy by identifying the most promising type of intervention for a given individual based on as many pre-training characteristics as possible (e.g., age, personality traits, cognitive abilities). Machine learning approaches are especially suitable for such identification because they enable us to choose the most critical items for guiding treatment selection without relying on a specific theory or rationale. In searching for a single patient characteristic that guides training selection, most approaches treat all other variables as noise. It is more intuitive, however, to hypothesize that no single factor is as important in identifying the optimal training for an individual as a set of interrelated factors. Traditional approaches to subgroup analysis, which test each factor as a separate hypothesis, can lead to erroneous conclusions due to multiple comparisons (inflated type I errors), model misspecification, and multicollinearity. Findings may also be affected by publication bias because statistically significant moderators have a better chance of being reported in the literature. Machine learning approaches make it feasible to identify the best set of patient characteristics to guide intervention and training selection (Cohen and DeRubeis, 2018; Zilcha-Mano et al., 2018).
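To make the intervention-selection idea concrete, the sketch below fits one outcome model per training type on baseline characteristics and recommends, for a new individual, the training with the highest predicted gain. It is only an illustrative sketch: the feature set, the simulated data, and the choice of a gradient-boosted model are our assumptions, not a protocol from the studies cited above.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical baseline characteristics (standardized): e.g. age, working-memory span, anxiety score
X = rng.normal(size=(300, 3))
trainings = ["CBM", "inhibitory", "working_memory"]

# Hypothetical records of which training each person received and the gain they showed
received = rng.integers(0, len(trainings), size=300)
gain = rng.normal(scale=0.5, size=300) + 0.5 * X[:, 1] * (received == 2)  # toy moderation effect

# Fit one outcome model per training type, using only the individuals who received that training
models = {
    name: GradientBoostingRegressor().fit(X[received == k], gain[received == k])
    for k, name in enumerate(trainings)
}

def recommend(x_new):
    """Return the training type with the highest predicted gain for a new baseline profile."""
    preds = {name: float(m.predict(x_new.reshape(1, -1))[0]) for name, m in models.items()}
    return max(preds, key=preds.get), preds

best, predicted_gains = recommend(np.array([0.2, 1.5, -0.3]))
print(best, predicted_gains)
```

In practice, such models would be trained on real trial data and validated out of sample before being used to guide selection.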
With that said, given the flexibility of methods like decision tree analyses, there is a risk of overfitting, which reduces the validity of out-of-sample inference: the model fits the specific sample on which it was built and may therefore fail to generalize in an independent application (Ioannidis, 2005; Open Science Collaboration, 2015; Cohen and DeRubeis, 2018). Thus, it is important to test out-of-sample prediction, either on a different sample or on a sub-sample of the original sample on which the model was not built (e.g., cross-validation). An example of treatment selection from the field of antidepressant medication (ADM) demonstrates the utility of this approach. Current ADM treatments are ineffective for up to half the patients, despite much variability in patient response to treatments (Cipriani et al., 2018). Researchers are beginning to realize the benefits of implementing machine learning approaches in selecting the most effective treatment for each individual. Using the gradient boosting machine (GBM) approach, Chekroud et al. (2016) identified 25 variables as most important in predicting treatment efficacy and were able to improve treatment efficacy in 64% of responders to medication, a 14% increase. Whereas training selection affects pre-treatment decision-making, training adaptation focuses on continuously adapting the training to the individual (see Figure 1). A patient's baseline characteristics (e.g., age, personality traits, cognitive abilities) and individual training performance trajectory can be used to tailor the training parameters (training type, time gaps between sessions, number of sessions, overall training hours) to achieve optimal performance. Collecting information from a sample of patients with similar baseline characteristics who underwent the same intervention yields an expected trajectory. Deviations from this expected trajectory act as warning signs and can help adapt the training parameters to the individual's needs (Rubel and Lutz, 2017). An example of treatment adaptation comes from the field of psychotherapy research, where a common treatment adaptation method involves providing therapists with feedback on their patients' progress. This method was developed to address the problem that many therapists are not sufficiently aware of their patients' progress. While many believe they are able to identify when their patients are progressing as expected and when not, in practice this may not be true (Hannan et al., 2005). Many studies have demonstrated the utility of giving therapists feedback regarding their patients' progress (Lambert et al., 2001; Probst et al., 2014). Shimokawa et al. (2010) found that although some patients continue improving and benefitting from therapy (on-track patients, OT), others seem to deviate from this positive trajectory (not-on-track patients, NOT). These studies provided clinicians with feedback on their patients' state so they could better adapt their therapy to the patients' needs. This in turn had a positive effect on treatment outcomes in general, especially outcomes for NOT patients, to the point of preventing treatment failure. These treatment adaptation methods have recently evolved to include implementations of the nearest neighbor machine learning approach originating in avalanche research (Brabec and Meister, 2001), as well as other similar approaches to better predict an individual's optimal trajectory and identify deviations from it (Rubel et al., in press).
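A minimal sketch of the "expected trajectory" idea described above: the reference curve for a new trainee is the average performance curve of the most similar previously treated individuals (a nearest-neighbour approach), and sessions where the trainee falls well below that curve are flagged as "not on track". All variable names, data, and thresholds here are illustrative assumptions, not a published procedure.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)

# Hypothetical archive of past trainees: baseline profiles and per-session performance curves
baseline = rng.normal(size=(200, 3))                              # e.g. age, span, mood (standardized)
curves = np.cumsum(rng.normal(0.5, 0.3, size=(200, 10)), axis=1)  # performance over 10 sessions

nn = NearestNeighbors(n_neighbors=15).fit(baseline)

def expected_trajectory(new_baseline):
    """Mean and SD of the performance curves of the 15 most similar past trainees."""
    _, idx = nn.kneighbors(new_baseline.reshape(1, -1))
    neighbours = curves[idx[0]]
    return neighbours.mean(axis=0), neighbours.std(axis=0)

mean_curve, sd_curve = expected_trajectory(np.array([0.1, -0.4, 0.7]))

# Toy observed performance of the current trainee; flag sessions falling well below expectation
observed = mean_curve - np.array([0.0, 0.0, 0.1, 0.3, 0.8, 1.0, 1.2, 1.3, 1.4, 1.5])
not_on_track = observed < mean_curve - 1.5 * sd_curve
print("sessions flagged for possible adaptation:", np.where(not_on_track)[0])
```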
Machine learning approaches may thus be beneficial in the efforts of progressing toward personalized cognitive training. The inconsistencies between studies in terms of the efficacy of CBM, inhibitory training, and working memory training can serve as a rich and varied source to guide the selection and adaptation of effective personalized cognitive training. In this way, general open questions such as optimal training duration and time gaps between sessions will be replaced with specific questions about the training parameters most effective for each individual. AUTHOR CONTRIBUTIONS RS managed the planning process of the manuscript, performed all administrative tasks required for submission and drafted the manuscript. HO-S and SZ-M took part in planning, supervision, brainstorming, and writing the manuscript. ST took part in brainstorming and writing the manuscript.
Wide field-of-view fluorescent antenna for visible light communications beyond the étendue limit

Visible light communications (VLC) is an important emerging field aiming to use optical communications to supplement Wi-Fi. This will greatly increase the available bandwidth so that demands for ever-higher data rates can be met. In this vision, solid-state lighting will provide illumination while being modulated to transmit data. An important obstacle to realizing this vision is the receiver, which needs to be inexpensive, sensitive, and fast, and to have a large field of view (FoV). One approach to increasing the sensitivity of a VLC receiver is to increase the area of the receiver's photodetector, but this makes them expensive and slow. An alternative approach is to use optical elements to concentrate light, but conservation of étendue in these elements limits their FoV. In this paper, we demonstrate novel antennas that overcome these limitations, giving fast receivers with large collection areas and large FoV. Our results exceed the limit of étendue, giving an enhancement of light collection by a factor of 12, with a FoV semi-angle of 60°, and we show a threefold increase in data rate.

INTRODUCTION

Visible light communications (VLC) is a promising new paradigm for wireless communications, in which light from existing light sources is used as the data carrier [1,2]. Despite the relatively short time since its inception [3], VLC has already achieved higher data rates than comparable radio frequency (RF) communications systems [4,5]. The popularity of wireless communications means that RF bandwidth will not be enough to satisfy future user demands, and, hence, further increases in channel capacity will be essential. Like other communications systems, the maximum capacity of a VLC channel is determined by a combination of the bandwidth of the channel and the signal-to-noise ratio of the receiver. However, future VLC systems will use LEDs primarily designed to provide illumination as transmitters. Their bandwidths are limited to 20 MHz or less [6,7], and this makes achieving a high signal-to-noise ratio particularly important in VLC. In principle, a large receiver signal can be obtained by collecting a large fraction of the light from a transmitter. However, a large photodetector (e.g., photodiode) would normally be slow (limiting bandwidth [8]) and expensive. This means that the only way to increase the signal-to-noise ratio in a receiver is to concentrate light onto a photodetector using an optical element, such as a lens or a compound parabolic reflector (CPC). These optical elements are based upon reflection and refraction and therefore conserve étendue [see Fig. 1(a)]. This means that increasing the optical gain (the input area S_in divided by the output area S_out) of the element reduces its field of view (FoV) [9][10][11]. In particular, for optical elements with a refractive index n, the étendue-limited maximum possible gain [9,12] is C_max = n²/sin²θ, where θ is the semi-angle that defines the FoV. This means that, for optical elements with a refractive index of 1.5 and a gain of 50, the FoV will be less than 12°, and will not be compatible with mobile devices such as smartphones and laptop computers. There is, therefore, a need to develop a convenient method of concentrating light from a relatively large area onto a photodetector, while maintaining a bandwidth of more than 20 MHz and a wide FoV.
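As a quick numerical check of the étendue argument, the short sketch below (illustrative only; written here, not part of the original work) evaluates C_max = n²/sin²θ and the FoV semi-angle implied by a given gain, reproducing the figures quoted above for n = 1.5.

```python
import math

def max_gain(n, theta_deg):
    """Etendue-limited maximum gain C_max = n^2 / sin^2(theta) for a FoV semi-angle theta."""
    return n**2 / math.sin(math.radians(theta_deg)) ** 2

def fov_semi_angle(n, gain):
    """FoV semi-angle (degrees) of an etendue-limited element delivering the given gain."""
    return math.degrees(math.asin(n / math.sqrt(gain)))

n = 1.5
print(f"gain 50  -> FoV semi-angle ~{fov_semi_angle(n, 50):.1f} deg")  # ~12 deg: too narrow for mobile devices
print(f"FoV 60 deg -> C_max = {max_gain(n, 60):.1f}")                  # 3.0: ~4x below the gain of 12
                                                                       # reported for the fluorescent antenna
```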
In this paper, we present a novel approach to tackle this challenge and experimentally demonstrate a simple and inexpensive optical antenna that combines gain of more than 1 order of magnitude with a wide FoV of 60°, exceeding the limit set by the conservation of étendue. Our approach for VLC optical antennas is to use a thin layer of fluorescent material and glass cladding [see Fig. 1(c)]. In these fluorescent antennas, a fluorescent material absorbs the incident light and re-emits it at a longer wavelength. Then, since the fluorescent material has a higher refractive index than its surrounding environment, part of the emitted light is retained inside it by total internal reflection (TIR). This light can then escape at the edge of the fluorescent antenna, where the photodetector of the receiver is placed. The achievable gain, in power density, from the edges can be significant, while the FoV follows closely a cosine law and so is approximately 60°(halfwidth at half-maximum). The reason the fluorescent antenna exceeds the étendue limit for gain is because it is based on fluorescence, which is accompanied by a Stokes shift, and not exclusively on refraction or reflection. In other words, the number of photons rather than the energy flux is conserved [10,11]. In this regard, they resemble luminescent solar concentrators that have been proposed as an inexpensive method of collecting solar energy [13][14][15]. A detailed description of the underlying physics can be found in Refs. [16][17][18][19]. Nevertheless, implementation of a fluorescent antenna for a communications system presents some difficult optical, material, and fabrication challenges [20]. FABRICATION AND PHOTOPHYSICS OF FLUORESCENT OPTICAL ANTENNAS The main characteristics required for a good fluorescent antenna for VLC are: (i) a refractive index that will capture the emitted light in the layer of fluorescent material; (ii) strong absorption in the region of 450 nm matching the emission of GaN/InGaN LEDs, but weak re-absorption of its own fluorescence; (iii) high fluorescence quantum yield (PLQY); (iv) excited states with a lifetime of less than 10 ns; and (v) a thin fluorescent layer to enable small area photodetectors to be used, since the bandwidth of small photodetectors is broader [8]. Based on the above criteria, we selected the dye Coumarin 6 (Cm6) because of its high PLQY, absorption peaking in the blue region of the spectrum, and short fluorescence lifetime. However, as dyes suffer severe quenching of fluorescence at high concentrations, the Cm6 was dispersed in a host matrix. In particular, we explored two crosslinkable materials as hosts, the photoresist SU-8 and the epoxy NOA68. These were selected because they are simple to process, transparent in the wavelength range of interest, can co-dissolve Cm6, and give films with higher refractive index than glass, enabling them to be used to make waveguides with glass cladding. To fabricate the antennas, a thin fluorescent film was sandwiched between two microscope slides (25 mm × 75 mm × 1.1 mm, n 1.52) using UV curable epoxy NOA68 (n 1.54). Two different structures were made. In one case, the yellow light waveguides in the combined SU-8 and NOA68 layers; in the other case, it waveguides in the NOA68 layer. The total thickness of the thin waveguiding layer, between the two microscope slides, was 100 μm. Some light will escape into the glass layers (microscope slides). For this reason, the edges of the glass slides were covered by black tape [see Fig. 
1(d)], so that only light collected from the thin waveguiding layer would be detected at the edges of the antenna. The photophysical properties of the dye Cm6 in the photoresist SU-8 are shown in Fig. 2. The properties of the dye in NOA68 are very similar and can be found in Supplement 1. In particular, Cm6 absorbs blue light strongly [see Fig. 2(a)], matching the emission band of the GaN/InGaN LEDs typically used in solid-state lighting. This dye also emits in the green region of the spectrum, with peak wavelengths at 500 and 510 nm for concentrations of 0.01 and 3 mg/ml in a blend of SU-8, respectively [see Fig. 2(a)]. The higher-concentration film has a more pronounced shoulder appearing at 540 nm. The difference in peak position and the appearance of the shoulder are due to self-absorption of the emitted light, which occurs because of some overlap of the emission and absorption spectra on the short-wavelength side of the emission. The key point is that, as shown in Fig. 2(b), the PLQY of these films remains high, between 80% and 95%, for concentrations up to 3 mg/ml. Furthermore, the stability of the film was verified by measuring the PLQY for a period of more than a month (see Supplement 1). In addition, Fig. 2(c) shows that the time-resolved fluorescence exhibits a single exponential decay with a lifetime of 3.1 ns. This means that the temporal response of the Cm6 is much faster than the typical LED modulation rate and will not limit the bandwidth of the VLC communications channel.

OPTICAL CHARACTERIZATION

To understand the performance and optimize the fabrication parameters, the optical properties of the antennas were investigated, and the results are presented in Fig. 3. As the absorption and emission spectra are very important for antenna operation, we measured them on the actual antenna structure, collecting the emission from the edge. The results, shown in Fig. 3(a), are similar to the materials measurements presented in Fig. 2(a). Experimentally (see Table 1), it was found that the gain of the antenna increased as the optical density of the fluorescent layer increased. For this reason, the optical density was initially increased by using a higher concentration of Cm6 in SU-8. However, this approach was limited by concentration quenching, and so Cm6 was instead mixed directly with the NOA68. The photoluminescence (PL) emission from the edge of the antennas also shows the characteristics identified in the photophysical investigation. In particular, as the optical density of the fluorescent layer increases, the shoulder becomes more pronounced. Another important fabrication parameter that determines the useful antenna length is attenuation loss. Figure 3(b) shows the relative fluorescence power detected at the edge of the antenna as a function of the distance of excitation from the edge. Propagation losses in the waveguide lead to an attenuation of the fluorescence that shows a complex exponential decay. We attribute this to the loss of non-guided rays at short distances, isotropic emission in the plane, and the change in absorption as the light propagates through the fluorescent layer due to self-absorbance and the resulting shift in spectra. As shown in Fig. 3(b), the use of the optically dense antenna results in more power being collected at the antenna edge. At the same time, however, as can be seen from Fig. 3(a), there is stronger self-absorption of short wavelengths.
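One way to quantify the attenuation behaviour described above is to fit the edge-collected power versus excitation distance with a multi-exponential model. The sketch below is purely illustrative: the bi-exponential form, the synthetic data, and the resulting decay lengths are assumptions and do not reproduce the authors' analysis of Fig. 3(b).

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(d, a1, L1, a2, L2):
    """Two-component exponential decay of edge-collected power with excitation distance d (mm)."""
    return a1 * np.exp(-d / L1) + a2 * np.exp(-d / L2)

# Hypothetical data standing in for a curve like Fig. 3(b): generated from the model plus noise
rng = np.random.default_rng(2)
d = np.linspace(2.0, 60.0, 15)
power = biexp(d, 0.6, 8.0, 0.4, 60.0) * (1 + 0.03 * rng.standard_normal(d.size))

popt, _ = curve_fit(biexp, d, power, p0=(0.5, 5.0, 0.5, 40.0))
a1, L1, a2, L2 = popt
print(f"short decay length ~{min(L1, L2):.1f} mm, long decay length ~{max(L1, L2):.1f} mm")
```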
The FoV was measured by placing the antenna and attached photodiode on a rotation stage. The sample was flood illuminated with collimated light from a blue LED, and the received signal was recorded as a function of the excitation angle. The results for these FoV measurements [see Fig. 3(c)] show that the received power at the edge of the antenna falls away with incidence angle, following a cosine dependence. This shows that the dominant phenomenon determining the FoV is the projected area of the antenna, i.e., a cosine law, and the FoV therefore has a semi-angle of 60°. Table 1 summarizes the gain of the samples when measured under flood illumination, i.e., when the whole sample is illuminated, by a blue 450 nm LED. At normal incidence, the measured gain of the most optically dense antenna was 12, which is more than 4 times higher than the maximum theoretical gain of an optical, étendue-limited concentrator with a similar FoV. In comparison, the FoV of a compound parabolic concentrator [21] with a gain similar to that of the fluorescent antenna [also presented in Fig. 3(c)] would be 20°.

COMMUNICATION CHARACTERIZATION

In a communication system, the achievable data rates are proportional to the system bandwidth (BW) at a specific signal-to-noise ratio. We therefore measured the BW as a function of the excitation distance from the edge at which the detector is attached, and the results are presented in Fig. 4(a). It can be seen that there is a steady decrease in BW with excitation distance, which eventually saturates as the distance increases. We attribute this variation in bandwidth to self-absorption of the emitted light. In particular, the absorption and re-emission of photons have the effect of increasing the effective PL lifetime, resulting in a lower BW (see Supplement 1). Nonetheless, in all cases, the 3 dB electrical bandwidths were above 40 MHz, which is significantly higher than the BW of most commercially available LEDs designed for illumination, which is typically in the range of 5-20 MHz [6,7]. Finally, the possibility of creating a communication link using a fluorescent antenna was investigated. A pseudo-random binary sequence was translated into voltage variations using an arbitrary waveform generator, and used to modulate the light output of a commercial blue LED. The modulation scheme used was on-off keying (OOK), where the absence of light represents bits with a value of zero and full intensity represents bits with a value of 1. In these experiments, a circular avalanche photodiode (APD) with an active area of 1 mm², followed by a transimpedance amplifier, was used as the photodetector. The data rates, achieved in a 0.5 m long data link, are presented in Fig. 4(b). In this figure, two cases are compared: one where the APD was excited directly, and the other where the APD received light from the edge of the fluorescent antenna. In both cases, the FoV was 60°. To have the same illuminated area of the APD, a 100 μm metallic slit was used in front of the APD in the case of direct excitation. An important characteristic of a communications system is how the bit error rate (BER), defined as the number of error bits divided by the total number of bits, depends upon the data rate. The results of BER measurements are shown in Fig. 4(b).
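Because the BER is simply the fraction of received bits that differ from the transmitted pseudo-random sequence, it can be estimated by direct counting. The sketch below illustrates that bookkeeping for a toy OOK link; the sequence length, noise level, and threshold detector are assumptions, not details of the actual measurement, and the forward-error-correction limit is the one quoted in the next paragraph.

```python
import numpy as np

FEC_LIMIT = 3.8e-3  # BER up to which forward error correction is assumed to work

def ook_ber(tx_bits, rx_samples, threshold=0.5):
    """Hard-decision OOK: threshold the received samples and count disagreements with the sent bits."""
    rx_bits = (rx_samples > threshold).astype(int)
    return np.count_nonzero(rx_bits != tx_bits) / tx_bits.size

# Toy link: pseudo-random bits, on-off amplitudes, additive Gaussian receiver noise
rng = np.random.default_rng(3)
tx = rng.integers(0, 2, size=1_000_000)
rx = tx.astype(float) + 0.18 * rng.standard_normal(tx.size)

ber = ook_ber(tx, rx)
print(f"BER = {ber:.2e} ({'below' if ber < FEC_LIMIT else 'above'} the FEC limit of {FEC_LIMIT:.1e})")
```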
As forward error correction can work well at error rates of up to 3.8 × 10⁻³, the data rate of the system at this BER is of particular interest [4,22]. For the APD alone, at this BER, the data rate is 65 Mb/s, and this increases to 190 Mb/s when the fluorescent antenna is used. This shows that an almost threefold improvement in the data rate can be achieved, without decreasing the FoV.

CONCLUSIONS

In conclusion, we have demonstrated a new category of simple and inexpensive optical antennas for VLC. Our fluorescent antennas overcome the limitations created by the conservation of étendue in refractive-reflective-based optical systems, giving a large signal gain combined with a large FoV. Specifically, we demonstrated a gain of 12 combined with a huge FoV, with a full width at half-maximum of 120°. Nevertheless, the achieved gain, although significantly higher than that of a conventional optical element, is well below the upper bound arising from thermodynamic considerations [10,11]. Hence, it should be possible to improve the performance of our optical antennas in the future. In addition, we have demonstrated the importance of the new optical antennas for free-space optical communications by showing a threefold enhancement of the data transmission rate. Since these antennas are inexpensive and thin, they could easily be incorporated in mobile phones, tablets, computers, and even clothing to enable rapid mobile communication. The research data supporting this publication can be accessed at [23].
Simulated Forest Immersion Therapy: Methods Development Shinrin-yoku, forest bathing, may provide relief from chronic and breakthrough pain in patients with axial spondyloarthritis and improve immune function through increasing NK cell numbers and activity and their downstream effectors, perforin and granulysin, after chemo- or radiation therapy in breast and prostate cancer patients. The aim of this paper is to describe the study protocol for a simulated forest immersion therapy using virtual reality and atomized phytoncides, volatile organic compounds found in forested areas designed to effect positive change for these two patient populations. The setting, including the room set up and samples with inclusion/exclusion specific to this type of intervention, is outlined. Measures and calibration procedures pertinent to determining the feasibility of simulated forest immersion therapy are presented and include: ambient and surface room temperatures and relative humidity in real time, ambient ultrafine particulate matter, ambient droplet measurement that coincides with volatile organic compounds, specific phytoncides, and virtual reality and atomization of phytoncide set up. Particular lessons learned while training and setting up the equipment are presented. Simulated forest immersion therapy is possible with attention to detail during this early phase when development of methods, equipment testing, and feasibility in deploying the intervention become operational. The expected outcome of the development of the methods for this study is the creation of a standardized approach to simulating forest therapy in a controlled laboratory space. Introduction Shinrin-yoku, roughly translated as forest bathing, is the traditional Japanese practice of immersing oneself in nature by mindfully using our senses, such as sight and smell [1,2]. Shinrin-yoku practices range from sitting quietly and enjoying the forest to walking or hiking through forested areas [3]. The key to shinrin-yoku is connecting with the atmosphere of the forest, taking in forest aerosols, volatile organic compounds known phytoncides [4]. Phytoncides are inhaled airborne particles that trees naturally emit during different stages of development [5][6][7][8][9]. In 1982, the Japanese Ministry of Agriculture, Forestry and Fisheries began researching the health benefits of shinrin-yoku [2]. There are several therapeutic effects of shinrinyoku in the context of depression, including improving immune system function, such as decreased pro-inflammatory cytokine activity and increased in anti-inflammatory cells, decreased depressive symptoms, stress, and anxiety, improved mental relaxation and attentional focus, and increased feelings of awe, gratitude, and selflessness [4,[9][10][11][12][13][14][15][16]. The benefit of forested greenspaces for human psychology and physiology is a reduction in stress [14,17], which in turn positively impacts mood [18] and further reduces inflammation [19]. In the context of normal male subjects, after both shinrin-yoku forest immersion and hotel/sleep-based phytoncide humidification with α-pinene, β-pinene, and d-limonene, the following immune system improvements were found: natural killer T-cell numbers and activity increased, as did perforin and granulysin [6,20,21]. In 2017, a research agenda for nature contact and health was published with several research domains, including the mechanistic biomedical studies domain (1.0) for future research development [17]. 
We developed a simulated forest immersion therapy (SFIT) that includes both forest aerosols and virtual reality based on this research agenda. We are interested in psychological and physiological variables that suit our differing populations of interest (i.e., axial spondyloarthritis patients with chronic or breakthrough pain or breast and prostate cancer patients who have undergone chemo-or radiation therapy and have NK cell depletion), and these align with the mechanistic biomedical domain [17]. Specifically, our research objectives are to elucidate 1.1: to what extent stress reduction mediates observed health benefits of nature contact; 1.1b: which natural elements are most associated with stress reduction; 1.2: to what extent improved immune function mediates observed health benefits of nature contact; 1.2b which natural elements are most associated with improved immune function; and 1.2c which markers of immune function are the most useful for studying this effect [17]. If there are health effects of forest immersion, then could the same health benefits be achieved with simulated forest immersion as a way of providing therapy to patients not able to exercise or move in outdoor forests or greenspaces due to debility or frailty from chronic illness (i.e., chronic and breakthrough pain) or acute illness (i.e., recovery of immune system function after chemo-or radiation therapy in cancer)? Significance An estimated 1,806,590 new cases of cancer will be diagnosed in the United States each year, of which 281,550 will be breast cancer and an estimated 248,530 will be prostate cancer [22]. Additionally, up to 1% of the population of the United States, an estimated 2.7 million people, may have axial spondyloarthritis [23], and 50% of them suffer from chronic widespread pain [24,25] and breakthrough pain after standard treatment in approximately 60% of people [26]. Using complementary interventions to improve outcomes in patients who are seriously ill is paramount to extending healthcare to vulnerable populations. We hope to accelerate the translation of findings for healthy individuals for the implementation of a novel minimally invasive immune therapy for cancer patients with solid tumors where NK cells are depleted, both in number and activity [27,28], and to determine clinically meaningful protocols for the management of pain, and comorbid symptoms of pain, in patients with axial spondyloarthritis with chronic and breakthrough pain. Forest bathing, virtual reality (VR), the use of phytoncides, and their separate or combined effects constitute a new research avenue. It is intriguing to explore the effects of nature-based interventions on chronic and breakthrough pain in patients with axial spondyloarthritis, as well as the effects on the immune system, and how we might harness them to benefit acutely ill patients who are immunocompromised. Psychological Pathways of Interest Contemporary theories, such as Kaplan's Attention Restoration Theory [29,30], Ulrich's Stress Reduction Theory [31][32][33], and Kellert and Wilson's Biophilia Hypothesis [34,35] provide a conceptual framework for the practice of shinrin-yoku and engaging with nature in various forms of nature therapy. Shinrin-yoku researchers Song, Ikei, and Miyazaki (2016) developed a conceptual framework based on an extensive review that describes how the restorative effects of nature increase physiologic immune system recovery from stress as well as physiologic relaxation [36]. 
Kaplan and Kaplan hypothesized that exposure to natural settings through the five senses has a direct effect on parasympathetic nervous system activation, thus leading to states of greater awareness achieved through relaxation [29]. Ulrich's Stress Reduction Theory [31] was developed from observational studies wherein patients in hospitals with patient room windows facing nature-laden scenery (e.g., trees, green foliage) experienced marked improvement in health and recovery with shortened hospital stays compared to patients in rooms with an urban view [31,32]. Wilson's Biophilia Hypothesis [35], suggests that humans have a developmental affinity for natural surroundings, and being immersed in nature is therefore innately appealing. This research suggests that a disconnect from nature has adverse health impacts [34], and therefore finding effective means for individuals to access nature is crucial [37,38]. Simulated nature and greenspace exposure has been applied in clinical settings for the treatment of acute [41,46,47] and chronic pain [37,44,50]. Virtual reality (VR)-based therapies for pain reduction are not new, and several theories for how and why VR-based therapies improve pain outcomes center on the element of "distraction", such that the virtual viewing experiencing distracts an individual from feeling their pain [51][52][53][54]. This is largely based on the Gate Control Theory proposed by Melzack and Wall [55], which suggests that the attention paid to the pain experience, as well as the emotion tied to the experience of pain, which includes past emotional memories, play a role in pain interpretation; therefore, directing attention away from the experience of pain may reduce the sensation of pain [47,53,[56][57][58][59]. Research on VR-based greenspaces or nature exposure for pain reduction describes VR as a tool for delivering nature, and that nature is the crucial element within the interventional design [47]. In a repeated-measures design, 50 patients attending chemotherapy sessions were evaluated for pain and stress during intravenous port access. While findings were insignificant after one nature-based VR session, participants reported feeling relaxed, peaceful, and distracted by positive thoughts [47]. Potential benefits of virtual nature directly link to the theories describing the health effects of shinrin-yoku, including improved relaxation, restoration, and alertness, improved functioning of the immune system, and reduced exposure to air pollution and urbanicity [37]. Exposure to greenspaces can induce relaxation via psychoendocrine pathways, including the function of the hypothalamicpituitary-adrenal (HPA) axis and resulting cortisol secretion [60,61]. Further, exposure to greenspaces, which include greenery in the form of foliage, trees, and vistas, such as with shinrin-yoku, improves health outcomes whether the exposure involves "live" nature or virtual nature [62,63]. Biological Pathways of Interest Immune suppression is a major issue for adults with a cancer diagnosis receiving chemo-and/or radiation therapy. In particular, NK cell suppression in this population is problematic as NK cells are the major immune cell type surveilling foreign or infectious antigens and eliminating them [64]. Thus, implementation of a novel minimally invasive immune therapy in cancer patients with solid tumors where NK cells are depleted, both in number and activity, is crucial [27,28]. 
Patients with solid tumors that have activated NK cells within the tumor have longer overall survival [65,66]. Blood levels of NK cells are essential to the movement of NK cells into tumor tissue. Research shows positive effects of forest bathing on NK cell numbers and activity [67] (NK CD3 − /CD56 + / and NK CD3 − /CD56 + /CD69 + , respectively) and on expressed proteins, such as perforin and granulysin [6,20,68,69]. NK cells use pattern recognition molecules (epitope) on the surface of transformed or stressed cells to accelerate detection and elimination of problematic cells. Perforin and granulysin are key to enabling the natural killing mechanism of the NK cell [70]. Perforin is a downstream effector related to the number and activity of the NK cells [64]. Perforin creates a pore in the target cell once the target cell's epitope is recognized [64]. The pore allows granulysin to enter the cell and effect apoptosis of the intracellular structures; the cell lyses and dies [5,64]. Perforin and granulysin are needed to maintain normal immune surveillance and reduction in infection, specifically in immunocompromised cancer patients [27,28]. Two proof-of-principle studies were conducted in middle-to-older-aged healthy men. These two studies, a 3-day forest experience (immersion experience) and a 3-night hotel experience, measured or used humidified-forest-derived volatile organic compounds, known as phytoncides, respectively. Of the phytoncides tested, humidified αand β-pinene and limonene in combination produced an increased number of NK cells and elevated activity [6,20]. Purpose If forest immersion can provide immune system benefits in healthy men (i.e., improved NK cell numbers and activity, increased perforin and granulysin), can dispersal of three phytoncides (α-and β-pinene and limonene in combination) paired with a greenspace virtual reality provide the same positive effects on NK cells in patients with solid tumor cancer who have completed cancer therapy? Additionally, can humidified limonene paired with virtual reality reduce pain and psychological stress in patients with axial spondyloarthritis? Our purpose is to deploy a standardized study protocol for simulated forest immersion intervention in cancer patients with NK cell depletion and patients with axial spondyloarthritis with chronic and breakthrough pain to test its feasibility and rigor. The simulated forest immersion intervention will provide greenspace/forest experience through three of the five senses. Virtual reality will provide visual and auditory stimuli. Humidified aromatic forest oils will provide olfactory stimuli. Virtual reality and atomized forest oils may be used in combination or alone [71]. Two distinct studies will use this standardized protocol for (1) cancer patients and (2) patients with axial spondyloarthritis. The purpose of this paper is to outline the development of the study protocol for the intervention and the control conditions of the clinical lab setting. Research Design We will use a two-arm study design with concurrent controls selected from the breast and prostate cancer clinics and from the arthritis clinic with two measurement time periods to test the proposed simulated forest exposure intervention. In the SFIT study with patients with axial spondyloarthritis who have chronic or breakthrough pain, the two-time points were before and immediately after the intervention. 
In the SFIT study with patients with either breast or prostate cancer, the two time periods were before and on Day 3 after the intervention. Study Sample The study sample for study #1 will be recruited from cancer patients with solid tumors (HR+ HER2− breast cancer or prostate cancer) who have completed cancer therapy (hormone therapy excepted), as this population may benefit the most from increases in NK cell number and activity, perforin, and granulysin to prevent infection as patients become relatively immunocompromised after chemo- or radiation therapy. For study #2, the study sample will be recruited from axial spondyloarthritis patients who have chronic or breakthrough pain, as virtual reality has been shown to reduce pain, and d-limonene administration in animals has shown pain reduction. Since the two studies are planned as pilot studies, we expect to recruit and enroll 25 participants for study #1 and 25 participants for study #2. For study #1, the participant number is limited by budget and the cost of pre-clinical and clinical laboratory tests. For study #2, the participant number is limited by budget and the cost of paper-based tools. Concurrent controls for both studies will be identified by the clinicians in either the cancer clinics or the arthritis clinic. Concurrent controls will meet the same study inclusion and exclusion criteria as those who are enrolled in the study interventions. We expect to have 4 of each of the assigned genders in the control groups. The control groups will receive neither the SFIT intervention nor its components (VR or atomized phytoncides). They will be exposed to atomized water dispersal for 1 h, the same length of time as the study participants who will receive the SFIT interventions. All the same data will be collected for both study #1 and study #2 on these control participants. At the end of the studies, should an effect of the SFIT intervention be noted, the control participants will have the opportunity to complete the same intervention. Clinicians will lead the recruitment of these patients, followed by a phone screening for inclusion and exclusion conducted by the principal investigator and research associate. Inclusion and Exclusion for SFIT-General Considerations Since we will be using atomized phytoncides as well as virtual reality, either in combination or separately, several exclusion criteria apply, as seen in Table 1. Table 1. General exclusion criteria related to intervention only.
Exclusion criterion: Rationale
History of asthma [72]: inhaled phytoncides may produce airway irritation, asthma exacerbation, or bronchoconstriction.
Inability to detect common odors from commercial fragrances [73]: inhaled phytoncides provide half of the intervention and the smell of the forest.
History of smoking within 15 min before the start of SFIT [74]: smoking within 15 min before therapy will alter the ability of the participant to detect commercial fragrances or the aroma of the phytoncides.
Allergy to pine or citrus aroma [74,75]: inhaled phytoncide aromas are pine and citrus and may cause dermatitis.
History of intractable seasickness [76]: VR may cause nausea/vomiting without relief after 5-10 min.
History of seizures [77]: VR may heighten susceptibility to photosensitive seizure due to the changing light in the forest video.
Limitations of vision and hearing not corrected by eye lenses or hearing aids: VR requires good vision and hearing correction with eye lenses or hearing aids.
Inability to complete study requisites: intervention directions must be followed, specifically for follow-up measurements.
Inclusion and Exclusion for SFIT for Breast and Prostate Patients Participants will be included if they are willing and able to provide informed consent, are of either biological sex, older than 18 years of age, and have completed cancer therapy for HR+ HER2− breast cancer or prostate cancer, Stage I-III, with no evidence of metastasis. Participants will be excluded if they have a history of autoimmune disease, are on immune-modulating therapies (endocrine therapy allowed), have had surgery or an invasive procedure in the past two months, or have had a recent infection in the past two weeks (these are known confounding variables in the immune system measures of interest). Inclusion and Exclusion for SFIT for Axial Spondyloarthritis Patients Participants diagnosed with axial spondyloarthritis (axSpA) will be included if they are willing and able to provide informed consent, are at least 18 years of age, and may be of any sex or gender. Additional inclusion criteria are: a score of 4 or higher on the 10-point Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) (a standard criterion for suboptimal control of symptoms and disease [78] with a correlation between patient-reported BASDAI scores and measurable disease) [79], and a rheumatologist overseeing their care. Participants will be excluded if they are in an active phase of treatment with biologic cytokine inhibitors (which may confound the effects of the intervention on outcome measures) [80]. Use of commonly prescribed painkillers is acceptable, and we will control for their use in the analysis. Setting The SFIT Lab is located in our Integrated Bio-Behavioral Lab space within our school. The lab room, where the instrumentation for the SFIT is set up, is 20 feet × 15 feet with a 12-foot ceiling. The room has temperature control, so a consistent temperature can be maintained between deployments of SFIT with participants, as well as lighting control, so the lights can be dimmed when patients are using VR. Intervention-Procedure Our two studies are novel as no one to date has used simulated forest immersion in patients with acute or chronic morbid conditions. The principal investigator (PI) and research associate (RA) will implement SFIT in a separate room from the room used to cross-check inclusion and exclusion, obtain informed consent, collect baseline measures, and allow the participant to rest.
Prior to the arrival of the participant, the SFIT intervention space will be prepared. Preparation of the intervention space includes calibration of the instruments that will measure volatile organic compound particles (phytoncides are volatile organic compounds) and droplets, room temperature and humidity, and room surface temperatures, followed by measurement of the ambient particles and droplets and room temperature and humidity prior to the implementation of the atomized phytoncides [71,81]. Phytoncides α-pinene, β-pinene, and d-limonene (Floraplex Terpenes, Ypsilanti, MI, USA) will be prepared for atomization with a commercially available atomizer (Asakuki 500 mL Premium Atomizer, Tronhon Co., Ltd., Chongqing, China) that can emit phytoncides for up to 3 h. Once the dose expected (0.80 ppm) reaches the dose published by Li [6], the participant will be brought into the intervention room. To date, the Li study, which was conducted with healthy men, has been the only study to record phytoncide dose in a controlled setting. We will use this concentration as the target dose for our humidified phytoncide set up. Both at the beginning and at the end of 1 h of exposure to the simulated forest immersion therapy intervention, ambient phytoncide in the contained space will be measured indirectly by measuring both the increase in ambient air particle mass and number, as well as by measuring the change in total volatile organic compounds (VOCs). Total particle numbers will be measured with a continuous ultrafine particle counter (P-Trak 8525, TSI, Shoreview, MN, USA), and the total VOCs will be measured with the portable handheld monitor (Mini ppbRAE 3000, Honeywell International INC.; San Jose, CA, USA). The P-Trak and Mini ppbRAE 3000 will measure the increase in particle numbers and water droplets (aerosol), respectively, which will serve as a surrogate of relative exposure to phytoncide. Continuous monitoring of room temperature and relative humidity will be measured by the HOBO MX2301 Temperature/RH Data Logger (ONSET, Bourne, MA, USA). Room surface temperature will be measured by ADC Adtemp Mini 432 Non-Contact Infrared Thermometer (American Diagnostic Corporation, Hauppauge, NY, USA). VR will be provided by VIVE Pro Eye, HTC (high-tech computer) Corporation, (New Taipei City, Taiwan) with digital rendering of forested greenspace. Once 1 h of SFIT concludes, the participant will be removed to the preparation area and given instructions related to reporting unanticipated problems, adverse events, and serious adverse events, and an appointment to return for follow up on Day 3. See Figure 1, which illustrates the SFIT process/procedure. Intervention-Equipment and Calibration All instruments will be placed horizontally on a table in the center of the room against one wall with sampling ports directed towards the participant for optimal measurement and dispersal of phytoncides. P-Trak 8525, TSI, Shoreview, MN, USA The P-Trak is a continuous ultrafine-particle (UFP) counter. The P-Trak has the capability of measuring particles as small as 100 nm. UFPs are currently studied to find associations with specific health effects [82]. The P-Trak will be zero-calibrated for each use using a charcoal filter. Both during the calibration and survey mode, research-grade isopropyl alcohol will be used in a small alcohol cartridge chamber in the instrument. The P-Trak has a data log that updates every minute and has a minimum and maximum range that is noted when the instrument is in survey mode. 
The data log will be downloaded onto a computer using software specific for the P-Trak [83]. All software will be accessed from the P-Trak website [81]. Mini ppbRAE 3000, Honeywell International Inc., San Jose, CA, USA The ppbRAE 3000 measures volatile organic compounds (VOCs) related to the phytoncides that we are atomizing. The ppbRAE 3000 will be zero-calibrated using a charcoal zero filter and Isobutylene Air Balanced span gas [84]. For two-point calibration, isobutylene at 10 ppm and 100 ppm will be used [85]. During two-point calibration, we will use a set 0.5 LPM regulator. This regulator can handle up to 500 psi, which is the psi of the span gas cylinders and is also within the pressure and flow tolerance limits of the ppbRAE 3000. It is important that the gas cylinder connection is CGA 600 and corresponds to the connection on the regulator, meaning that the threading has to be compatible between the cylinder and the regulator. If 500 psi is exceeded during calibration or survey mode, the diaphragm within the ppbRAE 3000 may be damaged, leading to inaccurate data collection. In survey mode, data will be updated every 60 s and a data log will be created. The data log will be downloaded using software specific to the ppbRAE 3000 and found online [82]. HOBO MX2301 Temperature/RH Data Logger, ONSET, Bourne, MA, USA The HOBO temperature and relative humidity (HOBO T/RH) data logger, which is suitable for both indoor and outdoor application, is a small portable unit that uses an application (HOBOConnect) loaded onto a mobile device. The app uses a Bluetooth connection to the HOBO T/RH, is easily configurable, and logs temperature and relative humidity in real time for viewing on a mobile device, in this case an iPhone. The HOBO T/RH will be placed within a 30 m line of sight towards the participant. Due to the size of the room within which we will set up the intervention, we will use one HOBO T/RH.
Data download will be accomplished when the iPhone (mobile device) is within 100 m of the HOBO T/RH unit. Data update every 2 min with an accuracy of ±0.2 °C and ±3.5% RH [86]. Data software will be downloaded online [84]. Measurement of Ambient Room Conditions Room temperature and humidity may alter the overall measurement of particle number and droplets. Measuring all four of these ambient conditions will allow for consistency in the experimental condition between participants. The room has a set temperature of 70 °F, and since the room is located against a foundation wall, humidity may vary; therefore, it is important to monitor both of these ambient conditions and use cutoff criteria based on the average temperature and humidity of the controlled lab setting. We will also use a non-contact infrared surface thermometer to measure radiant heat of room surfaces that may add to the perceived comfort of the SFIT intervention room [81]. The ADC Adtemp Mini 432 Non-Contact Infrared Thermometer (American Diagnostic Corporation, Hauppauge, NY, USA) with a range of 59-77 °F will be used for this purpose. The ambient room air temperature, humidity, and surface temperature data will be collected as a mean prior to, during, and at the end of the intervention. Asakuki 500 mL Premium Atomizer, Tronhon Co., Ltd., Chongqing, China The atomizer holds 500 mL of liquid that can be atomized over 3 h. Of that 500 mL, the water volume will be reduced by the amount of phytoncide that will be added. We expect that we will add 30 mL per phytoncide to the atomizer to achieve the detectable published amount [6,21]. Mist will be created by an ultrasonic plate within the atomizer, and the mist can be set to weak or strong. The choice of weak or strong mist will be adjusted to fit the published detectable amount of phytoncide. Mist time can be regulated to maintain 60, 120, and 180 min of operation. We will use a 60 min mist time with the participants of study #1 and study #2. A fan within the atomizer will disperse the mist into the room. Room temperature and humidity will be monitored continuously as low temperature and high humidity may condense the mist into water droplets [87], which is to be avoided to allow for accurate dose calculations. Phytoncides, Floraplex Terpenes, Ypsilanti, MI, USA α-pinene, β-pinene, and d-limonene are the phytoncides (forest oils) of interest for SFIT. Interestingly, all three in combination have been tested in normal males in both forest immersion and hotel/sleep contexts and have shown effective elevations in NK cell numbers and activity as well as increased expression of perforin and granulysin [5,6,20,21]. However, these three phytoncides have not been tested in the SFIT context with breast and prostate cancer patients. D-limonene alone has been tested in the context of pain and shown to be effective in an animal model when not paired with VR [88]. All three phytoncides are available as purified isolates in containers of 4, 8, or 32 ounces. The phytoncides will be added to the Asakuki atomizer with an easy calculation of ounces to mL, and that amount will be subtracted from the 500 mL total container in the atomizer so as to maintain a standardized phytoncide:water ratio. Measurement of this mixed mist by ultrafine particulate and VOC survey will be the method of determining the dose of the phytoncide.
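As an illustration of the dose bookkeeping described above, the following minimal Python sketch converts purchased isolate volumes from fluid ounces to millilitres and computes how much water remains in the 500 mL reservoir for a given phytoncide addition. The 30 mL-per-phytoncide figure is the planned value quoted above; the function names and the printed example are purely illustrative and not part of the protocol.

```python
# Illustrative only: unit conversion and reservoir loading for the atomizer.
US_FL_OZ_TO_ML = 29.5735  # millilitres per US fluid ounce

def fl_oz_to_ml(volume_oz: float) -> float:
    """Convert a purchased isolate volume in US fluid ounces to millilitres."""
    return volume_oz * US_FL_OZ_TO_ML

def reservoir_loading(ml_per_phytoncide: float = 30.0,
                      n_phytoncides: int = 3,
                      reservoir_ml: float = 500.0) -> tuple:
    """Return (total phytoncide mL, water mL) so the reservoir still totals 500 mL."""
    total_oils = ml_per_phytoncide * n_phytoncides
    water = reservoir_ml - total_oils
    if water <= 0:
        raise ValueError("Phytoncide volume exceeds reservoir capacity")
    return total_oils, water

if __name__ == "__main__":
    print(f"A 4 oz bottle holds about {fl_oz_to_ml(4):.0f} mL of isolate")
    oils, water = reservoir_loading()
    print(f"Load {oils:.0f} mL of oils and {water:.0f} mL of water")
```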
VIVE Pro Eye, HTC (High-Tech Computer) Corporation, New Taipei City, Taiwan The VIVE Pro Eye is capable of delivering digitally rendered greenspace visual recordings from forested or park-like greenspaces. The headset has sensors that coordinate the virtual greenspace with the participants' visual gaze (native eye tracking) to move the surroundings of the digital greenspace through interaction with the base stations, using motion sensors mounted on tripods in front of the participant [89,90]. Setup will include downloading the VIVE and SteamVR software onto a computer specifically dedicated to the VIVE system (e.g., using the Windows 10 operating system). Tracking will be performed in the computer software and saved for later review during data entry [91]. The computer will be located behind the chair in which the participant will be sitting. Sounds will be adjusted to those that are a part of the virtual greenspace; ambient sounds in the room will be muted. Although software comes with the purchase of the VIVE system, training videos can be found online and will be completed before use [87][88][89]. Baseline Fidelity Measures-Linkages to Equipment Perception of air quality prior to introduction of phytoncides into the room's air will be evaluated repeatedly prior to each intervention day, using two healthy individuals of both assigned sexes each time to assess the air quality of the room at the set temperature of 70 °F while monitoring the relative humidity. We will use the facial exposure method as described by Fang, Clausen, and Fanger [92]. In order to ensure fidelity of the measures of phytoncide dose prior to, during, and at the end of the intervention period, the equipment mentioned above will be zero- and span-calibrated before introduction of phytoncides into the room's air. Recording temperature and humidity using the HOBO T/RH before and during the intervention will ensure that the reliability of the survey data from the P-Trak and the ppbRAE 3000 has not been affected by changes in temperature and humidity. Feasibility and Reliability of Intervention Stability We expect to determine the ease of use of VR and phytoncide atomization, the drop-off of phytoncide over the intervention period, and engagement in VR, leading to a standardization of the procedure and protocol. Data will be collected from the participant, and the research associate and principal investigator will keep field notes covering challenges and facilitators related to the delivery method of VR and phytoncides and the ability of the participant to engage in VR for the duration of the study intervention. Atomized phytoncide levels prior to and after 1 h of dose delivery will be measured by the P-Trak and the Mini ppbRAE 3000 to determine dose drop-off during the intervention delivery. Quantitative data related to dose drop-off will be analyzed by t-test with an α level of 0.05. We will record deviations from the standardized procedures, including the deployment of VR and atomized phytoncides. We will monitor the timing, preservation, and delivery of specimens to specialized labs on the academic healthcare campus in order to track the ability to maintain the expected optimized rigor in testing immune cells.
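A minimal sketch of the planned dose drop-off analysis is given below: paired start-of-session and end-of-session readings (for example, P-Trak particle counts or ppbRAE total VOC values) are compared with a paired t-test at α = 0.05. The numerical readings are placeholders, and the use of SciPy's ttest_rel is our own illustrative choice rather than a prescribed analysis package.

```python
# Illustrative dose drop-off check: compare readings taken at the start and at
# the end of the 1 h exposure with a paired t-test at alpha = 0.05.
# The numbers below are placeholders, not real measurements.
import numpy as np
from scipy import stats

# One reading pair per session (e.g., P-Trak particle counts or total VOC, ppb)
start_readings = np.array([812, 790, 845, 805, 820], dtype=float)
end_readings = np.array([760, 742, 801, 770, 778], dtype=float)

t_stat, p_value = stats.ttest_rel(start_readings, end_readings)
mean_drop = np.mean(start_readings - end_readings)

print(f"mean drop-off = {mean_drop:.1f}, t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Dose drop-off over the session is statistically significant.")
else:
    print("No statistically significant drop-off detected.")
```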
Data Collection Since the participants will be recruited from the breast and prostate cancer clinics and the rheumatology clinic, the medical history that appears in the EPIC electronic medical record will be available for review prior to enrollment per IRB approval (OHSU IRB#00023183) and cross-checked with the participants after we have obtained informed consent on the day of the SFIT intervention. This will serve as the start of the data collection process for determining inclusion/exclusion of participants (see Section 2.2.1, Table 1; Inclusion and Exclusion for SFIT for Breast and Prostate Patients and Inclusion and Exclusion for SFIT for Axial Spondyloarthritis Patients) and baseline data collection (see Section 2.6.1, Section 2.7 and its subsections, and Section 2.8 and its subsections). Case report forms will be used to capture participant baseline data and data collected at all pertinent time points per protocol and will serve as a hard-copy record of data, which we will enter into a research electronic data capture system, as required by our university. Baseline Demographic data collection will occur prior to the start of both SFIT intervention studies and will be specific to each population of interest [93]. The behavioral/psychological measures, biological measures, and feasibility measures will be collected prior to placement of the participant in the SFIT intervention room. Follow Up For study #1 participants with breast or prostate cancer, follow-up data collection will occur within the 3-4-day period after the SFIT intervention and will include blood specimens for CBC with differential counts of leukocytes, NK cell phenotyping, and plasma for perforin and granulysin ELISAs [6]. We will survey the participants on events that might affect immune response. Every effort will be made to minimize the amount of blood drawn. For study #2 participants with axSpA, data collection will occur immediately after the intervention and will include the same measures as baseline as well as the intervention fidelity measures. Follow up will also include collection of adverse events, serious adverse events, and unanticipated problems, per IRB protocol for intervention studies. Measures-Behavioral/Psychological To measure the impact of SFIT on patients with chronic or breakthrough pain due to axial spondyloarthritis (axSpA), scales that capture the direct effect on symptoms of pain, psychological distress, and physical functionality specific to axSpA will be used. These include the Visual Analog Scale for pain [94,95], the Depression, Anxiety, and Stress Scale [96,97], and the Bath Ankylosing Spondylitis Disease Activity Index [98]. Demographics Demographic characteristics of participants will include clinically relevant ethnographic details specific to assigned sex (male or female), race, and ethnicity. Diagnostic information specific to axSpA, including date of diagnosis and onset of symptoms (date), chronicity of symptoms (in months), and non-biologic medication management (name of medication/last date of use), will be ascertained. Since culture, religion, and personal belief systems influence perception of pain, depression, stress, and functionality, we will include questions about these three pertinent individual characteristics in our baseline demographic data collection [99]. Visual Analog Scale (VAS) The VAS is a widely used self-reported tool measuring present-state perceived pain intensity [100].
Patients will be asked to indicate their perceived pain intensity along a 10 cm horizontal line (which can be on paper or computerized), and this rating will then be measured from the left edge up to the indicated marking to represent the level of pain intensity. The line represents a continuum between "no pain" and "worst pain". The VAS is often used in clinical settings and is sensitive in determining the effect of comfort or pharmacological interventions [94]. The VAS has performed well on psychometric tests of validity (for example, η² = 0.47; F = 0.44 [94,101]) and reliability (rs,VAS = 0.52-0.89 [102]) for measuring pain clinically. VAS scores will be treated as ratio data [103]. Depression, Anxiety, and Stress Scale (DASS) The DASS comprises a set of three self-report scales, which are intended to measure clinically significant symptoms of the emotional states of depression, anxiety, and stress [96,104]. Each of the three DASS scales (depression, anxiety, and stress) contains 14 items, divided into subscales of 2-5 items measuring the same construct, for a total of 42 items. The participants will be asked to complete the DASS prior to and immediately after the SFIT intervention. The DASS, which is intended to measure symptom severity of self-reported negative emotional states, including depression, anxiety, and stress, shows good psychometric validity and reliability (Cronbach's α = 0.89; test-retest and split-half reliability scores are rDASS = 0.99 and 0.96, respectively [96]) as a dimensional measurement of psychological distress associated with chronic conditions [104]. Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) The BASDAI is commonly used to measure clinical symptoms of AS and axSpA, including fatigue, spinal pain, joint pain related to swelling, and enthesitis, or inflammation of the tendons and ligaments, as well as morning stiffness duration and severity [105]. It consists of 6 self-report questions, with each question scored from 0, representing "none" or no symptoms, to 10, representing "the worst", and with the scores from the questions pertaining to morning stiffness severity and duration averaged such that 5 questions in total are scored. The participants will be asked to complete the BASDAI prior to and immediately after the SFIT intervention. The resulting score (from 0 to 50) is divided by 5 to give a final BASDAI score of 0-10, with scores of 4 or greater indicating significant disease [106]. The BASDAI has demonstrated strong reliability (p < 0.001) [105]. In a test of validity of the BASDAI for AS patients, Cronbach's α = 0.786 [98]. Measures-Biological/Immune System We will use established pre-clinical and clinical measures to characterize the immune responses before and after the simulated forest exposure intervention. Demographics Demographic characteristics of participants will include clinically relevant ethnographic details specific to age, assigned sex (male or female), race, ethnicity, and history of smoking. CBC and Differential Cell Count Complete blood count (CBC) and differential cell counts will be measured at baseline (prior to implementation of the simulated forest exposure intervention) and at Day 3. Correlation between this clinical measure and the data from flow cytometry and ELISA (outlined below) will be conducted to translate the pre-clinical findings into clinical use.
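For clarity, the BASDAI scoring rule summarized above can be written out as a short sketch. This is an illustrative rendering of the published scoring convention (six 0-10 items, with the two morning stiffness items averaged and the total divided by 5), not a validated instrument implementation, and the example responses are invented.

```python
# Illustrative BASDAI scoring sketch (not a validated instrument implementation).
# Items q1..q4 cover fatigue, spinal pain, joint pain/swelling, and enthesitis;
# q5 and q6 cover morning stiffness severity and duration and are averaged,
# so five question scores enter the total (maximum 50), which is divided by 5.

def basdai_score(q1: float, q2: float, q3: float, q4: float,
                 q5: float, q6: float) -> float:
    """Return the 0-10 BASDAI score from six 0-10 item responses."""
    items = (q1, q2, q3, q4, q5, q6)
    if any(not 0 <= q <= 10 for q in items):
        raise ValueError("Each BASDAI item must be between 0 and 10")
    total = q1 + q2 + q3 + q4 + (q5 + q6) / 2.0  # out of 50
    return total / 5.0                            # final 0-10 score

if __name__ == "__main__":
    score = basdai_score(5, 6, 4, 5, 7, 3)
    print(f"BASDAI = {score:.1f}; active disease criterion met: {score >= 4}")
```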
A whole blood sample for a CBC with a differential cell count will be collected two times using a 4 mL EDTA tube, prior to the SFIT intervention and on Day 3 after the SFIT intervention. This measure will include WBC count and percentages of 100 cell counts and absolute counts for neutrophils, lymphocytes, and monocytes. The clinical core laboratory at OHSU complies with established inter- and intra-assay parameters as it is accredited under the Clinical Laboratory Improvement Amendments. Flow Cytometry for NK Cell Number and Activity Flow cytometry is used to monitor immune system changes tied to specific disease states, which makes it ideal for defining cellular responses of interest [107]. NK CD3−/CD56+ and NK CD3−/CD56+/CD69+ (i.e., NK number and activity, respectively) will be measured by flow cytometry immunophenotyping using freshly collected peripheral whole blood (approximately 4 mL). Cells will be prepared for flow cytometry using the standard fluorescence-activated cell sorting method. Data analysis will be performed by gating on live cells based on forward versus side scatter profiles, then on singlets using forward scatter area versus height, followed by cell-subset-specific gating [107]. Perforin Expression Perforin expression will be measured by an enzyme-linked immunosorbent assay (ELISA) and will be used to monitor downstream perforin secretion due to NK cell activity [27]. Perforin will be measured using plasma extracted from whole blood, which will be frozen at −80 °C and stored until needed for the assay. Optimized ELISA kits from ThermoFisher Scientific, Waltham, MA, USA, will be used per manufacturer instructions to detect perforin levels. The enzyme-dependent color change will be read out on a Multi-Mode Microplate Reader. Perforin concentration will be extrapolated from the standard curve [108]. Granulysin Expression Granulysin expression will be measured by an enzyme-linked immunosorbent assay (ELISA), which will be used to monitor downstream granulysin secretion due to NK cell activity [27]. Granulysin will be measured using plasma extracted from whole blood, which will be frozen at −80 °C and stored until needed for the assay. Optimized ELISA kits from Abcam, Cambridge, MA, USA, will be used per manufacturer instructions to detect granulysin levels. The enzyme-dependent color change will be read out on a Multi-Mode Microplate Reader. Granulysin concentration will be extrapolated from the standard curve. For both perforin and granulysin expression, we will need 4 mL of freshly collected peripheral whole blood [108]. Follow-Up Measures Day 3 In addition to collecting whole-blood specimens for CBC with differential, NK cell number and activity, perforin, and granulysin, we will note any unanticipated problems, adverse events, and serious adverse events affecting the participants per IRB protocol for intervention studies. Unanticipated problems will be determined with the assistance of the clinicians and the study team, as these are determined through a ranking procedure specified by our university's Office of Human Research Protections. Adverse events will include subjective or objective symptoms occurring spontaneously, significant clinical lab abnormalities, a worsening of the participant's condition from baseline, or a recurrence or increase in signs and symptoms of the original disease that occur after the SFIT intervention and are worsened or changed in quality.
Serious adverse events will include death, a life-threatening adverse event, new hospitalization or prolongation of current hospitalization, or a new significant incapacity or new substantial inability to complete activities of daily living. Discussion We have presented our lab setup for the SFIT intervention. The SFIT intervention as described may be used in a multi-arm design with a control group, using VR only, phytoncide atomization only, or both in combination. We expect to use whatever combination is proven to be most effective as a minimally invasive intervention for all of the following designs in a stepwise progression: longer intervention duration to optimize dose effect; intermittent, but repeated, intervention to optimize dose effect drop-off; and home use application to move the optimized intervention into practical use. An intervention using VR and humidified phytoncides, α- and β-pinene and limonene, in a simulated forest exposure intervention as a substitute for forest bathing in axSpA patients with chronic or breakthrough pain and cancer patients with early-stage solid tumors (HR+ HER2− breast cancer and prostate cancer) who have completed surgery or chemo- and/or radiation therapy (exclusive of hormone therapy) is possible in a lab setting. Moving SFIT to a home setting may be challenging, but not insurmountable. Expected Outcomes The expected outcome for the two future studies is the creation of a standardized protocol for deploying SFIT. This will include calibration and measurement set points, cutoff criteria, and a description of how to maintain a consistent dose of phytoncide and ease the use of VR for participants. We expect to uncover pertinent adverse events, severe adverse events, and unanticipated problems. For study #1, we expect that the combined use of our three phytoncides of interest and VR will improve NK cell numbers and activity and blood levels of perforin and granulysin in patients with breast or prostate cancer. For study #2, we expect that the use of d-limonene and VR will reduce pain, stress, and depression in patients with axial spondyloarthritis. Lessons Learned We have had challenges with procurement of supplies during the current development and delivery backlog produced by shipping delays under COVID-19 conditions. Some of these were delivered relatively quickly, within one week; however, we procured span gas of the wrong concentration, and the next week all span gas was out of stock nationwide with a delay in the expected delivery of 1-2 months. Phlebotomy supplies have been challenging to obtain through our medical supply process within the university due to the increased usage of the supplies to care for COVID-19 patients. At times, our academic medical center announced requests for a reduction in the usage of various supplies for laboratory work. The lesson learned here is to start early after IRB approval and be aware of potential delays that might encumber grant funding previously awarded. The span gas cylinder connection and the fitting on the regulator must match. If the cylinder has a male threading and the regulator has female threading, these may match, but care needs to be taken in looking at the specification of the cylinder connection with respect to the regulator connection. In our case, we needed to have male threading on the replacement span gas with a connection that was a CGA 600 specification in order to match our existing regulator.
The replacement price differential is 12:1 with the regulator being much more expensive than the span gas cylinder. Conclusions We have presented the theoretical conceptions and established the foundation for the move to the pragmatic operations of the SFIT intervention. SFIT is in its early stages as a potential therapy in the two populations of interest to us presented here. We expect further development in building this novel lab set up in the immediate foreseeable future as we work to move this therapy into the home setting under the control of patients needing this minimally invasive therapy. This is relevant to healthcare science because healthcare providers are responsible for optimizing patient healing and recovery, while reducing the harmful effects of therapies that deleteriously affect the patient's ability to thrive with their chronic or temporarily morbid conditions. Informed Consent Statement: Informed consent will be obtained from all subjects involved in the study, once we are enrolling participants. Data Availability Statement: Not applicable.
2022-04-30T15:15:08.382Z
2022-04-28T00:00:00.000
{ "year": 2022, "sha1": "21cb2834d1f758bab969cf157ab235c4319a8540", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/19/9/5373/pdf?version=1651142991", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cb5b6c192b8a2168f2ccd4e435eed780c64e99e8", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
37294857
pes2o/s2orc
v3-fos-license
TA-designed vs. research-oriented problem solutions In order to study graduate teaching assistants (TAs) beliefs and values about the design of instructor problem solutions, twenty-four TAs were provided with different solutions and asked to discuss their preferences for prominent solution features. TAs preferences for solution features were examined in light of the modeling of expert-like problem solving process as recommended in the literature. Results suggest that while many of the features TAs valued align with expert-like problem solving approaches, they noticed primarily"surface features"of solutions. Moreover, self-reported preferences did not match well with the solutions TAs wrote on their own. INTRODUCTION Cognitive apprenticeship approach [1] underlies many pedagogical techniques that have been shown to promote expert-like problem solving. In this approach a prescribed problem-solving framework is made explicit through "modeling" it in instructors' solutions to problems. The framework involves: 1) initial problem analysis, 2) solution construction (choice of sub-problems), and 3) checking of solution [2]. If we wish to help instructors make problem solving approaches explicit on problem solutions they provide students, it is necessary to understand how these instructors currently perceive and value the design features of solutions to problems. In previous work we have investigated faculty beliefs and values related to the use of instructor solutions [3,4]. In this paper, we report on an investigation of the beliefs and values of graduate teaching assistants (TAs). TAs play a central role in the teaching of physics problem solving in many physics departments. Two main research questions are: (1) Do TAs notice and value features that explicate the expert decision-making process? (2) What do TAs have in mind when "discussing/mentioning" features that explicate the expert decision-making process? METHODOLOGY Twenty four first-year graduate TAs enrolled in a TA training course were provided with three instructor solutions for the same physics problem and asked to explain how these solutions compare with their preferences for the design of instructor solutions. Data were collected using a Group-Administered Interactive-Questionnaire (GAIQ) approach [5] in which each TA first wrote a solution for the designated problem that they would hand out to their students. The TAs then read three example problem solutions and identified prominent features of those solutions (e.g., providing a diagram) in a worksheet. They also ranked the three solutions based on a) which solution has more of each feature, and b) their preference for including these features in solutions. TAs were also asked to explain the reasons behind their preferences. To verify meaning and allow for the sharing of ideas, TAs were later asked to discuss their ideas in small groups and report their conclusions in a whole class discussion. Finally, each TA was given the opportunity to explain whether (and why) their preference changed by filling in a similar post-discussion worksheet. On this post-discussion worksheet they were asked to match the features they identified on the prediscussion worksheet to a list of pre-defined features (See Table 1) representing different aspects of the solution presentation. The list represents categories of features identified in a pilot study with the same population. Some of these categories relate to the expert problem solving process [2]. 
Both the pre- and post-discussion worksheets as well as TAs' own solutions were collected for analysis. Features on the pre-worksheet that were not matched to Table 1 by the TAs were categorized as additional features by the researchers. The complete corpus of data was analyzed by two researchers. Any disagreements were discussed by 4 researchers until full agreement was established. The details of the GAIQ approach are presented in a companion paper [5]. In addition to the 14 pre-defined features given in Table 1, there were 3 additional features that the TAs noticed. Because each was mentioned by only 1 or 2 TAs, we will focus only on the pre-defined features. Figure 1 shows the number of TAs who noticed each of the pre-defined features, and whether or not they liked it or were conflicted about it. If the TAs' preference for the feature changed after the discussion, or if the TAs explained both the pros and cons of a feature, they are placed in the "conflict" category. In the following we will separate our discussion of these results as related to the different components of an expert-like problem solving process [2]. Features Related to Initial Problem Analysis Providing a schematic visualization of the problem (F1) and providing a list of knowns/unknowns (F2) are the features that relate to the explication of the initial problem analysis stage in an expert-like problem solving process [2]. F1 is one of the most mentioned features (13 out of 24 TAs). F2 was mentioned by 9 TAs (the median for all features). These features were valued by almost all TAs who mentioned them. Only one TA expressed that he didn't like to provide a list of knowns/unknowns because it encourages students to solve problems via mindless plug and chug. Other TAs valued the list of knowns/unknowns because it "gives an idea of what you have and what you need." Examination of TAs' own solutions (which 23 TAs provided) indicates that all TA solutions included a diagram. The list of knowns (sometimes with the unknown target variable included) was found in the solutions of 12 TAs. Although all TAs valued F1 (visualization), different TAs had different ideas about the preferred visualization shown in Figure 2. Table 2 shows that initially 9/13 TAs distinguished between the quality of the diagrams, with 6 of them preferring a detailed drawing as presented in solution 3. Most of the TAs did not articulate why the detailed diagram was better than the others. TAs who chose the less detailed diagrams in solution 1 and/or 2 explained, for example, that they didn't like diagram 3 because "complicated diagrams can be confusing". Some TAs worried that the arrows in diagram 3 could be confusing to the students because they are used to represent both acceleration and velocity. It is likely that this concern was spread during the peer discussion stage, and therefore between the pre and the post the number of TAs who did not distinguish between solutions decreased and the number of TAs preferring solution 1 increased.
They can be further classified into 3 groups shown below: Choices made (major solution steps): F4) Explicit sub-problems are identified F6) Principles/concepts used are explicitly written Reasons for choices (additional explanations): F3) Providing a "separate" overview F5) Reasoning is explained in explicit words Framework within which choices are made: F10) Providing alternative approach F12) Forward vs. backward solution Based on Reif's [2] suggestion to represent the process of solving a problem as a decision making process, the major choices a person makes in a solution process involve defining sub-problems: intermediate variables and principles to find them. Underlying these choices is the solver's reasoning. While F4 and F6 present the major choices one makes, F3 and F5 provide additional explanations regarding the reasons underlying these choices. We note that this reasoning is guided by the solver's general perception of the framework within which choices are made (e.g., as a process that involves choosing between alternatives, or arriving at identified goal in a backward manner) represented in F10 and F12. Figure 1 shows that features related to reasons for choices were the most noticed ones. Table 3 shows the solutions TAs believed best represent features related to reasons for choices. Most of the TAs who noticed these features thought that they were best represented in solution 2 or 3. However, as shown in Figures 3 and 4, these solutions present reasoning in different ways. Solution 2 identifies the goal of each sub-problem and provides justification for the principles separately as the progress of the solution. Solution 3 describes a complete overview of how the problem should be broken into sub-problems and explains the principles applicable in each of the subproblems at the very beginning. In general, solution 3 was slightly preferred by TAs for its enactment of F3 while solution 2 was generally preferred as the best enactment of F5. Although most TAs did not explicate why one presentation is better than the other in the worksheets, in the whole-class discussion several TAs raised their concerns that students may not have the patience to read the whole chunk of text at the beginning of solution 3. Students may simply ignore all the explanations in the first part and jump directly into the second part with equations. Reasoning that is presented beside the equations, as in solution 2, makes it easier to reference and students are more likely to process the information better. In general, F3 and F5 were valued by most TAs who noticed them. The TAs believed that these features play an important role in instructor solutions because they make the solution process clear and make the solution easier to follow. The TAs also believed that these features help students understand the internal thinking process that the instructor went through when solving the problem and facilitate better transfer to other problems. Except for minor concerns, such as "overdoing the motivations can lead to undesired chunks of text", which was the major reason why a few of the TAs expressed a conflicted preference, these features were generally valued by TAs. However, examination of TAs' own solutions indicates a discrepancy between their self-reported preferences and their actual practice. 
In total, only 3 out of 23 TAs provided some outline of the subproblems (F3) either at the very beginning or along the solution progression, and only 6 of the 23 TAs provided any justification for the principles used (F5). Features 4 and 6, which explicate the choices made, were less noticed (2 and 5 TAs, respectively), although they were valued by all TAs who noticed them. One TA explained that "I enjoy this feature [F4] because it helps set up a logical progression of the problem"; other TAs explained their preference towards F6 in that "the concepts may be more important than the answer" or "if we can use less math, I think we should do that, so students focus on physics". Examination of TAs' own solutions indicates that no TA presented a solution in which the goals for each sub-problem were clearly stated. On the other hand, the concepts of "conservation of energy" and "Newton's 2 nd Law" were explicitly written in words or the basic mathematical forms by 18 and 8 TAs, respectively. Regarding the framework within which choices are made, 4 of the 5 TAs who noticed F10 (providing alternative approach) preferred this feature, explaining, for example, that "this [feature] demonstrates how to develop an expert knowledge structure and how it makes the problem much simpler." One TA was conflicted about this feature, as presenting an alternative approach "could possibly confuse students." However, no TA provided an alternative approach in their own solutions. As for F12 (backward vs. forward solution), most TAs did not notice it as an important consideration in the design of a solution. One difference between experts and novices is experts (teachers) commonly regard introductory physics problems as exercises while they are actually problems for novices (students). As a result experts may present problem solutions in a forward manner, reflecting their knowledge of the problem solution in an algorithmic way. Yet, to explicate the decision making process of an expert when solving a real problem, as suggested by instructional strategies aligned with cognitive apprenticeship [1], one has to present the solution in a backward manner. Only one TA mentioned this feature. However, this TA presented his/her solution in a forward manner. On the other hand, there were 8 TAs who originally presented a backward solution, even though they did not mention F12 in the worksheets. It is likely that many of the TAs consider the backward and forward solutions as interchangeable. Features Related to Checking of Solution F14, providing a check of the final result, is the feature which is related to the last step of an expert problem solving process: checking of solution. We expected this feature to stand out in the artifact comparison technique since only 1 of the 3 solutions included it. However, only 4 TAs noticed this feature. In addition, examination of TAs' solutions indicates that none of the TAs performed an answer check in the solutions they prepared for the introductory students. Although this feature was valued by all the TAs who noticed it, the findings suggest that this feature was underrated or ignored by most of the TAs. CONCLUSIONS In general, we find that the TAs did notice and value features related to the explication of an expert-like problem solving process, in particular, problem redescription and the planning of the solution. 
Yet, most features that the TAs noticed were "surface features" such as F1 (drawing), F3 (separate overview), and F9 (length) that one is likely to be aware of even if s/he doesn't know much about physics problem solving. This is compared to features such as F6 (principles used) or F12 (direction) that are deeper features of the solution and were less commonly identified by the TAs. In addition, we find that the self-reported preferences didn't match well with the solutions TAs wrote on their own before seeing the 3 artifacts. Although features in all 3 groups that are aligned with the expert-like problem solving process were in general valued by the TAs, only features related to problem re-description (especially F1) were generally found in their own solutions. The majority of the TA solutions contained little or no reasoning to explicate the underlying thought processes. No answer check was found in any TA's solution. We note that the TAs' solutions were collected at the beginning of the TA training course, when the TAs had just entered graduate school and started their TA jobs. It is likely that this activity, which helps to elicit TAs' initial ideas about the design of problem solutions in physics teaching, will influence their practices in the future. Thus, we believe that the activity described in this paper provides a starting point for TAs' professional development. In addition to this activity, follow up activities that are aligned with the theoretical strategies for enhancing conceptual change could be implemented. For example, it would be beneficial if new ideas are imported from the research literature, and the TAs are explicitly guided to evaluate their practice in light of these new ideas.
2017-11-03T14:39:27.236Z
2012-02-09T00:00:00.000
{ "year": 2016, "sha1": "ce9e4baf59daf83fd96e85e1419ae557f574235b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1602.06392", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "ce9e4baf59daf83fd96e85e1419ae557f574235b", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Computer Science", "Physics" ] }
268919599
pes2o/s2orc
v3-fos-license
Fracture Toughness Investigation of AL6082-T651 Alloy under Corrosive Environmental Conditions The crack initiation and propagation in an aluminium alloy in a corrosive environment are complex because of the loading parameters and material properties, which may result in a sudden failure in real-time applications. This paper investigates the fracture toughness of aluminium alloy under varying environmental and corrosion conditions. The main objective of the work is to link the interdependencies of humidity and temperature for an AL6082-T651 alloy in a corrosive environment. This study investigates the AL6082-T651 alloy's fracture behaviour and mechanism through microstructure and fractographic studies. The results show that a non-corroded sample, at room conditions, provided more load-carrying capacity than a corroded sample. Additionally, an increase in temperature improves fracture toughness, while an increase in humidity results in a decrease in fracture toughness. Introduction Fracture toughness is a vital parameter that characterizes a material's ability to resist crack propagation. It can be quantified and standardized through experimental fracture mechanics techniques like structural integrity assessment, residual strength analysis, life service evaluation, and damage tolerance design for diverse engineering components and structures. Consequently, evaluating and testing fracture toughness has emerged as a crucial aspect in advancing the fracture mechanics approach and its engineering implementations. Standard terminology for fracture toughness testing and evaluation is specified by the American Society for Testing and Materials (ASTM) in E1823 and E399 [1][2][3]. A material's fracture toughness can also be affected by environmental factors such as humidity, temperature, and corrosion [4]. An environmental stress fracture is the general term in materials science used to refer to the premature failure of materials like metals, alloys [5], and composites due to tensile loads and environmental conditions. These fractures are induced by factors such as humid air, saltwater, and corrosive chemicals [6,7]. Many of these processes are also capable of affecting aluminium alloy and its composites [8,9].
Al6082-T651 alloy is frequently employed as a structural material in numerous applications [7,10] due to its high specific strength among the 6xxx aluminium alloys [11]. Fracture toughness values can be used as a basis for material characterization, performance assessment, and quality assurance in common engineering structures like cars, ships, and aircraft. To store hydrogen in automobiles, high-pressure tanks made of an aluminium alloy and a layer wrapped in carbon fibre are now the most common option [12,13]. However, high-pressure hydrogen can quickly impact the aluminium alloy, which causes its embrittlement to develop. When aluminium alloys were subjected to a corrosive environment, a 3.5% NaCl solution for 24 hours [14,15] and more, the aluminium reacted with the water, causing hydrogen embrittlement (HE) [16]. In these samples, HE increases when they are subjected to high temperature, and high humidity conditions increase its severity [17,18]. High temperature, humidity, and applied load on the samples lead to hydrogen embrittlement and contribute to stress corrosion cracking (SCC). However, the formation of secondary phase particles in Al-Mg-Si alloys at high temperatures reduces the risk of hydrogen embrittlement and SCC in the 6xxx series compared with 7xxx aluminium alloys [19]. Hydrogen-induced cracks will form in a material at critical temperatures and humidities because of the enriched hydrogen atoms near the crack tip [20]. Saudi Arabia is characterized by a desert climate with an extensive coastal area. The central region experiences sweltering and dry summers [21]. The humidity in the coastal area is high and oppressive. The average monthly relative humidity in the Jeddah, Riyadh, and Dhahran locations varies from 37% to 100% [21,22]. The surroundings in the coastal areas, such as temperature, humidity, and corrosive environment, affect aircraft components' conditions. Al-Mg-Si alloys were used in a helicopter rotor blade application [23][24][25]. The operating conditions of the rotor blade were affected by the surrounding environment, which encouraged corrosion. According to the failure analysis, corrosion of the threaded portion caused the failure to occur around the bolt hole [26][27][28]. There is still a need for a comprehensive approach to relate the interdependencies between coupled loads, such as humidity and temperature, of Al6082-T651 alloys in corrosive environments, which affect their properties [4]. Most investigations focused on studying aluminium alloys and their composites to determine their fracture toughness at room temperature and in the absence of humidity and corrosive environments. Additionally, it has been very uncommon to compare fracture toughness at various temperatures and humidity levels and their combined impact on the performance of the AL6082-T651 alloy. The temperature values considered for experimentation are 20 °C, 40 °C, 60 °C, and 120 °C, and the humidity levels range from 40% to 90%. In this paper, we investigated the effects of temperature and humidity on the fracture toughness of Al6082-T651 alloys. Using scanning electron microscopy (SEM), the fractographic characteristics of the Al6082-T651 alloy were studied to find the failure mechanism. Material selection and its properties, experimental procedures followed to pre-crack the specimen, immersion in a corrosive environment and thermal chamber, fracture toughness testing, and results and discussions are explained in subsequent sections.
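As background for the testing described in the following sections, a minimal sketch of a standard ASTM E399-style data reduction for a compact tension (CT) specimen is given below: the conditional toughness KQ is obtained from the critical load PQ and the crack length through the usual CT geometry function, and the plane-strain size requirement is then checked. The geometry polynomial and the 2.5(KQ/σys)² criterion are the widely used E399 forms and are assumed here rather than taken from this paper; the specimen dimensions, load, and yield strength in the example are placeholders, not measured values.

```python
# Illustrative, assumed ASTM E399-style reduction for a compact tension specimen:
# conditional toughness KQ from the critical load PQ, then the plane-strain check.
import math

def ct_geometry_factor(a_over_w: float) -> float:
    """E399 geometry function f(a/W) for a compact tension specimen (0.2 < a/W < 1)."""
    x = a_over_w
    poly = 0.886 + 4.64 * x - 13.32 * x**2 + 14.72 * x**3 - 5.6 * x**4
    return (2.0 + x) / (1.0 - x) ** 1.5 * poly

def conditional_toughness(p_q_newton: float, b_m: float, w_m: float, a_m: float) -> float:
    """Return KQ in MPa*sqrt(m) from PQ (N), thickness B, width W, and crack length a (m)."""
    k_q = p_q_newton / (b_m * math.sqrt(w_m)) * ct_geometry_factor(a_m / w_m)
    return k_q / 1e6  # Pa*sqrt(m) -> MPa*sqrt(m)

def plane_strain_valid(k_q_mpa: float, yield_mpa: float, b_m: float, a_m: float) -> bool:
    """Check B and a against the 2.5*(KQ/sigma_ys)^2 plane-strain size requirement."""
    size_limit_m = 2.5 * (k_q_mpa / yield_mpa) ** 2
    return b_m >= size_limit_m and a_m >= size_limit_m

if __name__ == "__main__":
    # Placeholder specimen: W = 50 mm, B = 25 mm, a/W = 0.54, PQ = 12 kN, yield = 260 MPa
    W, B, a, PQ, YS = 0.050, 0.025, 0.54 * 0.050, 12e3, 260.0
    kq = conditional_toughness(PQ, B, W, a)
    print(f"KQ = {kq:.1f} MPa*sqrt(m); plane strain valid: {plane_strain_valid(kq, YS, B, a)}")
```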
Material The AL6082-T651 alloy (also known as Al-Mg-Si-Mn) is a popular choice for structural applications due to its physical properties resembling the Al6061 alloy [29]. Manganese is incorporated into the extruded medium- to high-strength Al6082 alloy in the T6 condition to enhance strain hardening, toughness, and strength through solution strengthening, while preserving ductility and corrosion resistance [30,31]. The T651 state is achieved through solution treatment, stress relief by stretching, and artificial aging at approximately 180 °C, which prevents elastic recovery after processing. Tables 1 and 2 show, respectively, the chemical composition and the properties of the AL6082-T651 alloy in the T651 condition [29-32]. The AL6082-T651 alloy is designed to improve strength, toughness, elastic modulus, damage tolerance, and fatigue crack growth resistance. The AL6082-T651 alloy is recommended for parts that need high strength and high toughness levels. Specimen Preparation The specimen utilized in this experiment to determine the fracture toughness (KIc) is the compact tension (CT) specimen. Each AL6082-T651 alloy sample is machined as per the dimensions of the geometry given in Fig. 1. The notch in the centre of the specimen has been cut using wire-cut EDM. The fatigue crack was introduced using an INSTRON servo-hydraulic fracture toughness testing apparatus. Moreover, in all CT specimens, the fatigue crack is introduced at the notch's end while ensuring that the crack length to width (a/W) ratio remains at 0.54. Fatigue pre-cracking is carried out at a constant frequency of 3 Hz by applying a cyclic load equivalent to 0.1 times the material's yield load. Fig. 2 shows the prepared sample, and Fig. 3 shows the experimental setup. Localized Corrosion The AL6082-T651 alloy samples were suspended in a still 3.5 wt.% NaCl aqueous solution for the localised corrosion tests. Taking into account the literature [33] and the ASTM G31 standard, the test duration, roughly 168 hours, has been calculated. Testing for static immersion corrosion was done at room conditions. Each of the pre-cracked CT specimens shown in Fig. 3(b) was submerged in a 3.5 wt.% NaCl solution for 168 hours before being removed and allowed to dry in air. Fracture Toughness Test The fracture toughness test was carried out for four different cases by considering the different testing environments mentioned below: Case i: Non-corroded and corroded specimens at a temperature of 20 °C and humidity of 40%. Case ii: Corroded specimens at temperatures of 40 °C and 60 °C, and humidity of 40%. Case iii: Corroded specimens at combined temperature and humidity (60 °C & 80% and 120 °C & 40%). Case iv: Corroded specimens at a temperature of 20 °C and humidity of 70% and 90%. In order to test the fracture toughness, a displacement rate of 0.1 mm/min was maintained. The relative displacement of the two knife edges of the CT specimen was monitored during the fracture toughness test using a crack opening displacement (COD) gauge. To determine the fracture toughness of the AL6082-T651 alloy, the load and displacement data are collected and subsequently analysed in accordance with the standards [34]. Three specimens were tested for each condition, and the average fracture toughness value was taken into account. Microstructures and compositions SEM micrographs showing the microstructures of the AL6082-T651 alloy under various temperature and humidity conditions are presented in Fig. 4.
The major influencing elements like Mg, Mn, and Si during exposure to different temperature and humidity conditions are shown in Fig. 4 (a-f). However, once the AL6082-T651 alloy is subjected to various temperature and humidity levels, oxide surfaces [35] start forming on its fractured surfaces, as seen in Fig. 4 (c-f). The presence of Mn and Mg reflects the alloying elements in the AL6082-T651 alloy. In the compositions of the AL6082-T651 alloy at 20 °C, there is no oxide content in either the uncorroded or the corroded condition. However, an oxygen (O) content is found at higher temperatures and humidity levels. The O content is due to the formation of oxides on the top surface of the fractured alloy. Along with Mg, Si, and Mn, a small amount of O has also been identified for corroded samples. Exposure to the corrosive environment (168 hrs) and to different temperatures and humidity levels (72 hrs) forms the oxide layer on the fractured surface of the AL6082-T651 alloy. Since the exposure time is short, the oxide layer formed is very thin, at nearly 1%. Load vs. COD Fig. 6 displays the load variation versus crack opening displacement (COD) of the AL6082-T651 alloy under various temperature and humidity conditions. The alloy's load-carrying capacity decreases with rising humidity levels and rises with rising temperature. The critical load (PQ) value is calculated by drawing the 5% secant line up to the maximum load (Pmax) on the experimental data, as shown in Fig. 6. The plot shows that the load vs. COD curves of all cases follow the Type III curve [34,36]. In the Type III condition, PQ = Pmax applies, since the specimen fails before exceeding 5% nonlinearity. The conditional fracture toughness KQ is computed from the PQ value and the measured crack length for each test using Eqs. (1) and (2) for CT specimens [37]. Fracture Toughness The fracture toughness is significantly influenced by the specimen thickness (B). The fracture toughness decreases as the specimen thickness increases. When the specimen thickness approaches the critical limit, the fracture toughness value becomes stable. The fracture toughness estimated at this point is termed the plane-strain fracture toughness, denoted KIc [38]. Equations (3) and (4) provide the validity conditions for the plane-strain fracture toughness [37]; these plane-strain requirements are met by the specimen dimensions considered here [34]. Using Eq. (1), the fracture toughness of the AL6082-T651 alloy is calculated for a range of temperature and humidity conditions. Fig. 7 depicts how temperature and humidity affect the AL6082-T651 alloy's fracture toughness. The graph demonstrates that the material's fracture toughness decreases with an increase in humidity, while it increases as the temperature rises. Fracture Surface Morphology Fracture micrographs of the various CT specimens are obtained and are shown in Figs. 8 (a-f).
Fig. 8 compares SEM micrographs of the fracture surfaces of smooth and notched specimens subjected to quasi-static strain rates. It was found that the core zone of the fracture surface featured dimples of varying heights and diameters. The high applied stress causes voids to form quickly during the nucleation stage. Large dimples are most likely produced by high uniaxial stress, which could accelerate the dimple fracture. The centre zone of the fracture surface exhibits ductile fracture filled with lumps and hollows. A typical nucleation-growth-coalescence sequence of ductile fracture is depicted in Fig. 8 (a-f), where several voids and dimples may be seen. When a ductile material is subjected to uniaxial stress, micro-voids are formed in the core zone, preferentially at impurities such as Mg and Si. As deformation progresses, the micro-voids continue to grow, eventually coalescing to produce numerous microcracks. Shear fracture occurs when the microcracks connect and grow to the vicinity of the specimen surface. Fig. 8 (b)-(f) show SEM analyses of the fractured surfaces of corroded specimens at 20 °C and 40% humidity, 60 °C and 40% humidity, 60 °C and 80% humidity, 120 °C and 40% humidity, and 20 °C and 90% humidity, respectively. Discussion The experiments were conducted for four different cases, and the outcomes are presented in the following section. Case i: Non-corroded and corroded specimens at a temperature of 20 °C and humidity of 40%. The load-carrying capacity of the non-corroded sample is higher than that of all other samples (Fig. 6). Since all other samples are tested under corroded conditions, the material's load-carrying capacity decreases. The peak load is about 8.12 kN for room conditions, which is 2.2% higher than the peak load of the corroded sample. The 2.2% reduction in load-carrying capacity is small; however, it is obtained for only a short period (168 hours) of sample immersion in the corrosive environment. This change in peak value is small due to the short-period corrosion effect on the sample. The corrosive environment has crucially affected the mechanical properties of the material. Due to corrosion, the material's surface reacts with the surrounding environment (oxygen and room humidity) and forms oxide layers. The formation of oxide layers increases crack nucleation and thus reduces the fracture toughness. Large dimples on the rough fracture surfaces indicate a ductile fracture, which suggests the fracture process in each case was caused by void coalescence. Under the various testing conditions, void development and coalescence cause the dimples to form. In all the cases, the transition zone is evident as depicted. These locations correspond to local embrittlement of the matrix or restricted plastic deformation. In Figs. 8 (a) and 8 (b), there is no sign of quasi-cleavage fracture [39]. The non-corroded specimen fails in a completely ductile manner under room conditions and is characterized by the voids and dimples shown in Fig. 8 (a). The fracture surface of the corroded sample, shown in Fig. 8 (b), shows the formation of a single-mode fracture, i.e., voids and dimples, and there is no sign of brittle failure. However, a little oxide layer can be observed.
Case ii: Corroded specimens at temperatures of 40 °C and 60 °C, and humidity of 40%. At 40 °C and 60 °C, the corroded samples have nearly the same load-carrying capacity. The percentage elongation of the alloy increases as the temperature rises from room temperature to 60 °C [40]. As a result, as the temperature rises, the material becomes more ductile, which in turn makes it harder to fracture. The cracked surface of the AL6082-T651 alloy is subjected to plastic deformation at 60 °C during crack propagation, which causes a thin oxide layer to form [35]. The microstructures shown in Fig. 4 (c) and Fig. 8 (c) also display the formation of this oxide layer. Case iii: Corroded specimens at combined temperature and humidity (60 °C & 80% and 120 °C & 40%). The load-carrying capacity of the corroded sample at a temperature of 60 °C and humidity of 80% is 6.69 kN, as shown in Fig. 6, whereas for a temperature of 120 °C and humidity of 40%, the load-carrying capacity decreases by 16%. The corroded sample at a temperature of 120 °C and room humidity has a higher load-carrying capacity than that at the combined condition of 60 °C and 80% humidity. At a temperature of 120 °C, a thin oxide layer has been observed acting as crack closure [41]. The development of crack closure lessens the likelihood of crack propagation, increasing the material's ability to support a higher load. At the high temperature depicted in Fig. 7, the fracture toughness of the alloy increases because of the crack closure and the increased ductility. The EDS analysis of the particles in Figs. 4 and 5 revealed that those phases typically contain a significant amount of Fe, as shown in Fig. 4 (b-f), in addition to Al, Si, Mg, and Mn. These particles may therefore be recognized as the AlMg(Fe3Mn2)Si phase [42] (EDS plot in Fig. 4). The mapping of these phase particles is also shown in Fig. 8 (e). The amounts of Mn and Mg present affect the morphology of the AlMg(Fe3Mn2)Si phase. Higher hardness and lower ductility are the results of these phase particles. As a result, the material's ability to support load is reduced, which lowers the fracture toughness. Case iv: Corroded specimens at a temperature of 20 °C and humidity of 70% and 90%. The load-carrying capacity of the corroded sample at room temperature and 70% humidity is 6.09 kN, whereas it is reduced at 90% humidity. Fig. 6 (g) shows the load vs. COD plot, which exhibits some non-linearity due to the higher humidity. Fig. 6 shows that the corroded samples' load-carrying capacity and fracture toughness under high humidity conditions are 24.2% lower than those of the corroded samples at room conditions. The formation of oxide layers, the loss of ductility due to corrosion, and increased crack initiation were the reasons for the decrease in fracture toughness. Fig. 8 (f) illustrates the creation of brittle features such as voids, deeper dimples, and oxide layers on the fracture surfaces at the notch area of the CT specimen in a high-humidity environment. Early crack propagation brought on by crack nucleation lowers the material's threshold fracture toughness. As a result, as the severity of the corrosive environment, such as humidity, increases, the AL6082-T651 alloy's plane-strain fracture toughness decreases.
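To make the KQ/KIc evaluation outlined in the Load vs. COD and Fracture Toughness sections concrete, the short sketch below computes the conditional fracture toughness KQ from a critical load PQ using the standard ASTM E399 compact-tension geometry function and then applies the usual plane-strain thickness check. It is a minimal illustration only: the specimen dimensions, yield strength, and load value used here are assumed for demonstration and are not the measured data of this study.

```python
import math

def ct_geometry_factor(a_over_w: float) -> float:
    """Standard ASTM E399 geometry function f(a/W) for a compact-tension specimen."""
    x = a_over_w
    return ((2 + x) * (0.886 + 4.64 * x - 13.32 * x**2 + 14.72 * x**3 - 5.6 * x**4)) / (1 - x) ** 1.5

def conditional_toughness_kq(p_q_newton: float, b_m: float, w_m: float, a_m: float) -> float:
    """K_Q = P_Q / (B * sqrt(W)) * f(a/W), returned in MPa*sqrt(m)."""
    kq_pa_sqrt_m = p_q_newton / (b_m * math.sqrt(w_m)) * ct_geometry_factor(a_m / w_m)
    return kq_pa_sqrt_m / 1e6

def is_plane_strain_valid(kq_mpa: float, sigma_ys_mpa: float, b_m: float, a_m: float, w_m: float) -> bool:
    """Plane-strain validity: B, a and (W - a) must all exceed 2.5*(K_Q/sigma_ys)^2."""
    limit_m = 2.5 * (kq_mpa / sigma_ys_mpa) ** 2  # K_Q and sigma_ys both in MPa units -> metres
    return min(b_m, a_m, w_m - a_m) >= limit_m

# Illustrative (assumed) inputs: P_Q = 6.69 kN, W = 50 mm, B = 25 mm, a/W = 0.54, sigma_ys = 280 MPa.
W, B = 0.050, 0.025
a = 0.54 * W
kq = conditional_toughness_kq(6.69e3, B, W, a)
print(f"K_Q = {kq:.1f} MPa*sqrt(m), plane-strain valid: {is_plane_strain_valid(kq, 280.0, B, a, W)}")
```

With these assumed dimensions the sketch returns a KQ of roughly 13 MPa·sqrt(m) and a passing plane-strain check; the actual values reported in Fig. 7 follow from the measured PQ, crack lengths, and specimen geometry of this work.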
Conclusions The paper explores the impact of localized corrosion on the fracture toughness of the AL6082-T651 alloy at various temperatures and humidity levels. The investigation involves experiments and fractographic studies. The following conclusions are made based on the examinations: • The load-carrying capacity of the non-corroded sample, at room conditions, is 3% higher than that of the corroded sample. The fractured surfaces show no sign of brittleness and only a little oxide-layer formation. • As the temperature increases from 20 °C to 120 °C, the percentage elongation of the aluminium alloy also increases, enhancing the material's ability to withstand fracture. At high temperatures and humidity levels, the pre-cracked surface of the AL6082-T651 alloy forms a very thin oxidized layer. • The increase in humidity from 40% to 90% at a temperature of 20 °C leads to oxide-layer formation, loss of ductility due to corrosion, formation of hard AlMg(Fe3Mn2)Si phases, and increased crack initiation, thus reducing the fracture toughness by 12.5% compared with that of the corroded samples at room conditions. Furthermore, following from these conclusions, it is recommended to investigate the fracture toughness of the AL6082-T651 alloy considering the implications for the coastal region. Fig. 3. Experimental setup: (a) fatigue crack introduction to the CT specimen; (b) immersion in 3.5% NaCl solution; (c) fracture toughness testing of the fatigue pre-cracked specimen with a clip strain gauge; (d) thermal chamber used to maintain the required temperature and humidity. Fig. 5. Energy-dispersive spectroscopy (EDS) micrographs illustrating the mapping of Mg, Si, Mn, and other elements of the AL6082-T651 alloy for different temperature and humidity conditions.
2024-04-05T15:14:00.360Z
2024-04-03T00:00:00.000
{ "year": 2024, "sha1": "222d22c763cbb9ea79f69a033e6f3dca3ada2aee", "oa_license": "CCBY", "oa_url": "https://www.scientific.net/EI.10.3.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "7197c2d238bc27c5e78dfded27ea952313da4cfa", "s2fieldsofstudy": [ "Materials Science", "Engineering", "Environmental Science" ], "extfieldsofstudy": [] }
238361756
pes2o/s2orc
v3-fos-license
Methods of monitoring the Ground-Climate-Pipeline system in sections with hazardous processes The paper considers the concept of combining various means of monitoring of the Ground-Climate-Pipeline system sections exposed to hazardous processes. The concept of "hazardous processes" is described. Sensors selected for monitoring parameters in the Ground-Climate-Pipeline system are described. A monitoring scheme is proposed and described, all elements of which are functionally combined through information transfer networks. The advantages of using artificial intelligence in the proposed monitoring system are explained. Introduction The traditional approach to investigating the safety level of pipeline networks classifies the main causes of accidents as [1,2]: corrosion (external and internal, swamping, waterlogging of the territory by aggressive groundwater); equipment/material/joint defects and equipment failure; external (mechanical) impacts caused by various natural phenomena such as mudflows, screes, and landslides; and erroneous actions of personnel during operation. Conventional monitoring means are mainly designed to detect and measure the level of these impacts [3]. The desire to record such impacts is associated with an increase in the number of sensors installed in hazardous sections. An increase in their number leads to an increase in the probability of false alarms in the pipeline network control system, which reduces the efficiency of its functioning. This research describes the concept of combining various means of monitoring pipeline sections, computer modelling, and artificial intelligence to improve the reliability of forecasting the nature of a possible accident while reducing the level of false alarms [4]. The proposed concept allows the potential damage to the environment to be reduced through a systems approach to analysing the state of the pipeline system section, considered as a set of interacting elements [5]. Methods Hazardous processes (HP) should be understood as a change in the physical and mechanical properties of the ground, including the stress-strain state of the Ground-Climate-Pipeline (GCP) system, which depends on the climate, the mobility of the ground (speed, acceleration, and amplitude) and the pipeline state (temperature, pressure, etc.) [6][7]. Thus, the change in the equilibrium state of the GCP system is described by a structure in which the climatic trend contributing to the occurrence of HP affects the bearing capacity and stress-strain state of the soil, which in turn affects the stress-strain state of the pipeline [8]. To control the interaction of the elements of the GCP system, the authors recommend using the sensors presented in Table 1, which provide measurements of the parameters of the interacting elements of the system in the sections with HP. The sensors presented above are placed temporarily or permanently at installation points associated with HP in the pipeline, in the ground, or on the surface of the measuring equipment [10]. The parameters measured by the sensors are monitored at control points, each of which is assigned the appropriate geographic and operational coordinates of the pipeline for the parameters under control. Therefore, measurements from the sensors are collected at a frequency depending on the process intensity and the values of the measured parameters, e.g. once per hour at the control points [11].
After the measurement and an automatic pre-check of the measured value, it is transferred to the data collection and transmission unit (DCTU) [12]. This block accumulates the information obtained from the control points and analyses it for the compatibility of the individual measurements [13]. If there are no contradictions between the individual portions of data received from different sensors, the information goes to the monitoring database, where it is checked for consistency with the computer simulation results [14]. The authors propose the following monitoring scheme, all elements of which are functionally combined through information transfer networks (Fig. 1). The information received from the DCTU is transferred further to the information processing and decision-making unit equipped with Artificial Intelligence (AI). This unit has access to the monitoring database, which stores the data received from the sensors at the control points. The information stored in the monitoring database is processed by the AI block, and the obtained values are compared with the parameters obtained from the computer simulation. The results of the comparison are processed by the AI block, which allows a solution to be formulated aimed at reducing the probability of the system exceeding the limits of dangerous values of the system state parameters, such as the stress-strain state. It should be noted that the given monitoring means provide not only a record of the current state of the system, but also its forecast depending on predicted climate changes and the nature of the geological processes. The resulting time gap can be used for preventive risk reduction measures and for selecting effective means of situation normalization by computer modelling. Results The first important result is a new approach to considering a pipeline section as a system combining the interacting Ground-Climate-Pipeline media, which allows its limiting or dangerous states to be classified. The parameters on which the classification is carried out are determined based on the readings of a set of different sensors and on mathematical simulation of the interaction processes of the system elements. The number of false alarms during monitoring of the system is reduced by the application of a multistage control procedure and by compliance with the conditions of mutual conditionality of sensor readings, which are recorded for each class of conditions. The classification of conditions is linked to the risk indicators of an accident resulting in tangible or severe consequences. The risk indicators are calculated based on the analysis of the soil and pipeline stress-strain state, which characterizes the GCP system. To reduce the probability of false alarms, the authors propose to use an adapted set of sensors and to consider their combined readings when processing the information. For example, a combined reading of an inclinometer for ground-mass movement control and a groundwater-level sensor, together with an accelerometer for ground-shaking control, may indicate possible dangerous ground movements. To consistently improve the quality of recognition of the different classes of situations, it is proposed to use the feedback provided by the AI training system.
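As a concrete illustration of the multistage check and the combined-reading rule described above, the sketch below encodes a simple consistency test in which a dangerous-ground-movement alarm is raised only when the inclinometer, groundwater-level, and accelerometer readings agree. The sensor names and threshold values are illustrative assumptions for the sketch, not parameters of the monitoring system described in the paper.

```python
from dataclasses import dataclass

@dataclass
class ControlPointReading:
    tilt_deg_per_day: float      # inclinometer: ground-mass movement rate
    groundwater_level_m: float   # depth to groundwater (smaller = closer to surface)
    peak_acceleration_g: float   # accelerometer: ground shaking

# Illustrative alarm thresholds (assumed values for this sketch only).
TILT_LIMIT = 0.05          # deg/day
WATER_LEVEL_LIMIT = 1.0    # m below surface
ACCEL_LIMIT = 0.02         # g

def single_sensor_flags(r: ControlPointReading) -> dict:
    """Stage 1: check each sensor independently against its own threshold."""
    return {
        "tilt": r.tilt_deg_per_day > TILT_LIMIT,
        "water": r.groundwater_level_m < WATER_LEVEL_LIMIT,
        "shaking": r.peak_acceleration_g > ACCEL_LIMIT,
    }

def dangerous_ground_movement(r: ControlPointReading) -> bool:
    """Stage 2: raise an alarm only when mutually conditioned readings agree,
    i.e. ground tilt is accompanied by high groundwater and measurable shaking.
    Requiring agreement suppresses false alarms from a single noisy sensor."""
    flags = single_sensor_flags(r)
    return flags["tilt"] and flags["water"] and flags["shaking"]

reading = ControlPointReading(tilt_deg_per_day=0.08, groundwater_level_m=0.6, peak_acceleration_g=0.03)
print("possible dangerous ground movement:", dangerous_ground_movement(reading))
```

In the proposed scheme, a flag produced in this way would still only be a candidate event: it would be stored in the monitoring database and cross-checked by the AI block against the computer-simulation results before any decision is taken.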
2021-10-06T20:08:51.934Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "213f5a75e872d8eb6bd4462e6af30fb7f1b93f59", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/864/1/012022/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "213f5a75e872d8eb6bd4462e6af30fb7f1b93f59", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
228914871
pes2o/s2orc
v3-fos-license
Climbing Jacob’s ladder: A density functional theory case study for Ag2ZnSnSe4 and Cu2ZnSnSe4 Recent advances in the development of exchange and correlation functionals to be employed in density functional theory calculations combined with the availability of ever more powerful high-performance computing facilities, let predictive computational materials science become reality. In order to assess the quality of calculated material properties, Jacob’s ladder provides an informal classification, where exchange and correlation functionals of similar capabilities are placed at the same rung of this ladder, while improved and more accurate ones are placed at higher rungs. Climbing Jacob’s ladder, i.e. employing more accurate exchange and correlation functionals, increases the quality of the results and the computational demands, and provides some guidance as to what accuracies and computational costs to expect from specific calculations. This is particular important for materials whose electronic ground state properties are incorrectly described, e.g. small band gap semiconductors, and materials, where system sizes for subsequent investigations, like defect properties or band offsets in heterostructures, become prohibitively large for more accurate exchange and correlation functionals. Here, we provide a systematic density functional theory study on the ground state properties of Ag2ZnSnSe4 and Cu2ZnSnSe4 for the lowest four rungs of Jacob’s ladder. Cu2ZnSnSe4, and in particular its alloys with Ag, is a promising candidate material for future thin-film solar cell absorber layers. In the present work, the obtained material properties are compared to available experimental data, allowing to benchmark the accuracy of the employed exchange and correlation functionals. We also provide a comparative study for subsequent quasiparticle calculations. Therein, the influence of differently obtained eigenvalues and orbitals as starting points are critically assessed with respect to available experimental data. Our results show that, structural properties based on the SCAN functional show overall best agreement with available experimental data, whereas additional hybrid functional calculations are necessary for satisfying results on electronic and optical properties. Introduction Novel photovoltaic materials based on the kesterite crystal structure are ingredients for third generation thin-film solar cells based only on earth-abundant and non-toxic elements. Prominent examples include Cu 2 ZnSnS 4 (CZTS), Cu 2 ZnSnSe 4 (CZTSe), and their solid solution Cu 2 ZnSn(S x Se 1 − x ) 4 (CZTSSe), where the latter one allows to continuously tune the electronic band gap between around 1.0 eV for CZTSe [1,2] and 1.53-1.67 eV for CZTS [3], respectively, thereby including the optimum band gap of 1.34 eV for single junction photovoltaic cells [4]. Subsequently, the overall potential of CZTSSe based solar cell devices has been demonstrated with a reported power conversion efficiency of 12.6% [5]. Following up on the question posed very early on, of why kesterite solar cells are not 20% efficient [6], the last years have seen a lot of experimental investigations into the structural, electronic, and optical properties of kesterite based solar cell materials, as well as theoretical investigations based on first-principles calculations employing density functional theory (DFT) for a comprehensive atomic understanding. 
These combined experimental and theoretical efforts have identified the most likely cause for limiting further advances as a large defect density of Cu-Zn antisites [7], either intrinsic Cu Zn or Zn Cu antisite defects or Cu Zn -Zn Cu defect complexes observed for the Cu-Zn disorder [8]. Looking at the final device geometry, i.e. a CZTSSe/CdS interface for a p-n junction, these defects are accumulating at the interface region, thereby pinning the Fermi level in the middle of the band gap [7]. Subsequently this leads to only a small band bending into the absorber layer and a large deficit in the open-circuit voltage (V oc ) [9]. Experimentally, this can be verified by the difference in the band gap determined from internal quantum efficiency and photoluminescence measurements [10]. In order to significantly reduce the Cu-Zn antisite defect density, a partial replacement of Cu with larger Ag cations has been proposed [11,12], based on promising results on kesterite type Ag 2 ZnSnSe 4 (AZTSe) [13,14]. DFT calculations indicate an increase in the formation energy of I-II antisite defects, leading to a significant reduction in I-II antisite defect density by an order of magnitude and subsequently a reduced band tailing [9,15]. From a first-principles perspective, kesterite type materials pose a very challenging task. First of all, they are quaternary materials with a complicated interplay of interactions among the atoms. Secondly, as potential absorber layers in solar cell devices, they have band gaps in the range of 1.0-1.5 eV, which are notoriously difficult to treat accurately by DFT methods due to an underestimation of the band gap in simpler exchange and correlation functionals. One way to circumvent this particular band gap problem is to employ more accurate exchange and correlation functionals, i.e. hybrid functionals, which incorporate some fraction of Hartree-Fock exact exchange and have been shown to yield better electronic properties of semiconducting materials. Another way would be to perform additional quasiparticle investigations based on many-body perturbation theory, i.e. subsequent GW calculations. While those two approaches can routinely be applied to the bulk properties of kesterite type materials, an increased demand of computational resources prohibits their use for subsequent material properties investigations, e.g. formation energies of defects and defect complexes or the calculation of band offsets in modelled device heterostructures. To this end, a well tested and proven theoretical approach would be highly desirable, providing a guide towards a useful combination of available exchange and correlation functionals together with their overall accuracy and their demands on computational resources. Here, we provide a detailed first-principles investigation on the structural, electronic, and optical properties of AZTSe and CZTSe in both, the kesterite and stannite crystal structure, and benchmark the results obtained from exchange and correlation functionals of increasing complexity with respect to available experimental data. The paper is organised as follows. Section 2 provides information on the structural polymorphs of AZTSe and CZTSe, as well as the necessary theoretical background for first-principles calculations, the employed exchange and correlation functionals, and important computational details. 
In a first step, in section 3 the obtained results on the structural, electronic, and optical properties are compared with and benchmarked against available experimental data. Results from subsequent quasiparticle calculations based on the GW approximation will be discussed as well. In a second step, the overall performance of the employed exchange and correlation functionals will be critically discussed, leading to suggestions for material properties investigation, such as the calculation of defects and defect complexes or band offsets in modelled heterostructures. Finally, section 4 provides a short summary and conclusion, and gives an outlook into future investigations. Structural polymorphs At ambient condition AZTSe and CZTSe crystallise in the kesterite crystal structure (space group I4, no. 82), as shown in figure 1(a). The kesterite crystal structure can be most easily understood as originating from a binary II-VI zinc-blende crystal structure via pairwise cation substitutions, with the restriction that the bonding to adjacent cations has to fulfil the so-called octet rule, i.e. the valence shell of each atom comprises eight electrons. Starting from a II-VI zinc-blende material, doubling the unit cell along the c axis, and additionally replacing the group-II element by a pair of group-I and group-III elements, can lead to ternary I-III-VI 2 compounds of the chalcopyrite and the CuAu-like structure, respectively. A further replacement of the group-III elements by pairs of group-II and group-IV elements in the chalcopyrite structure leads to the I 2 -II-IV-VI 4 kesterite crystal structure. However, performing the same replacement starting from the CuAu-like structure can lead to the I 2 -II-IV-VI 4 stannite crystal structure (space group I42m, no. 121), which is shown in figure 1(c). To make matters worse, the ionic radii of Cu and Zn are nearly identical, giving rise to a high probability of cation exchange in the Cu-Zn planes perpendicular to the crystallographic c axis, located at z = 0.25 and z = 0.75. While partial exchange of Cu and Zn cations leads to an increase in the Cu Zn and Zn Cu defect density, a totally random distribution of Cu and Zn cations within the Cu-Zn planes leads to the introduction of additional symmetry elements and has therefore been named disordered kesterite structure, shown in figure 1(b). In terms of structural investigations this disordered kesterite structure crystallises in the same space group as the stannite crystal structure, and has hindered the correct crystal structure determination of possible solar absorber materials for some time. Theoretical background In recent years, first-principles calculations based on DFT have become a very powerful tool for materials science investigations, not only due to the development of ever more accurate exchange and correlation functionals, but also due to the wide-spread availability of local, regional, and national high-performance computing (HPC) facilities, providing the necessary computational resources for investigations of increasing complexities. With the constant development of improved exchange and correlation functionals to be employed in first-principles investigations, for somebody not that familiar with the topic it became more and more difficult to judge the accuracy and reliability of the reported results. In order to allow for some guidance, the so-called Jacob's ladder can provide a first hint [16], as adopted in figure 2. 
Here, exchange and correlation functionals of similar capabilities are grouped together at the same rung of this ladder, while improved and more accurate ones are placed at a higher rung. Functionals of the local density approximation (LDA) can be found at the lowest rung and are based solely on the electron density, the more common examples include the parametrisations after Perdew and Zunger [17] and Perdew and Wang [18]. The next rung depicts the generalised gradient approximation (GGA) functionals, which additionally take into account the gradient of the electron density. Examples include the parametrisations after Perdew, Burke, and Ernzerhof (PBE) [19], the PBE parametrisation revised for solids (PBEsol) [20], and the parametrisation after Armiento and Mattsson (AM05) [21], respectively. Additional inclusion of the second derivative of the electron density (the Laplacian) and the orbital kinetic energy density leads to the rung of the so-called meta-GGAs, with the SCAN functional [22] being a prominent example. This SCAN functional satisfies all known possible exact constraints for the exact density functional, and has been claimed to match or improve on the accuracy of computationally more demanding hybrid functionals [23]. The mentioned hybrid functionals replace a pre-defined fraction of the underlying exchange and correlation energy by a Hartree-Fock exact exchange term. Prominent examples include the parametrisations after Heyd, Scuseria, and Ernzerhof [24], or the PBE0 functional [25]. Recent years witnessed a surge in investigations trying to improve hybrid functional investigations by adjusting the pre-defined fraction of Hartree-Fock exact exchange, ultimately being determined in a fully self-consistent manner [26,27]. On the one hand, by climbing Jacob's ladder and employing ever more sophisticated exchange and correlation functionals, we expect to obtain improved accuracy from our first-principles calculations, as indicated by the right arrow in figure 2. On the other hand, this is accompanied by an increase in computational resources, as depicted by the left arrow in figure 2. However, with the more wide-spread availability of HPC facilities, hybrid functional investigations became more and more common in recent years and led to a better description of structural, electronic, and optical properties of a range of materials [28][29][30][31]. Apart from the calculation of electronic and optical properties by means of exchange and correlation functionals of four distinct rungs of Jacob's ladder as described above, we also performed quasiparticle calculations of those material properties based on the GW approximation introduced by Hedin [32]. While the GW approximation remains the most accurate method for quasiparticle calculations available, its numerical requirements and perturbative nature make approximations unavoidable [33][34][35][36][37]. From a numerical requirements' point of view several flavours of practical GW calculations have been established over the years. The most simple one is the single-shot G 0 W 0 method, with G 0 being the noninteracting Green's function of the system and W 0 its screened Coulomb interaction. Extensions iterate the eigenvalues to self-consistency in the Green's function alone (GW 0 ), or also in the screened interaction W (GW). Methodical extensions see the inclusion of vertex corrections as well [38], however, this is beyond the scope of the present work. 
From a perturbative nature's point of view, all mentioned flavours of GW calculations depend on the starting one-electron energies and orbitals, usually taken as Kohn-Sham eigenenergies and orbitals from preceding DFT calculations. This immediately implies that the accuracy of GW calculations will be influenced by the accuracy of the starting one-electron energies and orbitals, as will be discussed later in the results section. Computational details All the calculations of the present work have been performed using the Vienna ab initio Simulations package (VASP, 5.4.4) [39][40][41] together with the projector-augmented wave (PAW) method [42,43]. Structural relaxations employed the recommended PAW potentials supplied by VASP that provided 17, 17, 12, 14, and 6 valence electrons for the Cu, Ag, Zn, Sn, and Se atoms, respectively. The electronic band structures, the optical properties, and the many-body perturbation theory calculations based on the GW approximation made use of the PAW potentials recommended for GW calculations, and provided 19, 19, 20, 14, and 6 valence electrons for the Cu, Ag, Zn, Sn, and Se atoms instead. In order to judge the accuracy and computational efficiency of several rungs of Jacob's ladder, we employed the following exchange-correlation potentials: for the LDA functional we chose the Perdew and Zunger parametrisation [17,44], for the GGA functional we chose the Perdew, Burke, and Ernzerhof implementation revised for solids (PBEsol) [20], as meta-GGA functional we chose the relatively new SCAN functional [22], and finally as hybrid functional we chose the range-separated HSE06 functional [24], respectively. Structural relaxations of the kesterite and stannite crystal structures have been performed for the standard primitive unit cells, containing one formula unit (f.u.) for both structural polymorphs. The ground state structures have been optimised by analysing the total energy curves, which have been obtained for several volumes around the experimentally known ground state volume. Keeping these volumes fixed, the respective lattice constants and all internal coordinates have been allowed to relax until the forces on all atoms were below 0.001 eV Å -1 . While the ground state volume V 0 has been obtained by a spline fit to the total energy curves, a subsequent analysis via Murnaghan's equation of state [45,46] additionally yielded the bulk modulus B 0 and its pressure derivative B ′ 0 , respectively. A final relaxation of internal coordinates has been performed for the ground state volumes, yielding the overall ground states that have been further analysed using the AFLOW [47] and FINDSYM [48,49] packages. The obtained relaxed ground state structures served as a starting point for subsequent calculations of the electronic band structures and the real and imaginary parts of the dielectric functions, which have been obtained by summing over empty states using Fermi's Golden Rule, transition matrix elements, and applying a Kramers-Kronig transformation [50]. In order to ensure converged results the number of empty bands in the calculations of the optical properties have been increased by a factor of four. The final real and imaginary parts of the dielectric functions have been obtained by diagonalising the dielectric tensors for every energy point and averaging over the resulting main diagonal elements, as applied before to non-cubic oxide [51] and amorphous materials [52,53]. 
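Before turning to the remaining numerical settings, the following minimal sketch illustrates the equation-of-state analysis described above: Murnaghan's equation of state is fitted to a set of total-energy versus volume points to extract the equilibrium volume V0, the bulk modulus B0, and its pressure derivative B0'. The energy-volume values listed here are invented for demonstration and are not the calculated data for AZTSe or CZTSe.

```python
import numpy as np
from scipy.optimize import curve_fit

def murnaghan(V, E0, V0, B0, B0p):
    """Murnaghan equation of state E(V); B0 is in eV/Angstrom^3 when V is in Angstrom^3."""
    return E0 + B0 * V / B0p * ((V0 / V) ** B0p / (B0p - 1.0) + 1.0) - B0 * V0 / (B0p - 1.0)

# Illustrative energy-volume points per formula unit (assumed numbers, not this work's data).
volumes = np.array([170.0, 175.0, 180.0, 185.0, 190.0, 195.0, 200.0])            # Angstrom^3
energies = np.array([-19.92, -20.05, -20.12, -20.14, -20.12, -20.06, -19.97])    # eV

# Initial guesses: minimum-energy point for E0 and V0, a typical bulk modulus, B0' around 5.
p0 = (energies.min(), volumes[np.argmin(energies)], 0.5, 5.0)
(E0, V0, B0, B0p), _ = curve_fit(murnaghan, volumes, energies, p0=p0)

EV_A3_TO_GPA = 160.21766  # conversion factor from eV/Angstrom^3 to GPa
print(f"V0 = {V0:.2f} A^3/f.u., B0 = {B0 * EV_A3_TO_GPA:.1f} GPa, B0' = {B0p:.2f}")
```

The same fit can be repeated for each exchange and correlation functional, so that the trends in V0, B0, and B0' discussed in the results section follow directly from the respective total-energy curves.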
Together with the other technical parameters, a k-point grid of 6 × 6 × 6, a cut-off energy of the plane-wave expansion of 500 eV, and a convergence criterion for the total energy of 10^-6 eV, this ensured well-converged results. Due to the increased numerical demand the k-point grid has been reduced to 4 × 4 × 4 for the GW calculations. Structural properties The total energy curves for the various employed exchange and correlation functionals, normalised to one formula unit (f.u.) and rescaled to zero energy, are shown in figure 3 for the kesterite crystal structures of AZTSe (a) and CZTSe (b), respectively. The dashed vertical lines indicate the experimental ground state volumes taken from a combined x-ray and neutron powder diffraction study of Gurieva et al [2]. The respective total energy curves for the stannite crystal structures of AZTSe and CZTSe can be found in the appendix (figure A1). The obtained ground state structural properties of the kesterite and stannite crystal structures of AZTSe and CZTSe are given in tables 1 and 2 for various exchange and correlation functionals in comparison to experimental data, respectively. For both kesterite-type AZTSe and CZTSe, the LDA calculated ground state volume is severely underestimated by approximately 4%, as shown in figures 3 and 4. Deviations from the experimental ground state volumes of about 1% are obtained for the PBEsol and the SCAN functionals, with a much better agreement of the SCAN functional in the case of kesterite-type CZTSe. The hybrid functional HSE06 overestimates the experimental ground state volumes by 2%-3%, respectively. The obtained ground state volumes increase with the increasing sophistication of the employed exchange and correlation functionals (LDA, PBEsol, SCAN, and HSE06). This trend is equally observed for the lattice parameters a and c for both materials, AZTSe and CZTSe, and for both crystal structures, given for kesterite in table 1 and for stannite in table 2, respectively. The bulk modulus B0 shows the same decreasing trend for both materials and crystal structures, and its pressure derivative is mostly around 5. The fractional coordinates of the anion Wyckoff positions show a complementary trend, i.e. increasing (decreasing) for the kesterite-type 8g fractional coordinates x and y (z), and decreasing (increasing) for the stannite-type 8i fractional coordinates x (z), respectively. The performance of the various exchange and correlation functionals in describing fractional coordinates will become important in comparison to precise experimental local structure investigations, and has been shown to substantially influence the electronic band gaps in CZTS and CZTSe [54]. Electronic and optical properties Based on the obtained ground state structures we calculated the electronic band structures as outlined in section 2.3. The electronic band structures are shown in figure 5 for the kesterite crystal structures of AZTSe (a) and CZTSe (b), respectively. The respective electronic band structures for the stannite crystal structures of AZTSe and CZTSe can be found in the appendix (figure A2). In all electronic band structure figures, green and red lines depict the valence and conduction bands calculated with the hybrid HSE06 functional, whereas the shaded grey background indicates the SCAN calculated valence and conduction bands.
The dotted black lines are the results of many-body perturbation theory calculations by means of the single-shot G0W0 approximation with the HSE06 eigenvalues and orbitals as starting points. As a first approximation to the electronic band gaps of the materials, we look at the Kohn-Sham eigenvalue differences between the valence and conduction bands for the LDA, PBEsol, SCAN, and HSE06 functionals, and the quasiparticle energy differences in the case of G0W0 calculations, respectively. For kesterite-type AZTSe the electronic band gaps amount to 0.16, 0.12, 0.30, and 1.04 eV for the LDA, PBEsol, SCAN, and hybrid HSE06 functionals. The closest agreement to the experimental band gap of 1.31 eV [2] is obtained for the hybrid HSE06 calculation. Similarly, for kesterite-type CZTSe the electronic band gaps amount to 0.01, 0.02, and 0.03 eV, i.e. close to a metallic solution, and 0.87 eV for the LDA, PBEsol, SCAN, and hybrid HSE06 functionals. Again, the hybrid HSE06 calculation yields closest agreement with the experimental band gap ranging between 0.94 eV [2] and 1.0 eV [1]. The closer agreement of the calculated band gap energies with the experimental values in the case of CZTSe is possibly due to the better agreement of the obtained fractional coordinates of the anion Wyckoff position with respect to the experimental values, as given in table 1. From the band gap values and the electronic band structures in Figure 5 it becomes apparent that the simpler non-hybrid exchange and correlation functionals show the known band gap underestimation with respect to the experimental values. This severe underestimation of the electronic band gaps obtained by non-hybrid functionals is shown in figure 6. One important objective of the present work is to benchmark different computational approaches against the agreement with experimental structural and electronic properties. As for the structural properties, overall best agreement with available experimental data has been obtained by the SCAN functional, with the PBEsol functional being very close. The electronic properties, however, show best agreement for the hybrid HSE06 functional. While the present investigation deals with relatively small unit cells, where even a full structural relaxation employing the hybrid HSE06 functional is feasible, for many other possible investigations structural relaxations based on hybrid functionals become prohibitively expensive in terms of computational resources. This includes all types of investigations requiring larger supercells, e.g. formation energies of defects and defect complexes or the calculation of band offsets in modelled heterostructures. Figure 5. Electronic band structures for the kesterite crystal structures of Ag2ZnSnSe4 (a) and Cu2ZnSnSe4 (b), with zero energy at the top of the valence bands. Shown are the valence (green) and conduction bands (red), calculated using the hybrid HSE06 functional [24]. The dotted lines (· · · · · ·) and the shaded grey backgrounds show the results from the G0W0 and SCAN calculations, respectively. Over the last years it became more and more popular to perform structural relaxations of larger unit cells employing computationally cheaper functionals, and to obtain the electronic properties by means of hybrid functional calculations without further relaxing the structure, so-called single-shot or one-shot hybrid calculations. The HSE06 calculated electronic band gaps based on the SCAN optimised structures amount to 0.97 and 0.74 eV for AZTSe and CZTSe, being somewhat lower than the HSE06 values obtained for the HSE06 optimised structures of 1.04 and 0.87 eV, respectively.
Compared to the full hybrid HSE06 investigation, this approach yields the second best agreement with respect to the experimental band gaps; however, for larger unit cells it is presently the only approach available under restricted computational resources. In addition, we performed quasiparticle calculations based on the single-shot G0W0 method. For kesterite-type AZTSe and depending on the starting eigenvalues and orbitals, the G0W0 calculations yield a quasiparticle gap of 1.48, 1.47, and 1.56 eV for the plain SCAN functional, the HSE06 calculations on top of the SCAN optimised structures, and the hybrid HSE06 functional, respectively. The respective G0W0 calculations for kesterite-type CZTSe yield quasiparticle gaps of 1.43, 0.97, and 1.12 eV, with starting eigenvalues and orbitals from the plain SCAN functional, the HSE06 calculations on top of the SCAN optimised structures, and the hybrid HSE06 functional, respectively. It can be seen that, in all cases, the G0W0 quasiparticle gaps are overestimated with respect to the experimental band gaps; however, the agreement is best for the starting eigenvalues and orbitals taken from a single-shot hybrid HSE06 calculation on top of the SCAN optimised structures. In terms of a direct comparison between different theoretical approaches, the G0W0 quasiparticle gaps are enlarged by approximately 0.5 eV (0.25 eV) for kesterite-type AZTSe (CZTSe) for starting eigenvalues and orbitals taken from both the HSE06 calculations on top of the SCAN optimised structures and the hybrid HSE06 functional, respectively. The same trend is observed for the stannite-type structures. The overestimation of the quasiparticle gap of 1.43 eV with starting eigenvalues and orbitals from a SCAN calculation for kesterite-type CZTSe highlights the perturbative character of the GW method in general, and shows that the near-metallic solution of the SCAN calculation is inadequate as a starting point for subsequent quasiparticle calculations. Based on the overall best agreement with the experimental band gaps for the hybrid HSE06 calculations, we also calculated the optical properties. The real (red) and imaginary (green) parts of the dielectric function are shown in figure 7 for the kesterite crystal structures of AZTSe (a) and CZTSe (b), respectively. In the case of CZTSe, the dashed lines present experimental results of Leon et al [55], obtained via spectroscopic ellipsometry on bulk crystals. Although the overall structure of the theoretical results is much more pronounced, the important peak positions agree well with each other. G0W0 calculations of the dielectric functions show basically the same peak structure, and are only shifted to reflect the change in the band gap energies. The respective dielectric functions for the stannite crystal structures of AZTSe and CZTSe can be found in the appendix (figure A3). Conclusions In terms of a methodical overview, the present work provides a detailed first-principles investigation of the ground state structural, electronic, and optical properties of the kesterite and stannite crystal structures of AZTSe and CZTSe. The DFT calculations employed exchange and correlation functionals covering the lowest four rungs of Jacob's ladder, thereby including the LDA parametrised by Perdew and Zunger [17], the GGA with the PBE parametrisation revised for solids [20], the meta-GGA with the SCAN functional [22], and the hybrid HSE06 functional [24], respectively.
It has been shown, that overall best agreement for the ground state structural properties is obtained by the SCAN functional, while the electronic and optical properties are overall best described by the hybrid HSE06 functional. Performing a single-shot HSE06 calculation on top of the SCAN functional's optimised structures still provided acceptable agreement with respect to experimental data. Subsequent quasiparticle calculations based on the GW approximation revealed a strong influence of the starting eigenvalues and orbitals, and tended to open the gaps for all considered cases. Overall best agreement has been obtained for the starting eigenvalues and orbitals stemming from a single-shot HSE06 calculation on top of the SCAN functional's optimised structures. While the presented full HSE06 structural optimisations might be suitable for the unit cells employed in the present work, they become prohibitively expensive in terms of computational time for subsequent material property investigations, e.g. defects and defect complexes or the calculation of band offsets in heterostructures. In those cases, performing a structural relaxation based on the SCAN functional followed by a single-shot HSE06 calculation for an accurate description of the electronic properties, provides a reasonable compromise of accuracy and required computational time. This approach seems to be more widely applicable and has recently been suggested for oxide perovskites [56]. Moreover, the eigenvalues and orbitals provided by single-shot HSE06 calculations on top of the SCAN functional's optimised structures, turn out to be an excellent starting point for subsequent quasiparticle calculations based on the GW approximation. In terms of the material properties, the present work provides a case study into the accuracy and expected agreement with available experimental results for the kesterite and stannite crystal structures of AZTSe and CZTSe. The identified methodical recipes will be beneficial for subsequent investigations requiring larger unit cells, e.g. defects and defect complexes or band offsets. They are equally applicable to investigations of solid solutions based on the two materials, AZTSe and CZTSe, or to get a detailed atomic scale insight into the intrinsic disorder effects. Lastly, the identified numerical recipes are expected to be transferable to similar kesterite and stannite materials as well, thereby widening the impact of the present results considerably. Conflicts of interest There are no conflicts to declare. Figure A1. Total energy curves for the stannite crystal structures of Ag2ZnSnSe4 (a) and Cu2ZnSnSe4 (b), calculated using various exchange and correlation functionals. All energies are normalised to one functional unit (f.u.) and rescaled to zero energy, respectively. Figure A2. Electronic band structures for the stannite crystal structures of Ag2ZnSnSe4 (a) and Cu2ZnSnSe4 (b), with zero energy at the top of the valence bands. Shown are the valence (green) and conduction bands (red), calculated using the hybrid HSE06 functional [24]. The dotted lines (· · · · · ·) and the shaded grey backgrounds show the results from the G0W0 and SCAN calculations, respectively.
2020-10-28T19:07:52.422Z
2020-11-13T00:00:00.000
{ "year": 2020, "sha1": "4c14b3c0330be17a2e941d7d149c61ea0bb72055", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/2515-7655/abc07b", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "08cd698e275cc48e68c924347070e3887da579fb", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Physics" ] }
236155211
pes2o/s2orc
v3-fos-license
Pinning and gyration dynamics of magnetic vortices revealed by correlative Lorentz and bright-field imaging Topological magnetic textures are of great interest in various scientific and technological fields. To allow for precise control of nanoscale magnetism, it is of great importance to understand the role of intrinsic defects in the host material. Here, we use conventional and time-resolved Lorentz microscopy to study the effect of grain size in polycrystalline permalloy films on the pinning and gyration orbits of vortex cores inside magnetic nanoislands. To assess static pinning, we use in-plane magnetic fields to shift the core across the island while recording its position. This enables us to produce highly accurate two-dimensional maps of pinning sites. Based on this technique, we can generate a quantitative map of the pinning potential for the core, which we identify as being governed by grain boundaries. Furthermore, we investigate the effects of pinning on the dynamic behavior of the vortex core using stroboscopic Lorentz microscopy, harnessing a new photoemission source that accelerates image acquisition by about two orders of magnitude. We find characteristic changes to the vortex gyration in the form of increased dissipation and enhanced bistability in samples with larger grains. In this work, we investigate the influence of the microcrystalline structure of permalloy thin films on the pinning of vortex cores by correlating the grain structure in bright-field images to the magnetic configuration in Lorentz micrographs. In order to obtain maps of pinning sites, we developed TRaPS (TEM Rastering of Pinning Sites). This procedure locates defects by laterally shifting the vortex core across a nanostructure and imaging its position with high resolution. This allows us to directly calculate a quantitative two-dimensional representation of the pinning potential. Moreover, we use time-resolved Lorentz microscopy with an improved photoemission source to assess the effect of grain size on the core gyration. We find that annealing leads to a more corrugated pinning potential and larger average distances between pinning sites. Our findings suggest preferential pinning at grain boundaries and vortex orbits that avoid particularly large grains, demonstrating the combined strengths of correlated and in-situ magnetic and structural characterization. A. Sample System The sample system we investigate is a magnetic vortex confined in a square permalloy (Ni 80 Fe 20 ) nanoisland [1,2]. A magnetic vortex is a flux-closure type domain configuration which is predominantly oriented in-plane and curls either counter-clockwise (c = +1) or clockwise (c = −1) around its central core. At the core region, the magnetization rotates out-of-plane and, either points up or down, said to have a polarization of p = +1 or p = −1, respectively [2]. A schematic representation of the sample is depicted in Figs. 1 a,b (light and scanning-electron images of the sample are in Supp. Fig. 4). The permalloy square has a thickness of 30 nm and an edge length of 2 µm. To electronically excite the sample, we overlap 100 nm thick gold contacts on two opposing sides of the square, which extend to wire-bonding pads. The nanoisland is positioned at the center of a 15 µm × 15 µm large amorphous silicon nitride window. At a thickness of 30 nm the window is near electron-transparent and is supported by a 200 µm thick silicon frame. 
The sample fabrication process involves electron-beam lithography using a positive-tone electron resist as well as electron-beam and thermal evaporation for the deposition of permalloy and gold, respectively. We take special care to remove any resist prior to metal deposition by subjecting the developed sample to a short oxygen plasma. The contacts on either side of the microstructure allow us to excite the vortex with in-plane RF-currents, forcing its core on an elliptical trajectory [40][41][42]. This gyrotropic motion is a consequence of a combination of spin-transfer torques and current-induced Oersted fields [41,43]. Generally, these systems allow for a resonance frequency between 10 MHz and 10 GHz, depending on the material and nanostructure size. For the parameters of the current sample, we expect resonance frequencies around 100 MHz [37]. To identify the effects of the nanocrystalline structure on the pinning of the vortex core, we investigate one annealed sample A as well as two non-annealed samples N1 and N2. We prepared all three samples under identical conditions on the same silicon frame. Annealing is carried out by heating a single nanostructure using a low-frequency alternating current and monitoring the progress via a resistance measurement (for further details, see Suppl. Note 2).

B. TRaPS

We introduce "TEM Rastering of Pinning Sites" (TRaPS) as a method to identify and locate static magnetic pinning sites using Fresnel-mode Lorentz microscopy [44,45].

FIG. 1. (a) Sketch of the TRaPS measurement: The magnetic nanoisland (dark grey) is illuminated with a continuous collimated electron beam (green) and subjected to an external magnetic field Hz. Tilting the sample along the two tilt axes αx and αy enables us to move the vortex core within the nanostructure. (b) Sketch of the time-resolved experimental setup: Here, the nanoisland is imaged using a pulsed electron beam at a repetition rate of frep and excited with a synchronized alternating current at an integer multiple of the frequency of fex = n · frep. (c) Four micrographs illustrating a TRaPS measurement. The vortex core is scanned horizontally back and forth across the island using αx, while αy is gradually increased to vertically offset the core in between scans, as is illustrated by the blue arrows. (d) Example of four time-resolved Lorentz micrographs acquired at different time delays between exciting current and the pulsed electron beam. The bright lines indicate the position of the domain walls within the nanostructure, with their intersection marking the location of the vortex core. Between frames, the core is visibly displaced and moving on a counter-clockwise trajectory.

For our sample, this imaging method visually highlights the position of the four domain walls as bright lines (cf. Fig. 1 c). At their intersection, a peak is formed, marking the position of the vortex core. To perform a TRaPS measurement, we shift the core across the film in a rasterized fashion using in-plane magnetic fields and record Lorentz micrographs at each raster step. A repeated occurrence of the core at the same position can then indicate the location of a pinning site. We generate the in-plane magnetic field components Hx, Hy along the x and y directions of the sample by applying an out-of-plane field Hz and tilting the sample along the two tilt axes αx, αy, as indicated in Figs. 1 a,c. For sufficiently small angles, the resulting field components are (Hx, Hy) ≈ Hz · (αy, αx).
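To make this tilt-to-field conversion concrete, the following minimal Python sketch evaluates (Hx, Hy) ≈ Hz · (αy, αx) in the small-angle approximation, using the out-of-plane field of 35.8 kA/m and the maximum tilt of 2.2° quoted for the TRaPS scans further below. The function name and print-out are illustrative only and are not taken from the original acquisition software.

```python
import numpy as np

MU_0 = 4 * np.pi * 1e-7  # vacuum permeability in T*m/A

def tilt_to_inplane_field(alpha_x_deg, alpha_y_deg, H_z=35.8e3):
    """Small-angle conversion of the two sample tilts (in degrees) into
    in-plane field components (in A/m): (H_x, H_y) ~ H_z * (alpha_y, alpha_x)."""
    alpha_x = np.deg2rad(alpha_x_deg)
    alpha_y = np.deg2rad(alpha_y_deg)
    return H_z * alpha_y, H_z * alpha_x  # (H_x, H_y)

# Largest tilt used in the raster scans (2.2 degrees) about the primary axis:
H_x, H_y = tilt_to_inplane_field(alpha_x_deg=2.2, alpha_y_deg=0.0)
print(f"H_x = {H_x:.0f} A/m, H_y = {H_y:.0f} A/m "
      f"(mu_0 * H_y = {MU_0 * H_y * 1e3:.2f} mT)")
```

For the maximum tilt this corresponds to an in-plane field of roughly 1.4 kA/m (about 1.7 mT), i.e. small compared to the out-of-plane field that generates it.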
The out-of-plane magnetic field H z is generated by weakly exciting the main objective lens of the microscope while using the objective mini-lens for imaging. C. Time-resolved Lorentz microscopy Time-resolved Lorentz microscopy is carried out at the Göttingen ultrafast transmission electron microscope (UTEM), a modified JEOL 2100F transmission electron microscope (TEM) featuring a high-brightness photoemission electron source [37,46]. For the present study, we equipped the UTEM with a radio-frequency (RF)generator that is electronically synchronized to the photoemission laser, using a methodology introduced in Ref. [37]. One output of the RF-generator (Keysight 81160A) feeds into a custom TEM holder that enables TEM imaging under in-situ current excitation with frequencies up to the GHz regime. A second output triggers the photoemission laser, a gain-switched diode laser (custom Onefive Katana 05-HP) operating at a wavelength of 532 nm. With its continuously variable repetition rate of f rep = 20 − 80 MHz and a pulse duration of 35 ps, this laser enables us to increase the electron-pulse duty cycle by more than two orders of magnitude compared to previous studies [37]. Thus, we can reduce the image acquisition time of a single stroboscopic micrograph from several minutes to a few seconds and compile much larger data sets that allow deeper insights into the vortex gyration. We compile time-resolved movies of the vortex dynamics by exciting it at frequencies that are integer multiples of the laser repetition rate f ex = n · f rep and by incrementally changing the excitation phase between frames (see Fig. 1 b,d). D. Bright-field imaging and vortex core localization To assess the nanocrystalline structure of our samples, we recorded bright-field images in low-magnification mode (Figs. 2 a,b) by filtering out scattered parts of the beam using an aperture. This allows us to compare the vortex core positions we find in time-resolved and TRaPS measurements to the grain structure of the films. Therefore, we track the core in the Lorentz micrographs and map the results on top of the bright-field images. The tracking process involves calculating the center-of-mass of the pixels corresponding to the bright peak at the core position, which we identify as the largest pixel cluster above an intensity threshold. This method enables a core localization with few-nanometer precision [37]. To map the core positions onto the bright-field images, we use a geometric transformation [47], which is derived from the location of easily identifiable image features in both Lorentz and bright-field images. Using this approach, we can translate position information between both reference frames. A. TRaPS Measurements TRaPS measurements are performed on the nonannealed sample N1 and on sample A. In their brightfield images (Figs. 2 a,b), the bright and dark regions surrounding the grey permalloy nanostructure are bare silicon-nitride and the opaque gold contacts, respectively. A thin residue of permalloy, stemming from a partially coated undercut of the electron beam resist, is faintly vis-ible at the top and bottom of the squares and can also be seen in the Lorentz images (e.g. Fig. 1 d). Contrast variations in the films arise from spatially varying diffraction conditions of differently oriented grains and reveal a significant increase in grain size in the annealed sample (cf. Figs. 2 c,d). 
This change in grain size is less significant in the vicinity of the gold contacts as these locally increase the thermal coupling, resulting in an inhomogeneous temperature profile during the annealing process. For the TRaPS measurements, we apply an external field of H z = 35.8 kA m −1 and vary both tilt axes in a range from −2.2°to 2.2°in increments of 0.2°. Along the primary tilt direction α x the samples are tilted back and forth once for every tilt position of the secondary axis α y . Additionally, each sweep of α x includes an approach step at either α x = 3.0°or −3.0°. Supplementary Movies 1 and 2 show the complete TRaPS measurements of either sample, with some example micrographs of sample N1 presented in Fig. 1 c. As an example, we marked the path of the core during one sweep of α x in Figs. 2 c,d, where we see that the core does not move in straight lines but rather zigzags and occasionally gets trapped. All tracked positions are marked with dots in Figs. 2 e,f where they are plotted on top of the bright-field image of the corresponding region. Due to the small size of the grains in sample N1, there is no apparent visual correspondence between the nanocrystalline structure and the core positions (Fig. 2 e). However, for the annealed sample A, we find numerous pinning sites located directly at boundaries of larger grains (see arrows Fig. 2 f). Furthermore, it stands out that the core never resides within one of the large grains. Both observations clearly demonstrate that grain boundaries can pin vortex cores in polycrystalline films. While this behavior was suspected before [29], it has, to our knowledge, never been directly observed. To assess the accuracy of our measurement technique, we repeat the tracking and the transform calculation for a second set of experimental conditions on sample A. The results presented in Fig. 2 d are acquired with a clockwise curl (c = −1), necessitating underfocused imaging conditions to achieve a bright spot at the core position [48]. For the second measurement, we altered the domain state to a counter-clockwise curl (c = +1), and acquire overfocused Lorentz images (see Suppl. Movie 3). This second set of tracking data was likewise correlated with the bright-field image as shown in Fig. 3 a, and is in excellent agreement with the previous results. This is particularly evident when we plot both data sets together in the same reference frame as in Fig. 3 b, where the only discernible difference is a minor lateral displacement of about 6 nm. In addition to confirming that TRaPS allows for accurate localization of pinning sites, this comparison also demonstrates that the underlying process is independent of the vortex curl. To further analyze our data, we calculate the distance d traveled by the vortex between two consecutive steps of a TRaPS scan. Fig. 4 b shows histograms of the jump distances for both samples (the initialization steps at α x = ±3°are not included in these statistics). The most prominent feature in both distributions is a strong peak at small distances d < 8 nm. For these distances, we can assume that the vortex has remained at the same pinning site, and as we would expect, this happens more frequently for sample A. In contrast, we identify substantial differences between both samples for distances d > 8 nm: Firstly, the vortex in the annealed sample rarely moves by distances in the range of 8 nm to 20 nm, and secondly, it more frequently jumps over longer dis- tances beyond 40 nm, which evidently is a result of larger grain sizes. 
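As a rough illustration of this analysis, the sketch below combines the core-localization step from the methods section (intensity threshold, largest connected pixel cluster, center of mass) with the computation of the jump distances d between consecutive raster steps. It is a minimal reimplementation using NumPy/SciPy; the threshold choice, bin width, and function names are illustrative and not taken from the original analysis code.

```python
import numpy as np
from scipy import ndimage

def locate_core(image, threshold=None):
    """Vortex-core position as the center of mass of the largest pixel
    cluster above an intensity threshold (returns (row, col) in pixels)."""
    if threshold is None:
        threshold = image.mean() + 3 * image.std()  # illustrative choice
    mask = image > threshold
    labels, n_clusters = ndimage.label(mask)
    if n_clusters == 0:
        return None
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n_clusters + 1))
    largest = int(np.argmax(sizes)) + 1
    return ndimage.center_of_mass(image, labels, largest)

def jump_distances(positions_nm):
    """Distances d travelled by the core between consecutive raster steps."""
    p = np.asarray(positions_nm, dtype=float)
    return np.linalg.norm(np.diff(p, axis=0), axis=1)

# Usage (hypothetical variable names): track every frame of a TRaPS scan,
# convert from pixels to nanometres, then histogram the step sizes.
# positions_px = [locate_core(frame) for frame in lorentz_stack]
# d = jump_distances(np.array(positions_px) * nm_per_pixel)
# counts, edges = np.histogram(d, bins=np.arange(0, 120, 4))
```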
For comparison, we also compute the average grain size in both samples using the full-width-half-maximum of the radial autocorrelation function [49,50] [51]. From the autocorrelation functions in Fig. 4 we estimate grain sizes of 4 nm and 10 nm for sample N1 and A, respectively. This demonstrates that the typical jump distances span up to several grain diameters, suggesting that not every grain boundary causes effective pinning. B. Quantifying the Pinning Potential In order to extract the pinning energy landscape from our TRaPS measurement, we compare experimental with simulated TRaPS data. Therefore, we model the movement of the vortex core in a global quadratic plus a local pinning potential, as illustrated in Fig. 5 a. By scanning the quadratic potential across the disorder potential, we replicate how the core is trapped and moves between pinning sites. We define the quadratic potential using the rigid vortex model [52], for which the energy of the vortex domain configuration is expressed in terms of the core position (X, Y ) and the external magnetic field Here, k is the stiffness factor of the quadratic potential, and χ is the displacement susceptibility. For our sample geometry, we calculate a stiffness factor of k = 1.63 × 10 −3 eV nm −2 using micromagnetic simulations [53]. The simulation is performed with a 512 × 512 × 4 cell geometry, an exchange coupling of A ex = 1.11 × 10 −11 J m −1 [54], and a saturation magnetization of M S = 440 kA m −1 [55]. The equilibrium core position, i.e., the minimum of E quad , is given by Here, we replace χ with a modified displacement susceptibilityχ = χH z , which specifies the core movement per tilt angle. By fitting Eq. 2 to our data, we findχ to be 98 nm/°and 108 nm/°for sample N1 and A, respectively. This is in good agreement with the micromagnetic simulation, which predicted a value ofχ = 123 nm/°. To simulate the behavior of the pinning sites, we place a Gaussian-potential dip with depth E pin and width σ pin at every position (X i,exp , Y i,exp ) tracked in the TRaPS measurement (see Fig. 5a). This ensures a deeper and/or broader potential in regions where the core is encountered more frequently. The pinning potential is hence given by where 2 is the distance to the core position measured at tilt step i of the TRaPS measurement. The total potential is thus E total = E quad + E pin,all . In the course of a single simulation we set the same sequence of tilt angles and, at each tilt step i, find the next local minimum (X i,sim , Y i,sim ) of E total in the direction of steepest descent. The starting point of this minimization is the simulated minimum from the previous tilt step i−1, just like in the experiment. This leaves us with a set of simulated core positions from which we can calculate the median radial deviation Lastly, we find the combination of E pin and σ pin (the only free parameters in our model) that minimizes r med and thus best represents our experimental data. Figure 5 b shows the median radial deviation for all simulations based on the TRaPS measurement of sample N1. It is minimum at E pin = 80 meV and σ pin = 11 nm. In case of sample A, we obtain similar values of E pin = 90 meV and σ pin = 12 nm, which results in an increase of the integrated pinning potential of about 30 %. The simulated core positions are in good agreement with experimental data, as we present in Suppl. Fig. 7, together with images of the pinning potential. A three-dimensional representation of E pin,all of sample N1 is given in Fig. 5 c. 
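The exact energy expressions are not reproduced here, so the following sketch only illustrates the structure of the model described above: a quadratic confinement of stiffness k = 1.63 × 10^-3 eV nm^-2 centred at a tilt-dependent equilibrium position, Gaussian dips of depth E_pin and width σ_pin placed at every experimentally tracked core position, and a steepest-descent search for the next local minimum starting from the previous one. The concrete functional forms (an isotropic parabola and isotropic Gaussians), the gradient-descent parameters, and all names are assumptions for illustration, not the authors' code.

```python
import numpy as np

K_STIFF = 1.63e-3   # eV / nm^2, stiffness factor (from the micromagnetic simulation)
E_PIN   = 0.08      # eV, Gaussian dip depth (value found for sample N1)
SIGMA   = 11.0      # nm, Gaussian dip width  (value found for sample N1)

def e_total(r, r0, sites):
    """Assumed total energy: isotropic parabola centred at the tilt-dependent
    equilibrium position r0, plus Gaussian dips at the tracked core positions.
    r and r0 are 2-vectors in nm; sites is an (N, 2) array."""
    e_quad = 0.5 * K_STIFF * np.sum((r - r0) ** 2)
    d2 = np.sum((sites - r) ** 2, axis=1)
    e_pin = -E_PIN * np.sum(np.exp(-d2 / (2 * SIGMA ** 2)))
    return e_quad + e_pin

def next_minimum(r_prev, r0, sites, step=200.0, eps=1e-3, tol=1e-5, n_max=20000):
    """Steepest-descent walk from the previous core position to the next
    local minimum of e_total (step size and tolerances are illustrative)."""
    r = np.array(r_prev, dtype=float)
    for _ in range(n_max):
        grad = np.array([
            e_total(r + [eps, 0.0], r0, sites) - e_total(r - [eps, 0.0], r0, sites),
            e_total(r + [0.0, eps], r0, sites) - e_total(r - [0.0, eps], r0, sites),
        ]) / (2 * eps)
        if np.linalg.norm(grad) < tol:
            break
        r -= step * grad
    return r

def median_radial_deviation(simulated, measured):
    """Median distance between simulated and measured core positions."""
    diff = np.asarray(simulated, dtype=float) - np.asarray(measured, dtype=float)
    return np.median(np.linalg.norm(diff, axis=1))
```

In the actual analysis, the tilt-dependent equilibrium position follows from the modified displacement susceptibility of roughly 100 nm/° reported above (the axis assignment between tilt and displacement direction is omitted here), and E_pin and σ_pin are the two free parameters varied until the median radial deviation between simulated and measured positions is minimal.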
It has a roughness, estimated by the standard deviation of the potential, of 193 meV and measures −2.0 eV at its deepest point. For the annealed sample, the respective values are 281 meV and −2.4 eV. This analysis demonstrates that polycrystalline samples with larger grains show an overall increase in their pinning potential, with an increased roughness and deeper minima. Furthermore, we note that the demonstrated method is capable of quantifying pinning potentials down to sub-100 meV and spanning multiple orders of magnitude.

C. Time-resolved Trajectories

Besides studying the static interaction of the vortex core with pinning sites, we also probed how the dynamic vortex gyration is affected by the nanocrystalline configuration. We therefore perform time-resolved measurements on the non-annealed sample N2 and the annealed sample A, where we excite the permalloy square with an alternating current forcing the vortex core on an elliptical trajectory. We recorded this motion in sample N2 for excitation frequencies from fex = 86 MHz to 99.5 MHz at a current density of j = 3.6 × 10^10 A rms m^−2 and in case of sample A for frequencies from 72 MHz to 84 MHz at j = 8.2 × 10^10 A rms m^−2. The higher currents and lower frequencies in the case of sample A were only necessary after annealing, whereas before, we were able to observe core gyration at similar excitation parameters as for sample N2. To measure and ensure a constant excitation current throughout the frequency range, we monitor the sample current IS using an oscilloscope (for details, see Suppl. Note 1). At each frequency, we record up to 60 micrographs and incrementally increase the phase between the RF-current and the probing electron beam to cover the whole excitation period. These micrographs are combined into Supp. Movies 4 and 5, and show the time-resolved gyration together with the tracked core positions. Figures 6 a,b show the tracked core trajectories overlaid on bright-field images of samples N2 and A, respectively. Similar to our TRaPS measurements, we can also identify grains in the annealed sample in Fig. 6 b that appear to be avoided by the core, however, less conclusively than in the static case. We marked two of these grains with arrows in Fig. 6 b. Furthermore, we find discontinuous jumps of the vortex position upon cycling the excitation phase, where it appears to switch between two or more equally favorable trajectories. Interestingly, this behavior is encountered much more regularly for the annealed sample, both at a larger number of frequencies and during a measurement at a given frequency (see Supp. Figs. 5 and 6). Most likely, the jumps between multistable orbits are triggered by the sudden (yet small) change of the excitation phase between time-resolved micrographs. This multistability can be considered the dynamic counterpart of stochastic switching between bistable static pinning sites [24,30,56]. To further evaluate the trajectories, we fit them to an ellipse and determine the mean of both semiaxes r [57]. Figure 7 shows the resulting radii r divided by the sample current IS and plotted against the normalized frequency fex/fr.

FIG. 7. Excitation-current-normalized radius r/IS as a function of normalized frequency fex/fr. For some frequency ranges the orbit radius stays constant upon changing the excitation frequency (arrows).

The graphs reveal a resonance frequency of the gyration of fr = 90.5 MHz and 76 MHz for sample N2 and A, respectively. Here, two observations stand out. First, the radius of the trajectories does not change continuously in size as a function of frequency fex (see also Refs.
[37,42]), suggesting hysteretic behavior. Instead, we find plateaus, for which trajectories cluster at certain radii (marked with arrows in Fig. 7), clearly indicating an orbital stabilization by the pinning potential. Secondly, we find that the current-normalized radius r/I S is significantly smaller and exhibits a broader resonance in the case of the annealed sample A, which is a distinct sign of enhanced dissipation in this sample. These two observations, together with enhanced multistability in the annealed sample, demonstrate that grain sizes have an important influence on the dynamic behavior of gyrating vortices. IV. CONCLUSION Nanocrystallinity and surface roughness have long been linked to the pinning of vortices in soft-magnetic films [21,27,29]. The direct real-space identification of grain boundaries as effective pinning sites for the core was enabled by the TRaPS method introduced in this study. The correlation of structural and magnetic imaging in electron microscopy can be further developed to trace the microscopic origins of pinning down to the atomic scale, combining high-resolution (scanning) TEM with holography [58,59] or differential phase contrast [33,60]. The joint high spatial and temporal resolution of our approach will be critical to explore transient pinning and local damping effects, while the quantitative TRaPS potential will serve as input for future theoretical studies on driven vortex dynamics. The global ansatz for the calculation of the trapping potential based on common properties of pinning sites may be further refined by taking into consideration characteristics of individual defects. The observation of increased roughness, deeper traps and enhanced bistability for samples with larger grains may become relevant for device fabrication and in the tailoring of annealing processes to mitigate or selectively enhance pinning. Finally, on the methodical side, the two orders of magnitude increase in time-averaged brightness of the photoemission source will have immediate benefits in picosecond stroboscopic imaging of ultrafast dynamics also beyond magnetism, including nanoscale structural and electronic phenomena.
2021-07-22T01:16:25.733Z
2021-07-21T00:00:00.000
{ "year": 2022, "sha1": "26344d0a27491a9e8943099aac4353a11c5acf03", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "26344d0a27491a9e8943099aac4353a11c5acf03", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
52822075
pes2o/s2orc
v3-fos-license
Multi-organ spreading of Actinobacillus pleuropneumoniae serovar 7 in weaned pigs during the first week after experimental infection Actinobacillus (A.) pleuropneumoniae is normally considered strictly adapted to the respiratory tract of swine. Despite this, scattered case reports of arthritis, osteomyelitis, hepatitis, meningitis or nephritis exist, in which A. pleuropneumoniae remained the only detectable pathogen. Therefore, the aim of this study was to investigate whether spreading to other organs than the lungs is incidental or may occur more frequently. For this, organ samples (blood, liver, spleen, kidney, tarsal and carpal joints, meninges, pleural and pericardial fluids) from weaners (n = 47) infected experimentally with A. pleuropneumoniae serovar 7 by aerosol infection (infection dose: 10.9 × 103 cfu/animal) were examined by culture during the first week after infection. In addition, tissue samples of eight weaners were examined by histology and immunohistochemistry (IHC). A. pleuropneumoniae was isolated in all examined sample sites (86.7% pleural fluids, 73.3% pericardial fluids, 50.0% blood, 61.7% liver, 51.1% spleen, 55.3% kidney, 14.9% tarsal joints, 12.8% carpal joints, 27.7% meninges). These results were also obtained from animals with only mild clinical symptoms. IHC detection confirmed these findings in all locations except carpal joints. Histological examination revealed purulent hepatitis (n = 2), nephritis (n = 1) and beginning meningitis (n = 2). Isolation results were significantly correlated (p < 0.001) with the degree of lung colonization and, to a lower extent, with the severity of disease. Detection of A. pleuropneumoniae in peripheral tissues was significantly correlated to spleen colonization. In conclusion, multi-organ spreading of A. pleuropneumoniae serovar 7 strain AP 76 seems to occur more frequently during acute infection following effective lung colonization than previously thought. Introduction Actinobacillus (A.) pleuropneumoniae, a Gram-negative bacterium belonging to the family Pasteurellaceae, is one of the most important respiratory tract pathogens in the pig industry. A. pleuropneumoniae is distributed worldwide and considered a primary pathogen that can cause severe respiratory disease. It affects the animal's welfare due to peracute to chronic diseases, including severe fibrino-haemorrhagic and necrotizing pleuropneumonia with an increase in mortality [1,2]. Therefore, it leads to severe economic losses in the pig industry which are due to the loss of animals, and costs of disease treatment as well as reduced performance during the chronic course of disease resulting in a prolonged fattening period of the animals [3,4]. The severity of disease depends on several factors, including the infecting serovar, infection dose, co-infections, immune status and genetic background of the animal, as well as environmental factors [5][6][7]. In addition to its primary role in porcine pleuropneumonia A. pleuropneumoniae is also often involved in the development of the porcine respiratory disease complex, a co-infection of the respiratory tract of pigs [8][9][10]. In contrast to certain other porcine respiratory pathogens, e.g., Haemophilus (H.) parasuis [10,11], A. pleuropneumoniae is considered a non-invasive lung pathogen [12] which solely affects lungs and pleura by spreading from the infected lung to the pleura via lymph vessels or oedematous fluid [13]. A bacteraemia is very rare [12,13]. 
Despite its non-invasive character there are several case reports about non-respiratory clinical diseases in which A. pleuropneumoniae was the only detectable pathogen. These reports include cases of fibrino-purulent arthritis and necrotizing osteomyelitis [14], granulomatous hepatitis [15], meningitis and, nephritis [16] as well as endocarditis and fibrinous peritonitis [17]. These recurring case reports prompted us to find out if A. pleuropneumoniae can be detected more frequently within different body tissues during the acute phase of disease. For this, we investigated possible spreading of A. pleuropneumoniae to body tissues other than the lungs in experimentally infected weaners. Animals and housing conditions To investigate the possible spreading of A. pleuropneumoniae to other organs than lungs forty-seven weaners experimentally infected with A. pleuropneumoniae serovar 7 from another infection trial were chosen by simple randomization. This infection trial was originally conducted in the context of evaluation of genetically determined susceptibility of pigs towards the pathogen. The pigs were kept in accordance with the Guidelines for Protection of Vertebrate Animals used for experimental and other Scientific Purposes, European Treaty Series, nos. 123/170. Study setup and housing of the animals were approved by an independent local committee on ethics. The pigs were fed a commercial standardized diet and were provided 2.5 kg of hay flakes per group ( ® AGROBS Pre Alpin Wiesenflakes, Co. AGROBS, Degerndorf, Germany) per day as material for rooting and manipulation. The group size was 8 to 10 pigs on 8 m 2 . Ambient temperature was 27.5 °C ± 1.9 °C (MEAN ± SD) and humidity was 33.2% ± 12.5% (MEAN ± SD). Within each pen 2.8 m 2 were covered with a rubber mat, heated by two infrared lamps, for bedding area. Allocation to the housing groups was also performed by simple randomization. Drinking quality water was constantly available. The pigs arrived 3 weeks prior to infection for an adequate adaptation to diet and new environment. Experimental infection Prior to experimental infection all pigs underwent a general clinical examination and bronchoalveolar lavage fluids (100 mL 0.9% NaCl-solution ad us. vet., Co. WDT, Garbsen, Germany) as well as serum samples were taken. The bronchoalveolar lavage fluid was examined by bacteriological culture and PCR [18] for the absence of A. pleuropneumoniae. Bronchoalveolar lavage was performed under general anesthesia with 20 mg/kg Ketamine i.m. (Ketamin 100 mg/mL ® , Co. CP-Pharma, Burgdorf, Germany) and 2 mg/kg Azaperon i.m. (Stresnil ® , Co. Janssen-Cilag GmbH, Baar, Switzerland). Serum samples were analyzed by ApxIV-ELISA (IDEXX APP-ApxIV Ab Test ® , Co. IDEXX Laboratories, Maine, USA). Only pigs confirmed as clinically healthy and negative for A. pleuropneumoniae by direct and indirect screening procedures were included for infection. At the time of infection the pigs were 7 weeks of age and had a mean bodyweight of 12.6 kg ± 2.1 kg. The experimental infection was done by aerosol infection with a total exposure time of 30 min [6]. For this, approximately 1 × 10 5 bacteria of A. pleuropneumoniae AP76 serovar 7 were nebulized resulting in an aerosol concentration of 1 × 10 2 colony forming units (cfu) per liter aerosol. Based upon a mean tidal volume of pigs of 9 mL/kg BW [19] the mean infection dose inhaled per pig was 10.9 × 10 3 cfu. The group size for infection was five to six pigs. 
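As a quick plausibility check of the stated mean inhaled dose, the snippet below multiplies the aerosol concentration by an estimated inhaled air volume. All numbers except the respiratory rate are taken from the text; the respiratory rate is not reported, and the value used here is only an assumed, physiologically plausible figure for weaners of this age that happens to reproduce the stated dose.

```python
# Plausibility check of the mean inhaled dose (~10.9 x 10^3 cfu per pig).
aerosol_conc_cfu_per_l = 1.0e2   # cfu per litre of aerosol (from the text)
exposure_min = 30                # total exposure time in minutes (from the text)
tidal_volume_ml_per_kg = 9.0     # mean tidal volume per kg body weight (from the text, ref. [19])
body_weight_kg = 12.6            # mean body weight at infection (from the text)
breaths_per_min = 32             # ASSUMPTION: respiratory rate, not stated in the text

tidal_volume_l = tidal_volume_ml_per_kg * body_weight_kg / 1000.0   # ~0.113 L per breath
inhaled_volume_l = tidal_volume_l * breaths_per_min * exposure_min  # ~109 L in 30 min
dose_cfu = aerosol_conc_cfu_per_l * inhaled_volume_l                # ~1.09 x 10^4 cfu

print(f"estimated inhaled dose ~ {dose_cfu:.2e} cfu per pig")
```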
After infection the pigs were clinically monitored every 2 h within the first 48 h after infection. Subsequently, the clinical examination was done twice a day. Exit criteria for euthanasia of the pigs were defined to minimize the suffering of the infected animals. Exit criteria were as follows: a respiratory score (Table 1) of 3; a depression score (Table 1) of 3; a rectal body temperature > 42.0 °C; a respiratory or depression score greater than 1 and a body temperature < 37.5 °C; a body temperature > 40.3 °C and/or a depression score and respiratory score > 1 for more than two consecutive days; a body temperature > 40.3 °C and/or depression score and respiratory score > 0 for more than two consecutive days from day three post infection onwards; any unpredictable event, reaction to treatment or disease leading to a moderate reduction of general condition or inducing pain for more than 48 h; any unpredictable event, reaction to treatment or disease leading to severe reduction of general condition or inducing pain. Clinical investigation The monitoring of the clinical signs consisted of an assessment of posture, behavior, feed intake, rectal body temperature, vomiting, breathing noise, dyspnea, respiratory frequency, coughing and skin color. The results of the clinical examinations were transferred to an objective clinical scoring system [20]. According to this scoring system the pigs were assessed as non-diseased when showing scoring points ≤ 1.3; as mildly diseased when showing scoring points > 1.3 ≤ 12.5; as moderately diseased with scoring points > 12.5 ≤ 23.7 and as severely diseased when reaching scoring points > 23.7 on day seven post-infection (pi). Necropsy and bacteriological examination Seven days post infection (or earlier in case of withdrawal due to exit criteria) the pigs were euthanized by intravenous application of 80 mg/kg pentobarbital (Euthadorm ® , Co. CP Pharma GmbH, Burgdorf, Germany). Necropsy was performed and the degree of lung lesions was assessed using a lung lesion score [21]. For this score a schematic map of the lung is divided into triangles. According to the assessed lung lesions triangles are marked. Each lung lobe can reach a maximum score of 5 resulting in an overall maximum score of 35. The degree of lung lesions was classified as mild with scoring points > 0 ≤ 11.6; as moderate with scoring points > 11.6 ≤ 23.2 and as severe when reaching scoring points > 23.2. For the isolation of A. pleuropneumoniae tissue samples were taken from 13 different locations. In total seven lung tissue samples of defined positions located in the outer third of each of the seven lung lobes, three central organ samples including liver, spleen and left kidney and three peripheral tissue samples including swabs from the meninges as well as the carpal and tarsal joints of the animals were collected. All tissue samples had a size of approximately 1 cm 2 . In cases where there were no macroscopic lesions in the defined locations of the lung lobes but in other parts of the lung lobes additional samples from the macroscopically altered regions were taken. Tissue samples and swab samples were plated on A. pleuropneumoniae-selective blood agar [22] using the quadrant streaking method. Abundance of growth was assessed semi-quantitatively. Additionally from 15 pigs pleural and pericardial exudates were sampled and evaluated regarding volume and number of viable A. pleuropneumoniae cells. The second was determined by plating tenfold serial dilution on A. 
pleuropneumoniae-selective blood agar. Bacterial isolates were identified as A. pleuropneumoniae by PCR amplification of the apxIV gene [23]. The amount of growth for each organ or swab sample was transferred to a scoring system (0 = no growth; 1 = sparse growth; 2 = moderate growth; 3 = heavy growth). For the lung tissue samples the level of isolation from the lungs was translated into a combined isolation score [24]. For this purpose the amount of growth from all lung tissue samples was added and divided by the total number of lung tissue samples. Results were classified as low-grade isolation level (score 0-1), moderate (> 1 and ≤ 2) and high-grade (> 2-3). Additionally, blood samples were taken from the Vena jugularis at the time of euthanasia from 26 pigs. These blood samples were inoculated into SIGNAL blood culture systems (SIGNAL blood culture system®; Co. Oxoid, Basingstoke, Hampshire, UK) and bottles with medium in the growth indicator device were subcultured for A. pleuropneumoniae. From 23 of these pigs, 0.5 mL of the blood sample was also directly plated on A. pleuropneumoniae-selective blood agar [22] using the quadrant streaking method. For these directly plated blood samples the abundance of growth was assessed semi-quantitatively like for the other tissue samples.

Histological examination and immunohistochemistry

A total of eight pigs were chosen randomly for additional histological examination and immunohistochemical (IHC) investigation. Tissues collected at necropsy for these investigations included liver, spleen, pericardium, peritoneum, kidney, cerebrum, and carpal and tarsal synovialis. Tissues were fixed by immersion in 10% neutral buffered formalin for 1-7 days and routinely processed in an automated tissue processor, embedded in paraffin, sectioned at 3 µm, and stained with hematoxylin and eosin.

[Table 1, fragment — depression score descriptions in increasing severity: calm, alert, decreased ingestion — dull, increased recumbence, increased reaction time, still moving to the feeding trough but without or only small feed intake, or dull, sitting like a dog, increased reaction time, still moving to the feeding trough but without or only small feed intake — apathetic, no reaction to stimulation and/or shaky movements without lying down and/or standing with head down without lying down and/or vomiting and/or foam around nostrils and mouth.]

Additional sections were mounted on poly-L-lysine-coated slides (SuperFrost Plus®; Thermo Fisher Scientific, Dreieich, Germany) for immunohistochemical evaluation. A biotin-free horse-radish-peroxidase (HRP)-polymer-based detection system (Ultravision LP HRP, Thermo Fisher Scientific, Dreieich, Germany) was used for immunohistochemical detection of A. pleuropneumoniae antigen. Antigen retrieval was achieved by immersion of the deparaffinized, rehydrated slides in preheated EDTA buffer (pH 8) for 15 min at 90 °C in a steamer. After cooling down to at least 60 °C the slides were washed 3 times in Tris buffer containing 0.05% Tween 20 (Carl Roth GmbH, Karlsruhe, Germany) at room temperature. Then they were blocked for 7 min at room temperature with Ultra V blocking solution (Thermo Fisher Scientific, Dreieich, Germany) and incubated with a mixture of two primary antibodies. These primary antibodies had been produced previously in rabbit using recombinant Outer Membrane Lipoprotein A (OmlA) of A. pleuropneumoniae serovar 1 and 5 isolates [25,26]. The ability of the anti-OmlA serovar 1 serum to detect A. pleuropneumoniae serovars 1-12 and 14 and the ability of the anti-OmlA serovar 5 serum to detect A.
pleuropneumoniae serovar 5-8 and 10 had been verified previously during western blot validation assays performed at the IVD laboratory using isolates of A. pleuropneumoniae serovars 1-15. No unspecific cross-reaction of the primary antibodies was detected in western blot and dot blot assays with Actinobacillus porcitonsillarum, A. minor, A. porcinus, A. indolicus, A. suis, Haemophilus parasuis or Pasteurella multocida. After primary antibody incubation overnight at 4 °C the slides were rinsed 3 times in Tris buffer containing 0.05% Tween 20 (TBS-T) and incubated with the secondary antibody (Primary Antibody Enhancer of the Ultravision Detection LP kit mentioned above) for 15 min at room temperature. Following 3 washing steps in TBS-T the slides were incubated with the HRP-polymer for 30 min at room temperature. After 3 washing steps in TBS-T the slides were immersed in 3% hydrogen peroxide for 10 min in order to block endogenous peroxidase. Followed by 3 washing steps in TBS-T the slides were treated with 3,3-diaminobenzidine tetrahydrochloride (DAB) chromogen plus substrate solution (Thermo Fisher Scientific, Dreieich, Germany) for 10 min, washed again in TBS-T and counterstained with hematoxylin Gill 2. Statistical methods The collected data were transferred to an Excel ® based database (Co. Microsoft Cooperation, Dublin, Ireland). Statistical analyses were carried out using Excel ® and IBM SPSS Statistics ® (Co. IBM Deutschland GmbH, Ehningen, Germany). For correlation analysis Spearman Rank Correlations were calculated. Correlations of < 0.05 were classified as significant and of < 0.01 as highly significant. Results Based on the clinical scoring, 7 of 47 infected pigs were classified as non-diseased, 21 pigs as mildly diseased, 3 pigs as moderately diseased and 16 pigs as severely diseased. Nineteen pigs had to be euthanized due to acute exit criteria and 1 pig due to chronic exit criteria. Twenty-seven pigs were euthanized on day 7 pi. The degree of gross lung lesions was mild in 13 pigs, moderate in 12 pigs and severe in 19 pigs. In 3 pigs there were no macroscopic lung lesions. For the lung tissue, isolation score of A. pleuropneumoniae was low-grade in 9 pigs, moderate in 13 pigs and high-grade in 22 pigs. A. pleuropneumoniae could not be isolated from the lungs of 3 pigs. An overview of the isolation results from other tissues than the lungs is shown in Table 2. A. pleuropneumoniae could be cultured from 50% of the blood samples by blood culture, from 43.5% of the blood samples by direct plating, from 61.7% of the liver samples, 51.1% of the spleen samples, 55.3% of the kidney samples, 14.9% of the tarsal joints swabs, 12.8% of the carpal joints swabs and 27.7% of the meningeal swabs. The pleural fluid was positive for A. pleuropneumoniae in 86.7% and the pericardial fluid in 73.3% (Table 3). The bacterium could not be reisolated from any of these locations in pigs classified as non-diseased. Within the sampled pleural fluids the volumes ranged from 1 to 230 mL and the isolation results of these samples ranged from 9.5 × 10 3 to 1.3 × 10 9 cfu. The volumes of the pericardial fluids ranged from 1 to 70 mL and here the isolation results ranged from 2 to 2.0 × 10 8 cfu. From the tissue samples isolation scores for A. pleuropneumoniae ranged from low-grade to highgrade whereas the isolation score was only low-grade (76.9%) and moderate (23.1%) from meningeal swabs and only low-grade (100%) from tarsal and carpal joint swabs. A. 
pleuropneumoniae could be cultured from all these locations not only from pigs classified as moderately or severely diseased but also from pigs classified as mildly diseased (blood: 2 pigs; liver: 15 pigs; spleen: 14 pigs; kidney: 13 pigs; tarsal swabs: 7 pigs; carpal swabs: 6 pigs; meningeal swabs: 10 pigs). Isolation was possible from animals euthanized due to exit criteria (blood: 11 pigs; liver: 16 pigs; spleen: 16 pigs; kidney: 14 pigs; tarsal joints: 5 pigs; carpal joints: 5 pigs; meninges: 9 pigs) as well as from animals euthanized 7 days pi (blood: 2 pigs; liver: 13 pigs; spleen: 8 pigs; kidney: 12 pigs; tarsal joints: 2 pigs; carpal joints: 1 pig; meninges: 4 pigs). A. pleuropneumoniae was not detected in organ samples of pigs that showed no colonisation of the lungs. It was mainly detected in tissues other than the lungs if the animals had a moderate or high-grade score of isolation from lung tissue, too. There was no detection of A. pleuropneumoniae in blood, kidney, tarsal, carpal and meningeal samples from animals with low-grade isolation from lung tissue. For liver and spleen samples only one pig with a low-grade lung isolation result was culturally positive. Of the eight animals chosen for histological and IHC examination, three were euthanized due to exit criteria and five were euthanized at day 7 pi. Two were clinically classified as non-diseased, four as mildly diseased and two as severely diseased. By IHC examination A. pleuropneumoniae was detected in all eight spleen samples (oligofocal within the red splenic pulp), five liver samples (focal to oligofocal, mainly sinusoidal and intravascular), three kidney samples (focal, intertubular, within the renal pelvis and intravascular embolism), two tarsal samples (focal as intravascular embolus), two meningeal samples (focal, leptomeningeal and pachymeningeal) and in none of the carpal samples. A. pleuropneumoniae was detected on the inner surface of the parietal pericardium in five animals (Figure 1). These positive results were obtained from animals euthanized due to exit criteria (liver: 3 pigs; spleen: 3 pigs; tarsal joints: 1 pig; meninges: 2 pigs) as well as from pigs euthanized at day 7 pi (liver: 2 pigs; spleen: 5 pigs; tarsal joint: 1 pig; meninges: 6 pigs). In the kidney samples A. pleuropneumoniae was only detected on day 7 pi by IHC. Although the macroscopic examination revealed no gross lesions in other organs than the lung, the histological examination revealed a hypertrophy of the Schweigger-Seidel sheaths and an acute diffuse moderate to severe hyperaemia in all spleen samples, a moderate neutrophilia in five spleen samples and a hypertrophy of the reticulocytes as well as an increased number of apoptotic cells in two spleen samples. Two pigs had a multifocal, acute and mild to moderate purulent hepatitis (Figure 2), one pig showed a focal, acute mild to moderate embolic fibrinous purulent nephritis (Figure 2) and two pigs showed a focal accumulation of neutrophilic granulocytes within Dura mater and cerebrum. The histological examination of all seven lung samples revealed a severe acute fibrino-haemorrhagic and necrotizing pleuropneumonia for the two pigs that had been clinically classified as severely diseased when chosen for IHC.
For two pigs with a mild clinical manifestation nonetheless a severe acute fibrinohaemorrhagic and necrotizing pleuropneumonia was detected but only in the lung tissue sample from the right caudal lung lobe. The third pig with a mild clinical disease showed mild to moderate fibrino-haemorrhagic and necrotizing lung lesions. In lung samples showing necro-haemorrhagic lesions, coccoid bacteria were detected in particular closely related to and within the lymph vessels. The fourth pig with a mild clinical disease and one of the two pigs that had been classified as clinically healthy showed no histological lesions associated with A. pleuropneumoniae infection. The second pig that was clinically classified as non-diseased showed a focal acute mild fibrinous pleuritis in the Lobus accessorius including pleural exudate composed of neutrophils, macrophages, fibrin and erythrocytes. There was no significant correlation, either with the degree of clinical disease or with the degree of isolation from the lungs, the lung lesion score or the detection in other organ samples (Figure 3) for the detection of A. pleuropneumoniae in blood by either method (blood culture or direct plantings). Volumes of the effusions and A. pleuropneumoniae cfu within the pleural fluids were highly significantly correlated with the degree of clinical disease and the degree of lung lesions (volume: p disease = 0.005, p lesions ≤ 0.001; cell count: p disease = 0.004, p lesions = 0.001). Regarding the pericardial fluids there was no significant correlation between fluid volumes and the degree of clinical disease or lung lesions developed during acute infection (p disease = 0.264; p lesions = 0.129). However, there was a highly significant correlations between both, the degree of clinical disease and the degree of lung lesions, and the amount of viable A. pleuropneumoniae cells within the pericardial fluids (p disease/ lesions = 0.001). The detection of A. pleuropneumoniae by bacteriological culture within the inner organ samples (liver, spleen, kidney) was highly significantly correlated with the degree of isolation from the lung tissue (p ≤ 0.001; Figure 3). The detection in liver and spleen was also highly significantly correlated with the degree of clinical disease (spleen: p ≤ 0.001; liver: p = 0.003). The same was found for the isolation from spleen tissue and the lung lesion score (p = 0.001). For the detection in the kidney and the clinical score (p = 0.013) as well as for the lung lesion score and the isolation from liver (p = 0.016) and kidney (p = 0.049), respectively, there was a significant correlation. For the peripheral swab samples (carpal joints, tarsal joints and meninges) there was a highly significant correlation between the detection of the bacterium in the meninges and the degree of clinical disease (p = 0.004). There was also a significant correlation between the isolation from the lungs and the isolation from tarsal joints (p = 0.032) and meninges (p = 0.019) as well as for the detection in the meninges and the lung lesion score (p = 0.039). Regarding the isolation of A. pleuropneumoniae from inner organ tissues and peripheral organ swab samples, the isolation was highly significantly correlated between spleen and all peripheral samples (carpal joints: p = 0.010; tarsal joints: p = 0.004; meninges: p = 0.004) as well as between tarsal joints and kidney (p = 0.009), respectively. 
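All p-values quoted in this and the following passage stem from the Spearman rank correlations described in the statistical-methods section. A minimal sketch of such an analysis with SciPy is given below; the data frame and column names are hypothetical stand-ins for the per-animal scores, not the original database.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-animal scores (0-3 for organ samples, combined 0-3 lung score);
# column names and values are illustrative only.
scores = pd.DataFrame({
    "lung_isolation_score": [0.0, 0.7, 1.4, 2.1, 2.7, 3.0],
    "spleen_score":         [0,   0,   1,   2,   3,   3  ],
    "meninges_score":       [0,   0,   0,   1,   1,   2  ],
})

rho, p = spearmanr(scores["lung_isolation_score"], scores["spleen_score"])
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
# Classification used in the text: p < 0.05 significant, p < 0.01 highly significant.
```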
There was a significant correlation between the detection in carpal joint swabs and liver (p = 0.040) or kidney (p = 0.018) as well as between tarsal joint swabs and liver (p = 0.024) and between liver and meninges (p = 0.047).

Figure 3 Correlation analysis of isolation from lung tissue and central organ tissues. X-axis: Score quantifying the degree of isolation from seven lung tissue localizations with 0 = no isolation, > 0-≤ 1 = low-grade isolation, > 1-≤ 2 = moderate isolation, > 2 = high-grade isolation. Degree of isolation from different central organ tissue is displayed on the y-axis as isolation score with 0 = no isolation, 1 = low-grade isolation, 2 = moderate isolation, 3 = high-grade isolation; diagonal lines: balance lines for degree of correlation; r: concordance value, p: significance value; concordance and significance calculated by Spearman Rank analysis; *results were obtained by direct plating of blood samples on A. pleuropneumoniae-selective blood agar [21] using the quadrant streaking method.

Discussion

The detection of A. pleuropneumoniae in pleural and pericardial fluids of pigs during acute infection is in accordance with earlier studies on porcine pleuropneumonia pathogenesis describing the mechanism of spreading of the pathogen within the pleural cavity not only via lymph vessels but also by oedematous fluids [13], and also describing the incidence of viable bacteria in pericardial fluids in cases of severe pleuropneumonia [27]. Though there was a significant correlation between the degree of clinical disease and lung lesions and detected cfu within the pericardial fluids, we could show that, in contrast to Nicolet and König [27], A. pleuropneumoniae does not only appear in the pericardial fluids of animals developing a severe disease after infection, but also in the pericardial fluids of pigs only showing mild clinical symptoms and mild lung lesions. Since A. pleuropneumoniae strain AP 76 could be isolated from all tested organ locations, it can be concluded that this strain spreads regularly within the whole body of the pig during the acute phase of infection. Although an examination by PCR might have been more sensitive [28] and, thus, might have revealed even higher detection rates in other tissues than the lungs, the isolation by bacteriological culture in the present study demonstrates the viability of the detected A. pleuropneumoniae isolates and avoids that the detection might be based on fragments processed by the immune system [29]. As A. pleuropneumoniae antigen was also detected by IHC intravascularly, intralesionally, and intracellularly within macrophages in different tissue samples, contamination of the samples during the sampling procedure can be excluded. Given that A. pleuropneumoniae was detected within the blood samples both by blood culture and direct plating as well as in intravascular emboli in the IHC examination, it can be assumed that the spreading within the organism was due to bacteraemia. Since A. pleuropneumoniae is normally considered to be non-invasive [12], the bacterium might reach the bloodstream due to damage of the blood vessel endothelium within areas of necrotizing pneumonia. However, in the lung samples of three of the animals examined by IHC and histology no damage of the blood vessel endothelium could be detected although the bacterium was detected within other organ tissues. The fact that there was no positive correlation between the detection in blood and the detection in any other organ tissues might, therefore, also be considered as a hint that the bloodstream might not be the first or only way for spreading. Nevertheless, it should be taken into account that bacteraemia is not always detected by blood culture and that a minimum of three samples per patient is needed to reach a sensitivity of 96% [30]. The histological detection of the bacteria, notably adjacent to and inside of the pulmonary lymph vessels, suggests that spreading takes place mainly via the lymph system. In this case the pathogen might reach the blood stream subsequently via the thoracic duct at the venous angle. In conclusion, it remains to be clarified whether the spreading of A. pleuropneumoniae mainly occurs via the lymph system, as stated for the spreading from the lungs to the pleura [13], or mainly by the blood system. Direct passage to the blood stream might also be an incidental event within damaged lung tissue. Nevertheless, the results suggest that pathogenic isolate AP76, in contrast to previous reports on the pathogenesis of A. pleuropneumoniae infection, seems to have an invasive capacity. The histological findings indicate that in most cases during acute infection A. pleuropneumoniae seems to be harboured within different organs without causing any lesions or immune reactions. However, this investigation also reveals mild cases of nephritis and hepatitis, as well as findings indicative of a possible beginning meningitis. These results are in accordance with the published case reports in which A. pleuropneumoniae was identified as the causative agent of clinically relevant hepatitis [15], meningitis and nephritis [16]. The detection in the tarsal and carpal swabs also might indicate an emigration to the joints, even if a minimal contamination of the synovia with blood during sampling cannot be excluded with certainty. However, the fact that the agent was detected in these locations corresponds to a reported case of fibrino-purulent arthritis and necrotizing osteomyelitis [14], although no alterations could be detected by histological examination of the joint capsules in our samples. So far, we know hardly anything about virulence mechanisms involved in causing tissue alterations in organs other than lungs, or how long the bacterium survives in these organs and how it escapes the immune system, especially during bacteraemia. Combining the IHC results and the results of the correlative analysis, it appears that the degree of colonisation of the lungs, as well as the colonisation of the spleen, seems to play a central role for the spreading of strain AP76 within the pig's organs. Regarding the results presented here, regular multi-organ spreading within the pig seems to be proven for at least A. pleuropneumoniae serovar 7 strain AP76 used in this experiment. Whether this applies to other isolates and other serovars, too, still needs to be explored. However, the fact that the strains identified in the published case reports on other organ diseases include strains of serovar 2 [14,15,17], serovar 6 [16,17] and serovar 3 [31] suggests that the ability for spreading within the whole organism of the pig might be held by several A. pleuropneumoniae strains. Actinobacillus pleuropneumoniae strain AP76 seems to spread regularly to different body tissues of the pig during the acute phase of infection.
The detection of the pathogen is not limited to the lungs and pleural cavity, suggesting that it has invasive capacities. The pathogen's ability to colonize the lungs and the degree of spleen colonisation are highly significantly correlated to the extent of spreading within the pig. In most cases A. pleuropneumoniae does not cause pathomorphological alterations in other organs than lungs and pleural cavity. Nevertheless, the strain used within this study seems to harbour the potential of causing lesions within these other organs, as in five of eight animals inflammatory lesions were detected by histological examination. Further investigations are needed to examine underlying mechanisms of invasion involved in triggering disease of tissues other than the lungs, as well as to identify the route of spreading, the ability of other strains and serovars for multi-organ spreading and the survival time of the agent when harboured in other tissues.

Availability of data and materials

The datasets used and analyzed during the current study are available from the corresponding author on request.

Ethics approval

The pigs were kept in accordance with the Guidelines for Protection of Vertebrate Animals used for experimental and other Scientific Purposes, European Treaty Series, nos. 123/170. Study design and housing of the animals were approved by a local, independent committee on ethics (Commission for ethical estimation of animal research studies of the Lower Saxonian State Office for Consumer Protection and Food Safety; Approval Number: 33.12-42502-04-15/1962).

Funding

Parts of this study were supported by the German Ministry of Agriculture (BLE) and the German Annuity Bank (Deutsche Rentenbank).
2018-09-26T20:47:57.417Z
2018-09-25T00:00:00.000
{ "year": 2018, "sha1": "6ce10f518acc4abac171ecf8e28596e60d21b302", "oa_license": "CCBY", "oa_url": "https://veterinaryresearch.biomedcentral.com/track/pdf/10.1186/s13567-018-0592-0", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6ce10f518acc4abac171ecf8e28596e60d21b302", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
54517961
pes2o/s2orc
v3-fos-license
Agreement Technologies for Coordination in Smart Cities : Many challenging problems in today’s society can be tackled by distributed open systems. This is particularly true for domains that are commonly perceived under the umbrella of Smart Cities, such as intelligent transportation, smart energy grids, or participative governance. When designing computer applications for these domains, it is necessary to account for the fact that the elements of such systems, often called software agents, are usually made by different designers and act on behalf of particular stakeholders. Furthermore, it is unknown at design time when such agents will enter or leave the system, and what interests new agents will represent. To instil coordination in such systems is particularly demanding, as usually only part of them can be directly controlled at runtime. Agreement Technologies refer to a sandbox of tools and mechanisms for the development of such open multiagent systems, which are based on the notion of agreement. In this paper, we argue that Agreement Technologies are a suitable means for achieving coordination in Smart Cities domains, and back our claim through examples of several real-world applications. Introduction The transactions and interactions among people in modern societies are increasingly mediated by computers.From email, over social networks, to virtual worlds, the way people work and enjoy their free time is changing dramatically.The resulting networks are usually large in scale, involving huge numbers of interactions, and are open for the interacting entities to join or leave at will.People are often supported by software components of different complexity to which some of the corresponding tasks can be delegated.In practice, such systems cannot be built and managed based on rigid, centralised client-server architectures, but call for more flexible and decentralised means of interaction. The field of Agreement Technologies (AT) [1] envisions next-generation open distributed systems, where interactions between software components are based on the concept of agreement, and which enact two key mechanisms: a means to specify the "space" of agreements that the agents can possibly reach, and an interaction model by means of which agreements can be effectively reached.Autonomy, interaction, mobility and openness are key characteristics that are tackled from a theoretical and practical perspective. Coordination in Distributed Systems is often seen as governing the interaction among distributed processes, with the aim of "gluing together" their behaviour, so that the resulting ensemble shows desired characteristics or functionalities [2].This notion has also been applied to Distributed Systems made up of software agents.Initially, the main purpose of such multiagent systems was to efficiently perform problem-solving in a distributed manner: both the agents and their rules of interaction were designed together, often in a top-down manner and applying a divide-and-conquer strategy to solve the problem at hand [3].However, many recent applications of multiagent systems refer to domains where agents, possibly built by different designers and representing different interests, may join and leave the system at a pace that is unknown at design time.It is apparent that coordination in such open multiagent systems requires a different, extended stance on coordination [3]. 
Application areas that fall under the umbrella of Smart Cities have recently gained momentum [4]. Intelligent transportation systems, smart energy grids, or participative governance are just some examples of domains where an improved efficiency of the use of shared urban resources (both physical and informational) can lead to a better quality of life for the citizens. It thus seems evident that new applications in the context of Smart Cities have the potential for achieving significant socioeconomic impact.

We believe that applying AT to the domain of Smart Cities may enable the development of novel applications, both with regard to functionality for stakeholders as well as with respect to the level of sustainability of Smart City services. In particular, in this article we discuss how coordination can be achieved in practical applications of multiagent systems, with different levels of openness, by making use of techniques from the sandbox of AT. Section 2 briefly introduces the fields of AT, coordination models, and Smart Cities, and relates them to each other. Section 3 describes several real-world applications, related to the field of Smart Cities, that illustrate how coordination models can be tailored to each particular case and its degree of openness. Section 4 summarises the lessons learnt from this enterprise.

Background

In this section we introduce the fields of Agreement Technologies and Coordination models and relate them to each other. We then briefly characterise the field of Smart Cities, and argue that Agreement Technologies are a promising candidate to instil coordination in Smart City applications.

Agreement Technologies

Agreement Technologies (AT) [1] address next-generation open distributed systems, where interactions between software processes are based on the concept of agreement. AT-based systems are endowed with means to specify the "space" of agreements that can be reached, as well as interaction models for reaching agreement and monitoring agreement execution. In the context of AT, the elements of open distributed systems are usually conceived as software agents. There is still no consensus on where to draw the border between programs or objects on the one hand and software agents on the other, but the latter are usually characterised by four key characteristics, namely Autonomy, Social ability, Responsiveness and Proactiveness [5]. The interactions of a software agent with its environment (and with other agents) are guided by a reasonably complex program, capable of rather sophisticated activities such as reasoning, learning, or planning. Two main ingredients are essential for such multiagent systems based on AT: firstly, a normative model that defines the "rules of the game" that software agents and their interactions must comply with; and secondly, an interaction model where agreements are first established and then enacted. AT can then be conceived as a sandbox of methods, platforms, and tools to define, specify and verify such systems.

The basic elements of the AT sandbox are related to the challenges outlined by Sierra et al. for the domain of Agreement Computing [6], covering the fields of semantics, norms, organisations, argumentation & negotiation, as well as trust & reputation. Still, when dealing with open distributed systems made up of software agents, more sophisticated and computationally expensive models and mechanisms can be applied [7].
The key elements of the field of AT can be conceived of in a tower structure, where each level provides functionality to the levels above, as depicted in Figure 1. Semantic technologies provide solutions to semantic mismatches through the alignment of ontologies, so agents can reach a common understanding on the elements of agreements. In this manner, a shared multi-faceted "space" of agreements can be conceived, providing essential information to the remaining layers. The next level is concerned with the definition of norms determining constraints that the agreements, and the processes leading to them, should satisfy. Thus, norms can be conceived of as a means of "shaping" the space of valid agreements. Organisations further restrict the way agreements are reached by imposing organisational structures on the agents. They thus provide a way to efficiently design and evolve the space of valid agreements, possibly based on normative concepts. The argumentation and negotiation layer provides methods for reaching agreements that respect the constraints that norms and organisations impose over the agents. This can be seen as choosing certain points in the space of valid agreements. Finally, the trust and reputation layer keeps track of whether the agreements reached, and their executions, respect the constraints put forward by norms and organisations. So, it complements the other techniques that shape the "agreement space", by relying on social mechanisms that interpret the behaviour of agents.

Even though one can clearly see the main flow of information from the bottom towards the top layers, results of upper layers can also produce useful feedback that can be exploited at lower levels. For instance, as mentioned above, norms and trust can be conceived as a priori and a posteriori approaches, respectively, to security [6]. Therefore, in an open and dynamic world it will certainly make sense for the results of trust models to have a certain impact on the evolution of norms. Some techniques and tools are orthogonal to the AT tower structure. The topics of environments [8] and infrastructures [9], for instance, pervade all layers. In much the same way, coordination models and mechanisms are not just relevant to the third layer of Figure 1, but cross-cut the other parts of the AT tower as well [10]. We will elaborate on this matter in the next subsection.
Coordination models

The notion of coordination is central to many disciplines. Sociologists observe the behaviour of groups of people, identify particular coordination mechanisms, and explain how and why they emerge. Economists are concerned with the structure and dynamics of the market as a particular coordination mechanism; they attempt to build coordination market models to predict its behaviour. Biologists observe societies of simple animals demonstrating coordination without central coordinators; coordination mechanisms inspired by Biology have proven useful to various scientific disciplines. In Organizational Theory, the emphasis is on predicting future behaviour and performance of an organization, assuming the validity of a certain coordination mechanism. From a Computer Science point of view, the challenge is to design mechanisms that "glue together" the activities of distributed actors in some efficient manner. However, beyond such high-level conceptions, within the Computer Science field, and even among researchers working on Multiagent Systems, there is no commonly agreed definition for the concept of coordination. An important reason for this is the different interests of the designers in coordination mechanisms (micro and/or macro level properties), as well as the different levels of control that designers have over the elements of the distributed intelligent system (the degree of openness of the system), as we will argue in the following [3].

Early work on coordination in MAS focused essentially on (Cooperative) Distributed Problem Solving. In this field, it is assumed that a system is constructed (usually from scratch) out of several intelligent components, and that there is a single designer with full control over these agents. In particular, this implies that agents are benevolent (as instrumental local goals can be designed into them) and, by consequence, that the designer is capable of imposing whatever interaction patterns are deemed necessary to achieve efficient coordination within the system. Efficiency in this context usually refers to a trade-off between the system's resource consumption and the quality of the solution provided by the system: agents necessarily have only partial, and maybe even inconsistent, views of the global state of the problem-solving process, so they need to exchange just enough information to be able to locally take good decisions (i.e., choices that are instrumental with respect to the overall system functionality). Resource consumption is not only measured in terms of computation but also of communication load.

From a qualitative perspective, coordination in Distributed Problem-Solving systems can be conceived as a Distributed Constraint Problem (see [11] for an example). Agents locally determine individual actions that comply with the constraints (dependencies) that affect them, so as to give rise to "good" global solutions. Alternatively, in quantitative approaches the structure of the coordination problem is hidden in the shape of a shared global multi-attribute utility function. An agent has control over only some of the function's attributes, and the global utility may increase/decrease in case there is a positive/negative dependency with an attribute governed by another agent, but these dependencies are hidden in the algorithm that computes the utility function, and are thus not declaratively modelled. Quantitative approaches to coordination can be understood in terms of a Distributed Optimization Problem.
More recent research in the field of MAS has been shifting the focus towards open systems, where the assumption of a central designer with full control over the system components no longer holds. This raises interoperability problems that need to be addressed. In addition, the benevolence assumption of Distributed Problem Solving agents needs to be dropped: coordination mechanisms now have to deal with autonomous, self-interested behaviour, an aspect that is usually out of the scope of models from the field of Distributed Computing. Approaching agent design in open systems from a micro-level perspective means designing an intelligent software entity capable of successful autonomous action in potentially hostile (multiagent) environments. In this context coordination can be defined as "a way of adapting to the environment" [12]: adjusting one's decisions and actions to the presence of other agents, assuming that they show some sort of rationality. If the scenario is modelled within a quantitative framework, we are still concerned with multi-attribute utility functions, where only some attributes are controlled by a particular agent, but now there are different utility functions for each agent. The most popular way of characterizing a problem of these characteristics is through (non-constant sum) Games [13]. Coordination from a micro-level perspective thus boils down to agents applying some sort of "best response" strategy, and potentially leads to some notion of (Nash) equilibrium. From a macro-level perspective, coordination is about designing "rules of the game" such that, assuming that agents act rationally and comply with these rules, some desired properties or functionalities are instilled. In the field of Game Theory, this is termed Mechanism Design [13]. In practice, it implies designing potentially complex interaction protocols among the agents, which shape their "legal" action alternatives at each point in time, as well as institutions or infrastructures that make agents abide by the rules [9]. From this perspective, instilling coordination in an open multiagent system can be conceived as an act of governing interaction within the system.

If the environment is such that agents can credibly commit to mutually binding agreements, coordinating with others comes down to negotiating the terms of such commitments. This is where the link to AT becomes evident. Norms and organisations define and structure the interactions that may take place among agents. The shape of these interactions depends on the particular case, but often they can be conceived as negotiating an agreement for a particular outcome of coordination. In addition, depending on the agents' interests, information and structured arguments can be provided to make agents converge on such an agreement. Norms and trust can be seen as a priori and a posteriori measures, respectively, that make agents comply with the constraints imposed by norms and organisations.
Smart Cities

There is a broad variety of domains where the potential of AT becomes apparent (see Part VII of [1]). In these domains, the choices and actions of a large number of autonomous stakeholders need to be coordinated, and interactions can be regulated, by some sort of intelligent computing infrastructure [9], through some sort of institutions and institutional agents [14], or simply by strategically providing information in an environment with a significant level of uncertainty [15]. The advent of intelligent road infrastructures, with support for vehicle-to-vehicle and vehicle-to-infrastructure communications, makes smart transportation a challenging field of application for AT, as it allows for a decentralized coordination of individually rational commuters. But also the infrastructure of the electricity grid is evolving, allowing for bidirectional communication among energy producers and consumers. Therefore, in the near future large numbers of households could coordinate and adapt their aggregate energy demand to the supply offered by utilities. AT can also be applied to the domain of smart energy in order to integrate large numbers of small-scale producers of renewable energy into the grid infrastructure. In much the same way, smart governance can make use of electronic institutions to support citizens, for instance in the process of dispute resolution.

The above are only a few examples of applications and domains that are often referred to under the umbrella of Smart Cities. Even though many definitions of that term exist [4], there is still no commonly agreed conception of a smart city. Still, we believe that authors tend to concur that a key challenge of Smart Cities is to improve the efficiency of the use of shared urban resources (both physical and informational) through the use of ICT, so as to improve the quality of life of citizens (see, e.g., [16,17,18]). Most of the world's urban areas have a limited space to expand, congestion and contamination seriously affect people's well-being, and a constant and reliable supply of energy is essential for almost all aspects of urban life. Therefore, ICT-based solutions can help to adequately disseminate information and effectively coordinate the urban services and supplies, so as to make urban life more comfortable and efficient.

While initially smart city research had a strong focus on ICT and "smartness", more recently impact indicators of environmental, economic or social sustainability have also gained importance [19], so nowadays the term smart sustainable city is commonly used [20]. This notion underlines that, on the track to making our cities smarter, preserving the "needs of present and future generations with respect to economic, social and environmental aspects" is of foremost importance [21].

The Internet of Things (IoT) is often considered as crucial in the development of smart cities [22]. It is usually conceived as a global infrastructure, enabling advanced services by interconnecting (physical and virtual) devices based on ICT. Recently, it has been moving from interesting proofs of concept to systemic support for urban processes that generates efficiency at scale (see, e.g., [23]). With increasing connectivity between people, data and things based on IoT, the challenge is how to manage and coordinate the decisions of a myriad of decision makers in real time, considering the scarcity of resources and the stochasticity in demand.
We believe that there is a significant potential in applying the AT paradigm outlined in Section 2.1, targeting methods and tools that support the formation and execution of agreements in large-scale open systems, in order to progress towards the vision of smart and sustainable cities mentioned above. In much the same way, it seems straightforward that the efficient discovery, orchestration, and maintenance of services, based largely on data from heterogeneous sensors and all sorts of embedded devices, calls for the application of both scalable and tailorable coordination models. In the following sections, we will focus on different types of assignment problems in the context of sustainable smart cities: we provide examples of how AT-based coordination services mediate the use of scarce resources to the benefit of citizens.

Applications

In this section we show how the AT paradigm can be applied to achieve coordination in various real-world problems. Depending on the structure and characteristics of each domain, different technologies from the AT sandbox need to be selected and combined so as to meet the requirements for each case. Section 3.1 highlights the use of techniques related to norms and organisations (in particular, auction protocols and market-based control) in an open domain, where flows of autonomous vehicles, controlled by individually rational driver agents, are coordinated through a network of intelligent intersections. Section 3.2 is dedicated to the problem of evacuation guidance in smart buildings, where evacuees, suffering from significant levels of uncertainty concerning the state of an emergency, are provided with individualised route recommendations in a coordinated manner. In this context, issues related to situation awareness and semantics play a major role. Section 3.3 addresses the coordination of fleets of ambulance vehicles. Even though this is a primarily closed scenario, we address it with techniques from the field of AT, applying an algorithm that simulates multiple concurrent computational auctions. Section 3.4 focuses on the coordination of emergency medical services for angioplasty patients, a problem similar to the previous one, even though its internal structure (different types of agents, etc.) leads to a more complex coordination mechanism. Finally, Section 3.5 also addresses a coordination problem related to fleet management, but applied to the field of taxi services. Here, we again have a higher degree of openness, as taxis are conceived of as autonomous agents, so coordination needs to be induced by incentives, targeted at influencing the choices of drivers whose actions are not fully determined by organizational rules and protocols.
Coordination of traffic flows through intelligent intersections

Removing the human driver from the control loop through the use of autonomous vehicles integrated with an intelligent road infrastructure can be considered as the ultimate, long-term goal of the set of systems and technologies grouped under the name of Intelligent Transportation Systems (ITS). Autonomous vehicles are already a reality. For instance, in the DARPA Grand Challenges1 different teams competed to build the best autonomous vehicles, capable of driving in traffic, performing complex manoeuvres such as merging, passing, parking and negotiating intersections. The results have shown that autonomous vehicles can successfully interact with both manned and unmanned vehicular traffic in an urban environment. In line with this vision, the IEEE Connected Vehicle initiative2 promotes technologies that link road vehicles to each other and to their physical surroundings, i.e., by vehicle-to-infrastructure and vehicle-to-vehicle wireless communications. The advantages of such an integration span from improved road safety to a more efficient operational use of the transportation network. For instance, vehicles can exchange critical safety information with the infrastructure, so as to recognise high-risk situations in advance and therefore to alert drivers. Furthermore, traffic signal systems can communicate signal phase and timing information to vehicles to enhance the use of the transportation network.

In this regard, some authors have recently paid attention to the potential of a tighter integration of autonomous vehicles with the road infrastructure for future urban traffic management. In the reservation-based control system [24], an intersection is regulated by a software agent, called the intersection manager agent, which assigns reservations of space and time to each autonomous vehicle intending to cross the intersection. Each vehicle is operated by another software agent, called the driver agent. When a vehicle approaches an intersection, the driver requests that the intersection manager reserves the necessary space-time slots to safely cross the intersection. The intersection manager, provided with data such as vehicle ID, vehicle size, arrival time, arrival speed, type of turn, etc., simulates the vehicle's trajectory inside the intersection and informs the driver whether its request is in conflict with the already confirmed reservations. If such a conflict does not exist, the driver stores the reservation details and tries to meet them; otherwise it may try again at a later time. The authors show through simulations that in situations of balanced traffic, if all vehicles are autonomous, their delays at the intersection are drastically reduced compared to traditional traffic lights.

In this section we report on our efforts to use different elements of the sandbox of AT to further improve the effectiveness and applicability of Dresner and Stone's approach, assuming a future infrastructure where all vehicles are autonomous and capable of interacting with the regulating traffic infrastructure. We extend the reservation-based model for intersection control at two different levels.
• Single Intersection: our objective is to elaborate a new policy for the allocation of reservations to vehicles that takes into account the drivers' different attitudes regarding their travel times.

• Network of Intersections: we build a computational market where drivers must acquire the right to pass through the intersections of the urban road network, implementing the intersection managers as competitive suppliers of reservations which selfishly adapt the prices to match the actual demand, and combine the competitive strategy for traffic assignment with the auction-based control policy at the intersection level into an adaptive, market-inspired mechanism for traffic management of reservation-based intersections.

Mechanism for single intersection

For a single reservation-based intersection, the problem that an intersection manager has to solve comes down to allocating reservations among a set of drivers in such a way that a specific objective is maximised. This objective can be, for instance, minimising the average delay caused by the presence of the regulated intersection. In this case, the simplest policy to adopt is allocating a reservation to the first agent that requests it, as occurs with the first-come first-served (FCFS) policy proposed by Dresner and Stone in their original work. Another work in line with this objective takes inspiration from adversarial queuing theory for the definition of several alternative control policies that aim at minimising the average delay [25]. However, these policies ignore the fact that in the real world, depending on people's interests and the specific situation that they are in, the relevance of travel time may be judged differently: a business person on his or her way to a meeting, for instance, is likely to be more sensitive to delays than a student cruising for leisure. Since processing the incoming requests to grant the associated reservations can be considered as a process of assigning resources to agents that request them, one may be interested in an intersection manager that allocates the disputed resources to the agents that value them the most. In the sequel, we design an auction-based policy for this purpose. In line with approaches from mechanism design, we assume that the more a human driver is willing to pay for the desired set of space-time slots, the more they value the good. Therefore, our policy for the allocation of resources relies on auctions.

The first step is to define the resources (or items) to be allocated. In our scenario, the auctioned good is the use of the space inside the intersection at a given time. We model an intersection as a discrete matrix of space slots. Let S be the set of the intersection space slots, and T the set of future time steps; then the set of items that a bidder can bid for is I = S × T. Therefore, differently from other auction-based approaches for intersection management (e.g. [26]), our model of the problem calls for a combinatorial auction, as a bidder is only interested in bundles of items over the set I. As Figure 2 illustrates, in the absence of acceleration in the intersection, a reservation request implicitly defines which space slots at which time the driver needs in order to pass through the intersection.
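To make this mapping from a reservation request to a bundle of items more concrete, the following sketch shows how such a bundle could be assembled. It is an illustration only: the Request fields, the grid discretisation and the precomputed trajectory_cells are assumptions made for the example, not the data structures of our simulator.

from dataclasses import dataclass

@dataclass
class Request:
    arrival_time: int     # discrete time step at which the vehicle enters the intersection
    arrival_speed: float  # cells traversed per time step (constant, no acceleration inside)
    lane: int             # index of the entry lane
    turn: str             # "straight", "left" or "right"

def requested_bundle(req, trajectory_cells):
    """Return the set of (x, y, t) items the vehicle needs to occupy.

    trajectory_cells is the ordered list of grid cells swept for the given
    lane/turn combination, assumed to be precomputed by the intersection manager.
    """
    bundle = set()
    for i, (x, y) in enumerate(trajectory_cells):
        # with constant speed, the index of the cell determines its occupation time
        t = req.arrival_time + int(i / req.arrival_speed)
        bundle.add((x, y, t))
    return bundle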
The bidding rules define the form of a valid bid accepted by the auctioneer. In our scenario, a bid over a bundle of items is implicitly defined by the reservation request. Given the parameters arrival time, arrival speed, lane and type of turn, the auctioneer (i.e., the intersection manager) is able to determine which space slots are needed at which time. Thus, the additional parameter that a driver must include in its reservation request is the value of its bid, i.e., the amount of money that it is willing to pay for the requested reservation. A bidder is allowed to withdraw its bid and to submit a new one. This may happen, for instance, when a driver that submitted a bid b, estimating to be at the intersection at time t, realises that, due to changing traffic conditions, it will more likely be at the intersection at time t' > t, thus making the submitted bid b useless for the driver. The rational thing to do in this case, as the driver would not want to risk being involved in a car accident, is resubmitting the bid with the updated arrival time. However, we require the new bid to be greater than or equal to the value of the previous one. This constraint avoids the situation whereby a bidder "blocks" one or several slots for itself, by acquiring them early and with overpriced bids.

Figure 3 shows the interaction protocol used in our approach. It starts with the auctioneer waiting for bids for a certain amount of time. Once the new bids are collected, they constitute the bid set. Then, the auctioneer executes the algorithm for the winner determination problem (WDP), and the winner set is built, containing the bids whose reservation requests have been accepted. During the WDP algorithm execution, the auctioneer still accepts incoming bids, but they will only be included in the bid set of the next round. The auctioneer sends a CONFIRMATION message to all bidders that submitted the bids contained in the winner set, while a REJECTION message is sent to the bidders that submitted the remaining bids. Then a new round begins, and the auctioneer collects new incoming bids for a certain amount of time. Notice that the auction must be performed in real time, so both the bid collection and the winner determination phase must occur within a specific time window. This implies that optimal and complete algorithms for the WDP are not suitable. Therefore, we use an approximation algorithm with anytime properties, i.e., the longer the algorithm keeps executing, the better the solution it finds [27].
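As an illustration of the anytime flavour of the winner determination step, here is a minimal greedy sketch in Python. It is not the approximation algorithm of [27]; it simply ranks bids by value per requested item and accepts non-conflicting bundles until a real-time budget expires.

import time

def greedy_winner_determination(bids, time_budget_s=0.05):
    """Approximate WDP for the intersection auction: pick non-conflicting bids greedily.

    bids is a list of (bid_value, bundle) pairs, where a bundle is a set of
    (space, time) items as in the sketch above. Returns the winner set found
    within the given time budget (anytime behaviour: a partial but valid answer).
    """
    deadline = time.monotonic() + time_budget_s
    allocated = set()   # items already granted in this auction round
    winners = []
    # consider bids with a higher value per requested item first
    for value, bundle in sorted(bids, key=lambda b: b[0] / max(len(b[1]), 1), reverse=True):
        if time.monotonic() > deadline:
            break       # time window exhausted: return the best set found so far
        if allocated.isdisjoint(bundle):
            winners.append((value, bundle))
            allocated |= bundle
    return winners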
We expect our policy based on combinatorial auctions (CA) to enforce an inverse relation between the amounts spent by the bidders and their delay (the increase in travel time due to the presence of the intersection). That is, the more money a driver is willing to spend for crossing the intersection, the faster will be its transit through it. For this purpose, we designed a custom, microscopic, time-and-space-discrete simulator, with simple rules for acceleration and deceleration [27]. The origin O and destination D of each simulated vehicle are generated randomly. The destination implies the type of turn that the vehicle will perform at the intersection as well as the lane it will use to travel. We create different traffic demands by varying the expected number of vehicles l that, for every O-D pair, are spawned in an interval of 60 seconds, using a Poisson distribution. The bid that a driver is willing to submit is drawn from a normal distribution with mean 100 cents and variance 25 cents, so the agents are not homogeneous in the sense that the amount of money that they are offering differs from one to another. Similar results are achieved with lower and higher densities [27].

Notice that even with a theoretically infinite amount of money, a driver cannot experience zero delay when approaching an intersection, as the travel time is influenced by slower, potentially "poorer" vehicles in front of it. Extensions to our mechanism that address this problem are subject to future work. We also analysed the impact that such a policy has on the intersection's average delay, comparing it to the FCFS strategy. Figure 5 shows that when traffic demand is low, the performance of the CA policy and the FCFS is approximately the same, but as demand grows there is a noticeable increase of the average delay when the intersection manager applies CA. The reason is that the CA policy aims at granting a reservation to the driver that values it the most, rather than maximising the number of granted requests. Thus, a bid b whose value is greater than the sum of n bids that share some items with b is likely to be selected in the winner set. If so, only one vehicle will be allowed to transit, while n other vehicles will have to slow down and try again. When extending the CA mechanism to multiple intersections, we will try to reduce this "social cost" of giving preference to drivers with a high valuation of time.
Mechanisms for multiple intersections

In the previous section we have analysed the performance of an auction-based policy for the allocation of reservations in the single intersection scenario. A driver is modelled as a simple agent that selects the preferred value for the bid that will be submitted to the auctioneer. The decision space of a driver in an urban road network with multiple intersections is much broader and more complex. Therefore, this scenario opens up new possibilities for intersection managers to affect the behaviour of drivers. For example, an intersection manager may be interested in influencing the collective route choice performed by the drivers, using variable message signs, information broadcast, or individual route guidance systems, so as to evenly distribute the traffic over the network. This problem is called traffic assignment. In the following we first evaluate how market-inspired methods can be used within a traffic assignment strategy for networks of reservation-based intersections (CTA strategy). Then, we combine this traffic assignment strategy with the auction-based control policy into an integrated mechanism for traffic management of urban road networks (CA-CTA strategy). Finally, the performance of the different approaches is evaluated.

The complexity of the problem puts limits to coordination approaches based on cooperative multiagent learning [28]. Therefore, our Competitive Traffic Assignment strategy (CTA) models each intersection manager as a provider of resources (in this case, the reservations of the intersection it manages). Each intersection manager is free to establish a price for the reservations it provides. On the other side of the market, each driver is modelled as a buyer of these resources. Provided with the current prices of the reservations, it chooses its route according to its personal preferences about travel times and monetary costs. Each intersection manager is modelled so as to compete with all others for the supply of the reservations that are traded. Therefore, our goal as market designers is making the intersection managers adapt their prices towards a price vector that accounts for an efficient allocation of the resources.

In CTA, for each incoming link l, an intersection manager defines the following variables:

• Current price p_t(l): the price applied by the intersection manager to the reservations sold to the drivers that come from the incoming link l.

• Total demand d_t(l): the total demand of reservations from the incoming link l that the intersection manager observes at time t, given the current price p_t(l), i.e. the number of vehicles that intend to cross the intersection coming from link l at time t.

• Supply s(l): the reservations supplied by the intersection manager for the incoming link l. It is a constant and represents the number of vehicles crossing the intersection coming from link l that the intersection manager is willing to serve.

• Excess demand z_t(l): the difference between total demand at time t and supply, i.e. z_t(l) = d_t(l) - s(l).

We define the price vector p_t as a vector that comprises all prices at time t, i.e.
the prices applied by all intersection managers to each of their controlled links. In particular, we say that a price vector p_t matches the supply with the demand if the excess demand z_t(l) is 0 for all links l of the network. This price vector, which corresponds to the market equilibrium price, can be computed through a Walrasian auction [29] where each buyer (i.e., driver) communicates to the suppliers (i.e., intersection managers) the route that it is willing to choose, given the current price vector p_t. With this information, each intersection manager computes the demand d_t(l) as well as the excess demand z_t(l) for each of its controlled links. Then, each intersection manager adjusts the prices p_t(l) for all the incoming links, lowering them if there is excess supply (z_t(l) < 0) and raising them if there is excess demand (z_t(l) > 0). The new price vector p_{t+1} is communicated to the drivers, which iteratively choose their new desired route on this basis. Once the equilibrium price is computed, the trading transactions take place and each driver buys the required reservations at the intersections that lie on its route.

In order to adapt the Walrasian auction to the traffic domain, we implement a pricing strategy that aims at reaching the equilibrium price but works on a continuous basis, with drivers that leave and join the market dynamically, and with transactions that take place continuously. To reach general equilibrium, each intersection manager applies the following price update rule: at time t, it independently computes the excess demand z_t(l) and updates the price p_t(l) accordingly, raising it under excess demand and lowering it under excess supply, where d is the minimum price that the intersection manager charges for the reservations that it sells. As drivers that travel through road network links with low demand shall not incur any costs, for the CTA strategy we choose d = 0.

The integrated mechanism for traffic management (CA-CTA) combines the competitive traffic assignment strategy (CTA) with the auction-based policy (CA). Since the intersection manager is the supplier of the reservations that are allocated through the combinatorial auction, it may control the reserve price of the auctioned reservations, i.e. the minimum price at which the intersection manager is willing to sell. At time t, for each link l, CA-CTA simply sets this reserve price to the price p_t(l) computed by the price update rule of the CTA strategy.
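The price update rule itself is given as a formula in the original article; since only its qualitative behaviour is reproduced above (raise under excess demand, lower under excess supply, never below the floor d), the following sketch uses a plausible tatonnement-style rule with an illustrative step size gamma. Both the functional form and the step size are assumptions, not the authors' exact definition.

def update_price(p_t, excess_demand, gamma=0.1, d=0.0):
    """One step of the per-link price adaptation of an intersection manager.

    Raises the price when demand exceeds supply, lowers it otherwise, and never
    drops below the minimum price d (d = 0 in the CTA strategy). The linear form
    and the step size gamma are illustrative assumptions.
    """
    return max(d, p_t + gamma * excess_demand)

# usage sketch: 12 vehicles intend to use link l at time t, while the supply is 10
# p_next = update_price(p_t=0.50, excess_demand=12 - 10)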
The experimental evaluation of the strategies is performed on a hybrid mesoscopic-microscopic simulator, where traffic flow on road segments is modelled at the mesoscopic level, while traffic flow inside intersections is modelled at the microscopic level. Although our work does not depend on the underlying road network, we chose a topology inspired by the urban road network of the city of Madrid for our empirical evaluation (see Figure 6). The network is characterised by several freeways that connect the city centre with the surroundings and a ring road. Each large dark vertex in Figure 6, if it connects three or more links, is modelled as a reservation-based intersection. In the experiments, we recreate a typical high load situation (i.e., the central, worst part of a morning peak), with more than 11,000 vehicles departing within a time window of 50 minutes (see [27] for details). We aim at comparing the performance of FCFS, CTA, and CA-CTA. In FCFS, each intersection manager performs combinatorial auctions (without reserve price) in isolation. In this case, the drivers' route choice model simply selects the route with minimum expected travel time at free flow, since there is no notion of price. For the other strategies, we assume that drivers choose the most preferred route they can afford. Since the prices of links are changing dynamically, a driver continuously evaluates the utility of the route it is following and, in case a different route becomes more attractive, it may react and change on-the-fly how to reach its destination, selecting a route different from the original one.

To assess the social cost incurred by CA-CTA at the global level, we measure the moving average of the travel time, that is, how the average travel time of the entire population of drivers, computed over all the O-D pairs, evolves during the simulation. The results, with 95% confidence interval error bars, are plotted in Figure 7. In the beginning, the average travel time is similar for all the scenarios, but as the number of drivers that populate the network (i.e., its load) increases, it grows significantly faster with FCFS than with the CA-CTA policy. In terms of average travel times CTA is the best performing policy. CA-CTA has a slightly inferior performance, but it can be shown that it enforces an inverse relationship between bid value and delay, similar to the results presented in the previous section [27]. The fact that both CA-CTA and CTA outperform FCFS is an indication that, in general, a traffic assignment strategy (the "CTA" component of both policies) improves travel time. In fact, FCFS drivers always select the shortest route, which in some cases is not the best route choice. Furthermore, granting reservations through an auction (the "CA" component of the CA-CTA policy) ensures that bid value and delay reduction are correlated.
Evacuation Coordination in Smart Buildings

The objective of an evacuation is to relocate evacuees from hazardous to safe areas while providing them with safe routes. Present building evacuation approaches are mostly static and preassigned (e.g. [30]). Frequently, no coordination is available except for predefined evacuation maps. Still, due to the lack of overall evacuation network information, there might be casualties caused by a too slow evacuation on hazardous routes. Real-time route guidance systems, which dynamically determine evacuation routes in inner spaces based on the imminent or ongoing emergency, can help reduce those risks. Chen and Feng [31] propose two heuristic flow control algorithms for real-time building evacuation with multiple narrow doors: one with no limitation on the number of evacuation paths and one with k required evacuation paths. Filippoupolitis and Gelenbe [32] propose a distributed system for the computation of shortest evacuation routes in real time. The routes are computed by decision nodes and are communicated to the evacuees located in their vicinity. However, this approach considers only the physical distance and the hazard present in each link and does not take into consideration crowd congestion on the routes.

A dynamic, context-sensitive notion of route safety is a key factor for such recommendations, in particular as herding and stampeding behaviours may occur at potential bottlenecks depending, among other factors, on the amount of people who intend to pass through them. Furthermore, smart devices allow guidance to be personalized, taking into account, for instance, the specific circumstances of the elderly, disabled persons, or families. In such settings, an adequate notion of fairness of evacuation route recommendations is of utmost importance to assure the trustworthiness of the system from the standpoint of its users [33]: the guidance should not only achieve good overall performance of the evacuation process, but must also generate proposals for each of its users that each of them perceives as efficient. Finally, large groups of people may need to be evacuated, so scalability plays a key role.

Our proposal concentrates on real-time situation-aware evacuation guidance in smart buildings. The system aims at assigning efficient evacuation paths to individuals based on their mobility limitations, initial positions, and other evacuation requirements, while respecting individual privacy. In our approach, a network of smart building agents calculates individual routes in a decentralized fashion. Complex event processing, semantic technologies and distributed optimization techniques are used to address this problem. In addition, we use the notion of agility to determine robust routes, in the sense that they are not only fast but also allow finding acceptable alternatives in case of upcoming contingencies.
We rely on the existence of a rather extensive set of possible evacuation routes, which may be determined by evacuation experts or through some automated online or offline process. The different evacuation routes are stored in an emergency ontology that, together with an ontology describing the topological structure of the building, specifies the a priori knowledge of our system. In addition, situational knowledge about the current state of the building and of the evacuees is generated in real time through a network of sensors. This dynamic knowledge is merged with the static knowledge about the infrastructure. In an emergency situation, semantic inference is used to select the most appropriate agile evacuation route for each individual in the building. Furthermore, real-time monitoring allows the system to reroute evacuees in case of contingencies and, thus, to propose evacuation routes that are adaptive to unpredictable safety drops in the evacuation network.

Distributed Architecture

The objective of the evacuation route guidance architecture (ERGA) is to provide individualized route guidance to evacuees over an app on their smartphones, based on the evacuation information received from connected smartphones within the building and the building sensor network. However, even if an evacuee did not have a smartphone available, s/he could still receive information on relevant evacuation directions, e.g., through LED displays on the walls of a smart building.

ERGA (Figure 8) consists of user agents (UA) and a network of smart building (SB) agents.

User agents. The user agent is associated with the application on the smartphone of an evacuee. It manages and stores all the information that is related to a specific evacuee in the building. Here, we assume that people that enter the building own a smartphone with the evacuation app installed, or that they have been provided with some smartphone-like device that runs the app when they start to evacuate. The user agent contains three modules: (i) user preferences and constraints, (ii) user situation awareness module, and (iii) route guidance module.

The user preferences and constraints module allows defining constraints such as disabilities (e.g., the use of a wheelchair or vision impairment) as well as evacuation-related behavioural disorders (e.g., agoraphobia, social phobia, etc.), while the preferences include the affiliate ties with other users of the building. The user situation awareness module exploits sensor data (from the smartphone and building) and reasons about the behaviour and location of the user. The presence of an evacuee, together with the information derived from the situation awareness module and the individual preferences and constraints, is passed to the closest SB agent. In order to assure privacy, only certain basic data about the user's situation should be forwarded to the SB agent (e.g., location, running events). In case of an emergency evacuation, the user interface provides the user with personalized navigation guidelines for evacuation.

Smart building agents. Situation awareness and decision making are distributed in the network of SB agents such that each agent is responsible for the semantic reasoning concerning the safety of its assigned physical space, as well as the evacuation route computation for the evacuees positioned in its physical space. We assume that each SB agent has at its disposal the information regarding the entire evacuation network's layout, topology and safety.
A single SB agent controls only its own physical space. The number and location of SB agents is defined when the system is installed. Each SB agent has a corresponding region (Voronoi cell) consisting of all user agents closer to that SB agent than to any other SB agent. Each SB agent contains a local space situation awareness module that perceives the safety conditions of the physical space it controls by combining and analysing the events provided by the sensors and individual user agents located within the smart space controlled by it. Moreover, each SB agent communicates with its neighbouring SB agents and with the user agents present within its physical space.

The local space situation awareness module functions in cycles. In the first phase, the local building sensor data is fused with the data from the locally present user agents. Then, the safety value is deduced. This data is sent to a blackboard or similar globally shared data structure that contains the overall network safety values and is visible to all agents. When an SB agent detects an emergency situation, it sends the updated safety value of its physical space to the shared blackboard. This allows, on the one hand, to monitor the real-time situation of the building and, on the other hand, to trigger an evacuation process and to execute control actions in such a process. The SB agent's evacuation route recommender module computes optimized evacuation routes for each locally present user agent by distributed computation and communication with the rest of the SB agents in a multi-hop fashion. In this process, the algorithm uses: (i) data regarding the building topology, (ii) general knowledge about emergency and evacuation scenarios (e.g., the facts that people with strong affiliate ties should always be evacuated together, the appropriateness of certain routes for people with limited mobility, and the influence of certain events like fire and smoke on the security level), and (iii) the current physical space situation awareness of the SB agent itself as well as regarding the evacuees that are currently in its space and the evacuation network's safety values.

During evacuation, the global safety situation of the building is dynamically updated in real time and each SB agent recalculates the evacuation routes if necessary.

Situation Awareness

We assume the existence of data provided by a smart infrastructure as well as by the users currently in the building. In particular, we require information for identifying the location of each user in the building.

There are various techniques to localize people in buildings. Measuring the strength of the signal of several WiFi access points could be used to calculate a person's location via trilateration. However, the signal strength is easily affected by the environment (obstacles, users, ...), making it very difficult to obtain accurate positions. Another option is using RFID technology, but a lot of expensive readers would need to be installed in the building, and there are trilateration problems similar to those of WiFi. In addition, it would require providing an RFID tag to each person in the building. We opt for using beacons, a recent technology to support indoor navigation. Beacons are cheap devices that emit Bluetooth signals, which can be read by beacon readers, in particular smartphones. Beacons send, among other information, a unique ID that allows identifying the specific sensor the user is near to, thus providing accurate user location.
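As a small illustration of the beacon-based localisation just described, the sketch below picks the beacon with the strongest received signal and maps its ID to a building section. The beacon-to-section table and the RSSI-based selection are assumptions made for the example, not part of the ERGA specification.

def locate_user(beacon_readings, beacon_to_section):
    """Return the section the user is most likely in.

    beacon_readings maps beacon IDs to received signal strength (RSSI in dBm,
    higher means closer); beacon_to_section maps beacon IDs to section names.
    Both structures are illustrative assumptions.
    """
    if not beacon_readings:
        return None
    nearest_beacon = max(beacon_readings, key=beacon_readings.get)
    return beacon_to_section.get(nearest_beacon)

# locate_user({"b-102": -48.0, "b-117": -71.5}, {"b-102": "corridor-1F-east"})
# -> "corridor-1F-east"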
Besides user location, other infrastructure sensors provide different measures such as temperature, smoke, fire, and so on. In addition, the built-in sensors of the users' smartphones provide information that allows detecting their activity (e.g. if the person carrying the phone is running).

Sensor events (each piece of information forwarded by or read from a sensor) are processed using Complex Event Processing (CEP), a software technology to extract the information value from event streams [34]. CEP analyses continuous streams of incoming events in order to identify the presence of complex sequences of events (event patterns). Event stream processing systems employ "sliding windows" and temporal operators to specify temporal relations between the events in the stream. The core part of CEP is a declarative event processing language (EPL) to express event processing rules. An event processing rule contains two parts: a condition part describing the requirements for firing the rule and an action part that is performed if the condition matches. An event processing engine analyses the stream of incoming events and executes the matching rules.

UAs exploit sensor data and infer the location and behaviour of their user. For example, data read from beacons is introduced as events of type beaconEvent(beaconID). Then, the following CEP rule creates enteringSection and leavingSection events, meaning that the user is entering or leaving a certain space, respectively. The rule describes the situation that a new beaconEvent b2 has been read in the phone, where the beacon ID has changed. The symbol "->" indicates that event b1 occurs before event b2.

CONDITION: beaconEvent AS b1 -> beaconEvent AS b2 ∧ b1.id <> b2.id
ACTION: CREATE enteringSection(userID, b2)
        CREATE leavingSection(userID, b1)

enteringSection and leavingSection events, as well as others like runningEvent (generated by a CEP rule that checks if the average velocity of the user is higher than 5 km/h for the last 10 seconds), are forwarded to the SB agent monitoring the user's location area. SB agents receive processed events from the UAs in their area. That information, as well as the information obtained from smart building sensors, is incorporated into a stream of events. Again, the event stream is processed by the CEP engine, generating more abstract and relevant situation awareness information. For instance, a panic event can be inferred if more than 40% of the persons in a certain section of the building emit a running event.

Finally, situation awareness information, in the form of events, is then transformed into a semantic representation, namely RDF facts. Afterwards, the situation information is ready to be consumed by semantic inference engines. We use OWL ontologies to represent information semantically in our system (user preferences, building topology, emergency knowledge, building situation). Semantic representations provide the means to easily obtain inferred knowledge. For example, if we define a class DisabledPerson to represent people with at least one disability, then we can infer disabled people even though they have not been explicitly described as instances of that class. For more complex reasoning tasks, we use rules on top of our OWL ontologies, which typically add new inferred knowledge. In particular, we use rules to determine the accessibility of certain sections in the building, and to select possible evacuation routes.
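To illustrate how such a higher-level situation-awareness rule could be expressed outside a dedicated EPL, the following Python sketch mimics the panic inference mentioned above (more than 40% of the persons in a section emitting running events within a sliding window). The window length and the event representation are assumptions of the sketch, not of the deployed CEP rules.

from collections import deque
import time

class PanicDetector:
    """Raise a panic event for a section if, within a sliding window, more than
    a given fraction of the users currently located there have emitted a running event."""

    def __init__(self, window_s=10.0, threshold=0.4):
        self.window_s = window_s
        self.threshold = threshold
        self.running = deque()   # (timestamp, user_id) of recent runningEvents

    def on_running_event(self, user_id, now=None):
        self.running.append((now if now is not None else time.time(), user_id))

    def check_panic(self, users_in_section, now=None):
        now = now if now is not None else time.time()
        # drop running events that fell out of the sliding window
        while self.running and now - self.running[0][0] > self.window_s:
            self.running.popleft()
        runners = {uid for _, uid in self.running if uid in users_in_section}
        return bool(users_in_section) and len(runners) / len(users_in_section) > self.threshold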
Personalized Route Recommendation

Our aim is to safely evacuate all the evacuees (or at least as many as possible) within an allotted upper time limit. This limit is usually given by the authorities in charge of evacuation.

Initially, we rely on the existence of a set of predefined evacuation routes. This set is independent of user constraints. The set of routes is analysed with the objective of generating personalised efficient evacuation routes, i.e. sets of alternative routes for each particular user considering the current situation of the building and user constraints (e.g. wheelchair, blind, kids, ...). This is carried out in two steps. First, those routes that are not time-efficient (e.g. their expected evacuation time is not within the time limit) are filtered out. Next, using a rule-based system, safe personalised plans for each user are created. These routes only include traversing sections that are accessible for that particular user (e.g. avoiding paths through staircases if the person uses a wheelchair). Semantic rules and OWL reasoning are used in this task. For example, the following Jena3 rule identifies staircase sections that are not accessible for people in a wheelchair:

(?user :hasDisability :Wheelchair) (?section rdf:type :Staircase) -> (?section :notAccessibleFor ?user)

The personalised efficient evacuation routes need to be ranked so as to select one route for each person in the building. We represent the evacuation network by a directed graph G = (N, A), where N is a set of n nodes representing sections, and A is the set of m arcs a = (i, j), i, j ∈ N and i ≠ j, representing walkways, doors, gateways, and passages connecting sections i and j. Let O ⊆ N and D ⊆ N be the sets of all evacuees' origins and safe exit destinations, respectively. We model the evacuation as a unified crowd flow: each individual is seen as a unit element (particle) of that flow and the objective is to maximize the flow of demands (evacuation requests) subject to certain constraints. We consider travel time optimization with path safety, envy-freeness (fairness) and agile paths.

Route Safety. Our objective is to safely evacuate as many evacuees as possible from all origins o ∈ O over the safest and most efficient evacuation paths to any of the safe exits d ∈ D. Let us assume that a safety status S_a is given for each arc a ∈ A as a function of safety conditions that can be jeopardized by a hazard. Safety can be calculated from sensor data (e.g. temperature, smoke, ...), using space propagation models and aggregation functions to combine the different influences and variables measured. A thorough description of this field can be found in, e.g., [35,36]. We normalize it to the range [0, 1], such that 1 represents perfect conditions while 0 represents conditions impossible for survival, with a critical level for survival 0 < S_cr < 1 depending on the combination of the previously mentioned parameters.
If each constituent arc a ∈ k of a generic path k has safety S_a ≥ S_cr, then path k is considered to be safe. Otherwise, the path is considered unsafe and its harmful effects may threaten the evacuees' lives. The proposed evacuation paths should all satisfy the safety condition S_k ≥ S_cr. However, when such a path is not available, a path with maximal safety should be proposed, where the travel time spent in the safety-jeopardized areas should be minimized. Since safety may vary throughout a path, we introduce a normalized path safety S_k that balances the minimal and the average arc safety values over the set P_o of simple paths from origin o ∈ O to an exit.

Fair route recommendation. An adequate notion of fairness of evacuation route recommendations is important to assure the trustworthiness of the system from the evacuees' viewpoint [33]: the guidance should not only achieve good overall performance of the evacuation process, but must also generate evacuation routes for each of the evacuees that each of them perceives as efficient and fair. For example, if there are two close-by evacuees at some building location, they should be proposed the same evacuation route, and if that is not possible, then routes with similar safety conditions and evacuation time.

We aim at proposing available safe simple paths with maximized safety that are acceptable in terms of duration in free flow for each evacuation origin. By acceptable in terms of duration in free flow, we mean the paths whose traversal time in free flow is within an upper bound with respect to the minimum free-flow duration among all the available evacuation paths for that origin.

The concept of envy-free paths is introduced in [37]. Basically, it defines a path allocation to be α-envy-free if there is no evacuee at origin o′ that envies any other evacuee at origin o for getting assigned a path with a lower duration than the α-th power of the path duration assigned to the evacuee at o′.

Agile routes. When an unpredicted hazard occurs on a part of the evacuation route, it becomes unsafe and impassable. If, in the computation of an evacuation route, we did not consider this fact and the related possibility to reroute to other efficient evacuation routes at its intermediate nodes, then, in case of contingency, rerouting towards safe areas might be impossible, causing imminent fatalities of evacuees. A similar case may occur if, for example, a high flow of evacuees saturates an evacuation path and causes panic. Therefore, we prefer routes where each intermediate node has a sufficient number of dissimilar efficient evacuation paths towards safe exits, if possible within the maximum time of evacuation given for a specific emergency case. In that respect, evacuation centrality is defined in [38] as follows. The evacuation centrality Cε(i) of node i is a parameter that represents the importance of node i for evacuation. The value of the evacuation centrality of the node is the number of available sufficiently dissimilar time-efficient evacuation paths from that node i towards safe exits.
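Before turning to how these centrality values are combined into path agility, the sketch below illustrates the path safety check introduced earlier in this subsection. The exact expression for the normalized path safety did not survive into this version of the text, so the balancing of the minimal and the average arc safety is approximated here by their arithmetic mean; this is an assumption made purely for illustration, together with the hard check against the critical level S_cr.

def path_safety(arc_safety):
    """Normalized safety of a path, computed from its constituent arc safeties in [0, 1].

    Balances the minimal and the average arc safety by averaging the two; this is an
    illustrative stand-in for the paper's formula, not the authors' exact definition."""
    if not arc_safety:
        return 0.0
    return 0.5 * (min(arc_safety) + sum(arc_safety) / len(arc_safety))

def is_safe(arc_safety, s_cr):
    """A path is safe if every constituent arc stays at or above the critical level S_cr."""
    return all(s >= s_cr for s in arc_safety)

# path_safety([0.9, 0.7, 0.95]) -> 0.775, is_safe([0.9, 0.7, 0.95], s_cr=0.6) -> True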
Once the evacuation centrality has been computed for each node of the graph, the objective is to find an evacuation path that maximizes the overall value of the intermediate nodes' centrality measures. We call every such path an agile evacuation path: a path on which an evacuee has higher chances to re-route in case of a contingency at any of the intermediate nodes or arcs. Path agility Δ(k) is defined via the Nash product of the centrality values of the path's intermediate nodes; since we are not concerned about the number of arcs in the path, we take the |(i, j) ∈ k|-th root of the Nash product in this formula. We recommend the evacuation paths with the highest agility to evacuees and recompute this value every time the safety and/or congestion conditions change along the recommended path.

Emergency medical service coordination

The domain of medical assistance includes many tasks that require flexible, on-demand negotiation, initiation, coordination, information exchange and supervision among the different entities involved (e.g., ambulances, emergency centres, hospitals, patients, physicians, etc.). In the case of medical urgencies, in addition, the need for fast assistance is evident. It is of crucial importance for obtaining efficient results, improving care and reducing mortality, especially in the case of severe injuries. Out-of-hospital assistance in medical urgencies is usually provided by Emergency Medical Assistance (EMA) services, which use vehicles (typically ambulances) of different types to assist patients appearing at any location in a given area. In such services, the coordination of the available resources is a key factor in order to assist patients as fast as possible. The main goal here is to improve one of the key performance indicators: the response time (the time between the moment a patient calls an EMA service centre and the moment medical staff, e.g., an ambulance, arrive at his or her location and the patient can receive medical assistance).

One way to reduce response times consists in reducing the part that depends on the logistic aspects of an EMA service through an effective coordination of the assistance vehicle fleet (for simplicity, here we assume a fleet of ambulances). In this regard, there are two principal problems EMA managers are faced with: the assignment or allocation of ambulances to patients, and the location and redeployment of the ambulance fleet. The assignment or allocation problem consists in determining at each moment which ambulance should be sent to assist a given patient. The location and redeployment problem consists in locating and possibly relocating the available ambulances in the region of influence so that new patients can be assisted in the shortest time possible.
Most recent works on coordinating ambulance fleets for EMA have been dedicated to the redeployment problem. A lot of work has concentrated on the dynamic location of ambulances, where methods are proposed to redeploy ambulances during the operation of a service in order to take into account the intrinsic dynamism of EMA services (e.g. [39,40,41]). Most proposals on dynamic redeployment of ambulances only consider the possibility of relocating ambulances among different, predefined sites (stations). This requirement is relaxed in the work proposed in [42], where a number of ambulances can be relocated to any place in the region. Regarding dispatching strategies (the patient allocation problem), most works use the "nearest available ambulance" rule for assigning ambulances to patients in a first-come first-served manner. Some works analyse priority dispatching strategies to account for the severity level of patients ([42,43]).

In our previous work [44], we have proposed a system that re-allocates ambulances to patients and redeploys available ambulances in a dynamic manner in order to reduce the average response times. Our redeployment approach differs from others in the sense that we do not try to maximize the zones in a region that are covered with respect to some time limits. Instead, we use an approach based on geometric optimization that tends to optimize at each moment the positions of all ambulances that are still available, such that the expected arrival time to potential new emergency patients is minimized. With regard to the allocation of patients to ambulances, we propose a dynamic approach similar to [45] but, instead of optimizing the global travel times of all ambulances, we concentrate only on the sum of the arrival times of ambulances to the pending emergency patients. This system is summarised in this section.

We use the following notation to describe the problem and to present our solution. The set of ambulances of an EMA service is denoted by A = {a_1, ..., a_n}, where n is the cardinality of A. Even though most EMA services employ different types of ambulances, for reasons of simplicity we just consider a single type. Each ambulance has a position and an operational state which vary over time. p(a_i) and s(a_i) denote the current position and the current state of ambulance a_i, respectively. The position refers to a geographical location and the state can be one of the following:

• assigned: an ambulance that has been assigned to a patient and is moving to the patient's location.
• occupied: an ambulance that is occupied either attending a patient "in situ" or transferring him/her to a hospital.
• idle: an ambulance that has no mission at this moment.

We denote by A_A, A_O and A_I the sets of assigned, occupied and idle ambulances at a given moment.

Regarding the patients, P = {p_1, ..., p_m} denotes the current set of unattended patients at a given moment, i.e., patients that are waiting for an ambulance, where m is the cardinality of P. Each patient p_j ∈ P has a location (denoted by p(p_j)). We assume that patients do not move while they are waiting for an ambulance, thus p(p_j) is constant. Furthermore, once an ambulance has reached a patient's location in order to provide assistance, this patient is removed from P.
Dynamic re-assignment

The ambulance allocation problem consists in finding an assignment of (available) ambulances to the emergency patients that have to be attended. In current EMA services, mostly a priority dispatching strategy is used, where patients are assigned in sequential order of appearance and patients with a higher severity level are assigned first. In each case, usually the nearest idle ambulance a_i ∈ A_I is assigned. This can be seen as a first-call first-served (FCFS) rule, where patients with the same severity level that called first are also assigned first to an ambulance. After an ambulance has been assigned to a patient, this assignment is usually fixed.

The FCFS approach is not always optimal from a global perspective. First, if more than one patient has to be attended, it is not optimal in the sense that it does not minimize the response times to all patients. Furthermore, the dynamic nature of an EMA system implies that a given assignment of ambulances to patients at one point in time might not be optimal at a later point, e.g., if new patients appear or an ambulance that was occupied before becomes available again.

In order to reduce the average arrival time in the dynamic environment of an EMA service, the assignments of ambulances to patients could be optimized globally and the assignments should be recalculated whenever relevant events take place and a better solution may exist. Based on this idea, we proposed a dynamic assignment mechanism of ambulances to patients, which optimizes the assignments at a given point in time and recalculates optimal assignments when the situation changes.

Given a set of patients to be attended P and a set of ambulances that are not occupied, A_A ∪ A_I, at a specific moment, the optimal assignment of ambulances to patients is a one-to-one relation between the sets A_A ∪ A_I and P, that is, a set of pairs AS = {<a_k, p_l>, <a_s, p_q>, ...} such that the ambulances and the patients are all distinct, and that fulfils the following conditions:

• The maximum possible number of patients is assigned to ambulances, that is, |AS| = min(|A_A ∪ A_I|, |P|).
• The total expected travel time of the ambulances to their assigned patients, ∑_{<a_i, p_j> ∈ AS} ETT(p(a_i), p(p_j)), is minimized, where ETT(x, y) denotes the expected travel time for the fastest route from one geographical location x to another location y.

Calculating such an optimal assignment is a well-known problem which can be solved in cubic time, e.g., with the Hungarian method [46] or with Bertsekas' auction algorithm [47]. We propose to use the second approach because it has a naturally decentralized character and can be optimized in settings such as the one analysed here.

An optimal assignment AS at a moment t, due to the dynamic nature of an EMA service, might become suboptimal at a time t' (t' > t). The following cases need to be considered:
1. One or more new patients require assistance: in this case, the set of patients that have to be attended changes and the current assignment AS may not be optimal any more.
2. Some ambulances that were occupied at time t have finished their mission and are idle at time t'. These ambulances could eventually improve the current assignment.
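As an illustration of this static optimization step, the sketch below computes such an assignment with SciPy's Hungarian-method solver. It is only a compact stand-in for the auction algorithm the authors favour, and the straight-line ETT estimate is a placeholder for a routing-based travel-time model.

```python
# Sketch of the globally optimal ambulance-to-patient assignment (minimum total
# expected travel time). ett() is a placeholder: straight-line distance at an
# assumed average speed instead of a real route service.
import numpy as np
from scipy.optimize import linear_sum_assignment

def ett(amb_pos, pat_pos, speed_kmh=40.0):
    """Placeholder expected travel time (minutes) from straight-line distance."""
    dist_km = np.hypot(amb_pos[0] - pat_pos[0], amb_pos[1] - pat_pos[1])
    return 60.0 * dist_km / speed_kmh

def optimal_assignment(ambulances, patients):
    """ambulances, patients: lists of (x, y) positions in km.
    Returns (ambulance_index, patient_index) pairs covering as many patients as
    possible with minimum total expected travel time."""
    cost = np.array([[ett(a, p) for p in patients] for a in ambulances])
    rows, cols = linear_sum_assignment(cost)     # handles rectangular matrices
    return list(zip(rows.tolist(), cols.tolist()))

# Example: three non-occupied ambulances, two pending patients.
pairs = optimal_assignment([(0, 0), (5, 5), (10, 0)], [(1, 1), (9, 1)])
```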
Based on this analysis, we propose a dynamic system based on an event-driven architecture that recalculates the global assignment whenever one of the following events occurs: newPatient(p_j) (a new patient has entered the set P) or ambFinishedEvent(a_i) (an ambulance that was occupied before becomes idle again). In the recalculation of an existing assignment, ambulances that have already been dispatched to a patient, but have not reached the patient yet, may be de-assigned from their patients or might be re-assigned to other patients. This approach assures that the assignment AS is optimal, with regard to the average travel time to the existing patients, at any point in time.

Dynamic re-deployment

The second part of the proposed coordination approach for EMS services consists in locating and redeploying ambulances in an appropriate manner. Here, the objective is to place ambulances in such a way that the expected travel time to future emergency patients is minimized.

We address this problem by using Voronoi tessellations [48]. A Voronoi tessellation (or Voronoi diagram) is a partition of a space into a number of regions based on a set of generation points, such that for each generation point there is a corresponding region. Each region consists of the points in the space that are closer to the corresponding generation point than to any other. Formally, let W ⊆ R² denote a bounded, two-dimensional space and let S = {s_1, ..., s_g} denote a set of generation points in W. For simplicity, let W be a discrete space. The Voronoi region V_i corresponding to point s_i is defined by V_i = {y ∈ W : |y − s_i| < |y − s_j| for j = 1, ..., g and j ≠ i}, where |·| denotes the Euclidean norm. The set V(S) = {V_1, ..., V_g} with ∪_i V_i = W is called a Voronoi tessellation of S in W. A particular type of tessellation is the Centroidal Voronoi Tessellation (CVT). A centroidal Voronoi tessellation is one where each generation point s_i is located in the mass centroid of its Voronoi region with respect to some positive density function ρ on W. A CVT is a necessary condition for minimizing the associated cost function and, thus, provides a local minimum of that cost function.

Experimental results

We tested the effectiveness of the dynamic re-assignment and re-deployment approaches in different experiments simulating the operation of SUMMA112, the EMA service provider organization in the Autonomous Region of Madrid in Spain. We used a simulation tool that allows for a semi-realistic simulation of intervals of normal operation of an EMA service. The tool reproduces the whole process of attending emergency patients, from their appearance and communication with the emergency centre, through the scheduling of an ambulance and the "in situ" attendance, to the transfer of the patients to hospitals. The simulator operates in a synchronized manner based on a step-wise execution, with a step frequency of 5 seconds. That is, every 5 seconds the activities of all agents are reproduced, leading to a new global state of the system. In the simulations we are mainly interested in analysing the movements of ambulances and the resulting arrival times to the patients. The movements are simulated using an external route service to reproduce semi-realistic movements on the actual road network with a velocity adapted to the type of road. External factors, like traffic conditions or others, are ignored. The duration of the phone call between a patient and the emergency centre and the attendance time "in situ" are set to 2 and 15 minutes, respectively. As the area of consideration, we used a rectangle of 125 × 133 km that covers the whole region of Madrid. For calculating the probability distribution of upcoming patients, we divided the region into cells of size 1300 × 1300 meters. A different probability distribution is estimated for each day of the week and each hour from statistical data (patient data from the whole year 2009). We used 29 hospitals (all located at their real positions) and 29 ambulances with advanced life support (as used by SUMMA112 in 2009), and we simulated the operation of the service for 10 different days (24 h periods) with patient data from 2009 (in total 1609 patients).
The days were chosen to have a representation of high, medium and low workloads. We only take into account so-called level 0 patients, i.e., patients in a life-threatening situation.

We compare two approaches:
• SUMMA112: the classical approach (used by SUMMA112). Patients are assigned to the closest ambulances using a fixed FCFS strategy. Furthermore, ambulances are positioned at fixed stations (at the hospitals), waiting for missions. After finishing any mission, the ambulances return to their station.
• DRARD: in this case, the dynamic re-assignment and re-deployment methods are employed.

With regard to dynamic re-deployment, idle ambulances only move to a new recommended position if it is farther away than 500 meters. This is to avoid short, continuous movements. Table 1 presents the average arrival times (in minutes) obtained with the two models in simulations for the 10 selected days. As the results show, the use of the DRARD approach provides a considerable improvement (between around 10 and 20%). If we look at all 1609 patients, the average times are 11:45 and 9:54 minutes, respectively, which implies an improvement of 15.8%. In Figure 9 we present the distribution of arrival times for the different approaches for all 1609 patients of the 10 selected days. The patients in each curve are ordered by increasing arrival time. A clear difference can be observed between the DRARD method and the current operation model of SUMMA112. The results are clearly better for almost all arrival time ranges. Furthermore, the most important improvements can be observed in the range of higher times. This is a very positive effect because it assures that more patients can be attended within given response time objectives. For example, out of the 1609 patients, 1163 (72.3%) are reached within 14 min in the SUMMA model, whereas this number increases to 1356 patients (84.3%) with DRARD. As shown in the results, the proposed dynamic re-assignment and re-deployment methods clearly improve the efficiency of an EMA service in terms of reducing response time. However, the approaches, in particular dynamic re-deployment, introduce an extra cost. Since the mechanism is based on an almost continuous repositioning of idle ambulances, the travel distances the ambulances have to cover increase. Considering the 10 days, the average distance each ambulance has to cover each day in the SUMMA model is 95.48 km, whereas it is 299.97 km for the DRARD approach. That is, ambulances have to travel about three times the distance because of frequent location changes. It is a political decision whether this extra effort is acceptable in order to improve the quality of service. In any case, compared to augmenting the number of ambulances in order to reduce response times, the DRARD approach appears to be a less costly alternative. In this sense, we have also executed the DRARD method with fewer ambulances: roughly the same average arrival time as obtained with 29 ambulances in the SUMMA model can be obtained with 21 ambulances and the DRARD approach.
Distributed coordination of emergency medical service for angioplasty patients

According to World Health Organization data, ischemic heart disease (IHD) is the single most frequent cause of death, killing 8.76 million people in 2015, and one of the leading causes of death globally over the last 15 years [50]. It is a disease characterized by ischaemia (reduced blood supply) of the heart muscle, usually due to coronary artery disease. At any stage of coronary artery disease, the acute rupture of an atheromatous plaque may lead to an acute myocardial infarction (AMI), also called a heart attack. AMI can be classified into acute myocardial infarction with ST-segment elevation (STEMI) and without ST elevation (NSTEMI). Effective and rapid coronary reperfusion is the most important goal in the treatment of patients with STEMI.

One of the reperfusion methods is angioplasty or primary percutaneous coronary intervention (PCI). It is the preferred treatment when feasible and when performed within 90 minutes after the first medical contact [51,52]. Due to insufficient EMS coordination and organizational issues, elevated patient delay time, defined as the period from the onset of STEMI symptoms to the provision of reperfusion therapy, remains a major reason why angioplasty has not become the definitive treatment in many hospitals.

The conventional EMS procedure for assisting AMI emergencies is the following. Patients are diagnosed in the place where they suffer chest pain: at their momentary out-of-hospital location or at a health centre without angioplasty. In both cases the medical Emergency Coordination Centre (ECC) applies a First-Come-First-Served (FCFS) strategy: it locates the nearest available (idle) ambulance with Advanced Life Support (ALS) and dispatches it to pick up the patient. After the ambulance arrives at the scene and diagnoses AMI by an electrocardiogram, it confirms the diagnosis to the ECC, which has real-time information on the states of the ambulances. The ECC then applies the FCFS strategy for hospital and cardiology team assignment by locating the nearest available hospital with a catheterization laboratory and alerting the cardiology team of that same hospital.

Improvements of EMS coordination in the literature are achieved both by novel real-time fleet optimization and communication methods and by new multiagent models; see, e.g., [52,54,55]. Despite the exhaustive quantity of work on the optimization of EMA, to the best of our knowledge there is little work on optimization models for the coordination of EMS when the arrival of multiple EMS actors needs to be coordinated for the beginning of the patients' treatment. This is the case with STEMI patients assigned for angioplasty treatment where, in the case of multiple angioplasty patients, the FCFS strategy discriminates against the patients appearing later.
EMA coordination for STEMI patients includes the assignment of three groups of actors: the assignment of idle ambulances to patients, the assignment of catheterization laboratories in available hospitals to patients receiving assistance in situ, and the assignment of available cardiology teams to hospitals for the performance of the angioplasty procedure. All three assignments need to be combined in a region of interest such that the shortest arrival times are guaranteed to all patients awaiting angioplasty at the same time. In what follows, we present the solution approach from [52], which proposes a coordination model for EMS participants in the assistance of angioplasty patients. The proposed approach is also applicable to emergency patients of any pathology needing prehospital acute medical care and urgent hospital treatment.

We concentrate on the minimization of the patient delay, understood as the time passed from the moment the patient contacts the medical emergency coordination centre (ECC) to the moment the patient starts reperfusion therapy in the hospital. The patient delay defined in this way is composed of the following parts (Figure 10):
T1: emergency call response and decision making for the assignment of EMS resources;
T2: mobilization of an idle ambulance and its transit from its current position to the patient;
T3: patient assistance in situ by the ambulance staff;
T4: patient transport in the ambulance to the assigned hospital;
T5: cardiology team transport from its momentary out-of-hospital position to the hospital;
T6: expected waiting time due to previous patients in the catheterization laboratory (if any).

The optimal patient delay time for a single patient is the lowest, over all available ambulances and angioplasty-enabled hospitals, among the highest values of the following three times (Figure 10):
• the expected patient delay time to hospital (the sum of times T2, T3, and T4, represented below by the parameters t(a,p), t(p), and t(p,h), respectively);
• the expected minimal arrival time among cardiology teams to the same hospital (T5), represented by min_{c∈Cav} t(c,h);
• the expected shortest waiting time until hospital h gets free for patient p, min ρ_{h,p} (T6).
For simplicity, we let t_{p,hp} = max_{h∈Hav}(t(p,h), min ρ_{h,p}) for all patients p ∈ P. Then, from the global point of view, considering all pending out-of-hospital patients, the problem transforms into the global optimization problem (1)-(2), which minimizes the sum of the expected patient delay times over all pending patients subject to the resource-assignment constraints. The overall patient delay time Δt_P in Figure 10 is an additive function. Since the minimum arrival times cannot always be guaranteed for all patients due to the limited number of EMS resources, a sum of the EMA tasks' durations should be minimized for each patient individually and for the system globally, considering individual constraints. This gives an underlying linear programming structure to the EMS coordination problem. Therefore, it is possible to guarantee optimal outcomes even when the optimization is performed separately on individual sum components, i.e., when ambulance assignments are negotiated separately from the hospital and cardiology team assignment, e.g., [56,57]. This fact significantly facilitates the multiagent system's distribution and enables a multi-level optimization. Hence, we decompose optimization problem (1)-(2) as follows. On the first level, we assign ambulances to patients such that the expected arrival time of ambulances to patients t(a,p) is minimized. Note that since t(p) in (1) is a constant for every patient p, depending only on the patient's pathology and not on the assigned ambulance, we can exclude it from the optimization.

Then, on the second optimization level, we approach the second part of (1), which is an NP-hard combinatorial problem. However, by approximating (1) with a sequence of problems where we first decide on the assignment of hospitals to pending patients and then assign cardiologists to patients already assigned to hospitals, we obtain two linear programs to which we can apply tractable optimal solution approaches such as the auction algorithm [47]. By decomposing (1) as shown in Figure 11 and allowing for the reassignment of resources based on real-time adaptation to contingencies, we obtain a flexible EMS coordination solution.

Figure 11. Proposed three-level decomposition of the problem of EMS coordination for STEMI patients awaiting angioplasty.

In the following, we propose a change from the centralized, hierarchy-oriented organizational structure to a patient-oriented distributed organizational structure of EMS that increases the flexibility, scalability, and responsiveness of the EMS system. The proposed decision-support system is based on the integration and coordination of all the phases EMS participants go through in the process of emergency medical assistance (EMA). The model takes into consideration the positions of ambulances, patients, hospitals, and cardiology teams for the real-time assignment of patients to the EMS resources.

EMA for angioplasty patients

The emergency medical system for the assistance of patients with STEMI is made up of the following participants: patients, hospitals with angioplasty facilities, the Medical Emergency Coordination Centre (ECC), ambulance staff, and cardiology teams, each composed of a cardiologist and one or more nurses. Usually, each hospital with angioplasty has its own cardiology team(s) assigned to it, positioned on alert outside the hospital and obliged to come to the hospital in case of an emergency. This is because the cardiology teams' costs make up a large portion of the overall costs in surgical services [55].
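As a concrete illustration of the delay model formalized above, the sketch below computes, for a single patient, the hospital that minimizes the moment at which treatment can start: the maximum of the patient's arrival time (T2 + T3 + T4), the earliest cardiology-team arrival (T5), and the laboratory waiting time (T6). The call-handling time T1 is omitted, and all function and variable names are assumptions for illustration.

```python
# Illustrative single-patient delay computation; not the authors' implementation.
def patient_delay(t_ap, t_p, t_ph, t_ch, rho_hp):
    """t_ap: ambulance->patient travel time; t_p: in-situ assistance time;
    t_ph[h]: patient->hospital h transport time;
    t_ch[h]: list of cardiology-team->hospital h travel times;
    rho_hp[h]: expected waiting time until hospital h is free for this patient."""
    best_h, best_delay = None, float("inf")
    for h in t_ph:
        arrival = t_ap + t_p + t_ph[h]                 # T2 + T3 + T4
        start = max(arrival, min(t_ch[h]), rho_hp[h])  # wait for team and free lab
        if start < best_delay:
            best_h, best_delay = h, start
    return best_h, best_delay

# Example with two angioplasty-enabled hospitals and two out-of-hospital teams each.
h, delay = patient_delay(
    t_ap=8, t_p=15,
    t_ph={"H1": 12, "H2": 20},
    t_ch={"H1": [25, 40], "H2": [10, 18]},
    rho_hp={"H1": 0, "H2": 5},
)   # -> ("H1", 35)
```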
The objective of the proposed system is the reduction of patient delay times by distributed real-time optimization of decision-making processes. In more detail, we model the patient delay time and present a three-level problem decomposition for the minimization of the combined arrival times of the multiple EMS actors necessary for angioplasty. For the three decomposition levels, we propose a distributed EMS coordination approach and modify the auction algorithm proposed by Bertsekas in [47] for this specific case. The latter is a distributed relaxation method that finds an optimal solution to the assignment problem.

On the first level, agents representing ambulances find in a distributed way the patient assignment that minimizes the arrival times of available ambulances to patients. After the treatment in situ, on the second optimization level, ambulances carrying patients are assigned to available hospitals. On the third level, the arrival times of cardiology teams to hospitals are coordinated with the arrival times of patients. The proposed approach is based on a global view, not concentrating only on minimizing a single patient's delay time, but obtaining the EMS system's best solution with respect to the (temporal and spatial) distribution of patients in a region of interest.

Simulation experiments

In this section, we describe the settings, experiments, and results of the simulated emergency scenarios that demonstrate the efficiency of the coordination procedure and a significant reduction in the average patient delay. We test the proposed approach for the coordination of EMS resources in the assistance of angioplasty patients, focusing on the average patient delay time in the case of multiple pending patients. We compare the performance of our approach with the FCFS method since it is applied by most of the medical emergency-coordination centres worldwide.

To demonstrate the scalability of our solution and its potential application to small, medium and large cities and regions, in the experiments we vary the number of EMA ambulances from 5 to 100 in increments of 5 and the number of angioplasty-capable hospitals from 2 to 50 in increments of 2. The number of cardiology teams |C| in each experiment equals the number of hospitals |H|. Thus, the number of setup configurations used, combining different numbers of ambulances and hospitals with cardiology teams, sums up to 500.

For each configuration, we perform the simulation on 3 different instances of random EMS participants' positions, since we want to simulate a sufficiently general setting applicable to any urban area that does not represent any region in particular. The EMS participants are distributed across an environment whose dimensions are 50 × 50 km. In each instance, we model the hospital positions and the initial positions of ambulances, out-of-hospital cardiology teams, and patients based on a continuous uniform distribution. Therefore, each configuration can be considered as a unique virtual city with its EMS system. Assuming that the EMS system is placed in a highly dense urban area, this kind of modelling of the positions of EMS participants represents a general enough real case, since the choice of hospital positions in urban areas is usually the result of a series of decisions developing over time with a certain stochasticity, influenced by multiple political and demographic factors.
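The following sketch shows how one such random instance could be generated; the function name, the seed, and the one-team-per-hospital simplification are assumptions for illustration only.

```python
# Illustrative random-instance generator (not the authors' simulator). Hospitals,
# out-of-hospital cardiology teams and patients are drawn uniformly in a 50 x 50 km
# area; ambulances start at randomly chosen hospital base stations, as in the
# simulation setup described below.
import numpy as np

def random_instance(n_hospitals, n_ambulances, n_patients, size_km=50.0, seed=0):
    rng = np.random.default_rng(seed)
    uniform = lambda n: rng.uniform(0.0, size_km, size=(n, 2))
    hospitals = uniform(n_hospitals)
    return {
        "hospitals": hospitals,
        "teams": uniform(n_hospitals),                                  # |C| = |H|
        "ambulances": hospitals[rng.integers(0, n_hospitals, n_ambulances)],
        "patients": uniform(n_patients),
    }

instance = random_instance(n_hospitals=10, n_ambulances=25, n_patients=300)
```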
In the simulations, ambulances are initially assigned to the base stations in the hospitals of the region of interest. Additionally, we assume that after transferring a patient to the hospital, an ambulance is redirected to the base station, where it waits for the next patient assignment. Furthermore, we assume that the hospitals have at their disposal a sufficient number of catheterization laboratories, so that the only optimization factor from the hospital point of view is the number of available cardiology teams. If there are several patients with the same urgency already assigned and waiting for treatment in the same hospital, they are put in a queue.

The simulation of each instance is run over a temporal horizon in which new patients are generated based on a certain appearance frequency. The EMS resources are dynamically coordinated from the appearance of a patient until the time s/he is assisted in hospital by a cardiology team. Each instance simulation is run over a total of 300 patients whose appearance is distributed equally along the overall time horizon based on the following two predetermined frequency scenarios: low (1 new patient every 10 time periods) and high (1 new patient every 2 time periods).

The period between two consecutive executions of the EMS coordination algorithm is considered here as a minimum time interval in which the assignment decisions are made; usually it ranges from 1 to 15 minutes. In each period, the actual state of EMS resources and pending patients is detected and the EMS coordination is performed such that the EMS resources are (re)assigned for all patients. To achieve an efficient dynamic reassignment of ambulances, the execution of the EMS coordination algorithm is furthermore performed with every new significant event, i.e., any time there is a significant change in the system due to new patients or a significant change in the travel time or state of any of the EMS participants.

In the experiments, we test the performance of the proposed EMS coordination method with respect to the FCFS benchmark approach. The comparison is based on the relative performance function P = (t_FCFS − t_OR)/t_FCFS · 100 [%], where t_FCFS and t_OR are the average patient delay times of the benchmark FCFS approach and the proposed model, respectively. The simulation results of the performance function P for the two simulated cases of patient appearance frequency of 1 and 5 new patients every 10 time periods are presented in the following. The performance of the proposed approach increases as the number of angioplasty-enabled hospitals increases, from an almost identical average patient delay in the configuration with 2 hospitals up to 87.14% with 50 hospitals, as can be seen in Figure 13. Observing the performance dynamics with respect to the varying number of hospitals, it is evident that the performance of the proposed EMS coordination method increases on average proportionally to the increase in the number of hospitals.
With a relatively low number of angioplasty-enabled hospitals (less than 15), our proposed EMS coordination approach performs on average up to 15% better than FCFS. As the number of hospitals increases, the performance improves on average up to a maximum of 39.98% for the first case (Figure 12) and up to 87.14% for the second case (Figure 13). However, the mean patient delay improvement for the two cases is 35% and 45.5%, respectively. The static assignment of the FCFS principle discriminates against patients appearing later. Since ambulances are not equally distributed in the area, the proposed EMS coordination method compensates for the lack of EMS resources and their unequal distribution by reassigning them dynamically to pending patients. Dynamically optimized reassignment of EMS resources in real time is the main key to the improvement of the system's performance. Thus, proportionally to the increase in the number of hospitals, there is a constant improvement of performance. Even though the velocity of the EMS actors is not a relevant factor in the comparison of the performance of our proposed EMS coordination solution and the FCFS method, looking individually at the performance of each of these methods it is evident that the assignment cost accumulated over time will be lower when the velocity of the EMS actors is higher.

Our simulation results show the efficiency of the proposed solution approach, resulting in significantly lower delay times for angioplasty on average. Of course, the effectiveness of the proposed model depends on the initial classification of patients, and the related determination of the urgency of their cases, as well as on the timely availability of cardiology teams and hospitals. Still, as current experience shows, good-quality patient assessments and EMS resource availability can be assured in practice.

To implement our approach in practice, a patient's location needs to be known to the system. Ideally, patients should contact the ECC through a mobile phone with GPS for easier location. In addition, ambulances should have a GPS and a navigator for localizing the patient and navigating the way to him/her, as well as a means of communication with the rest of the EMS participants and a digitalized map showing ambulances, patients and hospitals. Moreover, hospitals should have a digitalized receptionist service to receive and process relevant data of a patient before his/her arrival. None of these requirements goes significantly beyond the current state of affairs in major cities (such as Madrid). Furthermore, there are intrinsic uncertainties present in EMS coordination. In our experiments, we assume that travel times can be accurately forecasted, which, of course, is an important factor for the performance of the proposed system. In reality, this may not always be the case, as real-world traffic conditions are notoriously hard to predict. However, there is abundant literature on traffic-aware vehicle route guidance systems tackling this problem, and we believe that such systems can be easily integrated into our approach. Still, an effective proof of this conjecture is left to future work.
Coordination of transportation fleets of autonomous drivers

A problem similar to emergency medical service coordination consists in coordinating fleets for transportation in an urban area, e.g., messaging services or taxi fleets. However, in contrast to medical emergency services, here the objective is not only focused on response time, but also on cost efficiency. Furthermore, a primary characteristic of such systems, at least with the boom of the collaborative economy, is that such fleets may be open [58] in the sense that private persons may participate in the fleet as autonomous workers, with their own vehicle and at different time intervals. With regard to the coordination of such fleets, the autonomy of the drivers is a crucial characteristic. It implies that the drivers get their income on a per-service basis instead of a monthly salary. This means that, besides accepting a set of basic rules, drivers are more concerned with the actual service they have been assigned by the system and may have more freedom to accept or decline assignment decisions.

Maciejewski et al. [59] present a real-time dispatching strategy based on solving the optimal taxi assignment among idle taxis and pending requests at certain intervals or whenever new events (new customer/available taxi) take place. Zhu and Prabhakar [60] analyse how suboptimal individual decisions lead to global inefficiencies. While most existing approaches try to minimize the average waiting time of customers, other works have a different focus. BAMOTR [61] provides a mechanism for the fair assignment of drivers, i.e. to minimize the differences in income among the taxi drivers; for that, it minimizes a combination of taxi income and extra waiting time. Gao et al. [62] propose an optimal multi-taxi dispatching method with a utility function that combines the total net profits of taxis and the waiting time of passengers. Meghjani and Marczuk [63] propose a hybrid path search for fast, efficient and reliable assignment to minimize the total travel cost with limited knowledge of the network. In contrast to the previous works, the main characteristic of our approach is the possibility of modifying the assignment when a taxi has been dispatched but has not yet picked up a customer. In this sense, we followed an approach similar to the emergency medical service coordination presented in section 3.3. One of the few other works in this line is [64], which presented an adaptive scheduling approach in which reassignment is possible during a time interval until the pick-up order is sent to the taxi and the customer. In our case, we do not restrict reassignment to a specific interval. Furthermore, we propose a method that economically compensates the taxis negatively affected by the new schedule, such that they do not suffer a loss in their income. We consider a system that uses a mediator in charge of matching the transportation requests with vehicles of the fleet. On the one hand, customers contact the mediator via some telematic means, requesting a transportation service. On the other hand, drivers subscribe to the system, offering their services during specific time intervals in which they are available.
We assume a payment structure for transportation services where clients pay a fixed price plus a fare per distance, as described in the following:

• From the client side, the price of a transportation service s is determined by the distance of the requested service plus a fixed cost: Price(s) = fcost + fare · d(s), where d(s) denotes the distance from the service origin to its destination, fcost is a fixed cost, and fare is the rate a client has to pay per distance unit.
• From the driver or vehicle side, for a driver v the earnings depend on the price the client pays minus the cost for the vehicle of travelling the distance from the current position of v to the origin position po(s) and then to the destination: Earn(v, s) = Price(s) − vcost · (d(v,s) + d(s)), where d(v,s) denotes the distance from the current position of the vehicle at the moment of assigning a service s to the origin point of that service, and vcost is the actual cost rate of moving the vehicle on a per-distance basis. vcost implicitly includes petrol, maintenance, depreciation of the vehicle, etc.

For simplicity, here we assume that fcost, fare and vcost are the same for all services and vehicles, that is, in the price structure we do not distinguish between different vehicle costs nor between different requests. If the use of different cost factors is important, the proposed model could be adapted accordingly. Furthermore, part or all of the amount of fcost could be retained by the mediator service as income. In this case, the earnings of the driver would be reduced by this amount.

As in the emergency medical case, a typical approach for assigning services to vehicles in such a system is the first-call/first-served (FCFS) rule, where each incoming request is assigned to the closest available driver at that moment and no re-assignments are done. As shown in [59], if there are more unassigned requests than available vehicles, it is better to assign vehicles to service requests. We call this strategy nearest-vehicle/nearest-request (NVNR). Dynamic assignment strategies can improve the overall efficiency of a fleet. We assume that drivers, once they are available, are obliged to accept a transportation service assigned to them. However, in contrast to the emergency medical scenario presented in the previous section, drivers do not have to accept changes in the assignments. That means they are free to accept or decline proposed re-assignments from the system, considering their own objectives and benefit. So, the dynamic re-assignment approach, as proposed in section 3.3.1, cannot be applied directly, as a driver would not be willing to accept a re-assignment that reduces his/her net income. In order to still take advantage of cost reductions through re-assignment, we developed an incentive scheme so as to convince drivers to accept re-assignments that are economically efficient from the global perspective of the system. The approach is detailed in the next subsection.
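A minimal sketch of this payment structure in code follows; the numeric parameter values are placeholders only (the Madrid values actually used in the experiments are given in the Evaluation below).

```python
# Sketch of the client price and driver earnings just described (placeholder values).
FCOST = 2.5   # fixed cost per service in euros (placeholder)
FARE = 1.0    # client fare per km (placeholder)
VCOST = 0.2   # vehicle cost per km (placeholder)

def price(d_s):
    """Client price for a service whose origin-to-destination distance is d_s km."""
    return FCOST + FARE * d_s

def earn(d_vs, d_s):
    """Driver net earnings: the client price minus the vehicle cost for the approach
    distance d_vs (current position -> service origin) plus the service distance d_s."""
    return price(d_s) - VCOST * (d_vs + d_s)
```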
Dynamic re-assignment with compensations

The idea of coordinating the assignments of transportation tasks to (autonomous) drivers is similar to the one proposed in section 3.3.1: we want to reduce globally the total travel distance (and, proportionally, time) towards the origin points of the requested services. However, due to the rules of the system, a driver will usually not accept a "worse" task than the one he is already assigned to. We assume drivers to be economically rational, that is, they want to maximize their net income and minimize the time spent on their trips. In particular, we make the following assumptions:

• A driver would always prefer a task with the same net income that requires less time (e.g., less travel distance).
• A driver would always accept to cover an extra distance d if he received extra net earnings of d × (fare − vcost). This is actually the current rate a driver earns when accomplishing a service and, thus, he would always be willing to provide his service at this rate.

Let us consider that a driver v is currently assigned to a service sk and the mediator wants the driver to carry out another service sj instead of sk. Furthermore, let td(v,s) = d(s) + d(v,s) denote the total distance driver v has to cover in order to serve service s. In order to convince the driver, we define a compensation c that is applied if a driver accepts the re-assignment. If the new service implies a larger total distance, the compensation is chosen such that the effective income of the driver, when accepting the re-assignment and receiving the compensation c, is Earn(v, sk) + (td(v,sj) − td(v,sk)) × (fare − vcost). That is, the driver receives the same income as before, plus the normal rate for the extra distance. If the new service implies a smaller total distance, the compensation is chosen such that the effective new income of the driver is Earn(v, sk). Thus, the driver would have the same earnings as before, but for less distance (and less time).

It is clear that, with the assumptions mentioned above, an economically rational driver would accept any re-assignment with the defined compensations.

It should be noted that compensations may be positive or negative, i.e., a driver may get extra money for accepting a new service or s/he may have to pay some amount to the mediator. For instance, if a driver is proposed a re-assignment from a service sk to another service sj with d(sk) = d(sj) and d(v,sk) > d(v,sj) (the second case above), the situation is a priori positive for the driver. S/he would have less distance to the starting point of the transportation service request but would earn the same money for the service itself; thus, his/her net income would be higher. In this case, the compensation would be negative, with the amount c = vcost × (d(v,sj) − d(v,sk)). That is, the driver would have to pay the cost of the difference in distances towards sj with respect to the previous service sk.

The idea of the mediator is to dynamically find global re-assignments with compensations such that the overall outcome of the mediator is zero or positive, i.e., there would be no extra mediation cost. Given an existing assignment Ac at a given time, the algorithm we propose for calculating a new assignment An with compensations is summarized as follows:
1. Assign all pending transportation requests to vehicles using the NVNR rule and add the assignments to Ac.
2. Calculate an optimal assignment An between all vehicles and the requests assigned in Ac.
3. Calculate the overall compensation Co to be paid to / received from the drivers for the change from Ac to An.
4. If mediatorEarning − Co > 0 then
5. mediatorEarning := mediatorEarning − Co
6. return An
else return Ac.

The algorithm is executed by the mediator whenever either a new transportation request (service) is registered or a driver becomes available (either after terminating a previous mission or because he starts working). In the first step, the system tries to assign pending requests in a rather standard fashion. Then, in steps 2 and 3, a more efficient global assignment is searched for and the compensation cost of this new assignment is estimated. The new assignment is applied if the accumulated overall mediator earnings, after paying the compensation cost, remain positive. This last part assures that the mediator has no extra mediation cost.

Regarding step 2, we use Bertsekas' auction algorithm [47] to calculate an optimal assignment. In particular, we calculate the assignment An that minimizes D(An) + g × Co, where D(An) is the sum of the distances of all vehicles in An to the corresponding origin positions of the assigned service requests. This means we look for assignments that minimize the sum of the distances and also the potential cost of the compensations. g is a factor for scaling monetary earnings into distance values (meters).

Evaluation

We tested the proposed approach in different experiments simulating the operation of a taxi fleet, which basically has the characteristics of the type of fleet we want to address here. We used an operation area of about 9 × 9 km, an area that roughly corresponds to the city centre of Madrid, Spain. In the simulations we randomly generate service requests (customers) who are assigned to available taxis, and we simulate the movement of taxis to pick up a customer, drive him/her to the destination and then wait for the assignment of a new customer. The simulations are not aimed at reproducing all relevant aspects of the real-world operation of a taxi fleet, but at analysing and comparing the proposed coordination strategy (here called DYNRA) with the standard strategies FCFS and NVNR. Thus, we simplified the movements of taxis to straight-line movements with a constant velocity of 17 km/h. This velocity is within the range of the average velocity in the city centre of Madrid. Hence, we take into account neither the real road network nor the possibility of different traffic conditions.

The general parameters used in the simulation are as follows. We use 1000 taxis (initially distributed randomly in the area with a uniform distribution) and a simulation interval of 5 hours. The taxis do not cruise, that is, they only move if they are assigned to a customer. We perform different simulation runs with different numbers of customers in order to represent different supply/demand ratios. We generate a fixed number of customers every 15 minutes (ranging from 250 to 1000 in steps of 125). For each customer, his/her origin (point of appearance) and destination location are randomly chosen such that each trip goes either from the outside of the area to the centre, or vice versa. The origin and destination points are generated using a normal distribution (for centre and outside points). When a taxi arrives at a customer's location, a pick-up time of 30 seconds is used during which the taxi does not move. In the same sense, the simulated drop-off time is 90 seconds. The system assignment process is executed every 5 seconds, and only if a new client has appeared or a taxi has become available again after a previous trip.
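Before turning to the payment parameters used in the experiments, the sketch below illustrates the compensation rule and the mediator's acceptance check (steps 3-4 of the algorithm above). The closed-form compensation is an assumption derived from the two cases described in the previous subsection, and the numeric values are the Madrid parameters reported in the next paragraph.

```python
# Illustrative sketch of the compensation rule and the mediator's acceptance check.
# ASSUMPTION: the driver keeps the earnings of the old service and, in addition, is
# paid the normal per-km margin only for any extra total distance, consistent with
# the two cases discussed in the text.
FCOST, FARE, VCOST = 2.4, 1.05, 0.2     # euros, euros/km, euros/km (Madrid values)

def earn(d_vs, d_s):
    """Driver net earnings for a service of length d_s km with approach distance d_vs km."""
    return FCOST + FARE * d_s - VCOST * (d_vs + d_s)

def compensation(d_v_sk, d_sk, d_v_sj, d_sj):
    """Compensation c (paid to the driver if positive, by the driver if negative)
    for swapping the currently assigned service sk for the new service sj."""
    td_old = d_v_sk + d_sk                  # td(v, sk)
    td_new = d_v_sj + d_sj                  # td(v, sj)
    extra = max(0.0, td_new - td_old)       # only extra distance is paid at full margin
    target_income = earn(d_v_sk, d_sk) + extra * (FARE - VCOST)
    return target_income - earn(d_v_sj, d_sj)

def accept_new_assignment(mediator_earning, total_compensation):
    """Step 4: apply the new global assignment only if the mediator stays solvent."""
    return mediator_earning - total_compensation > 0

# Second case from the text: same service length, shorter approach -> negative c.
c = compensation(d_v_sk=3.0, d_sk=5.0, d_v_sj=1.0, d_sj=5.0)   # c = 0.2 * (1 - 3) = -0.4
```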
The payment scheme we used in the experiments is the one that has been used in the city of Madrid in recent years. A taxi trip has a fixed cost of fcost = 2.4 euros and a fare of 1.05 euros/km. Furthermore, the cost factor is vcost = 0.2 euros/km. This factor roughly corresponds to the actual cost of a vehicle, including petrol, maintenance, as well as other fixed costs. Finally, we apply a factor of g = 1/0.00085, which corresponds to the net benefit a taxi receives per meter when transporting a client in the used payment scheme.

Each experiment is repeated 10 times with a different random seed, in order to avoid biased results due to a particular distribution of clients. The presented results are averages over those 10 runs.

Table 2 presents the average waiting times of the customers for the three methods and the different numbers of generated customers per hour. As can be observed, between 2000 and 2500 customers per hour the FCFS approach starts to perform really badly. Basically, the system gets saturated and the rate of serving customers is lower than the rate of appearance of new customers. The other two methods, NVNR and DYNRA, can deal much better with this situation and their saturation point is higher (between 2500 and 3000 customers per hour). There is a clear improvement in the waiting times of these two methods with respect to FCFS if there are more than 2000 customers per hour. The dynamic re-assignment approach with compensations performs better than the other two methods in all cases. The improvement is rather low if there are fewer customers, but it increases with the number of customers, up to 101.4 and 2.4 minutes with respect to FCFS and NVNR, respectively, for 4000 customers an hour. In terms of relative improvement, the highest peak is reached at 2500 customers, with an improvement of 94.6% with respect to FCFS and 44.9% with respect to NVNR. It should be noted that the DYNRA approach is based on the compensation scheme presented above, that is, re-assignments include compensations and (economically rational) taxi drivers will accept such re-assignments. In Figure 14 we analyse the net income of the system, composed of the income of the taxi drivers plus, in the case of the DYNRA approach, the income of the mediator. The presented results are normalized to the income of 1000 drivers and 1000 customers. The overall system income is highest for the DYNRA approach for all numbers of customers. The difference with respect to the FCFS approach is considerable, between 473 and 565 euros, above 2500 customers per hour (where FCFS is saturated). The difference of DYNRA with respect to NVNR is highest at 2500 customers (79 euros) and about 6-7 euros above that point. The taxi drivers always earn more money with the DYNRA approach up to 2500 customers per hour. However, their net income is slightly lower than in the NVNR approach for more customers. Nevertheless, it should be noted that the mediator could redistribute its income among all drivers and, thus, the drivers would have a higher income in all cases. Summarizing, the proposed dynamic re-assignment strategy can improve the performance of a transportation fleet of autonomous, self-interested drivers in terms of higher income and fewer movements (and is thus also more environmentally friendly). The improvements are in general rather small if there are few movements (few service requests) and higher if the demand for transportation services increases.
The approach relies on a mediator service that manages the assignments of transportation tasks to drivers and pays compensations if necessary. Beyond this, as proposed here, the mediator does not incur extra costs; instead, it may have some positive income itself. The overall travel times and distances could be reduced further if the compensation system allowed for a negative balance. This could be of interest if, for example, a municipality were willing to invest money in order to reduce CO2 emissions.

Discussion

In this paper we have argued that recent technological advances open up new possibilities for computers to support people's interactions in a variety of domains with high socio-economic potential. In these domains, the choices and actions of a large number of autonomous stakeholders need to be coordinated, and interactions can be regulated by some sort of intelligent computing infrastructure, through institutions and institutional agents, or simply by providing information in an environment with a significant level of uncertainty. Many problems related to the vision of Smart Cities fall under this umbrella.

While centrally designed systems may be a suitable choice to address certain challenges related to Smart Cities, others are unlikely to be dealt with satisfactorily, either because stakeholders are unwilling to implement system recommendations that they do not understand and that they may not trust, or because it is impossible to compute good global solutions based on the information provided by stakeholders, which can be insufficient or biased by their personal interests. While the former problem can be addressed by providing stakeholders with their own trusted software agent which represents them and acts on their behalf, the latter requires coordination mechanisms that take into account the autonomy of the stakeholders (and their software agents). Still, designing and implementing such coordination mechanisms in open systems is challenging, especially if the systems are large in scale, as in the case of most Smart Cities applications.

We argued that technologies from the AT sandbox are suitable for fostering coordination in such scenarios. To back this claim, we reported on a variety of real-world applications, ranging from truly open systems, where coordination among agents is achieved either through (economic) incentives or by offering relevant information, to more closed domains where AT techniques are used to "simulate" interactions among autonomous agents, which may, for instance, take the shape of auctions or market equilibria. While in some of the applications the use of AT, and market-based approaches in particular, was a mere design choice, in other, more open domains their use enables the provision of new functionalities and services.
In fact, a key lesson learnt from our work is that market-based coordination schemes can be successfully applied to quite different problems and domains, even though their particular shape needs to be carefully tailored and adapted based on the degree of autonomy of the stakeholders. In the applications outlined in sections 3.3 and 3.4 the degree of autonomy of the different agents is low, as ambulance drivers, for instance, have to follow the assignments that they are given by the coordination mechanism. Evacuees in section 3.2 do have a choice but, due to the specific characteristics of the emergency situation and the scarceness of adequate information, the suggestions of the system are likely to be followed. A similar "take it or leave it" situation is present in the taxi fleet coordination example of section 3.5, but the stakeholders can make more informed decisions, so it is important that the incentives offered as part of the system proposal are such that taxi drivers can conclude that they are better off following the recommendations than ignoring them. Finally, within the case study related to networks of reservation-based intersections in section 3.1, there are no explicit proposals for drivers to follow a particular route, but traffic assignment is achieved implicitly by coordinating the intersections' reserve prices. In addition, the auction protocol used at each reservation-based intersection needs to be such that the mechanism is resilient to attempts at strategic manipulation.

A limitation of our approach is related to the level of scalability needed for a particular domain. The applications outlined in this article have been evaluated in simulations with hundreds or thousands of agents, but going beyond these numbers may require the use of different approximation algorithms, e.g. for winner determination in auctions. Also, it should be noticed that take-it-or-leave-it recommendations may not work well when users can try to go for "outside options", i.e. when competing service providers exist and users are allowed to use them. Finally, making mechanisms stable against strategic manipulation attempts often relies on assumptions regarding the underlying communication infrastructure, which may hold in simulation experiments but are not achievable in all real-world situations.

We intend to continue developing applications for the aforementioned types of domains making use of the AT sandbox. We will particularly be looking into applications for which the semantic as well as the trust and reputation layers of the AT tower are of foremost importance. This will help us broaden the set of models and tools based on AT. In much the same way, we plan to extract further guidelines for designing these types of systems, based on a description of problem characteristics and requirements.

Figure 4 plots (in logarithmic scale) the relation between travel time and bid value for l = 20, with error bars denoting 9% confidence intervals. It clearly shows a sensible decrease of the delay experienced by the drivers that bid from 100 to 150 cents. The delay reduction tends to settle for drivers that bid more than 1000 cents.

Figure 7. Moving average of travel times.
Figure 8. Situation-aware real-time distributed evacuation route guidance architecture (ERGA). User agents 1, 2, and m are located in the physical space of SB Agent 1, so that they are given route recommendations by SB Agent 1.

Figure 9. Comparison of arrival times to patients. Here, the 1609 patients from the 10 analysed days are ordered with respect to the arrival time in each curve.

Figure 10. Gantt diagram of the coordination of EMS for angioplasty treatment.

Figure 12. Average patient delay time of the proposed EMS coordination approach vs. the FCFS strategy [%], for an appearance frequency of 1 patient every 10 time periods.

Figure 13. Average patient delay time of the proposed EMS coordination approach vs. the FCFS strategy [%], for an appearance frequency of 1 patient every 2 time periods.

Figure 14. Average net income of taxi drivers and mediator in euros. The data are normalized to 1000 taxis serving 1000 customers.

Table 1. Average arrival times in minutes for the 10 different days.

Table 2. Average waiting times for customers (in minutes).
2018-12-02T11:32:37.813Z
2018-05-18T00:00:00.000
{ "year": 2024, "sha1": "3459fee54011f7b807aba9ceb74133dcae49dd68", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/8/5/816/pdf?version=1526639306", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "127cc634b8891fd16d8e7a95089cfea591ca9fb0", "s2fieldsofstudy": [ "Computer Science", "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Engineering" ] }
155543405
pes2o/s2orc
v3-fos-license
Deposition of >3.7 Ga clay-rich strata of the Mawrth Vallis Group, Mars, in lacustrine, alluvial, and aeolian environments
The presence of abundant phyllosilicate minerals in Noachian (>3.7 Ga) rocks on Mars has been taken as evidence that liquid water was stable at or near the surface early in martian history. This study investigates some of these clay-rich strata exposed in crater rim and inverted terrain settings in the Mawrth Vallis region of Mars. In Muara crater the 200-m-thick, clay-rich Mawrth Vallis Group (MVG) is subdivided into five informal units numbered 1 (base) to 5 (top). Unit 1 consists of interbedded sedimentary and volcanic or volcaniclastic units showing weak Fe/Mg-smectite alteration deposited in a range of subaerial depositional settings. Above a major unconformity eroded on Unit 1, the dark-toned sediments of Unit 2 and lower Unit 3 are inferred to represent mainly wind-blown sand. These are widely interlayered with and draped by thin layers of light-toned sediment representing fine suspended-load aeolian silt and clay. These sediments show extensive Fe/Mg-smectite alteration, probably reflecting subaerial weathering. Upper Unit 3 and units 4 and 5 are composed of well-layered, fine-grained sediment dominated by Al-phyllosilicates, kaolinite, and hydrated silica. Deposition occurred in a large lake or arm of a martian sea. In the inverted terrain 100 km to the NE, Unit 4 shows very young slope failures suggesting that the clay-rich sediments today retain a significant component of water ice. The MVG provides evidence for the presence of large, persistent standing bodies of water on early Mars as well as a complex association of flanking shoreline, alluvial, and aeolian systems. Some of the clays, especially the Fe/Mg smectites in upper units 1 and 2, appear to have formed through subaerial weathering, whereas the aluminosilicates, kaolinite, and hydrated silica of units 3, 4, and 5 formed mainly through alteration of fine sediment in subaqueous environments.
In the Mawrth Vallis region (Fig. 1), the phyllosilicate-rich rocks, here informally termed the Mawrth Vallis Group (MVG), are well displayed on the walls of a small crater, Muara, just west of Mawrth Vallis (Figs. 2 and 3). The oldest part of the MVG (Fig. 4) is characterized by Fe/Mg-smectite alteration (Figs. 5A, 5B). It is overlain across a transition zone by 50-70 m of rocks rich in Al-phyllosilicates and hydrated silica and a complex variety of other minerals (Bishop et al., 2013a, 2016). The Al-phyllosilicate-rich unit is overlain in most areas by a thin, younger caprock of largely unaltered pyroxene-bearing mafic volcanic rocks (Loizeau et al., 2007). The implications of these rocks for the nature of the early martian climate are complicated by the uncertain origins of the rocks themselves and of the included phyllosilicates. These layered clay-rich rocks have been variously interpreted as weathering products of crustal rocks or sediments under surface conditions (Noe Dobrea et al., 2010; Farrand et al., 2014; Bishop and Rampe, 2016), as detrital phyllosilicate sediments deposited in a large martian sedimentary basin (Wray et al., 2008), or as alteration products formed through widespread hydrothermal and/or diagenetic processes (Ehlmann et al., 2011; Noe Dobrea et al., 2010; Sun and Milliken, 2015). They have also been attributed to precipitation from magmatic fluids moving upwards during the last-stage degassing of the martian interior (Meunier et al., 2012).
The present study examines the MVG in two areas in the Mawrth Vallis region (Fig. 1): (1) around the walls of Muara crater (Figs. 2 and 3) just west of Mawrth Vallis and (2) in an area of so-called inverted terrain ~100 km northeast of Muara crater and east of Mawrth Vallis. The objective of this study is to better characterize the internal stratigraphy of the phyllosilicate-rich layers, to evaluate the processes and conditions under which they formed, and to assess whether they represent a wetter early Mars or some other combination of conditions less removed from those prevailing today.
METHODOLOGY
The geologic interpretations developed here utilize grayscale images of the martian surface provided by the High Resolution Imaging Science Experiment (HiRISE) camera (McEwen et al., 2010). The images have been enlarged and examined using a range of brightness, contrast, and sharpening options, but otherwise, unless specified, individual figures are unenhanced. Both study areas have stereo images available, and the anaglyph images have been used as well as available digital terrain maps. All references to the relative tone of outcrops (e.g., light-toned, dark-toned, etc.) are to the grayscale tones as seen in the HiRISE images and the specific figures included in this paper. The distance and length scales used are those provided on the HiRISE images. Single pixels in the HiRISE images are 25-50 cm across; CRISM spectral images have a resolution of 18 m/pixel or more.
Determining the strike and dip of layering in the walls of Muara crater has been problematic. Bedding attitudes along the north and northwest walls, where the present study is focused, have been discussed by Wilhelm et al. (2013). Their conclusion was that the structural complexity in this area makes it difficult to evaluate dip magnitudes and directions, but their figure 2 shows dips averaging ~20-25 degrees, mostly toward the crater center. However, this is also roughly the magnitude and direction of the topographic slope in this area, and the more linear, slope-perpendicular outcrop pattern of units is not consistent with rocks dipping parallel or nearly parallel to the topographic slope. Most small, simple craters show nearly flat to outward dips of crater-wall strata (Melosh, 1989). We have attempted to estimate dips from bed deflection where the beds cross topography and from HiRISE images adjusted to show views along the crater walls. These results suggest roughly flat-lying strata. Our estimates of unit thickness are based on the strata being horizontal.
The accuracy of orbital interpretations of martian geology can be problematic but has been tested where rovers have explored the same portion of the surface (Stack et al., 2016; Williams et al., 2018). The main shortcoming in these comparisons is that rovers are confined to low-relief terrain, whereas the best stratigraphic and sedimentological details are generally provided by higher-relief areas where cliffs show internal stratigraphic details and erosional topography can reflect internal architecture. In general, many larger-scale sedimentologically useful details can be seen in high-quality orbital images, but finer, more detailed analyses at scales below 1 or 2 m require ground-based images (Stack et al., 2016).
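The dip estimates from bed deflection mentioned above can be made quantitative with the classical three-point problem: pick three points on the trace of a single bed where it crosses topography, read their map coordinates and elevations from a digital terrain model, and fit a plane. The sketch below is a generic illustration of that calculation, not the authors' actual procedure, and the coordinates in the example are invented rather than measurements from the HiRISE terrain models.

```python
import numpy as np

def strike_and_dip(p1, p2, p3):
    """Three-point problem: strike azimuth (right-hand rule), dip direction,
    and dip angle of a plane through three (easting, northing, elevation) points
    picked on one bedding surface."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)          # normal to the bedding plane
    if n[2] < 0:                            # keep the upward-pointing normal
        n = -n
    # Dip angle: angle between the plane and horizontal.
    dip = np.degrees(np.arctan2(np.hypot(n[0], n[1]), n[2]))
    # Horizontal component of the upward normal points down-dip.
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    strike = (dip_dir - 90.0) % 360.0
    return strike, dip_dir, dip

# Hypothetical points (metres) picked on one marker bed in a terrain model.
strike, dip_dir, dip = strike_and_dip((0, 0, 1200), (400, 50, 1195), (100, 300, 1198))
print(f"strike {strike:.0f} deg, dip {dip:.1f} deg toward {dip_dir:.0f} deg")
```

A dip close to zero from such a calculation, well below the ~20-25 degree topographic slope, would be consistent with the roughly flat-lying strata inferred above.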
SPECTRAL ANALYSIS
Spectral images of Muara crater and their interpretation in terms of the main phyllosilicate minerals present are shown in Figures 5 and 6. Examples of CRISM spectra of the three dominant compositional units are shown in Figure 6. Mapped in green (Fig. 5B) are spectra consistent with short-range ordered aluminosilicates such as allophane and imogolite containing an OH overtone band near 1.39-1.40 µm, a broad H2O combination band centered near 1.92 µm, and a broad OH combination band near 2.19-2.20 µm (Bishop et al., 2013b; Bishop and Rampe, 2016). This unit is observed at the top of the phyllosilicate-bearing units as shown in Figure 5C. Mapped in blue (Fig. 5B) are spectra consistent with Al-phyllosilicates such as montmorillonite and halloysite/kaolinite or opal. These spectra are characterized by an OH overtone band at 1.41 µm, a H2O combination band at 1.91 µm that is narrower than in the upper unit, and an OH combination band centered near 2.20-2.21 µm (Fig. 6). This blue unit is present above the red unit and below the green unit (Fig. 5C). Finally, mapped in red is a larger portion of the phyllosilicate-bearing outcrop (Fig. 5C) that is characteristic of Mg-bearing nontronite or Fe-bearing saponite because the spectral features fall near 2.3 µm.
MUARA CRATER
Muara is a small, roughly circular impact crater ~3.8 km in diameter centered at ~24.3°N and 340.7°W (19.3°E) (Figs. 1 and 2). It is covered by HiRISE images ESP_012873_2045 and PSP_004052_2045, which form a stereo pair. The crater is a simple, bowl-shaped crater with a rim that reaches ~300 m above the crater floor. The HiRISE images show surface features strongly suggesting the presence of layered sedimentary and/or volcanic rocks around the crater walls (Fig. 3), and spectral features are present in CRISM image FRT000094F6 indicating the presence of phyllosilicates (Fig. 5).
The floor of Muara is covered by a field of wind-generated sand dunes ~2.5 km across (Figs. 2 and 3). Spectral images show no phyllosilicates in the dunes. Similar low-albedo, pyroxene-bearing deposits throughout the region have been interpreted as aeolian or pyroclastic deposits composed of volcanic sand (Loizeau et al., 2007). The dune crests are elongate, generally ENE-WSW. Dune geometry suggests wind from the SSE toward the NNW. The dune field has an irregular edge and is surrounded on the north side of the crater by a fringe of post-cratering sedimentary deposits interspersed with outcrops of the MVG (Fig. 7). The sedimentary deposits in this fringe include patches of dark windblown sand, chaotic landslide deposits, and water-worked deposits that include what appear to be well-developed, contour-parallel gravel beach ridges (Fig. 7). A variety of small channels is present, including some eroded into MVG bedrock and others cutting across the older landslides (Fig. 7). These features attest to an early history of wall collapse, water runoff, and accumulation of debris in a lake on the crater floor following crater formation.
Illumination at the time the HiRISE images were collected was from the SSW, and the best exposed and structurally least complicated section of the MVG, along the northwestern and northern crater wall, is taken as the type section (Fig. 8). Based on tone, texture, and structuring, the 200-250 m of exposed section are here divided into five stratigraphic units, numbered units 1 through 5 from base upward (Figs. 4 and 8). The dark caprock that overlies the MVG contains pyroxene (Loizeau et al., 2007), lacks phyllosilicates, and probably represents unaltered volcanic and/or volcaniclastic rocks.
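The mineral mapping described in the Spectral Analysis section above rests on the positions and depths of OH and H2O absorptions near 1.4, 1.9, and 2.2-2.3 µm. One common way to quantify such a feature in a ratioed I/F spectrum is a continuum-removed band depth; the sketch below is only a generic illustration of that idea, with invented shoulder wavelengths and a synthetic spectrum, and is not the parameterization used by the authors or by the CRISM team.

```python
import numpy as np

def band_depth(wavelengths, reflectance, left, center, right):
    """Continuum-removed band depth at `center`, using a straight-line
    continuum drawn between the `left` and `right` shoulder wavelengths."""
    r_left, r_center, r_right = np.interp([left, center, right],
                                          wavelengths, reflectance)
    # Continuum value interpolated linearly at the band center.
    t = (center - left) / (right - left)
    r_cont = (1 - t) * r_left + t * r_right
    return 1.0 - r_center / r_cont      # 0 = no band; larger = deeper band

# Synthetic spectrum with a weak absorption near 2.20 microns (illustrative only).
wl = np.linspace(1.0, 2.6, 321)
refl = 0.30 - 0.02 * (wl - 1.0) - 0.015 * np.exp(-((wl - 2.20) / 0.02) ** 2)

bd2200 = band_depth(wl, refl, left=2.14, center=2.20, right=2.26)
print(f"2.20 um band depth: {bd2200:.3f}")   # > 0 suggests an Al-OH-like feature
```

Computed per pixel at each diagnostic wavelength, parameters of this kind are typically what underlie mineral-unit maps such as the green, blue, and red regions of Figure 5C.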
Description Unit 1, making up the lower half of the walls of Muara crater, is a succession of mediumtoned rocks at least 100 m thick with the base not exposed.These rocks are well stratified in most exposures and in the CRISM image show a weak Fe/Mg-smectite alteration except for the uppermost 30-50 m, which show an intense Fe/Mg smectite alteration (Fig. 5C).Stratification in Unit 1 ranges from thin layers 1-2 m or less in thickness (Fig. 9A) to crude layering at a scale of 5-10 m thick (Figs.9B, 9C).Overall, the lower half of Unit 1 is more finely layered than the upper half.On the lower part of the northern and NW crater wall, fine layering is locally well developed (Fig. 9A).These relatively thin, even, tabular layers are continuous for at least 100 m along strike with no evidence of internal scour or erosion.In other areas, there appears to be abundant large-scale cross-stratification (Fig. 9A).There is common large-scale thickening, thinning, and wedging of units and there are a number of internal erosion surfaces (Figs.9A, 9C, and 10A).The upper third to half of Unit 1 around most of Muara Crater crops out as massive to thickly bedded, intensively fractured rock (McKeown et al., 2013) broken into blocks from <1 m to ~20 m across (Figs. 8 and 10B).Crude layering is often visible through the fracturing (Fig. 10B) and it is possible that the rock itself is more finely layered. Interpretation The upper 30-50 m of Unit 1 appear to represent the lower part of the Fe/Mg smectite zone of previous investigators (e.g., Bishop et al., 2013a).However, the bulk of Unit 1 shows only weak Fe/Mg smectite alteration (Fig. 5).Based on the presence of plagioclase and pyroxene in this and other dark units in Muara crater and vicinity (Poulet et al., 2008;Viviano and Moersch, 2013), the widely developed stratification, and the alteration of the rock to Fe/Mg smectite, we would infer that the more crudely layered parts of Unit 1, such as much of the section on the east crater wall, may represent altered mafic or ultramafic volcanic flows and/or volcaniclastic units.Previous studies of alteration and clay formation in this area have also assumed that these rocks represent a basaltic sequence (e.g., Zolotov and Mironenko, 2016).The fine, even layering in the lower part of Unit 1 (Fig. 9A) suggests that some parts may have been deposited in standing bodies of water and the association of these layers with large-scale cross-stratification (Fig. 10A) and channel-like features (Fig. 9A) may indicate that overall the lower part of Unit 1 represents a complex of interfingering alluvial, fluvial, aeolian, and/or lacustrine environments.The more crudely stratified upper half of Unit 1 that shows undulating layering and some largescale cross-stratification on the northern and NW wall of the crater may represent largely windblown sediments. Description Above Unit 1 across an irregular and probably unconformable contact is up to 35 m of mediumto dark-toned, well stratified rock of Unit 2 (Figs. 3,8,and 11).The base forms a distinct break around the crater wall between under lying, massive, fractured rock of Unit 1 and overlying, dark-toned, stratified rock of Unit 2 (Fig. 8).Unit 2 varies in thickness from 0, where Unit 3 rests directly on Unit 1, to ~35 m.These variations in thickness probably reflect in large part the highly irregular surface on which Unit 2 was deposited.This contact may be the regional unconformity identified by Loizeau et al. 
(2012) as a "paleosurface" within the MVG sequence.The best exposures of Unit 2 in Muara crater occur along the NW crater wall, although there is some young dark windblown sediment masking portions of the rock in this area and layering in the lower part of Unit 2 may be better developed than can be seen in the HiRISE images (Fig. 11). Along the NW wall, the outcrop of Unit 2 shows a series of parallel, sharp-crested ridges, 20-50 m wide and a few meters high, that run up-and-down the crater wall, roughly perpendicular to contours and the strike of bedding (Fig. 11).The bottoms of the round-bottomed swales between ridges are widely coated by dark windblown sand (Fig. 11).The ridges are elongated parallel to the wind direction up the crater wall, show bedrock layering throughout, and were apparently cut into bedrock by the prevailing wind. Overall, Unit 2 consists of well-stratified rock made up of lenticular, interfingering light-, medium-, and dark-toned layers from <0.5 to perhaps 2 or 3 m thick.Locally, the lowest few meters of Unit 2 are very dark-toned, massive to weakly stratified rock (Fig. 11).Upward, welldeveloped stratification is defined by complexly interfingering lighter streaks and lenses up to ~1 m thick, a few discontinuous blocky layers, medium-toned layers up to ~2 m thick, and thin, very dark layers mostly <1 m thick.The thin dark layers are mostly overlain or underlain by thin light-toned beds (Fig. 11).Most layers persist for less than 100 m along the outcrop (Fig. 11). Interpretation Unit 2 shows an intense Fe/Mg-smectite spectral signature (Fig. 5C).Modeling estimates of the unit containing spectral features near 2.3 µm indicate 30-70 vol% Fe/Mg-smectite (Poulet et al., 2014).We infer that Unit 2 represents a sedimentary sequence, although some blocky-weathering units may be thin mafic lava flows.Interbedded layers probably include dark, reworked volcaniclastic debris derived in large part by erosion of the underlying rocks of Unit 1 and possibly pyroclastic materials.The thin, lenticular dark layers overlain by light layers are more prominent in Unit 3 and will be discussed there, but are thought to represent windblown sand layers (darkest) capped by thin very fine-grained layers of windblown dust (light).A few peaked, lenticular, very dark layers may represent dunes of windblown sand (Fig. 11).All bed-scale units are highly lenticular and at least one surface within Unit 2 along the northern wall appears to cut down through a few meters of underlying material, but overall there is no evidence for large-scale scour, erosion, or steeply inclined layering.The abundance of lenticular, dark-, and light-toned layers and couplets is consistent with an aeolian origin, especially toward the top where the dark-light alternation is most prominent.There may also be a component of water-worked sediments in the medium-toned units, but there is no clear evidence for aqueous transport, such as erosional channels, and all layers could be composed of windblown components. Description Unit 3 is a transitional unit, 20-30 m thick, of well-stratified, mainly medium-to light-toned rock between the top of the predominantly dark-toned, layered rocks of Unit 2 and the very light-toned rocks of Unit 4 (Figs. 8,11,and 12).Much of Unit 3 shows well-developed layering down to the resolution of the HiRISE images, indicating that it is stratified at scales to or below ~50 cm.The base of Unit 3 is locally an erosional unconformity (Fig. 
11) along which as much as 5 m of strata of the underlying Unit 2 are truncated.The lower half of Unit 3 in the NW crater wall consists of prominently banded units showing either post-depositional deformation or syndepositional stratigraphic complexity (Fig. 11).Dark-toned layers in this zone, where thicker, are commonly lenticular or show pinchand-swell (Fig. 11).They also widely exist as thin, very continuous layers probably mostly <50 cm thick that coat both unconformities and the boundaries of virtually all light-toned units.Dark-toned layers lack evidence for internal layering, although their low albedo may simply make internal stratification difficult to see.Interbedded light-toned layers are generally thicker, more continuous, and often underlie or drape the dark layers (Fig. 11).The light layers commonly show even finer internal layering down to the resolution of the HiRISE images. The boundary between the Fe/Mg smectite zone and the overlying Al-phyllosilicate layer appears to fall within Unit 3 (Fig. 5D), although the lower resolution of the CRISM images and uncertainties in correlating HiRISE and CRISM images would also allow the boundary to coincide roughly with the Unit 2-3 contact. Interpretation We would suggest that the dark layers in the lower half of Unit 3 represent windblown sand that was commonly swept into dunes but is widely distributed and/or preserved as thin, more tabular layers.If the dark layers represent sand-sized windblown bed-load sediment, the draping light-toned caps are probably silt to clay-sized debris deposited during the waning stages of dust storms as well as during periods of reduced wind activity between storm events.An especially good example of this close association of lenticular, dark-toned dune sands and light fine-sediment drapes can be seen on the north crater wall in the lower part of Unit 3 (Fig. 12).Mixed medium-toned layers may represent either stacks of very thin alternating light-and dark-toned layers or of mixtures of hydraulically equivalent dark sand grains and light clay aggregates. It is not clear whether the undulating layers and truncated units in the lower part of Unit 3 represent deformation in response to an external event, such as an impact, or whether it is entirely of depositional origin involving erosion, draping of irregular surfaces, deposition of lenticular layers, and perhaps minor sliding.However, we suspect that all of these features are sedimentary in origin because no clearly deformational features are present, such as over-steepened beds (dips greater than the angle of repose), overturned or recumbent layers, isolated blocks (slides), or truncation of overlying layers by under lying ones. Above this lower zone of dunes, drapes, and complex bedding, the upper part of Unit 3 is a zone of evenly stratified light-or medium-toned rock (Fig. 12).There is a rather cyclic interlayering of light layers ~1-3 m thick and thin, <50-cm-thick dark-or medium-toned layers.The light layers locally show a very fine, even, flat internal layering at the resolution of the HiRISE images.The thinness and evenness of the layering, the predominance of phyllosilicate minerals and the implied fine grain size, and absence of scour features suggest deposition under low energy, possibly subaqueous conditions.In some areas, flat, even bedding passes laterally into curved, upward-peaked layering that resembles inferred dunes lower in Unit 3 (Fig. 12). 
Description Unit 4 is the prominent 20-25-m-thick lighttoned band around the upper part of the crater wall.This part of the crater wall is a zone of extensive fracturing, including both closely spaced sets of more-or-less orthogonal fractures that sweep across the crater wall (Fig. 13), probably related to the impact that formed Muara crater, and large, cross-cutting fractures and faults.There also appears to be a change in northerncrater-wall slope roughly coincident with the Unit 3-4 contact and there may be fractures at this point formed during downslope sliding and slumping.Fractures at all scales are marked by accumulations of medium-to dark-toned material and the smaller fracture sets subdivide the rock into small rectangular blocks a few meters across that are separated by thin septa of dark material.We infer that the fractures are marked by more easily erodible materials and/or were open and have been widely filled by dark windblown sand.There is also a host of irregular lines and features across the crater wall at all scales that could be fractures, joints, small faults, layering, or other features within the rock, but for which, in many cases, a specific origin remains elusive. The intersection of bedding and fracture sets has widely resulted in the downslope slippage of fracture-bounded blocks, forming a jagged, saw-tooth geometry (Figs. 12 and 13).Windblown dark sand has filled the fractures and more horizontal surfaces formed by block sliding (Fig. 13). Interpretation Overall, Unit 4 is characterized by bedded, light-toned rock interrupted by very thin laminations of darker material.This unit correlates with the Al-phyllosilicate-rich layer observed in CRISM images (Fig. 5C).As with the upper part of Unit 3, it appears to have been deposited mainly as fine-grained sediment.Bedding can be difficult to distinguish in Unit 4 because of the uniformly light-toned character and paucity of layers of contrasting tone or color.The spaced dark laminations seen on the crater wall may be sedimentary layers but others may be accumulations of modern windblown sand along crevices and on ledges following bedding.These suggest layering on the scale of 1-3 m thick.Layers appear to be tabular with no clear undulations, dune-shaped features, or scours/channels.Finer-scale layering has been widely observed in other areas (Loizeau et al., 2010) and, if the light-toned layers in Unit 4 are similar to those in Unit 3, then the thick light layers are also most likely made up of much thinner layers.We would also infer deposition of the fine silts and clays of Unit 4 in a subaqueous environment. Description Unit 5, the uppermost member of the MVG in the crater-wall sequence, is laterally heterogeneous and the outcrop widely disrupted by impact processes and crater-wall collapse.On the northwestern wall (Fig. 8), there is a reasonably well-exposed section.It includes a basal zone of dark rock that widely produces debris that covers and obscures the Unit 4-5 contact.Above this zone, Unit 5 consists of interfingering and interlayered dark and medium-to light-toned layers for a total Unit 5 thickness of ~25-30 m. 
Interpretation The dark, more friable layers that yield abundant slope-mantling debris most probably represent mafic flows, mafic volcaniclastic layers, and/or windblown sediment composed of mafic sand grains.The light-toned layers are thought to be layers of phyllosilicate-rich clays and silts and poorly crystalline materials.This interlayering suggests that the sedimentation of the light-toned units was interrupted by volcanic episodes or floods of volcaniclastic sediment, although details of the internal makeup of these layers cannot be resolved.CRISM images indicate that Unit 5 contains poorly crystalline aluminosilicate phases similar to allophane and imogolite (Fig. 5D). INVERTED TERRAIN Surface Features About 100 km north-northeast of Muara crater (Fig. 1) is an area of irregular terrain in which dark, sinuous features that resemble channels cross broad, lighter areas (Fig. 14).When examined closely, it is seen that the "channels" are topographically high plateaus and the intervening lighter areas are broad, open valleys.This topography has been referred to as inverted topography because one hypothesis for its origin is that it started as a flatter landscape crossed by channels that were later filled with resistant volcanic rock or sedimentary materials.Subsequent erosion left the more resistant "channel fill" standing in relief above the more easily eroded surrounding materials that form today's valley floors (Loizeau et al., 2007;Noe Dobrea et al., 2010). The present study lies within an area of inverted terrain at ~26°N and 18.5°W, Oxia Pallis Quadrangle, and is covered by the stereo pair of HiRISE images ESP_013361_2060 and ESP_013084_2060.It is in the northwest part of a large valley bounded by channel-like plateaus to the north and west (Figs. 14,15,and 16).These are capped by flat-lying, medium-to dark-toned, well-cratered rock, probably basalt, and local patches of dark windblown dunes.The elevation difference between the valley bottom and flat-topped plateau is ~25-30 m.The light-toned valley floor is made up of essentially flat-lying, light material.The cliff-forming unit, which is composed of interbedded dark-and light-toned units with a dark caprock forming the plateau surface, appears to be roughly equivalent to Unit 5 of the MVG and the caprock at Muara crater and the underlying light-toned, valley-floor unit to Unit 4, although precise correlation across this distance is uncertain.The base of Unit 4 in the inverted terrain is not exposed within the HiRISE images used. The valley floor (Fig. 16) slopes down from west to east at ~4° to 6° and appears to be underlain by light-toned Unit 4 rock, possibly interbedded with thin layers of dark-toned material: dark layers are interbedded with light-toned layers in the valley wall, but these are not apparent on the valley floor.Part of the valley floor is marked by step-like elongate zones of rubble up to ~40 m wide separated by areas of smoother light topography (Fig. 16).The smooth light-toned surfaces are covered either by very fine, featureless material, possibly bedrock or a regolith cover, or by finely patterned ground (Fig. 16).Adjacent to the dark debris aprons at the bases of the bounding cliffs, dark material has been swept into small dunes.In contrast, dunes of light-toned material are essentially absent.The polygons making up the patterned ground are mostly 1-4 m across, al- though smaller polygons may be present below the resolution of the images. 
The rubbly zones bear a strong resemblance to many terrestrial landslides and debris flows formed by slope failure (Fig. 17). In the study area on Mars, their formation appears to have involved failure of the surface layers of the light-toned rock, collapse, and brecciation, but with only minor downslope movement. Most rubbly areas show a distinct, arcuate head scarp or series of arcuate scarps on the upslope side. Down-dropped blocks immediately below the head scarp are often large and broken by additional fractures along which the downslope side has been further down-dropped. Above the head scarp in some areas are arcuate dark lines parallel to the head scarp that appear to represent fractures that have not yet evolved into failures (Fig. 16). Within the rubble zones, brecciation tends to become more chaotic from the head scarp to the downslope tip: the blocks tend to become smaller, more jumbled, and do not fit together. The distal 5-10 m of the slides widely consist of blocks only 1-3 m across embedded in a matrix of fine material, probably representing debris flows (Fig. 16). Shadowing of the sharp downslope ends of the chaotic zones indicates a terminal drop-off. There are many areas that do not display these exact size trends, and some slides have coarser material toward the tips, but these general trends are widespread.
These features suggest that the rubble areas formed by decoupling of the light-toned rock layers, possibly along dark layers, and some downslope sliding of the detached layers, with normal faulting and fragmentation of the upslope parts of the detached mass of material and increasing amounts of flow and brecciation toward the fronts of the slides. The frontal zones of the slides often mixed and became debris flows. These features are well developed in many modern terrestrial landslides (Fig. 17). In some areas, the slides appear to have collapsed, thinned, and lost much of their volume (Fig. 16, point f). Such areas include abundant dark sediment.
In several areas, localized failure has formed irregular, disconnected patches of rubble without major downslope movement of the fractured debris. In the small rubbly patch shown in Figure 18, the light-toned surface layer has collapsed in two areas, each marked by a distinct, dark-toned topographic low area that extends to the edge of the failure. One shows a small debris-flow tongue extending beyond the rubble zone onto the edge of the adjacent light, patterned ground (Fig. 18). The dark areas are interpreted to expose or be partially covered by dark sediment, possibly from dark layers, and the light collapsed layer is greatly thinned or missing in these low areas. Many of the rubbly areas have narrow, dark, topographically low zones along their fronts and extending downslope (Fig. 18). These locally show the accumulation of dark sediment along their courses, small dark debris cones right up against the rubble fronts, and isolated light-toned blocks. Some of these topographically low areas appear to represent small runoff channels.
Origin of the Patterned Ground
While impact-generated joints are widespread in the areas studied, such as along the upper walls of Muara (Fig. 13), the small polygons that widely cover areas of flatter ground underlain by Unit 4 in both study areas are most likely either thermal contraction fractures (Mellon, 1997; Mellon et al., 2008), like those formed widely in terrestrial glacial and periglacial settings, or desiccation contraction fractures. Based on criteria discussed by Levy et al.
(2009), evidence for the wholesale collapse of the light layers, and features suggesting that collapse was locally accompanied by flowing water, we would infer that the patterned ground in the study area formed mainly by thermal contraction associated with a layer of permafrost.Others have interpreted these structures as desiccation cracks (El-Maarry et al., 2014) but unlike most desiccation cracks, those in the study area are very uniform in size and shape, most 0.5-4 m across, and include only one size of visible crack and bounding joints, not a hierarchy of crack types common to desiccation fractures (El-Maarry et al., 2014).The presence of expandable smectite clays in Unit 4 upon which polygons are widely developed may suggest that desiccation as well as thermal contraction, ice wedging, and sublimation influenced their formation. Controls on Failure and Mass Movement The wide distribution of patterned ground, the collapse and thinning of the light-toned layers within the slides, and the apparent evolution of water during sliding suggest that the light layers contain abundant water ice and probably permafrost at a shallow depth.The polygons formed before the mass movement as shown by the fact that in many places the polygon boundaries served as the loci of fracturing and breakage of the light layers into blocks during collapse and sliding (Figs. 16 and 18).Because exposed ice cannot survive at the martian surface, the surface of the light-toned "bedrock" and permafrost layers is probably mantled by non-icy debris.A similar debris and vegetative layer characterizes most terrestrial permafrost terrains.Because these light-toned areas and the light regolith at the surface are widely characterized by aluminous clays, we would suggest that the light-toned layers consist mainly of a mixture of phyllosilicates and water ice.At the surface, the water has been lost through sublimation until a solid, protective layer of light clay, phyllosilicates, and other rocky debris has formed. We infer that the rubble zones formed as a result of partial melting of the permafrost at a shallow depth (Fig. 19).As the ice melted, the overlying sedimentary and regolith layers were destabilized, decoupled from the underlying layers, fractured, and locally slid.Water appears to have run out through more porous layers, perhaps the dark layers, and/or via the fractures, carrying some of the fine dark sand and silt.Fracturing associated with collapse exposed additional permafrost to sublimation and/or melting, promoting continued fracturing, downslope movement, and head scarp retreat. Within many of the rubbly areas, the fractured light-toned layers have thinned and largely disappeared during collapse, probably through increased sublimation of water ice within the more intensely fractured rock.The latitude of the inverted terrain study area (26°N) is close to the lower latitude of 30°N for permafrost and polygon development (Head et al., 2003) and ground ice may be periodically subject to freeze-thaw cycles during martian summers.This would be consistent with the inferred runoff features seen associated with Unit 4 areas of fracturing and collapse and perhaps even more likely under different, recent climatic conditions at higher obliquity (e.g., Forget et al., 2006). 
Our results suggest that the light-toned layers are mixtures of phyllosilicates, silt, and water ice.This interpretation is consistent with the observation that the light layers yield little or no coarse debris to form windblown dunes or talus slopes: they tend to break into large blocks during fracturing and then break down rapidly through sublimation and melting of the included water into very fine-grained silt and clay. Age of Surface Features in the Inverted Terrain The rubble zones and patterned ground in the inverted terrain study area show evidence of a very young origin.There are very few craters across the valley surface.The patterned ground is sharp and fresh in most areas and probably formed relatively recently perhaps over the last few million years during periods of climatic instability (Head et al., 2003).The patterned ground polygons are cut and disturbed by the processes of failure, sliding, and mass movement to form the rubbly zones, further indicating an even more recent origin of the mass transport events. Composition of the Mawrth Vallis Group We interpret the Mawrth Vallis Group as a stratigraphic sequence within which the layers are primarily of sedimentary origin.The bulk of units 2 through 5 can be described in terms of two main end-member sediment types represented by the dark-and light-toned materials. (1) The dark-toned sediment is well represented in Muara crater where dark sand makes up the modern dune field on the crater floor and sand patches on the crater walls, and where dark beds in the Noachian MVG show common pinch-and-swell and dune-like features and geom etries in cross section.Deposits at the bases of cliffs are composed almost exclusively of fine, dark-toned material, locally swept up into small dark dunes.The similarity of all of these dark-toned sedimentary layers, including their apparent fine granularity and presence of dunelike bedforms, suggests that most are composed of sand-to fine gravel-sized material, probably representing reworked and eroded volcanic material.Most appear to represent windblown sand. Wind-generated sand dunes are among the most widespread sedimentological features on Mars (Grotzinger et al., 2005), both at the surface and in the geologic record, so their development in Muara crater and the MVG is not unexpected.There are many studies of martian aeolian deposits, most focused on dunes that provide 3-D outcrops, including smaller, roverbased exposures, than the MVG studied here (e.g., Grotzinger et al., 2005;Milliken et al., 2014;Banham et al., 2018). 
(2) Upper Unit 3 in Muara crater and Unit 4 in both study areas are composed primarily of light-toned materials.Because these light-toned layers correspond to the distribution of aluminosilicate clays, they are probably composed largely of clay and silt-to fine sand-sized phyllosilicate particles.The common draping of darktoned dune-like bedforms by thin light-toned layers and fine interlayering of the two sediment types in units 2 and 3 are consistent with the interpretation that the dark layers represent windblown sand and the light layers are composed of sediment deposited out of suspension, often forming drapes, during periods of reduced wind or current activity.Higher in Unit 3 and in parts of Unit 4, light-toned sediment shows widespread fine layering and tabular geometries consistent with deposition of very fine silt and clay under quiet, subaqueous conditions.However, ripped up silt-and sand-sized chunks of light-toned sediment may have formed sparse dunes that are largely indistinguishable in the uniformly light sediment as seen in upper Unit 3 (Fig. 12).Medium-toned layers may be mixtures of both light-and dark-toned particles.Studies in the inverted terrain suggest that today the light-toned layers are mixtures of fine sediment and water ice, a conclusion consistent with HiRISE observations of near surface ice in midlatitude regions (Dundas et al., 2018). Depositional Evolution of the MVG If the dark-toned layers in units 2 and 3 are composed mostly of windblown sand, their environment of deposition was mainly subaerial.The upward increase in the proportion of lighttoned layers through units 2, 3, and 4 in Muara crater offers two possibilities: (1) upper Unit 3 and Unit 4 are parts of a gradually expanding regional subaerial loess plain, perhaps along the margins of a drying ocean or large lake that occupied the lowlands along the eastern edge of Chryse Planitia, or (2) there is a transition from subaerially deposited windblown sediment in Unit 2 and the lower part of Unit 3 to subaqueously deposited sediment in the upper part of Unit 3 and Unit 4. Both scenarios are consistent with the light layers having been composed largely of fine-grained clay, silt, and very finegrained sand.The thin, even, fine layering down to the resolution of the HiRISE images in the light-toned layers is consistent with deposition under low-energy conditions with little scour or erosion.However, loess deposits are typically massive, homogeneous, and accumulate to thicknesses of many meters with little or no internal layering (e.g., Pye, 1995).We would suggest that the features of the light-toned layers of upper Unit 3, Unit 4, and parts of Unit 5 are most consistent with sedimentation by the settling of silt and clay out of a water column in a large lake or sea.With the possible exception of the stratigraphically complex zone at the base of Unit 3, deposition took place under very quiet, subaqueous conditions.The presence of some sweeping, dune-like strata in upper Unit 3 suggests that the lake/sea floor may have been periodically exposed with fluctuations in water level and that the exposed lake margins were sites of wind activity and the construction of small dunes or deposition of fine suspended silt and clay eroded from the exposed clay-rich lake beds. 
The stratigraphically complex interval at the base of Unit 3 includes erosive surfaces, truncations, onlap surfaces, and a complex interlayering of dark and light sediments.This stratigraphically complex deposit may mark a coastal zone where the largely terrestrial sequence of Unit 2 interfingers with the largely lacustrine/ocean sequence of upper Unit 3. Wave and current activity could have locally been at a maximum within this zone and it could also have been a zone of local erosion during falls in water level.This zone is also roughly coincident with the contact between the lower Fe/Mg-smectite zone and upper Al-phyllosilicate zone in CRISM images.This narrow contact zone is locally characterized by a spectral signature indicating an abundance of Fe +2 -bearing components (Bishop et al., 2013a) in contrast to the phyllosilicaterich units on either side.We suggest that this zone includes shoreface sediments composed mainly of winnowed, unaltered mafic minerals worked and deposited within the coastal zone represented by lower Unit 3. The presence of jarosite along this horizon (Bishop et al., 2016) is also consistent with the development of local coastal evaporitic ponds. Origin of the Phyllosilicates in the MVG The presence of pyroxene and plagioclase is consistent with Unit 1 being composed largely of basalt or a similar mafic or ultramafic volcanic rock and associated volcaniclastic sedi-ments.There is a low level of Fe/Mg-smectite alteration throughout the middle and lower parts of Unit 1. Alteration in the bulk of Unit 1 does not seem to form discrete layers.Sun and Milliken (2015) have argued that Fe/Mg smectite alteration is more widespread in rocks exposed in central peak uplifts of craters in Noachian terrain across Mars than Al-phyllosilicate rocks and that this Fe/Mg smectite alteration appears to have originated at depth within the mafic volcanic crust and was brought to the surface during uplift of the central peaks.In the absence of any visible clay stratigraphy in the middle and lower parts of Unit 1 and the widespread and deep record of Fe/Mg smectite alteration across broad areas of Noachian terrain, we would suggest that the Fe/Mg smectite alteration of rocks of the middle and lower parts of Unit 1 occurred under the influence of deep diagenetic or hydrothermal fluids.However, the absence of minerals characteristic of high-temperature alteration, such as Mg-rich clays, chlorite, serpentine, prehnite, and chlorite (e.g., Ehlmann et al., 2011) implies that the unit was not altered at elevated temperatures (Michalski et al., 2015;Bishop et al., 2018). The upper part of Unit 1, Unit 2, and lower Unit 3 show intense Fe/Mg smectite alteration (Fig. 5C).All of these units are made up in large part of layers inferred to represent detrital, sand-sized, mafic volcaniclastic debris, but their greater degree of alteration in comparison to lower parts of Unit 1 suggest either that they were affected by different processes of alteration and/or that the processes were similar but more intense.Our results suggest that this intense alteration originated through subaerial weathering associated with the unconformity at the top of Unit 1 and, subsequently, of debris derived from erosion of Unit 1 during deposition of Unit 2. 
In Unit 2 and lower Unit 3 mafic dark-toned sediments are interbedded with layers of lighttoned material.Based on spectral data, lighttoned layers in upper Unit 3 and Unit 4 are interpreted to be Al-phyllosilicate-rich layers.The resolution of the CRISM images is lower than that of the HiRISE images and we cannot determine at this stage whether the thin, lighttoned layers in Unit 2 and lower Unit 3 are also composed of Al-phyllosilicates interbedded with dark Fe/Mg-smectite layers, as we suspect, or whether they are light toned but also show Fe/Mg smectite alteration. The light-toned rocks of upper Unit 3 and units 4 and 5 are dominated by aluminous clays and hydrated silica that must have formed before the fresh volcanic caprock was emplaced.Much of the layering in these units is flat, even, and tabular, consistent with the deposition of fine sediment under subaqueous conditions. The presence of layers of dark, sand-sized sediment interlayered with light, Al-phyllosilicate rocks in Unit 3 confirms that wholesale regional hydro thermal alteration did not form the phyllosilicates in situ from a single, compositionally uniform protolith since adjacent layers would have suffered similar degrees of alteration.The interstratification of contrasting sediment types further argues against the formation of the entire MVG as an extraordinarily thick soil zone during weathering at the top of Unit 4 or Unit 5. Unit 4 can be traced over wide areas in the Mawrth Vallis region and appears to have a sheet-like geometry, parallel in general to layering in other MVG rocks, suggesting that all represent depositional stratigraphic units and not hydrothermal alteration zones. Our results suggest that the Al-phyllosilicates in units 2, 3, 4, and 5 of the MVG could have formed (1) elsewhere and were transported to their present sites of deposition as fine windblown grains, (2) through water runoff from surrounding land areas, or (3) in situ through the alteration of fine silt-to clay-sized clastic grains transported into a body of water by wind, as would be the case for volcanic dust and ash.All of these processes may have been involved in forming the final Mawrth Vallis Group sediments, as occurs in and around modern terrestrial oceans and lakes (e.g., Bristow and Milliken, 2011). CONCLUSIONS The Mawrth Vallis Group is here subdivided into five informal stratigraphic units numbered 1 (base) through 5 (top).Unit 1 consists of what we infer to be interlayered and interfingering volcaniclastic strata and locally volcanic flows.The sedimentary rocks show evidence of deposition under a range of conditions that included aeolian, fluvial/alluvial, and quiet, possibly subaqueous environments.The surface of Unit 1, below Unit 2, is irregular and in some places Unit 2 appears to be absent, suggesting that the top of Unit 1 is an erosional unconformity and that Unit 2 has locally accumulated as valley fill between highs of Unit 1 rock.This surface may represent a major paleosol characterized by Fe/Mg smectite alteration that extends several tens of meters into the top of Unit 1. Unit 2 and lower Unit 3 form a transition from the dark bedrock of Unit 1 to the flat-layered, fine-grained, muddy, subaqueous depositional settings represented by sedimentary rocks of upper Unit 3 and Unit 4. 
Unit 2 appears to be dominated by mafic volcaniclastic debris derived from Unit 1 and perhaps some thin, late-stage volcanic flows.Upward in Unit 2, its initially massive, dark-toned character is increasingly interrupted by thin lenses of lighttoned sediment, which often drape undulating surfaces.We suggest that this association of dark lenticular units draped by light-toned layers represents windblown layers composed of mafic sand-sized sediment that are overlain and/or underlain by light-toned layers deposited as finer, suspended windblown sediment.Deposition of upper Unit 2 and lower Unit 3 occurred on the lower parts of an alluvial surface with increasing proximity to water and to potential sources of windblown silt and clay eroded from exposed basinal sediments. In Muara crater, the thickness and proportion of light-toned layers increases upward through Unit 2 and into Unit 3. The accompanying upward thinning of the dark layers and thickening of the light layers suggests a transition from mainly aeolian sand deposition (dark) to the deposition of fine-grained dust and clays (light).This transition is locally marked by a zone of stratigraphic complexity in lower Unit 3, including channels eroded several meters into underlying strata, and widespread interlayering of dark windblown sand and thicker, finely laminated dust deposits.There is a zone of Fe +2rich deposits that may mark a zone of hydraulic concentration of more resistant, heavy mineral phases, such as occurs on terrestrial beaches and coastal zones. Most of the light-toned layers in the upper part of Unit 3 and lower Unit 4 are tabular, with only a few, very thin dark layers.The abundance of phyllosilicates suggested by CRISM data, the continuity and evenness of layering, the wide presence of fine flat layering down to the resolution of the HiRISE images, and paucity of evidence for coarse sediments and features suggesting high levels of current activity, such as visible erosional features, all suggest deposition in a quiet subaqueous setting.This could have been a large lake or arm of a sea.The interlayering of thick dark and light layers in Unit 5 suggests the possible alternation of mafic volcanic and volcaniclastic events with continuing subaqueous deposition of clay-rich sediments. Our results suggest that the MVG phyllosilicates formed in three main settings: (1) the weak, widely distributed Fe/Mg smectite alteration of middle and lower Unit 1 is thought to reflect low-temperature diagenetic to hydrothermal alteration.(2) The intense Fe/Mg smectite alteration in uppermost Unit 1, Unit 2, and lower Unit 3 formed by alteration at the martian surface during accumulation of the sequence, probably through subaerial weathering of mafic materials and lithic grains.(3) The Al-phyllosili cates of upper Unit 3 and Unit 4 may include some clay blown into a sea or lake by the wind and some material carried across fluvial systems from surrounding areas of weathering.However much probably formed by in situ alteration of airborne dust and volcanic ash on and below the bottom of a large lake or sea. 
We note that all of the inferred phyllosilicate-rich sediments of units 2, 3, 4, and 5 of the MVG are light toned. Most pure kaolinite and smectite clays on Earth are white to very pale, whereas terrestrial shales and mudstones of all ages and representing virtually all depositional settings are dark toned. The latter, of course, reflects the presence of reduced organic matter. If these martian clay-rich deposits lack organic matter, either the lake or ocean in which they were deposited was characterized by environmental conditions that precluded the existence of life forms and/or the accumulation of dark organic matter, such as some extreme evaporitic environments, or life had not evolved on Mars at 3.7 Ga, when these sediments were deposited.
The results of this study suggest that water was widespread in Noachian surface environments on Mars, probably as precipitation, streams, standing bodies of water including lakes and/or seas, and in the subsurface. Future studies in stratigraphically complex areas like the Mawrth Vallis Group will add to our understanding of the complexity, distribution, and persistence of these water-rich environments on early Mars and whether or not they hosted early martian life.
Figure 1. Shaded relief map of the Mawrth Vallis region, Mars, showing the location of the study areas at Muara crater and the inverted terrain. MOLA (Mars Orbiter Laser Altimeter) Science Team, NASA.
Figure 2. Muara crater, Mars, showing the field of windblown dunes covering the crater floor (medium to dark) and bedrock outcrops of the Mawrth Vallis Group (medium to very light) around the crater walls. NASA HiRISE image PSP_004052_2045.
Figure 3. Northwest wall of Muara crater, Mars, enlarged from HiRISE image PSP_004052_2045, showing outcrops of the Mawrth Vallis Group subdivided into units 1 through 5, a thin regional caprock of what is probably basalt, and the locations of other figures. The prominent light-toned band is the aluminosilicate- and clay-rich Unit 4. The dark-toned areas to the lower right and lower center are covered by windblown sand.
Figure 4. Stratigraphy of the Mawrth Vallis Group, Mars. (A) Stratigraphic divisions of Loizeau et al. (2010) and (B) of Bishop et al. (2013b). (C) Stratigraphic subdivisions recognized in the present study in Muara crater. Correlations between the spectral divisions of (A) and (B) and the lithologic divisions of the present study are approximate.
Figure 6. I/F CRISM spectra from image FRT000094F6 representing the units colored green, blue, and red in Figure 5C. Spectra 1 and 2 are dominated by short-range ordered aluminosilicates, spectra 3 and 4 include Al-rich phyllosilicates, and spectra 5 and 6 include Fe-rich smectites. Lab spectra of minerals/materials are shown for comparison: allophane and imogolite from Bishop et al. (2013b), montmorillonite and halloysite from Bishop et al. (2008b), and nontronite and Fe/Mg smectite from Bishop et al. (2008a, 2008b). Grey vertical lines mark the spectral features due to H2O and metal-OH in the crystal structure that are used to characterize the phyllosilicates and related materials present in the regions associated with these spectra.
Figure 7.
Lower part of the NW crater wall and adjacent crater floor of Muara crater, Mars.(A) With features unlabeled.(B) With features labeled showing large landslides, runoff channels, beach ridges marking old shorelines around crater lake, and later windblown sand.This part of the wall is composed mainly of bedrock Unit 1 of the Mawrth Vallis Group.Lower on the slope, bedrock is overlain by landslides, some of which probably represent the edge of the breccia lens formed at the time of cratering.Water runoff subsequently cut local channels across the Unit 1 outcrops and the landslide breccia.This runoff resulted in the formation of a transient crater lake that gradually dried, leaving a set of very coarse-grained beach ridges as the shoreline retreated. Figure 9 . Figure 9. Unit 1 of the Mawrth Vallis Group (MVG) in Muara crater, Mars.(A) Lower part of Unit 1 of the MVG on the northwest crater wall showing the fine layering parts of this section (a), a possible erosional channel (b), and cross-sets that may be aeolian beds (c).(B) A portion of the Mawrth Vallis Group on the eastern wall of Muara crater.The upper part of Unit 1 occupies most of the photo.It shows crude layering and the interstratification of resistant and weaker units.Many areas are covered by patches of younger windblown sand (a) and the area is crossed by irregular wind-sculpted ridges (b).There is some divergence of bedding in Unit 1 but no conspicuous erosional channels or steeply dipping layering.Overlying MVG units 2 (c), 3 (d), and 4 (e) lie along the right side of the photo.Details have been sharpened.(C) Lower part of NE crater wall showing major truncation surface (dashed line) in the lower part of Unit 1. Figure 10.Unit 1 on north wall of Muara crater, Mars.(A) Middle to upper part of Unit 1 showing thicker, crude layering that includes large cross sets (a) and undulating layering (b).This sequence may consist largely of windblown sand.The thinner layering of the lower part of Unit 1 can be seen at the bottom of the photo (c).(B) Massive fractured rock at the top of Unit 1 (a).Some areas of well-defined layering are laterally equivalent to fractured rock (b) and some layering can be seen through fractures (c).See Figure 3 for location. Figure Figure 11.Uppermost Unit 1 (light, lower right), Unit 2 (dark band, center), and lower Unit 3 (light banded, upper left) of the Mawrth Vallis Group, northwest wall of Muara crater, Mars.The crater wall slopes from upper left to lower right.Low, sharp-crested ridges (a), illuminated on their SW sides, extend up and down the slope.The blocky weathering of Unit 1 is overlain with a sharp contact by the more massive, dark layer of lowest Unit 2 (solid line).The bulk of Unit 2 shows medium-to light-toned layers, <1 to ~2 m thick, interbedded with slightly darker layers and very dark thin layers.Most layers are relatively flat, with apparent undulations reflecting topographic irregularities, but some show low peaks (immediately right of b) and swales (c).The complex stratigraphy of the lower part of Unit 3 is evident, with an unconformity between units 2 and 3 (dashed line) and internal erosional unconformities (d), discontinuous strata, pinch and swell features, and draping units.Contrast and sharpness enhanced. Figure 12.Part of the northern wall of Muara crater showing the upper part of Unit 2 and most of Unit 3 of the Mawrth Vallis Group, Mars.The lower part of Unit 3 shows a train of well-developed, lenticular, dune-like features (a) composed of dark sandy(?) 
sediment that are draped by light-toned material.Above the zones of dunes and lenticular dark layers, Unit 3 consists of tabular lighttoned layers separated by very thin, continuous layers of dark material (b), many of which appear thicker than they actually are because of younger, dark, windblown sand accumulating on small benches marking bedding surfaces.A set of closely spaced joints produces sawtooth like fractures of the light-toned layers that are filled by dark windblown(?) sediment.The tabular light layers traced to the right show swales and features suggesting possible dunes (c).For location of this area, see Figure 3. N Downloaded from https://pubs.geoscienceworld.org/gsa/gsabulletin/article-pdf/doi/10.1130/B35185.1/4691951/b35185.pdf by Univ of Massachusetts Amherst, Christopher D. Condit Figure 13.Joint set (a) in the upper wall of Muara crater, Mars.This view is immediately higher on the crater slope than the view in Figure 12 and includes Unit 4 (light toned) and overlying Unit 5, which is largely covered by dark debris.The joints and bedding form saw-tooth fractures and displaced blocks (b). Figure 15 . Figure 15.Study area in the inverted terrain, Mars.A large, low, essentially uncratered area of light-toned rock of Unit 4 of the Mawrth Vallis Group is bounded to the north and west by a plateau capped by darktoned rock, probably basalt (a).The plateau is flanked by cliffs of exposed Unit 5 of the Mawrth Vallis Group (b) and, lower on the slope, by accumulations of dark sandy or gravelly debris (c).Unit 4 outcrops on the valley floor (light toned) show zones of heavily broken rock (d) separated by smooth areas of unbroken light-toned rock (e). Figure 16 .Figure 17 . Figure 16.Area in the inverted terrain showing zones of rubble and brecciated rock separated by zones of smooth, intact rock.The rubble zones have developed by failure, sliding, and brecciation of surface layers composed mainly of light-toned sediments.Some of these failures display a distinct downslope zonation including locally a headward fracture (a), a head scarp (b), and an upper zone of large, little displaced blocks (c) grading downslope into finer rubble zones to a front of small, jumbled blocks probably representing debrisflow material (d).The front of the slides are distinct, shadowed scarps (e).Where most of the light-toned layers have been removed there are accumulations of darker sediment (f).On the right are a series of fractures (g and h) parallel to the slide front.In some areas, brecciation involved fracturing along polygon boundaries in the strongly patterned ground (i).A few through-going fractures are present (k).Enhanced contrast and sharpened.See Figure 15 for location. Figure 18 .Figure 19 . Figure 18.Small slide area of brecciated Unit 4 surrounded by largely intact, strongly patterned ground.The slide shows a headward scarp (a), a downslope end composed of larger blocks (b), and a middle area that includes two topographic low areas covered with dark, finer, probably sandy sediment (c and d).(d) is at the head of a low area with abundant dark sediment that extends as a low-channel-like feature to the NW.It is characterized by dark sediment and isolated blocks of Unit 4. 
One small debris-flow lobe extends 8-10 m out onto the adjacent flat area of patterned ground (e).The fine fractures characterizing the mid-part of the failure have formed along the sides of polygons in the original patterned ground.Older lobes of failed and brecciated rock (f) appear to be covered by a thin layer of fine light-toned, possibly windblown dust and silt.There are irregular ribbons of darker material and isolated larger blocks (g) extending downslope from the tips of the failures that appear to mark a runoff channel.See Figure 15 for location.25 m
2019-05-17T14:19:51.499Z
2019-05-02T00:00:00.000
{ "year": 2020, "sha1": "2aea615ddff47a06e4086fd91cd2cd88488ac07b", "oa_license": "CCBY", "oa_url": "https://pubs.geoscienceworld.org/gsa/gsabulletin/article-pdf/132/1-2/17/4906812/17.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "893749ed3d7f0ee6a30c0cb78582ead9f1c5365f", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Geology", "Medicine" ] }
18071763
pes2o/s2orc
v3-fos-license
Regulation and Essentiality of the StAR-related Lipid Transfer (START) Domain-containing Phospholipid Transfer Protein PFA0210c in Malaria Parasites*

StAR-related lipid transfer (START) domains are phospholipid- or sterol-binding modules that are present in many proteins. START domain-containing proteins (START proteins) play important functions in eukaryotic cells, including the redistribution of phospholipids to subcellular compartments and delivering sterols to the mitochondrion for steroid synthesis. How the activity of the START domain is regulated remains unknown for most of these proteins. The Plasmodium falciparum START protein PFA0210c (PF3D7_0104200) is a broad-spectrum phospholipid transfer protein that is conserved in all sequenced Plasmodium species and is most closely related to the mammalian START proteins STARD2 and STARD7. PFA0210c is unusual in that it contains a signal sequence and a PEXEL export motif that together mediate transfer of the protein from the parasite to the host erythrocyte. The protein also contains a C-terminal extension, which is very uncommon among mammalian START proteins. Whereas the biochemical properties of PFA0210c have been characterized, the function of the protein remains unknown. Here, we provide evidence that the unusual C-terminal extension negatively regulates phospholipid transfer activity. Furthermore, we use the genetically tractable Plasmodium knowlesi model and recently developed genetic technology in P. falciparum to show that the protein is essential for growth of the parasite during the clinically relevant asexual blood stage life cycle. Finally, we show that the regulation of phospholipid transfer by PFA0210c is required in vivo, and we identify a potential second regulatory domain. These findings provide insight into a novel mechanism of regulation of phospholipid transfer in vivo and may have important implications for the interaction of the malaria parasite with its host cell.

Phospholipid transfer proteins play important roles in the trafficking of phospholipids within eukaryotic cells (1). One subset of phospholipid transfer proteins is represented by a group of proteins containing a StAR-related (START) lipid transfer domain, which mediates the binding to lipids or sterols and can promote their transfer between membranes. Although sequence similarity between different START domains can be very low, all are characterized by a specific fold consisting of four α-helices and a nine-stranded twisted antiparallel β-sheet that together form a cavity in which a hydrophobic phospholipid or sterol is held (2,3). The human genome encodes 15 different START domain-containing proteins (START proteins) that can be categorized into five different groups based on the bound ligand specificity and the presence of additional functional domains (Table 1) (4-7). The roles of the different START proteins are in most cases not well understood, although mutations in genes encoding START proteins have been linked with various diseases (5,8). It is clear from genetic experiments in mice that at least two of the murine START proteins, STARD11 and STARD12, are essential (9,10). The best characterized human START proteins are STARD2, STARD7, and STARD10, which all transfer phosphatidylcholine (and additionally phosphatidylethanolamine in the case of STARD10) (11-13).
Although their exact functions are unclear, the proteins appear to have a role in the transfer of phospholipids from the endoplasmic reticulum to mitochondria and possibly the plasma membrane (14,15). What is also not understood is whether and how the transfer of phospholipids by these proteins is regulated. Some of the START proteins consist of little more than the START domain itself, whereas others contain additional domains, such as thioesterase or Rho-GAP domains (Table 1). Some START proteins, including STARD2 and STARD12, have been shown to interact with other proteins (16,17), which may provide a mechanism of regulating phospholipid transfer activity, or conversely, to allow the START protein to regulate the interacting protein. In the case of STARD10, phosphorylation of a residue in a C-terminal extension has been shown to regulate transfer activity (18). Most members of the genus Plasmodium, which are the obligate intracellular parasites that cause malaria, encode four START proteins (19). These include a putative orthologue of STARD2 (phosphatidylcholine transfer protein; PF3D7_1351000) and an uncharacterized protein (PF3D7_0911100) with a putative cyclin-dependent serine/threonine kinase domain that also contains a START domain at its C terminus. The third START protein (PF3D7_1463500) displays similarity to StarD3 (MLN64), a cholesterol transfer protein (20). Interestingly, in rodent malaria parasites this protein forms part of the large Fam A family, whereas in non-rodent malaria parasites, only one family member is present (21)(22)(23). The best characterized START protein in Plasmodium spp. is the exported broad specificity phospholipid transfer protein PFA0210c (PF3D7_0104200). PSI-BLAST analysis shows that PFA0210c is most closely related to STARD7, whereas structure prediction reveals that the highest similarity is to human phosphatidylcholine transfer protein STARD2 (24). Phospholipid transfer proteins in Plasmodium are of particular interest as one of the most striking changes induced by the parasite in the host erythrocyte, the site of replication of the parasite during the clinical stage of the disease, is the formation of a large exomembrane system (25). This consists of several different, most likely unconnected, membranous compartments that have various functions within the cell. One of these membranous compartments is the parasitophorous vacuole membrane that surrounds the parasite during the entire intraerythrocytic life cycle and that separates the parasite from the erythrocyte cytosol. Another group of membranous compartments is the Maurer's clefts, small vesicles that are important for the transfer of parasite proteins to the surface of the infected cell (26,27). As mature erythrocytes are devoid of internal membranes and lack the capacity to produce membranes (28), these newly formed membranes, which are all outside of and unconnected to the parasite, must be produced by the parasite. How the parasite transfers phospholipids across the aqueous environment of the parasitophorous vacuole lumen to the parasitophorous membrane and the Maurer's clefts is unknown. It has been suggested that PFA0210c may play a part in this process (24). 
In support of this, a unique feature of PFA0210c among the Plasmodium falciparum START proteins is the presence of a signal sequence, which mediates the secretion of the protein from the parasite into the parasitophorous vacuole, as well as a PEXEL export signal, which directs export beyond the parasitophorous vacuole membrane into the erythrocyte cytosol (Fig. 1A). Previous studies have indicated that PFA0210c can be exported from the parasite to the erythrocyte cytosol (29), although at least a fraction remains in the parasitophorous vacuole (24). The presence of the protein at these locations indicates that it may function to transfer phospholipids between the parasite and parts of the exomembrane system. Here, we show that PFA0210c and its Plasmodium orthologues contain an unusual C-terminal extension that regulates the phospholipid transfer activity of the protein. Furthermore, we provide evidence that the Plasmodium knowlesi orthologue of PFA0210c, PKH_020910, is essential, and we apply new genetic techniques to show that P. falciparum parasites lacking PFA0210c do not develop or proliferate within the infected erythrocyte. Additionally, we show that the regulation of phospholipid transfer through the C-terminal extension is required for parasite growth. Together, these experiments reveal a new mechanism of regulating phospholipid transfer and show that phospholipid transfer is an essential and regulated process in Plasmodium parasites. Results PFA0210c and Its Orthologue in P. knowlesi Are Essential-PFA0210c was initially identified as a conserved exported protein (29). We subsequently showed that PFA0210c as well as its orthologues in the simian and human malaria pathogen P. knowlesi (PKH_020910 or PKNH_0209300) and the rodent parasite Plasmodium chabaudi (PCHAS_020730 or PCHAS_0207300) are phospholipid transfer proteins that can transfer a broad range of phospholipids in vitro (24). All three proteins contain a signal sequence and a predicted PEXEL motif that directs export from the parasite into the host erythrocyte ( Fig. 1A) (29 -31). The proteins are characterized by a poorly conserved N-terminal region, a START domain that is required for the phospholipid transfer activity (24), and a C-terminal extension of ϳ84 amino acid residues (Fig. 1B). To gain further insight into the function of this protein, we used the recently developed P. knowlesi A.1-H.1 strain that has been adapted to in vitro growth in human erythrocytes (32) to attempt to generate a mutant that lacks the gene encoding PKH_020910. The P. knowlesi model is highly genetically tractable, allowing for rapid gene modification by homologous recombination. We first confirmed that PKH_020910 is expressed in in vitro culture by immunoblotting. Full-length PKH_020910 has a predicted molecular mass of 55.4 kDa, which is reduced to 52.7 kDa after removal of the signal sequence and further reduced to 43.7 kDa after cleavage of the PEXEL domain. Using an affinity-purified polyclonal antibody, a protein of approximately the expected molecular mass was detectable in extracts of infected erythrocytes ( Fig. 2A) but not in that of uninfected erythrocytes, despite the presence of a higher amount of protein in the lane with uninfected erythrocyte extract (Fig. 2B). Immunofluorescence staining of erythrocytes infected with late stage parasites (schizonts) revealed that the protein is likely expressed in a subset of apical organelles, as indicated by the localized punctate pattern of staining (Fig. 2C). 
The identity of this organelle could not be ascertained because of the lack of organelle markers for P. knowlesi, and no signal was obtained by immunoelectron microscopy using the available anti-PKH_020910 antibodies. Nonetheless, these experiments confirmed that PKH_020910 is expressed in blood stages of the P. knowlesi life cycle.

FIGURE 1. Overview and alignment of the PFA0210c orthologues from P. falciparum, P. knowlesi (PKH_020910), and P. chabaudi (PCHAS_020730). A, outline of the domains of PFA0210c and its orthologues, indicating the signal sequence (black), the motif that mediates export to the erythrocyte (PEXEL; black rectangle), the non-conserved N-terminal region (dark blue), the START domain (light blue), and the C-terminal extension (gray). Numbers above the outline indicate the position of the amino acid residues at the start and end of the domains. The domain structure among the PFA0210c orthologues is the same, although the length of the non-conserved N-terminal domain, and therefore the length of the entire protein, varies. B, alignment of PFA0210c with its orthologues of P. knowlesi (PKH_020910) and P. chabaudi (PCHAS_020730). The START domain is underlined. Conserved residues in the C terminus that were targeted in the mutagenesis studies are shown in boldface type. The numbers at the end of the sequence indicate the position of the residue at the extreme C terminus; the numbers above the sequence and the italicized numbers below the sequence indicate the position of the last residue of the truncations of the P. falciparum and P. knowlesi protein, respectively, used in this study. Note that the variation in sequence length is determined by variation in the N-terminal portion of the proteins; the length of the sequence from the start of the START domain to the C terminus varies by only two residues between these proteins. Sequences were obtained from PlasmoDB (40) and aligned using Clustal Omega (45).

To determine whether PKH_020910 is required for parasite growth, we attempted to disrupt the PKH_020910 gene through single crossover homologous recombination (Fig. 3A). Parasites were transfected with linearized plasmids that were designed to integrate into the genome and either reconstitute the entire gene or truncate the coding sequence to remove 46 residues of the START domain. A similar truncation of PFA0210c ablates its capacity to transfer phospholipids in an in vitro assay (24). After selection of the transfected parasites with pyrimethamine, drug-resistant parasites were recovered in all cases within 10-14 days. Diagnostic polymerase chain reaction (PCR) analysis of genomic DNA (Fig. 3A) indicated that integration of the plasmid could only be detected in the case of the plasmids designed to reconstitute the entire open reading frame (PKH_020910 476 in Fig. 3B); no integration was detected in the plasmid designed to truncate the gene (PKH_020910 346 in Fig. 3B). To rule out the unlikely scenario that the different results reflected the different lengths of the targeting region within the plasmids (972 bp versus 1362 bp), we engineered a plasmid that contained the same 972-bp homology region as the deletion plasmid but also contained the remainder of the entire gene in a re-codonized form. Integration of this plasmid was thus mediated by the same targeting sequence as the truncation plasmid but would lead to reconstitution of the full-length gene. This plasmid readily integrated into the chromosome (PKH_020910 346+recodonized in Fig.
3B), indicating that the inability of the truncation to integrate was not a consequence of the insufficient targeting sequence but rather reflected a requirement for PKH_020910 for parasite viability. To avoid caveats associated with negative data, we wanted to confirm further that this protein is required for parasite survival. We exploited a combination of recently developed Cas9 technology with conditional gene excision (33)(34)(35)(36) to obtain an inducible disruption of the gene in order to examine the essentiality of PFA0210c in P. falciparum. For this, we replaced the native PFA0210c open reading frame with a form interrupted by a loxP-containing intron (37). This was followed by 855 bp of a re-codonized PFA0210c-coding sequence plus a second loxP site immediately following the stop codon (Fig. 4A). This gene modification was performed in the 1G5DC P. falciparum clone (38), which expresses the rapamycin-inducible Cre recombinase (DiCre) (38,39). Several transgenic parasite clones were generated expressing the loxP-containing PFA0210c gene (called PFA0210c-LoxP; Fig. 4B). Treatment of these clones with rapamycin resulted in the expected excision of the segment of the PFA0210c gene that is flanked by the loxP sites (Fig. 4C). In confirmation of this, no PFA0210c protein could be detected in schizonts of the rapamycin-treated parasites (Fig. 4D). To assess the effects of gene disruption on parasite viability, growth assays were performed, comparing rapamycintreated PFA0210c-LoxP parasites with control mock-treated counterparts. This revealed a severe growth defect in the parasites lacking PFA0210c (Fig. 4E), whereas treatment of 1G5 parasites with rapamycin did not affect their growth rate. Giemsa staining of the parasites revealed that loss of PFA0210c resulted in the formation of dysmorphic ring-stage parasites in the growth cycle following that in which the parasites were treated with rapamycin (the 60-h time point in Fig. 4F). These parasites have a translucent, nearly white center and do not develop past the ring stage, whereas DMSO-treated parasites and 1G5 parasites do develop into trophozoites (see the 76-h time point in Fig. 4F). These results demonstrate that PKH_020910 and PFA0210c are essential proteins in the asexual blood stage of the Plasmodium life cycle. PFA0210c and Its Orthologues Contain an Extended C Terminus-The 84-residue C-terminal extension of PFA0210c is highly unusual among START proteins. Although the 15 human START proteins range in size from 205 residues (STARD5) to 4705 residues (STARD9), the START domain is almost exclusively located at or very near the extreme C terminus of the protein (Table 1); the longest C-terminal extension found in a human START protein, STARD10, is 50 residues. All orthologues of PFA0210c possess a C-terminal extension ranging from 81 residues in the Plasmodium yoelii orthologue (PY17X_0210300) to 92 residues in the Plasmodium berghei orthologue (PBANKA_0208900), with most (including those of PFA0210c, PKH_020910, and PCHAS_020730) comprising 84 residues (Table 1)). Sequence conservation in the C-terminal extension is overall much lower than within the START domain ( Fig. 1B), However, we noticed the presence of a number of highly conserved residues near the extreme C terminus, suggesting a conserved function (Fig. 1B). 
C Terminus of PFA0210c Negatively Regulates Phospholipid Transfer-To determine whether the C-terminal extension of PFA0210c has a role in the regulation of its phospholipid transfer activity, we produced recombinant forms of PFA0210c, PKH_020910, and PCHAS_020730 that contained the entire START domain but were truncated to various extents within the C-terminal extension. The proteins were then compared in an in vitro phospholipid transfer assay. All the truncated proteins showed higher phospholipid transfer activity than the full-length protein, indicating that the C-terminal extension affects the activity (Fig. 5A). To investigate whether the observed regulation through the C terminus was mediated simply through steric hindrance by a large protein domain or whether it required the presence of a conserved sequence-specific element in the C terminus, we replaced the C-terminal 48 residues of PFA0210c (the residues absent from the truncated version of the protein used in Fig. 4A) with either the corresponding sequence of the P. chabaudi orthologue or the C-terminal 48 residues of green fluorescent protein (GFP). The chimeric protein containing C-terminal residues of the P. chabaudi orthologue displayed levels of phospholipid transfer activity similar to that of full-length wild type PFA0210c, whereas the form containing the GFP sequence showed increased activity, similar to that of the truncated PFA0210c (Fig. 5B). These results suggest that regulation of phospholipid transfer activity involves conserved sequence elements in the C-terminal extension. We conclude from these experiments that the C-terminal extension of PFA0210c and its orthologues contains one or more conserved sequence-specific elements that regulate the phospholipid transfer activity of the proteins.

Erythrocytes infected with late-stage P. knowlesi were stained with anti-PKH_020910 antiserum and DAPI to visualize the parasite nuclei. Staining is clearly visible within the parasites but is absent in uninfected erythrocytes. DIC, differential interference contrast.

Regulation through the C Terminus Is Confined to a Short Conserved Region-We next wanted to define in more detail the region of the C terminus that mediates the regulation of phospholipid transfer. As the sequence conservation is primarily restricted to the C-terminal 27 residues of the protein (Fig. 1B), we focused on this region. To identify the residues involved in the regulation of phospholipid transfer activity, we produced serially truncated versions of recombinant PFA0210c that sequentially removed three to six residues (see Fig. 1B for the positions where the proteins were truncated) and evaluated the effects on in vitro phospholipid transfer activity. This revealed that removal of the last 19 residues (C-terminal residue 447), which includes the highly conserved residues from position 448 through 466, produced fully active protein (Fig. 6A). Truncated proteins that lacked between 16 and 13 residues (C-terminal residues 450 and 453, respectively) displayed low activity that was nonetheless higher than that of the full-length protein. A truncation of only seven residues (C-terminal residue 459) had no effect on activity relative to the full-length protein (C-terminal residue 466). These results show that the phospholipid transfer activity of PFA0210c is regulated through a short patch of residues near the extreme C terminus, extending from Trp-448 to Lys-457. These residues are conserved in P. falciparum and P.
knowlesi but not in the rodent malaria species (Fig. 1B). However, the Trp-448 and Ile-449 residues as well as the Lys residue nine residues downstream, which are all part of the conserved region in PFA0210c, are also present in the rodent malaria parasite species, albeit shifted by 12 residues compared with the non-rodent malaria parasites. This suggests that the positioning of the regulatory region is shifted slightly in the rodent malaria parasites. To understand the roles of the conserved residues in more detail, we separately substituted the conserved Trp-448 and Ile-449 residues as well as Lys-456 and Lys-457 in PFA0210c (see boldface residues in Fig. 1B) into full-length and truncated recombinant proteins. Phospholipid transfer analysis of these mutants revealed that substitution of Trp-448 and Ile-449 in the protein truncated at residue 450 resulted in a gain of transfer activity, producing activity similar to that of the protein truncated at residue 447. However, the same mutation in the protein that contains an additional nine residues (C-terminal residue 459) did not result in an increase in activity. Mutation of Lys-456 and Lys-457 did not affect phospholipid transfer activity of either full-length or truncated proteins. To test whether the two Lys residues by themselves provide the additional regulation in proteins lacking the Trp-448 and Ile-449 residues, we produced a mutant protein that contained both sets of mutations (this mutant also lacks the last seven residues, as full-length protein with the Trp-448 and Ile-449 mutation could not be obtained in a non-aggregated form). However, this double mutant protein did not display enhanced activity above that of the wild type protein. Together, these results show that the phospholipid transfer activity of PFA0210c is regulated by specific residues in the extreme C-terminal 18-residue segment of the protein. The Trp-448 and Ile-449 residues play a pivotal role in this regulation, but additional residues are also involved.

We next sought to determine whether the regulation of phospholipid transfer revealed through the in vitro experiments is functionally important in vivo. For this, we used the same integration strategy as described in Fig. 3A to attempt to introduce serial truncations of PKH_020910 in the P. knowlesi genome. Integration of the various targeting plasmids used was designed to produce truncated genes encoding proteins lacking the C-terminal eight residues (retaining all the residues required for the regulation of the phospholipid transfer activity in vitro), 13 residues (thereby removing some but not all of the regulatory residues), or 20 residues (thereby removing all of the regulatory residues). In addition, we used a plasmid that upon integration would produce a gene that encodes a protein lacking internal residues 433-444 of the C-terminal extension (Δ12), thus shortening the C-terminal extension but retaining the regulatory residues at the extreme C terminus. Parasites were recovered after transfection with each plasmid, and integration was determined using the same strategy as described in Fig. 1. This revealed that the positive control plasmid designed to reconstitute the entire gene readily integrated (Fig. 7). However, integration of none of the plasmids designed to give rise to a truncated protein was detected. These results indicate that regulation of phospholipid transfer by the protein is required for parasite growth.
Surprisingly, the plasmid designed to give rise to a protein lacking 12 residues of the C-terminal extension but containing the conserved region at the end of the C terminus readily integrated. It thus appears that the residues at the extreme C terminus of PKH_020910 are required for regulation but that their distance from the end of the START domain may vary. Discussion In this work, we have shown that the START protein PFA0210c and its orthologues are required for the growth of the malaria parasite in its clinically relevant blood stages. Replacement of the native gene with a version that encodes a protein that cannot transfer phospholipids was not successful in P. knowlesi, whereas control plasmids readily integrated, strongly suggesting that parasites lacking a functional gene are not viable. Furthermore, conditional disruption of the PFA0210c gene in P. falciparum blocked the development of the parasites in the subsequent round of replication. Further support for the essential nature of the protein is that genetic modification of the orthologous gene in P. berghei was unsuccessful, as listed on PlasmoDB (www.plasmodb.org (40)). Additional in vitro analysis showed that PFA0210c is regulated through a unique C-terminal extension. The residues responsible for the regulation lie close to the extreme C terminus and are conserved among the Plasmodium orthologues. This conservation of sequence is supported by the demonstration that the corresponding P. chabaudi sequence could restore regulation of the P. falciparum orthologue PFA0210c in vitro, whereas replacement of the C terminus with the equivalent sequence from GFP did not affect the regulation. In further experiments, the presence of the C-terminal 48 residues of PFA0210c in the form of a separate peptide or a glutathione S-transferase (GST) fusion protein did not affect phospholipid transfer levels of truncated PFA0210c, indicating that the regulatory function of the C-terminal sequence cannot be mediated in trans, although we cannot rule out that the GST fusion and the peptide were not in the correct conformation to mediate their effect. Importantly, this regulation is likely to be essential for parasite viability, as we were unable to obtain P. knowlesi parasites expressing a truncated version of the protein. The precise location of the regulatory region relative to the end of the START domain appears to be somewhat flexible, as replacing the native gene with a version that encodes a protein in which the extreme C terminus is shifted 12 residues closer to the START domain appeared to be tolerated in vivo. Consistent with this, in the rodent malaria parasite orthologues, residues that are highly conserved in the non-rodent malaria parasites, are shifted by 12 residues. Interestingly, we could not obtain viable parasites that expressed a version of the protein that lacked the C-terminal eight amino acids but retained the residues that are important for regulating phospholipid transfer in vitro. Hence, in vivo the regulation of phospholipid transfer by PFA0210c and its orthologues may be more complex. We hypothesize that there are two regulatory regions as follows: the first region bounded by the Trp-448 and Ile-449 and the Lys residues at position 456 and 457 that was identified in the in vitro assay as the negative regulatory region (the WI-KK domain); and a second region, closer to the C terminus, that can counteract the negative regulation mediated by the WI-KK domain upon a signal from another source. 
Hence, removing this second region would make the protein permanently inactive, as the regulation mediated by the WI-KK domain cannot be removed. The protein is essential for the growth of the parasite, and therefore, removing its capacity to activate phospholipid transfer activity would be a lethal event for the parasite. Potentially, a cofactor is required that interacts with the second region to counteract the negative regulation of the WI-KK domain in the transfer of phospholipids. This would allow for the intriguing possibility that this cofactor regulates the directionality of phospholipid transfer; the protein would only be able to obtain and/or release its phospholipid to a membrane in which the cofactor is present. Identification of a binding partner of PFA0210c would shed light on this model. This report is only the second to provide evidence for regulation of the activity of a mammalian START protein. The only other example is that of STARD10, which similarly contains a C-terminal extension, albeit shorter (50 amino acids compared with 86 amino acids in PFA0210c; Table 1). This C-terminal extension is phosphorylated in vivo, whereas phosphorylation in vitro with casein kinase II decreased its phospholipid transfer activity (18). Interestingly, an eight-amino acid truncation of STARD10, removing the phosphorylation site, increased its activity beyond that of the unphosphorylated protein. It was initially speculated that phosphorylation of the protein may decrease the binding of STARD10 to membranes, but in light of the results presented here, it may also be possible that phosphorylation of the protein induces a conformational shift that reduces activity. This may point to a broader regulation of START proteins that contain a C-terminal extension. In conjunction with the previous report (18), this study begins to elucidate a mechanism by which phospholipid transfer can be regulated in vivo, a mechanism potentially shared from Apicomplexa to humans.

For transfections of P. knowlesi, parasites were transfected using the Amaxa 4D electroporator (Lonza), as described previously (32,41). Late stage parasites were harvested by flotation on a 55% Nycodenz (Axis-Shield) stock solution (consisting of 27.6% (w/v) Nycodenz powder in RPMI 1640 medium). Purified parasites were maintained in RPMI 1640 medium until a majority had reached the eight-nuclei stage. The parasites were then centrifuged briefly, and the supernatant was removed. The parasites were suspended in 100 μl of nucleofection solution (P3 Primary cell 4D Nucleofector X Kit L (Lonza)), and 10 μl of TE containing ~50 μg of plasmid that had been linearized with BsaBI was added, and the parasites were subsequently electroporated. Parasites were transferred to a flask containing 300 μl of blood and 1.7 ml of RPMI 1640 medium and maintained, with shaking, for 30 min at 37°C to allow for efficient invasion. Eight ml of medium was added, and the parasites were further maintained at 37°C. Selection for transfected parasites with 0.1 μM pyrimethamine was initiated ~20 h after transfection. Drug-resistant parasites were usually detected 10-14 days later. Integration of plasmids in P.
knowlesi was determined by isolating genomic DNA from the parasites using the Qiagen Blood and Tissue kit and using the extracted DNA as template for PCR using primer pairs specific for the integrated plasmid (CVO093 and CVO079, M13 reverse, and CVO104), the wild type locus (CVO093 and CVO104), and circularized plasmid (M13 reverse and CVO079) as described under "Results." All primer sequences are listed in Table 2. The 1G5DC clone and the PFA0210c-LoxP strain of P. falciparum 3D7 were maintained as P. knowlesi, without the addition of human serum. Generation of the inducible PFA0210c-LoxP strain in the 1G5 background was conducted using the CRISPR/Cas9 system. Schizonts of the 1G5DC strain were transfected according to standard protocol (38) with pBLD529 (the plasmid that introduces the LoxP sites, see below) and the Cas9-expressing pDC2-cam-Cas9-U6-hDHFR (42) plasmid to which the gene encoding the yeast cytosine deaminase-uracil phosphoribosyltransferase had been added, as well as a sequence encoding guide RNAs specific for PFA0210c. Transfectants were initially selected with 2.5 nM WR99210 and were subsequently treated with 1 μM ancotil to select against parasites carrying the pDC-based plasmid. Ancotil-resistant parasites were cloned by limiting dilution. Integration of the plasmid and recombination of PFA0210-LoxP were determined as for P. knowlesi, using primer pairs CVO150 with CVO083 and CVO071 with CVO183 to test the presence of the wild type gene, and CVO150 with CVO162 and CVO321 with CVO183 to determine integration of the plasmid. Removal of the gene after rapamycin treatment was determined by PCR using primers CVO001 and CVO097. To induce Cre recombinase activity, parasites in the early ring stage were incubated at 37°C in the presence of 10 nM rapamycin (added from stocks in DMSO) or the equivalent volume of DMSO as control. After 30-60 min, the parasites were washed once and resuspended in growth medium. Parasitemia was determined by cell counting using a FACSAria Fusion flow cytometer. First, ~2 μl of infected cell culture was fixed with 0.2% glutaraldehyde in PBS for 1 h and subsequently washed with PBS and stored at 4°C. Prior to counting, the cells were stained with 2 μM Hoechst 33342 in PBS for 30 min. The number of infected erythrocytes per 50,000 erythrocytes was determined. Immunoblotting and Immunofluorescence Imaging-To detect PKH_020910 and PFA0210c by immunoblotting, extracts of uninfected erythrocytes and erythrocytes infected with schizont stage parasites were produced by suspending the cell pellet in 3 volumes of 1× SDS loading dye and separating the proteins on an SDS-12.5% polyacrylamide gel. The proteins were transferred to nitrocellulose, and after blocking the blot with 5% milk in PBS containing 0.05% Tween, the proteins were detected by incubating the blot with the affinity-purified anti-PKH_020910 or anti-PFA0210c antibody at a dilution of 1:5000 or 1:2500, respectively. Antibody binding was visualized by incubating the blot with HRP-linked goat anti-rabbit secondary antibody and developing with Immobilon Western chemiluminescent HRP substrate (Millipore). Spectrin was detected using an anti-spectrin (α and β) mouse monoclonal antibody (Sigma) and an HRP-linked goat anti-rabbit secondary antibody. Immunofluorescence imaging was performed as described (43).
A small aliquot of infected erythrocytes was spun down and resuspended in PBS containing 4% paraformaldehyde and 0.01% glutaraldehyde. After 1 h of agitation at room temperature, the parasitized erythrocytes were pelleted, washed with PBS, permeabilized with 0.1% Triton X-100 for 15 min, washed once more with PBS, blocked with 3% BSA in PBS, and then incubated for 1 h at room temperature in PBS containing 3% BSA and anti-PKH_020910 antibody diluted at 1:5000. The erythrocytes were then washed three times with PBS and subsequently incubated at room temperature in PBS containing 1 μg/ml 4′,6-diamidino-2-phenylindole (DAPI) and Alexa 596-conjugated anti-rabbit antibodies diluted at 1:5000 for 1 h. The erythrocytes were washed three times with PBS, resuspended in a small volume of PBS, and placed on a polyethyleneimine-coated microscope slide. This was covered with a coverslip and sealed with nail polish. Differential interference contrast and fluorescence images were obtained on a Nikon Eclipse Ni, fitted with a Hamamatsu C11440 digital camera. Images were processed in Photoshop. Note that the anti-PKH020730 signal was false colored green.

Production of Plasmids-Plasmids used for disrupting the PKH_020910 locus were produced by amplifying the region of PKH_020910 to be targeted (omitting the first 22 codons of the genomic sequence) by PCR. The resulting fragment was cloned into the XmaI and SacII sites of pHH4-MyoA-GFP (32), producing the plasmids pBLD468 (product of primers CVO123 and CVO122, ending at codon 476 (full length)), pBLD481 (product of primers CVO123 and CVO237, ending at codon 468), pBLD482 (product of primers CVO123 and CVO238, ending at codon 461), pBLD483 (product of primers CVO123 and CVO239, ending at codon 456), and pBLD467 (product of primers CVO123 and CVO121, ending at codon 336). Plasmid pRH34, which contains a fusion of the sequence in pBLD467 and a re-codonized version of the remaining coding sequence, was produced by overlapping PCR. The wild type gene (ending at codon 367) was amplified from genomic P. knowlesi DNA (using primers CVO123 and RJH42), and the remaining sequence (codons 368-476) was amplified by PCR using a version of the gene codon-optimized for Escherichia coli (GeneArt) (using primers RJH41 and RJH40). The two fragments were fused by PCR, using built-in overlapping ends and primers CVO123 and RJH040. Plasmid pBLD484, which lacks codons 433-444, was also made by overlapping PCR, using primers that amplified codons 23-432 (CVO123 and CVO241) and codons 445-476 with overlapping ends (CVO240 and CVO122). The fragments were fused by PCR using the internal overlap and primers CVO123 and CVO122. Plasmid pBLD529 that was used to replace the native PFA0210c gene with a version containing the SERA2 intron containing a loxP site was created as follows. A synthetic gene product containing a fusion of wild type sequence and recodonized sequence was cloned into the pBAD vector. This sequence was fused to the 3′ UTR sequence of PFA0210c by overlapping PCR using primer pairs CVO001 and CVO254 (combined with CVO305 to add the LoxP site) to amplify the coding region and CVO306 and CVO163 to amplify the 3′ region. The overlapping PCR introduced a loxP site immediately following the stop codon. This sequence was cloned into pGEM-T by T-tailing, creating pBLD509.
To introduce the loxP site contained in the SERA2 intron, a synthetic DNA fragment was obtained that contained wild type PFA0210c sequence fused with PFA0210c re-codonized sequence containing a SERA2 intron in which a loxP site was inserted between bp 463 and 464. A SpeI-Tth111I fragment of this fusion was cloned into pBLD509 to give rise to pBLD529. Plasmids used for protein production in E. coli were produced as follows. As the N-terminal region of PFA0210c does not affect the phospholipid transfer activity of PFA0210c (24), it was not included in the recombinant proteins ("full-length" denotes the presence of the entire C-terminal extension). Gene fragments starting at codon 144 in PFA0210c, codon 111 in PKH_020910, and codon 49 in PCHAS_020730 were amplified from genomic DNA using primer pairs PFA0210c 5Ј-23 and PFA0210c 3Ј-26 (PFA0210c), CVO014 and CVO015 (PKH_020910), and CVO059 and CVO060 (PCHAS_020730), respectively, and cloned into either the EcoRI (PFA0210c and PKH_020910) or the XbaI (PCHAS_020730) and the SalI site of pMAL c2x (New England Biolabs). Truncated versions of PFA0210c, PKH_020910, and PCHAS_020730 were produced using primer pairs CVO022 and CVO021, CVO014 and CVO064, and CVO059 and CVO61, respectively. To produce the serial truncations of PFA0210c, DNA was amplified with primer CVO022 paired with primer CV057 (terminal codon 459, plasmid pBLD413), primer CVO056 (terminal codon 453, plasmid pBLD412) or primer RJH023 (terminal codon 450, plasmid pRH17), and primer RJH024 (terminal codon 447, plasmid pRH18), and the resulting DNA was cloned into the EcoRI and HindIII sites of pMAL c2x. Each 3Ј primer contained the sequence of a hexahistidine tag, hence the resulting protein contained the maltose-binding protein at the N terminus and a hexahistidine tag at the C terminus. The fusions of PFA0210c with the C terminus of PCHAS_020730 or the C terminus of GFP were produced by overlapping PCR. The region of PFA0210c encompassing codon 144 -418 was amplified from genomic DNA using primers CVO022 paired with CVO065 for the fusion with PCHAS_020730 and CVO067 for the fusion with GFP, with an overlapping tail complementary to the fragment to be fused. The PCHAS_020730 fragment, spanning codons 368 -415, was amplified by PCR using genomic P. chabaudi DNA as template using primers CVO065 and CVO060. The GFP fragment, spanning codons 194 -241, was amplified by PCR using the gene encoding enhanced GFP as template using primers CVO068 and CVO069 and fused to the PFA0210c fragment by overlapping PCR using primers CVO022 and CVO069. Point mutations were generated by overlapping PCR. The 5Ј region was generated using primer CVO022 paired with primer RJH019 (KK-EG) or RJH021 (WI-SQ), whereas the 3Ј region was generated using primers RJH020 (KK-EG) or RJH022 (WI-SQ) paired with M13 forward. DNA fragments were fused by PCR using primers CVO022 and M13 forward using the 5Ј and 3Ј region as template and cloned into pMAL c2x using EcoRI and HindIII. To introduce these point mutations into truncated genes, the genes were amplified using primers CVO022 paired with CVO426 (terminal codon 459, KK-EG mutation) using pBLD413 (which contains the gene encoding PFA0210c with codon 459 as terminal codon) as template, CVO427 (terminal codon 459, WI-SQ mutation), or CVO428 (terminal codon 453, WI-SQ mutation). To generate the double mutant, primers CVO022 and CVO451 were used, using pBLD413 as template. The product was cloned into pMAL c2x using EcoRI and HindIII. 
Protein Purification-All proteins were purified from E. coli strain BL21(DE3) as fusions with maltose-binding protein at the N terminus and a hexahistidine tag at the C terminus. Protein production was induced in 1-liter cultures by the addition of isopropyl β-D-1-thiogalactopyranoside to 0.5 mM when the culture was at an A600 of ~0.5. The bacteria were harvested after an overnight incubation at 18°C and resuspended in column buffer (20 mM Tris, pH 7.4, 500 mM NaCl, 20 mM imidazole) containing protease inhibitors (Complete EDTA-free Mixture, Roche Applied Science). The bacteria were lysed with a cell disruptor (Constant Cell Disruption Systems), and the lysate was sonicated with a microtip for three 30-s pulses (50% duty cycle, setting 4; Vibracell, Sonics and Materials). Lysates were clarified by centrifugation in a JA25.5 rotor at 9000 × g for 30 min. The clarified lysates were mixed with Ni2+-nitrilotriacetic acid resin (Qiagen) and incubated at 4°C for 1 h while rotating. The mixture was poured into a 1.5 × 12-cm chromatography column (Bio-Rad), and the resin was washed with ~50 column volumes of column buffer. The protein was eluted with 5 column volumes of column buffer supplemented with 250 mM imidazole. The eluate was concentrated to 0.5-1.5 ml using a Vivaspin 15 concentrator (Sartorius Stedim Biotech) and loaded onto a HiLoad 26/60 Superdex 200 prep-grade column equilibrated in standard assay buffer (10 mM HEPES-Na+, pH 7.4, 1 mM EDTA, 50 mM NaCl, pH 7.4). Elution of protein was detected through monitoring the UV absorption of the eluate, followed by SDS-PAGE. The fractions containing monomeric protein were concentrated as described above, aliquoted, and snap-frozen in liquid nitrogen. We were unable to purify unaggregated full-length protein containing the Trp-448 and Ile-449 point mutations in five independent attempts; as aggregated protein is not active, this mutant was not included in the analysis.

Phospholipid Transfer Assay-Phospholipid transfer activity was measured as described previously (44). Briefly, donor vesicles and acceptor vesicles were produced by mixing 98 mol % phosphatidylcholine and 2 mol % phosphatidic acid or 88 mol % phosphatidylcholine, 2 mol % phosphatidic acid, 10 mol % N-lactosyl-phosphatidylethanolamine (all non-radioactive lipids were obtained from Avanti Polar Lipids, Inc.) and a trace of 14C-labeled phosphatidylcholine (L-α-dipalmitoylphosphatidylcholine; PerkinElmer Life Sciences), respectively. To this mixture, 200 μl chloroform was added, and the mixture was dried under a stream of N2 gas until completely dry. The dried lipids were resuspended in standard assay buffer and solubilized in a sonicating water bath (Ultrawave U300H) until the solution became completely translucent. To measure phospholipid transfer, 69 nmol of acceptor vesicles were mixed with 23 nmol of donor vesicles in the presence of 1 mg/ml essentially fatty acid-free bovine serum albumin (Sigma), followed by the addition of protein to a final concentration of 25 μg/ml. The final volume of the reaction was 100 μl. This mixture was incubated at 37°C for 30 min. To measure total radioactivity in the sample, a small aliquot was removed, and radioactivity was counted using scintillation counting. To the remaining mixture, agglutinin RCA 120 (lectin from Ricinus communis; Sigma) was added to agglutinate the donor vesicles, and the samples were incubated on ice for 30 min, followed by incubation at room temperature for 10 min.
The agglutinated donor vesicles were pelleted by centrifugation for 6 min at 13,000 rpm in a microcentrifuge. The radioactivity in the supernatant was then measured using scintillation counting, and the amount of transfer was calculated.
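From these counts, the amount of transfer is commonly expressed as the fraction of the total label that has moved to the acceptor vesicles. A minimal formulation, assuming that a buffer-only (no protein) control is used as background and that the pre-agglutination aliquot represents the total label, is:

\[ \text{transfer}\ (\%) = \frac{S_{\text{protein}} - S_{\text{buffer}}}{C_{\text{total}}} \times 100 \]

where S denotes the scintillation counts measured in the supernatant after the donor vesicles have been pelleted and C_total the counts of the aliquot removed before agglutination (scaled to the full reaction volume). The exact normalization and background correction used in ref. 44 may differ; the symbols here are illustrative.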
2018-04-03T06:15:38.435Z
2016-10-02T00:00:00.000
{ "year": 2016, "sha1": "51667d29bf49dfa3ff9e90b9ff8bccec69e3d51b", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/291/46/24280.full.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "51667d29bf49dfa3ff9e90b9ff8bccec69e3d51b", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
10764690
pes2o/s2orc
v3-fos-license
Service Chain and Virtual Network Embeddings: Approximations using Randomized Rounding The SDN and NFV paradigms enable novel network services which can be realized and embedded in a flexible and rapid manner. For example, SDN can be used to flexibly steer traffic from a source to a destination through a sequence of virtualized middleboxes, in order to realize so-called service chains. The service chain embedding problem consists of three tasks: admission control, finding suitable locations to allocate the virtualized middleboxes and computing corresponding routing paths. This paper considers the offline batch embedding of multiple service chains. Concretely, we consider the objectives of maximizing the profit by embedding an optimal subset of requests or minimizing the costs when all requests need to be embedded. Interestingly, while the service chain embedding problem has recently received much attention, so far, only non- polynomial time algorithms (based on integer programming) as well as heuristics (which do not provide any formal guarantees) are known. This paper presents the first polynomial time service chain approximation algorithms both for the case with admission and without admission control. Our algorithm is based on a novel extension of the classic linear programming and randomized rounding technique, which may be of independent interest. In particular, we show that our approach can also be extended to more complex service graphs, containing cycles or sub-chains, hence also providing new insights into the classic virtual network embedding problem. I. INTRODUCTION Computer networks are currently undergoing a phase transition, and especially the Software-Defined Networking (SDN) and Netwok Function Virtualization (NFV) paradigms have the potential to overcome the ossification of computer networks and to introduce interesting new flexiblities and novel service abstractions such as service chaining. In a nutshell, in a Software-Defined Network (SDN), the control over the forwarding switches in the data plane is outsourced and consolidated to a logically centralized software in the so-called control plane. This separation enables faster innovations, as the control plane can evolve independently from the data plane: software often trumps hardware in terms of supported innovation speed. Moreover, the logically centralized perspective introduced by SDN is natural and attractive, as many networking tasks (e.g., routing, spanning tree constructions) are inherently non-local. Indeed, a more flexible traffic engineering is considered one of the key benefits of SDN [10], [18]. Such routes are not necessarily shortest paths or destination-based, or not even loop-free [13]. In particular, OpenFlow [29], the standard SDN protocol today, allows to define routing paths which depend on Layer-2, Layer-3 and even Layer-4 header fields. Network Function Virtualization (NFV) introduces flexibilities in terms of function and service deployments. Today's computer networks rely on a large number of middleboxes (e.g., NATs, firewalls, WAN optimizers), typically realized using expensive hardware appliances which are cumbersome to manage. For example, it is known that the number of middleboxes in enterprise networks can be of the same order of magnitude as the number of routers [22]. The virtualization of these functions renders the network management more flexible, and allows to define and quickly deploy novel in-network services [7], [9], [21], [23], [27]. 
Virtualized network functions can easily be instantiated on the most suitable network nodes, e.g., running in a virtual machine on a commodity x86 server. The transition to NFV is discussed within standardization groups such as ETSI, and we currently also witness first deployments, e.g., TeraStream [39]. Service chaining [33], [34], [38] is a particularly interesting new service model, that combines the flexibilities from SDN and NFV. In a nutshell, a service chain describes a sequence of network functions which need to be traversed on the way from a given source s to a given destination t. For example, a service chain could define that traffic originating at the source is first steered through an intrusion detection system for security, next through a traffic optimizer, and only then is routed towards the destination. While NFV can be used to flexibly allocate network functions, SDN can be used to steer traffic through them. A. The Scope and Problem This paper studies the problem of how to algorithmically exploit the flexibilities introduced by the SDN+NFV paradigm. We attend the service chain embedding problem, which has recently received much attention. The problem generally consists of three tasks: (1) (if possible) admission control, i.e. selecting and serving only the most valuable requests, (2) the allocation of the virtualized middleboxes at the optimal locations and (3) the computation of routing paths via them. Assuming that one is allowed to exert admission control, the objective is to maximize the profit, i.e., the prizes collected for embedding service chains. We also study the problem variant, in which a given set of requests must be embedded, i.e. when admission control cannot be exerted. In this variant we consider the natural objective of minimizing the cumulative allocation costs. The service chain embedding algorithms presented so far in the literature either have a non-polynomial runtime (e.g., are based on integer programming [30], [34], [38]), do not provide any approximation guarantees [31], or ignore important aspects of the problem (such as link capacity constraints [26]). More generally, we also attend to the current trend towards more complex service chains, connecting network functions not only in a linear order but as arbitrary graphs, i.e. as a kind of virtual network. B. Our Contributions This paper makes the following contributions. We present the first polynomial time algorithms for the (offline) service chain embedding problem with and without admission control, which provide provable approximation guarantees. We also initate the study of approximation algorithms for more general service graphs (or "virtual networks"). In particular, we present polynomial time approximation algorithms for the embedding of service cactus graphs, which may contain branch sub-chains and even cycles. To this end, we develop a novel Integer Program formulation together with a novel decomposition algorithm, enabling the randomized rounding: we prove that known Integer Programming formulations are not applicable. C. Technical Novelty Our algorithms are based on the well-established randomized rounding approach [35]: the algorithms use an exact Integer Program, for which however we only compute relaxed, i.e. linear, solutions, in polynomial time. Given the resulting fractional solution, an approximate integer solution is derived using randomized rounding, in the usual resource augmentation model. 
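To make this general recipe concrete, the following minimal sketch (not the algorithm developed in this paper, and with purely illustrative names and data structures) shows the rounding step in isolation: assuming the fractional LP solution of a request has already been decomposed into a convex combination of valid mappings, the request is assigned one of these mappings with probability equal to the corresponding fractional weight, and is rejected with the remaining probability.

import random

def round_request(decomposition):
    """Randomized rounding for a single request.

    decomposition: list of (weight, mapping) pairs from a decomposed LP
    solution; the weights are non-negative and sum to at most 1.
    Returns the chosen mapping, or None if the request is rejected.
    """
    u = random.random()
    cumulative = 0.0
    for weight, mapping in decomposition:
        cumulative += weight
        if u < cumulative:
            return mapping   # embed the request using this mapping
    return None              # reject the request (admission control)

# Each request is rounded independently; capacity violations are only
# bounded probabilistically afterwards (resource augmentation).
requests = {
    "r1": [(0.6, "mapping_A"), (0.3, "mapping_B")],  # 0.1 rejection mass
    "r2": [(0.5, "mapping_C")],
}
print({name: round_request(dec) for name, dec in requests.items()})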
However, while randomized rounding has been studied intensively and applied successfully in the context of path embeddings [35], to the best of our knowledge, besides our own work, the question of how to extend this approach to service chains (where paths need to traverse certain flexible waypoints) or even more complex graphs (such as virtual networks), has not been explored yet. Moreover, we are not aware of any extensions of the randomized rounding approach to problems allowing for admission control. Indeed, the randomized rounding of more complex graph requests and the admission control pose some interesting new challenges. In particular, the more general setting requires both a novel Integer Programming formulation as well as a novel decomposition approach. Indeed, we show that solutions obtained using the standard formulation [5], [37] may not be decomposable at all, as the relaxed embedding solutions are not a linear combination of elementary solutions. Besides the fact that the randomized rounding approach can therefore not be applied, we prove that the relaxation of our novel formulation is indeed provably stronger than the well-known formulation.

D. Organization

The remainder of this paper is organized as follows. Section II formally introduces our model. Section III presents the Integer Programs and our decomposition method. Section IV presents our randomized approximation algorithm for the service chain embedding problem with admission control and Section V extends the approximation for the case without admission control. In Section VI we derive a novel Integer Program and decomposition approach for approximating service graphs, and show why classic formulations are not sufficient. Section VII reviews related work and Section VIII concludes our work.

II. OFFLINE SERVICE CHAIN EMBEDDING PROBLEM

This paper studies the Service Chain Embedding Problem, short SCEP. Intuitively, a service chain consists of a set of Network Functions (NFs), such as a firewall or a NAT, and routes between these functions. We consider the offline setting, where batches of service chains have to be embedded simultaneously. Concretely, we study two problem variants: (1) SCEP-P where the task is to embed a subset of service chains to maximize the profit and (2) SCEP-C where all given service chains need to be embedded and the objective is to minimize the resource costs. Hence, service chain requests might be attributed with prizes and resources as e.g., link bandwidth or processing (e.g., of a firewall network function) may come at a certain cost.

A. Definitions & Formal Model

Given is a substrate network (the physical network representing the physical resources) which is modeled as a directed network G_S = (V_S, E_S). We assume that the substrate network offers a finite set T of different network functions (NFs) at nodes. The set of network function types may contain e.g., 'FW' (firewall), 'DPI' (deep packet inspection), etc. For each such type τ ∈ T, we use the set V_S^τ ⊆ V_S to denote the subset of substrate nodes that can host this type of network function. To simplify notation we introduce the set R_S^V = {(τ, u) | τ ∈ T, u ∈ V_S^τ} to denote all node resources and denote by R_S = R_S^V ∪ E_S the set of all substrate resources. Accordingly, the processing capabilities of substrate nodes and the available bandwidth on substrate edges are given by the function d_S : R_S → R_≥0. Hence, for each type and substrate node we use a single numerical value to describe the node's processing capability, e.g.
given as the maximal throughput in Mbps. Additionally, we also allow to reference substrate node locations via types. To this end we introduce for each substrate node u P V S the abstract type loc u P T , such that V loc u S " tuu and d S ploc u, uq " 8 and d S ploc u, vq " 0 for nodes v P V S ztuu. The set of service chain requests is denoted by R. A request r P R is a directed chain graph G r " pV r , E r q with start node s r P V r and end node t r P V r . Each of these virtual nodes corresponds to a specific network function type which is given via the function τ r : V r Ñ T . We assume that the types of s r and t r denote specific nodes in the substrate. Edges of the service chain represent forwarding paths. Since the type for each node is well-defined, we again use consolidated capacities or demands d r : V r Y E r Ñ R ě0 for both edges and functions of any type. Note that capacities on edges may differ, as for instance, a function 'WAN optimizier' can compress traffic. In the problem variant with admission control, requests r P R are attributed with a certain profit or benefit b r P R ě0 . On the other hand, costs are defined via c S : R S Ñ R ě0 . Note that this definition allows to assign different costs for using the same network function on different substrate nodes. This allows us to model scenarios where, e.g., a firewall ('FW') costs more in terms of management overhead if implemented using a particular hardware appliance, than if it is implemented as a virtual machine ('VM') on commodity hardware. We first define the notion of valid mappings, i.e. embeddings that obey the request's function types and connection requirements: The function m V r : V r Ñ V S maps each virtual network functions to a single substrate node. The function m E r : E r Ñ PpE S q maps edges between network functions onto paths in the substrate network, such that: ‚ All network functions i P V r are mapped onto nodes that can host the particular function type. Formally, m V r piq P V τrpiq S holds for all i P V r . ‚ The edge mapping m E r connects the respective network functions using simple paths, i.e. given a virtual edge pi, jq P E r the embedding m E r pi, jq is an edgepath xpv 1 , v 2 q, . . . , pv k´1 , v k q y such that pv l , v l`1 q P E S for 1 ď l ă k and v 1 " m V r piq and v k " m V r pjq. Next we define the notion of a feasible embedding for a set of requests, i.e. an embedding that obeys the network function and edge capacities. Definition 2 (Feasible Embedding). A feasible embedding of a subset of requests R 1 Ď R is given by valid mappings m r " pm V r , m E r q for r P R 1 , such that network function and edge capacities are obeyed: ‚ For all types τ P T and nodes u P V τ S holds: ř rPR 1 ř iPVr,m V r piq "u d r piq ď d S pτ, uq . ‚ For all edges pu, vq P E S holds: ř rPR 1 ř pi,jqPEr:pu,vqPm E r pi,jq d r pi, jq ď d S pu, vq . We first define the SCEP variant with admission control whose objective is to maximize the net profit (SCEP-P), i.e. the achieved profit for embedding a subset of requests. Given: A substrate network G S " pV S , E S q and a set of requests R as described above. Task: Find a subset R 1 Ď R of requests to embed and a feasible embedding, given by a mapping m r for each request r P R 1 , maximizing the net profit ř rPR 1 b r . In the variant without admission control, i.e. when all given requests must be embedded, we consider the natural objective of minimizing the cumulative cost of all embeddings. 
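Before the cost function is made precise below, the model introduced so far can be summarized in code. The following Python sketch is ours and purely illustrative: the names Substrate, ChainRequest, hosts, node_cap, etc. do not appear in the formal model and merely mirror G_S = (V_S, E_S), V_S^τ, d_S, G_r, τ_r, d_r and b_r.

from dataclasses import dataclass
from typing import Dict, List, Tuple

Node = str
Edge = Tuple[Node, Node]

@dataclass
class Substrate:
    nodes: List[Node]
    edges: List[Edge]
    hosts: Dict[str, List[Node]]             # V_S^tau: substrate nodes that may host type tau
    node_cap: Dict[Tuple[str, Node], float]  # d_S on node resources (tau, u)
    edge_cap: Dict[Edge, float]              # d_S on substrate edges (u, v)

@dataclass
class ChainRequest:
    name: str
    chain: List[Node]                        # virtual nodes s_r = chain[0], ..., t_r = chain[-1]
    types: Dict[Node, str]                   # tau_r: virtual node -> network function type
    node_demand: Dict[Node, float]           # d_r on virtual nodes
    edge_demand: Dict[Edge, float]           # d_r on virtual edges (may differ per edge)
    profit: float = 0.0                      # b_r, only relevant for SCEP-P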
Concretely, the cost of the mapping m r of request r P R is defined as the sum of costs for placing network functions plus the number of substrate links along which network bandwidth needs to be reserved, times the (processing or bandwidth) demand: The variant SCEP-C without admission control which asks for minimizing the costs is hence defined as follows. Given: A substrate network G S " pV S , E S q and a set of requests R as described above. Task: Find a feasible embedding m r for all requests r P R of minimal cost ř rPR cpm r q. B. NP-Hardness Both introduced SCEP variants are strongly NP-hard, i.e. they are hard independently of the parameters as e.g. the capacities. We prove the NP-hardness by establishing a connection to multi-commodity flow problems. Concretely, we present a polynomial time reduction from the Unsplittable Flow (USF) and the Edge-Disjoint Paths (EDP) problems [19] to the the respective SCEP variants. Both USF and EDP are defined on a (directed) graph G " pV, Eq with capacities d : E Ñ R ě0 on the edges. The task is to route a set of K commodities ps k , t k q with demands d k P R ě0 for 1 ď k ď K from s k P V to t k P V along simple paths inside G. Concretely, EDP considers the decision problem in which both the edge capacities and the demands are 1 and the task is to find a feasible routing. The variant of EDP asking for the maximum number of routable commodities was one of Karp's original 21 NP-complete problems and the decision variant was shown to be NP-complete even on series-parallel graphs [32]. In the USF problem, for each commodity an additional benefit b k P R ě0 , 1 ď k ď K, is given and the task is to find a selection of commodities to route, such that capacities are not violated and the sum of benefits of the selected commodities is maximized. Solving the USF problem is NP-hard and proven to be hard to approximate within a factor of |E| 1{2´ε for any ε ą 0 [19]. We will argue in the following that EDP can be reduced to SCEP-C and USF can be reduced to SCEP-P. Both reductions use the same principal idea of expressing the given commodities as requests. Hence, we first describe this construction before discussing the respective reductions. For commodities ps k , t k q with 1 ď k ď K a request r k consisting only of the two virtual nodes i k and j k and the edge pi k , j k q is introduced. By setting s r k " i k and t r k " j k and τ r pi k q " loc s k and τ r pj k q " loc t k , we can enforce that flow of request r k originates at s k P V and terminates at t k P V , hence modeling the original commodities. In both reductions presented below, we do not make use of network functions, i.e. T " tloc u|u P V S u, and accordingly we do not need to specify network function capacities. Regarding the polynomial time reduction from EDP to SCEP-C, we simply use unitary virtual demands and substrate capacities. As this yields an equivalent formulation of EDP, which is NP-hard, finding a feasible solution for SCEP-C is NP-hard. Hence, there cannot exist an approximation algorithm that (always) finds a feasible solutions within polynomial time unless P " NP or unless capacity violations are allowed. Regarding the reduction from USF to SCEP-P, we adopt the demands by setting d r k pi k , j k q fi d k for 1 ď k ď K, adopt the network capacities via d S pu, vq fi dpu, vq for pu, vq P E, and setting the profits accordingly b r k fi b k for 1 ď k ď K. It is easy to see, that any solution to this SCEP-P instance also induces a solution to the original USF instance. 
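For illustration, the commodity-to-request construction shared by both reductions can be written down directly. The sketch below is ours, reuses the hypothetical ChainRequest class from above, and the helper name usf_to_scep_p is our choice; for the EDP reduction one would set all demands and capacities to 1 and ignore the profits.

def usf_to_scep_p(commodities):
    """Turn USF commodities (s_k, t_k, d_k, b_k) into two-node chain requests whose
    endpoints are pinned to s_k and t_k via the abstract location types loc_u."""
    requests = []
    for k, (s_k, t_k, d_k, b_k) in enumerate(commodities):
        i, j = f"i_{k}", f"j_{k}"
        requests.append(ChainRequest(
            name=f"r_{k}",
            chain=[i, j],
            types={i: f"loc_{s_k}", j: f"loc_{t_k}"},  # forces i onto s_k and j onto t_k
            node_demand={i: 0.0, j: 0.0},              # no network function processing is used
            edge_demand={(i, j): d_k},                 # the commodity's demand
            profit=b_k,
        ))
    return requests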
It follows that SCEP-P is strongly NP-hard. C. Further Notation We generally denote directed graphs by G " pV, Eq. We use δÈ puq :" tpu, vq P Eu to denote the outgoing edges of node u P V with respect to E and similarly define δÉ puq :" tpv, uq P Eu to denote the incoming edges. If the set of edges E can be derived from the the context, we often omit stating E explicitly. When considering functions on tuples, we often omit the (implicit) braces around a tuple and write e.g. f px, yq instead of f ppx, yqq. Furthermore, when only some specific elements of a tuple are of importance, we write px,¨q P Z in favor of px, yq P Z. III. DECOMPOSING LINEAR SOLUTIONS In this section, we lay the foundation for the approximation algorithms for both SCEP variants by introducing Integer Programming (IP) formulations to compute optimal embeddings (see Section III-A). Given the NP-hardness of the respective problems, solving any of the IPs to optimality is not possible within polynomial time (unless P " NP ). Hence, we consider the linear relaxations of the respective formulations instead, as these naturally represent a conical or convex combination of valid mappings. We formally show that linear solutions can be decomposed into valid mappings in Section III-B. Given the ability to decompose solutions, we apply randomized rounding techniques in Sections IV and V to obtain tri-criteria approximation algorithms for the respective SCEP variants. A. Integer Programming To formulate the service chain embedding problems as Integer Programs we employ a flow formulation on a graph construction reminiscent of the one used by Merlin [38]. Concretely, we construct an extended and layered graph consisting of copies of the substrate network together with super sources and sinks. The underlying idea is to model the usage (and potentially the placement) of network functions by traversing inter-layer edges while intra-layer edges will be used for connecting the respective network functions. Figure 1 depicts a simple example of the used graph construction. The request r consists of the three nodes i, j, and l. Recall that we assume that the start and the end node specify locations in the substrate network (cf. Section II). Hence, in the example the start node s r " i and the end node t r " l can only be mapped onto the substrate nodes v and u respectively, while the virtual node j may be placed on the substrate nodes u and w. Since for each connection of network functions a copy of the substrate network is introduced, the edges between these layers naturally represent the utilization of a network function. Additionally, the extended graph G ext r contains a single super source or and a super sink oŕ , such that any path from or to oŕ represents a valid mapping of the request (cf. Discussion in Section III-B). Formally, the extended graph for each request r P R is introduced as follows. Definition 5 (Extended Graph). Let r P R be a request. The extended graph G ext r " pV ext r , E ext r q is defined as follows: We denote by E ext r,u,v " tppu i,j r , v i,j r q, pi, jqq|pi, jq P E r u all copies of the substrate edge pu, vq P E S together with the respective virtual edge pi, jq P E r . Similarly, we denote by E ext r,τ,u " tppu i,j r , u j,k r q, jq|j P V r , τ r pjq " τ, pi, jq, pj, kq P E r u the edges that indicate that node u P V S processes flow of network function j P V r having type τ P T . Having defined the extended graph as above, we will first discuss our Integer Program 1 for SCEP-P. 
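Before turning to the Integer Program itself, the layered construction of Definition 5 can be sketched in a few lines for a single chain request. The sketch is ours and uses the illustrative classes from Section II; a node copy u_r^{i,j} is represented as the tuple (u, i, j), and the super source and sink as ("o+", r) and ("o-", r).

def build_extended_graph(substrate, req):
    """Sketch of G^ext_r for a chain s_r = v_0, ..., v_n = t_r: one substrate copy per
    virtual edge, inter-layer edges for placing the intermediate functions, and a
    super source / super sink attached to the first and last layer."""
    layers = list(zip(req.chain[:-1], req.chain[1:]))      # virtual edges (v_l, v_{l+1})
    V, E = set(), set()
    for (i, j) in layers:                                   # intra-layer routing edges
        for (u, v) in substrate.edges:
            V.update({(u, i, j), (v, i, j)})
            E.add(((u, i, j), (v, i, j)))
    for (i, j), (_, k) in zip(layers, layers[1:]):          # inter-layer edge places j on u
        for u in substrate.hosts[req.types[j]]:
            E.add(((u, i, j), (u, j, k)))
    source, sink = ("o+", req.name), ("o-", req.name)
    V.update({source, sink})
    first, last = layers[0], layers[-1]
    for u in substrate.hosts[req.types[req.chain[0]]]:      # placement of the start node
        E.add((source, (u,) + first))
    for u in substrate.hosts[req.types[req.chain[-1]]]:     # placement of the end node
        E.add(((u,) + last, sink))
    return V, E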
We use a single variable x r P t0, 1u per request r to indicate whether the request is to be embedded or not. If x r " 1, then Constraint (5) induces a unit flow from or to oŕ in the extended graph G ext r using the flow variables f r,e P t0, 1u for e P E ext r . Particularly, Constraint (6) states flow preservation at each node, except at the source and the sink. Constraints (7) and (8) compute the effective load per request on the network functions and the substrate edges. Towards this end, variables l r,x,y ě 0 indicate the load induced by request r P R on resource px, yq P R S . By the construction of the extended graph (see Definition 5), the sets E ext r,u,v and E ext r,τ,u actually represent a partition of all edges in the extended graph for r P R. Since each layer represents a virtual connection pi, jq P E r with a specific load d r pi, jq and each edge between layers pi, jq P E r and pj, kq P E r represents the usage of the network function j with demand d r pjq, the unit flow is scaled by the respective demand. Constraint (9) ensures ÿ pv,uqPδ´puq f r,e @r P R, u P V ext r ztor , oŕ u (6) ÿ pe,iqPE ext r,τ,u d r piq¨f r,e " l r,τ,u @r P R, pτ, uq P R V S ÿ pe,i,jqPE ext r,u,v d r pi, jq¨f r,e " l r,u,v @r P R, pu, vq P E S (8) ÿ rPR l r,x,y ď d S px, yq @px, yq P R S (9) x r P t0, 1u @r P R f r,e P t0, 1u @r P R, e P E ext r (11) l r,x,y ě 0 @r P R, px, yq P R S (12) 5 -9 and 11 -12 the feasibility of the embedding (cf. Definition 2), i.e., the overall amount of used resources does not exceed the offered capacities (on network functions as well as on the edges). Lastly, the objective function sums up the benefits of embedded requests r P R for which x r " 1 holds (cf. Definition 3). In the following, we shortly argue that any feasible solution to IP 1 induces a feasible solution to SCEP-P (and vice versa). If a request r P R is not embedded, i.e. x r " 0 holds, then no flow and hence no resource reservations are induced. If on the other hand x r " 1 holds for r P R, then the flow variables tf r,e |e P E ext r u induce a unit or -oŕ flow. By construction, this unit flow must pass through all layers, i.e. copies of the substrate network. As previous layers are not reachable from subsequent ones, cycles may only be contained inside a single layer. Hence, network function mappings are uniquely identified by considering the interlayer flow variables. Thus, there must exist unique nodes at which flow enters and through which the flow leaves each layer. Together with the flow preservation this implies that the respective network functions are connected by the edges inside the layers, therefore representing valid mappings. Considering SCEP-C, we adapt the IP 1 slightly to obtain the Integer Program 2 for the variant minimizing the costs: (i) all requests must be embedded by enforcing that x r " 1 holds for all requests r P R and (ii) the objective is changed to minimize the overall resource costs (cf. Equation 1). As the constraints safeguarding the feasibility of solutions are reused, the IP 2 indeed computes optimal solutions for SCEP-C. While solving Integer Programs 1 and 2 with binary variables is computationally hard (cf. Section II-B), the respective linear relaxations can be computed in polynomial time [28]. Concretely, the linear relaxation is obtained by simply replacing t0, 1u with r0, 1s in Constraints (10) and (11) respectively. 
We generally denote the set of feasible solutions to the linear relaxation by F LP and the set of feasible solutions to the respective integer program by F IP . We omit the reference to any particular formulation here as it will be clear from the context. We recall the following well-known fact: The above fact will e.g. imply that the profit of the optimal linear solution will be higher than the one of the optimal integer solution. B. Decomposition Algorithm for Linear Solutions As discussed above, any binary solution to the formulations 1 and 2 represents a feasible solution to the respective problem variant. However, as we will consider solutions to the respective linear relaxations instead, we shortly discuss how relaxed solutions can be decomposed into conical (SCEP-P) or convex combinations (SCEP-C) of valid mappings. Concretely, Algorithm 1 computes a set of triples D r " tD k r " pf k r , m k r , l k r qu k , where f k r P r0, 1s denotes the (fractional) embedding value of the k-th decomposition, and m k r and l k r represent the (valid) mapping and the induced loads on network functions and edges respectively. Importantly, the load function l k r : R S Ñ R ě0 represents the cumulative loads, when embedding the request r P R fully according to the k-th decomposition. The pseudocode for our decomposition scheme is given in Algorithm 1. For each request r P R, a path decomposition is performed from or to oŕ as long as the outgoing flow from the source or is larger than 0. Note that the flow variables are an input and originate from the linear program for SCEP-C or SCEP-P respectively. We use G ext r,f to denote the graph in which an edge e P E ext r is contained, iff. the flow value along it is greater 0, i.e. for which f r,e ą 0 holds. Within this graph an arbitrary or -oŕ path P is chosen and the minimum available 'capacity' is stored in f k r . In Lines 8-15 the node mappings are set. For all virtual network functions i P V r and all potential substrate nodes u P V τrpiq S , we check whether u hosts i by considering the interlayer connections contained in P . Besides the trivial cases when i " or or i " oŕ holds, the network function i is mapped onto node u iff. edge pu r,¨,i , u r,i,¨q is contained in P . As P is a directed path from or to oŕ and the extended graph does not contain any inter-layer cycles, this mapping is uniquely defined for each found path P . For the start node s r of the request r P R and the end node t r connections from or or to oŕ are checked respectively. Concerning the mapping of the virtual edge pi, jq P E r , the edges of P used in the substrate edge layer corresponding to the (virtual) connection pi, jq are extracted in Line 18. Algorithm 1: Decomposition Algorithm Input : Substrate G S " pV S , E S q, set of requests R, solution p x, f , lq P F LP Output: Fractional embeddings D r " tpf k r , m k r , l k r qu k for each r P R 1 for r P R do 2 set D r Ð H and k Ð 1 if i " s r and por , u sr,r q P P then 11 set m V r piq Ð u 12 else if i " t r and pu¨, i r , oŕ q P P then k r for all e P P and k Ð k`1 23 return tD r |r P Ru Note that P is by construction a simple path and hence the constructed edge paths will be simple as well. In Lines 16 and 20, the cumulative load on all physical network functions and edges are computed, that would arise if request r P R is fully embedded according to the k-th decomposition. 
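To make the path-stripping loop just described concrete, the following sketch (ours) captures its core; the extraction of the node and edge mappings and of the loads (Lines 8–20) is abstracted into the returned paths, from which m_r^k and l_r^k would be read off.

def decompose_flow(edges, flow, source, sink, eps=1e-9):
    """Strip o+ -> o- paths from a fractional flow: pick any path through edges with
    positive flow, record its bottleneck value f_k, and subtract f_k along the path."""
    def positive_flow_path():
        stack, seen = [(source, [])], {source}
        while stack:
            node, path = stack.pop()
            if node == sink:
                return path
            for (a, b) in edges:
                if a == node and flow.get((a, b), 0.0) > eps and b not in seen:
                    seen.add(b)
                    stack.append((b, path + [(a, b)]))
        return None

    decompositions = []
    path = positive_flow_path()
    while path is not None:
        f_k = min(flow[e] for e in path)      # bottleneck: at least one edge drops to zero
        for e in path:
            flow[e] -= f_k                    # flow preservation is maintained
        decompositions.append((f_k, path))    # mapping m_r^k and loads l_r^k are read off path
        path = positive_flow_path()
    return decompositions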
Lastly, the k-th decomposition D k r , a triple consisting of the fractional embedding value f k r , the mapping m k r , and the load l k r , is added to the set of potential embeddings D r and the flow variables along P are decreased by f k r . By decreasing the flow uniformly along P , flow preservation with respect to the adapted flow variables is preserved and the next iteration is started by incrementing k. By construction, we obtain the following lemma: Each mapping m k r constructed by Algorithm 1 in the k-th iteration is valid. As initially the outgoing flow equals the embedding variable and as flow preservation is preserved after each iteration, the flow in the extended network is fully decomposed by Algorithm 1: Lemma 8. The decomposition D r computed in Algorithm 1 is complete, i.e. ř D k r PDr f k r " x r holds, for r P R. Note that the above lemmas hold independently of whether the linear solutions are computed using IP 1 or IP 2. We give two lemmas relating the net profit (for SCEP-P) and the costs (for SCEP-C) of the decomposed mappings to the ones computed using the linear relaxations. We state the first without proof as it is a direct corollary of Lemma 8. Lemma 9. Let p x, f , lq P F LP denote a feasible solution to the linear relaxation of Integer Program 1 achieving a net profit ofB and let D r denote the respective decompositions of this linear solution for requests r P R computed by Algorithm 1, then the following holds: While the above shows that for SCEP-P the decomposition always achieves the same profit as the solution to the linear relaxation of IP 1, a similar statement holds for SCEP-C and IP 2: Let p x, f , lq P F LP denote a feasible solution to the linear relaxation of Integer Program 2 having a cost ofĈ and let D r denote the respective decompositions of this linear solution computed by Algorithm 1 for requests r P R, then the following holds: Additionally, equality holds, if the solution p x, f , lq P F LP , respectively the objectiveĈ, is optimal. Proof. We consider a single request r P R and show that ř D k r PDr f k r¨c pm k r q ď ř px,yqPR S c S px, yq¨l r,x,y holds. The Integer Program 2 computes the loads on resources px, yq P R S in Constraints 7 and 8 based on the flow variables, which then drive the costs inside the objective (cf. Constraint 13). Within the decomposition algorithm, only paths P P G ext r are selected such that f r,e ą 0 holds for all e P P . Hence, the resulting mapping m r , obtained by extracting the mapping information from P , uses only resources previously accounted for in the Integer Program 2. Since the computation of costs within the IP agrees with the definition of costs applied to the costs of a single mapping (cf. Equation 1), the reduction of flow variables along P by f k r (and the corresponding reduction of the loads) reduces the cost component of the objective by exactly f k r¨c pm r q. Thus, the costs accumulated inside the decompositions D k r P D r are covered by the respective costs of the Integer Program 2. This proves the inequality. To prove equality, given an optimal solution, consider the following. According to the above argumentation, all costs accumulated within the resulting decomposition D r are (at least) accounted for in the IP 2. 
Thus, the only possibility for the costs accounted for in the linear programming solution (x, f, l) to be greater than the costs accounted for in the decomposition D_r is that the linear programming solution still contains (cyclic) flows after having fully decomposed the request r. As these (cyclic) flows can be removed without violating any of the constraints while reducing the costs, the given solution cannot have been optimal.

By the above argumentation, it is easy to check that the (fractionally) accounted resources of the returned decompositions D_r are upper bounded by the resource allocations of the relaxations of Integer Programs 1 and 2, and hence are upper bounded by the respective capacities (cf. Constraint 9).

Lemma 11. The cumulative load induced by the fractional mappings obtained by Algorithm 1 is at most the cumulative load computed in the respective Integer Program and hence at most the offered capacity, i.e. for all resources (x, y) ∈ R_S it holds that ∑_{r∈R} ∑_{D_r^k ∈ D_r} f_r^k · l_r^k(x, y) ≤ ∑_{r∈R} l_{r,x,y} ≤ d_S(x, y), where l_{r,x,y} refers to the respective variables of the respective Integer Program.

IV. APPROXIMATING SCEP-P

This section presents our approximation algorithm for SCEP-P, which is based on the randomized rounding of the decomposed fractional solutions of Integer Program 1. In particular, our algorithm provides a tri-criteria approximation with high probability, that is, with arbitrarily high probability it computes solutions with performance guarantees for the profit and for the maximal violation of both network function and edge capacities. We discuss the algorithm in Section IV-A and then derive probabilistic bounds for the profit (see Section IV-B) and the violation of capacities (see Section IV-C). In Section IV-D the results are condensed using a simple union-bound argument to prove our main theorem, namely that the presented algorithm is indeed a tri-criteria approximation for SCEP-P.

A. Synopsis of the Approximation Algorithm 2

The approximation scheme for SCEP-P is given as Algorithm 2. Besides the problem specification, the approximation algorithm is handed four additional parameters: the parameters α, β, and γ bound the quality of the found solution with respect to the optimal solution in terms of the profit achieved (0 ≤ α ≤ 1) and the maximal violation of network function (0 ≤ β) and edge capacities (0 ≤ γ). As Algorithm 2 is randomized and as we will only show that the algorithm has a constant success probability, the parameter Q controls the number of rounding tries used to obtain a solution within the approximation factors α, β, and γ. Algorithm 2 first uses the relaxation of Integer Program 1 to compute a fractional solution (x, f, l). This solution is then decomposed according to Algorithm 1, obtaining decompositions D_r for requests r ∈ R. The while-loop (see Algorithm 2) attempts to construct a solution (R′, {m_r | r ∈ R′}) for SCEP-P (cf. Definition 3) according to the following scheme.
Essentially, for every request r P R a dice with |D r |`1 many faces is cast, such that D k r P D r is chosen with probability Algorithm 2: Approximation Algorithm for SCEP-P Input : Substrate G S " pV S , E S q, set of requests R, approximation factors α, β, γ ě 0, maximal number of rounding tries Q Output: Approximate solution for SCEP-P 1 compute solution p x, f , lq of Linear Program 1 2 compute tD r |r P Ru using Algorithm 1 for px, yq P R S do 16 set Lrx, ys Ð Lrx, ys`l r px, yq 17 if¨B ě α¨Opt LP and Lrτ, us ď p1`βq¨d S pτ, uq for pτ, uq P R V S and Lru, vs ď p1`γq¨d S pu, vq for pu, vq P E S‚ then 18 return pR 1 , tm r |r P R 1 uq 19 q Ð q`1 20 return NULL f k r and none of the embeddings is selected with probability 1´ř D k r PDr f k r . Within the algorithm, casting the dice is done by uniformly selecting a value p in the range of r0, 1s such that thek-th decomposition Dk r " pfk r , mk r , lk r q P D r is chosen iff. řk l"1 f l r ď p ă řk`1 l"1 f l r holds. In case that a mapping was selected, the corresponding mapping and load informations are stored in the (globally visible) variableŝ m r andl r . In Lines 13-16 the request r P R is added to the set of embedded requests R 1 , the currently achieved objective (B) and the cumulative loads on the physical network functions and edges are adapted accordingly. Note that the load informationl r : R S Ñ R ě0 of the decomposition stores the total allocations for each network resource of mappingm r . After having iterated over all requests r P R, the obtained solution is returned only if the constructed solution achieves at least an α-fraction of the objective of the linear program and violates node and edge capacities by factors less than 1`β and 1`γ respectively (see Lines 17 and 18). If after Q iterations no solution within the respective approximation bounds was found, the algorithm returns NULL. In the upcoming sections, the probabilities for finding solutions subject to the parameters α, β, γ, and Q will be analyzed. Concretely, the analysis of the performance with respect to the objective is contained in Section IV-B, while Section IV-C proves bounds for capacity violations and Section IV-D consolidates the results. B. Probabilistic Guarantee for the Profit To analyze the performance of Algorithm 2 with respect to the achieved profit, we recast the algorithm in terms of random variables. For bounding the profit achieved by the algorithm we introduce the discrete random variable Y r P t0, b r u, which models the profit achieved by (potentially) embedding request r P R. According to Algorithm 2, request r P R is embedded as long as the random variable p in Line 9 was less than ř D k r PDr f k r . Hence, we have that PpY r " 0q " 1´ř D k r PDr f k r holds, i.e. that the probability to achieve no profit for request r P R is 1´ř D k r PDr f k r . On the other hand, the probability to embed request r P R equals ř D k r PDr f k r as in this case some decomposition will be chosen. Hence, we obtain PpY r " b r q " ř D k r PDr f k r . Given these random variables, we can model the achieved net profit of Algorithm 1 as: The expectation of the random variable B computes to ř rPR ř D k r PDr f k r¨br and by Lemma 9 we obtain the following corollary: Corollary 12. Given an optimal solution p x, f , lq P F LP for the Linear Program 1 and denoting the objective value of this solution as Opt LP , we have: where D r denotes the decomposition of p x, f , lq obtained by Algorithm 1 for requests r P R. 
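Stepping back to the algorithm itself, the per-request rounding step described above can be sketched as follows. The sketch is ours: decompositions maps each request to its list of triples (f_r^k, m_r^k, l_r^k), and the acceptance test against α, β and γ is left to the caller, who would repeat the pass up to Q times and keep the first outcome whose profit and loads lie within the required bounds.

import random

def round_once(decompositions, profits):
    """One rounding pass: for every request, pick decomposition k with probability f_k
    and reject the request with the remaining probability 1 - sum_k f_k."""
    chosen, profit, load = {}, 0.0, {}
    for r, D_r in decompositions.items():
        p, acc = random.random(), 0.0
        for (f_k, m_k, l_k) in D_r:
            acc += f_k
            if p < acc:                          # request r is embedded via mapping m_k
                chosen[r] = m_k
                profit += profits[r]
                for resource, amount in l_k.items():
                    load[resource] = load.get(resource, 0.0) + amount
                break
        # otherwise: r is rejected, no decomposition selected
    return chosen, profit, load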
To bound the probability of achieving a fraction of the profit of the optimal solution, we will make use of the following Chernoff-Bound over continuous random variables. Theorem 13 (Chernoff-Bound [8]). Let X " ř n i"1 X i be a sum of n independent random variables X i P r0, 1s, 1 ď i ď n. Then the following holds for any 0 ă ε ă 1: Note that the above theorem only considers sums of random variables which are contained within the interval r0, 1s. In the following, we lay the foundation for appropriately rescaling the random variables Y r such that these are contained in the interval r0, 1s, while still allowing to bound the expected profit. To this end, we first show that we may assume that all requests can be fully fractionally embedded in the absence of other requests as otherwise the respective requests cannot be embedded given additional requests. Then we will effectively rescale the random variables Y r by dividing by the maximal net profit that can be obtained by embedding a single request in the absence of other requests. Lemma 14. We may assume without loss of generality that all requests can be fully fractionally embedded in the absence of other requests. Proof. To show the claim we argue that we can extend the approximation scheme by a simple preprocessing step that filters out any request that cannot be fractionally embedded alone, i.e. having all the substrate's available resources at its availability. Practically, we can compute for each request r P R the linear relaxation of Integer Program 1 in the absence of any other requests, i.e. for R 1 " tru. As the profit b r of request r is positive, the variable x r is maximized effectively. If for the optimal (linear) solution x r ă 1 holds, then the substrate capacities are not sufficient to (fully) fractionally embed the request. Hence, in this case the request r cannot be embedded (fully) under any circumstances in the original SCEP-P instance as any valid and feasible mapping m r for the original problem would induce a feasible solution with x r " 1 when request r is embedded alone. Note that this preprocessing step can be implemented in polynomial time, as the relaxation of IP 2 can be computed in polynomial time. According to the above lemma we can assume that all requests can be fully fractionally embedded. Let b max " max rPR b r denote the maximal profit of any single request. The following lemma shows that any fractional solution to the IP 1 will achieve at least a profit of b max . Lemma 15. Opt LP ě b max holds, where Opt LP denotes the optimal profit of the relaxation of IP 1. Proof. Let r 1 P R denote any of the requests having the maximal profit b max and let p x r 1 , f r 1 , l r 1 q denote the linear solution obtained by embedding the request r 1 according to IP 1 in the absence of other requests. Considering the set of original requests we now construct a linear solution p x R , f R , l R q over the variables corresponding to the original set of requests R. Concretely, for p x R , f R , l R q we set all variables related to request r 1 according to the solution p x r 1 , f r 1 , l r 1 q and set all other variables to 0. This is a feasible solution, i.e. p x R , f R , l R q P F LP holds, which achieves the same profit as the solution p x r 1 , f r 1 , l r 1 q, namely b max by Lemma 14. Hence, the profit of the optimal linear solution is lower bounded by this particular solution's objective and the claim follows. 
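For orientation, instantiating the Chernoff bound of Theorem 13 with ε = 2/3 on the rescaled sum B′ = B/b_max, and using that E(B′) ≥ 1 holds for an optimal linear solution by Lemma 15, yields the inequality exploited in the next theorem:

\Pr\!\left( B' \le \tfrac{1}{3}\,\mathbb{E}(B') \right)
  = \Pr\!\left( B' \le \left(1 - \tfrac{2}{3}\right)\mathbb{E}(B') \right)
  \le \exp\!\left( - \tfrac{(2/3)^2}{2}\,\mathbb{E}(B') \right)
  = \exp\!\left( - \tfrac{2\,\mathbb{E}(B')}{9} \right)
  \le \exp\!\left( - \tfrac{2}{9} \right) \approx 0.80 .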
The above lemma is instrumental in proving the following probabilistic bound on the profit achieved by Algorithm 2: Theorem 16. The probability of achieving less than 1{3 of the profit of an optimal solution is upper bounded by expp´2{9q. Proof. Instead of considering B " ř rPR Y r , we consider the sum B 1 " ř rPR Y 1 r over the rescaled variables Y 1 r " Y r {b max P r0, 1s. Obviously Y 1 r P r0, 1s holds. Choosing ε " 2{3 and applying Theorem 13 on B 1 we obtain: By Lemma 15 and as B 1¨b max " B holds, we obtain that EpB 1 q " EpBq {b max ě 1 holds. Plugging in the minimal value of EpB 1 q, i.e. 1, into the equation, we maximize the term expp´2¨EpB 1 q{9q and hence get P`B 1 ď p1{3q¨EpB 1 q˘ď expp´2{9q . By using B 1¨b max " B, we obtain Denoting the optimal profit of Integer Program 1 by Opt IP and the optimal profit of the relaxation by Opt LP , we note that EpBq " Opt LP holds by Lemma 9. Furthermore, Opt IP ď Opt LP follows from Fact 6. Thus, holds, completing the proof together with Equation 22. C. Probabilistic Guarantees for Capacity Violations The input to the Algorithm 2 encompasses the factors β, γ ě 0, such that accepted solutions must satisfy Lrτ, us ď p1`βqd S pτ, uq for pτ, uq P R V S and Lru, vs ď p1`γq¨d S pu, vq for pu, vq P E S . In the following, we will analyze with which probability the above will hold. Our probabilistic bounds rely on Hoeffding's inequality: Fact 17 (Hoeffding's Inequality). Let tX i u be independent random variables, such that X i P ra i , b i s, then the following holds: or our analysis to work, assumptions on the maximal allocation of a single request on substrate nodes and edges must be made. We define the maximal load as follows: Definition 18 (Maximal Load). We define the maximal load per network function and edge for each request as follows: Additionally, we define the maximal load that a whole request may induce on a network function of type τ on substrate node u P V τ S as the following sum: The following lemma shows that we may assume max L r,x,y ď d S px, yq for all network resources px, yq P R S . Lemma 19. We may assume the following without loss of generality. Proof. Assume that max L r,τ,u ą d S puq holds for some r P R, pτ, uq P R V S . If this is the case, then there exists a virtual network function i P V r with τ r piq " τ , such that d r piq ą d S pτ, uq holds. However, if this is the case, there cannot exist a feasible solution in which i is mapped onto u, as this mapping would clearly exceed the capacity. Hence, we can remove the edges that indicate the mapping of the respective virtual function i on substrate node u in the extended graph construction a priori before computing the linear relaxation. The same argument holds true if max L r,u,v ą d S pu, vq holds for some pu, vq P E S . We now model the load on the different network functions pτ, uq P R V S and the substrate edges pu, vq P E S induced by each of the requests r P R as random variables L r,τ,u P r0, max L, ř r,τ,u s and L r,u,v P r0, max L r,u,v¨| E r |s respectively. To this end, we note that Algorithm 2 chooses the k-th decomposition D k r " pf k r , l k r , m k r q with probability f k r . Hence, with probability f k r the load l k r is induced for request r P R. The respective variables can therefore be defined as PpL r,x,y " l k r px, yqq " f k r (assuming pairwise different loads) and PpL r,x,y " 0q " 1´ř D k r PDr f k r for px, yq P R V S . Additionally, we denote by L x,y " ř rPR L r,x,y the overall load induced on function resource px, yq P R S . 
By definition, the expected load on the network nodes and the substrate edges Together with Lemma 11 these equations yield: Using the above, we can apply Hoeffding's Inequality: The probability that the capacity of a single network function τ P T on node u P V τ S is exceeded by more than a factor p1`a2¨log p|V S |¨|T |q¨∆ V q is upper bounded by p|V S |¨|T |q´4. Proof. Each variable L r,τ,u is clearly contained in the interval r0, max L, ř r,τ,u s and hence r,τ,u q 2 will be the denominator in Hoeffding's Inequality. We choose t " a 2¨log p|V S |¨|T |q¨∆ V¨dS pτ, uq and obtain: In Line (30) we have used and accordingly increased the denominator to increase the probability. It is easy to check, that Equation (31) holds, as we may assume that max L r,τ,u ď d S pτ, uq holds (see Lemma 19). By using Equation (29) and plugging in t we obtain for all pτ, uq P R V S , proving our claim. It should be noted that if network functions are unique within a request, then ∆ V equals the number of requests |R|, since in this case max L, ř r,τ,u " max L r,τ,u holds. Next, we consider a very similar result on the capacity violations of substrate edges. However, as in the worst case each substrate edge pu, vq P E S is used |E r | many times, we have to choose a slightly differently defined ∆ E . The probability that the capacity of a single substrate edge pu, vq P E S is exceeded by more than a factor p1`a2¨log p|V S |q¨∆ E q is bounded by |V S |´4. Proof. Each variable L r,u,v is clearly contained in the interval r0, max L r,u,v¨| E r |s. We choose t " a 2¨log |V S |¨∆ Ed S pu, vq and apply Hoeffdings Inequality: The rest of the proof is analogous to the one of Lemma 20: In Equation (33) we have used again the fact that max L r,u,v ď d S pu, vq holds (see Lemma 19). In the numerator of Equation 34 we have replaced ∆ E by its definition and placed d S pu, vq 2 outside the sum in the denominator. Analogously to the proof of Lemma 20, the remaining part of the proof follows from Equation (29). We state the following corollaries without proof, showing that the above shown bounds work nicely if we assume more strict bounds on the maximal loads. Corollary 22. Assume that max L r,τ,u ď ε¨d S pτ, uq holds for 0 ă ε ă 1 and all pτ, uq P R V S . With ∆ V as defined in Lemma 20, we obtain: The probability that in Algorithm 2 the capacity of a single network function τ P T on node u P V τ S is exceeded by more than a factor p1`ε¨a2¨log p|V S |¨|T |q¨∆ V q is upper bounded by p|V S |¨|T |q´4. Corollary 23. Assume that max L r,u,v ď ε¨d S pu, vq holds for 0 ă ε ă 1 and all pu, vq P E S . With ∆ E as defined in Lemma 21, we obtain: The probability that in Algorithm 2 the capacity of a single substrate node pu, vq P E S is exceeded by more than a factor p1`ε¨a2¨log p|V S |q¨∆ E q is upper bounded by |V S |´4. D. Main Results We can now state the main tri-criteria approximation results obtained for SCEP-P. First, note that Algorithms 1 and 2 run in polynomial time. The runtime of Algorithm 1 is dominated by the search for paths and as in each iteration at least the flow of a single edge in the extended graph is set to 0, only Op ř rPR |E r |¨|E S |q many graph searches are necessary. The runtime of the approximation itself is clearly dominated by the runtime to solve the Linear Program which has Op ř rPR |E r |¨|E S |q many variables and constraints and can therefore be solved in polynomial time using e.g. the Ellipsoid algorithm [28]. 
The following lemma shows that Algorithm 2 can produce solutions of high quality with high probability:

Lemma 24. Let 0 < ε ≤ 1 be chosen minimally, such that maxL_{r,x,y} ≤ ε·d_S(x, y) holds for all (x, y) ∈ R_S. Setting α = 1/3, β = ε·√(2·log(|V_S|·|T|)·Δ_V) and γ = ε·√(2·log(|V_S|)·Δ_E) with Δ_V, Δ_E as defined in Lemmas 20 and 21, the probability that a solution is found within Q ∈ N rounds is lower bounded by 1 − (19/20)^Q for |V_S| ≥ 3.

Proof. We apply a union bound argument. By Lemma 20 the probability that for a single network function of type τ ∈ T on node u ∈ V_S^τ the allocations exceed the capacity by more than a factor (1+β) is less than (|V_S|·|T|)^{−4}. Given that there are at most |V_S|·|T| network functions overall, the probability that any of these exceeds its capacity by a factor above (1+β) is less than (|V_S|·|T|)^{−3} ≤ |V_S|^{−3}. Similarly, by Lemma 21 the probability that the capacity of a single edge is violated by more than a factor (1+γ) is less than |V_S|^{−4}. As there are at most |V_S|^2 edges, the union bound gives us that the probability that the capacity of any of the edges is violated by a factor larger than (1+γ) is upper bounded by |V_S|^{−2}. Lastly, by Theorem 16 the probability of not finding a solution having an α-fraction of the optimal objective is less than or equal to exp(−2/9) ≈ 0.8007. The probability of not finding a suitable solution, satisfying the objective and the capacity criteria, within a single round is therefore upper bounded by exp(−2/9) + 1/9 + 1/27 ≤ 19/20 if |V_S| ≥ 3 holds. The probability of finding a suitable solution within Q ∈ N rounds is hence at least 1 − (19/20)^Q for |V_S| ≥ 3.

Theorem 25. Assume that |V_S| ≥ 3 holds and that maxL_{r,x,y} ≤ ε·d_S(x, y) holds for all resources (x, y) ∈ R_S with 0 < ε ≤ 1, and set β = ε·√(2·log(|V_S|·|T|)·Δ_V) and γ = ε·√(2·log(|V_S|)·Δ_E) with Δ_V, Δ_E as defined in Lemmas 20 and 21. Then Algorithm 2 is an (α, 1+β, 1+γ) tri-criteria approximation algorithm for SCEP-P: with high probability it finds a solution that achieves at least an α = 1/3 fraction of the optimal profit and violates network function and edge capacities only within the factors 1+β and 1+γ, respectively.

V. APPROXIMATING SCEP-C

In the previous section we have derived a tri-criteria approximation for the SCEP-P variant that maximizes the profit of embedding requests while only exceeding capacities within certain bounds. We show in this section that the approximation scheme for SCEP-P can be adapted to the cost minimization variant SCEP-C by introducing an additional preprocessing step. Recall that the cost variant SCEP-C (see Definition 4) asks for a feasible embedding m_r for all given requests r ∈ R, such that the sum of costs ∑_{r∈R} c(m_r) is minimized. We propose Algorithm 3 to approximate SCEP-C. After briefly discussing the adaptations with respect to Algorithm 2, we proceed to prove the respective probabilistic guarantees analogously to the previous section, with the main results contained in Section V-D.

A. Synopsis of the Approximation Algorithm 3

The approximation for SCEP-C given in Algorithm 3 is based on Algorithm 2. Algorithm 3 first computes a solution to the linear relaxation of Integer Program 2, which differs from the previously used Integer Program 1 only by requiring all requests to be embedded and by changing the objective to minimize the overall induced costs (cf. Section III-A).
While for the relaxation of Integer Program 1 a feasible solution always exists — namely, not embedding any requests — this is not the case for the relaxation of Integer Program 2, i.e. F_LP = ∅ might hold. Hence, if solving the formulation was determined to be infeasible, the algorithm returns that no solution exists. This is valid, as F_IP ⊆ F_LP holds (cf. Fact 6) and, if F_LP = ∅ holds, F_IP = ∅ must follow. Having found a linear programming solution (x, f, l) ∈ F_LP, we apply the decomposition algorithm presented in Section III-B to obtain the set of decomposed embeddings D_r = {(f_r^k, m_r^k, l_r^k)}_k for all requests r ∈ R. As Integer Program 2 enforces that x_r = 1 holds for all requests r ∈ R, Lemma 8 yields that the fractional embedding values sum to one for every request, i.e. ∑_{D_r^k ∈ D_r} f_r^k = 1 holds for all r ∈ R.

Lines 5–14 are the core addition of Algorithm 3 when compared to Algorithm 2. This preprocessing step effectively removes fractional mappings that are too costly from the set of decompositions D_r by setting their fractional embedding values to zero and rescaling the remaining ones. Concretely, given a request r ∈ R, first the weighted (averaged) cost WC_r = ∑_{D_r^k ∈ D_r} f_r^k · c(m_r^k) is computed. In the next step, the cumulative fractional embedding value of those decompositions costing at most two times the weighted cost WC_r is computed and assigned to λ_r. Then, for each decomposition D_r^k = (f_r^k, m_r^k, l_r^k) ∈ D_r a new fractional embedding value f̂_r^k is defined: either f̂_r^k is set to zero (for the decompositions costing more than 2·WC_r) or it is set to f_r^k rescaled by dividing by λ_r. The rescaling guarantees that also for D̂_r the sum of the newly defined fractional embedding values f̂_r^k equals one, i.e. ∑_{k=1}^{|D_r|} f̂_r^k = 1 holds. As there exists at least one decomposition m_r^k such that c(m_r^k) ≤ 2·WC_r holds, the set of decompositions will not be empty and the rest of the algorithm is well-defined. We formally prove this in Lemma 28 in the next section.

After having preprocessed the decomposed solution of the linear relaxation of Integer Program 2, the randomized rounding scheme already presented in Section IV is employed. For each request, one of the decompositions D̂_r^k ∈ D̂_r is chosen according to the fractional embedding values f̂_r^k (cf. Algorithm 3). Note that as ∑_{D̂_r^k ∈ D̂_r} f̂_r^k = 1 holds, the chosen index is always well-defined and hence for each request r ∈ R exactly one mapping m̂_r will be selected. Analogously to Algorithm 2, the variables L[x, y] store the loads induced by the current solution on substrate resource (x, y) ∈ R_S, and a solution is only returned if no node capacity and no edge capacity is violated by more than a factor of (2+β) and (2+γ) respectively, where β, γ > 0 are again the respective approximation guarantee parameters (cf. Section IV), which are an input to the algorithm. In the following sections, we prove that with high probability Algorithm 3 yields solutions having at most two times the optimal cost and exceeding node and edge capacities by no more than a factor of (2+β) and (2+γ), respectively.

B. Deterministic Guarantee for the Cost

In the following we show that Algorithm 3 will only produce solutions whose costs are upper bounded by two times the optimal costs. The result follows from restricting the set of potential embeddings D̂_r to only contain decompositions whose cost is at most two times the weighted cost WC_r for each request r ∈ R. To show this, we first reformulate Lemma 10 in terms of the weighted cost as follows.
Corollary 27. Let (x, f, l) ∈ F_LP denote an optimal solution to the linear relaxation of Integer Program 2 having a cost of Opt_LP and let D_r denote the decomposition of this linear solution computed by Algorithm 1; then ∑_{r∈R} WC_r = Opt_LP holds.

The above corollary follows directly from Lemma 10 and the definition of WC_r as computed in Line 6.

[Algorithm 3 (Approximation Algorithm for SCEP-C) — listing. Its final steps accumulate the loads L[x, y] ← L[x, y] + l̂_r(x, y) for (x, y) ∈ R_S, return {m̂_r | r ∈ R} if L[τ, u] ≤ (2+β)·d_S(τ, u) holds for all (τ, u) ∈ R_S^V and L[u, v] ≤ (2+γ)·d_S(u, v) holds for all (u, v) ∈ E_S, and return NULL after Q unsuccessful rounding tries.]

As a next step towards proving the bound on the cost, we show that Algorithm 3 is indeed well-defined, i.e. that D̂_r ≠ ∅ holds. We even show a stronger result, namely that λ_r ≥ 1/2 holds and hence the cumulative embedding weight of the decompositions (f_r^k, m_r^k, l_r^k) not set to 0 makes up at least half of the original embedding weight.

Lemma 28. The sum of the fractional embedding values of the decompositions whose cost is upper bounded by two times WC_r is at least 1/2. Formally, λ_r ≥ 1/2 holds for all requests r ∈ R.

Proof. For the sake of contradiction, assume that λ_r < 1/2 holds for some request r ∈ R. By the definition of WC_r and the assumption on λ_r, we obtain the following contradiction:
WC_r = ∑_{D_r^k ∈ D_r} f_r^k · c(m_r^k)   (38)
     ≥ ∑_{D_r^k ∈ D_r : c(m_r^k) > 2·WC_r} f_r^k · c(m_r^k)   (39)
     ≥ 2·WC_r · ∑_{D_r^k ∈ D_r : c(m_r^k) > 2·WC_r} f_r^k   (40)
     = 2·WC_r · (1 − λ_r) > 2·WC_r · (1/2)   (41)
     = WC_r .   (42)
For Equation 38 the value of WC_r as computed in Algorithm 3 was used. Equation 39 holds as only a subset of decompositions, namely the ones with mappings of cost higher than two times WC_r, is considered and f_r^k ≥ 0 holds by definition. The validity of Equation 40 follows as all the considered decompositions have a cost of more than two times WC_r, and Equation 41 follows as (1 − λ_r) > 1/2 holds by assumption. Lastly, Equation 42 yields the contradiction WC_r > WC_r, showing that indeed λ_r ≥ 1/2 holds for all requests r ∈ R.

By the above lemma, the set D̂_r will indeed not be empty. As after the rescaling ∑_{D̂_r^k ∈ D̂_r} f̂_r^k = 1 holds, exactly one decomposition will be selected for each request. Finally, we derive the following lemma.

Lemma 29. The cost ∑_{r∈R} c(m̂_r) of any solution returned by Algorithm 3 is upper bounded by two times the optimal cost.

Proof. Let Opt_LP denote the cost of the optimal solution to the linear relaxation of Integer Program 2 and let Opt_IP denote the cost of the respective optimal integer solution. As Algorithm 3 only allows decompositions m_r^k with c(m_r^k) ≤ 2·WC_r to be selected, and as for each request a single mapping is chosen, we have c(m̂_r) ≤ 2·WC_r, where m̂_r refers to the actually selected mapping. Hence ∑_{r∈R} c(m̂_r) ≤ 2·∑_{r∈R} WC_r holds. By Corollary 27 and the fact that Opt_LP ≤ Opt_IP holds (cf. Fact 6), we can conclude that ∑_{r∈R} c(m̂_r) ≤ 2·Opt_IP holds, proving the lemma.

Note that the above upper bound on the cost of any produced solution is deterministic and does not depend on the random choices made in Algorithm 3.

C. Probabilistic Guarantees for Capacity Violations

Algorithm 3 employs the same rounding procedure as Algorithm 2 presented in Section IV for computing approximations for SCEP-P. Indeed, the only major change with respect to Algorithm 2 is the preprocessing that replaces the fractional embedding values f_r^k with f̂_r^k. As the changed values f̂_r^k are used for probabilistically selecting the mapping for each of the requests, the analysis of the capacity violations needs to reflect these changes. To this end, Lemma 28 will be instrumental, as it shows that each of the decompositions is scaled by at most a factor of two.
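For reference, the preprocessing whose scaling Lemma 28 bounds (Lines 5–14 of Algorithm 3) can be sketched as follows; the function name filter_and_rescale is ours and cost stands for the mapping cost function c.

def filter_and_rescale(D_r, cost):
    """Drop decompositions costing more than twice the weighted cost WC_r and rescale
    the remaining fractional embedding values so that they again sum to one."""
    WC_r = sum(f_k * cost(m_k) for (f_k, m_k, l_k) in D_r)      # weighted average cost
    kept = [(f_k, m_k, l_k) for (f_k, m_k, l_k) in D_r if cost(m_k) <= 2 * WC_r]
    lam_r = sum(f_k for (f_k, _, _) in kept)                    # >= 1/2 by Lemma 28
    return [(f_k / lam_r, m_k, l_k) for (f_k, m_k, l_k) in kept]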
As the following lemma shows, this factor-two scaling implies that the decompositions contained in D̂_r use at most two times the original node resource capacities.

Lemma 30. For all node resources (x, y) ∈ R_S^V it holds that ∑_{r∈R} ∑_{D̂_r^k ∈ D̂_r} f̂_r^k · l_r^k(x, y) ≤ 2·d_S(x, y).

Proof. By Lemma 28, λ_r ≥ 1/2 holds for all requests r ∈ R. Hence, the probabilities of the decompositions having cost at most 2·WC_r are scaled by at most a factor of two, i.e. f̂_r^k ≤ 2·f_r^k holds for all requests r ∈ R and each decomposition. As the mappings and the respective loads of D_r and D̂_r are the same, the claim follows together with Lemma 11.

Given the above lemma, we restate most of the lemmas already contained in Section IV-C with only minor changes. We note that the definition of the maximal loads (cf. Definition 18) is independent of the decompositions found and that the corresponding assumptions made in Lemma 19 are still valid when using the relaxation of Integer Program 2 and are independent of the scaling. Concretely, forbidding mappings of network functions or edges onto elements that can never support them is still feasible. Furthermore, note that while we effectively scale the probabilities of choosing a specific decomposition, this does not change the corresponding loads of the (identical) mappings.

We again model the load on the different network functions (τ, u) ∈ R_S^V and the substrate edges (u, v) ∈ E_S induced by each of the requests r ∈ R as random variables L_{r,τ,u} ∈ [0, maxL^Σ_{r,τ,u}] and L_{r,u,v} ∈ [0, maxL_{r,u,v}·|E_r|] respectively. To this end, we note that Algorithm 3 chooses the k-th decomposition D̂_r^k = (f̂_r^k, m_r^k, l_r^k) with probability f̂_r^k. Hence, with probability f̂_r^k the load l_r^k is induced for request r ∈ R. The respective variables can therefore be defined as P(L_{r,x,y} = l_r^k(x, y)) = f̂_r^k (assuming pairwise different loads) and P(L_{r,x,y} = 0) = 1 − ∑_{D̂_r^k ∈ D̂_r} f̂_r^k for (x, y) ∈ R_S^V. Again, we denote by L_{x,y} = ∑_{r∈R} L_{r,x,y} the overall load induced on resource (x, y) ∈ R_S. By the definition of the expectation, the expected load on the network resource (x, y) ∈ R_S^V computes to E(L_{x,y}) = ∑_{r∈R} ∑_{D̂_r^k ∈ D̂_r} f̂_r^k · l_r^k(x, y). Together with Lemma 30 we obtain:
E(L_{x,y}) ≤ 2·d_S(x, y) for (x, y) ∈ R_S^V.   (46)

Using the above, we again apply Hoeffding's Inequality, while slightly adapting the probabilistic bounds compared with Lemma 20.

Lemma 31. The probability that the capacity of a single network function τ ∈ T on node u ∈ V_S^τ is exceeded by more than a factor (2 + √(log(|V_S|·|T|)·Δ_V)) is upper bounded by (|V_S|·|T|)^{−2}.

Proof. Each variable L_{r,τ,u} is clearly contained in the interval [0, maxL^Σ_{r,τ,u}]. We choose t = √(log(|V_S|·|T|)·Δ_V)·d_S(τ, u) and obtain the stated bound analogously to the proof of Lemma 20, again using the assumption on the maximal loads from Lemma 19 together with Equation 46.

We restate Lemma 21 without proof, as the only change is replacing the factor √2 with the factor √(3/2) and observing that by using Equation 46 we obtain a 2+γ approximation for the load instead of a 1+γ one.

Lemma 32. The probability that the capacity of a single substrate edge (u, v) ∈ E_S is exceeded by more than a factor (2 + √(3/2·log(|V_S|)·Δ_E)) is bounded by |V_S|^{−3}.

The following corollaries show that the above bounds play out nicely if we assume stricter bounds on the maximal loads.

Corollary 33. Assume that maxL_{r,τ,u} ≤ ε·d_S(τ, u) holds for 0 < ε < 1 and all τ ∈ T, u ∈ V_S^τ. With Δ_V as defined in Lemma 20, we obtain: the probability that in Algorithm 3 the capacity of a single network function τ ∈ T on node u ∈ V_S^τ is exceeded by more than a factor (2 + ε·√(log(|V_S|·|T|)·Δ_V)) is upper bounded by (|V_S|·|T|)^{−2}.
Corollary 34. Assume that maxL_{r,u,v} ≤ ε·d_S(u, v) holds for 0 < ε < 1 and all (u, v) ∈ E_S. With Δ_E as defined in Lemma 21, we obtain: the probability that in Algorithm 3 the capacity of a single substrate edge (u, v) ∈ E_S is exceeded by more than a factor (2 + ε·√(3/2·log(|V_S|)·Δ_E)) is upper bounded by |V_S|^{−3}.

D. Main Results

We can now proceed to state the main tri-criteria approximation results obtained for SCEP-C. The argument for Algorithm 3 having a polynomial runtime is the same as for Algorithm 2, since the preprocessing can be implemented in polynomial time. The following lemma shows that Algorithm 3 can produce solutions of high quality with high probability.

Lemma 35. Let 0 < ε ≤ 1 be chosen minimally, such that maxL_{r,x,y} ≤ ε·d_S(x, y) holds for all network resources (x, y) ∈ R_S. Setting β = ε·√(log(|V_S|·|T|)·Δ_V) and γ = ε·√(3/2·log(|V_S|)·Δ_E) with Δ_V, Δ_E as used in Lemmas 31 and 32, the probability that a solution is found within Q ∈ N rounds is lower bounded by 1 − (2/3)^Q if |V_S| ≥ 3 holds.

Proof. We again apply a union bound argument. By Lemma 31 the probability that for a single network function of type τ ∈ T on node u ∈ V_S^τ the allocations exceed (2+β)·d_S(τ, u) is less than (|V_S|·|T|)^{−2}. Given that there are at most |V_S|·|T| network functions overall, the probability that on any network function more than (2+β)·d_S(τ, u) resources will be used is less than (|V_S|·|T|)^{−1} ≤ |V_S|^{−1}. Similarly, by Lemma 32 the probability that the allocation on a specific edge is larger than (2+γ)·d_S(u, v) is less than |V_S|^{−3}. As there are at most |V_S|^2 edges, the union bound gives us that the probability that any of these edges' allocations will be higher than (2+γ)·d_S(u, v) is less than |V_S|^{−1}. As the cost of any solution found is deterministically always smaller than two times the optimal cost (cf. Lemma 29), the probability of not finding an appropriate solution within a single round is upper bounded by 2/|V_S|. For |V_S| ≥ 3 the probability of finding a feasible solution within Q ∈ N rounds is therefore at least 1 − (2/3)^Q.

Finally, we can state the main theorem showing that Algorithm 3 is indeed a tri-criteria approximation algorithm.

Theorem 36. Assume that |V_S| ≥ 3 holds and that maxL_{r,x,y} ≤ ε·d_S(x, y) holds for all network resources (x, y) ∈ R_S with 0 < ε ≤ 1, and set β = ε·√(log(|V_S|·|T|)·Δ_V) and γ = ε·√(3/2·log(|V_S|)·Δ_E) with Δ_V, Δ_E as defined in Lemmas 20 and 21. Then Algorithm 3 is an (α, 2+β, 2+γ) tri-criteria approximation algorithm for SCEP-C: with high probability it finds a solution whose cost is at most α = 2 times the optimal cost and which violates network function and edge capacities only within the factors 2+β and 2+γ, respectively.

VI. APPROXIMATE CACTUS GRAPH EMBEDDINGS

Having discussed approximations for linear service chains, we now turn towards more general service graphs (essentially virtual networks), i.e. service specifications that may contain cycles or branch into separate sub-chains. Concretely, we propose a novel linear programming formulation in conjunction with a novel decomposition algorithm for service graphs whose undirected interpretation is a cactus graph. Given the ability to decompose fractional solutions, we show that we can still apply the results of Sections IV and V in this case. Our results show that our approximation scheme can be applied as long as linear solutions can be appropriately decomposed.
We highlight the advantage of our novel formulation by showing that the standard multi-commodity flow approach employed in the Virtual Network Embedding Problem (VNEP) literature cannot be decomposed and hence cannot be used in our approximation framework.

This section is structured as follows. In Section VI-A we motivate why considering more complex service graphs is of importance. Section VI-B introduces the notion of service cactus graphs and the respective generalizations of SCEP-P and SCEP-C. Section VI-C shows how these particular service graphs can be decomposed into subgraphs. Building on this a priori decomposition of the service graphs, we introduce extended graphs for cactus graphs and the respective integer programming formulation in Sections VI-D and VI-E. In Section VI-F we show how the linear solutions can be decomposed into a set of fractional embeddings analogously to the decompositions computed in Section III. Section VI-G shows that the approximation results for SCEP-P and SCEP-C still hold for this general graph class. Lastly, Section VI-H shows that the standard approach of using multi-commodity flow formulations yields non-decomposable solutions, i.e. our approximation framework cannot be applied when using the standard approach. This also sheds light on the question why no approximations are known for the Virtual Network Embedding Problem, which considers the embedding of general graphs.

A. Motivation & Use Cases

While service chains were originally understood as linear sequences of service functions (cf. [38]), we witness a trend toward more complex chaining models, where a single network function may spawn multiple flows towards other functions, or merge multiple incoming flows. We will discuss one of these use cases in detail and refer the reader to [11], [17], [20], [25], [30] for an overview of more complex chaining models.

Let us give an example which includes functionality for load balancing, flow splitting and merging. It also makes the case for "cyclic" service chains. The use case is situated in the context of LTE networks and Quality-of-Experience (QoE) for mobile users. The service chain is depicted in Figure 2 and is discussed in depth in the IETF draft [20].

[Figure 2 (caption): The service chain of the use case [20], with up- (solid) and downstream (dashed) communications. The packet gateway (P-GW) terminates the mobile (3GPP) network and forwards all traffic to a load balancer (LB) which splits the traffic flows: TCP traffic on port 80 is forwarded to the performance enhancement proxy (PEP), which connects to two (load-balanced) caches. The load balancer LB_2 merges the outgoing traffic flows and forwards them through a firewall (FW) and a network address translator (NAT) and finally to the Internet. Depending on the share of web traffic, the different up- and downstream connections will have different bandwidth requirements. Furthermore, e.g., video streams received by the PEP from one of the caches may be transcoded on-the-fly, and hence the outgoing bandwidth of the PEP towards LB_1 might be less than the traffic received.]

Depending on whether the incoming traffic from the packet gateway (P-GW) at the first load balancer LB_1 is destined for port 80 and has type TCP, it is forwarded through a performance enhancement proxy (PEP). Otherwise, LB_1 forwards the flow directly to a second load balancer LB_2, which merges all outgoing traffic destined for the Internet. Depending on the deep-packet inspection performed by the PEP (not depicted in Figure 2, cf.
[20]) and whether the content is cached, the PEP may redirect the traffic towards one of the caches. Receiving the content either from the caches or the Internet, the PEP may -depending on user agent information -additionally perform transcoding if the content is a video. This allows to offer both a higher quality of experience for the end-user and reduces network utilization. It must be noted that all depicted network functions (depicted as rectangles) are stateful. The load balancer LB 2 e.g., needs to know whether the traffic received from the firewall needs to be passed through the PEP (e.g., for caching) or whether it can be forwarded directly towards LB 1 . Similarly, the firewall and the network-address translation need to keep state on incoming and outgoing flows. Note that the PEP spawns sub-chains towards the different caches and that the example also contains multiple types of cycles. Based on the statefulness of the network functions, all connections between network functions are bi-directed. Furthermore, there exists a "cycle" LB 1 , PEP, LB 2 , LB 1 (in the undirected interpretation). Our approach presented henceforth allows to embed either the outbound or the inbound connections of the example in Figure 2 (depicted as dashed or solid), but can be extended to also take bi-directed connections into account. B. Service Cactus Graphs Embedding Problems Given the above motivation, we will now extend the service chain definition of Section II to more complex service graphs, like the one in Figure 2. While for service chains request graphs G r " pV r , E r q were constrained to be lines (cf. Section II), we relax this constraint in the following definition. Definition 37 (Service Cactus Graphs). Let G r " pV r , E r q denote the service graph of request r P R. We call G r a service cactus graph if the following two conditions hold: 1) E r does not contain opposite edges, i.e. for all pv, uq P E r the opposite edge pu, vq is not contained in E r . 2) The undirected interpretationḠ r " pV r ,Ē r q, withV r " V r andĒ r " ttu, vu| pu, vq P E r u, is a cactus graph, i.e. any two simple cycles share at most a single node. Moreover, we do not assume that the service cactus graphs have unique start or end nodes. Specifically, all virtual nodes i P V r can have arbitrary types. The definitions of the capacity function d r : V r Y E r Ñ R ě0 and the cost functions c S : R S Ñ R ě0 are not changed. As the definitions of valid and feasible embeddings as well as the definition of SCEP-P and SCEP-C (see Definitions 1, 2, 3, and 4 respectively) do not depend on the underlying graph model, these are still valid, when considering service cactus graphs as input. To avoid ambiguities, we refer to the respective embedding problems on cactus graphs as SCGEP-P and SCGEP-C. C. Decomposing Service Cactus Graphs As the Integer Programming formulation (see IP 3 in Section VI-E) for service cactus graphs is much more complex, we first introduce a graph decomposition for cactus graphs to enable the concise descriptions of the subsequent algorithms. Concretely, we apply a breadth-first search to re-orient edges as follows. For a service cactus graph G r " pV r , E r q, we choose any node as the root node and denote it by r r . Now, we perform a simple breadth-first search in the undirected service graphḠ r (see Definition 37) and denote by π r : V r Ñ V r Y tNULLu the predecessor function, such that if π r piq " j then virtual node i P V r was explored first from virtual node j P V r . 
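The re-orientation step just described — choose an arbitrary root, run a breadth-first search on the undirected interpretation of the service graph, and record the predecessor function π_r — can be sketched as follows. This is a simplified illustration, not the authors' implementation: it orients every original edge from the earlier-discovered endpoint towards the later-discovered one (one natural reading of the construction of G_bfs), and the function names as well as the original edge orientations in the small example are our own assumptions.

```python
from collections import deque

def bfs_reorient(nodes, edges, root):
    """Compute the predecessor function pi_r and a re-oriented edge set E_bfs,
    orienting each edge from the endpoint discovered earlier to the one
    discovered later during a BFS on the undirected interpretation."""
    adj = {v: set() for v in nodes}
    for u, v in edges:                      # undirected interpretation of G_r
        adj[u].add(v)
        adj[v].add(u)
    pi = {v: None for v in nodes}           # pi_r, with None standing in for NULL
    order = {root: 0}                        # BFS discovery index
    queue = deque([root])
    while queue:
        i = queue.popleft()
        for j in adj[i]:
            if j not in order:
                order[j] = len(order)
                pi[j] = i
                queue.append(j)
    e_bfs = {(u, v) if order[u] < order[v] else (v, u) for (u, v) in edges}
    return pi, e_bfs

# a toy request in the spirit of Figure 3 (guessed original orientations):
nodes = ["i", "j", "k", "l", "m"]
edges = [("i", "j"), ("j", "k"), ("k", "l"), ("m", "j"), ("l", "m")]
pi, e_bfs = bfs_reorient(nodes, edges, root="j")
print(sorted(e_bfs))   # (i,j), (m,j) and (l,m) are reversed; node l gets in-degree 2
```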
Let pV bfs r , E bfs r q denote the graph G bfs r with V bfs r " V r and E bfs r " tpi, jq |j P V r , π r pjq " i^π r pjq ‰ NULLu. Clearly, G bfs r is a directed acyclic graph. We note the following based on the cactus graph nature of G r : 1) The in-degree of any node is bounded by 2, i.e. |δÉ bfs r piq| ď 2 holds for all i P V bfs r . 2) Furthermore, if |δÉ bfs r piq| " 2 holds for some i P V bfs r , then there exists a node j P V bfs r , such that there exist exactly two paths P r , P l in V bfs r from j to i which only coincide in i and j. Proof. With respect to Statement 1) note that node i must be reached from the (arbitrarily) chosen root r r P V bfs r . For the sake of contradiction, if |δÉ bfs r piq| ą 2 then there exist at least three (pair-wise different) paths P 1 , P 2 , P 3 from r r to i. Hence, in the undirected representationḠ r there must exist at least two cycles overlapping in more than one node, namely in i as well as some common predecessor of i. This is not allowed by the cactus nature ofḠ r (cf. Definition 37) and hence |δÉ bfs r piq| ď 2 must hold for any i P V r . Statement 2) holds due to the following observations. As all nodes are reachable from the root r r , we are guaranteed to find a common predecessor j P V bfs r while backtracking along the reverse directions of edges in E bfs r . If this was not the case, then there would exist two different sources, which is -by construction -not possible. Based on the above lemma, we will now present a specific graph decomposition for service cactus graphs. Concretely, we show that service cactus graphs can be decomposed into a set of cyclic subgraphs together with a set of line subgraphs with unique source and sink nodes. For each of these subgraphs, we will define an extended graph construction that will be used in our Integer Programming formulation. Concretely, the graph decomposition given below, will enable a structured induction of flow in the extended graphs: the flow reaching a sink in one of the subgraphs will induce flow in all subgraphs having this node as source. This will also enable the efficient decomposition of the computed flows (cf. Section VI-F). Definition 39 (Service Cactus Graph Decomposition). Any graph G bfs r of a service cactus graph G r can be decomposed into a set of cycles C r " tC 1 , C 2 , . . . u and a set of simple paths P r " tP 1 , P 2 , . . . u with corresponding subgraphs G C k r " pV C k r , E C k r q and G P k r " pV P k r , E P k r q for C k P C r and P k P P r , such that: 1) The graphs G C k r and G P k r are connected subgraphs of G bfs r for C k P C r and P k P P r respectively. and sinks s P k r , t P k r P V P k r are given for C k P C r and P k P P r respectively, such that δÉ C k r ps C k r q " δÈ C k r pt C k r q " δÉ P k r ps P k r q " δÈ P k r pt P k r q " H holds within G C k r and G P k r respectively. 3) tE C k r |C k P C r u Y tE P k r |P k P P r u is a (pair-wise disjoint) partition of E bfs r . 4) Each C k P C r consists of exactly two branches B C k r,2 , B C k r,1 Ă E C k r , such that both these branches start at s C k r and terminate at t C k r and cover all edges of G C k r . 5) For P k P P r the graph G P k r is a simple path from s P k r to t P k r . 6) Paths may only overlap at sources and sinks: @P k , P k 1 P P r , P k ‰ P k 1 : r u. We introduce the following sets: Note that the node sets V C,r , V P,r , V C,ŕ , V P,ŕ , V C,ȓ , and V P,ȓ only contain virtual nodes of paths or cycles that are either the source, or the target or any of both. 
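The decomposition formalized next (Definition 39, constructed in the proof of Lemma 40 below) peels the cycles off G_bfs and turns every remaining edge into a single-edge path, each with a unique source and target. The sketch below illustrates that step; it relies on networkx's cycle basis — in a cactus graph the fundamental cycles are exactly its simple cycles — rather than the backtracking from in-degree-2 nodes used in the proof, so it should be read as an illustration under that assumption, with all names being ours.

```python
import networkx as nx

def decompose_cactus(nodes, e_bfs):
    """Split the re-oriented service graph G_bfs into its cycles and the
    remaining single-edge paths (cf. Definition 39 / Lemma 40)."""
    undirected = nx.Graph()
    undirected.add_nodes_from(nodes)
    undirected.add_edges_from(e_bfs)
    cycles, cycle_edges = [], set()
    for cyc in nx.cycle_basis(undirected):
        pairs = list(zip(cyc, cyc[1:] + cyc[:1]))                 # undirected cycle edges
        oriented = [(u, v) if (u, v) in e_bfs else (v, u) for (u, v) in pairs]
        source = next(u for (u, v) in oriented                    # out-degree 2 within the cycle
                      if sum(1 for (a, b) in oriented if a == u) == 2)
        target = next(v for (u, v) in oriented                    # in-degree 2 within the cycle
                      if sum(1 for (a, b) in oriented if b == v) == 2)
        cycles.append({"source": source, "target": target, "edges": oriented})
        cycle_edges.update(oriented)
    # every edge not covered by a cycle becomes a single-edge path
    paths = [{"source": u, "target": v, "edges": [(u, v)]}
             for (u, v) in e_bfs if (u, v) not in cycle_edges]
    return cycles, paths

cycles, paths = decompose_cactus(nodes, e_bfs)   # reusing the toy example above
print(cycles)   # one cycle with source j, target l and branches over k and m
print(paths)    # the single remaining edge (j, i) as a one-edge path
```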
Similarly, E C r and E P r contain all virtual edges that are covered by any of the paths or any of the cycles. With respect to Ý Ñ E C k r and Ý Ñ E P k r we note that these edges' orientation agrees with the original specification E r and the edges in bfs r contains all edges whose orientation was reversed in G bfs r . B C k r denotes the set of branching nodes of cycle C k P C r , i.e. nodes which have an out-degree of larger than one in the graph G bfs r and which are not a source or a target of any of the cycles. Lastly, we abbreviate V τrpt C k r q S by V C k S,t , i.e. the substrate nodes onto which the target t C k r of cycle C k can be mapped. The constructive existence of the above decomposition follows from Lemma 38 as proven in the following lemma. Lemma 40. Given a service cactus graph G r " pV r , E r q, the graph G bfs r " pV bfs r , E bfs r q and its decomposition can be constructed in polynomial time. Proof. Note that the graph G bfs r is constructed in polynomial time by choosing an arbitrary node as root and then performing a breadth-first search. Having computed the graph G bfs r , the decomposition of G bfs r (according to Definition 39) can also be computed in polynomial time. First identify all 'cycles' in G bfs r by performing backtracking from nodes having an in-degree of 2. Afterwards, no cycles exist anymore and for each remaining edge e " pi, jq P E bfs r ztE C k r |C k P C r u a path G P k r " pV P k r , E P k r q with V P k r " ti, ju and E P k r " tpi, jqu can be introduced. An example of a decomposition is shown in Figure 3. The request graph G r is first reoriented by a breadth-first search to obtain G bfs r . As the virtual network function l is the only node with an in-degree of 2, first the "cycle" along j, k, l and j, m, l is decomposed to obtain cycle C 1 . Afterwards, all remaining edges are decomposed into single paths P r " tP 1 , P 2 , P 3 , P 4 u, consisting only of one edge. With respect to the notation introduced in Definition 39, we have e.g. s P1 r " j, s C1 r " j, t P1 r " i, and t C1 r " l. Furthmerore, according to the original edge orientation we have " tpj, kq, pk, lqu, and Ð Ý E C1 r " tpj, mq, pm, lqu. We note that node m is a branching node, i.e. B C1 r " tmu, as it is contained in the subgraph of C 1 and has a degree larger than 1 and is neither a source nor the target of a cycle. D. Extended Graphs for Service Cacti Graphs Based on the above decomposition scheme for service cactus graphs, we now introduce extended graphs for each path and cycle respectively. Effectively, the extended graphs will be used in the Integer Programming formulation for SCGEP as well as for the decomposition algorithm. In contrast to the Figure 3. Note that the orientation of edges is reversed, if the virtual edge was reversed in the breadth-first search. This is e.g. the case for the edge pi, jq P Gr respectively the path P 1 . The flow values fr ,¨,¨d epicted here are used in the Integer Program 3 to induce flows in the extended graphs. Definition 5 in Section III-A, these extended graphs will not be directly connected by edges, but thoughtfully stitched using additional variables (see Section VI-E). The definition of the extended graphs for paths P k P P r generally follows the Definition 5 for linear service chains: for each path P k P P r and each virtual edge pi, jq P E P k r a copy of the substrate graph is generated (cf. Figure 4). If an edge's orientation was reversed when constructing G bfs r , the edges in the substrate copies are reversed. 
Additionally, each extended graph G P k r contains a set of source and sink nodes with respect to the types of s P k r and t P k r . Concretely, the extended graph G P k r,ext contains two super sources, as the virtual node j can be mapped onto the substrate nodes v and w. The concise definition of the extended graph construction for the extended graphs of paths is given below. Definition 41 (Extended Graph for Paths). The extended graph G P k r,ext " pV P k r,ext , E P k r,ext q for path P k P P r is defined as follows: We set E P k r,ext " E P k r,`Y E P k r,´Y E P k r,S Y E P k r,F , with: For cycles we employ a more complex graph construction (cf. Definition 42), that is exemplarily depicted in Figure 5. As noted in Definition 39, the edges V C k r,ext of cycle C k P C r are partitioned into two sets, namely the branches B C k r,1 and B C k r,2 . Within the extended graph construction, these branches are transformed into a set of parallel paths, namely one for each potential substrate node that can host the function of target function t C k r . Concretely, in Figure 5, two parallel path constructions are employed for both realizing the branches B C k r,1 and B C k r,2 , such that for the left construction the network function l P V r will be mapped to v P V S and in the right construction the function l P V r will be hosted on the substrate node w P V S . This construction will effectively allow the decomposition of flows, as the amount of flow that will be sent into the left path construction of branch B C k r,1 will be required to be equal to the amount of flow that is sent into the left path construction of branch B C k r,2 . Furthermore, for each substrate node u P V τrps C k r q S that may host the source network function s C k r of cycle C k P C r , there exists a single super source ur ,s C k r . Together with the parallel paths construction, the flow along the edges from the super sources to the respective first layers will effectively determine how much flow is forwarded from each substrate node u P V τrps C k r q S hosting the source functionality towards each substrate node v P V τrpt C k r q S hosting the target functionality t C k r . The amount of flow along edge pvr ,j , v j,k r,w q in Figure 5 will e.g. indicate to which extent the mapping of virtual node j P V r to substrate node v P V S will coincide with the mapping of virtual function l P V r to substrate node w P V S . In fact, to be able to decompose the linear solutions later on, we will enforce the equality of flow along edges pvr ,j , v j,k r,w q and pvr ,j , v j,m r,w q. We lastly note that the flow variables fr ,¨,¨d epicted in Figure 5 will be used to induce flows inside the respective constructions or will be used to propagate flows respectively. r,ext of service cactus request r and cycle C 1 depicted in Figure 3. Similar styles of dashing the edges incident to the super sources vr ,j , wr ,j and sinks vŕ ,l and vr ,j indicates that the same amount of flow will be sent along them. We set E C k r,ext " E C k r,`Y E C k r,´Y E C k r,S Y E C k r,F , with: Note that we employ the abbreviation V C k S,t to denote the substrate nodes that may host the cycle's target network function, i.e. V E. Linear Programming Formulation We propose Integer Programs 3 and 4 to solve SCGEP-P and SCGEP-C respectively. The IPs are based on the individual service cactus graph decompositions and the corresponding extended graph constructions discussed above. 
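Before walking through the concrete constraints, the overall shape of the relaxation that is later decomposed and rounded may be easier to see in code. The skeleton below shows only the recurring pattern — an embedding variable per request, flow induced at the super sources of its extended graphs, and flow preservation at inner nodes; the cycle-coupling, flow-propagation and load constraints of Integer Program 3 (Constraints 51-60) are deliberately omitted. It is a schematic sketch using PuLP with names of our own choosing, not the formulation itself.

```python
import pulp

def build_profit_relaxation(requests, benefit, ext_edges, sources, sinks):
    """Schematic LP relaxation: x_r in [0,1] per request, flow variables on the
    edges of its extended graph, flow induction at the super sources and flow
    preservation at inner nodes. Capacity/load constraints are omitted here."""
    lp = pulp.LpProblem("scgep_profit_relaxation", pulp.LpMaximize)
    x = {r: pulp.LpVariable(f"x_{r}", 0, 1) for r in requests}
    f = {(r, e): pulp.LpVariable(f"f_{r}_{e[0]}_{e[1]}", 0, 1)
         for r in requests for e in ext_edges[r]}
    lp += pulp.lpSum(benefit[r] * x[r] for r in requests)           # maximize profit
    for r in requests:
        # flow leaving the super sources equals the (fractional) embedding decision
        lp += pulp.lpSum(f[(r, e)] for e in ext_edges[r]
                         if e[0] in sources[r]) == x[r]
        # flow preservation at every node that is neither super source nor super sink
        inner = {v for e in ext_edges[r] for v in e} - sources[r] - sinks[r]
        for v in inner:
            lp += (pulp.lpSum(f[(r, e)] for e in ext_edges[r] if e[1] == v)
                   == pulp.lpSum(f[(r, e)] for e in ext_edges[r] if e[0] == v))
    return lp, x, f
```

Solving such a relaxation with lp.solve() yields the fractional values that the decomposition algorithm subsequently splits into convex combinations of valid mappings.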
As IP 4 reuses all of the core Constraints, we restrict our presentation to IP 3. We use variables fr ,i,u P t0, 1u to indicate the amount of flow that will be induced at the super sources of the different extended graphs (paths or cycles) for virtual node i P Vȓ and substrate node u P V τrpiq S . Note that these flow variables are included in the Figures 4 and 5. Additionally flow variables f r,e P t0, 1u are defined for all edges contained in any extended graph construction, i.e. for r P R and e P E ext,SCG r,flow , with E ext,SCG r,flow " p Furthermore, the already introduced variables (see Section III-A) x r P t0, 1u and l r,x,y ě 0 are used to model the embedding decision of request r P R and the allocated loads on resource px, yq P R S for request r P R respectively. Embeddings are realized as follows. If, x r " 1 holds, by Constraint 50 exactly one of the flow values fr ,rr,u is set to 1 for u P V τrprrq S and hence a flow will be induced in all paths and cycles having the arbitrarily chosen root r r P V r as source. Specifically, for all paths P k P P r , Constraint 53 sets the flow values of edges incident to the respective super source nodes in the extended graphs G P k r according to the values to fr ,i,u . For cycles, Constraint 51 induces a flow analogously, albeit deciding which target substrate node w P V C k S,t will host the function t C k r . Importantly, Constraint 51 only induces flow in the branch B C k r,1 of cycle C k P C r . Indeed, Constraint 52 forces the flow in the branch B C k r,2 to reflect the flow decisions of the branch B C k r,1 . Having induced flow in the extended networks for paths P k P P r and cycles C k P C r , Constraint 54 enforces flow preservation in all of the extended networks. Here we define V ext,SCG r,S q to be all nodes that are neither a super source nor a super sink in the respective extended graphs. As flow preservation is enforced within all extended graphs, the flow of path and cycle graphs must eventually terminate in the respective super sinks. By Constraints 55 and 56 the variables fr ,j,¨o f the target of the cycle C k P C r -i.e. j " t C k r -or the target of the path P k P P r -i.e. j " t P k r -are set according to the amount of flow that reaches the respective sink in the respective extended graph. This effectively induces flows in all the extended graphs of cycle C k P C r or path P k P P r for which s C k r P V C,ŕ Y V P,ŕ or s P k r P V C,ŕ Y V P,ŕ holds, respectively. However, paths P k P P r or cycles C k P C r may be spawned also at nodes lying inside another cycle C k 1 . This is e.g. the case for path P 2 in Figure 3, as the virtual node m P V r is an inner node of the branch B C k r,2 of cycle C 1 . To nevertheless, induce the right amount of flow at the different super sources in the graph G P2 r,ext , the sum of flows along processing edges inside the parallel path constructions have to be computed, as visualized in Figure 5: the flow variable fr ,m,v needs to equal the sum of the flow along edges pv j,m r,v , v m,l r,v q and pv j,m r,w , v m,l r,w q and the flow variable fr ,m,u needs to equal the sum of the flow along edges pu j,m r,v , u m,l r,v q and pu j,m r,w , u m,l r,w q as the respective edges denote the processing of function m P V r on node v P V S and w P V S respectively. Constraint 57 realizes this functionality, i.e. the inter-layer edges that denote processing are summed up to set the respective flow inducing variables fr ,¨,¨. As paths may only overlap at sources and sinks (cf. 
Definition 39), Constraints 55-57 propagate flow induction across extended graphs. Constraints 58 and 59 set the load variables l r,x,y induced by the embeding of request r P R substrate resources px, yq P R S . Within the Integer Program 3, we again make Integer Program 3: SCGEP-P max ÿ rPR b r¨xr (49) ÿ uPV τr prr q S fr ,rr,u " x r @r P R (50) f r, pw i,j r,w ,wŕ ,j q " fr ,j,w @r P R, C k P C r , pi, jq P B C k r,1 , j " t C k r , w P V τrpjq S (55) f r, pu i,j r ,uŕ ,j q " fr ,j,u @r P R, P k P P r , pi, x r P t0, 1u @r P R (61) fr ,i,u P t0, 1u @i P Vȓ (62) f r,e P t0, 1u @r P R, e P E ext,SCG r,flow l r,x,y ě 0 @r P R, px, yq P R S (64) use of specific index sets to simplify notation (cf. Definition of E ext r,u,v and E ext r,τ,u in Section III-A and Integer Program 1). Concretely, we introduce index sets E ext,SCG r,u,v and E ext,SCG r,τ,u to contain the edges that indicate the processing on substrate resource pτ, uq P R V S and the edge pu, vq P E S respectively as follows: r , u j,l r q, jq|P k P P r , pi, jq, pj, lq P E P k r , τ r pjq " τ uY tppu i,j r,w , u j,l r,w q, jq|C k PC r ,pi, jq,pj, lqPE C k r ,τ r pjq"τ, r,w , u i,j r,w q, i, jq|C k P C r , pi, jq P Ð Ý E C k r , w P V C k S,t u In contrast to the definition of E ext r,τ,u in Section III-A, the set E ext,SCG r,τ,u only collects the internal edges of extended graphs representing the processing at node u having type τ . Hence, processing at nodes that are the start or the target of any of the decomposed paths or cycles must be separately accounted for. Concretely, in Constraint 58, for all nodes i P Vȓ z Ť C k PCr B C k r , the respective flow induction variables fr ,i,u are added. With respect to accounting for the load of substrate edges pu, vq P E S , we note that the only difference to the construction in Section III-A is the re-reorientation of edge directions and the consideration of both paths and cycles. Lastly, the objective 49 as well as the Constraint 60 have not changed with respect to (Integer) Linear Program 1. Summarizing the workings of Integer Program 3, we note the following: ‚ For each request r P R, a breadth-first search from an arbitrarily root r r P V r is performed to obtain graph G bfs r . This graph is decomposed according to Definition 39. 50 -60 and 62 -64 x r " 1 @r P R (66) ‚ Within each extended graph flow preservation holds (excluding super sources and sinks). ‚ For cycles C k P C r , it is additionally enforced that the amount of flow sent towards destination w P V C k S,t must be equal for both branches (see Constraints 51 and 52). ‚ As flow must terminate at the sinks of the extended graphs, the variables fr are eventually set (see Constraints 55 and 56), thereby potentially inducing flows in other extended graphs. Additionally, if another cycle or path is spawned at a node internal to a branch of a cycle the respective flow input variables fr ,¨,¨a re set in Constraint 57. ‚ Hence, if x r " 1 holds for request r P R, then by Constraint 50 one of the variables fr ,rr,u must be set to one. By the above observations, this will induce flows in all extended graphs. F. Decomposition Algorithm / Correctness In the following we present Algorithm 4 to decompose a given solution to the (relaxation of) Integer Program 3. As in the decomposition algorithm for linear service chains (cf. 
Algorithm 1), the output of the algorithm is a set D r for each request r P R consisting of triples pf k r , m k r , l k r q, where f k r P r0, 1s is the fractional embedding value of the (frational) embedding defined by the mapping m k r . We again note that the load function l k r : R S Ñ R ě0 indicates the load on substrate resource px, yq P R S , if the mapping m k r is fully embedded. For each request r P R, the algorithm decomposes the flows in the extended graphs iteratively as long as x r ą 0 holds. This is done by placing Initially, only the root r r P V r is placed in the queue Q and one of the substrate nodes u P V τrprrq S with fr ,rr,u ą 0 is chosen as the node to host r r . Such a node must always exist by Constraint 50 of Integer Program 3 and the node mapping is set accordingly. The queue Q will only contain nodes that are sources of paths or cycles in the decomposition, i.e. which are contained in Vȓ . Furthermore, by construction, each extracted node from the queue will already be mapped to some specific substrate node via the function m V r . The set V is used to keep track of all variables of the Integer Linear Program whose value used in the decomposition process. For each node in the queue i P Q, all cycles C k P C r and paths P k P P r starting at i, i.e. that s C k r " i or s P k r " i holds, are handled one after another using the function ProcessPath. We start by discussing how paths P k are handled (see . Note that by construction, the source Additionally, equality holds, if the solution p x, f , lq P F LP , respectively the objectiveĈ, is optimal. The proofs and lemmas contained in Sections IV and V to obtain the approximations to SCEP-P and SCEP-C are purely based on Lemmas 7 -11. The above presented analogues (Lemmas 43 -47) hence give all the prerequisites to employ the approximation framework developed in Sections IV and V. The only changes to the respective approximations (cf. Algorithms 2 and 3) are to employ the novel Integer Programs 3 and 4 in conjunction with the novel decomposition Algorithm 4. We lastly note that the number of variables and constraints of Integer Programs 3 and 4 is still polynomial in the respective graph sizes. Concretely, the number of variables and constraints is bounded by Op ř rPR |E r |¨|V S |¨|E S |q, as for each edge pi, jq P E r , which lies on a cycle C k P C r , exactly |V C k S,t | ď |V S | many parallel path constructions with Op|E S |q many edges is used. Hence, the runtime to compute the linear relaxations of the respective integer programs is still polynomial and the runtime of the novel decomposition algorithm increases at most by a factor |V S |. Without further proofs, we state the following two theorems. Theorem 48. We adapt Algorithm 2 by replacing Integer Program 1 by Integer Program 3 and Algorithm 1 by Algorithm 4. Assuming that |V S | ě 3 holds, and that max L r,x,y ď ε¨d S px, yq holds for all resources px, yq P R S with 0 ă ε ď 1 and by setting β " ε¨a2¨log p|V S |¨|T |q¨∆ V and γ " ε¨a2¨log |V S |¨∆ E with ∆ V , ∆ E as defined in Lemmas 20 and 21, we obtain a pα, 1`β, 1`γq tri-criteria approximation algorithm for SCGEP-P, such that it finds a solution with high probability, that achieves at least an α " 1{3 fraction of the optimal profit and violates network function and edge capacities only within the factors 1`β and 1`γ respectively. Theorem 49. We adapt Algorithm 3 by replacing Integer Program 2 by Integer Program 4 and Algorithm 1 by Algorithm 4. 
Assuming that |V S | ě 3 holds and that max L r,x,y ď ε¨d S pτ, uq holds for all network resources px, yq P R S with 0 ă ε ď 1 and by setting β " ε¨alog p|V S |¨|T |q¨∆ V and γ " ε¨a3{2¨log |V S |¨∆ E with ∆ V , ∆ E as defined in Lemmas 20 and 21, we obtain a pα, 2`β, 2`γq tri-criteria approximation algorithm for SCGEP-C, such that it finds a solution with high probability, with costs less than α " 2 times higher than the optimal cost and violates network function and edge capacities only within the factors 2`β and 2`γ respectively. H. Non-Decomposability of the Standard IP The a priori graph decompositions, the extended graph constructions, the respective integer program, and the corresponding decomposition algorithm to obtain approximations for SCGEP are very technical (cf. Sections VI-C to VI-F). In the following we argue that standard multi-commodity flow integer programming formulations typically used in the virtual network embedding literature, see e.g. [5], [30], [37], cannot be employed for our randomized rounding approach. This result also suggests that the hope expressed by Chowdhury et al. to obtain approximations using this standard formulation [5], is unlikely to be true in general. In particular, we consider the solutions of the relaxation of the archetypical Integer Program 5 for SCEP-C, which uses multi-commodity flows to connect virtual network functions in the substrate (cf. [5], [30], [37]), and show that this formulation allows for solutions which cannot be decomposed into valid embeddings. As we will show, this has ramifications beyond not being able to apply the randomized rounding approach, as the relaxations obtained from Integer Program 5 are provably weaker than the ones obtained by Integer Program 3. Integer Program 5 employs three classes of variables. The variable x r P t0, 1u indicates whether request r P R shall be embedded. Variables y r,i,u P t0, 1u and z r,i,j,u,v denote the embedding of virtual node i P V r on substrate node u P V τrpiq S and the embedding of virtual edge pi, jq P E r on substrate edge pu, vq P E S respectively. By Constraint 71, the virtual node i P V r of request r P R must be placed on any of the appropriate substrate nodes in V τrpiq S iff. x r " 1 holds. Constraint 72 induces a unit flow for each virtual edge pi, jq P E r : if virtual node i P V r is mapped onto substrate node u P V S and virtual node j P V r is mapped onto v P V S , then the flow balance at node u is 1, and the flow balance at substrate node v is´1 and flow preservation holds elsewhere. Note also that in the case that both the source and the target are mapped onto the same node no flow is induced. Lastly, Constraints 73 to 74 compute the effective node and edge allocations and bound these by the respective capacities. By the above explanation, it is easy to check that any integral solution to Integer Program 5 defines a valid mapping. Indeed, for each request r P R with x r " 1 the node i P V r is mapped onto substrate node u, i.e. m V r piq " u holds, iff. y r,i,u " 1 holds and the edge mapping of pi, jq P E r can be recovered from the flow variables y r,i,j,¨,¨b y performing a breadth-first search from m V r piq P V S to m V r pjq P V S where an edge pu, vq P E S is only considered if y r,i,j,u,v " 1 holds. We denote the set of feasible integral solutions of Integer Program 5 by F mcf IP and the set of linear solutions by F mcf LP . Reversely, each solution to SCEP-P, i.e. 
each subset R 1 Ď R of requests with mappings pm V r , m E r q for r P R 1 , induces a specific solution to the Integer Program 5. We denote by ϕ the function that given a set R 1 and corresponding mappings tm r |r P R 1 u yields the respective integer programming solution p x, y, zq P F mcf IP defined as follows: ‚ x r " 1 iff. r P R 1 for all requests r P R, ‚ y r,i,u " 1 iff. r P R 1 and m V r piq " u for all requests r P R, virtual nodes i P V r , and substrate nodes u P V τrpiq S . ‚ z r,i,j,u,v " 1 iff. r P R 1 and pu, vq P m E r pi, jq for all requests r P R, virtual edges pi, jq P E r and substrate edges pu, vq P E S . By the above argumentation, we observe that Integer Program 5 indeed is a valid formulation for SCEP-P. We will now investigate the linear relaxations of Integer Program 5 and show that the formulation may produce solutions, which cannot be embedded. As an important precursor to this result, we first define the set of all linear programming solutions that may originate from convex combinations of singular embeddings as follows. Definition 50 (Decomposable LP Solution Spaces). Let M r denote the set of all valid mappings for the request r P R. We denote by Note that the above also allows for the possibility of the nonembedding, as ř n k"1 λ k " 0 is allowed. We denote the space of all decomposable linear programming solutions as F D LP " tp x, y, zq P Ś rPR F D LP,r |p x, y, zq satisfies Constraints 71´74u. As stated before, we denote by F mcf LP " tp x, y, zq|p x, y, zq satisfies Constraints 71´74u denote the set of linear programming solutions feasible according to the constraints of Integer Program 5. Using the definitions of F D LP and F mcf LP we can now formally prove, that the linear relaxation of Integer Program 5 does contain solutions, which cannot be decomposed. Lemma 51. The set of feasible LP solutions of Integer Program 5 may contain non decomposable solutions, i.e. F mcf LP Ę F D LP . Proof. We show F mcf LP zF D LP ‰ H by constructing an example in Figure 6 using a single request and a 8-node substrate. We assume that V τrpiq S " tu 1 , u 5 u, V τrpjq S " tu 2 , u 6 u, V τrpkq S " tu 4 , u 8 u, V τrplq S " tu 3 , u 7 u holds. The depicted request shall be fully embedded, i.e. x r " 1 holds. The fractional embeding is represented indicating the node mapping variables, 1 2 i means that e.g. the corresponding mapping value y r,i,u1 is 1{2. Edge mappings are represented according to the x r P t0, 1u @r P R (75) z r,i,j,u,v P t0, 1u @r P R, pi, jq P E r , pu, vq P E S (78) dash style of the request and always carry a flow value of 1{2. Clearly, the depicted fractional embedding is feasible and therefore contained in F mcf LP . ‚ Constraint 71 holds as each virtual node is mapped with a cumulative value of 1 to substrate nodes supporting the respective function. ‚ Constraint 72 holds as the substrate nodes onto which the tails of virtual edges (i.e. ti, j, ku) have been mapped on, have corresponding outgoing flows while the heads of virtual edges have corresponding incoming flows. ‚ Constraints 73 and 74 hold trivially if we assume large enough capacities on the substrate. Assume for the sake of contradiction that the depicted embedding is a linear combination of elementary solutions, i.e. there exist mappings M k for k P K such that the depicted solution is a linear combination ř kPK λ k¨ϕ ptru, M k q with λ k ě 0 and ř kPK λ k ď 1. 
As virtual node i is mapped onto substrate node u 1 , and u 2 and u 8 are the only neighboring nodes that host j and k respectively there must exist a mapping pm V r , m E r q within M k with m V r piq " u 1 , m V r pjq " u 2 , m V r pkq " u 8 and m E r pi, jq " pu 1 , u 2 q and m E r pi, kq " pu 1 , u 8 q. Similarly, as the flow of virtual edge pj, lq at u 2 only leads to u 3 and the flow of virtual edge pj, kq at u 8 only leads to u 6 , the virtual node l must be embedded both on u 3 and u 7 . As the virtual node l must be mapped onto exactly one substrate node, this partial decomposition cannot possible be extended to a valid embedding. Hence, the depicted solution cannot be decomposed into elementary mappings and Substrate nodes are annotated with the mapping of virtual nodes and 1 2 i for substrate node u 1 stands for y r,i,u 1 " 1{2. Substrate edges are dashed accordingly to the dash style of the virtual links mapped onto it. All virtual links are mapped with values 1{2. The dash style of substrate edge pu 1 , u 2 q therefore implies that z r,i,j,u 1 ,u 2 " 1{2 holds. space F mcf LP of Integer Program 1 (see [1] for an introduction on comparing formulations using projections). . ‚ z r,i,j,u,v " ř D k r PDr:pu,vqPm k r pi,jq f k r for all requests r P R, virtual edges pi, jq P E r and substrate edges pu, vq P E S . As the Formulation 3 and the decomposition Algorithm 4 are valid, it is easy to check that the above projection is valid, i.e. πpF new LP q Ď F mcf LP holds. Furthermore, given any solution p x, f`, f , lq P F new LP , it is obvious that the projected solution πp x, f`, f , lq is decomposable, as the respective mappings were computed explicitly in the decomposition algorithm. Thus, πpF new LP q Ď F D LP holds and under this projection all solutions are decomposable. Together with Lemma 51, we obtain πpF new LP q Ď F D LP Ĺ F mcf LP . Lastly, as F mcf LP zF D LP ‰ H holds, the relaxations of Integer Program 3 are indeed provably stronger than the ones of Integer Program 5. The above theorem is based on the structural deficits of the Integer Program 5 and does neither depend on the particular way we have formulated IP 5 nor does it depend on whether we consider SCGEP-P or SCGEP-C. Furthermore, we note that the above theorem can also have practical implications, when trying to obtain obtain good bounds via linear relaxations, as the difference in the benefit (or cost) can be unbounded. Proof. We first note that by Theorem 52 we have Opt new LP ď Opt mcf LP as πpF new LP q Ĺ F mcf LP holds. We reuse the example depicted in Figure 6 and we assume that the depicted request is the only one. As discussed, the depicted fractional solution is a feasible relaxation of Integer Program 5. As x r " 1 holds, we have Opt mcf LP " b r . Considering the relaxations of Integer Program 3, we claim that only x r " 0 is feasible. This is easy to check as there does not exist a potential substrate location for virtual node l, such that the branches i Ñ j Ñ l and i Ñ k Ñ l can end in the same substrate location while also emerging at the same location. Hence, the benefit obtained by the relaxation of Integer Program 3 is Opt new LP " 0. Hence, the absolute difference is Opt mcf LP´O pt new LP " b r and -as b r can be set arbitrarily -the absolute as well as the relative difference are unbounded. VII. RELATED WORK Service chaining has recently received much attention by both researchers and practitioners [31], [33], [34], [38]. Soulé et al. 
[38] present Merlin, a flexible framework which allows to define and embed service chain policies. The authors also present an integer program to embed service chains, however, solving this program requires an exponential runtime. Also Hartert et al. [34] have studied the service chain embedding problem, and proposed a constraint optimization approach. However also this approach requires exponential runtime in the worst case. Besides the optimal but exponential solutions presented in [34], [38], there also exist heuristic solutions, e.g., [31], [33]: while heuristic approaches may be attractive for their low runtime, they do not provide any worst-case quality guarantees. We are the first to present polynomial time algorithms for service chain embeddings which comes with formal approximation guarantees. Concretely, our results based on randomized rounding techniques are structured into two papers: In [12], we only consider the admission control variant and derive a constant approximation under assumptions on the relationship between demands and capacities as well as on the optimal benefit. In contrast, we have considered in this paper a more general setting, in which the decomposition approach based on random walks [12] is not feasible, as we allow for arbitrary service cactus graphs. Importantly, while our approach does not require specific assumptions on loads and benefits, we obtain worse approximation ratios. From an algorithmic perspective, the service chain embedding problem can be seen as a variant of a graph embedding problem, see [6] for a survey. Minimal Linear Arrangement (MLA) is the archetypical embedding footprint minimization problem, where the substrate network has a most simple topology: a linear chain. It is known that MLA can be Op ? log n log log nq approximated in polynomial time [4], [14]. However, the approximation algorithms for VLSI layout problems [6], [16], cannot be adopted for embedding service chains in general substrate graphs. Very general graph embedding problems have recently also been studied by the networking community in the context of virtual network embeddings (sometimes also known as testbed mapping problems). Due to the inherent computational hardness of the underlying problems, the problem has mainly been approached using mixed integer programming [5] and heuristics [40]. For a survey on the more practical literature, we refer the reader to [15]. Algorithmically interesting and computationally tractable embedding problems arise in more specific contexts, e.g., in fat-tree like datacenter networks [2], [36]. In their interesting work, Bansal et al. [3] give an n Opdq time Opd 2 log pndqq-approximation algorithm for minimizing the load of embeddings in tree-like datacenter networks, based on a strong LP relaxation inspired by the Sherali-Adams hierarchy. The problem of embedding star graphs has recently also been explored on various substrate topologies (see also [36]), but to the best of our knowledge, no approximation algorithm is known for embedding chain-and cactus-like virtual networks on arbitrary topologies. Finally, our work is closely related to unsplittable path problems, for which various approximation algorithms exist, also for admission control variants [24]. In particular, our work leverages techniques introduced by Raghavan and Thompson [35]: in their seminal work, the authors develop provably good algorithms based on relaxed, polynomial time versions of 0-1 integer programs. 
However, our more general setting not only requires a novel and more complex decomposition approach, but also a novel and more advanced formulation of the mixed integer program itself: as we have shown, standard formulations cannot be decomposed at all. Moreover, we are not aware of any extensions of the randomized rounding approach to problems allowing for admission control and the objective of maximizing the profit.

VIII. CONCLUSION

This paper initiated the study of polynomial time approximation algorithms for the service chain embedding problem, and beyond. In particular, we have presented novel approximation algorithms which apply randomized rounding to decompositions of linear programming solutions and which also support admission control. We have shown that using this approach, a constant approximation of the objective is possible in multi-criteria models with non-trivial augmentations, both for service chains as well as for more complex virtual networks, particularly service cactus graphs. Besides our results and decomposition technique, we believe that our new integer programming formulation may also be of independent interest. Our paper opens several interesting directions for future research. In particular, we believe that our algorithmic approach is of interest beyond the service chain and service cactus graph embedding problems considered in this paper and can be used more generally: we have shown that it can be employed as long as the relaxations can be decomposed, and we therefore expect it to apply to further graph classes as well. Moreover, it will also be interesting to study the tightness of the novel bounds obtained in this work.
2016-04-07T21:28:31.000Z
2016-04-07T00:00:00.000
{ "year": 2016, "sha1": "0a74a7a6c8fe4b2042dc5182db3cba3d89b61808", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "0a74a7a6c8fe4b2042dc5182db3cba3d89b61808", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
54464737
pes2o/s2orc
v3-fos-license
Predicting Text Readability with Personal Pronouns . While the classic Readability Formula exploits word and sentence length, we aim to test whether Personal Pronouns (PPs) can be used to predict text readability with similar accuracy or not. Out of this motivation, we first calculated readability score of randomly selected texts of nine genres from the British National Corpus (BNC). Then we used Multiple Linear Regression (MLR) to determine the degree to which readability could be explained by any of the 38 individual or combinational subsets of various PPs in their orthographical forms (including I , me , we , us , you , he , him , she , her (the Objective Case), it , they and them ). Results show that (1) subsets of plural PPs can be more predicative than those of singular ones; (2) subsets of Objective forms can make better predictions than those of Subjective ones; (3) both the subsets of first-and third-person PPs show stronger predictive power than those of second-person PPs; (4) adding the article the to the subsets could only improve the prediction slightly. Reevaluation with resampled texts from BNC verify the practicality of using PPs as an alternative approach to predict text readability. Introduction The history of predicting textual readability quantitatively dates back to the 1940s when several linguists including Rudolf [1], George [2], Dale and Chall [3] introduced readability formulas into the field of research, thus unleashing a wave of researches and applications.Until 2017, Web of Science has published more than 11,000 researches on readability and its applications have moved from the field of education to fields of administration, commerce, computers, military, scientific research, etc. [4][5][6]. Traditional readability studies usually start with vocabulary and sentence complexity.For instance, the most widely recognized Flesch Reading Ease Formula uses word length (in terms of syllable) and sentence length (in terms of word count) as variables to calculate readability; the Dale-Chall Readability Formula exploits numbers of words that are not in the Dale-Chall 3000 Vocabulary and sentence length as criteria for predicting readability; the Gunning Fog Formula [7] and the SMOG Formula [8] employ number of polysyllabic words and sentence length as measures of readability. As computer technologies improve, many other factors are taken into account, such as type-token ratio, numbers of affixes, prepositional phrases and clauses, cohesive ties, other linguistics features [9], and even L2 learner's reading experience, etc. [10].While these studies are valuable and significant, they usually involve multiple indirect indices that are subjectively defined or difficult to calculate in large-scale analysis.For example, it is hard to tell whether a word such as factory with two or more phonetic variants should be counted as 2 syllables (/'faektrɪ/) or 3 syllables (/'faektəri/).Besides, most of the classic formulae target for texts in English (and some other syllabic language), their applicability for non-syllabic languages such as Chinese remain untested. 
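As a concrete reference point for the formula-based approaches discussed above, the Flesch Reading Ease score combines average sentence length with average syllables per word (score = 206.835 - 1.015·ASL - 84.6·ASW), with higher scores indicating easier texts. The sketch below computes it with a crude vowel-group syllable counter; as noted above for words like factory, syllable counting is genuinely ambiguous, so this is only an illustration and the function names are ours.

```python
import re

def count_syllables(word: str) -> int:
    """Very rough syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    avg_sentence_len = len(words) / max(1, len(sentences))
    avg_syllables_per_word = syllables / max(1, len(words))
    return 206.835 - 1.015 * avg_sentence_len - 84.6 * avg_syllables_per_word

print(flesch_reading_ease("The cat sat on the mat. It was warm."))
```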
In this research, we hope to test whether Personal Pronouns (hereinafter referred to as PPs) alone can have any predictive power for readability or not.There are several reasons for us to try them: (1) Given that PPs are always monosyllabic words used to replace full personal names or noun phrases, their usage in a text would affect its total word number, average sentence length as well as average word length; (2) PPs are often anaphorically used and can thus serve as cohesive ties to reduce redundancy and improve comprehension; (3) PPs were only tested collectively in [11] and [12] as part of linguistic features or cohesive ties, and consequently reached different conclusions on the role PPs play in readability prediction. Since most languages have pronouns, we therefore propose that PPs could be promising candidate indicators of readability across languages and deserve further investigation.In this study, we will use a corpus-based approach to test the utility of individual PP forms in English texts of different genres.Specific research questions are as follows: (1) Which person (first-, second-, or third-person, hereinafter referred to as 1P, 2P and 3P respectively) of PPs can predict text readability most accurately? (2) Which number (Singular and Plural) of PPs can predict text readability more accurately? (3) Which case (Subjective and Objective, with Possessive temporarily excluded) of PPs can predict text readability more accurately?Section 2 and Section 3 will introduce our research methods and data processing, Section 4 will report the data results from 5 aspects, Section 5 will reevaluate the results and Section 6 will summarize our major findings and limitations. Materials and Methodology This research uses corpus-based method and examines the predictability of various subsets of the PP forms (as shown in Corpus data British National Corpus (BNC) was chosen as our research object for the following reasons: (1) All text materials in BNC were collected from native speakers as representative samples of Standard British English.So errors in pronoun use by non-native speakers have been excluded to a large extent; variations in geographical and social dialects should have been reasonably controlled or avoided as well. (2) BNC contains approximately 100 million words, 90% of which are written materials collected from nine domains (also referred to as "genres" hereinafter) namely: of genres on usages of PPs [13], proper sampling of this balanced general corpus allows for control over the genre variable that may affect readability. Text materials used in this study (Corpus I) consist of 1,091,347 words in total, which are randomly selected from each of the nine domains.Corpus II consists of 972,490 words in total. Readability Formula In the present study, we choose the Flesch Reading Ease Score, which is recognized as the most widely used and the most tested and reliable formula [6] Data Processing Data processing are divided into 4 steps: (1) Use Perl program to count word and sentence length; (2) Calculate the Flesch Reading Ease scores of sample texts of nine genres respectively; (3) Use AntConc to count numbers of PP forms.Tokens of US as the abbreviation of the United States and tokens of the Possessive her are excluded during the retrieval. After that, the densities of the individual pronouns (D(I), D(we), etc.) 
based on the total word number of each text domain are calculated respectively; (4) Use SPSS for multivariate regression analysis.Take the density of each subset of PPs as an independent variable, and the Flesch Reading Ease score as the dependent variable.Use Sig., correlation coefficient (R 2 ), as well as the adjusted correlation coefficient (adjusted R 2 ) values to determine which subset(s) of PPs may have better predictability.The criteria and process for determining moderate and strong fitting subsets are shown in Fig. 1. Results in Fig. 2 show that the 3P group has the best fitting degrees, with 5 subests (over 40%) of strong fitting and 2 (nearly 10%) of medium fitting subsets.The mixed (1P+3P) group performs similarly well, with 3 subsets (nearly 30%) of strong fitting and another 2 (nearly 10%) of good fitting subsets, way better than 1P and 2P subsets do.Therefore, it can be concluded that 3P subsets perform better than 1P and 2P subsets do in both individual and mixed subsets, which means that adding 1P and 2P subsets into the 3P subsets will lowered their predictability.First, we use D(the) to predict text readability and gain a medium performance (Sig.=0.019,R 2 =0.570,Adjusted R 2 =0.509).Results in Fig. 5 show that subsets with the included perform slightly better than those without the in good and in strong fitting ranges.To test whether there is a significant difference while adding the in PPs, we use chi-square tests and draw the conclusion that the improvement is not significant (Chisquare value=0.213,df=2, p=0.899>0.05). Reevaluation for Strong Fitting Subsets All the subsets with a strong fitting degree are shown in Table 4.To explore whether subsets with strong predicting power can perform consistently, we repeated the procedures in Section 3 with re-sampled texts from BNC (Corpus II) and recalculated the pronoun and readability data in the new corpus.results from both Corpus I and II are shown in Table 4. Table 4 shows that there are still two subsets with strong fitting degree in Corpus II, namely "he + him + she + her + it" and "I + me + he + him + she + her + it". Although the other subsets have some changes in the fitting degree, they are almost in the moderate fitting range, indicating fair predictability. Conclusion A corpus-based approach is used in research to explore the readability predictability of predictability.Therefore, we believe that using specific subsets of PPs to predict text readability appears practical. However, large-scale tests are needed before any solid conclusion can be drawn concerning the applicability of PPs for readability prediction.Detailed investigation into the predictability of Possessive PPs, and it in Subjective and Objective Cases may be needed as well.Besides, it needs to be verified on whether texts in other geographical varieties such as American English are similar to their British matches. Fig. 2 . Fig. 2. Results for predictability of different Persons on readability in Corpus I Number and Readability.The 38 individual and combinational subsets of PPs can be divided into three groups according to Number (singular PPs: 12 subsets, plural PPs: 9 subsets, singular + plural PPs: 17 subsets). Fig. 3 Fig. 3 shows that 50% of the singular-Number group offer good predication (with strong and/or medium fitness); and nearly 45% (11.1%+33.3%) of the plural-Number group show good prediction.The mixed-number group performs not as well. Fig. 3 . Fig. 3. 
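Steps (3) and (4) of the data processing above — counting pronoun forms, turning the counts into densities, and regressing the Flesch scores of the nine domains on the densities of a given subset — can be approximated outside AntConc and SPSS along the following lines. This is a rough Python analogue rather than the authors' workflow: the names are ours, the simple token count cannot exclude the Possessive her the way the manual retrieval did, and statsmodels stands in for SPSS when reporting the Sig., R² and adjusted R² values.

```python
import re
import statsmodels.api as sm

PRONOUN_FORMS = ["i", "me", "we", "us", "you", "he", "him",
                 "she", "her", "it", "they", "them"]

def pronoun_densities(text: str) -> dict:
    """Density of each pronoun form relative to the total word count."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    total = len(words)
    return {p: words.count(p) / total for p in PRONOUN_FORMS}

def fit_subset(readability_scores, density_rows, subset):
    """Regress readability on the densities of one pronoun subset,
    e.g. subset = ["he", "him", "she", "her", "it"]; returns R^2,
    adjusted R^2 and the overall significance of the regression."""
    X = sm.add_constant([[row[p] for p in subset] for row in density_rows])
    model = sm.OLS(readability_scores, X).fit()
    return model.rsquared, model.rsquared_adj, model.f_pvalue

# density_rows would hold one density dict per BNC domain and
# readability_scores the nine corresponding Flesch Reading Ease scores.
```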
Results for predictability of different Numbers on readability in Corpus I Case and Readability.The 38 individual and combinational subsets of PPs can be divided into three groups according to Case (Subjective PPs: 9 subsets; Objective PPs: 9 subsets; Subjective + Objective PPs: 20 subsets). Fig. 4 Fig.4shows that Objective PP group has much stronger predictability than the Subjective group and the mixed-Case group, in both good and strong fitting area. Fig. 4 . Fig. 4. Results for predictability of different Cases on readability in Corpus I Fig. 5 . Fig. 5. Results for predictability of including and excluding the on readability 77 subsets with various personal pronoun forms and the definite article the.The results show that: (1) them has the best predictive power among individual pronoun forms; (2) 3P and 1P make better predictions than 2P; (3) plural PPs outperforms singular ones only in strong fitting range; (4) Objective PPs can predict more accurately than Subjective ones; (5) definite article the may only improve subsets' predictability slightly; (6) Retesting results are consistent for those PP subsets with good Table 1 ) on text readability in terms of Person, Number and Case.It should be noted that the Possessive Case is not taken into consideration in this research.Nor will this paper look into the gender issue.So (he+she) and (him+her) will be considered as individual Subjective and Objective singular forms of 3P+HUMAN respectively; it be considered as the individual singular form of 3P-HUMAN with unclear Case; and you as the only 2P form with unclear Number and Case.Consequently, there are 38 reasonable subsets of PP forms: 10 subsets with only individual PP forms, and 28 others with various Person/Number/Case combinations. Table 1 . Personal pronoun forms studied in this project Table 2 shows that texts from Belief, Arts and Imagination domains are easiest to understand with highest readability scores among all texts from the nine domains; texts of Commerce, Natural Science, Applied Science and World Affairs are most difficult to read with lowest scores. Table 2 . Readability results for nine domains in BNC Table 4 . Personal pronoun subsets with strong fitness in Corpus I and II
2018-12-11T14:06:05.454Z
2018-11-02T00:00:00.000
{ "year": 2018, "sha1": "2dcca0c39642f50b30b105a2186e31d60624a1cc", "oa_license": "CCBY", "oa_url": "https://hal.inria.fr/hal-02118839/file/474230_1_En_27_Chapter.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "85f93bf3a296c654f3e5133f3a55ea78ac0cc206", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [ "Computer Science" ] }
246690939
pes2o/s2orc
v3-fos-license
Performance of Brinjal ( Solanum melongena L) Germplasm for Shoot and Fruit Borer Infestation and Marketable Yield The present field experiment was aimed to evaluate the tolerance level to shoot, fruit borer infestation, and yield traits with 174 brinjal accessions at the University orchard, Department of Vegetable crops, Horticultural College and Research Institute, Tamil Nadu Agricultural University, Coimbatore. Among the evaluated accessions, the Acc- EC 490062, Acc - EC 144139-D and Acc - IC 344646 were identified as the best performers as they showed favorable effects for earliness to flowering and harvest, number of fruits per plant and yield. In addition, they recorded the lowest level of shoot and fruit borer infestation with a high marketable yield. The least incidence of shoot borer (13.97 %) was registered in the accession EC 144139-D, EC 490062 (14.35 %), followed by the accession IC 344646 (15.74 %). Whereas the minimum infestation of fruit borer was recorded by the accessions EC 490062 (13.18 %), EC 144139-D (13.52 %), and IC 344646 (13.67 %). The maximum marketable yield per plant (3.46 kg) was registered by the Acc EC 490062 followed by Acc - EC 144139-D (3.80 kg) and Acc- IC 344646 (3.64 kg). The genotypes acknowledged in the current investigation can be explored as parents in future crop improvement programmes of brinjal. INTRODUCTION Vegetable cultivation plays a vital role in making the cropping system more remunerative. There are plenty of reasons, which hinder the production levels of vegetable crops. They are, non-availability of suitable cultivars/varieties, high cost of the desirable seed /planting materials, damage of pests and diseases and change of climatic conditions, etc. To overcome these problems, selecting the best genotypes from the germplasm pool that will serve as a basic material in crop improvement programmes to develop suitable high yielding varieties with resistance to biotic and abiotic stresses gains importance. Brinjal is a common vegetable grown throughout India. Nevertheless, it has its regional specificity based on its color, size, shape, stripes on the surface, thorniness, etc. The nutritional value of brinjal per 100g according to the United States Department of Agriculture (USDA), shows that brinjal fresh weight comprises 0.3 per cent minerals, 0.3 per cent fat, 1.3 per cent fiber, 1.4 per cent protein, 4 per cent of various vitamins and carbohydrates (A and C) and 92.7 percent moisture. It is a good source of potassium, phosphorus, calcium, iron and the vitamin B group. Besides its nutritional quality, brinjal has numerous health benefits in orthodox and traditional medicine. Although brinjal is not so popular for its high health-promoting micronutrients, it has low calories and low fat, making it valuable in diets. Remarkably, available literature suggested that brinjal is used as a medicine in different parts of the world for various illnesses (Oladosu, 2021). The germplasm collection could be a source of desirable traits for improving existing brinjal varieties in the country. In addition, the global interest in the development of cultivars has encouraged germplasm collection and preservation. Hence, these resources are important to plant breeders as a reservoir of genetic variation. Characterization and evaluation of plant germplasm are vital for identifying desirable accessions for utilization in breeding programs (Upadhyaya et al., 2008). 
The importance of utilization and development of varieties towards high yield and high marketable yield to supply high-value brinjal to the market is highly essential to enhance the profitability to the growers. Knowledge of agro-morphological genetic variation and cropping conditions on vegetative and yield-related traits plays a significant role in varietal improvement and production of brinjal (Sulaiman et al., 2020). In any selection programme, the mean performance of the genotypes for individual characters serves as an essential criterion for discarding the undesirable types. This indicates that germplasm studies may act as a potential source and offer scope for the selection of highyielding genotypes with desirable horticultural attributes. Hence the potential of germplasm act as genetic resources, "Evaluation and identification of suitable cultivars is mandatory" for maintaining plant genetic resources to identify the best types and use them in the further breeding programme. Therefore, the present investigation was undertaken to characterize brinjal accessions collected from NBPGR, New Delhi, by assessing the performance of brinjal accessions for important plant traits and shoot and fruit borer resistance, marketable yield to identify the best performing accessions. were utilized for the study. Seeds were sown in the nursery and the seedlings were transplanted in the main field after 30 days. The experiment was laid out in "Augmented Block Design". Required cultural operations like watering, manuring and weeding were done periodically by adopting standard cultural practices. A random of five plants were marked for recording observations in each accession. The observations were recorded on plant traits viz., plant height and no. of branches, flowering, earliness to flowering and fruit harvest, fruit and yield traits viz., single fruit weight, fruit length, fruit girth, fruit weight, no. of fruits per plant, yield per plant, shoot and fruit borer infestation and marketable yield per plant. The obtained data were subjected to statistical analysis as suggested by Panse and Sukhatme, 1985. RESULTS AND DISCUSSION The recorded observations were statistically analyzed and the mean values were tabulated and presented in Table 1. The statistical analysis revealed highly significant differences among the genotypes for all the traits indicating the presence of sufficient variability in the experimental material. The selection of suitable parents is an essential criterion for the success of a crop improvement program. The research conducted by Srivastava (2020) revealed that eggplant germplasm had ample genetic variation portrayed through agromorphological characterization. Totally 94 accessions recorded more plant height than the grand mean (77.76 cm). The maximum plant height of 118.73 cm was recorded in the Acc -IC090132, followed by Acc -IC089888 and Acc -IC 112779. Previous workers also stated the variation in plant height among different genotypes in brinjal Dahatonde et al., (2010), Kumar et al., (2011), Nirmala (2012, Praneetha (2016) and Srivastava et al., (2019). The number of branches is another criterion that contributes to more yield. It was found that as low as 3.12 (Acc-IC 261788) branches to as high as 12.43 branches per plant (Acc-EC 144139-D) were recorded among the evaluated germplasm. The mean for this trait was 8.15. Among the evaluated germplasm, 77 accessions recorded more branches above the grand mean value. 
The next best values for the number of branches after Acc-EC 144139-D were recorded in Acc-EC 490062 (11.63) and Acc-IC 344646 (11.38). Variation in the number of branches in brinjal was also observed in the evaluated brinjal entries by Kamalakkannan et al., (2007), Vaddoria et al., (2007), Satesh Kumar et al., (2011), Nirmala (2012), Praneetha (2016) and Srivastava et al., (2019). Earliness for flowering and fruiting is a highly desirable character, which facilitates a greater number of harvests from a plant over its entire duration. The range for days to 50 per cent flowering varied from 56.33 days (Acc-EC 144139-D) to 81.36 days (Acc-IC 090132). On average, 68.34 days was recorded as the grand mean for this trait. A total of 78 accessions took fewer days to 50 per cent flowering in the population, and the accession IC 099670 recorded 68.34 days for 50 per cent flowering, which was on par with the grand mean. The Acc-EC 144139-D was the earliest one, which took 56.33 days for 50 per cent flowering, and the Acc-EC 490062 was the second earliest to register 50 per cent flowering. Earliness was also measured based on the number of days taken for the first harvest. Early harvest also facilitates a greater number of harvests. Days taken for the first harvest ranged from 65.57 days (Punjab Sadabahar) to 103.33 days (Acc-IC 354525) with a grand mean value of 86.02 days. A total of 72 accessions recorded fewer days to the first harvest than the grand mean. The Acc-EC 490062 took the least number of days to first harvest (65.57 days). The next best accession was Acc-EC 144139-D, which took 66.63 days for the first harvest. Early flowering and early harvest were reported by Chowdhury et al., (2010) and Nirmala (2012). Similar findings for early harvest in brinjal were registered by Omkar Singh and Kumar (2005), Suneetha et al., (2006), Vaddoria et al., (2007), Chowdhury et al., (2010), Nirmala (2012), Praneetha (2016) and Srivastava et al., (2019). The fruit weight ranged from 35.16 to 118.32 g. The maximum single fruit weight of 118.32 g was recorded by the Acc-IC 090871, followed by Acc-EC 112773 (107.82 g) and Acc-IC 090088 (103.54 g). A similar pattern for different ranges of fruit weight in brinjal was reported by Shafeeq et al., (2007), Kumar et al., (2011), Nirmala (2012), Praneetha (2016) and Srivastava et al., (2019). Fruit length also contributes to yield increase, and variation was observed among the evaluated accessions for this character. The grand mean for the length of the fruit measured among the evaluated brinjal germplasm was 10.13 cm. A total of 81 accessions exceeded the mean value for fruit length and 93 accessions recorded a lower value for the trait. The maximum fruit length of 16.43 cm was recorded in the Acc-EC 169079, followed by the Acc-IC 344646 (14.84 cm) and Acc-IC 023969 (14.35 cm). The minimum fruit length of 5.3 cm was recorded in the Acc-IC 261792. A wide range of fruit length in brinjal was reported by Paikra et al., (2003), Deep et al., (2006), Chowdhury et al., (2010), Kumar et al., (2011), Nirmala (2012), Praneetha (2016) and Srivastava et al., (2019). Fruit girth is another significant yield-contributing character. Fruit girth ranged from as low as 4.37 cm to as high as 17.65 cm, with a grand mean of 8.30 cm. In total, 67 accessions recorded higher fruit girth than the grand mean. The maximum fruit girth was measured in the Acc-IC 023771.
The next best values for fruit girth were 15.97 cm (Acc-IC 090871), 15.42 cm (Acc-IC 090132) and 15.17 cm (Acc-EC 112773). These findings of varied fruit girth in brinjal are in accordance with Kumar et al., (2011), Nirmala (2012), Praneetha (2016) and Srivastava et al., (2019). The number of fruits is the main trait that directly decides the yield level. A minimum of 20.77 fruits to a maximum of 64.75 fruits per plant were registered in the different accessions. The Acc-EC 490062 recorded the maximum number of fruits per plant (64.75), followed by Acc-EC 144139-D (55.84) and Acc-IC 344646 (52.47). The minimum number of fruits per plant was recorded in the Acc-IC 354597. The same trend of variation in the number of fruits per plant was registered by Chowdhury et al., (2010), Kumar et al., (2011), Nirmala (2012), Praneetha (2016) and Srivastava et al., (2019). The yield per plant decides the economic returns to the growers. The per-plant yield ranged from 1.14 kg in the Acc-IC 112934 to 4.45 kg in the Acc-EC 490062. On average, the evaluated accessions recorded 2.33 kg of yield per plant. The fruit yield of promising accessions is depicted in Figure 1. Various levels of yield in brinjal were registered by Kumar et al., (2011), Nirmala (2012), Praneetha (2016) and Srivastava et al., (2019) in the evaluated germplasm. When there is little or no shoot and fruit borer damage, an increase in yield is naturally assured. The minimum percentage of shoot borer infestation (12.14%) was recorded by Acc-EC 305163, followed by Acc-EC 144139-D (13.97%). The shoot borer infestation was maximum in the Acc-IC 261788 (26.34%). The mean shoot borer infestation was 19.55%, and 91 accessions recorded lower shoot borer infestation than the grand mean. The shoot borer infestation is presented in Figure 3. Fruit borer infestation decides the marketable fruit yield per plant. The minimum percentage of fruit borer damage (13.18%) was recorded in the Acc-EC 490062, followed by Acc-EC 144139-D (13.52%). The highest fruit borer infestation of 32.16% was recorded in the Acc-IC 111037. The mean for this trait was 22.73%, and 90 accessions recorded lower fruit borer infestation than the grand mean. A wide range of shoot and fruit borer infestation in brinjal was also reported by Kamalakkannan et al., (2007). Marketable fruit yield, which is obtained after deducting the fruit borer-infested fruits, contributes direct profit for the growers. The marketable yield ranged from 0.91 kg (Acc-IC 112934) to 3.86 kg, and the mean marketable fruit yield was 1.80 kg per plant. The Acc-IC 310884 registered a marketable yield of 1.80 kg, on par with the grand mean value. The performance of the brinjal germplasm from the present study showed that the Acc-IC 090132 recorded the maximum plant height, and the accessions Acc-EC 144139-D and Acc-EC 490062 registered the maximum number of branches per plant. The Acc-EC 144139-D was the earliest for 50% flowering, followed by Acc-EC 490062. The Acc-EC 490062 was the first one to register early harvest among the accessions evaluated. The maximum single fruit weight was recorded in the Acc-IC 090871, the maximum fruit length was recorded in the Acc-EC 169079, and the maximum fruit girth was measured in the Acc-IC 023771. The maximum number of fruits per plant was recorded in the Acc-EC 490062, followed by Acc-EC 144139-D and Acc-IC 344646. The highest yield per plant was recorded by the Acc-EC 490062 and Acc-EC 144139-D.
The accessions Acc-EC 305163 and Acc-EC 144139-D recorded the minimum percentage of shoot borer and fruit borer damage. The Acc-EC 490062, followed by Acc-EC 144139-D and Acc-IC 344646, recorded the maximum marketable yield per plant. The best accessions identified in the present study can be well utilized for varietal release and as parents in breeding programmes for further improvement of the desirable traits. CONCLUSION The germplasm evaluation study showed that the Acc-EC 490062, Acc-EC 144139-D, and Acc-IC 344646 were identified as the best performers, as they recorded desirable characters for earliness to flowering and fruit harvest, a greater number of fruits per plant and higher yield per plant. Also, they recorded the lowest level of shoot and fruit borer infestation and high marketable yield. Knowledge of morphological genetic variation in vegetative and yield-related traits plays a significant role in varietal improvement and production of brinjal (Solanum melongena L.). Therefore, these accessions can be used in the brinjal breeding programme to develop superior varieties/hybrids with high yield and low shoot and fruit borer infestation.
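As a rough illustration of the screening arithmetic used above, the following minimal Python sketch computes per-accession fruit borer infestation, flags each accession against the grand mean, and estimates marketable yield by deducting borer-infested fruit. The records, counts, and the simplifying assumption that infested and healthy fruits weigh about the same are hypothetical and are not taken from Table 1.

```python
# Minimal sketch (hypothetical data): grand-mean screening of brinjal accessions
# for borer infestation and marketable yield. Values are invented for illustration.

records = [
    # accession, fruits per plant, infested fruits, total yield (kg/plant)
    ("EC 490062",   64, 9, 4.45),
    ("EC 144139-D", 55, 8, 4.10),
    ("IC 354597",   21, 6, 1.20),
]

def fruit_borer_pct(total_fruits, infested_fruits):
    """Percentage of fruits showing borer damage."""
    return 100.0 * infested_fruits / total_fruits

def marketable_yield(total_yield_kg, infestation_pct):
    """Approximate marketable yield after deducting borer-infested fruit,
    assuming infested and healthy fruits have similar average weight."""
    return total_yield_kg * (1.0 - infestation_pct / 100.0)

infestations = {acc: fruit_borer_pct(n, d) for acc, n, d, _ in records}
grand_mean = sum(infestations.values()) / len(infestations)

for acc, n, d, y in records:
    pct = infestations[acc]
    flag = "below grand mean" if pct < grand_mean else "above grand mean"
    print(f"{acc}: {pct:.2f}% infestation ({flag}), "
          f"marketable yield ~{marketable_yield(y, pct):.2f} kg/plant")
```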
2022-02-10T16:40:11.747Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "04d5d28ebbad18928a166b8d2684907830b64883", "oa_license": "CCBY", "oa_url": "http://masujournal.org/108/AlBpy3kwcGqMr0exHUxqDeAfGX9lw0.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "bff89d8876703a1e83313a8b2898d30769452b25", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Biology" ], "extfieldsofstudy": [] }
216121438
pes2o/s2orc
v3-fos-license
Development of a Novel Polyherbal Formulation for Augmenting Milk Production in Healthy Dairy Cows Aim: The study was designed to develop and standardize a novel polyherbal formulation (PHF) for augmenting milk production in healthy dairy cattle. Materials and methods: Five raw plant drugs, viz., tubers of Asparagus racemosus Willd. (Shatavari), whole plant of Eclipta alba (L.) Hassk. (Bhringraj), seeds of Trigonella foenum-graecum L. (Methika), fruits of Foeniculum vulgare Mill. (Mishreya), and Anethum sowa Roxb. ex Fleming (Shatapushpa) were used to prepare hydroalcoholic extracts using the Soxhlet method. Three in-house batches of PHF were prepared and standardized as per Ayurvedic Pharmacopoeia of India (API) methods. Pharmacognostic authentication and chemical identification were done by macroscopic and microscopic studies, phytochemical screening, physicochemical analysis, and high performance thin layer chromatography (HPTLC) fingerprinting. The safety studies of the galactagogue preparation were performed through heavy metal, microbial contamination, aflatoxin, and pesticide residue analysis. Results: Organoleptic studies revealed that all the batches appeared semisolid in nature, blackish-brown in color, with a pleasant odor and slightly bitter taste. Phytochemical screening confirmed the presence of similar secondary metabolites in the different batches of both the raw drugs and the PHF. Physicochemical analysis and HPTLC fingerprints at different illuminations showed that all three batches were uniformly composed and complied with the pharmacopoeial limits. Results of the safety parameters indicated that all three batches were safe and complied with the WHO and API guidelines. Conclusion: The present work is the first to claim the standardization of this unique, cost-effective, nonhormonal, Ayurvedic galactagogue in-house preparation, i.e., PHF, for augmenting milk yield in the dairy herd. It shows that all three batches have similar characteristics and are uniformly composed. It serves as a reference for identifying and distinguishing the galactagogue herbs. INTRODUCTION India is one of the world's largest milk-consuming countries, with rising milk demand. Many adverse factors, viz., unidentified disease conditions, lack of proper feed management, unhealthy nutrition, and poor medication, have badly impacted the milk yield of lactating dairy cattle, and this has now become a challenging issue for the Indian dairy industry. Moreover, the application of harmful chemotherapeutic agents and synthetic hormones such as recombinant bovine somatotropin and oxytocin, for the purpose of increasing milk production and profits, is reported to cause several serious health issues such as hypothyroidism, cystic ovaries, and reproductive failures in dairy animals. 1,2 On the other hand, the use of nonhormonal polyherbal galactagogues does not cause such deleterious effects in cows, goats, and buffaloes. 3-7 Effective feed management along with herbal agents has been reported to potentiate milk production in the dairy herd, as per veterinary data. Marketed phytogalactagogues are more cost-effective than synthetic hormonal galactagogues. Polyherbal formulations (PHFs) are compositions of several varieties of herbs in different dosage forms; they are purely herbal, nonhormonal, safe, and environmentally friendly, without showing any side effects or upsetting the overall health of cattle.
Ancient literature such as the Charaka and Susruta Samhita described some indigenous medicinal species, such as Asparagus racemosus Willd., for inducing milk. Ayurvedic galactagogues have been reported to be safe, capable of inducing therapeutic actions in domestic animals at fixed doses, 8 and used as nutritive food supplements in dairy herds. 5,9 They assist in the maintenance and augmentation of dairy cows' milk production. Many traditional cultures and folk practices claim that they stimulate milk yield not only in dairy cattle but also in humans. 10 They stimulate, maintain, and enhance milk yield through physiological actions. 11 Supplementation with a herbal galactagogue showed a significant increase, i.e., 14.24%, in total milk yield in Surti buffaloes without affecting milk composition. 12 Payapro, a herbal galactagogue, increased cows' milk yield by 31.10%. Similarly, dairy cows fed with Lectovet, a PHF, not only increased milk output but also showed an enhanced percentage of milk fat. 13,14 The comparative evaluation of the feeding impact of two commercialized PHFs, i.e., Ruchamax and Payapro, in lactating cows revealed that the average milk yield was highest, i.e., 11.8 L/day, in Ruchamax-supplemented cows, while a moderate volume of milk, i.e., 9.3 L/day, was observed in Payapro-supplemented cows, as compared to 7.1 L/day in control animals. 15 Several such herbs have long been recognized as galactagogues, 9,16 and were employed either as single drugs themselves or as ingredients of some commercialized Ayurvedic preparations. 17-19 Many more plant species like Leptadenia reticulata (Retz.) Wight and Arn. and Nigella sativa L., including Asparagus racemosus Willd., were identified for their galactagogenic properties. 20 Moreover, some Nigerian herbs, viz., Vitex doniana Sweet, Kigelia africana (Lam.) Benth., Allophylus africanus P. Beauv., Alternanthera sessilis (L.) R. Br. ex DC (sessile joyweed), Secamone afzelii (Roem. and Schult.) K. Schum., Calotropis procera (Aiton) Dryand., Adansonia digitata L., Lecaniodiscus cupanioides Planch. ex Benth., Launaea taraxacifolia (Willd.) Amin ex C. Jeffrey, etc., were reported to increase milk yield effectively. 21 Globally, Trigonella foenum-graecum L. is a potent galactagogue that is commonly recommended for lactating mothers to increase milk secretion. 22,23 Chloroform extracts of Trigonella foenum-graecum L. seeds produced a mastogenic effect that stimulated the growth of mammary glands and induced estrogenic actions. 21 In vitro assays of fenugreek seeds suggested some estrogen-like compounds embedded in this plant, i.e., phytoestrogens, that stimulated pS2 (estrogen-induced protein) expression; likewise, another phytoestrogen, diosgenin, was responsible for increasing milk flow without any toxic effects. 11 Its seeds also influenced the maintenance of lactation in ruminants, i.e., buffaloes, goats, etc. 24 Trigonella foenum-graecum L. supplemented in the goat's diet increased milk production, i.e., a 13% increase, through growth hormone (PRL) stimulation. 25 Some E2-like molecules, such as anethole and estragole, were also identified for galactagogenic properties because of their structural resemblance to dopamine; by inhibiting the antisecretory action of dopamine on prolactin, they result in high milk secretion. Likewise, another constituent of Foeniculum vulgare Mill., named anol, has been reported to increase the growth of mammary glands in immature female rabbits. 26 Asparagus racemosus Willd., an Indian milk-enhancing herb for mammals, has been described in several classical texts such as the Charaka and Susruta Samhita.
Alcoholic extracts of this plant induced estrogenic effects in mammary glands of rats with increased milk yield 27 and also played as an active constituent of some important galactagogue formulation like Ricalex. Some clinical trials have established a fact that Asparagus racemosus Willd. increased the PRL level 28 and produced a lactogenic effect when supplemented in rat's food. 29 The galactagogue effect of roots of this herb was observed to be significant in dairy buffaloes. 30 The phytochemical investigation of Asparagus racemosus Willd. reported some bioactive compounds, i.e., Shatavarins I-IV, a steroidal saponins 31 that are responsible for producing estrogenic activity. 28 A hypothetical opinion established the fact that Asparagus racemosus Willd. causes enlargement of the mammary gland through corticoids or PRL actions. 27 Some Ayurvedic formulations were reported to enhance the milk production in lactating cattle as well as in humans. They significantly worked and were found safe for human breastfeeding medicines and veterinary dairy pharmaceuticals. [32][33][34][35][36] The efficacy of PHF was preclinically evaluated in rats and further applied to stimulate the milk yield in lactating dairy animals. In this direction, plant drugs were utilized without adopting an appropriate uniform chemical standardization. 37 However, for ensuring the safety, efficacy, quality, and authenticity of herbal and veterinary pharmaceutical, the use of standardized materials and preparations is essential as per the WHO guidelines. 38,39 Now it has become acceptance criteria for phytogalactagogue so as to compensate the surging milk demand globally at a validated scientific platform. Previous studies reported that various regulatory bodies worked in this way to ensure that Ayurvedic drugs are prepared strictly in accordance with prescribed pharmacopeial standards so that consumers may get safe, pure, potent, and effective herbal medicines. Standardization of such polyherbal medicines provided a set of standards as the qualitative and quantitative values that guide to assurance the quality, efficacy, safety, and reproducibility. It plays a fundamental role in guaranteeing the quality and stability of herbal formulations and works as a regulatory tool in the quality control of herbals. 40,41 Some investigations also revealed that due to the lack of standard data for the identification and authentication, quality control measures for PHF could not be established. Standardization of Ayurvedic milk inducing herbs determines their mechanisms of action and establishment of therapeutic dosage. It has vast importance in terms of their acceptability and quality enhancement for supporting the various interdisciplinary investigations. Commercially quality compliance through pharmacopeial standardization of such formulations has wide importance to accelerate market, gaining profits, and maintaining the goodwill of herbals internationally. Not only this, standardization of PHF scientifically supports various traditional claims through clinical studies on different animal models. 39,42,43 In view of such objective, the present work was designed to formulate and standardized a novel, nonhormonal, and potent polyherbal galactagogue as PHF having milk-yielding properties consisting hydroalcoholic extract of five herbs, i.e., tubers of Asparagus racemosus Willd. (Shatavari.), whole plant of Eclipta alba (L.) Hassk. (Bhringraj), seeds of Trigonella foenum-graecum L. (Methika), fruits of Foeniculum vulgare Mill. 
(Mishreya), and Anethum sowa Roxb. ex Fleming (Shaptapushpa.) for augmenting milk production without the side effects unlike synthetic hormonal galactagogue in healthy dairy cows, which may help in making up the high milk demand in India. So far, no one has established this unique combination of herbal gougue with standardization procedures. The study was designed including four criteria, namely, identification of the plant material by pharmacognostical characters, phytochemical screening of secondary metabolites, physicochemical analysis, and chromatographic profiling. Solvents and Reagents All solvents, reagents, and chemicals utilized for experiment were of analytical grade. Double distilled water was utilized throughout the analysis. For high performance thin layer chromatography (HPTLC) analysis, aluminum TLC plate, E-Merck, thickness of 0.2 mm, precoated Silica Gel 60 with fluorescent indicator F 254 used as stationary phase. Instrumentation The HPTLC analysis was performed using CAMAG HPTLC assembly (Muttenz, Switzerland) attached with a semiautomatic sample applicator (spray-on technique) Linomat IV, Hamilton (Reno, Nevada, USA) Syringe (100 μL), twin trough development chamber, lighting system CAMAG TLC visualizer Reprostar 3 integrated into win CATS software of version 1.4.2. The Olympus CH20iBIMF microscope (trinocular with camera attachment) was used in pharmacognosy of the plant material. Photomicrographs in both the cases were taken using SONY digital camera, model no. DSC-350. Identification and Authentication of Plant Materials The collected plant materials were identified and authenticated taxonomically by detailed pharmacognostical studies including correct taxonomic identification and their parts' macroscopic and microscopic characterizations. Preparation of Hydroalcoholic Extract (50:50) The raw plant materials were cleaned, dried, and then prepared as coarse powder. The 50% ethanol was added to the raw materials in a ratio 8:1, mixed thoroughly, and the mixture was macerated for 4 hours. Individual hydroalcoholic (ethanol:water, 50:50) extracts were prepared at 35-40°C and were evaporated in petri plates and rotary evaporator on water bath maintained at 60°C. The extract was scraped and packed accordingly. The percentage yields of the extracts according to the weight were calculated. Preparation of PHF Hydroalcoholic extracts of each plant extract was mixed in the equal proportion to prepare the three batches of finished PHF. Standardization of Crude Plant Ingredients and PHF The well-authenticated plant materials were chemically standardized through phytochemical and physicochemical standard (API) evaluation methods. The organoleptic characters of sample extracts of individual drugs and the formulation were tested on the basis of their physical properties. The sensory perceptions were determined by the physical examination of color, odor, taste, and clarity of raw drugs as well as the PHF. Preliminary phytochemical screening was carried out using standard confirmatory tests to detect the presence of various categories of secondary metabolites such as alkaloids, flavonoids, saponins, tannins, steroids, reducing sugars, and phenols. The physicochemical analysis of sample extracts was performed to determine chemical parameters like pH value, total ash, acid-insoluble ash, loss on drying, water-soluble extractive, alcohol-soluble extractive, and TLC profiles. 
44 Safety studies incorporated various tests, viz., analysis of heavy/toxic metals (Pb, As, Hg, and Cd), microbiological analysis, aflatoxins (B1, B2, G1, and G2) and pesticide residue analysis. Optimization of Mobile Phase The mobile phase was optimized with distinct combinations of solvent systems in different ratios under standard chromatographic conditions, with the aim of obtaining the best separation and resolution of the different sample tracks. HPTLC Fingerprinting The sample solution of PHF (10 μL each) was applied in the form of bands, indicated as tracks 1, 2, and 3 in Figure 1, on an E. Merck aluminum plate precoated with silica gel 60 F254 of 0.2 mm thickness, using an HPTLC Linomat IV semiautomatic applicator. The plate was developed in an optimized mobile phase, i.e., ethyl acetate:isopropyl alcohol:water, 65:25:10 (v/v/v), in a CAMAG twin trough glass chamber and dried in the oven for 5 minutes. The plate was visualized under UV at 254 nm and 366 nm, and images were taken through the software-supported system. Further, the plate was post-derivatized in ASR through the dipping method, dried, and photodocumented. RESULTS The collected plant materials for batches 1, 2, and 3 were identified and authenticated taxonomically through the standard pharmacognostical tools; this included correct taxonomic identification of the plant species as well as macroscopic and microscopic characterization of the official parts. Macroscopy Five raw drugs were subjected to macroscopical examination for the identification of shape, size, color, texture, and organoleptic characters as per the pharmacopoeial standard method. Tubers of Asparagus racemosus Willd. are borne in a compact bunch, fleshy, and spindle-shaped. They are silvery white or light ash-colored externally and white internally, more or less smooth when fresh, with longitudinal wrinkles when dry. They have no well-marked odor, but a sweet, slightly bitter taste (Fig. 2A). The whole plant of Eclipta alba (L.) Hassk. is a herbaceous annual, 30-50 cm high, erect, and branched. It has well-developed roots associated with a number of secondary branches arising from the main root. The stem is herbaceous, branched, and cylindrical. Leaves of the plant are 2.2-8.5 cm long and 1.2-2.3 cm wide (Fig. 2B). Seeds of Trigonella foenum-graecum L. are oblong, 0.2-0.5 cm long, and 0.15-0.35 cm broad. The seeds become mucilaginous when soaked in water, the odor is pleasant, and the taste is bitter (Fig. 2C). Seeds of Foeniculum vulgare Mill. are about 6 mm long, green, and beaked (Fig. 2D). Fruits of Anethum sowa Roxb. ex Fleming are dark brown, often with the stalk attached, broadly oval, and compressed dorsally. Mericarps are 4 mm long, 2-3 mm broad and 1 mm thick, glabrous, traversed from the base to apex; the odor is faintly aromatic and warm, with a slightly sharp taste (Fig. 2E). Microscopy The microscopic analysis of the crude drugs of PHF included the examination of the size, shape, and relative position of different cells and tissues as well as the chemical nature of the cell walls, and also incorporated the form and nature of the cell contents. The tuber of Asparagus racemosus Willd. has three anatomical zones: periderm, cortex/stele, and pith. The cells are thin-walled, submersed, radially oblique, and oblong (Fig. 3A). The mature root of Eclipta alba (L.) Hassk.
shows poorly developed cork, consisting of three to five rows of thin-walled, tangentially elongated cells, while the secondary cortex consists of an outer one or two rows of tangentially elongated or rounded cells with air cavities and an inner secondary cortex of tangentially elongated to irregularly shaped, parenchymatous cells with conspicuous air cavities (Fig. 3B). The transverse section of the leaf through the midrib shows both upper and lower single-layered epidermis, externally covered with cuticle, and a few epidermal cells elongated outward to form uniseriate hairs; the epidermis is followed by the cortex (Fig. 3C). The mature stem has a single-layered epidermis, externally covered with cuticle; a few epidermal cells elongate to form characteristic nonglandular trichomes; the cork, where formed, is poorly developed, consisting of rectangular cells; the secondary cortex is composed of large, rounded, or irregularly shaped parenchymatous cells (Fig. 3D). The seed of Trigonella foenum-graecum L. showed a layer of thick-walled, columnar palisade cells, covered externally with a thick cuticle; the cells are flat at the base, mostly pointed but a few flattened at the apex, supported internally by tangentially wide bearer cells having radial rib-like thickenings, followed by four to five layers of tangentially elongated, thin-walled, parenchymatous cells (Fig. 3E). Transverse sections of fruits of Foeniculum vulgare Mill. show a pericarp with an outer epidermis of quadrangular to polygonal cells with smooth cuticle and a few stomata (Fig. 3F). The pericarp of Anethum sowa Roxb. ex Fleming fruits shows an epidermis of polygonal tabular cells having a thick outer wall and striated cuticle; the mesocarp is parenchymatous, with some cells lignified and showing reticulate thickening; the endocarp consists of tabular cells. Organoleptic Studies Determination of the organoleptic characters of PHF batches 1, 2, and 3 indicated that they appeared semisolid in nature, blackish-brown in color, with a pleasant odor and a slightly bitter taste. Chemical Standardization of PHF The physicochemical study of all three batches of PHF, as presented in Table 1, indicated that total ash and acid-insoluble ash were in the range of 7.28-7.35% (w/w) and 0.28-0.36% (w/w), respectively. The pH (10% aqueous solution) value was observed in the range of 5.48-5.76. The presence of almost similar secondary metabolites in all the batches of PHF indicated that the three batches were in close proximity. Analysis of batches 1, 2, and 3 for safety parameters like heavy metals (Pb, Cd, As, Hg), microbial contamination, pesticide residues, and aflatoxins, as given in Table 2, indicates that all the batches of the formulation are safe and comply with the WHO and API guidelines. Phytochemical screening confirmed the presence of various categories of phytocompounds, viz., tannins, steroids, flavonoids, alkaloids, coumarins, quinones, saponins, carbohydrates, phenolics, furanoids, and amino acids in batches 1, 2, and 3 of PHF, as shown in Table 3. DISCUSSION Plant-based galactagogues are emerging as substitutes for synthetic hormonal milk enhancers to compensate for surging milk demands on a global scale. Phytogalactagogues are in high demand because they do not produce any side effects, enhance milk secretion efficiently, and work as highly nutritive supplements with immense therapeutic potential. Hence, standardization of PHF is a vital step that helps to establish uniformity, international acceptability, effective sale, and
upgrade the quality level of Ayurvedic galactagogues. The majority of commercialized herbal galactagogues have not been thoroughly standardized as per pharmacopoeial standards or evaluated scientifically. Therefore, standardization of polyherbal galactagogue preparations, along with their raw ingredients, is essential in order to maintain quality and efficacy and to optimize milk productivity in dairy cattle for better milk profits. 15,45 In this context, detection of microbial contamination, adulteration of plant materials with foreign matter, and toxicity of herbal extracts 38,46 were reported to be important segments of chemical standardization as far as the quality, efficacy, and safety of herbal galactagogues are concerned. The present work scientifically supports various forms of indigenous knowledge on botanicals applied as Ayurvedic galactagogues. 21,47 The various traditional claims and previous phytogalactagogue research are utilized in this work for the development of novel PHFs in veterinary pharmaceuticals as well as dairy farming, with the fixed aim of inducing milk secretion in dairy cattle and other mammals of interest. 48 This study proposes its novelty in terms of the pharmacognostical and chemical examination of three batches of the five raw drugs and their hydroalcoholic extracts, along with the PHFs, for their standardization for the purpose of augmenting milk production in healthy dairy cows. The study first applied standard botanical and pharmacognostical tools for the purpose of identifying and authenticating the plant materials. Assessment of sensory, macroscopic, as well as microscopic attributes is the initial part of the examination to authenticate the identity and purity of plant drugs before further analysis is undertaken, as per the WHO recommendations. 49,50 Powder microscopy through histological characterization of the five raw drugs of PHF is mandatory for the correct and clear identification of powdered drug samples because it provides diagnostic parameters. 51 It is useful for observation of the internal structure, constitution, and inclusions of plant cells. 51 It is essential for the detection of contaminants of the formulation and useful for establishing the authenticity and quality of the materials used in PHF. The results of botanical standardization showed that all three batches of the individual plant ingredients showed similarity in their botanical characteristics. Macro-morphological and microscopic profiles were established for rapid identification and as quality control parameters of the different parts of the raw drug ingredients of PHF. The tested organoleptic characters of the crude drugs may serve as characteristics for initial identification. Raw ingredients, their mixture, and their hydroalcoholic extracts (50:50), along with PHF, were qualitatively as well as quantitatively evaluated through physicochemical parameters. Chemical standardization was carried out to authenticate the three batches of the in-house preparation. Phytochemistry plays a prime role in the authentication, identification, and quality evaluation of raw drugs along with PHF. Phytoestrogens, steroidal saponins, and glycosidic molecules have been reported to produce estrogenic and milk-stimulating actions. Therefore, phytochemical screening is foremost in confirming the presence of such categories of phytocompounds and may become one of the bases for screening and selecting botanicals for galactagogue properties.
Due to this reason, phytochemical screening were carried out to detect the presence of different secondary metabolites such as alkaloids, flavonoids, saponins, tannins, steroids, etc. In this subject, some of the bioactive markers reported to have high milk-yielding properties were isolated and identified, namely, Diosgeni, Shatavarine (I-IV), Anethole and Estragole in Trigonella foenum-graecum L., Asparagus racemosus Willd., Foeniculum vulgare Mill., and Pimpinella anisum L., respectively, as botanical galactagogues. Such investigations contributed to explore the use of the botanicals as herbal galactagogues as well as the applications in the development of Ayurvedic milk-yielding formulations for veterinary, dairy, and human consumptions. Phytochemical studies in Anethum sowa Roxb. ex Fleming detected tannin, flavonoids, coumarins, phenol, alkaloids, carbohydrates, and saponin in all the three batches. While steroids, acid, quinine, amino acid, protein, and furanoids were absent in batches 1, 2, and 3. Eclipta alba (L.) Hassk., known for its curative properties, is used as analgesic, antibacterial, antihepatotoxic, antihemorrhagic, antihyperglycemic, antioxidant, and immunomodulator agent and also considered a good rejuvenator. Most of the phytoconstituents like tannins, flavonoids, coumarins, saponin, and phenol were confirmed in all the three batches of E. alba (L.). Steroids, alkaloid, quinines, carbohydrates, protein, amino acids, and furanoids were not observed in all the batches of Eclipta alba (L.) Hassk. In this sequence, another chief constituent of PHF, i.e., Asparagus racemosus Willd. that is known for the galactagogue properties beside it, has been recognized for the treatment of impotency and as a rejuvenative tonic for females. Moreover, it showed antibacterial, antioxidant, and anxiolytic properties. It was also studied for increasing milk secretion and yield. For this Asparagus racemosus Willd. was tested phytochemically, which revealed presence of tannin, steroids, flavonoids (by LET), coumarins, quinone, saponin, carbohydrate, phenol (by LET), amino acid, proteins, and furanoids in the tubers of Asparagus racemosus Willd. Quinine, amino acid, and protein were found in low concentrations in the tuber of Asparagus racemosus Willd. The important phytochemicals, namely, flavonoids and alkaloids that chiefly induce milk secretion, were not found in its three batches of PHF. Similarly, in Foeniculum vulgare Mill., all the phytochemicals as shown in Table 3 were present. Likewise, Trigonella foenum-graecum L. was tested positively for tannin, flavonoids, alkaloid, coumarins, saponin, carbohydrate phenol, amino acid, protein, and furanoids in all the three batches. The quantitative tests method, i.e., physicochemical analysis, is an essential part of standardization to set the pharmacopoeial standards of PHF. Almost similar results of all the three batches for these parameters indicate that all the three batches of PHF are found in close proximity. Result of safety parameter of PHF showed that all the batches are safe as per the WHO and API guidelines. Moreover, the outcomes of aflatoxins, microbial load, and pesticidal residue analysis help to draw an inference that PHF with its ingredients are free from chemicals, toxins, and safe for consumption. 
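To make this kind of compliance bookkeeping concrete, a minimal Python sketch is given below. The limit values are illustrative placeholders rather than the actual API specifications, while the batch readings echo the ranges reported above; the loss-on-drying figures are invented.

```python
# Illustrative batch-compliance check against pharmacopoeial-style limits.
# Limit values are hypothetical placeholders, not the official API limits.

limits = {
    "total_ash_pct":          ("max", 10.0),
    "acid_insoluble_ash_pct": ("max", 1.0),
    "ph_10pct_aq":            ("range", (4.5, 7.0)),
    "loss_on_drying_pct":     ("max", 12.0),
}

batches = {
    "batch_1": {"total_ash_pct": 7.28, "acid_insoluble_ash_pct": 0.30,
                "ph_10pct_aq": 5.55, "loss_on_drying_pct": 8.1},
    "batch_2": {"total_ash_pct": 7.35, "acid_insoluble_ash_pct": 0.36,
                "ph_10pct_aq": 5.48, "loss_on_drying_pct": 8.4},
}

def complies(value, rule):
    """Return True if a measured value satisfies a 'max' or 'range' rule."""
    kind, spec = rule
    if kind == "max":
        return value <= spec
    low, high = spec
    return low <= value <= high

for name, readings in batches.items():
    failures = [p for p, v in readings.items() if not complies(v, limits[p])]
    status = "complies" if not failures else f"out of limit: {failures}"
    print(f"{name}: {status}")
```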
Results of phytochemical screening of PHF with its ingredients shown in Table 1 revealed the presence of various secondary metabolites that are responsible for galactagogue function, medicinal and nutritive properties as literature per the past studies. It also showed that tannins, coumarins, saponins, carbohydrates, phenolics, and furanoids were present in higher concentration. The study confirmed the presence of analogous phytoconstituents in its five raw ingredients, which proved the presence of same raw ingredients in PHF 1, 2, and 3. The generated data in the form of physicochemical results revealed that the raw ingredients complied with limits of the Ayurvedic Pharmacopoeia of India (API) and up to the mark as far as quality is concern. The chromatographic analysis was conducted in the form of the HTPLC fingerprinting profile under optimized chromatographic conditions for the identification and authentication of in-house phytogalactagouge preparation. This study established reliable HPTLC fingerprint profile at different illuminations with prominent bands at specific R f as presented in Figure 1, which represents the active constituents of five different herbs in the form of distinct bands that control milk-inducing function in lactating cattle. Fingerprint patterns at different illuminations with prominent bands and characteristics R f may set as the quality standard of developed PHF, HPTLC profiles revealed a uniform chemical pattern in its three batches. The HPTLC fingerprints are suitable for rapid and simple authentication and comparison of three batches of PHF, which indicated that all the three batches are qualitatively similar in respect of phytoconstituents. The uniformity in composition of batches 1, 2, and 3 was determined through identification of similar band patterns and almost similar R f values. Whereas the results of safety parameters indicate that all the three batches of PHF are safe as per the WHO and API guidelines. Results of the organoleptic, physicochemical, phytochemical, and HPTLC evaluations showed that all the three batches of this formulation are equivalent in respect of composition and quality. This work may be utilized as a base study for further preclinical and clinical trials in different animal models specifically dairy cattle. The outcomes in the form of scientific data could help to validate the traditional belief that some Ayurvedic herbs are potent to improve milk production in lactating cattle after the preclinical studies. The application of present work may be fruitful for healthy practices in veterinary and livestock, specifically to fulfill the massive dairy production demands. conclusIon The present study developed and standardized new PHF, a galactagogue in different batches to establish the quality medicine for the purpose of augmenting milk production in healthy dairy cows. This Ayurvedic veterinary medication may not only be helpful to compensate the massive demands of milk in India but also have the therapeutic potentials to keep the cattle healthy and productive. The study could be utilized as a base model for further evaluating safety, toxicity, and clinical trials on dairy animals. Data may be useful for the standardization and routine quality control of different pharmaceuticals batches and other commercial veterinary as well as human galactagogues. 
This Ayurvedic formulation may work as an alternative for hormone replacement therapy (HRT) or exogenous estrogens; hence, it is safer and effective in the dairy economics and livestock management.
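As a closing illustration of the HPTLC batch-uniformity comparison described in the Results, the short sketch below matches band Rf values between batches within a small tolerance. The Rf values themselves are invented, since the actual fingerprint data are shown only in Figure 1.

```python
# Hypothetical Rf band lists for three PHF batches; values are illustrative only.
batches = {
    "batch_1": [0.12, 0.27, 0.41, 0.58, 0.73],
    "batch_2": [0.13, 0.26, 0.42, 0.57, 0.74],
    "batch_3": [0.12, 0.27, 0.40, 0.58, 0.72],
}
TOL = 0.02  # Rf agreement tolerance between batches

reference = batches["batch_1"]

def matches(rf, bands, tol=TOL):
    """True if an Rf value agrees with any reference band within the tolerance."""
    return any(abs(rf - b) <= tol for b in bands)

for name, bands in batches.items():
    matched = sum(matches(rf, reference) for rf in bands)
    print(f"{name}: {matched}/{len(bands)} bands match batch_1 within +/-{TOL} Rf")
```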
2020-04-16T09:15:26.309Z
2019-06-01T00:00:00.000
{ "year": 2019, "sha1": "efd0ceebccf5953daa6b90950c16a97e35352e53", "oa_license": null, "oa_url": "https://doi.org/10.5005/jdras-10059-0064", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "0b7099489705dfc853845cd1ac7a235a715fa3cf", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
29283202
pes2o/s2orc
v3-fos-license
Differentially Regulated Expression of Endogenous RGS4 and RGS7* Regulators of G protein signaling (RGS proteins) constitute a family of newly appreciated components of G protein-mediated signal transduction. With few exceptions, most information available on mammalian RGS proteins was gained by transfection/overexpression or in vitro experiments, with relatively little known about the endogenous counterparts. Transfection studies, typically of tagged RGS proteins, have been conducted to overcome the low natural abundance of endogenous RGS proteins. Because transfection studies can lead to imprecise or erroneous conclusions, we have developed antibodies of high specificity and sensitivity to focus study on endogenous proteins. Expression of both RGS4 and RGS7 was detected in rat brain tissue and cultured PC12 and AtT-20 cells. Endogenous RGS4 presented as a single 27–28-kDa protein. By contrast, cultured cells transfected with a plasmid encoding RGS4 expressed two observable forms of the protein, apparently due to utilization of distinct sites of initiation of protein synthesis. Subcellular localization of endogenous RGS4 revealed predominant association with membrane fractions, rather than with cytosolic fractions, where most heterologously expressed RGS4 has been found. Endogenous levels of RGS7 exceeded RGS4 by 30–40-fold, and studies of cultured cells revealed regulatory differences between the two proteins. We observed that RGS4 mRNA and protein were concomitantly augmented with increased cell density and decreased by exposure of PC12M cells to nerve growth factor, whereas RGS7 was unaffected. Endogenous RGS7 was relatively stable, whereas proteolysis of endogenous RGS4 was a strong determinant of its lower level expression and short half-life. Although we searched without finding evidence for regulation of RGS4 proteolysis, the possibility remains that alterations in the degradation of this protein could provide a means to promptly alter patterns of signal transduction. G proteins transduce signals across the plasma membrane by sequential interactions with cell surface receptors and appropriate second messenger-producing effectors (e.g. enzymes and ion channels). These interactions are modulated by nucleotide-driven conformational changes in the α subunits of heterotrimeric G proteins (Gα). 1 A ligand-bound receptor catalyzes the exchange of GDP for GTP on its cognate Gα and the dissociation of Gα from the complex of G protein β and γ subunits (Gβγ). These dissociated subunits are competent to modulate the activity of effectors. The duration of G protein-mediated responses is dependent on the intrinsic GTPase rate of Gα and on extrinsic factors, such as regulators of G protein signaling (RGS proteins). RGS proteins serve to regulate G protein signaling by functioning as GTPase-activating proteins (GAPs). GAP activity can sharpen the termination of a signal upon removal of a stimulus, attenuate a signal either as a feedback inhibitor or in response to a second input, promote regulatory association of other proteins, or redirect signaling within a G protein signaling network (reviewed in Ref. 1). RGS proteins are related by a conserved RGS domain that is composed of ~130 amino acid residues. The RGS domain alone is capable of binding Gα and accelerating GTP hydrolysis, although other domains contribute to affinity and/or selectivity for G protein targets (2,3).
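The GAP function described above amounts to a change in the first-order deactivation rate of the GTP-bound Gα pool. The short sketch below illustrates how an RGS-accelerated hydrolysis rate sharpens signal termination after stimulus removal; the rate constant and fold-acceleration are invented for illustration and are not measurements from this study.

```python
# Illustrative first-order decay of the active (GTP-bound) Galpha pool after
# agonist removal, with and without RGS GAP acceleration. Rate constants are
# hypothetical and chosen only to show the qualitative effect.
import math

k_intrinsic = 0.05   # s^-1, intrinsic GTP hydrolysis rate (assumed)
gap_fold    = 50.0   # fold acceleration by an RGS GAP (assumed)

def active_fraction(t, k):
    """Fraction of Galpha still GTP-bound t seconds after stimulus removal."""
    return math.exp(-k * t)

for t in (1, 5, 10, 30):
    basal = active_fraction(t, k_intrinsic)
    gapped = active_fraction(t, k_intrinsic * gap_fold)
    print(f"t={t:>2}s  without RGS: {basal:.3f}   with RGS GAP: {gapped:.3e}")
```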
Mammalian RGS proteins, of which Ͼ20 are now known, can be grouped into five subfamilies based on sequence similarity (R4, R7, R12, RA, and RZ) (4). Although several members of the RGS family are relatively simple ϳ25-kDa proteins that contain short amino and carboxyl sequences flanking the characteristic RGS domain (such as RGS4), others include more substantial modules that impart other functions. The R7 subfamily is characterized by possession of so-called DEP (disheveled, EGL-10, pleckstrin) and GGL (G protein gamma subunit-like) domains. While not well established, the DEP domain may play a role in directing G␣ subunit specificity for the RGS domain (2). The GGL domain apparently specifies an obligate interaction of an RGS protein with the G protein ␤ 5 subunit (5,6). RGS4 has the capacity to accelerate in vitro GTPase activity of G␣ i subfamily (including G␣ i , G␣ o , and G␣ z ) and G␣ q subfamily members but not G␣ s or G␣ 12 subfamily members. By contrast, GGL-containing RGS proteins exhibit specificity for G␣ o and G␣ t (7,8). Much of the currently available information on mammalian RGS proteins was gained by transfection/overexpression or in vitro experiments, with little known about the endogenous counterparts (particularly for the RGS4 subfamily). Because such studies can lead to imprecise or erroneous conclusions, caused by problems such as mislocalization and/or loss of substrate specificity, we have focused study on endogenous proteins. A case in point is the apparent difference in selectivity of RGS2 (an R4 family member) for G␣ i and G␣ q when different transfection systems have been utilized (9,10). Although the mRNA for RGS4 is relatively abundant in brain, detection of the protein only recently has been reported for this tissue (11). Ubiquitylation and proteasomal degradation may maintain the RGS4 protein at very low levels (12) despite the expression of substantial levels of mRNA. To address localization, regulation, and quantification of endogenous RGS proteins, we have developed antibodies with appropriate specificity and sufficient sensitivity to detect endogenous RGS4 or RGS7. Herein we compare and contrast these two proteins with one another and reveal differences between endogenous and heterologously overexpressed RGS4. DNA-cDNA for bacterial expression of untagged RGS4 short (initiation site Met-19) was produced by PCR using pQE60-H6-RGS4 (long) (13) as a template. The upstream primer included an appended BspHI restriction site (underlined): 5Ј-AGATCGATGAAACATCGGCTGGGA-TTTC-3Ј. The downstream primer was annealed to a site 3Ј of the RGS4 termination site within the pQE bacterial expression plasmid (5Ј-TCA-ACAGGAGTCCAAGCTCAGC-3Ј). The PCR product was digested with BspHI and BamHI and subcloned into a compatible NcoI and BamHIdigested pQE60 vector (Qiagen). The newly formed pQE60RGS4short vector also served as the source for subcloning RGS4 short into the mammalian expression vector, pCMV5 (14,15), following EcoRI and BamHI digestion. Proteins-Recombinant RGS4 was produced by transformation of Escherichia coli strain, BL21(DE3), with pQE60RGS4 or pQE60-RGS4short. The transformed bacteria were allowed to grow at 37°C until A 600 of ϳ1.0, and expression was induced by 100 M isopropyl-␤-D-1-thiogalactopyranoside (Roche Applied Science) for 4 h. Cells were harvested, flash-frozen in liquid nitrogen, and stored at Ϫ80°C until lysis, as described (13). 
Nontagged RGS4 protein (short or full-length) expressed to significantly higher levels than histidine-tagged RGS4 and formed the predominant protein band in the supernatant fraction from high speed centrifugation of lysates (that were resolved by SDS-PAGE and visualized by Coomassie Blue stain). A sample of lysate containing RGS4 short and SDS-PAGE sample buffer served as a gel migration standard. Full-length, nontagged RGS4 was purified from the supernatant fraction by successive Mono Q and phenyl-Sepharose columns (Amersham Biosciences) essentially as described for histidine-tagged RGS4 (13). Antibodies-Untagged recombinant RGS4 (full-length, Ͼ95% purity) from E. coli was injected intradermally into a New Zealand White rabbit for production of an antiserum designated U1079. 150 g of protein was divided among multiple sites on the back for the initial injection and each of the subsequent three boosts over a period of 6 months. Crude antiserum was employed for Western immunoblotting. Antibodies to RGS7 (designated U1480) were produced from rabbits injected subcutaneously with the peptide (C)TSKSLTSLVQSY (synthesized at the Biopolymer Core Facility, University of Texas Southwestern Medical Center), corresponding to amino acids 458 -469 of mouse RGS7. The additional Cys residue (shown in parentheses) was appended for conjugation to the carrier protein, keyhole limpet hemocyanin (16). Specific antibodies were affinity-purified from the crude antiserum by binding to the peptide immobilized on Sepharose (17). A similar strategy was employed to produce antiserum R-381 against a synthetic peptide representing the 16 carboxyl-terminal amino acids of human GAIP (RGS19): (C)YRALLLQGPSQSSSEA. An antiserum to the carboxyl terminus of G␣ i isoforms 1 and 2 has been described (18). Mammalian Cell Culture, Transfection, Fractionation-PC12M (rat pheochromocytoma), AtT20 (human pituitary tumor), and COS-M6 (simian kidney) cells were obtained from the laboratories of Drs. Paul C. Sternweis, Elliott M. Ross, and Joseph Goldstein, respectively (all of the University of Texas Southwestern Medical Center). Cells were cultured in Dulbecco's modified Eagle's medium with high glucose supplemented with 10% fetal calf serum (Invitrogen) and an atmosphere of 10% CO 2. A stably transfected line of human embryo kidney cells (HEK293) was derived as described (19). Gene silencing of RGS4 and RGS7 in PC12M cells was accomplished by transient transfection of cells (80 -95% confluent) with Lipo-fectAMINE 2000 (2.8 g/ml; Invitrogen) and plasmid (0.33-1 g/ml) and/or short interfering RNA (siRNA; 100 nM) according to the manufacturer's instructions. The sequences of the sense strands of the siRNA duplexes used for targeting RGS4 and RGS7 were CCGUCGUUUCC-UCAAGUCUdTdT and GCAGAGGAAUCACCGAACAdTdT, respectively. The targeted regions correspond to nucleotides 494 -512 and 42-60 of the respective open reading frames. RNA oligonucleotides were synthesized and deprotected at the RNA Oligonucleotide Synthesis Core (Center for Biomedical Inventions at the University of Texas Southwestern Medical Center). Cells transfected with siRNA duplexes were harvested 48 h post-transfection. Tissue Preparations-Tissue samples of various regions of the rat brain were prepared from male Sprague-Dawley rats (200 -350 g; Charles River). Rats were decapitated, and the brains removed from the skull were chilled for 1 min in phosphate-buffered saline. Coronal 1-mm slabs were obtained with an acrylic brain matrix (Ted Pella, Inc.). 
Needle punches of dorsolateral striatum, cerebellum, ventrobasal thalamus (all 12-gauge), or parietal neocortex (14-gauge) were transferred to a Microfuge (Beckman) tube, rapidly frozen on dry ice, and stored at Ϫ80°C until use. Dorsal hippocampal samples were obtained identically, except that they were microdissected from the slab. The various tissue sections were routinely solubilized by sonication in buffer containing 1.0% SDS and protease inhibitors (from Sigma, unless otherwise noted): lima bean trypsin inhibitor (10 g/ml), leupeptin (10 g/ ml), phenylmethylsulfonyl fluoride (15 g/ml), L-1-p-tosylamino-2phenylethyl chloromethyl ketone (15 g/ml), (3S)-7-amino-1-chloro-3tosylamino-2-heptanone hydrochloride (15 g/ml), and MG-132 (10 M; Calbiochem). Samples were boiled immediately for 3 min, aliquots were removed for protein determination, and the remainder of the lysate was rapidly frozen on dry ice and stored at Ϫ80°C until further use. Protein Determination and Western Blots-For samples prepared with SDS-PAGE sample buffer, protein concentration was determined by Amido Black (20) or by the Lowry method (21). Fractionation samples did not contain detergent and were assayed using Bradford reagent (Bio-Rad) (22). Bovine serum albumin served as standard in all assays. Except where noted, equal masses of total protein were processed by Western immunoblotting. cDNA Preparation and PCR-Total RNA was isolated from PC12 cells using Trizol reagent (Invitrogen). The RNA was primed using random hexamers and oligo(dT) and translated into cDNA using Maloney murine leukemia virus reverse transcriptase. RGS4 (sense primer, CAGCAAGAAGGACAAAAGTAG; antisense primer, GCAGCT-GGAAGGATTGGTCA) was detected by PCR using 92°C/1 min of denaturing, 54°C/1 min of annealing, and 72°C/3 min of extension for 35 cycles. Each reaction was separated on a 1% agarose gel, and DNA products were detected with ethidium bromide. RGS4 appeared as a 430-base pair single band. RNase Protection Assay-Assays were conducted using the RPA II-I TM ribonuclease protection assay kit (catalog no. 1414) essentially as described by the manufacturer (Ambion). Briefly, a DNA template for RGS4, containing an appended T7 polymerase binding site was generated by PCR using the following primer set: sense, 5Ј-gtcaagaaatgggctgaatcg; antisense, 5Ј-gctaatacgactcactatagg-(N) 20 -gaatcgagacttgaggaaacg, where the core T7 polymerase binding site is underlined, and N represents a random nucleotide. (The amplified fragment corresponds to nucleotides 166 -519 of the RGS4 open reading frame). The template used to generate the probe for the internal standard, cyclophilin, was supplied as a linearized plasmid pTRI-cyclophilin (catalog no. 7794; Ambion). Labeled antisense probes for RGS4 and cyclophilin mRNAs were generated using T7 polymerase (Maxiscript TM ; catalog no. 1312; Ambion) and inclusion of 3 M [␣-32 P]CTP (800 Ci/mmol, 10 mCi/ml) in the polymerase reaction. The specific activity of the [␣-32 P]CTP was reduced 10-fold for the cyclophilin probe. A mixture of the probes was allowed to hybridize with 10 g of total RNA (isolated using RNAqueous-4PCR, catalog no. 1914; Ambion). The hybridized mixture was subject to RNase A and T1 digestion, and protected fragments (corresponding to nucleotide lengths of 353 and 105, respectively, for RGS4 and cyclophilin) were separated on a 5% acrylamide, 8 M urea denaturing gel. Dried gels were exposed to phosphorimaging screens overnight. 
Screens were developed by a phosphor imager (Fuji), and data were analyzed using MacBAS imaging software. RESULTS Heterologously Overexpressed RGS4: Alternative Initiation of Translation-COS cells transfected with a full-length cDNA for RGS4 expressed a protein that migrated more rapidly on denaturing SDS-PAGE gels than RGS4 protein purified from E. coli (Fig. 1A (lanes 3 and 4) versus the full-length standard (lane 6)). This expression of an apparently shorter than expected form of RGS4 was not cell type-specific, because it was also observed in transfected human embryonic kidney 293 cells and murine Neuro 2A cells (data not shown). We and others (12) noted that the nucleotide sequence surrounding the portion encoding the second methionine at position 19 formed a putative (or alternative) translational start site (26) and thus could explain the production of the short form of RGS4 in transfected mammalian cells. For this reason, RGS4 cDNAs, lacking the portion encoding the first 18 amino acids, were constructed for expression in mammalian cells and E. coli (RGS4 short). The RGS4 short purified from E. coli (Fig. 1A, lane 5) co-migrated with the RGS4 protein expressed in COS cells transfected with the full-length or short RGS4 cDNA (lanes 1-4). Epitope tags preceding or succeeding full-length RGS4 resulted in the expression of a longer form of the protein in COS cells (lanes 7 and 8). These results suggest that the heterologously overexpressed, untagged RGS4, which we can detect, is predominantly initiated at the methionine at position 19 (of the full-length RGS4). Subcellular Localization of RGS4 -RGS4 heterologously overexpressed in HEK 293 cells was found predominantly in the soluble fraction. This was the case whether RGS4 was N-terminally tagged (Fig. 1B) or C-terminally tagged, or untagged (not shown). Whereas this observation runs counter to the localization of G protein signaling at the plasma membrane, it is consistent with other reports (27,28). To confirm whether endogenous RGS4 demonstrated a similar subcellular distribution pattern, we required highly sensitive and specific antibodies. Antiserum U1079 was generated against fulllength recombinant RGS4 purified from E. coli. Given the similarity of the RGS domain among RGS subtypes, it was possible that such an antiserum would exhibit cross-reactivity with other RGS family members. However, Fig. 2A shows the remarkable specificity of U1079 for RGS4 by Western immunoblotting. Antibodies were developed with specificity for RGS7 in order to compare diverse RGS proteins and to provide a positive control for expression (as endogenous RGS7 expression has been previously identified in tissue and cultured cells (29 -32)). Affinity-purified U1480 antibodies were generated against a unique peptide sequence of RGS7, and, as anticipated, the antibodies were specific for this RGS protein (Fig. 2B). Initially, we experienced difficulty detecting endogenous RGS4 by immunoblotting (of COS, murine Neuro 2A neuroblastoma, and NG108 neuroblastoma/glioma cells). To guide our search for cell types that might express the most RGS4 protein, a PCR-based screen was performed to "semiquantitatively" examine the level of RGS4 mRNA in various cell types. Strong signals were obtained for rat PC12M and human AtT-20 cells (but little or no signal was produced from murine Neuro 2A neuroblastoma, rat pituitary GH3, rat RBL-2H3, rat C6 glioma, Chinese hamster ovary, or NG108 neuroblastoma/glioma cells; data not shown). 
In correlation with the PCR results, Western blots of PC12M and AtT20 cells revealed detectable immunoreactive bands consistent with RGS4 expression. One factor that contributed to our early difficulty in detection of endogenous RGS4 was the dependence of expression on cell density. We discovered that confluent cultures of PC12M (Fig. 3, A and B) or AtT20 cells (not shown) consistently expressed greater levels of RGS4 (per unit of total cell protein) than did cultures harvested at lower cell densities (Fig. 3A). By contrast, the amount of G␣ i detected was unaffected by cell density (Fig. 3B). The level of expression of each of these proteins was largely unaffected by coating of the substrata with poly-L-lysine or the laminin (Fig. 3, A and B), which promote adhesion and/or differentiation of PC12 cells in culture. Further examination for the cause of the cell densitydependent increases in RGS4 protein revealed that subconfluent and confluent cells revealed no significant difference between the percentages of cells occupying the various stages of the cell cycle as determined by fluorescence-activated cell sorting. The percentages of cells in G 2 ϩ S phases of the cell cycle for confluent and subconfluent cultures of PC12M cells were 29 Ϯ 5 and 27 Ϯ 7%, respectively (mean Ϯ S.D.). However, RNase protection assays indicated that the relative mRNA levels for RGS4 were increased for confluent cells compared with subconfluent cells (Fig. 3C), suggesting that the regulation of cell density-dependent expression of RGS4 lies upstream of translation. To verify that the prominent immunoreactive band on U1079-probed blots was indeed RGS4, PC12M cells were transfected with siRNA duplexes targeted to RGS4 or RGS7 (Fig. 3D). Effectiveness and fidelity of the siRNA duplexes was initially examined by co-transfecting the siRNAs with a plasmid constitutively expressing green fluorescent protein fused to the N terminus of RGS4. Western blots of transfected PC12M lysates revealed that silencing of green fluorescent protein-RGS4 was complete with RGS4 siRNA and minimal with RGS7 siRNA. A similar pattern of silencing was observed for endogenous RGS4. The siRNA directed against RGS7 caused partial reduction of endogenous RGS7 protein expression but was without effect on endogenous RGS4. This experiment demonstrated that the siRNA oligonucleotides were RGS-specific and confirmed the identity of the RGS4 band detected by Western blotting with antiserum U1079. Once it was clear that endogenous RGS4 could be reliably identified by Western blot, we examined the subcellular localization of endogenous RGS4. PC12M cells were fractionated by differential centrifugation for separation of nuclear (1000 ϫ g; P1) and membrane (200,000 ϫ g; P200) pellets plus cytosolic soluble proteins (S200). Unlike heterologously overexpressed RGS4 (Fig. 1B), endogenous RGS4 was found mostly in the pellet fractions of PC12M cells, including the 200,000 ϫ g pellet, where membranes are expected to be located (Fig. 3E). The presence of RGS4 and G␣ i in the 1000 ϫ g (low speed) pellet fractions may, in part, be accounted for by some plasma membrane sheets that became trapped with nuclei and other relatively large subcellular particles. Another possibility is that some of the RGS4 associated with the low speed pellet reflects nuclear localization of the protein as has been reported for heterologously expressed, tagged forms of the protein (28,33). 
Endogenous Levels of RGS4 and -7 in Rat Brain and PC12 Cells-In the absence of specific antibodies against endogenous proteins, many researchers have relied on in situ hybridization to qualitatively predict the level of expressed protein. The distribution of mRNA for various RGS proteins in brain has been assessed by in situ hybridization (34). We surveyed brain regions for expression of the protein to learn whether it correlated with messenger RNA abundance. Regions of brain were dissected and frozen before being extracted with SDS-and protease inhibitor-containing buffer. Equal amounts of total protein from each region were analyzed for the presence of RGS4 and RGS7 by Western immunoblotting. RGS4 protein was detected in cortex, caudoputamen, and thalamus with lower levels in hippocampus and cerebellum (Fig. 4). RGS7 was distributed similarly to RGS4 except that it was well represented in cerebellum. Of note, these relative levels of protein detected by Western blot correlated well with the reported differences for mRNA (34). We employed Western immunoblotting with purified RGS standards to estimate the endogenous expression of RGS4 and RGS7 proteins in duplicate independent samples of confluent PC12M cells and frontal cortex from rat. We detected ϳ3 pg of RGS4 per g of total protein from PC12M cells and only 1 pg per g of total protein from frontal cortex. RGS7, on the other hand, was 30 -40-fold more abundant: 40 pg/g PC12M protein and 30 pg/g frontal cortex protein. Recently, Muma et al. (11) monitored RGS4 in human brain samples by immunoblotting with an RGS4 antibody from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA) (at 1:5000 -10,000 dilution), but no standards for the amount or migration of RGS4 were shown. The Santa Cruz Biotechnology catalog shows migration of a doublet from mouse and rat brain that is Ͼ34 kDa, which leads us to question the specificity of the antibody, at least for use with mouse or rat tissues. We calculate the mass of RGS4 to be 23.2 kDa (from sequence data) and estimate size as a singlet of 27-28 kDa in Western blots of PC12 cells using our antiserum and molecular weight standards. We tested the antibody from Santa Cruz Biotechnology (1:200 dilution) with known amounts of recombinant human RGS4 standard and found it to be sensitive to 10 ng, whereas our antiserum, at 1:2000 dilution, was sensitive to less than 0.01 ng of RGS4 (after a 1-min exposure of chemiluminescent blot to film). Thus, we estimate our antiserum to be about 10,000-fold more sensitive than the Santa Cruz Biotechnology preparation that we tested. In total, the available data suggest that either RGS4 is expressed to considerably greater levels in human brain (compared with mouse and rat brain) or that the Santa Cruz Biotechnology antiserum is unable to detect bona fide endogenous RGS4. Stability of Endogenous RGS4 and 7-Both RGS4 and RGS7 have been reported to be susceptible to degradation by the proteasome pathway (12,35). Accordingly, the Western blot signal for RGS4 was increased significantly when PC12M or AtT20 cells were exposed to a proteasomal inhibitor, MG132 or lactacystin (Fig. 5, A and B). By contrast, these inhibitors had no effect on the endogenous levels of G␣ i or RGS7 (Fig. 5A). We hypothesized that the expression of RGS4 was limited by a high rate of degradation, and we therefore tested whether inhibition of protein synthesis by cycloheximide would cause RGS4 to diminish more quickly than RGS7 and G␣ i . This prediction was supported by the data in Fig. 5C. 
Only about half of the immunoreactive RGS4 remained detectable after PC12M cells were exposed to cycloheximide for 1 h, whereas the expression of RGS7 and Gαi was apparently stable for at least 7 h (at which time the morphology of the cells had not changed appreciably). We also examined whether an increase in endogenous RGS4 protein levels, as a result of PC12 cell exposure to the proteasome inhibitor MG132, would correlate with an increase in GAP activity toward Gαz. Of the known mammalian RGS proteins, only RZ and R4 family members have been demonstrated to accelerate the GTPase activity of Gαz.² GAP activity in the 200,000 × g pellet fractions was almost 2-fold greater in the membranes from cells exposed to MG132 relative to untreated cells (18 ± 2.3 versus 11 ± 0.47 units/mg, respectively; triplicate determinations). This increase in GAP activity is likely to be related, at least in part, to the increase in the amount of RGS4 present in the membrane fraction from the cells exposed to MG132 (~4-fold measured by densitometry) (Fig. 5D). Additional data support this inference. No mRNA for RGSZ1 or RGSZ2 was identified in PC12M cells by Northern or PCR (not shown). Another member of the RZ family, GAIP (also known as RGS19), could not be detected by Western immunoblotting with an antibody that could detect less than 1 ng of purified GAIP. RGS5 and -16 are additional R4 family members that would be anticipated, based on their N-terminal sequences, to be candidates for proteasomal degradation (12). However, we could not detect RGS16 in PC12M cells by Western immunoblotting with an antibody (that could detect less than 0.1 ng of purified RGS16; data not shown). Regulation of Endogenous RGS Proteins-The relatively rapid turnover of RGS4 (and the accumulation of endogenous RGS4 and GAP activity in cells treated with proteasome inhibitors) prompted us to consider regulation of degradation as a swift means for cells to adjust levels of RGS4 protein. Because RGS4 has the capacity to negatively regulate Gi- and Gq-mediated signaling (19), we hypothesized that RGS4 levels would be promptly elevated in response to activation of one or both of these G proteins and thus constitute a mechanism of negative feedback regulation of signaling. We were, however, unable to reveal changes in expression of RGS4 protein by acute exposure of cells to G protein activators such as 1 mM carbachol (agonist for Gi- and Gq-coupled receptors), 1 μM bradykinin (ligand for a Gq-coupled receptor), 20 or 40 μM peptide Mas 07 (a derivative of the Gi activator, mastoparan (36)), or aluminum fluoride (an activator of G proteins that is effective on some, but not all, varieties of intact cells). Time courses for those reagents (with points ranging from 5 or 15 min to 6 or 8 h) were conducted on confluent and subconfluent cultures, but no changes in RGS4 protein expression were detected.

² Y. Tu, personal communication.

FIG. 4. Differential expression of RGS4 and RGS7 in regions of brain. SDS extracts of samples from various regions of brain (from two rats) were separately examined by Western immunoblotting for expression of RGS4 (in 25 μg of total protein) and RGS7 (in 11 μg of total protein). Cx, cortex; Cp, caudoputamen; Thal, ventrobasal thalamus; Hip, hippocampus; Cblm, cerebellum.

NGF and cAMP signaling pathways promote differentiation in PC12 cells. Pepperl et al. (37) reported that treatment of PC12 cells with forskolin or cAMP analogs decreased RGS4 mRNA by nearly 50%.
We did not observe an effect of 10 M forskolin, 0.1-1 mM 8-(4-chlorophenyl thio)cAMP, or 1 mM dibutyryl cAMP on the expression of RGS4 protein in PC12M cells (data not shown). Instead, we found that NGF treatment for 48 h decreased RGS4 protein levels by 2-3-fold, with no concomitant change in RGS7 and G␣ i (Fig. 6A). Northern blot analysis indicated that this decrease in RGS4 protein correlated with a decrease in RGS4 mRNA (Fig. 6, B and C). By contrast, levels of mRNA for RGS6, -7, -8, and -16 were unaffected. Message for RGS1 and -2 was not detected (Fig. 6C). It is possible that the NGF-induced reduction in RGS4 expression would promote G i /G o activity and thereby contribute to the process by which this class of G protein participates in NGF-dependent activation of mitogen-activated protein kinase and differentiation of PC12 cells (38). DISCUSSION We discovered substantial differences between endogenous and heterologously overexpressed RGS proteins, including start sites utilized for synthesis of protein, subcellular localization, and susceptibility to proteolysis. Davydov and Varshavsky (12) reported that, in addition to full-length RGS4, a shorter more stable form of RGS4, beginning at methionine 19, was produced by in vitro translation. We observed the shorter form exclusively in multiple cell types as a result of transfection with a cDNA that encoded nontagged RGS4. By contrast, however, we found that only the longer form was expressed endogenously in tissue or cultured cells. Thus, we conclude that cells in vivo do not typically utilize the alternative start site at methionine 19 of RGS4. We found a substantial portion of endogenous RGS4 protein was associated with membrane fractions of PC12M cells. This RGS4 is presumably located strategically for regulation of membrane-bound G␣ and thereby precludes the necessity for recruitment of RGS4 to the membrane, a translocation that had been concluded from studies of heterologously overex-pressed RGS4 (27,28). On the other hand, one model of G protein activation suggests that GTP-bound G␣, dissociated from G␤␥, is released from the plasma membrane (39). Perhaps cytosolic proteins, such as the subpopulations of endogenous RGS proteins detected in soluble fractions (Fig. 3E) (6), could serve to inactivate G␣ released from the membrane, thus promoting the return of G␣ to G␤␥ at the plasma membrane. Highly efficient pools of RGS proteins in multiple subcellular compartments may have prevented some investigators, including us, from finding substantial quantities of activated G␣ subunits in the cytosol (40). We found the half-life of endogenous RGS4 to be short, on the order of just 1 h (Fig. 5C). We attribute this brief lifetime to the N-end rule pathway of protein degradation, as elucidated by Davydov and Varshovsky (12) for in vitro produced and transfection-produced RGS4. The N-end rule relates the in vivo half-life of a protein to the identity of its N-terminal amino acid. The Cys residue at position 2 of RGS4 is subject to arginylation, which targets the protein for ubiquitylation and degradation by the proteasome. In our mammalian cell transfection experiments, we did not detect full-length, untagged RGS4, presumably because this overexpressed protein was too rapidly degraded. In support of this inference, when an Nterminal Myc tag was added (thus creating a stabilizing amino acid at the N terminus), a protein of the expected size (full length plus tag) was produced (Fig. 1A). 
Despite a report that RGS7 is also subject to degradation by the proteasome (35), we do not find that this pathway of degradation is a common characteristic of endogenous RGS proteins. Kim et al. (35) reported that heterologously overexpressed RGS7 is subject to degradation by the proteasome because inhibitors of this pathway increased the level of expression of the protein. By contrast, we found that proteasome inhibitors did not affect expression of endogenous RGS7 in PC12M cells; this is consistent with the protein sequence beginning with alanine, which is not a destabilizing amino acid. In addition, our experiments involving inhibition of protein synthesis indicated that endogenous RGS7 was resistant to proteolysis over 7 h (Fig. 5C). We ascribe the stability of endogenous RGS7 in PC12M cells to its obligate association with Gβ5 (which may be limiting when RGS7 is overexpressed), as has been demonstrated by Slepak and co-workers (6). We suggest that the particularly low level of expression of endogenous RGS4 is related to a high rate of degradation relative to synthesis of the protein. The amount of RGS4 detected was about 30-fold lower than RGS7 in frontal cortex. The levels of RGS4 and RGS7 were only 0.0001 and 0.003%, respectively, of total protein in cortex, whereas their substrates, such as Go and Gi, are highly expressed, comprising 1.5% of membrane protein (41). This disparity in the abundance of RGS and G proteins is consistent with RGS proteins acting catalytically in vitro (13). We speculate that localization of RGS proteins within cells of the brain, perhaps in preformed signaling complexes (1), may be a particularly crucial determinant of specifically which molecules of Gαi and Gαo will be subject to regulation by the relatively small number of RGS proteins. In our screen of regions of brain and various cell types, we found a positive correlation between the amount of RGS4 mRNA and the amount of protein assayed by Western immunoblotting. For example, we detected RGS4 mRNA and protein in PC12M and AtT20 cells but little or no mRNA and no protein in NG108 or Neuro 2A cells. Whereas we also observed concomitant modulation of expression of RGS4 mRNA and protein by cell density or exposure of cells to NGF, a similar pattern of regulation of mRNA and protein is not necessarily universal.

FIG. 6. NGF coordinately reduces expression of RGS4 mRNA and protein in PC12M cells. Cells were cultured for 24 h in the presence (+) or absence (−) of 40 ng/ml NGF. Another addition of half as much NGF was made (to NGF-treated cells only), and the cultures were incubated for another 24 h. Cultures were extracted with SDS-PAGE sample buffer for analysis of protein expression or with Trizol for analysis of mRNA. A, duplicate samples of 45 μg of cellular protein were processed for Western blotting with antibodies as indicated by protein names at the left of three blot fragments. B and C, Northern blots were processed with radiolabeled probes for RGS isoforms (as indicated by the numbers at the top of blots). The ticks at the left (for RGS2, -4, and -6) and right (for RGS1, -7, and -16) margins of C indicate the migration of 28 and 18 S ribosomal RNA.

In a separate study, we found that, following acute or chronic treatment of rats with morphine, the levels of RGS4 mRNA and protein in the locus coeruleus did not change in unison (42).
This result points to the importance of monitoring protein (as opposed to just mRNA) in evaluating the impact of modulators on the physiological expression and function of RGS proteins. Why is the level of RGS4 protein expression dependent on cell density? Reducing the rate of RGS4 degradation and/or increasing its rate of synthesis would increase the steady-state levels of endogenous RGS4. It is unlikely that regulation of the rate of protein degradation makes a major contribution for increased RGS4 protein levels, because treatment of PC12M cells with the proteasome inhibitor, MG132, resulted in increased RGS4 expression regardless of cell density (data for subconfluent cells not shown). Additionally, MG132 treatment of subconfluent cells failed to achieve the level of RGS4 expression found in confluent cells. RNase protection assays suggested that the mechanism of regulation is based, at least in part, on transcriptional control (Fig. 3C). Cell cycle did not appear to be a major factor in transcriptional control, because fluorescence-activated cell cycle analysis did not reveal significant differences between the distributions of cells among phases of the cell cycle. Because RGS4 mRNA and protein levels were coordinately and inversely affected with NGF treatment and higher cell density, the most likely explanation is that elevated RGS4 expression occurs as a result of increased transcriptional activity related to increased cell/cell contacts (vertically in addition to horizontally) that exist at higher cell densities. In Saccharomyces cerevisiae, the RGS protein, Sst2p, helps overcome cell cycle arrest induced by mating factor (a ligand for a G protein-coupled receptor). Mating factor induces expression of Sst2p via a transcriptional mechanism, and this RGS protein serves as a negative feedback regulator of the mating factor pathway (43). Because we observed a high rate of RGS4 degradation (and increased GAP activity in cells exposed to a proteasome inhibitor), we hypothesized that an appropriate agonist or G protein activator would reduce RGS4 protein degradation in PC12M cells. This could provide the means to increase the level of RGS4 protein for function as a negative feedback regulator, which would be more rapid than a mechanism relying on transcription. To date, however, we were unable to find conditions to regulate proteolysis of RGS4 either by receptor agonist or by direct activation of G proteins. Although our current experience suggests that endogenous RGS4 protein levels in PC12M cells are not dictated by G protein activity, the possibility remains that degradation of RGS4 may be regulated via a mechanism that involves specific receptor(s) or other means that we have yet to address.
2018-04-03T01:24:06.656Z
2004-01-23T00:00:00.000
{ "year": 2004, "sha1": "6cb145ef21981e215a704b5dde8be70c9bb46c3d", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/279/4/2593.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "9aa4d1e3299cec8fb22b8931aafd83ee6db18f6c", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
209202122
pes2o/s2orc
v3-fos-license
Efficient crowdsourcing of crowd-generated microtasks Allowing members of the crowd to propose novel microtasks for one another is an effective way to combine the efficiencies of traditional microtask work with the inventiveness and hypothesis generation potential of human workers. However, microtask proposal leads to a growing set of tasks that may overwhelm limited crowdsourcer resources. Crowdsourcers can employ methods to utilize their resources efficiently, but algorithmic approaches to efficient crowdsourcing generally require a fixed task set of known size. In this paper, we introduce cost forecasting as a means for a crowdsourcer to use efficient crowdsourcing algorithms with a growing set of microtasks. Cost forecasting allows the crowdsourcer to decide between eliciting new tasks from the crowd or receiving responses to existing tasks based on whether or not new tasks will cost less to complete than existing tasks, efficiently balancing resources as crowdsourcing occurs. Experiments with real and synthetic crowdsourcing data show that cost forecasting leads to improved accuracy. Accuracy and efficiency gains for crowd-generated microtasks hold the promise to further leverage the creativity and wisdom of the crowd, with applications such as generating more informative and diverse training data for machine learning applications and improving the performance of user-generated content and question-answering platforms. Introduction Crowdsourcing platforms enable large groups of individual crowd members to collectively provide a crowdsourcer with new information for many problems [1,2] such as completing user surveys [3], generating training data for machine learning models [4,5], or powering citizen science programs [6,7]. The work performed by the crowd is often used by researchers and firms to address problems that remain computationally challenging. Yet incorporating humans into a problem domain introduces new challenges: workers must be paid and even volunteers should be properly incentivized, bad actors or unreliable crowd members should be identified, and care must be taken to efficiently and accurately aggregate the response of the crowd. Algorithmic crowdsourcing focuses on computational approaches to these challenges, allowing crowdsourcers to maximize the accuracy of the data generated by the crowd while also efficiently managing the costs of employing the crowd. Sec. 6 with a discussion of this work and its applications, including the limitations of our study and promising directions for future research. Background Here we describe the problem model we employ in our study to represent crowdsourcing tasks, describe prior research on crowd-generated microtask crowdsourcing, as well as provide details on existing methods for crowdsourcing microtask data under budget constraints. Problem model and existing work We focus on problems where crowd members propose binary labeling tasks as a representative model for individual microtasks, as is standard practice in algorithmic crowdsourcing. In the context of crowd-generated microtasks, workers can introduce novel microtasks for other workers to label, leading, perhaps after appropriate validation, to a growing set of labeling tasks. For example, when crowdsourcing causal attributions [13], a worker may introduce a novel microtask by posing a new question (Do you think that viruses cause sickness?) which then becomes a new yes/no binary labeling microtask for other crowd workers. 
While binary labeling is a simplification of the nuance of many real-world crowdsourcing tasks, binary labeling can represent image categorization tasks or even basic survey questions, and can be readily generalized to categorical labeling tasks such as multiple choice questions, although those tasks can also be binarized (see [19]). Let z i 2 {0, 1} be the true but unknown label for task i and let y ij be the response provided by worker j when given task i. We define the associated task parameter θ i � Pr(z i = 1) as the unknown probability that the true label for task i is 1. Multiple workers are typically asked to respond to a given task, allowing us to aggregate their responses for improved accuracy; we assume that workers respond independently so that the {y ij } are iid for a given i. To track the response tallies for task i, let a i and b i be the total number of '+1' and '0' responses, respectively, for i, and let n i = a i + b i be the total number of responses received for i. As responses are gathered, these tallies will change, so a i , b i , and n i are considered functions of time t, where we track 'time' as the number of responses received across all workers and tasks (t = ∑ i n i (t)). We can estimate θ withŷ ¼ a i =n i . The final goal is to infer the true label of the task accurately, i.e., developẑ i � z i using the responses {y ij } for task i. Most work on efficient crowdsourcing assumes a fixed set of tasks but some studies have considered task growth. The work of Sheng, Provost & Ipeirotos [20] considers the idea of soliciting new training examples (labeling tasks) from the crowd, and discusses strategies for how often to request new tasks depending on the cost of receiving a new task relative to the cost of receiving a response to an existing task. However, the focus on their work is on how many responses a single task requires, as multiple responses are typically used to overcome noisy workers, and they do not consider the cost to complete a task (something we will focus on; Sec. 3), only the cost on a per-response basis. Likewise, the recent work of Liu and Ho [9] studies task growth using a multi-armed bandit approach, where the arms of the bandit increase over time. They assume the crowdsourcer is not able to control when new tasks are generated, however, and neither study considers the use of efficient allocation methods for guiding workers to tasks when costs are constrained by a budget. Of course, returning to the example of a QA platform, users typically submit questions on their own, but any QA site can implement an approval process allowing the site to control the rate of new questions. To the best of our knowledge, crowdsourcing a growing set of tasks when efficient allocation methods are used to complete those tasks has not been studied. Efficient allocation methods Often a crowdsourcer must accurately infer the z i labels under budget constraints, as only finite resources (such as time or money) will be available to support the crowd. For simplicity, we assume a crowdsourcer has a total budget of B requests that can be elicited from the crowd. The budget then imposes the constraint ∑ i n i (t) � B for all t � B. This constraint becomes especially challenging for a growing set of tasks, since the finite budget must be spread out over an increasing number of individual tasks. Crowdsourcing allocation methods [18,19,21] have been developed to efficiently and accurately infer labels for tasks under a finite budget. 
These methods choose which tasks to give to workers with a goal of maximizing the efficiency and accuracy of the task labels the crowdsourcer will infer from the worker responses. In this work, we apply the Optimistic Knowledge Gradient (Opt-KG) method [18]. Opt-KG works to optimize accuracy by implementing a Markov Decision Process that chooses tasks with the largest expected improvement in accuracy. This method has shown improvement in accuracy when applied to finite budget crowdsourcings [18]. Opt-KG focuses on optimizing overall accuracy, which makes it particularly beneficial for applying to crowd-generated microtasks and is the reason we focus on it in this work (see also our discussion of Opt-KG and other methods in Sec. 6). Further, Opt-KG has no parameters that need to be tuned or chosen by the crowdsourcer. Opt-KG and other allocation methods assume a fixed set of N tasks. The goal of our work here is to enable an efficient allocation method to support crowdsourcing problems where the crowd can provide new tasks to the crowdsourcer, leading to a set of tasks that grows over the duration of the crowdsourcing. Cost forecasting Here we introduce a method to enable efficient allocation methods such as Opt-KG to work with crowd-generated microtasks. First, we extend the traditional binary labeling model for a fixed set of tasks to an open-ended problem where the crowdsourcer begins with a small seed of tasks that grows as the crowd generates novel tasks. We then describe the components of cost forecasting including cost estimators for how many responses are needed to complete tasks and a decision rule (Growth Rule) based on those costs that allows the crowdsourcer to choose whether a crowd worker should work on an existing task or propose a new task. Model for crowd-generated microtasks The problem model given above (Sec. 2.1) describes each of a fixed set of N tasks. Typically, allocation methods assume there is a fixed number of tasks that a crowdsourcer wishes to distribute to workers. However, in this work we consider task growth where the number of tasks grows as new tasks are generated by the crowd. Growing tasks can represent the submission of new questions to a question-answering site, for example, while responding to a task represents a user answering an existing question or more simply flagging an existing question-answer pair as correct. Let N t be the total number of tasks that exist at time t, where N 0 initial seed tasks are used to begin the crowdsourcing and we track time such that each timestep represents one request made by the crowdsourcer. When a new task is desired at timestep t, a worker will be prompted to propose a new task, which is then added to the set of all tasks, and N t+1 = N t + 1. Later, other workers can submit responses to this new task so that a label for that task can be inferred. In this model, the cost of a new task generated by the crowd and the cost of a response is defined to be f t and f r units, respectively. Depending on problem-specific considerations, the crowdsourcer can set f t = f r or let the costs differ (see also [20]). In this work, we define cost units in number of responses, taking f t = f r = 1; we discuss f t 6 ¼ f r in our discussion. In practice, an approval process may also be needed to guarantee requirements for the new task such as appropriateness, novelty, or importance. For simplicity, here we assume this process has already been implemented. 
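To make the bookkeeping in this model concrete, the following minimal Python sketch (ours, not from the paper) tracks the growing list of tasks as (a, n) response tallies, the plug-in estimate ŷ = a/n of θ, and the inferred label; it assumes f_t = f_r = 1, so proposing a task costs one budget unit, just like a response. All names are illustrative.

```python
# One growing list of (a, n) tallies, one entry per task:
#   a = number of '+1' responses, n = total responses received so far.
tallies = [(0, 0) for _ in range(100)]   # N_0 = 100 seed tasks, no responses yet

def record_response(tallies, i, y):
    """Record one worker response y in {0, 1} for task i."""
    a, n = tallies[i]
    tallies[i] = (a + y, n + 1)

def theta_hat(a, n):
    """Plug-in estimate of theta_i = Pr(z_i = 1); only defined once n > 0."""
    return a / n

def z_hat(a, n):
    """Inferred label: 1 if the estimated theta_i exceeds 1/2, else 0."""
    return 1 if theta_hat(a, n) > 0.5 else 0

def propose_task(tallies):
    """The crowd proposes a brand-new task, so N_{t+1} = N_t + 1.
    With f_t = f_r = 1 this spends one unit of budget, like a response."""
    tallies.append((0, 0))
```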
Forecasting the cost to complete a task
Suppose at some time t during the crowdsourcing that task i has already received n_i(t) independent (0, +1) responses, of which a_i(t) are +1 responses. Our current estimate of the task's associated parameter θ_i is ŷ_i(t) = a_i(t)/n_i(t). We can decide if task i should be labeled +1 or labeled 0 based on whether ŷ_i > 1/2 or ŷ_i < 1/2, but we want to minimize the probability of giving i the wrong label. This may require waiting until more responses to i are gathered, so a conclusion can be drawn more safely, but we also want to avoid wasting additional responses on tasks that we can already label with an acceptable accuracy or on tasks that are too difficult (or too expensive) to answer accurately. Thus, we need to incorporate our uncertainty in ŷ given the collected data. In general, for n independent samples of a Bernoulli random variable, the probability that our estimate ŷ differs from the true value θ by at least ε is bounded by Hoeffding's Inequality:

Pr(|ŷ − θ| ≥ ε) ≤ 2 exp(−2nε²).   (1)

This inequality allows us to decide a value for this probability and then estimate the minimum number of labels needed to ensure that probability. Suppose we want the probability that we are off by more than ε to be no more than δ. Then at least

n ≥ ln(2/δ) / (2ε²)   (2)

responses are needed to provide a bound on δ. (Note that tighter bounds than Hoeffding's may be used, but for simplicity here we focus on Eq (1); see the Discussion for more.) Our crowdsourcing goal for a given task is to determine if the unknown label z is 1 or 0 (for now we suppress the dependence on task index i and timestep t). The difference between our current estimate ŷ and 1/2 represents our weight of evidence towards this decision. If we are confident to some degree that our estimate ŷ is different from 1/2, then we are able to conclude the label of the task based on whether ŷ > 1/2 or ŷ < 1/2, and when we can draw that conclusion we can also deem the task complete. Using Eq (2) and our current estimate with n responses, we can then estimate how many additional responses m we need until our confidence interval (or margin of error) does not include 1/2:

m ≥ ln(2/δ) / (2(ŷ − 1/2)²) − n.   (3)

Eq (3) shows us that the closer the task's parameter θ is to 1/2, the more costly the task will be in terms of requiring more responses to distinguish if the label should be 0 or 1. Of course, this estimate may be inaccurate as it relies on the current value of ŷ = a/n at n responses. In reality, as more responses are gathered, ŷ will be revised. These updated estimates can be automatically incorporated into this equation as new responses are received, yielding improved forecasts for m. However, Eq (3) is not valid when ŷ = 1/2. In this scenario, we can ask: what if we receive our next response and it is +1 or it is 0? Since all we currently know in this scenario is ŷ = 1/2, we should assume either outcome is equally likely, giving a revised estimate ŷ = a/(n + 1) (if the new response is 0) or ŷ = (a + 1)/(n + 1) (if the new response is +1). Thankfully, (ŷ − 1/2)² is the same in both cases, and so plugging either into Eq (3) will give the same estimate for m:

m ≥ ln(2/δ) / (2(a/(n + 1) − 1/2)²) − n − 1,   (4)

where the −1 counts the additional label we assume we will receive. In summary, we can estimate the number of additional responses m needed to complete a task using

m ≈ ln(2/δ) / (2(a/n − 1/2)²) − n  if a/n ≠ 1/2,  and  m ≈ ln(2/δ) / (2(a/(n + 1) − 1/2)²) − n − 1  if a/n = 1/2.   (5)

Once a task's ŷ has been shown to be different statistically from 1/2, the additional cost is m ≤ 0 (no additional responses are needed).
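A direct implementation of the cost estimate in Eqs (3)-(5) might look like the sketch below; the function name and the clipping of negative values to zero are our own choices.

```python
import math

def additional_responses_needed(a: int, n: int, delta: float) -> float:
    """Estimated additional responses m before the Hoeffding-based margin of
    error around the estimate a/n excludes 1/2 (Eqs. (3)-(5)); 0 means the
    task can already be labeled at confidence level 1 - delta."""
    if n > 0 and 2 * a != n:                 # current estimate differs from 1/2
        gap = a / n - 0.5
        m = math.log(2.0 / delta) / (2.0 * gap ** 2) - n
    else:                                    # estimate equals 1/2 (or n = 0)
        # Assume one more response arrives; either outcome gives the same
        # squared gap to 1/2 (Eq. (4)); the extra -1 counts that response.
        gap = a / (n + 1) - 0.5
        m = math.log(2.0 / delta) / (2.0 * gap ** 2) - n - 1
    return max(0.0, m)
```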
To use in subsequent sections, we define the set of available tasks M(t) as those where additional responses are needed:

M(t) = {i : m_i(t) > 0},

where (suppressing the dependence on i and t) m_i(t) is given by Eq (5).

Deciding when to request a new task
The ability to estimate the cost to complete a task allows us to introduce a simple decision rule for when to request new tasks: request a new task when the expected cost to complete a new task is less than the estimated cost to complete the currently available task that is closest to completion. Specifically, let i ∈ [1, . . ., N_t] index the N_t currently available tasks, and let m_i be our current estimate for the cost to complete task i. Let the expected cost to complete a new, unseen task be E[n_j] (we compute this below). Comparing the {m_i} with E[n_j] then informs our decision rule for growing the set of tasks. To decide whether or not to request a new task at some time t, we study two specific Growth Rules (GRs): Request a new task when

E[n_j] < min_{i ∈ M(t)} m_i   (Growth Rule I),   (6)
E[n_j] < median_{i ∈ M(t)} m_i   (Growth Rule II),   (7)

where the minimum and the median are taken over the set of tasks for which additional responses are needed at time t, M(t). We include the second rule (GR II) to provide a potentially less extreme counterpoint to GR I in that using the median as a decision point may be less influenced by outlier tasks than the minimum. The intuition behind these growth rules is as follows. As the crowd works on completing the currently available tasks, inexpensive tasks (those with θ far from 1/2) will finish first, and soon only expensive tasks (those with θ close to 1/2) will remain. Eventually, the remaining tasks will be costly enough that the crowdsourcer will be better off taking the chance on a brand new task. Our experiments (Secs. 4 and 5) investigate using these rules to elicit new tasks during crowd-generated microtask crowdsourcing.

Estimating the cost to complete an unseen task
Given the growth rules introduced in Eqs (6) and (7), a question remains: how can we estimate the expected cost to complete a task j when the task is unseen or has no responses (i.e., a_j = n_j = 0)? One option is to track the mean completion cost of previously completed tasks and use that for E[n_j]. Another option is to track the mean parameter ŷ of previously completed tasks, E[ŷ], and use that mean within Eq (5) to estimate the completion cost. The former uses more data, but the latter option may be preferable as the GRs are then comparing two estimated costs instead of one observed cost and one estimated cost; if the estimates are biased then comparing two estimates may prevent or at least limit the bias from having a harmful impact. However, here we take a simpler approach focused on computing the expected cost from only a given prior distribution of θ. Given a prior distribution P(θ) for task parameters, we can estimate the expected minimum cost to complete unseen tasks if they are sampled from that prior:

E[n] = ∫_{n_min}^{∞} n P(n) dn,   (8)

where n_min ≡ 2 ln(2/δ) is the expected minimum cost for the ideal case of θ = 0 or θ = 1. Here P(n) can be derived by performing a change-of-variables on the prior distribution P(θ). Unfortunately, E[n] diverges for any P(θ) that assigns sufficient probability at or near θ = 1/2, as tasks at that θ will on average never be completed. To ensure convergence, we assume a bound is used for the maximum amount of responses n_max that should be spent on a given task, and tasks i that reach n_i ≥ n_max without being deemed complete are abandoned.
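Growth Rules I and II then reduce to a one-line comparison against the estimated remaining costs over M(t). The sketch below is illustrative only; it assumes the additional_responses_needed helper from the previous snippet and leaves the expected cost of an unseen task, E[n_j], as an input (a closed form under a uniform prior is given below).

```python
import statistics

def should_request_new_task(tallies, delta, expected_new_cost, rule="GR1"):
    """Growth Rules I and II (Eqs. (6)-(7)): request a new task when the
    expected cost of an unseen task is below the minimum (GR I) or the median
    (GR II) of the remaining costs m_i over the still-open set M(t).
    `tallies` is the list of per-task (a, n) response tallies."""
    m_open = [m for m in (additional_responses_needed(a, n, delta)
                          for a, n in tallies) if m > 0]   # costs over M(t)
    if not m_open:                       # every current task is complete
        return True
    threshold = min(m_open) if rule == "GR1" else statistics.median(m_open)
    return expected_new_cost < threshold
```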
Although here we used this bound only theoretically (when computing E[n]) since Opt-KG itself helps to prevent over-spending [18], in practice this bound can prevent a growth in sunk costs where expensive tasks consume an inordinate amount of the crowdsourcer's budget. We explore the effects of this bound below. Using this bound, the expected minimum cost to complete unseen tasks can be estimated:

E[n] = ∫_{n_min}^{n_max} n P(n) dn + n_max Pr(n > n_max)   (9)
     = √(n_min n_max) (2 − Z) − n_min (1 − Z),   (10)

where Z ≡ √(n_min/n_max) and the second line holds for a uniform (prior) distribution of θ. Finally, Eq (10) for E[n] (or Eq (9) for a different prior) and Eq (5) for the additional costs {m_i} can be used in our Growth Rules, Eqs (6) and (7), to perform cost forecasting for crowd-generated microtask crowdsourcing.

Materials and methods
Here we describe the real and synthetic crowdsourcing datasets we apply cost forecasting to, how to perform crowd-generated crowdsourcing on these data, and we introduce a non-growth baseline control to understand the performance of cost forecasting.

Datasets
We study three crowdsourcing datasets. These data were not generated using an efficient allocation algorithm, and so it has become standard practice to evaluate such algorithms with these data [8,19]: since labels were collected independently, one can use an allocation algorithm to choose what order to reveal labels from the full set of labels, essentially "rerunning" the crowdsourcing after the fact. Due to the generally small number of responses for each task in these datasets, to simulate a response from a worker to a task we sample from a Bernoulli distribution with a probability ŷ that is estimated from the responses for that task given in the original data. Below we describe each dataset and how to use these data with crowd-generated microtask crowdsourcing, where the set of tasks changes throughout the crowdsourcing.

RTE. Recognizing Textual Entailment [4]. Paired written statements from the PASCAL RTE-1 data challenge [22]. Workers were asked if one written statement entailed the other. These data consist of N = 800 tasks and 8,000 responses, with each task receiving 10 responses. Data are available at https://sites.google.com/site/nlpannotations/.

Bluebirds. Identifying Bluebirds [23]. Each task is a photograph of either a Blue Grosbeak or an Indigo Bunting. Workers were asked if the photograph contains an Indigo Bunting. There are N = 108 tasks and 4,212 responses, with 39 responses for each task. Data are available at https://github.com/welinder/cubam.

Games. This dataset contains crowdsourcing tasks generated from an app based on a TV game show, "Who Wants to Be a Millionaire" [24]. When a question is first revealed on the show, the app sends a task containing the question and 4 possible answers to the users. Responses from users and correct answers were collected. Data were preprocessed and responses binarized following the procedure used by Li et al. [19]. The dataset contains N = 1,682 tasks and 179,162 responses. Data are available at https://github.com/bahadiri/Millionaire.

To study crowd-generated microtask crowdsourcing on these datasets, we first sample N_0 tasks from the N tasks in the dataset to construct the initial seed tasks for the crowdsourcer to use. To replicate requesting a new task, we simply draw from the set of tasks remaining in the dataset that have not yet been requested.
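For reference, the uniform-prior closed form of Eq (10) is straightforward to evaluate numerically; the helper below is our own sketch, and with δ = 0.5 and n_max = 10 it gives E[n] ≈ 6.4 responses, between n_min ≈ 2.8 and the cap n_max.

```python
import math

def expected_new_task_cost(delta: float, n_max: float) -> float:
    """E[n] for an unseen task under a uniform prior on theta, with per-task
    spending capped at n_max (closed form of Eq. (10))."""
    n_min = 2.0 * math.log(2.0 / delta)   # cost of an ideal task (theta = 0 or 1)
    z = math.sqrt(n_min / n_max)
    return math.sqrt(n_min * n_max) * (2.0 - z) - n_min * (1.0 - z)
```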
In other words, at the start of crowdsourcing there are N 0 tasks available to the crowdsourcer and N − N 0 tasks which are in the data but not yet requested. The growth rule in use determines when new tasks should be generated, simulating the crowdsourcer's decision process. Crowdsourcing continues until the budget B is exhausted or all N tasks have been requested. Budget is used to request new tasks and to receive responses to existing tasks. Synthetic crowdsourcing We supplement our results from real crowdsourcing data by performing controlled simulations. We generate datasets following the model defined above by assuming each worker response to task i follows a Bernoulli distribution with parameter θ i . This controls for the cost of the task and the amount of responses needed to accurately labelẑ i ¼ 0 orẑ i ¼ 1. This assumes workers are reliable; see the Discussion for incorporating worker reliability. Note also that θ i is used only to simulate worker responses-all subsequent calculations are performed using the estimateŷ i as θ i itself is unknown to the crowdsourcer. When tasks are created, we draw θ i from a uniform prior distribution but we can also draw from other probability distributions such as the Beta distribution. To begin each run of crowdsourcing, we generate a set of N 0 seed tasks. To simulate requesting a new task j from a worker at time t, we draw a new θ j from the underlying prior distribution, add j to the set of tasks, increment the number of tasks N(t + 1) = N(t) + 1, and so forth. Unless otherwise noted, in simulations, we used N 0 = 100 and a total budget (Sec. 2.2) of B = 3000; we explore the effects of these and other parameters in our experiments below. Using this model, we can apply efficient budget allocation techniques such as Opt-KG and implement the growth rules defined above. Baseline control. To understand better the performance of cost forecasting, for each Growth Rule, we compare to a non-growth baseline that controls for the number of tasks and total budget spent on responses to those tasks. In this baseline, the number of tasks available at the start matches the final number of tasks generated when using cost forecasting, no new tasks are proposed by the crowd, and the budget available to the baseline is equal to the number of labeling responses received when using cost forecasting. Specifically, the budget for responses B r available to the baseline is B r = B − (N − N 0 ) where B is the total budget used by cost forecasting and N is the final number of tasks generated by the crowdsourcing we are comparing against. We perform one matching realization of the baseline for each realization of cost forecasting, as randomness in worker responses leads to variability in the total number of tasks proposed across different realizations of cost forecasting. Note that this baseline is equivalent to a growth rule that performs all growth at the start of the crowdsourcing, then receives all worker responses to those tasks until the budget is exhausted. This contrasts with cost forecasting which dynamically alternates between growing tasks and responding to tasks using a given Growth Rule. Real and synthetic data We evaluate the performance of cost forecasting on simulated and real crowdsourcing data (Fig 1). Solid lines correspond to cost forecasting while dashed lines correspond to the nongrowth baseline. For these results we used cost forecasting parameters (Sec. 
3.2) δ = 0.9 for GR I, δ = 0.5 for GR II (which exhibits faster growth than GR I), and n max = 10 (Sec. 3.4) for both; we further explore the dependence on δ and n max below. (Bluebirds, a smaller, noisier dataset, used δ = 0.5 (GR I), δ = 0.1 (GR II), N 0 = 10, B = 600.) Cost forecasting leads to slower growth at the beginning of crowdsourcing, visible in the long pause before the number of tasks begins to grow (Fig 1). Our method does not begin to grow until the crowd has provided enough responses about the seed tasks to achieve accurate labels. In contrast, the non-growth baseline begins with all tasks initially available. Examining the accuracy, or proportion of correct tasks, shows that cost forecasting achieves higher accuracy than the baseline for most data, especially for earlier in the budget, with Bluebirds (a difficult task with a global accuracy of only �0.65) being a possible exception. Note that by controlling for the overall growth rate and budget of cost forecasting in the baseline (see above), the final accuracy (at high budgets) of both methods will on average always be the same, as both methods use the same Opt-KG allocation method. Yet, cost forecasting can achieve higher accuracy at low budgets (often up to �5%) by dynamically determining the growth rate based on the past and current state of the crowdsourcing. Dynamics of cost forecasting Cost forecasting decides between requesting responses to existing tasks and requesting new tasks. The dynamics of this decision process will vary as the responses are gathered for existing tasks, leading to a dynamical pattern distinctly different from that exhibited by, e.g., constant random growth (Fig 2, top). A well-established way to study these dynamics is through the interevent times Δt, the number of non-growth requests that occur between growth requests. If a discrete-time process is memoryless, where each request is equally likely to be a growth request, Δt will follow a geometric distribution P(Δt = k) = p(1 − p) k where p is the probability for a growth event. This converges to an exponential distribution for a continuous-time process, P(Δt) = λe −λΔt , with rate parameter λ. In contrast, bursty processes exhibit heavy-tailed, often power-law distributions of Δt: P(Δt) / (Δt) −α for power-law exponent α > 1 [25]. Power-law distributions show higher probabilities relative to exponentials for both very short Δt and very long Δt, capturing the long pauses of non-activity punctuated by sudden bursts of activity that are characteristic of bursty processes. Fig 2 shows the interevent distribution for both cost forecasting growth rules. At top, we use a "spike train" to illustrate the growth events around one run of simulated crowdsourcing, with another random growth spike train demonstrating a memoryless process where growth events occur at the same rate as the cost forecasting growth rule. Below, we show power-law and geometric distributions fitted to the Δt observed over 50 runs [26]. Indeed, we see that Cost forecasting applied to synthetic and real world crowdsourcing data. Accuracy of inferred labels is generally higher at given total budget for both growth rules (solid lines; blue: Growth Rule I, orange: Growth Rule II) than if all tasks were available to start (control, dashed lines). 
Higher accuracy at tight budgets allows cost forecasting to handle crowd-generated sets of tasks and to handle budget-uncertain scenarios (see Discussion), helping the crowdsourcer to ensure the gathered data is high-quality even if the budget is suddenly cut. https://doi.org/10.1371/journal.pone.0244245.g001 cost forecasting is heavy-tailed and at least approximately well explained by a power-law distribution, indicating it is a bursty process. Furthermore, likelihood-ratio tests [26] showed significant evidence (p < 10 −14 ) for power-laws over exponentials (the continuous analog of the geometric distribution) for both growth rules. The burstiness of cost forecasting shows that the algorithm tends to alternate between suddenly requesting multiple new tasks (short interevent times) and then focusing for some time on receiving responses to existing tasks (long interevent times). In other words, it is reactive to the current state of the crowdsourcing, trading off expected costs given by responses to the current tasks with the potential cost a new, unseen task will require to be completed. Parameter dependence The cost forecasting procedure introduced in Eqs (3)-(10) depends on parameters δ and n max . Here we explore some effects of these parameters. Further, we assume each crowd-generate microtask crowdsourcing begins with an initial seed of N 0 known tasks (and no responses), so we also study how cost forecasting behaves for different size seeds. Fig 3 uses simulated crowdsourcing to explore the dependence of the average growth rate of tasks on δ and n max . Examining Fig 3, n max has little effect on GR I's growth rate while increasing δ provides the researcher with some ability to tune a given growth rule's growth rate. In particular, using GR I and varying δ from 1/2 to 1 increases the typical growth rate by about 4% (Fig 3, bottom) essentially independently of n max . GR II, in contrast, exhibits a higher overall growth rate, a slightly greater dependence on n max than GR I, and the growth rate increases by �8% for δ = 1 compared with δ = 0.1 (Fig 3, bottom). These results show that the choice of n max does not have a large impact on growth rate for GR I, while GR II shows increased growth rate for small values of n max . We next investigate how growth rate depends on the initial number of available tasks N 0 . When many tasks are available to start, we anticipate that cost forecasting will spend more time exploring the available tasks before it begins to grow, which will lead to a lower overall growth rate for a fixed budget. Indeed, Fig 4 (top) shows that larger N 0 crowdsourcings have lower growth rates than smaller N 0 crowdsourcings for a given Growth Rule. For example, when N 0 = 200, the growth rate is approximately 5% lower (for GR I) or 3% lower (for GR II) than when N 0 = 50, indicating a small but potentially important affect on the overall crowdsourcing. Given that larger N 0 gives lower growth rates, what effect does N 0 have on accuracy? The bottom panels of Fig 4 explore how accuracy improvement (accuracy of cost forecasting minus accuracy of corresponding baseline) depends on different values of N 0 . Generally, accuracy is improved at tight budgets using cost forecasting, but this improvement is lessened to some extent as N 0 increases-this is plausible as very large values of N 0 are effectively fixedsize traditional microtask crowdsourcings, meaning large N 0 are scenarios where there is less advantage for a crowdsourcer to apply cost forecasting. 
Smaller N 0 , however, show the advantages at tight budgets in terms of accuracy for cost forecasting. We also note that (as in Fig 1) there is a consistent trend for GR II to briefly perform worse than the baseline at high values of B (�2000) before higher values of B lead to comparable performance between the two approaches. Non-stationary crowdsourcing-Increasing completion costs Our cost forecasting approach assumes the expected minimum cost to complete an unseen task is constant over the course of the crowdsourcing. Yet, is this a realistic assumption? One can imagine a scenario where the crowd initially proposes "easy" tasks (where consensus is reached quickly and the label can be inferred with few responses) then the crowd runs out of "low-hanging fruit" and later tasks will tend to be more expensive. An example scenario is a question-answering site where all the easy-to-answer questions have already been proposed and subsequently proposed questions tend to be polarizing for the community. If this occurs, how will it affect the performance of crowdsourcing using cost forecasting? To explore how cost forecasting behaves under an increasing-cost scenario, we augment our crowdsourcing model by enabling the prior distribution for θ i , the probability of a 1-label for task i, to vary as more tasks are proposed by the crowd. When this distribution becomes more sharply peaked at θ = 1/2, tasks will tend to be more costly to complete. Then, to capture an increasing-cost scenario, we take a Beta distribution B(α, β) for the prior of θ and make the parameters linearly increasing functions: α(N t ) = β(N t ) = 1 + s(N t − N 0 ), where N t − N 0 is the number of tasks proposed so far, s parameterizes the rate at which tasks become more costly (as increasing α = β leads to a prior more sharply peaked at θ = 1/2), and the intercept 1 ensures the initial prior is a uniform distribution. We illustrate the changing prior of the increasing-cost model in the left panel of Fig 5. In the inset of this panel we show how the Beta distribution parameters change as budget B increases (and more new tasks are proposed), with the colored points in the inset corresponding to the distributions shown in the main plot. In the right of Fig 5 we illustrate how the growth rules perform as tasks of increasing cost are proposed-note that the cost forecasting method used here is not made aware of these changing costs. Here we used δ = 0.5 (0.1) for GR I (GR II). As we also saw in Fig 1, GRII generally exhibits more growth and lower accuracy than GRI, and we expect higher accuracy when there is lower growth as there will be more responses for fewer tasks. This growth-accuracy tradeoff effect is exacerbated further here, when later tasks are more difficult than earlier tasks, as less growth leads to more responses to earlier, easier tasks. Indeed, accuracy drops at larger B for higher s, as tasks become more difficult, but both growth rules handle the change in s rather well, showing similar drops in accuracy for both s = 0.1 and the more costly s = 0. 2. Yet GR II shows a faster growth rate for s = 0.1 than s = 0.2, demonstrating how, despite incorrectly assuming new tasks are always equally costly to complete, cost forecasting can still react to some extent to non-stationary task sets. Discussion In this work, we introduced cost forecasting as a means to crowdsource crowd-generated microtasks where the crowd both completes tasks but also proposes new tasks to the crowdsourcer. 
Crowdsourcing of crowd-generated microtasks can be used for question-answering sites, the design of new surveys, and in general can enable crowds to combine creative task proposal with traditional microtask work. We demonstrated for binary labeling tasks on both synthetic and real-world crowdsourcing data that cost forecasting can leverage the performance of an efficient crowd allocation method and lead to improved accuracy. Cost forecasting can also help budget-uncertain crowdsourcing. If a crowdsourcer does not know how many responses they will be able to gather, they will want to achieve and maintain a high accuracy as soon as possible, so that, whenever crowdsourcing terminates, the labels received for tasks are of as high a quality as possible. One application of such budget-uncertain crowdsourcing is large-scale, automated A/B/n testing, where stopping rules may be evaluated online for many concurrent crowdsourcings. There are many further directions to explore and extend this research. One direction is the integration of cost forecasting with different crowd allocation methods. We focused our validation on applying cost forecasting to Opt-KG, a popular and effective crowd allocation method for fixed sets of microtasks, free of parameters and focused on the overall accuracy of the generated task labels. Likewise, the statistical decision process of cost forecasting brings to mind Markov decision processes (MDP) and POMDP, and MDP and POMDP are common approaches to algorithmic crowdsourcing [27]. Indeed, Opt-KG itself defines a policy using MDP [18]; thus our results here demonstrate that cost forecasting can be fruitfully interfaced with MDPs. More generally, as improved allocation methods are developed, it is important to examine if and how they can benefit from cost forecasting or other methods geared towards applying an allocation strategy to a set of crowd-generated microtasks. Developing methods that can directly allocate workers without assuming a fixed and known number of tasks would be an especially useful area of research. Another direction for future research is to better understand how a crowdsourcer can integrate information about a particular crowdsourcing problem of interest. For example, a crowdsourcer may already have a good idea about the difficulties of new tasks, perhaps from performing a pilot study. This information can be integrated into cost forecasting by choosing a non-uniform prior distribution for θ. What about other cost forecasting parameters such as δ, n max or a different growth rule? A crowdsourcer will wish to balance their needs for accuracy and budget constraints when choosing these parameters. Low-budget, pilot crowdsourcings may again be fruitful to help select these parameters and it is worth studying procedures for estimating their values. Our formulation of cost forecasting is simple in several ways, but can be fruitfully extended. We based our cost forecasting calculations on the Hoeffding bound for simplicity. This leaves considerable room for improvement as the Hoeffding bound is not particularly tight, and better results may be achieved using a tighter bound such as the empirical Bernstein inequality [28,29]. Further improvements include using a learning procedure where the estimated unseen task completion cost is dynamically learned as crowdsourcing is performed, although we found some support (Sec. 5.4) using an increasing-cost model that our basic cost forecasting procedure can already handle some changing costliness of new tasks. 
We assume reliable workers, but worker reliability can be readily incorporated by using the worker reliability (or "one-coin") variant of Opt-KG or by incorporating worker reliability into whatever allocation method the crowdsourcer wishes to use. We also assume the costs to request new tasks and to request responses to existing tasks are the same, but of course in practice these may differ [20]. However, cost forecasting can automatically capture any task cost differential by modifying E[n] to include a different proposal cost. Likewise, the completion costs of unseen tasks are likely to vary over the course of a crowdsourcing, a phenomenon we investigated using an increasing-cost model. While such models are useful, it is also important to understand how these costs may vary in practice (see [30]). Do workers really run out of low-hanging fruit when performing crowd-generated microtask crowdsourcing? Experiments are needed to better understand how the set of tasks changes over time as the crowd proposes new tasks. Finally, our cost forecasting Growth Rules focus on the completion costs of tasks, as probabilistic cost estimators can be applied to them. Yet it would be especially interesting to use other quantities for growth rules. For example, if one can estimate the expected gain of novel information when requesting a new task, then a crowdsourcer can design crowd-generated microtask crowdsourcing to achieve goals such as crowdsourcing until a certain number of interesting or novel tasks are generated.
2019-12-10T23:23:54.000Z
2019-12-10T00:00:00.000
{ "year": 2020, "sha1": "9315eb55283bf50b45a601366c7096a55d89201a", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0244245&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a1bcf1b97d7b95f7d0f37452148c20d33465dbb6", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics", "Medicine" ] }
263611237
pes2o/s2orc
v3-fos-license
Identification of Galaxy Protoclusters Based on the Spherical Top-hat Collapse Theory

We propose a new method for finding galaxy protoclusters that is motivated by structure formation theory and also directly applicable to observations. We adopt the conventional definition that a protocluster is a galaxy group whose virial mass M vir < M cl at its epoch, where M cl = 10 14 M ⊙ , but would exceed that limit when it evolves to z = 0. We use the critical overdensity for complete collapse at z = 0 predicted by the spherical top-hat collapse model to find the radius and total mass of the regions that would collapse at z = 0. If the mass of a region centered at a massive galaxy exceeds M cl , the galaxy is at the center of a protocluster. We define the outer boundary of a protocluster as the zero-velocity surface at the turnaround radius so that the member galaxies are those sharing the same protocluster environment and showing some conformity in physical properties. We use the cosmological hydrodynamical simulation Horizon Run 5 (HR5) to calibrate this prescription and demonstrate its performance. We find that the protocluster identification method suggested in this study is quite successful. Its application to the high-redshift HR5 galaxies shows a tight correlation between the mass within the protocluster regions identified according to the spherical collapse model and the final mass to be found within the clusters at z = 0, meaning that the regions can be regarded as bona fide protoclusters with high reliability. We also confirm that the redshift-space distortion does not significantly affect the performance of the protocluster identification scheme.

Introduction

Galaxy clusters are typically defined as objects that are bound and dynamically relaxed with total mass of M tot > 10 14 M ⊙ (e.g., Overzier 2016). As the progenitors of present-day galaxy clusters, protoclusters must have formed in the densest environments in the early universe, and the majority of the galaxies in protoclusters probably formed and evolved earlier than those in other environments (Kaiser 1984). Many observational efforts have been made to search for protoclusters at high redshifts. Deep-field spectroscopic surveys are a direct approach for finding protoclusters (e.g., Steidel et al. 1998, 2000, 2005; Lee et al. 2014c; Toshikawa et al. 2014; Cucciati et al. 2014; Lemaux et al. 2014; Chiang et al. 2015; Diener et al. 2015; Wang et al. 2016; Calvi et al. 2021; McConachie et al. 2022). However, the survey volume should be very large to include many such rare objects, and spectroscopic observations are currently too time-consuming to carry out large-volume blind surveys of the deep universe. Therefore, large-area imaging surveys have often been conducted to search for overdense regions at high redshifts by utilizing narrow-band photometry for emission-line galaxies or the photo-z/dropout technique (e.g., Shimasaku et al. 2003; Ouchi et al. 2005; Toshikawa et al. 2012, 2016; Cai et al. 2017; Toshikawa et al. 2018; Shi et al. 2019; Yonekura et al. 2022).

Some energetic events are expected to happen in overdense regions at high redshifts. High-z radio galaxies are believed to be the potential progenitors of brightest cluster galaxies and, thus, they are assumed to be a proxy for protoclusters (Pascarelle et al. 1996; Le Fevre et al. 1996; Venemans et al. 2002, 2004, 2005, 2007; Hatch et al. 2011b,a; Hayashi et al.
2012; Cooke et al. 2014; Shen et al. 2021). Although it is still debated (see Husband et al. 2013; Hennawi et al. 2015), high-z QSOs are also known to trace overdense regions (Djorgovski et al. 2003; Wold et al. 2003; Stevens et al. 2010; Falder et al. 2011; Adams et al. 2015). Lyα blobs can be lit by a huge amount of ionizing photons emitted from AGNs or starburst galaxies in dense regions which still bear sufficient cold gas as fuel. High-z submillimeter galaxies are regarded as the progenitors of massive ellipticals (e.g., Lilly et al. 1999; Fu et al. 2013; Toft et al. 2014). Therefore, Lyα blobs or overdensity regions of submillimeter galaxies are also used as indicators of protocluster regions (Stevens et al. 2003; Greve et al. 2007; Prescott et al. 2008; Daddi et al. 2009; Prescott et al. 2012; Umehata et al. 2014, 2015; Oteo et al. 2018; Cooke et al. 2019; Rotermund et al. 2021; Álvarez Crespo et al. 2021). Gas absorption lines are another probe of protoclusters that does not rely on galaxy distribution: high-z overdense regions that still contain plenty of intergalactic neutral hydrogen can be detected by examining the Lyα forests in the spectra of background QSOs or star-forming galaxies (e.g., Lee et al. 2014b; Stark et al. 2015; Cai et al. 2016, 2017; Newman et al. 2022).

While the observations targeting protoclusters have used a variety of selection techniques, they commonly focus on the identification of overdense regions. The protoclusters that are expected to eventually form massive clusters with total mass of M tot > 10 15 M ⊙ have an overdensity of δ ∼ 10 − 12 for typical galaxies or Lyα emitters within an aperture radius of R ∼ 15 cMpc at z ∼ 2 − 3 (e.g., Lemaux et al. 2014; Cucciati et al. 2014; Cai et al. 2017). Toshikawa et al. (2018) identify protocluster candidates in a wide field of > 100 deg 2 by selecting the regions that show a galaxy overdensity significance level higher than 4σ within an aperture radius of R ∼ 16 cMpc at z ∼ 3.8. This significance level corresponds to the overdensity of the regions that end up forming halos of M halo ≳ 5 × 10 14 M ⊙ . The overdensity significance level is adopted to achieve ∼ 80% reliability, at the cost of completeness (Toshikawa et al. 2016).

Several theoretical studies have been conducted to examine the properties of protocluster regions. Chiang et al. (2013) and Muldrew et al. (2015) investigate the matter and galaxy overdensity in the areas enclosing protoclusters using the semi-analytic model of Guo et al. (2011) based on the Millennium simulation (Springel et al. 2005). In the two studies, protoclusters are traced using halo merger trees. They show that the protocluster galaxies are more widespread in larger clusters, and that the distribution of protocluster galaxies largely shrinks from z = 4 to z = 2. Chiang et al. (2013) also show that, in a top-hat box of (15 cMpc) 3 , the galaxy overdensity of protoclusters strongly correlates with the final cluster mass. Wang et al. (2021) develop a method to identify protoclusters from the halo distribution of an N-body simulation using an extension of the Friend-of-Friend (FoF) algorithm. They show that the approach reasonably recovers protoclusters with high completeness.
Hydrodynamical simulations are also used to study the formation and evolution of clusters of galaxies. Given that the mean separation of rich clusters is ∼ 70 cMpc (Bahcall & West 1992), it is necessary to use a simulation box larger than about 1 cGpc 3 to study the formation and evolution of Coma-like clusters accurately and with high statistical significance. However, due to the limitation of current computing resources, it has been nearly impossible to conduct hydrodynamical simulations in such a large box while keeping a resolution below ∼ 1 kpc. As a compromise between the need for an extremely large dynamic range and the limited computing resources, the zoom-in technique is widely adopted in hydrodynamical simulations of galaxy clusters (Bahé et al. 2017; Choi & Yi 2017; Truong et al. 2018; Yajima et al. 2022; Trebitsch et al. 2021). In these simulations, cluster regions are identified in advance and zoomed in on in the initial conditions, and protoclusters are traced by using merger trees.

It should be noted that, in previous studies, protoclusters have been defined inconsistently between observations, theories, and numerical simulations. If a protocluster is defined as the group of all the objects that will eventually collapse into a cluster, their initial distribution typically spans more than tens of cMpc (Chiang et al. 2013; Muldrew et al. 2015, 2018). In this definition, protoclusters can be neither self-bound nor compact, and thus a protocluster is hardly viewed as a physical object in which galaxies are associated with each other in a common environment. Furthermore, diachronic information is not available in observations. Therefore, observers have focused on the identification of sufficiently overdense regions. This is justified by the fact that larger structures in the current universe are more likely to originate from more massive progenitors at high redshifts (Chiang et al. 2013; Muldrew et al. 2015). The extent of the overdense region varies between protoclusters. Since the virial radius only encloses the objects which are already bound to the local density peak, it inevitably misses a number of progenitors which are still in the course of infall, outside the virialized regions. Because the proto-objects of larger clusters are more extended (Muldrew et al. 2015), a systematic approach is required to define the boundary (or spatial extent) of protoclusters, which should be based on the physical conditions of the specific environments of interest.

This study aims at proposing a new scheme for the identification of protoclusters that is motivated by structure formation theories and also directly applicable to observations. Our prescription is justified and calibrated on the cosmological hydrodynamical simulation Horizon Run 5 (hereafter HR5, Lee et al. 2021; Park et al. 2022). HR5 covers a volume of (1048.6 cMpc) 3 with a spatial resolution down to about 1 kpc. Thanks to its large volume, HR5 enables us to look into the formation and evolution of galaxies in a wide range of environments. By taking advantage of HR5, we derive a scheme applicable to observations to find the centers of protocluster candidates based on the spherical top-hat collapse (SC) model. The scheme also defines the physical region of a given protocluster as the volume within the turnaround radius from its center. The turnaround radius is the zero-velocity surface at which gravitational infall counterbalances the local Hubble expansion (Gunn & Gott 1972).
This paper is organized as follows.In Section 2, we briefly introduce the HR5 simulation, a structure finding and a tree building algorithm, and the scheme to identify clusters using a low resolution version of HR5.In Section 3, we present the methodology to find the candidate regions for protoclusters from the galaxy distribution.The method for finding the boundary of protoclusters is presented in Section 4. We discuss and summarize this study in Section 5. Additional details of structure identification, merger tree building schemes, the SC models, and protocluster identification are given in Appendix. Horizon Run 5 HR5 is a cosmological hydrodynamical zoomed simulation aiming at covering a wide range of cosmic structures in a 1.15 cGpc 3 volume, with a spatial resolution down to ∼ 1 kpc.We adopt the cosmological parameters of Ω m = 0.3, Ω Λ = 0.7, Ω b = 0.047, σ 8 = 0.816, and h = 0.684 that are compatible with the Planck data (Planck Collaboration et al. 2016).We generate the initial conditions using the MUSIC package (Hahn & Abel 2011), with a second-order Lagrangian scheme to launch the particles (2LPT; Scoccimarro 1998;L'Huillier et al. 2014).HR5 is conducted using a version of the adaptive mesh refinement code RAMSES (Teyssier 2002) upgraded for an OpenMP plus MPI two-dimensional parallelism (Lee et al. 2021).We generated a number of random sets and selected the one that reproduced the theoretical baryonic acoustic oscillation features most closely.While the volume of the zoomed region is still somewhat insufficient for accurate statistical analyses of the most massive galaxy clusters and the impact of the very large-scale structures, the whole simulation box does manage to encompass the relevant large-scale perturbation modes, and provides us with a representative volume corresponding to the input cosmology. The volume of HR5 is set to have a high-resolution cuboid zoomed region of 1048.6 × 119.0 × 127.2 cMpc 3 crossing the center of the volume.The effective volume of the region is ∼ (260 cMpc) 3 .The cosmological box has 256 root cells (level 8, ∆x = 4.10 cMpc) on a side and the zoomed region has 8192 cells (level 13, ∆x = 0.128 cMpc) along the long side in the initial conditions.The high-resolution region initially contains 8192×930× 994 cells and dark matter particles, and is surrounded by the padding grids of levels from 12 to 9. The dark matter particle mass is 6.89×10 7 M ⊙ in the zoomed region, and increases by a factor of 8 with a decreasing grid level.The cells are adaptively refined down to ∆x ∼ 1 kpc when their density exceeds eight times the dark matter particle mass at level 13.HR5 was proceeded through z = 0.625. Physical processes driving the evolution of baryonic components are implemented in subgrid forms in RAMSES.Gas cooling is computed using the cooling functions of Sutherland & Dopita (1993) in a temperature range of 10 4 − 10 8.5 K and fine-structure line cooling is computed down to ∼ 750 K using the cooling rates of Dalgarno & McCray (1972).RAMSES approximates cosmic reionization by assuming a uniform UV background (Haardt & Madau 1996).The statistical approach of Rasera & Teyssier (2006) is adopted to compute a star formation rate.Supernova feedback affects the interstellar medium in thermal and kinetic modes (Dubois & Teyssier 2008) and AGN feedback operates in radio-jet and quasar modes, relying on the Eddington ratio (Dubois et al. 
2012). Massive black holes (MBHs) are seeded with an initial mass of 10 4 M ⊙ in grids when the gas density is higher than the threshold of star formation and no other MBH is found within 50 kpc (Dubois et al. 2014b). MBHs grow via accretion and coalescence, and the angular momentum obtained from the feeding processes is traced (Dubois et al. 2014a). Metal enrichment is computed using the method proposed by Few et al. (2012) based on a Chabrier initial mass function (Chabrier 2003), and in particular the abundances of H, O, and Fe are traced individually. One can find further details of HR5 in Lee et al. (2021).

Figure 1. The white dotted lines display the Lagrangian volumes enclosing the dark matter particles that end up forming clusters at z = 0 in HR5-Low. We assume that all the objects in HR5 located inside the same Lagrangian volume are the progenitors of the corresponding cluster. The thickness of the projected volume is 8.2 cMpc (top), 13.8 cMpc (middle), and 21.5 cMpc (bottom), fully containing each cluster in the projected direction. In the right panels, M enclosed presents the total mass enclosed by the Lagrangian volume. All the objects inside the Lagrangian volume are traced back to high redshifts using their merger trees in this study.

Identification of Clusters Using a Low Resolution Simulation

We identify FoF halos and self-bound objects embedded in FoF halos using PGalF (Kim et al. 2022). We also construct the merger trees of self-bound objects using ySAMtm (Jung et al. 2014; Lee et al. 2014a) based on stellar particles for galaxies and dark matter particles for halos that contain no stars. The details of the structure finding and tree building algorithms are given in Appendix A.

In this study, we define a galaxy cluster as a virialized object that has acquired a total mass of M tot > 10 14 M ⊙ at or before z = 0. The mass cut is adopted following the conventional mass range of galaxy clusters (e.g., Overzier 2016), and can be varied if a different mass range is necessary. Protoclusters are the progenitors of galaxy clusters that have not yet reached the cluster-scale mass range. By this definition, both clusters and protoclusters can be found at any epoch. According to this definition of a galaxy cluster, we cannot directly identify all the clusters and protoclusters in HR5 as the simulation stopped at z = 0.625. At this redshift, we find 63 clusters with M tot > 10 14 M ⊙ in the zoomed region. Objects having mass contamination higher than 0.7% by the lower level particles are excluded. However, there can be many structures that are not massive enough to be identified as clusters at z = 0.625 but will evolve to cluster-scale halos by z = 0.

To find clusters and protoclusters in the last snapshot of HR5 (i.e., z = 0.625), we additionally conduct a low-resolution simulation, HR5-Low (∆x ∼ 16 kpc), based on the initial conditions and the model parameters used in HR5. We identify structures from the snapshots of HR5-Low at z = 0 and 0.625 using PGalF. At z = 0, we find 2,794 objects of M 0 tot ≥ 10 13 M ⊙ and 189 objects of M 0 tot ≥ 10 14 M ⊙ with the contamination tolerance mentioned above. The dark matter particles are traced back to z = 0.625 using their IDs, to search for the progenitors of the clusters. We then construct the Lagrangian volume (hereafter LV, for details see Oñorbe et al.
2014) of the progenitors using the uniform cubic grids enclosing the dark matter particles that finally assemble the clusters. We assume that the LVs constructed from HR5-Low also enclose the clusters or protoclusters in HR5. We present the details of the identification scheme and the reliability of this approach in Appendix B.

Figure 1 shows the dark matter distribution in three HR5-Low cluster regions at z = 0 (left), the same regions of HR5-Low (middle), and HR5 (right) at z = 0.625. The structure colored in yellow is the FoF halo of each cluster (left), its progenitors at z = 0.625 (middle), and its counterpart in HR5 (right panels). The grids enclosed by dotted lines mark the LVs of the objects constructed by tracing the dark matter particles. This figure demonstrates that the two simulations are in good agreement despite their different resolutions. The position of a structure may show a slight offset between the two different-resolution simulations (HR5-Low and HR5) at z = 0.625, partly due to the adaptive time step in RAMSES.

Identification of protoclusters

We define 'protoclusters' as galaxy groups whose total mass within R vir is currently less than 10 14 M ⊙ at their epochs but would exceed that limit by z = 0. The physical extent of a protocluster is defined as the spherical volume within the turnaround radius or the zero-velocity surface. The concept is schematically visualized in Figure 2. A protocluster is located at the center of a sphere that has the mean density δ and encloses a total mass exceeding 10 14 M ⊙ . The critical overdensity δ sc m = δ is given by the spherical top-hat theory, and the mass contained is the expected virial mass of the region at z = 0. It should be noted that only the galaxies within the turnaround radius are called the protocluster member galaxies, and that the cluster progenitor galaxies can be spread out to much larger radii.

We first identify the authentic proto-objects by tracing their merger trees in Section 3.1, and then present a systematic approach for finding the candidate regions enclosing protoclusters from the galaxy distribution in a snapshot, without diachronic information, in Section 3.2.

Identification of Proto-objects using Merger Trees

We search for the bona-fide progenitors of each cluster or protocluster of HR5 at z = 0.625 by tracing backward their merger histories. All the progenitors of each object are identified in all snapshots. Note that we do not call all the progenitors the protocluster galaxies, as protocluster galaxies will be defined as those within the turnaround radius. We define the most massive galaxy among the progenitors in a snapshot as the central galaxy. Thus, the central galaxy of a protocluster may change over time, depending on its mass accretion history.

Figure 2. A schematic diagram presenting the definition of galaxy protoclusters and clusters. Clusters are groups of galaxies with Mvir currently greater than 10 14 M⊙. Protoclusters are those with Mvir < 10 14 M⊙ currently, but will have Mvir ≥ 10 14 M⊙ by z = 0. The future virial mass is estimated from the total mass within the region having the mean overdensity δ equal to the critical overdensity δ sc m for complete collapse at z = 0 predicted by the spherical top-hat theory. The physical volume of protoclusters is defined to be the region within the turn-around radius RTA.

The bottom panel of Figure 3 shows the distribution of the galaxies belonging to clusters or protoclusters in comoving space at z = 0.625. The upper four panels show their progenitors that are traced along merger trees. Red, yellow, and blue dots mark the galaxies with M ⋆ > 10 11 M ⊙ , 10 10 − 10 11 M ⊙ , and 10 9 − 10 10 M ⊙ , respectively. It can be seen that the overall locations of protoclusters hardly change over time: the initial conditions are essentially preserved for these massive objects sitting at deep gravitational potential minima. On the other hand, the systems of cluster progenitor galaxies have been monotonically shrinking since z ∼ 2.4. At redshifts higher than z ∼ 2.4, however, their extent is roughly static at R ∼ 10 − 30 cMpc, and the systems start to fade away. The three redshifts of z = 2.4, 3.1, and 4.5 are the target redshifts of the ODIN survey for LAEs (Ramakrishnan et al. 2022). We will discuss the results of this study mainly at these redshifts.

Identification of Protocluster Candidates based on the Spherical Top-Hat Collapse Model

In this subsection, we propose a systematic method to identify the candidate regions enclosing protoclusters from the galaxy distribution based on the SC model.

Overdensity threshold for complete collapse at z = 0

We define protocluster candidate regions as the spherical volumes that enclose a total mass greater than 10 14 M ⊙ and will collapse completely at z = 0 according to the overdensity threshold given by the spherical top-hat collapse model. We will search for the centers of protoclusters inside the spherical regions.

In the spherical top-hat collapse model, an overdense region at an epoch will contract into a point at some stage if its overdensity is equal to the critical threshold density. We find this threshold density as a function of redshift for two types of cosmology. In the Einstein de-Sitter (EdS) universe with Ω m = 1, a homogeneous density sphere that collapses at z = 0 reaches its maximum radius at z = 0.59 with δ sc m = 9π 2 /16 − 1 ≃ 4.55, where δ sc m is the spherical top-hat matter overdensity. See Appendix C for more details. For comparison, the linear theory predicts an overdensity δ lin m ≃ 1.062 at t max in the EdS universe.

On the other hand, the SC model does not have an exact analytic solution in a flat universe with a nonzero cosmological constant, i.e., Ω m + Ω Λ = 1. We thus numerically solve the second-order non-linear differential equation of the spherical top-hat overdensity δ sc m given in Pace et al.
(2010):

d 2 δ sc m /da 2 + (3/a + E′/E) dδ sc m /da − (4/3) (dδ sc m /da) 2 /(1 + δ sc m ) − (3/2) [Ω m /(a 5 E 2 (a))] δ sc m (1 + δ sc m ) = 0, (1)

where the derivatives are with respect to the expansion factor a, and E(a) = H(a)/H 0 = (Ω m /a 3 + Ω Λ ) 1/2 , where H(a) and H 0 are the Hubble parameter at the epoch of expansion factor a and at z = 0 (a = 1), respectively. The density parameters Ω m = 0.3 and Ω Λ = 0.7 are adopted in this calculation. We numerically search for the initial conditions δ sc,i m and (dδ sc m /da) i = δ sc,i m /a i at a i = 10 −3 that lead to δ sc m → ∞ at z = 0, and find the solution δ sc,i m = 2.16 × 10 −3 . The evolution of δ sc m is shown in Figure 4 (dashed line). For the general flat universe with non-zero Ω Λ , a fitting formula for the numerical solution of the SC model for the objects collapsing at z = 0 is given in Appendix C.

Overdensity of the HR5 regions to be collapsed

The SC model gives insight into the evolution of overdensities based on the simple assumption of a homogeneous density distribution in a spherical region. However, in the real universe, structures are generally neither spherical nor homogeneous. To examine if the simple assumption is applicable to practical cases, we compare the critical overdensity predicted by the SC model with the actual overdensity of the spherical region at a high redshift that encloses M 0 tot , the total mass of each cluster at z = 0 measured in the HR5-Low simulation. The sphere is centered at the most massive galaxy among all the cluster progenitors at the redshift.

The open circles in the upper panel of Figure 4 show the mean matter overdensity within the radius R(M 0 tot ) from the most massive progenitor of each of the 189 HR5-Low clusters. It should be noted that δ m [R(M 0 tot )] for HR5 clusters agrees quite well with the prediction of the SC model (dashed line) at all redshifts in the flat ΛCDM universe. This result demonstrates that the SC model is remarkably accurate in the ΛCDM universe at the mass scale of galaxy clusters, and thus the critical density threshold is applicable for identifying protocluster regions.

Identification of the regions enclosing protoclusters

We have shown a good agreement between the spherical top-hat overdensity predicted by the SC model and that actually measured for the HR5 clusters. However, to propose a protocluster identification scheme applicable to observations, it is necessary to find the relation between the total mass and stellar mass at the cluster mass scale. For the clusters with log M 0 tot /M ⊙ > 14, the bottom panel of Figure 4 shows the stellar-mass to total-mass ratio M ⋆ /M tot within the spherical region having the critical overdensity δ sc m at redshift z. Open diamonds are the ratio when only the stars of the galaxies
with M gal,⋆ > 2 × 10 9 M ⊙ are used, and open circles are those when all stars are taken into consideration.We provide a fitting formula for the stellar-total mass relation in the following form: (2) This formula can fit the ratio well as a function of redshift with (α, β, γ) = (−0.055,1.903, −1.915) when the galaxies of M gal,⋆ > 2 × 10 9 M ⊙ are used (shown as the dotted curve fitting the diamonds in the bottom panel of Figure 4).When all stellar components are used (open circles), the best fit is made with the parameter set (−0.057, 1.755, −1.855).We note that the stellar-tototal mass relation is insensitive to mass in the case of the proto-objects of M 0 tot > 10 13 M ⊙ .This is because the region having the mean overdensity δ sc m is typically so large that the ratio converges to a value at a given redshift.The stellar-to-total mass ratio relation can be changed if the parameters of subgrid physics regulating star formation activities are changed.Therefore, the relation needs to be calibrated based on observations. The protocluster identification starts with finding the candidate regions that enclose protoclusters.At a given epoch, we visit galaxies starting from the most massive ones, and inspect the spherical volume centered at the galaxy.The radius of the sphere is increased until the overdensity drops to the critical value δ sc m at that epoch.If the total mass contained within the sphere exceeds 10 14 M ⊙ , the galaxy can be assumed as a candidate for the center of a protocluster.The fitting formula in Equation 2 is used to convert the observed stellar mass to the total mass. The central candidate galaxies do not always locate at the density peak of each sphere.Thus, we compute the center of mass (CM) from all the galaxies with M gal,⋆ > 2×10 9 M ⊙ located inside the spherical regions.To find the most representative center of galaxy distribution, we iterate the identification process until the CM converges to |⃗ x i−1 − ⃗ x i | < ϵ,, where ⃗ x i is the CM at i th iteration.In this study, we adopt ϵ = 0.25 cMpc for efficient searching since a smaller ϵ does not notably affect the results.A sphere is selected as a region enclosing protoclusters when it finally has M tot ≥ 10 14 M ⊙ after the iteration process. In dense environment, the separations between the centers of the protocluster candidates can be very small.We combine a protocluster candidate region i with another one j if D ij /R i < 1.0 or D ij /R j < 1.0, where D ij is the distance between the centers and R i and R j are the radii of the spheres within which the mean overdensity meets δ sc m .In this case, we define the most massive sphere as the central one, and accordingly, M tot of the central one is set as the estimated total mass of a spherical region group (SRG). Reliability of the Protocluster Identification Scheme We assume that the objects in the spherical regions of protoclusters identified based on the SC model eventually form cluster-scale objects by z = 0. We evaluate the reliability of this approach by comparing the total mass of an SRG (M SRG tot ) at a redshift z with the mass M 0,SRG tot that ends up being inside clusters at z = 0.The latter is estimated using the final total mass weighted by the stellar mass of the cluster progenitor galaxies found 3. 
Larger dots are the SRGs with cluster-scale mass (M 0,SRG tot ≥ 10 14 M⊙).The black solid, dotted, and dashed curves delineate the region of average final mass of M 0,SRG tot = 10 14 , 10 14.25 , and 10 14.5 M⊙, respectively.The SRGs in the upper left corner demarcated by red lines are discarded in this work as protocluster candidates. within the SRG as follows: where G is the set of the galaxies enclosed by an SRG, P i is the set of the progenitor galaxies of a cluster i, M (P i ) is the mass sum of P i , and M 0 tot,i is the final total mass of cluster i.The relation between M SRG tot and M 0,SRG tot tells us how reliably the spherical top-hat model predicts the final mass of enclosed objects. It is reasonable to expect that the growth history of an SRG can be affected by its environment and the above relation may depend on the history.So we inspect if the final mass depends on both M SRG tot and mass growth environment.As a proxy of the environment, we choose D 1 /R SRG , where D 1 is the distance to the nearest neighbor SRG and R SRG is the radius of the target SRG.An SRG should have the total mass larger than half the total mass of the target SRG of interest to be qualified as a neighbor. Figure 5 , which justifies our use of the spherical overdensity criterion for identifying the protocluster centers.In particular, 90% of the SRGs whose M SRG tot is larger than 10 14.2 M ⊙ end up having M 0,SRG tot > 10 14 M ⊙ , indicating that they probably contain the authentic protoclusters.This illustrates the high reliability of our identification scheme.This figure also demonstrates that the final mass to be included in clusters is rather independent of the environment represented by the nearest neighbor SRG distance.We find, however, that the purity slightly improves if we discard the isolated small-mass SRGs with M SRG tot < 10 14.15 M ⊙ and D 1 /R SRG > 2.5 (the region enclosed by double dot-dashed lines).Based on these criteria, we examine the purity and completeness of our approach in identifying the bona-fide protoclusters in Appendix D. We find that the identification scheme recovers the authentic protoclusters with high reliability.We also show in Appendix E that the redshift-space distortion (RSD) does not significantly affect the performance of the protocluster identification scheme. Protocluster member galaxies within Turnaround Radius In numerical simulations and theories, it is relatively easy to define a protocluster as a group of objects that eventually contracts and forms a cluster.As described in Section 3.1, the progenitors of cluster galaxies can be traced using their merger trees in numerical simulations, and the corresponding protoclusters can be identified. However, as shown in Figure B3, the progenitor galaxies of clusters are widespread up to ∼ 30 cMpc at high redshifts and it is not reasonable to adopt all the progenitor galaxies as the physically-associated members of protoclusters.Most observations identify protoclusters by finding sufficiently overdense regions of galaxies (see Overzier 2016, and references therein).However, there has been no consensus on the value of the overdensity defining the membership of protocluster galaxies.Applying the virial radius in identifying protocluster galaxies is not so desirable as protoclusters are supposed to be the objects still under the process of formation and virized regions of protoclusters tend to vanish quickly as redshift increases. 
We thus propose to define the protocluster member galaxies as those within the zero proper-velocity surface from the protocluster center. The distance from a density peak to the zero-velocity surface is dubbed the turnaround radius R TA . The turnaround radius is the distance to the spherical surface on which the gravitational infall counterbalances the Hubble expansion (Gunn & Gott 1972). The turnaround radius provides a theoretically motivated overdensity for defining the protocluster region, and also makes protoclusters physical objects whose member galaxies can have some degree of conformity. In this section we present a scheme for finding R TA from the observed galaxy distribution.

Turnaround Radius

To measure R TA from the protocluster centers in HR5, we construct the matter (dark matter, gas, and stars) density and peculiar velocity fields on a uniform grid with a pixel size of ∆x = 0.128 cMpc. The proper radial velocity v r at r 1 relative to a local density peak at r 0 is given by v r = H(z)|r| + v · e r , where r = r 1 − r 0 , H(z) is the Hubble parameter at redshift z, e r is the unit vector of r, and v is the peculiar velocity at r 1 relative to the mean velocity of matter within |r|. The turnaround radius is measured by finding the radius of the shell on which the average v r becomes zero.

As an illustration, Figure 6 shows the matter-density and velocity fields of an HR5 protocluster region at four redshifts. The blue and yellow circles indicate R vir and R TA , respectively, centered at the most massive galaxy in the field at each epoch. Arrows are the proper velocity vectors projected onto a 4 cMpc-thick slice centered at the galaxy. The overdensity of the protocluster increases with time, and consequently, both R TA and R vir increase with time too. It can be seen that R vir contains only the very center of the protocluster and becomes uninterestingly small at high redshifts. On the other hand, R TA is much larger than R vir , does separate the inner collapsing region from the outer expanding space, and embraces the high-density region of intersecting filaments of galaxies. In this sense R TA defines the outer boundary of the protocluster, and the galaxies within R TA can be called its 'members'. Even though protocluster members are identified only within a spherical region, their distribution is quite anisotropic as the region encloses connecting filaments.

Figure 7 shows R TA of the HR5 protoclusters and proto-groups at four redshifts as a function of their final total mass at z = 0. R TA has a good correlation with the final mass. The tightness of the correlation increases toward low redshifts. The linear Pearson correlation coefficient is 0.634 at z = 4.5 and increases to 0.81 at z = 1.0 in the log R TA − log M 0 tot /M ⊙ plane. We have also checked if the turnaround radii measured from the most massive galaxies in SRGs are accurate compared to those of the bona fide protoclusters, and find that more than 80% of the SRGs have R TA identical to that of the bona fide protoclusters (Appendix F).

Correlations between Turnaround Radius, Virial Mass, and Virial Radius

In this section we study the general nature of the turnaround radius by inspecting its relation with the virial mass and radius. The turnaround radius is known to be 3-4 times the virial radius of massive objects in the local universe (Mamon et al. 2004; Wojtak et al. 2005; Rines & Diaferio 2006; Cuesta et al. 2008; Falco et al.
2013). The virial mass of an object is defined as M vir = 4πr 3 vir ∆ c ρ c /3, where r vir is the virial radius within which the mean matter density is ∆ c times the critical density of the universe, ρ c = 3H 2 /8πG, where H is the Hubble parameter at z and ∆ c is computed using the fitting formula derived by Bryan & Norman (1998) for a cosmology with Ω Λ > 0: ∆ c = 18π 2 + 82x − 39x 2 , where x = Ω m (z) − 1. Meanwhile, the total mean radial velocity at r from the center of a bound object is the sum of the Hubble expansion velocity and the mean infall peculiar velocity: ⟨v r ⟩ = H(z)r + ⟨v infall (r)⟩, where ⟨v infall (r)⟩ is the averaged radial peculiar velocity of matter in a spherical shell at radius r. In the region where the Hubble flow starts to dominate and the total mean radial velocity becomes positive, Falco et al. (2014) found a good approximation for the infall velocity profile, ⟨v infall (r)⟩ ≃ −a v vir (r/r vir ) −b , where v vir = (GM vir /r vir ) 1/2 is the circular velocity at r vir , and a and b are free fitting parameters. The best-fit values are a = 0.8 ± 0.2 and b = 0.42 ± 0.16 at z = 0 in the N-body simulations of a ΛCDM universe with Ω m = 0.24 and h = 0.73 (Falco et al. 2014). Since ⟨v r ⟩ = 0 at the turnaround radius, the ratio of R TA to r vir can be reduced to R TA /r vir = [a (∆ c /2) 1/2 ] 1/(b+1) by combining the equations above with r = R TA and ⟨v r ⟩ = 0. Thus, the ratio R TA /r vir is expected to be ∼ 4.3 and in the range of 3.1 - 6.2 at z = 0.

We now inspect the relation of R TA with M vir or R vir directly for the HR5 protocluster/group regions. Measurements are made relative to the most massive galaxy in each region. Figure 8 demonstrates the tight correlation between R TA and the virial mass at each epoch. Objects are distinguished in color according to their total mass at z = 0. It can be noticed that the relation moves slowly downward with time, and R TA decreases at the same virial mass at lower redshifts.

A weak evolution of the turnaround-to-virial radius ratio can be seen in Figure 9 for protoclusters (red, M 0 tot ≥ 10 14 M ⊙ ) and the proto-groups (blue, M 0 tot = 10 13 − 10 14 M ⊙ ). The median of the ratio slowly decreases from 4.8 at z = 6 to 3.9 at z = 0.625 for protoclusters or clusters (red). The ratio decreases faster at z < 2 than at earlier times as ∆ c becomes significantly lower. The ratio also decreases a little faster for proto-groups. This seems to be caused by the disturbance of the velocity field, which becomes more severe for smaller-mass objects at lower redshifts. The major origin of this weak redshift dependence will be discussed in the next section. Our measurement of R TA /R vir at z = 0.625 is consistent with the ratio range of 3.1 - 6.2 derived based on the semi-analytic approach of Falco et al. (2014).

Matter Overdensity within Turnaround Radius

The tight correlation between R TA and R vir implies a nearly constant overdensity within R TA at z > 2. We measure the average matter overdensity of the HR5 proto-objects inside the sphere of radius R TA . Figure 10 presents the matter overdensity δ TA m as a function of R TA for all proto-objects. The large dots mark the protoclusters and the small dots are proto-groups with final mass of M 0 tot = 10 13 − 10 14 M ⊙ . The turnaround radius
R TA of protoclusters can temporarily decrease and δ TA m can jump up when they undergo close encounters with neighbors. In order to mitigate the impact of such temporary events, we choose to use the lower boundary (bottom 5%) of the distribution of δ TA m shown in Figure 10 as the threshold overdensity corresponding to R TA . When protoclusters have close neighbors, the radius found with the lower boundary will be somewhat larger than the actual turnaround radius directly measured, and the protocluster regions are allowed to overlap. The bottom 5% values of the distribution of δ TA m are 4.96, 5.04, 5.30, and 6.55 at z = 4.5, 3.1, 2.4, and 1.0, respectively. The median and 1σ dispersion are 5.63 (σ = 0.58), 5.98 (1.01), 6.17 (1.07), and 7.71 (1.91), respectively.

Like R TA /R vir , δ TA m also weakly evolves over time, with small scatter for protoclusters. It should be noted that δ TA m hardly depends on R TA or the final cluster mass of the protoclusters (large dots). On the other hand, δ TA m of the low-mass structures with relatively small R TA shows stronger evolution. The scatter of δ TA m at small R TA emerges when the field of interest is disturbed by neighboring structures.

Figure 11 illustrates the evolution of four HR5 protoclusters representing different total mass scales at z = 0. Dotted circles mark the turnaround radii, and the properties shown are stellar mass density and age, and gas density and metallicity. Similar to Figure 6, Figure 11 again shows that the volume within the turnaround radius does encompass the interesting large-scale structures connected to the protocluster cores. It can be noticed in Figure 11 that R TA is not always larger for the protoclusters with larger mass. It is also possible for R TA to decrease temporarily when mergers happen. This is a desirable property of R TA as it is supposed to define the member galaxies of protoclusters and separate them from approaching nearby objects. However, during close interactions with neighbors, R TA becomes smaller and δ TA m tends to increase. The upward scatter of the proto-objects in Figure 10 can be attributed to such events.

Stellar mass to total mass conversion within Turnaround Radius

We define the outer boundary of protoclusters as R TA , which is the turnaround radius enclosing the threshold overdensity given by Equation 7 below. We will use an empirical relation between the total mass and stellar mass within R TA so that the definition can be applied to observations. Figure 12 shows the redshift evolution of δ TA m of the HR5 protoclusters (top) and the stellar-to-total mass ratio within the turnaround radius, M TA ⋆ /M TA tot , averaged over the HR5 protoclusters. The stellar mass is obtained from all stars (red open circles) or only from the galaxies with M gal,⋆ > 2 × 10 9 M ⊙ (blue open circles).

The overdensity δ TA m delineating the bottom 5% of the distribution at redshift z can be fit well by the formula given in Equation 7; the threshold increases more rapidly at z < 1.5 due to the decrease of the Hubble parameter and disturbance by neighboring structures.

Figure 13. The turnaround radius estimated from the stellar mass distribution using the relations shown in Figure 12 versus the directly-measured turnaround radius of protoclusters. The former is based on the δ TA m of the bottom 5%. The turnaround radii estimated by using all stars, and using only the galaxies more massive than 2 × 10 9 M⊙, are marked by red and blue dots, respectively.
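To illustrate the measurement described above, the following is a minimal sketch (our own illustration, not the HR5 analysis code) of locating the turnaround radius: the proper radial velocity v r = H(z)|r| + v · e r is averaged in spherical shells around a density peak, and R TA is taken where the shell-averaged v r first becomes positive.

```python
# Minimal sketch of a turnaround-radius measurement around a density peak.
# `pos` holds (N, 3) positions in Mpc, `vel` peculiar velocities in km/s given
# relative to the bulk motion of the enclosed matter, and `hubble` is H(z) in
# km/s/Mpc. All names and grid choices are ours.
import numpy as np

def turnaround_radius(pos, vel, center, hubble, r_max=15.0, dr=0.25):
    """Radius of the innermost shell whose mean proper radial velocity is positive."""
    r_vec = pos - center
    r = np.linalg.norm(r_vec, axis=1)
    v_rad = hubble * r + np.einsum("ij,ij->i", vel, r_vec) / np.clip(r, 1e-10, None)
    edges = np.arange(dr, r_max + dr, dr)
    for r_in, r_out in zip(edges[:-1], edges[1:]):
        shell = (r >= r_in) & (r < r_out)
        if shell.any() and v_rad[shell].mean() > 0.0:
            return 0.5 * (r_in + r_out)   # expansion starts to dominate here
    return np.nan                          # no zero-velocity surface within r_max
```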
The stellar-to-total mass ratio within the turnaround radius can be also fit well by Equation 2 with (α, β, γ) = (−0.0092,2.027, −1.962) when all stellar mass is counted, or with (−0.0128, 1.882, −2.017) when only the stellar mass in the galaxies with M gal,⋆ > 2 × 10 9 M ⊙ are used.We use this fitting formula to derive the total mass from the stellar mass within a radius from each protocluster center, and find the radius within which the mean total mass density reaches the predicted δ TA m at the given redshift (i.e.Equation 7).This gives the estimated turnaround radius. Figure 13 compares the directly measured R TA with R est TA estimated from stellar mass.They correlate quite well for both cases when all stellar mass is counted in R est TA or only the stellar mass in the galaxies with M gal,⋆ > 2 × 10 9 M ⊙ is used.R est TA tends to be larger than R TA as expected, particularly for relatively smaller mass protoclusters, because we use bottom 5% δ TA m .At z = 4.5, when protoclusters have only a few galaxies above our stellar mass threshold, the correlation breaks.This necessitates to include the small mass galaxies with M gal,⋆ < 2 × 10 9 M ⊙ at z ≳ 4 for accurate estimation of R TA and reliable identification of protocluster environment. Summary and Discussion In this paper we have proposed a practical method to find galactic protoclusters in observational data, and demonstrated its validity to the protoclusters in the cosmological hydrodynamical simulation HR5.We first define 'protoclusters' as galaxy groups whose total mass within R vir is currently less than 10 14 M ⊙ at their epochs but would exceed that limit by z = 0. Conversely, 'clusters' are the groups of galaxies whose virial mass currently exceeds 10 14 M ⊙ .Therefore, there can be a mixture of clusters and protoclusters at z > 0. The extent of a protocluster is defined as the spherical volume within the turnaround radius or the zero-velocity surface.The future mass that a protocluster would achieve at z = 0 is estimated using the spherical top-hat collapse model.The whole concept is schematically visualized in Figure 2. Our protocluster identification method is summarized as follows: 1. Visit galaxies starting from the most massive ones, and measure the mean total mass density within radius R. The total mass is obtained from the stellar mass by using the conversion relation in Equation 2. 2. Find the radius where the mean density drops to the threshold density given by the SC model.Equation C6 is a useful fitting formula for the threshold overdensity δ sc m . 3. Adopt the galaxy (or nearby density peak) as a protocluster center candidate if the total mass included within the radius is greater than 10 14 M ⊙ .Group the spherical regions if their separation is less than their radii.Protocluster centers are now identified. 4. The protocluster region is defined as the spherical volume from the protocluster center up to the turnaround radius.The turnaround radius is the radius where the mean overdensity drops to the threshold value given by Equation 7. The stellar mass to total mass conversion within R TA is made using Equation 2, with the parameters given in Section 4.4. 
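As a compact illustration of steps 1-4, the following sketch shows one way the procedure could be coded. It is a schematic outline under our own assumptions: f_star and delta_sc stand for the values of Equations 2 and C6 at the redshift of interest, the grid of trial radii is arbitrary, and the center-of-mass iteration, sphere merging, and turnaround-radius step are only indicated in comments.

```python
# Schematic sketch of the protocluster identification procedure (not the HR5 code).
# `galaxies` is a dict with "pos" (N, 3) positions in cMpc and "mstar" (N,) stellar
# masses in Msun; rho_mean is the comoving mean matter density in Msun/cMpc^3.
import numpy as np

M_CL = 1e14  # cluster mass threshold [Msun]

def mass_in_sphere(galaxies, center, radius, f_star):
    """Total mass from the enclosed stellar mass and the M*/Mtot ratio f_star (Eq. 2)."""
    r = np.linalg.norm(galaxies["pos"] - center, axis=1)
    return galaxies["mstar"][r < radius].sum() / f_star

def radius_at_overdensity(galaxies, center, delta_target, f_star, rho_mean, r_grid):
    """Step 2: largest radius at which the mean overdensity still exceeds delta_target."""
    best = None
    for radius in r_grid:
        m_tot = mass_in_sphere(galaxies, center, radius, f_star)
        delta = m_tot / (4.0 / 3.0 * np.pi * radius**3 * rho_mean) - 1.0
        if delta >= delta_target:
            best = (radius, m_tot)
    return best

def protocluster_centres(galaxies, delta_sc, f_star, rho_mean,
                         r_grid=np.arange(0.5, 30.0, 0.5)):
    """Steps 1 and 3: visit galaxies from the most massive down, keep Mtot >= M_CL."""
    candidates = []
    for idx in np.argsort(galaxies["mstar"])[::-1]:
        hit = radius_at_overdensity(galaxies, galaxies["pos"][idx],
                                    delta_sc, f_star, rho_mean, r_grid)
        if hit is not None and hit[1] >= M_CL:
            candidates.append({"center": galaxies["pos"][idx],
                               "radius": hit[0], "mtot": hit[1]})
    # Remaining steps (not shown): iterate the centre of mass, merge spheres whose
    # separation is smaller than either radius keeping the most massive one, and
    # grow each centre to the turnaround radius using the Eq. 7 threshold (step 4).
    return candidates
```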
HR5 used in this paper adopts a flat ΛCDM cosmology with Ω m = 0.3 and Ω Λ = 0.7.As the threshold density given by the spherical top-hat collapse model is used to find the protocluster centers, it will be useful to check how sensitive the threshold is to the cosmology adopted.We examine how δ sc m changes depending on the matter density parameter while keeping the geometry of the universe flat and fixing the dark energy equation of state parameter to −1.Our choice of HR5 is based on the Planck data (Planck Collaboration et al. 2016).This is close to the recent measurement of Dong et al. (2023) who used the extended Alcock-Paczyński test to obtain Ω m = 0.285 +0.014 −0.009 .In Figure 14, δ sc m for four choices of Ω m , i.e., 0.25, 0.3, 0.35, and 1, safely bracketing the recent observational values, are plotted.The figure shows that δ sc m differs on average only by ∼12% at z = 6 − 2 among the flat ΛCDM models with Ω m from 0.25 to 0.35.Therefore, the threshold density used for finding the protocluster centers is not very sensitive to the choice of the matter density parameter, when the current tight constraint on the parameter is taken into account. To estimate the reliability of this prescription, we use the clusters at z = 0 with M 0 tot ≥ 10 14 M ⊙ and groups with 10 13 M ⊙ < M 0 tot < 10 14 M ⊙ identified in HR5-Low, a low-resolution version of HR5.There are 2,794 objects with M 0 tot > 10 13 M ⊙ in the zoomed region of HR5, and among them 189 are clusters.Merger trees are constructed for these objects, and all progenitor galaxies are identified.We apply our protocluster identification scheme to the galaxy distributions at four simulation snapshots of z = 4.5, 3.1, 2.4, and 1, being motivated by the ODIN survey of Lyman-α emitters.We find a tight correlation between the mass within the protocluster regions identified in accordance with the SC model, and the final mass to be situated within clusters at z = 0.In particular, it is highly likely (probability ≳ 90%) for a protocluster region to evolve to a cluster if the region contains a total mass greater than about 2 × 10 14 M ⊙ , meaning that the region is likely to be the authentic protocluster. We have defined the outer boundary of protoclusters as the zero-velocity surface at the turnaround radius.Even though protocluster members are identified within a spherical region, their distribution is quite anisotropic as the region encloses numerous filaments beaded with galaxies.The definition would make sense if the galaxies within the turnaround radius do share some physical properties, which is not found for those outside.In the next study, we will examine the physical properties and evolution of the protocluster galaxies based on the definition proposed in this study.acknowledgments J.L. is supported by the National Research Foundation of Korea (NRF-2021R1C1C2011626).C.P. and J.K. 
are supported by KIAS Individual Grants (PG016903, KG039603) at Korea Institute for Advanced Study.BKG acknowledges the support of STFC through the University of Hull Consolidated Grant ST/R000840/1, access to viper, the University of Hull High Performance Computing Facility, and the European Union's Horizon 2020 research and innovation programme (ChETEC-INFRA -Project no.101008324).This work benefited from the outstanding support provided by the KISTI National Supercomputing Center and its Nurion Supercomputer through the Grand Challenge Program (KSC-2018-CHA-0003, KSC-2019-CHA-0002).This research was also partially supported by the ANR-19-CE31-0017 http://www.secular-evolution.org.This work was supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government (MSIT, 2022M3K3A1093827).Large data transfer was supported by KREONET, which is managed and operated by KISTI.This work is also supported by the Center for Advanced Computation at Korea Institute for Advanced Study. Appendix A Structure Finding and Merger Trees We use a galaxy finder PGalF introduced by Kim et al. (2022) to extract self-bound and stable galaxies from the snapshots of HR5.PGalF is devised to identify the Friend-of-Friend group of particles from the distribution of heterogeneous particles, i.e., star, MBH, gas, and dark matter in HR5.For the mixture of various types of particles, PGalF uses an adaptive linking length to connect a pair of particles of different species or masses.PGalF identifies self-bound substructures in the FoF halos.We classify a substructure as a galaxy when it contains stellar particles.To find galaxies from a FoF halo, PGalF first constructs an adaptive stellar density field and hierarchically determine the membership of the particles Lee et al. bound to the galaxies centered at stellar density peaks.A bound particle is eventually assigned to a galaxy when it is located inside the tidal boundary of the galaxy.We note that a galaxy identified in this process is generally composed of heterogeneous particles.For the substructures with no stellar particles, a similar process is conducted for the rest matter species.For a full description on the method, refer to Kim et al. (2022). Since stellar or dark matter particles carry their own unique identification numbers (IDs) throughout the simulation runs, we are able to trace the progenitors/descendants of substructures between two time steps.A branch of a merger tree is described using the binary relation between the two sets of all stellar particles in two snapshots, motivated by the Set theory.First, we define S i as a set of all stellar particles at time step, t i .Then, where "new stars" are those created between time steps, t i−1 and t i .We define G j i as the group of star particles of the j'th galaxy at time step i.Because a stellar particle is never destroyed in HR5, S i−1 ⊆ S i .Our galaxy finder dictates that where n is the total number of galaxies identified in time step i.The left-hand and right-hand sides of the equation are not always equal due to unbound stray stellar particles which are not bound to any galaxies. We associate galaxies between two snapshots by mapping a set of stellar particles (a galaxy) at a time step into sets of stellar particles (galaxies) at the next time step using ySAMtm (Jung et al. 2014;Lee et al. 
2014a).In ySAMtm, we define the j'th galaxy as the main descendant of the k'th galaxy when satisfying the mapping: where P (G j i+1 |G k i ) is the fractional number of stellar particles of the k'th galaxy to be found in the j'th galaxy.Multiple galaxies in time step i are allowed to have a common main descendant in time step i + 1 once the mapping is satisfied or in short f (j) = f (k) for j ̸ = k.Now we consider the reverse mapping as ) which denotes that the j'th galaxy in time step i − 1 is the main progenitor to k'th galaxy in time step i.Unlike the mapping f for the main descendant, in principle, multiple galaxies in time step i cannot have a common main progenitor in time step i−1.So, in this case g(j) ̸ = g(k) for all j ̸ = k.This is because we assume that a galaxy cannot be fragmented into multiple descendants in ySAMtm. The mapping f is the left inverse mapping of g ; it can be defined more formally as, Here, equation (A4) means that the main descendant of a main progenitor is the galaxy itself.One the other hand, g is not left inverse mapping of f (Eq.A5) because of the case when the j'th galaxy is merged into its descendant. Our tree building scheme does not allow two galaxies to have the same main progenitor (or g(j) ̸ = g(k) for j ̸ = k), but this usually happens when a galaxy flies by a more massive galaxy.To circumvent such cases, we remove the main progenitor mapping of the less massive galaxy (the flying-by one) and trace back its previous history until its actual main progenitor is found, using the most bound particle (MBP).The MBP is a particle that has the largest negative total energy in the galaxy (Hong et al. 2016) and, thus, we assume that the MBPs trace density peaks of galaxies.We use dark matter particles as the MBPs because, unlike stellar particles, they do not disappear when backtracking snapshots.We also use the MBP scheme to trace the substructures with no stellar particles.The merger trees of substructures are constructed by connecting the progenitor-descendant relations across the all snapshots.The progenitor/descendant relation of FoF halos is traced based on the merger trees of their most massive substructures.Further details of the tree buliding algorithm are given in Park et al. (2022). 
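The descendant mapping f can be written compactly. The following is a minimal toy implementation of the particle-overlap criterion described above (our own sketch, not the ySAMtm code), which picks the galaxy at the next step containing the largest fraction of a galaxy's stellar particle IDs.

```python
# A minimal sketch of the main-descendant mapping f: the main descendant of
# galaxy k at step i is the galaxy at step i+1 containing the largest fraction
# of k's stellar particle IDs. Names are ours.
def main_descendant(members_i, members_ip1, k):
    """members_* map galaxy index -> set of stellar particle IDs; returns f(k)."""
    src = members_i[k]
    best_j, best_frac = None, 0.0
    for j, stars_j in members_ip1.items():
        frac = len(src & stars_j) / len(src)   # P(G_j^{i+1} | G_k^i)
        if frac > best_frac:
            best_j, best_frac = j, frac
    return best_j

# Toy example: galaxy 0 merges into galaxy 10, galaxy 1 survives as galaxy 11.
snap_i   = {0: {1, 2, 3}, 1: {4, 5, 6, 7}}
snap_ip1 = {10: {1, 2, 3, 4, 8}, 11: {5, 6, 7, 9}}
print(main_descendant(snap_i, snap_ip1, 0))  # -> 10
print(main_descendant(snap_i, snap_ip1, 1))  # -> 11
```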
Appendix B Identification of Cluster Progenitors in HR5 In this section, we describe the details of the identification process of the clusters in HR5 using its low resolution simulation HR5-Low.While HR5 achieves a spatial resolution down to ∆x ∼ 1 kpc and minimum dark matter particle mass of m p ≃ 6.89×10 7 M ⊙ , HR5-Low is set to have a spatial resolution down to ∆x ∼ 16 kpc with a minimum dark matter particle mass of m p ≃ 3.02 × 10 9 M ⊙ .Because the main purpose of HR5-Low is to identify structures at z = 0, we use the parameters and initial conditions of HR5 without any modification or calibration.We identify structures from the snapshot at z = 0 and 0.625 of HR5-Low using PGalF.At z = 0, we find 2,794 halos in M 0 tot ≥ 10 13 M ⊙ and 189 halos in M 0 tot ≥ 10 14 M ⊙ with the number fraction of lower level particles less than 0.1%, which ensures the mass contamination lower than 0.7%.The dark matter particles of the clusters are traced back to z = 0.625 using their IDs, to search for the progenitors of halos of M 0 tot ≥ 10 13 M ⊙ .We measure the LV of a cluster in terms of the Cartesian grids.In HR5-Low, we place a mesh of uniform cubic grids with ∆l = 0.512 cMpc over the entire volume of interest (the simulated zoomed region).To build a density field, we use the dark matter particles of cluster halos at z = 0.When dark matter particles in a grid do not belong to (or are not members to) a single cluster, the grid is finally associated with the cluster which contributes most to the grid mass.By utilizing the LV method with the HR5-Low data, we are able to define protocluster regions at an arbitrary redshift. In the subsequent analysis we assume that the LVs of HR5 clusters are identical to the LVs of corresponding HR5-Low clusters.In the last snapshot of HR5, therefore, we are able to find structures inside the LVs directly imported from the HR5-Low clusters.We only use grids having mass larger than 10 10 M ⊙ because 97.5% of galaxies with M ⋆ ≥ 10 9 M ⊙ have M tot > 10 10 M ⊙ .This mass cut helps us minimize the contamination by noncluster progenitors in the LVs of the cluster progenitors at z = 0.625. Figure B1 presents the relation between the cluster mass in HR5-Low at z = 0 and the corresponding LV mass M LV in HR5 at z = 0.625.The two masses are nearly same with a median scattering of ∼ 6%.The Identification of Protoclusters Lee et al. mass difference may be caused by matter that happens to be enclosed in the LVs but would not fall into the cluster at z = 0. To examine the consistency or similarity in particle distributions between HR5-Low and HR5 especially on halo scales at z = 0.625, we identify an HR5 FoF halo which is spatially closest to the main progenitor of each HR5-Low cluster.Here, the progenitor of a cluster is determined by the scheme described in Section 2.2. 
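As a minimal sketch of the grid-based Lagrangian-volume bookkeeping described above (our own illustration; function and variable names are not from the HR5-Low pipeline), the member particles of each cluster are binned onto a uniform grid and each cell is assigned to the cluster contributing the most mass to it.

```python
# Minimal sketch of the Lagrangian-volume construction: bin cluster member
# particles onto a uniform grid (cell size ~0.5 cMpc in the text) and assign
# every cell to the cluster that contributes the most mass to it.
import numpy as np
from collections import defaultdict

def lagrangian_volume_cells(particle_pos, particle_mass, cluster_id, cell=0.512):
    """Return {cluster -> set of grid-cell indices} for one particle snapshot."""
    cells = [tuple(c) for c in np.floor(np.asarray(particle_pos) / cell).astype(int)]
    mass_in_cell = defaultdict(lambda: defaultdict(float))
    for c, m, cl in zip(cells, particle_mass, cluster_id):
        mass_in_cell[c][cl] += m
    volumes = defaultdict(set)
    for c, contributions in mass_in_cell.items():
        winner = max(contributions, key=contributions.get)  # largest mass contribution
        volumes[winner].add(c)
    return dict(volumes)
```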
Figure B2 shows the relation of FoF halo masses between the main progenitors of clusters in HR5-Low and their counterparts in HR5 at z = 0.625. Except for two cases marked by A and B, all FoF halos in the two simulations have nearly the same mass. We slightly overestimate the mass of FoF halos in HR5-Low compared to HR5 because the poorer mass resolution tends to more easily destroy clumpy structures in the outskirts of halos. Here, A and B are the cases in which substructures are distinguishable only within either HR5 or HR5-Low. We assume that, although rare, the adaptive linking length may cause different FoF halo identifications between the two simulations at different resolutions. Alternatively, the different-resolution simulations may, of course, produce different particle distributions, more often in the outskirts of halos and especially around a close binary or a multiple system of halos.
Figure B3 shows R_95, the radius enclosing 95% of the stellar mass in cluster progenitors, as a function of the final total mass. The progenitors of more massive halos tend to have larger R_95. The range of R_95 is consistent with Muldrew et al. (2015), who measure R_90 of protoclusters using the semi-analytic model of Guo et al. (2011). In this study, we suggest the turnaround radius as the physical size of protoclusters instead of R_95, because R_95 measures merely the spatial extent of the distribution of progenitor galaxies.
Appendix C Spherical top-hat overdensity in the ΛCDM and Einstein-de Sitter universe
In the Einstein-de Sitter (EdS) universe with Ω_m = 1, the outermost radius R of a sphere of mass M evolves over time t as d²R/dt² = −GM/R², where G is the gravitational constant. This equation has the cycloidal solution R = (R_max/2)(1 − cos θ), t = (t_max/π)(θ − sin θ), where t_max is the time when the sphere reaches its maximum radius R_max. In this solution, the spherical region collapses at the collapse time t_c = 2t_max (θ = 2π). The overdensity of the sphere at a given epoch derived from the analytic solution is δ_m = 9(θ − sin θ)²/[2(1 − cos θ)³] − 1 (e.g., Peebles 1980; Suto et al. 2016). A homogeneous density sphere that collapses at z = 0 reaches its maximum radius at z = 0.59 with δ_m^sc = 9π²/16 − 1 ≃ 4.55 in the EdS universe. For comparison, linear theory predicts an overdensity δ_m^lin ≃ 1.062 at t_max in the EdS universe.
In a flat universe with non-zero Ω_Λ, the expansion factor at maximum radius, a_max, can be derived from the relation of Peebles (1984) and Eke et al. (1996), in which ω = Ω_Λ/Ω_m and the function I(ω) depends on the expansion factor a_c at the time of collapse. These equations give a_max = 0.56 in the case of a_c = 1.0 (z = 0), and the overdensity at that epoch is interpolated as δ_m^sc = 5.85 for our choice of the ΛCDM universe. When Ω_m = 1.0 and Ω_Λ = 0, this relation reduces to the exact EdS solution derived above.
Figure C1 shows the overdensity evolution of a homogeneous sphere that collapses at z = 0 in the EdS (blue) and ΛCDM (red) universes. The two filled stars indicate the overdensities at the epochs of maximum radius. Because dark energy counteracts gravitational collapse and the growth of the overdensity is relatively slower, the sphere must have a higher overdensity in a universe with Ω_Λ > 0 than in the EdS universe to be able to collapse by z = 0.
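As a quick check of the EdS numbers quoted above, the standard parametric top-hat solution can be evaluated directly; the sketch below reproduces the nonlinear and linear overdensities at turnaround (θ = π) and the linear threshold at collapse (θ = 2π). It is an independent illustration of textbook formulas (Peebles 1980), not code from this work.

```python
import numpy as np

def delta_nonlinear(theta):
    # nonlinear top-hat overdensity in the EdS universe
    return 9.0 * (theta - np.sin(theta))**2 / (2.0 * (1.0 - np.cos(theta))**3) - 1.0

def delta_linear(theta):
    # linearly extrapolated overdensity of the same perturbation
    return 3.0 / 20.0 * (6.0 * (theta - np.sin(theta)))**(2.0 / 3.0)

print(delta_nonlinear(np.pi))        # 9*pi^2/16 - 1 ~ 4.55 at turnaround
print(delta_linear(np.pi))           # ~ 1.062 at turnaround
print(delta_linear(2.0 * np.pi))     # ~ 1.686, the familiar EdS collapse threshold
```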
Since δ_m^sc does not have an exact analytic solution in the ΛCDM universe adopted here, we find a formula that fits the numerical solution of the SC model for objects collapsing at z = 0. We define the purity as the number fraction of SRGs enclosing bona-fide protoclusters (those identified based on merger trees) among all SRGs more massive than a given mass. The completeness is the number fraction of the authentic protoclusters enclosed by SRGs above a given mass. In these statistics, we assume that an SRG recovers a protocluster when the most massive galaxy of the SRG is a member of the protocluster and half the galaxy mass of the protocluster is enclosed by the SRG. In this scheme, an SRG can be associated with only one protocluster. Color code in the bottom panels indicates the D_1/R_SRG parameter. Colored dots show the distribution of the full SRG sample, and black concentric circles mark the SRGs with M_tot^SRG > 10^14.15 or D_1/R_SRG < 2.5. These two different mass definitions are overall in good agreement, particularly at z < 4. Their correlation becomes tighter with decreasing redshift as structures form and develop further. The completeness and purity show that more than 80% of protoclusters can be recovered by our scheme with ∼60% purity at z ∼ 2−3. The purity increases to 80% for M_tot^SRG ≥ 2 × 10^14 M_⊙. At z = 4.5, however, these statistics are inevitably poorer than at lower z, because galaxies have not had time to develop yet. We note that the purity and completeness are enhanced by ∼10% if an SRG is allowed to be associated with all the protoclusters whose galaxy mass is at least half enclosed by the SRG.
Appendix E Redshift-Space Distortion Effect on the Protocluster Identification
The peculiar velocities of galaxies distort the distribution of galaxies in redshift space (e.g., see Guzzo et al.
1997;Hamilton 1998).We examine the impact of RSD on the protocluster identification scheme.In this test, we assume that a virtual observer has the line of sight aligned with the major axis of the HR5 zoom-in region.The redshift of a snapshot is assigned to the center of the zoomed region, and the cosmological redshifts of the galaxies in the snapshot are computed from the distance relative to a virtual observer at z = 0.The Doppler redshifts induced by the peculiar velocities of galaxies are added to the cosmological redshifts, and the distances to the galaxies are re-estimated from the combined redshifts.The standard deviations of the differences between the intrinsic and redshift-distorted distances are 2.3, 3.0, and 3.6 cMpc at z = 2.4, 3.1, and 4.5, respectively.Figure E1 presents the impact of the RSDs on the protocluster identification scheme.Since the large-scale peculiar velocity vector tends to point toward overdense regions, the galaxy distribution near a protocluster is statistically flattened along the line of sight in redshift space (Kaiser 1987).This results in slight overestimation of the overdensity and size of the top-hat spheres of dense regions.The final impact is that the completeness increases, at higher redshifts in particular, while the purity slightly decreases.The bottom panels of Figure E1 show that the RSD effect slightly increases the SRG mass, but overall distribution is similar between the cases with and without the RSD effects.These statistics are computed based on the assumption that an SRG is only associated with a protocluster.The purity and completeness can change if an SRG is allowed to recover multiple protoclusters.This result demonstrates that the RSD effect does not have significant impact on the protocluster identification scheme.D1, but for the cases with and without the RSD effect.In the bottom panels, the scatter between M 0,SRG tot and M SRG tot is similar between the two cases with and without the RSD effect.Upper panels show that the RSD effect lowers the purity while it slightly enhances the completeness at given mass.This is caused by the RSD effect that makes overdense regions look flattened in the redshift space (Kaiser 1987), resulting in the overestimation of the SRG radius. Figure F1 . Relation of turnaround radius of bona-fide protoclusters (RTA) to turnaround radius measured from the most massive galaxies in SRGs (R SRG TA ).A protocluster is assumed to be associated with an SRG when half its galaxy mass is enclosed by the SRG.Scatter is caused when the most massive galaxy of an SRG is not the most massive one of its host protocluster.We find that ∼ 80% SRGs recover the RTA of enclosed protoclusters. Figure 1 . 
Figure1.Dark matter particles in three clusters found at z = 0 in HR5-Low (left), in their progenitors at z = 0.625 (middle), and in the same volumes in HR5 at z = 0.625 (right).In the left panels, the halos in yellow are the members of the clusters with M 0 tot ∼ 10 14 M⊙ (top), 10 14.5 M⊙ (middle), and 10 15 M⊙ (bottom).White horizontal bars illustrate the scale of 4cMpc.The white dotted lines display the Lagrangian volumes enclosing the dark matter particles that end up forming clusters at z = 0 in HR5-Low.We assume that all the objects in HR5 located inside the same Lagrangian volume are the progenitors of the corresponding cluster.The thickness of the projected volume is 8.2 cMpc (top), 13.8 cMpc (middel), and 21.5 cMpc (bottom), fully containing each cluster in the projected direction.In the right panels, M enclosed presents the total mass enclosed by the Lagrangian volume.All the objects inside the Lagrangian volume are traced back to high redshifts using their merger trees in this study. Figure 4 . Figure 4. Top: Matter overdensity inside the radius enclosing the final mass of protoclusters (M 0 tot > 10 14 M⊙) as a function of redshift.Dots are the medians, and scatter bars show 16 th − 84 th percentile distributions.Dashed line is the critical matter overdensity δ scm for collapse at z = 0 predicted by the spherical top-hat collapse model in the ΛCDM universe.Bottom: Ratio of stellar mass to total mass within the protocluster regions whose mean overdensity is equal to the critical value δ sc m of the ΛCDM cosmology.Open circles show the ratios computed from entire stellar mass, and open diamonds are calculated from the galaxies with M gal,⋆ > 2 × 10 9 M⊙.The dotted curves are the fitting functions given in Equation2. Figure 5 . Figure 5. Final total mass M 0,SRG tot (encoded by color) as functions of distance to the nearest spherical region group (SRG) D1 (normalized by RSRG) and its total mass M SRG tot at each redshift.Each dot indicates a SRG, and color represents the final mass M 0,SRG tot shows the final mass M 0,SRG tot (encoded by color of large circles) in the D 1 /R SRG versus M SRG tot space.Redder color indicates larger final mass.Small dots are the SRGs with M SRG tot ≥ 10 14 M ⊙ at redshift z but with M 0,SRG tot ≤ 10 14 M ⊙ at z = 0, namely failed protocluster candidates.The figure demonstrates a tight correlation of M SRG tot with M 0,SRG tot Figure 6 . Figure 6.Matter density and velocity fields within and in the vicinity of a protocluster at four epochs.Denser regions are brighter.The panels show matter distribution within ±2cMpc from the most massive galaxy along the projected direction.Blue and yellow circles indicate Rvir and RTA measured from the density peak, respectively.All the blue dots are the gravitationally self-bound objects with the total mass greater than 10 10 M⊙.Larger blue dots are the cluster progenitor objects, and among them, those with M⋆ > 2 × 10 9 M⊙ are marked by red open circles. Figure 7 . Figure 7. Turnaround radius RTA of the proto-objects as a function of the final total mass M 0 tot at z = 0 that is measured in HR5-Low.Contrary to R95, RTA gradually increases as dense regions grow in mass and the Hubble parameter decreases with decreasing redshift. Figure 8 . Figure 8. 
Relations between the turnaround radius RTA and virial mass Mvir at four epochs for the proto-objects that will have the final total mass of M 0 tot that is measured in HR5-Low.The final total mass is color-coded.Protoclusters are marked by large filled circles and non-protocluster objects are marked by small dots. Figure 9 . Figure 9. Ratio of the turnaround radius RTA to the virial radius Rvir as a function of redshift.The blue and red circles correspond to the structures with log M 0 tot /M⊙ = 13 − 14 and log M 0 tot /M⊙ > 14 at z = 0, measured from HR5-Low, respectively.The scatter bars show 16 th −84 th percentile distributions.This figure indicates that RTA/Rvir evolves very weakly before z = 2. Figure 10 . Figure 10.Matter overdensity within the turnaround radius of proto-objects at the four redshifts.Protoclusters (M 0 tot ≥ 10 14 M⊙ at z = 0, measured in HR5-Low) are marked by large filled circles and non-protocluster objects are marked by small dots.The color code presents the final total mass of proto-objects.The dashed and solid arrows indicate the medians and bottom 5% of δ TA m of protoclusters.The matter overdensity of protoclusters only weakly increases from δ TA m ≈ 5.0 (bottom 5%) at z = 4.5 to δ TA m ≈ 5.3 (bottom 5%) at z = 2.4 (see the text and Figure 12). Figure 11 . Figure11.Distribution of gas and stars in the regions of four protoclusters that end up forming clusters with M 0 tot ≈ 10 14 − 10 15 M⊙ at z = 0 that is measured in HR5-Low.The dotted circles mark the turnaround radius of the protoclusters.Metal poor gas is colored in green, and gas color becomes redder with increasing metallicity.Younger stars are colored in blue and older ones are yellow.Grayish shades display the regions filled with the hot medium with T > 10 6 K.The upper two panels are relatively zoomed, as indicated by the scale bars. Figure 12 . Figure12.Top: Redshift evolution of the overdensity within RTA (δ TA m ) for the protoclusters of M 0 tot > 10 14 M⊙ that is measured in HR5-Low.The open circles indicate the bottom 5% overdensity measured from the HR5 protoclusters and the dashed and solid lines are the fits to the median and bottom 5%.Bottom: ratio of stellar to total mass within RTA as a function of redshift.The red and blue open circles denote the ratios measured from all stars and from galaxies with M gal,⋆ > 2 × 10 9 M⊙, respectively.The dashed curves are the fits based on Equation 2 with fitting parameters given in section 4.4. ) where (a, b, c, d) = (0.168, 4.068, −0.381, −0.734), which is shown as the solid line in the top panel of Figure 12.The error of the fit is smaller than 0.9%.As shown in Section 4.3, δ TA m monotonically increases with time on average, and reaches a finite maximum at z = 0.The evolution of δ TA m is weak at z > 2, but becomes rapid Identification of Protoclusters Lee et al. Figure 14 . Figure14.The critical overdensity for complete collapse at z = 0 given by the spherical top-hat collapse model in the Einstein-de Sitter universe (EdS, red) and the flat ΛCDM universes with three different matter density parameters. Figure B1 . FigureB1.The relation between the total mass of the clusters found at z = 0 in HR5-Low and the LV mass at z = 0.625 in HR5.The LV mass is on average ∼ 6% higher than the cluster mass due to the matter that are contained in voxels at the epoch but will not form the clusters. Figure B2 . 
Figure B2.Relation between the total mass of the main progenitors (z = 0.625) of the clusters found at z = 0 in HR5-Low and the total mass of their counterparts in HR5.Halo A is the one that is identified as two separate structures in HR5-Low while a smaller one already becomes a substructure of the halo in HR5.Halo B is the opposite case.The halos in HR5 are ∼ 9% less massive than their counterparts in HR5-Low because their small neighboring structures are not well resolved in HR5-Low. Figure B3 . Figure B3.Radius that encloses 95% of the stellar mass of the proto-objects of the FoF halos identified at z = 0 as a function of their final mass that is measured from HR5-Low.The radius measurement is centered at the most massive galaxy in each proto-object.Red dashed and sold lines mark 16 th and 84 th percentiles and the median of R95 at a given final mass. Figure C1 . Figure C1.The critical overdensity of a homogeneous tophat sphere collapsing at z = 0 predicted by the spherical tophat collapse model in the ΛCDM (blue) and EdS (red) universe in logarithmic (top) and linear (bottom) scales.Stars indicate the epoch and overdensity when the sphere reaches its maximum radius in each universe. Figure D1 demonstrates the completeness and purity of our protocluster identification scheme (top) and the relation between M 0,SRG tot Figure D1 . Figure D1.Bottom: Final mass of SRGs estimated from the final mass of bona-fide protoclusters (those identified based on merger trees) embedded in the SRGs (M 0,SRG tot ) as a function of the total mass of SRGs (M SRG tot ).Color code denotes D1/RSRG.Colored dots mark all the SRGs sample and black concentric circles indicate the SRGs with M SRG tot > 10 14.15 or D1/RSRG < 2.5.As also seen in Figure5, most protoclusters have D1/RSRG ≲ 4. We note that M 0,SRG tot is an estimated mass to examine the prediction accuracy of M SRG tot .Top: Purity (blue) and completeness (red) of the bona-fide protoclusters in the spherical regions found by the SC model as a function of M SRG tot .The purity is the number fraction of the SRGs enclosing bona-fide protoclusters to the entire SRGs above a given mass.The completeness is the number fraction of the authentic protoclusters which are recovered by SRGs and more massive than a given mass. Figure E1 . Figure E1.Same as FigureD1, but for the cases with and without the RSD effect.In the bottom panels, the scatter between M 0,SRG
2023-08-02T06:42:45.785Z
2023-08-01T00:00:00.000
{ "year": 2024, "sha1": "b32d4189e4380bba517f585467093a308b46b1f7", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.3847/1538-4357/ad0555/pdf", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "053c7d9fbb5da9ee7c2f930ca2a942634bc9579e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
257119733
pes2o/s2orc
v3-fos-license
PASSED: Brain atrophy in non-demented individuals in a long-term longitudinal study from two independent cohorts Introduction Alzheimer’s disease (AD) is indicated by a decrease in amyloid beta 42 (Aβ42) level or the Aβ42/Aβ40 ratio, and by increased levels of Tau with phosphorylated threonine at position 181 (pTau181) in cerebrospinal fluid (CSF) years before the onset of clinical symptoms. However, once only pTau181 is increased, cognitive decline in individuals with subjective or mild cognitive impairment is slowed compared to individuals with AD. Instead of a decrease in Aβ42 levels, an increase in Aβ42 was observed in these individuals, leading to the proposal to refer to them as nondemented subjects with increased pTau-levels and Aβ surge with subtle cognitive deterioration (PASSED). In this study, we determined the longitudinal atrophy rates of AD, PASSED, and Biomarker-negative nondemented individuals of two independent cohorts to determine whether these groups can be distinguished by their longitudinal atrophy patterns or rates. Methods Depending on their CSF-levels of pTau 181 (T), total Tau (tTau, N), Aβ42 or ratio of Aβ42/Aβ40 (A), 185 non-demented subjects from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and 62 non-demented subjects from Erlangen AD cohort were assigned to an ATN group (A–T–N–, A–T+N±, A+T–N±and A+T+N±) and underwent T1-weighted structural magnetic resonance imaging (sMRI). Longitudinal grey matter (GM) atrophy patterns were assessed with voxel-based morphometry (VBM) using the cat12 toolbox on spm12 (statistical parametric mapping) of MRI scans from individuals in the ADNI cohort with a mean follow-up of 2 and 5 years, respectively. The annualized atrophy rate for individuals in the Erlangen cohort was determined using region of interest analysis (ROI) in terms of a confirmatory analysis. Results In the A–T+N± group, VBM did not identify any brain region that showed greater longitudinal atrophy than the A+T+N±, A+T+N± or biomarker negative control group. In contrast, marked longitudinal atrophy in the temporal lobe was evident in the A+T–N± group compared with A+T–N±  and biomarker-negative subjects. The ROI in the angular gyrus identified by VBM analysis of the ADNI cohort did not discriminate better than the hippocampal volume and atrophy rate between AD and PASSED in the confirmatory analysis. Discussion In this study, nondemented subjects with PASSED did not show a unique longitudinal atrophy pattern in comparison to nondemented subjects with AD. The nonsignificant atrophy rate compared with controls suggests that increased pTau181-levels without concomitant amyloidopathy did not indicate a neurodegenerative disorder. Introduction The most common neurodegenerative disorder is Alzheimer's disease with amyloid plaques and neurofibrillary tangles as neuropathological hallmarks. A biological definition of AD proposes to include surrogate markers for amyloidopathy (A), neurofibrillary tangles (T) and neurodegeneration (N) (Jack et al., 2018a). Surrogate markers for neurodegeneration (N) at the onset of Alzheimer's disease are atrophy of the mesial temporal lobe, in particular the hippocampal formation, and an increase of total Tau in cerebrospinal fluid (CSF; Jack et al., 2018a). For amyloid plaques, the surrogate markers are a decrease of Aβ42 or a lowered level of the Aβ42/Aβ40 ratio in CSF (McKhann et al., 2011;Dubois et al., 2014;Hansson et al., 2019). 
For neurofibrillary tangles (T), the surrogate marker can be a higher level auf pTau181 in CSF (Blennow et al., 1995;Hampel et al., 2004;McKhann et al., 2011;Dubois et al., 2016;Jack et al., 2018a). Further, both plaques and tangles can be detected by positron emission tomography (PET) using Aβ or pTau binding tracers (McKhann et al., 2011;Dubois et al., 2016;Jack et al., 2018a). As long as both A and T are pathologically altered, i.e., A+T+, the presence of Alzheimer's disease is likely (Jack et al., 2018a). In many cases, however, A and T are not congruently changed; for example, the A-T+ group alone comprises up to 23% of cases in cohorts of non-demented elderly (Jack et al., 2018a). Together with the group with evidence of neurodegeneration without amyloidopathy (A-T-N+), it was proposed that they be collectively referred to as suspected non-AD pathophysiology (SNAP) because nondemented individuals with these biomarker constellations had no or only slightly different cognitive trajectories and longitudinal hippocampal atrophy rates compared with control subjects (Burnham et al., 2016;Jack et al., 2018a;Oberstein et al., 2022). The classification of individuals with elevated pTau without amyloidopathy remains controversial, as pTau181 elevation was reported to be specific in AD at least compared with frontotemporal dementia and lewy body dementia and the recently reported specificity of elevation of pTau181 in AD patients in serum (Blennow et al., 1995;Hansson et al., 2019;Thijssen et al., 2020). Moreover, although this biomarker constellation does not seem to be necessarily associated with AD, it was associated with Aβ, as Aβ42 and Aβ40 levels were concomitantly increased with pTau181 in this group, so we proposed to refer to the biomarker constellation in non-demented individuals as PASSED, a pTau and Aβ surge with subtle deterioration (Oberstein et al., 2022). The strong association between pTau181 and tTau in PASSED and A-T+ individuals, respectively, indicates the presence of neurodegeneration by definition, but is not reflected in increased longitudinal atrophy of the mesial temporal lobe and hippocampus, respectively. (Burnham et al., 2016) To determine whether neurodegeneration in the sense of longitudinal atrophy occurs in other brain regions, we used voxel-based morphometry (VBM) in this study to compare the distribution of longitudinal GM volume loss of non-demented A-T + individuals to those with A + T-and A + T + and CSF-biomarker negative subjects. The identified brain regions were tested in a confirmatory analysis using ROI analysis in a second independent cohort. Study population We selected 185 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI). 1 The ADNI is a longitudinal multicenter study founded 2004 by Principal Investigator M. Weiner as a public-private partnership, and enrolls participants from all over North America. With the data of various imaging and clinical assessments and their sharing of the data for researchers worldwide, ADNI aims at improving diagnosing and treating of AD. Further information about the ADNI cohort, the study protocol and MR image acquisition and processing can be accessed via. 
2 This study included only participants who were over 50 years of age and had an analysis of Aβ42 and pTau181 levels in CSF, a neuropsychological assessment with a Mini Mental State Examination (MMSE) score greater than 23, and a structural brain examination with a magnetization-prepared rapid acquisition gradient echo (MP-RAGE) sequence at baseline and at least 12 months later. The biomarker negative control group (Aβ42-T-N-) also had to have normal tTau values to be included in this study. Subjects with focal brain lesions or defects on MRI or significant T2w white matter hyperintensities, i.e., Fazekas 2 or 3, were excluded after inspection of the native and segmented images (Fazekas et al., 1987;Oberstein et al., 2022). 2,150 individuals from the ADNI were screened for eligibility, of which 1,250 had CSF diagnostics with the required parameters and of those, 381 had an MMSE greater than 23. One hundred and ninety four of these had at least one additional MRI > 12 months after the baseline examination of which 9 were excluded due to poor image quality or processing issues. The study population included data from ADNI 1, ADNI 2, ADNIGO, and ADNI3. From the Erlangen cohort, 62 individuals over 50 years of age with MCI or SCI were included in this study from April 2010 to November 2021. Inclusion in the study was contingent on the presence of a complete set of CSF parameters (Aβ42/Aβ40 ratio, Aβ42-, pTau181-and tTau-level), a neuropsychological assessment with the German version of the CERAD neuropsychological battery, Frontiers in Aging Neuroscience 03 frontiersin.org a structural brain examination including an MP-RAGE sequence at baseline and at least one MRI examination after more than 12 months (Aebi, 2002). The composition of the cohort and further inclusion criteria are described elsewhere (Oberstein et al., 2022). After receiving a detailed description of the study a written consent was provided either by the patients themselves or their authorized legal representatives. The clinical ethics committee of the University of Erlangen-Nuremberg approved the study protocol. For the assessment of amyloidopathy, in contrast to the ADNI cohort, both pathological Aβ42 level and Aβ42/Aβ40 ratio were considered. The N variable of the AT (N) classification was determined in both cohorts using the tTau level alone to avoid circularity, since the regional brain volume was examined as a dependent variable. Magnetic resonance image (MRI) acquisition and brain volumetry The protocol for MRI acquisition of the ADNI cohort is described elsewhere (see footnote 1). Structural MRI scans of the ADNI cohort were used in this study from MRI scanner platforms of different manufacturers, provided that all scans from one subject were from one type of device. The MRI acquisition and VBM workflow of the Erlangen cohort have already been described in detail (Oberstein et al., 2022). In short, T1-weighted high-resolution structural MRI were acquired using a 3T MR Scanner (Magnetom Tim Trio 3,0 Tesla, Siemens Healthineers AG, Erlangen, Germany) for brain volumetry of the Erlangen cohort. For processing the T1-weighted structural MRI, we used the VBM workflow for longitudinal models for large changes (e.g., ageing effects) of the Computational Anatomic Toolbox (CAT12 v. 12.8; University Hospital Jena; Jena, Germany) for SPM12 3 running on MatLab R2021a (Mathworks, Inc.; Natick, Massachusetts, United States). Structural MRI images of the ADNI and the Erlangen cohort were analyzed using the identical workflow. 
The preprocessed and normalized gray matter maps were used for the group comparisons, and the significant clusters identified in this way were characterized using the AAL atlas to determine their anatomical location (Rolls et al., 2020). MarsBar was used to define custom regions of interest (ROIs) for the Erlangen cohort based on contrast images from the SPM results of the ADNI data, and to extract GM density from all MRI scans for the ROI analyses (Brett et al., 2002).
Cerebrospinal fluid-ELISA
The details of the CSF sample collection and analytic processing are described elsewhere (Shaw et al., 2009). For the AT(N) grouping of the ADNI cohort based on CSF values, we used the archived data set "UPENNBIOMK_MASTER.csv". A cutoff value of 192 pg/ml was used to determine Aβ42 status in CSF, a cutoff value of 23 pg/ml for pTau181 status, and a cutoff value of 93 pg/ml for tTau status (Shaw et al., 2009). If multiple CSF values were reported at baseline, we used the median of these results. The details of CSF sample collection, analytical processing, and cutoff values for the Erlangen cohort are described elsewhere (Oberstein et al., 2022).
Statistics
Normality was examined using the Shapiro-Wilk test and upon visual inspection of the quantile-quantile plots. Parametric or nonparametric analyses were applied accordingly. The assumption of homogeneity of variance was assessed with Levene's test. Group comparisons were performed with Pearson's χ² for categorical variables and, for ordinal or non-normally distributed interval variables, with the Kruskal-Wallis test followed by Dunn's multiple comparison test if a significant effect was observed. For normally distributed interval variables, an analysis of variance (ANOVA) or, for groups with inhomogeneous variances, the Brown-Forsythe test was applied, followed by Bonferroni-corrected multiple comparisons if a significant effect was observed. Voxel-based morphometry analyses using SPM12 used a flexible factorial analysis of covariance (ANCOVA) to assess changes in GM volumes over two time points (baseline vs. 2-year follow-up or baseline vs. 5-year follow-up) within the four selected ATN groups, and differences in longitudinal atrophy between these groups in the ADNI cohort, with age as a covariate. If no MRI was acquired exactly after 2 or 5 years of follow-up, the next time point was taken, provided this did not change the time interval by more than 12 months. The GM and WM morphological abnormalities are reported after applying a family-wise error (FWE) correction as indicated (p ≤ 0.05, p ≤ 0.01, or p ≤ 0.001). The sample size (n) of the different groups (i) for the ROI analysis comparing annualized atrophy rates in the Erlangen cohort was computed from the arithmetic means of the ADNI cohort (μ), the pooled standard deviation (σ), and Z-values (Z) determined as a function of the α- and β-error levels. The annualized percent change in an unbiased ROI (uROI), determined by VBM analyses of differences between longitudinal atrophy rates among ATN groups in the ADNI cohort, was computed from the uROI volumes in the MRI scans (V in cm³) at different time points (t_i in months) of individuals in the Erlangen cohort. We used a one-way analysis of covariance (ANCOVA) to identify main effects of the selected ATN groups on the uROI volume and annualized atrophy rates while controlling for total intracranial volume (TIV), age, time of follow-up and education.
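The sample-size and annualized-change computations described above can be written out as follows. This is a sketch of the standard expressions consistent with that description (group means μ, pooled standard deviation σ, Z-values for the α- and β-error levels; ROI volumes V at times t_i in months); it is not necessarily the authors' exact implementation, and the numbers in the example calls are purely hypothetical.

```python
from scipy.stats import norm

def sample_size_per_group(mu_1, mu_2, pooled_sd, alpha=0.05, power=0.80):
    """n per group for a two-sided, two-sample comparison of means (normal approximation)."""
    z = norm.ppf(1.0 - alpha / 2.0) + norm.ppf(power)
    return 2.0 * (pooled_sd * z / (mu_1 - mu_2)) ** 2

def annualized_percent_change(v_start, v_end, t_start_months, t_end_months):
    """Annualized percent change of an ROI volume V (cm^3) between two scans t months apart."""
    return 100.0 * (v_end - v_start) / v_start * 12.0 / (t_end_months - t_start_months)

# hypothetical numbers for illustration only
print(sample_size_per_group(mu_1=-4.0, mu_2=0.0, pooled_sd=3.3))
print(annualized_percent_change(v_start=10.0, v_end=9.6, t_start_months=0, t_end_months=24))
```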
Homogeneity of regression slopes was not violated with regard to the dependent variable, as the Frontiers in Aging Neuroscience 04 frontiersin.org interaction terms were not statistically significant (p > 0.05). The Areas under the curve (AUC) under receiver operating characteristic (ROC) curves were compared in a paired-sample scenario based on the nonparametric methods (DeLong et al., 1988). Data analysis was performed using the statistical package SPSS (version 28.0; SPSS, Chicago, IL, United States) and MATLAB (version R2021b; The Mathworks Inc., Natick, United States). Quartiles are indicated as follows: 1st quartile = Q1; 3rd quartile = Q3; significance levels are indicated as follows: ***p < 0.001; **p < 0.01; *p < 0.05; and ns, not significant. Baseline characteristics of the ADNI study population The voxel-based analysis of 185 non-demented individuals of the ADNI cohort was employed to identify unbiased regions of interest (ROI) based on differential longitudinal atrophy rates between the different AT (N) groups at 2 and 5 years of follow-up. The Aβ42/Aβ40 ratio in CSF has not regularly been assessed in these individuals. In order to indicate this lack of information, subjects from the ADNI cohort are referred to being "Aβ42" instead of "A" positive or negative in the results section, which is why the groups are designated as follows: Aβ42 + T+(N±), Aβ42-T+(N±), Aβ42 + T-(N±) and Aβ42-T-N-. Of the 185 subjects, 115 (62.2%) were male and 70 (37.8%) were female. The mean age at baseline was 75 years with a range from 60 to 89 years. The mean MMSE score at baseline was 28 with a range from 24 to 30. The Aβ42 + T + N ± group represented the largest group with a total number of 60 (32%) subjects, followed by the Aβ42-T-N-group with 50 (27%) subjects, the Aβ42-T + N ± with 43 (23%) subjects, and the Aβ42 + T-Ngroup with 32 (17%) subjects. There were no subjects with an Aβ42+T-N+ profile in this cohort. The baseline demographics, the MMSE results and the measured CSF-biomarker values are given in detail in Table 1. MMSE, Age, Education at baseline as well as TIV, grey and white matter volume did not differ significantly between the groups (Table 1). Of the 185 subjects included in the study, 180 (69 women) came for follow-up at 2 years, and 99 subjects (40 women) came for follow-up at 5 years. The characteristics of the participants who came for follow-up at 2 and 5 years are shown in Supplementary Table 1. Patterns of longitudinal brain atrophy in the Aβ42TN groups of the ADNI cohort Two years from baseline, the Aβ42 + T + N ± group showed the most widespread longitudinal GM atrophy of all groups (Figure 1; Supplementary Table 2). The volume loss comprised both temporal lobes and reached to parietal and frontal regions including the anterior cingulate gyrus (p ≤ 0.01 FWE corrected). The Aβ42 + T-N-group showed a similar pattern of longitudinal GM atrophy as the Aβ42 + T + N ± group, however, in this case the grey matter loss was noticeably to the detriment of the left temporal lobe. In comparison with the Aβ42 + groups, the lowest number of voxels survived the p < 0.01 FWE correction in the Aβ42-T + N ± group (Supplementary Table 2). The localization of significant GM loss of the Aβ42-T + N ± group was limited to regions within temporal and insula lobes of both hemispheres. 
The direct comparison of GM atrophy rates between the Aβ42-T+N± group and the Aβ42+T+N± group showed a significantly greater atrophy rate in regions of the middle temporal gyrus and angular gyrus of the Aβ42+T+N± group, p ≤ 0.01 (FWE corrected; Figure 1; Table 2). Similarly, the direct comparison of the Aβ42-T+N± with the Aβ42+T-N- group showed greater volume loss for the Aβ42+T-N- group, in this case, however, only for clusters in the left hemisphere, p ≤ 0.01 (FWE corrected; Figure 1; Table 2).
FIGURE 1 Longitudinal voxel-wise analysis revealed regionally increased atrophy rates in Aβ42-T-N- (n = 50), Aβ42-T+N± (n = 43), Aβ42+T-N- (n = 32), and Aβ42+T+N± (n = 60) non-demented individuals from the ADNI cohort with a mean follow-up of 2 years, p < 0.01 (FWE corrected) (A). Slice overlay of the statistical parametric maps illustrates the linear decrease of grey matter (GM) volume that was significantly larger in the Aβ42+T+N± group (red-yellow) or the Aβ42+T-N± group (blue-green) compared to the Aβ42-T+N± group, p < 0.01 (FWE corrected) (B). The color bars indicate the value of p.
TABLE 2 Results of two 4 × 2 ANCOVAs with group (Aβ42-T-N-, Aβ42-T+N±, Aβ42+T-N±, Aβ42+T+N±) and time point (baseline, follow-up) as independent variables and age as a covariate of no interest.
The Aβ42-T+N± group showed no regions of greater longitudinal GM loss compared to the Aβ42+T+N± or the Aβ42+T-N± group, neither after p ≤ 0.01 FWE correction nor without FWE correction (p < 0.001, uncorrected). After 5 years, the longitudinal GM loss in the Aβ42-T+N± group affected more regions within the temporal and insula lobes and was no longer limited to these, but extended to the frontal lobe including the anterior cingulate gyrus (Supplementary Figure 1). The slice overlays of the SPMs displaying significantly larger linear volume decline in the Aβ42+T+N± compared to the Aβ42-T+N± group 2 and 5 years from baseline, respectively, indicated only little variation in the maximum intensity projections (MIP; Table 2). The MIP of the SPM 5 years from baseline shifted towards the parietal lobe compared to the MIP of the corresponding SPM 2 years from baseline (Table 2). The clusters surviving the p < 0.001 FWE correction of the SPM displaying a significantly greater linear GM decline of the Aβ42+T+N± group compared to the Aβ42-T+N± group were extracted to serve as a mask for the generation of an unbiased ROI (uROI) for the analysis of longitudinal brain atrophy in the Erlangen cohort (Supplementary Figure 2). The effect size of greater longitudinal atrophy in the uROI of the Aβ42+T+N± compared with the Aβ42-T+N± group was medium after 2 years, Cohen's d = 0.77, and strong, d = 1.20, after 5 years. The calculated sample size with a power of 80% and an alpha error of 0.05 was 16 subjects for the Aβ42+T+N± and 14 subjects for the Aβ42-T+N± group, based on the effect size for longitudinal atrophy after 5 years.
Baseline characteristics of the Erlangen study population
The Erlangen cohort comprised 62 individuals with an A+T+N±, A-T+N±, or A-T-N- profile, of whom 35 (56%) were male and 27 (44%) were female. The mean age was 63 ± 8 years, the median MMSE score was 28 [27; 29], and the mean years of education were 14 ± 4 years (Table 3).
The median follow-up period was 48 months with a range from 13 to 104 months with no significant difference in the mean follow up times between the different ATN profiles (Table 3). The baseline characteristics of the Erlangen cohort are given in Table 3. The A + T + N ± group was significantly older than the A-T-N-group (p = 0.017, M Diff = 7.199, 95% CI [1.02; 13.38]). Apart from that, the groups did not differ in terms of age, sex, length of education, and MMSE score at baseline in Bonferroni corrected pairwise comparisons. The A-T + N ± not only showed increased pTau181-levels compared with the A-T-Ngroup, but also showed significantly increased Aβ40 levels ( Longitudinal unbiased ROI and hippocampal atrophy is significantly larger in the A + T + groups compared to the A-T + and A-T-groups with no difference between the latter An ANCOVA with age at baseline, years of education, TIV, and time of follow-up as covariates was conducted to compare the annualized atrophy rate of uROI (ΔuROI) identified in the ADNI cohort between the A + T + N ±, A-T + N ±, and A-T-N-groups of the Erlangen cohort. The groups differed statistically significantly in the ΔuROI, F (2, 53) = 124.734, p = 0.002, partial η 2 = 0.213. A priori contrasts showed statistically significant higher longitudinal atrophy of the unbiased ROI in the A + T + N ± group than in the A-T + N ± group, M Diff = −3.951, 95%-CI[−6.133, −1.769], F(1, 53) = 13.187, p < 0.001, partial η 2 = 0.199 or the A-T-N-group, M Diff = −3.101, 95%-CI[−5.144, −1.058], F(1, 53) = 9.266, p = 0.004, partial η 2 = 0.149. No significant difference was found in comparison between the A-T + N ± and the A-T-N-groups. Unadjusted means and means of the longitudinal atrophy of the uROI adjusted for age, years of education, TIV, and follow-up time in months are given in Supplementary Table 3. No statistically significant difference in the selected ATN groups was found for atrophy at baseline in the brain region of the unbiased ROI or the hippocampi. ΔuROI had an area under the curve (AUC) of 78.8% (95% CI = 69.3-97.4) and uROI had an AUC of 72.2% (95% CI = 54.1-92.9) to classify nondemented participants between A-T + N±, i.e., PASSED, and A + T + N± (Figure 2). Compared with the AUCs of longitudinal hippocampal atrophy rate and hippocampal atrophy, neither the ΔuROI (Z = 0.04, p = 0.698) nor the uROI (Z = 0.778, p = 0.431) in the Erlangen cohort did discriminate better between PASSED and A + T + N ±. Between the biomarker negative control group, A-T-N-, and the A + T + N ± group, there were similar AUCs for the ΔuROI (AUC of 76.9% [95% CI = 60.6-93.3]) and the ROI (AUC of 70.1% [95% CI = 52.4-87.8]) (Figure 2). Again, there was no significant difference between the ΔuROI (Z = −0.699, p = 0.485) and ROI (Z = 0.555, p = 0.579) compared to longitudinal hippocampal atrophy and baseline hippocampal atrophy. Discussion In this study, non-demented individuals with increased CSF-pTau181 levels without amyloidopathy (A-T + N ±) showed no significantly greater longitudinal grey matter atrophy in any brain region compared to controls (A-T-N-) or non-demented AD individuals (A + T + N ±) as assessed by VBM. Similar to the A + T + N ± group, the A-T + N ± group exhibited the most severe longitudinal atrophy rate in the medial temporal lobes and the singular gyrus. 
These findings are in accordance with previous reports that the atrophy of nondemented individuals with SNAP shows great overlap with the atrophy of nondemented individuals with AD, particularly in the medial temporal lobe, despite possibly different underlying pathologies (Jack et al., 2016; Wisse et al., 2021). However, compared to AD patients, longitudinal atrophy appears to be less pronounced in SNAP patients and only slightly greater than, or indistinguishable from, that of biomarker-negative controls (Burnham et al., 2016; Jack et al., 2016; Schreiber et al., 2017; Stocks et al., 2022). In the ADNI cohort investigated in this study, a region of overlap in the angular gyrus of the A-T+N± group showed significantly less longitudinal atrophy than that in the A+T+N± group. This was also confirmed in the second cohort, the Erlangen cohort. This is in accordance with previous reports, which identified the angular gyrus as a signature region for AD (Dickerson et al., 2017). However, as shown by ROC analyses, the longitudinal atrophy of the identified region in the angular gyrus did not discriminate better between A-T+N± and A+T+N± individuals than the longitudinal atrophy of the hippocampal volume, which is already established as a measure of neurodegeneration (N; Jack et al., 2018b). In summary, the A-T+N± group exhibited no evidence of pronounced longitudinal brain atrophy in nondemented individuals compared with controls, whereas both groups showed less atrophy in the hippocampus and in the ROI overlapping with the angular gyrus compared with AD individuals. Moreover, in the Erlangen cohort, no pronounced hippocampal atrophy or atrophy of the ROI could be detected cross-sectionally compared with controls. This is in contrast to the reported characteristics of SNAP, in which atrophy of the medial temporal lobe and hypometabolism in temporal-parietal regions as assessed by 18F-fludeoxyglucose PET were considered as criteria for its definition (Jack et al., 2012, 2016, 2018b). In addition, as previously reported by others and us, pTau181 and Aβ are positively associated in the A-T+ group, which is why we proposed the term PASSED, a pTau and Aβ surge with subtle cognitive deterioration, for this biomarker constellation (DeLong et al., 1988; Delvenne et al., 2022; Oberstein et al., 2022). The absence of brain atrophy both cross-sectionally and longitudinally, the absence of differences in psychometric trajectories that we have previously reported, and the higher mean age of nondemented individuals in this group compared with biomarker-negative controls may suggest that this is not a dementing disease but rather an ageing-associated condition (Oberstein et al., 2022). Despite the similarity to biomarker-negative individuals in terms of course, the A-T+ group appears to be a distinct condition in which Aβ peptides are elevated in CSF, but a number of other proteins are decreased in CSF and plasma (Delvenne et al., 2022). The differences between PASSED and SNAP in its original definition, i.e., A-T±N+, need to be clarified in the future. Possibly pTau alone, unlike tTau, is not indicative of neurodegeneration in terms of brain atrophy. However, considering the strong association between pTau and tTau, another possibility seems more likely: for the second cohort, not only the Aβ42 level but also the Aβ42/Aβ40 ratio was considered in the assessment of amyloidopathy.
The Aβ42/Aβ40 ratio appears to detect AD earlier, so more individuals with AD in the first cohort may have been misclassified as A-T+ because of the lower sensitivity of the Aβ42 level alone (Lewczuk et al., 2004, 2017). Considering that brain atrophy in AD typically begins in the mesial temporal lobe (McKhann et al., 2011), the misclassification of individuals with an abnormal Aβ42/Aβ40 ratio but normal Aβ42 levels could explain the lack of difference between the longitudinal hippocampal atrophy of the A-T+ and A+T+ groups in the direct comparison in the first cohort. Therefore, we believe it is essential to measure the Aβ42/Aβ40 ratio for comparisons of PASSED and SNAP in future studies. The advantages of machine learning methods, such as the ability to use data from different platforms for disease classification (e.g., MRI, neuropsychological data, and data from omics platforms), may be useful in future studies to determine whether and to what extent SNAP and PASSED are different pathophysiological conditions.
An important strength of our study is the use of two independent datasets. The difference in results between the two datasets underscores the need to consider the method used to detect amyloidopathy, tauopathy, and neurodegeneration or neuronal damage when evaluating the ATN classification. One limitation is that the study population of the Erlangen cohort was not randomly selected from the community and was generally well educated, precluding extrapolation of the study results to the general population. Furthermore, subjects in the ADNI cohort were on average significantly older than those in the Erlangen cohort. When interpreting longitudinal atrophy rates in the Erlangen cohort, it should be noted that follow-up intervals differed between subjects. Finally, in this study, the number of A+T-N± and A-T-N+ subjects was too small to draw conclusions about the interaction of these groups with those studied.
In summary, nondemented subjects with elevated pTau levels without amyloidopathy and an Aβ surge with subtle cognitive decline (PASSED) did not show a unique longitudinal atrophy pattern compared with nondemented subjects with AD. The lack of a significant difference between atrophy rates in PASSED and controls suggests that elevated pTau181 levels without concomitant amyloidopathy are not indicative of a neurodegenerative disorder.
Data availability statement
Publicly available datasets were analyzed in this study. These data can be found at: https://adni.loni.usc.edu/data-samples/access-data/.
Ethics statement
The studies involving human participants were reviewed and approved by the Ethik-Kommission der Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany. The patients/participants provided their written informed consent to participate in this study.
Author contributions
TO designed the study. TO and A-LH analyzed the data, interpreted the results, and drafted the manuscript. A-LH, PO, JU, and E-MS contributed to the data collection, the diagnostic review process, and coordination of the MRI appointments. PS contributed to the oversight of data collection, the diagnostic review process, and reviewing the manuscript. AF contributed to the data interpretation and manuscript revision. MS contributed to the MRI data analysis. AD oversaw the MRI data collection and contributed to the manuscript revision.
PL contributed to the data analysis, the diagnostic review process, interpretation of results, and manuscript revision. JK oversaw the clinical data collection and contributed to the interpretation of findings, the diagnostic review process, and revision of the manuscript. JM contributed to the study design, oversight of data collection, the diagnostic review process, and manuscript revision, and obtained ethics permission. All authors contributed to the article and approved the submitted version.
Funding
We received financial support from the Deutsche Forschungsgemeinschaft and Friedrich-Alexander-Universität Erlangen-Nürnberg within the funding programme "Open Access Publication Funding". Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI; National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following:
Conflict of interest
The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
2023-02-24T17:03:20.240Z
2023-02-22T00:00:00.000
{ "year": 2023, "sha1": "e4d5de4005ef68f55f27e46a03033a77bda4942e", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "3a6073eda7469363a06f95b0c89b61087f563078", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
270350404
pes2o/s2orc
v3-fos-license
Recent advances in Rapidly-exploring random tree: A review
Path planning is a crucial research area in robotics. Compared to other path planning algorithms, the Rapidly-exploring Random Tree (RRT) algorithm possesses both search and random sampling properties, and thus has more potential to generate high-quality paths that balance the global optimum and the local optimum. This paper reviews the research on RRT-based improved algorithms from 2021 to 2023, including theoretical improvements and application implementations. At the theoretical level, improvements of the branching strategy, the sampling strategy, post-processing, and model-driven RRT are highlighted; at the application level, application scenarios of RRT for welding robots, assembly robots, search and rescue robots, surgical robots, free-floating space robots, and inspection robots are detailed; finally, many challenges faced by RRT at both the theoretical and application levels are summarized. This review suggests that although RRT-based improved algorithms have advantages in large-scale scenarios, real-time performance, and uncertain environments, and some strategies that are difficult to describe quantitatively can be designed based on model-driven RRT, RRT-based improved algorithms still suffer from hyper-parameters that are difficult to design and from weak generalization. At the practical application level, the reliability and accuracy of the hardware, such as controllers, actuators, sensors, communication, power supply, and data acquisition efficiency, all pose challenges to the long-term stability of RRT in large-scale unstructured scenarios. As part of an autonomous robot, the upper limit of RRT path planning performance also depends on the robot localization and scene modeling performance, and there are still architectural and strategic choices to be made in multi-robot collaboration, in addition to the ethics and morality that have to be faced. To address the above issues, I believe that multi-type robot collaboration, human-robot collaboration, real-time path planning, self-tuning of hyper-parameters, task- or application-scene oriented algorithm and hardware design, and path planning in highly dynamic environments are future trends.
Introduction
Path planning is one of the most important problems in robotics. One definition is that a robot works autonomously to plan a path from the initial point to the goal point in a given map scenario while ensuring that no collision occurs with obstacles in the map. Challenges in this field include the difficulty of modeling the environment, the convergence of the algorithm, and avoiding locally optimal solutions. In addition, most traditional path planning algorithms are designed for scenarios with known maps and may not be suitable for use in unfamiliar environments.
The classical path planning algorithms contain a search strategy represented by A* and a sampling strategy represented by
probabilistic roadmap (PRM). The former relies on the current position and historical position information, and is thus prone to locally optimal solutions; the latter makes use of the global information of the map, which makes it possible to obtain the optimal solution, but this method lacks an optimization strategy for local paths, which leads to redundancy in path generation. The Rapidly-exploring Random Tree (RRT) [1,2] algorithm combines the characteristics of search and sampling: its search characteristic is manifested in the fact that the algorithm starts from the root node and keeps branching, growing like a tree until it reaches the target node; its sampling characteristic is manifested in the fact that the branching process is affected by random sampling points, so RRT-based improved algorithms have the potential to achieve better performance than other algorithms. Theoretically, the hyper-parameters or strategies involved in the RRT algorithm, such as the setting of the search step size, the selection of branching nodes, and the design of branching strategies, can be designed for different optimization objectives; application-wise, the improvement strategies of RRT also need to be designed for the objectives and constraints of specific application scenarios. However, both the theoretical optimization itself and the leap from theory to reality face challenges. At the theoretical level, we need to know: Are the RRT-related improved algorithms and model-driven RRT capable of searching large-scale scenarios? How is the real-time performance of the improved RRT algorithms? Can the improved RRT algorithms cope with uncertain environments? Has the path quality been further improved? At the application level, we need to know: In which fields has the improved RRT algorithm been applied? What are the results? What problems are still faced in real scenarios? Therefore, it is crucial to review the recent advances in the RRT-related literature at both the theoretical and application levels, summarize the challenges mentioned in the literature, and look ahead to future work. Based on the above motivations, the layout of this paper is as follows. Section 2 compares the advantages and disadvantages of RRT and other traditional path planning algorithms; in order to show the improvement strategies of the RRT algorithm more clearly in the subsequent sections, this section also describes the implementation process of the traditional RRT algorithm in detail. Section 3 describes in detail the research progress of RRT-based improved algorithms at the theoretical level, covering branching strategy improvement, sampling strategy improvement, post-processing, and model-driven RRT, and summarizes the advantages of the theoretical improvements for large-scale scenario search, real-time performance, uncertain environments, and path quality. Section 4 reviews the achievements of RRT-related algorithms on real robots, including arc welding robots, assembly robots, search and rescue robots, surgical robots, free-floating space robots, mining robots, inspection robots, and interactive robots. Section 5 summarizes the challenges faced by the algorithms at the level of real-world applications, as well as future research trends.
Fundamentals of rapidly-exploring random tree
The Rapidly-exploring Random Tree (RRT) algorithm commonly referred to in the field is actually the heuristically biasing RRT (HBRRT) [3], target-biased RRT (TBRRT) [4][5][6][7], goal-biased RRT (GBRRT) [8][9][10][11], goal-oriented RRT (GORRT) [12][13][14] or goal-directed RRT (GDRRT) [15][16][17][18]. The idea of this algorithm is to take the initial position as the root node and then add leaf nodes by random sampling. When a leaf node of the random tree arrives at the target position, a path from the initial position to the goal position has been planned. The implementation of the RRT algorithm is shown in Algorithm 1: firstly, randomly scatter the node q_rand; then call the function FindQNear to find the node q_near nearest to q_rand; then call the function FindQNew to get the new node q_new; next, judge whether the newly generated node q_new collides with the obstacles in the map environment. If not, a space search extension has been completed successfully and the new node q_new is accepted; otherwise, q_new is discarded. Finally, repeat the above process until q_new reaches the defined desired goal position q_goal.
In the traditional RRT algorithm, among all the existing nodes, the strategy to select the node q_near that is currently the most suitable for branching is to choose the Euclidean distance as the cost function and select the node with the minimum cost value; and the strategy for generating the path from q_near to q_new is to connect these two nodes directly, thus generating a line segment. In an ideal path planning scenario, these two strategies of the traditional RRT algorithm seem to be the optimal solution, but most of the autonomous robots in real scenarios, such as unmanned aerial vehicles (UAVs), self-driving cars, and unmanned surface vessels, belong to non-holonomic constraint systems. Thus, scholars consider the following questions: (1) Is there a better strategy for selecting q_near? (2) Is there a better strategy to connect q_near and q_new? In addition, it may also be worthwhile to further investigate these issues involved in the GDRRT algorithm: (3) Can the feasible solutions generated by the GDRRT algorithm be further optimized (shorter, smoother, and traceable)? (4) How can the algorithm be improved in cases where the map is not globally known? (5) Can the hotly debated large-scale models also be incorporated into RRT? This review focuses on these improvement strategies for RRT algorithms, presenting recent advances in the last three years (2021-2023).
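As a concrete reading of Algorithm 1, the following Python sketch implements the goal-biased RRT loop in a 2-D map with circular obstacles. The fixed step size, the hard-coded map bounds, the circular obstacles, the goal bias of 0.1, and the fact that only the new node (rather than the whole connecting segment) is collision-checked are simplifying assumptions for illustration; this is not code from any of the reviewed papers.

```python
import math, random

def rrt(q_init, q_goal, obstacles, step=0.5, goal_bias=0.1, goal_tol=0.5, max_iter=5000):
    nodes, parent = [q_init], {0: None}
    for _ in range(max_iter):
        # goal-biased sampling: occasionally propose the goal itself, otherwise a random point in [0, 10]^2
        q_rand = q_goal if random.random() < goal_bias else (random.uniform(0, 10), random.uniform(0, 10))
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], q_rand))   # FindQNear
        q_near = nodes[i_near]
        d = math.dist(q_near, q_rand)
        if d == 0:
            continue
        q_new = (q_near[0] + step * (q_rand[0] - q_near[0]) / d,                      # FindQNew: steer by one step
                 q_near[1] + step * (q_rand[1] - q_near[1]) / d)
        if any(math.dist(q_new, c) < r for c, r in obstacles):                        # collision check (node only)
            continue
        nodes.append(q_new)
        parent[len(nodes) - 1] = i_near
        if math.dist(q_new, q_goal) < goal_tol:                                       # goal reached: backtrack the path
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i]); i = parent[i]
            return path[::-1]
    return None

print(rrt((1, 1), (9, 9), obstacles=[((5, 5), 1.5)]))
```

A full implementation would also check the segment between q_near and q_new against obstacles and would expose the step size and goal bias as tunable hyper-parameters, which is exactly where the improvement strategies reviewed below intervene.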
Performance comparison

Table 1 summarizes the advantages and disadvantages of some classical path planning algorithms. These algorithms can be classified into search-based and sampling-based strategies. Representative search-based algorithms include Dijkstra, A* and D*. Among them, some use traversal-based search, such as breadth-first search (BFS) and Dijkstra, and therefore incur a high cost when the target point is far from the initial position; others are cost-function oriented, such as A*, which aims to find a relatively optimal path quickly. Search-based algorithms, however, rely on information about the current and historical positions and are therefore prone to locally optimal solutions, which is an inherent limitation of such methods. The representative sampling-based algorithm is the probabilistic roadmap (PRM), which employs completely randomized node search and determines the feasibility of the generated paths from the presence of obstacles between nodes; it exploits the global information of the map and therefore has the potential to obtain optimal solutions. However, this undirected random search produces redundant nodes and lacks a principled basis for optimizing local paths. To address these problems, strategies that fuse search-based and sampling-based approaches are widely used, the most representative of which is the familiar RRT.

In addition, strategies from other fields have been introduced into path planning. For example, the artificial potential field (APF), genetic algorithm (GA), particle swarm optimization (PSO) and ant colony optimization (ACO) from the field of optimization, and the deep deterministic policy gradient (DDPG) from the field of deep learning, have all been applied to path planning with good results.

Overall, RRT is a hybrid algorithm with both searching and sampling features. Its searching feature lies in the fact that RRT branches out from the root node and finds feasible paths connecting the start and end points in an obstacle map; its sampling feature lies in the fact that the branching process is driven by probability. The algorithm therefore has the potential to achieve better performance than purely search-based or purely sampling-based algorithms, and advances in other fields such as optimization and artificial intelligence further contribute to the development of RRT.

Overview of RRT-based algorithm improvements

The stochastic nature of the branching makes it possible to optimize the algorithm further. At the theoretical level, the optimality of the paths is usually studied; at the practical level, scene constraints, model constraints and other constraints must be weighed and the most suitable paths designed for the given requirements. In response to the large body of literature on RRT-based improvement algorithms, this review discusses four aspects, namely branching strategy improvement, sampling strategy improvement, post-processing and model-driven RRT, and looks at several possible future research directions in this field.
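Before turning to the individual improvement families, it is worth noting how small the hook for such improvements can be. The goal-biased and target-biased variants cited in the previous section, for instance, only alter the sampling step of the basic loop. The fragment below is a hedged illustration of that idea; the bias value of 0.1 and the rectangular bounds are arbitrary, illustrative choices rather than values taken from the cited papers.

import random

def sample_with_goal_bias(q_goal, bounds, bias=0.1):
    """Return q_goal with probability `bias`, otherwise a uniform 2-D sample.

    `bounds` is ((x_min, x_max), (y_min, y_max)); both the bias value and the
    rectangular bounds are illustrative assumptions.
    """
    if random.random() < bias:
        return q_goal
    (x_min, x_max), (y_min, y_max) = bounds
    return (random.uniform(x_min, x_max), random.uniform(y_min, y_max))

Dropping such a sampler into step 1 of the earlier sketch leaves the rest of the loop untouched, which is why sampling-strategy improvements are usually easy to combine with the branching and post-processing improvements discussed below.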
Branch node selection

Few algorithms for branch node selection are more classical than RRT* [19], which adds a two-step optimization strategy consisting of 'rewire' and 'random relink': rewire means that after a new node is added to the tree, a parent node is re-selected for it so that the cost of the newly generated path is smaller, and random relink means that after rewiring, the nodes surrounding the new node are reconnected to it. Karaman et al. [20] prove the asymptotic optimality of the RRT* algorithm, implying that it can find the optimal path as time tends to infinity. Several researchers have proposed improvements that address shortcomings of RRT* in specific application scenarios. Aiming at the problem that many RRT*-based variants sample inefficiently and converge slowly in environments consisting of long corridors, Ding et al. [21] propose the expanding-path RRT* (EP-RRT*) based on heuristic sampling in the path expansion area; experiments show that, compared with RRT* and Informed-RRT*, EP-RRT* improves node utilization, speeds up convergence and obtains better paths for the same number of iterations. Reconfigurable modular robotic systems are inefficient under RRT* because of their variety of formations; Lu et al. [22] therefore propose an obstacle-aware hybrid density network to guide the generation of polygonal nodes, and the results show that the strategy of connecting convex polygon trees (CPT*) in the manner of RRT* improves scalability to large environments. To minimize the energy consumption of the robot, Yu et al. [23] propose a cylinder-based informed RRT* (Cyl-iRRT*) algorithm, which seeks the optimal homotopy path by focusing the search space on a gradually shrinking cylinder. Alam et al. [24] present a pick-and-place RRT* under a novel flight cost (FC-RRT*), which generates nodes in a predetermined direction and then calculates the energy consumption using the circle-point method. Modular self-reconfiguring robots can change their configurations to adapt efficiently to various tasks; to exploit this property, Odem et al. [25] define RRT*-based topological configurations, a new approach that represents a set of equivalent configurations of a modular self-reconfiguring robot as a single topological configuration, thereby significantly reducing the tree size. The Quick-RRT* algorithm proposed by Jeong et al. [26] reselects the parent nodes and optimizes the pruning range, and its superiority is verified by analyzing the space and time complexity.

RRT-Connect [27,28], also known as bidirectional RRT (B-RRT), introduces a dual-tree expansion mechanism that simultaneously expands random trees rooted at the initial and goal points. Rajendran et al. [29] present an RRT-Connect variant that incorporates human awareness, so that the generated paths are as visible as possible to humans, facilitating human-robot collaboration. Chen et al. [30] fuse RRT-Connect with Bezier curves and apply them to a cooperative assembly system composed of two six-degree-of-freedom robotic arms, effectively optimizing the trajectories of the dual arms. Li et al.
[31] notice that Rapidly-exploring random vines (RRV) [32] performs well in single narrow channel environments but poorly in cluttered environments, while RRT presents the opposite result, so they propose a fusion algorithm based on RRT-Connect and RRV.Cao et al. [33] point out the advantages of RRT-Connect, artificial potential field (APF) and cubic B-spline for global path planning, local path planning and curve smoothing, respectively and fuse these three algorithms together to optimize the paths of unmanned aerial vehicle (UAV).In Ref. [34], a novel adaptive gravity field-based RRT-connect method is proposed and the efficiency of the improved algorithm is further demonstrated on an actual manipulator platform.Li et al. [35] optimize the RRT-Connect nearest node selection mechanism, which has better adaptability in the apple orchard environment and promote the automation of orchard operations.Other application scenarios for RRT-connect include path planning of the fruit tree pruning manipulator [36], insect-like mobile robots in a narrow environment [37], UAV trajectory tracking [38] and navigation [39], multi-UAV formation shape generation [40], automatic sampling of exhaust emissions [41], closed-loop control of microrobots subjected to flowing fluid disturbances in a microfluidic environment [42], home service robot arms [43].Several researchers investigate improved strategies for RRT-connect, Spinos et al. [44] propose the link-augmented graph to increase the range of feasible solutions for truss robots and apply it to RRT-Connect to search all target points more efficiently.Kang et al. [45] propose a rewiring method based on triangular inequalities to bring RRT-Connect closer to the optimum.To deal with manipulator path planning in complex multi-obstacle environment, Petit et al. [46] introduces a new method called RRT-Rope, which builds on RRT-connect by using a deterministic shortcut technique for fast post-processing while adding intermediate nodes at branches of the tree. The study of the fusion of RRT-connect and RRT*-based algorithms is also a branch of the study of improved RRT-based algorithms.The B-RRT* algorithm [47] directly fuses RRT-connect and RRT* and also inherits the advantages of both algorithms.To improve the obstacle avoidance efficiency of the redundant robotic arm, Dai et al. [8] propose a B-RRT* based on a novel potential field guidance (PB-RRT*), and the results show that the proposed algorithm plan shorter paths, larger gap between the manipulator and the obstacle and fewer invalid nodes when compared with the B-RRT*.Singh et al. [48] incorporate B-RRT and a modified Bezier curve, and try and test the technique in various real-world experiments.Shama et al. [49] use a probabilistic Gaussian mixture model to identify the regions most likely to generate nodes for faster convergence.Wang et al. 
[50] propose a growth point evaluation function based on the adaptive resolution octree map, which guides the generation of RRT paths to make the growth strategy purposeful, and then the algorithm also reselects the parent and candidate nodes and rewires them.The experimental results show that the improved RRT algorithm can eliminate the redundant bifurcations of the growing tree, reduce the number of sampling times, and greatly improve the growth efficiency compared with the traditional RRT, RRT* and B-RRT*.Other application scenarios for B-RRT or B-RRT* include ice navigation [51], arc welding robot [52], the autonomous flight of UAV [53], robot path planning in constrained environment [48,54], redundant manipulators [8,55], autonomous parking [56], lunar rover [57], litchi-picking robot [58]. There is also some literature that implements branch-node selection improvements based on RRT.In order to design trackable paths for articulated vehicles, I and my team propose a hybrid strategy combining RRT, farthest node search and head correction with fixed wheel position [16].To solve the non-convex optimization problem, He et al. [59] devise an barrier pairs-based RRT (BP-RRT) algorithm, where each barrier pair consists of a quadratic barrier function and a full-state feedback controller, to achieve the synthesis of locally optimal controllers, which are validated in the simulation of a dual linkage manipulator robot.Hao et al. [60] present the complex environments RRT (CERRT), which restricts vertex selection and expansion, and samples the region near obstacles multiple times to avoid useless exploration.Guo et al. [61] introduce the concept of candidate point array, where when an obstacle is present on the straight line between the nearest point and the randomly generated current point, that point is not used for the current stage of path generation but for subsequent path optimization, and this strategy is applied to the Yobogo legged robot navigation.Pareto dominance is a very well known theorem in game theory, where a Pareto improvement is defined as an allocation of resources that makes at least one person better off without making anyone worse off.When using the RRT path planning algorithm, the Pareto improvement method is used to select the optimal node from dozens to hundreds candidate nodes [62], motion planning based on zero-sum games is also a similar idea [63]. Branch design Branching designs are generally improved in terms of branch step size or branching mode.In terms of branch step size improvement, some researchers determine the optimal branch size through extensive experiments in a given scenario [64,65], and others give specific strategies, for example, In Ref. [6], a step growth rate is introduced into the expression of angle selection [66].introduces the "step-size dichotomy" to solve the problem of excessively long step size in the APF algorithm due to the large range of obstacle rejection, and applies it to the motion planning of the citrus picking manipulator.In Ref. [67], the step size varies according to the density of obstacle distribution and Q-learning is used to reduce the randomness of RRT, a scenario that no longer requires accurate environment modelling and vehicle modelling.In Refs.[36,68], the adjustment coefficient is introduced and when encountering an obstacle collision, the step size becomes progressively larger, and vice versa, the branch size returns to its initial value.A similar strategy is presented in Ref. 
[34], where the branch size increases by a fixed length when a node is successfully extended and, conversely, decreases by the same fixed length otherwise. Another related strategy is presented in Ref. [26], where a direct connection between ancestor nodes and the new node is considered during the rewiring procedure. In Ref. [69], the step size is adapted with a gradient descent method so that it becomes smaller as the target region is approached. In Ref. [70], a dynamic step size is designed by means of a piecewise function: the branch length varies logarithmically when it is below a certain threshold and is a fixed constant above it. In Ref. [37], a dynamic step size formula is designed based on an exponential law. In Ref. [71], Peng et al. count the number of expansion failures and use this count as a variable in the branch-size adjustment coefficient. In Ref. [72], a gravity-adaptive step size strategy is applied.

In terms of branch mode improvement, Nasir and Islam observe that the paths explored by RRT and RRT* are often stitched together from many folded segments, whereas the optimal path in obstacle-free space is generally a straight line. They therefore design RRT*-Smart [73,74], which, after a feasible path has been generated, continuously searches for an unobstructed, direct connection from the leaf node to a parent node further up the tree, turning multiple folded segments into a single folded segment. A similar strategy is that of Chung et al. [75], who argue that when an obstacle collision occurs, branches should continue to extend in the same direction. In Ref. [76], a simplified rewiring step is performed in which, compared with RRT*, only nodes in the goal-tree set are rewired. Ref. [77] proposes a cyclic pruning algorithm to shorten the paths, where the first pruning deletes unwanted nodes and the second pruning optimizes the paths.

Some branching modes are designed for specific scenarios. For example, Zacchini et al. [78] predict the effect of taking further actions in unknown environments by considering the branch information gain and assessing the impact of the rewiring process on tree growth. To enable a spacecraft to plan time-optimal attitude trajectories under complex pointing constraints, Xu et al. [79] propose an intelligent branching model based on geometric-level spherical and quadrilateral interpolation, which can plan suboptimal solutions within a few milliseconds. Long et al. [80] propose RRT*-Smart with adaptive regional dynamics (RRT*Smart-AD) to handle the dynamic characteristics of dynamic environments. Other branching modes are designed around kinematic constraints. Song et al. [81] develop the kinodynamic RRT*-Smart algorithm as a collision avoidance strategy for unmanned surface vessels. To harvest lychees with a rigid robotic arm, Ye et al. [58] propose an adaptive weighted particle swarm optimization method. Wang et al. [82] suggest an efficient branch pruning strategy that introduces a new state while taking kinematic constraints into account and apply the algorithm to a differential-drive robot. James et al. [83] develop two types of fillets to meet the needs of curvature-constrained vehicles: a circular fillet for limiting the maximum curvature of the path and a spline fillet for achieving continuous path curvature. To address the angular limitations of RRT* in hyper-redundant manipulator applications, Ji et al.
[84] design an ellipsoid-shape RRT* (E-RRT*), which adopts ellipses instead of line segments to connect adjacent nodes, and verifies the superiority of the algorithm in a narrow environment.Xu et al. [16] propose the farthest node search strategy and head correction with fixed wheel position to redesign the branch based on the articulated vehicle model.To account for the different pointing constraints faced by multiple instruments during spacecraft attitude manoeuvres, discrete quadratic path nodes are obtained using RRT* [85].To solve the path planning problem under ocean currents [86], establish a data-driven energy model and a time-varying current model, and introduce these constraints into RRT. Sampling strategy improvement B-RRT is an improved RRT algorithm based on branch node selection.However, this algorithm still suffers from slow convergence, some scholars note that more subtrees may accelerate search efficiency, Chai et al. [87], Tu et al. [88] and Zhang et al. [89] propose generating multiple subtrees within a narrow channel and merging them into a main tree in subsequent sampling.Luo et al. [90] construct a third random tree based on B-RRT, which constructs intermediate points to find and converge global paths in an efficient way.In Ref. [68], a pre-harvesting guide point is designed between the start point and the end point and four random trees are generated between these three points.Some scholars make improvements based on bias probabilities [91].proposes to construct the RRT with the probability of changes of 1 and 0 (1-0 Bg-RRT) in order to jump out the local minima in time, while [77] proposes the GM (1,1) model to optimize the background values and verifies that the proposed model has a faster convergence speed and higher stability in a narrow-channel environment as compared to the 1-0 Bg-RRT.Some scholars notice that the slow convergence of the RRT* algorithm is due to uniform sampling of the free space, so Informed-RRT* [92] employs an elliptical sampling approach instead of globally uniform sampling.Uzun et al. [93] propose a n-sliced Informed-RRT* method, which divides the reference path into a certain number of path segments and optimizes them one by one, solving the convergence problem of Informed-RRT* in multi-curve paths.In Ref. [94], spatio-temporal Informed RRT* is proposed to provide the coordinates and velocity of each planning point in a road scene.In Ref. [95], a linear quadratic minimum time local planner-based Informed-RRT* method is proposed for a laboratory-scale 3D gantry crane.In Ref. [73], Informed-RRT* incorporates pre-harvest points and quad-tree for fast citrus harvest.In Ref. [96], Informed-RRT* incorporates dynamic window approach further reduces the global path length.To address the safety and productivity of construction sites due to temporal and spatial conflicts, a hybrid algorithm incorporating Informed-RRT*, geometry and discrete event simulation is implemented and evaluated, and experiments show that the proposed algorithm satisfies all types of obstacle avoidance constraints, provides priority for higher priority construction activities, and also obtains the shortest route and time for each transport vehicle [97].In order to plan energy-efficient paths for reconfigurable robots in complex environments, Kyaw et al. [98] introduce an energy objective function into the batch informed trees*, where the energy objective function considers the energy cost of each reconfigurable action of the robot.In Ref. 
[99], the design of a short-horizon planner for collision avoidance is addressed, and an improved Informed-RRT* algorithm is proposed that encodes the International Regulations for Preventing Collisions at Sea (COLREGs) by defining a rule-compliant region in the configuration space and searching for feasible paths within that region.

Circle-based sampling strategies have also been proposed. For example, a B-RRT based on a center-circle sampling strategy has the advantage of generating fewer nodes [100], while a B-RRT based on forward and reverse sampling circles demonstrates advantages for the local optimization of quadrotor trajectories in complex dynamic environments [101]. In Ref. [24], an energy-efficient industrial robot motion planning method is proposed in which the energy consumption is calculated using the circle-point method.

Another widely discussed sampling strategy is Quick-RRT* [26], which uses the triangle inequality for parent selection and rewiring. One class of studies uses Quick-RRT* as a baseline to highlight the superiority of a proposed algorithm. For example, Ref. [102] generates semi-triangular regions based on the triangle inequality to improve path quality and uses a tabu table to speed up path generation during the rewiring process, and the results show that the proposed algorithm outperforms Quick-RRT* in both path quality and path generation efficiency. Ref. [103] constructs a gravity field function from seafloor topographic data and uses it for underwater path planning; the results show that, when gravity must be taken into account, the proposed algorithm is 5.98 times as efficient as Quick-RRT*. The underwater path planning problem is also considered in Ref. [104], with a focus on rescue missions for underwater robots: the algorithm takes as parent the collision-free ancestor node farthest from the sampling point, adds intermediate nodes to the path depending on the step size, and applies the triangle inequality several times throughout the process; the results confirm that the planned path is obtained faster than with Quick-RRT*. Unlike the above algorithms, Ref. [105] is more concerned with robustness and adjusts the sampling strategy to optimize the planned path, specifically by adapting the size of the sector-shaped sampling region and the sampling probability to the scene; the results verify that the algorithm is more robust than Quick-RRT*. In Ref. [106], potential-function-based optimal path planning considering congestion (CCPF-RRT*) is proposed, and the results show a better initial solution, faster convergence and lower movement cost than Quick-RRT*. Another class of studies optimizes further on the basis of Quick-RRT*. For example, Wang et al. [107] combine the backtracking idea of Quick-RRT*, the greedy search strategy of RRT-Connect and triangle-inequality path optimization, and verify the effectiveness of the resulting algorithm; they also propose a circular-arc-fillet-based algorithm (CAF-RRT*) [108] for path planning in two-dimensional workspaces, which obtains the initial path by combining Quick-RRT* with B-RRT. Qureshi et al.
introduce APF to achieve a trade-off between exploration and exploitation, named P-RRT* [109], and potential functions based Quick-RRT* (PQ-RRT*) is proposed [110] to overcome the limits of the slow convergence rate of RRT*.Experiments show that the proposed algorithm guarantees a fast convergence to an optimal solution and generates a better initial solution.In Ref. [111], Quick-RRT*-based map is constructed and unsafe nodes for mobile robots are deleted from the constructed map.In Ref. [112], the virtual light-based Quick-RRT* is proposed, where in this algorithm, a sector-shaped light intensity sensing region centered on the target point is constructed [113].optimizes the sampling area in real time based on the node density.On the application area of the Quick-RRT* algorithm, aiming at the application object of the cable-driven super-redundant manipulator with 17 degrees of freedom and the narrow and complex application environment [114], design paths based on the relationship between the maximum deflection angle and the operating speed of the manipulator, and achieve good results, Jeong et al. [115] develop an optimal obstacle avoidance path planner for stabilizing the robot's heading, and this strategy.Similar improvement strategies to Quick-RRT* are Fast-RRT* and F-RRT*.Fast-RRT* [116] combines a target bias strategy with constrained sampling to reduce the blindness of sampling, and prioritizes the ancestors of the nearest node to the root node in the branching phase.To address the issue of autonomous navigation for micro aerial vehicles, Martinez et al. [117] optimize the paths by reconnecting the nodes based on Fast-RRT*.They also develop a signed distance field to facilitate collision checking.In Ref. [4], a new fusion algorithm for target bias, gravitational potential field and Fast-RRT* is also proposed.F-RRT* [118] optimizes path creation by generating parent nodes for random points, outperforming traditional RRT's parent selection strategy.Experiments demonstrate that the algorithm surpasses RRT*-Smart [73], and Quick-RRT* [26] in terms of initial solution and fast convergence rate.Spatial offset sampling RRT* (SOF-RRT*) [7] is an enhancement of the F-RRT* algorithm, which introduces a spatially probabilistic weighted sampling strategy that increases the likelihood of sampling in regions with larger feasible regions.Cong et al. [119] propose FF-RRT*, a hybrid method that combines the target bias sampling strategy with the random sampling strategy.The branching strategy in this algorithm creates new parent nodes.Experiments demonstrate a significant reduction in convergence time when compared to the Fast-RRT* [116] and F-RRT* [118] in both simple and complex maze environments.The improved path planning algorithm considering congestion in Ref. [106] incorporate the advantages of F-RRT* and utilize the movement cost function to design an ideal path in a crowded environment.In Ref. 
[120], radar data are used to generate maps and the Graham algorithm is employed to delineate a dynamic flight exclusion zone (FEZ) so that feasible paths can be planned under hazardous weather; the influence range, size and distribution characteristics of the FEZ are then investigated. Other mainstream region-sampling improvements include generalized-Voronoi-diagram-based sampling regions [121], non-threshold adaptive sampling regions [122], Gaussian-distribution and local-biasing sampling regions [123], dynamic region sampling [105] and adaptive forward and backward sampling regions [124].

In addition to considering more trees and designing differently shaped sampling regions, a number of other improvement strategies exist. Wang et al. [52] devise a function to compute a variation parameter whose exponential term incorporates the number of collision-detection failures. In Ref. [76], a conditional sampling method is proposed in which nodes that cannot be part of the solution path because of velocity constraints are excluded from sampling. In Ref. [125], a dynamic RRT is proposed which, by heuristically using the current path length as the major-axis diameter of the informed subset, balances convergence time and path length in an environment with randomly distributed obstacles. In Ref. [126], sequential convex feasible sets are introduced into RRT* so that poor local optima are avoided. In Ref. [72], double-sampling-point comparison and selection strategies are used to reduce the randomness of sampling points. Ref. [127] introduces a virtual field sampling algorithm and a current constraint function for multi-unmanned-surface-vehicle path planning under spatially varying currents.

Requirements at the application level also drive improvements to RRT sampling strategies. To address the relatively low efficiency caused by the increased degrees of freedom of a robot system equipped with a gantry structure, Wang et al. [128] introduce a sampling-pool mechanism and select, within the pool, the node nearest to the line connecting the start and target nodes, which effectively shortens the search path; experiments show that, compared with IB-RRT* [129], the path cost and time cost are improved by 22.2 % and 32.5 %, respectively, and the success rate is relatively stable. To address the low search efficiency in large-scale road network environments, adjacent-relation-based RRT repetitive sequence optimization (A-RRT-RSO) is proposed, in which the A-RRT adopts a greedy strategy to sample neighboring mesh nodes, increases the leaf nodes and generates the RRT+ extended tree; the proposed A-RRT-RSO reduces the number of search nodes, avoids blind search and minimizes the cost of path computation [130]. For obstacle-avoidance path planning of manipulators, Zhang et al. [131] develop a sampling-based motion planning algorithm that samples, in advance, configurations satisfying the constraints of a prescribed motion planning task and stores them in an offline configuration dataset; this approach relies on the premise that the constraint manifolds are continuous over a specific range. Zeng et al.
[132] propose a tournament-selected point sampling strategy based on RRT to guide the underwater vehicle to the area of interest for sampling.The strategy maximizes information collection while working within a limited budget.Other applications include circular sampling strategy for autonomous vehicles [133], path planning for a UAV in communication-constrained operating environments [134], flight cost-based RRT for energy-efficient industrial robot motion planning [24], vector field stream based RRT* to plan unmanned surface vehicle paths under spatially variable ocean currents [135,136]. Post-processing The reason why RRT algorithms need post-processing can be stated at the theoretical level and the application level.From the theoretical level, the feasible solutions can be further optimized in terms of evaluation metrics such as path length, complexity, and consumption time, etc.Some of the previous reviews also falls into post-processing, like Some kinds of RRT-Connect [35,36,45,46], informed-RRT* [23,92,96], Quick-RRT* [26,110,112,113], RRT*-Smart [73,79,80], Fast-RRT* [116,119], F-RRT* [7,118], post triangular rewiring strategy [52,137], interpolation post-processing [138,139].From the application level, the feasible solutions are likely to not satisfy the kinematic model constraints of the robot or the scenario constraints in some application scenarios, therefore, the theoretically generated feasible solutions cannot be applied to real scenarios and further optimization is still required.In terms of considering kinematic model constraints, Zhou et al. [140] design a redundant dual-chain manipulator with two kinematic chains and a fixed base.They generate joint trajectories corresponding to the paths using flow shape analysis.Experiments confirm the method's practicality.Berg et al. [141] design curvature-aware with closed-loop RRT (CA-CL-RRT) to enhance the path planner's performance on curved highways, where in the closed-loop prediction phase, the virtual car follows the generated reference path and speed profile.In the curvature phase, the upper bound of the curvature constraint is introduced.Experiments demonstrate that the CA-CL-RRT algorithm proposed can significantly improve the path quality, particularly on curved roads with a radius of less than 1000 m.In the current automatic parking system, the parking trajectory planning algorithms based on geometric connection or optimization problem descriptions have problems such as strict requirements on the starting position, low planning efficiency, and discontinuous reference trajectory curvature.To solve these problems, Wang et al. [142] propose a hierarchical planning algorithm combining nonlinear optimization and an improved RRT* algorithm with Reeds-Shepp curves.Simulation results show that the proposed algorithm can design an effective parking trajectory under multiple parking scenarios.The Stanley algorithm is also used for path tracking and Reeds-Shepp curve to adjust the final parking attitude of the truck [143].Mao et al. [144] propose the retains the discrete search of the original rules of RRT while adding the continuity of the motion of unmanned surface vehicle (USV), where each movement including position, yaw angle, velocity, etc. is takes into account the complete dynamic constraints, which is called the state prediction RRT (SP-RRT) algorithm.In Ref. 
[145], an improved two-step timed elastic band is introduced to smooth the path and optimize the path lengths of automated guided vehicles. To address the significant challenges that RRT-related algorithms face in ship path planning, such as slow convergence and excessive turning points, Gu et al. [146] first cluster prior data to construct a bootstrap region that guides the RRT branching, and then optimize the paths using the Douglas-Peucker compression technique; the results show that the proposed algorithm achieves a good balance between efficiency and accuracy. To solve the online cooperative path planning problem for multiple quadrotors in unknown dynamic environments, Jia et al. [147] establish the kinematic constraints of the quadrotors and propose a spatio-temporal coordination strategy applicable to RRT. In addition, commonly used curve smoothing methods have been introduced into the RRT algorithm, including Bezier-curve-based RRT [83,107,148] and B-spline-curve-based RRT [37,116,149].

In terms of scenario constraints, Kim et al. [150] introduce TargetTree-RRT* for complex environments such as narrow parking spots; in this algorithm, clothoid paths are introduced in post-processing to address curvature discontinuity, and a cost function is used to build an objective tree that accounts for obstacles. For path planning in dynamic environments, Guo et al. [151] propose a hierarchical structure that updates the surrounding information in real time at the perception layer, obtains heuristic paths at the path planning layer, and improves path quality at the path optimization layer by combining a sampling method with an artificial potential field function. To address the long-distance, multi-level planning tasks of self-driving vehicles, Zhao et al. [152] propose a lifelong learning framework combining a GAN with RRT, taking a tractor-trailer as the application case and testing the proposed method in several scenarios with different characteristics. To address path planning in visual servoing, Reyes et al. [153] incorporate visual servo control into the state information to create local trajectories for RRTs and show that this approach is probabilistically complete. To address the failure of RRT* in certain constrained environments, Ramasamy et al. [154] design an adaptive RRT* (ARRT*) algorithm in a digital-twin simulation environment, with a collision detection function that enables dynamic sampling; the results show that ARRT* performs better than RRT* in constrained environments. In response to stochastic disturbances in the scene, Pedram et al. [155] propose a novel path length metric consisting of a weighted sum of the robot motion cost and the robot perception cost in an uncertain configuration space, and combine this strategy with the existing RRT* algorithm. In Ref. [156], the improved RRT algorithm takes the direction of motion of the obstacles into account, iteratively generates paths within a specified time and selects the shortest flight path. To handle the scanning of complex objects with many obstacles, Yan et al. [65] propose a direction-guided RRT, which builds on RRT by first simplifying invalid paths through linear processing and then smoothing the paths.
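Many of the post-processing schemes above, in particular the pruning and shortcut ideas, share a simple core operation: replacing a chain of waypoints by a single straight segment whenever that segment is collision free. The following fragment is an illustrative sketch of that greedy shortcut pass, not the algorithm of any specific reference; collision_free is the same placeholder predicate used in the earlier sketch.

def shortcut(path, collision_free):
    """Greedy post-processing pass over a piecewise-linear path.

    `path` is a list of waypoints from a planner; `collision_free(a, b)` is a
    placeholder predicate for straight-segment validity.
    """
    pruned = [path[0]]
    i = 0
    while i < len(path) - 1:
        # jump to the farthest waypoint still reachable in a straight line
        j = len(path) - 1
        while j > i + 1 and not collision_free(path[i], path[j]):
            j -= 1
        pruned.append(path[j])
        i = j
    return pruned

Smoothing methods such as the Bezier or B-spline fits cited above are then typically applied to the shortened waypoint list rather than to the raw tree path.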
Some scholars notice that RRT-based improvement are generally applicable to static scenarios, but there is a need for dynamic obstacle avoidance in real scenarios, so combining other dynamic obstacle avoidance strategies to achieve RRT dynamic obstacle avoidance path planning is also a hot research topic.Dynamic window enables dynamic obstacle avoidance tasks, but there is a risk of falling into local optimal solutions, therefore [157], proposes RRT*-fuzzy dynamic window approach (RRT*-FDWA) for collision-free path planning, where a reward and penalty function is designed so that the robot can quickly enter global path planning once it has successfully avoided an obstacle.D*, or Dynamic A*, is also a general class of techniques for dealing with dynamic obstacles, based on the principle of updating the overheads between states in the map in real time when a moving obstacle is detected during robot motion, and in Ref. [80], dynamic A* evaluation function is designed and introduced to RRT*-smart.The finite acceptance of bad solutions is the core of the Metropolis acceptance criterion.It is frequently used in simulated annealing algorithms to calculate the acceptance probability of a solution.In Ref. [158], based on the Metropolis acceptance criterion, an asymptotic vertex acceptance criterion and a nonlinear dynamic vertex acceptance criterion are developed.Q-Learning is an algorithm in reinforcement learning for making decisions and learning based on behavioural norms and rewards, in Ref. [63], a decoupled real-time motion planning framework is proposed that combines robust intermittent Q-learning with a sampling-based motion planner in which the sampling module begins each iteration by updating the neighbourhood radius.Some advanced control theories, such as linear quadratic regulator (LQR) and nonlinear model predictive control (NMPC), can withstand dynamic disturbances.The literature [159] proposes a two-stage risk aversion architecture designed for the safe control of stochastic nonlinear robotic systems.This architecture combines a novel RRT* variant for nonlinear steering, distributed robust collision checking, and a low-level reference tracking controller.Numerical experiments on unicycle dynamics demonstrate that NMPC outperforms LQR and its LQR variants in terms of performance metrics.In Ref. [160], the nominal mean value of the stochastic control distribution in the model predictive path integral is provided by RRT, leading to satisfactory control performance in both static and dynamic environments without any parameter fine-tuning.Other applications include manipulator dynamic obstacle avoidance [161], hybrid assembly path planning for complex products [162], 10-DOF rover traversing over 3D uneven terrains [163], UAV path planning [77,151,164], electric inspection robot navigation [165], cobot in dynamic environment [166], underground vehicles [167], automated guided vehicle [145], mining truck [143], redundant robots [168]. 
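Most of the dynamic obstacle-avoidance fusions above share the same outer pattern: execute the current path, re-validate it against the latest obstacle information every cycle, and fall back to replanning when it is invalidated. The fragment below is only a schematic of that pattern under assumed interfaces; plan, path_is_valid, get_obstacles and move_along are hypothetical placeholders rather than functions from any cited work.

def execute_with_replanning(start, goal, plan, path_is_valid, get_obstacles, move_along):
    """Schematic sense-validate-replan loop for dynamic environments.

    plan(pose, goal, obstacles)     -> waypoint list from any RRT variant
    path_is_valid(path, obstacles)  -> bool, re-checks the remaining path
    get_obstacles()                 -> latest obstacle set from perception
    move_along(path)                -> advances the robot one control step
                                       and returns the new pose
    All four interfaces are hypothetical placeholders.
    """
    pose = start
    path = plan(pose, goal, get_obstacles())
    while path is not None and pose != goal:
        obstacles = get_obstacles()
        if not path_is_valid(path, obstacles):
            # a segment has been invalidated: replan from the current pose
            path = plan(pose, goal, obstacles)
            continue
        pose = move_along(path)
    return pose == goal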
RRT with support vector machine classifier

In 2019, my team and I planned paths by fusing RRT, a support vector machine (SVM) classifier and the longest accessible path with course correction (LAP-CC) to address the articulated-vehicle path planning problem under globally known maps [169]. We first use the RRT algorithm to generate a feasible path from the start point to the end point, then label the obstacles on the two sides of the path as positive and negative classes and learn the zero-potential decision curve with the SVM algorithm; finally, because the curvature of the zero-potential decision curve does not conform to the kinematic model of an articulated vehicle, we propose the LAP-CC algorithm for post-processing of the path.

RRT with random trees classifier

Dominik et al. [170] compare the classification results of a Random Trees (RT) classifier and an SVM classifier in real outdoor scenes containing grass, leaves, pavement, trees, asphalt, walls, bushes and concrete as a way to construct the obstacle map. They conclude that RT classifiers are more suitable for field scenes than SVMs, and then use the RRT-Connect algorithm for six-legged walking robot path planning in scenarios modelled with the RT-based classifier.

RRT with k-nearest neighbor queries

Pan et al. [171] present a new method for fast probabilistic collision checking to accelerate RRT-based motion planning, in which k-nearest neighbor (k-NN) queries are used to find the prior query sample nearest to the new query configuration. The results show that this fusion improves the RRT-based planner by accelerating local paths and improving the search order on the roadmap, and the findings are validated on rigid and articulated robots. Similar work is done in Ref. [172]. The authors also state the shortcomings of the fusion algorithm and discuss future directions: first, the algorithm is parameter-sensitive, meaning that the hyper-parameters of k-NN need to be tuned adaptively for datasets of different sizes; second, the collision search strategy should not be balanced, i.e., well-explored regions should receive as few collision-detection queries as possible; furthermore, it would be meaningful and necessary to adapt the method to collision exploration in dynamic environments.

RRT with logistic regression

To address obstacle-avoidance path planning for irregularly shaped obstacles, Peng et al. [173] propose a control-barrier-function-based RRT* to generate a collision-free path for a bipedal robot among multiple polynomial-shaped obstacles, where the polynomial-shaped obstacles are estimated by logistic regression. The authors summarize the shortcomings of this method, including an obstacle occupancy that is too small to generate a suitable control barrier function and the inability to express the surrounding obstacles with a single control barrier function.

RRT with random forest

Cano et al. [174] evaluate and compare four hyper-parameter tuning methods (random sampling, AUC-bandit, random forest and Bayesian optimization) for RRT-Connect; random forest (RF) proves more effective for hyper-parameter optimization than random sampling and Bayesian optimization, and path generation under RRT-Connect with the tuned hyper-parameters is 1.2 times more efficient than with the default hyper-parameters. Song et al.
[19] use five machine learning algorithms to learn static obstacle data from the scene in the expectation of improving obstacle avoidance efficiency.Experiments show that RF provided the best predictions compared to k-NN, multinomial naive bayesian, gaussian naive bayesian and SVM. RRT with neural networks Baldoni et al. [175] utilize neural networks to guide the generation of path planning with the expectation of improving the generation of datasets and increasing the efficiency of path planning.The authors conclude that the path planning task for narrow passages and maze-like maps is very similar to the image segmentation task, so they choose U-net, which is widely used in the field of image segmentation, to train the path planning task, and the results show that the neural network-guided RRT significantly outperforms the traditional RRT in terms of path planning. In many cases, robots do not want their paths to be predictable, and to satisfy this need, Nichols et al. [176] exploit the feature of adversarial neural networks to improve network generalization and propose adversarial RRT*, which is the addition of a deception cost term to RRT* through the use of recurrent neural networks (RNN).The results show that the paths generated by RRT* containing RNN reduce the observer accuracy (47 % for RRT* and 19 % for adversarial RRT*), increase the path length by 29 % and the entropy by 22 % compared to the optimal paths, suggesting that the paths generated by adversarial RRT* are more difficult to predict. Considering that RRT* fails to generate optimal paths due to the inability to know the features of the environment in advance, Ma et al. [177] propose a supervised neural network based method to learn scene features.In terms of network design, considering that conditional generative adversarial networks (CGANs) have the property of improving the performance of both adversarial and collaborative work, they have more potential for algorithmic performance metrics improvement than traditional supervised neural networks.The authors fuse CGAN into RRT*, and the results show that CGAN-RRT* outperforms Neural RRT* [178] on the same training set. In highly dynamic environments where efficient embedded implementation of algorithms is critical due to limited resources of the on-board microcontroller, Chaulwar et al. [179] propose a hybrid augmented CL-RRT including strategies such as introducing iterative ConvNet feature generation, input iterative convolution, compression of fully-connected layers in sparse columns, and storing only the states with the lowest severity of injuries for rapid generation of safety trajectories in critical traffic scenarios and deploy it on a TMS570LS20216 microcontroller. RRT with reinforcement learning To address the non-homotopic path problem that leads to significant differences between the demonstration path and the generated path (the generated path may be shorter, but do not conform to human social relations), Ding et al. [180] suggest combining a non-isotropic path penalty strategy with RRT inverse reinforcement learning, where the non-isotropic features are extracted as a penalization term in the objective evaluation metrics, and the subjects are allowed to perform a Turing test as a real moving obstacle on the generated paths in the subjective evaluation metrics. For multi-robot scheduling problems, such as the multi-ship aircraft unmanned path optimization problem for ship deck scheduling efficiency, Shang et al. 
[181] point out that reinforcement learning is used to generate paths for multiple shipborne aircraft, and the results show that reinforcement-learning-based path planning algorithms are superior to traditional path planning algorithms in response time, scheduling completion rate and average path length.

For path planning in hostile environments, classical path planning methods fail when the risk characteristics cannot be labeled accurately by hand. To address this, Guo et al. [182] reduce the risk of the generated paths with a data-driven approach that constrains the growth direction of RRT*; the authors also note that multi-UAV risk aversion and a better balance between unbiased and biased sampling deserve further attention.

In safety-related domains such as driverless driving, it is extremely difficult to design a cost function and assign appropriate weights that account for self-interest, ethics, laws and field-of-view limitations, so using human driving experience data and reinforcement learning to generate local goals and semantic speeds is also an important research topic. Yu et al. [183] develop a framework for self-driving cars by combining deep learning with RRT; specifically, they extend deep neural networks (DNNs) to environments with multiple traffic participants and accelerate the training process using double deep Q-networks (DDQNs) and prioritized experience replay (PER). Reinforcement learning is likewise needed to guide path planning in related problems involving the identification of good candidate states and accurate steering-angle calculation [184], the design of bevel-tipped needle paths for surgical robots [185], and assembly tasks with human-robot collaboration [186].

Advantages of the RRT-based improvement strategies

3.5.1.
Branching strategy improvement for large-scale scenario search The reason why traditional RRT is difficult to be applied in large-scale scenarios is that RRT tends to get stuck under certain obstacles, such as maze-like obstacles and narrow passages, resulting in the failure to obtain a feasible solution, as a contrast, branching strategy improvement plays an important role in the application of RRT in large-scale scenarios.For example, RRT-Rope is designed for UAV exploring large-scale environments such as underground mining stopes.The algorithm is validated in real time on an HP Z440 workstation equipped with 12 Intel Xeon processors for a long tunnel over 20 m, long and a sloped planar space over 30 m, and an indoor environment containing columns and doors over 40 m, with path generation taking 0.25 s, 0.55 s, and 0.45 s, respectively [46].Studies that are relevant to branching strategy improvement and also claim to be large-scale scenarios include 131.4 s to explore an 8 m*8 m map [187], 56.75 s to explore 143.13 square meters of 182-square-meter indoor maze map with 81.46 % precision and 92.89 % accuracy [188], 1758.8 s to explore 30 m*30 m map [189], 555.6 s to explore 1000 m*1000 m with 3D terrain map [64], 10 s to explore 467*785 pixel maps of complex ocean environment with multiple vortices [136], 12 machine cycles for power inspection [165], 53.1 s to explore 20*20 grid maps [190], 6.654 s, 8.8845 s, 6.654 s, 16.1148 s, 7.7544 s, 6.0529 s to explore the dataset of Chem97ZtZ, gemat12, bcsstk33, kron_g500-logn162, CoAuthorsCiteseer [191].The reason for the large gap in time cost is that some results only consider the time required for path generation, while others consider the time required for the robot to explore the map, and the above results demonstrate that branching strategy improvements can handle the challenges of large-scale environments. Sampling strategy improvement for real-time performance The improvement to branching strategy, sampling strategy, and post-processing increase the complexity of the algorithm, but due to the increased computational power of the hardware, these improved strategies still result in a fast solution while substantially increasing the quality of the paths.The real-time performance of the improved RRT in a static scenario is shown in 3.4.1,and even for dynamic scenarios, the improved algorithm still shows its superior real-time performance.For example, STL-RT-RRT* [192] and RT-RRT* [193] can react to moving obstacles in real scenarios (1.1 m/s for moving obstacles and 0.55 m/s for the robot).STL-RT-RRT* even achieves 100 % obstacle avoidance success rate in 1000 trials, with a failure chance of less than 0.1 %.In Ref. [151], the time-based sampling process performs well in radar and missile tracking avoidance (92 % success rate).In Ref. [192], a car-like robot on a miniature circular runway at a speed of 0.17 m/s successfully avoid the moving obstacles across the runway.In simple dynamic scenarios, the average dynamic obstacle avoidance success rate of the robotic arm is 100 % and the average path generation time is less than 0.1s [161].Minkyu et al. [156] conduct UAV path planning experiments in which UAVs can generate an average of 1262 potential paths in a single obstacle environment and 770 potential paths in a multi-obstacle environment within 0.1s.In Ref. 
[101], when the distance between the quadrotor and the mobile platform is 15 m, the average computation times for path search, corridor generation, and trajectory generation are 3.7 ms, 4.9 ms, and 20.1 ms, respectively, even if the environment is filled with various types of dynamic obstacles.In Ref. [194], the robot with a speed of 4 m/s explore a dynamic scene of 97 m*142 m in less than 70s, and the path generation time is almost negligible compared to the path tracking time. Post-processing in uncertain environments One type of uncertain environment is the dynamics of the scene, Alexis et al. [192] introduce a signal temporal logic (STL) for real-time dynamic obstacle avoidance, which has the potential for dynamic obstacle avoidance optimization as it expresses through quantitative semantics that a robot should always keep a safe distance from a human or move slowly in narrow passages.The authors compare the performance of STL-RT-RRT* and RT-RRT* [193] and show the better performance of STL-RT-RRT* in terms of success rate in dynamic obstacle avoidance (no collisions for STL-RT-RRT* and 513 collisions for RT-RRT* in 1000 trials), the number of stops affected by dynamic obstacle avoidance (4 times for STL-RT-RRT* and 286 times for RT-RRT* in 1000 trials), and safe distance from dynamic obstacles (e.g., 1.2 m radius from human).Aiming at the targeted dynamic threat (radar or missile tracking) and random dynamic threat (tracking moment and tracking speed) that UAV may face, Guo et al. [151] propose a time-based sampling process for continuous change process of dynamic obstacles, fuse the APF structure for potential collision process, and introduce the cost function consisting of the true distance cost and the estimated distance cost to construct the heuristic path-finding process, and the comparative experiments verify the advantages of this algorithm in terms of the navigation time, path length, and the success rate of generated paths.Yu et al. [195] construct a miniature circular runway as an autopilot environment for a car-like robot and set moving obstacles across the runway as dynamic obstacles, and the algorithm can plan the trajectory points over time in advance according to the trajectories of the moving obstacles and smooth the paths under the premise of satisfying the curvature constraints of the car-like robot, tracking the trajectories at a speed of 0.17 m/s and avoiding the obstacles successfully.For the robotic arm path planning problem in a dynamic scene, Yuan et al. [161] compare D-RRT and DBG-RRT, and both algorithms have good real-time performance and high success rate of obstacle avoidance under three different maps.Under map 1, the average time and the average success rate for generating paths are 0.064s and 97 % for D-RRT and 0.017s and 100 % for DBG-RRT.Under map 2, the average time and the average success rate for generating paths are 0.083s and 98 % for D-RRT and 0.022s and 100 % for DBG-RRT.Under map 3, the average time and the average success rate for generating paths are 0.204s and 91 % for D-RRT and 0.021s and 100 % for DBG-RRT.Shubhi et al. 
[163] investigate the rapidity of path generation and the stability of path tracking for a rover on 3D terrain, and several experiments show that the rewiring process for dynamic obstacle avoidance takes less than 3 ms. Another interesting example is interaction with humans. Considering two types of human behavior when encountering obstacles (conservative walkers and aggressive walkers), the authors of Ref. [196] design two types of moving obstacles to simulate conservative and aggressive behavior: when the moving obstacle is perceived as a conservative walker, the robot replans its trajectory to go around the obstacle in front of it, and when the moving obstacle is perceived as an aggressive walker, the robot replans a trajectory that bypasses the obstacle from behind. Another type of uncertain environment is the unknown environment, which mainly involves exploration tasks, as detailed in section 4.2.5. The above literature demonstrates that improved RRT has been widely used in both types of uncertain environments with good results.

Model-driven RRT for path quality improvement

Model-driven RRT falls broadly into two categories, one in which prior data already exist (supervised learning) and one in which lessons are learned over repeated exploration (reinforcement learning); both can improve the quality of path generation, as summarized below. (1) Considering the irregular obstacles of real scenarios, some supervised learning strategies model the obstacles in the environment to improve path accuracy and enhance the applicability of the algorithm in complex scenes [19,169,170,173]. (2) Supervised learning algorithms improve the search order of paths and thus the quality of local paths [172]. (3) Under supervised learning, the collision-detection queries in RRT need not be balanced, i.e., regions with sparse obstacles can receive fewer collision-detection queries, which further improves path generation efficiency [171]. (4) Hyper-parameter selection is difficult for any algorithm, whereas supervised algorithms can use historical data to select more appropriate hyper-parameters and thereby optimize path quality [174]. (5) Where a problem in path planning closely resembles one in the field of artificial intelligence, methods from that field can be used directly in path planning with good results; for example, in Ref.
[175], the authors fuse U-Net into RRT because path planning on maze-like maps resembles high-precision image segmentation. (6) An interesting line of work is generating unpredictable paths to ensure privacy, where supervised adversarial neural networks show their strengths [176]. (7) Supervised networks that enable more accurate environment perception can help the asymptotically optimal RRT* actually reach optimality [177,178]. (8) For some optimal paths that should be consistent with human cognition, it is difficult to design suitable cost functions; in such cases it is necessary to use reinforcement learning and social rules for path optimization, for example by computing non-homotopic features (the area between the demonstration paths and the generated paths) and by running a questionnaire survey of subjects (a Turing test) to evaluate how socially acceptable the generated paths are [180]. (9) Reinforcement learning primarily addresses problems caused by the inability to accurately model or evaluate, including the inability to estimate risk characteristics [182], the identification of good candidate states and the accurate calculation of steering angles [184], and the various factors introduced by human-robot interaction [183,185,186].
Recent advances in the application of RRT to robotics
According to their application scenarios, the International Federation of Robotics (IFR) classifies robots into industrial robots and service robots, and this part summarizes the latest progress of RRT on the various types of robots.
Industrial robots
Industrial robots are automatically controlled, reprogrammable, multifunctional, multi-degree-of-freedom robots, including handling/loading and unloading robots, welding robots, spraying robots, machining robots, assembly robots, etc.; this section reviews recent advances in RRT for these robots.
Welding robots
Wang et al. [52] design an adaptive extended bidirectional RRT* algorithm for path planning in complex environments with concave and convex surfaces, narrow passages, and multiple obstacles. The authors not only prove the probabilistic completeness and asymptotic optimality of the proposed algorithm, but also show that its time and space complexity are the same as those of RRT and RRT*. Although extensive simulation experiments verify that the proposed algorithm outperforms many improved RRT* algorithms, the field test is time-consuming because collision detection is needed for the six links of the robotic arm and the welding gun. The results show that with a search step size of 10 mm, for path lengths of 831.5 mm, 353.7 mm, 554.0 mm, and 543.7 mm, the running times are 1109 s, 307 s, 210.8 s, and 447.5 s. Similar tasks, strategies, and conclusions are presented in Ref. [128], where an improved RRT* is used for a gantry welding robot system. Considering three conflicting objectives of arc welding robots, namely minimum transfer path length, energy consumption, and joint smoothness, Zhou et al. [197] calculate the path length using Euclidean distance, derive the energy consumption and joint smoothness expressions from the kinematics of the robotic arm, and assign weights of 40 %, 30 %, and 30 % to the three objectives, respectively. To accomplish the multi-objective search task, the authors also propose a decomposition-based multi-objective evolutionary algorithm with a hybrid environment selection scheme, and set the value ranges of the optimization parameters according to the application scenario. The authors also mention that future work needs to focus on real-time path planning for welding robots. Similar tasks, strategies, and conclusions are presented in Ref. [198], where the goal is to minimize path length and energy consumption.
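As a rough illustration of this kind of weighted scalarization of path length, energy consumption, and joint smoothness, the sketch below combines three normalized objectives with the 40/30/30 weighting; the function names, the normalization step, and the example numbers are assumptions for illustration, not the exact formulation of [197].

```python
import numpy as np

def path_length(waypoints):
    """Sum of Euclidean distances between consecutive joint-space waypoints."""
    diffs = np.diff(waypoints, axis=0)
    return float(np.sum(np.linalg.norm(diffs, axis=1)))

def joint_smoothness(waypoints):
    """Sum of squared second differences as a simple smoothness proxy."""
    if len(waypoints) < 3:
        return 0.0
    return float(np.sum(np.diff(waypoints, n=2, axis=0) ** 2))

def weighted_cost(waypoints, energy, w=(0.4, 0.3, 0.3), scales=(1.0, 1.0, 1.0)):
    """Scalarize path length, energy, and smoothness into one cost.
    `energy` would come from the arm's kinematic model; here it is simply
    passed in. `scales` normalizes the objectives so the weights are
    comparable (an assumption, not specified in the cited work)."""
    objectives = (path_length(waypoints) / scales[0],
                  energy / scales[1],
                  joint_smoothness(waypoints) / scales[2])
    return sum(wi * oi for wi, oi in zip(w, objectives))

# Example: a short 3-joint trajectory with a dummy energy estimate.
traj = np.array([[0.0, 0.0, 0.0], [0.2, 0.1, 0.0], [0.4, 0.1, 0.1], [0.5, 0.2, 0.1]])
print(weighted_cost(traj, energy=2.5))
```

The relative scales of the three terms matter as much as the weights themselves, which is one reason such weights usually have to be re-chosen per scenario.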
Assembly robots
Shu et al. [199] present an improved RRT* for the assembly of lightweight structures for COVID-19 healthcare facilities, which excels in collision avoidance, trajectory smoothness, trajectory length, and execution time. The authors note that the total assembly time for the assembly robots designed in their paper is approximately 28 min, whereas professionals recommend a manual assembly time of no less than 45 min, making the approach competitive. However, the authors also note that the assembly process is not yet fully automated due to the lack of flat-packed housing components, and suggest that using aerial operation robots to assist in tightening the bolted joints could further reduce installation time and fully automate the assembly. Chen et al. [30] propose a dual-robot collaborative assembly path planning algorithm based on RRT-Connect and design a collaborative wire removal task for the two robots. The trajectories generated in field tests are consistent with the simulation and the wire removal task is executed successfully, indicating that the designed trajectory planning method and the dual-robot collaborative system are effective. For the problem of assembling parts with different geometries, Ahmad et al. [200] introduce constraints on the initial and final grasping poses of the parts and successfully accomplish the assembly task in a disturbance-free environment; for dynamic environments, the authors intend to install proximity sensors on the robotic arm in the future to detect surrounding obstacles in real time and generate dynamic obstacle avoidance paths.
Service robots
Service robots are designed to perform tasks that benefit humans, including commercial and domestic robots in accommodation, catering, finance, cleaning, logistics, education, culture, and entertainment, as well as robots that assist or replace a human in performing tasks, such as search and rescue robots, surgical robots, underwater robots, and free-floating space robots; this section reviews recent advances in RRT for these robots.
Search and rescue robots
For exploration missions, commonly used methods include frontier-based methods and sampling-based methods. Frontier-based methods are similar to search-based strategies in the path planning domain, but different definitions are used in the field of path planning and in the field of simultaneous localization and mapping (SLAM), since exploration missions also involve mapping the unknown regions. Sampling-based methods in SLAM are likewise similar to sampling-based strategies in path planning, and RRT-based algorithms are representative of this type of method for exploration tasks. The advantages and disadvantages of the two types of algorithms are shown in Table 2. The two representative families have their own strengths; therefore, their fusion is a research direction in the field of exploration. To address the problem of switching exploration targets back and forth, which exists in both methods, Bi et al. [201] design a novel utility function to evaluate candidate targets and a target reselection mechanism to assign exploration targets. Simulation and field tests show that, compared with traditional RRT-based multi-robot exploration, the proposed framework has lower time cost and path cost at a 99 % exploration coverage rate. The authors also draw and discuss several conclusions: (1) When the number of robots is increased from 2 to 4, the exploration efficiency increases almost linearly with the number of robots, whereas from 4 to 7 the gain is not significant, so the optimal number of robots for a given scene can be determined through repeated experiments. The reason that adding robots does not keep increasing the exploration efficiency is that the starting position of each robot, the complexity and size of the scene, and the detection distance of the laser scanner all affect the exploration efficiency. (2) When the LiDAR detection distance is increased from 4 m to 16 m, the exploration efficiency increases, while increasing it from 16 m to 50 m slightly decreases the efficiency, so the detection distance should be set appropriately according to the size of the environment; in addition, an overly large LiDAR range increases the computational cost. (3) Compared with the frontier-based method, the proposed method based on centroids of unknown connected regions is superior in terms of computational cost, detection robustness, and decision-making performance in unknown environments. (4) The paper adopts a centralized architecture, and when the number of robots is large, the amount of data in the central node increases dramatically. (5) Data transmission can be optimized in the future to reduce the communication bandwidth requirement. For underwater search and rescue missions, Wang et al. [104] propose the smooth-RRT algorithm and use the 3D point cloud of the underwater scene captured by sonar as the environment data to verify in simulation the algorithm's contribution to initial solution quality and convergence speed. Similar work is in Ref.
[202], which introduces a KD-tree based RRT-Connect algorithm and verifies that it can be used for fire guidance on a rescue robot equipped with LiDAR, an IMU, and a camera. The above literature only tests the performance of one robot in a single scenario containing static obstacles and only verifies the feasibility of the algorithms, and therefore cannot evaluate the level of intelligence of rescue robots in real scenarios. For ground search and rescue missions, Noé et al. [203] investigate a ground robot navigation and exploration system for complex indoor 3D environments such as mines, solving or optimizing the problems of scene creation, path planning, path tracking, and region importance assignment, and summarize the following conclusions: (1) After filtering the local point cloud to a fixed size and removing walls and ceiling, the improved RRT algorithm is prevented from sampling and evaluating in invalid regions, so the efficiency of path generation improves significantly. (2) To balance the quality and efficiency of the generated paths and to avoid inaccurate scene descriptions, the paper investigates the appropriate point cloud downsampling resolution, which is taken to be 0.05 m. (3) RRT, rather than a grid or voxel grid, is used to find the frontiers by clustering the leaf nodes, but the clustering radius and clustering density still need to be selected manually. (4) The coefficients of the cost function of the proposed improved RRT algorithm need to be manually adjusted to find the optimal path, taking into account different terrain properties such as inclination and roughness. (5) During path tracking, the maximum linear and angular velocities of the vehicle need to be manually adjusted for optimal performance. With the manually adjusted parameters, a 4 m wide tunnel is explored in 20 min and a 17 m by 17 m rugged terrain with different ramps and obstacles is explored in 25 min, in both cases covering more than 90 % of the area. (6) For the rescue mission, the frontier threshold determines the level of exploration: an excessively low frontier threshold leaves many areas potentially unexplored, while too high a threshold causes many areas to be explored exhaustively, which can lead to relatively unimportant regions being over-explored and to a significant increase in rescue time, so an adaptive frontier threshold selection strategy is particularly important. (7) To avoid repeated exploration of the same area, an evaluation indicator for visited-area assessment is introduced, but no solution is given. Gui et al.
[204] propose a decentralized multi-UAV cooperative exploration method that considers both the position and the current task of each UAV so that tasks can be reassigned in real time during exploration. Each UAV is equipped with a depth camera for localized scene sensing in a dynamically partitioned area, and an improved RRT is used to explore the unknown environment. The paper draws the following conclusions: (1) The average time for three UAVs to explore a 10 m*8 m*3 m area with obstacles is 209.4 s, with no collisions and no prolonged inability to complete the task; therefore, although the authors do not address hardware limitations, network bandwidth, or flight trajectory control, the results are stable and effective. (2) Although multiple UAVs can accomplish the missions in a limited time, some areas remain undetected due to the inherent limitations of the depth camera; the authors mention the need for more powerful sensors (e.g., 3D LiDAR) to alleviate this problem, but this approach still requires further consideration of the applicability of the experimental system to different environments. (3) In the early stage of exploration, UAVs with sampling-based strategies can find a sufficient number of mission points in a short time, but as the unexplored area shrinks, the time to compute a target grows and the efficiency converges to a lower level. (4) Uncertainty in sampling leads to irrational partitioning, preventing different trials with the same parameter settings from providing a stable exploration process. Communication constraints are an important problem faced by robots. To address signal interference for disaster relief UAVs caused by low-altitude clouds and smoke, Diao et al. [70] regard clouds as dynamic obstacles and set the direction and speed of their movement. At the algorithmic level, the authors use adaptive step and angular incremental sampling to limit the sampling range, which reduces the curvature of the generated paths and accelerates convergence. None of the above literature considers the impact of communication constraints on rescue robots, or it simply treats communication constraints as obstacles [70]; but since robots can only communicate with each other within line of sight, the safety hazards due to communication constraints in decentralized collaborative exploration tasks should be considered. To address this issue, Victoria et al. [205] propose decentralized path planning for multi-robot systems with line-of-sight constrained communication and make the following assumptions: (1) known static scenes; (2) robots can brake instantaneously; (3) robots can communicate with other robots through a multi-hop communication network; (4) communication between robots is lossless and without delay; (5) the initial positions of all robots are sufficiently safe; and (6) robots start from a set of waypoints that satisfy all safety constraints. Some of these assumptions are extremely idealized; therefore, although the results validate the effectiveness of the RRT-based method, it is far from meeting the requirements of practical applications.
Surgical robots
Zhang et al.
[206] design flexible needles that can avoid blood vessels and organs and reach diseased tissue. Because the non-holonomic characteristics of the needle tip can cause dynamic behavior of the flexible needle and tissue deformation during insertion, the accessibility and safety of the needle state need to be considered at the path planning stage. The paper draws the following conclusions: (1) The proposed algorithm can adapt some of its parameters to the properties of different tissues during needle tip insertion, which improves the safety of surgical procedures. (2) The proposed algorithm includes the potential field of the surrounding obstacles in the path cost and simulates the effect of the needle tip insertion process on the local motion of the tissue. (3) The improved RRT* algorithm generates smooth and safe trajectories in a layered fabric environment, verifying the first two conclusions. For surgical robot path planning, the modeling of tip motion constraints and human tissue properties therefore determines the quality of the generated paths. Similar design and modeling constraints appear in the Informed-RRT* based cranial puncture robot [207] and the RRT-Connect based surgical suture robot [208]. Chen et al. [209] propose a shape-state cross-entropy based RRT* planner for surgical experiments; for the continuum robot, this algorithm has the following advantages: (1) The high-dimensional configuration space of the continuum robot is considered, and an arc coordinate domain is designed to perform both obstacle-free and approximate follow-the-leader motions. (2) Kinematic constraints are considered: the generated paths incorporate kinematic and shape constraints, including a constraint on the maximum extension direction angle of the robot tip, to ensure traceability of the trajectories. (3) Stable numerical solutions are ensured by rewiring paths via damped least squares to minimize robot configuration changes and trajectory costs, and by solving the singularity problem in the pseudo-inverse kinematics solutions (a minimal damped-least-squares sketch is given at the end of this subsection). The authors segment the anatomical environment from medical images of real patients and generate point cloud maps of this environment in MATLAB. Simulation results show that the proposed algorithm performs well in follow-the-leader error and success rate, provided collisions with intracranial blood vessels (arteries and veins) are avoided. Song et al. [210] also design a master-slave control framework for minimally invasive surgery, in which the motion of the master manipulator with respect to the monitor corresponds to the motion of the surgical instrument with respect to the endoscope, and demonstrate the removal of a porcine gallbladder. Fan et al. [211] design a micro-robot for working in vascular environments. Although the simulation results demonstrate safety and feasibility, as well as the ability to automatically avoid static and dynamic obstacles in the simulated vascular environments, human vascular and tissue environments are extremely complex and very difficult to model, so implementing micro-robots that work in real vascular environments remains a great challenge. Similar work can be found in Ref. [212]. Surgical robots thus have very different path planning strategies due to their specific application scenarios.
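Regarding the damped least squares step mentioned for [209] above, the generic update replaces the plain pseudo-inverse with a damped one so that joint updates stay bounded near singular configurations. The sketch below shows this standard update; the Jacobian values and damping factor are illustrative assumptions, not numbers taken from [209].

```python
import numpy as np

def dls_joint_step(jacobian, task_error, damping=0.05):
    """One damped-least-squares step: dq = J^T (J J^T + lambda^2 I)^{-1} e.
    The damping term keeps the solution numerically stable near singular
    configurations, where the plain pseudo-inverse would blow up."""
    J = np.asarray(jacobian, dtype=float)
    e = np.asarray(task_error, dtype=float)
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + (damping ** 2) * np.eye(m), e)

# Example: a 2-link planar arm Jacobian near a stretched-out (near-singular) pose.
J = np.array([[-0.001, -0.001],
              [ 2.000,  1.000]])
dq = dls_joint_step(J, task_error=[0.01, 0.0])
print(dq)
```

With damping set to zero the step reduces to the ordinary pseudo-inverse solution, which is exactly the case the damping is meant to protect against.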
Free-floating space robots
Free-floating space robots (FFSR), consisting of a spacecraft and robotic arms, are commonly used for on-orbit servicing, and motion planning is their most essential task. Tomasz et al. [213,214] investigate the trajectory tracking performance of a space station robotic arm and find that the actual values differ from the expected values; although the deviation does not affect the utility of the path planning, the authors analyze the phenomenon: (1) The planar air-bearing microgravity simulator is subject to various perturbations: unevenness of the granite table may cause the spacecraft to slide to one side, the robotic arm is controlled in joint space, and the control system cannot compensate for positional and directional errors of the gripper. (2) There are significant uncertainties in the assumed parameters of the spacecraft and the robotic arm; attitude deviations are most likely due to imperfect knowledge of the spacecraft inertia. For the latter issue, the authors also perform Monte Carlo simulations, which show that the results remain usable even with a large range of uncertainty (±10 %), further suggesting that the errors caused by parameter uncertainty are negligible (a schematic sketch of this kind of Monte Carlo check is given at the end of this subsection). The authors also discuss the problems associated with applying RRT to FFSR, summarized below: (1) A spline-based planner and B-RRT* are compared: the paths generated by RRT have more complex shapes, larger errors, discontinuities, and unstable results, but in complex scenarios filled with obstacles the spline planner fails to generate feasible paths, so both algorithms have advantages and disadvantages. (2) Only static obstacles and target objects are considered; however, Earth-based measurements of potential targets for active debris removal missions show that some of these targets have non-zero angular velocities, so the end effector linear and angular velocities should be adjustable during trajectory tracking. (3) The generated optimal paths do not take into account self-collisions between the robotic arm links; for specific kinematic structures, such as the WMS lemur robotic arm, there is no risk of collision between the links, but this risk should still be avoided for other arms, even though doing so could further increase path costs. (4) The experiments are carried out in open-loop control mode, so it is impossible to correct the paths, especially when the navigation state is uncertain; for example, unexpected changes in attitude during a grappling maneuver may disrupt the communication link or interrupt power generation. (5) The computational cost is not taken into account, which makes practical application difficult. To address point (3), Yu et al. [215] separate inertial spaces when performing the spline-RRT* algorithm and apply it to a dual-arm FFSR. Some studies do not analyze in depth the drawbacks of these algorithms in applications, but nevertheless validate the applicability of various improved versions of RRT to FFSR, such as RRT for a 12+2n dimensional satellite-manipulator system [216], B-RRT for a three-link FFSR [217], RRT without inverse kinematics for FFSR [218], B-RRT for a 7-DOF FFSR [219], improved adaptive RRT for close-range FFSR with continuous thrust [220], and RRT-Connect for a dual-arm FFSR [221]. These studies corroborate that RRT-based planners have potential for further development in the field of FFSR.
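As a rough schematic of the kind of Monte Carlo robustness check described in [213,214], the sketch below perturbs nominal parameters by up to ±10 % and records the spread of a stand-in error metric. The `evaluate_tracking_error` function is only a placeholder for a full planner-plus-simulator run, and the nominal values are invented for illustration.

```python
import numpy as np

def evaluate_tracking_error(params, nominal):
    """Placeholder for a full planner-plus-simulator run; here the 'error'
    simply grows with the relative deviation from the nominal parameters so
    that the sketch stays self-contained."""
    return float(np.linalg.norm(params / nominal - 1.0))

rng = np.random.default_rng(0)
nominal = np.array([12.0, 3.5, 0.8])      # illustrative inertia-like parameters
errors = []
for _ in range(1000):
    # Perturb each nominal parameter independently by up to +/-10 %.
    perturbed = nominal * (1.0 + rng.uniform(-0.1, 0.1, size=nominal.shape))
    errors.append(evaluate_tracking_error(perturbed, nominal))
print(np.mean(errors), np.max(errors))
```

If the spread of the resulting metric stays within tolerance, the conclusion drawn in [213,214] — that parameter uncertainty of this size is not the dominant error source — becomes plausible.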
Mining robots
Compared with ordinary robots, mining robots, as a kind of underground robot, work in unstructured terrain with confined spaces and GPS denial; in addition, explosion-proofing, dust-proofing, moisture-proofing, water-proofing, and corrosion-proofing are all challenging. In [46], RRT-Rope builds on RRT-Connect and is designed to find suboptimal solutions in a short time. Simulation results show that the method works well in large-scale uncluttered 3D environments such as underground mining stopes. The algorithm is also insensitive to the number of iterations and to the environment thanks to the shortened-rope procedure, which further improves stability and efficiency, although the authors point out that future work should introduce dynamic constraints to handle more complex environments. The authors also note that RRT-Rope falls into local minima around pillar-like structural obstacles, but since underground mining stopes do not involve such obstacles, RRT-Rope remains well suited to underground mining scenarios. Wang et al. [167] attempt to solve the path planning problem in coal mine roadways under both scene constraints and articulated vehicle model constraints, but the following problems remain: (1) despite parallel computing, the path generation time reaches 86.12 s, which still fails to satisfy the application requirements; and (2) joint debugging with the real vehicle has not yet been carried out, so the gap between simulation performance and field performance is not yet understood. Addressing the fact that geological structure, ore characteristics, rock characteristics, and mine planning parameters all affect ore transportation, Shao et al. [222] first use Dubins paths and the hierarchical density-based spatial clustering of applications with noise algorithm to build a constraint model containing the above conditions, and then combine 3D RRT with Dubins paths for obstacle avoidance. The proposed framework has the following advantages: (1) nodes of varying importance are identified based on ore and rock characteristics to guide the path planner; (2) safer and less costly paths are selected for underground haulage based on geological structure; (3) a user-intervention step is added to improve path connections in some cases after Dubins path smoothing.
Inspection robots
(1) Ground inspection robots
The five-axis coordinate measuring machine is widely used for acquiring data from machined parts, but how to rationally plan inspection paths over multiple features at different locations is an important issue in the automatic inspection of machined parts. Unlike other path planning tasks, machining and manufacturing require that certain critical features be measured multiple times to ensure product accuracy, and the critical features to be inspected differ between processes. To address these problems, one strategy is to reuse the initial paths, but this generates a large number of redundant paths; another strategy is to re-plan the paths, which reduces the redundancy but increases the path search time. Zhao et al.
[224] formulate a path reuse strategy that takes into account both the number of paths and the direction of the probing measurements: first, measurement features are classified based on the feasible cone of probing directions and the accessibility of the MPs to minimize the probe rotation time; second, an RRT with multiple root nodes is proposed to plan the local paths for path reuse; then, intra- and inter-group planning paths are generated with an enhanced genetic algorithm; and finally, the effectiveness of the method is verified in a simulated cylinder-body inspection case. Addressing the problem that long-term operation of gas-insulated switchgear leads to the accumulation of foreign matter in the cavity, which can cause safety accidents, Zhong et al. [225] propose a beetle-antennae search-guided RRT* for a gas-insulated switchgear inspection and maintenance robot. Since the map for path planning is a cylindrical surface, the authors introduce the idea of simulating the foraging behavior of beetles, though still only at the level of algorithm validation. Huang et al. [226] investigate a multi-objective path planning problem in inspection and solve a planning problem with 25 destinations on an open street map of Chicago containing 866,089 nodes and 1,038,414 edges in only 0.44 s. However, realistic inspection scenarios pose major challenges: first, realistic inspection scenarios are three-dimensional, which means that obstacles cannot be treated as points as in a 2D plane; second, the maps are not known a priori and the quality of map generation is limited by sensor accuracy, which further exacerbates the difficulty of path planning. Nevertheless, the conclusions of this paper provide good evidence that improved RRT is capable of excellent performance at this stage. With the rapidly growing demand for clean energy, the number of solar power plants is increasing and the automation of solar power plant fault inspection is booming; for the path planning problem in the long, narrow passages of solar power plants, Wang et al. [227] improve the RRT* algorithm, and the authors point out two limitations: the designed parameters are not applicable to other environments, and the approach fails to solve real-time dynamic obstacle avoidance. In [228], the authors design a snake robot and investigate the effect of dynamic obstacle velocity on its path planning. The results show that the success rate of dynamic path planning gradually decreases as the velocity of the dynamic obstacles increases, although the path planning time remains almost the same. This reveals that highly dynamic environments substantially increase the difficulty of the path planning task.
(2) Aerial inspection robots
To address the difficulty of observing faults in high-altitude equipment in substations, Yang et al. [229] construct an architecture involving sensor technology, edge computing, and UAV substation inspection, and design a path planner that takes UAV physical constraints, sensor operation constraints, and inspection task constraints as constraints, and inspection distance, time, and energy consumption as evaluation metrics. Zhao et al.
[165] also investigate multi-objective cooperative path planning for electric inspection robots. Since only simulation experiments are carried out, the effectiveness of the algorithm in real scenarios is difficult to evaluate, and the authors mention that future research should focus on substation inspection in complex environments such as plateaus, where the problem remains challenging. Fang et al. [230] do similar work but further explore the effectiveness of different numbers of aircraft performing inspection tasks at different intervals. Similar work and findings include UAV trajectory planning for track inspection of rail-mounted gantry cranes in container terminal yards in port environments [231,232]. To address the subway tunnel environment with poor light and no GPS signal, Zhou et al. [233] equip a quadcopter with an MB1212 sonar and a HOKUYO UTM-30LX LiDAR and validate an RRT with a regression filtering mechanism and trunk constraint. Again, the authors only use the quadcopter to obtain map information of the tunnels and verify the performance of the proposed algorithm in simulation software, so many issues remain unresolved, such as the stability, accuracy, and real-time performance of the scene modeling, the percentage of area actually inspected, and communication constraints. The olive fly is a pest that affects the quality of olives in the Mediterranean region, and one countermeasure is the use of insect traps. To inspect insect traps in olive groves, Gabriel et al. [234] compare Dijkstra, a genetic algorithm, and RRT + DQN in a simulation environment built with Gazebo: the Dijkstra algorithm, although it performs best in terms of time, generates paths with low accuracy (1 m error), while the genetic algorithm generates smooth paths but has the highest time cost. In contrast, RRT + DQN is better suited to this scenario. Unlike traditional simulation experiments, the authors create a simulation environment that attempts to match reality, including perturbed environments, non-random robot behavior, and stochastic functions that dynamically adjust the velocity of objects. However, DQN's performance in large-scale (>400 cubic meters) challenging environments is far from optimal, in which case the authors suggest countering large tasks by dividing them into multiple subtasks. Future directions mentioned by the authors include testing in real scenarios and fusing different types of sensors to obtain multi-source data. To improve the durability and accuracy of pipeline array inspection in chemical plants, Alejandro et al. [235] propose an aerial manipulation robot with a rolling base. In this system, a human operator determines the inspection point and manipulates the robot to carry out the pipeline inspection task under human-robot collaboration, and the RRT algorithm used to assist the operator in generating inspection paths is also validated in an outdoor scenario. In addition, considering the specifics of the pipeline inspection task, the authors design a slender robot using the RRT path planning algorithm to inspect narrow spaces such as pipelines under human-robot collaboration. For another robot specifically designed for pipeline inspection of gas-insulated switchgear, see Ref. [236]. For hazardous inspection tasks in open-air warehouse environments, Zhang et al.
[237] perform similar work exploiting the high maneuverability and aerial view of UAVs, and they note that countermeasures for dynamically changing environments, to improve UAV reaction time in emergencies, are a focus for future work.
(3) Underwater inspection robots
In subsea inspection tasks, traditional strategies use sensors to passively collect data; these paths can be optimized by taking the sensor characteristics into account, but they cannot predict the environmental conditions, so they do not guarantee data coverage across the entire domain of interest and may lead to repeated attempts. To address this problem, Leonardo et al. [78] build a forward-looking sonar seabed inspection framework, present an RRT-based sensor-driven receding horizon approach, and introduce an entropy-based evaluation metric for solution searching. Comparative results show the proposed algorithm achieves the same coverage rate at a lower path cost. The effectiveness of seabed inspection tasks is related to the quality of the data acquired by the sonar sensors, which in turn depends on the characteristics of the environment, the target, and the sensor, all of which are difficult to predict. To address this issue, Zacchini et al. [238] design a sensor-driven path planning method. The proposed framework completes the inspection path planning task for a 67 m*67 m*4 m seabed area in a few minutes and can cover more than 90 % of the area to be inspected, but with the following problems: (1) In some special environments, certain areas of interest require special attention, but the strategy of passively acquiring data through sensors is inefficient for such inspections. (2) The modeling quality of the experimental scenarios is closely related to the quality of the data, but for undetected regions the characteristics of obstacles or targets are difficult to predict, so the modeling quality is unstable and unpredictable. (3) High-precision sensors for environmental modeling produce large amounts of data and are accurate but time-consuming, while low-precision sensors can model the scene quickly but with low accuracy. To obtain cleaner, better quality seafood from distant waters, large net cages are used for seafood farming, and damaged cages lead to huge economic losses. Addressing this issue, Wu et al. [239] provide robust path tracking algorithms in which RRT is introduced to correct the trajectory when the underwater vehicle deviates from the desired path.
Other applications of human-robot interaction
4.3.1. Crowd avoidance
Inspired by the social force model (SFM), Henderson et al.
[240] introduce a social intention model into RRT to design paths capable of interacting with humans, and to clearly evaluate the performance of social-interaction-based paths, the authors introduce the social effort index (SEI) as a novel benchmark. The authors conduct experiments in three scenarios. In the oncoming traffic scenario, the robot must avoid collisions with oncoming crowds, and the results show that the median SEI of RRT-SMP is on average 86.3 % lower than that of a typical RRT-embedded MP (RRT-MP). At bi-directional intersections, the robot must avoid crowds in two-way traffic and at crosswalks, and the results show that the median SEI of RRT-MP increases significantly with increasing crowd density, whereas the median SEI of RRT-SMP does not, implying that RRT-SMP is more socially competent in chaotic environments. The SEI performance of RRT-SMP and RRT-MP when moving with the crowd is similar to the bi-directional intersection case, again showing that RRT-SMP is more socially competent in chaotic environments, but the conclusion is not verified in a real scenario.
Repetitive tasks requiring human cognitive assistance
Robots excel at simple repetitive tasks, whereas humans possess unique cognitive skills for handling a variety of tasks, so repetitive tasks in unstructured environments are best handled with human collaboration. In Ref. [241], Kelly et al. design a human-assisted RRT path planner and validate it on a Franka Emika Panda robotic arm; it copes with actuator saturation and limited joint ranges and avoids RRT's tendency to fall into locally optimal solutions. The authors summarize their conclusions as follows: (1) The planner takes 30-130 ms to plan paths at varying levels of obstacle complexity, so the human-robot collaborative system can perform path planning tasks in complex scenarios when the rate of change of dynamic obstacles in the scene is below 1 Hz. (2) The trajectory-based explicit reference governor, as a closed-loop feedback control scheme, has a maximum average computation time of 1 ms and can therefore be neglected relative to the path planner. (3) Planners with human assistance can in some cases avoid falling into local minima, compared with planners lacking human assistance. (4) The safety of the constrained planned path can only be guaranteed when the robot moves slowly, since a fast-moving robot cannot stop immediately. Human-assisted RRT for surgical robots [210] and autonomous driving [242], and customized plug-ins for collaborative autonomous path planning with multiple manipulators [243], are presented in other sections.
Human driving experience assisted vehicle interaction
Because interaction with other vehicles involves many decision-making behaviors, such as determining the safe distance, whether to overtake, and how the speed should change dynamically, vehicle interaction tasks assisted by human driving experience are also important. Chi et al.
[242] consider the interaction of an unmanned vehicle with other moving vehicles. The authors specify the desired speed together with a look-ahead distance (a safe distance that avoids emergency braking by vehicles ahead) for the unmanned vehicle, while the vehicle in front (viewed as a moving obstacle) moves slowly. The results show that the unmanned vehicle goes through several phases: in the first phase, the distance between the unmanned vehicle and the moving obstacle gradually decreases because the desired speed of the unmanned vehicle is greater than that of the moving obstacle; in the second phase, the path tracker jointly evaluates speed and safe distance and reduces the speed of the unmanned vehicle; in the third phase, when the environment-awareness sensors detect that there is no vehicle in the neighboring lane, the unmanned vehicle performs an overtaking maneuver; in the fourth phase, it changes back to its original lane and returns to the desired speed; and in the fifth phase, it arrives at the target position and stops.
Multi-robot cooperation
Multi-robot collaboration without human intervention is described in other sections, such as multi-robot systems with line-of-sight constrained communication [205], decentralized multi-UAV cooperative exploration [204], online cooperative path planning for multi-quadrotor maneuvering in unknown dynamic environments [147], formation shape generation for multiple UAVs [40], multi-unmanned-surface-vehicle path planning under spatially varying currents [127], and cooperative path planning for multiple shipboard aircraft [181]. However, in some tasks the lack of guidance from human experience reduces the reliability of path planning. For path planning in hostile environments, classical path planning methods fail when the risk characteristics cannot be accurately labeled manually; to address this, Guo et al. [182] reduce the risk of the generated paths with a data-driven approach that constrains the growth direction of RRT*. The authors also mention that multi-UAV risk aversion and the need to further balance unbiased and biased sampling deserve attention (a minimal sketch of such a bias probability is given at the end of this subsection). Ramasamy et al. [243] investigate a plug-and-play system for multi-robot systems that can be operated online during the manufacturing process and uses an improved RRT algorithm to automatically plan the paths of multi-manipulator collaboration. The results show that the plug-in is superior in terms of energy consumption, path quality, and path generation efficiency. The hyper-parameters involved in such customized plug-ins require large a priori datasets or manual guidance.
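The balance between unbiased and biased sampling mentioned for [182] is usually controlled by a single bias probability. The sketch below shows the standard goal-bias mixing rule; the names and values are illustrative and are not taken from [182].

```python
import numpy as np

def sample_state(rng, bounds, goal, goal_bias=0.1):
    """Mix uniform (unbiased) sampling with goal-directed (biased) sampling.
    With probability `goal_bias` the goal itself is returned; otherwise a
    uniform random state inside `bounds` is drawn."""
    if rng.random() < goal_bias:
        return np.asarray(goal, dtype=float)
    low, high = np.asarray(bounds, dtype=float)
    return rng.uniform(low, high)

rng = np.random.default_rng(42)
bounds = ([0.0, 0.0], [10.0, 10.0])
for _ in range(5):
    print(sample_state(rng, bounds, goal=[9.0, 9.0], goal_bias=0.2))
```

Raising the bias speeds up convergence toward the goal but increases the risk of getting trapped behind obstacles, which is exactly the trade-off the cited work points to.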
Challenges
This paper reviews the RRT improvement strategies of the last three years in terms of branching strategy, sampling strategy, post-processing, and model-driven RRT. From the above review, research on the RRT algorithm roughly follows two directions: theoretical research and application research. The former aims at optimization of, for example, path length, time complexity, and space complexity, while the latter focuses more on the various problems of real scenarios, such as model constraints and scene constraints. Theoretically, improved RRT algorithms are capable of real-time path planning for large-scale, complex scenes with multiple objectives; e.g., Ref. [226] plans paths with 25 objectives on a map containing more than 800,000 nodes and 1,000,000 edges in only 0.44 s. Maps with narrow passages are generally considered challenging for RRT; however, Ref. [227] uses an improved RRT* algorithm to plan paths on maps containing multiple narrow passages, and thanks to the efficient path generation of RRT-based algorithms, the search time is less than 1 s even when the algorithm runs more than 10,000 iterations, which is sufficient for the application scenario. Nevertheless, as the above literature shows, applying improved RRT-based algorithms is still challenging, and the main issues are discussed in the following sections.
Modeling of unknown environments
The need to model unknown scenes is largely due to GPS denial, in which case many researchers mount sensors on robots to build maps, such as sonar for seabed inspection [78,238] and tunnel inspection [233], LiDAR for indoor inspection [201], tunnel inspection [233], and fire guidance [202], and depth cameras for indoor scene modeling [204]. This environment exploration strategy has proven effective: for example, three UAVs each carrying a depth camera take 209.4 s to explore a 10 m*8 m*3 m room filled with various types of obstacles [204], while underwater robots carrying sonar sensors cover more than 90 % of a 67 m*67 m*4 m seafloor area in a few minutes [238]. However, problems remain, such as the difficulty of selecting regions of interest, unstable modeling quality, and the difficult trade-off between modeling accuracy and efficiency.
Limitation to highly dynamic environments
Although the RRT algorithm has been shown to have global convergence and asymptotic optimality [20], this holds for a globally static, known environment, which means that improved RRT algorithms may not cope with dynamically changing environments or partially (or globally) unknown scenes. Some works treat dynamically changing scenarios as a direction for future research, such as hazardous inspection tasks in open-air warehouse environments [237], solar power plant fault inspection [227], underground mining stopes [46], and assembling parts with different geometries [200]; other works do consider dynamic scene changes, such as a motion planning framework for quadrotors [101,147] and robots with nonlinear control-affine dynamics [160], but the performance of the algorithms is not validated in real scenarios. Yuan et al.
[161] conduct a robotic arm motion planning test to validate the dynamic obstacle avoidance performance of RRT in a real scenario, but the dynamic obstacles in the experiment are added suddenly by a human rather than being the dynamic obstacles we usually think of, whose positions change continuously over time, so the assumption is unrealistic. For similar work see Ref. [62], where the dynamic obstacles also appear abruptly. Such assumptions are not compatible with how moving obstacles behave in real scenarios. Strictly speaking, the dynamic obstacle avoidance problem for RRT-based path planning algorithms in real, highly dynamic environments remains unsolved, for reasons including the extremely fast accuracy degradation as obstacle speed increases [228], the extremely complex human vascular and tissue environment [211], and the safety issues encountered during overtaking [242].
Hyper-parameter settings
Because of the limitations of the algorithms themselves, many algorithm parameters need to be tuned manually to achieve good performance in different scenarios. For example, the hyper-parameters of RRTs differ between scenarios, the tuning process is unavoidable, and many studies rely on manual tuning, such as for the search step size [52]. In cost function design, because many tasks are evaluated at multiple levels, for example the arc welding path planning task is evaluated by minimum transfer path length, energy consumption, and joint smoothness [197], which constrain each other, the weights of these metrics also have to be set with the help of human experience. How the sensor acquires data and how the acquired data are post-processed may also involve hyper-parameters: for data-rich sensors such as LiDAR, experiments show that increasing the detection distance within a certain range improves exploration efficiency, but beyond this range a longer detection distance slightly decreases efficiency, while an overly large detection range greatly increases the computational cost [201]. Properly filtering the local point cloud and selecting an appropriate point cloud downsampling resolution also greatly improve the quality and efficiency of the scene description, but this still requires setting parameters manually over several experiments [203]. For multi-robot collaborative tasks, the optimal number of robots also has to be determined by repeated experiments [201]. In addition, an improved RRT may introduce new sub-algorithms whose parameters must be chosen manually through experiments; for example, leaf-node clustering is one improvement strategy for RRT-based exploration, but unsupervised clustering still requires manual selection of the cluster radius and cluster density [203]. For frontier-based rescue missions, the selection of the frontier threshold is also critical [203].
No generalization of hyper-parameter settings
When the scenario changes or the algorithm is adjusted, the manually tuned parameters mentioned in Section 5.1.2 may no longer be applicable, such as the step size of RRT [52,227], the weights of multiple evaluation metrics under different missions [197], and the cluster radius and cluster density for leaf nodes of RRT-based algorithms [203].
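To make the tuning burden concrete, the sketch below groups the hyper-parameters that typically have to be re-chosen per scenario into one structure with illustrative presets; the parameter names and values are assumptions for illustration, not settings reported in the cited studies.

```python
from dataclasses import dataclass

@dataclass
class RRTParams:
    """Hyper-parameters that typically have to be re-tuned per scenario."""
    step_size: float        # extension step, e.g. millimetres for an arm, metres for a UAV
    goal_bias: float        # probability of sampling the goal directly
    rewire_radius: float    # neighbourhood radius for RRT*-style rewiring
    max_iterations: int

# Illustrative presets only; the specific values are assumptions.
WELDING_ARM = RRTParams(step_size=10.0, goal_bias=0.05, rewire_radius=30.0, max_iterations=50_000)
INDOOR_UAV  = RRTParams(step_size=0.5,  goal_bias=0.10, rewire_radius=2.0,  max_iterations=10_000)

def pick_params(scenario: str) -> RRTParams:
    """Manual per-scenario selection stands in for the hand tuning described above."""
    return {"welding_arm": WELDING_ARM, "indoor_uav": INDOOR_UAV}[scenario]

print(pick_params("welding_arm"))
```

The fact that none of these presets transfers cleanly between scenarios is precisely the generalization problem discussed above, and it motivates the self-tuning directions described later.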
Multi-robot collaboration task
Multi-robot collaboration involves more difficult tasks than individual robots, and the challenges are illustrated by several points.
(1) Centralized or decentralized architecture
A centralized architecture is easy to manage, but when the number of robots is too large, the amount of data in the central node increases dramatically [201]. A decentralized architecture offers high performance, but data consistency is a challenge [128]. In addition, over-idealized assumptions about the decentralized architecture, including that robots can brake instantaneously and that inter-robot communication is lossless and latency-free, lead to frameworks that are theoretically feasible but far from satisfying the needs of practical applications [205].
(2) Efficient or high-precision sensors
Multiple UAVs each carrying a depth camera can cover the area of interest in a short time, but low accuracy of scene modeling is inevitable due to the limitations of sensor accuracy. LiDAR allows accurate scene modeling, but the real-time performance of LiDAR-based RRT navigation is poor because of the large amount of data [204].
(3) Breadth or depth of exploration
In [203], the frontier threshold determines the level of exploration: an excessively low frontier threshold leaves many areas potentially unexplored, while too high a threshold causes many areas to be explored exhaustively, which can lead to relatively unimportant regions being over-explored and to a significant increase in rescue time, so the search strategy must balance breadth and depth of exploration.
(4) Switching exploration targets back and forth
In multi-objective exploration with multiple robots, both frontier-based methods and sampling-based methods suffer from the problem of a robot switching back and forth between two regions or two objectives, making it difficult to accomplish the task [203].
(5) Rationalization of the dynamic division of tasks
Regions of the same size may vary greatly in the number of critical nodes, but this information may only become known gradually during exploration, so it is important both to assign tasks to each robot in advance and to adjust the task assignment strategy in real time. In addition, the problem that sampling-based algorithms are fast in the early stages of exploration and slow in the later stages needs to be considered [204].
(6) Communication constraints
Since robots can only communicate with each other within line of sight, the safety hazards due to communication limitations in decentralized collaborative exploration tasks cannot be ignored. Victoria et al. [205] demonstrate the effectiveness of an improved RRT method for this problem, but over-idealized assumptions, including that robots can brake instantaneously, that inter-robot communication is lossless and latency-free, and that the scene is static, mean that the algorithms are far from adequate for practical applications.
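As a concrete illustration of the line-of-sight constraint in point (6), the sketch below checks whether the straight segment between two robots crosses an occupied cell on a small grid map; the sampling-based check is a simple stand-in for an exact ray traversal and is not the method of [205].

```python
import numpy as np

def line_of_sight(grid, a, b, samples_per_cell=3):
    """Return True if the straight segment between grid cells `a` and `b`
    does not cross an occupied cell (1 = obstacle). Sampling along the
    segment is a simple stand-in for an exact Bresenham traversal."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n = int(np.ceil(np.linalg.norm(b - a))) * samples_per_cell + 1
    for t in np.linspace(0.0, 1.0, n):
        r, c = np.round(a + t * (b - a)).astype(int)
        if grid[r, c]:
            return False
    return True

# Tiny map: a wall in the middle column blocks communication except through the gap.
grid = np.zeros((7, 7), dtype=int)
grid[0:3, 3] = 1
grid[4:7, 3] = 1
print(line_of_sight(grid, (0, 0), (0, 6)))   # blocked by the wall -> False
print(line_of_sight(grid, (3, 0), (3, 6)))   # passes through the gap -> True
```

A decentralized planner would call a check like this before committing to waypoints, rejecting configurations that would break the communication chain.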
Poor real-time performance
Although simulation experiments validate the real-time performance of RRT algorithms [79,226,227], such algorithms still perform unsatisfactorily in real application scenarios. For example, the improved path planning algorithm for coal mine roadways takes 86.12 s under various constraints [167]; due to collision detection for the six links of the robotic arm and the welding gun, the proposed AEB-RRT takes 1109 s, 307 s, 210.8 s, and 447.5 s to generate weld trajectories of 831.5 mm, 353.7 mm, 554.0 mm, and 543.7 mm, respectively, which is far from the requirements of coherent welding [52]; the average time for three UAVs to explore a 10 m*8 m*3 m area with obstacles is 209.4 s [204]; and the assembly of lightweight structures for healthcare facilities takes approximately 28 min [199].
Restricted by upper limits in other areas
For exploration tasks, robots usually perform localization, mapping, partitioning, and path planning simultaneously. Therefore, as the last task in this chain, path planning with RRT has its quality and efficiency bounded by the preceding tasks: inaccurate localization, inaccurate mapping, and irrational task allocation all keep RRT far from a high-quality solution, and this problem cannot be solved at the path planning level alone.
Unstable results
Due to the stochastic nature of RRT, even with the same algorithm, the same set of hyper-parameters, and the same scenario, differences in initial values (e.g., the initial poses of the robots) lead to large differences in results [201], not to mention that real scenarios include a variety of disturbances such as communication constraints and dynamic obstacles (a toy illustration of this run-to-run spread is given at the end of this section).
Large-scale environment
Model-driven algorithms are the trend in path planning that incorporates artificial intelligence, but large-scale environments pose a challenge to this kind of algorithm; for example, DQN's performance in large-scale (>400 cubic meters) environments is far from optimal [234]. In contrast, improved RRTs may still work well in large-scale scenarios; for example, RRT-Rope is designed to find suboptimal solutions in a short time, and results show the method works well in large-scale uncluttered environments [46]. However, this conclusion only holds when only static obstacles are present and the scene is globally known, so large-scale scenes remain challenging for exploration tasks.
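Returning to the run-to-run variance noted under "Unstable results" above, the toy sketch below runs a bare-bones, obstacle-free RRT with identical settings but different random seeds and reports the resulting path lengths. It is not any of the cited planners; it only makes the stochastic spread concrete.

```python
import numpy as np

def rrt_path_length(seed, start=(0.0, 0.0), goal=(9.0, 9.0), step=0.5,
                    goal_tol=0.5, iters=2000):
    """Bare-bones RRT on an obstacle-free 10x10 map; returns the length of the
    first path found (or None). Only meant to show run-to-run variance."""
    rng = np.random.default_rng(seed)
    goal_arr = np.asarray(goal, dtype=float)
    nodes = [np.asarray(start, dtype=float)]
    parents = [0]
    for _ in range(iters):
        sample = rng.uniform(0.0, 10.0, size=2)
        pts = np.vstack(nodes)
        i = int(np.argmin(np.linalg.norm(pts - sample, axis=1)))   # nearest node
        direction = sample - nodes[i]
        new = nodes[i] + step * direction / (np.linalg.norm(direction) + 1e-9)
        nodes.append(new)
        parents.append(i)
        if np.linalg.norm(new - goal_arr) < goal_tol:
            # Walk back to the root and accumulate the path length.
            length, j = float(np.linalg.norm(new - goal_arr)), len(nodes) - 1
            while j != 0:
                length += float(np.linalg.norm(nodes[j] - nodes[parents[j]]))
                j = parents[j]
            return length
    return None

lengths = [rrt_path_length(seed) for seed in range(10)]
print(lengths)   # identical settings, noticeably different path lengths
```

Even in this trivially simple setting the reported lengths differ from seed to seed, which is why repeated trials and statistical reporting are needed when evaluating RRT-based planners.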
Stability and long-term performance involving continuous operation
Stability is critical in safety-related fields, typified by surgical robots that operate on humans. Surgical robots overcome the poor precision, long operating times, surgeon fatigue, and lack of a precise three-dimensional field of view of traditional surgical procedures. Current surgical robots are auxiliary systems that are not intelligent enough to perform surgery on their own; they assist surgeons through remote control and mechanical actuation, and the major concern of patients and their families is whether the absolute safety of the surgical robot can be guaranteed. Although a surgical robot undergoes a long period of debugging and calibration before each surgery and the surgery is operated by experienced surgeons, it is still difficult to ensure that the robot's operation is foolproof; worse, the robot cannot handle special situations, and once a safety hazard occurs, the robotic arm cannot make independent adjustments, as a clinically experienced surgeon would, to minimize the harm. Therefore, most surgical robots are still at the primary stage of semi-automatic collaboration, and the stability requirements on the power and control systems are extremely high: if there is a sudden power failure and restart, or a malfunction of the control system during surgery, the surgical robotic arm is likely to cause irreparable damage to the patient's organs. Although the probability of such a safety incident is low, once it happens the damage to the patient's life and health is irreparable. Liu et al. [244] propose a long-term, perception-based planning strategy built on RRT*, motivated by the facts that (1) cooperative awareness provides critical information beyond the field of view, (2) planning with long-range perception allows earlier response to obstacles, improving path quality and path generation efficiency, and (3) the branching property of RRT* supports real-time online path planning and preserves probabilistic completeness. The strategy investigates the fusion of different vehicles' perception beliefs and proposes a cooperative perception-based cost map to represent uncertainty and transmission delay. Results of campus experiments at the National University of Singapore demonstrate the improvement of the algorithm in remote sensing. However, although the paper claims to validate long-term performance, no quantitative metrics are actually given, the experiments are conducted only on campus, and no indication is given of how long the algorithm runs.
Thus, while much of the literature describes its methods as stable and highly reliable, e.g., no failures in 1000 trials [192], in practice even very low failure rates (considering hardware and communication failures, etc.) may prevent some robots from achieving full autonomy, especially in safety-related tasks such as surgery [206]. On the other hand, the long-term performance of RRTs in continuous operation is hardly reported; the only relevant literature retrieved is from 2013 [244], and although the authors mention long-term performance, they do not give detailed data, so the reliability of RRT under long-term operation is unknown. Overall, various studies have shown that RRT performs well over extended runs in small and medium-sized environments [52,167,204,234,235], but only at the algorithmic level. This suggests, first, that the long-term stability of RRT in large-scale unstructured scenarios remains to be verified; and second, that since RRT by itself does not make a robot autonomous, the long-term stability of RRT in real-world applications also depends on control (path-tracking stability), sensors (environment-awareness stability), communication (signal-awareness stability), and the power supply.
Challenges to artificial intelligence approaches
Data-driven artificial intelligence methods address many difficult problems, such as inaccurate modeling, cost functions that are hard to evaluate, and various human-related factors. However, artificial intelligence has problems of its own. The most important is poor interpretability, which makes designing and improving the models and optimizing their hyper-parameters increasingly difficult; in addition, the solution of the path planning problem under artificial intelligence is closely tied to data quality, and it is very difficult to obtain large amounts of high-quality data in some scenarios, which also causes this type of method to fail.
Ethics and morality Ethics and morality are also factors that cannot be ignored. Common ethical issues in autonomous driving include speeding, dangerous overtaking, and failure to maintain a safe distance; Ref. [242] discusses an overtaking maneuver carried out while maintaining a safe distance. Unlike in other tasks, metrics in safety-related domains require a reasonable framework that weighs "public transportation rules" against "utility" rather than optimizing only the driver's interests (e.g., energy consumption or minimum time). Moreover, in some extreme cases an autonomous vehicle that cannot find a path avoiding all risks is caught in a dilemma, which generally takes one of two forms. (1) The "other-other" dilemma, in which no choice can protect all the different people outside the car. For example, the autonomous car is driving normally according to the traffic rules when two children suddenly run into the lane ahead; if the only two options are to drive straight on and hit the children, or to swerve onto the sidewalk and hit the pedestrians there, either choice harms someone outside the vehicle. (2) The "other-me" dilemma, in which no choice can protect both the occupants and the people outside the car. For example, the autonomous car is approaching a tunnel entrance when a child suddenly runs into the lane ahead; if the only two options are to run over the child or to avoid the child and crash into the tunnel wall, killing the occupant, either choice is harmful. RRT, as a path planning method, is also an important component of intelligent military robots, where ethical and legal issues may arise: whether a military robot that performs tasks and makes decisions autonomously may select lethal weapons or weapons of mass destruction; whether international humanitarian law is obeyed so that civilians and non-combatants are protected; and how responsibility is apportioned between the robot and the human in human-robot collaboration for combat robots. Multi-type robot collaboration Typical unmanned scenarios of cross-domain collaboration include sea-air unmanned cluster systems and air-ground unmanned cluster systems. The former are mostly used for maritime combat and joint-operation missions, such as sea-area alert patrol, ocean detection, target reconnaissance, and coordinated anti-submarine warfare; the latter consist of aerial and ground-based unmanned systems such as UAVs and unmanned vehicles, with missions that include cross-domain collaborative reconnaissance, strikes, and searches. For example, the authors of Ref. [199] note that full automation of lightweight structure assembly can be achieved by ground-based robots assembling the shell while aerial robots assist in tightening the bolts. In the field of pavement construction, pavers and rollers collaborate to automate the entire pavement compaction task, and our team has also proposed a bi-directional path planning strategy between an idle robot and a malfunctioning robot to address the fuel shortages that pavement construction machinery may face [245].
Human-robot collaboration In some safety-critical applications, human-robot collaboration is a major trend for both technical and safety reasons. Emotionally, patients and their families do not trust surgery that depends entirely on robots; technically, the complexity of human tissue and organ environments poses a great challenge to the full autonomy of surgical robots; and, more critically, hardware has an unavoidable failure rate, so surgery performed as human-robot collaboration is considerably more stable [207,208], for example removing a porcine gallbladder with a master-slave robotic arm within a human-robot collaboration framework [210]. Under some accuracy-sensitive path planning requirements, such as ore transportation [222], smoothed paths may no longer be asymptotically optimal under the RRT framework and may even contain unreasonable local segments, which makes manual intervention extremely important. Some experimental results show that human-assisted path planning achieves better performance [241], which supports the conclusion that human-robot collaboration is a future trend. Balancing unbiased and human-experience-biased sampling is likewise an important research direction for improving RRT [182]. In conclusion, all kinds of robots have a bright future under human-robot collaboration. Real-time path planning Real-time path planning plays a decisive role in the efficiency of task completion; Section 5.1.6 gives examples of applications in which tasks cannot be completed efficiently because of poor real-time performance, such as path planning for a coherent welding task [52] and for a coal mine tunnel [167]. In addition, in safety-critical domains such as autonomous driving, failing to re-plan and apply feedback control in real time when obstacles appear suddenly increases the risk of accidents. The development of real-time path planning therefore remains urgent from the perspectives of both efficiency and safety. Self-tuning of the hyper-parameters Self-tuning of hyper-parameters is also a very important research direction, because many hyper-parameters change with the application scenario, or even between different maps within the same scenario, as discussed in detail in Section 5.1.3. At present, swarm optimization or artificial intelligence algorithms can self-tune the hyper-parameters of some algorithms, but many algorithms still require manual tuning, which consumes substantial resources. Artificial-intelligence-driven path planning algorithms likewise face the problem of self-tuning their network-model hyper-parameters. Therefore, the self-tuning of hyper-parameters is one of the future trends. Application-scenario-oriented algorithm and hardware design At the algorithmic level, each algorithm has its advantages and disadvantages, so improving existing algorithms for specific application scenarios is also a future trend. For example, although the RRT-Rope proposed in Ref. [46] cannot handle pillar-like structural obstacles, the authors apply it to a mining-robot path planning scenario without such obstacles and still achieve satisfactory results. At the hardware level, targeted hardware design is also critical; for example, Alejandro et al.
[235] specifically design a slender robot for pipe inspection and further adapt the RRT algorithm to the robot's kinematic characteristics; this optimization, which spans both the hardware and the software system design, proves more effective. In Ref. [236], a novel robot is likewise designed for pipeline inspection of gas-insulated switchgear. In the field of autonomous driving it is particularly important to customize certain strategies, such as maintaining a safe distance and the conditions that trigger overtaking, taking into account self-interest, ethics, laws, and local field-of-view constraints; such customized behaviour can also be obtained through model-driven RRTs informed by human experience [183]. In the field of multi-robot systems and smart-city infrastructure, such as the fully automated lightweight structure assembly mentioned in Ref. [199], the aerial robots used to assist in tightening bolts also require customized designs. Therefore, task- or application-scenario-oriented algorithm and hardware design is also a future research trend. Uncertainties handling Uncertainty arises from various factors, such as unknown time-varying environments, unknown types of perturbation, and interrupted communication links. Enhancing the system's resistance to such interference under various constraints is therefore one of the research trends, and solving this problem would also mitigate the instability of results described in Section 5.1.8. Highly dynamic environments handling Highly dynamic environments are extremely challenging. At the algorithmic level, such environments require not only stable and reliable generated paths but also high computational efficiency, strong dynamic optimization capability, and even predictive ability; at the hardware level, sensors must accurately capture rapid changes in the scene, and the sampling periods of the various hardware components are also decisive for the robot's ability to cope with highly dynamic environments. Conclusion This paper has mainly reviewed articles published in the field of RRT over the past three years. The author believes that although improved RRT-based algorithms have advantages in large-scale scenarios, real-time performance, and uncertain environments, and some strategies that are difficult to describe quantitatively can be designed with model-driven RRT, the problems of difficult hyper-parameter design and weak generalization ability remain. At the practical application level, the reliability and accuracy of hardware such as controllers, actuators, sensors, communication, and power supply, as well as data-acquisition efficiency in large-scale unstructured scenarios, are still a challenge. In the fields of multi-robot collaboration and human-robot collaboration, many problems remain to be solved. In summary, the author believes that multi-type robot collaboration, human-robot collaboration, real-time path planning, self-tuning of hyper-parameters, task- or application-scenario-oriented algorithm and hardware design, and path planning in highly dynamic environments are the future trends. [Fragment of Table 2, "Performance comparison of classical algorithms": weak robustness; overlapping trajectories; moving back and forth between two targets; switching exploration targets back and forth; trapped in narrow regions; missing small regions.]
2024-06-09T15:18:27.848Z
2024-06-01T00:00:00.000
{ "year": 2024, "sha1": "fdde85fce4149b1c4fcdedce54495a3c1e257a8c", "oa_license": "CCBYNCND", "oa_url": "http://www.cell.com/article/S2405844024084822/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e0d2022a3ec3f195f173c196c0880455bd702d2d", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [] }
49336871
pes2o/s2orc
v3-fos-license
On Robust Tie-line Scheduling in Multi-Area Power Systems The tie-line scheduling problem in a multi-area power system seeks to optimize tie-line power flows across areas that are independently operated by different system operators (SOs). In this paper, we leverage the theory of multi-parametric linear programming to propose algorithms for optimal tie-line scheduling within a deterministic and a robust optimization framework. Through a coordinator, the proposed algorithms are proved to converge to the optimal schedule within a finite number of iterations. A key feature of the proposed algorithms, besides their finite step convergence, is the privacy of the information exchanges; the SO in an area does not need to reveal its dispatch cost structure, network constraints, or the nature of the uncertainty set to the coordinator. The performance of the algorithms is evaluated using several power system examples. Introduction For historic and technical reasons, different parts of an interconnected power system and their associated assets are dispatched by different system operators (SOs). We call the geographical footprint within an SO's jurisdiction an area, and transmission lines that interconnect two different areas as tie-lines. Power flows over such tie-lines are generally scheduled 15 -75 minutes prior to power delivery. The report in [3] indicates that current scheduling techniques often lead to suboptimal tie-line power flows. The economic loss due to inefficient tie-line scheduling is estimated to the tune of $73 million between the areas controlled by MISO and PJM alone in 2010. Tie-lines often have enough transfer capability to fulfill a significant portion of each area's power consumption [17]. Thus they form important assets of multi-area power systems. SOs from multiple areas typically cannot aggregate their dispatch cost structures and detailed network constraints to solve a joint optimal power flow problem. Therefore, distributed algorithms have been proposed. Prominent examples include [5,7,13] that adopt the so-called dual decomposition approach. These methods are iterative, wherein each SO optimizes the grid assets within its area, given the Lagrange multipliers associated with inter-area constraints. Typically, a coordinator mediates among the SOs and iteratively updates the multipliers. Alternative primal decomposition approaches are also proposed in [11,15,19]. Therein, the primal variables of the optimization problem are iteratively updated, sometimes requiring the SO of one area to reveal part of its cost structure and constraints to the SO of another area or a coordinator. Traditionally, solution techniques for the tie-line scheduling problem assume that the SOs and/or the coordinator has perfect knowledge of the future demand and supply conditions at the time of scheduling. Such assumptions are being increasingly challenged with the rapid adoption of distributed energy resources in the distribution grid and variable renewable generation like wind and solar energy in the bulk power systems. Said differently, one must explicitly account for the uncertainty in demand and supply in the tieline scheduling problem. To that end, [4,12] propose to minimize the expected aggregate dispatch cost and [14] propose to minimize the maximum of that cost. In this paper, we adopt the latter paradigm -the robust approach. 
Our contribution With the system model in Section 2, we first formulate the deterministic tie-line scheduling problem in Section 3, where we propose an algorithm to solve this deterministic problem that draws from the theory of multiparametric programming [6]. The key feature of our algorithm is that a coordinator can produce the optimal tie-line schedule upon communicating only finitely many times with the SO in each area. In contrast to [19], our method does not require SOs to reveal their cost structures nor their constraints to other SOs or to the coordinator. In Section 4, we formulate the robust counterpart of the tie-line scheduling problem. We then propose a technique that alternately uses the algorithm for the deterministic variant and a mixed-integer linear program to solve the robust problem. Again, our technique is proved to converge to the optimal robust tie-line schedule that requires the coordinator to communicate finitely many times with each SO. Also, SOs are not required to reveal the nature and range of the values the uncertain demand and available supply can take. Our proposed framework thus circumvents the substantial communication burden of the method proposed in [14] towards the same problem. We remark that [14] adopts the column-and-constraint generation technique described in [18] that requires SOs to reveal part of their network constraints, costs and ranges of demand and available renewable supply to the coordinator. We empirically demonstrate the performance of our algorithm in Section 5 and conclude in Section 6. System model To formulate the tie-line scheduling problem, we begin by describing the model for multi-area power systems. Throughout, we restrict ourselves to a two-area power system, pictorially represented in Figure 1 for the ease of exposition. The model and the proposed methods can be generalized for tie-line scheduling among more than two areas. Figure 1: An illustration of a two-area power system. For the power network in each area, we distinguish between two types of buses: the internal buses and the boundary buses. The boundary ones in each area are connected to their counterparts in the other area via tie-lines. Internal buses do not share a connection to other areas. Assume that each internal bus has a dispatchable generator, a renewable generator, and a controllable load 1 . Boundary buses do not have any asset that can inject or extract power. Such assumptions are not limiting in that one can derive an equivalent power network in each area that adheres to these assumptions. Let the power network in area i be comprised of n i internal buses and n i boundary buses for each i = 1, 2. We adopt a linear DC power flow model in this paper. 2 This approximate model sets all voltage magnitudes to their nominal values, ignores transmission line resistances and shunt reactances, and deems differences among the voltage phase angles across each transmission line to be small. Consequently, the real power injections into the network is a linear map of voltage phase angles (expressed in radians) across the network. To arrive at a mathematical description, denote by g i ∈ R n i , w i ∈ R n i , and d i ∈ R n i as the vectors of (real) power generations from dispatchable generators, renewable generators, and controllable loads, respectively. Let θ i ∈ R n i and θ i ∈ R n i be the vectors of voltage phase angles at internal and boundary buses, respectively. 
Then, the power flow equations are given by (1) Non-zero entries of the coefficient matrix depend on reciprocals of transmission line reactances, the unspecified blocks in that matrix are zeros. Throughout, assume that one of the boundary buses in area 1 is set as the slack bus for the two-area power system. That is, the voltage phase angle at said bus is assumed zero. Power injections from the supply and demand assets at the internal buses of area i are constrained as The inequalities are interpreted elementwise. The lower and upper limits on dispatchable generation G i , G i are assumed to be known at the time when tie-line flows are being scheduled. Our assumptions on the available renewable generation W i and the limits on the demands [D i , D i ] will vary in the subsequent sections. In Section 3, we assume that these limits are known and provide a distributed algorithm to solve the deterministic tie-line scheduling problem. In Section 4, we formulate the robust counterpart, where these limits are deemed uncertain and vary over a known set. We then describe a distributed algorithm to solve the robust counterpart. The power transfer capabilities of transmission lines within area i are succinctly represented as for each i = 1, 2. Here, H i and H i define the branch-bus admittance matrices, and f i models the respective transmission line capacities. Similarly, the transfer capabilities of tie-lines joining the two areas assume the form Again, H 12 , H 21 denote the relevant branch-bus admittance matrices and f 12 models the tie-line capacities. Finally, we describe the cost model for our two-area power system. For respectively procuring g i and w i from dispatchable and renewable generators, and meeting a demand of d i from controllable loads, let the dispatch cost in area i be given by We use the notation v ⊺ to denote the transpose of any vector or matrix v. The linear cost structure in the above equation is reminiscent of electricity market practices in many parts of the U.S. today. The second summand models any spillage costs associated with renewable generators. The third models the disutility of not satisfying all demands. 3 The deterministic tie-line scheduling problem Tie-line flows are typically scheduled ahead of the time of power delivery. The lead time makes the supply and demand conditions uncertain during the scheduling process. Within the framework of our model, the available capacity in renewable supply and lower and upper bounds on power demands, i.e., W i , D i , D i , can be uncertain. In this section, we ignore such uncertainty and formulate the deterministic tie-line scheduling problem, wherein we assume perfect knowledge of W i , D i and D i to decide the dispatch in each area and the tie-line flows. Our discussion of the deterministic version will serve as a prelude to its robust counterpart in Section 4. To simplify exposition, consider the following notation. for i = 1, 2. The above notation allows us to succinctly represent the constraints (1) -(3) as for each i = 1, 2 and suitably defined matrices A x i , A ξ i , A y i and vector b i . Denote by m i the number of inequality constraints in the above equation. Next, we describe transmission constraints on tie-line power flows in (4) as Without loss of generality, one can restrict Y to be a polytope 3 . Finally, the cost of dispatch in area i, as described in (5), can be written as for scalar c 0 i and vectors c x i , c ξ i . 
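To make the linear DC power-flow relation and the element-wise constraints above concrete, here is a small numerical sketch with a hypothetical 3-bus network; the line data, limits, and injections are invented for illustration and do not correspond to any system in the paper.

```python
# Hypothetical 3-bus toy illustrating the DC power-flow relations and the
# element-wise constraints described above; all values are made up for the sketch.
import numpy as np

# Line list: (from_bus, to_bus, reactance in p.u.)
lines = [(0, 1, 0.1), (1, 2, 0.2), (0, 2, 0.25)]
n = 3

# Bus susceptance matrix B such that injections p = B @ theta (DC approximation).
B = np.zeros((n, n))
for i, j, x in lines:
    b = 1.0 / x
    B[i, i] += b; B[j, j] += b
    B[i, j] -= b; B[j, i] -= b

theta = np.array([0.0, -0.05, -0.12])   # phase angles (rad); bus 0 plays the slack role
p = B @ theta                            # net real-power injections (p.u.)

# Branch flows H @ theta, limited by line capacities f (|flow| <= f element-wise).
H = np.array([[1/x if k == i else (-1/x if k == j else 0.0) for k in range(n)]
              for i, j, x in lines])
f = np.array([1.0, 1.0, 1.0])
line_ok = np.all(np.abs(H @ theta) <= f)

# Box constraints on dispatchable generation, renewable output, and demand.
g, w, d = np.array([0.4]), np.array([0.2]), np.array([0.6])
g_lo, g_hi, W, d_lo, d_hi = 0.0, 1.0, 0.25, 0.0, 0.8
box_ok = (g_lo <= g).all() and (g <= g_hi).all() and (0 <= w).all() \
         and (w <= W).all() and (d_lo <= d).all() and (d <= d_hi).all()
print(p, line_ok, box_ok)
```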
Equipped with the above notation, we define the deterministic tie-line scheduling problem as follows. Distributed solution via critical region exploration The structure of the optimization problem in (6) lends itself to a distributed solution architecture that we describe below. Our proposed technique is similar in spirit to the critical region projection method described in [11]. 4 We assume that each area is managed by a system operator (SO), and a coordinator mediates between the SOs. Assume that the SO of area i (call it SO i ) knows the dispatch cost c i and the linear constraint involving x i , ξ i , y in (6) in area i, and that SOs and the coordinator all know Y. Our algorithm relies on the properties of (6) that we describe next. To that end, notice that (6) can be written as where Assume throughout that all optimization problems parameterized by y is feasible for each y ∈ Y. Techniques from [14] can be leveraged to shrink Y appropriately, otherwise. The optimization problem in (8) is a multi-parametric linear program, linearly parameterized in (y, ξ i ) on the right-hand side 5 . Such optimization problems are well-studied in the literature. For example, see [6]. Relevant to our algorithm is the structure of the parametric optimal cost J * i . Describing that structure requires an additional notation. We say that a finite collection of polytopes {P 1 , . . . , P ℓ } define a polyhedral partition of Y, if no two polytopes intersect except at their boundaries, and their union equals Y. With this notation, we now record the properties of J * i in the following lemma. The proof is immediate from [6, Theorem 7.5]. Details are omitted for brevity. We refer to the polytopes in the polyhedral partition of Y induced by J * i (·, ξ i ) as critical regions. Recall that the feasible set of (8) is described by a collection of linear inequalities. Essentially, each critical region corresponds to the subset of Y over which a specific set of these inequality constraints are activei.e., are met with equalities -at an optimal solution of (8). A direct consequence of the above lemma is that the aggregate cost J * (·, ξ 1 , ξ 2 ) is also piecewiseaffine and convex. Sets over which this cost is affine define a polyhedral partition of Y. The polytopes of that partition -the critical regions -are precisely the non-empty intersections between the critical regions induced by J * 1 (·, ξ 1 ) and those by J * 2 (·, ξ 2 ). The relationship between the critical regions induced by the various piecewise affine functions are illustrated in Figure 2. In what follows, we develop an algorithm wherein the coordinator defines a sequence of points in Y towards optimizing the aggregate cost. In each step, it relies on the SOs to identify their respective critical regions and the affine descriptions of their optimal costs at these iterates. That is, SO i can compute the critical region P y i that contains y ∈ Y and the affine description [α y i ] ⊺ z + β y i of its optimal dispatch cost J * i (z, ξ i ) over z ∈ P y i by parameterizing the linear program described in (8) 6 . We relegate the details of this step to Appendix A to maintain continuity Critical regions induced by J * 1 . of presentation. 
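To illustrate what each SO returns to the coordinator, the following sketch (with hypothetical problem data standing in for the area-i program (8)) evaluates the parametric optimal cost at a given boundary vector y and recovers the local affine description α⊺z + β by finite differences. Within a critical region the optimal cost is exactly affine, so the finite differences are exact as long as the perturbed points remain in the same region; the paper's Appendix A instead reads the affine description off the active constraints.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical parametric LP standing in for (8): min c^T x  s.t.  A x <= b0 + F y, x >= 0.
c  = np.array([-1.0, -2.0])
A  = np.array([[1.0, 1.0]])
b0 = np.array([4.0])
F  = np.array([[0.5]])          # how the boundary variable y shifts the right-hand side

def J_star(y):
    """Optimal cost of the area problem for a given boundary vector y."""
    res = linprog(c, A_ub=A, b_ub=b0 + F @ y,
                  bounds=[(0, None)] * len(c), method="highs")
    return res.fun

def local_affine(y, eps=1e-6):
    """Affine description J*(z) = alpha^T z + beta of the optimal cost, valid over the
    critical region containing y (J* is exactly affine there, so finite differences
    recover alpha exactly provided y + eps stays inside the same region)."""
    J0 = J_star(y)
    alpha = np.array([(J_star(y + eps * e) - J0) / eps for e in np.eye(len(y))])
    beta = J0 - alpha @ y
    return alpha, beta

alpha, beta = local_affine(np.array([1.0]))
print(alpha, beta)   # for this toy problem: alpha = [-1.0], beta = -8.0
```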
For any y ∈ Y, we assume in the sequel that the coordinator can collect this information from the SOs to construct the critical region P y induced by the aggregate cost containing y and its affine description [α y ] ⊺ z + β y for z ∈ P y , where In presenting the algorithm, we assume that the coordinator can identify the lexicographically smallest optimal solution of a linear program. A vector a is said to be lexicographically smaller than b, if at the first index where they differ, the entry in a is less than that in b. See [9] for details on such linear programming solvers. When a linear program does not have a unique optimizer 7 , such a choice provides a tie-breaking rule. The final piece required to state and analyze the algorithm is an optimality condition that is both necessary and sufficient for a candidate minimizer of (7). Stated geometrically, y * ∈ Y is a minimizer of (7) if and only if The first set on the right-hand side of (10) is the sub-differential set of the aggregate cost J * (·, ξ 1 , ξ 2 ) evaluated at y * 8 . And, the second set denotes the normal cone to Y at y * . The addition stands for a set-sum. Algorithm 1 delineates the steps for the coordinator to solve the deterministic tie-line scheduling problem. In our algorithm, v * 2 denotes the Euclidean norm of v * . If D := {α 1 , . . . , α ℓ D } and N Y (y * ) := {z | K y z ≥ 0}, then computing the least-square solution v * amounts to solving the following convex quadratic program. over the variables v ∈ R n 1 +n 2 , η ∈ R ℓ D , and ζ ∈ R ℓ N , where 1 is a vector of all ones, and K y ∈ R (n 1 +n 2 )×ℓ N . Algorithm 1 Solving the deterministic tie-line scheduling problem. Communicate with the SOs to obtain P y and α y , β y . 4: y opt ← lexicographically smallest minimizer in step 4. Analysis of the algorithm The following result characterizes the convergence of Algorithm 1. See Appendix B for its proof. The above result fundamentally relies on the fact that each time the variable y is updated, it belongs to a critical region (induced by the aggregate cost) that the algorithm has not encountered so far. And, there are only finitely many such critical regions. That ensures termination in finitely many steps. Each time the algorithm ventures into a new critical region, we store the optimizer and the optimal cost over that critical region in the variables y opt and J opt . Forcing the linear program to choose the lexicographically smallest optimizer always picks a unique vertex of the critical region as y opt . Unless J opt improves upon the cost at y * , we ignore the new point y opt . However, the exploration of the new critical region provides a possibly new sub-gradient of the aggregate cost at y * . The sub-differential set at y * is given by the convex hull of the sub-gradients of the aggregate cost over all critical regions that y * is a part of. The set D we maintain is such that conv(D) is a partial sub-differential set of the aggregate cost at y * . Notice that conv(D) ⊆ ∂J * (y * , ξ 1 , ξ 2 ) throughout the algorithm. Therefore, any y * that meets the termination criterion of the algorithm automatically satisfies (10). As a result, such a y * is an optimizer of (7). The proposed technique is attractive in that each SO only needs to communicate finitely many times with the coordinator for the latter to reach an optimal tie-line schedule. Further, each SO i can compute its optimal dispatch x * i by solving (8) with y * . 
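The quadratic program (11) itself is not reproduced above, so the following sketch shows one plausible form of the coordinator's least-squares step under an extra assumption: that the normal cone N_Y(y*) is represented by a finite set of generators (the columns of N below). It computes the minimum-norm element v* of conv(D) + cone(N); a (numerically) zero v* certifies optimality of y*, and otherwise -v* is the probing direction used to reach a new critical region. All numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# One plausible form of the least-squares step: minimum-norm element of conv(D) + cone(N),
# where the rows of D are the collected sub-gradients alpha_j and the columns of N
# (an assumption of this sketch) generate the normal cone at y*.
D = np.array([[ 1.0,  0.5],      # alpha_1 (hypothetical)
              [-0.8,  0.4]])     # alpha_2 (hypothetical)
N = np.array([[0.0],
              [1.0]])            # single hypothetical generator of the normal cone

k, m = D.shape[0], N.shape[1]

def v_of(p):
    eta, zeta = p[:k], p[k:]
    return D.T @ eta + N @ zeta          # v = sum_j eta_j alpha_j + N zeta

objective = lambda p: np.dot(v_of(p), v_of(p))                        # ||v||^2
constraints = [{"type": "eq", "fun": lambda p: np.sum(p[:k]) - 1.0}]  # eta on the simplex
bounds = [(0.0, None)] * (k + m)                                      # eta >= 0, zeta >= 0

res = minimize(objective, x0=np.full(k + m, 0.5), bounds=bounds,
               constraints=constraints, method="SLSQP")
v_star = v_of(res.x)
# If ||v_star|| is (numerically) zero, 0 lies in conv(D) + N_Y(y*) and y* is optimal;
# otherwise -v_star is the direction along which the coordinator probes for a new region.
print(v_star, np.linalg.norm(v_star))
```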
A closer look at the nature of the communication between the SOs and the coordinator reveals that an SO will not have to disclose the complete cost structure nor a complete description of the constraints within its area to the coordinator. in a distributed manner, where F i : Y → R satisfies two properties. First, it is piecewise affine and convex. Second, given any y ∈ Y, SO i can compute an affine segment containing that y. While we do not explicitly characterize how fast the algorithm converges to its optimum, one can expect the number of steps to convergence to grow with the number of critical regions so induced. However, we do not expect our algorithm to explore all such critical regions on its convergence path. A pictorial illustration of the algorithm To gain more insights into the mechanics of Algorithm 1, consider the example portrayed in Figure 3. The coordinator begins with y A as the initial value of y. It communicates with SO i to obtain the critical region induced by J * i containing y A , and the affine description of J * i over that critical region. Using the relation in (9), it then computes the critical region P A induced by the aggregate cost and the affine description of that cost α A ⊺ z + β A over that region. For convenience, we use and extend the corresponding notation for y B , . . . , y E . The coordinator solves a linear program to minimize the affine aggregate cost α A ⊺ z+β A over z ∈ P A , and obtains the lexicographically smallest optimizer y opt . Such an optimizer y opt is always a vertex of P A . Identify y B as that vertex in Figure 3. The optimal cost at y B is indeed lower than the initial value of J * = ∞, and hence, the coordinator sets y * ← y B . It also updates J * to the aggregate cost at y B , and the partial sub-differential set to D ← {α A }. Next, the coordinator solves the least square problem described in (11) to compute v * . In so doing, it utilizes D = {α A }, and K y = 0 that describes the normal cone to Y at y B . 9 Suppose v * = 0. The coordinator updates the value of y to y C , obtained by moving a 'small' step of length ε from y B along −v * . Recall that y C / ∈ P A . The coordinator again communicates with the SOs to obtain the new critical region P C induced by the aggregate cost that contains y C . Again, it obtains the affine description of that cost and optimizes it over P C to obtain the new y opt . In the figure, we depict the case when y opt coincides with y * = y B . Notice that the optimal cost J opt at y opt is equal to J * , and hence, the coordinator only updates the partial sub-differential set D to {α A , α C }. With the updated set of D, the coordinator solves (11) to obtain v * . In 9 The normal cone to Y at y B is {0} because y B lies in the interior of Y. this example, v * is again non-zero, and hence, the coordinator moves along a step of length ε along −v * from y B to land at y D . Again, y D / ∈ {P A , P C }. The coordinator repeats the same steps to optimize the aggregate cost over P D to obtain y E as the new y opt . Two cases can now arise, that we describe separately. • If the optimal cost J opt at y opt = y E does not improve upon the cost J * at y B , the coordinator ignores y E and updates the set D to {α A , α C , α D }. It computes v * with the updated D. Again, if v * = 0, it traverses along −v * to venture into a yet-unexplored critical region. The process continues till we get y * = y B as an optimizer (if v * = 0 at a future iterate), or we encounter the case we describe next. 
• If J opt < J * , then the coordinator sets y E as the new y * . It retraces the same steps with this new y * . In this example, since y E is a vertex of Y, one can show that (11) will yield v * = 0, and hence, y * = y E will optimize the aggregate cost over Y. The robust counterpart The deterministic tie-line scheduling problem was formulated in the last section on the premise that available renewable supply and limits on power demands within each area are known at the time when tie-line schedules are decided. We now alter that assumption and allow these parameters to be uncertain. In particular, we let ξ i = W i , D i , D i take values in a box, described by for i = 1, 2. The robust counterpart of the tie-line scheduling problem is then described by We now develop an algorithm that solves (13) in a distributed fashion. Problem (13) has a minimax structure. Therefore, we employ a strategy in Algorithm 2 to alternately minimize the objective function over Y and maximize it over Ξ 1 × Ξ 2 . Thanks to the following lemma, the maximization over Ξ 1 × Ξ 2 can be reformulated into a mixed-integer linear program. Lemma 2. Fix y ∈ Y. Then, there exists M > 0 for which maximizing J * i (y, ξ i ) over ξ i ∈ Ξ i is equivalent to the following mixed-integer linear program: We use the notation ∆ ξ i to denote a diagonal matrix with ξ U i − ξ L i as the diagonal. The lemma builds on the fact that J * i (y, ξ i ) is convex in ξ i , and hence, reaches its maximum at a vertex of Ξ i . The convexity is again a consequence of [6, Theorem 7.5]. Our proof in Appendix C leverages duality theory of linear programming and the so-called big-M method adopted in [8,Chapter 2.11] to reformulate the maximization of J * i (y, ·) over the vertices of Ξ i into a mixed-integer linear program. An optimal ξ opt i can be recovered from w * i that is optimal in (14) using ξ opt i := ξ L i + ∆ ξ i w * i . Next, we present our algorithm for solving the robust counterpart. In the algorithm, the SOs exclusively maintain and update certain variables; we distinguish these from the ones the coordinator maintains. Algorithm 2 Solving the robust counterpart. 1: Initialize: Coordinator uses Algorithm 1 to solve 9: J opt i ← optimal cost in step 7. 10: We summarize the main property of the above algorithm in the following theorem, whose proof is given in Appendix D 10 . Our algorithm to solve the robust counterpart makes use of Algorithm 1 in step 3. The coordinator performs this step with necessary communication with the SOs. However, it remains agnostic to the uncertainty sets Ξ 1 and Ξ 2 throughout. Therefore, our algorithm is such that the SOs in general will not be required to reveal their cost structures, network constraints, nor their uncertainty sets to the coordinator to optimally solve the robust tie-line scheduling problem. Further, Theorems 1 and 2 together guarantee that the coordinator can arrive at the required schedule by communicating with the SOs only finitely many times. These define some of the advantages of the proposed methodology. In the following, we discuss some limitations of our method. The number of affine segments in the piecewise affine description of max ξ i ∈V i J * i (y, ξ i ) increases with the size of the set V i . The larger that number, the heavier can be the computational burden on Algorithm 1 in step 3. To partially circumvent this problem, we initialize the sets V i with that vertex of Ξ i that encodes the least available renewable supply and the highest nominal demand. 
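The reformulation in Lemma 2 rests on the fact that J*_i(y, ·) is convex in ξ_i and therefore attains its maximum at a vertex of the box Ξ_i. The sketch below makes that fact concrete by brute-force enumeration of the 2^n vertices, solving one LP per vertex; this is viable only for a handful of uncertain parameters and merely stands in for the big-M mixed-integer program (14), which is not reproduced here. The problem data are hypothetical.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Hypothetical area problem: J*_i(y, xi) = min c^T x  s.t.  A x <= b0 + F y + G xi, x >= 0.
c  = np.array([-1.0, -2.0])
A  = np.array([[1.0, 1.0],
               [1.0, 0.0]])
b0 = np.array([4.0, 3.0])
F  = np.array([[0.5], [0.0]])
G  = np.array([[0.0, 1.0],      # how the uncertain parameters shift the right-hand side
               [1.0, 0.0]])

xi_lo = np.array([0.0, 0.0])     # lower corner of the box Xi_i
xi_hi = np.array([0.5, 1.0])     # upper corner of the box Xi_i

def J_star(y, xi):
    res = linprog(c, A_ub=A, b_ub=b0 + F @ y + G @ xi,
                  bounds=[(0, None)] * len(c), method="highs")
    return res.fun

def worst_case_vertex(y):
    """Maximize the convex function J*_i(y, .) over the box by enumerating its vertices."""
    best_xi, best_val = None, -np.inf
    for corners in itertools.product(*zip(xi_lo, xi_hi)):
        xi = np.array(corners)
        val = J_star(y, xi)
        if val > best_val:
            best_xi, best_val = xi, val
    return best_xi, best_val

print(worst_case_vertex(np.array([1.0])))
```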
This initialization captures the intuition that the dispatch cost is likely highest with the least available renewable supply and the highest demand. Our empirical results in the next section corroborate that intuition. We make use of mixed-integer linear programs in step 7 of the algorithm. This optimization class encompasses well-known NP-hard problems. Solvers in practice, however, often demonstrate good empirical performance. Popular techniques for mixed-integer linear programming include branch-and-bound, cutting-plane methods, etc.; see [8] for a survey. Providing polynomial-time convergence guarantees for (14) remains challenging, but our empirical results in the next section appear encouraging. Numerical Experiments We report here the results of our implementation of Algorithm 2 on several power system examples. All optimization problems were solved in IBM ILOG CPLEX Optimization Studio V12.5.0 [1] on a PC with a 2.0GHz Intel(R) Core(TM) i7-4510U microprocessor and 8GB RAM. On a two-area 44-bus power system Consider the two-area power system shown in Figure 4a, obtained by connecting the IEEE 14- and 30-bus test systems [2]. The networks were augmented with wind generators at various buses. Transmission capacities of all lines were set to 100MW. The available capacity of each wind generator was varied between 15MW and 25MW. The lower limits on all power demands were set to zero, while the upper limits were varied between 98% and 102% of their nominal values. Our setup had 36 uncertain variables: 32 power demands and 4 available wind generations. Bus 5 in area 1 was the slack bus. From the data in Matpower [20], we chose the linear coefficient in the nominal quadratic cost structure for each conventional generator to define P^g_i in (5). Further, we neglected wind spillage costs by letting P^w_i = 0, and defined P^d_i by assuming a constant marginal cost of $100/MWh for not meeting the highest demands.

Table 1: Evolution of aggregate cost of Algorithm 2 for the two-area power system in Figure 4a.
Iteration | Step in Algorithm 2       | Aggregate cost (in $/h) | Run-time (in ms)
1         | Step 3 to compute y*      | 9897.7                  | 113.6
1         | Step 7 to compute ξ^opt   | 9910.3                  | 99.6
2         | Step 3 to compute y*      | 9899.3                  | 93.4
2         | Step 7 to compute ξ^opt   | 9899.3                  | 121.5

To run Algorithm 2, we initialized V_i with the scenario that describes the highest power demands and the least available wind generation across all buses. To invoke Algorithm 1 in step 3, we initialized y with a vector of all zeros. When the algorithm encountered the same step in future iterations, it was initialized with the optimal y* from the last iteration to provide a warm start. Algorithm 2 converged in two iterations, i.e., it ended when the cardinalities of V_1 and V_2 were both two. The trajectory of the optimal cost and the run-times for each step are given in Table 1. In the first iteration, Algorithm 1 in step 3 with ε = 10^-5 converged in four iterations of its own and explored five critical regions induced by the aggregate cost. A naive search over Y yielded that the aggregate cost induced at least 126 critical regions. Our simulation indicates that Algorithm 1 only explores a 'small' subset of all critical regions. Step 7 of Algorithm 2 was then solved to obtain ξ^opt_i. As Table 1 suggests, the aggregate cost J^opt_1 + J^opt_2 exceeded J* obtained earlier in step 3. Thus, the scenario of demand and supply captured in our initial sets V_1 and V_2 was not the one with maximum aggregate dispatch costs.
To accomplish this step, two separate mixed-integer linear programs were solved -one with 13 binary variables (in area 1) and the other with 23 binary variables (in area 2). CPLEX returned the global optimal solutions in 15ms and 77ms, respectively. In the next iteration, step 3 was performed with ξ opt i added to V i , where Algorithm 1 converged in five iterations, exploring only four critical regions. Finally, step 7 yielded J opt 1 + J opt 2 = J * , implying that the obtained y * defines an optimal robust tie-line schedule. To further understand the efficacy of our solution technique, we uniformly sampled the set Ξ 1 × Ξ 2 3000 times. With each sample (ξ 1 , ξ 2 ), we solved two optimization problems -P 1 and P 2 . Precisely, P 1 is a deterministic tie-line scheduling problem solved with Algorithm 1, and P 2 is the optimal power flow problem in each area with the optimal y * obtained from Algorithm 2 for the robust counterpart. The histograms of the optimal aggregate costs from P 1 and P 2 are plotted in Figure 4b. The same figure also depicts the optimal cost of the robust tie-line scheduling problem, which naturally equals the maximum among the costs from P 2 . And for each sample, the gap between the optimal costs of P 1 and P 2 captures the cost due to lack of foresight. Figure 4b reveals that such costs can be significant. The median run-time of P 1 was 48.5ms over all samples. The run-time for the robust problem was 458.2ms -roughly 10 times that median. On a three-area 187-bus system test For this case study, we interconnected the IEEE 30-, 39-, and 118-bus test systems as shown in Figure 5a. All transmission capacities were set to 100MW. Five wind generators were added to the 118-bus system (at buses 17, 38, 66, 88, and 111), three in the 39-bus system (at buses 3, 19, and 38), and two in the 30-bus system (at buses 11, and 23). Again, we adopted the same possible set of available wind power generations and power demands, as well as the cost structures as in Section 5.1. In total, our robust tie-line scheduling problem modeled 151 uncertain variables. For this multi-area power system, Algorithm 2 converged in the first iteration. The mixed integer programs in step 7 yielded the global optimal solution for each area, taking 62ms, 109ms, and 281ms, respectively. We again sampled the set Ξ 1 × Ξ 2 × Ξ 3 3000 times, and solved P 1 . The run-time of Algorithm 2 was 825.3ms, that is roughly 1.8 times the median run-time of P 1 , given by 450.8ms. (a) A three-area 187-bus power system. We studied how our algorithm scales with the number of boundary buses by adding more tie-lines to the same system. The aggregate iteration count of Algorithm 1 is expected to grow with the number of induced critical regions, that in turn should grow with the boundary bus count. On the other hand, the iteration count of Algorithm 2 largely depends on the initial choice of the scenario encoded in the sets V 1 , V 2 , V 3 , and thus, varies to a lesser extent on the same count. Figure 5b validates these intuitions. Summary of results from other case-studies We compared Algorithm 1 with a dual decomposition based approach proposed in [5]. That algorithm converges asymptotically, while our method converges in finitely many iterations. Table 2 summarizes the comparison. 12 Compared to that in [5], our algorithm clocked lesser number of iterations and lower runtimes in our experiments. 
Table 2: Comparison with the method in [5].
Items                        | Two-area 44-bus system | Three-area 187-bus system
# iterations in Algorithm 1  | 8                      | 9
# iterations of [5]          | 23                     | 78
Run-time of Algorithm 1 (ms) | 458.2                  | 825.3
Run-time of [5] (ms)         | 779.8                  | 1227.5

(We say the method in [5] converges when the power flow over each tie-line, as calculated by the areas at its ends, mismatches by less than 0.01 p.u.)

Apart from the two systems considered so far, we ran Algorithm 2 on a collection of other multi-area power systems, details of which can be found in Appendix E. The results are summarized in Table 3. Our experiments reveal that Algorithm 2 often converges within 1 to 4 iterations. The run-time of Algorithm 2 grows significantly with the number of uncertain parameters. The 418-bus and the 536-bus systems, with 422 and 546 uncertain variables respectively, corroborate that conclusion. Such growth in run-time is expected because the complexity of (14) grows with the number of binary decision variables, which equals the number of uncertain parameters. The run-time of a joint multi-area optimal power flow problem with a sample scenario, reported in the last column, provides a reference against which to compare the run-times for the robust problem. Conclusion This work presented an algorithmic framework to solve a tie-line scheduling problem in multi-area power systems. Our method requires a coordinator to communicate with the system operators in each area to arrive at an optimal tie-line schedule. In the deterministic setting, where the demand and supply conditions are assumed known during the scheduling process, our method (Algorithm 1) was proven to converge in finitely many steps. In the case with uncertainty, we proposed a method (Algorithm 2) to solve the robust variant of the tie-line scheduling problem. Again, our method was shown to converge in finitely many steps. Our proposed algorithms do not require the system operators to reveal the dispatch cost structure, network parameters, or even the support set of uncertain demand and supply within each area to the coordinator. We empirically demonstrated the efficacy of our algorithms on various multi-area power system examples. A How SO_i can compute P^y_i, α^y_i, β^y_i With ξ_i ∈ Ξ_i and y ∈ Y fixed, consider the optimization problem described in (8). Suppose the optimal solution x*_i(y, ξ_i) is unique. We suppress the dependency on (y, ξ_i) for notational convenience. Distinguish between the constraints that are active (met with an equality) and those that are inactive at optimality with the subscripts A and I, respectively. The set of active versus inactive constraints remains the same over the critical region P^y_i. Assuming [A^x_i]_A is a square and invertible matrix, the optimal solution x*_i is unique for each z ∈ P^y_i and is an affine function of z, obtained by solving the active constraints for x_i. The inequalities for the inactive constraints, together with the above relation, define the critical region P^y_i. B Proof of Theorem 1 That is, y* optimally solves (7). Next, we argue that the algorithm terminates in finitely many iterations. Consider the sequence of y*'s and J*'s produced by the algorithm. Notice that J* is a piecewise constant but non-increasing sequence. Further, a change in y* always accompanies a strict decrease in J*. Therefore, if y* changes in an iteration away from a certain point, that same point can never become y* again. Since there are finitely many critical regions with finitely many vertices, it only remains to show that y* cannot remain constant over infinitely many iterations.
Towards that goal, notice that y * can only belong to a finite number of critical regions. In the rest of the proof, we argue that the variable y computed in step 13 always belongs to a different such critical region containing y * , unless the algorithm terminates. At an arbitrary iteration, assume that y has taken values in critical regions P 1 , . . . , P ℓ D that contain y * . For convenience, let the optimal aggregate cost be given by α j ⊺ z + β j for z ∈ P j for each j = 1, . . . , ℓ D . Thus, D := {α 1 , . . . , α ℓ D }. Then, the new value of y is computed as y * − εv * , with v * as defined in (11). If v * = 0, then the algorithm terminates, proving our claim. Otherwise, assume that y * − εv * ∈ P 1 , contrary to our hypothesis, implying over w i ∈ {0, 1} n i , ρ ∈ R n i , and λ ∈ R m i + . Here, diag(w i ) denotes the diagonal matrix with w i as the diagonal. Since we maximize 1 ⊺ ρ, one can replace the second equality constraint in the above problem with the inequality that is further equivalent to for a large enough M > 0. That completes the proof. D Proof of Theorem 2 Let J rob denote the optimal aggregate cost of (13). Then, J * from step 3 and J opt 1 + J opt 2 from step 7 at any iteration of Algorithm 2 satisfy J * ≤ J rob ≤ J opt 1 + J opt 2 . If Algorithm 2 terminates, the termination condition implies that the above inequalities are all equalities. In that event, y * optimally solves (13). To argue the finite-time convergence, notice that at least one among V 1 and V 2 increases in cardinality unless the termination condition is satisfied. The rest follows from the fact that Ξ 1 and Ξ 2 have finitely many vertices. E Power system details for additional simulations The multi-area power systems considered in Section 5.3 are given in Figure 6. Tie-line capacities were set to 100MW and their reactances were set to 0.25p.u. Capacity limits on the transmission lines within each area were set to their respective nominal values in Matpower [20] wherever present, and to 100MW, otherwise. For all two-area tests, two wind generators were installed in the two areas at buses 6 and 14 in area 1 and buses 11 and 23 in area 2. For the three-area tests, we replicated the placements described in Section 5.2. Power demands and available wind generations were varied the same way as in Sections 5.1 and 5.2.
2017-10-06T12:27:29.000Z
2017-04-24T00:00:00.000
{ "year": 2018, "sha1": "3292c3ab8f21c6a7999c1bda105a5f3b5ffebdd4", "oa_license": "publisher-specific, author manuscript", "oa_url": "https://doi.org/10.1109/tpwrs.2017.2775161", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "12ac063ca96840ed48bcc104ca331c48295a1ad1", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
251040319
pes2o/s2orc
v3-fos-license
Measuring Hubble Constant with Dark Neutron Star-Black Hole Mergers Detection of gravitational waves (GWs) from neutron star-black hole (NSBH) standard sirens can provide local measurements of the Hubble constant ($H_0$), regardless of the detection of an electromagnetic (EM) counterpart: The presence of matter terms in GWs breaks the degeneracy between mass parameters and redshift, allowing simultaneous measurement of both the luminosity distance and redshift. Although the tidally disrupted NSBH systems can have EM emission, the detection prospects of an EM counterpart will be limited to $z<0.8$ in the optical, in the era of the next generation GW detectors. However, the distinctive merger morphology and the high redshift detectability of tidally-disrupted NSBH makes them promising standard siren candidates for this method. Using recent constraints on the equation-of-state of NSs from multi-messenger observations of NICER and LIGO/Virgo/KAGRA, we show the prospects of measuring $H_{0}$ solely from GW observation of NSBH systems, achievable by Einstein Telescope (ET) and Cosmic Explorer (CE) detectors. We first analyze individual events to quantify the effect of high-frequency ($\ge$ 500 Hz) tidal distortions on the inference of NS tidal deformability parameter ($\Lambda$) and hence on $H_0$. We find that disruptive mergers can constrain $\Lambda$ up to $\mathcal{O}(60\%)$ more precisely than non-disruptive ones. However, this precision is not sufficient to place stringent constraints on the $H_0$ for individual events. By performing Bayesian analysis on different sets of simulated NSBH data (up to $N=100$ events, corresponding to a timescale from several hours to a day observation) in the ET+CE detectors, we find that NSBH systems enable unbiased 4\% - 13\% precision on the estimate of $H_0$ (68\% credible interval). This is a similar measurement precision found in studies analyzing populations of NSBH mergers with EM counterparts in the LVKC O5 era. INTRODUCTION The value of the Hubble constant H 0 , which quantifies the current expansion rate of the Universe, has been measured extensively since it was first established in 1929 (Hubble 1929). Even with the current highprecision measurements of H 0 , the most recent local measurement H 0 = 73.04 ± 1.04 km s −1 Mpc −1 of the Hubble Space Telescope and Supernova H0 for the Equation of State (SH0eS) team (Riess et al. 2021), highlights a level of ≈ 5σ tension with the constraint inferred by the Planck Collaboration H 0 = 67.4 ± 0.5 km s −1 Mpc −1 (Aghanim et al. 2020). Despite the ongoing efforts to find conclusive evidence of systematic errors in modeling the data of these experiments, or a compelling novel theoretical explanation, there is currently no agreement on the cause of the discrepancy in H 0 between the different measurements. GW detection and sky-localization of merging binaries can provide a direct and independent local measurement of H 0 , as first proposed by Schutz (1986), and further analysed and advanced by Holz & Hughes (2005) Mastrogiovanni et al. (2021); Borhanian et al. (2020); Gray et al. (2022); Cigarrán Díaz & Mukherjee (2022). These systems are referred to as bright standard sirens, in case an EM follow up can be assigned to the event, and otherwise dark standard sirens. The GW detection of the binary neutron star (BNS) system GW170817 and the electromagnetic (EM) identification of its host galaxy ( Abbott et al. 
(2017b) and references therein) allowed the first application of the bright standard siren's approach, giving H 0 = 70 +12.0 −8.0 km s −1 Mpc −1 (Abbott et al. 2017a). This measurement was followed by improved estimate of H 0 = 68.9 ± 4.7 km s −1 Mpc −1 (Hotokezaka et al. 2019), using high angular resolution imaging of radio counterparts of GW170817, and later on estimated to H 0 = 68.3 +4.6 −4.5 km s −1 Mpc −1 Mukherjee et al. (2021b) and H 0 = 68.6 +14.0 −8.5 km s −1 Mpc −1 (Nicolaou et al. 2020), by accounting for the systematic uncertainties that arise from the calculation of the peculiar velocity. Recently, the third gravitational wave catalogue was released, bringing the total number of GW detections to 90 events (The LIGO Scientific Collaboration et al. 2021b). Selecting 47 of these events, the Hubble constant was constrained to H 0 = 68 +13.0 −7.0 km s −1 Mpc −1 when using the redshifted mass distribution, and H 0 = 68 +8.0 −6.0 km s −1 Mpc −1 when combining the GW information with a galaxy catalog (The LIGO Scientific Collaboration et al. 2021a), and to H 0 = 67 +6.3 −3.8 km s −1 Mpc −1 when using the GWTC-3 catalogue in combination with a galaxy catalogue . The combined GW and EM detection of NSBH systems, however, is yet to be observed. In general, NSBH systems are expected to be promising standard sirens -as both dark and bright candidates -since they have higher masses compared to BNSs, leading to mergers that occur at lower frequencies, potentially within the current and future ground-based detector bands, and also accessible at higher redshifts (Nissanke et al. 2010;Vitale & Chen 2018a;Feeney et al. 2021). The key difference between the NSBH systems and BNSs, which makes NSBH systems specifically interesting as dark standard sirens as well, is in their inspiral-merger phenomenology. For both systems, the tidally-induced deformations on the neutron star (NS) from the companion object (quantified by the NS tidal deformability parameter Λ) would increase the GW energy emission rate at the early inspiral stage in a similar manner -for NSBH mergers this effect can increase even further if the BH is spinning fast. Yet, closer to the merger of NSBHs, the strong tidal fields of the BH can, in some cases, significantly disrupt the NS, causing a sudden decrease in the GW amplitude at high frequencies and an accelerated merger, followed by mass ejection and formation of accretion torus around the BH and consequently, EM radiation. The other possible fate of the NS is that it plunges into the BH before getting highly disrupted and having a chance to emit any EM radiation. Qualitatively, whether the disruption happens or not, and how strong it is, depends primarily on the eccentricity, mass and spin of the BH, and the internal NS matter structure (Lattimer & Schramm 1974;Vallisneri 2000;Foucart 2012;Deaton et al. 2013;Pannarale et al. 2015a;Foucart 2020;Shibata et al. 2009;Etienne et al. 2009;Foucart et al. 2013a;Stephens et al. 2011). In the highly-disruptive cases that lead to EM radiation, NSBH systems can be used as bright standard sirens (Feeney et al. 2021;Vitale & Chen 2018b;Nissanke et al. 2010) with the spectroscopic redshift for the host being obtained with very high accuracy. Identifying such EM counterparts, however, remains challenging as the current and future planned EM facilities have limitations in the sky coverage (Metzger & Berger 2012;Nissanke et al. 2013b;Raaijmakers et al. 2021b;Chase et al. 2022;Sathyaprakash et al. 
2019) and hence many of such events will be too far (z > 1) to be detectable by wide field optical and radio telescopes. Moreover, the probability of detecting an EM counterpart could strongly depend on the orientation angle of the system. In the cases when no EM counterpart is observed or generated, GW measurements of BNS or NSBH mergers can solely constrain the distance-redshift relation (hence the cosmological parameters), with a method first proposed by Messenger & Read (2012) and applied to BNS mergers more recently by Del Pozzo et al. (2017);Chatterjee et al. (2021) and Ghosh et al. (2022) (see Farr et al. (2019) for a technique applicable to BBHs). Tidal deformation in a binary system affects the transfer of GW energy and consequently, the GW phase and amplitude evolution and also accelerates the coalescence. In the absence of such matter effects, the mass parameters of GW signals are degenerate with the redshift z, resulting in the detection of redshifted masses m N S,d measured by the detector. However, the tidal corrections in the GW signals depend on the physical masses i.e., the source-frame NS masses such that m N S,s = m N S,d /(1 + z). Therefore, these Λ-dependent corrections break the degeneracies in the waveform and thus allow the simultaneous estimation of the GW luminosity distance d L and redshift z, from the waveform's amplitude and phase respectively. To constrain H 0 with this approach, we need independently derived information on the NS matter effects, either by assuming a known NS equation-of-states (EoS) (Messenger & Read 2012) or by using some form of parameterization -such as Taylor expansion of Λ in terms of NS mass -in an EoS-insensitive way. An example of latter is using the so-called universal binary-Love relations (Yagi & Yunes 2017;Doneva & Pappas 2018), which fits Λ around a fiducial NS mass value for which the tidal parameter is known. This approach is so far only applied to GW170817 for which the NS has a fiducial mass that lies in the steepest region of mass-radius plot. Hence, probing the extreme cases and also very high NS masses with this approach can be limited. In this paper, we use a new approach for modelling the viable EoS parameters. We use the posterior EoS samples from Raaijmakers et al. (2021a), which are inferred from a combination of multimessenger astrophysical observations and low-density nuclear calculations done within a chiral effective field theory framework (Hebeler et al. 2013). The astrophysical observations include the two mass-radius measurements of millisecond pulsars PSR J0030+0451 (Miller et al. 2019;Riley et al. 2019) and PSR J0740+6620 (Miller et al. 2021;Riley et al. 2021) by NASA's Neutron Star Interior Composition Explorer (NICER) (Gendreau et al. 2016) and the tidal deformability measurement from GW170817 and its accompanying EM counterpart AT2017gfo, and the low signal-to-noise ratio (SNR) event GW190425 (see also Guerra Chaves & Hinderer 2019;Pang et al. 2021a;Dietrich et al. 2020) for which no EM counterpart was observed. In addition to the uncertainties in the modelling techniques, the significance with which H 0 can be inferred also depends on how well the tidal deformability parameter Λ can be constrained by the observed GWs. Note that measuring H 0 with this approach is not applicable to the recently detected NSBH systems GW200105 and GW200115 (Abbott et al. 2021) due to the low SNR of their detected signals and the uninformative constraints on their Λ parameter with the current detectors. 
This itself is partly due to the lack of information from their merger stage and partially due to the fact that they are (very likely) non-disrupting systems which can make them uninformative even at high SNRs. The effect of high-frequency tidal disruptions on the GW amplitude of NSBH systems is usually modeled by introducing a cut-off frequency parameter which roughly indicates whether a merger is disruptive or not, and also marks the beginning of a possible disruption (Vallisneri 2000;Ferrari et al. 2009Ferrari et al. , 2010Lackey et al. 2014;Pannarale et al. 2015b,a;Lackey et al. 2014). This is a distinctive feature of the waveform emitted by NSBH binaries: If the disruption, characterised by the cut-off frequency, takes place before the NS crosses the BH's innermost-stable-circular orbit (ISCO), then the ejected material from the NS can form a disk around the BH and suppress the amplitude of the waveform, leading to an accelerated merger with no post-merger GW signal. On the other hand, in the case of a non-disruptive merger (e.g., when the BH mass is very large), the waveform is shown to be comparable to binary BH waveforms (Pannarale et al. 2011), where, instead, the high-frequency amplitude is governed by the ringdown of the companion BH. Therefore, the main difference between a BBH and a NSBH signal that is non-disruptive, is mainly in their GW phasing behaviour. In this paper we determine constraints on Λ -and consequently on H 0 -by performing Bayesian parameter estimation on simulated NSBH binary systems in the next generation detectors, ET (Maggiore et al. 2020) +CE detector era (Evans et al. 2021;Reitze et al. 2019), using the previously mentioned multi-messenger constraints on NS EoS to model Λ. We compare the bounds on Λ and H 0 derived from the tidally disrupted and nondisrupted mergers to see how the high-frequency NS disruption effects in the waveforms can improve the overall parameter inference in the era of these next generation observatories. Our primary interest is in the tidallydisrupted systems (i.e., systems with low mass-ratios and possibly high prograde BH spins) as most of them can merge inside the ET+CEs detector bandwidth while the current ground-based detectors can not capture their mergers. In order to model the GW strain data, we use the NSBH specific GW waveform model IMRPhenomNSBH (Thompson et al. 2020) (hereafter referred to as Phenom-NSBH). We then analyse the prospects of measuring H 0 by performing a two-step Bayesian parameter estimation on simulated individual NSBH systems, as well as catalogues of NSBH events with different number of simulated events. We use the same waveform model for both the injection and recovery of the waveforms. The paper is organized as follows. In Sec. 2 we introduce our approach for modelling the tidal deformability parameter, and give an overview of the tidal modelling in the chosen waveform model. After explaining the details of source and population simulations in Sec. 2.3, we describe the details of the Bayesian statistical framework used in our analysis in Sec. 2.4.1. We present the results of analysing single NSBH systems and stacked catalogs in Sec. 3. Sec. 4 summaries our conclusions. Tidal deformability model During the evolution of a NSBH binary, the tidal fields of the BH produce deformations in the companion NS. 
These deformations depend on the NS matter properties, predominantly through an EoS-dependent dimensionless tidal deformability parameter Λ defined as Λ = (2/3) k_2 C^−5 = (2/3) k_2 [R c^2 (1 + z) / (G m_NS,d)]^5, where m_NS,s is the source-frame mass of the NS, C = G m_NS,s / (R c^2) is the NS compactness, and k_2 is the dimensionless relativistic quadrupole tidal Love number such that λ = (2/3) R^5 k_2 G^−1 (with R being the NS radius) characterizes the strength of the induced quadrupole given an external tidal field (Hinderer et al. 2010a;Flanagan 1998). The right-hand side of the equation shows the redshift dependence once we transform to the detector-frame mass m_NS,d = m_NS,s (1 + z). We base our modeling of the tidal deformability parameter on the EoS constraints inferred by Raaijmakers et al. (2021a) (for other multimessenger EoS constraints, see e.g. Dietrich et al. 2020;Al-Mamun et al. 2021;Huth et al. 2022). In this work, the EoS is constrained by employing a parameterized high-density EoS, coupled to low-density NS matter calculations within a chiral effective field theory framework (Hebeler et al. 2013). Posterior distributions on the EoS parameters are then obtained by combining information from NICER's mass and radius measurements of the pulsars PSR J0030+0451 (Miller et al. 2019;Riley et al. 2019) and PSR J0740+6620 (Miller et al. 2021;Riley et al. 2021), and measurements of the tidal deformability from GW170817 (together with its optical counterpart AT2017gfo) and GW190425. Figure 1 shows the mass-radius and mass-tidal deformability constraints as found by the NICER and LIGO/Virgo observations, as well as the possible EoS relations consistent with the posterior distribution found in Raaijmakers et al. (2021a). Instead of sampling Λ, we sample EoSs from the posterior distribution of Raaijmakers et al. (2021a). For a drawn EoS, a value of Λ can then be assigned based on a given m_NS,s. By considering a broad set of EoSs as a prior in our analysis, we can take into account the uncertainties in the NS's microphysics. However, we remain dependent on our choice of EoS parameterization, which may introduce a systematic bias. We specifically take the results from the piecewise-polytropic parameterization employed in Raaijmakers et al. (2021a), but note that many other high-density parameterizations exist (Lindblom 2018;Greif et al. 2019;Capano et al. 2020;O'Boyle et al. 2020) as well as non-parametric methods (Landry & Essick 2019;Essick et al. 2020;Landry et al. 2020;Legred et al. 2021). Although this approach does not provide an EoS-insensitive model, its advantage is that it does not require an approximate analytical modeling of the Λ parameter and is based on the current empirical multi-messenger constraints.
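The defining relation above can be evaluated directly once an EoS supplies a radius R and Love number k_2 for a given source-frame mass. The short Python sketch below is our own illustration with illustrative input values, not code from the analysis described here; it makes the compactness scaling and the redshift dependence explicit.

```python
# Minimal sketch (assumed values): Lambda = (2/3) k2 / C^5 with C = G m_src / (R c^2),
# and m_src = m_det / (1 + z) linking the detector-frame mass to the redshift.
G = 6.674e-11        # m^3 kg^-1 s^-2
C_LIGHT = 2.998e8    # m s^-1
M_SUN = 1.989e30     # kg

def tidal_deformability(m_src_msun, radius_km, k2):
    compactness = G * m_src_msun * M_SUN / (radius_km * 1e3 * C_LIGHT**2)
    return (2.0 / 3.0) * k2 / compactness**5

def source_frame_mass(m_det_msun, z):
    return m_det_msun / (1.0 + z)

# A 1.4 M_sun star with R = 12 km and k2 = 0.09 (values typical of many EoS)
# gives Lambda of roughly 4e2, the same order as the injected value used later.
lam = tidal_deformability(source_frame_mass(1.54, 0.1), 12.0, 0.09)
```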
Waveform model To model the GW waveform, we use the Phenom-NSBH model (Thompson et al. 2020), which extends the analytical inspiral waveform to incorporate the merger and post-merger dynamics through calibration against numerical-relativity waveforms. This waveform model is built on the IMRPhenomC model for the amplitude and the IMRPhenomD model for the phasing, covering non-precessing systems with mass ratios (Q = m_BH/m_NS) from equal-mass up to 15, BH spins aligned with the orbital angular momentum up to a dimensionless value of |χ_BH| = 0.5, and Λ ranging from 0 (the BBH limit) to 5000. Note, however, that in the case of χ_BH < 0 the model amplitude is calibrated to NR simulations only up to Q < 4 and hence may not perform well beyond this mass ratio (an alternative state-of-the-art model for NSBH mergers is based on the effective-one-body approximation; see Matas et al. (2020)). Figure 2. The dimensionless sensitivity curves sqrt(S_n) × f and the characteristic signals h_c = f × h(f) as a function of frequency f. The solid lines correspond to the Phenom-NSBH model and the dot-dashed lines to the corresponding BBH binary signal modeled with IMRPhenomD, for a system at z = 0.02 (d_L ≈ 100 Mpc). As the mass ratio of the system increases, the mergers change from total disruption of the NS, with the amplitude of the waveform being exponentially suppressed at high frequencies, to a non-disrupted NS for which the waveform is comparable to the BBH waveform. The shaded blue and green lines correspond to cut-off frequencies f_cut(Q = 2) = 1535 Hz and f_cut(Q = 4) = 2000 Hz. In modelling the Phenom-NSBH waveform, the tidal corrections to the GW phase are incorporated as post-Newtonian (PN) spin-induced quadrupole corrections at 5 PN and 6 PN order (Hinderer et al. 2010b;Vines et al. 2011). To first order, the change in GW phase scales linearly with Λ, through a dimensionless quantity, the effective tidal deformability, defined as Λ~ = (16/13)(m_NS + 12 m_BH) m_NS^4 Λ / (m_BH + m_NS)^5. This scaling shows that finite-size effects are expected to be mainly detectable for NSBHs involving low-mass BHs. As the mass ratio increases, tidal effects scale away as Q^−4 in the phase, making the non-disruptive NSBH signal hard to differentiate from a BBH signal. Therefore, in the case of short (less than ≈ 100 s) or low-SNR (less than ≈ 30) signals, the only differences between these two waveform models (non-disruptive NSBH systems and BBH systems) could be the slightly different properties of the remnant quantities after merger, which are hard to distinguish with the current GW detectors (Foucart et al. 2013b;Takami et al. 2014). Note that, in the case of possibly long and loud GW signals, the accumulated waveform phase difference between a BBH and a disruptive NSBH system (due to the presence of tidal terms in the latter case) can lead to bounds on Λ such that the BBH case (Λ = 0) gets excluded. However, the overall bounds on Λ for the non-disruptive systems are still expected to be broader than those for the disruptive ones, and generally uninformative in most cases. For the GW amplitude in the Phenom-NSBH model, a semi-analytical modelling of tidal effects in the late inspiral is adopted (Pannarale et al. 2015b). In this modelling, the merger of an NSBH binary is considered disruptive whenever the mass ratio Q < Q_D(C, χ), where χ is the BH spin parameter and the threshold Q_D is fitted as a polynomial in the compactness C and in χ, with fitting parameters a_{i,j} as given in Pannarale et al. (2015a). The corresponding cut-off frequency f_cut associated with this threshold is fitted by an analogous expansion, with fitting parameters f_{ijk} (Pannarale et al. 2015a). The fitted threshold parameters depend on the EoS only implicitly, through the NS compactness. Moreover, the aforementioned waveform model approximates C in terms of Λ by using the quasi-universal compactness-Love relation (Yagi & Yunes 2017), which expresses C as a low-order polynomial in ln Λ. This allows us to quantify the disruption of each merger solely based on the waveform parameters that can be derived from the detected GW data. Figure 2 shows the non-spinning NSBH GW characteristic signal h_c = f × h(f) at z = 0.02 (≈ 100 Mpc), for disruptive and non-disruptive mergers as compared to their BBH analogue signal. For the disruptive mergers, f_cut is also indicated and shows the approximate frequency at which the waveforms start to deviate from a BBH signal.
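As a concrete illustration of the compactness-Love step just described, the sketch below evaluates C from Λ using the quadratic-in-ln Λ form of the relation. The coefficients are the values commonly quoted for the Yagi & Yunes (2017) fit and are an assumption here; they should be checked against the original reference before any quantitative use.

```python
import numpy as np

# Quasi-universal C-Love relation, C = a0 + a1*ln(Lambda) + a2*ln(Lambda)^2,
# with coefficients taken (as an assumption) from values commonly quoted in
# the literature for the Yagi & Yunes (2017) fit.
A0, A1, A2 = 0.360, -0.0355, 0.000705

def compactness_from_lambda(lam):
    x = np.log(lam)
    return A0 + A1 * x + A2 * x**2

# e.g. Lambda = 490 gives C of about 0.17; this compactness then enters the
# fitted disruption threshold Q_D(C, chi) and the cut-off frequency f_cut.
print(compactness_from_lambda(490.0))
```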
As expected, increasing the mass ratio would change the mergers from being totally disruptive to non-disruptive. Although not shown here, we have also investigated that disruptive waveforms with a higher Λ have larger deviations from their BBH counterpart waveforms. 2 Simulations and sources We simulate samples of individual NSBH binary events in the detector bands of the 3-interferometer ET and CE (i.e., 5 Hz to 4 kHz), and using a sampling frequency rate of 4096 Hz. For the possible CE sites, we choose the Northwestern USA and Southeastern Australia location (see Gossan et al. (2022) for the detector coordinate details and further work). The ET site implemented in lalsuite (LIGO Scientific Collaboration 2018), and hence used in this work, is same as the location of the Virgo detector. Having multiple detectors allows us to localize the GW sources and infer the different GW polarization content, thus allowing one to break the degeneracy between the luminosity distance and inclination angle for high SNR systems (Borhanian & Sathyaprakash 2022). Note that we are assuming that the calibration errors will be less than 1% in the amplitude and phasing (see Huang et al. (2022) for more detailed analysis). For the injected cosmological parameters we use H 0 = 67.4 km s −1 Mpc −1 , along with the rest of the parameters as reported in Aghanim et al. (2020). In order to isolate the effect of tidal interactions on the inference, as well as due to the limitations of the waveform model, we do not consider the effects of spin precession and orbital eccentricity here. The inclusion of spin precession, however, has been shown to improve the H 0 inference by breaking the distance-inclination degeneracy, in some NSBH systems (cf. Vitale & Chen (2018b)). We generate two sets of simulations: I) Set of individual mock binary samples, that will allow us to compare the disruptive and non-disruptive mergers and their effect on the H 0 inference. We generate such mock binary samples, having fixed m N S,d = 1.4M , Λ = 490. This choice of Λ corresponds to fixing the EoS parameter to the maximum-a-posteriori relation based on the results of Raaijmakers et al. (2021a), inferred from NICER's pulsar measurements and the BNS GW events detected by LIGO, Virgo and KAGRA (see also section 2.1 and the black line of Fig. 1). We vary the mass ratios to be Q = m BH,d /m N S,d = {2, 4, 6} , at a certain sky position and polarization of (ra, dec, polarization) = (3.45, −0.41, 2.35) rad. In order to study the impact of GW parameters on the inference, we also choose the BH spins to be χ BH = {0, 0.5, −0.5} and consider systems with different inclination angles θ jn bracketing to the two extreme cases of edge-on and face-on binaries. Increasing the distance, as expected, would broaden the inferred bounds on Λ, as the disruptive binaries would merge outside the detector's sensitivity band. Here we consider binaries located at z = 0.07 and z = 0.2. II) Due to possible degeneracies between parameters and due to measurement uncertainties, the analysis of such single events may lead to multi-modal H 0 measurements. To overcome these limitations, we also analyse stacked catalog of mock binary samples of size N =10, 70 and 100, in the ET+CEs detector era. We distribute the binary systems uniformly in sky location, orientation, and volume (∝ d 2 L ). We also sample from uniform distributions of BH and NS source masses and spins, same as that of Sec. 2.4.1. 
For generating these samples, we also vary the initial choice of EoS parameter by uniformly sampling ≈ 3000 EoS choices based on the posterior distribution of Raaijmakers et al. (2021a). We consider systems with a network SNR above 8 to be detectable. Gravitational wave parameter inference In order to perform a probabilistic inference of Λ and subsequently H_0 through Bayesian analysis (as described in, for example, Appendix B of Abbott et al. (2019a)), we first evaluate the GW probability distribution function (PDF) P(θ_i|x), with x being the simulated GW strain observation and θ_i the set of waveform parameters that we want to estimate. For this, we marginalize the GW likelihood over the coalescence phase. We sample the marginal GW likelihood using the relative binning approach (Zackay et al. 2018), as implemented in the inference library PyCBC (Biwer et al. 2019) together with the DYNESTY nested sampler (Speagle 2020), using 2100-2500 live points. Figure 3. The posterior probability density and the 90% credible regions of the Λ parameter as measured for Q = 2 (disruptive), Q = 4 (mildly-disruptive), and Q = 6 (non-disruptive) systems located at z = 0.07 (top) and z = 0.2 (bottom). The orange, pink and green shaded regions correspond to the choice of χ_BH = 0, χ_BH = 0.5 and χ_BH = −0.5, respectively. The black vertical lines correspond to the injected value of Λ = 490 and the gray shaded regions show the implied prior on the Λ parameter. The relative binning method allows for fast analysis of GW signals by assuming that the difference between adequate waveforms in the frequency domain is describable by a smoothly varying perturbation. Having the fiducial gravitational waveforms close to where the likelihood peaks, this approach reduces the number of frequency points for the evaluation of the waveforms to O(10^2), as compared to O(10^7) for traditional GW parameter inference. The prior choice on each parameter is as follows. We draw uniformly distributed BH masses from P(m_BH,s) = U(2.5 M⊙, 12 M⊙), with the lower limit set so as to allow BHs in the mass-gap region and the upper limit set so as to avoid running into high values of Q for which the waveform approximation is not valid. The NS masses are drawn from P(m_NS,s) = U(1 M⊙, 2.5 M⊙), with the upper limit chosen to be above the current estimates of the maximum mass of NSs (Legred et al. 2021;Pang et al. 2021b). We consider aligned-spin binary components with the dimensionless BH and NS spin parameters drawn from P(χ_BH) = U(−0.5, 0.5) and the low-spin prior P(χ_NS) = U(−0.05, 0.05), respectively. In a future work, we will examine astrophysically motivated populations for NSBH mergers (Broekgaarden et al. 2021a;Boersma & van Leeuwen 2022). The redshift parameter is sampled from P(z) = U(0, 1.5). The inclination angle, sky position, phase and polarization angles are isotropically distributed. The luminosity distance d_L is distributed uniformly in the comoving volume such that P(d_L) ∝ d_L^2. For the EoS parameter, we choose a prior based on the multi-messenger constraints (as explained in Sec. 2.1), covering up to 3000 viable EoS choices. Consequently, for each realisation of the sampler, the Λ parameter is derived based on the sampled set of {z, m_NS,d, EoS} parameters, using the PyCBC-implemented transformation function Lambda from multiple tov files. For this we require the sampling to be done in terms of z and m_NS,d instead of Λ and m_NS,d.
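A schematic of the prior draws and the EoS-based Λ assignment described above is sketched below. The container eos_tables, the helper names, and the rule that masses above the maximum mass of a drawn EoS are assigned Λ = 0 (mirroring the behaviour of the transformation function discussed later in the text) are our own illustration, not the actual PyCBC implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_prior_sample(eos_tables):
    """One prior draw; eos_tables is a list of (masses, lambdas) arrays, each a
    tabulated Lambda(m) curve from the EoS posterior, with masses ascending."""
    m_ns_s = rng.uniform(1.0, 2.5)            # source-frame NS mass [M_sun]
    m_bh_s = rng.uniform(2.5, 12.0)           # source-frame BH mass [M_sun]
    chi_bh = rng.uniform(-0.5, 0.5)
    chi_ns = rng.uniform(-0.05, 0.05)
    z = rng.uniform(0.0, 1.5)
    masses, lambdas = eos_tables[rng.integers(len(eos_tables))]
    if m_ns_s > masses.max():                 # above M_max for this EoS: Lambda = 0
        lam = 0.0
    else:
        lam = np.interp(m_ns_s, masses, lambdas)
    return dict(m_ns_det=m_ns_s * (1.0 + z),  # detector-frame masses
                m_bh_det=m_bh_s * (1.0 + z),
                chi_bh=chi_bh, chi_ns=chi_ns, z=z, Lambda=lam)
```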
Moreover, in order to speed-up the inference, we sample in detector-frame chirp mass and mass ratio instead of the individual masses. Hubble constant inference We estimate H 0 using a two-step Bayesian inference analysis. In order to estimate H 0 from the GW data, we use the redshift-distance relation as given by: where E(z ) = Ω r (1 + z ) 4 + Ω m (1 + z ) 3 + Ω DE corresponds to the assumption of a flat universe and Ω r , Ω m , Ω DE are the radiation, matter and dark energy energy densities respectively. In this analysis we use the third order Taylor expansion of Eq. 6 around z = 0, i.e. for low redshifts. Having the GW strain data x of a single event, the posterior on H 0 can be obtained from the semimarginalized GW likelihood and the prior P 0 (H 0 ): where P (x, Λ, d L , m N S,d |H 0 ) is the GW likelihood marginalised over all the parameters other than {d L , Λ, m N S,d }. Equations eq. 1 and eq. 6 allow us to write down d L as a function of cosmological parameters and the GW inferred parameters {Λ, m N S,d } such that d L = d L (Λ, m N S,d , H 0 ). Therefore the marginalized likelihood can be further expanded as: where again, P 0 shows the prior on each parameter. The constraint between parameters is defined through P (d L |m N S,d , Λ, H 0 ) and can be replaced by a delta function such that: where in the last line we have applied the delta function and also marginalized over the rest of the parameters. We perform the sampling of the semi-marginal likelihood L(H 0 ) = P (x|d L [m N S,d , Λ, H 0 ]) using pymultinest (Buchner et al. 2014). In order to do so, we first perform a kernel density estimation fit to the semi-marginal likelihood using kalepy (Kelley 2021). We assume a flat prior on H 0 of P 0 (H 0 ) = U(10, 300) km s −1 Mpc −1 . In addition to this method, we have also performed the direct sampling of H 0 through pycbc and recover similar results, yet the twostep inference shows better convergence in some cases such as for low-redshift systems. Single events: Λ detectability It is widely expected that disruptive NSBH mergers, once visible in a detector band, will allow for a more accurate measurement of the NS tidal effects, and hence the redshift. In order to quantify the effect of highfrequency tidal disruption on the parameter inference, we analyse a selected sample of single NSBH events with different Q and χ BH . During the inference, the EoS parameter is being varied freely, with a prior consistent with the posterior distribution found in Raaijmakers et al. (2021a) The derived constraints on the Λ parameter are shown in Fig. 3, for systems with z = 0.07 (top, d L ≈ 300 Mpc) and z = 0.2 (bottom, d L ≈ 1 Gpc). All the systems that are located at z = 0.07 merge within the ET+CEs detectors bandwidth. The network SNRs for the top panel are ≈ 367, 470, 541, and for the bottom panel are ≈ 107, 138, 159 for the Q = 2, 4, 6, respectively. The initial inclination angle is chosen as θ ij = 90 • for these plots. Due to the high SNR of the selected events, we have shown that the results remain consistent once changing θ ij . This is not generally true for systems at lower SNR and there we can clearly see the effect of θ ij on the inference of d L , and consequently H 0 . Focusing on the non-spinning cases considered here, we see that, the disruptive-mergers (Q = 2) constrain Λ (95% credible interval) to Λ = 570 +216 −195 (top) and Λ = 578 +635 −516 (bottom), with the relative error of ≈ 16% in both cases. 
In the case of highly-disruptive mergers (Q = 6), these values worsen and are given as Λ = 868 +420 −460 (top) and Λ = 1549 +1284 −1223 (bottom), with the relative error of ≈ 77% in the former case, and an uninformative constraint on the latter case. This analysis does not clearly show the expected positive (negative) effect of prograde (retrograde) BH spin on the inference of Λ. Possible sources of limitations can be the degeneracies (such as the degeneracy between the reduced mass, entering at 1 PN order, and the spin, entering at 1.5 PN order (Cutler & Flanagan 1994)), as well as the limi-tation of Phenom NSBH model at prograde spins: we anticipate that a more detailed analysis of the BH spin effects on Λ inference is not feasible with the current limitations of the GW waveform models. In general, having a flat prior on Λ, exclusion of Λ = 0 value from the posterior samples would suggest that GW information alone can distinguish NSBH systems from a BBH merger. This may happen if a merger happens in-band for a detector system, or by having a long detected signal duration for an event: over a long inspiral time, the BBH and non-disruptive NSBH signals can get a dephasing of ≈ O(1) radians from the point-particle tidal effects, which makes the waveforms non-identical. However, it is important to keep in mind that, in our analysis, the inferred bounds also depend on the {m N S , z, EoS} priors which may apriori exclude the Λ = 0 value. More specifically, the Lambda from multiple tov files function assigns a Λ = 0 value to each sampled m N S,s that exceeds the maximum allowed NS physical mass for a given EoS parameter file, and hence allows for some of the systems to be predicted as BBH systems. This implied prior on Λ is shown in the gray shaded regions of Fig. 3. For the middle and right panels, the minimum allowed Λ values of the prior are 91 and 132, respectively. In the cases where Λ = 0 is allowed by the implied prior on Λ (left panels), exclusion of this value from the posterior samples would suggest that GW information alone can distinguish this system from a BBH merger, which is critical for the validity of this approach on real GW data. This can be seen in the top left panel of Fig. 3 but not in the bottom left panel, due to the lower SNRs of the latter systems. Single events: pair plots and statistical uncertainties In order to see the degeneracy between sampled GW parameters and their effect on the overall inference, we have shown the full inference results for two of the considered examples, including the bounds on H 0 from these single events.For this section we are showing the results with the method of direct sampling of H 0 , instead of the post-processing approach. This is necessary if we want to capture the degeneracy between the Λ, z, H 0 parameters all in one plot. We confirm that both of the methods result in similar constraints on the parameters. Our results for the inference of the redshift parameter are comparable with the similar bounds found by studying single BNS systems at z < 1 (Messenger & Read 2012). However, while the authors of Messenger & Read (2012) considered a Q = 1, m N S,s = 1.4M BNS case, such a system does not represent a physical NSBH binary and hence we can not perform a one-toone comparison with the results of Messenger & Read (2012). 
We note that, even for highly-disruptive systems, the measurement of H 0 by individual events is affected by covariances between the parameters that result in a low precision, as well as hidden systematic uncertainties. This can be seen, for instance, for the low-redshift points of Fig. 6, where the error on H 0 inference is following the trend of z error, rather than being small. Also, as shown in Fig. 6, we clearly see that the ability to constrain H 0 from single events is limited at high distances and saturates at a certain error limit. This follows from the limitation on the inference of z, which is mainly tied to the uncertainties in inferring Λ,as well as d L . A similar analysis in Chatterjee et al. (2021) as well as in Ghosh et al. (2022) shows the same limitations on the H 0 inference at high distances, for the case of BNS systems, even though the modelling approaches are different from what we consider. Although not shown here, we have also found that fixing the choice of the EoS parameter does not significantly improve the fractional error estimates of H 0 and redshift at high distances. Overall, in the case of multiple events, the statistical uncertainty is expected to decrease as inverse square root of number of events yet this does not affect the systematic uncertainties and general limitations. We note again that, we have assumed that calibration errors will be negligible in the next generation GW detectors, yet this assumption should be revised in future works. Stacked events: H 0 detectability The next generation detectors, such as CE and ET, are expected to detect tens of thousands of NSBH events per year and hence can provide precise statistical measurement of H 0 . Here we stack the individual H 0 measurements of simulated synthetic NSBH mergers as described in Sec. 2.3. The combined H 0 PDF is found by multiplying the single PDFs of eq. 7 such that: where we use an overall flat prior P (H 0 ) on the stacked H 0 to get the stacked PDF. The results are shown in Fig. 7 for the different catalogues considered. The gray dashed lines show the individual PDFs and the green line shows the stacked PDF which is peaked at the injected H 0 value. We find that having N = 10, 50 and 100, the H 0 can be measured with precision of ≈ 13%, 6.6% and 4% (at 68% credible interval). This precision is in agreement with the √ N scaling of the relative errors , which can be used as a first order estimate for uncertainties in a catalogue of events (cf. Mortlock et al. (2019b)). We note that, in all the catalogues considered, the nondisruptive NSBH systems are the most dominant as they naturally cover a broader range of the parameter space once we generate the population parameters uniformly. This may not necessarily be the case for more realistic population models of NSBH systems (cf. Broekgaarden et al. (2021b); Mapelli et al. (2019)). If a population allows for more disruptive mergers (or that we end up detecting more of these systems) (Mapelli & Giacobbo 2018), we would naturally expect more stringent constraints on Λ-related parameters. Also, note that we are not incorporating any GW detector selection effects in this analysis (cf. Abbott et al. (2019b); Gerosa et al. (2020), and references therein, for possible methods on including the selection biases.) Our results for N = 100 case show the same order of inference precision to that of a similar study performed on BNS mergers in the CE detector era (Chatterjee et al. 2021). 
There the authors report a 2% precision in the measurement of H_0 with N = O(10^3) BNS events as seen by one CE detector, using the universal binary-Love relations to model Λ. CONCLUSIONS We have analysed the prospects of inferring H_0 from the GW data of NSBH events without considering EM counterparts, based on the foundational idea that NS tidal effects can be exploited to break the redshift degeneracy of the GW waveforms, as proposed by Messenger & Read (2012). In this paper, we model the NS tidal parameter based on the latest multi-messenger constraints on the NS matter effects from NICER + LIGO/Virgo/KAGRA, hence modelling the Λ parameter solely based on empirically viable EoS choices. Our analysis is done on both single NSBH systems, as well as their synthetic catalogues, in the ET+CE detector era. The unique merger phenomenology of NSBH systems allows for highly disruptive events that are key candidates for the precise measurement of the NS tidal parameter, once they merge within the sensitivity band of the GW detectors considered. In the case of single events, by comparing some examples of disruptive and non-disruptive mergers, we show that in-band mergers of disruptive systems can constrain Λ with ≈ 36% precision, while this number increases to ≈ 50% for the in-band mergers of non-disruptive systems. Figure 6. The fractional statistical error in the measurement of z (red) and H_0 (blue) as a function of d_L for a few representative parameter-estimation runs with Q = 2. The injected parameters for these sample runs are the same as those given in Sec. 2.3. The gray shaded region shows the threshold of SNR = 30. In the case of a system merging outside the detector's sensitivity band, the disruptive mergers result in a mostly uninformative precision of ≈ 95%, and the non-disruptive mergers lead to totally uninformative constraints on Λ. Note, however, that the specific values of these ranges are highly dependent on the specific initial signal configurations and may change when comparing signals with, e.g., different duration, distance, inclination and sky location. The overall improvement in H_0 inference is, however, limited due to the fundamental degeneracy between the redshift and the H_0 parameter. Importantly, we find that the precision at which the redshift can be inferred with this approach is not strong enough to result in unbiased and highly constrained measurements of H_0 from single events. This underlines the need for accurate localization of systems in order to have a highly accurate redshift (and H_0) estimation. Also, we realise that the measurement of the redshift (and hence of H_0) with this approach is strongly affected by the details of spin precession and tidal distortion modelling in GW waveforms, highlighting the need for improved GW template models of NSBH mergers. In order to improve against this fundamental limitation of H_0 inference, we analyse synthetic sets of NSBH GW signals with samples of N = 10, 50, 100 events, with the population being generated by uniform sampling of all parameters. Our results show that, for the ET+CE detector era, this method can result in an unbiased 13-4% precision in the measurement of H_0, once the same waveform model is used for the injection and recovery of the signals. Figure 7. The single and stacked H_0 PDFs for simulated NSBH mergers in the ET+CEs detector era with N = 10, 50, 100 simulated events. The vertical line corresponds to the true value of H_0 = 67.4 km/s/Mpc. The estimated precisions on H_0 are ≈ 13%, 6.6% and 4% for the N = 10, 50, 100 cases, respectively.
Future detailed analysis following this work can be done by including the effect of orbital precession in the analyzed signals. This is important as systems with large retrograde BH spins can result in significant orbital precession. We also note that, due to the intrinsic limitation of the waveform model Phenom-NSBH to m_NS,d ≲ 3 M⊙, the detectable NS population depends on the redshift of the event. This causes high-mass NSs to be less dominant at high redshifts, and it also means that the lightest NSs (i.e., m_NS,s ≈ 1 M⊙) cannot be located at redshifts higher than z = 2, which is a serious limiting factor in the analysis of these events in the next-generation detector era. Also, including higher than quadrupole multipole moments in the waveform modelling is shown to be necessary for precise parameter measurements for systems with Q > 3 (Kalaghatgi et al. 2020). Another important avenue of improvement is to study more realistic NSBH population models. In principle, population models predicting a larger number of disruptive mergers would result in more stringent constraints on H_0.
2022-07-26T01:16:23.080Z
2022-07-24T00:00:00.000
{ "year": 2022, "sha1": "f5cb7f818b9f442ccae210ea3ac8829a0a4bce23", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f5cb7f818b9f442ccae210ea3ac8829a0a4bce23", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
210800722
pes2o/s2orc
v3-fos-license
Fast Computation of Electrostatic Interactions for a Charged Polymer with Applied Field Using a hybrid simulation approach that combines a finite difference method with a Brownian dynamics, we investigated the motion of charged polymers. Owing to the fact that polymer-solution systems often contain a large number of particles and the charged polymer chains are in a state of random motion, it is a time-consuming task to calculate the electrostatic interaction of the system. Accordingly, we propose a new strategy to shorten the CPU time by reducing the iteration area. Our simulation results illustrate the effect of preset parameters on CPU time and accuracy, and demonstrate the feasibility of the “local iteration” method. Importantly, we find that the increase in the number of charged beads has no significant influence on the time of global iterations and local iterations. For a number of 80 × 80 × 80 grids, when the relative error is controlled below 1.5%, the computational efficiency is increased by 8.7 times in the case that contains 500 charged beads. In addition, for a number of 100 × 100 × 100 grids with 100 charged beads, the computational efficiency can be increased up to 12 times. Our work provides new insights for the optimization of iterative algorithms in special problems. INTRODUCTION The diffusion of charged polymers is often the rate-determining step in many biological processes. Stochastic methods (such as Brownian dynamics) and continuum methods (such as finite difference method) are two main computational methods for studying diffusion. [1−6] In the process of diffusion, electrostatic force plays an important role in polymer interactions. At present, electrostatic properties of polymers have been extensively studied in the interdisciplinary areas of biomolecule, computational mathematics, physics, and chemistry. There are many methods for calculating electrostatic interactions. However, the existing methods (such as particle-mesh Ewald (PME), Ewald summation, particle-particle particle-mesh (P3M), Coulomb force formula, etc.) all require a lot of CPU time and use large computer memory. [7−11] When a huge three-dimensional simulation system is taken into account, it is unsuitable to use these methods to calculate electrostatic forces owing to the serious challenge in computing time. Therefore, an efficient method is urgently needed for saving computing time. The numerical me-thod is more advantageous, because it sacrifices some accuracy to get a great increase in the calculation speed. At the same time, the numerical method has a weaker dependence on computer memory. [12] Hence, through combining the advantages of traditional methods and numerical methods to simulate the motion of charged polymers is an effective and accurate way, where the numerical methods is mainly used to handle the electrostatic interaction of charged polymers by solving the Poisson equation. [13] This is important because many diffusing molecules in biological systems are charged. The key to solving Poisson equation is how to figure out the potential distribution of electrostatic field quickly and accurately by combining the boundary conditions, and provide accurate force analysis for the subsequent simulation of charged polymer motion. Traditional numerical simulations of three-dimensional problems tend to be computationally intensive because of the requirements on the memory and CPU time to obtain solution with desirable accuracy. 
Even the most advanced computers may not be able to handle them directly. Thus, in electrostatics, calculating the potential distribution of a given charge density function is crucial. In solving such three-dimensional problems, finite difference method (FDM) is a relatively simple and convenient method in many numerical methods, which has the mature theory and the optional accuracy. [14] When dealing with the regular area, it is easier to program and im-plement on computer than the finite element method (FEM) and the finite volume method (FVM). [15] The principle of FDM is to discretize Poisson's equation. [16−20] Most of the discretized Poisson equations will give a series of linear equations with a large number of unknowns and diagonally dominant coefficient matrices. By using Jacobi, Gauss-Seidel, successive over relaxation (SOR), or Multigrid iterative methods, however, the unknown solutions at the corresponding grid nodes can be approximated iteratively. Since the iterative algorithm takes up too much CPU time, almost all optimizations are for iteration algorithms. Kamboh et al. have proposed the parallel Jacobi method (PJ), which can reduce the computing time for large data moderately. [21−23] It is superior to the sequential Jacobi (SJ) and the sequential Gauss-Seidel (SGS). Scheduling Relaxation Jacobian (SRJ) is another method for classical optimization, which greatly speeds up convergence and reduces computation time, and it has certain advantages in dealing with complex boundaries. [24−31] In addition, there are also some studies on Multigrid method, for solving 3D Poisson equation with high accuracy, which bases on Richardson extrapolation method. [32−36] The fourth-order compact difference discretization scheme and V-cycle algorithm of multigrid are used to solve the Dirichlet boundary value problem of 3D Poisson equation. Finally, the expensive computation of 3D Poisson equation is solved very well. This method is characterized by high accuracy and some speed improvement. Nonetheless, in Brownian dynamics, we need to calculate millions or even tens of millions of time steps, which means that the calculation of electrostatic interactions will take substantial time even the aforementioned methods are fast in calculation. Therefore, in the process of simulating the movement of charged polymer chains, it is necessary for us to find a new way to save time and ensure accuracy for millions or even more calculations. In this study, we present "local iteration" to accelerate the simulation of charged polymers's motion, which possesses the irrelevant local iteration process and global iteration process. The simulation model and corresponding optimization algorithm are presented in the second section. And in the third section, we systematically discuss the effect of the parameters in "local iteration" on the simulation efficiency and accuracy, which further verifies the feasibility of our optimization algorithm. Finally, comprehensive summarization of the processes and results of the simulation test are presented in the last section, as well as some prospections. SIMULATION MODEL AND METHODS As shown in Fig. 
1, in a simulation box, we fix the electric potential of the left face (blue) as U and that of the other faces (gray) as 0, and place in it a polymer, which is modelled as a conventional bead-spring chain composed of N successive Lennard-Jones (LJ) [37−39] beads connected through the finitely extensible nonlinear elastic (FENE) potential, [40] U_LJ(r) = 4ε[(σ/r)^12 − (σ/r)^6] and U_FENE(r) = −(1/2) κ_b b^2 ln[1 − (r/b)^2], where σ is the diameter of a bead, ε gives the strength of the LJ potential, κ_b represents the spring constant, and b denotes the maximum separation distance between the consecutive beads along the polymer chain. In addition, we set the charge of the beads in the polymer as q and solve the Poisson equation with the boundary electric potential to calculate the electrostatic interactions between charged beads. Therefore, we can employ Brownian dynamics to simulate the charged polymer diffusion, which includes a noise and a friction term. Having specified the interaction potentials, the BD equation of motion for bead i balances the conservative force −∂U_i/∂r_i against a friction term with friction coefficient μ and a random force F_i^R(t), where U_i is the sum of all the interaction potentials discussed above and F_i^R is related to μ by the fluctuation-dissipation theorem, ⟨F_i^R(t) F_j^R(t′)⟩ = 2 k_B T m μ δ_ij δ(t − t′) I, where I is the unit tensor. We integrate the equations of the charged polymer motion with a time step Δt. In our model, σ, ε, and m denote the units of length, energy and mass, respectively, which result in a unit of time t = t*(ε/(mσ^2))^{1/2}, of charge q = q*/(4πε_0 σε)^{1/2}, and of electric field E = E*(4πε_0 σε)^{1/2} σ/ε. We present our results in reduced units in which σ = ε = m = 1, the temperature is k_B T = 1.0, and the friction coefficient is set as μ = 0.5τ^{-1}. Therefore, the dimensionless parameters of the polymer are fixed as κ_b = 30 and b = 1.5, the charge of the beads in the polymer is q = 10, and the time step is chosen as Δt = 0.0002. We fix the size of the simulation box as L = 80, the electric potential of the left surface as U = 4, and the relative dielectric constant as ε_r = 80. In this work, we focus on the rapid solution of the electrostatic interactions between the charged beads in a polymer under the external electric field, which corresponds to solving a three-dimensional Poisson equation, ∇^2 u(r) = −ρ(r)/(ε_0 ε_r) for r ∈ Ω, with u(r) = α(r) on the boundary of Ω, where the charge density ρ(r) = Σ_i q_i δ(r − r_i) can be represented by the sum of a series of Dirac delta functions; u is the electric potential, ρ is the charge density as a function of position, ε_0 and ε_r are the vacuum permittivity and the relative permittivity of the solvent, respectively, α represents a potential function of the boundary node, and Ω denotes a continuous domain in 3D space with boundary conditions prescribed on its boundary. As shown in Fig. 2, in order to obtain the numerical solution of u_{i,j,k}, we use the traditional 7-point central difference to discretize Eq. (4), where i, j, and k represent the node location in the discretized mesh and Δx, Δy, and Δz represent the mesh spacing in the x, y, and z directions (Δx = Δy = Δz = 1). The charge density of each node is estimated by a linear interpolation scheme from the charged beads in the polymer. After discretization, Eq. (6) represents the linear equations, which can be expressed in the standard matrix form Au = B. Thus, we can obtain u_{i,j,k} as u_{i,j,k} = (1/6)[u_{i+1,j,k} + u_{i−1,j,k} + u_{i,j+1,k} + u_{i,j−1,k} + u_{i,j,k+1} + u_{i,j,k−1} + ρ_{i,j,k} Δx^2/(ε_0 ε_r)], where ρ_{i,j,k} is the charge density distribution function.
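One standard way to realize the linear-interpolation ("cloud-in-cell") assignment of the bead charges to the grid nodes mentioned above is sketched below; the function name and interface are our own, and bead positions are assumed to lie inside the grid.

```python
import numpy as np

def assign_charges(positions, charges, shape):
    """Spread point charges onto grid nodes by trilinear (cloud-in-cell) weights.
    positions: (N, 3) bead coordinates in grid units (spacing 1); charges: (N,)."""
    rho = np.zeros(shape)
    for (x, y, z), q in zip(positions, charges):
        i0, j0, k0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
        fx, fy, fz = x - i0, y - j0, z - k0
        for di, wx in ((0, 1.0 - fx), (1, fx)):
            for dj, wy in ((0, 1.0 - fy), (1, fy)):
                for dk, wz in ((0, 1.0 - fz), (1, fz)):
                    rho[i0 + di, j0 + dj, k0 + dk] += q * wx * wy * wz
    return rho
```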
In this work, we use the Successive Over-Relaxation (SOR) method for solving the above equations, which is one of the most efficient iterative methods. Compared with the Gauss-Seidel and Jacobi iteration methods, this method shows a great advantage in calculation speed for large linear systems; its iterative format can be written as u^{n+1}_{i,j,k} = (1 − λ) u^{n}_{i,j,k} + (λ/6)[u^{n+1}_{i−1,j,k} + u^{n+1}_{i,j−1,k} + u^{n+1}_{i,j,k−1} + u^{n}_{i+1,j,k} + u^{n}_{i,j+1,k} + u^{n}_{i,j,k+1} + ρ_{i,j,k} Δx^2/(ε_0 ε_r)], where λ represents the relaxation factor and n denotes the number of iteration steps. Obviously, when λ = 1, the SOR method reduces to the Gauss-Seidel iteration. The optimal relaxation factor can be obtained by [30] λ_optimal = 2/{1 + sin[π/(Mig − 1)]}, where Mig is the minimum number of grid nodes in the three directions. For 80 × 80 × 80 grids, Mig = 81 and λ_optimal = 2/{1 + sin[π/(81 − 1)]} ≈ 1.924. Thus, we can obtain the potential of each node, and further calculate the electric field at the positions of the charged beads by the same linear interpolation scheme, which means that we can calculate the electrostatic interactions between the charged beads in the polymer under the external electric field. In order to compare the CPU time among the SOR, Gauss-Seidel, and Jacobi iteration methods, we display the CPU time for these methods in Fig. 3, which also includes the direct solution of the Coulomb force between charged beads. The comparison shows that, with the increase of the number of charged beads, the CPU times of all these methods increase. Unlike the direct solution of the Coulomb force, however, the SOR method is insensitive to the increase of the number of charged beads. For the current grid numbers (80 × 80 × 80), we find that when the number of charged beads Num is about 75, the CPU time for SOR is almost equal to that of the direct solution of the Coulomb force, which indicates that the SOR method is more suitable for large-scale charged-bead systems. In fact, in Brownian dynamics, we often need to calculate millions or even tens of millions of time steps, which means that the calculation of electrostatic interactions will take substantial time. If we only consider one charged polymer in a solvent with uniform dielectric constant, then, within a certain time interval, the effect of the motion of the charged polymer on the potential of grids far from the charged polymer can be neglected, especially in the presence of the external electric field. Thus, we can save most of the iteration time for grids far from the charged polymer, whose potential values have almost no change within a certain time interval. This means that we can use "local iteration" instead of "global iteration", on the premise of guaranteeing calculation accuracy, to save a lot of unimportant iteration time. Specifically, in the simulation box, the charged beads in the polymer often occupy only a small area. Thus, over a short simulation time, we only calculate the potential of the adjacent grids, and the potential of the other grids is fixed and will be solved again after several steps. For example, in Fig. 4, in order to better verify the "local iteration" method, we put the charged polymer in the middle of the simulation box. We first define a smallest "box", which just surrounds the charged polymer. Then we expand L_w grids outwards and form a "local box". When the "local box" exceeds the global box in a certain direction, it will not be further expanded. In addition, we set d_step as the interval of steps for each global iteration, and a warning value (L_d). When the nearest distance between the smallest box and the "local box" equals the warning value, we carry out the global iteration and re-divide the "local box". Therefore, in this work, we mainly study the effects of three parameters on the calculation accuracy and speed, which include L_w, d_step, and L_d.
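The bookkeeping of the "local iteration" just described can be summarized by the sketch below: SOR sweeps are restricted to the padded local box, a global sweep is performed every d_step time steps, and the box is re-divided (together with a global sweep) whenever the beads drift within L_d grids of its boundary. Function names, the fixed number of sweeps per step, and the interfaces for the bead positions and charge density are our own assumptions, not the authors' code.

```python
import numpy as np

def sor_sweep(u, rho, lam, lo, hi, eps0=1.0, eps_r=80.0):
    """One SOR sweep of the 7-point stencil, restricted to nodes lo <= (i,j,k) < hi."""
    for i in range(lo[0], hi[0]):
        for j in range(lo[1], hi[1]):
            for k in range(lo[2], hi[2]):
                nb = (u[i+1, j, k] + u[i-1, j, k] + u[i, j+1, k] + u[i, j-1, k]
                      + u[i, j, k+1] + u[i, j, k-1])
                gs = (nb + rho[i, j, k] / (eps0 * eps_r)) / 6.0
                u[i, j, k] = (1.0 - lam) * u[i, j, k] + lam * gs

def local_box(beads, L_w, shape):
    """Smallest box around the beads, padded by L_w grids and clipped to the domain."""
    lo = np.maximum(np.floor(beads.min(axis=0)).astype(int) - L_w, 1)
    hi = np.minimum(np.ceil(beads.max(axis=0)).astype(int) + L_w, np.array(shape) - 1)
    return lo, hi

def evolve(u, beads_of, rho_of, steps, d_step, L_w, L_d, lam, sweeps=30):
    shape = np.array(u.shape)
    lo, hi = local_box(beads_of(0), L_w, shape)
    for t in range(steps):
        beads, rho = beads_of(t), rho_of(t)
        near_edge = (np.any(beads.min(axis=0) - lo <= L_d) or
                     np.any(hi - beads.max(axis=0) <= L_d))
        if t % d_step == 0 or near_edge:
            for _ in range(sweeps):                    # global iteration
                sor_sweep(u, rho, lam, (1, 1, 1), shape - 1)
            lo, hi = local_box(beads, L_w, shape)      # re-divide the local box
        else:
            for _ in range(sweeps):                    # local iteration only
                sor_sweep(u, rho, lam, lo, hi)
```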
We expect that this method can greatly improve the calculation speed while ensuring the accuracy. RESULTS AND DISCUSSION In the numerical experiment, all the codes use double precision arithmetic and run on a personal desktop with Intel Core i7-6700 CPU (3.40 GHz) and 16 GB RAM. We mainly study the independent effects of three parameters (L w , d step , and L d ) on the calculation accuracy and speed. The effect of the number of charged beads in a polymer (denoted as Num) on the calculation is also discussed. Finally, the applicability of the local iteration method is verified by the different numbers of grids. Based on the comparison between the results of local iteration and global iteration, the advantages of local iteration are explained from two aspects: on the one hand, the relative potential error (RE U ) of each charged beads between global iteration and local iteration is calculated through: where θ RMS denotes the root mean square sign, U global and U local represent the potential at each charged beads in the global iteration and the local iteration, respectively. On the other hand, the CPU time of global iteration (denoted as G-t) and local iteration (denoted as L-t) is compared. In order to make L w , d step , L d , and other parameters more universal, we use the relative size for illustration. For example, RL w = L w /L, RS = d step /Step, RL d = L d /L w . Because the charged beads occupy some grids, we ex-tend L w grids in six directions. When RL w = 50%, whether the "local box" completely covers the simulation box depends on the initial position of the charged beads in a polymer. Firstly, we discuss the influence of the value of L w in local iteration on the accuracy and CPU time, which are compared with the result of global iteration. We set the grid size as 80 × 80 × 80, the number of charged beads in a polymer Num = 100, total step number of motion Step = 10000, the relative distance between charged polymer and boundary RL d = 80%, and the relative global iteration step interval RS = 0.25%, 1%, 2.5%, 5%. Under these conditions, we calculate L-t, G-t, and RE U . As shown in Fig. 5(a), with the decrease of RL w , the CPU time decreases, which implies an increase in the simulation efficiency. However, the decrease of RL w has no effect on the global time G-t, because G-t only depends on the grid size and is not affected by the parameters in "local iteration" (such as L w , d step ). Otherwise, with the same RL w , an increasing RS also induces the decrease of CPU time. On the other hand, as shown in Fig. 5(b), both the decrease of RL w and the increase of RS, which are of advantages to the computational efficiency, induce the increase of relative potential error RE U . Obviously, with the increase of RL w , the scope of local iteration is also enlarged, which implies the increase of CPU time consumed by local iteration and further results in the increase of total CPU time. At the same time, enlarged local iteration also makes the solutions approximate that of the global iteration (i.e. a smaller RE U ). Besides, under the same total iteration step, the number of global iterations increases with RS decrease. Hence, a smaller RS indicates more computational consumption in global iteration and fewer cumulative errors, which further induces the increase of total CPU time and the reduction of relative potential error RE U . Note that when RL w is beyond 47%, L-t approaches G-t and RE U approximates zero. 
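One plausible concrete form of the RMS-based relative potential error RE_U introduced above is sketched below; the exact normalization is our assumption, not necessarily the authors' definition.

```python
import numpy as np

def re_u(u_global_at_beads, u_local_at_beads):
    """RMS relative deviation between global- and local-iteration potentials,
    evaluated at the positions of the charged beads (assumed definition)."""
    ug = np.asarray(u_global_at_beads, dtype=float)
    ul = np.asarray(u_local_at_beads, dtype=float)
    rms = lambda v: np.sqrt(np.mean(v**2))
    return rms(ug - ul) / rms(ug)
```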
This is because RL w > 47% implies that the "box" of local iteration is nearly as large as simulation box, resulting in the similar computational consumption and accuracy. Both the results of CPU time and RE U demonstrate the availability of our "local iteration": when RL w = 6.25% and RS = 5%, the simulation efficiency is increased by 12.5 times compared to that of the traditional global iteration; meanwhile, RE U is merely 1.43%, which is in the acceptable range. Even one wants a greater accuracy, RE U ≈ 0.5% when RS = 0.25%, the smaller RL w (6.25%) also ensures a sufficient efficiency (8 times than global iteration). In order to further observe and verify the effect of d step on accuracy and speed, we use RS to explain more intuitively. Similarly, under the grid number of 80 × 80 × 80, we set Num = 100, the length ratio of "local box" to "global box" RL w = 6.25%, the relative distance between charged polymer and boundary RL d = 80%, and Step = 10000, 15000, 20000, 30000. Then, we calculated L-t, G-t, and RE U . As shown in Fig. 6(a), with the increase of RS, the CPU time decreases, which implies an increase in the simulation efficiency. In addition, we find that an increasing Step induces the increase of CPU time under the same RS. Otherwise, as shown in Fig. 6(b), the increase of RS, which is of advantage to the computational efficiency, would induce the increase of relative potential error RE U . We know that with the increase of RS, the number of global iterations decreases, which implies the decrease of total CPU time. However, decreasing the value of RS can make the solution approximate that of the global iteration. We find that when RS = 0.25%, the computational efficiency of Step = 10000 and Step = 30000 is increased by 10.1 times and 11.3 times, respectively. The result indicates that with the increase of Step, the computation efficiency of local iteration increases more obviously. In other words, the large number of steps provide more speedup than that of the small number of steps under same conditions. That 1% ≤ RS ≤ 2.5% is the best under these Step is clearly exhibited in Fig. 6(b). This is due to the enough large number of global iterations even under large Step. As a result, RE U is lower than 1.35%, and the computational efficiency can be increased by at least 9.8 times. Even one wants a greater accuracy, we can choose a larger RL w to reduce RE U . Moreover, when the charged polymer moves towards the boundary of the "box" (the following "box" denotes "local box"), we also set a "warning line" -L d . We still select the number of 80 × 80 × 80 grids, under Num = 100, Step = 10000, RS = 1%, and RL w = 6.25%, 10%, 12.5%, 15%; CPU time and RE U corresponding to different RL d are individually calculated. Owing to different values of L w , the value of L d also varies correspondingly. For example, when RL w = 6.25% (L w = 5), L d can only take 4, 3, 2, or 1. According to Fig. 7(a), we find that the size of L d has almost no effect on the CPU time, because L d is only a parameter for re-dividing the "box" and it does not increase the amount of calculations like L w and d step . As shown in Fig. 7(b), RE U decreases with the increase of RL d . The result indicates that the accuracy increases with the increase of update frequency of "box". This is because the effect of the charged polymer on the potential of external node of the "box" weakens with the increase of the distance between charged polymer and boundary of the "box" and RE U thereby increases. 
In addition, we find that the RE_U/RL_d curve becomes steeper with the increase of RL_w. In theory, RE_U decreases with the increase of RL_w. However, we find that the accuracy of RL_w = 15% is not as good as that of RL_w = 12.5% when the RL_d value is very small. This phenomenon is caused by the fact that the moving distance required for updating the "box" increases with the increase of RL_w when RL_d is the same, leading to a lower frequency of updating the "box", so that the cumulative error becomes too large. Such a phenomenon is especially obvious when RL_d ≤ 20%. Fig. 6 (a) CPU time t and (b) the relative error of electrostatic potential RE_U as a function of the relative global iteration step interval RS, for Step = 10000, 15000, 20000 and 30000. Grid size 80 × 80 × 80, the number of charged beads in a polymer Num = 100, the length ratio of "local box" to "global box" RL_w = 6.25%, the relative distance between charged polymer and boundary RL_d = 80%. Then, considering that the number of charged beads in the polymer, Num, is uncertain in practical problems, we choose Num = 100, 200, 300, and 500 under the condition that the number of grids is 80 × 80 × 80, Step = 10000, RS = 1%, and RL_d = 80%. As shown in Fig. 8(a), the CPU time decreases with the decrease of RL_w, which is similar to Fig. 5(a). Moreover, we find that, with the same RL_w, the increase of Num leads to an increase of the CPU time for both local iteration and global iteration, while G-t of Num = 500 is only 1123 s more than G-t of Num = 100. There are two reasons why the CPU time for a large number of charged beads (L_Num) is slightly longer than that for a small number of charged beads (S_Num). Firstly, the number of grids occupied by the charged polymer increases with the increasing number of charged beads in the polymer. Therefore, the local iteration range of L_Num is larger than that of S_Num even though RL_w is the same. Secondly, L_Num will consume more CPU time in the process of charge interpolation. However, the CPU time consumed by the interpolation calculation process is much less than that of the iterative calculation process. As shown in Fig. 8(b), RE_U increases slightly with the increase of Num. The reason for the difference in error is that the influence of L_Num on the potential of the external nodes of the "box" is greater than that of S_Num. We find that when RL_w is large enough, the influence of L_Num on the potential of the external nodes is gradually weakened. We also observe that the RE_U values of L_Num and S_Num approach each other more obviously around RL_w = 28.75%. When RL_w is between 36% and 50%, the RE_U values of the four Num cases are nearly identical. Of course, if we want to analyze a huge number of charged beads, we should expand the "local box" appropriately. Finally, we use the "local iteration" method for testing the effect of different numbers of grids. Under the conditions of Num = 100, Step = 10000, RS = 1%, and RL_d = 80%, we test the G-t and L-t of 100 × 100 × 100, 80 × 80 × 80, 100 × 50 × 50, and 50 × 50 × 50 grids and find that the values of G-t are 46514.8, 23252.4, 6158.3, and 2373.9 s, respectively. As shown in Fig. 9(a), the CPU time decreases with the decrease of RL_w, which is similar to Fig. 5(a). With the same RL_w, the increasing number of grids leads to an increase of the CPU time. As shown in Fig. 9(b), with L_w = 5, the ratio L-t/G-t decreases with the increase of the grid number, which indicates that the efficiency of the acceleration increases with the increase of the grid number.
For the 100 × 100 × 100 grid, the computational efficiency is increased by about 12 times, more than in the other cases. Our conclusion is that the "local iteration" method is more suitable for computationally intensive problems. As shown in Fig. 9(c), decreasing RL_w, which benefits the computational efficiency, increases RE_U. Clearly, the 100 × 100 × 100 grid contains many more nodes than the other sizes, and the number of external nodes grows as the "box" becomes smaller; the cumulative error caused by the external nodes of the "box" therefore leads to a larger RE_U. Nevertheless, even for a grid as large as 100 × 100 × 100, RE_U can still be kept below 3.5% with RS = 1%. In addition, we find that RE_U of the 50 × 50 × 50 grid is even larger than that of the 80 × 80 × 80 grid when RL_w ≤ 18%. For smaller simulation boxes, the effect of the charges on the nodes outside the local iteration region is stronger, so Fig. 9(c) suggests that RL_w > 18% is a better choice for small grids such as 50 × 50 × 50.

CONCLUSIONS

In this work, based on the SOR iteration algorithm in the finite difference method, we present a "local iteration" method to accelerate the simulation of three-dimensional diffusion of charged polymers in solution. The effects of three parameters (L_w, d_step, and L_d) of the "local iteration" method on the computational efficiency and accuracy are tested. Our simulation results demonstrate that decreasing either RL_w or RS dramatically increases the computational efficiency, at the price of reduced accuracy. In contrast, increasing RL_d improves the computational accuracy and has no influence on the computational cost. Therefore, in order to reduce the effect of the charged beads on the potential of the nodes outside the "box", RL_d should be made as large as possible. Accordingly, for a simulation system with Num = 100 and a grid size of 80 × 80 × 80, the "local iteration" method achieves an approximately 12-fold speedup compared with the global iteration method, with the relative potential error (RE_U) kept below 1.5%. Moreover, an increased complexity of the system (i.e., a larger number of charged beads) has little influence on the acceleration: at the same accuracy, the "local iteration" method still reaches an 8-fold speedup for a system with Num = 500 and grid size 80 × 80 × 80. However, enlarging the simulation system increases the relative potential error; to ensure the accuracy, a larger RL_w should then be used (RL_w ≥ 25%), and the acceleration drops to about 3-fold (with the relative error kept below 2%). Hence, the "local iteration" method should be tuned for the specific problem at hand, i.e., the grid number and the three parameters, so that it significantly accelerates the simulation while keeping the precision under control. In practical problems, we may also encounter inhomogeneous media; it is therefore very meaningful to further optimize different types of three-dimensional Poisson equation algorithms.

[Figure caption: The relative error of the electrostatic potential RE_U as a function of the length ratio of "local box" to "global box" RL_w. Simulation boxes with dimensions of (100, 100, 100), (80, 80, 80), (100, 50, 50), and (50, 50, 50), grid spacing Δx = Δy = Δz = 1. The number of charged beads in a polymer Num = 100, the total step number of motion Step = 10000, the relative global iteration step interval RS = 1%, the relative distance between charged polymer and boundary RL_d = 80%.]
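As a minimal sketch of the scheme (illustrative only: the cubic grid, the 7-point finite-difference stencil, Dirichlet boundaries, and parameter names such as half_width and global_interval are assumptions standing in for L_w, L_d, and the global iteration interval, and do not reproduce the actual implementation), the local-iteration SOR solver can be organized as follows:

```python
# Sketch of a "local iteration" SOR solver for the 3D Poisson equation.
import numpy as np

def sor_sweep(phi, rho, box, omega=1.8, h=1.0):
    """One SOR sweep restricted to the index bounds in `box` (interior nodes only)."""
    i0, i1, j0, j1, k0, k1 = box
    for i in range(max(i0, 1), min(i1, phi.shape[0] - 1)):
        for j in range(max(j0, 1), min(j1, phi.shape[1] - 1)):
            for k in range(max(k0, 1), min(k1, phi.shape[2] - 1)):
                gs = (phi[i-1, j, k] + phi[i+1, j, k] +
                      phi[i, j-1, k] + phi[i, j+1, k] +
                      phi[i, j, k-1] + phi[i, j, k+1] +
                      h * h * rho[i, j, k]) / 6.0
                phi[i, j, k] += omega * (gs - phi[i, j, k])

def local_box(bead_positions, half_width, shape):
    """Axis-aligned "local box" enclosing the charged beads, padded by half_width."""
    lo = np.clip(np.min(bead_positions, axis=0) - half_width, 0, np.array(shape) - 1).astype(int)
    hi = np.clip(np.max(bead_positions, axis=0) + half_width, 0, np.array(shape) - 1).astype(int)
    return (lo[0], hi[0], lo[1], hi[1], lo[2], hi[2])

def run(phi, rho, bead_positions, steps=10000, global_interval=100, half_width=5):
    """Alternate cheap local sweeps with occasional full-grid ("global") sweeps."""
    full = (0, phi.shape[0], 0, phi.shape[1], 0, phi.shape[2])
    box = local_box(bead_positions, half_width, phi.shape)
    for step in range(steps):
        # ... move the beads and re-interpolate rho here (omitted) ...
        if step % global_interval == 0:
            sor_sweep(phi, rho, full)   # global iteration: refresh nodes outside the box
            box = local_box(bead_positions, half_width, phi.shape)  # re-center the box
        sor_sweep(phi, rho, box)        # local iteration: only nodes near the charges
    return phi
```

In this organization only the nodes inside the box are swept at every step, while the occasional full sweep refreshes the potential on the nodes outside the box, which is what keeps the cumulative error of the external nodes bounded.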
2019-10-24T09:14:27.616Z
2019-10-11T00:00:00.000
{ "year": 2019, "sha1": "acf62104d25690c75a650ebc79192fea76b3aab4", "oa_license": "CCBY", "oa_url": "http://www.cjps.org/article/doi/10.1007/s10118-020-2343-8", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "82639e5b82ec04a77e9710c3cdb9886559a19f86", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
220968973
pes2o/s2orc
v3-fos-license
Flexible coinductive logic programming Recursive definitions of predicates are usually interpreted either inductively or coinductively. Recently, a more powerful approach has been proposed, called flexible coinduction, to express a variety of intermediate interpretations, necessary in some cases to get the correct meaning. We provide a detailed formal account of an extension of logic programming supporting flexible coinduction. Syntactically, programs are enriched by coclauses, clauses with a special meaning used to tune the interpretation of predicates. As usual, the declarative semantics can be expressed as a fixed point which, however, is not necessarily the least, nor the greatest one, but is determined by the coclauses. Correspondingly, the operational semantics is a combination of standard SLD resolution and coSLD resolution. We prove that the operational semantics is sound and complete with respect to declarative semantics restricted to finite comodels. This paper is under consideration for acceptance in TPLP. Introduction Standard inductive and coinductive semantics of logic programs sometimes are not enough to properly define predicates on possibly infinite terms (Simon et al. 2007;Ancona 2013). Consider the logic program in Fig. 1, defining some predicates on lists of numbers represented with the standard Prolog syntax. For simplicity, we consider built-in numbers, as in Prolog. In standard logic programming, terms are inductively defined, that is, are finite, and predicates are inductively defined as well. In the example program, only finite lists are considered, such as, e.g., [1|[2|[]]], and the three predicates are correctly defined on such lists. Coinductive logic programming (coLP) (Simon 2006) (x, l) succeeds iff x is in l, maxElem (l, x) succeeds iff x is the greatest number in l. proof of equivalence with declarative semantics can be nicely done in a modular way, that is, by relying on a general result proved by Dagnino (2020). After basic notions in Section 2, in Section 3 we introduce logic programs with coclauses and their declarative semantics, and in Section 4 the operational semantics. We provide significant examples in Section 5, the results in Section 6, related work and conclusive remarks in Section 7. Logic programs as inference systems We recall basic concepts about inference systems (Aczel 1977), and present (standard inductive and coinductive) logic programming (Lloyd 1987;Apt 1997;Simon 2006;Simon et al. 2006;Simon et al. 2007) as a particular instance of this general semantic framework. Inference systems Assume a set U called universe whose elements j are called judgements. An inference system I is a set of (inference) rules, which are pairs Pr, c , also written Pr c , with Pr ⊆ U set of premises, and c ∈ U conclusion. We assume inference systems to be finitary, that is, rules have a finite set of premises. A proof tree (a.k.a. derivation) in I is a tree with nodes (labelled) in U such that, for each j with set of children Pr, there is a rule Pr, j in I. A proof tree for j is a proof tree with root j. The inference operator F I : ℘(U ) → ℘(U ) is defined by: An interpretation of an inference system I is a set of judgements, that is, a subset of the universe U . The two standard interpretations, the inductive and the coinductive one, can be defined in either model-theoretic or proof-theoretic terms (Leroy and Grall 2009). 
• The inductive interpretation µ I is the intersection of all closed sets, that is, the least closed set or, equivalently, the set of judgements with a finite proof tree. • The coinductive interpretation ν I is the union of all consistent sets, that is, the greatest consistent set, or, equivalently, the set of judgements with an arbitrary (finite or not) proof tree. By the fixed point theorem (Tarski 1955), both µ I and ν I are fixed points of F I , the least and the greatest one, respectively. We will write I ⊢ µ j for j ∈ µ I and I ⊢ ν j for j ∈ ν I . Logic programming Assume a first order signature P, F , V with P set of predicate symbols p, F set of function symbols f , and V countably infinite set of variable symbols X (variables for short). Each symbol comes with its arity, a natural number denoting the number of arguments. Variables have arity 0. A function symbol with arity 0 is a constant. Terms t, s, r are (possibly infinite) trees with nodes labeled by function or variable symbols, where the number of children of a node is the symbol arity 3 . Atoms A, B, C are (possibly infinite) trees with the root labeled by a predicate symbol and other nodes by function or variable symbols, again accordingly with the arity. Terms and atoms are ground if they do not contain variables, and finite (or syntactic) if they are finite trees. (Definite) clauses have shape A ← B 1 , . . . , B n with n ≥ 0, A, B 1 , . . . , B n finite atoms. A clause where n = 0 is called a fact. A (definite) logic program P is a finite set of clauses. Substitutions θ , σ are partial maps from variables to terms with a finite domain. We write tθ for the application of θ to a term t, call tθ an instance of t, and analogously for atoms, set of atoms, and clauses. A substitution θ is ground if, for all X ∈ dom(θ ), θ (X) is ground, syntactic if, for all X ∈ dom(θ ), θ (X) is a finite (syntactic) term. In order to see a logic program P as an inference system, we fix as universe the complete Herbrand base HB ∞ , that is, the set of all (finite and infinite) ground atoms 4 . Then, P can be seen as a set of meta-rules defining an inference system P on HB ∞ . That is, P is the set of ground instances of clauses in P, where A ← B 1 , . . . , B n is seen as an inference rule {B 1 , . . . , B n }, A . In this way, typical notions related to declarative semantics of logic programs turn out to be instances of analogous notions for inference systems. Notably, the (one step) inference operator associated to a program T P : ℘(HB ∞ ) → ℘(HB ∞ ), defined by: is, it is closed with respect to P . Dually, an interpretation I is a comodel of a program P if I ⊆ T P (I), that is, it is consistent with respect to P . Then, the inductive declarative semantics of P is the least model of P and the coinductive declarative semantics 5 is the greatest comodel of P. These two semantics coincide with the inductive and coinductive interpretations of P , hence we denote them by µ P and ν P , respectively. Coclauses We introduce logic programs with coclauses and define their declarative semantics. Consider again the example in Fig. 1 where, as discussed in the Introduction, each predicate needed a different kind of interpretation. As shown in the previous section, the above logic program can be seen as an inference system. In this context, flexible coinduction has been proposed (Dagnino 2017;Ancona et al. 2017b;Dagnino 2019), a generalisation able to overcome these limitations. 
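Since all three interpretations are fixed points of the inference operator, they can be computed directly whenever the universe of judgements is finite. The following sketch is only illustrative: the encoding of rules as (premises, conclusion) pairs and the toy example at the end are assumptions made for the purpose of the example, not part of the formal development.

```python
# Illustrative: inductive (mu), coinductive (nu) and flexible (nu_flex)
# interpretations of a finitary inference system over a finite universe,
# computed by fixed-point iteration.  Rules are (frozenset_of_premises,
# conclusion) pairs; corules are encoded the same way.

def F(rules, interp):
    """One-step inference operator F_I applied to an interpretation."""
    return {c for (pr, c) in rules if pr <= interp}

def mu(rules):
    """Least fixed point of F_I: judgements with a finite proof tree."""
    cur = set()
    while True:
        nxt = F(rules, cur)
        if nxt == cur:
            return cur
        cur = nxt

def nu(rules, bound):
    """Largest X with X ⊆ bound and X ⊆ F_I(X): the greatest consistent subset of bound."""
    cur = set(bound)
    while True:
        nxt = F(rules, cur) & cur
        if nxt == cur:
            return cur
        cur = nxt

def nu_flex(rules, corules, universe):
    """Interpretation generated by corules: largest consistent subset of mu(I ∪ I_co)."""
    return nu(rules, mu(rules | corules))

# Toy example: universe {a, b}, one rule a <- a, one corule a <= .
U = {"a", "b"}
R = {(frozenset({"a"}), "a")}
Rco = {(frozenset(), "a")}
assert mu(R) == set()                 # inductively, nothing is derivable
assert nu(R, U) == {"a"}              # coinductively, a has an infinite proof tree
assert nu_flex(R, Rco, U) == {"a"}    # the corule re-admits that infinite tree
```

In the same spirit, the maxElem coclause above keeps exactly those infinite proof trees whose nodes also admit a finite proof once the coclause may be used.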
The key notion are corules, special inference rules used to control the semantics of an inference system. More precisely, a generalized inference system, or inference system with corules, is a pair of inference systems I, I co , where the elements of I co are called corules. The interpretation of I, I co , denoted by ν fl I, I co , is constructed in two steps. • first, we take the inductive interpretation of the union I ∪ I co , that is, µ I ∪ I co , • then, the union of all sets, consistent with respect to I, which are subsets of µ I ∪ I co , that is, the largest consistent subset of µ I ∪ I co . In proof-theoretic terms, ν fl I, I co is the set of judgements with an arbitrary (finite or not) proof tree in I, whose nodes all have a finite proof tree in I∪I co . Essentially, by corules we filter out some, undesired, infinite proof trees. Dagnino (2019) proved that ν fl I, I co is a fixed point of F I . To introduce flexible coinduction in logic programming, first we slightly extend the syntax by introducing (definite) coclauses, written A ⇐ B 1 , . . . , B n , where A, B 1 , . . . , B n are finite atoms. A coclause where n = 0 is called a cofact. Coclauses syntactically resemble clauses, but are used in a special way, like corules for inference systems. More precisely, we have the following definition: Definition 3.1 A logic program with coclauses is a pair P, P co where P and P co are sets of clauses. Its declarative semantics, denoted by ν fl P, P co , is the largest comodel of P which is a subset of µ P ∪ P co . In other words, the declarative semantics of P, P co is the coinductive semantics of P where, however, clauses are instantiated only on elements of µ P ∪ P co . Note that this is the interpretation of the generalized inference system P , P co . Below is the version of the example in Fig. 1, equipped with coclauses. In this way, all the predicate definitions are correct w.r.t. the expected semantics: • all pos has coinductive semantics, as the coclause allows any infinite proof trees. • member has inductive semantics, as without coclauses no infinite proof tree is allowed. • maxElem has an intermediate semantics, as the coclause allows only infinite proof trees where nodes have shape maxElem(l, x) with x an element of l. As the example shows, coclauses allow the programmer to mix inductive and coinductive predicates, and to correctly define predicates which are neither inductive, nor purely coinductive. For this reason we call this paradigm flexible coinductive logic programming. Note that, as shown for inference systems with corules (Dagnino 2017;Ancona et al. 2017b;Dagnino 2019), inductive and coinductive semantics are particular cases. Indeed, they can be recovered by special choices of coclauses: the former is obtained when no coclause is specified, the latter when each atom in HB ∞ is an instance of the head of a cofact. Big-step operational semantics In this section we define an operational counterpart of the declarative semantics of logic programs with coclauses introduced in the previous section. As in standard coLP (Simon 2006;Simon et al. 2006;Simon et al. 2007), to represent possibly infinite terms we use finite sets of equations between finite (syntactic) terms. For instance, the equation L ≖ [1,2|L] represents the infinite list [1,2,1,2,...]. 
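To see how a finite set of equations can stand for an infinite rational term, the small sketch below unfolds the equation L ≖ [1,2|L] on demand; the dictionary-based representation is an assumption made purely for illustration and is not the representation used by the resolution procedure.

```python
# Illustrative: a finite equation denoting an infinite (rational) list,
# unfolded lazily.  A binding maps a variable name to a (head, tail) pair,
# where the tail is either another variable name or a further pair; cycles
# through variables yield infinitely repeating lists.

def unfold(bindings, var, n):
    """Return the first n elements of the possibly infinite list bound to `var`."""
    out, term = [], var
    while len(out) < n:
        if isinstance(term, str):      # a variable: look up its definition
            term = bindings[term]
        head, tail = term
        out.append(head)
        term = tail
    return out

# E = { L ≖ [1 | [2 | L]] }
E = {"L": (1, (2, "L"))}
print(unfold(E, "L", 7))   # -> [1, 2, 1, 2, 1, 2, 1]
```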
Since the declarative semantics of logic programs with coclauses is a combination of inductive and coinductive semantics, their operational semantics combines standard SLD resolution (Lloyd 1987;Apt 1997) and coSLD resolution (Simon 2006;Simon et al. 2006;Simon et al. 2007). It is presented, rather than in the traditional small-step style, in big-step style, as introduced by Ancona and Dovier (2015). This style turns out to be simpler since coinductive hypotheses (see below) can be kept local. Moreover, it naturally leads to an interpreter, and makes it simpler to prove its correctness with respect to declarative semantics (see the next section). We introduce some notations. First of all, in this section we assume atoms and terms to be finite (syntactic). A goal is a pair G; E , where G is a finite sequence of atoms. A goal is empty if G is the empty sequence, denoted ε. An equation has shape s ≖ t where s and t are terms, and we denote by E a finite set of equations. Intuitively, a goal can be seen as a query to the program and the operational semantics has to compute answers (a.k.a. solutions) to such a query. More in detail, the operational semantics, given a goal G; E 1 , provides another set of equations E 2 , which represents answers to the goal. The judgment of the operational semantics has shape P, P co ; S G; E 1 ⇒ E 2 meaning that resolution of G; E 1 , under the coinductive hypotheses S ), succeeds in P, P co , producing a set of equations E 2 . Set Var(t) the set of variables in a term, and analogously for atoms, set of atoms, and equations. We assume Var(S) ⊆ Var(E 1 ), modelling the intuition that S keeps track of already considered atoms. This condition holds for the initial judgement, and is preserved by rules in Fig. 2, hence it is not restrictive. Resolution starts with no coinductive hypotheses, that is, the top-level judgment has shape P, P co ; / 0 G; E 1 ⇒ E 2 . The operational semantics has two flavours: • If there are no corules (P co = / 0), then the judgment models standard SLD resolution, hence the set of coinductive hypotheses is not significant. • Otherwise, the judgment models flexible coSLD resolution, which follows the same schema of coSLD resolution, in the sense that it keeps track in S of the already considered atoms. However, when an atom A in the current goal unifies with a coinductive hypothesis, rather than just considering A successful as in coSLD resolution, standard SLD resolution of A is triggered in the program P ∪ P co , that is, also coclauses can be used. The judgement is inductively defined by the rules in Fig. 2, which rely on some auxiliary (standard) notions. A solution of an equation s ≖ t is a unifier of t and s, that is, a substitution θ such that sθ = tθ . A solution of a finite set of equations E is a solution of all the equations in E and E is solvable if there exists a solution of E. Two atoms A and B are unifiable in a set of equations E, written E ⊢ A = B, if A = p(s 1 , . . . , s n ), B = p(t 1 , . . . ,t n ) and E ∪ {s 1 ≖ t 1 , . . . , s n ≖ t n } is solvable, and we denote by E A,B the set {s 1 ≖ t 1 , . . . , s n ≖ t n }. Rule (EMPTY) states that the resolution of an empty goal succeeds. In rule (STEP), an atom A to be resolved is selected, and a clause of the program is chosen such that A unifies with the head of the clause in the current set of equations. 
Then, resolution of the original goal succeeds if both the body of the selected clause and the remaining atoms are resolved, enriching the set of equations correspondingly. As customary, the selected clause is renamed using fresh variables, to avoid variable clashes in the set of equations obtained after unification. Note that, in the resolution of the body of the clause, the selected atom is added to the current set of coinductive hypotheses. This is not relevant for standard SLD resolution (P co = / 0). However, if P co = / 0, this allows rule (CO-HYP) to handle the case when an atom A that has to be resolved unifies with a coinductive hypothesis in the current set of equations. In this case, standard SLD resolution of such atom in the program P ∪ P co is triggered, and resolution of the original goal succeeds if both such standard SLD resolution of the selected atom and resolution of the remaining goal succeed. In Fig. 3 we show an example of resolution. We use the shorter syntax =max, abbreviate by eq L the equation L ≖ [1,2|L], by eqs the equations M3≖2, M2≖2, by mE the predicate maxElem, and by (S), (C) the rules (STEP) and (CO-HYP), respectively. When applying rule (STEP), we also indicate the clause/coclause which has been used: we write 1,2,3 for the two clauses and the coclause for the maxElem predicate (the first clause is never used in this example). Finally, to keep the example readable and focus on key aspects, we make some simplifications: notably, (MAX) stands for an omitted proof tree solving atoms of shape is max( , ); morever, equations between lists are implicitly applied. As final remark, note that flexible coSLD resolution nicely subsumes both SLD and coSLD. The former, as already said, is obtained when the set of coclauses is empty, that is, the program is inductive. The latter is obtained when, for all predicate p of arity n, we have a cofact p(X 1 , . . . , X n ). Examples In this section we discuss some more sophisticated examples. ∞-regular expressions: We define ∞-regular expressions on an alphabet Σ, a variant of the formalism defined by Löding and Tollkötter (2016) for denoting languages of finite and infinite words, the latter also called ω-words, as follows: where a ∈ Σ. The syntax of standard regular expressions is extended by r ω , denoting the ω-power of the language A r denoted by r. That is, the set of words obtained by concatenating infinitely many times words in A r . In this way, we can denote also languages containing infinite words. In Fig. 4 we define the predicate match, such that match(W, R) holds if the finite or infinite word W , implemented as a list, belongs to the language denoted by R. For simplicity, we consider words over the alphabet {0, 1}. Concatenation of words needs to be defined coinductively, to correctly work on infinite words as well. Note that, when w 1 is infinite, w 1 w 2 is equal to w 1 . On operators of regular expressions, match can be defined in the standard way (no coclauses). In particular, the definition for expressions of shape r ⋆ follows the explicit definition of the ⋆closure of a language: given a language L, a word w belongs to L ⋆ iff it can be decomposed as w 1 . . . w n , for some n ≥ 0, where n = 0 means w is empty, and w i ∈ L, for all i ∈ 1..n. This condition is checked by the auxiliary predicate match star. To define when a word w matches r ω we have two cases. 
If w is empty, then it is enough to check that the empty word matches r, as expressed by the first clause, because concatenating infinitely many times the empty word we get again the empty word. Otherwise, we have to decompose w as w 1 w 2 where w 1 is not empty and matches r and w 2 matches r ω as well, as formally expressed by the second clause, To propertly handle infinite words, we need to concatenate infinitely many non-empty words, hence we need to apply the second clause infinitely many times. The coclause allows all such infinite derivations. An LTL fragment: In Fig. 5 we define the predicate sat s.t. sat(w, ϕ) succeeds iff the ω-word w over the alphabet {0, 1} satisfies the formula ϕ of the fragment of the Linear Temporal Logic with the temporal operators until (U) and always (G) and the predicate zero and its negation 6 one. Since sat ([B|W ], always(Ph)) succeeds iff all infinite suffixes of [B|W ] satisfy formula Ph, the coinductive interpretation has to be considered, hence a coclause is needed; for instance, sat(W 0 , always(zero)), with W 0 = [0|W 0 ], succeeds because the atom sat(W 0 , always(zero)) in the body of the clause for always unifies 7 with the coinductive hypothesis sat(W 0 , always(zero)) (see rule (CO-HYP) in Figure 2) and the coclause allows it to succeed w.r.t. standard SLD resolution (indeed, atom sat(W 0 , zero) succeeds, thanks to the first fact in the logic program). An interesting example concerns the goal sat([1, 1|W 0 ], until(one, always(zero))), where the two temporal operators are mixed together: it succeeds as expected, thanks to the two clauses for until and the fact that sat(W 0 , always(zero)) succeeds, as shown above. Some of the issues faced in this example are also discussed by Gupta et al. (2011). Big-step semantics modeling infinite behaviour and observations Defining a big-step operational semantics modelling divergence is a difficult task, especially in presence of observations. Ancona et al. (2018;2020) show how corules can be successfully employed to tackle this problem, by providing big-step semantics able to model divergence for several variations of the lambda-calculus and different kinds of observations. Following this approach, we present in Fig. 6 a similar example, but simpler, to keep it shorter: a logic program with coclauses defining the big-step semantics of a toy language to output possibly infinite sequences 8 of integers. Expressions are regular terms generated by the following grammar: where skip is the idle expression, out n outputs n, and seq(e 1 , e 2 ) is the sequential composition. The semantic judgement has shape e ⇒ r, s , represented by the atom eval(e, r, s), where e is an expression, r is either end or div, for converging or diverging computations, respectively, and s is a possibly infinite sequence of integers. Clauses for concat are pretty standard; in this case the definition is purely inductive (hence, no coclause is needed) since the left operand of concatenation is always a finite sequence. Clauses for eval are rather straightforward, but sequential composition seq(e 1 , e 2 ) deserves some comment: if the evaluation of e 1 converges, then the com- putation can continue with the evaluation of e 2 , otherwise the overall computation diverges and e 2 is not evaluated. As opposite to the previous examples, here we do not need just cofacts, but also a coclause; both the cofact and the coclause ensure that for infinite derivations only div can be derived. 
Furthermore, the cofact handles diverging expressions which produce a finite output sequence, as in eval(E, div, [ ]) or in eval(seq(out(1), E), div, [1]), with E = seq(skip, E) or E = seq(E, E), while the coclause deals with diverging expressions with infinite outputs, as in eval(E, div, S) with E = seq(out(1), E) and S = [1|S]. The body of the coclause ensures that the left operand of sequential composition converges, thus ensuring a correct productive definition. Soundness and completeness After formally relating the two approaches, we state soundness of the operational semantics with respect to the declarative one. Then, we show that completeness does not hold in general, and define the regular version of the declarative semantics. Finally, we show that the operational semantics is equivalent to this restricted declarative semantics. Relation between operational and declarative semantics As in the standard case, the first step is to bridge the gap between the two approaches: the former computing equations, the latter defining truth of atoms. This can be achieved through the notions of answers to a goal. Given a set of equations E, sol(E) is the set of the solutions of E, that is, the ground substitutions unifying all the equations in E. Then, θ ∈ sol(E) is an answer to G; E if Var(G) ⊆ dom(θ ). The judgment P, P co ; S G; E 1 ⇒ E 2 described in Section 4 computes a set of answers to the input goal. Indeed, solutions of the output set of equations are solutions of the input set as well, since the following proposition holds. On the other hand, we can define which answers are correct in an interpretation: Definition 6.1 For I ⊆ HB ∞ , the set of answers to G; E correct in I is ans(G, E, I) = {θ ∈ sol(E) | Gθ ⊆ I}. Hence, soundness of the operational semantics can be expressed as follows: all the answers computed for a given goal are correct in the declarative semantics. Completeness issues The converse of this theorem, that is, all correct answers can be computed, cannot hold in general, since, as shown by Ancona and Dovier (2015), coinductive declarative semantics does not admit any complete procedure 9 , hence our model as well, since it generalizes the coinductive one. To explain why completeness does not hold in our case, we can adapt the following example from Ancona and Dovier (2015) 10 , where p is a predicate symbol of arity 1, z and s are function symbols of arity 0 and 1 respectively. p(X) ← p(s(X)). p(X) ⇐ Let us define 0 = z, n + 1 = s(n) and ω = s(s(. . .)). The declarative semantics is the set {p(x) | x ∈ N ∪ {ω}}. In the operational semantics, instead, only p(ω) is considered true. Indeed, all derivations have to apply the rule (CO-HYP), which imposes the equation X ≖ s(X), whose unique solution is ω. Therefore, the operational semantics is not complete. Now the question is the following: can we characterize in a declarative way answers computed by the big-step semantics? In the example, there is a difference between the atoms p(ω) and p(n), with n ∈ N, because the former has a regular proof tree, namely, a tree with finitely many different subtrees, while the latter has only with non-regular, thus infinite, proof trees. Following this observation, we prove that the operational semantics is sound and complete with respect to the restriction of the declarative semantics to atoms derivable by regular proof trees. As we will see, this set can be defined in model-theoretic terms, by restricting to finite comodels of the program. 
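Under the assumption that only a finite set of ground rule instances is relevant (which is what happens for rational terms such as ω above), membership in the regular interpretation can be checked by a search that remembers the atoms already assumed on the current branch and closes a cycle only when the revisited atom also has a finite proof tree once the coclauses may be used. The sketch below is illustrative and merely anticipates, in executable form, the flavour of the inductive characterization recalled in the next subsection; the encoding of rules as (premises, conclusion) pairs and the string encoding of atoms are assumptions.

```python
# Illustrative: deciding membership in the regular interpretation for a
# *finite* set of ground rule instances (an assumption; in general a program
# has infinitely many).  Rules and corules are (frozenset_of_premises,
# conclusion) pairs.

def mu(rules):
    """Atoms with a finite proof tree (least fixed point of the inference operator)."""
    derived, changed = set(), True
    while changed:
        changed = False
        for pr, c in rules:
            if c not in derived and pr <= derived:
                derived.add(c)
                changed = True
    return derived

def in_regular_interpretation(rules, corules, goal):
    closure = mu(rules | corules)                 # mu(P ∪ P_co), computed once

    def derive(atom, hyps):
        if atom in hyps:                          # a cycle on the current branch:
            return atom in closure                # accept it only if atom ∈ mu(P ∪ P_co)
        # otherwise unfold the atom with some clause, adding it to the hypotheses
        return any(all(derive(p, hyps | {atom}) for p in pr)
                   for (pr, c) in rules if c == atom)

    return derive(goal, frozenset())

# The example above, restricted to the rational term ω = s(s(...)): the only
# relevant ground instance of p(X) <- p(s(X)) is p(ω) <- p(ω), and p(X) <= 
# contributes the cofact p(ω).
R = {(frozenset({"p(ω)"}), "p(ω)")}
Rco = {(frozenset(), "p(ω)")}
assert in_regular_interpretation(R, Rco, "p(ω)")  # the cyclic (regular) proof is accepted
```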
Dagnino (2020) defined this restriction for an arbitrary (generalized) inference system. We report here relevant definitions and results. Regular declarative semantics Let us write The regular interpretation of I, I co is defined as This definition is like the one of ν fl I, I co , except that we take the union 11 only of those consistent subsets of µ I ∪ I co which are finite.The set ρ fl I, I co is a fixed point of F I and, precisely, it is the rational fixed point (Adámek et al. 2006) of F I restricted to ℘(µ I ∪ I co ), hence we get ρ fl I, I co ⊆ ν fl I, I co . The proof-theoretic characterization relies on regular proof trees, which are proof trees with a 9 That is, establishing whether an atom belongs to the coinductive declarative semantics is neither decidable nor semidecidable, even when the Herbrand universe is restricted to the set of rational terms. 10 Example 10 at page 8. 11 Which could be an infinite set, hence it is not the same of the greatest finite consistent set. finite number of subtrees (Courcelle 1983). That is, as proved by Dagnino (2020), ρ fl I, I co is the set of judgments with a regular proof tree in I whose nodes all have a finite proof tree in I ∪ I co . As special case, we get regular semantics of logic programs with coclauses. Definition 6.2 The regular declarative semantics of P, P co , denoted by ρ fl P, P co , is the union of all finite comodels included in µ P ∪ P co . We state now soundness and completeness of the operational semantics with respect to this semantics. We write θ σ iff dom(θ ) ⊆ dom(σ ) and, for all X ∈ dom(θ ), θ (X) = σ (X). It is easy to see that is a partial order and, if θ σ and Var(G) ⊆ dom(θ ), then Gθ = Gσ . That is, any answer computed for a given goal is correct in the regular declarative semantics, and any correct answer is included in a computed answer. Theorem 6.2 immediately entails Theorem 6.1 as ans(G, E, ρ fl P, P co ) ⊆ ans(G, E, ν fl P, P co ). Proof technique In order to prove the equivalence of the two semantics, we rely on a property which holds in general for the regular interpretation (Dagnino 2020): we can construct an equivalent inductive characterization. That is, given a generalized inference system I, I co on the universe U , we can construct an inference system I I co with judgments of shape H ⊲ j, for j ∈ U and H ⊆ fin U , such that the inductive interpretation of I I co coincides with the regular interpretation of I, I co . The set H, whose elements are called coinductive hypotheses , is used to detect cycles in the proof. In particular, for logic programs with coclauses, we get an inference system with judgments of shape S ⊲ A, for S finite set of ground atoms, and A ground atom, defined as follows. Definition 6.3 Given P, P co , the inference system P P co consists of the following (meta-)rules: The following proposition states the equivalence with the regular interpretation. The proof is given by Dagnino (2020) in the general case of inference systems with corules. Note that the definition of P P co ⊢ µ S ⊲ A has many analogies with that of the operational semantics in Figure 2. The key difference is that the former handles ground, not necessarily finite, atoms, the latter not necessarily ground finite atoms (we use the same metavariables A and S for simplicity). In both cases already considered atoms are kept in an auxiliary set S. In the former, to derive an atom A ∈ S, the side condition requires A to belong to the inductive intepretation of the program P ∪ P co . 
In the latter, when an atom A unifies with one in S, standard SLD resolution is triggered in the program P ∪ P co . To summarize, P P co ⊢ µ S ⊲ A can be seen as an abstract version, at the level of the underlying inference system, of operational semantics. Hence, the proof of soundness and completeness can be based on proving a precise correspondence between these two inference systems, both interpreted inductively. This is very convenient since the proof can be driven in both directions by induction on the defining rules. The correspondence is formally stated in the following two lemmas. Soundness follows from Lemma 6.1 and Proposition 6.2, as detailed below. Analogously, completeness follows from Lemma 6.2 and Proposition 6.2, as detailed below. Proof of Theorem 6.3 Let G = A 1 , . . . , A n and θ ∈ ans(G, E, ρ fl P, P co ). Then, for all i ∈ 1..n, we have A i θ ∈ ρ fl P, P co and, by Proposition 6.2, we get P P co ⊢ µ / 0 ⊲ A i θ . Hence, the thesis follows by Lemma 6.2. Related work and conclusion We have provided a detailed formal account of an extension of logic programming where programs are enriched by coclauses, which can be used to tune the interpretation of predicates on non-well-founded structures. More in detail, following the same pattern as for standard logic programming, we have defined: • A declarative semantics (the union of all finite comodels which are subsets of a certain set of atoms determined by coclauses). • An operational semantics (a combination of standard SLD resolution and coSLD resolution) shown to be sound and complete with respect to the declarative semantics. As in the standard case, the latter provides a semi-algorithm. Indeed, concrete strategies (such as breadth-first visit of the SLD tree) can be used to ensure that the operational derivation, if any, is found. In this paper we do not deal with this part, however we expect it to be not too different from the standard case. It has been shown (Ancona and Dovier 2015) that, taking as declarative semantics the coinductive semantics (largest comodel), there is not even a semi-algorithm to check that an atom belongs to that semantics. Hence, there is no hope to find a complete operational semantics. On the other hand, our paper provides, for an extension of logic programming usable in pratice to handle non-well-founded structures, fully-developed foundations and results which are exactly the analogous of those for standard logic programming. CoLP has been initially proposed by Simon et al. (2006;2006;2007) as a convenient subparadigm of logic programming to model circularity; it was soon recognized the limitation of its expressive power that does not allow mutually recursive inductive and coinductive predicates, or predicates whose correct interpretation is neither the least, nor the greatest fixed point. Moura et al. (2013;2014) and Ancona (2013) have proposed implementations of coLP based on refinements of the Simon's original proposal with the main aim of making them more portable and flexible. Ancona has extended coLP by introducing a finally clause, allowing the user to define the specific behavior of a predicate when solved by coinductive hypothesis. Moura's implementation is embedded in a tabled Prolog related to the implementation of Logtalk, and is based on a mechanism similar to finally clauses to specify customized behavior of predicates when solved by coinductive hypothesis. 
While such mechanisms resemble coclauses, the corresponding formalization is purely operational and lacks a declarative semantics and corresponding proof principles for proving correctness of predicate definitions based on them. Ancona and Dovier (2015) have proposed an operational semantics of coLP based on the bigstep approach, which is simpler than the operational semantics initially proposed by Simon et al. and proved it to be sound. They have also formally shown that there is no complete procedure for deciding whether a regular goal belongs to the coinductive declarative semantics, but provided no completeness result restricted to regular derivations, neither mechanisms to extend coLP and make it more flexible. Ancona et al. (2017a) were the first proposing a principled extension of coLP based on the notion of cofact, with both a declarative and operational semantics; the latter is expressed in bigstep style, following the approach of Ancona and Dovier, and is proved to be sound w.r.t. the former. An implementation is provided through a SWI-Prolog meta-interpreter. Our present work differs from the extension of coLP with cofacts mentioned above for the following novel contributions: • we consider the more general notion of coclause, which includes the notion of cofact, but is a more expressive extension of coLP; • we introduce the notion of regular declarative semantics and prove coSLD resolution extended with coclauses is sound and complete w.r.t. the regular declarative semantics; • we show how generalized inference systems are closely related to logic programs with coclauses and rely on this relationship to carry out proofs in a clean and principled way; • we extend the implementation 12 of the SWI-Prolog meta-interpreter to support coclauses. While coSLD resolution and its proposed extensions are limited by the fact that cycles must be detected in derivations to allow resolution to succeed, a stream of work based on the notion of structural resolution (Komendantskaya et al. 2016;) (S-resolution for short) aims to make coinductive resolution more powerful, by allowing to lazily detect infinite derivations which do not have cycles. In particular, recent results (Li 2017;Komendantskaya and Li 2017;Basold et al. 2019) investigate how it is possible to integrate coLP cycle detection into S-resolution, by proposing a comprehensive theory. Trying to integrate S-resolution with coclauses is an interesting topic for future work aiming to make coLP even more flexible. Another direction for further research consists in elaborating and extending the examples of logic programs with coclauses provided in Section 5, to formally prove their correctness, and experiment their effectiveness with the implemented meta-interpreter. Appendix A Proofs In this section we report proofs omitted in Section 6. Soundness We prove Lemma 6.1. To carry out the proof, we rely on Proposition 6.2 and on the following proposition, stating that the inductive declarative semantics of a logic program coincides with the regular semantics of a logic program with no coclauses. The proof is given by Dagnino (2020) in the general case of inference systems with corules. Proposition Appendix A.1 Let P be a logic program, then µ P = ρ fl P, / 0 . Proof of Lemma 6.1 The proof is by induction on rules of Figure 2. (EMPTY) There is nothing to prove. 
(CO-HYP) We have G = G 1 , A i , G 2 , there is an atom B ∈ S that unifies with A i in E, that is, E 1 = E ∪ E A i ,B is solvable, and P ∪ P co , / 0 ; / 0 A i ; E 1 ⇒ E 2 and P, P co ; S G 1 , G 2 ; E 2 ⇒ E ′ hold. Let θ ∈ sol(E ′ ), then, by induction hypothesis, we get, for all j ∈ 1..n with j = i, P P co ⊢ µ Sθ ⊲ A j θ holds. By Proposition 6.1, we have E 1 ⊆ E 2 ⊆ E ′ , hence sol(E ′ ) ⊆ sol(E 2 ) ⊆ sol(E 1 ), thus θ ∈ sol(E 2 ) ⊆ sol(E 1 ), and, since E A i ,B ⊆ E 1 , θ is a unifier of A i and B, that is, A i θ = Bθ . By induction hypothesis, we get (P ∪ P co ) / 0 ⊢ µ / 0 ⊲ A i θ , hence, by Proposition 6.2 and Proposition Appendix A.1, we get A i θ ∈ µ P ∪ P co . Furthermore, since A i θ = Bθ and B ∈ S, we have A i θ ∈ Sθ . Therefore, by rule (HP) of Definition 6.3, we get that P P co ⊢ µ Sθ ⊲ A i θ holds as well. Completeness We need some preliminary results, then we prove Lemma 6.2. Proof The proof is by induction on the big-step rules in Figure 2. Proof The proof is by induction on the derivation of Aθ in P ∪ P co . Let A ′ ← A 1 , . . . , A n ∈ P ∪ P co be the last applied rule in the finite derivation of Aθ , hence we have A ′ = Aθ . By definition of P ∪ P co , we know there is a fresh renaming of a clause in P ∪ P co , denote it by B ← B 1 , . . . , B n , and a substitution θ ′ such that Bθ ′ = A ′ and B i θ ′ = A i , for all i ∈ 1..n. Since the variables in this clause are fresh, we can assume dom(θ ) ∩ dom(θ ′ ) = / 0, hence θ ′′ = θ ⊎ θ ′ is well-defined and, by construction, we have θ θ ′′ , thus θ ′′ ∈ sol(E), and Aθ ′′ = Bθ ′′ , that is, θ ′′ ∈ sol(E A,B ). As a consequence θ ′′ is a solution of E ∪ E A,B , hence, by induction hypothesis, we get that, for all i ∈ 1..n, P ∪ P co , / 0 ; / 0 B i ; E ∪ E A,B ⇒ E i holds and θ ′′ σ i , for some σ i ∈ sol(E i ). By applying n times Lemma Appendix A.1, we get P ∪ P co , / 0 ; / 0 B 1 , . . . , B n ; E ∪ E A,B ⇒ E ′ and θ ′′ σ , for some σ ∈ sol(E ′ ). Then, the thesis follows by applying rule (STEP) to this judgement and P ∪ P co , / 0 ; / 0 ε; E ′ ⇒ E ′ . Lemma Appendix A.3 For all S and A; E , if P P co ⊢ µ Sθ ⊲Aθ , and θ ∈ sol(E), then P, P co ; S A; E ⇒E ′ and θ σ , for some E ′ and σ ∈ sol(E ′ ). Proof The proof is by induction on the derivation of Sθ ⊲ Aθ (see Definition 6.3).
2020-08-06T01:01:10.469Z
2020-08-05T00:00:00.000
{ "year": 2020, "sha1": "26a4e6b96691b36f1cba12479584dba2a541e45d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2008.02140", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "26a4e6b96691b36f1cba12479584dba2a541e45d", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
253153779
pes2o/s2orc
v3-fos-license
Infant Feeding Practices, Nutrition, and Associated Health Factors during the First Six Months of Life among Syrian Refugees in Greater Beirut, Lebanon: A Mixed Methods Study The objective was to describe infant feeding practices, nutrition and related health aspects of infants under six months among Syrian refugees in Greater Beirut, Lebanon. A cross-sectional study was conducted among Syrian refugee mothers with infants under six months in July–October 2018 (N = 114). Additionally, eleven focus group discussions were conducted to explore supportive factors and barriers associated with early breastfeeding practices. The prevalence of pre-lacteal feeding was high (62.5%), whereas early initiation of breastfeeding was low (31%), and exclusive breastfeeding very low (24.6%). One-fifth of the infants were anemic (20.5%) and 9.6% were wasted. A significantly higher proportion of non-exclusively breastfed infants had a fever and took medicines than those who were exclusively breastfed. Supporting factors of adequate infant feeding practices comprised knowledge on maternal nutrition and exclusive breastfeeding, along with receiving support from healthcare professionals and family members. Identified barriers included preterm delivery, pre-lacteal feeding, an at-risk waist circumference and moderate to severe depression among mothers, bottle feeding, early introduction of food, maternal health reasons, breastmilk substitutes’ distribution, and misinformation offered by mothers-in-law. To address sub-optimal feeding practices documented among Syrian refugees, awareness on proper breastfeeding practices, maternal nutrition, and psychosocial support should be provided to mothers and family members alike. Introduction The benefits of breastfeeding have been well-documented in the long-and shortterm for both the mother and child [1]. Human breast milk is renowned to be the safest and healthiest food for infants, offering invaluable protection against infections and their consequences [2,3]. As such, the World Health Organization (WHO) and United Nations International Children's Emergency Fund (UNICEF) strongly recommend early initiation of breastfeeding within the first hour of birth and exclusive breastfeeding (EBF) for the first six months of life for optimal growth and development [4,5]. This is most critical during emergencies, as infants and children quickly fall among the most vulnerable victims. Malnutrition and infections emerged as the leading cause of childhood mortality and were often associated with crowded and unsanitary spaces subsequent to large-scale migration [6,7]. Study Design and Sampling Strategy The mixed-methods survey was part of a larger project conducted among Syrian refugees living in the Greater Beirut area in Lebanon (July-October 2018). This area represented the urban agglomeration of the capital city of Beirut and its adjacent districts of Mount Lebanon Governorate, constituting the melting pot of the country. Primary health care centers were identified in localities with the highest level of vulnerability according to the United Nations Office for the Coordination of Humanitarian Affairs (UNOCHA) [21]. Mothers with at least one child aged less than five years were enrolled through primary health care centers using a two-step purposeful sampling strategy. The original sample size was calculated based on previous prevalence of anemia among Syrian refugee women in Lebanon to provide a power of 80%, a margin of error of 5% at 95% CI, and a design effect of 1.5 (N = 444). 
Mother-child pairs were eligible to participate if the inclusion criteria were met: (1) mother and child were Syrian, (2) the mother was between 15 and 49 years old, and (3) the child was aged 0-59 months and did not suffer from any inborn errors of metabolism or physical malformations. Out of the 590 eligible mother-child pairs to participate in this research project, 489 were recruited (17.1% non-response rate) and 433 completed the interview (11.4% dropout rate). Further details on the research project were presented elsewhere [15]. For this current study, mothers with children less than six months were included (N = 114). Recruiting Strategy and Data Collection In the original research project, mothers with children under five years were identified in primary health care centers via three approaches, including (1) the nurses at the centers, (2) direct contact by trained research assistants in the waiting rooms of the centers, and (3) posting flyers with a brief description of the project in the centers. An oral script was used by the research assistant to introduce the project to the potential participant, check the eligibility criteria, and seek the informed written consent of the mother. One-on-one interviews were conducted by well-trained enumerators (Collaborative Institutional Training Initiative certified) to collect data using a multi-component questionnaire (July-September 2018). Data quality control and random checks were conducted during data collection and entry to increase the accuracy of the data and reduce the risk of reporting bias. Data were collected on socio-economic, household, maternal characteristics, nutritional status of mothers and their children, infant health and birth characteristics, infant feeding practices, maternal dietary diversity, and maternal mental health. The minimum dietary diversity for women (MDD-W) of reproductive age was measured using the open recall method. Achieving MDD-W was defined as consuming five or more out of ten food groups [22]. The Patient Health Questionnaire-9 (PHQ-9) was used to measure depression among mothers [23]. Self-reported gestational age, weight, and length at birth were recorded. Birth weight was classified as low birth weight (<2500 g) [24], normal birth weight (2500-3999 g), and macrosomia (≥4000 g) [25]. Gestational age of infants was collected in months instead of weeks for cultural adaption and was classified as: pre-term for infants born before 37 weeks (<nine months) and full-term from 37 weeks (≥nine months) [26]. The household monthly income was classified according to the legal minimum wage in Lebanon, approximately equal to 750,000 Lebanese Pounds (LBP) (equivalent of USD 500 at the time of data collection) [27]. Crowding index was based on the American Crowding Index definition (total number of co-residents by number of rooms without kitchens, bathrooms, hallways, balconies, and garages) [28]. Definitions of Infant Feeding Practices Data on infant feeding practices were collected using a culturally adapted questionnaire based on the 2008 Infant and Young Child Feeding (IYCF) indicators [29,30]. Definitions of infant feeding indicators used in this study were described as follows: 1. 2. Ever received pre-lacteal feeding before any breast milk: percentage of infants who were offered pre-lacteal food or liquid before receiving any breast milk after birth. 3. Early initiation of breastfeeding: percentage of infants who were put to the breast within one hour of birth (Indicator 2) [31]. 4. 
Child breastfed yesterday: percentage of infants who received breast milk yesterday, including drops or syrups and anything else (any food or liquid, non-human milk, and formula) [29]. 5. Bottle feeding yesterday (BoF): percentage of children who were fed from a bottle with a nipple during the previous day (Indicator 16) [31]. 6. Ever received infant formula and/or other types of milk: percentage of infants who ever received infant formula and/or other types of milk. 7. Introduction of solid, semi-solid, or soft foods before six months: percentage of infants who ever received solid, semi-solid, or soft foods before six months. In addition, a detailed breakdown and description of infant feeding practices among infants under six months during the previous day (adding up to 100%) was defined as follows: 8. Exclusive breastfeeding: percentage of infants who received exclusively breastmilk yesterday, including drops or syrups, but without anything else (Indicator 3) [ Anthropometric and Biochemical Assessment Standardized techniques and calibrated equipment were used to measure anthropometrics among mothers and infants. The average of two measurements was recorded to the nearest decimal. The waist circumference of non-pregnant mothers was measured with light clothing using a non-elastic measuring tape (SECA 201) and was classified as normal (≤79 cm) and at-risk (>80 cm) [33]. As for infants under six months, weight and length were measured using a measuring mat (SECA 417) and an electronic 2-in-1 weighing scale (SECA 876). Their nutritional status was defined using the WHO child growth standards and the WHO Anthro Survey Analyzer to derive the z-scores [34]. Stunting was defined using the length/height-for-age Z-scores (L/HAZ < −2), underweight using the weight-for-age Z-scores (WAZ < −2), wasting using the weight-for-length/height Z-scores (WHZ < −2), and overweight/obese using the body mass index-for-age (BMI-for-age) Z-scores (BAZ > +2) [35]. Microcephaly was assessed using the head circumference Zscores (HCZ < −2) [36]. Mid-upper arm circumference (MUAC) was measured using the UNICEF MUAC measuring tapes and was classified as acute malnutrition (<115 mm) and at-risk malnutrition (115-129 mm) [37]. Anemia status was assessed among infants under six months by certified phlebotomists using the "HemoCue Hb301 System" to measure hemoglobin (Hb) concentrations. Given the lack of WHO criteria for classifying the severity of anemia for this age group [38], total anemia was defined as Hb < 10.5 g/dL, according to Marques et al. (2014) [39]. Statistical Analysis Descriptive analyses were presented as frequencies with percentages for categorical variables and as means with standard deviations (SD) for continuous variables. Characteristics between non-exclusively and exclusively breastfed infants were conducted using chi-square analysis for categorical variables and independent sample t-test to compare means across groups. Factors associated with early initiation of breastfeeding, time of breastfeeding initiation, and exclusive breastfeeding (as dichotomous dependent variables) were identified using multiple logistic regressions. Factors associated with the weight-forlength/height Z-score (as continuous dependent variables) was evaluated using a multiple linear regression. Different models were used for each of the dependent variables. 
All variables that have shown a p-value < 0.05 in the univariate logistic or linear regressions were checked for multicollinearity using the tolerance, variance inflation factors (VIF), and the condition index prior being included in the models. Independent variables that were entered in the models were reported. The significance of regression models was evaluated using R-squared, the overall percentage, and Hosmer and Lemeshow test for logistic regression models and using R-square and the normal probability plot of residuals for linear regression models. Results from the logistic regressions were expressed as aOR for adjusted odds ratios with a 95% confidence interval (CI) and from the linear regressions as aβ for adjusted β with a 95% CI. A p-value < 0.05 was considered statistically significant. KoBo ToolBox (version March 1, 2018, Harvard Humanitarian Initiative, Cambridge, MA, USA) was used for data entry and Statistical Package for Social Sciences (version 27.0, SPSS Inc., Chicago, IL, USA) was used to conduct the statistical analysis [40]. Qualitative Study and Analysis A qualitative research approach was adopted to explore perceived barriers to, and facilitators of, infant feeding practices in relationship with maternal mental health and nutritional status, infant formula use, and traditional practices among Syrian refugee mothers living in Greater Beirut, Lebanon. A topic guide was developed and approved by the ethical board prior the onset of the study. A pool of participants was drawn from the parent study by contacting mothers who had provided permission to be contacted for future research (n = 201). Overall, 183 mothers were invited to participate in focus group discussions (FGDs), of which 30 mothers partook together with 13 Syrian female relatives (sister, mother, or mother-in-law). In total, 43 Syrian women took part in 11 FGDs between September and October 2018. An additional informed consent was obtained from participants prior to the start of FGDs. Participants were randomly assigned to the FGDs, depending on their availabilities. Discussions were conducted in colloquial Arabic and lasted between 20 and 75 min, with two to nine participants per group, and were recorded using a digital voice recorder. Data collection was completed when data saturation was achieved for the studied themes. FGDs were first transcribed to Latin Arabic and then translated to English. Transcripts were analyzed using MAXQDA software (version 2022.2, VERBI, Berlin, Germany) [41]. Qualitative content analysis was conducted using a combined technique of deductive and inductive thematic analysis. A coding guideline was developed with anchor quotes defining main and sub-themes. To meet the criteria of validity and reliability, a countercheck was conducted by trained members of the research team. Inter-code reliability checks were carried out to reduce inter-subjectivity by checking the coding guidelines and their corresponding quotes [42,43]. Results Household and maternal characteristics are described in Table 1. Nearly half of the mothers were less than 25 years old (44.7%), had one to two children aged less than five years (53.1%), and lived with their extended family (51.8%) with a mean crowding index of 3.8 (±1.6). The majority were registered as refugees with the United Nations High Commissioner for Refugees (UNHCR) (82.3%) and had a household monthly income below the minimum wage (63.3%). 
Two-thirds of the mothers did not achieve MDD-W (64.0%), and three-quarters of non-pregnant mothers had an at-risk waist circumference (75.5%). Lack of corresponding sum of frequencies with total sample size is due to missing data. b Nonpregnant mothers include lactating and non-pregnant non-lactating mothers. Table 2 presents feeding practices of infants under six months. The vast majority were ever breastfed (98.2%). Among those, only 31.0% were breastfed within the first hour of birth and 30.1% during the first day (1-23 h), while more than a third received late breastfeeding initiation (≥24 h; 38.9%). In addition, nearly two-thirds received prelacteal feedings before the onset of lactation (62.5%). The most common type of pre-lacteal feeding was infant formula milk (70.0%), followed by sugary water (31.4%) and herbal infusions (15.7%). Current feeding practices showed that most of the infants breastfed the previous day (90.4%) and almost half were bottle fed the day before (44.7%). Only 24.6% were exclusively breastfed during the previous day. Most of the infants received breast milk with other liquids or foods, but without infant formula milk (37.7%), followed by those who received breast milk and infant formula milk, with other liquids or foods (15.8%) or without other liquids or foods (12.3%). The most common types of liquids consumed the previous day included plain water (51.8%), infant formula milk (36.8%), and herbal infusions (10.5%). Furthermore, only 39.3% of the infants never received infant formula or other types of milk, and 25.9% received them since the first day of birth. In addition, 83.9% had never received solid, semi-solid, or soft foods before six months. Table 3 displays maternal, birth, and health characteristics of infants according to their exclusive breastfeeding status. Overall, 79.1% of infants were born full-term (≥37 weeks), the prevalence of low birth weight (<2500 g) was 10.9%, and the rate of caesarean section reached 33.3%. The prevalence of anemia among infants was 20.5% and of wasting 9.6%. Mild and moderate to severe depression rates among mothers reached 17.6% and 15.7%, respectively. Among non-exclusively breastfed infants as compared to exclusively breastfed infants, a significantly higher proportion of Caesarean section (38.4% vs. 17.9%, p = 0.045), late breastfeeding initiation (≥24 h; 44.7% vs. 21.4%, p = 0.019), and maternal depression rates (mild: 21.7% vs. 4.0%, p = 0.032; moderate to severe: 18.1% vs. 8.0%, p = 0.032) were found. Categorical variables are expressed as n (%). Lack of corresponding sum of frequencies with total sample size is due to missing data. * Multiple answers were allowed. In addition, a higher proportion of mothers attended one or less antenatal care visits (26.2% vs. 18.5%, p = 0.014) among non-exclusively breastfed infants compared to those who were exclusively breastfed. While nearly one third of the infants suffered from various symptoms in the past two weeks, a higher proportion of non-exclusively breastfed infants had a fever compared to exclusively breastfed infants (25.0% vs. 7.1%, p = 0.043). Likewise, significantly more non-exclusively breastfed infants were taking medicines in the past two weeks (46.8% vs. 23.1%, p = 0.033), particularly pain killers and anti-inflammatory tablets (67.6% vs. 16.7%, p = 0.018), than exclusively breastfed infants. 
Factors associated with early initiation of breastfeeding, time of breastfeeding initiation, exclusive breastfeeding, and wasting using regression models are shown in Table 4. Mothers who initiated breastfeeding within the first hour of birth had significantly higher odds of not achieving MDD-W (aOR = 5.52, 95% CI: 1.28-20.75) and of having a higher WHZ score among their infants (aOR = 1.58, 95% CI: 1.03-2.43), compared to mothers who did not. On the other hand, early initiation of breastfeeding was significantly associated with lower odds of having a preterm birth (aOR = 0.18, 95% CI: 0.03-0.98), of receiving pre-lacteal feeding among infants (aOR = 0.06, 95% CI: 0.02-0.20), and of having an at-risk waist circumference (aOR = 0.19, 95% CI: 0.05-0.82), compared to late initiation of breastfeeding (>1 h after birth). Further analysis showed that late initiation of breastfeeding (≥24 h) was nearly 4 times more likely to occur among mothers with moderate to severe depression (aOR = 3.46, 95% CI: 1.09-10.97) as compared to those who initiated breastfeeding within the first day of birth. Exclusive breastfeeding was negatively associated with the age of the infant (aOR = 0.56, 95% CI: 0.39-0.81) and with bottle feeding (aOR = 0.04, 95% CI: 0.00-0.38). As for wasting, the WHZ score was positively associated with receiving infant formula milk (aβ = 1.41, 95% CI: 0.34-2.48) and negatively with bottle feeding (aβ = −0.83, 95% CI: −1.52; −0.14). Findings from the FGDs related to infant feeding practices are presented in Table 5, organized by major themes, codes, and representative quotes. Two major thematic axes emerged: (1) enablers and (2) barriers for early initiation of breastfeeding and exclusive breastfeeding.
Table 5. Enablers and barriers for the initiation of breastfeeding and exclusive breastfeeding among Syrian refugee mothers of children under five years in vulnerable areas of Greater Beirut, Lebanon - qualitative analysis of focus group discussions (N = 11).
A - Enablers for early initiation of breastfeeding and exclusive breastfeeding
Theme: Knowledge on infant feeding practices and maternal nutrition
Cultural beliefs: "I wished I was able to breastfeed. They say that a child who has taken his mother's milk develops a better immunity and becomes gentle and caring." (Mother 1); "I gave him what he deserves." (Mother 2); "I don't want to stop breastfeeding, I want to give him his right to breastfeed, even if it's from my heart, I want to give him his rights." (Mother 3)
Maternal nutrition: "Some people have certain convictions, that when the child breastfeeds from his mother, everything the mother eats comes with the milk, so him, he eats everything."
Theme: Support from healthcare providers and family
"(...) My mother isn't here, but she has a telephone, so I also take information from her. (...) Yes true [that we ask relatives], if the condition progresses, we go to the doctor, but we ask relatives more." (Mother 10); "[I take advice from] my mother-in-law, (...) [she doesn't live with me], but she has experience, she's tried it, if you look at the internet, everyone says something... and some of which are not true either, but the mother-in-law has tried them before." (Mother 11); "We all breastfeed." (Mothers 12 and 13)
B - Barriers for early initiation of breastfeeding and exclusive breastfeeding
Theme: Pre-lacteal feeding
Oral rehydration solutions: "When they are in the hospital, when they bring them to breastfeed, [they would have given him a] water that is called 'mayit el ghalib' [referring to oral rehydration solutions]." (Mother 13); "I, it's different from what the doctor gives... I personally didn't give the child any cumin [or water and sugar] when he was small. After the first day, the doctor gave me a small box [referring to a type of feed] and said to give it to him the first few days, and after that the milk came in and I was breastfeeding." (Mother 14)
Theme: Infant feeding practices
Early introduction of solid, semi-solid, or soft foods (before the age of six months): "My children starting three to four months to taste whatever I cook so they won't be disgusted by it when they grow up. Now, they eat anything I put in front of them."
Theme: Maternal health reasons
Perceived lack of breast milk: "I had five kids, and I didn't breastfeed anyone of them. The reason is that I don't have [any breast] milk." (Mother 1); "He is the only one I had to buy milk for. And the rest I breastfed them, but him, he is the only one whom I couldn't breastfeed because it was dry."
Under the Enablers axis, two major themes were generated, representing (1.a) knowledge on infant feeding practices and maternal nutrition and (1.b) support from healthcare providers and family. Breastfeeding emerged as a core practice rooted in their cultural beliefs, as mothers referred to it as a "right" (Mother 3) and as crucial for the development of the child. Mothers highlighted the importance of proper maternal nutrition and of being able to "eat everything [...] and not stop anything" (Mother 5). Many mothers also demonstrated a positive attitude towards exclusive breastfeeding and eagerly confirmed having given their infants "only breast milk, until six months (...), not even water" (Mother 4). Receiving support from healthcare professionals or family was discussed by participants. A couple of mothers mentioned that they follow the recommendations of doctors, yet they still valued their family members' advice. As a result, several women preferred to talk to their mothers or mothers-in-law first and turn to a doctor when "the condition progresses". When feeling overwhelmed, one mother stated that she took the advice of her sister-in-law to "calm down" because she "can't breastfeed", and at other times she would call her mother "to take information from her" (Mother 10). Other mothers stated that "we all breastfeed" (Mothers 12 and 13), indicating that it was a common practice, well accepted and supported by their families and communities. As for the Barriers axis, four major themes emerged, including (2.a) pre-lacteal feeding, (2.b) infant feeding practices, (2.c) maternal health reasons, and (2.d) social factors. According to the discussions with the mothers, pre-lacteal feedings and early introduction of solid, semi-solid, or soft foods may not have been seen by the mothers as the main barriers to early initiation of breastfeeding and exclusive breastfeeding. Pre-lacteal feeding consisted of oral rehydration solutions and possibly infant formula milk, as mothers referred to it as "a small box [referring to a type of feed or solutions]", and mothers seemed to be content with using it until "the milk came in" (Mother 14).
Similarly, early introduction of foods appeared to be common and was perceived as a practice that encouraged infants to taste a variety of foods "so they won't be disgusted by it when they grow up" (Mother 1). Mothers appeared to be satisfied with introducing a "small quantity" of foods such as family dishes, "yogurt, fig jam, herbal tea", "fruits", and "mashed rice" at "five months" or even earlier, starting at "three to four months" (Mothers 1, 4, 15, 16, and 17). When asked about the use of infant formula milk, many mothers confirmed having used mixed milk feeding to "help" the child for various reasons "from the first month" or between the second and fourth months (Mothers 4, 17, and 18). Water consumption was also widely discussed; this practice appeared to be common among mothers "from day 40 [after birth]" (Mother 18). As for herbal tea consumption, one mother explained having used it as an alternative to breast milk because she "didn't have any milk left to breastfeed", which caused her child to be hospitalized because of "so much herbal tea" (Mother 3). On the other hand, discussions suggested that maternal health reasons were perceived as a major barrier to adequate breastfeeding practices. Perceived lack of breast milk was widely mentioned as a reason for not breastfeeding. Commonly used statements included "I didn't have any breast milk", "it was dry", and "the milk stopped" (Mothers 1, 3, 19, and 20). Some mothers even highlighted that they "had no milk; it was dry from lack of nutrition" (Mother 10), indicating that insufficient maternal dietary intake was a known barrier to breastfeeding. Furthermore, mothers expressed that poor mental health, illness, or feeling tired were known obstacles to proper breastfeeding practices. One mother readily stated that "[when I am worried or tired], I can't breastfeed (...)" (Mother 10), while another shared, "I'm getting tired when I'm breastfeeding, I'm very tired, my bones, they hurt. (...) I can't stand up, when I finish breastfeeding, I feel beaten and dizzy..." (Mother 3). Complications during pregnancy or delivery and premature delivery were also viewed by mothers as barriers to breastfeeding. Mothers openly shared their struggles to initiate breastfeeding because of experiencing "complications" and "bleeding" during delivery (Mother 23) or having a "very small" newborn (Mother 25). As for social factors, discussions indicated that the use of breastmilk substitutes seemed to influence the mother's choice to initiate or continue breastfeeding. Several mothers readily acknowledged receiving free samples of infant formula milk from healthcare professionals or local non-governmental organizations. Mothers also stated receiving strong recommendations to use infant formula milk from healthcare professionals when their infants were very ill or when "there wasn't any milk" (Mother 13). In addition, discussions revealed that mothers were aware of some misinformation being passed down by earlier generations, particularly from mothers-in-law. For instance, mothers shared that they had received specific nutritional advice to avoid or consume a certain food group, such as eating "potato and rice only" (Mothers 20 and 6) or giving infants "starch" (Mother 20), resulting in malnutrition in both the mother and her infant.
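For readers less familiar with the regression outputs reported in Table 4 above, the sketch below shows how adjusted odds ratios (aOR) and their 95% confidence intervals are typically read off a multivariable logistic regression fit. The data are synthetic and the variable names are hypothetical stand-ins; this is not the study's model or its estimates.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic records standing in for the mother-infant data (hypothetical variables).
rng = np.random.default_rng(0)
n = 114
df = pd.DataFrame({
    "low_mddw":   rng.integers(0, 2, n),    # 1 = mother did not achieve MDD-W
    "preterm":    rng.integers(0, 2, n),    # 1 = preterm birth
    "prelacteal": rng.integers(0, 2, n),    # 1 = infant received pre-lacteal feeds
    "at_risk_wc": rng.integers(0, 2, n),    # 1 = at-risk waist circumference
    "whz":        rng.normal(0.0, 1.0, n),  # weight-for-length/height z-score
})
logit_p = -0.5 + 1.2*df.low_mddw - 1.5*df.preterm - 2.0*df.prelacteal - 1.2*df.at_risk_wc + 0.4*df.whz
df["early_bf"] = (rng.random(n) < 1.0/(1.0 + np.exp(-logit_p))).astype(int)

# Multivariable logistic regression: aOR = exp(coefficient), CI = exp(CI of the coefficient).
fit = smf.logit("early_bf ~ low_mddw + preterm + prelacteal + at_risk_wc + whz", data=df).fit(disp=0)
aor = np.exp(fit.params).rename("aOR")
ci = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([aor, ci], axis=1).round(2))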
Discussion This is the first study to investigate infant feeding practices as well as the prevalence of anemia and the nutritional status of infants under six months among Syrian refugees in Greater Beirut, Lebanon, representing an urban setting of a humanitarian crisis. Despite the very high rate of ever breastfed infants (98.2%), less than a third were breastfed within the first hour of birth (31.0%) and only a quarter were exclusively breastfed (24.6%) in our study. Our estimates were comparable to those reported in Syria in 2019 [44,45] and among Syrian refugees in Lebanon in 2013 [19], but lower than those recorded in Syria before the start of the war in 2009-2010 [44,45] and among Syrian refugees in Southern Turkey, Northern Lebanon, and Jordan in 2016-2020 [20,46,47]. Similarly, low exclusive breastfeeding (EBF) rates were documented among internally displaced persons in eastern Ukraine [48] and Sahrawi refugees in Algeria [49]. Our findings were also in line with recent national and regional studies in Lebanon conducted in 2019-2021, which reported a prevalence of EBF varying from 30% to 59% among children under six months [50][51][52]. However, our rates remained below the global and Middle East and North Africa regional rates of early initiation of breastfeeding and EBF in 2014-2020 [44,45]. Ever breastfeeding was nearly universal among Syrian refugees in Greater Beirut, Lebanon. Similar findings were registered among Syrian refugees in Northern Lebanon and Turkey [20,46,53]. In our study, mothers explained that breastfeeding is a core practice in their culture, as they "all breastfed" and believed that breastfeeding is a "right" for their children. Breastfeeding intention was found to have a crucial role in the initiation and sustainability of effective breastfeeding among refugees [54,55]. According to the World Health Organization (WHO), ever breastfeeding is a reflection of the acceptance of breastfeeding in the culture [31]. In fact, the prevalence of ever breastfeeding, early initiation, and EBF were even higher in Syria in 2009-2010 before the start of the war [44,45], indicating that this practice has been well integrated in their culture. Nevertheless, evidence has shown that breastfeeding practices deteriorate during humanitarian crises and conflicts, increasing the risk of infant malnutrition [48,56]. The prevalences of anemia (20.5%) and wasting (9.6%) among children under six months in our study were classified as being of medium public health significance [57,58]. Very few studies have examined anemia levels of infants under six months in the literature, especially among refugees. The anemia level in our study was not too far from findings recorded in Argentina (28.9%) [59] and South Africa (33%) [60]. As for wasting, recent surveys conducted among Syrian refugee and Lebanese children aged 6 to 59 months in Lebanon showed that childhood wasting remained at about 5% in 2018-2021 [15,52]. However, a prevalence of wasting similar to ours was found among infants of Sahrawi refugees [49]. This suggests that infants among the refugee population may be at an increased risk of acute malnutrition from an early age. Pre-lacteal feedings were given to 62.5% of the infants in our study, with the majority receiving infant formula milk, followed by sugary water and herbal infusions, before the onset of lactation. It appears that mothers in our study did not view pre-lacteal feedings, consumption of water or herbal tea, and early introduction of foods as barriers to exclusive breastfeeding.
Syrian refugees in Turkey and Jordan as well as Sahrawi refugees in Algeria practiced similar customs [47,49,61]. For instance, most parents and family members in Syria believed that sugary water cleansed the intestines and prevented jaundice [61]. Other common beliefs were observed among internally displaced persons in eastern Ukraine, such as giving infants water when the mother felt thirsty while it was warm outside [48]. Some mothers also introduced foods before the age of six months, despite the WHO recommendations. In our study, mothers justified this common practice as an approach to get their children to eat "everything" and "anything" at home. Others believed that introducing food early would lead to a chubby baby, as a sign of good health [62]. Similar traditions were also documented among Syrian refugees in Turkey [61], Lebanon [20], and Germany [62], as well as refugees in Algeria [49] and displaced persons in eastern Ukraine [48]. Early initiation of breastfeeding was shown to be protective against receiving pre-lacteal feeding and against wasting (WHZ) among infants. Timely onset of breastfeeding was also significantly associated with EBF in the bivariate analysis, although this association did not persist in the regression models. Breastfeeding within the first hour of birth is known for its decisive role in securing that newborns receive colostrum feedings, limiting the possibilities of feeding newborns anything other than breast milk, and establishing EBF successfully [5]. This is consistent with the robust literature supporting the advantages of breastfeeding for the baby, including reduced gastrointestinal and respiratory infections and non-communicable diseases as well as increased intelligence in the long term [63][64][65]. Our findings also showed that a significantly higher proportion of non-exclusively breastfed infants suffered from a fever and received medicines compared to those who were exclusively breastfed. Moreover, early introduction of complementary food has been known to shorten the duration of breastfeeding and stop it prematurely [66]. Therefore, adequate infant feeding practices can set the child on the right path to prevent malnutrition from early infancy [67]. Bottle feeding was also identified as a main barrier to exclusive breastfeeding in our study. In addition, the use of bottles had a significant negative association with the weight-for-length/height Z-score, while the use of infant formula milk had a positive association. According to the WHO, feeding bottles and teats are discouraged, as they are often difficult to keep clean and could lead to the transmission of pathogens, and their use increases the risk of diarrhea, dehydration, and malnutrition [6]. Bottle feeding is also known to adversely affect breastfeeding behaviors, such as suckling, affectivity, the baby's response, and the mother/baby position, which might interfere with the infant's weight gain and increase the risk of early weaning [68]. Hence, it is recommended to use a feeding cup when needed and to secure a suitable breastmilk substitute, prepared according to instructions for safe preparation and use, to be given only to infants who do not have access to breast milk [7]. The prevalence of bottle feeding documented in our study (44.7%) was similar to those observed among the Lebanese population and Syrian refugees in Lebanon [19,51], but lower compared to Syrian refugees in Jordan [47].
Our discussions with the mothers indicated that the use of breastmilk substitutes seemed to influence the mother's choice to breastfeed, as free samples of infant formula milk were distributed or their use was strongly recommended by healthcare professionals. These findings are a direct reflection of the violations of the Code of Marketing of Breastmilk Substitutes previously documented in Lebanon [69,70]. Moreover, having a pre-term delivery was found to be negatively associated with early initiation of breastfeeding. Studies have shown that preterm delivery might delay the initiation of lactation among mothers [71]. Nevertheless, it would still be possible to establish exclusive breastfeeding among preterm infants [72]. A high prevalence of caesarean delivery was recorded in our study (33.3%), exceeding the global rate of 21.1% [73]. Caesarean delivery, labor complications, and premature birth also emerged as barriers to breastfeeding during focus group discussions in our study. Caesarean deliveries are known to hamper the initiation of breastfeeding and to lower exclusive breastfeeding rates. This might be mediated by various factors, including the delayed onset of breastfeeding and disrupted mother-infant interaction due to postoperative care routines [74]. The lack of association between caesarean delivery and breastfeeding in our study might be explained by the large number of caesarean sections performed that could have been elective and not medically indicated. As a result, mothers who underwent elective caesarean sections might have better outcomes and be able to start breastfeeding earlier compared to those who had a medical reason [75]. Mixed findings on the role of caesarean sections as a predictor of delayed initiation of breastfeeding were also documented in Lebanon [75,76]. Low dietary diversity among mothers was also associated with higher odds of early initiation of breastfeeding. Poor dietary diversity was also linked to low income and food insecurity among Syrian refugee mothers in Greater Beirut, Lebanon [16]. This suggests that mothers facing increased economic vulnerabilities would be more inclined to initiate breastfeeding early, as breast milk is known to be more cost-effective. In addition, Syrian refugees in Turkey and Lebanon stated that breast milk is "free", "less costly and more natural" [53,61]. However, it was also mentioned by the mothers that insufficient maternal nutrition and poor health were major reasons to halt breastfeeding prematurely. Among Syrian refugees in Greater Beirut, Lebanon, lactating mothers were found to have higher proportions of nutritional inadequacies, and 19.4% of them suffered from anemia [15]. In addition, Syrian refugees in Turkey reported that malnutrition and fatigue of mothers were often the biggest obstacles to breastfeeding [61]. These findings shed light on the role of adequate maternal nutrition during pregnancy and lactation, as it impacts the development and health of the offspring [77,78], which was particularly difficult to achieve among Syrian refugee mothers in Greater Beirut, Lebanon [15,16]. Adverse maternal factors also included obesity and depression. Having an at-risk waist circumference was associated with lower odds of early initiation of breastfeeding. This is concerning, as more than 60% of the mothers in our study were found to be overweight or obese [15].
According to the literature, maternal obesity is associated with lower rates of early initiation of breastfeeding and EBF, not only due to mechanical factors, hormonal imbalances, and a delayed onset of lactogenesis II, but also due to psychosocial factors associated with body image dissatisfaction or concerns [79,80]. Maternal depression was also identified as a predictor of delayed initiation of breastfeeding. Possible explanations are that maternal psychological distress, such as depression or anxiety, may impair the release of oxytocin and delay the onset of lactogenesis, thus reducing breastfeeding outcomes [81]. Crowded spaces were also reported to negatively impact breastfeeding practices among Syrian refugees in Lebanon [53]. This could be explained by the strong association between poor maternal mental health, food insecurity, and a high crowding index found among Syrian refugee mothers in Lebanon [16]. In particular, stress related to conflicts was recognized as a major contributor to the discontinuation of breastfeeding among refugees and displaced persons, as it might interfere with the mother's let-down reflex and lactation [6,46,48,53]. Mothers could easily perceive these cues as a sign of insufficient milk production [6], as stated during focus group discussions. Providing reassurance and support to these mothers is beneficial and would enhance the lactation outcome [6], particularly among mothers who were experiencing a loss of social support due to migration [61]. Mothers in our study described that they were able to breastfeed following the advice of a family member to "calm down" when worried or tired. These findings emphasize the significance of psychosocial support for effective breastfeeding practices. Receiving support from healthcare providers and family members was mentioned as an enabler of breastfeeding by mothers in our study. Spousal support was found to be a strong influencer of breastfeeding attitudes [54], along with female relatives, who were viewed as key sources of support for breastfeeding among Syrian refugee mothers in Turkey, Germany, and Lebanon [53,61,62]. However, some nurses and midwives reported that the support provided by grandmothers was sometimes seen as excessive, since it reduced the mother's contact with the newborn and affected the mother-infant bonding [61]. In addition, some mothers in our study also discussed receiving misinformation from their mothers-in-law that led to malnutrition and an early cessation of breastfeeding. This indicates the importance of engaging family members such as grandmothers and mothers-in-law in counselling, programs, or interventions targeted to promote optimal infant and young child feeding practices as well as maternal nutrition [82]. Although nutrition knowledge is known to be a key modulator of breastfeeding practices [82,83], a strong intention to breastfeed prenatally and a positive attitude towards breastfeeding are also significant predictors [84]. Furthermore, the number of antenatal care visits was positively associated with exclusive breastfeeding, even though this relationship was not significant in the multivariate models. The positive impact of antenatal care visits on breastfeeding practices has been well documented in the literature [6,85,86].
However, a lack of, or difficulty in accessing, antenatal care might act as a barrier for Syrian refugee mothers, especially among those not registered as refugees with the UNHCR in Lebanon or those facing a language barrier, such as in Turkey and Germany [15,46,61,62]. Findings of this study should be interpreted considering some limitations and strengths. This is a cross-sectional study, which limits the ability to infer causality. Data collection was focused on the most vulnerable areas of Greater Beirut using a purposeful sampling approach; thus, findings cannot be generalized to rural areas or the whole country. However, one strength of our study is the inclusion of unregistered refugees living in the catchment area of these primary health care centers, which would not have been possible if a randomized approach using the UNHCR list of refugees had been used. Another limitation is the use of the previous day's feeding to measure the proportion of exclusively breastfed infants. The proportion of EBF can be overestimated, as some infants who are not exclusively breastfed may not have received other liquids or foods on the day before the interview [87]. It is also worth noting that data on birth characteristics may be subject to recall bias, as the gestational age, weight, and length at birth were self-reported. In addition, the definition of gestational age was adapted to the culture and was interpreted in months instead of weeks, as understood by the mothers. Therefore, the proportion of full-term births could be overestimated. Lastly, the Patient Health Questionnaire-9 (PHQ-9) was validated among Lebanese adults rather than Syrian refugees. It also had a poor specificity in capturing depressive symptoms, yet a good sensitivity. Thus, it was regarded as a useful screening tool for depression in settings lacking sufficient psychiatric care [88]. Conclusions In conclusion, this study demonstrated poor infant feeding practices among infants under six months of Syrian refugees in Greater Beirut, Lebanon. Our findings showed that 20.5% and 9.6% of infants under six months were anemic and wasted, respectively. Despite a nearly universal onset of breastfeeding, early initiation and exclusive breastfeeding were considerably low. While the early initiation of breastfeeding was associated with lower odds of receiving pre-lacteal feeding and of wasting among infants, bottle feeding was identified as a significant obstacle to exclusive breastfeeding. Key maternal factors, including low dietary diversity, obesity, and depression, were found to be adversely related to the early initiation of breastfeeding. The role of family members' support, antenatal and post-natal care, and nutrition education for effective breastfeeding practices was underlined in our study. Our findings could help strengthen existing campaigns and interventions to address the identified gaps and barriers. Raising awareness of proper breastfeeding practices would be essential for mothers and their family members alike. Psychosocial support would be particularly important for refugee mothers, considering the role of maternal mental health status in the successful establishment of breastfeeding. As maternal nutrition and infant nutrition are tightly linked, nutrition-specific interventions should focus their efforts on improving the nutritional status of mothers at the community and individual levels to reduce infant malnutrition.
Furthermore, the implementation of the Code of Marketing of Breastmilk Substitutes and the Baby Friendly Hospital Initiative would need to be reinforced in Lebanon. Future research should investigate the determinants of anemia and the nutritional status among infants under six months in prospective cohort studies. Informed Consent Statement: Written consent was obtained from all mothers prior to enrollment in the study. Parental consent and informed assent were sought when mothers were aged less than 18 years. In the case of illiteracy, a witness or the nurse signed on the mother's behalf after reading and explaining the consent form to the participant. Confidentiality was assured to the participants by assigning random identifiers and allowing access to the data only to the investigators. Data Availability Statement: The dataset analyzed during this study is available from the corresponding author on reasonable request.
2022-10-27T15:31:55.343Z
2022-10-23T00:00:00.000
{ "year": 2022, "sha1": "41f6c038c979a54d4cad3922e6131a04797e2851", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6643/14/21/4459/pdf?version=1666756525", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8490f043cd5a1564c730c5ba0e4d605610394b8c", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
53228617
pes2o/s2orc
v3-fos-license
Stakeholder Insights from Zika Virus Infections in Houston, Texas, USA, 2016–2017 Responding to Zika virus infections in Houston, Texas, USA, presented numerous challenges across the health system. As the nation's fourth-largest city, in a subtropical region with high travel volume to Latin America and the Caribbean, Houston was an ideal location for studying experiences encountered by clinicians and public health officials as they responded to the Zika virus crisis. To identify the challenges encountered in the response and to explore strategies to improve future responses to emerging infectious diseases, we interviewed 38 key stakeholders who were clinical, scientific, operational, and public health leaders. From the responses, we identified 4 key challenges: testing, travel screening, patient demographics and immigration status, and insufficient collaboration (between public health officials and clinicians and among clinical providers). We also identified 5 strategic areas as potential solutions: improved electronic health record support, specialty centers and referral systems, standardized forms, centralized testing databases, and joint academic/public health task forces. In February 2016, the World Health Organization (WHO) declared the cluster of cases of microcephaly and other neurologic abnormalities associated with Zika virus a public health emergency of international concern. Since 2015, this virus has infected >1 million persons in 70 countries (1). From 2015 through December 2017 in the United States and its territories, >42,000 laboratory-confirmed symptomatic cases were reported and ≈7,000 pregnant women had laboratory evidence of possible Zika virus infection (2,3). Responding to Zika virus presented numerous challenges across the health system. Zika virus research before 2015 was scarce, leaving clinicians and public health policy makers with little guidance regarding the virus's natural history, rate of perinatal transmission, or mechanisms or rate by which infections triggered microcephaly and other severe congenital abnormalities. Furthermore, diagnostic tools were limited because of Zika virus cross-reactivity with other flaviviruses on serologic assays, complicating individual diagnoses and population-based serosurveillance (4). To develop guidelines to support the clinical and public health response, the Centers for Disease Control and Prevention (CDC) mobilized rapidly. Nevertheless, considerable work was needed by those on the ground to translate CDC guidelines and other emerging research into actionable policies at the institutional level. The challenges encountered by clinicians and public health officials when responding to the emerging Zika virus crisis in Houston, Texas, USA, from January 2016 through June 2017 were ideal for a case study. As the nation's fourth most populous city and with >10 million annual international travelers, Houston is a global gateway to Latin America and the Caribbean, putting it at high risk for travel-associated cases (5). Furthermore, the city's subtropical bayou setting enhances the threat of locally acquired transmission from Aedes aegypti and Ae. albopictus mosquitoes (6). During 2015-2017, a total of 365 Zika virus cases, including at least 7 transmitted by local mosquitoes, were reported in Texas (7). In a 7-month period, 105 pregnant patients were referred to a specialty clinic for potential Zika virus exposure; 75 met testing criteria and 8 ultimately had positive test results, a screen-positive rate of 11% (8).
To explore the clinical and public health responses to Zika virus in Houston, we interviewed expert stakeholders. We report the key challenges they encountered and propose strategies to inform the response to Zika virus and future emerging infectious diseases. Methods We conducted semistructured interviews of 38 clinical, scientific, and public health experts in Houston and current or former Texas public health officials (Table 1). Almost half (45%) worked in the fields of obstetrics or pediatrics, and the majority of clinicians were affiliated with an academic medical center. The interview guide elicited participants' perceived challenges related to Zika virus infection prevention, testing, and clinical management, as well as strategies for addressing those challenges. During April-June 2017, a researcher with doctoral training in qualitative methods (S.R.M.) conducted the interviews. When possible, interviews were conducted in person (n = 24); the others were conducted via telephone. All interviews were audiorecorded, transcribed, and reviewed for accuracy. We analyzed transcripts by using MAXQDA (https://www.maxqda.com) and used qualitative coding to identify key themes related to challenges and strategies in the Zika virus response. Challenges From stakeholders' discussions of the challenges encountered in responding to Zika virus infection, 4 primary themes emerged. These themes were testing, travel screening, patient demographics and immigration status, and collaboration (between public health entities and clinicians and among clinical providers) (Table 2). Testing The most commonly described challenges were associated with Zika virus testing. Every clinician interviewed described such challenges. Five testing issues emerged. First, clinicians described logistical burdens associated with collecting and submitting samples to public health departments, characterizing the paperwork and approval processes as "redundant," "very time-consuming," and "a significant barrier to care." One case described by an obstetrician exemplifies these challenges. To order serologic testing, the obstetrician had to complete hospital send-out laboratory forms and 3 forms from the Houston Health Department (HHD) and telephone HHD for approval. When the test results returned positive, the clinical team sent a plaque-reduction neutralization test (PRNT) sample to CDC to rule out cross-reactivity with other flaviviruses (9). While awaiting PRNT results, the patient elected amniocentesis, which required completion of 5 more forms and separate HHD approval. At the time she gave birth, the PRNT results were still pending, so the clinical team submitted placental samples for testing, which required completion of 4 new forms and involvement of the state health department. Second, clinicians expressed concerns regarding the clinical effects of delayed receipt of test results. These delays were generally longest early in the response, before testing was available via in-house or commercial laboratories and as public health departments faced extensive delays in federal funding to support testing. The delays were particularly challenging given Zika virus's potential effects on fetal development and the relatively short duration of pregnancy. Test results could influence decisions about clinical management; the risks and benefits of amniocentesis or other testing; and reproductive decision-making, including pregnancy termination.
As one maternal-fetal medicine specialist summarized, "It's very difficult to base your management decisions on a test that took 6 to 8 weeks in pregnancy." The third challenge was the complexity and limitations of existing tests, including cross-reactivity with other flaviviruses and the contemporary general understanding of a limited period for IgM detection, which complicated determination of exposure and risks to pregnancy. It was believed that although IgM is expressed as early as several days after exposure, it typically wanes within 3 months, creating challenges for patients with a long duration of exposure or case testing delays of several months. These limitations frustrated efforts to identify true positive exposures and presented challenges for patient education. To quote a maternal-fetal medicine specialist, "The testing is not very specific… it doesn't necessarily eliminate your risk of having Zika… that has been difficult to get our patients to understand. Because most of our patients think that if a test is negative, then the risk is eliminated." Fourth, several respondents noted the effects of commercially developed tests. According to respondents, commercial tests generally improved result turnaround times, partially alleviating the demand on scarce public health department resources. However, they also introduced new cost pressures, particularly for public institutions, compared with free services available through public health laboratories. Commercial testing also introduced challenges for public health systems because these results often lacked necessary demographic and epidemiologic information to support downstream case investigations of positive test results by local health departments. Fifth, respondents described poor mechanisms for sharing data between laboratories and providers, including insufficient or delayed electronic health record (EHR) integration for ordering and reporting test results. Clinicians reported resorting to "clunky" workarounds, such as receiving a facsimile from the health department that then had to be scanned into the medical record. Providers expressed concern that such systems could lead to insufficient follow-up, particularly for care of neonates. For example, a pediatrician cited the challenges of exchanging and recording testing data as the reason why one infant in her practice was not evaluated for congenital Zika syndrome until 4 months of age, months beyond the CDC recommendations for evaluation and management of possible congenital infection (10).
Table 2. Key challenges in the Zika virus response, with representative stakeholder quotes.
Testing
Logistical burdens with collecting and submitting samples: "I was filling out a form for the city. I was filling out another form for the state, and another for CDC. All to just be able to submit the samples for testing… it took me about 15-20 minutes just to fill out the paperwork [per patient]. And a lot of it was redundant." (infectious disease specialist)
Delays in receiving laboratory results: "… for a lot of women, [test results are] going to make no difference at all because they are going to continue their pregnancy... but, for other women, it may completely change their decision-making…. So that turnaround time matters, absolutely." (maternal-fetal medicine specialist)
Complexity and limitations of available Zika virus tests: "The testing is not very specific. It doesn't necessarily eliminate your risk of having Zika, so there's lots of limitations even with a negative test." (academic pediatrician)
Influence of commercial testing: "Frankly, the commercial labs, they're a blessing and not so much a blessing at the same time… when PCR specimens are done in a commercial lab and they're positive… we may have a patient name and that's it. Maybe their age, maybe their address, maybe not. And so we don't have all of the demographic information and epidemiologic information that we'd like to have to do a full case investigation." (state official)
Poor mechanisms for exchanging laboratory data: "We get [Zika test results from the health department] through the fax… and we'll have medical records scan it in and then I sent that to the provider who is seeing the patient. It's a little clunky, but that's the only way we can do it because of the mode that we're getting it through the fax." (community obstetrician)
Travel screening
Insufficient clinician initiation: "We would love it if our safety net providers… were doing a similar type of Zika screening for all patient visits, not just OB visits, 'cause you're kind of behind the ball if you wait 'til the person's already pregnant and has been exposed." (public health physician)
Inaccurate referral information: "So I think particularly for the immigrant population here in Harris County, there is also concerns that, 'why are they asking those questions, do they want to know where I've been and what I've done?' So I think there is also the concern for people who are here illegally perhaps that they don't want to divulge their travel history." (maternal-fetal medicine specialist)
Insufficiently precise information: "… pathology would receive a blood sample on a mom who had been to Florida. She said yes to Florida… but based on the form that pathology got, it doesn't say the city that she visited. Before they will send it, they have to verify that it was Miami. I call the mom, well, she went to Jacksonville. She didn't go to Miami. That kind of stuff is very time intensive for somebody to follow up on." (genetic counselor)
Patient demographics and immigration status
Transient and low socioeconomic level population: "… A lot of these patients are very underprivileged and have very low resources, living in charity homes, living in homeless shelters…. How do we provide resources for these patients that have almost no resources to begin with? ... that's a big issue that I'm not really sure how to fully tackle. I think it's a very large issue" (academic pediatrician)
Language barriers: "… 100% of our moms were Hispanic and low income. I can't remember a single one of them that spoke English either. And so there's a dynamic of we're trying to have interviews with them in a language that a number of our epidemiologists don't speak and try to find translators to convey whatever we're trying to ask, but then there's the dynamic of these patients with their own providers… there's a loss of information there just on the basis of translation." (public health physician)
Undocumented immigration status: "… we're definitely hearing from some people… parents who are not here legally, even if their kids are here legally, are afraid to access medical care for fear of deportation." (community pediatrician)
Collaboration between public health agencies and clinicians
Confusion as to the appropriate Zika virus "point person" within the public health system.
Travel Screening We identified 3 themes associated with travel screenings to identify patients with potential Zika virus exposure. First, specialty referral centers reported receiving inaccurate travel histories from patients or referring clinicians.
In some circumstances, respondents cited the inaccuracies as probably stemming from patient concerns over divulging travel history because of their immigration status. One obstetrician offered the example of a patient who had become pregnant in Central America and subsequently traveled to the United States. When entering the United States, the patient's husband was detained as an undocumented immigrant and remained incarcerated throughout her pregnancy. The patient subsequently declined to disclose her travel history to Zika virus-endemic regions to healthcare providers during several prenatal visits, ultimately notifying providers of her travel only early in her third trimester. According to another respondent, high levels of Zika virus-related anxiety may have motivated some patients to misrepresent their travel history in an effort to be referred for diagnostic testing. Second, clinicians suggested that insufficient initiation of travel screening probably resulted in some Zika virus-exposed patients "fall[ing] through the cracks" and not receiving recommended evaluation or testing. Some cited institutional barriers, including lack of development or implementation of travel screenings or restricting travel screening to obstetric visits only instead of expanding to primary care and family medicine. Third, reported travel information was sometimes insufficient or imprecise. For example, a patient might report visiting "Florida" when there was considerable heterogeneity in risk within the state (e.g., in the summer of 2016, a trip to Miami would trigger testing whereas a trip to Jacksonville would not) (11) or might list only the month of travel when the exact dates are necessary for ascertaining the most appropriate testing method. Patient Demographics and Immigration Status According to respondents, patient demographics presented challenges for access to care and subsequent follow-up. Many patients undergoing evaluation and subsequent care for Zika virus exposure were from or had close family ties to Zika virus-endemic countries. Several respondents characterized patients as often "transient" and described extensive socioeconomic barriers, including lack of stable housing, transportation, or telephone service, all of which could undermine long-term follow-up and care coordination. As a pediatrician explained, "We have a patient that gave us their cell phone number, but the cell phone went dead. They don't have family in the country and they live in a shelter, so I'm just not sure how to ensure we have good communication, transportation, follow-up, and shelter." Furthermore, respondents noted that for many patients, proficiency in the English language was limited, which respondents described as potentially undermining not only the accuracy of travel screenings but also patient education about Zika virus prevention and clinical management. Several respondents also described extensive issues related to immigration status and implications for healthcare access. Some patients were reluctant to disclose their travel history for fear of deportation. Respondents also noted that concern about immigration status may have dissuaded some patients from seeking care, either during the prenatal period or for subsequent follow-up infant care. Other clinicians noted that immigration status influenced patient behavior across the health system.
For example, public health clinicians described a "marked decline" in women accessing services through the Women, Infants, and Children program, reportedly because of concerns over deportation. Collaboration We identified 2 types of collaboration challenges. One was between providers and public health agencies and the other among clinicians involved in patient care. Provider-Public Health Agency Collaboration Respondents described challenges with communications between clinicians and public health agencies at the local, state, and federal levels. To develop an action plan, the Houston Office of Surveillance and Public Health Preparedness convened local stakeholders, including city departments, researchers, and industry leaders (12). Nevertheless, respondents reported missed opportunities for interdisciplinary communication. For example, some clinicians described uncertainty in identifying the Zika virus "point person" or team within health departments who could field questions from clinicians and hospital laboratories about case identification and testing, or even which health department had responsibility. Furthermore, submitting laboratory testing for one patient could involve numerous health departments, including separate paperwork, processes, and phone approvals for each department, all of which presented additional time burdens for clinicians and delays for patient diagnosis. According to one public health respondent, such challenges may reflect misunderstandings associated with allocation of responsibility across different public health partners: "I think there's a fundamental misunderstanding of what the CDC does and doesn't do. They have national experts. They provide resources that they get from Congress. There's a national laboratory, but they don't take over a response. A response is formulated at the local level… [but] local health departments have varying levels of expertise." Interclinician Collaboration Several clinicians described interclinician communication about patient care as the "most frustrating" or "hardest" aspect of the Zika virus response. We identified 2 distinct issues. The first and by far most common problem was insufficient or inefficient communication between obstetric and pediatric care providers, including pediatrician notification of suspected or confirmed maternal Zika virus infections. Specific criticisms related to this process included absence of a centralized database of positive test results, lack of connectivity between EHRs across different health systems, and an inability of pediatricians to access relevant maternal medical records. Second, several clinicians described insufficient clarity regarding case "ownership." Issues included disagreement over which provider or care team was responsible for tracking and communicating test results to patients and which providers (e.g., general pediatricians vs. specialists) should primarily be responsible for long-term evaluation of Zika virus-exposed infants. Improved EHR Support Respondents identified numerous ways in which technology could streamline testing processes, starting from the initial stage of screening patients, progressing to collecting and submitting samples, and then documenting and sharing results among providers and public health systems.
Five specific respondent suggestions for improved EHR support were 1) standardizing screening questions within the patient's EHR to ensure accurate assessment of risk exposure; 2) implementing EHR-based decision support systems to help providers select and order the appropriate test(s); 3) enabling electronic ordering of Zika virus testing through the EHR; 4) prepopulating demographic information within the test form to reduce provider time burden and improve the efficiency of public health case investigations; and 5) integrating the testing laboratory and patient EHR to enable test results to be automatically entered into the patient EHR. Admittedly, revising an EHR system can be an unwieldy process (13,14). Institutions should anticipate this challenge and develop policies to enable timely integration of hospital EHRs to facilitate clinical care and public health reporting. Specialty Centers and Referral Systems The rapid evolution of knowledge and the complexity of Zika virus testing and ongoing patient management provide arguments for specialty centers and referral systems. The education, documentation, and reporting requirements associated with patient care in the context of an evolving infectious disease outbreak are extensive and yet are only one of several goals during a patient-provider encounter. As one maternal-fetal medicine specialist explained, "This is a pretty specialized area that's rapidly evolving, and it's probably good to send patients for at least a discussion with folks that really have their fingertips on what did we learn last night at midnight when The New England Journal [of Medicine] was released." Several specialty centers were offered as corollaries, including existing models for perinatal HIV infection, congenital cytomegalovirus infection, or newborn screening for genetic conditions, as well as the designation of 3 tiers of health centers in the 2015 domestic Ebola response (15). For example, Illinois's Perinatal Rapid Testing Implementation Initiative for HIV successfully reduced the number of mother-infant pairs who were discharged with unknown HIV status; the initiative created 4 regional networks, a 24-hour hotline, a surveillance system, and implementation resources including template policies and consent forms (16). Specialty centers and referral systems offer several advantages for managing Zika virus infection and other emerging infectious diseases, including state-of-the-science laboratory testing and ultrasonography/magnetic resonance imaging with high precision and efficiency, potential to scale up for pandemics and endemic-disease expansion, capacity for prenatal and postnatal diagnosis and postnatal follow-up care, and opportunities for research and testing of potential interventions in an enriched population with a higher likelihood of exposure and infection. Some patients, particularly undocumented immigrants or other patients of low socioeconomic status, may face financial and transportation barriers with regard to accessing specialty centers, particularly for repeated visits. Additional cost pressures may arise in association with imaging and specialty care, even among insured patients. Additional work is needed to identify strategies to improve access while managing patient and system costs; these strategies include using telemedicine, providing transportation support and cell phones or phone cards to enable follow-up, and implementing corresponding regulatory and payment structures to facilitate these services.
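To make the first two EHR suggestions listed above more concrete, the deliberately simplified Python sketch below shows the kind of travel-screening and test-selection logic that could sit behind an intake form. The destination list and the day thresholds are placeholders chosen for illustration, not CDC testing criteria; a production rule would have to be maintained against current public health guidance.

from dataclasses import dataclass
from datetime import date

# Placeholder set of areas with active transmission; a real system would pull this
# from a regularly updated public health feed rather than a hard-coded list.
ACTIVE_TRANSMISSION = {"puerto rico", "honduras", "miami-dade county"}

@dataclass
class Encounter:
    pregnant: bool
    destinations: list     # places reported on the intake travel screen (lower case)
    last_exposure: date    # last day of travel to or residence in a listed area
    visit_date: date

def zika_screening_flag(e: Encounter) -> str:
    exposed = any(d in ACTIVE_TRANSMISSION for d in e.destinations)
    if not (e.pregnant and exposed):
        return "no Zika flag"
    days = (e.visit_date - e.last_exposure).days
    if days <= 14:    # placeholder window where NAT (rRT-PCR) is suggested
        return "flag: suggest NAT and prepopulate the health department form"
    if days <= 84:    # placeholder window where IgM serology is still informative
        return "flag: suggest IgM serology and prepopulate the health department form"
    return "flag: refer to maternal-fetal medicine / consult the health department"

print(zika_screening_flag(Encounter(True, ["honduras"], date(2016, 8, 1), date(2016, 8, 20))))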
Standardized Forms Several clinicians suggested standardizing forms across local, state, and federal agencies to reduce time burdens associated with testing. Federal, state, and local public health entities in Texas each had their own Zika virus-specific testing forms, for which much of the required information overlapped. Condensing these forms into a single, generalizable form could reduce work redundancies and form completion errors while increasing testing efficiency. Ideally, these forms would be online, could translate directly into state databases of testing results, and could be integrated into Zika virus registry reporting information. State or Regional Testing Databases Providing patients with timely testing, clinical management, and follow-up requires that clinicians know of suspected or confirmed exposures. However, our interviews suggested that clinicians faced several barriers accessing patients' prior Zika virus testing information at the point of care. Insufficient information about prior testing or exposure was particularly challenging for patients who accessed different providers throughout the course of a pregnancy, such as receiving prenatal care from a federally qualified health center, delivering at a county hospital, and going to a separate Medicaid clinic for their neonatal visits. This situation is not uncommon for patients in the demographic group most affected during Houston's Zika virus experience. For most patients, standard practice for Zika virus testing and reporting of results in Texas occurs through county health departments, according to ZIP code. Such practices are particularly challenging for providers in large medical centers, which regularly draw patients from >1 health department. A potential solution would be to create a centralized state or regional database for Zika virus test results. Centralized databases could enable providers to access timely information about prior exposure and testing history, regardless of where that testing occurred, similar to those that already exist for patients with syphilis or HIV infection. Joint Academic-Public Health Task Force As emerging arboviruses become the new normal, emerging infectious diseases will continue to pose complex health challenges for communities (17). Prior responses demonstrate the value of infrastructure and capacity building to prepare for the inevitability of future outbreaks (18). Although national organizations and their state and local affiliates (e.g., CDC, US Department of Health and Human Services, American Medical Association, American College of Obstetrics and Gynecology, American Academy of Pediatrics, and Infectious Disease Society of America) can provide information, guidelines, and algorithms, operationalizing and implementing these resources in clinical settings requires local engagement and adaptation. As a public health official explained, "The algorithms… make sense from the scientific viewpoint, but they get confusing to individual practitioners. So making it easy for individual practitioners on the front line to be able to know who to call to get advice… mapping out those systems of care at the local level, I think, will be very important." Two levels of local engagement would support the response to emerging infectious diseases: at the community level and within individual institutions. 
At the community level, task forces composed of health leadership organizations, academic medical centers, and local public health authorities could facilitate communitywide collaboration, addressing such issues as defining roles and responsibilities across partners, communicating to the public about preparedness and response, enhancing laboratory capacity, and expanding testing and surveillance in high-risk areas. This strategy is consistent with prior guidance regarding the importance of training local professionals willing to collaborate with the government in public health management in enabling effective responses to emerging infectious diseases (19). Such collaboration will admittedly be challenging in some circumstances because local institutions may be more accustomed to viewing each other as competitors for clinical, philanthropic, and research resources rather than as collaborative partners. Prior successful multiorganizational collaborative efforts may offer some instructive lessons, including enlisting an existing reputable organization as a convening body (e.g., the Texas Medical Center), involving key stakeholders, and using electronic tools for low-cost dissemination (20). At the institutional level (including hospitals, clinics, and academic health centers), leadership should designate an emerging infectious disease point person or committee with the authority and support to rapidly create an ad hoc task force comprising interdisciplinary members needed to effectively address a public health emergency. Diverse representation will be vital to the success of such committees, including clinical leadership to translate national, state, or local algorithms to clinical management plans for the respective institutional contexts; health information management leaders to modify and implement EHR changes to support the clinical response; laboratory personnel to guide testing processes; and communications experts to disseminate guidance on policy and process throughout the institution. Limitations For this project, we purposefully selected clinical, operational, and public health leaders with experience responding to Zika virus infections. Consequently, obstetricians, pediatricians, and infectious disease specialists working in academic medical centers were overrepresented. Their experiences shed insight into features of health system capacity and preparedness to respond to an emerging infectious disease. However, involvement of other stakeholders, particularly patients and primary care providers in nonacademic, rural, and resource-poor settings, probably would have provided additional insights. The suggestions for improvement reflect issues raised by our respondents and are not exhaustive. Other critical issues merit further attention, including how healthcare providers can ensure that patients feel safe to access needed health services in the face of contemporary government approaches to immigration enforcement (21). Conclusions The emergence of Zika virus brought numerous challenges to the health system in Houston. Although the virus itself was relatively new, many of the issues confronted by providers and public health officials in the face of the disease were far from novel. Instead, the issues were often the result of known, predictable, and recurring shortcomings in our healthcare system. The insights of expert stakeholders led us to suggest several strategies for improving the response to Zika virus and other future emerging infectious diseases.
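As a concrete illustration of one of those strategies, the centralized state or regional testing database proposed above could start from two linked tables like the minimal sketch below; all field names are illustrative, and a real system would additionally need patient identity matching, access control, and public health reporting interfaces.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY,
    name       TEXT,
    dob        TEXT,
    zip_code   TEXT          -- determines which health department currently owns the case
);
CREATE TABLE zika_test (
    test_id      INTEGER PRIMARY KEY,
    patient_id   INTEGER REFERENCES patient(patient_id),
    ordered_by   TEXT,       -- ordering provider or facility
    laboratory   TEXT,       -- public health, in-house, or commercial laboratory
    test_type    TEXT,       -- e.g. rRT-PCR, IgM serology, PRNT
    specimen     TEXT,
    collected_on TEXT,
    resulted_on  TEXT,
    result       TEXT
);
""")

# Any treating provider (obstetric or pediatric) could then query one place for a
# patient's prior Zika testing, regardless of which jurisdiction or laboratory ran it.
rows = conn.execute(
    "SELECT test_type, result, resulted_on FROM zika_test WHERE patient_id = ?", (1,)
).fetchall()
print(rows)  # empty list here, since this illustrative database holds no records yet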
2018-10-30T10:41:55.215Z
2018-11-01T00:00:00.000
{ "year": 2018, "sha1": "98528defe6c7650bd3816ae4c0b9c2ce5f0e79d1", "oa_license": "CCBY", "oa_url": "https://wwwnc.cdc.gov/eid/article/24/11/pdfs/17-2108.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4ad19589e38502c96cbc0cd16c6760f55c17f0e6", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
119475914
pes2o/s2orc
v3-fos-license
Infrared Extinction by Aggregates of SiC Particles Particle shape and aggregation have a strong influence on the spectral profiles of infrared phonon bands of solid dust grains. In this paper, we use a discrete dipole approximation, a cluster-of-spheres code following the Gerardy-Ausloos approach, and a T-matrix method for calculating IR extinction spectra of aggregates of spherical silicon carbide (SiC) particles. We compare the results obtained with the three different methods and discuss differences in the band profiles. Introduction Grain growth by aggregation is an important process in dense cosmic environments as well as in the Earth's atmosphere. Besides influencing dynamical properties, it also changes the absorption and scattering properties of the solid dust particles for electromagnetic radiation (e.g. [4]). This is especially true in spectral regions where resonant absorption occurs, such as the phonon bands in the infrared. It is quite well known that shape and aggregation effects actually determine the band profiles of such absorption and emission bands, which hinders e.g. the identification of particulate materials by their IR bands, but detailed investigations especially of the influence of grain aggregation are still lacking. We plan to set up a spectroscopic experiment (see Tamanai et al., this volume) for measuring aggregation effects on IR extinction by dust particles dispersed in air and, simultaneously, have started to use light scattering theory in order to predict numerically the band profiles for different aggregation states. Structure of the clusters We consider three-dimensional clusters of identical touching spherical particles arranged in three different geometries: fractal, cubic, and linear. For a high precision of the calculations, we restrict the number of particles per cluster to less than 10. Therefore, we have selected only three geometries, namely a "snowflake first-order prefractal" cluster (fractal dimension D = ln 7/ln 3 = 1.77, [5]), where one sphere is surrounded by six others along the positive and negative Cartesian axes, a cluster of eight spheres arranged as a cube, and a linear chain of nine spheres. All of the clusters (Fig. 1) consist of spheres with radii R = 10 nm and are embedded in vacuum (or air). For the optical constants of the particles we have chosen the data of β-SiC in the wavelength range 10-13 µm, calculated from a Lorentzian oscillator-type dielectric function describing the phonon resonance in this wavelength range (see [6]). On the one hand, this phonon resonance is of practical importance since it is observed as an emission band from dust particles in carbon star envelopes. On the other hand, depending on the resonance damping parameter, it represents a model material of a very high complex refractive index with |m| > 10 and sharp surface resonances in the wavelength range between the LO and TO frequencies. The DDA method The discrete dipole approximation (DDA) method is one of several discretisation methods (e.g. [7], [8]) for solving scattering problems in the presence of a target with arbitrary geometry. In this work we use the DDSCAT code version 6.1 [1], which is very popular among astrophysicists. In DDSCAT the considered grain/cluster is replaced by a cubic array of point dipoles of certain polarizabilities [9]. The cubic array has numerical advantages because the conjugate gradient method can be efficiently applied to solve the matrix equation describing the dipole interactions.
By specifying an appropriate grid resolution, calculations of the scattering and absorption of light by inhomogeneous media such as particle aggregates can, in principle, be carried out to whatever accuracy is required. For the sc8 cluster we used a grid of 36 × 36 × 36 dipoles, which provides 23,752 dipoles in the cluster, while for the lin9 cluster a grid of 64 × 64 × 64 dipoles was used, providing 1676 dipoles in the cluster. The program performs orientational averaging of the clusters. The clusters-of-spheres method The clusters-of-spheres calculations have been performed using (1) the program developed by M. Quinten (MQAGGR, commercially available), based on the theoretical approach by [10], and (2) the T-matrix code by D.W. Mackowski (SCSMTM), which calculates the random-orientation scattering matrix for an ensemble of spheres [3]. Both programs aim at solving the scattering problem in an exact way by treating the superposition of the incident and all scattered fields, developed into a series of vector spherical harmonics. Available computer power, however, forces one to truncate the series at a certain maximum multipole order npol_max, which in both programs can be specified explicitly. Furthermore, both programs perform an orientational average of the cluster, the resolution of which was set to 15 degrees in MQAGGR and 10 degrees in SCSMTM for theta (the scattering angle). The variation in the azimuthal angle is not specified in SCSMTM. In MQAGGR it is varied between 0 and 360 degrees, again with a resolution of 15 degrees. Results and discussion The extinction profiles shown in Fig. 3 display the results obtained for two cluster geometries and three different codes. The profiles obtained with the two clusters-of-spheres codes at the same multipolar order (13 for the linear chain and 15 for the cubic cluster) give quasi-identical results in the case of the linear chain but show differences for the cubic cluster. SCSMTM gave fewer resonances than MQAGGR, which is true for all lower maximum multipolar orders as well. The reason for this could possibly be the different ways of orientational averaging; this is not completely understood yet. Generally speaking, neither code converged up to the multipolar orders tried (up to 2 h of CPU time per wavelength point). DDSCAT at the maximum resolution used (1676 dipoles in the linear cluster, 23,752 dipoles in the cubic one) gave much smoother profiles, i.e., with less distinct resonances. DDSCAT is known to have limitations for large refractive indices [11]. On the other hand, the resonances produced by the clusters-of-spheres codes depend so strongly on the maximum multipolar order that it seems reasonable to assume that the (unreached) converged spectrum would show a smoother profile as well. Results with lower dipole resolution tend to show sharp resonances as well.
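All of the band profiles discussed above trace back to the single-oscillator dielectric function of β-SiC introduced in the section on the cluster structure. The Python sketch below illustrates that ingredient; the oscillator parameters (ε_∞, TO/LO wavenumbers, damping) are typical literature values inserted here for illustration rather than the exact data of [6], and the extinction efficiency is evaluated only in the Rayleigh limit for a single isolated sphere, i.e. without any of the shape and aggregation effects that the three codes are designed to capture.

```python
import numpy as np

# Lorentzian (single phonon oscillator) dielectric function of beta-SiC.
# Representative literature values, used here as assumptions (not the data of [6]):
eps_inf = 6.7        # high-frequency dielectric constant
nu_TO   = 793.0      # TO phonon wavenumber, cm^-1
nu_LO   = 970.0      # LO phonon wavenumber, cm^-1
gamma   = 5.0        # damping, cm^-1

wavelength_um = np.linspace(10.0, 13.0, 300)
nu = 1.0e4 / wavelength_um                 # wavenumber in cm^-1

eps = eps_inf * (1.0 + (nu_LO**2 - nu_TO**2) /
                 (nu_TO**2 - nu**2 - 1j * gamma * nu))
m = np.sqrt(eps)                           # complex refractive index

# Rayleigh-limit extinction efficiency of a single small sphere in vacuum:
# Q_ext ~ Q_abs = 4 x Im[(eps - 1)/(eps + 2)], with size parameter x = 2*pi*R/lambda.
R_um = 0.01                                # 10 nm sphere radius
x = 2.0 * np.pi * R_um / wavelength_um
Q_ext = 4.0 * x * np.imag((eps - 1.0) / (eps + 2.0))

i_peak = np.argmax(Q_ext)
print(f"single-sphere (Froehlich) resonance near {wavelength_um[i_peak]:.2f} um, "
      f"max |m| = {np.abs(m).max():.1f}")
```

With these parameters the complex refractive index indeed exceeds |m| = 10 between the TO and LO wavenumbers, which is exactly the regime in which the sharp surface resonances appear and the different numerical methods are hardest to converge.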
2018-12-18T11:32:46.491Z
2005-11-11T00:00:00.000
{ "year": 2005, "sha1": "6c3719dec7c6cbdf8340682a1536bc6a36115478", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "6c3719dec7c6cbdf8340682a1536bc6a36115478", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
256121208
pes2o/s2orc
v3-fos-license
The exponential pencil of conics The exponential pencil G_λ := G_1(G_0^{-1}G_1)^{λ-1}, generated by two conics G_0, G_1, carries a rich geometric structure: It is closed under conjugation, it is compatible with duality and projective mappings, it is convergent for λ → ±∞ or periodic, and it is connected in various ways with the linear pencil g_λ = λG_1 + (1-λ)G_0. The structure of the exponential pencil can be used to characterize the position of G_0 and G_1 relative to each other. The linear pencil g_λ = λG_1 + (1-λ)G_0 of two conics already offers a rich geometry to study. However, the linear pencil lacks certain desirable properties: For example, it is not compatible with duality, i.e., the linear pencil of the duals of two conics is not the dual of the pencil of the two conics (see Sects. 2.1 and 3), and the linear pencil does, in general, not exist as real conics for all λ ∈ R. In this article, we investigate the exponential pencil G_λ = G_1(G_0^{-1}G_1)^{λ-1} of two conics G_0 and G_1. It turns out that this pencil has a remarkable spectrum of geometric properties, which we study in Sect. 3. In Sect. 4 we classify the exponential pencils according to the relative position of the generating conics. But first, we start with some preliminary remarks to set the stage and to fix the notation. Matrix powers Let f : R → C^{n×n} be analytic such that (a) f(0) = I, where I is the identity matrix, (b) f(1) = A, and (c) f(x + y) = f(x) · f(y) for all x, y ∈ R. In particular, we have f(-x) = f(x)^{-1} for all x ∈ R, and therefore A is necessarily regular. Moreover, all matrices f(x), f(y) commute. With the infinitesimal generator F := f'(0), we may write f(x) = e^{Fx}. In particular, A = f(1) = e^F, i.e., F is a logarithm of A. The logarithm of a matrix is in general not unique. Nonetheless, (a)-(c) determine the values of f(n) for all n ∈ Z. It is convenient to write f(x) = A^x for a function satisfying (a)-(c). However, we have to keep in mind that two different logarithms of A define different functions x → A^x.
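The matrix-power construction above translates directly into a short numerical sketch. The snippet below (an illustration added here, not taken from the paper) evaluates G_λ = G_1(G_0^{-1}G_1)^{λ-1} by using the principal matrix logarithm of A = G_0^{-1}G_1 as the infinitesimal generator; it implicitly assumes that this principal logarithm is real, which, as discussed in Sect. 4, depends on the relative position of the two conics. The test conics are two concentric circles, an illustrative choice for which the result can be checked by hand.

```python
import numpy as np
from scipy.linalg import expm, logm

def conic_power(G0, G1, lam):
    """Evaluate G_lambda = G1 (G0^{-1} G1)^(lambda - 1).

    A = G0^{-1} G1 is raised to a real power via its principal
    matrix logarithm F, i.e. A^x = expm(x * F)."""
    A = np.linalg.solve(G0, G1)            # A = G0^{-1} G1
    F = logm(A)                            # principal logarithm (assumed real here)
    return G1 @ expm((lam - 1.0) * F)

# Concentric circles x^2 + y^2 = 1 and x^2 + y^2 = 4 as symmetric
# 3x3 matrices in homogeneous coordinates.
G0 = np.diag([1.0, 1.0, -1.0])
G1 = np.diag([1.0, 1.0, -4.0])

for lam in [-1.0, 0.0, 0.5, 1.0, 2.0]:
    G = np.real_if_close(conic_power(G0, G1, lam))
    # Expected: diag(1, 1, -4**lam), i.e. the circle of radius 2**lam.
    print(f"lambda = {lam:4.1f}   diag(G_lambda) = {np.diag(G)}")
```

For λ = 0 and λ = 1 this reproduces G_0 and G_1, and at integer λ the result is independent of the choice of logarithm, as noted above; the family of circles of radius 2^λ obtained here is the simplest instance of two nested conics in the classification of Sect. 4.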
In concrete cases, a function A x can be calculated by the binomial series whenever the series converges. Let f (x) = A x be a solution of (a)-(c), and suppose the matrix A is similar to the matrix B, i.e., B = T −1 AT . Then g(x) := T −1 f (x)T is analytic, g(0) = I, g(1) = T −1 AT = B, and g(x + y) = g(x) · g(y) for arbitrary x, y ∈ R. Thus, g(x) = B x . In this situation, the infinitesimal generators of f and g are similar: g (0) = T −1 f (0)T . Projective plane and conics We will work in the standard model of the real projective plane, i.e., we consider the set of points P = R 3 \{0}/ ∼, where x ∼ y ∈ R 3 \{0} are equivalent if x = λy for some λ ∈ R. The set of lines is B = R 3 \{0}/ ∼, where g ∼ h ∈ R 3 \{0} are equivalent, if g = λh for some λ ∈ R. We that say a point [x] and a line [g] are incident if x, g = 0, where we denoted equivalence classes by square brackets and the standard inner product in R 3 by ·, · . As usual, a line [g] can be identified with the set of points which are incident with it. Vice versa, a point [x] can be identified with the set of lines which pass through it. The affine plane R 2 is embedded in the present model of the projective plane by the map The projective general linear group PG L(3, R) consists of equivalence classes [A] of regular matrices A ∈ R 3×3 representing maps P → P, A conic in this model of the projective plane is an equivalence class of a regular, linear, selfadjoint map A : R 3 → R 3 with mixed signature, i.e., A has eigenvalues of both signs. It is convenient to say a matrix A is a conic, instead of A is a representative of a conic. We may identify a conic by the set of points [x] such that x, Ax = 0, or by the set of lines [g] for which A −1 g, g = 0 (see below). Notice that, in this interpretation, a conic cannot be empty: Since A has positive and negative eigenvalues, there are points [ p], [q] with p, Ap > 0 and q, Aq < 0. Hence a continuity argument guarantees the existence of points [x] satisfying x, Ax = 0. From now on, we will only distinguish in the notation between an equivalence class and a representative if necessary. Fact 2.1 Let x be a point on the conic A. Then the line Ax is tangent to the conic A with contact point x. Proof We show that the line Ax meets the conic A only in x. Suppose otherwise, that y x is a point on the conic, i.e., y, Ay = 0, and at the same time on the line Ax, i.e., y, Ax = 0. By assumption, we have x, Ax = 0. Note, that Ax Ay since A is regular, and Ay, x = 0 since A is selfadjoint. Hence x and y both are perpendicular to the plane spanned by Ax and Ay, which contradicts y x. In other words, the set of tangents of a conic A is the image of the points on the conic under the map A. And consequently, a line g is a tangent of the conic iff A −1 g is a point on the conic, i.e., if and only if A −1 g, g = 0. Definition 2.2 If P is a point, the line AP is called its polar with respect to a conic A. If g is a line, the point A −1 g is called its pole with respect to the conic A. Obviously, the pole of the polar of a point P is again P, and the polar of the pole of a line g is again g. Moreover: Fact 2.3 If the polar of a point P with respect to a conic A intersects the conic in a point x, then the tangent in x passes through P. The fundamental theorem in the theory of poles and polars is Fact 2.4 (La Hire's Theorem) Let g be a line and P its pole with respect to a conic A. Then, for every point x on g, the polar of x passes through P. 
And vice versa: Let P be a point and g its polar with respect to a conic A. Then, for every line h through P, the pole of h lies on g. Proof We prove the second statement, the first one is similar. The polar of P is the line g = AP. A line h through P satisfies P, h = 0 and its pole is Q = A −1 h. We check, that Q lies on g: The next fact can be viewed as a generalization of Fact 2.4: Theorem 2.5 Let A and G be conics. Then, for every point x on G, the polar p of x with respect to A is tangent to the conic H = AG −1 A in the point x = A −1 Gx. Moreover, x is the pole of the tangent g = Gx in x with respect to A. Proof It is clear that H = AG −1 A is symmetric and regular, and by Sylvester's law of inertia, H has mixed signature. The point x on G satisfies x, Gx = 0. Its pole with respect to A is the line g = Ax. This line is tangent to H iff H −1 g, g = 0. is indeed the polar of x with respect to A. The last statement in the theorem follows immediately. Definition 2.6 The conic H = AG −1 A is called the conjugate conic of G with respect to A. Recall that the dual of a point P ∈ P is the line P ∈ B and the dual of the line g ∈ B is the point g ∈ P. In particular, P and g are incident if and only if their duals are incident. The dual lines of all points on a conic A are tangent to the conic A −1 , and the dual points of all tangents of a conic A are points on the conic A −1 . Therefore, A −1 is called the dual conic of the conic A. We will denote the dual A −1 of a conic A by A . The projective space P = R 3 \{0}/ ∼ can also be represented as the unit sphere S 2 ⊂ R 3 with antipodal identification of points. Then, this space S, endowed with the natural metric d ([x], [y]) = arcsin x × y , becomes a complete metric space with bounded metric. The set of closed sets in this space is a complete metric space with respect to the inherited Hausdorff metric. In particular, a conic A given by is a compact set in S. In this sense, we can consider the limit of a sequence of conics. The exponential pencil The linear pencil of two matrices g 0 , g 1 ∈ C n×n is given by This notation is consistent for the values λ = 0 and λ = 1. If g 0 and g 1 commute, exponentiation of the linear pencil gives where G i := e g i . The last expression in (1) makes sense also for non-commuting matrices and we may define an exponential pencil of two matrices G 0 , G 1 ∈ C n×n by provided (G −1 0 G 1 ) x , x ∈ R, exists in the sense of Sect. 2.1. The notation G λ in (2) is consistent for the values λ = 0 and λ = 1. Notice that for regular matrices G 0 , G 1 , a unique discrete exponential pencil G n = G 1 (G −1 0 G 1 ) n−1 for n ∈ Z exists. This general concept applies naturally to conics and we define: Definition 3.1 Let G 0 , G 1 be two conics. Then is called an exponential pencil generated by G 0 and G 1 provided that all G λ are symmetric and real. Remarks (a) For an exponential pencil to exist, it is necessary and sufficient that G −1 0 G 1 has a real logarithm F such that G 1 F is symmetric. (b) In Sect. 4 we will see that the existence of an exponential pencil depends on the position of G 0 and G 1 relative to each other, and except for only one case, the exponential pencil is unique. (c) Each G λ in an exponential pencil generated by G 0 and G 1 is actually a conic: In contrast to the linear pencil, an exponential pencil of conics does not contain degenerate or complex conics. This is a consequence of the following Lemma. (ii) Since G λ is symmetric, it has real eigenvalues which depend continuously on λ. 
Then, according to (i), the product of the eigenvalues cannot change sign and the signature of G λ remains constant. The next Lemma will have immediate geometric consequences: In view of Theorem 2.5 and Definition 2.6, we get as an immediate consequence of Lemma 3.3: More generally, we have the following: Lemma 3.5 If G λ 0 and G λ 1 belong to a pencil G λ = G 1 (G −1 0 G 1 ) λ−1 generated by G 0 , G 1 , then G λ 0 and G λ 1 generate the same exponential pencil as G 0 and G 1 . More precisely, we have In particular, the exponential pencil does not depend on the order of the defining conics G 0 and G 1 . where we used the original definition of f in the last equality. Now the claim follows immediately. It turns out that exponential pencils behave well with respect to duality: Theorem 3.6 Let G 0 and G 1 be conics and G 0 and G 1 their duals. Suppose G 0 and G 1 generate an exponential pencil G λ . Then, the dual of G λ is an exponential pencil of G 0 and G 1 . More precisely, for all λ ∈ R we have Observe that the linear pencil does not enjoy the corresponding property. and therefore, according to Sect. 2.1, we may write We obtain and claim follows by replacing x by λ − 1. The natural question is now to ask which conics G 0 , G 1 generate an exponential pencil. To answer this question, we recall that two conics can lie in 8 different positions relative to each other (see Petitjean 2010): We now go case by case through the list and investigate the existence and the geometric properties of the resulting exponential conics. In particular, it will turn out that the exponential conic and the linear conic are quite closely related. We start with the important observation that the exponential pencil is projectively invariant: Lemma 3.7 Let S ∈ R n×n be a regular matrix, inducing a projective map P → P, x → Sx. Then the image under S of an exponential pencil G λ = G 1 (G −1 0 G 1 ) λ−1 of two conics G 0 , G 1 is an exponential pencil of their images. Proof For T := S −1 , the images of the conics G 0 , G 1 under S areḠ 0 := T G 0 T andḠ 1 := T G 1 T . We want to show that the imageḠ λ = T G λ T is an exponential pencil ofḠ 0 andḠ 1 . We start by definig f (x) and therefore, according to Sect. 2.1, we may write f (x) = (Ḡ −1 0Ḡ 1 ) x . We obtain and claim follows by replacing x by λ − 1. The investigation of the exponential pencils in all the Cases 1-8 listed above can now be reduced to a canonical form in each case. Classification of the exponential pencils The two figures below show the exponential pencil of two conics G 0 , G 1 (bold) in two cases. On the left, the geometry seems rather gentle, on the right quite complex. In this section, we investigate the exponential pencil of two conics in each of the possible cases of their relative position. It turns out that the geometric behavior of the exponential pencil is characteristic for each case. iff the common interior of G 1 and G 0 is connected. In this case, the exponential pencil is unique. G λ converges for λ → ±∞ to a line ± . The family G λ has an envelope E with asymptotes ± . Through every exterior point of E (i.e., points with four tangents to E), except for the points on ± , there pass exactly two members of the exponential pencil G λ . Each G λ touches a member of the linear pencil g λ = λG 1 + (1 − λ)G 0 in two first order contact points. Proof After applying a suitable projective map, we may assume that where a > 1 > b > 0 or b > 1 > a > 0 for the positive sign, and a > 1 for the negative sign (see Halbeisen and Hungerbühler 2017). 
Let then every solution X of e X = A leads to an exponential pencil G λ = G 1 e (λ−1)X of G 0 and G 1 , provided G λ is a real symmetric matrix for all λ ∈ R. In particular, h(x) := e x X must be real for all x ∈ R. But then h (0) = X must be real. We can therefore concentrate on real solutions of e X = A. According to Culver (1966, Theorem 1), such a real solution exists only for the positive sign in A. This corresponds to the case, where the common interior of G 0 and G 1 is connected. Then, the solution of e X = A is unique, according to Culver (1966, Theorem 2), and we obtain a unique exponential pencil given by The envelope E is obtained by eliminating λ from ∂ ∂λ x, G λ x = 0 and x, G λ x = 0. One finds The figure shows in the affine plane x 3 = 1 the pencil generated by the unit circle G 0 and an ellipse G 1 (both bold), together with the asymptotic lines ± (red) and the envelope E (blue). Theorem 4.2 (Case 2) Let G 0 , G 1 be two disjoint conics. Then they generate an iff G 1 is in the interior of G 0 or vice versa, in which case the exponential pencil is unique. G λ converge for λ → ±∞ to a point (which coincides with a limit point of the linear pencil g λ = λG 1 + (1 − λ)G 0 ), and a line (which contains the second limit point of the linear pencil). Each G λ touches two members of the linear pencil g λ = λG 1 + (1 − λ)G 0 in two first order contact points, or, if G 0 , G 1 are projectively equivalent to concentric circles, each G λ belongs to the linear pencil. Proof Since G 0 , G 1 are disjoint, there exist coordinates for which both conics are diagonal [see for example Pesonen (1956) or Hong et al. (1986)]: W.l.o.g. where 1 > a, b > 0 or a, b > 1 in case of the positive sign, and 1 > a > 0, b > 0 in case of the negative sign. Then, As in Case 1, an exponential pencil can only exist for the positive sign in A. This corresponds to the case where G 0 is in the interior of G 1 or vice versa. Now, we have to consider two cases: Case 2a. a = b: Then, by the same reasoning as in Case 1, the exponential pencil G λ is unique and given by (3). The figure on the left shows, in the plane x 3 = 1, the exponential pencil generated by the unit circle G 0 and an ellipse G 1 inside of G 0 (both bold). The limit as λ → ∞ is the center (red), and as λ → −∞ the ideal line. It is instructive to look at the same configuration on the sphere (figure on the right, limit point and limit ideal line in red). Case 2b. a = b: In this case we have and according to Culver (1966, Theorem 2, and Corollary), there is a continuum of real solutions of e X μ = A. So, there is a chance that the exponential pencil is not unique. From Gantmacher (1998, §8) we infer that all matrices where m, n are integers and K is an arbitrary regular matrix of the form are logarithms of A, and there are no other logarithms. Then e (λ−1)X μ has the same block structure as K . Now, in our case, we need that G λ = G 1 e (λ−1)X μ is real and symmetric. But this implies that G −1 1 G λ = e (λ−1)X μ is real and symmetric for all λ. Then the derivative of this with respect to λ at λ = 1 gives that X μ must be real and symmetric. Then for each k ∈ N, e X μ /2 k is also symmetric and real, and positive definite, because e X μ /2 k = e X μ /2 k+1 e X μ /2 k+1 . Recall that repeated roots A 1/2 k of A which are real, symmetric and positive definite, are unique. This means, that the values of e X μ /2 k agree for all integers k. Therefore, the infinitesimal generators X μ must actually agree. 
In other words, there is only one real symmetric logarithm X of A, and the exponential pencil is given by (3), i.e. a family of concentric circles. Alternatively, the uniqueness can be seen directly from (4) by imposing symmetry and real valuedness of X μ . Theorem 4.3 (Case 3) Let G 0 , G 1 be two conics with two intersectctions. Then they generate a countable family of exponential pencils G . Such a pencil is either periodic with a conic as envelope, or periodically expanding covering the plane infinitely often, with a local envelope which has a singular point S. For integer values of λ, the corresponding conics of all exponential pencils agree. Proof After applying a suitable projective map, we may assume that (see Halbeisen and Hungerbühler 2017). Geometrically, G 1 represents a circle of radius r > 0 in the plane x 3 = 1 with center in (a, 0) which intersects the unit circle G 0 , centered in (0, 0), in two real points. I.e., −1 < a − r < 1 and 1 < a + r , which implies that κ := (1 − a + r )(1 + a − r )(a + r − 1)(a + r + 1) > 0 because all four factors are strictly positive. We now use a translation T , a swap of axis P, a scaling L, and a rotation R, namely with the following values Notice that 4a 2 − κ = (1 + a 2 − r 2 ) 2 ≥ 0 and hence the radicand 2 − √ κ a ≥ 0 in c. For U = T P L R this leads to the following representation of the conics: In the plane x 3 = 1 these are rotated hyperbolas centered at (0, 0, 1), and we denote them again by G 0 and G 1 . Then, G −1 0 G 1 has the form , where k is an arbitrary integer. Notice, that −2r < 1 − a 2 + r 2 < 2r , again because the factors of κ are strictly positive, and hence the values φ k are real. Here, according to Gantmacher (1998, §8), we find the following solutions X of A = e X k : Therefore, we get For r = 1 (and only in this case), the resulting exponential pencil is periodic with period 2π/φ k . Hence, in the plane x 3 = 1, G λ are rectangular hyperbolas, rotating around the origin with constant angular velocity φ k . For r = 1, the rectangular hyperbolas are rotating with constant angular velocity φ k and at the same time exponentially shrinking (r > 1) or expanding (0 < r < 1) with factor r λ . The figures below show the two cases: G 0 and G 1 are bold, the envelope is blue, the singular point S is red. Remark The case when r = 1 (i.e., when the resulting exponential pencil is periodic), was studied with respect to Poncelet's Theorem in Halbeisen and Hungerbühler (2016) and Halbeisen and Hungerbühler (2017). Theorem 4.4 (Case 4) Let G 0 , G 1 be two conics with two intersections and one first order contact. Then they generate an exponential pencil G λ = G 1 G −1 0 G 1 λ−1 iff the contact point of G 1 and G 0 lies on the boundary of their common interior. Then the exponential pencil is unique. Each G λ touches a member of the linear pencil g λ = λG 1 + (1 − λ)G 0 in two first order contact points. For λ → ±∞, G λ converges to the tangent in the contact point, and to a line trough the contact point, respectively. The family G λ has an envelope E. Proof After applying a suitable projective map, we may assume that (see Halbeisen and Hungerbühler (2017)). Then, As in the proof of Case 2b, we are only interested in real logarithms of A. By Culver (1966, Theorem 1), the real logarithm of A exists iff μ < 1. This corresponds to the situation where the contact point sits on the boundary of the common interior of G 0 and G 1 . By Culver (1966, Theorem 2), the real logarithm is unique. 
By the binomic series we get and finally the exponential pencil Notice that the binomial series converges only for |μ| < 1. But the expression we got for (G −1 0 G 1 ) x satisfies the properties of Sect. 2.1 and therefore the result for G λ is correct for arbitrary μ < 1, μ = 0. The conics G λ are symmetric to the line (0, 1, 0) and touch G 0 , G 1 in their contact point. The envelope E is obtained by eliminating λ from ∂ ∂λ x, G λ x = 0 and x, G λ x = 0. In the plane x 3 = 1 one finds The figure shows, in the plane x 3 = 1, the pencil generated by the unit circle G 0 and an ellipse G 1 (both bold) together with the limiting lines (red) and the envelope E (blue). Theorem 4.5 (Case 5) Let G 0 , G 1 be two conics with one first order contact point C. Then, they generate an exponential pencil G λ = G 1 G −1 0 G 1 λ−1 iff G 1 lies inside of G 0 or vice versa. This exponential pencil is unique. The family G λ together with the tangent in C forms a foliation of P\{C}. Each G λ touches a member of the linear pencil g λ = λG 1 + (1 − λ)G 0 in two first order contact points. If G 1 is inside of G 0 , then G λ converges to C for λ → ∞, and to the tangent in C for λ → −∞. If G 0 lies inside of G 1 it is the other way round. Proof After applying a suitable projective map, we may assume that (see Halbeisen and Hungerbühler 2017), i.e., G 0 is a unit circle centered in (0, 0, 1) and G 1 a circle with center (a, 0, 1) which touches G 0 in (1, 0, 1) . Then, we get G −1 0 G 1 = I + T αT −1 . As in Case 4, the real logarithm of A exists, and is unique, iff 1 > a. This corresponds to the case where G 0 is inside G 1 or vice versa. Then, by the binomic series, we get and finally the exponential pencil Notice that the binomial series converges only for |a| < 1. However, the expression we obtained for (G −1 0 G 1 ) x satisfies the properties of Sect. 2.1 and therefore, the result for G λ is correct for arbitrary a < 1, a = 0. The conics G λ are symmetric to the line (0, 1, 0) and touch G 0 , G 1 in C. The figure shows, in the plane x 3 = 1, the pencil generated by the unit circle G 0 and a circle G 1 inside of G 0 (both bold), together with the tangent in the contact point (red). Theorem 4.6 (Case 6) Let G 0 , G 1 be two conics with two first order contact points C 0 , C 1 . Then they generate an exponential pencil G λ = G 1 G −1 0 G 1 λ−1 iff G 0 lies inside of G 1 or vice versa. This exponential pencil is unique, and each conic G λ is a member of the linear pencil g λ = λG 1 + (1 − λ)G 0 . If G 1 is inside of G 0 , then G λ and g λ have the same limit for λ → ∞, and for λ → −∞ the limit of G λ consists of the tangents in C 0 and C 1 . If G 0 is inside of G 1 it is the other way round. The proof will actually give some more information. Proof After applying a suitable projective map, we may assume that (see Halbeisen and Hungerbühler 2017). Then, Like in Case 2, A has only one symmetric, real logarithm if μ < 1. This inequality is equivalent to the fact that one conic lies inside the other, and we get In this case, we obtain as exponential pencil The figure shows the pencil generated by the unit circle G 0 and an ellipse G 1 (both bold) together with the limits (red). Theorem 4.7 (Case 7) Let G 0 , G 1 be two conics with one intersection and one second order contact. Then, they generate a unique exponential pencil G λ = The family G λ has a conic E as envelope. 
E belongs to the linear pencil of G 2 − 3G 0 − 6G 1 and the double line joining the intersection point and the second order contact point of G 0 and G 1 . Through every exterior point of E, except for the tangent in the contact point of G 0 and G 1 , there pass exactly two members of the exponential pencil G λ . Proof After applying a suitable projective map, we may assume that Halbeisen and Hungerbühler 2017). Then, we get G −1 0 G 1 = I + T αT −1 . By Culver (1966, Theorem 2), A has a unique real logarithm, and we can use the binomic series (which, in this case, consists of only three terms), to obtain and finally the exponential pencil The envelope E is obtained by eliminating λ from ∂ ∂λ x, G λ x = 0 and x, G λ x = 0. One finds the conic It is then a simple calculation to check, that x, G λ x = 0 has exactly two solutions λ whenever x is in the interior of E and away from the tangent in the contact point of G 0 and G 1 . The figure shows in the plane x 3 = 1 the pencil generated by the unit circle G 0 and an ellipse G 1 (both bold) together with the envelope E (blue). Theorem 4.8 (Case 8) Let G 0 , G 2 be two conics with one third order contact point C. Then they generate a unique exponential pencil G λ = G 1 G −1 0 G 1 λ−1 which coincides with the linear pencil g λ = λG 1 + (1 − λ)G 0 . The pencil G λ together with the tangent t in C yields a foliation of the projective space outside C. For λ → ±∞, G λ converges to t and C respectively. Proof After applying a suitable projective map, we may assume that we get G −1 0 G 1 = I + T αT −1 . Again, we have a unique real logarithm of A and therefore, by the binomic series (which, in this case, consists of only two terms), we get and finally the exponential pencil It is easy to check, that for every point P / ∈ t there is exacly one λ such that P, G λ P = 0 The figure shows in the plane x 3 = 1 the pencil generated by the unit circle G 0 and a hyperbola G 1 (both bold) and the limits (red). A triangle center Starting with the circumcircle G 0 and the incircle G 1 of a triangle 0 = A 0 B 0 C 0 , we obtain a discrete chain of conjugate conics G n = G 1 (G −1 0 G 1 ) n−1 , for n = 0, 1, 2, . . .. Because of Theorem 2.5, the triangle 1 joining the contact points A 1 , B 1 , C 1 of the incircle of 0 is tangent to G 2 . Iteration of this construction yields a sequence of triangles n (see figure below) having vertices on G n and sides tangent to G n+1 . The corresponding contact points on G n+1 are the vertices of n+1 . This is a chain of dual Poncelet triangles in the sense of Halbeisen and Hungerbühler (2016). According to Theorem 4.2, the linear and the exponental pencil of G 0 and G 1 have the same limit point. Hence, the sequence of triangles n converges together with the G n for n → ∞ to the dilation center X of 0 : This is Triangle Center X (3513) in the Encyclopedia of Triangle Centers [5]. This center has hereby a new interpretation. The figure shows the situation for a triangle 0 (blue) and G 0 , G 1 (bold) with the limit point X (red). Since 0 is a Poncelet triangle for G 0 , G 1 , any other point A 0 on G 0 defines a triangle 0 with vertices A 0 , B 0 , C 0 on G 0 with incircle G 1 . Each such triangle 0 generates a chain of dual Poncelet triangles with the same center X .
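The chain of conjugate conics described above is easy to reproduce numerically. The sketch below is an added illustration; the triangle, the iteration depth, and the helper functions are arbitrary choices, not taken from the paper. It builds the circumcircle G_0 and the incircle G_1 of a triangle and iterates the discrete pencil G_n = G_1(G_0^{-1}G_1)^{n-1}; the printed centers of the shrinking conics drift toward the limit point X of the chain.

```python
import numpy as np

def circle_conic(center, radius):
    """Symmetric 3x3 matrix of the circle (x - cx)^2 + (y - cy)^2 = r^2."""
    cx, cy = center
    return np.array([[1.0, 0.0, -cx],
                     [0.0, 1.0, -cy],
                     [-cx, -cy, cx**2 + cy**2 - radius**2]])

def conic_center(G):
    """Affine center of a central conic given by the matrix G."""
    return np.linalg.solve(G[:2, :2], -G[:2, 2])

# An arbitrary test triangle.
A = np.array([0.0, 0.0])
B = np.array([4.0, 0.0])
C = np.array([1.0, 3.0])

a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)
area = 0.5 * abs((B - A)[0] * (C - A)[1] - (B - A)[1] * (C - A)[0])

incenter = (a * A + b * B + c * C) / (a + b + c)
inradius = 2.0 * area / (a + b + c)

d = 2.0 * (A[0] * (B[1] - C[1]) + B[0] * (C[1] - A[1]) + C[0] * (A[1] - B[1]))
circumcenter = np.array([
    (A @ A * (B[1] - C[1]) + B @ B * (C[1] - A[1]) + C @ C * (A[1] - B[1])) / d,
    (A @ A * (C[0] - B[0]) + B @ B * (A[0] - C[0]) + C @ C * (B[0] - A[0])) / d])
circumradius = a * b * c / (4.0 * area)

G0 = circle_conic(circumcenter, circumradius)   # circumcircle of the triangle
G1 = circle_conic(incenter, inradius)           # incircle of the triangle

# Discrete chain of conjugate conics: G_n = G1 (G0^{-1} G1)^(n-1).
M = np.linalg.solve(G0, G1)
G = G1.copy()
for n in range(1, 11):
    print(f"n = {n:2d}   center of G_n = {conic_center(G)}")
    G = G @ M
```

Since the incircle lies inside the circumcircle, a real logarithm of G_0^{-1}G_1 exists (Case 2 above), so the discrete chain here is the integer part of a genuine exponential pencil.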
2023-01-24T14:38:37.666Z
2017-12-21T00:00:00.000
{ "year": 2017, "sha1": "4b5d89f7fead65cc8f9470ee74258db22ed87617", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s13366-017-0375-1.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "4b5d89f7fead65cc8f9470ee74258db22ed87617", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [] }
248446967
pes2o/s2orc
v3-fos-license
Gingival Squamous Cell Carcinoma Masquerading as Localized Periodontal Disease in the Maxilla: A Case Report : Objectives: Squamous cell carcinoma is a malignant neoplasm of epithelium. In the U.S., carcinoma of the gingiva constitutes 4% to 16% of all oral carcinomas. This case report highlights such a case in maxillary gingiva and emphasizes the vital role of dental professionals, especially periodontists and endodontists, in being cognizant that an inflammatory lesion can mimic a serious condition like squamous cell carcinoma. INTRODUCTION Amongst the wide spectrum of pathological conditions encountered in the oral cavity, several conditions mimic each other and put the clinician at a diagnostic crossroad that can be very daunting.This is particularly true when the treatment is significantly different for each of the conditions on the differential diagnosis list.One such challenging conundrum is a similarity in the presentation of some inflammatory condi-tions that can mimic malignant conditions and vice versa.Several times gingival squamous cell carcinoma can mimic localized periodontal disease and is often misdiagnosed and mismanaged.The diagnosis of gingival squamous cell carcinoma (SCC) is critical because of the time-sensitive nature of the condition.Accurate diagnosis is vital as tumor size and invasion into the surrounding anatomical structures and timely intervention decides the prognosis [1,2]. Gingival squamous cell carcinoma often presents with benign features, and if misdiagnosed or not diagnosed in the early stages, it can lead to a delay in executing the appropriate treatment.Gingival squamous cell carcinoma is often asymptomatic, and initial symptoms are usually an intraoral mass or swelling, ulceration, pain, ill-fitting dentures, mobility of teeth, or unhealed extraction wounds [3 -5].Computerized tomography (CT) and magnetic resonance imaging (MRI) are useful for evaluating the extent of carcinoma in the head and neck.This study highlights the clinical-histopathological and radiographic findings of a case of gingival SCC. Patient Information and Relevant History A 50-year-old male patient presented to a private dental office with the chief complaint of a lesion in the gums below heavily restored right maxillary first molar and second premolar.The patient was otherwise healthy with no history of tobacco use or any traumatic event. Clinical Findings, Radiographic Findings, and Management On clinical examination, an ulcerated lesion in the gingiva was present in the right posterior maxillary region (Fig. 1).Periapical and panoramic radiographs were done to evaluate the lesion further.Radiographic examinations revealed alveolar bone loss around the right maxillary first molar and second premolar and the canine, but the generalized alveolar bone loss was noted, with several other teeth in the oral cavity.Change in the trabecular pattern was observed in this area but was deemed to be the result of inflammation and possible infection (Fig. 2).Based on these findings, it was diagnosed as localized chronic severe periodontitis.With the right maxillary first molar showing prognosis, the tooth was extracted.At a follow-up visit after three months, the lesion did not resolve after dental extraction and antibiotic therapy.Moreover, three months post-extraction, the mass showed rolled and raised borders (Fig. 
3).At this time, the case was referred to the University Hospital, and a computerized tomography scan (CT) was done to evaluate the lesion further.CT scan revealed an enhancing soft tissue mass in the right gingiva destroying the palate and causing irregular thinning of maxilla and adjacent maxillary sinus wall.The scan also showed enlargement of the lymph nodes (Fig. 4).Subsequent histology confirmed the diagnosis of a well-differentiated squamous cell carcinoma SCC (Fig. 5).5).Invasive squamous cell carcinoma within the sub mucosa.High power view of the ulcerated surface from which nests of infiltrative squamous cell carcinoma arise.Nests infiltrate the superficial lamina propria with abundant keratin pearls. Fig. (6). There are focal areas of avid activity at the level of the mandible on the right juxtaposition to the operative site, as well as focal areas of metabolic activity in the region of the submandibular gland corresponding to a lymph node, at the level of cricoid cartilage posterior to SCM and lateral to the jugular vein which corresponds to a lymph node, and areas of metabolic activity within the lung parenchyma.The patient eventually underwent multiple resections of the maxilla at the initial site of the mass and multiple neck dissection surgeries for the next six months.The biopsy results repeatedly revealed metastatic disease spreading to the lower neck lymph nodes.Positron Emission Tomography (PET/CT) scan result showed increased metabolic activity posterior to the sternocleidomastoid, contralateral neck, and within the lung parenchyma (Fig. 6).Head MRI was performed to detect metastatic spread to the brain and maxillofacial regions and showed no focal mass, abnormal enhancement, midline shift, or acute intracranial hemorrhage.A chest x-ray confirmed metastasis to the lungs (Fig. 7).To this date, the patient's condition is worsening with progressive metastasis, pleural effusion, and underlying infections. DISCUSSION Gingival squamous cell carcinoma accounts for less than 10% of all oral cancers [6].It is mainly reported in the elderly, with about 2% of the patient population reporting less than 40 years of age [7].Gingival carcinomas are usually squamous cell carcinomas that are typically well-differentiated [8].It is predominant in males.Carcinoma of the gingiva is insidious in growth and painless.One common finding is that in its early stages, it is often misdiagnosed as inflammatory lesions of the periodontium, such as pyogenic granuloma, periodontitis, papilloma, or even fibroid epulis (inflammatory hyperplasia) [8].As in our study, the lesion mimicked localized chronic severe periodontitis and possibly a pseudoepitheliomathous hyperplasia. Most gingival squamous cell carcinomas are found in the mandible.Interestingly, in this case, it was in the maxilla.The site of the mass is very important as it is vital for the identification of the spaces and location of tumor spread.A schematic diagram of the tumor spread in the maxilla and mandible is described in Fig. (8).Cady et al., in 1969, did a 20year survey on epidermoid carcinoma of the gum and found that posterior lesions tend to be larger with involvement of bone and other adjacent structures [9].In this case, too, the initial lesion was in the posterior maxilla.This suggests the pivotal role of advanced imaging inappropriate diagnosis, delineating the extent and possible spread of the disease. 
While evaluating the extent, the buccal and gingival areas of the lesion should be carefully evaluated for the extent of submucosal spread, osseous involvement, involvement of the retromolar trigone, pterygomandibular raphe, and cervical lymphatic spread [10].The presence of osseous involvement is suggestive of a T4 lesion.The staging of the disease is important both for the treatment and prognosis. Computed tomography (CT) is excellent at showing the subtle cortical erosions, and the extent of marrow involvement may be better assessed by using magnetic resonance (MR) imaging.CT findings of osseous involvement include cortical erosion surrounding the primary lesion, aggressive periosteal reaction, abnormal attenuation in bone marrow, and pathologic fractures [11].In this case presentation, CT findings revealed osseous involvement and nodal involvement of Level I and II.Since nodal involvement is the single most important prognostic indicator, an accurate assessment of all nodal chains at the same time is essential in staging, treatment, and prognosis (Table 1).Hence, advanced imaging is key in detecting the prognosis of the disease. I All nodes above the hyoid bone, below the mylohyoid muscle, and anterior to a transverse line drawn on each axial image through the posterior edge of the submandibular gland. II Skull base, at the lower level of the bony margin of the jugular fossa, to the level of the lower body of the hyoid bone. III Between the level of the lower body of the hyoid bone and the level of the lower margin of the cricoid cartilage arch. IV Between the level of the lower margin of the cricoid cartilage arch and the level of the clavicle on each side as seen on each axial scan. V Skull base, at the posterior border of the attachment of sternocleidomastoid muscle, to the level of the clavicle as seen on each axial scan. VI Inferior to the lower body of the hyoid bone, superior to the top of the manubrium, and between the medial margins of the left and right common carotid arteries or the internal carotid arteries.They are the visceral nodes. VII Caudal to the top of the manubrium in the superior mediastinum, between the medial margins of the left and right common carotid arteries. MR imaging findings that are indicative of osseous involvement include loss of low-signal-intensity cortex, replacement of high signal intensity marrow on T1-weighted images by intermediate-signal-intensity tumor, contrast enhancement within the bone, and contrast enhancement of nerves traversing the mandible.In our study, a Head MRI was performed to evaluate the area, but no concurrent findings were noted.Lubek et al. 
in 2011, conducted a retrospective analysis of 72 patients with gingival carcinoma and indications for elective neck dissection.They found that elective neck dissection is indicated for all stages of mandibular gingival and T3 or T4 carcinomas of the maxillary gingiva.T2 maxillary SCC should be considered for neck dissection.Overall, disease-free survival was worse among those with cervical metastasis and patients who had marginal resections [11].In this case, neck dissections were performed but leading up to the current time in the patient's follow-up, the patient did not respond well to any therapy.Hence, it is noteworthy that conditions like this must be diagnosed as early as possible.Therefore, any persistent lesion exhibiting features that are not responding to conventional gingival and periodontal treatment options for more than two weeks should be referred for further evaluation to rule out cancer (Fig. 9).The 5-year survival rate of gingival SCC is considerably less when compared to SCC developing at other sites, suggesting a poor prognosis [12].This case highlights the presence of a very serious condition like gingival squamous cell carcinoma (SCC) and its deceptive behavior as a benign and inflammation-mimicking lesion in the gingiva.Many times, clinicians dismiss persistent lesions as idiopathic without adequate investigations, and doing so could result in missing a potentially life-threatening disease like squamous cell carcinoma (SCC). CONCLUSION This case report emphasizes that clinicians should pay attention to unresolving lesions, especially those of inflammatory origin, and serves as a reminder that delay in appropriate radiographic examination and diagnosis of these lesions may adversely affect the prognosis. Fig. ( 2 Fig. (2).Periapical and panoramic films demonstrating advanced alveolar bone loss surrounding the dentition, as well as displacement of the right maxillary first molar.The alveolar bone destruction of the right maxillary first molar appeared as an ill-defined, non-corticated Fig Fig. (4).A) Axial CT, soft tissue window showing an enhancing soft tissue mass in the region of the right gingiva extending medially to the level of the roof of the mouth.B) Mildly enlarged lymph node anterior to the right submandibular gland, and C) Another mildly prominent jugulodigastric lymph node. Fig. ( Fig. (5).Invasive squamous cell carcinoma within the sub mucosa.High power view of the ulcerated surface from which nests of infiltrative squamous cell carcinoma arise.Nests infiltrate the superficial lamina propria with abundant keratin pearls. Fig. ( 7 Fig. (7).AP chest x-ray showing multiple ill-defined nodular densities in the lungs consistent with known metastases to the lungs.A guided lung biopsy of a dominant opacity in the right upper lobe.
2022-04-29T15:46:09.563Z
2022-04-25T00:00:00.000
{ "year": 2022, "sha1": "1fa83f47a9dfbfc045632f7fbd4337308f4c9d28", "oa_license": "CCBY", "oa_url": "https://opendentistryjournal.com/VOLUME/16/ELOCATOR/e187421062112220/PDF/", "oa_status": "CLOSED", "pdf_src": "ScienceParsePlus", "pdf_hash": "c9bc8fcc749a9c0fb55ad4c3230a9c6b2a0b89a8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
219978446
pes2o/s2orc
v3-fos-license
Design of high performance and low resistive loss graphene solar cells Despite metallic plasmonic excitations can enhance the performance of ultra-thin solar cells however these so-called plasmonic solar cells suffer from a large resistive (Ohmic) loss caused by metallic elements. In this work, we report on a new design that uses graphene nanoribbons (GNRs) in a two-dimensional (2D) grating form at the top of the semiconductor-on-insulator (SOI) solar cells aimed to reduce the resistive loss. The results showed that GNRs can remarkably reduce the resistive loss compared to the SOI cell with Ag nanograting, while keeping all other cell’s parameters, comparable with those of Ag SOI cell. Optical absorption and short-circuit current density of the graphene cells showed, respectively, enhancements of 18 and 1.7 times when optimizations were done with respect to width and the grating period. Our calculations showed that the graphene solar cells dissipate at most 5% of incident sunlight power as narrow and tiny peaks around 508 nm, which is noticeably lower than those of Ag solar cells with high and broad band peaks with the maximum values of 29% at 480 nm and 24% at 637 nm. Introduction Nowadays, the huge decrease in fossil energy resources and high cost of pollution control has forced scientists to search for sustainable energy sources. During the years, different methods have been introduced. Among them, the most important one that guarantees a clean and environmentally friendly technique, has benefited from the photovoltaic effect [1,2]. In spite of huge research which has been accomplished on solar cell topic, however, there are limitations on the massive production of solar cells. Regarding to this fact that the sunlight radiation covers almost a broad spectrum range, mainly from 290 nm in ultraviolet regime to 3200 nm in mid infrared one, for absorbing long wavelength photons, those which lie in near to mid infrared spectrum regime, thick absorption layers are needed [3]. Increasing the Si wafer thickness not only imposes additional cost, but also can cause junk recombinations, because of a short electron diffusion length [1,2,4,5]. At least, one can name three types of recombination: the electron jumps back from conduction band to valence band and recombines with a hole while a photon is emitted. This process is called band-to-band recombination, or radiative recombination. Reed-Shockley-Hall (RSH) recombination is assisted by trap energy levels in band gap. Auger recombination is a process in which an electron and a hole recombine in a band-to-band transition, but the resulting energy is given off to another electron or hole. In the case of hybrid solar cell, mismatch of different crystal lattices may cause a large number of dangling bond at the interface between two different materials. The interface recombination can be viewed as a type of RSH and the interface becomes major limiting factor resulting in rather high interface recombination velocity which thereby lowers the lifetime of carriers. Due to the aforementioned difficulties, it was a necessity to find an alternative solution to enhance the light absorption in the thin solar cells. For the first time, in order to improve the carrier collection efficiencies, Redfield [6] introduced dielectric waveguide concept to confine and guide the scattered emission in a Si film with a 2 μm thickness. 
The method used by Redfield means confining the light beams into a dielectric waveguide of a thin solar cell with an effective technique to enhance the light absorption. Yablonovitch, based on the ray optics, presented a statistical procedure toward measuring the enhancement factor for light intensity as 4n 2 for bulk absorption and n 2 for surface absorption to make the process of solar cell fabrication much more cost effective [7]. Here n is the refractive index of semiconductor film, Obtaining the highest efficiency in any design and experiment is indeed a precious goal. Authors have innovated various configurations to enhance the quantum efficiency as well as the short-circuit current density of solar cells [8,9]. For example, all-inorganic composition and suitable band gap of quantum dots (QDs) have been used in Perovskite solar cells to enhance the power conversion efficiency [10]. In the recent years, applying dielectric photonic crystals [3] and periodically patterned metallic structures [11] as back reflectors in order to enhance the electromagnetic energy intensity even beyond 4n 2 limit has become a popular trend. Using the plasmonic excitations in the ultra-thin solar cells, one can simultaneously increase the efficiency of solar cells and reduce the cost of film deposition [1,4,[12][13][14][15][16][17][18] which are two favorite factors in experimental research. Using perovskite-hybrid plasmonic nanostructured, Zhang et al. have explained the role of plasmonic coupling and photonic cavities in enhancing light-matter interactions and manipulating carrier dynamics [19]. There are at least three main schemes for plasmonic structures to be integrated with a solar cell. The first method involves locating the plasmonic elements at the top of the solar cell [13,[20][21][22]. It provides us with two important advantages: improving the optical absorption in absorption layer through scattering of light into it and preventing the reflection back [23]. Embedding the plasmonic structures inside the absorption layer is another approach [24], in which plasmonic nanoparticles act as subwavelength lenses enabling enhancing the light absorption. The third method is to arrange a grating or striped-like plasmonic structures at the back surface of the solar cell [25][26][27]. In this method, surface plasmon polaritons (SPPs) are excited in the absorption layer, leading to an increase in the optical absorption. Furthermore, in this scheme, where plasmoic structures are served as a back contact [28], a short distance provides proper conditions for collecting the charge carrier to be collected. To support the SPPs in the active layer of a solar cell, it is a crucial need to find the optimum plasmonic material. Although for many years gold and silver have been of the most interesting and extremely popular materials [29,30], they impose limitations one practical applications such as energy loss from intraband transitions, large Ohmic losses at the optical frequencies caused by electron-electron scattering, electron-ion scattering, and inelastic scattering from defects and grain boundaries [31], lack of tunability, and difficulties during the fabrication process. Since the first investigation on plasmonic effect, silver has been the best choice for visible light applications, because it has the lowest Ohmic loss and a high onset of intraband transitions. 
For near infrared/ terahertz region, gold and copper can be used, but gold is usually preferred over copper since it doesn't oxidize and is therefore more stable. There are other materials that have lower losses than silver and gold such as sodium and potassium, however these materials are not stable and would therefore not be easy to be used in plasmonic devices. Nevertheless, due to the lack of tunability and difficulties during the fabrication process, they impose limitations on practical applications. The discovery of graphene and other 2D materials such as hexagonal boron nitride (h-BN), transition metal chalcogenides to a class of monoelemental 2D materials (Si, Ge, Sn, etc.) which happened in "postgraphene age" triggered a wide range of research concerning 2D materials covering metals, semiconductors, and insulators, all with intriguing properties [32][33][34]. It has been shown that strongly confined surface plasmonic excitation in graphene with outstanding properties such as electrical tunability [35], low dissipative losses [36], strong lightmatter interactions [37], and extreme field confinement [38][39][40] can be a trustable substitution for noble materials. Compared with plasmons in noble metals, the electronic properties of graphene can be effectively controlled by changing the Fermi energy through the use of a gate voltage, chemical doping, and electric and magnetic fields [28]. Prominent properties of graphene along with its electronic [32][33][34][41][42][43] and photonic [44][45][46] features have made it an interesting platform for plasmonic waveguiding applications [37,[47][48][49]. However, so far its abilities have been proven more in the terahertz and mid-infrared frequency range [50,51] rather than optical spectral region [52] (http://pubs.acs. org/doi/abs/10.1021/acssuschemeng.5b01504), (https:// www.ncbi.nlm.nih.gov/pubmed/28295982). Patel et al. showed that multilayer graphene acts as an efficient transparent conducting electrode in a graphene/Si heterojunction solar cell [53]. Up to 40 layers of n-type graphene, the efficiency found to be constant and enhanced only to 7.62%. After further optimization on the parameters of p-crystalline silicon wafer, a maximum efficiency of 11.23% has been achieved. In this work, we investigate the impact of graphene nanoribbons (GNRs) on the optical and electrical performance of ultra-thin SOI solar cells and compare the results with those of SOI cells incorporated with Ag nanostrips. Due to the extremely light confinement, high carrier mobility, and zero band gap characterizations, graphene is worth investigating the light absorption in the Si absorber layer by putting GNRs on SOI (G-SiO 2 -Si-SiO 2 ) cells. Since graphene is a semimetal material, it seems that its special optical properties can help the waveguide-like modes in the optical frequency range to be strongly excited. These modes are not necessarily confined at the graphene/Si interface, but they are expanded in Si throughout. Our results show that these abnormally excitations can help the optical absorption to be increased in the active layer. To evaluate systematically the cell's performance, the optical absorption and short-circuit current density enhancements and resistive (Ohmic) loss of G-SiO 2 -Si-SiO 2 solar cells are calculated and compared with those of Ag nanostrips incorporated SOI (Ag-SiO 2 -Si-SiO 2 ) cells. For Ag-SiO 2 -Si-SiO 2 solar cells, the optimum design which has been reported previously [17,18] is used. 
For G-SiO 2 -Si-SiO 2 cells, width and the period of GNRs, which are two key parameters in the performance of proposed structure, are optimized. Model and theory The geometry of our proposed solar cell is schematically shown in Fig. 1a which comprises GNRs at the top of a SOI solar cell. A 10 nm buffer SiO 2 layer at the bottom of GNRs prevents undesired and strong damping of the surface plasmons resonances due to the Schottky effect [2]. A 50 nm absorption silicon layer is located under SiO 2 . Si layer is grown on a SiO 2 layer with a 240 nm thickness. Figure 1b shows a unit cell of our G-SiO 2 -Si-SiO 2 solar cell. Figure 1c illustrates an Ag-SiO 2 -Si-SiO 2 solar cell in which an Ag nanograting is placed at the top of a SOI solar cell. For simplicity, a unit cell is shown. Ag nanostrip has a triangle cross section with a 60 nm height and an 80 nm base. For Ag nanostrips, experimental data of Johnson and Christy [31] including the real and imaginary parts of refractive indices were utilized. For optical properties of Si layer, we have used its dielectric function given in Ref. [54]. Refractive index of SiO 2 is also set to 1.45. Note that the period of G-SiO 2 -Si-SiO 2 and Ag-SiO 2 -Si-SiO 2 solar cells are chosen to be 314 nm, except for Figs. 5 and 6b in which the grating periods are optimized. A plane electromagnetic wave is illuminated from the top side of the simulation frame with a normalized value of unity. To solve the Maxwell's wave equations and extraction of the solar cell parameters, we have considered only one unit cell with suitable boundary conditions to reduce the runtime and computational machine RAM. To ensure a realistic cell, we have used periodic boundary conditions for the lateral sides to include the periodicity of the structure, as shown in Fig. 1b and c. We have also used a perfectly matched layer (PML) for the top and the bottom sides of the simulation frame. PML boundaries can help diffracted waves to be absorbed properly. The wavelength scan is performed from 250 to 1100 nm with respect to ASTM-G173 and Si band gap (1.1 eV). It is worth mentioning when the imaginary part of the graphene surface conductivity is positive, the graphene layer behaves like a thin metal material and supports only transverse-magnetic (TM) mode [29]. Note also that under the transverse-electric (TE) mode, the graphene (and even Ag and Au with the thicknesses around 1 nm) will not represent any electromagnetic response in the solar cell, since their thickness is much smaller than the wavelength of the incident field. Therefore, in the following, we have only focused on the TM mode. By solving the Maxwell's wave equations in the unit cell for the transverse-magnetic (TM) mode, the distribution of electric and magnetic fields was obtained. Here, the complex permittivity of graphene is calculated directly by using ε g = 1 − iσ g /(ε 0 ω t) [36], where ε 0 is the permittivity of free space, t is the effective thickness of the graphene, and σ g is the complex surface conductivity of graphene which depends on the angular frequency, ω, chemical potential, μ c , temperature, T, and charge particle scattering rate, Γ = 1/τ, with τ being the relaxation time of charge carriers. 
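The surface conductivity σ_g that enters this permittivity can be evaluated from the standard local-RPA (Kubo) expressions invoked in the next paragraph. The sketch below is only an illustration of those textbook formulas, written in the e^{-iωt} convention (a finite-temperature intraband Drude-like term plus the T → 0 interband term); the sign conventions and the effective thickness t = 0.34 nm are assumptions made here and need not match Refs. [55, 56] exactly, and the text's ε_g = 1 − iσ_g/(ε_0 ω t) corresponds to the opposite time-harmonic sign convention for the same quantity.

```python
import numpy as np

# Physical constants (SI units)
e    = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34      # reduced Planck constant, J s
kB   = 1.380649e-23         # Boltzmann constant, J/K
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
c0   = 2.99792458e8         # speed of light, m/s

# Graphene parameters quoted in the text
tau  = 5e-12                # relaxation time, s
T    = 300.0                # temperature, K
mu_c = 0.5 * e              # chemical potential, J
t_g  = 0.34e-9              # assumed effective graphene thickness, m

def sigma_graphene(omega):
    """Local-RPA sheet conductivity of graphene (e^{-i omega t} convention).

    intra: finite-temperature Drude-like (intraband) Kubo term
    inter: interband term in its T -> 0 closed form"""
    intra = (2j * e**2 * kB * T / (np.pi * hbar**2 * (omega + 1j / tau))
             * np.log(2.0 * np.cosh(mu_c / (2.0 * kB * T))))
    hw = hbar * omega
    inter = (e**2 / (4.0 * hbar)) * (
        np.heaviside(hw - 2.0 * mu_c, 0.5)
        + (1j / np.pi) * np.log(np.abs((hw - 2.0 * mu_c) / (hw + 2.0 * mu_c))))
    return intra + inter

for lam_nm in (250, 450, 650, 850, 1100):          # within the simulated scan range
    omega = 2.0 * np.pi * c0 / (lam_nm * 1e-9)
    sig = complex(sigma_graphene(omega))
    # Thin-film permittivity of the graphene sheet of thickness t_g.
    eps_g = 1.0 + 1j * sig / (eps0 * omega * t_g)
    print(f"{lam_nm:5d} nm   sigma_g = {sig:.3e} S   eps_g = {eps_g:.2f}")
```

At visible wavelengths the interband term dominates and σ_g stays of the order of the universal conductivity e²/(4ħ) ≈ 6.1 × 10⁻⁵ S, consistent with the remark above that graphene plasmons proper are excited in the mid-infrared and terahertz rather than in the optical range.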
Accounting for the intraband and interband transitions, the total conductivity of graphene (σ g = σ inter + σ intra ) is described by the local random phase approximation of the Kubo formula [55], which is written as [56]: In the above expression, e is the electron charge, k B is Boltzmann's constant, and ℏω is the photon energy. In our calculations, the graphene parameters are set to τ = 5 ps, T = 300 K, and μ c = 0.5 eV. Having found the electric and magnetic field distributions, the power absorbed in the Si slab can be determined from the following formula [57]: where S is the Poynting vector, Im(ε Si ) is the imaginary part of the Si permittivity, and dV = dx dy dz is the differential volume of the absorption layer (Si). According to the absorbed-power formula, Eq. (3), the integration is taken over the whole volume of the absorption layer. In our 2D modeling, the cell is assumed to be invariant along the z-direction, so that the electromagnetic field varies only along x and y; thus dV = l dx dy, where l is the cell length along the z-direction. The absorption enhancement, Π(λ), is defined as the ratio of the power absorbed in the Si layer of the cell with a graphene nanoribbon (Ag nanostrip) to that of the bare cell, i.e. the cell without any graphene nanoribbon (Ag nanostrip). The absorption enhancement is therefore given by: For a full evaluation of the solar cells, the short-circuit current density (J SC ) of the proposed structures should also be calculated. J SC is the current density flowing through the solar cell when the voltage over the cell is zero. For an ordinary cell, the short-circuit current density is obtained by integrating the product of the solar irradiance, I(λ), and the cell spectral response, SR(λ), over the solar wavelengths: ∫ I(λ) SR(λ) dλ. In this integration, it is assumed that the quantum efficiency is unity, QE = 1, and SR(λ) = qλ/hc, where q is the elementary charge, h is Planck's constant, and c is the speed of light. The J SC enhancement is then given as [58]: where Π(λ) is the absorption enhancement given by Eq. (4). The resistive (Ohmic) loss, R L , i.e. the fraction of the solar power converted to heat in the cell, is an important parameter which, to the best of our knowledge, has not yet been reported for solar cells decorated with plasmonic elements. Resistive loss warms the cell through Ohmic heating and thereby restricts the cell performance, so it should be taken into account when designing plasmonic solar cells. To calculate the resistive loss, we integrate J·E (where J is the current density and E the electric field) over the GNR volume as: Results and discussion In this section, we present the absorption enhancement, short-circuit current density enhancement, and resistive loss of the G-SiO 2 -Si-SiO 2 and Ag-SiO 2 -Si-SiO 2 solar cells, as well as the absorption enhancement of the hybrid Ag-G-SiO 2 -Si-SiO 2 solar cell. In all schemes, the thickness of the absorption layer (Si) is the same. Figure 2 shows the effect of the GNR width (W) on the absorption enhancement of the G-SiO 2 -Si-SiO 2 solar cell. For comparison, the absorption enhancement of the Ag-SiO 2 -Si-SiO 2 solar cell is also plotted in this figure. The absorption enhancement of the G-SiO 2 -Si-SiO 2 cell depends strongly on the GNR width: for example, as W is increased from 150 to 250 nm, the absorption enhancement increases as well.
If W is increased further to 280 nm, the absorption enhancement drops drastically. Therefore, in the rest of this paper we choose W = 250 nm as the optimum GNR width. It is interesting to note that for W = 250 nm the absorption enhancement has a sharp peak around 508 nm (blue curve in Fig. 2). This peak is related to a waveguide mode excited in the Si layer. Figure 3a, which shows the normalized magnetic field distribution, clearly illustrates that this mode occurs at 508 nm. As the blue curve in Fig. 2 shows, the absorption enhancement in the Si layer at 508 nm is much stronger than that of the Ag-SiO 2 -Si-SiO 2 cell. From the magnetic field distribution of the Ag-SiO 2 -Si-SiO 2 cell, Fig. 3c, one can conclude that there is no enhanced optical near field contributing to absorption enhancement, because this wavelength is far from the plasmonic resonance wavelength. Interestingly, at this wavelength the magnetic field is strongly confined and enhanced in the G-SiO 2 -Si-SiO 2 cell (see Fig. 3a). Therefore, owing to the special transport properties of graphene, the light energy concentration and near-field enhancement occur in the silicon layer. In the visible range there are also other modes, at 624 nm for the G-SiO 2 -Si-SiO 2 cell and at 642 nm for the Ag-SiO 2 -Si-SiO 2 cell, which assist the absorption of solar light in the Si layer. For the G-SiO 2 -Si-SiO 2 cell we call this a plasmonic-like mode, because the plasmonic modes of graphene are excited in the mid-IR or terahertz spectral region [28]. For the Ag-SiO 2 -Si-SiO 2 cell it is definitely a localized surface plasmon (LSP) mode, which occurs at 642 nm and is illustrated clearly in Fig. 3d. Although, owing to the plasmonic mode excitation of the Ag-SiO 2 -Si-SiO 2 cell at 642 nm, its absorption enhancement is much higher than that of the G-SiO 2 -Si-SiO 2 cell mode at 624 nm, the latter provides an enhancement over a wider range of wavelengths (see Fig. 2). For instance, as the wavelength increases from 510 nm to 600 nm, the absorption enhancement of the Ag-SiO 2 -Si-SiO 2 cell drops to 1, whereas for the G-SiO 2 -Si-SiO 2 cell it changes from 1.5 to 1.85. This characteristic becomes even more important in the calculations below of quantities such as the short-circuit current density. In order to enhance the light-graphene interaction even further [50], we have designed a hybrid solar cell in which a GNR is placed between the Ag nanostrip and the SOI substrate (Ag-G-SiO 2 -Si-SiO 2 ), as shown schematically in Fig. 4a. Figure 4b presents the absorption enhancement of the proposed hybrid Ag-G-SiO 2 -Si-SiO 2 cell for Ag nanostrip heights of 10, 40, and 60 nm. For comparison, Fig. 4b also shows the results for the G-SiO 2 -Si-SiO 2 and Ag-SiO 2 -Si-SiO 2 solar cells alone. For a 10 nm Ag nanostrip height, there is only a single waveguide peak, at 506 nm, with an enhancement of 2.2 in the optical absorption. The magnetic field distribution of this mode is shown in Fig. 4c; the electromagnetic field is strongly confined in the Si layer. For this height there is also a very broad optical absorption peak at longer wavelengths, whose field distribution is not shown. As the height of the Ag nanostrip is increased (to h = 40 nm and 60 nm), the absorption enhancement corresponding to the plasmonic mode appears. From Fig.
4b we observe that for Ag heights of h = 40 and 60 nm, enhancements of 4.5 and 3.8 are obtained for the plasmonic mode at 659 nm and 667 nm, respectively (violet and dashed green curves in Fig. 4b). Figure 4c-f shows normalized magnetic field distributions for the hybrid solar cell with an Ag nanostrip height of h = 10 nm at (c) 506 nm and (d) 620 nm, and h = 60 nm at (e) 497 nm and (f) 659 nm. In the following, we optimize the cell performance with respect to the period of the GNRs and Ag nanostrips. Figure 5a shows the optical absorption of the G-SiO 2 -Si-SiO 2 cell for different periods. As the GNR period is increased from 300 nm, the optical absorption increases and the absorption peaks shift toward longer wavelengths. Our calculations show that the optimum period is around 358 nm. More precisely, the G-SiO 2 -Si-SiO 2 solar cell has an optical absorption enhancement peak at 500 nm with a magnitude of 5.67 for P = 300 nm, which increases to 18.37 at 540 nm for P = 358 nm. From Fig. 5b we observe that the period of the Ag-SiO 2 -Si-SiO 2 cell affects the absorption enhancement more through the waveguide mode than through the plasmonic mode. For P = 300 nm, the absorption enhancements corresponding to the waveguide and plasmonic modes occur at 480 nm and 641 nm with values of 2.10 and 6.18, respectively; for P = 358 nm, these peaks shift to 537 nm and 654 nm with magnitudes of 4.95 and 7.42, respectively. From Fig. 5a and b, one can conclude that for P = 358 nm the absorption enhancement of the waveguide modes of the G-SiO 2 -Si-SiO 2 cell is much higher than that of the Ag-SiO 2 -Si-SiO 2 cell. However, as discussed earlier, Ag solar cells increase the optical absorption through their localized surface plasmons (LSP), so increasing the period of the Ag nanograting improves the absorption of incident light through its multiple peaks associated with the LSP modes. The crucial question, however, is whether this absorbed power translates into a stronger short-circuit current density. In this regard, and in order to compare the performance of the G-SiO 2 -Si-SiO 2 and Ag-SiO 2 -Si-SiO 2 cells, we focus on the J SC enhancement, i.e. the ratio of the J SC of the cell with a nanostrip (nanoribbon) to the J SC of the bare cell without a nanostrip (nanoribbon). Figure 6a and b show the short-circuit current density (J SC ) enhancement for the G-SiO 2 -Si-SiO 2 and Ag-SiO 2 -Si-SiO 2 solar cells, respectively. In Fig. 6a we evaluate J SC for various GNR widths; as expected, W = 250 nm gives the optimum value. In Fig. 6b we compare the period optimizations of the G-SiO 2 -Si-SiO 2 cell with W = 250 nm and of the Ag-SiO 2 -Si-SiO 2 cell. As the period is increased, the J SC of the G-SiO 2 -Si-SiO 2 cell also increases; as expected, P = 358 nm is the optimum period for the J SC of the G-SiO 2 -Si-SiO 2 solar cell. For the Ag-SiO 2 -Si-SiO 2 solar cell, on the other hand, different periods (in the range 314 to 358 nm) give almost the same J SC . Interestingly, for the G-SiO 2 -Si-SiO 2 solar cell with P = 358 nm, the two types of cells give more or less the same value. The last important quantity investigated in this work, and a highlight of it, is the resistive (Ohmic) loss. This quantity describes an undesired phenomenon that is the main channel of dissipation of the useful energy absorbed from sunlight.
For comparison, we have calculated the ratio of the power absorbed in the GNR/Ag nanograting layer to the total sunlight power incident on the cell, which we call the normalized resistive loss (R L ). Figure 7 depicts the normalized resistive loss as a function of wavelength for the Ag-SiO 2 -Si-SiO 2 (dashed red curve) and G-SiO 2 -Si-SiO 2 (solid blue curve) solar cells in the visible frequency range. The normalized loss of the G-SiO 2 -Si-SiO 2 cell is much lower than that of the Ag-SiO 2 -Si-SiO 2 cell. The G-SiO 2 -Si-SiO 2 cell dissipates at most 4.7% of the incident power, which occurs at 508 nm, whereas the Ag-SiO 2 -Si-SiO 2 cell generally dissipates much more solar energy across the entire solar spectrum; for instance, at 480 nm and 637 nm it dissipates up to 29% and 24% of the incident power, respectively. This fraction is naturally converted to heat. The very low dissipation of the G-SiO 2 -Si-SiO 2 cell is an invaluable property, because heating a solar cell reduces its performance. It is therefore safe to say that, while G-SiO 2 -Si-SiO 2 and Ag-SiO 2 -Si-SiO 2 cells enhance the electrical and optical performance of the solar cell by comparable amounts, the GNRs impose noticeably lower energy dissipation in the cell and are thus a better candidate for incorporation in ultra-thin photovoltaic cells. Recently, several experiments have shown that chemical vapor deposition (CVD) graphene can be used to improve the performance of solar cells [59,60]. Indeed, CVD is the primary technique used to obtain large-area graphene sheets, which are in high demand for various solar cell applications. In these experiments, the graphene was synthesized on a Cu foil by atmospheric pressure CVD (APCVD) and then transferred to a glass substrate. The number of graphene layers could be well controlled by altering the H 2 flow rate, which also provides a wide selection range of transparency and sheet resistance. In order to provide better coverage of and contact with a CdTe solar cell, graphene was synthesized with a three-dimensional (3D) structure using porous Ni foam as the growth substrate. A similar method was employed to grow graphene, and the 3D structure was successfully observed and transferred to a CdTe device [61]. The final graphene back contact thickness exceeded 10 μm with an excellent electrical conductivity (550-600 S/cm), which enabled a significant improvement of the device efficiency, up to 9.1%. Conclusion In summary, we have numerically studied the optical absorption and short-circuit current density enhancements and the resistive loss of G-SiO 2 -Si-SiO 2 and Ag-SiO 2 -Si-SiO 2 solar cells, i.e. SOI solar cells with, respectively, GNR and Ag nanogratings on top, and have compared the results of both cells in the visible frequency range. The performance of the G-SiO 2 -Si-SiO 2 solar cell depends strongly on the width (W) of the GNRs and the period (P) of the structure. By optimizing W and P of the G-SiO 2 -Si-SiO 2 cell, we achieved a maximum absorption enhancement for the GNR solar cell of 18.37 at 540 nm, three times higher than that of the Ag solar cell at this wavelength. The GNR cells with optimum W and P intensify the waveguide mode peak and confine the light better in the Si layer, whereas the Ag solar cell achieves strong absorption enhancement by channeling light into the Si layer through its localized surface plasmon (LSP) modes.
The calculated short-circuit current density enhancement confirmed that the GNR solar cell with optimized W and P intensifies the waveguide modes strongly enough to reach the same J SC enhancement as the Ag solar cell, despite the lack of plasmonic behavior of graphene in this frequency range. The outstanding point of this work is the low resistive loss of the GNR solar cells: our calculations showed that they dissipate less than 5% of the incident sunlight power (at most, at 508 nm), compared with the Ag solar cells, which dissipate up to 29% and 24% of the solar power at 480 and 637 nm, respectively.
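As a practical note on reproducing the J SC enhancement figures quoted above: the explicit form of Eq. (5) does not survive in the text, so the sketch below assumes it is the ratio of the Π(λ)-weighted to the unweighted integral of I(λ)·SR(λ), with SR(λ) = qλ/hc and QE = 1 as stated in the theory section. The trapezoidal integration and the variable names are ours, and the flat irradiance array stands in for the tabulated ASTM-G173 spectrum.

```python
import numpy as np

Q = 1.602176634e-19   # elementary charge (C)
H = 6.62607015e-34    # Planck constant (J s)
C = 299792458.0       # speed of light (m/s)

def jsc_enhancement(wavelength_m, irradiance, enhancement):
    """Ratio of J_SC with the nanostructure to J_SC of the bare cell,
    assuming QE = 1 and SR(lambda) = q*lambda/(h*c)."""
    sr = Q * wavelength_m / (H * C)          # spectral response
    weight = irradiance * sr                 # I(lambda) * SR(lambda)
    jsc_bare = np.trapz(weight, wavelength_m)
    jsc_nano = np.trapz(enhancement * weight, wavelength_m)
    return jsc_nano / jsc_bare

# Hypothetical example: flat irradiance and a Gaussian-shaped Pi(lambda)
lam = np.linspace(250e-9, 1100e-9, 200)
pi_lam = 1.0 + 0.5 * np.exp(-((lam - 540e-9) / 30e-9) ** 2)
print(jsc_enhancement(lam, np.ones_like(lam), pi_lam))
```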
2020-06-18T09:08:05.046Z
2020-06-16T00:00:00.000
{ "year": 2020, "sha1": "905a80fb76fe9ef59c83ccc0fd673f4e9c3f50ac", "oa_license": "CCBY", "oa_url": "https://jeos.springeropen.com/track/pdf/10.1186/s41476-020-00136-5", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "af342451174e4eb701f77fa2cde56cf055c6397b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
122672027
pes2o/s2orc
v3-fos-license
Equivalent expressions for norms in classical Lorentz spaces Abstract We characterize the weights $w$ such that $\int_0^\infty f^*(s)^p w(s)\,ds \simeq \int_0^\infty (f^{**}(s) - f^*(s))^p w(s)\,ds$. Our result generalizes a result due to Bennett–DeVore–Sharpley, where the usual Lorentz $L^{p,q}$ norm is replaced by an equivalent expression involving the functional $f^{**} - f^*$. Sufficient conditions for the boundedness of maximal Calderón–Zygmund singular integral operators between classical Lorentz spaces are also given. Introduction Let $(\Omega, \Sigma(\Omega), \mu)$ be a nonfinite totally $\sigma$-finite resonant measure space, and let $w$ be a strictly nonnegative Lebesgue measurable function on $\mathbb{R}^+ = (0,\infty)$ (briefly, a weight). For $1 \le p < \infty$ the classical Lorentz space $\Lambda^p_\mu(w)$ (see [10] and [6]) is defined as the set of measurable functions on $\Omega$ such that $\int_0^\infty f^*_\mu(t)^p w(t)\,dt < \infty$, where $f^*_\mu(t) = \inf\{s : \lambda^f_\mu(s) \le t\}$ is the decreasing rearrangement of $f$, and $\lambda^f_\mu(y) = \mu\{x \in \Omega : |f(x)| > y\}$ is the distribution function of $f$ with respect to the measure $\mu$ (we refer the reader to [4] for further information about distribution functions and decreasing rearrangements). Similarly, the weak Lorentz space $\Lambda^{p,\infty}_\mu(w)$ (see [6]) is defined by the condition $\sup_{t>0} W(t)^{1/p} f^*_\mu(t) < \infty$, where $W(t) = \int_0^t w(s)\,ds$. Obviously, the above spaces are invariant under rearrangement and generalize the Lorentz spaces $L^{p,q}_\mu$, since if $w(t) = t^{q/p-1}$ $(1 \le q, p < \infty)$ then $\Lambda^q_\mu(w) = L^{p,q}_\mu$ and $\Lambda^{q,\infty}_\mu(w)$ coincides with $L^{p,\infty}_\mu$; in particular the Lebesgue space $L^p_\mu$ is the space $\Lambda^p_\mu(w)$ with $w = 1$. Let us denote by $f^{**}_\mu$ the maximal function of $f^*_\mu$, defined by $f^{**}_\mu(t) = \frac{1}{t}\int_0^t f^*_\mu(s)\,ds$. It is proved in [3] (see also [4], Proposition 7.12) that in the case $p > 1$ the usual Lorentz $L^{p,q}_\mu$ norm can be replaced by an equivalent expression in terms of the functional $f^{**}_\mu - f^*_\mu$, namely $\|f\|_{L^{p,q}_\mu} \simeq \|f^{**}_\mu - f^*_\mu\|_{L^{p,q}_\mu}$ (1), where, as usual, by $A \simeq B$ we mean that $c^{-1} A \le B \le cA$ for some constant $c > 0$ independent of the appropriate quantities. The main purpose of this paper is to extend (1) to the context of the classical Lorentz spaces and describe the weights $w$ for which $\int_0^\infty f^*_\mu(s)^p w(s)\,ds \simeq \int_0^\infty (f^{**}_\mu(s) - f^*_\mu(s))^p w(s)\,ds$ (2). The work is organized as follows: in Section 2 we provide a brief review of the parts of the theory of $B_p$ and $B^*_\infty$ weights that we shall use in this paper and prove some properties of the weights $w$ that belong to $B_p \cap B^*_\infty$. In Section 3 we characterize the weights $w$ for which (2) holds and, as an application, obtain sufficient conditions for the boundedness of maximal Calderón–Zygmund singular integral operators between Lorentz spaces $\Lambda^p_\mu(w)$, when $\mu$ is an absolutely continuous measure on $\mathbb{R}^n$ defined by $\mu(A) = \int_A u(x)\,dx$, where $u$ belongs to the class of weights $A_{p_0}$ for some $p_0 \ge 1$ (see [8] as a general reference for this class of weights). Preliminaries If $h$ is a Lebesgue measurable function defined on $\mathbb{R}^+$, the Hardy operator $P$ and its adjoint $Q$ are defined by $Ph(t) = \frac{1}{t}\int_0^t h(s)\,ds$ and $Qh(t) = \int_t^\infty h(s)\,\frac{ds}{s}$. Results by M. Ariño and B. Muckenhoupt (see [1]) and C. J. Neugebauer (see [11]), which extend Hardy's inequalities, ensure that $P$ is bounded on $\Lambda^p_\mu(w)$ if and only if $w \in B_p$. The boundedness of P on Λ p,∞ µ (w) was also considered by J.
Soria (see [14] Theorem 3.1).Soria's result ensures that: Lemma 2.1 Let 1 ≤ p < ∞ and w be a weight on R + .Then, the following are equivalent, For any a > 1 we have that where the last inequality follows from (4).Since we have that Now if we take a = e 2c we obtain a constant C (depending only on p) such that Finally, since P w(r) ≤ pP Q p w (r) it follows that We observe that condition ii) is hence by Fubini's theorem which by [7] and by Sagher's Lemma (see [12]), this happens if and only if On the other hand, as we have seen before, condition 3 The main result Theorem 3.1 Let 1 ≤ p < ∞ and w be a weight in R + .Then, the following are equivalent, , where the equivalence constants do not depend on µ. Thus by Hardy's Lemma (see [4] Proposition 3.6, pag.56) and Fubini [15], Theorem 3.11.pag 192) we have that Collecting terms, we get The reverse inequality follows by the triangular inequality and condition B p . ii) ⇒ i).This is a direct consequence of Lemma 2.1 since if we apply condition ii) to the characteristic function χ A with µ(A) = r, we obtain µ (w) and then w / ∈ B * ∞ ) and hence if f ∈ Λ p,∞ µ (w) we get lim t→∞ f * * µ (t) = 0. Now using the elementary identity (see [3]) and letting s → ∞ we find that if f ∈ Λ p,∞ µ (w) Hence On the other hand, since w ∈ B p the boundedness of the Hardy operator in Λ p,∞ µ (w) implies that iii) ⇒ i).Since 1/W 1/p is decreasing and lim x→∞ 1/W 1/p (x) = 0; as a consequence of Ryff's Theorem (see [4] Corollary 7.6.pag.83) there is a µ-measurable function f on Ω such that f * µ = 1/W 1/p , then by hypothesis and by (3) w ∈ B p .Given a > 0 and s > 1, define and let g(t) = Qh(t).Since g is decreasing and lim x→∞ g(t) = 0, again by Corollary 7.6.pag.83 of [4], we can find we get that in particular which implies that . Summarizing we have proved that which by Lemma 2.1 implies that w ∈ B p ∩ B * ∞ .Observe that in the above theorem we have proved the norm equivalence between f (in the classical Lorentz space Λ p µ (w)) and f * * µ − f * µ in the weighted L p (w) space.In fact we have the following Proposition 3.1 The following statements are equivalent, where the rearrangement (f * * µ − f * µ ) * is taken with respect the Lebesgue measure in R + . Proof.i) ⇒ ii).Since w ∈ B p ∩ B * ∞ , it follows from Theorem 3.1 that lim t→∞ f * * µ (t) = 0, for every f ∈ Λ p µ (w).Hence, where S := P • Q is the Calderón operator.Since if h is a nonnegative function on R + then S h (t) is decreasing, for each t > 0, by taking rearrangement (with respect the Lebesgue measure in R + ) we get (see [4] Proposition 5.2.pag.142) Applying condition ii) we get Given a > 0 and s > 1, define and let g(t) = Qh(t).Since g is decreasing and lim x→∞ g(t) = 0, using again Ryff's Theorem (see [4] Remark 3.1 If 1 < p 0 < ∞ using the same proof that above and Theorem 3.3.9 of [5], one can easily check that Theorem 3.2 holds for every w in the biggest class B p/p 0 ,∞ ∩ B * ∞ where w ∈ B q,∞ ⇔ W (r)/r q ≤ cW (s)/s q 0 < s < r < ∞, (q > 0). Corollary 7.6, pag.83) we can find f such that f * µ = g.Then f * * µ − f * µ = P Qh − Qh = P h + Qh − Qh = P h, thus, by condition ii), since h is decreasing and since w ∈ B p , we get that ∞ 0 Qh(x) p w(x) dx ) p w(x) dx = W (a) + a ≤ c W (a) + a p ∞ a w(x) x p dx ≤ cW (a)
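As a point of reference for the power-weight special case mentioned in the introduction above, the following short verification (our own, not part of the original text) shows how the definitions of $\Lambda^q_\mu(w)$ and $\Lambda^{q,\infty}_\mu(w)$ with $w(t) = t^{q/p-1}$ reduce to the familiar $L^{p,q}_\mu$ and $L^{p,\infty}_\mu$ quasi-norms:

\[
\int_0^\infty f^*_\mu(t)^q \, t^{q/p-1}\, dt
  = \int_0^\infty \bigl(t^{1/p} f^*_\mu(t)\bigr)^q \,\frac{dt}{t}
  = \|f\|_{L^{p,q}_\mu}^q ,
\qquad
W(t) = \int_0^t s^{q/p-1}\, ds = \tfrac{p}{q}\, t^{q/p},
\]
\[
\sup_{t>0} W(t)^{1/q}\, f^*_\mu(t)
  = \Bigl(\tfrac{p}{q}\Bigr)^{1/q} \sup_{t>0} t^{1/p} f^*_\mu(t)
  \simeq \|f\|_{L^{p,\infty}_\mu}.
\]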
2019-04-20T13:04:19.284Z
2005-05-25T00:00:00.000
{ "year": 2005, "sha1": "cd978c166ce89843f5fbdf8ff6379bea3e370712", "oa_license": "CC0", "oa_url": "https://ddd.uab.cat/pub/artpub/2005/defdcd53e833/Pesos06.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "7bdeabc221b5bce69c6177c728facdba4498736e", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
252670908
pes2o/s2orc
v3-fos-license
T-cell deficiency and hyperinflammatory monocyte responses associate with Mycobacterium avium complex lung disease Immunological mechanisms of susceptibility to nontuberculous mycobacterial (NTM) disease are poorly understood. To understand NTM pathogenesis, we evaluated innate and antigen-specific adaptive immune responses to Mycobacterium avium complex (MAC) in asymptomatic individuals with a previous history of MAC lung disease (MACDZ). We hypothesized that Mav-specific immune responses are associated with susceptibility to MAC lung disease. We measured MAC-, NTM-, or MAC/Mtb-specific T-cell responses by cytokine production, expression of surface markers, and analysis of global gene expression in 27 MACDZ individuals and 32 healthy controls. We also analyzed global gene expression in Mycobacterium avium-infected and uninfected peripheral blood monocytes from 17 MACDZ and 17 healthy controls. We were unable to detect increased T-cell responses against MAC-specific reagents in MACDZ compared to controls, while the responses to non-mycobacteria derived antigens were preserved. MACDZ individuals had a lower frequency of Th1 and Th1* T-cell populations. In addition, MACDZ subjects had lower transcriptional responses in PBMCs stimulated with a mycobacterial peptide pool (MTB300). By contrast, global gene expression analysis demonstrated upregulation of proinflammatory pathways in uninfected and M. avium-infected monocytes, i.e. a hyperinflammatory in vitro response, derived from MACDZ subjects compared to controls. Together, these data suggest a novel immunologic defect which underlies MAC pathogenesis and includes concurrent innate and adaptive dysregulation which persists years after completion of treatment. Introduction Nontuberculous mycobacteria (NTM) are commonly encountered in the environment (1)(2)(3). Despite widespread exposure to NTMs, few humans develop disease. Risk factors for NTM lung disease include cystic fibrosis, structural lung disease, and a syndrome in women with higher rates of scoliosis, pectus excavatum, and a low body mass index (4,5). Disseminated NTM infection is associated with Mendelian susceptibility to mycobacterial disease (MSMD; OMIM#209950), a rare pediatric disease caused by inborn errors of IFNg immunity (6), suggesting a role for IFNg. Although previous studies suggest that adults with Mycobacterium avium complex (MAC) disease have low levels of IFNg-production which could limit the protective immunity, the mechanisms underlying this cellular defect are not known (7)(8)(9) and the majority of NTM infections occur without identification of a genetic or immune defect. Species in the Mycobacteria genus, which includes NTMs, the Bacille Calmette-Guerin (BCG) vaccine, and Mycobacterium tuberculosis (Mtb), share many similarities, including a lipid-rich cell wall and conserved proteins (10). Previous studies suggest that prior NTM exposure and sensitization could provide a protective immune response against tuberculosis (TB) (i.e., heterologous mycobacterial immunity) and/or impair BCG-induced vaccine protection (11)(12)(13)(14). Mechanisms of heterologous immunity could arise from shared T-cell responses to highly conserved epitopes between Mtb and NTMs. Animal models support that NTM infection is protective against TB disease and vice versa (14,15).
However, these previous studies were focused on a limited number of immune responses (IFNg-production or delayed type hypersensitivity in skin), employed whole cell reagents rather than single proteins or peptides, and/or were underpowered. With ongoing vaccine developing efforts for TB, understanding the role of NTM exposure and heterologous immune responses may be important for success (11). Despite geographic variation of NTM, the Mycobacterium avium complex (MAC) including M. avium and M. intracellulare, are the most common NTM in all global regions (2), and the main drivers of the increasing incidence of NTM infection (16)(17)(18). Diagnosis of MAC lung disease is often challenging due to difficulties in collecting sputum and differentiating between MAC colonization and lung disease (19). Efforts to improve MAC diagnostics included skin tests (20,21) and serodiagnostics (22)(23)(24). However, immunologic tests which accurately diagnose MAC, predict disease progression, or assess the success of treatment are not currently available. We hypothesize that detection of MAC antigen-specific T-cell responses could lead to an assay to identify MAC exposure, infection, and/or disease, and to test concepts of heterologous immunity. In this study, we hypothesized that M. avium (Mav)-specific immune responses are associated with susceptibility to MAC lung disease. To investigate this, we compared innate and adaptive immune responses to Mav in asymptomatic individuals with and without a history of treated MAC lung disease (MACDZ). We further tested whether it was possible to define T-cell epitopes associated with MAC-specific immune responses. Study participants We enrolled subjects in Seattle who had a history of MAC isolated from a sputum sample. Among the MAC subjects, the majority met American Thoracic Society (ATS) criteria for MAC lung disease (19) with a history of both pulmonary symptoms at the time of diagnosis and abnormalities on chest radiography (Table 1). Among the forty-three MAC subjects (MACDZ; 27 with adaptive profiling and 17 with innate profiling (1 studied in both groups)), forty-two were asymptomatic at the time of diagnosis and 39 had documented radiographic abnormalities (3 had missing data) with MAC. At the time of blood collection, the subjects did not have symptoms and the majority had previously completed a course of treatment months to years earlier. There were no exclusion criteria for the case group. We designed two study groups for adaptive (T-cell response) and innate (Monocyte Mav infection) immune profiling (Table 1). For innate profiling, we also enrolled local controls in Seattle who were self-described as healthy without history of recurrent or serious infections. For adaptive profiling, we enrolled healthy controls at the University of California, San Diego Anti-Viral Research Center (San Diego, USA, n=28), and at the Universidad Peruana Cayetano Heredia (Lima, Peru, n=4), both with (IFNg-release assay (IGRA)+HC) and without (IGRA-HC) latent tuberculosis infection. Mtb infection status was confirmed by a positive IGRA (QuantiFERON-TB Gold In-Tube, Cellestis, or T-SPOT.TB, Oxford Immunotec) and the absence of symptoms consistent with TB or other clinical and radiographic signs of active TB. There were no exclusion criteria for the different cohorts. Participants from case and control groups were chosen randomly unless there were limitations on how many cells were available. The individuals running the experiments were blinded to the case control designation. 
Equal numbers of controls and MACDZ individuals were included in each experiment. We considered the final sample size an exploratory decision due to a lack of direct preliminary data to guide the decision in this cohort. Peptides To discover the epitopes targeted by MAC-specific T-cell responses in MACDZ individuals, we constructed a candidate (25), and peptides with a median percentile rank ≤2 were selected. Any peptides from "hypothetical proteins" were excluded. This resulted in a peptide library of a total of 1,584 peptides: 628 MAC-specific, 516 NTM-specific, and 440 MAC/ Mtb-specific (Table S1). Peptides were randomly divided into three peptide pools per category, resulting in nine pools. PBMC isolation and thawing Venous blood was collected in heparin or EDTA containing blood bags or tubes and PBMCs were isolated by density gradient centrifugation using Ficoll-Hypaque (Amersham Biosciences) according to the manufacturer's instructions. Cells were resuspended in FBS (Gemini Bio-Products) containing 10% DMSO (Sigma-Aldrich) and cryopreserved in liquid nitrogen. Cryopreserved PBMC were quickly thawed by incubating each cryovial at 37°C for 2 min, and cells transferred to 9ml of cold medium (RPMI 1640 with L-glutamin and 25mM HEPES; Omega Scientific), supplemented with 5% human AB serum (GemCell), 1% penicillin streptomycin (Life Technologies), 1% glutamax (Life Technologies) and 20 U/ml benzonase nuclease (MilliporeSigma). Cells were centrifuged and resuspended in medium to determine cell concentration and viability using trypan blue and a hematocytometer. Cell culture reagents, mycobacterial strains Monocytes were cultured in Roswell Park Memorial Institute 1640 medium containing phenol red, HEPES and L-glutamine (RPMI 1640, Gibco) supplemented with fetal bovine serum (Atlas Biologicals) to a final concentration of 10% (RPMI-10) and recombinant human macrophage colony-stimulating factor (M-CSF, Peprotech) at 50 ng/mL. The Mycobacterium avium strain 104 (from the Cangelosi lab) was cultured in Middlebrook 7H9 media (BD Difco) supplemented with glycerol (Fisher; 4 mL/L), Middlebrook ADC Supplement (BD BBL, 100 mL/L) and Tween 80 (Fisher; 0.05% final) and grown to log-phase. Cultures were pelleted at 3,000 x g, washed twice in Sauton's media, resuspended in Sauton's media to OD~1.0 and aliquots were frozen at -80°C until monocyte infections. Freshly thawed M. avium 104 stocks were used to immediately infect monocyte cultures after obtaining the optical density to avoid heterogeneity between batches. The conversion of OD to CFU to achieve the desired multiplicity of infection (MOI) was determined by plating serial dilutions of a freshly frozen stock on Middlebrook 7H10 agar (BD BBL) for CFU enumeration. Fluorospot assay PBMCs were thawed and antigen-specific cellular responses were measured by IFNg, IL-5, and IL-17 Fluorospot assay with all antibodies and reagents from Mabtech (Nacka Strand, Sweden). Plates were coated overnight at 4°C with an antibody mixture of mouse anti-human IFNg (Clone 1-D1K), IL-5 (TRFK5), and IL-17 (MT44.6). Briefly, 200,000 cells were plated in each well of the pre-coated Immobilon-FL PVDF 96well plates (Mabtech), stimulated with the respective antigen (peptide pools at 1 mg/ml, Mav and Mtb whole cell lysates at 10mg/ml, PHA at 10mg/ml as a positive control and DMSO corresponding to the concentration present in the peptide pools). M. avium strain 104 whole cell lysate was prepared by heat-killing at 100°C for 25 minutes. 
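The OD-to-inoculum arithmetic implied by the infection protocol above can be written out in a few lines. The sketch below is only illustrative: the OD-to-CFU conversion factor shown is a hypothetical placeholder (in the study this factor was determined empirically by plating serial dilutions), and the function name is ours.

```python
def inoculum_volume_ml(n_cells, moi, od600, cfu_per_ml_per_od):
    """Volume of bacterial stock (mL) needed to infect n_cells at the
    desired MOI, given the stock OD600 and an empirically determined
    OD-to-CFU conversion factor."""
    cfu_per_ml = od600 * cfu_per_ml_per_od
    required_cfu = n_cells * moi
    return required_cfu / cfu_per_ml

# Hypothetical example: 1e6 monocytes at MOI 5, stock at OD600 = 1.0,
# assuming 3e8 CFU/mL per OD unit (placeholder value, not from the paper)
print(inoculum_volume_ml(1e6, 5.0, 1.0, 3e8))  # ~0.017 mL
```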
Mtb whole cell lysate (Mtb lysate) from strain H37Rv was obtained from BEI Resources, NIAID, NIH (NR-14822). Fluorospot plates were incubated at 37°C in a humidified CO 2 incubator for 20-24 hrs. All conditions were tested in triplicates. After incubation, cells were removed, plates were washed six times with 200 ml PBS/0.05% Tween 20 using an automated plate washer. After washing, 100ml of an antibody mixture containing anti-IFNg (7-B6-1-FS-BAM), anti-IL-5 (5A10-WASP), and biotinylated anti-IL-17 (MT504) prepared in PBS with 0.1% BSA was added to each well and plates were incubated for 2 hrs at room temperature. The plates were again washed six times and incubated with diluted fluorophores (anti-BAM-490, anti-WASP-640, and anti-SA-550) for 1 hr at room temperature. After incubation, the plates were washed and incubated with a fluorescence enhancer for 15 min. Finally, the plates were blotted dry and spots were counted by computer-assisted image analysis (IRIS, Mabtech). The responses were considered positive if they met all three criteria (i) net spot forming cells per 10 6 PBMC were ≥20, (ii) stimulation index ≥2, and (iii) p≤0.05 by Students' t test or Poisson distribution test. The sum of the positive responses for each individual cytokine was used to represent the total magnitude of response. RNA-sequencing data analysis Paired-end reads that passed Illumina filters were filtered for reads aligning to tRNA, rRNA, adapter sequences, and spike-in controls. The reads were aligned to the GRCh38 reference genome and Gencode v27 annotations using STAR (v2.6.1) (31). DUST scores were calculated with PRINSEQ Lite v0.20.3 (32) and low-complexity reads (DUST > 4) were removed from BAM files. The alignment results were parsed via SAMtools (33) to generate SAM files. Read counts to each genomic feature were obtained with the featureCounts (v 1.6.5) (34) using the default option along with a minimum quality cut off (Phred > 10). After removing absent features (zero counts in all samples), the raw counts were imported into R v3.6.1 and genes with an average TPM < 1 were removed. R/Bioconductor package DESeq2 v.1.24.0 (35) was used to normalize raw counts. Variance stabilizing transformation was applied to normalized counts to obtain log 2 gene expression values. Quality control was performed using boxplots and Principal component analysis (PCA), using the 'prcomp' function in R, on log 2 expression values. Differentially expressed genes (DEGs) were identified using the DESeq2 Wald test, and p-values were adjusted for multiple test correction using the Benjamini Hochberg algorithm (36). Genes with adjusted p values < 0.05 and log2 fold change > 0.5 or < -0.5 were considered differentially expressed. Pathway enrichment analysis was performed using Enrichr (https://maayanlab.cloud/Enrichr/), and cell type enrichment was performed using DICE (37). The RNAseq data have been submitted to the Gene Expression Omnibus under accession number GSE199403 (http://www.ncbi.nlm.nih. gov/geo/). CD14+ monocyte isolation and M. avium infection Peripheral blood mononuclear cells (PBMC) were isolated from selected individuals using Ficoll gradient separation, followed by washing, and cryopreservation. Cryopreserved PBMCs were thawed in batches of 8 donors (balanced by MAC subjects/healthy controls), (Day = 0) and viable cells, as assessed by Trypan Blue stain, were resuspended in RPMI/10 containing M-CSF (50 ng/mL) at 2 million cells per mL and rested overnight in non-TC treated dishes at 37°C/5% CO 2 . 
On day 1, CD14+ monocytes were enriched with negative selection using magnetic beads (Classical Monocyte Isolation Kit, Miltenyi Biotec) and then plated at 1 million cells per mL RPMI-10 supplemented with M-CSF and again incubated at 37°C/5% CO 2 . The purity of the enriched CD14+ population was 60-80% as determined by flow cytometry. On day 2, cell cultures were stimulated either with M. avium 104 diluted in RPMI/10 to achieve an estimated MOI 5.0 or an equivalent volume of RPMI/10 media alone. After 6 hrs, media was aspirated and cells were lysed in Trizol (Invitrogen) and lysates were transferred to cryotubes and stored at -80°C. RNA was isolated from lysates in batches by chloroform extraction and the application of the aqueous phase with 100% ethanol to miRNeasy mini columns, which were washed and eluted according to the manufacturer instructions (Qiagen). RNA quality was assessed by Agilent TapeStation to ensure RIN ≥ 8.0 and quantification was measured using Nanodrop (Thermo Scientific). Differential gene expression, gene set enrichment analyses, and STRING network analysis To identify genes with expression patterns that distinguished MAC and HC phenotypes according to the monocyte response to Mav infection, we selected a linear mixed effects model that incorporated an interaction term in addition to the main effects: Expression~MACDZ + Mav + MACDZ : Mav +/-covariates with patient included as random effects and age, sex, and ethnicity included as covariates using R packages lme4 (43). Inclusion of age, sex, or ethnicity as covariates in the model did not improve the fit (median sigma changes 0.0001 for age, 0.00007 for sex, and 0.000003 for ethnicity, Figures S4A-C). Furthermore, except for ethnicity, no clustering was detected on PCA plots of these covariates ( Figure S5). Differentially expressed genes (DEGs) were assessed at an FDR<0.05, and significant genes were further assessed in a MACDZ : Mav pairwise contrasts model including MACDZ within media or Mav-infected and Mav infection within HC or MACDZ. We also explored whether to include the samples from cystic fibrosis (CF) subjects due to imbalance of this variable in the cases and controls (6 vs 0, respectively). Using pairwise comparisons of MACDZ subjects with and without CF, we did not discover any DEGs in the media or Mav condition. In addition, there was no difference in CF vs no-CF clustering on a PCA plot ( Figure S6D) or improved model fit with exclusion of CF samples (Figures S5D, E). However, removal of CF samples lowered the numbered of DEGs substantially (227 vs 45 at FDR <0.05) likely due to reduced power. Without evidence of confounding by the CF samples, we proceeded with further analyses with inclusion of the CF samples. To understand biologic connectivity between significant genes, we used STRING v11 network analysis (44) of Mavdependent DEGs as defined by the interaction term (2 genes) or both MACDZ and Mav infection (87 genes), as well as Mavindependent DEGs as defined by MACDZ alone (138 genes). We identified one major cluster for Mav-dependent DEGs (31 out of 89 genes) and one for Mav-independent DEGs (28 out of 138 genes. DEGs were also assessed for enrichment against Gene Ontology (GO), Hallmark and Kyoto Encyclopedia of Genes and Genomes (KEGG) gene sets using Fisher's exact test in Enricher (45). Gene set enrichment analysis (GSEA) was performed using the Molecular Signatures Database [MSigDB v7.2 (46)] Hallmark and Gene Ontology (GO) collections. 
Fast gene set enrichment analysis [FGSEA (47)] was used to compare fold changes of all genes in MACDZ : Mav pairwise contrasts as described above. Leading-edge genes in significant GSEA results (FDR < 0.1) were compared between MAC and HC to identify significant pathways. Statistics For flow cytometry and fluorospot data, significant differences in frequencies of cell subsets, magnitude of responses, and individual gene expression were calculated by the two-tailed Mann-Whitney test. Results were considered statistically significant at p<0.05. For RNASeq data, DEGs for the PBMC samples that met the criteria of adjusted p-values <0.05 and log2 fold change of >0.5 or <-0.5, were identified using the DESeq2 Wald test, and p-values were adjusted for multiple test correction using the Benjamini-Hochberg algorithm (36). Pathway enrichment was performed using Enrichr and cell type enrichment was performed using DICE. For RNASeq data, DEGs for the monocyte samples that met the criteria of adjusted p-values <0.05, were identified using a linear mixed effects model in R using the lme4 package, and p-values were adjusted for multiple test correction using the Benjamini-Hochberg algorithm (36). Pathway enrichment was performed using Enrichr and gene set enrichment analysis (GSEA) was performed using the Molecular Signatures Database (MSigDB v7.2) Hallmark and Gene Ontology (GO) collections. Study approval Approval for study protocols was obtained from the institutional review boards at the University of Washington School of Medicine and La Jolla Institute for Immunology. All participants provided written informed consent prior to participation in the study. MACDZ have infrequent MAC-antigen or mycobacteria-specific T-cell responses To define T-cell responses against MAC antigens, we tested PBMCs from 10 asymptomatic, previously treated MACDZ and 10 IGRA+HC (Table 1) with MAC-, NTM-, and MAC/Mtbspecific peptides, and MTB300 (26) which includes peptides found in NTMs, an EBV/CMV-II and a TT pool of epitopes as controls (Methods), as well as Mav and Mtb whole cell lysates. The antigen-specific reactivity was assayed directly ex vivo using an IFNg/IL-5/IL-17 Fluorospot assay, where the total cytokine response is presented as the sum of the three ( Figure 1A). Surprisingly little reactivity was detected against the pools in MACDZ individuals. As expected, the IGRA+HC also did not react to the 9 different peptide pools. The IGRA+HC had higher reactivity against Mtb lysate and MTB300 (as expected), but also a trend towards higher reactivity against Mav lysate. Both cohorts had similar reactivity against the nonmycobacteria derived peptide pools EBV/CMV and TT. The reactivity detected in both cohorts were primarily driven by IFNg-specific responses, with barely any IL-5 or IL-17 detected ( Figure S2A). To determine whether reactivity in MACDZ was driven by a response other than IFNg/IL-5/IL-17, we also used a cytokineagnostic approach measuring Activation Induced Marker (AIM) upregulation following antigenic stimulation ( Figure 1B). Upregulation of both OX40 and PDL1 has previously been used to measure Mtb-specific T-cell reactivity (48). Again, we found minimal reactivity against the 9 different peptide pools in both cohorts and the same hierarchy of responses against the controls ( Figure 1B). IGRA+HC had a trend towards higher reactivity against Mav lysate irrespective of the activation markers investigated ( Figure S2B). 
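Before turning to the results, a brief illustration of the per-gene model specified in the monocyte analysis above may be helpful. The original analysis was run in R with lme4; the Python/statsmodels formulation below is only meant to show the structure (fixed effects for disease group, Mav infection, and their interaction, plus covariates, with a random intercept per donor). The column names are our own assumptions, not taken from the paper.

```python
import statsmodels.formula.api as smf

def fit_gene_model(df):
    """Fit one gene's expression with fixed effects for MACDZ, Mav infection,
    and their interaction (plus age and sex), and a random intercept per donor."""
    model = smf.mixedlm("expression ~ MACDZ * Mav + age + sex",
                        data=df, groups=df["donor"])
    return model.fit()

# df is expected to hold one row per sample, with columns:
# expression (log2 value), MACDZ (0/1), Mav (0/1), age, sex, donor.
```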
Finally, we measured stimulus-specific IFNg, IL-2, TNFa and CD154 responses to determine whether MACDZ had a different polyfunctional response ( Figure S2C), but as before, if anything, responses in the IGRA+HC were higher. In conclusion, we were unable to define T-cell responses against the peptide library or increased reactivity compared to IGRA+HC against Mav lysate with multiple antigen-specific assays. Importantly, the lack of response did not translate more broadly to non-mycobacteria derived antigens. MACDZ have lower frequencies of specific PBMC cell subsets To determine the cause for the lack of reactivity in MACDZ against Mav reagents, we determined basal frequencies of major PBMC subsets, e.g. measured without antigen stimulation in MACDZ (n=19) compared to IGRA-HC (n=18). We first analyzed the relative frequency of monocytes, NK cells, B cells, CD56-expressing T-cells, T-cells, CD4+ and CD8+ T-cells (Figure 2A). The frequency of monocytes was higher in MACDZ compared to IGRA-HC (p=0.036; Two-tailed Mann-Whitney test), and in contrast, the frequency of lymphocytes was lower (p=0.038). This difference was primarily driven by lower frequencies of CD8+ T-cells (p=0.0002), as all other cell subsets had similar frequencies (p>0.05). We next analyzed the frequencies of CD4 and CD8 memory T-cell subsets ( Figure 2B, Figure S3A). There was no significant difference between CD8 memory T-cell subsets ( Figure S3A); however, for CD4 memory a significant lower level of Tem (CD45RA-CCR7-, p=0.02), and a borderline significant higher level of naïve (CD45RA+CCR7+, p=0.05) T-cells was detected in MACDZ compared to IGRA-HC ( Figure 2B). This was striking since MACDZ were older than IGRA-HC (p=0.05, Figure 2C) and we had hypothesized the opposite based on the general trend toward a shrinking naïve T-cell pool as a function of older age. We also measured the frequency of T helper subsets based on the expression of CXCR3, CCR6, and CCR4. MACDZ had lower levels of Th1 (p<0.0001), Th1* (p=0.0005) and CXCR3 +CCR6-CCR4+ (p=0.0003) memory CD4 T-cells, and conversely higher levels of CXCR3-CCR6-CCR4-(p=0.014) memory CD4 T-cells ( Figure 2D). There were 9 individuals out of the 17 MACDZ tested with the Th subset markers that were affected by CF. There were no significant differences in B A Th1* and Th1 subsets in individuals with CF ( Figure S3B). The lower frequency of Th1* and Th1 subsets, which are involved in Mtb-specific immune responses, is consistent with the lack of detectable Mycobacteria-specific T-cell responses in MACDZ. Transcriptional analysis of unstimulated and stimulated PBMCs reveals monocyte and T-cell gene signatures specific to MACDZ To further attempt to discover MAC-specific T-cell immune responses, we performed RNAseq on PBMCs from the MACDZ (n=15), IGRA+HC (n=10) and IGRA-HC (n=15) cohort, after a 24 hrs stimulation with Mav lysate, Mtb lysate, MTB300, and anti-CD3/CD28 as a positive control. MTB300 contains immunodominant T-cell epitopes which are present in both Mtb and NTMs. The OX40/PDL1 AIM assay yielded results similar to those described above ( Figure S4A). The MAC cohort was associated with the lowest number of OX40+PDL1+ CD4 Tcells in response to Mycobacteria-derived stimulation, but no difference in the response following anti-CD3/CD28 stimulation. Hypergeometric mean pathway enrichment using Hallmark gene sets for the upregulated genes for each group showed similar pathways between unstimulated and MTB300stimulated samples. 
Genes involved in heme metabolism, coagulation and complement were identified for MACDZ, although with adjusted p-values just below the cut-off of 0.05 (corresponding to -log10 1.3 in the figures). For IGRA+/-HC, we found significant enrichment for inflammatory response, interferon-gamma response and TNF-alpha signaling via NF-kB (Figures 3C-F). The similar pathways within each cohort in unstimulated vs. MTB300 stimulated samples was explained by a large overlap of DEGs ( Figure 3G, Table S2). Using cellular deconvolution methods (through dice-database.org) for the upregulated genes in each cohort, we found an enrichment corresponding to classical and non-classical monocytes in MACDZ and, in contrast, an enrichment of activated CD4 and CD8 T-cells in IGRA+/-HC, which was, as expected, more pronounced following MTB300 stimulation (Figures 3C-F). Several Th1*-related genes were upregulated in IGRA+/individuals, 7 in unstimulated [p=0.11; overlap between upregulated genes here compared to the previously described Th1* signature (49)] and 9 in MTB300 stimulated samples (p=0.007), thus reflecting the differences observed in the phenotypic analysis described above. The IL-32 and CXCR6 expression was increased in IGRA+/-HC compared to MACDZ in both unstimulated and MTB300-stimulated samples ( Figure 3H; remaining Th1* genes in Figure S4C). Overall, these results demonstrate that MTB300stimulated T-cell responses in PBMCs are lower in MACDZ compared to controls. Furthermore, the Th1 and Th1* cell subset frequency differences above could explain some of the gene expression changes observed here. In addition, these data suggested differences in monocyte frequency and monocyte response to MTB300 within PBMCs when comparing MACDZ and controls. Uninfected and Mav-infected monocytes in MACDZ have upregulated proinflammatory pathways We next hypothesized that MACDZ subjects have a hypofunctional myeloid cell response which underlies the deficient T-cell responses. We enriched CD14+ monocytes from PBMC (MACDZ, N=17; N=14 with MAC lung disease, 3 without CXR data available; or HC, N=17, Table 1 138 were significant for MACDZ only (Mav independent), 87 were significant for MACDZ and Mav infection, and 2 were significant for the interaction term (together being Mavdependent; FDR <0.05, Figure 4A and Table S3). To discover gene signatures that distinguished MACDZ vs. HC monocyte populations, we employed gene set enrichment analysis (GSEA) with the entire dataset (50) using the molecular signatures database (MSigDB) 'Hallmark' curated gene sets (46). We identified multiple gene signatures that differentiated MACDZ and HC monocytes, including an enrichment of inflammatory and signaling genes upregulated in MACDZ compared to HC, in both unstimulated and Mav infected monocytes ( Figure 4B and Table S4). The most significantly enriched gene sets included Inflammatory Response, Interferon-Gamma Response, TNFA Signaling via NFKB, and IL6 JAK STAT3 Signaling (FDR < 0.01 for media, Mav, or both). We also used hypergeometric mean pathway enrichment for the 138 Mav-independent and 89 Mav-dependent DEGs. Only Mav-dependent DEGs showed significant enrichment; this included the same pathways as GSEA, namely Inflammatory Response, Interferon-Gamma Response, and TNFA signaling via NFKB (Table S5). 
Analysis of the most significantly enriched KEGG pathways provided additional details on more specific pathways including Chemokine_Signaling_Pathway, Cytosolic_DNA_Sensing_Pathway, Nod_Like_Receptor Signaling_Pathway, and Toll-Like_receptor_ Signaling_Pathway (FDR < 0.001, Table S5). Taken together, these data suggest that monocyte transcriptional profiles of MACDZ are enriched for pro-inflammatory pathways compared to HC in both media and Mav infection conditions. To further explore biologic pathways connected to these DEGs, we used network analysis (string-db.org) to connect the DEGs. For Mav-independent DEGs, we found a cluster (N=28 genes) of highly interconnected DEGs and 61 DEGs that had three or fewer connections ( Figure S7). For the Mav-dependent DEGs, we found a different highly interconnected cluster (N=31 genes) that was centered on IL-6 and contained transcription factors (REL, NF-kB, IRF8, NFE2, and SRF), cytoplasmic signaling molecules (CASP1, CASP4), co-stimulatory molecules (CD40, CD48, CD80), cytokines (IL6), and chemokines (CCL4, CCL8) ( Figure 4C). For each of these genes, expression values were higher in MACDZ compared to HC for the media and/or Mav condition ( Figures 4D-F, Figure S8). Overall, our results suggest that MACDZ have a higher proinflammatory expression profile in monocytes in both unstimulated and Mav-infected conditions. Discussion We present an in-depth characterization of the innate and adaptive immune responses in peripheral cells in MACDZ individuals. First, we found attenuated T-cell responses across multiple cellular subsets, including diminished responses of the Th1* subset that is important for antimycobacterial immunity (51,52). Second, we were unable to detect T-cell responses against MAC-specific peptides or increased reactivity compared to IGRA+HC against Mav lysate in multiple antigen-specific assays. Third, transcriptional analysis of Mav-infected blood monocytes demonstrated enhanced innate immune activation in MACDZ compared to controls. To our knowledge, these are the first concurrent observations of both hyperinflammatory innate and hypoinflammatory adaptive profiles in MACDZ subjects, which provide a new conceptual framework for understanding MAC immunopathogenesis. Previous studies demonstrated that MAC disease occurs in individuals with T-cell defects, such as those with AIDS, rare genetic or autoimmune diseases which comprise IFNg-dependent immune responses (6). Furthermore, the lack of MAC-specific IFNg response is in concordance with previous studies where low IFNg was detected following lysate or antigen-mixture stimulation in individuals with pulmonary NTM/MAC without previously identified immune defects (7-9, 53, 54). However, not all studies have found lower levels of IFNg, Wu et al. found differences in IL-17 and GM-CSF, but no differences in IFNg and IL-12 (55). We extend these studies with the identification of specific T-cell subsets underlying the defect, discovery of concurrent hyperinflammatory innate responses, and use of peptide reagents with well-defined specificity. We were not able to define MAC-or NTM-specific T-cell epitopes in this study in individuals with known MAC exposure and lung disease. This could support a model whereby MACDZ do not contain immunodominant T-cell epitopes which are species-specific. However, previous work from our group and others have defined NTM-specific T-cell reactivity in healthy individuals without Mtb infection, with an assumed NTM exposure (52, 56). 
Here, we attempted to define these responses at a MACspecies peptide-specific level in subjects with a known history of MAC disease and presumed T-cell sensitization. Despite our use of unique sets of MAC-, NTM-, and MAC/Mtb-specific peptides, similar to previous studies predicted for promiscuous HLA class II binding, we did not detect any T-cell responses that were enriched in MACDZ. Together, these data and prior studies suggest that the lack of response is due to a host response that is specific to those with documented MAC disease. Our results further highlight an important role for Th1* Tcells in anti-mycobacterial immunity. The Th1* subset contains the majority of Mtb-and NTM-specific T-cells (49,51,52), and a lower frequency of this subset could lead to a lack of Mycobacteria-specific responses, as observed in the MACDZ individuals. This hypothesis is strengthened by the previous observation that IGRA+ individuals have an increased frequency of the Th1* subset compared to IGRA-controls (49) and that this subset mediates BCG-induced CD4 T-cell responses and is increased following vaccination (57). The underlying cause for the lower frequency of Th1* and Th1 subsets in MACDZ remains to be determined, but could also predispose these individuals to this unusual infection through a failed antimycobacterial immune response. Additionally, excessive antigenic exposure from persistent MAC infection and inflammation could dampen T-cell function due to terminal differentiation, or "exhaustion". In Mtb infection, increased frequency of terminally-differentiated PD1-KLRG1+CD4+ Tcells is associated with increased bacterial burden (58). The combination of ongoing infection in an excess of proinflammatory innate immune responses provides evidence for T-cell dysfunction. Another contributing factor includes anatomic barriers. Many MACDZ individuals, those with ciliary dysfunction, cystic fibrosis, or preexisting bronchiectasis (11, 59), have anatomic lung defects that separate bacilli from adaptive immune surveillance. These observations are consistent with effector memory T-cells not replenishing over time without ongoing antigenic stimulation (60). Although this model is a possibility, our study subjects had several different underlying risk factors which makes it difficult to assess whether anatomic barriers were contributing to their T cell responses. Surprisingly, we also observed that blood monocyte proinflammatory responses, were enhanced in MACDZ months to years after completed treatment, despite the lack of effective Tcell help. These data are consistent with several immunologic models. First, enhanced pro-inflammatory innate responses could stimulate persistent T-cell activation via cytokines and contribute to T-cell exhaustion and terminal differentiation. Alternatively, these enhanced innate responses could promote epigenetic changes in myeloid cells which result in trained immunity features (61). In murine trained immunity studies, BCG infected bone marrow and influenced macrophage differentiation and function to induce increased innate responses to a broad array of pathogens (61, 62). Although there is currently no evidence for trained immunity in human MAC disease, our studies suggest that MAC disease is associated with decreased T-cell responses, which permits Mav replication leading to alterations of macrophage responses during chronic infection. 
Alternatively, MACDZ myeloid cells may be genetically programmed with accentuated inflammatory responses which drive susceptibility to MAC disease. Recent genetic studies support this possibility, although the genes underlying MAC susceptibility remain poorly understood (63- 65). Further studies will be needed to assess the impact of both macrophage hyperreactivity and relative T-cell deficiency in Mav pathogenesis, and should also investigate peripheral vs. tissue-resident cell populations. While no diagnostic can fully supplant microbiologic testing, non-sputum-based biomarkers for patient screening, to predict progressive lung disease, and response to treatment would greatly advance the clinical care of patients with NTM disease (66,67). In addition to our findings that MACDZ had a higher pro-inflammatory expression profile in monocytes, Cowman identified over 200 transcripts with differential expression between persons with pulmonary NTM infections compared to other respiratory diseases (68). This suggests a possible role for peripheral blood transcriptional signatures as a biomarker of MAC lung disease. Such studies in TB indicate that signatures can predict progression to disease, distinguish between healthy and persons with TB disease, and associate with TB treatment status (69)(70)(71)(72)(73)(74)(75). Pursuit of peripheral blood transcriptional diagnostics for MACDZ may be an alternative to antigenspecific T-cell assays. Limitations to our study population were inclusion of participants with cystic fibrosis, since they may have differences in immune responses. However, no differences in immune responses or RNA signatures were detected comparing those with and without cystic fibrosis. In addition, the cases had a higher frequency of co-morbid conditions compared to controls. We did not have access to whether the participants were actively smoking at the time of recruitment. Tobacco smoking may influence both the adaptive and innate immunity, and this parameter should be recorded in future studies. Given, that we were unable to detect Mavspecific T-cell responses in MACDZ subjects, this suggests the need to examine individuals at earlier stages of infection, those with colonization, or treatment naïve individuals with active MAC disease. Furthermore, future studies can investigate how long the observed alterations in innate and adaptive immunity may last. In conclusion, this study provides a detailed characterization of immune responses in individuals with MAC disease. Peripheral signatures in MACDZ are characterized by impaired T-cell memory and hyperactive monocyte responses. These findings expand our understanding of the breadth of Tcell deficits associated with MAC disease extending to those without defined deficits. In addition, our data suggests a surprising parallel finding of enhanced innate responses which may be a critical new component of understanding MAC pathogenesis. Data availability statement The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/ Supplementary Material. Ethics statement The studies involving human participants were reviewed and approved by the La Jolla Institute for Immunology IRB board, and the IRB board University of Washington School of Medicine. The patients/participants provided their written informed consent to participate in this study.
Measuring the invisible: analysis of the Sustainable Development Goals in relation to populations exposed to drought

Brazil, together with all the member countries of the United Nations, is in a process of adopting a group of Sustainable Development Goals, including targets and indicators. This article considers the implications of these goals and their proposed targets for the Semi-Arid region of Brazil. This region has recurring droughts, which may worsen with climate change, further weakening access to water for human consumption in sufficient quantity and quality and, as a result, the health conditions of the exposed populations. This study identifies the relationship between drought and health in an effort to measure progress in this region (1,135 municipalities), comparing relevant indicators with the other 4,430 municipalities in Brazil, based on census data from 1991, 2000 and 2010. Important inequalities between the municipalities of this region and the municipalities of the rest of Brazil are identified and discussed in the context of what is necessary for achieving the Sustainable Development Goals in the Semi-arid Region, principally in relation to the measures for adaptation needed to achieve universal and equitable access to drinking water.

Introduction

The risks emerging from environmental changes arising from processes related to the model adopted for economic development (destruction of ecosystems, loss of biodiversity, land use, occupation and deforestation) constitute threats to the environment and to the social and economic structure, especially at the local level. These processes affect the environment and its relationship with society, changing populations' conditions of life and health. In spite of this, the health sector in many countries still shows a certain apathy in relation to these changes, which can, directly or indirectly, alter the state of health of the populations affected 1-3.

Among the conditions or situations of risk that relate to the combination of environmental, climatic and social changes at local and regional level is drought. Drought is a phenomenon that is simultaneously environmental and climatic, related to a prolonged reduction of the water reserves existing in a region, as well as lower than usual rainfall 4. Its nature is complex, due to the difficulty of its delimitation in space (it can affect anything from extremely large areas, owing to the global distribution of humidity, to much smaller areas) and in time (it can last for months or years) 5. The effects of the process of drought on economic, social and environmental development influence factors that determine health, principally in relation to access to potable water and food in sufficient quantity and quality, thus adversely affecting living conditions, especially for the poorer and more vulnerable social groups. The effects of drought on health over the medium and long term are still little recognized and difficult to measure, especially in areas where drought is commonly recurrent 3,6-8.
At global and national level, drought presents itself as a great threat, principally affecting the poorest populations. According to data from EM-DAT, globally between 1970 and 2014 drought was responsible for 5.4% of natural disasters, 31% of the total of people adversely affected, and 21% of deaths 9. In Brazil, according to the Brazilian Atlas of Natural Disasters, in the period between 1991 and 2010, of the 31,909 records of natural disasters and 96 million persons affected, more than 50% were by reason of drought, negatively affecting mainly the Semi-arid Region, which includes eight states of Brazil's Northeast and the northern part of the state of Minas Gerais, in the Southeast Region 10.

In the Semi-arid Region of Brazil, drought is recurrent and long-lasting, and its effects on the conditions of life and health of people are dealt with by economic and social policies and decisions that can reduce, or worsen, the vulnerability of the populations and of the territory 8. The climate changes that are in progress can alter the magnitude and frequency of drought events, which will probably mean greater environmental, economic and social damage, with serious consequences for the populations' health 11.

Concerns about water, drought and health are important parts of the post-2015 development agenda and are included in the Sustainable Development Goals (SDGs). The idea of the SDGs originated at the Rio+20 Conference in 2012, based on a proposal from Colombia and Guatemala 12. In September 2014 a proposal with 17 objectives and 169 targets 13 was presented at a meeting of the General Assembly of the United Nations, and these objectives and targets will be the principal basis for a new agenda for development post-2015 14. Brazil has an important contribution to make in these discussions 15.

This article seeks to understand the relationships between the SDGs and the situation of the Brazilian Semi-arid Region, with emphasis on the relationship between drought, water and health. It also presents a quantitative analysis, for the years 1991, 2000 and 2010, of specific indicators at municipal level.

Methods

For this article, the 17 objectives and their 169 targets were reviewed, classifying them under three dimensions of sustainable development (social, environmental and economic) and highlighting the relationship that exists between the targets on water, drought (desertification) and health. We have prepared a conceptual framework that shows the interrelationships between the 17 SDGs, identifying with a greater or smaller degree of intensity those that are key for understanding and acting on the subject of drought, from the point of view of health and human wellbeing.

We have also analyzed the differences in social, economic and environmental indicators related to the conditions of drought between the 1,135 municipalities of the Brazilian Semi-arid Region (the region most affected by drought in the country, where more than 70% of the drought events recorded in Brazil are concentrated) and the other 4,430 municipalities of the rest of Brazil, using data from the censuses of 1991, 2000 and 2010. We make comparisons of the medians, and the first and third quartiles, of the indicators selected.
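The quartile comparison described above can be illustrated with a short, hypothetical sketch (this is not the authors' code, and the column names region, year and the indicator columns are assumptions made only for the example): a municipal-level table is grouped by region and census year, and the median, first quartile (Q1) and third quartile (Q3) of each selected indicator are computed per group.

```python
import pandas as pd

# Hypothetical table: one row per municipality and census year, with a 'region'
# flag ("semi-arid" vs. "rest of Brazil") and one column per indicator.
df = pd.read_csv("municipal_indicators.csv")

indicators = ["infant_mortality", "literacy_pct", "piped_water_pct", "non_poor_pct"]

# Median, Q1 and Q3 of each indicator, computed separately for each region
# and census year (1991, 2000, 2010), as in the comparison described above.
summary = (
    df.groupby(["region", "year"])[indicators]
      .quantile([0.25, 0.50, 0.75])
      .rename(index={0.25: "Q1", 0.50: "median", 0.75: "Q3"}, level=-1)
)
print(summary)
```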
We provide a graphic expression of four indicators corresponding to health (child mortality rate per thousand live births) and to the dimensions of sustainable development: social (percentage literacy), environmental (percentage with access to piped water) and economic (percentage of people who are not poor), developed in the Brazilian Atlas of Sustainable Development and Health 16,17. These graphics demonstrate the performance of the municipalities of Brazil in terms of progress in these indicators in the periods of 1991, 2000 and 2010, comparing the municipalities that have drought (the Brazilian Semi-arid Region) with the other municipalities of Brazil. The child mortality rate (CMR, abbreviated TMI in Table 1) is represented by a circle, the thickness of which shows the central 50% of the distribution (inter-quartile interval); the other three variables are represented at each of the three angles of the triangle, with the distance between the two lines that represent the first and third quartiles denoting the central 50% of the distribution (inter-quartile interval) of each of these variables. It is important to point out that the ideal condition in the graphic would be for the circle to reach a point at the center of the triangle (TMI = 0), and for the quartiles to move closer to each other and reach the extreme of the triangle (value = 100%).

Results and discussion

Comparing the SDGs with focus on the populations of the Brazilian Semi-arid Region

In reviewing the SDGs we find that all the objectives are related to health, to a greater or lesser degree, and that all are related to the question of water. Figure 1 shows the relationships between these goals, grouped so as to understand their relationships from the point of view of the social, economic and environmental dimensions, and in particular the relationship between water, drought (desertification) and health. To better understand the relationship between the Health and Wellbeing SDG and the other SDGs, it is placed in the center.

The Sustainable Development Goals

The review of the 169 targets proposed in the 17 objectives resulted in 41 targets that can be aligned with the relationship between drought and health. Below we highlight some of these relationships, taking into consideration data that compare social, economic and environmental inequalities between Brazil's Semi-arid Region and all the other municipalities of the country, as detailed in Table 1.
The selected indicators represent some of the targets established within the SDGs. For each of these indicators, a comparison is presented between the Semi-arid Region and the rest of the country, shown by the differences observed between the first and third quartiles, and the medians. Here we highlight an important fall in the median of the TMIn in the Semi-arid Region, from 94.2 to 27.2 per thousand live births, and also an approximation to the median of the rest of the country. This approximation also takes place for the TMI and for life expectancy at birth. There are also important differences in the indicators of poverty, illiteracy and access to piped water, but even so the differences are diminishing, similarly to those in the other indicators assessed. For the IDHM, in 2010 the median in the Semi-arid Region was 0.591, that is to say, 50% of the municipalities had an IDHM less than or equal to 0.591, which translates as 'low' or 'very low'. This advance is an important contrast when compared with the year 1991, when 50% of the municipalities had an IDHM of 0.291 (very low), or less. The other municipalities of Brazil had better levels in 2010 (a median of 0.688, covering average, low and very low), an important increase compared to 1991 (0.414, considered very low).

Figure 2 is a summary chart of health and three other indicators representing the three dimensions of sustainable development. These are: health, measured by the infant mortality rate per thousand live births; the social dimension, measured by the proportion of the population that is literate; the environmental dimension, measured by access to piped water; and the economic dimension, measured by the proportion of the population that is not poor.

In a more detailed analysis of these charts we can see a significant improvement in the four variables, both in the Semi-arid Region and in the rest of the municipalities of Brazil, in the three periods analyzed, and principally in the last 10 years. When we compare the medians of the years 1991 and 2010, we see in the municipalities of the Semi-arid Region a great reduction in child mortality (TMI), from 72.7 in 1991 to 25.2 per thousand live births in 2010, and also a reduction in inequality, seen in the movement of the central circle, which has become thinner and moved in the direction of the central axis of the triangle. As for the other three variables, we also see great advances, evidenced by the lines that represent the first and the third quartiles (distance between quartiles), which move closer towards the vertices of the triangles. We can see in the charts the indicators of: the environmental dimension (proportion of the population with access to piped water), with an increase in the median from 21.1% (1991) to 74.6% (2010); the economic dimension (non-poor percentage of the population), from 18.6% in 1991 to 58.9% in 2010; and the social dimension (proportion of the population that is literate), which increased from 49.5% in 1991 to 70.1% in 2010.

Summing up, these indicators show improvement throughout the country, and an approximation between the municipalities of the Semi-arid Region and the other municipalities of the country. If this progress continues without interruption and with greater prioritization of actions for some municipalities of the Semi-arid Region, it will be possible to achieve several of the targets established in the SDGs.
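As a rough illustration of the kind of summary graphic described above (the actual design follows the Brazilian Atlas of Sustainable Development and Health; the quartile values below are invented for the example, not taken from Figure 2), the triangle-and-circle chart can be sketched as follows: each of the three social, environmental and economic indicators is marked at its Q1 and Q3 along an axis running from the centre (0) to one vertex (100%), and the child mortality rate is drawn as a ring whose thickness spans its inter-quartile interval.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented quartiles (Q1, Q3) for one group and year, on a 0-100 scale.
axis_vars = {"literate (%)": (60, 80), "piped water (%)": (50, 85), "non-poor (%)": (40, 70)}
cmr_q1, cmr_q3 = 20, 35  # child mortality quartiles (per thousand live births)

angles = np.deg2rad([90, 210, 330])  # directions of the three triangle vertices
fig, ax = plt.subplots(figsize=(5, 5))

# Triangle outline: vertices at radius 100, the ideal value of the three indicators.
verts = [(100 * np.cos(a), 100 * np.sin(a)) for a in angles]
verts.append(verts[0])
xs, ys = zip(*verts)
ax.plot(xs, ys, color="black")

# For each indicator, mark Q1 and Q3 along the axis from the centre to its vertex.
for (label, (q1, q3)), a in zip(axis_vars.items(), angles):
    ax.plot([q1 * np.cos(a), q3 * np.cos(a)], [q1 * np.sin(a), q3 * np.sin(a)],
            color="tab:blue", marker="o")
    ax.text(112 * np.cos(a), 112 * np.sin(a), label, ha="center", va="center")

# Child mortality: a ring whose thickness spans the inter-quartile interval.
theta = np.linspace(0, 2 * np.pi, 200)
for r in (cmr_q1, cmr_q3):
    ax.plot(r * np.cos(theta), r * np.sin(theta), color="tab:red")

ax.set_aspect("equal")
ax.axis("off")
plt.show()
```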
SDG-1.End poverty in all its forms everywhere: The relationship between poverty and health is well established 19 .Currently, the Brazilian Semi-arid Region shows significantly higher levels of poverty than the rest of the country (Table 1), hence the targets proposed by the SDG are of functional importance -including, for example, adequate systems of social protection for all, with special attention to the poorer and vulnerable populations, guaranteeing them equal rights to the economic resources, and also access to basic services, principally water.Brazil has made efforts to eradicate extreme poverty, and it is possible to achieve this target by 2030.However, the target of reducing the proportion of people who live in a situation of poverty to half its present level, or less, will call for more efforts, principally in the Semi-arid Region.More than 50% of the beneficiaries, and of the total value of the benefits, of Brazil's 'Family Subsidy' (BolsaFamília) program in 2012 were in Brazil's Northeastern Regionwhere a large proportion of the Semi-arid Region is located 20 .In the Semi-arid Region, the challenge that remains is that the compensatory policies of reduction of poverty should be accompanied simultaneously by emancipatory policies, including the economic, environment and social development that results in improvement of education, access to water, generation of work and income and expansion of sustainable production and consumption.These improvements can strengthen the autonomy and citizenship of the population that was previously adversely affected by poverty, and promote health. SDG-2. End hunger, achieve food security and improved nutrition and promote sustainable agriculture: The conditions of drought mean both scarcity and contamination of water and, consequently, scarcity and contamination of foods, with the capacity to cause absence of food security, malnutrition and other effects on health 5 .At the same time, nutritional deficiency is a central determining factor in child deaths associated with diarrhea, pneumonia, malaria and measles 21 .The targets of eliminating all the forms of malnutrition, including the targets that have been agreed internationally (by 2025) on chronic malnutrition, malnutrition in children less than five years old, and nutritional needs of adolescents, pregnant women, nursing mothers and the elderly, are fundamental for improving the health situation of the Semi-arid Region.Thus, it is necessary to establish strategies to guarantee access to water, for the purpose of doubling agricultural productivity, and the income of small food producers, so as to guarantee sustainable food production systems.These strategies can be supported with the implementation of resilient farming practices that are able to increase the capacity for adaptation to the climatic conditions, including extreme situations of drought.Climatic changes and other environmental changes are new factors causing food insecurity 22,23 .With the subsistence conditions in which the Semi-arid Region lives, it is also important that there should be recognition of family agriculture as a social and political space strategy for production and reproduction of life, and also adaptation to climate change 24 . 
SDG-3. Ensure healthy lives and promote wellbeing for all at all ages: In the Semi-arid Region, indicators such as the infant mortality rate, access to potable water, the level of illiteracy and life expectancy, as well as other indicators, show worse conditions than in the rest of the country 8 (Table 1). Thus, the targets relating to this objective are directly related to the health conditions of this region. Proposals such as ending avoidable deaths of newborns and children aged less than five (by 2030), eradicating the neglected illnesses that are endemic to the region, reducing the incidence of transmissible and non-transmissible diseases, and promoting mental health are fundamental for ensuring a healthy life and the wellbeing of the populations that live in the Semi-arid Region. For most of these targets to be achieved, access to potable water is fundamental: it is a basic good that makes it possible to promote various conditions of human health and wellbeing 25, and it is also an indicator of progress.

SDG-4. Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all: The relationship between education and child mortality and other causes of illness and death, including waterborne diseases, is well established, and it is important to highlight that progress has taken place in Brazil in recent decades 26. This goal involves targets to ensure that all children complete equitable, quality primary and secondary education that leads to relevant and effective learning results, and also to eliminate gender disparities and provide quality access at all levels of education and professional training, including for the most vulnerable. A high level of illiteracy still persists in the Brazilian Semi-arid Region (Table 1), which could be eliminated through an adequate policy of access to education for young people and adults. The guarantee that people will acquire the knowledge and abilities necessary to promote sustainable development and sustainable lifestyles, human rights, gender equality, the promotion of a culture of peace and non-violence, global citizenship and recognition of the value of cultural diversity could contribute to the improvement of the region's social and economic indicators. It is important that the promotion of learning for the populations of the Semi-arid Region should not be based only on the reception of technological knowledge, but should also be guided, in its development and in the potential of local production, by an exchange of knowledge with other communities.
SDG-5.Achieve gender equality and empower all women and girls: A great part of the burden of work, and management of the local economy in the Semi-arid Region is under the responsibility of women, who at various periods of history have been called 'widows of the drought' , when they stayed in their homes taking care of the life of the family while the men migrated in search of work and income, due to the effects of lack of supply of water for irrigation of farming.At present women are the heads of 93% of the families benefited by the Bolsa Família program 27 .The participation of women in taking of decisions, whether political or economic, in the public sphere or in the family, with rights of equality to leadership, at all levels, constitutesa target for action with great potential effect on the sustainability of the development of the Semi-arid Region.Undertaking reforms to recognize the equal rights of women to economic resources, access to property and control and management of land, and other forms of property and natural resources, could contribute to improvement of the conditions of life of women and empowerment of their management and participation in family structures. SDG-6. Ensure availability and sustainable management of water and sanitation for all: There is an extensive literature on the relationship between water, water treatment and services, and health 25,28,29 .Access to water with security and quality is still a major challenge for the Brazilian Semi-arid Region.Universalization of service of supply of water and water treatment for all is not a reality in the region (Table 1).Thus, measures identified in the SDG, such as promoting universal and equitable access to potable water that is secure and accessible; providing access to adequate and equitable water treatment and hygiene for all; reducing pollution; reducing the proportion of untreated waste water by half; increasing recycling and safe reuse of water; increasing the efficiency and sustainability of the use of water in all sectors; and ensuring supply of fresh water to deal with scarcity of water, are important and indispensable for improving the quality of human life and wellbeing of the populations that live in the region.The local patterns of rainfall, types of soil and social conditions should be taken into account in preparation of technologies for supply and storage of water for the population, since drought in the region is recurrent and prolonged.Achieving the targets proposed for this goal by 2030 would result in significant progress in improvement of the economic, social and health indicators in the region, because of the important relationship between access to water (whether for agriculture, industry or domestic use) and these dimensions of sustainable development.The participation of the local communities in discussions to improve policies, technologies and means of management relating to water and water treatment is also essential for meeting these targets.SDG-7.Ensure access to affordable, reliable, sustainable and modern energy for all: Lack of clean and safe energy is a risk for health 30 .The increased participation of renewable sources in global energy supply, by 2030, is a target that will continue to call for great effort by Brazil.The availability of some types of renewable energy for the Brazilian Semi-arid Region (where access is less than in the rest of the country, Table 1), such as, for example, solar and wind energy, would be a significant step in improvement of environmental management 
in a sustainable and decentralized way, ensuring electricity at prices that are accessible for the populations.This measure, preferably constructed with community participation, could have a beneficial effect on some factors that determine health, such as services for health, production, education, economic development and other goods and services. SDG-8. Promote sustained, inclusive and sustainable economic growth, full and productive employment and decent work for all: The Brazilian Semi-arid Region is a socially and economically vulnerable area.Thus, the targets identified in the SDG, such as improving the efficiency of global resources in consumption and in production, with efforts to dissociate economic growth from environmental degradation by 2030, and to reduce the proportion of young people without employment, education or qualification by 2020, are essential for strengthening of the population's capacity for adaptation and resilience, and that of the economy of the region.Achieving this objective is a difficult task and calls for a considerable effort from governments and society.Some initiatives of the government for strengthening of local economies have been undertaken in the region.Highlights are public policies related to land ownership, development, reduction of social and economic inequalities in the region (pointed out by Celso Furtado in the 1950s), stimulus and dissemination of productive systems to strengthen family agriculture and ecology (which has been stimulated by the 'Living with Drought' program, anchored on the group of organizations that have been promoting and organizing the region since the 1990s), and other forms of generation of sustainable work and income 24,31 . SDG-9.Build resilient infrastructure, promote inclusive and sustainable industrialization and foster innovation: The production chains of goods and services in the Brazilian Semi-arid Region are very sensitive to situations of prolonged drought, which affect the installed infrastructure, and also the economically active population.In spite of the adaptation to seasonal droughts, the persistence of long, multi-year periods of drought can rupture important links in this chain, reducing production, consumption and investment capacity.The targets proposed for developing a local and regional infrastructure of quality, that is reliable, sustainable and resilient, with focus on equitable access and accessible prices for all are, thus, fundamental for the economic development and human wellbeing of the populations that live in the region.This dimension of sustainability and development in the region should take into account some essential factors such as: combating degradation of the soil; reform in the management of water resources; guarantee of the production of family subsistence through sustainable agriculture; production of clean energy; and other investments, in access to scientific information technologies and the Internet. SDG-10. 
Reduce inequality within and among countries: In the Brazilian Semi-arid Region, there are significant inequalities between the indicators of municipalities and those of the rest of Brazil (Table 1), and also between them.Thus, the target proposed in this SDG, of reducing inequalities by 2030, will call for strategies, from the government, of empowering and promoting social economic and political inclusion of all.The measures proposed for this objective, principally the target to achieve and maintain faster growth in the income of the 40% poorest of the population than the nationwide growth rate, would make it possible to reduce the social effects and inequalities in the region that arise from the conditions of drought, also reducing families' vulnerabilities.It is emphasized that the government of Brazil has important policies for reducing social inequalities such as, for example, programs of transfer of income, which have positive effects on the Semi-arid Region, but it is necessary to make progress in programs for reduction of regional and local inequalities through a model of inclusive and sustainable development. SDG-11.Make cities and human settlements inclusive, safe, resilient and sustainable: The environment where people live has measurable effects on health 32 .To ensure that by 2030 the targets of this objective are reached requires a large coordinated effort of the governments at all their levels, principally in the municipalities of the Brazilian Semi-arid Region.Targets such as planning, with participative management, of human settlements, that are integrated, inclusive, safe and sustainable, with access to safe housing and adequate and safe basic services, principally water in quantity, and quality, is a foundation for achieving other targets, such as, for example, reduction in the number of deaths and people affected by disasters (in this case, situations of drought), and economic losses caused by drought, in relation to GDP.These targets are fundamental for protecting the poor and vulnerable populations that live with drought.Implementation of integrated policies and plans scheduled to be complied with by 2020, such as inclusion, efficiency in the use of resources, measures of mitigation and adaptation to climate changes, resilience of the populations and the government to disasters, and sustainable integration between the countryside and the city, can strengthen the Semi-arid Region and improve its socioeconomic profile. SDG-12.Ensure sustainable consumption and production patterns: To achieve sustainable patterns of production and consumption in the Brazilian Semi-arid Region, it is necessary that there should be appropriate management and use of the natural resources, especially the water resources, based on other values that express a solidarity economy, such as, for example, alternatives based on agroecology, co-existence with the Semi-arid Region, management of the Caatinga biome, maintenance of herds and adapted cultures, and the associative and cooperative projects existing in the region.For this target to be reached by 2030 it is possible that a technological development would be required that has participative forms of management, including sustainable techniques of irrigation, storage and distribution of water, to make it possible to guarantee that these resources are appropriated by all, not only by minorities that are politically and economically dominant. 
SDG-13.Take urgent action to combat climate change and its impacts: According to a study by the World Health Organization, it is estimated that between 2030 and 2050 there will be approximately 250,000 additional deaths per year as a consequence of climate change 33 .According to estimates by the Brazilian Climate Change Panel, the forecasts for the Semi-arid Region up to 2021 will be an increase of temperature of between 3.5°C and 4.5°C, and a reduction of between 40% and 50% in average annual rainfall 11 .In the Brazilian Semi-arid Region, the vulnerability of the Caatinga biome to the effects of climate change represents a strong factor of pressure for desertification in the region.To avoid greater impacts of this possible situation it is important to increase the capacity of resilience and adaptation of institutions and populations through national and, principally, local strategies and plans.Thus, as well as the integration of these measures, it is important to strengthen people's capacity for new economic, environmental and social conditions, and implement programs of sustainable development, with the aim of reducing the vulnerabilities that already exist in the region and avoiding possible greater impacts. SDG-14.Conserve and sustainably use the oceans, seas and marine resources for sustainable development: Taking this SDG as a basis for policies related to collections of water (rivers, lakes and reservoirs), it is important to observe that these are subject to a high variability in their quantity and quality.The low coverage of systems of collection and treatment of sewerage, combined with the predatory use of farming land and employment of weed killers has contributed to salinization, silting and eutrophication of land-ba-sed waters.Contamination of these waters puts at risk the populations that use these resources for supply for human consumption and irrigation of farming.Part of the total of sediment, organic matter and contaminants produced on the continent can reach the ocean in periods of rain.Thus, measures for conservation and sustainable use of the collections of water and of the soil, and measures for water treatment in the Semi-arid Region are extremely important to minimize the environmental and social impacts, and this will require a significant effort by governments, especially local governments. SDG-15.Protect, restore and promote sustainable use of terrestrial ecosystems, sustainability manage forests, combat desertification, and halt and reverse land degradation and halt biodiversity loss: According to the Brazilian Climate Change Panel (Painel Brasileiro de Mudanças Climáticas, or PBMC), the Brazilian Semi-arid Region has tendencies to desertification and loss of native forests, which would result in an increase in the scarcity of water and loss of biodiversity 11 .The measures for combat of desertification and restoration of the soil and of water should be inserted into programs of sustainable socioeconomic development of the areas affected by drought.However, it is difficult to guarantee that targets aligned with this objective will be reached by 2020, since this will demand a series of integrated strategies of sustainable management of the territory, including integrated participation of the populations and their cultural values in all the stages of the processes of development to promote sustainability in this region. 
SDG-16. Promote peaceful and inclusive societies for sustainable development, provide access to justice for all and build effective, accountable and inclusive institutions at all levels: Conditions resulting from situations of drought, particularly in prolonged periods, can contribute to an increase in physical and social violence, which is potentialized by processes of migration, urbanization, economic and human losses, and the driving of the more vulnerable portions of the population living in the region out of their territory. Some (non-sustainable) measures used in the attempt to promote economic development and reduce the impacts of drought, such as agribusiness, are altering the communities' way of life, contributing to an increase in violence, the introduction of drugs to schools, prostitution and migration, and also the expulsion of farmers from certain regions 34. Measures for the empowerment of local populations and the strengthening of institutions of justice should be included in regional and local plans, with a view to prevention of the factors, associated with drought and with economically excluding development projects, that make populations vulnerable.

SDG-17. Strengthen the means of implementation and revitalize the global partnership for sustainable development: The group of environmental, economic and social problems existing in the Brazilian Semi-Arid Region, added to the low concentration of investments in health and education, produces various impacts that have a feedback effect on the poverty and vulnerabilities of this region, such as diseases, unemployment, illiteracy and migration. Thus, the following factors, in conditions that are favorable for this region, become fundamental targets for development with equity and improvement of the quality of life of these populations: sustainable promotion of environmental, economic and social development; reduction of local and regional social inequalities; education; development of knowledge; and dissemination of environmentally sustainable technologies, especially those related to the infrastructure of storage, management and distribution of water. Civil society of the region, through ASA (Articulação do Semiárido), has brought together various entities to discuss proposals for an appropriate policy of sustainable development for the region, taking into consideration, as well as its differences, the economic and human, environmental and cultural, and scientific and technological dimensions. This partnership includes rural workers' unions, environmental entities, NGOs, Christian churches, international cooperation agencies, associations and cooperatives, women's movements, universities, researchers and the community of the Semi-arid Region itself. The support of the United Nations in the Conferences of the Parties (COP), through the United Nations Convention to Combat Desertification (UNCCD), has also been important for the global partnership and discussion of sustainable measures for the regions of drought 35. Thus, the strengthening of the implementation of this SDG calls for expansion and strengthening of the participation of civil society in this process, involving both its needs and its propositions.
Conclusion

The implications of environmental and climate change for public health are multiple and often are not recognized, making it more difficult to identify and act upon the various determinant factors of health. In municipalities that are vulnerable to situations of drought, this invisibility, together with the weak social and environmental conditions normally observed in the region, makes it even more difficult to take action to reduce risks and promote health. These challenges, added to the already existing environmental conditions and their impacts on the populations' conditions of life, especially in relation to access to water in quantity and quality, demand a greater integration of the health sector with other sectors in the planning of actions.

To establish better management of drought situations and its relationship with the achievement of the targets proposed by the SDGs, it becomes necessary to build alliances that can work with information that takes into account the territorial bases where the social production of the health-illness process manifests itself. The purpose is to support the planning, prioritization and assessment of actions. Traditionally, in situations of drought, concerns are directed more to the determinant environmental and economic factors, specifically in terms of agriculture, such as use of the land, absence of water for irrigation, and economic losses, with emphasis limited to certain social determinants that have long-term impacts on health, such as precarious access to quality education, scarcity of foods and profound social and economic inequalities. It is important to remember that the vulnerabilities in the Semi-arid Region express the interaction and the cumulative character of the risk situations in relation to environmental degradation and climatic conditions, combined with precarious conditions of life and social and economic inequalities.

Planning of action, principally in health, needs to be sustained on the articulation and integration of public policies oriented to the pillars of sustainable development: environmental, social and economic. An important strategy for analyzing the health situation and showing the inequalities is the construction of indicators of the proximal social, economic and environmental determinants, using the SDGs as a basis, as shown in Figure 1. These indicators would make it possible to show up situations that today are invisible, supporting the establishment of measures that can achieve universal and equitable access to the promotion of health and wellbeing, and the reduction of social inequalities.

It is also important to consider the values and cultures of the territory to be worked on, amply and transparently incorporating the participation of society. This strategy is essential for a better engagement of the community in the planning of actions and in the decision processes for the reduction of risks, and it would also help in social control and the qualification of management in health 36.
It is concluded that, although the data show great advances from 1991 to 2010, both in the municipalities of the Semi-arid Region and in the other municipalities of Brazil, efforts, investments and prioritization of actions are still necessary that can result in the reduction of social and health inequalities. For a better understanding of the implications of the SDGs and their proposed targets, and to make it possible to act on the situation of each municipality of the Semi-arid Region, to strengthen the actions for control, co-existence and adaptation at all levels, and to reduce social inequalities, it is important to be aware of the particular vulnerabilities of each one. Thus, a more detailed analysis of the determinant factors that act on health and which have a relationship with the SDGs would be a support for the prioritization and implementation of actions, and for the formulation of public policies for better sustainable development in this area. These determining factors include: poverty; hunger; low levels of education; lack of access to employment and social inclusion; precarious dwellings; fast and disorganized population growth; and, principally, lack of access to water in appropriate quantity and quality.

Figure 1. Relationships between the 17 Sustainable Development Goals.

Figure 2. Progress of the municipalities of the Brazilian Semi-arid Region, and of the other municipalities of Brazil, according to selected indicators in four dimensions of analysis. Source: IBGE, based on data available in the UNDP (United Nations Development Program) 19. Chart design based on the Brazilian Atlas of Sustainable Development and Health 16,17.

Table 1. Social, economic, environmental and health indicators for municipalities of Brazil's Semi-arid Region (1,135) and municipalities of the rest of Brazil (4,430), and difference between medians (M), quartile 1 (Q1) and quartile 3 (Q3), in the years 1991, 2000 and 2010. Indicators: TMI: child mortality rate per thousand live births; TMIn: infant mortality rate per thousand live births; life expectancy at birth; proportion of the population in poverty conditions (%); proportion of the population that is illiterate (%); proportion of the population without access to piped water (%); proportion of the population living in households with electricity (%); IDHM: Municipal Human Development Index. Source: IBGE, based on data available in the UNDP (United Nations Development Program) 19.

Collaborations

A Sena worked on the conception and outlining of the first version of this article. A Sena, CM Freitas, C Barcellos, W Ramalho and C Corvalan contributed equally in the preparation and revision, and approved the final version of the article.

Acknowledgments

The authors acknowledge the CNPq support for the research 'Mudanças climáticas e saúde humana: vulnerabilidade socioambiental e resposta a desastres climáticos no Semiárido Brasileiro'.
Emergency Primary Ureteroscopy for Acute Ureteric Colic—From Guidelines to Practice

Objective: To review the factors that may influence the ability to achieve the present guidelines' recommendations in a well-resourced tertiary centre. According to current National Institute for Health and Care Excellence (NICE) guidelines, definitive treatment (primary ureteroscopy (URS) or shock wave lithotripsy (ESWL)) should be offered within 48 h of diagnosis to patients with symptomatic renal colic who are unlikely to pass the stone. Methods: Retrospective review of all patients presenting to the emergency department between January and December 2019 with a ureteric or renal stone diagnosis. The rate of emergency intervention, risk factors for intervention and outcomes were compared between patients treated by primary definitive surgery and those treated for primary symptom relief by ureteric stenting alone. Results: A total of 244 patients required surgical management for symptomatic ureteric colic without symptoms of urinary infection. Of those, 92 patients (37.7%) underwent definitive treatment by either primary URS (82 patients) or ESWL (9 patients). The mean time to the procedure was 25.5 h (range: 1–118). Patients who underwent primary definitive treatment were more likely to have smaller and more distally located stones than the primary stenting group. Primary ureteroscopy was also more likely than emergency stenting to be performed in a supervised setting. Conclusions: Although definitive treatment carries high success rates, in a high-volume tertiary referral centre it may not be feasible to offer it to all patients, and emergency stenting provides a safe and quick interim measure. Factors determining the ability to provide definitive treatment are stone location, stone size and resident supervision in theatre.

Introduction

Urolithiasis is one of the most common urological conditions. In the past several decades, the prevalence of urolithiasis has increased dramatically, reaching a lifetime frequency of 14% [1,2]. The rising prevalence is often associated with increased hospital visits, investigations and interventions, all of which pose a significant financial burden and make cost-efficient management of these patients imperative [3]. Traditionally, primary management of ureteric stones included temporizing measures such as analgesia, and expectant or delayed management. When an intervention is indicated, for example for severe kidney injury, an infected stone or ongoing pain, primary treatment is often given in the form of ureteric stent or nephrostomy insertion to relieve the patient's symptoms before definitive treatment [4-6]. However, these measures can result in frequent hospital visits with further symptoms, additional procedures, delays in treatment and possible complications, thus worsening the burden of urolithiasis on both the patient and the healthcare system. Subsequently, in the last few years, accumulating clinical and financial evidence has supported the definitive management of ureteric stones, with primary ureteroscopy (URS) or extracorporeal shock wave lithotripsy (ESWL) as reasonable first-line treatment options to avoid long-term stenting, whenever appropriate [6-10]. This practice has recently been supported by National Institute for Health and Care Excellence (NICE) guidance.
According to the NICE guidelines, a definitive primary treatment (by URS or ESWL) should be offered within 48 h of diagnosis to patients with ureteric stones that are unlikely to pass and with intractable pain [5]. However, despite promising results, the number of studies that have examined the role of primary URS in the UK or other public healthcare-dominated systems is limited. The present study aims to assess whether the NICE strategy is implemented in daily clinical practice and to examine the factors that may influence achieving these guidelines in a well-resourced tertiary teaching hospital.

Materials and Methods

This is a retrospective analysis of all patients requiring emergency intervention for a computed tomography (CT)-confirmed ureteric calculus at our institution. All patients underwent either ureteric stenting, primary URS or ESWL between January and December 2019. Indications for intervention included persistent pain despite adequate analgesic medication, persistent obstruction and renal insufficiency. Patient demographics and operative details were collected retrospectively by reviewing electronic records, including medical notes, operation notes and discharge summaries. The following data were recorded: age, gender, stone size, stone location, time of admission, type of intervention, time between diagnosis and intervention (time to intervention, TTI), length of hospital stay, presence of a consultant during the procedure, and number of re-admissions following the initial procedure. Time of admission was defined as either "in hours" (between 08:00 and 17:00) or "out of hours" (between 17:00 and 08:00). Successful treatment was defined as a complete primary treatment (ureteroscopy or ESWL) without the need for additional intervention other than stent removal. All patients underwent a CT scan 6 months after the definitive procedure to confirm their stone-free status. The number of re-visits was defined as re-attendances at the accident and emergency department (A&E) during the three months following the initial visit.

Stent Insertion

Retrograde ureteric double-J stent insertion was performed under general anaesthesia. All stents were inserted using a cystoscope with a 30° lens under fluoroscopic guidance. The cystoscope was introduced to the desired ureteral orifice, and a Termo glide guidewire (0.035 inches) was inserted into the ureteral orifice and passed beyond the stone, up to the kidney. The stent was then placed over the guidewire and advanced into the kidney using a pusher. Stent size was selected on an individual patient basis. Ureteric stents were either removed during definitive surgery or, if the patient's stone had passed, removed in clinic by flexible cystoscopy under local anaesthetic.

Ureteroscopy

All emergency primary URS was undertaken with the intent to provide definitive treatment. URS was performed under general anaesthesia with an 8F semirigid ureteroscope (Karl Storz Endoskope, Tuttlingen, Germany) with the aid of fluoroscopy. Stones were either fragmented with a holmium:YAG laser (Lumenis Ltd., Elstree, UK) or removed with an endoscopic basket; when required, stone fragments were also retrieved with the basket. The decision to place a ureteric stent following the procedure was left to the operating surgeon's discretion.

ESWL

ESWL was performed by the same dedicated radiographer in all cases using an on-site lithotripter (Storz Medical Modulith SLX-F2).
The number of delivered shocks varied depending on stone size and density, up to 3000 pulses. The maximum shockwave energy and rate delivered were 7 J and 4 Hz, respectively.

Statistical Analysis

Data are presented as mean ± standard error and range, or as number (percent), unless otherwise specified. Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS, Version 22.0, Chicago, IL, USA). The Student's t-test and the Mann-Whitney U test were used for the analysis of continuous variables, and the Chi-square test was used for the analysis of categorical variables. A p value of <0.05 was considered statistically significant.

Results

A total of 287 consecutive patients underwent emergency intervention for ureteric calculi. Of those, 244 were included in this study and 13 were excluded. Primary definitive treatment was performed in 92 patients (37.7%), including URS in 83 (90.2%) and ESWL in 9 (Figure 1). Overall, 92 (37.7%) patients underwent primary treatment, whereas the remaining 152 had surgical stenting without final stone extraction.

The baseline parameters of the groups are seen in Table 1. Both groups were comparable in regard to age and gender. However, patients who underwent primary definitive treatment were more likely to have smaller and more distally located stones than the primary stenting group (Figure 2). Moreover, although the time of admission (in or out of office hours) did not seem to affect the type of treatment chosen, a significant association was found between consultant presence in theatre and the type of procedure performed. A consultant urologist was present during the operation in 22.4% of ureteric stenting cases compared to 68.7% of definitive emergency procedures; the rest of these procedures were performed by a urology registrar alone. Nevertheless, no differences were observed in the TTI, as in both groups surgical intervention was delivered within 48 h (30.7 and 25.3 h for primary stenting and definitive treatment, respectively). Of note, the hour of the procedure was not associated with the presence of a consultant: a consultant was present in most cases performed out of hours (62.6% of all procedures and 57.5% of URS).

Given the potential inherent bias between patients undergoing ESWL or URS, we excluded the patients who underwent primary ESWL and compared the primary stenting group to primary URS alone (Table 1). This second analysis revealed similar results, including smaller and more distally located calculi in the primary URS group compared to the primary stenting one. Notably, we found that all ESWL procedures were performed without the presence of a consultant urologist.

Surgical Outcome

In terms of surgical outcomes, the overall success rate was 83.7% and 97.4% for primary stone treatment and primary stenting, respectively. Management of treatment failures is specified in Table 1. All patients who underwent primary ESWL achieved complete stone clearance. Successful primary URS was performed in 68 patients (82%), and a ureteric stent was inserted at the end of all 68 primary URS. Reasons for treatment failure are shown in Table 1; of the failed procedures, almost 50% were secondary to difficulty in access and a narrow ureteric lumen. Further analysis did not reveal differences between the failed and successful procedures in regard to patient age, stone size, time of admission or presence of a consultant during surgery. Nevertheless, patients who underwent successful URS were more likely to have distal stones than those with failed procedures (66.2% vs. 40%, p = 0.051) (Table 2). Further outcome analysis was performed comparing the successful treatment groups (Table 1). The length of stay following surgery was comparable in all groups. However, patients who underwent stenting alone had a higher rate of A&E re-visits, all of which were due to stent-related symptoms (pain or urinary symptoms).

Discussion

Emergency URS or ESWL provide feasible options for definitive treatment in symptomatic renal colic, with stone-free rates comparable to elective treatment. However, emergency stenting also provides a safe and quick interim measure. The value of temporary procedures is mainly to shift urgent stone conditions to non-urgent pathways, allowing for prolonged drainage until the elective procedure. The advantage of this is seemingly straightforward: the patient arrives at a pre-set date and is potentially operated on by an experienced endourology team. Nevertheless, pain relief by stent alone will also add to the burden on elective and outpatient waiting lists, which can cause severe delays and potentially more associated complications such as stent pain, forgotten stents and infections [11]. This has been compounded during the COVID-19 pandemic by the loss of elective operating, leading to more surgical delays [12,13].
Indeed, taking all of the above into consideration, the NICE guidelines have recently recommended applying primary definitive treatment whenever feasible. Several studies, including prospective comparative trials, have strongly supported this recommendation. In the current analysis, however, we sought to show whether this approach is adopted in the actual clinical setting of a tertiary teaching centre, rather than within the limits of a controlled study in which the various everyday factors are coordinated and controlled. Our analysis showed that the rate of primary definitive treatment is still only 37.7%; the majority of potentially suitable patients were still stented first. Further analysis revealed that the "choice to treat" was influenced by stone- and setting-related criteria.

Regarding the stone variables, it has previously been described that stone size and location both predict the stone-free rates following URS, as well as the complexity of the procedure [14-16]. Consistent with this, we have shown that patients with larger and more proximal stones were more likely to be stented rather than to receive definitive treatment. Further analysis of the "failed URS" group revealed that stone size was not associated with failure; hence, size alone should not necessarily be a reason to avoid primary URS. Distally located calculi, on the other hand, were more likely to result in a successful procedure.

Another interesting finding was the effect that senior staff availability might have on practice. In the current analysis, primary URS was more likely to be performed in a supervised setting, yet the time of admission (in or out of office hours) was not predictive of the treatment chosen. Moreover, the presence of an experienced surgeon did not affect the success rate of URS. These findings could be explained by earlier reports suggesting that less experienced surgeons have more complications and lower success rates [17-19]. Taking that into consideration, it seems that the presence of a fully trained, experienced surgeon tilted the scale toward a complete procedure rather than a stent alone. Of note, the fact that the hour of the procedure did not affect the results suggests that the choice is driven more by the reluctance of the trainee to complete the procedure alone.

We acknowledge the apparent bias in the current analysis, arising from the retrospective nature of the study and the relatively small number of failed procedures. We also realize that it is impossible to conclude, without all the relevant data, what may have led the surgeon to choose one approach over the other. However, the current study is a practical view of the implementation of the guidelines and the associated evidence: despite strong support from previous trials, primary URS is still attempted in less than 40% of patients, even in a well-experienced tertiary centre.

Conclusions

Despite the potential benefits and the relatively high success rate reported, primary definitive treatment is yet to become everyday practice. According to the current study, factors determining the ability to provide definitive treatment are stone location, stone size and resident supervision in theatre. Primary treatment, and more specifically primary URS, should be encouraged.

Institutional Review Board Statement: The study did not require ethical approval. It was conducted in accordance with clinical audit and service evaluation requirements.