| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
55640277 | pes2o/s2orc | v3-fos-license | Convexity Invariance of Fuzzy Sets under the Extension Principles
We discuss the convexity invariance of fuzzy sets under the extension principles. Particularly, we give a necessary and sufficient condition for a mapping to be an inverse ∗-convex transformation, and also obtain some sufficient conditions for a mapping to be an ∗-convex transformation. Two applications are given to illustrate the obtained results. Finally, we give some applications of the main results to the hyperstructure convexity invariance of type 2 fuzzy sets under hyperalgebra operations, and to the convexity invariance of fuzzy numbers under basic arithmetic operations.
Introduction
As a suitable mathematical model to handle vagueness and uncertainty, fuzzy set theory has emerged as a powerful theory and has attracted the attention of many researchers and practitioners who have contributed to its development and applications [1-12]. Convexity plays a central role in the theory and applications of fuzzy sets, and the study of convexity and generalized convexity is one of the most important aspects of fuzzy set theory [9, 10, 13-16]. Moreover, the extension principle for fuzzy sets is in essence a basic identity which allows the domain of definition of a mapping or a relation to be extended from points in a set U to fuzzy subsets of U. This is, in fact, the underlying basis for many operations on the most basic concepts of fuzzy set theory, such as arithmetic operations of fuzzy numbers, hyperalgebra operations of type-2 fuzzy sets, and synthetic operations of fuzzy relations.
Hence it is important and interesting to study the convexity invariance of concepts of fuzzy set theory under the extension principles, which has been explored by many researchers.
Preliminaries
In this section, some basic definitions of t-norms, extension principles, convex fuzzy sets, and type-2 fuzzy sets are reviewed. Throughout this paper, the letters N and R denote the set of all positive integers and the set of all real numbers, respectively. A fuzzy set A in a universe of discourse X is characterized by a membership function A(x) which associates with each point in X a real number in the interval [0, 1], with the value of A(x) at x representing the "grade of membership" of x in A [10]. The symbol F(X) denotes the family of all fuzzy subsets of a set X.

Definition 2.1 (see [1, 12]). For a fuzzy set A in a universe X and each λ ∈ [0, 1), the strong λ-level set of A, denoted by A(λ), is defined as A(λ) = {x ∈ X : A(x) > λ}. In particular, the set A(0) is called the support set of the fuzzy set A.
According to Zadeh's definition, a type-2 fuzzy set is a fuzzy set with a fuzzy membership function.
Definition 2.2 (see [21]). A fuzzy set of type-2, A, in a universe of discourse X is characterized by a fuzzy membership function μ_A, where the value μ_A(x) is a fuzzy grade, that is, a fuzzy set in the unit interval [0, 1]. A fuzzy grade μ_A(x) is represented by a membership function f for μ_A(x), defined on [0, 1].
Definition 2.3 (see [2]). A t-norm is a binary operation on the unit interval [0, 1], that is, a function ∗ : [0, 1]² → [0, 1], such that for all a, b, c, d ∈ [0, 1] the following four axioms are satisfied:
(T-1) a ∗ 1 = a (boundary condition);
(T-2) a ∗ b ≤ c ∗ d whenever a ≤ c and b ≤ d (monotonicity);
(T-3) a ∗ b = b ∗ a (commutativity);
(T-4) (a ∗ b) ∗ c = a ∗ (b ∗ c) (associativity).
A t-norm ∗ is said to be continuous if it is a continuous function on [0, 1]².
Example 2.4 (see [2]). The four basic t-norms ∗_M, ∗_P, ∗_L, and ∗_D are given by, respectively,

x ∗_M y = min{x, y} (minimum),
x ∗_P y = xy (product),
x ∗_L y = max{x + y − 1, 0} (Łukasiewicz t-norm),
x ∗_D y = min{x, y} if max{x, y} = 1, and 0 otherwise (drastic product).

The t-norms ∗_M, ∗_P, and ∗_L are continuous, but ∗_D is not.
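As a quick illustration (ours, not from the original paper), the four basic t-norms and the axioms (T-1)-(T-4) can be spot-checked numerically; the function names t_M, t_P, t_L, and t_D below are our own labels.

```python
def t_M(x, y):          # minimum
    return min(x, y)

def t_P(x, y):          # product
    return x * y

def t_L(x, y):          # Lukasiewicz
    return max(x + y - 1.0, 0.0)

def t_D(x, y):          # drastic product (discontinuous)
    return min(x, y) if max(x, y) == 1.0 else 0.0

grid = [i / 10 for i in range(11)]
for t in (t_M, t_P, t_L, t_D):
    for a in grid:
        assert abs(t(a, 1.0) - a) < 1e-12                          # T-1
        for b in grid:
            assert abs(t(a, b) - t(b, a)) < 1e-12                  # T-3
            for c in grid:
                assert abs(t(t(a, b), c) - t(a, t(b, c))) < 1e-12  # T-4
                if a <= c:
                    assert t(a, b) <= t(c, b) + 1e-12              # T-2
print("all four t-norms satisfy (T-1)-(T-4) on the grid")
```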
A t-norm ∗ can be extended by associativity in a unique way to an n-ary operation, taking for (x_1, ..., x_n) ∈ [0, 1]^n (n ≥ 3) the value x_1 ∗ ⋯ ∗ x_n defined recurrently by x_1 ∗ ⋯ ∗ x_n = (x_1 ∗ ⋯ ∗ x_{n−1}) ∗ x_n.

Recall Zadeh's extension principle: a mapping f from a universe X to a universe Y induces a mapping f from F(X) to F(Y) and a mapping f⁻¹ from F(Y) to F(X), defined by f(A)(v) = sup{A(u) : u ∈ f⁻¹(v)} (with the supremum over the empty set taken to be 0) and f⁻¹(B)(u) = B(f(u)), for all v ∈ Y, u ∈ X, A ∈ F(X), and B ∈ F(Y), where f⁻¹(v) is the inverse image set of v.
Definition 2.8 (see [1, 21, 23]; Zadeh's multivariable extension principle). Let X be a Cartesian product of universes, X = X_1 × ⋯ × X_r, and let f be a mapping from X to a universe Y such that y = f(x_1, ..., x_r). Then a mapping f from F(X_1) × ⋯ × F(X_r) to F(Y) can be induced by f as follows:

f(A_1, ..., A_r)(v) = sup{min{A_1(x_1), ..., A_r(x_r)} : (x_1, ..., x_r) ∈ f⁻¹(v)},

for all v ∈ Y and every r-tuple of fuzzy sets (A_1, ..., A_r), where A_1, ..., A_r are fuzzy sets in X_1, ..., X_r, respectively.
The multivariable extension principle as stated in Definition 2.8 can be, and has been, generalized by using sup-t-norm convolution rather than sup-min convolution [17].
Definition 2.9 (see [1, 17]; generalized multivariable extension principle). Let ∗ be a t-norm, let X be a Cartesian product of universes, X = X_1 × ⋯ × X_r, and let f be a mapping from X to a universe Y such that y = f(x_1, ..., x_r). Then a mapping f from F(X_1) × ⋯ × F(X_r) to F(Y) can be induced by f as follows:

f(A_1, ..., A_r)(v) = sup{A_1(x_1) ∗ ⋯ ∗ A_r(x_r) : (x_1, ..., x_r) ∈ f⁻¹(v)},

for all v ∈ Y and every r-tuple of fuzzy sets (A_1, ..., A_r), where A_1, ..., A_r are fuzzy sets in X_1, ..., X_r, respectively.
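The generalized multivariable extension principle is straightforward to evaluate on finite universes. The following sketch (ours, not from the paper) computes f(A_1, A_2)(v) = sup{A_1(x_1) ∗ A_2(x_2) : f(x_1, x_2) = v} for the product t-norm and f(x_1, x_2) = x_1 + x_2; the helper name `extend` is our own.

```python
def extend(f, t_norm, A1, A2):
    """A1, A2: dicts point -> membership grade; returns the induced fuzzy set."""
    out = {}
    for x1, m1 in A1.items():
        for x2, m2 in A2.items():
            v = f(x1, x2)
            out[v] = max(out.get(v, 0.0), t_norm(m1, m2))  # sup over f^{-1}(v)
    return out

A1 = {0: 0.2, 1: 1.0, 2: 0.5}
A2 = {0: 0.7, 1: 1.0}
print(extend(lambda x, y: x + y, lambda a, b: a * b, A1, A2))
# {0: 0.14, 1: 0.7, 2: 1.0, 3: 0.5} (up to float rounding)
```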
In set theory, a total order is a binary relation (here denoted by the infix symbol ≤) on some set X; the relation is transitive, antisymmetric, and total. A set paired with a total order is called a totally ordered set. If X is a totally ordered set, the order topology on X is generated by the subbase of "open rays" {x : x > a} and {x : x < b} for all a, b in X [24].

Remark 2.10. It should be noted that a fuzzy set A in a totally ordered set Y is an ∗-convex fuzzy set if and only if for all x, y, z ∈ Y with x ≤ y ≤ z, A(y) ≥ A(x) ∗ A(z).

Definition 2.11. Let E be a real linear space, let ∗ be a t-norm, and let f be a mapping from E to (Y, ≥), a totally ordered set equipped with the order topology. The mapping f is said to be an ∗-convex transformation from E to Y if the induced mapping f by Zadeh's extension principle transforms every ∗-convex fuzzy set of E into an ∗-convex fuzzy set of Y; the mapping f is said to be an inverse ∗-convex transformation from E to Y if the induced mapping f⁻¹ by Zadeh's extension principle transforms every ∗-convex fuzzy set of Y into an ∗-convex fuzzy set of E.

Definition 2.12. Let E be a Cartesian product of real linear topological spaces, E = E_1 × ⋯ × E_r, let ∗ be a t-norm, and let f be a mapping from E to (Y, ≥), a totally ordered set equipped with the order topology. The mapping f is said to be a multivariable ∗-convex transformation from E to Y if the induced mapping f by the generalized multivariable extension principle transforms every r-tuple of ∗-convex fuzzy sets (A_1, ..., A_r) into an ∗-convex fuzzy set of Y, where A_1, ..., A_r are ∗-convex fuzzy sets in E_1, ..., E_r, respectively.
Main Results
Let E be a real linear topological space. For any two points x, y ∈ E, the line segment xy joining x and y is the set of all points of the form tx + (1 − t)y, t ∈ [0, 1].
Theorem 3.1. Let E be a real linear topological space, let ∗ be a continuous t-norm, and let f be a mapping from E to (Y, ≥), a totally ordered set equipped with the order topology. If the restriction of f to every line segment xy, f|_{xy}, is continuous, then f is an ∗-convex transformation.
Proof. Let A be an arbitrary ∗-convex fuzzy set of E. For any given three points x, y, z ∈ Y with x ≤ y ≤ z, if either f⁻¹(x) or f⁻¹(z) is an empty set, then it is obvious that f(A)(y) ≥ f(A)(x) ∗ f(A)(z). Thus, without loss of generality, suppose that f⁻¹(x) and f⁻¹(z) are nonempty sets. For any two points u ∈ f⁻¹(x) and v ∈ f⁻¹(z), the restriction f|_{uv} is continuous. In addition, we have that f(u) = x and f(v) = z. From the generalized intermediate value theorem [24], it follows that there is a point w ∈ uv with f(w) = y, which implies that f⁻¹(y) is nonempty. Then, by the ∗-convexity of A, we have

f(A)(y) ≥ A(w) ≥ A(u) ∗ A(v).

The continuity and the monotonicity of ∗ and the arbitrariness of u in f⁻¹(x) then imply that

f(A)(y) ≥ f(A)(x) ∗ A(v). (3.3)

Similarly, from the continuity and the monotonicity of ∗ and the arbitrariness of v in f⁻¹(z), it follows that f(A)(y) ≥ f(A)(x) ∗ f(A)(z), which implies that f(A) is an ∗-convex fuzzy set of Y. This completes the whole proof.
Corollary 3.2. Let ∗ be a continuous t-norm. If f is a continuous mapping from R to (Y, ≥), a totally ordered set equipped with the order topology, then f is an ∗-convex transformation.
Proof. It is obvious that f satisfies the conditions of Theorem 3.1. The desired result then follows directly from Theorem 3.1.
In Corollary 3.2, the continuity of f is a sufficient but not a necessary condition for a mapping f to be an ∗-convex transformation. To give such a counterexample, we need some lemmas.
Lemma 3.3. A fuzzy set A in a real linear space E is an ∗_M-convex fuzzy set if and only if all of its strong λ-level sets, A(λ), are convex sets.
Proof. Suppose that A is an ∗_M-convex fuzzy set. For every λ ∈ [0, 1), any x, y ∈ A(λ) and α ∈ [0, 1], the inequalities

A(αx + (1 − α)y) ≥ min{A(x), A(y)} > λ

imply that the point αx + (1 − α)y belongs to A(λ). Thus, A(λ) is convex. Conversely, suppose that all of the strong λ-level sets, A(λ), are convex sets. For any x, y ∈ E and α ∈ [0, 1], if either A(x) or A(y) is 0, then it is obvious that A(αx + (1 − α)y) ≥ min{A(x), A(y)} = 0. Thus, without loss of generality, suppose that A(x) and A(y) are not 0. Taking λ = min{A(x), A(y)} − ε, where ε ∈ (0, min{A(x), A(y)}), we have that x and y belong to the convex set A(λ), which implies that for every α ∈ [0, 1], A(αx + (1 − α)y) > min{A(x), A(y)} − ε. By the arbitrariness of ε, we obtain that for every α ∈ [0, 1],

A(αx + (1 − α)y) ≥ min{A(x), A(y)}. (3.8)
Thus, A is an ∗_M-convex fuzzy set. We complete the whole proof here.
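Lemma 3.3 suggests a simple numerical test: on a discrete grid in R, an ∗_M-convex fuzzy set must have every strong λ-level set equal to an interval. The following sketch (ours) checks this for a Gaussian-shaped fuzzy set and for a two-bump counterexample; all names are our own.

```python
import numpy as np

xs = np.linspace(-3, 3, 601)
A = np.exp(-xs**2)            # a quasiconcave (hence *_M-convex) fuzzy set

def levels_are_intervals(A, lambdas):
    """True if every strong level set {x : A(x) > lambda} is contiguous."""
    for lam in lambdas:
        idx = np.flatnonzero(A > lam)
        if idx.size and not np.array_equal(idx, np.arange(idx[0], idx[-1] + 1)):
            return False
    return True

print(levels_are_intervals(A, np.linspace(0, 0.99, 50)))   # True
print(levels_are_intervals(np.abs(np.sin(xs)), [0.5]))     # False: two bumps
```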
Lemma 3.4 (see [23]). If f is a mapping from a universe X to another universe Y, then for any A ∈ F(X), the equation

f(A)(λ) = f(A(λ)) (3.10)

holds for all λ ∈ [0, 1).
Example 3.5. Define the function f : R → [−1, 1] by f(x) = sin(1/x) for x ≠ 0 and f(0) = 0. This function is not continuous at x = 0 because the limit of f(x) as x tends to 0 does not exist. However, it is an ∗_M-convex transformation.
In order to show this, let A be an arbitrary ∗_M-convex fuzzy set in R. Lemma 3.3 then implies that every strong λ-level set A(λ) is a convex set in R; moreover, the convex sets in R are intervals. If the interval A(λ) does not contain 0, then f(A(λ)) is an interval because f is a continuous function on A(λ). If the interval A(λ) contains 0, then f(A(λ)) = [−1, 1], because any neighborhood of 0 always includes an interval [1/(2(n + 1)π), 1/(2nπ)] or an interval [−1/(2nπ), −1/(2(n + 1)π)] for sufficiently large n ∈ N.
Thus, by Lemma 3.4, all of the strong λ-level sets f(A)(λ) are convex sets, which shows that f(A) is an ∗_M-convex fuzzy set in [−1, 1]. Hence f is an ∗_M-convex transformation.

Definition 3.6. Let E be a real linear topological space, let ∗ be a t-norm, and let f be a mapping from E to (Y, ≥), a totally ordered set equipped with the order topology. For a line segment xy, define the mapping g : [0, 1] → Y by g(t) = f(tx + (1 − t)y). f is said to be monotone on the line segment xy, or f|_{xy} is said to be monotone, if g(t) is monotone on [0, 1] with respect to t.

Theorem 3.7. Let E be a real linear topological space, let ∗ be a t-norm, and let f be a mapping from E to (Y, ≥), a totally ordered set equipped with the order topology. Then f is an inverse ∗-convex transformation if and only if the restriction of f to every line segment is monotone.
Proof. Suppose that the restriction of f to every line segment is monotone. Let B be an arbitrary ∗-convex fuzzy set in Y. For any u, v ∈ E and t ∈ [0, 1], we have that

min{f(u), f(v)} ≤ f(tu + (1 − t)v) ≤ max{f(u), f(v)} (3.12)

because f|_{uv} is monotone. Thus, from the ∗-convexity of B (see Remark 2.10), it follows that

f⁻¹(B)(tu + (1 − t)v) = B(f(tu + (1 − t)v)) ≥ B(f(u)) ∗ B(f(v)) = f⁻¹(B)(u) ∗ f⁻¹(B)(v), (3.13)

which implies that f⁻¹(B) is an ∗-convex fuzzy set. Therefore, f is an inverse ∗-convex transformation.
Conversely, suppose that f is an inverse ∗-convex transformation, and assume there is a line segment uv on which f is not monotone. Then (passing to a sub-segment if necessary) there is a t_0 ∈ (0, 1) which satisfies f(t_0u + (1 − t_0)v) < min{f(u), f(v)} or f(t_0u + (1 − t_0)v) > max{f(u), f(v)}. Choosing an ∗-convex fuzzy set B in Y appropriately then yields

f⁻¹(B)(t_0u + (1 − t_0)v) < f⁻¹(B)(u) ∗ f⁻¹(B)(v), (3.15)

so f⁻¹(B) is not ∗-convex. This is a contradiction. Thus, we can complete the whole proof here.
Corollary 3.8. Let E be a real linear topological space, and let ∗ be a t-norm. If f is a real linear functional on E, then f is an inverse ∗-convex transformation.
Proof. For any line segment xy, the function g : [0, 1] → R defined as g(t) = f(tx + (1 − t)y) = tf(x) + (1 − t)f(y) satisfies

min{g(0), g(1)} ≤ g(t) ≤ max{g(0), g(1)}, (3.17)

for all t ∈ [0, 1]. Since g is affine in t, g(t) is monotone on [0, 1]. Then, by the arbitrariness of the line segment xy and Theorem 3.7, we get the desired result.
Theorem 3.9. Let E be a Cartesian product of real linear topological spaces, E = E_1 × ⋯ × E_r, let ∗ be a continuous t-norm, and let f be a mapping from E to (Y, ≥), a totally ordered set equipped with the order topology. If the restriction of f to every line segment xy, f|_{xy}, is continuous, then f is a multivariable ∗-convex transformation.
Proof. Let A_1, ..., A_r be ∗-convex fuzzy sets in E_1, ..., E_r, respectively. For any given three points x, y, z ∈ Y with x ≤ y ≤ z, if either f⁻¹(x) or f⁻¹(z) is an empty set, then it is obvious that f(A_1, ..., A_r)(y) ≥ f(A_1, ..., A_r)(x) ∗ f(A_1, ..., A_r)(z). Thus, without loss of generality, suppose that f⁻¹(x) and f⁻¹(z) are nonempty sets. For any two points u = (u_1, ..., u_r) ∈ f⁻¹(x) and v = (v_1, ..., v_r) ∈ f⁻¹(z), define the mapping g : [0, 1] → Y by

g(t) = f(tu + (1 − t)v). (3.19)

The mapping g(t) is continuous on [0, 1] with respect to t because the restriction f|_{uv} is continuous. Since g(1) = x and g(0) = z, from the generalized intermediate value theorem [24], it follows that there is a t_0 ∈ [0, 1] such that g(t_0) = y, which implies that f⁻¹(y) is nonempty. Then, writing w = t_0u + (1 − t_0)v, by the commutativity and the associativity of ∗ and the ∗-convexities of A_1, ..., A_r, we have that

f(A_1, ..., A_r)(y) ≥ A_1(w_1) ∗ ⋯ ∗ A_r(w_r) ≥ (A_1(u_1) ∗ ⋯ ∗ A_r(u_r)) ∗ (A_1(v_1) ∗ ⋯ ∗ A_r(v_r)). (3.20)

Combining the continuity and the monotonicity of ∗ with the arbitrariness of u in f⁻¹(x), the inequality (3.20) implies that

f(A_1, ..., A_r)(y) ≥ f(A_1, ..., A_r)(x) ∗ (A_1(v_1) ∗ ⋯ ∗ A_r(v_r)). (3.21)

Similarly, from the continuity and the monotonicity of ∗ and the arbitrariness of v in f⁻¹(z), it follows from the above inequality that f(A_1, ..., A_r)(y) ≥ f(A_1, ..., A_r)(x) ∗ f(A_1, ..., A_r)(z), which implies that f(A_1, ..., A_r) is an ∗-convex fuzzy set of Y. We complete the whole proof here.
Applications and Examples
Now we give some applications of the main results to the hyperstructure convexity invariance of type-2 fuzzy sets under hyperalgebra operations, and to the convexity invariance of fuzzy numbers under basic arithmetic operations.
Convexity Invariance of Type-2 Fuzzy Sets under Set Operations
Let ∗ be a t-norm, and let μ_A(x) and μ_B(x) be fuzzy grades for two type-2 fuzzy sets A and B, with membership functions f and g on [0, 1], respectively. Then the hyperalgebra operations for type-2 fuzzy sets are expressed as follows by using the extension principles: the generalized union,

μ_{A ∪∗ B}(x)(v) = sup{f(u) ∗ g(w) : u, w ∈ [0, 1], max{u, w} = v},

and the generalized intersection,

μ_{A ∩∗ B}(x)(v) = sup{f(u) ∗ g(w) : u, w ∈ [0, 1], min{u, w} = v}.
It should be noted that a type-2 fuzzy set is said to be ∗-convex if all of its fuzzy grades are ∗-convex fuzzy sets. The following theorem is a generalization of the results in [4, 8].

Theorem 4.1. Let ∗ be a continuous t-norm. If A and B are two ∗-convex type-2 fuzzy sets, then A ∪∗ B and A ∩∗ B are ∗-convex type-2 fuzzy sets.

Proof. Define two mappings f_{∪∗}, f_{∩∗} : [0, 1]² → [0, 1] as f_{∪∗}(x, y) = max{x, y} and f_{∩∗}(x, y) = min{x, y} for (x, y) ∈ [0, 1]², respectively. It is easy to see that the two mappings are continuous on [0, 1]². Thus, by Theorem 3.9, we get the desired results.

Corollary 4.2 (see [4, 8]). If A and B are two convex type-2 fuzzy sets, then A ∪∗_M B and A ∩∗_M B are convex type-2 fuzzy sets.

Corollary 4.3. If A and B are two ∗_P-convex type-2 fuzzy sets, then A ∪∗_P B and A ∩∗_P B are ∗_P-convex type-2 fuzzy sets.

Corollary 4.4. If A and B are two ∗_L-convex type-2 fuzzy sets, then A ∪∗_L B and A ∩∗_L B are ∗_L-convex type-2 fuzzy sets.

Corollary 4.5. If A and B are two ∗_D-convex type-2 fuzzy sets, then A ∪∗_D B and A ∩∗_D B are ∗_D-convex type-2 fuzzy sets.
Convexity Invariance of Fuzzy Numbers under Basic Arithmetic Operations
In the early literature, a fuzzy number was defined as a convex fuzzy set in the real line R [1, 5, 11, 12, 22]. Although this definition is very often modified nowadays, convexity remains one of the conditions for a fuzzy set to be a fuzzy number. The arithmetic operations +, −, ×, ÷ for fuzzy numbers can be defined by using Zadeh's multivariable extension principle. Define four mappings f_+, f_−, f_×, and f_÷ : R² → R by f_+(x, y) = x + y, f_−(x, y) = x − y, f_×(x, y) = x × y, and f_÷(x, y) = x ÷ y, for (x, y) ∈ R², respectively. It is easy to see that the first three mappings are continuous on R², and f_÷ is continuous on R × (0, ∞) or R × (−∞, 0). Thus, taking the t-norm to be ∗_M and letting the mapping in Theorem 3.9 be each of the above four mappings in turn, one obtains the following theorem.

Theorem 4.6 (see [1, 5, 12]). If A and B are fuzzy numbers in the real line R, then A + B, A − B, and A × B are fuzzy numbers. In addition, if 0 does not belong to the support set B(0) of B, then A ÷ B is a fuzzy number.
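As an illustration of Theorem 4.6 (ours, not from the paper), the sum of two triangular fuzzy numbers can be computed directly from Zadeh's multivariable extension principle with the t-norm ∗_M; the helper `tri` and the grids are our own choices.

```python
import numpy as np

# (A + B)(v) = sup over x + y = v of min(A(x), B(y)), evaluated on a grid
def tri(a, b, c):
    """Triangular fuzzy number with support [a, c] and peak at b."""
    return lambda x: np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

A, B = tri(0, 1, 2), tri(1, 3, 4)
xs = np.linspace(-1, 7, 801)
vs = np.linspace(-1, 7, 161)
AplusB = np.array([np.max(np.minimum(A(xs), B(v - xs))) for v in vs])
# The result is again triangular, with peak at 1 + 3 = 4:
print(vs[np.argmax(AplusB)])   # ~4.0
```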
Conclusions
In this work, we have discussed the convexity invariance of fuzzy sets under the extension principles. In particular, we have given a necessary and sufficient condition for a mapping to be an inverse ∗-convex transformation and have also obtained some sufficient conditions for a mapping to be an ∗-convex transformation. Finally, two applications were given to illustrate the obtained results. The properties of the introduced concept, the ∗-convex transformation, certainly deserve further investigation.
| 2018-12-10T04:54:15.020Z | 2012-10-02T00:00:00.000 | {
"year": 2012,
"sha1": "6d2af79eafdc6238826115ceec9b67e5d6e5ffc1",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jfs/2012/849104.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6d2af79eafdc6238826115ceec9b67e5d6e5ffc1",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
119282804 | pes2o/s2orc | v3-fos-license | Disk Substructures at High Angular Resolution Program (DSHARP): VIII. The Rich Ringed Substructures in the AS 209 Disk
We present a detailed analysis of high-angular-resolution (0″.037, corresponding to 5 au) observations of the 1.25 mm continuum and 12CO 2−1 emission from the disk around the T Tauri star AS 209. AS 209 hosts one of the most unusual disks from the DSHARP sample, the first high-angular-resolution ALMA survey of disks (Andrews et al. 2018), as nearly all of the emission can be explained with concentric Gaussian rings. In particular, the dust emission consists of a series of narrow and closely spaced rings in the inner ~60 au, two well-separated bright rings in the outer disk, centered at 74 and 120 au, and at least two fainter emission features at 90 and 130 au. We model the visibilities with a parametric representation of the radial surface brightness profile, consisting of a central core and 7 concentric Gaussian rings. Recent hydrodynamical simulations of low-viscosity disks show that super-Earth planets can produce the multiple gaps seen in the AS 209 millimeter continuum emission. The 12CO line emission is centrally peaked and extends out to ~300 au, much farther than the millimeter dust emission. We find axisymmetric, localized deficits of CO emission around four distinct radii, near 45, 75, 120, and 210 au. The outermost gap is located well beyond the edge of the millimeter dust emission, and therefore cannot be due to dust opacity and must be caused by a genuine CO surface density reduction, due either to chemical effects or to depletion of the overall gas content.
INTRODUCTION
The distribution of gas and dust in protoplanetary disks directly impacts the outcome of planetary systems (Weidenschilling 1977; Öberg et al. 2011). Characterizing the spatial distribution of both the dust and gas in disks is therefore essential to understanding how and what kind of planets can form. The main problem theoretical models currently face is the fast migration of mm-sized dust particles towards the central star, preventing the formation of planetesimals, especially at larger distances from the star (Takeuchi & Lin 2002, 2005; Brauer et al. 2007, 2008). The solution that has been invoked to solve this problem is the presence of local pressure maxima that can stop, at least temporarily, the migration of solid particles and concentrate them for enough time to allow them to grow and form larger bodies (e.g., Pinilla et al. 2012). High-angular-resolution ALMA observations of the millimeter dust continuum emission have shown evidence of such substructures in a handful of disks around nearby young stars, including HL Tau (ALMA Partnership et al. 2015), HD 163296 (Isella et al. 2016), TW Hya, Elias 24 (Cieza et al. 2017; Dipierro et al. 2018), and AS 209 (Fedele et al. 2018). All of these disks present multiple ring/gap structures, and demonstrate the presence of mm-sized grains out to radii of at least 100 au. The origin of these ring-like substructures is unclear. The most favored mechanisms include planet-disk interactions (e.g., Dong & Fung 2017), radial pressure variations due to zonal flows in MHD turbulent disks (Johansen et al. 2009), and snowline-induced gaps (Zhang et al. 2015).
While the presence of substructures seems common, a larger sample of disks is needed to fully characterize the prevalence and configuration of such substructures, and to constrain the physical or chemical processes responsible for them. This is the motivation of the Disk Substructures at High Angular Resolution Project (DSHARP), one of the large programs carried out with the Atacama Large Millimeter/submillimeter Array (ALMA) in Cycle 4. The goal of the project is to characterize in a homogeneous way the substructures of 20 nearby protoplanetary disks, by mapping the 240 GHz dust continuum emission at a resolution of 35 mas, corresponding to 5 au (Andrews et al. 2018). One of the main outcomes of this survey is that bright rings and relatively faint gaps are an extremely common feature of disks, but the configuration (position and contrast) of the rings varies substantially from source to source (Huang et al. 2018a).
In this paper we focus on one of the most unusual DSHARP sources, the disk around the classical T Tauri star AS 209. The large number of rings, the narrowness of the rings, and the wide gaps in the outer disk make AS 209 especially intricate compared to the vast majority of disks observed at high angular resolution. The star is located to the northeast of the main Ophiuchus star-forming region, at a distance of 121 ± 2 pc (Gaia Collaboration et al. 2018). The star has spectral type K5, a mass of 0.9 M⊙, and an age of 1.6 Myr (see Table 1 in Andrews et al. 2018). Observations at different wavelengths clearly show the effect of radial drift of larger grains, as the emission is noticeably more compact at longer wavelengths (Pérez et al. 2012; Tazzari et al. 2016). The surface density profile has been characterized with 870 µm observations at 0″.3 angular resolution (Andrews et al. 2009). More recently, Fedele et al. (2018) presented ALMA observations of the disk at ~0″.17 angular resolution. The emission was characterized by a bright central component and the presence of two weaker dust rings near 75 and 130 au, and two gaps near 62 and 103 au. No obvious substructure was observed in the inner 60 au of the disk, except for a kink around 20-30 au. Fedele et al. (2018) also presented hydrodynamical simulations and found that the gap near 100 au, located between the two outer rings, could be produced by a Saturn-like planet.
The disk has also been observed in molecular line emission. Huang et al. (2016) presented observations of the three main CO isotopologues at 0″.6 angular resolution. While the emission from the most abundant isotopologue, 12CO, was found to be centrally peaked and to decrease monotonically with radius, the 13CO and C18O emission showed evidence of an outer ring or bump centered at 150 au, near the millimeter dust edge. In this work we present 1.25 mm dust continuum and 12CO 2−1 line observations of the AS 209 disk. The observations and data reduction are presented in Section 2. The results for the dust continuum emission and the 12CO line emission are described in Section 3. A discussion is presented in Section 4 and a summary is given in Section 5.
OBSERVATIONS
The observations presented here are part of the DSHARP ALMA Large Program (2016.1.00484.L). The AS 209 disk was observed with ALMA in Band 6 in September 2017 in configurations C40-8/9. Shorter baselines were observed in May 2017 in configuration C40-5. We use additional archival data from projects 2013.1.00226.S (Huang et al. 2016) and 2015.1.00486.S (Fedele et al. 2018), which provide information on shorter and intermediate baselines, respectively. A detailed description of the observations and data reduction process can be found in Andrews et al. (2018). Briefly, we first self-calibrated the short baselines, and in a second step self-calibrated the combined observations of short and long baselines. The continuum observations were cleaned in CASA 5.1, using the multiscale option and a robust parameter of −0.5. A taper with FWHM of 37 mas × 10 mas and position angle (PA) of 162° was used to minimize PSF-related artifacts. The resulting continuum image has a beam of 38 mas × 36 mas, PA of 68°, and an rms noise of 19 µJy beam⁻¹. The solutions of the continuum self-calibration were then applied to the 12CO data. The molecular line data were first regridded to channels of 0.35 km s⁻¹ and then cleaned with a robust parameter of 1.0. A Keplerian mask was used to aid the cleaning process. The resulting 12CO image has a beam of 95 mas × 70 mas, PA of 97°, and an rms noise of 0.8 mJy beam⁻¹ per channel.
RESULTS
In this section we first describe the main features of the dust continuum emission and model the surface brightness in the uv-plane to extract the positions, widths and amplitudes of the different ring components. We then describe the spatial distribution of the 12 CO emission and its relation to the dust continuum emission.
Dust continuum emission
The image of the 1.25 mm dust continuum emission from the AS 209 disk is shown in Figure 1. The map is shown in the upper panel, while the lower panel shows the emission in polar coordinates to better visualize the axisymmetric nature of the emission. To create the map in polar coordinates, the image is first deprojected using an inclination of 34.88° and a position angle of 85.76° (see Section 3.1.1). The polar angle increases in the clockwise direction, where zero degrees corresponds to the minor axis of the disk in the south direction. The dust emission is characterized almost entirely by a series of concentric narrow rings and gaps. Although the emission looks very axisymmetric, we cannot exclude the possibility that the rings have a small eccentricity (e < 0.15; see the discussions in Huang et al. 2018a; Zhang et al. 2018). Figure 2 shows the deprojected, azimuthally averaged emission profile, assuming the same inclination and position angle mentioned above. The emission is centrally peaked and the surface brightness at the peak of the rings decreases with radius. A striking aspect of the emission is the difference between the inner 60 au of the disk, which consists of a central component and three closely packed rings, and the outer (>60 au) disk, which consists of two bright rings that are well separated and spatially resolved. The two outer rings have been previously reported by Fedele et al. (2018).
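A minimal sketch (ours) of the deprojection used for the polar map and the radial profile is given below; `image`, `dx`, and `dy` are placeholders for the emission map and its 2D sky-offset arrays, and the rotation sign conventions may differ from those of the imaging scripts actually used.

```python
import numpy as np

def radial_profile(image, dx, dy, inc_deg=34.88, pa_deg=85.76, nbins=200):
    """Deproject and azimuthally average `image`; dx, dy are 2D offset maps."""
    inc, pa = np.radians(inc_deg), np.radians(pa_deg)
    # rotate sky offsets so the major axis lies along x, then stretch the
    # apparent minor axis by 1/cos(i)
    xr = dx * np.sin(pa) + dy * np.cos(pa)
    yr = (-dx * np.cos(pa) + dy * np.sin(pa)) / np.cos(inc)
    r = np.hypot(xr, yr)
    edges = np.linspace(0.0, r.max(), nbins + 1)
    which = np.digitize(r.ravel(), edges) - 1
    flat = image.ravel()
    prof = np.array([flat[which == k].mean() if np.any(which == k) else np.nan
                     for k in range(nbins)])
    return 0.5 * (edges[1:] + edges[:-1]), prof
```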
The new higher-angular-resolution observations reveal that the inner disk is not smooth but contains substantial substructure. In the inner disk the ring peaks are clearly resolved, but they overlap at lower surface brightnesses. The gap near 100 au is not completely empty of emission, as a faint ring can be seen in the deprojected radial profile around this radius (better seen in the zoomed panel in Fig. 2). An additional faint component can also be seen at the edge of the disk, just outside the bright ring located at ~120 au.
In the next section, we model the emission in the uv-plane to obtain the deconvolved positions and widths of the various rings observed in the disk.
Model-fitting in the uv-plane
Given the striking ring-like nature of the dust continuum emission, we modeled the radial brightness distribution with the sum of concentric Gaussian rings:

I(r) = Σ_i A_i exp(−(r − r_i)² / (2σ_i²)).

The number of rings is chosen through visual inspection of the cleaned emission map and the deprojected radial profile (see Figs. 1 and 2). We include a central component and 3 rings in the inner (<60 au) disk and 4 rings in the outer disk. The center position of the innermost Gaussian is fixed to zero, i.e., the disk center. Two of the outer rings correspond to the faint components near 100 and 130 au. We assume the emission is axisymmetric, and create synthetic visibilities given by the Hankel transform (Pearson 1999):

V(ρ) = 2π ∫₀^∞ I(r) J_0(2πρr) r dr,

where ρ is the deprojected uv-distance in units of kλ, r is the radial angular distance from the disk center in units of radians, and J_0 is the zeroth-order Bessel function of the first kind.
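A minimal numerical sketch (ours) of this model follows: a sum of Gaussian rings and its visibilities via direct quadrature of the Hankel transform. The integration range and sampling are illustrative choices, not the values used in the paper.

```python
import numpy as np
from scipy.special import j0

def intensity(r, amps, centers, sigmas):
    """Sum of concentric Gaussian rings (central core = ring with r_0 = 0)."""
    return sum(A * np.exp(-((r - r0) ** 2) / (2 * s ** 2))
               for A, r0, s in zip(amps, centers, sigmas))

def visibility(rho, amps, centers, sigmas, r_max=2e-5, n=4000):
    """Zeroth-order Hankel transform; r in radians, rho in wavelengths."""
    r = np.linspace(0.0, r_max, n)
    I = intensity(r, amps, centers, sigmas)
    return np.array([2 * np.pi * np.trapz(I * j0(2 * np.pi * q * r) * r, r)
                     for q in np.atleast_1d(rho)])

# toy usage: a core plus one ring at ~0.6 arcsec (3e-6 rad), rho in 10-1000 klambda
V = visibility(np.array([1e4, 1e5, 1e6]),
               amps=[1.0, 0.3], centers=[0.0, 3.0e-6], sigmas=[5e-7, 2e-7])
print(V)
```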
To speed up the fitting, the observed visibilities are first deprojected, radially averaged, and binned. We include uv-points from 10 to 10000 kλ, in steps of 10 kλ. The inclination (i), position angle (PA), and center of the disk, given by an offset (δx, δy), are included as free parameters in the fit. The total number of free parameters is then 27: 23 for the Gaussian rings (A_i, r_i, σ_i with i from 0 to 7, where r_0 = 0 is fixed) and 4 for the disk geometry (i, PA, δx, δy).
We use the affine-invariant MCMC sampler implemented in the emcee package (Foreman-Mackey et al. 2013) to explore the parameter space. Table 1 summarizes the resulting best-fit parameters, calculated from the 50th percentile; the uncertainties are calculated from the 16th and 84th percentiles. Figure 3 compares the deprojected and binned visibilities with our best-fit model, which is shown in red; an inset is shown in the upper right corner.
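The sampling setup can be sketched as follows (ours); for brevity the model is reduced to a single ring with a toy visibility function and synthetic data, whereas the actual fit uses the 27-parameter Hankel-transform model of the previous sketch and the binned ALMA visibilities.

```python
import numpy as np
import emcee

def model(theta, rho):
    A, r0, sig = theta
    # toy stand-in for the Hankel-transform visibilities of a Gaussian ring
    return A * np.exp(-2 * (np.pi * sig * rho) ** 2) * np.cos(2 * np.pi * rho * r0)

rho = np.linspace(0.01, 2.0, 200)                       # arbitrary units
truth = (1.0, 0.6, 0.05)
V_obs = model(truth, rho) + 0.01 * np.random.randn(rho.size)  # synthetic data
V_err = 0.01 * np.ones_like(rho)

def log_prob(theta, rho, V_obs, V_err):
    if np.any(np.asarray(theta) <= 0):                  # simple positivity prior
        return -np.inf
    return -0.5 * np.sum(((V_obs - model(theta, rho)) / V_err) ** 2)

ndim, nwalkers = 3, 32
p0 = np.array(truth) * (1 + 1e-3 * np.random.randn(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(rho, V_obs, V_err))
sampler.run_mcmc(p0, 2000)
chain = sampler.get_chain(discard=500, flat=True)
print(np.percentile(chain, [16, 50, 84], axis=0))       # medians and 1-sigma ranges
```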
We find a disk inclination of 34.9° and a position angle of 85.8°, which are consistent with previous estimates (Fedele et al. 2018). These are also consistent with what Huang et al. (2018a) find by fitting rings directly to the cleaned image. We also find a small offset from the phase center. We note that this offset is just a nuisance parameter, since the disk center has been altered during the self-calibration process. The rings in the inner disk are centered at 15, 27, and 41 au, and have FWHM between 7 and 17 au. Given that the spatial resolution of the observations is 5 au, the rings are all resolved. The two prominent rings in the outer disk are located at 74 and 120 au. Fedele et al. (2018) found the second ring to be located at 130 au instead of 120 au. The difference is due to the presence of another, much fainter ring or bump, which we find to be located at 140 au. The faint ring located in between the two prominent outer rings is located at 92 au. We note that this ring is not located at the gap center but is instead closer to the inner ring near 74 au. These two fainter rings in the outer disk are found to be much broader (FWHM of 23 au) than the rest of the rings, in particular the two prominent outer rings, which have FWHM of ~7-10 au. This suggests that the nature of these components may be different from the rest, and that their distributions are thus not well represented by a Gaussian. They could instead be treated as faint emission that is somehow connected to the brighter neighboring rings. The cleaned image of the best-fit model is shown in Fig. 4. A slightly hexagonal structure and bumpiness can be seen in the rings of the model, which are also seen in the observations; this demonstrates that these structures are not real but correspond to PSF effects (see Andrews et al. 2018, for a discussion). The residuals, corresponding to the cleaned image of the residual visibilities (V_obs − V_model), are shown in the right panel of the figure. Our best-fit model successfully reproduces the observations, as seen by the low-level emission in the residual map. However, some residuals remain near the disk center, suggesting that some additional substructure, beyond Gaussian rings, is needed to fully reproduce the inner disk structure.
3.2. 12CO J = 2−1 emission

Figure 5 shows the moment-zero map of the 12CO J = 2−1 line, integrated from −3 to 14 km s⁻¹. Pixels with S/N ratio lower than 3 have been clipped to highlight the substructure of the emission. The west side of the disk is moderately affected by cloud contamination due to the overlap between the velocities of the cloud and the blueshifted part of the disk line emission; this explains the east-west asymmetry. The bottom panel in Fig. 5 displays the azimuthally averaged deprojected profile, including only the east side of the disk. A selection of channel maps, showing the emission from the east side of the disk, is shown in Fig. 6. Channel maps for the full velocity range can be found in Fig. 8. The 12CO J = 2−1 emission (1) is centrally peaked, (2) extends much farther out than the millimeter dust emission (out to 300 au), and (3) presents 4 main gaps or emission decrements in the outer disk, near 45, 75, 120, and 210 au. We note this is not an effect of the clipping in the creation of the moment-zero map, as the gaps are also seen with even more clarity in the individual channels (see Fig. 6). The first gap is located very close to the edge of the inner millimeter disk. The next two gaps, near 74 and 120 au, spatially coincide with the two prominent outer dust rings; the latter gap is best seen in the channel maps, at a velocity of 5.8 km s⁻¹. The spatial coincidence of these two gaps with the location of the millimeter dust rings suggests that the rings are optically thick and are absorbing the 12CO emission (see the discussion in Section 4.2). The fourth and outermost gap seen in the 12CO emission is located at a distance of 210 au from the central star, well beyond the millimeter dust edge. Therefore, it cannot be explained by dust opacity and must be a real decrease in the 12CO column density at this radius.
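One simple way to build such a clipped moment-zero map (ours; the actual imaging used CASA with a Keplerian mask) is sketched below, where `cube`, `velax`, and `rms` are placeholders for the continuum-subtracted line cube, its velocity axis, and the per-channel noise.

```python
import numpy as np

def moment_zero(cube, velax, rms, vmin=-3.0, vmax=14.0, clip=3.0):
    """Clipped moment-zero map; cube is (nchan, ny, nx), velax in km/s."""
    sel = (velax >= vmin) & (velax <= vmax)
    data = np.where(cube[sel] > clip * rms, cube[sel], 0.0)  # keep S/N > clip
    dv = np.abs(np.mean(np.diff(velax[sel])))                # channel width
    return data.sum(axis=0) * dv                             # Jy/beam km/s
```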
The high-angular-resolution observations of the 12CO J = 2−1 line allow us to determine the absolute orientation of the disk (Rosenfeld et al. 2013). Because the 12CO line is optically thick and the disk is flared, the observed emission originates from the surface layers of the disk, which are elevated with respect to the midplane, where the dust continuum emission arises. At high enough angular resolution, and if the disk is inclined enough, it is possible to differentiate the front and back sides of the disk. In the case of AS 209, the southern part of the disk appears to be closer to us, since the emission arising from the half-cone that is pointing towards us is shifted to the north and appears brighter. By contrast, the emission arising from the half-cone that is pointing to the back appears dimmer and is shifted to the south (see Fig. 6). This effect is best seen at velocities of 5.8 and 6.15 km s⁻¹. Compared to other disks, such as IM Lup (Huang et al. 2018b) and HD 163296, these effects are subtle, however, which is likely due to the higher inclinations of both of those disks compared to AS 209. Scattered-light observations trace the illumination of the small grains in the disk. It is useful to compare scattered light to 12CO observations since they both trace the upper layers of the disk. The scattered-light image of AS 209 presented recently by Avenhaus et al. (2018) was found to be very faint and featureless in comparison to other disks of similar mass. Although the scattered light is faint, it is detected out to a radius of 200 au. The faint nature of the scattered light could be the result of shadowing of the outer disk by one of the rings in the inner disk, most likely the one observed in millimeter continuum near 15 au. Another possibility is that the disk has experienced substantial settling and is very flat, which hampers the efficient scattering of photons from the star. From these observations, the authors computed the deprojected profiles for two possible scenarios, in which the north or the south part of the disk is closer to us. As explained above, the new 12CO observations demonstrate the latter alternative is the correct one. For this scenario, the resulting radial profile of the scattered-light emission presents three rings, two of them located near the two prominent outer dust rings seen by ALMA, and a third one near 250 au. They also find a gap near 200 au. This outer ring and gap coincide with the gap and outer ring seen in the 12CO emission.
DISCUSSION
Although other disks are known to harbor multiple rings in their millimeter emission, the disk around AS 209 is unique because of the striking difference between the substructures in the inner and outer disk, and because of the symmetric nature of the rings. While the inner disk consists of closely packed rings that resemble the emission seen in TW Hya and HD 163296, the outer disk harbors two prominent rings that are separated by a gap with a much higher contrast than in those other disks. In this section we discuss the different possible origins for these substructures, in both the dust continuum and the 12CO line emission.
Origin of the dust emission ring-morphology
One of the main results from the DSHARP Large Program is that rings and gaps are very common in protoplanetary disks (Huang et al. 2018a), but the origin of these substructures is still unknown. Several hypotheses have been proposed, the most popular being the presence of planets. An embedded planet can open a gap (or multiple gaps) through dynamical interactions with the disk. The depth of the gap will depend on several factors, including the mass of the planet, the time the planet has had to carve the gap, the disk aspect ratio h/r, and the disk viscosity; planets will open deeper gaps in low-viscosity disks (e.g., Crida et al. 2006; Dong & Fung 2017; Bae et al. 2017). Using 3D hydrodynamical simulations, Fedele et al. (2018) found that the position, width, and depth of the outermost gap at 100 au in the AS 209 disk are consistent with the presence of a 0.2 M_J planet located at 95 au. Their simulation also predicted the presence of a feature inside the gap. The feature was not detected in their observations, but it is now detected with the DSHARP observations. Fedele et al. (2018) also found that a planet located at 80 au can produce the two major gaps seen in the outer AS 209 disk (at 60 and 100 au), and predicted an additional gap near 40 au, which is now detected by DSHARP. Another possibility, also consistent with the observations, is the presence of a second ~0.1 M_J planet in the inner gap at 57 au, close to a 2:1 resonance with the outer planet (Fedele et al. 2018).
The higher-angular-resolution observations reveal the presence of additional gaps in the inner disk that were unresolved in the ~0″.17 resolution observations of Fedele et al. (2018). Although they did not resolve the three individual rings in the inner disk, they did report a kink in the profile near 20-30 au. The multiple rings seen in the inner disk could be produced by planets in the outer disk. Hydrodynamical simulations have shown that a single super-Earth planet can produce major gaps both interior to the planet location and in the outer disk (e.g., Bae et al. 2017). A particularly new feature in the AS 209 disk is the gap located at roughly 10 au. The gap is not resolved, and it is not seen in the scattered-light image, perhaps because it is hidden by the 0″.185 coronagraph (Avenhaus et al. 2018). Inspired by the disk structures observed in the DSHARP sources, Zhang et al. (2018) presented a grid of hydrodynamical simulations. In particular for AS 209, they show that a single planet located at 99 au can simultaneously produce the various rings seen in the inner 60 au and in the outer disk if α varies radially. The simulation is shown in Fig. 7. The planet responsible for the gaps has a mass of 0.087 M_J. Quite remarkably, this simulation is able to produce not only the large gap near 100 au, but also matches the positions of the gaps in the inner disk, at 60 au, 35 au, and even the one at 24 au. A detailed description of the simulation as well as a synthetic image of the dust continuum emission is given in Zhang et al. (2018).
It is worth noting that the most prominent gaps seen in the dust continuum are not seen as prominent features in the 12CO emission (but see Section 4.2). This does not rule out the planet hypothesis, as theoretical simulations have shown that it is possible for planets to open gaps in the dust while leaving the gas emission relatively featureless (Paardekooper & Mellema 2004; Rosotti et al. 2016; Isella et al. 2016; Dipierro & Laibe 2017), especially for optically thick lines like 12CO. This is particularly true if the planet has a low mass, which is the case for the putative planets in the AS 209 disk (Zhang et al. 2018).
It has also been suggested that the rings observed at millimeter wavelengths could be produced by changes in the dust properties at the locations of the snowlines of the main ices, like H2O, NH3, CO, and N2 (Zhang et al. 2015; Okuzumi et al. 2016). In this scenario, material can be concentrated at the location of condensation fronts, which is critical for the formation of planetesimals. At these locations the µm-sized and mm-sized dust particles would have grown to cm sizes and larger, and would thus be invisible at millimeter wavelengths, appearing as gaps in the observations. We can estimate the locations of these snowlines in the AS 209 disk using the midplane temperature derived by Andrews et al. (2009). The two outer gaps in the disk, near 60 and 100 au, have temperatures of 20 and 15 K, which are comparable to the condensation temperatures of pure CO and N2 ices, respectively (Zhang et al. 2015). The water snowline, which is supposed to be the most efficient snowline for concentrating solids, is located very close to the central star (<2 au), and is thus unresolved even with the ALMA observations. It is worth remembering that gas temperatures in disks are very uncertain. In particular, for the AS 209 disk the gas temperature was derived by fitting a power law to lower-angular-resolution observations of the dust continuum (Andrews et al. 2009). The locations of the snowlines could therefore be shifted in the disk. A major problem with this scenario is that the gaps in the AS 209 disk are very wide (almost 20 au for the outermost gap) and also very depleted in dust. Although they may contribute somewhat to the formation of these gaps, it is hard to explain how condensation fronts alone could produce such wide gaps. Moreover, dust coagulation and disk evolution models that take into account the condensation and evaporation of major volatiles do not find enhanced grain growth near the CO snowline (Stammler et al. 2017). Only at the location of the water snowline is dust growth found to be efficient enough to produce strong features in the dust continuum emission (Birnstiel et al. 2010; Drażkowska & Alibert 2017; Schoonenberg & Ormel 2017).
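A back-of-the-envelope version of this snowline estimate (ours) is sketched below for a power-law midplane temperature; the normalization T0 and slope q are illustrative stand-ins, not the Andrews et al. (2009) fit values, and the condensation temperatures are rough literature numbers.

```python
import numpy as np

def snowline_radius(T_cond, T0=40.0, q=0.5, r0=10.0):
    """Radius (au) where T(r) = T0 * (r / r0)**(-q) drops to T_cond."""
    return r0 * (T0 / T_cond) ** (1.0 / q)

for species, T_c in [("H2O", 135.0), ("CO", 20.0), ("N2", 15.0)]:
    print(species, round(snowline_radius(T_c), 1), "au")
```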
Other alternatives mainly involve internal disk gas dynamics resulting from the coupling between the gas and the magnetic field. These include zonal flows due to magnetorotational instability (MRI) turbulence (Johansen et al. 2009; Simon & Armitage 2014; Bai & Stone 2014), ring formation at dead-zone boundaries (Flock et al. 2015; Lyra et al. 2015), and spontaneous magnetic flux concentration (Bai & Stone 2014; Béthune et al. 2017; Suriano et al. 2018). The zonal-flow scenario generally produces radial density variations of up to a few tens of percent on scales of a few scale heights. While this may be consistent with the rings found in TW Hya, the ring separation in AS 209 is much broader (e.g., 46 au separation for a disk scale height of 5-10 au in the outer disk), and the contrast between rings and gaps is also much larger. The dead-zone scenario relies on a specific location corresponding to a transition in the radial resistivity profile well within 100 au, making it difficult to reconcile with the widely separated multi-ring structure seen in the outer disk of AS 209. The last scenario offers large degrees of freedom attributed to how the magnetic flux threading the disk evolves, though this process is very poorly understood, and existing studies largely rely on simplified assumptions (Lubow et al. 1994; Okuzumi et al. 2014; Guilet & Ogilvie 2014; Bai & Stone 2017). Therefore, while some scenarios are unlikely, better theoretical understanding is needed to assess the possibility of a magnetic origin for the substructures in AS 209.
4.2. Origin of the 12CO J = 2−1 emission rings and gaps

One way to produce gaps in the 12CO emission is by optically thick dust emission. For a long time, dust opacity at millimeter wavelengths was thought to be negligible, at least in the outer disk. However, recent low-angular-resolution observations of a large sample of disks (Tripathi et al. 2017), as well as high-angular-resolution observations, cast doubt on this assumption (see also the case of HD 163296; Isella et al. 2018). In the case of AS 209, the peak surface brightness of the two prominent outer rings is ~0.15-0.20 mJy beam⁻¹, which corresponds to Rayleigh-Jeans brightness temperatures of ~3-4 K, and would only produce an optical depth of ~0.4-0.5 even assuming relatively conservative dust temperatures of ~12-15 K (see also Dullemond et al. 2018). The dust rings therefore seem to be optically thin. One possibility to explain the decrease of 12CO emission close to the location of these two dust rings is that the rings are not resolved and are clumpy in nature; a filling factor of 1/3 would be enough to reproduce the ratio between the dust temperature and the brightness temperature of the rings. Another possible explanation is that some emission was removed during the continuum subtraction process. Indeed, part of the continuum can be absorbed by the molecule at the line center, especially when the optical depth of the line is much higher than the optical depth of the continuum. This can lead to an overestimation of the dust contribution at the line center, which is estimated from the line-free channels, resulting in an underestimation of the line emission (see Boehler et al. 2017; Weaver et al. 2018).

Huang et al. (2016) presented CO isotopologue observations of the AS 209 disk at 0″.6 angular resolution, and found that the C18O emission consists of a central peak and a ring component centered at a radius of ~150 au. This ring component, however, was not observed in their optically thick 12CO emission. The new higher-angular-resolution 12CO observations reveal some new interesting substructure. In particular, we detect a fainter outer ring centered at ~240 au, which was not detected in the C18O emission, probably due to the lower signal-to-noise ratio of the emission in the outer disk. The CO distribution thus appears to have at least 2 ring components located outside the millimeter dust disk and also well outside the expected location of the CO snowline (between 30 and 90 au; Huang et al. 2016). Huang et al. (2016) suggested that the C18O ring at 150 au is caused by CO being desorbed back into the gas phase, which could happen through some non-thermal process (cosmic rays or high-energy photons), or by thermal inversion due to dust migration (e.g., Cleeves 2016). However, the presence of a second ring at 240 au casts doubt on this interpretation: if chemistry is the cause of the outermost 12CO ring, then the inner ring should be produced by a different mechanism. Another alternative is that the outermost gap near 210 au corresponds to a reduction of the total gas density at this location. The origin of this gap is unclear, but we can speculate that it is produced by a planet. The formation of planets this far from the star is hard to explain theoretically, but indirect evidence of their existence has been found: Pinte et al. (2018) found a localized deviation from Keplerian velocities in the 12CO emission of HD 163296, and attributed the observed velocity kink to a planet located ~260 au from the young star.
We do not detect any clear evidence of deviations from Keplerian velocities in the AS 209 disk, but this could be due to the lower signal-to-noise ratio and spectral resolution compared to the HD 163296 observations. Alternatively, the velocity kink could be present on the west side of the disk and thus hidden by the cloud contamination. We do not detect any localized emission from a circumplanetary disk at this radius in the dust continuum image either. Recently, Teague et al. (2018) presented a new method to measure rotation curves in disks; using lower-angular-resolution archival CO data, they found deviations of up to 5% from Keplerian rotation at 250 au in the AS 209 disk.
SUMMARY
We have presented observations of the 1.25 mm dust continuum and the 12CO 2−1 line emission from the disk around the classical T Tauri star AS 209, as part of the ALMA Large Program DSHARP. We have modeled the dust emission in the uv-plane and find that the emission can be well represented by a series of narrow concentric rings. In addition to the two prominent rings located at 74 and 120 au that were previously reported by Fedele et al. (2018), we find a central component and three rings in the inner (<60 au) disk. Two main gaps are seen, near 60 and 100 au. The second gap, at 100 au, is not completely empty of dust grains, however, as we detect faint dust emission within it. We also detect faint emission at the outer edge of the disk, out to ~160 au.
The 12CO image exhibits four gaps. Two of them spatially coincide with the positions of the two prominent outer dust rings, which suggests the rings are optically thick. The outermost 12CO gap, near ~210 au, is located well beyond the millimeter dust edge, and therefore traces real 12CO depletion or a substantial decrease in the gas density.
We discuss the different possibilities for the origin of the gaps seen in the continuum emission. We find that, although some of the gaps roughly coincide with the locations of the snowlines of major volatiles, it is unlikely that all the observed gaps are induced by a chemical effect. This is because of the varied structures observed in the different DSHARP sources, in terms of the width and contrast of the gaps, whereas snowlines should produce similar configurations (see Huang et al. 2018a, for a discussion). Recent hydrodynamical simulations show, however, that super-Earth planets can produce the various rings seen in the continuum image, both in the inner disk and in the outer disk. As the spatial resolution of the observations improves, revealing new features, the locations and masses of these planets can be better constrained. If the presence of a planet (or multiple planets) in the AS 209 disk is confirmed, then planet formation starts very early (within a few Myr) in the evolution of disks. Moreover, if a planet is responsible for the gap seen in CO at ~210 au, then planets can form at large distances from the central star, which challenges our current understanding of the planet formation process. | 2018-12-10T19:36:00.000Z | 2018-12-10T00:00:00.000 | {
"year": 2018,
"sha1": "a80adb9c5d9455c2eda0ec8cc8f3a6602fe1d38f",
"oa_license": null,
"oa_url": "https://iopscience.iop.org/article/10.3847/2041-8213/aaedae/pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "a46e94982149b3bcf2b607c5c770133a32bea41d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
256900715 | pes2o/s2orc | v3-fos-license | Weyl points in ball-and-spring mechanical systems
Degeneracy points of parameter-dependent Hermitian matrices play a fundamental role in quantum physics, as illustrated by the concept of Berry phase in quantum dynamics, the Weyl semimetals in condensed-matter physics, and the robust ground-state degeneracies in topologically ordered quantum systems. Here, we construct simple ball-and-spring mechanical systems, whose eigenfrequency degeneracies mimic the behaviour of degeneracy points of electronic band structures. These classical-mechanical arrangements can be viewed as de-quantized versions of Weyl Josephson circuits, i.e., superconducting nanostructures proposed recently to mimic band structure effects of Weyl semimetals. In the mechanical setups we study, we identify degeneracy patterns beyond simple Weyl points, including the chirality flip effect and a quadratic degeneracy point. Our theoretical work is a step toward simple and illustrative table-top experiments exploring topological and differential geometrical aspects of physics.
INTRODUCTION
Topological semimetals have attracted significant attention in recent years due to their unique electronic properties and potential applications in next-generation electronics. The electronic band structure of such materials exhibits degeneracy points, e.g., Weyl points, at which the energy eigenvalues disperse linearly. These points act as sources or sinks of Berry curvature, which leads to a number of interesting phenomena, including Fermi-arc surface states, the chiral anomaly, and the anomalous Hall effect [1-3]. Beyond Weyl-point physics, robust level degeneracies are essential ingredients of topological insulators [4], topological quantum computing [5], and topologically ordered systems as well [6, 7].
The complexity of real materials with Weyl points in their electronic band structures often hinders the observation of the associated geometrical and topological effects. Because of that, the physical characteristics of Weyl points are often investigated using metamaterials, e.g., engineered, artificial crystals whose phononic or photonic band structures possess Weyl points [8-11].
Alternatively, Weyl points arise and can be studied in quantum systems with at least 3 control parameters. For example, multiply-connected superconducting devices [12], including multi-terminal Josephson junctions [13] and the recently proposed Weyl Josephson circuits [14], can emulate Weyl semimetal band structures, where, e.g., the magnetic fluxes piercing the loops of the circuit correspond to the wave vectors of a band structure. Though multiply connected superconductors are a promising testbed to emulate and investigate topologically non-trivial band structures, the realization of such experiments requires costly, advanced, and challenging fabrication, as well as millikelvin cooling technology.
In this work, we show that Weyl points and much of their rich phenomenology can be realized in mechanical ball-and-spring systems, potentially leading to much simpler and much less costly table-top experiments on Weyl-point physics. Weyl points arise in the parameter-dependent frequency spectrum of the normal-mode oscillations of the proposed ball-and-spring systems. In the first setup we propose (System A), we illustrate the appearance of Weyl points, their movement, and their creation and annihilation, highlighting the case when the spatial symmetry of the setup governs the creation-annihilation process, enabling the chirality flip effect [15]. In the second setup we propose (System B), we show that the parameter-dependent effective dynamical matrix is analogous to the wave-vector-dependent effective Hamiltonian of bilayer graphene [16], which exhibits a non-generic degeneracy point with a topological charge of 2 and a local multiplicity (birth quota) of 4 [17].
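A minimal sketch (ours) of the kind of effective matrix meant here: a bilayer-graphene-like 2×2 model whose two bands touch quadratically at k = 0 and whose off-diagonal phase winds twice around the degeneracy. This toy model is for illustration only and is not the dynamical matrix of System B.

```python
import numpy as np

# H(k) = [[0, (kx + i ky)^2], [(kx - i ky)^2, 0]]: bands touch as +-|k|^2
def H(kx, ky):
    off = (kx + 1j * ky) ** 2
    return np.array([[0.0, off], [np.conj(off), 0.0]])

for kx in np.linspace(-0.1, 0.1, 5):
    print(kx, np.linalg.eigvalsh(H(kx, 0.0)))          # eigenvalues +-kx^2

# the off-diagonal phase winds twice along a loop around k = 0:
phis = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)
angles = np.angle([(np.cos(p) + 1j * np.sin(p)) ** 2 for p in phis])
winding = (np.unwrap(angles)[-1] - np.unwrap(angles)[0]) / (2 * np.pi)
print(round(winding, 2))                               # ~2
```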
The rest of the paper is structured as follows. In Sec. II we introduce preliminary concepts and highlight relevant background to make this work self-contained. In Sec. III, we introduce a simple mechanical system (System A) whose vibrational spectrum contains Weyl points, and we demonstrate their movement and creation/annihilation, and the chirality flip effect. In Sec. IV, we discuss the appearance of a charge-2 Weyl point in another classical mechanical system (System B). In Sec. V we discuss the relation to prior work as well as open follow-up problems, while Sec. VI provides our conclusions.
II. PRELIMINARIES
To ensure that our terminology is well-defined, and to make this work self-contained, we collect a few preliminary concepts and relations in this section. We start by connecting elementary geometry with the 'topological protection' or 'robustness' of Weyl points in quasiparticle band structures of 3D crystals.
Consider two arbitrary intersecting curves on the Euclidean plane, as shown by the solid lines in Fig. 1a. Notice that in the vicinity of the intersection point, the two curves are well approximated by straight lines, which enclose a nonzero angle. This type of intersection of two curves on the plane is called transversal. In this case, transversality implies robustness, in the following sense: If we slightly deform the solid purple curve into the dashed purple curve, then the intersection point still exists between the solid black and dashed purple curves, and these curves still enclose a nonzero angle in the vicinity of the intersection point. This robustness of the intersection point (and the local behavior around the intersection point) is sometimes referred to as 'protection' against small 'deformations' or 'perturbations'.
Consider a slightly different situation: two transversally intersecting curves in three-dimensional (3D) Euclidean space, as shown by the solid lines in Fig. 1b. Such an intersection is transversal, but it is not protected against small deformations. As shown in Fig. 1b, a small deformation of one of the lines (dashed purple line) can lead to an avoidance of the two curves, and correspondingly, the disappearance of the intersection point. However, if we consider a transversal intersection point of a line and a surface embedded in 3D Euclidean space, as shown in Fig. 1c, then the intersection point is protected again.
These three simple examples of Fig. 1 reveal an interesting property of an isolated transversal intersection point of two manifolds embedded in a host manifold. Namely, if the dimension D of the host manifold equals the sum of the dimensions D_1 and D_2 of the two embedded manifolds, that is, D = D_1 + D_2, then the intersection point is protected against any small deformation. However, if the dimension of the host manifold is greater than the sum of the dimensions of the two embedded manifolds, that is, D > D_1 + D_2, then the intersection point is not protected.
These observations lead to the often-stated conclusion that a Weyl point in the band structure of a three-dimensional crystal is protected against small deformations of the Hamiltonian. In fact, a Hamiltonian describing such a band structure is a map from the crystal's Brillouin zone (essentially, a 3D torus) to the space of n × n Hermitian matrices, with an integer n ≥ 2. The matrix space plays the role of the host manifold. A Hermitian matrix can be described by the real and imaginary parts of its matrix elements, that is, this matrix space has dimension D = n². Within this matrix space, the matrices with a twofold eigenvalue degeneracy (i.e., the matrices with ith and (i + 1)th eigenvalues being equal, but different from all other eigenvalues) form a manifold of codimension 3 [12,18,19], that is, of dimension D_1 = n² − 3. This manifold is sometimes called a degeneracy stratum. Furthermore, the image of the 3-dimensional Brillouin zone via the Hamiltonian map is a manifold in the matrix space, also of dimension D_2 = 3. A Weyl point, i.e., a twofold degeneracy of the band structure with linear dispersion in its vicinity, is in fact a transversal intersection point between the (n² − 3)-dimensional degeneracy stratum and the 3-dimensional image of the Brillouin zone. This, together with the observation in the preceding paragraph, implies the robustness of a Weyl point against small perturbations, as stated above.
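This codimension count can be checked directly in the smallest case, n = 2. The Python sketch below (our illustration, not part of the original argument) uses the Pauli decomposition H = d_0 1 + d_x σ_x + d_y σ_y + d_z σ_z: the eigenvalue gap equals 2√(d_x² + d_y² + d_z²), so closing it imposes exactly three real conditions.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(0)
d0, dx, dy, dz = rng.normal(size=4)
H = d0 * np.eye(2) + dx * sx + dy * sy + dz * sz   # generic 2x2 Hermitian matrix

gap = np.diff(np.linalg.eigvalsh(H))[0]
print(gap, 2 * np.sqrt(dx**2 + dy**2 + dz**2))     # identical: gap = 2|d|
```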
In the context of electronic (or more generally, phononic, photonic, magnonic, etc.) band structures, the Hamiltonian might depend not only on the wave vector but also on other physical parameters, such as mechanical strain applied to the crystal. In such a case, it is interesting to consider how the Weyl points move, merge or are born, as mechanical strain is varied. We will use the terminology that parameters characterising the position of the Weyl points are called configuration parameters, and all other parameters are called control parameters. In the above example, the configuration space (i.e., the space of configuration parameters) is the Brillouin zone, and the control space (the space of control parameters) is a six-dimensional space describing the mechanical strain tensor, which is a 3 × 3 symmetric real matrix.
So far, we discussed Weyl points in the context of band structures. However, the mathematical structures used in the above arguments are more general: they apply to parameter-dependent Hermitian matrices in general. Hence, Weyl points arise not only in band structures, but more generally, e.g., in parameter-dependent quantum systems [12,18,20,21]. If the parameter space of the quantum system is 3-dimensional, then twofold degenerate level crossings do arise typically. If the parameter space has more than 3 dimensions, then one can identify a 3-dimensional parameter manifold as the configuration space, and the complementary parameter manifold as the control space. In this picture, the Weyl points move in the configuration space as the control parameters are varied. Although for 3D band structures the natural configuration space is the Brillouin zone, which is a 3D torus, in a more general setting the configuration space does not have to be a torus; see, e.g., [20,21].
As argued above, the appearance of Weyl points in band structures of 3D materials is rather natural. However, other types of twofold degeneracies can be achieved by fine-tuning or by symmetries. For example, it has been argued in [22] that crystalline symmetries can 'stabilize' or 'protect' three other types of twofold degeneracy points, which are called charge-2 Weyl point, charge-3 Weyl point, and charge-4 Weyl point. These degeneracy points differ from Weyl points (which are sometimes called charge-1 Weyl points) in the following respects: (1) their dispersion relation is non-linear, (2) they are not robust against symmetry-breaking perturbations, i.e., they can be 'dissolved' to a set of Weyl points if the Hamiltonian is perturbed such that the symmetry is not preserved; correspondingly, they are often referred to as non-protected or non-generic.
The study of Weyl points and non-generic degeneracy structures has been proposed recently in multi-terminal Josephson circuits [13,14]. In particular, in the proposal of Weyl Josephson Circuits [14], whose quantum-mechanical Hamiltonian is a parameter-dependent Hermitian matrix, magnetic fluxes and gate voltages play the role of the parameters. Furthermore, the corresponding parameter spaces are cyclic, similarly to the Brillouin zone of crystals. Because of the strong analogy, it has been argued that Weyl Josephson Circuits can emulate Weyl points and non-generic degeneracy patterns in band structures [14,23]. Such non-generic degeneracy patterns may include the creation or annihilation of Weyl points, the presence of non-generic isolated degeneracy points and their dissolution to Weyl points upon deformation of the Hamiltonian [17,24], nodal lines [14] or surfaces, Weyl-point teleportation [23], symmetry-constrained chirality flip processes [15], etc.
Our present work builds upon the latter idea of emulating band-structure effects, but translates it to a simple classical mechanical setting: a system of linearly coupled harmonic oscillators, or more specifically, a ball-and-spring system. Such a system is described by a dynamical matrix D, which is a real symmetric n × n matrix, where the integer n ≥ 2 is the number of coordinates. For example, in the setup in Fig. 2a, the point mass (green circle) can move in two dimensions, hence n = 2. Note also that the dynamical matrix D has non-negative eigenvalues λ_1, . . . , λ_n ≥ 0, whose square roots ω_j = √λ_j (j = 1, . . . , n) provide the normal-mode eigenfrequencies.
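The following minimal Python sketch illustrates this basic workflow on a generic positive-semidefinite stand-in (not the dynamical matrix of Fig. 2a): the eigenfrequencies are obtained as the square roots of the eigenvalues of D.

```python
import numpy as np

A = np.random.default_rng(1).normal(size=(2, 2))
D = A @ A.T                               # a generic positive-semidefinite matrix
lam = np.linalg.eigvalsh(D)               # eigenvalues lambda_1 <= lambda_2
omega = np.sqrt(np.clip(lam, 0.0, None))  # normal-mode eigenfrequencies omega_j
print(omega)
```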
In fact, the dynamical matrix is a function of the parameters characterising the system, D = D(p), where p is the vector of parameters, e.g., spring constants and unstretched spring lengths. Notice that this setting is similar to that of parameter-dependent Hermitian matrices, with the important difference that the surface in the space of real symmetric matrices on which an eigenvalue is twofold degenerate has codimension 2 [18], unlike the Hermitian case discussed above, with codimension 3.
As a consequence, 2D variants of Weyl points, i.e., point-like twofold frequency degeneracies with linear dispersion in their vicinity, typically appear in classical mechanical systems described by a dynamical matrix D(p), when two parameters of the vector p are varied. This observation suggests that to emulate some of the band-structure effects listed above, it might be sufficient to engineer tunable classical mechanical ball-and-spring systems. This is what we pursue in this work.

In this section, we introduce a classical system composed of balls and springs, whose vibrational spectrum emulates a number of Weyl-point-related features of electronic band structures of crystals. To enhance the analogy between our setup and band structures, we engineer the configurational parameter space to have torus topology, similarly to the Brillouin zone. Our mechanical system, depicted in Fig. 2a, to be referred to as System A, exhibits the following features: (i) the existence of 2D Weyl points in the configurational space for a fixed control-parameter vector; (ii) the movement of 2D Weyl points in the configurational space as the control vector is varied; (iii) the creation and annihilation of oppositely charged Weyl-point pairs as the control vector is varied; and (iv) the 'chirality flip' effect [15], which is a special type of Weyl-point creation/annihilation promoted by the symmetry of the system.
The setup, shown in Fig. 2a, consists of a point mass m, three springs, and two rings. The motion of the mass is restricted to the plane of the figure, and the orientation of the x-y reference frame is also shown. The centers of the rings are located at the points (x, y) = (−d/2 − r_1, 0) and (x, y) = (d/2 + r_2, 0), and the radii of the rings are r_1 and r_2, respectively. On each ring, a spring is attached to a point of the ring, and the other ends of the two springs are attached to the mass. The two suspension points on the two rings are parametrised by the angles α and β. The top end of the third spring is attached to the suspension point at (0, R). The springs are characterised by their spring constants k_j and rest lengths l_j. The mass, whose vibrational modes we are interested in, is attached to these springs, and its equilibrium position depends on the system parameters.
The number of parameters of this setup is 13. In what follows we consider the angle parameters α and β as configuration parameters, and call others the control parameters. Therefore, the topology of the configuration space is a torus, similar to the Brillouin zone of a 2D crystal. The 11 control parameters are positive real numbers which we collect into a vector t. In what follows we use SI units for all physical quantities and omit units when specifying parameter values.
The key quantities we will describe here are the eigenfrequencies of the small oscillations (normal modes) of the mass in this setup. For a fixed set of control parameters (i.e., for a fixed control vector t), we define the mapping ω_t : B_A → R^2_+ that assigns the eigenfrequencies of the system to each point of the configuration space, in such a way that the first component is the greater eigenfrequency. The eigenfrequencies are the square roots of the eigenvalues of the dynamical matrix of the system.
Since System A consists of a single mass with its motion restricted to two dimensions, its dynamical matrix is a matrix in Sym_2(R), the vector space of 2 × 2 real symmetric matrices. It is instructive to decompose the dynamical matrix as a linear combination of Pauli matrices; this reads

D(α, β) = d_0(α, β) 1 + d_x(α, β) σ_x + d_z(α, β) σ_z,    (1)

where σ_x and σ_z are the Pauli X and Z matrices, the dependence on α and β is explicitly denoted, while the dependence on t is omitted for brevity. The normal modes of this mechanical system exhibit the Weyl-point features (i)-(iv) listed above, as shown in Figs. 2b, c, d. To obtain these results, we have computed the dynamical matrix, and from that, the eigenfrequency spectrum ω_t, as described in Appendix A.
(i) Existence of Weyl points. The eigenfrequency spectrum of System A is plotted as a function of the configurational parameters α and β, for a fixed control vector, in Fig. 2b; see the caption for parameter values. The spectrum contains two 2D Weyl points, indicated as the red and blue points, where the vibrational eigenfrequencies are degenerate. Each of the band crossing points seen in Fig. 2b has a nonzero topological charge. The topological charge of a 2D Weyl point, analogous to the Chern number of Weyl points in 3D band structures, is the winding number of the vector field (d_x, d_z) for a loop in the configurational space enclosing the degeneracy point. In Fig. 2b, red (blue) points denote topological charge +1 (−1). As illustrated in the figure, the sum of the topological charges of the Weyl points is zero.
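For illustration, such a winding number can be evaluated numerically by unwrapping the angle of the (d_x, d_z) field along the loop. The sketch below is ours; the toy fields (cos ϕ, sin ϕ) and its doubled-angle variant are stand-ins for the Pauli coefficients near a charge +1 and a charge +2 degeneracy point, respectively.

```python
import numpy as np

def winding_number(dx, dz):
    """Winding of the planar field (dx, dz) sampled along a closed loop."""
    theta = np.unwrap(np.arctan2(dz, dx))
    return int(round((theta[-1] - theta[0]) / (2 * np.pi)))

phi = np.linspace(0.0, 2 * np.pi, 201)                    # closed loop parameter
print(winding_number(np.cos(phi), np.sin(phi)))           # charge +1 toy field
print(winding_number(np.cos(2 * phi), np.sin(2 * phi)))   # charge +2 toy field
```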
(ii) Movement of Weyl points. By changing the control parameters t, the Weyl points trace out trajectories in the configuration space. This is shown in Fig. 2c, where the spring constant k_3 is varied, all other control parameters being fixed. The blue/red colors correspond to the topological charge. Darker Weyl points correspond to greater k_3 values. The darkest points correspond to the parameters of Fig. 2b.
(iii) Creation and annihilation of Weyl points. Consider the scenario when the third spring is taken out of the system, which corresponds to a control vector t with k_3 = 0. In this case, there are no Weyl points in the configuration space, because the longitudinal normal mode has a higher frequency than the transversal mode. Continuously increasing the spring constant k_3 from zero to k_3 = 0.76, Weyl points are still absent. Increasing k_3 further, to k_3 = 0.78, we observe the creation of two Weyl points, depicted as the faintest red and blue points in Fig. 2c; these then move away from each other upon further increase of k_3, as shown in Fig. 2c. This shows that transition patterns between different Weyl-point configurations can be studied in such a mechanical system.
(iv) Chirality flip. The chirality flip effect has been theoretically described in [15]. The crystal studied there has a high-symmetry plane, which imposes symmetry constraints on the Weyl points and their motion as a varying mechanical strain is applied to the crystal (see Fig. 3d in [15]). Strain plays the role of a control parameter, and the Brillouin zone is the configuration space. Before applying strain, a negatively charged Weyl point resides in the high-symmetry plane of the Brillouin zone, and a mirror-symmetric pair of positively charged Weyl points resides on the two sides of the plane. As strain is increased, the off-plane Weyl points approach the in-plane Weyl point. At a critical value of the strain, the three Weyl points merge, and for further increase of the strain, only a single positively charged Weyl point remains in the plane. From the viewpoint of the in-plane Weyl point, it has undergone a flip of its topological charge from negative to positive, hence the name 'chirality flip'.
An analogous effect is observed in System A, if we consider a special symmetric configuration of the latter, when the control parameters fulfill r_1 = r_2, k_1 = k_2, l_1 = l_2. In this case, the diagonal line α = β of the configurational space is analogous to the high-symmetry plane of the Brillouin zone in [15]. Furthermore, the Pauli coefficients of the dynamical matrix, defined in Eq. (1), have the following symmetry relations:

d_0(β, α) = d_0(α, β),  d_x(β, α) = −d_x(α, β),  d_z(β, α) = d_z(α, β).    (2)

These relations enforce a vanishing d_x component on the symmetry line, that is, d_x(α, α) = 0. A further consequence of Eq. (2) is that the spectrum is symmetric, ω(α, β) = ω(β, α), for both bands. Furthermore, the Weyl points appear in the configurational space symmetrically, such that mirror-symmetric partners have the same charge (see Fig. 2d).
The chirality flip effect in System A is illustrated in Fig. 2d. For a symmetric control parameter set (see caption), we plot the Weyl points as the spring constant k_3 is increased. Initially, there are 4 Weyl points, 2 of them (faint blue) on the symmetry line, and 2 of them forming a mirror pair off the symmetry line (faint red). By increasing k_3, the two off-line red Weyl points approach the symmetry axis, and merge with a blue Weyl point, leaving behind a single red Weyl point (dark red) on the axis - a clear manifestation of a chirality flip.
IV. MECHANICAL CHARGE-2 WEYL POINT
In the simplest tight-binding model of the electronic band structure of bilayer graphene, a non-generic degeneracy point appears at the K point of the Brillouin zone. In the vicinity of the K point, the electronic states are described approximately by the following effective Hamiltonian [16]:

H_eff(k_x, k_y) = −(ℏ²/2m) [(k_x² − k_y²) σ_x + 2 k_x k_y σ_y].    (3)

Here, m is the effective mass of the electrons, and k_x and k_y are the wave vector components measured from the K point. As discussed above, in this band-structure setting, k_x and k_y are the configuration parameters.
In this section, we introduce a classical ball-and-spring system that emulates the non-generic degeneracy point of bilayer graphene described by Eq. (3). First, we summarize the known characteristic properties of the latter (see (i)-(iv) below), show that the ball-and-spring system indeed emulates most of those properties, and also prove a specific mathematical equivalence (linear right equivalence) between the two systems.
A. Electrons in bilayer graphene
(i) Quadratic dispersion. Eq. (3) is an effective Hamiltonian for electrons in bilayer graphene in the vicinity of the K point. We will use the Pauli-matrix decomposition of this effective Hamiltonian. This reads, omitting the constant ℏ²/2m, as H_eff(k_x, k_y) = h_x(k_x, k_y) σ_x + h_y(k_x, k_y) σ_y, where the coefficients are

h_x(k_x, k_y) = k_x² − k_y²,  h_y(k_x, k_y) = 2 k_x k_y.    (4)

The difference between the eigenvalues of H_eff is proportional to √(h_x² + h_y²) = k_x² + k_y². Hence, we say that the degeneracy at (k_x, k_y) = 0 splits quadratically as a function of the configurational parameters (i.e., the wave vector).
(ii) Topological charge is 2. Similarly to the topological charge of the (d_x, d_z) vector field, described in Sec. III as a winding number around the degeneracy point in the origin of the configurational space, a topological charge is also associated to the degeneracy point of the (h_x, h_y) vector field. In fact, the topological charge of the latter is 2.
(Note that in a mathematical context, the term 'local degree' is used for the topological charge.)

(iii) Local multiplicity is 4. In [17] it is shown that isolated twofold degeneracy points have, besides the topological charge, another characteristic, the local multiplicity. The local multiplicity associated to such a degeneracy point is a positive integer. In particular, for bilayer graphene, the local multiplicity associated to the vector field (h_x, h_y) is 4.
(iv) Perturbations can dissolve the quadratic degeneracy point into 2 or 4 Weyl points. Upon a generic 'perturbation' or 'deformation' of the Hamiltonian, i.e., upon a generic continuous displacement in the control space, the degeneracy point is continuously dissolved into Weyl points. The absolute value of the topological charge determines the minimum number of newborn Weyl points in that situation. The local multiplicity determines the maximum number of newborn Weyl points [17]. Combining these general rules with (ii) and (iii) above, it is concluded for the degeneracy point of bilayer graphene that perturbations can dissolve it into 2 or 4 Weyl points. For example, extending the simplest tight-binding model of bilayer graphene by including an additional hopping amplitude induces a perturbation to H_eff which dissolves the quadratic degeneracy point into four Weyl points, known as the 'trigonal warping' effect. On the other hand, adding mechanical strain to the tight-binding model perturbs H_eff such that it dissolves the quadratic degeneracy point into two Weyl points. Note also that the topological charge is conserved in these transitions.
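These statements can be checked numerically. In the sketch below (our illustration), the winding of the field of Eq. (4) is 2; adding a small linear term of an assumed warping-like form w(k_x, −k_y) — a stand-in for the actual trigonal-warping perturbation, which is not restated here — keeps the total charge on a large loop at 2, while a small loop around the origin now winds −1, consistent with a dissolution into one charge −1 point plus three charge +1 satellites.

```python
import numpy as np

def winding(hx, hy):
    theta = np.unwrap(np.arctan2(hy, hx))
    return int(round((theta[-1] - theta[0]) / (2 * np.pi)))

def field(kx, ky, w=0.0):
    # Eq. (4) plus an assumed warping-like linear term of strength w
    return kx**2 - ky**2 + w * kx, 2 * kx * ky - w * ky

phi = np.linspace(0.0, 2 * np.pi, 2001)
for radius, w in [(1.0, 0.0), (1.0, 0.05), (1e-3, 0.05)]:
    kx, ky = radius * np.cos(phi), radius * np.sin(phi)
    print(radius, w, winding(*field(kx, ky, w)))
# -> 2 (unperturbed), 2 (large loop, all four points), -1 (small loop, origin only)
```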
B. Vibrations of System B
Here, we propose and study a classical ball-and-spring mechanical system whose parameter-dependent eigenfrequency spectrum shares the characteristics of bilayer graphene described in the previous section.
The mechanical system we study here, to be called System B, is shown in Fig. 3a. It consists of three point masses (m_i, i ∈ {1, 2, 3}), connected by three springs with spring constants k_i and rest lengths l_i. The masses can move only in the plane, i.e., their positions are described by 6 Cartesian coordinates altogether. As before, we will focus on the eigenfrequencies of small, close-to-equilibrium oscillations of the system. In equilibrium, the masses form a triangle whose geometry is determined by the rest lengths l_i, which are assumed to fulfill the triangle inequalities. Equivalently, we can characterize the triangle by two angles α and β, and a single rest length, i.e., l_1. We will use this latter parametrization, and will omit l_1, as its value does not affect the normal modes of the system.
We specify reference values for our parameters, and define the detunings from these reference values, e.g., ∆α = α − α^(0) and ∆β = β − β^(0). These detunings can be thought of as perturbations of parameters with respect to their reference values. We set ∆m_3 = 0 and ∆k_3 = 0, without loss of generality. Hence, the total number of parameters of the system is 6, listed as ∆k_{1,2}, ∆m_{1,2}, ∆α, and ∆β.
At this point, we can anticipate the twofold spectral degeneracy of this mechanical setup, which will play the role of the twofold spectral degeneracy of H_eff(k_x = 0, k_y = 0) of Eq. (3). System B is described by 6 coordinates, hence its dynamical matrix, describing the normal modes, is a 6 × 6 real symmetric matrix, depending on the configuration and control parameters. Consider the case when all detunings are set to zero: ∆m_1 = ∆m_2 = 0 and t = 0. Then the point masses form an equilateral triangle, and the symmetry group of the system is the dihedral group D_3. This group does have a two-dimensional irreducible representation (irrep), suggesting that the normal-mode eigenfrequency spectrum might have a symmetry-protected twofold degeneracy.
We do find that this is indeed the case: for this zero-detuning case, the 4th and 5th eigenfrequencies (counting from lowest to highest) are degenerate, and the modes transform according to the two-dimensional E irrep of D_3. This degeneracy is split as we move away from the origin of the configuration space (∆m_{1,2} ≠ 0) and hence break the symmetry.
The 6 × 6 dynamical matrix of System B has three normal modes with zero eigenfrequency. These correspond to the two independent translations and the single rotation of the system. The remaining three normal modes have non-zero eigenfrequencies, and for t = 0, ∆m_{1,2} = 0 there is a single degenerate pair of normal modes with finite frequency. For a fixed value of the control parameters t, we define the mapping ω_t : B_B → R^2_+ that assigns to each point of the configuration space those eigenfrequencies which are degenerate in the symmetric case. Here, B_B denotes the configuration space.
(i) Quadratic dispersion. For zero detuning of the control parameters, the splitting of the degenerate eigenfrequencies is of second order in the configuration parameters ∆m_{1,2}. This quadratic dispersion in the configuration space is illustrated in Fig. 3b, where the difference of the eigenfrequencies, obtained from numerical diagonalization of the dynamical matrix, is plotted for t = 0.
The quadratic splitting of the degenerate frequencies can also be proven analytically, as discussed in App. B. To this end, we express the effective 2 × 2 dynamical matrix D_eff of the quasi-degenerate subspace; the explicit form is shown in Eq. (B16). This matrix D_eff is obtained perturbatively in the configuration parameters, at the symmetric control point t = 0, using second-order quasi-degenerate (Schrieffer-Wolff) perturbation theory. Neglecting the unit-matrix term in the effective dynamical matrix, we obtain

D̃_eff(m_x, m_y) = d_x(m_x, m_y) σ_x + d_z(m_x, m_y) σ_z,

where d_x and d_z are quadratic forms in the configuration parameters. Here, we have introduced the simplified notation m_x = ∆m_1 and m_y = ∆m_2. The fact that d_x and d_z are second order in the configuration parameters implies the quadratic dispersion.
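The perturbative step can be mimicked numerically. The sketch below (ours, on a toy 4 × 4 symmetric matrix rather than the 6 × 6 dynamical matrix of System B) implements the standard second-order quasi-degenerate formula, (H_eff)_{ab} = E_0 δ_{ab} + V_{ab} + Σ_{l∉deg} V_{al} V_{lb}/(E_0 − E_l), and compares its eigenvalues with exact diagonalization.

```python
import numpy as np

def effective_matrix(D0, V, idx):
    """Second-order quasi-degenerate (Schrieffer-Wolff) effective matrix
    for the degenerate levels `idx` of D0 under a small perturbation V."""
    E, U = np.linalg.eigh(D0)
    Vb = U.T @ V @ U                   # perturbation in the eigenbasis of D0
    E0 = E[idx].mean()                 # common energy of the degenerate pair
    rest = [l for l in range(len(E)) if l not in idx]
    blk = np.array([[Vb[a, b] + sum(Vb[a, l] * Vb[l, b] / (E0 - E[l])
                                    for l in rest)
                     for b in idx] for a in idx])
    return E0 * np.eye(len(idx)) + blk

rng = np.random.default_rng(2)
D0 = np.diag([0.0, 1.0, 1.0, 3.0])     # toy matrix with a degenerate pair
V = rng.normal(scale=1e-2, size=(4, 4))
V = (V + V.T) / 2                      # small symmetric perturbation
print(np.linalg.eigvalsh(D0 + V)[1:3])                      # exact
print(np.linalg.eigvalsh(effective_matrix(D0, V, [1, 2])))  # agrees to O(V^3)
```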
(ii) Topological charge is 2. This result can be obtained from a numerical evaluation of the winding number of the vector field (d_x, d_z), or can be read off from a visualisation of the vector field.
(iii) Local multiplicity is 4. This result can be obtained using any of the methods for calculation of the local multiplicity, outlined in [17].
(iv) Deformations can dissolve the quadratic degeneracy point into 2 or 4 Weyl points. Upon generic deformation (i.e., change of the control vector), a non-generic degeneracy point, such as the quadratic degeneracy points studied here, is dissolved to Weyl points. The absolute value of the topological charge (local multiplicity) of the original degeneracy point is a lower (upper) bound on the number of newborn Weyl points [17]. Because of (ii) and (iii) above, we expect that deformations of System B can dissolve the quadratic degeneracy point into two or four Weyl points.
We do confirm the two-Weyl-point scenario, which is illustrated in Fig. 3c. There, the green circle depicts the quadratic degeneracy point at t = 0. The red points show how that is dissolved to two Weyl points, one on the left, one on the right, each with unit positive charge, as the control parameter ∆k 1 is increased from zero.
We leave it as an interesting open question if the four-Weyl-point scenario, emulating the trigonal warping effect in the bilayer graphene band structure [16], can be realized in this physical setting. Without going into detail, we do confirm that the four-Weyl-point scenario can be realized in a mathematical sense by a deformation D̃_eff → D̃_{t,eff}, whose coefficients are defined in Eq. (12). For example, we have checked numerically (not shown) that four Weyl points are born from the quadratic degeneracy point if the above deformation is applied such that 0 < λ < 10^{-2}. However, we do not know if such a deformation can be realized in a physical sense, that is, by tuning the physical parameters in the control vector t as D_eff → D̃_{t,eff}.
C. Bilayer graphene and System B are linear right equivalent
We have just shown that the characteristic properties (i)-(iv) of the bilayer graphene effective Hamiltonian also hold for System B. Here, we establish a stronger statement: the vector field d of System B is the composition of the bilayer-graphene vector field h with a diffeomorphism f, i.e., d = h ∘ f holds for all (m_x, m_y) ∈ B_B. To obtain the diffeomorphism f, we assume that it is a linear map, characterized by a 2 × 2 real matrix F such that f(m) = Fm. Inserting the defining equations of the mappings h (Eq. (4)), d (Eq. (6)) and f (Eq. (10)) into Eq. (8), and using the identifications m_x ≡ k_x and m_y ≡ k_y, we find 6 equations for the unknown matrix elements of F. Remarkably, this system of equations is solvable; solving it yields the matrix elements given in Eq. (12). Note that the matrix F defined by this solution is invertible, implying that the corresponding map f is indeed a diffeomorphism. Note also that a simultaneous sign flip of the matrix elements in Eq. (12) yields an alternative solution.
With this, we have shown that the local vector fields d and h which describe fundamentally different physical systems are right-equivalent. This explains the similarities of their characteristics discussed above.
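The coefficient-matching procedure can be reproduced symbolically. In the sympy sketch below (our illustration), the field d is manufactured from an assumed matrix F_true, since the explicit coefficients of the paper's d are not restated here; this guarantees that the six coefficient equations are solvable, and the solver recovers F_true together with its global sign flip.

```python
import sympy as sp

mx, my = sp.symbols('m_x m_y', real=True)
F = sp.Matrix(2, 2, sp.symbols('F11 F12 F21 F22', real=True))
m = sp.Matrix([mx, my])

def h(k):
    """Bilayer-graphene field of Eq. (4): (kx**2 - ky**2, 2*kx*ky)."""
    kx, ky = k
    return sp.Matrix([kx**2 - ky**2, 2 * kx * ky])

F_true = sp.Matrix([[1, 2], [0, 1]])  # assumed matrix; guarantees solvability
d = h(F_true * m)                     # stand-in for the field d of Eq. (6)

residual = sp.expand(h(F * m) - d)    # require d = h(F m) componentwise
eqs = [c for comp in residual for c in sp.Poly(comp, mx, my).coeffs()]
print(len(eqs))                            # 6 coefficient equations
print(sp.solve(eqs, list(F), dict=True))   # recovers F_true and -F_true
```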
V. DISCUSSION
A. Relation to prior work

Translation invariance vs. spatial compactness. Engineered, macroscopic mechanical systems have already been studied to investigate topological effects arising in band structures. The studies we are aware of rely on the concept of translationally invariant, crystal-like metamaterials, where concepts such as wave vectors and Brillouin zones arise naturally; these systems involve an extensive number of degrees of freedom [8,9,25]. In contrast, in this work, we propose and study spatially compact mechanical setups, consisting of only a few ingredients. In the systems we consider, the configurational parameters are only analogous to the wave vector. An inherent advantage of these setups is that only a few system elements and a few degrees of freedom have to be controlled.
Simplification and 'de-quantization' of the Weyl Josephson Circuit idea. This work is partly inspired by the prior proposals of emulating band-structure effects using multiply connected superconducting devices [12][13][14]. We think that spatially compact mechanical setups such as those studied in this work can provide a simplified, cost-efficient, and 'de-quantized' alternative platform for such emulator experiments. With table-top mechanical setups, the need for highly specialized fabrication and refrigeration technology is alleviated. Furthermore, measurement technology based on complex microwave sources and detectors for superconducting circuits can probably be substituted by more basic equipment (e.g., cameras or microphones for data acquisition, sound or ultrasound generators as driving sources), for mechanical experiments.
Exploring degeneracy points in various matrix spaces. Mechanical systems can also be regarded as complementary to superconducting devices, in the sense that they cover different matrix spaces. Namely, superconducting circuits provide access to degeneracy structures of particle-hole-symmetric [13] and Hermitian [14] matrices, whereas spatially compact mechanical systems are described by real symmetric matrices.
B. Open problems
In-situ control of parameters in a mechanical setup. In our work, we study how the eigenfrequencies of coupled mechanical oscillators change as parameters are varied. A brute-force experimental realisation of the effects discussed here should be possible by fabricating and measuring many different samples with fixed but different parameter values. An interesting experimental challenge is to find means for in-situ parameter control. This would alleviate the need to fabricate as many samples as there are parameter settings to be investigated.
Frequency degeneracy points in spatially compact classical ac electronic circuits. Besides the mechanical setups considered here, another class of classical systems where the physics of degeneracy points can be studied is that of ac electronic circuits. Emulating topological materials using translational invariant circuits ('topoelectrical circuits') is a field that already exists [26,27].
Trigonal warping of bilayer graphene. As discussed in Sec. IV B, our System B emulates the quadratic degeneracy point of bilayer graphene, but we have not found a perturbation respecting the physical constraints of System B that emulates the trigonal warping effect known from bilayer graphene. This remains an interesting open problem.
A systematic construction of mechanical systems emulating band-structure effects. In this work, we have identified two mechanical setups where interesting Weylpoint properties known from condensed-matter theory can be emulated. These mechanical setups were found intuitively. A natural follow-up open problem is as follows: given a condensed-matter band structure model (e.g., electronic, phononic, magnonic) with an interesting band degeneracy pattern (e.g., non-generic degeneracy point, nodal loop, nodal surface, etc.), is it possible to systematically construct a spatially compact mechanical emulator reproducing that pattern? System A in our work illustrates that the torus topology of the Brillouin zone can be emulated, e.g., using suspension loops.
Replacing a cold-atom experiment with classical mechanics. In a recent breakthrough study [28], the authors performed an experiment using ultracold atoms, which used a dynamical method to probe the winding numbers of a linear and a quadratic degeneracy point in the momentum space of a honeycomb lattice. Such an experiment requires a highly coherent atomic ensemble and advanced control and measurement technology. Our present work proves that degeneracy points with quadratic splitting can be engineered in simple mechanical systems, hence it highlights the opportunity of repeating the cold-atom experiment using only classical mechanics. Such a mechanical experiment would rely on the in-situ time-dependent tunability of the system parameters, as discussed above.
Expanding the classification of isolated twofold degeneracy points in crystals. In [22], band degeneracies in time-reversal invariant crystalline band structures were classified, and four distinct types of twofold degenerate isolated degeneracy points were identified. In our work, we have illustrated two of those four types: Weyl points and quadratic degeneracy points (identified as charge-2 Weyl points in [22]). An open challenge is to engineer mechanical systems where the other two types of degeneracy points (charge-3 and charge-4 Weyl points) arise. A further idea is to exploit the fact that spatially compact mechanical oscillators are free of the strong constraints imposed by crystal symmetries, hence they could be used to realize more 'exotic' degeneracy point types which are impossible to realize in crystalline band structures. Similar questions arise in the context of higher-order degeneracy points, i.e., degeneracies where more than 2 normal modes share the same eigenfrequency [29].
VI. CONCLUSION
We have proposed simple ball-and-spring setups which illustrate that Weyl points and their associated features, characteristic of crystalline band structures, can be emulated in classical mechanical systems. We have shown that the parameter-dependent eigenfrequency spectrum of spatially compact ball-and-spring systems can exhibit (i) the appearance of Weyl points, (ii) the movement of Weyl points, (iii) the creation/annihilation of Weyl points, (iv) the chirality flip effect, an example of symmetry-constrained creation/annihilation, (v) quadratic degeneracy points and their dissolution to Weyl points. Our work opens a route toward table-top experiments on Weyl point physics, enabling the exploration of effects that have been proposed or realized with coherent quantum systems.
Appendix A: The spectrum of System A

The elastic potential of System A reads

U_t(x, y) = Σ_{i=1}^{3} (k_i/2) ( √((x − x_i)² + (y − y_i)²) − l_i )²,

where (x_i, y_i) is the suspension point of the i-th spring, and l_i is the rest length of the spring.
The coordinates (x_i, y_i) of the suspension points of the three springs can be written in terms of the system parameters, as in Eq. (A3). In equilibrium, the elastic potential U_t is minimized over the position of the point mass. To determine the eigenfrequencies, we first calculate the equilibrium coordinates of the body, by solving

∂U_t/∂x = ∂U_t/∂y = 0,

which is a system of nonlinear equations. We obtain the solution numerically for a specific set of parameters, using the built-in methods of SciPy. Given the equilibrium position of the mass (x_0, y_0), its small oscillations are governed by the linearized Newton equations, where the coordinates x and y are relative coordinates with respect to the equilibrium coordinates, and the restoring force has been linearized in the relative coordinates. Furthermore, in Eq. (A5), the second-order partial derivatives of the potential U_t are evaluated at the equilibrium position (x_0, y_0). We collect the displacements x and y into a single vector Y = (x, y)^T, with which the linearized Newton equations can be written in the compact form

m Ÿ = −H Y,

where we have introduced the Hessian H of the elastic potential at (x_0, y_0). Then, we make use of the fact that we are looking for vibrational modes fulfilling Ÿ = −ω² Y. Hence, we reduce the linearized Newton equation to the following eigenvalue equation:

ω² Y = (H/m) Y.

The matrix on the right-hand side, D_t = H/m, is the dynamical matrix of the system. The mode eigenfrequencies are obtained by taking the square roots of the eigenvalues of the dynamical matrix D_t. The square roots of the eigenvalues are positive as long as the Hessian is positive-definite, which is guaranteed in case of a stable equilibrium position. The potential U_t depends on the angles α and β through the positions x_{1,2}, y_{1,2} of Eq. (A3); hence the above method provides the spectrum ω_t : B_A → R^2_+ of System A. In Sec. III, the topological charge of the Weyl points is introduced as the winding number of the vector field (d_x, d_z), the latter being obtained from the Pauli decomposition of the (α, β)-dependent dynamical matrix. This topological charge is defined as the integral

Q = (1/2π) ∫_0^{2π} dϕ ( d̂(ϕ) × ∂_ϕ d̂(ϕ) )_3,   (A11)

where we have introduced d̂ = d/|d|, with d = (d_x, d_z, 0), and have used the 3D cross product (×) and the notation ( )_3 referring to the third component of a three-component vector. The integration contour C encircles the degeneracy point (and only this degeneracy point) and is parametrized by the angle variable ϕ ∈ [0, 2π). Below, we introduce the method we used to locate the Weyl points in the configuration space, and to compute their topological charges, using the (d_x, d_z) vector field. The calculation of the topological charge is based on a discretization of the integral of Eq. (A11) on a finite grid.
Then, we assign numbers to vertices, edges, and plaquettes of the grid as follows. We find the dynamical matrix, and obtain the (d_x, d_z) vector, at each vertex of the grid. For the edges, we calculate the phase difference Φ between neighboring vertices (j, k) and (j', k') as the difference of the angles of the (d_x, d_z) vectors at the two vertices, wrapped into a principal interval of length 2π, where (j', k') denotes a neighbour of (j, k), i.e., (j + 1, k), (j − 1, k), (j, k + 1) or (j, k − 1). In such a way, we assign a phase difference to each oriented edge of the grid. Note that Φ depends on the orientation of the edge. Finally, we assign the integer (vortex number) Q_{(j,k)}, defined as the oriented sum of the phase differences along the boundary of the plaquette (j, k) divided by 2π, to the plaquette (j, k). Q_{(j,k)} indicates the winding of the (d_x, d_z) vector field on the boundary of the plaquette, hence it is a good indicator of the position and the topological charge of Weyl points. We use this technique to find Weyl points in the configuration space of System A.
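A compact reimplementation of this plaquette ('vortex number') bookkeeping is sketched below (our code, tested on a toy field with a single +1 vortex): phase differences on edges are wrapped into [−π, π), and their oriented sum around each plaquette, divided by 2π, gives the integer Q_{(j,k)}.

```python
import numpy as np

def vortex_numbers(dx, dz):
    """Integer winding on each plaquette of a grid of (dx, dz) samples."""
    phase = np.arctan2(dz, dx)

    def wrap(a):                        # wrap a phase difference into [-pi, pi)
        return (a + np.pi) % (2 * np.pi) - np.pi

    ex = wrap(np.diff(phase, axis=0))   # edges along the first grid axis
    ey = wrap(np.diff(phase, axis=1))   # edges along the second grid axis
    # Oriented sum of edge phase differences around each plaquette:
    circ = ex[:, :-1] + ey[1:, :] - ex[:, 1:] - ey[:-1, :]
    return np.rint(circ / (2 * np.pi)).astype(int)

a = np.linspace(-1.0, 1.0, 40)          # even grid size: vortex sits inside a plaquette
A, B = np.meshgrid(a, a, indexing='ij')
Q = vortex_numbers(A, B)                # toy field (d_x, d_z) = (alpha, beta)
print(Q.sum(), np.argwhere(Q != 0))     # total charge +1, at the central plaquette
```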
In the main text, we have analyzed the chirality flip effect, which has been recently predicted in the context of electronic band structure theory, and which is an effect promoted by certain symmetries of the crystal. In the rest of the section, we discuss the symmetry properties of System A, which promote the chirality flip effect illustrated in Fig. 2d.
Let us consider a control vector t such that the system has mirror symmetry upon reflection along the y axis for any α = β. For such a mirror-symmetric setting, the elastic potential has the symmetry U_t(x, y; α, β) = U_t(−x, y; β, α). (A15) Denote the equilibrium position of the mass for angle values (α, β) as (x_0^{αβ}, y_0^{αβ}). Then, it holds that (x_0^{βα}, y_0^{βα}) = (−x_0^{αβ}, y_0^{αβ}). Equation (A15) creates a relation between the partial derivatives of the potential at (α, β) and (β, α). Similarly, it can be shown that the sign of the partial derivative of the potential with respect to y does not change upon the interchange of the angles. From this, we conclude that only the mixed second-order partial derivative ∂_x ∂_y U_t changes sign upon the exchange of the angles. These relations imply that the dynamical matrix D_t(β, α) is related to D_t(α, β) as shown in Eq. (2): the diagonal entries are identical, while the sign of the off-diagonal entries is flipped. As a consequence, the eigenvalues of D_t(β, α) and D_t(α, β) are the same, hence the spectrum is symmetric to the α = β line. Due to the sign change of the off-diagonal entry of the dynamical matrix upon interchanging the angles, the topological charges of the symmetry-related Weyl points are the same. Recall that the above is valid only in case of a mirror-symmetric control vector t. If the mirror symmetry is broken, then the spectrum is no longer symmetric.
Appendix B: The spectrum of System B
In this section, we discuss the calculation of the eigenfrequency spectrum of the small oscillations in System B.
The three point masses of System B are characterized by 6 coordinates. We collect the displacements of the masses with respect to their equilibrium coordinates into a single vector X = (x_1, y_1, x_2, y_2, x_3, y_3)^T. Furthermore, we define the three-component vector S = (S_1, S_2, S_3)^T that contains the spring elongations, with S_i being positive if the i-th spring is stretched and negative if it is compressed.

In the linear approximation, the spring elongations are linear functions of the displacements, i.e., S = RX, where R is a real matrix of size 3 × 6, to be specified below. Finally, we define the vector of spring forces F = (F_1, F_2, F_3). The component F_i denotes the force exerted on the masses by the i-th spring. Each force F_i is regarded as a real scalar, since the spring force vector is parallel to the spring itself. We use the convention that F_i is positive when the spring is compressed.
The above definitions imply that F can be expressed as F = −KS, where K is a real 3 × 3 matrix that contains the spring constants, K = diag(k_1, k_2, k_3). The spring forces couple to the displacement vector X via the matrix R^T [30]. Then the Newton equations of motion can be written as

M Ẍ = −R^T K R X,   (B2)

where M is the mass matrix of the system, namely M = diag(m_1, m_1, m_2, m_2, m_3, m_3).

FIG. 4. Normal modes of System B with non-zero eigenfrequency. The leftmost normal mode belongs to the fully symmetric irreducible representation, while the other two normal modes belong to the E irrep, which is two-dimensional. These are differentiated by the eigenvalue of the vertical mirroring operation: the normal modes have ±1 eigenvalue with respect to this mirroring, with the −1 eigenvalue corresponding to the rightmost normal mode.
As the next step, we make use of the fact that we are looking for vibrational modes fulfilling Ẍ = −ω² X. Furthermore, we multiply both sides of Eq. (B2) by M^{−1/2} from the left, to obtain the eigenvalue equation

ω² Y = M^{−1/2} R^T K R M^{−1/2} Y,   (B4)

where we have introduced the mass-normalized eigenvectors Y = M^{1/2} X. The matrix on the right-hand side of Eq. (B4), D = M^{−1/2} R^T K R M^{−1/2}, is the dynamical matrix of the system; the mass normalization renders the dynamical matrix symmetric. Note that up to this point, our derivation is general, i.e., it does not exploit any symmetry assumptions for the system. To solve the above eigenvalue problem we need to determine the matrix R, which couples the displacements to the spring elongations. Using simple trigonometric identities we obtain the explicit form of R in Eq. (B5); we emphasize that we use the notation shown in Fig. 3a. Using the above form of the matrix R, one can evaluate the matrix product in Eq. (B4) to obtain the dynamical matrix of the system. The eigenvalues can be calculated for arbitrary parameter values by numerical diagonalization of the dynamical matrix. The eigenfrequencies of the system are the square roots of the eigenvalues.
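The whole construction can be verified in a few lines of Python. The sketch below (ours, with assumed unit masses and spring constants) assembles R for an equilateral triangle, forms D = M^{−1/2} R^T K R M^{−1/2}, and reproduces the expected spectrum: three zero modes, a doubly degenerate pair, and the breathing mode.

```python
import numpy as np

k = m = 1.0
angles = 2 * np.pi * np.arange(3) / 3 + np.pi / 2
pos = np.column_stack([np.cos(angles), np.sin(angles)])  # equilateral triangle
edges = [(0, 1), (1, 2), (2, 0)]                          # springs along the edges

# Matrix R coupling displacements to linearized spring elongations, S = R X.
R = np.zeros((3, 6))
for i, (a, b) in enumerate(edges):
    u = pos[b] - pos[a]
    u /= np.linalg.norm(u)            # unit vector along the i-th spring
    R[i, 2 * a:2 * a + 2] = -u        # S_i = u . (X_b - X_a)
    R[i, 2 * b:2 * b + 2] = u

K = k * np.eye(3)                     # spring-constant matrix
M_inv_sqrt = np.eye(6) / np.sqrt(m)   # all masses equal here
D = M_inv_sqrt @ R.T @ K @ R @ M_inv_sqrt
omega = np.sqrt(np.clip(np.linalg.eigvalsh(D), 0.0, None))
print(np.round(omega, 6))                            # 0, 0, 0, then the finite modes
print(np.sqrt(3 * k / (2 * m)), np.sqrt(3 * k / m))  # degenerate pair, breathing mode
```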
If all the masses, springs, and angles are identical, then the symmetry group of the system is D_3 [31]. Since this group does have a two-dimensional irrep, the spectrum of System B may have a twofold eigenfrequency degeneracy - and indeed, this is the case.
In terms of irreps, one vibrational mode belongs to the irrep A_1, which implies that this mode is symmetric under all symmetry operations of the symmetry group. This is the so-called breathing mode, and it has the highest vibrational frequency ω = √(3k/m). We use the simplified notation k and m because here we consider the case when all the springs and masses are identical. The other two vibrational modes belong to the E irrep, which is two-dimensional, meaning that these modes are degenerate. Their common eigenfrequency is ω = √(3k/(2m)). These normal modes are shown in Fig. 4.
The remaining 3 normal modes have zero eigenfrequency. These normal modes correspond (i) to the translation of the whole system along the x and y directions and (ii) to the rotation of the whole system about its center of mass. In terms of irreps, the two normal modes (i) transform into each other under the operations of D_3, hence correspond to the E irrep, while the single normal mode (ii) corresponds to the A_2 irrep, as the displacements change sign upon reflection.
We are interested in how the finite-frequency degeneracy of the modes of the E irrep (shown in Fig. 4) splits as the parameters of the system are changed. To describe this splitting, we utilize quasi-degenerate perturbation theory (Schrieffer-Wolff transformation) [32] to derive an effective 2 × 2 dynamical matrix in the degenerate subspace.
Due to the symmetry of the setup, we use the simplified notation m_x = ∆m_1 and m_y = ∆m_2. We do this because we anticipate that the frequency splitting is second order in ∆m_{1,2}. The dynamical matrix can therefore be approximated by its expansion in the detunings; as the next step, we expand the brackets in Eq. (B7), and keep terms only up to second order in the mass detunings. Consider now a control vector with ∆k_1 ≠ 0. In this case, the charge-2 Weyl point splits into regular Weyl points (a.k.a. charge-1 Weyl points). In the main text, we have shown results related to these Weyl points. To find the Weyl points numerically, we use the method discussed in App. D of [23]. Then, for each Weyl point found, we carry out a numerical Schrieffer-Wolff transformation to obtain the effective dynamical matrix around the Weyl point. The charge of the Weyl point is the winding of the vector field defined by the effective dynamical matrix. Here, it is important to keep the orientation of the quasi-degenerate subspace fixed as the control parameters are varied, because the winding of the vector field does depend on the orientation of the subspace. This orientation can be fixed by choosing the sign of the normal modes consistently. | 2023-02-17T06:42:24.109Z | 2023-02-16T00:00:00.000 | {
"year": 2023,
"sha1": "2fe528a5e4ae3da77bc714edceae2ffffedb0657",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "2fe528a5e4ae3da77bc714edceae2ffffedb0657",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
222180327 | pes2o/s2orc | v3-fos-license | Identifying the space source term problem for time-space-fractional diffusion equation
In this paper, we consider an inverse source problem for the time-space-fractional diffusion equation. We prove that the problem is severely ill-posed in the sense of Hadamard. By applying the quasi-reversibility regularization method, we propose a regularized solution to problem (1.1). After that, we give error estimates between the sought solution and the regularized solution under a prior parameter choice rule and a posterior parameter choice rule, respectively. Finally, we present a numerical example showing that the proposed method works well.
Introduction
Let T be a given positive number and Ω a bounded domain in R^n (n ≥ 1) with a smooth boundary ∂Ω. In this work, we consider the inverse source problem of the time-fractional diffusion equation as follows: where D_t^β u(x, t) is the Caputo fractional derivative of order β, defined [1] in the following form:

D_t^β u(x, t) = (1/Γ(1 − β)) ∫_0^t (t − s)^{−β} ∂u(x, s)/∂s ds,  0 < β < 1,

where Γ(·) is the Gamma function. In practice, the exact data (g, , ϕ) are noised by observation data (g^ε, ^ε, ϕ^ε), where ε denotes the noise level; that is, the distance between the observed and the exact data is of order ε.
Here, g(x), (x), and ϕ(t) are given data. It is well known that, even when ε is small, the reconstructed source f(x) may have a large error: the inverse source problem mentioned above is ill-posed. The general definition of an ill-posed problem was introduced in [2]. Therefore, regularization is needed.
As is well known, over the last few decades fractional calculus has had a great influence on mathematical theory and on its applications to modeling real-world problems. Fractional calculus has many applications in mechanics, physics, engineering science, etc.; we refer the reader to the extensive published work on these topics, such as [3-21] and the references cited therein. This makes the present model attractive to study.
The space source term problem for time-fractional partial differential equations has attracted a lot of attention, and much work has been devoted to studying many aspects of this problem, specifically as follows.
In 2019, the authors Yan, Xiong and Wei proposed a conjugate gradient algorithm to solve the Tikhonov regularization problem for the case γ = 1.
In the case f(x) = 1, in 2014, Fan Yang and his group considered the Fourier transform and the quasi-reversibility regularization method; see [24]. Recently, the simple source problem, i.e., ϕ(t) = 1 and γ = 1 in Eq. (1.1), has been considered by Fan Yang, Zhang and Li; see [20,21,25-27]. These authors used the Landweber iterative regularization, truncation regularization and Tikhonov regularization methods to solve the problem, and achieved convergence rates of order p/(p+1) for 0 < p < 2 and 1/2 for p > 2, respectively.
The problem (1.1) with discrete random noise has been studied by Tuan et al., who used filter regularization and trigonometric methods to solve problem (1.1); see [28-30]. To the best of our knowledge, results on applying the quasi-reversibility regularization method to the inverse source problem for the time-space-fractional diffusion equation are still limited, and this is one of the first results for this type of problem. In particular, we address the case where L^γ and the right-hand side ϕ(t)f(x) are represented in a general form. Motivated by all the above reasons, the present paper applies the quasi-reversibility regularization method (QR method) to solve the problem (1.1).
The outline of the paper is as follows. In Sect. 2, we present some basic concepts, the functional setting, and the relevant definitions, and we show that the problem is ill-posed. The quasi-reversibility method, together with its convergence estimates under a prior and a posterior parameter choice rule, is presented in Sect. 3, and a numerical example is given afterwards.
Preliminary results
The eigenvalues of the operator L^γ are introduced in [31]. Let us recall that the spectral problem admits a family of eigenvalues, where β > 0 and γ ∈ R are arbitrary constants.
Theorem 2.11
Let g, f ∈ L²(Ω) and ϕ ∈ L^∞(0, T); then there exists a unique weak solution u ∈ C([0, T]; L²(Ω)) ∩ C([0, T]; D^ζ(Ω)) of (1.1). By a simple transformation, the inverse source problem can be reduced to an operator equation for the source term f.

Proof. Denote ‖ϕ‖_{L^∞(0,T)} = P(ϕ_0, ϕ_1). A linear operator R : L²(Ω) → L²(Ω) is defined through a kernel k. Because k(x, ξ) = k(ξ, x), we can see that R is a self-adjoint operator. Next, we are going to prove its compactness, for which we define the truncated operators R_M. This implies that R is compact; its eigenvalues are given in (2.24), and the corresponding eigenvectors are e_k, which form an orthonormal basis of L²(Ω). From (2.19), the inverse source problem can be formulated as an operator equation, and by Kirsch [2] we conclude that the problem (1.1) is ill-posed. We present an example. Fix β and choose a sequence of perturbed data ( _m, g_m). Because of (2.18), combined with (2.26), the corresponding source term is f_m. If the input data , g vanish, then the source term is f = 0. The error in the L²(Ω)-norm between ( , g) and ( _m, g_m) is given in (2.29). Combining (2.29) and (2.32), we conclude that the inverse source problem is not well-posed.
Quasi-reversibility method
In this section, the quasi-reversibility method is used to investigate problem (1.1), and convergence estimates are derived under a prior parameter choice rule and a posterior parameter choice rule, respectively.
Construction of a regularization method
We employ the QR method to establish a regularized problem, in which g^ε and ^ε are the perturbed initial and final data satisfying a noise bound of order ε, and α(ε) is a regularization parameter. From now on, for brevity, we use shorthand notation for the recurring quantities.
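To convey the idea, the following toy spectral computation (ours; the paper's actual filter, constructed above, is not restated) contrasts naive inversion, which divides the data coefficients by rapidly decaying multipliers A_k and amplifies the noise, with an assumed quasi-reversibility-type filter 1/(A_k + α λ_k) that damps the high-frequency error.

```python
import numpy as np

rng = np.random.default_rng(0)
N, eps = 200, 1e-3
lam = (np.arange(1, N + 1) * np.pi) ** 2   # toy eigenvalues of the spatial operator
A = 1.0 / lam                              # toy forward multipliers (decay in k)
f = 1.0 / np.arange(1, N + 1) ** 2         # smooth exact source coefficients
g = A * f + eps * rng.normal(size=N)       # noisy data coefficients

f_naive = g / A                            # unregularized inversion
alpha = eps                                # a priori choice alpha ~ eps (assumed)
f_reg = g / (A + alpha * lam)              # assumed QR-type filtered inversion

err = lambda fh: np.linalg.norm(fh - f)
print(err(f_naive), err(f_reg))            # the filtered error is far smaller
```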
A prior parameter choice
Afterwards, the error ‖f(·) − f^{ε,α(ε)}(·)‖_{L²(Ω)} is estimated under a suitable choice of the regularization parameter. To do this, we introduce the following lemma.
Proof. (1) If j ≥ 1, then from s ≥ λ_1 we obtain the claimed bound. (2) If 0 < j < 1, then solving G'(s) = 0 we locate the maximizer, from which the bound follows. This is precisely the assertion of the lemma.
Theorem 3.2. Let f be as in (2.18) and let the noise assumption (2.13) hold. We obtain the following two cases, culminating in the estimate (3.8). Proof. By the triangle inequality, we split the error into two terms; the proof falls naturally into two steps.
A posterior parameter choice
In this subsection, a posterior regularization parameter choice rule is considered. By the Morozov discrepancy principle (see [2]), we choose the regularization parameter α(ε) such that the discrepancy of the regularized solution equals ζε, where ζ > 1 is a constant. This equation admits a unique solution.
which gives the required results.
Combining (3.33) and (3.34), we arrive at (3.35). From (3.35), it is easy to see that (3.36) holds. Therefore, we conclude the proof, which gives the required results.
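In practice, the posterior parameter can be located numerically, since the discrepancy is monotonically increasing in α. The sketch below (a generic illustration, not the paper's scheme) finds the α satisfying the Morozov condition by bisection, on the same kind of toy spectral problem with an assumed filter 1/(A_k + α λ_k).

```python
import numpy as np

def discrepancy(alpha, A, lam, g):
    """Residual of the filtered solution; increases monotonically with alpha."""
    f_reg = g / (A + alpha * lam)
    return np.linalg.norm(A * f_reg - g)

def morozov_alpha(A, lam, g, eps, zeta=1.1, lo=1e-14, hi=1e2, iters=60):
    """Bisection (in log scale) for the alpha solving discrepancy = zeta*eps."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if discrepancy(mid, A, lam, g) < zeta * eps:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)

rng = np.random.default_rng(0)
N, eps = 200, 1e-3
lam = (np.arange(1, N + 1) * np.pi) ** 2
A = 1.0 / lam
g = A / np.arange(1, N + 1) ** 2 + eps * rng.normal(size=N)
print(morozov_alpha(A, lam, g, eps))       # a posteriori regularization parameter
```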
• Next, the relative error is defined by (4.7). In Fig. 1, we show the convergence estimate between the exact solution and its approximation by the quasi-reversibility method, under a prior parameter choice rule and under a posterior parameter choice rule. In Fig. 2, we show the convergence estimate between the sought solution and its approximation by the QR method, together with the corresponding errors, for ε = 0.2. Similarly, in Fig. 3 and in Fig. 4, we show the comparison in the cases ε = 0.02 and ε = 0.0125. While drawing these figures, we choose the values β = 0.5, γ = 0.5 and j = 1. In the tables of errors calculated for this numerical example, we present the error estimates for both a prior and a posterior parameter choice rule, respectively. In Table 1, we give the comparison of the convergence rates between the sought solution and the regularized solutions. Next, in Table 2, we fix ε = 0.034. In the first column, β_{p+1} = β_p + 0.11 for p = 1, ..., 8, with β_1 = 0.11. Using Eq. (4.7), we show the error estimate between the sought solution and its approximation, with β = 0.3, in the second column and the third column.

Table 1: The error between the regularized solutions and the sought solution at β = 0.5, γ = 0.5.
Table 2: The error between the regularized solutions and the sought solution at ε = 0.034.
Conclusions
In this work, we use the QR method to regularize the inverse problem to determine an unknown source term of a space-time-fractional diffusion equation. We showed that the problem (1.1) is ill-posed in the sense of Hadamard. Next, we give the results for the convergent estimate between the regularized solution and the sought solution under a prior and a posterior parameter choice rule. We illustrate our theoretical results by a numerical example. In future work, we will be interested in the case of the source function being a function of the general form f (x, t), and this is still an open problem and will show more difficulty. | 2020-10-08T13:30:35.125Z | 2020-10-07T00:00:00.000 | {
"year": 2020,
"sha1": "0ec807333df13e3440e1e00a8cdde54e02e7da63",
"oa_license": "CCBY",
"oa_url": "https://advancesindifferenceequations.springeropen.com/track/pdf/10.1186/s13662-020-02998-y",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d63dd389b48fc42a330bcd0f09b3980b33c8019c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
234078182 | pes2o/s2orc | v3-fos-license | Dynamic Behavior of a Flexible Multi-Column FOWT in Regular Waves
A tank experiment using a flexible multi-column floating offshore wind turbine (FOWT) model in regular waves was carried out to clarify the floater elastic response and its influence on the floater motion. The model motion response from the experiment was compared with the numerical simulations by NK-UTWind and WAMIT codes. The dynamic elastic deformation of the model was also compared between the experiment and NK-UTWind. The experiment observed significant elastic deformation for the decks and columns of the model around the wave period corresponding to the natural period of the structural vibration. Furthermore, comparing the heave response amplitude operator (RAO) between experiments and numerical simulations, a small peak appeared around this period in the experiment and the NK-UTWind simulation, but not in the WAMIT simulation. These results indicated that dynamic elastic deformation affected the heave response of the model. The change in the model rigidity revealed that such elastic deformation could affect the motion response statistics in an actual sea condition if the peak period of the onsite wave spectrum is close to the floater natural vibration period. These investigations indicated the importance of considering the elastic behavior of a FOWT at its design stage.
Introduction
Offshore wind turbines have been highly expected to play an essential role in power supply. Their advantages include the availability of stronger and more consistent winds, and less noise and visual pollution, compared with the prevailing onshore wind turbines. In Europe, bottom-mounted types, which are suitable for shallow sites with a water depth of less than 50 m [1], have already been widely installed. On the other hand, in areas where the water depth increases away from the shore, floating offshore wind turbines (FOWT) can be preferable.
The bottleneck in the prevalence of FOWT lies in their high costs, especially for installation, operation and maintenance (O and M) [2]. Various floater types of FOWT have been researched to meet the demand for wind energy production in the seas. In Japan, where FOWT is estimated to have a substantial potential for power generation, several types of research have been undertaken at the initiative of the New Energy and Industrial Technology Development Organization (NEDO), such as a barge type at Kitakyushu [3-6], semi-submersibles [7-10], and spar types at Fukushima [11,12].
The Froude law, as shown in Table 1, was applied to this experiment. The main dimensions and properties of the floater are shown in Table 2 and Table 3, respectively. The moment of inertia of area, I, was decided considering the Young's modulus, E, in model scale and in full scale, which enabled the model to meet the scale factor of the bending rigidity, EI. B, K, G, and M are defined as the center of floatation, keel, center of gravity, and metacenter points, respectively.
Wave Tank Setup
All the experiments were carried out in a towing tank at the University of Tokyo (UTokyo), Japan, with dimensions 85.0 m × 3.5 m × 2.4 m (length × width × depth). The model was installed 20.0 m away from the wave generator. Two horizontal moorings composed of wires and springs were aligned with the wave direction and attached to the model to prevent it from drifting. Details of the experimental setup and spring properties are shown in Figure 3, where k is the spring constant, T_0 is the initial tension, and l_0 is the natural length of the spring. The 6DOF rigid body motions of the model were defined as the motion of the center of gravity.
The model was equipped with twenty-two pairs of strain gauges to measure the bending moment of the tower, decks, and columns. The locations of the strain gauges in global coordinates are indicated as blue markers in Figure 4. Gauges 1 to 4 are attached to the tower center, deck base, column base, and column center, respectively. The sampling frequency during the experiments was 100 Hz.
Figure 4. Location of the strain gauges on the experimental model.

Environmental Conditions

Experiments under regular waves were conducted under three different wave heights (18 mm, 36 mm, and 72 mm) and wave periods from 0.7 to 4.1 s. In the higher wave cases, wave periods around the natural periods of heave and pitch were mainly measured to discuss the effect of the wave heights on the viscous damping.
NK-UTWind Code Model
First, the full-scale FOWT was numerically modeled and analyzed using a coupled analysis code for the rotor-floater-mooring response. The code used was the NK-UTWind (an in-house code developed by UTokyo for coupled analysis of FOWT, see [18]); other articles that present details about the use of NK-UTWind can be found, for example, in [10,13,20].
The aerodynamic and inertia loads on the rotor part are integrated into the structure part. The structure is formulated with a finite element model and discretized into node elements and beam elements. Each node has three translational and three angular degrees of freedom. Thus, it can be formulated as given in Equation (1).
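From the description of the matrices and force vectors in the next paragraph, Equation (1) is presumably the standard finite-element equation of motion (our reconstruction, not reproduced from the paper):

$$[M]\{\ddot{x}\} + [C]\{\dot{x}\} + [K]\{x\} = \{F_{hydro}\} + \{F_{mooring}\} + \{F_{restoring}\} + \{F_{aero}\} \qquad (1)$$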
where [M] is the mass matrix whose dimension is 6N for the structural model of N nodes, [C] the damping matrix, [K] the structural stiffness matrix, and x denotes the nodal displacement vector and its first derivative and second derivative denote the velocity and acceleration vectors, respectively. The right-hand side vector comprises four force components: the hydrodynamic force, the forces from mooring lines, the restoring force, and the aerodynamic force. The hydrodynamic force is evaluated based on Morison's Equation [21], as given in Equation (2). It can be applied for slender structures that are hydrodynamically transparent.
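Equation (2) presumably takes the standard Morison form for the load per unit length of a cylinder element, consistent with the coefficients defined next; the exact variant used in NK-UTWind (e.g., whether relative velocities are used) may differ:

$$f = \rho\, C_m \frac{\pi D^2}{4}\, \dot{v} \;+\; \frac{1}{2}\,\rho\, C_D\, D\, v\,\lvert v \rvert \qquad (2)$$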
where ρ is the fluid density, D the diameter of the column element, and v the fluid particle velocity. Furthermore, C_m and C_D denote the added mass coefficient and drag force coefficient, respectively. The mooring force can be evaluated by either a quasi-static catenary calculation, a lumped-mass method, or a linear spring. Wheeler's stretch method [22] was used to estimate the wave forces for the submerged domain at each time step. The mesh and nodes considered in the NK-UTWind code analysis are visualized in Figure 5. All the beam elements are modeled as circular cylinders. In this procedure, the sphere footings are approximated by three flat circular cylinders. The difference of footing geometries between the experiment and the simplifications applied in the NK-UTWind code can affect the added mass coefficient and drag force coefficient for each node considered in the numerical model. The added mass coefficients and drag coefficients for each simplified element were obtained from the DNV-GL guidelines [23] as standard hydrodynamic coefficients for cylinders.
In the NK-UTWind code, the 6DOF motions were obtained for the motion of the center of gravity. The displacement of the tower top and column bottom was evaluated from the elastic deformation using the bending moment on each beam.
WAMIT Code Model
The dynamic behavior of the FOWT was also evaluated by WAMIT code, a program based on the linear potential theory to analyze submerged or floating objects under waves. It does not consider the effect of viscosity and can be applied to rigid body motions. Furthermore, the WAMIT code evaluates the hydrodynamic loads in the frequency domain.
One of the purposes of using WAMIT is to confirm the motion responses as a rigid body. The other is to reveal the difference in the motion responses arising from the different evaluation methods: the potential theory and the Morison equation.
The WAMIT code simulation was performed with a low-order mesh composed of 558 flat quadrilateral and triangular panels with a mean edge of approximately 4.3 m in full scale, as illustrated in Figure 6a. The mooring line characteristics were included in the software Edtools® that calculated the full stiffness matrix using the formulation presented in [24]. The non-diagonal terms due to the degree-of-freedom coupling were also considered. The 3D view of the Edtools® model is shown in Figure 6b.
Since the viscous effect is not considered in the potential theory calculation, the viscous effect was incorporated into the external damping analysis. The external damping was estimated first from the free decay tests and then incorporated into the numerical model.
Ansys Model
Modal analysis to reveal the elastic modes and vibrations of the structure was performed using the Ansys® code. The model geometry was built using the CAD software Rhinoceros®, and the position and dimensions of the model were obtained from Edtools®. The main properties of the model are shown in Table 4. The thickness of each structure was given to match the mass of each element.
The connection points between the structures that comprise the experimented FOWT must be correctly modeled in the Ansys® code to obtain the structure eigenmodes. Details about the connection points and the Ansys® code model are presented in Figure 7.
Results and Discussions
Three main results are discussed herein: first, free decay and hammering tests; second, motion responses; and third, elastic displacements. All the results were obtained from experiments and numerical calculations (NK-UTWind and WAMIT codes). Numerical analyses were conducted on full-scale models; all the results are presented in the reduced scale 1/50.
Free Decay and Hammering Tests
The results of the free decay test and hammering test are shown in Table 5 and Table 6, respectively.
The hammering test was performed with the model in the water in its initial position for wave tests. The same condition was simulated using the NK-UTWind code. The time series of bending along the z-axis in the Gauge 2 position was compared. The natural period of the most energetic vibration mode at Gauge 2 along the Z-axis was around 1.7 s and showed a good match with differences of less than 1% between experiments and numerical calculations.
The differences in natural periods in the surge, heave, and pitch between NK-UTWind and the experiment were less than 4%.
Ansys code simulations were performed in water to determine the eigenmodes of the FOWT. As a result of the simulations, three different resonant modes around 1.66 s were obtained. The first resonant mode of 1.66 s is shown in Figure 8. In the first eigenmode, it is possible to verify that all the bottom column displacements are in the same phase, and the displacements are aligned with the respective base deck line of each column. The eigenmode period was very similar to the ones obtained in the experiments and NK-UTWind code; therefore, this value can be considered validated.
Regular Wave Tests
This section presents response amplitude operator (RAO) motions for heave and pitch and RAO motions for displacements at the tower top and column bottom. The RAOs were calculated under regular wave tests. Two main factors were analyzed to verify their effects on the dynamic behavior in regular waves of FOWT: first, the influence of the wave height; and second, the structural rigidity of the floater.
Concerning the terminology and symbols adopted, ξ_a is the wave amplitude, k is the wavenumber, and kξ_a is the maximum wave slope. ξ_33 and ξ_55 are the amplitudes of heave and pitch motion, respectively. RAO results are presented in non-dimensional form.
In general, it is possible to observe a decrease in the peak value of the RAO around the natural periods when increasing the wave height. This fact was related to high damping values for high wave heights, which confirmed the pronounced quadratic (non-linear) behavior of the viscous damping.
Numerical results from the NK-UTWind code agreed very well with the experiments outside the resonance region. In the resonance region, the RAO values are very sensitive to the damping levels. The results showed that the damping levels from NK-UTWind, provided mainly by the drag coefficients in the Morison equations, were higher than in the experiments. The discretization in nodes, i.e., slices of circular cylinders, of the footing region could provide more damping than in the experimented case, therefore decreasing the peak value of the RAO results.

Figure 9 allows observing a small peak around the wave period of 1.70 s; 1.72 s represents the natural period of the most energetic vibration mode obtained from the hammering tests (see Figure 6). The eigenmode elastic behavior was remarkable, and it was visible during the experiments due to the significant displacements of the column bottoms.

Figure 10 permits to conclude that the peak value of the RAO pitch results showed a relatively better agreement between the experiment and numerical simulation than the heave one. The reason was that the difference in the damping levels due to the footing geometry modeling affected the pitch motion less. The drag coefficient in the x-axis direction is responsible for the main viscous damping; therefore, the drag coefficient approximation from DNV for a cylinder in the x-axis direction was better than the one for the z-axis direction. Figure 11 shows the behavior that occurred in the experiments, which corresponded closely to the resonant mode shown in Figure 9.

In the WAMIT simulations, external damping values (viscous damping) were evaluated to show how sensitive the RAO is to this parameter. A zero-damping condition and three different external damping levels were selected to match the peak values of the experimental RAOs.
The linear potential theory cannot calculate the non-linear effects on RAOs due to the wave height. Since we know that the most significant part of this effect is due to the quadratic nature of the viscous damping, different levels of external damping can be included in the WAMIT simulations to simulate the wave height effect. The system total damping comprises the potential damping and the external damping (viscous damping, mooring lines, and other sources). The potential damping was neglected as the WAMIT calculations showed low values; the external damping due to the mooring lines and other external sources was also considered small for the DOF of interest; thus, most of the external damping came from the viscous forces.
In general, the numerical RAOs calculated from the WAMIT code showed a good agreement with the experiments using the calibrated damping level, meaning that the peak values of the RAOs were the same in the numerical calculations and experiments. The damping ratio, ζ, was calculated in terms of the percentage of critical damping for the respective DOF. An increase of four times in the wave height modified the damping ratio levels by around 0.5% and 1.2% for heave and pitch, respectively.
The numerical results from the WAMIT code were calculated for a rigid body; due to that, the RAO heave peak around 1.7 s, which represented the energy around the natural period of the flexible mode, did not reproduce the experimental results, as seen in Figure 12.
Influence of the Structural Rigidity
Punctual displacements due to the elastic deformation of the tower top, column bottom, and deck edge are induced mainly by bending moments when excluding the rigid-body motions of the floater. For the punctual displacement of the column bottom, the deck deformation was summed with the column deformation itself. Figure 14 represents the definition of the punctual deformations due to the elastic behavior of the tower, deck, and column.
The procedure to estimate the punctual displacement from the bending moment due to the elastic deformation is explained below. It is fundamentally based on the Euler-Bernoulli hypothesis, i.e., cross-sections perpendicular to the neutral plane remain perpendicular to it after deformation.
For the tower top displacement, the linear relationship described as Equation (3) was assumed between the bending moment, M_1, and the distance from the tower top, x_1. The constant of proportionality is defined as P_1, considered as a concentrated force at the tower top.
After this assumption, the tower is regarded as a cantilever fixed at the deck base, and δ_1 is calculated as Equation (4), where the tower length is L_1 and the flexural rigidity of the tower is EI_1.
For the deck edge, the bending moment is measured at only one point, as M_2, and the deck is assumed to be subject to a uniformly distributed load. Then δ_2 can be calculated as Equation (5), where the deck length is L_2 and the flexural rigidity of the deck is EI_2.
For the column bottom, the linear relationship described as Equation (6) was assumed between the bending moment, M_3, and the distance from the column bottom, x_3. The constant of proportionality is defined as P_3, considered as a concentrated force at the column top:

M_3 = P_3 x_3 (6)

After this assumption, the deck and column are regarded as an L-shaped beam, and δ_3 is calculated as Equation (7). The equation represents the sum of the displacement due to the deck tip angle and the deflection of the column itself, where the column length is L_3 and the flexural rigidity of the column is EI_3.
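Under these assumptions, Equations (4), (5) and (7) presumably take the standard Euler-Bernoulli cantilever forms sketched below, where w is the uniformly distributed load on the deck (so that the base moment is M_2 = w L_2^2/2) and θ_2 is the deck tip rotation; these are our reading, and the exact expressions in the original paper may differ:

$$\delta_1 = \frac{P_1 L_1^3}{3 E I_1} \qquad (4)$$

$$\delta_2 = \frac{w L_2^4}{8 E I_2} = \frac{M_2 L_2^2}{4 E I_2} \qquad (5)$$

$$\delta_3 = \theta_2 L_3 + \frac{P_3 L_3^3}{3 E I_3} \qquad (7)$$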
One of the goals of the current research is to investigate the effects of the model flexibility on the dynamic behavior in waves. A comparison of numerical models with different rigidities is a feasible alternative, since NK-UTWind was validated against the experimental results, to evaluate these effects on the response of a FOWT in waves.
Based on the definitions proposed in Figure 14, punctual displacements due to the elastic deformation of the tower top, deck edge, and column bottom were obtained using the NK-UTWind code. Numerical models with three different rigidities, namely 0.7EI_0, EI_0, and 5EI_0, were simulated, where EI_0 is the original rigidity of the experimental model. The simulations were performed for a wave height of 18 mm in the model scale.
The displacement results were presented as RAOs in non-dimensional form in Figures 15-17, respectively, for three points located at the tower top, deck edge, and column bottom. For the experimental results, the bending moment values were obtained from strain gauge measurements, as detailed in Figure 4. The same procedure was applied for the NK-UTWind results, in which numerical measurements were obtained at the same positions as the experimental gauges. In Figure 17, the column bottom displacements were also evaluated indirectly, using the column top as a rigid body measured by the Qualisys® system, to confirm the gauge measurements. In this case, the second term in Equation (7) is zero, and only the effect of the bending angle of the deck tip is considered.
In general, the punctual results for the most rigid model, 5EI_0, were much lower compared to the two other cases, as expected. Some punctual displacements were observed because the model was not sufficiently rigid to avoid them. The most flexible case presented the largest punctual displacements.
In Figure 15, the punctual displacement of the tower top was the largest at the natural period of the pitch, and NK-UTWind overpredicted it compared with experiments.
The results in Figures 16 and 17 resembled each other because the deflection of the bottom column itself was much smaller than the effective displacement by the deck edge angle. From the experiment, a peak around a wave period of 1.7 s was observed; the NK-UTWind numerical calculations showed two large peaks around 1.7 s and a small peak around the natural period of the pitch.
For the EI_0 case, the peak around 1.7 s could be explained by the experimental and NK-UTWind hammering test results and confirmed by the eigenmode analysis using the Ansys code, i.e., the peak was due to the excitation of the natural period of the most energetic vibration mode. In turn, the 1.5 s peak could not be found in the experimental results in waves or in the hammering test results. This difference between the experimental results and the NK-UTWind simulations can be attributed to the difference in modeling the joint between decks and columns. Furthermore, the structural damping that can occur in the real structure could not be included in the NK-UTWind formulations. Due to the lower rigidity, the eigenmode at 1.7 s for EI_0 moved to 2.1 s for 0.7EI_0.
The previous results showed that the floater structural elastic behavior was present in the experiments in waves, and it was represented in the numerical calculations using NK-UTWind. The same models were utilized to verify the effect of the structural elastic behavior on the dynamic response in waves through RAO analysis. Figures 18 and 19 present the comparison of RAO results between experiments and NK-UTWind calculations for heave and pitch motions, respectively. Three different structural rigidities were evaluated: 0.7EI_0, EI_0, and 5EI_0.

For the RAO heave, in Figure 18, small peaks could be seen for EI_0 around the wave periods of 1.5 s and 1.7 s. For the 0.7EI_0 model, there is a sharp inclination around the wave period of 2.0 s. The explanation for these peaks was the presence of the structural vibration mode, as discussed for the punctual displacements of the deck edge and column bottom. Outside the aforementioned wave periods, the effect of the structural elastic behavior could be neglected. However, the structural elastic behavior must not be ignored when studying FOWTs. Typical sea states have high energy levels below wave periods of 2 s in the model scale 1/50, i.e., 14 s in full scale. High response levels below 2 s can directly impact the fatigue of the mooring lines, decrease these systems' lives, and increase the operating costs.
For the RAO pitch, in Figure 19, no significant differences were observed for different structural rigidities.
Although the peaks due to the structural elastic behavior on the RAO heave were small, as seen in Figure 18, they may not be negligible in designing a FOWT model. Spectral analysis of the heave motion may result in significant amplitudes under an actual ocean environment. A discussion can be held to verify the structural vibration influence on the heave motion statistics under a real operational sea environment. For example, an operational sea condition characterized by a significant wave height H_s = 2.5 m and a mean wave period T̄ = 1/f̄ = 9.0 s in full scale (1.27 s in the model scale) was utilized. The heave response spectra for the models with two different rigidities, EI_0 and 5EI_0, were computed. For the wave spectrum, the ISSC spectrum, see [25], was considered as:

S_w(f) = A f^(-5) exp(-B f^(-4))

where A = 0.1107 H_s^2 f̄^4, B = 0.4427 f̄^4, f̄ = 1.25 f_p, and f_p is the peak frequency of the wave spectrum. The wave spectrum in the model scale is shown in Figure 20.
For the EI_0 model, the peak period of the wave spectrum was close to the natural frequency of the structural mode of vibration, which can magnify the heave response spectrum. The power spectrum of the heave motion response presented in Figure 21, S_33(f), was calculated as:

S_33(f) = |RAO_33(f)|^2 S_w(f)

The variance of the heave motion in these wave conditions was estimated from the 0th moment of the heave response spectrum, m_0 = ∫ S_33(f) df, and the significant heave response is evaluated as 4√m_0. The resulting significant heave responses were 24.2 and 13.4 mm for the EI_0 and 5EI_0 models, respectively. The statistical results showed that the significant heave response for the flexible model EI_0 was 80% larger than for the rigid model 5EI_0. This large difference of 1.8 times, for this specific model and operational sea condition, suggested that the floater structural elastic behavior must be considered when designing light FOWT structures. Thus, the rigidity effects on the dynamic behavior in waves are essential.
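As an illustration of this spectral calculation, the following Python sketch (ours; the RAO shape and all parameter values are illustrative assumptions, not the paper's data) integrates the heave response spectrum to obtain the significant response:

```python
import numpy as np

# ISSC wave spectrum S_w(f) = A f^-5 exp(-B f^-4), full-scale values
Hs, T_mean = 2.5, 9.0                      # significant height [m], mean period [s]
f_bar = 1.0 / T_mean                       # mean frequency, f_bar = 1.25 f_p
f = np.linspace(0.02, 0.5, 5000)           # frequency axis [Hz]
A = 0.1107 * Hs**2 * f_bar**4
B = 0.4427 * f_bar**4
Sw = A * f**-5.0 * np.exp(-B * f**-4.0)

# Illustrative heave RAO: near-unit response at long periods plus a narrow
# peak at the structural mode (1.7 s model scale ~ 12 s full scale).
f_mode = 1.0 / 12.0
rao = 1.0 / (1.0 + (f / 0.12)**4) + 2.0 * np.exp(-((f - f_mode) / 0.005)**2)

# Response spectrum, 0th moment, and significant response 4*sqrt(m0)
S33 = np.abs(rao)**2 * Sw
m0 = np.sum(0.5 * (S33[1:] + S33[:-1]) * np.diff(f))   # trapezoidal rule
print(f"significant heave response = {4.0 * np.sqrt(m0):.3f} m")
```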
Comparison of Numerical Results from NK-UTWind and WAMIT
The NK-UTWind and WAMIT codes employ different methods for evaluating wave forces. This difference can appear when comparing the results from the NK-UTWind code for a rigid model under a low wave height with the ones from the WAMIT code, applying the external damping level to calibrate the RAO peak values. Figures 22 and 23 present the comparison of RAO results between the NK-UTWind and WAMIT numerical calculations for heave and pitch motions, respectively. The NK-UTWind model was calculated for a wave height equal to 0.2 m (in full scale) and the most rigid structural value, 5EI_0. The WAMIT model was evaluated for damping ratios of ζ_33 = 1.5% and ζ_55 = 1.7% for heave and pitch, respectively. For the RAO heave, the NK-UTWind and WAMIT codes presented an excellent agreement, including peak values and natural period, as shown in Figure 22.
For the RAO pitch, the NK-UTWind and WAMIT codes presented a good agreement for wave periods shorter than 2.5 s. A small difference was observed for the value of the natural period of the pitch, as shown in Figure 23. The difference may come from the added mass calculations in the NK-UTWind code; as highlighted before, the footing geometry was simulated as three circular cylinders, which impacted the added mass and, consequently, the natural period of the pitch. The discrepancy was more pronounced for pitch motions than for heave ones due to the distance between the footing and the center of gravity, which produced a larger added mass difference when considering the moment arm.
Conclusions
In this research, a water tank experiment was carried out under regular waves using a flexible multi-column FOWT model. Dynamic motions and deformations of the model were characterized and compared with numerical calculations from the NK-UTWind and WAMIT codes.
In the experiment, significant elastic deformation of the model was observed around the wave period of 1.7 s. The comparison between the RAO heave from the experiment, NK-UTWind, and WAMIT simulations revealed that this elastic behavior affected the heave motion of the model. A small peak appeared in the RAO heave around the wave period of 1.7 s; the same peak was observed in the experiment and the NK-UTWind simulation, but not in the WAMIT simulation. However, except around the natural period of the structural vibration, the motion as a rigid body was dominant in the motion responses. Indeed, the WAMIT simulation gave reasonable estimations of the experimental RAOs except around the wave period of 1.7 s.
Although displacements deriving from the elastic deformation are rarely estimated experimentally, this work derived them from strain gauges by applying some assumptions. The displacements from the strain gauges were compared with those calculated by the NK-UTWind code. The NK-UTWind calculation showed larger displacements of the deck edges and column bottoms at a wave period of 1.7 s than the experimental results. Likewise, the displacement of the tower top was over-estimated numerically.
In investigating the effect of the structural rigidity on the motion response, it was possible to observe a small peak in the RAO heave caused by the elastic vibration when compared with the rigid model. This slight difference seemed negligible compared to the peak values around the natural period; however, under a real ocean environment, the spectral results greatly impacted the significant heave height: it was 80% higher for the flexible model than for the rigid one under an operational sea state condition. This brought about the need to consider the rigidity effects on the dynamic behavior of a light FOWT.
The NK-UTWind and WAMIT calculations showed good agreement when considering rigid models under low wave heights; no significant differences could be found in the RAO heave. At the same time, for the RAO pitch, a small difference in the natural period was recognized. The difference was due to the evaluation method of the hydrodynamic forces, specifically the added mass.
In summary, the NK-UTWind and WAMIT codes showed themselves to be useful tools for the preliminary design of a FOWT. The NK-UTWind code can well represent the non-linearities due to the wave height and include the floater structural elastic behavior. The WAMIT code can be useful to calculate the added mass coefficients and to calibrate the damping levels. Using both tools together, it is possible to obtain more reliable results than using them separately. Alternatively, coupling NK-UTWind with the potential theory can be expected to provide better estimates of the dynamic behavior.
Funding: This research is based on the results obtained from a project supported by New Energy and Industrial Technology Development Organization (NEDO), Japan.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Acknowledgments:
The authors would like to thank Eng. Sakamoto, H. from Fractaly Co. Ltd., Japan, for model manufacturing and assembly, and Kato, T. the technical staff of the towing tank of UTokyo, for his technical support in conducting the experiments. The authors also would like to thank the student Marques, M. A. from the Federal University of Pernambuco (UFPE), Brazil, for his help during the image developments.
Conflicts of Interest:
The authors declare no conflict of interest. | 2021-05-10T00:04:31.159Z | 2021-01-27T00:00:00.000 | {
"year": 2021,
"sha1": "d4052a49669b97928f1915d6074e567eeef24b0f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-1312/9/2/124/pdf?version=1612340325",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "5ab0babdab6258db519bfd1241cad1973d038edd",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
109284870 | pes2o/s2orc | v3-fos-license | Adaptation of coastal structures to mean sea level rise
With the mean sea level (MSL) rise, coastal defence structures will be exposed to wave heights larger than the design values, in particular all the structures built in shallow waters, where the depth imposes the maximal wave height due to bathymetric breaking. If the MSL rises by one meter, the crests of these structures will have to be raised by between two and three meters in order to keep the same overtopping volumes. Moreover, the structures will suffer more severe damage, and the mass of armour units should be doubled. Statistics moderate these first conclusions because they take into account the whole set of events, including in particular shoaling waves. Schematically, with the increase of damage and according to the severity of changes, the stakeholders will adopt one of the following scenarios: a) repairing the structures as they are; b) reinforcing the structures; c) demolishing and redesigning the structures; d) accepting coastal realignment. Three axes of reinforcement of structures are presented: limiting overtopping by modifying for example the crown wall, improving armour stability by adding an armour layer or by using a milder armour slope, and reducing the incident wave energy by building a detached low-crested breakwater or by sand nourishment. A curved parapet wall is a very efficient solution for impervious structures. This solution must often be completed by an additional armour layer for pervious structures. The front reservoir is also a promising solution. Cost-benefit analysis (CBA) applied to the city of Le Havre shows that reinforcement becomes economically justified in the district of Malraux when the MSL rise is 1 m. Redesign and coastal realignment, as far as they are concerned, become acceptable when the MSL rise exceeds 2 m.
I. INTRODUCTION
The impact of climate change on coastal structures was studied within the framework of the Discobole project [Lebreton, Trmal, 2009]. The methods presently used for structure design do not, however, enable a correct estimation of the consequences of climate change, for three main reasons:
- The design is not based on statistics, the single approach able to manage the complexity of coastal hazards;
- The design rules are generally proposed for new structures and poorly address the strengthening of existing structures;
- The structure is generally considered alone and not within a system of dangers involving several scales (the scale of the structure, the scale of the zone directly protected by the structure, the scale of the zone impacted by the flood risk).
The SAO POLO project [Sergent, 2012] aims at answering these three problems. This project uses:
- Analytical and statistical methods to estimate the impacts of climate change in terms of overtopping and stability of coastal structures located in the breaking zone, with a regular bathymetry and normal wave incidence, as well as the consequences for updating the design (strategy c);
- Laboratory tests in wave flumes to study the reinforcement of three types of existing coastal structures (strategy b);
- A case study in the district of Malraux in the city of Le Havre to establish a socio-economic strategy for the choice among the four strategies (a-b-c-d) presented in figure 1a.
Three axes of reinforcement of structures are possible: limiting overtopping by modifying the crown wall, improving armour stability by adding an armour layer or by using a milder armour slope, and reducing the incident wave energy by building a detached low-crested breakwater or by sand nourishment (see figure 1b).
II. UPDATING OF THE DESIGN OF THE COASTAL WORKS
An analytical study is carried out for a structure located in the breaking zone. The change of the crest height ΔD between the final state with the MSL rise Δh and the present state is obtained for low overtopping discharges (i.e. q < 5×10^-2 m^3/s/ml). The ratio ΔD/Δh always exceeds 1 and approaches 2 for pervious structures and 3 for impervious structures. Whatever the offshore wave conditions, the structures located in very shallow waters, with a water depth between 0 and 2 m, risk according to the calculations to also undergo very strong damage to their armour layers. The most concerned structures are beach structures. Like the change of the crest height, the armour weight increases linearly with the MSL rise.
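A simple order-of-magnitude argument (ours, under the assumptions of depth-limited breaking, H = γ_b h, and of a roughly constant relative freeboard R_c/H at a fixed overtopping discharge) suggests why this ratio lies between 2 and 3:

$$\Delta D \;=\; \Delta h + \frac{R_c}{H}\,\gamma_b\,\Delta h \quad\Longrightarrow\quad \frac{\Delta D}{\Delta h} \;=\; 1 + \gamma_b\,\frac{R_c}{H},$$

which always exceeds 1 and, for typical values γ_b ≈ 0.8 and R_c/H ≈ 1.2-2.5, falls in the range 2-3.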
A statistical method based on the Monte Carlo method and the joint probabilities of offshore wave heights and sea levels quoted relative to chart datum is then tested on the Deauville breakwater (see figure 2). This method requires at first a separate analysis of the exceedance probability of the offshore wave height at high tide (more precisely the maximal wave height between two successive high tides) and of the wind set-up at high tide (the difference between the maximal observed level and the predicted level around high tide). Then the joint probability of offshore wave heights and sea levels is obtained through a change of variables (transformation into centred normal distributions) in a normalized workspace (bivariate normal function). The sea level is the sum of the tide and the wind set-up, but the wave set-up is not included because it is implicitly taken into account in the overtopping and stability formulae. A random draw finally supplies a database representing 10 000 years of data at high tide (offshore wave heights and sea levels). The latter are then propagated and the overtopping discharges are determined while varying the MSL. Two types of results are obtained: (a) the evolution of the return periods of overtopping discharges with the MSL rise; (b) the necessary raising of the structures to keep the same return periods for overtopping.
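A minimal sketch of such a Monte Carlo joint-probability simulation is given below; this is our illustration, not the SAO POLO implementation, and the marginal distributions, correlation value, and simplified overtopping model are placeholder assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_years, tides_per_year = 10_000, 705      # roughly 705 high tides per year
n = n_years * tides_per_year

# 1) Correlated standard-normal pairs (the "normalized workspace").
rho = 0.3                                  # assumed wave/set-up correlation
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = stats.norm.cdf(z)                      # correlated uniforms

# 2) Back-transform to physical marginals (placeholder fits): offshore Hs
#    at high tide ~ Weibull, wind set-up ~ Gumbel; tide drawn independently.
Hs = stats.weibull_min(c=1.4, scale=1.0).ppf(u[:, 0])
set_up = stats.gumbel_r(loc=0.1, scale=0.12).ppf(u[:, 1])
level = rng.uniform(3.5, 4.5, size=n) + set_up   # sea level [m above CD]

def overtopping(Hs, level, msl_rise=0.0, crest=8.0, toe=2.0):
    """Toy stand-in for the Goda propagation + TAW overtopping chain:
    depth-limited wave at the toe, exponential decay with freeboard."""
    h = level + msl_rise - toe             # water depth at the toe [m]
    H = np.minimum(Hs, 0.8 * h)            # bathymetric breaking limit
    Rc = crest - (level + msl_rise)        # crest freeboard [m]
    return 0.2 * np.sqrt(9.81 * H**3) * np.exp(-2.6 * Rc / H)

q = overtopping(Hs, level, msl_rise=1.0)
exceed = np.count_nonzero(q > 5e-2)        # events with q > 5x10^-2 m3/s/ml
print(f"return period: {n_years / max(exceed, 1):.1f} years")
```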
The results in terms of return periods for the Deauville breakwater are given in table 1. The chosen overtopping discharge is strong (5×10^-2 m^3/s/ml): that is the overtopping level causing the wreck of the smallest ships at the back of the breakwater. A usual level of 1×10^-5 m^3/s/ml represents a danger for pedestrians and vehicles. The present return period is long (10 000 years). The large value of the initial return period implies a fast variation of this value with the MSL rise. The results in terms of the raise of the crest height are gathered in table 2. For pervious rubble mound breakwaters, the analytical study shows that, for a 1 m MSL rise, the necessary raise of the crest height is 1.74 m for an overtopping discharge of 5×10^-2 m^3/s/ml. The results of the statistical study moderate these results because the planned raising of the crest height is 1.40 m. The analytical and statistical studies give results with significant differences because all of the wave conditions (breaking and shoaling) are taken into account in the statistical study, whereas the analytical study includes only breaking waves. If only shoaling waves are taken into account, the necessary raise of the crest height is approximately equal to the MSL rise, i.e. 1 m.
III. JOINT PROBABILITY METHOD
Following Hawkes (2002), DEFRA/Environment Agency (2005) explains the method of joint probabilities applied to the field of coastal engineering. The wave propagation is modelled in our study by the Goda analytical formula (2000) and the overtopping discharges are given by the TAW formulae (2002).
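For reference, a common form of the TAW (2002) mean overtopping formula for sloping structures is sketched below in Python, with all influence factors γ set to 1 (a simplification; consult TAW (2002) for the exact coefficients and validity limits):

```python
import numpy as np

def taw_overtopping(Hm0, Rc, tan_alpha, s0):
    """Mean overtopping discharge q [m3/s per m] for a simple slope,
    following the general form of the TAW (2002) formulae with all
    influence factors taken as 1 (a simplification)."""
    g = 9.81
    xi = tan_alpha / np.sqrt(s0)          # breaker parameter xi_{m-1,0}
    # breaking-wave branch ...
    q_break = (0.067 / np.sqrt(tan_alpha) * xi
               * np.exp(-4.75 * Rc / (xi * Hm0)))
    # ... capped by the non-breaking maximum
    q_max = 0.2 * np.exp(-2.6 * Rc / Hm0)
    return min(q_break, q_max) * np.sqrt(g * Hm0**3)

# Example: 2 m wave, 3 m freeboard, 1:3 slope, wave steepness 0.04
print(taw_overtopping(Hm0=2.0, Rc=3.0, tan_alpha=1/3, s0=0.04))
```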
The propagation of offshore wave heights to the coast, done by the Goda analytical formula on a regular bottom slope, distinguishes two zones: the shoaling zone, where the wave height increases slightly when the water depth decreases; and the breaking zone, where the wave height decreases quickly when the water depth decreases. Figure 3 presents the effect of the MSL rise on the wave height in front of the structure. The MSL rise can be represented as a movement of the coastal structures offshore. Consequently, the structures that are presently in the shoaling zone will not see significant changes of wave heights, whereas the structures that are presently in the breaking zone, i.e. in shallow waters, will be subjected to stronger wave heights. The joint probability method is illustrated by figure 4. The latter figure presents two families of curves: the curves of iso-probability of an event at high tide in the axes (offshore wave height, sea level) on the one hand, and the curves of iso-overtopping discharge in the same axes on the other hand. These latter curves are obtained using the Goda analytical formula (2000) for wave propagation and the TAW formulae (2002) for overtopping. The interest of this presentation is to distinguish the probability of the event (joint exceedance probability of offshore wave height and sea level) from the probability of the impact (for example an overtopping discharge), which is calculated here by the relative number of events giving overtopping discharges superior to a given discharge (the events are represented by stars in figure 4). Figure 4 shows that these events divide into two groups: in the breaking zone (large wave heights) and in the shoaling zone (high sea levels at high tide). In figure 4, among all the events with a return period of 5 years, the event giving the strongest overtopping discharge (5×10^-2 m^3/s/ml) is found at the border between the breaking zone and the shoaling zone. Considering climate change, the MSL rise consists in moving the cloud of points rightward, the increase of the wave heights consists in moving it upward, and the increase of the frequency of storms consists in generating additional stars.
IV. REINFORCEMENT OF COASTAL WORKS
We recall the strategies of adaptation proposed to the stakeholders: a) repairing the structures as they are; b) reinforcing them; c) demolishing and redesigning them; d) accepting coastal realignment. Updating the design of the structure is a costly option; the stakeholder will often content himself with strengthening (or reinforcement), i.e. strategy b. The reinforcement has been studied on three types of structures with laboratory tests in wave flumes.
IV.1. Maritime rubble mound breakwater
Tests A are carried out in the wave flume of the University of Le Havre in order to characterize several options of reinforcement for a maritime rubble mound breakwater.
For a 1 m MSL rise, among all the envisaged options, only the strengthening of the structure by a third layer of Antifer armour units combined with a raise of the crown wall up to the level of the superior berm enables the overtopping discharges to be reduced down to their initial values (without MSL rise). With armour units of the same dimension for the third layer as the ones of the two initial armour layers, the armour stability is largely improved in comparison with the initial conditions.
IV.2. Maritime impervious breakwater
Tests B are performed in the wave flume of the Laboratoire National d'Hydraulique et Environnement (LNHE-EDF) in order to study in the laboratory the different options of reinforcement of maritime impervious breakwaters that enable the same overtopping discharge to be kept with a 1 m MSL rise. The most promising options are the 1 m high curved parapet wall (cf. figure 6a) and the front reservoir (cf. figure 6b) with orifices for the evacuation of overtopping volumes. These options of reinforcement with a 1 m MSL rise allow us to keep or to reduce the overtopping discharge that is observed with the present MSL without reinforcement. The front reservoir consists in creating in front of the breakwater a seafront walk that is protected by a porous parapet with rectangular openings.
The addition of armour units on the impervious slope is not a suitable option of reinforcement (even if it can reduce the overtopping discharge in the preliminary tests) because this armour layer is unstable despite the armour size used in the laboratory tests, i.e. 4-6 T.
IV.3. Rubble mound breakwater positioned on the upper beach
Tests C are done in the wave flume of LNHE-EDF in order to study in the laboratory the different options of reinforcement of rubble mound breakwaters positioned on the upper beach that enable the same overtopping discharge as well as the same armour stability to be kept with a 1 m MSL rise. Among the tested options, the best results are obtained with the following reinforcements: 1) a third layer with 5-6 T armour units and a 2 m raise of the crown wall (cf. figure 7a); 2) a milder slope of the armour layer (1:3 slope instead of 1:2 slope) and a 1 m raise of the crown wall (cf. figure 7b). The raise of the crown wall must always be combined with a reinforcement of the armour layer because it is observed that the armour layer is unstable when the crown wall is raised with a 1 m MSL rise.

V. CASE STUDY IN LE HAVRE

The city of Le Havre is crossed from East to West by a dead cliff which marks the old border between the high city and the low city. The low city thus developed in the old intertidal space where the sea level evolves between high tide and low tide. To analyze the risk of marine flood in a thorough way, we choose three sites corresponding to different configurations in terms of flood (overflowing versus overtopping) but also in terms of protection and potential scenarios of adaptation. The districts Centre and Saint François are protected from the waves by the breakwaters of the port and by small low walls around the basins. They are the lowest districts and the first ones subjected to a flood by overflowing at high tide, like the floods which already occurred in the past in the district Saint François. Then the littoral space of the Northwest zone is constituted by the municipality of Sainte-Adresse and by the beach of Le Havre. This zone is protected by a low defence wall from Sainte-Adresse down to the South of the beach over more than 1 500 meters, and by a structure in pebbles to the South of the marina over more than 800 meters.
The Malraux district is protected by a rubble mound breakwater, itself surmounted by a low wall. It is one of the most interesting sites for our study because the work exposed to the waves can undergo damage during extreme events. We calculate, for various MSL rises, the total overtopping or overflowing volume which floods the city during an extreme event (high tide + storm), whose duration is estimated in Le Havre at approximately 2 hours. For that purpose, the average discharge is first calculated per linear metre on a section of the parapet wall, then multiplied by the length of the section and by the 2-hour duration to obtain the volume discharged over the length of the work. We wish to know the flood map corresponding to this volume of water in the considered zone. The latter is modelled as a basin, whose maximal inland limits filled by the sea are defined. The flood in the Malraux district is wide because of the relatively low topography.
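As a rough, self-contained illustration of the volume calculation described above, the following Python sketch multiplies an average overtopping discharge by the section length and by the 2-hour event duration. The discharge and length values are hypothetical placeholders; only the procedure itself follows the text.

```python
# Illustrative sketch of the overtopping-volume calculation described above.
# The discharge and section length are hypothetical; the 2-hour duration is
# the extreme-event duration estimated for Le Havre in the text.

STORM_DURATION_S = 2 * 3600  # high tide + storm, ~2 hours

def overtopping_volume(q_mean: float, section_length: float) -> float:
    """Total volume (m3) for one parapet section, with q_mean in m3/s per linear metre."""
    return q_mean * section_length * STORM_DURATION_S

# Assumed values: q = 5 L/s/lm = 0.005 m3/s/lm over a 400 m section
print(f"Volume routed to the basin: {overtopping_volume(0.005, 400.0):.0f} m3")  # 14400 m3
```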
V.2. The flood data in Le Havre
The geographical configuration of the three sites, as explained above, shows that in these zones a marine flood occurs in different ways. In Saint François, a marine flood fills the basins, which in turn overflow into the city. In Sainte-Adresse or in Malraux, the direct interface with the sea creates different dynamics in case of marine flooding: there is overtopping or overflowing over the low parapet walls.
However, whether with overflowing of basins or with overtopping over low parapet walls, the physical parameters used for the calculation of the damage to the stakes in the territory are the same:
-Upstream, the boundary conditions are essentially given by the tide and wind set-up (the sum of these two constitutes the marine level) and by the wave height.
-Downstream, i.e., on the inland territory, we are interested in the "hydraulic" state of the flooded zones. The flood maps are deduced or calculated from the knowledge of the upstream conditions, the topography and the physics of the flow.
On this second point, the three parameters that we would ideally like to know completely across the territory are the water height, the duration of the flood and the current velocity. However, it is difficult to know them precisely without the help of a 2D hydraulic model (all the more so in urban zones), which is why the majority of flood studies are based only on the maximal water depth to set up scenarios and simplified damage calculations.
Concerning the sea level, Le Havre has tidal stations which have regularly recorded the water levels for more than half a century. The reference document for the statistics of extreme levels, developed through a partnership between the CETMEF and the SHOM, presents the maps of the extreme water levels at high tide for return periods of 10, 20, 50 and 100 years [Simon, 2008]. According to this study, the centennial flood level in Le Havre is 9.30 meters CMH (Cote Marine du Havre).
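To make the use of these return periods concrete, the short sketch below converts a return period T into the probability of at least one exceedance over a given horizon, using the standard relation P = 1 - (1 - 1/T)^n. This formula is general practice and is not taken from [Simon, 2008]; the 50-year horizon is an arbitrary example.

```python
# Probability of at least one exceedance of the T-year level within n years.
def exceedance_probability(T: float, n: int) -> float:
    return 1.0 - (1.0 - 1.0 / T) ** n

for T in (10, 20, 50, 100):  # return periods mapped in [Simon, 2008]
    print(f"T = {T:>3} yr -> P(>=1 exceedance in 50 yr) = {exceedance_probability(T, 50):.2f}")
# For the centennial level (9.30 m CMH), P is about 0.39 over a 50-year horizon.
```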
V.3. Damage to the breakwater of Malraux
In the absence of information on the characteristics of the Malraux breakwater, we assumed that the work is presently stable when subjected to a centennial wave. The work is not directly exposed to the offshore wave in the outer harbour; a diffraction abacus was thus used to determine the incident waves on the work. The work is assumed to consist of 1-3 t armour units. The statistical method for damage estimation is the same as that used for the overtopping case. From the 10,000-year database, the sea conditions are propagated up to the coast: the wave heights are thus known at the entry of the port, and the diffraction coefficient is applied. The Hudson formula [Ciria Cur Cetmef, 2007] is finally used to analyze the breakwater stability. The studied criterion is therefore no longer the overtopping discharge, but the damage through the stability coefficient Kd. The obtained results are extreme damages. A statistical analysis enables the determination of the return period of the damage (cf. tables 3 and 4).
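The sketch below illustrates a Hudson-type stability check of the kind described above. The rock density, water density, slope and design wave height are illustrative assumptions; the text only states that the Hudson formula and the stability coefficient Kd are used with 1-3 t armour units.

```python
# Hedged sketch of the Hudson formula W = rho_r * H^3 / (Kd * Delta^3 * cot(alpha)).
def hudson_required_mass(H: float, Kd: float, rho_r: float = 2650.0,
                         rho_w: float = 1025.0, cot_alpha: float = 1.5) -> float:
    """Required armour-unit mass (kg) for design wave height H (m); all inputs assumed."""
    delta = rho_r / rho_w - 1.0  # relative buoyant density
    return rho_r * H ** 3 / (Kd * delta ** 3 * cot_alpha)

# Each damage level corresponds to a stability coefficient Kd: a higher Kd
# tolerates a smaller unit mass for the same wave height.
for Kd in (2.0, 4.0, 8.0):
    print(f"Kd = {Kd}: required mass ~ {hudson_required_mass(3.0, Kd) / 1000:.1f} t")
```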
Four levels of damage of armour layers are retained, in reference to the main design guides: the beginning of damage, the intermediate damage, the important damage, and the breaking. Each of these levels corresponds to a stability coefficient, which can be related to a percentage of displaced armour units. For the strategy of "doing nothing", we can reasonably assume that the contracting authority will carry out works according to the various levels of damage.
-At the beginning of damage, no program of works is envisaged;
-At the intermediate damage, the contracting authority starts a program of works that consists of putting the missing armour units back in place;
-At the important damage, the contracting authority starts a general reinforcement of the armour layer;
-At the breaking, the work is replaced by an identical work.
The maintenance costs borne by the contracting authority are given in table 4. For the strengthening strategy, the reinforcement cost estimates (in € 2012) of table 5 are retained.
The contracting authority, considering the costs and the results of the laboratory tests, will certainly choose solutions (1+3, 1+5, 2). We shall thus retain 7,500 € TTC / lm as the average cost of a strengthening. The armour stability is strongly affected by the MSL rise scenarios. For a 2 m MSL rise, the only scenario presented here, each event with a return period greater than 100 years leads to a damage level of breaking. The strategy is then clearly to strengthen the work in place or to build a new work that resists (redesigning).
V.4. Damage to goods
The information collected on the average housing price per m² also allows us to establish a list of values for several types of buildings. This information enables the calculation of the damage to buildings during overtopping or overflowing (cf. tables 6 and 7). The method used is as follows:
-An inventory of the stakes based on the approach by entities of goods [Givone, 2005] is adopted to characterize the majority of the physical stakes vulnerable to floods, in particular the public and private buildings.
-The flood map is overlaid with the map of the stakes in the various zones to analyze the levels of risk from the water height in front of each stake.
-The following formula for the rate (in %) of damage at the ground floor is chosen: Ee = 5.68 H + 16.45 [Torterotot, 1993], where H represents the water height (a numerical illustration is given after this list).
-The economic cost of the damage is estimated according to the following formula: CE = Ee × Se × Cs, where CE is the cost of the damage for a building, Ee its rate of damage, Se its surface on the ground and Cs its cost per m².
-The evaluation of the average housing price per m² enables the value of the properties concerned to be known in monetary terms (evaluation of Cs).
-The final scenarios proposed for an economic study and a comparison of the adaptation strategies according to the severity of climate change are chronologically as follows: doing nothing (cost of the damage to the work + cost of the damage to the properties); strengthening (cost of the strengthening + cost of the damage to the work + cost of the damage to the properties); redesigning (cost of the reconstruction + cost of the damage to the work + cost of the damage to the properties); realignment (cost of the realignment + cost of the damage to the properties). Regarding the realignment, a second line of defence can be created, with the associated damage costs.
-The principle of cost annualization is used to compare the strategies.
-The proposed economic study is not sufficient when safety is involved. In general, other decision criteria come into play: the indirect economic costs, safety, acceptability, the environment, etc. A multi-criteria approach is thus sometimes necessary. In table 7, we find that the work sustains important damage up to the 100-year return period, but the breaking for more severe events justifies much higher maintenance costs. In 1,000 years, on average and approximately, several severe events will happen: 1 event of return period 1,000 years, 1 event of return period 500 years, 1 event of return period 333 years, etc. The average annual cost of damage is therefore the sum of the costs of each event divided by 1,000.
For construction costs and realignment costs, the choice is made to amortize them over 100 years.
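The following sketch, announced in the list above, illustrates the damage formulas and the annualization principle. All inputs are hypothetical placeholders; only the two formulas (Ee = 5.68 H + 16.45 in %, CE = Ee × Se × Cs) and the divide-by-1,000 averaging come from the text.

```python
# Damage rate at the ground floor [Torterotot, 1993], H = water height.
def damage_rate(H: float) -> float:
    return (5.68 * H + 16.45) / 100.0  # returned as a fraction of the value

# CE = Ee x Se x Cs: cost of the damage for one building (EUR).
def building_damage_cost(H: float, Se: float, Cs: float) -> float:
    return damage_rate(H) * Se * Cs

print(f"CE = {building_damage_cost(H=1.0, Se=120.0, Cs=2500.0):,.0f} EUR")  # ~66,390 EUR

# Annualization over 1,000 years: one event of each return period occurs on
# average; the per-event costs below are hypothetical placeholders.
event_costs = {1000: 5e6, 500: 3e6, 333: 2e6, 250: 1.5e6, 200: 1e6}
print(f"Average annual damage cost: {sum(event_costs.values()) / 1000.0:,.0f} EUR/yr")
```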
V.5. Comparison of strategies
The contracting authority can adopt four different attitudes:
-Doing nothing, which consists of reconstructing the work to its initial status and bearing the costs of damage to goods behind the dike;
-Strengthening the existing work with, for example, a front reservoir, a berm or an additional armour layer with a crown wall, in order to restore a stability and an overtopping discharge close to their values before the MSL rise;
-Removing the existing work and building a new one adapted to the new MSL. This measure is designed to have almost no damage and limited overtopping volumes. The cost is estimated at 30 k€ TTC / ml, and no damage to goods due to overtopping or overflowing is possible;
-Realigning, by abandoning the area behind the work (chosen here as the area flooded by a millennial event).
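A minimal sketch of how the four attitudes can be compared on an annualized basis is given below. The 7,500 €/lm strengthening cost, the 30 k€/ml redesign cost and the 100-year amortization come from the text; all damage terms and the realignment cost are hypothetical placeholders.

```python
AMORTIZATION_YEARS = 100.0  # construction and realignment costs amortized over 100 years

def annualized_cost(capex: float, work_damage: float, goods_damage: float) -> float:
    """EUR per linear metre per year: amortized capex + average annual damage."""
    return capex / AMORTIZATION_YEARS + work_damage + goods_damage

strategies = {               # (capex EUR/lm, annual work damage, annual goods damage)
    "doing nothing": (0.0,     250.0, 400.0),
    "strengthening": (7_500.0,  60.0, 120.0),
    "redesigning":   (30_000.0, 10.0,   0.0),
    "realignment":   (20_000.0,  0.0,  50.0),  # realignment capex is assumed
}
for name, args in sorted(strategies.items(), key=lambda kv: annualized_cost(*kv[1])):
    print(f"{name:>13}: {annualized_cost(*args):7.1f} EUR/lm/yr")
```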
VI. CONCLUSIONS AND PERSPECTIVES
In the Malraux district, the strengthening strategy becomes economically more interesting than the "doing nothing" strategy from a 1 m MSL rise onwards and remains the most economical solution up to a 2 m MSL rise. The realignment strategy must be envisaged only locally because of its relatively high cost. The project highlighted the value of obtaining a database of the joint probability of wave heights and water levels along the French coast. A method was developed to select the most economical strengthening solution for a work. This method must be implemented in an IT tool to offer an expert system to contracting authorities. The laboratory tests carried out within the framework of the project made it possible to determine, in a qualitative way, the most promising strengthening solutions. However, new design formulae (regarding in particular the armour stability, the crown wall stability and the overtopping volumes) are still to be obtained in order to capture the significant interaction between strengthening of the superstructures and stability and, inversely, between strengthening of the armour layers and overtopping. These problems are still little studied in the literature. It should finally be recalled that the danger exists at several scales: the scale of the work, the scale of the zone directly protected by the work, and the scale of the zone impacted by the flood risk.
VII. ACKNOWLEDGMENTS
This work has been partially funded by the SAO POLO project of the program Gestion et Impact du Changement Climatique (GICC) and by the THESEUS project of the 7th European Framework.
"year": 2014,
"sha1": "94b61301959085691c813ce2729393923f1404b4",
"oa_license": "CCBY",
"oa_url": "https://hal.archives-ouvertes.fr/hal-01318454/file/Adaptation%20of%20coastal%20structures%20to%20mean%20sea%20level%20rise.pdf",
"oa_status": "GREEN",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "732334e6c1910895b703b8e9cb63c61bfd2b0203",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Geology"
]
} |
Pharmacological Augmentation of Endothelium-Derived Nitric Oxide Synthesis
BACKGROUND: The sympathetic nervous system has a unique role in endothelial function, and beta-receptors are a key part of sympathetic nervous system function. OBJECTIVES: To elucidate the pharmacological augmentation of endothelium-derived nitric oxide synthesis. SUMMARY: Beta-blockers have been commercially available since the 1960s. Stimulating beta-receptors causes dilatation, whereas blocking beta-receptors, as traditional beta-blockers do, causes vasoconstriction. However, beta-blockers are hypotensives. This effect probably occurs because they inhibit renin in the kidney and juxtaglomerular apparatus, especially at high doses. They also have some central effects, because central inhibition of the sympathetic nervous system also lowers blood pressure. In addition, evidence suggests that beta-blockers work at the vascular biology level to produce nitric oxide release. Beta-blockers differ in terms of their beta-receptor selectivity, intrinsic sympathomimetic activity, and benefit/risk in diabetes and insulin sensitivity. Nebivolol, the newest of the beta-blockers, is long acting and the most cardioselective beta1-blocker currently available. Nebivolol-induced endothelium-dependent vasodilation associated with activation of the L-arginine/nitric oxide pathway may confer benefits to patients. The risk for diabetes is lower, the metabolic effects are lower, and people with diabetes who have clear nitric oxide dysfunction may particularly benefit from this agent. CONCLUSIONS: Third-generation beta-blockers, such as labetalol, carvedilol, bucindolol, and nebivolol, vasodilate by different mechanisms, behaving differently than traditional beta-blockers and offering different benefits.
Beta-receptors are a key part of sympathetic nervous system function. As hormones are released, they interact with the following receptors:
• Beta1-receptors predominate in healthy cardiac muscle over beta2-receptors.
• Beta2-receptors predominate in the lungs.
• Alpha1-receptors mediate endothelial function and vasoconstriction in peripheral vessels, regulate blood flow to the kidneys, and have been implicated in myocardial hypertrophy and benign prostatic hyperplasia.
• Stimulating beta-receptors causes dilatation. 2
Beta-blocker mechanisms are interesting. Beta-blockers should cause hypertension via beta-receptor blockade, and traditional beta-blockers do vasoconstrict. However, beta-blockers are hypotensives. This effect probably occurs because beta-blockers inhibit renin in the kidney and juxtaglomerular apparatus, especially at high doses. They also have some central effects because of central inhibition of the sympathetic nervous system (i.e., baroreceptor effects) that also lower pressure. Their ability to slow the heart rate also contributes to lower blood pressure (BP).
Beta-blockers have been commercially available since the 1960s. At this time, the third-generation beta-blockers include labetalol (a nonselective drug with higher affinity for the alpha1-receptor than for beta1- and beta2-adrenergic receptors); carvedilol (a beta1-selective drug that becomes less selective at higher doses and provides alpha1-receptor blockade); bucindolol (a nonselective drug that inhibits the alpha1-receptor); and nebivolol (higher beta1 selectivity than other beta-blockers, with endothelium-dependent vasodilation associated with activation of the L-arginine/nitric oxide [NO] pathway). These beta-blockers vasodilate by different mechanisms, behaving differently than traditional beta-blockers and offering different benefits.
Although beta-blocker subclasses do not appear to differ significantly in antihypertensive efficacy, beta1-selective agents may be more effective than nonselective beta-blockers. Beta-blockers with intrinsic sympathomimetic activity have been shown to have fewer clinical benefits in post-MI patients and to precipitate heart failure in high-risk patients. 2 This reduces their clinical utility. Beta-blockers differ in terms of benefit/risk in diabetes and insulin sensitivity. The third-generation beta-blocker carvedilol improves insulin sensitivity, while older beta-blockers (propranolol, atenolol, and metoprolol) are associated with decreased insulin sensitivity.
Additionally, endothelial-active antihypertensive agents are now available and inhibit free radical production and prevent activation of adhesion molecules. They also prevent platelet aggregation and inactivation of endogenous tissue plasminogen activator. Preventing these atherosclerosis-forming mechanisms can reduce the burden of disease. 3
The Newest: Nebivolol
Nebivolol is the newest of the beta-blockers. Nebivolol is a long-acting, highly cardioselective beta-blocker. It is the most selective beta1-blocker currently available. Its beta1 selectivity exceeds that of bucindolol, propranolol, and carvedilol (which have beta1/beta2 ratios of about 5); of metoprolol (which has a beta1/beta2 ratio of about 80); and of bisoprolol (which has a beta1/beta2 ratio of about 125). Its dual mechanism of action includes (1) selective beta1-receptor blockade and (2) stimulation of endothelial NO production. These 2 mechanisms work in concert on BP. Its pharmacokinetic profile is appropriate for once-daily dosing. 4 NO mediates stimulation of endothelium-dependent vasodilation. To determine if nebivolol possesses NO-mediated vasodilation effects in man, researchers (Bowman et al.) infused nebivolol alone, and then with an NO inhibitor. This allowed them to determine if NO-mediated mechanisms were at work. Given alone, nebivolol produced dose-dependent venodilation, but when administered with L-NMMA (NG-monomethyl-L-arginine, an NO inhibitor), venodilation was reduced markedly. NO is thus an important part of nebivolol's vasodilating ability. 5 Nebivolol's potential value rests in its dose-dependent BP reduction, which appears to peak at 5 to 10 milligrams. These doses can be expected to result in reductions of 10 to 12 millimeters of mercury. 6 A double-blind randomized multicenter study by Grassi et al. compared nebivolol's efficacy and tolerability to that of atenolol over 12 weeks. Middle-aged people with mild-to-moderate essential hypertension were randomized to nebivolol 5 mg daily (n = 105) or atenolol 100 mg daily (n = 100) after a placebo run-in phase.
Nebivolol and atenolol had similar and significant antihypertensive effects. Nebivolol's effect on sitting BP at 12 weeks was slightly better than atenolol's. Both reduced sitting and standing heart rates significantly, but nebivolol caused less bradycardia than did atenolol. Study subjects were better able to tolerate nebivolol and reported fewer side effects. 7 Again comparing nebivolol and atenolol (in the Grassi study), researchers confirmed that nebivolol and atenolol reduce SBP and DBP similarly, and that atenolol-treated study subjects tend to have significantly lower heart rates. But researchers found a significant difference in stroke volume. After 2 weeks of treatment with nebivolol, mean stroke volume increased significantly and heart rate slowed significantly, leading to a slight, nonsignificant increase in cardiac output. Peripheral resistance was reduced significantly.
After 2 weeks of treatment with atenolol, mean stroke volume increased slightly (this was not significant) and heart rate slowed. Cardiac output was reduced and peripheral resistance increased, again in a nonsignificant manner. Atenolol's antihypertensive effect was attributed to cardiac output and heart rate reduction. Nebivolol's antihypertensive effect was attributed to reduced peripheral resistance and increased stroke volume with preserved cardiac output. Both drugs reduce heart rate, which is a benefit.
In terms of end diastolic volume, nebivolol creates almost double the benefit seen with atenolol (a change of 10.6% vs. 5.7%, respectively). Nebivolol may be even more beneficial than atenolol in preventing heart failure due to its better end systolic volume (a change of 9.2% vs. -0.49%, respectively). 8 Using a mouse model, Georgescu et al. investigated the cellular mechanisms by which nebivolol induces renal artery vasodilation. 9 They found that the cellular mechanisms of nebivolol's vasodilator effect on the renal artery include activation of the endothelial beta2-adrenoceptor, participation of calcium-activated potassium channels, and an increase in NO and NO synthase. Nebivolol's profound vasodilating ability was dose dependent. NO blockade stopped vasodilation almost totally.
A separate study by Kalinowski et al. looked at renal arteries in rats, attempting to determine how nebivolol stimulates NO release from microvascular endothelial cells. 10 The researchers found that nebivolol induces relaxation of renal glomerular microvasculature, using adenosine triphosphate efflux with consequent stimulation of P2Y-purinoceptor-mediated NO release from glomerular endothelial cells. The magnitude of the endothelial NO stimulation and release in the kidney was indisputable.
Chronic inhibition of NO synthesis can lead to arterial hypertension. In another rat study by Fortepiani et al., researchers administered nebivolol (1 mg/kg/day, 14 days) concurrently with the NO synthesis inhibitor Nω-nitro-L-arginine methyl ester (L-NAME; 0.1, 1, and 10 mg/kg/day, 14 days). 11 Although glomerular filtration rate and natriuresis remained similar in nebivolol-treated and untreated rats, nebivolol completely prevented arterial hypertension in the L-NAME 0.1 and 1 mg/kg/day groups. It reduced the BP increase expected at the L-NAME 10 mg/kg/day dose. Nebivolol's ability to prevent arterial hypertension associated with chronic NO deficit appears to be related to inhibition of the renin-angiotensin system.
The traditional beta-blockers worsen glucose and lipid parameters in diabetics. [12][13][14][15][16][17] Might nebivolol be a more acceptable and effective antihypertensive in people who have concomitant aberrations of lipid metabolism or diabetes? In an observational study (N = 6,376) comparing adult patients with arterial hypertension with and without comorbid conditions (including diabetes), patients were treated with 5 mg nebivolol daily, with older adults (older than 65 years) receiving 2.5 mg. At the end of 6 weeks, significant decreases in SBP and DBP were observed, with 62.2% of the patients reaching normal BPs. Heart rate also improved. During the study, triglycerides fell 13% and cholesterol fell 8%. In diabetic patients, those results were more pronounced (triglycerides decreased 18% and cholesterol 9%). Glucose decreased in diabetics by 16%. 18 Nebivolol monotherapy improves glucose and lipid parameters, even in patients with diabetes.
Summary
Third-generation beta-blockers do provide better tolerability than traditional agents and may have added benefits due to vasodilating properties. The term "class effect" may now be obsolete for beta-blockers. Older agents (propranolol, atenolol, etc.) were similar, with subtle differences in cardioselectivity, but evidence indicates that effects unrelated to adrenergic blockade are working at the vascular biology level to produce NO release. The newest of the third-generation beta-blockers, nebivolol, offers the highest beta1 selectivity available among beta-blockers. Endothelium-dependent vasodilation associated with activation of the L-arginine/NO pathway may confer benefits to patients. The risk for diabetes is lower, the metabolic effects are lower, and people with diabetes who have clear NO dysfunction may particularly benefit from this agent.
"year": 2007,
"sha1": "653d875e08b84d182c14ecc05b32004188035cb7",
"oa_license": "CCBY",
"oa_url": "https://www.jmcp.org/doi/pdf/10.18553/jmcp.2007.13.s5.9",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "d3bb97bf161ac62cde17eda2d0396bfe76767a75",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Denitrification Coupled with Methane Anoxic Oxidation and Microbial Community Involved Identification
In this work, the biological denitrification associated with anoxic oxidation of methane and the microbial diversity involved were studied. Kinetic tests for nitrate (NO3−) and nitrite (NO2−) removal and methane uptake were carried out in 100 mL batch reactors incubated in a shaker (40 rpm) at 30 °C. Denitrificant/methanotrophic biomass was taken from a laboratory-scale reactor fed with synthetic nitrified substrates (40 mgN L−1 of NO3− and subsequently NO2−) and methane as carbon source. The results obtained for nitrate removal followed a first-order reaction, with an apparent kinetic constant (kNO3) of 0.0577±0.0057 d−1. Two notable points of the denitrification rate (0.12 gNO3−-N g−1AVS d−1 and 0.07 gNO3−-N g−1AVS d−1) were observed at the beginning and on the seventh day of operation. When nitrite was added as electron acceptor, denitrification rates were improved, with an apparent kinetic constant (kNO2) of 0.0722±0.0044 d−1, a maximum denitrification rate of 0.6 gNO2−-N g−1AVS d−1, and a minimum denitrification rate of 0.1 gNO2−-N g−1AVS d−1 at the beginning and end of the test, respectively. Endogenous material supporting denitrification and the methane concentration dissolved in the substrate were ruled out by control experiments in the absence of methane and seed, respectively. Methylomonas sp. as well as an uncultured bacterium were identified in the reactors fed with nitrate and nitrite.
INTRODUCTION
Methane oxidation by methanotrophs occurs in nature in aerobic (soil, rivers, lakes, etc.) and anaerobic (marine sediments) environments. Methanotrophs are divided into three specific groups depending on the pathway used for carbon uptake in biosynthesis: a) type I utilize the ribulose monophosphate pathway (RuMP), b) type II employ the serine cycle, and c) type X, Methylococcus capsulatus-like organisms (type I) that utilize the RuMP pathway despite having low levels of serine pathway enzymes (Hanson and Hanson, 1996). In the presence of oxygen, methanotrophs oxidize methane, releasing soluble organics such as methanol that can be utilized by coexisting denitrificants as carbon sources for their metabolic activities. Nitrate (NO3−) removal with methane as external carbon source has been observed since the 1970s (Modin et al., 2007). However, the syntrophic association between methanotrophs and denitrifiers was first demonstrated by Rhee and Fuhs in 1978 (Modin et al., 2007). Davies (1973) isolated bacteria capable of denitrifying with methane as the sole carbon source. However, these bacteria were not found to be specific for methane but could also use other carbon compounds as electron donors. Sollo et al. (1976) compared the denitrification process with methane and methanol as carbon sources in two different systems, packed columns and a fluidized bed. The rates of nitrate reduction with methane in packed columns were lower than with methanol, 0.7 mgNO3−-N L−1 h−1 and 4.6 mgNO3−-N L−1 h−1, respectively. A higher methane denitrification rate was observed in the fluidized bed reactor (1.2 mgNO3−-N L−1 h−1), but half of that rate, 0.6 mgNO3−-N L−1 h−1, was attributed to denitrification supported by the organic content of the effluent used as the culture medium (Sollo et al., 1976). In view of the low rate of nitrate reduction and the problems involved in supplying dissolved methane to the denitrifying bacteria, it was concluded that denitrification with methane would not be an economical process. However, more recent studies considered the aerobic oxidation of methane associated with the denitrification process as an alternative for organic carbon supply, which is necessary for nitrogen oxide removal from waters with low organic concentration (Werner and Kayser, 1991; Jewell et al., 1992; Thalasso et al., 1997; Rajapakse and Scutt, 1999; Houbron et al., 1999; Costa et al., 2000; Knowles, 2005; Waki et al., 2005). Studies of anaerobic methane oxidation have related this process to sulfate reduction (Iversen and Jorgensen, 1985; Valentine and Reeburgh, 2000; Nauhaus et al., 2002). However, little is known about anoxic nitrogen compound reduction with methane as electron donor or about the organisms involved in the process. Islas-Lima et al. (2004) studied the dissimilative nitrate reduction in the presence of methane under anoxic conditions. The highest denitrification rate obtained was 0.25 gNO3−-N g−1 VSS d−1 for partial pressures of methane equal to or higher than 8.8 kPa. For lower pressures, the rate obtained was 4.9×10−3 gNO3−-N g−1 VSS d−1, leading to the conclusion that nitrate removal was dependent on the electron donor availability in the system. Raghoebarsing et al. (2006) isolated and identified a microbial consortium composed of two kinds of microorganisms, an uncultured bacterium and an archaeon related to marine archaea, able to accomplish methane oxidation to carbon dioxide coupled to denitrification of nitrate as well as nitrite under anaerobic conditions.
The aim of this work was to study the anoxic oxidation of methane coupled with nitrate and nitrite biological denitrification.
Methanotrophic/denitrificant biomass adaptation
In order to develop the kinetic tests for denitrification from nitrate (NO3−) and subsequently from nitrite (NO2−) using methane as electron donor, as well as to characterize the microbial community involved, biomass adapted to both conditions was utilized. Adaptation was accomplished in a bench-scale sequencing batch reactor (volume 1.6 L), in which 7-day cycles were performed for a five-month period with nitrate as electron acceptor, followed by a four-month period using nitrite as electron acceptor, with 3-day cycles. The cycle times were defined in order to obtain higher nitrogen reduction rates. In both conditions, methane was the only external carbon source added to the system. Carbon source availability and the absence of oxygen were achieved by injecting 3.84 L min−1 of methane into the reactor every five minutes during each four-hour period. To enhance the mass transfer between the liquid (bulk liquid) and gas (methane) phases, a submersed pump with a mean flow rate of 90 L h−1 was installed at the base of the reactor for the recirculation of gas from the headspace to the bulk liquid. The synthetic substrate used for cellular growth in denitrification from nitrate comprised (mg L−1): NaNO3 (243); KH2PO4 (216); K2HPO4 (280); Na2SO4 (10); NaHCO3 (100); yeast extract (10) (50 Co). In denitrification from nitrite, NaNO3 was substituted by NaNO2 (197 mg L−1). Biomass was immobilized on polyurethane foam cubes (20 kg m−3 density, 5 mm length) occupying approximately 5 cm of the reactor height. Varesche et al. (1997) suggested polyurethane foam as the most satisfactory support medium for biomass immobilization. The reactor set-up was kept in a controlled chamber at 30 °C (Fig. 1).
Kinetic tests
Kinetic tests were conducted in 100 mL batch reactors containing 80 mL of culture medium and seed. The culture medium had the same characteristics as the reactor influent and was prepared by boiling ultrapure water and cooling it under a nitrogen atmosphere. Fifty cubes of polyurethane foam were collected from the methanotrophic/denitrificant culture adaptation reactor and used as seed.
Endogenously sustained denitrification and the methane concentration variation in the headspace of the test flasks resulting from solubilization and sampling were analyzed in control reactors in the absence of methane and seed, respectively. After the addition of culture medium and seed, the methanotrophic denitrificant reactors (RMD) were sealed. The headspace atmosphere of each reactor was replaced by methane (99.5%), injected at a flow of 1.28 L min−1 for 15 min. Methane, nitrate and nitrite concentrations were measured daily; the concentration of nitrogen in the form of ammonium (NH4+-N) was measured at the end of the test. After each sampling, helium gas was supplied to re-establish the headspace gas pressure. Reactors and controls were incubated in a rotating chamber (40 rpm) at 30 °C. All tests were performed in triplicate. A Gow-Mac gas chromatograph with a thermal conductivity detector and a 2 m long, ¼" diameter Porapak Q column was used for methane analysis. During the analyses, the oven, column and detector temperatures were 50 °C, and hydrogen (60 mL min−1) was utilized as the carrier gas. For the quantification of nitrate, nitrite and ammonium nitrogen, the flow injection analysis (FIA) method was used as described in the Standard Methods for the Examination of Water and Wastewater (APHA, 1995). Nitrate and nitrite concentrations were determined by Method 4500 NO3− I, and the ammonium concentration was determined by Method 4500 NH3 B. Determination of the foam-attached volatile solids (AVS) was preceded by the removal of the attached solids from the foam matrices. Foam matrix samples were transferred to a Falcon tube (15 mL) and the AVS were detached using a glass stick and distilled water. The washed volumes were transferred to porcelain capsules, and the washing procedure was repeated until the foam matrices were clean. Solids determinations were conducted at the beginning and at the end of each test according to APHA (1995). Foam mass was determined after drying at 100 °C. Kinetic fitting curves for nitrate and nitrite removal were constructed using Microcal Origin® 6.0 software.
DNA extraction
The microbial biomass was retrieved from the polyurethane foam matrices by successive washing in phosphate buffer and subsequent centrifugation to pellet the cells. The pellets were kept on ice, and total DNA was extracted using the phenol:chloroform:glass beads-based protocol described by Griffiths et al. (2000).

Polymerase chain reaction (PCR) amplification

For the denaturing gradient gel electrophoresis (DGGE) analysis, 16S rRNA gene fragments were amplified by PCR using the specific primers 968F and 1392R (Table 1) for the Bacteria Domain (Nielsen et al., 1999). A GC-clamp (Muyzer et al., 1993) was added to the forward primers. A 2.0 µL volume of DNA template was added to the amplification reaction, which was performed in accordance with the supplier's instructions for Platinum Taq DNA polymerase (Invitrogen®, Carlsbad, CA, USA). The PCR was performed with a System 2400 thermocycler (Perkin-Elmer Cetus, Norwalk, CT, USA), with the program described by Nielsen et al. (1999). Sequence alignment and phylogenetic analysis were performed using the Mega software (version 3). A phylogenetic tree was constructed by 500-fold bootstrap analysis using the neighbor-joining method. The programming conditions were: cycle number (35), initial denaturation (94 °C, 5 min), denaturation (94 °C, 45 s), annealing (38 °C, 1 min), extension (72 °C, 2 min), final extension (72 °C, 10 min) and cooling (4 °C). Products resulting from the nucleic acid extraction and PCR amplification were evaluated by agarose gel (1%) electrophoresis; the experimental procedures for both were similar, differing only in the molecular mass marker used (a high molecular mass marker for the nucleic acid extraction products and a low molecular mass marker for the PCR products). The DGGE analysis was conducted as indicated by Muyzer et al. (1993).
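For clarity, the thermocycler program described above can be written compactly as a configuration, as in the sketch below; the helper that estimates the total run time (ignoring temperature ramping) is an illustrative addition.

```python
# Thermocycler program as described in the text (temperature in C, duration in s).
PCR_PROGRAM = {
    "initial_denaturation": (94, 5 * 60),
    "cycle": [("denaturation", 94, 45), ("annealing", 38, 60), ("extension", 72, 120)],
    "n_cycles": 35,
    "final_extension": (72, 10 * 60),
}

def total_run_time_min(program: dict) -> float:
    per_cycle = sum(duration for _, _, duration in program["cycle"])
    fixed = program["initial_denaturation"][1] + program["final_extension"][1]
    return (fixed + program["n_cycles"] * per_cycle) / 60.0

print(f"Estimated run time: {total_run_time_min(PCR_PROGRAM):.0f} min")  # ~146 min
```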
Kinetic tests
After the 10th day of incubation, the reactors that received nitrate as electron acceptor demonstrated an average removal efficiency of 44%, whereas the reactors receiving nitrite as electron acceptor reached an average removal efficiency of 56% (Figs. 2a and b). In both cases, NH4+-N was not detected at the end of the test, eliminating the hypothesis of nitrogen removal by dissimilative reduction to ammonium. In the nitrate-fed reactors, the presence of nitrite was not verified at the end of the test. Neither test showed variation in attached volatile solids (AVS) during the incubation period, with 0.195 gAVS g−1 foam (tests with nitrate) and 0.075 gAVS g−1 foam (tests with nitrite) remaining. A first-order kinetic model most accurately represented the variation of nitrate (R² = 0.96) and nitrite (R² = 0.98) concentrations over time (Figs. 2a and b). The values of the apparent kinetic constants for nitrate (kNO3) and nitrite (kNO2) removal were 0.0577±0.0057 d−1 and 0.0722±0.0044 d−1, respectively. The rates of nitrite removal reached values four times higher than those obtained for nitrate removal at the beginning of the test and five times higher at the end (Figs. 3a and b). In the nitrate-incubated reactors, two notable points in the removal rate were detected, 0.14 gNO3−-N g−1AVS d−1 and 0.12 gNO3−-N g−1AVS d−1 on the first and seventh days, respectively (Fig. 3a). Nitrogen removal attributable to endogenous decay of organic matter was ruled out on the basis of the control reactors incubated without methane as carbon source (Fig. 4). In the nitrate-fed reactors, no major NO3− variation was detected within the 10-day operation period (Fig. 4a), whereas the nitrogen removal in the nitrite-fed reactors was significant during the first three days of operation and reached a mean value of 18% (Fig. 4b). The apparent kinetic constant for nitrite removal under endogenous conditions (k'NO2(e)) exhibited a value of 0.6415±0.1443 d−1; however, the kinetic model that best fit the experimental results was a first-order reaction with a residual fraction of NO2− (Fig. 4b), adapted from Pinho et al. (2002). In this case, the calculated NO2− residual fraction was 32.5±0.8 mg L−1.
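The two kinetic models used above are reproduced in the short Python sketch below, fitted to synthetic data; the original fits were performed in Microcal Origin, and the data points here are placeholders generated around the reported kNO3.

```python
# First-order decay and first-order decay with a residual fraction
# (adapted from Pinho et al., 2002), fitted with scipy on synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c0, k):
    return c0 * np.exp(-k * t)

def first_order_residual(t, c0, c_res, k):
    return c_res + (c0 - c_res) * np.exp(-k * t)

rng = np.random.default_rng(0)
t = np.arange(0.0, 11.0)                                       # days
c = 40.0 * np.exp(-0.0577 * t) + rng.normal(0.0, 0.3, t.size)  # mgN/L, synthetic

(c0, k), _ = curve_fit(first_order, t, c, p0=(40.0, 0.05))
print(f"fitted k = {k:.4f} 1/d (reported kNO3 = 0.0577 1/d)")
```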
Methane uptake
The methane concentration in the headspace of the reactors decreased during the incubation period under both conditions tested (Figs. 5a and b). In the control reactors (RC) that did not receive inoculum, the reduction in methane concentration was a result of methane solubilization in the bulk liquid and of the sampling procedure adopted during the incubation period. For this reason, these values were not attributed to methane uptake by methanotrophs. When nitrate was added as electron acceptor (Fig. 5a), the decline of the methane concentration in the denitrificant reactors (RMD) was greater than in the RC reactors. That variation was more obvious from the fifth day of incubation to the end of the study. However, when nitrite was present (Fig. 5b), the differences between the RC and RMD reactors were not so dramatic, most likely because denitrification from nitrite consumed less carbon source. After subtracting the methane variation of the RC, the methane uptake by methanotrophs in the RMD fed with nitrate and nitrite was 0.009 and 0.005 mol L−1, respectively, corresponding to uptake rates of 0.52 mol CH4 g−1 NO3−-N and 0.17 mol CH4 g−1 NO2−-N, respectively.
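As a consistency check (not part of the original analysis), the sketch below reproduces the reported uptake ratios by dividing the net methane uptake by the mass of nitrogen removed per litre; the initial N concentrations and removal efficiencies are taken from the text, and small deviations reflect rounding in the reported figures.

```python
# Net methane uptake (mol/L, controls subtracted) per gram of N removed.
def uptake_per_g_n(d_ch4: float, n0_mg_per_l: float, removal: float) -> float:
    n_removed_g_per_l = n0_mg_per_l * removal / 1000.0
    return d_ch4 / n_removed_g_per_l

print(f"NO3- test: {uptake_per_g_n(0.009, 40.0, 0.44):.2f} mol CH4/g N")  # ~0.51 (reported 0.52)
print(f"NO2- test: {uptake_per_g_n(0.005, 45.0, 0.56):.2f} mol CH4/g N")  # ~0.20 (reported 0.17)
```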
DGGE profile
The gel in Figure 6 shows a DGGE profile obtained using primers for the Bacteria Domain. Table 2 shows the sequencing results for the samples adapted to methanotrophic/denitrificant conditions for nitrate (MDNO3−) and for nitrite (MDNO2−), obtained from the isolated and amplified DGGE bands. Note that three bands were attributed to bacteria of the genus Methylomonas with 96 to 97% similarity. Other bands were attributed to uncultivated ammonia-oxidizing bacteria.
The phylogenetic tree (Fig. 7) illustrates the relationships between the DGGE bands and organisms from the Bacteria Domain. Its construction included known sequences.
DISCUSSION
Compared with the results from the RC, the results obtained from the RMD for nitrate and nitrite demonstrated that the inoculum used presented denitrificant characteristics with methane as carbon source under anoxic conditions. The inhibiting effect of nitrite on methanotrophic processes under certain conditions has been reported by others (King and Schnell, 1994; Dunfield and Knowles, 1995; Whalen, 2000; Waki et al., 2002). However, that effect was not observed in this work for NO2−-N concentrations of 45 mg L−1. The values obtained for k'NO3 and k'NO2 were 0.0577±0.0057 d−1 and 0.0722±0.0044 d−1, respectively. The data obtained for the removal rates showed that the process was more efficient when nitrite was the electron acceptor. Raghoebarsing et al. (2006) also observed that nitrite (up to 84 mg NO2−-N L−1) was removed better than nitrate under methanotrophic/denitrificant conditions in the absence of oxygen. The nitrate specific removal rates obtained at the beginning of the test (0.13 gNO3−-N g−1AVS d−1) were lower than those obtained by Islas-Lima et al. (2004) (0.25 gNO3−-N g−1VSS d−1). In this case, the foam matrix and support media, in addition to the low rotating velocities during the incubation period, might have interfered with methane transfer to the bulk liquid, limiting its availability to the microorganisms. Roslev and King (1994), while studying methane starvation effects in methanotrophs, observed that lowering the rotating velocity from 120 to 60 rpm during the incubation stage led to a 90% reduction in the growth rate of methanotrophic microorganisms due to gas transfer limitations. The higher methane uptake observed in the test with NO3−-N (0.009 mol L−1) might be associated with the fact that denitrification from nitrite requires a lower quantity of electron donor, according to equations (1) and (2) (Raghoebarsing et al., 2006):

5CH4 + 8NO3− + 8H+ → 5CO2 + 4N2 + 14H2O (1)

3CH4 + 8NO2− + 8H+ → 3CO2 + 4N2 + 10H2O (2)

The results showed uptakes of 0.52 mol CH4 g−1 NO3−-N and 0.17 mol CH4 g−1 NO2−-N, which were about ten times the theoretical uptake. Raghoebarsing et al. (2006) observed that the methane uptake was very similar to the stoichiometric value. However, their tests were conducted with purified cultures that did not present significant competition for the substrate when compared with non-pure cultures. The possibility that different microorganisms, with different environmental and nutritional requirements, could occupy the same niche and be capable of using methane as carbon source could justify the occurrence of the two specific removal rate peaks in the reactor incubated with nitrate (Fig. 3a). Graham et al. (1993), studying the competition between Methylosinus trichosporium OB3b (Type II methanotroph) and M. albus BG8 (Type I methanotroph) in a continuous-flow reactor, noted that Type II methanotrophs were favored under nitrogen- and copper-limiting conditions. Amaral and Knowles (1995) reported the presence of Type II methanotrophs at low oxygen concentrations and high methane concentrations, whereas Type I methanotrophs prevailed at high oxygen concentrations and low methane concentrations. While studying the effects of ammonia salts on the methane oxidation rate in soil samples, Gulledge et al. (1997) also reported the presence of different organisms capable of utilizing methane in the same niche.
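The comparison with the stoichiometric demand can be made explicit with the short sketch below, which divides the measured uptakes by the theoretical values implied by equations (1) and (2) (5 mol CH4 per 8 mol NO3−-N and 3 mol CH4 per 8 mol NO2−-N).

```python
# Measured vs. stoichiometric methane demand per gram of N denitrified.
M_N = 14.007  # g/mol

theoretical = {"NO3-": 5 / (8 * M_N), "NO2-": 3 / (8 * M_N)}  # mol CH4 per g N
measured = {"NO3-": 0.52, "NO2-": 0.17}                       # reported values

for acceptor, th in theoretical.items():
    print(f"{acceptor}: {measured[acceptor] / th:.1f}x the stoichiometric uptake")
# NO3-: ~11.7x, NO2-: ~6.3x -- of the order of the 'ten times' stated above
```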
The phylogenetic identification of the community associated with the nitrate or nitrite denitrification processes with methane as electron donor revealed the presence of the same organisms under both conditions (see Table 2). These results, as well as those reported by Raghoebarsing et al. (2006), suggest that the organisms present were capable of adapting to different electron acceptors and of using them for anoxic methane oxidation. The Methylomonas sp. identified under both conditions are classified as Type I methanotrophs (Hanson and Hanson, 1996). Methanotrophic bacteria are considered aerobic organisms. However, while studying the microbial diversity involved in aerobic and anaerobic methane oxidation at different depths of the Black Sea, Schubert et al. (2006) observed the presence of Type I methanotrophs (Methylococcaceae) in deep water at 75 to 130 m, where the oxygen concentration was lower than 1.5 µM. The methanotrophic population accounted for 0.3 to 4% of the total bacterial cells, and they were the principal organisms responsible for methane oxidation in that region. Raghoebarsing et al. (2006) purified and identified a microbial consortium formed by two microorganisms, an unidentified bacterium and an archaeon similar to marine archaea, capable of oxidizing methane to carbon dioxide while supporting the denitrification of nitrate and nitrite under anaerobic conditions. The present results, as well as those obtained by Islas-Lima et al. (2004) and Raghoebarsing et al. (2006), suggest that the oxidation of methane in the presence of nitrate or nitrite is possible. Apparently, the microbes responsible for aerobic methane oxidation are capable of adapting to anoxic conditions. However, little information about the communities involved in the denitrification process with methane as electron donor is available. According to Modin et al. (2007), it has not yet been possible to isolate any microorganism with the ability to anaerobically oxidize methane. Thus, more studies identifying the microorganisms as well as the metabolic pathways involved in anoxic methane oxidation with nitrate and nitrite as electron acceptors must be performed.
"year": 2011,
"sha1": "22bc702a6bfffa978de73fab50c5d1d3f73c341a",
"oa_license": "CCBYNC",
"oa_url": "http://www.scielo.br/pdf/babt/v54n1/a22v54n1.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "22bc702a6bfffa978de73fab50c5d1d3f73c341a",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Recent Advances in the Synthesis of Isothiocyanates Using Elemental Sulfur
Isothiocyanates (ITCs) are biologically active molecules found in several natural products and pharmaceutical ingredients. Moreover, due to their high and versatile reactivity, they are widely used as intermediates in organic synthesis. This review considers the best practices for the synthesis of ITCs using elemental sulfur, highlighting recent developments. First, we summarize the in situ generation of thiocarbonyl surrogates followed by their transformation in the presence of primary amines leading to ITCs. Second, carbenes and amines afford isocyanides, and the further reaction of these species with sulfur readily generates ITCs under thermal, catalytic or basic conditions. Additionally, we also reveal that two different mechanistic pathways, until now overlooked and uninvestigated, exist in the catalyst-free reaction of isocyanides and sulfur.
Introduction
Isothiocyanates (ITCs) are biologically active molecules occurring in cruciferous vegetables such as broccoli, watercress, cabbage and cauliflower, suggested to have anti-tumour activity [1][2][3]. They are represented among natural products and pharmaceutical ingredients by the biologically relevant welwitindolinone and hapalindole alkaloids isolated from various algae species [4]. Notably, glucosinolates, found as secondary metabolites in almost all plants, contain the -S-C=N- functional group and act as precursors for various ITCs [5,6]. Tissue damage of the plant promotes myrosinase enzyme activity as a defence mechanism, triggering the degradation of glucosinolates and releasing, e.g., allyl, benzyl or phenethyl ITC or sulforaphane [7]. Sulforaphane, in particular, has shown neuroprotective activity in the treatment of the neurodegenerative Alzheimer's and Parkinson's diseases [2,8]. Moreover, ITCs express significant antiproliferative activity as well [3,9], and the anti-microbial nature of certain ITCs makes them useful in food preservation [10]. Recently, they have also been applied as covalent warheads for labelling cysteine or lysine residues in medicinal chemistry and chemical biology applications [11][12][13][14]. Notably, due to their high and versatile reactivity, they are widely used as intermediates in organic synthesis [15,16]. ITCs readily react with nucleophiles, participate in cycloadditions leading to diverse heterocycles, and are used in polymer chemistry [17].
The synthesis of ITCs generally relies on the reaction between thiophosgene or CS2 and amines and thus involves the use of highly toxic reagents with narrow functional group compatibility [18][19][20][21][22]. Various thiocarbonyl transfer reagents have appeared in recent decades to overcome these drawbacks, such as thiocarbonyl-diimidazole or dipyridin-2-yloxymethanethione [23,24]. Decomposition of thiocarbamates or dithiocarbamate salts with various reagents offers a good alternative as well; however, this approach first requires the synthesis of the appropriate precursors [25][26][27][28][29][30]. Nitrile oxides react with thiourea to afford ITCs and harmless urea, but one should note that the instability of nitrile oxides leads to many by-products, rendering this approach less attractive [31]. The reaction of isocyanides with disulfides in the presence of thallium(I) salts as catalysts also leads to the formation of ITCs [32]. This area has been reviewed recently [33]; thus, in this review, we focus solely on sulfur-based synthetic methods, which have emerged strongly in recent years (Scheme 1). Elemental sulfur acts as the most atom-efficient surrogate to integrate the sulfur atom into the product [34][35][36]. The process is based on the nucleophilic attack of in situ generated carbene functionalities (1) on sulfur, which behaves as an electrophile due to its empty d-orbitals [37]. This approach leads to thiocarbonyl surrogates (2), usually dihalogenides, reacting with primary amines (3) to provide ITCs (4). Alternatively, isocyanides (5), whose terminal carbon atom is able to act as a carbene, also undergo reaction with sulfur under thermal conditions or in the presence of external additives to yield ITCs. Notably, the addition of sulfur to formaldimines has also been reported to generate ITCs, but this method is barely used nowadays [38,39]. The convenient activation of sulfur by nucleophilic additives, such as aliphatic amines and hydroxide, sulfide and cyanide anions [40], and the correspondingly significantly milder conditions compared to thermal activation, support the notion that a switched mechanism also exists, involving a nucleophilic sulfur anion (Sx−) and the carbene of the isocyanide (5) acting as an electrophile (Scheme 2). The experimental findings from the research of Al-Mourabit et al. and Meier et al. and those of our research group also support this latter presumption [41][42][43][44][45][46]. The exact number and distribution of sulfur atoms in the anions formed were investigated both experimentally and theoretically, suggesting that they depend on the reaction conditions and the reactants [47].
Synthesis of ITCs through Thiocarbonyl Surrogates
Thiocarbonyls such as thiophosgene or thiocarbonyl fluoride are typical precursors for the synthesis of ITCs [33,48]. However, due to their extremely reactive, volatile and toxic nature (bp. 70-75 °C and approximately −60 °C, respectively), they are inconvenient to store and handle. The in situ preparation of thiocarbonyl surrogates from carbenes and sulfur is the most significant approach considering the focus of this review on sulfur-based ITC synthesis. Besides the well-known trapping of (hetero)cyclic carbenes with sulfur, the transformation of di- or trihalogenated compounds into halocarbenes and their subsequent reaction with sulfur is considered a convenient process [49][50][51]. Common halogenated reagents are chloroform (6), trimethyl(trifluoromethyl)silane (F3CSiMe3, 7) and the sodium and potassium salts of chlorodifluoroacetic acid (e.g., ClF2CCO2Na, 8) and bromodifluoroacetic acid (e.g., BrF2CCO2K, 9, Scheme 3) [52]. Notably, (triphenylphosphonio)difluoroacetate (PDFA, 10), prepared from BrF2CCO2K with triphenylphosphine, is a more efficient precursor of difluorocarbene, based on the research work of Xiao and co-workers [53,54]. In EtOAc or nitromethane at reflux temperature, PDFA (10) decomposes to difluorocarbene (11), which is rapidly consumed by sulfur [55]. DFT calculations revealed that the reaction of sulfur and difluorocarbene is exothermic, with a high thermodynamic driving force (ΔG = −207.1 kJ mol−1) and low activation energy barriers (ΔG‡ = 33-42 kJ mol−1). Eventually, Xiao and co-workers discovered the three-component reaction of 10, sulfur and primary amines (12), resulting in ITCs (13) in 5 min at 80 °C in DME (Scheme 4A) [56]. The reaction tolerated a wide range of functional groups, including nitrile groups, halogens and heteroaromatic nitrogen atoms. In particular, unsaturated C=C double and C≡C triple bonds remained intact, which demonstrates the preferential reaction of difluorocarbene (11) with sulfur over other potential scavengers under the applied reaction conditions [52]. They proved the formation of thiocarbonyl fluoride (14) directly using HRMS and indirectly by trapping it in a Diels-Alder cycloaddition (Scheme 4C,D). In their three-component ITC synthesis, Jiang and co-authors introduced F3CSiMe3 (7) as a difluorocarbene source (Scheme 4B) [57]. Here, KF is responsible for the initiation of the reaction under ambient conditions in THF. Although they failed to capture difluorocarbene (11) in control experiments with prop-1-en-2-ylbenzene (15) under standard reaction conditions (Scheme 4E), the F3CS− anion (16) could be detected by HRMS. Based on literature data, they suspected the formation of thiocarbonyl fluoride (14) by the reversible decomposition of 16 (Scheme 4F) [58][59][60][61]. Zhang and Feng carried out the synthesis of ITCs starting from BrF2CCO2Na (20, Scheme 5) [62]. The reaction conditions were harsh compared to the PDFA- and F3CSiMe3-based methods, resulting in the formation of ITCs (21) after 12 h at 100 °C in the presence of a copper catalyst and an excess of the base. Presumably, the role of the base is to promote the HBr elimination from 20 and to react with the acid, while the copper catalyst is assumed to play a role in the formation of difluorocarbene (11) and might also stabilize the reactive intermediate [63,64]. The authors suggested two mechanistic pathways: through thiocarbonyl fluoride 14 or through the formation of an isocyanide intermediate (22).
Under standard reaction conditions, in the absence of sulfur, they isolated naphthalene-2-isocyanide (24) in 55% yield, which they transformed into 25 with sulfur in 83% yield (Scheme 6A). On the other hand, ortho-phenylenediamine (26) did not provide the expected cyclic thiourea (27) but rather 1-difluoromethyl benzimidazole (28) under standard reaction conditions, suggesting a fast attack of the neighbouring amine on the carbene. Similarly, the incorporation of sulfur was unsuccessful in the case of ortho-hydroxyaniline (29), where the standard conditions led to the formation of benzoxazole 30 (Scheme 6B). Interestingly, contrary to the latter result, Weng and co-workers showed that ortho-hydroxyanilines provide N-substituted benzoxazole-2-thiones (31) if difluorocarbene is generated in the reaction under different conditions (Scheme 6C) [59]. Consequently, in the case of the preparation of ITCs 21 (Scheme 5), the relatively harsh conditions, long reaction times and the control experiments do not support the in situ generation of thiocarbonyl fluoride. In fact, the generation of difluorocarbene from bromodifluoroacetate or the less reactive chlorodifluoroacetate readily takes place below 100 °C [63][64][65]. In conclusion, a more likely mechanism for the copper-catalysed ITC formation might rather be the transformation of the primary amine (23) into an isocyanide (22), which directly reacts with sulfur, resulting in ITC (21, Scheme 5). Interestingly, dichlorocarbene is less prone to react with sulfur to form thiophosgene, as Tan and co-workers suggested in their study on the multicomponent synthesis of thioureas [66]. Starting from chloroform (6) and KOtBu at 55 °C, they trapped dichlorocarbene with the activated cyclohexene 32 (Scheme 7A). Nonetheless, the combination of sulfur with dichlorocarbene and sequentially with 4-toluidine (33) did not result in the formation of the corresponding ITC (34, Scheme 7B). On the contrary, in a sequential approach applied to 33, they generated the isocyanide 35 with dichlorocarbene, which was further transformed into 34 with sulfur under standard reaction conditions (Scheme 7C). This latter experiment suggests that the isocyanide may be the key intermediate in the reaction.
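To put the DFT values quoted in this section in perspective, the sketch below converts the reported ΔG of −207.1 kJ mol−1 into an equilibrium constant and the 33-42 kJ mol−1 barriers into Eyring rate constants; the 298.15 K temperature and the use of the Eyring equation are our own illustrative assumptions, not part of the cited study.

```python
import math

R = 8.314             # gas constant, J/(mol K)
KB_OVER_H = 2.084e10  # Boltzmann constant / Planck constant, 1/(s K)
T = 298.15            # K, assumed

K_eq = math.exp(207_100 / (R * T))  # from the -207.1 kJ/mol driving force
print(f"K_eq ~ {K_eq:.1e}  (essentially irreversible)")

for dg_barrier in (33_000, 42_000):  # J/mol, reported barrier range
    k_rate = KB_OVER_H * T * math.exp(-dg_barrier / (R * T))
    print(f"dG_barrier = {dg_barrier / 1000:.0f} kJ/mol -> k ~ {k_rate:.1e} 1/s")
```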
Synthesis of ITCs from Isocyanides
The sulfuration of isocyanides directly leads to ITCs (Scheme 1). Aromatic isocyanides and sulfur afford ITCs after refluxing in benzene for 3 days, resulting in moderate yields [34]. On the other hand, aliphatic isocyanides practically do not undergo any reaction at all [32]. This led researchers to the realization that catalysis or other types of activation, particularly nucleophilic additives, are necessary for an efficient, useful and comprehensive methodology.
Catalysis
The application of chalcogen or transition metal catalysts, such as selenium [67], tellurium [68], molybdenum [69,70] or rhodium [71], greatly facilitates the generation of ITCs, offering excellent yields. These results have already been discussed in previous excellent reviews; thus, we provide only a focused overview of this field [72,73]. In contrast to sulfur, in the presence of a base, selenium readily reacts with isocyanides (37) in refluxing THF, resulting in isoselenocyanates (38), which may turn into ITCs (39) with sulfur in only a few hours (Scheme 8) [67]. Fujiwara and co-workers showed that selenium is, indeed, a necessary additive in the reaction, but only in a catalytic amount of 5 mol%. Later, they revealed the enhanced catalytic activity of the analogous tellurium on aliphatic derivatives, providing better yields at a significantly lower catalyst loading of 0.02 mol% [68]. To circumvent the toxicity of chalcogens, Stalke and co-workers introduced a base-free approach using a molybdenum catalyst, which they had already applied in the episulfidation of alkenes and allenes with sulfur [69,74,75]. The reaction of isocyanides (41) and sulfur in the presence of catalyst 42 required 3 days in refluxing acetone, resulting in ITCs (43) in good to excellent yields (Scheme 9A) [69]. The first step of the reaction might be the sulfuration of 42, providing the molybdenum disulfur complex 44, which acts as the active sulfur-transferring agent. The application of 44 in stoichiometric amounts leads to 43 in only 2.5 h, supporting its involvement in the reaction (Scheme 9B). The work of Sita and co-workers also supports the participation of the catalyst in the sulfur-to-isocyanide addition. They prepared bis(isocyanide)-Mo complexes 45 through ligand exchange, which they further transformed into κ-(S,C)-ITC-molybdenum complexes (46) with sulfur (Scheme 9C) [70]. Presumably, 46, characterized by X-ray crystallography, is a key intermediate of the reaction. Starting from 47 in the presence of isocyanide and sulfur afforded ITCs in 16 h of reaction time, as indicated by 1H-NMR experiments at 50 °C in benzene-d6 with a catalyst loading of 5%. Besides molybdenum, the catalytic activity of rhodium in reactions with sulfur was demonstrated in the synthesis of 1,4-dithiins from cyclic alkenes, in the synthesis of diaryl sulfides and in the episulfidation of alkenes [76][77][78]. Yamaguchi and co-workers applied 1% RhH(PPh3)4 and Rh(acac)(CH2=CH2)2 in the transformation of isocyanides (48) to ITCs (49) in refluxing acetone (Scheme 10) [71]. Notably, they observed shorter reaction times if they refluxed sulfur in acetone for 1.5 h prior to use. The activation period for sulfur probably involves the thermal generation of polysulfides, followed by sulfur atom exchange promoted by the catalyst [79]. In particular, the application of organic tri- and tetrasulfides in the reaction with the isocyanide also led to the formation of ITC.
Nucleophile-Induced Transformation of Isocyanide to ITC
The most common activation of sulfur is the cleavage of the octasulfur ring by nucleophiles [42][43][44][45][80][81][82]. Generally, cyanide, hydroxide and sulfide ions can cleave sulfur-sulfur bonds homolytically (Scheme 11A) or heterolytically (Scheme 11B) under mild conditions, generating reactive linear polysulfide anion chains of different lengths (50) and radical anions (51) [83,84]. Notably, nucleophilic aliphatic amines (52) are very effective in activating sulfur, while (hetero)aromatic amines are generally not nucleophilic enough [85]. Primary and secondary amines can activate sulfur under ambient conditions; however, their use is limited, as they inevitably react with the in situ generated ITCs. Tertiary amines need harsher conditions to activate sulfur, and possibly the presence of a proton source to stabilize the linear polysulfide chains [86]. Al-Mourabit and co-workers established a three-component protocol for the synthesis of thioureas (55) starting from isocyanides (56), aliphatic amines (57) and sulfur [46]. They proposed two mechanistic pathways, one proceeding through an intermediate containing a nitrilium structural element (58) that results from the nucleophilic attack of 56 on sulfur (Scheme 12A). The electrophilic adduct 58 then reacts with 57, affording the thioureas 55. Alternatively, the aliphatic amines 57 might first generate nucleophilic polysulfide anions (59) from sulfur (Scheme 12B), thus switching the reactivity: the isocyanide 56 becomes the electrophile and sulfur the nucleophile. The in situ generated ITCs (60) then react with 57 in a simple addition, providing 55. The mild conditions support pathway B, as in the absence of external additives the reaction would require significantly higher thermal activation [32,34]. Scheme 11. Nucleophilic activation of sulfur leading to reactive polysulfide anions (50) and radical anions (51, (A)), and the mechanism of sulfur activation by nucleophilic aliphatic amines (52, (B)).
Our research towards the multicomponent synthesis of thio- and dithiocarbamates from isocyanides revealed that a diverse set of nucleophilic additives, such as NaH, NaOEt, Cs2CO3, DIPEA or DBU, is able to activate sulfur. In addition, we isolated the ITC intermediate 61 from the reaction in 85% yield at 40 °C after 2 h (Scheme 13) [42]. The mild conditions, together with the observation that no reaction occurred in the absence of additives, also support the need for activation of sulfur. This suggests the second mechanism above (Scheme 12B), involving the formation of a nucleophilic reactive intermediate (62) that attacks the electrophilic carbene (63). Benefiting from this new, convenient synthesis of ITCs, we established an improved, chromatography-free multicomponent synthesis of thioureas using tertiary amines as external activators that are resistant to acylation [43,44]. For this purpose, we prepared aqueous solutions of polysulfide anions, generated from sulfur and tertiary amines. We proved the existence of polysulfide anions in the reaction, on the one hand, by the preparation of aqueous solutions in high concentrations (up to 0.4 M with respect to sulfur) and, on the other hand, by investigation of the solutions by NMR. Consequently, we proposed a switched mechanism for the formation of ITCs from isocyanides and sulfur, in which the nucleophile-activated sulfur attacks the electrophilic carbene. Most importantly, this transformation has proven to be more efficient, requiring shorter reaction times and milder conditions and featuring excellent functional group tolerance, as validated in the synthesis of a diversely substituted set of thioureas, 2-iminothiazolines and 2-aminothiazoles [43][44][45]. Finally, Meier and co-workers recently published an improved method for the synthesis of ITCs (65) from isocyanides (66) and sulfur in the presence of only 2-5 mol% base in renewable solvents (Scheme 14) [41]. They probed several tertiary amines, including DMAP, NMI, Et3N, DABCO, DBU and TBD, in the reaction and found that, generally, higher basicity led to better conversions. Eventually, they applied the developed method in the synthesis of a small library of ITCs, showing its wide applicability. They proposed the same mechanistic picture, involving sulfur as the nucleophilic partner in the reaction with the electrophilic isocyanide (66). Scheme 13. Base-promoted transformation of isocyanides to ITCs with mechanistic insights [42].
Scheme 14. Base-promoted transformation of isocyanides (66) to ITCs (65) and selected examples by Meier and co-workers [41]. Scheme is based on Scheme 3 from [41].

Table 1 provides a comparison between the discussed synthetic approaches starting from amines or isocyanides with sulfur. When designing a multistep synthesis plan, one should consider, depending on the stability of the substrate, the nature of the additives, the solvent, the temperature and, if necessary, inert conditions. Generally, reactions involving difluorocarbene or thiocarbonyl fluoride require inert conditions, while isocyanides can be transformed to ITCs under less strict conditions. The modification of amines is most effective using PDFA, but in the case of sensitive compounds, one may turn to the room-temperature approach involving F3CSiMe3 as a carbene source (Table 1, entry 2). The presence of potassium fluoride, however, may result in the removal of silyl groups on a complex structure, and a copper catalyst might lead to side coupling reactions and to waste containing transition metals. Selenium and tellurium should be handled with care due to their toxicity, while Mo or Rh catalysts increase the cost and, again, leave transition metals in the waste. ITC formation from isocyanides, on the other hand, is very effective in the presence of bases. This approach can be performed in relatively short reaction times compared to the transition metal-catalysed pathways, even under aqueous conditions. Based on the substrate scope of the reported methods, one may note that all approaches provide ITCs in good to excellent yields. Challenging derivatives include trityl ITC, generally obtained in lower yields, presumably because of steric hindrance, and low-molecular-weight aliphatic ITCs, such as tert-butyl ITC, due to their volatile nature.
Conclusions and Outlook
ITCs are a biologically and synthetically relevant functional group, present in important metabolites, natural products and synthetic intermediates. Their efficient and clean synthesis is of high interest, leading to the appearance of several recent methods. In particular, there are two strategies involving elemental sulfur for the incorporation of the sulfur atom, offering practical and modern approaches. The in situ generation of thiocarbonyl fluoride from difluorocarbene and sulfur provides ITCs from primary amines, or the sulfuration of isocyanides may directly lead to ITCs under thermal-, catalytic- or nucleophile-induced conditions. Based on previous literature data and our recent results, we highlighted mechanistic insights into the latter transformation. Besides the conventional nucleophilic carbene and electrophilic sulfur setup, a switched mechanism is also proposed, in which polysulfide anions formed by nucleophilic activation of sulfur are able to transform the isocyanide into the ITC. This approach offers an efficient, mild and green synthesis of ITCs. We expect that this spotlight on ITC synthesis, revealing different mechanistic pathways, will inspire further research in the field and open up novel synthetic methodologies through a deeper understanding.
"year": 2021,
"sha1": "bea807f35025461737dbaac6142d7c9093c6feeb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4344/11/9/1081/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "857fe137fa73a2c80091eb10c3038fd262e933ec",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Dibutyl phthalate promotes juvenile Sertoli cell proliferation by decreasing the levels of the E3 ubiquitin ligase Pellino 2
Background: A previous study showed that dibutyl phthalate (DBP) exposure disrupted the growth of testicular Sertoli cells (SCs). In the present study, we aimed to investigate the potential mechanism by which DBP promotes juvenile SC proliferation in vivo and in vitro.

Methods: Timed pregnant BALB/c mice were exposed to vehicle or DBP (50, 250, or 500 mg/kg/day) from 12.5 days of gestation until delivery. In vitro, CCK-8 and EdU incorporation assays were performed to determine the effect of monobutyl phthalate (MBP), the active metabolite of DBP, on the proliferation of TM4 cells, a juvenile testicular SC line. Western blotting, quantitative PCR (q-PCR) and flow cytometry were performed to analyse the expression of genes and proteins related to the proliferation and apoptosis of TM4 cells. Coimmunoprecipitation was used to determine the relationship between the ubiquitination of interleukin-1 receptor-associated kinase 1 (IRAK1) and the proliferation-promoting effect of MBP on TM4 cells.

Results: In the male offspring of mice exposed to 50 mg/kg/day DBP, the number of SCs was significantly increased. Consistent with the in vivo results, in vitro experiments revealed that 0.1 mM MBP treatment promoted the proliferation of TM4 cells. Furthermore, the data showed that 0.1 mM MBP-mediated downregulation of the E3 ubiquitin ligase Pellino 2 (Peli2) increased K63-linked ubiquitination of IRAK1, which activated MAPK/JNK signalling, leading to the proliferation of TM4 cells.

Conclusions: Prenatal exposure to DBP led to abnormal proliferation of SCs in prepubertal mice by affecting the ubiquitination of the key proliferation-related protein IRAK1 via downregulation of Peli2.
Background
Dibutyl phthalate (DBP) is a widely used plasticizer that has a negative effect on the development and function of male reproductive organs in humans and laboratory animals [1,2]. As DBP binds to the matrix through non-covalent bonds, it easily leaches into the environment and then migrates into food [2]. The toxicological effects of DBP are complex and diverse. Among them, the impact of in utero exposure to DBP on foetal reproduction and development is of particular concern. Some studies have confirmed that in utero exposure to DBP causes testicular malformations in male offspring [3][4][5], but the underlying mechanism has not yet been fully investigated. As one of the target cells of DBP/MBP [5][6][7][8][9], Sertoli cells (SCs) are the first cells recognized to differentiate in the foetal indifferent gonad, and they play a critical role in foetal testis formation and sexual differentiation as well as in adult spermatogenesis [10][11][12]. Because each SC supports a fixed number of germ cells, the proliferative capability of immature SCs during prepuberty determines the number of mature SCs, the testis size and the output of germ cells in the mature testis. Our recent study suggested that monobutyl phthalate (MBP), the metabolite of DBP, could disrupt the growth of juvenile SCs [9]; however, the underlying molecular mechanism still needs to be explored.
Based on data generated by screening a high-throughput mRNA microarray, downregulation of the E3 ubiquitin ligase Pellino 2 (Peli2) was found in SCs after exposure to 0.1 mM MBP [9]. Peli2, a member of the Pellino protein family, is a novel E3-RING ubiquitin ligase involved in the ubiquitination and degradation of interleukin-1 receptor-associated kinase 1 (IRAK1). Previous studies revealed that Peli2 mediated K63-linked IRAK1 polyubiquitination and reduced K48-linked IRAK1 polyubiquitination, thereby leading to the activation of downstream MAPK/JNK signalling pathways [13][14][15]. The activation of the MAPK/JNK signalling pathway downstream of IRAK1 is related to many cellular processes, such as cell proliferation, migration, and regeneration [16,17]. Meanwhile, both the extrinsic apoptotic pathway involving the Fas/FasL proteins, such as FADD, and the intrinsic pathway (mitochondria-mediated, through the Bax/Bcl-2 family proteins) can regulate cell growth by inducing the apoptosis of SCs [18]. Given these previous studies, we asked whether the Peli2-mediated proliferation pathway as well as apoptotic pathways are involved in MBP-mediated growth disruption of immature SCs.
In this study, we first evaluated the effect of DBP/MBP on proliferation and apoptosis in vivo and in vitro, and then we investigated the molecular mechanism by which MBP promotes the proliferation of TM4 cells.

Methods
Animals and processing method
Nine-week-old male (n = 12) and female (n = 24) specific pathogen-free (SPF) BALB/c mice were obtained from the Experimental Animal Center of the Academy of Military Medical Science, Beijing, China. Time-mated females (day of vaginal plug = gestational day (GD) 0.5) were randomized into 4 groups (n = 6 per group). Pregnant mice were treated with 0 (control), 50, 250, or 500 mg/kg/day DBP (Sigma, St. Louis, USA) in 1 ml/kg corn oil, administered daily by oral gavage from GD 12.5 until birth. Because seminiferous cord and gonocyte development of offspring were damaged at a daily oral dose of 500 mg/kg/day DBP given to pregnant mice from GD 16-18 [19], we set 500 mg/kg/day as the highest dose. The 22-day-old males were euthanized by CO2 asphyxiation. The testes were carefully removed and fixed in 4% paraformaldehyde.
All procedures performed on animals were approved by the Animal Care and Use Committee of Nanjing University under the animal protocol number SYXK (Su) 2009-0017. The animal experiments were performed in accordance with the Guide for the Care and Use of Laboratory Animals (The Ministry of Science and Technology of China, 2006).
Reagents and cell culture
Foetal bovine serum (FBS), Triton® X-100, DMEM-F12 and MBP were purchased from Sigma-Aldrich Inc. (St. Louis, MO, USA). MBP (2.2224 g) was dissolved in 1 mL of DMSO to prepare a stock solution (10 M). SP600125 (a JNK inhibitor) and an IRAK1 inhibitor were purchased from MedChemExpress (Monmouth Junction, NJ, USA). The antibodies used in this study are listed in Additional file 1: Table S1. TM4 cells, obtained from the American Type Culture Collection (Manassas, VA, USA), were cultured in DMEM/F12 containing 10% FBS and 1% penicillin-streptomycin in a humidified incubator at 37 °C with a 5% CO2 atmosphere.
Immunohistochemical analyses
Immunohistochemical analyses were carried out as previously described [20]. Primary antibodies against SOX9 and Peli2 were used together with HRP-conjugated secondary antibodies (Zhongshan Biotechnology, Beijing, China). For each section, ten images were randomly captured at 200× magnification under a light microscope. The total cells and the SOX9- or Peli2-positive cells in each image were counted automatically using ImageJ software. The positive ratio of SOX9- or Peli2-expressing cells was determined by averaging over the ten images after excluding the minimum and maximum values; six sections per group of mice were taken for statistical analysis.
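The per-section scoring just described amounts to a trimmed average of per-field positive ratios. Below is a minimal sketch of that computation, assuming per-field counts exported from ImageJ; the counts and the helper name are ours, purely for illustration.

```python
import numpy as np

def positive_ratio(positive_counts, total_counts):
    """Per-section positive ratio: mean over ten fields after discarding the
    fields with the minimum and maximum ratios, as described in the text."""
    ratios = np.asarray(positive_counts, float) / np.asarray(total_counts, float)
    return np.sort(ratios)[1:-1].mean()    # drop min and max, average the rest

# Hypothetical per-field counts from one testis section (10 images at 200x):
pos = [34, 41, 29, 38, 36, 44, 31, 39, 37, 35]
tot = [210, 225, 198, 215, 205, 230, 200, 220, 212, 208]
print(f"positive ratio: {positive_ratio(pos, tot):.3f}")
```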
Cell growth assay
A Cell Counting Kit-8 (CCK-8) (Dojindo Lab., Kumamoto, Japan) assay was used to test cell growth after treatment with MBP according to the manufacturer's instructions. Briefly, TM4 cells were plated at 2 × 10^3 cells per well in 96-well culture plates. After 24 h, cells were treated with MBP at concentrations of 0, 0.1, 1 or 10 mM for various times (1, 2, 3, 4, or 5 days). Based on our previous study of cell viability, the median effective concentration (EC50) of MBP was determined to be 16.21 mM [21]; accordingly, the highest concentration of MBP used in this study was 10 mM. Following MBP treatment, 100 μL of a 1:10 (v/v) CCK-8:DMEM/F12 mixture was added to each well, and the cells were incubated for an additional 4 h. Absorbance was measured at the indicated time points at 450 nm with a microplate reader (Versamax, Chester, PA, USA). CCK-8 contains WST-8, which is reduced by dehydrogenases in cells to generate an orange-coloured product (formazan) that is soluble in the tissue culture medium; the amount of formazan generated is therefore directly proportional to the number of living cells. Measurements were performed at least three times on six samples in parallel. Cell survival rate = (As − Ab)/(Ac − Ab) × 100%, where As is the absorbance of the experimental well, Ab that of the blank well, and Ac that of the control well.
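As a worked illustration of the survival-rate formula, the following sketch computes rates from raw A450 readings; all absorbance values are hypothetical, and the function name is ours rather than part of the assay protocol.

```python
import numpy as np

def survival_rate(a_sample, a_blank, a_control):
    """Cell survival rate from CCK-8 absorbances (A450):
    (As - Ab) / (Ac - Ab) * 100%."""
    return (a_sample - a_blank) / (a_control - a_blank) * 100.0

# Hypothetical A450 readings for six parallel wells of one MBP dose:
a_s = np.array([0.91, 0.95, 0.89, 0.93, 0.90, 0.94])  # treated wells
a_b, a_c = 0.10, 0.85                                 # blank and control means
rates = survival_rate(a_s, a_b, a_c)
print(f"survival: {rates.mean():.1f} +/- {rates.std(ddof=1):.1f} %")
```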
EdU incorporation assay
EdU assay kits were used to determine cell proliferation (Click-iT® EdU Imaging Kits; Invitrogen). According to the kit's instructions, 1 mL of proliferation medium containing 20 μM EdU (final concentration 10 μM) was added to the wells of a 6-well plate containing cells, which were then incubated with final concentrations of 0, 0.1, 1 or 10 mM MBP for 24 h. Cells were then fixed with 4% paraformaldehyde for 15 min. The fixative was removed, and the cells were washed twice with 1 mL of 3% bovine serum albumin (BSA), followed by incubation with 0.5% Triton X-100 (Sigma-Aldrich, St. Louis, MO, USA) for 10 min at room temperature. The cells were then washed twice and incubated with 1 mL of Click-iT® reaction cocktail for 30 min at room temperature. The cells were then incubated with 100 μL of 5 μg/mL DAPI (Sigma-Aldrich) for an additional 30 min in the dark. After staining, images were captured at 600× magnification under a microscope (Olympus, Tokyo, Japan). DAPI is a nuclear stain used to determine total cell counts; DAPI bound to DNA is most strongly excited by ultraviolet (UV) light at 358 nm and produces its strongest emission in the blue range at 461 nm. Six fields for each sample were randomly captured. EdU-positive cells were counted using ImageJ software (NIH, Bethesda, MD).
Flow cytometry for apoptosis assay

TM4 cell apoptosis after treatment with different concentrations of MBP was analysed using Annexin V-FITC and PI staining kits (Vazyme, Nanjing, China) according to the manufacturer's instructions. Flow cytometry was performed on a FACSCalibur flow cytometer (BD Biosciences), and the data were analysed using Paint-A-Gate software (Becton-Dickinson, San Jose, CA).
Quantitative PCR (q-PCR) validation analyses of target genes
q-PCR analyses were performed as previously described [20]. Total RNA was extracted using TRIzol reagent (Invitrogen, Carlsbad, CA) according to the manufacturer's protocol. The HiScript Q RT SuperMix for q-PCR kit (Vazyme, Nanjing, China) was used for reverse transcription, and q-PCR assays were then conducted with SYBR Green I mix (Takara, Dalian, China) on an ABI ViiA 7 q-PCR system (Applied Biosystems, Waltham, MA). In all cases, mRNA levels were normalized to the expression of GAPDH, which served as an endogenous control. The relative expression of target genes was calculated by the 2^(-ΔΔCt) method [22]. The primer sets used in this study are listed in Additional file 1: Table S2.
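For clarity, a minimal sketch of the 2^(-ΔΔCt) computation is given below; the Ct values are hypothetical and chosen only to illustrate how downregulation (fold change < 1) would appear.

```python
def ddct_fold_change(ct_target_trt, ct_ref_trt, ct_target_ctl, ct_ref_ctl):
    """Relative expression by the 2^(-ddCt) method:
    dCt = Ct(target) - Ct(reference); ddCt = dCt(treated) - dCt(control)."""
    ddct = (ct_target_trt - ct_ref_trt) - (ct_target_ctl - ct_ref_ctl)
    return 2.0 ** (-ddct)

# Hypothetical Ct values for Peli2 (target) and GAPDH (reference):
print(ddct_fold_change(26.8, 18.2, 25.4, 18.1))  # ~0.41, i.e., downregulation
```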
Statistical analyses

SPSS 18.0 (SPSS, Chicago, IL) was used for statistical analysis. The homogeneity of variances in the data was checked using Levene's test. Student's t-test was used for paired comparisons. To compare more than two groups, we used one-way ANOVA with Duncan's post hoc test. P < 0.05 was considered statistically significant.
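A rough equivalent of this pipeline in open-source tooling might look as follows; note that Duncan's post hoc test is not available in SciPy, so this sketch stops at Levene's test, one-way ANOVA and the t-test, with all group data simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical viability readings for four groups (control + three MBP doses):
groups = [rng.normal(loc=mu, scale=5.0, size=6) for mu in (100, 118, 95, 70)]

print("Levene:", stats.levene(*groups))                   # homogeneity of variances
print("ANOVA :", stats.f_oneway(*groups))                 # overall group effect
print("t-test:", stats.ttest_ind(groups[1], groups[0]))   # one dose vs control
```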
Results
The effect of DBP on the proliferation of SCs

Following in utero exposure to 50 mg/kg/day DBP, the number of SOX9-positive cells (SOX9 is a marker of SCs) in the testes of the resulting male offspring at postnatal day (PND) 22 was significantly increased compared with the vehicle treatment group, as detected by immunohistochemical assay (Fig. 1a, b). These in vivo results suggested that DBP stimulated the proliferation of SCs at a dose of 50 mg/kg/day.
The effect of MBP on TM4 cell growth and DNA synthesis
The results showed that 0.1 mM MBP promoted cell proliferation, but 10 mM MBP inhibited the proliferation of TM4 cells (Fig. 2a). Compared with no treatment, 0.1 mM MBP increased the number of EdU-positive cells, indicating that 0.1 mM MBP promoted DNA synthesis in TM4 cells (Fig. 2b, c). Collectively, these in vitro data confirmed that 0.1 mM MBP stimulated the proliferation of TM4 cells.

Fig. 1 The effect of dibutyl phthalate (DBP) on Sertoli cell (SC) proliferation. a The effect of DBP on the number of Sertoli cells per testis in mice after prenatal exposure to DBP. Testicular sections were collected from pups 22 days after mice were exposed in utero (GD12.5-birth) to corn oil or DBP doses of 50, 250 or 500 mg/kg/day. Immunohistochemical staining for SOX9 was performed (scale bar 50 μm). Arrows indicate the expression of SOX9 in the testes of DBP-treated and control male pups. b The ratio of SOX9-positive cells was determined with ImageJ (n = 6). The results are expressed as the means ± SEM. * p < 0.05; ** p < 0.01, compared with control
The effect of MBP on the apoptosis of TM4 cells
The results of flow cytometry showed that the apoptosis rates of TM4 cells were significantly increased in the 1 mM and 10 mM MBP treatment groups (Fig. 3a, b). To elucidate the mechanism by which MBP induced apoptosis, we examined the effects of MBP on Bcl-2 and Bax expression as well as on cytochrome c (Cyt c) release, which are indicators of the intrinsic apoptotic pathway. The Bax/Bcl-2 ratio, as an apoptotic index, is used to evaluate the balance between apoptotic and anti-apoptotic proteins. The results showed that the Bax/Bcl-2 ratio was markedly decreased after exposure to 0.1 mM MBP (Fig. 3c), whereas it was increased in the 10 mM MBP group. Furthermore, the release of Cyt c into the cytosol was significantly increased in TM4 cells after exposure to 10 mM MBP (Fig. 3d, Additional file 1: Fig. S1). We also examined the extrinsic apoptotic pathway in TM4 cells and found that it was inhibited after exposure to 10 mM MBP (Additional file 1: Fig. S2). These data indicated that exposure to 10 mM MBP induced apoptosis of TM4 cells by activating the intrinsic apoptotic pathway.
The effect of DBP/MBP on Peli2 expression
Based on microarray data in the GEO (Gene Expression Omnibus) database from our previous report [9], Peli2 was chosen for further study because of its important role in cell proliferation. We demonstrated that prenatal exposure to DBP (50 mg/kg/day) reduced the levels of Peli2 in the mouse testes, as shown by immunohistochemical staining (Fig. 4a, b). Moreover, the q-PCR results showed that Peli2 expression in the 0.1 mM MBP group was significantly lower than it was in the control, whereas it was increased in the 10 mM group (Fig. 4c), which was further confirmed by Western blotting (Fig. 4d, e).
The effect of MBP on the ubiquitination of IRAK1 in TM4 cells
The Peli2 protein, a RING E3 ubiquitin ligase, can lead to the degradation of IRAK1 by promoting IRAK1 ubiquitination [24,25], which eventually inhibits the activation of the downstream MAPK/JNK signalling pathway [26]. In this study, we found that the mRNA level of IRAK1 was significantly increased at 0.1 mM MBP, whereas it was suppressed at 10 mM (Fig. 5a). Western blotting results also showed that the protein level of IRAK1 was increased after exposure to 0.1 mM MBP (Fig. 5b, c). A previous study showed that Peli2 played a key role in IL-1- and LPS-induced K63- and K48-linked IRAK1 ubiquitination [27]; thus, we explored IRAK1 ubiquitination by Co-IP. The results showed that after exposure to 0.1 mM MBP, total polyubiquitination of IRAK1 was attenuated compared with that in control cells, while K63-mediated polyubiquitination of IRAK1 was increased (Fig. 5d). To determine whether IRAK1 acts upstream of MAPK/JNK, we examined the effects of an IRAK1 inhibitor on MAPK/JNK activation. The IRAK1 inhibitor reduced p-JNK expression at the protein level (Fig. 5e, f). These data suggested that K63-mediated polyubiquitination of IRAK1 might play a key role in DBP/MBP-mediated proliferation of TM4 cells.

Fig. 2 The effect of monobutyl phthalate (MBP) on the proliferation of TM4 cells. a Cell viability was measured by CCK-8 assay and showed the viability of TM4 cells after treatment with different concentrations of MBP for 5 days (n = 3). b Immunofluorescence staining showed EdU incorporation in TM4 cells without treatment (control) or following treatment with 0.1, 1, or 10 mM MBP for 24 h. DAPI was used to counterstain cell nuclei. c Quantification of the average percentage of EdU+ cells for b (n = 6). The results are expressed as the means ± SEM. * p < 0.05; ** p < 0.01, compared with control
MBP promoted TM4 cell proliferation by MAPK/JNK signalling
We detected the activation of the MAPK/JNK signalling pathway by assessing downstream members of the pathway by Western blotting. The results showed that the phosphorylation of both JNK (p-JNK) and c-Jun (p-c-Jun) was significantly increased in TM4 cells treated with 0.1 mM MBP (Additional file 1: Fig. S3a, S3b). Additionally, the phosphorylation of c-Jun in the testis after in utero exposure to 50 mg/kg/day DBP was significantly increased (Additional file 1: Fig. S3c, S3d). Furthermore, 0.1 mM MBP also induced marked enrichment of c-Jun in the nuclei of TM4 cells (Fig. 6a, b). To identify whether MAPK/JNK was responsible for the proliferation of TM4 cells, we examined the effects of the JNK inhibitor SP600125 on cell proliferation. The activation of the MAPK/JNK pathway was inhibited after pretreatment with SP600125 (Fig. 6c, d). Furthermore, the MBP-induced increase in CDK1 expression was reduced after pretreatment with SP600125, suggesting that MAPK/JNK participated in MBP-induced TM4 cell proliferation. These results were further confirmed by flow cytometry (Fig. 6e, f).

Fig. 3 The intrinsic apoptotic pathway participated in MBP-induced apoptosis of TM4 cells. a Annexin V-FITC/PI was used to stain apoptotic cells, which were analysed by flow cytometry at 24 h. b The level of apoptosis in TM4 cells was calculated (n = 3). c The protein levels of Bax and Bcl-2 in TM4 cells treated with different concentrations of MBP were measured by Western blotting; the Bax/Bcl-2 ratio was determined with ImageJ (lower panels, n = 3). GAPDH was assessed as an internal control. d Cytochrome c (Cyt c) release was detected in the cytosolic fraction of MBP-treated TM4 cells at 24 h by Western blotting. The densitometry data were quantified with ImageJ (lower panels, n = 3). GAPDH was assessed as an internal control. The results are expressed as the means ± SEM. ** p < 0.01; * p < 0.05

Fig. 4 The effects of DBP/MBP exposure on Peli2 expression. a, b Testicular sections were collected from pups 22 days after they were exposed in utero (GD12.5-birth) to corn oil or DBP doses of 50, 250 or 500 mg/kg/day. The expression of Peli2 in mouse testicular tissues was assessed by immunohistochemistry. Arrows indicate the expression of Peli2 in the testes of DBP-treated and control male pups. The ratio of positive cells was determined with ImageJ (n = 6). The expression of Peli2 in SCs after exposure to different concentrations of MBP for 24 h is also shown. c The mRNA levels of Peli2 were measured with quantitative PCR (q-PCR), and GAPDH was measured as a loading control. d, e Peli2 protein levels were measured by Western blotting. The densitometry data were quantified with ImageJ (n = 3). GAPDH was assessed as an internal control. The results are expressed as the means ± SEM. ** p < 0.01; * p < 0.05

Fig. 5 The effect of MBP on the ubiquitination of IRAK1 in TM4 cells. a The mRNA levels of IRAK1 were measured with q-PCR, and GAPDH was measured as a loading control. b, c The protein levels of IRAK1 were measured by Western blotting. The densitometry data were quantified with ImageJ (n = 3). GAPDH was assessed as an internal control. d MBP (0.1 mM) attenuates IRAK1 ubiquitination and stimulates K63-mediated IRAK1 polyubiquitination. Cell lysates were immunoprecipitated (IP) with anti-IRAK1, which was followed by Western blotting analysis with anti-K63 ubiquitin (K63-Ub), anti-ubiquitin (Ub), and anti-IRAK1 antibodies. e, f TM4 cells were pretreated with an IRAK1 inhibitor for 1 h, followed by 24 h treatment with 0.1 mM MBP. The expression levels of JNK and p-JNK were determined by Western blotting. The densitometry data were quantified with ImageJ (n = 3). GAPDH was assessed as an internal control. The results are expressed as the means ± SEM. ** p < 0.01; * p < 0.05. # p < 0.05, vs MBP exposure
Discussion
As an environmental endocrine disruptor, DBP is of concern because it is currently widely used in many products, including latex adhesives, cellulose acetate plastics, dyes, personal care products, and coatings for certain oral medications [28]. Humans are exposed to DBP on a daily basis, and the daily DBP intake for the general population is 0.007-0.01 mg/kg/day [1]. Measurements of urinary MBP levels reveal that women of childbearing age are estimated to be exposed to DBP at rates over 200 times greater than that of a reference population, as they frequently use oral medications with DBP-incorporated enteric coats [29]. In addition, in some severe cases, DBP metabolite levels are nearly 600 times higher than those in the normal population (10,025 μg/g creatinine vs 17 μg/g creatinine); these patients often require enteric-coated drugs or blood transfusions [30][31][32][33]. Furthermore, for the developmental window of foetal mice, the highest dose of DBP used in other reproductive toxicity studies was mostly 500 mg/kg/day [19,34,35]. Therefore, we established 500 mg/kg/day as the highest dose in the in vivo experiments. A previous study showed that 8 mM MBP could inhibit HCG-induced testosterone and insulin-like peptide 3 secretion in cultured testicular interstitial cells in vitro [36]; it was concluded that cells cultured in vitro are probably relatively insensitive to MBP [36]. Moreover, based on the data regarding the effect of MBP on cell viability in our previous study, the EC50 of MBP was determined to be 16.21 mM [21]. Therefore, in this study, the highest concentration of MBP in vitro was set at 10 mM.
The pharmacokinetics of DBP have been investigated in rats [37]. Faecal excretion of DBP was found to be low, and more than 90% of the dose was excreted as metabolites in the urine within 48 h following either intravenous or oral administration [37,38]. Most DBP is metabolized to MBP by intestinal hydrolases in the small intestine, and then almost all MBP enters the bloodstream [37,39]. DBP can directly penetrate the blood-testis barrier [40]. Clewell and colleagues found that peak MBP concentrations in foetal testes were 72 and 152 μM in the 100 and 500 mg/kg/day DBP exposure groups, respectively [41].
In the present study, we confirmed that prenatal exposure to 50 mg/kg/day DBP promoted SC proliferation. To investigate the mechanism by which DBP/MBP disrupted the growth of immature SCs, we employed TM4 cells, which are derived from immature mouse SCs, in an in vitro study. Mouse TM4 cells share many characteristics of SCs and have been widely used as a substitute for primary SCs [42]. Consistent with the in vivo results, 0.1 mM MBP promoted proliferation and DNA synthesis in TM4 cells, while apoptosis was significantly increased after exposure to 10 mM MBP. We then aimed to investigate the molecular mechanisms underlying the proliferation and apoptosis of SCs treated with different doses of MBP.
Apoptosis is an evolutionarily conserved mechanism of programmed cell death; it occurs in response to physiological stimuli, cell damage or stress and is an important part of various developmental processes in metazoans [43,44]. Previously, many investigations found that DBP exposure caused toxicity in several cell types, such as nerve cells, osteoblasts, and germ cells [45][46][47]. It has been confirmed that MBP exposure causes apoptosis of SCs, but the specific mechanism has not yet been demonstrated [8]. In this study, we analysed the protein levels of key components of the intrinsic pathway (Bax, Bcl-2, and Cyt c) and the extrinsic pathway (FADD, caspase 8, and caspase 3) [18]. The results showed that 10 mM MBP could activate the intrinsic pathway, whereas the extrinsic pathway was inhibited. Interestingly, the expression of FADD was increased after exposure to 0.1 mM MBP (Additional file 1: Fig. S2a). A previous study showed that FADD plays a role in regulating most of the signalosome complexes, causing it to emerge as a newly identified actor in innate immunity, inflammation, and cancer development [48]. Therefore, we speculated that FADD might be involved in other physiological processes after exposure to 0.1 mM MBP. Pellino proteins have various regulatory roles in cell growth; for example, murine genetic models have revealed roles for Peli1 in lung carcinogenesis [49] and for Peli3 in TNF-induced cell killing [50]. However, there is a notable lack of insight into the physiological roles of Peli2. It was shown that polyubiquitination of both IL-1/LPS-induced K63- and K48-linked IRAK1 was decreased in Peli2-knockdown cells [27]. In our study, we found that, with decreasing Peli2 expression, the total ubiquitination level of IRAK1 was reduced, while K63-linked IRAK1 polyubiquitination was increased after exposure to 0.1 mM MBP. Therefore, we hypothesized that the decrease in total IRAK1 ubiquitination was mainly attributable to reduced K48-linked ubiquitination, the type that results in the degradation of IRAK1. Studies on Peli2 revealed a role for Peli2 in IL-1/LPS-induced activation of the MAPK/JNK pathway [27,51]. Our data also showed that 0.1 mM MBP activated IRAK1 and the downstream MAPK/JNK signalling pathway, suggesting that 0.1 mM MBP could promote immature SC growth through the Peli2/IRAK1/MAPK/JNK pathway. Taken together, we concluded that 0.1 mM MBP promoted the abnormal proliferation of SCs by inhibiting the expression of Peli2, disrupting the balance of IRAK1 ubiquitination, and activating the downstream MAPK/JNK signalling pathway.
Conclusions
In summary, we first confirmed that DBP/MBP stimulated the proliferation of SCs in vitro and in vivo at a relatively low concentration range. Then, we found that downregulated Peli2 resulted in increased K63 ubiquitination of IRAK1, which activated MAPK/JNK signalling pathways in TM4 cells treated with 0.1 mM MBP. In addition, we showed that 10 mM MBP caused apoptosis of TM4 cells by activating the intrinsic apoptotic pathway. A descriptive outline of this study is shown in Fig. 7.
Additional file 1: Table S1. Specifications of primary antibodies. Table S2. Primers used for q-PCR. Figure S1. Cytochrome c (Cyt c) release was induced by MBP in the 10 mM group. Figure S2. The extrinsic apoptotic pathway did not participate in MBP-induced apoptosis of TM4 cells. Figure S3. MBP induces the activation of MAPK/JNK-associated proteins in TM4 cells.
"year": 2020,
"sha1": "739f8cfb059545ccbfae010113cd0da14281d335",
"oa_license": "CCBY",
"oa_url": "https://ehjournal.biomedcentral.com/track/pdf/10.1186/s12940-020-00639-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "739f8cfb059545ccbfae010113cd0da14281d335",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
The spatiotemporal coupling in delay-coordinates dynamic mode decomposition
Dynamic mode decomposition (DMD) is a leading tool for equation-free analysis of high-dimensional dynamical systems from observations. In this work, we focus on a combination of delay-coordinates embedding and DMD, i.e., delay-coordinates DMD, which accommodates the analysis of a broad family of observations. An important utility of DMD is the compact and reduced-order spectral representation of observations in terms of the DMD eigenvalues and modes, where the temporal information is separated from the spatial information. From a spatiotemporal viewpoint, we show that when DMD is applied to delay-coordinates embedding, temporal information is intertwined with spatial information, inducing a particular spectral structure on the DMD components. We formulate and analyze this structure, which we term the spatiotemporal coupling in delay-coordinates DMD. Based on this spatiotemporal coupling, we propose a new method for DMD components selection. When using delay-coordinates DMD that comprises redundant modes, this selection is an essential step for obtaining a compact and reduced-order representation of the observations. We demonstrate our method on noisy simulated signals and various dynamical systems and show superior component selection compared to a commonly-used method that relies on the amplitudes of the modes.
Dynamical systems are abundant in many fields of science and engineering.
As dynamical systems are often high-dimensional and complex, their characterization from observations is a coveted goal. A key task in accomplishing this goal is finding a reduced-order representation of a system, where dynamic mode decomposition (DMD) is a prominent tool for this purpose. In recent years, the combination of delay-coordinates embedding and DMD (i.e., delay-coordinates DMD) has been shown to be highly useful in the characterization of dynamical systems, even when these systems are highly nonlinear or chaotic. This combination gives rise to a specific spectral structure that does not exist in ordinary DMD and has not been studied so far. In this work, we formulate and analyze this structure, where we show that the spectral components of delay-coordinates DMD exhibit a spatiotemporal coupling. This coupling suggests that the representations obtained by delay-coordinates DMD can be further reduced. By considering this coupling, we propose not only to decouple the spatial information from the temporal information but also to exploit it to construct an improved reduced-order representation in an unsupervised fashion. We demonstrate our approach on several dynamical systems that include noisy observations of undamped and damped mechanical oscillators.
I. INTRODUCTION
Time-series analysis, modeling, and prediction are ubiquitous tasks in applied sciences. When the time-series stem from ergodic dynamical systems, learning their phase space in a nonparametric fashion from sufficiently long intervals of observations is possible and has become an active field of research in recent years. This so-called data-driven approach seeks to obtain meaningful, physics-related knowledge, in an equation-free manner, shifting the focus from equations-based descriptions to observations-based analysis [1][2][3][4][5][6] . Existing data-driven methods can be largely divided into two categories 7 . The first is based on a state-space representation, and the focus is typically on finding a map that propagates a state at a given time to a state at a future time. Arguably, the classical approach is to approximate the nonlinear dynamics as a collection of locally linear systems on tangent spaces near attractors 8- 10 . Another, more recent class of methods in this category, constructs reduced models of the state space in Euclidean spaces, and then, finds the propagation rules in the reduced spaces 1,[11][12][13] .
The second category is an operator-theoretic approach that considers observables of the system. In this approach, many recent methods are based on the Koopman operator 14 , which is a linear, infinite-dimensional operator that operates on observables in a Hilbert space, and propagates them linearly in time 15,16 . While the ability to represent nonlinear dynamics in a linear fashion is perhaps the most notable property of using the Koopman operator, it has several other remarkable attributes. In this line of work 7, 15,[17][18][19] , it was shown that the Koopman operator has the ability of capturing the dynamics of linear or nonlinear dynamical systems through its spectral components.
More concretely, the time evolution of a dynamical system can be decomposed into spatial patterns, which are often referred to as the Koopman modes and derived from the eigenfunctions of the Koopman operator, and temporal patterns, which are derived from the eigenvalues of the Koopman operator. Therefore, the spectral analysis of the Koopman operator is of high importance for obtaining an informative description of dynamical systems 7, 17,18,[20][21][22] .
Using the Koopman operator for analysis poses an important tradeoff. On the one hand, it facilitates a linear description of the dynamics and a useful decoupling of the spatial and temporal patterns via its spectral representation. On the other hand, the Koopman operator is infinite dimensional. Therefore, finite-dimensional approximations are necessary for practical purposes.
Perhaps the most common technique for such a finite approximation is the dynamic mode decomposition (DMD), introduced by Schmid and Sesterhenn 23,24 , where the primary goal is to approximate the Koopman eigenvalues and modes in finite spaces based on a finite set of observations 25 . The DMD eigenvalues and modes give rise to a discrete spatiotemporal representation of the dynamical system, facilitating a data-driven and equation-free analysis. Following its appearance, several variants for enhancing the capabilities of DMD have been introduced, e.g., the extended DMD (EDMD) 26 , where projections on finite-space dictionaries were used to improve the spatiotemporal representation, least-squares 27,28 and sparsity promoting 29,30 techniques, as well as approaches for the analysis of compressed data 31,32 . Additional extensions of DMD have been introduced, such as multiresolution DMD (mrDMD) 33 , which effectively uncovers multiscale structures in the data, and DMD with control (DMDc) 34 , which extracts low-order models of high-dimensional systems that require control.
Indeed, in recent years, DMD has been shown to be a powerful tool, demonstrating remarkable capabilities in a broad range of fields [35][36][37][38][39][40] , and particularly, in fluid dynamics 25,[41][42][43][44][45][46] . Nevertheless, despite its popularity and success, DMD has some notable limitations. For example, a straightforward application of DMD does not allow for the signal reconstruction of a standing wave 16,47 . In addition, ordinary DMD cannot handle cases when the number of linearly-independent DMD modes is smaller than the number of the system's oscillation frequencies 48 . Another notable limitation of DMD concerns one-dimensional signals, as its application to such signals results in a degenerate (scalar) representation.
Combining delay-coordinates embedding and DMD mitigates these limitations 16,47,48 . The delay-coordinates embedding (also referred to as time-delay embedding, or, simply, delay-coordinates) is a method for augmenting past observations to the present observation. This approach dates back to 1981 with the formulation of Takens' embedding theorem 49 , according to which an attractor of a system can be reconstructed up to a diffeomorphism using delay embedding.
Various studies that associate delay-coordinates and DMD-based methods have been presented in recent years. For example, Le Clainche and Vega 48 presented the higher order DMD (HODMD), a global linear method that is capable of uncovering a large number of frequencies of periodic and quasiperiodic dynamical systems based on limited and noisy input data. Brunton et al. introduced the HAVOK analysis in terms of a Hankel matrix, which successfully represents highly nonlinear and chaotic systems using a linear model and intermittent forcing 35. Pan and Duraisamy provided the minimal required augmentation number for a perfect recovery of dynamical systems based on their Fourier spectrum 50. We note that combinations of delay-coordinates and Koopman operator-based methods have also been investigated 19,51,52.
In this paper, we show that despite its prevalence, application of DMD to augmented data (i.e., delay-coordinates DMD) gives rise to a particular spectral structure that does not exist in ordinary DMD applications and, to the best of our knowledge, has not been studied in the existing literature. This structure comprises augmented DMD modes that embody temporal information, which is entangled with spatial information. We formulate and analyze this entanglement, and term it as the spatiotemporal coupling in delay-coordinates DMD.
Based on this spatiotemporal coupling, we propose a new approach for obtaining compact and reduced-order representations of dynamical systems from observations, where, similarly to ordinary DMD, the spatial and temporal patterns are decoupled. Our approach includes solving an inherent challenge of delay-coordinates DMD, where the number of augmented DMD components is often larger than the number of intrinsic modes of the dynamical system. In such cases, the augmented DMD components can be divided into two subsets: those that describe the dynamical system, which we term true, and those that are a mere artifact of the augmentation, unrelated to the system, which we term spurious.
By relying on the difference in their spatiotemporal coupling, we distinguish between the true and spurious DMD components, and present a method for selecting the true components. In contrast to ordinary DMD, our method is based on delay-coordinates DMD, and is therefore capable of analyzing and representing a broader family of observations and signals. We demonstrate the spatiotemporal coupling and the effectiveness of our approach on various simulated dynamical systems. This paper is organized as follows.
Section II briefly describes existing approaches for mode selection. Section III presents the problem formulation.
In Section IV, we introduce the spatiotemporal coupling in delay-coordinates DMD in detail, and reveal the specific relations within augmented DMD modes. Then, in Section V, we propose a new method for decoupling the spatial and temporal information, leading to the identification of the true augmented DMD components and to a compact, reduced-order and informative representation of the observations. For the purpose of illustration, in Section VI, we present the application of our method to a two-mode sine signal. Finally, in Section VII, our method is demonstrated on various dynamical systems, outperforming the common method that relies on the amplitudes of the modes at low SNR values.
II. RELATED WORK ON MODE SELECTION
The identification of the dominant DMD components required for optimal, reduced-order representations of dynamical systems (which we term true) has been studied in wider contexts, beyond delay-coordinates DMD. Criteria and methods for such a purpose are usually referred to as mode selection, where we use the broader term DMD components selection.
For instance, Rowley et al. ordered the DMD modes by their norms (amplitudes) 25 -a method we term the maximal amplitudes method. The norms of the modes can be further weighted by the magnitudes of the corresponding eigenvalues to account for modes that have large norms yet decay rapidly, as suggested by Tu et al. 47 . Schmid et al. used a projection of the data sequence on the identified modes, whose coefficients indicate on the significance of the modes 53 . Jovanović et al. introduced a sparsity-promoting approach using an addition of a penalty term of the DMD amplitudes, which regularizes the least-square deviation between the linear combination of DMD modes and the snapshots matrix 29 . Tissot et al. proposed an energetic criterion, where the amplitude of a DMD mode is weighted by its corresponding temporal coefficient 54 . Sayadi et al. proposed a parameterized approach, which utilizes the sparsity promoting DMD, and then reconstructs the modes' amplitudes using a time-dependent coefficient, giving a notion of their significance 30 . Another approach was proposed by Kou and Zhang, which considered the initial conditions and temporal evolution of DMD modes, ordering them according to the integrals of their corresponding time coefficients 55 .
Our work differs from the above studies on mode selection, as it provides, for the first time to the best of our knowledge, a mode selection framework in the specific context of delay-coordinates DMD. Seemingly, the problem of mode selection in delay-coordinates DMD is more challenging due to the existence of spurious DMD modes, as well as the higher dimensions of augmented DMD modes compared to the observations. Nevertheless, we show that despite the additional challenges, delay-coordinates DMD also constitutes a remedy. Specifically, we show that the spatiotemporal coupling in delay-coordinates DMD bears information that facilitates a new method for mode selection.
III. PROBLEM FORMULATION

Consider a dynamical system

$x_{k+1} = f(x_k)$,  (1)

that evolves in discrete time $k$ on a manifold $\mathcal{M} \subset \mathbb{R}^n$, where $x_k \in \mathcal{M}$ is a state vector and $f: \mathcal{M} \to \mathcal{M}$. The function $f$ is unknown and could be deterministic or stochastic, e.g., due to the presence of noise. Assume that this discrete-time formulation arises from a continuous-time dynamical system $\dot{x} = Ax$. Specifically, suppose that the state is sampled at a fixed sampling rate $\omega_s = 2\pi/\Delta t$, and $m+1$ discrete samples $x_k = x(k\Delta t)$ of the state are collected, where $k = 0, 1, \ldots, m$. In this work, following common practice 48,50,52, we assume that $m > n$.
Our goal is to analyze the dynamics of system (1) based on the finite set of observations $\{x_k\}_{k=0}^{m}$. For this purpose, we use the DMD approach 7,23,47, which provides a spatiotemporal representation of the observations and is described next. First, a linear approximation of the discrete-time system in (1) is employed:

$x_{k+1} \approx A x_k$.  (2)

To find $A$, the observations $\{x_k\}_{k=0}^{m}$ are arranged into two observation matrices

$X = [x_0, x_1, \ldots, x_{m-1}]$, $X' = [x_1, x_2, \ldots, x_m]$.  (3)

Then, following the exact DMD 16,47, the singular value decomposition (SVD) of $X$ is computed as

$X = U \Sigma V^*$,  (4)

where $U \in \mathbb{C}^{n \times r}$, $\Sigma \in \mathbb{C}^{r \times r}$, $V \in \mathbb{C}^{m \times r}$, $(\cdot)^*$ is the complex conjugate transpose, and $r = \mathrm{rank}(X)$. Define $\tilde{A} \in \mathbb{C}^{r \times r}$ by

$\tilde{A} = U^* X' V \Sigma^{-1}$.  (5)

The so-called DMD eigenvalues, $\lambda_i$, are the eigenvalues of $\tilde{A}$ that satisfy

$\tilde{A} v_i = \lambda_i v_i$,  (6)

where $v_i$ are the corresponding eigenvectors. The so-called exact DMD modes, $\phi_i$, are given by 16,47

$\phi_i = \frac{1}{\lambda_i} X' V \Sigma^{-1} v_i$.  (7)

Tu et al. defined the exact DMD modes, $\phi_i$, and showed that $\{\lambda_i, \phi_i\}$ are the eigenvalue-eigenvector pairs of $A$ 47. Moreover, the authors distinguished between the exact modes $\phi_i$ and the DMD modes obtained by the standard DMD 24 (termed projected DMD modes). It was noted that the exact and projected modes have the tendency to converge when $X$ and $X'$ have the same column spaces 16.
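The steps in Eqs. (2)-(7) translate directly into a short numerical routine. The following is a minimal NumPy sketch of exact DMD, assuming a toy linear system of our own choosing; the function name and the rank threshold are illustrative, not prescribed by the references.

```python
import numpy as np

def exact_dmd(X, Xp, r=None):
    """Exact DMD of a snapshot pair (X, X') following Eqs. (2)-(7):
    X = U S V*, Atilde = U* X' V S^{-1}, phi_i = X' V S^{-1} v_i / lambda_i."""
    U, sv, Vh = np.linalg.svd(X, full_matrices=False)
    if r is None:
        r = int(np.sum(sv > 1e-10 * sv[0]))     # numerical rank (illustrative threshold)
    U, sv, V = U[:, :r], sv[:r], Vh.conj().T[:, :r]
    Atilde = U.conj().T @ Xp @ V @ np.diag(1.0 / sv)
    lam, W = np.linalg.eig(Atilde)              # DMD eigenvalues, Eq. (6)
    Phi = Xp @ V @ np.diag(1.0 / sv) @ W / lam  # exact DMD modes as columns, Eq. (7)
    return lam, Phi

# Hypothetical 2D linear system x_{k+1} = A x_k (a slightly damped rotation):
A = np.array([[0.99, -0.10], [0.10, 0.99]])
x = np.empty((2, 101)); x[:, 0] = [1.0, 0.0]
for k in range(100):
    x[:, k + 1] = A @ x[:, k]
lam, Phi = exact_dmd(x[:, :-1], x[:, 1:])
print(np.sort_complex(lam))                     # matches np.linalg.eigvals(A)
```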
The observations $x_k$ can be expressed as

$x_k = \sum_{i=1}^{r} \lambda_i^k \eta_{0,i} \phi_i$,  (8)

where $\eta_{0,i}$ is the $i$th entry of $\eta_0 = \Phi^* x_0$, and $\Phi$ is the matrix whose columns are the $r$ leading DMD modes $\phi_i$, satisfying $\Phi^* \Phi = I$. Alternatively, (8) can be written in matrix form as

$x_k = \Phi \Lambda^k \eta_0$, $\Lambda = \mathrm{diag}[\lambda_1, \ldots, \lambda_r]$.  (9)

The continuous-time counterpart of the decomposition in (8), which corresponds to the dynamical system $\dot{x} = Ax$, is given by 16

$x(t) = \sum_{i=1}^{r} e^{\mu_i t} \eta_{0,i} \psi_i$,  (10)

where $\mu_i$ and $\psi_i$ are the eigenvalues and eigenvectors of the propagating operator $A$, related to the discrete DMD eigenvalues by

$\mu_i = \ln(\lambda_i)/\Delta t$.  (11)

In the case of a damped oscillating system, when the system comprises natural frequencies $\omega_i$ and damping ratios $\zeta_i$, we have

$\mu_i = -\zeta_i \omega_i + j \omega_i \sqrt{1 - \zeta_i^2}$,  (12)

from which, according to (11), we obtain

$\lambda_i = e^{\mu_i \Delta t} = e^{(-\zeta_i \omega_i + j \omega_i \sqrt{1 - \zeta_i^2}) \Delta t}$.  (13)

In case the system is undamped, $\zeta_i = 0$ can be substituted into (12) and (13).
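A short sketch of the discrete-to-continuous conversion in Eqs. (11)-(13) follows; the oscillator parameters are hypothetical, and the recovery of $\omega$ and $\zeta$ from $\mu$ is a direct reading of Eq. (12).

```python
import numpy as np

def continuous_spectrum(lam, dt):
    """Map discrete DMD eigenvalues to continuous ones via Eq. (11),
    mu = ln(lambda)/dt, then read off the natural frequency and damping
    ratio from mu = -zeta*omega + j*omega*sqrt(1 - zeta^2), Eq. (12)."""
    mu = np.log(lam) / dt
    omega = np.abs(mu)              # natural frequency [rad/s]
    zeta = -mu.real / np.abs(mu)    # damping ratio
    return mu, omega, zeta

# Hypothetical damped oscillator: omega = 2*pi rad/s, zeta = 0.05, dt = 0.01 s.
dt, omega0, zeta0 = 0.01, 2 * np.pi, 0.05
mu0 = omega0 * (-zeta0 + 1j * np.sqrt(1 - zeta0 ** 2))
lam0 = np.exp(mu0 * dt)             # the discrete eigenvalue of Eq. (13)
print(continuous_spectrum(np.array([lam0]), dt))  # recovers mu0, 2*pi, 0.05
```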
Eq. (8) constitutes a decomposition of the system in (1) into its dynamic modes, such that $\lambda_i$ and $\phi_i$ hold the temporal and spatial information, respectively. Yet, as described in Section I, obtaining this decomposition might be hindered for a variety of reasons, such as standing waves, one- or low-dimensional signals 16,47, and observation noise (e.g., noise in the measurement equipment). Our goal is to obtain a compact and reduced-order representation of the system similar to Eq. (8) when the challenges mentioned above are present. To accomplish this goal, we use delay-coordinates DMD as detailed below.
IV. SPATIOTEMPORAL COUPLING IN DELAY-COORDINATES DMD
In this section, we show that when DMD is applied to delay-coordinates embedding (constituting the delay-coordinates DMD), the strict separation of temporal and spatial information in representations obtained by DMD as in (8) is violated. In other words, the separation where the eigenvalues $\lambda_i$ bear the temporal information about the dynamical system and the modes $\phi_i$ represent the spatial information no longer exists in delay-coordinates DMD. Then, we show that this seemingly limiting entanglement of spatial and temporal information can be harnessed toward the selection of the DMD components required for an accurate and reduced-order characterization of the dynamical system. Specifically, we formulate and analyze the induced spatiotemporal structure of the eigenvalues and modes that arise from delay-coordinates DMD, which we term augmented DMD components. For simplicity, we divide the exposition into two stages. First, an augmentation of one sample is considered in Subsection IV A, followed by a generalization to augmentations of several samples, which is presented in Subsection IV B.
We begin by formulating the augmentation, i.e., applying delay-coordinates embedding to the observation matrices $X$ and $X'$ in Eq. (3). Let $\hat{X}$ and $\hat{X}'$ denote the augmented observation matrices, defined by

$\hat{X} = \begin{bmatrix} x_0 & x_1 & \cdots & x_{m-s-1} \\ x_1 & x_2 & \cdots & x_{m-s} \\ \vdots & & & \vdots \\ x_s & x_{s+1} & \cdots & x_{m-1} \end{bmatrix}$, $\quad \hat{X}' = \begin{bmatrix} x_1 & x_2 & \cdots & x_{m-s} \\ x_2 & x_3 & \cdots & x_{m-s+1} \\ \vdots & & & \vdots \\ x_{s+1} & x_{s+2} & \cdots & x_m \end{bmatrix}$,  (14)

where $s < m$, $s \in \mathbb{N}$, is termed the augmentation number. In the remainder of the paper, we use hats to denote either augmented or augmented-related terms. Note that $\hat{X}$ and $\hat{X}'$ are Hankel matrices, which are typical in delay-coordinates DMD 16,35,52. Same as in the exact DMD 16,47, $\hat{X}$ and $\hat{X}'$ are related through

$\hat{X}' = \hat{A} \hat{X}$,  (15)

or in vector form,

$\hat{x}_{k+1} = \hat{A} \hat{x}_k$, where $\hat{x}_k = [x_k^T, x_{k+1}^T, \ldots, x_{k+s}^T]^T \in \mathbb{R}^{n(s+1)}$.  (16)
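A minimal sketch of constructing the Hankel pair of Eq. (14) is given below; it also illustrates the motivating scalar case ($n = 1$), for which ordinary DMD is degenerate but the augmented matrix has rank 2. The helper name and the signal are ours.

```python
import numpy as np

def delay_embed(x, s):
    """Build the augmented Hankel matrices Xhat and Xhat' of Eq. (14) from
    samples x_0..x_m stacked as the columns of x (shape n x (m+1))."""
    n, m_plus_1 = x.shape
    m = m_plus_1 - 1
    Xhat = np.vstack([x[:, i:i + m - s] for i in range(s + 1)])          # n(s+1) x (m-s)
    Xhatp = np.vstack([x[:, i + 1:i + 1 + m - s] for i in range(s + 1)])
    return Xhat, Xhatp

# Hypothetical scalar (n = 1) signal, for which ordinary DMD is degenerate:
t = np.arange(0, 201) * 0.05
x = np.sin(2 * np.pi * 0.5 * t)[None, :]        # one row: n = 1, m = 200
Xhat, Xhatp = delay_embed(x, s=4)
print(Xhat.shape, np.linalg.matrix_rank(Xhat))  # (5, 196), rank 2
```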
In the remainder of the paper, we use hats to denote either augmented or augmented-related terms. Note that X andX are Hankel matrices, which are typical in delay-coordinates DMD 16,35,52 . Same as in the exact DMD 16,47 ,X andX are related throughX or in vector form, Application of the exact DMD toX andX results inr DMD eigenvalue-mode pairs denoted by {λ l ,φ l }r l=1 , wherer = rank(X). Then, similarly to Eq. (8), the augmented observationsx k can be represented aŝ whereη 0,l is the lth entry ofη 0 =Φ * x 0 , andΦ is the matrix whose columns are the augmented DMD modeŝ φ l . Eq. (17) can be written in a matrix form aŝ In order to obtain a sufficient amount of augmented DMD components for a full description of the dynamical system,r must be at least as large as the number of DMD components required for the description of the unaugmented (original) system in (8); i.e.,r ≥ r.
Recalling that $\hat{r} = \mathrm{rank}(\hat{X}) \leq \min\{n(s+1), m-s\}$, fulfillment of $\hat{r} \geq r$ depends on the choice of $s$. Concretely, choosing $s$ such that $n(s+1) \leq m-s$ leads to the range

$\frac{r}{n} - 1 \leq s \leq \frac{m-n}{n+1}$.  (19)

Conversely, the choice $n(s+1) \geq m-s$ imposes $r \leq \hat{r} \leq m-s$, which leads to

$\frac{m-n}{n+1} \leq s \leq m-r$.  (20)

In practice, as $r = \mathrm{rank}(X)$ is unknown prior to the application of DMD, one can choose $s \leq m-n$ as the upper bound in (20). By (16) and by recalling that $\hat{A}\hat{\phi}_l = \hat{\lambda}_l \hat{\phi}_l$, the expansion of the augmented observations vector $\hat{x}_{k+1}$ is given by

$\hat{x}_{k+1} = \hat{A}\hat{x}_k = \sum_{l=1}^{\hat{r}} \hat{\lambda}_l^{k+1} \hat{\eta}_{0,l} \hat{\phi}_l$,  (21)

which, using (18), can be recast in matrix form as

$\hat{x}_{k+1} = \hat{\Phi} \hat{\Lambda}^{k+1} \hat{\eta}_0$.  (22)

Eqs. (17) and (21) show that the temporal propagation of $\hat{x}_k$ can be expressed using the augmented DMD components. Specifically, multiplication of the augmented modes $\hat{\phi}_l$ by their corresponding augmented eigenvalues $\hat{\lambda}_l$ propagates $\hat{x}_k$ one time step to $\hat{x}_{k+1}$. Similarly, $\hat{x}_k$ can be propagated $q$ steps to $\hat{x}_{k+q}$ by

$\hat{x}_{k+q} = \hat{\Phi} \hat{\Lambda}^{k+q} \hat{\eta}_0$.  (23)

A. Augmentation with one sample

When $s = 1$, $\hat{x}_k \in \mathbb{R}^{2n}$ can be written as $\hat{x}_k = [x_k^T, x_{k+1}^T]^T$, splitting it into the top and bottom $n$ entries. Consequently, (18) can be rewritten as

$\hat{x}_k = \begin{bmatrix} x_k \\ x_{k+1} \end{bmatrix} = \begin{bmatrix} \hat{\Phi}^{(0)} \\ \hat{\Phi}^{(1)} \end{bmatrix} \hat{\Lambda}^k \hat{\eta}_0$,  (24)

where the columns of $\hat{\Phi}^{(0)} \in \mathbb{C}^{n \times \hat{r}}$ and $\hat{\Phi}^{(1)} \in \mathbb{C}^{n \times \hat{r}}$ are the top and bottom $n$ entries of the columns of $\hat{\Phi}$, respectively. We term each column of $\hat{\Phi}^{(0)}$ or $\hat{\Phi}^{(1)}$ a DMD sub-mode or, simply, a sub-mode. Considering only the top $n$ entries in Eq. (24) yields

$x_k = \hat{\Phi}^{(0)} \hat{\Lambda}^k \hat{\eta}_0$.  (25)

On the one hand, Eq. (25) shows that $x_k$ can be expressed by the $\hat{r}$ columns of $\hat{\Phi}^{(0)}$. On the other hand, according to (9), it can be represented using only the $r \leq \hat{r}$ columns of $\Phi$. Therefore, Eq. (25) might not be a compact representation of $x_k$, leading to the conjecture that the application of delay-coordinates DMD results in two types of DMD components: those that describe the dynamical system (true) and those that are not related to it and are an artifact of the augmentation (spurious). By applying similar considerations to the bottom $n$ entries in (24), the same conjecture can be made for $x_{k+1}$ and $\hat{\Phi}^{(1)}$. In the following assumption, we make these conjectures more precise. We note that this assumption is supported by an empirical verification in Section VII.
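Before turning to Assumption 1 below, here is a quick numeric sanity check of the admissible ranges for $s$ in Eqs. (19)-(20); the problem sizes are hypothetical.

```python
# Quick numeric check of the ranges in Eqs. (19)-(20), with hypothetical sizes:
# n = 4 state dimensions, m = 100 snapshot pairs, and rank r = 3 (unknown a priori).
n, m, r = 4, 100, 3
print("Eq. (19):", r / n - 1, "<= s <=", (m - n) / (n + 1))   # -0.25 <= s <= 19.2
print("Eq. (20):", (m - n) / (n + 1), "<= s <=", m - r)       # 19.2 <= s <= 97
print("practical upper bound m - n =", m - n)                 # 96 <= 97, hence safe
```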
Assumption 1. The r leftmost columns of Φ̂^(0), denoted Φ̂^(0)_true ∈ C^{n×r}, are termed true, and the remaining r̂ − r columns, denoted Φ̂^(0)_spurious, are termed spurious. We assume that Φ̂^(0)_true has full column rank r and that its column space spans the space of observations {x_k}.
In other words, we assume that the r leftmost columns of Φ̂^(0) are the ones required for the compact, reduced-order representation of x_k (hence, true), while the remaining r̂ − r columns are redundant (hence, spurious). According to (9) and by Assumption 1, which implies that the columns of Φ̂^(0)_true are linearly independent, x_k can be represented using only the r true columns of Φ̂^(0) and their respective eigenvalues. Now, consider x̂_{k+1} ∈ R^{2n}, which for s = 1 is x̂_{k+1} = [x_{k+1}, x_{k+2}]^T. Then, from (22) and by splitting Φ̂, Λ̂ and ĉ_k into their true and spurious parts as above, we obtain the decomposition in Eq. (26), where Λ̂_true = diag[λ̂_1, . . . , λ̂_r], Λ̂_spurious = diag[λ̂_{r+1}, . . . , λ̂_r̂], and ĉ_k^true, ĉ_k^spurious denote the r and r̂ − r expansion coefficients that correspond to the true and spurious components, respectively.
Under Assumption 1, x_{k+1} can be represented using only the true parts of (24) and (26), as stated in Eq. (27). Consequently, by equating the right-hand-side (RHS) terms of Eq. (27), we arrive at Eq. (28). Note that assuming distinct DMD eigenvalues is a common practice 25,48,56.
For convenience, we define ∆ = Φ̂^(1)_true − Φ̂^(0)_true Λ̂_true. As the columns of Ĉ lie in the subspace of solutions to Eq. (28), we have dim ker(∆) = r. By the rank–nullity theorem, the rank of the matrix ∆ equals its number of columns, r, minus the dimension of its null space, i.e., rank(∆) = r − dim ker(∆) = 0, implying that ∆ = 0. Hence, Φ̂^(1)_true = Φ̂^(0)_true Λ̂_true, or, columnwise, φ̂^(1)_l = λ̂_l φ̂^(0)_l (Eq. (31)). Proposition 2 and Eq. (31) show the spatiotemporal coupling in delay-coordinates DMD for the special case of s = 1. In this case, the sub-modes φ̂^(0)_l, φ̂^(1)_l ∈ C^n that comprise the true, augmented mode φ̂_l ∈ C^{2n} are related to one another through the corresponding true eigenvalue λ̂_l of Â. The general relations for augmentation with more than one sample are presented in the next subsection.
B. Augmentation with several samples

For a general augmentation number s, the matrix Φ̂ in Eqs. (18) and (23) splits into s + 1 sub-mode blocks Φ̂^(0), . . . , Φ̂^(s), as in Eq. (33). Next, we generalize Assumption 1 and Proposition 2 to augmentation with more than one sample, where the spatiotemporal coupling in delay-coordinates DMD is bounded from above by B, which is defined and explained in the sequel in Eq. (40); see Fig. 1(a).
We omit the proof because it is similar to the proof of Proposition 2.
Consideration of φ̂^(i)_l and φ̂^(j)_l, which are the respective ith and jth sub-modes of the true, augmented modes φ̂_l (i.e., the columns of Φ̂^(i)_true and Φ̂^(j)_true), leads to the general coupling relations in Eq. (35). The spatiotemporal coupling in delay-coordinates DMD, generally presented in Proposition 4 and Eq. (35), reveals that the true, augmented modes φ̂_l contain not only spatial information, but also temporal information given by λ̂_l. Importantly, by Proposition 4, this coupling is exhibited only by the true DMD components. In contrast, the considerations leading to this property do not apply to the spurious DMD components, and therefore they lack this coupling. This spatiotemporal coupling can be written more explicitly using (35), e.g., as φ̂^(j)_l = λ̂_l^j φ̂^(0)_l, (36) and is illustrated in Fig. 1(b), showing a zoom-in on a single augmented DMD mode φ̂_l. Alternatively, one can obtain the augmented DMD eigenvalue through (35) by taking quotients of sub-modes, as expressed in Eq. (37). Thus far, we claimed that the spatiotemporal coupling in Eqs. (34)–(37) holds for augmentation numbers bounded from above by B. Now, we present this upper bound explicitly and show that it depends on the oscillation frequencies of the underlying continuous-time dynamical system, ω_l. The existence of the upper bound B stems from the Nyquist–Shannon sampling criterion 57,58, provided that the sampling frequency ω_s is sufficiently high, i.e., ω_l ≤ 0.5 ω_s for every ω_l of the system.
By substituting ∆t = 2π/ω_s into (13), the relationship between the DMD eigenvalue λ̂_l and the oscillation frequency ω_l is λ̂_l = exp(i 2π ω_l / ω_s). (38) Therefore, taking powers p ∈ R of λ̂_l results in λ̂_l^p = exp(i 2π p ω_l / ω_s). (39) Namely, λ̂_l^p corresponds, in effect, to a frequency that is p times faster than the continuous-time counterpart of λ̂_l, i.e., pω_l. So, the Nyquist–Shannon sampling criterion for this frequency is pω_l ≤ 0.5 ω_s, which provides the constraint p ≤ 0.5 ω_s/ω_l. Accordingly, we define B(ω_l) = ⌊0.5 ω_s/ω_l⌋, (40) where ⌊·⌋ is the floor function.
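A small helper computes the frequencies and the bound (40), under the assumption that Eq. (13) takes the standard form ω_l = |arg(λ̂_l)|/∆t; the handling of non-oscillatory eigenvalues is our own choice, not prescribed by the paper.

```python
import numpy as np

def frequencies_and_bounds(eigvals, dt):
    """Oscillation frequencies from DMD eigenvalues and the bounds B of Eq. (40)."""
    ws = 2 * np.pi / dt                      # sampling frequency [rad/s]
    w = np.abs(np.angle(eigvals)) / dt       # assumed form of Eq. (13)
    B = np.zeros(len(w), dtype=int)
    osc = w > 0
    B[osc] = np.floor(0.5 * ws / w[osc]).astype(int)
    return w, B
```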
In terms of Eq. (36), and as illustrated in Fig. 1(b), λ̂_l^j relates the zeroth sub-mode of φ̂_l to its jth sub-mode, where φ̂_l represents the lth DMD mode that corresponds to the oscillation frequency ω_l. Additionally, according to Eq. (39), λ̂_l^j represents a j-times faster frequency than λ̂_l, and it is utilized by the spatiotemporal coupling in delay-coordinates DMD in Proposition 4. Consequently, this spatiotemporal coupling exists only for values of j that satisfy the Nyquist–Shannon sampling criterion (i.e., j = 0, . . . , B). In other words, λ̂_l^j for j > 0.5 ω_s/ω_l represents a frequency that is sampled at a sub-Nyquist rate and, therefore, does not capture the underlying dynamics.
We showed that the sub-modes within an augmented mode are related to each other through the corresponding eigenvalue (see Proposition 4, Eq. (35) and Fig. 1(b)). We refer to these relations as the spatiotemporal coupling in delay-coordinates DMD. That is, an augmented DMD mode holds temporal information in addition to the spatial information. In addition, we show empirically in Section VI that the spurious components do not have this coupling.
In the following sections we utilize the spatiotemporal coupling for selection of the true DMD components and further representation and characterization of dynamical systems from observations, even when the observations are corrupted with noise.
V. PROPOSED METHOD
Based on the spatiotemporal coupling in delay-coordinates DMD presented in Proposition 4, we propose a method for obtaining a compact and reduced-order representation of the observations that is analogous to Eq. (8). As discussed in Section IV, representation (8) cannot be directly obtained from applications of DMD algorithms to delay-coordinates embedding due to two challenges: (a) the possible existence of both true and spurious augmented DMD components, and (b) the existence of redundant sub-modes comprising every augmented DMD mode, which are unnecessary for the representation. Both challenges must be addressed to obtain the desired compact representation, i.e., the selection of the true augmented DMD modes, followed by the selection of the relevant sub-modes within the true modes.
To the best of our knowledge, challenge (a) is not addressed in the current literature. A common practice to address challenge (b) is to choose the first n entries in each selected augmented DMD mode, ignoring its spatiotemporal coupling.
Therefore, we propose a method that accounts for the spatiotemporal coupling in Proposition 4 and selects the DMD components required for compact representations of dynamical systems. Our method is detailed in this section and summarized in Algorithm 1. An empirical comparison between our method and the maximal amplitudes method is conducted in Section VII, where the superiority of our method is demonstrated in cases where the observations are corrupted with high levels of noise.
Our method begins by augmenting the observations s times, and applying the exact DMD to them. Generally, the augmentation number s can be chosen either in the range prescribed in (19) or in (20). Yet, the empirical evidence presented in Section VII shows that s should be chosen according to (20).
Next, the DMD eigenvalue-mode pairs, λ̂_l and φ̂_l, respectively, are extracted, followed by a calculation of the oscillation frequencies ω_l by (13), as well as the bounds B(ω_l) that correspond to each ω_l by (40).
To identify the true components among all the obtained DMD components, we propose to utilize Eq. (37), where the identification proceeds as follows. The same eigenvalue λ̂_l is repeatedly computed via substitution of different i and j values in Eq. (37). For example, by setting i = 0 and j = 1, . . . , B(ω_l), the eigenvalue λ̂_l can be computed multiple times by taking the quotients of the sub-modes, as in Eq. (41), where a tilde denotes a computed entity, and the superscripted index (j) in λ̃^(j)_l denotes the jth computation of this eigenvalue. As stated in Proposition 4, the spatiotemporal coupling in delay-coordinates DMD holds only for true DMD components. Hence, any jth computation of any lth true eigenvalue according to (41) yields the same value (in a noise-free system). Moreover, this value is identical to the DMD eigenvalue, i.e., λ̃^(j)_l = λ̂_l, for l = 1, . . . , r. Conversely, such computations of spurious DMD eigenvalues, for l = r + 1, . . . , r̂, are expected to yield different results upon substitution of varying values of j in (41), as well as results that differ from λ̂_l. Therefore, (41) contains information that enables the distinction between true and spurious DMD components.
In the presence of observation noise, the identification of true DMD components through (41) can be enhanced by introducing an averaged computed eigenvalue, λ̄_l, e.g., as λ̄_l = (1/B(ω_l)) Σ_{j=1}^{B(ω_l)} λ̃^(j)_l. (42) In case ω_l > 0.5 ω_s, then B(ω_l) = 0. This scenario can occur, e.g., when s is chosen to be very large, which, in turn, might produce a very large ω_l. Since the sampling rate admits the Nyquist–Shannon sampling criterion for the true oscillation frequencies, such an ω_l must be related to a spurious eigenvalue. Therefore, when B(ω_l) = 0, we consider the eigenvalue corresponding to ω_l as spurious and remove it from the computation in (42). Thereafter, the absolute errors, ε_l = |λ̂_l − λ̄_l|, (43) are calculated, providing an estimate of the difference between the augmented DMD eigenvalue and the average of the eigenvalues computed based on the spatiotemporal coupling in Proposition 4. Alternatively, the absolute errors can be defined as the average of the absolute differences between the computed eigenvalues λ̃^(j)_l and their corresponding DMD eigenvalue λ̂_l.
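The following sketch implements one plausible reading of Eqs. (41)–(43): consecutive sub-mode quotients, averaged elementwise, re-estimate each eigenvalue (the exact reduction used in Eq. (41) is not recoverable here). The reshaping assumes the sub-modes are stacked top to bottom, as in Eq. (24).

```python
import numpy as np

def eigenvalue_consistency_errors(Phi_hat, eigvals, B, n):
    """Absolute errors eps_l between each DMD eigenvalue and the average
    of its re-computations from sub-mode quotients (Eqs. (41)-(43))."""
    r_hat = len(eigvals)
    s = Phi_hat.shape[0] // n - 1             # augmentation number
    eps = np.full(r_hat, np.inf)
    for l in range(r_hat):
        if B[l] < 1:                          # B = 0 -> treated as spurious
            continue
        sub = Phi_hat[:, l].reshape(s + 1, n) # rows: sub-modes 0..s
        est = [np.mean(sub[j] / sub[j - 1])   # quotient of consecutive sub-modes
               for j in range(1, min(B[l], s) + 1)]
        eps[l] = abs(eigvals[l] - np.mean(est))
    return eps
```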
We note that this definition yields similar empirical results in the applications studied in this paper.
If the computed eigenvalues λ̃^(j)_l for j = 1, . . . , B(ω_l) are different from λ̂_l, then ε_l in Eq. (43) is large, which means that λ̂_l is spurious. Contrarily, if the λ̃^(j)_l are approximately equal to λ̂_l, then ε_l is small, indicating that λ̂_l is true. Therefore, ε_l, l = 1, . . . , r̂, can be partitioned into two subsets, S_1 and S_2. Without loss of generality, assume that S_1 consists of the first d values of ε_l, i.e., S_1 = {ε_1, . . . , ε_d}, where 1 ≤ d ≤ r̂, and that S_2 contains the remaining r̂ − d values, S_2 = {ε_{d+1}, . . . , ε_r̂}. This partitioning can be carried out by applying standard clustering algorithms to ε_l, e.g., k-means with k = 2. After clustering, the average absolute errors of subsets S_1 and S_2 are computed as ε_{S1} = (1/d) Σ_{l=1}^{d} ε_l and ε_{S2} = (1/(r̂ − d)) Σ_{l=d+1}^{r̂} ε_l, (44) and then the smaller of ε_{S1} and ε_{S2} indicates the subset that contains the ε_l related to the true DMD eigenvalues. The distinction between small and large absolute errors was straightforward in the illustrative example (see Section VI, Fig. 6). In some cases, the DMD components of dynamical systems arise in complex conjugate pairs. In such cases, each complex conjugate pair represents the same characteristics of the system (e.g., the same oscillation frequency ω_l). These pairs embody another degree of redundancy, which can be exploited, for example, as follows. Denote λ̂_q as the complex conjugate of λ̂_l. Then, similarly to (42), an averaged eigenvalue λ̄_{l,q}, which accounts for this redundancy, can be written as in Eq. (45), where an overline denotes a complex conjugate. As λ̄_{l,q} is computed over both l and q, the influence of noise on the observations can be further diminished in these systems. Lastly, once the subset of the true augmented DMD components is identified using Eq. (44), the observations can be represented analogously to (8), as in Eq. (46), where σ̂_{0,l} = ⟨φ̂_l, x̂_0⟩, |S_1| is the cardinality of S_1, and, without loss of generality, we assume that S_1 is the subset that contains the ε_l values related to the true DMD eigenvalues λ̂_l. We note that our method has two main shortcomings, which we plan to address in future research. First, compared to mode selection based on maximal amplitudes, our method is computationally heavier. Second, as shown in the next section, our empirical study suggests that the performance is sensitive to the choice of the augmentation number s. We plan to develop a systematic procedure for setting the augmentation number, as well as more efficient implementation schemes that mitigate the repeated computation of the eigenvalue decomposition.
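Clustering the errors with k-means (k = 2) and keeping the lower-error cluster, as described above, can be sketched as follows; clustering on a log scale is our own choice, made because the two groups differ by orders of magnitude in the example of Section VI.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_true_components(eps):
    """Indices of the DMD components in the cluster with the smaller
    average absolute error (Eq. (44))."""
    finite = np.flatnonzero(np.isfinite(eps))
    feats = np.log10(eps[finite]).reshape(-1, 1)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats)
    means = [feats[labels == c].mean() for c in (0, 1)]
    return finite[labels == int(np.argmin(means))]
```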
Algorithm 1 Proposed algorithm
Input: Observations {x_k}_{k=0}^{m} of a dynamical system. Output: Compact and reduced-order spectral representation of the observations. 1. Augment the data s times according to the range in (20), apply the exact DMD algorithm to the augmented observations, and extract the augmented DMD eigenvalue-mode pairs, λ̂_l and φ̂_l, respectively.
2. Calculate the oscillation frequencies ω_l that correspond to each augmented DMD eigenvalue λ̂_l using Eq. (13). 3. Compute the bounds B(ω_l) that correspond to each ω_l by Eq. (40). 4. Compute the eigenvalues λ̃^(j)_l, j = 1, . . . , B(ω_l), from the sub-mode quotients in Eq. (41). 5. Compute the averaged eigenvalues λ̄_l by Eq. (42) and the absolute errors ε_l by Eq. (43). 6. Partition ε_l into two subsets using k-means with k = 2, and compute the average value of each subset, as in (44). Select the averaged eigenvalues (and their corresponding modes) related to the subset with the smaller average value.
7. Use the selected DMD components for building a compact and reduced-order spectral representation of the observations, according to Eq. (46).
VI. ILLUSTRATIVE EXAMPLE
By following the proposed algorithm (Algorithm 1) step by step, we demonstrate the spatiotemporal coupling in delay-coordinates DMD (formulated in Section IV) on an illustrative example of a two-mode sine signal. More specifically, we show how Proposition 4 facilitates the identification, in an equation-free manner, of the DMD components required for the representation and characterization of the signal. Evaluation is done by comparing the true signal's oscillation frequencies to the oscillation frequencies that correspond to the DMD eigenvalues selected by our method. Our method can also be visually evaluated through its signal reconstruction compared to the original signal, as shown in Fig. 3. Consider the two-mode sine signal in Eq. (47), a sum of two sinusoids with additive noise, where ω_1^sys = 3 rad/s, ω_2^sys = 5 rad/s, n(t) is an additive white standard Gaussian noise, and α = 10^{−12}. Fig. 2 shows a dynamical system whose coordinate x(t) can represent such a signal. Specifically, signal (47) can be viewed as the position of mass m_1 in a two-degrees-of-freedom (DOF) oscillator, comprised of two masses, m_1 and m_2, connected to each other via springs. We denote the components that correspond to the system by the superscript sys, i.e., ω_1^sys and ω_2^sys are the system's harmonics.
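As a runnable counterpart of the signal described above, the snippet below generates the sampled two-mode sine; unit amplitudes are an assumption, since the displayed Eq. (47) is not reproduced here.

```python
import numpy as np

w1, w2 = 3.0, 5.0                  # system harmonics [rad/s]
dt = 0.1                           # ws = 2*pi/dt = 20*pi rad/s (10 Hz)
t = np.arange(101) * dt            # 101 observations, as in the text
alpha = 1e-12
rng = np.random.default_rng(0)
x = np.sin(w1 * t) + np.sin(w2 * t) + alpha * rng.standard_normal(t.size)
```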
The signal is sampled at the sampling rate ω s = 2π/∆t = 20π rad/s (corresponding to 10 Hz) and 101 samples (observations) {x k } 100 k=0 are collected, shown by the solid black line in Fig. 3.
We begin the illustration by augmenting the observations with s = 11. This choice of s is in accordance with the upper bound on s in (20), yet it does not adhere to the lower bound in the same equation. One of the main purposes of this section is to demonstrate that Proposition 4 holds for true DMD components and up to the bound B(ω l ). Consequently, we choose s = 11, which is a value that, on the one hand, demonstrates these two statements for system (47), and on the other hand, provides a small enough number of eigenvalues that can be conveniently visualized. That is, this choice is made merely for illustrative purposes.
The exact DMD algorithm is applied to the augmented observations, resulting in s + 1 = 12 augmented DMD eigenvalue-mode pairs, {λ̂_l, φ̂_l}_{l=1}^{12}. Fig. 4(a) presents the polar representation of the λ̂_l, and Fig. 4(b) shows the oscillation frequencies, ω_l (corresponding to the eigenvalues by Eq. (13)), sorted in ascending order. It can be observed in Fig. 4(a) that the DMD eigenvalues of this signal arise in complex conjugate pairs. Moreover, these eigenvalues are distinct, satisfying the condition of Propositions 2 and 4.
Since the signal is composed of two oscillation frequencies, ω_1^sys and ω_2^sys, two complex conjugate DMD eigenvalue pairs, λ̂_{1,2} and λ̂_{3,4}, are related to these frequencies − these are the true eigenvalues, and they are marked by blue circles in Fig. 4. Indeed, Fig. 4(b) corroborates them as true, since they correspond to the frequencies 3 rad/s and 5 rad/s. Similarly, the remaining 8 eigenvalues (and their corresponding modes) are spurious and are marked by red crosses in Fig. 4.
Our goal of identifying the true DMD components required for the system description in an equation-free manner can be visually described via Fig. 4. Suppose the observations {x k } 100 k=0 were obtained from an unknown system, i.e., without the knowledge that 3 rad/s and 5 rad/s are the system's harmonics. Then, the markings of the blue circles and red crosses would be unknown, as well. In this regard, our goal would be the task of identifying the blue circles, which represent the true DMD eigenvalues.
By Eq. (40), the bounds that correspond to the two system harmonics are computed to be B(ω_1^sys) = 10 and B(ω_2^sys) = 6. Next, we demonstrate Proposition 4 by showing that the spatiotemporal coupling indeed holds up to these bounds. The choice of s = 11 is the minimal value of s that can serve this purpose. As an example, we consider the true eigenvalues λ̂_1 and λ̂_3, as well as the spurious eigenvalue λ̂_9. For each of them, we follow the eigenvalue computation in Eq. (41) eleven times; namely, we substitute j = 1, . . . , 11 for each of the three values l = 1, 3, 9 in Eq. (41). Fig. 5 presents the polar representations of these computed eigenvalues, with the corresponding DMD eigenvalue shown as a blue dot. As expected, and according to B(ω_1^sys) = 10, the first 10 computed eigenvalues coincide with each other and appear as a single circle in Fig. 5(a). Moreover, a comparison between them and their corresponding true DMD eigenvalue, λ̂_1, reveals that λ̃^(j)_1 = λ̂_1 for j = 1, . . . , 10. Contrarily, the 11th computed eigenvalue, λ̃^(11)_1, deviates from λ̂_1, since j = 11 exceeds the bound B(ω_1^sys) = 10. Lastly, Fig. 5(c), which relates to the spurious DMD eigenvalue λ̂_9, depicts the 11 computed eigenvalues λ̃^(1)_9, . . . , λ̃^(11)_9, which are different from each other. This difference indicates that λ̂_9 is spurious, as relations (41) are not valid for spurious DMD components.
To conclude the demonstration in Fig. 5, we showed that only true DMD components adhere to the spatiotemporal coupling in delay-coordinates DMD in Proposition 4, and only up to the bound B(ω_l) in Eq. (40).
Next, we exploit this spatiotemporal coupling in order to identify the true DMD components required for a compact representation of the dynamical system. Accordingly, we compute all the averaged eigenvalues, λ̄_l, l = 1, . . . , 12, via Eq. (42), followed by a calculation of their absolute errors, ε_l, by Eq. (43). Fig. 6 shows ε_l for each of the 12 eigenvalues in this illustrative example, where λ̂_1, . . . , λ̂_4 are the true eigenvalues, marked by blue circles, and the rest are the spurious eigenvalues, marked by red crosses. Evidently, the absolute errors of λ̂_1, . . . , λ̂_4 are smaller by at least 10 orders of magnitude than the absolute errors of the spurious eigenvalues. Consequently, the true and spurious eigenvalues can be easily distinguished by visual inspection based on Fig. 6.
Figure 6. The absolute errors, ε_l, between the averaged eigenvalues λ̄_l in Eq. (42) and their corresponding DMD eigenvalues, λ̂_l (see Eq. (43)). The true eigenvalues, λ̂_1, . . . , λ̂_4, are marked by blue circles, and the spurious eigenvalues, λ̂_5, . . . , λ̂_12, are marked by red crosses. The absolute errors of the spurious λ̂_l are larger by at least 10 orders of magnitude than those of the true λ̂_l, providing a clear distinction between the true and spurious DMD eigenvalues. The markings of the blue circles and red crosses were obtained in an unsupervised manner using Algorithm 1.
Based on this identification, the original (sampled) signal is reconstructed using Eq. (46). The accuracy of the reconstruction can be observed in Fig. 3, which shows the original (black solid line) and the reconstructed (blue dashed line) signals. We note that the markings of the blue circles and red crosses in Fig. 6, as well as the reconstructed signal in Fig. 3, were obtained in a completely unsupervised and automatic manner using Algorithm 1.
To conclude, we emphasize that the input of our algorithm is the samples (observations) of the system, where it operates in an equation-free and unsupervised fashion to, eventually, extract a reduced-order and optimal spectral representation of the system.
The code that reproduces the results in this section is openly available in the following GitHub link.
A. Two-mode sine signal
Consider the two-mode sine signal in Eq. (47). We examine noise with four different amplitudes α that correspond to signal-to-noise ratios (SNR) of 15, 10, 5, and 0 dB. We test the performance of the proposed method (detailed in Section V and summarized in Algorithm 1) for the augmentation numbers s = 10, 20, . . . , 360, and compare its DMD component selection to the selection method based on maximal amplitudes. For comparison purposes, we focus on the accuracy of the recovery of the oscillation frequencies of the signal obtained by the two methods. For a fair comparison, we set the number of selected DMD components to 4 (2 oscillation frequencies that appear in 2 complex conjugate pairs correspond to 4 DMD eigenvalues). Note that, in general, our method does not require the number of DMD components as a prior, but infers it from the observations. The recovery of the oscillation frequencies is evaluated by the absolute errors, ϵ_i, between the system's true oscillation frequencies (ω_1^sys and ω_2^sys) and the frequencies obtained from the DMD eigenvalues that were selected based on our method and the maximal amplitudes method, denoted by the superscripts our and amp, respectively. Namely, we calculate ϵ_i^our = |ω_i^sys − ω_i^our| and ϵ_i^amp = |ω_i^sys − ω_i^amp|, for i = 1, 2. We repeat the calculations of ϵ_i 500 times, each time with a different realization of the random noise, and compute the medians of the absolute errors, ϵ̃_i, as well as their 25th and 75th percentiles. Fig. 7 considers the first oscillation frequency, ω_1^sys = 3 rad/s, and presents ϵ̃_1 for different augmentation numbers s at different SNR values, where the 25th and 75th percentiles of ϵ_1 are denoted by the whiskers. Fig. 8 shows the same analysis for ω_2^sys = 5 rad/s. Figs. 7 and 8 show that at the larger SNR values (15 and 10 dB), both ω_1^sys and ω_2^sys can be recovered with small errors by both methods, given that s is chosen according to the range indicated in (20) for our method, and in (19) for the maximal amplitudes method.
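Scaling the noise amplitude α to a target SNR can be done as below; the paper does not state its exact SNR convention, so the usual power-ratio definition is assumed here.

```python
import numpy as np

def noise_for_snr(signal, snr_db, rng=None):
    """Return white Gaussian noise scaled so that the noisy signal has
    the requested SNR in dB (power-ratio definition, an assumption)."""
    rng = rng or np.random.default_rng()
    noise = rng.standard_normal(signal.shape)
    p_sig = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    alpha = np.sqrt(p_sig / (p_noise * 10 ** (snr_db / 10)))
    return alpha * noise
```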
Yet, this is not the case for the smaller SNR values. For an SNR value of 5 dB, the maximal amplitudes method is unable to recover ω_1^sys and ω_2^sys with reasonable errors; namely, the frequencies are obtained with either a large error median or a large error variance of ϵ_i. Contrarily, our method recovers these frequencies with small errors when the values of s are set in the range (20). For an SNR value of 0 dB, the maximal amplitudes method leads to large error medians, whereas our method obtains small error medians when the values of s are set in the range (20).
B. Quasiperiodic signal
Consider the noisy quasiperiodic signal in Eq. (49), which can be recast as Eq. (50), where n(t) is a white standard Gaussian noise with varying amplitudes α that correspond to SNR values of 15, 10, 5, and 0 dB.
Similarly to the two-mode sine signal simulation (Section VII A), we test the capability to uncover the signal's irrational frequencies in (50) using our method and compare it with the maximal amplitudes method. Figs. 9 and 10 are the same as Figs. 7 and 8, but with respect to ω_1^sys = (√10 − 1) rad/s and ω_2^sys = (√10 + 1) rad/s, respectively.
For both ω_1^sys and ω_2^sys, the maximal amplitudes method performs well only at an SNR value of 15 dB. At the other three examined SNR values, the system's frequencies are uncovered with a large error variance (10 dB) or a large error median (5 and 0 dB). Contrarily, our method uncovers ω_1^sys with small errors at SNR = 15, 10, and 5 dB, and with a small error median but large variance at 0 dB. Also, using our method, ω_2^sys is uncovered with small errors at SNR = 15 and 10 dB, and with a small error median but large variance at 5 dB. At SNR = 0 dB, both methods fail to uncover ω_2^sys.
C. Multi degrees of freedom oscillator
Consider an oscillator comprised of N masses m 1 , . . . , m N connected by springs of constants k 1 , . . . , k N .
The first mass m_1 is connected to a wall via the first spring k_1, as depicted in Fig. 11 for N = 3. The equations of motion of this system take the standard undamped form Mẍ + Kx = 0, where M = diag(m_1, . . . , m_N) is the mass matrix, K is the stiffness matrix assembled from the spring constants, and x = [x_1, x_2, . . . , x_N]^T are the positions of the masses. To excite motion, the system is perturbed from its rest state by moving each mass to some initial position and then releasing them all at once. Namely, the system is subjected to the initial conditions x(t = 0) = [x_1^0, x_2^0, . . . , x_N^0]^T, as well as to zero initial velocities of all masses (ẋ(t = 0) = 0).
At all the examined SNR values, ω sys 1 and ω sys 2 can be recovered with small errors using both methods. These results are similar for ω sys 3 at SNR values of 40 and 35 dB. However, the maximal amplitudes method fails to uncover ω sys 3 with reasonable errors at SNR value of 30 dB, while our method successfully performs this task. At SNR value of 25 dB, the maximal amplitudes method is unable to uncover ω sys 3 as well, whereas our method uncovers this frequency with small error median and large error variance.
D. Damped oscillator
Consider a damped oscillator with a mass m = 1 kg, connected to a wall via a spring with constant k = 49 N/m and a damper with damping coefficient c = 0.07 Ns/m, as illustrated in Fig. 15. These system parameters correspond to the natural frequency ω_n = 7 rad/s, damping ratio ζ = 0.005, and frequency of damped vibrations ω_d = ω_n √(1 − ζ²) = 6.9999 rad/s. The mass is subjected to the initial conditions x(0) = x_0 = 1 m, ẋ(0) = v_0 = 2 m/s, giving rise to oscillations, which induce the following time response 59: x(t) = exp(−ζω_n t) [((v_0 + ζω_n x_0)/ω_d) sin(ω_d t) + x_0 cos(ω_d t)]. (52)
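The response (52) is straightforward to evaluate; the sketch below reproduces it with the stated parameters.

```python
import numpy as np

# Free response of the underdamped oscillator, Eq. (52).
m, k, c = 1.0, 49.0, 0.07
wn = np.sqrt(k / m)                  # natural frequency, 7 rad/s
zeta = c / (2 * np.sqrt(k * m))      # damping ratio, 0.005
wd = wn * np.sqrt(1 - zeta ** 2)     # damped frequency
x0, v0 = 1.0, 2.0                    # initial conditions

def x(t):
    return np.exp(-zeta * wn * t) * (
        (v0 + zeta * wn * x0) / wd * np.sin(wd * t) + x0 * np.cos(wd * t)
    )
```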
Random noise drawn from a normal distribution, with amplitudes that correspond to SNR values of 20, 15, 10, and 5 dB, is added to (52). The recovery accuracy of ω_n and ζ obtained by our method and the maximal amplitudes method is presented in Fig. 16 and Fig. 17, respectively.
At SNR values of 20 and 15 dB, ω n and ζ are recovered with small errors using both methods. At SNR values of 10 and 5 dB, the maximal amplitudes method is unable to recover both ω n and ζ with reasonable errors. Conversely, at these noise levels, our method yields a recovery with errors at the order of 10 −2 for ω n and 10 −3 for ζ.
VIII. CONCLUSIONS
Delay-coordinates DMD is widely used for data-driven analysis of dynamical systems based on observations in a broad range of fields. In this paper, we investigated two key questions concerning the inherent redundancy that arises from the use of delay-coordinates DMD: the excess of dynamical components (i.e., spurious components) and the excess of dimensionality (i.e., coordinates). We showed that delay-coordinates DMD induces a particular structure on the augmented DMD components, consisting of a spatiotemporal coupling. At first glance, this coupling seems to counter the core idea underlying DMD, which facilitates a representation of the system that decouples temporal and spatial patterns. Yet, a deeper look allowed us not only to mitigate this coupling, but also to exploit it. Based on the spatiotemporal coupling we presented, we proposed a method for constructing a compact and improved reduced-order DMD representation. Specifically, we showed how to identify and select the informative (true) DMD components, thereby addressing the excess of dynamical components. This identification is based on the induced temporal associations within each augmented mode, which allowed us to address the redundancy in dimensionality. We tested the proposed method on four dynamical systems corrupted with noise and compared its performance to the prevalent method, which is based on the maximal amplitudes of the DMD modes. The results demonstrate the advantages of the proposed method, especially in the presence of high levels of noise.
Figure 11. Three degrees of freedom oscillator, comprised of three masses m1 = 1 kg, m2 = 2 kg, m3 = 3 kg, connected via three springs of constants k1 = k2 = 50 N/m, k3 = 75 N/m. The system is subjected to the initial conditions x1(0) = 1 m, x2(0) = 2 m, x3(0) = 3 m, ẋ1(0) = ẋ2(0) = ẋ3(0) = 0, giving rise to its oscillations. Figure 12. Same as Fig. 7, but for ω1 of the 3 DOF oscillator. Figure 16. Same as Fig. 7, but for ωn of the damped oscillator. Figure 17. Same as Fig. 7, but for ζ of the damped oscillator. | 2022-09-22T06:42:25.652Z | 2022-09-21T00:00:00.000 | {
"year": 2022,
"sha1": "ae2740778dcf2e6e6a339799f973249b0e6093ea",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ae2740778dcf2e6e6a339799f973249b0e6093ea",
"s2fieldsofstudy": [
"Physics",
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Mathematics"
]
} |
39984651 | pes2o/s2orc | v3-fos-license | Decomposable medium conditions in four-dimensional representation
The well-known TE/TM decomposition of time-harmonic electromagnetic fields in uniaxial anisotropic media is generalized in terms of four-dimensional differential-form formalism by requiring that the field two-form satisfies an orthogonality condition with respect to two given bivectors. Conditions for the electromagnetic medium in which such a decomposition is possible are derived and found to define three subclasses of media. It is shown that the previously known classes of generalized Q-media and generalized P-media are particular cases of the proposed decomposable media (DCM) associated to a quadratic equation for the medium dyadic. As a novel solution, another class of special decomposable media (SDCM) is defined by a linear dyadic equation. The paper further discusses the properties of medium dyadics and plane-wave propagation in all the identified cases of DCM and SDCM.
TE/TM decomposition
The most general linear electromagnetic medium (bi-anisotropic medium) can be expressed in terms of four medium dyadics in the three-dimensional Gibbsian vector ("engineering") representation as [1,2] D = ǫ_g · E + ξ_g · H, B = ζ_g · E + µ_g · H, (1) where the four field vectors are elements of the vector space E_1. The number of free parameters is 4 × 9 = 36 in the general case. It is well known that, in a uniaxial anisotropic medium defined by medium dyadics of the form ǫ_g = ǫ_t(u_x u_x + u_y u_y) + ǫ_z u_z u_z, ξ_g = 0, (2) µ_g = µ_t(u_x u_x + u_y u_y) + µ_z u_z u_z, ζ_g = 0, (3) and satisfying ǫ_t µ_z − µ_t ǫ_z ≠ 0, (4) any time-harmonic field with time dependence e^{jωt} can be uniquely decomposed in two parts satisfying u_z · E^{TE} = 0, u_z · H^{TM} = 0.
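As a quick numerical illustration of condition (4), the uniaxial dyadics of Eqs. (2)–(3) can be represented as 3 × 3 matrices; the parameter values below are arbitrary and serve only to exhibit a medium with a unique TE/TM decomposition.

```python
import numpy as np

eps_t, eps_z = 2.0, 4.0                  # illustrative permittivity parameters
mu_t, mu_z = 1.0, 1.5                    # illustrative permeability parameters
eps_g = np.diag([eps_t, eps_t, eps_z])   # Eq. (2), in the basis (ux, uy, uz)
mu_g = np.diag([mu_t, mu_t, mu_z])       # Eq. (3)

# Condition (4): a nonzero value means the TE/TM decomposition is unique.
print(eps_t * mu_z - mu_t * eps_z)       # -1.0, nonzero
```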
This property was probably first published by Clemmow in 1963 [3]. In the case ǫ t µ z − µ t ǫ z = 0 the decomposition can still be made but it is not unique. Actually, such a medium can be transformed to an isotropic medium through a suitable affine transformation [2]. The TE/TM decomposition in isotropic media has a longer history [4].
TE/TM decomposition in the uniaxial medium can be simply demonstrated for a plane wave. In fact, because the fields of a plane wave in any medium satisfy E · B = 0 and H · D = 0, in the uniaxial medium (2), (3) these two conditions combine to (ǫ_t µ_z − µ_t ǫ_z)(u_z · E)(u_z · H) = 0. Thus, assuming (4), the plane wave must be either a TE wave or a TM wave with respect to the axial direction defined by the unit vector u_z. Since any electromagnetic field outside the source region can be decomposed in a spectrum of plane waves, each component of which is either a TE wave or a TM wave, the field can be uniquely decomposed in TE and TM parts. The same principle is valid also for more general decompositions. Thus, it is sufficient to consider the decomposition problem for plane waves only.
Generalized decomposition
The TE/TM decomposition theory has been generalized to media in which the fields can be decomposed in two parts satisfying either a · E = 0 or b · H = 0, where a and b are two given vectors [5]. Even more generally, [6] analyzes the occurrence of a_1 · E + a_2 · H = 0 or b_1 · E + b_2 · H = 0, where a_1 · · · b_2 are four given vectors. This last decomposition was shown to be possible in bi-anisotropic media with Gibbsian medium dyadics of the form (8)–(11), for example ζ_g = z × I + (1/(2η))(a_1 b_1 + b_1 a_1), where x, z are arbitrary vectors, τ, η are arbitrary scalars and B is an arbitrary dyadic. Media defined by (8)–(11) have been called decomposable media [6]. The paper [7] demonstrates that yet another scalar parameter α can be added to the definitions of the medium dyadics. In this case, the definitions (8)–(11) are generalized (after some manipulations) to the form (12)–(15).
Four-dimensional formalism
Remarkably, the conditions (12) -(15) of the decomposable medium can be formulated in a compact way by applying the four-dimensional differential-form formalism. The present paper thus starts directly from the four-dimensional definition of decomposable media.
It is well known that the Maxwell equations and the medium equation have a simpler appearance in the four-dimensional differential-form representation than in the three-dimensional Gibbsian vector formalism [8,9,10]. Here the electromagnetic fields are represented by two-forms Φ, Ψ, elements of the space F_2, whose expansions in terms of the three-dimensional two-forms B, D and one-forms E, H are Φ = B + E ∧ ε_4, Ψ = D − H ∧ ε_4, so that the Maxwell equations read d ∧ Φ = 0, d ∧ Ψ = γ. (16) Here, γ ∈ F_3 is the source three-form, consisting of the three-dimensional charge three-form ̺ and current two-form J. In the basis of one-forms ε_i, i = 1 · · · 4, ε_4 denotes the temporal basis one-form. The medium equation reads Ψ = M|Φ, where the medium dyadic M ∈ F_2 E_2 maps two-forms to two-forms and corresponds to a 6 × 6 matrix. It is often simpler to consider the modified medium dyadic M_g ∈ E_2 E_2, defined by M_g = e_N⌊M in terms of a quadrivector e_N ∈ E_4. The modified medium dyadic maps two-forms to bivectors, elements of the space E_2. A summary of definitions and operational rules for differential forms, multivectors and dyadics applied in this paper can be found in the appendices of [11,12,13] and, more extensively, in the book [10].
The medium dyadic M can be expanded in four three-dimensional dyadic components by separating terms involving the temporal vector e_4, the temporal one-form ε_4, or both, as in Eq. (21); this corresponds to a 2 × 2 block-matrix representation. One can likewise represent the modified medium dyadic by using the Gibbsian medium dyadics, with a matrix representation whose 3 × 3 blocks are built from the four Gibbsian dyadics. The matrix is the same as in the expression (1), which involves Gibbsian vectors.
The two sets of 3D medium dyadics are related by [10] ǫ′ = ε_123⌊(ǫ_g − ξ_g|µ_g^{−1}|ζ_g), together with companion relations for the remaining dyadics. The four-dimensional formalism allows a simple definition of important classes of electromagnetic media. For example, if the modified medium dyadic can be expressed in terms of the double-wedge square of some dyadic Q ∈ E_1 E_1 (which need not be symmetric) as M_g = MQ^{(2)}, (28) the corresponding three-dimensional Gibbsian dyadics satisfy relations of the form (29) [14,10] for some scalar α. Thus, ǫ_g and µ_g^T are multiples of the same dyadic, while ξ_g and ζ_g may be any antisymmetric dyadics. In [14], a medium defined by (28) was called a Q-medium for brevity. Such a medium is known to be non-birefringent for propagating waves. Thus, media in this class can be conceived as generalizations of isotropic media. Also, the class known as transformation media [16,17,18] largely coincides with the class of Q-media with a symmetric dyadic Q.
Generalizing the definition (28) by adding a term AB, where A, B are two bivectors, leads to the class of generalized Q-media, M_g = MQ^{(2)} + AB, (30) defined in [19]. One can show that, for this kind of media, the three-dimensional medium dyadics must be of the form (8)–(11), i.e., any generalized Q-medium is actually a decomposable medium. However, the converse does not hold, because the more general set of conditions (12)–(15) with α ≠ 0 cannot be achieved with medium dyadics of the form (30). At this stage it is not obvious how to generalize (30) to correspond to the conditions (12)–(15).
Hehl-Obukhov decomposition
In many applications, a decomposition of the medium dyadic based on its symmetry properties is often useful. As shown by Hehl and Obukhov [9] (following the symmetry properties discussed by Post [20]), the most general medium dyadic can be uniquely decomposed in three components as M = M_1 + M_2 + M_3, (31) called the principal (1), skewon (2) and axion (3) parts of M [9]. The axion part M_3 is a multiple of the unit dyadic I^{(2)T} mapping any two-form to itself, while the other two parts are trace free. The skewon part is defined so that the corresponding modified medium dyadic e_N⌊M_2 ∈ E_2 E_2 is antisymmetric, while the principal part M_1 is trace free and e_N⌊M_1 is symmetric. The decomposition (31) motivates an intuitive nomenclature; for example, a medium defined by M = M_1 is called a principal medium and one with M = M_2 + M_3 is called a skewon-axion medium.
Plane-wave conditions
Let us now formulate the decomposition property in terms of four-dimensional quantities. For plane-wave fields, the Maxwell equations (16) yield ν ∧ Φ = 0 and ν ∧ Ψ = 0 for the wave one-form ν and the electromagnetic amplitude two-forms. These imply the following representations in terms of potential one-forms φ, ψ: Φ = ν ∧ φ, (33) Ψ = ν ∧ ψ. (34) Thus, the electromagnetic two-forms of any plane wave satisfy the orthogonality conditions Φ ∧ Φ = 0, Φ ∧ Ψ = 0, Ψ ∧ Ψ = 0 (35) in any medium. Defining the dot product between two two-forms Φ, Ψ, or two bivectors, through the four-form they span, the four-form conditions (35) can be expressed as the scalar conditions Φ · Φ = 0, Φ · Ψ = 0, Ψ · Ψ = 0. Thus, for any medium dyadic M, the two-form Φ of any plane wave satisfies a condition of the form αΦ · Φ + βΦ · (M|Φ) + γ(M|Φ) · (M|Φ) = 0 for arbitrary scalars α, β, γ. For the modified medium dyadic this condition becomes the quadratic condition (40), because the dot products can be expressed through M_g.
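The orthogonality conditions (35) can be checked numerically by representing two-forms as antisymmetric 4 × 4 arrays and contracting with the Levi-Civita symbol; the helper names below are hypothetical and the scaling is a sketch-level choice.

```python
import numpy as np
from itertools import permutations

# Levi-Civita symbol in four dimensions
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    sgn = np.sign(np.prod([p[j] - p[i] for i in range(4) for j in range(i + 1, 4)]))
    eps[p] = sgn

def wedge2(a, b):
    """Two-form a ^ b as an antisymmetric 4x4 array."""
    return np.outer(a, b) - np.outer(b, a)

def dot4(F, G):
    """Scalar of the four-form F ^ G (proportional to eps_ijkl F_ij G_kl)."""
    return np.einsum("ijkl,ij,kl->", eps, F, G) / 4.0

rng = np.random.default_rng(1)
nu, phi, psi = rng.standard_normal((3, 4))
Phi, Psi = wedge2(nu, phi), wedge2(nu, psi)
print(dot4(Phi, Phi), dot4(Phi, Psi), dot4(Psi, Psi))  # all ~0, as in (35)
```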
Condition for the medium dyadic
Let us now assume that, given two bivectors A, B ∈ E_2, the medium has the property that any plane wave satisfies either A|Φ = 0 or B|Φ = 0. Such waves can be respectively called A-waves and B-waves, and the medium, in analogy with the medium defined by (8)–(11), can be called by the general name decomposable medium. Thus, any plane wave in such a medium is required to satisfy (A|Φ)(B|Φ) = 0 (42) for the two given bivectors A, B. Following [6], let us now define the class of decomposable media by combining (40) and (42) and requiring that the condition (43) be satisfied for all two-forms Φ. Since the left-hand side is zero for all media, this warrants that (42) is satisfied when the medium is such that (43) is satisfied. Requiring that this be valid for any two-form Φ implies that the symmetric parts of the dyadics in brackets on both sides must be the same. Redefining the coefficients, we can write the condition in the form (44). Although (44) is obviously enough to define a large class of media, it is not enough for claiming that this class covers all media for which the decomposition condition (42) is satisfied. The latter question must be left as a topic for further research.
To find solutions M_g of the condition (44), we must separate two cases: either γ ≠ 0 or γ = 0.
• The case γ ≠ 0 requires solving a symmetric quadratic dyadic equation, and the corresponding class of media can be called that of (proper) decomposable media, or DCM for brevity.
• In the case γ = 0 the quadratic dyadic equation is reduced to one of the first order. It will require separate consideration and actually defines a distinct class of media called that of special decomposable media or SDCM.
3 Solutions to the medium conditions
DCM
Assuming γ ≠ 0 in (44), we can set γ = 1 without losing generality. In this case the DCM condition (44) can be expressed in the compact form (45) by defining the quantities in (46)–(49). The condition (45) is a quadratic dyadic equation, whose solutions are derived in the Appendix (cf. (122)). Accordingly, two subclasses of DCM are obtained. The first class assumes that there exists a dyadic Q ∈ E_1 E_1 such that M′_g is a multiple of Q^{(2)}. The second class assumes that there exists a dyadic P ∈ F_1 E_1 such that M′_g is a multiple of e_N⌊P^{(2)}. Let us consider these two cases separately and respectively call them QDCM and PDCM. This nomenclature is chosen in the light of the obvious relation with Q-media [14] and P-media [15].
Redefining α, the QDCM solution of (45), as obtained from (46)–(49), must be of the general form (50), i.e., of the form (51), for some normalized dyadic Q, bivectors C, D and scalars M, α. Thus, the definition (48) yields the more explicit relation (52). Given the bivectors A = C and B, one can easily solve (52) for D. It is also easy to verify that plane-wave fields in a medium defined by (51) satisfy the decomposition property (42) (see Section IV A).
The three-dimensional Gibbsian medium dyadics corresponding to the general QDCM can be expressed in the general form (12)–(15). In fact, adding an axion term with α ≠ 0 to the medium dyadic of the generalized Q-medium (30) analyzed in [19], the result can be shown to correspond to the more general class (50) of decomposable media (12)–(15).
The second possibility in (45), that of PDCM, yields a decomposable-medium form defined by some normalized dyadic P, bivectors C, D and scalars M, α. The definition (48) in this case yields the relation (55). Setting again C = A, one can easily solve (55) for D in terms of the given A and B. For α = 0 the medium coincides with the one called the generalized P-medium, whose basic properties have been studied in [15]. Thus, the PDCM solution coincides with the generalized P-medium extended by an arbitrary axion component.
Based on the expansions of the dyadic P and the bivector product DC in 3D components, the 3D components of the medium dyadic M, expressed as in (21), take in the PDCM the form given in [15]. One can note that the dyadics ǫ′ and µ^{−1} have a quite restricted form. In the case DC = 0 they actually do not have an inverse, in contrast to the QDCM case. Actually, P-media and Q-media can be transformed to one another through Hodge duality [15]. The same property is also valid for the generalized Q- and P-media.
SDCM
Let us now consider the special case of the condition (44) simplified by γ = 0 and β = 1, which yields the first-order dyadic equation (62). Equation (62) can be interpreted so that the symmetric part of the dyadic M_g − AB must be a multiple of e_N⌊I^{(2)T}. Redefining α, the modified medium dyadic must thus be of the form (63), where A ∈ E_2 E_2 is an arbitrary antisymmetric dyadic. It is known that any antisymmetric dyadic mapping two-forms to bivectors can be expressed in terms of a trace-free dyadic B_o, as in (64). It is now easy to verify that a medium defined by (63) satisfies the decomposition condition (42): any plane wave in such a medium satisfies the product condition (42). Alternatively, we can replace the solution (63) by (66), which corresponds to (67). Without losing generality, the bivectors can be assumed to satisfy A · B = 0, whence the last one of the three terms in (67) is trace free. In this case the three terms correspond to the respective axion, skewon and principal parts of the medium dyadic M. While the axion and skewon parts may be arbitrary, the principal part is restricted to the simple form defined by the two bivectors A and B. Since the principal part is not complete, i.e., it does not have an inverse, some trouble in interpreting the medium in terms of three-dimensional medium dyadics may be expected. If A = B is chosen in (66) and (67), SDCM reduces to a simplified class of media, previously called that of doubly-skew media [22].
3D expansions for SDCM
Because SDCM defines a novel class of decomposable media, it is interesting to find its definition in terms of three-dimensional medium parameters. Let us expand the trace-free dyadic B o of (64) as B o = C s + e 4 γ s + c s ε 4 − e 4 ε 4 (trC s ).
where C_s is a spatial dyadic, c_s a spatial vector and γ_s a spatial one-form. Applying the appropriate expansion rule, where I_s is the spatial unit dyadic, and further expanding the bivectors A and B in spatial parts, where the vectors a_s, b_s and the one-forms α_s, β_s are spatial, we can insert the expansions in (67) and, equating with (21), identify one set of three-dimensional medium dyadics.
Example of SDCM
As an example, let us consider an SDCM with vanishing magnetoelectric parameters, ξ_g = ζ_g = 0. This implies α = 0, C_s = 0 and a_s β_s + b_s α_s = 0 in the above 3D representations. Limiting ourselves to the most general case, i.e., that none of the quantities a_s, β_s, b_s, α_s vanishes, we must have b_s = λa_s and β_s = −λα_s for some scalar λ. Thus, in this case the corresponding Gibbsian dyadics become ǫ_g = 2λa_s a_s − e_123⌊(γ_s ∧ I_s^T), together with a companion expression for µ_g.
The medium defined by these expressions is characterized by both electric and magnetic gyrotropy. For example, the permittivity dyadic (83) can be approximately realized by magnetoplasma in a high static magnetic field for low frequencies [23].
Dispersion equations
To verify the decomposition of plane waves, let us derive the dispersion equation for a general plane wave in all of the previous medium cases. The equation for the potential one-form can be obtained by starting from (33), (34), which yield an equation for φ. This can be expressed as D(ν)|φ = 0, (85) where D(ν) ∈ E_1 E_1 is the dispersion dyadic. The axion part of M_g does not contribute, and we can omit it in all medium cases. One may note that (85) implies the relation (87), which will be applied in the sequel.
QDCM
In the QDCM case, the dispersion dyadic equals that of the generalized Q-medium. This case has been analyzed in [19], but let us retrace the steps for convenience. For simplicity, defining auxiliary vectors, expanding (90) and applying the rules valid for a normalized Q, the dispersion equation can be split in the two equations (94) and (95). Actually, (94) corresponds to the A-wave and (95) to the B-wave, as will be shown in the next section.
PDCM
Neglecting again the axion term, the PDCM coincides with the generalized P-medium, whose dispersion equation was derived in [15]. Omitting the details, which are quite similar to those of the QDCM, the dispersion equation can be split in two equations, (96) and (97), of which the latter is of the form ν|(C⌊P^{−1})|ν = 0. (97)
SDCM
For SDCM, the dispersion dyadic is obtained from the medium dyadic above. Expressing the general antisymmetric dyadic as in (64), where B_o ∈ E_1 F_1 may be any trace-free dyadic, we can write the dispersion dyadic in terms of a bivector F, which is simple since it satisfies F · F = 0. The dyadic F⌊I^T ∈ E_1 E_1 is antisymmetric. Defining for simplicity two vectors a and b, we can expand the determinant expression (103). Applying the orthogonality ν|a = ν|b = 0, we obtain an expression for F ∧ a, and similarly for F ∧ b, whence (FF)^∧_∧(ab + ba) = 2(e_N e_N⌊⌊νν)(ν′|a)(ν′|b).
The last term of (103) vanishes, so we are left with an expression that vanishes due to (87). Thus, we must have either ν′|a = 0 or ν′|b = 0 satisfied by ν′.
In conclusion, in the SDCM case the fourth-order dispersion equation splits into the two second-order equations (108). The plane-wave propagation depends on the metric dyadics B_o⌋A and B_o⌋B belonging to the space E_1 E_1. The medium exhibits no birefringence if the symmetric parts of these two dyadics are multiples of one another. When, in addition, A is a multiple of B, the medium coincides with the doubly-skew medium of [22].
Properties of the plane-wave fields
Let us now check whether the plane-wave fields satisfy the decomposition properties associated to the corresponding media.
QDCM
Applying (50), we can write the equation for the potential one-form in the QDCM case. Since the potential one-form is not unique, one can choose an additional (Lorenz) condition (110) without changing the field two-form Φ = ν ∧ φ. Choosing this condition, the solution corresponding to the dispersion condition (94) leads to A|Φ = 0, because of the definition (47). Thus, (94) corresponds to the A-wave.
Verifying that the field corresponding to the dispersion equation (95) satisfies the B-wave condition B|Φ = 0 appears more complicated. Starting from c|φ = 0 and νν||Q = 0, while assuming the condition (110), from (111) we see that the potential must be of a form parameterized by some scalar coefficient λ. The condition (110) then requires that ν satisfy ν|d = 0. Invoking (52), after a few algebraic steps we obtain a condition which, compared with (95), identifies the corresponding solution as the B-wave.
PDCM
Omitting again some details, in [15] it has been shown that the field corresponding to the dispersion equation (97) satisfies the condition A|Φ = 0, i.e., it represents an A-wave. Similarly, the solutions of the dispersion equation (96) correspond to a field condition which equals B|Φ = 0 due to the definition (55).
SDCM
For the SDCM case, the equation for the potential one-form (85), with the expressions from Section IV C inserted and the axion term omitted, yields a bilinear condition. Multiplying it by φ| yields (a|φ)(b|φ) = 0, and multiplying it by ν′| leads to a condition from which we obtain the corresponding relations. This shows that the plane waves corresponding to the two dispersion equations (108) are respectively A-waves and B-waves.
Conclusion
In this paper we have applied the four-dimensional differential-form formalism to define media in which the field two-forms can be decomposed in two parts, the A-fields and the B-fields, satisfying A|Φ = 0 and B|Φ = 0, respectively.
Appendix

Now one can show that, out of the two pairs of dyadics α, µ^{−1} and ǫ′, β, exactly one dyadic in each pair possesses a 3D inverse. To prove this, let us first consider the pair α, µ^{−1} and assume that neither of the dyadics possesses an inverse. Choosing the reciprocal bases e_i, ε_i in a suitable manner, we can expand the two dyadics in terms of one-forms κ_1, κ_2 and α_1, α_2, α_3, of which the latter three are linearly dependent. Substituting these in (123) yields a dyadic equation, (129), between the one-forms. Assuming that α_1 ∧ α_2 ≠ 0, (129) implies that κ_1 and κ_2 occupy the same subspace as α_1 and α_2 and that, furthermore, they are related to α_1 and α_2 through some scalar A. Substituting the resulting relation in (124), the resulting equation does not have any solutions, since the left-hand side has no inverse while the right-hand side has one for α ≠ 0. Thus, the assumption that neither α nor µ^{−1} has an inverse is incorrect. Releasing the constraint α_1 ∧ α_2 ≠ 0, the case µ^{−1}⌋ε_123 = εκ, e_123 ε_123⌊⌊α = eα can be shown to lead to the same result. Thus, at least one dyadic of the pair α, µ^{−1} must possess an inverse. The same conclusion is valid for the pair ǫ′, β.
To demonstrate that exactly one dyadic of each pair possesses an inverse, we notice that the left-hand sides of (123) and (126) are symmetric dyadics, whence all four terms must be antisymmetric dyadics. Because 3D antisymmetric dyadics can be expressed in terms of some vector a or one-form α, the relations (123) and (126) can be recast in the forms (135) and (136). Since antisymmetric 3D dyadics do not have an inverse, neither do the left-hand sides of (135) and (136). This concludes the proof. In conclusion, there are exactly two invertible dyadics among the four dyadics. Let us split the problem of finding solutions to (122) by considering the four possible cases separately.
Case 1: (α, β)
Assuming that α and β possess inverses, the other two dyadics can be solved from (135) and (136). Defining the dyadic X = (ε_123 e_123⌊⌊α^T)|β ∈ F_1 E_1, whose inverse exists due to the above assumption, we can express (124) as a condition, (140), for the dyadic X. Now it is easy to check that the solution of (140) must be of a uniaxial form, in which the parameter A and the scalar α of (122) are related. At this point we can construct the medium dyadic M satisfying (122) for Case 1 from (139), (137) and (138), and it can be further expressed in the compact form M = M P^{(2)}, with a suitable scalar M and P = (det X)β − A(β|α + det β ε_4)(a + Ae_4).
Media defined by medium dyadics of the form MP (2) for some dyadic P have been called P-media [15].
Case 2: (ǫ′, µ^{−1})

Defining the dyadic X = (ε_123 e_123⌊⌊µ^{−1T})|ǫ, in terms of which we can express the dyadics solved from (148), (146) and (147), and substituting these in (21), the modified medium dyadic M_g = e_N⌊M can be expressed in the compact form M_g = MQ^{(2)}. Media defined by modified medium dyadics of this form, for some dyadic Q, have been called Q-media [14].
Case 3: (α, ǫ′)

Denoting this time a suitable dyadic X, which has an inverse within the limits of Case 3, the condition (156) takes the form a ∧ X + (e_123 e_123⌊⌊X^{−1T})⌊α = αe_123⌊I_s^T.
Because of the assumption α ≠ 0, this condition leads to an impasse: the dyadic on the right-hand side has an inverse while that on the left-hand side has none. Thus, the original assumption that α and ǫ′ possess inverses is invalid.
Case 4: (β, µ^{−1})

In this case one arrives at a condition of the same form as (156), so the conclusion must be similar: the assumption that β and µ^{−1} have inverses cannot be valid for M to satisfy (122) with α ≠ 0.
In conclusion, there are only two classes of solutions to the quadratic equation (122), that of Q-media and that of P-media. | 2011-01-27T10:20:08.000Z | 2011-01-27T00:00:00.000 | {
"year": 2011,
"sha1": "303fec892b88d8a90b81804f9aed39b8c32b9613",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1101.5247",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "303fec892b88d8a90b81804f9aed39b8c32b9613",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
253266790 | pes2o/s2orc | v3-fos-license | Does the fish-infecting Trypanosoma micropteri belong to Trypanosoma carassii ?
Recently, based on a limited morphological characterisation and a partial 18S rRNA gene sequence, Jiang et al. (2019) described Trypanosoma micropteri Jiang, Lu, Du, Wang, Hu, Su et Li, 2019 as a new pathogen of farmed fish. Here we provide evidence, based on an expanded sequence dataset, morphology and experimental infections, that this trypanosome does not warrant establishment as a new species, because it is conspecific with the long-known Trypanosoma carassii Mitrophanow, 1883, a common haemoflagellate parasite of freshwater fish. The former taxon thus becomes a new junior synonym of T. carassii.
The taxonomy of fish trypanosomes has long been considered complex or even controversial, since most of the species were named mainly based on the new-host new-trypanosome paradigm (Fantham et al. 1942, Mackerras and Mackerras 1961, Lom 1979, Joshi 1982). Therefore, it was suggested a long time ago that many species of fish trypanosomes might be synonymous (Baker 1960). As a matter of fact, the fast-growing body of sequence data supports this notion.
In 2019, an outbreak of trypanosomiasis was recorded in farmed largemouth bass Micropterus salmoides (Lacépède) in southern China. The pathogen was reported as a new trypanosome species, namely Trypanosoma micropteri Jiang, Lu, Du, Wang, Hu, Su et Li, 2019, with its description based on a limited morphological characterisation and a partial 18S rRNA gene sequence (Jiang et al. 2019). However, for reasons detailed below, the establishment of this species, primarily based on morphological features, seems unsustainable. Jiang et al. (2019) performed a comparative analysis of T. micropteri with just three species of fish trypanosomes available in Gu et al. (2007) and Grybchuk-Ieremenko et al. (2014). One of the previously described species, Trypanosoma sp. pseudobagri (Gu et al. 2007), has a 99.8% sequence identity of its 18S rRNA sequence with T. micropteri. Such a very high level of sequence similarity calls for a detailed comparison of morphological features between both flagellates, yet this was not performed. Another closely related fish trypanosome, the MARV strain of Trypanosoma carassii Mitrophanow, 1883 (see Gibson et al. 2005), was also evaluated only at the level of the 18S rRNA sequence. Moreover, we have identified two likely erroneous calculations in Jiang et al. (2019): namely, the nucleus width (a mean of 1.2 µm derived from a range of 0.7 to 1.8 µm versus a mean of 1.8 µm derived from a range of 0.7 to 0.9 µm) and a reversely defined flagellar index.
A freshwater fish trypanosome isolated from M. salmoides was maintained in our laboratory (Chen et al. 2022). Tilapia juveniles were bought from the Tilapia Breeding Farm of Guangdong Province (Guangzhou, China). The fish were kept in fish tanks for two weeks before any experiment. Blood from each fish was also examined by microscopy to further confirm that they were free from any trypanosome infection. Infected blood was collected and inoculated into healthy fish through a syringe injection into the pericardial cavity. Infection was confirmed by the presence of parasitemia on day 10 post-injection. Biometric data of trypanosomes were collected using Giemsa-stained smears, and approximately 200 randomly selected flagellates were measured as described previously (Su et al. 2022).

Table 1 (partial; the column-to-value assignment of the surviving rows is not recoverable). Biometric data (centre-to-centre distances across the cell axis) in µm or ratios are provided as mean ± SD and ranges: PK, posterior end to kinetoplast; KN, kinetoplast to nucleus; NA, nucleus to anterior end; FF, free flagellum; BW, body width; NL, nucleus length; NW, nucleus width; PN, posterior end to nucleus; BL, body length; TL, total length; KI, kinetoplast index (PN/KN); NI, nucleus index (PN/NA); FI, flagellum index (FF/BL). Surviving rows: T. micropteri b, this study: 0.7 ± 0.4, 8.3 ± 1.6, 9.0 ± 1.8, 6.6 ± 1.2, 15.6 ± 2.8, 10.9 ± 1.5, 26.5 ± 3.6, 2.0 ± 0.3, 0.9 ± 0.2, 1.6 ± 0.3, 1.4 ± 0.2, 1.1 ± 0.0, 0.7 ± 0.1; T. micropteri c, Gu et al. (2007): 0.9 ± 0.1, 10.9 ± 2.3, 12.7 ± 2.4, 9.7 ± 1.3, 22.4 ± 3.2, 15.3 ± 0.9, 37.7 ± 3.9, 2.4 ± 0.2. a, n = 50 (Woo 1981); b, n = 217; c, n = 80 (Trypanosoma sp. pseudobagri of Gu et al. 2007); d, n = 100, two sets of data for NW were found (Jiang et al. 2019); e, n = 150 (Letch and Ball 1979); f, n = 132; g, n = 200. #, recalculated from the published data. * and ***, significance (p < 0.05 or p < 0.001) observed only within T. micropteri (a vs b, b vs c, b vs d and b vs f).

Significance was assessed with a Z-test at P < 0.05, using the following formula: Z = ABS(Mean A − Mean B) / SQRT(Standard error A ^ 2 + Standard error B ^ 2), P = (1 − NORMSDIST(Z)) × 2. Total DNA was extracted using the phenol-chloroform method as described elsewhere (Su et al. 2022). The full-length 18S rRNA gene was amplified using the forward (5'-GACTTTTGCTTCCTCTATTG-3') and reverse (5'-CATATGCTTGTTTCAAGGAC-3') primers. PCR reactions were conducted using the Phanta Super-Fidelity DNA Polymerase (Vazyme Biotech, Nanjing, China) according to the manufacturer's protocol. PCR cycling parameters were as follows: initial denaturation at 94 °C for 3 min, followed by 35 cycles at 95 °C for 15 s, 61 °C for 15 s and 72 °C for 2 min, and a final extension at 72 °C for 5 min. PCR amplicons were resolved in a 1% agarose gel and sequenced by Thermo Fisher Scientific, Guangzhou, China, while additional 18S rRNA gene sequences of freshwater fish trypanosomes were obtained from the GenBank database. Sequences were aligned using Clustal X (Thompson et al. 1997), using default settings and with final manual adjustments. In order to determine the evolutionary distances among the 18S rRNA genes of freshwater fish trypanosomes, we calculated sequence identities and p-distances with BLAST+ 2.8.1 and MEGA VII.
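The spreadsheet-style formula quoted above translates directly to Python; the numbers in the usage line are illustrative only.

```python
import numpy as np
from scipy.stats import norm

def z_test(mean_a, se_a, mean_b, se_b):
    """Two-sided Z-test on two means given their standard errors."""
    z = abs(mean_a - mean_b) / np.sqrt(se_a ** 2 + se_b ** 2)
    p = 2 * (1 - norm.cdf(z))        # (1 - NORMSDIST(Z)) * 2
    return z, p

# illustrative means and SE = SD / sqrt(n) for two samples (n = 217 and 80)
z, p = z_test(10.9, 1.5 / np.sqrt(217), 12.7, 2.4 / np.sqrt(80))
```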
The neighbor-joining (NJ) and maximum likelihood (ML) methods were used to build phylogenetic trees in MEGA VII (Kumar et al. 2016), with the Kimura 2-parameter model, pairwise deletion for gaps and 1,000 bootstrap replicates.
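MEGA was used for the trees in this study; purely as an illustration of the neighbor-joining step, the sketch below uses Biopython with a hypothetical alignment file. Biopython's DistanceCalculator does not offer the Kimura 2-parameter model, so the simple identity (p-distance) model stands in for it, and ML inference with bootstrapping would still require dedicated software such as MEGA:

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical input: an aligned FASTA of 18S rRNA gene sequences
alignment = AlignIO.read("18S_alignment.fasta", "fasta")

calculator = DistanceCalculator("identity")      # p-distance-style model
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)        # neighbor-joining topology

Phylo.draw_ascii(nj_tree)                        # quick text rendering
```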
It is worth mentioning that the trypanosome in question is a severe pathogen of farmed fish, and its correct classification is therefore of importance for the aquaculture industry. Hence, to shed more light on the problem at hand, we have critically reviewed the published data and further investigated the identity of the MARV strain. Since only a 1.5 kb-long region of its 18S rRNA gene is available in the GenBank database (AJ620549), we have completely re-sequenced the entire gene in question. The newly obtained full-size 2,057 nt-long 18S rRNA gene sequence of the MARV strain (OL963935) allowed us to perform a thorough phylogenetic analysis. Alignment with the original sequence of Jiang et al. (2019) revealed 99.67% identity, the only difference being a five-nucleotide deletion. A further extended alignment with other 18S rRNA sequences available in the public domain for T. carassii (Fig. 1) showed that this deletion was confined to MARV clone 11 (AJ620549) and is likely an artifact restricted to this clone. Hence, we argue that the newly obtained full-length 18S rRNA sequence is superior and should be used exclusively for further analyses. Phylogenetic analysis including this complete 18S rRNA gene sequence allowed us to identify a 99.95% sequence identity between two strains of T. carassii, namely TrCa, isolated from Carassius carassius (Linnaeus) and previously extensively used in experimental infections, and the MARV strain mentioned above (Suppl. Fig. 1) (Woo 1981, Bienek et al. 2002, Kovacevic and Belosevic 2015). Next, we compared the published data on the morphology of T. micropteri and T. carassii TrCa, revealing differences in the posterior end to kinetoplast distance, as well as in the body width and the kinetoplast index (Table 1). However, when subjected to the Z-test, these differences turned out to be statistically insignificant. This is not unexpected when one considers the previously proposed influence of the host on morphological characteristics of fish trypanosomes (Lom 1979, Woo and Black 1984) and the general morphological flexibility of members of the genus Trypanosoma Gruby, 1843 (Baker 1960).
Therefore, for comparative purposes, we isolated a trypanosome from a diseased largemouth bass specimen captured in Foshan, Guangdong Province, China, and sequenced its 18S rRNA gene, which turned out to be 100% identical to that of T. micropteri of Jiang et al. (2019). Morphological characterisation of this newly isolated trypanosome (here provisionally called T. micropteri) (Table 1) was carried out in a recently developed Nile tilapia infection model (Chen et al. 2022).
Morphological differences in the length of the free flagellum and the total cell length were within the ranges provided in previous descriptions of this species (p < 0.05) (Gu et al. 2007, Jiang et al. 2019). Consequently, the slight morphological differences between T. micropteri and T. carassii (TrCa) can be attributed to the different hosts from which they were isolated.
It is worth noting that the genetic distances between T. micropteri on one side and the T. carassii TrCa or MARV strains on the other are smaller than the distances separating different strains of T. carassii, namely TrCa and MARV, as well as EL-CP and Ts-Cc-SP, isolated from pike Esox lucius (Linnaeus) and common carp Cyprinus carpio (Linnaeus), respectively (Fig. 2; Table 2). Based on the available 18S rRNA sequences, we generated neighbor-joining and maximum likelihood phylogenetic trees of fish trypanosomes.
Since all clades contain at least one strain affiliated with T. carassii, we conclude that the analysed dataset makes this species paraphyletic. Indeed, the available data are consistent with the conclusion that T. carassii is an umbrella species that actually lumps together several distinct fish trypanosomes. Moreover, the genetic distances between T. carassii TrCa and MARV in clade C, and those between clades A and B, range from 1.9 to 2.3%, whereas the genetic distance between T. carassii TrCa/MARV and T. micropteri is only 0.9%. Smit et al. (2020) proposed a 3% sequence difference in 18S rRNA (only a 300 nt-long region covering the hypervariable V7 region was included) as a genetic distance sufficient to distinguish two different genotypes of fish trypanosomes. However, Díaz et al. (2020) used for the same purpose a 1% difference criterion (using a 1.4 kb-long region of 18S rRNA and the full-size GAPDH), arguing that the same genetic distance distinguishes strains of Trypanosoma cruzi Chagas, 1909. Therefore, we propose that T. micropteri is not a valid species but rather a strain (and a new synonym) of T. carassii. | 2022-11-04T06:18:03.887Z | 2022-10-31T00:00:00.000 | {
"year": 2022,
"sha1": "4f9d6ef98069149765c65d41bd0353dceca71f3d",
"oa_license": null,
"oa_url": "http://folia.paru.cas.cz/doi/10.14411/fp.2022.024.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "417215c08316b127017d5dab90761d8fb38dfcce",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250509923 | pes2o/s2orc | v3-fos-license | Mangifera indica (L.) tree as agroforestry component: Environmental and socio-economic roles in Abaya-Chamo catchments of the Southern Rift Valley of Ethiopia
Abstract The integration of woody perennials into the agricultural ecosystem often has diversified environmental, economic and social benefits. The Mangifera indica-based agroforestry system in the southern rift valley has not been well investigated or scientifically documented. The current study investigated the environmental and socio-economic roles of the mango-based agroforestry system in the southern rift valley around the Abaya-Chamo catchment, especially in the Gamo Zone. Three kebeles (a kebele is a small administrative unit in Ethiopia) were selected purposively based on production potential, and 151 household heads were interviewed by questionnaire from May 2021 to July 2021. Household socio-economic data and the environmental and socio-economic roles reported by both mango-based and non-mango-based agroforestry practitioners were collected. Key informant interviews (KII) and focus group discussions (FGD) were also held. The collected data were analyzed by simple descriptive statistics in SPSS version 24 software. The interviewed respondents differed significantly (P < 0.05) in terms of agroforestry practices. Mango-based agroforestry practitioners reported soil fertility enhancement, reduction of crop damage by wind and amelioration of the microclimate by shading for crops and livestock as environmental benefits, and they benefited more in these respects than non-mango-based agroforestry practitioners (P < 0.05). Food/fruits, timber, traditional medicine, fuelwood, poles and fodder were the socio-economic roles of agroforestry; mango-based agroforestry practitioners benefited from poles and food more than non-practitioners (P < 0.05). This indicates that the mango-based agroforestry of the Abaya-Chamo catchments of the southern Rift Valley of Ethiopia offers environmental and socio-economic advantages to society. Long-term experimental research will help to optimize strategies for the management and sustainable utilization of this climate-smart agricultural system.
Background and justification
Agroforestry is a planned combination of trees and crops, with or without livestock, on the same land, and it is increasingly recognized as a sustainable system for reconciling agricultural production (Catacutan et al., 2017; Hassan et al., 2016; Nair et al., 2021). In this system, there are ecological and economic interactions between the components (Alao et al., 2016; Atangana et al., 2014). Such systems make a significant contribution toward reducing poverty and resource degradation in Africa. For instance, agroforestry technologies such as fruit trees can provide a more diverse farm income and reduce food insecurity (Thangata et al., 2002).
Fruit trees, grass and crops in agroforestry systems had higher productivity, higher profitability and earlier returns on investment than sole-crop fruit systems, but also higher initial investment costs (Do et al., 2020). A majority of fruit-based systems are found in the tropics and subtropics, where fruit trees constitute an important component of agroforestry systems (Lauri et al., 2020). In Ethiopia, the fruit tree-based agroforestry system plays a great role in livelihood improvement, providing multiple contributions to household income and supplementary food for smallholder farmers (Adane et al., 2019).
Farmers in different parts of the tropics, including Ethiopia, traditionally integrate fruit trees such as mango with other components into the so-called mango-based agroforestry system. This kind of system (e.g., alley cropping) is an important component of agroforestry and is widely practised in many parts of the world (Mishra et al., 2020; Rana, 2022).
The southern rift valley around the Lake Abaya-Chamo catchment is known for fruit production, especially Mangifera indica-based agroforestry (Gochera et al., 2021). Mango is claimed to be the most important fruit of the tropics and has been touted as the "king of all fruits"; the fruit contains almost all the known vitamins and many essential minerals. However, the dominant agroforestry systems in southern Ethiopia are enset-based, enset-coffee-based, fruit-coffee-based and chat-based (Mesele, 2013).
Even though mango is the most widely produced tropical fruit on farmland in the southern rift valley around the Abaya-Chamo catchment, especially in the Gamo Zone, the fruit-based agroforestry system there has not been sufficiently investigated. The current study therefore investigated the environmental and socio-economic roles of the mango-based agroforestry system in the southern rift valley around the Abaya-Chamo catchment, especially in the Gamo Zone.
Description of the study area
The study was carried out in the southern rift valley around the Abaya-Chamo catchment, especially in the Gamo zone in the Southern Nations, Nationalities, and Peoples' Region of Ethiopia. According to the Gamo zone agriculture and natural resource development office (2021), the top mango-fruit-producing potential areas in the Gamo zone were considered, namely Chano Mile, Lante and Kola Shelle. These potential areas are shown in the map below (Figure 1). The areas lie within the lowland agro-ecological zone at altitudes ranging between 1200 m and 1285 m a.s.l. The maximum mean annual rainfall of Chano Mile, Lante and Kola Shelle is 1000 mm and the minimum mean annual rainfall is 300 mm. The maximum and minimum mean annual temperatures are 38 °C and 14 °C, respectively (Arba Minch meteorology service, 2021).
Sampling techniques
Three kebeles were selected purposively based on mango fruit production potential in the Gamo zone (Gochera et al., 2021), namely Lante, Chano Mile and Kola Shelle. The sample size was determined using Cochran's (1977) formula for estimating a proportion with a 95% confidence level and an 8% margin of error, n = z²p(1 - p)/e². Accordingly, 151 sample households were selected. During the survey, mango-based agroforestry practitioners and non-mango-based farm practitioners were identified together with district and kebele agricultural office experts. In addition, key informant interviews and focus group discussions were held. The KIs were model farmers who had lived in the area continuously for at least 30 years, as recommended by the kebele agricultural office.
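As a quick check of the sample size, the calculation below applies Cochran's formula with z = 1.96 for 95% confidence and e = 0.08; the proportion p = 0.5 (maximum variability) is an assumption, since the text does not state it:

```python
from math import ceil

def cochran_n(z: float = 1.96, p: float = 0.5, e: float = 0.08) -> int:
    """Cochran's (1977) sample size for a proportion: n = z^2 * p * (1 - p) / e^2."""
    return ceil(z**2 * p * (1.0 - p) / e**2)

print(cochran_n())  # -> 151, matching the number of households surveyed
```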
Data collection method
Both primary and secondary data were collected and used. Primary data were collected by structured and semi-structured questionnaires, key informant interviews, focus group discussions (FGD) and field observation. Secondary data were collected from different published and unpublished sources. The socio-economic and demographic characteristics of households (HHs), namely name, age, family size, level of education attained, land size, type of agroforestry practiced and the role of the practiced agroforestry, were gathered from the surveyed HHs with the KoboCollect Android application on a Tecno Camon 12 smartphone (Lakshminarasimhappa, 2022). Household heads were interviewed on both environmental and socio-economic parameters pre-developed by the research team during KIIs and FGDs.
Data analysis
The quantitative data were analyzed using descriptive statistics. The results were presented using tables and figures. The qualitative data collected were narrated, summarized and used to substantiate and complement the quantitative data. Statistical Package for the Social Sciences (SPSS) software version 24 was used to analyze the data. The role of MBAFs among practitioners and non-practitioners was subjected to one-way ANOVA, and mean differences were considered significant at P < 0.05. Kruskal-Wallis χ² tests were used to determine differences in respondents' answers to each variable and between MBAF and NMBAF practitioners.
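The same tests are available outside SPSS; the sketch below runs a one-way ANOVA and a Kruskal-Wallis test with SciPy on invented placeholder scores (not the survey data):

```python
import numpy as np
from scipy.stats import f_oneway, kruskal

# Hypothetical ordinal response scores (e.g., perceived damage level, 0-4)
mbaf_scores = np.array([0, 1, 1, 2, 0, 1, 3, 0])   # MBAF practitioners
nmbaf_scores = np.array([0, 0, 1, 0, 2, 0, 0, 1])  # NMBAF practitioners

f_stat, p_anova = f_oneway(mbaf_scores, nmbaf_scores)  # one-way ANOVA
h_stat, p_kw = kruskal(mbaf_scores, nmbaf_scores)      # Kruskal-Wallis test

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.3f}")
```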
Demographic characteristics of household in the study area
Household demographic features can influence species selection and the ecological and socio-economic roles of the agroforestry system. Out of 151 interviewed household heads, 121 were mango-based agroforestry (MBAF) practitioners and the remaining 30 were non-mango-based agroforestry (NMBAF) practitioners. Gender distribution indicates that 88% of households were male-headed whereas the remaining 12% were female-headed.
The ages of MBAF practitioners indicated that 47.1% were in the range of 41-50 years, 27.3% were in the range of 31-40 years and 19.8% were older than 50. The difference in age distribution was statistically significant (p < 0.05) for the age groups from 31 to 50 (Table 1).
The education status of the respondents indicates that 44.4% had not received formal education, while the rest had education levels from elementary to degree level formal education. The education-level distribution of respondents was statistically significant (p < 0.05; Table 1).
The major livelihood activity for 91.4% of the respondents was reported as farming, and 1.3% were civil servants. The livelihood activity difference between practitioners, and between civil servants and others, was statistically significant (p < 0.05). The difference in marital status between practitioners and non-practitioners was also statistically significant (p < 0.05). Of the interviewed respondents, 88.1% were married and the rest were single or widowed.
The household distribution of respondents was also significantly different between practitioners and non-practitioners (p < 0.05; Table 1).
Environmental role of mango-based agroforestry systems (MBAFs)
Agroforestry plays an important role in the environment, mainly through maintaining the ecosystem (Table 2). With significant differences between the two land-use types (p < 0.05), the sampled respondents were asked about the parameters that indicate the environmental benefits of agroforestry: soil fertility status, crop damage by wind, shade at the farm in hot weather, soil erosion risk and level of pest damage were considered in this study. Accordingly, about 34%, 27.3%, 26%, 12% and 0.7% of the respondents reported no damage, low damage, high damage, medium damage and very high damage, respectively. This indicates that the damage level ranges from medium to none. Regarding damage to crops by wind among MBAF and NMBAF practitioners, 25.3%, 25.3%, 18%, 10.7% and 0.7% of MBAF respondents reported that the level of damage to crops by wind on the farm was high, low, none, medium and very high, respectively. In contrast, 16%, 2%, 1.3% and 0.7% of NMBAF practitioners reported that the level of crop damage by wind was none, low, medium and high, respectively. The responses of the two land-use-type practitioners were significantly different (P < 0.05).
Researchers have documented the importance of agroforestry practices against crop damage from strong wind. For example, windbreaks, a major component of agroforestry ecosystems, serve to reduce wind erosion and protect crops from wind damage (Yang et al., 2021). Integrating woody perennials in the agricultural system improves the efficiency of ecological and ecosystem services, increasing productivity and decreasing crop damage organically, resulting in higher yields (Jo & Park, 2017; Mume & Workalemahu, 2021; Stigter et al., 2002). The role of agroforestry in wind damage control is due to the presence of suitable woody species in agroforestry systems, and the difference in wind damage between the two land-use-type practitioners could be due to the difference in the woody component.
The role of agroforestry in soil fertility enhancement has been reported by different scholars, and this research is supported by reports from different corners of the world. For instance, Rathore et al. (2013) recommended a mango plantation age of 15 years for multiple outputs and good economic viability without impairing site fertility. Kenfack Essougong et al. (2020) reported the importance of cocoa agroforestry systems for soil fertility management in Cameroon. Agroforestry increases the soil fertility status of an area (Arage, 2021). The soil-fertility-enhancing role reported for agroforestry might be due to the presence of agroforestry species that help improve soil fertility and their proximity to farmers for the management of the system in MBAF. The respondents with mango-based agroforestry perceived that their farming system has potential for soil fertility enhancement. This could be related to the lower harvest pressure on the system and moderate returns of organic matter to the soil. Respondents were also asked about the soil erosion risk level of their farm. Accordingly, 75.5%, 11.9%, 2.0%, 8.6% and 2.0% of all respondents in the area rated it as very low, low, moderate, high and very high, respectively; the corresponding shares were 60.3%, 7.9%, 2.0%, 7.9% and 2.0% among MBAF practitioners, and 15.2%, 4.0%, 0.0%, 0.7% and 0.0% among NMBAF practitioners. The responses of the two land-use-type practitioners were significantly different (P < 0.05; Table 2).
Researchers have reported that woody perennials have the potential to reduce soil erosion in farming systems. The erosion-reduction role of the agroforestry system of coffee (Coffea arabica) and mixed shade trees (Inga spp. and Musa spp.) in Northern Nicaragua has been reported (Blanco Sepúlveda & Aguilar Carrillo, 2015). Agroforestry generally has the potential to conserve soil and water on the farm and nearby (Lundgren & Nair, 1985; Mou, 2011). The role of agroforestry in controlling soil erosion risk, as viewed by respondents, might be due to the roots of woody species in agroforestry, which have the potential to control erosion risk in agroforestry settings. Another study demonstrated the upscaling of sweet-orange-based agroforestry for the restoration of degraded shifting-cultivation lands in North-East India for environmental sustainability (Sahoo et al., 2021). Generally, the role of MBAF in converting degraded land into productive land along with environmental security has been modeled and confirmed (Rathore et al., 2021). Hence, all the considered parameters except the risk of soil erosion differed significantly between MBAF and NMBAF practitioners (P < 0.05; Table 2).
Socio-economic role of mango-based agroforestry systems (MBAFs)
Agroforestry practice in the study area provides many socio-economic benefits to the community, as indicated below in Figure 2. About 90.1% of the interviewed respondents stated that they obtained food, especially in the form of different fruits like mango and citrus species, and 60.96% of the respondents stated that they obtained traditional medicine, mainly for humans but also for cattle. Forty-five percent of respondents reported obtaining fuelwood from their farms. Other socio-economic roles (timber, poles and fodder) were reported by 11.96%, 27.2% and 12.6% of the respondents, respectively. With respect to agroforestry type, the respondents differed significantly only for food and poles (P < 0.05; Table 3). Mango-based agroforestry (MBAF) practitioners obtain significantly more food and poles for construction than non-mango-based agroforestry (NMBAF) practitioners. The current research is in line with different reports about the importance of fruit-tree-based agroforestry. For example, farmers' livelihoods improved enormously through practicing agroforestry as they had more access to food, fodder and fuelwood, which is reflected in greater access to livelihood capital (Hanif et al., 2018). By the proper implementation of agroforestry practices with a proper tree-crop combination, farmers can improve their livelihood and socio-economic status (Ibrahim et al., 2011). Mango-based agroforestry in India also generates maximum employment for society (Mishra et al., 2020). The significant difference between agroforestry practitioners for food could be due to the fruit-based nature of the agroforestry and the farmers' focus on fruit-based perennials. In the case of poles for construction, mango-based agroforestry practitioners have experience integrating trees for poles owing to the loss of woodlands to bushland and agriculture.
CONCLUSION
Agroforestry plays important environmental and socio-economic roles, mainly through maintaining the ecosystem. Agroforestry practitioners in the study area confirmed that mango-based agroforestry systems (MBAFs) were used for soil fertility enhancement, the reduction of crop damage by wind, and the amelioration of the microclimate by shading crops and livestock at the farm in hot weather. These roles of agroforestry practice differed significantly between practitioner groups (P < 0.05).
In addition to environmental functions, respondents confirmed that Mangifera indica integrated in their agricultural landscape has been used for socio-economic roles such as food/fruits, timber, traditional medicine, fuelwood, poles and fodder. Mangifera-based agroforestry practitioners obtained food and poles for construction in significantly different amounts than non-Mangifera-based agroforestry practitioners.
In general, the mango-based agroforestry of the Abaya-Chamo catchments of the southern Rift Valley of Ethiopia has the potential for ecological and socio-economic merits for society. Further research on challenges and management practices, together with long-term experimental research on integrating mango-based agroforestry orchards with annual crops, is needed to optimize strategies for the sustainability of this green economy. | 2022-07-14T18:26:03.337Z | 2022-07-12T00:00:00.000 | {
"year": 2022,
"sha1": "b15f573acf2ad1e6d904f2577d5fe2c87f8bd24f",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/23311932.2022.2098587?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "5c665e0076d49af60fbd91cd555f01abb2cf64f9",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences",
"Economics"
],
"extfieldsofstudy": []
} |
252846352 | pes2o/s2orc | v3-fos-license | Dictionary learning: a novel approach to detecting binary black holes in the presence of Galactic noise with LISA
The noise produced by the inspiral of millions of white dwarf binaries in the Milky Way may pose a threat to one of the main goals of the space-based LISA mission: the detection of massive black hole binary mergers. We present a novel study for reconstruction of merger waveforms in the presence of Galactic confusion noise using dictionary learning. We discuss the limitations of untangling signals from binaries with total mass from $10^2 M_{\odot}$ to $10^4 M_{\odot}$. Our method proves extremely successful for binaries with total mass greater than $\sim 3\times 10^3$ $ M_{\odot}$ up to redshift 3 in conservative scenarios, and up to redshift 7.5 in optimistic scenarios. In addition, consistently good waveform reconstruction of merger events is found if the signal-to-noise ratio is approximately 5 or greater.
Introduction.-The LIGO/Virgo interferometer network [1,2] has already detected gravitational waves (GWs) from almost one hundred compact binary coalescence (CBC) events [3][4][5]. These detections populate GW transient catalogues and reveal information about properties of the underlying black hole and neutron star populations [6]. The high frequency range probed by the current terrestrial detectors, at around (10 − 1000) Hz, is sensitive to stellar-mass binaries that mostly lie below the pair-instability supernova mass gap (GW190521 is the only exceptional event where the mass of the primary black hole is unambiguously in the mass gap [7]). Mergers of supermassive black holes, however, are expected to emit low-frequency (mHz) gravitational waves.
The space-based GW interferometer LISA, anticipated to be launched in the mid-2030s, will be sensitive to GWs in the mHz range [8]. Other GW sources will be detectable at these frequencies: Galactic white dwarf binaries, inspiraling binaries with extreme mass-ratio, or colliding true vacuum bubbles formed at the electroweak phase transition [9][10][11]. The tens of millions of double white dwarf binaries in the Galaxy could have an impact on detectability of massive black hole binaries coalescing in the LISA frequency band [12]. LISA will observe continuous GWs from inspiraling white dwarfs, and although it may be sensitive to individual sources, most will remain unresolved; these are referred to as Galactic confusion noise [13][14][15]. It has been shown that modulation of the Galactic foreground from the LISA orbit could lead to a reduction in signal-to-noise ratio (SNR) of other GW sources by a factor of 4 [16]. A LISA Data Challenge (https://lisa-ldc.lal.in2p3.fr) is underway to study the impact of overlapping Galactic sources on the sensitivity to massive black hole mergers [17], and attempts to separate the foreground from other GW sources have been conducted [18][19][20][21][22].
In this Letter we apply a dictionary learning method to separate CBCs from the Galactic foreground in the LISA frequency band. Such a method has been successfully applied in GW data analysis to classify and denoise Advanced LIGO's "blip" noise transients [23] and effectively improve the performance of the detector. More precisely, we assess the suitability of the dictionary learning method for the classification and reconstruction of massive binary black hole merger signals in the presence of Galactic noise.
Previous studies focused on the inspiral of loud CBC sources and demonstrated that the SNR accumulated over time is sufficiently large to overcome the noise from Galactic binaries [24]. In other literature, the detectability of CBCs was investigated for equal-mass and non-spinning binaries [25][26][27], confirming that the largest SNR is expected from binaries with combined mass ∼ (10^5 − 10^6) M_⊙. In particular, Fig. 3 in [27] presents two mass ranges with low SNR that could be affected by the Galactic foreground, namely (10^2 − 10^4) M_⊙ and (10^7 − 10^9) M_⊙.
Here we consider all of these mass ranges, along with varying spins and redshifts, and we study their waveforms around coalescence time. The dictionary learning method reconstructs CBC signals with ease in the trivial case where the CBCs are above the Galactic noise, i.e. for the (10^5 − 10^6) M_⊙ mass range. We find the dictionary learning method to be too computationally expensive for very heavy mergers in the range (10^7 − 10^9) M_⊙. However, our method succeeds in separating low-SNR binaries in the range (10^2 − 10^4) M_⊙ from the Galactic noise. Hence, the dictionary learning method could significantly assist the detection of this prime LISA source [28].
Dictionary learning.-Any CBC signal in the LISA band will be overlaid with continuous waves from the inspiral of double white dwarfs. Therefore, we can model the detector strain, y(t), as a superposition of the CBC signal u(t) and the Galactic confusion noise n(t):

y(t) = u(t) + n(t). (1)

We express the loss function as

J(u) = ||y − u||²_{L₂} + λ R(u), (2)

and search for a solution u_λ that minimises J(u), where ||·||_{L₂} is the L₂ norm [29,30]. The first term in the loss function, often referred to as the error term, measures how well the solution fits the data, while the regularisation term R(u) captures any imposed constraints. The regularisation parameter λ tunes the weight of the regularisation term relative to the error term; it is a hyperparameter of the optimisation process. The goal of the dictionary learning method [31] is to find the sparse vector α that reconstructs the true signal u as a linear combination of columns of a dictionary D, with D a matrix of prototype signals (atoms) trained to reconstruct a given set of signals, which for our study is CBCs. Sparsity of the vector α is imposed via the regularisation term R(u) = ||α||_{L₁}, using the L₁ norm. Therefore, the constrained variational problem in (2) reads

α_λ = argmin_α { ||y − Dα||²_{L₂} + λ ||α||_{L₁} }, (3)

and is called "basis pursuit" [32] or "least absolute shrinkage and selection operator" (LASSO) [33]. The basis pursuit can be improved significantly if, instead of using a predefined dictionary, we apply a learning process where the dictionary is trained to fit a given set of signals. The procedure starts by selecting templates of CBC waveforms and whitening the data. The waveforms are aligned at the strain maximum and divided into patches, with the number of patches (p) much larger than the length of each patch (d). To train the dictionary we solve (4), considering both the sparse vector α and the dictionary D as variables:

min_{D, {α_i}} Σ_i ||x_i − D α_i||²_{L₂} + λ ||α_i||_{L₁}, (4)

with x_i denoting the i-th training patch. This problem is not jointly convex unless the variables are considered separately, as outlined in [34].
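As a concrete illustration of Eqs. (3) and (4), the sketch below uses scikit-learn's dictionary-learning and sparse-coding routines; the training matrix is random stand-in data rather than whitened CBC patches, and the sizes follow the choices quoted later in the Letter (d = 4, p = 3d/2 = 6 atoms):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))   # stand-in for 100 training patches of length d = 4

# Learn the dictionary D by alternating minimisation of Eq. (4);
# `alpha` plays the role of the regularisation parameter lambda
learner = DictionaryLearning(n_components=6, alpha=1e-3,
                             transform_algorithm="lasso_lars", random_state=0)
learner.fit(X)
D = learner.components_             # atoms as rows, shape (6, 4)

# Basis pursuit / LASSO of Eq. (3) on a noisy test patch
y = X[:1] + 0.1 * rng.standard_normal((1, 4))
coder = SparseCoder(dictionary=D, transform_algorithm="lasso_lars",
                    transform_alpha=1e-3)
alpha = coder.transform(y)          # sparse coefficient vector
u = alpha @ D                       # reconstructed patch
```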
In our study we create training signals that contain CBC waveforms only and no noise. The dictionary created is then tested on signals that include new CBC waveforms combined with Galactic noise. We briefly describe the simulated data below. CBCs in the mass range (10^7 − 10^9) M_⊙ have lower frequencies, making it difficult for the dictionary learning to reconstruct their sinusoidal behavior. We thus study reconstruction capabilities of binary black holes with total mass ranging from (10^2 − 10^4) M_⊙. The dictionary is trained on a set of 100 noiseless CBC signals, simulated over one day with cadence Δt = 2 s. Table I lists the relevant parameters of the IMRPhenomD waveform and the corresponding ranges of values we choose for the CBC sources. We simulate the data by drawing randomly from the probability distribution of the parameters. The redshift for all sources is fixed to z = 2, since changing the redshift leads to a simple rescaling of the amplitude that has no impact on our whitened data in the training set. Note that the same does not hold for the testing data, since changing the redshift would change the relative amplitude of the CBCs to the Galactic foreground. Consider two white dwarfs of mass M₁ and M₂ on a quasi-circular orbit with inclination ι at a distance R. They emit GWs with amplitude [36]

A₊ = (4/R) (G M_c / c²)^{5/3} (π f_GW / c)^{2/3} (1 + cos²ι)/2, (5)
A_× = (4/R) (G M_c / c²)^{5/3} (π f_GW / c)^{2/3} cos ι, (6)

for the + and × polarisations, respectively. The chirp mass of the binary is a function of the progenitors' masses, M_c = (M₁M₂)^{3/5}/(M₁ + M₂)^{1/5}, and the GW frequency is twice the orbital frequency, f_GW = 2f_orb. The resulting plane wave has a slight frequency shift over time, and for each polarisation reads

h₊(t) = A₊ cos[2π(f_GW t + (1/2) ḟ_GW t²) + φ₀], (7)
h_×(t) = A_× sin[2π(f_GW t + (1/2) ḟ_GW t²) + φ₀], (8)

where φ₀ stands for the initial phase and ḟ_GW for the GW frequency time derivative,

ḟ_GW = (96/5) π^{8/3} (G M_c / c³)^{5/3} f_GW^{11/3}. (9)

To simulate the LISA Galactic foreground we sum over the GW signals from the white dwarf binaries in our galaxy:

h(t) = Σ_j [F₊ h₊^{(j)}(t) + F_× h_×^{(j)}(t)], (10)

where F_{+,×} stands for the detector response function [37]. The mass, location, and orbital frequency of nearly 5 million white dwarf binaries in the Milky Way are taken from [38]. We present the resulting Galactic foreground in Fig. 1. The orbit of LISA around the Sun introduces a modulation in the Galactic noise (10), with maximum value when the normal of the LISA constellation plane is closest to the binary location, as expected since white dwarfs are supposed to cluster near the Galactic center. In our analysis, we will first consider the Galactic foreground at a time of the year when it is maximum. We subsequently consider testing signals that combine the Galactic foreground and a CBC signal.
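A toy version of the foreground sum of Eq. (10) is sketched below in NumPy; the binary parameters are random placeholders rather than the catalogue of [38], and the time-dependent response functions F_{+,×} are replaced by a constant:

```python
import numpy as np

def wd_signal(t, f_gw, fdot, a_plus, a_cross, phi0):
    """Slowly chirping white-dwarf waveform, Eqs. (7)-(8), both polarisations."""
    phase = 2.0 * np.pi * (f_gw * t + 0.5 * fdot * t**2) + phi0
    return a_plus * np.cos(phase), a_cross * np.sin(phase)

rng = np.random.default_rng(1)
t = np.arange(0.0, 86400.0, 2.0)               # one day at cadence 2 s
h = np.zeros_like(t)
for _ in range(2000):                          # a few thousand toy binaries
    f = 10.0 ** rng.uniform(-4, -2)            # mHz-band frequency [Hz]
    fdot = 1e-17 * (f / 1e-3) ** (11.0 / 3.0)  # crude Eq. (9)-like scaling
    hp, hx = wd_signal(t, f, fdot,
                       1e-22 * rng.random(), 1e-22 * rng.random(),
                       rng.uniform(0.0, 2.0 * np.pi))
    h += 0.5 * (hp + hx)                       # constant stand-in for F_{+,x}
```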
Results.-The hyperparameters of dictionary learning, namely the regularisation parameter λ, the atom (and patch) length d and the number of patches p, have an impact on the quality of the signal reconstruction. We fix p = 3d/2 to ensure a complete dictionary (one where the number of atoms is greater than the atom length) and choose d ∈ [2², 2⁷]. Independently of our choice of d, we find the optimal regularisation parameter to lie in the range λ_opt ∈ [10⁻³, 10⁻²] (see Supplemental Material for a detailed study). For the remainder of the analysis we fix λ = 10⁻³, as little quantitative difference in our results is found with the choice λ = 10⁻².
To find the best dictionary size, for each reconstruction we calculate the so-called overlap between the injected CBC waveform h_i(f) and the recovered waveform h_r(f),

O(h_i, h_r) = ⟨h_i|h_r⟩ / √(⟨h_i|h_i⟩⟨h_r|h_r⟩), with ⟨a|b⟩ = 4 Re ∫ a(f) b*(f) / S_n(f) df,

where S_n(f) is the one-sided noise power spectral density. The overlap O can range between −1 and 1, with 1 reflecting perfectly matched signals, and −1 implying perfect anti-correlation. The overlap is widely used in the GW community for identifying transient CBC signals through matched filtering using waveform template banks [39][40][41][42]. We track this metric across the testing dataset and choose the optimal atom length d_opt that maximises it, namely d_opt = 4 (see Supplemental Material).
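A minimal frequency-domain implementation of this overlap is sketched below; the PSD and waveform arrays are toy placeholders standing in for the LISA noise curve and the FFTs of the injected and reconstructed strains:

```python
import numpy as np

def overlap(h_i, h_r, psd, df):
    """O = <h_i|h_r> / sqrt(<h_i|h_i><h_r|h_r>), with the noise-weighted
    inner product <a|b> = 4 Re sum[a(f) conj(b(f)) / S_n(f)] df."""
    def inner(a, b):
        return 4.0 * df * np.real(np.sum(a * np.conj(b) / psd))
    return inner(h_i, h_r) / np.sqrt(inner(h_i, h_i) * inner(h_r, h_r))

f = np.linspace(1e-4, 1e-1, 1000)
df = f[1] - f[0]
psd = 1e-40 * (1.0 + (1e-3 / f) ** 4)           # toy LISA-like noise PSD
h = f ** (-7.0 / 6.0) * np.exp(2j * np.pi * f * 1e3)
print(overlap(h, 0.9 * h, psd, df))             # -> 1.0 (overall scale drops out)
```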
To have a metric for error, we also calculate the overlap between the recovered CBC waveform and the present Galactic foreground n, which we denote as O(h r , n). In addition, we define the overlap difference ∆O = O(h r , h i ) − O(h r , n) to evaluate how much more the reconstructed signal has in common with the injected signal than with Galactic foreground.
We now study how well the CBCs can be reconstructed using a dictionary with d = 4 and λ = 10⁻³. Specifically, we create two sets of 50 signals with CBCs at redshift z = 1 and z = 10, and investigate how their reconstruction varies with SNR. From Fig. 2 we see that the overlap between the reconstructed signal and the injected CBC waveform (solid circles) increases with SNR, while the overlap between the reconstructed signal and the noise (crosses) decreases with SNR, as expected. Interestingly, the noise overlap is approximately constant until SNR ≈ 10, where it starts to decrease, while the reconstruction starts to improve significantly, with O(h_r, h_i) = 1 for some of the z = 1 waveforms. Note that the z = 1 and z = 10 datasets do not differ greatly in overlap, and when they do it is between sources of very different mass. Therefore, we turn to study how the overlap changes as a function of both redshift and total mass of the CBC. We begin by fixing the mass ratio to 1 and both black holes' spins to 1, for simplicity, and we create a dataset of 400 CBC events with uniform spacing 1 ≤ z ≤ 20 and log-uniform spacing for the total mass of the binary, 10^2 M_⊙ ≤ M_tot ≤ 10^4 M_⊙. Each event is overlaid with Galactic foreground at a yearly modulation maximum and minimum (see Fig. 1), and reconstructed. The resulting contours of signal overlap O(h_r, h_i) = 0.5, 0.9, overlap difference ∆O = 0, 0.75 and SNR = 5, 15, 25 are plotted in Fig. 3. Although increasing SNR generally improves reconstruction capabilities, reconstruction success is more dependent on the CBC's total mass and redshift.
In strong (maximum) Galactic noise scenarios, binaries with total mass greater than 1330 M_⊙ can be reconstructed with O(h_r, h_i) > 0.5 over the entire redshift range. Reconstructions with O(h_r, h_i) > 0.5 in weak (minimum) Galactic foreground scenarios can be achieved for binaries with total mass greater than 355 M_⊙. Extremely good signal reconstruction, with overlap greater than 0.9, can be achieved for sources with total mass greater than 1350 M_⊙ up to redshift 3 in the pessimistic case, and up to redshifts as large as 7.5 in the optimistic case. From Fig. 3 one can also track how the overlap difference ∆O varies with binary mass and redshift. All sources that lie in the parameter space to the right of the ∆O = 0 contour lead to reconstructed signals that are more similar to the true, injected signal than to the Galactic noise. In the pessimistic case this is true for total masses greater than 1000 M_⊙, and in the optimistic case for total masses as small as 315 M_⊙.
Conclusions.-Gravitational-wave signals from the inspiral of white dwarf binaries in the Galaxy must be considered when studying the detection of CBC signals with LISA [12]. In this work we have modelled this Galactic foreground, or confusion noise, and studied its impact on the reconstruction of massive black hole binary merger signals using an approach based on learned dictionaries. We have found dictionary learning to be a promising technique for the detection of such signals in the presence of the Galactic foreground. Reconstructing the CBC waveforms with dictionary learning can be optimised with atom length d = 4 and regularisation parameter λ ∈ [10⁻³, 10⁻²]. The threshold overlap between the injected and the reconstructed signal, typically chosen to be O(h_r, h_i) = 0.5, can be achieved for binaries with mass M_tot > 1330 M_⊙ in strong Galactic foreground, and with mass M_tot > 355 M_⊙ in weak Galactic foreground, at all redshifts. For CBCs with total mass M_tot > 3150 M_⊙, the reconstructed waveform and the true waveform overlap greatly, with O(h_r, h_i) > 0.9, up to redshift z = 3 and z = 7.5 in the case of strong and weak Galactic foreground, respectively. For all tested signals, we calculate the overlap between the reconstructed CBC waveform and the Galactic noise, and its difference from the overlap between the reconstructed and injected CBC waveforms. We conclude that the reconstructed signal overlaps more with the true CBC signal than with the noise for binaries with M_tot > 1000 M_⊙.
Further assessment of the dictionary-learning approach presented in this Letter to assist detection of other types of GW sources in the LISA frequency band will be reported elsewhere. Those investigations include the analysis of extreme mass ratio inspirals and the case of overlapping CBC sources. We acknowledge computational resources provided by the LISA Data Challenge working group in the LISA consortium. The software packages used in this study are matplotlib [43], numpy [44], and the MATLAB Signal Processing Toolbox [45]. Optimal regularisation parameter.-In theory there exists an optimal regularisation parameter λ_opt such that we retrieve the best reconstruction of a CBC signal. For each signal in the testing dataset, we vary λ ∈ [10⁻⁸, 10⁻¹] and reconstruct the CBC waveform. The calculated overlap between the reconstructed signal and the injected CBC, O, can be seen in Fig. 1. For all reconstructions with λ ≲ 10⁻³, O plateaus. Some cases "dipped" to a minimum O before dramatically increasing as λ increased. Regardless of the O behavior each CBC demonstrated, the optimal regularisation parameter λ_opt took on a value between 10⁻³ and 10⁻². This behavior was consistent over differing atom lengths d and Galactic noise strengths.
Optimal atom length.-The length of the columns of the dictionary significantly impacts the quality of reconstruction. We fix λ = 10⁻³ and vary the dictionary size d ∈ [2², 2⁷] to find the value of the atom length that results in the best reconstruction of the CBC signals. In Table I we report the change in the sum of the overlaps O for all 50 individual testing samples as we alter the atom length. With this we identify the optimal atom length to be d_opt = 4. | 2022-10-13T01:15:58.107Z | 2022-10-12T00:00:00.000 | {
"year": 2022,
"sha1": "ae1f90322de2847c97288bca6c79058c299f4633",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "89aa3c65d7e09982b4b7f03a64e5e250b8109b50",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
53594857 | pes2o/s2orc | v3-fos-license | Decoding quantum criticalities from fermionic/parafermionic topological states
Under an appropriate symmetric bulk bipartition of a one-dimensional symmetry-protected topological phase with the Affleck-Kennedy-Lieb-Tasaki matrix product state wave function for odd-integer spin chains, a bulk critical entanglement spectrum can be obtained, describing the excitation spectrum of the critical point separating the topological phase from the trivial phase with the same symmetry. Such a critical point is beyond the standard Landau-Ginzburg-Wilson paradigm for symmetry-breaking phase transitions. Recently, the framework of matrix product states for topological phases with Majorana fermions/parafermions has been established. Here we first generalize these fixed-point matrix product states with zero correlation length to more generic ground-state wave functions with a finite correlation length for general one-dimensional interacting Majorana fermion/parafermion systems. Then we employ the previous method to decode quantum criticality from the interacting Majorana fermion/parafermion matrix product states. The obtained quantum critical spectra are described by conformal field theories with central charge $c\leq 1$, characterizing the quantum critical theories separating the fermionic/parafermionic topological phases from the trivial phases with the same symmetry.
I. INTRODUCTION
Since the discovery of topological phases of matter, they have drawn more and more attention. One of the most prominent signatures of topological phases of matter is the bulk-edge correspondence. For example, the topological quantum field theory describing the bulk of the fractional quantum Hall states is in one-to-one correspondence with the conformal field theory characterizing its edge excitations [1]. As a manifestation of this correspondence, Li and Haldane have shown in a fractional quantum Hall system that the entanglement spectrum of the bulk describes the effective low-energy spectrum of the edge physics [2], which was later proved by Qi et al. in a general quantum Hall system [3]. In this regard, the concept of entanglement introduced from quantum information provides a quite useful tool to characterize topological phases in condensed matter systems.
A natural question that follows is: what more can we learn from the entanglement of the bulk of a topological phase? Is it possible to encapsulate the information of the quantum critical points adjacent to this topological phase? At first glance, this may sound crazy because there could be many paths driving one phase to another, undergoing different quantum critical points. However, the phase transition between topological phases shares some universal features with the gapped topological phases; for example, the quantum Hall plateau transition in integer quantum Hall phases is uniquely determined by the topological numbers of the adjacent quantum Hall phases. In fact, in the integer quantum Hall systems and some other symmetry-protected topological states, the topological phase transition can be viewed as the result of quantum percolation of the topological edge modes into the bulk [4]. This implies that, via the edge physics, the bulk entanglement spectrum has the potential to encode information on the topological quantum criticality. Indeed, this has already been verified in the AKLT states and integer quantum Hall systems [5][6][7], demonstrating a bulk-edge-criticality correspondence.
But it remains elusive to explore the possibility of decoding topological quantum criticality from a strongly interacting system with long-range entanglement. In one-dimensional fermionic systems, the interacting topological phases of matter without any protecting symmetry fall into the Z₂ classification [8]. The only nontrivial topological phase is the Kitaev Majorana topological state, hosting an unpaired Majorana zero mode on each edge. If we go beyond the anomaly-free constraint, a natural generalization of the Majorana topological state is the Z_N parafermion topological state, with the defining property of an unpaired parafermion zero mode on the edges. So it would be interesting to see whether such a bulk-edge-criticality correspondence is still valid in the interacting fermionic/parafermionic topological phases via the bulk entanglement spectrum. Were it true, this would strongly support the bulk entanglement study as providing a universal recipe in one-dimensional systems to extract topological quantum criticality without fine-tuning the Hamiltonian.
While it is a challenging task to solve a strongly interacting parafermionic Hamiltonian even in one dimension, the bulk entanglement study has the advantage that all one needs is a ground-state wave function with a finite correlation length. In general, the matrix product states (MPS) in one dimension, or tensor-network states in two dimensions, have become a powerful ansatz to capture the essential features of topological ground states. In particular, the fermionic MPS of the Z₂ Majorana topological phase has been established [9], which is the minimal description of the exact ground state of the Kitaev Majorana chain. Our latest work further generalizes this idea to construct the fixed-point MPS for the gapped Z₃ parafermionic topological phase [10] and the more general Z_N parafermionic topological MPS. With these new developments, it is much more efficient to study all kinds of bulk entanglement spectra.
In this paper, we first generalize the fixed-point Z₃ parafermionic MPS to a more generic topological Z_N parafermionic MPS with a tunable correlation length. For wave functions with a finite correlation length, we show that the topological edge modes are coupled with a strength set by the correlation length. By introducing an extensive sublattice bipartition, we can derive the bulk entanglement Hamiltonian, which describes a reduced one-dimensional system with interactions of topological edge modes. Under the symmetric bipartition, for the Z₂ fermionic and Z₃ parafermionic topological states in particular, the bulk entanglement Hamiltonians are shown to follow the behavior of 1+1 space-time dimensional conformal field theories, characterizing the topological quantum critical points that separate the corresponding topological phase from the trivial phase with the same symmetry. Altogether, we hope that these nontrivial calculations evidence a tendency toward generalizing the bulk-edge correspondence to a bulk-edge-criticality correspondence.
The rest of the paper is organized as follows. In Sec. II, we consider the generic Z_N parafermion topological MPS and discuss its correlation length. Then we study the single-block bipartition for the parafermionic MPS and its entanglement spectrum, which mimics the edge physics, in Sec. III. In Sec. IV, we introduce the procedure of symmetric bulk bipartition to derive the corresponding reduced density matrix and the bulk entanglement Hamiltonian. Next, we perform exact numerical diagonalization of the entanglement Hamiltonians and obtain the critical entanglement spectra for the Z₂ Majorana and Z₃ parafermionic topological phases in Sec. V. Finally, a discussion of our results and a brief summary are given in Sec. VI. Some related discussions are included in the Appendices.
II. Z_N PARAFERMIONIC MPS
In this section we first construct a general Z_N parafermion topological MPS with a finite correlation length, which is also supported by fractionalized Majorana/parafermion zero modes on the edges. Let us start by introducing the basics of parafermions. In a superconducting system, it is well known that the U(1) charge symmetry of fermions is usually broken down to the discrete symmetry Z₂. Namely, the particle number conservation is broken down to the number parity conservation. Causality forbids breaking this Z₂ parity symmetry further. Therefore, the fermionic Hilbert space as a super-vector space can be decomposed into the odd and even parity sectors: H = H₀ ⊕ H₁. Mathematically, this Z₂ super-vector space can be generalized to a Z_N super-vector space H = H₀ ⊕ H₁ ⊕ ⋯ ⊕ H_{N−1}. Indeed, the exotic parafermions arising from fractionalized topological insulator systems coupled to alternating ferromagnets and superconductors live precisely in this super-vector space. The creation or annihilation operators of Z_N parafermions are generalizations of the Majorana fermions that satisfy

χ_l χ_{l'} = ω χ_{l'} χ_l, χ_l^N = 1, χ_l† = χ_l^{N−1}, ω = e^{2πi/N}, (1)

for l < l'. It is easy to check that Majorana fermions fit into the case of N = 2. Unlike bosons, the many-body states of parafermions are the Z_N-graded tensor product of the single-particle states:

|k₁ k₂ ⋯ k_L⟩ = |k₁⟩ ⊗_g |k₂⟩ ⊗_g ⋯ ⊗_g |k_L⟩, (2)

where each k ranges from 0 to N − 1. Exchanging parafermions is mathematically expressed as an isomorphism F for the graded super-vectors,

F: |k_l⟩ ⊗_g |k_{l'}⟩ ↦ ω^{k_l k_{l'}} |k_{l'}⟩ ⊗_g |k_l⟩, (3)

where l < l'. An inner product can be defined via a contraction map C that sends the graded product of a dual vector and a vector to a complex number, i.e.,

C: (k'| ⊗_g |k⟩ ↦ δ_{k',k}. (4)
It should be noted that C and F commute, which will be used repeatedly in this paper. Moreover, a parafermion operator acts on a many-body parafermion state as

χ_l |k₁ ⋯ k_l ⋯ k_L⟩ ∝ ω^{Σ_{j<l} k_j} |k₁ ⋯ (k_l + 1) mod N ⋯ k_L⟩, (5)

where the phase string arises from the non-local commutation relation of parafermions. Any parafermionic state should belong to the particular super-vector space with a definite charge detected by the charge operator Q̂ = Σ_l k̂_l mod N:

Q̂ |Ψ_m⟩ = m |Ψ_m⟩, (6)

where |Ψ_m⟩ is a many-body state defined in H_m with the charge m = 0, 1, 2, …, N − 1. Recently, the MPS for the one-dimensional parafermionic topological phase has been constructed [10,11]. It can be expressed in terms of a series of local parafermionic graded tensors,

|Ψ⟩ ∝ C(Â ⊗_g Â ⊗_g ⋯ ⊗_g Â), (7)

where each local site is associated with a graded product of super-vectors,

Â = Σ_{k,α,β} A^k_{αβ} |k⟩ ⊗_g |α)(β|, (8)

with |k⟩ denoting the Fock parafermion mode and |α) and (β| two virtual parafermion modes living in the super-vector space V_F and the dual super-vector space V*_F, respectively. Contraction of the virtual modes ties the neighboring sites by maximally entangling bonds, leading to the compact form

|Ψ_m⟩ ∝ Σ_{{k_l}} Tr[A^{k₁} A^{k₂} ⋯ A^{k_L} (τ†)^m] |k₁ k₂ ⋯ k_L⟩, (9)

with the total charge m = Σ_l k_l mod N. Here τ is the generator of the Z_N group, with matrix elements τ_{α,β} = δ_{(β−α−1) mod N}. For the closed-boundary system, there is a one-to-one correspondence between the total charge and the boundary conditions. Specifically, in the Z₂ Majorana case, m = 0 corresponds to the even-parity state under the anti-periodic boundary condition, while m = 1 corresponds to the odd-parity state under the periodic boundary condition. A graphic representation of the local tensor is given in Appendix A. Motivated by an exact ground state in the interacting fermionic system [12], we generalize the parafermionic MPS to a generic Z_N parafermionic MPS with a tunable correlation length, with the local tensor expressed as

A^k_{αβ} = (e^{−φk}/C) δ_{(β−α−k) mod N}, (10)

where φ is a tuning parameter playing a role similar to a chemical potential for the local charge k ranging from 0 to N − 1, and C is the normalization factor. As shown later, the correlation length can be continuously tuned by φ. What is amazing is that, despite the arbitrarily long correlation length tuned by φ, this MPS always remains topologically nontrivial and characterizes the topological phase of Z_N parafermions away from the fixed point. Essentially this is due to the gauge symmetry τ A^k τ† = A^k; more evidence can be found in a later section. We have to mention that Eq. (10) does not exhaust all possibilities of the MPS with N > 2 that maintain the gauge symmetry and topological nontriviality. In fact, for the generic Z_N case, there can be at most N − 1 independent parameters controlling the relative distribution of the N physical channels on each site, A_{αβ} → a_k δ_{(β−α−k) mod N}, without breaking the gauge symmetry. Hereafter, we abbreviate the modulo N in the arguments of all delta functions for convenience. It should be mentioned that the case of Z₃ is essentially equivalent to the MPS proposed by Fernando et al. [13]; the charge basis we adopt is nevertheless more natural. Despite its similarity with the bosonic MPS, the Majorana/parafermionic MPS is dramatically distinct in the following two aspects: the matrix structure is subject to the constraint of the intrinsic Z_N symmetry, and the nontrivial commutation relation of the (para)fermions brings in a nontrivial phase factor when performing a contraction or permutation. Now let us discuss the correlation properties of the general MPS wave function.
A generic two-body correlator ⟨ψ|Ô_i Ô_j|ψ⟩ can be cast into the tensor network, which involves a consecutive mapping through the transfer matrix defined as

E = Σ_k Ā^k ⊗ A^k. (11)

Note that, unlike the bosonic case, the correlator can involve an additional phase factor counting the total charge between the lattice sites i and j, due to the parafermion commutation. Nevertheless, this seemingly non-local phase can be attributed to a local charge detector deposited on the virtual bonds at the sites i and j only; see Appendix A for details. According to a quite standard MPS scheme, the spectrum of this transfer matrix determines the correlation length of the wave function. To diagonalize this transfer matrix, we can recast the eigenequation of the transfer matrix into a completely positive map:

Σ_k A^k R_{n,j} (A^k)† = λ_n R_{n,j}, Σ_k (A^k)† L_{n,j} A^k = λ_n L_{n,j}, (12)

in which the right eigenvector is reshaped into the matrix R_{n,j}, the left eigenvector is reshaped into the matrix L_{n,j}, n labels different eigenvalues, and j labels eigenvectors with degenerate eigenvalues. The eigenequation can be immediately solved as

(R_{n,j})_{αβ} = ω^{nα} δ_{(β−α−j) mod N}, (L_{n,j})_{αβ} = ω^{−nα} δ_{(β−α−j) mod N}, (13)

with the eigenvalues λ_n = |λ_n| e^{iθ_n}, where

λ_n = (1/C²) Σ_{k=0}^{N−1} e^{−2φk} ω^{nk}. (14)

The global factor C in Eq. (10) is chosen to make the largest eigenvalue λ₀ = 1, ensuring the normalization condition of the MPS in the thermodynamic limit. It is found that the transfer spectrum is N-fold degenerate, so that both n and j range from 0 to N − 1. The degeneracy is no surprise because the transfer matrix inherits the gauge symmetry from the local matrix as a signature of the topological nontriviality:

[E, 1 ⊗ τ] = [E, τ̄ ⊗ 1] = 0. (15)

The eigenvectors within the degenerate subspace are therefore related by (1 ⊗ τ) or (τ ⊗ 1). And yet, for each n, the N − 1 non-diagonal ones with j ≠ 0 are redundant and do not contribute to the physical consequences. The origin of the redundancy is the mismatch between the virtual bond dimension N and the parafermion quantum dimension √N. The physically relevant ones are the diagonal ones R_{n,0} and L_{n,0}. Since |λ_n| ≤ 1 and λ_n = λ̄_{N−n}, the sub-dominant eigenvalue λ₁ = λ̄_{N−1} contributes to the correlation in the thermodynamic limit:

⟨Ô_i Ô_j⟩ − ⟨Ô_i⟩⟨Ô_j⟩ ∼ |λ₁|^{|i−j|} cos(θ₁|i−j| + const), (16)

and more detail is given in Appendix B. Therefore the correlation length ξ can be defined by

ξ = −1/ln|λ₁|. (17)

Note that the correlation length is finite for any finite φ and vanishes only at φ = 0, concurring with the fixed-point MPS discussed before [10]. By varying the parameter φ, we can continuously tune the correlation length of the topological parafermionic MPS.
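The transfer-matrix spectrum, its N-fold degeneracy, and the correlation length of Eq. (17) are easy to verify numerically. The sketch below builds the local tensor with the chemical-potential-like weights assumed in Eq. (10); for N = 3 and φ = 0.4 it gives |λ₁| ≈ 0.43 and ξ ≈ 1.2, while φ = 0 reproduces the zero-correlation-length fixed point:

```python
import numpy as np

N, phi = 3, 0.4
a = np.exp(-phi * np.arange(N))
a /= np.linalg.norm(a)                  # normalization C, so that lambda_0 = 1

# Local tensor A^k_{ab} = a_k * delta_{(b - a - k) mod N}
A = np.zeros((N, N, N), dtype=complex)  # indexed as A[k, alpha, beta]
for k in range(N):
    for al in range(N):
        A[k, al, (al + k) % N] = a[k]

# Transfer matrix E = sum_k conj(A^k) (x) A^k, Eq. (11)
E = sum(np.kron(A[k].conj(), A[k]) for k in range(N))
lam = np.linalg.eigvals(E)
lam = lam[np.argsort(-np.abs(lam))]     # sort by magnitude

print(np.round(np.abs(lam), 6))         # each |lambda_n| appears N times
xi = -1.0 / np.log(np.abs(lam[N]))      # sub-dominant eigenvalue -> Eq. (17)
print(f"correlation length xi = {xi:.3f}")
```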
III. ENTANGLEMENT SPECTRUM AND EDGE PHYSICS
With the topological wave function, we can perform various bipartitions and study the corresponding entanglement spectrum to probe the topological nontriviality therein. Besides the gauge symmetry, the topological nontriviality manifests much more explicitly in the existence of fractionalized edge modes. To reach the boundary theory, we can make use of the bulk-edge correspondence, i.e., studying the single-block entanglement spectrum [2,3].
More specifically, we can bipartition the bulk into one block of l sites and its complement, as shown in Fig. 1(a). By tracing out the complement, we are left with a reduced density matrix ρ_r describing the block. Since the bulk is gapped, the low-energy physics of the block is reflected in its gapless edge excitations. There is an isometry transformation V that maps the block physical parafermions χ to the effective edge parafermions ψ, preserving the spectrum of the eigenvalues: spec(ρ_r) = spec(V† ρ_r V). In the following we mainly deal with the effective reduced density matrix ρ̃_r ≡ V† ρ_r V, which is supported on the effective edge degrees of freedom and faithfully characterizes the low-energy physics of the block. In fact, by treating the complement as the environment, we can rewrite the reduced density matrix in terms of a thermal density matrix,

ρ̃_r = e^{−Ĥ_ent}, (18)

which defines the entanglement Hamiltonian as the negative logarithm of the reduced density matrix. The entanglement Hamiltonian was conjectured, and further proved, to faithfully characterize the low-energy sectors of the boundary theory in the quantum Hall systems [2,3]. In the following we explain how to extract the entanglement Hamiltonian using the MPS. First we coarse grain the MPS by blocking the local matrices of l sites altogether and treating the result as a matrix Ã^{{k_i}}_{αβ} that linearly maps the left and right virtual degrees of freedom (αβ) into the physical degrees of freedom {k_i}. To extract the relevant features, we perform a singular value decomposition of the matrix, Ã = USV†, where V is exactly the isometry transformation, S is a diagonal matrix with singular values that characterize the distributive weights of the relevant degrees of freedom, and U is the isometry that assembles the virtual modes into the effective degrees of freedom. According to the relation

ÃÃ† = US²U†, (19)

we can deduce U and S by diagonalizing ÃÃ†. Therefore U is found with matrix elements U_{(α,β),p} = δ_{(β−α−p) mod N}/√N, and the nonzero singular values in S are given by

s_p = √( Σ_{n=0}^{N−1} λ_n^l ω^{−np} ), (20)

where p ranges from 0 to N − 1 and the quantity inside the square root remains positive definite. Eq. (20) shows explicitly how the singular values depend on the eigenvalues of the transfer matrix and the block length l. It is worth noticing that, although the two edges together (αβ) appear to have N² degrees of freedom, only N among them are relevant. This is a signature of the fractionalization, rooted in the quantum dimension of the edge mode being √N. Finally we can project à onto the relevant degrees of freedom living on the edges by the isometry G ≡ ÃV = US, with matrix elements

G_{(α,β),p} = (s_p/√N) δ_{(β−α−p) mod N}. (21)

When the length of the complement of the block is sufficiently long, as shown in Fig. 1(b), it is straightforward to obtain ρ̃_r independently of the boundary condition:

ρ̃_r = G† (L₀,₀ ⊗ R₀,₀) G = (1/N) diag(s₀², s₁², …, s_{N−1}²), (22)

where L₀,₀ and R₀,₀ are the matrices reshaped from the eigenvectors of the transfer matrix corresponding to the maximum eigenvalue. It is obvious that ρ̃_r is already in a diagonal form with eigenvalues s_p²/N. The entanglement spectrum, i.e., the eigenvalues of Ĥ_ent, is thus given by

ε_p = −ln(s_p²/N). (23)

From this expression, we find that the level splittings of the spectrum decay exponentially with the block length l:

ε_p ≃ ln N − 2J_l cos(lθ₁ − 2πp/N), with J_l ≡ |λ₁|^l = e^{−l/ξ}. (24)

To give a concrete example, we consider the Z₂ Majorana topological state as a parent state. There are only two levels in the block entanglement spectrum, labeled by even and odd parity respectively, and the gap between them is proportional to J_l.
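This two-level structure can be checked directly by blocking the MPS and performing the SVD of Eqs. (19)-(23). The sketch below uses an illustrative normalised Z₂ parametrisation, a₀ = cos φ and a₁ = sin φ, for which λ₁ = cos 2φ:

```python
import numpy as np
from itertools import product

phi, l = 0.4, 6
# Z_2 Majorana MPS: A^0 = cos(phi) * I, A^1 = sin(phi) * tau, with tau = sigma_x
A = [np.cos(phi) * np.eye(2),
     np.sin(phi) * np.array([[0.0, 1.0], [1.0, 0.0]])]

# Block l sites: rows labelled by physical strings {k_i}, columns by (alpha, beta)
rows = []
for ks in product(range(2), repeat=l):
    M = np.eye(2)
    for k in ks:
        M = M @ A[k]
    rows.append(M.reshape(-1))
A_block = np.array(rows)

s = np.linalg.svd(A_block, compute_uv=False)
s = s[s > 1e-12]                 # only N = 2 singular values survive, Eq. (20)
eps = -np.log(s**2 / len(s))     # entanglement spectrum, Eq. (23)
print(np.sort(eps))              # splitting ~ 2 * J_l with J_l = cos(2*phi)**l
```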
To understand the spectrum more transparently, we need the entanglement Hamiltonian, which indicates how the edge degrees of freedom interact with each other. Taking the logarithm of ρ̃_r and expressing it back in terms of operators, we obtain the entanglement Hamiltonian to the leading order of J_l; it describes the coupling between the two edge Majorana zero modes through the bulk. The coupling strength is exponentially suppressed, with the correlation length as the characteristic scale. When l/ξ ≫ 1, the two edge Majorana zero modes tend to decouple, and the excited level tends to collapse onto the ground-state level. We then turn to a more nontrivial concrete demonstration with the Z_3 parafermion topological state. Similarly, the spectrum undergoes a global exponential suppression due to the dependence of J_l on l, but we are more interested in the relative structure of the entanglement spectrum. The rescaled spectrum for the Z_3 case is shown in Fig. 1(c); the relative structure exhibits a certain periodicity.
Actually, for the generic Z_N cases with N ≥ 3, there is an asymptotic periodicity of 2π/N to the leading order of J_l. To understand the spectrum more transparently, the entanglement Hamiltonian is derived; it describes the leading coupling between the two effective parafermionic modes ψ_1 and ψ_2 of the respective edges. Owing to the exponential decay of the coupling strength, the leading term suffices to characterize the spectrum when the block length l is large enough. It is now quite clear that the periodicity of the level structure arises from the coupling phase cycling with the block length. The cases with lθ_1 = (2k + 1)π and lθ_1 = 2kπ are of particular importance. The former corresponds to the ferromagnetic (FM) coupling of the two edge modes in the Z_3 case 14,15 , exhibiting a non-degenerate lowest level and a two-fold degenerate excited level, while the latter corresponds to the anti-ferromagnetic (AFM) coupling of the two edge modes, displaying a two-fold degenerate lowest level and a non-degenerate excited level. In the next section we will show that, depending on the coupling phase, the distinct interactions lead to distinct quantum critical theories.
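The FM/AFM level structure quoted above can be checked in a toy calculation: in the charge basis, the coupling of the two effective edge parafermions acts like a Z_N clock matrix, with the phase of the coupling constant playing the role of lθ_1 (the overall J_l prefactor is dropped). This is only an illustrative sketch of the level-structure argument, not the paper's derivation.

```python
import numpy as np

def edge_pair_spectrum(N, lam):
    """Toy spectrum of two coupled Z_N edge modes: H = lam*U + conj(lam)*U^+,
    with U the Z_N clock matrix; eigenvalues are 2|lam| cos(arg lam + 2*pi*p/N)."""
    U = np.diag(np.exp(2j * np.pi * np.arange(N) / N))
    H = lam * U + np.conj(lam) * U.conj().T
    return np.sort(np.linalg.eigvalsh(H))

# N = 3: phase pi ("FM") gives a non-degenerate lowest level, [-2, 1, 1];
# phase 0 ("AFM") gives a two-fold degenerate lowest level, [-1, -1, 2].
print(edge_pair_spectrum(3, np.exp(1j * np.pi)))
print(edge_pair_spectrum(3, 1.0))
```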
IV. SYMMETRIC BULK BIPARTITION AND ENTANGLEMENT HAMILTONIAN
While the entanglement Hamiltonian derived from the single-block bipartition is shown to describe the edge physics, it cannot describe the bulk, being one dimension lower. In this section we introduce an extensive sublattice bipartition that yields an entanglement Hamiltonian describing the bulk physics.
The basic idea of decoding topological quantum criticality from a gapped topological phase 5 lies in letting the fractionalized edge degrees of freedom couple with each other and percolate into the reduced bulk subsystem. This can be engineered by a sublattice bipartition of the bulk into L_A pairs of alternating interlaced A and B sub-blocks, tracing out subsystem B and keeping A as the system of interest. Since the bulk is gapped, the low-energy physics of subsystem A is dominated by the extensive fractionalized edge degrees of freedom, as shown in Fig. 3(a). Distinct from the former single-block bipartition, which yields a (0+1)-dimensional entanglement Hamiltonian describing the edge physics, this extensive interlaced bipartition gives rise to a (1+1)-dimensional entanglement Hamiltonian characterizing the bulk properties.
FIG. 2: The three topologically distinct phases driven by distinct relative block lengths, shown schematically. The red dots mark the fractionalized edge particles as the low-energy effective degrees of freedom of each block in subsystem A. Solid lines correspond to subsystem A, while dashed lines stand for subsystem B, which is traced out. The coupling strength between edge particles decays exponentially with the block length, so the relatively strong couplings are highlighted with bold lines. (d) A schematic phase diagram. The topological quantum critical point is guaranteed to be self-dual and translationally invariant.
Depending on the relative block lengths l_A and l_B, subsystem A itself can fall into either the topological or the trivial phase, as shown in Fig. 2. This can be visualized in two limits: when l_B is relatively small, subsystem A is essentially in the same phase as the original gapped topological phase; when l_A is small enough, it falls into a product state of dimers. Since one cannot go from the topological phase to the trivial phase without passing through a quantum critical point, subsystem A is expected to be critical when l_A and l_B are comparable. In particular, in the Z_2 Majorana case, the topological phase is related to the trivial phase by a duality transformation, which can be implemented by a single-site translation of the Majorana modes 16 . As a result, the quantum critical point separating the Z_2 Majorana topological phase from the trivial phase is translationally invariant with respect to the Majorana lattice 17 . This implies that the topological critical point is robustly pinned at l_A = l_B despite the presence of even higher-order interactions. This scenario can plausibly be generalized to the Z_N case, i.e., l_A = l_B ≡ l, ensuring that the coupling strength between the fractionalized edge modes is translationally invariant. We call this the symmetric bulk bipartition.
FIG. 3: (a) A graphical tensor-network representation of the reduced density operator after a symmetric bulk bipartition. Both the bra and ket charge-m topological chains are divided into L_A pairs of alternating interlaced A and B parts with respective lengths l_A, l_B. (b) After coarse graining both chains, the simplified repeating element is denoted R; the arrows indicate that this tensor network still carries parafermionic operators and is not yet amenable to direct numerical calculation. Once all parafermionic states in part B are contracted, the reduced density operator becomes a conventional tensor network, at the price of additional bonds playing a role similar to the Jordan-Wigner phase string.

We first group every l sites together and apply the isometry transformation V to obtain a coarse-grained chain with block tensor G. By tracing out the alternating B blocks, a parafermionic density operator in the form of a tensor network is obtained, as shown in Fig. 3. However, tracing out B is far less straightforward than in the bosonic case, because it requires permuting the physical parafermions, which would inevitably produce a highly non-local phase. Nevertheless, this problem can be circumvented by a trick that uses the tensor-network formalism as a quantum circuit: the non-local phase factor arising from commuting parafermions can be shuffled and redistributed locally into each site, at the cost of an additional bond that
tracks and records the total charge of all the parafermions to the left of a given site. In this way, the parafermionic density operator turns into a more conventional tensor network that is amenable to direct numerical calculation (see Appendix C for details). Moreover, since the parafermion has fractional quantum dimension √N, the bond dimension of the tensor network can be effectively reduced from N² to N. The repeating element of the matrix product operator (MPO) is thus derived to be the six-index tensor R of Eq. (27), where p denotes the effective degrees of freedom of each block, α, β are the remaining virtual bonds to be contracted, and l (r) inputs (outputs) the accumulated charge to the left of the site; the four singular values come from the four blocks, respectively, and the phase factor accounts for the commutation between physical parafermions before their contraction. It is worth noticing that the additional bonds essentially play a role similar to the Jordan-Wigner phase string, which is inevitable when one tries to turn something fermionic into something bosonic.
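The charge-tracking bonds admit a compact numerical encoding. The sketch below shows only the bookkeeping idea: an extra Z_N-valued bond accumulates the charge to the left of each site, turning a would-be non-local phase into a local one. The tensor here is a stripped-down stand-in, not the actual six-index R of Eq. (27), which additionally carries the singular values and virtual bonds; the phase convention is illustrative.

```python
import numpy as np

def charge_string_tensor(N):
    """W[l, r, p]: extra bond l carries the accumulated Z_N charge,
    r = (l + p) mod N passes it on, and the phase that would otherwise
    be non-local is applied locally as exp(i*2*pi*l*p/N)."""
    W = np.zeros((N, N, N), dtype=complex)
    for l in range(N):
        for p in range(N):
            r = (l + p) % N
            W[l, r, p] = np.exp(2j * np.pi * l * p / N)
    return W
```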
While the reduced density operator in MPO form is already amenable to direct numerical calculation, we also give an analytical derivation of its final expression. In essence, we group L_A copies of the R tensor together and contract all the internal bonds, leaving only the physical and boundary bonds. We denote this intermediate block tensor by R̃^[{p,q}]_{α,β,r}(L_A); it depends on the subsystem length L_A and carries both physical and boundary bonds. Contracting the boundary bonds of this block tensor then yields the reduced density matrix, as shown in Fig. 3(b). By noticing that the intermediate block tensor obeys the recursion relation of Eq. (28), we can derive its explicit form and then contract the boundary bonds to obtain the reduced density operator in the form of Eq. (29), where C is the normalization factor and Ŝ is an operator form of the diagonal singular matrix for one block. Physically, Ŝ stands for the coupling between the fractionalized edge particles of one block, while Î describes the hopping between fractionalized edge particles of adjacent blocks. With the reduced density operator, the bulk entanglement Hamiltonian is thus obtained; it is in general a complicated Hamiltonian with long-range interactions, but these are controlled by the correlation length of the parent gapped wave function. In the following section we numerically perform exact diagonalization of the reduced density matrix in MPO form and demonstrate that the exact numerical results are consistent with the analytical form of the entanglement Hamiltonian up to the leading orders.
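Once ρ_A has been assembled as a dense matrix (feasible for small L_A after contracting the MPO), the bulk entanglement Hamiltonian is just its negative logarithm. A minimal, assumption-light sketch via the eigendecomposition; the function name is ours:

```python
import numpy as np

def entanglement_hamiltonian(rho, eps=1e-14):
    """H_ent = -ln(rho) for a Hermitian, positive semi-definite rho."""
    rho = rho / np.trace(rho)          # enforce trace normalization
    w, v = np.linalg.eigh(rho)
    w = np.clip(w, eps, None)          # guard against numerically zero weights
    return v @ np.diag(-np.log(w)) @ v.conj().T
```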
V. EXACT NUMERICAL RESULTS OF CRITICAL ENTANGLEMENT SPECTRA
The main focus of this section is the bulk entanglement Hamiltonian, Eq. (31), defined for the symmetric bulk bipartition. We perform exact diagonalization and obtain a critical spectrum for the entanglement Hamiltonian, labeled by the charge and momentum good quantum numbers. In the following, we first introduce the momentum and charge quantum numbers and then show the numerical data for two concrete examples, the Z_2 Majorana phase and the Z_3 parafermion phase. Both exhibit quantum critical behavior characterized by conformal field theories.
A. Symmetries of the entanglement Hamiltonian
Before doing the numerical calculations, we look at the symmetries of the entanglement Hamiltonian. In fact, the symmetries are inherited from the parent topological state. The topological states with different charges correspond to different boundary conditions. As mentioned earlier, the Z_N charge symmetry Q̂ is intrinsic and cannot be broken, so the charge is always a good quantum number.
Next we consider the translation symmetry and boundary conditions. As the symmetries of the entanglement Hamiltonian follow from the parent topological state, we look into the symmetry actions on the parent state. Due to the gauge symmetry, there exists a modified 'bi-translation' symmetry for the Z_N topological state with charge m; its action on the parafermions is given in Eq. (32). Such a bi-translation symmetry physically translates the parafermion lattice by two sites, up to a phase factor depending on the total charge of the state, and the additional charge-zero part (χ†_2 χ_1)² plays the role of preserving the parafermion commutation relation. When N = 2, the Kitaev Majorana case, (χ†_2 χ_1)² = −1 and the bi-translation reduces to the familiar single-site translation, corresponding to the periodic boundary condition for the odd-parity parent state m = 1 and the anti-periodic boundary condition for the even-parity parent state m = 0. It is then natural to track how the physical basis of the generic Z_N parafermion wave function changes under the modified translation. The bi-translation symmetry therefore also gives rise to a good quantum number analogous to the usual momentum. The fact that translating L times restores the same operator only up to a charge operator is due to the nontrivial parafermion commutation relation, T̃^L χ_j T̃†^L = e^{i(m−1)2π/N} Q̂†² χ_j, with the consequence of shifting the momentum by a fractional value. The boundary condition of the entanglement Hamiltonian is in one-to-one correspondence with the charge of the parent topological state. We thus have the transformed terms, where δ denotes the neighboring lattice sites and k is an integer power. The relation ψ_{2L_A+j} ≡ e^{i(m−1)2π/N} ψ_j manifests the boundary condition depending on the charge m of the parent topological state. Since Ĥ_A is an equal-weight summation of products of these terms, it can be proved to be invariant under the modified bi-translation symmetry. Moreover, the charge operator for these edge modes should be modified accordingly, where the first factor comes from the charge of the parent topological state. Q̃ introduces an additional charge quantum number for a given m, and such a modified charge operator commutes with both the entanglement Hamiltonian and the modified translation. As a result, we can label the entanglement spectra with momentum and charge quantum numbers simultaneously. Last but not least, we note that the bi-translation symmetry imposed on the grouped blocks does not depend on the relative block length; while the momentum is attributed to the bi-translation symmetry, the single-site translation, which does depend on the relative block length, is a much more subtle symmetry.
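In practice, once the entanglement Hamiltonian and the (bi-)translation operator are built as matrices, labeling levels by momentum amounts to reading off translation eigenvalues in each energy eigenspace. A minimal sketch, valid as written only for non-degenerate levels (degenerate levels require diagonalizing T within each eigenspace); the names are ours:

```python
import numpy as np

def label_by_momentum(H, T):
    """Label eigenlevels of H by translation phases, assuming [H, T] = 0
    and T unitary; <n|T|n> is unimodular only for non-degenerate levels."""
    E, V = np.linalg.eigh(H)
    t = np.einsum('in,ij,jn->n', V.conj(), T, V)   # diagonal of V^+ T V
    return E, np.angle(t)
```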
B. Z2 Majorana fermion criticality
As a simple example, we first consider the most familiar Z_2 Majorana system. The entanglement spectrum under the symmetric bulk bipartition is shown in Fig. 4, where we choose φ = 2, l = 5, and the largest value L_A = 18. In Fig. 4(a), we show that the first two energy levels of the finite-size system collapse linearly onto the ground-state energy. Meanwhile, we can take the ground state of the entanglement Hamiltonian, effectively composed of L_A sites, and perform the usual block bipartition to calculate the entanglement entropy. The result is shown in Fig. 4(b), where x < L_A denotes the block length. The scaling of the entanglement entropy follows the Calabrese-Cardy formula for a closed boundary 18 ; the slope yields the central charge c ≈ 1/2, which uniquely characterizes the free Majorana fermion CFT. Indeed, this theory is known to describe the self-dual critical point separating the Z_2 topological phase from the trivial phase in the Kitaev Majorana chain 16,19,20 .
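The central-charge extraction used here is a one-line fit: for a periodic chain of L sites, the Calabrese-Cardy form is S(x) = (c/3) ln[(L/π) sin(πx/L)] + const, so c is three times the slope of S against the chord-length logarithm. A sketch (for an open chain the prefactor would be c/6 instead):

```python
import numpy as np

def central_charge_periodic(x, S, L):
    """Fit S(x) = (c/3) * ln[(L/pi) * sin(pi*x/L)] + const and return c."""
    x = np.asarray(x, dtype=float)
    u = np.log((L / np.pi) * np.sin(np.pi * x / L))
    slope, _ = np.polyfit(u, np.asarray(S, dtype=float), 1)
    return 3.0 * slope
```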
Moreover, it can be further verified that the low-energy part of the energy-momentum spectrum exactly follows the scaling law of the Ising conformal field theory. The energy levels of the lowest primary fields with their descendants are displayed in Fig. 4(c), where the levels are shifted by a constant to set the energy of the identity primary field to zero and the values of the levels are rescaled. For the even-parity parent topological state (m = 0), the boundary condition corresponds to anti-periodic in the Majorana fermion representation, and the Hilbert space of subsystem A is restricted to the Neveu-Schwarz sector, which explains why the momentum can be shifted by half-integers in the left spectrum of Fig. 4(c). The corresponding energy levels are labeled by the primary fields (I, Ī), (ψ, Ī), (I, ψ̄), and (ψ, ψ̄), with conformal weights h_I = 0 and h_ψ = 1/2. On the other hand, for the odd-parity parent topological state (m = 1), the boundary condition becomes periodic, and the Hilbert space of subsystem A lies in the Ramond sector. As shown in the right spectrum of Fig. 4(c), the energy levels are marked by the primary field (σ, σ̄), with conformal weight h_σ = 1/16.
C. Z3 parafermion criticality with FM coupling
The Z_3 parafermion system is more nontrivial, because a system hosting parafermions is necessarily strongly interacting. In contrast to the Z_2 situation, there can be more than one quantum critical theory, depending on the phase factor of the coupling constant between neighboring edge parafermions. This phase factor can be tuned by the block length l in our decoding procedure. Two cases are prototypical: lθ_1 = (2k + 1)π for the FM coupling and lθ_1 = 2kπ for the AFM coupling. We first consider the FM coupling case. In the numerical calculation, the parameters are chosen as l = 24, φ ≈ 1.5076, and the largest value L_A = 12. The energy levels of the entanglement Hamiltonian again collapse linearly onto the ground-state energy, as shown in Fig. 5(a). The entanglement entropy of the ground state of the entanglement Hamiltonian is also calculated and shown in Fig. 5(b); it fits the Calabrese-Cardy formula with a central charge c ≈ 4/5, confirming the Z_3 parafermion CFT. Indeed, this theory describes the topological quantum phase transition from the Z_3 nontrivial topological phase to the trivial phase.
D. Z3 parafermion criticality with AFM coupling
Let us now consider the AFM coupling case and perform similar numerical calculations for φ ≈ 1.5076, l = 12, and the largest value L_A = 12. As in the FM case, the lowest excitation levels scale linearly with 1/L_A, indicating a gapless spectrum, as shown in Fig. 6(a). The calculation of the ground-state entanglement entropy of the entanglement Hamiltonian, shown in Fig. 6(b), determines that the critical entanglement spectrum is described by a CFT with central charge c ≈ 1.
Moreover, the low-energy entanglement levels follow the finite-size scaling law under the three boundary conditions determined by the charge of the parent topological state m, which are displayed in Fig. 6(c) and (d).
Since L_A = 12 is still a moderate size, there are finite-size corrections of order 1/L_A². In Fig. 7, we have carefully analyzed these corrections, so that the entanglement levels can be expressed in terms of the following primary fields: (h, h̄) = (0, 0), (3/4, 3/4), (0, 1/3), and (3/4, 1/12) for the parent state with charge m = 0. For the parent topological states with m = 1, 2, the primary fields include (1/3, 0), (1/12, 3/4), (1/12, 1/12), and (1/3, 1/3). Note that the momenta of some fields have been shifted by π in the presence of the AFM correlation. Actually, all the primary fields can be further represented in terms of the primary fields of the compactified free boson CFT with compactification radius R = 2/3, where e denotes the electric charge and m the magnetic winding number. In contrast to critical statistical systems with integer values of both e and m, the quantum numbers (e, m) here are found to be fractional and are listed in Table I. Thus, the quantum critical point separating a topological phase from its adjacent trivial phase is not necessarily unique, and in practice we can deduce a rich family of quantum critical theories describing the phase transitions between them.
VI. DISCUSSION AND SUMMARY
To gain a better understanding of the numerical results, we could expand the logarithm of the reduced density operator, Eq. (29), to obtain the leading terms of the bulk entanglement Hamiltonian. In summary, we have demonstrated the decoding of topological quantum criticality from the topological wave functions of strongly interacting, long-range entangled topological phases. This is achieved by a nontrivial symmetric extensive interlaced bipartition and the extraction of its entanglement spectrum. It is a novel recipe for extracting topological quantum critical points from a gapped topological ground state, without any need for the parent Hamiltonian or for choosing specific perturbations. In general, using this method, we can obtain a family of critical points, for example for the Z_N (N > 2) parafermion phases. Moreover, beyond providing a method, our results carry a rather strong implication for a general physical picture: they suggest an appealing generalization of the bulk-edge correspondence to a bulk-edge-criticality correspondence. Through concrete exact demonstrations, the results in this paper, combined with our earlier work on bosonic symmetry-protected states, strongly support the view that the decoding recipe and the bulk-edge-criticality correspondence are universal in one-dimensional systems. This could potentially be generalized further to intrinsic topological order in two-dimensional systems.
Our MPS is defined as the contraction of the local tensors in the graded space, whose graphical representation is shown in Fig. 8. It is a general Z_N parafermionic MPS in the topological phase with the local matrix given above. For the Z_2 case, there is an exact ground state with a finite correlation length,
where ĉ† is the fermionic creation operator and α is a parameter related to the interaction strength. Because of the Pauli exclusion principle, each exponential term can be expanded as 1 ± αĉ†_l, and all higher-order terms vanish identically, leading to a simple form of these two wave functions, where the delta function restricts the parity of the states to even or odd and the exponential part has been expressed as local decaying factors. This can be further represented in MPS form with the local tensor given above, with φ = −2 ln |α|. It can now be seen that this is nothing but the Z_2 example of our general MPS. Moreover, another exact Z_3 parafermionic ground state with a finite correlation length was proposed by F. Iemini, C. Mora, and L. Mazza 13 in terms of a local tensor. Via a gauge transformation U, we can transform this local tensor into another charge basis. The transfer matrix defined in Eq. (12) can be expanded in its eigenvectors, which is very useful for calculating the two-point correlation function, generally defined by two local operators Ô_1 and Ô_2 with opposite charges. Assume that Ô_1 is an operator with charge −q on lattice site l, while Ô_2 is the other one with charge q on lattice site l + d; their action can be enforced on the local charge-p states. Putting them together ensures zero total charge of the product operator. The correlation function is defined as ⟨Ψ|Ô_1 Ô_2|Ψ⟩ in a chain with L lattice sites, as shown in Fig. 9(a). Whatever the coefficients of the specific wave function, the contraction of the graded vectors can be performed, and the contractions can be divided into three parts. First, the bra parafermions on sites before l can be contracted after crossing the two operators with zero total charge. Second, the bra parafermions from site l to site l + d are exchanged only with the operator Ô_2 before being contracted, whose charge q leaves a phase factor e^{−i(2π/N) q (k_l + ··· + k_{l+d})}. Finally, the remaining parafermions can be contracted normally without crossing any sites. This is almost the same as the correlation function in the bosonic case, apart from an additional non-local phase between the two operators. Fortunately, according to the bulk-edge correspondence, this phase factor can be represented in the virtual indices as e^{−i(2π/N)(β_{l+d−1} − α_l) q}, which gives rise to two gauge matrices attached to the (d − 2)-site transfer matrix, as shown in Fig. 9(b).
Finally, the correlation function is written in a very simple form in which a unitary transformation remains in the definition of the correlation length; here σ_{α,β} = e^{i(2π/N)α} δ_{β−α} is the diagonal Z_N charge matrix and O(|λ_2|^d) terms have been neglected. The correlation length can then be derived straightforwardly and equals that for two zero-charge operators, as shown in Eq. (17). The reduced density operator we are going to calculate is actually a parafermionic MPO, whose contractions bring in nontrivial phase factors arising from the parafermion commutations. To deal with this, the trick is to introduce additional bonds that keep track of the phase factor.
As shown in Fig. 3, the reduced density matrix can be obtained by contracting all the grouped physical indices in subsystem B, labeled by h, h′, leaving all the grouped physical indices of subsystem A, labeled by p, q. This problem is not as simple as in the bosonic case, and all contractions are carried out from the left as a convention. Consider an edge parafermion defined in the graded space of the j-th block of the B part, labeled by h_j; it meets its corresponding partner, labeled by h′_j, only after crossing the j − 1 edge parafermions defined in the A part on both the bra and ket chains, as shown in Fig. 10(a). Since the directions of the exchanges in the bra and ket chains are opposite, they leave phase factors conjugate to each other, which finally produces a phase factor e^{i(2π/N) h_j Σ_{i<j}(p_i − q_i)}. The two parafermions labeled by h_j, h′_j form a bond state with zero charge, so no additional phase factor appears in the next contraction. Here the local index h_j represents the virtual indices β_j − α_j − p_j; therefore, two additional bonds at site j are needed to record the non-local total charge of the parafermions crossed by the parafermion labeled by h_j, shown as the r, l bonds of the tensor R (Eq. (27)). This is inspired by previous work on fermionic tensor contractions 21 . Finally, r = l + (p − q) accumulates the charge, and the phase factor is written in the local form e^{i(2π/N) l (β_j − α_j − p_j)} for site j. Furthermore, the two virtual parafermions living on the bra and ket chains constitute a single N-dimensional super-vector space, represented such that the left-most bond l is fixed to zero as the starting point and the final accumulated charge (the right-most r) is also fixed to zero.
To analyze the reduced density matrix of subsystem A with L_A blocks, the matrix R̃(L_A) can be solved mathematically; it is defined by the recursion of Eq. (28) and shown in Fig. 10(b). After taking the boundary condition into account, the reduced density matrix can be expressed in a form where the singular values provide the diagonal contributions and Î represents the non-diagonal contributions in the middle part of the reduced density matrix. In the charge-accumulating basis, the reduced density matrix can finally be represented with P_j = Σ_{i<j} p_i (mod N) and Q_j = Σ_{i<j} q_i (mod N). Moreover, C = [N^{L_A} s_m²(2lL_A)]^{−1} in the expression for ρ_A is the common factor, including the normalization factor for the finite length. From this form, the result given by Eq. (29), written in the basis of the edge parafermionic operators, is easily verified. | 2018-08-28T00:34:52.000Z | 2018-02-13T00:00:00.000 | {
"year": 2018,
"sha1": "1f9fbf3b0f4d424c415ff793eadcac5efbed9596",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1802.04542",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1f9fbf3b0f4d424c415ff793eadcac5efbed9596",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
260799577 | pes2o/s2orc | v3-fos-license | Pityriasis following COVID-19 vaccinations: a systematic review
In the wake of a global COVID-19 pandemic, where innovations in vaccination technology and the speed of development and distribution have been unprecedented, a wide variety of post-vaccination cutaneous reactions have surfaced. However, there has not been a systematic review that investigates pityriasis eruptions and the associated variants following COVID-19 inoculations. A PubMed search using Preferred Reporting Items for Systematic Reviews and Meta-Analyses was performed to find case reports from the earliest record through November 2022. Data including types of vaccination and pityriasis were extracted and a quality review was performed; 47 reports with 94 patients were found: 64.9% had pityriasis rosea (PR), 3.2% PR-like eruptions, 16.0% pityriasis rubra pilaris, 7.4% pityriasis lichenoides et varioliformis acuta, 3.2% pityriasis lichenoides chronica, and 5.3% had reactions described as atypical. The top three COVID-19 vaccinations reported were Pfizer-BioNTech (47.9%), Oxford-AstraZeneca (11.7%), and Moderna (8.5%). Pityriasis reactivity was reported most frequently after the Pfizer-BioNTech vaccination, with pityriasis rosea being the most common variant. A large difference was additionally found between the ratio of post-vaccination pityriasis reactions following Pfizer and Moderna vaccinations (5.63), and the ratio of Pfizer’s usage in the United States as of December 28, 2022 relative to that of Moderna (1.59). Further studies with adequate follow-up periods and diagnostic testing will thus need to be performed to elucidate the root of this discrepancy and better characterize the association between different pityriasis reactions and COVID-19 vaccinations.
Introduction
The introduction of vaccines against Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) is a critical factor in halting the progression of COVID-19. With the introduction of these novel vaccines, interest is shifting toward understanding possible adverse reactions. Currently, several vaccine formulations dominate the global market, among them Moderna's (Spikevax™) and Pfizer-BioNTech's (Comirnaty®) mRNA vaccines and Johnson & Johnson's viral vector vaccine. 1 Each has had various reactions reported, including cutaneous eruptions that have been investigated and described to varying degrees. 2 Given the scale of the vaccination effort, dermatological manifestations are likely to appear, 3 among them pityriasis and its variants. Pityriasis rosea (PR) is a self-limiting papulosquamous disease affecting adolescents and young adults, with an unclear etiology. 4 Although variant presentations exist, classic PR presents with the sudden onset of a solitary patch, referred to as the herald patch, typically found on the trunk. Subsequently, a secondary eruption of round-to-oval macules appears along the cleavage lines, in a so-called Christmas tree distribution. 4 Pityriasis rubra pilaris (PRP) is another papulosquamous disease of unknown etiology affecting children and adults. There are five subtypes of PRP, with the most classic findings including red-orange papules and plaques, perifollicular keratosis, and waxy keratoderma. 5 Pityriasis lichenoides et varioliformis acuta (PLEVA) and pityriasis lichenoides chronica (PLC) lie at the two ends of a clinical spectrum. Overall, pityriasis lichenoides presents with recurrent erythematous to pruritic papules that can spontaneously regress. 6 PLEVA has an acute course, with lesions described as crusted and vesiculopapular; PLC has a chronic relapsing course, and its lesions are scaly rather than crusted. 6 While there are systematic reviews exploring the cutaneous sequelae of COVID-19 vaccination in general, none has yet focused specifically on the incidence and manifestations of the different pityriasis conditions in this setting. This systematic review aims to quantify and qualify occurrences of pityriasis conditions following COVID-19 vaccination in an effort to elucidate any correlations or patterns that could provide clinical insight.
Methods
This systematic review was conducted in accordance with PRISMA guidelines. Articles were retrieved from the PubMed database using the search formula: "((COVID-19) OR (COVID) OR (SARS-Cov-2)) AND (pityriasis)." No restrictions were applied to the year of publication, and all articles from the earliest record to November 2022 were included.
Inclusion and exclusion criteria
During the initial screening, two independent reviewers assessed each article's title and abstract to exclude articles that were reviews, meta-analyses, or published abstracts; were not in English; were not published in a peer-reviewed journal; did not involve human subjects; or did not address the development of pityriasis or pityriasis-like manifestations following vaccination against COVID-19. Subsequently, two independent reviewers screened the remaining articles by evaluating them in their entirety. The following inclusion criteria were applied: the case report is in English and describes the development of pityriasis or pityriasis-like manifestations in patients who received a COVID-19 vaccination of any type. In the event of disagreement, a third reviewer made the decision to include or exclude a publication. The process of study selection is summarized in Figure 1.
Quality assessment
Using the Joanna Briggs Institute Critical Appraisal Checklist for Case Reports, two independent reviewers (GM and VT) assessed the quality of each publication. In the event of disagreement, a third reviewer decided the final score for that publication.
A total of eight criteria were used to assess the quality of each publication: clearly described demographics, patient history with timeline, the patient's clinical condition at the beginning of the case study, diagnostic tests, treatment, patient outcome, an appropriate follow-up period, and takeaway lessons. An adequate follow-up period was considered to be 9 months or more after the resolution of symptoms. 7-9 Furthermore, a set follow-up period of 9 months was chosen to better monitor instances of recurrence of pityriasis following the resolution of symptoms, regardless of type. Each publication was assigned a score of 0 (not reported), 1 (reported but inadequate), or 2 (reported and adequate) for each criterion. Scores from each criterion were then added to generate an overall score for each publication, with a maximum possible score of 16.
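As a worked illustration of the scoring arithmetic, a hypothetical report that scored 2 on seven criteria and 0 on follow-up would total 14 of 16; the snippet below just makes the bookkeeping explicit (the criterion values are invented for illustration):

```python
# Hypothetical JBI-style scoring: 8 criteria, each scored 0, 1, or 2 (max 16)
scores = {"demographics": 2, "history": 2, "clinical_condition": 2,
          "diagnostics": 2, "treatment": 2, "outcome": 2,
          "follow_up": 0, "lessons": 2}
total = sum(scores.values())
print(f"{total} / {2 * len(scores)}")   # -> 14 / 16
```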
Study design and patient demographics
Our case study selection criteria are detailed in Figure 1. A total of 47 case reports and series describing pityriasis reactions in 94 patients were identified. These cases, detailed in Supplementary Table 1, comprised 42 females and 52 males aged 15-85 years, with a mean age of 44.53 years (standard deviation 16.66). All studies included patients who had received a dose of vaccine within two weeks of presenting with a form of pityriasis. None of the patients had concurrent dermatological conditions, and only one had an active COVID-19 infection. 10 Previous medical histories were reported in twenty of the cases and included the following: hypertension, 11-15 type 2 diabetes, 12,16,17 psoriasis, 12,18,19 mild hidradenitis suppurativa, 20,21 alopecia areata, 22 chronic pancreatitis and arrhythmia, 11 a 1-year history of glioblastoma, 23 transverse myelitis, 19 moderate hepatic steatosis and familial hypercholesterolemia, 24 metabolic syndrome, hypothyroidism, and chronic kidney disease, 12 chronic lymphocytic leukemia and chronic obstructive pulmonary disease, 12 fever with myalgia two days prior, 25 arthrosis, 13,14 vitiligo, 17 fatty liver, 16 acute lymphocytic leukemia in remission, 12 urticaria several months prior, 26 Hashimoto's thyroiditis, 27 and transient ischaemic attack, coronary artery disease, and dyslipidemia. 28
Clinical testing performed
Histories of previous reactions were gathered, and testing for viral infections was performed in many of the studies. 6,29,34-38,40,41,43,45-48,50,52-54 Only one patient had a concurrent COVID-19 infection, 10 one patient had their first reaction during a COVID-19 infection, 37 and another had a previous reaction to a vaccination, a statin-induced myopathy. 24 Testing for other viral infections, in various combinations, was reported in 18 patients, with all results negative: general reports of a negative viral result, 27,32,33,36,37,40 HIV, 14-17,34,35,45,46 hepatitis B and C virus, 14,34 antineutrophilic cytoplasmic antibody, 34 herpes simplex virus 1/2, Epstein-Barr virus, and varicella zoster virus, 56 human herpesvirus 6/7, 23,35 hepatitis B and C, 15,16,35,45 toxoplasma, 35 syphilis, 15,23 and Treponema pallidum. 14 There were also two patients who refused blood testing. 26,55

Treatments reported for pityriasis reactions

Treatments of PR and PR-like reactions were combinations of the following: topical and oral corticosteroids including prednisolone, mometasone, and betamethasone, 21,22,26,28,33,38,41,44,50,52 standard triamcinolone 0.1% ointment, 20,23,31,42,55 doxycycline, 22,29,34 oral acyclovir, 30 ganciclovir, 51 valaciclovir, 51 antihistamines, 21,28,33,41 emollients, 53 and L-lysine. 27 Pityriasis rosea was reported to have resolved on its own in two cases. 28,56 For the treatment of PRP and PLEVA, many combinations were tried with varying levels of success. As far as PRP is concerned, Hlaca et al. reported that acitretin and topical mometasone 0.1% resulted in complete resolution of symptoms 4 months after treatment, a combination that was also implemented by Hunjan et al. 11,18 Additional successful treatments included methotrexate, 12 oral and topical corticosteroids such as prednisone, 12,15,17,45 emollients, 15 isotretinoin, 13 and ixekizumab, 16 while ustekinumab showed partial improvement in one case and none in another. 36 Two PRP cases in this systematic review did not report the patient's final outcome after initiation of treatment, 24,46 and in another case the patient was lost to follow-up. 10 For the treatment of PLEVA, Sechi et al. achieved remission after 10 weeks of topical 2% fusidic acid and 0.1% betamethasone cream. 12 Additional treatments reported to achieve remission of symptoms in PLEVA patients included oral corticosteroids, 14,37,40,43 doxycycline, 19 narrowband ultraviolet B therapy, 37 and emollients and mometasone furoate creams. 43 A course of doxycycline was attempted in the patient with a PLC eruption reported by Al Muqrin et al.; however, the final outcome of treatment was not reported. 29 In the other two PLC patients included in this systematic review, symptoms resolved either with doxycycline or on their own, leaving hyperpigmented macules. 34,35
Quality and risk of bias
The quality assessment outlined in Supplementary Table 2 produced scores ranging from 9 to 16. Only one publication received a perfect score. 16 The majority of publications failed to report an appropriate follow-up period. 48,52 For this review, an appropriate follow-up period was defined as a follow-up appointment at least 9 months after resolution of symptoms; a period of less than 9 months was recorded as an inadequate follow-up period. Additionally, 11 publications did not describe the patient's outcome, 10,24,25,29,31,32,37,42,44,47,50 and 2 publications were assessed to have inadequately described the patient's outcome. 45,56 A publication was considered to have inadequately reported the patient outcome if it did not describe the resolution of symptoms with a timeline of when they resolved. Furthermore, 6 publications 25,32,35,39,47,56 failed to report whether any treatment was provided. Lastly, only one publication did not mention or expand on takeaway lessons from the case study. 33 All publications reported demographics, patient history, clinical condition upon presentation, and the method of clinical assessment.
COVID-19 vaccine-related reported cutaneous reactions
An array of cutaneous reactions following COVID-19 vaccination has been documented, the most common being local injection-site reactions and delayed large local reactions. 57 Other reported cutaneous reactions following COVID-19 vaccination range from urticaria to erythema multiforme, herpes simplex reactivation, and pityriasis-like lesions. 3,58 In general, most cutaneous COVID-19 vaccine reactions occur after the first dose. 57,59 Moreover, these reactions occur at low frequency, tend to be self-limiting, and are not currently a contraindication to vaccination. 58 It is also important to note that the reactions listed are not specific to COVID-19 vaccines. It is hypothesized that, because a vaccine and the infection caused by the virus it targets share common cutaneous reactions, the source may not be damage done by the virus itself but rather the process of immune activation against it. 60
Proposed etiologies of pityriasis and history of vaccine reactions
Several types of pityriasis have been linked to vaccinations, including those against COVID-19, in many case reports over the past few years. In general, the pathophysiology of pityriasis can be described as an immune disorder in which the immune response is aggravated by an antigenic trigger. 12,18,61,62 However, exploring vaccination reactivity can help elucidate the disease mechanism, which is still under debate and investigation.
Pityriasis rosea and its less common variants, such as inverse pityriasis, are self-limited and usually resolve within 6-8 weeks. This was true for all of our PR case reports. 64,65 While not all case reports included viral testing, the patients who were tested for viruses in this systematic review, including for COVID-19, were all negative. Cases of PR and PR-like reactions concurrent with COVID-19 have also been described in the literature. 32,66,67 There have also been reports of PR or PR-like reactions after vaccination for human papillomavirus, 68 bacillus Calmette-Guérin, 69 smallpox, 70 hepatitis B, 71 pneumococcus, 72 yellow fever, 73 and influenza. 74,75 PRP also has an unclear disease mechanism outside of the inherited subtype related to a CARD14 gene mutation. 5 Non-genetic cases of PRP have been linked to physical trauma, multiple autoimmune conditions, HIV infection, and malignancies. 5 Additional cases of PRP have been reported after vaccination with diphtheria-pertussis-tetanus and oral poliovirus. 76 PLEVA and PLC are the same disease at two ends of a clinical spectrum. While PLEVA and PLC pathogenesis clearly falls under the category of T-cell lymphoproliferative disorders, the etiology of this T-cell activity is less clear. Connections to specific infections and vaccinations, such as HPV, 77 measles, mumps and rubella (MMR), 78-81 and anti-tetanus and diphtheria, 82 have been reported in cases of PLEVA and PLC. 6 Pityriasis lichenoides following COVID-19 vaccination may be due to a delayed hypersensitivity response against vaccine excipients or the spike protein, or to molecular mimicry resulting in a T-cell-mediated hypersensitivity skin reaction. 83 This is highlighted by the case of Makila et al., in which the patient developed PLEVA during a COVID-19 infection and had recurrence of symptoms one month after the second vaccine dose. 37 Previous incidences of PLEVA after vaccination have been reported for MMR, 78-81 anti-tetanus-diphtheria, 82 and influenza vaccinations. 61
Data trends in vaccination types
Though the case studies examined in this paper are unlikely to be exhaustive of all pityriasis reactions that have occurred, including those not reported in the literature, we found far more of the described reactions after Pfizer-BioNTech vaccination (47.9%) than after Moderna vaccination (8.5%), a ratio of 5.63. Comparing this to total vaccination rates in the United States as of December 28, 2022 (59.63% Pfizer and 37.49% Moderna), we calculated a far lower usage ratio of 1.59, indicating a disproportionately higher rate of reported reactions after the Pfizer-BioNTech vaccine, even though both are novel mRNA vaccines using similar technology. 84 We believe that further investigation into the cause of this difference is warranted. Though the cases we investigated included reactions worldwide, we were unable to ascertain complete worldwide data on the use of both vaccines. The Moderna vaccine is administered at higher rates in the United States than in most other countries, so the actual ratio difference may be greater at the global level. 84
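The two ratios quoted above follow directly from the percentages reported in this review and the cited US usage data; the short snippet below simply reproduces the arithmetic:

```python
# Shares from this review and the cited US usage data (Dec 28, 2022)
pfizer_reports, moderna_reports = 47.9, 8.5    # % of pityriasis reports
pfizer_usage, moderna_usage = 59.63, 37.49     # % of US doses

print(round(pfizer_reports / moderna_reports, 2))  # 5.64 (reported as 5.63)
print(round(pfizer_usage / moderna_usage, 2))      # 1.59
```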
Quality of current case literature
Only a small number of the included reports described a follow-up period, 48,52 and the rest did not describe any follow-up after resolution of symptoms; it is therefore hard to assess the possibility of recurrence of the pityriasis conditions described in this review. Aside from pityriasis, it has been reported that vaccinations may cause recurrence of pre-existing dermatologic conditions. 85 This further underlines the importance of adequate follow-up periods, and future studies should ensure proper follow-up, as this information may help clinicians better understand any long-term risks of flare-ups or recurrence of PR, PRP, PLEVA, PLC, and possibly other dermatological conditions. Although all case studies mentioned in this review described the method of diagnosis, diagnostic tests such as biopsy were not performed in all cases. Given the vaccine's novelty, information from histopathological examination may aid the future diagnosis of cutaneous reactions to COVID-19 vaccines.
Conclusions
Pityriasis conditions (PR, PRP, PLEVA, and PLC) have been reported as cutaneous manifestations of COVID-19 vaccination. This review found a greater incidence of pityriasis reactions associated with the Pfizer-BioNTech vaccine, with PR the most commonly diagnosed type. A significant absence of follow-up periods was found throughout the case studies, thus preventing monitoring of disease progression and recurrence. Future reports should examine possible differences between the Pfizer-BioNTech and Moderna vaccines to help explain the greater prevalence of pityriasis reactions with the former, and should incorporate adequate diagnostic testing and follow-up periods to better characterize the association between the different types of pityriasis reactions and COVID-19 vaccinations.
Figure 1. Flowchart depicting the selection process of publications for this systematic review; n = number of publications.
Figure 2. Scores for each quality criterion, presented as percentages across all included studies.
"year": 2023,
"sha1": "9ae814d98a2bf42bc714f2560dc066d55eaeddd2",
"oa_license": "CCBYNC",
"oa_url": "https://www.pagepress.org/journals/index.php/dr/article/download/9742/9158",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8349959a42aaa9ba7f515864ed86446862f1c47a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
263841324 | pes2o/s2orc | v3-fos-license | Gut microbiota causally affects cholelithiasis: a two-sample Mendelian randomization study
Background The gut microbiota is closely linked to cholesterol metabolism-related diseases such as obesity and cardiovascular diseases. However, whether gut microbiota plays a causal role in cholelithiasis remains unclear. Aims This study explored the causal relationship between gut microbiota and cholelithiasis. We hypothesize that the gut microbiota influences cholelithiasis development. Methods A two-sample Mendelian randomization method was combined with STRING analysis to test this hypothesis. Summary data on gut microbiota and cholelithiasis were obtained from the MiBioGen (n=13,266) and FinnGen R8 consortia (n=334,367), respectively. Results Clostridium senegalense, Coprococcus3, and Lentisphaerae increased the risk of cholelithiasis and expressed more bile salt hydrolases. In contrast, Holdemania, Lachnospiraceae UCG010, and Ruminococcaceae NK4A214 weakly expressed bile salt hydrolases and were implied to have a protective effect against cholelithiasis by Mendelian randomization analysis. Conclusion Gut microbiota causally influences cholelithiasis and may be related to bile salt hydrolases. This work improves our understanding of cholelithiasis causality to facilitate the development of treatment strategies.
Introduction
Cholelithiasis (also known as gallstones) is defined as a solid clot in the gallbladder or biliary system (Lammert et al., 2016). Approximately 90% of cholelithiasis occurrences are cholesterol gallstones, while the incidence of other stone types (including black and brown pigment stones) is below 10% (Sun et al., 2022). Approximately 10-20% of the global population has gallstones, and over 20% of cases develop gallstone diseases, such as acute cholecystitis, acute cholangitis, and obstructive jaundice (Cortés et al., 2020). Gallstone disease is one of the most expensive gastrointestinal conditions from a societal perspective (Grigor'eva and Romanova, 2020). Epidemiological studies have identified numerous risk factors for cholesterol stones, including type 2 diabetes, physical inactivity, and over-nutrition (Di Ciaula et al., 2019). This can be attributed to these risk factors leading to excess cholesterol or disruption of cholesterol homeostasis (Rudling et al., 2019).
The gut microbiota has a non-negligible impact on metabolic disorders, including insulin resistance (Gomes et al., 2018), obesity (Abenavoli et al., 2019), and hyperlipidemia (Michels et al., 2022). These disorders are known risk factors for increased hepatic cholesterol synthesis, gallstone formation, and symptomatic gallstones (Di Ciaula et al., 2019). However, the role of the intestinal microbiota in gallstone development remains unclear. Elevated levels of cholesterol and bilirubin in the bile and decreased levels of bile salts cause cholesterol gallstones (Sun et al., 2022). Decreased bile salt levels are observed in liver disease and in conditions such as Crohn's disease, or in individuals who have undergone colectomy or intestinal resection, where the enterohepatic circulation of bile salts is impaired (Molina-Molina et al., 2018). These findings led us to hypothesize that the gut microbiota and intestine may play key roles in influencing gallstone formation in the host. The diversity and taxonomy of the gut microbiota are associated with bile acid levels in gallstone disease, and increased concentrations of taurodeoxycholic acid and taurocholic acid are associated with the presence of conditionally pathogenic bacteria (Petrov et al., 2020). However, the intestinal flora is often influenced by confounding factors, including age, sex, environment, alcohol consumption, diet, and lifestyle (Zmora et al., 2019). Eliminating these confounders in observational studies can be challenging, thereby limiting research on the causal role of the gut microbiota in gallstones. Fortunately, Mendelian randomization (MR) analysis can be employed to explore the causal role of the intestinal microbiota in the etiology of human diseases independently of confounding factors (O'Donnell et al., 2023).
MR borrows statistical techniques inspired by economics to enable researchers to examine the causal factors affecting human diseases (Birney, 2022). For numerous disease-related risk factors, causation has not been established owing to the limitations of observational studies, which cannot avoid the influence of confounders. Thus, MR has become one of the most effective methods for addressing issues in human biology and epidemiology, including the relationship between the intestinal microbiota and disease (Georgakis and Gill, 2021). In MR analysis, the correlation between genetic variation and outcome is independent of confounders (Bowden and Holmes, 2019). For instance, MR analysis has revealed that Bifidobacterium is causally linked to preeclampsia-eclampsia (Li et al., 2022), and the fecal abundance of Oscillibacter and Alistipes is causally associated with decreased triglyceride levels (Liu et al., 2022).
This study used summary-level statistics of the genome-wide association study (GWAS) from the MiBioGen and FinnGen consortia to perform a two-sample MR analysis to investigate the causal relationship between intestinal microbiota and cholelithiasis.
Materials and methods
The two-sample MR study relied on three assumptions to draw conclusions regarding causation: (i) the genetic variants are strongly associated with the exposure (the gut microbiome); (ii) the genetic variants are independent of confounding factors; and (iii) the genetic variants influence the outcome only through the exposure (Emdin et al., 2017).
Data sources
Single nucleotide polymorphisms (SNPs) related to the intestinal microbiota were obtained from the GWAS dataset of the international MiBioGen consortium and were used as instrumental variables (IVs). This study included 34,024 individuals from 18 cohorts of predominantly European ancestry (including those from the United States, Canada, Israel, South Korea, Germany, Denmark, The Netherlands, Belgium, Sweden, Finland, and the United Kingdom). The dataset provided genotyping data coordinated with 16S ribosomal RNA gene sequencing to examine the relationship between genetic variants and the intestinal microbiota by profiling taxonomic classification. A total of 211 taxa were included in the analysis. We selected SNPs that showed significant correlations with a genus at a suggestive genome-wide significance threshold (P < 1 × 10−5, F > 10) as potential IVs from MiBioGen (Li et al., 2022).
Data outcome
GWAS summary statistics for cholelithiasis were acquired from the FinnGen consortium R8 release data. The phenotype "cholelithiasis" was adopted in our research. The GWAS consisted of 32,894 cases and 301,383 controls. The mean age of the patients was 52.11 years (female: 48.62, male: 59.36). Sex, age, genetic principal components, and genotyping batch were adjusted for in the analysis. SNPs with genome-wide significant correlations with cholelithiasis were identified (P < 5 × 10−8).
Instrumental variable filtering
First, SNPs showing significant correlations with a genus were selected from MiBioGen as potential IVs at a suggestive genome-wide significance threshold (P < 1 × 10−5, F > 10). A linkage disequilibrium threshold (R² < 0.001 within a clumping window of 10,000 kb) was then applied to ensure independence among the selected SNPs. SNPs that were ambiguous, duplicated, or palindromic were removed to ensure consistent SNP orientation between the exposure and the outcome. Next, SNPs were required to have P > 1 × 10−5 for association with the outcome, to ensure that the IVs were independent of the outcome. Finally, the PhenoScanner online tool was used to check whether any SNPs affected the outcome through other traits, and such SNPs were manually excluded. The MR-Egger intercept analysis was employed to test for horizontal pleiotropy (P > 0.05), and a leave-one-out analysis was conducted to assess whether any single IV affected the overall estimates of the remaining IVs.
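The thresholding part of this pipeline is straightforward to express in code. The sketch below keeps SNPs under the suggestive p-value threshold and above the instrument-strength cutoff, using the common per-SNP approximation F ≈ (β/SE)²; LD clumping (R² < 0.001 within 10,000 kb) requires a reference panel and is omitted here. The function name and inputs are illustrative, not from the study's scripts:

```python
import numpy as np

def filter_instruments(beta, se, pval, p_max=1e-5, f_min=10.0):
    """Keep SNPs with exposure p-value < p_max and F-statistic > f_min,
    where the per-SNP F is approximated as (beta/se)**2."""
    beta, se, pval = map(np.asarray, (beta, se, pval))
    F = (beta / se) ** 2
    return (pval < p_max) & (F > f_min)   # boolean mask over SNPs
```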
MR analysis
The most frequently used MR methods were applied to analyze the causal relationship between the intestinal microbiota and cholelithiasis: inverse variance weighted (IVW), MR-Egger, weighted median, weighted mode, and simple mode.
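The authors implemented these estimators with the R package TwoSampleMR (see Statistical analysis below); as a language-agnostic illustration of the primary method, the fixed-effect IVW estimate is just an inverse-variance-weighted average of per-SNP Wald ratios, sketched here with the usual first-order variance approximation:

```python
import numpy as np

def ivw_estimate(beta_exp, beta_out, se_out):
    """Fixed-effect IVW: weight each Wald ratio beta_out/beta_exp by the
    inverse of its first-order variance, se_out**2 / beta_exp**2."""
    beta_exp, beta_out, se_out = map(np.asarray, (beta_exp, beta_out, se_out))
    ratios = beta_out / beta_exp
    weights = beta_exp**2 / se_out**2
    beta_ivw = np.sum(weights * ratios) / np.sum(weights)
    se_ivw = np.sqrt(1.0 / np.sum(weights))
    return beta_ivw, se_ivw
```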
Reverse MR analysis
Reverse MR analysis was performed to explore whether cholelithiasis had a causal effect on the identified microbiomes. In this analysis, cholelithiasis was considered the exposure and each bacterial taxon the outcome, and SNPs notably correlated with cholelithiasis were taken as IVs.
All data analyses were conducted using R software (version 4.2.1). The IVW, weighted median, and MR-Egger regression methods were applied using the R package "TwoSampleMR" (version 0.5.6), and the "MRPRESSO" package was used to perform the MR-PRESSO analysis.
STRING analysis
The STRING database collects and integrates protein-protein interactions (PPIs), performs enrichment analysis, and highlights proteins in the PPI network (Szklarczyk et al., 2023). We used STRING (https://string-db.org) to predict the function of bile salt hydrolase (BSH) in the bacteria and to conduct Gene Ontology (GO) and KEGG enrichment analyses. Statistical significance was defined as p < 0.05.
Single nucleotide polymorphism selection
A total of 2557 SNPs related to 211 taxa were identified as IVs for the gut microbiota. A series of quality-control steps was performed, resulting in the selection of 72 SNPs associated with six genera and one phylum (based on an IVW P-value < 0.05). Analysis of these 72 SNPs using PhenoScanner showed that no SNPs were related to confounding factors. Heterogeneity between the two samples was tested using Cochran's Q statistic, and no evidence of heterogeneity (p > 0.05) or horizontal pleiotropy of the IVs was observed (MR-PRESSO global, p > 0.05; MR-Egger intercept, p > 0.05) (Table S1). The MR analysis results are shown in Figure 1.
Sensitivity analysis
Sensitivity analyses are necessary to assess the validity of the IVW estimates. The MR-Egger method was used to assess horizontal pleiotropy. The MR-Egger intercept and MR-PRESSO global tests indicated a low likelihood of horizontal pleiotropy (Table S1, p > 0.05). All I² values in the heterogeneity tests were <50%, and all p-values were >0.05, indicating that our findings were probably not influenced by heterogeneity bias. The MR-Egger intercept and the MR-PRESSO global test showed no significant horizontal pleiotropy, indicating that outliers did not significantly affect the results. Simultaneously, the consistency between the robust IVW (adjusted for the effect of outliers) and MR-PRESSO (adjusted for the effect of horizontal pleiotropy) supported the lack of significant outlier effects on the results (Table S1). A leave-one-out analysis was also conducted to assess whether any single IV drove the overall estimates. These results suggested that no single IV affected the results (Figure 2).
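For reference, the Cochran's Q heterogeneity statistic behind these checks is a weighted sum of squared deviations of the per-SNP Wald ratios from the IVW estimate. A hedged sketch using the same first-order weights as above; under homogeneity, Q is approximately chi-squared with J − 1 degrees of freedom for J SNPs:

```python
import numpy as np

def cochrans_q(beta_exp, beta_out, se_out):
    """Cochran's Q across J SNPs for the IVW model."""
    beta_exp, beta_out, se_out = map(np.asarray, (beta_exp, beta_out, se_out))
    w = beta_exp**2 / se_out**2            # inverse-variance weights
    b = beta_out / beta_exp                # per-SNP Wald ratios
    b_ivw = np.sum(w * b) / np.sum(w)
    return np.sum(w * (b - b_ivw) ** 2)    # ~ chi2(J - 1) under homogeneity
```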
The causal effects of cholelithiasis on gut microbiota via reverse MR analysis
The reverse causal effects were examined using cholelithiasis as the exposure and the intestinal microbiota as the outcome. Forty-three SNPs associated with cholelithiasis were selected as IVs (P < 5 × 10−8). Cholelithiasis was causally associated with Actinobacteria, Lachnospiraceae, Clostridium innocuum, Eggerthella, Eubacterium brachy, Intestinimonas, Paraprevotella, and Mollicutes RF9 (Table 2). No bacterial taxon exhibited a bidirectional causal relationship with cholelithiasis.
Microbiota causally linked to cholelithiasis may be associated with bile salt hydrolase
Bile acids (BAs) are divided into primary and secondary categories. Primary BAs are excreted into the intestine and converted into secondary BAs by intestinal microorganisms (Sinha et al., 2020). The initial step of secondary BA metabolism involves hydrolysis of the amino acid fraction by BSH (Smirnova et al., 2022). BSH (also known as choloylglycine hydrolase) is present in the intestinal microbiome to maintain BA balance. Imbalances in BAs are associated with gallstones, gallbladder disease, obesity, and diabetes (Cai et al., 2022a). BSHs are highly conserved across all major gut microbial phyla (including Bacteroidetes, Firmicutes, and Actinobacteria); however, they differ among bacteria owing to their preferential activity toward glycine- or taurine-conjugated BAs. The Human Microbiome Project reported that 26.03% of bacterial strains encode BSHs (Song et al., 2019). Thus, gut microbiota-related BSH directly determines the synthesis of secondary bile acids, which are involved in regulating cholesterol metabolism. We hypothesized that BSH may be a link between the intestinal microbiota and gallstone causation. Thus, we selected BSHs of the microbiome (associated with cholelithiasis by MR analysis) from the Human Microbiome Project database and the National Center for Biotechnology Information database for protein-protein interaction (PPI) analysis. Clostridium and Coprococcus were predicted as risk factors for cholelithiasis by MR, and they expressed more BSHs than the protective bacteria (Holdemania, Lachnospiraceae UCG010, and Ruminococcaceae NK4A214) (Figures 3A, B, D, E). Similarly, reverse MR analysis predicted cholelithiasis as a risk factor for Intestinimonas, which expresses BSH (Figures 3C, F). We also used STRING to predict the functions of Clostridium, Coprococcus, and Intestinimonas. Enrichment analysis showed that the bile acid catabolic process and secondary bile acid biosynthesis were enriched in Clostridium and Coprococcus according to Gene Ontology analysis and Kyoto Encyclopedia of Genes and Genomes pathway analysis. Intestinimonas was enriched only for secondary bile acid biosynthesis (Table S2). No enrichment of secondary bile acid biosynthesis was observed in microbiota that did not contain the BSH protein. These results suggest that BSH may serve as a link between the gut microbiota and cholelithiasis (Figure 4).
Microbiota linked to serum total cholesterol may be associated with BSH
Lower total cholesterol levels in serum may be an independent risk factor for cholelithiasis (Chen et al., 2022). It has been shown that one of the effects of BSH on the host is cholesterol lowering (Ertürkmen et al., 2023), and Terrisporobacter was associated with higher total cholesterol levels (Guo et al., 2023). We detected the expression of BSH in Terrisporobacter (the red node represents BSH), and functional enrichment was predicted by STRING; no enrichment was found (PPI enrichment p-value: 0.457), and the predicted functions were protein lipoylation and CoA hydrolase activity rather than bile acid or fatty acid metabolism (Figure 5). This result indicated that the BSH of Terrisporobacter may have little function in bile acid or fatty acid metabolism.
Discussion
This study used the MiBioGen database and cholelithiasis data from FinnGen to investigate the causal relationship between gut microbes and cholelithiasis using MR analysis. Clostridium senegalense, Coprococcus3, and Lentisphaerae increase the risk of cholelithiasis. In contrast, Holdemania, Lachnospiraceae UCG010, and Ruminococcaceae NK4A214 showed protective effects. These causal relationships identified BSH as a potential link between bile salt metabolism and the gut microbiota.
The intestinal microbiota is a metabolic organ that produces numerous metabolites (Connell et al., 2022), including BAs and indole derivatives (Agus et al., 2018), that play crucial roles in regulating host metabolism (Cai et al., 2022b). Cholesterol oxidized by liver enzymes results in the production of BAs, which are further metabolized by the intestinal microbiota (Collins et al., 2023). The gut microbiota regulates key enzymes, such as cholesterol-7α-hydroxylase (CYP7A1), involved in BA synthesis (Hartmann et al., 2018; Fukui, 2021), and BA synthesis is tightly controlled by negative feedback inhibition through the farnesoid X receptor (FXR) (Jia et al., 2018). Bile acid deconjugation is primarily mediated by bacteria with BSH activity (Smirnova et al., 2022). Thus, the intestinal microbiota plays a significant role in the key enzymatic processes involved in cholesterol synthesis in the liver, secondary BA production, and the enterohepatic circulation of BAs.
The balance between cholesterol and bile salts is a critical factor in gallstone formation (Ye et al., 2021). Previous studies have revealed direct associations between taurocholic acid, taurochenodeoxycholic acid, and the alpha diversity of the microbiota, together with positive associations with the genera Chitinophagaceae, Microbacterium, Lutibacterium, and Prevotella intermedia (Petrov et al., 2020). Patients with gallstones exhibit an increased richness of 7α-dehydroxylating microbiota and decreased levels of Firmicutes and diversity of the gut microbiota (Wang et al., 2020). Additionally, the abundance of Lactobacillus strains was significantly reduced in lithogenic diet-induced gallstones through the mediation of FXR signaling (Ye et al., 2022). However, the causal correlation between the intestinal flora and cholelithiasis remains unclear. Our study predicted that the abundances of Clostridium senegalense and Coprococcus3 have a causal relationship with cholelithiasis.
Many observational studies report a correlation between Clostridium and gallstone disease. For instance, patients with gallstones have elevated levels of Clostridium in the stool (Grigor'eva and Romanova, 2020), and Clostridium species have been isolated from gallbladder stones (Liu et al., 2000), consistent with the results of our study. We found a causal association between Clostridium senegalense and cholelithiasis. Clostridium-encoded protein analysis revealed that Clostridium had two BSH genes, consistent with a previous study (Song et al., 2019). The presence of greater amounts of bacteria with BSH leads to increased bile salt deconjugation, resulting in elevated biliary deoxycholate levels, positive regulation of hepatic cholesterol secretion, and cholesterol crystallization (Lammert et al., 2016). Furthermore, increased hydrolysis can lead to steatosis, and increased secondary bile acid levels are associated with colorectal cancer (Horáčková et al., 2018). Recent research showing that theabrownin can reduce hypercholesterolemia by inhibiting intestinal microbiota with BSH activity indirectly supports our hypothesis. The underlying mechanism may be that low BSH activity increases the concentration of conjugated BAs, which could inhibit the FXR-FGF15 signaling pathway, thereby leading to decreased hepatic cholesterol synthesis (Huang et al., 2019). However, the effect of BSH-containing microbiota on cholelithiasis causation requires verification and further exploration.
Coprococcus was also predicted to be a risk factor for gallstone disease and encodes BSH proteins, consistent with a previous study (Song et al., 2019). Moreover, the relative abundance of BSH in the microbiota is significantly associated with mortality from diabetes and cardiovascular disease (Song et al., 2019). This finding further emphasizes the significance of BSH in cholelithiasis. Further research is required to understand how BSH in Clostridium and Coprococcus affects the formation of cholesterol stones. Recent studies show that Desulfovibrionales are enriched in patients with cholelithiasis, increase the synthesis of secondary BAs and intestinal cholesterol uptake, stimulate biliary secretion, and affect FXR and CYP7A expression [15]. Further research is required to determine whether BSHs in Clostridium and Coprococcus promote cholesterol stone formation by increasing secondary BA synthesis.
The present study revealed the protective effects of Lachnospiraceae UCG010 and Ruminococcaceae NK4A214 against cholelithiasis. These strains did not express BSH. This result suggests that BSH deficiency might be a protective factor against gallstone formation; however, this hypothesis requires verification. Microbial conversion of cholesterol to coprostanol is another mechanism that reduces cholesterol and decreases the formation of cholesterol stones (Kenny et al., 2020). The Lactobacillus curvatus KFP419 strain reduces cholesterol levels by increasing the conversion of cholesterol to coprostanol (Park et al., 2018).
Lachnospiraceae and Ruminococcaceae are reportedly associated with high coprostanol levels (Antharam et al., 2016). An elevation of sterol metabolites coincides with increases in the Lachnospiraceae family in vitro, implying that this family promotes sterol metabolism (Blanco-Morales et al., 2020). These findings suggest that the protective effects of Lachnospiraceae and Ruminococcaceae against cholelithiasis may be related to cholesterol conversion; however, the underlying mechanisms require further exploration.
Genetic instruments were employed with tests of horizontal pleiotropy and two-sample heterogeneity to obtain reliable results from the MR analysis. Additionally, a leave-one-out analysis was performed to examine potential biases introduced by individual SNPs. Five sets of genetic instruments were used for the MR analysis. In addition, the concept of BSH was introduced as an innovative approach to investigate the underlying mechanisms of the causal association between the microbiota and cholelithiasis.
Nevertheless, this study has limitations. First, most patients with cholelithiasis in the analysis were of European descent, whereas the gut flora database encompasses other populations. Second, the potential association between the microbiota and cholelithiasis via BSH was only superficially speculated; further research is required to elucidate the underlying mechanisms and causal relationship. Third, subgroup analyses, such as distinguishing between symptomatic and asymptomatic cholelithiasis, were not possible because summary data for cholelithiasis were used in our analysis. Fourth, our exploration was limited to the genus level, the lowest taxonomic level available in the gut microbiota dataset, impeding a more detailed investigation at the species level. Additionally, some bacteria were predicted only at the phylum level, so we could not analyze their BSH proteins. Finally, the sample size of the exposure group was relatively small; therefore, the reverse MR analysis results could not completely exclude the possibility of reverse causality.
Conclusion
Our investigations demonstrated a causal relationship between cholelithiasis and Clostridium senegalense and Coprococcus3, whereas Holdemania, Lachnospiraceae UCG010, and Ruminococcaceae NK4A214 had protective effects. The causal relationship between the gut microbiota and cholelithiasis may be mediated by BSH. In addition, reverse MR analysis supported a causal relationship between cholelithiasis and the intestinal microbiota, suggesting that cholelithiasis may in turn influence the gut flora. Further validation and mechanistic studies are required.
FIGURE 3 The number of bile salt hydrolases (BSHs) in microbiota. (A) The number of BSHs in microbiota genera based on the HMP database. (B) The number of BSHs in microbiota genera based on Mendelian randomization (MR) analysis and (C) reverse MR analysis. The number of BSH-related proteins in (D) Coprococcus, (E) Clostridium, and (F) Intestinimonas; red represents BSH, and blue and green dots represent BSH-related proteins.
TABLE 1
The MR analysis of causal effects between gut microbiota and cholelithiasis.
TABLE 2
Inverse MR analysis of the causal effects between gut microbiota and cholelithiasis. | 2023-10-12T15:21:03.189Z | 2023-10-09T00:00:00.000 | {
"year": 2023,
"sha1": "8e899e94e9819e5532a6e97e20995f1b50f0c422",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fcimb.2023.1253447/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "305ab23bccb864f7f21e62aadadccef5d9fdbe0e",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52812905 | pes2o/s2orc | v3-fos-license | Immunisation Rates of Medical Students at a Tropical Queensland University
Although medical students are at risk of contracting and transmitting communicable diseases, previous studies have demonstrated sub-optimal medical student immunity. The objective of this research was to determine the documented immunity of medical students at James Cook University to important vaccine-preventable diseases. An anonymous online survey was administered thrice in 2014, using questions with categories of immunity to determine documented evidence of immunity, as well as closed-ended questions about attitudes towards the importance of vaccination. Of the 1158 medical students targeted via survey, 289 responses were included in the study (response rate 25%), of which 19 (6.6%) had documented evidence of immunity to all of the vaccine-preventable diseases surveyed. Proof of immunity was 38.4% for seasonal influenza, 47.1% for pertussis, 52.2% for measles, 38.8% for varicella, 43.7% for hepatitis A, and 95.1% for hepatitis B (the only mandatory vaccination for this population). The vast majority of students agreed on the importance of vaccination for personal protection (98.3%) and patient protection (95.9%). In conclusion, medical students have sub-optimal evidence of immunity to important vaccine-preventable diseases. Student attitudes regarding the importance of occupational vaccination are inconsistent with their level of immunity. The findings of this study were used to prompt health service and educational providers to consider their duty of care to manage the serious risks posed by occupational communicable diseases.
Introduction
Immunisation of medical students is an important infection control strategy, one that is strongly recommended by leading international public health advisory bodies [1,2]. Clinical guidelines for vaccination decision-making in Australia have been developed by the Australian Technical Advisory Group on Immunisation. Occupational vaccination recommendations from this group state that healthcare workers and students should ensure immunity to hepatitis B, seasonal influenza, measles, mumps, rubella, pertussis, and varicella. Additionally, those who work in remote Indigenous communities or with Indigenous children should be vaccinated against hepatitis A [3]. Adherence to these recommendations is mandated variably across Australian health services and universities-there is no national legislated requirement for occupational vaccination. For Australian-born medical students, many of these vaccinations would have been provided through a government-subsidised childhood immunisation scheme. However, for adults who are not in high-risk medical populations, any additional vaccines are the financial responsibility of the individual [3]. Private health insurance providers in Australia are not required to reimburse for vaccine-related expenses.
Despite the strength of occupational vaccination recommendations, medical students consistently have sub-optimal immunity to vaccine-preventable diseases, as was highlighted in a recent review of the literature on vaccine coverage among healthcare students [4]. The only published Australian research on medical student immunity was undertaken between 2002 and 2005 at the University of New South Wales. Using questionnaires and serological testing, the authors concluded that a significant proportion of first-year medical students were not immune to important vaccine-preventable diseases [5].
The primary objective of this study was to determine the documented immunity of medical students at a tropical Queensland university to important vaccine-preventable diseases. The findings were used to inform health service and educational providers about the adequacy of their current immunisation policies.
Organisational Context
Medical students at James Cook University, Queensland, Australia, are enrolled in a six-year undergraduate degree. Clinical exposure commences in first year and increases proportionally with progress through the course. Students in years one, two, and three are considered 'pre-clinical', receiving most of their education (including patient interaction) within the university environment. Students in years four, five, and six are in their 'clinical' years of medical school; the majority of their teaching takes place in hospitals. The main medicine campus is in Townsville, with other centres located across Northern Australia in Cairns, Mackay, and Darwin. Medical students are financially responsible for their immunisation-related expenses. They are sometimes included in Queensland Health staff vaccination initiatives, but not in all facilities. As per James Cook University and Queensland Health policy at the time of this research in 2014, healthcare students were required to provide proof of seroconversion to hepatitis B. The remainder of the immunisation schedule was recommended but not mandatory. These policies have since been updated [6,7].
Materials and Methods
An anonymous online survey was administered to medical students at James Cook University. The questions in the survey were specifically designed to ascertain history of documented immunity to important vaccine-preventable diseases (influenza, pertussis, measles, varicella, hepatitis A, and hepatitis B). These diseases were selected based on their significant potential for nosocomial transmission in this medical student population. Categories were used to define immunisation status, using proof of immunity guidelines from the Australian Immunisation Handbook and the Centers for Disease Control and Prevention [1,3]. Figure 1 demonstrates the use of this category system. Included in the survey were questions about socio-demographic variables. There were also two closed-ended questions about student attitudes towards the importance of occupational vaccination.
The survey was piloted on a group of ten final-year medical students. Emails were sent to medical students enrolled in all six years on three occasions during July and August 2014. A hyperlink directed students to the information statement and informed consent document, followed by the survey. The hyperlink was also posted on social media and promoted by the James Cook University Medical Students Association. This study was approved by the Human Research Ethics Committee at James Cook University (approval number H5664).
Data was collected by SurveyMonkey (www.surveymonkey.com) and analysed using SPSS for Windows, version 22.0 (IBM, New York, NY, USA). Incomplete responses were removed from the data set prior to analysis. Students who were 'unsure' of their vaccination status were grouped with the unvaccinated students for further analysis. Those who were unable to seroconvert to hepatitis B were considered immune, given that in the years after vaccination up to 60% of people lose detectable antibody but not protection [8]. Data were rigorously examined for error. Descriptive analyses were employed. Pearson's chi-square tests were used to investigate for statistically significant relationships between immune status and the independent variables (age, gender, nationality, year level group, and campus). Frequency tables were used to determine completeness of student vaccine coverage. Pearson's chi-square tests were used again to investigate for significant associations between completeness of vaccination and the independent variables.
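To make these coding and testing steps concrete, the sketch below reproduces the two recoding decisions described above ('unsure' grouped with unvaccinated; documented hepatitis B non-seroconverters counted as immune) and runs a Pearson chi-square test on a 2 × 2 table. The category labels are hypothetical, and the cell counts only approximate the reported influenza percentages for illustration; the study itself used SPSS rather than this code.

```python
from scipy.stats import chi2_contingency

def recode_immunity(status, disease):
    """Map a survey response category to a binary immune/not-immune flag.

    'unsure' is grouped with the unvaccinated; documented hepatitis B
    non-seroconverters are treated as immune, since protection can
    outlast detectable antibody.
    """
    if status in ("documented_vaccination", "documented_serology"):
        return True
    if disease == "hepatitis_b" and status == "documented_non_seroconverter":
        return True
    return False  # includes 'unsure' and 'not_vaccinated'

print(recode_immunity("unsure", "influenza"))  # -> False

# Toy 2x2 table: influenza immunity (rows: immune / not immune)
# by year level group (columns: pre-clinical / clinical)
table = [[35, 76],
         [115, 63]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")
```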
Results
Of 1158 enrolled medical students, 289 students (25%) across the four James Cook University medicine campuses completed the survey (33 surveys that were only partially completed were not included). The majority of students were aged between 18 and 24 (86.5%), were female (68.9%), and had grown up in Australia (82.7%). When compared to the demographic profile of the James Cook University medical student population in 2014, the sample is well matched in terms of year level group distribution; however, females are over-represented in the sample population (Table 1). The mandatory hepatitis B vaccine had the highest rate of documented immunity at 95%, while measles was 52.2% and all other vaccines surveyed were less than 50% (Tables 2 and 3). There was a statistically significant association between influenza immunity and medical student seniority: 54.7% of clinical students received the influenza vaccine in 2014, compared to 23.3% of pre-clinical students (p < 0.001). Pre-clinical or clinical year level group did not predict immunity to pertussis, measles, varicella, hepatitis A, or hepatitis B (Table 4). There were no statistically significant associations between immunity to any of the diseases and student age, gender, campus, or nationality (p > 0.05).
Notably, the majority of students perceived vaccination as important for their personal protection (11.1% agree, 87.2% strongly agree); as well as for patient protection (11.8% agree, 84.1% strongly agree). The proportion of students with documented immunity to all of the diseases surveyed was 6.6% (19/289). The remaining 93.4% of respondents would fulfil criteria for one or more catch-up immunisations. Administration of 823 vaccination catch-up schedules for individual diseases would be recommended to the students surveyed: an average of 3.05 per survey respondent. There were no statistically significant associations between comprehensiveness of vaccine coverage and year level group, age, gender, campus, or nationality (p > 0.05).
Discussion
The majority of medical students (93.4%) in this study were assessed as needing at least one vaccine. This suggests that there is significant vulnerability to communicable disease among this population, with resultant public health implications for hospital staff and patients and the university community. This population's strong belief in the importance of occupational vaccination is inconsistent with their low levels of immunity, suggesting that there is a need for research into other factors that influence medical student vaccination uptake.
Catch-up immunisations were recommended for 74% of medical students in a paediatric hospital in Basel, Switzerland [9], which is comparable to the findings of this study. Similarly, less than 30% of medical and nursing students in an Athenian study were in full compliance with recommended vaccinations [10]. In this study, documented immunity to recommended vaccines was lower than that demonstrated in Lille, France-72.7% of the French medical students had proof of immunity to pertussis, 78% had proof of immunity to measles, and 78.9% had proof of immunity to varicella [11]. Hepatitis B immunity was documented in 91.8% of French healthcare students, which is similar to our findings [12]. Among medical students studying at James Cook University, there was no statistically significant difference in immunity between those who grew up in Australia and those who grew up in other parts of the world. There was also no difference between the medical school campuses. These negative findings serve to reiterate that sub-optimal medical student immunity is not limited by geographic boundaries.
The rates of influenza vaccine uptake in this study were higher than the rates observed among medical students in Strasbourg, Warsaw, and Teheran (29.7%, 15.2%, and 4.7%, respectively) [13]. Sub-optimal influenza vaccination in other healthcare worker populations sets a poor example for medical students. A review of the literature pertaining to seasonal influenza vaccination among Australian hospital healthcare workers found that rates ranged from 16.3% to 58.7% (29% to 53% for physicians) [14]. The majority of studies into healthcare worker immunity have focused on seasonal influenza, but there is research that has demonstrated poor Australian healthcare worker compliance with recommended vaccination schedules [15]. These findings suggest that doctors may be poor vaccination role models for medical students.
In this study, medical students in their clinical years were more likely to be vaccinated against seasonal influenza than their more junior pre-clinical colleagues. There are several potential explanations for this finding. Knowledge, specifically regarding disease severity and vaccine safety, has previously been identified as an important determinant of medical student immunisation behaviour [13,16,17]. Higher rates of influenza immunity in more senior students could therefore be attributed to acquisition of knowledge during medical school. However, year level was not associated with increased immunity to any of the other diseases surveyed. This could suggest a difference in the way that influenza teaching is delivered. Alternatively, it is possible that clinical medical students are more often opportunistically included in seasonal staff vaccination clinics during their hospital and community placements.
The levels of documented hepatitis B immunity among North Queensland medical students are high, which is attributable to the mandatory government and university requirement to be immune to this disease. Interestingly, documented hepatitis A immunity was similar to the other diseases surveyed, despite it not being included in the Australian childhood vaccination schedule. Hepatitis A vaccination is routinely recommended to travellers, thus the holiday patterns of medical students may be impacting their vaccination behaviour (other potential influences, although admittedly less likely, include the desire to safely consume raw oysters and semi-dried tomatoes [18,19]). Another explanation is that North Queensland medical students have responded to the recommendation that all healthcare workers and students practising in Indigenous Australian communities have immunity to hepatitis A. Nevertheless, this seems less likely, given the generally poor uptake of the non-mandatory vaccinations in this population.
The rate of self-reported immunity to pertussis was higher in this study than in the general Australian adult population. In the 2009 Adult Vaccination Survey, conducted by the Australian Institute of Health and Welfare, 11.3% of respondents reported being vaccinated against pertussis as an adult or adolescent. The Adult Vaccination Survey also noted that only 18.9% of adult Australians received the free pandemic (H1N1) influenza vaccine in 2009 [20]. Rates of seasonal influenza uptake in this medical student population were similar to those reported in Australian adults in 2014 (39%); however, the students fared more favourably when compared to younger Australian adults (24% influenza vaccine uptake in those aged 18-24 years; 23% in those aged 25-34 years) [21].
The first limitation of this study is the low survey response rate (25%), although the year level distribution of the sample population is well matched to the known demographic characteristics of the James Cook University medical student population. A second limitation of this study is its reliance on self-reporting of immunisation status (due to resource and funding constraints that precluded collection of serological data). However, it is recognised that the most important requirement for assessment of vaccination status is to have written documentation of vaccination, and for most diseases, there are no adverse events associated with re-vaccination of adults [3]. Thus, the category system specifically requesting documented proof of immunity that was utilised in this study should be considered an acceptable method of confirming vaccination history when serological data is unavailable.
Outcomes and Recommendations
This study highlighted the important need to address the vaccination rates of medical students, a population who, theoretically, should be extremely motivated to ensure their immunity to common vaccine-preventable diseases. Shortly following the acquisition of these survey results, qualitative research was undertaken on this medical student population to identify the determinants of their vaccination behaviour. Strategies to improve immunity were identified and published [22]. The vaccination policy for healthcare students at James Cook University has subsequently been updated since these results were provided to the organisation in 2014-prior to clinical exposure, students are now required to provide proof of immunity to measles, mumps, rubella, varicella, and pertussis, in addition to hepatitis B [23]. There has also been a medical student-led influenza vaccination campaign, which received national acclaim in 2017 [24]. Future research efforts could focus on exploring the impact of mandatory vaccination on medical student beliefs and behaviours.
It is evident that medical students cannot be relied upon to ensure their own immunity. Other health service and educational providers must reflect on their current immunisation policies and take action in order to protect the health of their students and the wider community.
Author Contributions: E.F. conceived and designed the experiment with assistance from R.S. and C.H.; E.F. performed the experiments and analysed the data; E.F. prepared the original manuscript, with editing and analysis from R.S. and C.H.
Funding: This research was funded by a medical honours grant from James Cook University, grant number JCU-QLD-416461. The funding sponsor had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. | 2018-10-14T17:56:42.162Z | 2018-05-23T00:00:00.000 | {
"year": 2018,
"sha1": "9ef518bc43182c9183b9233829a49b3a060de4c8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/tropicalmed3020052",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9ef518bc43182c9183b9233829a49b3a060de4c8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216044482 | pes2o/s2orc | v3-fos-license | Dreams and Dissociation—Commonalities as a Basis for Future Research and Clinical Innovations
Dissociative symptoms refer to a spectrum of non-ordinary disruptive experiences from “zoning out,” to out-of-body experiences, to outright distortions in the fundamental sense of self, with Dissociative Identity Disorder (DID) as its most debilitating manifestation (Holmes et al., 2005). Dissociative symptoms range from 1 to 3% among general population and from 4 to 14% among psychiatric patients (Sar, 2011). In psychiatric patients, dissociative symptomatology can have a serious impact. Mean impairment scores of patients with dissociative disorders on measures of psychosocial, occupational, and interpersonal functioning are >50% higher than those of patients with other mental disorders (Mueller-Pfeiffer et al., 2012), and dissociative symptomatology is strongly related to self-harm and multiple suicide attempts (Foote et al., 2008). Relative to 17 other mental disorders, patients with dissociative disorders consumed the highest number of outpatient therapy sessions (Mansfield et al., 2010). Importantly, although dissociative symptoms are most salient and persistent in dissociative disorders such as DID, they are considered transdiagnostic phenomena and comorbid with many other conditions (e.g., psychotic illness, anxiety, depression). No evidence-based treatment consensus exists for dissociative disorders due to lingering controversies. Two perspectives, the trauma model and sociocognitive model, have vied for acceptance and empirical support over decades. The trauma model posits a causal relation between trauma and dissociative symptoms (Dalenberg et al., 2012; Vissia et al., 2016). Accordingly, dissociation is viewed as a coping mechanism triggered by childhood trauma in which distinct personality states, for example, arise to detach from emotionally overwhelming memories (Van der Hart et al., 2006). In contrast, the sociocognitive model contends that dissociative symptoms are shaped by social learning and cultural expectancies regarding clinical features of dissociation, as portrayed by media and reified by inadvertent therapist cueing. The model assumes that vulnerable patients come to adopt a narrative of being populated by distinct selves to explain mood swings, impulsive actions, and other puzzling behaviors (Lynn et al., 2019). Rapprochement between these models is needed and could be facilitated by fundamental research that clarifies antecedents and correlates of dissociation, including co-occurring sleep problems, that would potentially facilitate treatment consensus and innovation.
DISSOCIATION AND SLEEP
Previous studies have secured moderate-to-high correlations of dissociative symptoms with sleep disturbances as well as provided evidence for disturbed sleep playing a causal role in dissociative symptoms (Watson, 2001; Van der Kloet et al., 2012a; Merckelbach et al., 2017; Schimmenti, 2017). An important theoretical and research question is whether dissociative symptoms, which range on a continuum of severity, are triggered by disruptions in memory and metacognitive processing that occur during sleep states, with disruptions during REM sleep of particular relevance, that carry over to waking life. When sleep and dream systems become impaired, memory processes during (REM) sleep become dysregulated and engender information overload from internal and external sources that (a) overwhelms cognitive processing, (b) impairs integration of self-relevant information and memories, and (c) induces dissociative symptoms, which are potentially manifested in fragmented (i.e., dissociated), dream-like mentation, illusions, delusions, memory distortions, and, ultimately, a disturbed sense of self (McNamara, 2013). Dream-like phenomena, which are ordinarily confined to sleep, thus intrude into waking consciousness and are expressed as dissociative symptoms, including depersonalization and derealization, and, in the extreme case, identity fragmentation evident in DID.
CONSCIOUSNESS AND DREAMING
Conscious states may be defined as representations of brain states that arise as a function of shifting dynamics of large-scale neuronal networks (Freeman, 2000; Varela et al., 2001; Bob and Louchakova, 2015). Libet (2006) posited that subjective experience is represented in the brain by synchronized activities of large numbers of neurons, referred to as a "cerebral mental field." This conceptualization affords description of subjective experiences in terms of constantly morphing brain activation patterns that not only generate consciousness via intricate feedback loops, but consciousness, itself, reciprocally affects brain dynamics. Neural systems thus create mental representations of perception, cognitive functioning, memory, and consciousness more broadly (Freeman, 2000; Singer, 2001). Interestingly, stressful experiences can affect the neural mechanisms that enable integration of contents of consciousness, potentially fueling dissociation of conscious awareness and memory (Bob, 2003; Spiegel, 2012) and disrupting sleep.
What is the role of dreams in processes related to dissociation (failure to integrate mental content into conscious awareness), defined conventionally as "a disruption of and/or discontinuity in the normal, subjective integration of one or more aspects of psychological functioning, including-but not limited to-memory, identity, consciousness, perception, and motor control" [DSM-5; American Psychiatric Association, 2014]? Dissociative states not only occur during wakefulness among healthy individuals and those with mild dissociative symptoms, but they are also manifested during dreams, typically related to shifts in dream scenes and particularly during nightmares and recurrent dreams (Hartmann, 1998; Bob, 2004; Schonhammer, 2005). Among 43 patients diagnosed with dissociative identity disorder (DID), 57% indicated that their "alter personalities" presented as dream characters in their dreams (Barrett, 1994). Dream characters can be viewed as hallucinated projections of the fragmented self; dreaming, in turn, may reflect dissociative states represented during memory processing in REM sleep (Bob, 2004; Stickgold and Walker, 2005).
In contrast with the synchronized activity of large groups of neurons in the "cerebral mental field," in some states of consciousness, such as dreaming, meditation, divergent thinking, and dissociative states, neural network patterns may function in a more chaotic, unstable, and non-linear fashion (Kahn and Hobson, 1993; Bob, 2003), in which a small perturbation in the system can resonate and induce large changes in the system's behavior (Bob and Louchakova, 2015). For example, flexibility of mental processes facilitates generating patterns that create the subjective experience of coming up with "novel" ideas (Freeman, 2000). During chaotic brain states, activities usually take place in various regions of the brain acting simultaneously but independently. When the strength of the associations and information processing systems among these regions is greatly attenuated or impoverished and mental contents become fragmented and disorganized, dissociated mental states may arise (Bob, 2003). The sudden transitions of dream objects and sceneries experienced in dreams may reflect dissociation related to rapid shifts in neural patterns arising from chaotic or, as they are also called, self-organizing neural activities, mainly stemming from the ponto-geniculo-occipital (PGO) systems in the brain (Kahn and Hobson, 1993).
LUCID DREAMING
A particular type of dreaming may be of special interest in this respect: lucid dreaming. According to Voss and Hobson (2014), insight, control, and dissociation represent the defining criteria for lucid dreaming. Insight refers to metacognitive reflective thought, i.e., the dreamer is aware that she is dreaming, and it is considered the core criterion. Control allows the dreamer to change the dream plot, and dissociation happens when the dreamer experiences the dream as feeling unreal (similar to waking derealization) or sees herself from a distance [similar to waking depersonalization; (Voss et al., 2018)]. This third person perspective can also entail the dream experience itself. Dreamers then experience the dream sequence from the outside, as if the dream were a movie. By this definition, lucid dreaming can be viewed as "a dissociative mental state of consciousness in which the dream self separates from the ongoing flow of mental imagery." (Voss et al., 2018, p.3). However, in lucid dreams a sense of reality or awareness of dreaming is superimposed on the "unreality" of the dream, whereas in depersonalization/derealization, a sense of unreality is superimposed on the "reality" of mundane waking existence. Thus, in lucid dreams meta-consciousness is preserved to a greater extent than in non-lucid dreams, whereas in depersonalization/derealization, meta-consciousness of the self and the surround is compromised relative to everyday normative experiences. These differences between lucid dreaming and dissociative experiences might explain why the correlation between measures of lucid dreaming and dissociation, while statistically significant, is weaker than the correlation between unusual sleep experiences (e.g., sleep paralysis, hypnagogic hallucinations, nightmares) and dissociation (Van der Kloet et al., 2012a). We suggest that such "dream-like" experiences infiltrate waking consciousness to create an experience of unreality that is expressed as dissociative experiences and symptoms.
Can dissociation be experienced as beneficial? Dissociation is usually transient during waking and associated with daydreaming and fantasy proneness in healthy adults (Van der Kloet et al., 2012b), at the mild end of the dissociation continuum. In the context of psychiatric diagnoses, some theorists have described dissociation as a protective mechanism to cope with emotional pain in posttraumatic stress disorder via downregulation of the limbic system, thereby suppressing unconscious affect (Lanius et al., 2010) and enabling self-conscious emotions via activation of the ventral prefrontal cortex [VPFC; (Damasio, 1988)]. In psychosis, dissociation is often undesirably associated with positive symptoms. However, Dalle Luche (2002) advanced a nuanced view by proposing that dissociative thought is more fleeting in the early stages of psychosis, whereas the loss of a sense of self is more prominent in the later stages of illness. Viewed in this light, dissociative cognition in lucid dreaming mirrors the type of dissociation experienced in the early stages of psychosis. Although attempts to control dream content can disturb sleep, an increase of lucid dreaming and accompanying dissociative thought may also be desirable, as the heightened insight/meta-consciousness may be associated with a weakening of psychosis-like experiences. In general, lucidity in dreaming has been linked with positive rather than negative emotions. In normal REM dreaming, due to attenuation of the VPFC, unconscious emotions take the stage. In lucid dreaming, with the VPFC switched on again, self-conscious emotions take the lead and unconscious emotions are down-regulated. This process engenders an overall reduction of emotionality compared with regular dreams (Voss et al., 2018). Indeed, dissociative thought seems to down-regulate negative emotion both in dreaming and during waking (LaBerge and Rheingold, 1991; Voss et al., 2013), with parallels in lucid dreaming and psychiatric illness [but see (Mota et al., 2016)].
FODDER FOR FUTURE
Our discussion implies that it is possible to enhance insight and meta-consciousness via lucid dreaming in patients suffering from psychiatric disorders, such as dissociative and psychotic illness, in order to reduce negative emotions. Training the frontal lobe explicitly to create insight into the delusional features of a dream may provide a foundation for enhancing reflective thought during the daytime as well. Indeed, researchers have piloted lucid dreaming as a clinical treatment in various groups with mixed results (Spoormaker and Van den Bout, 2006; Lancee et al., 2010). However, we suggest that therapists make explicit the purpose of enhancing meta-consciousness across the entire sleep-wake continuum to enhance generalizability of outcomes across the sleep/wakefulness spectrum and continuum of severity of dissociative symptoms. Notably, researchers have successfully treated patients with dissociative identity disorder with transdiagnostic interventions geared to improve sleep, enhance meta-consciousness and emotion regulation, and decrease the fragmentary, hyperassociative thinking that marks both dissociative conditions and dream consciousness (Mohajerin et al., 2019).
Treatment costs of patients with dissociative psychopathology are very high, while psychological interventions are generally not evidence-based and innovative treatments stagnate due to lingering controversies across theoretical camps. This state of affairs also impacts innovation in treating psychiatric conditions (e.g., PTSD, borderline personality disorder, schizophrenia spectrum disorders) with high comorbidity with dissociative conditions. Moreover, dissociative comorbidity is a severity marker signaling poor prognosis.
As ineffective and non-optimal treatments impose considerable burdens on patients and society, novel research programs focusing on the relations among dissociation, the sense of self, and sleep and dreaming are a priority. Studying both the chaotic and the deterministic brain state during sleep and wakefulness may provide insight into important functions of perception, memory, and cognition and what happens when they become dissociated. In doing so, the study of dissociation may provide important clues regarding the nature of human consciousness itself. Importantly, this effort will inform clinicians and researchers alike and serve as an impetus for new treatment studies, including research evaluating interventions targeting dissociative psychopathology via enhancing sleep and metacognitive processing.
AUTHOR CONTRIBUTIONS
DH wrote the original version of the manuscript, which was edited by SL. | 2020-04-22T13:10:49.922Z | 2020-04-22T00:00:00.000 | {
"year": 2020,
"sha1": "5ddab4f26fdc084a3c94f230649a52747df1d1ac",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3389/fpsyg.2020.00745",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5ddab4f26fdc084a3c94f230649a52747df1d1ac",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
221573085 | pes2o/s2orc | v3-fos-license | Click-Free Synthesis of a Multivalent Tricyclic Peptide as a Molecular Transporter
The cellular delivery of cell-impermeable and water-insoluble molecules remains an ongoing challenge to overcome. Previously, we reported amphipathic cyclic peptides c[WR]4 and c[WR]5 consisting of alternate arginine and tryptophan residues as nuclear-targeting molecular transporters. These peptides contain an optimal balance of positive charge and hydrophobicity, which is required for interactions with the phospholipid bilayer to facilitate their application as a drug delivery system. To further optimize them, we synthesized and evaluated a multivalent tricyclic peptide as an efficient molecular transporter. The monomeric cyclic peptide building blocks were synthesized using Fmoc/tBu solid-phase chemistry and cyclization in the solution and conjugated with each other through an amide bond to afford the tricyclic peptide, which demonstrated modest antibacterial activity against methicillin-resistant Staphylococcus aureus (MRSA), Klebsiella pneumoniae, Pseudomonas aeruginosa, and Escherichia coli (E. coli) with a minimum inhibitory concentration (MIC) of 64–128 µg/mL. The tricyclic peptide was found to be nontoxic up to 30 µM in the breast cancer cell lines (MDA-MB-231). The presence of tricyclic peptide enhanced cellular uptakes of fluorescently-labeled phosphopeptide (F’-GpYEEI, 18-fold), anti-HIV drugs (lamivudine (F’-3TC), emtricitabine (F’-FTC), and stavudine (F’-d4T), 1.7–12-fold), and siRNA (3.3-fold) in the MDA-MB-231 cell lines.
Introduction
Multivalency is a pervasive phenomenon that has attracted attention for therapeutic development, with examples including multivalent aptamers, multivalent effects in glycosidase inhibition, and many more [1-4]. Multivalent binding plays an essential role in signal transduction and self-organization in biological systems and is mainly based on multiple simultaneous molecular recognition processes [5,6]. One good example of multivalency in nature is proteins forming multimeric architectures in the form of receptors or enzymes that contain several recognition sites.
Peptide-based multivalent interactions at interfaces of a biological system are of great importance. For example, the multivalency of carefully designed supramolecular peptide ligands was utilized in enhancing binding interaction with β-cyclodextrin (β-CD) to study host-guest relationships [7].
We also discovered that a block of four consecutive tryptophan and positively charged arginine residues generated the amphipathic antibacterial peptide c[R4W4], which demonstrated a minimum inhibitory concentration (MIC) of 4 µg/mL against methicillin-resistant Staphylococcus aureus (MRSA) and 16 µg/mL against E. coli [23]. Furthermore, the structure-activity relationship showed the importance of tryptophan and arginine residues in this pattern for antibacterial activity [24].
Furthermore, to gain a deeper understanding of the c[WR]5 series of peptides and their molecular transporter properties, we developed a series of bicyclic peptides (c[W5G]-(triazole)-c[KR5] and c[W5E]-(β-Ala)-c[KR5]) containing two cyclic peptides (c[W5G] and c[KR5]) joined through a flexible amide or nonflexible triazole linker (Figure 1) and evaluated the delivery of a negatively charged phosphopeptide [22]. It was found that the bicyclic peptide containing the amide linkage, c[W5E]-(β-Ala)-c[KR5], showed higher cellular uptake of a phosphopeptide compared to the monocyclic peptides (c[KR5] or c[WR]4). These data indicated that, for better cellular uptake of these cyclic peptides, an optimal balance of positive charge and hydrophobicity is essential for interactions with the cell membrane and deep penetration into the lipid bilayer [15]. c[WR]5 consists of a balance of hydrophobicity and positive charges. We hypothesized that this peptide template could be extended into a tricyclic peptide platform as a multivalent structure for application in drug delivery.
Most biological interactions are controlled via multivalent interactions with respect to density, spacing, complex interactions, and various dynamic effects [25-28]; thus, designing synthetic systems that effectively engage such interactions is always challenging. Multivalent constructs have been prepared from preorganized cyclopeptides or dendrimers using various methodologies, mainly "click reactions" [2]. The copper-catalyzed "click reaction" has various advantages, but the introduction of copper species into in vivo systems raises concerns of potential toxicity [29-31]. Various strategies, including the development of chelating azides as reagents, are being adopted to reduce copper-induced toxicity [32]. We previously showed higher cellular uptake of a phosphopeptide in the presence of the bicyclic peptide containing the amide linkage, c[W5E]-(β-Ala)-c[KR5], than with the triazole-linked bicyclic peptide c[W5G]-(triazole)-c[KR5] [22], presumably due to the greater flexibility of the amide linker between the two cyclic peptides, which generates a proper conformation for effective interaction with the cell membrane.
Thus, to develop a new multivalent tricyclic peptide as a molecular transporter, we synthesized a tricyclic peptide based on naturally occurring amino acids using a simpler, biocompatible, and biodegradable amide linkage instead of a triazole formed through click chemistry. The tricyclic peptide is a trimer of monocyclic peptides containing alternating hydrophobic (W) and charged (R) residues, in addition to lysine (K) residues whose side-chain amino groups assist in the generation of the amide linkages. Herein, we report the antibacterial activity and molecular transporter properties of the tricyclic peptide.
Materials
Fmoc amino acids, coupling reagents, and amino acid-loaded chlorotrityl resin were obtained from AAPPTec (Louisville, KY, USA). The solvents and chemical reagents were purchased from MilliporeSigma (Milwaukee, WI, USA) and used without further purification. Crude peptides were purified using reverse-phase high-performance liquid chromatography (RP-HPLC) on a Hitachi L-2455 system (Canby, OR, USA) with a C18 Phenomenex column (Prodigy, 10 µm, 2.1 cm × 25 cm), at a flow rate of 10 mL/min with detection at 214 nm, using a gradient of 0-100% acetonitrile (0.1% trifluoroacetic acid (TFA)) in water (0.1% TFA) over 60 min. Peptides were characterized for confirmation of their exact mass using a Bruker GT 0264 (Fremont, CA, USA) matrix-assisted laser desorption/ionization (MALDI) mass spectrometer with α-cyano-4-hydroxycinnamic acid as a matrix under positive mode.

A coupling solution of the first Fmoc-protected amino acid and coupling reagents in DMF (20 mL) was added to the resin. The resin was agitated under nitrogen for 30 min to couple the first amino acid. After 30 min, the reaction mixture was drained and the resin was washed with DMF (10 mL × 3). A solution (20 mL) of 20% piperidine in DMF (v/v) was added to remove the N-terminal Fmoc group, with agitation of the peptidyl resin for 20 min, twice, followed by washing with DMF (10 mL × 3). In a similar way, the subsequent Fmoc-protected amino acids were coupled according to the required sequence of the linear peptides. Once the linear protected peptide was assembled on the peptidyl resin, the side-chain-protected peptide was cleaved from the resin for N- to C-terminal cyclization. The peptidyl resin was stirred with a freshly prepared cleavage cocktail containing trifluoroethanol (TFE)/acetic acid/dichloromethane (DCM) (2:1:7, v/v/v, 50 mL) for 2 h. The resin was filtered off, and the supernatant was evaporated to dryness to give the side-chain-protected linear peptides (2, 8, and 13). The side-chain-protected peptide (300 mg) was placed in a 2 L round-bottom flask under nitrogen. DMF and DCM (2:1, v/v, 1.2 L) were added to the mixture. A solution of 1-hydroxy-7-azabenzotriazole (HOAt) (236.8 mg) in DMF and N,N'-diisopropylcarbodiimide (DIC) (269.4 µL) were added. The reaction was stirred for 24 h under nitrogen. The solvents were evaporated under vacuum, and the side chains were deprotected by agitation with a freshly prepared cleavage cocktail containing TFA/thioanisole/anisole/1,2-ethanedithiol (EDT) (90:5:2:3, v/v/v/v, 10 mL) for 2 h. The crude peptides were precipitated by the addition of cold diethyl ether (Et2O) and centrifugation, and purified by RP-HPLC. The peptides were separated by eluting the crude peptides at 10.0 mL/min using a gradient of 0-100% acetonitrile (0.1% TFA) in water (0.1% TFA) over 60 min and then characterized by MALDI-TOF mass spectrometry. The desired fractions were pooled and lyophilized to yield the cyclic peptides. MALDI spectra of intermediates and synthesized compounds and selected analytical HPLC traces are provided in the Supplementary Materials.

To obtain compound 4, the protected cyclic peptide 3 (200 mg) was placed in a dry round-bottom flask fitted with a rubber septum under an inert N2 atmosphere. A solution containing Pd(PPh3)4 (2.01 g, 0.00174 mmol, 3.0 equiv) and CHCl3:AcOH:N-methylmorpholine (37:2:1, v/v/v, 25 mL) was added, and the mixture was stirred for 2 h, followed by evaporation of the solution on a rotary evaporator. A solution of 0.5% DIPEA and sodium diethyldithiocarbamate (0.5% w/w) in DMF was added to the evaporated residue to remove the catalyst.
The solvents were removed by evaporation. The crude mixture was washed with water (8-10 mL) to remove the side products, followed by centrifugation (1200 rpm, 3 × 5 min). After centrifugation, the crude peptide was dried overnight to obtain the protected cyclic peptide 4 (136 mg, 69%). MALDI-TOF (m/z) for 4, [C 88

To obtain the Dde-deprotected cyclic peptide 5, a solution of 2% hydrazine monohydrate in DMF (50 mL) with 7.8 mL of allyl alcohol was added to peptide 3 (200 mg) and stirred at room temperature for 2 h. The reaction mixture was evaporated to dryness under vacuum and washed with cold water. The obtained solid was dried to yield the side-chain-protected cyclic peptide 5 (124 mg, 65%).

Using the general protocol described in Section 2.2.1, the monocyclic peptide was assembled on H-Arg(Pbf)-2-chlorotrityl resin (1, 1 g, 0.58 mmol) using Fmoc-protected amino acid building blocks, such as Fmoc-Trp(Boc)-OH (916.3 mg), Fmoc-Arg(Pbf)-OH (1128 mg), and Fmoc-Lys(Dde)-OH (926.7 mg), according to the sequence of amino acids in the diamine cyclic peptide 10, followed by cleavage from the solid support to afford the side-chain-protected linear peptide (8). Compound 8 was cyclized and Dde-deprotected using HOAt/DIC and 2% hydrazine in DMF, respectively, to provide the side-chain-protected cyclic peptide [K-W(Boc)-R(Pbf)-W(Boc)-R(Pbf)-K-W(Boc)-R(Pbf)-W(Boc)-R(Pbf)] (9, 321 mg, 89%). Cyclic peptide 9 was divided into two parts (120 mg and 200 mg).
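The mass-and-percentage yields quoted above (e.g., 136 mg, 69%; 124 mg, 65%) follow the usual mole-based calculation. A minimal sketch of that arithmetic is given below; the molecular weights are hypothetical placeholders for illustration, not values from the paper.

```python
def percent_yield(obtained_mg, mw_product, start_mg, mw_start):
    """Percent yield from masses (mg) and molecular weights (g/mol),
    assuming 1:1 stoichiometry between starting peptide and product."""
    moles_obtained = obtained_mg / mw_product
    moles_theoretical = start_mg / mw_start
    return 100.0 * moles_obtained / moles_theoretical

# Hypothetical example: 200 mg of a MW 2500 precursor giving 136 mg of a MW 2400 product
print(round(percent_yield(136, 2400, 200, 2500), 1))  # ~70.8%
```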
To the first part of peptide 15 (154 mg), a freshly prepared cleavage cocktail of TFA/thioanisole/anisole/1,2-ethanedithiol (EDT) (90:5:2:3, v/v/v/v, 10 mL) was added, and the mixture was stirred at room temperature for 2 h to remove the side-chain protecting groups. The second part of peptide 15 (200 mg) was dissolved in DCM (25 mL) and kept at 0 °C. Triethylamine (121.3 µL) was added, and a solution of succinic anhydride (87.1 mg) in DCM was added slowly to the reaction mixture at 0 °C. The reaction mixture was stirred at room temperature for 2 h. DCM was evaporated under vacuum, and the residue was washed with water and dried to obtain the crude product (17, 196 mg, 98%). The side-chain protecting groups of peptide 17 were removed by reaction with a freshly prepared cleavage cocktail of TFA/thioanisole/anisole/1,2-ethanedithiol (EDT) (90:5:2:3, v/v/v/v, 10 mL) at room temperature for 2 h.
The cleaved peptides were precipitated and centrifuged to obtain the fully deprotected cyclic peptides containing a monoamine, [KWRWRWRWR] (16), and a monocarboxylic acid (18), respectively.
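Transferring the RP-HPLC purification used throughout this work (0-100% acetonitrile with 0.1% TFA over 60 min at 10 mL/min) to another setup involves only linear-gradient arithmetic. The sketch below is purely illustrative and is not part of the original protocol.

```python
def gradient_composition(t_min, duration_min=60.0, b_start=0.0, b_end=100.0):
    """Percent acetonitrile (%B) at time t_min in a linear gradient."""
    t = min(max(t_min, 0.0), duration_min)
    return b_start + (b_end - b_start) * t / duration_min

def solvent_volumes(flow_ml_min=10.0, duration_min=60.0):
    """Total, acetonitrile, and water volumes (mL) for one 0-100% linear run.
    For a linear ramp, the average %B is 50%, so each solvent is half the total."""
    total = flow_ml_min * duration_min
    return total, total * 0.5, total * 0.5

print(gradient_composition(30))   # 50.0 %B at the gradient midpoint
print(solvent_volumes())          # (600.0, 300.0, 300.0) mL per run
```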
Synthesis of Tricyclic Peptide 20
To a round-bottom flask, the side-chain-protected cyclic peptides, the diamine peptide (9) (10.0 mg, 3.29 × 10−6 mol) and the monocarboxylic acid peptide (17) (39.6 mg, 13.18 × 10−6 mol), were added in 10 mL of anhydrous DMF under stirring and inert conditions, using a septum and a purge of N2. The reaction mixture was kept at 0 °C, followed by slow addition of N-ethyl-N′-(3-dimethylaminopropyl)carbodiimide hydrochloride (EDC·HCl) (3.1 mg, 16.47 µmol) and 4-dimethylaminopyridine (DMAP) (0.2 mg, 1.65 µmol) with stirring for 1 h. The reaction mixture was allowed to reach room temperature and stirred for 24 h. The completion of the reaction was monitored by MALDI mass analysis of an aliquot of the reaction mixture after complete deprotection. Once the reaction was completed, the reaction mixture was evaporated to dryness to yield peptide 19 (24 mg, 80%), and the residue was treated with a freshly prepared cleavage cocktail of TFA/thioanisole/anisole/1,2-ethanedithiol (EDT) (90:5:2:3, v/v/v/v, 10 mL) at room temperature for 2 h. The peptide was precipitated and centrifuged to obtain the fully deprotected tricyclic peptide 20 (10 mg crude, 78%), which was purified by HPLC as described above to give 6 mg of pure compound. MALDI-TOF (m/z) for 20, [C 236

Bacterial solution (100 µL) was added to all the wells and incubated statically at 37 °C overnight. The minimum concentration at which bacterial growth was not visible to the eye was reported as the MIC, and the experiments were performed in triplicate.
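MIC values such as the 64-128 µg/mL range reported later are conventionally read from a two-fold serial dilution series across the plate. The sketch below generates such a series and picks the MIC as the lowest concentration with no visible growth; the starting concentration and well count are assumptions made for illustration, not details from this protocol.

```python
def twofold_series(top_ug_ml=256.0, n_wells=8):
    """Concentrations (ug/mL) across a two-fold serial dilution row."""
    return [top_ug_ml / 2 ** i for i in range(n_wells)]

def read_mic(concentrations, growth_visible):
    """MIC = lowest concentration at which no growth is visible."""
    no_growth = [c for c, g in zip(concentrations, growth_visible) if not g]
    return min(no_growth) if no_growth else None

concs = twofold_series()                        # [256, 128, 64, 32, 16, 8, 4, 2]
growth = [False, False, True, True, True, True, True, True]
print(read_mic(concs, growth))                  # 128.0 ug/mL
```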
Cellular Cytotoxicity Assay
The human breast cancer (MDA-MB-231) cell line was grown in 75 cm² cell culture flasks in Dulbecco's modified Eagle's medium (DMEM), supplemented with 10% fetal bovine serum (FBS) and 1% penicillin-streptomycin solution (10,000 units of penicillin and 10 mg of streptomycin in 0.9% NaCl), in a humidified atmosphere of 5% CO2 and 95% air at 37 °C. The CellTiter 96® AQueous One Solution cell proliferation assay (MTS; Promega, Madison, WI, USA) was utilized to evaluate MDA-MB-231 cell viability. Cells were seeded into a 96-well plate (10^4 cells/100 µL) in complete media and incubated overnight at 37 °C with 5% CO2. Different concentrations (0-300 µM) of the peptide solution (10 µL) were added to the wells to give final peptide concentrations of 0-30 µM. The cells were incubated again (37 °C, 5% CO2) for 24 h; then MTS reagent (20 µL) was added to each well, followed by incubation for 2 h under the same conditions. The absorbance of the resulting solution was measured at 490 nm using a microplate reader to detect live cells. Cells without peptide served as the control. DMSO (30% v/v) was used as a positive control. The compound was used in aqueous solution. All the tests were performed in triplicate.

5(6)-Carboxyfluorescein (FAM or F′) was used as a negative control. After 3 h of incubation, the medium containing the peptide was removed. The cells were washed twice with PBS, followed by digestion with 0.25% trypsin/0.53 mM EDTA for 5 min to remove any false surface binding. Then PBS was added to wash the cells. The cells were centrifuged (600 rpm, 5 min) to collect the pellet. Finally, the obtained pellet was resuspended in a buffer designated for flow cytometry and analyzed by flow cytometry (FACSCalibur; Becton, Dickinson & Co., Franklin Lakes, NJ, USA) using the FITC channel and CellQuest software. The data depicted in the results were determined using the mean fluorescence signal of the 10,000 cells collected. Flow cytometry analysis was performed in triplicate.
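Viability in the MTS assay described above is typically reported as background-corrected absorbance of treated wells relative to untreated controls. A minimal sketch of that calculation follows; the absorbance values are made up for illustration.

```python
def percent_viability(a_treated, a_control, a_blank=0.0):
    """Viability (%) from 490 nm absorbances, relative to untreated control."""
    return 100.0 * (a_treated - a_blank) / (a_control - a_blank)

replicates = [1.18, 1.21, 1.16]                   # hypothetical treated wells
control = 1.22                                    # hypothetical untreated mean
values = [percent_viability(a, control) for a in replicates]
print(sum(values) / len(values))                  # mean viability across triplicate
```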
siRNA Delivery
Peptide/siRNA complexes were made by simple mixing of fluorescently labeled siRNA (FAM-siRNA, Qiagen, Cat. No. 1027292) and peptide at different N/P ratios. First, FAM-siRNA was added to 150 mM NaCl solution. Tricyclic peptide 20 was dissolved in sterilized water to make a stock concentration of 1 mM. The peptide solution was mixed gently with the siRNA/saline mixture at different N/P ratios. The resulting mixtures were then allowed to stand for 30 min at room temperature for complete complex formation. The N/P ratio was determined by the following equation: N/P = (number of moles of peptide × number of nitrogens)/(number of moles of siRNA × 48), where 48 represents the average number of phosphates in each siRNA molecule.
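The N/P equation above translates directly into a helper for deciding how much peptide to mix with a fixed amount of siRNA. The number of protonatable nitrogens per peptide must be counted from the sequence by the user; the values used below (20 pmol siRNA, 36 nitrogens) are placeholders, not figures from the study.

```python
def peptide_moles_for_np(np_ratio, sirna_moles, nitrogens_per_peptide,
                         phosphates_per_sirna=48):
    """Moles of peptide for a target N/P ratio, rearranged from
    N/P = (peptide moles x nitrogens) / (siRNA moles x phosphates)."""
    return np_ratio * sirna_moles * phosphates_per_sirna / nitrogens_per_peptide

sirna_mol = 20e-12                        # hypothetical 20 pmol of siRNA
for np_ratio in (10, 20, 40):             # the N/P ratios used in this study
    pep_mol = peptide_moles_for_np(np_ratio, sirna_mol, nitrogens_per_peptide=36)
    print(f"N/P {np_ratio}: {pep_mol * 1e9:.2f} nmol peptide")
```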
Cells were seeded in 24-well plates at ~200,000 cells/mL. After overnight seeding, peptide/siRNA complexes were added. Peptide/siRNA complexes were made at peptide:siRNA N/P ratios of 10, 20, and 40. After 24 h of incubation at 37 °C, the cells were washed with Hanks' balanced salt solution (HBSS), trypsinized, and fixed with 3.7% formaldehyde solution. Quantification of siRNA uptake was performed by flow cytometry (BD FACSVerse, BD Biosciences; San Jose, CA, USA) using the FITC channel to evaluate the intracellular fluorescence signal. The mean fluorescence of cells in the selected cell population was determined. The results were calibrated by gating against the control ("No Treatment") group in such a way that the autofluorescent cells became ~1% of the total cell population.
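The "~1% autofluorescent cells" calibration described above corresponds to placing the FITC gate at roughly the 99th percentile of the untreated-control fluorescence distribution. A sketch of that gating step on synthetic data follows; the log-normal control distribution is an assumption used only to make the example runnable.

```python
import numpy as np

rng = np.random.default_rng(0)
control = rng.lognormal(mean=2.0, sigma=0.5, size=10_000)   # untreated cells
treated = rng.lognormal(mean=3.5, sigma=0.5, size=10_000)   # complex-treated cells

gate = np.percentile(control, 99)          # ~1% of control falls above the gate
percent_positive = 100.0 * np.mean(treated > gate)
mean_fluorescence = treated.mean()

print(round(gate, 1), round(percent_positive, 1), round(mean_fluorescence, 1))
```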
Statistical Analyses
To evaluate the statistical significance of the results, a single-factor ANOVA was carried out in MS Excel 2016. In the case of the cytotoxicity assay, cells without treatment were considered the control and compared with the treated groups. For cellular uptake studies, cells treated with the drug alone were considered the control and compared with the drug/carrier mixture. A p-value of less than 0.05 was considered statistically significant and is indicated in the Figures.
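The single-factor ANOVA described above can be reproduced outside of Excel with SciPy. A sketch with made-up triplicate uptake values; the numbers are placeholders, not data from the study.

```python
from scipy.stats import f_oneway

control = [100.0, 98.5, 101.2]            # drug alone (hypothetical values)
treated = [910.0, 885.0, 930.0]           # drug + tricyclic peptide 20

f_stat, p_value = f_oneway(control, treated)
print(f"F = {f_stat:.1f}, p = {p_value:.4g}")
if p_value < 0.05:
    print("statistically significant (p < 0.05)")
```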
Chemistry
For the synthesis of the click-free tricyclic peptide, we designed and followed several strategies. In the first approach, we started with the synthesis of two cyclic peptides, one containing a free carboxylic acid and one a free amino group, to perform intermolecular conjugation between them, or to oligomerize a cyclic peptide containing both a free amino group and a free carboxylic acid [33]. Scheme 1 depicts the synthesis of the monocyclic peptides and the efforts to perform conjugation toward bicyclic or tricyclic peptides. Square brackets [] denote cyclic peptides obtained through N- to C-terminal cyclization, whereas parentheses () denote linear peptides.
The side-chain-protected linear peptide (2) was assembled on 2-chlorotrityl resin starting from H-Arg(Pbf)-2-chlorotrityl resin (1) using the Fmoc/tBu solid-phase peptide synthesis strategy, followed by cleavage of the protected peptide from the resin using a cleavage cocktail (TFE:DCM:AcOH) at room temperature to afford 2. Linear protected peptide 2 was cyclized under dilute conditions using HOAt and DIC to yield protected peptide 3 according to the previously reported procedure [16]. Cyclic peptide 3 contains an orthogonal lysine protecting group, Dde (1-(4,4-dimethyl-2,6-dioxocyclohex-1-ylidene)ethyl), at the epsilon amino group of the lysine side chain. The Dde group was selectively removed with 2% hydrazine monohydrate solution in DMF to generate a cyclic peptide containing a free NH2 group (5). To obtain the cyclic peptide with a free carboxylic acid, the orthogonal allyl ester protecting group of aspartic acid (OAll) in cyclic peptide 3 was selectively removed under mild conditions with a Pd(0) catalyst to afford 4. Cyclic peptides 4 and 5 were reacted in the presence of different coupling reagents, but the conjugation was not successful, possibly due to the steric hindrance of the bulky groups present in both peptides. Alternatively, cyclic peptides 4 and 5 were further deprotected to remove the remaining orthogonal protecting groups, using 2% hydrazine monohydrate solution in DMF and tetrakis(triphenylphosphine)palladium(0) under mild conditions, respectively, to afford cyclic peptide 6, which was completely deprotected to remove the Pbf and Boc groups and give the bifunctional cyclic peptide [DWRWRKWRWR] (7), containing both free amino and carboxylic acid groups. The conjugation of this cyclic peptide with itself failed to generate any dicyclic peptide.

Scheme 1. Synthesis of monocyclic peptides containing a free amino group, a free carboxylic acid, or both.
In the second approach, we attempted to explore the enzymatic condensation polymerization between a diamine-containing cyclic peptide 10 and a diacid-containing cyclic peptide 12 using Novazyme-435, a lipase usually used for polyamide or polyester synthesis [34,35]. Thus, we included two lysine residues in the sequence of the cyclic peptide to obtain the diamine, which could also be further modified from the protected intermediate peptide 9 to the diacid 12 by reaction with succinic anhydride. Scheme 2 depicts the synthesis of the cyclic peptide building blocks (10 and 12).
The synthesis started with the side-chain-protected linear peptide 8, followed by cyclization and Dde deprotection to yield the side-chain-protected cyclic peptide 9. Peptide 9 was divided into two parts; one part was completely deprotected on the side chains to afford the diamine-containing cyclic peptide [KWRWRKWRWR] (10). The second part of cyclic peptide 9 was reacted with succinic anhydride under anhydrous conditions to obtain the diacid-functionalized side-chain-protected peptide 11. Complete deprotection afforded the cyclic peptide containing two free carboxylic acids (12).
The enzymatic synthesis of the multivalent cyclic peptide, i.e., polymerization of peptide 9 with peptide 11 and of peptide 10 with peptide 12, was attempted using Novazyme-435 as a catalyst at 50 °C, 60 °C, and 70 °C without success, presumably because the bulky cyclic peptides could not access the catalytic triad of the enzyme.

Scheme 2. Synthesis of cyclic peptides containing two free amino groups or two free carboxylic acids.
In the third approach, we designed and synthesized cyclic peptides with a free monoamine functional group (16) and a free monocarboxylic acid functional group (18), as shown in Scheme 3. The protected linear peptide (13) was synthesized, followed by cyclization to afford cyclic peptide 14.
The Dde group was removed using hydrazine to afford the protected peptide 15, which was divided into two parts. The first part underwent complete deprotection to yield the monoamine-containing cyclic peptide [KWRWRWRWR] (16). The second part was functionalized by reaction with succinic anhydride to afford the monoacid-containing cyclic peptide 17. Complete deprotection of peptide 17 afforded the monocarboxylic acid-containing cyclic peptide 18.
Finally, the conjugation of the monocarboxylic acid-containing cyclic peptide 17 with the diamine cyclic peptide 9 was performed using EDC/DMAP as coupling reagents under anhydrous conditions for 24 h to obtain the side-chain-protected tricyclic peptide 19 (Scheme 4). Complete deprotection of the side chains of tricyclic peptide 19 yielded the multivalent tricyclic peptide 20. The purity of the peptide was confirmed by analytical HPLC.
The analysis of our synthetic effort shows that conjugation of a bifunctional cyclic peptide with protected side chains, or of the unprotected cyclic peptide (first approach), was not successful due to the steric hindrance associated with the side chains of the tryptophan residues in the cyclic peptide. Steric hindrance may also arise from the use of the aspartic acid side-chain carboxylic acid for conjugation, which lies in close proximity to the backbone of the cyclic peptide. Therefore, we tried a lipase-family enzyme, Novazyme-435, to facilitate conjugation and/or polymerization of the diamine- or diacid-containing cyclic peptides with protected or unprotected side chains. However, MALDI analysis did not show any conjugation. Novazyme-435 has been reported for the synthesis of polyamides or polyesters from small molecules but gave no success with cyclic peptides in our hands. Finally, the diamine cyclic peptide from the second approach and the carboxylic acid monofunctionalized through a succinate linker were examined (third approach) using EDC and DMAP as classical peptide coupling reagents. The third approach afforded the tricyclic peptide, possibly due to the four extra carbon atoms serving as a linker in the lysine side chain, which facilitated activation of the COOH group and subsequent conjugation to the diamine of the cyclic peptide.
Biological Activity
The cytotoxicity of tricyclic peptide 20 was determined by the MTS assay against the breast cancer cell line MDA-MB-231. Tricyclic peptide 20 showed more than 97% cell viability in MDA-MB-231 cells up to a concentration of 25 µM, which slightly decreased to 92% at 30 µM (Figure 2). Thus, further cell-based assays were conducted at 25 µM.
Phosphopeptides contain negatively charged phosphate moieties, which restrict the peptides from crossing cellular membranes [36,37]. The molecular transport of these valuable probes into cellular systems presents a great advantage for the study of phosphoprotein-protein interactions as well as protein phosphorylation and dephosphorylation. The potential of tricyclic peptide 20 for the delivery of fluorescently labeled (F′)-Gly-(pTyr)-Glu-Glu-Ile (F′-GpYEEI) was analyzed by flow cytometry. The percentage of fluorescently positive cells exceeded 98%, while for non-treated cells or the phosphopeptide alone it was less than 0.6%. In terms of mean fluorescence, the cellular uptake was enhanced at least 18-fold in the presence of tricyclic peptide 20, suggesting the role of the latter as a successful delivery tool (Figure 3). The delivery efficiency of the tricyclic peptide was compared with the monocyclic peptide [WR]5. However, the newly developed tricyclic peptide was found to be less efficient than the monomer. This could be due to the steric hindrance of the tricyclic peptide, which could restrict its binding affinity toward the phosphopeptide.

To study the application of the newly synthesized tricyclic peptide 20 as a molecular transporter, the cellular uptake of three fluorescently labeled anti-HIV drugs, lamivudine (3TC), emtricitabine (FTC), and stavudine (d4T) (F′-3TC, F′-FTC, and F′-d4T), was examined in the presence and absence of tricyclic peptide 20 using the MDA-MB-231 cell line (Figure 4). After 4 h of incubation at 37 °C, the cells were washed with PBS, followed by treatment with trypsin. The cellular uptake was monitored by flow cytometry. The data demonstrated significantly greater fluorescence signals in the cells incubated with F′-3TC and F′-FTC loaded with tricyclic peptide 20 compared to those treated with the fluorescently labeled compounds alone, indicating that the uptake of these two compounds is promoted by the tricyclic peptide.
For instance, the cellular internalization in the presence of the peptide was approximately nine times higher for F′-3TC and almost twelve times higher for F′-FTC, which is significantly higher than with the previously reported monocyclic peptide [WR]5 [13]. Interestingly, there was no significant increase in the mean fluorescence of the cells in the case of F′-d4T (less than a twofold difference), although the percentage of fluorescently positive cells exceeded 98%. The lower cellular uptake of d4T compared to 3TC or FTC may be speculated to arise from interactions of the pyrimidine ring of the drugs with the phospholipid bilayer and tricyclic peptide 20. This may be attributed to the presence of the amino group in the pyrimidine ring of 3TC and FTC, which d4T lacks; more interactions possibly occur with the tricyclic peptide, and therefore higher cellular uptake was achieved for FTC and 3TC.

siRNA has been shown to be an alternative therapeutic approach in cancer treatment. One major obstacle for this approach has been the poor cellular internalization of the double-stranded RNA, which has an anionic and hydrophilic character [38]. Different strategies have been used for siRNA delivery, including the use of synthesized cyclic peptides after complex formation with fluorescently labeled siRNA. Tricyclic peptide 20 showed a 3.3-fold improvement in siRNA internalization compared to siRNA alone at the lowest peptide:siRNA ratio (N/P 10), which progressively decreased with an increase in the N/P ratio (Figure 5) in the MDA-MB-231 cell line. At a 20:1 ratio, the cellular internalization of siRNA decreased, and at a 40:1 ratio there was no major difference in siRNA uptake compared to free siRNA; neither showed any statistically significant difference from the control group (NT).
The results suggest that at a lower N/P ratio the peptide shows appreciable efficiency in siRNA delivery, whereas at higher N/P ratios the efficiency decreases. This may be attributed to possible intermolecular hydrogen bonding among the arginine residues at high concentration, which reduces the available cationic charges in the peptide; thus, the loading of anionic siRNA on the peptide is decreased. As part of the innate immune system, peptides possess broad-spectrum antimicrobial activity [39-41]. Cyclic peptides, in particular, are considered an emerging class of antimicrobial peptides against antibiotic-resistant pathogens [42,43]. c[R4W4] is a cyclic amphiphilic peptide that exhibited potent antibacterial activity against MRSA (MIC 4 µg/mL) and E. coli (MIC 16 µg/mL) and showed improved antibacterial activity when co-administered with tetracycline [23,24]. Structure-activity studies of a series of c[R4W4] peptides revealed the requirement for the R and W amino acids [24]. Thus, we evaluated the antibacterial activity of tricyclic peptide 20. Tricyclic peptide 20 exhibited modest antibacterial activity against MRSA, Pseudomonas aeruginosa, Klebsiella pneumoniae, and E. coli, with MIC values in the range of 64-128 µg/mL, i.e., significantly weaker activity than c[R4W4] (Table 1). These data were not unexpected, since the peptide contained alternating R and W residues rather than blocks of hydrophobic and positively charged residues on opposite sides, like c[R4W4], which are required for the more amphiphilic character that generates potent antibacterial activity.
Conclusions
We designed and synthesized the tricyclic peptide [(WR)4]3 using Fmoc/tBu solid-phase synthesis after two unsuccessful synthetic strategies. In the first approach, the intermolecular coupling of monofunctional cyclic peptides did not produce a conjugated peptide, while in the second approach, the intermolecular coupling of the diamine or dicarboxylic acid peptides failed to produce a bicyclic peptide. However, by reacting the protected diamine cyclic peptide 9 with the monocarboxylic succinate cyclic peptide 17, tricyclic peptide 20 was synthesized. Evaluation of the antibacterial activity of the tricyclic peptide showed modest activity against MRSA, Pseudomonas aeruginosa, Klebsiella pneumoniae, and E. coli, with MIC values of 64-128 µg/mL. Tricyclic peptide 20 was also evaluated for molecular transporter activity after it was found to be nontoxic up to 30 µM in MDA-MB-231 cells. Tricyclic peptide 20 enhanced the cellular uptake of the phosphopeptide F′-GpYEEI by 18-fold compared to the uptake of F′-GpYEEI alone. Similarly, the uptake of the antiviral drugs (d4T, 3TC, and FTC) was enhanced in the presence of tricyclic peptide 20 by 1.9-12-fold compared to the drugs alone. Tricyclic peptide 20 also internalized a fluorescently labeled siRNA into MDA-MB-231 cells 3.3-fold more effectively at an N/P ratio of 10 compared to siRNA alone. Further optimization in the design and application of the tricyclic peptide will be required to realize the full potential of these multivalent compounds in biological systems. Our data also reflect the potential of tricyclic or polycyclic peptides in the area of drug delivery. | 2020-09-10T10:24:28.135Z | 2020-09-01T00:00:00.000 | {
"year": 2020,
"sha1": "b18ed2c6e8105c539af04591d01aca02c5dad9ab",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4923/12/9/842/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6c2d5f215c9b4e7df5391f0f21833e0b7cd28575",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
6375687 | pes2o/s2orc | v3-fos-license | Pursuit Eye-Movements in Curve Driving Differentiate between Future Path and Tangent Point Models
For nearly 20 years, looking at the tangent point on the road edge has been prominent in models of visual orientation in curve driving. It is the most common interpretation of the commonly observed pattern of car drivers looking through a bend, or at the apex of the curve. Indeed, in the visual science literature, visual orientation towards the inside of a bend has become known as “tangent point orientation”. Yet, it remains to be empirically established whether it is the tangent point the drivers are looking at, or whether some other reference point on the road surface, or several reference points, are being targeted in addition to, or instead of, the tangent point. Recently discovered optokinetic pursuit eye-movements during curve driving can provide complementary evidence over and above traditional gaze-position measures. This paper presents the first detailed quantitative analysis of pursuit eye movements elicited by curvilinear optic flow in real driving. The data implicates the far zone beyond the tangent point as an important gaze target area during steady-state cornering. This is in line with the future path steering models, but difficult to reconcile with any pure tangent point steering model. We conclude that the tangent point steering models do not provide a general explanation of eye movement and steering during a curve driving sequence and cannot be considered uncritically as the default interpretation when the gaze position distribution is observed to be situated in the region of the curve apex.
SUPPLEMENTARY FIGURE S2
Horizontal and vertical gaze deviation (difference of the median observed gaze position from the designated target point) is under 2°. Black dots: calibration datapoints from the current dataset. Open red circles: calibration datapoints from another simultaneously collected dataset.
Mathematical description of the segmentation algorithm
The system aims to maximize a fitness function, although it is not known whether it actually reaches a (global) maximum:

$$F = \sum_{s} \Big[ \log P_{\mathrm{Poisson}}\big(N>0;\ \lambda\,\Delta t_{i_s}\big) + \sum_{i \in s} \log \mathcal{N}\big(y_i - \hat{y}_i;\ 0,\ \Sigma\big) \Big] - c\,|O|,$$

where $\log P_{\mathrm{Poisson}}(N>0;\ \lambda \Delta t)$ is the logarithm of the Poisson survival function for more than zero events with rate parameter $\lambda$ for a new segment, $\Delta t$ being the time between samples and $i_s$ denoting the first sample index in segment $s$; $\log \mathcal{N}(\cdot;\ 0, \Sigma)$ is the logarithm of the Gaussian probability density function with mean zero and (diagonal) covariance matrix $\Sigma$; $O$ is the set of outliers and $c$ is a "penalty coefficient" for outliers; $y_i$ is the signal value of sample $i$ and $\hat{y}_i$ is its estimate based on the segment's linear fit.
For the present analyses we used $\lambda$ = 1/0.5 and $c$ = 0.6, based on tuning by hand.
$\Sigma$ was iteratively estimated, similarly to the Expectation-Maximization method, by calculating the ML estimate based on a run of the algorithm and then running it again with the new estimate until the segmentation no longer changes. We used initial noise variances of 1.0 for both dimensions.
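A minimal sketch of the iteration just described, with `segment_signal` standing in for the segmentation algorithm itself; its interface here is an assumption made for illustration, not the authors' implementation.

```python
import numpy as np

def refit_noise_covariance(signal, segment_signal, init_var=1.0, max_iter=50):
    """EM-style loop: segment with the current noise covariance, take the
    ML (zero-mean) estimate of the residual variance per dimension, and
    repeat until the segmentation no longer changes."""
    cov = np.full(signal.shape[1], init_var)      # initial variance = 1.0 per dim
    prev = None
    for _ in range(max_iter):
        # segment_signal is assumed to return (segments, residuals), where
        # residuals are per-sample deviations from each segment's linear fit
        segments, residuals = segment_signal(signal, cov)
        cov = (residuals ** 2).mean(axis=0)       # ML estimate, mean fixed at zero
        if segments == prev:                      # fixed point reached
            break
        prev = segments
    return cov, segments
```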
Driving behavior
The following figures and tables quantify the physical driving behavior in the cornering phase in the present study. Supplementary Figures S3 and S4 display group-level and individual driving speeds as a function of lap. Supplementary Tables T1 and T2 show the individual participants' yaw rates and the eccentricity in the visual scene.
SUPPLEMENTARY FIGURE S3
Boxplot showing average driving speed in the cornering phase as a function of lap.
TP Hypothesis 0 (no OKN). Persistent fixation of the tangent point. Gaze is stable at the TP. Possibly observed in the TANG condition in Kandil et al. (2009), although the presence or absence of OKN was not analysed in that study, but not in everyday driving.
That OKN is reliably elicited, however, shows either that the OKR is present while the TP is fixated or that the drivers are not looking at the TP.
If the drivers' attempt to fixate the tangent point is hindered by OKR elicited by regional flow, gaze would move away from the fixation target and require resetting saccades to restore fixation (hence the OKN QP). QP characteristics may therefore be predicted if the dependence of the SP on regional flow is known.
TP Hypothesis 1. Under the assumption that the OKR follows local flow, the QP could reset gaze to the tangent point (assuming the SP has drawn gaze away from it), or launch gaze "upstream" in the flow field, so that the slow-phase pursuit OKR will bring gaze back to the TP.
TP Hypothesis 2.
Gaze is targeted at the tangent point but is not stable there because of the OKR. However, as the SP does not follow local flow (it has a horizontal component), the hypothesis needs to be adjusted.
The dependency of the OKN SP on regional optic flow is not clear, and the assumptions of the TP hypotheses (above) do not give a specific prediction. Empirically, it is known that the SP is opposite to the direction of the curve and directed downwards. Another possibility would be to launch gaze "upstream" in the flow field, so that the slow-phase pursuit OKR will bring gaze back to the TP: TP Hypothesis 3. Gaze is cast "upstream" in the flow field; OKN following (regional) optic flow resets gaze to the tangent point.
There are thus many ways in which targeting the TP and OKN could be combined.
Unless the size and shape of the relevant region assumed to determine the OKN SP are incorporated, the SP direction and magnitude remain underspecified. (SEE MAIN TEXT FOR EXPLANATION) | 2016-09-14T22:35:13.896Z | 2013-07-22T00:00:00.000 | {
"year": 2013,
"sha1": "613a7d04db545107c663518bde6acd85e06ab311",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0068326&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a41184149aebfaebadbd4771f09e4337f0be1ac7",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
244837844 | pes2o/s2orc | v3-fos-license | Combined Building-Energy Systems with Heat Transfer Control by Building Constructions using RES
Energy systems built into one of the building structures that serve to capture solar, geothermic, and ambient energy, or which function as end elements of the heating, cooling, and ventilation systems, are generally called combined building-energy systems. Among combined building-energy systems we include solar roofs with built-in pipe absorbers; building structures with active thermal protection (ATP), i.e., active heat transfer control, which have a multifunctional purpose (a thermal barrier, low-temperature heating, high-temperature cooling, recuperation and accumulation of heat, and solar and ambient energy collection); large-capacity heat storage (ground heat accumulators built directly into the foundation slab of the building); and heat exchangers used for the recuperative ventilation of buildings, built into the foundation slabs and wall structures. Research on combined building-energy systems at the Department of Building Services, Faculty of Civil Engineering, Slovak University of Technology in Bratislava has been carried out continuously since 2005. Within five research projects (responsible researcher, Kalús, D.), HZ 04-309-05, HZ 04-310-05, and HZ 04-142-07 (research and experimental measurements in 2005 to 2007), HZ PG73/2011 (research and experimental measurements in 2011 to 2013) [13], and HZ PR10/2015 (research and experimental measurements since 2015), two experimental houses, IDA I. and EB2020, and a mobile laboratory designed for measuring and optimizing a compact heat station using renewable heat sources were designed and built by the research team at our workplace, and research on a fragment of a perimeter wall with built-in active thermal protection was also carried out in the climatic chamber of the Faculty of Civil Engineering STU in Bratislava, Slovak Republic. Significant contributions to the research were made by the doctoral students Ing. Martin Cvíčela, PhD. (supervisor, Kalús, D.), Ing. Peter Janik, PhD. (supervisor, Kalús, D.), and Ing. Martin Šimko, PhD. (supervisor, Kalús, D.), who described the results of the research in their dissertations. At present, experimental measurements in the mobile laboratory are performed by the doctoral student Ing. Matej Kubica (supervisor, Kalús, D.). In the area of combined building-energy systems, research and optimization of suitable solutions continues; these have been transformed into one European patent and three utility models.
Introduction
The technical solution of combined building-energy systems in the sense of patent SK 284 751 (the ISOMAX system) is based on the assumption that the only heat sources are solar and geothermic energy. Solar energy is absorbed through a solar energy roof. This source is unstable, with insufficient absorption of sunlight. It can be used in summer, and partly during the transition period, when the heat transfer medium is sufficiently heated, i.e., when the temperature of the heat-carrying medium is higher than the temperature in the long-term heat storage in the ground (ground heat storage). Geothermic energy is captured in the ground storage to an extent negligible for heating needs. The ISOMAX system captures these energies only for direct use in the so-called thermal barrier, without increasing energy efficiency, e.g., by means of a heat pump or solar collectors. The amount of energy is difficult to determine exactly due to the large number of unstable physical parameters affecting the capture of solar radiation. The captured energy is applied only to charge the long-term storage. The source is difficult to regulate; it cannot cover sudden requirements for an increased energy supply, nor the year-round need for energy for heating, hot water, or ventilation. The design of sources with the ISOMAX system is done only empirically, by estimation. From previous implementations, it is clear that a peak heat source is also needed. With the ISOMAX system, heat accumulation takes place only in the long-term ground storage; it is unstable and uneven. The ground storage is mostly constructed with an open surface at the bottom, which causes uncontrollable and immeasurable leakage of the accumulated heat. The efficiency of such storage is several times lower than the efficiency of closed, insulated heat storage. The amount of stored energy and the time needed to charge the ground storage to a certain capacity are difficult to calculate due to the large number of changing physical parameters, such as ground moisture, its composition, the groundwater level and its vertical movement, and the like. The temperature available for heating is always only that of the heat transfer medium, which equals the average temperature of the long-term ground storage, and the ISOMAX system cannot increase it in any way. The temperature of the heat transfer medium cannot be changed other than through the absorption of solar radiation. Heat transfer by the ISOMAX system is performed only to the thermal barrier and serves only to reduce heat losses. The temperature of the heat transfer medium is limited by the temperature in the ground heat storage or in the cooling circuit; it fluctuates according to the current temperature in these storages and cannot respond to sudden weather changes or to demands for changing the indoor climate with temperatures higher or lower than those available in the storage. Because it is not possible to supply the thermal barrier with a heat transfer medium at a constant temperature throughout the entire period, the heat transfer through the building structure and its effective thermal resistance change. It is clear from previous realizations that a thermal barrier constructed in this way cannot cover the heat losses of a building all year round. The only source of cold for the ISOMAX system is ground cold storage at non-freezing depth.
The coolant temperature depends on the changing soil temperature; it is limited and fluctuating and cannot respond to sudden changes in weather or to demands requiring a colder medium than is available in the ground storage. The ISOMAX system solves only the preheating of hot water, up to a maximum temperature of 35 °C, and only in summer when there is enough sunlight. The temperature of the ventilation air in the ISOMAX system fluctuates, depends on the temperature in the ground heat storage and in the ground cooling circuit, and cannot be adjusted to temperatures different from those in the ground heat storage.
Research objectives
The aim of research in the field of combined building-energy systems is to resolve the identified shortcomings so that:
§ heat/cold sources for the energy systems (heating, hot water preparation, ventilation, and cooling) are stable and independent of the variable and unpredictable solar and geothermic energy accumulated in large-capacity storage, especially ground heat storage,
§ the requirements for buildings with nearly zero energy demand are met,
§ RES are used as much as possible and the best possible accumulation of heat/cold from these sources is ensured,
§ the implementation of active thermal protection is simplified,
§ the advantages of a contact thermal insulation system are economically and effectively combined with energy systems (thermal barrier, heating, cooling, heat accumulation and recuperation, capture of solar and ambient energy, and recuperative ventilation) in multifunctional building-energy constructions,
§ a compact heat station with a separate control system is designed to regulate, measure, and optimize the energy demand of the building,
§ a reliable, exact calculation methodology is developed for the design, calculation, selection, and assessment of all components of the combined building-energy systems of a building [8], [9].

The patented ISOMAX system can be applied by two methods. The first method of implementation is application to the building's envelope structures, where, for the purposes of distributing a low-temperature heat transfer medium, a pipe system is attached to the perimeter wall of an existing building, which is then covered with leveling plaster, thermal insulation is glued on, and all layers of surface facade plaster are applied. This application is possible for new constructions but also for the thermal insulation of existing buildings, see Figure 1 (on the left). The second method uses load-bearing panels in the form of lost formwork (mostly polystyrene) in which the active thermal protection is placed; their interior is usually filled with a cast concrete mixture only after installation on the construction site. In the next stage, the concrete must harden and reach the required strength (approximately 28 days). Such construction technology is a so-called "wet construction process". It is applied only in new buildings, see Figure 1 (on the right), [2], [12], [27], [28].
Description of research projects
Since the production of panels in the form of lost formwork in accordance with patent SK 284 751 did not work and was too complicated and lengthy, the investor decided that the reinforced concrete perimeter panels would be produced without thermal insulation, with the tubular coils of active thermal protection (ATP) placed in the central structure of the panels on the steel reinforcement, see Figure 1 (on the left), and that thermal insulation on the outer and inner sides of the perimeter walls would be applied additionally, only after the completion of the initial structural construction stage (foundations, walls, roof). Apart from the foundations, the ground heat storage, the ground heat exchanger, and the roof structure with a solar energy roof, see Figure 2, the structural and energy components of the type panel house IDA I. were industrially manufactured in the panel shop as common parts of panel production, see Figure 3 [7,8,9].

Advantages of the type panel house IDA I.:
§ unification of panels, fast mass production, and fast assembly of a building without significant technological intermissions,
§ high potential for the use of RES or waste heat,
§ storage capacity of the envelope structures to accumulate heat/cold (thermal barrier, active control of heat transfer through building structures),
§ use of the self-regulatory effect of large-area radiant systems and TABS systems,
§ application of a peak heat source and a small-capacity heating water storage tank, which eliminates the instability and dependence of the energy systems on variable and difficult-to-predict solar and geothermic energy accumulated in large-capacity storage, especially ground heat storage.

Disadvantages of the type panel house IDA I.:
§ production of combined building-energy components with ATP is more time-consuming compared to conventional prefabricated components, due to the fastening of the pipe coils to the reinforcement and the compaction of the concrete around the pipes,
§ the need for technological breaks due to the hardening time of concrete (28 days),
§ in the event of a leak from the pipes in the panels, the repair is very demanding and the panel loses its energy function,
§ the contact thermal insulation system can be realized only by gluing, as anchoring could damage the pipe system in the panels,
§ in the application of the IDA I. type panel solution in the sense of the patent solution SK 284 751 (ISOMAX), thermal insulation on the interior and exterior sides of the perimeter walls limits the function of the combined building-energy system with ATP to that of a thermal barrier only; the implementation of recuperative ventilation by a ground heat exchanger (pipe in pipe), as well as its subsequent maintenance or disinfection, also appears to be complicated and cost-intensive.

(photo archive, Kalús, D.) [7,8,9]

The type panel house IDA I. is a building which, thanks to the application of combined building-energy systems, has a high potential to use RES to a large extent and, in accordance with Directive 2018/844/EU, to meet the requirements for nearly Zero Energy Buildings. Based on the conducted research, it is possible to recommend the production of panels with integrated active thermal protection and thermal insulation exclusively from the exterior, in a unified way directly in production.
With such a modification, we obtain a multifunctional combined building-energy system, namely, large-area radiant low-temperature heating/high-temperature cooling, a thermal barrier and the accumulation of heat and cold in the mass making up the static part of the panels. We also avoid complications and time loss for subsequent thermal insulation of the building.
Research project -experimental house EB2020
The research project HZ PG73/2011, solved by a team of researchers from the Department of Building Services, Faculty of Civil Engineering STU in Bratislava, Slovak Republic, in 2011-2013, focused on experimental measurements, analysis, and determination of the optimal use of RES for the prototype family house EB2020 for nearly Zero Energy Buildings (responsible researcher, Kalús, D.) [10], namely on experimental measurements and evaluation of the energy roof operation, see Figure 5. Figure 5 shows the temperature distribution in the perimeter structure without the use of ATP and with an average heat transfer medium temperature in the ATP layer of 14 °C and 20 °C (at an outdoor temperature of -11 °C). In the construction without ATP, the surface temperature will be 18.7 °C and the temperature between the aerated concrete block and the thermal insulation will be 2.5 °C. When installing an ATP with a temperature of 14 °C in the layer where it is located, the surface temperature will be 19.6 °C. Active thermal protection in the given building consists of plastic pipes between the aerated concrete masonry (375 mm) and facade polystyrene (100 mm), and in the roof structure along the perimeter, 20 x 100 m. With the help of ATP it is possible to reduce heat losses of the building through opaque constructions, to heat the building, and to cool it in summer.
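To make the quoted interface and surface temperatures easier to interpret, the sketch below evaluates a simple steady-state, one-dimensional thermal-resistance network for the wall build-up described above. The thermal conductivities and surface resistances are typical handbook assumptions, not values reported for the EB2020 wall, so the computed temperatures only approximate the figures quoted.

```python
# Steady-state 1-D conduction through the wall, interior -> exterior.
# Layers: interior surface film, 375 mm aerated concrete, 100 mm EPS, exterior film.
R_si, R_se = 0.13, 0.04                      # surface resistances (m2K/W), assumed
layers = [(0.375, 0.11), (0.100, 0.040)]     # (thickness m, conductivity W/mK), assumed
t_in, t_out = 20.0, -11.0

resistances = [R_si] + [d / k for d, k in layers] + [R_se]
R_total = sum(resistances)
q = (t_in - t_out) / R_total                 # heat flux density, W/m2

# Temperatures at each interface, walking from the interior air outwards
temps, t = [t_in], t_in
for r in resistances:
    t -= q * r
    temps.append(t)
print(f"q = {q:.2f} W/m2, interior surface = {temps[1]:.1f} C, "
      f"concrete/EPS interface = {temps[2]:.1f} C")

# With the ATP layer between concrete and EPS held at e.g. 14 C, the heat loss
# driven from the interior is governed by the inner resistances only:
R_inner = R_si + layers[0][0] / layers[0][1]
q_atp = (t_in - 14.0) / R_inner
print(f"with ATP at 14 C: q = {q_atp:.2f} W/m2, "
      f"interior surface = {t_in - q_atp * R_si:.1f} C")
```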
The source of cold is the cooling circuits, which are located at a non-freezing depth in the ground around the foundation strips of the building. They are formed by circuits of plastic pipes, 20 x 100 m.
Heating in the building is also possible by underfloor heating on both floors. An air handling unit with heat recovery is also installed. Photographs of the energy systems of the experimental house EB2020 are shown in Figure 6. Combined building-energy systems with active thermal protection using solar energy and long-term heat accumulation can be more efficient than conventional construction with conventional heating systems only with the right design and proper operation. Theoretical calculations, experimental measurements, and analysis have identified important facts that need to be taken into account in the calculation, design, assessment, and planning of buildings with combined building-energy systems, as well as the need for further research in this area, as these technical solutions have a high potential to make significant use of RES and, in accordance with Directive 2018/844/EU, to meet the requirements for nearly Zero Energy Buildings (nZEB).
Several system optimizations and recommendations for further research can be suggested:
§ The application of an energy roof requires lower investment costs than conventional solar collectors, but experimental measurements have shown that the energy gain and the achieved temperatures of the working medium at the outlet are significantly lower. For higher efficiency, it is worth considering installing a dark roof covering and more circuits, suitably divided according to the cardinal directions. This may be the subject of further research.
§ For theoretical calculations of the energy roof, it is necessary to know the solar irradiance in hourly averages for the specific slope; it is advisable to install a solar radiation meter on the roof connected to a measuring and recording control panel, or to install a flat solar collector with the given roof slope for a direct comparison of heat yields and outlet temperatures.
§ For a more detailed evaluation of the energy roof, it is advisable to install a compact heat meter on the primary side as well, in front of the plate heat exchanger (already performed).
§ The use of the energy roof for low-temperature heating or for supplying the active thermal protection can only be realized with a suitable heat storage solution. For hot water preparation, the energy roof can only be used for preheating. It is appropriate to consider its use in the operation of heat pumps, where in summer it could serve as a heat exchanger in the preparation of hot water or in pool management.
§ Whenever possible, supply the ATP directly from the energy roof without heat accumulation; consider using the energy roof to preheat hot water.
§ In summer, the ATP was used for wall cooling from July to September, with the inlet temperature to the ATP set at 20 °C and the return temperature ranging from 20 to 23.5 °C. The supply came from cooling circuits located in the ground around the foundations of the building (a passive cooling system). The soil temperature warmed up here and another low-temperature storage was created; it is appropriate to consider its use for supplying the ATP in the transitional and winter periods. The cooling circuits are directly connected to the ATP.
§ With large-area wall cooling, a separate cooling source is not required if the wall is designed correctly. The thermal conductivity of the material in front of the ATP pipes and of the material in which the pipes are located should be as high as possible; this is a necessary condition for optimal ATP operation.
§ The use of ATP in the wall heating and cooling function is of practical importance only for building structures that have a high storage capacity on the interior side in front of the ATP pipes, i.e., a suitable bulk density, thermal conductivity, and thermal capacity. Given the construction of the family house where the experimental measurements took place, heating with ATP is very limited and economically inefficient. With wall cooling, it is practically possible to cover only the heat load through non-transparent constructions. For the use of ATP in the wall heating and cooling function, it is recommended to design structures with suitable accumulation, e.g., reinforced concrete with a suitable thickness of thermal insulation on the exterior.
§ Building structures that have a high thermal resistance in front of the ATP pipes are not suitable for the wall heating and wall cooling function.
The ATP system offered on the market, where the pipes are installed in a reinforced concrete structure provided with thermal insulation on both the interior and exterior side (ISOMAX system, self-supporting panels), cannot ensure year-round thermal comfort at the normal temperature gradients of low-temperature heating and high-temperature cooling. At higher temperature gradients in the heating function, operation is energetically and economically inefficient. With such constructions, it is necessary to design a heating system.
§ Heat accumulation in a common foundation slab is disadvantageous; it is advisable to consider heat accumulation in deep boreholes, or to apply large-capacity water tanks.
§ In the experimental measurements in the EB2020 experimental house during the heating period, the air temperature on the 2nd floor was often lower by more than 1 K compared to the air temperature on the 1st floor. Underfloor heating on the 1st and 2nd floors is connected via one circulation pump. It is advisable to design two separate branches, one for each floor, particularly because the ground heat storage lies under the 1st floor.
§ It is necessary to perform further measurements of heating operation with a setback of the indoor air temperature. It is advisable to perform operational measurements first with underfloor heating only, then with active thermal protection only, and then in combined use at different temperatures. During the experimental measurements over less than two heating seasons, it was not physically possible to perform further measurements, while the comfort of the inhabitants had to be taken into account.

The combined building-energy system consisting of the use of solar energy by the energy roof, long-term heat accumulation in the ground storage, and active thermal protection was comprehensively evaluated on the basis of calculations and experimental measurements. This is probably the first building in Slovakia with such a system where long-term measurements took place. To date, no independent (non-commercial) research with published output is known from domestic or foreign sources that is based on long-term measurements of all components of this system, from heat recovery through accumulation to the ATP supply. Outputs for the further development of the scientific field and for technical and social practice were defined [6].

The mobile laboratory is currently used for researching, measuring, and optimizing the compact heating unit of the new S.M.A.R.T. type within the dissertation of Ing. Matej Kubica (supervisor, Kalús, D.). Self-monitoring, analysis and reporting technology, or S.M.A.R.T., is a monitoring system for technological equipment that detects and sends reports on various reliability indicators in an effort to predict failures. The new compact thermal unit of the intelligent type is a technological device with a control unit that can monitor, analyze, detect, and predict faults and ensure communication and cooperation of the technological components of the compact thermal unit with each other, but also with external devices, using software specially developed for this purpose, including remote control [6,15]. Our research is focused on the development of compact heating/cooling units using renewable energy sources, their wiring diagrams, methods of measurement and regulation, possible production processes, and their future applications.
One interesting innovation will be performance diagnostics of the entire system connected to the compact station, which will bring better control accuracy and make it possible to integrate this innovation into existing heating/cooling systems.
The laboratory includes vacuum solar collectors, photovoltaic panels, an air-to-water heat pump with the option of producing heat or cold, a heat recovery ventilation unit and a DHW tank with electric heating. Remote access makes it possible to monitor and set actual and desired quantities according to the needs of the measurements performed. The software records the measured states at five-minute intervals and can create various time graphs of temperature, humidity, consumption or battery charge status. If necessary, all values can be exported to another calculation program.
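As an illustration of how such five-minute logs might be post-processed after export, the following Python sketch integrates power samples into daily energy totals. The CSV layout and the column names are assumptions for illustration, not the laboratory's actual export format.

```python
# Hypothetical sketch: aggregate exported 5-minute logger data to daily energy.
# The file layout and the column names "timestamp" and "power_kW" are assumed.
import pandas as pd

def daily_energy_kwh(csv_path: str) -> pd.Series:
    """Integrate 5-minute power samples (kW) into daily energy totals (kWh)."""
    log = pd.read_csv(csv_path, parse_dates=["timestamp"])
    # Each sample represents a 5-minute interval: energy = power * (5/60) h.
    log["energy_kWh"] = log["power_kW"] * (5.0 / 60.0)
    return log.set_index("timestamp")["energy_kWh"].resample("D").sum()
```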
Results, discussions and conclusions
Type panel house IDA I. Based on the conducted research, it is possible to recommend the production of panels with integrated active thermal protection and thermal insulation exclusively on the exterior, in a unified way directly during production. With such a modification, we obtain a multifunctional combined building-energy system, namely large-area radiant low-temperature heating/high-temperature cooling, a thermal barrier, and the accumulation of heat and cold in the mass making up the static part of the panels. We also avoid the complications and time lost in the subsequent thermal insulation of the building.
Experimental house EB2020. Experimental measurements of the energy roof established the real temperatures at the outlet of the energy roof and the heat that can be obtained. For the underground storage tank, the heat stored in and removed from the tank was measured and the storage efficiency was evaluated. For the ATP, the operating parameters at different temperatures in the ATP pipework were measured. These experimental measurements can serve as a basis for similar measurements, e.g. energy roofs with a different upper structure, underground heat accumulators under the foundation slab of a building, or ATP applications in other building structures, and also as a reference for designers.
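The heat balance behind these evaluations follows the standard heat-meter relation Q = V̇·ρ·c_p·ΔT. The Python sketch below illustrates the principle; the water properties and the efficiency definition (heat removed divided by heat stored) are illustrative assumptions, not values taken from the measurements.

```python
# Illustrative sketch of the heat-meter principle and a simple storage efficiency.
# Water properties near 20 degC are assumed; they are not measured values.
RHO_WATER = 998.0   # density, kg/m^3
CP_WATER = 4186.0   # specific heat capacity, J/(kg*K)

def heat_power_kw(flow_m3_per_h: float, t_supply_c: float, t_return_c: float) -> float:
    """Instantaneous heat power (kW) from volumetric flow and temperature difference."""
    flow_m3_per_s = flow_m3_per_h / 3600.0
    return flow_m3_per_s * RHO_WATER * CP_WATER * (t_supply_c - t_return_c) / 1000.0

def storage_efficiency(q_stored_kwh: float, q_removed_kwh: float) -> float:
    """Fraction of the heat charged into the ground store that was later recovered."""
    return q_removed_kwh / q_stored_kwh

# Example: 1.2 m^3/h at a 6 K temperature difference gives roughly 8.4 kW.
p = heat_power_kw(1.2, 46.0, 40.0)
```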
Mobile laboratory: optimizer and simulator of compact heating/cooling units. Pre-installed and pre-programmed ultrasonic heat meters make it possible to create new ways of acquiring data and to compare design and actual conditions. Two sets of ultrasonic heat meters are installed in the compact unit: one detects the instantaneous power and the amount of energy stored in the heat and cold store, while the other registers the installed capacity of the heating systems. The measurement and control system is crucial for the proper functioning of the heating system. In addition to qualitative and quantitative power adjustment, progressive measurement and control systems can also adjust the pressure conditions in the heating system. Besides adjusting the operating characteristics of the system, the measurement and control system provides protection against damage to heating systems; it monitors and sends feedback so that the software is updated in time for the next action. As a result, we can evaluate the conditions for developing and selling a new renewable energy facility, addressing the complete production, preparation and distribution of heat for family houses and small apartment buildings, as very favorable. Technically simpler devices are currently on the market, and their advantages, such as fast and high-quality assembly, prefabrication and calibration in the factory, and control and flushing of the finest parts, confirm the good sense of compact stations. The European Union's ecological focus also contributes to growing interest in the prepared facility. Last but not least, the end of the economic crisis in the European Union and the large number of users in the target group will also be beneficial. In the area of combined building and energy systems, research and optimization of suitable solutions continues, which has been transformed into one European patent and three utility models [3,4,5,6]. | 2021-12-03T20:07:37.392Z | 2021-11-01T00:00:00.000 | {
"year": 2021,
"sha1": "1145a0b7ba4037d17d4cb9d4cd9b672c3f5f01fb",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/1203/3/032091",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "1145a0b7ba4037d17d4cb9d4cd9b672c3f5f01fb",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
247864265 | pes2o/s2orc | v3-fos-license | Long noncoding RNA ADIRF antisense RNA 1 upregulates insulin receptor substrate 1 to decrease the aggressiveness of osteosarcoma by sponging microRNA-761
ABSTRACT An increasing number of studies have supported the critical regulatory actions of long noncoding RNAs (lncRNAs) in osteosarcoma (OS). However, the detailed roles of adipogenesis regulatory factor-antisense RNA 1 (ADIRF-AS1) in OS have not been comprehensively described. Hence, we first detected ADIRF-AS1 expression in OS and evaluated its clinical significance. Functional experiments were then performed to determine the modulatory role of ADIRF-AS1 in OS progression. ADIRF-AS1 was found to be overexpressed in OS, and the overall survival of patients with OS who had high ADIRF-AS1 levels was shorter than that of those with low levels. ADIRF-AS1 knockdown led to restricted proliferation, migration, and invasiveness of OS cells and increased apoptosis. Additionally, ADIRF-AS1 downregulation impeded tumor growth in vivo. Mechanistically, ADIRF-AS1 acted as a competitive endogenous RNA for microRNA-761 (miR-761) that siphoned miR-761 away from its target, namely insulin receptor substrate 1 (IRS1), leading to IRS1 overexpression. Rescue experiments showed that low levels of miR-761 or restoration of IRS1 could neutralize the effects of ADIRF-AS1 ablation in OS cells. In summary, ADIRF-AS1 exacerbates the oncogenicity of the OS cells by targeting the miR-761/IRS1 axis. Our findings may aid in the advancement of lncRNA-directed therapeutics for OS.
Introduction
Osteosarcoma (OS), which is derived from primitive mesenchymal cells, is the most common type of bone tumor and comprises over 20% of all primary bone malignancies [1]. It ranks second in cancer-associated mortality in children and adolescents [2]. Currently, surgery in parallel with auxiliary radiotherapy, chemotherapy, and gene therapy is the main available treatment regimen for OS [3]. Due to advancements in diagnostic and therapeutic strategies, the five-year disease-free survival rate of patients with OS has increased to 70% [4]. However, the clinical efficacy of treatments in patients who experience local/distant metastasis or recurrence remains unsatisfactory [5]. Although recent progress in understanding the molecular biology of tumors has provided novel clues into OS pathogenesis [6,7], the detailed mechanisms of OS oncogenesis and progression are far from clear. Therefore, additional research in this area may aid the development of promising OS management methods.
Long noncoding RNAs (lncRNAs) are a family of RNA transcripts over 200 nucleotides without protein-coding capacity [8]. Formerly, lncRNAs were regarded as junk sequences [9], but in recent decades, as genome sequencing technologies have improved, they have been found to participate in gene expression control and have important functions in almost all aspects of cell biology [10,11]. LncRNA dysregulation significantly correlates with the occurrence and progression of various human cancers [12], including OS [13]. Accumulating evidence has confirmed that lncRNAs play carcinogenic or anti-oncogenic roles and exhibit remarkable regulatory functions related to the aggressive biological behavior of OS [14][15][16]. microRNAs (miRNAs) are a group of approximately 17-24-nucleotide-long noncoding, single-stranded RNA transcripts [17] that downregulate gene expression by cleaving mRNAs or decreasing translation [18]. Many studies have reported the expression and functions of miRNAs in OS, and highlighted their considerable role in regulating its malignancy [19][20][21]. Multiple mechanisms used by lncRNAs have been recognized, among which the competitive endogenous RNA (ceRNA) theory has drawn much research interest [22]. LncRNAs can lower the levels of certain miRNAs by means of competitive direct binding, thus siphoning miRNAs away from their target genes, leading to mRNA overexpression [23]. Thus, lncRNAs and miRNAs contribute to osteosarcomagenesis and progression, and advancements in our knowledge of these molecules may help exploit clinically attractive targets for OS therapy.
Adipogenesis regulatory factor (ADIRF), also known as C10orf116 or APM2, was underexpressed in gastric cancer and showed a significant relationship with higher pathological stage, higher clinical stage, lymph node metastasis, and poorer distant relapse-free survival [24]. Furthermore, APM2 was certified as a new regulator of cisplatin resistance in many human cancer types, regardless of p53 or mismatch repair (MMR) status [25].
Through The Cancer Genome Atlas (TCGA) database, a variety of lncRNAs were found to be differentially expressed in OS, including ADIRF antisense RNA 1 (ADIRF-AS1). ADIRF-AS1, located at chr10:86,965,287-86,971,311, has been certified as part of a metabolism-related lncRNA signature predicting the prognosis of patients with colorectal cancer [26]. Although many lncRNAs have been investigated, to date, no reports have been published on the roles of ADIRF-AS1 in OS. Therefore, we first detected ADIRF-AS1 expression in OS and evaluated its clinical significance. Then, functional experiments were performed to observe the modulatory role of ADIRF-AS1 in OS progression. We hypothesized that ADIRF-AS1 exacerbates the oncogenicity of OS cells by targeting the miR-761/IRS1 axis. Our observations may promote the development of ADIRF-AS1-directed OS management strategies.
Patients and tissue samples
The Ethics Committee of Weifang Yidu Central Hospital approved our study. All patients provided written informed consent. OS tissues and adjacent normal tissues were collected from 57 patients in the aforementioned hospital. No patients were treated with chemotherapy or radiotherapy prior to surgical resection.
Three human OS cell lines (MG-63, U-2OS and Saos-2) were purchased from the National Collection of Authenticated Cell Cultures (Shanghai, China), and the HOS cell line was obtained from ATCC. Saos-2 and U-2OS cells were grown in 10% FBS-supplemented McCoy's 5A medium (Gibco). Minimum Essential Medium (Gibco) containing 10% FBS was used to culture MG-63 and HOS cells. A 1% penicillin-streptomycin mixture was added to all culture media. All OS cells were cultured at 37°C in a humidified atmosphere with 5% CO2.
Quantitative real-time polymerase chain reaction (qRT-PCR)
For the detection of ADIRF-AS1 and IRS1, total RNA was extracted with TRIzol® reagents (Invitrogen) and reverse-transcribed into complementary DNA utilizing a PrimeScript Reagent Kit with gDNA Eraser (Takara). Next, PCR amplification was executed with a PrimeScript™ RT Master Mix (Takara). Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) served as a normalization control.
To quantify miR-761 expression, small RNA was isolated by means of RNAiso for Small RNA (Takara Biotechnology Co., Ltd.). Reverse transcription was performed with a miScript Reverse Transcription Kit, and PCR amplification was completed with a miScript SYBR Green PCR Kit (both from Qiagen GmbH, Hilden, Germany). miR-761 levels were normalized to those of U6. All data were analyzed using the 2^(−ΔΔCq) method.
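For reference, the 2^(−ΔΔCq) arithmetic can be written out as a short Python function; the Cq values in the example are hypothetical, not measurements from this study.

```python
# Minimal sketch of the 2^(-ddCq) relative-expression calculation.
# The Cq values below are hypothetical, not measurements from this study.

def relative_expression(cq_target_sample, cq_ref_sample, cq_target_control, cq_ref_control):
    """Fold change of a target gene normalized to a reference gene (e.g., GAPDH or U6)."""
    d_cq_sample = cq_target_sample - cq_ref_sample      # dCq in the sample of interest
    d_cq_control = cq_target_control - cq_ref_control   # dCq in the control sample
    dd_cq = d_cq_sample - d_cq_control
    return 2.0 ** (-dd_cq)

# Example: ADIRF-AS1 in an OS sample vs. a normal control, normalized to GAPDH.
fold_change = relative_expression(24.1, 18.0, 26.3, 18.1)  # > 1 means overexpression
```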
Cell counting kit-8 (CCK-8) assay
The CCK-8 assay was performed as previously described [27]. After 24 h of transfection, cells were collected and seeded onto 96-well plates at a density of 2000 cells per well. To assess proliferation, cells were incubated with 10 µl of CCK-8 solution (Dojindo Laboratories, Tokyo, Japan) at 37°C for 2 h. The optical density at 450 nm (OD450) was detected with a TECAN Infinite M200 multimode reader (Tecan, Mechelen, Belgium).
Flow cytometry analysis for cell apoptosis assessment
Flow cytometric analysis was performed as previously described [28]. After 48 hours, cells were digested with trypsin without ethylenediaminetetraacetic acid (EDTA) and collected for cell apoptosis assessment using an Annexin V-FITC Apoptosis Detection Kit (Beyotime; Shanghai, China). The harvested cells were washed with phosphate-buffered saline, resuspended in 195 μl of Annexin V-FITC binding buffer, and stained with 5 µl of Annexin V-FITC and 10 µl of propidium iodide (PI) at room temperature for 30 min in the dark. Apoptotic cells were analyzed with a flow cytometer (BD Biosciences).
Transwell migration and invasion assays
Transwell assays were performed according to a previous study [29]. Transwell chambers (8.0 μm; BD Biosciences) were used to assess cell migration and invasion abilities. An FBS-free culture medium was used to prepare single-cell suspensions. For the migration assay, 200 µl of cell suspension containing 5 × 10^4 cells were seeded into the upper chambers. For the invasion assay, the membranes in the upper chambers were coated with Matrigel (Corning), and the same number of cells was seeded into the upper chambers. The lower chambers were filled with 500 µl of culture medium supplemented with 20% FBS. After 24 h of culture, the non-migrated and non-invaded cells were removed by scrubbing with a cotton swab. The migrated and invaded cells were fixed in 4% paraformaldehyde, stained with 0.5% crystal violet and photographed under an inverted microscope (Olympus).
Xenograft experiments
Animal experiments were performed with approval from the Animal Care and Use Committee of Weifang Yidu Central Hospital. Short hairpin RNA (shRNA) against ADIRF-AS1 (sh-ADIRF-AS1) and NC shRNA (sh-NC) were inserted into a lentiviral plasmid, followed by transfection into 293T cells. Supernatants harboring sh-ADIRF-AS1 or sh-NC lentivirus were harvested at 48 h post-transfection and used to infect U-2OS cells. Puromycin was then used to screen U-2OS cells with stable ADIRF-AS1 knockdown. For the tumor growth study, BALB/c nude mice aged 4-6 weeks were purchased from Hunan SJA Laboratory Animal Co., Ltd. (Hunan, China). All mice were randomly assigned to the sh-ADIRF-AS1 or sh-NC group. Mice in the sh-ADIRF-AS1 group were subcutaneously injected with U-2OS cells stably transfected with sh-ADIRF-AS1, and U-2OS cells expressing sh-NC were used as controls in the sh-NC group. Tumor size was recorded weekly, and tumor volume was determined according to the formula: 0.5 × length × width². Five weeks after treatment, all mice were euthanized, and tumor xenografts were excised, weighed, and photographed.
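As a small illustration of the tumor volume formula above, the Python sketch below converts caliper measurements into volumes; the weekly measurements are hypothetical.

```python
# Sketch of the tumor volume formula used above: V = 0.5 * length * width^2.
# The weekly caliper measurements below are hypothetical.

def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    return 0.5 * length_mm * width_mm ** 2

weekly_measurements = [(4.0, 3.0), (6.5, 4.8), (9.0, 6.2)]  # (length, width) in mm
volumes = [tumor_volume_mm3(l, w) for l, w in weekly_measurements]
```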
Subcellular fractionation
The assay was conducted as previously described [30]. OS cells in the logarithmic growth phase were harvested, and their nuclear and cytoplasmic fractions were separated with a Protein and RNA Isolation System Kit (Thermo Fisher Scientific, Inc.). RNA from the nuclear and cytoplasmic fractions was extracted and analyzed by qRT-PCR to determine the relative ADIRF-AS1 distribution in OS cells.
RNA immunoprecipitation (RIP)
RIP was carried out as previously reported [31]. A Magna RIP RNA-Binding Protein Immunoprecipitation Kit (Millipore, Billerica, MA, USA) was used for the assay. Briefly, OS cells were scraped off the culture plates and incubated with RIP lysis buffer. Lysed cells (100 μl) were incubated with a magnetic bead-antibody complex in RIP immunoprecipitation buffer with human anti-argonaute RISC catalytic component 2 (Ago2) or anti-IgG antibodies (Millipore). After overnight incubation with rotation at 4°C, the magnetic beads were collected and rinsed with wash buffer. Next, proteinase K was incubated with the immunoprecipitated complex with shaking to digest the protein.
Purified immunoprecipitated RNA was assessed by qRT-PCR.
Luciferase reporter assay
The luciferase assay was performed as in a previous study [32]. Fragments of ADIRF-AS1 harboring the predicted target site of miR-761 were amplified and cloned into the pmirGLO reporter plasmid (Promega Corporation, Madison, WI, USA), referred to as wild-type ADIRF-AS1 (wt-ADIRF-AS1). ADIRF-AS1 fragments carrying the mutant (mut) predicted target site of miR-761 were inserted into the pmirGLO reporter plasmid, which yielded mut-ADIRF-AS1. The wt-IRS1 and mut-IRS1 reporter plasmids were designed and constructed in a similar manner. OS cells were co-transfected with miR-761 mimic or NC mimic and wt or mut reporter plasmids using Lipofectamine® 2000. After 48 h, luciferase activity was measured in accordance with the protocol of the dual-luciferase reporter analysis system (Promega Corporation).
Western blot
As described by Feng et al. [33], cells were collected and lysed in RIPA lysis buffer (Solarbio, Beijing, China) supplemented with phenylmethanesulfonyl fluoride. A BCA Protein Assay Kit was used to determine the protein concentration. Equal amounts of protein were subjected to 10% SDS-PAGE and then transferred to a polyvinylidene difluoride membrane. After blocking with 5% nonfat milk for 2 h and subsequent incubation with primary antibodies at 4°C overnight, membranes were probed with an HRP-labeled secondary antibody (ab150077; Abcam) at room temperature for 2 h. The immunoreactive bands were detected with an enhanced chemiluminescence (ECL) system (Pierce). The following primary antibodies were used in this study: anti-IRS1 (ab40777; Abcam) and anti-GAPDH (ab181602; Abcam).
Statistical analysis
All results were obtained from at least three independent experiments. Data were expressed as the mean ± standard deviation. Student's t test was used for comparisons between two groups. One-way analysis of variance with Tukey's post hoc test was employed to detect differences among multiple groups. The Kaplan-Meier method and log-rank test were used to assess the relationship between ADIRF-AS1 expression and the overall survival of patients with OS. Pearson's correlation coefficient analysis was applied to detect gene expression correlations. P values less than 0.05 indicated statistical significance.
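A sketch of this statistical workflow in Python (SciPy and statsmodels assumed available) is shown below; the data arrays are randomly generated placeholders, not values from the study. A Kaplan-Meier analysis with a log-rank test would typically be added with a survival package such as lifelines.

```python
# Sketch of the statistical workflow (hypothetical placeholder data).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
si_nc = rng.normal(1.0, 0.1, 6)   # relative expression, control siRNA
si_1 = rng.normal(0.4, 0.1, 6)    # siRNA #1
si_2 = rng.normal(0.5, 0.1, 6)    # siRNA #2

# Two-group comparison (Student's t test).
t_stat, p_two_groups = stats.ttest_ind(si_nc, si_1)

# Multi-group comparison: one-way ANOVA followed by Tukey's post hoc test.
f_stat, p_anova = stats.f_oneway(si_nc, si_1, si_2)
tukey = pairwise_tukeyhsd(
    np.concatenate([si_nc, si_1, si_2]),
    ["si-NC"] * 6 + ["si-1"] * 6 + ["si-2"] * 6,
    alpha=0.05,
)

# Expression correlation across 57 tissues (Pearson's coefficient).
r, p_corr = stats.pearsonr(rng.normal(size=57), rng.normal(size=57))
```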
Results
In the present study, we aimed to investigate the expression status and clinical relevance of ADIRF-AS1 in OS. Additionally, the detailed roles and underlying mechanisms of ADIRF-AS1 in OS were systematically elaborated.
ADIRF-AS1 is highly expressed in OS and indicates a poor prognosis
To determine whether ADIRF-AS1 correlated with OS progression, its expression in sarcoma was first analyzed in the TCGA database. Compared with normal tissues, ADIRF-AS1 was significantly overexpressed in sarcoma (Figure 1a). As assessed by qRT-PCR, ADIRF-AS1 was highly expressed in OS tissues relative to adjacent normal tissues (Figure 1b). qRT-PCR also detected higher ADIRF-AS1 expression in a panel of OS cell lines compared with its expression in hFOB 1.19 (Figure 1c). Then, using the median value of ADIRF-AS1 in OS tissues as a cutoff, all patients were divided into the ADIRF-AS1-low (n = 28) or ADIRF-AS1-high (n = 29) groups. The overall survival was shorter in the ADIRF-AS1-high group than in the ADIRF-AS1-low group (Figure 1d). Altogether, high ADIRF-AS1 expression in OS indicates poor prognosis.
Downregulated ADIRF-AS1 suppresses the aggressive behavior of OS cells
Among the four tested OS cell lines, MG-63 and Saos-2 expressed observably higher ADIRF-AS1 levels; therefore, they were used in the functional experiments. To determine the roles of ADIRF-AS1 in OS, we induced its depletion in OS cells. To prevent off-target effects, two siRNAs (si-ADIRF-AS1#1 and si-ADIRF-AS1#2) were used, and the interference efficiency was confirmed via qRT-PCR (Figure 2a). OS cell proliferation was evidently hindered in cells transfected with si-ADIRF-AS1 compared with that of cells transfected with si-NC (Figure 2b). Additionally, ADIRF-AS1 knockdown promoted the apoptosis of OS cells (Figure 2c). Furthermore, the migratory and invasive properties (Figure 2d and e) of OS cells were restricted in response to ADIRF-AS1 ablation. Thus, ADIRF-AS1 exerts pro-oncogenic effects in OS cells.
ADIRF-AS1 serves as a miR-761 sponge
Cumulative studies have confirmed that lncRNAs in the cytoplasm act as miRNA sponges or competing endogenous RNAs (ceRNAs) [34]. To illustrate the mechanisms responsible for the actions of ADIRF-AS1, the cellular distribution of ADIRF-AS1 in OS cells was studied. As determined by subcellular fractionation, most ADIRF-AS1 was detected in the cytoplasm of OS cells (Figure 3a). A bioinformatics analysis was implemented using the online prediction tools ENCORI and miRDB to identify possible interacting miRNAs that target ADIRF-AS1. Five overlapping miRNAs (Figure 3b) with the potential to interact with ADIRF-AS1 were detected. We measured their expression in ADIRF-AS1-deficient OS cells using qRT-PCR, identifying a significant increase in miR-761 levels, while the levels of the other four miRNAs remained unaltered (Figure 3c). The wild-type and mutant miR-761 binding sites within ADIRF-AS1 are shown in Figure 3d. As evidenced by a luciferase reporter assay, exogenous miR-761 expression restricted the luciferase activity of wt-ADIRF-AS1 in OS cells but did not affect the activity of mut-ADIRF-AS1 (Figure 3e). Additionally, qRT-PCR detected a considerable decrease in miR-761 levels in OS tissues compared with its levels in normal tissues (Figure 3f). Furthermore, ADIRF-AS1 levels in OS tissues negatively correlated with miR-761 levels (Figure 3g). Finally, a RIP assay confirmed that ADIRF-AS1 and miR-761 were enriched in Ago2-containing immunoprecipitated RNA (Figure 3h), which suggests that ADIRF-AS1 and miR-761 co-exist in the RNA-induced silencing complex.
Altogether, these data show that ADIRF-AS1 can sponge miR-761 in OS.
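The ENCORI/miRDB screening step described above amounts to a set intersection. The toy sketch below illustrates it; apart from miR-761, the miRNA names are hypothetical placeholders, since the other four overlapping miRNAs are not named here.

```python
# Toy sketch of the ENCORI/miRDB screening step as a set intersection.
# Apart from miR-761, the miRNA names are hypothetical placeholders.
encori = {"miR-761", "miR-22-3p", "miR-103a-3p", "miR-335-5p", "miR-615-3p", "miR-9-5p"}
mirdb = {"miR-761", "miR-22-3p", "miR-103a-3p", "miR-335-5p", "miR-615-3p", "miR-124-3p"}

overlapping = sorted(encori & mirdb)  # candidates predicted by both tools
```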
IRS1 is controlled by the ADIRF-AS1/miR-761 axis in OS cells
The regulatory actions of miR-761 in OS cells were also investigated. A miR-761 mimic was used to overexpress miR-761 in OS cells. The 3ʹ-UTR of IRS1 harbored a complementary binding site for miR-761 (Figure 5a), which sparked our interest in determining its regulatory roles in OS progression [35][36][37][38][39]. As evidenced by the luciferase reporter assay, the activity triggered by the wt-IRS1 reporter plasmid decreased in miR-761 mimic-transfected OS cells. However, this repressive effect was counteracted when the binding site was mutated (Figure 5b). Moreover, IRS1 mRNA and protein levels were quantified after miR-761 mimic transfection, indicating that miR-761 decreases IRS1 expression in OS cells (Figure 5c and d).
To determine the regulatory effect of ADIRF-AS1 on IRS1 expression, ADIRF-AS1 was depleted in OS cells, which resulted in a notable decrease in IRS1 expression (Figure 6a and b). Nevertheless, the IRS1 reduction due to ADIRF-AS1 silencing could be abrogated by miR-761 inhibition (Figure 6c and d). Furthermore, IRS1 was highly expressed (Figure 6e) and its levels positively correlated with ADIRF-AS1 levels (Figure 6f) in OS tissues. Moreover, an inverse relationship was observed between IRS1 and miR-761 expression (Figure 6g). ADIRF-AS1, miR-761, and IRS1 were all enriched in Ago2-containing immunoprecipitated RNA (Figure 6h), which confirmed their presence in the RNA-induced silencing complex. These results suggest that ADIRF-AS1, miR-761, and IRS1 constitute a ceRNA network in OS and that ADIRF-AS1 controls IRS1 expression by competitively binding to miR-761.
miR-761 underexpression or IRS1 overexpression offsets ADIRF-AS1 ablation-induced antitumor activities in OS cells
After verifying the carcinogenic actions of ADIRF-AS1 and the relationship of ADIRF-AS1 with miR-761 and IRS1, rescue experiments were performed to evaluate their functional relationships. A miR-761 inhibitor was used in the rescue experiments, and its efficiency in lowering miR-761 levels was validated by qRT-PCR (Figure 7a). After the transfection of ADIRF-AS1-silenced OS cells with a miR-761 inhibitor or an NC inhibitor, cell proliferation and apoptosis were detected by the CCK-8 assay and flow cytometry. The inhibition of proliferation and promotion of apoptosis (Figure 7b and c) in ADIRF-AS1-silenced OS cells were reversed by miR-761 inhibitor treatment. Additionally, the suppression of OS cell migration and invasion induced by si-ADIRF-AS1 transfection was recovered after miR-761 inhibitor cotransfection (Figure 7d and e).
Depleted ADIRF-AS1 restricts tumor growth in vivo
The anti-growth effect of ADIRF-AS1 knockdown on OS cells was demonstrated in vivo using xenograft experiments. Prior to that, we determined the knockdown efficiency of sh-ADIRF-AS1, which was highest in U-2OS cells. Accordingly, U-2OS cells were chosen for the xenograft experiments (Figure 9a). The growth of sh-ADIRF-AS1-transfected xenografts was notably suppressed in comparison with that of sh-NC-transfected xenografts (Figure 9b). The xenografts excised from sh-ADIRF-AS1-injected mice were strikingly smaller (Figure 9c) and lighter (Figure 9d) than those from sh-NC-injected mice. Moreover, qRT-PCR detected downregulated ADIRF-AS1 (Figure 9e) and increased miR-761 (Figure 9f), and western blotting detected decreased IRS1 protein (Figure 9g), in xenografts obtained from sh-ADIRF-AS1-injected mice compared with those from sh-NC-injected mice. Therefore, ADIRF-AS1 knockdown weakens the growth capacity of OS cells in vivo.
Discussion
There is vast evidence that lncRNAs exert critical regulatory roles in OS [40][41][42]. The modulation of key signaling pathway genes by lncRNAs is also implicated in OS occurrence and progression [22]. However, the contribution of many lncRNAs to OS pathogenesis has not yet been clarified and requires further exploration. Herein, our findings validated that ADIRF-AS1 plays carcinogenic roles in OS by affecting miR-761/IRS1, providing evidence for the development of novel modalities of drug administration in anticancer therapeutics.
LncRNAs have attracted considerable attention in recent years. For instance, the lncRNAs UCA1 [43], FGD5-AS1 [44], and NEAT1 [45] are upregulated in OS and exacerbate osteosarcomagenesis. On the contrary, H19 [46], TUSC7 [47], and LINC00691 [48] are expressed at low levels in OS and inhibit oncogenicity. Nevertheless, the detailed functions of ADIRF-AS1 in OS had not been fully elucidated. In this study, ADIRF-AS1 was confirmed to be overexpressed in OS. Specifically, the overall survival of patients with OS who had high ADIRF-AS1 levels was shorter than that of those with low ADIRF-AS1 levels. ADIRF-AS1 knockdown led to restricted proliferation, migration, and invasiveness and increased apoptosis in OS cells. Additionally, ADIRF-AS1 downregulation impeded tumor growth in vivo. Our results may provide an effective reference for the clinical management of OS.
Mechanistically, lncRNAs participate in physiological and pathological processes in different ways, largely determined by their localization [49]. Regarding cytoplasmic lncRNAs, the ceRNA theory has been extensively researched, having shown that lncRNAs contain miRNA response elements and competitively bind to certain miRNAs, ultimately decreasing the repression of downstream genes by miRNAs [34]. The role of lncRNAs in modulating the aggressiveness of OS cells is associated with the complex crosstalk among multiple RNAs in the ceRNA network [50,51]. The lncRNA/miRNA/mRNA pathway, which is regulated by ceRNA, supplements the roles of miRNAs [52]. The ceRNA regulatory pathway provides a new perspective that can clarify the molecular events in which ADIRF-AS1 engages in OS. Initially, we determined the cellular location of ADIRF-AS1 in OS cells. Using subcellular fractionation, our data showed that ADIRF-AS1 was abundant in both the nucleus and cytoplasm, with more ADIRF-AS1 in the cytoplasm. Considering that lncRNAs may function as ceRNAs or molecular sponges, a bioinformatics analysis was performed to predict ADIRF-AS1-interacting miRNAs. We identified a miR-761-binding site in the ADIRF-AS1 sequence and employed a luciferase reporter assay and RIP to confirm the target binding effect. Subsequently, mechanistic studies successfully demonstrated that IRS1 is a direct target of miR-761 in OS. Next, ADIRF-AS1 was confirmed to exert positive regulatory activity on IRS1, since when ADIRF-AS1 was knocked down in OS, miR-761 was upregulated and IRS1 was downregulated. Furthermore, ADIRF-AS1, miR-761 and IRS1 were all significantly dysregulated in OS tissues. Altogether, the three RNAs, ADIRF-AS1, miR-761 and IRS1, constitute a novel ceRNA regulatory axis in OS. miR-761 is upregulated in hepatocellular carcinoma [53], breast cancer [54], and gastric cancer [55], where it plays a carcinogenic role. In contrast, miR-761 is downregulated in ovarian cancer [56], colorectal cancer [57,58], and OS [59][60][61], where it exerts cancer-suppressing effects. These observations imply that the expression profile and functions of miR-761 display tissue specificity in human cancers. In agreement with previous studies [59][60][61], our research also authenticated miR-761 as an anti-oncogenic miRNA in OS.
Furthermore, IRS1, as a mediator of oncogenic insulin-like growth factor signaling, was verified to be a direct downstream target of miR-761. During osteosarcomagenesis and progression, IRS1 executes important regulatory functions and participates in the control of various tumor-associated malignant activities [35][36][37][38][39].
(Figure 9 caption: (b) the growth of ADIRF-AS1-deficient xenografts was lower than that of the control group; (c) pictures of xenografts collected from both groups; (d) the weight of xenografts in the sh-ADIRF-AS1 group was lower than that of xenografts in the control group; (e, f) qRT-PCR demonstrated a notable decrease in ADIRF-AS1 and an increase in miR-761 in xenografts with stable ADIRF-AS1 ablation; (g) western blotting revealed the obvious downregulation of IRS1 protein in ADIRF-AS1-depleted xenografts; **P < 0.01.)
Using a rescue experiment, we found that miR-761 underexpression or IRS1 restoration could neutralize the effects of ADIRF-AS1 ablation in OS. Accordingly, the ADIRF-AS1/miR-761/IRS1 pathway was acknowledged as a promoter of OS malignancy, and miR-761/IRS1 was characterized as the downstream effector of ADIRF-AS1.
In the last decade, multiple inhibitors targeting lncRNAs have been revealed to promote cancer regression [62][63][64]. However, only a very small fraction has shown clinical relevance. In recent years, the use of antisense oligonucleotides (ASOs) has provided novel insight into cancer diagnosis and treatment [65]. For instance, a lncRNA called prostate cancer antigen 3 (PCA3) has been applied in clinical practice as a biomarker for prostate cancer diagnosis [66]. Therefore, lncRNAs have considerable potential as diagnostic biomarkers and therapeutic targets for the early diagnosis and management of OS.
Herein, we used two OS cell lines, MG-63 and Saos-2, to explore the regulatory activities and underlying mechanisms of ADIRF-AS1 in OS. However, the 143B cell line, which presents high aggressiveness and the ability to metastasize, was not adopted for the functional experiments, which constitutes a limitation of our study that we will resolve in the near future.
Conclusion
Briefly, we demonstrated that ADIRF-AS1 exacerbates the oncogenicity of OS cells by targeting the miR-761/IRS1 axis, in which ADIRF-AS1 acts as a ceRNA for miR-761 and consequently leads to IRS1 overexpression. Our findings may aid the development of lncRNA-directed therapeutics for OS.
Highlights
ADIRF-AS1 was overexpressed in OS and was closely related to patients' overall survival.
ADIRF-AS1 exacerbated the oncogenicity of OS cells in vitro and in vivo.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Funding
Our current research was supported by the Weifang Yidu Central Hospital. | 2022-01-16T06:16:25.798Z | 2022-01-14T00:00:00.000 | {
"year": 2022,
"sha1": "0f045ed8061b9883024b4b043f26348c6d3b7008",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21655979.2021.2019872?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a2d3734f7da67dc58fce76e455d6235306801f41",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
241584529 | pes2o/s2orc | v3-fos-license | Effectiveness of a clinical pilates program in women with chronic low back pain: A randomized controlled trial
Ozden Baskan (1), Ugur Cavlak (2), Emre Baskan (3)
(1) Physical therapist, İstanbul Rumeli University School of Physical Therapy and Rehabilitation, Istanbul
(2) Physical therapist, Avrasya University School of Physical Therapy and Rehabilitation, Trabzon
(3) Physical therapist, Pamukkale University School of Physical Therapy and Rehabilitation, Denizli, Turkey
Introduction
Factors affecting the pain severity of nonspecific back pain include sociodemographic characteristics, physical and psychosocial factors, lifestyle, repetitive activities, pushing and pulling activities, and static work posture [1,2]. Nonspecific low back pain caused by pathologies related to the spine, sacroiliac joints, ligaments and paraspinal muscles, dura, spinal cord, and nerve roots is defined as mechanical low back pain [3,4]. Pilates exercises, which have become popular in recent years, strengthen the abdominal and back muscles and contribute positively to lumbar spine posture. Clinical Pilates is a further modification of this method adapted for therapeutic use, concentrating on the core muscles [5,6]. In the literature, there are several studies evaluating Pilates, but the results are conflicting [7]. Although there is a large body of research on this subject, stronger evidence about the effectiveness of Pilates is needed. Strengthening and stretching exercises are widely used in home exercise programs. The purpose of our study was to examine the effectiveness of clinical Pilates in women with chronic nonspecific low back pain. Specifically, we compared the effects of clinical Pilates training versus a home exercise program on pain, muscle strength, pulmonary function, balance ability and disability level in women with chronic nonspecific low back pain.
Material and Methods
Forty women between the ages of 30 and 45 were included in our study. According to the power analysis, including 40 participants (20 per group) would yield 90% power at a 95% confidence level. The participants were randomly divided into two groups: a Pilates group and a control (home exercise) group. A computer-aided block randomization method was used to randomize the cases. Our study was conducted with the approval of the Pamukkale University Faculty of Medicine Ethics Committee (60116787/020/44903). All participants initially gave written consent, and the study was carried out in accordance with the principles of the Declaration of Helsinki. Female participants who had been diagnosed with mechanical low back pain at least 8 weeks earlier and who had experienced low back pain for at least 3 months were included in the study. Simple analgesia was permitted, but participants were requested to refrain from seeking other forms of treatment during the trial. The exclusion criteria were pregnancy, radicular back pain, previous spinal surgery, and orthopedic and neurological diseases. Before the start of the training, all participants received a 2-hour Back School session given by the physical therapist, covering the anatomy of the lumbar region, muscle structure, the neutral spine position, proper lumbar posture, methods of protecting the low back during daily activities, and office ergonomics for individuals with low back pain. Participants in the Pilates group (n=20) performed 45 minutes of clinical Pilates exercises with a physical therapist 3 times per week for a total of 8 weeks. The home exercise program widely used in clinics was recommended to the home exercise group (n=20).
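A sketch of the sample-size calculation and block randomization described above is given below in Python (statsmodels assumed available). The effect size is an assumption chosen for illustration, since the study reports only the resulting numbers (n = 40, 90% power, alpha = 0.05).

```python
# Sketch of the sample-size calculation and block randomization (statsmodels assumed).
# The effect size of 1.05 is an illustrative assumption, not reported in the paper.
import random
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=1.05, power=0.90, alpha=0.05)
# -> roughly 20 participants per group for this assumed (large) effect size

def block_randomize(n_blocks: int, block=("Pilates", "Pilates", "Home", "Home")):
    """Computer-aided block randomization into two balanced groups."""
    allocation = []
    for _ in range(n_blocks):
        shuffled = list(block)
        random.shuffle(shuffled)
        allocation.extend(shuffled)
    return allocation

groups = block_randomize(10)  # 40 participants, 20 per group
```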
The participants performed the home exercises in their own homes. To monitor adherence to the exercise program, participants were called by phone once a week. All forty participants, in the clinical Pilates group and the home exercise group, completed the study and were evaluated before and after the training. Home exercise program. The home exercise program consisted of the following exercises: 1. In the hook-lying position, the participant reaches toward her knees with her hands, head raised.
Clinical Pilates program
The 5 key elements, consisting of centering, breathing, head and neck placement, shoulder placement, and chest wall placement, were taught to the Pilates group before the first session. The Pilates exercise program included warm-up and cool-down exercises, and each exercise was repeated 7-8 times [9]. 1. Warm-up (foot series, roll up, chest stretching, upper body warming exercises, upper body series, side plie with stretch, walking). 2. Roll down, roll down with push up, shoulder bridge, hip twist, abdominal preparation, oblique preparation, breaststroke preparation, swan dive, single leg kick, clam, single leg circle, swimming, arm opening, spine twist. 3. Cool down (spine stretch, the saw, mermaid, piriformis stretch, hamstring stretch). Questionnaires and scales used in the evaluation of participants: Demographic data form: age, smoking, marital status, number of childbirths, and type of childbirth were recorded. Assessment of pain severity: the Visual Analogue Scale (VAS) was used to rate the intensity of low back pain [10]. Strength assessment: participants' trunk flexion, trunk extension, hip flexion, hip extension, hip abduction, hip adduction, knee flexion and knee extension muscle strength were assessed with a digital force meter (Power Track). The "make test" was used as the measurement technique, in which the examiner holds the dynamometer stationary while the participant exerts maximal force against it. The participant was asked to hold the maximal isometric contraction for 5 seconds, and the average of 3 consecutive maximal contractions, separated by 30-second intervals, was recorded [11,12]. Pulmonary function: forced expiratory volume in the first second (FEV1%), forced vital capacity (FVC%), and the FEV1/FVC ratio were measured. During the test, the participant sat down and the nose was closed with a clip; the participant was asked to inhale very deeply, to exhale very quickly and completely, and to hold the breath for a short time [13]. Balance assessment: static balance was evaluated with the flamingo balance test [14], and dynamic balance was assessed with the functional reach test [15]. Oswestry Pain Scale: it consists of 10 items questioning daily life activities: pain severity, personal care, lifting, walking, sitting, standing, sleeping, social life, travel, and the changing degree of pain. Each item is scored from 0 to 5, with 0 indicating the least and 5 the greatest limitation; total scores of 0-14 indicate mild, 15-29 moderate, and 30 or above severe functional limitation [16]. SPSS for Windows 22.0 was used for all statistical analyses. Descriptive statistics were given as mean ± standard deviation (X ± SD) or %. The significance level was set at p = 0.05. The Wilcoxon test was used to determine within-group differences before and after treatment, and the Mann-Whitney U test was used for differences between the groups [17].
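A minimal Python sketch of the two tests named above is shown below (SciPy assumed available); the VAS scores are hypothetical placeholders, not the study's data.

```python
# Sketch of the within-group and between-group tests (hypothetical VAS scores).
from scipy.stats import mannwhitneyu, wilcoxon

vas_pre = [7, 6, 8, 5, 7, 6, 8, 7]
vas_post = [3, 4, 4, 2, 3, 4, 5, 3]

# Paired pre/post comparison within one group (Wilcoxon signed-rank test).
w_stat, p_within = wilcoxon(vas_pre, vas_post)

# Independent comparison of pre-to-post changes between groups (Mann-Whitney U).
change_pilates = [pre - post for pre, post in zip(vas_pre, vas_post)]
change_home = [1, 0, 2, 1, 0, 1, 1, 0]
u_stat, p_between = mannwhitneyu(change_pilates, change_home)
```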
Results
There was no difference between the groups in terms of age, weight, height, or body mass index (Table 1). When the groups' baseline pain severity, trunk and lower extremity muscle strength, balance levels and disability levels were compared, there was no significant difference between the groups (p > 0.05). In the clinical Pilates group, pain severity significantly decreased after training (p < 0.05). Trunk flexor and extensor muscle strength significantly increased after the clinical Pilates training (p ≤ 0.05), as did bilateral hip flexion, hip extension, hip abduction, hip adduction, and knee extension muscle strength (p ≤ 0.05). The flamingo balance test and functional reach test results also improved significantly after the training in the clinical Pilates group (p < 0.05), and there was a significant decrease in Oswestry pain scale scores (p < 0.05) (Table 2). In the home exercise group, there was no significant change in pain severity as measured by the VAS (p > 0.05), and no statistically significant improvement in trunk or lower extremity muscle strength. Neither method was superior in respiratory function, although FVC tended to improve in the clinical Pilates group (p = 0.09 for FVC; p = 0.32 for FEV1/FVC). There was also no significant improvement in balance levels or Oswestry pain scale scores in the home exercise group (p > 0.05). In our study, we determined that clinical Pilates was more effective than the home exercise program in reducing pain severity in patients with mechanical low back pain (Figure 1). When the clinical Pilates group and the home exercise group were compared, the clinical Pilates group showed a statistically significant within-group increase in trunk flexion muscle strength after training (p < 0.05), but no between-group difference was shown (p = 0.27). For trunk extension, hip flexion, hip extension, hip abduction, hip adduction, and knee extension muscle strength, the clinical Pilates group showed statistically significant improvements compared with home exercise training (p ≤ 0.05). When the flamingo balance test and functional reach test results of both groups were examined, the clinical Pilates method was statistically significantly superior to the home exercise program; clinical Pilates training was more effective in improving static and dynamic balance. Comparing the Oswestry pain scale results of the two groups, we can say that clinical Pilates is more effective than the home exercise program in reducing the level of disability.
Discussion
Our hypothesis was that clinical Pilates training in women with chronic nonspecific low back pain would provide greater improvements in muscle strength, pain, balance and disability than the home exercise program. The purpose of this study was therefore to compare the effectiveness of a clinical Pilates program for nonspecific low back pain with an exercise program consisting of flexibility exercises and abdominal and back muscle strengthening, which is commonly used in clinics.
We found a significant decrease in pain intensity in the Pilates group, whereas there was no significant change in the home exercise group. When the two methods were compared, clinical Pilates was more effective than the home exercise program in decreasing pain. The literature likewise shows that clinical Pilates reduces chronic mechanical low back pain. In a meta-analysis, Owen et al. stated that Pilates, stabilization/motor control, aerobic and resistance exercise programs are the most effective treatment methods in individuals with chronic nonspecific low back pain [18]. Chronic back pain and the muscle weakness that develops over time lead to postural disorders, and decreasing movement and mobility further weaken the muscles. We think that this situation also negatively affects balance. Therefore, we think that clinical Pilates exercises have positive effects on muscle weakness and posture in chronic low back pain and should be included in physical therapy and rehabilitation programs.
There was a statistically significant increase in trunk and lower extremity muscle strength in the clinical Pilates training group compared with the home exercise group. We can say that clinical Pilates training is a more effective method than the home exercise program, especially for improving hip flexion and extension muscle strength. The clinical Pilates program was more effective than the home exercise program for improving muscle strength in all movements except trunk flexion; for trunk flexion muscle strength, neither method was superior to the other.
In the Pilates program, the participant works against her own body weight and auxiliary tools while the level of difficulty of the movements is progressively increased, and all exercises are controlled under the supervision of a physiotherapist, whereas in the home exercise program the participant works individually against body weight at home. We think that the observed difference arises from this. Its effect on dynamic balance was not statistically significant. In our study, we found significant improvement in both static and dynamic balance in patients with chronic low back pain. More investigation of Pilates exercises for balance is needed. A meta-analysis conducted in 2020 investigated the effect of Pilates exercises on disability in patients with low back pain; its results showed that Pilates exercises had a significant positive effect on disability improvement compared with other types of exercise [22]. Another review article suggested that the Pilates method has demonstrated excellent results in pain perception and intensity, functional capacity, fear of movement, health perception, muscle strength, and flexibility [23]. As a result, it was concluded that Pilates is an effective method for pain and disability. Core weakness is very important in patients with low back pain [24]. Axial extension and stabilization principles are applied throughout Pilates exercises. Studies show that the transversus abdominis, multifidus, diaphragm, pelvic floor muscles and abdominal oblique muscles are the key muscles for lumbar stabilization in relation to low back pain [24,25]. A limitation of this study was that the long-term effects of the exercise programs were not investigated. A strength of this study was that two exercise programs affecting parameters such as strength, balance and pulmonary function were investigated in patients with chronic low back pain. Blinded assessments were done by an independent physiotherapist, and both interventions were delivered by the same physiotherapist to minimize differences. This study showed that clinical Pilates exercises are more effective than a home exercise program in reducing pain and disability and improving strength and balance in patients with chronic nonspecific low back pain. Physiotherapists should use clinical Pilates exercises in the treatment plans of patients with nonspecific low back pain. | 2018-12-18T11:04:39.158Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "7dbb07d8ecca3a33d8c0982c649e29ba45d99bc8",
"oa_license": null,
"oa_url": "https://doi.org/10.4328/acam.20648",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f5083505417659b231cc17b274b5ac93e3011d3a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
257661940 | pes2o/s2orc | v3-fos-license | Correction of UAV LiDAR-derived grassland canopy height based on scan angle
Grassland canopy height is a crucial trait for indicating functional diversity or monitoring species diversity. Compared with traditional field sampling, light detection and ranging (LiDAR) provides new technology for mapping regional grassland canopy height in a time-saving and cost-effective way. However, grassland canopy height based on unmanned aerial vehicle (UAV) LiDAR is usually underestimated, with height information lost due to the complex structure of grassland and the relatively small size of individual plants. We developed canopy height correction methods based on scan angle to improve the accuracy of height estimation by compensating for the loss of grassland height. Our method established the relationships between scan angle and two height loss indicators (height loss and height loss ratio) using the ground-measured canopy height of 1×1 m sample plots and the LiDAR-derived height. We found that the height loss ratio, which takes the plant's own height into account, had a better performance (R2 = 0.71). We further compared the relationships between scan angle and height loss ratio for holistic (25–65 cm) and segmented (25–40 cm, 40–50 cm and 50–65 cm) height ranges and applied them to correct the estimated grassland canopy height, respectively. Our results showed that the accuracy of UAV LiDAR-based grassland height estimation was significantly improved, with R2 increasing from 0.23 to 0.68 for the holistic correction and from 0.23 to 0.82 for the segmented correction. We highlight the importance of considering the effects of scan angle in LiDAR data preprocessing for estimating grassland canopy height with high accuracy, which also helps in monitoring height-related grassland structural and functional parameters by remote sensing.
Introduction
Grassland canopy height is an essential structural trait, which is usually regarded as one of the key indicators for representing grassland functional diversity (Fisher et al., 2018;Rossi et al., 2020;White et al., 2022). It is also used to classify grassland types (Dixon et al., 2014;Wu et al., 2017), and further to characterize various ecosystem functions, such as aboveground biomass (Fassnacht et al., 2021), species diversity (Sankey et al., 2021;Gholizadeh et al., 2022b) and grazing intensity (Bi et al., 2018). The estimation of grassland canopy height can facilitate grassland ecosystem monitoring and adaptive management.
Given the importance of grassland canopy height to grassland ecosystems, it is particularly crucial to achieve regional grassland canopy height mapping with high accuracy. Compared with ground surveys at the sample scale, remote sensing provides spatially continuous data for canopy height estimation over a large region. For optical remote sensing data, some studies have successfully estimated grassland canopy height with random forest models, but numerous parameters (i.e., vegetation indices, climate and topographic factors) were needed as inputs, and they were often inconsistent across grassland types (Spagnuolo et al., 2020; Yin et al., 2020; Dusseux et al., 2022). Moreover, since vegetation indices are highly susceptible to change under the influence of other factors such as view angle, canopy structure and topography (Sims et al., 2011; Chen et al., 2020; Gu et al., 2021), it is difficult to find a universal vegetation index to monitor grassland canopy height. A previous study even failed to find relationships between multiple vegetation indices (VIs) and grassland canopy height (Tiscornia et al., 2019). In addition, UAV-borne ultrahigh-resolution imagery is also used for the extraction of canopy height or other structural information based on structure-from-motion (SfM) photogrammetry, which can obtain point clouds from multiple images (Kalacska et al., 2017; Coops et al., 2021). However, the canopy height model (CHM) derived using the SfM method shows some uncertainties in the vertical direction, especially for low plants (Cunliffe et al., 2016; Wijesingha et al., 2019), which influences the accuracy of grassland canopy height estimation (Michez et al., 2019; Zhang et al., 2022).
Light detection and ranging (LiDAR) offers new insight and technology for the direct acquisition of vegetation canopy height owing to its more powerful penetration ability compared with optical remote sensing techniques (Lefsky et al., 2002). LiDAR provides both 3D spatial point cloud data and backscattered (intensity) data for each observation point through active laser emission and reception, and it can distinguish ground and non-ground points from multiple pulse-echo data (Bakx et al., 2019; Calders et al., 2020; Coops et al., 2021). Moreover, LiDAR point clouds provide rich information without being limited by the spatial resolution of pixels (Jansen et al., 2019; Zheng et al., 2022). LiDAR has been widely used to obtain the structure of forest ecosystems (Wulder et al., 2012; Zhao et al., 2018; Coops et al., 2021), and global forest canopy height products based on spaceborne LiDAR have even been produced (Simard et al., 2011; Simard et al., 2018; Lang et al., 2022). In comparison, the application of LiDAR technology in grassland ecosystems is still limited due to the complexity of grassland (low height and high density) and the small size of individual plants. Although grassland canopy height can be estimated accurately from terrestrial laser scanning (TLS) data (Guimarães-Steinicke et al., 2019; Tian et al., 2020; Xu et al., 2020), such estimation is limited to the sample plot scale. On the contrary, spaceborne LiDAR is too coarse to detect grassland canopy height because of its low point density or large laser footprint, even though it can cover large regional or global areas (Malambo and Popescu, 2021). While providing relatively high point density, UAV LiDAR offers the possibility of scaling from sample plot to region for monitoring grassland canopy height directly (Da Costa et al., 2021; Zhang et al., 2021), and it has been successfully used to monitor the canopy height of forests, shrubs, and crops (Zalite et al., 2016; Liu et al., 2018b; Zhao et al., 2021).
Several studies that explored the potential of UAV LiDAR to estimate canopy height in grasslands reported height underestimation (Miura et al., 2019; Zhao et al., 2022). This underestimation would affect the estimation of grassland ecosystem functions related to canopy height and its heterogeneity. For example, Da Costa et al. (2021) found that grassland aboveground biomass estimated from UAV LiDAR-derived canopy height was lower than that measured on the ground. In forests, a high-density canopy is considered a main cause, as it hampers the penetration of laser pulses to the ground. Although grasslands may have a higher canopy density than forests, the gaps between grass individuals combined with high point cloud density allow UAV LiDAR to obtain reliable ground elevation information (Getzin et al., 2021; Zhao et al., 2022). Height loss at the canopy proved to be the main cause, greatly reducing the accuracy of grassland height-related structural traits (Miura et al., 2019). Zhao et al. (2022) investigated the height information loss distributed at the canopy top and bottom by comparing UAV LiDAR data with TLS data and ground-measured data. However, few studies have explored the differences in canopy height loss in the horizontal direction perpendicular to flight routes and the factors contributing to this discrepancy. Scan angle has proved to be a factor severely affecting the prediction of structural traits in forests (Keränen et al., 2016; Liu et al., 2018a; Dayal et al., 2022), and a narrower scan angle usually leads to a more accurate estimation of mean height. However, the effects of scan angle on estimating the mean height of grassland remain unclear, which is essential for developing robust LiDAR-based models for regional grassland height estimation across various grassland types.
In this study, we aim to estimate grassland canopy height from UAV LiDAR data and to quantify the impact of scan angle on the loss of canopy height estimation in a temperate meadow steppe. The research questions of this study are: (1) how much uncertainty there is in estimating grassland height with UAV LiDAR data; (2) how strongly the loss of grassland height relates to the scan angle, especially for different height layers of grassland; and (3) how much the accuracy of UAV LiDAR-derived grassland height can be improved after correction based on scan angle. We explore these questions by establishing the relationships between the height loss indicators and scan angle, which are further used to correct grassland canopy height, and by comparing the results with in situ data to demonstrate the feasibility of the correction method. Our study highlights that the accuracy of UAV LiDAR-derived grassland canopy height estimation can be improved based on scan angle, providing a reference for mapping grassland canopy height and height-related parameters more accurately at the regional scale.
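To make the correction idea concrete, the sketch below outlines one plausible implementation in Python. It assumes the height loss ratio is defined as (ground-measured height − LiDAR-derived height) / ground-measured height and that it can be modeled as a simple polynomial function of the absolute scan angle; the fitted form and any coefficients are placeholders, not the values reported later in this paper.

```python
# Conceptual sketch of the scan-angle correction, assuming the height loss ratio
# is (h_measured - h_lidar) / h_measured and varies smoothly with |scan angle|.
# The polynomial form and any fitted coefficients are placeholders.
import numpy as np

def fit_loss_ratio_model(scan_angle_deg, loss_ratio, degree=1):
    """Fit the loss ratio against absolute scan angle from calibration plots."""
    return np.polyfit(np.abs(scan_angle_deg), loss_ratio, degree)

def correct_height(h_lidar, scan_angle_deg, coeffs):
    """Invert the loss model: h_corrected = h_lidar / (1 - predicted loss ratio)."""
    loss = np.polyval(coeffs, np.abs(np.asarray(scan_angle_deg)))
    loss = np.clip(loss, 0.0, 0.95)  # keep the inversion numerically safe
    return np.asarray(h_lidar) / (1.0 - loss)
```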
Study area
This study is conducted at the National Hulunbuir Grassland Ecosystem Observation and Research Station (NHGEORS) (central coordinates: 49°19′56″N, 119°57′18″E) located in the center of the Hulunbuir meadow steppe in the northeast of Inner Mongolia, China (Figure 1). It is one of the most typical temperate meadow steppes in China, belonging to the temperate sub-humid zone with a mean annual precipitation of 380-400 mm, a mean annual temperature of −2 to 1°C and a humidity of 49%-50% (Barneze et al., 2022; Shen et al., 2022). The landform is undulating hills in the piedmont of the Greater Khingan Mountains, with the elevation ranging from 650 m to 700 m. The main soils are the light loamy chernozem and dark chestnut soil developed on the parent material of loess.
As a typical area for observation and research on the structure and function of the temperate meadow grassland ecosystem, more than 30 dominant species were recorded in our field measurements, and several grassland communities could be distinguished, including Poaceae (Leymus chinensis, Stipa baicalensis and Cleistogenes squarrosa), Fabaceae (Astragalus laxmannii and Oxytropis myriophylla), Asteraceae (Artemisia scoparia, Artemisia frigida and Klasea centauroides), Amaryllidaceae (Allium tenuissimum and Allium polyrhizum), and Ranunculaceae (Thalictrum squarrosum and Clematis hexapetala) (Zhu et al., 2019). The grassland structure in this region has obvious hierarchical differences, especially in canopy height, due to the high diversity of grassland species. A representative grassland area covering about 400×400 m with rich species and significant canopy height differences was selected for the UAV flight experiment and method development in this region.
Data acquisition
Field measurements
The field measurement data were collected from August 1 to 6 in 2021. A total of 32 sample plots were uniformly distributed in the northwest and southeast regions of the study area. Each plot was 1×1 m in size and was divided into 25 investigation units of 0.2×0.2 m. Within each unit, all species and their respective numbers were recorded, and the height of each dominant species was measured at least three times from three different individual plants. Therefore, the mean height of each unit could be calculated as the mean height of each species weighted by species abundance. Plot-level canopy height was obtained by averaging the mean height of each unit. Besides, the central coordinates of each sample plot were recorded by a Trimble GeoXH 3000 handheld GPS (Trimble Navigation Ltd, Sunnyvale, USA), and differential correction was performed to minimize the position errors and obtain decimeter-level positioning accuracy.
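A minimal sketch of the plot-level aggregation described above (the function and variable names are illustrative, not from the paper): each unit's mean height is the abundance-weighted mean of its species heights, and the plot height is the average over the 25 units.

def unit_mean_height(species_heights, species_counts):
    # species_heights: mean measured height per species in the unit (cm)
    # species_counts: number of individuals per species (abundance weights)
    total = sum(species_counts)
    return sum(h * n for h, n in zip(species_heights, species_counts)) / total

def plot_mean_height(units):
    # units: list of (species_heights, species_counts) tuples, one per unit
    return sum(unit_mean_height(h, n) for h, n in units) / len(units)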
FIGURE 1
The location of National Hulunbuir Grassland Ecosystem Observation and Research Station (red star) and land cover data with 10m spatial resolution from ChinaCover2020 (left) (Wu et al., 2017), and UAV RGB image with 32 field-measured sample plots and photographs (right).
UAV LiDAR data acquisition and preprocessing
The flight campaign for acquiring the UAV LiDAR data of the study area was conducted on August 6, 2021 using a DJI M600 UAV platform (DJI, Shenzhen, China). A LiAir VH Pro LiDAR scanning system (Green Valley Inc., Beijing, China) was equipped, emitting 905 nm pulses at a frequency of 480,000 pulses/second with a detection range of about 320 m. The integrated LiAir VH Pro system, equipped with a global navigation satellite system (GNSS) and an inertial navigation system (INS), provided high-precision positioning (5 cm) for the UAV LiDAR data. The UAV flew at around 100-120 m above ground level with a flight speed of 5-6 m/s. A maximum scan angle of ±40° and more than 50% flight strip overlap were set, resulting in an average point cloud density of 248 points/m². Additionally, UAV RGB images of the flight region with 2 cm spatial resolution were obtained synchronously with the LiDAR data acquisition, which was conducive to determining the boundaries of the sample plots. Among the ground sample plots, 19 sample plots were scanned twice with different scan angles by the UAV laser scanner, while 13 sample plots were scanned once; thus, a total of 51 observation samples were acquired.
Before extracting the height of the grassland canopy, preprocessing of the UAV LiDAR data by denoising and filtering was performed. Outliers were identified and eliminated when the distance from a point to its nearest neighboring point exceeded a threshold, which was determined as the mean distance plus five times its standard deviation (mean distance + 5×std) in this study. After denoising, a local minimum filtering algorithm was applied to identify seed ground points within each investigation unit (0.2×0.2 m), which has been proved to be an effective method for ground finding (Wang et al., 2017; Zhao et al., 2022). The remaining laser return points were considered vegetation points. The vegetation and ground points were classified with the commercial software Terrasolid (Terrasolid, Helsinki, Finland).
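A minimal sketch of the distance-based denoising step described above, assuming a point array and using scipy's k-d tree; the threshold rule (mean distance + 5×std) follows the text, everything else is illustrative:

import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points):
    # points: (N, 3) array of x, y, z coordinates
    tree = cKDTree(points)
    # query k=2: the first column is the point itself (distance 0)
    dists, _ = tree.query(points, k=2)
    nn = dists[:, 1]                       # distance to nearest neighbour
    threshold = nn.mean() + 5 * nn.std()   # mean distance + 5×std, as in the text
    return points[nn <= threshold]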
Estimation of mean canopy height and extraction of scan angle
There are several methods commonly used to extract height parameters, such as those based on the 95th or other percentiles (Zhao et al., 2022) and the rasterized canopy height model (CHM) (Wang et al., 2017; Zhang et al., 2021). In this study, we estimated grassland canopy height by building the CHM. A digital surface model (DSM) and a digital terrain model (DTM) with 0.1 m spatial resolution were generated by triangulated irregular network interpolation based on the vegetation points and ground points, respectively. The CHM was established by subtracting the DTM from the DSM, and the value of each pixel was the estimated mean height of all plants within that pixel. To ensure comparability with the measured heights from the ground sample plots (1×1 m), we extracted all the pixels (10×10) within each ground sample plot from the CHM, located by the central coordinates and high-precision UAV RGB images, and the mean CHM value of all pixels was calculated as the LiDAR-estimated mean grassland height. Meanwhile, the mean scan angle (absolute value) of all point clouds within these pixels at one scan was calculated as the scan angle of the corresponding observation sample.
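A schematic of the CHM-based plot extraction described above (the 0.1 m rasters and the 10×10 pixel window come from the text; the rasterization itself is assumed done upstream, and all names are illustrative):

import numpy as np

def plot_height_and_angle(dsm, dtm, window, scan_angles):
    # dsm, dtm: 0.1 m rasters of surface and terrain elevation (2D arrays)
    chm = dsm - dtm                       # canopy height model
    r0, r1, c0, c1 = window               # 10x10 pixel block covering the 1x1 m plot
    est_height = chm[r0:r1, c0:c1].mean() # LiDAR-estimated mean grassland height
    # scan_angles: scan angles of all returns inside the plot for one scan
    mean_angle = np.abs(np.asarray(scan_angles)).mean()
    return est_height, mean_angle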
Division of modeling and validation datasets
We divided the sample plots into two datasets for the modeling and validation of the height correction, respectively. The modeling dataset included the sample plots with single scanning and the first scan data of the sample plots with double scanning (N = 32), while the second scan data of the sample plots with double scanning were used as the validation dataset (N = 19) (Table 1). The table shows similar data distributions between the two datasets, indicating a reasonable division of our samples.
Height correction
Two indicators were adopted to represent the height information loss in our study: height loss (H_loss) and height loss ratio (H_lossRatio), calculated as follows:

H_loss = H_measured − H_estimated

H_lossRatio = H_loss / H_measured

where H_loss was the loss of grassland canopy height at the sample plot level; H_measured and H_estimated were the canopy height measured on the ground for each sample plot and the corresponding estimated mean canopy height based on UAV LiDAR data, respectively. H_lossRatio was the proportion of the height information loss to the measured grassland canopy height.
The relationships between height loss indicators and scan angle were determined based on a simple linear regression model to quantify the effects of scan angle on grassland canopy height estimation. The F-test was adopted to test the significance of these relationships at the 0.01 and 0.05 levels. In order to further analyze the effect of plant height on the relationship between scan angle and the loss of grassland height, we also compared the performance of the relationships among three segmented ranges of ground-measured canopy height (25-40 cm with 12 samples, 40-50 cm with 10 samples and 50-65 cm with 10 samples), divided according to the plant functional groups with different plant competitive ability (Koyanagi et al., 2013; Ladouceur et al., 2019) combined with the common principle of sample equalization. Additionally, we used a t-test to test the significance of the differences in height loss between the three sets. According to the relationships between height loss indicators and scan angle determined on the modeling dataset within the holistic and segmented height ranges, two forms of correction method (holistic correction and segmented correction) were adopted to calculate the two height loss indicators of the validation dataset and subsequently used for the height loss correction, i.e., H′_estimated = H_estimated + H_loss or H′_estimated = H_estimated / (1 − H_lossRatio), where H′_estimated was the final grassland canopy height of each sample plot within the validation dataset estimated by UAV LiDAR data after correcting the loss of grassland height determined by H_loss or H_lossRatio, which were obtained from their relationships with scan angle.
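A minimal sketch of this correction workflow under the stated assumptions (a linear fit of the loss ratio against scan angle on the modeling data, then inversion of the loss-ratio definition on new plots); the function and variable names are illustrative:

import numpy as np

def fit_loss_ratio_model(scan_angle, h_measured, h_estimated):
    # simple linear regression: loss_ratio = a * angle + b
    loss_ratio = (h_measured - h_estimated) / h_measured
    a, b = np.polyfit(scan_angle, loss_ratio, deg=1)
    return a, b

def correct_height(h_estimated, scan_angle, a, b):
    predicted_ratio = a * scan_angle + b
    # invert H_lossRatio = (H_measured - H_estimated) / H_measured
    return h_estimated / (1.0 - predicted_ratio)

The segmented variant would simply fit (a, b) separately within each height range and apply the coefficients matching the layer a plot belongs to.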
Accuracy assessment before and after correction
We used the validation dataset to assess the performance of the height correction method by comparing the coefficient of determination (R²), root mean squared error (RMSE) and mean absolute percentage error (MAPE) before and after correction. Besides, the 1:1 line was used to measure the deviation of the UAV LiDAR-estimated grassland height from the ground-measured height. The metrics were calculated as follows:

R² = 1 − Σ(y_i − ŷ_i)² / Σ(y_i − ȳ)²

RMSE = sqrt( Σ(y_i − ŷ_i)² / n )

MAPE = (100% / n) Σ |y_i − ŷ_i| / y_i

where n was the number of validation sample plots; y_i and ŷ_i were the measured and predicted values of the i-th sample plot; ȳ was the mean measured value of all the validation sample plots.
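These standard definitions translate directly into code; a small illustrative helper, assuming numpy arrays of measured and predicted plot heights:

import numpy as np

def accuracy_metrics(y, y_hat):
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(ss_res / len(y))
    mape = 100.0 * np.mean(np.abs(y - y_hat) / y)
    return r2, rmse, mape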
Grassland height estimation before correction
Grassland height estimated by UAV LiDAR showed a low accuracy (R² = 0.29, p < 0.01, RMSE = 27.59 cm) and a significant underestimation compared with the field-measured height (Figure 2A). The field-measured grassland canopy height for the plots of the modeling dataset ranged between 26.9 cm and 63.9 cm with an average of 45.6 cm, while the corresponding UAV LiDAR-estimated heights only ranged from 10.5 cm to 34.4 cm with a mean height of 19.9 cm.
Our results also showed inconsistent performance of UAV LiDAR-derived grassland canopy height in the three height ranges (Figure 2B). The highest correlation between measured height and estimated height was found in the height range of 50-65 cm (R² = 0.34, p < 0.05), which was better than that of all samples (Figure 2A), but this range also had the maximum underestimation deviations, with a MAPE of 60.7%. Beyond our expectations, UAV LiDAR basically failed to estimate the grassland height in the height ranges of 25-40 cm and 40-50 cm, with R² = 0.07 and 0.02 (p > 0.05), and it even showed a negative relationship in the height range of 40-50 cm.

FIGURE 2
Plot-level measured canopy height versus estimated canopy height from CHM in the whole height range (A) and three height ranges (25-40cm, 40-50cm, 50-65cm) (B) for the modeling dataset.
Relationships between scan angle and height loss
We compared the relationships between scan angle and the two height loss indicators based on the samples of the modeling dataset, shown in Figure 3. There was a stronger significant negative relationship of scan angle with the height loss ratio (R² = 0.71, p < 0.01) than with height loss (R² = 0.18, p < 0.01). In the scan angle range of 0-40°, the loss of grassland height estimated by UAV LiDAR ranged from 14.3 cm to 40.1 cm, while the loss ratio ranged from 42% to 74%.
Strong linear negative relationships between scan angle and the two height loss indicators were also found when the canopy height ranges of the plants themselves were considered (Figure 4). The height loss ratio still showed higher correlations with scan angle (R² from 0.78 to 0.86) than height loss (R² from 0.69 to 0.79). The highest correlation between height loss ratio and scan angle appeared in the height range of 25-40 cm (R² = 0.86, p < 0.01), while it was 40-50 cm for height loss (R² = 0.79, p < 0.01). The height loss ratio showed consistent variation with scan angle changes for the height ranges of 25-40 cm and 40-50 cm (slope = −0.76 and −0.74), which was shallower than that in the range of 50-65 cm (slope = −0.89). Meanwhile, the fitted maximum values of the loss ratio for the three height ranges differed when the scan angle was 0 (66.36%, 70.56% and 76.22%, respectively).
Grassland canopy height information loss of the sample plots showed certain differences among the three height ranges (25-40 cm, 40-50 cm and 50-65 cm) when the scan angle was between 0 and 40° (Figure 5). With the increase of mean canopy height at the plot level, the mean height loss increased from 23.1 cm (25-40 cm) to 26.8 cm (40-50 cm) and 34.0 cm (50-65 cm), and there were significant differences between each pair at the 0.01 level. By comparison, the differences in height loss ratio between the three height ranges were non-significant, with similar averages of 59.5% in 25-40 cm, 57.9% in 40-50 cm and 60.7% in 50-65 cm, close to the average of all samples (58.3%).
Grassland height after correction
After correcting the loss of grassland height for the validation dataset, the estimated grassland height based on UAV LiDAR showed significantly improved accuracy, with R² increased from 0.23 to 0.68 (holistic correction) and 0.82 (segmented correction), and RMSE decreased from 19.17 cm to 5.26 cm and 3.85 cm, respectively (Figure 6). The segmented correction performed better than the holistic correction for correcting the height loss, as shown in Figures 6B, C.
Effects of scan angle
Our study demonstrated that the loss of grassland height was strongly related to the scan angle, with a larger height loss at smaller scan angles. With the increase of scan angle, more tilted laser beams hit the targets, and there is a larger contact surface between the plant individuals and the laser beams as well as more chances to encounter gaps within the canopy (Kamoske et al., 2019). This implies a higher probability of getting information at the top of the plant and of penetrating deeper into the dense canopy to reach the ground when the scan angle is larger. Moreover, we compared the estimated heights of plants and ground between two scannings with different scan angles and found limited differences for ground elevation but larger differences for grassland canopy height, which indicated that the height loss came from the under-detection of the highest points of the plant canopies. Although height information loss is common in grassland when estimating height based on UAV LiDAR, whether point-based, CHM-based or voxel-based methods are used (Streutker and Glenn, 2006; Guo et al., 2021b), this part of the loss could be effectively corrected by scan angle according to our study. After the correction based on scan angle, the accuracy of grassland canopy height estimation based on UAV LiDAR was significantly improved (Figure 6) and closer to the real values. This enables further accurate estimation of ecological parameters related to grassland canopy height, such as grassland aboveground biomass (Michez et al., 2019; Morais et al., 2021), functional diversity (Lavorel et al., 2007) and revegetation effectiveness (Li et al., 2019). Moreover, the simple linear relationships between scan angle and the loss of grassland height were built on the assumption that all the height information loss was caused by scan angle. However, obtaining the peak information of each individual plant by UAV LiDAR in grassland is still a challenge due to the small size of individual grassland plants and the lack of obvious crowns (Zhao et al., 2022). Thus, determining the amount of height information loss caused by scan angle through multiple repeated observations of the same grassland, combined with the results of model simulations, would be conducive to extracting grassland canopy height information or other structural traits more accurately based on UAV LiDAR. Multiple scanning observations can help to determine the height loss caused by scan angle and can also be applied in grassland height estimation by averaging the heights of multiple corrections. Notably, the loss of grassland height caused by scan angle was further related to the height of the plants themselves. Higher grassland plants tended to have more height loss within the same range of scan angles, while the mean height loss ratio was similar among the three height ranges (Figure 5), which could explain the lower correlation of scan angle with height loss than with height loss ratio without the height separation process (Figure 3). Although the use of the height loss ratio reduced these effects, we still obtained a better performance of height estimation after correcting the loss of grassland height based on the segmented correction than the holistic correction (R² = 0.82 versus 0.68) because of the better fitting relationships between scan angle and height loss ratio in the three height ranges (25-40 cm: R² = 0.86, 40-50 cm: R² = 0.78 and 50-65 cm: R² = 0.82) than in the whole height range (25-65 cm: R² = 0.71). Therefore, the application of the correction method would depend on the grassland condition. The holistic correction method could be established and used to correct the loss for grassland with small height differences at a regional scale. But for grassland with various height layers, the segmented correction method would perform better, which requires a preliminary step of layering the grassland heights. A few metrics related to grassland height might be considered for this layering preprocessing, which can be predicted accurately by remote sensing, such as biomass and some biochemical traits (Jin et al., 2013; Lussem et al., 2019).

FIGURE 3
The relationships between scan angle and height loss (A) and height loss ratio (B).

FIGURE 4
The relationships between scan angle and height loss or height loss ratio at three different height ranges [25-40cm (A-D); 40-50cm (B-E); 50-65cm (C-F)].

FIGURE 5
Mean canopy height information loss (A) and height loss ratio (B) differences of sample plots in three different height ranges (25-40cm, 40-50cm and 50-65cm). ** represents the significance at the level of 0.01 and NS represents non-significant for t-test.
Additionally, the performance of the estimated grassland height was inconsistent between the three height ranges (Figure 2), which mainly depended on the effects of various scan angles. Within each height range, the scan angles determined the loss of grassland height and further affected the height estimation accuracy. For example, the negative relationship between the field-measured and estimated canopy height was found in the height range of 40-50 cm because the sample plots with relatively higher plants (>45 cm) had smaller scan angles, resulting in larger height loss.
Although our results demonstrated that the height loss correction models were effective when the scan angle was less than 40 degrees, whether there is still a significant linear relationship between scan angle and height loss, or whether it tends to saturate when the scan angle is larger than 40 degrees, needs to be further explored. Limited by the parameters of the UAV, it was difficult to obtain complete scan angle data between 0 and 90 degrees; combining terrestrial laser scanning or simulation to complete the entire range of data acquisition and analysis could be considered in future studies.
Height estimation among different grasslands
Our results demonstrated that the ability of UAV LiDAR for grassland canopy height estimation varies across different ranges of grassland height (Figure 2). Different grassland types commonly show various structures due to differences in dominant species, and grassland height is one of the main manifestations, which has even been used to distinguish grassland types (Wu et al., 2017). Moreover, grassland types or conditions are usually considered to be important inputs or influencing factors for estimating ecological parameters by remote sensing (Li et al., 2016; Guo et al., 2021a; Gholizadeh et al., 2022a), and they were also considered to be a vital cause of the different performance of UAV LiDAR-derived grassland height.
In this study, we proved that grassland height estimated by UAV LiDAR data was severely affected by scan angle and that it could be estimated with high accuracy after the scan angle-based correction in the temperate meadow grassland. By comparing the profiles of grassland height estimated by UAV LiDAR data along the direction perpendicular to the flight path (Figure 7), the estimated height showed inverted parabolic trends with scan angle both in our study area and in the temperate typical steppe located at the Inner Mongolia grassland ecosystem research station (43°38′ N, 116°42′ E). The similar performance in the two temperate grassland types demonstrated the universal effects of scan angle on grassland canopy height estimation based on UAV LiDAR. However, we did not determine the specific relationships between scan angle and the loss of grassland height, nor analyze the difference in the relationships between the two grassland types, due to the lack of ground sample data. Conducting similar work in various grassland types to validate the universality of the method would be valuable for building a more robust grassland height estimation model under multiple conditions. Compared with meadow steppes, the lower vegetation cover and simpler vertical structure of typical or desert steppes may affect the height correction results at different height layers due to differences in the reception of UAV LiDAR signals. For such grasslands, height estimation methods have usually been conducted by constructing empirical relationships between in situ data and optical remote sensing data (Lussem et al., 2019), or by building complex machine learning models, which require large amounts of ground survey data (Viljanen et al., 2018; César de Sá et al., 2022). Therefore, analyzing the performance of height estimation for different grassland types by combining LiDAR and optical data is necessary future work for ecological applications.

FIGURE 6
Estimated canopy height versus measured canopy height before (A) and after correcting the height loss based on the holistic correction (B) and segmented correction (C) for the validation dataset.
Impacts of UAV LiDAR data for grassland height estimation
We used a CHM-based method relying on the UAV LiDAR data to estimate the grassland height, and the spatial resolution of the CHM is commonly regarded as a crucial issue affecting the accuracy of height estimation. The selection of the optimal spatial resolution of the CHM has been widely explored in forest ecosystems, while it has rarely been reported in grasslands. For example, it was shown that CHM spatial resolutions between 0.1 m and 2 m all performed well in estimating the canopy height of forests (Yin and Wang, 2019; Gu et al., 2020; Picos et al., 2020). However, we found that grassland height estimation showed less tolerance to the CHM resolution than forests after testing the performance of CHMs with varied spatial resolutions (0.05 m, 0.1 m, 0.2 m, 0.5 m and 1 m). We demonstrated that it had the best performance at the spatial resolution of 0.1 m and might fail when the spatial resolution of the CHM is coarser than 1 m (p > 0.05) before correction (Figure 8), which might be related to the smaller size of grassland plants. A coarser spatial resolution would lose the details of grassland individual canopies, while a finer spatial resolution might introduce redundant information (i.e., canopy gaps) (Sadeghi et al., 2016). The best resolution should be appropriate to or finer than the size of the individual plants. Moreover, the effect of CHM spatial resolution on canopy height estimation varies across forest types (Sadeghi et al., 2016; Yin and Wang, 2019; Liu et al., 2020) due to the various species, sizes and postures of plants. The optimal spatial resolution of the CHM for estimating grassland canopy height could be a focus of future work, and it might also be inconsistent for different grassland types. Additionally, although the accuracy of grassland canopy height estimation was affected by the spatial resolution of the CHM, the performance at each spatial resolution was improved after correcting the effects of scan angle. The scan angles of adjacent points were basically the same, and the angle differences along the direction perpendicular to the flight courses still existed. Therefore, the effects of scan angle were relatively independent of the spatial resolution of the CHM.

FIGURE 7
Profiles of estimated grassland height with scan angle along the direction perpendicular to the flight path in our study ((A), temperate meadow steppe) and at the Inner Mongolia grassland ecosystem research station ((B), temperate typical steppe).

FIGURE 8
The performance of the estimated grassland height based on UAV LiDAR-derived CHM with varied spatial resolutions.
Point cloud density was considered to be another factor of the UAV LiDAR data affecting the accuracy of grassland canopy height estimation (Peng et al., 2021; Zhao et al., 2022). A lower point cloud density usually decreases the probability of obtaining information about the targeted plants. For example, we estimated grassland height at the spatial resolution of 1×1 m including 10×10 pixels, but in several sample plots the estimated maximum height from UAV LiDAR data did not occur at the same location as the maximum height from the ground sample survey data, both before and after correcting the loss of height. The point densities of these sample plots were counted and found to be lower (124-164 points/m²) than the average point cloud density (248 points/m²). Although this did not significantly affect the estimation of mean height at the sample plot scale in our study, it could mean that the highest plant individuals were not always detected at the top position and that this information was lost due to insufficient point cloud density. Moreover, it has been shown that the accuracy of monitoring grassland height tends to stabilize when the point cloud density exceeds a certain threshold for terrestrial laser scanning (Zhao et al., 2022), but few studies have quantified this relationship for UAV LiDAR-based height and the point density threshold. Usually, point cloud density is related to the UAV or sensor parameters (i.e., flight altitude, flight speed and pulse frequency). Therefore, reconciling an appropriate point cloud density with the projected grassland monitoring area is clearly an important issue in mapping structural traits related to canopy height over regional grasslands.
Conclusions
In this study, we demonstrated that scan angle was a main cause of the difference in grassland canopy height loss in the horizontal direction, and we developed a grassland canopy height correction method based on scan angle. There were significant linear relationships between the height loss indicators and scan angle, and the height loss ratio had a stronger relationship with scan angle than height loss, which was used to correct the height loss of the grassland canopy estimation. After correction, the accuracy of grassland canopy height estimation was improved, with R² from 0.23 (RMSE = 19.17 cm) to 0.68 (RMSE = 5.26 cm) for holistic correction and 0.82 (RMSE = 3.85 cm) for segmented correction. Moreover, due to the influence of the canopy height of the grassland itself, different fitted linear relationships between scan angle and the height loss indicators were found among the three height ranges, which explained the better performance of the segmented correction method. By exploring the effects of scan angle on grassland canopy height estimation, our study demonstrated the necessity of correcting the effects of scan angle on LiDAR-derived canopy height in grasslands, which should be a crucial preprocessing step for estimating grassland canopy height accurately. Our study illustrated that grassland canopy height can be estimated with high accuracy from UAV LiDAR data after correction based on scan angle. Continuous height maps for regional grassland can then be produced and further used for ecological applications, such as grassland aboveground biomass estimation or functional diversity assessment. Currently, research on height estimation in grassland by UAV LiDAR has been increasing; we suggest that some crucial issues caused by grassland conditions, data acquisition and environmental factors should be addressed in the future to improve the ability to monitor grassland structure and functions. These will provide references to bridge the scale gap of grassland structural trait estimation between sample plot measurements, regional UAV or airborne monitoring, and even global satellite monitoring.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Author contributions
All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.
Funding
This study was supported by the National Key Research and Development Program of China (2022YFF1302100) and National Natural Science Foundation of China (42071344). | 2023-03-22T15:16:57.675Z | 2023-03-20T00:00:00.000 | {
"year": 2023,
"sha1": "9585c37d3e74c0cd8bed91d6150b328c11e617d0",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2023.1108109/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "f0b1c48c2b076f48d9f7e975fd3f87d493acbb86",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
211262881 | pes2o/s2orc | v3-fos-license
Low levels of antibodies for the oral bacterium Tannerella forsythia predict cardiovascular disease mortality in men with myocardial infarction: A prospective cohort study
Antibody levels to periodontal pathogens in the prediction of cardiovascular disease (CVD) mortality were explored using data from a health survey in Oslo in 2000 (Oslo II-study) with 12 ½ years of follow-up. IgG antibodies to four common periodontal pathogens were analysed: Tannerella forsythia (TF), Porphyromonas gingivalis (PG), and Treponema denticola (TD), collectively termed the "red complex", and Aggregatibacter actinomycetemcomitans (AA). The study sample consisted of 1172 men drawn from a cohort of 6,530 men who participated in the Oslo II-study, where they provided information on medical and dental history. Of the study sample, 548 men had reported prior myocardial infarction (MI) at baseline, whereas the remaining 624 men were randomly drawn from the ostensibly healthy participants for comparative analyses. Dental anamnestic information included tooth extractions and oral infections. An inverse relation was found for trend by the quartile risk level of TF predicting CVD mortality, p-value for trend = 0.017. Comparison of the first to the fourth quartile of TF antibodies resulted in a hazard ratio (HR) = 1.82, 95% confidence interval 1.12-2.94, p = 0.015, adjusted for age, education, diabetes, daily smoking, and systolic blood pressure. The specificity comparing decile 1 to deciles 2-10 of TF predicting mortality was 92.3%. We found an increased HR at low levels of antibodies to the bacterium T. forsythia predicting CVD mortality over 12 ½ years of follow-up in persons who had experienced an MI, but not among non-MI men. This novel finding constitutes a plausible causal link between oral infections and CVD mortality.
Introduction
Many bacteria have been identified in advanced "chronic" periodontal disease, but three bacteria are commonly identified together and are jointly termed the 'Red complex' [1-3]. The latter comprises the strictly anaerobic bacteria Tannerella forsythia (TF), Porphyromonas gingivalis (PG), and Treponema denticola (TD). These bacteria act in symbiosis during the progression of the infection through their production of several virulence factors, and they possess the ability to evade host reactions, resulting in soft and hard tissue destruction in the oral cavity [4-8]. Their numbers increase with increasing periodontal pocket depth. The facultatively anaerobic Aggregatibacter actinomycetemcomitans (AA) is associated with gingivitis and localized periodontitis (juvenile periodontitis) [9,10].
Multiple pathways for exploring and linking periodontal disease to cardiovascular disease (CVD), and atherosclerosis in particular, have been published [4,6,7,11-19]. Both conditions are closely related to immunologic responses and inflammation. The exact mechanisms linking the two disorders have, however, not been firmly established, but several studies during the last decades provide evidence for an association between oral microbiota and CVD. Examples are DeStefano et al., who found an increased risk of atherosclerotic plaque formation associated with dental disease in 9760 patients over a 14-year follow-up [11], and Mattila et al., who in a case-control study showed an association between periodontal disease and heart disease [12].
Periodontitis is well described and classified [20]. Periodontitis and other oral infections of dental origin in various stages occur in millions of people around the world [21]. It has been estimated by the WHO that 5-15% of the world's population suffer from chronic periodontitis, and juvenile periodontitis occurs in about 2% of youths. Dental caries in industrialized countries affects approximately 60-90% of schoolchildren and the vast majority of adults. However, there are differences between populations. Bacterial DNA of more than 700 bacteria has been identified in the mouth, but approximately 35% have not been possible to cultivate [22]. Several dental procedures may cause bacteraemia [23,24]. Exposure to oral bacterial infections may occur at an early age in deciduous teeth with severe caries and pulpal exposure to oral bacteria. Juvenile periodontitis occurs in some young people. Hence, oral infections may occur at an early age and expose individuals over many years [24,25]. The close proximity of the periodontal bacterial infection to the bloodstream makes it highly plausible that the bacteria themselves or bacterial products spread to distant sites in the cardiovascular system [26-29]. In fact, oral bacteria or their DNA have been identified in atheromas, heart valves, and arterial walls [16,30-39]. The first line of defence in these sites is the macrophages, which are involved in the process of autophagy in bacterial infections [4]. Bacterial products such as lipopolysaccharides (LPSs) and increased low-density lipoprotein (LDL) accumulate in atheromas, initiating the transfer of macrophages into atheromas and their transformation into foam cells when absorbing LDL [7]. Andriankaja et al. used indirect immunofluorescence microscopy with species-specific polyclonal and monoclonal serodiagnostic reagents to isolate six periodontal pathogens and assess the odds of having a myocardial infarction (MI) in 1,060 men and women [14]. They found Prevotella intermedia and T. forsythia to be significantly associated with MI, OR = 1.40 (95% confidence interval (CI) 1.02-1.92) in adjusted analyses. In addition, three or more bacteria present in periodontal pockets gave an increased risk of MI, OR = 2.01 (95% CI 1.31-3.08).
The hypothesis
We hypothesize that anaerobic oral bacteria metastasize infection into the cardiovascular system, as observed by IgG antibody levels. Using the Oslo II-cohort, we aim in this sub-study to estimate the prediction of CVD mortality over 12 ½ years of follow-up by the level of bacterial IgG antibodies to PG, TD, TF, and AA in men with a history of myocardial infarction (MI) versus men with no known history of myocardial infarction (non-MI). Knowledge of antibody levels to oral bacteria may provide important support for an etiologic explanation of the incidence and mortality of myocardial infarction related to oral infections and lead to the development of preventive measures such as a diagnostic test and a vaccine.
Evaluation of the hypothesis
The hypothesis was tested as follows: In 2000, the health survey named the Oslo II-study, a follow-up study of men invited to the Oslo Study of 1972/73, was undertaken [40-42]. Men of this cohort living in Oslo or in the surrounding county Akershus were invited to attend (n = 12,764). After screening, 55 men withdrew, and results from 6,530 participating men aged 48-77 years are eligible for analyses. At the screening, the men filled in a detailed questionnaire on medical history, oral health, medication, health service use, food and drinking habits, physical activity, smoking, stress, and mental health. Oral health was defined according to the number and cause of tooth extractions and current oral infections. Height, weight, waist, hip, and blood pressure were recorded [41]. Blood samples were drawn for analyses of total cholesterol, HDL-C, glucose in the non-fasting state, and triglycerides. EDTA-blood was stored at the HUNT biobank, Levanger, Norway, and the serum remaining after the analyses was frozen and stored at −80 °C at the Norwegian Institute for Public Health, Oslo, Norway.
A study sample of 1172 men was drawn from the Oslo II cohort to assess prospectively the association between antibodies to four oral bacteria and mortality in a 12 ½-years follow-up [42]. The study sample consisted of 548 men who had reported a history of myocardial infarction at baseline in 2000 and 624 men randomly selected from the ostensibly healthy men in the cohort. All men had attended both health screenings and had hs-CRP measured. All procedures performed were in accordance with the ethical standards of the Norwegian institutional and national research committee REK (Helse Sør-øst) and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in the study. The SPSS random generator program was used in selecting the healthy subjects.
The serum samples were analysed for IgG antibodies to the oral bacteria TF, PG, TD, and AA by the ELISA method. The ELISA and IgG antibody assay used in the Oslo II-study have previously been described [42]. In short, the bacteria were cultivated anaerobically at the Institute for Oral Biology, Dental Faculty, University of Oslo. The ELISA was performed at the Norwegian Institute for Public Health. ELISA plates were coated with the bacterial antigen at a concentration of 5 µg protein of each of the bacteria to capture serum antibodies. Each well was then incubated overnight with 100 µl of solution. Thereafter, the coated wells were stored for up to 14 days at 4 °C. The human serum calibrator was diluted to 1/40, 1/80, 1/160, 1/320, 1/640, 1/1280, and 1/2560. The samples were diluted to 1/160, 1/320, and 1/640, and the first 170 were also diluted to 1/80. Samples or serum calibrator were added to the washed plates and incubated for 2 h at room temperature. After washing, the conjugated secondary antibody was added and the plates were incubated at room temperature for another 2 h. The secondary antibody was Polyclonal Rabbit Anti-Human IgG/AP, D0336, 1:1000 dilution. One hundred microliters of substrate were pipetted into each well. The colour development was registered by optical density (OD), and the result was recorded as a percentage of the serum calibrator. Sample dilution curves were compared for deviation from parallelism. Mean values were used unless large deviations were observed, in which case median values were chosen. For more details, see publication [42].
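As a rough illustration of the readout described above (the exact computation is not given in the text; this sketch simply expresses each sample OD relative to the calibrator OD at the matching dilutions and applies the mean-versus-median rule for non-parallel curves, with an illustrative deviation criterion):

import numpy as np

def percent_of_calibrator(sample_od, calibrator_od, max_cv=0.2):
    # sample_od, calibrator_od: ODs at the same dilution steps (e.g., 1/160, 1/320, 1/640)
    ratios = 100.0 * np.asarray(sample_od) / np.asarray(calibrator_od)
    # use the mean unless the dilution curve deviates strongly from parallelism
    if np.std(ratios) / np.mean(ratios) > max_cv:  # hypothetical criterion
        return float(np.median(ratios))
    return float(np.mean(ratios))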
The covariates used in the analyses were selected according to known predictors for CVD and periodontal infection, namely age, daily smoking, education and diabetes, which are common to oral health and CVD, and systolic blood pressure, an important predictor of CVD, for prediction analyses and to adjust for confounding. In addition, health service utilization and medication use were reported. Hs-CRP was analysed in the serum samples in 2004, at the same time as the antibody analyses. Missing data were limited. The outcome during follow-up was CVD mortality. All Norwegians have a personal identification number, and this was used in the matching of study participants to the Norwegian Cause of Death Registry, which provides complete follow-up with regard to date and cause of death according to ICD-10 codes; codes I21 to I99 define all CVD diagnoses used in this study. Mortality data were provided by Statistics Norway, and the mortality information was given in one linkage after the 12 ½ years of follow-up, with the last date of follow-up 31.12.2012.
Baseline characteristics are presented by mean and standard deviation (SD) or number and percentage. Differences between characteristics of men with or without a history of MI were examined by t-test for equality of means of independent groups for continuous variables or by Fisher's exact test for dichotomous variables, two-sided test values only. The antibody variables were analyzed both as observed values and by ln-transformation due to their skewed distribution, stratified by quartile values (quartiles), and tested for correlation by the Pearson statistic. The quartiles were used on an ordinal scale, applying the lowest quartile as reference alternating with using the highest quartile as reference. Cox proportional hazards regression analyses were used to estimate survival, and the results are presented by the hazard ratio (HR) and 95% confidence interval (95% CI). Covariates were used to adjust for confounding and selection bias. Sensitivity analyses for combinations of the 'Red complex' bacteria with AA were performed. Interaction analyses of self-reported MI by antibodies (quartile values) were performed. Sensitivity and specificity for mortality were tested at decile and quartile values of the four antigen series. The receiver operating characteristic (ROC) curve was used to detect any cut-off value of the antibody readings of diagnostic importance. In addition, the area under the curve (AUC) was calculated. A p-value < 0.05 was considered significant. IBM SPSS version 25 was used for the statistical analyses. Kaplan-Meier plots (Figures 1A, B) illustrate cumulative survival according to quartile levels of TF for men with or without a history of MI.
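The study used SPSS; as a minimal open-source sketch of the quartile-based Cox analysis described above, assuming a pandas DataFrame with follow-up time, a CVD-death indicator, the TF antibody level and the adjustment covariates (all column names are illustrative; the lifelines package is one common implementation):

import pandas as pd
from lifelines import CoxPHFitter

def tf_quartile_cox(df):
    # df columns (illustrative): time, cvd_death, tf_igg, age, education,
    # diabetes, daily_smoking, systolic_bp
    df = df.copy()
    df["tf_q"] = pd.qcut(df["tf_igg"], 4, labels=[1, 2, 3, 4])
    # dummy-code quartiles with Q4 (highest antibody level) as reference
    dummies = pd.get_dummies(df["tf_q"], prefix="Q").astype(int).drop(columns="Q_4")
    X = pd.concat([df[["time", "cvd_death", "age", "education", "diabetes",
                       "daily_smoking", "systolic_bp"]], dummies], axis=1)
    cph = CoxPHFitter()
    cph.fit(X, duration_col="time", event_col="cvd_death")
    return cph.summary  # hazard ratios are exp(coef) for Q1-Q3 versus Q4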
Results
The mean, SD, minimum and maximum values, and quartile values of the antibodies towards the four bacteria according to the ELISA IgG results are shown in Table 1. The highest maximum IgG values were against PG (8,710 for MI and 5,299 for non-MI), and the lowest were against TD (2,166 for MI and 1,204 for non-MI). The mean values did not differ significantly for any of the IgG measurements of the four bacteria. All the antibody readings displayed large standard deviations (SD).
The men with self-reported MI were found to be slightly older, less educated, and more likely to suffer from diabetes compared to non-MI men (Table 2). Other significant differences were observed for systolic blood pressure, total cholesterol, HDL-C, non-fasting glucose, BMI, and triglycerides. Significantly more of the MI-men had visited the general practitioner four times or more in the last 12 months (40.4 versus 23.3%), but this was not the case for dental visits. The two groups did not differ regarding current use of antihypertensive drugs (27.3 versus 28.5%), whereas the percentage on cholesterol-reducing drugs was, as expected, much higher in the MI-group (63.8 versus 12.8%). Cox proportional hazards regression analyses were performed to study which variables predicted CVD mortality over the 12 ½-years follow-up; in the age and age-adjusted univariate analyses, age was a predictor of CVD mortality (Table 3). Using multivariate Cox analyses, we studied quartile values of antibodies of each bacterium adjusted by age, education, diabetes, daily smoking, and systolic blood pressure (Table 4). We found no significant trend for quartile values on the ordinal scale of 1-4 for any of the antibodies examined separately. We then studied quartile values using the lowest quartile as reference and comparing it to the other quartiles. The results for trend were non-significant for all antibodies for both MI and non-MI, except T. forsythia for MI, where a highly significant inverse trend was found, p = 0.017. The result for Q2 versus Q1 was HR = 0.73, 95% CI 0.47-1.12; for Q3 versus Q1, HR = 0.63, 95% CI 0.40-0.98; for Q4 versus Q1, HR = 0.47, 95% CI 0.29-0.77. We then reversed the analysis using the highest quartile as the reference group for ease of interpretation, which gave the following results: Q1 versus Q4, HR = 1.82, 95% CI 1.12-2.94; Q2 versus Q4, HR = 1.42, 95% CI 0.86-2.35; Q3 versus Q4, HR = 1.27, 95% CI 0.76-2.11; and trend p = 0.09. Interaction was significant for self-reported MI with antibodies of TF (quartile values), but not for any of the other three bacteria studied. Figures 1A, B show the differences in the prediction of CVD mortality by quartile values of TF antibodies between MI and non-MI men, respectively. In Figure 1B can be observed the increased risk almost throughout the follow-up period for MI men, as opposed to the non-MI men shown in Figure 1A.
It was of interest to study any potential association of the common AA with the 'Red complex' bacteria in relation to CVD mortality. AA was assessed together with the 'Red complex': each bacterium separately, in combinations with two of these bacteria, and with all three bacteria. The Cox analyses were adjusted for age, education, diabetes, daily smoking, and antihypertensive drugs. The results showed associations in non-MI but not MI men. Significant relations were observed for the 'Red complex' (HR = 1.71, 95% CI 1.03-2.82), the 'Red complex' + AA (HR = 1.72, 95% CI 1.01-2.92), PG + AA (HR = 1.67, 95% CI 1.02-2.74), and PG + TF + AA (HR = 1.69, 95% CI 1.02-2.79).
The first decile of T. forsythia versus higher decile values among men reporting MI showed a specificity of 92.3% (369 of the 400 men alive tested in the second to tenth deciles) and a sensitivity of 11.5% (17 of the 147 men who died during follow-up tested in the first decile). The ROC curve did not show a distinct threshold value, but the AUC was significant for T. forsythia (p = 0.010).
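These proportions follow directly from the reported counts; a small arithmetic check (numbers taken from the text, with decile 1 of TF antibodies treated as the "test positive" state for CVD death among MI men):

deaths, deaths_in_d1 = 147, 17
alive, alive_in_d2_10 = 400, 369

sensitivity = deaths_in_d1 / deaths    # 17/147  ≈ 0.116
specificity = alive_in_d2_10 / alive   # 369/400 ≈ 0.923
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")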
Discussion
More men reporting MI at the health screening in 2000 who later died from CVD were found to have low levels of T. forsythia antibodies in the predictive analyses over 12 ½ years of follow-up than non-MI men. An 82% increased risk was found when comparing the first to the fourth quartile of antibodies for T. forsythia. The Kaplan-Meier plot indicates that this risk was maintained throughout the follow-up period. The results showed a high specificity of 92.3% and a sensitivity of 11.5% for the first decile, indicating that a test for T. forsythia has a high probability of distinguishing the men with the lowest risk for CVD mortality among men with a history of MI. The ROC curve did not display a distinct threshold, but the AUC did show a significant difference between the curves. The reduced ability to produce antibodies to T. forsythia may indicate that the 'Red complex' bacteria of "chronic" periodontal disease proceed from a localized oral infection by possibly initiating or aggravating already existing coronary atherosclerosis or other cardiovascular conditions. The baseline characteristics of the study population reflect the medical diagnostic difference between men with a history of myocardial infarction and non-MI men not reporting such a history (Table 2). More MI men took cholesterol-reducing drugs, but not more antihypertensive drugs than non-MI men.
The main strength of this study is the clear temporal association, with a follow-up of 12 ½ years from the Oslo II health survey in 2000, and the strength of the association, with a high HR indicating a causal relation. Another strength of the study is the complete register of cause and date of death ascertained by Statistics Norway. Further, we have shown a risk gradient by quartile values of antibodies to TF. Some degree of multiple comparisons of the bacteria was done in order to explore their risk pattern. Microbiologic and epidemiologic studies provide plausible immunologic pathways, and this could potentially support the known notion of a hereditary risk for CVD through the antibody response pattern. One of the limitations of this study is that it included elderly men only, which reduces the generalizability of the results to women. Another limitation is that the MI diagnosis was self-reported at the health screening. However, MI is a major event in a person's life, including hospitalization, and we have confidence in this information.

Table 3. Cox analyses; age and age-adjusted univariate analyses for prediction of total CVD mortality. A 12 ½ years follow-up of the Oslo II-study in 2000. (Risk factors for men with myocardial infarction, N = 548; CVD mortality, n = 148.)
The association and possible causal relation between "chronic" periodontitis and CVD have been explored in a number of studies. Antibodies to periodontal pathogens were associated with different outcomes of CVD, CHD, stroke or MACE. Pussinen et al. investigated antibodies to AA and PG in a random sample (n = 1163) of men aged 45-74 years in Finland. From this cross-sectional study, they reported that coronary heart disease (CHD) was more common in persons with a high combined antibody response to PG and AA than in persons with a low response [44]. Aoyama et al. examined 364 persons with diagnosed CVD for tooth extraction status [45]. Comparing four groups by number of remaining teeth, they found that the level of PG IgG in the group with 10-19 teeth was statistically higher than that in the group with ≥20 teeth. In their case-control study on stroke, Pussinen et al. found that IgA seropositivity for A. actinomycetemcomitans was higher among the controls (non-stroke patients), and IgA seropositivity for P. gingivalis higher among patients with recurrent stroke, during 13 years of follow-up [46]. Beck et al. found that the systemic antibody response to 17 periodontal bacteria, rather than periodontal status, was relevant to atherothrombotic coronary events [35]. The bacteria involved differed between smokers and non-smokers. Differences between strains of bacteria indicate differences in virulence factors, as observed by Yamazaki et al. for the prevalence of serum IgG positivity to 12 periodontal pathogens [47]. They observed that antibody positivity for P. gingivalis FDC381 and P. gingivalis Su63 was higher than for the other bacteria and differed between the three groups of 51 CHD patients with variable degrees of periodontitis, 55 persons with periodontitis, and 37 controls.
Periodontitis has been shown to be related to lower levels of HDL-cholesterol (HDL-C), a situation that may be reversed after treatment of periodontitis [17,43]. D'Aiuto et al. have shown in randomized controlled trials that, in a study of 6 months' duration, periodontal therapy achieved a reduction in CRP, IL-6, total cholesterol, systolic blood pressure, and the Framingham cardiovascular disease risk score [48]. In a second trial in persons with type 2 diabetes, periodontal treatment over a period of 12 months significantly reduced HbA1c [49]. Ramirez et al. studied subgingival 'Red complex' bacteria and biomarkers for CVD. Comparing patients with periodontitis and periodontally healthy controls, they found E-selectin, MPO, and ICAM-1 to be increased among patients with periodontitis. Other CVD markers studied were flow-mediated dilatation, CRP, VCAM-1, MMP-9, adiponectin, and tPAI-1 [50]. The Danish Nationwide Cohort Study included 17,691 patients who received a hospital diagnosis of periodontitis and compared them to 83,003 controls from the general population [51]. Hansen et al. concluded that periodontitis may be an independent risk factor for CVD, as measured by incidence rate ratios, all significant, for myocardial infarction, ischemic stroke, CVD death, major adverse CVD events (MACE), and all-cause mortality.
Multiple microbiology studies using several approaches have been performed in order to understand periodontal disease development: the metastatic spread of bacteria, the effect and function of bacterial virulence factors, and the immune system response [52]. The three bacteria of the 'Red complex' show both competitive and cooperative interactions [5,53]. They also show genetic variability, but according to Amano et al. changes have not been observed during persistent colonization. The bacteria of the 'Red complex' possess the common feature of expressing neuraminidases, which enables them to scavenge sialic acid from host glycoconjugates. The cleaved sialic acid serves as a nutrient for bacterial growth and aids the bacteria in evading host immune attack. Another important feature of periodontal pathogens is their ability to enter host cells. It has been shown that TD and PG display symbiosis in protein degradation, nutrient utilization, and growth promotion [54]. The reduced antibody response to TF might be due to the unusual S-layer of this bacterium, where two S-layer glycoproteins are assembled into a single S-layer [54-56]. The S-layer may be used as a strategy for this bacterium to evade recognition by the innate immune system, particularly by suppressing Th17 responses [56]. The surface glycosylation provides a means of manipulating the cytokine response of macrophages and T-cells. The present findings relate myocardial infarction to the metastatic consequences of advanced periodontal/dental bacteria and/or bacterial products.
Conclusion
The novel and main finding of this study is the inverse relation of antibodies towards the periodontopathogen T. forsythia to an increased risk for CVD mortality. T. forsythia is an initiator of advanced "chronic" periodontal infection, allowing for the progression of periodontitis through the joint effect of the 'Red complex' bacteria on cardiovascular structures. This throws new light on the relative importance of 'Red complex' members in the development of CVD, where P. gingivalis, the keystone bacterium of periodontitis, has attracted most attention so far. The reduced ability to form antibodies to T. forsythia in relation to MI needs to be studied further to establish whether it represents a general immunological deficiency contributing to the pathophysiological mechanism resulting in myocardial infarction. A possible consequence is the development of a vaccine for persons identified with low levels of antibodies to T. forsythia by a specific ELISA test, particularly in association with chronic periodontitis. This finding supports the reports of a suggested causal link between "chronic" periodontitis and mortality in persons with myocardial infarction.
Funding
The analyses for antibodies and hs-CRP were supported by a grant from the Norwegian Council for Cardiovascular Diseases of the Norwegian National Association for Public Health, ExtraStiftelsen, through the applicant organization the National Organization for Heart and Lung Disease in Norway, and by the University of Oslo. The Oslo II-study screening was organized and performed by four different cooperating public institutions (see Acknowledgements). Ingar Olsen contributed through the European Commission grant FP7-HEALTH-306029 'TRIGGER'.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2020-01-23T09:07:12.447Z | 2020-01-20T00:00:00.000 | {
"year": 2020,
"sha1": "3d0574da2ea874365d574996716ffb00be445be4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.mehy.2020.109575",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "00b257150bf75b95f20191f46657e852cf432d44",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
137811938 | pes2o/s2orc | v3-fos-license | A Study on the Effect of Fiber Loading and Orientation on Mechanical Behaviour of Jute Fiber Reinforced Epoxy Composites
Authors: Md. Shadab Alam1, Mr. Saurabh Singh2, Mr. Indra Prakash3
NIMS Institute of Engineering and Technology, NIMS University, Jaipur-303121, INDIA.
Email: shadab.alam1011@gmail.com

ABSTRACT
The natural fibers from renewable natural resources offer the potential to act as a reinforcing material for polymer composites, an alternative to the use of glass, carbon and other man-made fibers. Among various fibers, jute is the most widely used natural fiber due to its advantages like easy availability, low production cost and satisfactory mechanical properties. For a composite material, its mechanical behavior depends on many factors such as fiber content, orientation, type, length etc. Attempts have been made in this research work to study the effect of fiber loading and orientation on the mechanical behavior of jute fiber reinforced epoxy composites. The aim of this study is to determine the mechanical properties of the developed composite plates by varying the percentage of silicon carbide. The composite plates are fabricated by the hand layup technique, which is very economical. The flexural properties under the three-point bend test are investigated experimentally by using the theory of bending of beams. Experimental results show that the composite plates made with jute have closely comparable strength; finally, the developed reinforced composites are characterized by flexural strength, bending stress and compressive strength. Also, an impact test is performed on a Charpy impact testing machine to assess the shock absorbing capability of the material.

Keywords: Hand lay-up, composite laminates, load carrying capacity, impact energy
II. MATERIALS AND METHOD
Selection of Jute
We all know that the natural fiber jute is readily available at minimal or negligible cost in comparison to other natural fibers. The jute used here for reinforcing the composite was taken from gunny bags of the kind used for storing rice or wheat. These jute mats were first washed with water and then dried in the sun, after which they were cut into pieces as required. In this work the specimen dimension is (330×55×20) mm, so the jute mats were cut into (350×60) mm pieces. [Figure: Natural fiber jute from gunny bags]
Selection of Silicon Carbide
Silicon carbide is the only chemical compound of carbon and silicon. It was originally produced by a high-temperature electrochemical reaction of sand and carbon. Silicon carbide is an excellent abrasive and has been produced and made into grinding wheels and other abrasive products for over one hundred years. Today the material has been developed into a high-quality technical-grade ceramic with very good mechanical properties. It is used in abrasives, refractories, ceramics, and numerous high-performance applications.
Selection of Resin
Epoxy Resin (General Purpose). Epoxy resin is a modern laboratory benchtop material that offers a superb combination of features and benefits. It is durable, extremely chemical- and stain-resistant, mechanically strong, easily cleaned and decontaminated, and exhibits good fire resistance and fire propagation properties.
Mixing Ratio
For the fabrication of the jute reinforced composite, the mixing proportion of the resin and hardener plays an important role. General purpose epoxy resin was taken as the base chemical, to which hardener and accelerator were added in proportionate ratio (a worked mass calculation is sketched at the end of this section):
Resin used: Epoxy resin (general purpose)
Hardener used: MEKP (methyl ethyl ketone peroxide)
Percentage of silicon carbide: 3%, 5%, 10%, 15% wt
Percentage of hardener: 8%
Hand Lay-up Technique
The oldest and simplest moulding technique, in which reinforcing materials and catalyzed resin are laid into or over a mould by hand. These materials are then compressed with a roller to eliminate entrapped air.
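As a quick sanity check of the mixing proportions above, the sketch below converts the stated weight percentages into component masses for one batch. The 500 g batch size, and the assumption that the 8% hardener is measured against the resin mass, are illustrative guesses rather than details from the paper.

```python
# Converting the stated weight percentages into component masses for one batch.
# The 500 g batch size is an arbitrary example, not from the paper, and whether
# the hardener fraction is relative to resin or to total mass is an assumption.

batch_g = 500.0
hardener_pct = 0.08   # 8% MEKP hardener (assumed: by weight of resin)
sic_pct = 0.10        # one of the tested SiC loadings: 3/5/10/15 wt%

sic_g = batch_g * sic_pct
resin_g = batch_g - sic_g
hardener_g = resin_g * hardener_pct

print(f"resin: {resin_g:.1f} g, SiC: {sic_g:.1f} g, MEKP: {hardener_g:.1f} g")
```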
III. EXPERIMENTAL WORK
Three-Point Bend Test
Flexural strength, also known as modulus of rupture, bend strength, or fracture strength, is a mechanical parameter for brittle materials, defined as a material's ability to resist deformation under load. The transverse bending test is most frequently employed, in which a rod specimen having either a circular or rectangular cross-section is bent until fracture using a three-point flexural test technique. The flexural strength represents the highest stress experienced within the material at its moment of rupture.
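Since the flexural properties are evaluated using the theory of bending of beams, a minimal sketch of the standard three-point bending formula sigma_f = 3FL/(2bd^2) may make this concrete; the failure load and span below are hypothetical placeholders, not measurements from this study.

```python
# Flexural (bending) strength from a three-point bend test, using the
# standard beam-bending formula sigma_f = 3*F*L / (2*b*d^2).
# The numbers below are placeholders, not results from this study.

def flexural_strength(load_n: float, span_mm: float,
                      width_mm: float, depth_mm: float) -> float:
    """Return flexural strength in MPa (N/mm^2) for a rectangular specimen."""
    return 3.0 * load_n * span_mm / (2.0 * width_mm * depth_mm ** 2)

# Example: 800 N failure load on a 300 mm span, 55 mm wide, 20 mm deep specimen
print(f"{flexural_strength(800, 300, 55, 20):.2f} MPa")  # -> 16.36 MPa
```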
Charpy's Impact Test
The Charpy impact test, also known as the Charpy V-notch test, is a standardized high strain-rate test which determines the amount of energy absorbed by a material during fracture. This absorbed energy is a measure of a given material's toughness and acts as a tool to study the temperature-dependent brittle-ductile transition. It is widely applied in industry, since it is easy to prepare and conduct, and results can be obtained quickly and cheaply.
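The absorbed energy reported by a Charpy machine follows from the pendulum's drop and rise heights. The sketch below shows this energy balance; the pendulum mass and heights are illustrative placeholders, not values from the machine used here.

```python
# Energy absorbed in a Charpy impact test, estimated from the pendulum's
# drop and rise heights: E = m * g * (h_drop - h_rise).
# All numeric values are illustrative placeholders.

G = 9.81  # gravitational acceleration, m/s^2

def absorbed_energy(mass_kg: float, h_drop_m: float, h_rise_m: float) -> float:
    """Return the energy (in joules) absorbed by the specimen at fracture."""
    return mass_kg * G * (h_drop_m - h_rise_m)

print(f"{absorbed_energy(20.0, 1.2, 0.7):.1f} J")  # -> 98.1 J
```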
Effects of Varying Percentage of SiC on Flexural Strength
The graph shows that the flexural strength of the specimen increases with increasing SiC percentage between 5-10%, and gives increasing impact strength values of 80 to 14 joules. High strain rates or impact loads may be expected in many engineering applications of composite materials. The suitability of a composite for such applications should therefore be determined not only by the usual design parameters, but also by its impact or energy-absorbing properties. From the above discussion, better flexural strength and compressive strength come in the SiC percentage range of 5-10%, and better impact strength comes near 10% SiC with respect to the total thickness of the specimen.
V. CONCLUSION
This investigation of the mechanical behaviour of jute reinforced epoxy composites leads to the following conclusions. This work shows that successful fabrication of jute reinforced epoxy composites with different SiC percentages is possible by the simple hand lay-up technique. It has been noticed that the mechanical properties of the composites, such as compressive strength, flexural strength, and impact strength, are greatly influenced by the SiC percentage with respect to the thickness of the specimen. The use of SiC in the jute reinforced composite results in a highly brittle nature of the specimen. Industry importance: at present, the jute reinforced composite, an agricultural product, can be used for industrial applications such as partition panels, packaging, and the automotive industry, in addition to solving environmental problems related to the disposal of the product.
| 2019-04-29T13:12:58.200Z | 2016-07-30T00:00:00.000 | {
"year": 2016,
"sha1": "fc74c4dab18bd49085d10c8aaf913a972437ee38",
"oa_license": null,
"oa_url": "https://doi.org/10.18535/ijmeit/v4i7.02",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0b5e3d69ba1612eec9db56cb6d1ce8c2ee40bc16",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
258987542 | pes2o/s2orc | v3-fos-license | Evaluating geospatial context information for travel mode detection
Detecting travel modes from global navigation satellite system (GNSS) trajectories is essential for understanding individual travel behavior and a prerequisite for achieving sustainable transport systems. While studies have acknowledged the benefits of incorporating geospatial context information into travel mode detection models, few have summarized context modeling approaches and analyzed the significance of these context features, hindering the development of an efficient model. Here, we identify context representations from related work and propose an analytical pipeline to assess the contribution of geospatial context information for travel mode detection based on a random forest model and the SHapley Additive exPlanation (SHAP) method. Through experiments on a large-scale GNSS tracking dataset, we report that features describing relationships with infrastructure networks, such as the distance to the railway or road network, significantly contribute to the model’s prediction. Moreover, features related to the geospatial point entities help identify public transport travel, but most land-use and land-cover features barely contribute to the task. We finally reveal that geospatial contexts have distinct contributions in identifying different travel modes, providing insights into selecting appropriate context information and modeling approaches. The results from this study enhance our understanding of the relationship between movement and geospatial context and guide the implementation of effective and efficient transport mode detection models.
Introduction
Knowledge regarding individuals' usage of travel modes is an indispensable element in contemporary travel behaviour studies. Travel mode choices are formed by the everyday needs and constraints of individuals (Hägerstrand, 1970) and are generally influenced by travel-related factors such as cost and time; detection approaches should additionally be able to generalize to new populations and to transfer to new geographical areas. Moreover, researchers have increasingly recognized the importance of geospatial context information, such as bus stops, land cover, and points of interest (POIs), in characterizing movements with different travel modes (Semanjski et al., 2017). By representing context information as features and incorporating them into the framework, detection performance can be significantly improved (Roy et al., 2022; Zeng et al., 2023). However, the relationship between movement and context can be depicted in multiple ways. There is currently no consensus on which types of geospatial context information are most critical for mode detection. As a result, detection approaches create a set of features based on their own interpretation, typically including more contextual variables than necessary as input to ML models.
To address these gaps, we propose a pipeline to systematically evaluate the contribution of geospatial context information for travel mode detection. Specifically, we thoroughly review recent literature to identify common and natural context representations, which are then implemented based on open-source geospatial data. We develop a random forest (RF) model, reported as one of the best-performing ML models for this task. Finally, we apply SHapley Additive exPlanation (SHAP), a type of feature attribution method, to the trained RF model to evaluate the features' impacts. In short, our contributions are summarized as follows:
• We implement a comprehensive set of features for travel mode detection. The features are summarized from previous research and describe motion characteristics and geospatial context information, covering a broad spectrum of context modelling approaches.
• We quantitatively evaluate the importance of geospatial context features to the detection performance.
Our results suggest that geospatial network features contribute the most, and the separation of travel modes can benefit from features describing their respective infrastructures.
• We conduct experiments on a large-scale GNSS tracking dataset. The dataset includes individual movements within Switzerland and involves seven travel modes: bicycle, boat, bus, car, train, tram, and walk, making this one of the largest studies in terms of geographical area and detection scheme.
• We include only open-source geospatial data in the feature construction, ensuring the framework's reproducibility and generalizability. We open-source our framework to provide a benchmark for further reference.
Related work
Travel mode detection is the task of inferring the travel mode utilized by an individual given a movement trajectory. Here, we focus on detecting travel modes from trajectories recorded using smartphone GNSS sensors, as they provide dense mobility traces that can lead to fine-grained mode detection results (Huang et al., 2019). The general approach for this line of mode detection studies involves three steps (Shen and Stopher, 2014; Prelipcean et al., 2017; Zeng et al., 2023). First, rule-based algorithms are designed to detect mode transfer points (MTPs) from raw GNSS track points, which segment the continuous trajectory into stages conducted with a single travel mode. Then, characteristic features such as speed or heading are extracted from the stages. Finally, rule-based heuristics, statistical methods or ML classifiers are developed to infer the travel mode based on these features. As the detection of MTPs has become the de-facto preprocessing standard (Tsui and Shalaby, 2006; Schuessler and Axhausen, 2009), we discuss the feature extraction and the method development in the following section.
Feature extraction and importance assessment
To identify typical features for travel mode detection, we consult review papers (Shen and Stopher, 2014; Gong et al., 2014; Prelipcean et al., 2017) and select representative studies published in recent years. An overview of features implemented in the reviewed literature can be found in Table 1. We distinguish between features that characterize the movement (motion feature) and features that describe the relationship between motion and context information (geospatial context feature).
Overall, motion features have been introduced since the emergence of the problem and are still the most widely used input variables. Speed and acceleration features can be found in nearly all studies and are considered the most straightforward indicators for distinguishing travel modes (Tsui and Shalaby, 2006; Xiao et al., 2015). The bearing rate, which reflects the heading change stability, and the length of the stage are mainly applied to separate motorized and non-motorized travel (Stenneth et al., 2011; Xiao et al., 2015). Jerk, which measures the rate of change in acceleration, has gained popularity due to its introduction as an additional channel input for deep learning (DL) models (Dabiri and Heaslip, 2018). Other motion features, including duration, altitude, GNSS accuracy, and distance between track points, have been less often employed in recent years. Operationally, variables observed per track point, such as speed and acceleration, need to be aggregated into a single value to describe a movement trajectory. Apart from the most often calculated average value, researchers note that a nearly maximum value such as the 85th percentile should be used as an additional indicator (Biljecki et al., 2013), allowing for robustness to noise (Schuessler and Axhausen, 2009). In addition, a few methods consider more sophisticated statistical indicators (e.g., standard deviation, mode, and skewness) to describe the variable's distribution over the movement trajectory (Xiao et al., 2017; Wu et al., 2022).
Comparatively, geospatial context data has been used less frequently for travel mode detection. Based on their represented context information, we divide these features into four categories:
• Infrastructure networks. These features quantify the trajectory's proximity to networks that allow movements with specific travel modes, typically implemented using different distance measures. For example, Stenneth et al. (2011) represented the rail network feature with the average Euclidean distance between each track point and its closest rail line. Roy et al. (2022) adopted the Hausdorff distance to obtain the furthest distance between track points and the network. Rasmussen et al. (2015) and Wu et al. (2022) both used a threshold-based method that calculates the proportion of track points with Euclidean distances closer to the network than a given threshold.
• Public transport stations. They are the most commonly implemented geospatial context features because of their easy accessibility from open-source map services and their effectiveness in distinguishing public transport from car travel (Chen et al., 2010). Proximity to public transport stations is either considered for the whole trajectory (e.g., by measuring the distance to each track point) (Stenneth et al., 2011; Roy et al., 2022) or for the movement's start and end points (e.g., by only considering the trajectory's endpoints) (Zong et al., 2017; Wang et al., 2018a).
• Public transport timetables and real-time locations. Besides static geospatial contexts, previous studies integrate information that reflects the actual public transport service situation (Sadeghian et al., 2022; Stenneth et al., 2011). However, this dynamic information is only openly accessible in limited areas of the world (e.g., large cities), hindering its application in large-scale travel mode detection studies.
• Land use and land cover (LULC). Relatively few studies consider LULC contexts in the task. Biljecki et al. (2013) proposed using the water body feature to identify boat movements, and Roy et al. (2022) argued that including LULC features helps distinguish non-motorized travel modes. It has long been recognized that the built environment influences an individual's travel mode choice (Cheng et al., 2019; Tamim Kashifi et al., 2022), suggesting considerable potential to incorporate LULC features for travel mode detection.
Although many studies have recognized the importance of geospatial context features (Biljecki et al., 2013; Semanjski et al., 2017), few have attempted to quantify their significance in improving mode detection performance. Comparative studies have demonstrated that including geospatial context information can significantly increase accuracy, ranging from 18% (Stenneth et al., 2011) to 77% (Roy et al., 2022). However, due to variations in the implemented features, it is challenging to assess and compare the contribution of different context categories to the outcome, hindering the development of an efficient travel mode detection model.
Travel mode detection methods
The widespread use of smartphone GNSS sensors for collecting travel diaries has spurred researchers to develop approaches for automatically detecting travel modes from raw GNSS tracking data. As a first attempt, rule-based heuristics with human-crafted rule sets to differentiate each travel mode were proposed (Stopher et al., 2008; Chen et al., 2010; Gong et al., 2012). These systems match experts' understanding of mode usage but failed to perform satisfactorily when applied to noisy real-world GNSS records. Therefore, statistical methods such as fuzzy logic (Schuessler and Axhausen, 2009; Biljecki et al., 2013) and Bayesian networks (Xiao et al., 2015) were applied to account for ambiguity in allocating modes to observed motion and context characteristics. Later attempts introduced ML for learning classification "rules" directly from input features, allowing more flexibility in the decisions and being more robust to real-world noise. Examples of ML-based approaches include decision trees (Zheng et al., 2008), RF (Wang et al., 2018a), support vector machines (Semanjski et al., 2017), and artificial neural networks (Roy et al., 2022). Among these, RF has become increasingly popular as it can effectively handle high feature dimensionality and multicollinearity (Fernández-Delgado et al., 2014). Studies comparing various mode detection methods have consistently found that RF achieved the best performance among classical ML algorithms (Stenneth et al., 2011; Dabiri and Heaslip, 2018; Sadeghian et al., 2022).
Thanks to the availability of large-scale datasets, DL models have sparked a new paradigm for travel mode detection. Examples include convolutional neural networks (CNN) (Dabiri and Heaslip, 2018; Yazdizadeh et al., 2020) and recurrent neural networks (RNN) (Kim et al., 2022), which can learn multi-level representations from input features, enabling them to describe highly non-linear relationships. Additionally, DL models can effectively capture consecutive travel mode choice patterns (Zeng et al., 2023), which is challenging to consider in the standard three-step mode detection framework (Bolbol et al., 2012). However, current DL mode inference models only accept a limited motion feature set as input (e.g., speed, acceleration, jerk and bearing in Dabiri and Heaslip (2018)), and approaches to include relevant geospatial context information are still to be explored. The modelling approaches and feature attribution results obtained from this study offer valuable insights that can inspire geospatial context integration into DL models for travel mode detection.
Methodology
We present a framework for evaluating the importance of geospatial context information in travel mode detection. The overall pipeline is illustrated in Figure 1. First, we extract motion and geospatial context features from the GNSS movement trajectory (§3.1). These features provide a comprehensive characterization of movements performed with different travel modes. Then, we implement an RF classifier to identify the travel mode with the extracted feature set (§3.2). Finally, based on the classification outcomes, we evaluate the contribution of the features to the model's prediction using SHAP (§3.3). In the following, we provide a more detailed description of each step.
Figure 1: Pipeline for evaluating the contribution of geospatial context information for travel mode detection.
Feature Extraction
As a first step, a typical travel mode identification framework requires extracting meaningful features.
We represent a movement trajectory travelled with a single travel mode by user u_i using s = ⟨u_i, m, c, g(s)⟩, where m represents the employed travel mode, c is the associated geospatial context, and g(s) denotes the time-ordered track points that constitute the movement, i.e., g(s) = (q_k)_{k=1}^{n}. A track point q is a tuple q = ⟨p, t⟩, where p = ⟨x, y⟩ includes spatial coordinates in a reference system, e.g., latitude and longitude, and t is the time of recording. Following the notation, we distinguish between motion features, where only track points g(s) are involved, and geospatial context features, where both context information c and track points g(s) are needed.
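For readers who prefer code, a minimal data model mirroring this notation could look as follows; the class and field names are illustrative and are not taken from the authors' open-source framework.

```python
# A minimal data model mirroring the paper's notation: a track point
# q = <p, t> and a trajectory s = <user, mode, context, (q_1..q_n)>.
from dataclasses import dataclass
from typing import List

@dataclass
class TrackPoint:
    x: float   # planar coordinate (e.g., metres in a projected CRS)
    y: float
    t: float   # recording time as a UNIX timestamp in seconds

@dataclass
class Trajectory:
    user_id: str
    mode: str                  # ground-truth travel mode label m
    points: List[TrackPoint]   # time-ordered track points g(s)
```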
Table 2 provides an overview of all motion features included in the study. Basic movement metrics such as length, duration, speed, acceleration, and bearing rate are calculated from track points. We obtain the average and the 85th percentile value of these metrics for each trajectory. More specifically, we calculate the length Δd_k and duration Δt_k of travel between two consecutive track points q_k, q_{k−1} ∈ (q_k)_{k=1}^{n}:

Δd_k = ∥p_k − p_{k−1}∥₂,  Δt_k = t_k − t_{k−1},

where ∥·∥₂ denotes the Euclidean distance with p_k and p_{k−1} represented in a planar coordinate system.
The length D and duration T are obtained by summing all these intervals, i.e., D = Σ_{k=2}^{n} Δd_k and T = Σ_{k=2}^{n} Δt_k. Moreover, the speed v_k and acceleration a_k of track point q_k ∈ (q_k)_{k=1}^{n} are obtained as follows:

v_k = Δd_k / Δt_k,  a_k = (v_k − v_{k−1}) / Δt_k.

Another essential feature that distinguishes travel modes is the direction change, which is often represented using the bearing rate (Yazdizadeh et al., 2020; Sadeghian et al., 2022) that measures the absolute difference between the bearings of two sequential track points:

b_k = |bearing(q_k) − bearing(q_{k−1})|.

We then calculate the average speed V, average acceleration A and average bearing rate B of the trajectory by averaging over all its track points. The 85th percentile speed V_85th, acceleration A_85th and bearing rate B_85th are obtained by ordering the respective (v_k), (a_k) and (b_k) sequences in ascending order and selecting the 85th percentile value.
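A compact sketch of these motion-feature computations with NumPy is given below. It assumes the coordinates are already projected to a planar CRS in metres; the simple planar bearing and the wrap-around handling of bearing differences are simplifying assumptions, not the authors' exact implementation.

```python
import numpy as np

def motion_features(x, y, t):
    """Motion features of one trajectory from planar coordinates (m) and times (s)."""
    x, y, t = map(np.asarray, (x, y, t))
    dd = np.hypot(np.diff(x), np.diff(y))   # interval lengths  (delta d_k)
    dt = np.diff(t)                         # interval durations (delta t_k)
    v = dd / dt                             # per-point speeds v_k
    a = np.diff(v) / dt[1:]                 # accelerations a_k
    bearing = np.degrees(np.arctan2(np.diff(y), np.diff(x)))  # simple planar bearing
    b = np.abs(np.diff(bearing))            # bearing rates b_k
    b = np.minimum(b, 360.0 - b)            # wrap differences into [0, 180]
    return {
        "length": dd.sum(), "duration": dt.sum(),
        "V": v.mean(), "V85": np.percentile(v, 85),
        "A": a.mean(), "A85": np.percentile(a, 85),
        "B": b.mean(), "B85": np.percentile(b, 85),
    }
```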
Geospatial Context Feature. Table 3 provides an overview of the geospatial context features considered in the study. These features are either obtained with the trajectory's endpoints (Table 3, Endpoints) or all points that form the trajectory (Table 3, All points). The latter can be further categorized following the geometric type of the context data, i.e., whether the contexts exist in the form of points (e.g., POI), networks (e.g., road network) or areas (e.g., residential area). We include abundant geospatial context features to exhaustively consider the different context modelling approaches in previous studies (see Section 2).
We start by implementing features that only depend on the endpoints of a trajectory. These features measure the closeness of trajectory endpoints to the geospatial context, assuming that trajectories shall start and end at predefined locations (Stopher et al., 2008; Gong et al., 2012). Operationally, we regard a point set Q = (Q_j)_{j=1}^{m(Q)} with m(Q) point objects as the context c = Q, and identify the minimum distance between Q and the trajectory start point q_1 as well as the end point q_n, respectively:

d_start = min_j f(q_1, Q_j),  d_end = min_j f(q_n, Q_j),

where f(q, Q_j) measures the Euclidean distance between the point q and the object Q_j. We use the minimum and maximum of the two distances to quantify the closeness of a trajectory to the point context:

D^min = min(d_start, d_end),  D^max = max(d_start, d_end).

We construct Q using context data regarding railway stations, tram stops, bus stops, car parking, bicycle parking and ship landing stages, respectively, which results in 12 features in this category.
Besides, previous studies have measured the proximity of the entire trajectory to geospatial contexts (Semanjski et al., 2017), which accounts for dynamic context interactions during the movement process. Considering the same point object set Q, we now measure the average minimum distance between Q and every point q ∈ (q_k)_{k=1}^{n} that forms the trajectory:

D = (1/n) Σ_{k=1}^{n} min_j f(q_k, Q_j).

We implement features measuring the distance to railway stations D_railS, tram stops D_tramS, bus stops D_busS and general POIs (e.g., public and catering facilities) D_POIS using the respective point context data.
In addition, movements using specific travel modes are confined by their infrastructures, such as trains and trams that operate on established tracks. This characteristic can be quantified by measuring the distance between the trajectory and infrastructure networks. Here, we consider the infrastructure network N = (L_k)_{k=1}^{m(L)} that consists of m(L) line objects as the context c = N. We then calculate the average of the closest distance between every trajectory point q ∈ (q_k)_{k=1}^{n} and the network N:

D_N = (1/n) Σ_{j=1}^{n} min_k f(q_j, L_k),

where f(q_j, L_k) measures the Euclidean distance between the point q_j and the line L_k. As a result, network distances of a trajectory to the railway network D_railN, to the tram network D_tramN, to the road network D_roadN, and to the pedestrian and bike network D_pedN are obtained.
The last feature category relates to built and natural environments that are particularly attractive to, or only accessible by, specific travel modes. These LULC contexts are typically available in area formats and can be denoted using a set E = (E_k)_{k=1}^{m(E)} that contains m(E) non-overlapping area objects. Analogous with Equation 8, we include features describing the distance to residential areas D_resident, public green spaces D_green and forest areas D_forest, as travels close to these LULC contexts were reported to be dominated by active modes and short-distance public transport (Semanjski et al., 2017; Roy et al., 2022). In addition, we obtain the trajectory's proportion on water P_water for describing boat travels:

P_water = (1/n) Σ_{j=1}^{n} 1[∃k: within(q_j, E_k)],

where within(q_j, E_k) represents the spatial analysis function for determining whether q_j is within the area E_k, and 1[·] is the indicator function.
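The three "all points" feature families above map naturally onto shapely distance queries. The sketch below illustrates this, assuming all geometries share one projected CRS; the layer variables (stations, network, water) are placeholders standing in for the OSM/SMV25 layers, not the study's actual data objects.

```python
import numpy as np
from shapely.geometry import Point, MultiPoint, MultiLineString, MultiPolygon

def point_context_features(points, stations: MultiPoint):
    """Endpoint D^min/D^max and whole-trajectory average distance to point contexts."""
    pts = [Point(p) for p in points]
    d_start = pts[0].distance(stations)    # min distance from start point to any station
    d_end = pts[-1].distance(stations)     # min distance from end point to any station
    d_avg = float(np.mean([p.distance(stations) for p in pts]))
    return {"D_min": min(d_start, d_end),
            "D_max": max(d_start, d_end),
            "D_avg": d_avg}

def network_distance(points, network: MultiLineString) -> float:
    """Average closest distance between every track point and an infrastructure network."""
    return float(np.mean([Point(p).distance(network) for p in points]))

def proportion_on_water(points, water: MultiPolygon) -> float:
    """Share of track points lying within water areas (P_water)."""
    return float(np.mean([water.contains(Point(p)) for p in points]))
```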
Random Forest Classification
Given a feature set that describes the trajectory, travel mode detection can be viewed as a classification problem that aims to obtain a (non-linear) mapping g(·) between the feature set X and the ground-truth travel mode label c, i.e., c = g(X). When a new trajectory is observed, the learned mapping g(·) is used to predict a travel mode label ĉ. Many supervised ML models have been proposed for learning g(·) (Dabiri and Heaslip, 2018), among which tree-based models, especially the RF model, are reported to achieve optimum performance (Stenneth et al., 2011; Yang et al., 2022).
RF is an ensemble method based on the decision tree classifier, with a binary tree structure that partitions the feature space into a set of mutually exclusive regions. These partitions are performed by searching all possible feature splits and selecting the one that maximizes the Gini impurity gain. For a candidate splitting feature X_k ∈ X, the Gini impurity index is calculated as

Gini(X_k) = 1 − Σ_{i=1}^{C} pr_i²,

where C is the number of categories in X_k, and pr_i represents the sample proportion of the i-th class. The gain for a split is calculated by comparing the Gini impurity in the parent node with the one after performing the split. A decision tree is completely built until pre-specified termination criteria are met, or until all leaves are pure, meaning that only samples from the same category are included (Breiman et al., 2017).
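The node-splitting criterion can be written in a few lines. This is the standard CART-style Gini impurity computed over class labels, shown only to make the gain calculation concrete.

```python
import numpy as np

def gini(labels) -> float:
    """Gini impurity of a set of class labels: 1 - sum(p_i^2)."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - float(np.sum(p ** 2))

def gini_gain(parent, left, right) -> float:
    """Impurity gain of splitting `parent` into `left` and `right` child nodes."""
    n = len(parent)
    weighted = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
    return gini(parent) - weighted
```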
While the decision tree is robust to the inclusion of irrelevant features and produces inspectable models, it tends to overfit the training set and has low generalization ability (Hastie et al., 2009). RF significantly alleviates the overfitting issue by maintaining multiple decision trees, each trained with a different training set, constructed by random sampling from the original training dataset with replacement. In addition to this sample randomization, often referred to as bootstrap aggregating, RF introduces additional randomness into the feature selection process. Only a random subset of features is considered at each node split, ensuring the diversity of the learned decision trees (Breiman, 2001). During prediction, each constructed decision tree outputs a predicted class label, and the majority voting strategy is used to determine the final classification result.
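A minimal scikit-learn sketch of such a forest is shown below. The 150 trees and maximum depth of 21 follow the settings visible in Figure S4; the remaining arguments, and the availability of prepared X_train/y_train arrays, are assumptions rather than the authors' exact configuration.

```python
from sklearn.ensemble import RandomForestClassifier

# Assumes X_train / y_train hold the 32 features and validated mode labels.
clf = RandomForestClassifier(
    n_estimators=150,         # number of bootstrapped trees (cf. Figure S4)
    max_depth=21,             # maximum tree depth (cf. Figure S4)
    max_features="sqrt",      # random feature subset considered at each split
    class_weight="balanced",  # one way to counter the strong class imbalance
    n_jobs=-1,
    random_state=42,
)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)  # per-tree votes aggregated by majority
```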
Evaluating feature importance
With a well-trained RF model, we evaluate the contribution of individual features to the prediction result with SHAP (Lundberg and Lee, 2017) and TreeExplainer (Lundberg et al., 2020). SHAP is a game-theoretic approach to explain the output of ML models using Shapley values (Shapley, 1952) that fairly distribute players' contributions when they collectively achieve an outcome. The concept can be generalized to ML to quantify the contribution of each feature that collectively delivers the model's output (Štrumbelj and Kononenko, 2014). More formally, the Shapley value ϕ_{X_k} of feature X_k is its marginal contribution to the model prediction, averaged over all possible models trained with different feature combinations:

ϕ_{X_k} = Σ_{S ⊆ X∖{X_k}} [|S|! (|X| − |S| − 1)! / |X|!] (v(S ∪ {X_k}) − v(S)),

where |·| denotes the cardinality of a set and v(S) is the prediction value of a model trained with the feature set S. The exact computation of Shapley values for an arbitrary model has proven to be NP-hard (Matsui and Matsui, 2001), posing computational challenges to their widespread adoption. Facing this challenge, Lundberg and Lee (2017) proposed SHAP to estimate Shapley values, and subsequently, TreeExplainer was presented to exactly compute Shapley value explanations for tree-based models (such as RF) in polynomial time (Lundberg et al., 2020).
We use the TreeExplainer to obtain SHAP value explanations at the level of individual observations. The overall importance ϕ̄_{X_k} of feature X_k is calculated as the average absolute SHAP value over all considered data samples:

ϕ̄_{X_k} = (1/N) Σ_{i=1}^{N} |ϕ_{i,X_k}|,

where ϕ_{i,X_k} is the SHAP value of feature X_k for sample i, and N is the considered sample size. A higher absolute SHAP value suggests a stronger influence of the feature on the prediction. Therefore, we can assess the impact of context features in travel mode detection by analyzing ϕ̄_{X_k} for all implemented motion and geospatial context features.
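Using the shap library, the per-feature importance defined above can be computed roughly as follows. Note that the array layout returned for multiclass models differs between shap versions; the sketch assumes the classic (classes, samples, features) layout.

```python
import numpy as np
import shap

explainer = shap.TreeExplainer(clf)           # exact computation for tree ensembles
sv = np.array(explainer.shap_values(X_test))  # classic API: (classes, samples, features)

importance = np.abs(sv).mean(axis=(0, 1))     # mean |SHAP| per feature over classes/samples
ranking = np.argsort(importance)[::-1]        # most influential features first
```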
In addition, we implement feature importance assessment methods employed by previous travel mode detection studies (Yang et al., 2022; Wu et al., 2022), including mean decrease in impurity (MDI), permutation importance and drop column importance, to complement and support the result obtained using SHAP.
These methods are based on different principles and are described in detail in Appendix A.
GNSS tracking data and preprocessing
We utilize a large-scale longitudinal GNSS tracking dataset for the case study. The tracking dataset was recorded within the SBB Green Class (GC) E-Car pilot study conducted by the Swiss Federal Railways (SBB) from November 2016 to December 2017 (Martin et al., 2019). The study, involving 139 Switzerland-based participants, aimed to evaluate the effect of a mobility-as-a-service (MaaS) offer on individuals' mobility behaviour. The participants were provided with a MaaS bundle and were asked to install a GNSS-tracking application on their smartphones that records their daily movement with a high temporal resolution. Based on motion measurements such as speed and acceleration obtained from built-in smartphone sensors, the application segments the recorded GNSS traces into stages of continuous movements and staypoints where users are stationary. It additionally imputes the travel mode labels for stages (also based on motion measurements), which are later confirmed or corrected by the study participants. The GC dataset consists of ~230 million GNSS track points, aggregated into 465,195 stages with travel mode labels (car, e-car, train, bus, tram, bicycle, e-bicycle, walk, airplane, boat, coach), and includes information about individual travel behaviour for 52,251 user days. We implement a series of preprocessing steps to prepare the dataset for travel mode detection following previous research on travel behaviour analysis (Hong et al., 2023) and mode detection (Stopher et al., 2008; Dabiri and Heaslip, 2018). The detailed steps can be found in Appendix B. After preprocessing, we obtain 365,307 stages with user-validated mode labels grouped into bicycle, boat, bus, car, train, tram, and walk.
The travel mode frequency is shown in Table 4, suggesting a highly imbalanced number of class labels, ranging from 367 stages for the class boat to 155,177 for the class walk. The geospatial context data are obtained from OpenStreetMap (OSM) and Swiss Map Vector 25 (SMV25). OSM is an open-source project that provides users with free and easily accessible digital map resources and is considered the most successful and prevailing volunteered geographic information (VGI) project (Hong and Yao, 2019). We retrieve historical feature layers from early 2017 in Switzerland from OSM to match the time frame of the GC tracking study. These layers include transport infrastructure, traffic-related POIs, general POIs, places of worship, road and railway infrastructure, as well as land cover type. As the water layer of the 2017 dataset does not include the main lakes across Swiss borders, this layer is taken from the latest version of OSM (mid-2022). Besides, we retrieve information regarding the size of a river and whether it is navigable for boats from SMV25. Table 5 provides an overview of all layers used as input to extract geospatial context features.
Model training and evaluation metrics
Our pipeline is implemented in Python using the trackintel (Martin et al., 2023), scikit-learn (Pedregosa et al., 2011) and SHAP (Lundberg and Lee, 2017) libraries. We randomly split the GC dataset into non-overlapping train and test sets with a ratio of 8:2. We then perform a grid search using five-fold cross-validation on the training set to determine the optimum hyper-parameters. The detailed ranges and the final selected hyper-parameter set are given in Appendix C. We finally retrain the RF model using all the training data and evaluate the model performance on the held-out test set. SHAP values are obtained from test data samples using path-dependent feature perturbation for the Shapley value function (Equation 11), whose results are regarded as "true to the data" and reflect natural mechanisms in the real world (Chen et al., 2020).
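The split and search could be set up along these lines. The candidate grid values are invented for illustration (the actual ranges are in Appendix C), while the 8:2 split, five folds, and macro F1 scoring follow the text.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# X holds one feature row per stage, y the validated mode labels.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)        # the paper's 8:2 split

param_grid = {"n_estimators": [50, 100, 150],    # illustrative candidates only
              "max_depth": [11, 21, 31]}
search = GridSearchCV(RandomForestClassifier(random_state=42),
                      param_grid, cv=5, scoring="f1_macro", n_jobs=-1)
search.fit(X_train, y_train)
print(search.best_params_)
```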
We use the F1 score to assess the performance of our travel mode detection model. For each travel mode category, the F1 score is the harmonic mean of precision and recall, which are obtained by constructing the confusion matrix and counting the true positive (TP), false positive (FP), and false negative (FN) samples of the model:

precision = TP / (TP + FP),  recall = TP / (TP + FN),  F1 = 2 · precision · recall / (precision + recall).

We use the average F1 score across classes, which weights the performance for each travel mode fairly without considering its number of instances, thus creating a suitable measure for the class imbalance problem.
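In scikit-learn terms the evaluation reduces to two calls, assuming y_test and y_pred from the previous sketches.

```python
from sklearn.metrics import confusion_matrix, f1_score

print(confusion_matrix(y_test, y_pred))            # counts behind TP / FP / FN
print(f1_score(y_test, y_pred, average="macro"))   # unweighted mean F1 over modes
```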
Feature extraction
We extract 32 features for each movement stage, consisting of 8 motion features and 24 geospatial context features. Figure 2 shows the distribution of three example features from different categories, highlighting clear distinctions between the considered travel modes. For instance, the 85th percentile of acceleration distinguishes low-acceleration modes such as boat and walk from high-acceleration ones such as car and tram (Figure 2A). Additionally, the stage endpoints for all travel modes except tram have a considerable distance to tram stops (Figure 2B). Finally, the distance to the road network successfully differentiates travel modes that operate on the road network (i.e., bus and car) from those that do not occupy road space, such as boat and train (Figure 2C). These examples demonstrate that the features characterize movements from different perspectives and facilitate the separation of travel modes. We conduct a correlation analysis to investigate the relationship between features. Figure 3 shows the heatmap of Spearman's rank correlation coefficient ρ between pairs of features, which accounts for differences in feature distributions and scales. The majority of light grid colours, representing ρ values close to 0, show that most features are not strongly correlated, indicating that the implemented feature set effectively captures diverse movement characteristics without much redundant information. However, there are some exceptions. We observe darker grid colours for motion features, showing high positive correlations between length and duration as well as between speed and acceleration. We also report high negative correlations between bearing rates and other motion features. Moreover, we find strong positive correlations between features related to train infrastructure and between features related to tram infrastructure. Highly correlated features have limited influence on the travel mode detection performance since RF is relatively robust to feature collinearity during training (Fernández-Delgado et al., 2014). However, the interpretation of individual feature contributions may be affected depending on the employed feature attribution method (Hastie et al., 2009; Molnar, 2020).
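With the features collected in a pandas DataFrame (here a placeholder named features_df, one row per stage and one column per feature), the correlation matrix behind Figure 3 reduces to a one-liner.

```python
import pandas as pd

# features_df: one row per movement stage, one column per feature (placeholder name)
rho = features_df.corr(method="spearman")             # Spearman's rank correlation
strong_pairs = (rho.abs() > 0.8) & (rho.abs() < 1.0)  # flag strongly correlated pairs
print(rho.round(2))
```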
Travel mode identification result
We train an RF model using these features to identify travel modes for movement stages. The confusion matrix and the precision, recall and F1 score of each travel mode are presented in Table 6. We achieve an overall accuracy of 93.0% and an average F1 score of 83.0%. Regarding individual classes, we report reliable detection for the most frequently observed travel modes, namely car, train, and walk, as indicated by their high F1 scores of over 90%. In addition, the RF model achieves high performance for the tram mode with an F1 score of 96.2%. On the other hand, the model experiences difficulty in correctly classifying less frequently observed travel modes, such as bicycle, bus, and boat. Specifically, movement stages recorded as bicycle modes are often misclassified as walk or car modes. This could result from the bicycle mode being formed from both conventional and electric bikes, leading to a wide range of movement behaviours. We also observe that bus trips are frequently misdetected as car movements, likely due to their similar motion characteristics and shared road infrastructure. In summary, the overall performance of the mode detection model provides an excellent foundation for estimating the relative importance of each input feature. Yet, we must consider the performance variations between travel modes when interpreting the feature importance result.
Evaluation of Feature Importance
Based on the trained RF model, we analyse the feature contributions to distinguishing travel modes using various feature importance assessment methods. This section focuses on the feature importance obtained using SHAP. Considering the overall importance, geospatial network features contribute the most, followed by motion features, while for several of the remaining features the model does not deem their contained information crucial. Additionally, geospatial point features moderately contribute to generating the output. Context information regarding public transport, including rail, tram and bus stops (2.1-2.6 and 2.13-2.15), is more valuable than the information related to car, bicycle and ship parking (2.7-2.12). The distance to POIs (2.16) is among the most influential features in this category. Lastly, except for the high contribution of the water body feature (2.21), the other LULC features (2.22-2.24) have limited impact on the detection model, as shown by their lowest SHAP values compared to all other features. We believe that motion and other context features are sufficient in distinguishing travel modes, and the ancillary LULC context information does not provide additional knowledge for the RF model.
SHAP values are calculated at the level of individual observations and can thus reflect detailed feature attribution for each travel mode category. We present the five most crucial features for detecting each travel mode in Figure 4B-H. Although these features vary, we observe shared patterns for different travel modes. The corresponding network feature is always the most important contributing factor for modes restricted to specific infrastructures. Typical examples include cars, buses, trains and trams, whose infrastructure network is the most contributing feature, considerably exceeding the importance of the second most crucial one (Figure 4C, D, F, and H). In addition, motion features such as average speed and acceleration commonly attract high SHAP values, showing their importance in distinguishing all travel modes; for some modes, the most contributing factors all belong to motion features (Figure 4B). On the contrary, motion features are insufficient in separating bus and tram modes, whose most contributing elements are related to their corresponding geospatial contexts, such as bus and tram stops (Figure 4D and H). Here, it is evident that identifying bus and tram modes benefits from different context modelling approaches. We observe a high SHAP value of the endpoint-context distance for detecting bus trips, whereas the trajectory-context distance contributes more to outputting tram labels. Not surprisingly, identifying boat travel benefits the most from the water body feature (Figure 4G), which describes the unique characteristic of boats travelling on water. In summary, this detailed analysis reveals that feature importance is travel-mode dependent, and features with low overall importance might be essential for identifying specific travel modes. Besides general motion features, geospatial context features that reveal the characteristics of particular modes are also crucial for travel mode detection.
Discussion
This study proposes a feature attribution framework for travel mode detection models. We implement a set of motion and context features to train an RF model, which obtains convincing travel mode identification results. The SHAP feature attribution analysis reveals the essential role of geospatial features in the task.
Note that by considering various mode labels on a large-scale GNSS tracking dataset, the achieved performance is high in view of the recent studies (Yang et al., 2022;Kim et al., 2022;Wu et al., 2022).Although the exact performance figures are not directly comparable due to the differences in the employed dataset and preprocessing steps, the result demonstrates the selected features' effectiveness and the RF model's capability.Moreover, the successful detection lays a solid foundation for subsequent feature importance assessment, as feature attribution methods aim to disentangle the decision-making process within the model.
We use SHAP values to measure the importance of features and compare them with those obtained from other importance assessment methods. Despite differences in relative importance, all assessment methods regard network features as the most contributing factors, and LULC features (except water body) are of minor significance. The discrepancies mainly lie in the attribution of motion features, which are most heavily affected by feature correlation. Permutation and drop column importance both suffer from this issue, underestimating the contributions from motion features. MDI and SHAP reflect the internal decision process of the RF model, which is relatively robust to feature collinearity. Consequently, these two methods obtain similar attribution results for motion features. Nevertheless, MDI utilizes training data to derive its result and cannot reflect the model's generalization ability for unseen data. Therefore, with a solid foundation from game theory, SHAP is the most suitable and accurate approach to demonstrate the relative importance of geospatial context features.
The SHAP importance values for features help us understand why the RF detection model may confuse specific travel modes. We show this by comparing each travel mode's most contributing feature set, given in Figure 4B-H. Bicycle movements are detected relying on road and pedestrian networks as well as motion features, which are also the most contributing factors for identifying walking and car trips. Similarly, the distance to the road network is most influential for both car and bus trips. The detection model will likely misclassify a travel mode if its distributions in the most contributing features are similar to other modes. In other words, the lack of distinctive features for bicycle and bus movements limits their prediction performance, leaving room for improving existing and introducing new feature designs.
Although we have systematically evaluated the importance of geospatial context data, there are several points to consider when interpreting the results. Firstly, it is essential to distinguish the context feature importance obtained from our pipeline from the general significance of the underlying context information. Our study presents only one way to represent context information as features, and feature assessment results may vary with different modelling approaches. However, we note that the features and their representations implemented in this study were carefully selected following a systematic literature review. Secondly, the GC dataset is subject to common GNSS tracking quality issues, such as spatio-temporal gaps and spatial uncertainties in GNSS recordings (Zhao et al., 2021), which may affect the representativeness of the features and consequently influence the assessment results. Future studies should analyze the robustness of the assessment results to the quality of GNSS tracking data.
Conclusion
Methods proposed for travel mode detection from GNSS tracking data are increasingly powerful, yet little is known regarding the model's underlying working mechanism and how the model outputs a travel mode.
To address this gap, this study introduces an analytical framework for assessing the significance of geospatial context information in travel mode detection models. Concretely, we review common feature representations from recent work and implement an exhaustive set of features that describe the motion characteristics and geospatial context interactions of a moving trajectory. We analyse the correlations between these features and use them to train an RF model that learns to correctly identify the travel mode. Using the constructed RF model, we employ feature attribution methods to evaluate the influence of individual features on obtaining the output mode label. The framework is tested on a longitudinal GNSS tracking dataset containing user-labelled travel modes for trips over 52,000 user days.
The feature attribution results obtained in this study demonstrate that geospatial network features, such as the distance to the road network, are more critical than motion features, such as speed and acceleration, when classifying an extensive list of travel modes. This finding highlights the importance of incorporating network features in travel mode detection models, especially given that many existing studies rely heavily on complex motion features without considering the geospatial context. We also find that features describing relations between movement and geospatial point entities help identify public transport travel with designated start and end stations. Additionally, our results suggest that the majority of LULC features do not significantly contribute to the task, emphasizing the need for further modelling work to represent the relationship between LULC and movements. Finally, we identify the most contributing features for detecting each mode, providing insights into the contexts that should be emphasized when aiming to classify specific travel modes accurately.
The proposed travel mode detection framework can be readily applied to movement datasets collected from other parts of the world, thanks to the high-quality and easily accessible worldwide OSM map service (Boeing et al., 2022). The study provides valuable guidance for feature selection, effective feature design, and building efficient travel mode detection models. Additionally, our results can inspire novel context integration approaches for DL models to represent the relationship between movement and geospatial context.
Permutation Importance. Permutation importance evaluates a feature by randomly shuffling its values; by breaking the feature-target relationship, the drop in the model score indicates how much the model depends on the feature. We compute the permutation importance on the test set without retraining the RF model. However, it is essential to note that this importance score does not reflect the intrinsic predictive value of a feature by itself, but rather how important the feature is for a particular model. For example, features deemed of low importance for a poor model could be essential for a good model. Therefore, ensuring a strong predictive power of the model is crucial before using this score. Additionally, permutation importance is biased for highly correlated features.
As a well-trained model may still achieve good performance when one of the correlated features is shuffled, the permutation importance measure tends to underestimate the contribution of correlated features.
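scikit-learn ships this procedure directly. The sketch below mirrors the setup behind Figure S2 (test-set evaluation, five shuffles per feature, no retraining), with macro F1 as the score being degraded; it assumes the trained clf and test arrays from the earlier sketches.

```python
from sklearn.inspection import permutation_importance

result = permutation_importance(
    clf, X_test, y_test, scoring="f1_macro",
    n_repeats=5, random_state=42, n_jobs=-1)

mean_drop = result.importances_mean   # mean decrease in macro F1 per feature
spread = result.importances_std       # variability across shuffles (error bars)
```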
Figure S2 shows the mean decrease of the F1 score when randomly shuffling features. Most feature permutations have a limited effect on the model's performance, as indicated by an F1 score decrease of less than 3%, which might be due to their observed inter-correlations (see Figure 3). The network features have much higher contributions than all other feature categories, with distance to road network (2.19) being the most important among them. Also, including proportion on water (2.21) significantly impacts the model's performance. Other decisive contributing factors include motion features (1.1 and 1.3), endpoint features (2.5, 2.6, 2.11, and 2.12) and distance to residential areas (2.23). The standard deviation across different permutation runs, indicated by the error bars, is very low, suggesting the permutation importance result is relatively stable.
Drop Column Importance. The drop column importance method evaluates the contribution of a feature by retraining a model without it. The underlying idea is that training a model without a feature will not significantly affect the performance if the feature is unimportant. However, this method is computationally expensive, requiring retraining the model as many times as there are features. Additionally, it can underestimate the importance of correlated features. In extreme cases where two features are completely correlated, dropping one will not impact the model, as the remaining feature contains all the necessary information for training, resulting in a zero importance score.
Drop column importance is measured by the decrease in the F1 score on the test set, averaged across five different model (re-)trainings, as shown in Figure S3. This method can also produce negative values, which suggest that the prediction performance increases when the feature is excluded from the model. Dropping each feature does not significantly influence the model's performance, with most differences in F1 score of less than 1%. We still observe a relatively high contribution of the network features, especially distance to road network (2.19). The distinction between all other features is small. The motion features demonstrate low importance compared to the other assessment methods, most likely due to their strong correlation (see Figure 3), which diminishes their contribution when measured using drop column importance.
Figure 2: Boxplot showing the feature distribution categorized based on the ground-truth travel mode. Three example features, 1.6 A_85th (A), 2.4 D^max_tram (B) and 2.19 D_roadN (C), are selected to show the effectiveness of the implemented features in separating travel modes. Outlier values are excluded.
Figure 3: Spearman's rank correlation coefficient between pairs of features. All pairs have two-tailed P < 0.05 except for 2.21 P_water and 2.6 D^max_bus (P = 0.13).
Figure S2: The feature importance evaluated using permutation. The error bars indicate standard deviations calculated from five different permutations.
Figure S3: The feature importance evaluated using the drop column method. The error bars represent standard deviations calculated from five different model fits.
Figure S4: Travel mode prediction performance when altering the hyper-parameters of RF. The error bars represent standard deviations calculated from five-fold cross-validation. We show the performances for sample-weighted and original RF when tuning the maximum tree depth with 150 estimators (A), and tuning the number of estimators with a maximum tree depth of 21 (B).
Table 1: Summary of input features in related literature (PT: public transport). * Geospatial context features were only used to detect the subway mode by designed rules.
Table 2: Description of motion features.
Table 3: Description of geospatial context features.
Table 4: Travel mode frequency of stages.
Table 5: The sources, geometry types and descriptions of the considered geospatial context data.
Table 6: Confusion matrix and performances for travel mode identification. | 2023-06-01T01:15:52.993Z | 2023-05-30T00:00:00.000 | {
"year": 2023,
"sha1": "731a251866314913ad00b56eb603fc15565c87c4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.jtrangeo.2023.103736",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0663bfc159578af0c7ae3fe62f230971b20f1532",
"s2fieldsofstudy": [
"Geography",
"Computer Science",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
221035159 | pes2o/s2orc | v3-fos-license | Virus Budding
Enveloped viruses exit producer cells and acquire their external lipid envelopes by budding through limiting cellular membranes. Most viruses encode multifunctional structural proteins that coordinate the processes of virion assembly, membrane envelopment, budding, and maturation. In many cases, the cellular ESCRT pathway is recruited to facilitate the membrane fission step of budding, but alternative strategies are also employed. Recently, many viruses previously considered to be non-enveloped have been shown to exit cells non-lytically within vesicles, adding further complexity to the intricacies of virus budding and egress.
A variation on this same theme is the discovery of Gag-related cellular proteins that can self-assemble, exit producer cells, and transfer RNA molecules between cells (Pastuzyn et al., 2018; Ashley et al., 2018).
As illustrated in Fig. 2, HIV-1 Gag comprises four functional elements: MA, CA, NC and p6, connected by two spacer peptides: SP1 and SP2. The MA domain in Gag (MA Gag ) targets assembly to the plasma membrane, NC Gag binds the dimeric viral RNA genome, the CA Gag and SP1 Gag elements mediate spherical particle assembly, and p6 Gag recruits cellular ESCRT (Endosomal Sorting Complexes Required for Transport) pathway factors required for virus budding. Gag also packages several other components required for full infectivity, including the viral enzymes, which are packaged as co-assembling Gag-Pol fusion proteins, the viral Env protein, whose intracellular domain interacts with MA Gag , and Viral Protein R (Vpr), which binds p6 Gag .
(1) RNA packaging: Following synthesis in the cytoplasm, Gag monomers or low-order multimers bind two associated copies of the full-length viral genomic RNA through a highly structured packaging site located near the 5′ end of the genome (Jouvenet et al., 2009; Kutluay and Bieniasz, 2010; Keane et al., 2015). This interaction is mediated by NC Gag and enables specific selection of the HIV-1 genome over cellular RNAs and viral RNAs undergoing translation (Kaddis Maldonado and Parent, 2016; Dubois et al., 2018; Kuzembayeva et al., 2014; Brown et al., 2020). Several host proteins, including ABCE1, Staufen1, and DDX6, can associate with Gag-RNA complexes and have been proposed to promote Gag trafficking, multimerization, and/or genome encapsidation, although these activities are not yet fully defined (Reed et al., 2012; Zimmerman et al., 2002; Chatel-Chaix et al., 2007).
(2) Membrane trafficking: Gag and Gag-RNA complexes move to the plasma membrane and associate with cholesterol-rich microdomains, commonly termed "lipid rafts" (Ono and Freed, 2001). Binding of Gag to the plasma membrane is mediated by a bipartite signal in the MA Gag domain that comprises a post-translational myristic acid modification at the amino-terminus and a cluster of basic residues, the highly basic region (HBR). Prior to membrane binding, the myristate is sequestered in a hydrophobic cavity within the globular MA Gag domain. The HBR interacts with acidic head groups of the plasma membrane-specific phospholipid phosphatidylinositol-(4,5)-bisphosphate (PI(4,5)P2) and this interaction, perhaps in concert with Gag multimerization, exposes the myristate moiety. PI(4,5)P2 and the conformational "myristoyl switch" enable targeting and stable Gag association with the plasma membrane (Chukkapalli and Ono, 2011; Tang et al., 2004). In addition to PI(4,5)P2, MA Gag can also bind cellular RNA, especially tRNAs. The MA-tRNA interactions likely prevent nonspecific binding of Gag to cellular membranes other than the plasma membrane (Chukkapalli et al., 2010; Alfadhli et al., 2011; Kutluay et al., 2014; Gaines et al., 2018). (3) Virion assembly: At the plasma membrane, Gag molecules multimerize and assemble into a hexagonal lattice through lateral protein-protein interactions that are mediated by the CA-SP1 region (Wagner et al., 2016). Host-derived inositol hexakisphosphate (IP6) binds within the CA-SP1 hexamers and facilitates formation of the immature Gag lattice (Dick et al., 2018; Mallery et al., 2018). In addition to the genomic RNA, Gag interacts directly with other viral proteins that are incorporated into the nascent virion, including the multifunctional accessory protein Vpr, which binds a leucine-rich element located within p6 Gag (Kondo et al., 1995), and the cytoplasmic tail of the gp41 subunit of the heterotrimeric transmembrane Env protein, which is incorporated within holes in the assembling Gag lattice created by MA trimerization (Tedbury and Freed, 2014; Tedbury et al., 2016; Hill et al., 1996; Pezeshkian et al., 2019). In addition to the Gag polyprotein, the full-length viral RNA also encodes the Gag-Pol polyprotein. The longer Gag-Pol protein is translated by a ribosomal frameshifting mechanism, contains the viral enzymes, and is incorporated into the nascent virion through interactions with Gag (Smith et al., 1993).
Envelopment
The multifunctional structural proteins that mediate assembly and membrane targeting also appear to facilitate virion envelopment by inducing membrane curvature. The hexagonal HIV-1 Gag lattice contains small discontinuities that accommodate declination and allow the immature lattice to bend the membrane and create a spherical virion. The host factor angiomotin (AMOT) has also been implicated in HIV-1 virion envelopment because fully enveloped spherical particles are not formed efficiently in the absence of AMOT (Mercenne et al., 2015). The BAR domain of AMOT likely contributes to this activity, as this domain has been shown to bend and tubulate membranes in other contexts (Nishimura et al., 2018). Unlike HIV-1, betaretroviruses and spumaviruses assemble in the cytoplasm before being trafficked to the plasma membrane. Thus, assembly and budding are spatially and temporally separated in these viruses. For example, Gag proteins from prototypical foamy virus, Mason-Pfizer monkey virus, and mouse mammary tumor virus preassemble into immature virions near the pericentriolar region and are then trafficked on microtubules to associate with membranes and the viral Env protein (Müllers, 2013;Hütter et al., 2013;Swanstrom and Wills, 1997). Hepatitis B virus (HBV), a hepadnavirus, similarly forms capsids in the cytoplasm, which then associate with envelope proteins in cellular membranes to mediate particle envelopment and budding (Prange, 2012;Blondot et al., 2016).

Fig. 2 (caption): The HIV-1 Gag polyprotein. HIV-1 Gag comprises four functional elements connected by two spacer peptides, SP1 and SP2 (gray). MA (yellow) facilitates membrane binding and Env incorporation. CA (orange) mediates assembly of the immature capsid and, after proteolytic processing, forms the mature conical capsid. NC (red) binds the viral RNA genome through two zinc finger motifs. p6 (brown) binds Vpr and recruits the early-acting ESCRT proteins TSG101 (a subunit of the ESCRT-I complex) and ALIX to facilitate membrane fission. Pink arrowheads denote proteolytic cleavage sites during maturation.
More broadly, most enveloped viruses have multifunctional structural proteins that mediate assembly and envelopment. Often, this role is fulfilled by matrix proteins that bind to the viral membrane, assemble into higher-order structures, and link internal ribonucleoprotein complexes to external envelope glycoproteins. In viruses that lack classical matrix proteins, such as hantaviruses (Hepojoki et al., 2012;Cifuentes-Munoz et al., 2014;Muyangwa et al., 2015) and alphaviruses (Brown et al., 2018), the cytoplasmic tails of envelope glycoproteins act as matrix protein surrogates by directly interacting with nucleoproteins. Larger, more complex DNA viruses, such as poxviruses (Roberts and Smith, 2008;Liu et al., 2014) and herpesviruses (Lv et al., 2019), divide these functions between multiple viral proteins (see Table 1).
ESCRT-dependent budding
As membrane envelopment proceeds, the membrane is constricted until nascent virions are connected to the plasma membrane by a thin stalk that must be severed to separate the viral and cellular membranes. Many enveloped viruses accomplish membrane constriction and fission by recruiting the machinery of the cellular ESCRT pathway (Votteler and Sundquist, 2013). ESCRT-independent enveloped viruses also exist, however, and these viruses must therefore either recruit other, as yet unidentified, host factors or encode viral proteins that mediate budding. The ESCRT pathway mediates cellular membrane fission events throughout eukarya and also in some archaeal species (McCullough et al., 2018;Christ et al., 2017;Scourfield and Martin-Serrano, 2017;Henne et al., 2013). The pathway was initially identified as the machinery that mediates intraluminal vesicle budding into specialized late endosomes, termed multivesicular bodies (MVB) (Hanson and Cashikar, 2012), but is now known to act at many other cellular membranes, including during cytokinetic abscission (Carlton and Martin-Serrano, 2007;Morita et al., 2007), resealing of the post-mitotic nuclear envelope (Olmos et al., 2015;Vietri et al., 2015), membrane repair (Scheffer et al., 2014;Jimenez et al., 2014;Skowyra et al., 2018), closure of autophagosomes (Takahashi et al., 2018;Zhou et al., 2019), and neuronal pruning (Zhang et al., 2014;Loncle et al., 2015;Issman-Zecharya and Schuldiner, 2014). Notably, all of these membrane fission events involve constricting membranes toward a cytoplasm-filled neck and are therefore topologically equivalent to virus budding from the plasma membrane.
ESCRT-mediated membrane fission events are catalyzed by a common core machinery (McCullough et al., 2018;Christ et al., 2017;Scourfield and Martin-Serrano, 2017;Banjade et al., 2019), which is recruited to different membranes by adapter proteins. These membrane-specific adapters recruit early-acting ESCRT proteins, which help to stabilize membrane curvature and also nucleate assembly of late-acting ESCRT-III proteins, which form the core fission machinery. ESCRT-III proteins can be recruited by three known mechanisms: (1) Adapters can recruit Bro1 domain-containing proteins such as ALIX, which serves as a bridge to the ESCRT-III proteins, (2) Adapters can bind the ESCRT-I complex, which in turn recruits ESCRT-III proteins via intermediate ESCRT-II complexes, and (3) The nuclear LEM2 adapter binds CHMP7, a hybrid ESCRT-II/ESCRT-III protein, which then binds other ESCRT-III proteins.
Humans express 12 related ESCRT-III proteins that are divided into eight subfamilies, CHMP1-7 and IST1, with some subfamilies comprising several homologs. ESCRT-III proteins can adopt "closed" and "open" conformations. In the autoinhibited closed state, ESCRT-III proteins are soluble and monomeric. When autoinhibition is relieved, ESCRT-III subunits open and can then bind membranes and form curved helical filaments. These filaments constrict membranes and recruit VPS4 AAA+ ATPases. VPS4 enzymes dynamically remodel and disassemble ESCRT-III filaments, using the energy of ATP hydrolysis to extract individual ESCRT-III subunits and release them back into the cytoplasm. These enzymes thereby power the virus budding cycle, although the precise mechanism by which ESCRT-III filaments and VPS4 enzymes collaborate to mediate fission is not yet fully understood.
Enveloped viral structural proteins recruit the ESCRT pathway using motifs that mimic cellular ESCRT adapters (Votteler and Sundquist, 2013;Hurley and Cada, 2018;Lippincott-Schwartz et al., 2017). These motifs were initially discovered in retroviral Gag proteins (Gottlinger et al., 1991;Huang et al., 1995;Parent et al., 1995;Xiang et al., 1996) and were termed "late domains" because they exerted their effects at a late stage of assembly. Analogous late domains were subsequently identified in many other enveloped viruses. Late domains can frequently function from different positions within viral structural proteins and can be swapped between viruses (Parent et al., 1995;Yuan et al., 2000), consistent with the idea that although they have different primary binding partners, they all ultimately converge on common downstream ESCRT-III proteins. Several different classes of late domains are now well understood, and others have been identified but remain to be linked to ESCRT binding partners.

P(S/T)AP: The P(S/T)AP late domain (where the second residue can be either a serine or a threonine) was first identified in the p6 polypeptide of HIV-1 Gag (Gottlinger et al., 1991;Huang et al., 1995), and subsequently identified in the structural proteins of other retroviruses, filoviruses, arenaviruses, and reoviruses (Votteler and Sundquist, 2013). The P(S/T)AP motif recruits the four-protein ESCRT-I complex by binding the UEV domain of the TSG101 subunit (Demirov et al., 2002;Garrus et al., 2001;VerPlank et al., 2001;Martin-Serrano et al., 2001). P(S/T)AP motifs are found in several cellular ESCRT adapter proteins, such as the early endosomal protein HRS (Bache et al., 2003).

YPXnL: YPXnL late domains (where Xn can vary in sequence and length) recruit ALIX by binding its central V domain (Strack et al., 2003;Martin-Serrano et al., 2003;von Schwedler et al., 2003). HIV-1 contains a YPXnL late domain, although this motif is less critical for budding than the PTAP motif in most cell types. Other viruses rely exclusively or primarily on YPXnL domains for budding, including other retroviruses, paramyxoviruses, flaviviruses, and possibly herpesviruses (Votteler and Sundquist, 2013). Divergent structural proteins that lack a readily detectable YPXnL motif, yet still bind to ALIX, have also been described (Boonyaratanakornkit et al., 2013;Lee et al., 2012), suggesting that ALIX-recruiting sequence motifs may accommodate more variability than has been documented to date. Cellular YPXnL-containing ESCRT adapters recruit ALIX during exosome biogenesis and lysosomal sorting (Baietti et al., 2012;Dores et al., 2012).
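The motif grammars above translate directly into simple pattern matching. The following is a minimal sketch rather than a validated motif scanner: the regular expressions encode the P(S/T)AP and YPXnL definitions given in the text, while the spacer length of one to four residues allowed for Xn and the toy input sequence are illustrative assumptions only.

```python
import re

# Late-domain motif patterns, following the definitions in the text:
#   P(S/T)AP - proline, then serine or threonine, then alanine-proline;
#   YPXnL    - tyrosine-proline, a variable spacer Xn, then leucine.
# The 1-4 residue spacer allowed for Xn is an illustrative assumption.
MOTIFS = {
    "P(S/T)AP": re.compile(r"P[ST]AP"),
    "YPXnL": re.compile(r"YP.{1,4}L"),
}

def find_late_domains(sequence):
    """Return (motif name, 0-based position, matched text) for each hit."""
    hits = []
    for name, pattern in MOTIFS.items():
        for match in pattern.finditer(sequence):
            hits.append((name, match.start(), match.group()))
    return sorted(hits, key=lambda hit: hit[1])

# Toy polypeptide sequence (not a real viral protein) with one of each motif.
demo = "MGSSKLEPTAPPAESFRYPLTSLRSLFGN"
for name, pos, text in find_late_domains(demo):
    print(f"{name} at position {pos}: {text}")
```

On the toy sequence this reports a PTAP hit and a YPXnL-type hit; real motif annotation would of course also need to weigh sequence context and accessibility.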
It is not yet fully understood how NEDD4 family members promote budding, but it has been suggested that ubiquitination of viral structural proteins, or other proteins within the budding site, recruits the ESCRT pathway because ALIX and TSG101 both contain ubiquitin-binding domains that can recognize ubiquitinated cargos for MVB incorporation and lysosomal degradation (Shields and Piper, 2011). Consistent with this idea, retroviral Gag proteins are typically ubiquitinated (although this requirement is not absolute (Zhadina et al., 2007)), and ubiquitin depletion or mutations that prevent ubiquitin ligase recruitment inhibit retrovirus budding (Patnaik et al., 2000;Strack et al., 2000).
FPIV: The paramyxovirus SV5 employs an FPIV motif to facilitate budding. SV5 release is ESCRT-dependent and is augmented by AMOTL1, but the binding partner for the FPIV motif remains to be defined (El Najjar et al., 2014). Viral structural proteins typically encode multiple late domains that function in synergy. For example, HIV-1 p6_Gag contains both a P(S/T)AP and a YPXnL motif, the HTLV-I Gag and Ebola virus (EBOV) VP40 structural proteins contain adjacent PPPY and PTAP late domains that bind both TSG101 and NEDD4, and murine leukemia virus Gag contains all three canonical late domains (Votteler and Sundquist, 2013). Nevertheless, some viruses contain a single (or at least dominant) late domain. For example, the retroviral EIAV Gag protein appears to recruit ALIX exclusively through a single YPXnL motif (Votteler and Sundquist, 2013).
All modes of viral ESCRT pathway recruitment ultimately converge on ESCRT-III, the machinery that catalyzes membrane fission. In the best-studied cases such as HIV-1, multiple different mammalian ESCRT-III proteins have been shown to localize to the bud neck (Jouvenet et al., 2011), but only CHMP2 and CHMP4 family members seem to perform indispensable functional roles (Sandrin and Sundquist, 2013;Morita et al., 2011). ESCRT-III proteins form helical filaments in the bud neck and progressively constrict it with the help of VPS4 enzymes, as described above. Ultimately, a membrane fission reaction severs the neck, releasing the virion from the cell.
ESCRT-independent budding
Some enveloped viruses bud independently of the ESCRT pathway, including alphaviruses (Brown et al., 2018), some paramyxoviruses (Salditt et al., 2010;Utley et al., 2008), and influenza A virus (Rossman and Lamb, 2011). ESCRT-independent membrane scission mechanisms are generally not well understood, but appear to involve as yet unidentified cellular factors, virally encoded proteins, or a combination.
Some RNA viruses, such as alphaviruses, contain an outer glycoprotein shell that completely covers the exterior of the viral membrane. The formation of this external protein coat has been suggested to play a role in virus budding, in both ESCRT-independent and ESCRT-dependent viruses. The alphaviral transmembrane glycoproteins E1 and E2 are embedded in the viral lipid envelope and form heterodimers that further trimerize into a continuous icosahedral lattice. These interactions are required to complete the budding step, and completion of the E1-E2 protein lattice, together with nucleocapsid interactions across the viral membrane, may be sufficient to drive both membrane curvature and fission (Brown et al., 2018;Weissenhorn et al., 2013). In contrast, flaviviruses and hepaciviruses also contain an outer protein coat, but still depend on the ESCRT pathway to complete their replication cycle (see below). Thus, external protein coats can apparently either replace or act in concert with the ESCRT pathway.
Influenza A is another ESCRT-independent virus (Bruce et al., 2009;Watanabe and Lamb, 2010;Chen et al., 2007), but in this case the viral transmembrane protein M2 appears to mediate membrane fission (Rossman et al., 2010). During particle assembly, the hemagglutinin (HA) and neuraminidase (NA) glycoproteins are targeted to the plasma membrane and cluster in lipid rafts. The matrix protein M1 interacts with the cytoplasmic tails of HA and NA, polymerizes against the membrane, and apparently acts in concert with HA to induce membrane curvature. M1 also recruits M2 to the bud neck, and an amphipathic helix in the cytoplasmic M2 tail inserts into, deforms, and promotes fission of the plasma membrane (Martyna and Rossman, 2014;Chlanda and Zimmerberg, 2016). M2 also functions as a pH-regulated ion channel that facilitates the release of viral ribonucleoprotein complexes from the endosome into the cytoplasm, but the ion channel and membrane fission functions appear to be independent activities (Rossman et al., 2010).
Intracellular budding
Some viral families envelop and bud into internal cellular membranes rather than at the plasma membrane. Budding into the lumen of a cellular organelle is topologically equivalent to budding from the plasma membrane, but in these cases the viral particle is temporarily surrounded by two membranes, the viral envelope and the organelle membrane. Virion release into the extracellular space therefore requires transport to and fusion of the virion-containing organelle with the plasma membrane.
One such case is the hepatitis B virus (HBV), which co-opts the MVB pathway for egress (Prange, 2012;Blondot et al., 2016;Patient et al., 2009). The three viral envelope proteins S, M, and L form budding sites at the MVB membrane that recruit mature cytoplasmic nucleocapsids. The assembling HBV particles bud into the MVB lumen in an ESCRT-dependent reaction that creates intraluminal virions. MVBs then fuse with the plasma membrane to release the enveloped viral particles from the cell, a process that resembles exosome release.
Herpesviruses are released via the secretory pathway in a complex series of events that requires several viral and cellular proteins (Lv et al., 2019;Fradkin and Budnik, 2016;Owen et al., 2015). Herpesviral genome replication, capsid assembly, and genome packaging all take place in the nucleus. The fully assembled nucleocapsids are too large to escape into the cytoplasm through nuclear pores and instead exit the nucleus and cell by undergoing several steps of envelopment and de-envelopment at multiple cellular membranes. During primary envelopment, the nucleocapsids bud through the inner nuclear membrane into the perinuclear space, thereby acquiring a lipid envelope. This process requires the virus to remodel the nuclear lamina. Cellular and viral kinases phosphorylate components of the nuclear lamina, leading to its disassembly. Two viral proteins, pUL31 and pUL34, then assemble into a cage-like nuclear egress complex that carries the virion across the inner nuclear membrane (Bigalke and Heldwein, 2015). There is some evidence that the ESCRT machinery may also be involved in facilitating the membrane fission step required to release the enveloped virion into the perinuclear space (Arii et al., 2018). The primary envelope then fuses with the outer nuclear membrane in a process termed de-envelopment. The viral glycoproteins are necessary for de-envelopment, likely because they mediate fusion with the outer nuclear membrane. Once in the cytoplasm, nucleocapsids associate with tegument proteins, which in the mature virion occupy the space between the nucleocapsid and the envelope. The nucleocapsids then bud into vesicles whose origins have been variously described as the trans-Golgi network, endosomes, or autophagic membranes. Recruitment to sites of secondary envelopment is promoted by tegument proteins through interactions with vesicle membranes and viral glycoproteins. Early-acting ESCRT proteins are not required for this process, but there are reports that ESCRT-III and VPS4 activity are required for virion release (Crump et al., 2007;Calistri et al., 2007;Pawliczek and Crump, 2009), and an exciting new structure of the herpes simplex virus pUL7:pUL51 complex, which is required for efficient virion assembly, reveals that the N-terminal region of pUL51 adopts a CHMP4B-like fold that may function as a viral ESCRT-III-like protein (Butt et al., 2020). Following membrane fission, the enveloped virions end up inside intracellular vesicles and are released into the extracellular space when the vesicles fuse with the plasma membrane.
In a related process, flaviviruses and hepaciviruses are released through the secretory pathway (Chatel-Chaix and Bartenschlager, 2014;Falcon et al., 2017;Gerold et al., 2017). Both genera of viruses induce extensive remodeling of endoplasmic reticulum (ER) membranes to form replication compartments. These vesicle-like structures remain connected to the ER, enclose viral proteins and the viral genome, and serve as protected compartments where almost all steps of the life cycle are carried out. Assembled nucleocapsids then bud into the ER lumen and are released through the secretory pathway.
Quasi-enveloped viruses
Historically, viruses have been divided into enveloped and non-enveloped classes based on the presence or absence of a host-derived membrane envelope, and it was thought that non-enveloped viruses were released exclusively by host cell lysis. This simple paradigm has now been overturned by studies showing that many different classes of non-enveloped viruses can acquire host-derived lipid envelopes and exit cells within vesicles. These extracellular vesicles resemble exosomes, and viruses that use this egress method are termed "quasi-enveloped" (Feng et al., 2014).
The picornavirus hepatitis A virus (HAV) was the first virus definitively shown to be quasi-enveloped (Feng et al., 2013), and HAV still serves as a paradigm for the process. Quasi-enveloped HAV particles (eHAV) are released in exosome-like vesicles that typically contain 1-4 particles per vesicle. These vesicles are formed when HAV capsids bud into endosomes in an ESCRT-dependent manner. To promote budding, the VP2 capsid protein recruits ALIX, apparently using tandem YPX3L domains that become buried in the fully assembled virion (Gonzalez-Lopez et al., 2018;McKnight et al., 2017). Virion-containing multivesicular bodies then fuse with the plasma membrane and release eHAV particles into the extracellular space. There is now good evidence that this is the primary mode of HAV release from hepatocytes in vivo, and that HAV circulates in the blood exclusively within small vesicles (Feng et al., 2013). HAV can then shed its envelope in the biliary tract, which produces a non-enveloped particle that may be more stable in harsher environmental conditions (Feng et al., 2014). Importantly, HAV is highly infectious in both its enveloped and non-enveloped states.
The capacity for quasi-envelopment has since been described for several other viruses that were traditionally considered to be non-enveloped, including many other picornaviruses (Chen et al., 2015;Mutsafi and Altan-Bonnet, 2018), Hepatitis E virus (Qi et al., 2015), rotaviruses and noroviruses (Santiana et al., 2018). Furthermore, some picornaviruses, including poliovirus and coxsackievirus, differ from the HAV paradigm in that they form quasi-enveloped virions by subverting the autophagy pathway. In these cases, double-membraned autophagosomes engulf multiple naked viral particles, which then release quasi-enveloped viruses when the outer autophagosomal membrane fuses with the plasma membrane (Mutsafi and Altan-Bonnet, 2018;Bird et al., 2014). A final variation on this theme is the exosomal transfer of viral nucleic acids between cells, which apparently can, in some viruses like hepatitis C, spread productive infections without requiring full viral assembly (Ramakrishnaiah et al., 2013;Bukong et al., 2014).
The membrane appears to perform several important functions for quasi-enveloped viruses, including protecting the capsid from antibody-mediated neutralization (Feng et al., 2013) and clustering multiple virions together so that they can enter target cells as a swarm or "quasi-species" that can cooperate genetically through cross-complementation. The latter activity may be most important for enteroviruses, whose larger vesicles can each contain tens or even hundreds of viral particles (Chen et al., 2015;Santiana et al., 2018).
The outer membranes of quasi-enveloped viruses lack viral glycoproteins, and therefore cannot fuse with target cell membranes. Instead, eHAV particles are taken up into the host cells by endocytosis and trafficked toward the lysosome, where the membrane is degraded, and the released naked virions can cross into the cytoplasm by disrupting the endolysosomal membrane (Rivera-Serrano et al., 2019).
Maturation
Most enveloped viruses undergo additional maturation steps during and after budding. Before maturation, the virion functions as an assembly machinery that can package components and leave the producer cell. Conformational changes, typically triggered by proteolytic cleavage or pH changes, then convert the virion into a particle that is capable of entering and replicating in a new target cell (Veesler and Johnson, 2012;Steven et al., 2005).
In the case of HIV-1, the viral protease (PR) is activated by autoproteolysis as the virus assembles and buds, and it cleaves the Gag polyprotein at five different sites, producing three new proteins (MA, CA, and NC) and three smaller peptides (SP1, SP2, and p6). Gag processing drives a series of major rearrangements in which the CA protein forms a conical internal capsid that surrounds the viral RNA in complex with the NC protein and viral enzymes. Gag cleavage is a sequential, ordered process, and each processing event appears to perform a different function. Cleavage at the SP1-NC junction releases the NC-RNA complex to condense at the center of the virion, cleavage at the MA-CA junction promotes folding of the CA N-terminus into a β-hairpin that will ultimately form an NTP-permeable pore in the assembled capsid, and cleavage at the CA-SP1 junction destabilizes the immature lattice and promotes formation of the mature capsid lattice. The NC-SP2 and SP2-p6 cleavages are also required for infectivity, as is cleavage of the longer Gag-Pol polyprotein, which liberates the viral enzymes. The mature conical capsid is a fullerene cone, with a curved hexagonal lattice comprising CA hexamers and with the cone ends closed through the incorporation of 12 CA pentamers. CA hexamers are stabilized by binding IP6 (Dick et al., 2018;Mallery et al., 2018), and differential placement of the hexamers and pentamers produces a variety of related capsid structures that each differ slightly in length and shape (Sundquist and Krausslich, 2012;Mattei et al., 2016;Freed, 2015). Viral glycoproteins and their fusion peptides that enable entry into target cells must also typically be proteolytically processed to be functional. For example, the HIV-1 Env glycoprotein is synthesized as a polyprotein precursor (gp160), which is inserted into the endoplasmic reticulum membrane co-translationally. Env is glycosylated and then proteolytically cleaved by the host Golgi-associated protease furin as it traffics through the secretory pathway, producing the mature surface gp120 and transmembrane gp41 glycoprotein subunits, which remain non-covalently associated as heterotrimeric spikes. Proteolytic processing exposes the fusion peptide at the gp41 N-terminus and is required for fusogenic activity (Checkley et al., 2011).
In other viruses, proteolytic activation of viral fusion proteins can occur following entry into the target cell. Activation of the EBOV glycoprotein is a particularly well-understood case. EBOV particles associate with the host cell surface through interactions of host receptors with glycans on the viral glycoprotein GP and with phosphatidylserine in the viral envelope. After internalization through macropinocytosis, endosomal cysteine proteases such as cathepsins L and B proteolytically process GP to remove a mucin-like subdomain and the glycan cap and expose the receptor-binding site (RBS). The RBS binds the late endosomal/lysosomal protein NPC1, which induces a conformational change in GP, insertion of a fusion loop into the endosomal membrane, fusion of the viral and endosomal membranes, and release of the nucleocapsid into the cytoplasm (Lee and Saphire, 2009;Carette et al., 2011;Cote et al., 2011;Gong et al., 2016;Wang et al., 2016).
Cell-to-Cell Transmission
After budding, viruses can spread in two different ways: through cell-free transmission and cell-to-cell transmission. Cell-free virions diffuse freely through the extracellular space, and even between organisms, before entering target cells. This process can promote dissemination over long distances, to new tissues, and between hosts. However, untargeted diffusion through aqueous media is relatively inefficient, and free viruses are susceptible to immune recognition. In contrast, viral spread through direct sites of cell-to-cell contact increases transmission efficiency and can help evade antibody recognition. To promote cell-to-cell transmission, viruses often subvert cellular structures that are normally used for cell-cell communication or cargo transfer.
Retroviruses such as HIV-1 and MLV actively promote the formation of adhesive structures between donor and target cells. These stable contact sites are termed virological synapses because they resemble the immunological synapses that mediate antigen presentation, and they even employ some of the same molecular components. Virological synapse formation requires interactions between the viral glycoprotein on the donor cell and its cognate target cell receptor. Coreceptors and adhesion molecules are then recruited to stabilize and further organize these contact sites, and the producer cell cytoskeleton is repolarized toward the synapse to promote directional viral assembly and release. Virions bud directly into the intersynaptic space and are transferred efficiently to the closely apposed target cell (Agosto et al., 2015;Bracq et al., 2018;Dufloo et al., 2018;Nejmeddine and Bangham, 2010).
Other viruses hijack existing cell-cell contacts for transmission. For example, neurotropic viruses such as herpesviruses and rabies viruses are transported along axons and spread across synaptic contacts (Koyuncu et al., 2013). Viruses can also achieve targeted release by exploiting membrane protrusions such as nanotubes and filopodia, which normally transmit information and cargoes between cells (Agosto et al., 2015;Bracq et al., 2018;Dufloo et al., 2018).
Conclusions
The principles of enveloped virus budding are remarkably conserved between different virus families, presumably owing to evolutionary history and common functional requirements. Viral egress is typically orchestrated by multifunctional structural proteins that recruit components, assemble the virion, bend host membranes, and facilitate membrane fission. In many, but not all cases, the cellular ESCRT pathway is recruited to mediate the final membrane fission step. Recent studies have also revealed that many traditional "non-enveloped" viruses can be released within vesicles as quasi-enveloped viruses and that viruses frequently alter cellular pathways to promote directional release and synapse formation.
Although the general strategies for enveloped virus egress are increasingly well understood, important challenges remain, including characterizing the release mechanisms of ESCRT-independent viruses, the biology and entry mechanisms of quasi-enveloped viruses, and the molecular mechanisms and pathogenesis associated with cell-to-cell viral spread. These and other advances will help reveal the best approaches for inhibiting virus release for therapeutic benefit and harnessing release activities in new systems that can be used to deliver biomolecular cargoes into target cells in vivo.
"year": 2021,
"sha1": "162d4fb4e8ab8fb119ef898b1173b61e4049c239",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "30fb22af6b830e09930b8294c0a3d312d5590fbf",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
Effects of optineurin siRNA on apoptotic genes and apoptosis in RGC-5 cells
Purpose Optineurin is a pathogenic gene associated with primary open angle glaucoma (POAG), in which the retinal ganglion cells (RGCs) are targeted. However, the functions of optineurin, particularly in RGCs, are currently not clear. We introduced optineurin siRNA into cultured retinal ganglion cell 5 (RGC-5) and PC12 cells to determine the cellular and molecular mechanisms underlying the role of optineurin in POAG. Methods We constructed four optineurin siRNA–expressing plasmids and transiently transfected them into PC12 cells. Quantitative real-time PCR, western blot, and fluorescence microscopy were used to determine optineurin expression and to select the most effective optineurin siRNA for constructing stably transfected RGC-5 and PC12 cells. Dimethylthiazolyl diphenyl tetrazolium bromide (MTT) assay and flow cytometry were applied to investigate the role of optineurin siRNA in cell growth and apoptosis. Gene microarray and quantitative real-time PCR were used to screen and validate differentially expressed genes in optineurin siRNA transfected PC12 and RGC-5 cells. Results siRNA effectively downregulated optineurin expression in stably transfected RGC-5 and PC12 cells. Optineurin siRNA significantly inhibited cell growth and increased apoptosis in RGC-5 and PC12 cells. Microarray analysis identified 112 differentially expressed genes in optineurin siRNA transfected RGC-5 cells. Quantitative real-time PCR and western blot confirmed that the expression of brain-derived neurotrophic factor (Bdnf), neurotrophin-3 (Ntf3), synaptosomal-associated protein 25 (Snap25), and neurofilament light polypeptide (Nefl) was significantly downregulated in RGC-5 and PC12 cells transfected with optineurin siRNA. Conclusions Our study suggested that optineurin downregulation by siRNA in RGCs is an in vitro model for studying the mechanisms of optineurin's effects on POAG. Neuroprotective factor and axonal transport genes may be involved in the development of POAG and could be novel targets for treating POAG due to optineurin mutation.
Glaucoma is the leading cause of irreversible blindness worldwide [1][2][3], and most of the cases are primary open angle glaucoma (POAG) [1], which is characterized by optic disc cupping and irreversible loss of retinal ganglion cells [2,3]. However, the pathogenic mechanism of POAG is not clear.
Genetic changes play an important role in the pathogenesis of glaucoma [4]. With the development of molecular genetics, in 2002 a new gene, designated optineurin [5] (optic neuropathy inducing protein), was identified as being associated with POAG. However, the gene's function is unclear. It has been demonstrated that optineurin binds to myosin VI in the Golgi complex and plays a crucial role in Golgi ribbon formation and exocytosis [6]. There are still arguments regarding whether optineurin inhibits or promotes apoptosis. Zhu et al. [7] found that optineurin protects cells by maintaining the nuclear factor-kappaB (NF-κB) activation induced by tumor necrosis factor (TNF)-alpha. However, optineurin overexpression inhibited the protective effects of E3-14.7K on TNF-alpha receptor 1-induced cell death. Recently, a study revealed that optineurin interacted with metabotropic glutamate receptors (mGluRs) and played an important role in antagonizing agonist-stimulated mGluR1a signaling [8]. Weisschuh et al. [9] used RNA interference to silence optineurin in HeLa cells and, using microarray technology, found a series of differentially expressed genes.
Although retinal ganglion cells (RGCs) are the target cells of glaucoma, few studies regarding the impact of optineurin on RGCs have been conducted. Therefore, in the present study, we used RNA interference technology to downregulate the expression of optineurin in PC12 and RGC-5 cells, a pathologic condition mimicking the POAG caused by optineurin mutation. Dimethylthiazolyl diphenyl tetrazolium bromide (MTT) assay and flow cytometry were applied to determine the effects of optineurin on proliferation and apoptosis in RGC-5 cells. To study the underlying mechanisms, we screened differentially expressed genes with gene microarray technology and validated them with quantitative real-time PCR and western blot. Our findings will help us learn the functions of optineurin. They might also be useful for treating POAG due to optineurin mutation.
Cell culture: RGC-5 and PC12 cells were cultured in medium (Invitrogen Gibco, Carlsbad, CA) supplemented with 10% fetal bovine serum, 100 μg/ml penicillin, and 100 μg/ml streptomycin. Routine testing confirmed that the cells were free of mycoplasma and viral contaminants during the entire study period.
Construction of optineurin siRNAs and screening by transient transfection:
We designed four siRNA targeting sequences according to the rat optineurin reference gene sequence (GenBank NM_145081.3) using the siRNA Target Finder Program (Silencer® Pre-designed siRNA, Ambion, Foster City, CA). BLAST searches were performed with the selected siRNA sequences against expressed sequence tag libraries to ensure that only a single gene (optineurin) was targeted. One scrambled siRNA (Optineurin-NC) was used as a negative control. The sequences are described in Table 1. Purified fragments were digested with BamHI/BglII and inserted into the pGPU6/GFP/Neo vector (GenPharma, Placentia, CA). All constructs were verified by sequencing. The resultant plasmids containing siRNA 1, 2, 3, and 4 and the negative control sequence were designated sihoptineurin-1, sihoptineurin-2, sihoptineurin-3, sihoptineurin-4, and sihoptineurin-NC, respectively.
Before transfection, cells were seeded into six-well plates at 80% confluency for 12 h. Cell transfection was performed with Lipofectamine 2000 (Invitrogen, Carlsbad, CA) according to the manufacturer's instructions: 4.0 µg plasmid and 10 µl Lipofectamine 2000 were used in each well.
To efficiently knock down optineurin, cells were transfected twice with siRNA, on days 1 and 3. Quantitative real-time PCR was used to select the most effective siRNA (sihoptineurin-3) from the four candidates, which was then used to establish stably transfected RGC-5 and PC12 cells.

Stable transfection of siRNA: Cells were cultured in six-well plates. Plasmid sihoptineurin-3 and plasmid sihoptineurin-NC were transfected with Lipofectamine 2000 as described above. Plasmid sihoptineurin-NC was used as a negative control. The cells were screened with G418 for 4 weeks, and several colonies were obtained. The same concentration of G418 was used to continue screening for another 4 weeks to obtain positive monocolonies. Surviving colonies were isolated and expanded. We examined the expression of optineurin in the stable cell lines with fluorescence microscopy and western blot. We successfully established optineurin siRNA stable expression cell lines in PC12 and RGC-5 cells. We also used electron microscopy to observe the changes in the stably transfected cells.

Transmission electron microscopy: Cells were first fixed in 2.5% glutaraldehyde for 1-4 h at 4 °C, and then fixed in 1% osmium tetroxide for 1 h at room temperature after a 2 h wash in 0.1 M phosphate buffer. After the cells were briefly washed with distilled water, routine dehydration was performed. The plastic embedded mount was prepared as described previously [10]. Ultrathin sections (100 nm) were cut on a Leica Ultracut microtome (Leica Microsystems, Wetzlar, Germany) and examined by transmission electron microscopy.
Quantitative real-time reverse transcriptase-PCR analysis:
Total RNA was extracted using TRIzol (Invitrogen) according to the manufacturer's instructions; TRIzol was added at a ratio of 1 ml per 10 cm² of cells. About 1 µg of total RNA from each sample was reverse-transcribed into cDNA with the RevertAid™ First Strand cDNA Synthesis Kit (Fermentas, Foster City, CA) in a total volume of 20 μl and stored at −20 °C. Real-time quantitative PCR was used to determine the optineurin, brain-derived neurotrophic factor (Bdnf), neurotrophin-3 (Ntf3), and synaptosomal-associated protein 25 (Snap25) transcripts using a sequence-detection system (GeneAmp 5700; Applied Biosystems, Inc. [ABI], Foster City, CA). PCR reactions were performed in a 50 µl reaction mixture containing 25 µl master PCR mix (SYBR Green PCR Master Mix; ABI), 5 pM primer pairs, and 1 µl cDNA sample.
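The paper does not spell out its quantification model, but reference-gene-normalized real-time PCR data of this kind are commonly analyzed with the 2^-ΔΔCt method. The sketch below assumes that method; the Ct values are placeholders, not data from the study.

```python
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Relative mRNA level (treated vs. control), normalized to a reference
    gene such as GAPDH, via the standard 2^-ddCt calculation."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Placeholder Ct values for illustration only (not data from the study):
fold = relative_expression(ct_target_treated=24.8, ct_ref_treated=18.1,
                           ct_target_control=24.0, ct_ref_control=18.1)
print(f"Expression relative to control: {fold:.2%}")  # ddCt = 0.8 -> ~57%
```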
Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) was used as an internal control. Reactions were performed under the following conditions: 10 min at 95 °C for initial denaturation, followed by 40 cycles of denaturing for 5 s at 95 °C and annealing for 31 s at 60 °C. Experiments were repeated 3 times (n=3). All the primers used in the reactions are described in Table 2.
Protein isolation and western blot analysis: About 100 μl of cell lysis buffer (60 mM Tris, 2% sodium dodecyl sulfate (SDS), 100 mM 2-mercaptoethanol, and 0.01% bromophenol blue) was added to each of the tissue samples. An equal amount of protein (10 μg) was loaded onto 10% sodium dodecyl sulfate-polyacrylamide gels, and electrophoresis was performed for 1 h. Following separation and transfer onto a polyvinylidene difluoride membrane (Invitrogen), the blots were incubated with a 1:500 dilution of optineurin or GAPDH primary antibody (Abcam, Cambridge, MA) at 4 °C overnight. Horseradish peroxidase (HRP)-conjugated anti-rabbit IgG secondary antibody was incubated at room temperature for 2 h at a 1:5,000 dilution. An Enhanced Chemiluminescence (ECL) kit (Amersham Bioscience, Piscataway, NJ) was used to detect blotting signals following the manufacturer's instructions: Reagents I and II were mixed 1:1 and then diluted 10-fold. The amount of GAPDH in each sample was measured as an internal control. Gel-Pro Analyzer software (Media Cybernetics) was used to analyze the data.
The same western blot analysis protocol was used to detect Bdnf, Ntf3, Snap25, and Nefl protein expression in stably transfected RGC-5 cells. All of the antibodies were purchased from Abcam. Experiments were repeated 3 times per protein per treatment group (n=3).
3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay: Approximately 200 μl of MTT (Sigma-Aldrich, St. Louis, MO) solution (5 mg/ml) was added to the RGC-5 and PC12 cells at 24 h, 48 h, and 72 h, respectively, after cell passage to 70% confluence. Cell plates were then wrapped with aluminum foil and incubated for 4 h at 37 °C. Cell plates were shaken for 15 min after the MTT was discarded, and the precipitate was solubilized in 100% DMSO (Sigma) using 200 μl/well. The absorbance of each well was measured on a microplate reader (Wellwash MK2; Labsystems Dradon, Helsinki, Finland) at a wavelength of 490 nm. Experiments were repeated 3 times (n=3).

Flow cytometry: Cell apoptosis was assessed with flow cytometry using an Annexin V-FITC/PI staining kit (PharMingen, Becton Dickinson Co., San Diego, CA). After 48 h of transfection, we harvested cells, washed them twice in phosphate buffer solution (PBS; NaCl 40.0 g, KCl 1.0 g, anhydrous KH2PO4 1.0 g, anhydrous Na2HPO4 4.6 g, distilled water to make up to 5 l; 4 °C), resuspended them in binding buffer (10 mM HEPES/NaOH pH 7.4, 140 mM NaCl, 2.5 mM CaCl2), stained them with fluorescein isothiocyanate-conjugated annexin V (Annexin V-FITC), mixed gently and incubated for 15 min at room temperature in the dark, and then washed them with binding buffer and analyzed them with flow cytometry (FACSCalibur; Becton-Dickinson) using CellQuest software (BD, San Jose, CA). Experiments were repeated 3 times (n=3).

Microarray analysis: An Agilent dual-channel cDNA microarray was used for these experiments. Total RNA was isolated from transfected and untransfected samples with TRIzol as described previously. Total RNA was further purified using a QIAGEN RNeasy® Mini Kit (Qiagen, Valencia, CA). A Low RNA Input Linear Amplification Kit (Agilent Technologies, Wilmington, DE) was used for cDNA and cRNA synthesis. aaUTP (Ambion) was used to create amplified and labeled cRNA with T7 RNA polymerase, which also amplified the target material and incorporated Cy3- or Cy5-labeled CTP. cRNA from the stably transfected samples was amplified with the incorporation of Cy5-CTP, while cRNA from the control samples was labeled using Cy3-CTP and then purified with a QIAGEN RNeasy® Mini Kit. The Cy3- and Cy5-labeled cRNA samples were maintained in fragmentation buffer at 60 °C for 30 min. Hybridization was completed for each sample with a whole separate Genome (4×44 K) microarray (Agilent Technologies) at 65 °C in a hybridization oven overnight. We then washed, stabilized, dried, and immediately scanned the hybridization slides using the Agilent G2565BA Microarray Scanner System (v 7.0). A single log2 ratio (SLR) was calculated between transfected and normal cells.
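As a rough illustration of how the SLR screen can be applied, the sketch below computes log2 Cy5/Cy3 ratios and applies the thresholds used later in the Results (|SLR| ≥ 1.0 for differential expression, |SLR| > 2.0 for the function-catalog subset); the probe intensities are invented placeholders, not data from the study.

```python
import math

def slr(cy5_intensity, cy3_intensity):
    """Single log2 ratio (SLR) of transfected (Cy5) over control (Cy3) signal."""
    return math.log2(cy5_intensity / cy3_intensity)

# Invented placeholder probe intensities (transfected, control) -- not real data.
probes = {"Bdnf": (210.0, 980.0), "Snap25": (55.0, 710.0), "Actb": (800.0, 820.0)}

ratios = {gene: slr(cy5, cy3) for gene, (cy5, cy3) in probes.items()}
differential = {g: r for g, r in ratios.items() if abs(r) >= 1.0}  # |fold| >= 2
catalog = {g: r for g, r in ratios.items() if abs(r) > 2.0}        # |fold| > 4
print("differentially expressed:", differential)
print("function-catalog subset:", catalog)
```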
Effects of optineurin siRNA on cell growth: As shown by the MTT assay, optineurin siRNA significantly inhibited the growth of stably transfected RGC-5 and PC12 cells compared with the negative control and untransfected cells.
Effects of optineurin siRNA on cell apoptosis: Flow cytometry was used to identify cell apoptosis in this study. Flow cytometry showed that apoptosis occurred in 30.41%±2.41% and 9.91%±2.81% (mean±SD) of sihoptineurin-3 stably transfected RGC-5 and PC12 cells, respectively, compared with 11.17%±2.77% and 8.09%±3.86% in RGC-5 NC and untransfected control cells, and 4.36%±0.87% and 3.55%±0.55% in PC12 NC and untransfected control cells. There was a significant difference between the experimental groups and the controls (p<0.01 in RGC-5 and p<0.05 in PC12, n=3; Figure 3).
Electron microscopy showed nuclear heterochromatin margination, partial membrane dissolution, rough endoplasmic reticulum expansion, mitochondrial reduction (Figure 4A), and mitochondrial outer membrane damage, partial cell membrane dissolution, and apoptotic bodies (Figure 4B) in the sihoptineurin-3 stably transfected RGC-5 cells. However, in the negative control and untransfected cells, we observed nuclear membrane integrity, with organelles enriched in the cytoplasm (Figure 4C,D). These data indicated that optineurin is related to cell apoptosis.

Microarray analysis: To investigate the molecular mechanisms underlying optineurin's effects on cell growth and apoptosis, gene microarray analysis was applied to screen for differentially expressed genes in RGC-5 cells. One hundred twelve genes were identified as being upregulated by ≥1.0 SLR (fold change >2) or downregulated by <−1.0 SLR (fold change <−2) in RGC-5 stably transfected cells versus control. Genes with SLR>2.0 (fold change>4) or SLR<−2.0 (fold change<−4) were selected for gene function catalog analysis and were found to be enriched in apoptosis, growth, and axonal transport (Appendix 1).
In the stably transfected RGC-5 cells, western blot analysis demonstrated that the expression of Bdnf was 2.06-fold, Ntf3 1.28-fold, Snap25 3.119-fold, and Nefl 2.16-fold lower than that of normal cells (p<0.05, n=3; Figure 6).
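The fold changes quoted above come from GAPDH-normalized densitometry. A minimal sketch of that ratio-of-ratios calculation is given below; the band intensities are placeholders chosen only to reproduce an approximately 2-fold reduction, not measurements from the study.

```python
def relative_protein(band_treated, gapdh_treated, band_control, gapdh_control):
    """GAPDH-normalized band intensity of siRNA-treated cells relative to
    control, mirroring the optical densitometric analysis described above."""
    return (band_treated / gapdh_treated) / (band_control / gapdh_control)

# Placeholder densitometry readings (arbitrary units), not from the paper:
rel = relative_protein(band_treated=0.42, gapdh_treated=1.00,
                       band_control=0.87, gapdh_control=1.00)
print(f"Relative level: {rel:.2f} (i.e., {1 / rel:.2f}-fold lower than control)")
```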
DISCUSSION
After optineurin was identified in 1998 [11], Rezaie et al. [5] found that the incidence of POAG resulting from optineurin gene mutations was higher than in the control group. Recent studies [12,13] revealed that optineurin genetic mutations and single nucleotide polymorphisms (SNPs) were associated with open angle glaucoma. However, the mechanisms have not been elucidated. To determine the comprehensive molecular mechanisms and signal transduction pathways of optineurin, Weisschuh et al. [9] used RNA interference to silence optineurin in HeLa cells and, using microarray technology, found a series of differentially expressed genes. However, retinal ganglion cells are the target cells of glaucoma, and optineurin is mainly located in the optic nerve [5,14]. Therefore, we chose RGC-5 and PC12 cells as our research targets and used siRNA to knock down the expression of optineurin in this study. RGC-5 cells differ from normal RGCs in electrophysiology and are more appropriately regarded as neuronal retinal precursors [15], but they are similar in terms of molecular and genetic changes during apoptosis and responses to neuroprotective agents [16][17][18][19]. Thus, RGC-5 cells are widely used as a model of RGCs. PC12 cells were also applied in this study, because they are widely used as a tissue culture model to study neuronal development and function. We constructed four optineurin siRNA plasmids and transiently transfected them into PC12 cells. Quantitative real-time PCR and western blot demonstrated that all four siRNAs significantly decreased optineurin expression in PC12 cells. We selected the most effective one, sihoptineurin-3, to establish stably transfected RGC-5 and PC12 cells. The origin of the RGC-5 cell line has been controversial. RGC-5 cells were found to be of mouse origin [15] by sequencing a region of the nuclear Thy1 gene and the d-loop and tRNA(Phe) gene in mitochondrial DNA. However, several other studies [20][21][22] have found RGC-5 cells to be of rat origin. We therefore designed the optineurin-3 siRNA according to the rat optineurin cDNA sequence and found a significant effect of the siRNA on the expression of optineurin, as demonstrated by Q-PCR and western blot. Thus, the significant effect of the optineurin-3 siRNA makes the origin of RGC-5 no longer an issue in this study, because it already allowed us to investigate the consequences of inhibiting optineurin expression in the cell line.

Figure 3 (caption): Effects of optineurin siRNA on apoptosis of stably transfected RGC-5 cells and PC12 cells as measured with flow cytometry. LR denotes early apoptosis, and UR is advanced stage apoptosis. (*p<0.05, **p<0.01), n=3. Apoptosis %=Gated (UR+LR)%. The apoptosis rate of the experimental groups is markedly higher than that of the sihoptineurin-NC and blank groups.
In the other experiments, the MTT assay and flow cytometry showed that optineurin knockdown significantly inhibited cell growth and promoted apoptosis. There have been many controversies regarding whether optineurin promotes or inhibits cell apoptosis. Word et al. [23] speculated that optineurin was an important component of the TNF-alpha signaling pathway and activated apoptosis. Kroeber et al. [24] showed that overexpression of wild-type optineurin had no significant effect on lens epithelial cell apoptosis induced by TGF-beta1 in mice. Evans and Hollenberg [25] speculated that overexpression of optineurin might not damage trabecular meshwork cells, but rather that optineurin is a protective gene. Koga et al. [26] found that apoptotic activity was increased by forced expression of wild-type optineurin in RGC-5 cells. Park et al. [21] demonstrated that overexpressed wild-type optineurin impaired transferrin (Tf) uptake in RPE and RGC-5 cells, which could induce apoptosis; nevertheless, E50K optineurin induced more dramatic effects than the wild type. Our results suggest that optineurin mutations may lead to increased apoptosis and decreased cell growth in retinal ganglion cells, which may result in POAG. The differences among these results are likely due to the different cell types and experimental conditions used. Perhaps optineurin plays different roles in different stages of apoptosis. Further studies are needed.
To further determine the molecular mechanisms underlying the effects of optineurin on cell apoptosis, we used microarray technology. We found that many signaling pathways were involved, such as TNF-alpha/nuclear factor-kappaB [7,27] and Ifn [28], as previously reported. In our study, 112 differentially expressed genes were identified in RGC-5 cells, of which Bdnf, Nefl, Ntf3, and Snap25 were validated in PC12 cells with quantitative real-time PCR and in RGC-5 cells with western blot analysis. These results were not obtained in optineurin knockdown HeLa cells [9], owing to the different cell types and the stably transfected cells that we used for microarray analysis. For the first time, we have revealed that optineurin may regulate cell growth, apoptosis, and the transport of neurotransmitters through Bdnf, Ntf3, Nefl, and Snap25.
Snap25 and Nefl are usually involved in fast and slow axonal transport, respectively [29,30], and Bdnf and Ntf3 are neuroprotective factors that can influence cellular proliferation and apoptosis [31,32]. Bdnf is a neuroprotectant following optic nerve injury [33] and can support the survival of RGCs in vivo [34][35][36]. Ntf3, which acts on a subset of neural crest and placode-derived neurons, plays a role in increasing survival and promoting neurite outgrowth. Ntf3 regulates the proliferation of cultured neural crest progenitor cells grown in a serum-free defined medium [37]. In addition, Ntf3 also acts as a survival factor for differentiated neurons in the retina and promotes neuronal differentiation [38]. Both Bdnf and Ntf3 might play roles in the development of the rat and chick retina [39,40]. Recent studies confirmed that Bdnf and Ntf3 could be used to treat RGC injury [36,41,42], since they promote the survival of injured cells. In our research, we found that optineurin knockdown significantly downregulated Bdnf and Ntf3 expression, which is probably related to the survival and apoptosis of retinal ganglion cells. This indicates that reduced expression of neuroprotective factors, such as Bdnf and Ntf3, may be one of the reasons that optineurin mutations lead to POAG.

Figure 5 (caption): Validation of microarray data in stably transfected PC12 cells with quantitative real-time PCR. Bdnf was 57.52% of the expression in the Blank cells, Ntf3 was 62.42%, Snap25 was 36.40%, and Nefl was 55.66% (*p<0.05 versus control, n=3).

Figure 6 (caption): Western blot analysis examined the changes in selected protein expression in RGC-5 cells. Protein lysates from RGC-5 cells were probed with antibodies to Bdnf, Ntf3, Snap25, Nefl, and GAPDH. Optical densitometric analysis was performed to calculate the relative protein amounts of siRNA-treated cells compared to Blank cells. In the sihoptineurin-3 transfected RGC-5 cells, the expression of Bdnf was 2.06-fold, Ntf3 1.28-fold, Snap25 3.119-fold, and Nefl 2.16-fold lower than that of Blank cells, respectively (p<0.05; *p<0.05, **p<0.01), n=3.
Snap25 is located in the presynaptic plasma membrane and plays an important role in the synaptic vesicle membrane docking and fusion pathway [43]. Brownlees et al. demonstrated that Nefl proteins affect the axonal transport of neurofilaments and neurofilament assembly in cultured mammalian cells and neurons [29]. Snap25 and Nefl are usually regarded as mediators of fast and slow transport, respectively [30,44]. In this study, we found that knockdown of optineurin in RGC-5 cells downregulated the expression of Snap25 and Nefl. This result suggests that optineurin might affect the general rates of slow and fast axonal transport in vitro and that optineurin is an essential component for axonal transport. Optineurin might affect neurotransmitter and neurotrophic factor expression and transport during RGC apoptosis. We speculate that optineurin could be a protective gene against RGC injury, and that optineurin plays an important role in the axonal transport process.
In the future, further in vitro and in vivo studies are required to show whether optineurin causes glaucoma via regulating Bdnf, Ntf3, Snap25, and Nefl.
"year": 2011,
"sha1": "110f8b7ad771c8b3289dc58e9fec3dab6061d9ff",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "110f8b7ad771c8b3289dc58e9fec3dab6061d9ff",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Graph Laplacians on Shared Nearest Neighbor graphs and graph Laplacians on $k$-Nearest Neighbor graphs having the same limit
A Shared Nearest Neighbor (SNN) graph is a type of graph construction using shared nearest neighbor information, which is a secondary similarity measure based on the rankings induced by a primary $k$-nearest neighbor ($k$-NN) measure. SNN measures have been touted as being less prone to the curse of dimensionality than conventional distance measures, and thus methods using SNN graphs have been widely used in applications, particularly in clustering high-dimensional data sets and in finding outliers in subspaces of high dimensional data. Despite this, the theoretical study of SNN graphs and graph Laplacians remains unexplored. In this pioneering work, we make the first contribution in this direction. We show that large scale asymptotics of an SNN graph Laplacian reach a consistent continuum limit; this limit is the same as that of a $k$-NN graph Laplacian. Moreover, we show that the pointwise convergence rate of the graph Laplacian is linear with respect to $(k/n)^{1/m}$ with high probability.
Introduction
Graph Laplacians are undoubtedly a ubiquitous tool in machine learning. When a data set $X = \{x_1, \cdots, x_n\} \subset \mathbb{R}^d$ is sampled from a data generating measure $\mu$ supported on a Riemannian submanifold $\mathcal{M}$, its neighborhood graphs are treated as discrete approximations of $\mathcal{M}$, and the spectra of the resulting graph Laplacians are used to extract its intrinsic structural information. For example, if the manifold is of a low dimension $m$, then this dimension can be detected by the first $m$ eigenvectors corresponding to the $m$ largest eigenvalues of the graph Laplacians. Even though in practice the underlying manifold (and the associated measure) is technically a priori unknown or unseen, this so-called manifold assumption [10] is fairly common, since strong dependencies are often exhibited between the individual feature vectors $x_i$. It has become the basis for many dimension reduction methods using spectra of graph Laplacians [4,14,15]. Methods using graph Laplacians are remarkably successful in semi-supervised learning [44,45,3], because graph Laplacians are generators of the diffusion process on graphs and are therefore suitable for studying label propagation, and in spectral clustering [42,37], because of the special properties possessed by their eigenvectors [36]. Similarly, (weighted) Laplace-Beltrami operators are the generators of the diffusion process on manifolds, and their spectra capture important geometric properties of the manifolds [11]. Since point clouds arising as discretizations of Riemannian manifolds are distinguishable from arbitrary ones, it is reasonable to presume that large sample size asymptotics of their graph Laplacians mimic the behavior of the (weighted) manifold Laplace-Beltrami operators. It turns out that a passage from the discrete operator to the continuum one depends on (1) the graph construction, (2) the Laplacian construction, and (3) the rate at which the graph connectivity parameters go to zero. We discuss this matter in more detail below, starting with an introduction to two well-known graph constructions and the Shared Nearest Neighbor graphs.
Roughly speaking, a graph on $X$ is built on a similarity measure. In many graphs, this similarity measure is both primary and distance-based. For example, in an $\epsilon$-graph, two points $x_i, x_j \in X$ are connected if their Euclidean distance in $\mathbb{R}^d$ is at most some chosen $\epsilon > 0$. In another, closely related, directed $k$-Nearest Neighbor ($k$-NN) graph, a directed edge is drawn from $x_i$ to $x_j$ if $x_j$ is among the $k$ nearest neighbors of $x_i$, for some $k \in \mathbb{N}$ of choice. Both graphs are therefore proximity graphs [41], with the $k$-NN graph additionally connecting vertices by ranking their distances. However, it has been known that similarity measures based on distances are sensitive to variations within a data distribution, or to the ambient dimension $d$ (in fact, questions were raised as to whether the concept of the nearest neighbor is meaningful in high dimensions [6]). The need for a similarity measure that is better at handling high dimensional data led to the invention of secondary similarity measures. A special example of graphs constructed from this type of similarity measure is the Shared Nearest Neighbor (SNN) graph, the subject of our investigation. Typically, in an SNN graph, once the primary similarity of $k$ nearest neighbors is used (where the $k$ nearest neighbors of each point $x_i$ are already determined), a secondary similarity is applied by ranking the affinity induced by the primary one: $x_i, x_j$ are connected if $x_i, x_j$ share a $k$-nearest neighbor in common. More concretely, let $NN(x) \subset X \setminus \{x\}$ be the set of $k$ nearest neighbors of $x$ and $\mathrm{card}(NN(x))$ denote its cardinality. Then $x_i, x_j$ are SNN neighbors if $\mathrm{card}(NN(x_i) \cap NN(x_j)) > 0$ (see Figure 1 for an illustration). An undirected edge is drawn between them with an edge weight of either the intersection size $\mathrm{card}(NN(x_i) \cap NN(x_j))$ or the cosine measure [32]
$$\mathrm{sim}_{\cos}(x_i, x_j) := \frac{\mathrm{card}(NN(x_i) \cap NN(x_j))}{\sqrt{\mathrm{card}(NN(x_i)) \, \mathrm{card}(NN(x_j))}}, \qquad (1.1)$$
aptly named since it is equivalent to the cosine of the angle between the zero-one set membership vectors of $NN(x_i)$ and $NN(x_j)$. This measure (1.1) was often used as a local density for clustering [17,30]. It has been reported, and empirically confirmed in [32], that SNN measures are stable and less prone to the curse of high dimensions than conventional distance measures. As such, they have found use in clustering algorithms for large or high dimensional data sets [17,26,33,30,31] as well as in finding outliers in high dimensions [34]. Despite their popularity with the computing community [35,43,18,20,2], the theoretical understanding of SNN graphs and their Laplacians remains lacking, to the author's knowledge.
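The claimed equivalence between (1.1) and a vector-space cosine is a one-line computation; the following LaTeX sketch spells it out, writing $v_i \in \{0,1\}^n$ for the indicator vector of $NN(x_i)$ (notation introduced here for illustration, not taken from the paper).

```latex
% Indicator-vector identity behind the cosine measure (1.1).
% Let v_i \in \{0,1\}^n have (v_i)_\ell = 1 iff x_\ell \in NN(x_i).
\begin{align*}
\langle v_i, v_j \rangle &= \mathrm{card}\big(NN(x_i) \cap NN(x_j)\big), \qquad
\|v_i\|_2 = \sqrt{\mathrm{card}\big(NN(x_i)\big)}, \\
\cos \angle (v_i, v_j)
  &= \frac{\langle v_i, v_j \rangle}{\|v_i\|_2 \, \|v_j\|_2}
   = \frac{\mathrm{card}\big(NN(x_i) \cap NN(x_j)\big)}
          {\sqrt{\mathrm{card}\big(NN(x_i)\big)\,\mathrm{card}\big(NN(x_j)\big)}}
   = \mathrm{sim}_{\cos}(x_i, x_j).
\end{align*}
```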
There are three main types of graph Laplacians studied so far in machine learning: normalized, unnormalized, and random walk Laplacians. Precise definitions will be given in Section 2.2. A plethora of work has been devoted to the pointwise consistency of graph Laplacians on $\epsilon$-graphs [38,29,40,5,25,9], and to a lesser extent, of $k$-NN graph Laplacians [9,12]. The key idea in all these pointwise convergence results is that the graph connectivity parameter must be kept small yet relatively fixed with respect to the sample size $n$ as $n \to \infty$. Notably, it was shown in [29] for $\epsilon$-graphs, where this parameter is precisely $\epsilon$, that if $\epsilon \to 0$ and $n\epsilon^{m+2}/\log n \to \infty$, then almost surely, for a non-boundary point $x \in \mathcal{M}$,
$$L^{\epsilon} f(x) \longrightarrow \Delta f(x) := -\frac{1}{2p(x)}\operatorname{div}\big(p^2 \nabla f\big)(x), \qquad (1.2)$$
where $L^{\epsilon}$ denotes the unnormalized $\epsilon$-graph Laplacian and $p$ the nonvanishing density of $\mu$. The optimal rate in (1.2) is $\epsilon(n) = O((\log n / n)^{1/(m+4)})$.
In [9], (1.2) was recovered for a compact, boundariless manifold $\mathcal{M}$. A consistency result for the undirected $k$-NN graph was also established in the same paper, where the authors showed that, with high probability, uniformly for every $x_i \in X$ and linearly in terms of $(k/n)^{1/m}$,
$$L_k f(x_i) \longrightarrow \Delta_{knn} f(x_i) := -\frac{1}{2p}\operatorname{div}\big(p^{1-2/m}\nabla f\big)(x_i), \qquad (1.3)$$
whenever the connectivity condition (1.4) is satisfied. Here, $L_k$ denotes the unnormalized $k$-NN graph Laplacian, and $(k/n)^{1/m}$ plays the role of the graph connectivity parameter.
The inspiration for this work starts with the paper [9]. In a similar spirit to (1.3), we seek to uncover the effective limit operator of the unnormalized SNN graph Laplacian as $(k/n)^{1/m} \to 0$. We reveal that this limit is the same as in (1.3). This means that, although an SNN graph is built on top of a $k$-NN graph, their Laplacians converge to the same operator. It also means that manifold spectral information is saturated at the primary level with the use of the $k$-NN graph. Furthermore, we show that when (1.4) is satisfied, then with high probability,
$$\max_{1 \le i \le n} |L_{snn} f(x_i) - \Delta_{knn} f(x_i)| \le C\, (k/n)^{1/m}, \qquad (1.5)$$
where $L_{snn}$ denotes the unnormalized SNN graph Laplacian. To our knowledge, we are the first to establish a pointwise consistency result for SNN graph Laplacians. It is expected to serve as the first installment toward studying the consistency of graph-based algorithms on SNN graphs. In particular, we plan to investigate the convergence of SNN-graph-based spectral clustering, for which we expect the nonasymptotic and quantitative nature of (1.5) to be greatly beneficial.
Outline. The paper is organized as follows. In Section 2, we give the basic set-up and the precise constructions of SNN graphs and graph Laplacians. In the first half of Section 3, we state our assumptions and our main result, as well as its ramifications. In the second half, we give an outline of the proof and recall the necessary geometry and concentration results. In Section 4, we present the main proof, with some tedious steps deferred to Appendix 6.
Preliminaries
Basic manifold set-up. Let $\mathcal{M}$ be a compact, connected, boundariless, orientable, smooth $m$-dimensional manifold ($m \ge 2$) embedded in $\mathbb{R}^d$. Some essential constants intrinsic to $\mathcal{M}$ are: an upper bound $K$ on the absolute values of the sectional curvatures, the reach $R$ of $\mathcal{M}$, and a lower bound $i_0$ on the injectivity radius of $\mathcal{M}$. We let $\mathcal{M}$ inherit the Riemannian structure induced by the ambient space $\mathbb{R}^d$. We write $dV = dVol_{\mathcal{M}}$ to denote the volume form on $\mathcal{M}$ with respect to the induced metric tensor, and $d(x,y)$ to denote the geodesic distance between $x, y \in \mathcal{M}$. Furthermore, let $\mu$ be a probability measure supported on $\mathcal{M}$ and $p$ be its density, such that
$$0 < p_{\min} \le p(x) \le p_{\max} < \infty \qquad (2.1)$$
for every $x \in \mathcal{M}$. We assume $p \in C^2(\mathcal{M})$; i.e., $p$ (expressed in normal coordinates) has continuous second partial derivatives (see also Remark 4 below). These regularity assumptions on $p$ are fairly common in theoretical works on graph-based learning, as they allow tangible connections between learning algorithms and PDE theory to be established.
Basic analytic set-up. We define $L^2(\mu)$ to be the space of $L^2$-functions on $\mathcal{M}$ with respect to $\mu$, endowed with the inner product
$$\langle f, g \rangle_{\mu} := \int_{\mathcal{M}} f(x)\, g(x)\, d\mu(x).$$
We say $f \in L^2(\mu)$ if $\langle f, f \rangle_{\mu} =: \|f\|_{\mu}^2 < \infty$.
When $X = \{x_1, \dots, x_n\}$ is a set of i.i.d. samples from $\mu$, we let $\mu_n$ be the usual empirical measure
$$\mu_n := \frac{1}{n} \sum_{i=1}^n \delta_{x_i}. \qquad (2.3)$$
Similarly as above, we define $L^2(\mu_n)$ to be the space of functions on $X$ with the inner product
$$\langle u, v \rangle_{\mu_n} := \frac{1}{n} \sum_{i=1}^n u(x_i)\, v(x_i), \qquad u, v \in L^2(\mu_n),$$
and $\|u\|_{\mu_n}^2 := \langle u, u \rangle_{\mu_n}$.
Basic notation agreement. We allow abstract analytic inequality constants $C, c$ to change their values from one line to the next; moreover, they depend implicitly on the following intrinsic values of the manifold: $m$, $K$, $i_0$, $R$, as well as $\alpha := Vol_m(\mathbf{B}(0,1))$, the $m$-dimensional volume of the unit ball. We will often indicate, but not fully disclose, an analytic constant's dependence on the density $p$. For instance, for the following constant, which will play a role in our analysis to ensure the convergence of the SNN graph Laplacian, we can simply write
$$c_{\mathcal{M}} = C_p \min\{1, i_0, K^{-1/2}, R/2\}.$$
We will also write $A \asymp_p B$ to mean $A \le C_p B$ and $B \le C_p A$. In various statements, we choose different expressions of constant parametric dependence; these notations are local and defined where they are used.
To distinguish different types of geometric balls, we write $B(x, r)$ to denote a geodesic ball in $\mathcal{M}$ with center $x$ and radius $r$, $\mathbf{B}(x, r)$ to denote a Euclidean ball, and $\overline{\mathbf{B}}(x, r)$ its topological closure, either in $\mathbb{R}^d$ or $\mathbb{R}^m$, depending on context. We abuse the notation $|\cdot|$, which means either the absolute value of a quantity, a Euclidean vector norm, or the Lebesgue measure of a set, depending on context. Finally, we define the (essential) support of a function $f$ as
$$\operatorname{supp}(f) := \overline{\{x : f(x) \neq 0\}}.$$
Basic graph Laplacian constructions
Let $\Gamma = (X, \{e_{ij}\}_{ij})$ denote an undirected, weighted (finite) graph on the set of nodes $X = \{x_1, \dots, x_n\}$, where each edge $e_{ij}$ is given an edge weight $w_{ij}$. Let $W \in \mathbb{R}^{n \times n}$ be the symmetric matrix whose $ij$th entry is $w_{ij}$ (weight matrix) and $D \in \mathbb{R}^{n \times n}$ be the diagonal matrix whose $ii$th entry is $d_{ii} := \sum_j w_{ij}$. The unnormalized or combinatorial graph Laplacian [29,13] is defined to be $L^{(u)} := D - W$, or
$$L^{(u)} f(x_i) = \sum_{j=1}^n w_{ij}\, \big(f(x_i) - f(x_j)\big) \qquad (2.5)$$
for $f \in L^2(\mu_n)$. To compare, the normalized and random walk graph Laplacians are defined respectively as follows [29]:
$$L^{(n)} := I - D^{-1/2} W D^{-1/2}, \qquad L^{(rw)} := I - D^{-1} W,$$
where $I$ stands for the identity matrix. We note that when $w_{ij}$ takes the form of a kernel function of the distance between $x_i, x_j$, e.g., $w_{ij} = \upsilon(|x_i - x_j|)$ for some non-increasing function $\upsilon : [0, \infty) \to [0, 1]$, the graph $\Gamma$ is a proximity graph. As we shall see below, an SNN graph is not a proximity graph.
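As a quick illustration of these definitions (a sketch under the formulas above, assuming a symmetric weight matrix with strictly positive degrees; names are ours):

```python
import numpy as np

def graph_laplacians(W):
    """Given a symmetric weight matrix W with positive degrees, return the
    unnormalized, normalized, and random-walk graph Laplacians."""
    n = W.shape[0]
    d = W.sum(axis=1)                                   # degrees d_ii = sum_j w_ij
    L_un = np.diag(d) - W                               # L^(u) = D - W
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_norm = np.eye(n) - D_inv_sqrt @ W @ D_inv_sqrt    # L^(n) = I - D^{-1/2} W D^{-1/2}
    L_rw = np.eye(n) - np.diag(1.0 / d) @ W             # L^(rw) = I - D^{-1} W
    return L_un, L_norm, L_rw

# Pointwise action of L^(u) on u: (L_un @ u)[i] == sum_j W[i, j] * (u[i] - u[j]).
```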
SNN graph and SNN graph Laplacian
We first introduce a $k$-relation, which we will use to define neighbors on an SNN graph of the data set $X = \{x_1, \dots, x_n\}$.
Definition 1. We define a relation $\sim_{knn}$ on $X \times X$ by declaring
$$x_i \sim_{knn} x_j \iff x_j \text{ is among the } k \text{ nearest neighbors of } x_i. \qquad (2.6)$$
A symmetric relation can be defined out of (2.6), but we will only need it for the following definition of SNN neighbors.
We say that $x_i, x_j \in X$ are SNN neighbors if there exists $x_l \in X$ with
$$x_i \sim_{knn} x_l \quad \text{and} \quad x_j \sim_{knn} x_l. \qquad (2.7)$$
Moreover, when (2.7) happens, we write $x_i \sim_{snn} x_j$ and say that $x_l$ is a shared $k$-NN neighbor of both $x_i$ and $x_j$. Hence, in an SNN graph $\Gamma$, an edge $e_{ij}$ exists between $x_i, x_j$ if both nodes have a common neighbor $x_l$ in the sense of (2.7). The $\sim_{snn}$ relation so defined is symmetric, and the resulting SNN graph $\Gamma = (X, \{e_{ij}\}_{i,j})$ is undirected. To complete the construction, we need to assign each edge a weight that reflects the number of shared $k$-NN neighbors between the edge nodes. We do this next.
Unnormalized SNN graph Laplacian
Since the density $p$ is bounded below (2.1), with probability one the requirement (2.6) is equivalent to
$$\mu_n(\mathbf{B}(x_i, r)) \le \frac{k}{n}, \qquad (2.8)$$
where $r = |x_i - x_j|$ and $\mu_n$ is the empirical measure in (2.3). Therefore, (2.8) can serve as a quantification of (2.6). Following this, we define, for every $\epsilon > 0$,
$$N_{\epsilon}(x) := \operatorname{card}\{i : 0 < |x - x_i| \le \epsilon\}. \qquad (2.9)$$
Now $N_{\epsilon}(x)$ captures the number of random samples $x_i$ in the punctured Euclidean $\epsilon$-neighborhood of $x$. It is most fitting to take $x \in X$ in (2.9); however, $x$ can also be a location on the manifold. A known estimate for $N_{\epsilon}(x)$ is as follows:
$$\mathbb{P}\big(|N_{\epsilon}(x) - \alpha p(x) n \epsilon^m| \ge C \delta n \epsilon^m\big) \le 2 \exp(-c \delta^2 n \epsilon^m).$$
Following [9], we let
$$\varepsilon_k(x) := \min\{\epsilon > 0 : N_{\epsilon}(x) \ge k\}. \qquad (2.10)$$
As before, $x$ can be a point on the manifold. By (2.8), $N_{\varepsilon_k(x)}(x) = k$ if $x \in X$. A similar statement can be made for general $x \in \mathcal{M}$. To see this, we mention the result (Lemma 2.2) which dictates that $\varepsilon_k(x) \asymp_p (k/n)^{1/m}$ with high probability. Simple calculations from Lemmas 2.1 and 2.2 then show that, for $x \in \mathcal{M}$ and $C(k/n)^{2/m} \le \delta \le 1$, $\varepsilon_k(x)$ concentrates around a deterministic radius of order $(k/n)^{1/m}$ (2.11). Denote $\mathbf{B}_k(x) := \mathbf{B}(x, \varepsilon_k(x))$. Motivated by (2.10) and (2.11), we quantify the number of shared $k$-NN neighbors between $x, y \in X$ as
$$N(x, y) := \sum_{i=1}^n \eta\left(\frac{|x - x_i|}{\varepsilon_k(x)}\right) \eta\left(\frac{|y - x_i|}{\varepsilon_k(y)}\right), \qquad (2.12)$$
where $\eta(t) := \mathbf{1}_{[0,1]}(t)$, $t \ge 0$, is the Heaviside step function. Note that, with probability one, the number of shared $k$-nearest neighbors between $x, y \in X$ is $l \le k$ iff $N(x, y) = l$.
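The quantities $\varepsilon_k$ and $N(x, y)$ can be computed directly from the sample; the following minimal sketch (our names, Euclidean distances, with the sample point excluded from its own neighborhood count) illustrates (2.10) and (2.12):

```python
import numpy as np

def eps_k(X, x, k):
    """eps_k(x) of (2.10): the smallest radius whose punctured Euclidean ball
    around x contains at least k samples (= distance to the k-th nearest sample)."""
    d = np.linalg.norm(X - x, axis=1)
    d = d[d > 0]                  # puncture: drop x itself when x is a sample point
    return np.sort(d)[k - 1]

def shared_count(X, x, y, k):
    """N(x, y) of (2.12): the number of samples lying in both closed balls
    B(x, eps_k(x)) and B(y, eps_k(y))."""
    ex, ey = eps_k(X, x, k), eps_k(X, y, k)
    in_x = np.linalg.norm(X - x, axis=1) <= ex   # eta(|x - x_i| / eps_k(x)) == 1
    in_y = np.linalg.norm(X - y, axis=1) <= ey
    return int(np.count_nonzero(in_x & in_y))
```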
We now assign an edge $e_{ij}$ in $\Gamma$ the weight $w_{ij} := N(x_i, x_j)/k$; note that this is the cosine measure mentioned in (1.1). We construct an unnormalized SNN graph Laplacian on $L^2(\mu_n)$ as follows:
$$L_{snn} f(x_i) := \frac{1}{n h^{m+2}} \sum_{j=1}^n \frac{N(x_i, x_j)}{k} \big(f(x_i) - f(x_j)\big), \qquad h := 2\left(\frac{k}{\alpha n}\right)^{1/m}. \qquad (2.13)$$
A few remarks are in order.
Remark 1. Implicit in the formulation (2.13) is that $h := 2(k/(\alpha n))^{1/m}$ acts as the graph connectivity parameter. This is suggested by Lemma 2.2, (2.10), and our observation about $N(x, y)$ above, which all conclude that $h$ is an expected SNN neighborhood radius at each datum $x \in X$. Here $(\alpha n/k)/2^m = 1/h^m$ is the necessary rescaling to reveal a divergence form at the microscopic level, whereas the factor $(\alpha n/k)^{2/m}/2^2 = 1/h^2$ arises because the Laplacian corresponds to a second derivative. The factor $1/n$ will later play a role in a concentration effect. We will gather information about the allowed choices of the parameter $k$ for consistency in Section 3; in any case, $k$ should be such that $h \to 0$ sufficiently slowly as $n \to \infty$. Then, when the number of points in each datum neighborhood approaches infinity, heuristically, the sum (2.13) approximates an integral whose normalization approaches $\Delta_{snn} f(x_i)$. This is the basic principle behind all the graph Laplacian convergence results [29,9,8] and is a well-known principle in the framework of nonparametric regression [27].
Remark 2. Up to the scaling $1/h^{m+2}$, the formulation (2.13) depicts a standard unnormalized graph Laplacian (2.5). Indeed, $h^{m+2} L_{snn} = D - W$, where $W$ has entries $N(x_i, x_j)/(nk)$ and $D$ is the associated degree matrix. It can be seen from the construction that $w_{ij}$ depends not only on the distance $|x_i - x_j|$ but also on the locations of $x_i, x_j$. Therefore, it is not a radial edge weight. This lack of radiality, as we shall see, is a complication in deriving the limiting operator for $L_{snn}$ as $n \to \infty$.
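Putting (2.13) together, the following is a minimal sketch of applying $L_{snn}$ to a function sampled on $X$, assuming the cosine weight matrix with entries $N(x_i, x_j)/k$ has already been computed (e.g., as in the sketch above; helper names are ours):

```python
import numpy as np
from math import gamma, pi

def unit_ball_volume(m):
    """alpha = Vol_m(B(0,1)) = pi^{m/2} / Gamma(m/2 + 1)."""
    return pi ** (m / 2) / gamma(m / 2 + 1)

def snn_laplacian_apply(W_cos, f, k, m):
    """Apply the rescaled SNN Laplacian of (2.13) to a vector f on X,
    where W_cos has entries N(x_i, x_j) / k."""
    n = len(f)
    alpha = unit_ball_volume(m)
    h = 2.0 * (k / (alpha * n)) ** (1.0 / m)   # graph connectivity parameter
    d = W_cos.sum(axis=1)
    return (d * f - W_cos @ f) / (n * h ** (m + 2))
```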
We are ready to state our result.
Main result
Given the set-up in Section 2, we show that, with high probability, $L_{snn}$ in (2.13) converges pointwise, at a linear rate, to (a multiple of) the following weighted Laplace-Beltrami operator on $\mathcal{M}$:
$$\Delta_{snn} := -\frac{1}{2p} \operatorname{div}\big(p^{1-2/m} \nabla\big).$$
The notation $\operatorname{div}$ stands for the divergence operator on $\mathcal{M}$, and $\nabla$ for the gradient. A precise statement is given in Theorem 3.1 below.
Assumption 1. Let $m \ge 2$ and denote $c^1_{p,\mathcal{M}} := \max_{x \in \mathcal{M}} |\nabla p(x)|$. We assume, for the remainder of this paper, three conditions relating $k$, $n$, $c^1_{p,\mathcal{M}}$, and the constant
$$\sigma := \int_{\mathbf{B}(0,1)} u_1^2 \, du,$$
where $\mathbf{B}(0,1)$ is the unit ball in $\mathbb{R}^m$ and $u_1$ is the first coordinate of $u$. It is known that $\sigma = \frac{\alpha}{m+2}$ [23]. For $l = 0, 1, 2, 3$ and $f \in C^l(\mathcal{M})$, we define the norms $\|f\|_{C^l(\mathcal{M})}$ and local seminorms $[f]_{l;\,\cdot}$ in the standard way; $C = C(p, \|f\|_{C^3(\mathcal{M})})$ in (3.2) denotes a constant depending on $p$ and $\|f\|_{C^3(\mathcal{M})}$.
Remark 3. Note that the second point of Assumption 1 is purely technical. One can combine the first and second points; we separate them since the second involves knowledge of the global maximum of $|\nabla \log p|$. See also Remark 6 below. The third point of Assumption 1 ensures that the probability bound in (3.2) is strictly less than one, while the role of the first will become clear in Section 3.2.1.
Ramifications of Theorem 3.1
Note from the construction that SNN graphs adjust to data density in a way that is different from, say, the $\epsilon$- and $k$-NN graphs [22,9,29,23]. Yet Theorem 3.1 shows that the SNN graph Laplacian reaches the same continuum operator as the $k$-NN graph Laplacian, which is [9]
$$\Delta_{snn} = -\frac{1}{2p} \operatorname{div}\big(p^{1-2/m} \nabla\big) =: \Delta_{knn}. \qquad (3.3)$$
Hence an application of the SNN graph does not produce any more manifold spectral information than what could be gained from a $k$-NN graph.
Recall from [29] the following definition of the $s$-th weighted Laplace-Beltrami operator:
$$\Delta_s := \frac{1}{p^s} \operatorname{div}\big(p^s \nabla\big) = \Delta + s\, \nabla \log p \cdot \nabla. \qquad (3.4)$$
Here $s \in \mathbb{R}$, and $\Delta := \nabla \cdot \nabla$ is the unweighted Laplace-Beltrami operator on $\mathcal{M}$. The presence of $p^s$ in (3.4) reveals how the data-dependent nature of the edge weights in a given graph construction influences the limiting differential operator. Since a Laplacian operator generates a diffusion process, the term $s \nabla \log p \cdot \nabla$ can be seen as inducing an anisotropic drift, which directs the diffusion toward ($s > 0$) or away from ($s < 0$) increasing density. An interesting finding in [29] is that, for a given graph based on primary similarity, different types of graph Laplacians converge to different scaled versions of $\Delta_s$. In particular, for an unnormalized graph Laplacian, this limit is
$$-\frac{1}{2}\, p^{1-2\lambda}\, \Delta_s. \qquad (3.5)$$
Since the limiting operator for the unnormalized $\epsilon$-graph Laplacian is $\Delta := -\frac{1}{2p} \operatorname{div}(p^2 \nabla)$ [29,23,9], we see that $1 - 2\lambda = 1$ and $s = 2$ in this case. Although neither the $k$-NN graph nor the SNN graph was among the ones considered in [29], referring (3.5) back to (3.3), we find $1 - 2\lambda = -2/m$ and $s = 1 - 2/m$ for both cases. This suggests that (3.5) might hold for more than just non-ranking, primary proximity based graphs. The fact that $s = 1 - 2/m \ge 0$ in (3.3) for all $m \ge 2$ is desirable in clustering and classification (see Section 3.1.1 below), where one wants the diffusion process to run mainly along regions of the same density level. However, due to the factor $p^{1-2\lambda} = p^{-2/m}$ in the limit, the unnormalized SNN (and $k$-NN) graph Laplacian is predicted to be unfit for applications such as label propagation, since the propagation will be slow in regions of high density [29].
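The identifications of $s$ and $1 - 2\lambda$ quoted above can be verified directly from the product rule on $\mathcal{M}$; the following short computation is our own verification, consistent with the formulas above:

```latex
\operatorname{div}\!\big(p^{s}\nabla f\big)
   = p^{s}\Delta f + s\,p^{s-1}\,\nabla p\cdot\nabla f
   = p^{s}\big(\Delta f + s\,\nabla\log p\cdot\nabla f\big)
   = p^{s}\,\Delta_{s}f,
\]
so that
\[
-\frac{1}{2p}\operatorname{div}\!\big(p^{2}\nabla f\big) = -\frac{1}{2}\,p\,\Delta_{2}f
\quad (1-2\lambda = 1,\ s=2),
\qquad
-\frac{1}{2p}\operatorname{div}\!\big(p^{1-2/m}\nabla f\big) = -\frac{1}{2}\,p^{-2/m}\,\Delta_{1-2/m}f
\quad (1-2\lambda = -\tfrac{2}{m},\ s = 1-\tfrac{2}{m}).
```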
Spectral information
Since we intend to use this exposition as a starting point for future work on the spectral convergence of SNN graph Laplacians, we briefly discuss the spectra of $L_{snn}$ and $\Delta_{snn}$ here. In what follows, $H^1(\mu)$ denotes the Sobolev space of functions $f$ with a weak first derivative $\nabla f$, and $H^2(\mu)$ that of functions with a weak second derivative $\nabla^2 f$, all in $L^2(\mu)$ [19]. Then $\|f\|^2_{H^1(\mu)} := \|f\|^2_{\mu} + \|\nabla f\|^2_{\mu}$, and similarly for $H^2(\mu)$. For $f, g \in H^2(\mu)$, define the bilinear form
$$B(f, g) := \int_{\mathcal{M}} \nabla f \cdot \nabla g \; p^{1-2/m} \, dV \qquad (3.6)$$
and the energy
$$E_{\mathcal{M}}(f) := \int_{\mathcal{M}} |\nabla f|^2 \, p^{1-2/m} \, dV. \qquad (3.7)$$
Hence $2\langle \Delta_{snn} f, f \rangle_{\mu} = B(f, f) = E_{\mathcal{M}}(f) \ge 0$, and so $\Delta_{snn}$ is a positive semidefinite operator on $H^2(\mu)$; as such, it has a pure point spectrum whose eigenvalues, counted with (finite) multiplicity, can be listed in increasing order:
$$0 = \lambda_0^{\mathcal{M}} \le \lambda_1^{\mathcal{M}} \le \lambda_2^{\mathcal{M}} \le \cdots \nearrow \infty.$$
By (3.6) and the Rayleigh-Ritz variational principle, the eigenvalues admit a min-max characterization over smooth test functions (3.8). The restriction to smooth functions in (3.8) (also in Theorem 3.1 and (3.7)) is not an issue, since the associated continuum eigenfunctions are smooth [1]. Moreover, it can be seen from (3.8) that the first (non-constant) eigenfunctions must change their signs, and because in the energy $E_{\mathcal{M}}(f)$ the quantity $|\nabla f|^2$ is weighted against a positive power of $p$, they must only change their signs in low density regions. This gives an advantage in clustering with a few labeled points, where one assumes that the classifier remains relatively constant in high density regions [7].
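For completeness, the nonnegativity above follows from the divergence theorem on the closed manifold; a short verification (ours, assuming the expressions for $B$ and $E_{\mathcal{M}}$ given above):

```latex
2\langle \Delta_{snn}f,\, f\rangle_{\mu}
  = -\int_{\mathcal{M}} \frac{1}{p}\operatorname{div}\!\big(p^{1-2/m}\nabla f\big)\, f\, p\, dV
  = -\int_{\mathcal{M}} \operatorname{div}\!\big(p^{1-2/m}\nabla f\big)\, f\, dV
  = \int_{\mathcal{M}} p^{1-2/m}\,|\nabla f|^{2}\, dV
  = E_{\mathcal{M}}(f) \;\ge\; 0,
```

with no boundary term arising since $\partial\mathcal{M} = \emptyset$.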
Similarly to $\Delta_{snn}$, $L_{snn}$ is a positive semidefinite operator on $L^2(\mu_n)$; indeed,
$$\langle L_{snn} u, u \rangle_{\mu_n} = \frac{1}{2 n^2 h^{m+2}} \sum_{i,j=1}^n \frac{N(x_i, x_j)}{k} \big(u(x_i) - u(x_j)\big)^2 \ge 0.$$
Its spectrum can be listed as
$$0 = \lambda_0^{\Gamma} < \lambda_1^{\Gamma} \le \cdots \le \lambda_{n-1}^{\Gamma},$$
assuming that $\Gamma$ is connected [39]. We anticipate that the rate at which $\lambda_l^{\Gamma} \to \lambda_l^{\mathcal{M}}$ will be quantified in future work.
Outline of the proof for Theorem 3.1
We prove the convergence $L_{snn} \to \Delta_{snn}$ depicted in (3.2) by passing it through an evolution:
$$L_{snn} \rightsquigarrow L_1 \rightsquigarrow L_2 \rightsquigarrow \Delta_{snn}. \qquad (3.10)$$
The precise definitions of $L_1, L_2$ will be given in Section 4. We treat these two operators and $\Delta_{snn}$ as operators on $L^2(\mu)$ as well as on $L^2(\mu_n) \cap C(\mathcal{M})$, i.e., we restrict them to the SNN graph $\Gamma$. Although we could treat the graph Laplacian $L_{snn}$ the same way, we mainly treat it as an operator on $L^2(\mu_n)$. To pass from an operator on $L^2(\mu_n)$ to one on $L^2(\mu)$, we will need a key concentration ingredient, given in Section 3.2.2. Additionally, we will need tools from differential geometry, which are summarized below.
Local Riemannian geometry
The results here were already presented in [9,23,8]; more details can also be found in [16].
Let $\exp_x : T_x\mathcal{M} \to \mathcal{M}$ be the Riemannian exponential map at $x \in \mathcal{M}$ and $J_x(v)$ denote the Jacobian of $\exp_x$ at $v \in \mathbf{B}(0, r) \subset T_x\mathcal{M}$. The Rauch Comparison Theorem [16] states that the relative distortion of the metric by $\exp_x$ at $z \in \mathbf{B}(0, s) \subset T_x\mathcal{M}$ is bounded by $O(K|z|^2)$, from which it follows that (see [9])
$$|\mathcal{V}(B(x, r)) - \alpha r^m| \le C K r^{m+2},$$
where $\mathcal{V}$ denotes the Riemannian volume. In normal coordinates, $\exp_x(u) = y$, $\exp_x(0) = x$, and $\eta$ is as in (2.12). It should be noted that Assumption 1 guarantees that the restriction (3.14) of $\exp_x$ to $\mathbf{B}(0, s)$ is a diffeomorphism for $s \le c(k/n)^{1/m}$, whenever $c$ is small enough.
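As a sanity check of this volume comparison, consider the unit sphere $S^2$ ($m = 2$, $K = 1$, $\alpha = \pi$), where the geodesic ball volume is explicit:

```latex
\mathcal{V}\big(B(x,r)\big) = 2\pi(1-\cos r)
  = \pi r^{2} - \frac{\pi r^{4}}{12} + O(r^{6}),
\qquad\text{hence}\qquad
\big|\mathcal{V}(B(x,r)) - \alpha r^{2}\big| \le C K r^{4}.
```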
Remark 5. If $f \in C^l(B(x, r))$, then $\|f\|_{C^l(B(x,r))}$ and $\|f\|_{C^l(\mathbf{B}(0,r))}$ are equivalent. Indeed, following [28], one can write the covariant derivatives of $f$ in normal coordinates around $x$, and use standard expansions for the Christoffel symbols and the metric tensor in these coordinates, to conclude that
$$(1 - Cr)\,\|f\|_{C^l(\mathbf{B}(0,r))} \le \|f\|_{C^l(B(x,r))} \le (1 + Cr)\,\|f\|_{C^l(\mathbf{B}(0,r))}.$$
Main proof
Define, for $x, y \in \mathcal{M}$,
$$w_k(x, y) := \int_{\mathcal{M}} \eta\left(\frac{|x - z|}{\varepsilon_k(x)}\right) \eta\left(\frac{|y - z|}{\varepsilon_k(y)}\right) p(z) \, dV(z).$$
We claim that $N(x, y) \approx n\, w_k(x, y)$ with high probability; this is the content of Lemma 4.1, in which $\varepsilon_k(x, y) := \varepsilon_k(x) + \varepsilon_k(y)$.
The proof of Lemma 4.1 is a direct consequence of Lemma 3.2 and is given in Appendix 6.1. This inspires us to define
$$L_1 f(x) := \frac{1}{n h^{m+2}} \sum_{i=1}^n \frac{n\, w_k(x, x_i)}{k} \big(f(x) - f(x_i)\big).$$
Next, for every $x \in \mathcal{M}$, we let (see also [9])
$$\frac{k}{n} =: \alpha\, p(x)\, \varepsilon(x)^m. \qquad (4.2)$$
Since $p \in C^2(\mathcal{M})$ and is bounded away from zero, $\varepsilon \in C^2(\mathcal{M})$ and is therefore a continuous version of $\varepsilon_k$. Let $\varepsilon^\star := \max\{\varepsilon(x) : x \in \mathcal{M}\}$. Then $\varepsilon(x), \varepsilon^\star \asymp_p (k/n)^{1/m}$; in particular, it follows from Assumption 1 that (3.14) holds for $s = 3\varepsilon^\star$.
As an operator on $L^2(\mu_n)$, $L_1$ predicts the average behavior of $L_{snn}$; this is the content of the following lemma.
Then combining the events described in Lemma 4.1 and (4.3) gives the desired conclusion.
Define the operator $L_2$ on $L^2(\mu)$ in (3.10) to be
$$L_2 f(x) := \frac{1}{h^{m+2}} \int_{\mathcal{M}} \frac{n\, w(x, y)}{k} \big(f(x) - f(y)\big)\, p(y) \, dV(y).$$
In a similar fashion to Lemma 4.2, we claim that, as an operator on $L^2(\mu) \cap C^{0,1}(\mathcal{M})$, $L_2$ closely describes $L_1$. The proof of Lemma 4.3 is similar to that of Lemma 4.2, but a tad more involved, and is therefore given in Appendix 6.2. We now define
$$\omega(x, y) := \frac{\alpha n}{k}\, w(x, y) = \frac{\alpha n}{k} \int_{\mathcal{M}} \eta\left(\frac{d(x, z)}{\varepsilon(x)}\right) \eta\left(\frac{d(y, z)}{\varepsilon(y)}\right) p(z) \, dV(z). \qquad (4.6)$$
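For intuition, $w(x, y)$ is an overlap integral of two balls against the density; in flat space it can be estimated numerically. A minimal Monte Carlo sketch (a Euclidean stand-in for the manifold integral, not an implementation of it; names are ours):

```python
import numpy as np
from math import gamma, pi

def sample_ball(center, radius, n_samples, rng):
    """Uniform samples from the Euclidean ball B(center, radius) in R^m."""
    m = len(center)
    g = rng.standard_normal((n_samples, m))
    g /= np.linalg.norm(g, axis=1, keepdims=True)     # uniform directions
    r = radius * rng.random(n_samples) ** (1.0 / m)   # radius CDF proportional to r^m
    return np.asarray(center) + g * r[:, None]

def overlap_kernel(x, y, e_x, e_y, p, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the integral of eta(|x-z|/e_x) * eta(|y-z|/e_y) * p(z)
    over R^m, i.e., the integral of p over the lens B(x, e_x) & B(y, e_y)."""
    rng = np.random.default_rng(seed)
    m = len(x)
    vol_x = pi ** (m / 2) / gamma(m / 2 + 1) * e_x ** m   # Vol(B(x, e_x))
    z = sample_ball(x, e_x, n_samples, rng)
    in_y = np.linalg.norm(z - np.asarray(y), axis=1) <= e_y
    return vol_x * np.mean(np.where(in_y, p(z), 0.0))

# Example: two unit disks at distance 1 with p == 1 recover the lens area
# 2*arccos(1/2) - sqrt(3)/2 (about 1.228):
# overlap_kernel(np.zeros(2), np.array([1.0, 0.0]), 1.0, 1.0, lambda z: np.ones(len(z)))
```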
In what follows and for the remainder of this paper, we will use the big-$O$ notation to indicate a quantity whose magnitude is bounded by a constant multiple of what is inside the brackets; this constant is allowed to depend on various factors at play, such as the intrinsic values mentioned in Section 2 as well as $\|f\|_{C^3(\mathcal{M})}$ and $\|p\|_{C^2(\mathcal{M})}$.
Remark 6. Note from (4.11) that, in place of $c^1_{p,\mathcal{M}}$ in Assumption 1, one can take any upper-bound estimate of $\max_{x \in \mathcal{M}} |\nabla p(x)|$.
Proof of Theorem 3.1. Let $f \in C^3(\mathcal{M})$. Observe from the proof of Lemma 4.2 that (4.5) follows for any $x \in X$ once the uniformity condition (4.3) is satisfied, and similarly, from the proof of Lemma 4.3, that (6.7) and (6.9) hold once (6.3) does. Hence, as a result of these lemmas, if $(k/n)^{1/m} \le t \le (k/n)^{-1/m}$, then
$$\max_{1 \le i \le n} |L_{snn} f(x_i) - L_2 f(x_i)| \le C_p \|f\|_{C^1(\mathcal{M})}\, t \qquad (4.14)$$
with probability at least $1 - C_p\, n \exp\big(-c_p k (k/n)^{2/m} t^2\big)$. It also follows from Proposition 4.4 that $L_2 f$ approximates $\Delta_{snn} f$ uniformly on $\mathcal{M}$ at the rate $(k/n)^{1/m}$ (4.15). Combining (4.14) and (4.15), we obtain the conclusion of the theorem.
Conclusion
In this paper, we initiate the theoretical study of Laplacians based on SNN graphs, which has been absent from the literature. We establish a consistency result and show that the large sample asymptotics of the unnormalized SNN graph Laplacian reach the same weighted Laplace-Beltrami limit operator as those of the unnormalized $k$-NN graph Laplacian. Some directions for future investigation, with which we plan to follow up this work, are: (1) analyzing the spectral convergence of the SNN graph Laplacian; (2) analyzing the effectiveness of spectral clustering methods using SNN graphs. In the second direction, our intention is to include the mixed distribution case, $\mu = \sum_{i=1}^K \mu_i$, in which the density can vanish on the manifold. It is also of interest to extend the analysis to the case of compact manifolds with boundary.
Proof of Lemma 4.1
Denote $\overline{\mathbf{B}}_k(x) := \overline{\mathbf{B}}(x, \varepsilon_k(x))$ and recall that $\mathbf{B}_k(x) = \mathbf{B}(x, \varepsilon_k(x))$. Define also $\mathbf{B}_k(x)^o := \{z \in \mathbf{B}_k(x) : z \neq x\}$.
Proof of Lemma 4.3
We define an intermediate operator between $L_1$ and $L_2$ as
$$L_7 f(x) := \frac{1}{n h^{m+2}} \sum_{i=1}^n \frac{n\, w(x, x_i)}{k} \big(f(x) - f(x_i)\big),$$
where $w$ is as in (4.6). Suppose the following holds.
Lemma 6.1. Let $f \in C^{0,1}(\mathcal{M})$ and $x \in \mathcal{M}$. Then, for $(k/n)^{1/m} \le t \le (k/n)^{-1/m}$,
$$\mathbb{P}\big(|L_1 f(x) - L_7 f(x)| \ge C_p [f]_{1;\mathbf{B}(x, 2\varepsilon^\star)}\, t\big) \le C \exp\big(-c_p k (k/n)^{2/m} t^2\big).$$

Observe from the definition that $|w(x, \cdot)| \le C p_{\max}\, k/n$. Hence a routine application of Lemma 3.2 to $L_7 f(x)$ yields
$$\mathbb{P}\big(|L_7 f(x) - L_2 f(x)| \ge C_p [f]_{1;\mathbf{B}(x, 2\varepsilon^\star)}\, t\big) \le 2 \exp\big(-c_p k (k/n)^{2/m} t^2\big)$$
for $(k/n)^{1/m} \le t \le (k/n)^{-1/m}$, which, together with Lemma 6.1, concludes Lemma 4.3.
Proof of Lemma 6.1
Recall that $\varepsilon_k(x, y) = \varepsilon_k(x) + \varepsilon_k(y)$ and $\varepsilon(x, y) = \varepsilon(x) + \varepsilon(y)$. Fix $x \in \mathcal{M}$, and let
$$A := \{i : |x - x_i| < \varepsilon_k(x, x_i)\}, \qquad \tilde{A}(s) := \{i : |x - x_i| < \varepsilon(x, x_i)(1 + s)\}, \quad s \in \mathbb{R}.$$
We can rewrite $L_1 f(x)$ and $L_7 f(x)$ in terms of these sets as follows:
$$L_1 f(x) = \frac{1}{n h^{m+2}} \sum_{i \in A} \frac{n\, w_k(x, x_i)}{k} \big(f(x) - f(x_i)\big), \qquad L_7 f(x) = \frac{1}{n h^{m+2}} \sum_{i \in \tilde{A}(0)} \frac{n\, w(x, x_i)}{k} \big(f(x) - f(x_i)\big).$$
We will show that, although $A$ and $\tilde{A}(0)$ are two different sets, they are roughly comparable with respect to set inclusion. To see this, note that by Lemma 2.2 and (4.2), we may assume
$$(1 - C\delta)\,\varepsilon(x, x_i) \le \varepsilon_k(x, x_i) \le (1 + C\delta)\,\varepsilon(x, x_i) \quad \text{for all } i, \qquad (6.3)$$
for some fixed $C(k/n)^{2/m} \le \delta \le 1$. It follows that
$$\tilde{A}(-C\delta) \subset A,\ \tilde{A}(0) \subset \tilde{A}(C\delta), \qquad (6.4)$$
and hence
$$|L_1 f(x) - L_7 f(x)| \le \frac{1}{h^{m+2}} \Bigg[ \sum_{i \in \tilde{A}(C\delta) \setminus \tilde{A}(-C\delta)} \frac{|w_k(x, x_i)| + |w(x, x_i)|}{k} |f(x) - f(x_i)| + \sum_{i \in \tilde{A}(-C\delta)} \frac{|w_k(x, x_i) - w(x, x_i)|}{k} |f(x) - f(x_i)| \Bigg]. \qquad (6.5)$$
To handle the first sum, we utilize Chernoff's bounds, recalling (4.2) again, to obtain
$$\mathbb{P}\big(\operatorname{card}(\tilde{A}(C\delta)) - \operatorname{card}(\tilde{A}(-C\delta)) \ge C \delta n \varepsilon(x)^m\big) \le 2 \exp(-c_p \delta^2 k). \qquad (6.6)$$
It follows from the definition and (6.3) that $|w_k(x, x_i) - w(x, x_i)| \le C_p k/n$; using this together with (6.3) and (6.6), we have the first sum in (6.5) dominated by
$$C_p [f]_{1;\mathbf{B}(x, 2\varepsilon^\star)}\, \delta\, (n/k)^{1/m} \qquad (6.7)$$
with probability at least $1 - 2\exp(-c_p \delta^2 k)$. To handle the second sum in (6.5), we use the following lemma.

Lemma 6.2. Suppose (6.3) and (6.4) hold. Then, for $i \in \tilde{A}(-C\delta)$,
$$|w_k(x, x_i) - w(x, x_i)| \le C_p \delta\, k/n, \qquad (6.8)$$
and consequently the second sum in (6.5) is dominated by
$$C_p [f]_{1;\mathbf{B}(x, 2\varepsilon^\star)}\, \delta\, (n/k)^{1/m}. \qquad (6.9)$$

We combine (6.7) and (6.9), and let $\delta = C(k/n)^{1/m} t$ for some $(k/n)^{1/m} \le t \le (k/n)^{-1/m}$, to obtain the conclusion of Lemma 6.1.
Proof of Lemma 6.2. Note that the condition $i \in \tilde{A}(-C\delta)$ and (6.4) make both $w_k(x, x_i)$ and $w(x, x_i)$ nonzero in (6.8); if not, the bound there could be as large as $C_p k/n$. Denote $\mathbf{B}(x) := \mathbf{B}(x, \varepsilon(x))$. Suppose $\varepsilon_k(x) \le \varepsilon(x)$. We write $w_k(x, x_i) - w(x, x_i)$ as a sum of two differences of overlap integrals (6.10). We express (6.10) in normal coordinates, using the convention established in Remark 4: we center at $x$, so that $\exp_x(0) = x$, $\exp_x(v) = x_i$, and $\exp_x(u) = z$. It follows from this and (6.3) that
$$\left| \int_{\mathbf{B}_k(x)} \mathbf{1}_{\mathbf{B}_k(x_i)}(z)\, p(z) \, dV(z) - \int_{\mathbf{B}(x)} \mathbf{1}_{\mathbf{B}_k(x_i)}(z)\, p(z) \, dV(z) \right| \le C p_{\max} \big| Vol(\mathbf{B}(0, \varepsilon_k(x))) - Vol(\mathbf{B}(0, \varepsilon(x))) \big| \le C p_{\max} \delta \varepsilon(x)^m \le C_p \delta\, (k/n). \qquad (6.11)$$
Note that (6.11) corresponds to the first term in (6.10); the second term can be handled similarly, and so we conclude the lemma.
Derivatives. For $i = 1, \dots, m$, let $\partial_i$ denote the partial derivative along the $i$th axis. Observe that, for every $v \in \mathbf{B}(0, 2\varepsilon^\star)$ of the form $v = v_i e_i$, where $v_i \in \mathbb{R}$ and $e_i$ is the standard $i$th basis vector,
$$\tilde{\omega}_0(v) \le \tilde{\omega}_0(0).$$
Hence $0$ is a global maximum of $\tilde{\omega}_0$ along the $i$th axis. If $\partial_i \tilde{\omega}_0(0)$ exists, this would mean
$$\partial_i \tilde{\omega}_0(0) = 0, \qquad (6.12)$$
and if all the partial derivatives $\partial_i \tilde{\omega}_0$ exist in a vicinity of $0$ and are continuous at $0$, it would entail that $\nabla \tilde{\omega}_0(0) = 0$, which is (4.8). To show these facts, we use a differentiation technique from fluid mechanics. It goes as follows.
Let $\xi(\tau)$ be a smooth vector field, $\tau \in \mathbb{R}$, and let $\xi'(\tau) =: \upsilon(\tau)$. Let $\mathcal{F}(\xi(\tau), \tau)$ be a density function of space and time, and let $R(\tau)$ be a region varying with time. Then the Reynolds transport theorem [24] states that
$$\frac{d}{d\tau} \left( \int_{R(\tau)} \mathcal{F}(\xi(\tau), \tau) \, d\xi(\tau) \right) = \sum_l \int_{R(\tau)} \partial_l \big\{ \mathcal{F}(\xi(\tau), \tau)\, (\upsilon(\tau))_l \big\} \, d\xi(\tau) + \int_{R(\tau)} \partial_\tau \big\{ \mathcal{F}(\xi(\tau), \tau) \big\} \, d\xi(\tau), \qquad (6.13)$$
where $\partial_\tau$ denotes the partial derivative with respect to the time argument $\tau$. We apply (6.13) to our context. Take $v \in \mathbf{B}(0, 2\varepsilon^\star)$. Let $\gamma_v(\tau) := v + e_i \tau$ and $R_v(\tau) := \mathbf{B}(0, \varepsilon) \cap \mathbf{B}(\gamma_v(\tau), \tilde{\varepsilon}(\gamma_v(\tau)))$, for $\tau \in \mathbb{R}$. Let $\xi_i(\tau)$ denote a constant-velocity vector flow with $(\xi_i)'(\tau) = e_i$. Then (see Figure 2)
$$\partial_i \tilde{\omega}_0(v) = \frac{\alpha n}{k} \frac{d}{d\tau} \left( \int_{R_v(\tau)} p(\xi_i(\tau))\, J_x(\xi_i(\tau)) \, d\xi_i(\tau) \right) \Bigg|_{\tau = 0},$$
if it exists. Comparing this to (6.13), we obtain
$$\frac{d}{d\tau} \left( \int_{R_v(\tau)} p(\xi_i(\tau))\, J_x(\xi_i(\tau)) \, d\xi_i(\tau) \right) = \int_{R_v(\tau)} \partial_i \big\{ p(\xi_i(\tau))\, J_x(\xi_i(\tau)) \big\} \, d\xi_i(\tau). \qquad (6.14)$$
Since the exponential map (3.14) is guaranteed to be a diffeomorphism by Assumption 1 and $p \in C^2(\mathcal{M})$, the derivative in (6.14) exists, and so does $\partial_i \tilde{\omega}_0(v)$. Hence (6.12) holds. It remains to observe that $\gamma_v$ is continuous with respect to $v$, and so is $\tilde{\varepsilon}(\gamma_v)$. Therefore, $Vol_m(R_v(0)) \to Vol_m(R_0(0))$ in (6.14); this is enough to conclude that $\partial_i \tilde{\omega}_0(v) \to \partial_i \tilde{\omega}_0(0)$ as $v \to 0$, for all $i$, and consequently that $\nabla \tilde{\omega}_0(0) = 0$.
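As a one-dimensional check of (6.13) (our illustration): for a time-independent integrand and a region translating with unit velocity, the theorem reduces to the fundamental theorem of calculus, in the same way that (6.14) drops the $\partial_\tau$ term:

```latex
\frac{d}{d\tau}\int_{a+\tau}^{b+\tau} F(\xi)\, d\xi
  = F(b+\tau) - F(a+\tau)
  = \int_{a+\tau}^{b+\tau} F'(\xi)\, d\xi .
```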
Next, we consider $\partial_j \partial_i \tilde{\omega}_0(v)$. To do so, we define $\beta_v(t, \tau) := v + e_j t + e_i \tau$, for $t, \tau \in \mathbb{R}$, and $R_v(t, \tau) := \mathbf{B}(0, \varepsilon) \cap \mathbf{B}(\beta_v(t, \tau), \tilde{\varepsilon}(\beta_v(t, \tau)))$.
The 2019 Asian American/Pacific Islander Nurses Association (AAPINA) & Taiwan Nurses Association (TWNA) Joint International Conference was held under the theme Changes in Nursing Research, Education, and Practice: From Local to Global on August 16-17, 2019, at the Splendor Hotel, Taichung City, Taiwan. More than 700 researchers, educators, and clinical nurses from over 10 countries participated in the conference. A dozen internationally well-known nursing scholars and leaders provided keynote speeches, including Dr. Pamela F. Cipriano, the Vice President of the International Council of Nurses (ICN); Dr. Wen-Ying Chou, the Program Director of the Health Communication and Informatics Research Branch, National Cancer Institute of the USA; Dr. Eun-Ok Im, the AAPINA President-Elect; and Dr. Hsiu-Hung Wang, the TWNA President. In addition, seven international scholars and leaders provided a thought-provoking forum on changes in nursing leadership in Asian countries.
Introduction/Purpose: Early sexual initiation is considered a serious problem worldwide, and the average age of first sexual intercourse among adolescents is declining. This experimental study, based on Ajzen's Theory of Planned Behavior (TPB), aimed to compare the effects of parent participation in an adolescent sexual education program on parents' sexual communication behavior and adolescents' sexual abstinence intention.
Methods: Eighty seventh-grade students and their parents were randomly assigned to an adolescent-only program (Group A, n=39) or a parent-teen program (Group B, n=41). Descriptive statistics were used for demographic data analysis. Repeated measures ANOVA and Generalized Estimating Equations (GEE) were used to test the effects of the program.
Results:
The findings immediately and one month after the program indicated that the sexual communication behavior scores of parents in Group B increased significantly over time (p < .05). Additionally, the attitudes, norms, and intentions regarding sexual communication of parents in Group B increased significantly over time and were higher than those in Group A (p < .05). Students in Group B showed a higher score for norms about sexual abstinence compared with those in Group A immediately after completing the intervention (B=12.93, p < .001). However, the attitude, perceived behavioral control, and sexual abstinence intention scores of students in Groups A and B did not differ significantly across the measurement time points. The findings suggest that involving both parents and adolescents in a sex education and sexual communication program is more effective than an adolescent-only program. Future research should involve parents in the program and be designed to build parents' confidence and skills in sexual communication.
Discussion:
The findings support a parent-based expansion of the TPB that explicitly includes parenting influences, and they indicate a conceptual framework for family-based designs to increase sexual communication between parents and their adolescent children and to promote adolescents' perceptions and sexual abstinence intentions. In addition, behavioral beliefs and normative beliefs were found to have the strongest effects on parents' sexual communication, while perceived norms had the strongest effect on adolescents' sexual abstinence intention among the TPB constructs in this study. Future research should involve parents in the program and be designed to build parents' confidence and skills in sexual communication. Moreover, the programs can be applied as guidelines for nurses who work in schools, communities, and adolescent health services in health promotion. Nurses should advocate for the use of the intervention in schools and communities with the cooperation of parent-teacher associations.
Abstract 4
Constructing an evaluation tool of teaching quality for nurse clinical teachers in a medical center. Shirling Lin, MSN, RN, Taipei Veterans General Hospital, Taipei, Taiwan. Background: With appropriate teaching designs and methods, learners can acquire knowledge, values, and behaviors comprehensively. Assessing and improving these specific competencies requires an instrument measuring teacher performance.
Objective: The purpose of this study was to develop and validate a questionnaire to assess the nurse clinical teachers' competencies essential for facilitating reflective learning.
Methods: A cross-sectional study design was used, and data were collected by questionnaires. A stratified sample of 39 classes comprising 752 staff nurses from a medical center was invited via anonymous intranet data collection. The research instrument was a 25-item teaching evaluation tool. ANOVA, t-tests, factor analysis, cluster analysis, and discriminant analysis were performed for data analysis.
from Meleis (2010)'s transitions theory; (b) a literature review using multiple databases, including the Cumulative Index to Nursing and Allied Health Literature (CINAHL), PubMed, PsycINFO, and two Korean databases (KISS and RISS); and (c) research findings from a structural equation modeling study on the role transition of Korean critical care nurses and its related concepts.
Results:
The proposed SST included three major concepts with several sub-concepts: 1) transition conditions (intrapersonal factors, interpersonal factors, work environmental factors, and cultural factors); 2) patterns of response (success in role transition and professional socialization); and 3) nursing therapeutics (education and leadership). The SST explains the relationships among these major concepts and sub-concepts.
Conclusion:
The proposed SST is limited in scope to acute care settings. However, it could help in understanding Korean nurses' role transitions in South Korean contexts and could guide nursing education and management for nurses' successful role transitions in South Korea. Further studies are needed to validate the proposed theory in various nursing settings and situations.
Relationship between Recognition of Symptoms in Patients with Colorectal Cancer and Knowledge of Risk Factors and Lifestyle
Hiromi Uchihara, MSN, BSN, RN 1, Midori Kamizato, PhD, PHN, RN 1; 1 Okinawa Prefectural College of Nursing, Naha City, Japan. Background: Colorectal cancer has characteristic symptoms such as persistent bleeding, diarrhea, and constipation. However, in interviews with colorectal cancer patients conducted before the start of this study, it was found that they did not recognize the symptoms of colorectal cancer, even after onset. Therefore, it is important to encourage patients to recognize the symptoms of colorectal cancer.
Purpose: The purpose of this study was to clarify the relationship between the recognition of colorectal cancer symptoms in affected patients and the knowledge of risk factors and lifestyle.
Method: Postoperative patients with colorectal cancer participated in a self-report questionnaire survey on "recognition of symptoms," "knowledge of risk factors," and "current lifestyle habits." Pearson's Chi-squared test and the Kruskal-Wallis test were performed on "recognition of symptoms"; each question had one of three possible answers: "I didn't know symptoms," "I knew symptoms after onset," and "I knew symptoms before onset." Analyses were then performed on the relationships among "recognition of symptoms," "knowledge of risk factors," and "current lifestyle habits."
Result: Responses were received from ninety-five patients (response rate: 88%). Thirty-six patients (37.9%) who answered "I didn't know symptoms" were male, Stage II/III patients with annual incomes under 3 million yen; their knowledge of risk factors was low, and they reported high alcohol and fat intake and low vegetable intake. Forty patients (42.1%) who answered "I knew symptoms after onset" were female, Stage II/III patients; they reported high vegetable intake and low alcohol and fat intake. Nineteen patients (20.0%) who answered "I knew symptoms before onset" were Stage 0/I patients with annual incomes over 3 million yen; their knowledge of risk factors was high and their fat intake was low.
Conclusion:
Even after onset, about forty percent of patients still did not recognize their symptoms. Differences in the knowledge of risk factors and lifestyle were observed according to differences in the recognition of symptoms. It is important that nurses tailor the information they provide to each patient's knowledge of symptoms.
Implications for Nursing:
It is important that nurses tailor the information they provide to each patient's knowledge of symptoms.
Abstract 8
Relationship between late effects and social isolation after radiation therapy in head and neck cancer survivors. Tomoharu Genka, MSN, BSN, RN 1, Midori Kamizato, PhD, RN, PHN 1; 1 Okinawa Prefectural College of Nursing, Yogi, Naha, Japan. Background: Head and neck cancer survivors who have undergone radiation therapy experience characteristic late effects. In particular, late effects related to communication disorders are expected to reduce social interaction with the surrounding environment and to induce social isolation. The psychosocial experience of head and neck cancer survivors is characterized by a high prevalence of depression, relationship conflict, social isolation, and high suicide rates.
Purpose: The purpose of this study was to clarify the relationship between late effects after radiation therapy and social isolation in head and neck cancer survivors.
Method:
A cross-sectional, correlational design was adopted. Head and neck cancer survivors more than 3 months past radiation therapy were selected. Late effects and QOL were measured with the European Organisation for Research and Treatment of Cancer (EORTC) quality of life questionnaire core module (QLQ-C30) and head and neck cancer module (QLQ-H&N35). Social isolation was measured with the Japanese version of the University of California Los Angeles Loneliness Scale ver. 3.0 (UCLA-LS). The analysis determined the correlation coefficients between the variables.
Result:
One hundred and one people were included in the analysis. The mean age of the patients was 62.3 (±11.4) years, and the mean period from the end of radiation therapy was 35.2 (±41) months. The symptom with the highest prevalence was "Trouble with social eating" (87.1%). Among the late effects reported by the participants, the severity of "Dry mouth" was the strongest. The average score on the loneliness scale was 44.2 (±10.2) points, a high value compared with healthy people of the same age. Of the 14 late adverse events, 11 symptoms were correlated with social loneliness scores, including "Speech problems" (p < .001, r = .40), "Trouble with social contact" (p < .001, r = -.45), "Felt ill" (p < .001, r = .37), and "Cough" (p < .001, r = .35). Social isolation scores were also correlated with overall QOL scores. In addition, the functional QOL domains that were correlated with social isolation scores were "emotional function" (p < .001, r = -.49), "social function" (p < .001, r = -.45), "cognitive function" (p < .001, r = -.39), and "role function" (p < .001, r = -.30).
Conclusion:
Late effects in head and neck cancer survivors after radiation therapy may increase social loneliness. Nurses need to assess not only symptom management but also the psychosocial experience of survivors.
Implications for Nursing: Nurses need to assess not only symptom management but also the psychosocial experience of survivors.
Abstract 9
Defecation Control with Fermented Food and Lactulose in Residents of a Long-Term Care Facility in Japan. Hiromi Hirata, PhD 1, Hiromi Yamada, MSN 2; 1 Nihon Fukushi University, Tokai, Aichi, Japan; 2 Tsubutsubu Zakkoku Cooking Hidamari, Hikone, Japan. Background: Older people are prone to constipation due to decreased abdominal pressure, disturbed defecation reflexes, and a weaker anal sphincter (Kitamura et al., 2016). Constipation is associated with many issues, including hypermagnesemia induced by long-term laxative use.
Purpose: This study aimed to examine the effectiveness of non-laxatives such as fermented food (amazake) and lactulose (disaccharide) formulations for treating constipation in residents of a long-term care facility in Japan.
Methods: Eight residents who had been taking magnesium oxide as a laxative were recruited. The residents drank roughly 100 ml of amazake, a sweet non-alcoholic fermented rice drink, for two weeks after discontinuing the use of magnesium oxide. Then, after a two-day interval, they drank roughly 10-30 ml of lactulose for two weeks. The number of bowel movements, the amount and properties of stools, and the use of enemas and rectal laxatives were examined.
Results: Participants were one male and seven females with a mean age of 93 years. Four participants (B, D, E, and H) were able to sit on a toilet and exert pressure on the abdomen, one (A) was able to sit on a toilet but not exert pressure on the abdomen, and three (C, F, and G) used diapers. During the intervention, three participants (B, E, and H) had spontaneous defecation most of the time, whereas four (A, C, D, and F) showed no changes with either amazake or lactulose. One participant (G) who showed no change with amazake had spontaneous defecation twice while taking lactulose.
Conclusion: Amazake and lactulose were not effective for treating constipation in participants who were unable to sit on a toilet and exert pressure on the abdomen, except for one participant who had spontaneous defecation while taking lactulose. Implications for Nursing: These findings may be used to treat constipation in residents of nursing homes.
Abstract 10 (Taiwan) Improving the Performance Rate of Enhanced Recovery After Surgery (ERAS) in Patients Receiving Urological Surgery: A Preliminary Study
Chen Tien Mei, RN 1, Lin Huei Wen, RN 1, Guo Shu Liu, PhD 1, Wang Yueh Mien, RN 1; 1 Taipei Medical University Hospital, Taipei, Taiwan. Objectives: Evidence indicates that Enhanced Recovery After Surgery (ERAS) promotes rapid, uncomplicated recovery after surgery, benefiting patients while improving quality and saving costs. The results depend on a teamwork approach and continuous audit, consisting of early ambulation, early removal of drains and tubes, early oral intake of fluids and solids, and appropriate pain management. This study aimed to use a multiple-intervention program to improve the performance of the ERAS protocol in patients receiving urological surgery.
Methods:
In a cross-sectional survey conducted in March 2018, a low performance rate (64.1%) of the ERAS protocol was identified in a urological surgery ward of a teaching hospital in northern Taiwan. Cause-and-effect and root cause analyses of the low performance rate indicated that nurses had insufficient knowledge of ERAS and did not know how to use the complicated patient education materials for ERAS or the standard nursing care procedure of the ERAS protocol. The intervention program included a nurses' education program covering the ERAS protocol for patients receiving urological surgery under general anesthesia and the standard nursing procedure of ERAS, the development of ERAS patient education materials for urological surgery, and the evaluation of the standard operating ERAS nursing care for those patients. All participants completed a satisfaction questionnaire about nursing care during their hospital stay.
Results: A total of 480 patients were included from March 2018 to February 2019. The performance rate of the ERAS protocol improved significantly, from 64.1% to 96.7%, in patients receiving urological surgery under general anesthesia. In addition, the mean length of hospital stay decreased by about one day, from 3.65 days to 2.64 days. Patients also reported an increased score on the satisfaction questionnaire about nursing care during the hospital stay, from 86% to 95%. The mean scores on pre- and post-tests of nurses' knowledge of ERAS increased significantly, from 80 to 94.8.
Conclusion:
The results indicate that this intervention increased the performance of the ERAS protocol for patients undergoing urological surgery while improving healthcare quality and reducing hospital time. Nurses also increased their knowledge of ERAS through the education program. This would further enhance the quality of patient care and reduce healthcare costs.
Abstract 11
Patient and caregiver experiences in home care with implantable Ventricular Assist Devices in Japan: an examination of illness beliefs using the lifeline method. Kayo Nagano, PhD, MSN, RN 1, Midori Kamizato, PhD, PHN, RN 1; 1 Okinawa Prefectural College of Nursing, Naha City, Japan. Background: In Japan, the number of transplant donors is very small, and patients needing heart transplantation are required to wear an implantable Ventricular Assist Device (VAD) and face a long waiting period of 3 to 5 years. However, few research reports have revealed patients' long-term home care experiences in Japan, and it is difficult to say that there is currently enough support for caregivers. For long-term home care with VADs in Japan, it is necessary to deepen the understanding of the experiences of patients and caregivers and to examine their support.
Purpose: To clarify the experiences of patients and caregivers in home care during the process from worsening heart failure to living with a VAD.
Methods:
We targeted patients and caregivers who had been discharged home and had experienced home care for more than 3 months with a VAD in a prefecture in Japan. The purpose of the research was explained, and those who provided consent were regarded as the research participants. We described the highs and lows from the time of worsening heart failure to the interview day as a single trajectory (lifeline method), and, based on that, we conducted separate semi-structured interviews with the patients and caregivers. The obtained results were analyzed according to the time of heart failure worsening, the time of VAD implantation, and the period of home care, as well as by caregiver attributes.
Results: We interviewed 7 VAD patients and 7 primary caregivers. The patients' trajectories dropped at the time of heart failure worsening and rose after VAD surgery, but dropped again from the second to the third year of home care and rose once more after the third year. A patient in the third year after VAD implantation described drawing strength from the thought of "keeping up the fight and fighting the disease," as well as a mental struggle after a painful period of keeping up the fight while awaiting heart transplantation. On the other hand, the caregivers' trajectories showed various ups and downs, and the narratives of their experiences were characterized by their patients' attributes. In particular, the lifelines of one patient and the caregiver, the patient's mother, drew similar trajectories and showed a disease experience that closely reflected the patient's beliefs.
Conclusions:
The long-term transplant waiting period must be supported by considering the patients' experiences along the disease and VAD timeline, as well as the caregivers' experiences in light of their patients' attributes.
Abstract 12
Cultural Care for Dying Patients in Okinawa and Taiwan. Sayuri Jahana, PhD, RN 1, Midori Kamizato, PhD, RN 1; 1 Okinawa Prefectural College of Nursing, Naha, Okinawa, Japan. Background: Understanding the cultural background of patients and families is crucial in providing care. What patients and families believe is heavily influenced by their culture, especially when death is approaching.
Objective:
The purpose was to explore how nurses should care for dying patients and their families in Okinawa and Taiwan. Methods: Eight nurses in hospitals and Palliative Care Units (PCUs) were interviewed about how they have given culture-based care to dying patients and bereaved families. Data analysis was conducted using a qualitative descriptive method. The research was conducted after approval by the College ethics review panel.
Results:
People of the Okinawa Islands believed that death was a rite of passage in which the spirit and body separated from each other. When a person died outside of his home and the body was brought home for the funeral, it was believed that the spirit remained at the location of his death. To ensure that the spirit returned home, a ritual called "NUJIFA" was performed at the site of the death. When a patient passed away, the nurses in the hospital and PCUs in Okinawa offered the family the chance to perform NUJIFA, either by the family itself or with instruction from the funeral parlor. In addition, nurses in the PCU allowed the family to perform NUJIFA not only in the patient's room but also in the bathroom, when the family thought it necessary.
In Taiwan, people wish to die at home because they believe a soul would not be able to return home from a hospital. When a patient passed away in the hospital or PCU, nurses offered the family time and space to pray to Amida Buddha for 8 hours after the patient's death in a special room. In the ICU of the hospital, a nurse played a chant to Amida Buddha for a dying patient who was Buddhist. Many individuals of the Buddhist faith may attribute pain and suffering to bad karma. For this reason, one Buddhist patient in the PCU never used pain medication despite severe pain; she just prayed and endured. A nurse told her a metaphor about God sending people to help, and she finally accepted pain medication.
Conclusion: Similar culture-related care was found in both Okinawa and Taiwan. In both places, people wanted to die at home and wanted rituals for the souls of the dead. It is important to be sensitive to culture and beliefs in order to provide care for dying patients and families coping with bereavement. Future studies are needed in order to develop a care program for nurses. Implications for Nursing: It is important to be sensitive to culture and beliefs in order to provide care for dying patients and families coping with bereavement.
Abstract 13
This study aims to explore how graduates of the EWHA-Cambodia UHS (University of Health Science) Partnership Program perceive their experience and the effects of the program. We performed the study as part of the evaluation of the 2nd project (2017-2018), which ran a 2-year nurse bridge program for a bachelor's degree at UHS for nurses with an associate's degree in nursing (ADN). The purpose of the program was to strengthen the nursing and research competencies of nursing leaders in Cambodia.
The data for this study consisted of a survey and focus group interviews. The participants in the survey were 38 out of 45 graduates of the nurse bridge program (NBP) from the 1st project (2015-2016). The questionnaire consisted of questions about the most memorable and most challenging aspects of the program. The collected questionnaires were analyzed using SPSS. The participants in the Focus Group Interview (FGI) were three nurses who had participated in the survey. The FGI was conducted to explore participants' perceptions of the program. Interview questions covered satisfaction with the program and changes brought about by the program in terms of nursing, research, and clinical competence. The interviews were recorded and transcribed for detailed analysis.
According to the results of the survey, 31.8% of the participants responded that conducting research, upgraded knowledge, and group activities were the most memorable aspects, while 51.8% of the participants answered that the most challenging aspects of the program were language and research. Graduates who earned a bachelor's degree through the program reported that they received additional benefits such as promotion, salary increases, and employment. They described being able to use what they learned in the lectures and to minimize their mistakes in caring for patients after the program. The participants perceived that they had improved not only their nursing but also their English and IT skills. They suggested that lecturers in the program should have better English skills so that they can communicate more with students. Furthermore, they wished that a master's course would be opened in this program so that they could pursue a higher degree. The results show that the nurse bridge program plays a significant role for vulnerable nurses in Cambodia in improving their competencies as nursing leaders. This study suggests that the nurse bridge program will improve the level of nursing and healthcare in Cambodia by creating nursing professionals with nursing and research competencies.
The implication for nursing is to improve the level of nursing and healthcare in Cambodia by creating nursing professionals with nursing and research competencies.
Type D Personality and Caring Ability in Nursing Students: The Mediating Effect of Emotional Intelligence and Resilience
Juyeon Lee, MSN, RN 1, Sookyoung Kim, PhD, RN 1; 1 CHA University, Pocheon, S. Korea. Background: Nursing is a practical discipline that cares for each individual with his or her own unique characteristics and circumstances in mind. In recent years, caring has attracted attention as a major concept in nursing. According to previous research, nursing students' caring ability is affected by emotional intelligence and resilience. However, there have not yet been studies on whether emotional intelligence and resilience have mediating effects on the relationship between nursing students' type D personality and caring ability.
Purpose: The aim of this study was to examine the mediating effects of emotional intelligence and resilience on the relationship between type D personality and caring ability. Methods: The participants in this study were 3rd- and 4th-year students in two nursing departments in G-Do who understood the purpose of the study and volunteered to participate. Type D personality, caring ability, emotional intelligence, and resilience were measured using 278 questionnaires collected from October 1 to October 12, 2018. Data were analyzed using Pearson's correlation and multiple regression with Hayes' PROCESS macro.
Results: In the regression analysis, in step 1-1, type D personality had a significant effect on emotional intelligence (B=-6.80, p<.001). In step 1-2, type D personality had a significant effect on resilience (B=-6.77, p<.001). In step 2, type D personality and emotional intelligence had significant effects on caring ability (B=-7.20, p=.001). Emotional intelligence had a significant mediating influence on the relationship between type D personality and caring ability, while resilience did not.
Conclusion:
Through this study, in order to improve the caring ability of nursing students, it is necessary to identify nursing students with type D personality, to apply intervention programs for type D personality, and to develop and apply emotional intelligence enhancement programs.

Background: Insomnia in nurses results in poor health and poor patient outcomes. However, longitudinal research on insomnia in newly graduated nurses is limited worldwide. A recent systematic review suggested that longitudinal studies are needed to identify what factors make individuals more vulnerable to the development of sleep problems.
Objectives:
The objective of this study was to examine the course of, and the factors associated with, longitudinal changes in insomnia severity in newly graduated nurses.
Methods:
The sample consisted of 200 participants, generating 800 observations of insomnia severity during their first year of nursing. The participants, who were recruited from two hospitals in Taiwan, completed a package of instruments at study entry and every 3 months up to one year. Accounting for the repeated measures within participants over the first year after graduation from nursing school, growth curve modeling (GCM) and growth mixture modeling (GMM) were performed to assess the nature of the insomnia patterns and the magnitude of the growth curves. Descriptive analyses were conducted using SPSS, and Mplus 8 was used for the GCM and GMM.
Results: Baseline insomnia severity scores were available for 279 nurses, of whom 65.6% had insomnia. The results of the multiple regression GCM indicated that educational attainment significantly predicted the growth rates of insomnia severity in newly graduated nurses. There was a significant difference in the growth rates of insomnia severity between those who had a BSN (Bachelor of Science in Nursing) degree and those who had an ADN (Associate Degree in Nursing) (p = .012), with those holding an ADN showing worse insomnia across time points. Moreover, occupational stressors at each time point were significantly associated with worse insomnia severity across time (all p < .001).
Conclusions:
The findings contribute to the knowledge that educational attainment predicts the growth rates of insomnia in nurses. Further studies should consider more specific social and organizational factors to gain insight into the complexities of insomnia severity, and should consider developing different intervention strategies for this homogeneous group of nurses with insomnia. Using growth trajectories as predictors of nurses' retention also warrants study.
Implications for Nursing:
A higher level of educational attainment made nurses more resilient to the development of severe insomnia. Attention to these aspects could potentially contribute to better management of stress and insomnia symptoms in newly graduated nurses.
Abstract 16
Specialty
Background: Because the symptoms of dementia are difficult to treat, there is a compelling need to clarify care practices that match the stage and living conditions of people with dementia.
Purpose: The purpose of this study is to identify what healthcare professionals engaged in dementia care (e.g., nurses, care workers) focus on when they interact with people with dementia, their families, and physicians, at each stage (i.e., mild, moderate, or severe) of dementia.
Methods: This was a qualitative study, and data were collected at medical institutions, healthcare service facilities for the elderly, nursing homes, and group homes. For this study, 13 nurses and 9 healthcare workers were interviewed. The interviews were analyzed using content analysis.
Results:
The results showed that in the period from the onset of dementia to the mild stage, healthcare professionals engaged in dementia care focused on listening carefully to dementia patients and their families to understand their situations and on watchful waiting until intervention became necessary. However, professionals seemed to pay little attention to interventions that could have a meaningful therapeutic effect on dementia. In the moderate stage, professionals focused on discovering the hidden "true personality" of dementia patients by analyzing information on displayed behaviors, i.e., assessment of behavioral and psychological symptoms of dementia (BPSD). Regarding efforts to create a comfortable therapeutic environment for patients, professionals made adjustments to the care environment, focusing on "human living," to ensure that family and physician involvement in dementia care was not interrupted. In the moderate to severe stage of dementia, professionals focused their efforts on understanding dementia patients' intentions and worked with physicians to help dementia patients experience a comfortable end of life, and with families to help them spend peaceful time together. However, the results were unclear concerning preventive care for complications and concrete support for decision-making about care.
Conclusions:
These results show that professionals engaged in dementia care are required not only to possess the ability to act and the management capability to deliver necessary care for dementia patients and their families in cooperation with multiprofessional team members, but also to comprehensively assess dementia patients so as not to deteriorate their quality of life.
Abstract 17
Osaka University, Suita, Japan
Background: In Japan, a policy was proposed to discharge psychiatric inpatients within 1 year and to create community healthcare for those patients. However, it is difficult to provide support for them in communities with scarce social resources.
Purpose: The purpose of this research is to identify and resolve issues in supporting and enabling people with mental illness to live fuller lives in communities with scarce social resources, and to improve mental healthcare services in such communities through cooperative programs between researchers and local nurses.
Methods:
To understand the issues in supporting people with mental illness, we conducted group discussions and interviews with specialists in the community (Action 1). We held three workshops and, after the workshops, administered a questionnaire to evaluate the participants' experiences and satisfaction (Action 2). To clarify the roles of public health nurses in supporting people with mental illness, we observed one nurse's activities and interviewed other professionals (Action 3). To clarify home-visit nursing care for people with mental illness, we analyzed three cases of home-visit nursing care and data from interviews with home-visit nurses (Action 4).
Results:
In Action 1, the identified mental health issues included mental healthcare for older people and children and measures for untreated mentally ill people. Supporters could not collaborate sufficiently with other professionals, public health nurses felt that their roles in supporting people with mental illness were unclear, and home-visit nurses did not feel confident in their care for the mentally ill. In Action 2, participants shared the strengths of and issues in their mental healthcare services and were able to empower each other through the workshops. In Action 3, six roles of public health nurses became clear, including developing a network as a basis for support. In Action 4, four effective forms of care were defined, including user- and family-oriented care and collaboration with the user's family. After completing the Actions, the public health nurses and home-visit nurses decided to hold support meetings on a regular basis.
Conclusions:
These Actions led to collaboration in actual cases. From the standpoint of public health nurses and home-visit nurses, we were able to suggest the importance of care services that enable people with mental illness to live in communities with limited social resources for mental health.
Developing A Decision-Making Support System For Health Literacy In Multi-ethnic Females
Background: Health literacy influences health behaviors, health utilization, patient-physician interaction, and disease prevention and treatment. Women's health literacy influences not only their personal health behaviors but also the health of their families.
Purpose: The purpose of this study is to evaluate the effectiveness of a females' health literacy decision-making support system (DMSS) among multi-ethnic females in Taiwan.
Method:
A one-group pretest-posttest pre-experimental design was used. Participants who were Ho-Lo, Hakka, Aboriginal, Mainlander, or Southeast Asian were recruited using convenience sampling. The questionnaires comprised a demographic data sheet, the General Self-Efficacy Scale, the Self-Directed Learning Instrument, the Adolescents' Health-Promoting Behavior Scale, and the Taiwan Female Health Literacy Scale. Data were analyzed with descriptive statistics and paired t tests.
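Since the analysis above reduces to descriptive statistics plus a paired t test on the completers, a minimal sketch of that computation is shown below. The scores are simulated placeholders, not the study's data, and all variable names are assumptions.

```python
# Sketch of a one-group pretest-posttest comparison with a paired t test.
# Data are simulated stand-ins for the 99 completers' literacy scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(loc=60, scale=10, size=99)        # hypothetical pretest scores
post = pre + rng.normal(loc=5, scale=8, size=99)   # hypothetical posttest scores

# Descriptive statistics
print(f"pre:  mean={pre.mean():.1f}, sd={pre.std(ddof=1):.1f}")
print(f"post: mean={post.mean():.1f}, sd={post.std(ddof=1):.1f}")

# Paired t test for the within-group pre/post change
t, p = stats.ttest_rel(post, pre)
print(f"paired t = {t:.2f}, p = {p:.4f}")
```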
Results: A total of 1,511 multi-ethnic females visited the DMSS under study; 99 completed both pretest and posttest. A large proportion of the participants were Taiwanese, single, employed, had an educational level of college or university or higher, and had a family income between 50,000 and 100,000 NTD. In terms of learning intentions, the Ho-Lo had the highest intention to learn the menstrual health, menopause health, and reproductive health modules; the Hakka had the highest intention to learn the women's specific disease module; and new immigrants had the highest intention to learn the female cancer module. In terms of learning behavior patterns, the Ho-Lo and Hakka tended to take the tests first and then select learning modules, whereas Aboriginal people and new immigrants tended to enter the learning theme before taking the subject test. Comparative analysis of pretest and posttest data showed that participants' self-efficacy, self-directed learning, health promotion, and female health literacy differed significantly (p < .01).
Conclusion:
These results indicate that the designed DMSS was effective in increasing females' health literacy and health-promoting behaviors, and it might be widely used by all females. For further implementation, healthcare providers need to recognize that there are different learning intentions and behavior patterns of health literacy among multi-ethnic groups of women in Taiwan. This study was funded by the Ministry of Science and Technology (106-2511-S-255-001, 106-2511-S-255-003-MY3).
Implications for Nursing:
The designed DMSS was effective in increasing females' health literacy and health-promoting behaviors and might be widely used by all females. For further implementation, healthcare providers need to recognize that there are different learning intentions and behavior patterns of health literacy among multi-ethnic groups of women in Taiwan.
Methods:
The cognitive behavior program was developed by analyzing previous studies, conducting in-depth interviews with fibromyalgia syndrome patients, drawing on cognitive behavior theory to establish the program contents, recruiting experts to test its validity, and conducting a preliminary survey. To confirm the effect of the program, this study used a randomized controlled trial design. The subjects were outpatients who had been diagnosed with fibromyalgia syndrome at University Hospital D, Busan. The 34 patients in the experimental group took part in the cognitive behavior program, which comprised 8 sessions (90 to 120 minutes each) based on cognitive behavior theory, delivered over 8 weeks. The 34 patients in the control group were allowed to participate in the same program after the experimental group's intervention had ended. The collected data were tested with the Kolmogorov-Smirnov test in SPSS Statistics 24.0, and hypothesis testing was performed using repeated-measures ANOVA and the Friedman test.
Conclusions:
In conclusion, the changes in thought produced by the cognitive behavior program for patients with fibromyalgia syndrome appear to have had a positive effect on their physical conditions, emotions, and behaviors, which is in line with the cognitive behavioral model. This program is expected to be usable in the future as an effective nursing intervention to help fibromyalgia syndrome patients improve their disease conditions.
Abstract 20
A meta-analysis for exercise effects in ankylosing spondylitis: Focused on the pulmonary function Eun Nam Lee, PhD 1 , EunJeong Kim, Doctoral student 1 , Eun-Jeong Yu, Doctoral student 1 , 1 Department of Nursing, Dong-A University, Seo-gu, S. Korea Purpose: Ankylosing spondylitis (AS) is characterized by stiffness and decreased mobility of the spine due to inflammation and structural damage of the spine. When the movement of the chest wall is restricted, pulmonary function is negatively affected. These functional impairments and disability are serious problems that lead to decreased quality of life. Although exercise has been recommended as a nonpharmacologic treatment, comprehensive attempts to understand the effects of exercise interventions associated with restricted pulmonary function have been lacking. The purpose of this study was to analyze the effects of exercise on pulmonary function in AS.
Methods: We searched MEDLINE, EMBASE, CENTRAL, and CINAHL (through January 2019) to identify RCTs on the effectiveness of exercise therapy on pulmonary function in adults with AS. Three reviewers independently extracted data and assessed methodological and therapeutic validity (using the Cochrane risk-of-bias tool).
Results:
The search retrieved 353 articles, of which the studies describing 13 interventions (438 patients) fulfilled our inclusion criteria. VO2 peak values were significantly higher in studies involving aerobic exercise (3.446 (0.24; 0.88)); FVC improved in the inspiratory muscle training group (3.459 (0.47; 1.69)); and CE showed a significant effect in most exercise interventions (6.520 (0.41; 0.75)). As a secondary variable, BASFI improved in all studies (-6.597 (-0.67; -0.37)).
Conclusion: All included studies aimed to improve the functional status of people with AS, but none focused on pulmonary function, so it was difficult to grasp the close relationship between the variables. Even though these studies were RCTs, exercise-modality factors such as monitoring, adequate dosing, and personalization were rarely addressed in the programs. The exercise forms were mainly aerobic and strength exercise, and several types of exercise were combined. Pulmonary involvement is common in AS and might disturb functionality and exercise modality. This study was meaningful in that it provided an idea of how to view pulmonary function as a major variable for evaluating the effect of exercise and determining the most useful type of exercise program for AS patients.
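The pooled effect sizes and confidence intervals reported above rest on standard meta-analytic machinery; a hand-rolled sketch of fixed-effect inverse-variance pooling follows. The per-study effects and standard errors are hypothetical placeholders, not the review's extracted data, and whether the authors used a fixed- or random-effects model is not specified here.

```python
# Fixed-effect inverse-variance pooling of standardized mean differences.
# Effect sizes and standard errors below are illustrative placeholders.
import numpy as np

effects = np.array([0.55, 0.40, 0.72])    # per-study SMDs (hypothetical)
se = np.array([0.20, 0.15, 0.25])         # per-study standard errors (hypothetical)

w = 1.0 / se**2                           # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)  # weighted mean effect
pooled_se = np.sqrt(1.0 / np.sum(w))
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled SMD = {pooled:.3f}, 95% CI = ({lo:.2f}; {hi:.2f})")
```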
Structural Equation Modeling of Health-Related Quality of Life in Patients with Systemic Lupus Erythematosus: Application of Resourcefulness Theory
Eun Nam Lee, PhD, RN 1 , Eun Hui Choi, PhD, RN 2 , Moon Ja Kim, MSN, RN 1 1 Dong-A University, Busan, S. Korea 2 Masan University, Changwon-si, S. Korea
Objectives:
The present study is a structural equation modeling study that developed a hypothetical model to explain the health-related quality of life in lupus patients by using Zauszniewski's resourcefulness theory and verified the goodness of fit (GOF) of the model.
Methods:
According to the resourcefulness theory, the disease activity and social network of lupus patients were set as antecedent factors, positive cognition as a process regulator, and physical and psychological quality of life as outcome indicators to construct the hypothetical model in this study. Data were collected from patients who were diagnosed with lupus and were receiving outpatient treatment at D University Medical Center in B Metropolitan City, Korea, from June 1 to August 30, 2018, and a total of 252 patients' data were included in the final analysis. The collected data were analyzed using the SPSS Statistics 24.0 and AMOS 25.0 programs.
Results:
The validity test of the hypothetical model revealed that GOF was indicated by χ2/df (2.51), SRMR (.06), RMSEA (.08), GFI (.92), AGFI (.87), TLI (.91), and CFI (.94), demonstrating that the figures met the recommended levels. The model analysis showed that the resourcefulness of lupus patients directly affected their physical and psychological quality of life, and that disease activity did not affect their resourcefulness, but their social network did. Positive cognition was found to have a mediating effect among disease activity, social network, and resourcefulness. Although disease activity directly affected both physical and psychological quality of life, social network had a direct effect only on psychological quality of life.
Conclusions:
In conclusion, the resourcefulness of lupus patients affected their physical and psychological quality of life, and resourcefulness was influenced by disease activity and social network, with positive cognition serving as a mediating factor in the process. Based on these findings, the authors propose the development of interventions that can enhance resourcefulness to improve the health-related quality of life of lupus patients.
A Novel Approach for Measuring Socioeconomic Factors Associated with Cardiovascular Health Among Older Adults in South Korea
Chiyoung Lee, MSN 1 , Eun-Ok Im, PhD, MPH, RN, CNS, FAAN 1 1 Duke University, School of Nursing, Durham, USA Background: Cluster analyses meaningfully interpret the interrelated socioeconomic status (SES) components of important health outcomes, enabling researchers to identify vulnerable groups. However, this approach has thus far been underused in the cardiovascular field. Older adults in South Korea comprise a unique group with great socioeconomic variability.
Purpose:
The study aimed to identify socioeconomic clusters of older adults and to compare cardiovascular outcomes among the identified clusters.
Methods:
A cross-sectional analysis was performed using data from 3,303 older adults (over 65 years; 56.5% female) who participated in the Korean National Health and Nutrition Examination Survey (2016-2017). A two-step cluster analysis was used to identify older adults' socioeconomic clusters based on eleven SES factors. Socioeconomic levels were assigned to the clusters contingent upon which SES variables were more prevalent in each cluster. Comparison tests (chi-square and ANOVA) were performed to validate the cluster solution and explore differences between the identified clusters. In addition, logistic and linear regression analyses estimated risk values for the prevalence of cardiovascular diseases (CVDs) and associated risk factors.
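SPSS's two-step clustering has no exact open-source equivalent, so the sketch below approximates the pipeline with standardization, k-means (k = 3), and a chi-square validation step on synthetic data. The column names are invented stand-ins for the eleven SES factors, not the survey's variables.

```python
# Approximate two-step clustering with standardization + k-means, then
# validate the solution with a chi-square test on an outcome, as in the text.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "income": rng.normal(50, 15, n),           # stand-ins for the 11 SES factors
    "education_years": rng.integers(0, 20, n),
    "home_ownership": rng.integers(0, 2, n),
    "hypertension": rng.integers(0, 2, n),     # example outcome to compare
})

ses_cols = ["income", "education_years", "home_ownership"]
X = StandardScaler().fit_transform(df[ses_cols])
df["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

table = pd.crosstab(df["cluster"], df["hypertension"])
chi2, p, _, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```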
Results:
A three-cluster solution was selected (p < 0.00): low (N = 715), middle (N = 1,425), and high SES (N = 1,163) groups. After controlling for initial health status and health behavior, decreased odds of having diabetes (OR = 0.74, p < 0.01) and hypertension (OR = 0.70, p < 0.00) were found in the high SES group compared with the middle SES group, and elevated odds of having hypertension (OR = 1.22, p < 0.00), being overweight (OR = 1.20, p < 0.05), and being obese (OR = 1.43, p < 0.00) were found in the low SES group in the same comparison. In a linear regression model, elevated risk differences in 10-year CVD risk levels (RD = 1.38, p < 0.01), total cholesterol (RD = 5.32, p < 0.01), and systolic blood pressure (RD = 2.88, p < 0.01) were observed in the low SES group compared with the middle SES group. However, no significant differences were found in the prevalence of CVDs among clusters.
Conclusion:
Understanding the potential combinations of SES risk factors could facilitate an etiologic understanding of cardiovascular health. Furthermore, older adults in low SES groups should be a crucial target group for the prevention and management of CVD in health promotion interventions.
Implications for Nursing:
This study supported the feasibility of applying clustering to assess varying SES conditions that exist within an older adult population, a method which should be considered in future research planning.
Future Directions in Nursing Research Across the Globe
Eun-Ok Im, PhD, MPH, RN, CNS, FAAN Duke University, USA Abstract: Researchers have emphasized the necessity of nursing research in the practice of professional nursing. Indeed, nursing research has provided solid grounds for evidence-based care that enhances the health outcomes of individuals, families, communities, and health care systems. Nursing research has also shaped policies related to health care within organizations and at the local, state, and federal levels. With advances in communication and transportation technologies, nursing has experienced globalization, as have other fields. However, little has been discussed about future directions in nursing research across the globe. In general, the 2016 NINR strategic plan clearly prescribes future directions for nursing research, comprising four focused areas: symptom science, wellness, self-management, and end-of-life and palliative care. The characteristics and challenges of top NIH-funded schools of nursing also provide directions for future nursing research, as do current trends in nursing in general, which include (a) changing demographics and (b) increasing diversity.
Symposium 1, Abstract 1 - Overview
Background: Of the 48 million older adults (60 years and older) in the United States, an estimated 90% have one common desire: to age well, safely, in their own homes; fulfilling this desire is also the most cost-effective option for the states and the nation. The National Academy of Medicine drew attention to the incapacity and inadequacy of health care systems to deliver high-quality services to the growing older population. We are presenting two symposia from two research sites, Pittsburgh, PA, and Manila, Philippines, as a global collaboration.
Purpose: Our purpose was to assess the HEARTS (Health, Experience of Abuse, Resilience, Technology use, and Safety) of older persons.
Methods: Research Design. A mixed-method sequential transformative research design was used in this research. We used a community-based engaged research approach, as older persons are often stereotyped, marginalized, and silenced in their homes, groups, and communities.
Data Analysis: Quantitative. A detailed descriptive data analysis, including means, standard deviations, percentiles, ranges, and graphic presentations, will be presented. Qualitative. The interview schedule generated robust information from the participants. For the Manila HEARTS, all conversations between investigator and participant were video-taped after informed consent.
Results: Will be provided as each abstract is presented. Identifying Interventions that Enhance Resilience in Filipino Older Persons is among the abstracts presented. The population aged 60 and over is growing faster than all younger age groups globally. Aging is poised to become one of the most significant social transformations of the 21st century, with major implications. Older persons are contributors to world development; their talents and abilities should be woven into policies and programs at all levels. Adding health to years is a global theme for 2020 and beyond.
Symposium 1, Abstract 2
Purpose: The purpose of this paper is to define, clarify, and magnify community-based participatory engagement as a crucial first step in community-based research, using the theoretical framework of Community-Based Engagement, prior to our assessment of the HEARTS (Health, Experience of Abuse, Resilience, Technology use, and Safety) of older persons.
Methods:
This study used a mixed-methods design. We visited and attended community activities in six Pittsburgh areas: South Hills, North Hills, East Liberty, Squirrel Hill, Homewood, and Lincoln-Larimer. From the community gatherings, older persons aged 60 years and older were recruited in the Pittsburgh, PA, area after IRB approval was received.
Results: At the first community meeting, one of the researchers invited those present to join as members of the Community Engagement Panel (CEP), emphasizing that they were not only participants in research but co-researchers. The group developed a 4-member research CEP. This 4-member CEP led the entire group of participants in defining, clarifying, and magnifying the research problem and in engaging participants to ask pointed questions such as how long the research would last and what good they could get from it; these were answered by the CEP and the PI or Co-I.
Conclusions:
The topics discussed were sensitive, and the hesitancy of people of color to be involved in research of any type made it difficult to ascertain clear suggestions from participants regarding interventions that would enhance self-sustained engagement. Engaging neighbors and communities in research as partners, as opposed to the limited role of research participant, creates more relevant outcomes that help resolve the science-community gap.
Symposium 1, Abstract 3
Background: Elder abuse is a global public health issue that has reached alarming levels, with a high prevalence worldwide. The impact of elder abuse can undermine self-confidence, emotional stability, self-esteem, and adaptability. While scholars have made significant progress in addressing elder abuse across different cultures, there is a paucity of studies examining the relationship between the experience of abuse and sleep quality. In older persons, sleep is a diminishing commodity, and the consequences of sleep disturbances and sleep-related impairments can be health- and life-threatening.
Purpose: The purpose of this study is to explore the association between experience of abuse and sleep disturbance among older persons in the Pittsburgh area.
Methods:
We recruited 34 older persons aged 60 years and older in Pittsburgh. Two questionnaires, the EASI (Elder Abuse Suspicion Index) and PROMIS (Patient-Reported Outcomes Measurement Information System), were distributed to evaluate the experience of abuse and general health functions such as anxiety, depression, fatigue, sleep disturbance, sleep-related impairment, and physical function. Quantitative data were analyzed using detailed descriptive and inferential statistical analyses.
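The correlations reported next can be computed as below; Pearson's r is shown for illustration, since the abstract does not name the coefficient, and the abuse and sleep scores are simulated rather than the study's data.

```python
# Correlate a simulated EASI-derived abuse score with a simulated PROMIS
# sleep-quality score; variable names and data are illustrative only.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
emotional_abuse = rng.normal(2.0, 1.0, 34)                          # n = 34 participants
sleep_quality = 8.0 - 0.3 * emotional_abuse + rng.normal(0, 1, 34)  # built-in negative link

r, p = pearsonr(emotional_abuse, sleep_quality)
print(f"r = {r:.3f}, p = {p:.3f}")
```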
Results: Among the PROMIS variables, sleep disturbance was the highest-rated health functional problem (mean = 10.19, SD = 3.297). Results indicate significant negative correlations between sleep quality and emotional abuse (r = -0.301, p = 0.044), sleep quality and financial abuse (r = -0.298, p = 0.046), and sleep quality and physical abuse (r = -0.366, p = 0.018). Additionally, we found significant negative correlations between the extent to which individuals reported feeling refreshed after sleeping and the experience of emotional abuse (r = -0.345, p = 0.023), financial abuse (r = -0.308, p = 0.038), and physical abuse (r = -0.376, p = 0.014). Participants who experienced abuse were prone to lower-quality and less refreshing sleep.
Conclusions:
This research elicited data showing that older persons with abuse experience in the Pittsburgh area are most affected by sleep disturbance. Given these findings, we strongly suggest that future research harness technology such as wearable devices to explore intervention strategies to prevent the health consequences of sleep disturbance and sleep-related impairments in older persons experiencing abuse.
Symposium 1, Abstract 4
Harnessing the Internet, Mobile, and Wearable Devices to Facilitate Health Outcomes
Background:
The impact of trauma and context on survivors' ability to communicate and seek help from healthcare providers is immeasurable. The Internet, mobile, and wearable devices and accessories are widely used as conduits to prevent and manage chronic illnesses, violence, and trauma and to facilitate dialogue between health providers and traumatized populations globally. Given the global challenge of an ever-expanding demand for healthcare management, our research team explored how to harness email and text messaging, and WATCH (Wearable Accessory To Call for Help), to deliver HELP (Health, Education on safety, Legal rights, and Privileges) to women experiencing intimate partner violence (IPV).
Purpose: Our purpose was to (1) explore email, text messaging, and WATCH as methods of delivering HELP and (2) assess which method is efficient and effective. We used Disruptive Innovation (DI) as our conceptual framework (Christensen, 2012).
Methods:
We used a mixed-methods design for data collection and analysis. Quantitative data were collected using self-report questionnaires, and qualitative data were collected via email and text-messaging interviews. Quantitative data were analyzed using detailed descriptive analysis, and comparisons of the effect of the intervention used intention-to-treat principles. Qualitative data were analyzed using a phenomenological approach.
Results:
The HELP intervention, when delivered via email, (1) decreased anxiety (diff. 3.6%), depression (diff. 3.8%), and anger (diff. 4.3%) and (2) increased social support (diff. 14.5%). Qualitatively, the HELP information and intervention were shown to be feasible, acceptable, and effective among IPV survivors in the email group. Results for the text-messaging intervention showed an increase in knowledge scores from pretest to posttest, from 2.00 ± 1.00 to 2.7 ± 0.48 (p < 0.001), and an increase in confidence scores from 2.89 ± 0.60 to 3.30 ± 0.68 (p < 0.001). Only the results from the email and text-messaging projects are reported here.
Conclusions: Barriers to email are that each user must have their own Internet connectivity and must be proficient in using a computer. These barriers are resolved by mobile and wearable devices. We plan to develop the WATCH4HELP (Wearable Accessory To Call for Help); prototyping of the WATCH4HELP is in process.
Symposium 1, Abstract 5
Background: Elder abuse is a multidimensional phenomenon that encompasses a broad range of behaviors, events, and circumstances. Substantial research has addressed its risk factors, prevalence, and consequences, but there is a paucity of studies assessing the relationship between the experience of abuse and the mental health of older persons. The prevalence of anxiety disorders in older people ranges from 1.2% to 15% in the community and up to 28% in clinical settings [1,2]. Older adults experience more stressors, in particular loss of an intimate partner, illness, disability, fears of being a burden on others, impending mortality, and reduced financial support. The consequences of anxiety and related health issues are far-reaching and life-threatening. Nevertheless, in our review of the literature, evidence-based research regarding anxiety levels in older adults was scarce.
Purpose:
The purpose of this study is to explore the association between experience of abuse and anxiety level among older persons in the Pittsburgh area.
Methods: Two questionnaires, the EASI (Elder Abuse Suspicion Index) and PROMIS (Patient-Reported Outcomes Measurement Information System), were distributed to evaluate the experience of abuse and general health functions such as anxiety, depression, fatigue, sleep disturbance, sleep-related impairment, and physical function. Quantitative data were analyzed using detailed descriptive and inferential statistical analyses.
Results: Findings indicate significant negative correlations between fearfulness and physical abuse (r = -0.280, p = 0.054) and between fearfulness and financial abuse (r = -0.406, p = 0.009). Additionally, we found significant negative correlations between the extent to which individuals reported finding it hard to concentrate on anything except their anxiety and the experience of physical abuse (r = -0.318, p = 0.035) and financial abuse (r = -0.324, p = 0.033). Participants who experienced abuse were more prone to anxiety.
Conclusions:
The paucity of research on elder abuse and anxiety levels sheds light on the urgency of evaluating current intervention directions for violence against the elderly. This research elicited data showing that older persons with abuse experience in the Pittsburgh area are more anxious than those without. Future research can explore the emotional needs of and support for abused older adults with holistic care and human interaction.
The Effects of Technology-Based Interventions on Health Outcomes
Wonshik Chee, PhD Duke University, Durham, NC, USA Overview: With advances in computer and mobile technologies, the use of technology-based interventions has increased drastically in recent years. Compared with conventional interventions, technology-based interventions are reported to be effective in providing information and coaching/support, mainly owing to easy access without time or cost constraints. However, little is known about the effectiveness of technology-based interventions on health outcomes.
Purpose: This symposium aims to provide an open forum to discuss the effects of technology-based interventions on health outcomes through three presentations from the same study that tested a technologybased coaching/support program for Asian American breast cancer survivors.
Methods and Results:
The first presentation shows the findings on the effect of sub-ethnicity on the pain and symptom experience (pain management, symptom management, pain experience, and symptom experience) of Asian American breast cancer survivors and the multiple factors influencing those relationships; it presents the pre-test findings on participants' health outcomes in the study. The second presentation addresses the effect of the technology-based coaching/support program on menopausal symptoms of Asian American breast cancer survivors. The intervention group showed a significant decrease in the distress scores of menopausal symptoms over time: physical (β = -0.07, p = 0.08), psychological (β = -0.13, p = 0.05), psychosomatic (β = -0.17, p = 0.06), and total symptoms (β = -0.19, p = 0.01). The final presentation provides the findings on the efficacy of a technology-based intervention in improving cancer pain and its accompanying symptoms in Asian American breast cancer survivors. Only the intervention group showed a significant decrease in total symptom severity scores between pre-test and post-3-months (Δ = -0.30, p = .0125). Through this symposium, implications for future technology-based interventions for breast cancer survivorship are proposed.
Subethnic Differences in Pain and Symptom Experience among Asian American Breast Cancer Survivors
Chi-Young Lee 1 ; Sangmi Kim, PhD, MPH, RN 1 ; Wonshik Chee, PhD 1 ; Eun-Ok Im, PhD, MPH, RN, CNS, FAAN 1 Duke University, Durham, NC, USA Purpose: This study aimed to examine the effect of sub-ethnicity on the pain and symptom experience (pain management, symptom management, pain experience, and symptom experience) of Asian American Breast Cancer Survivors (AABCS) and to determine the multiple factors influencing these relationships.
Methods:
This was a secondary analysis of data from a larger study on the survivorship experience of AABCS; only the data from 94 women were used. Multiple questions on background characteristics, the Perceived Isolation Scale (PIS), the Personal Resource Questionnaire (PRQ-2000), and the Memorial Symptom Assessment Scale-Short Form (MSAS-SF) were used to collect the data. The data were analyzed using chi-square tests, ANOVA, and hierarchical logistic and multiple regression analyses.
Results: Over 93% of the Japanese women were managing their pain, whereas less than 50% of the Chinese and Korean women were doing so (p < .01). Japanese women also experienced less pain (p = .03) and symptom distress (p = .02) than Chinese and Korean women. Being Japanese was a significant factor influencing the women's pain management (OR = 14.63, p = .02), symptom management (OR = 11.17, p = .02), pain experience (β = -0.32, p = .01), and symptom distress (β = -0.24, p = .01). Residential area significantly contributed to variance in symptom management (OR = 10.08, p < .01). In addition, religion and level of acculturation were significantly associated with symptom distress (β = -0.10, p < .01).
Conclusion:
The current study supported the influence of sub-ethnicity on AABCSs' pain and symptom experience, with findings demonstrating that being Japanese was associated with better outcomes. Therefore, healthcare providers need to be aware of these sub-ethnic differences and tailor their cancer care to each sub-ethnic group's unique characteristics. Moreover, management and interventions for better survival outcomes in AABCS can be influenced by residential area, religion, and level of acculturation; these areas of influence should be considered in developing such interventions.
Objectives: This study evaluated the effects of a technology-based information and coaching/support program on menopausal symptoms among Asian American breast cancer survivors.
Methods:
A pretest-posttest randomized controlled trial design was used. Ninety-one Asian American breast cancer survivors were included (intervention group, 42; control group, 49). The program was a theory-driven, culturally tailored, technology-based program to enhance the survivorship of Asian American breast cancer survivors. Background characteristics, menopausal symptoms, and theory-based variables (attitudes, social influence, self-efficacy, and perceived barriers) were measured at three timepoints (pre-test, post-1-month, and post-3-months). For data analysis, an intent-to-treat mixed-model growth curve analysis was conducted using SAS PROC MIXED.
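The growth-curve model was fitted in SAS PROC MIXED; a rough Python analogue with a random intercept per participant is sketched below on simulated long-format data. The column names (id, group, time, distress) and effect sizes are assumptions for illustration, not the study's variables.

```python
# Mixed-model growth curve: distress over time, intervention vs control,
# with a random intercept per participant (analogue of SAS PROC MIXED).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for pid in range(91):                  # 91 survivors: 42 intervention, 49 control
    group = int(pid < 42)              # 1 = intervention, 0 = control
    base = rng.normal(2.0, 0.5)        # participant-specific baseline distress
    for t in [0, 1, 3]:                # months 0, 1, and 3
        slope = -0.19 if group else -0.02
        rows.append({"id": pid, "group": group, "time": t,
                     "distress": base + slope * t + rng.normal(0, 0.3)})
df = pd.DataFrame(rows)

# The group-by-time interaction carries the intervention effect
model = smf.mixedlm("distress ~ time * group", df, groups=df["id"]).fit()
print(model.summary())
```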
Results:
In the homogeneity test, there were no statistically significant differences in characteristics between the control and intervention groups. For the intervention group, there were significant decreases in the total distress scores of menopausal symptoms (β = -0.19, p = 0.01) and in the sub-distress scores: physical (β = -0.07, p = 0.08), psychological (β = -0.13, p = 0.05), and psychosomatic (β = -0.17, p = 0.06). Attitudes, social influences, and self-efficacy regarding breast cancer survivorship partially mediated the intervention effects on the distress scores of menopausal symptoms (p < .10).
Conclusions:
The study results demonstrated the effects of the technology-based information and coaching/support program in relieving menopausal symptoms of Asian American breast cancer survivors. Further research is needed to verify the effects of the program in diverse groups.
Symposium 2: Abstract 4
The Effects of a Technology-Based Program on Pain and Symptoms Wonshik Chee, PhD 1 ; Sangmi Kim, PhD, MPH, RN 1 ; You Lee Yang, PhD, RN 1 ; Chiyoung Lee, MSN, RN 1 1 Duke University, Durham, NC, USA Objective: Pain and its accompanying symptoms are common problems, especially in the first few years of breast cancer survivorship after treatment as well as during the diagnosis and treatment process. Asian American breast cancer survivors reportedly have inadequate cancer pain and symptom management and subsequently report lower quality of life compared with other racial/ethnic groups. Technology-based programs could improve the cancer pain and symptom management process. The purpose of this study was to examine the effects of a technology-based information and coaching/support program on cancer pain and its accompanying symptoms in Asian American breast cancer survivors.
Methods: A randomized pretest/posttest group design was used for the study. The study included 42 Asian American breast cancer survivors in an intervention group and 49 in a control group. The technology-based program was designed to decrease cancer pain and its accompanying symptoms by providing information and coaching/support through computers and mobile devices. Background characteristics and menopausal symptoms were measured using multiple instruments at three time points (pre-test, post-1-month, and post-3-months). The data were analyzed using an intent-to-treat linear mixed-model growth curve analysis.
Results:
Only the intervention group showed a significant decrease in total symptom severity scores between pre-test and post-3-months (Δ = -0.30, p = .0125). Although both groups tended to experience a decrease in the severity scores of physical symptoms over time (p = 0.0712), the rate of decrease was marginally greater in the intervention group than in the control group (p = 0.1032). However, cancer pain and psychological symptoms did not show significant group, time, or interaction effects.
Conclusions:
The findings supported that the program helped reduce the symptom distress of Asian American breast cancer survivors. Further studies with a larger number of Asian American breast cancer survivors are needed to confirm the findings.
Symposium 3: Abstract 1 -Overview
Assessing the HEARTS of Older Persons in Mandaluyong, Philippines PhD, MSN, BSN, RN, FAAN, FACFE University of Pittsburgh School of Nursing, Pittsburgh, PA, USA Background: Of the 48 million older adults (60 years and older) in the United States, an estimated 90% have one common desire: to age well, safely, in their own homes; fulfilling this desire is also the most cost-effective option for the states and the nation. The National Academy of Medicine drew attention to the incapacity and inadequacy of health care systems to deliver high-quality services to the growing older population. We are presenting two symposia from two research sites, Pittsburgh, PA, and the Philippines, as a global collaboration.
Purpose: Our purpose was to assess the HEARTS (Health, Experience of Abuse, Resilience, Technology use, and Safety) of older persons.
Methods: Research Design. A mixed-method sequential transformative research design was used in this research. We used a community-based engaged research approach, as older persons are often stereotyped, marginalized, and silenced in their homes, groups, and communities. Data Analysis: Quantitative. A detailed descriptive data analysis, including means, standard deviations, percentiles, ranges, and graphic presentations, will be presented. Qualitative. The interview schedule generated robust information from the participants. For the Manila HEARTS, all conversations between investigator and participant were video-taped after informed consent.
Results: Will be provided as each abstract is presented.
Summary and Conclusions: Symposium #3 has 4 abstracts: 1) The Experience of Abuse and Resilience of Older Persons in Urban Philippines, 2) The Health and Experience of Abuse of Older Persons in the Philippines, 3) The Treatment of Social Networks and Safety Status of Filipino Older Persons, and 4) Identifying Interventions that Enhance Resilience in Filipino Older Persons. The population aged 60 and over is growing faster than all younger age groups globally. Aging is poised to become one of the most significant social transformations of the 21st century, with major implications. Older persons are contributors to world development; their talents and abilities should be woven into policies and programs at all levels. Adding health to years is a global theme for 2020 and beyond.
Symposium 3: Abstract 2
Background: With the rapid increase of older persons and the economic downturn in the Philippines, several Filipino households face the burden of supporting their older parents and grandparents; for those unable to find the means, the older person is left neglected, financially exploited, or abused. Elder abuse, according to the WHO, is a violation of human rights and a significant cause of illness, injury, loss of productivity, isolation, and despair among older persons globally.
Purpose: The purpose of this research was to assess the HEARTS of older persons in the Philippines, an acronym for H-Health, EA-Experience of Abuse, R-Resilience, T-Treatment, and S-Safety. We aimed to identify interventions that will enhance resilience and address the experience of abuse among Filipino older persons in an urban city in the Philippines.
Methods:
We assessed the HEARTS of older persons using PROMIS to measure health, the EASI to measure the experience of abuse, the CD-RISC to measure resilience, and a socio-demographic questionnaire to assess the socio-demographic characteristics of participants. All measures were translated and back-translated into Tagalog, the Philippine language. All measures in English were found to have good psychometric properties.
Results:
The abuse-resiliency correlation matrix shows a correlation coefficient of 0.008 between the abuse score and the resiliency score. The p-value is 0.941, which is not significant at the set 0.05 level for the two-tailed test. Therefore, the results show no correlation between abuse and resiliency, meaning that even if older persons experience possible abuse, this has no bearing on their resiliency.
Discussion:
According to the World Report on Violence and Health, older people are isolated because of physical infirmities, and the loss of friends and family members further reduces opportunities for social interaction. Social isolation can be a harbinger of neglect. As people grow older, they become less resilient because of frailty and an inability to fend for themselves, and thus they may receive a lower resilience score.
Conclusions:
The study gave evidence of the resilience of Filipino older persons even in times of verbal abuse. Thus, we conclude that primary prevention could be timely and effective in preventing the escalation of verbal/psychological abuse to physical, economic, or sexual abuse.
Symposium 3: Abstract 3
The PROMIS and Experience of Abuse of Older Persons in the Philippines Dorothea Carino-Dela Cruz, PhD 1 ; Rose Constantino 2 ; Pearl Cuevas, PhD, MAN, BSN, RN 2 1 Centro Escolar University, Manila, Philippines 2 University of Pittsburgh School of Nursing, Pittsburgh, PA, USA Background: The Filipino older person poses a great economic challenge as their productivity declines. There is an urgent demand for commitment, action, and research to assess and respond to the various needs of this population group. The Commission on Philippine Human Rights in 2014 documented a total of 760 human rights violation cases involving victims aged 60 and above. Elder abuse reported by the general public ranges from 3.2% to 27.5%, with adult children as the main perpetrators.
Purpose: The purpose of this research was to assess the HEARTS of older persons in the Philippines, an acronym for H-Health, EA-Experience of Abuse, R-Resilience, T-Treatment, and S-Safety. The researchers aimed to identify interventions that will enhance health and address the experience of elder abuse in an urban city in the Philippines.
Methods:
The study explored the HEARTS of older persons through a descriptive quantitative survey using the PROMIS (Patient-Reported Outcomes Measurement Information System) questionnaire to determine health status and the EASI (Elder Abuse Suspicion Index) questionnaire to determine the experience of abuse. Both standardized questionnaires were translated and back-translated into Filipino. The research question was whether experiencing multiple major health events diminishes resilience.
Results:
The health-abuse correlation matrix shows a correlation coefficient of 0.461 between the abuse score and the overall health score. The correlation coefficient for physical health is -0.307; for anxiety, -0.324; for depression, -0.429; and for fatigue, -0.429. The p-value for physical health is 0.006 and for anxiety 0.003; the p-values for depression and fatigue are reported as 0. The overall p-value is reported as 0 and is significant at the set 0.05 level for the two-tailed test. The experience of abuse and the health measures thus yield significant p-values of less than 0.05, and all of the health scores show a moderate negative correlation with abuse.
Conclusions: Abuse in older persons affects their health; an older person who is not experiencing abuse is most likely healthy. The study gave evidence that Filipino older persons are doing all they can to obtain basic necessities and meet their medical needs. However, the study shows that further research is needed to examine the welfare of Filipino older persons to safeguard their health and well-being.
Symposium 3: Abstract 4 The Treatment Of Social Networks & Safety Status Of Filipino Older Persons Experiencing Elder Abuse
Pearl Cuevas, PhD, MAN, BSN, RN 1 ; Rose Constantino 2 ; Elvira L. Urgel 1 1 Centro Escolar University, Manila, Philippines 2 University of Pittsburgh School of Nursing, Pittsburgh, PA, USA To Filipinos, the family is the center of the social structure, and providing care for the older person remains a moral obligation for families in the Philippines. However, with the continuous change in the economy, many families in the Philippines at times can no longer provide the necessities, making them incapable of fulfilling their duty to support the older person under their care.
Purpose: The purpose of this research was to assess the HEARTS of older persons in the Philippines, an acronym for H-Health, EA-Experience of Abuse, R-Resilience, T-Treatment, and S-Safety. The treatment of social networks and participants' safety were assessed in this study. The results could benefit older persons through the comprehensive, person-centered, efficient, and effective delivery of nursing care that is responsive to their needs.
Methods:
The study explored the HEARTS of older persons using an interview schedule with open-ended questions to determine the treatment and safety of older persons. An expert developed a 10-item open-ended questionnaire for the interview schedule. Interviews were tape-recorded and transcribed verbatim; the open-ended questions generated rich information from the participants. Colaizzi's distinctive seven-step process provided a rigorous analysis, with each step staying close to the data.
Results:
We generated findings that resulted in themes for active and frail older persons. The major themes are social network and self. The first theme, social network, includes three subthemes: 1) support from significant others, 2) unhealthy communication, and 3) unfair treatment. The second theme, self, also includes three subthemes: 1) self-diversion, 2) ineffective coping, and 3) effective coping. For the active older persons, the major themes are self and social network, each including three subthemes. We will present each theme, subtheme, and mini-theme in detail for clarity and transparency, including the concepts of autonomy or respect, justice or fairness, and blessing or high regard.
Conclusions:
The study gave evidence that older persons use self-help activities to relieve the pain of abuse through interactions and communication with other people. They are vigilant about their surroundings because they fear for their safety.
Resilience Interventions for Frail Older Persons In The Experience Of Abuse
Elvira L. Urgel, PhD, RN 1 ; Rose Constantino, PhD, RN 2 1 Centro Escolar University, Manila, Philippines 2 University of Pittsburgh School of Nursing, Pittsburgh, PA, USA Background: Assessing the HEARTS of older persons in the Philippines is an important aspect of nursing care. HEARTS, an acronym for H-Health, EA-Experience of Abuse, R-Resilience, T-Treatment, and S-Safety, denotes a comprehensive screening and assessment process that is essential in developing evidence-based interventions.
Purpose: The study aimed to identify interventions that enhance resilience in older persons experiencing depression, anger, anxiety, and elder abuse.
Methods:
The study used a mixed-methods approach with a sequential transformative design, in which participants responded to objective standardized questionnaires. The older person participants (N = 80) were divided into two groups: 40 active older persons and 40 frail older persons, all free from comorbid conditions that might result in memory loss or cognitive impairment. The interview schedule, translated into Filipino, used open-ended questions to determine the treatment and safety of older persons. A registered psychologist was on standby during the interviews for any concerns about respondents' mental well-being. The interviews were tape-recorded and transcribed verbatim.
Results:
Results showed the resilience interventions of frail older persons in the experience of abuse, categorized into two major themes: self-reliance and social networking. One participant stated, "My children are always here to support me and remind me not to overthink. What they do is take me out on a family date"; this person is thankful for her family, who look after her. Those who relied on social networks or significant others were grateful for the communication: "When my friends know that I'm bothered they would talk to me. They will listen to me then they will give advice to not stress over my problems and before I think about them, think about my self first." Respect: "They keep quiet when I get mad. They just follow my instructions." Obedience: "My children don't rebel against me."
Conclusions:
Interventions emphasizing optimism and positive emotions may be particularly effective in building resilience. The study gave evidence that older persons seek the help of social networks in times of stress or problems. A novel finding is that Filipinos take pity on older persons; therefore, their neighbors look after them. The older persons still manage to survive on a daily basis because of the good treatment of their community.
"year": 2020,
"sha1": "9c8579ddff244272e736684f0f5053ff01def83b",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.31372/20190404.1007",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "58297d48f5fee3232d5c07ba77a96e771cbd198e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
NUDT15 genotyping during azathioprine treatment in patients with inflammatory bowel disease: implications for a dose-optimization strategy
Abstract
Background: NUDT15 R139C is an Asian-prevalent genetic variant related to azathioprine (AZA) intolerance in patients with inflammatory bowel disease (IBD). However, it remains unclear how to utilize the genotyping results to improve the step-up dosing strategy with an already low starting dose in Asian practice.
Methods: Clinical data of eligible IBD patients who received AZA therapy and NUDT15 R139C testing were retrospectively collected. The relationship between NUDT15 genotype, AZA doses, and AZA-induced toxicity and efficacy was comprehensively analysed.
Results: A total of 159 patients were included for toxicity analysis. Compared with the wild genotype, patients heterozygous for R139C were more prone to developing myelotoxicity and alopecia (P = 0.007; P = 0.042). In particular, they had a 5.4-fold risk of developing myelotoxicity when the AZA dosage was increased from 25 mg/d to 50 mg/d (P < 0.001). Regarding efficacy, 115 patients who had received AZA for >4 months and maintained clinical remission on AZA monotherapy were included for further analysis. R139C heterozygotes were finally titrated to a significantly lower dose than the wild genotype [median (interquartile range): 0.83 (0.75–0.96) vs 1.04 (0.89–1.33) mg/kg/d, P = 0.001], whereas the clinical remission rates did not differ between groups (P = 0.88).
Conclusions: IBD patients with the R139C heterozygous genotype are highly susceptible to AZA-induced myelotoxicity at an escalated dose of 50 mg/d. Thus, they may require a smaller dose increase after a starting dose of 25 mg/d. The final target dose of these patients could be set lower than that of the wild genotype without compromising efficacy.
Introduction
Azathioprine (AZA) is widely used as a first-line immunosuppressant in the treatment of inflammatory bowel disease (IBD), including ulcerative colitis (UC), Crohn's disease (CD), and IBD-unclassified (IBD-U). This old drug has proven efficacy in steroid sparing and the maintenance of long-term remission [1,2]. When added to anti-TNF antibodies, AZA has been found to exert an extra effect in suppressing the immunogenicity of biologics and thus accelerating mucosal healing. However, AZA has a relatively narrow therapeutic index and can result in substantial toxicity, particularly myelotoxicity, when overdose occurs [3].
The recommended standard dose of AZA in IBD patients is 1.5-2.5 mg/kg/d according to European guidelines, based on clinical trials featuring individuals of European descent [4,5]. However, IBD patients in East Asia seem to tolerate AZA more poorly than Caucasians. Thus, rather than using a standard dose for all patients at initiation, most IBD physicians in East Asia start AZA at lower doses and then gradually titrate to a minimal effective dose to mitigate the risk of overdosing and toxicity [6-8]. Despite this, the incidence of AZA-induced leukopenia in East-Asian patients still reaches approximately 25% at an average low dose of AZA ranging from 0.8 to 1.1 mg/kg/d [9,10], necessitating a search for better dosing methods.
Recently, NUDT15 R139C has been identified as a crucial Asian-prevalent DNA variant that strongly predisposes patients to AZA-related leukopenia during IBD treatment, resulting in reduced dose tolerance [14,15]. Yang et al. [14] showed that the incidence of early severe leukopenia in Korean patients homozygous for R139C was 100% at an initial dose of 25-50 mg/d during AZA treatment; thus, AZA therapy is not recommended in mutant homozygotes of R139C. Nonetheless, a relatively wide variation in AZA tolerance has been observed in R139C heterozygotes [16]. Unfortunately, R139C is not associated with elevated 6-thioguanine nucleotide (6-TGN) levels [3]. It thus remains unclear how to adjust the dosage of AZA in East-Asian IBD patients heterozygous for R139C to minimize toxicity and maximize therapeutic efficacy.
Hence, we conducted this observational study to assess the relationship between NUDT15 genotypes and AZA-induced toxicity, especially myelotoxicity at different doses under a conventional step-up dosing strategy. In addition, we examined the impact of NUDT15 genotypes on the final AZA dose and efficacy, aiming to provide a theoretical basis for NUDT15 genotype-guided individualization of AZA therapy from a combined perspective of safety and efficacy.
Patient selection
A total of 159 IBD patients admitted to Renji Hospital between August 2016 and January 2017 who had received NUDT15 and TPMT genotyping and AZA treatment were retrospectively identified. The inclusion criteria were: (i) diagnosis of CD, UC, or IBD-U; (ii) initiation of AZA treatment at our medical center; and (iii) regular follow-up visits at our medical center. The exclusion criteria were: (i) concurrent use of an immunosuppressant other than AZA; and (ii) non-compliance with AZA administration. The study was approved by the Ethics Committee of Renji Hospital affiliated to Shanghai Jiao Tong University School of Medicine.
AZA treatment
After excluding high-risk homozygotes (NUDT15 TT genotype or TPMT GG genotype), the initial oral dose of AZA was 25 mg/d. If patients tolerated 25 mg/d for 1-2 weeks, the AZA dose was increased to 50 mg/d. After 3 months at 50 mg/d, the clinician adjusted the dose of AZA following a comprehensive evaluation of both tolerance and response. Thereafter, the medication plan was similarly adjusted according to both efficacy and toxicity every 3-6 months. If the patient continued to maintain remission, the evaluation interval could be extended as appropriate.
Toxicity and efficacy evaluation
Myelosuppression was defined as a white blood cell count (WBC) <3.5 × 10⁹/L or an absolute neutrophil count (ANC) <2.0 × 10⁹/L. Myelotoxicity was defined as the occurrence of myelosuppression or an acute decline in blood counts approximating the numerical criteria of myelosuppression that required drug discontinuation. The grading of myelosuppression was based on the Common Terminology Criteria for Adverse Events version 3.0 as follows: WBC <3.0 × 10⁹/L or ANC <1.5 × 10⁹/L defined grade II; WBC <2.0 × 10⁹/L or ANC <1.0 × 10⁹/L defined grade III; and WBC <1.0 × 10⁹/L or ANC <0.5 × 10⁹/L defined grade IV myelosuppression. Liver dysfunction was defined as alanine aminotransferase more than two times the upper limit of normal, or a short-term rapid rise that required withdrawal of the drug. Pancreatitis was required to meet the diagnostic criteria for pancreatitis: clinical symptoms combined with an increase in serum amylase to at least three times normal. After AZA initiation, a full blood count was performed once per week for the first month, once per 2 weeks for the second and third months, and monthly thereafter. Routine assessments of liver and kidney function and serum amylase were performed every month.
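These cutoffs translate directly into code; a minimal sketch follows, with counts in units of 10⁹/L. The grade I threshold is inferred from the myelosuppression definition above rather than stated explicitly, so treat it as an assumption.

```python
# Transcription of the myelosuppression grading cutoffs in the text.
# Counts are in 10^9/L; grade 1 cutoffs are inferred from the definition
# of myelosuppression (WBC <3.5 or ANC <2.0), which is an assumption.
def myelosuppression_grade(wbc: float, anc: float) -> int:
    """Return the CTCAE-style grade (0 = no myelosuppression)."""
    if wbc < 1.0 or anc < 0.5:
        return 4
    if wbc < 2.0 or anc < 1.0:
        return 3
    if wbc < 3.0 or anc < 1.5:
        return 2
    if wbc < 3.5 or anc < 2.0:
        return 1
    return 0

print(myelosuppression_grade(wbc=2.8, anc=1.6))  # -> 2
```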
Clinical remission was defined as a Harvey-Bradshaw Index (HBI) ≤4 for CD and a Simple Clinical Colitis Activity Index (SCCAI) ≤2 points for UC and IBD-U [17-19]. The duration of remission maintenance on AZA monotherapy was defined as consecutive months in clinical remission without concurrent use of steroids or infliximab (IFX).
Data collection
Baseline patient information included sex, ethnicity, type of IBD, genotyping results of NUDT15 and TPMT, age at diagnosis, disease location, history of intestinal resection, dates and doses of AZA at initiation, indication for AZA, type of induction therapy (enteral nutrition, steroids, IFX), concomitant or induction therapy with 5-aminosalicylates (5-ASA) at initiation, and disease-activity indices (HBI for CD; SCCAI for UC and IBD-U); follow-up data on AZA doses, blood tests, and disease-activity indices every 3 months were also retrospectively collected from medical records. The peak dose of AZA was defined as the dose at which the patient developed an adverse reaction or, in the absence of adverse reactions, the dose that was stable for 3 months or more at the last follow-up visit. The final dose of AZA was defined as the dose that patients had used for 3 months or more at the last follow-up visit.
Statistical analyses
The results were analysed with the statistical package R version 3.4.3 (The R Foundation, Vienna, Austria) and EmpowerStats (X&Y Solutions, Inc., Boston, MA, USA). Quantitative variables were expressed as median (interquartile range) and compared between groups with the Mann-Whitney U test. Categorical variables were expressed as number and percentage and compared using the chi-square or Fisher's exact test. Ranked data were compared by the Mann-Whitney U test. Hardy-Weinberg equilibrium was checked by the chi-square test. Multivariate logistic regression was performed to control for potential confounding factors. Kaplan-Meier survival curves were plotted to estimate the cumulative proportion of patients in sustained clinical remission and were compared by the log-rank test. A P-value of <0.05 was considered statistically significant.
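As an illustration of the survival comparison, a hedged Python sketch using the lifelines package follows (the study itself used R and EmpowerStats; the data-frame column names here are hypothetical).

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_remission(df: pd.DataFrame) -> float:
    """df needs columns: months_in_remission, relapsed (0/1), genotype.
    Plots Kaplan-Meier curves per genotype and returns the log-rank P-value."""
    cc = df[df.genotype == "CC"]
    ct = df[df.genotype == "CT"]
    km = KaplanMeierFitter()
    km.fit(cc.months_in_remission, cc.relapsed, label="CC")
    ax = km.plot_survival_function()
    km.fit(ct.months_in_remission, ct.relapsed, label="CT")
    km.plot_survival_function(ax=ax)
    res = logrank_test(cc.months_in_remission, ct.months_in_remission,
                       cc.relapsed, ct.relapsed)
    return res.p_value
```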
Patient characteristics
A total of 164 IBD patients were initially identified. Three patients with incomplete clinical data and 2 who did not use AZA regularly as prescribed were subsequently excluded, leaving 159 IBD patients included in this study. The characteristics of the cohort are presented in Table 1; all patients reported Han ethnicity. The genotype frequencies of both NUDT15 and TPMT did not deviate from Hardy-Weinberg equilibrium.
NUDT15 and adverse reactions
Details about adverse reactions and relevant clinical characteristics grouped by NUDT15 genotype are presented in Table 2. The peak doses (mg/kg/d) of patients with the CT genotype were significantly lower than those of patients with the CC genotype (P = 0.015). Apart from peak doses, no other baseline characteristics differed significantly between the genotypes.
The incidences of myelotoxicity and myelosuppression in CT genotype patients were significantly higher than those in CC genotype patients (P = 0.007 and 0.003, respectively). In addition, patients with the CT genotype were more prone to alopecia than those with the CC genotype (P = 0.042), and all cases occurred at a low dose of 25 mg/d. Other types of adverse reactions were not associated with the NUDT15 genotype.
NUDT15 and myelotoxicity during dose adjustment
Information about NUDT15 genotype and myelotoxicity at different doses under the step-up dosing strategy is summarized in Table 3. There was no significant difference in myelotoxicity incidence between the CT and CC genotypes at the initial dose of 25 mg/d. However, when the dose was increased to 50 and 75 mg/d, the incidence of myelotoxicity in the CT genotype was significantly higher than in the CC genotype (P = 0.001 and 0.039, respectively). In a logistic-regression analysis including concomitant 5-ASA, weight, and CT genotype in the subgroup of patients whose doses were escalated to 50 mg/d, NUDT15 heterozygotes had a 5.4-fold risk of myelotoxicity compared with wild-type patients (Table 4).
NUDT15 and efficacy
Only the 115 patients who used AZA for 4 months or more and took AZA monotherapy for remission maintenance were included in the efficacy analysis. Information about AZA usage and relevant clinical characteristics in the different genotypes is summarized in Table 5 and Supplementary Table 1 (according to the Montreal classification). Except that the final maintenance doses of patients with the CT genotype were significantly lower than those with the CC genotype, there was no significant difference in baseline characteristics between the two groups. As for therapeutic efficacy, the proportion of patients in clinical remission on AZA monotherapy was comparable between the CC and CT genotype groups (78.4% vs 80.7% at 12 months, and 67.8% vs 70.6% at 24 months; log-rank P = 0.88), as shown in Figure 1. Similar results were obtained in a subgroup analysis of patients with CD (75.3% vs 79.3% at 12 months, and 64.4% vs 68.0% at 24 months; log-rank P = 0.78; Supplementary Figure 1). The sample of patients with UC and IBD-U was too small to analyse.
Discussion
NUDT15 R139C has recently been established as an important pharmacogenetic predictor of AZA intolerance in East-Asian IBD patients [15,20]. Clinical guidelines from the Clinical Pharmacogenetics Implementation Consortium suggest using a reduced starting dose in R139C carriers to prevent AZA-induced leukopenia [21]. However, it remains obscure to what extent a final dose reduction is required and whether this would impair therapeutic efficacy during IBD treatment. In fact, most Asian practices have long used a low starting dose of AZA (25-50 mg/d) that was subsequently increased to a local target dose based on tolerance and efficacy [6-8], but the incidence of adverse reactions, especially myelotoxicity, remains high [9,10]. In the present study, we attempted to provide new insight into AZA dose optimization on the basis of this conventional dose-escalation strategy and the application of NUDT15 genotyping. In the toxicity analysis, we evaluated the appropriateness of the starting dose and the rapidity of dose escalation in different R139C genotypes under the step-up dosing strategy and inferred that the usual dose increment of 25 mg/d is very likely to cause undesirable overdosing of AZA in R139C heterozygotes in terms of safety. Furthermore, our study provides the first direct piece of evidence supporting that the final target dose of these heterozygous patients can be set lower than that of the wild genotypes without compromising efficacy during IBD treatment. Given that the incidence of severe early leukopenia is approximately 100% in R139C homozygotes [15], patients who were mutant homozygous for R139C were precluded from receiving AZA if they received pre-emptive genotyping in our medical center. Except for these patients, all the remaining patients started AZA at a considerably low dose of 25 mg/d and subsequently adjusted it according to both toxicity and efficacy, as a routine part of clinical practice. In this context, the incidence of myelotoxicity and alopecia was still significantly higher in R139C heterozygotes (CT genotype) than in wild-genotype patients (CC genotype), and the myelotoxicity rate reached as high as 45.5% in the heterozygous group despite the significantly lower doses. These data are well in line with the results of previous pharmacogenetic studies in East-Asian populations with variable starting doses [9,10,22], reaffirming the association between NUDT15 R139C and AZA-induced toxicity in Chinese IBD patients. On the other hand, our finding indicates that a step-wise dose adjustment with a low starting dose of 25 mg/d is insufficient to shrink the great disparity in the incidence of myelotoxicity between patients with CC and CT genotypes.
Are there any changes that could be made to this conventional dosing strategy to reduce AZA-induced myelotoxicity? To address this question, we performed a stratified risk analysis of myelotoxicity according to dose adjustment in real clinical settings. At the initial dose of 25 mg/d, the incidence of myelotoxicity was extremely low in both genotypes. However, when the dose was increased directly to 50 mg/d and from 50 to 75 mg/d, patients with the NUDT15 CT genotype had a significantly higher incidence of myelotoxicity than those with the CC genotype (40% vs 13.2%, P = 0.001; 60% vs 14.5%, P = 0.039). The multivariate analysis of patients whose dosage was escalated from 25 to 50 mg/d revealed that patients with the CT genotype had a 5.4-fold risk of myelotoxicity in comparison with the CC genotype. These results suggest that the starting dose of 25 mg/d is safe, but further dose escalation to 50 mg/d may be too large a step in terms of toxicity in R139C heterozygotes using the routine dosing strategy. Thus, a smaller dose increment after 25 mg/d may be more appropriate for heterozygotes, such as from 25 to 33.3 mg/d (two 50-mg tablets of AZA every 3 days), to prevent overdosing. If a heterozygous patient received a NUDT15 genotyping test but was still titrated directly to a dose of 50 mg/d, pre-emptive genotyping might not be helpful in reducing the high myelotoxicity rate, whereas patients with the CC genotype are relatively safe to be titrated to 50 mg/d if no sign of drug intolerance is detected at the starting dose of 25 mg/d.
Regarding efficacy, as NUDT15 heterozygosity is reportedly not associated with high 6-TGN levels, it is unclear how to optimize the target dose in the CT genotype, due to the lack of other mature metabolite markers to predict potential efficacy [3]. In the present study, a low dose of AZA was prescribed in most IBD patients under the step-up dosing strategy, and the final AZA maintenance dose was further reduced in patients with the CT genotype compared with the wild genotype [0.83 (0.75-0.96) vs 1.04 (0.89-1.33) mg/kg/d, P < 0.001]. Notwithstanding, the cumulative proportions of patients in remission maintenance on AZA monotherapy were similar between the two genotypes. In the CD subgroup, the remission rates at 12 and 24 months in the two genotypes were comparable with previous reports that evaluated the clinical efficacy of a standard dose of AZA in remission maintenance [23-25]. These observations are in accordance with the longstanding notion that the gradual titration strategy and the subsequently low doses of AZA are effective in East-Asian populations [8,26]. More importantly, these results indicate that further dose reduction does not compromise the clinical efficacy of AZA in IBD patients with the CT genotype in comparison with the CC genotype.
Interestingly, a similar finding was documented in earlier studies of individualized therapy in IBD patients based on TPMT genotyping. In an observational study, Gardiner et al. [27] first demonstrated that, when the doses were individually adjusted by 6-TGN, the final average dose in TPMT heterozygotes was only half that in the wild genotype (0.9 vs 1.8 mg/kg/d) without impaired efficacy. Recently, TOPIC trials have revealed that, within a 20-week observation time, TPMT heterozygotes can achieve effective 6-TGN levels with a final average dose of 1.0 mg/kg/d and have remission rates comparable to wild-genotype patients at an average dose of 2.2 mg/kg/d [12]. Although NUDT15 differs from TPMT in that it does not influence the total concentration of 6-TGN, it can convert 6-TGTP to 6-TGMP through its polyphosphate hydrolase activity [28]. 6-TGTP is regarded as the predominant component of 6-TGN associated with immunosuppressive function [3], which can specifically inhibit Rac1 activation, leading to T-cell apoptosis [29]. Consistently, a higher proportion of 6-TGTP has been shown to correlate with better immunosuppressive efficacy of AZA in CD patients [30]. Therefore, although the overall concentration of 6-TGN is unaltered, NUDT15 heterozygotes, with diminished NUDT15 protein stability, are expected to have a higher ratio of 6-TGTP at the same dose than the wild genotype, and thus would be more likely to achieve the therapeutic threshold during IBD treatment. This may explain why lower doses of AZA are sufficient to yield favorable clinical outcomes in NUDT15 heterozygotes.
As for the optimal magnitude of the dose reduction in CT genotypes, a balance between toxicity and efficacy should be considered. Kakuta et al. [9] reported that the incidence of thiopurine-induced leukopenia was 39.1% (10/26) in Japanese R139C heterozygotes at a low dose of 0.897 ± 0.303 mg/kg/d; the final maintenance dose was as low as 0.574 ± 0.316 mg/kg/d in this group owing to safety concerns. However, they did not provide efficacy data in this context. In the present study, we observed that AZA was effective in remission maintenance in heterozygous IBD patients at an AZA dose with an interquartile range below 1.0 mg/kg/d [0.83 (0.76-0.95) mg/kg/d]. It is noteworthy, however, that the myelotoxicity rate of the CT genotype reached 45.5% (15/33) at a slightly higher dose of AZA [0.94 (0.79-1.11) mg/kg/d], as mentioned earlier. Taken together, these data make it plausible that the tolerable and effective doses for a large proportion of East-Asian IBD patients with the CT genotype are <1.0 mg/kg/d.
Alternatively, the gradual dose titration also led to a median low dose of AZA [1.05 (0.84-1.35) mg/kg/d] and a relatively low myelotoxicity rate (22.2%) in the wild-genotype patients, which is in accordance with similar studies in East-Asian patients. Yet, the incidence of AZA-induced myelotoxicity reported in Western studies was 5% at a standard dose of 2.0-2.5 mg/kg/d [31,32], implying a potential ethnicity difference even between non-carriers of R139C in East Asia and their Western counterparts. This may be caused by other unidentified genetic factors prevalent in East-Asian populations, which requires further investigation. Overall, our results provide preliminary evidence supporting that the final target dose of patients with the NUDT15 CT genotype could be set to <1.0 mg/kg/d and that of the CC genotype could be set higher, although there is no need for it to be the same as the Western standard dose.
The present study has several limitations that should be discussed. First, the single-center retrospective design may inevitably introduce some unknown bias. However, the single-center setting also ensured that AZA was used in a relatively consistent manner, given that different centers in China have different habits in adjusting the ultimate doses, as no consensus has been reached on the final dose in China. Second, we did not test NUDT15 phenotypes (NUDT15 enzyme activity, related metabolite concentrations). In fact, new methods have been developed to distinguish 6-TGTP from 6-TGN, but they are currently too immature for clinical settings [3]. Moreover, in most parts of China, including our medical center, even the testing of 6-TGN has not been routinely carried out. As a retrospective observational study, metabolite information could not be provided for analysis. Future prospective studies can be designed to determine whether 6-TGN-guided dose adjustments in NUDT15 wild-genotype patients may be better than the conventional step-up dosing methods in achieving a balance between AZA-related efficacy and toxicity.
In conclusion, patients with the CT genotype seem to be more sensitive to AZA in terms of both toxicity and efficacy. Based on the results of our study, we suggest that patients with the CT genotype need a smaller dose-escalation step (<25 mg/d) and might well benefit from a significantly lower target dose (<1.0 mg/kg/d) under a step-up dosing strategy, in order to achieve a balance between AZA-related toxicity and efficacy in East-Asian IBD patients.
"year": 2020,
"sha1": "344259310d471f793d5123c7f70cc9573196c003",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/gastro/article-pdf/8/6/437/35546519/goaa021.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "58c02f6816cf8d8487b86cfbc77a43323b62f63d",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Bayesian filtering of human brain hemodynamic activity elicited by visual short-term maintenance recorded through functional near-infrared spectroscopy (fNIRS)
Functional near-infrared spectroscopy (fNIRS) is a neuroimaging technique that measures changes in oxy-hemoglobin (ΔHbO) and deoxy-hemoglobin (ΔHbR) concentration associated with brain activity. The signal acquired with fNIRS is naturally affected by disturbances engendered by ongoing physiological activity (e.g., cardiac, respiratory, Mayer wave) and by random measurement noise. Despite its several drawbacks, so-called conventional averaging (CA) is still widely used to estimate the hemodynamic response function (HRF) from the noisy signal. One such drawback is the number of trials necessary to derive stable HRFs with the CA approach, which must be substantial (N >> 50). In this work, a pre-processing procedure to remove artifacts, followed by the application of a non-parametric Bayesian approach, is proposed that capitalizes on a priori available knowledge about the HRF and the noise. Results with the proposed Bayesian approach were compared with CA and with a straightforward band-pass filtering approach. On simulated data, the estimation error on the HRF was five times lower than that obtained by CA and 2.5 times lower than that obtained by band-pass filtering. On real data, the improvement achieved by the present method was attested by an increase in the contrast-to-noise ratio (CNR) and by a reduced variability in single-trial estimation. An application of the present Bayesian approach is illustrated that was optimized to monitor changes in hemodynamic activity reflecting variations in visual short-term memory load in humans, which are notoriously hard to detect using functional magnetic resonance imaging (fMRI). In particular, statistical analyses of HRFs recorded during a memory task established with high reliability the crucial role of the intraparietal sulcus and the intra-occipital sulcus in posterior areas of the human brain in visual short-term memory maintenance.

©2010 Optical Society of America

OCIS codes: (300.6340) Spectroscopy, infrared; (200.4560) Optical data processing; (170.2655) Functional monitoring and imaging; (170.3890) Medical optics instrumentation.

References and links
1. D. A. Boas, M. A. Franceschini, A. K. Dunn, and G. Strangman, "Noninvasive imaging of cerebral activation with diffuse optical tomography," in In Vivo Optical Imaging of Brain Function (CRC Press, 2002), Chap. 8, pp. 193-221.
2. S. C. Bunce, M. Izzetoglu, K. Izzetoglu, B. Onaral, and K. Pourrezaei, "Functional near-infrared spectroscopy," IEEE Eng. Med. Biol. Mag. 25(4), 54-62 (2006).
3. H. Obrig and A. Villringer, "Beyond the visible--imaging the human brain with light," J. Cereb. Blood Flow Metab. 23(1), 1-18 (2003).
4. S. P. Koch, S. Koendgen, R. Bourayou, J. Steinbrink, and H. Obrig, "Individual alpha-frequency correlates with amplitude of visual evoked potential and hemodynamic response," Neuroimage 41(2), 233-242 (2008).
5. R. J. Cooper, N. L. Everdell, L. C. Enfield, A. P. Gibson, A. Worley, and J. C. Hebden, "Design and evaluation of a probe for simultaneous EEG and near-infrared imaging of cortical activation," Phys. Med. Biol. 54(7), 2093-2102 (2009).
6. A. Gibson and H. Dehghani, "Diffuse optical imaging," Philos. Transact. A Math. Phys. Eng. Sci. 367(1900), 3055-3072 (2009).
7. G. Jasdzewski, G. Strangman, J. Wagner, K. K. Kwong, R. A. Poldrack, and D. A. Boas, "Differences in the hemodynamic response to event-related motor and visual paradigms as measured by near-infrared spectroscopy," Neuroimage 20(1), 479-488 (2003).
8. S. G. Diamond, T. J. Huppert, V. Kolehmainen, M. A. Franceschini, J. P. Kaipio, S. R. Arridge, and D. A. Boas, "Dynamic physiological modeling for functional diffuse optical tomography," Neuroimage 30(1), 88-101 (2006).
9. V. Kolehmainen, S. Prince, S. R. Arridge, and J. P. Kaipio, "State-estimation approach to the nonstationary optical tomography problem," J. Opt. Soc. Am. A 20(5), 876-889 (2003).
10. S. Prince, V. Kolehmainen, J. P. Kaipio, M. A. Franceschini, D. A. Boas, and S. R. Arridge, "Time-series estimation of biological factors in optical diffusion tomography," Phys. Med. Biol. 48(11), 1491-1504 (2003).
11. A. F. Abdelnour and T. J. Huppert, "Real-time imaging of human brain function by near-infrared spectroscopy using an adaptive general linear model," Neuroimage 46(1), 133-143 (2009).
12. Y. H. Zhang, D. H. Brooks, M. A. Franceschini, and D. A. Boas, "Eigenvector-based spatial filtering for reduction of physiological interference in diffuse optical imaging," J. Biomed. Opt. 10(1), 011014 (2005).
13. M. L. Schroeter, M. M. Bücheler, K. Müller, K. Uludağ, H. Obrig, G. Lohmann, M. Tittgemeyer, A. Villringer, and D. Y. von Cramon, "Towards a standard analysis for functional near-infrared imaging," Neuroimage 21(1), 283-290 (2004).
14. S. Tak, K. E. Jang, J. W. Jung, J. Jang, and J. C. Ye, "General linear model and inference for near infrared spectroscopy using global confidence region analysis," in Proceedings of the IEEE International Symposium on Biomedical Imaging (ISBI) (2008), pp. 476-479.
15. J. C. Ye, S. Tak, K. E. Jang, J. Jung, and J. Jang, "NIRS-SPM: statistical parametric mapping for near-infrared spectroscopy," Neuroimage 44(2), 428-447 (2009).
16. K. E. Jang, S. Tak, J. Jung, J. Jang, Y. Jeong, and J. C. Ye, "Wavelet minimum description length detrending for near-infrared spectroscopy," J. Biomed. Opt. 14(3), 034004 (2009).
17. G. Morren, U. Wolf, P. Lemmerling, M. Wolf, J. H. Choi, E. Gratton, L. De Lathauwer, and S. Van Huffel, "Detection of fast neuronal signals in the motor cortex from functional near infrared spectroscopy measurements using independent component analysis," Med. Biol. Eng. Comput. 42(1), 92-99 (2004).
18. C. B. Akgül, A. Akin, and B. Sankur, "Extraction of cognitive activity-related waveforms from functional near-infrared spectroscopy signals," Med. Biol. Eng. Comput. 44(11), 945-958 (2006).
19. A. V. Medvedev, J. Kainerstorfer, S. V. Borisov, R. L. Barbour, and J. VanMeter, "Event-related fast optical signal in a rapid object recognition task: improving detection by the independent component analysis," Brain Res. 1236, 145-158 (2008).
20. G. Taga, K. Asakawa, A. Maki, Y. Konishi, and H. Koizumi, "Brain imaging in awake infants by near-infrared optical topography," Proc. Natl. Acad. Sci. U.S.A. 100(19), 10722-10727 (2003).
21. R. Sitaram, H. Zhang, C. Guan, M. Thulasidas, Y. Hoshi, A. Ishikawa, K. Shimizu, and N. Birbaumer, "Temporal classification of multichannel near-infrared spectroscopy signals of motor imagery for developing a brain-computer interface," Neuroimage 34(4), 1416-1427 (2007).
22. S. Cutini, P. Scatturin, E. Menon, P. S. Bisiacchi, L. Gamberini, M. Zorzi, and R. Dell'Acqua, "Selective activation of the superior frontal gyrus in task-switching: an event-related fNIRS study," Neuroimage 42(2), 945-955 (2008).
23. H. Kojima and T. Suzuki, "Hemodynamic change in occipital lobe during visual search: visual attention allocation measured with NIRS," Neuropsychologia 48(1), 349-352 (2010).
24. G. Sparacino, S. Milani, E. Arslan, and C. Cobelli, "A Bayesian approach to estimate evoked potentials," Comput. Methods Programs Biomed. 68(3), 233-248 (2002).
25. R. Luria, P. Sessa, A. Gotler, P. Jolicoeur, and R. Dell'Acqua, "Visual short-term memory capacity for simple and complex objects," J. Cogn. Neurosci. 22(3), 496-512 (2010).
26. J. J. Todd and R. Marois, "Capacity limit of visual short-term memory in human posterior parietal cortex," Nature 428(6984), 751-754 (2004).
27. Y. Xu and M. M. Chun, "Dissociable neural mechanisms supporting visual short-term memory for objects," Nature 440(7080), 91-95 (2006).
28. M. Cope and D. T. Delpy, "System for long-term measurement of cerebral blood and tissue oxygenation on newborn infants by near infra-red transillumination," Med. Biol. Eng. Comput. 26(3), 289-294 (1988).
29. M. L. Schroeter, S. Zysset, F. Kruggel, and D. Y. von Cramon, "Age dependency of the hemodynamic response as measured by functional near-infrared spectroscopy," Neuroimage 19(3), 555-564 (2003).
30. M. L. Schroeter, S. Cutini, M. M. Wahl, R. Scheid, and D. Y. von Cramon, "Neurovascular coupling is impaired in cerebral microangiopathy--an event-related Stroop study," Neuroimage 34(1), 26-34 (2007).
31. A. Duncan, J. H. Meek, M. Clemence, C. E. Elwell, P. Fallon, L. Tyszczuk, M. Cope, and D. T. Delpy, "Measurement of cranial optical path length as a function of age using phase resolved near infrared spectroscopy," Pediatr. Res. 39(5), 889-894 (1996).
32. M. A. Franceschini, V. Toronov, M. E. Filiaci, E. Gratton, and S. Fantini, "On-line optical imaging of the human brain with 160-ms temporal resolution," Opt. Express 6(3), 49-57 (2000).
33. M. Okamoto, H. Dan, K. Sakamoto, K. Takeo, K. Shimizu, S. Kohno, I. Oda, S. Isobe, T. Suzuki, K. Kohyama, and I. Dan, "Three-dimensional probabilistic anatomical cranio-cerebral correlation via the international 10-20 system oriented for transcranial functional brain mapping," Neuroimage 21(1), 99-111 (2004).
34. S. Cutini, P. Scatturin, and M. Zorzi, "A new method based on ICBM152 head surface for probe placement in multichannel fNIRS," Neuroimage (to be published).
35. M. R. Nuwer, G. Comi, R. Emerson, A. Fuglsang-Frederiksen, J. M. Guérit, H. Hinrichs, A. Ikeda, F. J. Luccas, and P. Rappelsburger, The International Federation of Clinical Neurophysiology, "IFCN standards for digital recording of clinical EEG," Electroencephalogr. Clin. Neurophysiol. 106(3), 259-261 (1998).
36. A. K. Singh, M. Okamoto, H. Dan, V. Jurcak, and I. Dan, "Spatial registration of multichannel multi-subject fNIRS…
Introduction
Functional near-infrared spectroscopy (fNIRS) is an emerging neuroimaging technique which uses the near-infrared region of the electromagnetic spectrum to measure changes in blood oxygenation. Light in this region (i.e., 650-950 nm) is generally poorly absorbed by biological tissue constituents other than oxy-hemoglobin (HbO) and deoxy-hemoglobin (HbR). Among other important implementations, this property, in combination with the different absorption spectra of HbO and HbR, allows cognitive neuroscientists to explore brain activity by monitoring online variations in HbO and HbR concentration in the cerebral (cortical) blood flow during the execution of a cognitive task.
Over the past decade, a growing number of researchers have used fNIRS as an alternative to older and more established neuroimaging modalities, such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), mainly because of its potential to provide physiological information non-invasively, its high temporal resolution, and the relatively low cost of the equipment [1-3]. Information provided by fNIRS is complementary to that provided by direct measures of the electromagnetic effects of neural activation and, for this reason, fNIRS has recently been used for the simultaneous recording of hemodynamic and electromagnetic signals (i.e., electroencephalography, EEG, and magnetoencephalography, MEG [4,5]). Despite its inherent high signal contrast, one limitation of fNIRS is its reduced spatial resolution with respect to fMRI, frequently addressed via multiple fNIRS measurements in order to localize the signal source in the brain [6].
In functional brain imaging, the signal acquired with fNIRS is a mixture of the evoked hemodynamic response function (HRF), several background physiological components (e.g., cardiac, respiratory, blood pressure Mayer wave) and measurement noise (artifacts). Several methods have been proposed in the literature to estimate the HRF from the fNIRS signal by reducing the effect of noisy components, such as low-pass filtering [7], state-space estimation using the extended Kalman filter [8-11], principal component analysis [12], the generalized linear model [13-15], wavelet-based algorithms [16] and independent component analysis [17-19]. Although each of these methods is generally associated with increases in signal-to-noise ratio, the so-called conventional averaging (CA) technique is still probably the most widely used method to estimate the HRF from the fNIRS signal [3,5,7,20-23]. Succinctly, the HRF is determined by averaging the fNIRS recordings (trials) collected after N identical stimuli, with N often being in the order of several tens. Estimation of the HRF is achieved by assuming both the independence of the background noise from the activity elicited by the to-be-processed stimulus, and the difference in phase of the physiological components from stimulus to stimulus. Notably, CA is algorithmically blind to information about the HRF and the fNIRS signal that can be independently extracted from the optical signal. Furthermore, the fNIRS signal is notoriously non-stationary (i.e., some trials are noisier and/or less reliable than others), which suggests that the above assumptions should be taken with great caution.
In this work, we describe an approach for the estimation of the HRF from fNIRS data devised to overcome the problems mentioned in the foregoing paragraph. The proposed method includes a pre-processing phase devoted to both artifact removal (e.g., anomalous drifts due to movements of the subject) and reduction of the impact of cardiac activity on the fNIRS signal. This pre-processing step allows us to circumvent the need to develop a complex model of the data (e.g., as a combination of sinusoids describing the physiological noise component in the fNIRS signal), which would require sophisticated, but complex to handle, estimation tools, such as the extended Kalman filter used in [11]. Then, a Bayesian non-parametric approach is used to improve the contrast-to-noise ratio (CNR). The strength of this latter algorithm, originally developed by Sparacino et al. [24] for the estimation of event-related potentials (ERPs) in electroencephalography, is that, while relying on mild assumptions on the fNIRS signal, it substantially improves the CNR thanks to a suitable compromise between the experimental data and a priori expectations (e.g., smoothness) available on the unknown HRF. The proposed method is a general-purpose and straightforward filtering technique that requires only a small amount of a priori information and can be used in any fNIRS experiment. In the present work, we compared its performance with two general, simple and widely used methods, namely, CA and classical Butterworth band-pass filtering.
Results obtained with both simulated and real data indicated that the methodology proposed here improves the HRF estimate compared with both CA and band-pass filtering. On the empirical side, the present approach was used to investigate visual short-term memory (VSTM) in humans. Specifically, the hemodynamic signal was acquired from subjects engaged in the cued variant of a change-detection task. Acquired data were filtered using the Bayesian filtering approach, and the peak amplitudes of the obtained HRFs were analyzed to disclose the neural underpinnings of VSTM functions. In this experiment, the amplitude of the expected HRFs was comparable to or even lower than that of the noise and of the physiological components, making a stable HRF estimation hard to achieve. Nonetheless, in line with findings of recent EEG [25] and fMRI studies [26,27], it was shown that the application of the present filter allowed us to detect and localize activity in parieto-occipital regions (i.e., in the intraparietal sulcus, IPS, and the intra-occipital sulcus, IOS) correlated with the maintenance of information in VSTM. Furthermore, the results obtained following the adoption of the present Bayesian filtering approach shed light on, and substantiated empirically, an expected lateralization effect (see section 3.4, Lateralization effects in visual short-term memory), which dovetails nicely with prior fMRI investigations of VSTM functions in humans.
Participants
Thirteen right-handed students at the University of Padova participated in the experiment after providing informed consent. Two participants were discarded from the analysis because of inadequate behavioural performance (accuracy below 50%). Thus, eleven participants were included in the analysis (6 females, mean age 23.36 ± 2.73 years). All participants had normal or corrected-to-normal vision, and normal color vision. No participant reported a prior history of neurological or psychiatric disorders, and none was under medication at the time of testing. This study was conducted in accordance with the guidelines approved by the Institutional Review Board (IRB).
Stimuli and procedure
Each subject was seated in a comfortable chair placed before a computer screen. A schematic illustration of the sequence of events on each trial of the present experiment is reported in Fig. 1. A plus sign was presented at the centre of the screen for 2 s. The plus sign was then replaced by a small circle for 600 ms, alerting participants to the incoming presentation of the first (memory) array of colored squares. The alerting circle, which remained on the screen throughout the entire trial, was then flanked by an arrow head pointing to the left or right side of the screen for 400 ms. The offset of the arrow head was followed by a 300 ms blank interval. The memory-array was exposed for 300 ms and followed by a blank interval lasting 1400-1600 ms, jittered in steps of 20 ms. A second (test) array composed of colored squares placed in the same positions as the squares in the memory-array was then displayed for a maximum duration of 3000 ms. Following the onset of the test-array, participants had to press one of two appropriately labeled keys of the computer keyboard to indicate whether a change in color had occurred or not. On 50% of trials, one of the squares in the cued half of the memory-array was replaced with a square of a different color in the test-array. On the other 50% of trials, the color of the squares did not change between memory- and test-arrays. In the example illustrated, the rightmost square, green in the memory-array, was changed to cyan in the test-array. The to-be-remembered colored squares could be two or four. The inter-trial interval varied from 15 to 20 s. Trials were organized in 6 consecutive blocks (with a short pause between successive blocks), and each block included 24 trials.
Instruments
The fNIRS signal was acquired with a multi-channel frequency-domain NIR spectrometer (ISS Imagent™, Champaign, Illinois), equipped with 28 laser diodes (14 emitting light at 690 nm, and 14 at 830 nm) modulated at 110.0 MHz. The diode-emitted light was conveyed to the subject's head by multimode core glass optical fibers (hereafter, sources; OFS Furukawa LOWOH series fibers, numerical aperture 0.37) with a length of 250 cm and a core diameter of 400 µm. Light that scattered through the brain tissue was carried by detector optical fiber bundles (diameter 3 mm) to 4 photomultiplier tubes (PMTs; R928, Hamamatsu Photonics). The PMTs were modulated at 110.005 MHz, generating a 5.0 kHz heterodyning (cross-correlation) frequency. To separate the light as a function of source location, the sources time-shared the 4 parallel PMTs via an electronic multiplexing device. Only two sources (one per hemisphere) were synchronously active (i.e., emitting light; t = 4 ms), resulting in a final sampling period of 128 ms (f = 10³/128 = 7.8125 Hz) after dual-period averaging.
Following detection and amplification by the PMTs, the optical signal was converted into temporal variations (Δ) of cerebral oxy-hemoglobin (ΔHbO) and deoxy-hemoglobin (ΔHbR) concentration, an index that is sensitive to age [29,30]. These indices were therefore corrected for age based on the differential pathlength factor (DPF [28]; for details see [22]), using the age-dependent equations described in [31]. Horizontal EOG (HEOG) was recorded bipolarly from tin electrodes positioned on the outer canthi of both eyes, referenced to the left earlobe. HEOG activity was amplified, filtered using a high-pass of 0.01 Hz, and digitized at the same sampling rate as the fNIRS signal.
Impedance at each electrode was maintained below 5 kΩ. HEOG activity was re-referenced offline to the average of the left and right earlobes.
Probe placement procedure
Sources and detectors were held in place on the scalp using a custom-made holder and velcro straps. Each source location comprised two source optical fibers, one for each wavelength. The distance between each source/detector pair (hereafter, channel) was L = 30 mm, so as to equate channels for optical penetration depth into the cortical tissue [32]. As shown in [33], the average cortical surface depth varies across regions, measuring around 17-18 mm in the parieto-occipital cortex. Considering that the average cortical thickness is around 3 mm, we set the cortical depth at 20 mm. Thus, the position of each channel coincided with the 20 mm cerebral projection of the midpoint of the source-detector distance. This probe arrangement included 18 channels, providing 18 measurements for HbO and 18 for HbR.
The spatial arrangement of source/detector pairs on the scalp was determined using a new probe placement approach [34], based on the combined use of a physical model of the head surface of the ICBM152 template (ICBM152-PM) and 3D digitizing software (Brainsight, Rogue Research).
The sources on each hemisphere were numbered from 1 to 7. Left-hemisphere detectors were indicated with the letters A/B, and right-hemisphere detectors with the letters C/D. Channels A1/C1 and A2/C2 recorded activity from the superior IPS (sIPS), A3/C3 from the angular gyrus (ANG), A4/C4 from the IPS, A5/C5 from the posterior part of the superior parietal lobule (pSPL), B3/D3 from the ANG, B4/D4 from the region at the intersection between the IPS and the intra-occipital sulcus (IOS), B6/D6 from regions adjacent to the lateral occipital cortex (LOC), and B7/D7 from the superior occipital cortex (SOC). The resulting spatial arrangement of sources and detectors on the head surface is illustrated in Fig. 2a. The channels overlaid onto the ICBM152 brain template are illustrated in Fig. 2b. Both Figs. 2a and 2b were generated after remapping the stereotaxic points onto the ICBM152 template using MRIcron (http://www.sph.sc.edu/comd/rorden/mricron/; for details, see [34]).
Afterwards, in order to place the probe precisely on the scalp of participants, the biunivocal correspondence between the 10-10 points [35] and the MNI coordinates of their cerebral projections [33] was used. The spatial arrangement of each set of landmarks generated a stringent spatial bind that allowed us to standardize the probe placement across participants. The degree of precision achieved with this procedure was comparable with that obtained using different approaches [33,36], yielding a worst-case average error compatible with the spatial resolution of the present fNIRS setup.
The pre-processing strategy
Concentration measurements were first band-pass filtered (pass band: from 0.01 Hz to 3 Hz) to remove slowly drifting signal components and other noise with frequencies far from the signal band. For each channel, the acquired signal was segmented into 15 s trials starting from the memory-array onset. Trials associated with an incorrect response and/or with HEOG amplitude exceeding ±30 µV during the interval from the onset of the arrow cue to the offset of the memory array (8%) were discarded from analysis. Trials were divided into two conditions of the experimental design depending on the position (left hemisphere or right hemisphere) of the considered channel: to-be-memorized colored squares displayed on the same side as the considered channel (ipsilateral condition) and displayed on the opposite side of the considered channel (contralateral condition). Each trial was zero-mean corrected by subtracting the mean intensity of the optical signal recorded during the 15 s period.
Two separate procedures were applied to remove artifacts present in the hemodynamic signal. First, a custom procedure based on Grubbs' test was separately applied to the aligned trials in each condition [37]. Trials with one or more values exceeding the empirically established Grubbs' threshold (with α set to 0.05) were discarded from analysis (1%). Moreover, given that some artifacts could have gone undetected by this procedure, the remaining trials were checked with a second method, which considered variations in concentration of the hemodynamic signal throughout the entire trial rather than single time points, as the Grubbs' test does. The mean value and the difference between the maximum and minimum values (hereafter, range) were calculated considering all trials in a given condition. The mean value and the range were also calculated for each single trial. Single-trial mean and range values were compared with the mean values of all trials in that condition [38]. Trials characterized by a range or mean value exceeding the condition mean by ±2.5 SDs were discarded from analysis (2%).
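A compact Python sketch of this two-step trial screening follows; it is our simplified, one-pass reading of the procedure (the thresholds follow the text, but the exact variant of Grubbs' test used is an assumption).

```python
import numpy as np
from scipy import stats

def grubbs_reject(trials: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """trials: (n_trials, n_samples), aligned. Flag trials containing at
    least one sample whose deviation across trials exceeds the two-sided
    Grubbs critical value (simplified, non-iterative variant)."""
    n = trials.shape[0]
    t2 = stats.t.ppf(1.0 - alpha / (2.0 * n), n - 2) ** 2
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t2 / (n - 2 + t2))
    z = np.abs(trials - trials.mean(axis=0)) / trials.std(axis=0, ddof=1)
    return z.max(axis=1) > g_crit          # True = discard this trial

def range_mean_reject(trials: np.ndarray, k: float = 2.5) -> np.ndarray:
    """Flag trials whose range (max - min) or mean deviates by more than
    k SDs from the condition average, as in the second screening step."""
    rng_ = np.ptp(trials, axis=1)
    mean_ = trials.mean(axis=1)
    bad = np.zeros(trials.shape[0], dtype=bool)
    for s in (rng_, mean_):
        bad |= np.abs(s - s.mean()) > k * s.std(ddof=1)
    return bad
```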
Subsequently, a notch filter was applied to reduce the cardiac component of the signal, which was the cyclic physiological component that most affected the performance of the Bayesian filtering described below (specifically, since the cardiac component has a period of about 1 s, it influenced the estimation of the noise model). The centre frequency of the notch filter was set to the frequency corresponding to the maximum of the power spectral density in the range 0.7-1.5 Hz, and was computed for each trial in order to take into account possible variations in heart rate during the experiment.
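The per-trial notch filtering can be sketched as follows (Python/SciPy); the quality factor Q is an assumed value, since the text does not specify the notch bandwidth.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt, periodogram

def remove_cardiac(trial: np.ndarray, fs: float = 7.8125, q: float = 10.0):
    """Notch out the cardiac component, centring the notch on the spectral
    peak found in the 0.7-1.5 Hz band, separately for each trial."""
    f, pxx = periodogram(trial, fs=fs)
    band = (f >= 0.7) & (f <= 1.5)
    f0 = f[band][np.argmax(pxx[band])]     # per-trial cardiac frequency
    b, a = iirnotch(f0, Q=q, fs=fs)
    return filtfilt(b, a, trial)           # zero-phase filtering
```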
The Bayesian filtering approach
The complete details of the approach, at both the theoretical and implementation levels, are described in [24], where the method was originally proposed and applied to three different kinds of auditory ERPs. Only a brief explanation is therefore reported in the present manuscript.
Data of each of the N trials were described by the additive model

y = u + v,    (2)

where y, u and v are n-size vectors containing equally spaced samples of the measured fNIRS signal (after the application of the above-described pre-processing procedure), the "true" HRF, and the noise, respectively (time zero is the time at which the stimulus occurs). Each trial y of Eq. (2) was then individually filtered within a Bayesian embedding by exploiting a priori 2nd-order statistical information on both u and the noise v, different from one trial to another. As far as the information on v is concerned, a stationary AR model was employed. Hence, the a priori covariance matrix of v was

Σ_v = σ²(AᵀA)⁻¹,    (3)

where A is the square n-dimensional lower-triangular Toeplitz matrix whose first column is [1, a₁, a₂, …, a_p, 0, …, 0]ᵀ, {a_k}, k = 1, …, p, being the coefficients of the AR model, and σ² is the variance of the white noise process driving the AR model. This model was identified, for each of the N available trials, from data measured in an interval lasting 4 s and starting 1.5 s before the stimulus, when the HRF was not present. The model order was set to 4. The only physiological component that occurred entirely within this 4 s interval was the cardiac component, with an amplitude comparable to that of the expected HRF, which was removed by the notch filter previously described in the subsection devoted to pre-processing. As far as the 2nd-order statistical information on u, containing samples of the unknown HRF, is concerned, the strategy was to model the a priori known smoothness as the realization of a stochastic process obtained by the cascade of d integrators driven by a zero-mean white noise process {ε_k} with variance λ². Therefore, the covariance matrix of u was

Σ_u = λ²(FᵀF)⁻¹,    (4)

where F = Δ^d, with Δ being the square n-dimensional lower-triangular Toeplitz matrix whose first column is [1, −1, 0, …, 0]ᵀ. For instance, for d = 1 the unknown signal is described by the random-walk model

u_k = u_{k−1} + ε_k,    (5)

which, in a Gaussian setting, tells us that, given u_{k−1}, u_k will lie with probability 99.7% in the range u_{k−1} ± 3λ, i.e., the lower λ, the smoother {u_k}. The multiple integration of a white noise process is a model widely used in the literature to describe signals on which only qualitative information is available. In fact, the model is flexible (i.e., it can be employed for a large class of responses by suitably tuning d) and simple enough (only λ is unknown) to be easily identifiable from post-stimulus data. The optimally filtered trial was thus

û = (AᵀA + γFᵀF)⁻¹AᵀA y,    (6)

where γ = σ²/λ² was estimated, independently trial by trial, by the so-called "discrepancy" smoothing criterion (Twomey, 1965).
Remark 1. The discrepancy criterion may occasionally fail, leading to over-smoothed or under-smoothed profiles. A "mean" value of γ (obtained from the individual γ of trials with no over- or under-smoothing) was used only in those cases (i.e., 5% of total trials) in which unacceptable smoothing was obtained. Without this correction, the estimates of the HRF were worse (not shown).
Remark 2. The outcome of Eq. (6) is equivalent to that of the standard non-causal Kalman smoother, where the a priori model of the state evolution is u(t + 1) = u(t) + w(t) and the measurement model is y(t) = u(t) + v(t) (relative to Eqs. (5) and (2), respectively), with covariance matrix of the process noise w equal to [λ²] and covariance matrix of the measurement noise v equal to [σ²]. The estimated HRF ū was obtained as the average of the N filtered trials of Eq. (6) belonging to the same condition,

ū = (1/N) Σ_{i=1..N} û_i,    (7)

and it was finally baseline-corrected by subtracting from the overall hemodynamic response the mean intensity of the signal in the 0-500 ms interval from the onset. A prototype of the whole algorithm described in this section was implemented in Matlab (version R2008b, The MathWorks, Natick, Massachusetts, USA) and run on a personal computer.
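To make the estimator of Eqs. (3)-(6) concrete, a minimal Python sketch is reported below. It is a simplified illustration, not the authors' code: the AR coefficients are identified via Yule-Walker (one standard choice) on the pre-stimulus window, and γ is passed as a fixed argument, whereas in the actual method it is tuned per trial by the discrepancy criterion.

```python
import numpy as np
from scipy.linalg import toeplitz, solve

def ar_yule_walker(x: np.ndarray, p: int = 4):
    """Identify AR(p) coefficients {a_k} (convention: x_k + a1*x_{k-1} + ...
    = e_k) and the innovation variance sigma^2 from pre-stimulus data."""
    r = np.correlate(x, x, "full")[x.size - 1 : x.size + p] / x.size
    a = np.linalg.solve(toeplitz(r[:p]), -r[1 : p + 1])
    return a, r[0] + r[1 : p + 1] @ a

def bayesian_filter(y: np.ndarray, ar_coeffs: np.ndarray,
                    gamma: float, d: int = 1) -> np.ndarray:
    """One-trial smoother of Eq. (6): u_hat = (A'A + gamma F'F)^(-1) A'A y.
    A whitens the AR noise; F = Delta^d penalizes roughness of the HRF."""
    n = y.size
    a_col = np.zeros(n)
    a_col[0] = 1.0
    a_col[1 : len(ar_coeffs) + 1] = ar_coeffs
    A = toeplitz(a_col, np.r_[1.0, np.zeros(n - 1)])       # lower triangular
    d_col = np.zeros(n)
    d_col[0], d_col[1] = 1.0, -1.0
    Delta = toeplitz(d_col, np.r_[1.0, np.zeros(n - 1)])   # difference matrix
    F = np.linalg.matrix_power(Delta, d)
    AtA = A.T @ A
    return solve(AtA + gamma * (F.T @ F), AtA @ y)
```

In use, the AR model would be fitted on the 4 s pre-stimulus window of each trial, and the filtered trials would then be averaged and baseline-corrected as in Eq. (7).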
Results
The new methodology is first tested and compared with CA and with a raw band-pass filtering procedure on synthetically generated data, in order to assess the results in a situation in which the true HRF is known. The estimation error is considered as the merit criterion. Then, the application to real data is illustrated. In this case, the pre-processing procedure was the same for all methods, and the contrast-to-noise ratio is used as the indicator of estimator performance. The profiles obtained in the empirical study were then analyzed with the specific purpose of detecting hemispheric lateralization effects during the maintenance of lateralized visual information in visual short-term memory.
Synthetic data generation
Simulated data were generated to assess the performance of the developed algorithm. It is well known that in fNIRS measurements, background signals from systemic physiology add to the functional hemodynamic response [10,11]. In order to take these effects into account in the generation of simulated data, these fluctuations were expressed as a linear combination of three sinusoids at specific physiological frequencies. The first sinusoid corresponded to the cardiac component, with a frequency (f₁) ranging from 0.85 to 1.35 Hz and an amplitude (a₁) of ±200 nM. The second sinusoid represented the respiratory component, with a frequency (f₂) in the range 0.15-0.35 Hz and an amplitude (a₂) of ±200 nM. The third sinusoid described the blood pressure Mayer wave, with a frequency (f₃) ranging from 0.05 to 0.1 Hz and an amplitude (a₃) set to ±400 nM. Each sinusoid had a different phase (θ), varying from trial to trial, ranging from 0 to 2π.
The HRF, a function of time t, was modeled by a linear combination of two time-dependent gamma-variant functions Γ, with a total of 6 variable parameters [39]:

u_true(t) = α[Γ(t, τ₁, υ₁) − β·Γ(t, τ₂, υ₂)],    (8)

where u_true is the known HRF, α tunes the amplitude, τ_i and υ_i tune the shape and scale, respectively, and β determines the ratio of the response to the undershoot. The parameters were chosen in order to have a peak amplitude of 140 nM and a peak latency of 6.5 s, corresponding to the peak amplitude and latency of the HRF expected from experimental data. The measurement noise η was modeled as a white normal process with standard deviation tuned to match the standard deviation of real trials.
Only physiological and measurement noise (i.e., heart beat, respiration and blood pressure Mayer wave) were added to the simulated data; artifacts (e.g., those due to movements of the subject or shifts of a source or a detector, causing short, non-cyclic abrupt drifts) were not added.
Thus, samples y(t) of each simulated trial were generated as in Eq. (2), where u contains the samples of u_true in Eq. (8) and v is given by

v(t) = Σ_{i=1..3} a_i·sin(2π·f_i·t + θ_i) + η(t).    (9)

These simulated trials were comparable to real trials relative to HbO concentration changes. Fifteen simulated subjects were created, each containing 50 simulated trials.
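A hedged Python sketch of this generation scheme follows; the gamma parameterization and the 80 nM measurement-noise SD are our assumptions, chosen only to approximate the stated peak amplitude/latency and noise levels.

```python
import numpy as np
from scipy.special import gamma as sp_gamma

rng = np.random.default_rng(0)
FS, DUR = 7.8125, 15.0                 # Hz, s (as in the real recordings)
t = np.arange(0.0, DUR, 1.0 / FS)

def gamma_variant(t, tau, nu):
    # One plausible gamma-variant kernel for Eq. (8) (assumed form)
    return (t ** tau) * np.exp(-t / nu) / (sp_gamma(tau + 1) * nu ** (tau + 1))

def true_hrf(t, beta=0.35, tau1=6.0, nu1=0.9, tau2=12.0, nu2=0.9):
    h = gamma_variant(t, tau1, nu1) - beta * gamma_variant(t, tau2, nu2)
    return 140.0 * h / h.max()          # scale to a 140 nM peak

def simulated_trial(t):
    amps = (200.0, 200.0, 400.0)                        # nM, from the text
    bands = ((0.85, 1.35), (0.15, 0.35), (0.05, 0.10))  # Hz, from the text
    v = sum(a * np.sin(2 * np.pi * rng.uniform(*b) * t + rng.uniform(0, 2 * np.pi))
            for a, b in zip(amps, bands))
    eta = rng.normal(0.0, 80.0, t.size)  # white noise; 80 nM SD is assumed
    return true_hrf(t) + v + eta

def estimation_error(u_hat, u_true):
    # Percentage estimation error of Eq. (10)
    return 100.0 * np.sum((u_true - u_hat) ** 2) / np.sum(u_true ** 2)
```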
Assessment of the method
The Bayesian filtering approach was applied to the simulated subjects, while the pre-processing procedure was not performed because no artifacts were introduced in the simulated data.
In order to give a quantitative measure of the improvement of the estimate obtained with the Bayesian filtering approach, the following quantity was defined:

E = 100·‖u_true − ū‖² / ‖u_true‖²,    (10)

where ū is the estimate of the HRF of Eq. (7) and u_true is the HRF used in Eq. (8) to generate the simulated data; the value of E is thus a sort of percentage estimation error. The index E was computed for both CA and the new method on simulated data. Furthermore, the proposed method was compared with a more conventional band-pass filtering (Butterworth, band-pass, from 0.01 to 0.3 Hz). The main disadvantage of band-pass filtering is that it can reduce the hemodynamic response as well as the noise, because these components overlap in terms of frequency spectra [40]. The index E was computed for this method too. Two representative subjects are shown in Figs. 3a and 3b, respectively, in which the true HRF and the HRFs estimated with CA, Bayesian filtering and band-pass filtering are reported. In both subjects, the HRF obtained with the Bayesian filter is the one that best corresponds to the true HRF. All the values of E are reported in Table 1. A remarkable improvement of the estimation error was obtained by Bayesian filtering (E = 7.09 ± 7.33, mean ± SD) with respect to both CA (E = 33.28 ± 17.19) and the band-pass filter (E = 18.37 ± 13.87), denoting the good HRF estimates achieved by the proposed method. The improvement obtained by Bayesian filtering was 26.19 with respect to CA and 11.28 with respect to the band-pass filter. Notably, an even more pronounced relative improvement (33.34 with respect to CA and 17.65 with respect to the band-pass filter) could be seen if the number N of trials was reduced by 10% (N = 45), with estimation errors equal to 9.21, 42.55 and 26.86 for Bayesian filtering, CA and the band-pass filter, respectively. This, and the fact that the noise components had an amplitude greater than that of the expected HRF, suggest that the use of the method could allow a reduction of the number N of trials needed to obtain an interpretable HRF estimate, with a consequent reduction of the duration of the experiment. The estimation error of the peak amplitude estimate (E_peak) was computed for the HRFs obtained with the Bayesian filter and the band-pass filter, while it was not computed for the HRFs obtained with CA because of the great amount of noise still present in the signal. The peak amplitude was the parameter used in the statistical analysis, and its correct estimation is crucial. In this case too, an improvement of the estimation error was obtained by Bayesian filtering (E_peak = 4.24 ± 5.95, mean ± SD) with respect to band-pass filtering (E_peak = 11.90 ± 10.39). The lower values of the estimation error of the peak amplitudes with respect to the estimation error of the whole HRFs were due to the fact that the differences between the true HRF and the estimated HRFs were greater in the last 5 s of the considered interval (lasting 15 s), where no peak was present. Figure 4a shows a representative filtered trial relative to ΔHbO: the blue line depicts a pre-processed trial y of Eq. (2), while the red dashed line depicts the filtered profile û obtained by Eq. (6). The yellow dashed line illustrates the band-pass filtered data. High-frequency physiological components and random measurement noise are notably reduced by the Bayesian filtering, while the low-frequency physiological component (i.e., blood pressure Mayer wave) is reduced by averaging all trials. Figure 4b shows a direct comparison between the average HRF ū (red dashed line) estimated by Eq. (7)
in subject 9 from the available N = 37 trials (channel A2) and the HRF profile estimated via CA (blue line) and band-pass filtering (yellow dashed line). Similarly, Fig. 5a and Fig. 5b show a representative filtered trial and estimated HRF relative to ΔHbR instead of ΔHbO. It is apparent that the proposed method is able to reduce the noise and make a reliable estimation of the peak amplitude possible. On the contrary, the estimate of the peak amplitude is not possible using the CA technique, because of the great amount of noise still present in the signal.
Application to real data
To assess the benefits of the proposed method on real data, the contrast-to-noise ratio (CNR) [41] was computed. The CNR was defined as the square root of the ratio of signal power to noise power. The signal power was calculated by integrating the power spectral density over the "signal bands"; the noise power was calculated by integrating the rest of the power spectrum. This definition of CNR is useful for real data, where it is not possible to completely separate signal and noise, and it is preferable to time-domain methods because of the periodic nature of our visual stimulation. The signal bands of our stimulation paradigm were 0.05-0.067 Hz (fundamental band) and 0.1-0.133 Hz (second harmonic); the noise bands were the rest of the spectrum. For ΔHbO, the mean single-trial CNR without Bayesian filtering was 0.86; after Bayesian filtering, the CNR increased to 1.79, denoting a CNR improvement of 108%. Once the HRFs were estimated, the mean CNRs obtained with CA and with the Bayesian filter were 1.42 and 2.30 respectively, denoting a CNR improvement of 62%. Worse CNR values were obtained for ΔHbR: 0.70 in single trials and 1.03 in estimated HRFs without Bayesian filtering, which increased to 1.48 in single trials and 1.78 in estimated HRFs using Bayesian filtering, denoting improvements of 111% and 72% respectively. As regards the comparison with the band-pass filter, the CNRs obtained with the band-pass filter were similar to those obtained with the Bayesian filtering approach (ΔHbO, single trial: 1.78 vs 1.79; HRF: 2.30 vs 2.30).
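The band-power definition of CNR can be sketched as follows (Python/SciPy); note that resolving these narrow signal bands in the PSD requires a sufficiently long record.

```python
import numpy as np
from scipy.signal import welch

SIGNAL_BANDS = ((0.05, 0.067), (0.10, 0.133))   # fundamental + 2nd harmonic

def cnr(x: np.ndarray, fs: float = 7.8125, bands=SIGNAL_BANDS) -> float:
    """CNR = sqrt(signal power / noise power), with powers obtained by
    integrating the PSD inside vs. outside the stimulation bands."""
    f, pxx = welch(x, fs=fs, nperseg=min(256, x.size))
    in_band = np.zeros(f.size, dtype=bool)
    for lo, hi in bands:
        in_band |= (f >= lo) & (f <= hi)
    return float(np.sqrt(pxx[in_band].sum() / pxx[~in_band].sum()))
```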
CNR values obtained by CA, Bayesian and band-pass filtering are reported in Table 2. Furthermore, analysis of the single-trial estimates of Eq. (6) showed a reduction of the standard deviation obtained by the Bayesian filtering approach with respect to the band-pass filtering. For each subject, the standard deviation of the trials of each condition was computed. The mean SD obtained by the proposed method was 248 nM in the contralateral condition and 240 nM in the ipsilateral condition. Using the band-pass filter, the SD values obtained were 318 and 306 nM in the contralateral and ipsilateral conditions, respectively. The SD values obtained for each condition and subject were submitted to a mixed ANOVA with condition (contralateral vs ipsilateral) as the within-subject factor and the type of filtering (Bayesian or band-pass) as the between-subjects factor. The ANOVA revealed a significant effect of the type of filtering (F(1, 10) = 22.702, p = 0.001), reflecting reduced SDs for the Bayesian filtering relative to the band-pass filtering. The reduced variability in single trials obtained by the proposed method suggests a more valuable reduction of noise.
Lateralization effects in visual short-term memory
The obtained HRFs were analyzed to investigate the lateralization effects in visual short-term memory.
Lateralization effects are expected based on the known architectural properties of the neural pathways subtending the encoding and maintenance of visual information. Succinctly, due to the crossing of the neural pathways at the level of the geniculate nucleus [42], enhanced activation of the parieto-occipital cortex contralateral to the side of visual stimulus presentation should be observed relative to ipsilateral activation (e.g., [43,44,25]). While such lateralization effects are normally found using electroencephalography, they can hardly be detected using fMRI, which reveals a strong concurrent ipsilateral activation that tends to conceal them (e.g., [45]). Lateralization effects were monitored at each recording channel. For each symmetrical channel pair, ΔHbO and ΔHbR concentration values contralateral to the cued hemifield were calculated by averaging data recorded at left-sided channels when the to-be-memorized colored squares were displayed in the right visual hemifield and data recorded at right-sided channels when the to-be-memorized colored squares were displayed in the left visual hemifield. Ipsilateral ΔHbO and ΔHbR concentration indices were calculated with an analogous algorithm by averaging data at the complementary channels. The concentration of ΔHbT (ΔHbO + ΔHbR [46]) was calculated as an estimate of cerebral blood volume. For each condition, the mean value in the interval between 1 s before and 1 s after the maximum of the hemodynamic response was considered, and a one-tailed t-test was performed to identify the channels showing a significant activation increase relative to the baseline. A second series of one-tailed paired t-tests was conducted to compare the contralateral and ipsilateral conditions. The results of all statistical tests conducted on the concentration values were corrected for multiple comparisons using the false discovery rate method (FDR-BH [47]). The q value specifying the maximum FDR was set to 0.1, such that no more than 10% false positives could be included, on average, in the set of significantly active channels submitted to statistical test. For ΔHbO and ΔHbT, all channels were activated in both conditions, showing that the whole parieto-occipital region investigated was involved in VSTM. Only the channel at B4/D4 (corresponding to the IPS/IOS) showed a significant difference between the two conditions (ΔHbO: p = 0.0093; ΔHbT: p = 0.0084). The relative activation maps are graphically reproduced in Fig. 6, showing the lateralization effect for ΔHbO. Greater concentration values were found in the contralateral condition relative to the ipsilateral condition (mean ΔHbO value in nM ± SD of contralateral vs ipsilateral trials: 160.87 ± 89.88 vs 136.21 ± 86.47; mean ΔHbT value in nM ± SD: 119.74 ± 98.67 vs 103.41 ± 100.1). For ΔHbR, no significant results were found by the statistical analysis, probably due to the poor CNR value. Figure 7 shows the mean response profiles for ΔHbO, ΔHbR and ΔHbT in the IPS/IOS region in the contralateral and ipsilateral conditions.
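For reference, the Benjamini-Hochberg FDR procedure used above can be sketched in a few lines; this is a standard textbook implementation, not the authors' code.

```python
import numpy as np

def fdr_bh(pvals, q: float = 0.1) -> np.ndarray:
    """Benjamini-Hochberg procedure: return a boolean mask of the p-values
    declared significant at false-discovery rate q."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = p.size
    thresh = q * np.arange(1, m + 1) / m       # step-up thresholds
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    sig = np.zeros(m, dtype=bool)
    sig[order[:k]] = True                       # reject the k smallest p-values
    return sig
```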
These results suggest that the increase in BOLD response associated with increases in VSTM load (found in recent fMRI studies) was larger in the contralateral cortex than in the ipsilateral cortex. Such results are also in agreement with recent EEG findings. Equivalent analyses were carried out on band-pass filtered data. Such analyses revealed a less significant activation versus baseline for both the contralateral and ipsilateral conditions in all active channels. The p-values obtained in the contralateral condition are reported in Table 3, and analogous results were obtained in the ipsilateral condition. A less significant difference between the two experimental conditions was found when analyzing band-pass filtered data recorded in the IPS/IOS region (the corresponding p-values are reported in Table 4). Data obtained with Bayesian filtering reveal a more significant activation versus baseline and a more significant difference between the two conditions. The analysis of ΔHbR (not reported) did not reveal significant activation.
Results obtained with the band-pass filter suggest a reduction of both noise and HRF. The results obtained on simulated and real data confirm the capacity of our method to reduce noise while preserving the HRF.
Discussion
fNIRS is an emerging neuroimaging method that can be usefully employed to provide, with limited invasiveness and reasonable laboratory costs, crucial information for the study of cognitive processes. Here, the possibility of estimating the HRF from the fNIRS signal was investigated. Several methods, with different degrees of sophistication and adaptability to general situations, have been proposed in the literature, but CA is still a widely used method. In this work, after having developed a pre-processing procedure, we assessed the performance of a non-parametric Bayesian approach, originally developed and assessed in [24] for a broad class of auditory ERPs.
A key feature of the method is that, under mild assumptions on the signals at play, it can significantly improve the accuracy of the estimates thanks to its ability to establish, for each individual trial, a suitable compromise between the data and the a priori expectations available on HRF smoothness. In particular, the proposed Bayesian filtering approach exploits models of the second-order a priori statistical information on the background fNIRS noise and on the unknown HRF. While such a statistical description of the ongoing fNIRS noise is obtained, trial by trial, by fitting an auto-regressive model against pre-stimulus data, the a priori known smoothness of the unknown HRF is formalized by describing it as the multiple integration of a white noise process. This is a general and flexible way to give an a priori probabilistic description of a physiological signal, and it can be employed for a large class of fNIRS experiments, especially because the model has only one unknown parameter which, for each trial, can be estimated by a well-established smoothing criterion. Good results are achieved in signal-to-noise ratio improvement and physiological noise reduction. A reliable estimation of the HRFs can be reached even when the number of trials is small (N ≈ 50). Synthetic data clearly demonstrated the superiority of the approach over CA and other filtering methods of frequent use (such as the Butterworth band-pass filtering used in this paper). The trial-by-trial filtering strategy and, in particular, the estimation of the noise model from the first seconds of each trial are especially suited when the acquisition of the fNIRS signal requires a long time (tens of minutes), as in the proposed experiment. As a matter of fact, acquisition is structurally prone to be influenced by a number of factors, e.g., movements of the subject, resulting in a general non-stationarity of the signal. Some of the more likely factors include changes in the optical coupling between the subject's head and the fNIRS sensors (the detectors are sensitive to the angle at which the remitted light is received, and the reflectance of the skin surface depends on the angle of incidence) and changes in the physiological components (e.g., changes in heart frequency and in the amplitude of its component in the hemodynamic signal). Hence, a filtering approach that considers the non-stationarity of both the measurement noise and the physiological components of the fNIRS signal is fundamental. On the other hand, the method returns an average response of the HRF via Eq. (7). Describing the HRF by an average profile is widely done in fNIRS signal processing [3,5,7,11,20-23]. As a matter of fact, fMRI investigations (e.g., [48]) have shown that the hemodynamic response is quite consistent within subjects, although there might be some variability between sessions. Given that subjects performed the present experiment in a single uninterrupted session, providing an average HRF profile is acceptable. However, further development of the present work will include the design of a single-trial analysis technique, which will allow us to quantify HRF temporal variability.
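Under the stated assumptions (AR-modeled noise, HRF prior as multiply-integrated white noise, a single regularization parameter), the per-trial estimate reduces to a standard linear Bayesian smoother. The sketch below (Python/NumPy) is illustrative only: it uses an AR(1) noise covariance and a hand-fixed regularization weight gamma in place of the smoothing-criterion estimate of the parameter the text calls λ².

    import numpy as np
    from scipy.linalg import toeplitz

    def bayes_filter(y, a, sigma2, gamma):
        """Linear Bayesian estimate of the HRF u from a single trial y = u + v."""
        n = len(y)
        # Stationary AR(1) noise covariance: sigma2/(1-a^2) * a^|i-j|, a simplified
        # stand-in for the AR model fitted on pre-stimulus data.
        Sigma_v = sigma2 / (1.0 - a**2) * toeplitz(a ** np.arange(n))
        # Second-difference operator F: the prior F u = w (white noise) encodes
        # the HRF as a doubly-integrated white noise process.
        F = np.diff(np.eye(n), n=2, axis=0)
        Sv_inv = np.linalg.inv(Sigma_v)
        # Minimum-variance linear estimate: u = (Sv^-1 + gamma F'F)^-1 Sv^-1 y
        return np.linalg.solve(Sv_inv + gamma * (F.T @ F), Sv_inv @ y)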
The Bayesian filtering method has already been published in [24]. However, its application to the fNIRS signal is original and not straightforward. The resulting overall method (including the pre-processing step) is simple and general. It is based on mild assumptions on the acquired signals, and it can be used in any fNIRS experiment. These features make it easier to use and more flexible than more sophisticated, but more demanding in terms of hypotheses, methods available in the literature. For instance, unlike PCA, the new method does not depend on the number of channels or on their location, and it does not assume the space-time separability of the physiological components from the HRF: these requirements are necessary for PCA, but they could make the analysis less robust [12,49]. The only assumption the present method hinges on is the possibility of describing the a priori expected smoothness of the unknown HRF with a well-established a priori model (multiple integration of a white noise process), which makes the derivation of the single unknown parameter (i.e., λ²) from the data almost immediate, by means of an automatic smoothing criterion. To highlight the opposite case, the method proposed in [8,11] is coupled with a considerably more complex a priori model of the data distribution, where, for instance, the unknown HRF is described by a waveform whose functional properties must all be specified a priori, with the exception of a subset of parameters which are to be estimated via the additional application of a nonlinear Kalman filter. The latter, per se, requires several parameters, e.g., initial states and covariance matrices, to be empirically specified. Of note, the HRF could not be modeled as in the aforementioned papers, since peak amplitude and latency were unknown.
The Bayesian approach was compared to two current standard methods (CA and classical band-pass filtering), demonstrating a sizable improvement in HRF estimation. On simulated data, the estimation error obtained by the proposed method was five times lower than that obtained by band-pass filtering. On real data, the improvement achieved by the present method is confirmed by an increase of 72% in the contrast-to-noise ratio (CNR) with respect to CA. It is worth noting that in real data we cannot totally separate signal and noise. Consequently, the signal band used to compute the CNR may contain not only the HRF, but also noise. This is probably the reason why the CNR obtained by the Bayesian approach is similar to that obtained by band-pass filtering. A more effective reduction of noise by the proposed method is suggested by the reduced standard deviation (SD) of the single trials: a statistical test (ANOVA) revealed a significant effect of the type of filtering, reflecting reduced SD with the Bayesian filtering relative to the band-pass filtering. In addition, the more accurate HRF estimate obtained by the proposed method with respect to that obtained by the band-pass filtering is suggested by the more significant difference between the HRFs of the two experimental conditions, which is consistent with recent findings in fMRI studies.
The results obtained through the developed methodology were shown to provide new insights into the investigation of VSTM. The peak amplitude of the HRFs of each channel and experimental condition can be reasonably estimated and analyzed. Results obtained with the Bayesian filtering approach reveal a lateralization effect that is hardly observable using fMRI but is confirmed by EEG studies. Other experiments will be conducted using fNIRS and the developed filtering procedure to better understand VSTM mechanisms.
Further development of the present work will address the removal of the physiological components whose frequencies overlap the signal band, e.g., the respiratory and, above all, the blood pressure (Mayer) waves.
Fig. 1. Sequence of visual events on each trial of the present experiment (see text for a full description).
Fig. 2. Probe placement on the ICBM152 template (occipital view). (a) Sources (red circles) and detectors (black circles) overlaid on the head surface of the ICBM152-PM template; (b) cerebral projections of sources (white circles) and detectors (black circles) overlaid on the ICBM152 brain template.
Figure 4a displays an example of filtering for a representative trial of a representative subject (subject 1, channel A4, trial 5). The blue line denotes the raw signal y in Eq. (2), while the red dashed line depicts the filtered profile û obtained by Eq. (6). The yellow dashed line illustrates band-pass filtered data. High-frequency physiological components and random measurement noise are notably reduced by the Bayesian filtering, while low-frequency physiological components (i.e., the blood pressure Mayer wave) are reduced by averaging all trials. Figure 4b shows a direct comparison between the average HRF ū (red dashed line) estimated by Eq. (7) in subject 9 from the available N = 37 trials (channel A2) and the HRF profiles estimated via CA (blue line) and band-pass filtering (yellow dashed line).
Fig. 6. Occipital and top views of the statistical maps of the contralateral vs. ipsilateral comparison, for ΔHbO. The maps have been overlaid onto the ICBM152 brain template. For illustrative purposes, the upper part of the figure shows examples of memory arrays with the hemifields including the to-be-memorized colored squares (contralateral to the hemisphere where the enhanced concentration of HbT and HbO was observed in the present study) cued by arrow heads. Note however that colored squares and arrow heads were never displayed synchronously during the experiment (see Fig. 1). | 2018-04-03T05:19:04.175Z | 2010-12-06T00:00:00.000 | {
"year": 2010,
"sha1": "1cf089c753e17b006323ed84bfb2593aeb2dc3b7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.18.026550",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "1cf089c753e17b006323ed84bfb2593aeb2dc3b7",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
51678085 | pes2o/s2orc | v3-fos-license | Prototype foamy virus integrase is promiscuous for target choice
Retroviruses have two essential activities: reverse transcription and integration. The viral protein integrase (IN) covalently joins the viral cDNA genome to the host DNA. Prototype foamy virus (PFV) IN has become a model of retroviral intasome structure. However, this retroviral IN has not been well-characterized biochemically. Here we compare PFV IN to previously reported HIV-1 IN activities and discover significant differences. PFV IN is able to utilize the divalent cation calcium during strand transfer while HIV-1 IN is not. HIV-1 IN was shown to completely commit to a target DNA within 1 min, while PFV IN is not fully committed after 60 min. These results suggest that PFV IN is more promiscuous compared to HIV-1 IN in terms of divalent cation and target commitment.
Introduction
Retroviral integration is the covalent insertion of the viral cDNA genome into the host DNA [1]. This essential step of the retroviral life cycle is performed by the viral enzyme integrase (IN). Following entry to a cell, reverse transcriptase copies the viral genomic RNA to a linear double stranded cDNA. IN must assemble onto the two ends of the viral cDNA before it completes two enzymatic activities. First, IN cleaves two nucleotides from the 3′ ends of the viral cDNA in a reaction termed 3′ end processing. Second, IN covalently joins the recessed 3′ hydroxyls to the host DNA in a single step transesterification reaction termed strand transfer. The two ends of the viral DNA are joined 4-6 bp apart on the host DNA. Repair by host proteins yields the integrated provirus flanked by duplications of host sequence [2].
A major advance in the study of integration was the structure of the full-length prototype foamy virus (PFV) IN [3]. A second structure revealed that the functional integration complex, termed an intasome, is a tetramer of PFV IN complexed with viral DNA ends [4]. These images offer unique insights into the moment of intasome binding to target DNA, but they do not inform the dynamic search mechanism of retroviral intasomes. Single-molecule total internal reflection fluorescence (TIRF) microscopy studies showed that PFV intasomes search target DNA by one-dimensional (1D) rotation-coupled diffusion [5]. However, TIRF studies were not able to exclude the possibility that PFV intasomes also search by three-dimensional (3D) diffusion or intersegmental transfer (IT). Transcription factors and DNA repair factors have been shown to employ a combination of 1D diffusion with 3D diffusion or IT, which may be necessary for searching DNA in the context of chromatin.
Large-scale sequencing of retroviral integration sites reveals that both chromatin elements and sequence play distinct roles in integrase site preference [6-8]. A subtle sequence preference at the integration site varies for each retrovirus [9]. HIV-1 IN favors actively transcribed genes while PFV IN prefers promoters and CpG islands [6]. Host co-factors that direct integration by multiple retroviruses have been identified as chromatin-interacting proteins, which may be the determining factor for chromatin targeting [10-13]. No host co-factor has been identified for PFV IN. HIV-1 IN requires the host co-factor LEDGF/p75 for efficient integration both in vivo and in vitro [6,14]. The integrase binding domain of LEDGF/p75 has been shown to stabilize the HIV IN intasome [14]. In contrast, PFV IN readily forms intasomes in vitro without a stabilizing co-factor [3,15].
While retroviruses display diversity in integration site preferences, they also appear to adopt varying IN multimers in intasomes. PFV intasomes are a tetramer of IN, but retroviral IN octamers and hexadecamers have been reported [4,16-18]. It is unknown if all retroviral intasomes search target DNA by similar mechanisms. HIV-1 IN has previously been shown to commit to a target DNA early in vitro [19]. Addition of a second target DNA at any time following the start of the reaction did not yield integration to the second target. These data suggest that HIV-1 IN does not perform a 3D diffusion or IT search of target DNA. The caveat to these experiments is that the HIV-1 IN co-factor LEDGF/p75 had not yet been identified and was not included in the experiments [20]. LEDGF/p75 appears to bind chromatin in vivo by 1D and 3D diffusion [21]. Whether the HIV-1 intasome search of chromatin is defined by LEDGF/p75 diffusion or by the lack of diffusion by HIV-1 IN is unknown.
Here we explore the interactions of PFV IN with target DNA. We report the role of divalent cations in the integration activity of PFV IN. In contrast to HIV-1 IN, PFV IN was found to readily perform integration to a second DNA up to an hour into a reaction. Finally, the PFV IN preferred integration site sequence was found to neither enhance nor inhibit integration in vitro.
Purification of PFV IN
All chemicals were of the highest grade (Sigma Aldrich). Recombinant PFV IN was purified as described [22,23].
PFV integration reactions were performed in 10 mM HEPES, pH 7.5, 110 mM NaCl, 4 μM ZnCl2, 5 mM MgSO4, and 10 mM DTT, with 0.5 μM PFV IN, 1 μM viral donor DNA, and 50 ng target DNA plasmid in a final volume of 15 μL. Where indicated, 5 mM MgCl2, 5 mM MnCl2, or 5 mM CaCl2 was substituted for MgSO4. Blunt viral donor DNA was used except where indicated. All reagents except target DNA were combined in a 14 μL volume and incubated on ice for 15 min. Target DNA was added, reactions were incubated at 37 °C for 90 min, and stopped by the addition of 0.5% SDS and 1 mg/mL proteinase K. Reactions were incubated at 37 °C for an additional 60 min. Reaction products were separated on 1% agarose in TAE with 0.1 μg/mL ethidium bromide. Gels were scanned with a Typhoon 9410 variable mode fluorescent imager (GE Healthcare) for ethidium bromide and Cy5. Images were analyzed with ImageQuant (GE Healthcare). Data were analyzed by paired t-test (GraphPad Prism).
Divalent cation preference of PFV IN
All retroviral INs require a divalent cation at the active site to assemble and perform both 3′ end processing and strand transfer reactions [1]. Several previous studies of HIV-1 IN evaluated the enzymatic preference for divalent cations [24-27]. HIV-1 IN appears to show a strong preference for manganese during assembly onto the viral DNA ends [26]. In contrast, calcium allows assembly of HIV-1 IN with viral DNA, but does not allow catalysis [24]. Recombinant HIV-1 IN is markedly more enzymatically active in manganese compared to magnesium [27]. HIV-1 IN was reported to be incapable of 3′ processing in the presence of magnesium [19]. The effects of different divalent cations have not been reported for PFV IN.
PFV IN activity was assayed with supercoiled plasmid target DNA [28]. Recombinant PFV IN is added to a Cy5 fluorescently labeled DNA oligomer mimicking the end of the viral cDNA (Fig. 1A). After incubation on ice to allow assembly of intasomes, a target plasmid is added and the reactions are incubated at 37 °C. The reaction products are separated by agarose gel stained with ethidium bromide. The PFV IN integration products are largely concerted integration (CI) of two viral donor DNA oligomers into the plasmid (Fig. 1A). The CI products have the mobility of linearized plasmid. A second integration product occurs when only one viral DNA donor is joined to the target plasmid. In this half-site integration (HSI) reaction, a nick is introduced at the site of strand transfer, relaxing the supercoils. The HSI products have the mobility of relaxed circular plasmid. The agarose gels are imaged for ethidium bromide and Cy5 fluorescence, allowing for identification and quantitation of all DNA forms, including unreacted substrates and reaction products.
We compared PFV IN integration to a supercoiled plasmid DNA in the presence of magnesium, manganese, or calcium (Fig. 1). Published protocols for HIV-1 IN assays often employ MgCl2, but PFV IN assays utilize MgSO4 [19,28]. Both divalent cations were assayed. Using a preprocessed viral DNA donor with recessed 3′ ends, this integration assay does not distinguish between assembly and strand transfer. There was little difference in the accumulation of CI products when either magnesium or manganese was present (p > 0.05). However, CI products in the presence of calcium were reduced to 22% of the products observed in the presence of MgSO4 (p = 0.013). HSI products were also assayed but showed little difference between the divalent cations assayed. The activity of PFV IN in the presence of calcium suggests that this enzyme is more permissive than HIV-1 IN, which has no enzymatic activity in calcium [24]. PFV IN favored CI over HSI in magnesium or manganese, but was more prone to HSI in calcium (HSI in MgSO4 compared to CaCl2, p = 0.036).
PFV integration was also assayed with a blunt viral donor DNA (Fig. 1). This substrate requires PFV IN to perform 3′ end processing prior to strand transfer. These results showed greater differences between the divalent cations than the preprocessed viral donor DNA. HSI products were similar in the presence of MgSO4 or MgCl2 (p = 0.059), but CI products increased by 30% in the presence of MgCl2 (p = 0.036). PFV IN was more active in the presence of the manganese cation, showing a 2.1-fold and 3.2-fold increase of CI (p = 0.017) or HSI (p = 0.006) products, respectively, compared to MgSO4. PFV IN had no activity in the presence of calcium, suggesting that 3′ end processing could not occur. Thus PFV IN may utilize calcium for assembly and strand transfer, but not 3′ end processing. Since PFV IN is functional in magnesium and this divalent cation is more physiologically relevant than manganese, subsequent experiments were performed with magnesium.
Target commitment
Real-time single-molecule experiments show that PFV intasomes may search over 1 kb of linear DNA [5]. It is unknown if a PFV intasome can switch targets by 3D diffusion following an unproductive search. Previous studies of HIV-1 IN suggested that this enzyme commits to a target DNA early in vitro and does not switch targets [19]. We tested the commitment of PFV IN to target DNA (Fig. 2). Integration reactions included two supercoiled plasmids, pMP2 and pcDNA3.1, readily distinguished by agarose gel mobility.
The integration reactions were initiated with one plasmid at 37 °C. At variable times, a second plasmid was added to the reactions. Reciprocal reactions were performed, switching the order of the plasmids added.
PFV integration reactions revealed that simultaneous addition of two plasmids results in integration to both (Fig. 2). CI to the first plasmid in the reaction increased slightly throughout the time course. This observation was true whether the 2858 bp pMP2 or the 5397 bp pcDNA3.1 was the first plasmid added, but the difference was not statistically significant for either plasmid. CI to the second plasmid added to the reaction steadily decreased as the time of addition increased (simultaneous addition compared to 60 min: pcDNA added second, p = 0.025; pMP2 added second, p = 0.039). HSI products were also quantified in these reactions and displayed trends similar to the CI products (simultaneous addition compared to 60 min: pcDNA added second, p = 0.012; pMP2 added second, p = 0.016). Even when reactions had been incubated for 60 min, integration to the second plasmid was still readily detected. This suggests that a significant fraction of PFV IN does not commit to a target DNA within 60 min, in stark contrast to HIV-1 IN.
Integration site sequence preference effects on integration
A unique yet subtle sequence preference for integration sites has been reported for different retroviruses [9]. Since PFV intasomes interact with naked DNA by 1D rotation-coupled diffusion, similar to the lac repressor search for a sequence-specific site, we tested the effect of adding the PFV IN preferred integration site GTGCTAGCAC to target DNA [4,29]. The preferred integration site was subcloned into two different plasmids, yielding pMP2-PFV and pcDNA3.1-PFV. PFV integration was compared between the parent plasmids and the preferred-site plasmids (Fig. 3). Both blunt and preprocessed viral oligomer donors were tested in the assay. There were no apparent differences in the accumulation of integration products in the presence of the preferred sequence (p > 0.05).
Retroviral INs may have a greater preference for structural features, particularly bent DNA, than for sequence [30]. IN is known to favor the bent structure of supercoiled DNA compared to nicked, relaxed circles or linear DNA [5]. To evaluate integration efficiency with the sequence preference in the absence of supercoils, the plasmids were relaxed with a nicking endonuclease. In this context, there was no change in integration efficiency to the plasmids with the preferred sequence. These data suggest that the PFV IN preferred integration site does not enhance PFV integration.
Although the integration sequence preference did not enhance integration efficiency in target DNAs, excess small double-stranded DNA oligomers were tested as competitors of integration. Double-stranded DNA oligomers encoding the PFV preferred integration site sequence were titrated into integration assays (Fig. 4). The concentration of PFV IN was 500 nM, suggesting that the maximal concentration of tetrameric PFV intasomes in these reactions was 125 nM. The dsDNA oligomers were added at 5-, 50-, 500-, or 5000-fold molar excess relative to monomeric PFV IN. PFV CI and HSI were unaffected by the addition of dsDNA oligomers at any concentration (p > 0.05). Taken together, the preferred sequence of PFV IN does not enhance or inhibit integration. It seems that structural elements are more important than sequence for integration targeting.
Discussion
Extensive mapping of integration sites has revealed that retroviral INs have unique preferences for sequence and chromatin features [6-8]. Bulk biochemical assays indicate that integrases prefer bent DNA [30]. Genomic DNA in the context of chromatin is the natural substrate for retroviral integrases, making bent DNA an obvious target. However, even in the context of nucleosomes, integration does not appear to be random. A cryo-EM structure of the PFV intasome bound to a stable nucleosome revealed a single binding site [31]. Previous studies of IN with nucleosome substrates suggest that not all exposed DNA major grooves serve as targets for integrases [32].
Here we explore the dynamics of retroviral integrase interaction with target DNA. Intasomes may search for a target site by a variety of mechanisms, including 1D diffusion (sliding), 3D diffusion (jumping), or IT. We have previously shown that PFV intasomes readily slide on DNA, but could not discern 3D diffusion or IT [5]. In this study, PFV IN was able to integrate to a second target DNA plasmid after an hour. While these data do not prove that PFV IN performs 3D diffusion or IT, they are suggestive of these mechanisms. The PFV IN observations are in contrast to HIV-1 IN, which did not integrate to a second DNA after 1 min and also appeared incapable of sliding on target DNA [19]. The HIV-1 IN data argue that this protein does not search by 3D diffusion or IT. During an HIV-1 infection, the movement of HIV-1 integration complexes may be dictated by LEDGF/p75, which has been shown to search chromatin by at least 3D diffusion and likely IT [21]. PFV IN does not have a host co-factor and displays far greater mobility than reported for HIV-1 IN. Further experiments will be necessary to prove PFV IN 3D diffusion and/or IT searching. Similarly, more sophisticated analyses, such as single-molecule TIRF, may reveal that HIV-1 intasomes are capable of a more comprehensive search of target DNA than previously reported.
Each retrovirus has a subtle sequence preference at the integration site, only revealed after sequencing hundreds of integration sites [9]. Throughout the integration sequence preference, each base preference is independent of the preference for surrounding bases. Thus there is no interdependent relationship among base choices throughout the integration site preference. We explored the ability of the PFV preferred integration site sequence to either enhance or hinder integration. Adding the PFV integration site preference to two different plasmids did not increase the accumulation of integration products. Considering that DNA structure might be more important than sequence preference, the plasmids were relaxed with a nicking endonuclease to remove the preferred supercoiled structure. The presence of the integration sequence preference in relaxed plasmids also did not increase integration product accumulation. These data suggest that the preferred sequence at PFV integration sites does not enhance integration to a naked DNA target plasmid in vitro.
Double-stranded DNA oligomers encoding the PFV integration sequence preference were added to integration reactions to possibly inhibit integration. Such DNA oligomers were bound by the PFV intasome in a structure [4]. Even at a large molar excess of the DNA oligomers, the accumulation of CI and HSI products was unaffected. The importance of retroviral integration site sequence preference appears to be minimal during integration in vitro.
[Fig. 1 caption, in part] Relaxed circles (RC) and supercoiled (SC) plasmid mobility is indicated. C) CI and HSI products were quantified from the Cy5 fluorescent images of agarose gels and expressed relative to the CI product in MgSO4. Error bars indicate the standard deviation between at least three independent experiments. DNA marker is in kb.
Fig. 2. PFV IN commitment to target DNA
A) PFV IN and Cy5-labeled viral donor DNA were added to the 2.9 kb plasmid pMP2 or the 5.4 kb plasmid pcDNA, indicated by + symbols. The second plasmid was added 5-60 min after the start of the reaction, indicated by numbers. Simultaneous addition of the two plasmids at the start of the reaction is indicated by 0. Integration reactions ran a total of 90 min from the addition of the first plasmid. Reaction products were separated by agarose gel, stained with ethidium bromide, and scanned for Cy5 (top gel image) and ethidium bromide (bottom gel image). B) CI and C) HSI integration products were quantified and are expressed relative to the total integration observed in the single-plasmid reaction. Error bars indicate the standard deviation between at least three independent experiments. DNA marker is in kb. | 2018-08-01T20:48:44.278Z | 2018-07-14T00:00:00.000 | {
"year": 2018,
"sha1": "5da305e5b5d367892a80c47ec236615f95a39778",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.bbrc.2018.07.031",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "5da305e5b5d367892a80c47ec236615f95a39778",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
250055991 | pes2o/s2orc | v3-fos-license | Unintended beneficial effects of COVID-19 on influenza-associated emergency department use in Korea
Background Non-pharmaceutical interventions, including hand hygiene, wearing masks, and cough etiquette, and public health measures such as social distancing, used to prevent the spread of coronavirus disease 2019 (COVID-19), could reduce the incidence rate of respiratory viral infections such as influenza. We evaluated the effect of COVID-19 on the incidence of influenza in Korea. Methods This retrospective study included all patients who visited five urban emergency departments (EDs) during the influenza epidemic seasons of 2017–18, 2018–19, and 2019–20. Influenza was defined as ICD-10 codes J09, J10, and J11, determined from ED discharge records. The weekly incidence rates of influenza per 1000 ED visits during the 2019–20 season, when COVID-19 became a pandemic, were compared with those of 2017–18 and 2018–19. The actual incidence rate of the 2019–20 season was compared with the predicted value using a generalized estimation equation model based on 2017–18 and 2018–19 data. Results The weekly influenza incidence rate decreased from 101.6 to 56.6 between week 4 and week 5 in 2020 when the first COVID-19 patient was diagnosed and public health measures were implemented. The weekly incidence rate during week 10 and week 22 of the 2019–20 season decreased most steeply compared to 2017–18 and 2018–19. The actual influenza incidence rate observed in the 2019–20 season was lower than the rate predicted in the 2017–18 and 2018–19 seasons starting from week 7 when a COVID-19 outbreak occurred in Korea. Conclusions The implementation of non-pharmaceutical interventions and public health measures for the COVID-19 epidemic effectively reduced the transmission of influenza and associated ED use in Korea. Implementing appropriate public health measures could reduce outbreaks and lessen the burden of influenza during future influenza epidemics.
Introduction
The outbreak of coronavirus disease-2019 (COVID-19), which was caused by severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2), was reported in Wuhan, China in December 2019, and the first confirmed patient in the Republic of Korea was reported on January 20, 2020 [1-3]. After this first case, the Korea Disease Control and Prevention Agency (KDCA) raised the alert level of the infectious disease crisis from "blue" to "yellow" as more COVID-19 patients were identified, including recommendations that people should wear face masks and practice hand hygiene (Fig. 1) [4]. When there was an outbreak in Daegu metropolitan city, the KDCA raised the infectious disease crisis warning to "red" on February 24, 2020, and began to implement social distancing in public [5,6]. These non-pharmaceutical interventions (NPIs) and public health measures were aimed at preventing the spread of COVID-19 through respiratory droplets. Along with the behavioral changes following the NPIs and public health measures recommended by the KDCA, these measures were expected to influence the transmission of other respiratory infectious diseases as well.
Influenza is caused by a respiratory virus that spreads through droplet-based transmission and is highly contagious and prevalent in winter [10]. Influenza is associated with a high mortality rate owing to complications such as pneumonia in high-risk groups including elderly patients aged ≥65 years, pregnant women, and immunosuppressed patients [11-13]. It is estimated that 3-5 million people have a severe influenza infection each year, and 250,000-500,000 people die from influenza worldwide [14]. In Korea, it is estimated that >23,000 people are hospitalized and >1200 people die from influenza per year [15]. To reduce the burden due to influenza, it is important to establish active prevention policies and infection control measures. NPIs such as hand hygiene, cough etiquette, and the use of masks are known to lower the risk of acquiring influenza infection, as well as to delay and reduce the peak of the epidemic [16,17].
Before COVID-19, even though public health agencies had invested physically and financially in promoting NPIs for controlling influenza epidemics, these measures were neither mandatory nor widely accepted. However, during the COVID-19 pandemic, the KDCA has mandated mask use and social distancing behaviors, such as closing schools and suspending mass gathering events. The majority of people implemented NPIs and maintained social distancing. We considered that such a large-scale behavioral change occurring in society could have an unintended effect on other respiratory infectious diseases, such as influenza. Influenza surveillance systems based in primary clinics can monitor influenza-like illnesses (ILIs) but usually do not include severe patients. Meanwhile, influenza surveillance based in the emergency department (ED), which includes severe patients, could be more objective owing to the use of diagnostic tests.
We aimed to analyze the beneficial effects of the COVID-19 pandemic-related increase in NPIs and public health measures on the incidence of influenza, using data from EDs in selected Korean hospitals.
Study setting and design
This retrospective observational study was conducted in the EDs of five urban teaching hospitals in Korea. The study was approved by the institutional review board of our institution, and the need for informed consent was waived because of the retrospective design of the study (IRB number: 2020-05-017).
The annual number of patients treated in each ED ranges from 10,000 to 80,000, varying from hospital to hospital. From 2017 to 2019, the median number of patients treated in the EDs of the five hospitals was 41,218. The population of the area where two hospitals are located is 9.7 million, and the populations of the areas where the other three hospitals are located are 890,000, 550,000, and 280,000, respectively. When patients are discharged from the ED, they are assigned a diagnosis based on an International Classification of Diseases (ICD)-10 code. Influenza can be diagnosed through an antigen test using an immunochromatographic assay (BD Veritor System for Rapid Detection of Flu A + B) [18]. However, during an influenza epidemic declared by the KDCA, it can also be diagnosed based on clinical symptoms only. An influenza epidemic is declared by the KDCA when the average proportion of patients with ILIs is more than two standard deviations above the average proportion of patients in the non-epidemic period over the past 3 years (a criterion used since 2006). The influenza epidemic is declared over by the KDCA after review, if the average proportion of patients with ILIs remains below the standard for declaring an epidemic for 3 consecutive weeks [19]. During the 2018-19 epidemic, the first peak was dominated by type A influenza and the second peak by type B influenza. In the 2019-20 season, when COVID-19 became a pandemic, the influenza epidemic was declared at week 46, peaked at week 52, and concluded at week 13 of 2020.
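The epidemic criterion described above (non-epidemic mean ILI proportion over the past 3 seasons plus two standard deviations) can be expressed compactly. The sketch below (Python/pandas) uses hypothetical file and column names; it is illustrative only, not KDCA code.

    import pandas as pd

    df = pd.read_csv("ili_surveillance.csv")  # hypothetical weekly ILI data
    # assumed columns: season, week, ili_proportion, epidemic_period (bool)

    past = df[df["season"].isin(["2016-17", "2017-18", "2018-19"])]
    baseline = past.loc[~past["epidemic_period"], "ili_proportion"]
    threshold = baseline.mean() + 2 * baseline.std()  # mean + 2 SD criterion

    current = df[df["season"] == "2019-20"].copy()
    current["epidemic"] = current["ili_proportion"] > threshold
    print(threshold, current[["week", "epidemic"]].head())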
Study participants
For convenience, in this study we included all patients who visited the five EDs from week 40 of each year to week 22 of the following year, considering the start and peak of each influenza epidemic period in 2017-18, 2018-19, and 2019-20. We excluded patients who visited the ED only to obtain their medical records. All patients were divided into three phases based on their visiting week. The first phase was defined as patients who visited from weeks 40 to 52, when the number of influenza patients increased; the second phase was defined as patients who visited from weeks 1 to 9 of the following year, when the number of patients gradually decreased; and the third phase was defined as patients who visited from weeks 10 to 22, when the second epidemic peak occurred (Fig. 2).
Variables
Information on patients was extracted anonymously from the electronic records of the hospitals. A diagnosis of influenza was defined as ICD-10 codes J09, J10, and J11, determined from ED discharge records. For body temperature, we used the first measurement taken immediately upon the ED visit. Fever was defined as a body temperature of >37.5 °C according to the suspected symptoms of COVID-19 defined by the KDCA. COVID-19 pandemic-related information was collected from announcements made by the KDCA.
Statistical analysis
The demographic findings for each season were described. Categorical variables are reported as counts and percentages, and continuous variables as medians with first and third quartiles. The weekly incidence rates of influenza per 1000 ED visits were calculated for each season. The analysis was performed according to the phases of the period considered in this study. The difference in the incidence rate between seasons for each phase was expressed as the incidence rate ratio with a 95% confidence interval (95% CI). A generalized estimation equation model with a Poisson distribution was used to compare changes in the weekly incidence rate for each season and to predict the trend of the influenza epidemic in the 2019-20 season based on the data of influenza incidence rates in the 2017-18 and 2018-19 seasons [20]. β denotes the change in the weekly incidence rate. Since the weekly data available for each phase cover a short period, we estimated changes in weekly incidence rates over time using a linear term for time. All statistical analyses were performed using SAS software (version 9.4; SAS Institute Inc., Cary, NC, USA).
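As an illustration, a generalized estimation equation model of this kind can be fit with Python's statsmodels: weekly influenza counts are modeled with a Poisson family, log(ED visits) as an offset (so the model describes the rate per ED visit), season as the cluster variable, and a linear term for week. The data file and variable names are hypothetical; the authors used SAS, so this is only a sketch of the same model form.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("weekly_influenza.csv")
    # assumed columns: season, week, flu_cases, ed_visits
    df["rate_per_1000"] = 1000 * df["flu_cases"] / df["ed_visits"]

    model = smf.gee(
        "flu_cases ~ week",              # linear term for time
        groups="season",                 # season as the cluster variable
        data=df,
        family=sm.families.Poisson(),
        offset=np.log(df["ed_visits"]),  # models the rate per ED visit
    )
    result = model.fit()
    print(result.summary())  # the 'week' coefficient corresponds to beta in the text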
Results
During the study period, 463,633 patients visited the EDs of the participating hospitals (Table 1). There were 147,295 ED visits in the 2019-20 season; in the 2018-19 and 2017-18 seasons, there were 162,315 and 154,023 ED visits, respectively. The distribution of age and sex of patients was similar during each phase. However, in the third phase of the 2019-20 season, wherein the number of COVID-19 patients surged nationwide, the total number of ED visits was lower, and the median patient age was older than in the previous two seasons. The proportion of patients whose body temperature was >37.5 °C was the lowest (14.7%) in the third phase of the 2019-20 season. Fig. 3 shows the weekly distribution of influenza-diagnosed patients by hospital and season. Fig. 4 shows the weekly incidence rate of influenza per 1000 ED visits by season. In the first phase (weeks 40 to 52), the influenza incidence rate increased weekly in every season. In the second phase (weeks 1 to 9), the influenza incidence rate decreased in every season. In particular, in the 2019-20 season, the weekly influenza incidence rate per 1000 ED visits sharply decreased from 101.6 to 56.6 between weeks 4 and 5. In week 4, the first COVID-19 patient was diagnosed in Korea, and the KDCA raised the alert level from "blue" to "yellow" with emphasis on practicing hand hygiene, following cough etiquette, and wearing masks. When an outbreak of COVID-19 occurred in Daegu metropolitan city in week 9, the KDCA raised the alert level from "orange" to "red" and recommended social distancing measures such as maintaining a 2-m distance between people, refraining from going out, restricting movement, suspending mass gathering events, and closing schools. After week 9 of 2020, the influenza incidence rate per 1000 ED visits remained below 2, and there was no second epidemic wave, unlike in the 2018-19 season. The ratio of influenza incidence rates per 1000 ED visits between seasons was evaluated for the third phase, in which social distancing was emphasized, to compare the difference in incidence rate by season. Compared with the third phase of the 2018-19 and 2017-18 seasons, the incidence rate ratios of the third phase of the 2019-20 season were 0.03 and 0.14, respectively, a statistically significant difference (p < 0.01).
The results of evaluating the weekly change in the incidence rate during the second phase by season show that the weekly incidence rate decreased in all seasons, most steeply in the 2019-20 season. The weekly incidence rate of the 2019-20 season in the second phase was predicted based on the trend of incidence rates in the 2017-18 and 2018-19 seasons. For this purpose, a generalized estimation equation model was used (Fig. 6). The bars of the predicted weekly influenza incidence rate in Fig. 6 indicate the confidence intervals. From week 7, the actual influenza incidence rate observed in the 2019-20 season was lower than the value predicted from the 2017-18 or 2018-19 seasons.
Discussion
We investigated the effect of the COVID-19 epidemic on reducing the incidence of influenza-associated ED use by analyzing ED-based data, comparing the changes in the weekly incidence rate, and predicting the trend of the influenza epidemic. The weekly influenza incidence rate per 1000 ED visits decreased sharply after the change in the alert level announced by the government. The weekly change in the incidence rate in the second phase was the most dramatic in the 2019-20 season compared with the 2018-19 and 2017-18 seasons. From week 7 of 2020, the actual influenza incidence rate observed was lower than the incidence rate predicted from the 2017-18 and 2018-19 seasons.
Our findings suggest that public health measures, including social distancing and the use of personal protective measures, are associated with a reduced spread of influenza. When the KDCA raised the level of the infectious disease crisis alert, it recommended and implemented behavioral changes in the population and a social distancing policy. The government emphasized mandatory use of masks in public places, following cough etiquette, practicing hand hygiene, staying at home after the onset of respiratory symptoms, refraining from nonessential social activities, and postponing school and preschool openings. Fig. 7 shows the COVID-19 incidence, ED visits, and population mobility derived from data published by Statistics Korea based on mobile big-data analysis [21]. Population mobility was defined, using telecommunication-based mobility data, as visiting a city other than the one in which the person lives and staying there for >30 min. Population mobility started to decline from week 8, when the number of confirmed COVID-19 cases surged and the government raised the alert level of the infectious disease crisis. ED use also declined when the number of confirmed COVID-19 cases surged. This shows that people were influenced by government policies and quarantine rules. The official influenza epidemic period of the 2019-20 season as defined by the KDCA ended 12 weeks earlier than that of the previous season [19]. During 2020-2021, the KDCA did not declare an influenza epidemic [19]. These findings support our results.
Wearing masks showed protective effects against influenza in randomized controlled studies in community and health care facility settings [22,23]. Hand hygiene demonstrated a preventive effect in laboratory settings [24,25]. A meta-analysis showed a reduction in the incidence rate of respiratory illness with the practice of hand hygiene [26]. In these previous studies, the effectiveness of NPIs in reducing the incidence of influenza was similar to the results of our study, but they were conducted in laboratory or small community settings. In Texas, United States, closing schools showed potential benefits during an influenza epidemic [27]. Social distancing measures in workplaces delayed and reduced the peak influenza attack rate, and were more effective when combined with NPIs, as shown in a meta-analysis [28]. Public health measures were adopted to reduce the incidence of COVID-19, and several studies have investigated the effects of these measures on influenza at a national or regional level using national influenza surveillance databases. In Hong Kong, NPIs resulted in a 44% reduction in transmissibility [7]. Compared with the three preceding seasons, Singapore recorded a decline in daily influenza cases and influenza positivity in the 2019-20 season, and China showed a reduced rate of influenza transmission during the 2019-20 season after implementing NPIs [8,29]. Olsen et al. also reported a reduction in the positivity of influenza tests in the United States and decreased influenza activity in Australia, Chile, and South Africa [30]. In Korea, two studies used a surveillance system database and reported an early end of the influenza epidemic and a decreased peak rate of influenza activity and influenza hospitalization cases [31,32]. Korea recommended NPIs and implemented a social distancing policy, including postponing school openings and suspending mass gathering events, after the start of the COVID-19 epidemic. Our study shows a statistically significant reduction in the influenza incidence rate observed in the 2019-20 season compared with that predicted from the previous two seasons.
Our study has a few limitations. COVID-19 could have affected healthcare-seeking behaviors. The government encouraged people with respiratory symptoms to visit a hospital, but patients with mild disease became reluctant to visit the hospital because of concerns about contact with confirmed COVID-19 patients. Because this study used ED-based data, the change in hospital visits could also manifest as a change in ED visits; people may have visited primary clinics or outpatient clinics other than EDs. This change could affect the incidence rate of influenza among ED visits. To reduce the effect of this decrease in ED visits, when analyzing the incidence rate and the associated trend, we used the number of influenza patients per 1000 ED visits, not the absolute number of influenza patients diagnosed in the ED. In our study, it was difficult to determine whether the practice of influenza testing used in the five participating EDs was consistent from year to year. In particular, it is not known whether influenza testing practices changed after the start of the COVID-19 epidemic. However, even after the COVID-19 epidemic began, the guidelines for conducting influenza testing at the participating hospitals and the standards set by the government for diagnosing influenza did not change. Because this was an observational study, we could not identify which measures were potentially effective in reducing the incidence of influenza. We were also unable to ascertain how well social distancing or behavioral changes were actually practiced; this would require further research.
Conclusions
Our study suggests that implementing NPIs and public health measures to control the spread of COVID-19 had a substantial impact on reducing influenza and related ED use in Korea. Implementing appropriate NPIs and public health measures could reduce outbreaks and lessen the burden of influenza during future influenza epidemics.
Funding
This research was supported by the Hallym University Research Fund 2020 (HURF-2020-31). The sponsor of this study was not involved in the data collection, analysis and interpretation of data; nor in the writing of the manuscript and in making the decision to submit the manuscript for publication.
Conflicts of interest
The authors have no potential conflicts of interest to disclose. | 2022-06-27T17:25:58.058Z | 2022-06-01T00:00:00.000 | {
"year": 2022,
"sha1": "164ff559819d711bcd4c94c39fbfed6093a7a39e",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.ajem.2022.06.039",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "445db8e07466218232b910645850e4cde7cd5714",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221144970 | pes2o/s2orc | v3-fos-license | Characteristics of Twitter Use by State Medicaid Programs in the United States: Machine Learning Approach
Background Twitter is a potentially valuable tool for public health officials and state Medicaid programs in the United States, which provide public health insurance to 72 million Americans. Objective We aim to characterize how Medicaid agencies and managed care organization (MCO) health plans are using Twitter to communicate with the public. Methods Using Twitter’s public application programming interface, we collected 158,714 public posts (“tweets”) from active Twitter profiles of state Medicaid agencies and MCOs, spanning March 2014 through June 2019. Manual content analyses identified 5 broad categories of content, and these coded tweets were used to train supervised machine learning algorithms to classify all collected posts. Results We identified 15 state Medicaid agencies and 81 Medicaid MCOs on Twitter. The mean number of followers was 1784, the mean number of those followed was 542, and the mean number of posts was 2476. Approximately 39% of tweets came from just 10 accounts. Of all posts, 39.8% (63,168/158,714) were classified as general public health education and outreach; 23.5% (n=37,298) were about specific Medicaid policies, programs, services, or events; 18.4% (n=29,203) were organizational promotion of staff and activities; and 11.6% (n=18,411) contained general news and news links. Only 4.5% (n=7142) of posts were responses to specific questions, concerns, or complaints from the public. Conclusions Twitter has the potential to enhance community building, beneficiary engagement, and public health outreach, but appears to be underutilized by the Medicaid program.
Introduction
Approximately 20% of online adults use Twitter [1], a social media platform that allows users to share short messages ("tweets"). With over 500 million tweets per day on topics including health-related issues, Twitter is a potentially valuable tool for public health officials to engage the public. Prior studies suggest that overall social media use is independent of educational attainment, race/ethnicity, income, and health care access [2]. Twitter use is also significantly higher in young adults compared with older age groups, which maps well to the Medicaid population, of which 93% are aged <65 years and 80% are aged <45 years [3]. However, evidence suggests that these platforms are currently underutilized. A minority of local public health departments and federal health agencies in the United States use Twitter [4], and these accounts typically have low public engagement [5].
Twitter may be particularly useful for state Medicaid programs, which together constitute the single largest source of public health insurance in the United States, covering 72 million children, older adults, people with disabilities, and low-income populations. Medicaid is currently undergoing a variety of program and policy changes, including eligibility expansion to low-income childless adults; establishment of work requirements for some enrollees; delivery system reforms like accountable care organizations and value-based payment models; and addressing emerging public health priorities like substance use disorder treatment. Given this context, there are significant opportunities for state agencies to better understand, respond to, and engage with the diverse needs of Medicaid-eligible and Medicaid-enrolled populations [6] and to communicate with the general public as a key stakeholder. Despite this potential, to our knowledge no prior study has examined the extent to which Medicaid agencies and health plans are using Twitter to communicate with the public.
Methods
We identified active Twitter profiles of state Medicaid agencies and Medicaid managed care organizations (MCOs), which are health insurance plans that contract with states to provide Medicaid health benefits and related services. We included the latter group because more than 70% of Medicaid beneficiaries receive their care through comprehensive managed care plans. To identify Medicaid MCOs, we used the Kaiser Family Foundation's publicly available Medicaid Managed Care Market Tracker [7], which lists all Medicaid MCOs and their parent firms across all 50 states and the District of Columbia. For each state, we searched Twitter, agency websites, and Google to identify and verify relevant Medicaid agency and MCO accounts. Our focus was on state-level Medicaid programs and plans, so we excluded accounts of higher-level governmental agencies (eg, Department of Health and Human Services), parent firms of managed care plans operating across multiple states, and professional accounts of individual Medicaid administrators and leadership.
We used the Twitter public application programming interface [8] to collect a total of 158,714 public posts by the identified accounts, spanning March 2014 through June 2019. To conduct content analysis on these tweets, we manually coded 800 tweets and developed a coding scheme with 5 broad categories of tweet content (Table 1). Three coders independently coded identical samples of tweets, and resolved disagreements and updated the coding guidelines via discussion. Two coders then reviewed a new set of 5338 tweets (excluding retweets), yielding an intercoder agreement of 0.78 using Cohen's kappa over 998 overlapping tweets. Tweets that could not be categorized as belonging to one of the 5 categories were labeled "Other" (n=198).
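For reference, the agreement statistic mentioned above is a one-liner with scikit-learn; the two label lists below are hypothetical stand-ins for the 998 overlapping coded tweets.

    from sklearn.metrics import cohen_kappa_score

    coder_a = ["outreach", "news", "policy", "outreach"]   # hypothetical codes
    coder_b = ["outreach", "news", "outreach", "outreach"]
    print(cohen_kappa_score(coder_a, coder_b))  # intercoder agreement (kappa)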
We used the coded tweets to train and evaluate supervised machine learning algorithms, experimenting with four approaches: naive Bayes, support vector machines, random forests, and an ensemble of these three classifiers, which combined the predictions of the independently trained classifiers using predetermined rules [9]. The ensemble classifier obtained the best performance when evaluated using the manually annotated labels, with an accuracy of 74.1%, and therefore we used this classifier to automatically classify all the posts in our data set.
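A minimal sketch of this classification setup in Python with scikit-learn is shown below. The feature extraction (TF-IDF here) and the majority-vote rule are assumptions for illustration; the study's exact features and ensemble rules are not specified beyond the classifier families named above.

    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.svm import LinearSVC
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier

    ensemble = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        VotingClassifier(
            estimators=[
                ("nb", MultinomialNB()),
                ("svm", LinearSVC()),
                ("rf", RandomForestClassifier(n_estimators=200)),
            ],
            voting="hard",  # majority vote; LinearSVC exposes no predict_proba
        ),
    )
    # ensemble.fit(coded_texts, coded_labels)     # ~5000 manually coded tweets
    # labels = ensemble.predict(all_tweet_texts)  # classify all 158,714 posts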
Results
We identified 96 Twitter accounts, including 15 state Medicaid agencies and 81 Medicaid MCOs (out of 51 total state Medicaid agencies and 323 Medicaid MCOs). Among these accounts, the mean number of followers was 1784 (range 2-38,352, median 1434), the mean number of those followed was 542, and the mean number of posts was 2476. Twitter accounts had been active for a mean of 79 months, and approximately 39% of tweets came from just 10 accounts. Of all posts (N=158,714), 39.8% (n=63,168) were classified as general public health education and outreach; 23.5% (n=37,298) constituted outreach about specific Medicaid policies, programs, services, or events; 18.4% (n=29,203) contained relationship-building content including organizational promotion of staff and activities; and 11.6% (n=18,411) contained general news and news links (Table 1). Only 4.5% (n=7142) of the tweets were customer service responses to specific questions, concerns, or complaints from other users. Additionally, 2.2% (n=3492) of the tweets did not fit within these categories and were classified as "Other." The majority of posts received little or no engagement from the public, with only 1% of tweets having 9 or more likes, or 6 or more retweets. Tweets with more active engagement were more likely to contain words that conveyed actions directed at the public, like "email," "assist," "send," and "contact" (see sample tweets in Table 1).

Table 1 (excerpt). Tweet content categories, counts (%), sample tweets, and descriptions:
- General news and announcements: 18,411 (11.6%). Sample tweet: "New post (Three regional agencies team up to support MassHealth consumers with disabilities, complex medical needs) has been published on Advocate News Online - https://t.co/JMfIU0sdhH". Description: press releases, links to news articles, general announcements.
- Customer service and responses to constituents: 7142 (4.5%). Sample tweet: "@[redacted] Renewals are done yearly. You will be notified when it's time to renew your coverage." Description: responses from the plan or agency directed toward a specific question or complaint raised by a consumer.
a Note that 2.2% (n=3492) of the tweets did not fit within these categories and were classified as "Other," which is not shown in this table.
Principal Findings
Social media is one way for Medicaid programs to reach constituents and to launch low-cost health promotion and public engagement campaigns in the face of resource constraints. Our results suggest that despite Twitter's reach among the general public, it is underutilized in the Medicaid program, compared to other public health organizations [10,11].
Twitter has the potential to enhance community building and engagement, which is increasingly a priority for Medicaid programs. Prior work found that two-way communication on Twitter between public health entities and constituents led to an increase in action and awareness that, in turn, resulted in an improvement in community health [12]. However, many public health entities use social media simply to broadcast information without engaging audiences [4,10]. While we found that some Medicaid programs are using Twitter, the majority have relatively few followers and overall low engagement with the public. A number of studies point to methods to increase public engagement on Twitter. A study of US children's hospitals using social media found improvements in engagement when posts included pictures and content featuring patient narratives and community partnerships [13]. Other studies in the health care sector have found increased engagement with posts that feature hashtags, URLs, and user mentions [4].
Our study has limitations. Although we used multiple search strategies to verify accounts, we may be missing some active Medicaid accounts. We also excluded a number of umbrella state agencies on Twitter whose activities may include some degree of Medicaid program oversight and outreach, which may lead to underestimation of Medicaid agency presence and activity. Additionally, we used machine learning algorithms to automate content analysis of a large set of tweets, and there may be some misclassification of text across coding categories. Finally, our study focused on Twitter and is not generalizable to other social media platforms that Medicaid agencies and health plans may be using to engage with the public.
Conclusions
Changes to the Medicaid program have accelerated a number of efforts to increase consumer engagement around disease and care management, cost-sharing, and health care behavior change. As state Medicaid programs confront important public health challenges and expand to serve new populations, social media represents an important and underutilized tool for engaging both Medicaid-eligible individuals and the general public. Future research is needed to understand how social media platforms like Twitter can help these programs improve community engagement around public health programming and interventions.
Characterization of a nonglycosylated single chain urinary plasminogen activator secreted from yeast.
Using site-directed mutagenesis, we have changed the asparagine in human single-chain urinary plasminogen activator (u-PA) at position 302 to an alanine. This alteration removes the only known amino acid residue glycosylated in the protein. The single-chain u-PA containing an alanine residue at position 302 instead of asparagine (scu-PA(N302A)) cDNA gene was expressed in the yeast Saccharomyces cerevisiae. Secretion of the protein product into the culture broth was achieved by replacing the human secretion signal codons with those from yeast invertase, adding a yeast promoter from the constitutively expressed glycolytic genes triosephosphate isomerase or phosphoglycerate kinase, and integrating multiple copies of these transcriptional units into the genome of yeast strains carrying the "supersecreting" mutation ssc1. When fermented in a fed-batch mode, these recombinant baker's yeast strains secreted scu-PA(N302A) in a strongly growth-associated manner. Greater than 90% of the u-PA found in the culture broth was in the single-chain form. Scu-PA(N302A) was purified to homogeneity using two chromatography steps. The purified protein had a molecular weight of 47,000 as determined by sodium dodecyl sulfate-polyacrylamide gel electrophoresis and lacked any detectable N-linked glycosylation. The in vitro fibrinolytic properties of scu-PA(N302A) were found to be essentially equivalent to those of natural single-chain u-PA derived from the human kidney cell line TCL-598. Since scu-PA(N302A) lacks the immunogenic N-linked carbohydrate pattern of yeast, it may be a useful therapeutic agent which can be produced economically by yeast fermentation.
Two immunologically distinct plasminogen activators, urinary plasminogen activator (u-PA) and tissue plasminogen activator (t-PA), have been isolated from human tissue (1-3) and cDNA genes for both have been cloned (4, 5). Both enzymes exhibit kinetic parameters consistent with physiological roles in normal in vivo hemostasis.
Both share a number of structural features, including "kringle," epidermal growth factor-like, and serine protease catalytic domains, as well as plasmin-susceptible peptide bonds in the region between the kringle and catalytic domains. Both molecules are cleaved by plasmin to two-chain forms in which the two chains are held together by at least one disulfide bond. Both t-PA and scu-PA exhibit fibrin selectivity in their activation of plasminogen in both in vitro and in vivo model systems and also in man (6-9). In other words, they catalyze the formation of active plasmin more efficiently in the presence of a fibrin clot than in the circulation.
Thus, both molecules spare circulating thrombogenic factors such as fibrinogen and α2-antiplasmin.
Despite these similarities, the two molecules are also known to differ in a number of characteristics: (a) t-PA but not u-PA exhibits significant affinity for fibrin and soluble fibrin fragments (14); (b) u-PA, through its epidermal growth factor-like domain, binds to a cell surface receptor while t-PA exhibits no measurable affinity for the u-PA receptor, but may bind to a different receptor on HUVEC cells (10, 11); (c) t-PA contains two domains not found in the u-PA molecule, a second kringle domain, and a finger domain resembling that found on fibronectin.
These additional domains of the t-PA molecule may be responsible for the fibrin affinity exhibited by t-PA (12, 13). However, fibrin selectivity must involve factors other than fibrin affinity because u-PA in its single-chain form (scu-PA) exhibits little or no fibrin affinity but appears to lyse clots with fibrin selectivity comparable to that of t-PA (8).
Fibrin selectivity during the conversion of plasminogen to plasmin distinguishes the activity of t-PA and scu-PA from that of urokinase (tcu-PA) and streptokinase and suggests that these molecules are more suitable for thrombolytic therapy in man. The demonstrated affinity of t-PA for fibrin, and kinetic evidence that the interaction with fibrin reduces the Km of t-PA for plasminogen, suggest a plausible mechanism by which t-PA may act in a fibrin-selective manner (1). By contrast, the mechanism by which scu-PA acts in a fibrin-selective manner is unclear. For example, while scu-PA lacks measurable fibrin affinity and appears to exhibit little or no activity in vitro (15), it acts as a potent, fibrin-selective thrombolytic agent in man and animals (7, 8). In this report, we describe the production of human scu-PA in the baker's yeast Saccharomyces cerevisiae.
We chose to construct strains of yeast which secrete human scu-PA rather than produce it in the cytoplasm, because secretion appears important for accurate folding and disulfide bond formation in secretory proteins (16). Since human scu-PA is a 411-amino acid residue glycoprotein with an apparent molecular weight of about 53,000 and contains up to 12 disulfide bonds, we reasoned that refolding of aggregated scu-PA from inclusion bodies produced cytoplasmically in Escherichia coli or yeast would be inefficient. While many mammalian proteins have been secreted from yeast, secretion of proteins larger than about 20,000 Mr is typically inefficient, with a large fraction of the protein remaining internal (17-20). Therefore, we took advantage of the yeast mutation ssc1, which has proven useful for the secretion of other mammalian proteins, including bovine prochymosin and growth hormone (18), to build a yeast strain which secretes at least two-thirds of the scu-PA synthesized into the culture broth. Human scu-PA normally contains carbohydrate linked to asparagine residue 302 (4). Yeast cells also carry out N-linked glycosylation, but unlike human cells would be expected to hyperglycosylate human scu-PA by adding a heterogeneous cluster of over 50 mannose residues, as has been found for other secreted glycoproteins such as yeast invertase (21), yeast acid phosphatase (22), and human α1-antitrypsin (23). Therefore, we altered the codon for amino acid residue 302 in the scu-PA cDNA to encode alanine instead of asparagine. We describe here the secretion from yeast, purification, and in vitro characterization of scu-PA(N302A) lacking any detectable N-linked carbohydrate. Except for the absence of N-linked carbohydrate, in in vitro assays this scu-PA derivative is indistinguishable from scu-PA isolated from mammalian sources.
MATERIALS AND METHODS AND RESULTS
The ssc Mutations Influence the Efficiency of Scu-PA Secretion by Yeast-Secretion of scu-PA was examined from both wild-type strains of yeast and from strains carrying ssc mutations which increase the secreted levels of other proteins of comparable size, such as bovine prochymosin and growth hormone (18). The amounts of scu-PA secreted into the culture broth, expressed both as units per ml and normalized to cell mass, are shown in Table I. All strains were transformed with the same autonomously replicating plasmid carrying the human scu-PA cDNA transcribed from the yeast triosephosphate isomerase promoter and carrying the yeast invertase secretion signal in place of the human secretion signal. Clearly, cells carrying the ssc1 mutation secrete more scu-PA, both in absolute amounts and per cell, than do cells which are wild-type for secretion. A significant increase is also observed for cells carrying the ssc2 mutation, although the effect is less striking. It is clear from a comparison of Experiments 2 and 3 of Table I that scu-PA(N302A) is secreted from an ssc1 strain just as efficiently as is scu-PA. Thus, the substitution of alanine for asparagine at position 302 has no measurable effect on the secretion of scu-PA from yeast.

Scu-PA(N302A) Secretion Is Growth-associated-Secretion of scu-PA by yeast was examined more closely under conditions of controlled pH, dissolved oxygen, and glucose feeding. Strain CGY1891, carrying the ssc1 mutation and integrated copies of scu-PA(N302A) transcriptional units, was grown in a modified YPD-type medium (see "Materials and Methods") for 138 h as shown in Fig. 3. Cell density and secreted scu-PA levels increased in parallel for the entire fermentation, indicating that secretion of scu-PA is growth-associated. Secreted scu-PA(N302A) levels as high as 1800 IU/ml were attained, corresponding to about 15 mg of scu-PA(N302A)/liter. The fact that the amidolytic activity of unfractionated culture broth prior to plasmin treatment was always less than 5% of the activity obtained after plasmin treatment indicates that very little scu-PA was converted to tcu-PA during the fermentation (Table II). In addition, secreted scu-PA was stable in the unfractionated broth for several days at 4 °C. Scu-PA was also detected within the yeast cells during fermentation, both by immunoblot and by activity in a fibrin plate; however, internal scu-PA antigen levels typically represented no more than about 30% of the total scu-PA detected (data not shown).
The Specific Activity of scu-PA(N302A)-Several properties of scu-PA(N302A) purified from yeast culture broth were compared to those of native scu-PA derived from human kidney cell line TCL-598. Neither purified preparation contained significant levels of plasmin-independent amidolytic activity, consistent with the fact that these preparations contain only trace levels of contaminating tcu-PA. In fact, the amidolytic activity of both scu-PA preparations prior to plasmin treatment was less than 1000 IU/mg, well within the range observed previously for scu-PA (15). After treatment with plasmin to activate any latent amidolytic activity, the specific activities of both preparations were virtually identical, in the range of 100,000-120,000 IU/mg (data not shown).
Electrophoretic Mobilities of Scu-PA and Scu-PA(N302A)-The electrophoretic mobilities of scu-PA(N302A) secreted from yeast, as well as scu-PA secreted from both yeast and from Chinese hamster ovary cells, were examined on polyacrylamide gels containing SDS, both with and without prior treatment with N-glycanase to remove N-linked carbohydrate (Fig. 4). Therefore, as expected from the asparagine-to-alanine codon change, scu-PA(N302A) is secreted from yeast cells without the addition of N-linked carbohydrate. Scu-PA(N302A) preparations contained traces of a single-chain form of u-PA having an Mr of about 27,000 (Fig. 4). This species is most likely the result of cleavage of scu-PA(N302A) after Glu-143, as has been observed previously (55).

Scu-PA(N302A) purified from yeast and scu-PA obtained from Chinese hamster ovary cells were compared in vitro (Fig. 5). Equivalent amounts of the two plasminogen activators were incubated for up to 5 h in tubes containing 125I-fibrin-labeled plasma clots. The extent of clot lysis was judged by the amount of radioactivity released ("Materials and Methods"). At concentrations of 2.5, 1.25, and 0.63 µg/ml, scu-PA(N302A) and scu-PA exhibited comparable kinetics of clot lysis (Fig. 5, A and B). In addition, the levels of α2-antiplasmin remaining after lysis initiated by scu-PA and scu-PA(N302A) were comparable and significantly higher than the levels remaining after lysis initiated by tcu-PA (Fig. 5C). Thus, by these assays, scu-PA(N302A) and scu-PA are virtually indistinguishable in their potency and fibrin selectivity of clot lysis in vitro.
DISCUSSION
These studies indicate that secretion by yeast of human urinary plasminogen activator modified to contain an alanine residue in place of the normally glycosylated asparagine residue at position 302 results in a fully active, fibrin-selective, scu-PA molecule which lacks any detectable N-linked carbohydrate. Because of the differences between yeast and mammalian glycosylation patterns, production of a nonglycosylated form of scu-PA may be the only practical way to avoid immunogenicity of this and other mammalian glycoproteins secreted from yeast. While both yeast and mammalian cells add identical preformed "core" mannose clusters from carrier lipids to asparagines, the sequence of events following this transfer differs considerably. Yeast cells process the core unit mainly by adding large numbers of mannose residues to the α1,6 backbone, while mammalian cells usually remove most of the mannose residues of the core and replace them with other sugars such as galactose, fucose, and sialic acid (47).
Ballou and co-workers (48) have determined several antigenic features of the yeast carbohydrate pattern. In addition, mannose receptors have been found on several cell types (49), and the presence of mannose-rich sugar on asparagines of human t-PA significantly reduces its circulating half-life (50). Therefore, mammalian proteins bearing a yeast pattern of carbohydrate may be antigenic and may also be cleared more rapidly from the mammalian circulation than their natural forms. In this regard, it is interesting to note that preliminary studies of the half-life of nonglycosylated scu-PA(N302A) in rabbits and dogs have demonstrated that the clearance of scu-PA(N302A) is very similar to that of native scu-PA isolated from human kidney cells.3 The approach taken here of altering the amino acid sequence of the N-linked glycosylation sequon Asn-X-Thr/Ser should have general application for secretion of mammalian glycoproteins by yeast. Indeed, there have been reports of nonglycosylated murine and human GM-CSF secreted from yeast (51, 52). In those cases, serine was substituted for asparagine at the first position of the Asn-X-Thr glycosylation sequon or valine was substituted for threonine at the third position.
There are no reliable rules for choosing a substitute amino acid residue, and certainly some residues may be detrimental to protein folding or stability. We chose alanine in this case because of its small side chain and its inability to be glycosylated.
Several factors are involved in obtaining efficient secretion of scu-PA from yeast. As observed previously for other mammalian proteins, the yeast invertase secretion signal is slightly more efficient for scu-PA secretion from yeast than is its natural mammalian secretion signal.4 Similarly, host strains carrying ssc1 or ssc2 mutations secrete scu-PA severalfold more efficiently than wild-type hosts. Interestingly, the presence of glycosylated asparagine or nonglycosylated alanine at position 302 had no detectable effect on the efficiency of secretion of scu-PA from yeast. Others have suggested that glycosylation is important for secretion of various mammalian proteins (52, 53); but, in this case of heterologous protein secretion, glycosylation appears to be irrelevant. The use of a yeast secretion signal in place of the human signal normally found on scu-PA is important for obtaining efficient secretion from yeast, but it must be removed precisely if the resulting scu-PA is to be used therapeutically in man. In this case, and also in the case of α-interferon (54), the invertase secretion signal was removed accurately during secretion from yeast. The fact that the junction between the invertase secretion signal and scu-PA, alanine-serine, is fortuitously the same as the junction between the invertase signal and invertase itself may be a factor in its accurate removal. The strain described in this report secretes scu-PA at a level corresponding to about 0.2% of the yeast-soluble cell protein. This represents about 2.5% of the protein in the rich culture medium used for these fermentations.
Purification required only two column chromatography steps and was accomplished with the relatively high yield of 43%. While scu-PA is quite sensitive to proteolytic cleavage by plasmin and thrombin in the "hinge" region between the two chains, scu-PA produced by yeast cells remained in the single-chain form even after storage of the crude fermentation broth for several days at 4 °C.
Nonglycosylated scu-PA molecules secreted by yeast may find application as therapeutic agents in man. Indeed, with the exception of its carbohydrate content and the amino acid residue at position 302, scu-PA(N302A) appears indistinguishable in its in vitro properties from native scu-PA produced by mammalian cells. The preservation of the molecule in its single-chain form during yeast fermentation is encouraging because mammalian cell processes frequently produce mixtures of single- and two-chain u-PA in which the two-chain molecules may constitute over 50% of the product. The yeast-produced single-chain u-PA appears fibrin-selective in its activation of plasminogen (Fig. 5). The trace of low molecular weight scu-PA in our preparations (Fig. 4) is unlikely to affect interpretation of these results because it is a quite minor component in our preparation and because Stump et al. (55) have shown that a similar low molecular weight scu-PA is as fibrin-selective as full-length native scu-PA. Studies in the rabbit venous thrombosis model and in the dog arterial thrombosis model have confirmed the fibrin selectivity of our preparations of scu-PA(N302A) in vivo. Finally, the successful construction of yeast strains which secrete detectable amounts of scu-PA now allows dissection of the structure-function relationships of this protein by coupling powerful molecular genetic mutagenic techniques with a Petri plate activity assay for scu-PA secreted from individual yeast colonies.
"year": 1990,
"sha1": "107729fa5a4b8ce3781b3291d6b15ac2a36b73aa",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/s0021-9258(19)40120-8",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "06bbe24d9163cc6c1667cf5a8528aefd3639483d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Role of PON in Anoxia-Reoxygenation Injury: A Drosophila Melanogaster Transgenic Model
Background Paraoxonase 1 (PON1) is a protein found associated with high density lipoprotein (HDL), thought to prevent oxidative modification of low-density lipoprotein (LDL). This enzyme has been implicated in lowering the risk of cardiovascular disease. Anoxia-reoxygenation and oxidative stress are important elements in cardiovascular and cerebrovascular disease. However, the role of PON1 in anoxia-reoxygenation or anoxic injury is unclear. We hypothesize that PON1 prevents anoxia-reoxygenation injury. We set out to determine whether PON1 expression in Drosophila melanogaster protects against anoxia-reoxygenation (A-R) induced injury. Methods Wild type (WT) and transgenic PON1 flies were exposed to anoxia (100% nitrogen) for different time intervals (from 1 to 24 hours). After the anoxic period, flies were placed in room air for reoxygenation. Activity and survival of flies were then recorded. Results Within 5 minutes of anoxia, all flies fell into a stupor state. After reoxygenation, survivor flies resumed activity with some delay. Interestingly, transgenic flies recovered from stupor later than WT. PON1 transgenic flies had a significant survival advantage after A-R stress compared with WT. The protection conferred by PON1 expression was present regardless of age or dietary restriction. Furthermore, PON1 expression exclusively in the CNS conferred protection. Conclusion Our results support the hypothesis that PON1 has a protective role in anoxia-reoxygenation injury, and its expression in the CNS is sufficient and necessary to provide 100% survival protection.
Introduction
Paraoxonases (PON) are a family of enzymes whose enzymatic activity is not well understood. There are three known members of the PON family in humans, named PON1, PON2 and PON3. PON1 is a 45 kDa glycoprotein expressed in different tissues including kidney, colon, and the liver, and is mainly found associated with HDL in the bloodstream, which in turn can deliver the protein to various tissues. Epidemiological data show an association between PON1 activity, systemic oxidative stress and cardiovascular risk in humans [1,2]. Furthermore, PON1 is thought to be responsible for the antioxidant effect of HDL, as it inhibits lipid peroxidation and mediates cholesterol efflux from atherosclerotic plaque [3]. PON1 can be localized in endothelial cells and has been shown to be necessary and sufficient for preventing LDL oxidation [4,5,6]. Reperfusion of hypoxic/anoxic tissues induces a massive production of reactive oxygen species (ROS), causing important oxidative damage [7]. Oxidative damage is central in the pathophysiology of anoxia-reoxygenation (AR) injury, such as in myocardial infarction, ischemic stroke and organ transplant. For example, Ferretti and colleagues reported that paraoxonase activity was lower in stroke patients compared to controls, and this correlated with a higher level of lipid peroxidation [8]. In addition, reduced PON1 activity is an important risk factor for atherosclerosis, and serum PON1 activity also predicts arterial stiffness in renal transplant recipients [9].
Although most epidemiological studies suggest that PON1 protection against cardiovascular disease is due to its antioxidant effects, little is known about the role of this protein in anoxia-reoxygenation injury and other oxidative stress-mediated pathologies.
Determining the role of PON1 in different injury models is challenging, as the mammalian system contains three PONs; therefore, to avoid the potential for compensatory mechanisms in a PON1 knockout, a triple knockout (PON1, 2, & 3) would be required. One strategy that other investigators have developed to overcome the above challenge is testing the role of PON in an invertebrate model, specifically Drosophila melanogaster, as flies do not contain the PON gene or any homologs [10,11]. This approach eliminates the variable of redundancy.
We decided to test our hypothesis in a Drosophila melanogaster model of anoxia-reoxygenation (AR) injury, as previously reported by Vigne and colleagues [12].
We hypothesized that PON1 expression would protect flies from AR injury. To test this hypothesis, we conducted a series of experiments in which transgenic flies expressing tubulin (WT) and flies expressing PON1 were exposed to AR injury. We demonstrated that PON1 expression confers a survival advantage against AR injury, and specifically that PON1 expression in the CNS protects against AR induced mortality.
Experimental animals
All flies were maintained and cultured on standard yeast-agar-sucrose-cornmeal medium. Flies were bred and tested at 25 °C. The binary GAL4-UAS system and tub promoter were used for the ubiquitous transgenic expression of PON1 (3). The neuron specific Elav driver and gut driver were used instead of the tub promoter to achieve expression of PON1 exclusively in the CNS and gut, respectively.
Anoxia exposure

+/tub and PON1 transgenic flies were placed in vials (20 flies per vial) inside a plastic chamber (see Figure 1). 100% Nitrogen was injected into the chamber until oxygen levels dropped below 1%. The oxygen level was monitored with an oximeter (TED 200-T Portable oxygen monitor, Teledyne Electronic Devices, Thousand Oaks, California). Within 1 min of anoxia, flies stop moving and fall into anoxia-induced stupor (paralysis). After different time periods, flies were placed at room air for reoxygenation, allowing them to recover from anoxia-induced stupor. Survival was registered 48 hours later.
In a subset of flasks exposed to 1 h of anoxia, flasks were examined every 5 min after exposure to room air and the number of active flies was recorded. All flies that remained motionless at the bottom of the flask were considered inactive, and all flies that were moving or on the wall of the flask were considered active.
ROS measurement method
Guts from flies were dissected and stained using dihydroethidium (DHE), a superoxide indicator. Guts were then observed under a fluorescence microscope.
The entire gastrointestinal tract was removed intact from females for staining and further processing by grasping below the head and at the tip of the abdomen with fine forceps and gently pulling.
Female adult fly guts (3-5 days old, from same parents and bottle) were dissected as previously described under 1× Schneider's Drosophila Media (Gibco, Invitrogen, Carlsbad, CA). The guts were then stained with DHE using a previously described method [13].
To assess superoxide (O2−) levels in whole flies, a lucigenin-based ROS assay was performed on fly lysate as previously described [14,15]. Fly lysates were prepared using 3-day old male whole flies. For each sample, 3 flies were homogenized with a pestle in mitochondria buffer (1 mM Tris pH 7.5, 20 mM EDTA, aprotinin 2 ng/mL, leupeptin 2 ng/mL, pepstatin 2 ng/mL). After sonication, lysates were centrifuged and the supernatant collected. A Bradford assay was completed on all samples prior to conducting the experiment. 50 mg of protein were diluted in 1× PBS to a final volume of 1 ml. Lucigenin (5 mM) and NADPH (100 mM) (Sigma-Aldrich, St. Louis, MO) were added to each sample, and luminescence was recorded every 30 seconds (s) for 10 min. The initial rate was defined as the linear slope of the data points from 30 s to 150 s.
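As a minimal sketch of the initial-rate definition above, the slope can be taken by linear regression over the readings between 30 s and 150 s; the luminescence values below are synthetic placeholders, not assay data.

```python
# Hedged sketch: "initial rate" as the linear slope of luminescence
# readings taken every 30 s, restricted to the 30 s-150 s window.
import numpy as np

times = np.arange(0, 601, 30)  # readings every 30 s for 10 min
luminescence = np.random.default_rng(0).normal(100, 5, times.size).cumsum()

window = (times >= 30) & (times <= 150)  # five points: 30, 60, 90, 120, 150 s
slope, intercept = np.polyfit(times[window], luminescence[window], deg=1)
print(f"initial rate: {slope:.2f} RLU/s")
```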
Generation of Transgenic Fly Lines
The y w1118 strain was used for transgenic injections. P element-mediated transformation and subsequent fly crosses were performed following standard techniques [16]. To generate UAS-PON1 transgenics, the cDNA sequence for PON1 was cloned into the pUAST vector.
The binary GAL4-UAS system and tubulin (tub) promoter were used for the ubiquitous transgenic expression of PON1. Human PON1 cDNA was cloned into the pUAST plasmid and subsequently injected into Drosophila melanogaster embryos (y w 1118 ) using standard techniques (Rainbow Transgenic Flies, Inc. Camarillo, CA). y w 1118 ; ; tub-GAL4/TM3,Sb virgin females were crossed to y w 1118 ; ; UAS-PON1 males to obtain the experimental lines y w 1118 ; ; UAS-PON1/tub-GAL4, which is referred to as UAS-PON1/tub-GAL4. Control flies (w 1118 ; tub-GAL4/+) were obtained by crossing w 1118 males with y w 1118 ; tub-GAL4/TM3, Sb virgin females and are denoted as tub/+. F1 progeny were tested for expression of PON1 by performing western blots using an antibody to PON1 (Abcam, Cambridge, MA). The neuron specific Elav driver and gut driver were used instead of tubulin driver to achieve expression of PON1 exclusively in the CNS and gut, respectively.
Statistical analysis
All experiments were performed at least 3 times separately with 20 flies per condition per experiment. Statistical analyses were performed in GraphPad Prism 5. Unpaired t-test, paired t-test and Mann-Whitney U test were performed as indicated. A p-value <0.05 was considered statistically significant [17].
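A brief sketch of the comparisons named above, using scipy rather than Prism; the per-vial survival counts are hypothetical stand-ins, not the study's data.

```python
# Hedged sketch: unpaired t-test and Mann-Whitney U test on survival counts.
from scipy import stats

wt_survival = [14, 15, 13]   # survivors out of 20 flies per vial, 3 replicates
pon_survival = [18, 19, 17]

t_stat, p_unpaired = stats.ttest_ind(wt_survival, pon_survival)
u_stat, p_mw = stats.mannwhitneyu(wt_survival, pon_survival)
print(f"unpaired t-test p = {p_unpaired:.3f}, Mann-Whitney p = {p_mw:.3f}")
```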
PON1 expression increases survival after AR
Vigne and colleagues have reported that mortality increases in Drosophila after AR exposure [12]. Specifically, mortality increased acutely during the first two days and remained the same for the rest of their life span. We decided to test whether PON1 expression in Drosophila melanogaster protected against AR induced mortality. Wild type +/tub flies (WT) and transgenic flies expressing Paraoxonase 1 (PON1) were exposed to different periods of anoxia (1, 2, & 4 hours). After the anoxic period, flies were placed at room air for reoxygenation. Survival was recorded 48 h after AR stress. Oxygen levels of <1% were achieved after 5 min of injecting 100% Nitrogen into the chamber. Within one minute of achieving this level of anoxia, flies stopped flying and lay at the bottom of the vial, motionless. They remained stuporous throughout the anoxic time. After reoxygenation, flies recovered from the anoxia-induced stupor and started flying.
As shown in Figure 2A, survival decreased as the anoxic period increased. PON1 transgenic flies showed a survival advantage of around 20% compared to WT flies. 100% of flies exposed to one and two hours of anoxia survived; however, PON1 flies had a longer stupor recovery time compared to WT (Figure 2B).
PON1 R and Q protect against AR induced mortality via a decrease in baseline ROS levels
One common polymorphism is at position 192, where a glutamine (192Q) is replaced by an arginine (192R). These alloenzymes have differences in their paraoxonase and arylesterase activity [18]. Epidemiological data show an association between different PON1 polymorphisms and cardiovascular risk [1]. Bhattacharyya and colleagues reported that subjects with the QQ192 polymorphism have significantly increased levels of oxidative stress and higher risk of all-cause mortality compared with RQ192 and RR192. However, other studies have failed to demonstrate this association [19]. We hypothesized that PON1 192R would provide higher protection from AR injury compared with 192Q. To test this hypothesis, we exposed WT, 192Q and 192R transgenic flies to different periods of anoxia. As shown in Figure 3A, both Q and R confer similar survival advantage after three and four hours of anoxia. Only R provided a survival advantage of 10% after 6 hours of AR (see Figure 3A).
Since the mechanisms of AR injury are mediated by ROS, guts from flies of each variant were stained using dihydroethidium, a superoxide indicator. Figure 3B shows that the levels of superoxide in both 192Q and 192R transgenic flies are lower than in WT transgenic flies.
Age and dietary restriction do not affect the survival advantage conferred by PON1 expression
Transgenic flies have a slight difference in food intake of about 10% (data not shown), and since dietary restriction increases survival after AR [12], we exposed PON1 and WT flies to anoxia after 18 hours of starvation, having access to water only, in order to investigate the role of PON1 expression regardless of food intake. As shown in Figure 4A, starvation confers a significant increase in survival in WT and PON1 flies, and longer periods of anoxia were necessary to increase mortality. Nevertheless, at higher anoxic periods (6, 8, & 12 hours), PON1 flies continue to have survival advantage.
We also investigated whether aging would influence AR induced mortality. Flies were age-matched, at ages ranging from one to four days, and exposed to three hours of anoxia. 48 h survival was registered. As shown in Figure 4B, survival after AR decreases with age in both WT and PON1 flies; however, PON1 transgenic flies continued to have a survival advantage compared with WT despite controlling for the age effect.
PON1 expression in the Central Nervous System (CNS) confers survival advantage from AR
In order to determine whether regional expression of PON1 would confer a survival advantage against AR, we developed two strains of flies with organ specific promoters that expressed PON1 protein only in the central nervous system (CNS) or in the gut. In a similar manner to previous experiments, flies were exposed to three hours of anoxia and survival was registered 48 hours later. PON1 CNS transgenic flies have higher survival than +/tub CNS flies, similar to transgenic flies with universal expression of PON1, while flies expressing PON1 in the gut survived similarly to WT flies (Figure 5).
Discussion
Oxidative stress is an essential element of ischemia reperfusion injury, where reperfusion of anoxic tissues produces an increase of ROS production beyond the antioxidant capability of the organism, causing oxidative damage of the cell membrane by lipid peroxidation. This is the hallmark of multiple pathological processes including myocardial infarction, stroke, and transplant. During the last decade, paraoxonase enzymes have been shown to be an important element in the pathophysiology of oxidative stress. It is well known that PON1 prevents the oxidation of LDL, and it is necessary for the antioxidant properties of HDL on LDL [3,4,20]. Furthermore, Rozenberg and colleagues showed that lack of PON1 increases oxidative stress in macrophages [21]. It has been shown that PON activity correlates with cardiovascular risk and atherosclerotic disease, which results from oxidation of LDL and accumulation in artery walls [22]. Although the native enzymatic activity of PON1 is unclear, there is growing data supporting the role of PON enzymes as antioxidants. However, there is a paucity of data regarding the role of PON enzymes as antioxidants following anoxia-reoxygenation injury. The results of our study suggest that PON1 plays a protective role in anoxia-reoxygenation.
PON1 activity has been implicated in having a protective effect in ischemic colitis [23] and, more recently, it has been shown that the protective effects of dexmedetomidine on liver ischemia reperfusion were associated with an increase in PON1 activity [24]. Okur and colleagues recently reported that patients with obstructive sleep apnea and with non-apneic, nocturnally desaturating COPD had increased levels of lipid peroxidation and decreased PON activity despite the differences in nocturnal hypoxia pattern.
Anoxia-reoxygenation, as well as ischemia reperfusion injury, is mainly mediated by the overproduction of ROS. Our results show that transgenic expression of PON1 in Drosophila melanogaster confers a survival advantage against AR injury. Our results also show that flies expressing PON have lower levels of superoxide at baseline, suggesting that PON1 has antioxidant effects, protecting flies from oxidative stress after anoxia-reoxygenation. PON activity not only prevents the oxidative stress that promotes atherosclerotic plaque formation but may also be relevant in more rapid bursts of oxidative stress, as in AR injury. More recently, it has been reported that PON1 forms a complex with HDL and myeloperoxidase, an important oxidative enzyme, partially inhibiting myeloperoxidase activity [25].
Transgenic flies had a delayed recovery from AR-induced stupor. This effect can potentially explain some of the protection conferred by PON1 expression, as flies are extremely resistant to hypoxia and anoxia, and they respond to low oxygen and anoxia by decreasing their activity and oxygen utilization [26]. This delay in activity can buffer the ROS production in PON1 transgenic flies, thus improving their survival.
Also, our results suggest that the protection is due to PON1 expression in the brain, as expression of PON1 in the CNS was sufficient and necessary for protection against AR injury in the same proportion as universal expression of PON1. In humans, PON1 mRNA has been found mainly in kidney, colon and liver [27], but the protein is more widely distributed [28]. For example, PON1 can be transferred from HDL to the cell membrane of different tissues, including the brain [20].
Figure 4. Survival of transgenic flies after starvation period. A) +/tub and PON transgenic flies were starved for 18 hrs and then exposed to different periods of anoxia; survival was registered 24 hrs after reoxygenation. B) +/tub and PON transgenic flies of different ages were exposed to 3 hrs of anoxia and survival was registered 24 hrs after reoxygenation. *p<0.05 when compared with +/tub. doi:10.1371/journal.pone.0084434.g004

Epidemiological data show a clear association of ischemic stroke incidence, survival and PON activity, where subjects with 192R have a higher risk for ischemic stroke than 192Q [29,30]. However, the data regarding the role of this polymorphism and cardiovascular risk are conflicting. Some studies report significant risk of CAD and higher levels of oxidative stress with variant 192R [31,32], while others show no association [33,34]. Also, PON polymorphism has been associated with differences in response to clopidogrel, where loss-of-function alleles increased the risk of a major cardiovascular event in patients on clopidogrel treatment [35]. In our study, flies expressing 192R and 192Q have similar ROS levels and both have similar survival protection against AR injury.
PON1 is only one of three enzymes in the PON family in humans. PONs are promiscuous enzymes and their native enzymatic function is not clear. PON1 is the only one with paraoxonase activity, while all members of the family have different degrees of lactonase and acetylesterase activity, whose relevance for the antioxidant effect remains to be determined. While PON1 is expressed mainly in the liver and found associated with HDL, PON2 is ubiquitously expressed and its role in oxidative damage by anoxia-reoxygenation is potentially more relevant and still remains to be tested. Giordano and colleagues reported that PON2 is expressed in the brain, protects CNS cells against oxidative stress, and confers gender-dependent susceptibility to such stress [36,37]. This suggests that both enzymes share similar enzymatic activity that protects against oxidative stress in the CNS. Further experiments comparing PON1 and other members of the PON family with different enzymatic activity may help elucidate the role of lactonase and acetylesterase activity. In addition, it is necessary to explore the role of PON in AR injury in mammalian models, to help determine the role of these enzymes in humans.
Author Contributions

Figure 5. Survival of transgenic flies expressing PON with organ specific promoters. +/tub and PON transgenic flies with organ specific promoters were exposed to 3 hrs of anoxia and survival was recorded 24 hrs later. *p<0.05, **p<0.01 when compared with +/tub. doi:10.1371/journal.pone.0084434.g005
"year": 2014,
"sha1": "7971b2439a2a92925bbce9d0282ba31cd734f4d3",
"oa_license": "CC0",
"oa_url": "https://doi.org/10.1371/journal.pone.0084434",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "938db8bf28b74489c732c33d3c265879a0ecebf3",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Comparison of divergent breeding management strategies in two species of semi-captive eland in Senegal
Breeding management of small populations may have a critical influence on the development of population characteristics in terms of levels of genetic diversity and inbreeding. Two populations of antelope sister species – the Critically Endangered Western Derby eland (Taurotragus derbianus derbianus) and the non-native Least Concern Cape eland (Taurotragus oryx oryx) – bred under different management strategies were studied in Senegal, Western Africa. The aims of the study were to compare the population genetic parameters of the two species and to test for the presence of interspecific hybrids. In total, blood and tissue samples from 76 Western Derby elands and 26 Cape elands were investigated, using 12 microsatellite markers. No hybrid individuals were detected in the sampled animals within the multispecies enclosure in Bandia Reserve, Senegal. The parameters of genetic polymorphism indicated much lower genetic diversity in Western Derby elands compared to Cape elands. On the other hand, the coefficient of inbreeding was low in both species. It is hypothesized that this could be a positive effect of the strict population management of Western Derby elands, which, despite the loss of genetic diversity, minimizes inbreeding.
Genetic diversity represents an essential pillar for the survival of populations through the possibility of adapting to a changing environment. Problems connected with maintaining diversity are common in captive populations, whose sizes are limited by space and the individuals are scattered among institutions worldwide so the gene flow is restricted [1][2][3] .
To eliminate the negative impact of the phenomena affecting small populations (e.g. loss of variability due to genetic drift and inbreeding), it is necessary to apply appropriate genetic management 4,5 , which can vary from intensive interventions 6,7 to simple monitoring 8 . However, for the decision-making process, it is essential to know some basic information about the kinship and genetic variability of the individuals 9 which is usually recorded in studbooks. Even if the studbooks for captive wildlife exist, for example, European studbooks 10 or International studbooks 11 , knowledge of kinship across these populations is often limited and can be as low as only 50% or less in some ungulate species kept in zoos 12 .
However, there are also species with better background information recorded, where a high proportion of their pedigree is known (% PK). These are mainly endangered species for which special ex situ conservation programmes have been created, such as the European Endangered Species Programmes (EEPs) for species like the Cuvier's gazelle (Gazella cuvieri, 100% PK) 13 , or the recently established EAZA Ex situ Programmes 10 .
Pedigree data are especially valuable in the evaluation of ancestry and kinship 14 but even the studbooks do not guarantee reliable information concerning genetic parameters of polymorphism within the populations. The reason for the lack of reliable genetic data is mostly because the studbook analyses rely on the assumption that the founders are unrelated and non-inbred, which is not always the case 6,15 . Only genetic monitoring can reveal the true genetic polymorphism of populations 16 , including the ones without studbooks 8 .
Loss of genetic variability is not the only genetic issue in captive breeding of wildlife. The populations can face problems with interspecific hybridization, which is considered a very important factor that endangers biodiversity and the existence of many species [17]. Although hybridization occurs in nature, it can also be caused by human interference, usually by keeping related, but originally geographically isolated, taxa together; for example, in wildlife reserves or mixed-species exhibits in zoos, which are currently very popular [18][19][20].
Small isolated populations of two eland species with different conservation statuses and different geographical origins are kept in Bandia Reserve in Senegal. Both populations had a similar number of founding animals, but they have been managed differently and thus may serve as an ideal model for evaluation of the influence of population management on the genetic parameters of the population.
Western Derby elands (WDE, Taurotragus derbianus derbianus) are represented worldwide by less than 200 free-ranging individuals in Senegalese Niokolo Koba National Park (NKNP) 21 . A unique conservation programme for this antelope takes place in two Senegalese wildlife reserves, Bandia and Fathala, where about 100 of these antelopes live in semi-captive conditions 22 . When their founders were captured in NKNP in 2000, the total WDE population was already very limited and WDE was considered as Endangered. The population in NKNP was estimated to total only around 100 individuals at that time 23 and all the founders of the captive programme were captured from just one herd 24 . The founders were transported to Bandia Reserve where the first captive breeding of WDE started. In 2006, the primary selected animals from Bandia Reserve were transported to Fathala Wildlife Reserve 25 .
According to the WDE Conservation Strategy, the conservation programme's "…aim is to manage the population to retain as high as possible genetic diversity…" 26 . For this purpose, the WDE semi-captive population has been intensively managed to minimize kinship since its establishment 24 via annual identification of new-born calves and their mothers through suckling observations for studbook creation 27 , transportation of sub-adult offspring to other breeding herds to avoid backcrossing 28 , and quite recently even genetic monitoring to compare pedigree and microsatellite data 6 . However, natural gene flow between the reserves and the NKNP does not currently exist, and the inbreeding rate of the population is increasing while the polymorphism is dropping rapidly 6 . On the other hand, Cape elands (CE, Taurotragus oryx oryx) were introduced to Bandia Reserve in 1996 to increase its attractiveness for safaris for visitors. The only management applied in the population of CE in Bandia Reserve involves the culling of surplus males and older calf-less females for meat production. Since the initiation of both breeding programmes, no animals have been imported into the populations of WDE and CE. For a comparison of the background of both studied populations, see the overview in Table 1.
In Bandia Reserve, all WDE were kept in fenced areas, separated from other species, until July 2012, when reserve managers decided to remove some of the fences in the reserve, and thus two previously separated breeding herds of WDE were merged and mixed with other animal species. Since that time, this WDE herd has been in physical contact with CE 26 . The length of pregnancy in the WDE is approximately 9 months 25 , so all WDE offspring born in the multispecies enclosure in Bandia Reserve since April 2013 should be considered as potential hybrids; in other words, 26 potential hybrids were born in this period up to June 2017 22 .
Even though a hybrid of the Derby eland has never been observed, there is a risk of its hybridization with CE in Bandia Reserve, considering the previous experience with other Tragelaphini antelopes that are characterized by "unusual readiness to hybridize in captivity" 19,[29][30][31][32] . Such hybridization would jeopardize the entire conservation programme and thus there is an urgent need for monitoring, as well as genetic variability assessment, and subsequent evaluation of the appropriateness of the applied population management.
The aims of this study were to: 1. Test for the presence of interspecific hybrids between semi-captive Western Derby elands (T. derbianus derbianus) and Cape elands (T. oryx oryx) living in a multispecies enclosure in Bandia Reserve, Senegal; 2. Compare the population genetic parameters of the two divergently managed species and evaluate the effect of the applied population management.
Materials and Methods
Sampling. Individuals of two semi-captive populations were included in the study: WDE (n = 76) from Bandia and Fathala reserves in Senegal, and CE (n = 26) from Bandia Reserve, Senegal. Blood and tissue samples of WDE were acquired systematically by continuous whole population sampling, and included all living individuals recorded in the studbook from 2017 22 with the exception of 25 animals, from which the samples were not available for various reasons (too young age, absence of veterinarian at the time of observation, etc.). Samples from CE were obtained by random sampling of live calves at the age of 1-3 years, animals that died naturally, and surplus males and older calf-less females culled for meat production by the staff of Bandia Reserve.
The PCRs were carried out in T100™ Thermal Cyclers from Bio-Rad using the Qiagen® Multiplex PCR kit according to the enclosed protocol 34 . During the reactions, 12 microsatellite loci were amplified with already published fluorescently labelled primers that were chosen based on cross-species amplification, including 5 previously used for WDE 6 ; for details see Table S1 in Supplementary Material. Fragmentation analyses were performed using a 3500 Genetic Analyzer (Applied Biosystems™) with the GeneScan™ 500 LIZ™ dye Size Standard.
Data analysis. Data from the fragmentation analyses containing information about allele lengths of individual loci were manually scored using GeneMarker® Version 2.2.0, SoftGenetics LLC® 35 and then binned by AutoBin 36 . Large allelic dropout, presence of stutter bands, and null alleles were tested in Micro-Checker 2.2 37 . Departures from the Hardy-Weinberg equilibrium and linkage equilibrium of all microsatellite loci were checked by Genepop 4.2 38 .
Population structure and the presence of hybrids were tested by the Bayesian clustering method in Structure 2.3.4 39 with 1,000,000 steps of Markov Chain Monte Carlo repetitions after 100,000 steps of the burn-in period. The analysis was run for 1-5 clusters (K) always with 5 repetitions for each K. The results were post-processed by Structure Harvester to evaluate the best supported K 40 .
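Structure Harvester's "best supported K" is commonly chosen with Evanno's ΔK: the absolute second difference of the mean log-likelihood across replicate runs, divided by its standard deviation. The sketch below illustrates the statistic with hypothetical log-likelihood values for the 5 replicates per K; it is not the study's output.

```python
# Hedged sketch of Evanno's delta-K, the statistic Structure Harvester
# reports for choosing K. Values below are hypothetical placeholders.
import numpy as np

# rows: K = 1..5, columns: 5 replicate Structure runs (mean ln P(X|K))
lnP = np.array([
    [-5200, -5201, -5199, -5202, -5200],
    [-4300, -4305, -4298, -4302, -4301],
    [-4250, -4262, -4241, -4255, -4248],
    [-4230, -4280, -4210, -4260, -4235],
    [-4225, -4300, -4205, -4270, -4240],
], dtype=float)

mean_L = lnP.mean(axis=1)
sd_L = lnP.std(axis=1, ddof=1)
# second difference |L''(K)| is defined only for interior K (here K = 2..4)
second_diff = np.abs(mean_L[2:] - 2 * mean_L[1:-1] + mean_L[:-2])
delta_K = second_diff / sd_L[1:-1]
for K, dK in zip(range(2, 5), delta_K):
    print(f"K={K}: delta K = {dK:.1f}")
```

With these placeholder numbers the sharp gain in likelihood from K=1 to K=2 makes ΔK peak at K=2, mirroring the two-cluster result reported below.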
Factorial correspondence analysis was done in Genetix 4.05 41 . To compare the populations of WDE and CE, basic population structure characteristics were obtained using different programs intended for population genetics calculations. Values of observed (Ho) and expected (He) heterozygosity were calculated in GenAlEx 6.502 42 ; the inbreeding coefficient (FIS), fixation index (FST) and allelic richness (Ar) were obtained via FSTAT 2.9.3.2 43 , which was also used for the calculation of the confidence interval of FST. The confidence interval of FIS was evaluated using Genetix 4.05.
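For illustration, the basic measures named above can be computed directly from codominant genotypes. The sketch below uses hypothetical genotypes at a single locus and the naive (uncorrected) estimators; FSTAT and GenAlEx apply sample-size corrections and, for allelic richness, rarefaction, so their values would differ slightly.

```python
# Hedged sketch: Ho, He (gene diversity), and Wright's FIS at one locus.
# Genotypes are hypothetical allele sizes in bp, one tuple per individual.
from collections import Counter

genotypes = [(179, 183), (179, 179), (183, 187), (179, 183), (187, 187)]

n = len(genotypes)
Ho = sum(a != b for a, b in genotypes) / n          # observed heterozygosity

alleles = Counter(a for g in genotypes for a in g)   # allele counts (2n total)
total = sum(alleles.values())
He = 1 - sum((c / total) ** 2 for c in alleles.values())  # expected heterozygosity

Fis = 1 - Ho / He                                    # FIS near 0 => little inbreeding
print(f"Ho={Ho:.2f}, He={He:.2f}, FIS={Fis:.2f}")
```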
Results
In total, 76 samples of WDE and 26 samples of CE were genotyped. Analysis in Micro-Checker 2.2 did not detect any genotyping errors, including the presence of null alleles. Linkage disequilibrium was not detected. The populations were in Hardy-Weinberg equilibrium at all of the studied loci, except CE at ETH10 (p = 0.025) and WDE at CSRM42 (monomorphic locus in WDE).
The highest likelihood was obtained for two clusters (see Fig. S1 in Supplementary material). Each animal was correctly assigned to its presumptive species. The analysis did not reveal any signal of recent hybridization between the species (Fig. 1).
Factorial correspondence analysis visualized the relationships between all tested individuals of the tested populations (Fig. 2). Western Derby elands are represented by a homogenous cluster with little variation compared with CE. The results of the selected population structure characteristics to compare the populations of WDE and CE are presented in Table 2, and show higher heterozygosity and genetic diversity in CE than in WDE. However, both species have a low level of inbreeding.
Discussion
Derby elands (Taurotragus derbianus) and common elands (Taurotragus oryx) are considered sympatric only in South Sudan [44][45][46] . So far, there is no information about the occurrence of hybridization between them, but it cannot be excluded, due to the high relatedness of the taxa (separation estimated at 1.6 million years before present) 47 and due to the presence of hybrids between other Tragelaphini species 19,[29][30][31][32] . The present study did not detect any individual of hybrid origin. However, not all suspect individuals were sampled and thus some possible hybrids might have been overlooked. In the WDE population, 16 samples from a total of 26 suspect animals were tested, while three of them had died before being sampled and never reproduced. Moreover, samples were taken randomly in the CE population. It may be concluded that until June 2017, no hybrid was detected, and considering this, the probability of ongoing hybridization is low. However, the risk of hybridization between eland species in Bandia Reserve still exists, as the species remain in direct contact 48 . Also, one cannot exclude that post-zygotic reproductive isolating mechanisms may exist, and that the embryo may be lost during its development. Should this be the case, the fitness of the whole herd would decrease. Respecting the uniqueness and conservation status of the WDE, we propose continuous monitoring of hybridization between WDE and CE until they have separate breeding facilities. If hybrids occur, this may have severe consequences for the genetic integrity of the species, as shown, for example, in the giant sable antelope (Hippotragus niger variani) 49 and bontebok (Damaliscus pygargus pygargus) 50 .

Relatedness of the individuals and the sex ratio within the founding herd highly affected the genetic diversity of the studied populations. Parameters of genetic polymorphism (Ar, Ho) are much higher in CE (Table 2). Two important factors should be considered: firstly, there is an assumption that the founders of the WDE semi-captive population were already related 6 . The second important factor to consider is the presence of just one male founder of the WDE semi-captive population 25 , and thus the level of inbreeding within WDE increased rapidly over the generations 6 . In CE, the sex ratio of the founders was more balanced, containing three males and five females 51 . Although dominant eland males are usually considered as sires of all the offspring in their herds 25 , this is not necessarily true considering the studies in other ungulate species 52,53 .
Genetic management of the Critically Endangered WDE might positively affect the level of inbreeding of the whole population in semi-captivity. The values of the coefficient of inbreeding (Table 2) are comparable between the species, even though the other parameters of polymorphism are much lower in WDE (Table 2). Nevertheless, even though microsatellites have been shown to be useful tools for studying the genetic diversity of ungulates 54 , they have their limitations 55 , which must be considered when interpreting the results. They do not reflect the whole genome and therefore provide only partial information about the level of polymorphism, and it is often impossible to correlate specific traits with microsatellite parameters.
The genetic background of the population should always be considered from the onset of the establishment of a conservation programme. In optimal circumstances, there are multiple suitable candidate populations in the wild from which the intended founders of the backup population can be selected. All founders should ideally be unrelated and their numbers sufficient to establish multiple breeding pairs, to avoid kinship in the first generation 6,16,56 . However, this is not always possible in the case of Critically Endangered species, in which animal numbers have decreased to such an extent that backup breeding programmes must be established with only a few remaining founders 57 . An example of an extreme situation is the attempt to save the subspecies of northern white rhinoceros (Ceratotherium simum cottoni) via hybridization with the conspecific southern white rhinoceros (Ceratotherium simum simum), despite their naturally long period of geographic, and thus genetic, isolation 58 . Nevertheless, there should always be an effort to avoid both inbreeding and outbreeding depression, if possible 16 .
Mitigating genetic threats of small isolated populations, such as inbreeding, and the negative consequences associated with the loss of genetic diversity can involve conservation actions such as translocations 8 . Even though such actions can be risky from either the epidemiological or the genetic point of view 59 , they may be crucial for species survival if managed properly 60 . In the case of the WDE semi-captive population, further importation of new individuals from NKNP to reduce inbreeding and increase genetic diversity has already been recommended 6,25 . The results of the present study regarding the genetic diversity of the WDE semi-captive population are in accordance with these previous conclusions and support the idea of introducing new founders from the wild. It also corresponds with the recommendations of Ochoa et al. 61 to promote mutual and continuous gene flow via translocations between wild and captive populations, which should function as a source of genetic variation for reintroduction programmes. The One Plan Approach, an integrated approach to species conservation consisting of management strategies and conservation actions by all responsible parties for all populations of a species, whether inside or outside their natural range, should be a priority 62,63 .
However, in the case of WDE, the promotion of gene flow is needed not only between the wild and captive populations, but also within the captive population. Otherwise, the subpopulations could suffer from high differentiation, as was described in the subpopulations of Arabian oryx 61 . Although the WDE semi-captive population was considered as a whole in the present study, the individuals are kept in two separate reserves, and regular animal transport takes place between the breeding herds, mostly within each reserve. The transport of animals from Bandia Reserve to Fathala Wildlife Reserve took place in 2006, 2008, 2009 and 2011, with a total of 32 individuals (22 males and 10 females) being translocated and organized into one bachelor herd at first, and later, two breeding herds 25 . Since only a limited number of females succeeded in reproducing in Fathala Wildlife Reserve, the population in Fathala suffered multiple founder effects, and pedigree analysis suggests considerably lower genetic diversity in Fathala Wildlife Reserve than in Bandia Reserve 1-3,64 .
Wespi et al. 7 concluded that population management including interventions possibly influencing genetic diversity via regular changes of breeding males does not always offer distinct advantages when compared to unmanaged populations. However, their study did not consider genetic parameters. In contrast, the results of the present study indicate that genetic management could have a positive effect on the genetic background of populations. Considering the results of this and a previous study by the present authors 6 , it may be concluded that for highly inbred populations such as WDE, genetic management keeps inbreeding at a low level, despite the low genetic polymorphism and high relatedness of the population.
Data availability
Upon acceptance of the manuscript, genotypes used in the final analyses will be deposited at Dryad Digital Repository. | 2020-06-01T14:07:58.257Z | 2020-06-01T00:00:00.000 | {
"year": 2020,
"sha1": "322a3942db01fea4626938d162d9bdbfbf7f8657",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-020-65598-6.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cf98aa603e3e76729e7ee19a3eadc6e1e51ab389",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
17543214 | pes2o/s2orc | v3-fos-license | The Sphaleron Rate: Where We Stand
I review what we know about the "sphaleron rate", which is the efficiency of baryon number violation at high temperatures of order 100 GeV in the Standard Model. The leading behavior at weak coupling in the symmetric phase is known accurately: Γ = (10.7 ± 0.7) (g²T²/m_D²) log(m_D/g²T) α_w⁵ T⁴. At realistic values of the coupling our accuracy is worse. We also now have the tools to determine the rate nonperturbatively in the broken electroweak phase; the sphaleron rate there is slower than perturbative estimates.
Introduction
The Universe is filled with matter, and virtually no antimatter. This "unusual" situation can only be explained without appealing to the initial conditions of the Universe if, at some early epoch in the history of the universe, baryon number was not a conserved quantity.
As a matter of fact, baryon number is not a conserved quantity in the standard model 1 . Furthermore, while its violation under ordinary conditions is pitifully inefficient, that ceases to be true at very high temperatures, of order the weak scale, where electroweak symmetry is restored. These facts are the backdrop for the subject of electroweak baryogenesis, which attempts to use them to explain why the baryon number density of the current universe is what it is.
In this talk I will not discuss baryogenesis. Rather my emphasis is on strengthening its foundations by investigating more accurately exactly how efficiently baryon number is violated under hot conditions. Besides the obvious application to the study of baryogenesis, this is also a useful thing to do because it forces us to develop tools for dealing with the infrared physics of hot Yang-Mills theory. Also it is the only part of a baryogenesis calculation which is generic to all extensions of the standard model, because it only depends on the gauge group, and only very weakly on the Higgs sector. In fact, for much of the talk I will neglect the Higgs fields altogether and just study Yang-Mills theory.
What we want to know
Before going any further I should fix notation and explain what I will measure. The anomaly equation relates baryon number to the Chern-Simons number of the SU(2) weak fields, through

N_B(t) − N_B(0) = 3 [N_CS(t) − N_CS(0)] = (3/8π²) ∫ dt ∫ d³x E_i^a B_i^a ,    (1)

where E and B are the SU(2) electric and magnetic fields, and I normalize so the gauge field A has units of inverse length and g² appears in the denominator in the Lagrangian. The constant of integration from the indefinite time integral is fixed by requiring N_CS to be an integer for a vacuum configuration. N_CS is called the Chern-Simons number. Because magnetic fields are always transverse (Gauss' law for magnetism), the evolution of Chern-Simons number depends on the physics of the transverse sector.
Having defined Chern-Simons number I can define its diffusion constant,

Γ ≡ lim_{V,t→∞} ⟨[N_CS(t) − N_CS(0)]²⟩ / (V t) ,    (2)

where the angular brackets mean an average is taken over the thermal density matrix. Γ is often referred to as the "sphaleron rate." The reason we care about it is that there is a fluctuation-dissipation relation between it and the relaxation rate for a chemical potential for baryon number. I will not discuss this in detail; see instead 2,3,4 . I also comment that for N_CS to diffuse requires nonperturbative physics, and nonperturbative physics is only unsuppressed, at high temperatures and weak coupling, on length scales ≥ 1/g²T 5 . It is interesting to know Γ in two regimes. The first is the electroweak symmetric phase. Almost all baryogenesis mechanisms will give a final baryon number directly proportional to its value here. For this reason we would like to know it with some accuracy here, which makes the calculation tricky. The other regime where we want to know Γ is in the broken electroweak phase, immediately after the electroweak phase transition, which is to say, right after the baryons were allegedly produced. In this case what we want to know is, are the baryons safe, or will they subsequently be destroyed? Here the strength of the phase transition is important; the real question is, how strong must the phase transition be to prevent the baryons from getting destroyed? To answer this question with decent resolution we only need to know Γ to within ±1 in the exponent; however Γ is exponentially small and perturbation theory is not yet reliable, which will make this calculation tricky as well.
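To make Eq. (2) concrete, the following sketch (Python/NumPy; the random-walk data and function name are invented stand-ins for a real measured N_CS trajectory) shows how the diffusion constant is estimated from a time series, averaging squared displacements over time origins at a fixed lag, which assumes the diffusive regime has been reached:

```python
import numpy as np

def sphaleron_rate(ncs, dt, volume, lag):
    """Estimate Gamma = <[N_CS(t) - N_CS(0)]^2> / (V t) from one long
    trajectory; ncs is sampled every dt, lag counts samples."""
    disp = ncs[lag:] - ncs[:-lag]        # displacements over the lag
    return np.mean(disp ** 2) / (volume * lag * dt)

# Synthetic check: an unbiased random walk with step variance s2 has
# <dN^2> = s2 * nsteps, so the estimate should approach s2 / (V dt).
rng = np.random.default_rng(1)
s2, dt, volume = 0.01, 0.5, 16.0 ** 3
ncs = np.cumsum(rng.normal(0.0, np.sqrt(s2), size=200_000))
print(sphaleron_rate(ncs, dt, volume, lag=1000), s2 / (volume * dt))
```

In a real measurement one checks that the estimate is independent of the lag (diffusive behavior) and extrapolates in volume and run length.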
Approximations I will need
Determining Γ requires computing unequal-time Minkowski correlators at finite temperature in a quantum field theory. Furthermore, if the answer is to be nontrivial the field theory must be showing nonperturbative physics. No one knows how to do this directly. Therefore I will obviously need to make some approximations.
What saves us is that the SU(2) sector is weakly coupled. This allows two key approximations which make the problem tractable. First, the infrared behavior of the theory is classical up to parametrically suppressed corrections. Second, the ultraviolet behavior of the theory is perturbative. Here, infrared means k ≪ πT, while ultraviolet means k ≫ α_w T. When the coupling really is weak, there is an overlap between these two regimes, and every degree of freedom can be treated with one approximation or the other. Then, one can integrate out those degrees of freedom which are perturbative, and treat the remaining, classical theory nonperturbatively on the lattice.
In what follows I will first discuss what we learn by perturbatively integrating out everything we can. Then I will step back and only integrate out the highest-k modes analytically, leaving a larger and more inclusive theory for numerical work. Finally I discuss what we can do in the broken phase. The complete details for these three approaches can be found in 6 , 7 , and 8 respectively.
Leading log
Dietrich Bödeker has shown that it is possible to integrate out all degrees of freedom with momentum scale k ≥ g²T log(1/g), and that doing so produces an effective theory for the remaining k ∼ g²T degrees of freedom which is classical Yang-Mills theory under Langevin dynamics 9 . The physical origin of this effective theory has been discussed by Arnold's group 10 . Integrating out the modes with k ≥ gT gives the well known hard thermal loop effective theory 11 . The behavior of the infrared modes in this theory is overdamped 12 , which just follows from Lenz's law and the fact that the plasma is highly conducting. The conductivity is k dependent on scales shorter than some mean collision length, which in an abelian theory is the large angle scattering length. However in a nonabelian theory a particle's charge is changed by scattering. The mean length for a particle to travel before its charge is randomized is

l_max ∼ 2π / (g²T log(1/g)) ,    (3)

so on scales longer than this the strength of damping is k independent. Therefore, in the approximation that the scale 1/g²T is well separated from the scale 1/(g²T log(1/g)), the infrared fields obey Langevin dynamics on long time scales.
Langevin dynamics have two nice features. First, the Langevin dynamics of 3-D Yang-Mills theory are free of UV problems, and a zero lattice spacing limit exists. Second, Langevin dynamics are very easy to put on the lattice. Therefore the emphasis should be on controlling systematics, such as
1. the thermodynamic match between lattice and continuum,
2. the match between lattice and continuum Langevin time scales,
3. a topological definition of N_CS, and
4. the large volume and long time limits.
All of these systematics can be controlled. The first is discussed in 13 , the second in 6 , and the third in 8 . A volume 8/g²T on a side is large enough to achieve the large volume limit, so I use a volume 16/g²T on a side for overkill. The failure to achieve the large time limit is reflected in the statistical error bars.
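To illustrate the kind of update involved, here is a minimal Langevin step, shown for a free scalar lattice field toy model rather than for SU(2) gauge links (which require more machinery to preserve gauge invariance); the Hamiltonian, step size, and temperature are illustrative choices:

```python
import numpy as np

def langevin_step(phi, dt, temp, rng):
    """One Langevin update, d(phi)/dt = -dH/d(phi) + noise with
    <noise noise> = 2 T delta, for H = sum[(grad phi)^2/2 + phi^2/2]
    on a periodic 3-D lattice."""
    lap = sum(np.roll(phi, s, axis=a)
              for s in (1, -1) for a in range(3)) - 6 * phi
    force = lap - phi                    # -dH/dphi
    noise = rng.normal(0.0, np.sqrt(2.0 * temp * dt), size=phi.shape)
    return phi + force * dt + noise

rng = np.random.default_rng(2)
phi = np.zeros((16, 16, 16))
for _ in range(1000):
    phi = langevin_step(phi, dt=0.01, temp=1.0, rng=rng)
print(phi.var())                         # crude thermalization diagnostic
```

The real simulations evolve gauge links with an analogous drift-plus-noise update and then accumulate N_CS along the trajectory.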
The results, which show beautiful lattice spacing independence, are presented in Table 1, which gives the coefficient κ′ for the equation

Γ = κ′ (g²T²/m_D²) [log(m_D/g²T) + O(1)] α_w⁵ T⁴ .    (4)

These results settle the question, "What is the sphaleron rate in the symmetric phase, in the extreme weak coupling limit?"
Beyond the leading log
In the last section I determined the coefficient of the leading log behavior with very good precision. The problem is the (+O(1)) appearing in Eq. (4). How well does the expansion in log(1/g) converge?
The answer is, probably very poorly. To see this, compare the free path for color randomization, l_max = 2π/(g²T log(1/g)), to the size of a box for which the large volume limit has already been reached, 8/g²T. There is not a large separation between these scales. In fact, it is not clear whether l_max is smaller than the scale characterizing nonperturbative physics, which must after all be well shorter than the dimension of a box which shows large volume behavior. The problem is that Bödeker's approach requires integrating out modes for which the perturbative treatment may not be very reliable. To test this, and to try to determine the sphaleron rate beyond leading log, we need to integrate out less, and make the numerical model include the gT as well as g²T scales.
Figure 1: Sphaleron rate plotted against inverse HTL strength, for the lattice theory plus "particles" used to generate the hard thermal loops. The dashed fit is a straight line, while the dotted fit incorporates the known logarithmic dependence on m_D found in the previous section.
An effective action for the theory with the k ∼ T modes integrated out is known 14 , and goes by the name of the hard thermal loop (HTL) effective action. It is nonlocal, which is not surprising, since its construction involves integrating out propagating degrees of freedom in a Minkowski theory. Unfortunately nonlocality is very problematic for a numerical implementation.
A solution to this problem was proposed a few years ago by Hu and Müller 15 . The idea is that, rather than add the HTL action itself, one adds a set of local degrees of freedom which, if integrated out, would also yield the HTL effective action. Since the HTL action represents the propagation of a set of high momentum, charged particles, what we add are a bunch of high momentum, charged classical particles. For the idea to work it is necessary to add particles in such a way that the numerical model retains an exact gauge invariance, and possesses a conserved energy and phase space measure so that thermodynamic averages are well defined. We present an implementation which satisfies these conditions in 7 , and refer the reader there for the (quite complicated) details.
With a model which reproduces the HTL resummed IR physics in hand, it is possible to test directly Arnold, Son, and Yaffe's claim that the IR dynamics are overdamped. The results are shown in Figure 1, which shows that Γ does vanish linearly as m_D² is increased. The data are not good enough to show unambiguously whether or not Bödeker's log is present. Fitting the data assuming it is, the (+O(1)) in Eq. (4) turns out to be about 3.6, which indicates that the expansion in log(m_D/g²T) is not a very good one. For the physical value of m_D² = (11/6)g²T² and g² ∼ 0.4, the sphaleron rate is (20-25) α_w⁵ T⁴, with systematics-dominated errors of the order of 30%. The main remaining problems to be addressed involve lattice spacing effects and the interactions of the added "particle" degrees of freedom with the most UV lattice modes.
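The two fits in Figure 1 can be reproduced schematically as follows (Python/NumPy; the data points below are invented for illustration, not the measured ones): a pure linear fit Γ ∝ g²T²/m_D², and a fit including Bödeker's log, which is still linear in its two coefficients:

```python
import numpy as np

# Hypothetical (x, Gamma) points with x = g^2 T^2 / m_D^2 and Gamma in
# units of alpha_w^5 T^4; real values come from the HTL simulations.
x = np.array([0.2, 0.3, 0.4, 0.55, 0.7])
gamma = np.array([9.0, 14.5, 20.0, 29.0, 38.0])

# Fit 1: straight line through the origin, Gamma = a * x.
a = np.sum(x * gamma) / np.sum(x * x)

# Fit 2: Gamma = kappa * x * (-0.5*log(x) + c); here log(m_D/g^2 T)
# equals -0.5*log(x) up to a constant, which is absorbed into c.
basis = np.stack([-0.5 * x * np.log(x), x], axis=1)
(kappa, kappa_c), *_ = np.linalg.lstsq(basis, gamma, rcond=None)
print("linear slope:", a)
print("kappa':", kappa, " O(1) constant:", kappa_c / kappa)
```

Deciding between the two fits then reduces to comparing their residuals against the statistical errors, which, as stated above, the present data cannot do unambiguously.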
The broken electroweak phase
In the previous two sections the approach has been to find a numerical system which has the same physics as thermal Yang-Mills theory, and then to evolve it and measure directly the correlator, Eq. (2), which tells how efficiently baryon number is violated. However, this approach fails completely in the broken electroweak phase, because the rate of topological transitions is so small that no reasonable amount of numerical evolution would see any transitions at all. Another alternative, perturbation theory, is not very reliable close to the electroweak phase transition. We know for instance that the one loop and two loop effective potentials give quite different answers for the strength of the transition, and no one knows how to compute the sphaleron rate beyond the one loop level. Some other technique, nonperturbative but not strictly real time, is needed.
The reason for the suppression of the sphaleron rate in the broken phase is shown, in cartoon form, in Figure 2. There is a free energy barrier between minima, meaning that almost none of the weight of the thermal ensemble lies in states intermediate between vacua. The figure also suggests how I will determine the sphaleron rate in the broken phase. I should define N CS (or some appropriate observable) on the lattice, and measure how the free energy depends on it. This is not enough to give the real time rate, but with some more work one can turn the height of the barrier into the real time rate.
Defining N_CS
I begin by defining Chern-Simons number on the lattice. The obvious approach is to use the same definition as in the continuum, Eq. (1). Note that the integral over "time" in that equation could really be an integral along any path through the space of configurations, not just one generated by Hamilton's equations. In particular one can fix the constant of integration by having the path begin or end at a vacuum configuration.
There is a problem on the lattice, which is that no lattice implementation of E_i^a B_i^a is exactly a total derivative. Therefore N_CS defined through Eq. (1) and implemented on the lattice would depend on the path chosen. The resolution is to choose a unique and particularly sensible path, the gradient flow (cooling) path, that is, the path through configuration space along which the energy falls most rapidly. In continuum notation the path (parameterized by a cooling time τ) is given by

∂A/∂τ = − δH/δA ,

where H is the Hamiltonian and all indices have been suppressed. Besides being unique, this path also has the benefit that it moves quickly towards configurations where the gauge fields are smooth. This minimizes the impact of lattice artifacts, which were for instance responsible for E_i^a B_i^a not being a total derivative. Further, the path automatically goes to a vacuum configuration.
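A minimal sketch of such a gradient-flow (cooling) update, again for a scalar lattice toy model rather than gauge links (step size and Hamiltonian are illustrative; the update is the noise-free version of the Langevin step sketched earlier):

```python
import numpy as np

def laplacian(phi):
    return sum(np.roll(phi, s, axis=a)
               for s in (1, -1) for a in range(3)) - 6 * phi

def cool(phi, dtau, nsteps):
    """Follow d(phi)/d(tau) = -dH/d(phi) by explicit Euler steps, so
    the configuration descends the energy landscape and smoothens."""
    for _ in range(nsteps):
        phi = phi + dtau * (laplacian(phi) - phi)
    return phi

def energy(phi):                          # H = phi.(-lap + 1).phi / 2
    return 0.5 * np.sum(phi * (phi - laplacian(phi)))

rng = np.random.default_rng(3)
phi = rng.normal(size=(16, 16, 16))
print(energy(phi), energy(cool(phi, dtau=0.05, nsteps=200)))  # energy falls
```

In the gauge theory the same idea applies with the flow acting on the link variables, and the Chern-Simons integrand is accumulated along the flow to define N_CS.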
This definition of N_CS has two problems, both easily resolved. First, N_CS is poorly behaved in the UV. For instance, its mean squared value diverges as V/a, with V the physical volume and a the lattice spacing (or other regulator). This is resolved by using not N_CS itself but the Chern-Simons number of a configuration after an initial length τ₀ of gradient flow. Our measurable is then dependent on an unphysical parameter τ₀, but it is UV finite, and physical measurables such as Γ will be τ₀ independent in the end.
The second problem is that performing gradient flow down to the vacuum is intensely numerically expensive; yet we will need to do so thousands of times to determine the free energy distribution. This problem is solved by blocking. Gradient flow destroys information, and in particular it destroys almost all the UV information; so nothing is lost by blocking after some modest amount of gradient flow. The numerical savings are immense, and (if we use an O(a²) improved lattice Hamiltonian and implementation of E_i^a B_i^a) almost no accuracy is lost.
Using this definition of N_CS, it is possible to determine the free energy (probability distribution) as a function of N_CS by standard multicanonical Monte-Carlo techniques. A sample result is shown in Figure 3.
Turning probabilities into rates
One cannot read off the sphaleron rate from Figure 3 alone; in fact the height of the barrier in the figure depends on an arbitrary parameter τ₀. To get Γ from the figure we need to know the mean rate at which N_CS is changing during a crossing of the barrier. Multiplying the probability density at the top of the barrier by Ṅ turns the probability density into a probability flux per unit time.
It is straightforward to measure Ṅ numerically. First, we use multicanonical means to get a sample of configurations with N_CS ≃ 0.5. Then, for each we draw momenta randomly from the thermal ensemble and perform a very short period of Hamiltonian evolution, measuring N_CS before and after. Then |dN_CS/dt| is approximated by |N_CS(0) − N_CS(δt)|/δt, and Ṅ is the average of this over the sample. Also, one must divide by the volume used in the lattice simulation, to convert the rate of topological transitions to the rate per unit volume. However, Γ does not equal the probability flux per unit time over the barrier; it is how often one goes from being in one topological vacuum to being in another. It is possible to cross the barrier several times on the way from one minimum to its neighbor, or to cross an even number of times and return to the starting vacuum. This leads to a correction called the "dynamical prefactor," which is the ratio of true topological vacuum changes to crossings of the top of the barrier. To compute it, we use multicanonical means to get a sample of N_CS = 0.5 configurations. Then each is evolved under Hamiltonian dynamics, both forward and backwards in time, until it settles in a topological vacuum. The dynamical prefactor is

d = ⟨ |ΔN_CS| / N_crossings ⟩ ,

where ΔN_CS is the difference in N_CS between the starting and ending vacua and N_crossings is the number of times the trajectory crosses N_CS = 0.5. ΔN_CS is ±1 if there were an odd number of N_CS = 0.5 crossings and 0 if there were an even number; we never observe prompt crossings from one topological minimum to another which is not its immediate neighbor.
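Schematically, the pieces combine as in the sketch below (Python/NumPy; all inputs are invented placeholders, and the exact O(1) flux conventions are fixed in the paper, not here):

```python
import numpy as np

def broken_phase_rate(p_top, ncs_dot, delta_n, n_cross, volume):
    """p_top   : probability density of N_CS at the barrier top
    ncs_dot : |dN_CS/dt| samples from short Hamiltonian runs
    delta_n : net vacuum change (0 or +-1) per trajectory
    n_cross : number of N_CS = 0.5 crossings per trajectory
    volume  : lattice volume, giving a rate per unit volume"""
    flux = p_top * np.mean(ncs_dot) / 2.0        # one-directional flux
    d = np.mean(np.abs(delta_n) / n_cross)       # dynamical prefactor
    return flux * d / volume

rng = np.random.default_rng(4)
n_cross = rng.integers(1, 6, size=500)
delta_n = (n_cross % 2) * rng.choice([1, -1], size=500)  # odd -> +-1
print(broken_phase_rate(p_top=1e-10,
                        ncs_dot=rng.exponential(0.05, size=500),
                        delta_n=delta_n, n_cross=n_cross,
                        volume=16.0 ** 3))
```

The prefactor average weights each sampled barrier configuration by 1/N_crossings, which corrects for the fact that a trajectory crossing the barrier n times is sampled n times by the multicanonical ensemble.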
The hard thermal loops appear in the dynamical prefactor, which is parametrically of order (g⁴T²/m_D²) log(m_D/g²T). However, using the techniques of the last section to include the HTL effects in the calculation of the prefactor shows that, for realistic values of m_D², the importance of HTLs is weak. This is expected, or at least we should expect that the dependence is weaker than in the symmetric phase, because broken phase baryon number violation should be mediated by a spatially smaller configuration, involving higher frequency modes which are less overdamped.
The final result for Γ in the broken electroweak phase, at the electroweak phase transition temperature and for a range of scalar self-couplings, is plotted in Figure 4, which also compares it to a perturbative result based on a two loop potential and the zero mode calculations from Carson and McLerran 16 . The actual rate is substantially but not drastically slower than the perturbative estimate. For comparison, the value needed to avoid baryon number washout after the transition, in the standard cosmology, is Γ ∼ 10⁻⁷ α_w⁴ T⁴.
Conclusion
Tools now exist to calculate the baryon number violation rate in both the symmetric and broken electroweak phases. In the symmetric phase the rate behaves parametrically as α_w⁵, with a logarithmic correction found by Bödeker which is numerically small. The rate at a realistic m_D² is (20-25) α_w⁵ T⁴, with systematic errors, estimated to be of order 30%, dominating statistical errors. In the broken phase the rate is smaller than a perturbative estimate, but still too large to save baryogenesis in the minimal standard model. | 2014-10-01T00:00:00.000Z | 1999-02-25T00:00:00.000 | {
"year": 1999,
"sha1": "eb8a9db878328cbb0963cf1fa75ab6aa9a0b457f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "eb8a9db878328cbb0963cf1fa75ab6aa9a0b457f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
10263346 | pes2o/s2orc | v3-fos-license | Rapid radiation in spiny lobsters (Palinurus spp) as revealed by classic and ABC methods using mtDNA and microsatellite data
Background Molecular tools may help to uncover closely related and still diverging species from a wide variety of taxa and provide insight into the mechanisms, pace and geography of marine speciation. There is a certain controversy on the phylogeography and speciation modes of species-groups with an Eastern Atlantic-Western Indian Ocean distribution, with previous studies suggesting that older events (Miocene) and/or more recent (Pleistocene) oceanographic processes could have influenced the phylogeny of marine taxa. The spiny lobster genus Palinurus allows for testing among speciation hypotheses, since it has a particular distribution with two groups of three species each in the Northeastern Atlantic (P. elephas, P. mauritanicus and P. charlestoni) and Southeastern Atlantic and Southwestern Indian Oceans (P. gilchristi, P. delagoae and P. barbarae). In the present study, we obtain a more complete understanding of the phylogenetic relationships among these species through a combined dataset with both nuclear and mitochondrial markers, by testing alternative hypotheses on both the mutation rate and tree topology under the recently developed approximate Bayesian computation (ABC) methods. Results Our analyses support a North-to-South speciation pattern in Palinurus with all the South-African species forming a monophyletic clade nested within the Northern Hemisphere species. Coalescent-based ABC methods allowed us to reject the previously proposed hypothesis of a Middle Miocene speciation event related with the closure of the Tethyan Seaway. Instead, divergence times obtained for Palinurus species using the combined mtDNA-microsatellite dataset and standard mutation rates for mtDNA agree with known glaciation-related processes occurring during the last 2 my. Conclusion The Palinurus speciation pattern is a typical example of a series of rapid speciation events occurring within a group, with very short branches separating different species. Our results support the hypothesis that recent climate change-related oceanographic processes have influenced the phylogeny of marine taxa, with most Palinurus species originating during the last two million years. The present study highlights the value of new coalescent-based statistical methods such as ABC for testing different speciation hypotheses using molecular data.
Background
The high dispersal potential of planktonic larvae usually results in genetic homogeneity over large distances in marine species, unless local adaptation or oceanographic barriers counteract this dispersal [1]. Because of such dispersal potential, the ranges of marine organisms have frequently been considered vast, even though marine species can also exhibit cryptic speciation and fine-scale endemism [2][3][4]. Allopatric speciation in marine organisms is mainly thought of as a vicariance process, where a species' geographic range becomes fragmented following changes in oceanographic conditions or disconnection of populations by lower sea levels, with a consequent divergence due to genetic drift [5]. However, allopatric speciation could also result from a founder effect, where a new population is established by a small number of individuals, often by long-distance dispersal, with subsequent restricted gene flow leading to speciation [6,7].
Spiny lobsters (Palinuridae Latreille, 1802) are one of the most commercially significant groups of decapod crustaceans and they are considered key predators in a variety of habitats [8][9][10]. The most striking feature of these lobsters is their flat-bodied crystalline larval phase, the phyllosoma larva, which is specially adapted for dispersal in oceanic waters [11,12]. The phyllosoma larva has a long planktonic life (up to 24 months) before metamorphosing into the puerulus stage, which is the transitional stage to a benthic existence [13,14]. Since it is generally assumed that such a long planktonic larval duration should promote high levels of gene flow and effectively counterbalance the speciation process [15,16], life history traits make spiny lobsters a suitable model for better understanding the speciation process in marine organisms with large dispersal capabilities.
The genus Palinurus, a typically temperate-water genus within the Palinuridae [17], is particularly well suited for such studies since it has a well-defined distribution with two groups of three species each present in the Mediterranean and Northeastern Atlantic (P. elephas Fabricius, 1787, P. mauritanicus Gruvel, 1911, and P. charlestoni Forest and Postel, 1964) and Southeastern Atlantic and Southwestern Indian Oceans (P. gilchristi Stebbing, 1900, P. delagoae Barnard, 1926 and P. barbarae Groeneveld et al. 2006) (Figure 1). The phylogeny of Palinurus species has been recently addressed using Parsimony, Maximum Likelihood and Bayesian phylogenetic reconstruction methods on 16S and COI mtDNA sequences [18]. Strong statistical support was found for the monophyly of each species within the genus, but the interspecific relationships were poorly supported. Furthermore, Parsimony and Maximum Likelihood analyses were congruent (in some instances) while the Bayesian analyses failed to support any interspecific clustering, except for a P. delagoae/P. barbarae clade. Even though results were inconclusive, it was pointed out that the Northern Hemisphere species P. charlestoni could have originated from a South African ancestor colonizing the Cape Verde islands. Therefore, it has been proposed that the present geographical distribution of Palinurus species indicates a pre-Miocene allopatric divergence, with two main lineages separating due to the closure of the marine gateway between the Mediterranean Sea and the Indian Ocean after the northward collision of Africa with Eurasia (11.2-23 Mya) [19].
Such an interpretation implies that Palinurus mtDNA (COI and 16S rRNA combined) has evolved no faster than 0.18% (lower bound) to 0.36% (upper bound) per lineage per million years. Those rates of evolution would be 3-7 times slower than those reported for other decapod taxa [20], with Palinurus showing one of the slowest mtDNA mutation rates reported to date [18]. In relation to this question, there is a certain controversy regarding the phylogeography and speciation modes of Eastern Atlantic and Western Indian Ocean species of several taxa, such as algae [21], sea urchins [22] or fishes [23]. These studies suggest that older events during the Miocene (e.g. the closure of the Tethys Seaway) and/or recent oceanographic processes during the Pleistocene (e.g. Western Africa upwellings) could have influenced the phylogeny of marine taxa [24,25]. For example, an extensive review of antitropical biotas has shown that transequatorial migration is the most likely explanation for the origin of the antitropical distributions of most near-shore marine taxa [26]. A more complete understanding of the phylogenetic relationships among Palinurus species will allow us to answer important questions regarding the speciation processes in marine taxa with an Eastern Atlantic and Western Indian Ocean distribution.
The advantage of constructing phylogenetic trees from genetic data has long been recognised, with the most widely used software in the field being MrBayes [27]. Moreover, it has more recently been appreciated that population genetic processes can lead to incongruence between the species tree and the gene tree [28], for which new software considering separately the likelihood of the data given the gene tree and the likelihood of the gene tree given the species tree has been developed on top of MrBayes (BEST [29,30]). This software allows for better estimation of species trees from population genetic data, but the program can be applied only to sequence data. Since microsatellite markers allow different genome regions to be sampled simultaneously, we opted to use a novel package that can deal with both sequence data and microsatellites jointly through the use of an approximate Bayesian computation (ABC) method [31].
Figure 1 The geographic distribution of Palinurus species. Three species are present in the Northeastern Atlantic (P. elephas, P. mauritanicus and P. charlestoni) and another three in the Southeastern Atlantic and Southwestern Indian Oceans (P. gilchristi, P. delagoae and P. barbarae).
Therefore, the present study aims to ascertain phylogenetic relationships and monophyly patterns in species of the genus Palinurus from the Eastern Atlantic and Western Indian Ocean using mitochondrial DNA sequence data and a set of 13 microsatellite markers. In order to resolve the phylogenetic tree topology and test between opposing evolutionary hypotheses (Figure 2), we will use both classic distance-based and recently developed coalescent-based approximate Bayesian computation methods, which have been successfully used to trace complex colonizing scenarios [32]. Coalescent-based methods will allow us to evaluate the likelihood that different mutation rates and tree topologies produced the observed dataset and consequently test the importance of an older mechanism (i.e. closure of the marine gateway) versus more recent oceanographic processes in the origin of Palinurus species.
Classic distance-based methods
We employed CONVERT 1.2 to transform the Excel-based microsatellite dataset into the different formats required by other population genetic programs [39]. Genetic diversity and pairwise differentiation estimates (F_ST) for microsatellite data were obtained using the GENEPOP package version 4.0.7 [40], while the average GTR-corrected sequence divergence was obtained for mtDNA data as in [18]. Since different genetic markers can provide conflicting phylogenetic signals, a Mantel test was carried out correlating the genetic distance matrix obtained from mtDNA sequence data and the F_ST distance matrix based on microsatellites. This allowed us to check for congruence among markers and for the possibility of combining them in a joint dataset. Isolation-by-distance patterns were evaluated by correlating geographic distance and each genetic distance through the Mantel test. If Palinurus speciation had occurred following a gradual North-to-South pattern, we would expect the species located most closely geographically to be those separated by the smallest genetic distance, whereas we would expect a vicariance event and posterior contact between phylogenetically distant species (topology 2) to reduce the correlation between genetic and geographic distance matrices. The rows and columns of one of the matrices were subjected to 700 random permutations, with the correlation being recalculated after each permutation. The Mantel test analysis for a multiple set of distance matrices has been implemented in R and is available from the authors upon request. Finally, the distance measure of Cavalli-Sforza and Edwards [41] was obtained from the combined mtDNA-microsatellite dataset by coding mtDNA haplotypes as individual alleles, and phylogenetic trees (based on individuals or species) were built using the Neighbor Joining algorithm as implemented in Populations v1.2.30 http://bioinformatics.org/~tryphon/populations/. The distance-based method has been shown to be among the most robust methods for phylogenetic reconstruction using microsatellite data [42][43][44]. A total of 1000 bootstrap replicates over loci were obtained to assess support for each clade. Previous phylogenetic analyses indicate that the species found in the northernmost part of the distribution area of the genus, Palinurus elephas, is the most basal species of the group and therefore it has been used to root the trees [18,38].
Figure 2 Phylogenetic trees showing the alternative hypotheses tested in the present study. The North-to-South speciation model (topology 1, with P. charlestoni originating from a P. mauritanicus-like ancestor) and the South-to-North speciation model (topology 2, with P. charlestoni originating from a South African ancestor).
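The Mantel procedure itself is simple enough to sketch; the version below is an independent Python/NumPy illustration (the toy matrices are invented), not the authors' R implementation:

```python
import numpy as np

def mantel(dist_a, dist_b, n_perm=700, rng=None):
    """Permutation Mantel test between two square distance matrices."""
    if rng is None:
        rng = np.random.default_rng()
    n = dist_a.shape[0]
    iu = np.triu_indices(n, k=1)          # off-diagonal entries only

    def corr(m1, m2):
        return np.corrcoef(m1[iu], m2[iu])[0, 1]

    r_obs = corr(dist_a, dist_b)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(n)         # permute rows and columns together
        if corr(dist_a[np.ix_(perm, perm)], dist_b) >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Toy example with 6 "populations" on a line.
rng = np.random.default_rng(5)
geo = np.abs(np.subtract.outer(np.arange(6.0), np.arange(6.0)))
gen = geo + rng.normal(0, 0.5, size=(6, 6))
gen = (gen + gen.T) / 2.0
np.fill_diagonal(gen, 0.0)
print(mantel(geo, gen, rng=rng))
```

The 700 permutations match the number used in the study; the reported P value is the one-sided proportion of permuted correlations at least as large as the observed one.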
Approximate Bayesian computation implementation
When inferring phylogenies under the "Isolation with migration" model [45], likelihoods can only be computed for relatively simple scenarios containing few parameters [46]. Indeed, the likelihood function for complex demographic scenarios can be very difficult, and in practice impossible, to solve analytically [47]. For this reason, the application of ABC methods to phylogenetic inference problems has attracted great interest [48][49][50]. ABC methods have the advantage of facilitating the comparison of alternative models marginal to parameter values without the need to calculate likelihoods [51]. The method relies on the simulation of large numbers of data sets using known parameters under a given coalescent model, which makes it more realistic than standard sequence-based phylogenetic approaches [30,52].
When dealing with coalescent-based inference, we rely on simulating genetic data based on a coalescent model and computing summary statistics from the simulated datasets. A typical ABC approach involves two steps [51]: a rejection step and a regression adjustment and weighting step. The rejection step consists of accepting only the simulations whose summary statistics are close to the summary statistics obtained from the observed dataset. To assess this closeness, a Euclidean distance is computed between the entire set of normalized summary statistics and the normalized summary statistics calculated from the data. A set of parameter values is accepted when its Euclidean distance is within a certain percentage of the closest points to the studied data (as in [53]). The second step is a local linear regression adjustment that attempts to model the relationship between the parameter values and the summary statistics. This linear regression is performed only for the accepted set of parameter values. We assume that the relation between parameters and summary statistics is close to linear in the proximity of the target summary statistics. By using this adjustment, more points can be accepted, which allows a better characterization of the problem space [52]. Also in this step, each accepted set of parameter values is given a weight between zero and one that declines quadratically until a defined distance from the studied data set is reached (as in [48]).
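A compact sketch of these two steps (Python/NumPy; array names and the quantile-based tolerance are illustrative, and the real analysis also applies the log transformation described below):

```python
import numpy as np

def abc_rejection_regression(params, stats, obs, tolerance=0.01):
    """params: (n_sims, n_params) positive parameter draws;
    stats: (n_sims, n_stats) their summary statistics;
    obs: (n_stats,) observed summary statistics."""
    # Rejection step: Euclidean distance on normalized statistics.
    mu, sd = stats.mean(axis=0), stats.std(axis=0)
    z, z_obs = (stats - mu) / sd, (obs - mu) / sd
    dist = np.sqrt(((z - z_obs) ** 2).sum(axis=1))
    keep = dist <= np.quantile(dist, tolerance)
    d, p, s = dist[keep], params[keep], z[keep]

    # Epanechnikov weights, declining quadratically with distance.
    w = 1.0 - (d / d.max()) ** 2
    sw = np.sqrt(w)

    # Local linear regression adjustment, one parameter at a time, on
    # a log scale; accepted values are shifted to the observed
    # statistics while keeping their regression residuals.
    design = np.hstack([np.ones((s.shape[0], 1)), s])
    target = np.concatenate([[1.0], z_obs])
    adjusted = np.empty_like(p)
    for j in range(p.shape[1]):
        y = np.log(p[:, j])
        beta = np.linalg.lstsq(design * sw[:, None], y * sw, rcond=None)[0]
        adjusted[:, j] = np.exp(target @ beta + (y - design @ beta))
    return adjusted, w
```

Posterior densities are then estimated from the adjusted, weighted sample; in the actual analysis this role is played by the makepd.r script and the locfit density estimator.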
To reduce heteroscedasticity in the regression, all demographic parameter values were transformed on a log scale. The transformed parameter values were adjusted one at a time using a general linear regression on the accepted points. Adjusted values were then back-transformed by taking the exponential for all parameters, in order to present posterior densities on a natural scale [51,52]. The transformation also minimizes the appearance of values outside the prior ranges after performing the linear-regression correction. Previous studies indicated that the logistic and related transformations can lead to biases in the posterior densities estimated in the proximity of the prior boundaries under particular circumstances [31]. To avoid this problem we chose a log transformation, which still allows points at the lower boundary to be retained within the support of the model. In this case, the points that fell outside the upper boundary after regression were discarded, since this procedure has been shown to give a more efficient estimation [31].
A standard backward coalescent process was implemented to simulate genetic data [54,55]. Simulated data are obtained by adding mutations under a stepwise mutation model [56] for short tandem repeats (STRs) and an infinite sites model [57] for sequence data. Hamilton and coworkers [58] suggest running several hundred thousand to millions of simulations, depending on the complexity of the underlying model. In our simulations 1,000,000 sets of summary statistics were generated and a tolerance δ = 0.01 was used to give 10,000 points from which parameters were estimated. When performing model choice between the different suggested scenarios, 2,000,000 points were simulated and a tolerance of δ = 0.005 was used. We used the mode of the posterior distributions as a point estimate of each parameter. Credible intervals were calculated around the mode, following previous studies by Hamilton et al. [58] and Beaumont [53].
The model-choice studies were performed by first carrying out the simulations in parallel on a 24-node cluster, and then combining the simulated output, in order to shorten the simulation time. A program developed by Lopes and co-workers was used to simulate genetic data under an "Isolation with migration" model for any number of modern populations [31]. This software allows the use of STRs and single nucleotide polymorphism (SNP) data simultaneously. The regression step was performed using a script developed by Beaumont (makepd.r, http://www.rubic.rdg.ac.uk/~mab/) under the free software environment R v2.5.0 [59]. The posterior density estimation from the adjusted sample of parameter values was carried out using the locfit function [60].
Prior distributions of parameters
The same priors for the demographic parameters (current and ancestral effective sizes following a uniform distribution ranging from 1,000 to 100,000) were used for inferences based on the North-to-South speciation model (topology 1) and those based on the South-to-North speciation model (topology 2, with P. charlestoni originating from a South African ancestor) (Figure 2). The priors for the demographic parameters were chosen according to information available from the literature [34,35,61]. The priors for the splitting time estimates also followed uniform distributions, as indicated in Table 1. Mutation rates for each locus were treated as nuisance parameters. Although we intended to differentiate between standard and slow mutation rates, we did not intend to infer their exact values. Therefore, a broad prior was used for the locus mutation rates to account for the uncertainty in the estimates. The variation in mutation rate between loci was accounted for by using a hierarchical Bayesian framework [62]. The mutation rates for each locus were drawn from a lognormal distribution (priors) with the mean sampled from a normal distribution and the standard deviation being the absolute value sampled also from a normal distribution (hyper-priors) [53]. In order to cover the proposed limits of the mutation rate, we used a standard deviation hyperprior of 0.25 and chose for mtDNA a mean standard mutation rate of 2.34 × 10⁻⁵ (log10 = −4.630784) and a slow mutation rate of 5.4 × 10⁻⁶ (log10 = −5.267606) (Table 1). The use of hyper-parameters within ABC methods has been previously described [48,53,63].
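A sketch of drawing per-locus rates under this hierarchical prior (Python/NumPy; the spread of the hyperprior on the mean is an illustrative assumption, since only the 0.25 standard-deviation hyperprior is specified above, and the STR centre is likewise assumed):

```python
import numpy as np

def draw_locus_rates(n_loci, log10_mean_center, rng):
    mean = rng.normal(log10_mean_center, 0.25)  # hyperprior on the mean
    sd = abs(rng.normal(0.0, 0.25))             # hyperprior on the spread
    return 10.0 ** rng.normal(mean, sd, size=n_loci)

rng = np.random.default_rng(6)
print(draw_locus_rates(1, -4.630784, rng))      # mtDNA, "standard" centre
print(draw_locus_rates(13, -3.5, rng))          # 13 STR loci (centre assumed)
```

Each simulated dataset thus carries its own set of locus rates, integrating the rate uncertainty into the posterior rather than fixing it.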
Choice of summary statistics
The summary statistics were chosen according to their success in previous ABC studies [49,53]. For mtDNA, 3 summary statistics were calculated for each sampled deme: number of haplotypes, k; number of segregating sites, S; and the average number of pairwise differences, π. For STR data, three within-deme summary statistics were calculated for each sampled deme: allele number, k; heterozygosity, H; and variance in allele length, Var(length). All these six statistics were computed for each of the six populations taken individually and for each of the fifteen pairs of populations pooled together. Hence, the Euclidean distances were computed from a total of 126 normalized summary statistics.
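These statistics are straightforward to compute; a minimal Python/NumPy sketch for one deme follows (the heterozygosity here is computed as gene diversity 1 − Σp², an assumption, since the exact estimator is not specified above):

```python
import numpy as np

def seq_stats(haplotypes):
    """mtDNA statistics: haplotype number k, segregating sites S, and
    mean pairwise differences pi. haplotypes: (n_seqs, n_sites),
    integer-coded alignment."""
    k = len(np.unique(haplotypes, axis=0))
    seg = int(np.sum(np.ptp(haplotypes, axis=0) > 0))
    n = haplotypes.shape[0]
    diffs = [np.sum(haplotypes[i] != haplotypes[j])
             for i in range(n) for j in range(i + 1, n)]
    return k, seg, float(np.mean(diffs))

def str_stats(lengths):
    """STR statistics at one locus: allele number k, heterozygosity H,
    and variance in allele length. lengths: 1-D array of allele calls."""
    alleles, counts = np.unique(lengths, return_counts=True)
    p = counts / counts.sum()
    return len(alleles), 1.0 - np.sum(p ** 2), float(np.var(lengths))
```

Applying these to the six populations singly and to the fifteen pooled pairs reproduces the 126-statistic vector described above.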
Comparison of scenarios using approximate Bayesian computation
In order to test between the previously proposed hypotheses (Figure 2), we considered four scenarios which differed in the population tree topology and in the prior distribution on the mutation rate: (1) sequential expansion from the Northern hemisphere, considering a normal prior distribution centred on a standard mutation rate (Pliocene-Pleistocene speciation); (2) North-to-South expansion, but considering a normal prior distribution centred on a slow mutation rate (Miocene speciation); (3) secondary colonization of the Northern hemisphere, with P. charlestoni originating from a P. gilchristi-like ancestor, considering a normal prior distribution centred on a standard mutation rate; and (4) secondary colonization of the Northern hemisphere, but considering a normal prior distribution centred on a slow mutation rate. Therefore, an ABC framework was used to discriminate among our four different scenarios. This model-selection step was performed before estimating the final demographic historical parameters, which was done conditional on the most likely scenario. The prior probability of each scenario in all the comparisons was set to be equal (i.e. 1/2 for each two-scenario comparison). The posterior probability of each model was then estimated by performing the rejection step followed by a logistic regression [53]. Priors for divergence times were made broad enough to accommodate both a Miocene and a Pleistocene speciation pattern (Table 1).
Beaumont [53] indicated that it is possible to sample the model indicator (i.e. {1, 2,..., m}) for "m" models (M1, M2,..., Mm) from a prior and treat this as a categorical random variable, X, in the ABC simulations. We can then apply a categorical regression to estimate P(X = x | S = s'), where x = 1, 2,..., m is the indicator for model Mx and s' is the vector of summary statistics that summarize our observed data. A weighting scheme can also be used, with weights given by the Epanechnikov kernel, as done in a standard regression procedure. The regression step was performed using Beaumont's R script calmod http://www.rubic.rdg.ac.uk/~mab, which requires the VGAM package [64]. This procedure has been shown to substantially improve previous methods for selecting among different models using ABC [49,53].
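A stripped-down version of this model-choice step for two models (Python/NumPy; the Epanechnikov weighting is omitted for brevity, and the real analysis uses the calmod/VGAM categorical regression rather than this hand-rolled logistic fit):

```python
import numpy as np

def model_posterior(stats, model_idx, obs, tolerance=0.005, n_iter=50):
    """stats: (n_sims, n_stats); model_idx: 0/1 model indicators;
    obs: observed statistics. Returns P(model 1 | obs)."""
    mu, sd = stats.mean(axis=0), stats.std(axis=0)
    z, z_obs = (stats - mu) / sd, (obs - mu) / sd
    dist = np.sqrt(((z - z_obs) ** 2).sum(axis=1))
    keep = dist <= np.quantile(dist, tolerance)
    x = np.hstack([np.ones((int(keep.sum()), 1)), z[keep]])
    y = model_idx[keep].astype(float)

    beta = np.zeros(x.shape[1])
    for _ in range(n_iter):                  # Newton-Raphson / IRLS
        prob = 1.0 / (1.0 + np.exp(-x @ beta))
        w = np.clip(prob * (1.0 - prob), 1e-9, None)
        grad = x.T @ (y - prob)
        hess = (x * w[:, None]).T @ x + 1e-6 * np.eye(x.shape[1])
        beta += np.linalg.solve(hess, grad)

    x_obs = np.concatenate([[1.0], z_obs])
    return 1.0 / (1.0 + np.exp(-x_obs @ beta))
```

Evaluating the fitted regression at the observed statistics yields posterior model probabilities of the kind reported in Table 2.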
Classic distance-based methods
All microsatellite markers were polymorphic in every tested species with the exception of PE10 in P. barbarae (Additional file 1). A Mantel test (R = 0.626; P = 0.003) revealed a significant correlation between the genetic distance matrix obtained from mtDNA sequence data and that obtained from microsatellites (Additional file 2). Moreover, the correlation between the matrix of geographic distance and each genetic distance matrix was always significant, although higher for the microsatellite dataset (R = 0.585; P = 0.001) than for the mtDNA dataset (R = 0.237; P = 0.033). The distance measure of Cavalli-Sforza and Edwards showed a southern hemisphere species clade and agreed with placing P. charlestoni samples next to P. elephas samples when phylogenetic trees were built using the individual-based matrices and the Neighbor Joining algorithm (Figure 3). When dealing with species instead of individuals, a well supported monophyletic southern hemisphere clade was obtained as well, even though the phylogenetic relationships among the northern hemisphere species were not completely resolved (Figure 4).
ABC methods
The posterior distributions of the model comparisons are presented in Table 2. The simulations obtained using distributions centered on the slow and standard mutation rates allowed us to check which mutation rate has a higher posterior probability regardless of topology (Table 2a). The tests for mutation rate under both topology models gave better support to a standard mutation rate. This support is fairly strong, giving about 90% probability of a standard mutation rate. After testing for the mutation rate, the comparison of topologies assuming a particular mutation rate distribution allows us to check which topology presents a higher posterior probability. The comparisons between the two different speciation models (Figure 2) considering the standard mutation rate strongly support the North-to-South speciation model (Table 2b). Therefore, the most likely scenario was a standard mutation rate and topology 1. According to the model comparison results, demographic parameters were estimated by conditioning the ABC runs on the most supported model (North-to-South speciation with a standard mutation rate).
Estimates of modern population effective sizes using both mitochondrial sequences and microsatellite markers pointed to about 50,000 individuals for populations of P. barbarae, P. delagoae, P. gilchristi and P. elephas and between 80,000 and 100,000 individuals for P. mauritanicus and P. charlestoni (Table 3). Estimates of the effective size of the ancestor populations were not very informative given the large confidence intervals obtained (Additional file 3). The estimate for the second ancestor population, however, had a quite informative posterior distribution, showing a value of about 10,000 individuals. This ancestor population refers to the original population from the South African coast that later originated P. gilchristi and from which P. barbarae and P. delagoae were derived. This value corresponds to the effective population size, so it is not straightforward to infer the census population size from it. Nevertheless, these results seem to point to an expansion event of the referred South African ancestral population.
In this ABC study, the demographic parameter estimates with the best support were those regarding splitting times. When conditioning on a North-to-South speciation pattern, all five splitting times showed posterior distributions considerably different from the priors and quite tight around the mode (Table 3). Accordingly, the splitting time at which both the P. elephas and P. mauritanicus lineages originated was placed around 2 million years ago (mya), the separation between the P. mauritanicus and P. charlestoni lineages took place about 1 mya, the colonization of South Africa would have taken place only 0.5 mya, and the appearance of both P. delagoae and P. barbarae would be placed at about 0.2 mya and 0.1 mya, respectively.
Discussion
Our within-genus phylogenetic analyses using a new set of polymorphic nuclear loci and classic distance-based methods consistently support a North-to-South pattern of speciation in Palinurus, with all the South African species forming a monophyletic clade (topology 1; Figures 2, 4). Moreover, the combination of nuclear and mtDNA markers under recently developed coalescent-based ABC methods has allowed us to test the previously suggested hypothesis of P. charlestoni originating from a P. gilchristi-like ancestor [18]. It is known that when we are interested in the old events in a gene's history, small samples are sufficient, while if we are interested in recent events then larger sample sizes are critical [65]. Even though a larger sample size would allow us to better characterize the coalescent properties for each species (e.g. times to coalescence), and we were able to obtain only 5 P. charlestoni samples from the Cape Verde Islands, our results show that a North-to-South speciation pattern occurring during the last 2 my is more consistent with the observed dataset.
Figure 3 Phylogenetic tree built from the individual-based distance matrix using the combined mtDNA-microsatellite dataset. The Neighbor Joining algorithm under the Cavalli-Sforza and Edwards distance measure agrees with placing P. charlestoni samples next to P. elephas samples. Species are coded by colors.
A previous phylogeographic study using the COI region and standard rates for decapod crustaceans [33] found divergence times for P. elephas and P. mauritanicus to be much more recent [18]. Also, a previous study on the Achelata infraorder evolutionary relationships showed no genetic variation to be present among most Palinurus species when analyzed using sequence data from several nuclear genes [38]. These results already suggested that the speciation process within the genus Palinurus is fairly recent and therefore not related with the Middle Miocene closure of the Tethyan Seaway. In agreement with previous evidence, the coalescent simulations carried out in the present study indicate that the observed molecular dataset is not likely to result from a low mutation rate, while the standard mutation rate is supported regardless of the speciation hypothesis assumed. Most interestingly, divergence time estimates obtained for Palinurus species using standard rates agree with known glaciation-related processes occurring over the last 2 my [66].
The late Pliocene changes in the climate system, both in the Northern and the Southern Hemisphere, had a large impact on the evolution of many terrestrial organisms [67]. The gradual closure of the Panama Isthmus between 5 and 3 mya stopped the exchange of tropical Atlantic and Pacific water masses and the present-day overall circulation pattern was established, with a strong influence of North Atlantic Deep Water on the global circulation and an intensification of the Benguela Current upwelling system [66,68]. The Matuyama diatom maximum (~2 mya) marked the transition to a cold mode of trade-wind controlled upwelling along the Southwest African coast with enhanced advection of sub-Antarctic water masses [69]. Marlow et al. [70] have shown that the Benguela Current upwelling system became pronounced at 2.1 to 1.9 mya and intensified during the period leading to the onset of the 100 ky glacial cycles at about 0.6 mya. Consequently, the intensification of Benguela Current upwelling had a direct regional influence by cooling the marine waters of southern Africa, with average sea surface temperature shifting from about 26°C in the mid-Pliocene (3.5 mya) to approximately 11-17°C in the present [24,70]. This could have facilitated the colonization of southern Africa from the North, since temperate-water Palinurus species are generally found between 11-16°C [9,18]. Interestingly, ABC divergence time estimates for the separation of the P. charlestoni and P. gilchristi lineages would correspond to about 550 ky ago, after Southwest African waters became suitable for Palinurus species.
Figure 4 Species tree obtained using the Neighbor Joining algorithm on the Cavalli-Sforza and Edwards distance matrix from the combined mtDNA-microsatellite dataset.
Even though the results obtained in previous studies provide comforting genealogical support to the classical view of species as qualitatively distinct taxa [18,71], the reconstruction provided by mtDNA is not representative of most of the genome and may bias perceptions of evolutionary diversification [72]. The demographic context of differentiation is not taken into account in most mtDNA studies because single loci offer low precision in estimates of historical population size [28] and because the relatively shallow coalescent time for this molecule limits the temporal window for demographic inferences [73]. For a given divergence time, historical population size is a key factor determining whether a species is polymorphic at most loci and whether genes are expected to accurately trace the species phylogeny. It is widely recognized that large populations undergoing rapid speciation, such as in some marine species, could create intermingled genealogical tracings containing very little phylogenetic information on species divergence [74].
Some doubts had previously been raised on the phylogenetic relationships of the extant species of the spiny lobster genus Palinurus, since they show very few morphological differences, with overlaps between the Indian and Atlantic Ocean taxa [75]. In fact, most Palinurus species cannot be discriminated using sequences for several nuclear coding genes, and P. delagoae and P. gilchristi have been found to share 16S haplotypes [18,33]. Even though the nesting of some individuals within a different species may be caused by some microsatellites being identical in allele size without being identical by descent (homoplasy), the results obtained from individual-based analyses using mtDNA and microsatellite markers indicate that monophyly patterns in P. delagoae and P. barbarae are not well supported (Figure 3). The coalescent theory has shown that polyphyletic gene lineages can persist in species long after divergence [72,76]. With an ancestral effective population size of about 20,000 [34] and a generation time of 4-10 years [36], an average of 160-400 ky would be needed for gene lineages to become fixed in Palinurus species. This would be in agreement with the divergence time estimates obtained from the combined dataset, since individual-based analyses show that incomplete lineage sorting is most pronounced in the most recently evolved species pair, P. delagoae and P. barbarae (divergence time: ~89 kyr).
Conclusion
The combination of markers from both nuclear and mitochondrial genomes under an ABC-coalescent framework has proven to be effective for testing among alternative evolutionary hypotheses in Palinurus and highlights the importance of using multiple markers when dealing with closely related species. The Palinurus speciation pattern is a typical example of a series of rapid speciation events occurring within a group, with very short branches separating different species. It thus shows that molecular tools can provide new insights into the mechanisms, pace and geography of marine speciation. Indeed, recent genetic evidence suggests that many species groups are relatively new, originating after the onset of the Pleistocene, during the last two million years [5,77]. These recent speciation events provide a great opportunity to analyze the speciation process in marine taxa, since footprints of species formation are most likely to be identified when comparing recently diverged species, the initial differentiation of which can be correlated with the different proposed speciation processes.
| 2017-04-14T09:41:33.039Z | 2009-11-09T00:00:00.000 | {
"year": 2009,
"sha1": "c126df931e7039fc71e9be3fac3d26e558615bd9",
"oa_license": "CCBY",
"oa_url": "https://bmcevolbiol.biomedcentral.com/track/pdf/10.1186/1471-2148-9-263",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ca0b487432fec684e76851db599dc3d80d971b2e",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
270525100 | pes2o/s2orc | v3-fos-license | Evaluation of Sexual Dimorphism Using Condylar and Coronoid Mandibular Parameters in Orthopantomograms: A Pilot Study
Background Gender determination is critical to forensic science and medico-legal applications. Given that it is the most dimorphic bone in the skull and is frequently found intact, the mandibular bone may be extremely important in determining gender. Orthopantomograms (OPGs) are quite helpful in accurately estimating age and sex in this regard. Determining the gender of victims of mass casualties, natural disasters, and severely dismembered bodies is a laborious task for forensic experts. The mandible, which is subject to growth spurts, offers a high degree of accuracy for determining sex. Aim This study aims to evaluate the potential use of coronoid height and condylar height as reliable anatomical markers for determining gender. Materials and methods In this study, 100 samples were used, 50 male and 50 female, in the age group of 20-30 years. The OPGs were obtained using a Planmeca Promax Scara 3 Digital OPG Machine (Planmeca, Helsinki, Finland), with settings of 70 kVp and 8 mA for 0.9 seconds, ensuring a 1:1 ratio. The images were then transferred to Planmeca Romexis® Viewer Software, Version 6.0 (Planmeca Oy, Helsinki, Finland) for measurement recording. Results Descriptive statistical analysis was carried out, and discriminant analysis was performed to create a population-specific formula. Results showed that the standard error of the mean for males was 2.3 for condylar height and 0.7 for coronoid height; for females, it was 1.6 for condylar height and 0.6 for coronoid height. The p-value was significant for coronoid height in both males and females, but was not statistically significant for condylar height in either. Conclusion The study's findings indicate that a larger mandibular angle is advantageous for gender assessment and helps with gender dimorphism. Of the two parameters evaluated, coronoid height showed statistical significance in both males and females. Hence, the study concludes that the parameter coronoid height can be utilized to assess the gender of an individual.
Introduction
Gender estimation is of tremendous importance to forensic odontology since it provides a lot of information for identifying unknown individuals. New approaches are continuously being investigated to improve the precision and dependability of gender estimation as technology develops. An important area of forensic anthropology is the study of sexual dimorphism using mandibular characteristics, which plays a major role in accurately determining the gender of an individual. Males and females differ morphologically in the mandible, a skeletal component that is sexually dimorphic [1]. Analyzing coronoid and condylar height is one such new approach that has the potential to provide perceptive data. The coronoid and condylar processes, significant parts of the mandible, are essential for the smooth functioning of the temporomandibular joint (TMJ). Orthopantomograms (OPGs) offer a reliable and accurate method for measuring the heights of the condylar and coronoid rami due to their comprehensive panoramic view, consistent imaging technique, and ability to clearly visualize and measure critical anatomical landmarks. Their role in clinical diagnosis, treatment planning, and follow-up makes them an indispensable tool in dental and maxillofacial radiology. Even though these structures are mostly related to how the jaw functions, they can also be accurate markers of sexual dimorphism [2]. Sexual dimorphism refers to the physiological differences between males and females, and forensic investigators identify the gender of unidentified people by taking advantage of these disparities [2,3].
Dental features, including tooth size and dental morphology, and the examination of dental remains for identification purposes have played an important role in the field of forensics. However, in the pursuit of more precise and sophisticated gender assessment techniques, researchers have turned their attention to other anatomical characteristics, such as those connected to the mandible [4]. It is one of the sexually dimorphic bones, exhibiting distinct differences between males and females. The two processes of the mandible, the condyle and the coronoid, are found to show varied heights in males and females [4,5]. In forensic and anthropological studies, the measurement of condylar ramus height and coronoid ramus height using OPGs can be valuable for sex determination. The coronoid height can simply be termed the distance between the highest point of the coronoid process and the inferior border of the mandible. The condylar ramus height tends to be greater in males compared to females; this difference is attributed to the generally larger and more robust mandibles in males. Similarly, condylar height is the distance between the highest point of the condylar process and the inferior border of the mandible [6]. Like the condylar ramus height, the coronoid ramus height is typically greater in males. The larger coronoid process in males is associated with stronger masticatory muscles and overall bone density. According to previous research, there is sexual dimorphism evident in these measurements, with males typically having greater coronoid and condylar heights than females. These mandibular components vary in size and shape due to the influence of sex hormones on bone development, resulting in dimorphism [7].
Using coronoid and condylar height for gender assessment has several advantages. Primarily, these measurements are invaluable in forensic scenarios, where non-invasive techniques are preferred, as they can be obtained from skeletal remains without causing damage. Numerous studies among various populations have demonstrated the accuracy and reliability of these measurements, validating their use across different ethnic groups [8]. Ongoing research aims to refine and validate methods for determining gender based on mandibular parameters. Geometric morphometrics and three-dimensional (3D) surface scanning are powerful tools that significantly enhance the precision and accuracy of measurements in various fields, including biology, anthropology, paleontology, and archaeology; together, they provide a sophisticated framework for morphological studies, enhancing both the precision and accuracy of measurements and analyses. Further, an interdisciplinary approach among radiologists, computer scientists, and anthropologists has led to the development of automated quantification algorithms, making the analysis process more efficient and reducing subjectivity [9].
The mandible is frequently better preserved than the structures required by other traditional techniques, which depend on the presence of particular teeth that may be absent or damaged in forensic situations. The mandible is the second-best marker of gender determination after the pelvis, as the mandible is sexually diversified. Furthermore, the mandible is a good sign of sexual dimorphism even in older skeletal remains, as it displays this trait throughout the life of an individual. More in-depth, population-specific research is needed before coronoid and condylar height analysis can be used routinely in forensic settings [10]. To guarantee the accuracy and dependability of gender estimation results, factors including age, population variability, and possible hormonal influences must be taken into account. These mandibular parameters may become essential elements of forensic investigations as technology advances and our knowledge of bone anatomy expands, providing a useful tool for identifying and bringing closure to cases involving unidentified individuals. By comparing the condylar and coronoid ramus heights of an unknown subject with the established baselines, forensic experts can infer the sex of the individual [3]. The present study was undertaken to assess the rationale of the mandibular parameters, coronoid and condylar height, in sex estimation.
Materials And Methods
This study was designed to establish a population-specific criterion for evaluating sex and age with mandibular parameters as a measure. For this study, we collected digital OPGs from the archives of the Department of Oral Medicine and Radiology at Saveetha Dental College and Hospitals, Chennai, India. The sample size was calculated using G*Power software (version 3.1.9.4; Heinrich Heine Universität Düsseldorf, Düsseldorf, Germany) to ensure a statistical power of 95% with a significance level (alpha error probability) of 0.05. The calculated sample size was 92, and we included a total of 100 samples to ensure adequate statistical power. To prevent bias, an equal number of samples were collected from males and females, with 50 samples each. The age range included in the study was 20-30 years. The parameters evaluated were condylar ramus height and coronoid ramus height. These parameters were assessed to decide whether they can contribute to age and sex determination. The study included OPG radiographs that were visible, well-aligned, and had known sex information, while those with artifacts or pathological conditions were excluded. Measurements were taken using Planmeca Romexis® Viewer, Version 6.0 (Planmeca Oy, Helsinki, Finland). Statistical analysis was performed using SPSS for Windows, Version 16 (Released 2007; SPSS Inc., Chicago, IL, USA). The study received approval from the Institutional Human Ethics Committee of Saveetha Dental College (IHEC/SDC/FACULTY/22/FO/059). The coronoid ramus height of the mandible measures the distance between the highest point of the coronoid process and the inferior border of the mandible, as shown in Figure 1.
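For readers without access to G*Power, an equivalent sample-size computation can be run in Python; a minimal sketch is below. Note that the effect size used here is our own assumption (the text does not report the value entered into G*Power), chosen so that the result approximately reproduces the stated total of 92.

```python
# Hypothetical re-computation of the G*Power sample size (two-sample t-test).
# Assumption: effect size d = 0.75, NOT reported in the text; chosen so the
# output is close to the stated total of ~92, i.e. ~46 per group.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.75,
                                          alpha=0.05, power=0.95)
print(round(n_per_group))   # ~46-47 per group, i.e. ~92-94 in total
```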
FIGURE 1: An orthopantomogram represents the coronoid ramus height (Cor. ht) of the mandible
The condylar ramus height of the mandible shows the distance between the highest point in the condylar process and the inferior border of the mandible represented in Figure 2.
Results
The results of this study show that there are significant differences in the mandibular angle between the two genders. The analyses of the mandibular parameters, condylar ramus height and coronoid ramus height, for males and females are presented below. In males, the mandibular parameters show a standard deviation of 16.3 for condylar ramus height and 5.2 for coronoid ramus height. Also in males, the parameter coronoid ramus height has a standard error mean of 0.7, whereas condylar ramus height has a standard error mean of 2.3. The p-value of 0.00 for coronoid ramus height was statistically significant in males, and the p-value of 0.37 for condylar ramus height was statistically not significant. The means of all the parameters and their combined means within the category (20-30 years) of males are illustrated in Table 1 (in males, the coronoid ramus height p-value of 0.00 was statistically significant, while the condylar ramus height p-value of 0.37 was statistically not significant; * p-value is statistically significant). In females, the mandibular parameters show a standard deviation of 1.6 for condylar ramus height and 0.6 for coronoid ramus height. Also, coronoid ramus height has a standard error mean of 4.5, whereas condylar ramus height has a standard error mean of 11.8. The p-value of 0.00 for coronoid ramus height was statistically significant in females, and the p-value of 0.37 for condylar ramus height was statistically not significant.
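As a quick consistency check, the male standard errors reported above follow from the standard deviations and the group size; a minimal sketch is shown below, assuming that the reported "standard error mean" is the standard error of the mean, SD/sqrt(n).

```python
import math

n = 50                       # number of male samples in the study
sd = {"condylar": 16.3,      # standard deviations reported for males
      "coronoid": 5.2}

for name, s in sd.items():
    sem = s / math.sqrt(n)   # standard error of the mean = SD / sqrt(n)
    print(f"{name} ramus height: SEM = {sem:.1f}")
# -> condylar 2.3, coronoid 0.7, matching the values reported for males
```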
The means of all the parameters on their combined means within the category (20-30 years) of females were illustrated in Table 2.
TABLE 2: Combined analysis of condylar and coronoid height for females
In females, the p-value for coronoid ramus height was found to be statistically significant at 0.00, whereas the p-value for condylar ramus height was found to be statistically not significant at 0.37 (* p-value is statistically significant).

In the discriminant function analysis, condylar ramus height did not contribute towards sex determination and was consequently not utilized in deriving the formula. From Table 2, it can be noted that the coefficient is greater in magnitude for coronoid ramus height (0.101, with a standardized value reaching 1). Thus, coronoid ramus height has the greatest impact on sex determination. The discriminant analysis equation obtained from the data to determine sex is given in Table 3.
TABLE 3: Regression analysis to estimate gender from coronoid ramus height
From these values, the following formula could be derived:
Gender = -21.725 + 0.101 × coronoid ramus height.
The predicted group membership is based on gender: for males, 41 out of 50 cases were correctly classified, which is 82%. For females, 49 out of 50 cases were correctly classified, which is 98%. Overall, 90 out of 100 cases were correctly classified, which is 90%. This means that the model correctly predicted the gender for 90% of the cases in the original group, as represented in Table 4.
TABLE 4: Prediction analysis of sex estimation using condylar ramus height and coronoid ramus height
In this prediction analysis, 90% of the original group cases were correctly classified
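The discriminant function reported in Table 3 can be applied directly to a new measurement; a minimal sketch is given below. The zero cutoff and the sign convention (positive score taken as male, since males have the greater coronoid height) are our own assumptions, as the paper reports the coefficients but not the classification cutoff; the example measurements are likewise hypothetical.

```python
# Hypothetical application of the discriminant function from Table 3.
# Score = -21.725 + 0.101 * coronoid ramus height.
# Assumption: zero cutoff with positive score -> male, negative -> female;
# the paper reports the coefficients but not the cutoff or sign convention.
def predict_sex(coronoid_height: float) -> str:
    score = -21.725 + 0.101 * coronoid_height
    return "male" if score > 0 else "female"

print(predict_sex(230.0))   # hypothetical measurement -> "male"
print(predict_sex(200.0))   # hypothetical measurement -> "female"
```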
Discussion
Analyzing mandibular characteristics, such as coronoid and condylar height, can reveal gender. The anterior bony projection is referred to as the coronoid process, and the rounded protrusion at the back of the jaw is called the condyle. The morphological differences between the male and female coronoid processes and condyles give rise to these distinctive features of the mandible. Measurements of coronoid and condylar heights are used to determine gender since they are regarded as reliable markers [6].
In this study, the mandibular characteristics of both males and females were evaluated. In males, the standard deviation for condylar ramus height was 16.3, whereas for coronoid ramus height it was 5.2. Furthermore, the standard error mean for coronoid ramus height was 0.7, whereas for condylar ramus height it was 2.3. The p-value for coronoid ramus height was 0.00, indicating statistical significance, whereas for condylar ramus height it was 0.37, indicating no statistical significance. In females, the standard deviation for condylar ramus height was 1.6, and for coronoid ramus height it was 0.6. The standard error mean for coronoid ramus height was 4.5, and for condylar ramus height it was 11.8. The p-value for coronoid ramus height was 0.00, statistically significant, while for condylar ramus height it was 0.37, not significant. The predictive model based on gender correctly classified 82% of males and 98% of females, with an overall correct classification rate of 90%, indicating the model's effectiveness in predicting gender based on mandibular parameters.
In the Kaur et al. study, significant differences between males and females were found in maximum ramus width, minimum ramus width, condylar height, and coronoid height; the difference in coronoid height in particular was statistically significant [7]. Similarly, in our study, coronoid ramus height was statistically significant (p = 0.00), with a standard deviation of 5.2 and a standard error of 0.7 in males.
Determining the gender of an individual, an aspect of anthropology, relies on examining certain skeletal characteristics that show differences between males and females. There are bones in the skeleton that provide valuable information for identifying the gender of a person, helping in understanding their biological traits and aiding in the identification process [11]. The pelvis is often considered a reliable bone when it comes to gender estimation. Features like the inlet, notch, subpubic angle, and pelvic shape differ significantly between males and females [12]. The skull is another area of the body where dimorphic features can be observed. Certain characteristics, such as the eyebrow ridge, mastoid process, and mandibular ramus, show variations between males and females. Forensic experts utilize these features and often employ techniques like geometric morphometrics and advanced imaging for more accurate analysis [13].
Long bones like the femur (thigh bone) and humerus (upper arm bone) also play a role in estimating gender. These bones undergo changes in size and proportions that differ between males and females. While not as reliable as cranial indicators, long bones provide additional information for a more comprehensive assessment. In general, determining gender based on bones requires an approach that takes into account multiple skeletal components and utilizes both macroscopic and microscopic analyses [14]. The continuous advancements in technology have greatly improved the precision of these evaluations, which play a role in the field's endeavors to ensure comprehensive and dependable forensic identifications [15,16]. Forensic specialists can determine the gender of skeletal remains by measuring certain characteristics and recognizing these differences. A significant location to evaluate sexual dimorphism is the mandibular ramus, a striking feature of the mandible. Ramus height, length, and width measurements offer important information regarding one's gender [17]. When compared to females, men usually exhibit a bigger and more sturdy ramus. The overall variations in strength and size between the sexes are reflected in this dimorphism. Another area where sexual dimorphism is noticeable is the mandibular corpus, or the horizontal body of the jaw [18].
The difference between the mandibles of men and women can be partly explained by measurements of the height and length of the corpus. Men often have a larger and stronger mandibular corpus than women, which highlights the significance of this measure in estimating gender [19]. One characteristic that contributes to sexual dimorphism is the gonial angle, which is produced at the intersection of the ramus and the corpus. Studies reveal that men frequently have a larger gonial angle than women [20]. This anatomical distinction is important in determining gender, even though there might be differences across various populations. There are many sexually dimorphic traits in the symphysis, which is the point where the left and right halves of the mandible converge at the midline [21]. When estimating gender, the form of the symphysis and the existence or lack of a mental eminence are important factors. Generally speaking, females have a more pointed and graceful appearance, whereas males have a square-shaped symphysis [22,23].
Mandibular sexual dimorphism is not only an anatomical curiosity, but it also has important forensic ramifications. In cases where additional gender markers are absent from skeletal remains, the examination of mandibular features becomes essential for creating an individual's biological profile [12]. In forensic anthropology, accurate gender estimates are based on the unique physical differences between male and female mandibles. Although mandibular sexual dimorphism is a useful tool, there are a few things to be aware of [22]. Complexities in the study may arise due to population-specific variances and the possible influence of environmental factors. Furthermore, age-related alterations may have an impact on mandibular morphology, highlighting the necessity of a thorough and nuanced method for determining gender. The accuracy and consistency of mandibular parameter assessments of sexual dimorphism are being improved by technological innovations like geometric morphometrics and 3D imaging. By offering more thorough and precise evaluations, these methods aid in the continuous improvement of forensic procedures [23].
Future studies could benefit from interdisciplinary collaboration with geneticists and other forensic specialists. Gender diagnoses that are more reliable and accurate may result from the integration of statistical models and genetic markers with mandibular parameter assessments [7]. These cooperative initiatives guarantee a comprehensive approach to forensic anthropology by fusing state-of-the-art technologies with conventional morphological analysis [19]. The foundation of gender estimation in forensic anthropology is sexual dimorphism using mandibular parameters. Based only on skeletal remains, the physical distinctions between male and female mandibles reveal important information about an individual's likely gender [24].
When compared to females, males often have higher coronoid and condylar heights. Males have larger dimensions because their skeletons are larger overall. Difficulties like individual and population-specific variances need to be taken into account when interpreting these measures. The improvement of sexual dimorphism evaluations in coronoid and condylar heights is made possible by interdisciplinary collaboration and technological breakthroughs like 3D imaging [25].
As technology and collaborative research continue to progress, the accuracy and reliability of sexual dimorphism assessments in mandibular parameters will be a key factor in the success of forensic investigations, ultimately helping with the reconstruction of life histories and the identification of individuals [10].
Limitations
Evaluating sexual dimorphism in condylar and coronoid measurements using OPG has several limitations. Measurement accuracy can be affected by imaging distortions, and anatomical variability among individuals can complicate identifying consistent sexual dimorphism patterns. Operator-dependent inconsistencies in positioning and measurement techniques further impact result reliability. The 2D nature of OPG can lead to inaccuracies in representing 3D structures, making precise measurements difficult. Additionally, overlapping anatomical structures can obscure critical details, reducing the diagnostic clarity and precision needed for accurately determining sex differences. These limitations should be considered in future research.
Conclusions
There is a statistical difference between coronoid and condylar height in males and females. Gender can be estimated using the given mandibular parameter, coronoid height. In conclusion, it has been demonstrated that using OPG imaging to ascertain gender based on mandibular parameters is a practical and flexible technique in several domains, including dentistry, forensic anthropology, and clinical diagnostics. The mandible, a prominent facial bone, is a strong indicator of age and gender since it changes regularly over an individual's lifetime. The possibility of gender prediction using mandibular traits can be advantageous in forensic scenarios. Forensic anthropologists employ OPG scans to establish the age at death of unidentified skeletal remains. Interdisciplinary research collaborations and continuous advancements in imaging technology could significantly enhance the precision and reliability of age and gender estimation methodologies.
FIGURE 2: An orthopantomogram represents the condylar ramus height (Con.ht) of the mandible
TABLE 1: Combined analysis of condylar and coronoid height for males
| 2024-06-17T15:35:22.345Z | 2024-06-01T00:00:00.000 | {
"year": 2024,
"sha1": "8fa906ffdaded94c5267c499b4e2d43fe25d024d",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/original_article/pdf/262676/20240614-18864-11ecapt.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d59659db154d206b23136ba9fc6c22f0c56e1df8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254973980 | pes2o/s2orc | v3-fos-license | Local well-posedness of Kolmogorov's two-equation model of turbulence in fractional Sobolev Spaces
We study Kolmogorov's two-equation model of turbulence on the $d$-dimensional torus. First, the local existence of the solution with initial data from non-homogeneous fractional Sobolev spaces (Bessel potential spaces) $H^s$ with $s>\frac{d}{2}$ is proven using energy methods. Next, we show that solutions are unique in the class of solutions guaranteed by the local existence theorem.
Nowadays, the model is not used in engineering practice; however, the models that are used share underlying ideas with Kolmogorov's model (which manifest as similar dissipation/source terms included in the system). More information regarding turbulence models (e.g., $k$-$\varepsilon$, $k$-$\omega$) can be found in [9], [14], [15], [16]. Recently, research concerning the mathematical analysis of Kolmogorov's model has accelerated. In [1] the authors showed the existence of a weak solution to Kolmogorov's turbulence model. It relies on the introduction of a new variable $E = |v|^2/2 + b$ representing the total energy in the system. It allows for the replacement of the $b$-equation with an equation depending on $E$, for which passage to the limit with the approximate solution is plausible. In [5] the authors showed the existence of a weak solution, however, without recovering the equation for $b$; instead, they obtain an inequality. In [3] the authors showed the existence of a strong, unique solution on some small time interval given initial data from $H^2(\mathbb{T}^3)$. In [6] and [7] the authors consider a 1D system constructed based on the structure of Kolmogorov's system. They showed that solutions exist even for initial data for which the diffusion coefficient vanishes. Also, they proved the existence of a class of smooth initial data for which finite-time blow-up occurs. In [4] the authors showed that, provided some smallness condition on the initial data, global strong solutions exist.
The main purpose of this article is to generalise the statement given in [3] by lowering the regularity requirement for the initial data and by considering a $d$-dimensional spatial domain. Before we precisely state the result we need to introduce the notion of a solution to the system (1)-(5). We say that functions $(v, \omega, b)$ solve the system (1)-(4) in the weak sense if
$(\partial_t v, w) + (v \cdot \nabla v, w) + \big(\tfrac{b}{\omega} Dv, Dw\big) = 0 \quad \forall w \in H^1_{\mathrm{div}}(\mathbb{T}^d), \quad (6)$
$(\partial_t \omega, z) + (v \cdot \nabla \omega, z) + \big(\tfrac{b}{\omega} \nabla \omega, \nabla z\big) = -\alpha(\omega^2, z) \quad \forall z \in H^1(\mathbb{T}^d),$
$(\partial_t b, q) + (v \cdot \nabla b, q) + \big(\tfrac{b}{\omega} \nabla b, \nabla q\big) = -(b\omega, q) + \big(\tfrac{b}{\omega} |Dv|^2, q\big) \quad \forall q \in H^1(\mathbb{T}^d)$
for a.a. $t \in (0, T)$. Now we can formulate the main theorem.
2 Preliminaries
Let us start by recalling the definitions of the function spaces set on $\mathbb{T}^d = [0,1)^d$.
Definition 1 (see Remark 3.1.5 in [22] or Sections 3.2 and 3.5 in [27]). Let $\{u_n\}_{n=1}^\infty \subset C^\infty(\mathbb{T}^d)$, $u \in C^\infty(\mathbb{T}^d)$. We say that $u_n \to u$ in $C^\infty(\mathbb{T}^d)$ if $\partial^\alpha u_n \to \partial^\alpha u$ uniformly for all multi-indices $\alpha \in \mathbb{N}_0^d$. By $\mathcal{D}'(\mathbb{T}^d)$ we denote the continuous linear functionals on $C^\infty(\mathbb{T}^d)$.
Definition 2 (see Definition 3.1.6 in [22]). Let $\mathcal{S}(\mathbb{Z}^d)$ denote the space of rapidly decaying functions $\mathbb{Z}^d \to \mathbb{C}$. That is, $\varphi \in \mathcal{S}(\mathbb{Z}^d)$ if for any $k < \infty$ there exists a constant $C_{\varphi,k}$ such that $|\varphi(\xi)| \le C_{\varphi,k}\,(1+|\xi|^2)^{-k/2}$.
The topology on $\mathcal{S}(\mathbb{Z}^d)$ is given by the seminorms $p_k$, where $k \in \mathbb{N}_0$ and $p_k(\varphi) := \sup_{\xi \in \mathbb{Z}^d} (1+|\xi|^2)^{k/2} |\varphi(\xi)|$. Then, a sequence $\{\varphi_n\}_{n=1}^\infty \subset \mathcal{S}(\mathbb{Z}^d)$ converges to the function $\varphi \in \mathcal{S}(\mathbb{Z}^d)$ iff $p_k(\varphi_n - \varphi) \to 0$ as $n \to \infty$ for all $k \in \mathbb{N}_0$.
By $\mathcal{S}'(\mathbb{Z}^d)$ we denote the continuous linear functionals on $\mathcal{S}(\mathbb{Z}^d)$.
Definition 3 (see Definition 3.1.8 in [22]). The toroidal Fourier transform $\mathcal{F}_{\mathbb{T}^d} = (f \mapsto \hat{f}): C^\infty(\mathbb{T}^d) \to \mathcal{S}(\mathbb{Z}^d)$ is given by $\hat{f}(\xi) = \int_{\mathbb{T}^d} f(x)\, e^{-2\pi i x \cdot \xi}\, dx$. The inverse toroidal Fourier transform $\mathcal{F}^{-1}_{\mathbb{T}^d} = (h \mapsto \check{h}): \mathcal{S}(\mathbb{Z}^d) \to C^\infty(\mathbb{T}^d)$ is given by $\check{h}(x) = \sum_{\xi \in \mathbb{Z}^d} h(\xi)\, e^{2\pi i x \cdot \xi}$.

Definition 4 (see Definition 3.1.27 in [22]). The Fourier transform extends to a mapping $\mathcal{F}_{\mathbb{T}^d}: \mathcal{D}'(\mathbb{T}^d) \to \mathcal{S}'(\mathbb{Z}^d)$ by the formula
$\langle \hat{u}, \varphi \rangle := \langle u, \iota \circ \check{\varphi} \rangle, \quad (20)$
where $u \in \mathcal{D}'(\mathbb{T}^d)$, $\varphi \in \mathcal{S}(\mathbb{Z}^d)$ and $\iota$ is defined by $(\iota \circ \psi)(x) = \psi(-x)$.
Definition 5. Let $s \in \mathbb{C}$. The Bessel potential $J^s$ on the torus is defined by $\widehat{J^s f}(\xi) = (1 + 4\pi^2 |\xi|^2)^{s/2}\, \hat{f}(\xi)$, where $\hat{f}$ denotes the Fourier transform of $f$.
Also, based on the orthogonality of $\{e^{2\pi i k x}\}_{k \in \mathbb{Z}^d}$ in $L^2(\mathbb{T}^d)$, the following characterisation is valid: $\|f\|_{H^s}^2 = \sum_{k \in \mathbb{Z}^d} (1 + 4\pi^2 |k|^2)^s\, |\hat{f}(k)|^2$. Moreover, we introduce the notation $H^s_{\mathrm{div}}(\mathbb{T}^d) = \{u \in [H^s(\mathbb{T}^d)]^d : \operatorname{div} u = 0\}$. To simplify further expressions we introduce the notation $(f, g)_{H^s} = (J^s f, J^s g)_{L^2(\mathbb{T}^d)}$.
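For intuition, the Bessel potential and the $H^s$ norm are plain Fourier multipliers and can be sketched numerically; the snippet below is an illustrative one-dimensional discretisation, not part of the proof, and the grid size, smoothness index and test function are our own choices.

```python
# Illustrative sketch: H^s norm on the torus T^1 = [0,1) via the FFT.
# J^s acts as the Fourier multiplier (1 + 4*pi^2 |k|^2)^(s/2).
import numpy as np

N, s = 256, 1.5                            # grid size and smoothness (assumed)
x = np.arange(N) / N
f = np.sin(2 * np.pi * x) + 0.3 * np.cos(6 * np.pi * x)  # periodic test function

fhat = np.fft.fft(f) / N                   # discrete Fourier coefficients
k = np.fft.fftfreq(N, d=1.0 / N)           # integer frequencies on Z
weight = (1 + 4 * np.pi**2 * k**2) ** s    # Bessel weights (1 + 4pi^2 |k|^2)^s
hs_norm = np.sqrt(np.sum(weight * np.abs(fhat) ** 2))
print(f"approximate H^{s} norm of f: {hs_norm:.4f}")
```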
Now, we will recall some known facts regarding fractional Sobolev spaces on the torus.
Lemma 1 (see [17], [25], [10], [26]). Let $s \ge 0$ and $p \in (1,\infty)$, $p_1, p_2, q_1, q_2 \in (1,\infty]$ such that $\frac{1}{p} = \frac{1}{p_1} + \frac{1}{p_2} = \frac{1}{q_1} + \frac{1}{q_2}$. Let $f, g \in C^\infty(\mathbb{T}^d)$. Then there exists $C = C(s, d, p, p_1, p_2, q_1, q_2)$ independent of $f$ and $g$ such that the following fractional Leibniz inequality holds:
$\|J^s(fg)\|_{L^p} \le C\big(\|f\|_{L^{p_1}} \|J^s g\|_{L^{p_2}} + \|J^s f\|_{L^{q_1}} \|g\|_{L^{q_2}}\big).$

Lemma 2 (see Chapter 2.8.3 in [29]). Let $p \in (1,\infty)$, $s > d/p$ and $f, g \in H^s_p(\mathbb{T}^d)$. Then $fg \in H^s_p(\mathbb{T}^d)$ and there exists a constant $C > 0$, independent of $f$ and $g$, such that
$\|fg\|_{H^s_p} \le C\, \|f\|_{H^s_p} \|g\|_{H^s_p}.$

Lemma 3 (see Lemma 2.5(ii) in [24]). Let $s > \frac{d}{2}$ and $f \in H^s(\mathbb{T}^d)$. Then the function $f$ is continuous and there exists a constant $C = C(s, d)$ independent of $f$ such that
$\|f\|_{L^\infty} \le C\, \|f\|_{H^s}.$

Lemma 4 (see Theorem 5.5 in [19] or Section 3.1 in [18]). Let $s > \frac{d}{2}$. Assume that $G$ is a smooth function on $\mathbb{R}$ with $G(0) = 0$. Then there exists $C$ independent of $f$ and $G$ such that the corresponding composition estimate for $\|G(f)\|_{H^s}$ holds.

Lemma 5. Let $s > \frac{d}{2}$. Assume that $G$ is a smooth function on $\mathbb{R}$ with $G'(0) = 0$. Then there exists $C$ independent of $u$, $v$ and $G$ such that the corresponding Lipschitz-type estimate for $\|G(u) - G(v)\|_{H^s}$ holds.

Proof. The lemma is a direct consequence of Lemma 4. Proceeding as in [13], Corollary 2.66, we see that
$G(u) - G(v) = (u - v) \int_0^1 G'(v + \tau(u - v))\, d\tau,$
which can be understood classically since $u$ and $v$ are both continuous functions (see Lemma 3). Applying the $H^s(\mathbb{T}^d)$ norm to both sides and using Lemma 2, then changing the order of the norm and the integral and, as $G'(0) = 0$, applying Lemma 4 to the term under the integral, we may estimate the right-hand side using the triangle inequality; by using Lemma 3 we obtain the desired inequality.
By simple calculations we get the statement of the lemma.

Lemma 7 (see Lemma 2.5(i) in [24]). Let $p \in (1,\infty)$ and $\mu, \nu \in \mathbb{R}$ such that $\nu \le \mu$. Then $H^\mu_p(\mathbb{T}^d) \hookrightarrow H^\nu_p(\mathbb{T}^d)$.

Lemma 8 (see Lemma 2.5(iii) in [24]). Let $p, q \in (1,\infty)$ and $\mu, \nu \in \mathbb{R}$ such that $\nu \le \mu$ and $\mu - \frac{d}{p} \ge \nu - \frac{d}{q}$. Then $H^\mu_p(\mathbb{T}^d) \hookrightarrow H^\nu_q(\mathbb{T}^d)$.

Lemma 9 (see [10], Appendix I). Suppose that $s > 0$, $p, p_2, p_4 \in (1,\infty)$ and $p_1, p_3 \in (1,\infty]$ such that $\frac{1}{p} = \frac{1}{p_1} + \frac{1}{p_2} = \frac{1}{p_3} + \frac{1}{p_4}$. Let $f, g \in C^\infty(\mathbb{T}^d)$; then there exists a constant $C$ independent of $f$ and $g$ such that
$\|[J^s, f]g\|_{L^p} \le C\big(\|\nabla f\|_{L^{p_1}} \|J^{s-1} g\|_{L^{p_2}} + \|J^s f\|_{L^{p_3}} \|g\|_{L^{p_4}}\big),$
where $[J^s, f]g := J^s(fg) - f J^s g$.
where $\varphi$ is a smooth, compactly supported function defined on $\mathbb{R}^d$ such that $\varphi|_{[0,1)^d} = 1$, and where $\lfloor x \rfloor = (\lfloor x_1 \rfloor, \dots, \lfloor x_d \rfloor)$ ($\lfloor \cdot \rfloor$ denoting the floor function). Then it is clear that the corresponding equivalence holds for integer-order spaces; from complex interpolation (see, e.g., Theorem 2.6 in [28]) we can deduce the analogous inequality for the fractional spaces.
Lastly, by considering Lemma 1 for the function $\varphi \tilde{f}$ and using (43), we get the needed statements. The proof of Lemma 4 is more complicated and we shall give more details.
Proof. Let $\psi$, $\varphi$ be smooth, compactly supported functions defined on $\mathbb{R}^d$ such that $\psi|_{[0,1]^d} = 1$ and $\varphi|_{\operatorname{supp} \psi} = 1$. Also let us observe that $\widetilde{G(f)} = G(\tilde{f})$. Now, using Lemma 2 and the fact that $G(0) = 0$, we can write an estimate whose both sides can easily be bounded using (43). We may then use the interpolation inequality, and based on Lemma 6 we obtain the claim, which follows from Lemmas 3 and 6.
Proof of Theorem 1
The proof will be divided into several steps to better present and simplify reasoning.
Definitions of auxiliary functions
Given initial data $\omega_0$, $b_0$ we define $b_{\min}$, $\omega_{\min}$, $\omega_{\max}$ and the corresponding auxiliary functions. To define an approximate problem we have to introduce a few auxiliary cut-off functions. For fixed $t > 0$ we denote by $\Phi_t = \Phi_t(x)$ a smooth, non-decreasing function whose defining formula involves $b^t_{\min}$, which is defined by (55). We assume that the function $\Phi_t$ also satisfies
$|\Phi_t^{(k)}(x)| \le c_0\, (b^t_{\min})^{1-k} \quad \forall k \in \{1, \dots, [s]+1\}, \quad (57)$
where $c_0$ is a constant independent of $x$ and $t$ (see the appendix of [3] for details). We also need a smooth, non-decreasing function $\Psi_t$, which we assume additionally satisfies
$|\Psi_t^{(k)}(x)| \le c_0\, (\omega^t_{\min})^{1-k} \quad \forall k \in \{1, \dots, [s]+1\}$
for some constant $c_0$ (the construction of $\Psi_t$ is similar to the argument presented in the appendix of [3]).
Approximated system
To obtain the approximate system we will follow the procedure used in [11], [12]. Let us define the operator $P_n$ as the truncation of the Fourier series,
$P_n f(x) = \sum_{|k| < n} \hat{f}(k)\, e^{2\pi i k x}.$
In later parts we will require some properties of the operator $P_n$. First, it is obvious that $P_n$ is a projection bounded on $L^2(\mathbb{T}^d)$, which follows from the orthogonality in $L^2(\mathbb{T}^d)$ of the functions $\{e^{2\pi i k x}\}_{k \in \mathbb{Z}^d}$. Let us define $C(n) = \sum_{|k| < n} 1$ and observe that for $m \in \mathbb{N}$ and $1 \le p \le \infty$ we have bounds on $P_n$ and its derivatives in $L^p$ in terms of $C(n)$; in particular, we obtain the estimate (63). The obtained result is not surprising and could be justified based on the equivalence of norms in finite-dimensional spaces. Also, we can easily check that differentiation and $P_n$ commute when sequentially applied to a function $f$; thus it is clear that the following hold:
$P_n \operatorname{div} f = \operatorname{div} P_n f, \quad P_n \nabla f = \nabla P_n f, \quad P_n \Delta f = \Delta P_n f.$
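As an aside for readers who want to experiment, the projection $P_n$ is exactly a sharp Fourier truncation and is straightforward to realise with the FFT; the snippet below is an illustrative one-dimensional discretisation (the grid size and test function are our own choices, not part of the proof).

```python
# Illustrative sketch of the Galerkin projection P_n on T^1:
# keep Fourier modes with |k| < n, zero out the rest.
import numpy as np

def P_n(f_values: np.ndarray, n: int) -> np.ndarray:
    """Project grid samples of a periodic function onto span{e^{2 pi i k x} : |k| < n}."""
    fhat = np.fft.fft(f_values)
    k = np.fft.fftfreq(len(f_values), d=1.0 / len(f_values))  # integer modes
    fhat[np.abs(k) >= n] = 0.0           # sharp spectral cut-off
    return np.fft.ifft(fhat).real        # real part: the input was real-valued

x = np.arange(128) / 128
f = np.sign(np.sin(2 * np.pi * x))       # a rough (discontinuous) test function
print(np.linalg.norm(f - P_n(f, 8)) / np.sqrt(128))  # RMS truncation error shrinks as n grows
```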
Before we define the ODE system we aim to solve (for $\alpha^n_{k,j}$, $\beta^n_k$, $\gamma^n_k$), we introduce the following functions:
$v^n_j(x,t) = \sum_{|k|<n} \alpha^n_{k,j}(t)\, e^{2\pi i k x}, \quad \omega_n(x,t) = \sum_{|k|<n} \beta^n_k(t)\, e^{2\pi i k x}, \quad b_n(x,t) = \sum_{|k|<n} \gamma^n_k(t)\, e^{2\pi i k x}.$
Thanks to the orthonormality of the basis $\{e^{2\pi i k x}\}_{k \in \mathbb{Z}^d}$ in $L^2(\mathbb{T}^d)$ we have
$\|v_n(t)\|_2^2 = \sum_{|k|<n} |\alpha^n_{k,j}(t)|^2, \quad \|\omega_n(t)\|_2^2 = \sum_{|k|<n} |\beta^n_k(t)|^2, \quad \|b_n(t)\|_2^2 = \sum_{|k|<n} |\gamma^n_k(t)|^2. \quad (67)$
Additionally, we define the function $\mu_n$ using the truncation functions $\Phi_t$ and $\Psi_t$ (definition (68)); the modification was introduced to guarantee positive signs of the diffusive terms, and the presence of $\operatorname{Re}(\cdot)$ in the definition of $\mu_n$ allows us to deal with possibly complex-valued solutions. Now we consider the following system of equations
$\partial_t \alpha^n_{k,j} = \big(\big({-}P_n(v_n \cdot \nabla v_n) + \operatorname{div}(P_n(\mu_n D(v_n))) - \nabla p_n\big)_j,\ e^{-2\pi i k \cdot}\big), \quad (69)$
$\partial_t \beta^n_k = \big({-}P_n(v_n \cdot \nabla \omega_n) + \operatorname{div}(P_n(\mu_n \nabla \omega_n)) - \alpha P_n(\omega_n^2),\ e^{-2\pi i k \cdot}\big), \quad (70)$
$\partial_t \gamma^n_k = \big({-}P_n(v_n \cdot \nabla b_n) + \operatorname{div}(P_n(\mu_n \nabla b_n)) - P_n(b_n \omega_n) + P_n(\mu_n |D v_n|^2),\ e^{-2\pi i k \cdot}\big), \quad (71)$
supplied with the initial conditions
$\alpha^n_{k,j}(0) = (v_{0,j}, e^{-2\pi i k \cdot}), \quad \beta^n_k(0) = (\omega_0, e^{-2\pi i k \cdot}), \quad \gamma^n_k(0) = (b_0, e^{-2\pi i k \cdot}), \quad (72)$
and the equation from which the pressure is calculated,
$\Delta p_n = \operatorname{div}\big[P_n(v_n \cdot \nabla v_n) - \operatorname{div}(P_n(\mu_n D(v_n)))\big]. \quad (73)$
The introduced system of equations can be represented in the form
$\frac{d}{dt}\big((\alpha^n_{k,j}),\ (\beta^n_k),\ (\gamma^n_k)\big) = F\big((\alpha^n_{k,j}),\ (\beta^n_k),\ (\gamma^n_k),\ t\big), \quad k = 1, \dots, n,\ j = 1, \dots, d.$
To show the existence of a solution of system (69)-(73) we will show that the right-hand side is locally Lipschitz continuous with respect to $\alpha^n_{k,j}$, $\beta^n_k$, $\gamma^n_k$; so, basically, we need to estimate
$\big|F(\alpha^{n,2}_{k,j}, \beta^{n,2}_k, \gamma^{n,2}_k, t) - F(\alpha^{n,1}_{k,j}, \beta^{n,1}_k, \gamma^{n,1}_k, t)\big|.$
To this end, we introduce auxiliary difference functions (labelled (76)-(77)); additionally, the functions $p^m_n$ are calculated with the help of system (73), with the natural substitution of functions on the right-hand side: $v_{n,j} \to v^m_{n,j}$, $\omega_n \to \omega^m_n$, $b_n \to b^m_n$. We start checking local Lipschitz continuity by considering the term on the right-hand side of (69) involving the pressure, estimate (78). From the basic theory of elliptic partial differential equations, a solution of system (73) exists and the elliptic estimate (79) holds. Let us analyse the terms of the right-hand side of (79) separately. First, by using the triangle inequality, Hölder's inequality and (63), we bound the first term, obtaining (80). Going back to the second term of the right-hand side of (79), again by (63), and then the triangle inequality, Hölder's inequality and once more (63), we obtain (83)-(84). Based on (77), (56), (58) and (63), and using the fact that the functions $\Psi_t$, $\Phi_t$ are Lipschitz continuous, we obtain (86). Thus, by plugging (86) and (84) into (83), we get (87), and by using (87), (80), (79) with the help of (76), (67), we get the required bound. Verification of the Lipschitz condition for the other terms is analogous to the conducted calculations; we give one more estimate, which follows from Hölder's inequality together with (84), (86), (63) and (76). Because all the right-hand-side terms in (69)-(71) are locally Lipschitz continuous, the existence of a unique solution for some $T_n > 0$ follows from the Cauchy-Lipschitz theorem. Now let us multiply equations (69)-(71) by $e^{2\pi i k x}$ and sum over $|k| < n$:
$\partial_t v_n + P_n(v_n \cdot \nabla v_n) - \operatorname{div}(P_n(\mu_n D v_n)) + \nabla p_n = 0, \quad (94)$
$\partial_t \omega_n + P_n(v_n \cdot \nabla \omega_n) - \operatorname{div}(P_n(\mu_n \nabla \omega_n)) = -\alpha P_n(\omega_n^2), \quad (95)$
$\partial_t b_n + P_n(v_n \cdot \nabla b_n) - \operatorname{div}(P_n(\mu_n \nabla b_n)) = -P_n(b_n \omega_n) + P_n(\mu_n |D v_n|^2). \quad (96)$
Now, by applying the divergence to equation (94) and by using (73), we get $\operatorname{div} v_n = 0$.
Also, by taking imaginary parts of the system (94)-(96) and having in mind that the initial data are real-valued, it is easy to check that the solutions $(v_n, \omega_n, b_n)$ are also real-valued. We will provide more details for $v_n$. Let us apply $\operatorname{Im}$ to equation (94), multiply the result by $\operatorname{Im} v_n$ and integrate over $\mathbb{T}^d$. The last term on the left-hand side is zero due to (98). Moreover, using the bounds (68), (56), (58), then Young's inequality, the Grönwall lemma and the fact that the initial data are real-valued, we conclude that the velocity is real-valued. A similar approach can be applied to $\omega_n$ and $b_n$. Thus, the system (94)-(96) can be rewritten in the following way:
$\partial_t v_n + P_n(v_n \cdot \nabla v_n) - \operatorname{div}(P_n(\mu_n D v_n)) + \nabla p_n = 0, \quad (104)$
$\partial_t \omega_n + P_n(v_n \cdot \nabla \omega_n) - \operatorname{div}(P_n(\mu_n \nabla \omega_n)) = -\alpha P_n(\omega_n^2), \quad (105)$
$\partial_t b_n + P_n(v_n \cdot \nabla b_n) - \operatorname{div}(P_n(\mu_n \nabla b_n)) = -P_n(b_n \omega_n) + P_n(\mu_n |D v_n|^2), \quad (106)$
$v_n(0,x) = P_n v_0(x), \quad \omega_n(0,x) = P_n \omega_0(x), \quad b_n(0,x) = P_n b_0(x).$
From equation (66) it is clear that the functions $(v_n, \omega_n, b_n)$ are smooth with respect to the spatial coordinates. By employing a standard iterative approach from the theory of ODEs (a $C^k$ right-hand side implies a $C^{k+1}$ solution) applied to the system (69)-(71), it is easy to conclude that $(v_n, \omega_n, b_n)$ are also smooth with respect to time.
Energy estimates
Before we start deriving energy estimates we need to establish some relations between $J^s$ and the derivative. Using the properties of the Fourier transform acting on a derivative, one checks that $J^s$ commutes with differentiation; these properties are labelled (111) and (112). In the next part we will be calculating various inner products in $L^2(\mathbb{T}^d)$; to simplify the reasoning it is beneficial to observe that if a function $f$ is real-valued, then so is $J^s f$. Before we proceed with the energy estimates we need to introduce notation regarding constants dependent on time: positive time-dependent constants of the form $C(\eta_1, \dots, \eta_m, t)$ used in the later parts of the proof can be expressed by (113) for some $\gamma \in \mathbb{R}$. Let us apply the $J^s$ operator to equation (104), multiply the result by $J^s v_n$, and integrate over $\mathbb{T}^d$; from this we obtain the identity (115). Using properties (111), (112), integration by parts, and the fact that $\operatorname{div} v_n = 0$, and then Hölder's inequality and Lemma 2, we estimate the convective term. Using properties (111) and integration by parts we get
$(J^s \operatorname{div}(\mu_n D v_n), J^s v_n) = -(J^s(\mu_n D v_n), \nabla(J^s v_n)).$
Now we rewrite the expression in a way that will enable us to use the commutator estimate, with $[J^s, \mu_n] D v_n := J^s(\mu_n D v_n) - \mu_n J^s D v_n$. We want to estimate the above expression from below; using Hölder's inequality and Lemmas 9 and 6, we get (116). For now we leave the inequality in this form. By definitions (55), (56), (58) and (108) we have a lower bound on $\mu_n$. Rewriting the right-hand side, performing integration by parts, and using the fact that $\operatorname{div} v_n = 0$ in combination with (111) and (120), and then combining (118), (119) and the resulting inequality, we get (123). Using estimates (116) and (123) in (115), and expressing $\|\nabla v_n\|^2_{H^s}$ according to Lemma 6, we can rewrite the inequality in the form (126).

Now we will proceed with acquiring the estimate for $\omega_n$. Let us apply the $J^s$ operator to equation (105) and take the inner product with $J^s \omega_n$, obtaining (127). Proceeding as in (123), we obtain
$(\mu_n \nabla \omega_n, \nabla \omega_n)_{H^s} = (\mu_n J^s \nabla \omega_n, J^s \nabla \omega_n) + (J^s(\mu_n \nabla \omega_n) - \mu_n J^s(\nabla \omega_n), J^s \nabla \omega_n). \quad (128)$
Thus, using Hölder's inequality and Lemma 9, we get (129). Now, using Hölder's inequality in combination with Lemma 2, we treat the remaining nonlinearities; applying inequalities (129), (130) and (131) to (127) and using Lemma 6, we can rewrite the result in the more suitable form (133).

Now estimates for $b_n$ will be provided. As before, let us apply the $J^s$ operator to equation (106) and take the inner product with $J^s b_n$, obtaining (134). Proceeding as before, with the use of Lemmas 9 and 2, we get (135)-(137); the last term on the right-hand side of (134) is estimated using Lemmas 2 and 1, giving (138). Finally, by using estimates (135), (136), (137), (138) in (134) and applying Lemma 6, we get (139).

By summing inequalities (126), (133) and (139) we get (140), where $\|v_n, \omega_n, b_n\|^2_{H^s}$ denotes the sum of the corresponding squared norms and the constant $C(\omega_{\min}, t)$ is as in (113). By using Lemma 3 we get a further bound. To simplify expressions we introduce polynomial notation: for $k \in \mathbb{R}_+$ we define $P_k(t)$ by (143); thus we can write (144). Now, using (144) and (143) in (140), we arrive at an inequality whose right-hand side consists of terms of the form
$C\, \|\mu_n\|_{H^s}\, (\|\nabla v_n\|_\infty + \|\nabla \omega_n\|_\infty + \|\nabla b_n\|_\infty)\, \|v_n, \omega_n, b_n\|_{H^{s+1}},$
together with similar products involving $P_1(t)$, $P_2(t)$, $P_3(t)$, $\|\mu_n\|_{H^s}$, $\|\nabla v_n\|_\infty$ and $\|v_n\|_{H^{s+1}}$.
By using the properties of $P_k$ we can simplify this bound further. Now, we will continue the proof with the assumption that $s \in \big(\frac{d}{2}, d\big)$.
Now, let us apply Young's inequality to the right-hand side with suitable coefficients, and let us introduce $\beta(s) > 1$ accordingly. Thus, by definition (143), we get the differential inequality
$\frac{d}{dt}\|v_n, \omega_n, b_n\|^2_{H^s} \le C(b_{\min}, \omega_{\min}, s, t)\, \big(1 + \|v_n, \omega_n, b_n\|^2_{H^s}\big)^\beta.$
By integrating from $0$ to $t$ we get an integral inequality whose right-hand side involves $\int_0^t C(b_{\min}, \omega_{\min}, \omega_{\max}, s, \tau)\, d\tau$. After some manipulations, we obtain a uniform estimate for the approximate solutions provided the denominator on the right-hand side is positive. Thus, let us define the existence time $T$ so that the corresponding equality, again involving $\int_0^T C(b_{\min}, \omega_{\min}, \omega_{\max}, s, \tau)\, d\tau$, holds.
Using (160) in (159) we derive the estimate (161); additionally, (157) and (161) imply (162). To show the continuity of the solution, an estimate in the norm $L^2(0,T; H^{s-1}(\mathbb{T}^d))$ for the time derivative of the solution is required. We will derive the estimate only for $b_n$, as the calculations for the other variables are similar. Let us apply $J^{s-1}$ to equation (106) and calculate the inner product with the corresponding test function; using the Hölder and Young inequalities, and then Lemmas 6 and 7, the straightforward terms follow easily. The most troublesome term to estimate is the last one on the right-hand side, so let us concentrate on it first. Let us choose a suitable $\varepsilon \ge 0$. Based on Lemma 1, applied also to the last term on the right-hand side, and observing that Lemma 8 gives the required embedding, we use the above estimates together with Lemmas 3 and 6, and then Lemma 7; using the result in (165) together with Lemmas 2 and 6 yields the desired bound. The right-hand side of the resulting expression is in $L^1(0,T)$ (due to (161), (162) and (150)), thus the estimate is proven. To conclude, the time derivative estimates are as follows:
$\|\partial_t v_n, \partial_t \omega_n, \partial_t b_n\|_{L^2(0,T; H^{s-1}(\mathbb{T}^d))} \le C(b_{\min}, \omega_{\min}, \omega_{\max}, \|v_0, \omega_0, b_0\|_{H^s}, s, T) < \infty.$
3.4 Passage to the limit in the approximate system, regularity of the solution, uniqueness

Using estimates (161), (162) and (174) we can conclude the existence of a subsequence $\{n_k\}$ (relabelled as $n$) converging weakly in the corresponding spaces. Additionally, from the Aubin-Lions lemma it follows that
$v_n, \omega_n, b_n \to v, \omega, b$ strongly in $L^2(0,T; H^{s'+1}(\mathbb{T}^d))$,
$v_n, \omega_n, b_n \to v, \omega, b$ strongly in $C(0,T; H^{s'}(\mathbb{T}^d))$
for all $s' < s$. It is easy to see that the analogous convergence holds for $\mu_n$ for all $\frac{d}{2} < s' < s$: indeed, with the help of the triangle inequality and Lemma 2 this reduces to an estimate which holds based on (180), (56), (58) and Lemma 5.
Having the convergence results, we may pass to the limit in (104)-(106). It is easy to see that $v, \omega, b$ satisfy
$(\partial_t v, w) + (v \cdot \nabla v, w) + (\mu D v, D w) = 0 \quad \forall w \in H^1_{\mathrm{div}}(\mathbb{T}^d), \quad (184)$
$(\partial_t \omega, z) + (v \cdot \nabla \omega, z) + (\mu \nabla \omega, \nabla z) = -\alpha(\omega^2, z) \quad \forall z \in H^1(\mathbb{T}^d),$
$(\partial_t b, q) + (v \cdot \nabla b, q) + (\mu \nabla b, \nabla q) = -(b\omega, q) + (\mu |Dv|^2, q) \quad \forall q \in H^1(\mathbb{T}^d)$
for a.a. $t \in (0,T)$. We will provide more details for the most troublesome term. First, we wish to establish the convergence (187) of the quadratic dissipation term, where $\psi \in L^2(0,T; H^1(\mathbb{T}^d))$. Let us first focus on the first term of the right-hand side,
$\Big| \int_0^T (\mu_n D(v_n - v)\, D(v_n + v), \psi)\, dt \Big|,$
where $s' \in (\frac{d}{2}, s)$. With the help of Hölder's inequality and (179), (180), (181), and then Lemma 3, we bound this term, and thus (187) is proven. Following the same procedure as presented in [3] or [1], one can show that
$\omega^t_{\min} \le \omega(t,x) \le \omega^t_{\max}$ for a.e. $(x,t) \in \mathbb{T}^d \times [0,T]$, and $b^t_{\min} \le b$ for a.e. $(x,t) \in \mathbb{T}^d \times [0,T]$.
The uniqueness of the obtained solution easily follows from Theorem 2.
Proof of Theorem 2
First, we would like to conclude that
$\omega^t_{\min} \le \omega_i(t,x) \le \omega^t_{\max}, \quad b^t_{\min} \le b_i(t,x) \quad$ for a.e. $(x,t) \in \mathbb{T}^d \times [0,T]$, $i = 1, 2, \quad (197)$
in order to have proper bounds on the viscosity term $\mu_i = \frac{b_i}{\omega_i}$. We see that $b_i$ and $\omega_i$ are continuous functions based on Lemma 3. Suppose that $\omega^t_{\min} > \omega_i(t^*, x^*)$ for some $(t^*, x^*) \in [0,T] \times \mathbb{T}^d$ such that $\omega_i(t,x) > C_\omega > 0$ for all $(t,x) \in [0,t^*] \times \mathbb{T}^d$. Then, by following the procedure presented in [3] (starting from Eq. 96), we obtain a contradiction. The same reasoning can be conducted for $b_i$. To prove the upper bound on $\omega_i$ we proceed as in [3] (starting from Eq. 98). Let us denote $\delta_v = v_2 - v_1$, $\delta_\omega = \omega_2 - \omega_1$, $\delta_b = b_2 - b_1$; the differences $(\delta_v, \delta_\omega, \delta_b)$ satisfy the system of equations (198)-(200). Now we test equations (198)-(200) with $\delta_v$, $\delta_\omega$, $\delta_b$ to obtain an estimate of the differences in the $L^2$ norm. We will show the procedure for $\delta_b$; the rest are analogous. Using (197) and Hölder's inequality, then (197) and Lemma 3, and the information about the regularity of the solutions, we can write an estimate with a function $G_b \in L^1(0,T)$. Analogously there exist functions $G_\omega, G_v \in L^1(0,T)$ such that the corresponding estimates hold. By summing (206), (205) and (204) we get an inequality with $G = G_v + G_\omega + G_b \in L^1(0,T)$. From the Grönwall inequality it follows that $\delta_v(t,x) = 0$, $\delta_\omega(t,x) = 0$, $\delta_b(t,x) = 0$ for a.a. $(x,t) \in \mathbb{T}^d \times [0,T]$. This concludes the proof of uniqueness.
Appendix I
To prove Lemma 9 we will utilise some results from pseudo-differential operator theory. In the following definitions, we introduce the needed apparatus.
In the proof of Lemma 9 we will need some results from the theory of interpolation. First, let us introduce the needed definitions.
Definition 9. We define the complex strip in the following way: $S = \{z \in \mathbb{C} : 0 < \operatorname{Im} z < 1\}$.
Definition 10 (see [23]). A continuous function $F: \overline{S} \to \mathbb{C}$ which is analytic in $S$ is said to be of admissible growth if there is $0 \le \alpha < \pi$ such that
$\sup_{z \in \overline{S}} \frac{\log |F(z)|}{e^{\alpha |\operatorname{Im} z|}} < \infty. \quad (218)$

Definition 11 (see [23]). Let $(\Omega, \Sigma, \mu)$ be a measure space and let $X_1, \dots, X_m$ be linear spaces. Let us assume that for every $z \in \overline{S}$ there is a multilinear operator $T_z: X_1 \times \dots \times X_m \to L^0(\mu)$, where $L^0(\mu)$ denotes the space of all equivalence classes of complex-valued measurable functions on $\Omega$ with the topology of convergence in measure on $\mu$-finite sets. The family $\{T_z\}_{z \in \overline{S}}$ is said to be analytic if for any $(x_1, \dots, x_m) \in X_1 \times \dots \times X_m$ and for almost every $\omega \in \Omega$ the function
$z \mapsto T_z(x_1, \dots, x_m)(\omega) \quad (219)$
is analytic in $S$ and continuous in $\overline{S}$. Additionally, if for $j = 0, 1$ the function
$\mathbb{R} \times \Omega \ni (t, \omega) \mapsto T_{j+it}(x_1, \dots, x_m)(\omega) \quad (220)$
is $(\mathcal{L} \times \Sigma)$-measurable for every $(x_1, \dots, x_m) \in X_1 \times \dots \times X_m$, and for almost every $\omega \in \Omega$ the function (219) is of admissible growth, then the family $\{T_z\}_{z \in \overline{S}}$ is said to be an admissible analytic family.
The theorem we are about to cite is more general than stated below. The statement has been adapted to better fit the case at hand.
Theorem 4 (see Theorem 4.1 in [23]). For $1 \le k \le m$, fix $1 < q_0, q_1, q_{0k}, q_{1k} < \infty$ and for $0 < \theta < 1$ define $q$, $q_k$ by setting $\frac{1}{q} = \frac{1-\theta}{q_0} + \frac{\theta}{q_1}$ and $\frac{1}{q_k} = \frac{1-\theta}{q_{0k}} + \frac{\theta}{q_{1k}}$. Assume that $X_k$ is a dense linear subspace of $L^{q_{0k}}(\mathbb{T}^d) \cap L^{q_{1k}}(\mathbb{T}^d)$ and that $\{T_z\}_{z \in \overline{S}}$ is an admissible analytic family of multilinear operators. Suppose that for every $(h_1, \dots, h_m) \in X_1 \times \dots \times X_m$, $t \in \mathbb{R}$ and $j = 0, 1$, the corresponding endpoint bounds hold with constants $K_j(t)$, where the $K_j$ are Lebesgue measurable functions such that $K_j \in L^1(P_j(\theta, \cdot)\, dt)$ for all $\theta \in (0,1)$, where
$P_j(x+iy, t) = \frac{e^{-\pi(t-y)} \sin \pi x}{\sin^2 \pi x + \big(\cos \pi x - (-1)^j e^{-\pi(t-y)}\big)^2}, \quad x + iy \in S.$
Then for all $(f_1, \dots, f_m) \in X_1 \times \dots \times X_m$, $0 < \theta < 1$, and $s \in \mathbb{R}$, the interpolated estimate holds with a constant $K_\theta(s)$, where $\log K_\theta(s)$ is expressed through the kernels $P_0$, $P_1$ and the functions $\log K_0$, $\log K_1$.

Remark 3. For fixed $x \in (0,1)$ and $y \in \mathbb{R}$ there exists a constant $C_{x,y} > 0$ such that
$|P_j(x+iy, t)| \le C_{x,y}\, e^{-\pi|t|} \quad \forall t \in \mathbb{R}.$
Proof of Lemma 9
The presented proof follows the original proof in the work of Kato and Ponce [10]. Some of the more calculation-focused lemmas have been moved to Appendix II to provide a clearer argument. Additionally, in the presented proof the factor $4\pi^2$ will be omitted from the definition of $J^s$ to slightly shorten the obtained formulas.
Proof of Theorem 9. For smooth functions, any considered infinite series in this proof will be convergent as the following holds Let us start the proof by rewriting the expression under the norm using Definition 5: Now, we use the fact that the Fourier transform of a product is a convolution of transforms We change the variables in the first integral on the right-hand side ξ " ξ´η: We can write this in the following way Now, we aim to rewrite obtained expression as a sum of three terms. To do this we introduce the following partition of unity: let tΦ j u 3 j"1 Ă C 8 pRq be such that The value´1 9 in the definition of the Φ 1 actually can be replaced by any negative value. Now, we can write J s pf gqpxq´f J s pgqpxq " where and Now, we aim to provide the estimate for each term σ j pDqpf, gq. For the reader's convenience, each estimate will be obtained in a separate subsection.
With this, we can conclude that As mentioned before, we will show that for each of r function σ 1,r fulfils condition (215) up to some number of differentiations kpdq P N. We will analyse σ 1,r piece by piece. Let m ě 0. Then, based on Lemma 14 from Appendix II let us observe that for α i P N such that Using the assumption 1`|ξ| 2 1`|η| 2 ă 1 9 we can writ졡ˇˇB Cp1`rq m p1`|ξ| 2`| η| 2 q m 2 p1`|η| 2 q r´1{2 @pξ, ηq : .
(247) Now we will handle the last term in the definition of σ 1,r . From Lemma 16 from Appendix II we have thatˇˇˇˇˇˇB Thus based on (242), (244), (246), (247), (248) and Lemma 13 from Appendix II we can finally writ졡ˇˇB m σ 1 pη, ξq The sum on the right-hand side is finite based on the D'Alembert criterion for series convergence. Indeed we see thaťˇˇ`s {2 r`1˘ˇp 2`rq m`7 Based on Lemma 11 we havěˇ∆ (251) Thus based on (241), (244) and Theorem 3 we have
5.2.3
Step 3: Estimate of σ 2 pDqpf, gq Now we have to estimate To do this we introduce two new functions It is clear that σ 2 " σ 2,1´σ2,2 . We see that the following holds: Recalling that supp Φ 2 Ă`1 10 , 10˘and using Lemmas 14, 16, 13 from Appendix II combined with Lemma 11 and Theorem 3 it is easy to see that Now we have to provide the estimate for σ 2,1 : As we see in the formulation of Theorem 3 condition (213) has to be valid up to some number of differences taken. Let us denote this number by kpdq. Now, let us analyse the case where s{2 ě kpdq. We try to proceed in the case of σ 2,1 in the same way as in the case of σ 2,2 . Thus we try to validate the assumption (215) in Lemma 11. While doing so we may have to estimate negative powers of term 1`|ξ`η| 2 , which is problematic. This is not the issue when s{2 ě kpdq and calculations can be performed similarly to σ 2,2 (thanks to Lemma 17 from Appendix II). To obtain the estimate in the case where s is not so large we will apply the complex interpolation method. Thus we extend the definition of σ 2,1 , σ 2,1 to complex values: such that 0 ď Rez ď 2kpdq. If we chose z " 2k`it we can conclude using (288) and Lemmas 14,16,17,13,11 and Theorem 3 that for ψ, φ P C 8 pT d q we have › › σ 2k`it 2,1 pDqpφ, ψq where Cptq " C¨p1`|t|q k (this factor is the result of kpdq differentiations present in Theorem 3). Now we need to establish a similar estimate in the case of z " it. To this end, we observe that (based on transformations that lead to (235)) we have where We want to obtain an estimate of σ it 2,1 pDqpφ, ψq, however, it is easier to start with obtaining an estimate for σ it 2,1 pDqpφ, ψq: Now we need to derive estimates for each term on the right-hand side. First we will concentrate on κ it j pξ, ηq for j " 1, 3. Based on Lemma 17 from Appendix II we hav졡ˇˇB C¨p1`|t|q m`1`| η| 2`| ξ| 2˘´m 2 for 1`|ξ| 2 1`|η| 2 ą 9 or 1`|ξ| 2 1`|η| 2 ă is analytic, because functions of type S Q z Þ Ñ β z{2 , β P R`are analytic. We will show that the expression on the right-hand side converges uniformly. Indeed, we see that using (227) we have Thus it is easy to see that Thus, we can conclude that σ z 2,1 pDqpφ, ψq is analytic for any φ, ψ P C 8 pT d q. Using the same approach we can show continuity of S Q z Þ Ñ σ z 2,1 pDqpφ, ψq. We will only apply Theorem 4 to one of the variable of σ z 2,1 pDqpφ, ψq. To show that condition (222) holds, we verify that Cp1`|t|q 2k }ψ} p 3 P L 1 pP j pθ,¨qdtq for j " 0, 1 (interpolation with respect to the first variable). It is obvious based on Remark 3. Thus using Theorem 4 we can deduce that for 0 ď s ď 2k the following holds › › σ s 2,1 pDqpφ, ψq Now recalling (283), (287) and (288) we have The validity of the above inequality in case s{2 ą k was already justified in reasoning that lead to (290). Thus using the above and (286) we obtain
Conclusion
By combining (235), (252), (281) and (308) we get Lemma 14. Let s P C, m P N and tα i u d i"1 P N d such that ř d i"1 α i " m. Then, there exists N P N, tω i,j u N,d i,j"1 P N dN , tk i u N i"1 P N N , tC i u N i"1 P C N and such that where @i P t1, . . . , Nu 0 ď k i ď m, 2k i´ř d j"1 ω i,j " m and |C i ps, α 1 , . . . , α d q| ď Cpmqp 1`|s|q m . Also, we hav졡ˇB Proof of Lemma 14. We will prove the representation formula (317) using the induction method. Let us observe that and thus formula (317) holds for one differentiation. Now we assume that it holds for a certain number of differentiations and will try to deduce its validity after additional differentiation. Indeed we have We observe that 2pk i`1 q´ř d j"1 ω i,j´1 " m`1 and |C i ps´k i q| À r Cp1`|s|q m`1 . Thus we proved that (317) holds. Now it is easy to verify thaťˇˇˇB Lemma 15. Let r P N`, m P N and tα i u d i"1 , tβ i u d i"1 P N d such that ř d i"1 pα i`βi q " m. Then, there exists N P N, tω i,j u N,2d i,j"1 P N Nˆ2d , tk i u N i"1 P N N , tC i u N i"1 P R N and such that B m rxξ, ξ`2ηy r´1 pξ k`2 η k qs Moreover, for i " 1, . . . , N we have 0 ď k i ď m, r´1´k i ě 0, 2k i´ř 2d j"1 ω i,j " m´1 and |C i | ď Cp1`rq m . Additionally, there exists C independent from r such that for pξ, ηq : 1`|ξ| 2 1`|η| 2 ă 1 9 we hav졡ˇˇB m rxξ, ξ`2ηy r´1 pξ k`2 η k qs Proof of Lemma 15. We will prove this statement via induction argument. We see that for one derivative we have B rxξ, ξ`2ηy r´1 pξ k`2 η k qs B ξ j " pr´1qxξ, ξ`2ηy r´2 p2ξ j`2 η j qpξ k`2 η k q xξ, ξ`2ηy r´1 δ kj , B rxξ, ξ`2ηy r´1 pξ k`2 η k qs B η j " pr´1qxξ, ξ`2ηy r´2 2ξ j pξ k`2 η k q 2xξ, ξ`2ηy r´1 δ kj .
Again, using the fact that $2k_i - \sum_{j=1}^{2d} \omega_{i,j} = m$, we obtain the desired inequality. | 2022-12-23T06:42:02.408Z | 2022-12-21T00:00:00.000 | {
"year": 2022,
"sha1": "3a698332ef55aae10d2a88a23b7e24a23c716a09",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "3a698332ef55aae10d2a88a23b7e24a23c716a09",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
244175439 | pes2o/s2orc | v3-fos-license | Geology uprooted! Decolonising the curriculum for geologists
Geology is colonial. It has a colonial past and a colonial present. Most of the knowledge that we accept as the modern discipline of geology was founded during the height of the post-1700 European empire's colonial expansion. Knowledge is not neutral, and its creation and use can be damaging to individuals and peoples. The concept of "decolonising the curriculum" has gathered attention recently, but this concept can be misunderstood or difficult to engage with for individuals who are not familiar (or trained to work) with the literature on the issue. This paper aims to demystify decolonising the curriculum, particularly with respect to geology. We explain what decolonising the curriculum is and then outline frameworks and terminology often found in decolonising literature. We discuss how geology is based on colonised knowledge and what effects this may have. We explore how we might decolonise the subject and, most importantly, why it matters. Together, through collaborative networks, we need
Introduction
"Decolonising the curriculum" is an initiative that has gained momentum around the globe in recent years (Charles, 2019).Its origins are in humanities and social sciences; therefore, some of the language and rhetoric used, issues raised, and supporting texts, experiences, theories and ideas may be impenetrable or unfamiliar to those from STEM (science, technology, engineering, and mathematics) backgrounds.As academics, there is good evidence that we most comfortably op-erate in "discipline silos" of individuals who we feel share common interests, values, and skills (Becher and Trowler, 2001;Amaral, 2008;Kreber, 2008;Rogers and Cage, 2017).It is understandable, therefore, that STEM groups may lack the expertise to unravel some of the scholarly work around decolonising the curriculum.It is also true that many geoscience departments lack pedagogic experts (this is particularly true in the UK).This piece aims to break down some of the barriers to accessing and understanding decolonising the curriculum and, in this case, is framed around the discipline of geology (although it should be of use to academics across STEM disciplines).We have tried to avoid language that might be unfamiliar to geologists and have provided a glossary of words and phrases that commonly appear in scholarly work on decolonising the curriculum and pedagogy (see Sect. 6).Examples of colonial geological legacy are given, and we explore how and why this legacy may be problematic.We also suggest ways in which decolonising the curriculum can make our discipline more open, accessible, modern, and inclusive.
We openly acknowledge that this piece does not fully delve into every specific aspect of the geology curricula or provide explicit "fixes" - this is very much designed to explain what decolonising the curriculum is, particularly where geology is concerned, and some ideas regarding how to approach decolonising the curriculum are provided. This is an introduction to be built on. It is intended to demystify decolonising the curriculum and show its applicability to geology and geologists. Future work by the authors and collaborators will involve exploring local and Indigenous people's geological knowledge and their role in colonial geological surveys in the late 19th and early 20th centuries, investigating the colonial
present of geology, and developing open-access resources for a decolonised geology curriculum.
The authors of this paper all work at universities in the UK and are actively involved in exploring, leading, and/or promoting/explaining the decolonising the curriculum initiative at their institutes. The level of engagement and experience with the decolonising initiative varies amongst the authors: from programme-level involvement to institute-wide responsibility and growing national recognition. Of the authors, three are geologists/geoscientists, one is a postcolonialist from Malaysia, and one is a race equality officer. The authors are UK-based, and we recognise the privilege that we have as scholars of the Global North, but we equally uphold that it is imperative that decolonisation efforts must happen from colonising countries.
This piece focuses on decolonising the curriculum, but it is important to emphasise that decolonialism is a much wider issue than the curriculum. It is and should be uncomfortable. This piece is very much for the academics in the Global North whose curricula have, in the past, been relatively colonised (as per tradition) but who are trying to find out how they can decolonise their teaching and learning in order to provide more inclusive, representative curricula. Tuck and Yang (2012) emphasise that calling for decolonisation (of schools, for example) can turn it into a metaphor, where it is used as a vehicle for social justice and other methodologies - but not decolonisation. Our paper hopes to encourage decolonisation as more than a metaphor, but our focus is on knowledge and learning, not on the entire broad swath that decolonisation can encompass. Our focus is on how the logic of coloniality (Mignolo, 2007) has pervaded Western modern science and the knowledge construction within Western (as well as non-Western) universities, and on how we can take steps to make a paradigm shift and break the shackles of colonised minds, curricula, and knowledge sets. We start out with baby steps, first to raise awareness of the problem, and in future publications we hope to be able to identify strategies that would be of relevance to geologists.
The foundations and dominance of colonial geological knowledge
As an academic discipline or branch of knowledge, geology is relatively young (the academic discipline of geology arose in Europe - and to an extent the United States of America - in the late 18th/early 19th century). There are references to geological knowledge in several ancient texts (including the creation/formation of certain rock formations and the links to ancient environments as well as ideas on plate tectonics), mostly attributed to polymaths from around the globe or to scholars of "other" subjects and theologians (e.g. Yao, 2003). However, the study of the Earth and its changes through time has only really developed as a distinct academic pursuit since the late 18th century, arguably driven by a mixture of advanced mobility (the ability for individuals to cross vast distances recording rocks and relating them to one another), resource exploitation, and an increased interest in understanding "what" Earth and its constituent systems are. The first two of these motivating factors have strong colonial roots; it was at the height of colonial Europe that many of the principles, theories, laws, and practices that shape the discipline of geology were established. Prior to the late 18th century, the economy/resource-driven activities, such as the colonial Spanish and Portuguese state-led mineral exploitation of the early 16th century (Studnicki-Gizbert and Schecter, 2010), that we might include under the broad umbrella of geology today (i.e. surveying, quarrying, and mining) cannot be considered academic in nature (Sangwan, 1993). Although these early accounts of mineral exploitation are not considered academic, these activities did pave the way for further expansion and colonisation, and they ultimately contributed to the mindset that empires had the right to survey for, extract, and trade mineral wealth - this process laying the foundations of the modern discipline.
The principles and practices established in the early formation of the discipline were made (and/or sponsored) by men who were privileged (mostly wealthy) enough to pursue academic interests, both in their native countries (nearly exclusively European) and increasingly across borders. Ultimately, the discipline of geology as we know it was born at the height of northern European empires (with the foundations very much provided by earlier periods of southern European empire expansion and exploitation). But what does that mean for the subject - what difference does it make?
The Global North's (Eurocentric) dominance of knowledge production (i.e. epistemology - see Sect. 6) has, as Winkler (2018) observed, led academic disciplines born of colonialism to "the tendency to systematically classify philosophical concepts for the purpose of organising knowledge into distinct properties [which] has become a hallmark of Western scientific reason" (p. 592). The foundations of geology have been built on these pedagogical limitations. A dominant epistemology informs what knowledge a discipline is built from as well as how it is taught (i.e. pedagogy - see Sect. 6). Decolonising the curriculum seeks to explore and question the epistemology of a discipline and looks to reform it (what is important knowledge?); in doing so, it also influences and alters a discipline's pedagogy. Carey et al. (2016) have pointed out that the modern view of knowledge creation (modern scientific method; see "Baconian knowledge" in Sect. 6) "engendered a strong tendency in the environmental sciences to classify, measure, map, and, ideally, dominate and control nonhuman nature as if it were a knowable and predictable machine, rather than dynamic, chaotic, unpredictable, and coupled natural-human systems" (p. 777). Rudolph et al. (2018) describe Western universities' dominance in knowledge construction, production, and legitimisation. They explain how "powerful knowledge" is produced and refined in specialisations, predominantly in resource-rich universities, primarily in the Global North. These institutions play by a set of internalised structures and hierarchies and acknowledge internal rules, which go towards reinforcing colonial and racist power relations. Such "powerful knowledge" continues to ignore, belittle, and erase other systems of knowledge. These structures also make institutes and universities potentially hostile environments for Indigenous scholars (staff and students), who must conform to dominant systems and suppress their Indigenous knowledge and identity (Dzombak, 2020). What Peake and Kobayashi (2002) said of the discipline of geography - that "Without an explicit effort being made to address and correct the consequences of the various (and often hidden) racist practices and discourses that permeate the epistemological foundations of geography and the institutional structures and practices that shape our work environment, geography will continue to embrace the colonialist heritage bequeathed upon it" (p. 50) - is also true of geology and other STEM disciplines.
Colonisation of knowledge still takes place today and, although it is the legacy of historical colonial powers, it is not limited to their activities; it happens through imbalanced power relationships, internally within societies as well as externally (Popperl, 2018; Turner, 2018; Calvert, 2001). The colonisation of knowledge is engineered by proclaimed experts, whose power typically originates from an elite (generally wealthy) societal group or standing. We recognise that calling for a decolonised curriculum is not just permitting the inclusion of more viewpoints or amplifying the voices of a wider range of experts; the power imbalances are systemic, structural, and at all levels, particularly institutional. Trisos et al. (2021) highlight how difficult reconstructing power hierarchies will be: "Particularly for white-bodied researchers at well-funded universities and other organizations funded by corporate wealth from resource extraction, giving up a power and voice that has been explicitly and implicitly reinforced for at least 500 years will not be easy." (p. 1209).
Colonisation of knowledge can happen where governments or corporations are involved or where organisations or institutions set the norms, but it can also occur on a personal level - for example, in the power balance between students and their tutors/supervisors. In geology (and many other disciplines) some examples of modern colonialism of knowledge include the influence of world powers, scientific associations and societies, and publishing houses. The influence of these groups differs, but they ultimately have control over processes that result in the promotion (as well as the extraction and funding) of knowledge seen as valuable to them. "English has been the dominant form of knowledge communication in science, which can lead to publication bias against non-native English-speaking scientists. When one reads, writes and thinks in English, it is easy to forget that for the majority of people [...] knowledge is produced and tested in other tongues" (Trisos et al., 2021, p. 1206).
Many readers will have experienced some level of this; peer review of papers and grant applications, with the biases involved, is a ubiquitous power structure that can determine what knowledge might be seen/perceived as valuable. Regulatory and/or accrediting bodies for education are also modern examples where a small group of individuals have the power to dictate knowledge that is valuable to a discipline. Indigenous scholars in geology/geoscience are (like all historically excluded groups) poorly represented within the discipline, and they struggle to be accepted.
3 What is decolonising the curriculum?
Origins and overview
Decolonising the curriculum is an initiative focusing on the action of decolonising ourselves (students and staff) and the teaching environments in which we operate. The founding frameworks of this movement are generally agreed to be the areas of decolonial and postcolonial studies. Postcolonial studies tend to focus on the social, economic, political, and cultural impact of colonial powers on past colonies and their peoples. Decolonial scholars arise from a variety of disciplines and are generally concerned with epistemology - questioning the dominance of Eurocentric knowledge systems. Both decolonial and postcolonial studies have been around for several decades and are generally studied by Indigenous scholars. Decolonising the curriculum is at different stages around the globe. Postcolonial nations (nations that were once colonies) are leading the way, with several examples of Indigenous knowledge systems or methodologies being embedded within curricula; Manathunga (2020), for example, provides several vivid examples from New Zealand and Australia.
Although there is no single definition or understanding of what a curriculum is (Egan, 1978; Young, 2014), a curriculum can generally be described as the total sum of the knowledge, skills, social norms, and experiences that a student learns or is exposed to within a designed educational process. A curriculum is formed of/based on the knowledge, skills, and experiences considered to be valuable for a discipline and, certainly for geology, any industry that has influenced it.
Decolonising the curriculum is not an initiative looking to shame individuals for the content they teach nor for the work they make use of. It is not about cherry-picking diverse content for the sake of diversity or deleting certain works, and it is not an outright ban on teaching the work of old, dead, white men (Pett, 2015). It is not about change for the sake of change, and it is not about the formerly colonised and the coloniser switching places either. Decolonising the curriculum recognises that a total or outright dismantling or destruction of all imperially created structures and processes is not likely to happen overnight nor without resistance. However, a restructuring would be helpful; a rebalance of power to decrease the marginalisation and "othering" (see Sect. 6) of groups and knowledge sets could result in better pedagogy and greater understanding. It would also aid inclusivity through improved representation and diversity. Decolonising the curriculum has also been highlighted as a method towards closing the degree awarding gap between white students and those from a Black, Asian, or minority ethnic background (UUK and NUS, 2019). Thus, while decolonising the curriculum does not call for the abandonment of all Western theory, it does flag up that Western theory "does not in fact describe or map the entire planet, and that despite pretensions to universalism it suffers from gaps and lacunae, and for this reason needs to be revised in the light of local empirical conditions" (Jackson, 2003, p. 73; in Hönke and Müller, 2012, p. 390).
Envisioning all cultures and knowledge sets
There are several definitions of decolonisation and decolonising the curriculum. Mbembe (2016) highlights that there is little agreement on what decolonisation is and that it is an ever-changing and evolving "Beast". Here, as did Charles (2019), we support this definition taken from part of Keele University's Decolonising the Curriculum Manifesto (Keele University, 2018): Decolonising the curriculum means creating spaces and resources for a dialogue among all members of the university on how to imagine and envision all cultures and knowledge systems in the curriculum, and with respect to what is being taught and how it frames the world.
Decolonising the curriculum is a philosophical and pedagogical initiative exploring the origin, development, and use of knowledge, looking not only at repositioning theory but also at the content of a curriculum and how that content is taught. It is a curriculum design process looking to recognise knowledge as power, as well as recognising the power that enabled knowledge to be legitimised as such. It encourages us to question who created certain knowledge, why we use that particular knowledge, and who has access to it and why. A decolonised curriculum explores and acknowledges colonial legacy in knowledge creation, giving credit to those hidden and minoritised individuals who deserve it. Decolonising the curriculum is about exploring, examining, interrogating, and teaching the history of a discipline's knowledge base. It involves inquiring about the approach, method, framing, thought paradigms, theories, structures, and concepts that underpin and form all content within the discipline. However, the initiative is not solely concerned with knowledge. It is also vitally about place, power, and identity. Many Indigenous scholars, including geoscientists, have commented on how academics and students often have to assimilate into academia, following the norms, structures, frameworks, behaviours, and knowledge systems imposed on them. Dzombak (2020) provides a short blog highlighting the experiences of a few Indigenous geoscientists.
A reflective, uncomfortable process
Decolonising the curriculum is sure to mean different things to different people and will involve different actions for different disciplines. This seems particularly true if we compare subjects from STEM, the social sciences, and humanities. The process requires us to reflect on our backgrounds, experiences, ideologies, and discipline-specific narratives. Drawing on Tuck and Yang's (2012) work calling for the non-domestication of decolonisation, Esson et al. (2017) eloquently argue that "Decolonisation is a radical challenge to 'unsettle' the architecture of privilege" (p. 387). As academics who occupy positions of privilege and are sometimes said to dwell in ivory towers, decolonising our curriculums has to be a deeply self-reflexive process, involving capturing the experiences of historically marginalised groups (decolonising aims to address and rebalance injustices for all marginalised groups - it is not to be mistaken solely as a race/ethnicity issue). We need to acknowledge the biases in our world views (including social and political) (Holmes, 2020), be aware of our relationship to curricula/research, and fully understand ourselves as educators and researchers in order to address the context in which curricula design (both in terms of content and in terms of practice) (Rose, 1997) is taking place. To some extent, this is a complex manoeuvre where we sometimes have to pull the rug out from under our own feet. Furthermore, while decolonisation of anything, let alone curricula, is clearly many-pronged, multi-levelled, and complex, one thing it definitely will also be is discomfiting; those who undertake to decolonise must be prepared to step outside of comfort zones and interrogate assumptions and privileges, perhaps even unlearning some of the latter (as Spivak advocates; Spivak, 1990; Andreotti, 2007).
Decolonising the curriculum should ideally be a reflective and honest process in which we recognise the emergence and use of the knowledge, or set(s) of knowledge, that we choose to apply in any given circumstance. Under what circumstances was the knowledge we use made, and why do we use this set of knowledge in particular? Several authors outline what decolonisation might entail, and they include themes such as recovering knowledge, reflecting on the exclusion of other knowledge, ethics, the use of language, and the internationalisation of Indigenous experience (e.g. Smith, 1999; Chilisa, 2020; Le Grange, 2020).
Whilst decolonising the curriculum is not a call for the vilification of past individuals, this does not mean we cannot judge and be disappointed, embarrassed, or angry at some of the unjust assumptions, beliefs, and actions of figures in the past. It is a call to understand, reflect on, and call out the norms and actions of those who provided important advances to our knowledge. How might those behaviours have arisen, and how did they impact the formation of the knowledge we use? Were there others involved in the formation of that knowledge who were forgotten or marginalised collaborators? How might that have led to the continued exclusion of some groups in the present? The process acknowledges how certain knowledge was created, for example by explaining where authors held views which are found repugnant today, or where data were gained at the expense of others. For example, Lam (2021) provides an informative piece outlining (amongst other colonial links to geosciences) the history of Henry De la Beche, a slave owner who advocated for slavery reforms rather than abolition (De la Beche, 1825) and who is credited with creating the first geological map of Jamaica whilst visiting his slave plantation.
Decolonisation calls for us to consider the broader pool of knowledge available outside of our Eurocentric curricula (Hall and Tandon, 2017). Teaching geology from a single perspective (often framed by the works of dead, white men) leads to uneven power relations, particularly in relation to race, class, and gender (Begum and Saini, 2019). Where knowledge is ignored, or ownership of knowledge originating with or belonging to Indigenous peoples is denied, damage and hurt can be caused (Whitt, 2009). What we say and do matters.
Bridging society and science
Importantly, decolonising the curriculum seeks to highlight how injustice in the past has led to some of the most fundamental aspects of modern thinking, discipline identities, and continuing inequities (Harding, 2006) - and how, by acknowledging and understanding this, we can be better and strive to make a just, fair, equitable, and accessible modern system that provides a curriculum relevant to modern challenges. An issue not just for geology but for all STEM subjects is that science is often presented as being apolitical or neutral, where social relations have little bearing or influence. Gracio (2014) highlights this issue - "Science is made by people with interests, intentions and ambitions; and it's funded by governments and companies with agendas." - and calls out the absence of ethics and politics teaching in science curricula. The idea of "political geology", an inter-/multi-/transdisciplinary area relating to geopolitics, the Anthropocene, technology, cartography, history, and other themes, has been identified as an emerging area of work (Bobbette and Donovan, 2019).
A holistic process -beyond diversification
Some of the conversations around decolonising the curriculum focus (in an unnecessarily constrained and limited way) solely on diversifying reading lists and case studies used across educational units. Diversity of representation in reading lists is important; ensuring reading lists are not just from Western male perspectives can enrich content and open the door to different frames of knowledge, experiences, and points of view. However, piecemeal developments like diversifying reading lists, whilst useful, do not fulfil the scope of decolonising the curriculum. Decolonising the curriculum is a holistic process. It needs time, thought, collaboration, and willingness not only to take fragmentary steps but to undertake a major overhaul. A common criticism of decolonising the curriculum is that it "removes" or conveniently effaces historical knowledge (e.g. by removing certain case studies, authors, or contexts). Done properly, it should in fact broaden frames of reference, recognising other knowledge systems and ways of thinking, and opening global dialogue. Vandeyar (2020, p. 5) provides a useful quote emphasising how we must go beyond the diversification of materials and ensure we challenge and interrogate the knowledge we use: "Decolonisation of the curriculum requires much more than just changing the curriculum. How things are taught and academics' attitudes, perceptions and beliefs in this process are pivotal to the decolonisation project. Decolonisation is more than just a 'choice of materials' (Wa Thiong'o, 1992). The attitude and disposition to materials used in the curriculum is critical."
Colonised forms of geological knowledge
Geology is a discipline created by colonial forces/parties at a time of active (explicit) colonial expansion (Yusoff, 2018; Figueiredo, 2020; Zeller, 2000). Because of its global relevance and common use of international case studies, it might be felt by some that geology is no longer colonial or that the colonial roots of geology no longer influence the subject's arena. However, the discipline born during imperial expansion is still very much the discipline taught across Western institutions today (albeit with adaptations as technologies/methods/nomenclatures/schemas have developed). It is important to recognise that this colonial version of geology, known to most geologists as the accepted global norm or canon (and adopted by many non-Western countries, likely as a result of colonial legacy), is not the only form of geological knowledge practised today or in the past. What makes the geological knowledge of a range of groups incompatible with the accepted geological canon? Many of the core aspects of geological teaching and learning focus on the identification, classification, and physical/mechanical characteristics of Earth materials, echoing the geological activities promoted during the rise of geology as a military science and latterly an academic pursuit (see Sect. 4.2 below). Many Indigenous peoples have described and used their local geology for thousands of years (Nyblade and McDonald, 2021, and references therein). Reano and Ridgway (2015) highlight some of the geological workings of the Acoma people (western central New Mexico), who, rather than use the stratigraphic framework and classifications familiar to institutional geology (i.e. education, academia, industry), use an interpretive framework passed down generation to generation (called a "cultural framework" in their paper). This framework groups lithologies by their cultural or resource significance (e.g. farmland, building materials, pottery materials, water resources). These alternative frameworks can be linked and compared to "standard" frameworks to better welcome minority groups into geology (Reano and Ridgway, 2015). For wider cohorts, cultural frameworks also encourage better understanding of world views and the relationship between Indigenous populations and their lands, as well as highlighting how cultural tensions can arise from modern colonisation (e.g. resource exploitation on Indigenous lands).
There are many more examples of traditional and local knowledge, oral histories, and mythology that are dismissed or ignored (and/or belittled) by the Western knowledge system but are grounded in the truth of observation. Oral histories include details of past environmental and climate change, of cataclysms, and of the resulting environmental response. Oral histories are cumulative through generations and can often cover large parts of human (and non-human - animals, plants, rivers, etc.) history. Whilst the Western scientific method can (unquestionably) answer many questions, decolonisation of what counts as knowledge is needed to integrate Indigenous knowledge systems. Nunn (2018) demonstrates this excellently by bringing together the knowledge systems of the Western climate science canon and the oral histories of the Aboriginal First Nations peoples. For future climate action, this sort of integrated dialogue will be invaluable - for example, we could potentially predict the effects of sea-level rise on a coastline whilst integrating knowledge of the environmental impact (including how animals and plants react to such events) as evidenced through historical knowledge.
Most UK/Eurocentric/Western geologists probably do not think that cultural/Indigenous/alternative frameworks of knowledge even exist locally - often regarding the local knowledge as "less developed" (a colonial attitude!); however, that is far from the truth. Within the Global North, some areas that were themselves formerly colonial powers and/or internally colonised have (or had) a wealth of unacknowledged local geological knowledge, some of which persists. For example, in South Shropshire (England) many locals refer to "dhustone" for a hard, black igneous rock quarried from a place called Clee Hill. If asked about dhustone, many locals would likely be able to tell you where this rock can be found and why it is quarried. If, however, you were to ask locals if they knew where you could find microgabbro in the area, they would likely not know. This sort of geological knowledge, which exists across the globe, is often downplayed or explained as "not proper" geology - why? The knowledge serves a purpose and is successfully disseminated. Many of the terms used locally for rock types with decorative aspects (such as Cotham marble, Purbeck marble, Sussex marble, and Puddingstone in the UK) are often dismissed as "incorrect" by geologists. This narrow acceptance of what is "correct" geological knowledge potentially damages the image of the geological discipline, with individuals being made to feel inferior and therefore being unwilling to engage further. Learning about and working with local knowledge is not an onerous task and could lead to a more engaged and responsive reaction to geological activities (e.g. Palmer et al., 2009).
The connection between the geoscience industry and active harm to sites of cultural significance is a tangible result of the erasure and belittlement (or wilful misunderstanding and ignoring) of local and Indigenous knowledge. Geologists working for the extractive industry can hold power over the future of landscapes and peoples that have coexisted for thousands of years. The colonial legacy of land ownership and legalities over material extraction (supported by powerful and wealthy groups) is very much persistent today. The destruction of a 46 000-year-old First Nations heritage site of rock shelters in Western Australia to access higher volumes of high-grade ore is just one recent example (Wahlquist and Allam, 2020a). The responses to these inexcusable actions have been positive; all mining companies in Australia have been recommended to review all agreements with traditional landowners, and Rio Tinto have had several recommendations imposed on them, including remediation work, restitution packages, and a commitment to halt actions on 1700 First Nations heritage sites that it has permission to destroy (Wahlquist and Allam, 2020b). Geoscience researchers have been responsible for similarly destructive activities, with several well-documented cases of rock cores/samples being taken from sites of cultural significance (e.g. Sahagún, 2021) and from areas of natural beauty (MacFadyen, 2010), in spite of codes of conduct existing to mitigate this (e.g. Scottish Natural Heritage and the Geologists' Association, 2011).
Perhaps the most important aspect of acknowledging geology's colonial present, as well as its debts to marginalised peoples and damage to environments, is to ensure that present and future actions work towards a more collaborative discipline in which the co-production of knowledge with all involved parties is normalised (Adame, 2021; Adams et al., 2014; Wilkinson et al., 2020; Sheffield et al., 2021). The geosciences are often overlooked (or misunderstood) in policy (Stow and Laming, 1991; Gill and Smith, 2021), process, and considerations for sustainable development, and this is undoubtedly linked to past geological activities being associated with extractive and damaging processes.

The history of the discipline of geology is rarely taught in detail in geology courses. Many of the "Fathers of Geology" and their achievements are included in teaching programmes, but little (or no) content centres on how the current discipline was formed. The colonial influence and exploitative actions underpinning the subject's foundations are not part of the discipline's canon (and neither is the discipline's colonial present). This section gives a very brief overview of the discipline's colonial origins. Some of this will be familiar to geologists, particularly the names of certain individuals and their geological contributions. What may not be known by some geologists are the wider systems, actions, and processes that these geological contributions were made under.
In the 17th century, individuals such as Nicolas Steno began drawing up ideas about the deposition of sediments and the origin of fossils, questioning the accepted views of Earth science at the time (Adams, 1938; Gohau, 1990). The 18th century saw a realisation that minerals and ores (often inaccessible at the surface) could be found by studying certain natural phenomena. At this time, two main schools of thought arose to explain the creation of Earth materials - Neptunism (also called Diluvianism) and Plutonism. Neptunism argued that geological materials precipitated from water (much of this thinking was linked to Christian Bible teaching, particularly the great flood) (Gohau, 1990). Plutonists believed that volcanism was mainly responsible for rock formation - and alluded to the age of the Earth being very old and not understandable from the limited span of teachings from the Bible (Gohau, 1990). It was in the 19th century that the foundations of the discipline we know today (from Western education and industry) were laid. Uniformitarianism (and the opposing catastrophism) was proposed by Charles Lyell in his "Principles of Geology" (Lyell, 1830). Ideas around stratigraphic principles and relative dating began to be developed at this time.
In the 19th century, geological investigation often included historical and ethnographic elements; geologists would investigate a wide variety of subjects, including antiquities, ancestral and Indigenous myths, and past civilisation/human activity, and they used texts and oral history to investigate local geology (Chakrabarti, 2020). For example, Alexander von Humboldt, a German polymath and geographer, integrated his experiences around the globe to try and explain/explore the distribution of a number of natural features (e.g. animals, plants, mountains, volcanoes) through the measurement and recording of them on maps (Secord, 2018).
Geological expeditions/surveys (although not necessarily solely geological - many aspects of fields such as trade, botany, and anthropology were also included) were an instrumental tool of colonial expansion (e.g. Stafford, 1984, 1988; Sangwan, 1993; Yusoff, 2018; Figueiredo, 2020; Zeller, 2000). Expeditions and surveys played an important role in the economic, technological, and cultural development of colonial powers (Britain and Spain in particular). Spanish engineers surveyed much of South America during the period from 1750 to the 1840s, and British surveyors operated across the British Empire from the 1830s to the 1870s (Teale, 1945; Chakrabarti, 2019; Stafford, 1984, 1988; Miller, 2020). Many expeditions, surveys, and "missions" to countries and territories where colonies were later established included a geological element. Geological surveys were undertaken, and estimates of natural resources were made, with the colonial party often being guided by locals; these guides and the knowledge they shared were erased, "rewritten", or taken without recognition, a system that can still be observed today (see Sect. 4.3 below). Many cases of colonial expansion and occupation were based on the findings of these "exploratory" parties, particularly where natural resources were involved (Stafford, 1988). Other reasons for colonial expansion included strategic military/trade locations (including the slave trade), areas for European settlers to live, and the desire to push colonial frontiers further into lands occupied by "savages" and "barbarians" (Webb, 2017). The importance of mineral wealth to the British imperial effort was so commonly understood that military, naval, and commercial (e.g. the East India Company) officers were offered training to better equip them to make scientific observations and enquiries, with mineral wealth from the colonies permanently held on display in London (Stafford, 1984). Official (British) Geological Surveys (i.e. organisations, rather than the action of surveying/exploring) were established in nearly all UK colonial territories from 1918 (Colonial Geological Surveys, 1944). At a similar time to European colonial expansion, a similar expansion of colonial settlers was occurring in the United States of America, where geological surveys evaluated the economic value of land and drove expansion into resource-rich areas (Nyblade and McDonald, 2021). As British colonialism and the British Empire were rising, modern-day nations were being established and territories were being fought for in South America (Spanish America in particular). These nations had little or no access to Spanish mineral survey data collected in the territories by colonial expansionists (Miller, 2020). Miller (2020) argues that these fledgling nations (and ultimately all nations) are most closely defined by shared knowledge and knowledge systems - in the case of Spanish America, knowledge exchange between Indigenous peoples and Spanish colonials had been ongoing for centuries. Secord (2018) introduces the idea that resource extraction was not the sole geological motivation during the time of colonial expansion (particularly of northern European empires). Geology, as well as the idea that the Earth held multiple "lost worlds" and natural wonders, became entangled with philosophy and literary works of the time; for example, Secord (2018) suggests that Conrad's Heart of Darkness (1899) and Conan Doyle's The Lost World (1912) allowed readers to share and experience the thrill of exploration to
wondrous lands with contemporary geologists. Some of the leaders of exploratory parties are well-known geologists and scientists today, with Alexander von Humboldt, Charles Darwin, Henry De la Beche, and Roderick Murchison being key players in the use of surveys for colonial expansion (Stafford, 1984; Secord, 2018). These surveys and the organisations responsible for them were funded by colonial and ruling powers, for example the Spanish Crown (e.g. funding expeditions to Peru, Chile, and New Spain - Mexico) and the British Crown and Government - often directly through military organisations (e.g. the Board of Ordnance) (Rose, 1996). Early geological activities of the British Empire had such strong military ties that for much of the 19th century, certainly in the UK, geology was perceived as a military science (Rose et al., 2019). Colonial geologists are responsible for the creation of most of the "first geological survey of ...[somewhere]..." and are often associated with the first geological interpretations of the areas they surveyed. In some cases, they are attributed with the "first discovery" of mineral wealth or of features they observed. This is, of course, absurd. Locals often provided valuable knowledge, guided and worked for these parties without formal credit or recognition, and were clearly aware of many geological features prior to their reported "discovery". For example, Frank Dixey, the first Director of the Directorate of Colonial Geological Surveys, talks about "native information", carriers, and escorts in his personal memoirs on surveying Sierra Leone (Dunham, 1983). This phenomenon is commonly known as "firsting" (see Sect. 6). It was these types of activities that led to the establishment of the geological discipline we know today. In Spanish and Portuguese America, Indigenous populations were clearly successful mineral surveyors, as proven by the quantities of gold, silver, and copper looted by the Spanish and Portuguese in the early 1500s. After this initial period of looting, the colonial forces then surveyed and extracted vast amounts of gold and silver (Bakewell, 1984). The colonial forces took their survey data with them when withdrawing from colonies, and the Indigenous peoples once again surveyed their lands for minerals (Miller, 2020).
Individuals such as De la Beche and Murchison were likely driven by the same excitement and inquisitiveness that many geologists share about how the world works. The "exciting" debates held by prominent geologists at the time concerning the establishment of geological periods were a factor in convincing members of parliament, noblemen, military officers, and colonial administrators that geological knowledge and exploration could promote economic growth (Stafford, 1984). However, many of these individuals were in the privileged position of being able to pursue an academic lifestyle due to injustices towards others, both domestically and internationally (e.g. Hyde, 2020). The theme of lone geniuses making exciting discoveries, giving talks, and moving specimens to research institutes/museums, etc. persists today. Rarely (never?) is knowledge creation the achievement of a single individual. The idea that knowledge is validated through certain associations or groups of individuals is also observable today; thus, whilst many of the individuals who were prominent in the establishment of geology as the discipline we know today are long gone, the systems which they formed and supported are still very much alive.
Of course, there are individuals in the history of geology who were advocates for justice; for example, William and Richard Phillips and William Allen, who were pivotal in the establishment of the Geological Society of London, were abolitionists (Lam, 2021). These individuals, however, were still part of a group that encouraged the removal of materials from colonial territories for "metropolitan analysis" (Stafford, 1984). Imperial resource extraction may seem like an action of the distant past; however, geology as an essential tool for colonial expansion was celebrated as recently as the 1940s and 1950s (Teale, 1945), was a dominant economic process until relatively recently, and arguably still continues via modern corporations. Reports of mineral extraction from colonial territories and scientific work resulting from such activities were published in the Bulletin of the Imperial Institute and latterly in The Quarterly Bulletin of the Colonial Geological Surveys up to at least 1957 (Beard, 1950).
"Parachute" science
Parachute knowledge creation is a phenomenon not restricted to geology or STEM disciplines. It is the act of researchers (typically from the Global North) travelling to conduct fieldwork in a "Majority World" region (typically the Global South) and either not collaborating with or not recognising the participation of local researchers, landowners, or guides (e.g. Greshko, 2020; North et al., 2020). Spivak argues that field data collection (which she refers to as "information retrieval") is another form of imperialism, one which centres the Western academy (Andreotti, 2007; Nordling, 2021).
Parachute knowledge creation may involve the removal of samples or specimens from countries to be held or exhibited elsewhere without full collaboration or agreement from the country/area/people of origin, often referred to as "extractivism" (see Sect. 6). Although long the norm and even common practice in academia, extractivism is inherently exploitative. It may lead to the creation of academic outputs (e.g. articles published in academic journals) where the authorship team is exclusively from the Global North, and collaborators from the study area are not included or acknowledged. These extractive practices have been shown to lead to biases in data, and they still occur today (Raja et al., 2022). This process leads to the perception that external experts are needed for local issues; it does not meet nor help local research efforts and can even hinder them (Stefanoudis et al., 2021). The practice of hindering local efforts has been recently highlighted by local geologists working on Nyiragongo volcano, Democratic Republic of the Congo (Nordling, 2021).
Parachute/colonial science often leads to the phenomenon of firsting (see Sect. 6). A recent example of this practice has been reasonably visible and involves a unique Brazilian fossil that ended up in a German museum and was subsequently published on by a group with no Brazilian collaborators (Vogel, 2020). The example of the Brazilian fossil also raised questions about the ethical (and legal) practices of obtaining materials; Brazilian law forbids the exportation of fossils, other than for loans (Vogel, 2020). It is important to recognise that these behaviours can cause hurt to those being othered and result in the breakdown of engagement, trust, and willingness to help from these parties. Cisneros et al. (2022) outline additional examples of parachute and extractive science from Mexico and Brazil, as well as outlining the impacts (and excuses) of such practice. They suggest that scientists should be required to provide documentation proving the ethical and legal position of sample collection/acquisition and that journals should refuse to publish without these. Yozwiak et al. (2016) highlight that international collaboration is fundamental to tackling major global health emergencies. This is also true for tackling geoscience-related challenges, such as climate change, critical material extraction, disaster risk reduction, and water extraction. Equitable collaborations between global experts, including those with invaluable local knowledge, are essential to avoid the damage caused by colonial science. Building research collaborations with support, training, and educational opportunities for local communities helps engage key stakeholders and creates more equitable partnerships (Whiteford and Vindrola-Padros, 2015). These collaborative actions may seem daunting to those without the experience, time, resources, or incentives to carry them out (Roldan-Hernandez et al., 2020), but they should be normalised and built into ethical planning and research grant submissions.
Towards a decolonised geology curriculum
In decolonising the geology curriculum, we need to acknowledge the colonial legacy of the knowledge we teach and understand that the knowledge sets we use are not superior, not truth claims, and not pluriversal nor representative; they can therefore only be partial and non-exhaustive/non-comprehensive. We must recognise the damage and harm which that knowledge creation was (and is still) a part of, as well as the fact that some knowledge has been suppressed and/or erased and that some has been created unethically. Geology is not apolitical, nor is it unconnected to the sustainable future of diverse societies. We need to understand that all knowledge used has power.
To create a discipline that is equitable, progressive, and compassionate, curriculum development teams need to start considering decolonisation of their curricula now. The process will take time, effort, and willingness. Sharing effective practice, collaboration, and co-creation, as well as listening to individuals from colonised territories or those whose knowledge has been colonised, is vital. There will be a wide range of actions specific to different curricula, dependent on what, how, and where they are taught - each journey will be unique. Here, we outline some suggestions on how we can begin to decolonise the geology curriculum:

1. Explain and explore what decolonising the curriculum is. Invite students to participate. Create or share resources that help explain what decolonising means. Emphasise the focus on knowledge production and use, as well as on power in the process of knowledge generation and suppression. Outline that it is not good vs. bad and not about removing bodies of work based on individual beliefs and behaviours, but rather about exploring how this has influenced both the knowledge itself and how individuals were oppressed or disadvantaged during the knowledge creation process. Explore why we should learn from this history, rather than repeat it.
2. Teach the history of geology. No geology is neutral (Yusoff, 2018). Teaching this discipline needs to include pointing to its framing. Exploring the origins of the knowledge that we use and acknowledging that peoples and lands were damaged in the creation of that knowledge allows us to understand why some groups might feel they do not belong in geology and how some groups have been excluded. It may help explain why diversity in geology cohorts is worryingly low (Dowey et al., 2021). It allows us and our students to understand the consequences of past actions and hopefully reduce/remove these actions in the present and future.
3. Set the context of the discipline of geology. Instead of presenting the syllabi or curriculum as the definitive, universal version of "Geology", contextualise to make clear that the geology taught in our degree programmes is one version of many possible knowledge sets, from particular perspectives, and that it is selective and exclusive in various ways, as all curricula must be ("Geo-Context", Pico et al., 2021). Make clear to students how even the best selected syllabi cannot claim to speak for the entire discipline nor be completely representative, let alone comprehensive or exhaustive. To this end, the conceptual framework could introduce methods and approaches that emphasise contextualised and situated knowledge sets, recognising that knowledge is place and time specific. Knowledge is underpinned by powers which have legitimised it as knowledge, often at the expense of other/alternative knowledge sets.

6. Explore the bias of Global North research (abundance, "impact", and perception). There is a bias in both the number of papers produced by teams from the Global North - even where this research is focusing on topics from the Global South (North et al., 2020) - and in the "impact" and perception of the quality of work produced by researchers from the Global North vs. the Global South (Collyer, 2018). Commit to including works from a broader range of authors. Embed decolonised actions into research procedures - work with local researchers and people. Consider inviting researchers from the Global South for reviews and to provide virtual research seminars to students.
7. Participate in creating a more diverse population of geologists. Research has shown that projects run by diverse groups are more impactful (used more widely and cited more) than those with non-diverse project teams (AlShebli et al., 2018). This is also true of curricula, particularly those co-created with student bodies. Alternative knowledge can be offered and integrated within the curricula to appeal to a wider audience and resonate with a greater number of non-geologists, as well as providing a broader range of knowledge systems, approaches, and attitudes. Work towards dismantling hierarchies and structures that create barriers and exclude groups. Diverse representation likely creates more inclusive communities of practice (Sheffield et al., 2021). A diverse body of geologists, including Indigenous scholars, is needed to tackle the grand challenges of the 21st century (Dowey et al., 2021).

8. Teach climate change as a social justice and colonial issue. Geological knowledge of climate change is essential to understanding the dangers of the anthropogenically enhanced climate crisis. Teach students that climate change is not apolitical; it is an example of modern colonialism, with the largest anthropogenic contribution to pollution coming from the Global North whilst the largest impacts are felt in the Global South (e.g. Weizman and Sheikh, 2015; Mahony and Enfield, 2018). Policy and process must be created with researchers and populations from the Global South to ensure equity of proposals and partnerships.

9. Co-create and collaborate. Traditional curricula tend to focus on the individual - whether that is in highlighting "lone geniuses" or in emphasising individual academic achievement. Our curricula should emphasise the benefit of group work and teamwork (Gregory and Thorly, 2013; Johnson and Johnson, 2009; Springer et al., 1999). The United Nations Sustainable Development Goals (SDGs) can be tracked to activities across the geology curriculum (Rogers et al., 2018).
These actions are by no means exhaustive but aim to provide a starting point for geology academic teams beginning to think about decolonising the curriculum. Sundberg (2014) highlights the importance of taking steps - moving, engaging, and reflecting - in enacting decolonising practice: "understanding that decolonisation is something to be aspired to and enacted rather than a state of being that may be claimed". Sundberg encourages those undertaking decolonisation to progress by recognising and encompassing other forms of knowledge ("multiplicity"). The aforementioned study argues that we each create our own truth or knowledge, because we are all subject to different conditions; our experience of the world is not inevitable ("historical contingency"). This goes some way to explain why we find different knowledge in different societies and places; our lived experiences differ and so, therefore, does the way we build knowledge around these experiences. Historical contingency should not be a concept unfamiliar to geologists. The idea that historic (geological) events are not inevitable but that each event relies on a number of complex conditions is one that anyone reconstructing past Earth events will understand.
The power of decolonisation
Decolonising the curriculum may initially feel inaccessible to scientists, with its own set of terminology/jargon and its basis in historical context. However, it is vital to a more equitable future for geology and many other disciplines, with value to both academics and students. It also serves as a reminder that the work we conduct is not apolitical, neutral, nor divorced from society - people, places, knowledge, power, and the environment are interwoven with our science. Decolonising any curriculum involves not just the contents of the syllabi but also the pedagogical structures underpinning the curriculum, from delivery right through to assessment methods. Decolonising the curriculum is a set of processes, a pedagogical approach, and an ideology, which seeks to enhance knowledge and learning, to make disciplines richer and more enthralling. It seeks to include more; to dig deeper; to encompass more viewpoints, representations, and voices; and to welcome diversity rather than stay narrow and limited. Decolonising is a democratic and collaborative process, breaking down hierarchies to heighten productivity and effectiveness. It talks truth to power, exposing power structures that have shored up practices and processes unseen and uncalled out for most of their existence. Decolonising curricula, if done well, should be a liberating process and an education enhancer for both staff and students.
Decolonising the curriculum glossary and recommended reading
In this resource, we have tried to steer clear of language that may be unfamiliar, impenetrable, or off-putting to many geologists (and probably individuals from other STEM subjects); however, where we have used such terms, we have tried to explain them along the way. In this section, we highlight some key terms typically found in decolonising the curriculum literature in an attempt to demystify them for those unfamiliar:

Baconian knowledge. The modern scientific method as developed by Sir Francis Bacon; a method of knowledge creation based on systematic observation resulting in empirical data.
Colonial/colonialism. The act, practice, or policy of control of a people by a power or other people. Often associated with the establishment of colonies. Colonialism, the creation of colonies and their exploitation in systematised ways, derives from the Latin term "colonia", which meant a settlement of Roman citizens in a newly conquered territory.
Decolonisation. The removal of ongoing colonial domination (Noxolo et al., 2017); in its early 20th century usage, this term referred to the process of attaining political independence by colonised countries (the socio-economic, historical, and geographical act of nations gaining independence from colonisers). Decolonisation is often thought to be the end of territorial ownership of colonies; however, colonialism does not disappear with decolonisation, and coloniality can continue long after a country has decolonised.
Epistemology. How knowledge is produced.
Epistemological violence. The practice whereby empirical data are interpreted in a way that implies that the Other is inferior (Teo, 2010). Epistemic violence can be either against knowledge or through/via knowledge (i.e. dominant knowledge sets, oppressive knowledge sets).
Extractivism. The process of extracting natural resources for export for economic/academic gain (often associated with poor environmental process and policy).
Imperial/Imperialism. Of or relating to an empire and/or the activities of an empire.
Firsting. Knowledge (of a discovery, a finding, etc.) framed from a European perspective for a phenomenon that was in fact established by Others previously. This framework of knowledge promotes (mostly white, male) Europeans as creators of global knowledge, often to the detriment of those who are not accepted as "firsters" (Beck, 2017).
Neoliberalism. A movement with commitments to individual liberties, belief in shifts in policy and ideology against government intervention, and a conviction that market forces should be self-regulating (Olssen et al., 2004).
Neocolonial. The process and/or practice of using economic and cultural influence and globalisation to influence or control a country, society, or people, rather than direct military or political control (Nkrumah, 1965).
Ontology. What knowledge actually is as knowledge.
Others, othering, the Other. Individuals or groups who are presented as not fitting with the social norms of a social group. A process which influences how people view and treat the Others and leads to in- and out-groups (Held, 2020).
Pedagogy. The interaction between students, teachers, the learning environment, and learning activities (Murphy, 1996).
Postcolonialism. The study of the socio-economic, historical, cultural, and political impacts of colonialism on colonised people and their lands. Note that "post"colonialism does not mean "after" colonialism; postcolonialism begins at the first moment of colonial contact and describes what transpires thereafter as a result of colonising or having been colonised.
Privilege. How a person's identity can afford them (often unacknowledged) advantages as a function of the group with which they identify. For example, social class, age, nationality, disability, ethnic or racial category, gender, neurodiversity, sexual orientation, and religion. One marker of privilege is that the privileged person need not even be consciously aware of that privilege and, even when aware, can neglect the awareness through a kind of "sanctioned ignorance"; whereas the absence of privilege throws up obstacles, problems, struggles, and suffering which cannot be ignored.
We recommend the following texts/resources for those who wish to explore the theme of decolonising the curriculum further (there are many more relevant resources available; these are just some that we found particularly useful):
Decolonising the Curriculum by Amrita Narang, York University (https://edta.info.yorku.ca/decolonizing-the-curriculum/, last access: 4 July 2022).
Decolonising higher education: creating space for southern sociologies of emergence by Catherine Manathunga (2020, see references).
Data availability.
No data sets were used in this article.
Author contributions. SLR initiated this work, which was shaped from discussion with and input from LL and HS. SLR drafted much of the manuscript; all co-authors were involved in co-creating and collaborating on discussing, writing, revising, and restructuring the work.
Competing interests. At least one of the (co-)authors is a member of the editorial board of Geoscience Communication. The peer-review process was guided by an independent editor, and the authors also have no other competing interests to declare.
Disclaimer. Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Review statement. This paper was edited by Rebecca Priestley and reviewed by two anonymous referees.
Decolonising Curricula and Pedagogy in Higher Education: Bringing Decolonial Theory into Contact with Teaching Practice (ThirdWorlds) by Shannon Morreira, Kathy Luckett, et al.
Towards Decolonising the University: A Kaleidoscope for Empowered Action by Dave S.P. Thomas and Jivraj Suhraiya.
Decolonising Intercultural Education: Colonial differences, the geopolitics of knowledge, and inter-epistemic dialogue (Routledge Research in International and Comparative Education) by Robert Aman.
Dismantling Race in Higher Education: Racism, Whiteness and Decolonising the Academy by Jason Arday and Heidi Safia Mirza.
Re-imagining Curriculum: Spaces for disruption by Lynn Quinn.
Decolonising the University: The Challenge of Deep Cognitive Justice by Boaventura de Sousa Santos.
Decolonizing Geography: an introduction by Sarah Radcliffe.
Recognising Geology's Colonial History for Better Policy Today by Maddy Nyblade and Jenn McDonald (2021, see references).
[Displaced table fragment: early scholars of the Earth cited in the paper -Theophrastus, 371-287 BCE, an Ancient Greek philosopher (Cuvier, 1830); Pliny the Elder, 23/24-79 CE, an Ancient Roman natural philosopher (Pliny the Elder, 1855); Abu al-Rayhan al-Biruni, 973-1048 CE, an Iranian scholar (Asimov and Bedworth, 1998); and Shen Kuo or Shen Gua, courtesy name Cunzhong and pseudonym Mengqi/Mengxi Weng, 1031-1095 CE, a Chinese polymath.]
Teach responsible resource extraction. Emphasis should be placed on ethical, sustainable extraction and exploration. Cultural considerations should be embedded into our curricula (e.g. land ownership works in very different ways around the globe). Curricula should encourage students to explore where the majority of material extraction occurs vs. the abundance of the material globally. Explore what local environmental and human rights look like and compare the price of commodities and where those materials are being consumed. | 2021-10-18T17:13:16.200Z | 2021-10-04T00:00:00.000 | {
"year": 2022,
"sha1": "6874cb63e023b3004f18f65b388a03b87e196c44",
"oa_license": "CCBY",
"oa_url": "https://gc.copernicus.org/articles/5/189/2022/gc-5-189-2022.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "fc0abf3bd2a679f895cf6456c161e04c4789cf7f",
"s2fieldsofstudy": [
"Geology",
"Education"
],
"extfieldsofstudy": []
} |
233928794 | pes2o/s2orc | v3-fos-license | Numerical Study of B-Screw Ship Propeller Performance: Effect of Tubercle Leading Edge
Abstract: Various attempts to modify the ship's propeller have been made to improve its performance as a propulsion component. This paper analyzes the effect of modifying a B-Series propeller by adopting the fin shape of the humpback whale, and also analyzes the flow around the propeller before (standard) and after modification. Modifications are made to the leading edge, producing what is called a tubercle leading edge (TLE): sections are added and subtracted along the leading edge with a wavelength of 0.2R and an amplitude of 2.5% of the section chord length. The numerical study uses CFD at different J values (0.2, 0.4, and 0.6). It was found that the TLE modification has no significant positive effect on performance; instead, performance decreased slightly at a low J value (0.2). The largest decrease occurred at a high J value (0.6), namely up to 10.4% for thrust, 4.3% for torque, and 6.4% for efficiency. At J=0.4, the torque increased by only 0.4%, while the thrust and efficiency decreased, although less significantly. The flow analysis indicates that the TLE shape lowers the pressure. On the positive side, however, this modification reduces noise on the propeller surface. Keywords: CFD, leading-edge, propeller, simulation, tubercle.
I. INTRODUCTION
The main propulsion on various types of ships generally uses a propeller. The type of propeller used varies according to the needs and type of ship operated. Tugboats, for example, usually use a Kaplan-type propeller with a ducted nozzle to increase power, because these ships require the ability to push with great force at low or zero speed, the so-called bollard pull. Meanwhile, the typical propeller used for commercial ships is the B-series [1].
Various modifications have been made to improve the propeller's performance (thrust, torque, and efficiency); the results of cavitation and vibration tests are also needed [2]-[4]. The parts most often engineered on a propeller are the blade, the hub, and added components. The propeller's geometry applies Bernoulli's law: there are differences between the face area and the back area that provide lift. A difference can also be seen between the leading and trailing edges, and the shape of the root in direct contact with the hub differs from that of the propeller's tip. Variable-pitch propellers are currently more commonly used because of their main advantage in increasing efficiency [5].
The leading edge is attractive because it is the first part of the blade with which the fluid interacts. When the fluid hits this part, differences in pressure contours and even in flow shape can arise. A sinusoidal modification resembling the fin of a humpback whale is interesting to apply to the propeller. Several researchers have observed a possible increase in propeller blade performance using numerical and experimental tests [6]-[8]. Visually, at a low Reynolds number, the flow at the leading-edge protrusions is dominated by streamwise structures. The boundary layer slowly blends with the blade wall adjacent to the protrusion peak position [9]. Almost all researchers have examined this modification at Reynolds numbers below 1 × 10^6 [10]; that is, the applications observed have covered various fluid velocities and densities.
Tests of the tubercle leading edge (TLE) shape, on the tip and on the entire leading edge of a current turbine, have been carried out at the Emerson Cavitation Tunnel (ECT), Newcastle University. The research conducted by Shi et al. [11] used NREL S814 turbines with a diameter of 400 mm and a modified TLE amplitude of 10% of the chord. The experiments show that the TLE modification provides greater force, torque, and thrust than the unmodified turbine, but only under slow experimental conditions. Lower pitch angles appear to improve the performance of the modified blades.
The research continued by examining the flow using PIV (Particle Image Velocimetry), in which the flow is analyzed with a camera directed at the tunnel [12]. Two methods were used to analyze the results of this trial, namely 2D PIV and stereo PIV, which differ in the number of vectors observed. Experiments at several tip speed ratios (TSR) found that in slow conditions (TSR=2) the flow separation is more directional and increases the starting torque, while optimum (TSR=3) and overspeed (TSR=5) conditions can weaken the tip vortex. These trials showed that the more protrusions in the TLE, the greater the power and thrust coefficients in slow conditions.
The application of TLE to ship propellers is different because the leading edge is not straight, as it is on a foil or turbine. Numerical and experimental tests have been conducted using variations of the advance ratio (J). The numerical tests found that torque and thrust increase at low J values, while at high J values the efficiency increases slightly; at low J, however, the efficiency values are small [13]. These studies only used a speed of 2 m/s, and further research at high speeds is recommended. From these experiments, it can be concluded that TLE affects propeller performance.
Several types of propeller blades have already been tested to determine the performance effects of TLE. This paper is intended to determine how, specifically, TLE affects the B-Screw type of propeller. First, the propeller's torque, thrust, and efficiency at several J values are observed and compared with the unmodified variant. The effect of the modification on the propeller flow and noise generation is also discussed.
II. METHOD
This study used a standard B-Screw propeller as the main comparison with the modified TLE propeller. Both are studied numerically using the CFD simulation method at several J values. The primary quantities identified in the study are propeller performance, flow characteristics, and noise. In this section, the propeller modeling is described for the standard and TLE shapes, including the initial standard propeller design, its specification, and the TLE design details. The numerical study method is also specified, including the boundary conditions, meshing, and simulation conditions.
A. Propeller Modification Design
The study uses two propeller models, one without modification (standard) and one with modification (TLE). A B-Screw propeller is used as the standard, with the specifications given in Table 1. As mentioned in Table 1, the propeller is type B4-50, which means it has four blades and a blade area ratio of 50%. The propeller diameter (Db) is about 1.96 m. The pitch is about 1.37 m, which results in a pitch-to-diameter ratio (P/Db) of about 0.7. The propeller runs at 395 RPM.
Modifications are made by changing the leading edge with an amplitude of about 2.5% of the chord and a wavelength of about 0.2R, as in Table 2. The entire leading edge is formed sinusoidally according to the shape of the propeller. Each section's position is adjusted without shifting the centerline of the propeller blade: when a protrusion of "x" is formed, the section's center position is adjusted by shifting by minus x/2, and vice versa. Figure 1 shows the standard propeller blade (Figure 1a) and the TLE shape (Figure 1b). Apart from the leading edge, there is no difference between the TLE propeller surface and the normal propeller surface: when one section forms a peak, the adjacent section forms a valley. There may, however, be a slight difference in blade volume. The design is refined through a meshing process into the three-dimensional shape to be tested. Eight sinusoidal peaks were generated on the modified propeller.
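As a rough illustration of the geometry just described, the Python sketch below generates the sinusoidal leading-edge offsets for a set of blade sections, using the stated amplitude (2.5% of the local chord) and wavelength (0.2R), and shifts each section centre by minus x/2 so the blade centreline stays fixed. The station spacing and chord values are placeholders, not the actual B4-50 section data.

```python
import numpy as np

def tle_offsets(r, R, chord, amplitude_frac=0.025, wavelength_frac=0.2):
    """Sinusoidal tubercle leading-edge offsets at radial stations r.

    r              : radial positions of the blade sections
    R              : propeller radius
    chord          : local chord length at each station
    amplitude_frac : tubercle amplitude as a fraction of the local chord
    wavelength_frac: tubercle wavelength as a fraction of R (0.2R here)

    Returns the leading-edge extension x (positive = peak, negative =
    valley) and the shift (-x/2) applied to each section centre so the
    blade centreline is not displaced.
    """
    x = amplitude_frac * chord * np.sin(2.0 * np.pi * r / (wavelength_frac * R))
    return x, -x / 2.0

R = 1.96 / 2.0                    # blade radius from Db = 1.96 m
r = np.linspace(0.2 * R, R, 9)    # placeholder radial stations
chord = np.full_like(r, 0.35)     # placeholder constant chord [m]
x, centre_shift = tle_offsets(r, R, chord)
for ri, xi in zip(r, x):
    print(f"r/R = {ri / R:4.2f}   LE offset = {xi * 1000:+6.1f} mm")
```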
B. Numerical Study
The numerical testing approach for the TLE modification uses CFD software with an open-water test arrangement formed from two boundaries (Figure 2). The outer boundary is a fixed boundary for the water flow, and the inner boundary is a rotating boundary for the rotating propeller. The specific approach for the boundaries is to use a Multiple Reference Frame (MRF), as is usual for testing ship propellers. In this study, the "coarse" meshing type was selected for propeller testing; it was chosen for this initial study because it requires less time to run the simulation.
The overall performance of the propeller can be calculated from the values of thrust (T) (Eq. 1), torque (Q) (Eq. 2), and efficiency (η) (Eq. 3). These data are obtained during numerical testing at each advance coefficient (J) (Eq. 4); in this test, J = 0.2, 0.4, and 0.6, so the Va values tested are correspondingly 2.6, 5.2, and 7.7 m/s. The thrust coefficient, the torque coefficient, and the efficiency accumulated from these two coefficients were then obtained.
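Equations (1)-(4) are referenced above but were not carried over into this text. Assuming the paper uses the standard open-water definitions, KT = T/(ρn^2D^4), KQ = Q/(ρn^2D^5), η0 = (J/2π)·(KT/KQ), and J = Va/(nD), the short sketch below recovers the quoted advance speeds and pitch ratio from the propeller particulars; the water density is an assumed value, since it is not stated in the paper.

```python
import math

RHO = 998.0          # fresh-water density [kg/m^3]; assumed, not stated
D   = 1.96           # propeller diameter Db [m]
N   = 395.0 / 60.0   # shaft speed [rev/s] from 395 RPM

def advance_ratio(Va):
    """J = Va / (n D), the standard form of Eq. (4)."""
    return Va / (N * D)

def thrust_coefficient(T):
    """KT = T / (rho n^2 D^4), the standard form of Eq. (1)."""
    return T / (RHO * N**2 * D**4)

def torque_coefficient(Q):
    """KQ = Q / (rho n^2 D^5), the standard form of Eq. (2)."""
    return Q / (RHO * N**2 * D**5)

def open_water_efficiency(KT, KQ, J):
    """eta0 = (J / 2 pi) * KT / KQ, the standard form of Eq. (3)."""
    return (J / (2.0 * math.pi)) * KT / KQ

# The quoted advance speeds reproduce the tested advance coefficients:
for Va in (2.6, 5.2, 7.7):
    print(f"Va = {Va:3.1f} m/s  ->  J = {advance_ratio(Va):.2f}")
# ... and the quoted pitch ratio follows from the particulars:
print(f"P/Db = {1.37 / 1.96:.2f}")   # -> 0.70
```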
III. RESULTS AND DISCUSSION

A. Propeller Performance
After the modification, the overall propeller performance decreases in terms of thrust, torque, and efficiency. Figure 3 shows the effect of the TLE design on propeller performance (thrust, torque, and efficiency) compared to the standard propeller design. The thrust coefficient (KT) decreases by up to 10.4% at J=0.6, with a smaller decrease at J=0.2; at J=0.4, the thrust decreases by only 1.1%.
Moreover, the torque decreases by up to 4.3% at J=0.6, with a smaller decrease at J=0.2, while it increases by only 0.4% at J=0.4. From the accumulated thrust and torque values, the efficiency decreases by up to 6.4% at J=0.6 and by only 1.5% at J=0.4. The smallest decrease thus occurs at J=0.4, while the decrease at J=0.6 is almost double that at J=0.2.
The decrease in propeller performance is most likely due to the propeller surface area being reduced by the modification, since some sections are reduced in size relative to others. The modification also causes a streamwise waveform whose boundary layers coalesce due to differences in section thickness. The surface area is noticeably reduced, and the overall volume decreases. The propeller performance is also affected by the surface flow, especially at the leading edge after the modification; the details of the flow effect on the blade are discussed in the next section.
B. Total Pressure on The Propeller
[Figure 4: The total pressure on the standard and tubercle propeller surfaces.]
Figure 4 shows the total pressure pattern on the back surface of the standard and TLE propellers at several J values. Overall, the total pressure is higher at the trailing edge and lower at the leading edge for both the standard and TLE propellers. This is evident from the blade shape, which is thicker at the trailing edge. The trailing-edge flow becomes faster than the leading-edge flow due to Bernoulli's law, increasing the total pressure. The total pressure at the leading edge is initially high due to the initial force of the fluid separating over the back and face surfaces. Moreover, a wider surface with low total pressure is detected near the propeller tip. The blade shape near the propeller tip is thicker than near the boss cap; thus, the fluid flows freely over the surface and lowers the total pressure. Figure 4 also shows that, for both the standard and TLE propellers, a high J value lowers the total pressure on the propeller surface. The fluid velocity is proportional to the J value: at a high J value, the fluid velocity is higher than at a low J value, leading to lower pressure, in accordance with Bernoulli's law.
The TLE design, however, lowers the total pressure at the leading edge, with a more expansive region of low total pressure on the blade. A wider low-total-pressure surface appears at higher J values. The TLE surface increases the flow velocity at the leading edge, thus lowering the total pressure.
C. Reynolds Number on The Propeller
Figure 5 depicts the Reynolds number (Re) of the flow through the propeller blades (standard and TLE) at several J values. It shows that Re is low at the leading edge and gradually increases toward the trailing edge. This means that the flow tends to be laminar at the leading edge and turbulent at the trailing edge.
A high J value leads to an increase in surface turbulence on the standard propeller, whereas the TLE leads to lower turbulence. At a low J value, the standard propeller has a broader area of low Re at the leading edge; this area shrinks at higher J values, but the opposite holds for the TLE. Moreover, the TLE propeller's low-Re area tends to be small and roughly constant at all J values. This is due to the TLE shape, which separates the fluid flow and generates more turbulent flow at the blade surface.
D. Power Surface Acoustic on The Propeller
Figure 6 shows the surface acoustic power on both the standard and TLE propellers. The surface acoustic power indicates the noise generated at the propeller surface. The noise decreases just after the leading edge on both the standard and TLE propellers, owing to the low pressure near the leading edge, and then gradually increases toward the trailing edge (particularly near the trailing edge). At low fluid pressure, the flow experiences high resistance; thus, the noise increases. A higher J value increases the fluid velocity and thus lowers the pressure over the blade; the fluid resistance therefore increases, leading to more noise. At the highest J value (J=0.6), however, the noise decreases slightly near the leading edge, because the low pressure leads to very high velocity and thus less turbulence.
Moreover, the noise at the propeller tip is higher than near the boss cap, indicating that fluid turbulence forms significantly at the propeller tip rather than near the boss cap. As Figure 6 shows, however, the TLE design at the leading edge decreases the turbulence intensity and thus decreases the noise.
IV. CONCLUSION
This research analyzes the effect of the TLE propeller design on performance, total pressure, and noise, compared to the standard design, at several J values. A CFD modeling method is used to identify the effect of the TLE design. Several conclusions follow from the identification: a. The streamwise shape at the leading edge (TLE) significantly impacts the flow produced by propeller rotation at several advance ratios (J). b. The TLE shape slightly decreases performance at low J (0.2) and high J (0.6) in torque, thrust, and efficiency. At J=0.4, the torque value increases, although not significantly, by 0.4%; the thrust decreases only slightly, by 1.1%, and the efficiency decreases by around 1.5%. These values are much smaller than the declines at the other J values. The largest decrease was at J=0.6, with torque decreasing by 4.3%, thrust by 10.4%, and efficiency by 6.4%. c. The propeller's pressure and noise decrease owing to the flow separation caused by the sinusoidal shape.
Overall, the TLE propeller modification provides the advantage of a more regular flow with reduced noise. In terms of performance, however, it decreases slightly at the same dimensions. It may be possible to obtain the same performance and these benefits with a larger surface area. | 2021-05-08T00:02:43.751Z | 2021-02-26T00:00:00.000 | {
"year": 2021,
"sha1": "b9b189bd3df7deea9998c2ffd65c3913e6c3aa11",
"oa_license": "CCBYSA",
"oa_url": "https://iptek.its.ac.id/index.php/ijmeir/article/download/8702/pdf_43",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "aded24167f74ab613aadcd26b59a6fa8f68ce485",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
11039779 | pes2o/s2orc | v3-fos-license | Isolation, Characterization, and Multipotent Differentiation of Mesenchymal Stem Cells Derived from Meniscal Debris
This study aimed to culture and characterize mesenchymal stem cells derived from meniscal debris. Cells in meniscal debris from patients with meniscal injury were isolated by enzymatic digestion, cultured in vitro to the third passage, and analyzed by light microscopy to observe morphology and growth. Third-passage cultures were also analyzed for immunophenotype and ability to differentiate into osteogenic, adipogenic, and chondrogenic lineages. After 4-5 days in culture, cells showed a long fusiform shape and adhered to the plastic walls. After 10–12 days, cell clusters and colonies were observed. Third-passage cells showed uniform morphology and good proliferation. They expressed CD44, CD90, and CD105 but were negative for CD34 and CD45. Cultures induced to differentiate via osteogenesis became positive for Alizarin Red staining as well as alkaline phosphatase activity. Cultures induced to undergo adipogenesis were positive for Oil Red O staining. Cultures induced to undergo chondrogenesis were positive for staining with Toluidine Blue, Alcian Blue, and type II collagen immunohistochemistry, indicating cartilage-specific matrix. These results indicate that the cells we cultured from meniscal debris are mesenchymal stem cells capable of differentiating along three lineages. These stem cells may be valuable source for meniscal regeneration.
Introduction
The meniscus plays an important role, biochemically and biomechanically, in maintaining homeostasis of the knee [1]. Meniscus tears occur frequently as a result of aging or sports activity, and loss or damage of the meniscal structure can lead to knee degeneration [2]. However, the standard treatment for meniscal injury, partial or total meniscectomy, often leads to knee instability, degeneration, and dysfunction [3]. Alternatives are available but they come with several disadvantages. Suture repair is rarely used because of the limited blood supply in the remaining meniscal tissue and the relatively complex mechanical environment [4]. Meniscal allograft transplantation is considered the gold standard for treating patients with more serious meniscal defects and patients who have undergone meniscectomy, but long-term follow-up has shown limited and superficial host cell ingrowth. Transplantation also faces numerous challenges, including integrating the allograft with host tissue, obtaining limited donor tissue, preserving and processing donor tissue, achieving a match between donor and host meniscal size, fixing the grafted tissue, and avoiding infection and immune rejection [5]. The recently described procedure of meniscal xenogeneic transplantation shows promise for avoiding some of these problems, but much work is needed before it can be applied in the clinic [6].
An alternative to transplantation is meniscal regeneration through cell-based tissue engineering [7,8]. The two main sources of cells for tissue engineering of meniscus are meniscal fibrocartilage cells and mesenchymal stem cells (MSCs) [9]. Autologous meniscal fibrocartilage cells derived from meniscectomized debris would in principle make the ideal seed cells for meniscal regeneration because the cells already have the proper cellular phenotype and they offer the potential for autologous therapy, with no additional risk of morbidity or immune rejection [10]. However, obtaining these cells requires two surgical procedures and the number of cells obtained is usually insufficient for seeding meniscal regeneration; in addition, the safety and efficacy of using meniscectomized debris in the clinic are unclear. Even more importantly, it is unclear whether these terminally differentiated mature fibrocartilage cells can function adequately in the long term in a rebuilt meniscus, since experiments in vitro have shown that their proliferative ability and biological activity gradually decrease with monolayer expansion and passaging, leading to loss of the phenotypic characteristics of primary meniscal fibrochondrocytes [11,12]. Adult MSCs from mesoderm may prove even more effective for meniscal regeneration because they self-renew and show multilineage differentiation potential, immunomodulatory effects, and homing ability [13][14][15]. MSCs can differentiate into different mesenchymal or nonmesenchymal tissues under appropriate conditions in vitro and in vivo [16]. On the other hand, the cells do have disadvantages: they may undergo spontaneous transformation or senescence [17][18][19][20], and they may exhibit hypertrophy during chondrogenesis, resulting in apoptosis and ossification [21,22]. Cartilage formed by MSCs has inferior content in extracellular matrix and does not provide the same mechanical properties as cartilage formed by mature chondrocytes [23]. In addition, MSCs account for only 0.001-0.01% of mononuclear cells in bone marrow [24]. It would therefore be useful to identify alternative sources of MSCs for meniscal repair and reconstruction.
MSCs are found in nearly all tissues of the body, not just in bone marrow [25]. For example, MSCs have been isolated from adipose tissue and synovium for meniscal regeneration [26,27], but harvesting cells from these sources is associated with donor site morbidity. More importantly, these cells give unsatisfactory outcomes for meniscal regeneration because the regulatory pathways controlling their differentiation are poorly understood. Studies suggest that the injured meniscus itself may show certain healing potential [28], so we speculate that MSCs exist within the meniscal debris. These cells may offer the combined advantages of both meniscal fibrocartilage cells and mesenchymal cells.
To explore whether meniscal debris is a good source of seed cells for meniscal regeneration, we used enzymatic digestion to isolate cells in meniscal debris from patients with meniscal injury, and then we identified these cells as MSCs based on their adherent ability, morphology, phenotype, and multilineage differentiation potential. Such cells may provide adequate amounts of seed cells for meniscal regeneration.
Materials and Methods

Isolation and Culture of MSCs from Meniscal Debris

This study was approved by the local Research Ethics Committee, and written informed consent was obtained from all patients. Meniscal debris was collected from 6 patients with meniscal tears (3 men and 3 women; 6 knees, comprising 4 left and 2 right), who underwent arthroscopic partial meniscectomy or plasty. They were diagnosed on the basis of clinical manifestations, magnetic resonance imaging, and arthroscopy. Their median age was 37 years (25-49). The causes of meniscus tears were related to either sport activities (5 patients) or degeneration (1 patient). There were 5 lateral and 1 medial menisci. The median time of surgery after injury was 22 months (2-60). The tears included horizontal (2 menisci), longitudinal (2 menisci, including 1 bucket-handle tear), flap (1 meniscus), and complex (1 meniscus, horizontal + longitudinal) tears. The tear areas were located either in the anterior horn (2 menisci) or in the meniscal body (4 menisci). The tears involved the white zone (4) or the red-white zone (2). Two patients had concomitant cartilage lesions.
Meniscal debris-derived MSCs were isolated and cultured as previously described [29]. Briefly, meniscal debris was rinsed with phosphate-buffered saline (PBS) containing 1% gentamicin to remove surrounding tissue, then minced into 0.5-mm^3 pieces, and digested with 2 mg/mL of a 1:1 mixture of collagenase type I and type II for 4-6 h at 37 °C. The isolated cells were washed three times with PBS and suspended at a concentration of 1 × 10^6 cells/mL in complete low-glucose Dulbecco's modified Eagle's medium (LG-DMEM) containing 10% fetal bovine serum (FBS), 2.2 g NaHCO3, 100 U/mL penicillin, 100 μg/mL streptomycin, 25 ng/mL amphotericin B, and 2 mM L-glutamine at 37 °C in 5% humidified CO2. After 48-72 h, nonadherent cells were washed away with PBS. The medium was replaced every 3-4 days. When the primary cultures reached 80-90% confluence, they were trypsinized with 0.25% trypsin/0.1% EDTA and passaged by splitting at a ratio of 1:2 or 1:3. Third-passage (P3) cultures were utilized for subsequent experiments.
Phenotypic Characteristics of MSCs from Meniscal Debris.
Potential markers expressed on the surface of third-passage MSCs derived from meniscal debris were analyzed by flow cytometry. Cells were harvested with trypsin/EDTA, then were incubated for 1 h with FITC-conjugated antibodies against CD44 (Abcam), or purified primary antibodies against CD90, CD105, CD34, or CD45 (Abcam). Next, cells were labeled for 30 min with FITC-conjugated secondary antibody. In parallel, cells were incubated with nonspecific mouse IgG instead of primary antibody to detect nonspecific staining. Then cells were fixed in flow buffer, washed, and subjected to flow cytometry and results were analyzed using Cell Quest software (BD Biosciences). The results were expressed as percentages of positive cells on histogram plots relative to the proportions obtained with the isotype-matched negative control.
Trilineage Differentiation Potential of MSCs Derived from Meniscal Debris

The potential of third-passage MSCs from meniscal debris to differentiate along three lineages was examined, as previously described [30].
Osteogenesis.
MSCs were grown to 80-90% confluence and then induced for 2 weeks in osteogenic medium supplemented with 0.1 μM dexamethasone, 10 mM β-glycerol phosphate, and 50 μM ascorbate. Cultures were considered positive for osteogenesis if they showed alkaline phosphatase (ALP) activity and the presence of Alizarin Red-positive calcium deposits.
Adipogenesis.
MSCs were induced for 2 weeks in adipogenic medium consisting of 1 μM dexamethasone, 0.5 mM methyl-isobutylxanthine, 10 μg/mL insulin, and 100 μM indomethacin. Cultures were considered positive for adipogenesis if they showed the accumulation of Oil Red O-stained lipid vacuoles within the cytoplasm.
Chondrogenesis.
MSCs (10^6 cells) were collected in 15-mL polypropylene centrifuge tubes, centrifuged at 480 × g for 10 min, and cultured as micromass pellets for 3 weeks at 37 °C with 5% CO2 in high-glucose DMEM supplemented with 100x ITS, 1 mmol/L pyruvate, 0.17 mmol/L ascorbate, 0.1 μM dexamethasone, 0.35 mmol/L proline, and 10 ng/mL TGF-β3. Sagittal sections were processed with hematoxylin and eosin to reveal general histology, Toluidine Blue and Alcian Blue to detect glycosaminoglycan (GAG), and immunohistochemistry to detect expression of type II collagen (Col-II).
Real-Time Quantitative PCR.
After inducing MSC cultures to follow one of the three lineages described above, they were assayed for expression of specific markers for extracellular matrix and transcription factors using real-time quantitative PCR. Total RNA from samples was extracted using RNAVzol reagent (Vigrous) according to the manufacturer's instructions. The concentration of each RNA sample was measured by UV spectrophotometry, and the integrity of RNA samples was assessed by agarose gel electrophoresis. Total RNA (2 μg) was subjected to reverse transcription using the Superscript First Strand Synthesis System (Invitrogen). PCR primers were designed using Oligo6 primer analysis software (Table 1). Quantitative real-time PCR was performed in 15-μL reactions consisting of 7.5 μL 2x SYBR Green PCR Master Mix (Toyobo), 1 μL cDNA product, 1 μL of each primer, and 5.5 μL of nuclease-free water. All PCRs were performed under the following conditions: 2 min at 50 °C, 10 min at 95 °C, and 40 cycles of 15 s at 95 °C and 1 min at 60 °C. ABI Prism 7000 Sequence Detection System software was used to perform melting curve analysis to verify amplification specificity and to determine mRNA levels using the comparative cycle threshold (Ct) method. Expression levels of mRNA were normalized to that of GAPDH mRNA using the same Ct method.
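As a minimal sketch of the comparative Ct method mentioned above, the following assumes the standard 2^(-ΔΔCt) formulation with GAPDH as the reference gene; the Ct values used are purely illustrative, not study data.

```python
def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Comparative Ct (2^-ddCt) relative expression.

    dCt  = Ct(target) - Ct(GAPDH) within each sample
    ddCt = dCt(induced sample) - dCt(uninduced control)
    """
    d_ct_sample = ct_target - ct_gapdh
    d_ct_ctrl = ct_target_ctrl - ct_gapdh_ctrl
    dd_ct = d_ct_sample - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Illustrative Ct values only (not taken from the study):
fold = relative_expression(ct_target=24.1, ct_gapdh=18.0,
                           ct_target_ctrl=27.6, ct_gapdh_ctrl=18.2)
print(f"Fold change vs. uninduced control: {fold:.1f}x")   # -> 9.8x
```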
Statistical Analysis.
Expression levels of each mRNA were reported as mean ± SD (n = 6). Student's t-test was used to assess the significance of differences between induced cultures on a given day and uninduced controls on day 0. P < .05 was considered statistically significant.
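For illustration, the comparison described above could be run as follows, assuming an unpaired two-sample Student's t-test at the 0.05 level; the expression values are invented for the example.

```python
import numpy as np
from scipy import stats

# Illustrative relative-expression values (n = 6 per group; not study data)
induced   = np.array([8.9, 10.2, 9.5, 11.0, 9.8, 10.4])
uninduced = np.array([1.0, 1.1, 0.9, 1.2, 1.0, 0.8])

t_stat, p_value = stats.ttest_ind(induced, uninduced)
print(f"t = {t_stat:.2f}, P = {p_value:.2e}, significant = {p_value < 0.05}")
```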
Results

Isolation and Culture of MSCs Derived from Meniscal Debris

Nucleated cells were isolated from the meniscal debris using collagenase. After primary cultures had been in the incubator for 2-3 d, the isolated round cells gradually spread out and adhered to the culture dish. By 4-5 d, cells exhibited typical spindle-shaped, fibroblast-like morphology. As time went on, the cells grew more rapidly and took on a swirling or cluster appearance. By 10-12 d, cultures reached 80-90% confluence, whereupon they were subcultured 1:2 or 1:3. Following the first subculturing, approximately 4-5 days were needed for each passage. Cell morphology remained homogeneous until P3 (Figure 1). Flow cytometry showed that third-passage cells were positive for CD44, CD90, and CD105 and negative for the haematopoietic markers CD34 and CD45 (Figure 2).
Trilineage Differentiation Potential of MSCs Derived from Meniscal Debris

Third-passage MSC cultures were subjected to in vitro differentiation assays in order to investigate their mesenchymal multipotency. When cultures were induced to undergo osteogenesis, the cells began to gather and become sparse, changing from a spindle-shaped morphology to a more polygonal one. After 1-2 weeks, the nodules became larger and scattered uniformly. After 14 d, cultures were stained with Alizarin Red (Figure 3(a)) to detect calcification and stained for ALP (Figure 3(b)) to detect the presence of osteoblasts. These tests showed that the MSCs secreted bone matrices and the osteoblast marker ALP. Bone matrices were stained red in the presence of Alizarin Red, and the cultures showed scattered, uniformly distributed mineral nodules. Quantitative PCR analysis further showed that expression levels of the osteogenic-specific genes Runx2, ALP, OCN, and Col-I were significantly higher in induced cultures than in uninduced ones (Figures 3(c)-3(f)). When third-passage cultures were induced to undergo adipogenesis, the volumes of cells and nuclei increased, and intracellular lipid droplets became visible in the cytoplasm by microscopy (Figure 4(a)). These droplets were confirmed to be lipid vacuoles because they stained red with Oil Red O (Figure 4(b)). Lipids continued to accumulate over 2 weeks. Quantitative PCR analysis further showed that expression levels of the adipogenic-specific genes PPAR-γ, Adiponectin, LPL, PAS, and aP2 were significantly higher in induced cultures than in uninduced ones (Figures 4(c)-4(g)).
When third-passage cultures were induced to undergo chondrogenesis, the pellets changed to spheroids (Figure 5(a)). Hematoxylin and eosin staining (Figure 5(b)) revealed the general histology of the pellets, and staining with Toluidine Blue also revealed a considerable degree of metachromasia. Quantitative PCR analysis further showed that expression levels of the chondrogenic-specific genes SOX-9, Col-II, and GAG were significantly higher in induced cultures than in uninduced ones (Figures 5(g)-5(i)).
Discussion
In this study, we successfully isolated adherent cells from meniscal tear debris; the cells had a morphology typical of fibroblasts and they expressed a characteristic mesenchymal phenotype, with no expression of hematopoietic surface markers. It was possible to effectively induce the cultured cells to differentiate into osteogenic, adipogenic, or chondrogenic lineages. The cells were identified as MSCs based on their morphology, surface marker expression, ability to adhere to culture surfaces, and multilineage differentiation potential. Our findings suggest that meniscal debris may be a useful source of seeding cells for meniscal regeneration. Some studies showed that cells in the vascular periphery of a meniscal injury, or even in the avascular area, could spontaneously heal the tissue damage [31]. This suggested that meniscal debris may contain stem or progenitor cells that can participate in meniscal regeneration. These cells may come from several sources. One possible source is a nearby area such as synovial fluid: MSCs are present in greater numbers in synovial fluid after meniscus injury than in normal knees, even when the injury is within the avascular area [32]. Another possible source of MSCs is the meniscus itself. Following injury or certain pathological conditions, MSCs may be activated and recruited to injured tissues, where they assist in homeostasis, remodeling, and repair by replacing mature cells that have been lost and by exerting paracrine effects to recruit cells to the site of injury [33]. MSC populations may have multiple origins (systemic or local), as suggested by studies in which MSCs were shown to arise from diverse mesenchymal lineages [34,35]. Further support for the existence of MSCs in meniscal tissue comes from their use in investigating the pathogenesis of meniscal calcification or ossicles [36,37].
Whatever the origin of the MSCs in our meniscal debris samples, their properties are similar to those previously reported for human meniscus stem/progenitor cells, which displayed characteristics of MSCs and expressed high levels of Col-II [38,39]. Those cells promoted meniscus regeneration and ameliorated osteoarthritis through SDF-1/CXCR4-mediated homing in a rat model of meniscus injury. How meniscal tissue-specific MSCs are activated to regenerate meniscal tissue, and what mechanisms they use during that regeneration, are still unclear. Whatever the pathways involved, our results suggest that meniscal tissue-specific MSCs may be superior to terminally differentiated mature cells and to MSCs derived from other sources for meniscal regeneration.
Conclusions
We have isolated cells from human meniscal debris that were fibroblast-like and that were able to adhere to plastic and undergo several passages in vitro. The cells showed a distribution of surface markers similar to that previously reported for MSCs. They were also efficiently induced to differentiate into osteoblasts that produced mineralized matrix, adipocytes that accumulated lipid vacuoles, and chondrocytes that produced GAG and Col-II. Real-time PCR analysis confirmed that each differentiated lineage upregulated the corresponding genes for osteogenesis, adipogenesis, or chondrogenesis. Our study demonstrated the existence of MSCs in meniscal debris and showed that they can be cultured and differentiated, opening the door to studies examining their potential for meniscal regeneration. | 2018-04-03T00:20:47.045Z | 2016-12-04T00:00:00.000 | {
"year": 2016,
"sha1": "ae6183641e5712bb055160bdb4a7d4434d953234",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/sci/2016/5093725.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "696434cf276effa4e4ff57c3a633175accfbc912",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
209541083 | pes2o/s2orc | v3-fos-license | Familial Hypercholesterolaemia in a Bulgarian Population of Patients with Dyslipidaemia and Diabetes: An Observational Study
Introduction Patients with diabetes and familial hypercholesterolaemia (FH) are at very high risk of cardiovascular events, but rates of FH detection are very low in most countries, including Bulgaria. Given the lack of relevant data in the literature, we conducted a retrospective observational study to (1) identify individuals with previously undiagnosed FH among patients being treated at Bulgarian diabetes centres, and (2) gain insight into current management and attainment of low-density lipoprotein cholesterol (LDL-C) goals in such patients. Methods From a database of diabetes centres across Bulgaria we retrieved medical records from patients aged ≥ 18 years with type 1/2 diabetes mellitus (T1DM/T2DM) who were being treated with insulin/insulin analogues, dipeptidyl peptidase 4 inhibitors, glucagon-like peptide 1 receptor agonists and/or sodium-glucose co-transporter-2 inhibitors. Patients with FH (Dutch Lipid Clinic Network score ≥ 3) were identified, and their data analyzed (lipid-modifying therapy (LMT), diabetes treatment, cardiovascular events and glycaemic and lipid parameters). Results A total of 450 diabetic patients with FH (92.0% with T2DM; 52.4% receiving insulin/insulin analogues) were included in the analysis. LMT consisted of statin monotherapy (86% of patients; 18% receiving high-intensity statin monotherapy), statin-based combination therapy (13%) or fenofibrate (< 1%). Median LDL-C was 4.4 mmol/L. Although 30% of patients had a glycated haemoglobin level of ≤ 7%, only one patient (< 1%) achieved the LDL-C target recommended in 2016 European guidelines for very high-risk patients (< 1.8 mmol/L). Previous cardiovascular events were documented in 40% of patients. Conclusion To our knowledge, this is the first study to specifically explore lipid target achievement in diabetic patients with FH. In this preselected Bulgarian population, < 1% of patients achieved the 2016 European guideline-defined LDL-C target. These data highlight the importance of identifying FH in diabetic patients as early as possible so that they can receive appropriate treatment. Electronic Supplementary Material The online version of this article (10.1007/s13300-019-00748-2) contains supplementary material, which is available to authorized users.
INTRODUCTION
Atherosclerotic cardiovascular disease (ASCVD) is the leading cause of death in patients with diabetes mellitus (DM) [1]. Indeed, those with type 1 or type 2 DM (T1DM or T2DM) have an approximately twofold higher risk of both myocardial infarction and stroke than nondiabetic individuals, as shown by a meta-analysis of 102 prospective studies conducted by the Emerging Risk Factors Collaboration [2]. Addressing major risk factors, including dyslipidaemia and hypertension, as well as the disturbed carbohydrate metabolism, is key to reducing the risk of CVD events in diabetic patients [3].
Reducing elevated total cholesterol (TC), non-high-density lipoprotein cholesterol (non-HDL-C) and, most importantly, low-density lipoprotein cholesterol (LDL-C), via lipid-modifying therapy (LMT), lowers the risk of CV events [4][5][6]. The degree of risk reduction is proportional to the degree of LDL-C reduction [7], and the European Society of Cardiology/ European Atherosclerosis Society (ESC/EAS) have set stringent LDL-C goals for patients with dyslipidaemia [8,9]. However, it is well established that a substantial proportion of patients do not manage to achieve these goals despite receiving LMT [8][9][10][11].
The presence of undiagnosed familial hypercholesterolaemia (FH) is a potential reason for failure to attain guideline lipid targets. Characterized by very high levels of LDL-C and early CVD, this disorder is caused by mutations in genes governing the LDL pathway, including the LDL receptor (LDLR), apolipoprotein B and proprotein convertase subtilisin/kexin type 9 (PCSK9). While reported prevalences of FH in various populations range from approximately 1:500 to 1:200, detection rates are very low in most countries [12,13]. In Bulgaria, FH has only recently (in 2017) been added to the list of International Statistical Classification of Diseases and Related Health Problems (ICD) codes for diseases reimbursed with public funds. The EUROASPIRE IV survey estimated the age-standardized prevalence of potential FH in coronary patients in Bulgaria to be 9% (3.7-14.2%) [14]. It is important that individuals with FH are identified as early as possible and treated appropriately, since if left untreated, they are estimated to have an up to eightfold higher CVD risk versus unaffected relatives [15]. The odds of developing CVD have been reported to be threefold higher in diabetic patients with FH versus nondiabetic patients with FH [16] and 22-fold higher in mutation-positive FH patients with LDL-C ≥ 190 mg/dL (4.91 mmol/L) versus non-FH individuals with LDL-C < 130 mg/dL (3.36 mmol/L) [17].
To date, no real-life data on the management of FH in diabetes populations are available in the literature. Moreover, healthcare data from Eastern Europe are somewhat limited, reflecting a general lack of patient registries and limited access to public/health insurance data. There is no registry of diabetic patients in Bulgaria. Accordingly, we conducted a retrospective observational study to (1) identify individuals with previously undiagnosed FH among patients being treated at Bulgarian diabetes centres, and (2) gain insight into current management and attainment of ESC/EAS-defined LDL-C goals in such patients.
METHODS
We reviewed electronic medical records from a database of Bulgarian diabetes centres that routinely treat patients with diabetes and are authorized by the National Health Insurance Fund to prescribe insulin, insulin analogues, dipeptidyl peptidase 4 (DPP4) inhibitors, glucagon-like peptide 1 receptor agonists (GLP-1 RA) and sodium-glucose co-transporter-2 (SGLT2) inhibitors. Patient records were screened to identify those with pre-treatment LDL-C levels of ≥ 4.1 mmol/L and/or TC levels of ≥ 8 mmol/L, and Dutch Lipid Clinic Network (DLCN) criteria were applied to the retrieved records to identify patients with FH (see study flow diagram in Electronic Supplementary Material [ESM]). If pretreatment LDL-C was not available, it was calculated from the most current LDL-C measurement and statin dose using a regression coefficient approved by the Bulgarian Society of Cardiology and National Health Insurance Fund [18].
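The regression coefficient of reference [18] is not reproduced in the paper, but back-calculations of this kind usually divide the on-treatment LDL-C by one minus the expected fractional reduction for the given statin and dose. The sketch below follows that general pattern with purely hypothetical reduction factors; the actual coefficients approved in Bulgaria may differ.

```python
# Hypothetical fractional LDL-C reductions by statin regimen; the actual
# regression coefficients approved by the Bulgarian Society of Cardiology
# (reference [18]) are not reproduced in the paper.
ASSUMED_REDUCTION = {
    ("atorvastatin", 20): 0.43,
    ("atorvastatin", 40): 0.49,
    ("rosuvastatin", 10): 0.46,
    ("simvastatin", 40): 0.37,
}

def pretreatment_ldl(on_treatment_ldl, statin, dose_mg):
    """Back-calculate untreated LDL-C (mmol/L) from an on-treatment value."""
    reduction = ASSUMED_REDUCTION[(statin, dose_mg)]
    return on_treatment_ldl / (1.0 - reduction)

ldl0 = pretreatment_ldl(3.1, "atorvastatin", 20)
print(f"Estimated pretreatment LDL-C: {ldl0:.1f} mmol/L")  # 3.1/0.57 = 5.4
```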
Inclusion Criteria
The medical records of patients aged ≥ 18 years with T1DM or T2DM on LMT who were being treated with insulin, insulin analogues, DPP-4 inhibitors, GLP-1 RA and/or SGLT2 inhibitors and attending a participating centre for a routine visit between 1 January 2014 and 1 September 2018 were eligible for analysis in phase 1 of this study. All eligible patients were required to have available data on LDL-C values, LMT (type and dose of medication) and documented family history. Patients identified with a DLCN score ≥ 3 (i.e. possible/probable/definite FH) were eligible for inclusion in the main analysis (phase 2). Patients with secondary dyslipidaemia were excluded, as were those with a normal lipid profile. Investigators started from the most recent eligible records and continued backwards until the required sample size of 450 patients with FH was reached.
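For orientation, the sketch below implements the LDL-C component of the DLCN score together with the usual score categories (unlikely < 3, possible 3-5, probable 6-8, definite > 8). The full DLCN score also awards points for family history, clinical history, physical examination and DNA analysis, which are omitted here for brevity.

```python
def dlcn_ldl_points(ldl_mmol_l):
    """LDL-C component of the Dutch Lipid Clinic Network score."""
    if ldl_mmol_l >= 8.5:
        return 8
    if ldl_mmol_l >= 6.5:
        return 5
    if ldl_mmol_l >= 5.0:
        return 3
    if ldl_mmol_l >= 4.0:
        return 1
    return 0

def dlcn_category(total_score):
    """Standard DLCN score bands."""
    if total_score > 8:
        return "definite FH"
    if total_score >= 6:
        return "probable FH"
    if total_score >= 3:
        return "possible FH"
    return "unlikely FH"

# A patient with untreated LDL-C of 5.4 mmol/L scores 3 points from
# LDL-C alone and already meets the study's inclusion threshold:
score = dlcn_ldl_points(5.4)
print(score, dlcn_category(score))   # -> 3 possible FH
```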
The study was conducted in accordance with all local legal and regulatory requirements and followed generally accepted research practices. In agreement with national law, the study protocol was approved by the Central Ethics Committee of the Bulgarian Regulatory Medicines Agency. Due to the retrospective nature of this study informed consent was not required.
Data Collection
For patients identified as having FH as per DLCN criteria, baseline characteristics collected in phase 2 included medical status at the date of enrolment and all relevant clinical history up to that date. The following data were collected:
Study Endpoints
The primary objective was to estimate the proportion of patients with FH and diabetes who achieved LDL-C levels < 1.8 mmol/L, i.e. the ESC/EAS goal applicable to very high risk (VHR) patients at the time [9], based on the most recent LDL-C value available. Secondary objectives included (1) describing the clinical characteristics of patients with FH and diabetes; (2) identifying the parameters of clinical management of patients, including statin dose and intensity and use of statin-based combination treatments; (3) determining absolute LDL-C levels and (4) analyzing hospitalizations for CVD. Moderate-intensity statin therapy was defined as atorvastatin 10 or 20 mg, fluvastatin 80 mg, lovastatin 40 mg, rosuvastatin 5 or 10 mg or simvastatin 20 or 40 mg; high-intensity statin therapy was defined as atorvastatin 40 or 80 mg, rosuvastatin 20 or 40 mg or simvastatin 80 mg.
Study Sample Size
A large Swedish survey found that approximately 30% of diabetes patients with a history of CVD reached their ESC/EAS target for LDL-C [11]. As our primary endpoint (achievement of 2016 LDL-C targets) was based on subjects with diabetes and previously undiagnosed FH (and therefore very high pretreatment LDL-C levels), it was anticipated that the proportion of patients reaching the endpoint would be much lower than 30%. Based on the Wilson method [20], we calculated that a target sample size of 453 patients with FH would enable the primary outcome measure to be estimated with 3% precision, if 5-20% of patients achieved the LDL-C target.
Although the EUROASPIRE IV survey calculated a 9.0% overall prevalence of FH (age-standardized prevalence of potential FH by gender and centre) among coronary patients in Bulgaria [14], we did not expect to find the same prevalence of FH in our diabetes population. Patients with heterozygous FH are reported to be at much lower risk of T2DM (approx. 50% lower) than are individuals without FH [21,22], as well as being at lower risk of statin-associated incident T2DM [23]. There are an estimated 95,000 patients with T1DM or T2DM in Bulgaria being treated with insulin/insulin analogues, DPP-4 inhibitors, GLP-1 RA and/or SGLT2 inhibitors. Thus, we planned to initially include 15 large diabetes centres across the country, with the option to increase this number to a maximum of 20 centres if required to meet our target of 453 FH patients. During the study, it became apparent that the proportion of patients with FH in our study population was much lower than anticipated, and the sample size was rounded down from 453 to 450 in the course of site data verification. This made little change to the study precision (3.007 vs 2.997%).
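As a quick check of these precision figures, the sketch below evaluates the half-width of the Wilson score interval across the anticipated 5-20% range of achievers for both sample sizes; the exact proportion at which the authors evaluated their precision is not stated, so the output brackets rather than reproduces the quoted 3.007% and 2.997%.

```python
import math

def wilson_half_width(p, n, z=1.96):
    """Half-width of the Wilson score interval for a proportion p."""
    return z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)

for n in (453, 450):
    widths = [wilson_half_width(pct / 100.0, n) for pct in range(5, 21)]
    print(f"n = {n}: half-width {min(widths) * 100:.2f}% to "
          f"{max(widths) * 100:.2f}% over p = 5-20%")
```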
Statistical Analysis
This study was primarily descriptive in nature. Categorical variables were summarized as the number and percentage of patients in each category. For continuous variables, the number of observations, mean, median, standard deviation, quartiles 1 and 3 (Q1, Q3), range (minimum and maximum) and number of patients with missing data were reported.
All data collected during the observation period were used in the analysis and there was no imputation for missing values. To assess whether sites were representative of sites across Bulgaria, summary statistics of site characteristics and the number of patients enrolled in the study by site were analyzed.
Data were managed using GoResearch™.

RESULTS
Study Population
A total of 16 diabetes centres participated in the study, providing > 30,000 patient records (500-8000 per centre) in phase 1. Review of these records identified 450 patients with FH, who were included in phase 2 of the study as planned; of these patients, 36 (8.0%) had T1DM and 414 (92.0%) had T2DM. The characteristics of these 450 patients are shown in Table 1.
There were 93 hospitalizations recorded in total; all except for one (pyelonephritis) were for CV events (ESM Fig. 1b). The most common
DISCUSSION
This national retrospective observational study was based on data retrieved from the clinical records of over 30,000 patients with T1DM and T2DM who were being treated with insulin/insulin analogues, GLP-1 RA and/or DPP4/SGLT2 inhibitors at diabetes centres across Bulgaria. By applying the widely accepted DLCN criteria to individuals with pretreatment LDL-C ≥ 4.1 mmol/L and/or TC ≥ 8.0 mmol/L, we identified 450 patients with previously undiagnosed possible/probable/definite FH (DLCN score ≥ 3), representing approximately 1.5% of the records studied.
As noted previously, patients with heterozygous FH are reported to be at much lower risk of developing T2DM (approximately 50% lower) than are individuals without FH [21,22], although the risk may depend on the specific type of FH-associated gene mutation involved [21,24,25]. This apparent protective effect of FH might reflect impaired intracellular cholesterol uptake [16,21]. FH is characterized by reduced LDLR production or function and reduced cholesterol uptake by pancreatic beta cells. Conversely, statin treatment leads to increased LDLR expression and increased cholesterol uptake in beta cells [26,27].
Almost all (> 99%) of our 450 patients were receiving statin-based therapy, with approximately 18% receiving high-intensity statin monotherapy and 13% on statin combinations. Almost one-third had good glycaemic control (HbA1c ≤ 7%), but only one patient (< 1%) achieved the 2016 ESC/EAS LDL-C goal of < 1.8 mmol/L recommended for VHR patients [9], based on their most recent values. Moreover, patients with the highest DLCN scores were found to have the worst glycaemic control, although HbA1c was not correlated with DLCN score. Previous cardiovascular events were documented in 40% of patients.
Given that lowering of LDL-C to levels well below 1.8 mmol/L has been shown in recent years to confer additional CV risk reduction, updated 2019 European guidelines now advocate an LDL-C target of < 1.4 mmol/L for VHR patients and < 1.8 mmol/L for HR patients, along with a minimum LDL-C reduction from baseline of 50% [28]. For patients with ASCVD with a second vascular event within 2 years while taking maximally tolerated statin-based therapy, the updated guidelines recommend considering an LDL-C target of < 1.0 mmol/L [28]. None of our patients achieved LDL-C levels within the new VHR target, since the lowest value recorded was 1.75 mmol/L. Patients with diabetes benefit from aggressive control of dyslipidaemia at least as much as do nondiabetic patients with other risk factors, according to data from the Cholesterol Treatment Trialists' Collaboration meta-analysis of 14 randomized statin trials [7] and the IMPROVE-IT (Improved Reduction of Outcomes: Vytorin Efficacy International Trial) trial, which evaluated the addition of ezetimibe to statin therapy [29].
There are multiple factors that can contribute to poor lipid control in patients with dyslipidaemia. Underdosing and statin discontinuation/poor adherence to therapy, which may reflect concerns surrounding actual or anticipated statin intolerance, are acknowledged as key factors and have been linked to poorer clinical outcomes [9,30]. Moreover, in patients with diabetes, physicians may prioritize glycaemic control over lipid control and be reluctant to prescribe higher statin doses because of potential adverse effects on glucose metabolism [26,27]. Reimbursement issues in Bulgaria may present a financial barrier to intensive statin therapy, as well as to newer and alternative treatments for patients with inadequate response or statin intolerance. Only 25% of the cost of statins is reimbursed, and simvastatin is the only statin reimbursed by the National Health Insurance Fund for diabetic patients, with atorvastatin and rosuvastatin being paid for out of pocket. Only cardiologists are authorized to prescribe statins as secondary prevention with public funds. Healthcare systems in the Eastern European region are based on the reference pricing principle and are, therefore, relatively restrictive in permitting newer treatments.
Even at high intensity, statins may not be able to sufficiently reduce the very elevated baseline LDL-C levels that characterize patients with FH. Monotherapy with atorvastatin 40-80 mg or rosuvastatin 10-20 mg daily can typically lower LDL-C by approximately 50%, while adding ezetimibe to either of these treatment regimens can reduce levels by a further 15-20% [9]. Statins cannot reduce LDL-C in patients with homozygous FH with no residual LDLR function and produce only modest reductions in those with low residual LDLR activity [31]. The availability (since 2015) of the PCSK9 inhibitors (PCSK9i) evolocumab and alirocumab has provided alternative options for the treatment of dyslipidaemia. These agents improve LDLR recycling and increase LDLR availability on hepatocyte cell surfaces [32] and can lower LDL-C by a further 60% in patients already receiving maximal statin therapy [33-37], thereby significantly reducing the risk of CV events [35,38,39]. European dyslipidaemia guidelines have recommended that PCSK9i may be considered for VHR patients with ASCVD, including those with progressive ASCVD or diabetes with target organ damage or a major CV risk factor, or for patients with severe FH without ASCVD but severely elevated LDL-C despite maximal statin/ezetimibe therapy [40]. Evolocumab was reported to be effective in reducing LDL-C and other atherogenic lipids, without compromising glycaemic control, in patients with T2DM and dyslipidaemia on statin treatment (n = 981) [41]; in this study, approximately 90% of patients attained LDL-C < 1.8 mmol/L.
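To put the quoted reduction figures together, here is a back-of-the-envelope calculation; the 6.0 mmol/L starting value is an illustrative assumption, not a figure from this study:

```python
baseline_ldl = 6.0                                  # mmol/L, illustrative FH-range value
after_statin = baseline_ldl * (1 - 0.50)            # high-intensity statin: ~50% reduction
after_ezetimibe = after_statin * (1 - 0.20)         # ezetimibe: further ~15-20% reduction
after_pcsk9i = after_ezetimibe * (1 - 0.60)         # PCSK9i: further ~60% reduction
print(after_statin, after_ezetimibe, after_pcsk9i)  # 3.0, 2.4, 0.96 mmol/L
```

Even under these optimistic assumptions, only the triple regimen brings such a baseline near the < 1.0 mmol/L range discussed above, which is consistent with the guideline emphasis on combination therapy in FH.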
In Bulgaria, the treatment of T1DM and T2DM is initiated by endocrinologists, who also follow up patients on insulin, insulin analogues, DPP-4 inhibitors, GLP-1 RA and/or SGLT-2 inhibitors. Such patients visit a diabetes centre every 6 months for follow-up, and once-yearly lipid profile tests are recommended for those receiving LMT. National treatment guidelines ensure that treatment lines are harmonized across diabetes centres. The study sites were chosen based on the National Health Insurance Fund list of diabetes centres authorized to prescribe insulin, insulin analogues, DPP-4 inhibitors, GLP-1 RA and SGLT-2 inhibitors. All patients attending the participating sites who met the inclusion criteria were considered eligible for inclusion in the analysis.
The current study has a number of limitations. Data were captured retrospectively from patient records and are limited to patients treated with insulin, insulin analogues, DPP-4 inhibitors, GLP-1 RA and SGLT-2 inhibitors. It is expected that our results are generalizable to Bulgaria as a whole, but they may not necessarily be directly transferable to other regions. Lifestyle data were missing for approximately 30-40% of patients, perhaps reflecting patient reluctance to disclose such information. The number of hospitalizations may have been underestimated, as patients without hospitalization data could be included in the study. We did not evaluate compliance with LMT. LDL-C was determined by a number of different local laboratories (direct measurement or calculated using the Friedewald formula). Our estimate of the proportion of patients within glycaemic and lipid goals is based only on their most recent available test results. Nevertheless, our observational data provide a useful snapshot of real-life patient management of this population in Bulgaria.
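For reference, the Friedewald formula mentioned above estimates LDL-C from the standard lipid panel (mmol/L form; it is generally considered unreliable when triglycerides exceed about 4.5 mmol/L):

\[ \text{LDL-C} = \text{TC} - \text{HDL-C} - \frac{\text{TG}}{2.2} \]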
CONCLUSIONS
To our knowledge, this is the first study to specifically evaluate lipid management in diabetic patients with FH. Based on the most recent LDL-C measurements for these 450 patients, who were being treated in diabetes clinics throughout Bulgaria, < 1% achieved the ESC-defined goal of LDL-C < 1.8 mmol/L. These data highlight the importance of identifying underlying FH in diabetic patients so that they can receive appropriate treatment.
ACKNOWLEDGEMENTS
Funding. Amgen (Europe) GmbH participated in the design/concept of the study and provided funding for the study, preparation of the manuscript and Rapid Service Fee for publication.
Medical Writing Assistance. Julia Balfour of Northstar Medical Writing and Editing, Dundee, UK, provided medical writing support, with financial support from Amgen.
Authorship. All named authors meet the International Committee of Medical Journal Editors (ICMJE) criteria for authorship for this article, take responsibility for the integrity of the work as a whole, and have given their approval for this version to be published.
Authorship Contributions. TT and VL contributed to conception and/or design of the study. AO conducted the statistical analysis. All authors contributed to acquisition, analysis, and/or interpretation of data for the work, critically revised the manuscript, gave final approval and agree to be accountable for all aspects of work ensuring integrity and accuracy.
Disclosures. Tsvetalina Tankova has received consulting fees from MSD, Sanofi, Boehringer Ingelheim, Eli Lilly, Novo Nordisk, Astra Zeneca and speakers bureau fees from MSD, Sanofi, Boehringer Ingelheim, Eli Lilly, Novo Nordisk, Astra Zeneca, Servier, Merck and Amgen. Anna-Maria Borissova has received consulting fees from Boehringer Ingelheim, Astra Zeneca, Mundipharma, Sanofi, Eli Lilly, MSD, Amgen and speakers bureau fees from Novo Nordisk, Eli Lilly, Sanofi, Boehringer Ingelheim, Astra Zeneca, MSD, Servier, Novartis, Amgen, Merck and Roche. Roumyana Dimova has received consulting fees from Amgen and speakers bureau fees from Boehringer Ingelheim, Astra Zeneca and Novo Nordisk. Adrian Olszewski is an employee of 2KMM, a contract research organization. Vasil Lachev and Reneta Petkova are employees and shareholders of Amgen. Atanaska Elenkova and Ralitsa Robeva have nothing to disclose.
Compliance with Ethics Guidelines. The study was conducted in accordance with all local legal and regulatory requirements and followed generally accepted research practices. In agreement with national law, the study protocol was approved by the Central Ethics Committee of the Bulgarian Regulatory Medicines Agency. Due to the retrospective nature of this study, informed consent was not required.
Data Availability. Qualified researchers may request data from Amgen clinical studies. Complete details are available at the following: http://www.amgen.com/datasharing.
Open Access. This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/ by-nc/4.0/), which permits any noncommercial use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 2020-01-03T14:36:41.847Z | 2020-01-02T00:00:00.000 | {
"year": 2020,
"sha1": "9eb9b0079513ff6e7e200fe897e55ca326f6a30f",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13300-019-00748-2.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "630966c42d03e97a4a99dff2e2ace5bfaa67c285",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56595511 | pes2o/s2orc | v3-fos-license | Data on metabolic profiling of spongy tissue disorder in Mangifera indica cv. Alphonso
Data in this article presents aroma volatiles and fatty acids composition of mesocarp specific malady namely spongy tissue disorder in Mangifera indica cv. Alphonso. Quantitative changes in various aroma volatile compound classes as well as saturated and unsaturated fatty acids in spongy tissue vis-à-vis healthy mesocarp have been analyzed throughout the development of the disorder. Statistical data analysis correlates the dynamic changes in the aroma volatiles composition to that of the modulation in the fatty acids profile.
Specifications table

Subject area: Biology, Chemistry

More specific subject area: Aroma volatile and fatty acid composition of healthy, spongy-affected and spongy-control Alphonso mango fruit

Type of data: Table, figure

Data format: Analyzed

Experimental factors: Alphonso mango fruits were harvested at the mature raw stage from the trees and ripened in hay boxes. At various ripening stages, healthy and spongy tissue mesocarp were recovered: pulp/mesocarp from healthy fruits, as well as spongy tissue and the healthy part around the spongy tissue from affected fruits, were collected, snap-frozen in liquid nitrogen and stored at −80°C.

Experimental features: Aroma volatiles were extracted by solvent extraction in dichloromethane and acetone (80:20). Fatty acids were methylated by transesterification to generate fatty acid methyl esters (FAMEs), which were then extracted using chloroform. Metabolite identification and quantification were done by GC-MS-FID. The activity of cell wall degrading enzymes was determined using a specific biochemical assay for each enzyme. Relative transcript abundance was determined using quantitative reverse transcriptase polymerase chain reaction.

Data source location: Plant Molecular Biology Unit, Division of Biochemical Sciences, CSIR-National Chemical Laboratory, Pune 411 008, (M.S.) India

Data accessibility: Data provided within this article
Value of the data
The investigated data highlight the metabolic changes occurring in the spongy tissue disorder of Alphonso mango, which is valuable to researchers working on diseases or disorders of fruits.
Ripening-related data have shown decreased enzymatic activity in mesocarp from the spongy area as compared to that in healthy fruits. This suggests an arrest of the ripening process of the mesocarp, which in turn affects the internal fruit morphology without affecting the external physiology of the fruit.
Reduced levels of lactones and furanones, the key volatiles of Alphonso mango flavor, highlight the poor quality of spongy tissue-affected Alphonso mango, which is of value to the domestic and export market chains.
Analyzed data highlight the significantly elevated levels of terpenes, green leafy volatiles and linoleic acid content with exclusive presence of oxygenated terpenes in the spongy tissue compared to the healthy and the spongy control fruits throughout the ripening stages.
Higher ratio of linoleic acid to linolenic acid in spongy mesocarp suggests the probable lipid peroxidation in affected tissue.
The present metabolic and enzymatic data are foundation work for the development of online, non-destructive sensor technology or visualization techniques to segregate spongy tissue-disordered fruits from healthy ones, which can ultimately benefit mango farmers and exporters.
Data
A total of 45 different aroma volatiles have been identified from the mesocarp of three different tissue sets (healthy, spongy and spongy control) of Alphonso mango fruit at four stages of fruit ripening. These aroma volatiles belong to different compound classes such as monoterpenes (11), sesquiterpenes (09), lactones (05), oxygenated terpenes (06), furanones (02) and miscellaneous compounds (12), which include alkanes, alkenes, alcohols, esters, green leafy volatiles, etc. (Table 1, Figs. 1 and 2). Pandit et al. [1] have analyzed the stage-specific dynamics of aroma volatile compounds of Mangifera indica cv. Alphonso pulp. Dominance of terpenes was seen at the early stages of ripening, i.e. in unripe fruits, whereas lactones and furanones were considered to be the ripening markers because of their exclusive appearance at the later stages of ripening. In the case of the spongy tissue, the significantly higher concentration of terpenes in the mid ripe and ripe stages and the loss of lactones and furanones at these stages suggested slower synthesis of the specific marker metabolites of ripening in the spongy tissue. Further, the decreased activity of three major ripening-related enzymes (β-D-Galactosidase, α-D-Mannosidase and β-D-Glucosidase) (Fig. 3) led to the accumulation of the respective substrates.
These observations suggest an arrest of ripening of the Alphonso fruit in the spongy tissue, which in turn affects the physicochemical properties of the mesocarp. In addition to terpenes, a significant increase in green leafy volatile concentration was observed in the spongy tissue as compared to that in the healthy and the spongy control fruits at all stages of ripening. Terpenes and green leafy volatiles are important defense compounds of plants. These compounds are released during fruit development, herbivore attack or tissue damage [2]. Hence, the increased concentration of terpenes and green leafy volatiles in the spongy tissue suggests a stress condition inside the fruit. This was also confirmed by the exclusive presence of 2,4-decadienal, a primary oxidation product of linoleic acid, in the spongy tissue. Furthermore, a total of 24 different fatty acids were also identified (Table 2 and Figs. 4-6). A significant decrease in fatty acids such as myristic, myristoleic, palmitoleic, hepta-2,4-dienoic, cis-10-heptadecenoic and 9,12-hexadienoic acid was observed in the spongy tissue as compared to the spongy control and the healthy fruit. Similarly, a significant increase in heptadecanoic, oleic, linoleic, 11-eicosenoic and lignoceric acid was observed in the spongy tissue as compared to the spongy control and the healthy fruit. 9,15-Octadecadienoic acid, known as mangiferic acid, along with 12,15-octadecadienoic acid, was absent in the spongy tissue. The absence of these polyunsaturated fatty acids highlights membrane disintegration. An enhanced process of membrane disruption and lipid peroxidation was also indicated by the significant increase in the ratio of LA/ALA content (Fig. 7). The higher accumulation of linoleic acid also reduces the nutritive value of the fruit.
Unsaturated fatty acids (linoleic and α-linolenic acid) are the precursors of lactone [3] and GLV biosynthesis [4]. In the case of the spongy tissue, the dominance of GLVs and absence of lactones, along with the increased concentration of linoleic acid, suggest a probable shift of the lactone biosynthesis pathway towards the oxylipin biosynthesis pathway of GLV production. Hence, the transcript abundance of the 13-hydroperoxide lyase (HPL) gene, involved in the conversion of unsaturated fatty acid hydroperoxides to green leafy volatiles [4], was also studied (Fig. 8). In the spongy tissue, a 4- to 6-fold increase in gene expression of 13-HPL was observed as compared to the healthy fruits. A strong correlation (0.97) between transcript abundance and total GLV content at the table green and mid ripe stages of the spongy tissue suggests a probable involvement of the oxylipin biosynthesis pathway of GLV production, rather than the lactone biosynthesis pathway, at the ripening stages of Alphonso mango with the spongy tissue malady.
Plant material
All the tissues of cv. Alphonso were collected from the mango orchards of the Agronomy Department of Dr. Balasaheb Sawant Konkan Agricultural University, Dapoli (N17°45′ E73°11′). To evaluate the complete development of spongy tissue formation, mango fruit pulp was collected at four ripening stages [1]. Fruits of 0, 5, 10 and 15 DAH (Days After Harvest) (termed mature raw, table green, mid ripe and ripe, respectively) were used for the present analysis. At each ripening stage, mangoes were removed from the hay boxes, and the spongy-affected mesocarp was separated and frozen in liquid nitrogen. Along with the spongy part, non-spongy mesocarp around the spongy-affected area was frozen separately and considered as the spongy control. Completely healthy fruits, i.e. free from spongy tissue, at the corresponding stage of ripening were also included in this analysis. For statistical validation, fruits were collected from 5 individual trees, which were considered as biological replicates.
Volatile extraction
Volatiles were extracted from 2 g of mesocarp tissue at all four ripening stages using dichloromethane:acetone (80:20) as the solvent system, with an appropriate concentration of nonyl acetate as an internal standard. The volatile extraction protocol was carried out as described previously [5].
Biochemical analysis of ripening related enzymes
Crude enzymes, extracted in HEPES-NaOH buffer (pH 7.4), were used in the enzyme assays. Enzyme activity was calculated based on the amount of pNP released. Enzyme extraction and activity assays were performed as reported earlier [6].
Fatty acid extraction
Transesterification of fatty acids was carried out using methanolic HCl. One g of mesocarp tissue was finely crushed in liquid nitrogen and added to 5 ml of methanol containing 3 M HCl, 25 mg of butylated hydroxytoluene (BHT) as an antioxidant and an appropriate amount of tridecanoic acid as an internal standard. FAMEs were extracted in n-hexane and reconstituted in chloroform. Transesterification and FAME extraction were carried out as reported earlier [3].

Aroma volatile analysis

Analyses were carried out with a flow of helium as carrier gas. Oven temperatures were programmed at 40°C for 5 min, raised to 180°C at 5°C/min, followed by an increase to 280°C at the rate of 20°C/min, and held at 280°C for 5 min. General chromatographic and mass spectrometric conditions were retained as reported earlier [5]. Compounds were identified by matching the generated spectra against the NIST 2011 and Wiley 10th edition mass spectral libraries. Quantitative analysis was done using a flame ionization detector under the same chromatographic conditions. Absolute quantification was performed using a known concentration of nonyl acetate (internal standard).
Fatty acid analysis
Fatty acid separation was carried out with an SP™-2560 (Supelco, Bellefonte, Pennsylvania, U.S.A.) column, 75 m long with 0.18 mm i.d. and 0.14 µm film thickness. Qualitative analysis was carried out on a 7890B GC system coupled with an Agilent 5977A MSD (Agilent Technologies, CA, U.S.A.) using 1 µl of chloroform-reconstituted FAMEs. Other gas chromatographic parameters were maintained as reported earlier [7]. Identified FAMEs were confirmed by spectral matching with the NIST 2011 and Wiley 10th edition mass spectral libraries. Compounds were validated by matching retention times and spectra of authentic standards procured from Sigma Aldrich (St. Louis, MO, USA). Quantification of identified compounds was done using GC-FID. Chromatographic conditions were similar for GC-MSD and GC-FID. Absolute quantification was done by normalizing the concentrations of all FAMEs to the internal standard (tridecanoic acid methyl ester) [3].
Statistical analysis
For statistical validation of volatiles and fatty acids, at each ripening stage a minimum of two fruits from each of the five plants were used for independent extractions, and each extract was analyzed twice on the GC. Fisher's LSD test (p ≤ 0.05) was carried out by ANOVA (StatView software, version 5.0; SAS Institute Inc., Cary, NC, USA) on healthy, spongy and spongy control tissues separately to compare the quantity of each compound and class within the three datasets across four ripening stages. Principal component analysis of the whole dataset of fatty acid and volatile contents was carried out using Systat statistical software (Version 12, Richmond, CA, U.S.A.).
RNA isolation and cDNA synthesis
Total RNA was isolated from all tissues sampled for the current study using the RNeasy Plus mini kit (Qiagen, Hilden, Germany). RNA quality and integrity were checked using a Bioanalyzer 2100 (Agilent Technologies, Santa Clara, USA). Two micrograms of total RNA were reverse transcribed to synthesize cDNA using the High Capacity cDNA Reverse Transcription kit (Applied Biosystems, Carlsbad, CA, USA) [8].
Quantitative real-time PCR
Quantitative real-time PCR was performed using the Fast Start Universal SYBR Green master mix (Roche Inc., Indianapolis, Indiana, USA) and elongation factor 1α (EF1α) as an endogenous reference gene, for which primers were reported earlier [9]. The hydroperoxide lyase gene was amplified using gene-specific primers (MiHPL_F1 CGTCCTTGACATTCTGAAACGC and MiHPL_R1 CCTTCGCAGAGATGCTTGTTTC) covering an amplicon of 100 bp. Quantification of transcripts was done on a ViiA™ 7 Real-Time PCR System (Applied Biosystems, California, USA) with a thermal cycle program of initial denaturation at 95°C for 10 min, followed by 40 cycles of 95°C for 3 s and 60°C for 30 s, followed by a melting curve analysis of the transcript. Relative quantification (ΔΔCT method) and statistical analysis were carried out manually. The complete analysis was repeated with three biological replicates, and three technical replicates were employed for each biological replicate.
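As a quick illustration of the ΔΔCT calculation referenced above, here is a minimal sketch with hypothetical CT values (not data from this study), using EF1α as the reference gene as in the protocol:

```python
# Relative quantification by the 2^(-ddCT) method.
# All CT values below are hypothetical, for illustration only.

ct_target_sample = 24.0   # 13-HPL in spongy tissue
ct_ref_sample    = 18.0   # EF1-alpha in spongy tissue
ct_target_ctrl   = 26.5   # 13-HPL in healthy tissue
ct_ref_ctrl      = 18.2   # EF1-alpha in healthy tissue

d_ct_sample = ct_target_sample - ct_ref_sample   # normalize to the reference gene
d_ct_ctrl   = ct_target_ctrl - ct_ref_ctrl
dd_ct = d_ct_sample - d_ct_ctrl                  # compare sample to control

fold_change = 2 ** (-dd_ct)                      # assumes ~100% PCR efficiency
print(f"Fold change: {fold_change:.2f}")         # ~4.9-fold here
```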
"year": 2018,
"sha1": "a452bcc5bc50cf12823e7ce7472be2daaa928829",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.dib.2018.11.140",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a452bcc5bc50cf12823e7ce7472be2daaa928829",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
12993622 | pes2o/s2orc | v3-fos-license | The Origins of Ashkenaz, Ashkenazic Jews, and Yiddish
Recently, the geographical origins of Ashkenazic Jews (AJs) and their native language Yiddish were investigated by applying the Geographic Population Structure (GPS) to a cohort of exclusively Yiddish-speaking and multilingual AJs. GPS localized most AJs along major ancient trade routes in northeastern Turkey adjacent to primeval villages with names that resemble the word “Ashkenaz.” These findings were compatible with the hypothesis of an Irano-Turko-Slavic origin for AJs and a Slavic origin for Yiddish and at odds with the Rhineland hypothesis advocating a Levantine origin for AJs and German origins for Yiddish. We discuss how these findings advance three ongoing debates concerning (1) the historical meaning of the term “Ashkenaz;” (2) the genetic structure of AJs and their geographical origins as inferred from multiple studies employing both modern and ancient DNA and original ancient DNA analyses; and (3) the development of Yiddish. We provide additional validation to the non-Levantine origin of AJs using ancient DNA from the Near East and the Levant. Due to the rising popularity of geo-localization tools to address questions of origin, we briefly discuss the advantages and limitations of popular tools with focus on the GPS approach. Our results reinforce the non-Levantine origins of AJs.
"Ashkenaz:"İşkenaz (or Eşkenaz), Eşkenez (or Eşkens), Aşhanas, and Aschuz. Evaluated in light of the Rhineland and Irano-Turko-Slavic hypotheses Table 1) the findings supported the latter, implying that Yiddish was created by Slavo-Iranian Jewish merchants plying the Silk Roads. We discuss these findings from historical, genetic, and linguistic perspectives and calculate the genetic similarity of AJs and Middle Eastern populations to ancient genomes from Anatolia, Iran, and the Levant. We lastly review briefly the advantages and limitation of bio-localization tools and their application in genetic research.
THE HISTORICAL MEANING OF ASHKENAZ
"Ashkenaz" is one of the most disputed Biblical placenames. It appears in the Hebrew Bible as the name of one of Noah's TABLE 1 | Major open questions regarding the origin of the term "Ashkenaz," AJs, and Yiddish as explained by two competing hypotheses.
Open questions
Rhineland hypothesis Irano-Turko-Slavic hypothesis Evidence in favor of the Irano-Turko-Slavic hypothesis The term "Ashkenaz" Originally affiliated with the people living north of Biblical Israel (Aptroot, 2016) or north of the Black Sea (Wexler, 1991). Used in Hebrew and Yiddish sources from the Eleventh century onward to denote a region in what is now roughly Southern Germany (Wexler, 1991;Aptroot, 2016).
Denotes an Iranian people "near Armenia," presumably Scythians known as aškuza, ašguza, or išguza in Assyrian inscriptions of the early Seventh century B.C. (Wexler, 2012(Wexler, , 2016. GPS analysis uncovered four primeval villages in northeastern Turkey whose names resemble "Ashkenaz," at least one of which predates any major Jewish settlement in Germany . "Ashkenaz" is thereby a placename associated with the Near East and its inhabitants both Jews and non-Jews. The ancestral origin of Ashkenazic Jews Judaean living in Judaea until 70 A.D. who were exiled by the Romans (King, 2001) and remained in relative isolation from neighboring non-Jewish communities during and after the Diaspora (Hammer et al., 2000;Ostrer, 2001). This scenario has no historical (Sand, 2009) nor genetic support ( Figure 1B) (e.g., Elhaik, 2013Elhaik, , 2016Xue et al., 2017).
A minority of Judaean emigrants and a majority of Irano-Turko-Slavic converts to Judaism (Wexler, 2012).
AJs exhibit high genetic similarity to populations living in Turkey and the Caucasus . All bio-location analyses predicted AJs to Turkey ( Figure 1A). Ancient DNA analyses provide strong evidence of the Iranian Neolithic ancestry of AJs ( Figure 1B) (Lazaridis et al., 2016).
The arrival of Jews to German lands
After the arrival of Palestinian Jews to Roman lands, Jewish merchants and soldiers arrived to German lands with the Roman army and settled there (King, 2001). This scenario has no historical support (Wexler, 1993;Sand, 2009).
Jews from the Khazar Empire and the former Iranian Empire plying the old Roman trade routes (Rabinowitz, 1945(Rabinowitz, , 1948 and Silk Roads began to settle in the mixed Germano-Sorbian lands during the first Millennium (Sand, 2009;Wexler, 2011).
Ashkenazic Jews were predicted to a Near Eastern hub of ancient trade routes that connected Europe, Asia, and the northern Caucasus . The findings imply that migration to Europe took place initially through trade routes going west and later through Khazar lands. Yiddish's emergence in the 9th century Between the Ninth and Tenth centuries, French-and Italian-speaking Jewish immigrants adopted and adapted the local German dialects (Weinreich, 2008).
Upon arrival to German lands, Western and Eastern Slavic went through a relexification to German, creating what became known as Yiddish (Wexler, 2012). Xue et al.'s (2017) inferred "admixture time" of 960-1,416 AD corresponds to a time period during which AJ have experienced major demographic changes. At that time, AJs were speculated to have absorbed Slavic people, developed Slavic Yiddish, and intensified the migration to Europe .
Growth of Eastern European Jewry
A small group of German Jews migrated to Eastern Europe and reproduced via a so-called "demographic miracle" (Ben-Sasson, 1976;Atzmon et al., 2010;Ostrer, 2012), which resulted in an unnatural growth rate (1.7-2% annually) (van Straten and Snel, 2006;van Straten, 2007) over half a millennium acting only on Jews residing in Eastern Europe. This explanation is unsupported by the data.
Most of the Ashkenazic Jews were predicted to Northeastern Turkey and the remaining individuals clustered along a gradient going from Turkey to Eastern European lands . This is in agreement with the recorded conversions of populations living along the southern shores of the Black Sea to Judaism (Baron, 1937). A German origin of AJs is unsupported by the data ( Figure 1A).
The genetic evidence produced by Das et al. (2016) is shown in the last column. descendants (Genesis 10:3) and as a reference to the kingdom of Ashkenaz, prophesied to be called together with Ararat and Minnai to wage war against Babylon (Jeremiah 51:27). In addition to tracing AJs to the ancient Iranian lands of Ashkenaz and uncovering the villages whose names may derive from "Ashkenaz, " the partial Iranian origin of AJs, inferred by Das et al. (2016), was further supported by the genetic similarity of AJs to Sephardic Mountain Jews and Iranian Jews as well as their similarity to Near Eastern populations and simulated "native" Turkish and Caucasus populations.
There are good grounds, therefore, for inferring that Jews who considered themselves Ashkenazic adopted this name and spoke of their lands as Ashkenaz, since they perceived themselves as of Iranian origin. That we find varied evidence of the knowledge of Iranian language among Moroccan and Andalusian Jews and Karaites prior to the Eleventh century is a compelling point of reference to assess the shared Iranian origins of Sephardic and Ashkenazic Jews (Wexler, 1996). Moreover, Iranian-speaking Jews in the Caucasus (the so-called Juhuris) and Turkic-speaking Jews in the Crimea prior to World War II called themselves "Ashkenazim" (Weinreich, 2008).

[Figure 1 | (A) Predicted locations of AJs from Behar et al. (2013, Figure 2B) (red) and Das et al. (2016, Figure 4) (dark green for AJs who have four AJ grandparents, light green for the rest). Color-matched means and standard deviations (bars) of the longitude and latitude are shown for each cohort. Since the data points of Behar et al. (2013, Figure 2B) could not be obtained from the corresponding author, 78% of them were procured from their figure; the remaining points could not be reliably extracted due to the figure's low quality. (B) Supervised ADMIXTURE results. For brevity, subpopulations were collapsed. The x axis represents individuals; each individual is represented by a vertical stacked column of color-coded admixture proportions reflecting genetic contributions from ancient Hunter-Gatherer, Anatolian, Levantine, and Iranian individuals.]
The Rhineland hypothesis cannot explain why a name that denotes "Scythians" and was associated with the Near East became associated with German lands in the Eleventh to Thirteenth centuries (Wexler, 1993). Aptroot (2016) suggested that Jewish immigrants in Europe transferred Biblical names onto the regions in which they settled. This is unconvincing. Biblical names were used as place names only when they had similar sounds. Not only do Germany and Ashkenaz not share similar sounds, but Germany was already named "Germana" or "Germamja" in the Iranian ("Babylonian") Talmud (completed in the Fifth century A.D.) and, not surprisingly, was associated with Noah's grandson Gomer (Talmud, Yoma 10a). Name adoption also occurred when the exact place names were in doubt, as in the case of Sefarad (Spain). This is not the case here since, as Aptroot too notes, "Ashkenaz" had a known and clear geographical affiliation (Table 1). Finally, Germany was known to French scholars like the RaDaK (1160-1235) as "Almania" (Sp. Alemania, Fr. Allemagne), after the Almani tribes, a term that was also adopted by Arab scholars. Had the French scholar Rashi (1040?-1105) interpreted aškenaz as "Germany," it would have been known to the RaDaK, who used Rashi's symbols. Therefore, Wexler's proposal that Rashi used aškenaz in the meaning of "Slavic" and that the term aškenaz assumed the solitary meaning "German lands" only after the Eleventh century in Western Europe, as a result of the rise of Yiddish, is more reasonable (Wexler, 2011). This is also supported by Das et al.'s major finding of the only known primeval villages whose names derive from the word "Ashkenaz," located in the ancient lands of Ashkenaz. Our inference is therefore supported by historical, linguistic, and genetic evidence, which carries more weight as a simple origin that can be easily explained than a more complex scenario involving multiple translocations.
THE GENETIC STRUCTURE OF ASHKENAZIC JEWS
AJs were localized to modern-day Turkey and found to be genetically closest to Turkic, southern Caucasian, and Iranian populations, suggesting a common origin in Iranian "Ashkenaz" lands. These findings were more compatible with an Irano-Turko-Slavic origin for AJs and a Slavic origin for Yiddish than with the Rhineland hypothesis, which lacks historical, genetic, and linguistic support (Table 1) (van Straten, 2004; Elhaik, 2013). The findings have also highlighted the strong social-cultural and genetic bonds of Ashkenazic and Iranian Judaism and their shared Iranian origins.
Thus far, all analyses aimed at geo-localizing AJs (Behar et al., 2013, Figure 2B; Elhaik, 2013, Figure 4; Das et al., 2016, Figure 4) identified Turkey as the predominant origin of AJs, although they used different approaches and datasets, in support of the Irano-Turko-Slavic hypothesis (Figure 1A, Table 1). The existence of both major Southern European and Near Eastern ancestries in AJ genomes is also a strong indicator of the Irano-Turko-Slavic hypothesis, given the Greco-Roman history of the region south of the Black Sea (Baron, 1937; Kraemer, 2010). Recently, Xue et al. (2017) applied GLOBETROTTER to a dataset of 2,540 AJs genotyped over 252,358 SNPs. The inferred ancestry profile for AJs was 5% Western Europe, 10% Eastern Europe, 30% Levant, and 55% Southern Europe (a Near East ancestry was not considered by the authors). Elhaik (2013) portrayed a similar profile for European Jews, consisting of 25-30% Middle Eastern and large Near Eastern-Caucasus (32-38%) and West European (30%) ancestries. Remarkably, Xue et al. (2017) also inferred an "admixture time" of 960-1,416 AD (≈24-40 generations ago), which corresponds to the time AJs experienced major geographical shifts as the Judaized Khazar kingdom diminished and their trading networks collapsed, forcing them to relocate to Europe. The lower boundary of that date corresponds to the time Slavic Yiddish originated, to the best of our knowledge.
The non-Levantine origin of AJs is further supported by an ancient DNA analysis of six Natufians and a Levantine Neolithic (Lazaridis et al., 2016), some of the most likely Judaean progenitors (Finkelstein and Silberman, 2002; Frendo, 2004). In a principal component analysis (PCA), the ancient Levantines clustered predominantly with modern-day Palestinians and Bedouins and marginally overlapped with Arabian Jews, whereas AJs clustered away from Levantine individuals and adjacent to Neolithic Anatolians and Late Neolithic and Bronze Age Europeans. To evaluate these findings, we inferred the ancient ancestries of AJs using the admixture analysis described in Marshall et al. (2016). Briefly, we analyzed 18,757 autosomal SNPs genotyped in 46 Palestinians, 45 Bedouins, 16 Syrians, and eight Lebanese (Li et al., 2008) alongside 467 AJs (367 AJs previously analyzed and 100 individuals with an AJ mother) that overlapped with both the GenoChip and ancient DNA data (Lazaridis et al., 2016). We then carried out a supervised ADMIXTURE analysis (Alexander and Lange, 2011) using three East European Hunter-Gatherers from Russia (EHGs) alongside six Epipaleolithic Levantines, 24 Neolithic Anatolians, and six Neolithic Iranians as reference populations (Table S0). Remarkably, AJs exhibit dominant Iranian (~88%) and residual Levantine (~3%) ancestries, as opposed to Bedouins (~14% and ~68%, respectively) and Palestinians (~18% and ~58%, respectively). Only two AJs exhibit Levantine ancestries typical of Levantine populations (Figure 1B). Repeating the analysis with qpAdm (AdmixTools, version 4.1) (Patterson et al., 2012), we found that AJ admixture could be modeled using three-way admixture (Table 1) and rule out an ancient Levantine origin for AJs, which is predominant among modern-day Levantine populations (e.g., Bedouins and Palestinians). This is not surprising since Jews differed in cultural practices and norms (Sand, 2011) and tended to adopt local customs (Falk, 2006). Very little Palestinian Jewish culture survived outside of Palestine (Sand, 2009). For example, the folklore and folkways of the Jews in northern Europe are distinctly pre-Christian German (Patai, 1983) and Slavic in origin, traditions which disappeared among the latter (Wexler, 1993, 2012).
THE LINGUISTIC DEBATE CONCERNING FORMATION OF YIDDISH
The hypothesis that Yiddish has a German origin ignores the mechanics of relexification, the linguistic process which produced Yiddish and other "Old Jewish" languages (i.e., those created by the Ninth to Tenth century). Understanding how relexification operates is essential to understanding the evolution of languages. This argument has a similar context to that of the evolution of powered flight. Rejecting the theory of evolution may lead one to conclude that birds and bats are close relatives. By disregarding the literature on relexification and Jewish history in the early Middle Ages, authors (e.g., Aptroot, 2016;Flegontov et al., 2016) reach conclusions that have weak historical support. The advantage of a geo-localization analysis is that it allows us to infer the geographical origin of the speakers of Yiddish, where they resided and with whom they intermingled, independently of historical controversies, which provides a data driven view on the question of geographical origins. This allows an objective review of potential linguistic influences on Yiddish (Table 1), which exposes the dangers in adopting a "linguistic creationism" view in linguistics.
The historical evidence in favor of an Irano-Turko-Slavic origin for Yiddish is paramount (e.g., Wexler, 1993, 2010). Jews played a major role on the Silk Roads in the Ninth to Eleventh centuries. In the mid-Ninth century, in roughly the same years, Jewish merchants in both Mainz and Xi'an received special trading privileges from the Holy Roman Empire and the Tang dynasty court (Robert, 2014). These roads linked Xi'an to Mainz and Andalusia, and further to sub-Saharan Africa and across to the Arabian Peninsula and India-Pakistan. The Silk Roads provided the motivation for Jewish settlement in Afro-Eurasia in the Ninth to Eleventh centuries, since the Jews played a dominant role on these routes as a neutral trading guild with no political agendas (Gil, 1974; Cansdale, 1996, 1998). Hence, the Jewish traders had contact with a wealth of languages in the areas that they traversed (Khordadhbeh, 1889; Hadj-Sadok, 1949; Hansen, 2012; Wexler, TBD), which they brought back to their communities nested in major trading hubs (Rabinowitz, 1945, 1948; Das et al., 2016). The central Eurasian Silk Roads were controlled by Iranian polities, which provided opportunities for Iranian-speaking Jews, who constituted the overwhelming bulk of the world's Jews from the time of Christ to the Eleventh century (Baron, 1952). It should not come as a surprise to find that Yiddish (and other Old Jewish languages) contains components and rules from a large variety of languages, all of them spoken on the Silk Roads (Khordadhbeh, 1889; Wexler, 2011, 2012, 2017). In addition to language contacts, the Silk Roads also provided the motivation for widespread conversion to Judaism by populations eager to participate in the extremely lucrative trade, which had become a Jewish quasi-monopoly along the trade routes (Rabinowitz, 1945, 1948; Baron, 1957). These conversions are discussed in Jewish literature between the Sixth and Eleventh centuries, both in Europe and Iraq (Sand, 2009; Kraemer, 2010). Yiddish and other Old Jewish languages were all created by the peripatetic merchants as secret languages that would isolate them from their customers and non-Jewish trading partners (Khordadhbeh, 1889; Hadj-Sadok, 1949; Gil, 1974; Cansdale, 1998; Robert, 2014). The study of Yiddish genesis thereby necessitates the study of all the Old Jewish languages of this time period.
There is also a quantifiable amount of Iranian and Turkic elements in Yiddish. The Babylonian Talmud, completed by the Sixth century A.D., is rich in Iranian linguistic, legalistic, and religious influences. From the Talmud, a large Iranian vocabulary has entered Hebrew and Judeo-Aramaic, and from there spread to Yiddish. This corpus has been known since the 1930s and is common knowledge to Talmud scholars (Telegdi, 1933). In the Khazar Empire, the Eurasian Jews plying the Silk Roads became speakers of Slavic, an important language because of the trading activities of the Rus' (pre-Ukrainians), with whom the Jews were undoubtedly allied on the routes linking Baghdad and Bavaria. This is evidenced by the existence in Yiddish of newly invented Hebroidisms inspired by Slavic patterns of discourse (Wexler, 2010).
We advocate for a more evolutionary understanding in linguistics. That includes giving more attention to the linguistic processes that alter languages (e.g., relexification) and acquiring more competence in other languages and histories. When studying the origin of Ashkenazic Jews and Yiddish, such knowledge should include the history of the Silk Roads and the Irano-Turkish languages.
INFERENCE OF GEOGRAPHICAL ORIGINS
Deciphering the origin of human populations is not a new challenge for geneticists, yet only in the past decade have high-throughput genetic data been harnessed to answer these questions. Here, we briefly discuss the differences between the available tools based on identity by distance. Existing PCA or PCA-like approaches (e.g., Yang et al., 2012) can localize Europeans to countries (understood as the last place where a major admixture event took place or the place where the four ancestors of "unmixed" individuals came from) with less than 50% accuracy (Yang et al., 2012). The limitations of PCA (discussed in Novembre and Stephens, 2008) appear to be inherent in the framework, where continental populations plotted along the two primary PCs cluster at the vertices of a triangle-like shape and the remaining populations cluster along or within the edges (e.g., Elhaik et al., 2013). There is therefore reason to question the applicability of ambitious PCA-based methods (Yang et al., 2012, 2014) aiming to infer multiple ancestral locations outside of Europe. Overall, accurate localization of worldwide individuals remains a significant challenge (Elhaik et al., 2014).
The GPS framework assumes that humans are mixed and that their genetic variation (admixture) can be modeled by the proportion of genotypes assigned to any number of fixed regional putative ancestral populations (Elhaik et al., 2014). GPS employs a supervised ADMIXTURE analysis where the admixture components are fixed, which allows evaluating both the test individuals and the reference populations against the same putative ancestral populations. GPS infers the geographical coordinates of an individual by matching their admixture proportions with those of reference populations. Reference populations are populations known to have resided in a certain geographical region for a substantial period of time, on the order of hundreds to a thousand years, and can be predicted to their geographical locations while absent from the reference population panel. The final geographic location of a test individual is determined by converting the genetic distances of the individual to m reference populations into geographic distances (Elhaik et al., 2014). Intuitively, the reference populations can be thought of as "pulling" the individual in their direction with a strength proportional to their genetic similarity until a consensus is reached (Figure S1). Interpreting the results, particularly when the predicted location differs from the contemporary location of the studied population, demands caution.
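To make the "pulling" intuition concrete, here is a minimal sketch of a similarity-weighted placement; the population names, admixture vectors, coordinates, and the distance-to-weight conversion are all illustrative assumptions rather than the published GPS implementation:

```python
import numpy as np

# Hypothetical admixture proportions (rows sum to 1) and (lat, lon)
# coordinates for a few reference populations -- illustrative values only.
refs = {
    "pop_A": (np.array([0.70, 0.20, 0.10]), (41.0, 29.0)),
    "pop_B": (np.array([0.30, 0.50, 0.20]), (48.0, 12.0)),
    "pop_C": (np.array([0.10, 0.30, 0.60]), (32.0, 35.0)),
}

def locate(test_admixture: np.ndarray) -> tuple:
    """Pull the test individual toward each reference population with a
    strength that decays with the genetic (admixture) distance."""
    weights, coords = [], []
    for admix, (lat, lon) in refs.values():
        gen_dist = np.linalg.norm(test_admixture - admix)  # Euclidean distance
        weights.append(1.0 / (gen_dist + 1e-6))            # closer => stronger pull
        coords.append((lat, lon))
    w = np.array(weights) / np.sum(weights)
    latlon = np.average(np.array(coords), axis=0, weights=w)
    return tuple(latlon)

print(locate(np.array([0.60, 0.25, 0.15])))  # lands nearest pop_A
```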
Population structure is affected by biological and demographic processes like genetic drift, which can act rapidly on small, relatively isolated populations, as opposed to large non-isolated populations, and migration, which occurs more frequently (Jobling et al., 2013). Understanding the geography-admixture relationship necessitates knowing how relative isolation and migration history affected the allele frequencies of populations. Unfortunately, we often lack information about both processes. GPS addresses this problem by analyzing the relative proportions of admixture in a global network of reference populations that provide us with different "snapshots" of historical admixture events. These global admixture events occurred at different times through different biological and demographic processes, and their long-lasting effect is related to our ability to associate an individual with their matching admixture event.
In relatively isolated populations the admixture event is likely old, and GPS would localize a test individual with their parental population more accurately. By contrast, if the admixture event was recent and the population did not maintain relative isolation, GPS prediction would be erroneous ( Figure S2). This is the case of Caribbean populations, whose admixture proportions still reflect the massive Nineteenth and Twentieth centuries' mixture events involving Native Americans, West Europeans, and Africans (Elhaik et al., 2014). While the original level of isolation remains unknown, these two scenarios can be distinguished by comparing the admixture proportions of the test individual and adjacent populations. If this similarity is high, we can conclude that we have inferred the likely location of the admixture event that shaped the admixture proportion of the test individual. If the opposite is true, the individual is either mixed and thereby violates the assumptions of the GPS model or the parental populations do not exist either in GPS's reference panel or in reality. Most of the time (83%) GPS predicted unmixed individuals to their true locations with most of the remaining individuals predicted to neighboring countries (Elhaik et al., 2014).
To understand how migration modifies the admixture proportions of the migratory and host populations, we can consider two simple cases of point or massive migration followed by assimilation, and a third case of migration followed by isolation. Point migration events have little effect on the admixture proportions of the host population, particularly when it absorbs a paucity of migrants, in which case the migrants' admixture proportions would resemble those of the host population within a few generations and their resting place would represent that of the host population. Massive demographic movements, such as large-scale invasions or migrations that affect a large part of the population, are rare and create temporal shifts in the admixture proportions of the host population. The host population would temporarily appear as a two-way mixed population, reflecting the components of the host and invading populations (e.g., European and Native American, in the case of Puerto Ricans) until the admixture proportions homogenize population-wise. If this process is completed, the admixture signature of this region may be altered and the geographical placement of the host population would again represent the last place where the admixture event took place for both the host and invading populations. GPS would, thereby, predict the host population's location for both populations. Populations that migrate from A to B and maintain genetic isolation would be predicted to point A in the leave-one-out population analysis. While human migrations are not uncommon, maintaining perfect genetic isolation over a long period of time is very difficult (e.g., Veeramah et al., 2011; Behar et al., 2012; Elhaik, 2016; Hellenthal et al., 2016), and GPS predictions for the vast majority of worldwide populations indicate that these cases are indeed exceptional (Elhaik et al., 2014). Despite its advantages, GPS has several limitations. First, it yields the most accurate predictions for unmixed individuals. Second, using migratory or highly mixed populations (both are detectable through the leave-one-out population analysis) as reference populations may bias the predictions. Further developments are necessary to overcome these limitations and make GPS applicable to mixed population groups (e.g., African Americans).
CONCLUSION
The meaning of the term "Ashkenaz" and the geographical origins of AJs and Yiddish are some of the longest-standing questions in history, genetics, and linguistics. In our previous work, we identified "ancient Ashkenaz," a region in northeastern Turkey that harbors four primeval villages whose names resemble Ashkenaz. Here, we elaborate on the meaning of this term and argue that it acquired its modern meaning only after a critical mass of Ashkenazic Jews arrived in Germany. We show that all bio-localization analyses have localized AJs to Turkey and that the non-Levantine origins of AJs are supported by ancient genome analyses. Overall, these findings are compatible with the hypothesis of an Irano-Turko-Slavic origin for AJs and a Slavic origin for Yiddish, and contradict the predictions of the Rhineland hypothesis, which lacks historical, genetic, and linguistic support (Table 1).
AUTHOR CONTRIBUTIONS
EE conceived the paper. MP processed the ancient DNA data. RD and EE carried out the analyses. EE co-wrote it with PW and RD. All authors approved the paper. | 2017-07-14T18:03:28.577Z | 2017-06-21T00:00:00.000 | {
"year": 2017,
"sha1": "1daf5108086e3bf10167bc7e35e21ebc85cf62ba",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2017.00087/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1daf5108086e3bf10167bc7e35e21ebc85cf62ba",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
270117949 | pes2o/s2orc | v3-fos-license | FedKG: A Knowledge Distillation-Based Federated Graph Method for Social Bot Detection
Malicious social bots pose a serious threat to social network security by spreading false information and guiding bad opinions in social networks. The singularity and scarcity of single organization data and the high cost of labeling social bots have given rise to the construction of federated models that combine federated learning with social bot detection. In this paper, we first combine the federated learning framework with the Relational Graph Convolutional Neural Network (RGCN) model to achieve federated social bot detection. A class-level cross entropy loss function is applied in the local model training to mitigate the effects of the class imbalance problem in local data. To address the data heterogeneity issue from multiple participants, we optimize the classical federated learning algorithm by applying knowledge distillation methods. Specifically, we adjust the client-side and server-side models separately: training a global generator to generate pseudo-samples based on the local data distribution knowledge to correct the optimization direction of client-side classification models, and integrating client-side classification models’ knowledge on the server side to guide the training of the global classification model. We conduct extensive experiments on widely used datasets, and the results demonstrate the effectiveness of our approach in social bot detection in heterogeneous data scenarios. Compared to baseline methods, our approach achieves a nearly 3–10% improvement in detection accuracy when the data heterogeneity is larger. Additionally, our method achieves the specified accuracy with minimal communication rounds.
Introduction
The application of online social networks is becoming increasingly widespread, with the number of users on platforms like Twitter, Facebook, Instagram, and Weibo continuously growing. Social networks have become important tools for users to communicate, entertain, and obtain information. Alongside this, a new type of account controlled by automated programs has emerged on these social platforms [1]: social bots. Social bots can be categorized into malicious and benign accounts. Some benign social bots are designed to automatically aggregate content from various sources and provide services to regular users. However, over time, malicious social bots with nefarious purposes have emerged. These malicious social bot accounts strive to disguise themselves as real human accounts, creating artificial popularity, spreading false information [2], and guiding negative public opinion [3] on social platforms. Evidence of social bots manipulating public opinion for political purposes has been found in events such as Brexit and the 2016 US presidential election [4]. In summary, malicious social bots engage in a range of behaviors that disrupt the social network's order, posing a significant threat to the social network's security. Therefore, there is an urgent need for suitable and effective methods for social bot detection to foster a healthy social network environment. In our current research, we examine the differences between social bots and humans but have not yet provided a detailed classification of social bots.
Existing social bot detection methods focus on the feature information of social accounts, including user account features, user tweet features, and user relationship features. Graph representation learning methods based on graph neural networks are important approaches for fusing relationship features among users. We note that most recent social bot detection work is conducted on isolated data parties. However, utilizing account information from individual data parties for model training has several limitations: (1) the limited number and feature space of social accounts held by a single data party restricts the improvement of model performance [5]; (2) domain-specific bots across different data parties often exhibit similar characteristics [6], and sharing data facilitates the discovery of these common features; (3) labeling social bots is a challenging task, which means a high cost for each data island dedicated to building social bot detection models. Therefore, sharing data from multiple parties for joint model training can leverage the advantages of a larger user base and richer user feature information, while alleviating the problem of difficult data labeling.
Data privacy is a primary challenge in implementing the data fusion of multi-party account information [7]. Under the premise of complying with relevant regulations and protecting data privacy, data owners often do not share data directly. The key problem lies in how to achieve joint model training by integrating the multi-party data while ensuring that data privacy is not compromised. Federated learning has emerged as an important method to deal with the related problems.
Additionally, the heterogeneity among multiple data parties, known as the non-Independently and Identically Distributed (non-IID) problem, can significantly impact the performance of federated learning models [8]. For instance, when multiple data owners share their data for training a federated social bot detection model, there are substantial differences in the composition and distribution of social accounts among participants (e.g., the numbers of human and social bot accounts follow different distributions across platforms). In such cases, utilizing traditional federated learning methods for direct parameter averaging to obtain a global model can lead to severe biases in the global model, thereby affecting the performance of individual participant models. Consequently, even though federated learning facilitates the integration of multi-party data to some extent, the resulting global model may not positively impact the performance of all participant models. Some existing works [9,10] have improved upon the traditional federated averaging method by applying knowledge distillation. They distill the logit knowledge of client models to the server, resulting in significant improvements over existing federated learning algorithms when dealing with data heterogeneity among clients.
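As a rough illustration of the logit-distillation idea these methods build on, here is a generic sketch (not the exact formulation of [9] or [10]; the temperature value and the averaging of client logits are assumptions):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened class distributions;
    the T*T factor keeps gradient magnitudes comparable across temperatures."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

# Example: a server-side global model (student) imitates the averaged
# logits of client models (teachers) on a shared/proxy batch.
student_logits = torch.randn(8, 2)               # 8 accounts, 2 classes
teacher_logits = torch.randn(3, 8, 2).mean(0)    # average of 3 clients' logits
loss = distillation_loss(student_logits, teacher_logits)
```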
Some recent research has focused on federal social bot detection.However, reference [5] fails to acknowledge the significance of the interaction between social accounts in enhancing the detection performance, while reference [6] overlooks the heterogeneity of the data distribution in federated learning.These shortcomings render these methods inadequate for real-world federated social bot detection scenarios.Our approach makes corresponding improvements to address these limitations.In this paper, we combine federated learning with the Relational Graph Convolutional Neural Network (RGCN) [11] graph representation learning method to construct a federated social bot detection model.We apply a class-level cross-entropy loss function to address the class imbalance problem between social bots and human accounts.The knowledge distillation method is applied to optimize and adjust for the data heterogeneity issue among multiple participants.Specifically 1.
1. We construct a naive Federated RGCN framework for social bot detection in a multi-party data fusion scenario;
2. We define a class-level cross-entropy loss function and apply it to the training process of local models, mitigating the impact of class imbalance during client model training;
3. By applying the knowledge distillation method, we make adjustments at both the server and client sides, effectively alleviating the impact of data distribution heterogeneity among clients on the performance of the federated learning model.
Social Bot Detection
The detection of bot accounts in social media began as early as 2010 [12]. Early social bot detection work primarily involved feature engineering, applying traditional machine learning techniques to classify social accounts. Varol et al. [13] evaluated classification models using over a thousand features extracted from users' public data and metadata, including friends, tweets, network patterns, activity time series, and sentiment. Yang et al. [14] achieved efficient analysis using minimal account metadata and extended it to real-time processing of the entire public tweet stream on Twitter. Kantepe et al. [15] extracted a plethora of features based on account profiles, tweets, and temporal behavior, and classified accounts using machine learning classifiers.
With the widespread application of deep neural network models, several social bot detection frameworks based on deep neural networks have emerged. Kudugunta et al. [16] detected bots at the tweet level using content and metadata, employing a deep neural network detection model based on the Long Short-Term Memory (LSTM) architecture. Wei et al. [17] applied a bidirectional LSTM (BiLSTM) network to extract semantic features from tweets. Stanton et al. [18] applied generative adversarial networks for spam detection, relying on limited labeled and unlabeled data, thus avoiding labeling costs and inaccuracies. SATAR [19] combines a user's tweet semantic information, attribute information, and neighbor information for generalization, training in a self-supervised manner on a large number of users and fine-tuning for specific bot detection scenarios to adapt to the evolution of social bots. Hayawi et al. [20] utilized LSTM networks and dense layers to handle a hybrid input model. Arin et al. [21] employed three LSTM models and a fully connected layer to capture the complex behavioral activities of accounts, while exploring three learning schemes to train components connected across different levels.
With the application of graph neural networks to node representation and node classification tasks, many works have emerged that utilize graph models for social bot detection. Alhosseini et al. [22] were the first to implement social bot detection based on graph models, applying the GCN method to obtain user representations and classify social accounts. BotRGCN [23] applied the relational graph convolutional network model to achieve social bot classification, obtaining better detection results. HGT [24] uses relational graph transformers to model the heterogeneous influence among users to learn user representations and uses semantic attention networks to aggregate information.
Federated Learning
The continuous development of big data technology has led to the emergence of an increasing number of data silos. Federated learning is an important approach for dealing with the data silo problem. The concept of federated learning was first introduced by McMahan et al. in 2017 [25], with the FedAvg algorithm as the basis. It is a distributed machine learning method that allows data to remain on local clients for model training, eliminating the need for centralized data transmission to a server for centralized learning.
The server only needs to aggregate the information learned by the clients to complete the global model update. Currently, federated learning is widely applied in various fields, such as medicine [26], finance [27], and recommendation systems [28].
The non-IID problem is an important challenge in federated learning that needs to be addressed [29]. In federated learning, multiple participants often have different data characteristics. If a naive federated averaging method is used to simply aggregate the model parameters obtained from the local training of each participant, the model often falls into a local optimum [30], resulting in an ineffective global model. Our work focuses on the label-distribution imbalance problem in social bot detection within federated learning, where multiple data contributors have similar data characteristics but the label distribution of the data is highly imbalanced, leading to inconsistencies between the local and global optima.
Existing optimization methods for the non-IID problem in federated learning can be considered from two perspectives: model-based and data-based. Model-based optimization methods can be further divided into adjustments to the server aggregation method and adjustments to the client parameter updates. Adjustments to the server aggregation method can involve adding weights to the parameters of each client, while adjustments to the client parameter updates include adding regularization terms to the local model updates to reduce the distance between the local and global models, thus addressing the non-IID problem. For example, FedNova [31] improves the model aggregation of FedAvg by normalizing and scaling the local updates of each participant based on their local training epochs before updating the global model. FedProx [32], based on FedAvg, adjusts the client parameter updates by adding a regularization term that constrains the optimization distance between the local and global models. SCAFFOLD [30] uses control variates to correct the client drift in local updates while ensuring faster convergence. FedDyn [33] and MOON [34] constrain the direction of the local model updates by comparing the similarity between model representations, thereby maintaining consistency between local and global optimization objectives.
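As an illustration of the client-side regularization idea, the following is a minimal sketch of a FedProx-style proximal term in PyTorch; the coefficient mu and the helper name are our own illustrative choices, not values taken from the cited works.

import torch

def fedprox_local_loss(local_model, global_params, task_loss, mu=0.01):
    # Proximal term (mu/2) * ||w_local - w_global||^2, added to the task
    # loss so local updates stay close to the current global model.
    prox = sum(torch.sum((w - w_g.detach()) ** 2)
               for w, w_g in zip(local_model.parameters(), global_params))
    return task_loss + (mu / 2.0) * prox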
Data-based optimization methods can leverage data generation techniques to make the data among different participants closer to identically distributed, thus alleviating the performance loss of the global model caused by data heterogeneity between clients. FedDistill [35] is a data-free knowledge distillation method in which the participants share the average of label-based logit vectors. FedGAN [36] trains a generative adversarial network (GAN) to handle non-IID data challenges in a communication-efficient manner, but inevitably introduces bias. FedGen [37] and FedFTG [38] utilize generator models to simulate the global data distribution for performance improvement.
Federated Social Bot Detection
Most existing works on social bot detection are based on data from a single organization, with few considering the fusion of multi-party data to train powerful models and alleviate the difficulty each organization faces in labeling social bots. DA-MRG [6] made the first attempt at federated social bot detection: the method locally trains a domain-aware multi-relation graph model for social bot detection. Data were divided among multiple participants based on data size, and the classic FedAvg algorithm was used to aggregate the parameters among the participants to obtain a global detection model. FedACK [5] also focused on the combination of federated learning and social bot detection, extracting features from user metadata and tweet text. When constructing its federated learning models, it considered the impact of data heterogeneity among multiple participants and proposed a federated adversarial contrastive knowledge distillation method to address the issue. These methods do not simultaneously consider an effective social bot detection framework and the heterogeneity of data among multiple participants. To address the aforementioned limitations, we propose a knowledge distillation-based federated graph social bot detection method, achieving more efficient federated social bot detection.
Problem Definition
Given a social network account U, we define the basic account information as P, tweet information as T, and relationship information as N. Our goal is to learn a bot detection function f(U(P, T, N)) → ŷ that makes the predicted label ŷ close to the true label y, maximizing the prediction accuracy of the model. For the construction of the federated social bot detection model, we consider the traditional federated learning framework based on the server-client scheme, assuming that there exist a server and K clients jointly participating in the training of the federated social bot model. Each client holds a private dataset D_k containing the information of N_k individual users; the final optimization objective of the problem is then defined as

min_θ Σ_{k=1}^{K} (N_k / N) L(θ; D_k),

where N = Σ_{k=1}^{K} N_k, and L is a loss function parameterized by the model parameters θ that evaluates the performance of each client's prediction model on its private dataset D_k.
FedKG Social Bot Detection Framework
The overall framework of FedKG is shown in Figure 1. Below, we introduce the overall model in terms of its two components: the local social bot detection framework and the knowledge distillation-optimized federated learning process.
User feature coding
Firstly, the user tweet information, user description information, and user account information are encoded to obtain the initial feature representation of the user. We use an encoding approach similar to BotRGCN [23]. For the semantic information in the user's tweets and description, a pre-trained RoBERTa model [39] is applied, yielding representations r_t and r_d; for the user's numerical features, z-score normalization is applied to obtain the representation r_n; for the user's categorical features, one-hot encoding is applied to obtain the representation r_c. These feature representations are then concatenated to form the overall feature vector r_i = [r_t, r_d, r_n, r_c] of the user, which serves as the initial feature representation of the user.
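A minimal sketch of this encoding step is shown below; the feature dimensionalities are hypothetical placeholders, since the paper does not state them here.

import torch

n_users = 4
# Hypothetical per-user encodings; dimensions are placeholders.
r_t = torch.randn(n_users, 768)         # RoBERTa encoding of tweets
r_d = torch.randn(n_users, 768)         # RoBERTa encoding of descriptions
num = torch.rand(n_users, 3)            # raw numerical features
r_n = (num - num.mean(0)) / num.std(0)  # z-score each feature across users
r_c = torch.eye(3)[torch.randint(0, 3, (n_users,))]  # one-hot categories

# Concatenate into the initial user feature matrix: r_i = [r_t, r_d, r_n, r_c].
r = torch.cat([r_t, r_d, r_n, r_c], dim=1)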
Feature extractor for user feature representation
The feature extractor is a module that performs feature extraction on users based on graph representation learning methods. A social network graph is first constructed. In online social networks, the neighbor information of users provides key information for a more accurate analysis of users, resulting in more accurate user feature representations. Therefore, a social network graph is constructed based on neighbor relationships, and a graph neural network model is applied for feature extraction. Users in the social network are treated as nodes, forming a node set V, and the neighbor relationships between users are treated as edges, forming an edge set E. There are various types of relationships between users, such as following, retweeting, and commenting. In this work, the focus is on the mutual following relationships between users. Considering that following and follower relationships contain different information, the edge set contains two types of edges: one representing the following relationship between users, and the other representing the follower relationship. Thus, a heterogeneous graph G = {V, E} is constructed to simulate real-world social networks. We apply the RGCN model as the graph neural network to learn low-dimensional feature representations of users. Firstly, we apply a linear layer to transform the initial feature representation of each user into the initial representation vector of the corresponding node in the graph:

x_i^(0) = W_1 r_i + b_1,

where W_1 and b_1 are learnable parameters. After that, L layers of RGCN are applied to learn the user embedding representation vectors.
X^(l+1) = σ( Σ_{r∈R} Θ_r X^(l) W_r^(l) + X^(l) W_0^(l) ),

where Θ_r is the (normalized) adjacency matrix under relation r, R denotes the set of relationship types between users, and N_r denotes the set of neighbors of a user with relationship type r, which is used for the normalization. After L layers of RGCN, we obtain the user feature representations x^L. At this point, the final feature representation of each user has been produced by the feature extractor.
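For concreteness, here is a sketch of such a feature extractor using PyTorch Geometric's RGCNConv; the hidden size, activation choice, and layer count (two, matching the implementation details reported later) are our assumptions.

import torch.nn as nn
from torch_geometric.nn import RGCNConv

class FeatureExtractor(nn.Module):
    def __init__(self, in_dim, hid_dim=128, num_relations=2, num_layers=2):
        super().__init__()
        self.lin_in = nn.Linear(in_dim, hid_dim)  # x^(0) = W1 r + b1
        self.convs = nn.ModuleList(
            [RGCNConv(hid_dim, hid_dim, num_relations) for _ in range(num_layers)]
        )
        self.act = nn.LeakyReLU()

    def forward(self, r, edge_index, edge_type):
        x = self.lin_in(r)
        for conv in self.convs:          # L layers of RGCN message passing
            x = self.act(conv(x, edge_index, edge_type))
        return x                          # x^L: final user representations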
Classifier for user classification
The classifier further transforms the user feature representation into the final classification output. After obtaining the user feature representation through the feature extraction model, we apply a Multilayer Perceptron (MLP) network to transform the user representation:

h'_i = W_2 h_i + b_2,

where W_2 and b_2 are learnable parameters and h_i is the user feature representation. After that, a softmax layer is applied to transform the transformed user feature representation into a probability distribution:

ŷ_i = softmax(W_O h'_i + b_O),

where W_O and b_O are learnable parameters.
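A minimal sketch of such a classifier head follows; the three-layer depth matches the implementation details reported later, while the hidden sizes are assumptions.

import torch.nn as nn

# Three-layer MLP mapping user representations to class probabilities.
classifier = nn.Sequential(
    nn.Linear(128, 64), nn.LeakyReLU(),
    nn.Linear(64, 32), nn.LeakyReLU(),
    nn.Linear(32, 2),        # logits over {human, bot}
    nn.Softmax(dim=-1),      # probability distribution per user
)

In practice one would usually keep the logits and fold the softmax into the loss for numerical stability; it is made explicit here only to mirror the description above.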
Learning and optimization
We optimize the feature extractor F and classifier C with different loss functions during model training. Social bot detection is typically formulated as a binary classification problem, and the loss function is generally defined as the cross-entropy loss. For example, the losses of the feature extractor and the classifier take the form

L^F_ce = L^C_ce = − Σ_{v_i∈V} [ y_i log ŷ_i + (1 − y_i) log(1 − ŷ_i) ],

where L^F_ce and L^C_ce are the loss functions of feature extractor F and classifier C, respectively, V is the set of all sample nodes, y_i is the true label of node v_i, and ŷ_i is the probability that the node is predicted as the positive class by the model.
In the social bot detection task, the class imbalance between human and social bot accounts greatly affects the effectiveness of model training. The traditional sample-level cross-entropy loss function calculates the loss for each sample and then takes the average; if the number of samples in a class is small, their impact on the average loss is diluted. The optimization algorithm may then be biased towards samples from the majority class, resulting in insufficient training on minority-class samples. Thus, inspired by DAGAD [40], we modify the traditional sample-level cross-entropy loss into a class-level cross-entropy loss to alleviate local class imbalance for the local model. Specifically, we calculate the loss separately for each class, and then assign the same weight to these losses for summation. In this case, the losses of the classes are treated equally, so even if the number of samples in a certain class is small, its loss value still has a substantial impact on model training. The loss is defined as follows:

L'_CE = − (1/|V_min|) Σ_{v_i∈V_min} [ y_i log ŷ_i + (1 − y_i) log(1 − ŷ_i) ] − (1/|V_maj|) Σ_{v_i∈V_maj} [ y_i log ŷ_i + (1 − y_i) log(1 − ŷ_i) ],

where L'_CE is the class-level cross-entropy loss function, V_min and V_maj represent the sets of minority-class and majority-class sample nodes, respectively, y_i is the true label of the node, and ŷ_i is the probability that the node is predicted to be the positive class by the model. Correspondingly, the loss functions of the feature extractor and classifier are modified to class-level cross-entropy losses, i.e., L^F'_ce and L^C'_ce. The above describes the complete process of the basic social bot detection model for each client.
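The following is a minimal sketch of this class-level loss in PyTorch; the per-class averaging followed by an equally weighted sum follows the description above, while the function name is our own.

import torch
import torch.nn.functional as F

def class_level_ce(p_pos, labels):
    # p_pos: predicted positive-class probability per node; labels: 0/1.
    # Average the binary cross-entropy within each class, then sum the
    # per-class means with equal weight, so the minority class is not diluted.
    loss = torch.tensor(0.0)
    for c in labels.unique():
        mask = labels == c
        loss = loss + F.binary_cross_entropy(p_pos[mask], labels[mask].float())
    return loss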
Knowledge Distillation-Optimized Federated Learning Framework
The overall process of the naive federated learning framework is as follows: the server first initializes the global model parameters θ and distributes them to each client; each client trains its local model on its local private data D_k, including the training of the local feature extraction model and the local classification model; after a client's local training is finished, it sends its updated model parameters θ_k to the server for the preliminary average aggregation:

θ^(t+1) = Σ_{k=1}^{K} (N_k / N) θ_k^t,

where θ^(t+1) denotes the new global model parameters for the next round, K is the number of participating clients, N_k is the number of samples of client k, and N is the total number of samples across all clients. This completes one global iteration, which is repeated many times to obtain the optimal global model. The data heterogeneity among multiple participants in federated learning has a non-negligible impact on its performance. Therefore, in this work, we apply knowledge distillation methods to optimize the federated learning algorithm.
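A minimal sketch of this sample-weighted aggregation step is shown below; it operates on PyTorch state dicts and is our own illustrative helper.

def fedavg_aggregate(client_states, client_sizes):
    # theta^(t+1) = sum_k (N_k / N) * theta_k, applied key-by-key to
    # matching parameter tensors from each client's state dict.
    total = float(sum(client_sizes))
    return {
        key: sum(state[key] * (n / total)
                 for state, n in zip(client_states, client_sizes))
        for key in client_states[0]
    }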
We train a global generator G on the server side, aiming to learn the global data distribution. The parameters of the global generator G are shared among all clients, as shown in Figure 1. Given standard Gaussian noise z ∼ N(0, 1) and target labels y, the global generator generates pseudo-samples x (i.e., user feature representations). These pseudo-samples incorporate data information from all clients. By applying these pseudo-samples to further guide the training of the local classification model, the decision boundary learned by the client model can be brought closer to the decision boundary induced by the global data. This approach mitigates the impact of data heterogeneity among clients and improves the performance of the client detection models.
Therefore, the final loss function L^C_k of the client's classification model C is modified on the basis of the class-level cross-entropy loss as follows:

L^C_k = L^{C,k}_{ce'} + L^k_dis,

where L^{C,k}_{ce'} is the class-level cross-entropy loss of the classifier computed by client k on its local real data, L^k_dis is the difference loss between the probabilities obtained from the real samples and the pseudo-samples through the classification model, N_k is the number of training samples of client k, D_KL is the Kullback-Leibler divergence, C_k is the classification model of client k, and σ is the softmax function. By minimizing Equation (9), we let the probability distribution obtained from the real samples approach the probability distribution obtained from the pseudo-samples. This allows the pseudo-samples, which carry knowledge of the global data distribution, to influence the decision boundary of the client's local classification model.
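The sketch below illustrates one plausible form of the distillation term L^k_dis; the direction of the KL divergence and the assumption that the two batches share the same shape are our own reading of the description, not the authors' exact formulation.

import torch.nn.functional as F

def distillation_term(classifier, real_x, pseudo_x):
    # Pull the prediction distribution on real samples toward the one the
    # classifier produces on the generator's pseudo-samples (equal batch shapes).
    log_p_real = F.log_softmax(classifier(real_x), dim=-1)
    p_pseudo = F.softmax(classifier(pseudo_x), dim=-1)
    return F.kl_div(log_p_real, p_pseudo, reduction="batchmean")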
The global generator G is trained on the server using knowledge distillation to integrate the data distribution knowledge of the global clients. The logit knowledge from the client models serves as the teacher knowledge, while the server's global generator serves as the student. Equation (10) is defined to measure the difference in probability distributions between the logit outputs of the client models and the logit outputs of the server model, where K is the number of participating clients, α_{k,y}^t is the weight of the samples in client k with label y relative to all global samples with label y, C_k is the classification model of client k, C is the global classification model, and σ is the softmax function. We train the global generator by maximizing Equation (10) to generate pseudo-samples that are beneficial for model training. In addition, a diversity loss (Equation (11), where x_i and x_j are pseudo-samples) is introduced to ensure the diversity of the generated samples and avoid mode collapse. The overall loss of the global generator (Equation (12)) then combines the distillation objective of Equation (10) with the diversity loss of Equation (11). To further address data heterogeneity among clients, after training the global generator, the server also applies the knowledge distillation strategy to adjust the globally averaged classifier model; the loss of this global classification model training is defined in Equation (13). By minimizing Equation (13), the global classification model further integrates the knowledge of the client classification models, improving the generalization of the global model and enhancing communication efficiency.
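To make the server-side step concrete, here is a rough sketch of one generator update; the uniform client weighting (in place of α_{k,y}), the conditional generator interface, and the exact diversity penalty are all assumptions layered on the description above.

import torch
import torch.nn.functional as F

def generator_step(generator, client_models, global_model, opt,
                   noise_dim, n_classes, batch=64):
    z = torch.randn(batch, noise_dim)
    y = torch.randint(0, n_classes, (batch,))
    x = generator(z, y)  # pseudo user-feature representations (assumed API)

    # Ensemble of client predictions acts as the teacher distribution
    # (in practice the client and global models would be frozen here).
    teacher = torch.stack(
        [F.softmax(m(x), dim=-1) for m in client_models]).mean(0)
    student = F.log_softmax(global_model(x), dim=-1)
    kd = F.kl_div(student, teacher, reduction="batchmean")

    diversity = torch.exp(-torch.cdist(x, x).mean())  # penalize collapse
    loss = -kd + diversity  # maximize teacher-student discrepancy (Eq. (10))
    opt.zero_grad(); loss.backward(); opt.step()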
Datasets
We conduct experiments on the Twibot-20 [41] dataset to evaluate our method. The Twibot-20 dataset is a widely used benchmark for social bot detection. It contains a total of 229,573 users, of whom only 11,826 accounts are labeled, including 5237 human accounts and 6589 bot accounts. It provides users' semantic information, account information, and following/follower relationships among users, supporting the construction of models based on graph representation learning. The user account information used here contains numerical and categorical features; the specific feature names and descriptions are shown in Tables 1 and 2, respectively.

We combine the federated learning framework with the classical RGCN method and apply knowledge distillation to optimize the classical federated learning algorithm against data heterogeneity among clients. Therefore, the baselines we compare against include a detection model trained only locally (named Local), and the federated learning methods FedAvg [25], FedProx [32], and FedDistill [35], which address the data heterogeneity problem. We evaluate the performance of these methods under different degrees of data heterogeneity.
• Local: A detection model trained only on local private data, without any information exchange between participants;
• FedAvg: A basic federated learning algorithm with parameter sharing between the clients and the server, where the server performs weighted averaging of client parameters based on their sample quantities;
• FedProx: A federated learning method that introduces a regularization term into the clients' loss function to reduce the distance between local and global models;
• FedDistill: A data-free federated knowledge distillation method in which the clients share the average of label-based logit vectors. Since there is no parameter sharing, the performance of FedDistill decreases significantly; to ensure fairness, we modified the original method to share logit averages on top of shared model parameters as a baseline comparison method;
• FedACK: A federated social bot detection method that proposes a GAN-based federated adversarial contrastive knowledge distillation mechanism. Its detection model does not consider the relationship information between users, so its detection performance lags considerably behind graph model-based methods; we modify its framework by replacing its local social bot detection model with our local detection framework as a baseline comparison method.
Data Heterogeneity
In the experimental validation, we divide the Twibot-20 dataset into multiple parts to simulate multiple participants in federated learning. To create heterogeneous data distributions across participants, we apply the Dirichlet distribution Dir(α) for non-IID partitioning of the data, following previous work [42]; the parameter α reflects the degree of data heterogeneity, and a smaller α indicates a higher degree of heterogeneity among clients.
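A minimal sketch of this label-wise Dirichlet partitioning is given below; it is a common recipe for the scheme described in [42], with the seed handling being our own choice.

import numpy as np

def dirichlet_partition(labels, n_clients, alpha, seed=0):
    # Assign sample indices to clients with per-class proportions drawn
    # from Dir(alpha); smaller alpha yields more heterogeneous clients.
    rng = np.random.default_rng(seed)
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in zip(clients, np.split(idx, cuts)):
            client.extend(part.tolist())
    return clients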
Implementation Details
The feature extractor utilizes a two-layer RGCN, while the classifier employs a three-layer MLP. The global generator model is an MLP with two linear layers, and the hidden layer dimension is 16. The basic parameters for model training are as follows: the batch size is 64, the learning rate is 0.01, the optimizer is Adam, and the number of global communication rounds is set to 100.
Regarding the evaluation metrics, our study evaluates model performance based on the average accuracy of each client model on the test set in each iteration. The experimental results below report the highest average accuracy achieved during the iterations.
Performance Comparison
We first set the number of clients to four and the number of local training epochs to five. We set the heterogeneity hyperparameter α to {1, 0.8, 0.5, 0.3} to compare the performance of the different methods under different degrees of data distribution heterogeneity. As shown in Figure 2, we visualize the number of human and bot samples for each α; as α decreases, the differences in class counts among participants gradually become larger, which represents a major challenge for the construction of traditional federated learning models. Table 3 presents the detection performance of the different methods under different degrees of data heterogeneity. It can be observed that our method, FedKG, consistently achieves the highest accuracy at higher degrees of heterogeneity. Specifically, at an α of 0.8, our method yields a 3% improvement over the second-best method, and at an α of 0.5, it yields a nearly 10% improvement. Compared to the local training approach, where clients only utilize their local data, the federated learning methods that involve data sharing show significant improvements in performance. At a low level of data heterogeneity (α = 1), all of the federated learning methods achieve relatively high accuracy; at higher levels of heterogeneity, our method demonstrates a clear advantage.

We also conducted an ablation experiment by replacing the class-level cross-entropy loss with the ordinary sample-level cross-entropy loss, resulting in a variant method called FedKG-C. The experimental results show that, after this replacement, the performance of FedKG-C decreases compared to FedKG under all degrees of data heterogeneity, which indicates that the class-level cross-entropy loss plays a crucial role in addressing the imbalance issue in social bot detection. Furthermore, the variant method still outperforms the other baseline methods when the data heterogeneity is low. This suggests that applying knowledge distillation tuning at both the client and server sides can effectively alleviate the impact of the data heterogeneity problem on the performance of federated learning algorithms, resulting in improved results.
Communication Efficiency Comparison
Table 4 shows the number of rounds required for all algorithms to achieve a specified accuracy under different degrees of data heterogeneity when the number of clients is set to four and the number of local training epochs is set to five. The table shows that, under different degrees of heterogeneity, our FedKG method achieves the corresponding accuracy with the minimum number of rounds, showing a clear advantage in terms of efficiency. The reason is that during each round of FedKG's federated learning, the knowledge distillation approach is applied: the global generator is trained with the logit information from each client as the teacher knowledge, and pseudo-samples with the global data distribution are generated to guide the training of the clients' classification models. Additionally, after averaging the parameters on the server, further training of the global classification model is performed using the clients' classification logits as knowledge. This separate tuning of the clients and the server allows for the rapid and efficient fusion of knowledge across clients, resulting in a significant improvement in communication efficiency.

Figure 3 shows the model learning curves over 100 global communication rounds for the different methods. It can be observed that our method, FedKG, achieves the specified accuracy with the fewest communication rounds and remains relatively stable throughout the global iteration process, demonstrating the best performance. The learning curves of both FedProx and FedDistill show an increasing and then relatively stable trend. However, FedProx, which addresses data heterogeneity by adding a regularization term on the client side, requires many communication rounds between the server and the clients, resulting in poor communication efficiency.
Impact of the Number of Clients
We also conducted experiments varying the number of clients to explore the impact of the number of participants on model performance in the cross-organizational federated social bot detection setting. Table 5 presents the accuracy of the FedKG method under different numbers of clients when the data heterogeneity level α between clients is 0.3, 0.5, 0.8, and 1, respectively. It can be observed that the model achieves the highest accuracy when the number of clients is two, regardless of the heterogeneity level. As the number of clients increases, model accuracy under all degrees of heterogeneity shows a decreasing trend, and this effect becomes more pronounced at higher heterogeneity levels. This indicates that, as the number of participants increases, data heterogeneity among them poses an increasingly severe challenge to the performance of federated learning models.
Discussion
In this section, we discuss some limitations of the present study, further analysis of the experimental results, and future research directions. Firstly, due to the difficulty of obtaining and labeling data from multiple real social platforms, we heterogeneously partition the existing Twibot-20 dataset to simulate multiple data islands in the real world. This partly reflects the effectiveness of our method in training joint models on shared multi-party data. In the future, we will strive to collect data from multiple social platforms to train and verify the model in actual scenarios. Furthermore, in Section 4.5, we discuss the impact of the number of participants on model performance. The results indicate that the model achieves its highest accuracy when the number of clients is two. According to the results presented in Table 5, it is reasonable to infer that as the number of participants increases, the data heterogeneity among them will pose more serious challenges to the performance of the federated learning model. However, this does not necessarily imply that a fixed number of two participants should be employed when developing a federated social bot detection model in real-world scenarios. The choice of the number of participants also depends on factors such as each participant's data volume, and warrants further comparison and analysis. Finally, we only consider the data heterogeneity issue among participants and assume the same model architecture across clients, neglecting the need for different models among participants. In the future, we will pay more attention to personalized federated learning methods and further improve the federated learning algorithm under model heterogeneity among participants. For example, parameter decoupling can be adopted, where each client retains a portion of the model parameters for local training without sharing them with the server. This enables each client to learn personalized representations, thereby achieving model heterogeneity across clients.
Conclusions
The presence of malicious social bots poses a serious threat to social network security. The singularity and scarcity of individual data holdings, coupled with the high cost of labeling social bots, motivate a joint model combining federated learning and social bot detection. We combine the federated learning framework with the RGCN model to achieve federated social bot detection. To alleviate the impact of class imbalance in local data, a class-level cross-entropy loss function is applied during local model training. We apply the knowledge distillation method to address the issue of data heterogeneity. Extensive experimental results on a benchmark dataset demonstrate the effectiveness of our method in social bot detection, outperforming baseline federated learning methods. In the future, we will further explore the combination of federated knowledge distillation and social bot detection, focusing on personalization in federated learning, and strive to build an efficient framework for federated social bot detection in real scenarios with model heterogeneity among clients.
Specifically, our social bot detection model is first divided into a feature extraction model based on the graph representation method and a classification model. The knowledge distillation method is applied to train the global generator and the global classification model. The global generator generates pseudo-samples carrying global data knowledge to guide the training of the local classification models, making the decision boundary of each local model closer to that of a model trained on the global data; the global classification model integrates the logit knowledge of the clients' local classification models, mitigating the impact of data heterogeneity among clients on model performance.
3.2.1. Social Bot Detection Model Based on the RGCN Graph Representation Approach

Each client's local social bot detection model consists of two components: a feature extractor for extracting user feature representations and a classifier for classification, parameterized by θ_f and θ_c, respectively; the feature extractor F and the classifier C are trained in stages during model training. The details of each component of the local model are specified below.
Figure 2. Visualization of data heterogeneity among participants. A darker color indicates that a client has more training samples for the corresponding label.
Figure 3. Learning curves of different methods over 100 communication rounds under different α.

4.4. Impact of Local Epochs

In order to explore the impact of the number of local training epochs on model performance, we conducted experiments on the FedKG method, keeping the number of clients fixed at four and varying the number of local training epochs from 1 to 15. We observed the model performance under different data heterogeneity levels, and the experimental results are shown in Figure 4. From the figure, it can be observed that model performance generally decreases as the number of local training epochs increases, regardless of the degree of data heterogeneity. This indicates that many local training epochs on local clients during federated model training tend to drive the model towards local optima, which in turn degrades the performance of the global model. Therefore, it is crucial to set an appropriate number of local training epochs in federated learning to achieve the best model performance.
Figure 4. Accuracy of FedKG with different local epochs under different α.
Table 3. Comparison of the test accuracy of different methods under different α; a smaller α indicates higher heterogeneity. The bolded value indicates the highest value, and the underlined value indicates the second-highest value.
Table 4. The number of rounds required by different methods under different α to achieve the specified accuracy.
Table 5. Accuracy of FedKG with different numbers of clients under different α. | 2024-05-30T15:22:32.378Z | 2024-05-28T00:00:00.000 | {
"year": 2024,
"sha1": "6c8121f4c6ca49a96870766e172c064aac665c1f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/24/11/3481/pdf?version=1716901987",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "2971e8da7e24760fa65cf0b1c549a1246e60bbf6",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16710785 | pes2o/s2orc | v3-fos-license | Herpes zoster: A clinicocytopathological insight
Herpes zoster, or shingles, is a reactivation of the varicella zoster virus that entered the cutaneous nerve endings during an earlier episode of chickenpox, traveled to the dorsal root ganglia, and remained there in a latent form. The condition is characterized by multiple, painful, unilateral vesicles and ulcerations that typically involve a single dermatome. In this case report, we present a patient with herpes zoster involving the mandibular division of the trigeminal nerve, with unilateral vesicles over the right lower third of the face along the trigeminal nerve tract and intraoral involvement of the buccal mucosa, labial mucosa, and tongue on the same side. Cytopathology revealed classic features of herpes infection, including inclusion bodies, perinuclear halos, and multinucleated cells.
to the right corner of the mouth. Encrustation was seen on the right side of the lip but did not cross the midline [Figure 1].
Intraorally, multiple shallow ulcerations with erythematous irregular borders and tissue tags were seen unilaterally on the right buccal mucosa, tongue, and labial mucosa. These ulcers were painful, causing difficulty in eating and mouth opening. There were no other skin lesions accompanying the orofacial lesions. After careful clinical examination, a provisional diagnosis of HZI was made [Figures 2 and 3].
The clinical differential diagnosis included herpes simplex virus (HSV) infection. HSV infection appears in a similar fashion and, if mild and localized to one side, may be mistaken for HZI; cultures help to differentiate between the two. A cytosmear prepared from the labial mucosa revealed epithelial cells arranged in clusters, with a few isolated cells. These epithelial cells showed intranuclear eosinophilic inclusions with margination of chromatin, resembling Cowdry type A inclusions [Figure 4]. Multinucleated cells [Figure 5], perinuclear halos [Figure 6], and nuclear fragmentation [Figure 7] were also seen.
The cytological features were suggestive of herpes infection. Hence, correlating the clinical features with the cytological features, a final diagnosis of HZI was made.
DISCUSSION
HZ is more commonly known as shingles, from the Latin cingulum, for "girdle." This is because a common presentation of HZ involves a unilateral rash that can wrap around the waist or torso like a girdle. Similarly, the name zoster is derived from classical Greek, referring to a belt-like binding (known as a zoster) used by warriors to secure armor. [2] Zoster lesions contain high concentrations of VZV that can be spread, presumably by the airborne route; this causes primary varicella infection in exposed susceptible persons. Localized zoster is only contagious after the rash erupts and until the lesions crust. [2-5] Herpes zoster progresses as a cluster of small bumps which turn into blisters; the blisters fill with lymph and break open; crust then forms over the blisters; finally, the rash disappears. Postherpetic neuralgia can sometimes occur due to nerve damage. [2] Most people are infected with this virus as children and suffer an episode of chickenpox. The immune system eventually eliminates the virus from most locations, but it remains dormant (or latent) in the ganglia adjacent to the spinal cord (the dorsal root ganglia) or in the semilunar ganglion (ganglion Gasseri) at the base of the skull. Repeated attacks of herpes zoster are rare. [2,6-8] The clinical features of HZI can be grouped into three phases: (1) prodromal, (2) acute, and (3) chronic. Initially, the adult patient exhibits fever, general malaise, and pain and tenderness along the course of the involved sensory nerves, usually unilaterally. Often the trunk is affected. Within a few days, the patient develops a linear papular or vesicular eruption of the skin or mucosa supplied by the affected nerves. It is typically unilateral and dermatomal in distribution. After rupture of the vesicles, healing commences, although secondary infection may intervene and slow the process considerably. Approximately 10% of affected individuals exhibit no prodromal pain. Conversely, on occasion, there may be recurrence in the absence of vesiculation of the skin or mucosa. This pattern is called zoster sine herpete (zoster without rash), and affected patients have severe pain of abrupt onset and hyperesthesia over a specific dermatome. Fever, headache, myalgia, and lymphadenopathy may or may not accompany the recurrence. [9] Herpes zoster may have additional symptoms, depending on the dermatome involved. Herpes zoster ophthalmicus involves the orbit of the eye and occurs in approximately 10-25% of cases. It is caused by the virus reactivating in the ophthalmic division of the trigeminal nerve. In a few patients, symptoms may include conjunctivitis, keratitis, uveitis, and optic nerve palsies that can sometimes cause chronic ocular inflammation, loss of vision, and debilitating pain. Zoster oticus, also known as Ramsay Hunt syndrome type II, involves the ear. It is thought to result from the virus spreading from the facial nerve to the vestibulocochlear nerve. Symptoms include hearing loss and vertigo (rotational dizziness). [2,10,11] Oral lesions occur with trigeminal nerve involvement and may be present on the movable or bound mucosa. The lesions often extend to the midline and frequently are present in conjunction with involvement of the skin overlying the affected quadrant. Like those of varicella, the individual lesions manifest as 1-4 mm, white, opaque vesicles that rupture to form shallow ulcerations. Involvement of the maxilla may be associated with devitalization of the teeth in the affected area. [9]
The cytological presentation includes binucleated and syncytial multinucleated giant cells, along with ballooning of the cytoplasm and Cowdry type A intranuclear eosinophilic inclusions with partial or complete loss of chromatin; these inclusions are separated from the thickened nuclear membrane by a clear zone or halo. The cells also show enlarged, degenerated nuclei with a smudged and homogenized ground-glass or slate-gray appearance (Cowdry type B nuclei). These infections do not show intracytoplasmic inclusions; however, subtle shading within the nucleus may be mistaken for inclusions. [12] On histopathology, the virus exerts its main effects on the epithelial cells. Infected epithelial cells exhibit acantholysis, nuclear clearing, and nuclear enlargement, which have been termed ballooning degeneration. The acantholytic epithelial cells are termed Tzanck cells (a term not specific for herpes; it refers to a free-floating epithelial cell in any intraepithelial vesicle). Nucleolar fragmentation occurs, with condensation of chromatin around the periphery of the nucleus. Multinucleated infected epithelial cells are formed when fusion occurs between adjacent cells. Intercellular edema develops and leads to the formation of an intraepithelial vesicle. Mucosal vesicles rupture rapidly and demonstrate a surface fibrinopurulent membrane. [9] Zoster diagnosis might not be possible in the absence of a rash (e.g., before the rash or in cases of zoster sine herpete). [13] In its classical manifestation, the signs and symptoms of zoster are usually distinctive enough to make an accurate clinical diagnosis once the rash has appeared. [14,15] The accuracy of diagnosis is lower for children and younger adults, in whom the incidence of zoster is lower and its symptoms are less often classic. [7,16] In some cases, particularly in immunosuppressed persons, the location of the rash might be atypical, or a neurologic complication might occur well after resolution of the rash. In these instances, laboratory testing might clarify the diagnosis. [8,17] Tzanck smears are inexpensive and can be used at the bedside to detect multinucleated giant cells in lesional specimens, but they do not distinguish between infections with VZV and HSV. Direct fluorescent antibody (DFA) staining of VZV-infected cells in a scraping from the base of the lesion is rapid and sensitive. DFA and other antigen detection methods can also be used on biopsy material, and eosinophilic nuclear inclusions (Cowdry type A) can be observed on histopathology.
Polymerase chain reaction techniques performed in an experienced laboratory can also be used to detect VZV DNA rapidly and sensitively in properly collected lesion material. In immunocompromised persons, even when VZV is detected by laboratory methods in lesional specimens, distinguishing chickenpox from disseminated zoster might not be possible by physical examination or serology. In these instances, a history of VZV exposure, a history that the rash began in a dermatomal pattern, and the results of VZV antibody testing at or before the time of rash onset might help guide the diagnosis. [18,19]
CONCLUSION
Herpes zoster is a painful, blistering infectious disease characterized by numerous cytological changes. When these cytological changes are demonstrated in a well-prepared smear from the blisters, identification of the condition becomes straightforward, without warranting a biopsy.
Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2018-04-03T05:37:53.775Z | 2016-09-01T00:00:00.000 | {
"year": 2016,
"sha1": "93db2d2cfc2707319c0e4fc1ea15a65324677b94",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc5051314",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "93db2d2cfc2707319c0e4fc1ea15a65324677b94",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236724681 | pes2o/s2orc | v3-fos-license | The relationship between maternal smartphone use, physiological responses, and gaze patterns during breastfeeding and face-to-face interactions with infant
Smartphone use during parent-child interactions is highly prevalent; however, there is a lack of scientific knowledge on how smartphone use during breastfeeding or face-to-face interactions may modulate mothers' attentive responsiveness towards the infant as well as maternal physiological arousal. In the present study, we provide the first evidence for the influence of the smartphone on maternal physiological responses and maternal attention towards the infant during breastfeeding and face-to-face interactions. Twenty breastfeeding mothers and their infants participated in this lab study, during which electrodermal activity, impedance cardiography, and gaze patterns were monitored in breastfeeding and face-to-face interactions under three conditions manipulating the level of maternal smartphone involvement. We report that mothers' gaze toward their infants decreased when breastfeeding while using the smartphone compared to face-to-face interaction. Further, we show that greater maternal electrodermal activity and cardiac output were related to longer maternal gaze fixation toward the smartphone during breastfeeding. Finally, results indicate that mothers' smartphone addiction levels were negatively correlated with electrodermal activity during breastfeeding. This study provides an initial basis for much-required further research exploring the influence of smartphone use on maternal biobehavioral responses in this digital age and the consequences for infant cognitive, emotional, and social development.
Introduction
The smartphone has become a dominant competitor for attention in our daily lives [1-3], potentially disrupting maternal sensitivity and attentive responsivity during mother-child interactions [4-11], especially in the critical infancy period [12] when the foundations of social interactions begin to form [13]. Responsive maternal behavior is an essential building block of the biobehavioral synchrony system that develops early in life between mother and infant and promotes infants' physiological, cognitive, and social-emotional growth [14]. This non-verbal system includes dynamic temporal concordance between the mother's and infant's patterns of affect, proximity, touch, vocalizations, and gaze [14-17]. At around three months of age, infants engage in concurrent mutual gaze with their mothers approximately 30-50% of the time in face-to-face interactions [14]. Further, nearly every time infants looked at their mothers' faces, mothers were already looking at them [18]. Overall, responsive and attentive parenting is known to have beneficial effects on children's brain development [19,20], self-regulation capabilities [21], and cognitive development [22]. In recent years, several studies have suggested that smartphone use by the mother during interactions with her infant causes unpredictable distractions and might hinder her ability to respond sensitively to the infant, thereby negatively influencing resilience to stress and memory capacities in her child [23-26]. Therefore, it is critically important to explore the decrease in mothers' attention to infant cues [27] due to smartphone use, especially early in life in the primary interactive contexts of breastfeeding and face-to-face interactions. At three to six months of age, breastfeeding and face-to-face interactions comprise a majority of the settings in which the parent-infant bond develops and require sensitive responsivity from the mother [28-33], who must continuously learn to recognize and interpret her infant's cues and adapt to the infant's changing abilities [34]. Therefore, the sensitive and responsive role of the parent in these fundamental parent-infant social contexts of feeding and face-to-face interaction is pivotal for early development.
As noted, a main competitor for maternal sensitive attention during interactions with her infant is the smartphone. Smartphone use at mealtime, for instance, is a common phenomenon among adults [7], and the presence of and engagement with screen distractions during infant feeding is one potential barrier to responsive feeding interactions [35]. Reports from an observational study in fast-food restaurants showed that caregivers who were focused on their mobile device during the meal tended to respond to children's bids for attention in insensitive or aggressive ways [9]. Family members of individuals with highly frequent mobile use in the home environment reported negative feelings such as frustration and stress [8]. Negative emotions related to smartphone use also accompanied parents' attempts to minimize their phone use while children were in their care [6]. Moreover, these effects of smartphone use on parents' sensitive and attentive responses towards their children were also found to be related to children's risky behaviors [4] and to deficits in language acquisition in early childhood [10,11]. In sum, parental distraction by smartphone use is a prevalent phenomenon that affects the quantity and quality of parents' sensitive responses towards their children and, in turn, their children's behavior and affective responses.
The mere presence of the smartphone, which has become a dominant competitor for human attention, creates a cognitive distraction for adults, as has been shown with regard to working memory capacity and functional fluid intelligence [36], attentional and cognitive processes [37], and higher positive urgency [38].
Problematic, excessive smartphone use is accompanied by stress-like symptoms [39] that are reminiscent of substance-related dependence, such as withdrawal while not using the phone, and by associated functional impairment [40-43]. The following factors have been shown to impact individuals' compulsive use of their smartphones: their level of perceived enjoyment of and satisfaction with smartphone use and its technological innovativeness, and their level of internet addiction [44,45]. Smartphone addiction is a complex concept that includes various behavioral manifestations (e.g., gaming, chatting, shopping, and gambling) [46]. With respect to gender differences, higher levels of smartphone addiction have been reported in women [46].
Although several studies have examined the negative impact of smartphone addiction [47][48][49][50][51][52][53][54], little is known regarding its behavioral and physiological basis, especially in the context of mother-infant interactions. A major way in which we can assess reactivity to smartphone distraction during mother-infant interaction is by measuring changes in physiological function of mothers' peripheral nervous system as well as her gaze patterns during interactions with her infant that also involve the smartphone.
The sympathetic branch of the autonomic nervous system (SNS) is broadly related to high-arousal states and to "fight or flight" responses [55]. SNS activity can be assessed via recordings of impedance cardiography [56] and electrodermal activity (EDA) [55]. Impedance cardiography offers non-invasive cardiac measurement in psychophysiological contexts, including cardiac output (CO), which describes the volume of blood pumped by the left and right ventricles of the heart per unit of time [56]. Specifically, increased CO has been detected during certain emotions such as anger [57], as well as under stress conditions [58,59]. EDA is the continuous variation in the electrical characteristics of the skin, which can be measured directly as changes in skin conductance levels [55,60,61]. EDA has been shown to indicate psychophysiological arousal and to be a meaningful marker of psychologically significant stimuli [62], reactive in the contexts of addictions [63], mother-infant interaction and infant distress [64], as well as gaze behavior [61]. For these reasons, we expect EDA and CO to be good candidates for assessing the physiological effect of smartphone use during mother-child interactions.
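As an illustration of how such continuous signals are typically reduced to per-condition summary scores, the following is a minimal analysis sketch in Python; the file name, long-format layout, and column names are hypothetical, not the authors' actual pipeline.

import pandas as pd

# Long-format physiological data: one row per sample of the continuous
# skin conductance signal, tagged by subject, phase, and condition.
eda = pd.read_csv("eda_long_format.csv")  # columns: subject, phase, condition, scl

# Mean skin conductance level per subject within each 5-minute
# phase-by-condition cell, ready for repeated-measures analysis.
condition_means = (
    eda.groupby(["subject", "phase", "condition"])["scl"]
       .mean()
       .reset_index(name="mean_scl")
)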
Aim and hypothesis
In light of the above, the aim of the current study was to explore the effects of smartphone use and smartphone distractions on maternal attention and physiological function during two types of interactions with the infant: breastfeeding and face-to-face interaction. First, in line with previous studies, we hypothesized that the mother's attentiveness, indexed by her gaze direction toward the infant, would be reduced during moments of smartphone use compared to smartphone unavailability. We expected this effect to be more pronounced during breastfeeding compared to face-to-face interactions, as breastfeeding is a setting in which mothers can more easily disengage attention from their infants. As this is the first study to assess changes in EDA and CO during breastfeeding and face-to-face interactions with regard to the smartphone, we aimed to explore whether physiological function in these indices would differ between the smartphone-use and smartphone-unavailability stages and between breastfeeding and face-to-face interactions. Finally, in line with previous studies, we hypothesized that higher maternal attention towards the smartphone would be related to blunted maternal physiological arousal (similar to an addiction-like state), as well as to increased reported smartphone addiction.
Methods
The study's protocol was approved by the Psychology Department's Ethics Committee at Bar-Ilan University, Israel. Participants were informed that the purpose of the experiment was to examine maternal physiological processes during breastfeeding and how they relate to using a smartphone. All mothers provided informed consent at the onset of the study and all procedures are in line with the ethical approval obtained.
Participants
Twenty mothers and their 3- to 6-month-old infants (8 boys, 12 girls; 65% firstborn, 30% second-born, and 5% third-born) took part in the experiment. Criteria for study participation were normal or contact-lens-corrected maternal vision, and infant age was limited to three to six months. None of the mothers reported smoking cigarettes. Table 1 presents demographic information. Missing data: one mother's data could not be collected for technical reasons; therefore, she was not included in the analyses. In three other cases, physiological data could not be fully collected during the study because the infant was crying, the mother needed to discontinue breastfeeding, or an electrode was displaced. During breastfeeding, under the smartphone-use and smartphone-in-bag conditions, physiological data were obtained in full from 19 participants. During the mute condition, physiological data were obtained in full from 16 mothers. During the face-to-face interactions, while the smartphone was on mute, data were collected in full from 18 participants. A total of 16 mothers had full data for the entire study.
Procedure and experimental design
Before arriving at the lab, mothers completed several online questionnaires at home regarding their smartphone use, perceptions of their bond with the infant, ratings of their infant's temperament, and demographic information. Upon arrival at the lab, mothers reported their baseline situational anxiety via a short questionnaire [65].
Participants were instructed to arrive at the lab session well hydrated and to avoid caffeine and cigarettes for two hours prior to the study. Mothers were physiologically monitored via a MindWare Mobile recording device (MindWare Technologies, Gahanna, OH) (see details on physiological collection and analysis below). Infants were not physiologically monitored during the study. After participants were connected to the physiological monitoring system, a five-minute baseline commenced, in which mothers were instructed to sit still and relax while their infant was under the care of a research assistant in a separate adjacent room. Following the baseline, mobile eye-tracking glasses were fitted to the mothers in order to measure their gaze patterns throughout all following experimental conditions. The entire lab visit was videotaped from three angles, allowing full visibility of both interacting mother and infant. Video recordings, physiological recordings, and SMI eye-tracking glasses were fully synchronized in time.
The experiment comprised three conditions in two phases. The phases were breastfeeding and face-to-face interaction, starting with breastfeeding. The conditions were: 1) smartphone use, 2) smartphone sound on but unavailable to mothers, and 3) smartphone on mute and unavailable to mothers. The experiment started with participants being asked to place their smartphone (sound on) on the table next to them and, with the assistance of the experimenter, to put on the eye-tracking glasses. Next, participants were instructed to take their baby in their arms and engage in breastfeeding for five minutes. During this condition, five WhatsApp messages were sent to the participants' smartphones by the experimenter (sitting in an adjacent control room). Some were text messages with questions that mothers were required to answer (e.g., "what brought you to participate in our study?"), and some contained further instructions for the following conditions as well as a short situational anxiety questionnaire identical to the one completed upon arrival at the lab ("well done! Now I would like you to answer the questionnaire attached in the following link"). In the next condition (breastfeeding while the smartphone was in the bag), participants were asked to place their phone in their bag, sound on, and were instructed not to respond to any received alerts (the experimenter entered the room before the measurement began and assisted if needed) while breastfeeding for another five minutes. In this condition, four messages were sent by the experimenter, and the mothers only heard the notifications. In the final condition of the breastfeeding phase (breastfeeding/smartphone mute), mothers were asked to mute their phone and place it back in their bag for another five minutes. Participants were then given time to end breastfeeding as they saw fit. The following face-to-face interaction phase was conducted similarly to the previous one, with the same three phone conditions, each lasting five minutes. Mothers were asked to have a face-to-face interaction with their infants as they normally do ("you can use your baby's toy or the puppets in the room as well as one of the books"). The infants were placed in a baby swing across from the mothers, who were instructed to remain sitting during the measurement and to avoid standing or taking their baby out of the swing. WhatsApp messages were then sent to the participants' smartphones in the same manner as in the breastfeeding phase. Before each condition ended, the participants were asked to complete the situational anxiety questionnaire, which was sent again via a text link. At the end of each condition in which the phone was not available, a research assistant entered the room and instructed the mothers orally to answer the questionnaire in the link. After the final task, there was a final five-minute baseline recording, identical to the first, after which the monitoring equipment was removed. Participants were then escorted to a control room where they were asked to watch specific video clips from the six different experimental condition × phase combinations that had just been recorded, and to answer some questions on their emotions when watching the clips, namely valence and arousal (the Affect Grid) [66] (Fig 1). Data from the Affect Grid analysis are not reported here. Mothers and infants were thanked and compensated for their participation (a coupon for a free coffee and a onesie for the baby).
Initially, mothers filled in several self-report questionnaires, followed by two experimental phases in the lab (breastfeeding and face-to-face interactions). Each phase comprised three experimental smartphone conditions, each lasting 5 minutes. Throughout the lab study, mothers' EDA, cardiac impedance, and gaze fixation patterns were continuously recorded. The entire lab visit was also video-recorded from three angles. https://doi.org/10.1371/journal.pone.0257956.g001
Physiological measurements
We assessed physiological measures of the sympathetic branch of the autonomic nervous system (SNS) in mothers during breastfeeding and face-to-face interaction while they were using their phone or receiving notifications without being able to answer. We compared these conditions to a state in which the phone was muted and unavailable during interactions, so that it could not overtly distract mothers. More specifically, we measured two types of physiological activity denoting SNS function: 1) measures of electrodermal activity (EDA), assessed from the skin of the palm, and 2) measures of cardiac impedance, assessed from the heart (see details below on acquisition). We calculated statistics for two specific EDA measures, tonic skin conductance level (SCL) and tonic period, and for one cardiac impedance measure, cardiac output (CO). Tonic SCL represents the tonic level of electrical conductivity of the skin, calculated as the total SCL minus the phasic activity. Tonic period represents the amount of time spent in non-responsive states and is calculated as the total period of the waveform after removing all phasic responses (SCRs). Cardiac output indicates how much blood is being pumped out of the heart at any given time, in liters per minute, and is calculated as heart rate (HR) times stroke volume (SV). Acquisition. Physiological data were obtained via a MindWare mobile impedance cardiograph device (MindWare Technology, United States) with a sampling rate of 500 Hz. The device was connected to a participant via nine electrodes. Impedance and respiratory data were derived from the standard tetrapolar electrode procedure for the impedance cardiogram [67]. Electrodermal activity (measured in microsiemens [μS]) was collected via two disposable Ag-AgCl electrodes, both placed on the palm of the participant's nondominant hand. EDA values were outputted from the MindWare EDA analysis software as the mean skin conductance level recorded at 2 Hz.
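The derived measures described above reduce to simple arithmetic once the phasic component of the EDA signal and the beat-level cardiac parameters are available. The following sketch illustrates these definitions in Python; the function names and the toy input values are our own illustrative assumptions, not the study's processing code.

```python
import numpy as np

def tonic_scl(scl, phasic):
    """Tonic SCL: total skin conductance level minus the phasic (SCR) activity."""
    return float(np.mean(np.asarray(scl) - np.asarray(phasic)))

def tonic_period(duration_s, scr_durations_s):
    """Tonic period: total waveform duration after removing all phasic responses (SCRs)."""
    return duration_s - sum(scr_durations_s)

def cardiac_output(hr_bpm, stroke_volume_ml):
    """Cardiac output (L/min) = heart rate (beats/min) x stroke volume (mL/beat)."""
    return hr_bpm * stroke_volume_ml / 1000.0  # mL/min -> L/min

# Hypothetical example values
scl = [4.1, 4.3, 4.2]        # microsiemens, sampled at 2 Hz
phasic = [0.1, 0.3, 0.2]     # microsiemens
print(tonic_scl(scl, phasic))            # 4.0 uS
print(tonic_period(300.0, [4.0, 6.5]))   # 289.5 s of non-responsive time
print(cardiac_output(72, 70))            # 5.04 L/min
```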
Gaze direction measurement
Maternal attention to the infant was operationalized via the assessment of mothers' gaze patterns with a mobile eye-tracking device (SensoMotoric Instruments; Teltow, Germany, SMI, https://www.smivision.com), a non-invasive and robust system designed to be worn like a common pair of glasses (weight: 75 g). The SMI head-mounted system includes two small cameras on the rim of the glasses capturing the eye movements of the wearer, and a front-view camera that captures the participant's line of sight. The range of eye tracking is 80° horizontal and 60° vertical, with a binocular 60 Hz temporal resolution and up to 0.1° spatial resolution. This is combined with a recording from a 24 Hz front-view camera with a field of view of 60° horizontal and 46° vertical. Before acquisition, a three-point online calibration was conducted for each participant, followed by a check of the calibration's accuracy. The calibration procedure included three dots on the wall at a distance of approximately 1 meter from the participant. Mothers with corrected vision (glasses or contact lenses) were instructed to use contact lenses so that the SMI glasses could be worn.
Data analysis
Physiological pre-processing. Cardiac output (CO) data were analyzed using MindWare Technology's IMP application, version 3.1.2. The calculation of cardiac output and stroke volume was performed by the Kubicek method [68] with rho set to 135 ohms. Artifacts in the data were detected by manual visual inspection with the MAD/MED and IBI min/max heartbeat detection methods, which enabled noise filters with an expected heart rate range of a maximum heart rate of 200 BPM and a minimum heart rate of 40 BPM. The EDA signal was analyzed using MindWare Technology's EDA software, version 3.1.4. Artifacts in the data were detected by manual visual inspection. Irregular pronounced changes and sudden drops and fluxes that were clearly related to disconnections were corrected by a linear spline function available in the application. The smoothed EDA signal was obtained with a rolling filter of 500 data points per block. The level of EDA (in microsiemens) was outputted with a threshold of 0.05 for every 500 ms (as recommended by the provider). We calculated statistics for the three physiological measures mentioned above: tonic SCL, tonic period, and CO.
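As an illustration of the cleaning steps described above, the sketch below implements linear-spline correction over flagged artifact samples, a 500-point rolling smoothing filter, and a plausibility check for inter-beat intervals against the 40-200 BPM range. The MindWare algorithms themselves are proprietary, so this is only a schematic approximation with assumed inputs.

```python
import numpy as np

def correct_artifacts(signal, bad_idx):
    """Replace flagged artifact samples (e.g., electrode disconnections)
    by linear interpolation over the neighboring good samples."""
    signal = np.asarray(signal, dtype=float).copy()
    idx = np.arange(signal.size)
    good = np.setdiff1d(idx, bad_idx)
    signal[bad_idx] = np.interp(bad_idx, good, signal[good])
    return signal

def rolling_smooth(signal, window=500):
    """Smooth the EDA trace with a rolling mean over `window` samples."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def plausible_ibi(ibi_ms, hr_min=40, hr_max=200):
    """Flag inter-beat intervals whose implied heart rate falls outside 40-200 BPM."""
    hr = 60000.0 / np.asarray(ibi_ms)
    return (hr >= hr_min) & (hr <= hr_max)

# Hypothetical demonstration
raw = np.sin(np.linspace(0, 10, 2000)) + 4.0
raw[100:110] = 0.0                                  # simulated disconnection drop
clean = correct_artifacts(raw, np.arange(100, 110))
smooth = rolling_smooth(clean, window=500)
```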
Gaze direction pre-processing. Visual intake counts (fixations), saccades, and blinks were outputted by standard SMI algorithms. Areas of interest (AOIs) were coded using SMI BeGaze software. AOIs included the infant (face and body) and the smartphone. Visual intake locations, indicated by a marker on the video recorded by the front-view camera, were manually mapped, fixation by fixation, by a trained research assistant. Eye-tracking analyses were conducted according to a similar procedure in prior research [69,70]. We derived the dwell time (%) parameter to measure the duration of time in which mothers' gaze fixated on an AOI. The dwell time refers to a visit to an AOI, from entry to exit, calculated as the sum of durations of all fixations and saccades that hit the AOI. A normalized dwell time parameter (dwell time in ms divided by AOI coverage) was derived to evaluate the differences between the mothers' gaze patterns towards their smartphone and infant during the smartphone use condition in both phases.
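A schematic computation of the dwell-time parameters is shown below; the event records are hypothetical, and AOI coverage is treated simply as a supplied scalar, since the internal BeGaze definition is not reproduced here.

```python
def dwell_time_ms(events, aoi):
    """Sum the durations of all fixations and saccades that hit the given AOI."""
    return sum(e["duration_ms"] for e in events if e["aoi"] == aoi)

def normalized_dwell_time(events, aoi, aoi_coverage):
    """Dwell time (ms) divided by AOI coverage."""
    return dwell_time_ms(events, aoi) / aoi_coverage

# Hypothetical event stream: fixations/saccades already mapped to AOIs
events = [
    {"aoi": "infant", "duration_ms": 420},
    {"aoi": "smartphone", "duration_ms": 310},
    {"aoi": "infant", "duration_ms": 180},
]
print(dwell_time_ms(events, "infant"))                # 600 ms
print(normalized_dwell_time(events, "infant", 0.25))  # 2400 ms per unit coverage
```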
Questionnaires
The Smartphone Addiction Scale (SAS) was used to measure daily-life smartphone use (disturbance, positive anticipation, withdrawal, cyberspace-oriented relationship, overuse, and tolerance) [48]. The following surveys were collected but not included in the current report: the Infant Behavior Questionnaire-Revised Very Short Form (IBQ-R), which measures infants' temperament [71]; the Mobile Attachment Scale (MAS) [39]; the Maternal-to-Infant Bonding Scale (MIBS) [72]; the short form of the State-Trait Anxiety Inventory [65]; and the Affect Grid [66].
The relationship between experimental phases and maternal gaze patterns
In order to assess whether experimental phases and conditions, as well as maternal gaze direction (AOI: towards her infant or her phone), were related to the amount of maternal fixation on each AOI, we conducted a two-factor ANOVA with repeated measures (with Bonferroni post hoc correction for multiple comparisons). The analysis revealed a significant interaction effect between experimental phases (breastfeeding versus face-to-face) and AOIs in predicting the amount of normalized dwell time (ms/coverage) mothers directed to each AOI [F(1,17) = 80.42, p < .001, partial eta² = .83]. As can be seen in Fig 2, during breastfeeding mothers fixated on the smartphone for longer than on their infant, whereas during face-to-face interactions the opposite pattern emerged, with longer fixations towards the infant than towards the smartphone.
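For readers who wish to reproduce this kind of analysis, the sketch below runs a 2 (phase) × 2 (AOI) repeated-measures ANOVA with the statsmodels package on a synthetic, fully balanced data frame; the column names and values are assumptions for illustration only, not the study's actual variables.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
subjects = np.repeat(np.arange(18), 4)                    # 18 mothers, 4 cells each
phase = np.tile(["breastfeeding", "face_to_face"], 36)
aoi = np.tile(np.repeat(["infant", "smartphone"], 2), 18)
dwell = rng.normal(1000, 200, size=72)                    # synthetic normalized dwell times

df = pd.DataFrame({"subject": subjects, "phase": phase, "aoi": aoi, "dwell": dwell})
res = AnovaRM(df, depvar="dwell", subject="subject", within=["phase", "aoi"]).fit()
print(res.anova_table)  # F values, num/den DF, and p values for phase, aoi, phase x aoi
# Post hoc pairwise contrasts would then be Bonferroni-corrected,
# e.g., by multiplying each p value by the number of comparisons.
```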
The relationship between experimental conditions, phases, and sympathetic responses
In order to assess whether experimental conditions and phases were related to maternal physiological activity, we performed three repeated measures ANOVAs, in which CO, tonic SCL, and tonic period served as the dependent measures.
Associations between smartphone addiction, physiological measures, and gaze patterns to the smartphone
In Table 2 we present Pearson's correlations between self-reported smartphone addiction (SAS), physiological measures (tonic SCL, tonic period, and CO), and maternal gaze fixations on the phone during breastfeeding.
Associations between smartphone addiction and physiological indices
We used Pearson's correlations to assess associations between the reported level of smartphone addiction and physiological measures. During breastfeeding, we found that tonic SCL during smartphone use and tonic SCL when the smartphone was on mute were negatively related to addiction scores (r = -.56, p = .01; r = -.58, p = .02, respectively). In face-to-face interactions, we found that tonic SCL when the smartphone was in the bag and tonic SCL when the smartphone was on mute were negatively related to addiction scores (r = -.55, p = .01; r = -.50, p = .04, respectively). Additionally, during face-to-face interactions, tonic period when the smartphone was in the bag was positively related to smartphone addiction scores (r = .50, p = .03). CO was not significantly related to addiction scores in any of the study's phases or conditions (see all correlations in S1-S3 Tables). Overall, the results indicate that in many of the study's phases and conditions, higher EDA was related to lower levels of mothers' addiction to their smartphones.
Association between smartphone use and physiological indices-an initial exploration
In order to descriptively assess whether there was a potential link between mothers shifting their gaze and attending to their smartphone and physiological arousal, we looked at mothers' skin conductance responses while they received notifications and had to read and respond to them. In Fig 4 we show two such examples from different mothers who participated in the study. As can be seen, at least descriptively, incoming messages (indicated by red vertical lines in the figures) were sometimes followed by a distinctive skin conductance response. These immediate responses may demonstrate that a shift in attention towards the smartphone was accompanied by a physiological arousal response (Fig 4).
Discussion
Primary social interactions between mother and infant constitute early environmental conditions that are critical to infant development. However, in the last decade, the smartphone has emerged as a demanding competitor for parents' attention. Despite the prevalence of this phenomenon, to date, there has been no scientific examination of how maternal smartphone use during breastfeeding or face-to-face interactions affects mothers' physiological responses or their attention to their infants. In the present study, we examined mothers' SNS activity and gaze patterns during breastfeeding and face-to-face interactions with their 3- to 6-month-old infants, in which mothers were instructed to either use their smartphones, ignore them, or put them on mute. We further assessed the relationship between objective physiological markers and gaze patterns, and whether they were related to mothers' subjective reports of smartphone addiction. As we hypothesized, during breastfeeding, mothers fixated on the smartphone for a longer duration than on their infant. An opposite maternal gaze pattern was found during face-to-face interactions: fixation towards the infants was longer compared to the smartphone. We suggest here that the breastfeeding context may be more susceptible to smartphone attentional distractions compared to the face-to-face context. Specifically, during breastfeeding, the infant is not demanding or available for face-to-face interaction, and the goal is to feed. As such, mothers may be more prone to distractions and other activities such as watching TV, thinking, or using their smartphones, compared to face-to-face interactions, which require full attention to fulfill their goals [35].
We further found changes in maternal physiological activity that were related to the context of breastfeeding in the smartphone use condition. Specifically, only during breastfeeding, tonic SCL was elevated while the smartphone was on mute compared to when it was in use. This finding suggests that the smartphone's unavailability during feeding interactions may be a potential cause of maternal alertness or distress, which is not the case during face-to-face interactions when the full attentiveness of the mother to the infant is called for. It is interesting to note that increased SCL was evident when the phone was unavailable and muted, but not when it was unavailable and the sound was on. It is possible that the muted condition induced a stress-like response due to a perceived lack of control [73], whereas the sound on condition promoted a sense of control that may have inhibited autonomic arousal [74].
Another main physiological finding, which was consistent across most of the phases/conditions in our study, indicates that mothers' higher self-reported smartphone addiction scores correlated with lower EDA. These results on the physiological correlates of "chronic" smartphone use echo results from the literature on substance addictions [75][76][77]. For instance, among individuals with extensive opioid use, there were little to no EDA changes following drug administration, while opioid-naïve or less experienced patients demonstrated more dramatic sympathetic changes after drug administration [78]. Additionally, heavy internet users showed lower SCL compared to low-risk internet users [63], and in gamblers, wins were related to hypoactive sympathetic functions [79]. It may very well be that due to extensive use, skin conductance is less responsive [78].
Fig 4. The red vertical dotted lines indicate an incoming text message, which was followed by the mother's required shift of attention to her phone in order to respond. The Y axis represents electrodermal activity in microsiemens, and the X axis represents elapsed time. As can be seen, it seems, at least descriptively, that incoming messages can be followed by a distinctive skin conductance response.
https://doi.org/10.1371/journal.pone.0257956.g004
We further found that aspects of sympathetic activity were related to breastfeeding mothers' gaze patterns towards the smartphone in a complex manner. Specifically, higher tonic period and higher CO were both related to longer maternal gaze fixation on the smartphone during breastfeeding. Tonic period represents the amount of time spent in non-responsive electrodermal states [80]. The mothers' attention to the smartphone during breastfeeding can be considered either as related to the previously mentioned physiological hypoactivation [79] or as representing a stress-reducing activity. Studies on the physiological response of contentment indicate decreased SCL [61,81,82], which fits with our results of less EDA responsivity related to more fixation on the smartphone during breastfeeding. High CO was also related to more fixation on the smartphone, which may potentially reflect an opposite stress-like response or higher arousal [83,84]. Alternatively, we suggest that this finding may reflect the fact that CO was overall higher during breastfeeding, during which there was also a higher level of maternal fixation on the phone. Our finding is also consistent with previous results from rodents [85] and humans [86], which pointed to an overall increase in CO during breastfeeding, possibly reflecting the perfusion of the mammary gland by oxytocin-infused blood, facilitating milk ejection. It is important to note that increased CO during breastfeeding may also be explained by the order of our paradigm's phases, which was not counterbalanced but instead always started with breastfeeding to ensure the baby was calm. Therefore, further studies are needed to examine whether elevated CO (and its relationship to fixation on the phone) is due only to breastfeeding or also interacts with smartphone use in a way we were not able to elucidate from the current design.
Limitations and future studies
The limitations of the current study are mainly the small sample size and its exploratory nature, which require that further studies corroborate and expand these initial results. Moreover, in the current study, we did not assess immediate physiological responses to received notifications; hence, future studies are required to explore breastfeeding mothers' immediate physiological responses when smartphone distractions occur. Further studies are also needed to examine whether our results are unique to the breastfeeding context or may appear in other feeding situations in non-breastfeeding mothers. Another way in which our research design could be strengthened is to include a baseline condition in which the mother is not interacting with her infant but is instructed to be involved with her smartphone "as she usually does". This condition may reveal baseline differences in maternal proclivity towards checking the phone and could further develop the findings of the current study. An exploratory descriptive analysis of several mothers' EDA time series indicated that there may be "incoming message"-specific physiological responses in EDA that accompany the shifts in attention from infant to smartphone. As we did not have a priori hypotheses regarding these effects, we suggest they be further validated in future research. Finally, these results should be expanded in longitudinal research designs to examine the developmental influences of smartphone use on infants and the mother-infant dyad.
Notwithstanding these limitations, our project's strength lies in its multimodal and interactive nature: a mental, behavioral, and physiological examination of mothers during real-life situations that involve interactions with their infant and smartphone use. To our knowledge, this is the first presentation of data and findings that characterize behavioral and physiological markers of a preference for the smartphone in mothers that is context-specific: breastfeeding versus face-to-face interactions. Interestingly, some physiological responses were similar to those reported in the literature on substance addiction. Our results should be further explored to fully understand the outcomes of smartphone use on characteristics of the mother, the infant, and their bond in this digital era. | 2021-08-03T00:04:57.511Z | 2021-02-19T00:00:00.000 | {
"year": 2021,
"sha1": "e0cb9e01ff908721039b924c326fc61b4609f1a9",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0257956&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e195a9217ea397c289daf0cb1e66de45b8b8b8f7",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Psychology"
]
} |
259054335 | pes2o/s2orc | v3-fos-license | Central Slip Repair using Trans-articular K-wires: A Comparative Study.
Summary Central slip disruption may lead to PIP joint dysfunction causing significant morbidity. Existing evidence for any specific surgical management of these injuries is limited but does favor early mobilization of the PIP joint. Aim: To assess the functional outcome in a cohort of patients undergoing central slip repair with internal K-wire proximal interphalangeal joint splinting and complete immobilization against those with external splinting only. Methods: A single-center retrospective analysis of all patients who underwent operative central slip repair in our institution over a 5-year period. Data were collected via the HIPE database and clinical notes. Data relating to demographics, range of motion, total active motion (TAM and TAM%) scores, and hand therapy rehabilitation type were analyzed. Results: The study population was n = 44 patients; n = 33 patients were treated without a K-wire and n = 11 were treated with a K-wire. There was a male predominance of 81.8% (n = 36). Mean age was 40.4 years. There was no significant difference in the mean TAM achieved at final measurement between the "no K-wire" and the "K-wire" treatment groups [no K-wire 202.1° (standard deviation (SD) 40.0) vs. K-wire 187.4° (SD 28.2), p = 0.208]. The "no K-wire group" achieved a mean TAM% of 78.0 (SD 11.4) and the "K-wire group" achieved a mean TAM% of 72.1 (SD 10.8); no statistically significant difference in mean scores was observed between groups. Conclusion: Our study has shown comparable functional outcomes between those having complete joint immobilization with internal K-wire splinting and those who were externally splinted only following central slip repair.
Introduction
Extensor tendon injuries are commonly encountered hand injuries, and functional outcomes can often be limiting. 1,2 Functional limitation post-injury is often more severe than in flexor tendon injuries. 3,4 Extensor tendon injuries are classified according to the zone of injury, with zone III extensor injuries often involving the central slip (CS). 1,2 The mechanism of injury can be closed hyperflexion resulting in avulsion of the CS, or open penetrating trauma to the digit in which the tendon is cut. This can occur with or without concomitant bony injury of the phalanges. Closed injuries frequently occur in those participating in sporting activities. 5 However, the incidence of CS injury, or zone III extensor tendon injury, is higher with open penetrating trauma compared with closed injuries. 2 If left untreated, CS injuries can progress to a boutonniere deformity. As described by Massengill, the boutonniere deformity results from flexion at the proximal interphalangeal joint (PIPJ) and extension at the distal interphalangeal joint (DIPJ), caused by volar subluxation of the lateral bands when the CS is disrupted. 6 Boutonniere deformity can result in significant morbidity for patients due to its impact on hand function.
The management of a CS injury depends on the nature of the injury. Closed injuries are often managed conservatively with extension splinting to immobilize the PIPJ. 1,7 Open injuries require surgical management. Surgical techniques described include direct repair of the CS, use of bone anchors, tendon grafts, and bone-tunneled suturing of the CS, among other techniques. 2,8,9 Post-operative external splinting is indicated to immobilize the PIPJ once surgical repair of the CS is performed. 7 The concept of "internal splinting" was first described by Pratt, where a longitudinal K-wire was placed along the length of the finger to immobilize the digit following an extensor tendon repair. 10 This technique has been refined to the use of shorter K-wires traversing only one joint to immobilize it following repair. This technique, which involves immobilizing the PIPJ in extension to allow the CS to heal prior to active mobilization and rehabilitation, is favored by some surgeons. Contrary to this, some surgeons prefer to avoid trans-articular K-wires due to concerns regarding stiffness of the joint.
There is a paucity of literature comparing the functional outcomes of various surgical and nonsurgical management techniques for CS injuries. 2 Specifically, there is no consensus agreement on the use of trans-articular K-wire fixation of the PIPJ following CS repair or reconstruction. To our knowledge, there are no studies to date comparing functional outcomes between internal splinting with K-wire fixation of the PIPJ and external thermoplastic splinting following CS repair or reconstruction.
Aim and Objectives
The aim of this retrospective cohort study was to compare the functional outcomes between patients who underwent CS repair or reconstruction with K-wire fixation of the PIPJ and patients who underwent CS repair or reconstruction with external splinting. Primary outcomes were to compare active finger range of motion measured at final hand therapy appointment pre-discharge, total active motion (TAM), and TAM%. Secondary outcomes were to compare the incidence of boutonniere deformity, extent of the extensor lag at the PIPJ, and day of discharge from hand therapy post-surgery.
Methods
The study is a single-center retrospective cohort study based on a prospectively maintained database of all patients who underwent operative CS repair in our institution over a 5-year period between 2017 and 2021. An electronic case report form was designed to retrospectively collect, as well as prospectively gather and maintain, information on patient demographics, operative details, range of motion (ROM) measurements, TAM score, and post-operative complications. This database was maintained by the authors. Demographic details were recorded from a centralized hospital database and clinical notes. Operative details were recorded from the operative note; ROM measurements and TAM score were measured by hand therapists. The TAM score refers to the sum of the active MCP, PIP, and DIP arcs of motion, in degrees, of an individual digit. The calculation of TAM was done using the Strickland-Glogovac formula. 12 This value can then be expressed as a percentage (TAM%) of the TAM of the contralateral hand or of the norm of 260°, and classified using the Strickland and Glogovac system, whereby a TAM > 150° or 85%-100% of the contralateral finger or of the 260° norm is classified as excellent, good is >70%-84% of the norm, fair is >50%-69% of the norm, and poor is <50%. 12 In this study, TAM% is reported as a percentage of the 260° norm. TAM is a validated measure and has been used previously in studies examining functional outcomes following CS injuries. [13][14][15][16]
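The TAM computation and the Strickland-Glogovac classification can be expressed compactly, as in the sketch below, which follows the percentage cut-offs stated above (excellent ≥85%, good 70%-84%, fair 50%-69%, poor <50% of the 260° norm). This is an illustration rather than the authors' analysis script; note that the paper's group-level TAM% values were presumably averaged per patient, so recomputing them from the group mean TAM gives slightly different figures.

```python
def tam(mcp_arc, pip_arc, dip_arc):
    """Total active motion: sum of the active MCP, PIP, and DIP arcs (degrees)."""
    return mcp_arc + pip_arc + dip_arc

def tam_percent(tam_deg, reference=260.0):
    """TAM as a percentage of the 260-degree norm (or of the contralateral digit)."""
    return 100.0 * tam_deg / reference

def strickland_glogovac(pct):
    """Classify a TAM% value into the Strickland-Glogovac bands."""
    if pct >= 85:
        return "excellent"
    if pct >= 70:
        return "good"
    if pct >= 50:
        return "fair"
    return "poor"

# Example using this study's group mean TAM values
for label, t in [("no K-wire", 202.1), ("K-wire", 187.4)]:
    pct = tam_percent(t)
    print(f"{label}: TAM% = {pct:.1f} -> {strickland_glogovac(pct)}")
# no K-wire: TAM% = 77.7 -> good; K-wire: TAM% = 72.1 -> good
```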
Study population
Patients eligible for inclusion were all patients with an acute acquired CS injury, open or closed, requiring surgical intervention, who presented within 4 weeks of their initial injury. Patients excluded were those with existing boutonniere deformities, delayed presentation more than 4 weeks from the initial injury (n = 8), multi-digit injuries (n = 1), those who underwent conservative management only (n = 6), and patients lost to follow-up who did not have any functional outcome measurements recorded (n = 12). Forty-four patients met the inclusion criteria.
The cohort consisted of patients who underwent repair or reconstruction of the CS with or without internal splinting using a K-wire. For those who underwent internal splinting with K-wire fixation, a 1.1 mm K-wire was inserted at the time of repair, traversing the PIP joint (Figure 1). The K-wire was left in situ for 4 weeks and was removed using a sterile technique as a local anesthetic procedure in a day surgery theater. All CS repairs and concomitant injured structures were repaired using a standard operative repair technique. The operative technique comprised a standard modified Kessler repair, with some variability observed regarding the type and size of suture (4.0 or 3.0, PDS vs. Maxon); four (n = 4) cases required the use of a Mitek bone anchor using the technique previously described in the literature by the lead author. 11 Patients were followed up in the outpatient department at one week postoperatively for wound review and application of a thermoplastic splint. Patients wore the splint continuously until 6 weeks post-surgery. K-wires were removed 4 weeks post-surgery. A graded flexion program and a splint weaning program were commenced at 6 weeks, guided by hand therapy. Outcome measures were recorded at the final hand therapy appointment. Active ROM was measured in degrees with a finger goniometer using a standardized protocol. The same departmental protocol was used throughout when measuring ROM to ensure intra-rater and inter-rater reliability. The same make and model of goniometer was used, and the positioning and verbal instructions were consistent for all patients. 16,17 Once splint weaning was commenced, rehabilitation consisted of gradual ROM exercises and return to function and strength.
Statistical analysis
Statistical analysis was performed using the R statistical software package. Categorical data were described using counts and percentages. Differences in proportions were analyzed using Fisher's exact test. Where the distribution approximated normal, data were described using means and standard deviations (SD); means were compared between groups using Welch's two-sample t-test. Where the distribution was not normal, data were described using medians and interquartile range (IQR), and groups were compared using the Wilcoxon rank sum test.
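The tests listed above map directly onto standard scipy routines; a minimal illustration with made-up group vectors follows (the original analysis was performed in R and is not reproduced here). The 2×2 table in the Fisher's exact example uses the complication counts reported later in this paper.

```python
from scipy import stats

a = [202, 240, 180, 210, 195]   # hypothetical TAM values, no K-wire group
b = [187, 175, 220, 160]        # hypothetical TAM values, K-wire group

# Welch's two-sample t-test (unequal variances) for approximately normal data
t, p = stats.ttest_ind(a, b, equal_var=False)

# Wilcoxon rank-sum test for non-normal distributions
w, p_w = stats.ranksums(a, b)

# Fisher's exact test on a 2x2 table of complications by group
# (2/33 in the no K-wire group vs. 1/11 in the K-wire group)
odds, p_f = stats.fisher_exact([[2, 31], [1, 10]])
print(p, p_w, p_f)
```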
Characteristics of the study population
The characteristics of the study population are described in Table 1. After applying the inclusion and exclusion criteria, the study population consisted of n = 44 participants, of whom 81.8% (n = 36/44) were male. The mean age was 40.4 years (SD 19.3). The index finger was the most frequently injured digit, accounting for 34.1% (n = 15/44) of injuries, and the dominant hand was involved in 50.0% of cases. Laceration was the most prevalent mechanism of injury, and most injuries (88.6%) were open.
Of the 44 participants included in the study, n = 33 underwent CS repair or reconstruction with external splinting and are referred to as the "no K-wire group," and n = 11 underwent CS repair or reconstruction with internal splinting using a K-wire and are referred to as the "K-wire group." There were no statistically significant differences observed between the treatment groups with regard to gender, age, involvement of the dominant hand, digit involved, or mechanism of injury.
Surgical repair and splinting regime
The types of surgical repair and splints used are described in Table 2. The majority of patients, 88.6% (n = 39/44), underwent repair or reconstruction of the CS alone, with n = 4 also having lateral band repair and n = 1 having repair of multiple structures. The most frequently used splint was a PIPJ circumferential splint. There were no statistically significant differences observed between the K-wire treatment groups regarding the type of surgical procedure or the splinting regime used.
Functional outcomes
The objectively measured ROM outcomes and TAM scores for each treatment group are described in Table 3 (Comparison of functional outcomes between K-wire and no K-wire groups). Both groups discontinued splinting and commenced full mobilization at a mean time of 6 weeks. Final ROM measurements were taken at a median of 14.0 weeks for the no K-wire group and at a median of 17.0 weeks for the K-wire group, but this difference was not statistically significant. There was no significant difference in the mean TAM achieved at final measurement between the "no K-wire" and the "K-wire" treatment groups (no K-wire 202.1° (SD 40.0) vs. K-wire 187.4° (SD 28.2), p = 0.208). The "no K-wire group" achieved a mean TAM% of 78.0 (SD 11.4), and the "K-wire group" achieved a mean TAM% of 72.1 (SD 10.8), with no statistically significant difference in mean scores observed between the groups. The incidence of complications is shown in Table 4. There were 3 complications recorded in the study population: n = 2 boutonniere deformities, both of which occurred in the "no K-wire" treatment group, and n = 1 non-union, which occurred in a patient who underwent K-wiring.
Discussion
The primary objective of this retrospective cohort study was to compare mean TAM scores at final follow-up between patients who underwent CS repair/reconstruction with external splinting and patients who underwent CS repair/reconstruction with internal K-wire fixation. In this study, we found no statistically significant difference in mean TAM scores between those treated with and without K-wire fixation at a median follow-up time of 14.0 and 17.0 weeks, respectively.
A literature review by Geoghegan et al. has highlighted the paucity and the poor quality of published data regarding the management of CS injuries, with only two prospective cohort studies previously published and only nine studies in total identified that fit the inclusion criteria for their review. 2 This underscores the limited amount of quality comparative data against which we can analyze our study. They also rated all the papers they identified as being of "fair to poor" quality. The mean sample size of the papers they identified was n = 27 (SD 33 patients), and the demographic data from the papers they referenced are comparable to the demographics of our cohort. With n = 44 patients, our study is therefore quite large in the context of the currently available literature. This literature review highlights the heterogeneity in the data collected, post-operative management regimes, outcome measures used, follow-up and functional outcomes, and follow-up time points. The current evidence base for management of CS injuries is limited, and the authors allude to the fact that much of the data regarding CS management are equivocal. Geoghegan et al. conclude that the majority of the literature supports early mobilization rather than prolonged immobilization. Our results would counter this, in that our data have shown that post-operative joint immobilization with a trans-articular K-wire for a 4-week post-operative period, with removal of the K-wire at week 4 and commencement of the hand therapy rehabilitation regime no later than week 5, can achieve functional outcomes comparable to those of patients who are mobilized earlier. There was also a lower complication rate and less progression to boutonniere deformity in our K-wire group.
The only study directly comparable to ours is that by Evans. 17 This comparative study examined functional results following active post-operative management consisting of mobilization versus immobilization after CS repair. Their study analyzed a cohort of 55 patients separated into two comparative groups: Group 1 (n = 33), patients immobilized in a splint for 3-6 weeks, and Group 2 (n = 25), patients treated with a short arc of motion (SAM) protocol initiated from days 2 to 11 post-operatively. The outcomes that achieved the most statistical significance between Group 1 and Group 2 were mean day of initiation of motion, day of discharge from therapy, TAM, and PIPJ extensor lag, with the patients in the SAM group commencing therapy sooner, being discharged sooner, having a higher TAM score, and having a lower degree of extension lag at the DIPJ compared with Group 1. 17 Interestingly, they analyzed simple injuries separately, and, in this cohort, they found that Group 1 (immobilized) had an average TAM of 137° and a TAM% of 63% of normal, with Group 2 (SAM) having an average TAM of 147° and a TAM% of 75% of normal. Comparatively, our TAM and TAM% scores were 202° and 78% (no K-wire) and 187° and 72% (K-wire), respectively, which, using the classification applied by Evans et al., constitutes a "good" functional outcome, comparable to their results. 17 We acknowledge that there is heterogeneity between the measurement of outcomes and data in our study and those of Evans et al., which makes it difficult to draw any definitive conclusions here.
We have identified only one other paper in the literature that analyzed the use of trans-articular K-wires in the surgical management of CS repairs. Mehdi et al. 8 described a technique of Mitek™ repair of CS injuries using a single trans-articular K-wire for joint immobilization and splinting for 2 weeks, followed by gradual active and passive mobilization exercises. They achieved good to excellent outcomes in eight patients, with ROM outcomes (%) ranging from 100% to 50% in this cohort. Again, these data differ from the data collected in this study, but they show that the use of trans-articular K-wires for joint immobilization over a short post-operative period can lead to good functional outcomes.
Analysis of the functional outcomes reported in the literature using mean TAM as an outcome measure has shown some variability across specific studies of CS repair and rehabilitation, such as that of Feuvrier et al. [13][14][15][16] Comparing these TAM scores with our cohort of patients, with TAM measurements of 202.1° for our no K-wire group and 187.4° for our K-wire group, and acknowledging the heterogeneity within these groups regarding intervention and post-operative management and rehabilitation, both our K-wire and no K-wire groups achieved good to excellent functional outcome TAM scores when compared with the published literature. This also highlights that the use of internal splinting using trans-articular K-wires achieves a comparatively good functional outcome. Another outcome measure assessed in our study showing comparable outcomes between the no K-wire and K-wire groups was PIPJ extension lag at the last follow-up: 11.6° (SD 11.5) for the no K-wire group and 13.3° (SD 9.6) for the K-wire group.
The limitations of the study include the small sample size, with 33 participants in the "no K-wire" group and 11 in the "K-wire" group. Statistical tests may be compromised by the small numbers analyzed. This limitation is seen in many of the studies in this area, 2 and further research should focus on building large cohorts through either specialist centers or multicenter studies.
Conclusions
Our results add to the literature on this topic, where a paucity of data exists to date. Statistically significant data from a larger prospective study would be required to advocate for a definitive change in practice. We believe the results of this study are of clinical use in that we have shown the utility, in terms of functional outcome, of trans-articular internal splinting of the PIPJ following a CS repair. | 2023-06-04T15:02:39.477Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "9bb656bfb0828bee82c5435e5ea992c0959394ee",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.jpra.2023.05.004",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "61f722a0b0f7c6b57f898d87a17d19b970396721",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237307844 | pes2o/s2orc | v3-fos-license | Reversible Leukoencephalopathy in a Man with Childhood-onset Hyperornithinemia-Hyperammonemia-Homocitrullinuria Syndrome
A 49-year-old Japanese man had shown developmental delay, learning difficulties, epilepsy, and slowly progressive gait disturbance in elementary school. At 46 years old, he experienced repeated drowsiness with or without generalized convulsions, and hyperammonemia was detected. Brain magnetic resonance imaging detected multiple cerebral white matter lesions. An electroencephalogram showed diffuse slow basic activities with 2- to 3-Hz δ waves. Genetic tests confirmed a diagnosis of hyperornithinemia-hyperammonemia-homocitrullinuria (HHH) syndrome. Leukoencephalopathy was resolved following the administration of L-arginine and lactulose with a decrease in plasma ammonia levels and glutamine-glutamate peak on magnetic resonance spectroscopy. Leukoencephalopathy in HHH syndrome may be reversible with the resolution of hyperammonemia-induced glutamine toxicity.
We herein report the improvement of leukoencephalopathy following L-arginine and lactulose administration, with a reduction in the glutamine-glutamate (Glx) peak on MRS, in a 49-year-old man with childhood-onset HHH syndrome. There was no history of any neurological diseases in his family. He presented with convulsions and was diagnosed with epilepsy at 11 years old. Although he dropped out of ambulatory treatment, the epileptic episodes disappeared.
At 18 years old, his intelligence quotient evaluated using the Tanaka-Binet test was 49. After graduating from a high school for disabled students, he started work but left after a month because of poor relationships with his colleagues. He subsequently lived at home with his mother. At 46 years old, he entered a group home, where he experienced repeated drowsiness with or without generalized convulsions. He was admitted to another hospital, and antiepileptic drugs (levetiracetam and lacosamide) were administered.
At that time, brain MRI showed multiple high-intensity lesions in the cerebral white matter on fluid-attenuated inversion recovery (FLAIR) imaging sequence (Fig. 1A). Hyperammonemia was detected for the first time. However, no intervention for hyperammonemia was started at this time. Transient drowsiness, with or without convulsions, occurred about once a month. At 49 years old, he was admitted to our hospital.
On a physical examination, no abnormal findings were noted. On a neurological examination, he was slightly disoriented (Glasgow Coma Scale, E4V4M6). The Mini-Mental State Examination (MMSE) score was moderately decreased (22/30). Apart from a small voice, the cranial nerve functions were normal. Asterixis, mild spasticity, and brisk tendon reflexes without weakness were observed in the upper limbs. A lower limb examination demonstrated spasticity and weakness [i.e., iliopsoas and hamstring muscles, 4/4; tibialis anterior muscle, 2/2; and other muscles, 5/5, on the Medical Research Council Scale (range, 0-5)] with exaggerated tendon reflexes, ankle clonus, and pathological reflexes. Cerebellar ataxia and dysautonomia were absent. While he was able to walk without aid, his gait was spastic and unstable.
Tests for complete blood count, hepatorenal function, glucose, and C-reactive protein showed no abnormalities. The plasma ammonia level was elevated (176 μg/dL; normal range, 12-66). A plasma amino-acid analysis showed high levels of glutamine, ornithine, and citrulline at 1031 (range, 420-700), 559 (range, 0-100), and 51.2 (range, 17-43) nmol/mL, respectively. Brain MRI showed cerebral white matter lesions with progression of cerebral atrophy (Fig. 1B). A high signal of the globus pallidus on T1-weighted imaging was not apparent. MRS of the white matter lesion of the right semioval center detected high peaks of Glx (Fig. 1D). Spinal MRI showed no abnormalities. Single-photon emission computed tomography detected a diffuse decrease in perfusion, except in the cerebellum. An electroencephalogram (EEG) conducted while awake showed diffuse slow basic activities with 2- to 3-Hz δ waves without epileptic activity (Fig. 2A), suggestive of encephalopathy. No motor potentials were evoked in the upper or lower limbs on transcranial magnetic stimulation. Sanger sequencing revealed a homozygous nonsense variant in SLC25A15 (NM_014252.3:c.535C>T: p.Arg179*). This SLC25A15 variant has been reported as pathogenic in HHH syndrome (1), thereby confirming the diagnosis of HHH syndrome.
A protein-restricted diet of 1 g/kg/day did not decrease the plasma ammonia levels [i.e., 205 μg/dL (before breakfast), 282 μg/dL (1 hour after breakfast), and 287 μg/dL (1 hour after dinner)]. Upon increasing the dose of oral L-arginine (from 3 to 6 g/day) and lactulose, the plasma ammonia levels decreased to the normal range [57.2 ± 9.4 μg/dL (mean ± standard deviation)] with disappearance of the disorientation, seizures, and asterixis. The apparent slowing of basic activities on the EEG resolved with treatment, although the frequency of the α wave was approximately 8 Hz (Fig. 2B). After 2 months of treatment with L-arginine and lactulose, brain MRI showed resolving leukoencephalopathy associated with mild progression of cerebral atrophy (Fig. 1C). Furthermore, MRS detected a decreased Glx peak associated with the improvement in leukoencephalopathy eight months after starting the treatment (Fig. 1E). The MMSE score increased slightly (24/30).
Discussion
We herein report an adult patient with childhood-onset HHH syndrome who demonstrated reversible leukoencephalopathy following the administration of L-arginine and lactulose, despite a lack of appropriate therapy for over 35 years. Furthermore, this is the first report of a decrease in the Glx peak on MRS with the resolution of hyperammonemic leukoencephalopathy in HHH syndrome.
The pathophysiology of abnormal brain MRI findings in HHH syndrome has not been elucidated. White matter lesions, similar to those observed in HHH syndrome, have been described in patients with other UCDs, such as ornithine transcarbamylase deficiency (OTCD) (17,18). Takahashi et al. reported an increased Glx peak on MRS in four of six patients with OTCD (18). Furthermore, they showed decreased and increased Glx peaks on MRS accompanied by improved and exacerbated clinical severity, respectively (18). However, they did not describe changes in the white matter lesions on MRI in detail (18). An increased Glx peak supposedly reflects hyperammonemia-induced glutamine accumulation in the brain, which induces astrocyte enlargement (19) and cerebral edema (20). In our patient, improvement in both the cerebral white matter lesions observed on MRI and the encephalopathic symptoms following intervention for hyperammonemia may have been due to the resolution of hyperammonemia-induced glutamine toxicity. The mild progression in cerebral atrophy, despite an improvement in leukoencephalopathy, may be partly due to the resolution of both astrocyte swelling and brain edema. The deterioration of cerebral atrophy prior to intervention for hyperammonemia may have been due to intense hyperammonemia-induced glutamine toxicity.
The prognosis of HHH syndrome varies remarkably, from mild neurological involvement to severely disabling disease, with or without treatment (1,21). Treatment outcomes have not been elucidated, especially in patients who receive delayed intervention in adulthood. In our experience, even adult HHH syndrome patients with long-standing disease may respond to treatment and show a decrease in leukoencephalopathy.
Conclusion
We encountered an adult patient with childhood-onset HHH syndrome who showed improvement in leukoencephalopathy, associated with a decrease in the Glx peak on MRS, following intervention for hyperammonemia. The authors state that they have no Conflict of Interest (COI). | 2021-08-27T06:16:18.577Z | 2021-08-24T00:00:00.000 | {
"year": 2021,
"sha1": "e2816bf19eb94fa9f58c99b8ac4b56e5c2aa0968",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/internalmedicine/advpub/0/advpub_7843-21/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3f323b406f6e44e3f423af3df68571f57d786a94",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
229721712 | pes2o/s2orc | v3-fos-license | Thiazole Analogues of the Marine Alkaloid Nortopsentin as Inhibitors of Bacterial Biofilm Formation
Anti-virulence strategy is currently considered a promising approach to overcome the global threat of the antibiotic resistance. Among different bacterial virulence factors, the biofilm formation is recognized as one of the most relevant. Considering the high and growing percentage of multi-drug resistant infections that are biofilm-mediated, new therapeutic agents capable of counteracting the formation of biofilms are urgently required. In this scenario, a new series of 18 thiazole derivatives was efficiently synthesized and evaluated for its ability to inhibit biofilm formation against the Gram-positive bacterial reference strains Staphylococcus aureus ATCC 25923 and S. aureus ATCC 6538 and the Gram-negative strain Pseudomonas aeruginosa ATCC 15442. Most of the new compounds showed a marked selectivity against the Gram-positive strains. Remarkably, five compounds exhibited BIC50 values against S. aureus ATCC 25923 ranging from 1.0 to 9.1 µM. The new compounds, affecting the biofilm formation without any interference on microbial growth, can be considered promising lead compounds for the development of a new class of anti-virulence agents.
Introduction
The development of synthetic small molecules able to counteract antibiotic resistance (AMR) mechanisms is urgently needed [1]. In fact, most antibiotics used to date to treat the most common infections are becoming ineffective. Many bacteria, including the well-known ESKAPE pathogens (Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and Enterobacter species), have evolved into highly resistant forms through different mechanisms that include the inactivation of the antibiotic, chemical modification of the antibiotic target, alteration of cell permeability, and biofilm formation [2]. In particular, bacterial biofilm is currently considered one of the most relevant virulence factors, capable of making pathogens up to 1000 times more resistant than their planktonic forms [3]. It was estimated that more than 80% of chronic infections are caused by biofilm formation on indwelling medical devices or host tissues [4].
Biofilm is a complex multicellular structure in which bacterial cells are embedded in a matrix constituted of extracellular polymeric substance (EPS), which is mainly formed by polysaccharides, proteins, lipids, extracellular DNA (e-DNA), and molecules originating from the host, including mucus and DNA [5]. In the past decade, many efforts have been made to identify new therapeutic strategies able to eradicate biofilm-associated infections [6] and, despite numerous compounds being described as potent anti-biofilm agents, no new derivative has reached the clinic. The lack of approved anti-biofilm drugs, together with the increasing spread of chronic biofilm-related nosocomial infections, makes research in this field particularly relevant.
Among the bioactive scaffolds recently described for their interesting anti-biofilm properties, thiazole derivatives are considered among the most promising compounds [7].
Sulfur-containing heterocycles are often involved in attractive nonbonding interactions that play an important role in the control of molecular conformation. In comparison with other five-membered heterocycles, the thiazole nucleus has unique features due to the presence of the low-lying C−S σ* orbitals. The small regions of low electron density present on the sulfur atom, known as σ-holes, are often involved in drug-target interactions, thus improving the affinity toward the biological receptor [8].
Many thiazole compounds have been reported in the last decade as potent anti-biofilm agents. The 4-(o-methoxyphenyl)-2-aminothiazoles 1a,b (Figure 1) were found to significantly inhibit P. aeruginosa biofilm formation at concentrations in the low micromolar range, interfering with the quorum sensing (QS) system [9]. The thiazole derivatives 2a,b (Figure 1) showed potent anti-biofilm activity against eight methicillin-resistant (MRSE) and two reference (ATCC 12228, ATCC 35984) strains of Staphylococcus epidermidis, eliciting BIC50 values ranging from 0.35 to 7.32 µg/mL [10]. On the basis of the interesting anti-biofilm properties described for the thiazole scaffold, and continuing our search for new nortopsentin alkaloid analogues with promising biological activity [11][12][13], we recently reported the synthesis and anti-biofilm activity of the nortopsentin analogues of type 3, in which the imidazole nucleus of the natural compound was replaced by the thiazole ring and the indole moiety in position 4 was replaced by 7-azaindole (Figure 1) [14]. The thiazole derivatives 3 were tested against S. aureus ATCC 25923, S. aureus ATCC 6538, and P. aeruginosa ATCC 15442 in order to evaluate their ability to inhibit biofilm formation and microbial growth. Most of the new thiazole nortopsentin analogues proved to be active inhibitors of biofilm formation, exhibiting marked selectivity toward staphylococcal biofilms with BIC50 values in the low micromolar range. Compounds of type 3 showed a typical anti-virulence profile, fighting bacterial virulence factors, such as biofilm formation, without interfering with bacterial growth, thus imposing a low selective pressure for the onset of antibiotic resistance mechanisms.
With the aim to obtain more potent anti-biofilm agents that could be effective in the treatment of staphylococcal infections that are biofilm-mediated, herein, we report the synthesis of a new series of thiazole derivatives, structurally related to the nortopsentin analogues 3, in which the 7-azaindole nucleus in position 4 of the thiazole ring was replaced by a thiophene (1a-q) or a pyridine ring (2a-q), and the aromatic bicyclic system in position 2 of the thiazole nucleus can be either an indole or a 7-azaindole moiety.
In fact, thiophene and pyridine moieties are recognized as valuable scaffolds in the development of potent anti-biofilm derivatives [15,16]. Additionally, the thiophene ring was recently discovered as a key nucleus in a series of compounds able to potently inhibit the virulence of relevant Gram-negative pathogens by interfering with bacterial Disulfide bond enzyme A (DsbA) enzymes, which catalyze disulfide bond formation in secreted and outer membrane proteins with virulence functions [17]. Therefore, since the so-far synthesized thiazole nortopsentin analogues have shown a strong selectivity toward Gram-positive pathogens, we investigated whether the introduction of the thiophene ring could improve the anti-biofilm activity against Gram-negative bacteria.
Biological Studies
All the new compounds were first tested to evaluate their antibacterial activity against the planktonic forms of the Gram-positive S. aureus ATCC 25923 and S. aureus ATCC 6538 and of the Gram-negative pathogen P. aeruginosa ATCC 15442. All the new thiazole derivatives, analogously to the precursors 3, did not affect microbial growth, showing Minimum Inhibitory Concentration (MIC) values greater than 100 µg/mL. This result is in agreement with the desired anti-virulence profile.
Inhibition of biofilm formation of the same bacterial strains was evaluated for all the new derivatives 1a-q and 2a-q at sub-MIC concentrations, and BIC50 values (the concentration of compound needed to inhibit biofilm formation by 50%) were determined for the compounds that showed a percentage of biofilm inhibition greater than 20% against at least one bacterial strain at the screening concentration of 10 µg/mL (Table 3). n.s.: not significant, because lower than 20% inhibition at the screening concentration of 100 µg/mL. The averages from at least three independent experiments are reported with standard deviation (SD).
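The BIC50 values above come from dose-response data. As an illustration of how such a value can be estimated, the sketch below interpolates the concentration giving 50% inhibition on a log scale between the two bracketing doses; this is one common approach, not necessarily the exact fitting procedure used by the authors, and the dose-response numbers in the example are hypothetical.

```python
import math

def bic50(concs, inhibitions):
    """Estimate BIC50 (same units as concs) by log-linear interpolation.

    concs       -- tested concentrations in ascending order
    inhibitions -- percent inhibition of biofilm formation at each dose
    """
    for (c_lo, i_lo), (c_hi, i_hi) in zip(zip(concs, inhibitions),
                                          zip(concs[1:], inhibitions[1:])):
        if i_lo <= 50.0 <= i_hi:
            frac = (50.0 - i_lo) / (i_hi - i_lo)
            return 10.0 ** (math.log10(c_lo)
                            + frac * (math.log10(c_hi) - math.log10(c_lo)))
    return None  # 50% inhibition is not bracketed by the tested doses

# Hypothetical dose-response data, for illustration only (ug/mL, %):
print(bic50([0.1, 1.0, 10.0], [12.0, 44.0, 78.0]))  # ~1.5 ug/mL
```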
All derivatives 1 and most of the compounds 2 showed anti-biofilm activity, eliciting, as previously observed for the nortopsentin analogues of type 3, a marked selectivity against the Gram-positive pathogens, in particular toward S. aureus ATCC 25923. Compounds 1l, 2b, 2c, 2i, and 2k exhibited the highest potency, with BIC50 values ranging from 1.0 to 9.1 µM. The replacement of the indole ring with the 7-azaindole moiety, as well as its substitution at position 5 with a halogen atom or a methoxy group, does not entail advantages in terms of biofilm inhibition. Instead, the presence of a methoxyethyl group on the indole nitrogen generally led to an improvement of the anti-biofilm activity against the staphylococcal strains. Compounds 1l, 2b, 2c, 2i, and 2k were also tested by using viable plate count, and the inhibition of staphylococcal biofilm formation was reported in terms of log reduction. By using such a method, compound 2i was the most effective in interfering with biofilm formation, since it caused the greatest log reduction, ranging from 2.62 to 1.73 at concentrations between 10 and 0.1 µg/mL (see Figure 2).
Most of the new thiazole derivatives 1 and 2 were inactive or weakly active against the Gram-negative strain. Only compounds 1a and 2m showed a significant inhibition of P. aeruginosa biofilm formation, eliciting BIC 50 values of 14.9 and 5.5 µM, respectively.
Additionally, the most active compounds, for every bacterial strain, were selected and tested at the screening concentration of 100 µg/mL to evaluate their dispersal activity against the 24 h preformed biofilm. No derivative was able to interfere with the biofilm architecture; only compound 2m showed weak dispersal activity, eliciting a percentage of inhibition of 36% against P. aeruginosa at the screening concentration. Biological results highlighted the ability of the new compounds to interfere with the first stage of the biofilm life cycle, which consists of bacterial adhesion to surfaces [21]. Anti-adhesion agents represent a valuable alternative to antibiotics, since they deprive the bacterium of its pathogenicity by preventing its adhesion to the host cells.
General
All melting points were taken on a Büchi-Tottoly capillary apparatus (Büchi, Cornaredo, Italy) and are uncorrected. IR spectra were determined in bromoform with a Shimadzu FT/IR 8400S spectrophotometer (Shimadzu Corporation, Milan, Italy). 1H and 13C NMR spectra were measured at 200 and 50.0 MHz, respectively, in DMSO-d6 solution, using a Bruker Avance II series 200 MHz spectrometer (Bruker, Milan, Italy). Column chromatography was performed with Merck silica gel 230-400 mesh ASTM or with a Büchi Sepacor chromatography module (prepacked cartridge system). Elemental analyses (C, H, and N) were within ±0.4% of theoretical values. The purity of all the tested compounds was greater than 95%, as determined by HPLC (Agilent 1100 Series).
3a-c, 4a-d, 5a-d, 6d, 7a, and 8a
These compounds were prepared using procedures previously reported [17,18]. Analytical and spectroscopic data are in agreement with those previously reported.
General Procedures for the Synthesis of 3-bromoacetyl compounds 11 and 12
These compounds were prepared using known procedures (80-90% yield). Analytical and spectroscopic data are compatible with those previously reported [19,20]. A mixture of the appropriate compound 3a-c, 4a-d, 5a-d, 6d, 7a, 8a (2 mmol) and the bromoacetyl derivative 11 or 12 (2 mmol) in anhydrous ethanol (8 mL) was refluxed for 30 min. After cooling, the obtained precipitate was filtered off, dried, and recrystallized from ethanol to give the desired thiazoles 1a-n and 2a-n.
After the growth of a biofilm (24 h old), the content of each well was removed; then, wells were washed twice with sterile PBS (Phosphate Buffered Saline) and filled with fresh TSB medium (200 µL). After that, different concentrations of compounds were added, starting from a concentration equal to or greater than the MIC obtained against the planktonic form of the tested strains, using TSB as the medium. The microtiter plate was sealed and incubated at 37 °C for a further 24 h. The content of each well was removed, wells were washed twice with sterile PBS (100 mL to each well), and the 96-well plate was placed at 37 °C for 1 h before staining with a 0.1% w/v crystal violet solution. After 30 min, plates were washed with tap water to remove any excess stain.
Biofilm formation was determined by solubilizing crystal violet as described above, and the absorbance was read at 540 nm using a microplate reader (Glomax Multidetection System, Promega, Promega Italia s.r.l., Milan, Italy). The percentages of inhibition were calculated with the above-reported formula. Each assay was performed in triplicate and repeated at least twice.
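As a minimal sketch of the percentage-of-inhibition calculation from the crystal violet readings, the relation below compares the absorbance of a treated well with that of the untreated growth control; this is the standard expression for such assays, given that the exact formula is referenced but not reproduced here, and the absorbance values in the example are hypothetical.

```python
def inhibition_percent(a540_treated: float, a540_control: float) -> float:
    """Percent inhibition of biofilm formation relative to the untreated control,
    computed from crystal violet absorbance readings at 540 nm."""
    return (a540_control - a540_treated) / a540_control * 100.0

# Hypothetical absorbance values, for illustration only.
print(inhibition_percent(0.42, 1.05))  # 60.0
```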
Inhibition of Biofilm Formation (Viable Plate Count)
Compounds 1l, 2b, 2c, 2i, and 2k, which exhibited the highest potency in inhibiting S. aureus ATCC 25923 biofilm formation by the crystal violet method, were tested against the same strain by using a viable plate count method [22]. Briefly, a suspension of the tested strain was obtained as described in Section 3.2.2. Polystyrene flat-bottom 24-well plates were filled with 2 mL of TSB with 2% w/v glucose; then, we added 25 µL of bacterial suspension and sub-MIC concentrations (10; 5; 1; 0.1 µg/mL) of the above-mentioned compounds and incubated them for 24 h at 37 °C. After that time, the wells were washed 3 times with 1 mL of sterile NaCl (0.9% v/v solution), and the surface of each well was scraped 3 times. The inocula were put in test tubes with 10 mL of NaCl (0.9% v/v solution) and sonicated (ultrasonic nominal power equal to 215 kHz) for 2 min. Eight 1:10 serial dilutions were prepared, and 100 µL aliquots of each dilution were plated onto tryptic soy agar (TSA). Then, Petri dishes were incubated at 37 °C, and CFU/mL were counted after 24 h. Each assay was performed in triplicate and repeated at least twice. Activity was expressed as log reduction with respect to the untreated growth control.
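The log reduction reported here is simply the difference of the base-10 logarithms of the viable counts of the untreated control and of the treated sample. A minimal sketch, with hypothetical CFU counts chosen only to illustrate the magnitude of the values reported above:

```python
import math

def log_reduction(cfu_control: float, cfu_treated: float) -> float:
    """log10(CFU/mL of untreated control) - log10(CFU/mL of treated sample)."""
    return math.log10(cfu_control) - math.log10(cfu_treated)

# Hypothetical counts: a ~2.6-log reduction, comparable to the best case above.
print(round(log_reduction(5.0e8, 1.2e6), 2))
```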
Statistical Analysis
Mean values, standard deviation (SD), and significance testing (p-value) were calculated on a PC using Microsoft Excel 2019 (Microsoft Corporation, Redmond, WA, USA).
Among the novel approaches evaluated in response to the emergence of antibiotic resistance, the anti-virulence strategy is considered one of the most encouraging [23]. Disarming bacteria of their pathogenicity tools, such as biofilm formation, was found to be more beneficial than interfering with their growth. In this scenario, the new thiazole derivatives 1l, 2b, 2c, 2i, and 2m, which proved able to interfere with biofilm formation without affecting microbial vital processes, can be considered promising lead compounds for the development of new anti-virulence agents usable for the treatment of biofilm-associated infections or for the prophylaxis of implant surgery.
Author Contributions: A.C., S.C., B.P., D.C., and C.P. performed chemical research and analyzed the data; D.S. and M.G.C. performed biological research and analyzed the data; P.D. and G.C. participated in the design of the research and the writing of the manuscript. All authors have read and agreed to the published version of the manuscript. | 2020-12-31T06:18:18.464Z | 2020-12-27T00:00:00.000 | {
"year": 2020,
"sha1": "2b204f65af930bacae9307c29110890e9fbfaec2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/26/1/81/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6e3e555271e4b44aa14560d472a9e620046100ac",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235680692 | pes2o/s2orc | v3-fos-license | Research on Test Method and Sensor of Engine Mount Transmission Force
The engine mount transmission force can be used as an important basis to evaluate the vibration isolation performance of the engine mounting system. However, there is a lack of testing equipment and methods for the transmission force. The force sensor and the test method for the transmission force are studied. The finite element model of the mounting element is established to analyze the arrangement position of the sensor. The force sensor is developed based on the resistance strain gauge. The sensor is arranged in the middle of the intermediate bolt. The test system for the mount transmission force is established at the same time. The driving force of the driving wheel is obtained through the chassis dynamometer test. The force sensor is calibrated based on the test data. The results of the mount transmission force are obtained through the vehicle road test.
Introduction
At present, the vibration isolation performance of the engine mounting system is mainly evaluated by vibration acceleration. The vibration acceleration of the vehicle is measured by arranging acceleration sensors at different positions on the engine and chassis under different working conditions [1]. Then the transmissibility of vibration is obtained by processing the test data. This method has been widely used in the research and evaluation of vehicle vibration, but it has a defect: it is not suitable for evaluating the vibration isolation performance of the engine mounting system in a road test. The acceleration sensors are installed at the active and passive sides of the mounting elements [2]. The acceleration of the passive side of the mount is usually affected by the road excitation. Therefore, the transmissibility cannot reflect the vibration isolation performance of the engine mounting system in a road test.
Using the engine mount transmission force is the most direct and effective evaluation method, because it avoids the influence of vehicle body-side vibration on the evaluation results. However, due to the compact structure of the engine mounting system, it is difficult to arrange a force sensor. The test methods for the transmission force are as follows [3,4]: (1) the direct measurement method, (2) the dynamic stiffness method of the mount, and (3) the inverse method based on the Frequency Response Function matrix [5].
In this paper, the layout, basic structure and application of the engine transmission force sensor are studied. The layout position of the sensor is determined by theoretical analysis. The sensor is made of a resistance strain gauge arranged on the intermediate bolt of the mounting element.
Analysis of Engine Mounting System Structure
The engine mounting system of the sample vehicle adopts a four-point layout. The left and right mounts mainly bear the vertical load and vibration of the engine. The front and rear mounts are mainly used to restrain the pitching and horizontal motion of the engine. When the engine works, the engine excitation is transmitted to the vehicle body or subframe through the path: mounting bracket (engine side) - mounting element body - mounting bracket (body/subframe side) - body/subframe. Therefore, it is necessary to select an appropriate position in the transmission path and arrange the sensor for measuring the force transmitted by the engine to the body or subframe. Since there are usually multiple connection points between the mounting bracket and the engine or body, it is not suitable to place the sensor at these positions [5]. Placing the sensor on the mounting body avoids discrepancies in the test results and improves their accuracy.
The structure of the mounting element mainly consists of the mounting bracket, colloid, steel sleeve and bolt. Because of the large deformation and irregularity of the colloid part, it is not suitable for arranging the sensor. Due to the large volume of the mounting bracket and its many connecting points, it is not easy to measure the accurate transmission force by placing the sensor there. The bolt at the mounting center is the only path through which the mounting force is transferred, and it is small and simple in structure, so it is easy to obtain the accurate engine mount transmission force.
Finite Element Analysis of Mounting Element
In order to verify the effectiveness of the scheme of arranging sensors on the bolts, the finite element model of the mounting element is established. The engine mount transmission force is calculated by applying the load on the mounting bracket.
Meshing:
Firstly, the 3D model of the mounting element is imported for pre-processing. Through the main steps of geometry cleaning, mesh generation and mesh inspection, the main components such as the bracket, colloid, bolt and rubber-core steel sleeve are meshed. The mesh file is then imported into ANSYS [6], as shown in Figure 1. There are 37212 elements in the mount finite element model.
Mounting load:
The displacement excitation is applied to the mount bracket on the engine side in the vertical direction. The frequency of the excitation is the same as the 2nd order excitation vibration frequency of the engine at idle speed. The excitation frequency of the engine can be given as follows:

$f = \frac{2 n i v}{60 \tau}$ (1)

where n is the speed of the engine, i is the number of engine cylinders, τ is the number of strokes, and v is the order of the engine vibration.
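A minimal sketch of equation (1), as reconstructed above; the function name and the example engine parameters are illustrative, not from the paper:

```python
def excitation_frequency(n_rpm: float, cylinders: int, strokes: int, order: int = 1) -> float:
    """Excitation frequency in Hz: f = 2*n*i*v / (60*tau).

    n_rpm     -- engine speed n in rev/min
    cylinders -- number of cylinders i
    strokes   -- number of strokes tau (2 or 4)
    order     -- vibration order v (v = 1 gives the firing frequency)
    """
    return 2.0 * n_rpm * cylinders * order / (60.0 * strokes)

# Example: a 4-cylinder, 4-stroke engine idling at 750 rpm fires at 25 Hz.
print(excitation_frequency(750, 4, 4))
```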
Simulation result:
The simulation results are shown in Figure 2 and Table 1.
Manufacture of Force Sensor
A resistance strain gauge is a kind of sensor whose resistance value changes with its own deformation. The strain of the gauge can therefore be determined by measuring the resistance change.
The fabrication method of the sensor is as follows: first, a hole 2 mm in diameter is machined in the head of the bolt, which is used to route the connecting wires of the resistance strain gauge. A groove is milled in the middle of the bolt to accommodate the resistance strain gauge. The structural form, size, resistance value, service temperature and creep characteristics need to be considered in the selection of the resistance strain gauge; the main parameters of the selected gauge are shown in Table 2. Then, the machined surface is polished and cleaned, and the strain gauge is glued into the bolt groove. Finally, white silica gel is used to protect the resistance strain gauge. The force sensor is shown in Figure 4.
Table 2. Parameters of resistance strain gauge.
Construction of Test System
The test system for the transmission force mainly includes the force sensor, bridge, amplifier, filter, data collector and laptop. The resistance strain gauge must be connected to an electric bridge when it is used. The possible bridge structures are the single-arm bridge, the half bridge and the full bridge. The half bridge is selected because of its temperature-compensation ability and the size limitation of the intermediate bolt. $R_1$, $R_2$, $R_3$ and $R_4$ are the resistors of the bridge. Resistors $R_1$ and $R_2$ are usually attached to the test part. When strain occurs, the corresponding resistances change to $R_1 + \Delta R_1$ and $R_2 + \Delta R_2$. Suppose $R_1 = R_2 = R_3 = R_4 = R$ and $\Delta R_1 = -\Delta R_2 = \Delta R$; the output of the bridge is then given as follows [8]:

$U_0 = \frac{\Delta R}{2R} U_e$ (2)

where $U_e$ is the supply voltage.
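A minimal sketch of the half-bridge relation in equation (2), as reconstructed above; the numerical values in the example are illustrative only:

```python
def half_bridge_output(delta_R: float, R: float, U_e: float) -> float:
    """Output voltage U0 of a half Wheatstone bridge with two active arms,
    assuming R1 = R2 = R3 = R4 = R and dR1 = -dR2 = delta_R."""
    return delta_R / (2.0 * R) * U_e

# Example: 350-ohm gauges, a 0.5-ohm strain-induced change, 5 V supply.
print(half_bridge_output(0.5, 350.0, 5.0))  # ~3.6 mV
```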
Calibration of Force Sensor
The force sensor needs to be calibrated to determine the proportional relationship between the voltage signal of the sensor and the transmission force. The calibration method is based on the principle of torque balance: the torque generated by the transmission forces of the mounts is balanced with the output torque of the engine. The output torque of the engine can be obtained by converting the driving force of the driving wheel measured on the chassis dynamometer [9]. During the test, the vehicle speed is 30 km/h, the transmission is in 2nd gear, and the engine is at full load.
The relationship between the driving force of the driving wheel $F_t$ and the engine output torque $T_{tq}$ is as follows:

$F_t = \frac{T_{tq}\, i_g\, i_0\, \eta_T}{r}$ (3)

where $i_g$ is the transmission ratio, $i_0$ is the main reducer ratio, r is the tire rolling radius, and $\eta_T$ is the mechanical efficiency of the transmission system. The driving force curve measured in the test is converted to the output torque curve of the engine according to equation (3). The curve is shown in Figure 5. Because the signal measured by the force sensor is a broadband signal, which is not conducive to the calibration of the sensor, it is necessary to process the test signal. The method of wavelet analysis is used: the signal is processed with a db wavelet in MATLAB software [10], as shown in Figure 6. The fundamental waves of the front and rear mount test signals are obtained by wavelet analysis, as shown in Figure 7 and Figure 8. According to Figure 7 and Figure 8, the ratio of the absolute values of the first peak amplitudes of the front and rear mounting curves is 2.63. Therefore, the ratio of the front and rear mounting forces is approximately 2.63. The relationship between the transmission forces and the engine output torque is given as follows:

$F_1 l_1 + F_2 l_2 = T_{tq}$ (4)

where $F_1$ is the transmission force of the front mount, $F_2$ is the transmission force of the rear mount, $l_1$ is the vertical distance from the front mount to the center line of the crankshaft of the engine, which is 0.35 m, and $l_2$ is the vertical distance from the rear mount to the center line of the crankshaft of the engine, which is 0.25 m.
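A minimal sketch combining the relations reconstructed above: equation (3) converts the dynamometer driving force to engine torque, and the moment balance of equation (4) together with the measured ratio F1/F2 = 2.63 yields the two mount forces. The drivetrain parameters in the example are illustrative assumptions, not values from the paper:

```python
def engine_torque(F_t: float, i_g: float, i_0: float, r: float, eta_T: float) -> float:
    """Engine output torque Ttq (N*m) from the driving force Ft (N), eq. (3)."""
    return F_t * r / (i_g * i_0 * eta_T)

def mount_forces(T_tq: float, l1: float = 0.35, l2: float = 0.25, ratio: float = 2.63):
    """Front/rear mount forces (F1, F2) from eq. (4) with F1 = ratio * F2."""
    F2 = T_tq / (ratio * l1 + l2)
    return ratio * F2, F2

# Example: 2100 N of driving force with hypothetical drivetrain parameters.
T = engine_torque(2100.0, i_g=2.05, i_0=4.1, r=0.30, eta_T=0.92)
print(T, mount_forces(T))  # ~81.5 N*m -> F1 ~183 N, F2 ~70 N
```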
According to the ratio of the front and rear mount transmission forces and equation (4), the relationships between the transmission forces and the engine output torque are given as follows:

$F_1 = \frac{2.63\, T_{tq}}{2.63\, l_1 + l_2}$ (5)

$F_2 = \frac{T_{tq}}{2.63\, l_1 + l_2}$ (6)

The engine torque curve is processed according to equations (5) and (6). Finally, the calculated amplitudes of the front and rear transmission force analysis curves are taken as the ordinate, and the amplitudes of the test signal curves processed by the wavelet method are taken as the abscissa, to draw the scatter diagrams shown in Figure 9 and Figure 10. After fitting, the sensor calibration formulae are obtained.
Vehicle test of transmission force
In order to obtain the transmission force under actual working conditions, the force sensor is used in a vehicle road test. The test conditions include idle and 3rd gear driving, as shown in Table 3. The test equipment mainly includes the force sensor, dynamic strain gauge, signal conditioning instrument, data collector, inverter, and laptop. The test results for the idle condition are shown in Figure 11. The test curve has obvious periodicity, which corresponds to the periodic operation of the engine under the idle condition. The amplitude of the engine mount transmission force does not change significantly at different engine speeds, which is mainly due to the fact that the engine has almost no load under the idle condition, so the engine power changes little. At the same time, it also shows that the vibration isolation performance of the mount changes little under the idle condition. The test results for the 3rd gear driving condition are shown in Figure 12. The test results show that the transmission force of the powertrain mount basically increases with the increase of the engine speed. This is consistent with the change trend of the engine output power. However, when the engine speed is 2000 rpm, the transmission force of the mounting is large, which may be due to the poor dynamic stiffness of the connection position between the mounting bracket and the vehicle body and the existence of local modes.
The transmission force sensor is made by arranging the strain gauge on the intermediate bolt of the mounting element. The driving force of the driving wheel is measured by the chassis dynamometer, and the engine torque is obtained by conversion. According to the proportional relationship between the peak signals of the force sensors, the theoretical curve of the transmission force is obtained. The calibration of the force sensor is realized by relating the sensor voltage signal to the transmission force. The test results of the mounting force are obtained by the vehicle test, which provides an effective basis for the evaluation and analysis of the vibration isolation performance of the engine mounting system. | 2021-06-30T20:02:51.492Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "776e0a52cc1b19cb7de75a1dbd37d813126f0ce4",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1952/3/032046",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "776e0a52cc1b19cb7de75a1dbd37d813126f0ce4",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
229507700 | pes2o/s2orc | v3-fos-license | YouTube as a Recruitment Tool? A Reflection on Using Video to Recruit Research Participants Profiling Emerging
This article presents an innovative, video-based approach to the recruitment of research participants. A YouTube video was created and uploaded as part of a doctoral study exploring what it means to be struggling as a teacher. Following a review of the recruitment literature, which highlights a general lack of attention paid to the challenges of recruitment, the author explores the approach she took in planning the video. The video was the main promotional tool for the study and was communicated via Twitter and email. She also presents online survey findings on the perceived impact and influence of the video; the visual format, informal tone and the ability to see the researcher in person were rated very positively. A reflective analysis of the video transcript follows drawing on the literature as well as the survey findings. She concludes that video-based recruitment can be an inexpensive but powerful tool which allows a human connection with the researcher early on in the research process.
Introduction
This article presents an innovative recruitment strategy used in a doctoral study. I wanted to recruit educators working in the English secondary school system to participate in research interviews and who would be prepared to share with me their experiences of struggling as a teacher. The research question -what it feels like to be struggling as a teacher -required a particularly sensitive research design. I was keen to come across to potential participants as both approachable and compassionate and felt that a more personal approach to recruitment was required. I wanted to establish a personal connection with potential participants early on in the research process. To this end, I created and uploaded a short YouTube video. My research found that struggling -as a teacher -is experienced as a temporary, fractured state. Dimensions of struggling include heightened bodily symptoms and it is associated with negative moods and emotions. Struggling can also involve a damaged self-view, a reduced sense of controllability and may lead to impaired performance (Culshaw, 2019a). I adopted an innovative approach, not just for the recruitment of participants, but in terms of the research methodology generally. I used an arts-based method, collage, to explore what it means to be struggling and participants expressed their experience of
struggling by placing and moving a range of arts and crafts materials as their thinking developed. I have written elsewhere about the power of collage as a research method (Culshaw, 2019b). In this article I present first a short review of a specific section of the recruitment literature, which I position within the context of the research design for this study. Then I outline the recruitment approach I adopted. I provide a forensic exploration of the specific method I used -a YouTube video -and reflect on its merits as a recruitment tool drawing on survey feedback. I close with an outline of the possible limitations of using this method and suggestions for further research.
2 Recruitment of Participants: the Literature

I undertook a search of the literature on the recruitment of research participants in Google Scholar, with a specific focus on the use of video, the internet and social media in the recruitment of participants. Search terms included:

- promoting research participation online
- participant recruitment challenges (teachers)
- recruiting participants online
- YouTube recruiting participants video
A review of abstracts led me to reject several publications from the review process, as their focus was more on the use of social media for data collection purposes rather than the recruitment of participants. The search yielded 15 results for review, from a range of disciplines including medical education, psychiatry, social research methodology and occupational rehabilitation. The recruitment literature reviewed here included four systematic reviews. This article adds to that literature and makes a contribution to recruitment methodology generally, and in educational research more specifically. A list of the 15 reviewed publications is included in the Appendix. Recruitment in research is defined as the dialogue which takes place between an investigator and a potential participant, prior to the initiation of the consent process. It is essential to consider a recruitment methodology when designing a research project as the methods used influence the success of the research (Derrick, Elsieo-Jaye, Hanny, Britton & Haddad, 2017). In many cases, multiple -rather than single -recruitment strategies are used. Some suggest that recruitment is the most challenging aspect of any study and yet it is a key determinant of the results. Others bemoan the lack of attention given to recruitment in the literature. Few authors have "dissected their studies" (Luck et al., 2017, p. 44) to review their recruitment strategies and it is not always clear how researchers accessed potential participants. Recruitment is not adequately or widely reported in many research publications, as I found when searching the literature for this article. Kaba and Beran (2014, p. 578) state that "researchers often spend less than a few sentences summarising access to the sample".
Details highlighting the processes used to engage participants often remain unclear. Guidance and tips are thin on the ground, despite an appeal from Lysaght et al. for light to be shone on the specifics of recruitment to "enhance methodological transparency and inform practice" (2016, p. 136). Innovative strategies have the potential to engage participants' attention, yet even less attention has been given to developing such methods. Rife et al. suggest that social media (in this case Facebook) are a "promising new (recruitment) space for social scientists" (2016, p. 80). One systematic review found that Facebook as a recruitment method was faster and cheaper and had the potential to improve participant selection in harder-to-reach demographics. However, only one publication adopted the approach I took, i.e. a purpose-made YouTube video -which they reported as a "novel feature of the study" (p. 7) -as part of their recruitment strategy. In summary, then, there is a paucity of literature on recruitment, and insufficient methodological attention has been paid to the challenges, opportunities and practicalities of different recruitment approaches.
Consideration of an appropriate recruitment strategy for my study was part of the wider research design process, which took into account the particularly sensitive nature of the research question. Research which delves into deeply personal experiences can be stressful for the participant and the researcher (Dickson-Swift, James & Liamputtong, 2008; Lee & Renzetti, 1993). Sensitive research topics can deepen the emotional connection between the researcher and participant, as both co-construct the narrative history (e.g. Fahie, 2014). It was clear to me that I would need to establish a trustful relationship and rapport with potential participants from the outset. Video can be one step in establishing trust, as participants get to see the researcher and get a sense of their sincerity, credibility and enthusiasm. Video can also be used for promoting studies, recruiting participants and gaining consent, but video-based recruitment methods are generally difficult to evaluate, rarely utilised and/or underreported.
Integrity is a key element of the research process and a vital characteristic of the researcher; clearly, the application for ethics approval for this study had to consider the sensitive nature of the research topic and assess the risk of potential harm to participants. Confidentiality and anonymity were particularly pertinent as was my duty of care to participants; I provided details of external, professional support at all interviews with participants. The ethics application process was underpinned by guidelines set out by the British Educational Research Association (BERA, 2011) and approved by the relevant ethics committee at my university.
In the section below, I describe the approach I adopted when recruiting participants for a study exploring what it means to be struggling as a teacher.
Recruitment Strategy: an Approach Using YouTube
Recruiting teachers who would be prepared and able to talk about their experience of struggling had the potential to be difficult but was central to the success of this research. Careful consideration had to be given to the approach I took, how I worded any recruitment documentation and, more practically, how I would access potential participants. Gaining access to the sample population is a key consideration once the inclusion and exclusion criteria for participation have been decided. Establishing a rapport with the participants was always going to be crucial, and potential participants needed to be able to relate to the person they would be opening up to. Being amenable and approachable are dimensions of having good interpersonal skills and are some of the key attributes highlighted as important in the literature. Non-verbal cues and affect-based messages are also more likely to attract potential participants. Researcher credibility is also mentioned as essential, as it influences people's willingness to participate. Kaba and Beran suggest that introductions are best made "in person" (2014, p. 582) and researchers should not underestimate the "personal touch" (Luck et al., 2017, p. 44). Kaba and Beran also suggest that if the researcher has "excellent networking skills" they should "use this strength" (2014, p. 583). It is, however, important to remain mindful of ethical considerations and the possibility of pressurising potential participants into taking part.
Taking this into account, I created a short recruitment video which I posted on YouTube and shared on social media, mainly through my Twitter account and my blogsite. The video lasted three minutes and showed me in my office holding a cup of tea, talking about the rationale for my study and the kind of teachers I was hoping to attract. My main reason for posting the video was to be seen as approachable, credible and sensitive. I also liked that it was a way to provide a consistent presentation of a "standardised message" to all potential participants (Khatri et al., 2015, p. 7). I hoped viewers would sense that we shared a similar background and professional concerns. The video attracted about 100 views within two weeks of being posted, and a number of participants mentioned that they had watched it. I subsequently conducted a separate survey to elicit feedback about the merits of this particular approach to recruitment. Results of the survey are presented below.
The only other study I was familiar with which used a purpose-made YouTube video is a 2015 clinical study targeting medical students. It combined the use of YouTube, Twitter, Facebook and a website to recruit participants for a "national, multicentre cohort study" in England (Khatri et al., 2015, p. 3). The research team created an introductory, narrated, step-by-step video to explain the project and protocol to potential participants. A link to the registration form was placed in the comments section. The video was 7:53 minutes long and is no longer available for viewing. This was a large-scale project in comparison to my study and involved a team of researchers. Similar to that project, however, I found this approach "easy and intuitive to use… as it provides an accessible medium by which to distribute" (Khatri et al., 2015, p. 7) information about the study. It was also "far less time intense" than other possible approaches (Khatri et al., 2015, p. 7).
My video was the main promotional tool; the main channel of communication to potential participants was via emails to secondary schools I know within about 40 miles of where I live. The text of the emails included a link to the YouTube video. Information for research participants can often look academic and professional, but the text can be difficult to read; the language I used in both the video and all written communications was "pitched at a level that can be understood by the potential participants" (Patel et al., 2003, p. 231) whilst also adhering to the "usual rules of written professional correspondence" (Kaba & Beran, 2014, p. 580). The first challenge, according to Lysaght et al., is to identify the "best first point of contact" (2016, p. 136), and I identified the need to circulate information about the study via the use of gatekeepers, in this case school leaders. The emails were sent either to the Headteacher or to a Senior Leader with responsibility for staff training and/or wellbeing. In all correspondence with gatekeepers and potential participants, I emphasised my professional experience as a teacher and, to some extent, my personal experience of struggling. I also made it clear that any further contact was to be made directly with me. It was important to make the initial presentation "palatable" (Lysaght et al., 2016, p. 136) by creating a "buzz" about my research and explaining how it might directly impact teachers (Kaba & Beran, 2014, p. 580). I was not offering any particular incentives to participants but hoped that their intrinsic motives for getting involved might include "curiosity, altruism… and knowledge" (Kaba & Beran, 2014, p. 583).
I was mindful of the fact that any willingness to participate can depend on the perceived "attractiveness of the research question" (Nasser et al., 2011, p. 1334) and that participants will only take part if they can identify with and "understand the validity and relevance" of the study (Kaba & Beran, 2014, p. 581). Relevance of the topic and applicability to personal context are factors which can influence the decision to participate (Caldwell, Hamilton, Tan, Craig & Boutron, 2010). The topic of struggling as a teacher is both relevant and timely in the English secondary state school system. However, some researchers struggle with the recruitment of participants, and I was aware that under-enrolment on the study would affect the contribution to knowledge I was seeking. Successful recruitment is critically dependent on the initial contacts made, and this underpinned the rationale for the approach I took. Initial concerns about not recruiting sufficient participants were soon allayed when I was contacted by more than twenty potential participants. It is not possible to report the direct influence of the video, as data were not collected on how many participants had or had not viewed the video.
I turn now to a reflective analysis of the recruitment video and take a forensic approach to the video transcript. I also present feedback from an online survey conducted after completion of the study. First, however, I describe the thinking behind the creation of the video itself.
The Recruitment Video: a Reflective Analysis
In this section I describe the recruitment video I uploaded to YouTube and reflect on its relative effectiveness. The video was filmed in one take, after preparing some notes to refer to if needed and with helpful guidance from my son, who was regularly vlogging at the time. I was keen to aim for multiple exposures and wanted to use a "variety of different methods (to) engage multiple stakeholders and collaborators" (Kaba & Beran, 2014, p. 584) to increase the "likelihood of participation" (Kaba & Beran, 2014, p. 579). The video was pivotal and would act as more than a useful "adjunct to traditional methods" (Khatri et al., 2015, p. 2) such as email and personal networks. The video was created in March 2017 (posted on YouTube: 30/03/2017), as part of the recruitment strategy for my doctoral study (Culshaw, 2019a). I was looking for teachers who would be willing to talk about their experience of struggling. The video can be viewed here: https://www.youtube.com/watch?v=3S_wlotcuF4
The rationale for the video was to appear approachable, credible and sensitive. I planned it to be a clear, concise and consistent summary of the project as well as an opportunity for potential participants to see who I was; an "in-person introduction" (Kaba & Beran, 2014, p. 582), albeit on screen. I hoped to show a "caring and compassionate attitude" (Patel et al., 2003, p. 234) and to be "passionate about [my] research" (Kaba & Beran, 2014, p. 581). When potential participants made an initial approach, supplementary written details were provided. Patel et al. suggest that the language used should be "pitched at a level that can be understood by potential participants" (2003, p. 231). My background as a teacher gave me an insider view and allowed me to gauge the type of language to use. I was particularly keen not to alienate viewers by using overly academic language. Indeed, throughout the study I adopted a more informal register in written and spoken correspondence. This goes, perhaps, against the advice of some who suggest adhering to the "usual rules of written professional correspondence" (Kaba & Beran, 2014, p. 580). I maintain that my conduct and correspondence remained professional, if unconventional, throughout. The text below is an extract from the introductory email which accompanied the video and is an example of the written register I used. It also mirrors the style of spoken language used in the video.
The literature reviewed above purports to provide practical guidance on how to recruit participants and also draws on the experience of the authors. Indeed, Kaba and Beran outline twelve tips to "improve the quantity and efficiency of recruitment to provide high-quality outcomes and evidence on the effectiveness of research" (2014, p. 583). Nasser et al., however, highlight the lack of attention on how to implement "best practices for achieving participant recruitment" (2011, p. 1334). Moreover, there appears to be a lack of empirical data from participants on their experience of recruitment. One systematic review found that less than half of the reviewed publications reported the effectiveness of recruitment strategies; others report, too, that recruitment findings tend to be inconclusive and not generalisable.
In my study, I had not initially intended to collect feedback from the participating teachers about their recruitment experience, nor the factors influencing their willingness to participate. To address this in retrospect, and to explore more forensically the potential of a video-based approach to recruitment, I subsequently designed an online survey. This survey was not intended to be completed by participants in my study.
Post-study Survey Findings
A survey was created with a view to eliciting feedback on the specific recruitment video uploaded to YouTube in March 2017. The survey was conducted after completion of the study and its sole purpose was to gauge responses to this style of recruiting research participants. The respondents were anonymous and were highly unlikely to have been participants in the original study.
It was a small-scale survey, designed to elicit perceptions and impressions of a video-based approach. The survey consisted of six questions and the link was shared via my Twitter account. The tweet was directed at my followers (approx. 4,000 people and organisations predominantly in the fields of educational leadership, teaching and/or educational research). It was retweeted 14 times and had almost 200 engagements, according to the Tweet Analytics.
Tweet 2 (2 days later):
Thanks for the responses so far. One last push today… it literally takes 5 mins. All feedback gratefully received, as it will help inform a recruitment strategy for a new project (as well as a journal article I'm writing).
There were 17 returns and, from the responses, it was clear that respondents had engaged with the video to some extent at least. The wording of the questions, the type of questions and a summary of the results are presented in Table 1. Themes emerging from the survey findings include the importance of the personal characteristics of the researcher, the visibility of the researcher, practical aspects of a video-based approach and the appropriateness of the approach.
Survey questions two to five were all framed as positive statements and allowed respondents to choose on a 5-point Likert scale between strongly disagree, disagree, neither disagree nor agree, agree and strongly agree. I am aware of the dangers of acquiescence bias in wording the questions this way. However, the comments in the open questions (1 and 6) do not contradict the overall positive experience of the video. Useful critical feedback included not referring to my notes during the video and reducing the length of the video to under two minutes. Ideal video length is perhaps a topic requiring further research. In one paper, a focus group was used to help choose what to include in the video. Developing the script was a key feature of the use of video in another study, and it might have been useful for me to seek third-party feedback about the script.
In terms of feedback about the visual format, being able to see the researcher in person and the informal tone of the video (questions 3-5), responses were overwhelmingly positive. Qualitative comments included 'much more powerful,' 'helps show why the research matters' and 'human connection with researcher is important.' Useful critical feedback included wanting to know how and where the video was shared and ensuring it reached a sufficiently diverse network of potential participants.
YouTube Video and Survey Findings: a Reflective Analysis
In this section I use the recruitment literature and the survey findings as an analytical lens to explore the video transcript. Below is the full transcript of the 3-minute video from which I have highlighted a selection of phrases for further analysis. I then present these phrases with reference to the literature and by incorporating feedback from the online survey.
Hello. I'm Suzanne (1) (6). But, also, there's a little bit perhaps about when the struggling began or when you were not struggling and so that journey between struggling and not-struggling is of interest to me as well. And finally, we'd look to explore the factors which influence the struggling, the ones that help and the ones that hinder. You might find talking about struggling difficult, and I think that's to be expected but equally it can be quite a useful thing to do to have that opportunity to talk to someone about how things are going for you (7). If you feel like you'd like to get involved with my study, then I'd love to hear from you (8). My contact details will appear at the end.
Essentially what it would involve is meeting up with you on two different occasions to talk about your experience (9). Of course, I've got ethics approval from my university to conduct this study and I can guarantee that your data would be kept confidentially and your identity will be anonymised throughout (10). And, of course, what is important to state is that even if you say you want to get involved and then change your mind then you're free to withdraw at any stage, no reason needed (11). So, if you think that you've got a story to tell me, if you'd like me to hear your experience of struggling, then I'd love to hear from you (12). Thank you for listening (13).
I have extracted 13 phrases from the video transcript for further elucidation. For ease of reading, I do not include individual references to the literature in the table but am drawing on seven of the most relevant publications reviewed in this article. Each phrase can be rooted in the literature and/or the survey findings. Table 2 shows how the highlighted phrases provide a pathway to guide others through the phases of introducing the study and key details they would need to know, to help them decide whether to participate. The phrases emphasise the importance of researchers connecting with potential participants in terms of personal characteristics such as approachability, credibility and empathy. Participants also need to receive sufficient information about the research topic itself and to sense the researcher's enthusiasm for the topic. Finally, they need reassurance about the ethical conduct and practicalities of the research process as well as an appreciation of their willingness to participate. What Table 2 shows, therefore, is a triangulation between the literature and the planning and creation of the YouTube video, which has also been confirmed by the retrospective survey findings.
Table 2: Video transcript phrases and reference to the literature and survey findings (cont.)
In summary, the video attempted to show me, in person, as a credible, trustworthy and caring researcher. It provided a summary overview of the study and an indication of what participation would involve. Ethical considerations were mentioned as well as the right to withdraw. It was a direct, passionate, appreciative appeal to teachers to share their stories of struggling, an experience that I share. The video was viewed over 100 times within just two weeks and I was contacted by over 20 potential participants. I suggest that the characteristics highlighted in Table 2, which are referenced in the recruitment literature and confirmed in the survey findings, need to be inherent if the use of video as a recruitment tool is to be effective.
Concluding Remarks
When I was planning the research design for this study, recruitment was obviously a consideration but not necessarily a key one. I have presented here a "novel feature" and a potentially "new modality for … recruitment" (Khatri et al., 2015, p. 7, 2) which I suggest is suitable for lone researchers like myself. The video I created and uploaded to YouTube has now been viewed over 200 times. However, I will never know exactly how many participants watched or were influenced by the recruitment video I shared. This is a limitation I acknowledge. At the time, it was not my intention to collect data relating to the impact or influence of that particular recruitment tool, although some participants mentioned that they had watched it. Researchers, including myself, would do well to plan to elicit and learn from feedback on what actually influenced people's willingness to participate.
This was a small-scale study in the field of education, with one researcher and limited budget. Creating the video was neither time-consuming nor expensive. The online survey which was conducted after completion of the study has afforded feedback into the potential for using video for recruiting research participants. Given the importance of accessing participants for the success of research and the validity of research outcomes, further research into the potential influence and power of video-based recruitment approaches would be welcome. | 2020-12-03T09:04:08.935Z | 2020-11-23T00:00:00.000 | {
"year": 2020,
"sha1": "0359a40c9d1052cf6bad52e9b4aaae6e82d59961",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1163/23644583-00501004",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8c33a692d9580601c30f8e64923e8258a176b0d2",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
234335124 | pes2o/s2orc | v3-fos-license | A shape optimisation with the isogeometric boundary element method and adjoint variable method for the three-dimensional Helmholtz equation
This paper presents a shape optimisation system to design the shape of an acoustically-hard object in the three-dimensional open space. Boundary element method (BEM) is suitable to analyse such an exterior field. However, the conventional BEM, which is based on piecewise polynomial shape and interpolation functions, can require many design variables because they are usually chosen as a part of the nodes of the underlying boundary element mesh. In addition, it is not easy for the conventional method to compute the gradient of the sound pressure on the surface, which is necessary to compute the shape derivative of our interest, of a given object. To overcome these issues, we employ the isogeometric boundary element method (IGBEM), which was developed in our previous work. With using the IGBEM, we can design the shape of surfaces through control points of the NURBS surfaces of the target object. We integrate the IGBEM with the nonlinear programming software through the adjoint variable method (AVM), where the resulting adjoint boundary value problem can be also solved by the IGBEM with a slight modification. The numerical verification and demonstration validate our shape optimisation framework.
structures with certain periodic patterns, that is, metamaterials, have been recently and intensively studied in science and engineering [6,7].
Shape optimisation is useful in appropriately designing (meta)materials in a wave field of interest. To analyse the wave problem numerically, boundary element method (BEM) is suitable because it can deal with the infinite domain without any absorbing boundary condition, which is necessary for domain-type solvers such as finite element method (FEM) and finite difference method. In addition, boundary-only models that BEM handles fit in shape optimisation, which concerns the deformation of the surface (or boundary) of a target material rather than its inside.
The first study on shape optimisation using BEM was conducted by Soares et al. in 1984 [8], which investigated linear-elastostatic problems in 2D. Since then, there have been over 200 publications related to both shape optimisation and BEM, as shown in Figure 1. BEM-based shape optimisation is currently being promoted by a new type of BEM, that is, the isogeometric BEM (IGBEM), whose number of publications is also shown in the same figure. The IGBEM is characterised by employing the NURBS (including B-spline) function as both shape and interpolation functions, following the concept of isogeometric analysis (IGA) [9,10]. In this case, one can design the shape of interest through the control points (CPs) associated with the NURBS surface(s). On the other hand, one needs to regard (a part of) the nodes of a boundary element mesh as the design variables in the case of the conventional BEMs, which are based on a piecewise polynomial basis. This can increase the number of design variables unnecessarily, in particular when the shape of the boundary is complicated and, thus, the mesh is fine. In the IGA, the technique of knot insertion can readily resolve the dilemma between reducing the number of design variables and increasing the resolution of the boundary element analysis. This is the main advantage of the IGBEM over the conventional BEMs, although the formulation and implementation of the former are harder than those of the latter.
Another merit of the IGBEM is that we can easily compute the gradient of the sound pressure at any point (generally except for its boundary, where the other surfaces are connected) on the surface of a scatterer. This is because the sound pressure is usually differentiable over a NURBS surface. This property of the IGBEM is useful when we compute the shape derivative of our interest (see (11)). On the contrary, the gradient can be discontinuous on the edges of the boundary element mesh in the conventional BEM.
Figure 1: Publications with the terms both "shape optimization" and "boundary element method" (coloured in blue) and those with the term "isogeometric boundary element method" (in red). The data was obtained from Web of Science on Apr 28, 2021.
So far, shape optimisations based on the IGBEM have been investigated in terms of potential problems (or steady-state heat problems) [11][12][13][14][15], elastostatic problems [16][17][18][19][20][21], including a 2D thermoelastic problem [22], and the acoustic problems of concern here [23][24][25][26][27][28][29]. In regard to 2D, Liu et al. [23] performed a shape optimisation of a Γ-shaped sound barrier, where the direct differentiation method (DDM) was employed to compute the sensitivity of the objective function with respect to the CPs. Takahashi et al. [24], which is prior research to the current work, optimised periodic and layered structures in terms of ultra-thin solar panels. They derived the shape derivatives with the adjoint variable method (AVM). Ummidivarapu et al. [25] introduced a teaching-learning-based optimisation algorithm, which is a gradient-free method, to design an acoustic horn. Similarly, Shaaban et al. [26] performed a shape optimisation by exploiting the particle swarm optimisation (PSO) algorithm, which is gradient-free. This was extended to the axi-symmetric problem by the same authors [27]. The shape optimisation by Wang et al. [28] is similar to Liu et al. [23] but used the AVM instead of the DDM. On the other hand, 3D acoustics was considered only by Chen et al. [29]. They conducted a shape optimisation based on the DDM. Thus, their study can be regarded as a 3D version of [23]. They successfully maximised the sound pressure on the surface of a submarine or vase.
Similarly to Chen et al. [29], the purpose of this study is to establish a shape optimisation system for 3D acoustic problems. In this system, a nonlinear optimisation algorithm integrates the corresponding IGBEM and AVM. These two ingredients were developed in the authors' previous research [30], which proposed an accurate method to evaluate the singular and nearly-singular integrals associated with the isogeometric discretisation and, additionally, performed a shape-sensitivity analysis as an application. The present work makes steady progress toward the shape optimisation by considering several optimisation algorithms implemented in the two software packages Ipopt [31] and NLopt [32]. Those algorithms are compared with respect to their performance in some numerical examples.
The rest of this paper is organised as follows: Section 2 overviews an IGBEM for the 3D Helmholtz equation in terms of exterior homogeneous Neumann problems, which was constructed in our previous work [30]. Section 3 formulates the shape optimisation on the basis of the IGBEM and the adjoint variable method and describes the reduction of the problem to a nonlinear optimisation problem. Section 4 validates the proposed shape optimisation system through a numerical example and then demonstrates the capability of the system for complicated problems. Finally, Section 5 concludes the present study.
Isogeometric BEM
We will overview the formulation of the IGBEM for the 3D Helmholtz equation, referring to our previous work [30].
Problem statement
Let us consider a scattering problem of the time-harmonic acoustic wave in 3D. Specifically, we will solve the following exterior Neumann boundary value problem (BVP) in the infinite domain $\mathbb{R}^3 \setminus \overline{V}$:

Governing equation: $\triangle u(x) + k^2 u(x) = 0$ for $x \in \mathbb{R}^3 \setminus \overline{V}$, (1a)

Boundary condition: $\partial u / \partial n = 0$ on $S$, (1b)

Radiation condition: the scattered field $u - u^{\mathrm{in}}$ satisfies the Sommerfeld radiation condition as $|x| \to \infty$, (1c)

where $u : x \in \mathbb{R}^3 \to \mathbb{C}$ denotes the total field or sound pressure, $u^{\mathrm{in}}$ denotes a given incident field, V denotes one or more acoustically-hard scatterers in $\mathbb{R}^3$, S denotes the boundary ∂V, n denotes the unit outward normal to S and k denotes the prescribed wavenumber.
Boundary integral equation
We will solve the BVP in (1) with the following standard boundary integral equation (BIE):

$C(x)\,u(x) + \int_S \frac{\partial G(x, y)}{\partial n_y}\, u(y)\, \mathrm{d}S_y = u^{\mathrm{in}}(x) \quad \text{for } x \in S,$ (2)

where the single-layer potential is absent because $\partial u / \partial n = 0$ on S, and G denotes the fundamental solution of the 3D Helmholtz equation, that is,

$G(x, y) = \frac{e^{\mathrm{i} k |x - y|}}{4 \pi |x - y|}.$ (3)

Also, C denotes the free term and is equal to 1/2 if S is smooth at x. In this study, we utilise the equi-potential condition to yield

$C(x) = -\int_S \frac{\partial \Gamma(x - y)}{\partial n_y}\, \mathrm{d}S_y,$ (4)

where $\Gamma(x) := \frac{1}{4 \pi |x|}$ denotes the fundamental solution of the Laplace equation in 3D.
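For concreteness, both kernels can be evaluated directly; the following Python sketch is our own illustration, not part of the paper's implementation, and the function names are our choices.

import numpy as np

def helmholtz_G(x, y, k):
    # Fundamental solution of the 3D Helmholtz equation, eq. (3):
    # G(x, y) = exp(ik|x - y|) / (4 pi |x - y|).
    r = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    return np.exp(1j * k * r) / (4.0 * np.pi * r)

def laplace_Gamma(x, y):
    # Fundamental solution of the 3D Laplace equation, Gamma = 1 / (4 pi |x - y|),
    # which enters the equi-potential condition (4) for the free term C(x).
    r = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    return 1.0 / (4.0 * np.pi * r)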
Isogeometric analysis
We will discretise the BIE in (2) as well as the RHS of (4) under the concept of isogeometric analysis (IGA). The IGA is a kind of isoparametric formulation that exploits the NURBS basis as both interpolation and shape functions, mainly in the field of boundary and finite element methods.
First, we express a given boundary S, which is supposed to consist of one or more closed surfaces, by using multiple NURBS surfaces. Each NURBS surface, say Π, is parameterised with two curve parameters s and t, where the domain of s and t can be [0, 1] without loss of generality. Then, we can express any point y on Π through the tensor product of the NURBS basis as follows:

$y(s, t) = \frac{\sum_{k=0}^{n_s - 1} \sum_{l=0}^{n_t - 1} w_{kl}\, N_k^{p_s}(s)\, N_l^{p_t}(t)\, C_{kl}}{\sum_{k=0}^{n_s - 1} \sum_{l=0}^{n_t - 1} w_{kl}\, N_k^{p_s}(s)\, N_l^{p_t}(t)},$ (5)

where $N_k^p$ denotes the k-th B-spline function of degree p, and $w_{kl}$ and $C_{kl}$ denote the (k, l)-th weight and control point, respectively, which should be determined according to the shape of S. Also, for the sake of simplicity, we denote the product $N_k^{p_s}(s) N_l^{p_t}(t)$ by $N_{kl}(s, t)$ and the summation in the denominator by $W(s, t)$.
The two series of knots, denoted by $\{s_i\}_{i=0}^{n_s + p_s}$ and $\{t_j\}_{j=0}^{n_t + p_t}$, are non-decreasing in general. To guarantee that the outer control points, i.e. the control points $C_{kl}$ whose index k or l is either 0 or the largest one (i.e. $n_s - 1$ or $n_t - 1$), lie on the perimeter ∂Π of the NURBS surface, we use clamped knots, i.e.

$s_0 = \cdots = s_{p_s} = 0, \quad s_{n_s} = \cdots = s_{n_s + p_s} = 1,$

and similarly for $\{t_j\}$. Similarly to the boundary point y in (5), we interpolate the boundary density u on a surface Π with the tensor product of the NURBS basis as follows:

$u(s, t) \approx \sum_{k=0}^{n_s - 1} \sum_{l=0}^{n_t - 1} \frac{w_{kl}\, N_{kl}(s, t)}{W(s, t)}\, u_{kl},$ (6)

where the coefficients $u_{kl}$ are the unknown variables to be determined from the BIE in (2). It should be noted that, since the knots are clamped, u at a control point $C_{kl}$ on the perimeter ∂Π corresponds to $u_{kl}$ exactly; meanwhile, the other coefficients $u_{kl}$ do not generally correspond to u at $C_{kl}$.
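As an illustration of (5) and (6), the following Python sketch (our own; it assumes control points C of shape (n_s, n_t, 3) and weights w of shape (n_s, n_t)) evaluates a surface point with the Cox-de Boor recursion. Replacing C[k, l] by the coefficients u_kl turns the same routine into the interpolation (6).

import numpy as np

def bspline_basis(i, p, u, knots):
    # Cox-de Boor recursion for the i-th B-spline basis function N_i^p(u).
    if p == 0:
        if knots[i] <= u < knots[i + 1]:
            return 1.0
        # Close the last span so that the end point u = knots[-1] is covered.
        return 1.0 if (u == knots[-1] and knots[i] < u <= knots[i + 1]) else 0.0
    out = 0.0
    d1 = knots[i + p] - knots[i]
    if d1 > 0.0:
        out += (u - knots[i]) / d1 * bspline_basis(i, p - 1, u, knots)
    d2 = knots[i + p + 1] - knots[i + 1]
    if d2 > 0.0:
        out += (knots[i + p + 1] - u) / d2 * bspline_basis(i + 1, p - 1, u, knots)
    return out

def nurbs_point(s, t, C, w, s_knots, t_knots, ps, pt):
    # Evaluate y(s, t) as in (5): y = sum_kl w_kl N_k(s) N_l(t) C_kl / W(s, t).
    ns, nt = w.shape
    num, W = np.zeros(3), 0.0
    for k in range(ns):
        Nk = bspline_basis(k, ps, s, s_knots)
        if Nk == 0.0:
            continue
        for l in range(nt):
            r = w[k, l] * Nk * bspline_basis(l, pt, t, t_knots)
            num += r * C[k, l]
            W += r
    return num / W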
The solution, that is, the Dirichlet data u on S, must be continuous across the intersecting line between two adjacent NURBS surfaces. This continuity requirement can be satisfied by giving a unique unknown index, say ν, to all the unknown coefficients associated with the underlying intersection. For example, let us consider the case where an outer control point $C_{kl}$ on a NURBS surface Π has the same position as an outer point $C'_{k'l'}$ on another surface Π′, where we measure the geometrical distance of the two points $C_{kl}$ and $C'_{k'l'}$ to judge whether they share the same position or not. Then, we give a global unknown index ν to the two points $C_{kl}$ and $C'_{k'l'}$ as well as to the corresponding unknown coefficients $u_{kl}$ and $u'_{k'l'}$. As a result, we can obtain a certain number N that represents the number of (global) unknowns over S. By using the N global unknowns and control points, denoted by $u_\nu$ and $C_\nu$ respectively, we no longer use the local indices (i.e. kl and k′l′) and can express any point y and the boundary value u as follows:

$y = \sum_{\nu=0}^{N-1} R_\nu(s, t)\, C_\nu, \quad u \approx \sum_{\nu=0}^{N-1} R_\nu(s, t)\, u_\nu,$ (7)

where $R_\nu$ corresponds to the basis $\frac{w_{kl} N_{kl}}{W}$ on a certain NURBS surface.
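A minimal sketch of this global-indexing step is given below (our own code; the coincidence tolerance tol is an assumption, and the quadratic-cost search is for illustration only).

import numpy as np

def assign_global_indices(points, tol=1e-8):
    # 'points' is an (n, 3) array of control points gathered from all NURBS
    # surfaces. Control points whose geometrical distance is within tol are
    # given one global unknown index nu, which enforces the continuity of u
    # across the intersections of the surfaces.
    global_of = [-1] * len(points)
    representatives = []                 # one coordinate per global unknown
    for i, p in enumerate(points):
        for nu, q in enumerate(representatives):
            if np.linalg.norm(p - q) <= tol:
                global_of[i] = nu
                break
        else:
            global_of[i] = len(representatives)
            representatives.append(p)
    return global_of, len(representatives)   # N = number of global unknowns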
Discretisation of the BIE
By plugging (7) into the BIE in (2), we can yield the following discretised BIE:

$\sum_{\nu=0}^{N-1} \left[ C(x)\, R_\nu(\hat{s}, \hat{t}) + \int_S \frac{\partial G(x, y)}{\partial n_y}\, R_\nu(y)\, \mathrm{d}S_y \right] u_\nu = u^{\mathrm{in}}(x).$ (8)

Here, the pair of parameters $(\hat{s}, \hat{t})$ corresponds to a collocation point x on S, and each parameter is determined as the Greville abscissa [23]. Similarly to the determination of the N global unknowns $(u_\nu)$, we regard repeated collocation points on an intersection as a unique collocation point. As a result, we can determine N distinct collocation points on S, which are enough to solve (8). In this study, we use the LU decomposition to solve for the N unknowns $(u_\nu)$ from the set of N discretised BIEs of (8).
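For reference, the Greville abscissa of the k-th control point is the mean of p consecutive knots, and a collocation point on a surface is the pair $(\hat{s}_k, \hat{t}_l)$. A one-line sketch (ours):

import numpy as np

def greville_abscissae(knots, p, n):
    # Greville abscissa of the k-th control point: mean of knots s_{k+1..k+p}.
    # With clamped knots, the first and last abscissae coincide with the ends
    # of the parameter interval, matching the outer control points.
    return np.array([np.mean(knots[k + 1:k + p + 1]) for k in range(n)])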
Once the unknowns are obtained, we can compute u at any point: we may use (6) for any point on S, while we may exploit the integral representation for any point in the exterior domain $\mathbb{R}^3 \setminus \overline{V}$. In addition, we can compute the derivatives of u on S by differentiating the NURBS functions in (6) with respect to s and/or t. This is useful for computing the shape derivative (sensitivity) because it usually consists of the derivative(s) of u on a surface, as seen in (11).
Regarding the boundary integrals in (8), we apply Lachat's method to the singular integrals and the hierarchical subdivision technique to the singular and nearly-singular integrals. The details are described in our previous paper [30].
Knot insertion
As we will mention in Section 3, we will optimise the shape of S via the control points $C_\nu$. If the number of control points involved in a target S is large, the convergence of the optimisation would be slow. So, one may construct a surface with a small number N of control points. However, this can lead to a low-accuracy solution in the IGA because the number of unknowns (degrees of freedom) is also N; recall (7). To resolve this issue, which is common in the IGA [9], we may resort to knot insertion, by which control points can be added anywhere without changing the shape of S. This technique is used when we analyse the BVP in (1) as well as the adjoint one in (12), which will be mentioned in Section 3.1.
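A sketch of one knot-insertion step (Boehm's algorithm) for a single B-spline curve is given below; it is our own illustration. For a NURBS curve, one would apply it to the homogeneous coordinates (w_i C_i, w_i), and surface refinement applies the same operation row- or column-wise.

import numpy as np

def insert_knot(knots, P, p, u):
    # Insert the knot u (and one control point) into a degree-p B-spline
    # curve with control points P without changing the curve's shape.
    knots = np.asarray(knots, dtype=float)
    P = np.asarray(P, dtype=float)
    k = np.searchsorted(knots, u, side='right') - 1   # u in [knots[k], knots[k+1])
    Q = []
    for i in range(len(P) + 1):
        if i <= k - p:
            Q.append(P[i])                             # unchanged points
        elif i > k:
            Q.append(P[i - 1])                         # shifted points
        else:
            a = (u - knots[i]) / (knots[i + p] - knots[i])
            Q.append((1.0 - a) * P[i - 1] + a * P[i])  # blended points
    return np.insert(knots, k + 1, u), np.array(Q)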
Gradient-based shape optimisation
We will construct our shape optimisation method based on the IGBEM, the adjoint variable method and a nonlinear optimisation method. The present framework is a direct extension of the 2D case investigated in our previous paper [24].
Problem statement and shape derivative
The present shape optimisation problem is to maximise or minimise a prescribed objective function J by changing the surface of the scatterer(s) V, i.e. the boundary S. Specifically, we define J as the summation of the sound pressure u at the M observation points $z_1, \ldots, z_M$ (see (9)), where u is supposed to be a solution of the BVP in (1), i.e. the primary problem in the context of the adjoint variable method.
To define the shape derivative (sensitivity), denoted by $\mathcal{S}$, of J in (9), we slightly move every point y on S by $\epsilon V(y)$, where $\epsilon$ is an infinitesimally small number and V denotes the direction of the movement. Correspondingly, the boundary S and the field u are perturbed to $\tilde{S}$ and $\tilde{u}$, respectively. Then, $\mathcal{S}$ is defined as the coefficient of the $\epsilon$-term obtained by expanding the perturbed objective function $J(\tilde{u}; \tilde{S})$ with respect to $\epsilon$. Therefore, we have

$J(\tilde{u}; \tilde{S}) = J(u; S) + \epsilon\, \mathcal{S} + o(\epsilon).$ (10)

Here, as is well known (see [33] for example), $\mathcal{S}$ can be derived as a boundary integral over S involving the gradients of u and of the adjoint field (equation (11)), where $()^*$ denotes the complex conjugate and the adjoint field λ is the solution of the following adjoint problem: the Helmholtz equation in $\mathbb{R}^3 \setminus \overline{V}$ (12a), the homogeneous Neumann boundary condition $\partial \lambda / \partial n = 0$ on S (12b) and a radiation condition. We can also solve the adjoint problem with the IGBEM mentioned in Section 2. To this end, we may replace the incident field $u^{\mathrm{in}}(x)$ in (2) with a superposition of the fundamental solutions $G(\cdot, z_i)$ centred at the M observation points, where G is the fundamental solution given in (3).
Discretisation of the shape derivative
In the numerical analysis, the infinitesimal deformation (perturbation) $\epsilon V$ must be finite. When we denote by $\tilde{y}$ the position, on the perturbed surface $\tilde{S}$, of an arbitrary point y that is expressed as (7) in the IGBEM, we can approximate $\epsilon V(y)$ as follows:

$\epsilon V(y) \approx \tilde{y} - y = \sum_{\nu=0}^{N-1} R_\nu(s, t)\, \delta C_\nu,$ (13)

where $\delta C_\nu$ denotes the variation of the control point $C_\nu$, i.e. $\delta C_\nu := \tilde{C}_\nu - C_\nu$.
Then, the expansion in (10) can be discretised as

$J(\tilde{u}; \tilde{S}) - J(u; S) \approx \sum_{\nu=0}^{N-1} s_\nu \cdot \delta C_\nu,$ (14)

where the vector $s_\nu$, given by the boundary integral that results from substituting (13) into (11), stands for the sensitivity of J with respect to the control point $C_\nu$; the control points are considered as the design variables in this study. Because $\partial u / \partial n = 0$ and $\partial \lambda / \partial n = 0$ due to the boundary conditions in (1b) and (12b), respectively, the gradients of u and $\lambda^*$ appearing in (14) reduce to tangential derivatives (equation (15)), where $s := \partial y / \partial s$ and $t := \partial y / \partial t$ denote the tangential vectors along the s and t coordinates, respectively, and $J := |s \times t|$ is the Jacobian. It should be emphasised that the tangential derivatives $\partial u / \partial s$ and $\partial u / \partial t$ can be computed readily by differentiating the NURBS basis $R_\nu$. In addition, the gradients are continuous over the surface S (except, in general, on the intersections among the NURBS surfaces) if the degrees $p_s$ and $p_t$ of the NURBS basis are two or more.
Since there is no singularity in the integral in (14), we may evaluate the integral with the Gauss-Legendre quadrature formula.
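For illustration, the tangential gradient that appears in the sensitivity integrand can be assembled from the parametric derivatives and the (generally non-orthonormal) tangents s and t by inverting the first fundamental form. The sketch below is our own arrangement, which should be equivalent to (15); note that det = EG - F^2 equals J^2 = |s x t|^2.

import numpy as np

def surface_gradient(du_ds, du_dt, s_vec, t_vec):
    # Writing grad(u) = a*s + b*t (no normal component since du/dn = 0 on S),
    # the coefficients a, b solve the 2x2 system of the first fundamental form:
    #   [s.s  s.t] [a]   [du/ds]
    #   [s.t  t.t] [b] = [du/dt]
    E, F, G = s_vec @ s_vec, s_vec @ t_vec, t_vec @ t_vec
    det = E * G - F * F
    a = (G * du_ds - F * du_dt) / det
    b = (E * du_dt - F * du_ds) / det
    return a * s_vec + b * t_vec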
Reduction to nonlinear optimisation problem
The optimisation problem stated in Section 3.1 forms a nonlinear optimisation problem. In general, the problem is to minimise a prescribed objective function $f : \mathbb{R}^n \to \mathbb{R}$ with respect to n design variables $x \in \mathbb{R}^n$ under m inequality constraints $g : \mathbb{R}^n \to \mathbb{R}^m$, i.e.

$\min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad g^L \le g(x) \le g^U,$

where $g^L, g^U \in \mathbb{R}^m$ denote the bounds of g. The design variables x are usually bounded as

$x^L \le x \le x^U,$

where $x^L, x^U \in \mathbb{R}^n$ denote the bounds of x. Optionally, the gradient ∇f and the Hessian of f are supplied if they can be readily computed. In the present shape optimisation problem, we choose the N control points $C_\nu$ (where $\nu = 0, \ldots, N - 1$) as the design variables. Then, we may regard our objective function J, the design variables $C_\nu$ and their gradients $s_\nu$ in (14) as f, $(C_0, \ldots, C_{N-1})^{\mathsf{T}} \in \mathbb{R}^{3N}$ and $(s_0, \ldots, s_{N-1})^{\mathsf{T}} \in \mathbb{R}^{3N}$, respectively, where 3N corresponds to the number n of design variables. In this study, we do not consider the Hessian of J.
We utilise a primal-dual interior-point method with line searches based on filter methods, which is implemented in Ipopt [31,34] and will be called IP hereafter. In the previous research [24], the IP sometimes required a large number of backtracking line-search steps and, thus, a large number of evaluations of f as well as ∇f. As a result, the computational time was sometimes enormous.
Hence, we also consider different gradient-based optimisation methods. To this end, we exploit the software NLopt [32], which contains many optimisation methods. We use the MMA (method of moving asymptotes [35]) and SLSQP (sequential least-squares quadratic programming [36]) because they are general-purpose in the sense that they can handle nonlinear inequality constraints.
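A minimal driver in NLopt's Python interface might look as follows. This is a sketch under our assumptions; evaluate_J_and_gradient is a hypothetical user routine that runs the primary and adjoint IGBEM analyses for the shape encoded by x and returns J together with the stacked gradient of (14).

import numpy as np
import nlopt

def run_optimisation(x0, lb, ub, algorithm=nlopt.LD_SLSQP):
    opt = nlopt.opt(algorithm, len(x0))        # e.g. nlopt.LD_MMA or nlopt.LD_SLSQP
    opt.set_lower_bounds(lb)
    opt.set_upper_bounds(ub)

    def objective(x, grad):
        J, sens = evaluate_J_and_gradient(x)   # hypothetical IGBEM+AVM routine
        if grad.size > 0:
            grad[:] = -sens                    # NLopt minimises; we maximise J
        return -J

    opt.set_min_objective(objective)
    opt.set_ftol_rel(1e-3)                     # relative tolerance used in the paper
    x_opt = opt.optimize(np.asarray(x0, dtype=float))
    return x_opt, -opt.last_optimum_value()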
Numerical examples
This section will provide some numerical examples with our shape optimisation software. In Section 4.1, we will validate the software through an optimisation problem that can be analysed exactly. The problem is actually a parametric optimisation problem, but it allows us to test our software for the shape optimisation problem in its entirety. After the validation, we will show the capability of the software through some more complicated examples in Sections 4.2-4.4.
Verification

Problem configuration
Let us consider a parametric optimisation problem. Specifically, we will find the radius, denoted by a, of a spherical scatterer (centred at the origin) such that the radius maximises the objective function J in (9), evaluated at a single observation point $z_1$. We give a planewave incident field $u^{\mathrm{in}}(x) = e^{-\mathrm{i}kz}$, which propagates in the $-z$ direction, where the wavenumber k is given as one.
Following the reference [37], we create the surface S of the spherical scatterer with six NURBS surfaces. Each surface is constructed with 5 × 5 control points and the tensor product of the B-spline functions of degree 4, i.e. $n_s = n_t = 5$ and $p_s = p_t = 4$. Correspondingly, the number N of the (unique) control points is 98. As noted in Section 2.5, we can increase N by knot insertion to improve the resolution of the boundary element solution. We consider three cases of N, i.e. N = 866, 2402 and 3458. Under the present configuration, the sound pressure u at the observation point $z_1$ can be written as a function of the radius a [38] through a partial-wave series (equation (17)) in terms of the spherical Bessel function $j_n$ of degree n, the spherical Hankel function $h_n$ of the first kind and degree n, and the Legendre polynomial $P_n$ of degree n, where the spherical coordinates θ and r corresponding to $z_1$ are 0 and 8.5, respectively. In addition, the coefficient $A'_n$ appearing in the series is defined through (18), where the prime represents the differentiation with respect to a. It should be noted that (17) is valid when the observation point $z_1$ is outside the sphere, that is, for $a \in (0, 8.5)$.
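As a hedged illustration of the series behind (17), the classical partial-wave solution for a sound-hard sphere can be evaluated with SciPy as follows. We use one common convention (incidence $e^{\mathrm{i}kz}$); the paper's $e^{-\mathrm{i}kz}$ incidence may differ by a conjugation and/or the orientation of θ, so this sketch is not claimed to reproduce (17) verbatim.

import numpy as np
from scipy.special import spherical_jn, spherical_yn, eval_legendre

def rigid_sphere_pressure(a, r, cos_theta, k=1.0, n_max=60):
    # Total field outside a sound-hard sphere of radius a for a unit planewave:
    # u = sum_n (2n+1) i^n [ j_n(kr) - (j_n'(ka)/h_n'(ka)) h_n(kr) ] P_n(cos th),
    # where h_n = j_n + i*y_n is the spherical Hankel function of the 1st kind.
    u = 0.0 + 0.0j
    for n in range(n_max + 1):
        hn = spherical_jn(n, k * r) + 1j * spherical_yn(n, k * r)
        dhn_a = (spherical_jn(n, k * a, derivative=True)
                 + 1j * spherical_yn(n, k * a, derivative=True))
        djn_a = spherical_jn(n, k * a, derivative=True)
        term = spherical_jn(n, k * r) - djn_a / dhn_a * hn
        u += (2 * n + 1) * (1j ** n) * term * eval_legendre(n, cos_theta)
    return u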
From (16) and (17), we can plot J against $a \in [1, 8]$ as in Figure 3. When we restrict the lower and upper bounds of the design variable a to 1 and 7, respectively, we find two local maxima, which are computed by applying the Brent minimisation algorithm, implemented in the GNU Scientific Library [39], to (17). If we can arrive at one of these local maxima from a certain initial radius, denoted by $a_0$, we can validate our optimisation software.
The design variable of this problem is the radius a only, while our shape optimisation method treats all the control points $C_\nu$ as the design variables (recall Section 3.3). To fill this gap, we modify a fraction of the computer program by considering the relationship between the radius a and the control points $C_\nu$. Specifically, when the radius is updated from a to $\tilde{a}$, all the control points must be scaled by $\tilde{a}/a$ so that the surface S preserves its spherical shape. Therefore, the control points $C_\nu$ must be updated to $\tilde{C}_\nu$ so that

$\tilde{C}_\nu = \frac{\tilde{a}}{a}\, C_\nu$ (19)

holds. Plugging this and the variation of the radius, i.e. $\delta a := \tilde{a} - a$, into (13), we have

$\epsilon V(y) \approx \sum_{\nu=0}^{N-1} R_\nu(s, t)\, \frac{\delta a}{a}\, C_\nu = \frac{\delta a}{a}\, y.$ (20)

Clearly, the shape derivative of J with respect to the radius a, i.e. $\partial J / \partial a$, is obtained from (14) as

$\frac{\partial J}{\partial a} = \frac{1}{a} \sum_{\nu=0}^{N-1} s_\nu \cdot C_\nu.$ (21)

Hence, when the new (perturbed) radius $\tilde{a}$ is determined by an optimisation algorithm, we first update the control points $C_\nu$ to $\tilde{C}_\nu$ according to (19) and then evaluate $\partial J / \partial a$ through (21). This procedure is added to the user-defined routine that computes J and its gradient.
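The chain rule of (19)-(21) amounts to one line of code (our sketch; s_nu and C_nu are (N, 3) arrays holding the control-point sensitivities and positions).

import numpy as np

def dJ_da(s_nu, C_nu, a):
    # Since every control point scales with the radius, delta C_nu =
    # (delta a / a) * C_nu, the radius derivative collects the control-point
    # sensitivities along the control points, as in (21).
    return float(np.sum(s_nu * C_nu)) / a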
In this analysis, we let the convergence tolerance of IP, i.e. the parameter tol of Ipopt [31, Page 69], be $10^{-3}$. In regard to both MMA and SLSQP, we let the relative tolerance, which corresponds to the parameter ftol_rel of NLopt, be $10^{-3}$.
Results and discussions
The columns of "Eva." in Table 1 shows the number of evaluating J (and most likely its gradient at the same time) until convergence. In every combination of a 0 and N , the SLSQP required less evaluation counts than the others. Figure 4 plots the value of J against the evaluation count in the case of a 0 = 3 and N = 866.
The present results indicate that our formulation and its numerical implementation are valid.
Example 1: Reflector
To demonstrate the capability of our shape optimisation framework, we will begin with a simple model of a cuboid "reflector", whose dimensions are 1 × 1 × 0.5, as shown in Figure 5. Giving a planewave incident field $u^{\mathrm{in}}(x) = e^{-\mathrm{i}kz}$ with the wavenumber k = 3 that propagates in the $-z$ direction, we try to maximise the objective function J in (9), where a single observation point $z_1 = (0.5, 0.5, 1.0)^{\mathsf{T}}$ on the illuminated side is considered. The surface of the reflector consists of six NURBS surfaces (viz. top, bottom, left, right, front and back) with NURBS functions of degree 2, i.e. $p_s = p_t = 2$; they are shown in different colours in Figure 5. The number $n_s$, which denotes the number of control points along the local coordinate s, is given as 6, 6 and 3 if s is parallel to the x-, y- and z-axis, respectively, at the initial configuration. Similarly, we determine the value of $n_t$ for every NURBS surface. Each NURBS surface is clamped on its perimeter, as mentioned in Section 2.3. In this case, the number N of unique CPs is 92, which are shown as points (in red or blue) on the surfaces in Figure 5. The number of CPs is increased to 548 by knot insertion when we perform the isogeometric boundary element analysis.
In this example, we design only the 4 × 4 CPs, coloured in red in Figure 5, on the top surface excluding the perimeter. In addition, we allow each target CP to move vertically by at most 0.3, which guarantees that no target CP ever touches the others. Thus, the number of design variables is 16. To be specific, we first regard all the coordinates of the CPs as the design variables and then set the initial coordinate as both the lower and upper bound for each coordinate that is not optimised. Figure 6 compares the history of the value of J for the three optimisation algorithms. All the algorithms converged to almost the same solution. Similarly to the previous example, the SLSQP required the fewest evaluations until convergence. Figure 7 shows the distribution of the absolute value of the sound pressure, i.e. |u|, on the boundary for both the initial and optimised shapes. In addition, Figure 8 shows the distribution of |u| on the middle cross section, i.e. y = 0.5; the drawing range was selected as $-0.5 \le x \le 1.5$ and $-0.5 \le z \le 3.5$. The peak of |u| is not at the observation point $z_1$, but the created shape is reasonable in the sense that it looks like a parabolic antenna. It should be noted that the results in Figures 7 and 8 are those of the SLSQP, but almost the same results were obtained by both IP and MMA.
By considering the evaluation counts in Figure 6 as well as Figure 4, we will use the SLSQP only in the following examples.
Example 2: Resonator
As the second example, we attempt to catch a sound with a bowl. Specifically, as illustrated in Figure 9 (left), we consider a cubic scatterer (whose dimensions are 3 × 3 × 3) with a hollow (1 × 1 × 2). Then, we optimise the shape of the hollow so that we can increase the sound pressure inside it. We give two types of incident fields, i.e. $u^{\mathrm{in}}(x) = e^{-\mathrm{i}kz}$ and $e^{-\mathrm{i}kx}$, which represent the planewaves propagating in the $-z$ and $-x$ directions, respectively. Here, k = 3 is assumed.
The bowl is modelled with 46 NURBS surfaces of degree 2, which are distinguished by different colours in Figure 9 (left). The whole surface of the bowl includes 282 CPs, which are displayed as the points (in blue) in the same figure. In addition, we increase the number N of CPs from 282 to 1314 by knot insertion in every boundary element analysis.
We optimise the shape of the hollow, preserving the initial square shape of both the top aperture and the bottom surface. To this end, we choose 32 CPs on the side walls of the hollow; 20 of them, on the back side, are shown as red points in Figure 9 (right). We allow each CP to move by up to a certain value from its initial position. The value is selected as 0.15, which is less than half of the minimum distance (i.e. 0.4) between any two CPs at the initial configuration, so that no CP touches another CP or any of the observation points.
Figure 9: The initial shape of the resonator model in Section 4.3 (left: entire model; right: posterior half). The points in grey represent the three observation points $z_1$, $z_2$ and $z_3$ in the hollow. The red points represent the CPs that are designed, while the blue ones represent the fixed CPs; the CPs on the left, back and bottom surfaces are also fixed.

Figure 10 shows the history of J for both incident fields. We could increase J in both cases. The value of J increased monotonically from $3.19 \times 10^{-1}$ and then converged to 2.22 after seven counts in the case of the vertical ($-z$-direction) incidence, while it oscillated significantly but increased gradually from $4.13 \times 10^{-2}$ to $4.61 \times 10^{1}$ after 60 counts in the case of the horizontal ($-x$-direction) incidence. Figure 11 shows the initial and optimised (final) shapes of the hollow together with the distribution of |u| on them. In both cases, |u| around the observation points was relatively low, but the sound pressure was actually strengthened inside the hollow after each optimisation. In addition, Figure 12 shows the distribution of |u| on the middle cross section of y = 1.5.
Example 3: Bending duct
As the final example, we consider a more complicated model, whose dimensions are 3 × 3 × 5, containing a bending duct. Figure 13 shows the cross section at y = 1.5 to reveal the inside of the model. As illustrated in the figure, we consider 3 × 3 observation points on the plane x = -0.5 near the exit of the duct. We design a part of the top and bottom surfaces of the duct through 30 control points, 20 of which are drawn as red points in the figure. More precisely, we optimise the z coordinates of those CPs. Here, every coordinate can be changed from its initial value by up to 0.2, which takes account of the aforementioned consideration that no CP should collide with the others during the optimisation. The incident planewave is given from the +x side, and its wavenumber is selected as 1, 2, 3 or 5 for comparison.
As shown in Figure 14, the optimisation was successfully terminated for every wavenumber k. The value of J was largest at the second largest wavenumber, k = 3. This can be related to the standing wave in the vertical direction excited in the duct, which can be observed in Figure 15, but we did not pursue the reason further.
Conclusion
Exploiting our previous work [30] on the development of the isogeometric boundary element method (IGBEM) for the 3D Helmholtz equation and the sensitivity analysis based on the adjoint variable method (AVM), we have newly proposed a shape optimisation system by integrating the IGBEM and AVM into nonlinear optimisation algorithms, viz. a primal-dual interior-point method (IP), the method of moving asymptotes (MMA) and the sequential least-squares quadratic programming (SLSQP), which are available from the open software packages Ipopt [31] and NLopt [32], respectively. We numerically verified the system in a (parametric) optimisation problem that has an exact solution; the system found the optimal solutions successfully. Then, we applied the system to optimise three models, namely a reflector, a resonator and a bending duct, each of which consists of multiple NURBS surfaces. We could maximise the objective function in every optimisation and found that the SLSQP was the best in the sense that it required the fewest evaluations of the objective function and its gradient, which is the most time-consuming part of an optimisation based on the IGBEM and AVM.
In the future, we are going to enhance our shape optimisation system so that it can directly deal with NURBS models generated by solid modellers such as Rhinoceras and SMlib. This task is technically evident but practically important for giving initial shapes comprising truly curved surfaces. In addition, we are planning to develop a similar shape optimisation software for electromagnetism in 3D from the present one, considering applications to metamaterials and plasmonics. | 2021-05-11T01:16:27.486Z | 2021-05-10T00:00:00.000 | {
"year": 2022,
"sha1": "6ddea10c51ca1392215f48a2078b5bf0df9bf04d",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.cad.2021.103126",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "6ddea10c51ca1392215f48a2078b5bf0df9bf04d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
73470348 | pes2o/s2orc | v3-fos-license | Serum magnesium and calcium levels in relation to ischemic stroke
Objective To determine whether serum magnesium and calcium concentrations are causally associated with ischemic stroke or any of its subtypes using the mendelian randomization approach. Methods Analyses were conducted using summary statistics data for 13 single-nucleotide polymorphisms robustly associated with serum magnesium (n = 6) or serum calcium (n = 7) concentrations. The corresponding data for ischemic stroke were obtained from the MEGASTROKE consortium (34,217 cases and 404,630 noncases). Results In standard mendelian randomization analysis, the odds ratios for each 0.1 mmol/L (about 1 SD) increase in genetically predicted serum magnesium concentrations were 0.78 (95% confidence interval [CI] 0.69–0.89; p = 1.3 × 10−4) for all ischemic stroke, 0.63 (95% CI 0.50–0.80; p = 1.6 × 10−4) for cardioembolic stroke, and 0.60 (95% CI 0.44–0.82; p = 0.001) for large artery stroke; there was no association with small vessel stroke (odds ratio 0.90, 95% CI 0.67–1.20; p = 0.46). Only the association with cardioembolic stroke was robust in sensitivity analyses. There was no association of genetically predicted serum calcium concentrations with all ischemic stroke (per 0.5 mg/dL [about 1 SD] increase in serum calcium: odds ratio 1.03, 95% CI 0.88–1.21) or with any subtype. Conclusions This study found that genetically higher serum magnesium concentrations are associated with a reduced risk of cardioembolic stroke but found no significant association of genetically higher serum calcium concentrations with any ischemic stroke subtype.
Growing evidence indicates that the essential minerals magnesium and calcium may have a role in cardiovascular disease. Magnesium, the second most predominant intracellular cation, can influence the cardiovascular system through vascular tone, blood pressure, endothelial function, platelet aggregation and coagulation, cardiac arrhythmias, and glucose and insulin metabolism. [1][2][3] Calcium, the most abundant mineral in the body, has an essential role in the coagulation system, intracellular signaling, and muscle contraction, but is also associated with some pathologic processes such as carotid artery plaques 4,5 and calcifications. 6 Magnesium and calcium supplementation leads to a rise in blood concentrations of these minerals. [7][8][9] Therefore, any association of circulating magnesium and calcium concentrations with risk of stroke, an enormous public health problem, can have important public health and clinical implications. Findings from observational epidemiologic studies indicate that low serum magnesium concentrations [10][11][12] and slightly elevated serum calcium concentrations 13,14 are associated with increased risk of stroke. Limited data from randomized controlled trials further indicate that calcium supplementation might increase stroke risk. 4 However, given the observational design of the majority of available studies on magnesium and calcium in relation to risk of stroke, it is uncertain whether the observed associations are causal and independent of other risk factors, and not biased by reverse causation.
Mendelian randomization (MR) is a genetic epidemiologic method that exploits genetic variants influencing the modifiable exposure of interest as unbiased proxies for the exposure to infer causality. 5,15 This method has been utilized to demonstrate that serum magnesium 16 and serum calcium 17 concentrations are associated with respectively decreased and increased risk of coronary artery disease, but has not been used to determine whether circulating levels of these minerals are associated with risk of ischemic stroke. We applied a 2-sample MR approach to investigate whether serum magnesium and calcium concentrations are causally associated with ischemic stroke as a whole or any of its main subtypes.
Single nucleotide polymorphism selection and data sources
We selected all single nucleotide polymorphisms (SNPs) associated with serum magnesium or calcium concentrations at genome-wide significance (p < 5 × 10−8) in the largest published genome-wide association studies (GWAS) on these minerals. 18,19

Glossary: CI = confidence interval; GWAS = genome-wide association study; MR = mendelian randomization; OR = odds ratio; SIMEX = simulation extrapolation; SNP = single nucleotide polymorphism.

The GWAS on serum magnesium identified 6 significant and independent (i.e., not in linkage disequilibrium) SNPs, explaining 1.6% of the variance in serum magnesium concentrations, in the joint analysis of the discovery and replication cohorts including 23,829 individuals of European ancestry. 18 The GWAS on serum calcium identified 7 replicated and independent SNPs, explaining 0.9% of the variance in serum calcium concentrations, in up to 61,079 individuals of European ancestry. 19 From the MEGASTROKE consortium, 20 we obtained summary statistics data for stroke for the 13 SNPs. To reduce potential bias caused by population stratification, we restricted the stroke dataset to individuals of European ancestry. Thus, our analyses included data from up to 404,630 noncases and 34,217 ischemic stroke cases, subtyped into cardioembolic stroke (n = 7,193), large artery stroke (n = 4,373), and small vessel stroke (n = 5,386). Stroke subtypes were classified according to the Trial of Org 10172 in Acute Stroke Treatment criteria. 21

Standard protocol approvals, registrations, and patient consents

Each study included in the GWAS used in the present study was approved by an institutional review board, and all participants had provided informed consent.
Statistical analysis
The primary analyses were conducted using the inverse-variance weighted method (hereafter referred to as standard MR analysis), which gives accurate estimates if all SNPs satisfy the instrumental variable assumptions (data available from Open Science Framework, figure e-1, osf.io/b57sq/). 22 In sensitivity analyses, we used other MR approaches, including the following: (1) the weighted median method, which provides consistent estimates if at least 50% of the weight in the analysis comes from valid instrumental variables 22; (2) the heterogeneity-penalized model averaging method, which gives consistent estimates if a plurality of the instrumental variables are valid 23; and (3) the MR-Egger method, which can detect and adjust for pleiotropy. 22,24 The MR-Egger analysis is prone to regression dilution bias. The degree of dilution bias was assessed with the I2GX statistic. 25 I2GX values below 0.9 were considered to indicate substantial dilution, and the simulation extrapolation (SIMEX) method was used to adjust the estimates for dilution bias. 25 The MR-PRESSO method was used to detect potential outliers. 26 Moreover, we conducted sensitivity analyses excluding SNPs with pleiotropic associations with possible confounders or intermediates of the exposure-stroke relationship.
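For reference, the fixed-effect IVW estimate used in the standard MR analysis can be computed from summary statistics alone. The sketch below is our own and uses hypothetical inputs: the per-SNP associations with the exposure, the associations with the outcome on the log-odds scale, and the standard errors of the latter.

import numpy as np

def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    # Inverse-variance weighted estimate: equivalent to regressing the
    # SNP-outcome associations on the SNP-exposure associations through the
    # origin, weighted by 1 / se_outcome^2 (fixed-effect model).
    bx = np.asarray(beta_exposure, dtype=float)
    by = np.asarray(beta_outcome, dtype=float)
    w = 1.0 / np.asarray(se_outcome, dtype=float) ** 2
    beta = np.sum(w * bx * by) / np.sum(w * bx ** 2)
    se = 1.0 / np.sqrt(np.sum(w * bx ** 2))
    # Odds ratio per 1-unit increase in the exposure, with a 95% CI:
    odds = (np.exp(beta), np.exp(beta - 1.96 * se), np.exp(beta + 1.96 * se))
    return beta, se, odds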
Odds ratios (ORs) were scaled per 0.1 mmol/L (about 1 SD) increase in serum magnesium concentrations and per 0.5 mg/dL (about 1 SD) increase in serum calcium concentrations. A Bonferroni-corrected level of significance of less than 0.006 (correcting for 2 exposures and 4 outcomes) was considered statistically significant. Associations of the 13 individual SNPs with the 4 outcomes were considered statistically significant at p values of less than 9.6 × 10−4. The analyses were conducted using Stata software (StataCorp, College Station, TX) and the MendelianRandomization package 27 for R. Statistical power was calculated using the method proposed by Brion et al. 28

Data availability

All data generated or analyzed during this study are included in the main manuscript and its supplementary information files.
Statistical power
We had 100% power to detect an OR of any ischemic stroke of 0.80 for serum magnesium levels and 1.25 for serum calcium levels. The statistical power in analyses of ischemic stroke subtypes is shown in data available from Open Science Framework (table e-1, osf.io/b57sq/).
The I2GX value from the MR-Egger analysis was 0.87, indicating 13% dilution of the estimates. The MR-Egger analysis, with adjustment for dilution bias using the SIMEX method, provided imprecise estimates (data available from Open Science Framework, table e-3, osf.io/b57sq/). In this analysis, genetically predicted serum magnesium concentrations were associated with cardioembolic stroke, but the CI included the null (OR 0.66, 95% CI 0.21-2.10); there was no evidence of directional pleiotropy (data available from Open Science Framework, table e-3). In contrast, directional pleiotropy was detected in the analysis of large artery stroke, and this was not explained by any single SNP (data available from Open Science Framework, table e-3).
The MR-PRESSO analysis identified potential outlying SNPs (at p < 0.10), which varied for different subtypes (data available from Open Science Framework, table e-4, osf.io/ b57sq/). The association of genetically predicted serum magnesium concentration with cardioembolic stroke persisted after exclusion of the outlier in TRPM6 (OR 0.56, 95% CI 0.43-0.73). The association also remained after exclusion of 2 SNPs associated with estimated glomerular filtration rate and 1 SNP associated with blood pressure and serum urate levels, but was attenuated after omitting 2 SNPs associated with atrial fibrillation (OR 0.73, 95% CI 0.52-1.03) (data available from Open Science Framework, table e-5).
Serum calcium
None of the calcium-associated SNPs was statistically significantly associated with ischemic stroke as a whole or any subtype (data available from Open Science Framework, table e-6 and figure e-2, osf.io/b57sq/). There were no associations between genetically predicted serum calcium concentrations and any stroke outcome in the standard MR analysis (figure 2). The OR of all ischemic stroke per genetically predicted 0.5 mg/dL (about 1 SD) increase in serum calcium concentrations was 1.03 (95% CI 0.88-1.21; p = 0.68). The lack of association remained in sensitivity analyses (figure 2), and there was no evidence of directional pleiotropy in the MR-Egger analysis (data available from Open Science Framework, table e-7, osf.io/b57sq/). The I2GX value was 0.96, indicating no significant dilution bias in the MR-Egger analysis. No associations of genetically predicted serum calcium concentrations with any stroke outcome were observed after exclusion of the SNP in GCKR, which has pleiotropic associations with potential confounders (e.g., blood lipids and type 2 diabetes) (data available from Open Science Framework, table e-8). 17 No outliers were identified in the MR-PRESSO analysis.
Discussion
Findings of this MR study showed a consistent association between genetically higher serum magnesium concentrations and reduced risk of cardioembolic stroke but not other subtypes. Genetically predicted serum calcium concentrations were not associated with any ischemic stroke subtype or with ischemic stroke overall.
Although several observational prospective studies have reported that low circulating magnesium concentrations [10][11][12] and low magnesium intake 29 are associated with increased risk of stroke, data on ischemic stroke subtypes are scarce. 12 In the Nurses' Health Study, low plasma magnesium concentrations (<0.82 mmol/L) were associated with an approximately 70% to 80% increased risk of embolic and thrombotic stroke, 12 supporting our findings. Previous observational studies were limited by possible residual confounding because low magnesium concentrations and magnesium intake are correlated with potential risk factors for stroke. [10][11][12] Magnesium may in part reduce the risk of cardioembolic stroke through its antiarrhythmic effects 1,3 and via atrial fibrillation. Low serum magnesium concentrations are associated with increased risk of atrial fibrillation, 30,31 which is a strong risk factor for cardioembolic stroke. Two of the magnesium-associated SNPs were significantly associated with atrial fibrillation, including the SNPs in the MUC1 (p = 0.02) and SHROOM3 (p = 2.4 × 10 −4 ) genes, with the allele associated with higher serum magnesium concentrations being associated with lower risk of atrial fibrillation. 32 The association between genetically predicted serum magnesium concentrations and cardioembolic stroke was attenuated after exclusion of those 2 SNPs, suggesting that the association may partly be mediated by atrial fibrillation.
Magnesium also has anticoagulant and antiplatelet properties. 1,3 Magnesium is considered to be nature's calcium blocker as it suppresses many of the physiologic actions of calcium. 1,3 For example, calcium promotes blood coagulation, whereas magnesium suppresses blood clotting and thrombus formation and reduces platelet aggregation, the synthesis of platelet agonist thromboxane A2, von Willebrand factor binding to collagen, and thrombin-stimulated calcium influx. 1,3,33-35 Antithrombotic effects may lead to reduction in risk of both cardioembolic and large artery stroke. A significant association between genetically predicted serum magnesium concentrations and large artery stroke was observed in the standard MR analysis, but this association did not persist in sensitivity analyses.
Other possible mechanisms whereby high serum magnesium concentrations may reduce ischemic stroke risk include improvement of endothelial function 36,37 and reduction in blood pressure, 36,38 atherosclerotic calcification, 39 arterial stiffness, 40 oxidative stress, 41 fasting glucose concentration, 38 insulin resistance, 42 and risk of type 2 diabetes. 43,44 Some of those beneficial effects may also lead to a reduction in small vessel stroke, which was not observed in this study.
The MR design has not been previously used to determine the association between serum calcium concentration and risk of ischemic stroke, but a few observational prospective studies have examined the association between serum calcium concentrations and risk of stroke. 13,14 In a cohort of about 440,000 Swedish adults, high (≥2.40 mmol/L) vs low (<2.25 mmol/L) serum calcium concentrations were associated with a 12% increased risk of incident ischemic stroke and with a 40% increased risk of fatal ischemic stroke. 14 Another cohort of 13,288 US adults showed a 16% increase in risk of total stroke per 1-SD increase in serum calcium concentrations. 13 The association of genetically predicted serum calcium concentration with cardioembolic stroke in the present study was of a similar magnitude, though nonsignificant, to the association with stroke in previous observational studies 13,14 and with coronary artery disease in a previous MR study (OR 1.25, 95% CI 1.08-1.45). 17 The estimates for serum calcium and cardioembolic stroke are also similar to those for calcium supplementation and stroke from a meta-analysis of 8 randomized controlled trials (relative risk 1.15, 95% CI 1.00-1.32; p = 0.06). 4 It is unclear why genetically predicted serum calcium concentrations were not associated with large artery stroke, which, like coronary heart disease, is related to atherosclerosis. A possibility is that we may have overlooked an association because of low power (data available from Open Science Framework, table e-1, osf.io/b57sq/).
A major strength of this MR study is that biases that can be of concern in conventional observational studies were avoided. Other important strengths are the large number of cases of ischemic stroke and that associations with ischemic stroke subtypes could be investigated.
A limitation is that statistical power was low in the analyses of ischemic stroke subtypes. The power was particularly low in the analyses of calcium because the SNPs explained only a small proportion of the variance (0.9%) in serum calcium levels. Hence, we cannot rule out that we may have overlooked weak associations between genetically predicted serum calcium concentrations and ischemic stroke subtypes. Another shortcoming is that the biological functions of several of the genetic loci associated with serum magnesium and calcium levels are unknown (data available from Open Science Framework, table e-9, osf.io/b57sq/). The reliability of MR results relies on 3 main assumptions (data available from Open Science Framework, figure e-1, osf.io/b57sq/), which can be violated by population stratification, canalization, and pleiotropy. Population stratification was minimized because we restricted the study populations to European-descent individuals. We could not directly test whether canalization may have influenced the results. Canalization refers to compensatory processes during development that alleviate the genetic effect. Such feedback mechanisms would bias the results toward the null and cannot explain the observed association between serum magnesium concentration and cardioembolic stroke. Pleiotropy occurs when a genetic variant is associated with more than one phenotype. We conducted several sensitivity analyses to explore and adjust for pleiotropy. The association of genetically predicted serum magnesium concentrations with cardioembolic stroke, but not with the other subtypes or overall stroke, was robust in these sensitivity analyses, and the MR-Egger analysis provided no evidence of directional pleiotropy.
This study found evidence that genetically higher serum magnesium concentrations may be associated with a reduced risk of cardioembolic stroke. Genetically higher serum calcium concentrations were not associated with ischemic stroke, but the existence of an effect of low magnitude cannot be ruled out.
Author contributions Susanna C. Larsson designed the study, performed the statistical analyses, wrote the first draft of the manuscript, and drew the figures. Susanna C. Larsson is the corresponding author and takes responsibility for the accuracy of the analysis and had authority over manuscript preparation and the decision to submit the manuscript for publication. Matthew Traylor reviewed and commented on the manuscript. Stephen Burgess reviewed and commented on the manuscript. Giorgio B. Boncoraglio reviewed and commented on the manuscript. Christina Jern reviewed and commented on the manuscript. Karl Michaëlsson reviewed and commented on the manuscript. Hugh S. Markus reviewed and commented on the manuscript. | 2019-03-11T17:21:57.764Z | 2019-01-25T00:00:00.000 | {
"year": 2019,
"sha1": "add77bc74cc791c1f1eb67e6a0015161d80f2507",
"oa_license": "CCBY",
"oa_url": "https://n.neurology.org/content/neurology/92/9/e944.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "add77bc74cc791c1f1eb67e6a0015161d80f2507",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
26583861 | pes2o/s2orc | v3-fos-license | Pulmonary hypoplasia with hepatic and renal anomalies in a dead fetus
Pulmonary hypoplasia is a developmental malformation characterized by incomplete development of lung tissue. During routine autopsy, an apparently normal female dead fetus of 36 weeks gestation presented a completely hypoplastic left lung, a partially hypoplastic right lung, a right-sided shift of the heart, a right-sided shift of the trachea, and a left-sided diaphragmatic hernia through which an extra lobe from the left lobe of the liver extended into the left half of the thoracic cavity. The left kidney was iliac in position.
INTRODUCTION

Pulmonary hypoplasia (PH) is a developmental abnormality of the lung with a decrease in the size and weight of the lung. [3] With the advancement of medical knowledge of other pulmonary conditions, there is a gradual decrease in neonatal morbidity and mortality. Immaturity in the development of the respiratory passages and lung can lead to an increased prevalence of respiratory illnesses from the neonatal to the senile age groups, which are clinically challenging to neonatologists, pediatricians, chest physicians and geriatricians alike. PH has been described as the most common anomaly in infants who die in the neonatal period. PH can result from several causes, and it can often be suspected on the basis of previous obstetric history or ultrasound findings during the current pregnancy. [4] Appropriate medical or surgical intervention is required to treat infants with PH.

CASE REPORT

The present case is a 36 weeks spontaneously delivered stillborn female fetus weighing 1.7 kg with a 32 cm crown-rump length (CRL), born to a second gravida with a previous obstetric history of a stillborn baby. During routine thoracic and abdominal autopsy, the fetus presented a left-sided diaphragmatic hernia of 1 inch diameter. In the thoracic cavity, a right-sided shift of the heart and trachea and a grossly reduced left lung compared to the right lung were observed [Figures 1 and 2]. The trachea and its bifurcation into the principal bronchi were normal [Figure 2], with a clearly visible lumen. Beyond the principal bronchi, the bronchial tree could not be differentiated.

The left lung was thin, quadrilateral in shape, without lobation and nearly 1/10th of the normal size for the gestational age. The right lung presented three lobes, but the middle lobe and part of the upper lobe were thin and fibrosed [Figure 2]. On microscopic examination, the entire left lung and the middle lobe and the lower part of the upper lobe of the right lung contained a small amount of lung tissue and more connective tissue. The microscopic appearance of the left lung and the fibrosed parts of the right lung corresponded to the pseudoglandular stage of lung development, with undifferentiated intrapulmonary bronchi and alveoli [Figures 3a-b]. The rest of the right lung presented the terminal sac stage of development, which is normal for the gestational age.

In the abdominal cavity, a third hepatic lobe [Figures 2 and 4] in the form of a stalk of 5.0 cm length extended from the left lobe of the liver into the thoracic cavity, passing through the diaphragmatic defect, with a terminal circular dilatation of 2.0 cm width. The terminal dilatation of the third lobe projecting into the left thoracic cavity presented normal histological features of the liver, while the stalk connecting it with the liver presented fibrous tissue. The left kidney was iliac in position, while the right kidney was in the normal position.
DISCUSSION
According to Sultana et al., [5] PH occurs in all cases of congenital diaphragmatic hernia (CDH); they reported an incidence of 7.8% based on 165 newborn fetal autopsies. According to Harrison et al., [6] the incidence of CDH is 1 in 2200 live births, with a 50% mortality rate in prenatally diagnosed cases. The worldwide incidence of PH is approximately 13%, with a range of 9-28% in the United States. [3] The incidence of PH in live births is 1 in every 10000-12000 (Moore et al., 1992). The reported incidence of varying degrees of PH in neonatal autopsies varies from 7 to 26%. [7] If unilateral, the left lung is more commonly involved than the right lung. Development of the bronchial tree takes place at about the 26th to 31st day of intrauterine life. [2,8] Boyden categorized maldevelopment of the lung into three groups: Group 1: Agenesis, in which there is complete absence of lung tissue. Group 2: Aplasia, with a rudimentary bronchus and without lung tissue. Group 3: Hypoplasia, with normal pulmonary tissue that is under-developed.
Monaldi classified maldevelopment of the lung into four groups. [2] They are Group 1: No bifurcation of the trachea. Group 2: Only a rudimentary main bronchus. Group 3: Incomplete development after division of the main bronchus. Group 4: Incomplete development of the subsequent bronchi and a small segment of the corresponding lobe.
In the present case, the right lung presented the features of Group 3 of the Boyden and Group 3 of the Monaldi classification, while the left lung presented Group 3 of Boyden and Group 4 of Monaldi. Hypoplasia of the lung may be regarded as primary (idiopathic) or secondary. Primary hypoplasia occurs without any identifiable cause. Secondary PH is frequently associated with adverse intrauterine influences that cause fetal lung compression, which may be intrathoracic or extrathoracic. [9] The intrathoracic causes are diaphragmatic defects and intrathoracic tumors, while the extrathoracic causes are oligohydramnios, renal agenesis or chronic elevation of a hemidiaphragm. [3] The cause of PH in the present case is secondary hypoplasia due to the diaphragmatic hernia. An important finding in association with PH is its coincidence with other congenital anomalies. Anencephaly, diaphragmatic hernia, cardiac lesions, hepatic anomalies, abnormalities of the thumb, deformities of the thoracic spine, urinary tract abnormalities, renal anomalies and pleural effusions have been reported, and these associated anomalies may be due to their close proximity. [10] The basis for variations in the presenting features and in the morphological and histological findings may be related to the severity and cause of the hypoplasia as well as to the timing of the etiologic events that led to the anomaly. [4] According to Areechon and Reid, [1] the size of the diaphragmatic hernia and the maturity of the fetus affect alveolar development. In the present case, branching of the intrapulmonary conducting system was arrested due to the diaphragmatic hernia, and the alveolar passages remained in the pseudoglandular phase with an increased amount of connective tissue. Because of the diaphragmatic hernia, the pressure of the developing extra lobe and the intrathoracic pressure changes, the branching of the intrapulmonary bronchi and the alveolar development were arrested between 7 and 16 weeks of intrauterine life. The cause of death in the present case could be the immaturity of the lung, which is not compatible with survival.
CONCLUSION
In the present case, the histological finding of the pseudoglandular phase, which corresponds to lung development before 16 weeks, suggests that a correction of the congenital diaphragmatic hernia may increase the chances of aeration and growth of the hypoplastic lung and of normal alveolar development. PH due to congenital diaphragmatic hernia (CDH) can be prevented by prenatal diagnosis with ultrasonography and by fetoscopic tracheal occlusion to treat the CDH. This type of abnormality can also be prevented by adopting a few preventive measures, such as preterm delivery before 32 weeks followed by postnatal ventilator resuscitation and surgical intervention to close the diaphragmatic defect and inflate the lungs. This procedure, if successful, will decrease the chances of PH, as the size of the diaphragmatic defect and the maturity of the fetus both affect lung development. The association of a left-sided unascended kidney with CDH and PH has not been reported in the literature.
Figure 1: Showing the empty left thoracic cavity with left lung hypoplasia
Figure 2: Showing the thoracic cavity after removal of the heart and the left-sided diaphragmatic hernia through which an accessory left lobe entered the thoracic cavity
Figure 3: (a) and (b) - fibrosed parts of the left lung corresponding to the pseudoglandular stage of lung development with undifferentiated intrapulmonary bronchi and alveoli
Figure 4: Showing the liver with an accessory left lobe | 2018-04-03T00:46:52.786Z | 2012-04-01T00:00:00.000 | {
"year": 2012,
"sha1": "04af656fd71611b8505c784cc5c6f2b015e6fb17",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/0970-2113.95329",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "25df0591c16e085223b9948cefda4071b834089a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
134455959 | pes2o/s2orc | v3-fos-license | Estimating Mangrove Forest Volume Using Terrestrial Laser Scanning and UAV-Derived Structure-from-Motion
Mangroves provide a variety of ecosystem services, which can be related to their structural complexity and ability to store carbon in the above ground biomass (AGB). Quantifying AGB in mangroves has traditionally been conducted using destructive, time-consuming, and costly methods, however, Structure-from-Motion Multi-View Stereo (SfM-MVS) combined with unmanned aerial vehicle (UAV) imagery may provide an alternative. Here, we compared the ability of SfM-MVS with terrestrial laser scanning (TLS) to capture forest structure and volume in three mangrove sites of differing stand age and species composition. We describe forest structure in terms of point density, while forest volume is estimated as a proxy for AGB using the surface differencing method. In general, SfM-MVS poorly captured mangrove forest structure, but was efficient in capturing the canopy height for volume estimations. The differences in volume estimations between TLS and SfM-MVS were higher in the juvenile age site (42.95%) than the mixed (28.23%) or mature (12.72%) age sites, with a higher stem density affecting point capture in both methods. These results can be used to inform non-destructive, cost-effective, and timely assessments of forest structure or AGB in mangroves in the future.
Introduction
Mangrove forests are woody vegetation growing predominantly in tropical regions in more than 100 countries and territories worldwide [1]. Located at the interface between land and sea, mangroves provide dozens of ecosystem services ranging from local to global scales, including shoreline stabilization [2], storm protection [3], and habitat [4]. The ability of mangroves to sequester carbon is impressive, with estimates of 1023 Mg carbon stored per hectare [5], which is more efficient per unit area than any other tropical forest type [6]. However, mangroves are being lost at an alarming rate due to land use change, deforestation, and the impacts of climate change [7][8][9], thereby converting potential carbon sinks into sources [10]. Quantifying carbon storage is essential to retain the ecological and economical integrity of these ecosystems [8,11,12], by highlighting their potential to mitigate climate change and ensuring their future conservation [13].
The estimation of above-ground biomass (AGB) is an important indicator for carbon storage, as it represents a lower bound for the total biomass within an ecosystem [14]. Mangroves can vary considerably in AGB, from 500 Mg/ha in riverine areas to <8 Mg/ha for dwarf mangrove stands [15]. Traditional AGB assessments utilise direct harvesting methods, which are accurate, but time-consuming, costly, and destructive [16,17], and prone to subjective errors [18]. The measurement of forest structure, which refers to the horizontal and vertical arrangement of physical attributes, such as trunks and branches [19], has been used to develop less-destructive estimates of AGB. Structural attributes can be used to estimate carbon storage by developing species-specific allometric equations, which establish a relationship between measurable structural attributes and biomass.
Study Area
Noosa Heads is a small coastal town located roughly 100 km north of Brisbane, Queensland. The region is sub-tropical, experiencing ~1400 mm precipitation per year, mostly during November to March [48]. The town is bounded to the north by the Noosa River, a complex river system that originates near Mount Elliot and extends ~74 km south-east before entering the Pacific Ocean between Noosa Heads and the southern tip of the Great Sandy National Park (Figure 1). The study area is located within the Weyba Creek Conservation Park between Lake Weyba and the Noosa River estuary (Figure 1).
Survey Sites
Three 25 × 25 m sites were delineated on 15 August 2018 to capture the structural attributes associated with increasing stand age and included: (a) 'Juveniles': Mostly seedlings and juveniles <1.37 m tall with some mature individuals; (b) 'mixed': Individuals of varying age; and (c) 'mature': Mostly mature age individuals >2.5 m (Figure 1). Field characterization was conducted using an inclinometer to obtain tree heights, and mangrove species were identified using descriptions from [49]. The juvenile site was dominated by a dense population of juvenile Aegiceras corniculatum (River Mangrove) ranging from 0.7 to 1.5 m tall with scattered Avicennia marina var. australasica (Grey Mangrove) at around 3.5 m tall. The mixed site contained the same species, but was less dense, with larger spacings between individual A. marina var. australasica trees and scattered juvenile A. corniculatum shrubs growing at similar heights as the juvenile site. This site also had a thick matting of Sporobolus virginicus (Saltcouch) covering the ground surface. The mature site was dominated by mature age A. marina var. australasica trees growing up to around 8 m tall with a dense canopy and ground surface covered by pneumatophore roots.
Field Surveys
A CHC X91+ Real-Time-Kinematic GNSS (RTK-GNSS), which has a horizontal and vertical precision of up to 8 mm and 15 mm, respectively [50], was used to obtain 45 topographic points (Figure 1). From these points, 11 ground control points (GCPs) were chosen near the borders of each site, and these positions were flagged using black and white markers, so they were visible in the subsequent UAV survey (Figure 1).
A consumer-grade DJI Phantom 4 Advanced quadcopter was used to obtain 591 overlapping images of the entire study site. The drone was mounted with a 1" CMOS (complementary metal-oxide semiconductor) sensor, with an effective pixel size of 20 megapixels, and images were taken at a resolution of 5472 × 3648 in the RGB (red-green-blue) spectrum using the FC6310 camera (8.8 mm) [51]. The Ground Station Pro App (DJI, version 2.0) was used to plan the flight, which utilised an orthogonal flight path and image overlap of 90% forward and 60% side overlap (Figure 2). The flight was conducted at an average of 51.6 m above ground level, which resulted in a ground sampling distance (GSD) of 1.38 cm per pixel over an area of 0.07 km². We used automatic take-off and landing for the flight, which took 20 min and was conducted at 0830 h Australian Eastern Standard Time (AEST) on 30 August 2018, with fine weather and light winds.
We used a Faro Focus 3D S 120 TLS scanner, which projects constant infrared waves of differing length outward until they contact an object and are reflected back. Distance is measured by detecting phase shifts in the infrared waves, and Cartesian coordinates (x, y, z) are recorded. The ranging unit is capable of capturing points up to 120 m away, with a ranging error of ±2 mm within 25 m using the factory defined settings. We used a multi-scan approach to capture the forest structure in each site (Figure 3a-c). To co-register individual scans into a single point cloud, we used a minimum of three spheres or checkerboard targets in each scan. Targets were placed in open areas to allow the best possible visibility (Figure 3d,e). The number of scans required per site was determined in the field, which was dependent on the characteristics of the site and the visibility of the targets. For example, seven scans were required in the juvenile site, where dense understory vegetation blocked direct sight between some targets, whereas the mature site only required six scans as targets were easily visible. All scans were conducted in fine weather, however, when scanning the juvenile site, wind gusts up to 19 knots were recorded nearby.
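The phase-shift ranging principle mentioned above can be illustrated with the textbook single-modulation-frequency formula, in which range is proportional to the measured phase shift; the modulation frequency below is an assumed illustrative value, not the Faro instrument's actual (multi-frequency) scheme:

    import math

    C = 299_792_458.0  # speed of light, m/s

    def phase_shift_range(delta_phi_rad: float, f_mod_hz: float) -> float:
        """Single-frequency phase-shift range: d = (delta_phi / 2*pi) * (lambda / 2).

        The division by 2 accounts for the out-and-back travel of the beam.
        """
        wavelength_m = C / f_mod_hz
        return (delta_phi_rad / (2.0 * math.pi)) * (wavelength_m / 2.0)

    # e.g., a quarter-cycle phase shift at an assumed 1.2 MHz modulation frequency
    print(phase_shift_range(math.pi / 2, 1.2e6))  # ~31.2 m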
Data Processing
The UAV and TLS field surveys were uploaded to the relevant software to process the point cloud data (Figure 4).
The UAV data was uploaded into Agisoft Photoscan Professional (version 1.4.3, 64-bit) and was batch processed at 'high quality'. The model used all 591 images to create a dense point cloud using automatic mesh reconstruction. The model was georeferenced in PhotoScan to the Geocentric Datum of Australia (GDA) 1994 Map Grid of Australia (MGA) Zone 56 using the 11 GCPs, and the batch process was re-run after camera optimization (non-linear corrections). The TLS data was imported into FARO Scene (version 5.3) for pre-processing and point cloud registration. Target spheres were identified in the hemispherical photos and given the corresponding Universal Transverse Mercator (UTM) coordinates in the GDA 94 MGA Zone 56 reference system, which were obtained by offsetting from GCPs surveyed with the RTK-GNSS. Checkerboard targets were used to co-register individual scans into a single point cloud for each site using the default origin (0, 0, 0) in Faro SCENE (v5.3). The TLS point clouds were clipped to a 625 m² area using a virtual bounding box and both the SfM-MVS and TLS point clouds were imported into CloudCompare (v2.10.alpha, 64-bit) for rasterization.
The point clouds were segmented using a single transect across each site for both methods to examine profile point densities and canopy penetration. All point clouds were converted to a digital surface model (DSM) using the 'Rasterize' tool in CloudCompare using the maximum z values and a cell resolution of 3 cm², with eight DSMs created in total. Two DSMs were created from the SfM-MVS survey that spanned the entire study site, and six DSMs were created from the TLS survey that were confined to the 625 m² area of each mangrove site. Half of the created DSMs had empty cells interpolated using the average height value, while the other half were not interpolated. This allowed an estimation of coverage by dividing the empty cell DSM by the interpolated DSM and multiplying by 100.
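A minimal sketch of that coverage calculation, interpreting the ratio as the count of data-carrying cells in the raw DSM over the count of cells in the interpolated DSM; the arrays and values are illustrative:

    import numpy as np

    def coverage_percent(dsm_raw: np.ndarray, dsm_interp: np.ndarray) -> float:
        """Percent cell coverage: cells with data in the raw DSM divided by
        cells in the fully interpolated DSM, times 100."""
        filled = np.count_nonzero(~np.isnan(dsm_raw))
        total = np.count_nonzero(~np.isnan(dsm_interp))
        return 100.0 * filled / total

    # toy 3x3 grid with two empty (NaN) cells -> ~77.8% coverage
    raw = np.array([[1.0, np.nan, 2.0], [3.0, 4.0, np.nan], [5.0, 6.0, 7.0]])
    interp = np.nan_to_num(raw, nan=float(raw[~np.isnan(raw)].mean()))
    print(coverage_percent(raw, interp))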
Volume Calculation
A volumetric or surface differencing approach was used to estimate mangrove forest volume, which involves subtracting a digital terrain model (DTM) from a DSM to obtain a canopy height model (CHM), thereby normalizing object heights above ground [52]. Forest volume, which includes all pixels associated with aboveground vegetation, can then be estimated by multiplying the canopy height value of a raster cell by its resolution [53,54]. A DTM was created in ArcMap (version 10.6) by interpolating all 45 topographic points with the 'Topo to Raster' tool (Figure 5).
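The surface differencing step reduces to a simple per-cell computation, sketched below with toy arrays; the clipping of negative heights mirrors the conditional raster described later that zeroes spurious below-ground values:

    import numpy as np

    def forest_volume(dsm: np.ndarray, dtm: np.ndarray, cell_area_m2: float) -> float:
        """Surface differencing: CHM = DSM - DTM, volume = sum(CHM) * cell area."""
        chm = np.clip(dsm - dtm, 0.0, None)   # canopy height model, no negatives
        return float(np.nansum(chm) * cell_area_m2)

    # toy 2x2 grid of 3 cm cells over a flat 0.5 m terrain
    dsm = np.array([[2.0, 3.5], [0.4, 5.0]])
    dtm = np.full((2, 2), 0.5)
    print(forest_volume(dsm, dtm, cell_area_m2=0.03 * 0.03))  # ~0.0081 m^3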
In ArcMap, the SfM-MVS DSM was projected to the GDA 94 MGA Zone 56 reference system and the Australian Height Datum (AHD). The TLS DSMs were imported with z elevation, but no x or y coordinates, due to a truncation issue with the global coordinates in FARO Scene. Therefore, the TLS DSMs were also projected into the GDA 94 MGA Zone 56 and AHD reference systems, and subsequently georeferenced to the same coordinates as the SfM-MVS DSMs manually in ArcMap. During the rasterisation process, it was noted that some point clouds contained spurious points below ground height, which carried over into the raster model. Therefore, these were corrected using a conditional raster that changed their value to zero. Once the per cell volume was calculated, the sum of all cells contained within each site gave the estimated forest volume in cubic metres (m³). The raster models were converted to point files and the mean absolute error (MAE) for canopy height models and volume estimations were calculated using the equation:

MAE = (1/n) Σ_{j=1}^{n} |y_j − ŷ_j|

where n is the number of observations, y_j is the observed value, and ŷ_j is the predicted value calculated for each observation. The differences in volume were also expressed as a percentage using the equation:

Percentage difference = 100 × |TLS − SfM-MVS| / ((TLS + SfM-MVS)/2)

Finally, the CHMs were reclassified so that only vegetation greater than 1.37 m was displayed, which produced a rough estimate of canopy cover. The SfM-MVS volume raster was subtracted from the TLS volume raster to identify the spatial distribution of differences in forest volume.
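Both error measures are direct to compute; a short sketch with made-up observations:

    import numpy as np

    def mae(observed: np.ndarray, predicted: np.ndarray) -> float:
        """Mean absolute error: (1/n) * sum(|y_j - yhat_j|)."""
        return float(np.mean(np.abs(observed - predicted)))

    def percent_difference(tls: float, sfm: float) -> float:
        """Symmetric percentage difference between two volume estimates."""
        return 100.0 * abs(tls - sfm) / ((tls + sfm) / 2.0)

    tls_heights = np.array([4.2, 3.9, 5.1])   # hypothetical TLS canopy heights (m)
    sfm_heights = np.array([4.0, 4.1, 4.7])   # hypothetical SfM-MVS heights (m)
    print(mae(tls_heights, sfm_heights))      # ~0.27 m
    print(percent_difference(120.0, 90.0))    # ~28.6%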
Point Clouds
The point cloud derived from the SfM-MVS process can be seen in Figure 6. The SfM point matching used 336,195 tie points to create a sparse point cloud, which was increased to 162,196,895 points using MVS over an area of 67,292.19 m², with white pixels indicating areas of no data (Figure 6). The side view of the SfM-MVS revealed spurious low points that were most evident in the eastern side of the study area (Figure 6). A side profile view of the TLS surveys indicates high point density in understory vegetation, with some gaps evident in the canopy, especially in the mature site (Figure 7). Spurious low points were also evident in the TLS scans, especially in the mature site (Figure 7). A comparison of methods in terms of survey and processing time can be seen in Table 1. Survey time was lower for the SfM-MVS than the TLS surveys, with all data collected by the UAV in roughly 20 min (Table 1). The final point clouds for the TLS produced up to ~100 times more points compared to SfM-MVS, with the highest and lowest number of points recorded in the mature and juvenile sites, respectively (Table 1). More scans did not necessarily correspond to more registered points, as the mature site had the most points, but fewest scans (Table 1). Scanning coverage differed between all sites as seen in Figure 8. Fewer empty cells were visible in the mature site compared to the juvenile and mixed sites, resulting in greater overall coverage (Figure 8). The results of the point cloud rasterization and coverage calculations can be seen in Table 2.
The lowest and highest cell coverage occurred for the TLS scans in the juvenile and mature sites, respectively (Table 2). The SfM-MVS method produced less variation in cell coverage compared to the TLS (Table 2). Profiles derived from segments of the point clouds can be seen in Figure 9. Tree stems, foliage, and the ground surface were well represented in the mature site (Figure 9). Point densities appear to decrease within dense vegetation across all sites, including the top of canopy (TOC) and at ground level (Figure 9). Ground points were under-represented using the SfM-MVS method, including areas below dense vegetation or canopy (Figure 9). It was also noted that the TLS registered several points in 'mid-air' that were not captured by the SfM-MVS method, which was most evident in the juvenile and mixed sites (Figure 9).
Forest Volume
The SfM-MVS method produced lower maximum cell heights than TLS for all sites after rasterization ( Table 3).
The average maximum heights were lower for the SfM-MVS method in the juvenile and mixed sites, and higher in the mature site when compared with TLS (Table 3). This corresponded to lower measures of volume in the juvenile and mixed sites, but higher volume in the mature site for SfM-MVS (Table 3). The error calculated between the methods for canopy height and volume estimates can be seen in Table 4. The highest MAEs for both the CHM and volume estimation were observed in the mature site, and the lowest were observed in the mixed site (Table 4). The lowest percentage difference in volume was recorded in the mature site, despite having the greatest error (Table 4). The highest percentage difference in volume was recorded in the juvenile site, with the mixed site also reporting a substantial difference in volume estimation (Table 4). The TLS-derived canopy cover was 20.03% in the juvenile site, 30.91% in the mixed site, and 84.15% in the mature site (Figure 10a-c). Differences in volume between the juvenile, mixed, and mature sites can be seen in Figure 10d-f. The TLS produced greater measures of volume for vegetation below 1.37 m, as indicated by the prevalence of blue cells in those sites (Figure 10d,e). In the mature site, the greatest volume differences generally corresponded to canopy gaps, with the SfM-MVS recording greater heights in these areas compared to the TLS (Figure 10f).
Point Cloud Density and Forest Structure
In general, the UAV point cloud was obtained much faster than the TLS point clouds, and despite having a greater overall processing time, much of this was automated and required little input, which was comparable with other studies [28,46]. The time taken to conduct traditional AGB assessments in mangroves is not often reported, however, survey periods generally extend across several months depending on the size and complexity of the site. This is generally related to the muddy and/or inaccessible terrain and the desire to minimise damage from trampling pneumatophore roots and seedlings [11]. By comparison, the 20-minute UAV flight or multi-day survey conducted with the TLS in this study required substantially less time in the field compared to traditional methods. The final TLS point clouds contained approximately 45 to 100 times more points per 625 m² plot than the SfM-MVS point clouds across all sites. Point density of this magnitude is not uncommon using TLS [26,53], and the lower point density for SfM-MVS is likely linked to the survey settings. For example, flying at a lower altitude or increasing the degree of image overlap can greatly increase point density [55]. However, it should be considered whether the greater point density offered by the TLS is worth spending more time in the field, increasing the potential of trampling damage.
High stem density in the juvenile site likely led to poor cell coverage and capture of forest structure, whereas a low stem density in the mature site allowed greater capture of forest structure. Occlusion is a commonly reported limitation of using TLS in vegetated environments [56], which can lead to highly variable point densities [23,28,57,58]. This was partially controlled for by incorporating multiple scans well within the optimal 25 m radius from the scanner, however, laser pulses were still unable to penetrate fully through dense vegetation. The higher average cell coverage for SfM-MVS was likely due to the high redundancy of overlapping images, which ensures sufficient point matching [46,55], and the high spatial resolution of the imagery. Despite this, the SfM-MVS method only captured the outer canopy envelope, with many studies linking poor canopy penetration to low capture of forest structure and/or underlying terrain [39,[59][60][61]. As expected, the ground-based TLS was better able to capture below canopy forest structure, whereas the aerial-based SfM-MVS was better equipped to survey canopy height, with neither survey type efficient for both.
Canopy Height and Volume Estimations
The higher average maximum canopy height for SfM-MVS in the mature site is likely due to overestimation, which has been found in mangroves [43] and other forested environments [28,39,59,61]. SfM-MVS is generally unable to represent fine scale canopy gaps [62] and smoothing of these gaps during rasterization can lead to overestimations of height [23]. This was observed in the volume difference raster, where greater SfM-MVS volumes tend to correspond to gaps in the canopy cover. Furthermore, the lower maximum cell height recorded by SfM-MVS in this site is not unusual, as this method is known to underestimate the maximum height of individual plants [55]. The higher error recorded in this site, despite having the lowest difference in volume, is likely due to poor TOC representation for the TLS, which had the highest standard deviation for all sites. Occlusion of TOC points in TLS scans can produce erroneous canopy height estimations [63], with a low TOC sample point density corresponding to under-estimations of canopy height in other studies [18]. It is important to note all AGB estimations carry some degree of error. For example, the departure of allometric models, which are generally considered more robust, from measured field data has been reported as high as 49.2% for estimations of AGB [20], and 18.4% for estimations of volume [64] in mangroves, which varies between the species and equations used.
Conversely, the average maximum canopy height was lower for the juvenile and mixed sites, which also had high variation between volume estimations. This was likely a function of the dense understory layer in the juvenile site, as high stem density can lead to poor point capture for both SfM-MVS [65] and TLS [23]. For the TLS measurements, greater recorded heights and subsequent volume may be a function of artificial mid-air points that result from beam interceptions at the canopy edge, which can affect height estimations [28,56]. In this case, beam interceptions may have occurred horizontally, where taller shrubs occluded shorter ones, producing spurious high points. For the SfM-MVS, it is possible that high stem density led to poor image matching, which can lead to underestimations of DSM heights [32]. Alternatively, wind gusts can cause outer branches to move between images, leading to feature rejection during image matching [55], which would explain the poor representation of higher points by SfM-MVS in the juvenile site. Similar effects may have occurred in the mixed site, however, the difference in error and volume were likely reduced due to decreased stem density. For example, SfM-MVS is more likely to match LiDAR data in sparsely vegetated areas [65,66], and error decreases with increasing bare area [67]. Similarly, occlusion is less of an issue for TLS scans in sparsely vegetated environments [58]. The greatest differences between SfM-MVS and TLS in this site occurred in the TOC and at the canopy edge, which suggests occlusion and beam interceptions may have hindered point capture in these dense areas.
Limitations and Future Considerations
Comparison between the canopy cover raster and the volume difference raster highlighted horizontal misalignments between the TLS and SfM-MVS raster models. This was greatest in the juvenile and mixed sites, where canopy edges tend to 'shadow' each other. This is an artefact of the georeferencing process, which had to be performed manually in ArcMap. Horizontal co-registration errors for canopy heights are generally small [68], but vertical co-registration of CHMs and DTMs obtained from different data sources remains a major challenge when estimating canopy heights [55]. In this case, DSMs and DTMs were aligned vertically using the AHD projection, but models were imperfectly aligned horizontally, which may have led to differences in volume estimations. Differences may have also stemmed from spurious high points as mentioned previously. This could likely be resolved by using a point filtering algorithm to remove outliers, or by using a percentile of a cell's elevation rather than the maximum z value, as this can make CHMs less sensitive to outliers [39,66].
The surface differencing method requires an accurate high spatial resolution DTM to normalize canopy heights above ground [23,69]. The DTM created in this study interpolated elevation heights that were taken at the peripheries of each site, since the RTK-GNSS could not obtain a signal below the canopy. This is less of an issue in small level plots, as changes in topography are expected to be minimal [28]. However, any upscaling of this method would require a more representative DTM. The surface differencing method also assumes a constant volume below the height of the CHM [53]. Each site contained mangrove species with different growth forms at differing stem densities. Furthermore, wood density in mangroves is species-specific [11,12], meaning the AGB across each site is not constant and will differ depending on the species composition. Therefore, sites that contain a higher stem density that are dominated by a single species, such as the juvenile site, likely represent a more accurate estimation of volume using this method.
Conclusions
This study compared the ability of SfM-MVS and TLS to capture forest structure and volume in three mangrove sites of differing structural characteristics. For forest structure, SfM-MVS was unable to capture points below the canopy in the same detail offered by TLS, which was clearly superior in terms of point density. For forest volume, SfM-MVS captured the top of the canopy more accurately than the TLS, which suffered from occlusion; however, SfM-MVS tended to overestimate volume in canopy gaps. Furthermore, increasing stem density led to the capture of erroneous points for both SfM-MVS and TLS, suggesting either method may be more appropriate in lower density mangroves.
In summary, we suggest TLS would be more beneficial for capturing mangrove forest structure, whereas SfM-MVS would be better for capturing mangrove canopy height, and subsequent volume. However, the choice between SfM-MVS and TLS depends on the accuracy, spatial scale, and time scale required, with both offering potential advantages and disadvantages. Our results could better inform future work in the conservation of these important ecosystems, given the rapid decline of mangroves globally.
Author Contributions: All work conducted by A.D.W. and J.X.L.
Funding: This research received no external funding. | 2019-04-27T13:13:42.860Z | 2019-04-01T00:00:00.000 | {
"year": 2019,
"sha1": "2c16c2171acbab4a33eec6709a8d1de767435f1b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2504-446X/3/2/32/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "cd154b2441b52fea72f04e6ed6e9d0f8b5b8b39d",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
21407307 | pes2o/s2orc | v3-fos-license | Progressive mitochondrial protein lysine acetylation and heart failure in a model of Friedreich’s ataxia cardiomyopathy
Introduction The childhood heart disease of Friedreich’s Ataxia (FRDA) is characterized by hypertrophy and failure. It is caused by loss of frataxin (FXN), a mitochondrial protein involved in energy homeostasis. FRDA model hearts have increased mitochondrial protein acetylation and impaired sirtuin 3 (SIRT3) deacetylase activity. Protein acetylation is an important regulator of cardiac metabolism and loss of SIRT3 increases susceptibility of the heart to stress-induced cardiac hypertrophy and ischemic injury. The underlying pathophysiology of heart failure in FRDA is unclear. The purpose of this study was to examine in detail the physiologic and acetylation changes of the heart that occur over time in a model of FRDA heart failure. We predicted that increased mitochondrial protein acetylation would be associated with a decrease in heart function in a model of FRDA. Methods A conditional mouse model of FRDA cardiomyopathy with ablation of FXN (FXN KO) in the heart was compared to healthy controls at postnatal days 30, 45 and 65. We evaluated hearts using echocardiography, cardiac catheterization, histology, protein acetylation and expression. Results Acetylation was temporally progressive and paralleled evolution of heart failure in the FXN KO model. Increased acetylation preceded detectable abnormalities in cardiac function and progressed rapidly with age in the FXN KO mouse. Acetylation was also associated with cardiac fibrosis, mitochondrial damage, impaired fat metabolism, and diastolic and systolic dysfunction leading to heart failure. There was a strong inverse correlation between level of protein acetylation and heart function. Conclusion These results demonstrate a close relationship between mitochondrial protein acetylation, physiologic dysfunction and metabolic disruption in FRDA hypertrophic cardiomyopathy and suggest that abnormal acetylation contributes to the pathophysiology of heart disease in FRDA. Mitochondrial protein acetylation may represent a therapeutic target for early intervention.
Introduction
Heart disease and heart failure exert a significant morbidity and mortality burden worldwide. Abnormal metabolism and metabolic remodeling are common pathologic features in heart disease, such as in obesity and diabetes-related cardiomyopathy [1,2], ischemia [3], and heart failure [4,5]. Mitochondrial dysfunction and heart disease are closely linked [6], and because mitochondria are central to regulating cellular energy and fulfilling the high demands of cardiac metabolism, it is expected that mitochondrial dysfunction plays a key pathologic role in abnormal cardiac metabolism [7].
Mitochondrial protein acetylation is an important post-translational regulatory mechanism of heart metabolism that has emerged in recent years. Acetylation in mitochondria is controlled by the NAD + -dependent deacetylase, sirtuin 3 (SIRT3). SIRT3 responds to cellular energy status and utilizes an "acetylation switch" to modify protein function [8] in order to respond quickly to metabolic cues and thus, operates a chief mechanism for maintaining normal mitochondrial function and metabolism [9]. SIRT3 targets enzymes that are typically activated in response to removal of acetyl groups, and which are important to energy generation and utilization, such as very-long-chain, long-chain, and medium-chain acyl-CoA dehydrogenases (VLCAD, LCAD and MCAD) [10,11], pyruvate dehydrogenase (PDH) [12], acetyl CoA synthetase 2 (AceCS2) [13], and electron transport chain (ETC) complexes I-III [14][15][16]. Additionally, SIRT3 appears to deacetylate other important proteins, such as those that provide protection from oxidative damage, e.g., superoxide dismutase 2 (SOD2) [17]. It is worth noting here that enzyme response to acetylation state is an area of continuing investigation. For instance, others have reported an increase in activity in LCAD in response to acetylation, specifically in obesity-related heart disease [18].
Beneficial effects of SIRT3 activity in the heart are well documented. For example, SIRT3 has been shown to protect against oxidative stress, attenuate fatty acid accumulation in the heart [19], and prevent development of stress-induced cardiac hypertrophy [20]. Mice lacking SIRT3 have mitochondrial dysfunction eventually resulting in myocardial energy loss, develop hypertrophy and fibrosis in response to mechanical stress [21], and are more susceptible to the detrimental effects of ischemia/reperfusion injury [22].
Acetylation is most exciting in the context of heart disease in that it has potential as a therapeutically modifiable target. For example, recent work has shown that exogenous treatment with NAD + precursors can increase NAD + levels in mitochondria, activate sirtuins to reduce protein acetylation, and improve cardiac outcome [23,24].
The heart disease of Friedreich's Ataxia (FRDA) results from inherited deficiency of frataxin (FXN), a mitochondrial protein important in energy homeostasis. FXN is a highly conserved mitochondrial matrix protein that functions in iron-sulfur cluster assembly, which is integral to mitochondrial metabolic machinery [25]. Reduced expression of FXN in FRDA results in impaired energy generation, a decreased NAD + /NADH ratio, and increased oxidative stress [26][27][28]. In addition to ataxia, patients often develop hypertrophic cardiomyopathy and heart failure. The vast majority of patients who go on to develop cardiomyopathy are asymptomatic until late stages of disease. The most frequent cause of mortality in FRDA arises from cardiac etiology, most commonly due to congestive heart failure [29]. There is no known cure.
Mice with conditional loss of FXN in the heart develop cardiac hypertrophy as early as 5 weeks of age, followed by transition to dilated cardiomyopathy and heart failure by approximately 8 weeks of age [30]. Certain biochemical and structural changes occur in the FRDA mouse model heart as early as 4 weeks of age, including reduced activity of important metabolic enzymes (such as those of the ETC complexes and aconitase) and mitochondrial ultrastructure abnormalities [30]. The importance of these findings is that underlying changes in the FRDA heart originate prior to overt abnormalities in cardiac function and thus, may present a window of opportunity for intervention before irreversible changes occur.
We previously reported that mitochondrial proteins in FRDA mouse model hearts have increased acetylation and decreased SIRT3 activity, resulting in increased acetylation of several important metabolic enzymes, including LCAD, MCAD, AceCS2, and the ETC [28,31]. The reduction in SIRT3 activity is likely due to insufficient NAD + bioavailability as a result of dysfunctional energy metabolism, and possibly by direct oxidative modification of the native protein. The impact of abnormal acetylation in the FRDA heart has not been investigated. Because of the important role that acetylation plays in cardiac function, we believe that loss of SIRT3 and resultant mitochondrial protein hyper-acetylation contributes to the heart disease of FRDA.
The purpose of the present study was to document the changes in physiology and function that evolve over time in a model of FRDA heart failure using both non-invasive (ECHO) and invasive left heart catheterization, and match the trajectory of cardiac pathophysiology to the protein acetylation profile along the course of disease in order to determine the link between acetylation of mitochondrial proteins and FRDA heart disease.
Cardiac hypertrophy and diastolic dysfunction are early and persistent findings
We used an established conditional mouse model of FRDA heart failure with absence of FXN in the heart (FXN KO) compared to healthy littermates (FXN fl/fl ) [32]. We examined cardiac function in detail, using ECHO and invasive left heart catheterization at postnatal age day 30 ±5, 45 ±4 and 65 ±5. These groups represented pre-, mid-and late heart disease according to previous reports [30].
There were no functional or anatomic differences between hearts of FXN KO and control mice at day 30 (Table 1).
Cardiac hypertrophy is manifest by day 45 in FXN KO animals (Table 1)
Acetylation is rapidly and temporally progressive and strongly correlates with a decline in heart function
We previously demonstrated that acetylation in the conditional FXN KO heart is dramatically increased in late stages of heart disease and that the majority of protein acetylation was localized to mitochondria [28]. Here, we examined the level of lysine acetylation in heart tissue at each time point in which we measured cardiac function in order to determine the level of cardiac protein lysine acetylation relative to the evolution of cardiac dysfunction from pre-disease to overt heart failure.
We show that acetylation is modestly increased prior to onset of detectable cardiac dysfunction in the FXN KO heart at day 30. This is followed by a dramatically rapid and progressive increase in protein acetylation compared to controls (Fig 2). We measured the expression levels of several essential mitochondrial electron transport proteins, and as expected, based on the established role of FXN in Fe-S cluster enzyme assembly [25,33] and prior studies [28], the expression of ETC complex subunits CI-NDUFA9, CII-SDHB and CIII-Rieske is reduced in FXN KO and decreases consistently over this time course (Fig 2).
Surprisingly, when the level of acetylation was correlated with measured cardiac function, there was a strong inverse correlation between amount of acetylation, as measured by relative density on western blot imaging, and EF (r = -0.923) and FS (r = -0.927) (Fig 2). To demonstrate that mitochondrial deacetylase activity is decreased or absent in the FXN KO hearts, we probed for acetylation of known SIRT3 mitochondrial protein targets, SOD2 and LCAD. SOD2 acts to destroy superoxide radicals and is inhibited when acetylated at lysine 68. SIRT3 reverses acetylation [17,34]. LCAD is a key enzyme in fatty acid oxidation and reports have demonstrated increased activity in response to deacetylation by SIRT3 [35,36]. As expected, we detected increased acetylation of both SOD2 and LCAD in the FXN KO compared to controls (Fig 2), confirming decreased activity of SIRT3 deacetylase. These results are consistent with prior studies in our lab that demonstrated hyper-acetylation of SIRT3 targets in cardiac-specific FXN KO hearts, including AceCS2, LCAD, and MCAD [31].
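For reference, the correlation reported above is the standard Pearson coefficient between paired acetylation densities and functional measurements; a minimal sketch with hypothetical paired values:

    import numpy as np

    def pearson_r(x: np.ndarray, y: np.ndarray) -> float:
        """Pearson correlation coefficient between two paired samples."""
        xc, yc = x - x.mean(), y - y.mean()
        return float((xc * yc).sum() / np.sqrt((xc**2).sum() * (yc**2).sum()))

    # hypothetical pairs: blot density (arbitrary units) vs. ejection fraction (%)
    acetylation = np.array([1.0, 1.8, 2.9, 4.1, 5.2])
    ef = np.array([62.0, 55.0, 44.0, 30.0, 21.0])
    print(pearson_r(acetylation, ef))  # strongly negative, close to -1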
Abnormal cardiac mitochondria ultrastructure is accompanied by mitochondrial respiratory inhibition
Consistent with previous reports [30,32], we found that pathologic changes in mitochondria become evident on electron microscopy (EM) as early as postnatal day 30 in ventricular tissue of FXN KO, and increase in prevalence over time (Fig 3). Mitochondrial ultrastructure changes in FXN KO hearts include loss and collapse of cristae, disordered mitochondria-to-sarcomere arrangement, and extensive accumulation and stacking of mitochondria, in addition to widespread electron-dense inclusions at day 65.
We quantified these observed morphological changes by measuring the ratio of myofibril-to-mitochondria area from micrographs, and determined the percentage of abnormal mitochondria in each strain [37]. At day 65, the myofibril-to-mitochondria ratio was significantly decreased in FXN KO (0.90 ±0.46) compared to controls (1.89 ±0.47) (p = 0.011), and the average percentage of mitochondrial area containing abnormal mitochondria in FXN KO (78.1 ±26.7%) was significantly higher than in control micrographs in which no abnormalities were detected (p = 0.002) (Fig 3).
We next examined mitochondrial respiratory function at day 65. Respiratory control ratios (RCR) in FXN KO cardiac mitochondria (1.67 ±0.12) were significantly decreased compared to controls (4.86 ±0.37) (p<0.001) when measured by Clark Oxygen electrodes using complex I substrates glutamate and malate (Fig 3).
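The significance of the RCR difference can be checked from the reported summary statistics alone; the group sizes below are assumed placeholders for illustration (the Methods report 4-6 pooled control and 8-12 pooled FXN KO preparations):

    from scipy import stats

    # RCR means and SDs as reported above; nobs values are assumed placeholders
    t_stat, p_value = stats.ttest_ind_from_stats(
        mean1=1.67, std1=0.12, nobs1=8,   # FXN KO (assumed n)
        mean2=4.86, std2=0.37, nobs2=5,   # control (assumed n)
        equal_var=True,                   # Student's t test, as in the paper
    )
    print(f"t = {t_stat:.1f}, p = {p_value:.1e}")  # p far below 0.001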
FXN KO hearts exhibit features of maladaptive ventricular remodeling
Consolidated areas of fibrotic infiltration of the heart are widespread in FXN KO mouse hearts at day 65 (Fig 4), similar to previous reports [32]. By measuring the percent of collagen detected in ventricular tissue on stained micrographs, we demonstrate that FXN KO mice have more than a 1.5 times increase in fibrotic involvement of ventricular tissue compared to controls (FXN KO = 9.1% vs. control 4.9%, p = 0.002) (Fig 4). We did not find a significant increase in amount of collagen in the FXN KO hearts compared to controls at days 30 or 45 on histological examination. FXN KO hearts at day 65 also display diffuse cardiomyocyte degeneration, indicated by the presence of vacuolation on light microscopy (Fig 4). Degenerating cardiomyocytes were not detectable at earlier ages (data not shown).
Loss of FXN in the heart leads to cardiac steatosis and cold intolerance
Ventricular tissue of FXN KO at day 65 demonstrated a combined pattern of macro-and microsteatosis in cardiomyocytes (Fig 5).
Because mice with loss of MCAD or VLCAD in the heart quickly become hypothermic and die when exposed to a cold environment [38,39], and, similarly, ablation of SIRT3 leads to cold intolerance [36], we predicted that the conditional FXN KO would be unable to maintain its core body temperature when exposed to a cold stress. Mice were fasted for 6 hours then placed in a cold room at 4˚C with core body temperature monitoring every 30 minutes, for a total period of 180 minutes. Indeed, we found that cold exposure led to a precipitous drop in body temperature and significantly decreased survival rate in the FXN KO animals compared to controls (Fig 5). The majority of the FXN KO animals expired or became moribund with body temperatures <19˚C (at which time animals were euthanized for humane reasons) prior to cessation of the three-hour time period (Fig 5). Both controls and FRDA mice were observed to be shivering with exposure to cold.
Discussion
This study is the first to document the progression of lysine-acetylated proteins and demonstrate a correlation between acetylation and cardiac function in the heart of the conditional FXN KO mouse model of FRDA heart disease. We have characterized the initial stages of cardiac dysfunction and evolution to heart failure in this animal model, which mirrors that of the FRDA human heart disease. We conclude that protein lysine acetylation increases in a temporally progressive pattern and that there is a strong negative correlation between level of acetylation in the heart and global heart function: as acetylation increases, ejection fraction and fractional shortening decrease.
We found that abnormal post-translational modification of key metabolic enzymes has occurred prior to detectable impairment in heart function in the FXN KO. Acetylation is increased at day 30 and continues to progress over time. The first notable abnormal traits on cardiac pathophysiology occur at 45 days of age, with thickening of the left ventricular wall and diastolic dysfunction. As acetylation progresses, heart function continues to decline and demonstrates features of decompensated left ventricular dilation, decreased ejection fraction and fractional shortening, consistent with systolic heart failure. These results suggest that targeting acetylation early would provide the best chance to halt or slow progression of FRDA heart disease. Modifying acetylation only after detectable cardiac dysfunction occurs may miss the opportunity to prevent irreversible hemodynamic changes in the FXN KO heart.
Loss of SIRT3 results in maladaptive ventricular remodeling in response to stress [19], abnormalities in lipid metabolism, and reduced tolerance to cold exposure [36]. We found similar outcomes in the FXN KO, which implies that loss of function of SIRT3 in the FXN KO heart and resultant hyper-acetylation of SIRT3 target proteins plays a major role in the pathologic processes of FRDA mitochondrial heart disease. The absence of FXN in the heart, which has an established role in manufacturing of Fe-S cluster subunits needed for enzymes vital to oxidative metabolism, is, in and of itself, a major insult to the cardiomyocyte cellular milieu [40]. The conditional FXN KO model thereby acts as a "stressed heart" model to examine the role of acetylation in heart disease and failure. This is supported by our observation that changes seen in the FXN KO are more robust, with an earlier age at onset, than those seen in models of SIRT3 loss alone. SIRT3 activity is impaired in FRDA hearts likely due to the decreased bioavailability of NAD + and to oxidative damage of SIRT3 [28]. The findings in this study can be applied to other models of heart disease and failure. Many cardiac diseases with abnormal metabolism and energy homeostasis are liable to result in impaired SIRT3 activity in a manner similar to the FXN KO model, leading to abnormal mitochondrial protein acetylation in the heart and impaired cardiac function. Such heart disease candidates include inherited mitochondrial cardiomyopathies, diabetic and metabolic syndrome heart disease, acquired cardiac hypertrophy, age-related and ischemic heart disease, and heart failure [18,19]. For example, the ischemic heart results in a shift in redox state with accumulation of NADH [41], mirroring the consequential disturbance in energy equivalents in the FRDA heart.
More studies are needed in order to determine a causative role of abnormal acetylation in leading to impairment of hearts with FXN loss. For instance, it would be of great interest to determine whether normalizing the NAD + /NADH ratio in the FXN KO heart would stimulate SIRT3 function, recover activity of targeted metabolic enzymes, and improve measured cardiac outcome. Recent work has shown that exogenous treatment with NAD + precursors can increase NAD + levels in mitochondria, reduce protein acetylation, and activate sirtuins [23,24]. Together with our findings that mitochondrial protein acetylation impairs heart function, this represents an attractive therapeutic potential for pharmacological modulation of protein lysine acetylation to improve cardiac metabolism and physiologic function and prevent unremitting progressive heart disease.
One limitation of this study may be that the MCK promoter used to drive Cre expression in this model is expressed in all sarcomeric tissues, including skeletal muscle. It is not expected that loss of FXN in skeletal muscle would significantly alter cardiac function in this study. Indeed, in patients with FRDA, and in the original report of this mouse model [32], there is not an identified skeletal muscle phenotype. Using the cardiac-specific alpha myosin heavy chain promoter (α-MHC) would address this limitation. However, we determined that the MCK promoter was the logical choice for these experiments, both because α-MHC has been noted to be cardio-toxic under certain conditions [42], a confounding variable we wanted to avoid, and because the MCK-Cre transgene has been used extensively in our lab and others' to generate the FXN KO mouse [28,32].
In conclusion, we demonstrate a close relationship between mitochondrial protein acetylation, cardiac dysfunction and metabolic disruption in a model of FRDA hypertrophic cardiomyopathy. Our results suggest that abnormal acetylation contributes to the pathophysiology of heart disease in FRDA and may represent a therapeutic target for early intervention.
Mouse breeding and genotyping
This study was approved by the Institutional Animal Care and Use Committee of Indiana University. We used site-specific Cre-lox recombination to carry out gene deletions of interest. We used FXN fl/fl mice to create FXN KO mice that were homozygous for heart and skeletal muscle deletion of FXN (MCK-Cre:FXN KO), as previously published [32]. Data was collected from males at postnatal days 30(±5), 45(±4), and/or 65(±5). Controls were age- and sex-matched healthy littermates (FXN fl/fl).
Echocardiography (ECHO)
Mice were anesthetized with isoflurane and placed on a warming mat with continuous monitoring. Transthoracic ECHO images were obtained using a VisualSonics 2100 ultrasound machine for small animal imaging and an MS400 transducer (Fujifilm VisualSonics, Inc., Toronto, Canada). Functional parameters of the left ventricle and outflow tracts were measured using standard assessment techniques. Relative wall thickness (RWT) was calculated as RWT = (2 × LVPWd)/LVIDd.
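The RWT calculation itself is a one-liner; the echo measurements in the example are hypothetical:

    def relative_wall_thickness(lvpwd_mm: float, lvidd_mm: float) -> float:
        """RWT = (2 * LVPWd) / LVIDd, both in the same length units."""
        return 2.0 * lvpwd_mm / lvidd_mm

    # hypothetical mouse values: 0.8 mm posterior wall, 3.9 mm internal diameter
    print(relative_wall_thickness(0.8, 3.9))  # ~0.41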
Cardiac catheterization
Mice were anesthetized with isoflurane and placed on a warming mat with continuous monitoring. Pressure-volume loops were obtained directly using a 1.2 F microconductance catheter (Scisense, Transonic Systems Inc.), which was inserted into the right carotid artery through a small neck incision and advanced retrograde into the left ventricle. The ADVantage pressure volume system (Scisense, Transonic Systems Inc.) was used to acquire pressure, admittance, phase shift, and amplitude, and real time pressure-volume data was displayed and analyzed offline using specialized software (Labscribe 2, iWorx, Dover, NH) [43].
Cold stress
This experiment was modeled after similar studies [38,39]. 65 day old FXN KO (n = 5) and FXN fl/fl (n = 6) were selected for cold exposure. Mice were deprived of food for 6 hours prior to onset of study and for the duration of experiment. A baseline rectal temperature was taken prior to placement in 4˚C cold room and then every 30 minutes thereafter for a total of 3 hours. A study was terminated for humane reasons if core body temperature reached 19˚C or less.
Histology
Ventricular tissue selected for histology was fixed in 10% formalin and paraffin embedded. Histological analysis included hematoxylin and eosin (H&E) and Masson's Trichrome to detect collagen as a measure of fibrosis. Collagen quantification was performed using ImageJ (IJ1.46) to measure percent area of tissue positively stained for collagen. Three to five 20x digital micrographs were obtained from each tissue section that underwent quantification studies.
Electron microscopy (EM)
Ventricular tissue selected for electron microscopy was fixed in 2.5% glutaraldehyde and underwent sectioning and uranyl acetate staining by the Electron Microscopy Center of Indiana University School of Medicine. At least 3 separate sections from each animal strain were selected for quantification analysis. Quantification methods were similar to methods used in our previous work [37]. Abnormal mitochondria were identified as those containing electron-dense inclusions, cristae loss or dissolution, and/or collapsed or condensed cristae. Mitochondria and myofibril area was measured using ImageJ (IJ1.46).
Isolation of cardiac mitochondria
Mitochondrial isolation followed the method described in our previous work [28]. Freshly harvested hearts were submerged in ice-cold mitochondrial isolation buffer. Heart tissue was weighed, homogenized, and then subjected to differential centrifugation. The mitochondrial pellet was resuspended in the appropriate buffer and used immediately for respiration assay or western blotting, or flash frozen for later analysis.
Mitochondrial respiration
Mitochondrial isolates designated for use in respiration assays were resuspended in mitochondrial respiration buffer. Mitochondria from 2-3 mouse hearts were pooled for each run, with n = 4-6 pooled heart mitochondria preparations assayed for controls and n = 8-12 pooled heart mitochondria preparations assayed for FXN KO. A Clark-oxygen electrode was used for respiration assays. Glutamate and malate were used as substrates for all assays.
Statistics
All calculations, analyses and graphs were performed using SigmaPlot (Systat Software, Inc.). Two groups were compared using a two-tailed Student's t test for samples with equal variance. Final data are presented as mean (±SD). Alpha was set at 0.05 and power at 0.800. A p-value of <0.05 was considered statistically significant. | 2018-04-03T01:01:46.532Z | 2017-05-25T00:00:00.000 | {
"year": 2017,
"sha1": "c6d50d7faa5e142e211dabf014fa281986757ea0",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0178354&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c6d50d7faa5e142e211dabf014fa281986757ea0",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
209444855 | pes2o/s2orc | v3-fos-license | Does Symbolic Knowledge Prevent Adversarial Fooling?
Arguments in favor of injecting symbolic knowledge into neural architectures abound. When done right, constraining a sub-symbolic model can substantially improve its performance and sample complexity and prevent it from predicting invalid configurations. Focusing on deep probabilistic (logical) graphical models -- i.e., constrained joint distributions whose parameters are determined (in part) by neural nets based on low-level inputs -- we draw attention to an elementary but unintended consequence of symbolic knowledge: that the resulting constraints can propagate the negative effects of adversarial examples.
Introduction
Deep probabilistic (logical) graphical models (dPGMs) tie together a sub-symbolic level that processes low-level inputs with a symbolic level that handles logical and probabilistic inference, see for instance ). The two levels are often implemented with k neural networks and one probabilistic (logical) graphical model, respectively. Prominent examples of dPGMs include DeepProbLog (Manhaeve et al. 2018) and "neural" extensions of Markov Logic (Lippi and Frasconi 2009; Marra and Kuželka 2019). In this preliminary investigation, we show with a concrete toy example that fooling a single neural network with an adversarial example (Szegedy et al. 2013; Biggio and Roli 2018) can corrupt the state of multiple output variables. We develop an intuition of this phenomenon and show that it occurs despite the model being probabilistic and regardless of whether the symbolic knowledge is factually correct.
Deep Probabilistic-Logical Models
We restrict ourselves to deep Bayesian networks (dBNs), i.e., directed dPGMs stripped of their logical component. (Our arguments do transfer to other dPGMs and deep statistical-relational models too.) These models are Bayesian networks where some conditional distributions are
implemented as neural networks feeding on low-level inputs, and (roughly speaking) correspond to ground DeepProbLog models.
Let x = (x1, x2) and z = (z1, z2). Our dBN for this problem defines a joint distribution P(x | z; ϕ) built on the conditionals P(x1 | z1), P(x2 | z2) and on ϕ. In particular, the probability of the event Xi = xi is implemented as a ConvNet with a softmax output layer applied to zi. The dBN is consistent with the symbolic knowledge ϕ in that it ensures that the joint distribution satisfies P(x | z; ϕ) = 0 for all x ⊭ ϕ. This is achieved by taking an unconstrained joint distribution P(x | z) = ∏_i P(xi | zi) and constraining it:

P(x | z; ϕ) = 1[x ⊨ ϕ] · P(x | z) / Z    (1)

Here Z = Σ_{x ⊨ ϕ} P(x | z) is a normalization constant and the sum runs over all x's consistent with ϕ. A joint prediction is obtained via maximum a-posteriori (MAP) inference (Koller and Friedman 2009):

x* = argmax_{x ⊨ ϕ} P(x | z; ϕ)    (2)

If no symbolic knowledge ϕ was given, the most likely outputs would simply be (f1(z1), f2(z2)), where:

fi(zi) = argmax_{xi} P(xi | zi)    (3)

Finally, we use the same ConvNet for both images, and let f := f1 = f2.
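To make these definitions concrete, here is a minimal sketch of the constrained joint (Eq. 1) and the two prediction rules (Eqs. 2-3), assuming the four-valued digit encoding of the running example and taking the set of valid configurations from the example below ({(1, 4), (2, 3), (3, 2), (4, 1)}); the helper names are ours, not the paper's.

```python
# Sketch of the constrained joint (Eq. 1) and the two prediction rules
# (Eqs. 2-3). Digits take values in {1, ..., 4}; p_i is a length-4 list with
# p_i[d - 1] = P(X_i = d | z_i). VALID enumerates the x with x |= phi.
VALID = {(1, 4), (2, 3), (3, 2), (4, 1)}

def constrained_map(p1, p2):
    """MAP inference over P(x | z; phi), returning (x*, its probability)."""
    scores = {x: p1[x[0] - 1] * p2[x[1] - 1] for x in VALID}
    z = sum(scores.values())                   # normalization constant Z
    posterior = {x: s / z for x, s in scores.items()}
    best = max(posterior, key=posterior.get)
    return best, posterior[best]

def unconstrained_map(p1, p2):
    """(f1(z1), f2(z2)): per-digit argmax, ignoring phi (Eq. 3)."""
    f = lambda p: max(range(1, 5), key=lambda d: p[d - 1])
    return f(p1), f(p2)
```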
Adversarial Examples and Constraints
Consider a pair of images z representing a 1 and a 4, respectively, and let the ConvNet output the following conditional probabilities over the digits 1-4:

P(X1 | z1) = (0.9, 0.1, 0, 0)    (4)

P(X2 | z2) = (0.25 − ε, 0.25, 0.25, 0.25 + ε)    (5)

for some small ε, e.g., 0.001. Although the second image is rather uninformative, the unconstrained dPGM gets both digits right, with joint probability ≈ 0.226 (by Eq. 3) and so does the constrained classifier, with probability ≈ 0.9 (Eq. 2). In this case, the symbolic knowledge boosts the confidence of the model, a desirable and expected result. Now, perturbing zi by δi shifts the conditional distribution output by the ConvNet from P(xi | zi) to P(xi | zi + δi) and hence changes the probabilities assigned to the possible outcomes Xi. Intuitively, a perturbation is adversarial if it is at the same time imperceptible and it forces MAP inference to output a wrong configuration. In other words, assuming that zi is classified correctly, zi + δi is adversarial if f(zi + δi) ≠ f(zi) and ‖δi‖ is "small" for some norm ‖·‖.
It is well known that neural networks are often susceptible to rather eye-catching adversarial perturbations that can alter their output by arbitrary amounts (Szegedy et al. 2013;Biggio and Roli 2018). Thus it is not too far fetched to imagine a perturbation δ1 that induces the following conditional distribution on the first digit:

P(X1 | z1 + δ1) = (0.1, 0.9, 0, 0)    (6)

Now, it can be readily verified that this perturbation forces the unconstrained dBN to predict (2, 4) with joint probability ≈ 0.226 (which is symmetrical to the above case). Clearly this model is fooled by the adversarial image into making a mistake on x1, but the damage is limited to the first digit: x2 is still predicted correctly. However, (2, 4) does violate the symbolic knowledge ϕ, while the constrained dBN is forced to output a valid prediction, namely the most likely configuration out of {(1, 4), (2, 3), (3, 2), (4, 1)}. Given the above conditional distributions and ε, the constrained dBN outputs (2, 3) with probability ≈ 0.9. This prediction is definitely consistent with ϕ, but now both digits are classified wrongly.
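Plugging these numbers into the sketch above reproduces the reported behavior; note that the ε-dependent second-digit distribution is a reconstruction consistent with the stated probabilities (≈ 0.226 unconstrained, ≈ 0.9 constrained), not a quotation of the original equations.

```python
# Reusing constrained_map / unconstrained_map from the sketch above.
eps = 0.001
p2 = [0.25 - eps, 0.25, 0.25, 0.25 + eps]  # second digit: nearly uninformative

p1_clean = [0.9, 0.1, 0.0, 0.0]            # Eq. (4): clean first image
p1_adv   = [0.1, 0.9, 0.0, 0.0]            # Eq. (6): adversarially perturbed

print(unconstrained_map(p1_clean, p2))     # (1, 4): both digits correct
print(constrained_map(p1_clean, p2))       # ((1, 4), ~0.90): confidence boosted
print(unconstrained_map(p1_adv, p2))       # (2, 4): only the first digit wrong
print(constrained_map(p1_adv, p2))         # ((2, 3), ~0.90): both digits wrong
```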
Discussion
The toy example above illustrates the perhaps elementary but seemingly neglected fact that symbolic knowledge can propagate the negative effects of adversarial examples. This occurs because the model trades off predictive loss in exchange for satisfying a hard constraint.
While our example is decidedly toy, it is easy to see that the same phenomenon could occur in relevant sensitive applications. The phenomenon is also likely to transfer to undirected dPGMs like deep extensions of Markov Logic Networks (Lippi and Frasconi 2009;Marra and Kuželka 2019).
We make a couple of important remarks. First, depending on the structure of the symbolic knowledge, fooling a single neural network in the dPGM may perturb any subset of output variables. Thus, seeking robustness of a single network is not enough, and all k networks must be robustified. Second, even this may not be enough: if an adversary manages to fool a robustified neural network (even by random luck), the effects of fooling will still cascade across the model. Thus the dPGM as a whole must be made robust, in the sense that all CPTs appearing in it (not only the ConvNets) must be made robust. Finally, it may be the case that access to the symbolic knowledge might help attackers design minimal targeted attacks that corrupt any chosen target variable.
Adversarial examples in dPGMs can be understood through the lens of sensitivity analysis for directed (Chan and Darwiche 2002) and undirected probabilistic graphical models (Chan and Darwiche 2005); see especially (Chan and Darwiche 2006). These works show how to constrain a probabilistic graphical model to ensure that the probabilities of different queries are sufficiently far apart. These constraints could be injected into standard adversarial training routines for neural networks to encourage global robustness of the dPGM. Of course, robust training of complex dPGMs is likely to be computationally challenging. Algebraic model counting in the sensitivity semiring might prove useful in tackling this computational challenge (Kimmig, Van den Broeck, and De Raedt 2017). | 2019-12-19T11:50:33.000Z | 2019-12-19T00:00:00.000 | {
"year": 2019,
"sha1": "8670e9380c5e46d79c7ae7a5f2dc9af40ee7f18c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "8670e9380c5e46d79c7ae7a5f2dc9af40ee7f18c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
246778705 | pes2o/s2orc | v3-fos-license | Pathogenic Interactions between Macrophomina phaseolina and Magnaporthiopsis maydis in Mutually Infected Cotton Sprouts
The soil fungus Macrophomina phaseolina, the charcoal rot disease agent, poses a major threat to cotton fields. In Israel, highly infected areas are also inhabited by the maize pathogen Magnaporthiopsis maydis. This study reveals the relationships between the two pathogens and their impact on cotton sprouts. Infecting the soil 14 days before sowing (DBS) with each pathogen or with M. phaseolina before M. maydis caused a strong inhibition (up to 50–65%) of the sprouts’ development and survival, accompanied by each pathogen’s high DNA levels in the plants. However, combined or sequence infection with M. maydis first led to two distinct scenarios. This pathogen acted as a beneficial protective endophyte in one experiment, leading to significantly high emergence and growth indices of the plants and a ca. 10-fold reduction in M. phaseolina DNA in the sprouts’ roots. In contrast, M. maydis showed strong virulence potential (with 43–69% growth and survival suppression) in the other experiment, proving its true nature as an opportunist. Interestingly, soil inoculation with M. phaseolina first, 14 DBS (but not at sowing), shielded the plants from M. maydis’ devastating impact. The results suggest that the two pathogens restrict each other, and this equilibrium may lead to a moderate disease burst.
Introduction
Plants are threatened by a diversity of pathogen species living in complex communities, which also include a variety of other non-pathogenic microorganisms. A plant's pathobiome can be defined as a collection of coexisting phytopathogens that affect each other and the plant [1]. It is formed by pathogens inhabiting the same ecological niche and either cooperating or competing for the same plant resources. Two or more phytopathogens on the same host can result in significantly different disease outcomes compared to single infections [2]. The communities of natural microorganisms inhabiting the plant phyllosphere, the aboveground portions of the plant's habitat, or the rhizosphere, the roots' surrounding habitat, also include non-pathogenic members that can have protective effects against pathogens. For example, it has been shown recently that maize (Zea mays) grains are populated with a diversity of fungi and bacteria [3], some of which are known phytopathogens such as Alternaria alternata and Fusarium proliferatum, which can be opportunists, causing disease when the conditions are favorable. Other members of the maize grains' microflora are beneficial bioprotective species, including the fungal species Trichoderma asperellum, Penicillium citrinum, and Chaetomium cochliodes, and the bacterium Bacillus subtilis.
The interactions between microorganisms occur in the soil of commercial fields before sowing, and their outcome can drastically affect disease severity [4]. In cotton (Gossypium hirsutum), interactions between Fusarium oxysporum, a wilt agent, and Magnaporthiopsis maydis, a root-surface inhabitant and assumedly harmless endophyte, were shown to have an antagonistic effect that reduced the severity of Fusarium wilt [5].
Cotton Cultivar Selected for This Study
Pima cotton, Goliath cv. (extra-long staple (ELS) cotton), was cultivated routinely in different regions of Israel (supplied by Israel Seeds, Kibbutz Shefaim, Israel). This cotton cultivar was also grown on late wilt-contaminated fields during crop rotation and was recently reported to be M. maydis-vulnerable at the early growth stages [13]. In a survey conducted by Roni Cohen across Israel (Newe Ya'ar Research Center, northern Israel), Pima cotton, Goliath cv., was found to be sensitive to M. phaseolina charcoal rot disease [27].
Seedlings Experiment in a Growth Chamber under Controlled Conditions
The pot experiment was performed twice using the same experimental design. The results of both experiments are presented. The first repeat also included acute CRD cotton sprouts induced by puncturing them with infected wooden toothpicks, as will be described in Section 2.5.
Two-liter pots were filled with 70% commercial, non-sterilized garden soil and 30% Perlite No. 4 for aeration. The commercial garden soil mixture (Garden Mix, Deshanit, Be'er Yaakov, Israel) comprised fibers, coconut peat, a relatively low amount of tuff, and Osmocote (ScottsMiracle-Gro, Marysville, OH, USA), a 3-4 month slow-release fertilizer. The Pima cotton cv. seeds were soaked for about 15 min in distilled water and sown in pots to a depth of 5 cm (five seeds per pot). The pots were kept in a growth room with a photoperiod of 8 h of light and 16 h of darkness. All the plants were grown under the same conditions of constant humidity of 45% and a temperature of 25 ± 2 °C. Immediately after sowing, the pots were irrigated to initiate germination. Irrigation was carried out after the sprouts' first emergence, 100 mL once every two days, using a computerized irrigation system. Throughout the experiment, treatments against various pests and fertilization treatments were performed according to instructions by the Israel Ministry of Agriculture Consultation Service (SAHAM). The experiments were conducted in 6 repetitions per treatment, whereby each repetition is a pot containing five sprouts.
The Preparation of Infected Sterilized Wheat Grains
Infected sterilized wheat grains were used to spread the fungi in the soil, as previously reported [12]. This experimental setting, using a sterilized wheat grain substrate, is a common method to induce maximum inoculation pressure under controlled conditions. This is especially needed when pathogens with a limited saprophytic ability (such as M. maydis) are involved.
Wheat grains were soaked in tap water overnight. The grains were then dried for about four hours in a fume hood on paper towels and autoclave-sterilized for 30 min at a temperature of 120 °C. Disinfected plastic 0.5 L boxes were used to inoculate 100 g sterilized wheat grains with 10 mycelial discs of each fungal species, M. maydis or M. phaseolina (separately). Mycelia disks (6 mm in diameter) were taken from the margins of a 4-6-day-old fungal colony grown as described in Section 2.1. The boxes were sealed with a lid that was tightened to the box using Saran wrap, covered with aluminum foil to guarantee dark conditions, and incubated at 28 ± 1 °C in the dark for 10 days.
The Inoculation Method in the Potted Plants' Experiments
The experiment comprised the following treatments: M. maydis inoculation, M. phaseolina inoculation, and dual fungi inoculation. Each infection treatment was implemented on three dates: 14 days before sowing (DBS), at sowing, or 7 days after sowing (DAS). In addition, the infection order, one fungus after the other, was examined. This setting aimed at investigating possible advantages for each pathogen when it preceded the other in the soil. In both seedling experiment repetitions, we also included a non-infected negative control.
In the pre-sowing treatment, the pots were inoculated two weeks before planting with infected sterilized wheat grains, prepared as described in Section 2.4. In each pot, the soil's top layer, down to a depth of 10 cm (the root area), was mixed with 12 g of those grains enriched with the selected fungus. In the other treatments, where inoculation was done at seeding or one week post sowing, three fungal discs 6 mm in diameter, taken from the margins of a 4-6-day-old fungal colony grown as described in Section 2.1, were added to each seed/seedling. This procedure was conducted by making a small hole in the soil adjacent to the plant, inserting the fungal discs to a depth of 5 cm, and sealing it with soil.
Another treatment group was set up to induce acute CRD in the cotton sprouts. This procedure was conducted by puncturing the sprouts, as previously reported [28]. To this end, infected wooden toothpicks were prepared by adding sterilized toothpicks to the surface of PDA plates seeded with M. maydis or M. phaseolina. The plates were kept at 28 ± 1 °C in the dark for about one week until the fungal hyphae covered the toothpicks (Figure 1). At 14 DAS (when the plants were mature enough to perform this procedure), each plant was stabbed in its lower stem part near the ground with M. maydis- and/or M. phaseolina-infected wooden toothpicks. Each stalk was inoculated once by puncturing and leaving the infected wooden toothpick stuck in the stem. In the M. maydis and M. phaseolina combined puncturing inoculation, the stalk was inoculated by stabbing with two toothpicks, one for each fungus.
Data Collection During and at the End of the Growth Period
The degree of emergence, defined as plants revealing the tip of the coleoptile above the ground, was determined 10 DAS. At the end of the experiment (day 40), all seedlings were gently removed from the ground, thoroughly rinsed with running tap water, and dried using paper towels. The estimation of the plants' phenological development included wet biomass and the number of leaves. In addition, the survival rate was evaluated, and a 0.7 g root sample was taken from each pot for DNA extraction.
DNA Extraction
For DNA purification, the plants' roots were washed twice in sterile DDW for 30 s. Tissues were sampled by removing a cross-section of approximately 2 cm in length from each plant's root and near-surface hypocotyl under a sterile biosafety hood. Samples from the five plants of each pot were combined, and the total weight was adjusted to 0.7 g; this pooled sample was considered one repeat. All tissue samples were inserted into universal extraction bags (Bioreba, Switzerland) and 4 mL of CTAB buffer (0.7 M NaCl, 1% cetyltrimethylammonium bromide (CTAB), 50 mM Tris-HCl pH 8.8, 10 mM EDTA, and 1% 2-mercaptoethanol) was added. The tissues were ground with a tissue homogenizer (Bioreba, Switzerland) for five minutes until they were completely homogenous. The DNA samples were suspended in 100 µL of ultra-pure quality water and stored in a freezer at −20 °C until used in the qPCR reaction.
The homogenized samples were treated for DNA purification according to the Murray and Thompson procedure [29] with slight modifications [10]. First, 1.2 mL of each DNA sample in CTAB buffer was incubated for 20 min at 65 °C. Then, the DNA samples were centrifuged at 14,000 rpm for 5 min at room temperature (24 °C). The upper phase of the lysate (usually 700 µL) was then extracted with an equal volume of chloroform/isoamyl alcohol (24:1). After mixing by vortex, the blend was centrifuged again at 14,000 rpm for 5 min at room temperature. This stage of chloroform/isoamyl alcohol extraction was repeated twice. The supernatant, usually 300 µL, was then separated into a new Eppendorf tube and mixed with cold isopropanol (2:3). The DNA solution was mixed gently by inverting the tube several times, kept at −20 °C for 20-60 min, and centrifuged (14,000 rpm at 4 °C for 20 min). The precipitated DNA was isolated and resuspended in 0.5 mL of 70% ethanol. After another centrifugation (14,000 rpm at 4 °C for 10 min), the precipitated DNA was isolated and left to dry in a sterile hood overnight. Finally, the DNA was suspended in 100 µL HPLC-grade water and kept at −20 °C until use.
Real-Time PCR-Based Molecular Test
The qPCR reactions were performed as previously described [10], using the ABI PRISM ® 7900 HT Sequence Detection System (Applied Biosystems, Foster City, CA, USA) for 384-well plates. The molecular method is based on a standard qPCR protocol used to detect mRNA (converted to cDNA) [30]. Here, it was modified to detect the DNA of the pathogen M. maydis using species-specific primers [31,32]. The A200a primer set was used for M. maydis detection (200 bp species-specific fragment) [24], while the MpK FI RI primers were used for M. phaseolina detection (300-400 bp species-specific fragment) [33]. The primers' sequences are listed in Table 1. The housekeeping COX gene, encoding the enzyme cytochrome c oxidase (the mitochondria's last enzyme in the cellular respiratory electron transport chain), was used to normalize the M. maydis pathogen DNA [35]. The COX gene is used as a reference "housekeeping" gene representing the samples' total plant and fungal DNA. This gene was designated to normalize the amount of M. maydis DNA and was proven sufficiently stable in many similar works (see, for example, refs. [10,12,28,36,37]). The use of the COX gene provides a steady, strong, and easily detectable qPCR reading value after 22-25 cycles, with minor variations and values that rarely exceed this range. The COX amplification was done using the COX F/R primer set [34,35] (Table 1). The calculation of relative gene abundance was made according to the ∆Ct model [38]. Similar amplification efficiency was assumed for all samples. All amplifications were performed in triplicate.
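As an illustration of the ∆Ct normalization described above, the following minimal sketch computes a relative abundance from a pathogen-specific Ct and the matching COX Ct; the Ct values are hypothetical placeholders, not readings from this study.

```python
# Relative pathogen DNA abundance by the delta-Ct model: the target's Ct is
# normalized against the COX housekeeping gene from the same sample.
# Ct values below are illustrative placeholders, not study data.

def relative_abundance(ct_target: float, ct_cox: float) -> float:
    """2^-(Ct_target - Ct_COX), assuming equal amplification efficiency."""
    return 2.0 ** -(ct_target - ct_cox)

ct_mm, ct_cox = 33.5, 23.8   # e.g., an M. maydis A200a reading vs. COX
print(f"relative M. maydis DNA: {relative_abundance(ct_mm, ct_cox):.2e}")
```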
The qPCR conditions were as follows: 5 µL total reaction volume was used per sample well: 2 µL of DNA sample extract, 2.5 µL of iTaq™ Universal SYBR Green Supermix (Bio-Rad Laboratories Ltd., Hercules, CA, USA), and 0.25 µL of each of the forward and reverse primers (10 µM from each primer per well). The qPCR cycle program was as follows: pre-cycle activation phase (1 min at 95 °C), denaturation (15 s at 95 °C) for 40 cycles, and annealing and extension (30 s at 60 °C). A melting curve was used to analyze the results.
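For bookkeeping purposes, the cycling program above can be expressed as a small configuration table; this is only an illustrative sketch (the step names and run-time estimate are ours), not an instrument file format.

```python
# The qPCR thermal profile above, expressed as a simple configuration
# (step name, temperature in deg C, duration in seconds, cycle count).
QPCR_PROGRAM = [
    {"step": "activation",          "temp_c": 95, "seconds": 60, "cycles": 1},
    {"step": "denaturation",        "temp_c": 95, "seconds": 15, "cycles": 40},
    {"step": "annealing/extension", "temp_c": 60, "seconds": 30, "cycles": 40},
]

total_s = sum(s["seconds"] * s["cycles"] for s in QPCR_PROGRAM)
print(f"approximate run time: {total_s / 60:.0f} min")  # ignoring ramp times
```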
Statistical Processing
All statistical processing was done using JMP software, 15th edition (SAS Institute Inc., Cary, NC, USA). To test the significance of the results, we used a one-way analysis of variance (ANOVA) and a post hoc Student's t-test for each pair of averages, without correction for multiple comparisons. The significance threshold was p < 0.05. The random effects on the results of the biological repeats, of the fungus-alone versus co-inoculation comparison, and of the pre-inoculation versus post-inoculation comparison were insignificant. The experiment's biological repeats 1 and 2 are presented separately in the Results section and analyzed comparatively to each other.
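A minimal sketch of this analysis pipeline, assuming SciPy is available; the group labels mirror the treatments, but the growth-index values are made-up placeholders, not study data.

```python
from itertools import combinations
from scipy import stats

# Placeholder growth-index values for four treatment groups (not study data).
groups = {
    "control":        [9.8, 10.1, 9.5, 10.4, 9.9, 10.2],
    "Mm 14 DBS":      [6.1, 5.8, 6.5, 6.0, 6.3, 5.9],
    "Mp 14 DBS":      [6.4, 6.0, 6.7, 6.2, 6.5, 6.1],
    "Mm + Mp 14 DBS": [7.9, 8.2, 7.6, 8.0, 8.3, 7.8],
}

# One-way ANOVA across all groups, then uncorrected pairwise Student t-tests
# for each pair of means (p < 0.05), mirroring the procedure above.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
for a, b in combinations(groups, 2):
    t, p = stats.ttest_ind(groups[a], groups[b], equal_var=True)
    print(f"{a} vs {b}: p = {p:.4f}{' *' if p < 0.05 else ''}")
```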
Results
The relationships between M. phaseolina and M. maydis and their impact on cotton plants at their early growth stage under controlled conditions were studied. Sprouts were inoculated with M. phaseolina and M. maydis, together or one after the other, before, during, and after sowing. The pot experiment was performed twice with the same experimental design, and the results of both repeats are presented. The two biological repeats yielded some similar results but also revealed an intriguing perspective on the role of M. maydis in the sprouts' CRD outcome. This pathogen caused significant disease symptoms when applied alone before sowing. However, it acted differently in the presence of M. phaseolina in the experiment's two biological repeats. In the first repetition, M. maydis pre- or combined infection shielded the plants against the CRD agent's destructive impact. In contrast, in the second repeat, M. maydis was virulent, causing harsh disease symptoms in all combinations tested. These pathogenic variations are described and analyzed in detail below.
The Experiment's First Biological Repeat
The combined effect of the two soil pathogens, M. phaseolina and M. maydis, was tested at the plants' early aboveground emergence stage (Figure 2). The aboveground emergence estimation performed 10 days after sowing (DAS) shows that the two pathogens harmed the sprouts when applied separately 14 days before sowing (DBS) compared to soil inoculation at sowing. In those early soil infection treatments, the emergence percentages dropped to 33% compared to the non-infected control, in which 96% of the plants emerged above the ground surface. In contrast, inoculating the soil along with sowing resulted in a lesser emergence inhibition effect. The difference between the two infection dates (14 DBS and 0 DAS) was an emergence improvement of 25% in the M. phaseolina treatment (without a statistical difference) and 28% in the M. maydis treatment (p < 0.05). Combining the two pathogens 14 DBS or on the sowing day caused mutual suppression and higher emergence indices (58% compared to 33% when the plants were treated with the pathogens separately 14 DBS). This difference, dual vs. single infection, was statistically significant.

Plants Development

Forty days after sowing (Table 2), the plants' growth indices show trends similar to the emergence values. The early infection of the soil 14 DBS with each of the pathogens had a significant (p < 0.05) inhibitory effect on growth in terms of the plants' wet weight and the number of leaves. This adverse effect was reduced when the inoculation was performed on the sowing day, when both pathogens were applied together, or when M. maydis preceded M. phaseolina in the soil, 14 DBS for the former fungus and 0 DAS for the latter. This evidence suggests that the addition of M. phaseolina alone with the sowing has a weak impact on the plants' growth indices and survival. Still, it shields them against the devastating effect of the 14 DBS soil pre-inoculation with M. maydis alone.
Sprouts' Survival Rate
The treatments had a prominent effect on the plants' survival rate. In the control group, 96% of the plants survived, while the sole infection with either pathogen 14 DBS led to significant (p < 0.05) sprout death (only 33-43% survived). A slight improvement (50% survival rate) was achieved when M. maydis was added to the soil after M. phaseolina, regardless of the soil infection day. Better survival percentages (63-70%) were recorded in the 14 and 0 DBS combined infections. Those values were still significantly lower than the control. The best survival rates (70-73%) were achieved when the fungi were added separately to the soil on the sowing day or when M. maydis had inhabited the soil 14 DBS before M. phaseolina (Table 2).
The Pathogens' DNA in Plant Roots
Real-time PCR-based molecular monitoring of the M. maydis and M. phaseolina DNA in the plant roots on day 40 of growth (Figure 3) indicates, in most cases, a correlation between the presence of the pathogens and the severity of symptoms (emergence values, Figure 2, and growth indices, Table 2). In analyzing the qPCR results, no statistical significance was found due to high natural variation in the fungi's pathogenesis. The highest DNA amount of M. maydis (4.3 × 10−4) was recorded when the infection was done on the sowing day, either alone or before infection with M. phaseolina performed one week later. The M. maydis DNA level was especially low (7.2 × 10−5) when the soil had been previously inoculated with M. phaseolina.

Changes in M. phaseolina DNA amounts in those plants were also noticeable (Figure 3). In contrast to M. maydis, the cotton CRD agent (M. phaseolina) DNA levels were higher (4.2 × 10−3) when the pathogen was added to the soil 14 DBS compared to on the sowing day (1.4 × 10−3). Simultaneous inoculation with both pathogens, or pre-treatment with M. maydis 14 DBS or on the sowing day, dramatically reduced the amount of DNA of the cotton pathogen (ca. 10-fold reduction). However, pre-inoculating the soil with M. phaseolina led to high levels of this pathogen's DNA, in accordance with the growth reduction results.
The Puncturing Inoculation Effect
The addition of puncturing inoculation with wooden toothpicks in the lower stem section (Figure 1) produced mechanical damage to the plants, which could induce acute infection. This procedure was conducted two weeks after sowing when the plants were mature enough, causing a drastic reduction in growth indices, as expected. The outcome of this intervention on the growth indices was especially noticed in the combined infection by both fungi or when M. phaseolina preceded M. maydis (p < 0.05, Table 2). These results were similar to the severe disease symptoms recorded in the second experiment's repeat, detailed in Section 3.2.
Infecting the sprouts with M. maydis at sowing and infecting them again by puncturing with M. phaseolina two weeks later led to an 8.5-fold increase in M. maydis DNA compared to the reverse procedure (Mp 0 DAS + Mm 14 DAS St.) (Figure 4). This procedure of M. maydis first followed by M. phaseolina stabbing inoculation caused M. phaseolina DNA values to drop to near-zero levels (ca. 100-fold lower levels). These tendencies are similar to those obtained without stabbing (Figure 3).
The Experiment's Second Biological Repeat
In the first experiment, M. maydis infection shielded the plants from the full impact of CRD when applied before or with M. phaseolina. However, in the second repeat, the picture was different. M. maydis enhanced the disease symptoms rather than eliminating them. This pathogenic behavior change led to harsh disease symptoms in almost all of the treatments.
Sprouts' Aboveground Emergence
Similar to the first repetition, here, too, the two pathogens harmed the sprouts when applied separately 14 DBS (Table 3). This infection procedure caused the emergence values to drop from 96% emergence in the non-infected control to 64-68% (p < 0.05). Compared to the 14 DBS results, the separate infection on the sowing day had a lesser effect in the M. phaseolina treatment (84% emergence) but a stronger impact in the M. maydis treatment (55% emergence). The combined infection caused close to 60% aboveground sprouting, with minor differences between the dates 14 and 0 DBS.
Interestingly, it appears that the addition of M. phaseolina to the soil 14 DBS shielded the plants from the devastating effect of M. maydis, which was added afterward. This 14 DBS pre-inoculation with M. phaseolina caused drastic emergence recovery (72%). Yet, applying this sequence inoculation on sowing days (Mp 0 DAS + Mm 7 DAS) failed to protect the plants, and only 32% of them emerged above the ground surface. In contrast, sequenced soil inoculation with M. maydis before M. phaseolina caused acute emergence suppression on both dates applied (48% and 36%).
Plants Development
Forty days after sowing (Table 4), the plants' growth indices had the same tendencies as the emergence parameters. Early soil infection 14 DBS with M. maydis or M. phaseolina had an inhibitory effect on the plants' wet weight and number of leaves (6-16% and 31-33% reduction, respectively; statistical significance was measured for the number of leaves). As in the first biological repeat, soil inoculation with M. phaseolina at sowing had a lesser impact on the leaf number (the weight value was slightly reduced). However, unlike the first repeat, M. maydis infection at sowing caused more severe growth inhibition (44% and 54% reduction in wet weight and leaf number, respectively). When M. maydis was combined with or preceded M. phaseolina in the soil 14 DBS, the growth depression was more severe than with the single infections (38% fresh weight and 40-52% leaf number reduction). Applying the pathogens in a sequence with M. phaseolina first at 14 DBS had some protective effect against the destructive activity of M. maydis. Yet, sequence infection starting from sowing, regardless of the pathogens' order, resulted in poor growth indices (59-69% biomass and 62-71% leaf number reduction).
Sprouts' Survival Rate
Regarding the plants' survival percentages at the end of the experiment, the second repetition (Table 4) also revealed quite a different picture from the first repeat. Here, sole infection with M. maydis or M. phaseolina 14 DBS caused a minor survival decrease (70% compared to 96% in the control treatment). However, survival rates decreased when M. maydis was applied alone at sowing (52%), simultaneously with M. phaseolina (60-63%), or, even more so, in a sequence where M. maydis came first 14 DBS or at sowing (43-53%). At the same time, the addition of M. phaseolina before M. maydis 14 DBS rescued these values (70% survival rate). This protective effect on survival was not observed when the M. phaseolina-M. maydis inoculation sequence started at seeding (Mp 0 DAS + Mm 7 DAS, 32% survival rate).
The Pathogens' DNA in Plant Roots
In the second experiment's repeat, the highest DNA values of each of the pathogens were recorded in the sole infection ( Figure 5). A peak of 0.5 relative DNA was recorded when M. maydis was applied alone in the soil 14 DBS, a nearly 1000 times higher value than the same treatment in the first experiment's repeat. In the same procedure on the sowing day, the M. maydis DNA levels dropped by half to 0.25. Still, those values were higher than the M. phaseolina pre-treatment. Moreover, they were higher by more than two orders of magnitude than the combined infection and M. maydis pre-inoculation treatments. Similar to fluctuations in the growth parameters, combined infection on 14 DBS resulted in higher M. maydis DNA levels than combined infection with seeding. Moreover, pre-treatment with this pathogen in a sequence with M. phaseolina starting from 14 DBS was less infectious than at sowing.
Changes in the DNA amounts of the M. phaseolina pathogen in those plants were similar to those of M. maydis, except for the ordered inoculation when M. maydis was applied first ( Figure 5H). Soil sequence inoculation with M. maydis first, starting 14 DBS, was more potent (ca. 10-fold higher CRD pathogen DNA levels) than implementing this sequence from the sowing day. This is in contrast with the higher growth and survival indices measured in the 14 DBS sequence treatment.
Similarities and Differences between the Two Experiments' Repeats
The experiment's two biological repeats had some similar results and some differences, as summarized in Figure 6. Similarities in both repetitions' results include a strong impact on the sprouts' development and survival (86%) when each pathogen was applied separately 14 DBS. A similarly severe result (75-86%) was obtained with sequence inoculation when M. phaseolina was added to the soil before M. maydis at 14 or 0 DBS. In contrast, M. phaseolina sole infection on the sowing days had a minor or no effect on the sprouts.
All other treatments varied between the two experiment's repeats. The main difference was the strong growth depression influence of the treatments in the second repeat. Indeed, the total disease severity estimation based on combining all symptoms measured was 40% in the first experiment and 80% in the second. The cause of this result is presumed to be the high virulence of M. maydis in this repeat since sole infection with this pathogen with seeding led to drastic growth repression.
Figure 6. Total disease severity estimation of both experiment's repetitions based on combining all symptoms measured. Red rectangle: any statistical difference from the control (uninfected healthy plants that achieved the highest growth parameters, p < 0.05). Disease severity percentages (on the right) are the relative amount of significantly different indices compared to the total measures (red rectangles/8 rectangles ratio).
Discussion
In Israel, M. maydis and M. phaseolina can be found in commercial cotton and maize fields [13]. While the former is considered a major threat to sensitive maize cultivars [37], it has an endophytic or opportunistic lifestyle in cotton plants [2,5]. M. phaseolina is the primary causal agent of cotton charcoal rot disease (CRD). This disease has become of increasing concern to Israel's cotton production over the past decade. Since both pathogens can be found in diseased cotton plants, they may contribute to the disease's outbreaks and spread [13,39]. However, when the two fungi had been previously inoculated simultaneously in the soil, CRD symptoms were prominently reduced while the plants' growth and yield improved [2]. It was concluded that competitive interactions between the two fungal species are the cause of this outcome. This antagonistic co-influence resulted in restricting the spread of M. maydis in maize plants and of M. phaseolina in cotton.
The current study aimed at deepening our understanding of these intriguing relationships found in cotton sprouts. Indeed, the results from two independent growth room trials indicate that the picture is more complex than a simple two-directional antagonism. In the second experiment's repetition, M. maydis inoculation at sowing had an aggressive impact, with a prominent effect on the plants' growth and survival. In this scenario, the pre-inoculation of the soil 14 DBS with M. phaseolina protected the plants. Thus, mutual inhibition between M. phaseolina and M. maydis exists, and each of the partners restricts the other fungal population from developing uncontrollably. One possible explanation is that the antagonism is formed in situ, since both fungi occupy the same space; therefore, the fungi interfere less with plant growth. An alternative explanation is that the antagonism directly results from inhibitory metabolites secreted by both fungi [2].
The qPCR results raise an interesting question: why are there different tendencies between 14 and 0 DBS when the pathogens are applied alone? It appears that M. maydis could successfully inoculate the plants when added at sowing. In contrast, M. phaseolina is best established in the plants when added to the soil at 14 DBS. This is emphasized even more when M. maydis is inoculated at 0 DAS and M. phaseolina at 7 DAS.
Tracking the effect of M. maydis soil inoculation on cotton plants under field conditions throughout the whole season in a two-year study showed that this fungus did not affect cotton growth parameters or yield [2]. These results imply that the maize pathogen maintained an endophytic lifestyle in cotton plants. Yet, under drought stress, M. maydis infection led to decreased growth of the cotton plants, without any measurable effect on yield production [2]. So, some opportunistic behavior of this pathogen in cotton could exist. The current study's results are in line with this conclusion. The second experiment repeat's conditions were probably favorable for the pathogenic behavior of M. maydis and evoked its aggressive impact. Resolving this assumption, and the nature and causes of these interesting relationships over a full growth period, will require further work. Similar antagonistic interactions have been previously reported in pairs of phytoparasitic fungi on other plants. One example is the suppression of Cochliobolus sativus by the Fusarium (roseum) species [40]. In this case, C. sativus and F. pseudograminearum consistently and significantly reduced one another under field conditions. Another example is the interaction of F. oxysporum and F. solani on pea roots [41]. While these pathogens inhibit one another, coinfection by other pathogens may result in a different outcome: increased disease severity. For example, roots infected with Aphanomyces euteiches were more susceptible to Fusarium root rot than those exposed only to Fusarium spp. [42]. This observation was confirmed by qPCR, which revealed significant changes in colonization rates when multiple species were present. Yet, the underlying mechanism in the M. maydis and M. phaseolina co-infection seems somewhat different. According to former studies, M. maydis does not cause actual disease in cotton. It is a root and stem inhabitant [2,5,13] that may provide some defense against other intruding soil pathogens. For example, Sabet et al. [5] demonstrated that the interaction between the cotton pathogen F. oxysporum and M. maydis on the roots induces a decrease in the severity of Fusarium cotton wilt.
It is curious why M. maydis failed to protect the plants when it was added to the soil after M. phaseolina. It is possible that the inhibitory metabolites produced by M. maydis did not reach an effective dose before the fungus had grown for some time. Thus, the action of M. maydis against infection with the CRD fungus is considerably reduced when inoculated into the soil after M. phaseolina and is enhanced when inoculated before it. While this assumption should be based on future studies, it should also be considered that other partners in the soil microbiome may be involved. Since the soil and roots microflora include many different microorganisms, the fabric of the relationship between them most probably affects the interactions between M. maydis and M. phaseolina as well. Yet, this study's results provide an essential opening stage for subsequent studies and implications for future control strategies development. The findings suggest a decreased risk of yield loss in regions where M. maydis and M. phaseolina co-occur, especially when the former precedes the latter.
The role of M. maydis as an endophyte in cotton is still unclear. This is also true for other fungal endophytes, both pathogens and harmless species. Until recently, studies of fungi in cultivated cotton focused primarily on monitoring and controlling plant pathogens [43]. Today, there is increasing interest in asymptomatic fungal endophytes, and attempts are being made to unravel their potential as biological control agents. Still, surveys are needed to better characterize their distribution patterns, diversity, and possible implementation in integrated pest management. In one such survey, the pathogenicity of 42 isolates was tested on cultivated cotton in Australia [44]. All isolates caused a localized discoloration in stem tissue when infection was performed with the stem-stabbing method. Still, none of them could induce any foliar symptoms during the experiment's five-week period, suggesting that the endophytic fungi of native Gossypium species are unlikely sources of cotton pathogens. In cotton cultivated in the United States, members of the genera Alternaria, Colletotrichum, and Phomopsis, as well as the species Drechslerella dactyloides (formerly Arthrobotrys dactyloides) and Exserohilum rostratum, were isolated and identified as endophytes [43]. In addition, many latent pathogens were found. Some of those latent species are known antagonists of plant pathogens. Interestingly, no differences were found in endophyte species diversity or richness among different cotton varieties. Yet, the researchers did detect differences among plant tissues and during the growth period [43].
The above-mentioned data and other scientific studies together suggest that pathogenic soil biota, particularly soil-borne fungi, play an essential role in regulating plant productivity, i.e., biomass [45]. This hypothesis holds that plant species accumulate species-specific pathogens that reduce their host's performance, especially when the plant diversity is low, i.e., in monocultures where the relative abundance of the host is 100%, such as in commercial fields. This claim relies on crop yield reductions from pests, which typically increase with repeated crop cultivation in the same area and can be reduced by crop rotation [46]. Moreover, fungicide treatment or sterilization of the soil enhanced plant biomass production, even at low plant species richness [47,48].
Bioassays that inoculate plant species with isolates of one or two fungi may reveal host-specific pathogenic effects and provide proof of principle that these belowground organisms can act as pathogens and affect various plant species differently. However, such single-fungus single-plant bioassays are relatively simple [45]. The next step would be to determine the effects of multiple fungal interactions on plant growth since fungal co-infection is the rule rather than the exception. Combining next-generation sequencing and complex culture-based approaches will reveal the most influential fungal pathogenic actors and their roles in such biodiversity-ecosystem functioning.
Conclusions
Crop rotation has long been known as an effective means to reduce plant diseases. Even if late wilt disease-resistant cultivars are used, maize cultivation enhances the abun- | 2022-02-13T16:25:39.329Z | 2022-02-10T00:00:00.000 | {
"year": 2022,
"sha1": "5418f023f2f72604d8bf489b8e585736b888e567",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0472/12/2/255/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b70d5946d2584ebe2e7bf9607ed9228b15f9cd23",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
232132267 | pes2o/s2orc | v3-fos-license | Higher Impulse Electromyostimulation Contributes to Psychological Satisfaction and Physical Development in Healthy Men
Background and Objectives: This study investigated the various impulse effects of whole-body electromyostimulation (WB-EMS) on psychophysiological responses and adaptations. Materials and Methods: The participants included fifty-four men between 20 and 27 years of age who practiced isometric exercises for 20 min, three days a week, for 12 weeks while wearing WB-EMS suits, which enabled the simultaneous activation of eight muscle groups with three types of impulse intensities. Participants were allocated to one of four groups: control group (CON), low-impulse-intensity group (LIG), mid-impulse-intensity group (MIG), and high-impulse-intensity group (HIG). Psychophysiological conditions were measured at week 0, week 4, week 8, and week 12. Results: Compared with the CON, (1) three psychological conditions in LIG, MIG, and HIG showed positive tendencies every four weeks, and the analysis of covariance (ANCOVA) test revealed that body image (p = 0.004), body shape (p = 0.007), and self-esteem (p = 0.001) were significantly different among the groups. (2) Body weight, fat mass, body mass index, and percent fat in the CON showed decreasing tendencies, whereas those in LIG, MIG, and HIG showed a noticeable decrease, which revealed that there were significant differences among the groups. Specifically, a higher impulse intensity resulted in a greater increase in muscle mass. (3) Although there was no interaction effect in the abdominal visceral fat area, there were significant interactions in the abdominal subcutaneous fat (ASF) and total fat (ATF) areas. Both the ASF and ATF in the CON showed decreasing tendencies, whereas those in other groups showed a noticeable decrease. The ANCOVA revealed that the ASF (p = 0.002) and ATF (p = 0.001) were significantly different among the groups. In particular, the higher the impulse intensity, the greater the decrease in abdominal fat. Conclusions: This study confirmed that high-impulse-intensity EMS can improve psychophysiological conditions. In other words, healthy young adults felt that the extent to which their body image, body shape, and self-esteem improved depended on how intense their EMS impulse intensities were. The results also showed that higher levels of impulse intensity led to improved physical conditions.
Introduction
The definitions of health and beauty have varied over time, but the general standard for men has always been an image of being slim and muscular [1]. As with many issues, body image has become a concern in our society. Body image is the perception of one's body size and appearance and the emotional responses to this perception [2]. Cash [3] reported that body image is a multidimensional construct and refers to a person's perceptions and attitudes, including feelings, thoughts, and behaviors regarding their own body and appearance. Cash [4] also reported that the cognitive-behavioral model of body image, which includes personality, physical or interpersonal attributes, and cultural socialization, plays a role in how invested individuals are in their body image and how they evaluate it. One facet of attitudinal body image is referred to as body satisfaction or dissatisfaction [5]. Inaccurate perceptions of body size and negative emotional reactions can result in varying degrees of body image dissatisfaction. Negative views towards obesity have been internalized. Many people have adopted the belief that obese individuals are unattractive, psychologically impaired, or medically sick [6]. Obesity caused by a sedentary lifestyle is associated with inappropriate food intake and energy imbalances [7]. Among the kinds of obesity, abdominal obesity is mainly seen in males and is a dangerous factor that causes heart and metabolic diseases, as well as blood vessel problems. Moreover, the larger number of adipocytes in the abdominal region increases metabolic complications [8,9]. This association seems to be due to a higher lipolytic rate in the visceral and deep subcutaneous adipose tissue, promoting an increase of free fatty acids in the blood circulation [10] and an increase in the hepatic synthesis of triglycerides, which translates into dyslipidemia [11]. Additionally, adipose tissue plays an important role in the development of systemic inflammation by secreting several cytokines and chemokines [12,13].
Physical activity, controlled diet, anti-obesity medication, and liposuction represent significant modalities in the treatment of obesity, resulting in increased energy expenditure, decreased energy uptake, reduced fat tissue, and an increased lean body mass. Additionally, regular physical activity and exercise have long been known to increase the metabolism and reduce fat mass, contributing to a more positive body image or shape [14,15], as well as self-esteem [16]. However, attempts to resolve obesity through traditional exercise or diets are somewhat inadequate for people who want fast results or those with metabolic syndromes. In addition, if abdominal obesity cannot be resolved within a short period of time, various complications can result, such as those described above. Therefore, other nonconventional methods have been devised.
Whole-body electromyostimulation (WB-EMS) is a relatively recently adopted modality that provides exercise-like effects in which artificial contractions are induced by electric currents from an external source [17,18], unlike natural contractions induced by the motor nerve of the central nervous system. Electrical muscle stimulation (EMS) delivers a stimulus to local muscles in a static state at sufficient intensities to evoke muscle contractions [19]. WB-EMS is time-efficient and less debilitating than localized EMS, thus producing a higher acceptance among nonathletes or athletes [20]. Maffiuletti [21] suggested that electrical stimulation increases maximal strength and improves physical fitness. Von Stengel and Kemmler [22] analyzed the changes in the maximum isokinetic leg/hip extensor strength and leg/hip flexor strength after WB-EMS interventions in men from different periods of life. They found that, although there was an inconsistent tendency in terms of WB-EMS-induced lower extremity strength, WB-EMS significantly increased the maximal hip/leg strength throughout the adult male lifespan. Furthermore, Kemmler et al. [23] demonstrated that WB-EMS had positive effects on muscle mass and fat mass, as well as improved functional capacity, even in older, sedentary people. They also reported that WB-EMS is gentle on the joints and reduces the risk of injury due to excessive loads resulting from weight training. Recently, Kim and Jee [24] reported that obese elderly women who exercised with music while wearing WB-EMS suits showed improved body composition and cholesterol levels after eight weeks. In addition, they also found decreased tendencies in some markers, such as tumor necrosis factor-α, C-reactive protein, resistin, and carcinoembryonic antigen, in the WB-EMS intervention group.
However, even though there is some evidence that WB-EMS favorably improves body composition, biomarkers, and muscle mass or strength, few studies clearly address those benefits. In particular, it has not been confirmed whether a dose-response relationship exists across different impulse intensities, nor how WB-EMS affects psychological factors such as body image, body shape, and self-esteem. Therefore, this study investigated the effects of WB-EMS at different impulse intensities on psychological conditions (body image, body shape, and self-esteem) and physical conditions (body composition and abdominal fatness) in healthy young men, in accordance with the dose responses to electrical stimulation.
Participants
The participants were aged between 20 and 27 years old. All volunteers wanted to improve their body shape and were checked with a bioelectrical impedance analysis (BIA) device. This study recruited healthy male college students who lived in a dormitory and had not exercised regularly during the six months prior to the study. They were excluded if they had received any treatment for weight loss, taken any medication known to affect body composition, or undergone any major surgery during the one year prior to the start of the study. The following were also reasons for exclusion: a history of coronary arterial disease, severe cerebral trauma, cerebrovascular disease, pulmonary disease, uncontrolled hypertension, cancer, or psychiatric diseases such as eating disorders. After completing a survey and taking baseline measurements, fifty-four participants were randomly assigned to one of four groups using random number tables and assigned identification numbers upon recruitment: the control group (CON, n = 13), low-impulse-intensity group (LIG, n = 13), mid-impulse-intensity group (MIG, n = 14), and high-impulse-intensity group (HIG, n = 14). Although sixty participants were initially gathered, three were excluded, because two of them had taken part in an exercise program for over six months and another refused to participate. In the follow-up and analysis phases, two participants from the MIG and one participant from the HIG dropped out for personal reasons. Finally, fifty-four participants took part in this study, as shown in Table 1. All values are expressed as mean ± standard deviation. BMI, body mass index; CON, control group; LIG, low-impulse-intensity group; MIG, mid-impulse-intensity group; HIG, high-impulse-intensity group.
Experimental Design
This single-blind, randomized, controlled trial was conducted in a research center at Hanseo University, Seosan, Korea. This study followed the principles of the Declaration of Helsinki and received approval from the institutional ethics committee (26 September 2017 to 25 September 2018; 2-1040781-AB-N-01-2017083HR). This study was registered with Clinical Research Information Service (CRIS), reference KCT0005931. Prior to the study, the principal investigator explained all the procedures to the participants in detail. All participants read and signed an informed consent form. They arrived at the research center to complete a self-reported questionnaire about their health status and to learn how to record their calorie intake and calorie output in a diary.
The intervention program of this study lasted for 12 weeks, similar to the duration of previous studies [25][26][27]. The assessments were performed at week 0 (baseline) and then every 4 weeks for 12 weeks. Although all participants were assigned to four groups, no participant or other staff member was aware of the group assignments for the duration of the trial. All participants wore WB-EMS suits fitted to their individual size. The LIG, MIG, and HIG underwent 20-min WB-EMS sessions combined with isometric exercises, in accordance with their intensities of electrical stimulation, 3 times a week for 12 weeks. In other words, they received one of three types of electrical stimuli at different intensities relative to their maximum tolerance (1 MT). The CON also wore the WB-EMS suit for the same duration as the other groups, but they did not receive any electrical stimuli while performing isometric exercises. The amount and intensity of the isometric exercise were the same for all the groups.
Measurement of Calorie Intake and Calorie Output
At the pre-experiment session, the participants were provided with a diary to record what they consumed for breakfast, lunch, and dinner throughout the 12-week experimental period. For the record of calories consumed at week 0, the foods consumed on the day before the experiment were recorded in the diary. An expert entered the food type and volume into CAN-Pro 5.0 (Korean Nutrition Society, Seoul, Korea) every day, calculated the caloric intake, and then performed an evaluation at the end of each month. The recorded calorie intake data were averaged and analyzed at baseline, week 4, week 8, and week 12.
The daily amount of physical activity performed outside the experiment was recorded and calculated using the international physical activity questionnaire (IPAQ) short form [28,29]. In order to increase the accuracy of the responses, an expert provided a diary to record the contents of the questionnaire on a daily basis. The participants answered the questionnaires based on the recordings of physical activities for the past 7 days throughout the 12-week experimental period. For the record of calorie output for week 0, physical activity performed on the day before the experiment was recorded in the diary. The daily calorie output was calculated by metabolic equivalent (MET)/minutes (kcal/kg/min) at the end of every month (Table 2). Table 2. Definition and degree of the category scores by the international physical activity questionnaire (IPAQ).
Category: Low. Criteria (any one of the following 2): no activity is reported; OR some activity is reported, but not enough to meet the Moderate or High categories.
Category: Moderate. Criteria (any one of the following 3): 3 or more days of vigorous activity of at least 20 min per day; OR 5 or more days of moderate-intensity activity and/or walking of at least 30 min per day; OR 5 or more days of any combination of walking, moderate-, or vigorous-intensity activities.
Category: High. Criteria (any one of the following 2): vigorous activity on at least 3 days, accumulating at least 1500 MET/min per week; OR 7 or more days of any combination of walking, moderate-, or vigorous-intensity activities, accumulating at least 3000 MET/min per week.
The equations for calculating the physical activity degree are as follows: Walking MET/min/week = 3.3 × min of activity/day × days per week; Moderate-intensity MET/min/week = 4.0 × min of activity/day × days per week; Vigorous-intensity MET/min/week = 8.0 × min of activity/day × days per week; Total MET/min/week = Walking MET/min/week + Moderate-intensity MET/min/week + Vigorous-intensity MET/min/week. MET: metabolic equivalent.
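To make the scoring concrete, the following is a minimal sketch of the IPAQ short-form equations above; the function name and the example values are illustrative, not taken from the study.

```python
def ipaq_met_minutes(walk_min, walk_days, mod_min, mod_days, vig_min, vig_days):
    """Weekly MET/min from the IPAQ short-form equations given above."""
    walking = 3.3 * walk_min * walk_days    # Walking MET/min/week
    moderate = 4.0 * mod_min * mod_days     # Moderate-intensity MET/min/week
    vigorous = 8.0 * vig_min * vig_days     # Vigorous-intensity MET/min/week
    return walking + moderate + vigorous    # Total MET/min/week

# Hypothetical example: 30 min of walking on 5 days and 20 min of
# vigorous activity on 3 days per week.
print(ipaq_met_minutes(30, 5, 0, 0, 20, 3))  # 495 + 0 + 480 = 975 MET/min/week
```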
Using Table 2, the total score was obtained from the duration (in minutes) and frequency (days) of walking, moderate-intensity activity, and vigorous-intensity activity, summing the three MET/min/week components. Then, using the data, the average amount of physical activity per week was calculated based on the IPAQ score conversion method. These data were averaged on a monthly basis and analyzed every 4 weeks.
Measurement of Psychological Conditions
The psychological conditions of the healthy male participants were evaluated and analyzed in three categories, and the reliabilities of the three questionnaires from week 0 to week 12 were measured by calculating Cronbach's α, representing internal consistency, as shown in Table 3. First, the Body Image Acceptance and Action Questionnaire (BI-AAQ) was used to measure the participants' flexibility and acceptance regarding their own body image [30]. This questionnaire was developed to measure body image flexibility, assessing the acceptance of negative or unwanted thoughts, perceptions, body sensations, and emotions related to one's body. The BI-AAQ comprised 12 items rated on a 7-point scale ranging from 1 = never true to 7 = always true. The participants' scores on the items were reverse-scored and averaged; lower scores reflected higher levels of body image flexibility. Second, the Body Shape Questionnaire (BSQ) developed by Cooper et al. [31], an 8-item self-reported measure, was used to assess negative concerns regarding body shape and size. This questionnaire consisted of four factors: fear of obesity, fear of exposure, experience of vomiting, and body dissatisfaction. Questions such as "Have you pinched areas of your body to see how much fat there is?" and "Have you thought that your thighs, hips, or bottom are too large for the rest of you?" were measured on a 6-point scale (1 = never to 6 = always). A lower total score was indicative of greater body satisfaction.
Third, the Rosenberg self-esteem scale (SES) was used to measure the self-esteem of the college students [32]. This questionnaire was designed to conceptualize self-esteem as a single dimension and to allow the participants to evaluate themselves comprehensively. The items of the survey instrument measured degrees of self-esteem and self-approval and consisted of 10 items: 5 items regarding positive self-esteem (questions 1, 2, 3, 4, and 5) and 5 items regarding negative self-esteem (questions 6, 7, 8, 9, and 10). The negative self-esteem items were reverse-scored. A higher total score represented a higher level of self-esteem.
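As a rough illustration of the scoring rules described above (reverse scoring and the Cronbach's α internal-consistency check), the sketch below shows one possible implementation; the function names and the example responses are assumptions for illustration only, not the study's data.

```python
import numpy as np

def reverse_score(item, scale_max, scale_min=1):
    """Reverse a Likert item, e.g. 7 -> 1 and 1 -> 7 on the BI-AAQ's 1-7 scale."""
    return scale_max + scale_min - item

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical responses: 5 respondents x 4 items on a 1-7 scale.
responses = np.array([[6, 5, 6, 7], [2, 3, 2, 2], [4, 4, 5, 4],
                      [7, 6, 7, 6], [3, 2, 3, 3]])
print(cronbach_alpha(responses))              # internal consistency
print(reverse_score(responses, scale_max=7))  # reverse-scored items
```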
Measurement of Physiological Conditions
The physiological conditions of the participants were evaluated and analyzed by BIA and computed tomography (CT), as follows. First, regarding the BIA, height was measured using a BMS 330 Anthropometer (Biospace Co., Ltd., Seoul, Korea), and body weight, muscle mass, fat mass, and percent fat were assessed using an InBody 230 Body Composition Analyzer (Biospace Co., Ltd., Seoul, Korea). This analyzer is a segmental impedance device with stainless-steel electrode interfaces. The participants stood upright, placing their feet on the foot electrodes and gripping the hand electrodes. Eight tactile electrodes were attached to the surfaces of both hands and feet: palms, fingers, front soles, and rear soles. Body composition was measured before dinner and after voiding [33].
Second, for the CT screening, all participants visited the Seoul Songdo Hospital in Korea. The participants lay supine with both arms raised overhead. A radiologist performed a CT scan (Toshiba Scanner Aquilion Prime Model TSX-303A, Toshiba Medical Systems Corporation, Tokyo, Japan) of the abdomen four times (week 0, week 4, week 8, and week 12). All measurements were taken by the same radiologist throughout the study to minimize measurement error. The scan for abdominal fatness was performed at the level of the umbilicus, between the 4th and 5th lumbar vertebrae. Abdominal visceral fat (AVF) and abdominal total fat (ATF) areas were estimated by delineating the regions and calculating an attenuation range of −190 to −30 Hounsfield units. The abdominal subcutaneous fat (ASF) area was calculated by subtracting the AVF area from the ATF value. All area values were expressed in cm². This study tried to secure the safety of the participants by performing the abdominal measurements in the shortest time possible while minimizing the radiation dose from the CT scans [34,35]. In addition, the exposure dose of the participants was measured. An average value of 1.69 mSv was measured from the abdominal fat CT scans, for a total average of about 6.91 mSv from all four measurements. This is somewhat higher than natural radiation exposure (about 3 mSv) and the radiation dose (about 3-7 mSv) that airplane crews are typically exposed to during one year. However, it is lower than the annual radiation dose limit for radiation workers (20 mSv per year) or the average radiation dose (about 1000-2000 mSv) received during radiation therapy for cancer treatment. In other words, this dose was considered and confirmed not to be dangerous or harmful to the participants' health [36].
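The fat-area computation described above reduces to thresholding a CT slice in the adipose Hounsfield-unit range and converting the pixel count to cm². The sketch below illustrates this under stated assumptions: the slice and the region masks are hypothetical NumPy arrays, and the pixel spacing is an illustrative value, not the scanner's actual setting.

```python
import numpy as np

def fat_area_cm2(hu_slice, region_mask, pixel_mm=0.8):
    """Area (cm^2) of adipose pixels (-190 to -30 HU) inside a region mask."""
    adipose = (hu_slice >= -190) & (hu_slice <= -30) & region_mask
    return adipose.sum() * (pixel_mm ** 2) / 100.0  # mm^2 -> cm^2

# Hypothetical slice and masks; in practice these come from the CT scan
# and the radiologist's delineations.
ct = np.random.default_rng(0).integers(-300, 200, size=(512, 512))
abdomen_mask = np.ones_like(ct, dtype=bool)    # whole delineated abdomen
visceral_mask = np.zeros_like(ct, dtype=bool)  # delineated visceral cavity
visceral_mask[128:384, 128:384] = True

atf = fat_area_cm2(ct, abdomen_mask)   # abdominal total fat
avf = fat_area_cm2(ct, visceral_mask)  # abdominal visceral fat
asf = atf - avf                        # subcutaneous fat, as in the study
```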
Measurement of Creatine Kinase
This study also took the safety of the participants into consideration by measuring creatine kinase (CK). CK was included because it is considered an indicator of muscle damage during or after exercise with electrical muscle stimulation. Blood samples were taken after fasting for 10 h or longer and were collected using BD Vacutainer tubes (Becton Dickinson, Franklin Lakes, NJ, USA) at 8 a.m. the following day. After the participants were stabilized for 10-15 min, 5 mL of blood was collected from the antecubital vein with a disposable syringe by a medical laboratory technologist. Of the 5 mL of venous blood, 2 mL was added to an anticoagulant tube (EDTA bottle), shaken, and centrifuged at 3000 rpm for 5 min. The remaining 3 mL was left at room temperature for 1 h and centrifuged at 1000 rpm for 15 min. The isolated serum was kept frozen until the test. The samples were taken to the laboratory for analysis [37]. CK was analyzed using a Beckman Coulter Inc. device (Brea, CA, USA) at week 0, week 4, week 8, and week 12.
WB-EMS Protocol
Participants were given WB-EMS suits made by Miracle® (Seoul, Korea) in various sizes, according to their individual size. The suit was composed of a silicone conductive pad and was controlled via Bluetooth. This suit enabled the simultaneous activation of eight pairs of muscle groups (upper legs, upper arms, buttocks, abdomen, chest, lower back, upper back, and latissimus dorsi) with selectable intensities for each region [24]. In order to generate the effects of a diverse range of motion, eight types of isometric movements were performed during the impulse phase, as per the instructor's direction, as shown in Figure 1. Based on the available literature [18,[38][39][40][41], the stimulation frequency was set at 85 Hz, the impulse width at 350 µs, the impulse rise as a rectangular application, and the impulse intensity as a relative voltage of the maximal peak voltage (160 V). The impulse duration was 6 s, with a 4-s break between impulses. Each group, trained by a qualified instructor, conducted 20-min WB-EMS sessions 3 times a week (Mondays, Wednesdays, and Fridays), with nonconsecutive days allowing rest between sessions. This study used 1 MT as the maximum peak voltage, similar to calculating the maximal voluntary contraction as one maximal repetition [42]. Each 1 MT of the upper and lower body was measured and stored via Bluetooth, and the intensity was adjusted for each individual during the isometric exercise. To prevent the participants from being surprised or uncomfortable with the electrical stimulus, the 1-MT level was gradually increased after starting with a low stimulation current [43][44][45]. All participants were asked to express the difficulty level of the exercise during the isometric exercises while wearing the EMS suit. An instructor asked for the ratings of perceived exertion (RPE) every 5 min, and an assistant recorded them. The electrical stimulation was stopped at the request of the participant upon reaching an unbearable level on the RPE scale [46], at which point the intensity was set as 1 MT. In other words, the % MT of this study was obtained through the RPE scale, a numerical scale ranging from 6 to 20, where 6 means "no exertion at all" and 20 means maximal exertion. The intensity of the exercise was estimated by applying the RPE during the exercise to the CON as well as to the three experimental groups. The intensity of the electrical workout differed relative to 1 MT: the LIG was assigned 60% of 1 MT, the MIG 70% of 1 MT, and the HIG 80% of 1 MT from baseline to the end of the experiment. Although the CON performed isometric exercises while wearing EMS suits, they did not receive any electrical stimuli.
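For reference, the stimulation parameters reported above can be summarized in a small configuration structure. This is only an illustrative encoding: the class and field names are ours, and only the numeric values come from the protocol described in the text.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WBEMSProtocol:
    frequency_hz: int = 85         # stimulation frequency
    pulse_width_us: int = 350      # impulse width, rectangular rise
    impulse_on_s: int = 6          # impulse duration
    impulse_off_s: int = 4         # break between impulses
    session_min: int = 20          # session length
    sessions_per_week: int = 3     # Mondays, Wednesdays, Fridays
    device_peak_v: int = 160       # device maximal peak voltage

# Impulse intensity per group, as a fraction of each participant's
# individually measured maximum tolerance (1 MT).
GROUP_INTENSITY = {"CON": 0.0, "LIG": 0.60, "MIG": 0.70, "HIG": 0.80}

protocol = WBEMSProtocol()
print(protocol, GROUP_INTENSITY["HIG"])
```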
Statistical Analyses
Microsoft Excel (Microsoft, Redmond, WA, USA) was used to organize the data, expressed as mean ± standard deviation (SD). The sample size was determined using G*Power v. 3.1.9.7 [47,48], considering an a priori effect size of f(V) = 0.25 (medium effect), an α error probability of 0.05, a power (1 − β error probability) of 0.95, 4 groups, and 4 measurements. Accordingly, 13 to 14 subjects were assigned to each of the 4 groups, for a total of at least 52 subjects required for this program. SPSS (version 22.0; IBM Corp., Armonk, NY, USA) was used to perform all statistical analyses, and the Shapiro-Wilk test was used to check the data distribution. Differences between the groups were examined using the Kruskal-Wallis rank test prior to comparing the groups. An analysis of variance (ANOVA) was used to evaluate differences between the groups at baseline, and a 4 × 4 repeated-measures ANOVA (group, time, and group-by-time interaction) was used to assess the effects of the intervention. An analysis of covariance (ANCOVA) was used to determine the differences between groups if there was an interaction between group and time (pre-values and post-values). Moreover, the Bonferroni post hoc test was applied if there were significant differences among the four groups. An intention-to-treat analysis was conducted to compare the CON, LIG, MIG, and HIG. The groups served as the between-group factor, and week 0 vs. week 4 vs. week 8 vs. week 12 served as the within-group factor. For all analyses, the significance level was set at p ≤ 0.05.
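As a rough sketch of the group-by-time analysis described above, the snippet below runs a mixed (split-plot) ANOVA on synthetic long-format data using the pingouin library. This is an assumption-laden illustration: the data are fabricated for the example, pingouin is only one of several packages that could be used, and it is not what the authors ran in SPSS.

```python
import numpy as np
import pandas as pd
import pingouin as pg  # assumed installed; any mixed-ANOVA routine would do

# Synthetic long-format data: 10 hypothetical subjects per group,
# measured at weeks 0, 4, 8, and 12.
rng = np.random.default_rng(0)
groups = ["CON", "LIG", "MIG", "HIG"]
rows = []
for subject in range(40):
    group = groups[subject % 4]
    for week in (0, 4, 8, 12):
        rows.append({"id": subject, "group": group, "time": week,
                     "fat_mass": 15.0 - 0.05 * week * groups.index(group)
                                 + rng.normal(scale=0.5)})
df = pd.DataFrame(rows)

# Group is the between-subject factor; week is the within-subject factor.
aov = pg.mixed_anova(data=df, dv="fat_mass", within="time",
                     between="group", subject="id")
print(aov)  # inspect the group, time, and interaction rows
```

If the interaction term were significant, the study's follow-up steps (ANCOVA and Bonferroni-corrected pairwise comparisons) would then be applied.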
Demographic and Controlled Variables
As shown in Table 1, there were no significant differences among the four groups. The demographic variables of this study indicated a homogeneity of subjects. There were also no significant differences in the controlled variables, as shown in Table 4.
Effects of WB-EMS Exercise on Psychological Conditions
There were significant interaction effects for all psychological questions (Table 5). The three psychological scales in the CON showed negative changing tendencies, whereas those in the other groups showed positive changing tendencies. The ANCOVA revealed that the BI-AAQ (F = 5.017, p = 0.004), BSQ (F = 4.680, p = 0.007), and self-esteem questionnaire (SEQ) (F = 8.468, p = 0.001) were significantly different among the four groups (not shown in Table 5). In particular, the HIG showed the most improved value at week 12, which was confirmed by the Bonferroni post hoc test. All values are expressed as mean ± standard deviation. CON, control group; LIG, low-impulse-intensity group; MIG, mid-impulse-intensity group; HIG, high-impulse-intensity group; PAC, physical activity category, scored as 1 (low), 2 (moderate), and 3 (high) activity levels; CK, creatine kinase. p-values were analyzed using the repeated-measures ANOVA test. G: group; T: time; G*T: group by time; IU: international unit. BI-AAQ, Body Image-Acceptance and Action Questionnaire; BSQ, Body Shape Questionnaire; SEQ, self-esteem questionnaire. a,b,c,d Bonferroni post hoc symbols.
Effects of WB-EMS Exercise on Physiological Conditions
As shown in Table 6, weight, fat mass, BMI, and percent fat in the CON showed decreasing tendencies, whereas those in the LIG, MIG, and HIG showed a noticeable decrease. There were significant interactions for all the variables in the repeated-measures ANOVA. The ANCOVA revealed that, although weight (F = 6.354, p = 0.001), fat mass (F = 7.368, p = 0.001), and BMI (F = 6.427, p = 0.001) in the three experimental groups were significantly lower than those in the CON, percent fat (F = 2.268, p = 0.092) did not show a significant difference. Muscle mass showed a different tendency, and the ANCOVA revealed that muscle mass (F = 5.758, p = 0.002) in the three experimental groups was significantly higher than that in the CON. In particular, the higher the applied impulse intensity, the greater the increase in muscle mass.
Effects of WB-EMS Exercise on Abdominal Fatness
Although there was no interaction effect for the abdominal visceral fat area, there were significant interactions for the abdominal subcutaneous fat and total fat areas (Table 7). Both subcutaneous fat and total fat in the CON showed decreasing tendencies, whereas those in the experimental groups showed a noticeable decrease. The ANCOVA revealed that subcutaneous fat (F = 5.517, p = 0.002) and total fat (F = 10.933, p = 0.001) were significantly different among the groups. In particular, the HIG showed the lowest value at week 12, which was confirmed by the Bonferroni post hoc test. All values are expressed as mean ± standard deviation. CON, control group; LIG, low-impulse-intensity group; MIG, mid-impulse-intensity group; HIG, high-impulse-intensity group. a,b,c,d Bonferroni post hoc symbols. G: group; T: time; G*T: group by time.
Discussion
This study found that the three psychological scales in the CON showed insignificant or negative changing tendencies, whereas those in the LIG, MIG, and HIG showed positive changing tendencies. Furthermore, the BI-AAQ for body image, BSQ for body shape, and SEQ for self-esteem were significantly different among the groups, showing that higher impulse intensities resulted in greater positive changes. In other words, the HIG, which received the highest impulse intensity, showed the most improved values from week 0 to week 12. Regarding body composition, the WB-EMS groups, which were given stronger stimulation, improved their body weight, fat mass, and muscle mass, and especially their ASF and ATF. The physiological variables also showed a positive relationship between higher impulse intensities and a greater degree of improvement.
Exercise is well known to lead to physical development. In particular, exercise can provide greater benefits when its intensity is higher than that of daily physical activity. However, if the intensity of exercise is too high or excessive, it can cause severe damage to stressed joints, as well as muscle ruptures. The WB-EMS suit, which has been in use for several years to compensate for this, protects the joints of the human body by reducing the load imposed by the weights used in isotonic exercise, while maximizing the effect of exercise by increasing its intensity. The effects of the high WB-EMS impulse intensity used in this study were similar to the results of another study, which showed increased lipid oxidation leading to positive effects on metabolic indicators and body composition in obese men [49]. Previous work also showed that electrical current thresholds were higher in obese than in nonobese subjects and that the stimulation tolerance of obese subjects appeared to diminish within one EMS session [19]. Similarly, this study observed the physiological responses of the participants in accordance with the electrical impulse intensities. According to our results, the ∆% of body weight in the CON, LIG, MIG, and HIG changed from baseline to −0.74%, −1.34%, −1.28%, and −1.40% at week 4; −0.52%, −1.48%, −1.43%, and −2.04% at week 8; and −0.38%, −2.57%, −5.76%, and −8.88% at week 12, respectively. Similarly, the ∆% of fat mass in the CON, LIG, MIG, and HIG changed from baseline to −1.69%, −5.49%, −6.27%, and −5.95% at week 4; −1.96%, −6.50%, −7.13%, and −6.99% at week 8; and 0.34%, −3.66%, −13.94%, and −28.33% at week 12, respectively. The ∆% of BMI and percent fat behaved similarly to body weight and fat mass. It can be interpreted that stronger EMS impulse intensities result in decreased body fat.
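The ∆% values quoted throughout this section are simple percent changes relative to the week-0 baseline; a minimal sketch with hypothetical numbers:

```python
def delta_pct(value, baseline):
    """Percent change of a measurement relative to its week-0 baseline."""
    return (value - baseline) / baseline * 100.0

# Hypothetical weights: a drop from 80.0 kg at week 0 to 72.9 kg at week 12
# reproduces the -8.88% body-weight change reported for the HIG.
print(round(delta_pct(72.9, 80.0), 2))  # -8.88
```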
In contrast, Porcari et al. [50] reported no significant changes in the circumferences of the arms or thighs, sum of skinfolds, body weight, percent fat, fat mass, or lean mass between the experimental and control groups after applying EMS. They also reported that the claims regarding the effectiveness of EMS for apparently healthy individuals were not supported by their findings. These findings may be explained and interpreted as follows. The stimulation sessions by Porcari et al. [50] were performed three times per week for eight weeks. The areas stimulated during each session were the biceps, triceps, quadriceps, hamstrings, and abdominal muscles. Using only such parts of the body for EMS may be problematic, since the sites were no more than a part of the whole body. The second problem was that the electrodes repeatedly detached from the subjects' skin because of the use of Velcro straps. The third problem was that the number of channels was low, and the fourth problem was that the off period after EMS was too long. The longer the resting time for muscles after EMS, the longer the time required for the pulse to fall below the threshold value of the muscles to induce muscle contractions again, which may reduce the efficiency of the muscle contractions. Hortobágyi and Maffiuletti [25] suggested that EMS programs lasting up to six weeks may induce alterations in muscle metabolism. Gondin et al. [26,42] and Ruther et al. [27] reported that applying EMS for periods longer than six weeks may cause muscle hypertrophy in the late phases of such programs. Regarding muscle mass, this study found that the ∆% of muscle mass in the CON, LIG, MIG, and HIG changed from baseline to −0.73%, 0.34%, 0.14%, and 0.20% at week 4; −0.67%, 0.73%, 0.61%, and 1.69% at week 8; and −2.21%, 1.38%, 5.31%, and 7.64% at week 12, respectively. In other words, muscle mass showed greater gains as the impulse intensity became higher. A substantial amount of research has also pointed to the positive effects of EMS on body composition when performed for periods of over 12 weeks [20]. This study measured abdominal CT images four times, from week 0 to week 12, to examine the extent to which WB-EMS affects the abdominal circumference. According to the results, the ∆% of the ASF in the CON changed from baseline to −0.43% at week 4, −1.91% at week 8, and −1.86% at week 12, whereas those of the LIG, MIG, and HIG changed from baseline to 0.56%, −0.46%, and −0.44% at week 4; −3.69%, −2.76%, and −2.22% at week 8; and −0.47%, −16.40%, and −25.91% at week 12, respectively. Meanwhile, the ∆% of the ATF in the LIG, MIG, and HIG, which performed isometric exercises combined with WB-EMS, changed from baseline to −2.68%, −1.10%, and −1.95% at week 4; −6.44%, −3.65%, and −9.74% at week 8; and −5.21%, −14.99%, and −27.44% at week 12, compared with the ∆% of the ATF in the CON, which changed from baseline to 1.23% at week 4, −1.76% at week 8, and −0.86% at week 12. These results are similar to those reported by several previous studies, and they indicate that the thickness of abdominal subcutaneous fat can be reliably reduced when wearing WB-EMS and performing isometric exercises. Banerjee et al. [51] confirmed that EMS can be used in sedentary adults to improve physical fitness and may provide a viable alternative to more conventional forms of exercise in this population, as our results and previous studies also suggest.
Psychologically, it is not easy to tolerate exercise with high levels of electrical stimulation. Accordingly, among the three WB-EMS intensities applied in this study, we investigated which EMS impulse intensity has the greatest effect on the subjects' body image and satisfaction. In addition, we examined whether changes in body image and satisfaction can lead to changes in self-esteem. Body image flexibility has been associated with psychological flexibility regarding body image [52]. The BI-AAQ has also been used to measure and evaluate eating disorders, poor psychological health [53,54], distressing thoughts and feelings associated with binge eating [55], anorexia [56], and bulimia [57]. However, this study investigated psychological changes by using body image questionnaires such as the BI-AAQ to measure body image flexibility after the application of WB-EMS. Body image flexibility is defined as the capacity to experience the ongoing perceptions, sensations, feelings, thoughts, and beliefs associated with one's body fully and intentionally while pursuing chosen values [30]. Along with the BI-AAQ, this study used the BSQ and SES for observing psychological changes from pretest to posttest. Prior to investigating the effects of WB-EMS on the above variables, Marsh et al. [58] reported that the highest correlations existed between the body and physical appearance factors, with the three correlations relating competence to strength, body, and physical activity. These results indicated that body attractiveness is due to both body traits and physical appearance.
This study assigned the same isometric exercise to young male adults but provided low-, medium-, and high-intensity EMS impulses. We measured the effects of the three types of EMS impulses on the feelings of the participants, as well as which intensities were most helpful in improving their body image. In addition, this study also used the BSQ to measure psychological satisfaction regarding body image after performing isometric exercises combined with WB-EMS. The results revealed that the ∆% of the BI-AAQ in the CON changed from baseline to 0.75% at week 4, −0.63% at week 8, and 0.24% at week 12. The ∆% of the BI-AAQ in the LIG changed from baseline to −4.76% at week 4, 1.11% at week 8, and 2.14% at week 12. The ∆% of the BI-AAQ in the MIG and HIG changed from baseline to −4.22% and −2.65% at week 4, −5.18% and −7.78% at week 8, and −11.50% and −24.12% at week 12, respectively. Lower BI-AAQ scores indicate higher levels of body image flexibility. In other words, it can be interpreted that the stronger the EMS impulse, the higher the person's body image satisfaction. The BSQ showed similar results: −1.45% in the CON, −7.10% in the LIG, −22.95% in the MIG, and −25.09% in the HIG at week 12. Although the ∆% of self-esteem in the CON and LIG changed from baseline to −3.89% and −4.04% at week 4, −11.62% and −5.41% at week 8, and −13.17% and −7.55% at week 12, the ∆% of self-esteem in the MIG and HIG changed from baseline to 2.25% and 13.25% at week 4, 13.02% and 15.20% at week 8, and 18.81% and 29.68% at week 12. In other words, a higher EMS impulse intensity led to greater physical development, even though the greater exercise intensity can be mentally challenging.
According to some researchers, increased skeletal muscle, improved muscle strength without lifting weights, and even the preservation of muscle mass can result from the use of EMS [59][60][61]. The combination of EMS and exercise training can cause additional tension, thus creating more effective results. Gerovasili et al. [60] reported that the electrically induced contractions must be in the range of 60-80% of the maximal voluntary contraction. Consistent with this evidence, this study found that the psychological scores increased steadily each week, showing that high-intensity electrical stimulation in particular can improve body image, satisfaction, and self-esteem in healthy men. These findings are supported by the results of studies by Ahmad and Hasbullah [59], Gerovasili et al. [60], and Iwasaki et al. [61]. In addition, this study observed that the BSQ score of the HIG was the lowest, indicating a very positive result, and that the scores for health and physical activity in the HIG were higher than those of the other three groups after 12 weeks. Self-esteem in the HIG also showed a higher tendency compared with the other three groups from week 0 to week 12.
It can be hypothesized that repeated exposure to WB-EMS training may result in increased physical fitness and muscle function, reduced body fat mass, and improved psychological health. Ahmad and Hasbullah [59] reported that EMS training was able to improve the male body composition. Many studies, including this one, have shown that EMS training combined with isometric exercise can decrease fat mass and percent body fat. These results suggest that improved body composition can also increase self-esteem through greater satisfaction with body image and body shape. Similarly, Harvey et al. [62] suggested that the physiological benefits gained from functional electrical stimulation training led to significant psychological benefits as well. Anderson et al. [63] reported that 37 sedentary healthy women participated in baseline testing on a range of anthropometric measures, body composition, and self-perception measures. Subsequently, the participants were randomly assigned to one of three groups: a walking group, a walking + EMS group, or a control group. Compared with the control group after eight weeks, both walking groups had significant reductions in a number of anthropometric measures and improvements in the self-perception measures. The improvements in both the anthropometric measures and self-perceptions were greater for the walking + EMS group, which indicates that changes in self-perception may be affected by physiological changes.
Compared with the no-stimulation control group, all three EMS groups exhibited improved tendencies in self-esteem and significant improvements in body image and body shape. This effect was particularly apparent in the mid- and/or high-impulse EMS groups. These results are similar to previous research suggesting that exercise enhances self-perception [64][65][66][67][68][69] and contrary to other studies that found that exercise does not improve self-perception [70]. Ultimately, this study suggests that a WB-EMS suit equipped with an electrical muscle stimulation device can reduce fat and increase muscle mass, which, in turn, improves psychological factors. However, this effect only clearly appeared in the EMS group to which a high impulse intensity was applied.
Conclusions
This study confirmed that WB-EMS with a high electrical impulse intensity can improve body composition and psychological factors in healthy male adults. However, our study had some limitations. First, the evaluator was not blinded to the group allocation. Second, the participants consisted entirely of young men, and the sample was somewhat small. Considering these limitations, further studies investigating the effectiveness of exercising with WB-EMS devices in a greater number of participants with diverse demographic backgrounds are encouraged.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical restrictions.
"year": 2021,
"sha1": "ca29869644cd8ed7e14db78c38845733cd52a87b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1648-9144/57/3/191/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3c8e5d31c7195ff3f0154804820e3361f581cb70",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Qualitative Investigation of Sensitive Topics in Tax Compliance Study in Malaysia
Most researchers would probably agree on subject matters that possess sensitive elements, such as income, sex, religion, and politics. These topics are believed to be relatively intrusive and inappropriate to some. The same goes for issues related to tax compliance and other pressing matters surrounding it. Thus, the purpose of this research is to probe areas related to tax compliance study that can be considered sensitive. In-depth interviews were employed in this qualitative study to collect data. The interviews were conducted with 14 taxpayers from various age groups and social backgrounds. The findings were analysed based on the verbal responses of all participants, recorded and transcribed verbatim. Issues related to government and religion elicited more intense reactions from the respondents than other topics. This is evident in the verbal responses, physical reactions, and emotions portrayed by the participants. There are two main constraints in the study: the difference in race and religious faith between the researcher and the participants, and the small number of subjects involved. Topics related to government issues top the list in causing the most extreme reactions in respondents, followed by questions on the role of religious values. Questions on other areas did not trigger much stir.
Introduction
The data collection process appears to be a challenging task for almost all researchers in ensuring the reliability of data and the validity of the findings. This is probably more challenging for researchers involved in studies that address some of society's most pressing social issues, which are commonly associated with sensitive topics. Prior studies have identified topics or areas of research that are highly likely to be classified, by definition, as sensitive, such as issues involving sex, religiosity, or any powerful group such as the government (Lee, 1993). Lee (1993) also emphasizes that, despite the long list of topics stated as sensitive in previous studies, any topic can be regarded as sensitive depending on its context and environment, not on the individual topic itself. Similarly, based on a review of the sociological research literature, van Meter (2000) also concludes that a topic will be considered sensitive when the majority of society defines it as sensitive. Some researchers in tax compliance study, such as Lozza et al. (2013) and Darwish (2016), also regarded tax compliance as a sensitive topic because it requires people to reveal their true compliance intentions and attitudes. Furthermore, adding other sensitive topics, such as religiosity and perceptions towards the government, to the study of taxpayers' compliance attitudes might further raise the sensitivity level and hence lead participants to conceal their actual compliance intentions and attitudes. However, proper strategies and techniques during interviews should be considered to reduce sensitivity in research and to discourage participants from providing only favorable responses (Elam and Fenton, 2003). Similarly, in tax compliance or tax evasion study, an appropriate technique employed during data collection, such as an indirect technique, is expected to minimize the social desirability problem and consequently is more likely to encourage taxpayers to share their true views (Kirchler and Wahl, 2010). Therefore, this paper examines which topics can be regarded as highly sensitive in tax compliance study in a multi-religious and multi-cultural country, namely Malaysia. The remainder of the paper is organized as follows. The next section briefly reviews the topics that can be regarded as sensitive and the techniques or strategies recommended for interviews on sensitive topics. This is followed by a presentation of the method employed in this study, namely face-to-face interviews. Next, the findings and overall discussion are presented. Several limitations of the study are acknowledged, and the final section concludes the paper.
Sensitive Topics
There are mixed views about the definition of sensitive topics. Lee (1993, p. 4) defines sensitive research as a study on a specific topic that "potentially poses a substantial threat to those who are or who have been involved with it". Dempsey et al. (2016) argue that most topics have the capacity to be sensitive if they evoke an emotional response. Tourangeau (2011) states that a topic is sensitive because it involves intrusiveness, risk, and social desirability. Wellings et al. (2000), meanwhile, classify research as sensitive if it requires the disclosure of behaviors or attitudes which would normally be kept private and personal, which might result in offence or lead to social censure or disapproval, and/or which might cause the respondent to respond with angst. Tax compliance is one topic that fits the definitions of sensitive topics above, especially when taxpayers intentionally do not fully fulfill their tax obligations. An intentional tax non-compliance attitude occurs when taxpayers purposely find ways to reduce the amount of tax paid. The attempts to reduce tax liability are made legally or illegally. The former is known as tax avoidance, for example exploiting tax loopholes. The latter indicates illegal means, such as stating artificial transactions or underreporting income to reduce taxation, which is also known as tax evasion (Kirchler et al., 2003). Another sensitive topic is religiosity. The term religiosity is often defined as an individual's conviction, devotion, and veneration towards divinity. Delener (1990) defined religiosity as the degree to which individuals are committed to a specific religious group. In a multi-religious and multi-cultural society like Malaysia, religious expression has always been monitored by the government in order to protect racial harmony. This protection is clearly written into the constitution and has been implemented to safeguard the country whenever religious issues surface (Sani and Hamed, 2011). Similar to religiosity, people are also very careful when talking or giving opinions about any issue related to the government. Criticizing and showing disagreement with the government's actions are commonly interpreted as inclination towards the opposition. Thus, many people may underreport their views or decline to give their opinion about the ruling government in order not to reveal their stance. Tsai (2010), in his research on issues of political sensitivity in rural China, found that topics such as local governmental performance and public goods provision are sensitive topics to government officials.
Interview Issues in Sensitive Topics
An interview is a conversation between the researcher and research participants focusing on questions related to the research topic (Merriam, 2009). In collecting data on sensitive issues, the individual face-to-face in-depth interview is commonly employed (Timraz et al., 2017; Ryan and Dundon, 2008; Dickson-Swift et al., 2007). In the face-to-face mode, non-verbal language and cues can be very rich, including dress, body language, and mannerisms (Oltmann, 2016). The face-to-face approach also offers more possibilities to explore and uncover the feelings, emotions, and attitudes of participants (Crawford, 1997). The in-depth semi-structured interview is strongly suggested for investigating sensitive topics (Elam and Fenton, 2003). Questions in a semi-structured interview are more flexibly worded, or can be a combination of more and less structured ones (Merriam, 2009). The order of the questions and the exact wording are not determined ahead of time. This format gives the interviewer the opportunity to explore particular themes or responses further. Before going into the field to conduct an in-depth face-to-face interview on sensitive topics, a few elements must be taken into consideration by researchers. The first practical consideration for any researcher is seeking permission and gaining access from the institution where they want to conduct the interview and/or from individual research participants (Walls, 2010). Jepson et al. (2015) suggested that researchers send the interview schedule to potential respondents and explain the real issues to be discussed from the beginning of the interview, so that respondents can make a more informed decision about what the interview will cover. Another important element that must be addressed before an interview session is the guarantee of anonymity. Anonymity refers to conditions in which participants' personal information and identities are kept secret (Saunders et al., 2015). However, it is argued that true anonymity is difficult to achieve, because in a qualitative study the researcher knows the identity of the participants and has to meet them personally (Scott, 2005). Therefore, the definition of anonymity in a qualitative study is only applicable to people other than the researcher of that particular study (Saunders et al., 2015). Researchers can build rapport with participants by engaging in small talk, using an everyday conversational style, before beginning the interview (Gall et al., 2003). Good rapport helps both parties reconcile to the research agenda and uncovers much deeper extrapolations of the lived experiences of the participants. Good rapport also leads to depth and quality in the information and experiences revealed by participants (Elmir et al., 2011). It is also important to allow participants adequate time to respond fully. In some cases, respondents may also be offered the option of omitting certain questions should they find them inappropriate to answer during the interview. Data gathered through interviews must be recorded. Researchers can use field notes only, a recording device, or both to record the data (Tessier, 2012). Field notes help researchers document what they observe, while a recording device ensures that everything said is preserved for analysis (Merriam, 2009). To end an interview session, researchers can give a closing statement summarizing some of the important points and allowing an opportunity for participants to clarify information or add additional pertinent data.
Research Method
The data collection process is the most crucial part of any study. One of the most important issues is ensuring the reliability of the data, so that the interpretation of the data reflects the true opinions of the participants, particularly in a qualitative study. The in-depth interview was used in this study and was considered the best method for understanding people's perceptions of certain situations in assembling reality (Punch, 2005). In this study, the participants shared their opinions about the role of religious values, perceptions towards the government, and the impact of these elements on tax compliance attitudes. Since almost all of the topics involved in this study have been regarded as highly sensitive in prior studies (e.g., Lee, 2003; Lozza et al., 2013), the semi-structured in-depth interview was adopted as the interviewing format, because it is considered one of the most appropriate methods for a study that involves sensitive issues (Elam and Fenton, 2003). The semi-structured in-depth interview is widely adopted by researchers in qualitative studies, involving a set of pre-determined open-ended questions together with other questions that might arise during the interviews (DiCicco-Bloom and Crabtree, 2006). This interviewing format is flexible, giving the interviewer control in obtaining the information needed for the study while allowing some space for interviewees to expand on current issues or even discuss any arising issues (Hitchcock and Hughes, 1989). The interview instrument was developed as a guideline that included a list of topics to be explored during the interview. However, to minimize the issue of sensitivity during the interviews, indirect questions were constructed so that the participants would be more willing to share their opinions honestly and critically, without any direct association with themselves (Nuno and John, 2015). The questions were not posed directly to gauge the participants' own views on the specific issues in this study, but rather asked how the participants viewed Malaysians' compliance attitudes overall. The questions given to the participants were rather general in linking their views on religiosity, perceptions towards the government, and taxpayers' compliance attitudes. The participants were contacted before the interviews, and brief information regarding the topic coverage, duration of the interview, and anonymity assurance was given to them via email. During the interviews, the participants were aware of what was expected of them based on the information sheet provided, and they were also aware that they had the right to withdraw from the interviews at any time without providing any reason. These steps were taken to minimize the distress and discomfort of the participants, because an interview is normally considered a secret-sharing session (Orb et al., 2000). In addition, the researcher also tried to engage the participants in a small social conversation before shifting gradually to the actual conversation, in order to create a friendly environment for the interviews. The in-depth interviews were conducted with 14 participants. Since one of the aims of this study was to explore the role of religiosity in tax compliance attitudes, the participants were selected based on their ethnicity to represent the four main religions in Malaysia, namely Islam, Christianity, Buddhism, and Hinduism, because ethnicity is commonly associated with religion in Malaysia (Lee, 2000).
The participants were also required to have a minimum of three years of experience in paying tax. This was to ensure the participants had sufficient experience in sharing their views about the research topic.
The interviews were conducted in either English or Malay, depending on the participants' preference, to ensure that they were comfortable sharing their views openly with the researcher. The interviews were tape-recorded with the consent of the participants to ensure that all responses were captured for the transcription process. This process is expected to increase the validity of the data gathered, rather than depending on note-taking alone (Fielding and Thomas, 2008). The information from the interviews was transcribed from the recordings. In this study, verbatim quotations were employed to reflect the real feelings, thoughts, experiences, and basic perceptions of the participants (Neale et al., 2005). Therefore, the actual quotations were recorded even though some of the responses were grammatically incorrect. The researcher also tried to be cautious when interviewing participants who adhered to the same or a different religion from the researcher, who is a Muslim. Furthermore, since this study involved participants from a number of different religious backgrounds, the researcher made every reasonable effort to be as cautious as possible in asking questions and in responding appropriately during the interviews.
Findings
The interviews were conducted successfully with 14 participants from different backgrounds. There were seven males and seven females, representing all age groups from the 20s to the 70s, with the largest group of participants in their 30s. The participants represented three major ethnic groups, namely Malay and other indigenous groups, Chinese, and Indian. The participants also represented the major religions in Malaysia, namely Islam, Christianity, Buddhism, and Hinduism. Overall, the majority of the participants gave their full cooperation in sharing their views and opinions regarding the issues discussed in this study. Almost all of the participants appeared open and truthful in discussing the tax compliance issue, even though tax compliance is highly regarded as one of the sensitive topics in previous studies (e.g., Lozza et al., 2013). All participants did not hesitate to express their opinions openly regarding the high tendency of taxpayers to avoid tax or to pay less tax than they were supposed to, particularly among business taxpayers. One of the participants, P6, also willingly shared his personal experience with his family members regarding the tax non-compliance issue. The selected responses on the tax compliance issue are presented in Table 1. However, despite all the reasonable efforts made by the researcher to ensure that the participants were comfortable during the interviews, the researcher still faced difficulty in convincing some of the participants to share their views, particularly on specific topics, namely issues involving religion and perceptions towards the government. Upon responding to one of the religiosity questions, one participant (P10) clearly tried to avoid giving a direct answer and instead emphasized the requirement to be compliant regardless of the situation. Nevertheless, P4 and P11, who adhered to the same faith as P10, were willing to indicate their stance generally regarding this issue. The responses to the question are shown in Table 2. A similar response pattern emerged when the issue of religiosity continued to be discussed: P10 seemed uncomfortable continuing with this issue and tended to provide very brief and short answers, probably to indicate her true view of the topic without elaborating. The interview with P10 lasted less than 15 minutes, compared with the other participants, who took a minimum of 30 minutes on average for each session. Even though all participants were informed before the interview that there were no right or wrong answers and that they could express opinions based on their own perspectives, P10 still appeared reluctant to discuss the religiosity topic further. The following are examples of the questions related to religiosity posed to P10; her responses are presented in Table 3.
Table 3. Responses to religiosity questions
Question: There are two religious commitments, namely intrapersonal (spiritual) and interpersonal (social) religiosity. Based on these, which do you think may strongly influence people to comply with tax laws, and why?
Response: "No influence." (P10, Buddhist, Executive Officer)
Question: How do you think different levels of religiosity impact tax compliance, and why?
Response: "Disagree." (P10, Buddhist, Executive Officer)
Question: Honestly, do you really believe that people comply because of their religious values? Why?
Response: "No way." (P10, Buddhist, Executive Officer)
Opinions of the participants about Malaysians' perceptions towards the Malaysian government (the previous government, before 9 May 2018) were also gathered. In this study, many of the participants were rather hesitant to give their opinion at first. Only after being convinced that this study was for educational and not for policy-making purposes, as strongly suggested by Tsai (2010), were half of the participants more confident in briefly expressing their opinion. The other two participants, namely P10 and P14, were clearly unwilling to respond to this matter, and P10 strictly classified this topic as sensitive. The remaining participants responded, but very briefly. They only provided simple terms to indicate their level of trust in government, such as 'low', 'shaky' and 'not really', without elaborating on the actual meaning of their responses. The questions and selected responses on this issue are shown in Table 4.
Discussions
In research that involves sensitive topics, findings from interviews help researchers gain insights into people's feelings and thoughts, which may provide valuable knowledge for understanding their attitudes on certain issues. In the current study, which combined a number of sensitive topics, namely tax compliance, religiosity, and perceptions towards the government, there was no guarantee of rich results. This was because reassuring participants to voice their opinions freely was quite challenging, even though many interviewing strategies were adopted by the researcher to ensure that rich data could be gathered on these sensitive topics. However, based on the findings of the current study, this situation seemed to apply only to certain topics, namely religiosity and perceptions towards the government, and to certain participants. A possible explanation for the findings related to religiosity might be the multi-religious nature of this country. Generally, those adhering to the religion of the majority can be considered as having religious privilege in a particular area or country, and this might affect members of the minority in subtle ways, such as experiencing prejudice in certain scenarios. This was probably reflected in the responses made by P10, who was quite reserved and somewhat reluctant to share her views specifically on religiosity topics, because she adhered to a minority faith in Malaysia. Another possible explanation is that some of the participants' viewpoints on religiosity issues, particularly those of the Buddhists, contradicted the majority of the participants, who agreed that religiosity seemed to have a somewhat positive impact on taxpayers' compliance attitudes. Their reluctance was probably due to this difference of opinion from the majority, who are normally inclined to provide 'yes' answers to positive religious statements (Allport and Ross, 1967). Additionally, since their faith and the researcher's faith were different, this situation had probably created an uneasy environment for them to share their thoughts and feelings regarding this issue. This is in line with the finding of a study conducted by Rey (1997) that confronting people hailing from different faiths and practices may hinder participants from further elaborating on their honest views regarding an issue, particularly from their religion's perspective. Besides that, Sani and Hamed (2011) also highlight that one of the main issues in Malaysia's plural society is the restriction on expressing religious matters freely, which probably contributed to such responses in this study. Furthermore, obtaining high-quality data in a short time (in P10's case, only 15 minutes) was quite impossible, because the researcher and participant could hardly develop a good reciprocal relationship based on trust (Tsai, 2010). The findings related to perceptions towards the government probably reflect the very definition of a sensitive topic as given by Lee (1993): participants might feel they are at risk if they express their true views. Hence, their hesitation can be linked to the possible risk they might face if they openly criticized the Malaysian government. This is because, even though Malaysians have the right to practice freedom of speech as stated in the Federal Constitution 1999, Part 2, Article 10(1), this freedom can clearly be suspended using Article 149 (Muda, 1996).
Article 149 gives the Parliament power to pass laws suspending a person's fundamental rights vested in him under Part 2 of the Constitution if the Parliament believes that the person is a threat to national security or public order. One effect of this article is that people who criticize the government can be legally silenced. This was clearly emphasized by one of the participants, who stated that the Malaysian government restricted freedom of speech: "… they cannot talk openly. Why? Because if they talk openly, this will be sensitive. There is no freedom of speech too. If you talk too much, there will be certain laws that can put you behind bars." (P5, Christian, Self-employed) Even though the government's intention is most likely to maintain harmony in society and the country, this restriction has somewhat threatened people's confidence in the government and respect for laws (Khairuldin et al., 2017). More importantly, for research purposes, the restriction on voicing opinions about the government openly seemed to have a negative impact on obtaining true and fair views, limiting the richness of the data gathered during the interviews. Therefore, the topic of perceptions towards government might be regarded as more sensitive than the religiosity topic in this study.
On the other hand, the participants appeared to be very comfortable sharing their views and thoughts on the tax compliance issue, even when the conversation was led towards the possibility of discussing the negative attitudes of Malaysian taxpayers. They were willing to voice their sincere opinions, probably because their responses did not really reflect their own attitudes but rather the view of Malaysians as a whole. This was likely an effect of the indirect questions posed to the participants in the interviews, which encouraged them to be more sincere and honest in responding. The use of indirect questions during data collection is strongly recommended by Nuno and John (2015) and Kirchler and Wahl (2010) when dealing with sensitive topics. More importantly, when questions posed on a sensitive topic are not considered sensitive by the interviewee, the topic is less likely to be sensitive (Meter, 2000), and the participants' willingness can be anticipated. However, despite the researcher employing the same strategies for all topics in this study, the discussions of religiosity and of Malaysians' perceptions towards government still seemed sensitive to some of the participants.
An obvious limitation of this study was the different background of the researcher relative to a number of the participants, particularly in terms of religious faith. This probably contributed to awkward situations between participant and researcher during the interviews, which might have led to a disinclination to share true views. Another obvious limitation was the small number of participants involved, which probably limited access to rich data, even though the ideal sample size for a qualitative study has not been clearly established in prior studies (Marshall et al., 2013). A key direction for future research is to match the backgrounds of the researcher and participants, which might encourage openness in participants and reduce awkwardness at the same time. Increasing the sample size might also furnish researchers with more data, thereby allowing a better understanding of any sensitive topic in the future.
Conclusions
All in all, it is clear from the findings that the majority of the participants openly and truthfully shared their views and opinions on the issues discussed in this study, even though tax compliance is almost always viewed as a sensitive topic. Nevertheless, when questions covering certain areas of tax compliance were posed, some participants chose to adopt a different tone. Across the board, topics related to government issues were deemed more sensitive than the other topics dealt with in the interview questions. Questions on perceptions towards government were found to be more delicate than questions on the role of religious values and on tax compliance attitudes. The issues on religiosity, however, inevitably stirred perturbed and somewhat defensive responses too, though less seriously in most participants. Despite the effort to create a 'safe' and comfortable atmosphere, some participants still displayed unpleasant reactions. The indicative nature of the participants' responses was demonstrated through short and brief answers, reluctance, hesitation and delays, long pauses before responding, and attempts to avoid giving direct answers, which can be read as feelings of uneasiness, agitation and discomfort during the interview session.
These salient reactions are rather typical of a person dealing with his or her feelings, the physical reactions matching the emotions. Essentially, the very nature of tax as a topic of interview and discussion can already evoke feelings of intrusion, let alone when it is interfused with other controversial topics like religion and politics. If not carefully administered, this can further result in strong abhorrence, extreme reactions and opinions, or worse, pose a risk to the well-being of the researcher. Hence, this research suggests that more careful precautionary measures be taken to lessen the likelihood of evasive responses and extreme opinions. This can be done through careful wording of questions and perhaps by preparing a list of optional questions to keep on the side in case of uncooperative participants. Another suggestion is to select participants who are devoted to the same religion as the interviewer, or to have interviewers from different religious backgrounds matched to the backgrounds of the respondents. Such careful planning supports the researcher's effort to obtain reliable data and worthwhile information, and helps convince participants to remain calm, collected and truthful during interviews. Challenges are, without a doubt, part and parcel of conducting research, but preparation and strategy are key to yielding favourable results and handling precarious situations. What appears trivial and inconspicuous in nature may be sensitive to others, and when things derail, the data and information gathered may not truly represent participants' perceptions. | 2019-08-03T01:36:11.653Z | 2019-03-29T00:00:00.000 | {
"year": 2019,
"sha1": "85a087f693235812bb5947819fe8393879b61e55",
"oa_license": null,
"oa_url": "https://hrmars.com/papers_submitted/5798/qualitative-investigation-of-sensitive-topics-in-tax-compliance-study-in-malaysia.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3ddf507ae5a99aa65960c53d92ae8f6e775d69d4",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
23659330 | pes2o/s2orc | v3-fos-license | The effects of three remineralizing agents on regression of white spot lesions in children: A two-week, single-blind, randomized clinical trial
Background This study investigated the effect of three remineralizing agents on improving white spot lesions (WSLs). Material and Methods This clinical trial included children who had at least one WSL on the anterior teeth of the upper or lower jaw. The participants were randomly assigned to 4 groups by treatment: 1) a cream containing casein phosphopeptide-amorphous calcium phosphate and fluoride (MI Paste Plus); 2) a cream containing hydroxyapatite and fluoride (Remin Pro); 3) a 2% sodium fluoride gel; and 4) usual home care (control). The treatment was performed 3 times over 10 days using special trays for retaining the remineralizing agents. The area and mineral content of WSLs were measured at baseline (T1) and 1 day after finishing treatment (T2). Blinding was applied for outcome assessment. Results Eighty patients were assigned to the MI Paste Plus, Remin Pro, NaF or control groups. The application of all remineralizing agents caused a significant decrease in area and a significant increase in mineral content of WSLs (p<0.05), whereas the control patients did not experience any significant alteration (p>0.05). At T2, the area of WSLs was significantly lower in the three experimental groups compared to the control group (p=0.023), but the between-group difference in mineral content of WSLs failed to achieve statistical significance (p=0.08). Conclusions The in-office application of either MI Paste Plus or Remin Pro was as effective as 2% NaF for reducing the area and increasing the mineral content of WSLs. MI Paste Plus and Remin Pro could be recommended as suitable alternatives to NaF for managing WSLs. Key words: White spot lesion, caries, casein phosphopeptide-amorphous calcium phosphate, hydroxyapatite, sodium fluoride, CPP-ACP, MPlus, Remin Pro, NaF.
Introduction
Enamel demineralization is frequently observed in children and adolescents with poor oral hygiene. White spot lesions (WSLs) are defined as opaque enamel areas created by mineral loss from the subsurface layer of enamel; these areas are also described as incipient or enamel caries. WSLs are the precursors of caries cavities, and their milky color may cause esthetic problems that sometimes persist for several years (1,2). Different options have been suggested for the treatment of WSLs; some are conservative, such as remineralization therapy, while others are more aggressive, such as bleaching or microabrasion (3,4). Remineralization therapy should be considered the best approach for the treatment of WSLs. Fluoride is the most commonly used agent for remineralization of enamel caries, and its beneficial effects have been demonstrated in several investigations (5)(6)(7). Fluoride, in combination with calcium and phosphate ions, creates a veneer of fluoroapatite on the surface of existing enamel crystals, which not only acts as a replacement for minerals lost from the tooth structure but is also much less soluble than the original carbonated hydroxyapatite (8).
For every 2 fluoride ions, 10 calcium ions and 6 phosphate ions are required to form a unit cell of fluoroapatite [Ca10(PO4)6F2]. Therefore, in the clinical application of fluoride-containing agents, the availability of calcium and phosphate ions is the limiting factor for enamel remineralization (9). Furthermore, fluoride in higher doses may be toxic and cause side effects such as fluorosis. Therefore, finding alternative remineralizing agents with minimal side effects has always been desirable. Recently, products containing calcium and phosphate ions have been developed and advocated for the prevention and treatment of enamel caries. Casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) is a milk product that is capable of precipitating high concentrations of calcium and phosphate ions in the vicinity of the tooth structure by attaching to the tooth surface, pellicle and plaque (9,10). Therefore, the application of products containing CPP-ACP can inhibit demineralization and promote remineralization; most likely, a combination of both occurs (11). Recently, fluoride has been applied in association with CPP-ACP, and it has been shown that the combined application produces a synergistic effect on enamel remineralization (12)(13)(14). Tooth Mousse Plus (MI Paste Plus; GC Corporation, Tokyo, Japan) is a commercial product that contains CPP-ACP and 900 ppm fluoride (CPP-ACPF) and may provide greater therapeutic effects than its precursor Tooth Mousse (MI Paste), which contains CPP-ACP alone. However, the evidence supporting the benefits of CPP-ACP or CPP-ACPF in preventing early dental caries and enhancing remineralization of WSLs is still insufficient (15).
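The limiting-ion argument above is simple integer arithmetic; the following Python sketch (with hypothetical ion counts) makes it concrete by computing how many fluoroapatite unit cells a given ion pool can form and which ion runs out first.

```python
# Illustrative stoichiometry for fluoroapatite, Ca10(PO4)6F2:
# each unit cell consumes 10 Ca2+, 6 PO4(3-), and 2 F- ions.
REQUIRED = {"Ca": 10, "PO4": 6, "F": 2}

def fluoroapatite_yield(available: dict) -> tuple:
    """Return (unit cells formable, limiting ion) for a pool of ions."""
    cells_per_ion = {ion: available[ion] // need for ion, need in REQUIRED.items()}
    limiting = min(cells_per_ion, key=cells_per_ion.get)
    return cells_per_ion[limiting], limiting

# Hypothetical ion pool: fluoride is abundant, calcium is scarce.
pool = {"Ca": 50, "PO4": 60, "F": 100}
cells, limiting = fluoroapatite_yield(pool)
print(f"{cells} unit cells; limiting ion: {limiting}")  # 5 unit cells; limiting ion: Ca
```

Even with fluoride in excess, calcium (and secondarily phosphate) caps the amount of fluoroapatite that can form, which is exactly the rationale for pairing fluoride with calcium/phosphate delivery systems.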
Remin Pro (VOCO GmbH, Cuxhaven, Germany) is another remineralizing paste; it contains hydroxyapatite, fluoride and xylitol. It is believed that hydroxyapatite fills eroded enamel, fluoride seals dentinal tubules and xylitol acts as an antibacterial agent. The manufacturer recommends Remin Pro for controlling dentinal hypersensitivity, preventing enamel demineralization and erosion, and promoting remineralization of carious lesions (16). However, there are few studies on the effectiveness of Remin Pro in the regression of WSLs in children with poor oral hygiene, and to the authors' knowledge, its comparison with MI Paste Plus and NaF has not been performed under clinical conditions. There are several methods to detect caries in the clinical environment; one of them is quantitative laser- or light-induced fluorescence. This technique is also capable of assessing the progression or regression of caries over time and thus indicating changes in the enamel mineralization level (17). Fluorescent markers such as porphyrins are produced by a variety of bacterial species. VistaProof (Dürr Dental, Bietigheim-Bissingen, Germany) is a fluorescent camera developed for caries detection and is used with DBSWIN software (Dürr Dental) for image processing. VistaCam iX (Dürr Dental) is a newer version of VistaProof, which, in addition to the fluorescent camera, has a head for taking intraoral images. Different treatment protocols are available for remineralization of WSLs. In most previous studies, CPP-ACP or CPP-ACPF has been applied daily by the patients for a long period, such as 1 or 3 months (18)(19)(20)(21)(22). A suggested in-office approach for the use of highly concentrated fluoride agents is 3 applications over 10 days. This treatment approach may offer several advantages, as it eliminates the need for patient cooperation and may provide suitable results over a short period of time. However, the effectiveness of MI Paste Plus and Remin Pro when applied with this in-office procedure has not been evaluated to date. Therefore, this study was conducted to examine the effect of an in-office protocol for applying MI Paste Plus and Remin Pro on the mineral content and area of WSLs in 7-12-year-old patients with poor oral hygiene.
Material and Methods
This study was a parallel-group, randomized, placebo-controlled clinical trial. The participants were selected from those attending the Department of Pediatric Dentistry, School of Dentistry, Mashhad University of Medical Sciences, Mashhad, Iran between April 2015 and August 2015. The inclusion criteria required that the patients be in the age range of 7 to 12 years and present at least 1 WSL on the labial surface of the six permanent upper or lower anterior teeth. Patients who had systemic diseases or were sensitive to milk proteins, as well as those using drugs that cause xerostomia, were excluded from the sample. The exclusion criteria also covered patients who had hypoplastic enamel, those with dentine caries on the upper anterior teeth, and uncooperative patients. The study protocol was reviewed and approved by the ethics committee of Mashhad University of Medical Sciences, and an informed consent document was obtained from each patient's parent/legal guardian after a brief explanation of the treatment process. The sample size for each group was calculated as n = 20, based on an alpha significance level of 0.05 and a beta of 0.2, according to data obtained from a previous study (23).
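For context, an n of about 20 per group is what the standard two-sample normal-approximation formula yields at alpha = 0.05 and 80% power for a moderately large effect. The sketch below illustrates that calculation; the effect size and standard deviation are hypothetical placeholders, not the actual values taken from reference (23).

```python
from math import ceil
from scipy.stats import norm

def n_per_group(delta: float, sd: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size per group for comparing two means (normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)   # two-sided significance
    z_b = norm.ppf(power)           # power = 1 - beta
    return ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

# Hypothetical: detect a 1.0 mm^2 mean difference in WSL area, SD = 1.1 mm^2.
print(n_per_group(delta=1.0, sd=1.1))  # 19, i.e. roughly 20 per group
```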
-Interventions The participants were randomly assigned to one of the four groups and received different treatments for WSLs. The randomization was performed using a computer-generated table of random numbers. The details of the allocated groups were recorded on cards contained in sequentially numbered, opaque, sealed envelopes. These cards were prepared by an independent person who was not involved in the study process. Once a participant was confirmed to meet the inclusion/exclusion criteria, the allocation assignment was revealed by this independent person opening the envelope. At the first appointment (T1), the teeth were cleaned with a water slurry of pumice and rubber prophy cups. Afterwards, the patients in the study groups underwent the following treatments: The participants in group 1 were treated with a 2% neutral sodium fluoride gel (Sultan Healthcare Inc., Englewood, New Jersey, USA). A sufficient amount of gel was inserted into a tray, and the tray was then placed over the maxillary dentition and left in place for 10 minutes. The patients were instructed to avoid eating, drinking and rinsing the mouth for 30 minutes after the remineralizing therapy. The patients in group 2 received a remineralizing cream containing CPP-ACPF (MI Paste Plus; GC Corporation, Tokyo, Japan) under the same conditions as described for group 1. The patients in group 3 received a remineralizing cream containing fluoridated hydroxyapatite (Remin Pro, VOCO GmbH, Cuxhaven, Germany) under the same conditions as described for group 1. The participants in group 4 received no treatment and served as the control group. The remineralizing treatments were performed 3 times over 10 days. All patients were instructed to brush twice daily using a soft-texture toothbrush and fluoridated toothpaste (Crest Cavity Protection, 1100 ppm F). The patients were also advised to avoid any supplementary fluoridated products and to refrain from eating too much sugar and acidic food or drink.
-Outcomes The patients were evaluated at the start of the study (T1) and one day after finishing the remineralizing treatment (T2). The main outcomes were any differences in the area and mineral content of WSLs. At both the T1 and T2 appointments, the teeth were first cleaned with pumice slurry and a rubber prophy cup, and then a lip retractor was placed to hold the soft tissue away from the dentition. To determine the area of WSLs, the labial surfaces of the teeth with WSLs were photographed with VistaCam iX (Dürr Dental, Bietigheim-Bissingen, Germany) using a special head for taking intraoral images. The photographs were taken with the patient's head positioned horizontally and the lens of the device 8 centimeters away from the labial surfaces of the teeth. The intraoral images were then transferred to a microstructure image processing software (MIP4 Student software; Nahamin Pardazan Asia Co, Iran). For the purpose of calibration, the mesiodistal width of the imaged tooth was imported into the software. The borders of WSLs were traced manually, and their areas were calculated by the software. If a tooth had more than one WSL, the cumulative area of all WSLs was calculated and used in the statistical analysis. Twenty sets of photographs were selected, and the areas of WSLs were measured again one week later to assess intra-examiner reliability.
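The calibration step, importing the mesiodistal width to convert pixel measurements into millimeters, amounts to a single scale factor applied linearly for lengths and squared for areas. A minimal illustration follows; the function and numbers are hypothetical and do not represent MIP4's actual interface.

```python
def lesion_area_mm2(lesion_pixels: int, tooth_width_mm: float, tooth_width_px: int) -> float:
    """Convert a lesion's pixel count to mm^2, using the known
    mesiodistal tooth width as the calibration reference."""
    mm_per_px = tooth_width_mm / tooth_width_px   # linear scale factor
    return lesion_pixels * mm_per_px ** 2         # area scales with the square

# Hypothetical central incisor: 8.5 mm wide, spanning 340 px in the image.
print(round(lesion_area_mm2(lesion_pixels=4800, tooth_width_mm=8.5, tooth_width_px=340), 2))
# -> 3.0 (mm^2)
```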
To determine the mineral content of WSLs, the labial surfaces of the selected teeth were evaluated with VistaCam iX using a special head for taking fluorescent images. The protective cover of the head was placed over the labial surface of the tooth, perpendicular to and in contact with the enamel surface. The image obtained from the fluorescent reaction of the tooth was saved and processed by dedicated software (DBSWIN Imaging Software; Dürr Dental). This software creates images of 720×576 pixels. Using DBSWIN, a numerical value from 0 to 3 is assigned to each part of the tooth in proportion to the amount of mineral content in that area. Furthermore, in the fluorescent images, the teeth are illustrated in different colors, from green (approximately 510 nm wavelength) to red (approximately 680 nm wavelength), according to the stage of the caries process (Fig. 1). Healthy tooth enamel is indicated in green by a value from 0 to 1.0. The value assigned to early-stage enamel caries is in the range of 1.0 to 1.5, and WSLs are illustrated in blue in the software environment. In the present study, the greatest value assigned to a tooth with a WSL was recorded at each assessment interval, corresponding to the lowest amount of mineral content in the tooth. Figure 2 shows an image taken from an initial caries lesion by VistaCam iX in the DBSWIN environment. All examinations were performed by a single experienced investigator who, before the study commenced, had been trained to do these assessments on 10 patients not involved in the study process. Neither the patient nor the treating clinician was blind to the group assignment; however, the investigator who assessed the outcomes was kept blinded to the group allocation.
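Given the value ranges described above (healthy enamel up to 1.0, early enamel caries/WSL from 1.0 to 1.5, on a 0-3 scale), a reading can be mapped to a stage as sketched below. The label for values above 1.5 is an assumption, since the text specifies only the lower ranges.

```python
def classify_fluorescence(value: float) -> str:
    """Map a DBSWIN fluorescence reading (0-3) to a caries stage.
    The 0-1.5 ranges follow the text; >1.5 is assumed to indicate
    a more advanced lesion and is labeled generically here."""
    if not 0 <= value <= 3:
        raise ValueError("DBSWIN values lie in [0, 3]")
    if value <= 1.0:
        return "healthy enamel"
    if value <= 1.5:
        return "early enamel caries (WSL)"
    return "advanced lesion (assumed)"

for v in (0.6, 1.3, 2.2):
    print(v, "->", classify_fluorescence(v))
```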
-Statistical analysis The Kolmogorov-Smirnov test confirmed the normal distribution of the data (p>0.05). A paired-samples t-test was applied to detect significant alterations in the area and mineral content of WSLs between the T1 and T2 time points in each of the study groups. One-way analysis of variance (ANOVA) was run to determine any significant differences in the area and mineral content of WSLs at the start and end of treatment between the study groups. When a significant difference was noted, pairwise comparisons were made with the Dunnett test. The data were analyzed with SPSS software (SPSS 16.0, Chicago, IL, USA), and the significance level was set at p<0.05.
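A rough sketch of this analysis pipeline in Python/SciPy is shown below with fabricated placeholder data; note that scipy.stats.dunnett requires SciPy 1.11 or later, and SPSS was the software actually used in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder WSL areas (mm^2) for one group at T1 and T2, paired by patient.
t1 = rng.normal(5.0, 1.0, size=20)
t2 = t1 - rng.normal(0.8, 0.3, size=20)          # simulated lesion shrinkage

print(stats.ttest_rel(t1, t2))                    # within-group change, T1 vs T2

# Between-group comparison of T2 areas (placeholder data per group).
naf, mi, remin, ctrl = (rng.normal(m, 1.0, 20) for m in (4.1, 4.0, 4.2, 5.1))
print(stats.f_oneway(naf, mi, remin, ctrl))       # one-way ANOVA

# Pairwise comparison of each treatment against the control group.
print(stats.dunnett(naf, mi, remin, control=ctrl))
```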
Results
Eighty patients (46 girls, 34 boys; mean age, 9±2.2 years) were randomized in a 1:1:1:1 ratio to the NaF, MI Paste Plus, Remin Pro or control groups. All the participants finished the study period and had complete records for statistical analysis (Fig. 3). No significant difference was observed in age or sex of the participants among the study groups (p>0.05). Using the 20 sets of repeated measurements, the correlation coefficient for detecting the area of WSLs was 0.95, indicating excellent intra-examiner reliability. Table 1 presents the descriptive statistics (mean, standard deviation) for the area of WSLs in the study groups at baseline (T1) and one day after the end of the remineralization treatment (T2). Comparison of the surface area of WSLs between the T1 and T2 time points in each group revealed a significant reduction in the area of WSLs in patients who underwent treatment with NaF, MI Paste Plus and Remin Pro (p<0.05), whereas in the control group no significant improvement in the area of WSLs occurred between the two assessments (p>0.05) (Table 1). Between-group comparison by ANOVA revealed no significant difference in the area of WSLs at T1 (p=0.353), whereas at the T2 time point the difference between groups was significant (p=0.023). Pairwise comparison by the Dunnett test showed that the area of WSLs was comparable in the NaF, Remin Pro and MI Paste Plus groups (p>0.05), and all showed a significantly lower demineralized area compared to the control group (p=0.009, p=0.005 and p=0.003, respectively). Table 2 gives the summary statistics (mean, standard deviation) for the mineral content of WSLs in the study groups at both assessment intervals. When the alteration in mineral content of WSLs was compared between the T1 and T2 time points in each group, it was revealed that the mineral content of WSLs increased significantly in patients who underwent treatment with NaF, MI Paste Plus and Remin Pro (p<0.05), whereas the increase in mineral content of WSLs in the control group was not significant (p>0.05) (Table 2). Between-group comparison by ANOVA revealed no significant difference in mineral content of WSLs either at T1 (p=0.143) or at T2 (p=0.08) among the study groups. No harm was observed over the period of the experiment.
(Table 1 caption: The mean, standard deviation (SD) and the results of the statistical analysis regarding the area of WSLs (mm²) in the study groups at the start (T1) and end (T2) of the experiment. * indicates statistically significant difference at p<0.05.)
Discussion
The present study investigated the effectiveness of MI Paste Plus, Remin Pro and NaF for improving the area and mineral content of WSLs in children. The treatment regimen applied in this study was three applications of the remineralizing agent over 10 days. The 10-day treatment period was chosen in order to assess a fast approach to managing WSLs. We preferred the in-office procedure because children in the age range of this study may be uncooperative in effective brushing and use of remineralizing agents. Furthermore, the use of special trays over the dentition ensured even and prolonged contact of the remineralizing material with the tooth structure.
In the present study, VistaCam iX was employed for assessing the mineral content of WSLs and taking intraoral photographs. This device is small and portable and thus easily applied in the clinical situation. It has been demonstrated that fluorescence-based systems are suitable for the detection of caries lesions and for assessing alterations in the mineral level of tooth structure (17,(24)(25)(26). Jablonski-Momeni et al. (27) showed that VistaCam iX provides high reproducibility and good performance for caries detection at various stages of the disease process. Al-Khateeb et al. (25) found a significant correlation between fluorescence changes and mineral loss, and thus recommended the fluorescent camera as a sensitive device for assessing the severity of incipient enamel lesions. In order to minimize measurement errors, the intraoral photographs were taken under similar lighting conditions, with the camera positioned at the same angular and linear position relative to the tooth surface. The use of MIP4 Student software provided precise measurement of the area of WSLs in the photographs (28,29).
In the present study, all the experimental groups experienced a significant decrease in the area of WSLs over the course of the experiment, whereas the control group did not show any significant improvement. Furthermore, the area of WSLs was significantly lower in the three experimental groups compared to the control group at the end of the study (T2). These results were interesting, as the use of remineralizing agents led to the regression of WSLs even within the short treatment period of this study. The reduction in the area of WSLs is important not only from the standpoint of limiting the caries lesion, but also from the esthetic point of view, especially in the anterior part of the dentition, where WSLs produce esthetic problems.
In the control group, a small and insignificant improvement in the area of WSLs was observed over time, possibly due to the natural remineralization phenomenon, which results in partial improvement of WSLs without any treatment.
Another variable measured in the present study was the mineral content of WSLs, as represented by the VistaCam iX apparatus. The values represented by this device correlate with the mineralization level of the tooth structure. Normal enamel is usually displayed by 1, whereas WSLs show values in the range of 1-1.5 in the software environment. Therefore, any reduction in the values represented by VistaCam iX corresponds to an increase in the mineral content of the tooth structure. In the present study, a significant increase in the mineral content of WSLs occurred after remineralizing therapy in the three experimental groups, implying the effectiveness of CPP-ACP + fluoride and hydroxyapatite + fluoride, as well as 2% NaF, in restoring the minerals lost from the tooth structure. The increase in mineral content of WSLs in the control group was small and not statistically significant. This small improvement in the mineral content of the control group should again be attributed to the natural remineralization of WSLs in the presence of saliva and oral hygiene instructions. Although the experimental groups experienced a significant increase in the mineral content of WSLs, the difference between groups failed to achieve statistical significance at the end of the study period (p=0.08). This may be related to the low frequency of applying the remineralizing agents over the short period of this study (3 times over 10 days). Another possibility is that the apparatus employed in this study (VistaCam iX) was not precise enough to detect small differences in the mineral content of WSLs among the study groups.
In the present study, both MI Paste Plus (containing CPP-ACP and fluoride) and Remin Pro (containing hydroxyapatite and fluoride) were as effective as 2% NaF in reducing the area and increasing the mineral content of WSLs over time, and thus they could be recommended as suitable alternatives to highly concentrated fluoride agents. In contrast, Salehzadeh Esfahani et al. (30) showed that CPP-ACP had significantly better outcomes compared to Remin Pro and sodium fluoride varnish in increasing the microhardness of artificially induced enamel lesions. It is believed that MI Paste Plus can maintain a state of supersaturation of calcium and phosphate over the enamel surface, and the fluoride content of this product has a synergistic effect with CPP-ACP, increasing its remineralizing potential (12,14). Remin Pro is a water-based remineralizing agent that can increase the incorporation of hydroxyapatite and fluoride into enamel lesions and thus enhance remineralization. In this study, the application of NaF gel also led to remineralization, possibly due to the formation of alkali compounds containing fluoride around the enamel surface, which prevent demineralization and enhance remineralization of the dental structure. The findings of this study indicate that in patients with poor oral hygiene, any natural improvement in the area and mineral content of WSLs is negligible, and remineralization treatment should be considered a viable option for incipient caries lesions. Either MI Paste Plus or Remin Pro could be considered a suitable alternative to 2% NaF in the treatment of WSLs. The use of these alternatives may have some advantages over NaF, as they are good-tasting and are not toxic at higher dosages, and thus they can be applied safely by either patients or clinicians.
Although the treatment period in this study was only 10 days, the results were promising, and this protocol can thus be recommended in clinical conditions for patients with extensive enamel caries who may benefit from a rapid remineralization program. The outcomes of this study are in agreement with the results of several previous investigators who found significant regression of WSLs after the use of products containing CPP-ACP or CPP-ACPF (18,19). Robertson et al. (19) indicated that the application of MI Paste Plus prevented the development of WSLs during orthodontic treatment and reduced the number of WSLs already present, whereas the placebo paste had no significant effect on the prevention and treatment of enamel caries lesions. Bailey et al. (18) showed that significantly more WSLs regressed after 12-week application of a remineralizing cream containing CPP-ACP compared with a placebo. In these studies (19,20), the period of treatment with CPP-ACP or CPP-ACPF was 1 to 3 months, whereas the present study provided satisfactory results over a period of just 10 days. In contrast to the findings of this study, some clinical trials (20)(21)(22) reported that the application of a cream containing CPP-ACP or CPP-ACPF was not superior to brushing with a fluoride toothpaste for the regression of WSLs. The difference between the results of this study and those of previous studies may be related to the use of fluoride-free CPP-ACP by some authors (21). Another factor that could affect the results is the volume of the remineralizing agent and its close contact with WSLs. Several studies recommended a pea-sized amount or 1 g of CPP-ACP applied with a finger or brush (21,22), but in this study, special trays were placed over sufficient amounts of the remineralizing agents, providing close proximity to the enamel structure. Furthermore, the duration of applying the remineralizing agent was 10 minutes, longer than that used in most previous investigations (21,22), and the treatment protocol was not dependent on patient cooperation, as it was accomplished in the dental office. The limitations of this study were its short duration and the lack of blinding of the patients and the treating clinician, because commercial products were used. However, the outcome assessor was blinded, so the risk of bias was minimized. Further studies are warranted to assess the effect of long-term use of remineralizing agents on the prevention and regression of WSLs in children. The use of supplementary instruments to measure the degree of improvement of incipient caries lesions would also be helpful. Although the remineralizing effects of MI Paste Plus, Remin Pro and NaF were similar in the present study, further studies with larger sample sizes and longer treatment periods are required to confirm the lack of a significant difference in the performance of these remineralizing agents.
Conclusions
Under the conditions used in this clinical trial: 1- The application of MI Paste Plus, Remin Pro or NaF three times over 10 days was effective in reducing the area and increasing the mineral content of WSLs in 7-12-year-old children, whereas the control group did not show any significant improvement. Therefore, remineralization therapy should be recommended for children presenting with incipient caries lesions.
2- There was no significant difference between MI Paste Plus, Remin Pro and NaF regarding their effectiveness in the regression of WSLs in children. Considering the lower side effects of MI Paste Plus and Remin Pro, these products could be recommended as suitable alternatives to highly concentrated fluoride agents. | 2017-10-28T02:52:14.283Z | 2017-05-01T00:00:00.000 | {
"year": 2017,
"sha1": "5956680a705ae81117b429344b12f9bee7810123",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4317/jced.53582",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "5956680a705ae81117b429344b12f9bee7810123",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
20907189 | pes2o/s2orc | v3-fos-license | Conditional Gene Targeting in Mouse Pancreatic β-Cells
OBJECTIVE Conditional gene targeting has been extensively used for in vivo analysis of gene function in β-cell biology. The objective of this study was to examine whether mouse transgenic Cre lines, used to mediate β-cell– or pancreas-specific recombination, also drive Cre expression in the brain. RESEARCH DESIGN AND METHODS Transgenic Cre lines driven by Ins1, Ins2, and Pdx1 promoters were bred to R26R reporter strains. Cre activity was assessed by β-galactosidase or yellow fluorescent protein expression in the pancreas and the brain. Endogenous Pdx1 gene expression was monitored using Pdx1tm1Cvw lacZ knock-in mice. Cre expression in β-cells and co-localization of Cre activity with orexin-expressing and leptin-responsive neurons within the brain was assessed by immunohistochemistry. RESULTS All transgenic Cre lines examined that used the Ins2 promoter to drive Cre expression showed widespread Cre activity in the brain, whereas Cre lines that used Pdx1 promoter fragments showed more restricted Cre activity primarily within the hypothalamus. Immunohistochemical analysis of the hypothalamus from Tg(Pdx1-cre)89.1Dam mice revealed Cre activity in neurons expressing orexin and in neurons activated by leptin. Tg(Ins1-Cre/ERT)1Lphi mice were the only line that lacked Cre activity in the brain. CONCLUSIONS Cre-mediated gene manipulation using transgenic lines that express Cre under the control of the Ins2 and Pdx1 promoters are likely to alter gene expression in nutrient-sensing neurons. Therefore, data arising from the use of these transgenic Cre lines must be interpreted carefully to assess whether the resultant phenotype is solely attributable to alterations in the islet β-cells.
In vivo analysis of gene function in the pancreas and β-cells has benefited from the development of mouse lines expressing Cre in all pancreatic compartments or restricted to the islet β-cells. The choice of promoter to drive recombinase expression is critical for controlling the location and timing of gene activity. In addition, inducible versions of Cre recombinase, e.g., CreER, allow temporal control of the manipulation of gene activity, which becomes important when analyzing gene function at specific embryonic and adult stages (1,2). Promoters of the pancreas duodenal homeobox 1 (Pdx1) (3,4) and insulin (Ins1 and Ins2) (5)(6)(7)(8) genes have been well characterized to allow the use of regulatory sequences for directing Cre expression to specific pancreatic cell populations. Commonly used transgenic mouse lines that employ rat Ins2 gene promoter sequences to drive Cre expression within the β-cell population include Ins2-Cre/RIP-Cre [Mouse Genome Informatics (MGI): Tg(Ins2-cre) 25Mgn and Tg(Ins2-cre) 1Herr ] (9-11) and RIP-CreER [MGI: Tg(Ins2-cre/Esr1) 1Dam ] (12). Pdx1 gene promoter sequences have proven useful for directing Cre expression throughout the early pancreatic epithelium (4,10,13,14) and to the endocrine cells of the pancreas (15). The Pdx1 gene is expressed early in pancreas development throughout the endoderm of the dorsal and ventral buds, but expression becomes restricted during development such that high levels of Pdx1 are maintained in the insulin-producing β-cells with lower levels in subpopulations of acinar cells (8,16). Examples of Pdx1-Cre transgenic lines include Pdx1-Cre early [MGI: Tg(Pdx1-cre) 89 (14), and Pdx1-CreER [MGI: Tg(Pdx1-cre/ERT) 1Mga ] (15).
To assess the specificity of recombination and perform lineage tracing analysis, reporter lines such as the ROSA26-stop-lacZ [MGI: Gt(ROSA)26Sor tm1Sho ], also known as R26R (17), or the ROSA26-stop-YFP [MGI: Gt(ROSA)26Sor tm1(EYFP)Cos ] (18) mice have been developed. Upon Cre-mediated recombination, these reporter lines activate expression of a β-galactosidase (β-gal) or a yellow fluorescent protein (YFP) reporter under the control of the ubiquitously active ROSA26 promoter, resulting in expression that is stably inherited by all cell progeny regardless of their differentiation fate.
Here we show that most Cre lines currently being used to mediate pancreas or β-cell recombination also direct Cre expression to areas of the brain, and this may lead to altered gene expression in nutrient-sensing neurons that affects nutrient homeostasis.
RESEARCH DESIGN AND METHODS
Mouse models. Transgenic Cre and R26R reporter mouse lines used in this study are listed in Table 1. Experimental animals were generated by crossing Tg(Ins2-cre) 25Mgn (termed RIP-Cre Mgn ) (11), Tg(Ins2-cre) 1Herr (termed RIP-Cre Herr ) (10), Tg(Ins2-creEsr1) 1Dam (termed RIP-Cre/ERT) (12), Tg(Pdx1cre) 89 Reagents. Primary antibodies included guinea pig anti-porcine insulin IgG (1:500; Dako, Carpinteria, CA), guinea pig anti-insulin antibody (1:1,000; Millipore, Billerica, MA), rabbit anti-β-gal IgG (1:5,000; MP Biomedicals, Solon, OH), goat anti-β-gal IgG (1:1,000; Biogenesis Ltd, Poole, UK), rabbit anti-STAT3 phosphorylation (pSTAT3) IgG (1:1,000; Cell Signaling Technologies, Beverly, MA), rabbit anti-orexin IgG (1:2,000; Calbiochem, EMD Biosciences/Merck, Darmstadt, Germany), and rabbit anti-Cre antibody (1:1,000, cat. #69050; EMD Biosciences, San Diego, CA). Fluorescent-labeled secondary antibodies were purchased from Jackson ImmunoResearch (West Grove, PA) and Invitrogen (Carlsbad, CA). Recombinant mouse leptin was obtained from the National Hormone and Peptide Program (Los Angeles, CA). Tamoxifen administration. Over a 5-day period, mice were injected subcutaneously or intraperitoneally with 3 doses of 1-8 mg tamoxifen (Sigma, T5648) freshly dissolved in corn oil (Sigma, C8267) at 10 mg/ml or 20 mg/ml, or with corn oil vehicle. The subcutaneous injection site was sealed with a drop of Vetbond tissue adhesive (3M). Following tamoxifen administration, the mice were housed individually for 5-10 days before being analyzed for Cre-recombinase-mediated activity. Detection of β-gal activity. β-Gal activity was detected by 5-bromo-4-chloro-3-indolyl-β-D-galactopyranoside (X-gal) staining as described previously (20) with slight modifications. Briefly, pancreata and brains were dissected in ice-cold 10 mmol/l PBS and fixed in freshly prepared 1-2% paraformaldehyde for either 2-4 h at room temperature or overnight at 4°C. Brains (2-mm slices) and pancreata were permeabilized for 5 h in 2 mmol/l MgCl2, 0.01% sodium deoxycholate, 0.02% NP-40, 10 mmol/l PBS, and then stained overnight in the dark in 2 mmol/l MgCl2, 5 mmol/l potassium ferricyanide, 5 mmol/l potassium ferrocyanide, 1 mg/ml X-gal, 0.01% sodium deoxycholate, 0.02% NP-40, 10 mmol/l PBS pH 7.4 at ambient temperature or 37°C. Tissues were washed in PBS, postfixed in 4% paraformaldehyde for 1 h, washed in PBS, and placed into 70% ethanol prior to whole mount imaging. For YFP detection, embryos were dissected at e15.5 and imaged in whole mount. Leptin administration. Mice were injected intraperitoneally with leptin (5 mg/kg) or vehicle (PBS) and then rested for 2 h prior to perfusion. Immunohistochemistry. Immunodetection of pancreatic Cre expression was performed in 5-µm paraffin sections prepared from paraformaldehyde-fixed pancreata of RIP-Cre Mgn , RIP-Cre/ERT, Pdx1 AI-III -Cre/ERT, and MIP-Cre/ERT mice. Transgenic lines expressing Cre/ERT received the third dose of tamoxifen on the day prior to being killed. After antigen retrieval, sections were incubated with primary antibodies to Cre and insulin (Millipore). For immunodetection of β-gal expression in the brain, mice were perfusion-fixed, and brains were removed and postfixed overnight as described previously (21). Following cryoprotection, brains were sectioned into 30-µm coronal slices, collected in four consecutive series, and stored at −20°C. Sections were then incubated with primary antibodies to pSTAT3, β-gal (Biogenesis), or orexin overnight at 4°C.
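The tamoxifen dosing described above (1-8 mg per injection from 10 or 20 mg/ml stock) implies the following injection volumes; this is purely illustrative arithmetic, not part of the published protocol.

```python
def injection_volume_ul(dose_mg: float, stock_mg_per_ml: float) -> float:
    """Volume of tamoxifen stock needed for one injection, in microliters."""
    return dose_mg / stock_mg_per_ml * 1000

# Dose range used in the study, at both stock concentrations.
for dose in (1, 2, 8):
    for stock in (10, 20):
        print(f"{dose} mg @ {stock} mg/ml -> {injection_volume_ul(dose, stock):.0f} uL")
```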
Immunolabeling was visualized with appropriate fluorescent-labeled secondary antibodies. Digital images were acquired by confocal microscopy. One-way ANOVA analysis was used to compare the percent of β-cells that express Cre in the islets of the different transgenic lines. Quantitative RT-PCR. Islets (22) and hypothalamus were isolated from adult Tg(Pdx1-cre) 89.1Dam (13) mice and their controls. Total cellular RNA was isolated using the RNAqueous Small Scale Phenol-Free Total RNA isolation kit (Ambion, Austin, TX), and trace contaminating DNA was removed with the TURBO DNA-free kit (Ambion). High-quality RNA had a 28S-to-18S ratio from 1.2 to 2.0 and an RNA integrity number from 8.2 to 8.9. Single-stranded cDNA was generated by reverse transcription from 180-ng total RNA using the Superscript III First Strand Synthesis kit (Invitrogen). cDNA (40 ng/reaction) was analyzed by quantitative RT-PCR using the ABI Prism 7900 Sequence Detection System and POWER SYBR Green Master Mix (Applied Biosystems, Foster City, CA). Samples were analyzed in duplicate, and samples with cycle threshold values greater than 40 were considered to have undetectable amounts of template. Primer sequences were as follows: Cre (5′-TGCAACGAGTGATGAGGTTC-3′ and 5′-GCAAACGGACAGAAGCATTT-3′), HPRT (5′-TACGAGGAGTCCTGTTGATGTTGC-3′ and 5′-GGGACGCAGCAACTGACATTTCTA-3′), and Pdx1 (5′-CTGAGGGACAAAGATGCAGA-3′ and 5′-TTCTAATTCAGGGCGTTGTG-3′). One-way ANOVA with Newman-Keuls multiple comparison tests was used to compare outcomes in mice of different genotypes. Data were expressed as mean ± SE.
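Relative expression from Ct values is conventionally computed with the 2^-ΔΔCt method; the paper does not state its exact quantification formula, so the sketch below, with fabricated Ct values and HPRT as the reference gene, is only an assumption of how a fold difference such as the 12.6-fold figure reported in RESULTS could be derived.

```python
def relative_expression(ct_target: float, ct_ref: float,
                        ct_target_cal: float, ct_ref_cal: float) -> float:
    """2^-ddCt: fold expression of a target gene in a sample versus a
    calibrator sample, each normalized to a reference gene (here, HPRT)."""
    d_ct_sample = ct_target - ct_ref
    d_ct_calibrator = ct_target_cal - ct_ref_cal
    return 2 ** -(d_ct_sample - d_ct_calibrator)

# Fabricated Ct values: Cre in hypothalamus vs. islets (the calibrator).
fold = relative_expression(ct_target=28.9, ct_ref=20.0,
                           ct_target_cal=25.2, ct_ref_cal=20.0)
print(f"hypothalamic Cre = {fold:.3f}-fold of islet Cre")  # ~0.077, i.e. ~13-fold lower
```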
RESULTS
Using the R26R reporter line, the RIP-Cre Mgn line (11) was previously shown to have robust Cre-mediated recombination within the β-cells and the ventral brain during development (9). To investigate whether Cre-mediated recombination occurred within the brain of other transgenic Cre lines using the rat Ins2 or Pdx1 promoter (Table 1), these mouse strains were crossed with the R26R reporter strain and analyzed for β-gal activity in whole mount brain slices (Figs. 1 and 2). No X-gal staining was detected in the brain or pancreas from control R26R wt/lacZ littermates, indicating that β-gal is not expressed in the absence of Cre activity (Fig. 1B and F and supplementary Fig. 1). In RIP-Cre Mgn ;R26R wt/lacZ mice, widespread X-gal staining was detected in most brain areas, with robust expression in the mid-brain and ventral regions, consistent with previous reports (9) (Fig. 1C and supplementary Fig. 2). In the brains of RIP-Cre Herr ;R26R wt/lacZ mice, X-gal staining was less widespread and had a more punctate pattern without any obvious regionalization (Fig. 1D and supplementary Fig. 3). The brains of RIP-Cre/ERT;R26R wt/lacZ mice with Cre activity induced by three 2-mg doses of tamoxifen revealed a diffuse intermediate pattern of X-gal staining that was more extensive than in RIP-Cre Herr ;R26R wt/lacZ mice but less than in RIP-Cre Mgn ;R26R wt/lacZ mice. Cre-mediated recombination in the brain of the Pdx1-Cre transgenic lines has not been examined previously, but ectopic recombination was reported in the pharyngeal region of Pdx1-Cre Dam ;R26R wt/lacZ embryos (23). Unlike the widespread recombination in brains from the RIP-Cre transgenic lines, X-gal staining in Pdx1-Cre Dam ;R26R wt/lacZ brains (Fig. 2B and supplementary Fig. 5) and Pdx1-Cre Tuv ;R26R wt/lacZ brains (Fig. 2C and supplementary Fig. 6) was localized primarily to the hypothalamus and brain stem. Analysis of Cre mRNA by quantitative RT-PCR in the Pdx1-Cre Dam line confirmed expression in the hypothalamus, with hypothalamic expression levels 12.6-fold lower than in islets (supplementary Fig. 5). Pdx1 AI-III -Cre/ERT transgenic mice express the tamoxifen-inducible Cre (15). Injection of a single dose of tamoxifen into pregnant females (2 mg/40 g body weight) at e16.5 did not result in recombination in the brains of Pdx1 AI-III -Cre/ERT;R26R wt/lacZ embryos dissected at e20.5 (15). In adult Pdx1 AI-III -Cre/ERT;R26R wt/lacZ mice injected with three 1-mg doses of tamoxifen, recombination was detected mainly in the hypothalamus (supplementary Fig. 7, left panel). However, three 8-mg doses of tamoxifen induced much broader recombination throughout the brain, suggesting that the extent of recombination in the adult brain is dependent upon the tamoxifen dose (Fig. 2D and supplementary Fig. 7, right panel). These data suggest that the Pdx1 AI-III -Cre/ERT transgene is expressed in the adult brain but not in the e16.5 brain, although it is possible that higher tamoxifen levels may be needed to induce Cre-mediated recombination within the embryonic brain.
To further examine the timing of Cre expression in the brains of the Pdx1-Cre lines expressing constitutively active Cre, we studied the Pdx1-Cre Tuv transgenic line crossed into either R26R lacZ/lacZ or R26R YFP/YFP reporter mice and analyzed embryos at e15.5 (Fig. 2J-K and supplementary Fig. 8). Both reporter strains demonstrated Cre activity in the brain stem and ventral region of the developing brain that gives rise to the hypothalamus, indicating that functional Cre protein is expressed in the ventral region of the Pdx1-Cre Tuv brain prior to e15.5.
To determine whether Cre activity in the hypothalamus of the three different Pdx1-Cre transgenes was due to previously unrecognized endogenous Pdx1 expression, a mouse line with a lacZ reporter cassette in the Pdx1 locus was examined (16). Both adult and embryonic (e15.5) Pdx1 wt/lacZ (Fig. 2E and L and supplementary Fig. 9) brains were negative for X-gal staining. Furthermore, expression of the endogenous Pdx1 gene was undetectable in the hypothalamus by real-time RT-PCR (data not shown), indicating that the Pdx1-Cre Dam, Pdx1-Cre Tuv, and Pdx1 AI-III -Cre/ERT transgenes are ectopically expressed in the brain.
Detection of Cre-mediated recombination in the hypothalamus of Pdx1-Cre Dam ;R26R wt/lacZ mice (Fig. 2B, supplementary Fig. 5, and supplementary Fig. 10) raised the possibility that Cre protein may be expressed in neurons involved in the regulation of energy and glucose homeostasis. To determine the extent of Cre-mediated recombination within these specific neuronal populations, β-gal-positive cells in brain sections from leptin-treated Pdx1-Cre Dam ;R26R wt/lacZ mice were co-localized with orexin and leptin-induced pSTAT3, respectively. In the lateral hypothalamus, β-gal protein was expressed in a complex pattern that partially overlapped with both the orexin-expressing and LepRb-expressing neuronal populations (Fig. 3), although significant populations of β-gal-positive cells did not overlap with the neuronal cell population in either the lateral hypothalamus or in other hypothalamic regions including the arcuate nucleus. Nonetheless, these data clearly illustrate that the Pdx1-Cre Dam line induces Cre-mediated recombination in subpopulations of hypothalamic neurons involved in energy expenditure and glucose metabolism.
DISCUSSION
The studies examining the in vivo role of genes associated with biological processes in the pancreas and β-cells have relied largely upon fragments of the rat Ins2 gene promoter or the Pdx1 gene promoter (3,8-13,16). Although insulin secretion from β-cells plays an important role in glucose control, many other tissues including the brain are intimately involved in the regulation of glucose metabolism. Previous studies have demonstrated that the 668 bp rat Ins2 promoter fragment drives Cre-recombinase expression within the central nervous system of the mouse transgenic line, RIP-Cre Mgn [Tg(Ins2-cre) 25Mgn ] (9,24,25). In this study, we examined whether Cre-mediated recombination occurred in the brain of six mouse transgenic lines that have been extensively used to express Cre specifically within the pancreas or islet β-cells (Table 1). Analysis of Cre-mediated recombination using the R26R reporter strain demonstrated that all six transgenic lines expressed Cre recombinase to varying extents within the brain, raising the possibility that alterations of gene expression in the brain may complicate the analysis and that the observed phenotype may not be solely due to changes in the β-cell. This possibility was highlighted by a recent study that used the RIP-Cre Mgn line to selectively delete the Stat3 gene in the β-cells (24). STAT3-deficient mice displayed increased food intake, obesity, and leptin resistance; physiological effects that the authors attributed to STAT3 deficiency in the brain leading to impaired leptin signaling.
There are no previous studies reporting Cre-mediated recombination in the brain with Pdx1-Cre lines. Our findings indicate that the Pdx1-Cre Dam line causes recombination in a subset of hypothalamic neurons involved in energy and nutrient homeostasis. The similarity of β-gal expression patterns in the hypothalamus with the other two Pdx1-Cre transgenic lines suggests that Cre-mediated recombination in these lines may also affect similar neuronal subpopulations. The lack of β-gal activity in the brains of Pdx1 wt/lacZ mice indicates that the Cre expression in the brain of the Pdx1-Cre Dam , Pdx1-Cre Tuv , and Pdx1 AI-III -Cre/ERT mice is not a reflection of endogenous Pdx1 gene expression. A likely explanation for this spurious expression is the removal of these Pdx1 gene promoter fragments from their endogenous gene context. Furthermore, the similar recombination pattern generated with the Pdx1-Cre lines makes it unlikely that this brain expression is a result of neighboring sequences at the sites of integration, which are almost certainly different for each line. While a lacZ knock-in reporter to analyze endogenous Ins2 expression is currently not available, a recent study demonstrated expression of the Ins2 gene but not the Ins1 gene in the mouse hypothalamus (26). Thus, in contrast to the infidelity of Cre transgene expression found with the Pdx1 promoter, Cre expression observed using the Ins2 promoter may reflect, in part, endogenous promoter activity. It is not known whether the ectopic expression of the insulin or Pdx1 transgenes occurs during the embryonic period, the adult period, or both.
Several caveats should be considered in interpreting our results. First, we did not examine all currently available insulin, Pdx1, or other gene promoters used to direct Cre expression to the pancreas or β-cell. Thus, it is essential that investigators examine brain expression in any Cre line thought to be pancreas- or β-cell-specific. Second, our results should not be interpreted to indicate specific expression (or lack of expression) in any brain region or nuclei, as we did not perform detailed mapping of brain regions following lacZ staining. We did note regions with strong X-gal staining, but this should not be taken as evidence that other areas do not express Cre, and it is possible that a more isolated or diffuse expression in other brain regions may also lead to Cre activity. More detailed work is needed to identify which brain regions are positive or negative for Cre activity. Third, whether Cre expression leads to excision of a floxed DNA fragment is an incompletely understood process that depends on both Cre expression and the floxed allele. We mostly used a single reporter line, and we do not know if the results would differ with other lines that express other reporters such as alkaline phosphatase. In fact, we predict that some reporter lines will not show the same Cre activity we observed, given that a range of sensitivity of floxed alleles to Cre-mediated recombination is likely (with the R26R wt/lacZ line being more Cre-sensitive) (27). This possibility further complicates interpretation of studies using Cre to inactivate a gene of interest. Thus, we urge caution in extrapolating that the lack of Cre-mediated recombination with a certain reporter gene predicts a lack of Cre-mediated recombination of a gene of interest in the brain. Based on the current study, it is clear that the R26R wt/lacZ floxed allele is susceptible to Cre-mediated recombination in several brain regions, including orexin-positive and leptin-responsive neuronal populations. Finally, the experimental intent for using the Cre transgene must be considered. If lineage tracing is the goal, is it preferable to use a "sensitive" or "insensitive" reporter? If gene inactivation is the goal of Cre-mediated recombination, then whether other tissues or cells endogenously express the gene of interest becomes a critical factor. If the gene of interest is expressed in places other than the pancreas or β-cell (especially in the brain, where a large number of genes are known to be expressed in both tissues), then the current finding of Cre-mediated recombination in brain regions involved in glucose homeostasis, appetite, weight, and energy expenditure makes attribution of the phenotype to the β-cell more difficult.
The above analysis clearly illustrates that new transgenic lines are needed to ensure fidelity of conditional Cre expression in islet β-cells. In transgenic Tg(Ins1-EGFP) 1Hara mice (28), an 8.5-kb fragment of the mouse Ins1 gene promoter was successfully used to express enhanced green fluorescent protein (eGFP) specifically within islet β-cells in the absence of eGFP expression in other tissues. This same Ins1 promoter fragment was used in the MIP-Cre/ERT transgenic line to direct Cre/ERT gene expression in β-cells (Tamarina et al., unpublished data). The improved fidelity of Cre expression observed in the MIP-Cre/ERT line is likely due, in part, to the additional regulatory elements within the larger promoter fragment employed and because the mouse Ins1 gene is not expressed in the hypothalamus (26). Thus, the MIP-Cre/ERT mice appear to represent a transgenic line to express Cre efficiently and specifically in islet β-cells.
In conclusion, this study reveals that the current transgenic lines utilizing Ins2 and Pdx1 promoter fragments target Cre expression not only to the islet β-cells, but also to the brain. While not invalidating the use of these lines, our data indicate that studies conducted using these Cre transgenic mice should be interpreted carefully to assess whether manipulation of the target gene within the brain could contribute to the observed phenotype. The lack of Cre-mediated recombination in the brain of MIP-Cre/ERT;R26R wt/lacZ mice suggests that the newly developed MIP-Cre/ERT line is currently the only available β-cell-specific Cre line. As with all newly developed Cre transgenic lines, caution must also be exhibited when using this line until its potential as a β-cell-specific Cre line has been validated through further experimental analysis. | 2018-04-03T01:07:08.987Z | 2010-08-29T00:00:00.000 | {
"year": 2010,
"sha1": "abdce34a9b5b2ff26b9f51ad14d0f4390055b7d2",
"oa_license": "CCBYNCND",
"oa_url": "http://diabetes.diabetesjournals.org/content/59/12/3090.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b7e0192a74b08a7ccae747543df99fcbb71ea93a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
14430971 | pes2o/s2orc | v3-fos-license | Perceived color shift of ceramics according to the change of illuminating light with spectroradiometer
PURPOSE Perceived color of ceramics changes with the spectral power distribution of the ambient light. This study aimed to quantify the shifts in color and color coordinates of seven clinically simulated all-ceramics caused by switching among three ambient light sources, using a spectroradiometer that simulates human vision. MATERIALS AND METHODS CIE color coordinates, namely L*, a* and b*, of ceramic specimens were measured under three light sources simulating the CIE standard illuminants D65 (daylight), A (incandescent lamp), and F9 (fluorescent lamp). Shifts in color and color coordinates caused by the switch of lights were determined. The influence of the switched light (D65 to A, or D65 to F9), the shade of the veneer ceramics (A2 or A3), and the brand of ceramics on the shifts was analyzed by a three-way ANOVA. RESULTS Shifts in color and color coordinates were influenced by all three factors (P<.05). Color shifts caused by the switch to A were in the range of 5.9 to 7.7 ΔE*ab units, and those caused by the switch to F9 were 7.7 to 10.2; all of these were unacceptable (ΔE*ab > 5.5). When switched to A, CIE a* increased (Δa*: 5.6 to 7.6), whereas CIE b* increased (Δb*: 4.9 to 7.8) when switched to F9. CONCLUSION Clinically simulated ceramics demonstrated clinically unacceptable color shifts with switches in ambient lights based on spectroradiometric readings. Therefore, shade matching and compatibility evaluation should take ambient lighting conditions into account and should be performed under the most relevant lighting condition.
INTRODUCTION
Fabrication of a natural-looking restoration is one of the challenges in esthetic dentistry, because shade matching with natural teeth is a difficult task due to the complicated optical properties of teeth. 1 An esthetic restoration should reproduce the morphologic, optical, and biologic characteristics of teeth under varied clinical conditions. Switches of ambient light sources and conditions cause perceived color shifts of restorations and shade guides. 2,3 All-ceramic restorations can be made to match natural teeth in terms of color, surface texture, and translucency 4 ; therefore, they address the demand for esthetic restorations. 5,6 The optical properties of zirconia have introduced new opportunities for achieving superior esthetics. 1 Based on a clinical evaluation of shade matching maintenance of an all-ceramic system, 97 to 100% of restorations were rated alfa. 7 However, one of the clinical problems for all-ceramics is that the allowed thickness for a restoration is limited, generally regarded as 1.5 mm. 4 Shade matching is one of the most pivotal esthetic tasks. Although shade matching is usually performed by visual methods, instrumental color taking enhances the validity of visual shade matching. 8,9 Shade matching performance has been improved through the development of new shade guides and electronic color taking devices for dentistry. 1 Electronic color taking devices showed excellent repeatability, 10 and the use of a spectrophotometer (SP) allowed accurate color evaluation of teeth and restorations. 11 However, the color parameters measured by instruments vary by the measurement protocol. 12 The perceived color of an object is decided by reflected and transmitted visible light, and an object can only reflect and transmit the spectrum of light that shines on it. Since lightings show varied source-dependent spectral power distributions (SPDs), shade matching performance is highly influenced by the light sources. 13 Therefore, the impact of illuminating lights on the color of dental substances is a significant clinical concern. 3 Metameric colors are color stimuli of identical tristimulus values calculated based on the reflectance values under a particular light source, but with different spectral reflectance values, 14 and metamerism is probably the largest single cause of industrial shade matching problems. 15 Since the SPDs of popular ambient light sources such as incandescent lamps, fluorescent lamps, and daylight differ, the color of dental substances changes according to the illuminants used in SP or real light sources. 3,[16][17][18] It has been confirmed that the instrumental color values of teeth, restoratives, and shade guides vary by the standard illuminant used in SP. [19][20][21][22][23][24] It was also reported that the color shift of all-ceramics by the switch of illuminants in SP was clinically perceptible. 18 Perceptible color differences were observed in shade guide tabs due to the switch of illuminants in SP. 8,23,24 As to the observer factor, shade matching performance was affected by the color temperature of the illuminating lights; lower color temperature light decreased correct shade matching. 25 Therefore, careful control of lighting conditions is essential to achieve an optically pleasing restoration. 26 Although daylight is regarded as an ideal light source, it cannot be easily standardized because of its variability by weather, time of the day, and season of the year.
Therefore, the Commission Internationale de l'Eclairage (CIE) mathematically defined ambient lights. CIE standard illuminant D65 is defined to represent a phase of daylight with a color temperature of 6,500 K, illuminant A is defined to represent an incandescent light (2,856 K), and illuminant F9 is defined to represent a fluorescent lamp light (4,150 K). 27 If teeth and restorations were opaque, the influences of the type of instrument, the illuminating and measuring configuration, and the kind of illuminant or light source on color determination would be limited. 28 However, color taking of translucent substances by SP results in deviated color values compared with the real color perceived by naked eyes. 29 These deviations are mainly caused by the edge-loss effect due to the small measurement aperture of SP, 12,30 the thickness of the translucent layer, and background conditions. 31 These distortions in color values measured by SP decrease when color is taken by a spectroradiometer (SR). SR does not show the edge-loss effect, and its illuminating configuration is similar to that of ambient lighting conditions; therefore, the simulation of human color vision in this kind of instrument is higher than that in the conventionally used SP. Light source-dependent color shifts of a shade guide were determined by SR. 3 However, the properties of light sources used in SR should be further specified, [32][33][34] because the CIE illuminants are mathematically defined, 27 whereas the SPDs of real light sources vary by the type, brand, and configuration of the source.
Visual thresholds for color differences are applied to correlate the instrumental color values with clinical evaluation. Although the threshold for acceptability was reported to be 3.5 color difference (ΔE*ab) units and that for perceptibility 1.8 ΔE*ab units based on SP readings, 35 2.6 ΔE*ab units was considered the clinically perceptible threshold, while 5.5 ΔE*ab units was considered the clinically acceptable threshold based on SR readings. 36 Human color vision is categorized into colorimetry, sensation, perception, and visualization. 37 Since instrumental color taking is in the colorimetry domain and the perceptible/acceptable thresholds are in the perception domain, correlating the two domains needs careful interpretation.
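To make the use of these thresholds concrete, the following minimal Python sketch classifies a measured color shift against the SR-based thresholds cited above. The threshold constants are taken from the text; the function name and the example value are illustrative assumptions only.

# SR-based visual thresholds cited above (CIELAB Delta E*ab units).
PERCEPTIBILITY_THRESHOLD = 2.6   # clinically perceptible (SR readings)
ACCEPTABILITY_THRESHOLD = 5.5    # clinically acceptable (SR readings)

def classify_color_shift(delta_e_ab: float) -> str:
    """Map a Delta E*ab color shift to a clinical interpretation."""
    if delta_e_ab < PERCEPTIBILITY_THRESHOLD:
        return "imperceptible"
    if delta_e_ab <= ACCEPTABILITY_THRESHOLD:
        return "perceptible but acceptable"
    return "unacceptable"

# Example: the mean color shift reported below for the D65 -> A switch
print(classify_color_shift(6.7))  # -> "unacceptable"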
Although there have been reports on the influence of illuminants on the SP-based color shifts of dental substances, [16][17][18][38][39][40] limitations in SP color taking might have distorted the experimental results of those studies. Moreover, the illuminating configuration in the SP instrument is different from that in clinical conditions. Therefore, the purpose of this study was to determine the influence of the switch of real light sources, simulating the CIE standard illuminants D65, A, and F9, on the SR-based color shift of clinically simulated ceramics. The null hypothesis was that the shifts in color and the three color coordinates (CIE L*, a*, and b*) would not be influenced by the switched light, shade of veneer ceramics, or brand of ceramics.
MATERIALS AND METHODS
Specimens of seven core ceramics were fabricated, 11 mm in diameter, following the manufacturers' instructions. VITA Lumin A2 shade (VITA Zahnfabrik, Bad Säckingen, Germany) was selected. Thickness of the specimens was controlled with a polishing machine (AM Technology, Asan, Chungnam, Korea) to the manufacturers' recommended thickness required to mask a discolored abutment (Table 1). A sintering ceramic (VITA VM 7; VITA Zahnfabrik) was used as a reference core material.
Veneer ceramics were prepared for each core material (Table 1 and Table 2), with the final thickness of the layered specimens being 1.5 mm. 4 Two shades corresponding to the A2 and A3 shades (VITA Zahnfabrik) were selected. Thus, layered specimens were divided into A2- and A3-veneered groups. Seven specimens were made for each brand of the core and veneer ceramics. The number of specimens was determined based on previous color studies, in which generally five specimens were investigated. [41][42][43] Detailed specimen preparation procedures have been reported previously. 31 When the color of layered specimens was measured (Table 2), the corresponding veneer specimen was laid over a core specimen. In this layering procedure, one veneer specimen for each material, representing the mean color value of seven specimens, was used. When layering, a drop of optical fluid (refraction fluid, 1.5 index; Cargille Lab, Cedar Grove, NJ, USA) was applied between the veneer and core specimens for an optical connection. 31 Color of the layered specimens was measured according to the CIE L*a*b* color scale over a white tile. Spectral reflectance values were obtained from 380 to 780 nm at 2 nm intervals (Spectrawin 2.0; Photo Research), which were converted to the CIE L*, a*, and b* values. Chroma was calculated as C*ab = (a*^2 + b*^2)^(1/2), and color shift was calculated as ΔE*ab = [(ΔL*)^2 + (Δa*)^2 + (Δb*)^2]^(1/2). 27 Vectorial shifts of lightness and chroma, and those of CIE a* and b*, from the values under the D65 simulator to those under the A or F9 simulators were determined. Amounts of shifts in color, lightness (CIE L*), CIE a* and b*, and also chroma, by the switch of lights were calculated. Influence of the kind of switched light (A or F9), shade of veneer ceramics (A2 or A3), and brand of core ceramics (n=8) on the shifts in color and color coordinates was evaluated with a three-way analysis of variance (ANOVA, α=.05). Brand was used as a factor instead of type of ceramics, because ceramics of the same type could not be regarded as behaving in the same pattern under the switch of lights.
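As a minimal illustration of the chroma and color shift formulas above, the following Python sketch computes C*ab and ΔE*ab from two CIE L*a*b* readings; the L*a*b* values used here are hypothetical, not measured data from this study.

import math

def chroma(a_star: float, b_star: float) -> float:
    # C*ab = (a*^2 + b*^2)^(1/2)
    return math.hypot(a_star, b_star)

def delta_e_ab(lab_1, lab_2) -> float:
    # Delta E*ab = [(dL*)^2 + (da*)^2 + (db*)^2]^(1/2)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab_1, lab_2)))

# Hypothetical readings for one specimen under the D65 and A simulators
lab_d65 = (75.0, 2.0, 18.0)
lab_a = (75.1, 8.5, 18.7)

print(round(delta_e_ab(lab_d65, lab_a), 1))                 # color shift: 6.5
print(round(chroma(*lab_a[1:]) - chroma(*lab_d65[1:]), 1))  # chroma shift: 2.4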
RESULTS
Amounts of shifts by the switch of lights are listed in Table 3 and Table 4. The range of shifts in color by the switch from D65 to A was 5.9 to 7.7 (mean ± standard deviation: 6.7 ± 0.6), that of lightness (the value under the A simulator minus that under D65) was -1.3 to 1.6 (0.1 ± 0.8), that of CIE a* was 5.6 to 7.6 (6.5 ± 0.6), that of CIE b* was -0.1 to 1.3 (0.7 ± 0.4), and that of chroma was 1.1 to 2.6 (1.9 ± 0.4). The range of shifts in color by the switch to F9 was 7.7 to 10.2 (9.2 ± 0.8), that of lightness was 5.9 to 7.0 (6.4 ± 0.4), that of CIE a* was -0.9 to 0.1 (-0.4 ± 0.2), that of CIE b* was 4.9 to 7.8 (6.5 ± 0.9), and that of chroma was 4.9 to 7.7 (6.5 ± 0.9).
Vectorial shifts of lightness and chroma are presented in Fig. 1 and Fig. 2, and vectorial shifts of CIE a* and b* are presented in Fig. 3 and Fig. 4.
DISCUSSION
The null hypothesis of the present study was rejected because all the color values were influenced by the three factors. Regarding the shifts of color coordinates by the switch of lights, CIE L* values under F9 were higher than those under D65 (Fig. 1 and Fig. 2), which might be caused by a difference in the light intensities of the two simulators. However, it was confirmed that all three simulators irradiated similar light intensities. 3 Therefore, these shifts seem to reflect light-switch-induced lightness changes, which might be partially caused by fluorescent emission or other optical phenomena. As to the shifts in CIE a* and b* (Fig. 3 and Fig. 4), these shifts clearly reflected the SPDs of the switched lights. Fluorescent light tends to accentuate blue color, whereas incandescent light accentuates the yellow-red range. 13 In the present study, when the light was switched from D65 to A, red and yellow hue increased (Fig. 3 and Fig. 4). When switched from D65 to F9, yellow hue and a small amount of green hue increased (increased CIE b* and decreased a*).
With dental ceramics, acceptability thresholds in color parameters have been determined. 35 The acceptability thresholds were ΔL' = 2.4, ΔC' = 3.2, and ΔH' = 3.2. These parameters are used in the CIEDE 2000 color difference formula, 44 and indicate the differences in the CIE L*a*b* lightness, chroma, and hue. Therefore, the thresholds for ΔL' and ΔC' were compared with the lightness and chroma shifts of the present study. Lightness shifts by the switch from D65 to A (range: -1.3 to 1.6) were in the acceptable range (ΔL' < 2.4), while those from D65 to F9 (5.9 to 7.0) were not acceptable (Table 3 and Table 4). Chroma shifts by the switch from D65 to A (1.1 to 2.6) were in the acceptable range, while those from D65 to F9 (4.9 to 7.7) were not acceptable. Therefore, the shifts in color, lightness, and chroma by the switch to F9 could be regarded as visually greater compared with those by the switch to A. As to the threshold ΔE*ab values, the thresholds based on SR readings 36 were referenced in the present study. Although the experimental methods were not the same, when the visually acceptable threshold (ΔE*ab < 5.5) is applied, the color shifts in all specimens by both the A and F9 switches were unacceptable (ΔE*ab = 5.9 to 7.7, and 7.7 to 10.2, respectively).
The influence of illuminants on the color shifts of shade guide tabs, based on SP readings, has been determined, and the color differences between the values relative to illuminants A and D65 were in the range of 0.9 to 2.7 ΔE*ab units. 20 In the present study with ceramics, the corresponding values were in the range of 5.9 to 7.7 (Table 3 and Table 4), which were higher than those of the shade guide tabs. The shifts in color, lightness, and chroma of simulated all-ceramic specimens relative to three standard illuminants of SP have also been compared. 18 In that study, the range of color shifts by the switch from D65 to A was 1.5 to 3.6 ΔE*ab units, and that for the D65 to F2 switch was 1.3 to 3.0. Lightness shifts (ΔL*) were 0.6 to 1.2 for the A switch and 0.5 to 0.9 for the F2 switch. Chroma shifts (ΔC*ab) were 0.5 to 1.4 for the A switch and 1.2 to 2.3 for the F2 switch. Compared with the results of the present study, the amounts of the SP-based shifts were smaller than those measured by SR in the present study. Plausible causes for these discrepancies might lie in the differences 1) in the measurement geometries of SP and SR, 2) between the illuminants and real light sources (although the SPDs of the F2 and F9 simulators are similar), and 3) in the illuminating configuration. We think that the amounts of shift measured by SR in the present study are more clinically relevant than those determined by SP. In any case, the color shifts by the switch of real light sources in ceramic materials are higher than those previously reported based on SP readings.
Color shifts of a shade guide due to the switch of three light sources have been determined by SR. 3 In that study, the range of color shifts by the switch from the D65 simulator to the A simulator was 4.0 to 9.1 ΔE*ab units, and that for the D65 to F9 switch was 3.2 to 8.5 ΔE*ab units. Compared with the ceramics of the present study, the color shifts in the corresponding shade guide tabs showed a similar trend, but were not the same (2M2 and 2M3 in Fig. 1 to Fig. 4). Based on these findings, it was confirmed that the shifts in color and color coordinates in clinically simulated ceramics are not the same as those of the corresponding shade guide tabs; therefore, a color matched with a shade guide under a particular light source could be mismatched under a different light source.
Core and veneer specimens were optically connected by an optical fluid instead of being fired together, which is a limitation of the present study. Besides, the shape and size of clinical restorations are different from the columniform specimens used in the present study, which might have caused discrepancies. Further in vivo studies carried out under clinical conditions should be performed.
CONCLUSION
Within the limitations of this study, perceptible color shifts of clinically simulated ceramics under different ambient light sources were confirmed by spectroradiometer readings. Color shifts under different light sources were in the clinically unacceptable range (ΔE*ab > 5.5), which should be considered together with the inconsistencies in light-dependent color shifts among shade tabs, teeth, and restorations. Color matching and shade compatibility evaluation should be performed under optimal lighting conditions that simulate the light source most relevant to the patient. | 2016-05-12T22:15:10.714Z | 2013-08-01T00:00:00.000 | {
"year": 2013,
"sha1": "6133f47cd04de41a7622feb4cd4ba4fec43e89f0",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4047/jap.2013.5.3.262",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6133f47cd04de41a7622feb4cd4ba4fec43e89f0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
46529077 | pes2o/s2orc | v3-fos-license | Influence of Nozzle Orifice Geometry and Fuel Properties on Flow and Cavitation Characteristics of a Diesel Injector
Cavitation refers to the formation of bubbles in a liquid flow leading to a two-phase mixture of liquid and vapor/gas, when the local pressure drops below the vapor pressure of the fluid. Fundamentally, the liquid to vapor transition can occur by heating the fluid at a constant pressure, known as boiling, or by decreasing the pressure at a constant temperature, which is known as cavitation. Since vapor density is at least two orders of magnitude smaller than that of liquid, the phase transition is assumed to be an isothermal process. Modern diesel engines are designed to operate at elevated injection pressures corresponding to high injection velocities. The rapid acceleration of fluid in spray nozzles often leads to flow separation and pockets of low static pressure, prompting cavitation. Therefore, in a diesel injector nozzle, high pressure gradients and shear stresses can lead to cavitation, or the formation of bubbles.
Introduction
Cavitation in diesel fuel injectors can be beneficial to the development of the fuel spray, since the primary break-up and subsequent atomization of the liquid fuel jet can be enhanced. Primary breakup is believed to occur in the region very close to the nozzle tip as a result of turbulence, aerodynamics, and the inherent instability caused by the cavitation patterns inside the injector nozzle orifices. In addition, cavitation increases the liquid velocity at the nozzle exit due to the reduced exit area available for the liquid. Cavitation patterns extend from their starting point around the nozzle orifice inlet to the exit, where they influence the formation of the emerging spray. The improved spray development is believed to lead to a more complete combustion process, lower fuel consumption, and reduced exhaust gas and particulate emissions. However, cavitation can decrease the flow efficiency (discharge coefficient) due to its effect on the exiting jet. Also, imploding cavitation bubbles inside the orifice can cause material erosion, thus decreasing the life and performance of the injector. Clearly, an optimum amount of cavitation is desirable, and it is important to understand the sources and amount of cavitation for more efficient nozzle designs.
The flow inside the injector is controlled by dynamic factors (injection pressure, needle lift, etc.) and geometrical factors (orifice conicity, hydrogrinding, etc.). The effects of dynamic factors on the injector flow, spray combustion, and emissions have been investigated by various researchers (Mulemane, 2004; Som, 2009a, 2010a; Payri, 2009). There have also been experimental studies concerning the effects of nozzle orifice geometry on global injection and spray behavior (Bae, 2002; Blessing, 2003; Benajes, 2004; Han, 2002; Hountalas, 2005; Payri, 2004, 2005, 2008; Som, 2009c). The literature review indicates that while the effect of orifice geometry on the injector flow and spray processes has been examined to some extent, its influence on engine combustion and emissions is not well established (Som, 2010b, 2011). To the best of our knowledge, the influence of nozzle geometry on spray and combustion characteristics has also not been studied numerically, mainly due to the complicated nature of the associated flow processes. These form a major motivation for the present study, i.e., to examine the effects of nozzle orifice geometry on inner nozzle flow under diesel engine conditions. With increasingly stricter emission regulations and greater demand for fuel economy, the injector has perhaps become the most critical component of modern diesel engines. Consequently, it is important to characterize the effects of orifice geometry on injection, atomization, and combustion behavior, especially as the orifice size keeps getting smaller and the injection pressure higher. In order to achieve the proposed objectives, we first examine the effects of orifice geometry on the injector flow, including the cavitation and turbulence generated inside the nozzle.
Biofuels are an important part of our country's plan to develop diverse sources of clean and renewable energy. These alternative fuels can help increase our national fuel security through renewable fuel development while simultaneously reducing emissions from the transportation sector. Biodiesel is a particularly promising biofuel due to its compatibility with the current fuel infrastructure geared toward compression-ignition engines. Using biodiesel as a blending agent can prolong the use of petrodiesel. Biodiesel is also easily produced from domestic renewable resources such as soy, rapeseed, algae, animal fats, and waste oils. Our literature search (Som, 2010b) identified relatively few studies dealing with the injection and spray characteristics of biodiesel fuels. Since there are significant differences in the thermo-transport properties of petrodiesel and biodiesel fuels, the injection and spray characteristics of biodiesel can be expected to differ from those of petrodiesel. For instance, due to differences in vapor pressure, surface tension, and viscosity, the cavitation and turbulence characteristics of biodiesel and diesel fuels inside the injector may be significantly different. The injector flow characteristics determine the boundary conditions at the injector orifice exit, including the rate of injection (ROI) profile as well as the cavitation and turbulence levels; this can have a significant influence on the atomization and spray characteristics, and consequently on engine performance. Som et al. (Som, 2010b) compared the injection and spray characteristics of diesel and biodiesel (from soy-based feedstock) using an integrated modeling approach. This modeling approach accounts for the influence of in-nozzle flow effects such as cavitation and turbulence (Som, 2010a) on spray-combustion development using the recently developed Kelvin Helmholtz-Aerodynamic Cavitation Turbulence (KH-ACT) primary breakup model (Som, 2009b, 2010c). Another objective of the current study is to demonstrate a framework within which boundary conditions for spray and combustion modeling for different orifice shapes and alternative fuels of interest can be obtained from high-fidelity nozzle flow simulations.
Computational model
The commercial CFD software FLUENT v6.3 was used to perform the numerical simulation of flow inside the nozzle. FLUENT employs a mixture-based model as proposed by Singhal et al. (Singhal, 2002). The two-phase model considers a mixture comprising liquid fuel, vapor, and a non-condensable gas. While the gas is compressible, the liquid and vapor are considered incompressible. In addition, a no-slip condition between the liquid and vapor phases is assumed. The mixture properties are then computed by using the Reynolds-averaged continuity and momentum equations (Som, 2009a).
where μ_t is the turbulent viscosity. In order to account for large pressure gradients, the realizable k-ε turbulence model is incorporated along with non-equilibrium wall functions, with the standard transport equations for the turbulent kinetic energy k (including its production term P) and the dissipation rate ε. The turbulent viscosity is modeled for the whole mixture. The mixture density and viscosity are calculated using the following equations:

ρ = α_v ρ_v + α_g ρ_g + (1 − α_v − α_g) ρ_l
μ = α_v μ_v + α_g μ_g + (1 − α_v − α_g) μ_l

where ρ and μ are the mixture density and viscosity, respectively, and the subscripts v, l, g represent the vapor, liquid, and gas, respectively. The mass fractions (f) and volume fractions (α) of each phase are related as α = f ρ/ρ_phase. The mixture density can then be expressed as:

1/ρ = f_v/ρ_v + f_g/ρ_g + (1 − f_v − f_g)/ρ_l

The vapor transport equation governing the vapor mass fraction f_v is as follows:

∂(ρ f_v)/∂t + ∂(ρ u_i f_v)/∂x_i = ∂/∂x_i (Γ ∂f_v/∂x_i) + R_e − R_c

where u_i is the velocity component in a given direction (i = 1, 2, 3), Γ is the effective diffusion coefficient, and R_e, R_c are the vapor generation and condensation rate terms (Brennen, 1995), computed as:

R_e = C_e (√k/σ) ρ_l ρ_v [2(P_v − P)/(3 ρ_l)]^(1/2) (1 − f_v − f_g),  for P < P_v
R_c = C_c (√k/σ) ρ_l ρ_l [2(P − P_v)/(3 ρ_l)]^(1/2) f_v,  for P > P_v

where σ and P_v are the surface tension and vapor pressure of the fluid, respectively, and k and P are the local turbulent kinetic energy and static pressure, respectively. An underlying assumption here is that the phenomenon of cavitation inception (bubble creation) is the same as that of bubble condensation or collapse. Turbulence-induced pressure fluctuations are accounted for by raising the phase-change threshold pressure at a specified temperature (P_sat) as:

P_v = P_sat + 0.5 (0.39 ρ k)

The source and sink terms above are obtained from the simplified solution of the Rayleigh-Plesset equation (Brennen, 1995). No-slip boundary conditions at the walls and a symmetry boundary condition at the centerline are employed for the HEUI 315-B injector simulations (cf. Figure 3a).
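As a rough illustration of the model equations above, the following Python sketch evaluates the mixture properties and the evaporation/condensation rate terms. It is a sketch of the full cavitation model as summarized here, not FLUENT's implementation; the model constants C_e = 0.02 and C_c = 0.01 and all numerical inputs are assumptions used only for illustration.

import math

def mixture_properties(alpha_v, alpha_g, rho_l, rho_v, rho_g, mu_l, mu_v, mu_g):
    """Volume-fraction-weighted mixture density and viscosity."""
    alpha_l = 1.0 - alpha_v - alpha_g
    rho = alpha_v * rho_v + alpha_g * rho_g + alpha_l * rho_l
    mu = alpha_v * mu_v + alpha_g * mu_g + alpha_l * mu_l
    return rho, mu

def phase_change_rates(p, p_sat, k, rho, rho_l, rho_v, sigma, f_v, f_g,
                       c_e=0.02, c_c=0.01):
    """Evaporation (R_e) and condensation (R_c) rate terms; turbulence
    raises the phase-change threshold pressure above P_sat."""
    p_v = p_sat + 0.5 * (0.39 * rho * k)         # turbulence-corrected threshold
    if p < p_v:                                   # bubble growth
        r_e = (c_e * math.sqrt(k) / sigma * rho_l * rho_v
               * math.sqrt(2.0 * (p_v - p) / (3.0 * rho_l))
               * (1.0 - f_v - f_g))
        return r_e, 0.0
    r_c = (c_c * math.sqrt(k) / sigma * rho_l * rho_l   # bubble collapse
           * math.sqrt(2.0 * (p - p_v) / (3.0 * rho_l)) * f_v)
    return 0.0, r_c

# Illustrative diesel-like state inside the orifice
print(mixture_properties(0.1, 0.0, 830.0, 0.1, 1.2, 2.4e-3, 1.0e-5, 1.8e-5))
print(phase_change_rates(p=4.0e3, p_sat=1.0e3, k=50.0, rho=750.0,
                         rho_l=830.0, rho_v=0.1, sigma=0.02, f_v=1e-5, f_g=0.0))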
Results and discussion
This section will first present a new, improved criterion for cavitation inception for production injector nozzles. This new criterion provides a tool for assessing cavitation under the turbulent regimes typical in diesel injector nozzles. The influence of nozzle orifice geometry on in-nozzle flow development will be presented next. The influence of fuel properties such as density, viscosity, surface tension, and vapor pressure on nozzle flow characteristics will then be presented. Cavitation and turbulence generated inside the nozzle due to geometry and fuel changes will also be quantified.
An Improved criterion for cavitation inception
According to the traditional criterion, cavitation occurs when the local pressure drops below the vapor pressure of the fuel at a given temperature, i.e., when p − p_v < 0. This criterion can be represented in terms of a cavitation index (K) as:

K = (p − p_v)/(p_b − p_v)

where p, p_b, p_v are the local pressure, back pressure, and vapor pressure, respectively. This criterion has been extensively used in the cavitation modeling community. However, Winer and Bair (Winer, 1987) and Joseph (Joseph, 1998) independently proposed that the important parameter for cavitation is the total stress, which includes both the pressure and the normal viscous stress. This was consistent with the cavitation experiments in creeping shear flow reported by Kottke et al. (Kottke, 2005), who observed the appearance of cavitation bubbles at pressures much higher than the vapor pressure. Following an approach proposed by Joseph (Joseph, 1998) and Dabiri et al. (Dabiri, 2007), a new criterion based on the principal stresses was derived and implemented. The formulation for the new criterion is summarized below.

Maximum tension criterion: cavitation occurs when −p + 2μ S_11 + p_v ≥ 0.

Minimum tension criterion: cavitation occurs when −p − 2μ S_11 + p_v ≥ 0.

The new criteria can be expressed in terms of the modified cavitation index as:

K = (p − 2μ S_11 − p_v)/(p_b − p_v)

where the strain rate S_11, the largest principal value of the two-dimensional strain-rate tensor, is computed as:

S_11 = [(∂u/∂x)^2 + ((∂u/∂y + ∂v/∂x)/2)^2]^(1/2)

where u, v are the velocities in the x, y directions, respectively.
Under realistic diesel engine conditions, where the flow inside the nozzle is turbulent, turbulent stresses prevail over laminar stresses. Accounting for the effect of the turbulent viscosity μ_t, the new criterion is further modified as:

K = (p − 2 C_t (μ + μ_t) S_11 − p_v)/(p_b − p_v)

where C_t is a model constant. The experimental data from Winklhofer et al. (Winklhofer, 2001) were used for a comprehensive model validation. These experiments were conducted in a transparent, quasi-2-D geometry wherein the back pressure was varied to achieve different mass flow rates. To the best of our knowledge, this experimental dataset is the most comprehensive in terms of two-phase information and inner-nozzle flow properties.
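To show how the three criteria can disagree at the same flow state, the following Python sketch evaluates the classical, laminar-stress, and turbulent-stress cavitation indices as reconstructed above, with inception flagged where an index drops below zero. The normalization, the function names, and all numerical inputs are illustrative assumptions, not values taken from the cited studies.

import math

def principal_strain_rate(du_dx, du_dy, dv_dx):
    """Largest principal value of the 2-D incompressible strain-rate tensor."""
    s_xy = 0.5 * (du_dy + dv_dx)
    return math.sqrt(du_dx ** 2 + s_xy ** 2)

def cavitation_indices(p, p_b, p_v, s11, mu, mu_t, c_t=2.0):
    """Classical, laminar-stress, and turbulent-stress cavitation indices;
    cavitation inception is flagged where an index drops below zero."""
    denom = p_b - p_v
    k_classical = (p - p_v) / denom
    k_laminar = (p - 2.0 * mu * s11 - p_v) / denom
    k_turbulent = (p - 2.0 * c_t * (mu + mu_t) * s11 - p_v) / denom
    return k_classical, k_laminar, k_turbulent

# Illustrative near-inlet state: strong shear, pressure well above p_v.
# Only the turbulent-stress index turns negative, i.e., flags cavitation.
s11 = principal_strain_rate(du_dx=1.0e5, du_dy=5.0e6, dv_dx=0.0)
print(cavitation_indices(p=2.0e5, p_b=4.0e6, p_v=1.0e3,
                         s11=s11, mu=2.4e-3, mu_t=5.0e-2))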
Figure 1 presents the measured cavitation contour at injection and back pressures of 100 and 40 bar, respectively, corresponding to a Reynolds number of approximately 16,000. It is clearly seen from the marked red line that there is a significant amount of cavitation at the orifice inlet. These cavitation contours extend a certain distance inside the orifice. The predicted vapor fraction contour shows no cavitation (blue represents pure liquid). The classical criterion, which is basically another way of representing the predicted vapor fraction contour, captures the same trend, i.e., hardly any cavitation is observed. The laminar criterion shows cavitation inception; however, no advection of the fuel vapor into the orifice is observed. The turbulent criterion seems to capture more cavitation, with C_t = 2 agreeing better with the experimental data than all the other criteria. Figure 2 presents the measured cavitation contour at injection and back pressures of 100 and 20 bar, respectively, corresponding to a Reynolds number of approximately 18,000. It is clearly seen from the marked red line in the experimental image (Winklhofer, 2001) that there is a significant amount of cavitation at the top and bottom of the orifice inlet. These cavitation contours are symmetric in nature and are advected by the flow to reach the nozzle orifice exit.
The cavitation contours join near the orifice exit; thus, the exit is completely covered by fuel vapor. The predicted vapor fraction contour also shows a significant amount of cavitation, represented by the fuel vapor contour (in red). However, there is still a significant amount of liquid fuel (in blue) present at the orifice exit, and the vapor fraction contours do not join together as was the case in the experiments. The classical criterion, which is basically another way of representing the predicted vapor fraction, captures the same trend as the vapor fraction contour. The laminar criterion predicts a marginal improvement over the classical criterion. This is expected, since for high Reynolds number flows the difference between these criteria was observed to diminish (Padrino, 2007). An increase in Reynolds number results in an increase in turbulence levels inside the orifice. Thus, the turbulent stress criterion is seen to improve the predictions of the vapor fraction contours significantly. All the experimentally observed characteristics are captured by the turbulent stress criterion, i.e., the vapor contours from the top and bottom of the orifice are seen to merge together, resulting in pure vapor at the orifice exit. The turbulent criterion seems to capture more cavitation, with C_t = 2 agreeing better with the experimental data than all the other criteria.
The simulations using the new cavitation criterion show significant improvement in the prediction of cavitation contours, especially in the turbulent regime under realistic injection conditions. Future studies will focus on performing such studies in realistic geometries of interest characterized by three-dimensional flow features. The experiments of Winklhofer et al. (Winklhofer, 2001), although performed under realistic injection conditions, do not capture the 3D effects that are essential to flow development.
Effect of nozzle orifice geometry on inner nozzle flow development
This section focuses on capturing the influence of nozzle orifice geometry on in-nozzle flow development, such as cavitation and turbulence, in addition to flow variables such as velocity and discharge coefficient. The base nozzle orifice geometry, which is cylindrical and non-hydroground, is presented first. The single orifice simulated for the full-production, minisac nozzle used in the present study is shown in Figure 3. The nozzle has six cylindrical holes with a diameter of 169 μm at an included angle of 126°. The discharge coefficient (C_d), velocity coefficient (C_v), and area contraction coefficient (C_a), used to characterize the nozzle flow, are described below. The discharge coefficient is calculated from:

C_d = Ṁ_actual / Ṁ_th,  with Ṁ_th = A_th [2 ρ_l ΔP]^(1/2)

where Ṁ_actual is the mass flow rate measured by the rate of injection (ROI) meter (Bosch, 1966), or calculated from the FLUENT simulations, A_th is the nozzle exit area, and Ṁ_th is the theoretical mass flow rate. The three coefficients are related as (Naber, 1996):

C_d = C_v C_a

Here the area contraction coefficient is defined as the ratio of the effective (liquid) flow area at the orifice exit to the geometric exit area.

Figure 4 presents vapor fraction contours for the base and hydroground nozzles at P_in = 1300 bar, P_b = 30 bar, and full needle open position; simulations were performed for diesel fuel (properties shown in Table 2). The 3D view of the cavitation contours shows that vapor generation only occurs at the orifice inlet for both orifices. For the base nozzle, these cavitation contours are advected by the flow to reach the orifice exit. Consequently, the computed area coefficient (C_a) was found to be 0.96 for this case. A smoother orifice inlet (i.e., r/R = 0.014) clearly leads to a decrease in cavitation; the small amount of vapor generated is restricted to the nozzle inlet. Thus, chamfering/rounding the orifice inlet geometry can inhibit cavitation by allowing a smoother entry into the orifice, and also improve the nozzle flow efficiency (C_d), as discussed below. This is due to the fact that flow uniformity in the orifice entrance region is significantly enhanced for the hydroground nozzle; hence, cavitation is almost completely inhibited. This observation is consistent with those reported by other researchers. A 2D cut-plane was constructed passing through the mid-plane. This view also highlights the fact that the hydroground nozzle cavitates significantly less compared to the base nozzle. Figure 6 presents the contours of vapor fraction at the orifice exit plane; at the exit, the base nozzle only predicts pockets of vapor formation, indicating that the remaining vapor is due to advection from the orifice inlet. In the case of the conical (not shown here) and hydroground nozzles, vapor is generated at the orifice inlet but is completely consumed soon after; hence, the exit of the orifice is composed of pure liquid fuel only. Figure 7a presents C_d and injection velocity at the nozzle exit for different pressure drops across the orifice. The back pressure was always fixed at 30 bar; hence, the change in injection pressure resulted in a change in pressure drop across the orifice. The methodology for calculating these parameters was discussed earlier. With an increase in injection pressure, the injection velocity at the orifice exit is seen to increase, which is expected. It should be noted that the injection velocity reported is an average value across the orifice. As expected, the average injection velocity and discharge coefficient are lower for the base nozzle, owing to the presence of cavitation at the orifice exit. The influence of nozzle geometry on turbulence levels at the nozzle orifice exit is investigated in Fig. 7b, since these parameters are directly input into spray simulations as rate profiles for cavitation, turbulence, and fuel mass injected. The different needle lift positions simulated are also shown. The peak needle lift of this injector was 0.275 mm, which corresponds to the full needle open position. The other needle positions simulated are 0.05 mm, 0.1 mm, 0.15 mm, and 0.2 mm open, respectively. A general trend observed is that the turbulent kinetic energy (TKE) increased with needle lift position, which is expected since the injection pressure also increased, resulting in higher Reynolds numbers. TKE and turbulent dissipation rate (TDR) were seen to be higher for the base nozzle case at all needle lift positions. Turbulence is known to play a key role in spray breakup processes; hence, accounting for such differences in turbulence levels between orifices is expected to improve spray predictions. The reason for similar turbulence levels at lower needle lifts is that, at low needle lift positions, the area between the needle and orifice governs the fluid dynamics inside the nozzle. However, at the full needle lift position during the quasi-steady injection period, the orifice plays a critical role in the flow development inside the nozzle. The area coefficient was unity for the hydroground nozzle, which is expected since this orifice inhibits cavitation inception completely. These rate profiles are input for the spray simulations (Som, 2009a, 2010b, 2011).
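A minimal sketch of the flow coefficient relations above, assuming the Bernoulli expression for the theoretical mass flow rate; the actual mass flow rate used here is an illustrative number, not a measured ROI value.

import math

def flow_coefficients(m_dot_actual, a_th, rho_l, delta_p, c_a):
    """Discharge and velocity coefficients from the Bernoulli reference flow:
    m_th = A_th * sqrt(2 * rho_l * dP), C_d = m_actual / m_th, C_d = C_v * C_a."""
    m_dot_th = a_th * math.sqrt(2.0 * rho_l * delta_p)
    c_d = m_dot_actual / m_dot_th
    c_v = c_d / c_a
    return c_d, c_v, m_dot_th

# Illustrative numbers: 169-um orifice, diesel-like density, 1270 bar drop,
# and the area coefficient reported above for the base nozzle (C_a = 0.96)
a_th = math.pi * (169e-6 / 2.0) ** 2
print(flow_coefficients(m_dot_actual=8.0e-3, a_th=a_th,
                        rho_l=830.0, delta_p=1270e5, c_a=0.96))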
Influence of fuel properties on nozzle flow
This section presents the influence of fuel properties on nozzle flow development. As mentioned earlier, the nozzle flow characteristics of biodiesel are compared against those of diesel fuel, since biodiesel is a lucrative blending agent. Table 2 presents the physical properties of diesel and biodiesel (soy-methyl ester) fuels. There are small differences in density and surface tension between these fuels. However, major differences are observed in the viscosity and vapor pressure values. These differences are expected to influence the nozzle flow and spray development. Figure 8 presents the vapor fraction contours for diesel and biodiesel for P_inj = 1300 bar and P_back = 30 bar. The 3-D view of the cavitation contours indicates that vapor generation occurs at the orifice inlet for both fuels. For diesel, these cavitation contours, generated at the upper side of the orifice, reach the orifice exit. In contrast, for biodiesel, the cavitation contours only extend a few microns into the orifice and do not reach the injector exit. Since cavitation plays a significant role in primary breakup, the atomization and spray behavior of these fuels is expected to be different. The mid-plane view also indicates that the amount of cavitation is significantly reduced for biodiesel compared to diesel. This is mainly due to two reasons. 1. The vapor pressure of biodiesel is lower than that of diesel fuel. Cavitation occurs when the local pressure is lower than the vapor pressure of the fuel; hence, a reduction in vapor formation can be expected for fuels with lower vapor pressures. Although the injection pressures are very high, the differences in vapor pressure values are still important for cavitation inception. 2. The viscosity of biodiesel is higher than that of diesel fuel (cf. Table 2). This increased viscosity results in lower velocities inside the sac and orifice, which in turn decreases the velocity gradients. This also reduces the cavitation patterns for biodiesel.
Figure 9 presents contours of the magnitude of velocity at the mid-plane and orifice exit plane for diesel and biodiesel fuels for the case presented in Fig. 8. The flow entering the orifice encounters a sharp bend (i.e., large velocity and pressure gradients) at the upper side of the orifice inlet, causing cavitation in this region, as indicated by the vapor fraction contours. Upstream of the orifice, the velocity distribution appears to be similar for the two fuels. However, at the orifice exit, the contours indicate regions of higher velocity for diesel compared to biodiesel. This is related to the fact that the viscosity of biodiesel is higher than that of diesel fuel (cf. Table 2). The velocity contours at the orifice exit indicate a fairly symmetrical distribution with respect to the y-axis for both fuels. Figure 10 presents the computed fuel injection velocity, mass flow rate, C_d, and normalized TKE at the nozzle exit for different injection pressures. All these parameters are obtained by computing the 3-D flow inside the injector and then averaging the properties at the orifice exit. As expected, with increased injection pressure, the injection velocity and mass flow rate at the orifice exit increase (cf. Fig. 10a). However, the injection velocity, mass flow rate, and discharge coefficient are lower for biodiesel compared to diesel fuel. This difference in injection velocity, and hence in mass flow rate, can be attributed to the significantly higher viscosity of biodiesel. The lower mass flow rate for biodiesel implies that, for a fixed injection duration, a smaller amount of biodiesel will be injected into the combustion chamber compared to diesel. Combined with the lower heating value of biodiesel, this would lead to lower engine output with biodiesel compared to diesel fuel. As indicated in Fig. 10b, the average TKE at the nozzle exit is also lower for biodiesel. This is due to the fact that the Reynolds number is lower for biodiesel because of its higher effective viscosity. This has implications for the atomization and spray characteristics of the two fuels, since the turbulence level at the orifice exit influences primary breakup.
Conclusion
The flow inside the nozzle is critical to the spray, combustion, and emission processes of an internal combustion engine. Inner nozzle flows are multi-scale and multi-phase in nature and are hence challenging to capture both in experiments and in simulations. Cavitation and turbulence generated inside the nozzle are known to influence the primary breakup of the fuel, especially in the near-nozzle region. The authors capture the in-nozzle flow development using the two-phase flow model in the FLUENT software. The influence of the definition of cavitation inception is first analyzed by implementing an improved criterion for cavitation inception under turbulent conditions. While noticeable differences between the standard and advanced criteria for cavitation inception are observed under two-dimensional flow conditions, thorough development and validation are necessary before implementation in real injection flow simulations.
Since the injector nozzle is a critical component of modern internal combustion engines, the influence of orifice geometry and fuel properties on in-nozzle flow development was also characterized. Both cavitation and turbulence were reduced for the hydroground nozzle compared to the base production nozzle. This will also result in significant differences in spray, combustion, and emission behaviour for these nozzles. Biodiesel, being a lucrative blending agent for compression-ignition engine applications, was then compared to diesel fuel with respect to inner nozzle flow development. Cavitation and turbulence generated inside the nozzle were observed to be lower for biodiesel than for diesel fuel. Additionally, boundary conditions in terms of cavitation, turbulence, and flow variables were obtained as a function of time from the detailed nozzle flow simulations for use in spray combustion simulations.
Acknowledgment
The submitted manuscript has been created by UChicago Argonne, LLC, Operator of Argonne National Laboratory ("Argonne"). Argonne, a U.S. Department of Energy Office of Science laboratory, is operated under Contract No. DE-AC02-06CH11357. The U.S. Government retains for itself, and others acting on its behalf, a paid-up nonexclusive, irrevocable worldwide license in said article to reproduce, prepare derivative works, distribute copies to the public, and perform publicly and display publicly, by or on behalf of the Government.
Fig. 1. Comparison between the measured (Winklhofer, 2001) and predicted vapor fraction contours, and cavitation inception regions predicted by different cavitation criteria. The injection and back pressures are 100 bar and 40 bar, respectively.
Fig. 2. Comparison between the measured (Winklhofer, 2001) and predicted vapor fraction contours, and cavitation inception regions predicted by different cavitation criteria. The injection and back pressures are 100 bar and 20 bar, respectively.
Fig. 3. (a) Injector nozzle geometry along with the computational domain. (b) The 3-D grid generated, specifically zooming in on the sac and orifice regions. (c) Zoomed 2-D view of the orifice and sac regions.
Fig. 4. 3D and mid-plane views of vapor fraction contours for the base and hydroground nozzles. Simulations were performed at P_in = 1300 bar, P_b = 30 bar, and full needle open position.
Fig. 5. Velocity vectors plotted at the orifice inlet of the mid-plane for the base and hydroground nozzles presented in the context of the previous figure. The zoomed view clearly shows that the velocity vectors point away from the wall for the base nozzle, while they are aligned with the flow for the hydroground nozzle, ensuring a smooth entry into the orifice that decreases cavitation. As mentioned earlier, the difference in cavitation characteristics plays a central role in spray breakup processes; hence, the spray behavior of a hydroground nozzle is expected to be different from that of the base nozzle.
Fig. 6. Vapor fraction contours at the orifice exit plane (see text).
Fig. 7. (a) Discharge coefficient and injection velocity plotted versus pressure drop across the orifice; (b) turbulence parameters (TKE and TDR) as a function of time for the base and hydroground nozzles shown in Figure 4.
Fig. 8. Vapor fraction contours for diesel and biodiesel inside the injector and at the mid-plane. The simulations were performed at full needle open position with P_inj = 1300 bar and P_back = 30 bar.
Fig. 9. Velocity magnitude contours at the mid-plane and orifice exit plane for diesel and biodiesel (see text).
Fig. 10. Computed flow properties at the nozzle exit versus pressure drop in the injector for diesel and biodiesel fuels: (a) mass flow rate and injection velocity; (b) discharge coefficient and normalized TKE.
Table 1. Geometrical characteristics of the nozzle orifices simulated. The influence of nozzle orifice geometry is characterized by comparing the in-nozzle flow characteristics of the base nozzle against a hydroground nozzle. The hydroground nozzle has the same nominal dimensions as the base nozzle except for the hydrogrinding, which results in a small inlet radius of curvature. The essential features of the nozzle orifices simulated are shown in Table 1.
Table 2. Comparison of the physical properties of diesel and biodiesel (soy-methyl ester) fuels. | 2017-09-16T23:57:51.278Z | 2012-04-20T00:00:00.000 | {
"year": 2012,
"sha1": "c52543aee5488f744da1eea12a4f5276043fd03e",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/35632",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "93870a98ca7684d13190c3a1f21d46780fe325bf",
"s2fieldsofstudy": [
"Engineering",
"Physics",
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
237460378 | pes2o/s2orc | v3-fos-license | The Uncertainty Index and Foreign Direct Real Estate Investments in Developing Economies
Attainment of standards in a country's real estate market that meet international investors' expectations contributes significantly to the real estate sector. However, in developing economies characterized by an environment of uncertainty where stability cannot be achieved, direct investments in real estate can still bring returns to foreign investors. This is because economic uncertainty in developing countries raises the exchange rate. An increase in the exchange rate keeps real estate prices in developing countries relatively low. Foreign investors then take advantage of the low prices to invest in real estate in that country. The study aims to investigate whether uncertainty in developing countries increases foreign direct real estate investments. The study examines the relationship between the uncertainties in selected developing economies in Europe and real estate investments by foreigners in the period 2008-2018. The Gengenbach, Urbain, and Westerlund panel cointegration test and PDOLS coefficient estimation methods were used in the study. According to the analysis results, a 1% increase in the uncertainty index in the economies examined increases foreign direct investments by 5.731%. Since this study is one of the most detailed studies measuring foreign direct real estate investments under conditions of economic uncertainty, it contributes to the literature. To sustainably increase foreigners' direct real estate investments in developing countries, economic and political stability should be prioritized. Facilitating the bureaucratic process, providing tax reductions, making real estate suitable for demand, following an appropriate price policy, and making various environmental regulations will also increase foreigners' direct real estate investments.
residence is much lower than hotel accommodation. For this reason, foreigners prefer to purchase real estate in the countries they settle in [4]. Additionally, the country's provision of high-quality, affordable health services to foreign investors fosters an increase in foreign direct real estate investments in the country [2].
Determination of the exchange rate in economies based on the supply and demand in the market facilitates comparison of price levels between countries. The need for hot money in terms of the balance of payments by economies causes sudden fluctuations in exchange rates. This situation adversely affects foreign trade and the real estate sector [2]. Moreover, changes in exchange rates cause inflation uncertainty. In an environment of inflation uncertainty, investors invest in real estate as a security factor to prevent the depreciation of their capital [5]. However, real estate price bubbles, which can occur even when controlling inflation, damage financial stability by disrupting financial institutions' balance. The real estate price bubbles adversely affect the functioning of the real economy by increasing household consumption and borrowing [6]. For example, it was observed that the Global Financial Crisis in 2008 and economic policy uncertainties negatively affected economic indicators at the micro and macro scales [7].
Economic agents face inflation, political, economic policy, demand, and cost uncertainty in economies. Factors such as wars, fluctuations in oil prices, and panics in international capital markets, as well as increased volatility during an economic crisis, also create uncertainty. Uncertainties in the economy affect the economic decisions of governments, firms, and households [7]. Investors avoid taking risks because of the uncertainties in the economy. In an uncertain environment where stability cannot be achieved, investments decrease and economies shrink. Speculative investments are seen in countries with economic uncertainty, and fixed capital investments in these countries remain low [8]. In this context, developing economies with high economic uncertainties are significantly affected by global economic policies. Therefore, foreign investors in the real estate sector in these countries also reconsider their investment decisions [9].
However, under high economic uncertainty in developing countries, foreign capital leaves, reducing the amount of foreign currency in the country. Thus, the exchange rate in the economy rises. As the exchange rate increases in developing countries, the price of real estate remains relatively low compared to that in developed countries. Foreign investors become more willing to take advantage of the relatively low real estate prices to buy real estate in that country. Thus, relatively low real estate prices enable foreigners to increase their direct real estate investments in developing countries.
Bringing a country's real estate market to standards that will meet international real estate investors' expectations contributes significantly to the country's real estate sector [3]. Foreigners consider characteristics such as the socio-economic situation, political stability, and security of the countries in which they intend to invest in real estate to be essential. Changes in real estate prices are also crucial for foreign investors, who evaluate decisions based on forward-looking predictions. Therefore, the amount of money that a foreign investor will spend on real estate in exchange for foreign currency will affect foreigners' real estate investment [10]. Changes in exchange rates in economies affect inflation uncertainty; inflation uncertainty affects the level of economic uncertainty. The change in the level of uncertainty in economies, in turn, affects foreigners' real estate investment.
The study aims to investigate whether uncertainties in developing economies affect foreigners' real estate investments positively or negatively. In this context, the relationship between real estate investments by foreigners and the uncertainty index is analyzed.
The study consists of 4 sections after the introduction. In the second section, information is given about the studies in the literature examining the variables that affect foreigners' real estate investments. In section 3, the methodology of the study is presented. In section 4, the relationship between real estate investments by foreigners and the uncertainty index in selected emerging economies is analyzed and discussed. The analysis findings are interpreted in the conclusion, which is the 5th section.
2-Literature Review
Research on real estate investments made by foreigners has been noted in the literature. In integrated markets, capital flows to the countries that provide the highest revenue [11]. Regarding real estate investments by foreigners, Gerlowski et al. [12] examined investors' location preferences among Canada, Japan, and the UK for real estate firms in the USA. The analysis used a random effects model that combined time-series and cross-sectional data for 1980-1989. As a result, they found that foreigners preferred to invest in large, developed, and active economies. Cheng et al. [13] used Monte Carlo simulation in an environment of uncertainty in the US, UK, and Japanese economies in 1973-1994. They found that real estate promised to be a potential investment for foreign investors, although it did not provide significant opportunities for investment diversification. In the USA, Miles [14] employed data from 1975 to the third quarter of 2006 and showed that uncertainty negatively impacts housing starts, using a generalized autoregressive conditional heteroskedasticity (GARCH) approach. According to Rodriguez and Bustillo [15], foreign real estate investment in Spain represents about 40% of total foreign direct investment inflows. In this regard, they modeled foreign real estate investment in Spain in terms of demand for tourism services and financial focus, using time series data from 1990 to 2007. As a result, they found that the economic stagnation and decline in housing prices in Spain harmed direct real estate investment inflows. Fereidouni and Bazrafshan [16] analyzed the determinants of housing returns in Iran by analyzing capital appraisals, rents, and total returns. Using the generalized method of moments (GMM) with data from 2000-2007 in Iran's 28 provinces, they investigated the determinants of housing returns in Iran. The findings show that real estate investors in Iran can earn higher returns if they invest in provinces with positive changes in population, GDP, and inflation, and negative changes in the unemployment rate. Salem and Baum [17] reveal the main determinants of foreign direct investment in real estate in selected Middle East and North Africa (MENA) countries. The study used pooled Tobit model techniques on data on foreign direct investments (FDI) in the commercial real estate sector in the period 2003-2009 in Algeria, Egypt, Morocco, Qatar, Saudi Arabia, Turkey, Tunisia, and the UAE. Ultimately, Salem and Baum found that some selected MENA countries attracted more commercial real estate investment than other MENA countries. In their article, Rogers and Koh [18] presented various case studies on global foreign housing investments from Canada, Hong Kong, Singapore, Russia, Australia, and Korea. They concluded that globalizing real estate had become an asset for foreign investors seeking to diversify their investment portfolios. Şit [19]
Fereidouni and Bazrafshan [16] analyzed the determinants of housing returns in Iran by analyzing capital appraisals, rents, and total returns. Using the generalized method of moments (GMM) data from 2000-2007 in Iran's 28 provinces, Fereidouni and Bazrafshan investigated the determinants of housing returns in Iran. Findings show that real estate investors in Iran can earn higher returns if they invest in provinces with positive changes in population, GDP, inflation, and negative unemployment rate changes. Salem and Baum [17] reveal the main determinants of foreign direct investment in real estate in selected Middle East and North Africa (MENA) countries. The study used the pooled Tobit model techniques for data on the foreign direct investments (FDI) in the commercial real estate sector in the period 2003-2009 in Algeria, Egypt, Morocco, Qatar, Saudi Arabia, Turkey, Tunisia, and the UAE. Ultimately, Salem and Baum [19] found that some selected MENA countries attracted more commercial real estate investment than other MENA countries. In their article, Rogers and Koh [18] presented various case studies on global foreign housing investments from Canada, Hong Kong, Singapore, Russia, Australia, and Korea. They concluded that globalizing real estate had become an asset for foreign investors seeking to diversify their investment portfolio. Şit [19] International real estate investment takes place in economies according to exchange rate levels. According to Johnson et al. [20], international real estate investment is susceptible to exchange rate fluctuations. In the study, Johnson et al. [20] used Monte Carlo analysis for London from 1987 to 2003. As a result, the currency swap strategy reduces downside risk in currency fluctuations. Thus, the risk-adjusted exchange rate provides a significant return on international real estate investment. Dapaah and Hwee [21] reviewed the statistical significance of exchange rate risk on the return of international real estate investments in their study. From 1986 to 2007, Dapaah and Hwee [21] used Markowitz's average variance approach in seven Asia Pacific cities; they found that foreign exchange risk has a statistically significant positive effect on the returns of the office investment portfolio. Choi and Park [22] postulate that the adoption of the Euro will increase the volume of Foreign Direct Investment (FDI) in the real estate industry between Germany and European countries. To estimate the impact of the Euro on FDI in the real estate industry, they used a modified weight equation, Pooled OLS, and random-effects models. Their results from panel data from 34 countries over the period 1986-2009 showed that the Euro contributed to the bilateral increase in FDI in European countries' real estate industry.
Foreign real estate investors also respond to real estate prices in the economies where they wish to invest. In this context, Fereidouni and Masron [23] examined the effects of real estate market factors on foreign real estate investment (FREI) using panel data techniques, exploring the relationships between real estate market factors and FREI for 31 countries between 2000 and 2008. The results showed that lower financing costs and higher levels of transparency in the real estate market attracted greater amounts of FREI to the countries studied. The article also revealed that foreign real estate investors prefer countries with higher property prices. Fereidouni et al. [24] analyzed the relationships among interest rates, inflation, FDI in the real estate sector, economic growth, and property prices, examining OECD countries from 1995 to 2008 with a dynamic panel cointegration technique. Empirical results showed that real estate FDI does not cause appreciation of property prices and does not contribute to short- or long-term economic growth in OECD countries. Wokker and Swieringa [25] used fixed-effects panel regression techniques to estimate the impact of foreign demand for Australian residential real estate on property prices. According to their results, the supply of residential properties was effectively fixed in the short term between July 2010 and March 2015.
Moreover, any increase in demand, whether domestic or foreign, led to higher house prices. Rogers et al. [26] surveyed over 900 Sydney residents between 2009 and 2015 on perceptions of foreign, and particularly Chinese, investment. They found that foreign investment, especially by Chinese buyers, caused high levels of public anxiety and discontent among Sydney residents. The high house prices in Sydney also caused widespread concerns about housing affordability.
Uncertainty in an economy affects macroeconomic variables; similarly, it affects house prices and housing returns. Accordingly, Özdemir and Saygılı [27] investigated whether macroeconomic uncertainty causes instability in traditional money demand models. The results of a VAR analysis for Turkey over the period 1992Q1-2008Q3 revealed stable long-run relationships between money balances, income, and the interest margin once measures of economic uncertainty were added to the system. Bulut and Karasoy [28] examined the transmission of policy decisions to financial markets in times of increasing or decreasing monetary policy uncertainty, analyzing Turkey from June 2010 to January 2015 and measuring monetary policy uncertainty with the CBRT Expectations Survey. Their empirical findings show that the monetary policy transmission mechanism is closely related to policy-related uncertainty: for example, a surprise policy rate increase causes the Turkish lira to appreciate against the US dollar in an environment of low uncertainty and to depreciate under high uncertainty. Aye [29] examines whether economic policy uncertainty (EPU) in eight developing economies (Brazil, Chile, China, India, Ireland, Russia, South Africa, and South Korea) affects real housing returns, employing a cross-sample validation (CSV) Granger causality approach. The CSV results show no evidence that economic policy uncertainty Granger-causes real housing returns, except in Chile and China. Wang et al. [30] used the Economic Policy Uncertainty Index and several Chinese macroeconomic datasets from 1999 to 2014 to analyze the impact of macroeconomic variables on housing price volatility under different policy uncertainty regimes, applying a logistic smooth transition vector autoregressive (LSTVAR) model and generalized impulse response functions. They found that macroeconomic policy uncertainty leads to an increase in housing prices.
Moreover, macroeconomic shocks affect housing price volatility differently under different policy uncertainty regimes. Under high policy uncertainty, expansionary monetary policy facilitates increases in house prices, while tight monetary policy produces a 'house price puzzle' that makes it difficult for house prices to stabilize. Thanh et al. [31] construct a new measure of uncertainty specific to the real estate sector for 1970M7-2017M12. Comparing this Real Estate Uncertainty (REU) measure with Macro Uncertainty (MU), they show that the measured effects on housing prices roughly double when REU is used in place of MU. Vector autoregression and Granger-causality results also show that the uncertainty measure affects housing starts and prices. Aye et al. [32] examine the effects of economic uncertainty on booms and busts in the housing sectors of 12 OECD countries. Using a discrete-time duration (hazard) model, they found that economic uncertainty statistically significantly increases the likelihood of declines in the housing market.
The results also suggest that housing may play a protective role against uncertainty. Andre et al. [33] analyze whether economic policy uncertainty (EPU) in the USA predicts movements in real housing returns. They find that EPU affects both real housing returns and their volatility; in particular, EPU leads to an asymmetric fall in housing returns. Çepni et al. [34] investigate the role of real estate-specific uncertainty in estimating the conditional distribution of US housing sales growth over 1970:07-2017:12, based on Bayesian Model Averaging (BMA). They found that real estate uncertainty decreases housing sales growth. Gupta et al. [35] analyze the role of macroeconomic uncertainty in estimating the synchronization of housing price movements across the US states and the District of Columbia (DC) over 1975:02-2019:12. Using a Bayesian dynamic factor model and a random forests machine-learning approach, they found that macroeconomic uncertainty has predictive power for the (stochastic) volatility of the national factor and for aggregate US housing prices. Previous studies have thus examined either the effect of economic uncertainty on macroeconomic variables or the dependence of international real estate investment on real estate prices and exchange rate levels. This study, however, contributes to the literature as a detailed study measuring foreigners' direct real estate investments under conditions of economic uncertainty.
3-Research Methodology
The study examines the uncertainty indices of Poland, Greece, the Slovak Republic, Hungary, Turkey, and Slovenia over the 2008-2018 period, together with foreign direct real estate investments in these countries. Because cross-sectional dependence between the units is assumed, second-generation unit root and cointegration tests were performed. Accordingly, the Cross-Sectionally Augmented Dickey-Fuller (CADF) test was used to determine whether the series are stationary; the CADF test generalizes the ADF regression by adding the cross-sectional means of the lagged levels and of the first differences of the individual series, thereby accounting for cross-sectional dependence. Stationarity is tested country by country with the CADF statistic, and for the panel as a whole with the CIPS statistic, which is obtained by averaging the country-level CADF statistics [36]. The following flowchart illustrates the econometric methodology followed in this paper. If a long-run relationship is found between the variables, the long-run parameters can be estimated using the panel dynamic OLS (PDOLS) method. The PDOLS estimator augments the cointegrating regression with leads and lags of the first differences of the I(1) variables, thereby eliminating the biases of the static regression by introducing dynamic elements into the model [37]. The most important advantages of the group-mean panel DOLS technique, a between-group estimator, over within-group panel DOLS methods are that it produces smaller size distortions and better estimates when the cointegrating vectors are heterogeneous. Therefore, the PDOLS method was used to estimate the long-run coefficients of the variables. The expanded cointegration equation for PDOLS estimation can be expressed as follows [38].
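The referenced equation does not survive in this version of the text. A standard statement of the group-mean panel DOLS regression consistent with the description above, using illustrative notation ($y_{it}$ for foreign direct real estate investment and $x_{it}$ for the uncertainty index of country $i$ at time $t$; the paper's own symbols are not reproduced here), is

$$y_{it} = \alpha_i + \beta_i x_{it} + \sum_{j=-q_i}^{q_i} \gamma_{ij}\, \Delta x_{i,t+j} + u_{it},$$

where the leads and lags of $\Delta x_{i,t+j}$ absorb endogeneity and serial correlation in the static regression, and the group-mean PDOLS estimate of the long-run coefficient is the average $\hat{\beta} = N^{-1} \sum_{i=1}^{N} \hat{\beta}_i$ of the country-level estimates.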
4-Results and Discussion
In this context, the study examines foreign real estate investments in European countries assumed to face high economic uncertainty: Poland, Greece, the Slovak Republic, Hungary, Turkey, and Slovenia. The uncertainty index of these countries for the years 2008-2018 is shown in Figure 1. According to Table 1, the series are not stationary in levels; after first differencing, the series are stationary at the 5% significance level.
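To make the reported unit root testing concrete, the following is a minimal sketch of the country-level CADF regression described in the methodology section; the function name, the zero-lag specification, and the numpy implementation are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def cadf_tstat(y, panel_mean):
    """t-statistic on the lagged level in the CADF regression
    dy_t = a + b*y_{t-1} + c*ybar_{t-1} + d*dybar_t + e_t,
    where ybar is the cross-sectional average of the panel.
    y and panel_mean are 1-D numpy arrays of equal length."""
    dy = np.diff(y)                  # first difference of the country series
    dybar = np.diff(panel_mean)      # first difference of the panel average
    X = np.column_stack([
        np.ones(len(dy)),            # intercept
        y[:-1],                      # lagged own level
        panel_mean[:-1],             # lagged cross-sectional average
        dybar,                       # differenced cross-sectional average
    ])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
    return beta[1] / se[1]

# The CIPS statistic for the panel is the average of the country-level
# CADF statistics, e.g.: cips = np.mean([cadf_tstat(y_i, ybar) for y_i in series])
```

The resulting statistics are compared with simulated CADF/CIPS critical values rather than the standard Dickey-Fuller ones, which is what underpins the stationarity conclusions reported in Table 1.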
The homogeneity of the model was tested with the Swamy S test. According to the findings in Table 2, the probability value of the chi-squared statistic is below the significance level, so the null hypothesis of homogeneity is rejected: the model is heterogeneous. In this case, it is appropriate to rely on the results of heterogeneous cointegration tests and to use estimation methods suggested for heterogeneous panels [41]; a second-generation panel data analysis was therefore deemed necessary. In line with these results and the assumptions of inter-unit correlation and heterogeneity, the panel cointegration analysis was performed using the Gengenbach, Urbain & Westerlund panel EC cointegration test [42]. Since the p-value is below 0.01, the null hypothesis of no cointegration was rejected: there is a cointegration relationship between the uncertainty index and foreign direct real estate investments. According to Table 4, this relationship holds for the panel as a whole. According to the PDOLS results, a 1% increase in the uncertainty index in the analyzed countries increases foreign direct real estate investments by 5.731%.
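Read as an elasticity, and assuming (an assumption here, since the estimating equation is not reproduced above) that both series enter the PDOLS regression in logarithms, the reported coefficient means

$$\frac{\partial \ln(\mathrm{FREI}_{it})}{\partial \ln(\mathrm{UNC}_{it})} = \hat{\beta} = 5.731,$$

so a 1% rise in the uncertainty index is associated with a 5.731% rise in foreign direct real estate investment, holding the remaining terms of the specification fixed.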
5-Conclusions
Uncertainties in economic policies are an essential source of instability for the macroeconomy in general. They particularly affect investments in the real estate markets of underdeveloped and developing countries.
This situation can be explained as follows: high economic uncertainty in developing countries causes the exchange rate to rise. The increase in the exchange rate makes real estate prices in developing countries relatively low in foreign currency terms, making these countries attractive for foreign real estate investors. Relatively low real estate prices thus increase foreigners' direct real estate investments in developing countries.
Foreign direct real estate investments were examined for European countries assumed to face economic uncertainty, namely Poland, Greece, the Slovak Republic, Hungary, Turkey, and Slovenia. There is a long-run relationship between the uncertainty index and foreign direct real estate investments for the panel. According to the PDOLS results, a 1% increase in the uncertainty index in the analyzed countries increases foreign direct real estate investments by 5.731%. The empirical results thus show that uncertainty increases foreigners' direct real estate investments, contrary to the expectation that foreigners would reduce them.
Uncertainty in the Greek economy increased after the 2008 global crisis because of the debt crisis, and policymakers thereafter aimed to increase foreign direct investment to overcome it; foreigners are assumed to have invested especially in the Greek islands. In Turkey, real estate sales to foreigners were liberalized after the 2008 global crisis, and the rise in the exchange rate after 2014 led to an increase in foreigners' real estate investments. Since the other developing countries examined have similar economic features, foreigners' real estate investments in those countries also increased.
However, to increase foreigners' direct real estate investments in developing countries in a sustainable way, economic and political stability should be prioritized, because when economic and political stability is achieved, macroeconomic risks and the risks that may arise from the real estate market are limited. Simplifying bureaucratic processes, providing tax reductions, matching the real estate supply to demand, following appropriate pricing policies, and introducing suitable environmental regulations will also increase foreigners' direct real estate investments.
6-1-Data Availability Statement
The data presented in this study are available on request from the corresponding author.
6-2-Funding
The author received no financial support for the research, authorship, and/or publication of this article.
6-3-Ethical Approval
The participant gave his written consent to use his anonymous data for statistical purposes. The participant was over 18 years old and voluntarily collaborated without receiving any financial compensation.
6-4-Conflicts of Interest
The author declares that there is no conflict of interest regarding the publication of this manuscript. In addition, the ethical issues, including plagiarism, informed consent, misconduct, data fabrication and/or falsification, double publication and/or submission, and redundancies have been completely observed by the author.
Environmental adversity is associated with lower investment in collective actions
Environmental adversity is associated with a wide range of biological outcomes and behaviors that seem to fulfill a need to favor immediate over long-term benefits. Adversity is also associated with decreased investment in cooperation, which is defined as a long-term strategy. Beyond establishing the correlation between adversity and cooperation, the channel through which this relationship arises remains unclear. We propose that this relationship is mediated by a present bias at the psychological level, which is embodied in the reproduction-maintenance trade-off at the biological level. We report two pre-registered studies applying structural equation models to test this relationship on large-scale datasets (the European Values Study and the World Values Survey). The present study replicates existing research linking adverse environments (both in childhood and in adulthood) with decreased investment in adult cooperation and finds that this association is indeed mediated by variations in individuals’ reproduction-maintenance trade-off.
Introduction
Cooperation offers benefits through the sharing of physical power, resources, skills, knowledge, problem-solving experience, social support and social influence. But cooperation is a fragile condition, vulnerable to exploitation from those who take without contributing. Many disciplines have important contributions to make to its study (Lazarus 2003) and here I apply an evolutionary approach (and for humans a gene-culture coevolutionary approach; see Section 3) to understand what happens to cooperation, throughout the organic world but focusing on our own species, in conditions of adversity. Does it flourish or wither, and why?
Cooperation, as understood in biology and economics, is defined in terms of its consequences for those involved in a social interaction and this is the approach I take here. To be precise an act is cooperative if it results in a benefit to both the actor and the recipient(s) of the act. My metric for evaluating benefit in non-human species is, in principle, Darwinian fitness, although empirically we often have to make do with a proxy for fitness. By Darwinian fitness I mean lifetime reproductive success, the total number of offspring produced. In the human case, where a gene-culture coevolutionary approach is more appropriate, we need to be more cautious in assuming that choices are always adaptive in a Darwinian sense, and I explain this more fully in Section 2.
Defining adversity
An environment is defined as adverse if it has some negative impact on the species concerned. For an evolutionary analysis, and for non-human species, the metric for this impact is fitness. For our own species it is also often the case that fitness is lower in what we deem to be a more adverse environment. However, as I have already said, we cannot assume that all human behaviour is adaptive and it is more parsimonious to claim simply that what I will call 'well-being' is reduced in a more adverse environment, my term being simply a more concise equivalent of 'subjective well-being' which combines measures of cognition (satisfaction) and positive affect (Cummins 2000). I discuss the relationship between well-being and fitness further in Section 2.
Broadly, adversities have abiotic or biotic origins. In the first category are variables such as temperature, aridity and altitude for which a species will have some optimal value at which its fitness (or well-being) is greatest, larger departures from this value bringing greater adversity. Biotic adversity arises from predators, parasites, disease and competitors. Uniquely human adversities, both abiotic and biotic, include pollution, housing, employment, health services, education, poverty, lack of opportunity and the social environment: other people who individually, collectively or institutionally may harm another physically, emotionally, economically or in any other way, either actively or by withholding some good. Adversity may also be felt in terms of relative deprivation (Davis 1959; Wilkinson and Pickett 2010).
Adversity (or harshness as it is also termed) clearly does impact on human well-being and consequently has profound and widespread influences on cognition, behaviour and development beyond its impact on cooperation (Low 1990; Ellis et al. 2009; Frankenhuis, Panchanathan, and Nettle 2016). These influences arise from sources that range from the abiotic environment through the personal to the societal. For example, lower socioeconomic status is associated with a 'behavioural constellation of deprivation' that leads to a focus on present-oriented behaviours (Pepper and Nettle 2017). Further, some aspects of adversity, such as extrinsic mortality risks (Nettle 2010; Frankenhuis, Panchanathan, and Nettle 2016; Pepper and Nettle 2017) and economic and political factors (Standing 2011), are outside the individual's control.
Outline
Following a preamble on evolutionary explanation (Section 2) I describe current thinking on the evolutionary origin of cooperation in the human species (Section 3). Then, following an account of how cooperation is influenced by adversity (Section 4) in organisms generally, but focusing on the human species, I offer a contribution to the explanation of these relationships (Section 5). Finally, I extend the discussion briefly to other forms of prosociality (Section 6) and draw conclusions (Section 7).
A preamble on the evolutionary analysis of behaviour
Given their very different intellectual histories, social and evolutionary scientists have largely worked independently to understand human behaviour and social structures, and when they have interacted it has been more often in conflict than in productive dialogue. Matters have improved as evolutionary biologists have come to appreciate that any evolved behavioural predisposition must emerge as action through the processes of individual development occurring within a particular cultural environment. And having learned these general lessons from psychology, anthropology and sociology they have gone on to formalize the ways in which organic and cultural evolution interact, as discussed below. But since the rapprochement is not yet complete, it will be useful, I think, to outline the nature of the questions the evolutionary scientist asks about behaviour and the conceptual approach that is brought to bear in searching for answers, before proposing some ideas for understanding cooperation under adversity that have an evolutionary-cultural foundation.
The social scientist seeking a causal understanding of human action is concerned with what the biologist calls proximate causation. That is, finding the influences within the environment (including the social and cultural environment), and within the individual, that account for the behaviour of interest, influences that the psychologist calls motivational. And, by extension, an understanding is sought for how behaviour varies between individuals and cultures. The evolutionary behavioural analyst asks, in the first place, a different question, one that the biologist calls ultimate causation: what were the evolutionary forces (generally forces of natural selection) that have resulted in behaviour appearing in the particular form that it does, in response to particular proximate influences? And the guiding principle in generating hypotheses about ultimate causation is that it is predicted, under the influence of natural selection, to produce adaptive behaviour: behaviour that efficiently solves a problem in the organism's life.
But there is a second stage of evolutionary analysis; hypotheses of ultimate causation lead naturally to complementary hypotheses about the environmental cues, the proximate causes, predicted to influence the emergence of behaviour patterns given their proposed adaptive function (Barkow, Cosmides, and Tooby 1992). This is clearly a different source for ideas about proximate causation to those from the social sciences, but one that has a strong foundation in the theory of natural selection.
The generation of hypotheses of proximate causation from natural selection theory has been a particular feature of evolutionary psychology (Tooby and Cosmides 1990) and the evolutionary logic underlying this approach is important for the ideas I present in Section 5 concerning the influence of adversity on cooperation. Using this relationship as an exemplar the argument from evolutionary psychology is that natural selection over our evolutionary history has been responsible for the learned psychological predispositions that we bring to cooperative decision-making in the contemporary world, as well as the proximate causes that turn these predispositions into actions. I would argue that this is a particularly strong premise for our present case since cooperative decisions, environmental adversity and the interaction between the two must have had a great impact on fitness throughout human history. These predispositions govern contemporary behaviour to the extent that 'present conditions resemble past conditions in specific ways made developmentally and functionally important by the design of those adaptations' (Tooby and Cosmides 1990, 375). The approach is perfectly compatible with the evidence that cooperative tendencies vary with the economic and societal structures of different cultures, variation which can sometimes also be understood in adaptive terms (Henrich et al. 2004) as a result of further learned predispositions. Whether any of these predispositions are optimal in terms of fitness enhancement in the changed environments of the modern world remains an open question, is not assumed by the evolutionary psychology approach and is not assumed here. And since the consequences of cooperative decisions for people are my primary focus I will, parsimoniously, refer to 'well-being' (rather than fitness) to describe the relative positive outcomes of cooperating or not cooperating in environments of differing qualities. In the non-human examples the outcome measures are either fitness or, more frequently, proxies for fitness.
An example of how the evolutionary psychology approach seeks to understand evolved proximate causes will be useful here. Lieberman, Tooby, and Cosmides (2003) tested Westermarck's (1921) theory for the proximate causation of incest avoidance, an adaptive phenomenon in that it reduces the damaging effects of inbreeding depression. Westermarck proposed that incest avoidance, and the moral objection to incest, were achieved by the co-residence of siblings from an early age resulting in 'sexual negative imprinting'. Lieberman and coworkers found support for their hypothesis, derived directly from Westermarck's proposal, that duration of co-residence with an opposite-sex sibling would correlate positively with the strength of the moral opposition to sibling incest. The association was independent of degree of relatedness (adopted, step-, half- or full-sib) while relatedness itself, the functionally important factor, did not influence the moral attitude to incest. This is understandable in evolutionary terms since a child cannot reliably know its kin relationship to another child it grows up with. Duration of co-residence, however, is a reliable cue since experienced directly, and crucially it correlated significantly with relatedness, the functionally relevant variable. These results illustrate the point that evolved proximate causal factors need not themselves represent the adaptive variable but must map reliably onto it.
Evolutionary biologists and social scientists continue to generate their hypotheses concerning proximate causes from different principles. This is a difficult division to bridge, but on another area of dispute further mutual understanding should be possible. This is the role of genetics in the causation of behaviour and, in particular, the worry by some social scientists that an evolutionary analysis of behaviour assumes genetic determinism, the notion that a particular genetic make-up fully determines a behaviour; given gene X behaviour Y will be shown and will be shown whatever the environment throws at it. While some biologists may have held this view in the past, it is now a straw man. The notion of innateness is discredited; rather behaviour is understood to unfold during life as a continuing interaction between the individual, its genotype and the environment (Mameli and Bateson 2006; Bateson and Mameli 2007), including the cultural environment (Nettle 2009).
It is the complex interplay between the individual, its genotype and the environment just described that is subject to natural selection, behaviour responding flexibly and often adaptively to the environment through processes involving direct experience, social influence and the internalization of norms. Although this means that behaviour is subject to various biases and that we are not blank slates (Pinker 2002), it does not necessitate genetic determinism, just as an enlightened view of the power of the environment to influence behaviour does not merit it with an analogous determinism.
Further, there is no longer a fundamental conflict between the study of culture and of biological evolution as forces for change. In the discipline known as 'gene-culture coevolution' the two are now integrated (Boyd and Richerson 1985; Richerson and Boyd 2005). The logic of the approach is that cultural practices modify the human environment and consequently influence the selection pressures acting on the human genome and directly on the cultural practices themselves, producing feedback loops, both positive and negative. Cultural transmission may be horizontal, vertical or oblique, and natural selection is assumed to act on both genetic and cultural variation in behaviour and cognition, although cultural success may look very different to genetic success. Culture evolves and the methods of evolutionary biology can be used to study its evolution. And gene-culture coevolution theory now has a new importance following recent findings that cultural practices modifying the environment have resulted in changes in gene frequencies (Laland, Odling-Smee, and Myles 2010; Richerson, Boyd, and Henrich 2010). It is therefore no longer possible to dismiss the idea that cultural forces might have changed the human genome by claiming that there has not been sufficient time for natural selection to act. And indeed natural selection continues to act on the human genome (e.g. Byars et al. 2010).
How did cooperation evolve in our species?
Understanding the evolutionary origin of cooperation has been a challenge since helping others would not at first sight appear to be favoured by natural selection. The method required to analyse this problem is game theory, developed by mathematicians and economists to predict rational choices when two or more individuals interact and the choices they make influence the payoff for other players. Economic rationality is classically deemed to be self-regarding in that it maximizes some kind of personal payoff (Gintis 2003). However, achieving maximum payoff may not be possible when the consequence of one's choices is under the influence of the choices made by others, as in a social interaction. Instead of reaching the 'best' choice, defined by maximum payoff, therefore, rational players in a game come to settle on the set of choices which means that no player can do better by choosing to play differently, such a set of plays being termed a Nash equilibrium (Binmore 2007a, 2007b; Colman 1999).
Evolutionary biologists took this economic equilibrium concept in games and applied it to a similar problem, in which natural selection determines the decisions made by the rational player. If at the Nash equilibrium no player can make a more profitable move this is just the outcome to be expected when individual decisions are evolving under the force of natural selection, with fitness as the payoff metric. And, following this logic, the evolutionary equivalent of a Nash equilibrium is termed an evolutionarily stable strategy, or ESS (Maynard Smith 1982). Both the Nash equilibrium and the ESS are stable equilibrium states and thus, by definition, are what we expect to see in nature. An important difference between the two concepts, however, is that while the Nash equilibrium must take rational play as an assumption, a state of affairs on which players cannot improve is built into the theory of natural selection and therefore also the ESS concept.
The game theory approach to understanding the conditions for the existence of cooperation can be exemplified by the well-known economic game, the prisoners' dilemma (e.g. Colman 1999). In this game two individuals interact in a scenario in which each actor has a choice of two plays or strategies which, in general terms, can be thought of as cooperating with (C) or defecting on (D) the other player. The original prisoner scenario is unnecessarily complicated and I will illustrate the dilemma with a simpler scenario described by Colman (1999). A Buyer has decided to purchase a diamond from a Seller and a price has been agreed. For some reason the exchange must be made in secret and so the two agree each to leave a bag, the Buyer's containing the agreed price and the Seller's the diamond, at a different place in a wood, after which each will retrieve the other's bag. The problem is, of course, that either party might be tempted to leave an empty bag, thus defecting (D) on the other, rather than cooperating (C) with a full bag. Figure 1 shows the relative payoffs to Buyer and Seller of the four possible outcomes of the exchange. The absolute value of the numbers in the figure is arbitrary; all that matters, and what defines the dilemma, is their ranking, a higher number indicating a more preferred outcome.
If both parties cooperate and fulfil their agreement (a CC outcome), they gain three points, but if both come with an empty bag (DD), they gain only two since they have both failed to close a deal they desired. If one party leaves an empty bag (D) and the other a full one (C), then the defector goes home with both the cash and the diamond (five points), while the other party, the sucker, has neither (0 points).
What is the equilibrium outcome to this game, represented by the best response of each player to the play of the other? The answer, which can be seen in Figure 1, is for both players to defect (DD) since whatever the other player does it is always more profitable to defect than to cooperate. The dilemma demonstrated by the game is that this outcome, though a result of rational play, is not the best outcome that can be achieved. Both parties would clearly prefer a CC outcome to a DD outcome, but even if there was some way for them to agree on such an outcome, it would still pay to renege on the agreement. This game captures the essence of the problem of how to maintain cooperation and the logic can be simply extended to interactions between more than two players. Why should a hunter exert himself fully in the hunt, and why should he share his catch with others in his hunter-gatherer group? Why pull your weight in a team effort or, as a nation, fulfil promises on reducing greenhouse gas emissions? The temptation to defect is often rational in economic (selfish) terms. It would therefore seem that an ultimate explanation of the fact that cooperation is a feature of human social life cannot rest on the logic of the prisoners' dilemma as I have described it, in which the two parties meet only once. In the real world we often enter into relationships in which we interact repeatedly over days, months or years, and an ultimate explanation must take into account the fact that for most of our evolutionary history we lived in small groups in which all identities were mutually known and people interacted repeatedly throughout their lives (Kelly 2013). In the language of game theory we played iterated (i.e. repeated), not one-shot, games with each other and thus had the opportunity to reward past support, punish past defections or break off a relationship altogether. This complicates enormously the strategies that can be played, compared to the one-shot game I have described, strategies that take into account the history of the relationship. In particular, the fear of retaliation in later encounters encourages cooperation and can be the basis for a cooperative ESS in the repeated prisoners' dilemma, such as the Tit-for-Tat strategy: start by cooperating, then copy partner's last play (Axelrod and Hamilton 1981; see also Nowak and Sigmund 1993 for another cooperative ESS). And, pre-empting these findings, the 'folk theorem', as game theorists call it, concluded that for indefinitely repeated games with little discounting of future payoffs cooperative equilibrium strategies will always exist (Binmore 2005). So cooperation can result from self-regarding, broadly reciprocal, interactions between pairs of individuals (so-called direct reciprocity).
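Both the one-shot logic and its rescue by repetition can be made concrete with a small sketch in Python; the payoff encoding, function names and round count are illustrative assumptions, with only the payoff ranking taken from the Figure 1 scenario described above.

```python
# Payoffs as (row player, column player); only the ranking matters:
# temptation 5 > mutual cooperation 3 > mutual defection 2 > sucker 0.
payoff = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (2, 2)}

def best_reply(opponent_move):
    """Row player's payoff-maximising reply to a fixed opponent move."""
    return max(['C', 'D'], key=lambda m: payoff[(m, opponent_move)][0])

# One-shot game: defection dominates, so (D, D) is the equilibrium
# even though both players would prefer (C, C).
assert best_reply('C') == 'D' and best_reply('D') == 'D'

def play(strategy_a, strategy_b, rounds=100):
    """Repeated game: each strategy sees the other's past moves."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = payoff[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

tit_for_tat = lambda opp: 'C' if not opp else opp[-1]   # cooperate first, then copy
always_defect = lambda opp: 'D'

print(play(tit_for_tat, tit_for_tat))      # (300, 300): cooperation is sustained
print(play(always_defect, always_defect))  # (200, 200): locked into mutual defection
print(play(tit_for_tat, always_defect))    # (198, 203): exploited once, then retaliates
```

Over n rounds a defector meeting Tit-for-Tat collects at most 5 + 2(n - 1) points, whereas two Tit-for-Tat players collect 3n each, so with enough repetition cooperation outperforms exploitation; this is the sense in which repeated play rescues the cooperative outcome.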
But understanding how relationships develop in small communities is not just a matter of summing all the dyadic relationships within it. Individuals learn about the cooperativeness of others by interacting with them, observing them directly and talking to third parties (Dunbar 2004). In this way they build up reputational knowledge, invaluable when responding to offers of interaction with the potential for mutual benefit, or when selecting partners for such interactions themselves (Roberts 1998; Gurven 2004; Craik 2009). Alexander (1987) was the first evolutionary biologist to emphasize the importance of this process for the evolution of cooperation. He coined the term indirect reciprocity to describe the biasing of cooperative responses to those known to have been cooperative to others in the past, and there is now experimental evidence that players are more likely to cooperate with other cooperators than with free-riders in small group interactions (e.g. Milinski et al. 2001; Barclay 2004).
The analysis so far shows that self-regarding rationality is compatible with cooperation when individuals interact repeatedly with known partners. However, laboratory and real-world experiments show that people are not fully selfishly rational since participants also cooperate in one-shot prisoners' dilemma games about half the time and contribute in one-shot public goods games, the multi-person equivalent of the prisoners' dilemma in which the rational decision is to give nothing (Camerer 2003). Behaviour in another and much simpler game, the dictator game, is particularly instructive. In this game a dictator simply decides how to split an amount of money between themselves and another player. The rational self-regarding choice is clearly to give nothing away, but giving 10-20% of the fund is common (Camerer 2003). The results from this and other games played in many different societies, while showing much variation (understandable in terms of the economic and societal structure of the culture), demonstrate that pure selfishness is rare and suggest the existence of what may be a universal sense of fairness (Henrich et al. 2004).
Such a predisposition for fairness may have been favoured by selection, acting both genetically and culturally (Chudek and Henrich 2011), since it undermines self-regarding rationality and eases the path to the more rewarding cooperative CC (rather than DD) outcome in prisoners' dilemma-type repeated encounters and makes it possible in one-shot encounters too. In a manner that is similar to Roberts's (2005) notion of interdependence, one way of representing the fairness motive is by adding some proportion of the other player's payoff to one's own. If this proportion is great enough (e.g. >0.67 for the payoffs in Figure 1) the Nash equilibrium and ESS become CC even in the one-shot game.
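As a check on the threshold just quoted, suppose each player maximises the transformed payoff $u_i' = u_i + \alpha u_j$, with the payoffs of Figure 1 as described earlier. Against a cooperating partner, cooperating is the better reply whenever

$$3 + 3\alpha \;\ge\; 5 + 0 \cdot \alpha \quad\Longleftrightarrow\quad \alpha \;\ge\; \tfrac{2}{3} \approx 0.67,$$

and the same inequality makes cooperation the better reply to a defecting partner as well ($0 + 5\alpha \ge 2 + 2\alpha$ also reduces to $\alpha \ge 2/3$), so beyond this threshold CC is the equilibrium of the one-shot game, as stated.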
Binmore (2005, Chapter 9) argues that our sense of fairness evolved as a stable social mechanism for sharing resources in the non-hierarchical societies of the earliest hunter-gatherer humans and that we carry this same sense today. If we extrapolate from our knowledge of present-day egalitarian hunter-gatherers, our earliest human ancestors benefited from sharing with their neighbours because of their interdependence (Roberts 2005), particularly in cooperative hunting and gathering and food sharing, and consequently had a stake in each other's well-being. In particular, such cooperative practices reduce the risk of periods without food, and free riding on this system is not tolerated (Winterhalder 1986; Kaplan, Hill, and Hurtado 1990; Gurven 2004; Kelly 2013, Chapters 6, 7). (It is probably not a coincidence that the cooperative non-hierarchical structure, collective decision-making, monitoring of resource acquisition by others, graded sanctions for defectors and conflict resolution mechanisms of many hunter-gatherer societies (Kelly 2013) are all features shared with successful common pool resource groups such as coastal fisheries and forestry systems (Ostrom 1990).) An early human social contract characterized by sharing, as a form of enlightened self-interest, may be the origin of the Golden Rule, variations on 'Do as you would be done by', probably the most universal ethical imperative that we have (Binmore 2005, Chapter 9).
While an early evolutionary origin for our widespread sense of fairness seems likely, scholars differ on how best to explain the fact that we often act fairly in one-shot interactions in the real contemporary world (e.g. queuing or returning a lost wallet) and in the lab, when reciprocity is not expected. One explanation is that people treat such situations, consciously or not, as if they were repeated non-anonymous games, since the predisposition we bring to such encounters is one that evolved in the small groups of our early evolutionary history described above (e.g. Haselton and Nettle 2006). Gintis et al. (2003) disagree, arguing that early humans would also have engaged in encounters with a low probability of continuing, in which defection would have been the more profitable strategy. However, this doesn't explain the cooperative responses that are regularly seen in one-off encounters in both the real world and the laboratory. A further point is that experimental one-shot interactions may be played like repeated games since they inadvertently share cues associated with the reputational indirect reciprocity consequences of being observed by others (Kurzban 2001; Haley and Fessler 2005; Bateson, Nettle, and Roberts 2006).
The behavioural expression of cooperation, or its absence, is inevitably accompanied by emotions including a feeling for the welfare of others, guilt, shame, a personal concern for reputation and a fear of punishment (Milinski et al. 2001; Fehr and Gachter 2002; Barclay 2004; Bowles and Gintis 2011), and develops in the individual under the influence of social norms (e.g. Krupka and Weber 2013; Hugh-Jones and Ooi 2017). It is possible to incorporate these processes into models that combine biological and cultural evolution, as discussed in the previous section, and such a model, simulating internalization of norms transmitted vertically between generations and obliquely by socialization institutions, concludes that cooperation can be maintained when a minority of the population exhibit strong reciprocity, cooperating unconditionally and punishing defectors at a personal cost (Gintis 2003). Although the importance of strong reciprocity for the evolution of cooperation is currently a matter of controversy (Guala 2012), the model exemplifies an approach to the problem of understanding how individuals and communities reach equilibrium prosocial states. This must occur by some combination of genetic evolution, cultural influence and direct experience, but we are some way from a full understanding of the processes involved and their interaction.
Some discussion of altruism is also necessary here since, although formally distinguished from cooperation, the two prosocial acts share the consequence of benefitting others, differing in that the actor also benefits from a cooperative act but suffers a cost from an altruistic act. It is important to note that focusing narrowly on a single act defined, as above, as altruistic may miss a bigger picture. The single act may be just one of a series of reciprocal altruistic exchanges, so that considered over a longer time period the relationship is seen to be one of cooperation, as in reciprocal food sharing, since both parties benefit. Here the success of cooperation relies on trusting that the other party will reciprocate. In the dictator game described above the single decision involved is 'purely' altruistic if anything is given to the other party, as it commonly is. Acts of cooperation and altruism, in addition to their direct consequences, may accrue delayed benefits due to direct and indirect reciprocity and may be selected for as honest signals of the ability to act in this way in the future (Roberts 1998).
However, a difference between cooperation and altruism important for us here is that acts of pure altruism favoured by selection in the small societies of our early history, due to delayed benefits, may not be personally beneficial in the large anonymous societies of the contemporary world, even though they continue to be selected by gene-culture coevolution due to internalization of norms built on evolved predispositions. The upshot of this, as I will go on to explain, is that I deal here with the influence of adversity on cooperation only and not on pure altruism as just described. This is partly because I am concerned with the more straightforward case where costs and benefits are borne in a given individual (the cooperative actor) in a given environment in the here and now. In pure altruism immediate benefit is borne only by another party who may inhabit a different (social) environment from the altruistic actor, such as the case of Christians who rescued Jews in Nazi-occupied Poland (Tec 1986) and those who rescued persecuted family members, friends and strangers in Argentina during the military rule of 1976-1983 (Casiro 2006). Any delayed benefits to the self of altruism through direct or indirect reciprocity, if there are any at all, take place in a future environment of an uncertain nature. In addition, even if one wanted to assume that our altruistic tendencies are built on the evolved predispositions mentioned above, it would be very difficult to quantify the contribution of supposed delayed benefits to our altruistic decision-making. For all these reasons altruism is not amenable to the analysis I describe in Section 5. My analysis is suitable for examining cooperation, however, for which I need to consider only direct costs and benefits to the self and not to others (although even here the analysis is not perfect since cooperation may also have delayed benefits for the self).
This account of the origins of human cooperation just scratches the surface of half a century of research on the topic (Ridley 1997; Hammerstein 2003; Gintis et al. 2005; Tomasello 2009; Bowles and Gintis 2011). My aim here has been to outline the kinds of thinking required for an understanding of the origin and maintenance of human cooperation as a prelude to the following discussion of the influence of adversity on cooperative behaviour.
Finally, we should not exaggerate the human tendency to cooperate; like all human traits it is variable and some individuals, in some situations, prefer to free-ride on the generosity of others.
Adversity and cooperation: data
What follows is by no means an exhaustive or systematic review but I have not been selective. I report all my findings from the literature on the influence of adversity on cooperation.
Non-human cases
There is a widespread tendency in the natural world for organisms to be more cooperative in conditions of adversity and I have not located any evidence to the contrary. The phenomenon has been reviewed by Andras and Lazarus (2005) and many of the following examples are taken from that account:
• In response to environmental stressors, individual bacteria become social and form multi-cellular structures such as biofilms and mushroom bodies (Greenberg 2003) which enhance their resistance to the stressor, such as an antibiotic (Drenkard and Ausubel 2002).
• Social, in contrast to solitary, feeding in the nematode Caenorhabditis elegans is triggered by environmental stressors (De Bono et al. 2002).
• Fish school and primate group sizes are larger where predation risk is greater (Seghers 1974; Farr 1975; Hill and Lee 1998). Gregariousness reduces predation risk in various ways (Krause and Ruxton 2002).
• Colonies of the common mole-rat (Cryptomys hottentotus hottentotus) are larger in arid areas, which present greater foraging adversity, than in mesic (moderately moist) areas. Movement between colonies is also less frequent in arid areas. Larger and more stable colonies favour resource sharing and the development of cooperative relationships with known individuals (Spinks, Jarvis, and Bennett 2000).
• The phenomenon is also found in plant communities, in which an individual plant, acting respectively competitively or cooperatively, can inhibit or promote the biomass, growth and reproduction of its neighbours. In 11 mountain habitats around the world relationships between neighbours in subalpine plant communities are more competitive, whereas in the corresponding alpine communities, where abiotic stress is higher, cooperative interactions predominate (Callaway et al. 2002).
The human case
For the human case I have sought data from a range of methodologies: real-life case studies and experiments; anthropological work; within- and between-society comparisons; self-report measures; and lab experiments. For reasons that will become clear in Section 5 I divide the data into those in which, like the non-human data, cooperation increases with adversity, followed by those in which the opposite is the case.
Cooperation increases with adversity
While there is a long-standing view in the social sciences that external threats increase group cohesion (Stein 1976), this is not always measured in behavioural terms. However, it seems to be commonplace that people caught up in a natural disaster cooperate in ways they would not consider under more normal circumstances. Members of the Committee on Disaster Studies of the National Academy of Sciences, USA, write that following a natural disaster: The net result . . . is a dramatic increase in social solidarity among the affected populace . . . The sharing of a common threat to survival and the common suffering produced by the disaster tend to produce a breakdown of pre-existing social distinctions and a great outpouring of love, generosity, and altruism . . . persons tend to act toward one another spontaneously, sympathetically, and sentimentally, on the basis of common human needs rather than in terms of predisaster differences in social and economic status. (Fritz and Williams 1957, 48, emphasis added) This account provides something of a control condition in making a comparison with pre-disaster behaviour.
In the trench warfare of World War I cooperation between British and German infantry was commonplace. Between battles a 'live and let live' reciprocal arrangement developed whereby both sides refrained from firing on the enemy, in spite of the wishes of their commanders. This peaceful arrangement could be signalled, for example, by repeated daily firing at precisely the same position at precisely the same time. Axelrod (1984; drawing on Ashworth 1980) analyses this as an iterated prisoners' dilemma in which, for the infantrymen at least, mutual cooperation was the best outcome.
These real-world case studies show that some people behave cooperatively under severe adversity, sometimes at great personal cost, but they do not allow a quantitative comparison with the frequency or degree of cooperative behaviour in less adverse conditions. To provide the kind of data we are seeking here systematic studies are required.
The set of studies probably closest to these real-world cases of external threat are the experiments, carried out mostly in the lab (e.g. Puurtinen and Mappes 2009; Puurtinen, Heap, and Mappes 2015), but also in the real world (Erev, Bornstein, and Galili 1993), in which participants in small groups are found to cooperate more (generally through donations in a public goods game) when competing with other groups. The adversity in these studies is inferred to arise from the competition; it is intrinsic to the task and not a pre-existing condition that participants bring to the experiment, as in the studies that follow. While this is an important distinction, for present purposes it needs to be examined in relation to my analysis in the following section, which is framed in terms of the individual's perception of their adversity. A temporary adverse stimulus that arises immediately before a decision has to be made may have different consequences from that of a long-term adverse condition that one brings to an experiment, such as might arise from low social status or hunger, say. However, there are many psychological studies in which brief priming stimuli are remarkably effective in imitating the influence of long-term conditions, including those of prosociality (Piff et al. 2010; Piff 2014; Nettle et al. 2014). Since brief exposure to competition in an experiment may also have such a priming effect, it seems worthwhile to consider competition experiments as potentially suitable for our analysis here.
Moving on to an anthropological study, in examining the influence of adversity using a random half of the societies in the standard cross-cultural sample, Low (1990) found a significant positive association between extreme cold and the hunting of large game, which is a cooperative enterprise.
Finally, in a US study of social class effects, with class measured by educational attainment and income, lower-class participants offered more points than upper-class participants in a trust game, a cooperative game involving trust that the partner will reciprocate (Piff et al. 2010).
Cooperation decreases with adversity
The results of the following two studies conflict with those of Piff et al. (2010) just described and I will return in the following section to the issue of how these results might be resolved.
In his study of Tyneside neighbourhoods Nettle (Nettle 2015; Nettle, Colléony, and Cockerill 2011; Schroeder, Pepper, and Nettle 2014) has compared 'the informal social relationships and interactions that make up so much of everyday life' (Nettle 2015, 12) of two neighbourhoods in the city of Newcastle upon Tyne, UK. These neighbourhoods, with populations of about 3000 each, are markedly different in adversity as measured by the Index of Multiple Deprivation. The more deprived neighbourhood is at the first percentile of deprivation in England, while the less deprived is at the 79th percentile, and the study was carried out at 'a moment when people in [the former] neighbourhood had endured many years of uncertainty about the future of the whole area' (12). A number of behavioural and self-report measures were made, but the only ones relevant here were two self-report measures: adult respondents in the less-deprived neighbourhood trusted each other more (in two studies) and felt more strongly that people in their neighbourhood looked out for one another. The finding on trust was replicated for children between 9 and 15 years of age across a range of neighbourhoods differing in the level of deprivation. (Trust is not the same as cooperation but it is required if cooperation through repeated altruistic exchanges is to be successful.) In support of these findings Haushofer (2013) found, in large data sets from the World Values Survey, that trust increased significantly with income within countries and with per capita GDP (at purchasing power parity) across countries.
Finally, I come to the tail of the distribution of environmental harshness, where case studies are naturally rare and where experiments are not possible. I describe three cases of societies living in conditions of extreme adversity; societies in extremis. Burch (2006, 272), as described by Kelly (2013, 288n6), found that in Alaskan Iñupiaq Eskimos 'in periods of widespread famine and hunger, the [cooperative] distribution system broke down, families hoarded food, and some tried to steal the stores of others or even to kill the owners'. In 1969 and 1970 Laughlin (1974, 1978) studied the So, a small society in Northern Uganda: 'The total So ecological/economic picture is grim. It is one of progressive deterioration of alternative resource bases to the point where a period of drought brings extreme and widespread hardship and starvation to the people of So' (Laughlin 1974, 380).
Laughlin quantified 'generalized' reciprocal exchange in the So, 'where the emphasis is upon the act of exchange and not upon immediate return or making a profit' (Laughlin 1974, 381) and compared its expression in two study periods, 'one of extreme hardship [and a second] in relationship to the first a time of plenty' (385). He found more generalized reciprocity in the period of plenty and, although the data are not analysed statistically, the increases compared to the period of extreme hardship are quite large: 48% for feeding guests ('the major medium of generalized reciprocity' [386]), 124% for food transfers and 38% for total (food + non-food) transfers. Generalized reciprocity contrasts with 'negative reciprocity' (i.e. exchange for profit) as a means of resource acquisition and when one type of reciprocity declines it is generally compensated by an increase in the other. This reminds us that in the real world, in contrast to the lab, cooperation is just one means of getting on in the world and is therefore subject, indirectly, to a range of influences, some of which may also be responsive to adversity. Thus in the So, to compensate for the reduction in generalized reciprocity during hardship, the cash value of negative reciprocity increased in that period by 454%.
The So are related to the Ik, who live further north in the mountains of northern Uganda. When studied by Turnbull (1966, 119-136; 1967; [1972] 1974; 1978) from 1964 to 1966, it would seem that the Ik lived in even greater hardship than the So, and their society certainly suffered from a greater absence of cooperativeness. The Ik's former nomadic hunting cycle had been curtailed by the government and this, together with the frequent droughts, led to intermittent famine and eventually the 'disintegration of Ik society' (Turnbull 1978, 53), Turnbull's period of fieldwork coming at 'a critical moment in the process of social change' (Turnbull 1978, 53), though 5 years after Turnbull's study, Joseph Towles, Turnbull's collaborator, found the social system much the same (Turnbull 1975, 355). In Turnbull's account cooperation was limited to house building, which required more than a single builder, and individuals passed their lives in relative isolation. Family and community bonds of care broke down completely, children were weaned at the age of three and then left to forage for themselves, and young and old were left to die if they could not find sufficient food. Although children foraged in gangs, these gangs served only as protection against predators (including adult Ik) and food was not shared.
[A child] learned that cooperation was rarely beneficial, a temporary expediency at best, and that the unpredictability of circumstances that could make it worthwhile meant that there was no value in establishing permanent bonds with others on grounds of age, sex, or kinship. He learned that systematic sociality itself had no value. (Turnbull 1978, 64)

Turnbull concluded that, given the 'extreme . . . circumstances' (Turnbull [1972] 1974), 'the sadly functional nature of the Ik non-social system . . . was the only way to survive' (Turnbull 1976, 6).
Turnbull's 1972 book on the Ik, The Mountain People, was critically received by some anthropologists, partly for its claimed lack of objectivity (Beidelman 1973; Barth 1974; Wilson et al. 1975; Knight 1976). Two of these eight commentators (Barth and Geddes in Wilson et al. 1975) pointed to inconsistencies in Turnbull's report but none doubted his findings as I describe them above. Heine (1985), who studied the Ik for 2 months in 1983, critiqued many aspects of Turnbull's ethnographic work but again did not dispute the above description of Ik society. The Ik continue to live in northern Uganda today.
Understanding the influence of adversity on cooperation
Game theory models have illustrated ways in which environmental adversity selects for cooperation (Andras, Lazarus, and Roberts 2007; Smaldino, Schank, and McElreath 2013). Here I ask more simply, and more generally, what might the shape of the relationship between adversity and the benefit of cooperation look like, and what does that shape allow us to infer about how cooperation will vary with adversity? These questions were addressed by Andras and Lazarus (2005), with additional mathematical formalities, and I develop the ideas further here. My aim is to see to what extent these proposed relationships might explain the findings described in Section 4. This seems to be a particularly worthwhile exercise given the broad range of contexts and taxonomic groups in which cooperation is enhanced by adversity, while the relationship is reversed in extremis. Although I will focus on the human case the conclusions apply to the non-human examples too.
The simple framework to be described is not intended to deny the complexity of environments and of social life but rather to provide an explanation for one feature of that complexity, environmental adversity, while acknowledging that adversity might work on cooperation in additional ways too. Adversity is a ubiquitous variable influencing a wide variety of cooperative interaction types, and it will be achieving its effects alongside a cluster of other influences. My approach does not deny that there are individual differences in what people see as a life of well-being and how that life may be enhanced, or sometimes diminished, by cooperation. As a natural scientist, however, I proceed in the belief that our understanding of how and why people interact with their environment as they do can be advanced by utilizing various methods of behavioural data collection, as well as statistical methods for taking individual differences into account. Finally, I am aware that I have offered only a small sample of human cases and that the pattern of results that emerges does not unequivocally support my analysis. However, studies of these relationships are still in their infancy and a comprehensive understanding is still some way off. In that context the fit between the data and the ideas I present here seems to merit further testing.
It might be thought immediately obvious why people seem to cooperate more in adversity, that is, to provide an ultimate, or functional, explanation. People in adversity have a greater 'need', it might be argued, and everything else being equal we would expect that need to be met. Or it might be stated or hypothesized, though without supporting theory or evidence, that cooperation will be greater in a poorer group (Da Costa, de Melo, and Lopes 2014, 455) or that this relationship has an inverted-U shape (Laughlin and Brady 1978). But as cost-benefit analyses, such explanations are incomplete, as I will show. In addition, they may not explain the effect of adversity in extremis reported above, which is given an explanation in the present analysis.
I start with two general assumptions. First, as already stated, that well-being (and, for non-humans, fitness) declines with adversity or, in other words, that it increases with the quality of the environment. Second, where cooperation is observed I assume that on average, and including any delayed benefits, it brings greater well-being than its noncooperation alternative (but not necessarily for all individuals on all occasions, since there will generally be a dynamic which includes some degree of freeriding). This assumption follows from the game theory prediction of how self-interested individuals will behave when the marginal net benefit of cooperation over non-cooperation (call it B) exceeds the threshold value at which cooperation becomes the equilibrium outcome (this kind of analysis was introduced in Section 3 in discussion of the prisoners' dilemma). It follows in turn that cooperation will be more likely to occur where B is greater since a greater value of B is more likely to exceed the threshold relevant to any particular case. And if B increases further, above the threshold value, cooperation is predicted to increase in order to take up this additional benefit, following the same self-interested logic that applied to the switch from non-cooperation to cooperation at the threshold. Such an increase in cooperation can occur by repetition of a cooperative act or by replacing it with a more beneficial alternative, the increase in cooperation being subject to the usual limitations as it comes into conflict with other demands on the individual's time and effort. I consider below the case of how, in fact, B is predicted to change in magnitude with environmental quality.
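To make the threshold logic concrete, here is a minimal sketch of the standard repeated prisoners' dilemma condition (my own illustration, not taken from the sources cited above): with payoffs T > R > P > S and continuation probability w, reciprocated cooperation resists defection when the stream of mutual-cooperation payoffs beats a one-shot temptation followed by mutual defection. Rearranged, this is a threshold on B, if one reads B as R - P, the per-interaction gain of mutual cooperation over mutual defection.

```python
def cooperation_is_equilibrium(T, R, P, w):
    """Tit-for-tat vs always-defect in a repeated prisoners' dilemma.

    Cooperation is an equilibrium when the expected payoff stream of mutual
    cooperation, R/(1-w), is at least that of defecting once and then facing
    retaliation, T + w*P/(1-w). Algebraically this is equivalent to the
    benefit B = R - P exceeding the threshold (1 - w)*(T - P).
    """
    v_coop = R / (1 - w)
    v_defect = T + w * P / (1 - w)
    B = R - P
    threshold = (1 - w) * (T - P)
    return v_coop >= v_defect, B, threshold

stable, B, thr = cooperation_is_equilibrium(T=5, R=3, P=1, w=0.9)
print(f"stable={stable}, B={B}, threshold={thr:.2f}")  # stable=True, B=2, threshold=0.40
```

Note that anything raising the threshold (for example a lower chance of repeated interaction, w) demands a larger B before cooperation pays, which is the sense in which a greater B makes cooperation "more likely to occur" across cases.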
Though it is important to think in terms of net marginal benefit in defining B, to take account of potential marginal costs involved in cooperation (i.e. costs involved in cooperating that are absent when not cooperating), in some cases there may be no marginal costs. For example: a hunter or gatherer working with others rather than alone, where the benefit is bringing back more resources per capita; a car-sharing scheme where the parties involved get a ride every day for a share of the overall cost; or a sharing of knowledge or expertise where the gain per capita is enhanced when cooperating.
I represent these two assumptions graphically by two increasing functions relating environmental quality (E) to well-being (W), one function for the well-being of a cooperating individual and another for non-cooperation, the former function exceeding the latter for all values of E, for the reasons just argued. The non-cooperation function does not represent a free-rider's well-being but well-being if cooperation does not occur at all. The 'well-being of a cooperating individual' function represents well-being having provided a given cooperative act (at or above the cooperative threshold), and if this act has a marginal cost over non-cooperation, it is assumed to be independent of the quality of the environment.
Next, I add three further assumptions: that the shape of the increasing relationship between environmental quality and well-being is sigmoid, both when individuals do not cooperate and when they do; and that these two sigmoid functions converge at both extremes (Figure 2). The basis for these assumptions is presented in the Appendix. The slope of a sigmoid curve at first increases and then decreases; it is first concave upwards (what I shall call the left segment of the curve) and then concave downwards, also called diminishing returns (the right segment). The pair of sigmoid curves, representing cooperation and non-cooperation, stand for the two conditions as they occur within a particular society or other group under analysis; I am not claiming that there is a common scale across all data sets that might be used to test the predictions. And in testing the predictions it will be important to assure, as far as is feasible, that the groups being compared along the environmental quality axis are indeed comparable on all features except that of some measure(s) of environmental adversity.
The marginal net benefit of cooperating (B), compared to not cooperating, at a given value of environmental quality (E) is therefore the difference in well-being between the cooperation and non-cooperation functions. This is the net benefit of the act over and above the condition of non-cooperation; the benefit gained as a result of the cooperative interaction or enterprise less any marginal cost of the individual's contribution (i.e. cost over and above cost for the case of non-cooperation). It can be readily seen from Figure 2 that as E increases B at first increases up to an inflection point, E_i, on the environmental quality axis and then declines: an inverted-U relationship. Although any marginal cost of cooperation has been assumed to be independent of environmental quality, this may not hold if resources for cooperation are required in advance of a cooperative relationship being initiated. In this case those in a higher quality environment may find this initial investment more affordable, which would raise the benefit somewhat in those environments.
Figure 2. The inverted-U curve shows benefit (B), that is, the difference in fitness/well-being between cooperation and non-cooperation. E_i indicates the inflection point of environmental quality, at which benefit ceases to increase with E and begins to decline as E increases further. The image is the property of the author.

Since we would expect the occurrence of cooperation to map directly onto its marginal net benefit, B, this pattern captures the essence of most of the data I have reviewed, if we assume that the in extremis cases lie to the left of E_i, which is very plausible, and all other cases lie to the right of E_i, which is less certain. That is, cooperation declines with adversity in extremis but otherwise the opposite is the case. The sigmoid curves could be drawn in many ways; here I have speculatively drawn them so that E_i sits towards the adverse end of the environmental quality continuum. This is not an a priori assumption that I wish to defend; I have done it simply to capture one post-hoc interpretation of the data, which is that the reversal of the 'adversity enhances cooperation' effect is rare and that it mostly occurs in extreme adversity (or in the extreme of relative deprivation (Davis 1959; Wilkinson and Pickett 2010) if relative and not absolute deprivation is the influential variable). A consequence of this position for the inflection point is that as environmental quality worsens beyond E_i the cooperation curve must fall steeply to converge with that for non-cooperation, which means that the benefit of cooperation will also fall steeply. A consequence of this is that below E_i the benefit of cooperation is very sensitive to small changes in environmental quality and I will return to the implications of this.
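The geometry just described is easy to reproduce numerically. The sketch below is my own illustration: the logistic shapes, midpoints and steepness values are assumptions chosen to match the qualitative description (curves converging at both extremes, E_i towards the adverse end), not values fitted to any data.

```python
import numpy as np

def sigmoid(E, mid, steep):
    """Logistic curve of well-being against environmental quality E."""
    return 1.0 / (1.0 + np.exp(-steep * (E - mid)))

E = np.linspace(0.0, 1.0, 1001)            # environmental quality axis
w_noncoop = sigmoid(E, mid=0.35, steep=12) # well-being without cooperation
w_coop = sigmoid(E, mid=0.25, steep=12)    # shifted left: cooperators do better,
                                           # yet the curves converge (approximately)
                                           # at both extremes of E
B = w_coop - w_noncoop                     # marginal net benefit of cooperating
E_i = E[np.argmax(B)]
print(f"B peaks at E_i ~= {E_i:.2f}: it rises with E below E_i and declines above it")
```

Shifting the cooperation curve's midpoint is only one way of keeping it above the non-cooperation curve while letting the two converge; any pair of sigmoids with those properties produces the same qualitative inverted-U for B.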
The cases that do not immediately fit into this scheme are Nettle's (2015) Tyneside neighbourhood study and Haushofer's (2013) multi-national study, in both of which cooperation declines with adversity. This divergence in results cannot be resolved with any certainty but I offer some observations. First, the deprived neighbourhood in Nettle's study might lie to the left of E_i, which would mean that it fitted into the present scheme. I do not mean to suggest that this community has a life as harsh as that of the So and the Ik. However, its state of relative deprivation, relative to its UK comparators, may be similar; it is in the 1% of the most deprived neighbourhoods in England. In addition, residents of the neighbourhood may share, but to a lesser extent, the problem responsible for the breakdown of Ik society, that of hunger. A recent account of the relationship between hunger, socioeconomic position (SEP) and behaviour reports that, based on US studies,

a substantial fraction of people from [low-income] households experience an excess of hunger due to their SEP, at least some of the time [and] . . . within very affluent populations, individuals of lower SEP eat less satiating diets; do so on more irregular schedules; and a very sizable proportion . . . report experiences such as food insufficiency and food insecurity that imply an increased frequency of hunger. (cited in Nettle 2017, 7)

Note also that the area containing the deprived neighbourhood contained an emergency-assistance food bank (Nettle 2015, 19). Nettle's (2015) own analysis of his results bears some similarities to the above arguments, including as it does the proposition that the more deprived neighbourhood is 'closer to the edge' (60) in the sense that the only way to avoid a crisis is to cross this edge in the hope of a happy outcome to a risky venture. And when it comes to a potential cooperative interaction:

[i]f the people in your neighbourhood are . . . close to the edge, then it makes sense that even if you had a lot of interaction with them you might not feel that you had enough interaction to say you knew what they were going to do next. (61, emphasis in the original)

Standing (2011, 20) has a similar conception of the precariat which, he suggests:

lives with anxiety, chronic insecurity associated not only with teetering on the edge, knowing that one mistake or one piece of bad luck could tip the balance between modest dignity and being a bag lady, but also with a fear of losing what they possess even while feeling cheated by not having more.
My second observation is that the average per capita GDP measure used for Haushofer's (2013) between-country comparison may well be unsuitable for the present analysis since a particular average value may be accompanied by very different distributions of values within the nation, and consequently different values for adversity, depending on how adversity maps to per capita GDP from country to country. Third, it is informative to examine Nettle's and Haushofer's trust measures further. These are the only studies reviewed here that employ self-report rather than behavioural measures and, of course, a trusting attitude is relevant in the present context only to the degree that it predicts cooperative behaviour. While Nettle's measure concerned trust of others within the same neighbourhood, Haushofer's question, 'Do you trust people you meet for the very first time?', was more generic. This 'generic trust' measure is more problematic as evidence here since it is unclear what the environment of any imagined cooperation in the mind of the respondent might be: the respondent's own environment, that of an imagined trustee, or something else? (See Nettle (2015, 63) for parallel comments on social environment.)

My suggestion that Nettle's deprived neighbourhood might lie in the left segment of the sigmoid meets a general problem for an explanation that predicts an inverted U-shaped relationship between two variables. As a specific example of this general problem, for the Tyneside neighbourhoods' case to fit my explanation, the wealthier neighbourhood should not lie so far to the right that it is the less cooperative of the two. More generally, with few data points it is difficult to support or refute an inverted U-shaped relationship with confidence. With just two data points, a finding of more cooperation under adversity, less cooperation or no difference in cooperation could all be accommodated in the inverted-U relationship between environmental quality and benefit by placing the data points, respectively, on the right arm of the U, the left arm or one on each arm. However, as more data points become available over a wider range of environmental qualities, it is increasingly possible to support or reject the proposed relationship.
In this context it is important to stress that I am not suggesting that a particular pair of sigmoid curves describes all cases (where by 'cases' I mean studies that compare cooperation between two or more levels of adversity) since different cases will involve different populations and will rarely use the same metrics for adversity and cooperation. To the extent that my thesis is correct, each case of the cooperation-adversity relationship will have its own pair of sigmoid curves, the particular shapes of which (and consequently of the benefit curve) are free to vary. Indeed many cases, as in all the examples described here, will cover too narrow a range of adversities (or will compare too few adversity levels) to reveal a sigmoid relationship and instead the relationship will be monotonic. In this monotonic case my account predicts concave upwards functions below E_i, but concave downwards functions above E_i, additional evidence being required to support a case for one or the other position on the environmental quality axis. Finally, although I argue that all cases cannot be placed quantitatively within a single two-dimensional adversity-cooperation space, I do want tentatively to propose, on the basis of the studies I have described here, that Figure 2 may represent a broad and qualitative truth about the adversity-cooperation relationship (cooperation being narrowly and behaviourally defined, as I have stressed). That is, that the relationship is negative over much of the range of environmental quality but that in the poorest environments it is reversed and is steeper.
To my knowledge there are no data sets showing an inverted-U relationship between adversity and cooperation but this may be because no study (and particularly no experimental study) has considered a broad enough range of adversities. Another strong prediction of my proposition that needs testing is that the benefit of cooperation has an inverted U-shaped relationship with environmental quality.
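One way such a test could proceed, once studies sample enough adversity levels, is to fit a quadratic and check for significant negative curvature (more stringent checks, such as two-segment regressions, also exist). The data below are simulated purely to illustrate the mechanics; nothing here comes from the studies reviewed.

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.uniform(0, 1, 60)                        # hypothetical environmental-quality scores
coop = 4 * E * (1 - E) + rng.normal(0, 0.1, 60)  # toy inverted-U cooperation data

b2, b1, b0 = np.polyfit(E, coop, 2)              # fit coop = b2*E^2 + b1*E + b0
peak = -b1 / (2 * b2)                            # vertex of the fitted parabola
print(f"quadratic term {b2:.2f} (negative => concave), peak near E = {peak:.2f}")
```

A negative quadratic coefficient alone is necessary but not sufficient evidence for an inverted U; one would also want the peak to fall well inside the sampled range and both arms to carry data.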
Although environmental quality takes the role here of an independent variable, this does not mean that it is fixed. A cooperative act might have a sufficiently beneficial outcome to move the actor further to the right along the environmental quality axis. Another point about environmental quality, particularly for the human case, is that the best environments may offer opportunities for individuals to find new ways to cooperate, ways that increase the benefit of cooperation and may also improve environmental quality.
This analysis of response to adversity, in which the environmental state is assumed to be fully known, complements that of Haselton and Nettle (2006), based on signal detection theory, for cases in which judgments are made in conditions of incomplete information.
Populations in extremis

I have found no non-human examples where cooperation is reduced by adversity. Under the present proposals, such an example would represent a population to the left of the inflection point and therefore in or close to extremis. This lack of data could therefore be explained by such populations having a high risk of dying out, migrating to a more suitable environment, or becoming permanently asocial, unless they were fortunate enough to be saved by environmental change or, particularly for human cases, by an outside agency. The risk of extinction or migration would be particularly acute due to the steep decline in cooperative benefit to the left of E_i. Gintis (2003, 160, emphasis in the original) takes a similar view and points to a cruel irony:

In the primitive conditions under which human sociality evolved, when a group was threatened with extinction or dispersal, say through war, pestilence, or famine, cooperation was most needed for survival. But since the probability that the group will dissolve increases sharply under such conditions, cooperation based on future reciprocation cannot be maintained. Thus, precisely when the group is most in need of prosocial behavior, cooperation based on repeated interactions will collapse.

Turnbull (1976, 6) held that, for the Ik, acting alone brought greater benefits than cooperating since, to repeat an earlier quote, 'the sadly functional nature of the Ik non-social system . . . was the only way to survive'. In terms of the present formulation this would mean that the cooperation and non-cooperation curves crossed at the adverse extreme.
It is also possible that acting alone may be more beneficial than cooperating under certain circumstances in the very best environments, the cooperation and non-cooperation curves again crossing (John Baker and Siobhán O'Sullivan, personal communications, 8 January 2017).
Other forms of prosociality
Cooperative behaviour shares motivational and emotional features, and evolutionary and cultural origins, with related prosocial behaviours, beliefs and attitudes, and the influence of adversity has been studied here too. At the level of individual differences, those with more adverse life experience exhibit more empathy, compassion and generosity in charitable giving and aid to a stranger (Lim and DeSteno 2016; Lim 2017). The influence of social class on prosociality is controversial, with recent conflicting findings for a range of measures: utilitarianism, empathy, feelings of entitlement, narcissism, theft, lying, cheating, helpfulness, generosity, trust, trustworthiness, volunteering and charitable donation. Some studies find greater prosociality in higher classes (Korndörfer, Egloff, and Schmukle 2015) but others find the opposite class effect (Côté, House, and Willer 2015), to mention just two of the most recent reports. In understanding these apparently conflicting findings, it will be helpful to develop more tailored predictions for each measure, as attempted here for cooperation. If the inverted-U relationship proposed here between environmental quality and cooperation should hold also for some of these other forms of prosociality, this might resolve some of the (apparent) inconsistencies in the data, since conflicting studies could be situated on opposite sides of the environmental inflection point. As others have noted it is also important to take account of the methodology by which these results have been obtained, from behavioural observations and experiments to self-report, in both the lab and the real world, since they each have their own psychological influences.
Conclusions
Putting aside the case of societies in extremis for the moment, I have argued that in a more adverse environment a greater benefit is to be gained by cooperating, and consequently that cooperation will be more common in these circumstances. This view is supported by data from a broad range of non-human species and for a range of human contexts. The possible exception for the human case is the influence of SEP, where recent data show an unresolved picture.
The major premise for my conclusions is that fitness or well-being is a sigmoid function of environmental quality and I have suggested ways in which the implications of this premise for patterns of cooperation can be tested.
One can be either despairing or encouraged by this view of life. From the despondent position matters have to get bad before we make the most of our collaborative potential, while others will argue that it's just when life is troublesome that we are able to rise to the occasion by acting together. Putting aside such subjective responses the more objective conclusion is that cooperation seems to be scaled to adversity and responds adaptively to need.
In the very poorest environments, however, prosociality may break down altogether and a tentative conclusion from the arguments and evidence presented here is that quite small changes in adversity might have a large impact on cooperative sociality. Although this might work in either direction (Laughlin 1974), it requires theory on the dynamics of change to take this idea further. It is uncomfortable to accept, following Turnbull for the Ik, that individualism in extreme adversity is adaptive; that in extremis selfishness is the favoured choice for survival. He may be right, but outside such extreme conditions the unusual human capacity for cooperation is at the heart of our sociality.

Acknowledgements

I am extremely grateful to John Baker whose comments resulted in many improvements and clarifications, including discussion of the comparability of environments when testing the ideas presented here. My thanks also to Tony Bennett, Matthew Johnson and Siobhán O'Sullivan for information and discussion, and to Quoc Vuong for guidance in preparing Figure 2.
Disclosure statement
The author states that there are no potential conflicts of interest.
Appendix: Evidence for the function shapes in Figure 2

This appendix provides evidence for the relationships between environmental quality and fitness or well-being proposed in Figure 2 in three parts: the right (diminishing returns) segment of the sigmoid curves; the left segment; and the convergence of the two curves (cooperation and non-cooperation) at both extremes. The evidence here is not selective but I have not attempted to find evidence for every kind of adversity.
The diminishing returns assumption is made in many behavioural ecological models, for example, for the way in which benefit to an offspring increases with parental investment (Trivers 1974); however, we are seeking empirical support here. Consider the common case of resource acquisition. In healthy animal populations in the wild (i.e. not in extremis), the rate of food intake generally increases with food density in a diminishing returns fashion (e.g. Goss-Custard et al. 2006) in accordance with foraging theory, as a consequence of the limiting effect of the time it takes to handle food (Stephens and Krebs 1986, 15). In addition, as an animal becomes satiated, further food intake brings increasingly less benefit no matter how much food the environment holds (as Winterhalder, Lu, and Tucker (1999, 304) also argue). This is why hungry and thirsty pigeons in the lab switch frequently between eating and drinking rather than, say, feeding to satiation before they start drinking (McFarland and Lloyd 1973); a unit reduction in thirst or hunger increases fitness more when the animal is further from its optimal internal state.
For humans, there is evidence of a diminishing marginal utility response of life expectancy to economic variables. This pattern is found across about 140 nations for the measure of national income per person (Wilkinson and Pickett 2010, Chapter 1; but see the main text discussion of Haushofer's 2013 per capita GDP measure) and within a single country, the US, for lifetime earnings (Cristia 2009). In the UK, life expectancy shows diminishing returns to various measures of deprivation, sometimes with a small concave upwards trend for those least deprived (Buck and Maguire 2015). Further, many studies show that subjective well-being, measured as life satisfaction or happiness, has a positive and diminishing marginal utility response to income, income change or wealth, for analyses both between countries, developed and developing (Frey and Stutzer 2002; Howell and Howell 2008), and within countries (Cummins 2000, a review of many studies; Graham and Pettinato 2001; Møller and Saris 2001, calculated from Tables I and II; Frey and Stutzer 2002). The causal relationship is from income to subjective well-being rather than in the opposite direction (Frey and Stutzer 2002; Gere and Schimmack 2017).
In a pioneering study of risk-sensitive foraging, the feeding decisions of juncos are described by a sigmoid function. Experimental birds that would suffer a negative energy budget (in extremis conditions) if they fed at a predictable source are risk prone and show a concave upwards utility function, whereas those with a positive energy budget (i.e. a high environmental quality) are risk averse and have a diminishing returns utility function (Caraco, Martindale, and Whittam 1980). These results have now been replicated many times in a variety of animal taxa, including two anthropological cases (reviewed by Winterhalder, Lu, and Tucker 1999).
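The budget-dependent risk preferences in these studies follow from Jensen's inequality: a diminishing-returns (concave) utility makes a certain reward worth more than a variable one of equal mean, while a concave-upwards (convex) utility reverses the preference. A minimal sketch, with utility shapes chosen only for illustration:

```python
import numpy as np

# Two illustrative utility shapes: concave (diminishing returns) and convex
# (concave upwards), standing for positive and negative energy budgets.
for name, u in [("concave (risk averse)", np.sqrt), ("convex (risk prone)", np.square)]:
    certain = u(4.0)                        # fixed option: always 4 food items
    variable = 0.5 * u(0.0) + 0.5 * u(8.0)  # risky option: 0 or 8 items, equal odds
    pref = "certain" if certain > variable else "variable"
    print(f"{name}: prefers the {pref} option")
```

With equal means (4 items), the concave forager takes the safe source and the convex forager gambles, exactly the pattern Caraco and colleagues observed.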
A sigmoid function has also been found significantly to explain the relationship between an individual's quality and their resulting utility, outperforming linear and concave models, for primate sexual success as a function of rank and for social rank (which predicts reproductive success) as a function of hunting yield in Aché hunters (Kuznar 2002). The proximate mechanisms responsible for these relationships are unknown and, although the sigmoid function is offered by Kuznar as an expression of differential risk sensitivity, the mechanisms discussed earlier in this appendix may alternatively, or additionally, be responsible. In other anthropological studies risk-sensitive decision-making fits the predicted sigmoid pattern (Kuznar and Frederick 2003).
A sigmoid function is also suggested by the nature of abiotic factors influencing fitness (or well-being, for the rest of this paragraph), such as temperature, humidity and a great many factors for human populations. For such features (pollutants and suchlike aside), and for a particular species, there is an optimum value that maximizes an individual's fitness, with fitness declining above and below this optimal value: an inverted-U shape. Now, unless the transition through the optimal value is to make a sharp discontinuity, which is biologically implausible, it follows that the function approaches the optimal value, from both sides, in a diminishing returns fashion. Imagine now the environmental quality axis reconceptualized so that fitness reaches a maximum value at the extreme right, representing the optimal value of, say, temperature for the species. In this reconceptualization each point to the left of this optimal value represents a pair of temperatures, one below the optimum and another above it, that have an equal effect on fitness. The axis, therefore, represents adversity whether due to under- or over-shooting of the optimal environmental value and the redrawn function of temperature against adversity will be diminishing returns to the right. If the original inverted-U function is roughly normally distributed, and therefore bell-shaped, the adverse extremes, on the far left of the reconceptualized axis, will be concave upwards and the whole function will then be sigmoid. Son and Lewis (2005) provide a corroborating example relating temperature to survival for three life history stages of an insect. The functions are bell-shaped to the right (high temperature), where survival was extremely low or zero (in extremis conditions).
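The folding argument can be checked numerically: assume a bell-shaped (here Gaussian) fitness curve over temperature, re-index each temperature by its distance from the optimum, and the fitness-against-quality function comes out sigmoid, concave upwards at the adverse extreme and with diminishing returns near the optimum.

```python
import numpy as np

temps = np.linspace(0.0, 4.0, 401)    # distance above the optimum (the paired
                                      # temperature below it folds onto the same point)
fitness = np.exp(-temps**2 / 2.0)     # assumed bell-shaped (Gaussian) fitness
quality = -temps                      # environmental quality axis: optimum at 0
order = np.argsort(quality)           # most adverse first, optimum last
q, f = quality[order], fitness[order]

second = np.gradient(np.gradient(f, q), q)   # numerical second derivative
print(second[5] > 0, second[-5] < 0)  # True True: concave up far from the optimum,
                                      # diminishing returns near it -- a sigmoid
```

Analytically the same follows from the Gaussian's second derivative, (q**2 - 1) * exp(-q**2 / 2), which changes sign at one standard deviation from the optimum.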
The data on severe food and water deprivation in rats (i.e. in extremis conditions, in contrast to natural feeding and drinking schedules (Siegel and Stuckey 1947)) show, as in the left segment of the sigmoid curve, a concave upwards relationship between environmental quality (the inverse of deprivation time) and fitness (the inverse of food or water intake after deprivation, a measure of distance from a homeostatic optimal state) (calculated from Clark 1958; Stellar and Hill 1952, for food and water, respectively). Andras and Lazarus (2005) assumed that the two curves might be convex upwards (diminishing returns) for the whole range of environmental quality but these data show this to be implausible.
Finally, I argue that the two sigmoid curves are unlikely to be parallel (in which case the benefit of cooperation would be a constant for all values of the environment) but are likely to converge at both extremes. For an individual in an extremely high-quality environment it seems likely that additional resources gained through cooperation would add little or nothing to fitness or wellbeing, either through sharing or by reciprocal altruistic exchanges. However, this may be too simplistic for the human case and I will consider it further in the text.
In the poorest environments, approaching in extremis conditions, there are several contexts in which convergence is likely. First, if cooperation consists of the acquisition and sharing of resources, at some point there are simply too few resources for cooperationby physical help, skill and knowledge sharing, social influence or other meansto increase the resource sufficiently to compensate for the fact that it must be shared. Second, where cooperation consists of a series of altruistic exchanges, the strength of short-term need is sufficiently strong that failing to reciprocate becomes the favoured response. Third, in a life with many pressing needs the opportunity to cooperate may be compromised by time constraints (Siobhán O'Sullivan, personal communication, 8 January 2017). | 2020-08-01T13:06:01.686Z | 2020-07-30T00:00:00.000 | {
"year": 2020,
"sha1": "ac9188062b3fb810b9a21fb773b00afaad1a2876",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0236715&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e3933fb37772812e6a030fe8bcca3673656631c8",
"s2fieldsofstudy": [
"Environmental Science",
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
249660828 | pes2o/s2orc | v3-fos-license | Pediatric anesthesia and achalasia: 10 years’ experience in peroral endoscopy myotomy management
Background: Endoscopic treatment for achalasia (POEM) is a recently introduced technique that incorporates the concepts of natural orifice transluminal surgery. Although pediatric achalasia is rare, POEM has been used episodically in children since 2012. Although this procedure has many implications for airway management and mechanical ventilation, evidence on its anesthesiologic management is very scarce. We conducted this retrospective study to highlight the clinical challenges for pediatric anesthesiologists, with special emphasis on the risks of intubation maneuvers and on ventilation settings.

Results: We retrieved data on children 18 years old and younger who underwent POEM in a single tertiary referral endoscopic center between 2012 and 2021. Demographics, clinical history, fasting status, anesthesia induction, airway management, anesthesia maintenance, timing of anesthesia and procedure, PONV, pain treatment and adverse events were retrieved from the original database. Thirty-one patients (3-18 years) undergoing POEM for achalasia were analyzed. In 30 of the 31 patients, rapid sequence induction was performed. All patients manifested consequences of endoscopic CO2 insufflation and most of them required a new ventilatory approach. No life-threatening adverse events were detected.

Conclusions: The POEM procedure seems to be characterized by a low-risk profile, but special precautions must be taken. The inhalation risk is real, given the high rate of patients with a full esophagus, although rapid sequence induction was effective in preventing ab ingestis pneumonia. Mechanical ventilation may be difficult during the tunnelization step. Future prospective trials will be necessary to identify the best choices in such a special setting.
Introduction
Peroral endoscopy myotomy (POEM) is a new endoscopic technique for the treatment of achalasia, which encompasses the concepts of natural orifice transluminal endoscopic surgery [1].
Since the first human case was performed in Japan in 2008 [2], this procedure has been successfully performed in more than 5000 adult patients to date. Although pediatric achalasia is rare, POEM has also been used successfully for the treatment of symptomatic children since 2012 [3].
POEM combines the benefits of a minimally invasive, endoscopic, transoral procedure with the long-term efficacy of a surgical myotomy, and it has therefore emerged as a first-line treatment for achalasia [4]. The procedure includes incision of the esophageal mucosa, dissection of the submucosa facilitated by insufflation of carbon dioxide (CO2), and myotomy of the esophageal muscular layer, and it requires special and often challenging anesthesiological management [5-7]. Esophageal clearance is severely compromised and, frequently, the esophagus is still full of food at the time of surgery. Furthermore, during the procedure, large volumes of carbon dioxide are insufflated through the submucosal space into the mediastinum and the abdominal cavity, with possibly severe impairment of ventilation due to subcutaneous emphysema, capnomediastinum, capnoperitoneum, and pneumothorax [8].
Some indications on anesthesiological management can be extrapolated from the adult literature, while in the pediatric field the evidence is very limited, consisting of only a few reports focused on surgical procedures.
We performed a retrospective chart review of the anesthesiological management of 31 children who underwent POEM at a single center, underlining the clinical challenges of the perioperative period with a focus on the risks of intubation maneuvers and on ventilation management.
Materials and methods
This retrospective study was approved by the local IRB (prot. 5486/14). All the patients underwent POEM at a single tertiary referral endoscopic center between January 2012 and December 2021 and were retrospectively identified from a prospectively collected database. Patients who were younger than 18 years at the time of the procedure were included in this study.
Demographics, clinical history, fasting status, anesthesia induction, airway management, anesthesia maintenance, the timing of anesthesia and procedure, post-operative nausea and vomiting (PONV), pain treatment, and adverse events were retrieved from the original database and medical charts.
Data were collected in a prospective database (Microsoft Excel) and password protected. Parametric data are presented as means and ranges. Given the small size of our sample and the descriptive presentation of the data, statistical determination of an odds ratio for each factor was not feasible. The study was conducted following the ethical principles outlined in the Declaration of Helsinki.
Results
We collected a total of 31 patients. The first 6 procedures were scheduled in the operating room, but since 2013 POEM has been performed exclusively in an endoscopic suite.
Preoperative assessment at hospitalization on the day before the procedure is summarized in Table 1. The American Society of Anesthesiologists (ASA) class was assessed as II in 25 patients, based on the presence of achalasia and nutritional status alone, and as III in 6 patients with other pathologies (1 Down syndrome with major heart disease; 1 hemiplegia following oncologic neurosurgery; 4 chronic pulmonary disease related to esophageal reflux). Preoperative preparation in the ward was standardized, similar to that of our adult patients [9].
After two days of a clear liquid diet, during the 24 h before the procedure older patients could take only water, while an intravenous glucose solution ensured the caloric intake in the smaller patients (10/31, 32%).
On the day of the procedure, antibiotic prophylaxis with clindamycin 20 mg/kg (600 mg maximum) and cefazolin 30 mg/kg (2 g maximum) was administered intravenously 30 min before the start of the procedure.
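For illustration only, the weight-based, capped dosing stated above works out as follows (the doses and caps are those in the text; the patient weights are hypothetical):

```python
def capped_dose(weight_kg, mg_per_kg, max_mg):
    """Weight-based dose, truncated at the stated maximum."""
    return min(weight_kg * mg_per_kg, max_mg)

for weight in (12, 45):  # hypothetical patient weights in kg
    clinda = capped_dose(weight, 20, 600)   # clindamycin 20 mg/kg, 600 mg max
    cefaz = capped_dose(weight, 30, 2000)   # cefazolin 30 mg/kg, 2 g max
    print(f"{weight} kg -> clindamycin {clinda} mg, cefazolin {cefaz} mg")
```

The cap only bites for the heavier patient: at 45 kg the uncapped clindamycin dose (900 mg) exceeds the 600 mg maximum.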
As in adults, a precautionary nasogastric tube was not placed before the procedure, except in one child, in whom no esophageal content was found.
Two procedures were aborted because of technical difficulties in submucosal tunnelization due to fibrosis. There was no anesthesiological indication to discontinue the procedure. However, many cases required adjustments in ventilation because of an increase in peak inspiratory pressure (maximum reported value: 32 mmHg over the PEEP) or in EtCO2 values (maximum reported value: 60 mmHg) during the submucosal tunneling.
All patients manifested some transient signs of endoscopic CO2 insufflation (pneumomediastinum, pneumoperitoneum, and subcutaneous emphysema), but none required specific therapy. Neither tension pneumothorax nor other life-threatening events were reported.
The mean time of surgery was 49.3 ± 17 min, with an extubation time of 11 ± 4 min. Once extubated, all patients were observed in the recovery room for a mean period of 66 ± 24 min. Mild analgesic rescue (tramadol) was administered in 7 patients (22.58%), while 1 patient (3.22%) received a stronger opiate (morphine). PONV occurred in 5 patients (16.13%) and was effectively treated with ondansetron. No emergence agitation was reported. One case of transient bronchospasm without desaturation occurred in the recovery room, but therapy with salbutamol puffs and intravenous betamethasone quickly restored normal pulmonary function.
All patients were regularly transferred to the ward and started feeding within 48 h, with discharge scheduled for the second postoperative day. A mild case of ab ingestis pneumonia occurred in a 3-year-old child on the second postoperative day, delaying discharge to the 8th day.
Discussion
The POEM procedure requires a specific anesthesiologic approach in terms of strategy, risks, and possible adverse events. In 2014, Tanaka et al. made suggestions for adults [10], even though the first prospective study dates from 2015 [5], while in children the experience with this new procedure is limited (Table 2). This procedure has seldom been implemented in young patients for prudential reasons, but the good outcomes experienced in adult patients have led endoscopists to apply POEM to younger patients as well. The first point to be assessed is the risk of regurgitation at induction of anesthesia. To avoid aspiration, the literature suggests a pre-surgery liquid diet, withholding oral intake 24 h before, antacid therapy, and the use of rapid sequence induction [8]. Despite the fasting, residual food debris should always be suspected in patients with achalasia. Indeed, in our series, only 52% of patients had a completely empty esophagus at the beginning of the procedure.
Moreover, in children, prolonged fasting can lead to ketoacidosis and hypoglycemia [18], further worsening a nutritional status already debilitated by previous dysphagia [19]. Right now, there is no evidence that this approach improves esophageal clearance.
Although medical therapy has been demonstrated to be effective in reducing lower esophageal sphincter (LES) pressure [20,21], children were never premedicated. Pre-procedural treatment with nifedipine [22] or nitrates [23] could be useful in improving esophageal emptying, but there is no evidence for this approach either.
In some centers, active draining of the esophageal content is performed in children as well as in adult patients. In our series, only one anesthesiologist chose to place an orogastric tube preoperatively. In all other cases (30/31), the anesthesiologist deemed that the risks of tube positioning outweighed the benefits. Indeed, the esophageal content can be either liquid (i.e., secretions, saliva) or solid (food debris), and thus not withdrawable.
Moreover, the real risk of esophageal perforation should be taken into account [24], especially considering the alterations of the esophageal muscular wall.
Performing an esophagoscopy for evacuation of solid esophageal contents within a few hours before POEM is suggested in adults [8-10]. Nabi et al. described a pre-operative endoscopy in children under light sedation [25], but the poor compliance of pediatric patients would require higher doses of sedative drugs, with a greater risk to airway protection. Furthermore, the traditional fear of inhalation at induction must be reconsidered. Warner et al. evaluated pulmonary aspiration of gastric contents during the perioperative period in infants and children over 63,180 general anesthetics. Among them, only 24 children aspirated (0.04%) and only 3 patients required mechanical ventilation for more than 48 h [26].
Consistent with these data, there is growing evidence that the traditional fear of inhalation of food debris may be disproportionate. Some authors have even questioned rapid sequence induction, given the high risk of vomiting in children [27]. The more recent review by Engelhardt et al. confirms the meager number of pulmonary aspirations, and highlights the risks of hypoxia and traumatic tracheal intubation when this technique is transferred directly into pediatric anesthesia practice [28].
The attitude of our group was more traditional, usually (30/31) preferring RSI. No complications were observed, even in smaller children. The Sellick maneuver (cricoid pressure to occlude the esophagus during induction) was not performed because, despite current adult guidelines, studies in the pediatric population do not seem to show a benefit in preventing regurgitation and suggest a potential risk to airway management [29].
Some considerations are warranted regarding the ventilatory approach. Throughout the entire procedure, continuous insufflation is performed to achieve good visualization of the endoscopic field. Especially during the tunnelization step, when the esophageal wall is dissected, the insufflated gas crosses at the points of least resistance, producing pneumomediastinum, pneumoperitoneum, and subcutaneous emphysema. Since children's anatomy is characterized by smaller abdominal/thoracic cavities, a minimal increase in abdominal pressure could compromise diaphragmatic breathing [30].
In our series, all patients required some revision of the ventilator settings, regardless of whether volume-controlled or pressure-controlled ventilation was applied.
The best mechanical ventilation strategy for the normal pediatric lung is still not defined. Many papers address only pathological conditions [31], and most of them were published more than two decades ago [32]. Some indications can be obtained from authors who analyzed patient outcomes after several changes in mechanical ventilation approaches [33], but there is no scientific evidence specific to pediatric mechanical ventilation, so anesthesiological management must be guided by personal experience and data from adults [34].
This lack of evidence is reflected in the extreme variability of the mechanical ventilation approaches in our sample, especially when the anesthesiologist has to deal with the continuous CO2 insufflation generated by the endoscope. At the beginning of the procedure, VCV was the preferred approach in most cases (24/31) and a minimal PEEP (4-5 cmH2O) was ensured in all patients.
Based on the data from the anesthetic charts, the entire sample needed some adjustment of the ventilation parameters during the tunnelization and the myotomy. This appears to be the most difficult step for ventilation control, due to alterations in the compliance of the respiratory system. In adults, Tanaka et al. observed an increase of EtCO2 in all patients (up to 50 mmHg), but no elevation of inspiratory pressure during mechanical ventilation was reported [10].
In our series both parameters were compromised, suggesting that the differences between the pediatric and the adult chest wall [35] were involved. The clinical management was extremely variable, as shown in Fig. 1. Despite this "Brownian motion" profile, no adverse events occurred, suggesting that the ventilator mode used does not have a strong impact on respiratory adverse events. It is also worth noting that plateau pressure (Pplat) was never reported in the clinical charts. Although a value above 35 cmH2O in adults is strongly correlated with barotrauma [36], no anesthesiologist used Pplat to guide mechanical ventilation.
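Purely as an illustrative sketch, and not as clinical guidance or the authors' protocol, the kind of stepwise titration described in this section could be expressed as simple decision logic; the 35 cmH2O plateau-pressure limit is the adult barotrauma threshold cited above, and the EtCO2 target is an assumed normocapnic value.

```python
def suggest_adjustment(etco2_mmHg, pplat_cmH2O, etco2_target=40, pplat_limit=35):
    """Illustrative decision logic only: pressure safety first, then CO2 clearance.

    etco2_target is an assumed normocapnic goal; pplat_limit is the adult
    barotrauma threshold cited in the text [36].
    """
    if pplat_cmH2O >= pplat_limit:
        return "reduce tidal volume / inspiratory pressure, then reassess"
    if etco2_mmHg > etco2_target:
        return "increase respiratory rate (then reassess EtCO2)"
    return "no change"

print(suggest_adjustment(etco2_mmHg=55, pplat_cmH2O=24))  # a tunnelization-step scenario
```

The point of the sketch is only that EtCO2 and Pplat pull the adjustment in different directions, which is why the recorded changes in Fig. 1 look so heterogeneous.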
Some authors have used postoperative CT scans to evaluate whether pneumomediastinum and pneumoperitoneum occur. Abnormal findings were frequently identified on radiological examination, but no correlation was found with the development of complications [37]. Hence, none of these conditions seem to have any real clinical impact on the outcome.
Indeed, at our center, it is not common to perform some types of radiological exams in the postoperative period, especially in pediatric patients.
Given the limited duration of the POEM procedure, the management of analgesia involves the use of short-acting drugs. During the stay in the recovery room (mean time 66 min), only 25.8% (8/31) of patients required analgesic rescue (7 with weak opioids and 1 with a strong opioid), while 16.1% (5/31) required intravenous therapy for PONV.
No serious or life-threatening adverse events were reported. Only two major complications (presumably linked to achalasia) occurred, but without sequelae. The length of hospitalization (3 days for each patient) was consistent with literature data, which show a period between 1 and 5 days for uncomplicated procedures [38].
Considering the level of evidence of our retrospective analysis, the POEM procedure seems to be characterized by a low-risk profile in children and adolescents as well. No serious adverse events occurred and the anesthesiological requirements are within the reach of most pediatric anesthesiologists. Thanks to a proper scheduling system in a well-equipped endoscopic suite, instead of an operating room, no difficulties were encountered. Moreover, performing the procedure in a friendlier environment probably helps reduce the psychological impact on children and their parents.
Regarding the anesthesiological issues, the data that emerged from our case series are quite reassuring. The inhalation risk is real, given the high rate of patients with a full esophagus, but RSI was effective in preventing ab ingestis pneumonia.
Mechanical ventilation may be difficult during the tunnelization step, but in no case did the procedure have to be delayed or stopped. According to the literature, there is no recommendation on the optimal intraoperative ventilation strategy. The set tidal volume or inspiratory pressures need to be adjusted according to EtCO2 and Pplat, while a certain level of PEEP, in the range of 4-8 cmH2O, is necessary to prevent atelectasis. Our experience may help fill the gap in pediatric management by reporting a significant case series, especially considering the rarity of achalasia in children. However, considering the lack of evidence and the limited sample, we cannot make general recommendations. Future prospective trials are necessary to find better options in such a special setting. | 2022-06-15T15:14:53.458Z | 2022-06-13T00:00:00.000 | {
"year": 2022,
"sha1": "186dae279f65fa7e50d707c8e4bdee1fb157f999",
"oa_license": "CCBY",
"oa_url": "https://janesthanalgcritcare.biomedcentral.com/track/pdf/10.1186/s44158-022-00054-7",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "dc1460278bc19bbe823b18b452e3fde903507b3c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2254662 | pes2o/s2orc | v3-fos-license | Age and albumin D site-binding protein control tissue plasminogen activator levels: neurotoxic impact.
Recombinant tissue-type plasminogen activator (tPA) is the fibrinolytic drug of choice to treat stroke patients. However, a growing body of evidence indicates that besides its beneficial thrombolytic role, tPA can also have a deleterious effect on the ischaemic brain. Although ageing influences stroke incidence, complications and outcome, age-dependent relationships between endogenous tPA and stroke injuries have not yet been investigated. Here, we report that ageing is associated with a selective lowering of tPA expression in the murine brain. Moreover, our results identify albumin D site-binding protein (DBP) as a key age-associated regulator of the neuronal transcription of tPA. Additionally, inhibition of DBP-mediated tPA expression confers neuroprotection in vitro. Accordingly, reduced levels of tPA in old mice are associated with smaller excitotoxic/ischaemic injuries and protection of the permeability of the neurovascular unit during cerebral ischaemia. Likewise, we provide neuroradiological evidence indicating the existence of an inverse relationship between age and the volume of the ischaemic lesion in patients with acute ischaemic stroke. Together, these results indicate that the relationships among DBP, tPA and ageing play an important role in the outcome of cerebral ischaemia.
Introduction
One of the earliest pathological processes leading to ischaemic stroke is the aggregation of platelets with local activation of coagulation factors and the generation of a fibrin-rich clot that interrupts the blood supply to the brain. Under physiological conditions, fibrinolysis is the result of the interaction between plasminogen/plasmin, the plasminogen activators (tissue-type plasminogen activator, tPA, and urokinase) and their inhibitors. Accordingly, the administration of recombinant plasminogen activators for the treatment of thrombotic or embolic conditions has been intensively studied. This has led to the approval of recombinant tPA (Actilyse®) for the treatment of patients with acute ischaemic stroke. Unfortunately, besides its beneficial thrombolytic role (NINDS, 1995), tPA also has a deleterious pro-haemorrhagic effect and a short therapeutic window, which has resulted in the regrettable fact that only 3%-8.5% of acute ischaemic strokes are currently treated with tPA (Bambauer et al., 2006).
Paradoxically, studies are essentially conducted in young healthy animals, while the incidence of ischaemic stroke increases with age and the prevalence and severity of some of its complications like increased blood-brain barrier permeability and cerebral oedema, seems to be higher in younger patients (Jaramillo et al., 2006). Despite these clinical observations, the effect of age is seldom studied in experimental models or considered in clinical trials. Similarly, while controlling the endogenous availability of tPA could be of therapeutic interest, it's transcriptional regulation of tPA in nerve cells remains poorly investigated (Shin et al., 2004;Lee et al., 2008).
Here, we demonstrate that ageing is associated with a decrease in the expression and activity of tPA in the brain parenchyma, and that this results in a decrease in the volume of the ischaemic lesion and protection of the barrier function of the neurovascular unit during cerebral ischaemia. Importantly, our data indicates that the basic region/leucine zipper protein, albumin D site-binding protein (DBP) is a key regulator of the neuronal transcription of tPA.
Sample collection
C57BL6/J mice (Charles River; L'Arbresle, France) were housed in a temperature-controlled room with food and water ad libitum. Experiments were performed in accordance with French (Act no. 87-848; Ministère de l'Agriculture et de la Forêt) and European Communities Council (Directive of November 24, 1986, 86/609/EEC) guidelines. For transcription/expression analyses, at selected ages, animals were anaesthetized (between 10:00 and 11:00 AM) with chloral hydrate (i.p. 150 mg/kg) and transcardially perfused with ice-cold 0.9% NaCl with 5% heparin. At the initiation of the perfusion, a blood sample was collected. The striatum, cortex, cerebellum, hippocampus and spinal cord were harvested and separated into two halves for biochemical and mRNA analyses.
ELISA
ELISA for total tPA was performed on 200 μg of protein extracts, according to the manufacturer's instructions (Molecular Innovations®, USA).
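As a hedged sketch of what "according to the manufacturer's instructions" typically involves for such a kit (this is not the kit's documented procedure), sample concentrations are usually read off a fitted standard curve; a four-parameter logistic (4PL) is a common choice. All standards and optical densities below are made-up values for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, d, c, b):
    """Four-parameter logistic: a = response at zero, d = max asymptote,
    c = inflection concentration, b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([0.31, 0.63, 1.25, 2.5, 5.0, 10.0])  # hypothetical ng/ml standards
std_od = np.array([0.12, 0.21, 0.38, 0.66, 1.05, 1.48])  # hypothetical OD450 readings
params, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 2.0, 2.0, 1.0], maxfev=10000)

def od_to_conc(od, a, d, c, b):
    """Invert the 4PL to recover concentration from an OD reading."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print(f"sample at OD 0.50 -> {od_to_conc(0.50, *params):.2f} ng/ml")
```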
Microarray experiments
For each age (4 and 21 months), samples were prepared by pooling an equal amount of RNA from three individuals. Five micrograms of total RNAs were reverse-transcribed with 7.5 mM random hexamers (GE Healthcare), 75 mM aminoallyl-dUTP (Sigma-Aldrich) and 100 U Rtases Reverse-iT Blend (Thermo-Scientific) overnight at 37 C. Aminoallyl cDNAs were then labelled with Cy5 or Cy3 mono-Reactive Dye (GE Healthcare) according to the manufacturer's instructions. Labelled cDNAs were then purified on Qiaquick PCR purification kit (Qiagen) and hybridized to the RNG-MRC_ MM25k_EVRY microarrays (Le Brigand et al., 2006) (http://www .microarray.fr:8080/merge/index). Each pool was hybridized in duplicate dye-swap independent experiments.
After hybridization, median signal and background intensities were extracted using Genepix 4.1 software (Axon Instruments). The level of expression of each gene was first evaluated by calculating the mean 'signal/background' ratio for each corresponding spot from the two dye-swap experiments. Genes were considered to be expressed only if their mean ratio was ≥1.2. These filtered data were then submitted to VARAN (http://www.bionet.espci.fr/) for normalization by lowess fit and differential expression analysis (Golfier et al., 2004). Normalized log2 ratios (21 versus 4 months) were then used for further statistical analysis with SAM software (Tusher et al., 2001) and a t-test with a P-value of 5% using the TIGR Multiexperiment Viewer (TM4:MeV, http://www.tm4.org/mev.html). The final list (http://www.ncbi.nlm.nih.gov/geo/, accession number GSE9004, Supplementary Table 1) of differentially expressed genes was established by comparing the results obtained with the three methods.
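To make the detection-filter and ratio steps concrete, here is a minimal sketch in Python; it is not the authors' actual Genepix/VARAN/SAM pipeline, and the column names are hypothetical:

```python
# Illustrative sketch of the expression filter described above; not the
# authors' actual pipeline. Column names are hypothetical.
import numpy as np
import pandas as pd

def filter_expressed(spots: pd.DataFrame, threshold: float = 1.2) -> pd.DataFrame:
    """Keep genes whose mean signal/background ratio across the two
    dye-swap experiments reaches the detection threshold."""
    ratio1 = spots["signal_exp1"] / spots["background_exp1"]
    ratio2 = spots["signal_exp2"] / spots["background_exp2"]
    spots = spots.assign(mean_ratio=(ratio1 + ratio2) / 2)
    return spots[spots["mean_ratio"] >= threshold]

def log2_age_ratio(expressed: pd.DataFrame) -> pd.Series:
    """log2 ratio (21 months / 4 months) fed into SAM and the t-test."""
    return np.log2(expressed["intensity_21m"] / expressed["intensity_4m"])
```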
Immunohistochemistry
Mice were anaesthetized with 0.1 g/ml chloral hydrate (150 mg/kg) and perfused transcardially with cold heparinized 0.9% NaCl followed by 2% paraformaldehyde, 0.2% picric acid in 0.1 M sodium phosphate buffer pH 7.4. Brains were removed, post-fixed overnight at 4°C in the same fixative, rinsed in veronal buffer containing 20% sucrose and frozen in Tissue-Tek (Miles Scientific, Naperville, IL). Coronal sections of 10 µm were incubated overnight at room temperature with a primary antibody directed against either tPA (rabbit anti-tPA, 1:5000; from Pr. Carmeliet, University of Leuven, Belgium), DBP (rabbit anti-DBP, 1:600; Abcam), microtubule-associated protein 2 (MAP-2; chicken anti-MAP-2, 1:6000; Abcam) or glial fibrillary acidic protein (GFAP; rabbit anti-GFAP, 1:1000; Sigma). Detection was performed using the corresponding fluorescein isothiocyanate (FITC, green) or tetramethyl rhodamine isothiocyanate (TRITC, red)-conjugated donkey anti-IgG secondary antibody (1:300, Jackson Immunoresearch, West Grove, USA). Before being coverslipped with antifade medium containing DAPI, sections were incubated for 5 min with a solution of Sudan Black B (Sigma-Aldrich) in 20% alcohol to reduce the autofluorescence observed in the oldest mice (Schnell et al., 1999). Sections were examined with a Leica DM6000 microscope. Images were digitally captured using a CoolSNAP camera and visualized with Metavue software. DAPI-positive cells were quantified with the Meta Imaging Series 6.3 application software. Values are the means of 10 serial sections for each animal (n = 3).
Decoy oligonucleotide and shRNA assays
Neuronal cortical cultures were prepared from foetal mice (E15-E16) as described earlier (Nicole et al., 2001). Decoy double-stranded oligonucleotides mimicking the DBP DNA-binding sequence within the mouse tPA promoter (Eurogentec, France) or short hairpin RNAs targeting the expression of DBP (Sigma, France) were transiently transfected into murine cortical neurons (DIV 8-10) with the Lipofectamine 2000 reagent (Invitrogen, France) with modifications of the protocol provided by the manufacturer. The sequences of the decoy oligonucleotides used were: (F) primer: aacactataatgtaaacag and (R) primer: ctgtttacattatagtgtt. The shRNA sequences used were: for DBP 86023 (shDBP1): ccggcggaggtacaagaacaatgaactcgagttcattgttcttgtacctccgtttttg; for DBP 86024 (shDBP2): ccggccggaggtacaagaacaatgactcgagtcattgttcttgtacctccggtttttg.
Forty-eight hours post-transfection, cells were either subjected to NMDA-mediated neuronal death as described below or assessed for tPA, DBP and NMDA receptor subunit expression and tPA proteolytic activity (Q-RT-PCR, spectrozyme and/or zymography assays).
Neuronal toxicity assay
Slowly triggered excitotoxicity was induced at 37°C by a 24-h exposure to 10 µM NMDA in DMEM supplemented with glycine (10 µM). Neuronal death was assessed by phase-contrast microscopy and quantified by measurement of lactate dehydrogenase (LDH) released by damaged cells into the bathing medium. The LDH level corresponding to near-complete neuronal death was determined in sister wells exposed to 200 µM NMDA for 24 h in DMEM supplemented with glycine. Background LDH levels were determined in sister wells subjected to sham wash and subtracted from experimental values to yield the signal specific to experimentally induced injury.
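The quantification described above amounts to a background-subtracted normalization. A minimal sketch follows, assuming the common convention of expressing injury relative to the full-kill (200 µM NMDA) wells; the paper itself only states that background is subtracted:

```python
def percent_neuronal_death(ldh_exp: float, ldh_sham: float, ldh_full_kill: float) -> float:
    """Express LDH release as a percentage of near-complete neuronal death.

    ldh_exp: LDH from wells exposed to 10 uM NMDA for 24 h
    ldh_sham: background LDH from sham-washed sister wells
    ldh_full_kill: LDH from sister wells exposed to 200 uM NMDA
    """
    specific = ldh_exp - ldh_sham            # background-subtracted signal
    full_scale = ldh_full_kill - ldh_sham    # signal corresponding to ~100% death
    return 100.0 * specific / full_scale

# Example: percent_neuronal_death(0.62, 0.20, 1.10) -> ~46.7% neuronal death
```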
Animal model of cerebral ischaemia
All procedures were approved by the Emory University Institutional Animal Care and Use Committee, and animals were randomized. tPA-deficient (tPA−/−) mice and their C57BL/6J wild-type (WT) controls (2- to 24-month-old) were anaesthetized with 4% chloral hydrate (400 mg/kg i.p.). The rectal temperature was maintained at 37°C with a homeothermic blanket. Cerebral perfusion (CP) in the middle cerebral artery territory was monitored throughout the surgical procedure with a laser Doppler (Perimed Inc., North Royalton, OH, USA) and only animals with a >80% decrease in CP were included. The middle cerebral artery was exposed and distally occluded (MCAO) with a 10-0 suture (Wang et al., 1998; Yepes et al., 2003). Immediately after MCAO, a subgroup was placed on a stereotactic frame and intracortically injected (unilateral injection) over 5 min with 2 µl murine tPA (1 µM; Molecular Innovations Inc., Royal Oak, Michigan, USA) at bregma: −1 mm, mediolateral: 3 mm and dorsoventral: 3 mm (Paxinos and Franklin's stereotaxic atlas). The dose of tPA injected was chosen based on our previous observations (Yepes et al., 2003). None of the operated animals died. Forty-eight hours after MCAO, animals were sacrificed by anaesthetic overdose. To measure the volume of the ischaemic lesion, brains were harvested, placed in a matrix and cut into eight 1-mm sections, followed by incubation in 2% 2,3,5-triphenyltetrazolium chloride (TTC) in saline for 30 min at 37°C and fixation with 4% formalin in PBS (Wang et al., 1998). With this procedure, the necrotic infarct area remains unstained. The infarct volume was determined with the NIH-Image analyser system as the sum of the unstained areas of the sections multiplied by their thickness according to the equation: V(ischaemic lesion) = Σ(areas of ischaemic lesion) × distance between sections. The rostral and caudal limits for the integration were set at the frontal and occipital poles of the cortex. This method has been validated previously, demonstrating an excellent reproducibility of the volumetric assessment of the ischaemic lesion (Osborne et al., 1987). To study the permeability of the blood-brain barrier, a subgroup of animals was intravenously treated with 2% Evans blue (Sigma-Aldrich) in saline immediately after MCAO, followed 6 h later by transcardiac perfusion. Brains were removed, divided into ipsilateral and contralateral hemispheres, weighed, homogenized in N,N-dimethylformamide (400 µl) and centrifuged (21 000 g, 30 min). Evans blue was quantified in supernatants from the absorbance at 620 nm minus the background calculated from the baseline absorbance between 500 and 740 nm, and divided by the wet weight of each hemisphere. Each observation was repeated six times.
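Both quantifications reduce to simple arithmetic; the following sketch restates them (the function and variable names are ours, not from the original analysis software):

```python
import numpy as np

def infarct_volume_mm3(unstained_areas_mm2, section_thickness_mm: float = 1.0) -> float:
    """V(ischaemic lesion) = sum of unstained TTC areas x distance between sections."""
    return float(np.sum(unstained_areas_mm2)) * section_thickness_mm

def evans_blue_per_gram(a620: float, baseline_a500_740: float, wet_weight_g: float) -> float:
    """Background-corrected absorbance at 620 nm per gram of wet hemisphere.
    The baseline is derived from the absorbance between 500 and 740 nm."""
    return (a620 - baseline_a500_740) / wet_weight_g

# Example with eight 1-mm sections:
# infarct_volume_mm3([4.1, 6.3, 8.0, 7.2, 5.5, 3.0, 1.2, 0.0]) -> 35.3 mm^3
```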
Study population and clinical protocol
A total of 93 patients with an acute ischaemic stroke admitted to the Emergency Department of a University Hospital (Vall d'Hebron, Barcelona, Spain) within 3 h of symptom onset were included (Supplementary Table 2). All had an MCAO documented by transcranial Doppler (TCD). No important disease affecting the functional status of the patients was allowed in this cohort, since only those with a historical modified Rankin Scale score of 0 or 1 were included (this excluded dementia or any other important disease affecting the daily life of these patients). Patients with infections or any other immunological or tumoural disease at the moment of suffering the index stroke were also excluded. Serial magnetic resonance imaging (MRI) exams were performed to study infarct volume evolution. A baseline MRI including diffusion- (DWI) and perfusion-weighted (PWI) sequences was performed within the first 3 h after stroke onset. A detailed history of vascular risk factors was obtained from each patient, and a clinical examination was performed on admission and serially until discharge. Stroke severity and neurological outcome were assessed using the National Institutes of Health Stroke Scale (NIHSS). TCD assessments were performed by an experienced neurologist using a Multi-Dop® X4 device (DWL Elektronische Systeme GmbH, Sipplingen, Germany) to assess the location of the MCAO. The study was approved by the Ethics Committee and conducted in accordance with the Declaration of Helsinki. All patients or relatives gave informed consent.
MRI protocol and volumetric assessment of lesion size
All MRI studies were performed as previously described (Rosell et al., 2005) with a 1.5 T whole-body imaging system with 24 mT/m gradient strength, 300 µs rise time and an echo-planar-capable receiver equipped with a gradient overdrive (Magnetom Vision Plus, Siemens Medical Systems, Germany). The perimeter of the area of abnormally high signal intensity was traced on each DWI and time-to-peak (TTP) map. Measured areas were multiplied by the slice distance to obtain the total lesion volumes for both the DWI and TTP maps (cubic centimetres, cc).
Statistical analyses
For tPA activity/expression studies and DBP expression studies, data were expressed as mean ± SEM and statistical analyses consisted of a Kruskal-Wallis test, followed by a Mann-Whitney test for comparisons between groups. For human data, descriptive and frequency statistical analyses were obtained and comparisons were made using the SPSS statistical package, version 12.0. Statistical significance for intergroup age differences was assessed by the Mann-Whitney U-test (two groups) and the Kruskal-Wallis test (more than two groups). Correlations between numerical variables were assessed by Spearman's correlation coefficient. Receiver operating characteristic (ROC) curves were constructed to select the age with the best sensitivity and specificity to differentiate large and small MRI lesions (above the median value). A multiple logistic regression analysis was performed to determine independent predictors of DWI and PWI extent. P < 0.05 was considered statistically significant. For the animal studies of cerebral ischaemia, the Wilcoxon two-sample rank-sum test was used and P < 0.05 was considered significant.
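The paper does not state which rule was used to pick the cut-off from the ROC curve; Youden's J statistic (sensitivity + specificity − 1) is a common choice and is used in the following sketch purely for illustration:

```python
# Sketch only: the selection criterion is assumed to be Youden's J;
# the paper does not state which rule was actually used.
import numpy as np
from sklearn.metrics import roc_curve

def best_age_cutoff(age: np.ndarray, small_lesion: np.ndarray) -> float:
    """small_lesion: 1 if the baseline DWI volume is below the median, else 0."""
    fpr, tpr, thresholds = roc_curve(small_lesion, age)
    youden_j = tpr - fpr              # J at every candidate age threshold
    return float(thresholds[np.argmax(youden_j)])
```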
Age-related decline in tPA cerebral expression and activity in mice
We first investigated the effects of ageing on tPA's catalytic activity and transcription in WT female mice aged 1-24 months. With ageing, there was a significant decrease in the proteolytic activity of tPA in the cerebral cortex, hippocampus (Fig. 1A and B), striatum and cerebellum (Supplementary Fig. 1A and B; P < 0.001). In contrast, no age-associated changes were observed in the spinal cord (Fig. 1C) and plasma (Fig. 1D and concordant ELISA assays for tPA antigen levels, not shown). Similarly, tPA transcription was reduced with ageing in the cortex (P < 0.001; Fig. 1E), hippocampus (P < 0.001; Fig. 1F), striatum (P < 0.005; Supplementary Fig. 1C) and cerebellum (P < 0.001; Supplementary Fig. 1D), but not in the spinal cord (Supplementary Fig. 2A). Immunohistological analyses (see specific labelling in WT versus tPA-deficient mice, Fig. 2A) demonstrate a reduction of ~50% in tPA immunoreactivity between 4 and 21 months of age in mossy fibres (Fig. 2B), with no change in the number of DAPI-labelled cells (Fig. 2C) or NeuN-labelled cells (not shown). ELISA for total tPA in the cortex and hippocampus confirmed the age-dependent decrease in tPA levels (Fig. 2D). Interestingly, age-associated reductions in tPA activity/transcription were also observed in male cortices and hippocampi (Supplementary Fig. 3A and B).
DBP as a candidate transcriptional regulator of tPA cerebral expression
Microarray analysis comparing gene expression in the cortex revealed that, of a total of 15 189 genes analysed (15 189 expressed out of 24 081 tested), 180 were upregulated and 27 downregulated at 21 months of age when compared with 4-month-old mice (http://www.ncbi.nlm.nih.gov/geo/, accession number GSE9004, Supplementary Table 1). Interestingly, several specific pre- and postsynaptic neuronal markers, including SAP-102, PSD95, synaptophysin and synaptotagmin, were unaltered. Similarly, genes related to tPA functions, like LRP, annexin II and NMDA receptor subunits (NR1, NR2a-d, NR3), were unaltered. Regarding the endogenous tPA inhibitors (the type 1 plasminogen activator inhibitor (PAI-1), protease nexin-1 and neuroserpin), a strikingly increased transcription of PAI-1 between 4 and 21 months could contribute to the reduced tPA activity (Q-RT-PCR experiment, not shown).
We then postulated that among the genes encoding for transcription factors downregulated between 4 and 21 months of age, some could be critical for the transcriptional regulation of tPA. Among 12 candidates differentially expressed with ageing (Supplementary Table 3), only two were downregulated, both matching with a map of the known binding sites for transcription factors on the mouse tPA promoter (NT 039457), obtained with the Genomatix software: the GATA zinc finger domain-containing protein 1 and the basic region/leucine zipper D-site albumin binding protein (DBP). Interestingly, the recognition site for DBP is conserved for all the tPA promoters identified so far (Fig. 3A).
To confirm the microarray results, Q-RT-PCR was performed for DBP and tPA from the cortex and hippocampus of mice at different ages (n = 4 per age). As previously demonstrated for tPA (Fig. 1E and F), DBP mRNA levels were dramatically reduced between 3-6 and 15-24 months in the cortex (−75%) and hippocampus (−85%) (Fig. 3B and C). Accordingly, we observed a positive correlation (P = 0.0016) between tPA and DBP mRNA levels regardless of the age considered (Fig. 3D). Immunohistochemistry on hippocampal sections confirmed a decrease in tPA protein levels in mossy fibres with age, associated with a similar decrease in DBP protein levels in adjacent neuronal bodies (Fig. 3E). Interestingly, in the spinal cord, as observed for tPA mRNA levels, there was no age-associated modification of DBP mRNA levels (Supplementary Fig. 2B).
Impact of age-related tPA decline under ischaemic conditions in mice
tPA-deficient mice are resistant to excitotoxic/ischaemic damage, which can be reversed by injection of tPA directly into the brain or intravenously (Tsirka et al., 1995; Wang et al., 1998; Nagai et al., 1999). Although tPA can cross the blood-brain barrier by an LRP-dependent mechanism (Benchenane et al., 2005b), it also promotes blood-brain barrier leakage (Yepes et al., 2003). Here, we measured the ischaemic lesion volume and the blood-brain barrier permeability after permanent MCAO in 2- to 24-month-old mice by TTC staining and Evans blue extravasation. Figure 4A and B shows that the ischaemic lesion and the permeability of the blood-brain barrier after ischaemic stroke were significantly reduced (by 40%) in WT mice aged 15 months and older, with a profile similar to that observed for tPA mRNA levels (Fig. 1E). Q-RT-PCR experiments revealed that the cortical tPA mRNA levels remained unaffected in the ischaemic hemisphere (when compared with the non-ischaemic side) in both young and old animals (Fig. 4C). An immunohistological analysis confirmed that the smaller lesion observed in old animals by regular thionin staining correlated with a less pronounced loss of MAP-2 (microtubule-associated protein-2, a neuronal marker) immunoreactivity when compared with young mice (Fig. 4D). There was no major difference in the extent of astrogliosis between the two ages (Fig. 4D). We then observed that 4-month-old tPA-deficient mice displayed a significant reduction in the ischaemic lesion volume and the blood-brain barrier permeability when compared with age-matched WT mice, but no difference when compared with 21-month-old WT mice (Fig. 4E and F). In contrast to WT mice, no difference was observed between tPA-deficient mice at 4 and 21 months of age. The difference between 4-month-old WT and tPA-deficient mice disappeared when tPA-deficient mice were injected with tPA into the ischaemic tissue immediately after the onset of the insult (Fig. 4E and F). Likewise, recombinant tPA increased the ischaemic lesion volume and blood-brain barrier permeability in 21-month-old WT mice to values comparable to those observed in 4-month-old WT mice. Interestingly, intrastriatal NMDA injection led to smaller lesions in old than in young animals (21- versus 4-month-old) (Supplementary Fig. 4). Lower levels of tPA thus lead to a decrease in excitotoxic neuronal death, lesion volume and permeability of the blood-brain barrier following ischaemic stroke in older animals.
Impact of age in human stroke patients
DWI- and PWI-weighted imaging studies performed in patients with acute ischaemic stroke upon arrival at the emergency department (see demographic data in Fig. 5A) demonstrate that, compared with younger patients, older people have smaller infarcts (see MRI of representative patients in Fig. 5D). The median lesion volume measured in 93 stroke patients at baseline DWI was 14 cc (Fig. 5B); the median baseline PWI was 191 cc (data not shown). Patients with larger lesion volumes were significantly younger than patients with smaller lesion volumes: those with baseline DWI < 14 cc had a mean age of 73.36 versus 68.23 years for DWI > 14 cc (P = 0.017); those with baseline PWI < 191 cc had a mean age of 73.98 versus 67.51 years for PWI > 191 cc (P = 0.025). ROC curves used to study the sensitivity and specificity of different ages to predict infarct volume identified an age of 74 years as the best cut-off to differentiate small and large lesion volumes. Only 38% of patients older than 74 years had a baseline DWI > 14 cc as compared with 63% of those younger than 74 years (P = 0.018). Similarly, 38% of patients older than 74 years had a baseline PWI > 191 cc as compared with 61% of those younger than 74 years (P = 0.029). A logistic regression analysis showed that older age (OR 1.052; B = 0.051, CI 1.01-1.09, P = 0.015) and lower baseline NIHSS (OR 0.87; B = −0.13; CI 0.78-0.96; P = 0.007) were independent predictors of smaller DWI lesions (baseline DWI < 14 cc) and also of smaller baseline PWI lesions (age OR 1.067; B = 0.064; CI 1.021-1.114; P = 0.004 and NIHSS OR 0.87; B = −0.12; CI 0.794-0.973; P = 0.013). After correcting the logistic regression for cardiovascular risk factors, these were still the main independent predictors of infarct volumes with slight modifications in their statistical significance: older age (OR 1.051; B = 0.05, CI 1.003-1.10, P = 0.03) and lower baseline NIHSS (OR 0.84; B = −0.16, CI 0.76-0.94, P = 0.003) were independent predictors of smaller DWI lesions (baseline DWI < 14 cc); interestingly, some of these cardiovascular risk factors (i.e. diabetes) showed a non-significant trend to influence infarct volume (OR 3.5; B = 1.26; CI 0.95-13; P = 0.058). Older age was an independent predictor of smaller infarcts at arrival even when corrected for MCAO location or time from stroke onset to MRI (data not shown). In fact, patients with a proximal MCAO tended to be older than those with a distal MCAO (72 versus 66 years, P = 0.07), giving even more power to our findings of smaller lesions among the eldest, since larger infarctions might be presumed in those with proximal occlusions. We also found a positive correlation between DWI lesion volume and neurological scores at arrival (r = 0.27, P = 0.008), as shown in Fig. 5C.

Figure 2 Effect of ageing on tPA protein levels in the CNS. Control immunostainings for tPA (red) in WT and tPA-deficient mice (A; n = 3). Immunostainings for tPA (red) on perfused brain sections of 4-, 12- and 21-month-old WT mice (B) and corresponding nuclei counting using DAPI (blue) counterstaining (C; n = 3 per group). Scale bars, 100 µm. ELISA for total tPA performed on perfused brain extracts of WT mice at 4 and 21 months old (cortex and hippocampus) (D; n = 4). *P < 0.05.
Neuronal DBP functional 'knock-down' reduces both tPA levels and NMDA-mediated neurotoxicity

At this stage, we hypothesized that an age-associated reduction in DBP-mediated control of tPA transcription could underlie the less deleterious effects of tPA in the oldest individuals. We thus investigated whether the binding of DBP to the mouse tPA promoter could influence neuronal tPA transcription and the subsequent potentiation of NMDA-mediated neurotoxicity. A double-stranded decoy oligonucleotide corresponding to the DBP cis-acting element in the mouse tPA promoter (position 1131-1145: aacactataatgtaaacag) and a control scramble decoy were generated and transiently transfected into cultured mouse cortical neurons as previously described (Ahn et al., 2004). After 48 h, the neuronal transcription of tPA was specifically reduced by the DBP decoy oligonucleotide (−54%, P < 0.001, Fig. 6A), while DBP and NMDA receptor subunit mRNA levels remained unchanged (not shown). Zymography assays performed on the corresponding bathing media confirmed a significant decrease (~30%, P < 0.05) in extracellular tPA activity levels (Fig. 6B) when compared with the scramble control. In agreement with the known pro-excitotoxic effect of tPA (Nicole et al., 2001), DBP decoy-transfected neurons exposed to NMDA (10 µM) were less sensitive to NMDA-mediated excitotoxicity than control cells (−29.7% versus scramble decoy; P < 0.0001, Fig. 6C). Similarly, DBP-specific shRNAs reducing DBP transcription (−30% with shDBP1 and −60% with shDBP2, Fig. 6D) reduced tPA mRNA levels (Fig. 6E) and conferred resistance of cortical neurons to NMDA-mediated excitotoxicity (−35% with shDBP1 and −50% with shDBP2, P < 0.05; Fig. 6F).
Discussion
While reperfusion undoubtedly remains the treatment of choice for ischaemic stroke patients (NINDS, 1995), both tPA-related blood-brain barrier leakage and neurotoxicity might be significant problems limiting the overall clinical benefit (Yepes et al., 2009). Accordingly, tPA levels in the vascular (Zlokovic et al., 1995; Wang et al., 1997) and parenchymal (Tsirka et al., 1995; Wang et al., 1998; Nagai et al., 1999) compartments might critically control the extent of stroke damage. However, to date, little was known about these parameters in relation to age. The results presented here indicate that the expression of tPA in the brain parenchyma decreases in old mice and that this age-related effect is mediated by the transcription factor DBP. This reduced expression of tPA is associated in vitro with attenuation of excitotoxic damage and in vivo with a decrease in the volume of the ischaemic lesion and preservation of the permeability of the neurovascular unit following middle cerebral artery occlusion. Importantly, we also show the existence of an inverse relation between age and the volume of the ischaemic lesion in human patients with acute ischaemic stroke.

Figure 4 (continued). (E) Mean volume of the ischaemic lesion 48 h after MCAO in WT mice, WT mice injected with rtPA, tPA−/− mice, and tPA−/− mice injected with rtPA at 4 and 21 months of age (n = 6 for each observation). *P < 0.05 versus WT and tPA−/− treated mice, †P < 0.05 versus tPA−/− mice (4 months old), ‡P < 0.05 versus WT and untreated tPA−/− mice (21 months old). (F) Evans blue extravasation 48 h after MCAO in WT or tPA−/− mice (4 and 21 months of age), injected or not with rtPA (n = 6 for each observation). *P < 0.05 versus tPA−/− mice (4 months old), *P < 0.05 versus tPA−/− and WT mice (4 and 21 months old). M: age in months.
DBP belongs to the conserved proline- and acidic-amino-acid-rich basic leucine zipper (PAR bZip) transcription factor family (Khatib et al., 1994; Lopez-Molina et al., 1997; Gachon et al., 2004). The expression of DBP displays a robust circadian rhythm in tissues with high amplitudes of clock gene expression (like the suprachiasmatic nucleus) and constant levels in cerebral regions in which clock gene expression is stable (Lopez-Molina et al., 1997; Gachon et al., 2004). Interestingly, while hippocampal downregulation of DBP expression has a neuroprotective effect, viral over-expression of DBP enhances susceptibility to seizures (During et al., 2003; Klugmann et al., 2006). These observations agree with our results indicating that inhibition of the interaction between DBP and the tPA promoter, or reduction of its endogenous expression, limits the neuronal transcription of tPA and the susceptibility to NMDA-induced injury. Without excluding the probable involvement of other transcription factors, our results indicate that DBP is a key regulator of the transcription of tPA in neurons.
As already stated, tPA has a dual role in the ischaemic brain, with a beneficial thrombolytic activity in the intravascular space and a deleterious effect on the neurovascular unit (Yepes et al., 2009). Our data indicate that whereas ageing does not have an effect on the expression and activity of tPA in the intravascular space, it has a significant impact on its expression and activity in the brain parenchyma, supporting previous observations in rats (Schmoll et al., 2001). The finding that the permeability of the neurovascular unit and the volume of the ischaemic lesion decrease in aged animals not only supports a deleterious role for parenchymal tPA during cerebral ischaemia, but also indicates that the expression and activity of tPA in neurons and glial cells have a direct effect on the final outcome following cerebral ischaemia. This observation is further supported by the data from acute ischaemic stroke patients. Further evidence of an age-dependent effect of tPA in the ischaemic brain is provided by the fact that the intracerebral injection of tPA increases the volume of the ischaemic lesion and the permeability of the neurovascular unit in aged WT mice and in young and aged tPA-deficient mice to values comparable with those observed in young WT animals.
These alterations in ischaemia-induced brain damage and blood-brain barrier permeability observed in older animals are most probably due to the reduced tPA levels per se, rather than to a general downregulation of transcriptional processes in the senescent organism. Indeed, first, we found more upregulated than downregulated genes in the aged brain (Supplementary Table 1). Second, the expression of the main actors mediating tPA effects in the brain remains unaffected by ageing.
While it is clearly established that the neonate/child brain displays a susceptibility to hypoxic/ischaemic insults different from that in adults, conflicting results arise from the sparse literature regarding old individuals. Ageing is generally described as a negative predictor of post-stroke recovery (Popa-Wagner et al., 2007), but some studies found no difference or even a better recovery in old versus young animals (Shapira et al., 2002). Moreover, although some authors hypothesized that a reduced density, sensitivity and function of NMDA receptors might lead to less excitotoxicity, they found an increased ischaemic lesion in aged rats at the cortical level but not in the striatum (Davis et al., 1995). However, others have reported that, apart from either a higher mortality rate or brain oedema, old rats subjected to focal ischaemia had damage similar to that of their young counterparts (Fotheringham et al., 2000; Shapira et al., 2002). Among potential explanations, differences in the ages, end-points and models studied might be candidates. Moreover, the temporal and regional profile of cellular (microglial activation, glial scarring, neurogenesis) and molecular (gene expression) responses is profoundly influenced by age (Petcu et al., 2008).

Figure 6 Effect of DBP repression on tPA transcription and proteolytic activity in neuronal cultures. Cultured cortical neurons were transiently transfected with either double-stranded DBP-decoy or scramble oligonucleotides (A-C) or shRNAs targeting the expression of DBP (D-F) (n = 18 from three independent cultures per group). DBP decoy reduced tPA mRNA levels measured by Q-RT-PCR (A) and active tPA levels in the corresponding bathing media assessed by zymography (B) (n = 18 from three independent cultures per group). DBP decoy-transfected neurons were less sensitive to NMDA-mediated excitotoxicity than DBP scramble-transfected neurons (C). shRNAs reduced the transcription of DBP (D) and tPA (E), while the empty vector (Ctrl Plok) had no effect. Neuronal death induced by NMDA exposure was prevented by DBP shRNAs (F) (n = 12 from three independent cultures per group). *P < 0.05, **P < 0.01, ***P < 0.0001.
The dedicated literature is even more limited regarding the extent of ischaemic infarcts in humans, with two studies failing to find any difference with age (Nakayama et al., 1994; Engelter et al., 2003). However, in agreement with our data in mice, we provide supporting evidence by DWI- and PWI-weighted imaging in humans: stroke patients display a reduced ischaemic lesion with ageing. This latter finding is of particular relevance with respect to a recent retrospective analysis reporting that, in stroke patients, both age and thrombolytic therapy are independent predictors of the 'Hyperintense Acute Injury Marker', an index of early blood-brain barrier opening correlated with poor functional outcome, haemorrhagic transformation and reperfusion (Kidwell et al., 2008). These data suggest that although rtPA-induced thrombolysis is beneficial for stroke patients, ageing and the related parenchymal tPA levels should be considered in the evaluation of new thrombolytic agents or neuroprotectants in future clinical trials for stroke. Our stroke population is similar to that of large stroke clinical trials and is therefore a representative stroke cohort; however, since we wanted to recruit a very homogeneous group of patients with comparable brain areas of hypoperfusion, we selected those with a documented occlusion in the middle cerebral artery territory. Therefore, our results showing smaller hypoperfused and infarcted areas in older patients might not apply to other subtypes of ischaemic stroke, such as lacunar infarction or posterior territory infarction, which are not represented in our cohort of stroke patients. Moreover, these human data provide indirect support for our experimental results, since directly testing tPA expression in the brains of stroke patients of different ages to demonstrate lower levels among the older is not possible.
In conclusion, age- and DBP-associated reductions in tPA cerebral expression could have profound impacts on pathological conditions. A full elucidation of the molecular events related to age-associated tPA decreases could lead to innovative therapeutic approaches for seniors, the forgotten patients who, aged 80 years and older, accounted for fewer than 50 of the total number of patients included in the three initial randomized controlled tPA trials (NINDS, ECASS and ATLANTIS) (Poppe and Hill, 2008).
A study protocol of a randomized trial evaluating the effect of using defined menu plans within an intensive personal nutritional counseling program on cardiovascular risk factors: The MoKaRi (modulation of cardiovascular risk factors) trial
Importance: Changes in dietary habits and lifestyle can reduce the risk of cardiovascular disease, which is the leading cause of death worldwide. Objectives of the MoKaRi study: The MoKaRi (modulation of cardiovascular risk factors) intervention study is designed to evaluate the effectiveness and potential of the MoKaRi concept. The MoKaRi concept comprises three components, each designed to improve dietary behavior. The first component entails using daily menu plans to implement a defined "cardioprotective diet". This diet consists of seasonal menu plans which are characterized by: (i) a personalized energy supply depending on each participant's age, gender and level of physical activity; (ii) an adequate intake of carbohydrates, protein, fat, vitamins, minerals, and trace elements according to the guidelines of the German Society of Nutrition (DGE); (iii) a recommended intake of saturated fatty acids (SFA; < 7% of caloric intake (En%)), monounsaturated fatty acids (MUFA; > 10 En%), polyunsaturated fatty acids (PUFA; approx. 10 En%), and long-chain n-3 PUFA (≥ 500 mg per day); (iv) measures to encourage consumption of vegetables and fruits; and (v) eating more than 40 g dietary fiber every day. Half of the participants will be scheduled to consume an additional 3 g of long-chain n-3 PUFA every day in the form of fish oil. The second component consists of regular one-on-one nutritional counseling, while a variety of further incentives make up the third component of the MoKaRi concept. The MoKaRi study will provide essential insights into the relationship between defined nutrient intake, markers of food intake and health status. Our specific aim is to investigate the influence that dietary and lifestyle choices have on cardiovascular health. The information and practical tools suitable for daily use, such as the personalized menu plans, could help to transfer knowledge on nutrition to the general population. In this way, the validated MoKaRi concept may contribute to the prevention and therapy of cardiovascular diseases. Methods: In line with our power calculation, we will enroll 60 participants and randomly assign them to one of two parallel arms. Each participant will receive personalized menu plans for each day of the study and will be provided with one-on-one nutritional counseling sessions every two weeks for a study period of 20 weeks (140 days). During this period, blood samples will be taken every 14 days (11 time points) and twice during a 20-week follow-up period. Incentives such as a supply of foods approved according to the standards of the study, a sports program, individual feedback on study parameters reflecting health status, and group activities round off the MoKaRi concept. Low-density lipoprotein (LDL) cholesterol is the primary outcome measure of the MoKaRi study, and the secondary endpoints comprise markers of nutrient status (e.g. fatty acid distribution in plasma and erythrocyte lipids), metabolomic profiling, diabetes risk markers, clotting markers, and further cardiovascular risk factors such as blood lipids, homocysteine and high-sensitivity C-reactive protein. The MoKaRi study was registered before launch at ClinicalTrials.gov (identifier NCT02637778; https://clinicaltrials.gov/ct2/show/NCT02637778).
Background
The percentage of deaths worldwide attributed to cardiovascular disease (CVD) has risen from 21% in 2007 to 31% in 2016 [1]. Several risk factors contribute to the earlier onset and accelerated progression of CVD [2]. While gender, age, and genetic predisposition are beyond our control, most risk factors depend at least partly on lifestyle choices. Current CVD prevention guidelines focus on reductions in blood pressure, low-density lipoprotein (LDL) cholesterol, triglycerides, diabetes, weight, smoking, and psychosocial stress, and on improving physical activity. Particular attention has been directed towards correcting unhealthy eating habits, since diet is a significant determinant of this disease [3,4]. This finding is in line with the results of the Global Burden of Disease Study. In 2017, this study attributed 11 million deaths and 255 million disability-adjusted life years (DALYs) to poor diet [5]. On a global and local scale, the leading dietary risk factors for death and loss of DALYs were high intake of sodium and low intake of whole grains and fruits [5].
The traditional Western diet is characterized by a high intake of calories, salt, saturated fat, simple sugars, and refined starch. At the same time, consumption of monounsaturated fatty acids (MUFA) and polyunsaturated fatty acids (PUFA), whole-grain fibers, fish, vegetables, fruits, vitamin D or potassium is inadequate [6][7][8]. Reducing the energy density of the diet and the intake of saturated fatty acids (SFA) plays a crucial role in preventing CVD. Thus, both European and American experts recommend that less than 10% of caloric intake (En%) should be in the form of SFA; for persons with hypercholesterolemia this figure falls to 7% [3,4,9]. Based on the health claims for eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) which have been approved by the European Food Safety Authority (1924/2006 and 432/2012), a regular intake of 2-3 g long-chain n-3 PUFA is associated with a reduction of blood pressure and triglycerides [10,11]. Furthermore, 3 g of EPA and DHA can reduce inflammatory markers, which also contributes to the prevention of CVD [12].
In addition to the unfavorable nutrient profiles of many popular foods, the Western eating culture is characterized by a disregard for natural cues of appetite, hunger, and satiety, an oversupply of food, and a lack of awareness regarding the link between diet and health risks.
Optimizing diet might be an effective means to reduce the risk of CVD [13]. Up to now, most efforts based on guidelines or recommendations from nutritional societies to convince healthy individuals to consume a healthy diet and thereby reduce cardiovascular risk in later years have failed, as CVD remains one of the leading causes of death in Europe and the United States [1,2,14,15]. These efforts have failed in part because they have not focused enough on motivation and on generating recommendations that are suitable for daily use. We, therefore, developed the concept for modulation of cardiovascular risk factors by diet (MoKaRi) to address the challenges presented by adherence to the traditional Western diet.
MoKaRi aims to improve the diet by explicitly addressing the practical day-to-day needs of the participants and by including several strategies to increase motivation.
Rationale and objectives of the MoKaRi trial
The proposed randomized clinical trial is designed to evaluate the effectiveness and potential of the MoKaRi concept (Fig. 1) in adults with increased risk for CVD. Moreover, the study will provide data on the association between the food intake defined by the menu plans and measurable markers reflecting food intake (nutritional biomarkers) as well as cardiovascular biomarkers and risk factors.
Besides the analysis of well-known nutritional biomarkers, such as B vitamins, vitamin D, vitamin E, minerals, and fatty acids, and established cardiovascular risk factors, such as blood lipids, fasting plasma glucose, and inflammation markers, we plan to conduct a metabolomic profiling. The controlled food intake by the menu plans in combination with the metabolomics approach offers the possibility to identify new potential biomarkers reflecting food intake.
The MoKaRi study will allow the identification of variables influencing people's nutritional behavior and offers the possibility to address these variables within the recommendations that will be developed for adults with increased cardiovascular risk.
Detailed issues addressed by the MoKaRi trial
• Evaluating the influence of dietary/lifestyle interventions on cardiovascular risk factors, and achievement of additive effects through additional intervention with long-chain n-3 PUFA

The menu plans are characterized by an optimized intake of SFA, MUFA, and PUFA to reduce cardiovascular risk factors such as total cholesterol and LDL cholesterol [3,4,9]. Based on the health claims authorized for EPA and DHA, a daily intake of 2-3 g long-chain n-3 fatty acids will result in an improvement of further cardiovascular risk factors, such as triglycerides and blood pressure [10,11].
• Identification of time-dependent changes in cardiovascular risk factors
• Evaluation of the time-dependent incorporation of long-chain n-3 PUFA into plasma lipids, lipid fractions (phospholipids, triglycerides, free fatty acids, and cholesterol esters) and erythrocyte lipids
• Identification of variables (e.g. problems, conflicts, incentives, possibilities for motivation) influencing the nutritional behavior of study participants and reasons for failing to comply
• Identification of targets for treatment goals of dietary interventions with a focus on cardiovascular risk
• Derivation of tailor-made, practicable recommendations for the target group

The MoKaRi trial will provide valuable information for planning and implementing a healthy diet. Furthermore, valuable data will be obtained to develop or improve standards for conducting diet-related human intervention studies focusing on biomarkers of nutrition intake, establishing compliance with study interventions, and encouraging observation of the background diet.
Study design
The MoKaRi trial will be conducted as a randomized, single-center intervention study in parallel design (two arms: prepared menu plans with 3 g long-chain n-3 fatty acids from fish oil per day vs prepared menu plans without additional fish oil; Fig. 1). The MoKaRi study will take place from February 2016 to July 2016.
The MoKaRi trial will start with a run-in period of one week to document the dietary habits of the study participants using questionnaires on food preferences and a food frequency protocol (FFP). The FFP consists of a list of foods that are typically consumed in the Western diet and the corresponding portion information. The participants mark each consumed food from the list with a line. Foods and beverages that are not listed can be added. The daily nutrient intake will be calculated for seven days with the software package PRODI® version 6.4 (Nutri-Science, Stuttgart, Germany).
The intervention periods will start with a 24-h urine collection, fasting blood sampling, and an assessment of the health status (i.e., anthropometric data (height, weight, body mass index, and waist circumference), blood pressure, ankle-brachial index, pulse variability, Harvard-step-test, and bioelectric impedance measurement). At this time point, participants will receive their personalized menu plans for the first 14 days of the study as well as information material.
During the 20 weeks of the MoKaRi trial, we will take fasting blood samples and assess health status every 14 days. Also, 14 new menu plans and appropriate amounts of study foods will be handed out at these time points (i.e., 11 times). At the end of the intervention period, participants have to collect 24-h urine once again.
For fasting blood sampling, the 20-week follow-up period will be split into two intervals, i.e., samples will be taken every 10 weeks (Fig. 1). In the follow-up period, the participants can continue to use the menu plans from the intervention period, but the further measures of the MoKaRi concept, such as nutritional counseling, provision of study foods, visualization of cardiovascular risk, feedback, the sports program, and events encouraging group feeling, are not planned in this period.
Over the entire period of the MoKaRi trial, the study participants have to keep a nutrition diary to document (i) their experiences with the menu plans, (ii) changes and deviations from the menu plans, and (iii) their evaluation of all meals.
Setting and population, eligibility criteria and exclusion criteria
The study will be carried out in Jena (Thuringia, Germany) and the surrounding area.
Participants of both sexes aged 20-80 years will be eligible for inclusion in the MoKaRi study. As a further inclusion criterion, only study participants with plasma LDL cholesterol concentrations ≥ 3 mmol/L will be enrolled, after written informed consent.
The study protocol lists the following exclusion criteria:
• Intake of lipid-lowering medications, glucocorticoids, or drugs that interfere with glucose metabolism
• Gastrointestinal diseases, known allergies or food intolerances
• Known familial hypercholesterolemia
• Intake of additional dietary supplements (e.g. fish oil capsules or vitamin E)
• Pregnancy, lactation
• Patient's request, or if patient compliance with the study protocol is doubtful
• Regular abuse of alcohol or drugs
• Body mass index (BMI) ≥ 25 kg/m²

Medication during the trial: Any sporadic or systemic use of medications because of other diseases will be allowed if it does not interfere with the study results. In this context, the regular intake of lipid-lowering drugs, anti-inflammatory drugs, and drugs that interfere with glucose metabolism belongs to the exclusion criteria, but the regular intake of thyroid hormones and the sporadic intake of antibiotics will be allowed. The regular intake of antihypertensive drugs will be allowed if the dosage stays stable over the course of the study. All medications taken will be recorded in the medication diary. The systemic intake of medications owing to other diseases is to remain unchanged over the study period.
The study leader will decide on a case-by-case basis whether the sporadic or systemic use of medications because of other diseases is allowed, depending on whether it interferes with the study results.
Intervention -the MoKaRi concept
The MoKaRi concept is based on a 'cardioprotective diet' characterized by the criteria listed above: a personalized energy supply; an adequate intake of carbohydrates, protein, fat, vitamins, minerals, and trace elements according to the DGE guidelines; a defined fatty acid quality (SFA < 7 En%, MUFA > 10 En%, PUFA approx. 10 En%, long-chain n-3 PUFA ≥ 500 mg per day); a high consumption of vegetables and fruits; and more than 40 g dietary fiber per day.
Half of the participants will consume an additional 3 g long-chain n-3 PUFA (i.e., 3 g EPA and DHA) per day which corresponds to about 10 ml/d fish oil. The dietary regimes of both groups (with and without fish oil) will be isocaloric.
The MoKaRi concept is designed to improve nutritional behavior using the following three approaches: I) personalized menu plans for implementing a cardioprotective diet, II) regular, individual nutritional counseling, and III) incentives. These incentives are (Fig. 2):
i) dissemination of knowledge
ii) the possibility to participate in a sports program adapted to target group-specific needs and personal capacities (cardioprotective circuit training)
iii) provision of selected foods approved by the study guidelines, e.g. extra virgin olive oil, mixed nuts, yoghurt enriched with fish oil, chia seeds, and barley flakes, among others
iv) regular feedback on the course of study parameters (with a focus on cardiovascular risk factors), and
v) activities to encourage group feeling, e.g. engaging in cooking events or the sports program once a week.
Preparation of MoKaRi menu plans
The daily recipes and menu plans will be adapted to the individual energy requirements of each study participant depending on age, gender, and physical activity level (PAL), and on whether they belong to the fish oil group or not. Therefore, the menu plans will be prepared in eight energy categories (1800 kcal, 1900 kcal, 2000 kcal, 2100 kcal, 2400 kcal, 2500 kcal, 2600 kcal, 2700 kcal) and will include detailed information about the type and amount of foods for breakfast, morning snack, lunch, afternoon snack, and dinner (Fig. 3). For each meal, a well-designed, detailed description of the preparation will be provided.
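The protocol does not spell out the formula behind the category assignment; as an illustration only, it could look like the following sketch, which assumes the Mifflin-St Jeor equation for resting energy expenditure (an assumption on our part, not part of the protocol):

```python
# Hypothetical sketch of assigning a participant to a MoKaRi energy category.
# The Mifflin-St Jeor equation is an assumption; the protocol does not name it.

MENU_CATEGORIES_KCAL = [1800, 1900, 2000, 2100, 2400, 2500, 2600, 2700]

def daily_energy_kcal(weight_kg, height_cm, age_y, sex, pal):
    """Total energy requirement = resting energy expenditure x activity level."""
    ree = 10 * weight_kg + 6.25 * height_cm - 5 * age_y + (5 if sex == "m" else -161)
    return ree * pal

def menu_category(requirement_kcal):
    """Pick the menu-plan category closest to the calculated requirement."""
    return min(MENU_CATEGORIES_KCAL, key=lambda cat: abs(cat - requirement_kcal))

# Example matching Fig. 3: a 60-year-old woman, 70 kg, 165 cm, PAL 1.6
# daily_energy_kcal(70, 165, 60, "f", 1.6) -> ~2032 kcal -> category 2000 kcal
```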
The software PRODI version 6.4 for professional dietary counseling and therapy will be used to calculate the nutrient content of recipes, meals, and menu plans. The recipes for each meal originate from cookbooks and personal experience. A unique hallmark of the menu plans will be their balanced nutrient profile. To achieve this, we will add carefully chosen nutrient-rich foods to meet the MoKaRi criteria for daily nutrient intake. For example, we will include fatty cold-water fish to ensure the intake of long-chain n-3 PUFA, vitamin B12, vitamin D, iodine, and protein. The consumption of dairy products will be encouraged to achieve the recommended intake of B vitamins, high-value protein, calcium, and other minerals. Linseed, rapeseed, and olive oils, mixed nuts and seeds will be recommended to meet the criteria for fat quality. Vegetables, selected fruits, nuts, and seeds, and, in particular, pulses such as peas, beans, or lentils, will be included to ensure an optimal intake of vitamins, minerals, trace elements, dietary fiber, and secondary plant metabolites, e.g., carotenoids and polyphenols.
The participants of the MoKaRi trial will receive daily menu plans with optimized nutrient profiles over the entire study period of 20 weeks (14 new plans every two weeks, Fig. 1). An example of the MoKaRi menu plans is shown in Fig. 3.
Nutritional counseling and nutritional-lifestyle coaching
In addition to the daily menu plans for the implementation of the 'cardioprotective diet' over the 140 days of the MoKaRi trial, participants will receive regular individual nutritional counseling every two weeks throughout the full study period (Fig. 1). The study leader will conduct all nutritional counseling sessions. The consultations will be conducted using the motivational interviewing technique [17]. Personal information about the participant's life circumstances as well as needs and possible tips for better implementation of the MoKaRi concept will be discussed. Based on this information, individual solutions for improving compliance will be explored. The consultations will be conducted in a casual environment to improve the well-being of the study participants.

Fig. 3. Example of a menu plan according to the MoKaRi concept. The above daily menu plan is adapted to fit the requirements of a woman aged between 51 and 65 years with a PAL of 1.6, which corresponds to a daily energy intake of 2000 kcal.
The cardiovascular risk profile of each participant will be visualized using a smartphone app developed by Preventicus® (Jena, Germany). With this tool, the participants of the MoKaRi study can track their risk factors (including blood pressure and blood lipids), which are displayed as icons in different colors. These icons change color depending on the individual values in comparison with reference values: from green (normal) through yellow (moderate deviation) to red (substantial deviation; Fig. 4).
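The traffic-light logic can be summarized in a few lines; the cut-offs below are purely illustrative, as the thresholds used by the Preventicus app are not specified in the protocol:

```python
# Minimal sketch of the traffic-light feedback; thresholds are illustrative.
def risk_color(value: float, upper_normal: float, upper_moderate: float) -> str:
    if value <= upper_normal:
        return "green"   # within the reference range
    if value <= upper_moderate:
        return "yellow"  # moderate deviation
    return "red"         # substantial deviation

# Example for LDL cholesterol (mmol/L), assuming 3.0 and 4.1 as cut-offs:
# risk_color(3.4, upper_normal=3.0, upper_moderate=4.1) -> "yellow"
```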
Over the study period, the participants can take part in a weekly sports program. The offer will include moderate circuit training with eight exercises at two intensity levels (1 h or 1.5 h sessions). There will be four groups, with 15 participants per group.
Activities, such as organized cooking events and the sports program will serve the purpose of fostering group feeling and regular exchanges between the study participants and as incentives to improve compliance with the MoKaRi concept.
Primary outcome measures
The primary outcome measure of the MoKaRi study is LDL cholesterol [mmol/L]. All study parameters will be analyzed according to standardized methods, mainly in cooperation with reference laboratories, e.g. the Institute of Clinical Chemistry and Laboratory Diagnostics, University Hospital Jena, Germany.
Secondary outcome measures
Anthropometric measurements will be undertaken by the observers using standardized methods and include height (m), weight (kg), and waist circumference (cm). The parameters will be measured with calibrated instruments (seca, Hamburg, Germany). Height will be measured with a stadiometer to the nearest 0.5 cm without shoes, head upright, and eyes looking straight ahead. Waist circumference (cm) will be measured one finger width above the navel. Body mass index (kg/m²) will be calculated. Arterial blood pressure will be measured with the subject in a sitting position after at least 5 min at rest. Blood pressure measurements will be taken twice on the relaxed right arm and the mean of the two values will be recorded. The ankle-brachial index will be calculated by dividing the systolic blood pressure at the ankle by the systolic blood pressure in the upper arm (brachium). A lower blood pressure in the leg than in the arm suggests the presence of arterial blockage, most likely due to atherosclerotic peripheral artery disease [18].
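The two derived measures described above are simple ratios; for clarity, a minimal sketch:

```python
def ankle_brachial_index(ankle_systolic_mmHg: float, arm_systolic_mmHg: float) -> float:
    """ABI = systolic pressure at the ankle / systolic pressure at the upper arm.
    Values below ~0.9 are commonly taken to suggest peripheral artery disease."""
    return ankle_systolic_mmHg / arm_systolic_mmHg

def body_mass_index(weight_kg: float, height_m: float) -> float:
    """BMI = weight (kg) / height (m) squared."""
    return weight_kg / height_m ** 2

# Examples: ankle_brachial_index(110, 130) -> ~0.85 (suggestive of blockage)
#           body_mass_index(68, 1.70)      -> ~23.5 kg/m^2
```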
The primary and secondary outcome measures will be obtained regularly every two weeks over the intervention period (11 time points) and every ten weeks in the follow-up (two time points, Fig. 1). Blood samples will be collected via venipuncture after an overnight fast (a minimum of 12 h).
Statistical analysis

Primary outcome measure
The primary measure is the difference in plasma LDL cholesterol concentration during the implementation of the MoKaRi concept as estimated using a generalized linear model.
Primary and secondary outcome measures
Variables will be examined for normality using the Shapiro-Wilk test. Levene's test will be used to assess variance homogeneity. Differences in the measured parameters within groups will be analyzed by Student's t-test or the Wilcoxon signed-rank test. Furthermore, repeated-measures one-way ANOVA will be conducted if data are normally distributed. Linear regression models will be used to assess the correlation between continuous variables. P-values < 0.05 (two-tailed) will be considered significant. The analyses will be performed with the statistical program R in its current version.
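The protocol specifies R for these analyses; the following Python/SciPy sketch merely illustrates the decision rule between the parametric and non-parametric within-group tests:

```python
# Illustration of the within-group testing strategy described above,
# using SciPy as a stand-in for the R workflow named in the protocol.
from scipy import stats

def within_group_test(before, after, alpha=0.05):
    """Paired t-test if the paired differences look normal (Shapiro-Wilk),
    otherwise the Wilcoxon signed-rank test."""
    diffs = [a - b for a, b in zip(after, before)]
    if stats.shapiro(diffs).pvalue > alpha:
        return stats.ttest_rel(after, before)
    return stats.wilcoxon(after, before)
```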
Power calculation, randomization, and enrollment
The single-center, randomized, phase II proof-of-concept study will examine the potential of a dietary concept to influence cardiovascular risk factors such as blood lipids, with LDL cholesterol, as the primary outcome. The two-arm parallel design will allow comparison between both groups (Fig. 1).
Jenkins et al. [19] conducted an intervention study with increased intake of plant sterols, vegetable proteins, and fiber. Due to the dietary intervention, the LDL cholesterol concentration was reduced from 3.80 mmol/L (run-in) to 3.01 mmol/L at the end of the treatment period. Based on these data, a group size of 27 subjects has 80% power to detect a difference of 0.7 mmol/L (difference between μ1 = 3.8 mmol/L and μ2 = 3.1 mmol/L), assuming a standard deviation of 0.9 (using a two-sided t-test with a significance level of 0.05). Allowing for a dropout rate of 10%, we aim to enroll 30 participants per group, which will be sufficient to evaluate two-sided standardized differences. The power calculation was performed using nQuery version 7.0 (Statistical Solutions, Boston, U.S.A.). Study participants who meet the eligibility criteria and give their written informed consent will be randomly assigned to one of the two study groups and will receive either menu plans with 10 ml fish oil per day or menu plans without fish oil. Randomization will be conducted with the programming language R (package blockrand, block size 8). Blinding is not possible due to the study design.
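The reported sample size can be reproduced with standard software; the sketch below approximates the nQuery calculation with statsmodels and illustrates permuted-block randomization with the stated block size of 8 (the exact behavior of the R blockrand package may differ):

```python
# Reproducing the power calculation and sketching a permuted-block allocation.
import random
from statsmodels.stats.power import TTestIndPower

effect_size = 0.7 / 0.9  # difference in LDL cholesterol / assumed SD
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05, power=0.80,
                                          alternative="two-sided")
print(round(n_per_group))  # ~27 participants per group

def block_randomize(n_participants: int, block_size: int = 8):
    """Permuted-block allocation to 'fish oil' vs 'control' arms."""
    allocation = []
    while len(allocation) < n_participants:
        block = ["fish oil"] * (block_size // 2) + ["control"] * (block_size // 2)
        random.shuffle(block)
        allocation.extend(block)
    return allocation[:n_participants]
```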
Ethical considerations and dissemination
All participants will be informed about the aims and scope of the MoKaRi trial by the study leader and, if they agree, they will provide written informed consent. All study procedures will be carried out in accordance with the Declaration of Helsinki as revised in 1989 [20]. The ethics committee of the Friedrich Schiller University Jena has approved the study protocol (number 4656-01/16).
The MoKaRi study was registered before launch at ClinicalTrials.gov (identifier NCT02637778).
A manuscript with the results of the primary outcomes will be published in a peer-reviewed journal. Separate manuscripts will be written on the secondary outcomes measures and published in peer-reviewed journals.
Strengths of the MoKaRi trial
Because CVD is the leading cause of death in Europe, the development and sustainable implementation of validated strategies for prevention and support of therapy of CVD is of great importance. Dietary strategies are promising in this context [1,2,15].
Despite the widely known impact of diet on cardiovascular risk, its effects are often underestimated due to the lack of effective clinical trials evaluating the potential of cardioprotective diets. The proposed MoKaRi study aims to address this gap.
The MoKaRi concept is based on three approaches (Fig. 2) that address a wide variety of needs to modulate sustainable nutritional behavior. The three approaches have been developed to motivate participants to improve their nutritional behavior. Because motivating humans to change their behavior is not easy, the MoKaRi concept includes numerous approaches to achieving this ambitious aim.
The first approach includes the provision of suitable recipes, information, and tips on how to apply a cardioprotective diet in a daily routine. There are 140 menu plans with different recipes for each meal of the day considering the seasonal availability of foods and individual energy and nutrient requirements depending on age, gender, and physical activity. These adjustments make each menu plan unique. The variety of available menu plans ensures that each study participant can choose plans according to his/her food preferences. We hypothesize that these features of the MoKaRi concept will have a significant impact on the sustainable implementation of the cardioprotective diet.
The second approach of the MoKaRi strategy is regular and individual nutritional counseling. This method will also strengthen the implementation of the MoKaRi strategy during the trial. The design of the MoKaRi trial facilitates the evaluation of the impact of the counseling units by comparing the course of the study parameters between the intervention period (with menu plans and nutritional counseling) and the follow-up (with menu plans from the intervention period but no nutritional counseling).
Nutritional counseling consultations will be conducted every 14 days at each of the eleven time points by the study leader, who will use motivational interviewing as a technique to influence the study participants' behavior. In particular, the following principles of this technique will be used as part of the MoKaRi concept: (i) express empathy, (ii) maintain a client-centered, accepting attitude, (iii) show sincere interest in the participants and their life circumstances through active or reflective listening, and (iv) develop discrepancy using targeted (open) questions to help the participants develop arguments for self-directed change. If the participants realize that their current behavior conflicts with critical goals (e.g., reduction of cardiovascular risk factors), this will strengthen the willingness to change. Confrontational behaviors will be avoided. Instead, different de-escalating strategies will be used, such as reflection, a shift in focus, or a modification of the original strategy. The counseling sessions will aim to increase self-confidence and self-reliance in the participants to help them achieve the goals of the study. This step is a vital element of motivation, which has generally proven necessary for the success of treatment [17]. Furthermore, the study leader will elicit and selectively reinforce self-motivating attitudes of the participants regarding problem insight, concerns, and willingness to change. At all times, the study leader will convey acceptance of and validation for the participants' thoughts and attitudes and will always communicate that the participants have agency and freedom of choice.
Whereas the nutritional counseling process aims to stimulate intrinsic motivation, incentive theory proposes that people are pulled towards behaviors that offer rewards to support desirable behaviors and avoid those that might lead to negative consequences [21]. Therefore, as a third approach, incentives will be used to further improve compliance with the MoKaRi concept, such as the provision of selected study foods, well-designed menu plans in the form of a cookbook, the possibility to participate in a sports program adapted to target group-specific needs and resources (cardioprotective circuit training) and, in particular, the regular feedback loops with a focus on the achieved changes in cardiovascular risk factors.
Specifically, the regular feedback loops (i) on changes in the study parameters with a focus on cardiovascular risk factors and (ii) on the applicability of the MoKaRi concept in day-to-day routine signal our interest in individual life circumstances and achievements. Moreover, the visualization of changes, e.g., in weight or cardiovascular risk factors, will encourage study participants to continue.
Finally, activities that encourage a group feeling, e.g., cooking events, the sports program, and the MoKaRi closing party, will contribute to the adherence of the study participants to the recommendations. We hypothesize that the study participants will motivate each other, which will have a noticeable impact on sustainable behavioral changes (Fig. 2).
The design of the MoKaRi trial is unique due to the regular blood sampling, which will be conducted every 14 days. This sampling will allow comprehensive monitoring of health status and cardiovascular risk factors during the implementation of the MoKaRi concept, yielding valuable information about time-dependent changes in the study parameters. Moreover, this type of close-knit feedback on the achieved changes will encourage and motivate participants to continue. In the same manner, deviations from the concept (e.g., due to holidays, private parties, or public holidays) will be discussed during the regular talks.
The analysis of the fatty acid distribution in plasma lipids, plasma lipid fractions, and erythrocyte lipids as markers for the intake of dietary fat, as well as the regular analysis of vitamins and minerals over the trial course, provides further markers of compliance with the MoKaRi concept. The ability to monitor compliance is a further essential strength of the MoKaRi study.
Limitations of the MoKaRi trial
Despite the numerous strengths of the study design, there are also several limitations: (i) there will be no control group without menu plans; (ii) variations (e.g., by variety, soil, preparation, and feeding conditions) between the nutrient profiles calculated from nutrition tables and the nutrient composition of the foods actually consumed cannot be ruled out; (iii) the MoKaRi study is designed to evaluate the effectiveness and potential of the developed MoKaRi concept, which comprises different modules (menu plans, nutritional counseling, incentives, moderate circuit training with eight exercises at two levels, 1-1.5 h per week), so the effectiveness of the individual modules cannot be evaluated; (iv) individual compliance with dietary strategies is the most significant limitation, and this uncertainty cannot be fully captured; (v) potential effects of the microbiome, which has a significant impact on physiological markers in response to dietary modifications, will not be considered; (vi) it is not clear whether the implementation of the MoKaRi concept will be sustainable, because we are not able to estimate sustainability without care (i.e., after the follow-up), and the question remains what will happen when motivational strategies and incentives are stopped.
The MoKaRi trial will be conducted with participants at elevated cardiovascular risk. Therefore, awareness of the topic and intrinsic motivation for improving cardiovascular risk factors, in particular, are relatively high. On the other hand, motivating healthy young individuals to consume a healthy diet to reduce their prospective cardiovascular risk at a later, undefined point in time remains challenging.
Options and perspective
For optimizing the study design of dietary interventions, the preparation and supply of study foods may have a high impact on compliance. Therefore, the provision of study foods in an associated study restaurant or by a catering service could be a promising approach for improving compliance, particularly for healthy people. In this way, the design of diet-related studies could be improved, yielding valuable data that give reliable insight into the impact and potential of dietary interventions on health status and disease risk.
In addition, motivational strategies for stimulating intrinsic and extrinsic motivation, as well as client-centered methods of negotiation such as motivational interviewing, are promising for improving nutritional behavior, e.g., through the implementation of the MoKaRi concept.
Conclusions
The MoKaRi trial may provide essential insights into the relationship between diet and health as well as disease status, particularly cardiovascular risk. The results will be of interest to the general population as well as to patients with increased cardiovascular risk, because the implementation of the MoKaRi concept will be possible for everyone. The dissemination of information and practical, ready-to-use tools for day-to-day routine (e.g., menu plans), which are evaluated by the MoKaRi trial, may enable scaling up the transfer of knowledge to the general population. Therefore, the validated MoKaRi concept may contribute to the prevention and the support of therapy of CVD.
Funding
This work was supported by the German Federal Ministry of Research and Education (Competence Cluster for Nutrition and Cardiovascular Health (nutriCARD) Halle-Jena-Leipzig, grant number 01EA1411C).
Contributions
CD is responsible for the conception and trial design and writing the manuscript. SL was involved in the conception of the trial design and drafting the manuscript. PS provided statistical expertise. PC was responsible for linguistic finalization. CD, SL, and PC are responsible for critical revision of the article and contributing intellectual content.
Declaration of competing interest
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Dr. Lorkowski reports personal fees from Akcea Therapeutics, amedes, AMGEN, Berlin-Chemie, Boehringer Ingelheim Pharma, Danone, Daiichi Sankyo, Lilly, MSD Sharp & Dohme, Novo Nordisk Pharma, Roche Pharma, Sanofi-Aventis, Swedish Orphan Biovitrum, Synlab, Unilever, and Upfield, and non-financial support from Preventicus outside the submitted work. | 2021-05-16T05:27:15.745Z | 2021-04-29T00:00:00.000 | {
"year": 2021,
"sha1": "527e3074a9453478e4935fee330ededed118b21a",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.conctc.2021.100761",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "527e3074a9453478e4935fee330ededed118b21a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
261113204 | pes2o/s2orc | v3-fos-license | UNDERSTANDING THE DYNAMICS OF ENTREPRENEURIAL PASSION IN ENTREPRENEURSHIP STUDENTS
The goal of this study is to analyze the shaping of entrepreneurial passion among students in the practice of entrepreneurial learning at a university in Jakarta, Indonesia. This passion is explored through two exogenous variables, namely emotional support and perceived competence. Students of the Management Program at the Faculty of Economics and Business of Universitas Tarumanagara were involved as respondents in this research. Purposive sampling was used to select the sample. Several testing stages were used to analyze the data, including validity, reliability, and structural regression testing, using SmartPLS software. The results show that both exogenous variables significantly affect entrepreneurial passion. Further tests establish good reliability for these constructs, and most indicators are valid. However, some indicators scored low on factor loadings, so three items are not accurate measures of entrepreneurial passion and one item is less useful as an indicator of perceived competence. Based on these results, the entrepreneurial education ecosystem needs to improve these items so that students will better understand the dynamics of entrepreneurial passion, specifically for inventing, founding, and developing their ventures. This is useful as a mechanism for encouraging students' desire for entrepreneurial activity.
INTRODUCTION
In line with the development of entrepreneurship programs at the education level, the initial stage in learning entrepreneurship is to understand how strongly the entrepreneurial spirit is attached to the students involved in the program. As an educated entrepreneur candidate, a student is expected, after completing his or her education, to be able to build a business startup or realize a career as an entrepreneur. Entrepreneurial passion is an individual's spirit in the early stages of building entrepreneurship, so the dynamics of passion in running a business must be understood by entrepreneurship students so that they can avoid failure in running their businesses [1]. Hence, this construct is related to the formation of intentions in entrepreneurship [2][3][4][5]. In Indonesian, the equivalent of the word passion is spirit, so both are used interchangeably to explain passion in this study. To that end, this line of study raises the issue of entrepreneurial passion, as reviewed by previous studies such as [6][7][8][9], to support learning and entrepreneurship education at the university level.
According to [7], passion is described as how persistent a person is in building entrepreneurial activities, or is characterized as the "intense positive feelings" experienced by a person due to involvement in entrepreneurial activities. Spirit in entrepreneurship is considered important because it is related to persistence throughout the process of carrying out entrepreneurial activities, including developing ideas, gathering resources, and managing and operating the business. Thus, it is a central element of the entrepreneurial process that can be transferred to employees [6]. Through these stages, a nascent entrepreneur needs a specific characteristic, namely passion, to become tenacious over time in realizing value that sustains the business and provides benefits to the surrounding environment.
Practically, when someone makes the decision to become an entrepreneur, the biggest challenge is to maintain business commitment. Along the entrepreneurial process, the newcomer often faces challenges and distractions that tempt them to leave the business or move into an easier career path, such as becoming an employee or joining the family business, which does not require hard work like starting one's own business. Therefore, maintaining the choice of being an entrepreneur, or abandoning it, is largely determined by entrepreneurial passion, so this aspect becomes an important theme of this research. By having passion, an entrepreneur has high spirit and tends to dedicate all of their time, energy, and resources to managing a business, compared to those who choose to work for other people. For this reason, it is necessary to identify which internal factors influence entrepreneurial passion. Along with entrepreneurial development, entrepreneurship learning is one approach to encourage student interest in entrepreneurial activities. As one of the entrepreneurial universities in Jakarta, Universitas Tarumanagara is committed to developing entrepreneurship. Since 2009, it has offered a concentration in entrepreneurship in the Management Program, with various facilities to support the entrepreneurship learning process. For approximately eight years, this program has regularly held an entrepreneur week event to support product innovation resulting from student creativity. However, business model ideas have not yet been fully developed into pilot businesses, so it is necessary to analyze the entrepreneurial passion of entrepreneurship students.
Conceptually, studies such as Cardon et al. [7] and Newman et al. [10] note three dimensions of entrepreneurial passion: "passion for inventing, passion for founding, and passion for developing". In detail, "passion for inventing" is a passion for creating a product/service or business opportunity, "passion for founding" is a passion for commercializing and taking advantage of opportunities, while "passion for developing" is a passion for maintaining, growing, and expanding the business. These clusters depict entrepreneurial passion as the heart of the owner's venture [11]. Thus, it is necessary to identify which dimensions are dominant among students and which indicators strongly form these domains, so that the factors influencing student passion for entrepreneurial activities become known.
Therefore, this research on entrepreneurial passion was carried out with entrepreneurship students as respondents, given indications that there are still limitations in following up on development opportunities towards the startup stage. The theme of passion has become an orientation in various studies because it relates to important aspects of entrepreneurship.
Study [11] shows that entrepreneurial passion increases the creativity and persistence of entrepreneurs. This factor is key to generating the thoughts and actions that lead to success [12]. In line with [13], entrepreneurial passion plays an important role: it strongly influences the emergence of interest in entrepreneurship and self-efficacy [2][3][4][5], is associated with persistent behavior [8], impacts financial performance [14], and is relevant to motivation [15]. These researchers pointed out the benefits of passion in entrepreneurship, especially for students who are pursuing entrepreneurship.
Specifically, entrepreneurial passion is related to developing goals and increasing commitment to goals, so that entrepreneurs realize higher business growth in the early stages of business development [16]. Therefore, passion becomes an important foundation in pioneering entrepreneurship and influences entrepreneurial behavior in the process of achieving success. Aligned with government programs for improving people's welfare, this passion is an important factor in fostering public interest in entrepreneurial activities. Based on these reasons, this study places entrepreneurial passion as an important variable and identifies factors affecting this passion.
Jonsson [17] stated that entrepreneurial passion occurs because of emotional support from the immediate environment, such as obtaining precious resources or making social relationships. Moreover, this aid can be realized financially [18], such as by receiving a grant from the government to pioneer entrepreneurship. Individuals who receive such support are more emotionally adept at dealing with adversity than those who lack support. The feeling of having strong support provides confidence to overcome difficulties [19]. Likewise, [13] stated that entrepreneurs who receive emotional support tend to gain emotional well-being, which has a positive impact on increasing perseverance and the ability to absorb knowledge, ultimately leading to their success as entrepreneurs [11], [20]. A previous study [21] proved the same mechanism among business actors, so this relationship is tested here among entrepreneurship students. In line with these studies, the first hypothesis (H1) is formulated with the statement "emotional support is related to entrepreneurial passion". Furthermore, growing entrepreneurial passion is studied through perceptions of self-competence. Perceived competence is important in forming passion, where this perception is obtained through previous experience as a basis for valuable and useful knowledge when evaluating opportunities and believing in recognizing new opportunities [22]. Individuals with a low level of competence will easily feel anxious, frustrated, or apathetic, thereby hindering the experience of entrepreneurial passion [23]. In addition, someone with a low level of competence tends to be less able to show skills, knowledge, or attitudes in problem-solving, so their passion is likely limited in its development [24]. Therefore, perceived competence affects the entrepreneurial spirit; referring to [13] and [21], a significant influence of perceived competence in maintaining passion has been identified. In line with these studies, the second hypothesis (H2) is developed with the statement "perceived competence has a relationship with entrepreneurial passion".
Theoretically, the foundation of this study refers to Bandura's social learning theory, which points out that the learning process occurs through social observation and behavioral imitation. This theory supports the idea that individuals develop confidence in doing something when observing or directly engaging in certain activities [25]. In addition, the study draws on social support theory, originally adopted for maintaining physical health. Social relationships provide a sense of security, respect, care, and assistance from other individuals or groups [26], and thus relate to social bonding. The same link is needed for networking and social integration when building a venture.
Finally, the goal of the study is to understand passion at the level of entrepreneurship students, which provides useful information for management programs running entrepreneurial learning. Using determinants such as emotional support and perceived competence is expected to reveal the mechanism for building entrepreneurial passion as a basis for creating prototypes. This is a contribution of the university to the entrepreneurial development programs held by the government. It aligns with the new program of the Minister of Education and Culture of the Republic of Indonesia, which organized an iconic program, namely "Merdeka Belajar Kampus Merdeka" (abbreviated MBKM), committed to entrepreneurial projects. In the medium term, this result is also relevant to growing the economy and providing decent work for the community, supporting the achievement of the Sustainable Development Goals in 2030, and thus harmonizes with the government nationally and with sustainability goals globally. The results of this study can be used to prepare preconditions for holding entrepreneurial education.
METHODS
The research stages are as follows. The research design uses a quantitative approach accompanied by descriptive techniques to strengthen the analysis of the results. The research population comprises students studying entrepreneurship in the Management Study Program at Universitas Tarumanagara in Jakarta, Indonesia. The sample was selected by purposive sampling, which requires respondents to meet specific criteria. The criteria are students who have taken entrepreneurship courses and are at least in the fourth semester of lectures, so that they have an overview of learning entrepreneurship. Data were collected between April and May 2022 from 100 students.
In line with the research problems, this study involved two variables in predicting entrepreneurial passion. Emotional support was the first exogenous variable, while perceived competence was the second. The measurement of emotional support refers to studies [21], [13], using four indicators. Meanwhile, perceived competence is also based on studies [27], [21], with four indicators. Finally, studies [13], [21], [11] were used to arrange the ten items of entrepreneurial passion, elaborated in Table 1 (source: improved from [21], [13], [11]). These indicators were converted into an instrument using Likert scaling from strongly disagree (1) to strongly agree (5). This instrument was sent to the respondents through an online questionnaire for data gathering.
In the next stages, validity testing was done through factor loading scores, while reliability testing used composite reliability and Cronbach's alpha. Specifically, validity testing is based on the output of convergent validity in the form of the outer loading of each indicator and the Average Variance Extracted (AVE), with a value above 0.5 for each latent variable. According to Garson [28], indicators that produce factor loadings above 0.70 are declared valid.
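The reliability and validity statistics named here are simple functions of the loadings and item scores; the sketch below shows one way to compute them (the formulas are the standard ones for reflective constructs, and the example loadings are hypothetical, not the study's values):

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum)^2 + sum of error variances),
    assuming standardized indicators so error variance = 1 - loading^2."""
    lam = np.asarray(loadings, dtype=float)
    errors = 1.0 - lam ** 2
    return lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings):
    """AVE: mean of squared standardized loadings; should exceed 0.5."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

def cronbach_alpha(items):
    """items: respondents x indicators matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical loadings for a four-indicator construct; the 0.55 item
# would fail the 0.70 rule of thumb cited from Garson [28]
loadings = [0.82, 0.55, 0.78, 0.74]
print(round(composite_reliability(loadings), 3))
print(round(average_variance_extracted(loadings), 3))
```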
Moreover, the analysis uses structural regression run with SmartPLS. Hypothesis testing uses a significance level of 5 percent. The results can direct government and educational institutions to collaborate in improving passion among students.
Respondent Profiles
The profile of respondents is as follows. From the information in Table 2, it can be seen that the respondents are interested in several business fields, with the majority in the culinary business, followed by fashion. The largest share of respondents were female students at 54 percent, while male students accounted for 46 percent. By concentration, the largest number of respondents came from entrepreneurship at 38 percent, followed by human resources management at 29 percent; the others are divided between financial management and marketing management.
The Validity and Reliability Testing
The results show that these scores meet the criteria: each composite reliability score is greater than 0.80, while Cronbach's alpha is above 0.70, so the constructs are declared reliable. Only the second variable yields a lower Cronbach's alpha value, but it produces a high composite reliability value and is therefore still declared reliable. These results are listed in Table 3.
Convergent validity testing shows that most indicators have an outer loading value above 0.70; nevertheless, four indicators (PC2, EP1, EP5, and EP7) produce outer loading scores of less than 0.60. The Average Variance Extracted is above 0.5 for each latent variable. Based on these results, it can be concluded that the data collected meet the test criteria.
Hypothesis Testing
The resulting R² value of 0.615 indicates that 61.50 percent of the variance in entrepreneurial passion can be explained by the two exogenous variables, while the remaining 38.50 percent is explained by other variables. The test of predictive relevance (Q²) illustrates how well the empirically collected data can be reconstructed by the model visualized in SmartPLS. This measurement is suitable for endogenous constructs with reflective measurements. The resulting Q² of 0.344 means that the observed values can be reconstructed properly. The goodness of fit (GoF) is 0.6154, above 0.36, so it is declared good [29]. Table 3 shows that the exogenous variables have a significant positive impact on entrepreneurial passion. Both produce t-statistics above 1.96 with p-values below 5 percent, so the influence of both is accepted.
Statistically, the path coefficient shows that emotional support influences entrepreneurial passion at 0.567, with a t-statistic of 5.260 and a p-value of 0.000. This result indicates that the first hypothesis is accepted, confirming the significant impact of emotional support on entrepreneurial passion. The second hypothesis is also not rejected, proving the significant influence of perceived competence on this passion at 0.310, supported by a t-statistic of 3.432 with a p-value of 0.001. Both variables produce t-statistics above 1.96 and p-values below 5 percent, so the effects are considered significant at the 5 percent level.
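As a check on the reported fit statistics, the global goodness of fit for PLS path models is usually computed as the square root of the product of the mean AVE and the mean R²; a small sketch follows (the mean AVE is back-solved from the reported GoF, so the numbers are illustrative rather than taken from the paper):

```python
import math

def goodness_of_fit(mean_ave: float, mean_r2: float) -> float:
    """Global GoF for PLS path models: sqrt(mean AVE * mean R^2)."""
    return math.sqrt(mean_ave * mean_r2)

def is_significant(t_stat: float, p_value: float, alpha: float = 0.05) -> bool:
    """Two-tailed decision rule used in the paper."""
    return t_stat > 1.96 and p_value < alpha

# mean AVE of ~0.616 back-solved from GoF = 0.6154 and R^2 = 0.615
print(round(goodness_of_fit(0.616, 0.615), 3))  # ~0.616
print(is_significant(5.260, 0.000))             # H1: True
print(is_significant(3.432, 0.001))             # H2: True
```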
Discussion
As previously explained, [7] highlighted three stages of entrepreneurial passion: inventing, founding, and developing. The instrument contains two aspects, namely intense positive feeling and identity centrality [11]. These are important for students to understand in order to maintain the dynamics of their passion in running a business. A comparison of Tables 1 and 3 reveals differences in the test results. Three indicators, EP1, EP5, and EP7, were removed from the instrument of entrepreneurial passion because these items have the lowest outer loading scores, below 0.60; therefore, they are not listed in Table 3. These indicators relate to efforts to commercialize new ways, the desire to start a new business, and the spirit of nurturing new businesses to achieve success. Therefore, in agreement with [1], as nascent entrepreneurs, students need to foster these domains to prepare for their ventures in the future.
As a second comparison, a previous study [21] reported a composite reliability of 0.876 and a Cronbach's alpha of 0.825, with a data collection period of October-November 2020 and respondents who were new business owners in Jakarta. The data in the current study perform better, producing a higher level of reliability. Likewise, the previous instrument retained five indicators, while this study retains seven items of entrepreneurial passion. This result suggests that educated entrepreneurs have good conceptual skills and are therefore better able to understand the instructions in the questionnaire.
As seen in Table 3, PC2 was removed from the instrument. This second indicator stated: "I estimate financial statements, e.g., income statement, cash flow, and break-even analysis". It represents an understanding of entrepreneurial finance, so the experience or knowledge of finance in managing ventures has not been evenly perceived by respondents. Thus, entrepreneurial finance knowledge needs to be improved as a driver for entrepreneurship students in carrying out the stages of the entrepreneurial process. Emotional support also extends to seed financing support to start entrepreneurial activities. Funding support can come from investors, parents, and friends, including personal savings as seed financing. As the venture develops, funding can be provided by the government, banks, or other financial institutions. Thus, the first hypothesis is consistent with [13]: receiving emotional support increases persistence and thereby increases entrepreneurial passion. It increases a person's emotional well-being and leads to successful entrepreneurial activity [20]. [30] noted that, through emotional support, entrepreneurs are more confident in running a business, because it provides positive emotions that make it easier to analyze opportunities.
This pattern is consistent with social support theory, which frames social support as providing a sense of security, respect, care, and assistance from other individuals or groups [26], so that it is relevant to social networking and social integration. According to [31], emotional support affects the welfare of business actors. The existence of financial support increases motivation and the intention to start a business, and grows entrepreneurial passion. Seed financing assistance has a positive impact on entrepreneurial passion and also triggers enthusiasm for entrepreneurship [21].
Perceived competence is rooted in previous experience. Study [22] notes that previous work experience affects the belief in identifying opportunities. This is supported by [13]: task-related competence shapes confidence in seizing opportunities, and previous experiences also form entrepreneurial passion. This belief produces positive feelings in entrepreneurs and thereby increases their passion for entrepreneurship. If an entrepreneur has a positive experience such as success, it will give positive emotions and the belief that there are opportunities for further activities. Conversely, when experiencing bad moments such as bankruptcy or failure, entrepreneurial passion tends to decrease. Thus, competence based on experience influences the dynamics of entrepreneurial passion. This perception gives a positive effect, so competencies related to previous tasks, experiences, or jobs are a driving factor in the formation of an entrepreneurial spirit [21].
This relationship aligns with Bandura's social learning theory, which points out that the learning process occurs through social observation and imitation of behavior, in which individuals develop confidence in doing something when observing or directly engaging in certain activities [25]. Study [12] stated that, in an academic setting, entrepreneurial passion can mediate between entrepreneurial personality and entrepreneurial behavior. Experience is a mechanism that forms passion in entrepreneurship, so in educational practice, students must be given more opportunities to gain this experience.
For instance, to involve students in such experience, the institution holds an event called entrepreneur week, which gives students a chance to showcase entrepreneurial projects. Since 2015, this event has been held in several trade centers, so that students can exhibit innovations to visitors and work closely with stakeholders to follow up on their projects.
Figure 1. One of the Entrepreneur Week Events
As shown in Fig. 1, several student projects are displayed, such as batik fashion, baby chairs, fresh vegetables, snacks, and others. In these events, a passion for inventing begins to form, so it must be followed up through a passion for founding and developing, with the support of stakeholders. This event is still being held today through virtual exhibitions.
Based on these reasons, improving passion among students can be achieved through collaboration with other parties in MBKM projects. Best practices can be carried out through many activities, such as industrial internships, building entrepreneurial activity, independent projects, conducting research, humanitarian activities, and activities with communities in rural areas. Collaboration in this program provides networking opportunities to meet many users, e.g., the market, investors, funding institutions, business angels, or government institutions, who can help maintain the venture and continue to the next stage, or at least the startup stage. If this collaboration is practiced properly, it will increase competence and become a support system to grow, establish, and develop student ventures. Therefore, it is time to accelerate the adjustment of the learning curriculum in line with MBKM. The education program prepares educated entrepreneurs so that, through entrepreneurship, universities contribute to improving welfare and decent work opportunities for the community. In the context of sustainability, this means supporting the achievement of the Sustainable Development Goals in 2030.
CONCLUSION
The results conclude that emotional support and perceived competence have a significant positive effect on entrepreneurial passion in students who receive learning about venture development. However, three invalid indicators have been identified, especially among the passion for founding and developing indicators, so attention must be paid to ensuring that students are able to maintain the dynamics of entrepreneurial passion in developing their ventures. Thus, collaboration through MBKM can serve as a best practice in building an entrepreneurial ecosystem in universities. The limitation of this study lies in the modeling, which does not treat passion dimensionally; further studies can place entrepreneurial passion in a second-order model with dimensions including a passion for inventing, founding, and developing, to identify correctly which dimensions still need to be encouraged and which can be maintained in the entrepreneurship education system.
"year": 2023,
"sha1": "6cfe6070a281b93bea46d5a352fa3ae90cc58adb",
"oa_license": "CCBYNCSA",
"oa_url": "https://journal.untar.ac.id/index.php/ijaeb/article/download/25607/15368",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "fd6e4cb6bdf8c2671db05296c212fc5879129983",
"s2fieldsofstudy": [
"Business",
"Education"
],
"extfieldsofstudy": []
} |
248399885 | pes2o/s2orc | v3-fos-license | Optical Coherence Tomography Evaluation of Carotid Artery Stenosis and Stenting in Patients With Previous Cervical Radiotherapy
Objectives Cervical radiotherapy can lead to accelerated carotid artery stenosis, increased incidence of stroke, and a higher rate of in-stent restenosis in irradiated patients. Our objective was to reveal the morphological characteristics of radiation-induced carotid stenosis (RICS) and the stent–vessel interactions in patients with previous cervical radiotherapy by optical coherence tomography (OCT). Materials and Methods Between November 2017 and March 2019, five patients with a history of cervical radiotherapy were diagnosed with severe carotid artery stenosis and underwent carotid artery stenting (CAS). OCT was conducted before and immediately after the carotid stent implantation. Two patients received OCT evaluation of carotid stenting at 6- or 13-month follow-up. Results The tumor types indicating cervical radiotherapy were nasopharyngeal carcinoma (n = 3), cervical esophageal carcinoma (n = 1), and cervical lymphoma (n = 1). The median interval from the radiotherapy to the diagnosis of RICS was 8 years (range 4–36 years). Lesion characteristics of RICS were detected with heterogeneous signal-rich tissue, dissection, and advanced atherosclerosis upon OCT evaluation. Post-interventional OCT revealed 18.2–57.1% tissue protrusion and 3.3–13.8% stent strut malapposition. Follow-up OCT detected homogeneous signal-rich neointima and signal-poor regions around stent struts. In the patient with high rates of tissue protrusion and stent strut malapposition, the 6-month neointima burden reached 48.9% and microvessels were detected. Conclusion The morphological features of RICS were heterogeneous, including heterogeneous signal-rich tissue, dissection, and advanced atherosclerosis. Stenting was successful in all 5 patients with severe RICS. One patient, with high rates of tissue protrusion and stent strut malapposition immediately after stenting, developed in-stent neointimal hyperplasia at the 6-month follow-up.
INTRODUCTION
Radiotherapy is an effective treatment for head and neck cancer and dramatically extends life expectancy. However, a major issue is that radiation-induced cerebral vasculopathy, such as cerebrovascular stenosis or occlusion, may occur during long-term follow-up (Twitchell et al., 2018). It is reported that the cumulative incidence of moderate (>50%) carotid stenosis in the first, second, and third years following head and neck radiotherapy is 4, 12, and 21%, respectively (Texakalidis et al., 2020). The mechanism of radiation-induced carotid stenosis (RICS) remains undetermined in the literature. Relevant hypotheses include radiation insult to the intima-media (accelerated atherosclerosis) and injury to the vasa vasorum in the adventitia (a distinct disease entity) (Plummer et al., 2011). To date, the histopathological findings of RICS, based on small case series, are varied, including ulcerating atherosclerosis with calcification, periarterial fibrosis, damage to the vasa vasorum, necrotizing vasculitis, thrombosis, and transmural fibrosis (Levinson et al., 1973; Atkinson et al., 1989; Zidar et al., 1997; Tonomura et al., 2018).
Furthermore, the incidence of stroke increases significantly after cervical radiation, with the relative risk of stroke reaching 5.6 in irradiated patients (Gujral et al., 2014). To reduce the risk of stroke, carotid artery stenting (CAS) is performed in patients with severe RICS. Nevertheless, the long-term outcomes after CAS show a markedly higher rate of in-stent restenosis in patients with previous radiotherapy, reaching 25.7% in 2 years (Yu et al., 2014). To the best of our knowledge, the stent-vessel relationship immediately after stenting and the pattern of in-stent neointimal hyperplasia in patients with RICS have not been studied yet. Overall, the morphological features of RICS and the stent-vessel interactions in patients with previous cervical radiotherapy remain to be examined. With the help of the intravascular optical coherence tomography (OCT) technique, we may achieve that aim.
Optical coherence tomography is a vessel wall imaging technique with the highest resolution (10-20 µm) and has been known as an optical biopsy technique (Tearney et al., 2012). It has been applied to evaluate carotid atherosclerotic plaque and stent-vessel interactions since 2010 (Yoshimura et al., 2010, 2011; Jones et al., 2012, 2014; Setacci et al., 2012; Given et al., 2013; Liu et al., 2015, 2019; Funatsu et al., 2020; Shi et al., 2020). The purpose of this preliminary study was to apply OCT to reveal the morphological characteristics of RICS and the stent-vessel interactions in patients with previous cervical radiotherapy.
Study Design and Patient Selection
Between November 2017 and March 2019, five patients (≥18 years old) with severe internal carotid artery (ICA) stenosis (70-90% diameter reduction) detected by digital subtraction angiography (DSA) and previous cervical radiotherapy were enrolled for pre-interventional OCT evaluation. Immediately after stenting, 4 patients underwent OCT evaluation; one patient did not receive post-interventional OCT evaluation because of vasospasm and blockage of the cerebral protection device. Only two patients underwent 6- or 13-month follow-up OCT evaluation, since the other three patients refused another OCT evaluation due to the relatively expensive fee and the fear of an invasive examination. Patients were from Jinling Hospital in Nanjing, China. The study protocol was approved by the hospital's ethics committee. Written informed consent was obtained from all patients. The relevant clinical and radiologic data were reviewed.
Optical Coherence Tomography Image Acquisition and Analysis
The intravascular frequency-domain OCT was advanced through the guide catheter and positioned in the ICA C1 segment. An OCT imaging catheter was then carefully advanced over the guidewire of the cerebral protection device and navigated through the carotid artery lesion. During the blood clearance by the automatic injection of 20 ml of undiluted iodixanol 320 (GE Healthcare Ireland Limited, County Cork, Ireland) at a velocity of 10 ml/s through the 8F guide catheter, the light mirror of the OCT imaging catheter was helically pulled back (18 or 36 mm/s) to obtain a series of cross-sectional OCT images of the vessel wall (Jones et al., 2014; Liu et al., 2015; Shi et al., 2020). OCT image acquisition was repeated immediately after stenting and during the follow-up DSA examination. The OCT imaging catheter was introduced into the ICA over a 0.014-inch microwire during the follow-up DSA examination.
Optical coherence tomography images were analyzed independently by two investigators (XX and FH) with extensive experience in reviewing OCT images. OCT images within 10 mm distal and proximal to the minimum lumen were assessed. An image was considered non-analyzable if the assessment of a continuous 270° arc was impaired by intraluminal blood (Jones et al., 2014). Before quantitative measurement, manual OCT calibration was performed along the entire pullback. Cross-sectional OCT images were analyzed at 0.1- or 0.2-mm intervals. The rates of tissue protrusion and stent strut malapposition were analyzed at 1-mm intervals (de Donato et al., 2013; Liu et al., 2015). The rates of tissue protrusion were calculated in a slice-based analysis, and the rates of stent strut malapposition in a strut-based analysis.
Qualitative and quantitative OCT evaluations were performed based on previously published criteria (Prati et al., 2010; Tearney et al., 2012). Lipid-rich plaque was defined as plaque with lipid content occupying more than one quadrant (Tearney et al., 2012). A calcific nodule was defined as single or multiple regions of calcium protruding into the lumen (Tearney et al., 2012). A thin fibrous cap was present if the fibrous cap thickness was less than 65 µm (Tearney et al., 2012). Plaque rupture was defined as discontinuity of the fibrous cap and cavity formation, with or without a superimposed thrombus (Prati et al., 2010; Tearney et al., 2012). A thrombus was defined as a mass (≥250 µm) attached to the luminal surface or floating within the lumen (Jang et al., 2005; Tearney et al., 2012). Microvessels or neovascularization were defined as signal-voiding tubular structures (50-300 µm) present on at least three consecutive cross-sectional frames (Takano et al., 2009; Prati et al., 2010; Li et al., 2020). Dissection was defined as the presence of an intimomedial flap producing a double-lumen or an intramural hematoma formation (Alfonso et al., 2012). Intramural hematoma was defined as relatively homogeneous signal-rich material with variable attenuation separating the intima from the outer vessel wall (Alfonso et al., 2012). Tissue protrusion was defined as the prolapse of tissue into the lumen between adjacent stent struts (Tearney et al., 2012). It was divided into 3 groups, namely smooth protrusion, irregular protrusion, and protrusion with attenuation (Funatsu et al., 2020). The distance from the surface of the stent strut to the lumen contour was measured. A stent strut was classified as "malapposed" (distance > 200 µm), "well apposed" (distance 10-200 µm), or "embedded" (distance < 10 µm) (de Donato et al., 2013; Liu et al., 2015). The follow-up OCT image with the minimum lumen was selected to measure the neointima burden, calculated as (stent area − lumen area)/stent area × 100% (Tearney et al., 2012).
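The strut classification and neointima burden defined above translate directly into code; a minimal sketch follows (the function names and sample measurements are ours, for illustration only):

```python
def classify_strut(distance_um: float) -> str:
    """Classify a stent strut by its distance to the lumen contour (µm)."""
    if distance_um > 200:
        return "malapposed"
    if distance_um >= 10:
        return "well apposed"
    return "embedded"

def malapposition_rate(strut_distances_um) -> float:
    """Strut-based malapposition rate (%) over all analyzed struts."""
    n_mal = sum(d > 200 for d in strut_distances_um)
    return 100.0 * n_mal / len(strut_distances_um)

def neointima_burden(stent_area_mm2: float, lumen_area_mm2: float) -> float:
    """Neointima burden (%) = (stent area - lumen area) / stent area * 100."""
    return (stent_area_mm2 - lumen_area_mm2) / stent_area_mm2 * 100.0

# Hypothetical measurements for illustration only
print(classify_strut(250.0))                    # 'malapposed'
print(malapposition_rate([5, 120, 250, 90]))    # 25.0
print(round(neointima_burden(20.0, 10.2), 1))   # 49.0, cf. the 48.9% case
```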
Carotid Artery Stenting Procedure and Quantitative Angiography Analysis
The CAS procedure was performed via the femoral approach using guide catheters. All patients underwent systemic anticoagulation with 4,000 IU unfractionated heparin after femoral artery puncture and an additional 2,000 IU/h. Pre-dilatation of the carotid lesion with a 4.0-5.0-mm balloon catheter was performed after OCT image acquisition. Stenting was then performed with one of the following open-cell stents: Acculink (Abbott Vascular, California, United States), Precise (Cordis, Florida, United States), and Protégé (Medtronic, Minnesota, United States). The selection of stent diameter and length was decided by our experienced interventional neurologists according to the lesion features. Post-dilatation was performed with a 5.0-6.0-mm balloon catheter when the remaining stenosis was >30%.
The degree of diameter stenosis, residual stenosis, and in-stent stenosis was calculated according to the North American Symptomatic Carotid Endarterectomy Trial (NASCET) criteria (Barnett et al., 1998). Technical success was defined as residual stenosis <30% on DSA. In-stent restenosis was defined as ≥50% stenosis within or at the edge of the stent. Angiograms and OCT images were co-registered on the basis of landmarks, such as the bifurcation of the CCA and the stent edge, to ensure that they were at identical sites.
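A small sketch of the NASCET calculation and the dichotomous outcome definitions used here (the diameters in the example are hypothetical):

```python
def nascet_stenosis(min_lumen_mm: float, distal_normal_mm: float) -> float:
    """NASCET percent diameter stenosis:
    (1 - minimal lumen diameter / normal distal ICA diameter) * 100."""
    return (1.0 - min_lumen_mm / distal_normal_mm) * 100.0

def technical_success(residual_stenosis_pct: float) -> bool:
    """Technical success: residual stenosis < 30% on DSA."""
    return residual_stenosis_pct < 30.0

def in_stent_restenosis(stenosis_pct: float) -> bool:
    """In-stent restenosis: >= 50% stenosis within or at the stent edge."""
    return stenosis_pct >= 50.0

# Hypothetical diameters for illustration only
pre = nascet_stenosis(1.5, 5.0)     # 70.0% pre-interventional stenosis
post = nascet_stenosis(4.0, 5.0)    # 20.0% residual stenosis
print(pre, technical_success(post), in_stent_restenosis(post))
```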
Patient Characteristics
A total of 5 male patients [mean (standard deviation) age, 64.2 (8.5) years] with previous cervical radiotherapy underwent OCT evaluation of carotid artery stenosis successfully (Table 1). The tumor types indicating cervical radiotherapy were nasopharyngeal carcinoma (NPC) (n = 3, 60%), cervical esophageal carcinoma (n = 1, 20%), and cervical lymphoma (n = 1, 20%). The radiation dose was 70 Gy in three patients. The median interval from the radiotherapy to the diagnosis of RICS was 8 years (range 4-36 years). Interestingly, all patients had no more than one risk factor for atherosclerosis, but 60% of them had severe bilateral carotid disease (70-100% diameter reduction). Except for one patient diagnosed with severe right coronary artery stenosis, all patients were free of coronary artery disease and peripheral artery disease.
Digital Subtraction Angiography and Optical Coherence Tomography Analysis
As for the evaluated carotid artery lesions, the mean degree of diameter stenosis was 76% (range 70-90%) on DSA, and the mean minimum lumen area was 3.10 mm² (range 1.57-5.23 mm²) on OCT (Table 2). The morphological characteristics of RICS in the three NPC patients were different (Cases 1-3). In the NPC patient with a 4-year interval from radiotherapy, heterogeneous signal-rich tissue was observed in 30 consecutive frames (6 mm) and occupied almost three-quarters of the vessel at the minimum lumen (Case 1). Atypical lesions such as dissection in the ICA C1 segment were detected in the other two NPC patients with longer time intervals (Cases 2 and 3). In the NPC patient with an 8-year interval, multiple cavity formations were present besides the dissection (Case 2). In the NPC patient with a 36-year interval, advanced atherosclerosis such as ruptured lipid-rich plaque, ruptured calcific nodule, and thrombosis coexisted with the dissection (Case 3). The morphological characteristics of RICS in the patients with esophageal carcinoma or lymphoma were similar to atherosclerosis (Cases 4 and 5). Lipid-rich plaque was present in both cases. Neovascularization and thrombosis were revealed in the patient with the longer time interval (Case 5). All patients received balloon pre-dilatation and carotid stenting. The rate of technical success was 100% on DSA. Four patients underwent OCT examination immediately after stenting to evaluate the stent-vessel relationship in RICS patients (Cases 1, 2, 4, and 5). A total of 67 cross-sectional OCT images [mean (standard deviation), 17 (5)] were analyzed to assess the rates of tissue protrusion and stent strut malapposition. The mean rates of tissue protrusion and stent strut malapposition were 34.9% (range 18.2-57.1%) and 7.0% (range 3.3-13.8%), respectively. The highest rates of tissue protrusion and stent strut malapposition were found in Case 4. Most of the tissue protrusion was smooth tissue protrusion. Small dissections sometimes appeared (Cases 2 and 4). Two patients underwent follow-up OCT examination to evaluate the pattern of neointimal hyperplasia in RICS patients (Cases 3 and 4). There were homogeneous signal-rich neointima and signal-poor regions around the stent struts in both follow-up OCT images. The neointima burden of Cases 3 and 4 was 31.6 and 48.9%, respectively. Microvessels were observed in the thicker 6-month neointima of Case 4. Detailed data are summarized in Table 2.
Case 1
The patient had a 4-year history of NPC treated with cervical radiotherapy (70 Gy) and chemotherapy (cisplatin, docetaxel, and bevacizumab). He presented with dizziness for 2 months. DSA showed 70% stenosis at the left ICA (LICA) sinus (Figure 1A). The patient underwent balloon pre-dilatation (5 × 30 mm) and stent implantation (Acculink, 7-10 × 40 mm). The residual stenosis was 20% (Figure 1B). The 4-month follow-up DSA detected no in-stent restenosis (Figure 1C). Pre-interventional OCT examination of the distal lesion with mild stenosis revealed macrophage accumulations at 2 o'clock, and there was no clear three-layered vessel structure (Figure 1D). More proximally, nearly half of the vessel was occupied by heterogeneous signal-rich tissue located close to the luminal surface (Figure 1E). There was a clear demarcation between the tissue and the outer fibrous tissue. Moreover, there was a microvessel at 12 o'clock (Figure 1E). Nearly three-quarters of the vessel was occupied by the heterogeneous tissue at the minimum lumen, and irregular signal-poor tissue accumulated at 5-7 o'clock (Figures 1F,G). At the proximal site of the lesion, there were multiple layers of heterogeneous signal-rich tissue, forming an onion-like structure (Figure 1H). The post-interventional OCT evaluation revealed excellent stent strut apposition and smooth tissue protrusion at the image with the minimum lumen (Figure 1I).
Case 2
The patient had an 8-year history of NPC treated with cervical radiotherapy (70 Gy) and chemotherapy. He presented with paroxysmal vertigo for 3 days. Magnetic resonance imaging (MRI) revealed acute cerebral infarction in the right frontal lobe, occipital lobe, and semioval center. DSA showed 80% stenosis with an irregular surface at the LICA C1 segment (Figure 2A) and occlusion at the left external carotid artery origin. The patient underwent balloon pre-dilatation (4 × 30 mm) and stent implantation (Precise, 8 × 40 mm) at the LICA lesion. The residual stenosis was 20% (Figure 2B). Pre-interventional OCT examination of the distal lesion revealed crescent-shaped material separating the intima from the outer vessel wall at 5-9 o'clock (Figure 2C). More proximally, nearly half of the vessel was occupied by relatively homogeneous signal-rich material with variable attenuation that separated the intima from the outer vessel wall, indicating intramural hematoma (Figure 2D). The intramural hematoma expanded to almost three-quarters of the vessel at the minimum lumen (Figure 2E). In addition, the intimal tear at 5 o'clock, the floating intimomedial flap, and the double-lumen formation were well visualized by OCT (Figures 2F,G), demonstrating dissection. Multiple cavity formations and macrophage accumulations were detected at the proximal normal-appearing vessel (Figure 2H). The post-interventional OCT evaluation disclosed excellent stent strut apposition, smooth tissue protrusion, and a residual small dissection (Figures 2I-K). The dissection corresponded to the intimal tear mentioned above (Figure 2J).
Case 3
The patient had a 36-year history of NPC treated with cervical radiotherapy. He presented with recurrent weakness and numbness of the right limb and slurred speech for 1 month. MRI revealed acute cerebral infarction in the bilateral frontal and parietal lobes. DSA showed 90% stenosis with an irregular surface at the LICA C1 segment (Figure 3A) and occlusion at both the left external carotid artery and the right CCA (RCCA) origin. Balloon pre-dilatation (4 × 30 mm), stent implantation (Acculink, 9 × 40 mm), and balloon post-dilatation (5 × 20 mm) were performed at the LICA lesion. The residual stenosis was 20% (Figure 3B). The 13-month follow-up DSA showed no in-stent restenosis (Figure 3C). Pre-interventional OCT examination of the lesion revealed crescent-shaped material separating the intima and the outer vessel wall at 9-1 o'clock, indicating intramural hematoma (Figure 3D). In addition, an intraluminal thrombus was observed, shadowing the underlying vessel wall (Figure 3D).
At the minimum lumen, there were a ruptured lipid-rich plaque and a ruptured calcific nodule with an overlying thrombus next to the intramural hematoma (Figures 3E,F). More proximally, the double-lumen with a fenestration communicating between the true lumen and the false lumen was identified as the sign of dissection (Figures 3G,H). In addition, the calcification with overlying thrombus (Figures 3G,H) and the ruptured calcific nodule mentioned above were continuous. The length of the calcification reached 6.8 mm. Cholesterol crystals and cavity formations due to intimal disruption were frequently detected at the proximal site of the lesion (Figure 3I). The 13-month follow-up OCT examination revealed mild in-stent neointimal hyperplasia (Figures 3J-L). The neointima was homogeneous signal-rich, and there were signal-poor regions near the stent struts (Figures 3J,K). Some struts in the cavity mentioned above were not covered by the neointima (Figure 3L).
Case 4
The patient had a 4-year history of cervical esophageal carcinoma treated with cervical radiotherapy (70 Gy) and chemotherapy (nedaplatin, docetaxel, and raltitrexed). He presented with slurred speech and paroxysmal unconsciousness for 3 days. MRI revealed acute cerebral infarction in the left frontal and temporal lobes. DSA showed a long lesion with 70% stenosis at both the right ICA (RICA) sinus and C1 segment (Figure 4A) and occlusion at the LICA sinus. The patient underwent balloon pre-dilatation (5 × 30 mm) and stent implantation (Protégé, 8 × 60 mm) at the RICA lesion. The residual stenosis was 20% (Figure 4B). The 6-month follow-up DSA showed no in-stent restenosis (Figure 4C). Pre-interventional OCT examination of the lesion disclosed focal signal-poor tissue in the fibrous tissue (Figure 4D). More proximally, there was a lipid-rich plaque with a thick fibrous cap at 6-10 o'clock (Figure 4E). At the minimum lumen, the intima was thickened with mainly fibrous tissue, and linear signal-rich cholesterol crystals were detected at 6 o'clock (Figure 4F). The cholesterol crystals were detected in 10 adjacent cross sections. The post-interventional OCT evaluation disclosed a small dissection, smooth tissue protrusion, and irregular tissue protrusion at the stented segment (Figures 4G-I). In addition, its rates of tissue protrusion and stent strut malapposition were the highest among the four patients. The 6-month follow-up OCT examination revealed evident in-stent neointimal hyperplasia, signal-poor regions around stent struts, and neovascularization (Figures 4J-L). Stent struts were all covered by the neointima, and the small dissection had healed (Figure 4J). There were some microvessels near the signal-poor regions (Figure 4K). The neointima at the minimum lumen was fibrotic and homogeneously signal-rich (Figure 4L).
Case 5
The patient had a 15-year history of cervical lymphoma treated with cervical radiotherapy and chemotherapy and presented with recurrent vision loss in the right eye for more than 1 year and paroxysmal vertigo for 2 months. DSA showed 70% stenosis at the bifurcation of the left CCA (LCCA) (Figure 5A) and occlusion at the RCCA origin. Balloon pre-dilatation (4 × 30 mm), stent implantation (Precise, 8 × 40 mm), and balloon post-dilatation (6 × 30 mm) were performed at the LCCA lesion. The residual stenosis was 20% (Figure 5B). Pre-interventional OCT examination of the lesion revealed an intraluminal thrombus floating at the 12-3 o'clock position (Figure 5C). There was a fibrous plaque with intraluminal thrombus at the minimum lumen (Figure 5D). A lipid-rich plaque with macrophage accumulations was visualized at the proximal site of the lesion (Figures 5E,F). In addition, there was a microvessel connecting with the edge of the lipid plaque. More proximally, the lipid plaque with neovascularization was still present (Figure 5G). The length of the lipid plaque reached 5.5 mm. The post-interventional OCT evaluation disclosed good stent strut apposition, although some of the vessel wall was outside the imaging range (Figure 5H).
DISCUSSION
The goal of this study was to utilize OCT to reveal the morphological characteristics of RICS and the stent-vessel interactions in patients with previous cervical radiotherapy. In this pilot study, several morphological characteristics of RICS were exhibited by OCT, such as heterogeneous signal-rich tissue, dissection, and advanced atherosclerosis. Based on the post-interventional OCT findings, the mean rates of tissue protrusion and stent strut malapposition were 34.9% (range 18.2-57.1%) and 7.0% (range 3.3-13.8%), respectively. Follow-up OCT examinations of the two patients revealed homogeneous signal-rich neointima and signal-poor regions around stent struts. Microvessels were observed in the thicker 6-month neointima, where the neointima burden reached 48.9%.
Although the lesion characteristics of RICS have been studied with various imaging tools, the mechanism of RICS remains undetermined. Zou et al. (2013) applied DSA and found more bilateral severe stenosis at the CCA/ICA (18% vs. 6%) and more dissections (20% vs. 3%) in patients with symptomatic occlusive radiation vasculopathy than in patients with severe symptomatic carotid stenosis. In line with the findings of Zou et al., bilateral severe stenosis at the CCA/ICA and dissection were frequently present in this study. As for plaque composition, multidetector row computed tomography showed a significant increase in carotid artery plaque volume and in the percentage of fatty plaque component at 2 years after radiotherapy (Anzidei et al., 2016). Moreover, carotid ultrasound scanning revealed more hypoechoic plaque (9% vs. 0%) in irradiated NPC patients (4-11 years after radiotherapy) than in non-irradiated patients (Lam et al., 2002). Hypoechoic plaque may indicate lipid plaque or intraplaque hemorrhage and represent vulnerable plaque. However, Fokkema et al. (2012b) compared the histopathological features of RICS (1.8-24 years after radiotherapy) with those of atherosclerotic carotid stenosis and found that RICS had less macrophage infiltration and a smaller lipid core, indicating a more stable plaque in RICS. The tumor type, radiation dose, time interval from radiotherapy, concomitant chemotherapy (cisplatin), and existing cardiovascular risk factors have been associated with RICS (Cheng et al., 1999; Dorth et al., 2014; Twitchell et al., 2018). We suppose that the heterogeneity of tumor types and time intervals from radiotherapy in these two studies may explain the opposite results.

FIGURE 4 | Six-month follow-up OCT images. Image (J) corresponds to image (G); image (L) represents the image with the minimum lumen. The homogeneous signal-rich neointima, signal-poor regions around stent struts (dashed curve with arrows), and microvessels (red arrow) were visualized. Scale bars represent 1 mm. Asterisks denote guide-wire artifact. DSA, digital subtraction angiography; OCT, optical coherence tomography; RICA, right internal carotid artery.
The heterogeneous lesion characteristics of RICS were also observed by OCT in our study. In the NPC patients (Cases 1-3), with increasing time intervals from radiotherapy, heterogeneous signal-rich tissue, dissection, and advanced atherosclerosis were identified in sequence. As far as we know, OCT has been applied to reveal the features of carotid atherosclerotic stenosis and the stent-vessel relationship after stenting. As for the features of carotid stenosis, advanced atherosclerosis such as ruptured lipid-rich plaque, ruptured calcific nodule, neovascularization, and thrombosis were observed both in carotid atherosclerotic stenosis (Yoshimura et al., 2011, 2012; Jones et al., 2014; Yang et al., 2021) and in RICS. Interestingly, heterogeneous signal-rich tissue and an onion-like structure were observed in our study. Heterogeneous signal-rich tissue may represent layers of distinct collagen types or organized thrombi and be explained by prior rupture and healing (Shimokado et al., 2018; Vergallo and Crea, 2020). As we know, radiation can damage vascular endothelial cells and promote thrombosis (Zheng et al., 2020). Furthermore, radiation damage to the vasa vasorum may cause loss of elastic fibers and smooth muscle fibers and lead to dissection. With the application of OCT, we can clearly identify the various lesion features of RICS, which may guide individualized treatment in the future.
Carotid artery stenting and carotid endarterectomy are both effective treatments for reducing the risk of stroke in patients with severe carotid stenosis. Because previous cervical radiotherapy can cause absent tissue planes in the carotid artery wall and poor tissue healing, CAS has been considered a less invasive alternative that reduces the risk of surgical complications (Kernan et al., 2014). A meta-analysis compared the outcome of CAS with carotid endarterectomy in patients with RICS and found that CAS had higher rates of restenosis and late cerebrovascular adverse events (Fokkema et al., 2012a). In addition, Yu et al. reported that the rate of in-stent restenosis in patients with RICS was significantly higher than in patients with carotid atherosclerotic stenosis (26% vs. 4%) (Yu et al., 2014). The stent-vessel interactions in RICS had not previously been studied. In this study, the mean rates of tissue protrusion and stent strut malapposition were 34.9 and 7.0%, respectively. In Case 4, with high rates of tissue protrusion and stent strut malapposition, the 6-month neointima burden reached 48.9% and neovascularization was observed. Stent strut malapposition may increase the risk of stent thrombosis (Ng et al., 2017), and irregular tissue protrusion may be associated with target lesion revascularization (Soeda et al., 2015). Whether the rates of tissue protrusion and stent strut malapposition affect stent restenosis in irradiated patients needs further exploration. This study has some limitations. First, it is a single-center study with a small sample size. Future work should increase the sample size to compare RICS with carotid atherosclerotic stenosis by OCT. Second, although various morphological characteristics of RICS were detected in this study, consistent features of RICS need future exploration. Third, more post-interventional OCT and follow-up OCT evaluations are required to investigate the mechanism of stent restenosis in irradiated patients.
CONCLUSION
The morphological features of RICS were heterogeneous, including heterogeneous signal-rich tissue, dissection, and advanced atherosclerosis. Stenting was successful in all 5 patients with severe radiation-induced carotid stenosis. One patient, with high rates of tissue protrusion and stent strut malapposition immediately after stenting, developed evident in-stent neointimal hyperplasia at the 6-month follow-up.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Jinling Hospital Ethics Committee. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
XX and FH participated in the study design, collection, interpretation, analysis of data, and writing of the manuscript. XS and RL participated in the collection, interpretation, and analysis of data. YH, ML, FW, QY, and WZ participated in the analysis of data and revising of the manuscript. XL and RY participated in the study design, collection, interpretation, analysis of data, and revising of the manuscript. All authors contributed to the article and approved the submitted version.
"year": 2022,
"sha1": "4d84937bf9960b93ed03c20cadb10ad6a3a04cf2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "4d84937bf9960b93ed03c20cadb10ad6a3a04cf2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Bidomain Predictions of Virtual Electrode-Induced Make and Break Excitations around Blood Vessels
Introduction and background: Virtual electrodes formed by field stimulation during defibrillation of cardiac tissue play an important role in eliciting activations. It has been suggested that the coronary vasculature is an important source of virtual electrodes, especially during low-energy defibrillation. This work aims to further the understanding of how virtual electrodes from the coronary vasculature influence defibrillation outcomes.

Methods: Using the bidomain model, we investigated how field stimulation elicited activations from virtual electrodes around idealized intramural blood vessels. Strength–interval curves, which quantify the stimulus strength required to elicit wavefront propagation from the vessels at different states of tissue refractoriness, were computed for each idealized geometry.

Results: Make excitations occurred at late diastolic intervals, originating from regions of depolarization around the vessel. Break excitations occurred at early diastolic intervals, whereby the vessels were able to excite surrounding refractory tissue due to the local restoration of excitability by virtual electrode-induced hyperpolarizations. Overall, strength–interval curves had similar morphologies and underlying excitation mechanisms compared with previous experimental and numerical unipolar stimulation studies of cardiac tissue. Including the presence of the vessel wall increased the field strength required for make excitations but decreased the field strength required for break excitations, and the field strength at which break excitations occurred was generally greater than 5 V/cm. Finally, in a more realistic ventricular slice geometry, the proximity of virtual electrodes around subepicardial vessels was seen to cause break excitations in the form of propagating unstable wavelets to the subepicardial layer.

Conclusion: Representing the blood vessel wall microstructure in computational bidomain models of defibrillation is recommended, as it significantly alters the electrophysiological response of the vessel to field stimulation. Although vessels may facilitate excitation of relatively refractory tissue via break excitations, the field strength required for this is generally greater than those used in the literature on low-energy defibrillation. However, the high-intensity shocks used in standard defibrillation may elicit break excitation propagation from the coronary vasculature.
INTRODUCTION
Defibrillation via an implanted cardioverter defibrillator (ICD) remains the only reliable means of successfully terminating otherwise lethal cardiac arrhythmias. Despite its efficacy, the strong shocks required to ensure cardioversion render it a sub-optimal therapy, leading to the active pursuit of its refinement or novel lower energy protocols and electrode configurations. Both conventional (strong) and recently suggested low-energy defibrillation (Fenton et al., 2009;Janardhan et al., 2012;Luther et al., 2012;Rantner et al., 2013b) are thought to be driven by virtual electrodes (VEs), formed within tissue distant from the physical electrodes. VEs are produced upon application of an electric field at regions of conductivity heterogeneity in the intra-/extracellular spaces of the myocardial tissue. Electrical current then moves between the two cellular domains to redistribute itself in accordance with these changes, resulting in localized depolarization and hyperpolarization. Localized regions of depolarization within the myocardium may create new excitation wavefronts, which may act to annihilate fibrillation wavefronts and terminate the arrhythmia (Zipes et al., 1975). However, the induced regions of hyperpolarization have the ability to cause refractory myocardium to become reexcitable. These two competing effects, combined with the spatially inhomogeneous and often temporally aperiodic excitability and complex structural anatomy of the fibrillating heart, serve to complicate defibrillation strategies. Refinement of conventional ICD shocks and the advancement of novel low-energy protocols into clinical practice therefore necessitate a greater understanding of the specific mechanisms behind VE formation, particularly around fine-scale intramural anatomical structures.
Anatomically detailed modeling studies based on high-resolution MR imaging data have suggested the importance of including the coronary vasculature within computational models for simulation of strong defibrillation shocks (Bishop et al., 2010a, 2012). Such studies explicitly showed the formation of VEs around vessel cavities and highlighted important differences between including and excluding these structures on defibrillation outcomes (Bishop et al., 2012). A series of recent experimental work has also demonstrated the promising success of low-energy defibrillation protocols consisting of a series of low-intensity monophasic pulses which terminated fibrillatory activity in canine preparations (Fenton et al., 2009; Luther et al., 2012). Corresponding theoretical analysis suggested that the mechanism of low-energy defibrillation was mainly driven by depolarized VEs formed around intramural blood vessels, which led to the progressive excitation of the surrounding excitable tissue, terminating the arrhythmia.
An issue, yet to be fully investigated, is the mechanism by which vessels may help activate intramural cardiac tissue which is not fully excitable (or in the refractory phase). This is particularly pertinent, as during fibrillation, wavefronts are constantly interacting with wave tails, such that very little completely recovered tissue exists, with the majority of the tissue being in a refractory or relatively refractory state. Understanding the nature of postshock propagation under these conditions is essential in elucidating the underlying physiological processes driving low-energy defibrillation.
In this study, we first seek to quantify the field strengths at which vessels mediate secondary sources of excitation in different states of refractoriness (via "make" or "break" excitations). We compare how these field strengths relate to quoted low-energy fields, as well as investigate how they depend on the specific micro-anatomy of the vessel in terms of size and structure, including the presence/absence of an insulating vessel wall. We then highlight the mechanisms by which the induced VE patterns excite refractory tissue and how this is governed by the anatomical distributions of vessels in terms of their proximity to other vessels and their location within the myocardial wall. Finally, we discuss the implications of the results in the context of standard and low-energy defibrillation strategies.
Excitation of Relatively Refractory Tissue
The mechanisms by which relatively refractory cardiac tissue may be re-excited by a premature unipolar stimulus have been investigated in detail both in computational bidomain studies (Bray and Roth, 1997; Kandel and Roth, 2013, 2014) and in corresponding optical mapping experimental work (Sidorov et al., 2005). These studies have demonstrated that the application of a unipolar stimulus to cardiac tissue induces a characteristic dog-bone VE pattern, consisting of neighboring de/hyperpolarization. This specific VE pattern forms because, in general, myocardial tissue is anisotropic to different degrees within the intra- and extracellular spaces. The different VE patterns (Dekker, 1970) from unipolar stimuli of different polarities (anodal or cathodal) cause different excitation dynamics in cardiac tissue: specifically "anode/cathode make" excitation (when applied to diastolic tissue) and "anode/cathode break" excitation (when applied to relatively refractory tissue). Break excitations occur because, if a unipolar stimulus is applied to relatively refractory tissue, the hyperpolarizing action of the stimulus acts to restore excitability to the specific regions under the virtual cathodes. At shock cessation, the depolarized tissue (under the virtual anodes) can then propagate into these post-shock excitable channels (previously hyperpolarized by the shock), initiating a "break" excitation. This initial propagation may then give time for the surrounding bulk of the tissue (previously refractory) to naturally regain excitability, facilitating propagation away from the stimulus site. Consequently, the specific VE pattern induced by a unipolar stimulus, and particularly the hyperpolarizing action, facilitates propagation into otherwise refractory tissue.
Strength-Interval Curves
The relationship between the prematurity of the unipolar S 2 pulse (corresponding to the state of tissue refractoriness) and the strength of the stimulus required to elicit propagation is quantified in the strength-interval (SI) curve. Computational bidomain simulations and experimental optical mapping experiments have shown agreement with predictions of SI curves for different polarities of applied stimulus (anodal or cathodal) (Kandel and Roth, 2013). SI curves are produced by applying a premature S 2 stimulus of a known strength to the tissue at a given instant of refractoriness, i.e., at a specific time following the S 1 paced beat. The minimum strength required to elicit propagation is then recorded, and a new timing following the S 1 beat probed. Here, we extend the concept of the SI curve from unipolar to field stimulation applied to specific anatomical features. VEs from field stimulation are symmetric with respect to the polarity of the stimulus-in effect, polarities are inverted with respect to a swapping of the field direction. This means that, in contrast to the results from unipolar stimulation, SI curves from field stimulation are invariant with respect to change in sign of the stimulating electrode and, thus, the concept of "anodal" or "cathodal" stimulus does not exist for field stimulation of isolated structures.
Governing Equations
The tissue is modeled using the bidomain equations for cardiac electrodynamics (Henriquez, 1992). These may be written as

∇ · (σ_i ∇ϕ_i) = β I_m,    ∇ · (σ_e ∇ϕ_e) = −β I_m    (in the tissue),
∇ · (σ_b ∇ϕ_b) = 0    (in the bath),
I_m = C_m ∂V_m/∂t + I_ion(V_m, η) − I_s,    (1)

where ϕ_i and ϕ_e are the intra- and extracellular potentials, V_m = ϕ_i − ϕ_e is the transmembrane potential, σ_i and σ_e are the intra- and extracellular conductivity tensors, σ_b is the bath conductivity, ϕ_b is the potential field in the bath, β is the membrane surface area to volume ratio, I_m is the transmembrane current density, I_s is the transmembrane current stimulus, C_m is the membrane capacitance per unit area, and I_ion is the membrane ionic current density, as a function of the transmembrane potential V_m and the vector of state variables η. As described in Roth (1991), the boundary conditions (BCs) imposed on equation (1) are motivated by physical arguments and ensure that there is no flux of the intracellular current across the tissue boundary, and that the extracellular and bath potentials are continuous at the boundary of the tissue ∂Ω = ∂Ω_t ∪ ∂Ω_tb. The subscript t represents tissue and subscript tb represents the tissue-bath boundary, so that the BCs are

n · (σ_i ∇ϕ_i) = 0    on ∂Ω,
ϕ_e = ϕ_b  and  n · (σ_e ∇ϕ_e) = n · (σ_b ∇ϕ_b)    on ∂Ω_tb,    (2)

where n is the unit normal to the heart tissue surface. In addition, no-flux conditions are applied to the boundary of the bath space.
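To make the structure of equations (1) and (2) concrete, the sketch below advances a minimal one-dimensional bidomain problem with a passive membrane, splitting each time step into an elliptic solve for ϕ_e and an explicit update for V_m. All parameter values, the explicit-Euler scheme, and the dense linear solve are illustrative assumptions for this sketch; they do not reproduce the CARP solver used in this work.

```python
# Minimal 1-D bidomain sketch with a passive membrane, I_ion = V_m / R_m.
# Assumed textbook parameter values; not the solver used in the paper.
import numpy as np

N, dx, dt = 200, 75e-6, 5e-6        # nodes, spacing (m), time step (s)
beta, Cm, Rm = 1.4e5, 1e-2, 0.91    # 1/m, F/m^2, Ohm m^2 (assumed)
sig_i, sig_e = 0.17, 0.62           # longitudinal conductivities (S/m)

# Second-difference operator with sealed (no-flux) ends via ghost nodes.
D = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / dx**2
D[0, 1] = D[-1, -2] = 2.0 / dx**2

A = (sig_i + sig_e) * D
A[0, :] = 0.0
A[0, 0] = 1.0                       # pin phi_e[0] = 0 for uniqueness
A_inv = np.linalg.inv(A)            # prefactor the elliptic sub-problem

V = np.zeros(N)
V[:20] = 0.04                       # demo initial condition: 40 mV patch

for _ in range(2000):               # 10 ms of passive evolution
    rhs = -sig_i * (D @ V)          # div((sig_i+sig_e) grad phi_e) = -div(sig_i grad V_m)
    rhs[0] = 0.0
    phi_e = A_inv @ rhs
    I_m = V / Rm                    # passive membrane current (A/m^2)
    V += dt * (sig_i * (D @ (V + phi_e)) / beta - I_m) / Cm

print(f"V_m after 10 ms: {V.min()*1e3:.2f} to {V.max()*1e3:.2f} mV")
```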
Anatomical Geometries
Two simplified geometries were considered: a vessel in a semi-infinite medium and two proximal vessels in a semi-infinite medium, as shown in Figure 1. The models were chosen to represent anatomical observations in an idealized way. In each case, the fiber field was always tangent to the surfaces of, and smoothly circumnavigated, the vessels, as observed in histological analysis (Gibb et al., 2009); this was achieved by taking the unit-vector field of the gradient of the potential field created between two electrodes on the left and right hand sides of the tissue domain, and applying a no-flux boundary condition on vessel surfaces and the upper and lower boundaries of the tissue (Bishop et al., 2010a; Bayer et al., 2012). In Figure 1, the left panel shows the smoothly varying fiber field around a single blood vessel, parameterized by its outer radius a and wall thickness t. The wall thickness was chosen to be a non-linear function of the vessel radius, following experimental measurements of the human coronary vasculature (Podesser et al., 1998). Two values of radius a were chosen, 0.5 and 2.0 mm, as these lie inside the range, and near the lower and upper bounds, of observed arterial radii (Podesser et al., 1998). As arteries and veins tend to be located proximal to one another, the effects of the superposition (Hörning et al., 2010) of VEs from proximal vessels were investigated by varying the angle θ between two vessels (middle panel, Figure 1); we chose to align them next to each other (θ = 0), offset them by θ = π/4, and align them above one another (θ = π/2). In each case, the minimum distance between the vessel surfaces was d.

FIGURE 1 | Left: schematic of a blood vessel surrounded by myocardium, with fibers smoothly circumnavigating the vessel. The vessel wall has thickness t and conductivity σw, and the blood inside the vessel has conductivity σb. The outer radius of the vessel is a, and the surrounding myocardium has anisotropic conductivity in the intra- (σi) and extracellular (σe) space. Right: two identical blood vessels proximal to one another, separated by spacing d and oriented by angle θ; the origin is situated at the mid-point between the two vessel centers.
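A minimal sketch of the fiber-field construction described above is given below: Laplace's equation is solved between left/right "electrodes" with a no-flux condition on the vessel surface, and the normalized gradient is taken as the local fiber direction. The masked Jacobi solver and the grid sizes are our illustrative choices, not the original implementation.

```python
# Fiber field around a circular vessel cavity via a Laplace solve.
import numpy as np

n, h = 101, 0.1                                  # grid nodes, spacing (mm)
x, y = np.meshgrid(np.arange(n) * h, np.arange(n) * h, indexing="ij")
tissue = (x - 5.0)**2 + (y - 5.0)**2 >= 2.0**2   # a = 2.0 mm vessel cavity

u = x / x.max()                                  # initial guess, 0..1
for _ in range(5000):                            # masked Jacobi iterations
    s = np.zeros_like(u)
    w = np.zeros_like(u)
    for ax, sh in ((0, 1), (0, -1), (1, 1), (1, -1)):
        s += np.where(np.roll(tissue, sh, ax), np.roll(u, sh, ax), 0.0)
        w += np.roll(tissue, sh, ax)             # count tissue neighbors
    u = np.where(tissue, s / np.maximum(w, 1), u)  # skip cavity: no-flux
    u[0, :], u[-1, :] = 0.0, 1.0                 # Dirichlet "electrodes"
    u[:, 0], u[:, -1] = u[:, 1], u[:, -2]        # no-flux upper/lower edges

fx, fy = np.gradient(u, h)                       # fiber field (valid in tissue)
mag = np.hypot(fx, fy) + 1e-12
fx, fy = fx / mag, fy / mag                      # unit fiber vectors
```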
A more realistic geometry was constructed from a high-resolution MRI scan of the rabbit ventricles (Bishop et al., 2010b), in which blood vessel cavities were resolved. A cross-sectional slice was taken in the long axis and cropped to show a sector of the left ventricular wall, and the image was then re-scaled to make the thickness of the left ventricular wall approximately 15 mm, similar to the human left ventricular wall thickness. Minimum bounding ellipses were fitted around the blood vessel cavities, giving two values for the vessel radius corresponding to the semi-major r_1 and semi-minor r_2 axes of the bounding ellipse. The thickness t of the vessel wall was assumed to vary as a linear function of the radius using

t(ψ) = k r(ψ),

where k ≈ 0.18, calculated from the mean wall thickness for (circular) vessels with radii of 0.5 and 2.0 mm, and ψ ∈ [0, 2π] is the polar angle. The outer radius of the vessel was thus taken to be a(ψ) = (1 + k)r(ψ). Figure 2 (left panel) shows the geometry produced. Varying fiber architecture was then assigned (right panel, Figure 2) using the same method as for the simplified vessels. The geometries in Figures 1 and 2 were discretized with linear triangular finite elements, with an average edge length of approximately 75 µm, and internal boundaries (vessel and vessel wall surfaces) were highly refined; the average edge length of elements forming the a = 0.5 and 2.0 mm vessel cavities was approximately 12 and 22 µm, respectively.
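The following is a hedged sketch of this elliptical-vessel wall construction: r(ψ) below is the standard polar form of an ellipse with semi-axes r_1 and r_2 (our assumed parameterization of the bounding ellipse), combined with t = k r and a = (1 + k) r as stated above.

```python
# Wall thickness and outer radius around an elliptical vessel cavity.
import numpy as np

def wall_geometry(psi, r1, r2, k=0.18):
    """Return t(psi) = k * r(psi) and a(psi) = (1 + k) * r(psi)."""
    r = r1 * r2 / np.sqrt((r2 * np.cos(psi))**2 + (r1 * np.sin(psi))**2)
    return k * r, (1.0 + k) * r

psi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
t, a = wall_geometry(psi, r1=1.2, r2=0.6)   # mm; illustrative semi-axes
```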
Electrophysiological Tissue Representation
Ionic cellular dynamics were represented by a human ventricular cell model (ten Tusscher and Panfilov, 2006), which has been used in many tissue-level human computational modeling studies (Ashikaga et al., 2013; Rantner et al., 2013a; Arevalo et al., 2016). To reproduce the asymmetry of the membrane response to strong shocks delivered during the plateau phase of the action potential, the cell model was further augmented (DeBruin and Krassowska, 1998; Ashihara and Trayanova, 2004) with an electroporation current and a hypothetical potassium current that activates at large depolarizations of >160 mV; these additional currents are simply summed with the I_ion term in equation (1) and do not require any modification to the cell model ordinary differential equation system (ten Tusscher and Panfilov, 2006). The domains were assigned physiologically realistic values of conductivity. In the myocardium, the intra- and extracellular conductivity tensors σ_i = (σ_i,l, σ_i,t) = (0.17, 0.019) S/m and σ_e = (σ_e,l, σ_e,t) = (0.62, 0.24) S/m were given experimentally measured values (Clerc, 1976) for the directions longitudinal (subscript l) and transverse (subscript t) to the local fiber orientation, giving representative conduction velocities in resting tissue of approximately 61 (longitudinal) and 22 (transverse) cm/s. The vessel walls were assigned the experimentally measured (isotropic) value of σ_w = 0.01 S/m (Bishop et al., 2010a), and the blood inside the vessels and surrounding the myocardium was assigned the value of σ_b = 1.0 S/m (Visser, 1989).
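Given the longitudinal and transverse values above, the local conductivity tensors follow the usual transversely isotropic construction σ = σ_t I + (σ_l − σ_t) f fᵀ, where f is the unit fiber direction. The helper below is a minimal sketch of this construction, not solver code from the paper.

```python
# Local conductivity tensors from a unit fiber direction f.
import numpy as np

def conductivity_tensor(f, sig_l, sig_t):
    f = np.asarray(f, dtype=float)
    f = f / np.linalg.norm(f)                    # ensure unit fiber vector
    return sig_t * np.eye(f.size) + (sig_l - sig_t) * np.outer(f, f)

f = np.array([np.cos(0.3), np.sin(0.3)])         # example local fiber angle
sigma_i = conductivity_tensor(f, 0.17, 0.019)    # intracellular (S/m)
sigma_e = conductivity_tensor(f, 0.62, 0.24)     # extracellular (S/m)
```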
Computational Considerations
The finite element solver CARP (Cardiac Arrhythmia Research Package) (Vigmond et al., 2003) was used to solve the bidomain equations (1) and (2). The prescribed time step was set to 5 µs, a value known to be stable and accurate (Cooper et al., 2016) for the ten Tusscher and Panfilov (2006) human ventricular model, and sufficiently small to ensure the ramp function for the shock was resolved smoothly.
Tissue Preconditioning
Initially, the tissue was pre-paced at the single-cell level using a basic cycle length of 300 ms, for 100 beats. The saved state variables were then applied to every reaction source term in the tissue-level model.
Strength-Interval Computation
In order to avoid introducing spatial gradients in refractoriness within the tissue, the entire tissue was uniformly excited using a 2 ms S 1 transmembrane stimulus. The state of the entire bidomain simulation was then saved at 5 ms time intervals from 250 to 350 ms after the last S 1 stimulus; these time values were chosen as they span the relatively refractory to fully excitable phases of the action potential. The timings of these saved states constitute the x-values of the SI curve.
For each saved time increment, the minimum strength of an S 2 field stimulus required to elicit wave propagation from the heterogeneities present in the tissue (the vessels) was computed. This was performed using a simple bisection scheme with an upper bound of 10 V/cm, using the criterion that an excitation wavefront was found to be propagating somewhere in the tissue 30 ms after the termination of the stimulus with a minimum peak transmembrane potential (action potential amplitude) of −10 mV (a value found to be reasonable from visual inspection of the shock-induced wavefronts). A tolerance on the bisection scheme of <0.1 V/cm for the S 2 strength was used. The S 2 field stimuli were imposed by applying dissimilar Dirichlet boundary conditions, in the extracellular space, to the upper and lower surfaces of the tissue domains. These minimum field strengths then constitute the y-values of the SI curve.
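As a concrete illustration of this search, the following is a minimal sketch of the bisection described above. The callback `propagates` is a hypothetical stand-in for running the bidomain simulation from the saved state at the given DI and evaluating the 30 ms / −10 mV propagation criterion.

```python
# Bisection for the minimum S2 field strength eliciting propagation.
def min_shock_strength(propagates, di, e_max=10.0, tol=0.1):
    """Return the threshold field strength (V/cm) at diastolic interval di,
    or None if no capture occurs below the upper bound e_max."""
    lo, hi = 0.0, e_max
    if not propagates(hi, di):
        return None                 # no capture even at the upper bound
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if propagates(mid, di):
            hi = mid                # capture: try a weaker shock
        else:
            lo = mid                # no capture: try a stronger shock
    return hi                       # within tol of the true threshold
```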
For each of the idealized geometries considered, excitation from surface VEs on the depolarized surface of the tissue domains was avoided by imposing a layer of unexcitable (passive) tissue with sufficient thickness (approximately six times the transverse space-constant) to attenuate the surface VE amplitude sufficiently, and the boundaries were located far from the vessel to negate boundary effects. Such an approach did not affect the strength or distribution of the electric field around the anatomical structures considered.
Anisotropic Tissue
Prior to investigating the SI curves for the vessel configurations shown in Figure 1, we first show how the VE patterns change when the low-conductivity vessel wall is included in the bidomain model. This was done by solving the bidomain equations with a passive membrane (I_ion = V_m/R_m) until steady state was achieved. The steady-state VE patterns for a vessel with and without a vessel wall are shown in Figure 3. Figure 3 clearly illustrates that the VE pattern is changed by the presence of the low-conductivity vessel wall. The most significant difference is the swapping of the polarity at the upper and lower surfaces of the vessel, an effect observed previously (Bishop et al., 2010a, 2012). A secondary effect of the presence of a vessel wall is to increase the magnitude of the VEs which form on either side of the vessel due to the anisotropic conductivity ratios. The physical reason for these two effects is that: (a) in the case of no vessel wall, boundary VEs from current transiting the extracellular-to-bath-to-extracellular spaces dominate, as current takes the path of least resistance through the vessel cavity, and (b) in the case of a vessel wall, the insulating vessel wall changes the path of least resistance from through the vessel cavity to around the vessel cavity; thus VEs from dissimilar anisotropy ratios dominate, and boundary VEs are diminished.
The different VE patterns shown above in Figure 3 act to change the active membrane response, as shown in the SI curves in Figure 4. Figure 4 shows that for late diastolic intervals (DIs) of greater than ≈315 ms, the SI curve is approximately linear and near the asymptotic, minimum, value. At such late DIs, the mechanism of tissue capture is via make excitation, with wave propagation initiating from the vessel almost immediately as the S 2 stimulus is applied. The shock strength at late DIs for vessels without vessel walls is approximately half that compared to the case with a vessel wall, and the shock strength required to elicit excitation from larger vessels is lower than that required for smaller vessels, consistent with the literature (Pumir and Krinsky, 1999;Hörning et al., 2010;Luther et al., 2012). For all vessels (in Figure 4), the shock strength monotonically increases with decreasing DI. At early DIs, the shock strength required to elicit propagation for the larger vessel with a vessel wall is lower than that without a vessel wall, a direct consequence of the altered VE pattern around the vessel from the influence of the insulating vessel wall. We include a vessel wall in the rest of the results in this manuscript, as the presence of the vessel wall significantly alters the SI curves. Figure 5 shows the types of propagation patterns created by the VEs around the large (a = 2.0 mm) blood vessel, with a vessel wall, at different DIs and shock strengths causing make and break excitation patterns. Break excitation (right panel in Figure 5) shows how the depolarized regions rapidly diffuse into the now-excitable hyperpolarized regions, causing waves to propagate through these previously hyperpolarized regions. Without the close proximity of de-and hyperpolarized VE regions, the mechanism of break excitation would not occur.
It should be noted that we investigated the SI response (results not shown) in the absence of the electroporation currents (Ashihara and Trayanova, 2004) and found similar behavior as reported in Bray and Roth (1997); the addition of the electroporation currents acts to reduce the shock strength required to elicit propagation from VEs.
Influence of Conductivity Variations
As there is some uncertainty regarding the conductivities of myocardium and the vessel wall, we recomputed the SI curves for the largest vessel (a = 2.0 mm) with other different sets of experimentally measured (Roberts et al., 1979;Roberts and Scher, 1982) tissue conductivities (Roth, 1997) and varied the vessel wall conductivity by two orders of magnitude around the experimentally measured value of σ w = 0.01 S/m (Bishop et al., 2010a). The tissue conductivities used are summarized in Table 1, and the resulting SI curves are shown in Figure 6. Figure 6 shows that, for the lowest vessel wall conductivity (left panel), the different tissue conductivities have little influence on the shape of the SI curve. With the vessel wall conductivity set at its experimentally measured value (middle panel), the third tissue conductivity set (Roberts and Scher, 1982) causes the SI curve to diverge appreciably from the rest of the curves at early DIs. At the highest vessel wall conductivity (right panel), the SI curves for each tissue conductivity set are appreciably different from the other two panels and appear similar to the solid curve shown in Figure 4 (right panel), which corresponds to the case of a vessel without a vessel wall. This is to be expected as the higher wall conductivity is more similar to the blood conductivity in the vessel cavity. Overall, the different tissue conductivities have little effect on the shape of the computed SI curves. The value of the vessel wall conductivity tends to dictate the shape of the SI curve; this is because the induced VE patterns are different when the conductivity of the vessel wall is much smaller than the conductivity of the vessel cavity (blood)-see Figure 3.
Modulation of SI Curve Response due to Superposition Effect of Neighboring Vessels
The vessels comprising the coronary vasculature are often colocated, with veins following the same tracts as arteries in vessel pairs. The proximity and orientation of two blood vessels alters the resultant VE pattern produced because the fiber architecture is different for two proximal vessels, compared to that for one vessel, and also because the potential fields in the intracellular and extracellular spaces are linear and obey the principle of superposition (Hörning et al., 2010). In the case where the vessels are aligned perpendicular to the applied field (left panel, Figure 7), the positive and negative VEs between the two vessels are amplified. When the vessels are oriented at π/4 from one another (middle panel, Figure 7), the positive VE at the top of the left vessel and the negative VE at the bottom of the right vessel are amplified. When the vessels are arranged in the direction of the applied field (right panel, Figure 7), the positive and negative VEs between the two vessels are reduced in magnitude.
The complex non-linear dynamics of the membrane response to these VEs of different patterns result in different SI curves for these vessels, as shown in Figure 8. For each of the vessel alignments, the SI curve is similar for late DIs of between 325 and 350 ms; this is because make excitation dominates at late DIs, which is relatively independent of the complex VE patterns produced by the different vessel configurations. At early DIs of less than 300 ms, the different VE patterns result in different SI curves; however, the differences are minimal, and each vessel alignment gives a similar response to the single vessel (with a vessel wall) shown in Figure 4. The θ = π/4 alignment requires the lowest shock strength to elicit propagation overall, due to its configuration leading to the proximity of the strongest VE polarizations.

TABLE 1 | Tissue conductivity sets (S/m).

Conductivity set             σ_i,l   σ_i,t   σ_e,l   σ_e,t
Nominal (Clerc, 1976)        0.170   0.019   0.620   0.240
Roberts et al. (1979)        0.280   0.026   0.220   0.130
Roberts and Scher (1982)     0.340   0.060   0.120   0.080

Here, "nominal" refers to the conductivity set used in the rest of this work.

FIGURE 8 | SI curves for two proximal vessels, separated by spacing d = a and oriented relative to one another by angle θ (as described in Figure 1), for a = 0.5 mm and a = 2.0 mm.
VE Induced Propagation from a Realistic Ventricular Slice
The ventricular slice geometry was pre-paced at the tissue level in a similar manner as for the isolated blood vessels; however, the SI curve was not computed. Instead, shocks were applied from anodal and cathodal electrodes in the bath space in the middle of the left ventricular cavity to a ground in the bath space surrounding the epicardium, resulting in an approximately transmural field. Shocks of 10 V/cm (computed via the minimum Euclidean distance from points on the surface of the cathode to the ground) were applied at DIs of between 290 and 350 ms after the previous stimulus. Regardless of the polarity of the electrode, the same type of make excitations were observed at late DIs (between 300 and 350 ms) from the transmural vessels, and the same type of break excitations were observed at early DIs (between 290 and 300 ms), as with the idealized vessel geometries considered previously. Likewise, break excitations either continued to propagate or decayed (as shown in Figure 9), depending on the DI at which the shock was applied.
However, a different mode of break excitation, in the form of propagated graded responses (Trayanova and Rantner, 2003) attached to the subepicardial layer, was observed around vessels proximal to the epicardium. Figure 9 shows snapshots of the transmembrane potential in the ventricular slice following the application of the shock. These boundary waves were caused by VE depolarizations around subepicardial vessels propagating into VE hyperpolarized regions which intersected the epicardium. The boundary propagation waves had peak transmembrane potentials of approximately −10 mV and action potential durations of approximately 35 ms; these values are significantly below the normal electrophysiological characteristics of the cell model (ten Tusscher and Panfilov, 2006) and similar to the propagating unstable wavelets described in Boyle et al. (2012). Approximately 120 ms after the shock was applied, these boundary propagation wavefronts re-entered and excited the entire tissue. By saving the cell model state variables (DeBruin and Krassowska, 1998; Ashihara and Trayanova, 2004; ten Tusscher and Panfilov, 2006) from nodes inside the boundary propagation waves and continuing the integration at the single-cell level, we determined that the short action potential durations and low transmembrane amplitudes of these waves were a direct consequence of electrotonic coupling and not an electrophysiological effect of the cell model itself. In contrast to the boundary propagation waves, single-cell action potentials had durations of approximately 200 ms (short in comparison to the normal human ventricular duration) and peak transmembrane potentials of approximately 20 mV.
Note that the transmembrane potential at the epicardium had equalized (from its shock-induced hyperpolarization) with the surrounding transmembrane potential when the boundary propagation was observed. At higher shock strengths, however, break excitations from vessel VEs reached the epicardium earlier and resulted in more rapid boundary propagation; this was due to the latent hyperpolarization from the shock-induced VEs. During anodal shocks, the inverted VE pattern gave rise to break excitations next to the endocardial surface from adjacent hyperpolarized areas; however, these boundary propagation waves rarely gave rise to bulk activation as they were often extinguished after collision with regions of prolonged refractoriness (due to depolarized VE regions) or failed to propagate along the relatively more curved endocardial boundary (trabeculation).
Isotropic Tissue
Some insight into the origins of the VEs elicited by blood vessels in isotropic tissue may be gained using an analytical approach. Unlike anisotropic tissue, where VEs may form within the tissue in response to conductivity anisotropy, VEs in isotropic tissue may originate only at tissue surfaces, where current enters or exits the bidomain. At steady state, the governing equations for isotropic tissue simplify to

∇²ϕ_e = 0,    λ² ∇²V_m − V_m = 0,

where λ² = R_m σ_i σ_e / (β (σ_i + σ_e)) is the space constant (here, R_m is the membrane resistance). These equations assume that perturbations to ∇ϕ_e from the induced transmembrane potential (in response to field stimulation) are small, such that they may be neglected, and assume a parallel combination of the intra- and extracellular isotropic conductivities for Laplace's equation in the extracellular space (Sobie et al., 1997; Jolley et al., 2008). The system may be solved by separation of variables as in Pumir and Krinsky (1999), except that here, instead of assuming a constant extracellular field for ∇ϕ_e (Pumir and Krinsky, 1999), we first solve Laplace's equation for ϕ_e to satisfy the electrostatic boundary conditions at the interfaces between the vessel cavity and vessel wall, and between the vessel wall and tissue. Assuming the far-field stimulus, of magnitude E_0, is aligned with the y-axis (E = E_0 ŷ), this gives, in polar coordinates,

V_m(r, θ) = C E_0 λ (K_1(r/λ) / K_1′(a/λ)) sin θ,

where K_ν is the modified Bessel function of order ν of the second kind, and C is a scaling factor which comes from the solution of Laplace's equation for the particular two-layer geometry considered; C depends on the conductivities σ_b, σ_w, and σ_i + σ_e, and on the radius ratio a′/a, where a′ = a − t. Assuming that the electric field is constant around the vessel implies that C = 1, giving the solution in Pumir and Krinsky (1999). Substituting the transverse values for the intra- and extracellular conductivities (σ_it = 0.019, σ_et = 0.24 S/m) (Clerc, 1976) (approximating an out-of-plane fiber field parallel to the vessel axis), the blood (σ_b = 1.0 S/m) and vessel wall (σ_w = 0.01 S/m) conductivities into C, and using the analytical expression for the wall thickness from Podesser et al. (1998), gives C ≈ 0.15 and C ≈ 0.25 for vessels of radii a = 0.5 and a = 2.0 mm, respectively. In other words, the low-conductivity blood vessel wall reduces the current flux through the vessel cavity, significantly lowering the VE induced by the vessel in response to field stimulation. In the case of no vessel wall, the scaling factor becomes independent of the vessel size and simplifies to

C = 2σ_b / (σ_b + σ_i + σ_e) ≈ 1.59.

The SI curve for a vessel of radius a = 2.0 mm in isotropic tissue is shown in Figure 10. The ratio of the field strengths required to elicit excitation from the vessels with and without a vessel wall is approximately 6.44 at an interval of 350 ms, a value which is approximately constant within the interval range of validity (290-350 ms). This is close to the ratio of the scaling factors for the a = 2.0 mm vessel with and without a vessel wall, 1.59/0.25 ≈ 6.36, suggesting that, in the case of isotropic tissue, the scaling factor is proportional to the minimum field strength required for VE-induced excitation from the blood vessels. In isotropic tissue, the VE pattern is identical for vessels with and without vessel walls (albeit with different magnitudes); these patterns are shown and discussed in detail in Pumir and Krinsky (1999) and Bittihn et al. (2012).
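The sketch below is a numerical check of the isotropic-tissue scaling discussed above: it evaluates the no-wall scaling factor C and the K_1 radial decay of the induced V_m away from the vessel surface. The β and R_m values are assumed (typical literature values), as they are not specified in this section.

```python
# Scaling factor and K_1 radial fall-off of the vessel-induced VE.
import numpy as np
from scipy.special import kv

sig_it, sig_et, sig_b = 0.019, 0.24, 1.0         # S/m, transverse values
beta, Rm = 1.4e5, 0.91                           # 1/m, Ohm m^2 (assumed)
lam = np.sqrt(Rm * sig_it * sig_et / (beta * (sig_it + sig_et)))  # ~0.34 mm

C_nowall = 2.0 * sig_b / (sig_b + sig_it + sig_et)
print(f"C (no wall) = {C_nowall:.2f}")           # ~1.59, as quoted above
print(f"ratio vs. C = 0.25 (a = 2 mm, wall): {C_nowall / 0.25:.2f}")  # ~6.36

a = 2.0e-3                                       # vessel outer radius (m)
r = a + lam * np.arange(5)                       # radii away from the surface
decay = kv(1, r / lam) / kv(1, a / lam)          # normalized K_1 fall-off
print(np.round(decay, 3))                        # VE decays over ~lambda
```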
DISCUSSION
In this work, idealized representations of blood vessels were used to investigate how the vessel-induced VEs influenced the electrophysiology of the surrounding tissue in different states of refractoriness upon field stimulation.
VEs around Idealized Blood Vessels and SI Curves
SI curves have previously been computed for unipolar stimulation, both experimentally (Dekker, 1970; Sidorov et al., 2005) and in numerical bidomain simulations (Kandel and Roth, 2013, 2014). In general, the same mechanisms of make and break excitations, and the same overall morphology of the curves, were observed from field stimulation of blood vessels as have been observed (numerically and experimentally) in unipolar stimulation of myocardium (Sidorov et al., 2005; Kandel and Roth, 2013, 2014). Direct experimental validation of the VE patterns predicted by our simulations is challenging, due to the fact that vessels are inherently intramural. Thus, the close agreement in overall excitation mechanisms and SI curve morphologies with respect to previous experimental unipolar stimulation studies represents an important means of comparison. It should be noted that, in both cases, break excitation occurs due to the co-location of de- and hyperpolarized VEs induced by the stimulus.

FIGURE 10 | SI curves for the a = 2.0 mm vessel in isotropic tissue. The presence of a low-conductivity vessel wall increases the field strength required to elicit wave propagation by a factor of approximately 6.44 compared with the vessel without a vessel wall. In these computations, we used the transverse conductivity values σ_it = 0.019 and σ_et = 0.24 S/m (Clerc, 1976), approximating an out-of-plane fiber field.
Excitation Behavior at Late DIs-"Make Excitations"
The VE pattern created upon field stimulation depends upon whether the blood vessel wall is resolved in the geometry, as shown in Figure 3. At late DIs, the tissue surrounding the vessel is relatively excitable and VE depolarizations of a sufficient magnitude cause wave propagation (see Figure 5, left panel).
Without a vessel wall, the shock strengths required for make excitations are lower than the case with the vessel wall, as shown in Figure 3. This is because current takes the path of least resistance: with the insulating vessel wall, current is shielded from traversing the vessel cavity (lowering the magnitude of the boundary flux VE) and instead prefers to travel around the vessel (raising the magnitude of the VE contribution from the varying fiber architecture). In addition, the shock strength required to elicit make propagations was shown to be lower for larger vessels (with and without vessel walls) in agreement with the literature (Pumir and Krinsky, 1999;Luther et al., 2012).
The effect of the superposition of VEs and different fiber architecture around vessels proximal to one another is minimal at late DIs (see Figure 8) as the complicated VE pattern has relatively little influence when the surrounding tissue is relatively excitable.
Excitation Behavior at Early DIs-"Break Excitations"
With decreasing DI, the SI curve monotonically increases, and the larger vessels display, in general (see Figures 4 and 8), three different slopes: shallow from 350 to 300 ms, sharp from 300 to 275 ms, and then decreasing slightly from 275 to 250 ms. These different slopes in the SI curves for the larger vessels correspond roughly to the shape of the repolarizing action potential (V_m(t)). In the case of the large blood vessel, wave propagation is possible at lower strengths when the vessel wall is resolved (see Figure 4, right panel). This is because the VE pattern changes in the presence of an insulating vessel wall (see Figure 3), such that VEs of opposite polarity are relatively proximal to one another far away (Takagi et al., 2004) from the vessel cavity, which acts as a boundary to diffusion. The mode of excitation at early DIs is via break excitation, where, at the end of the shock, the depolarized region rapidly diffuses into the hyperpolarized region (which was in the refractory phase at the beginning of the shock, but has since been made excitable by VE hyperpolarization) and initiates wave propagation. This type of propagation may decay or may continue to propagate into the surrounding tissue (depending on its refractoriness). The same phenomenon (wave propagation from a vessel with a vessel wall at early DI) may occur for the smaller blood vessel; however, this was outside the parameter space investigated in this work.
When vessels were arranged close to one another, the altered VE patterns acted to change the SI curve at early DIs; however, the changes were not major; the same monotonic increase in shock strength at earlier DIs occurred. When vessels were arranged at θ = π/4 to one another, the strength required to elicit break excitation was lower overall; with this configuration, the two largest magnitude VEs were immediately next to one another.
VEs around Other Heterogeneities
VEs may form around any heterogeneities, and the non-dimensional magnitude (|V_m/(E_0 λ)|) of such VEs is dependent on the conductivity ratio of the heterogeneity and the surrounding tissue, and on the characteristic length scale of the heterogeneity with respect to the local field direction E. For example, it has been shown (Pumir et al., 2007) that, in isotropic tissue, elliptical heterogeneities oriented with their semi-major axes parallel to the local field require a stronger stimulus to elicit wave propagation than those oriented with their semi-minor axes parallel to the applied field. However, regardless of the geometry of the heterogeneity, or the direction of the local electric field, in isotropic tissue the maximum non-dimensional VE is scaled by the conductivity ratio (discussed in the section on isotropic tissue above). This does not hold, however, in anisotropic tissue, rendering it difficult to compare the magnitudes of VEs created by cylindrical blood vessels with those of other heterogeneities, such as fibrosis or sheets. In general, however, for heterogeneities in isotropic tissue with a non-zero conductivity ratio, the magnitude of the non-dimensional VE is approximately constant (and approximately equal to the conductivity ratio), provided the characteristic length scale is much bigger than the space constant (Pumir and Krinsky, 1999). The conductivity of regions of fibrosis is known to be much smaller than that of healthy myocardium, so it is likely that the VEs elicited by these fibrosis heterogeneities will be dominated by anisotropy effects.
Relevance for Low-Energy Defibrillation Protocols
Previous experimental works highlighting the potential utility of low-energy defibrillation protocols (Fenton et al., 2009;Luther et al., 2012) have suggested the importance of the VEs formed around the coronary vasculature in providing a distributed source of wavefronts that facilitate arrhythmia termination. A consideration not addressed by these earlier works is the specific mechanism by which wavefront propagation is elicited from the vessel by the applied field with respect to the degree of refractoriness of the tissue.
Here, we have shown that break excitations may occur around blood vessels in otherwise refractory tissue. The ability to elicit wavefronts from relatively refractory tissue and not just from recovered diastolic tissue could therefore play a crucial role in optimizing low-energy defibrillation methods (Fenton et al., 2009;Janardhan et al., 2012;Luther et al., 2012;Rantner et al., 2013b) for which removing both excitable and soon to be excitable tissues is the main underlying goal. In the context of low-energy defibrillation, this may be especially relevant considering that the timing of the defibrillation shock, with respect to the time evolution of the excitable volume (Rantner et al., 2013b), is known to influence the defibrillation efficacy. However, an important finding from our study is that the field strength at which break excitations occur appears to be above that of low-voltage regimes, which typically quote field strengths <1 V/cm (Fenton et al., 2009;Luther et al., 2012;Rantner et al., 2013b). Therefore, our analysis suggests that such a mechanism may only be important in protocols that involve relatively higher strength shocks.
In this work, it was assumed that the tissue around the vessel repolarized simultaneously; however, in reality, a repolarization gradient, aligned with the propagation vector of the previous wave, will exist around the vessel. In sinus rhythm, the excitation generally propagates from the endocardium to the epicardium, producing a typical repolarization gradient of approximately 4.5 ms/mm, a value known to be sufficient to cause unidirectional propagation from extrasystoles (Laurita and Rosenbaum, 2000) in guinea-pig hearts. It is, however, difficult to draw any conclusions regarding the arrhythmogenicity of extrasystoles originating from VEs around blood vessels; the induced VEs themselves will perturb the repolarization field and, in context, shocks will typically only be applied during fibrillation, when the wave dynamics are chaotic and, by definition, not in sinus rhythm. The results from this work are mainly applicable to regions of tissue with small repolarization gradients, which may exist in regions around the myocardium during fibrillation.
A Note on Shock Strengths
In this work, and in much of the literature on defibrillation, the shock strength is quoted as the potential difference (∆ϕ_e) between the shock electrodes divided by the distance between them (l). A single measure of shock strength is a crude approximation when the conductivity in the domain is not constant, as is generally the case, and when one of the shocking electrodes is a catheter. The magnitude of the stimulating electric field, |E| = |∇ϕ_e|, which measures the shock strength as a function of position, may vary significantly even in simple or idealized geometries such as those considered in this work. To highlight this point, Figure 11 shows the heterogeneous nature of |∇ϕ_e| for the (∆ϕ_e/l) ≈ 10 V/cm shock which elicited the boundary wave propagation in the realistic slice geometry. Figure 11 shows how the field strength decays rapidly (approximately as 1/r^(d−1), where d is the number of dimensions) away from the point cathode and concentrates around the endocardial grooves [which, incidentally, is the reason for the preferential wave propagation from these regions, as discussed in Rantner et al. (2013b)] and the low-conductivity vessel walls. Thus, we highlight that quoted shock strengths for specific protocols may differ significantly from the localized field strengths sensed in specific regions of myocardium.

FIGURE 11 | The spatial variation of the magnitude of the electric field around the realistic ventricular slice geometry. Note that the color bar maximum was lowered from its theoretical maximum (10 V/cm) to 5 V/cm in order to highlight the regions of field concentration.
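The 1/r^(d−1) decay quoted above can be illustrated with the idealized case of a point current source in a homogeneous two-dimensional medium, where |E| = I/(2πσr); the conductivity and source current below are arbitrary illustrative values, not parameters from the simulations.

```python
# Field-magnitude decay away from an idealized 2-D point cathode.
import numpy as np

sigma, I0 = 0.24, 1.0e-3       # S/m; A per unit depth (illustrative)
r = np.array([0.5, 1.0, 2.0, 4.0, 8.0]) * 1e-3   # distance from source (m)
E = I0 / (2.0 * np.pi * sigma * r)               # |E| = I/(2 pi sigma r), V/m
print(np.round(E / 100.0, 3))                    # in V/cm: halves as r doubles
```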
Boundary Propagation Mediated by Subepicardial Vessels
Subepicardial boundary propagation was observed to occur at early DIs in the image-derived ventricular slice geometry (see Figure 9). The phenomenon, observed after shocks in the relatively refractory period may represent a mode of shock-induced arrhythmogenesis, or an additional mechanism responsible for the earliest post-shock activations after the isoelectric window (Trayanova, 2007;Ashihara et al., 2008;Constantino et al., 2010). We hypothesize that the low transmembrane potential amplitude and short action potential duration of the propagating boundary wave, in addition to the shallow depth in which it propagates, may have prohibited its in vitro observation using voltage sensitive dyes in optical mapping experiments due to depth-averaging effects (Janks and Roth, 2002a,b;Bishop et al., 2007). However, it could be possible to observe this proposed mechanism using optrodes which record fluorescent signals directly from the intramural space.
It is thought that the boundary propagation waves are a form of "propagating unstable wavelets," as described in Boyle et al. (2012). It is thought that, due to the sealed boundary, current flows along the edge and accumulates longitudinally. This is just enough current to excite longitudinally but not transversely, with a partial activation of the fast sodium channel (Boyle et al., 2012). The electrotonic loading is also lower near the boundary due to the boundary condition, so a lower transmembrane current density is required to excite tissue down the concentration gradient (Kelly et al., 2013; Bishop et al., 2014). In addition, the high extracellular conductivity adjacent to the boundary (in the bath space) significantly increases the conduction velocity of the wave close to the surface (Henriquez et al., 1996, 2007). As the largest blood vessels in the coronary vasculature reside in the subepicardial layer, and the magnitude of the induced VE depends on the size of the vessel, it is more likely that shocks involving an intracardiac cathode will lead to boundary propagation along the subepicardium, as this shock polarity leaves the epicardium hyperpolarized and excitable post-shock.
Implications for Computational Modeling of Field Stimulation
As shown in Figure 3 and in other works (Bishop et al., 2010a, 2012), specifically representing the insulating blood vessel wall within the computational model significantly alters the VE pattern created around the vessel upon application of a field stimulus. The effect of resolving the blood vessel wall on the electrophysiological response of the tissue to field stimulus, as a function of the surrounding tissue refractoriness (DI) and shock strength, is shown to be significant (see Figure 4). Thus, it is important to resolve the coronary vasculature in computational bidomain models of field stimulation and to take account of the vessel wall, which was experimentally measured to have a conductivity approximately two orders of magnitude lower than that of blood (Bishop et al., 2010a). The physical resolution of the blood vessel wall may, however, be computationally prohibitive, due to the necessarily small dimensions of the vessel wall. We propose that in more anatomically detailed 3D models, it may be possible to represent the macroscopic effect of the vessel wall by artificially lowering the conductivity of the medium inside the vessel cavity, such that the induced VE pattern is similar.
We demonstrate this in Figure 12 for the a = 2.0 mm vessel, where we applied a homogeneous equivalent conductivity σ_eq ∈ [σ_w, σ_b] to the vessel cavity. The equivalent conductivity may be computed by solving Laplace's equation for the potential field in a (piecewise) isotropic medium, for the vessel geometry, ensuring that the current density at the vessel surface with the equivalent homogeneous conductivity equals that when the vessel wall and blood are resolved (i.e., at the surface of the vessel, σ_w n·∇φ = σ_eq n·∇φ, where n is the unit normal vector). The resulting closed-form expression for the equivalent conductivity [equation (8)] is written in terms of σ_bw = σ_b − σ_w. Equation (8) provides a good approximation in the case of isotropic tissue (results not shown), but under-predicts the VE strength in anisotropic tissue, as shown in Figure 12. It is worth noting that, when the conductivities and dimensions are substituted into equation (8), the equivalent conductivity σ_eq is small (approximately two orders of magnitude lower than the minimum extracellular conductivity, for both the small and large vessels), implying that a zero-flux boundary condition for the extracellular potential at the vessel surface may provide an adequate approximation. These approaches require more investigation, however. An alternative approximation may be to apply a Robin-type boundary condition for the extracellular field across element edges/faces which constitute the blood vessel, in order to modify the extracellular current flux in a similar manner to that done for the intracellular current in Costa et al. (2014). Each of these proposed approximations requires the blood vessel surface to be spatially resolved in the computational models, however. An additional implication of resolving the coronary vasculature is that it naturally distributes VE sources in the myocardium, causing break excitations in relatively refractory tissue which may result in boundary propagation along the hyperpolarized surface, as shown in Figure 9. Without the proximity of VE depolarizations to the hyperpolarized boundary, this type of excitation may not occur. However, this mode of excitation does not require the vessel wall to be resolved per se.
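Equation (8) itself is not reproduced in this text. As a hedged sketch of the idea, the snippet below evaluates the classical coated-cylinder solution of Laplace's equation for a blood core (σ_b) surrounded by a wall (σ_w), which has the σ_bw = σ_b − σ_w structure described above; the conductivities and radii are placeholder values, and the paper's equation (8) may differ in form.

```python
def sigma_eq(sigma_b, sigma_w, r_lumen, r_outer):
    """Equivalent homogeneous conductivity of a wall+blood cylinder.

    Standard 2D coated-cylinder result for a core (blood, sigma_b,
    radius r_lumen) inside a shell (wall, sigma_w, radius r_outer)
    in a uniform applied field.  Limiting cases are sensible:
    sigma_eq -> sigma_b as the wall thickness -> 0, and
    sigma_eq -> sigma_w when the wall fills the whole cylinder.
    """
    rho2 = (r_lumen / r_outer) ** 2
    sigma_bw = sigma_b - sigma_w
    return sigma_w * (((sigma_b + sigma_w) + sigma_bw * rho2)
                      / ((sigma_b + sigma_w) - sigma_bw * rho2))

# Placeholder values: wall conductivity two orders of magnitude
# below blood, a 2.0 mm vessel with an assumed 0.1 mm wall.
print(sigma_eq(1.0, 0.01, r_lumen=1.9e-3, r_outer=2.0e-3))
```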
Limitations
The models used in this study were two-dimensional, and thus no out-of-plane effects on the VE patterns were investigated. However, using symmetry, we have investigated the effects of a homogeneous out-of-plane fiber field in the section on isotropic tissue. The accuracy of the experimentally measured conductivity for the blood vessel wall has some uncertainty, as this has only been measured once (Bishop et al., 2010a); the vessel wall conductivity is shown to significantly influence the shape of the SI curve (see Figure 6). The S1 stimulus procedure resulted in no repolarization gradients around the vessels, which, although unphysiological, allowed us to generate SI curves which were not dependent upon initial propagation patterns. The propagating unstable wavelets have, to the authors' best knowledge, hitherto not been observed experimentally; thus, there is some uncertainty regarding the accuracy of these results. The effects of biphasic shocks were also not investigated in this work, as here we sought to relate more closely to lower-energy protocols which use monophasic shocks (Fenton et al., 2009; Janardhan et al., 2012; Luther et al., 2012; Rantner et al., 2013b); such analysis could provide an avenue for future work.
CONCLUSION
Monophasic defibrillation shocks in the 5-10 V/cm range may elicit wavefront propagation from the larger blood vessels while the surrounding tissue is relatively refractory; this may either cause post-shock activations (causing failure of the defibrillation shock) or may increase the likelihood of successful defibrillation via wavefront annihilation with the fibrillation waves. Strong monophasic shocks from intracardiac cathodes may elicit low amplitude boundary wave propagation via break excitation from subepicardial vessel VEs, which could suggest an important mechanism of shock failure and reentry re-initiation.
Low-energy defibrillation protocols should focus on eliciting wavefront propagation from vessel VEs surrounded by excitable tissue, as excitation propagation requires low shock strengths [around 1 V/cm, in the regime of low-energy defibrillation (Fenton et al., 2009;Janardhan et al., 2012;Luther et al., 2012;Rantner et al., 2013b)] in this phase of the action potential. The low shock strengths also ensure that no break excitations are elicited from vessel VEs in relatively refractory tissue, as break excitations require shock strengths of greater than 1 V/cm. Bidomain modeling of defibrillation should include the coronary vasculature, as it is responsible for large magnitude VEs which may affect the fibrillation dynamics. If the vasculature is resolved, the insulating vessel walls should also be resolved as they strongly affect the VEs produced by the vasculature and thus the electrophysiological behavior via the non-linear Hodgkin-Huxley type action potential model.
AUTHOR CONTRIBUTIONS
Designed the research: MB and AC. Performed the research: AC. Contributed analytic tools: EV. Wrote the manuscript: AC, MB, and EV. | 2017-05-02T19:20:41.099Z | 2017-03-27T00:00:00.000 | {
"year": 2017,
"sha1": "b509cf2ad6632a07579bfc95adee8f4f3df6ae7a",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fbioe.2017.00018/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d976c09d74c1e7b5ed1b44faaf71781b71867b86",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
42686782 | pes2o/s2orc | v3-fos-license | On the Production of He, C and N by Low and Intermediate Mass Stars: A Comparison of Observed and Model-Predicted Planetary Nebula Abundances
The primary goal of this paper is to make a direct comparison between the measured and model-predicted abundances of He, C and N in a sample of 35 well-observed Galactic planetary nebulae (PN). All observations, data reductions, and abundance determinations were performed in house to ensure maximum homogeneity. Progenitor star masses (M<4M_sun) were inferred using two published sets of post-AGB model tracks and L and T_eff values. We conclude the following: 1) the mean value of N/O across the progenitor mass range exceeds the solar value, indicating significant N enrichment in the majority of our objects; 2) the onset of hot bottom burning appears to begin around 2 solar masses, i.e., lower than the ~5 M_sun implied by theory; 3) most of our objects show a clear He enrichment, as expected from dredge-up episodes; 4) the average sample C/O value is 1.23, consistent with the effects of third dredge-up; and 5) the model grids used for comparison with the observations successfully span the distribution over metallicity space of all C/O and many He/H data points but mostly fail to do so in the case of N/O. The evident enrichment of N in PN and the general discrepancy between the observed and model-predicted N/O abundance ratios signal the need for extra mixing as an effect of rotation and/or thermohaline mixing in the models. The unexpectedly high N enrichment that is implied here for low mass stars, if confirmed, will likely impact our conclusions about the source of N in the Universe.
INTRODUCTION
Galaxies evolve chemically because hydrogen-rich interstellar material forms stars which subsequently convert a fraction of the hydrogen into heavier elements. These nuclear products are expelled into the interstellar medium and thereby enrich it. As this cycle is continuously repeated, the mass fraction of metals rises. Additional factors which influence the metal abundances in galaxies include the exchange of gas with the intergalactic medium via inflow and outflow.
A crucial component for understanding the rate at which the interstellar abundance of a specific element rises over time is the amount of the element that is synthesized and expelled by a star of a specific mass during its lifetime, i.e., the stellar yield. Generally, stellar yields are estimated by computing stellar evolution models that predict them. These models are constrained using elemental abundance measurements of the material that is cast off from the star in the form of winds propelled by radiation pressure, periodic expulsions by stellar pulsations, or sudden ejection caused by explosions.
In the current study, we are interested in the production of He, C and N by low and intermediate mass stars (LIMS), that is, those stars typically considered to occupy the mass range of 1-8 M_⊙. Stellar models suggest that internal temperatures become sufficiently high either in the cores or outer shells of these stars to drive not only the conversion of H to He via the proton-proton chain reactions, but also the triple alpha process as well as the CN(O) cycle to produce C and N, respectively. Observationally, there is overwhelming evidence that LIMS do indeed synthesize and eventually expel measurable amounts of elements such as He, C, N and perhaps O, as well as s-process elements [see articles by Herwig (2005), Kwitter & Henry (2012), Karakas & Lattanzio (2014), Delgado-Inglada (2016), Maciel, Costa & Cavichia (2017) and Sterling (2017)]. However, the impact that LIMS actually have, relative to massive stars, on the chemical evolution of these elements in a galaxy is still very much open for debate.
The material that is cast off by LIMS during and after the AGB stage in the form of winds of varying speeds can subsequently form large-scale density enhancements and become photoionized by the UV photons produced by the hot, shrinking stellar remnant, forming a planetary nebula (PN). The photon energy absorbed by the nebula results in the production of detectable emission lines that can be analyzed in detail to infer abundance, temperature and density information about the PN.
PN abundance patterns reflect the nature of the chemical composition of the LIMS atmospheres at the end of stellar evolution and are therefore useful in two ways. First, the abundances of alpha, Fe-peak and r-process elements relative to H, especially O/H, Ne/H, S/H, Ar/H and Cl/H in PN, evidently represent the levels of these elements that were present in the interstellar material out of which the progenitor star formed. This conclusion is strongly supported by a recent study by Maciel, Costa & Cavichia (2017). This team has recently compiled and analyzed a database containing abundance measurements of 1318 PN along with a second database containing similar information about 936 H II regions, the latter objects representing the current ISM abundance picture. Through the use of histograms and scatter plots, the authors show that both object types exhibit the same lockstep behavior of Ne/H, S/H and Ar/H, all versus O/H 1 . This familiar result strongly supports the idea that LIMS do not themselves alter the levels of the alpha elements that were present in the interstellar material out of which they formed. As a result, PN can be used as probes of ISM conditions at the time of progenitor star formation 2 .
Second, and more relevant to our current study, elements such as He, C, N and s-process elements are found to be enriched in PN, and so measurements of their abundances provide valuable information about the nucleosynthesis that occurs during the lifetime of PN progenitor stars. Figures 1, 2 and 3 compare PN values of He/H, C/O and N/O with analogous values of objects such as H II regions and F and G dwarfs, all of which measure the interstellar values of the ratios involved either currently (H II regions) or at the time of their formation (stars). Original data for the MWG Disk PN points in these three figures can be found in Henry et al. (2000); Henry, Kwitter, & Balick (2004); Kwitter & Henry (2012); Dufour et al. (2015).
The relatively narrow horizontal band (especially in the cases of C/O and N/O) populated by the H II regions and stars in each graph demonstrates how He/H, C/O and N/O generally behave as metallicity changes. These patterns of chemical evolution are reflections of the details of stellar evolution and nucleosynthesis, processes which apparently are universal and space invariant. Presumably, when the progenitor stars of the PN in the plots began their lives on the main sequence, they were located along these bands at a position near the PN's current O/H value.
PN values of He/H, C/O and N/O clearly fall above these bands in nearly every case, strongly suggesting that He, C, and N have been significantly enriched by nucleosynthesis in nearly all progenitor stars over their lifetimes. High PN values for these three abundance ratios have been observed previously. For example, Henry (1990a) compiled the He/H and log(N/O) measurements by Aller & Czyzak (1983); Aller & Keyes (1987) for 84 Galactic PN and found the average values of these two ratios to be 0.11 and -0.38, respectively. From their large sample of southern PN, Kingsburgh & Barlow (1994) found similar average values for He/H and log(N/O) of 0.115 and -0.33, respectively. In the case of C/O, the log of our average value for objects in the current study (see Table 5) is log(C/O)=0.088 compared with 0.06 from Kingsburgh & Barlow (1994) (see their Table 14) 3 . In addition, simple eyeball comparisons of the ranges of all three ratios shown in Henry (1990a) with Henry (1990b, erratum), the figures in Kingsburgh & Barlow (1994) and our Figs. 1-3 in the current paper show good consistency among these studies and reinforce the point that these ratios in PN are generally enhanced relative to the levels found in H II regions of similar metallicity.
Regarding the apparent N enhancement in PN in particular, theory predicts that the hot bottom burning process that explains the extent of the enrichment occurs in the AGB stage of stars whose progenitors were at least 3-4 M ⊙ depending upon the star's metallicity. Yet, based upon the properties of the stellar initial mass function, we also know that in the absence of an unknown selection effect, most of the PN included in these figures must be the products of relatively low mass progenitors, i.e., 1-2 M ⊙ and should therefore show very little N enrichment. How can we reconcile this observational result with theory?
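The dominance of low-mass progenitors follows directly from the steep slope of the initial mass function. A minimal sketch, assuming a single-slope Salpeter IMF, ξ(m) ∝ m^(−2.35), and our own choice of mass bins:

```python
def n_stars(m_lo, m_hi, alpha=2.35):
    """Relative number of stars born with masses in [m_lo, m_hi] (Msun)
    for a power-law IMF xi(m) ~ m**-alpha (Salpeter: alpha = 2.35)."""
    return (m_lo**(1.0 - alpha) - m_hi**(1.0 - alpha)) / (alpha - 1.0)

print(n_stars(1, 2) / n_stars(3, 8))  # ~3.6: 1-2 Msun stars dominate
print(n_stars(1, 3) / n_stars(3, 8))  # ~4.6: cf. the "roughly five
                                      # times" quoted in section 5
```

Under this assumption, most PN in an unbiased sample should indeed descend from roughly 1-2 M_⊙ progenitors, which is what makes the observed N enrichment puzzling.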
The purpose of our investigation here is to confront recently published stellar model predictions of PN abundances with the observed abundances of He, C, N and O. We consider the models of four different research groups and evaluate each set of models based upon how well they appear to explain the observed abundances. Since these same models also predict the total stellar yield of each element, only a fraction of which is present in the visible nebula, our results can be used to assess the relevance of the yield predictions for use in chemical evolution models.

2 We qualify this seemingly tidy picture by pointing out that oxygen enrichment in PN has been reported by Péquignot et al. (2000) and more recently in C-rich PN by Delgado-Inglada et al. (2015) and García-Hernández et al. (2016).

3 Note that we have estimated their averages for N/O and C/O from their separate averages of N, C and O.

[Figure 1 caption: Open black circles refer to PN located in the MWG disk and taken from our extended sample, filled red squares represent Galactic H II regions from Deharveng et al. (2000) and the filled magenta triangle shows the solar position (Asplund et al. 2009). The position of Orion (Esteban et al. 2004) is indicated.]
Previous studies comparing model predictions and observations have been carried out by Marigo et al. (2003), Stanghellini et al. (2009), Delgado-Inglada et al. (2015), Ventura et al. (2015), Lugaro et al. (2016), and García-Hernández et al. (2016). The principal method of comparison for these studies features plots of two different element-to-element ratios, e.g., C/O versus N/O, showing both the observed abundances and model tracks computed for a range of stellar masses. Most authors find that abundance trends involving He, C and N can be explained by various amounts of third dredge-up, which elevates C, and hot bottom burning, which does likewise to N. However, explanations of PN abundance patterns based upon progenitor masses are typically not included.

[Figure 2 caption: Open black circles refer to PN located in the MWG disk and taken from our extended sample, red filled circles represent H II regions from Garnett (1995, 1997, 1999), MWG disk stars from Gustafsson et al. (1999) are shown with blue filled circles, MWG metal-poor halo stars from Akerman et al. (2004) are indicated with orange filled diamonds, green open squares and diamonds indicate LMC and SMC PN by Stanghellini et al. (2005, 2009), respectively, and red filled squares correspond to low-metallicity dwarf galaxies by Berg et al. (2016). The maroon filled triangle and magenta filled diamond represent Orion (Esteban et al. 2004) and the Sun (Asplund et al. 2009), respectively.]
Our study augments earlier analyses by also considering each of the ratios of He/H, C/O or N/O separately as a function of an object's progenitor mass. The sample of PN abundances which we compare to model predictions consists of 35 objects that have previously been observed and analyzed by our group. We have observed all objects in the optical with ground-based telescopes and 13 out of the 35 PN in the UV using either IUE or HST.

[Figure 3 caption: Open black circles refer to PN located in the MWG disk and taken from our extended sample, filled blue circles represent blue compact galaxies (Izotov & Thuan 1999), filled red circles are H II regions (van Zee et al. 1998), open green squares and open maroon diamonds are PN from the LMC and SMC, respectively, from Stasińska, Richer, & McCall (1998), and filled orange circles are low-metallicity galaxies from Izotov, Thuan, & Guseva (2012). The maroon filled triangle and magenta filled diamond represent Orion (Esteban et al. 2004) and the Sun (Asplund et al. 2009), respectively.]
We describe the PN sample in detail in section 2. Our methods for determining the necessary abundances and progenitor mass for each object are provided in section 3. A description of each stellar modelling code used to predict the PN abundances and stellar yields of He, C and N, along with an analysis of our comparison of theory and observation are presented in section 4. Our summary and conclusions appear in section 5.
OBJECT SAMPLE
For nearly 25 years our team has been building a spectroscopic database comprising 166 planetary nebulae located primarily in the disk and halo of the Milky Way Galaxy. While a vast majority of the observations have necessarily been restricted to the optical region of the spectrum, i.e., 3700Å to 10,000Å, we have also collected UV data for a smaller sample using both the IUE and HST facilities. Most of these data, along with derived abundances of He, N, O, Ne, S, Ar, and in several cases C, have been published. Because we are currently interested in comparing our observed CNO abundances in PN with theoretical predictions of the abundances of these same elements as a function of initial stellar mass, it is necessary to identify a subset of our database for which we can infer progenitor star masses that are based upon carefully and consistently determined central star luminosities and effective temperatures.
Initial stellar masses can be derived by using published values for T_eff and log(L/L_⊙) of each central star to place the star in a theoretical HR diagram. After plotting post-AGB evolutionary tracks labeled by mass in the same diagram, stellar masses can be inferred by interpolating between tracks 4 .
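The interpolation step can be illustrated with a short Python sketch. The track data below are hypothetical placeholders, with each post-AGB track reduced to a single representative (log T_eff, log L/L_⊙) point per mass, whereas real tracks are curves that must be interpolated along; this is a schematic of the procedure rather than a reproduction of it.

```python
import numpy as np

tracks = {  # initial mass (Msun) -> (log Teff, log L/Lsun), hypothetical
    1.0: (4.9, 3.5),
    1.5: (4.9, 3.7),
    2.5: (4.9, 4.0),
    4.0: (4.9, 4.3),
}

def progenitor_mass(logT, logL):
    """Linearly interpolate an initial mass from the star's position
    between the two tracks that bracket it in luminosity."""
    masses = sorted(tracks)
    lums = [tracks[m][1] for m in masses]  # must be increasing
    return float(np.interp(logL, lums, masses))

print(progenitor_mass(4.9, 3.85))  # -> 2.0, between the 1.5 and 2.5 tracks
```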
The extensive compilation of stellar data by Frew (2008, Tables 9.5 and 9.6, each comprising 210 objects) was adopted as our source of T ef f and log(L/L ⊙ ) for reasons of consistency. A total of 32 objects with N and O abundances from our database were also listed in the Frew paper. We have also measured C abundances using UV emission lines of C III] λλ1907,1909 for 10 of the 32 PN. Besides abundances of N and O, we have determined C abundances for three other objects in our database which are not part of the Frew list and have included these objects in order to maximize the sample size for objects with measured C abundances. Thus, our final object list contains 35 PN (about 1/5 of the objects in our original database), all of which have measured N and O abundances and including 13 objects with measured C abundances. We emphasize the fact that the spectroscopic observations of the 35 PN, as well as the data reductions and abundance analyses, were carried out exclusively by members of our team.
Our final sample of 35 objects is listed in Table 1 5 . For each PN identified in column 1 we provide a morphological description in column 2 and the Peimbert type in column 3. Column 4 indicates the spectral range over which we have observed the object (OP=optical; IUE/HST=UV data source). Finally, columns 5 and 6 list the galactocentric distance in kiloparsecs and the vertical height in parsecs above the Galactic plane for each object. Taking the distance of the Sun from the Galactic center as 8 kpc and the scale height of the thin disk as about 350 pc, we see that most of the PN in our sample are located near the solar neighborhood and within the thin disk. We also note that while the values of the He/H, C/O and N/O abundance ratios over the MW disk are sensitive to metallicity as measured by O/H, the O/H ratio only decreases by 0.23 dex between 6 and 10 kpc in galactocentric distance, assuming an O/H gradient of -0.058 dex/kpc. From Figs. 1-3, this corresponds to only minor changes in He/H, C/O and N/O, and so we can ignore the effects of the disk's metallicity gradient.

4 We are very much aware of the pitfalls of using this method to determine central star and progenitor star masses. Problems stem primarily from the small separation between adjacent model evolutionary tracks in the luminosity-temperature plane that are used to infer these masses, given the uncertainties of the observed values of these two parameters. However, we are confident that in using this method we can at least tell if a progenitor star is inside or outside of a mass range for which theory predicts C enrichment through triple alpha burning and dredge-up, or N enrichment through hot bottom burning.

5 Fg1 and NGC 6826 are the only objects in our sample with any evidence of binary central stars. According to Boffin et al. (2012), Fg1 has a period of 1.2d. NGC 6826 has a fast rotating central star, which is something that can only be achieved in a merger (De Marco et al. 2015). However, neither of these objects exhibits any abundance peculiarities, according to our data. For now, we have assumed that the presence of a secondary star does not affect our results.
The galactocentric distance of each object was computed as R_G = [D² cos²(b) + R_⊙² − 2 R_⊙ D cos(b) cos(l)]^(1/2), where D is the object's heliocentric distance, R_⊙ is the Sun's galactocentric distance of 8 kpc, and l and b are heliocentric galactic coordinates. D, l and b are taken from Frew (2008).
The vertical height above the Galactic plane was computed as z = D sin(b), where D and b are the object's heliocentric distance and galactic latitude, respectively, and are taken from Frew (2008). Table 2 provides the details concerning the observations of each of our 35 sample objects. The name of the PN appears in column 1. Columns 2-7 list the observation date, the telescope(s) and instrument(s) used, the times for the blue and red exposures, and the offset from the central star, respectively. The relevant references for the observations are given in column 8.
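A minimal sketch of the two coordinate relations above; the function name and test values are ours:

```python
import numpy as np

R_SUN = 8.0  # Sun's galactocentric distance (kpc), as adopted in the text

def galactocentric(D, l_deg, b_deg):
    """Galactocentric distance (kpc) and height above the plane (pc)
    from heliocentric distance D (kpc) and galactic coordinates l, b."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    R_G = np.sqrt((D * np.cos(b))**2 + R_SUN**2
                  - 2.0 * R_SUN * D * np.cos(b) * np.cos(l))
    z_pc = 1000.0 * D * np.sin(b)
    return R_G, z_pc

print(galactocentric(1.0, 90.0, 10.0))  # hypothetical PN at D = 1 kpc
```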
Beginning with our first project in 1993, all data have been reduced and measured manually by one of us (KBK) using the same techniques throughout. Uncertainties were explicitly measured and calculated in our early papers; then experience taught us that we could estimate them from the line strengths themselves. ELSA (see §3.1) calculates statistical uncertainties, but no systematics are included. The former are then propagated through to the final intensities and diagnostics. Systematic errors are minimized by employing the same set of atomic data for abundance determinations throughout and by having a homogeneous data reduction and measuring pipeline, all performed by the same individual. The original line strengths are available in the relevant papers provided in Table 2.
Nebular Abundances
We have published abundances of He, N, O and in some cases C previously in papers indicated in the footnote to column 8 in Table 2. However, we sought to render the abundances more homogeneous by recomputing all of them using the same updated abundance code along with the newly-published ionization correction factors by Delgado-Inglada et al. (2014) in the cases of total He, C and O abundances.
Ionic abundances were determined using the code ELSA (Emission Line Spectral Analysis), a program whose core is a 5-level atom routine. Emission line strengths and their uncertainties used as input to ELSA were taken from the references listed in Table 2. We used an updated version of the program originally introduced by Johnson et al. (2006), where the major change was the addition of a C III] density diagnostic routine based upon the λ1907/λ1909 line strength ratio (C III] λ1909 was already included in the program). The important emission lines besides Hβ that were used in the ionic abundance computations for each object included He I λ5876, He II λ4686, and C III] λλ1907,1909. The resulting ionic abundances and uncertainties with respect to H+ produced by ELSA are presented in Table 3. The object names are given in column 1 followed by column pairs containing the abundances and uncertainties for each ion labeled in the header. Uncertainties for the ionic abundances are computed internally by ELSA and are the result of contributions from: 1) the uncertainties in the line strength ratios, e.g., I_λ/I_Hβ; and 2) the uncertainties in the reaction rate coefficients (radiative recombination or collisional excitation rate coefficients) that stem from errors in electron temperature.
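ELSA itself is not publicly documented here, but the same C III] density diagnostic can be sketched with PyNeb as a stand-in multi-level-atom solver, assuming PyNeb's atomic data for C²⁺ include the λλ1907,1909 doublet; the line ratio below is a made-up value.

```python
import pyneb as pn

# Electron density from the C III] 1907/1909 doublet ratio, analogous
# to the diagnostic added to ELSA.  The ratio is hypothetical and the
# temperature is fixed at a typical nebular value.
c3 = pn.Atom('C', 3)
ratio_1907_1909 = 1.2  # I(1907)/I(1909), made-up value
ne = c3.getTemDen(ratio_1907_1909, tem=1.0e4, wave1=1907, wave2=1909)
print(f"n_e ~ {ne:.0f} cm^-3 at Te = 10^4 K")
```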
Ionic abundances in Table 3 were converted to total elemental abundances using the ionization correction factors provided in Table 4. For N/O we followed Kingsburgh & Barlow (1994) and Kwitter & Henry (2001) and assumed that N/O=(N+/O+). 6 We have assumed throughout that the contribution of neutral He is negligible in all objects. With the possible exception of IC 418, this is justified by the fact that the O+2/O+ abundance ratio is greater than unity (see Table 3), since the ionization potential of O+ (35.1 eV) greatly exceeds that of He0 (24.6 eV). Concerning IC 418, Dopita et al. (2017) present observations complementing those of Henry et al. (2000) and Sharpee et al. (2003). Dopita et al. (2017) also construct a detailed nebular model that implies an abundance ratio of He/H=0.11, significantly higher than our value of 0.07. Therefore, our neglect of neutral He in IC 418 may be unwarranted, in which case our inferred He abundance may in fact be too low. This uncertainty obviously affects the position of IC 418, currently at He/H=0.07, in Figs. 7, 10 and 11.
Our final elemental abundances and uncertainties appear in Table 5. Object names are provided in column 1, while column 2 contains our estimate of the progenitor star mass for that object. These masses were inferred according to the method described in the next subsection. Beginning with column 3, pairs of columns list the elemental number abundances and uncertainties for He/H, C/O, N/O and O/H. The uncertainties were rigorously determined by adding in quadrature the partial uncertainty contributions from each ion involved in the total element computation as well as the ICF uncertainty 7 . The results provided in Table 5 will be analyzed in detail in §4, following our discussion of the method used to determine progenitor masses.
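The quadrature sum itself is straightforward; a toy example with made-up per-ion and ICF uncertainty contributions:

```python
import math

def quad_sum(*sigmas):
    """Combine independent uncertainty contributions in quadrature."""
    return math.sqrt(sum(s * s for s in sigmas))

sigma_ionic = [0.12e-5, 0.08e-5]  # per-ion abundance uncertainties (made up)
sigma_icf = 0.05e-5               # ICF contribution (made up)
print(quad_sum(*sigma_ionic, sigma_icf))
```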
Progenitor Star Masses
Central star and progenitor masses were estimated by plotting the position of each central star in the log(L/L_⊙)-log T_eff plane along with theoretical post-AGB evolutionary tracks and interpolating between tracks for each of our 35 objects. The values of log(L/L_⊙) and log T_eff were taken from Frew (2008) for 32 of our 35 sample objects. For the three sample objects not included in Frew (2008) (IC 2165, IC 3568 and NGC 5315) we assumed the L and T values derived from models in Henry et al. (2015).
We decided to base our analysis on the log(L/L_⊙) and log T_eff values for each of our objects found in Frew (2008) because of the thoroughness of the procedures which he used to obtain these values. In his compilation of log(L/L_⊙) values, Frew vetted all published V magnitude estimates for quality and then averaged the best values for each central star. Absolute visual magnitudes were then determined via a distance modulus, where distances were inferred from a new relation developed in Frew (2008) between the Hα surface brightness and nebular radius of a PN. Following the application of a bolometric correction, bolometric magnitudes were converted to solar luminosities. The effective temperature of each central star was determined by Frew using the H and He Zanstra temperature methods in most cases. Table 6 contains our adopted values for log(L/L_⊙) and log T_eff in columns 2 and 3, respectively, for each PN listed in column 1.
We experimented with two sets of post-AGB evolutionary tracks: those by Vassiliadis & Wood (1994; Z=0.016; VW) and Miller Bertolami (2016; Z=0.010; MB). Model sets differing in authorship as well as metallicity were chosen deliberately in order to test the effect upon inferred masses. For each set we plotted tracks in a separate log(L/L_⊙)-log T_eff diagram and then placed our sample objects in the graph using our adopted values of these two stellar properties listed in Table 6. The final/initial mass associated with each track is designated by track color as defined in each figure's legend. Representative error bars for the observed values, shown in the lower right of each figure, are taken directly from Fig. 9.8 of Frew (2008), since uncertainties for individual objects were not provided. Because each track is associated with a specific initial and final mass, we carefully measured each object's displacement from adjacent tracks to arrive at the masses listed in columns 4 and 5 of Table 6, respectively. The average of these two masses is listed in column 6 of that table as well as in column 2 of Table 5. Figure 6 is a plot of masses from column 5 versus those in column 4 of Table 6. The straight line shows the one-to-one relation. For a vast majority of objects, the progenitor masses (M_i) determined using the MB tracks tend to be smaller than those determined from the VW tracks by about 0.3 M_⊙. This systematic difference is a direct consequence of the higher luminosity of the MB models during the constant luminosity stage resulting from the updated treatment of the evolutionary stages that precede the post-AGB stage. However, this offset is less than our estimated uncertainty of ±0.5 M_⊙ and therefore is likely insignificant for our purposes here. Interesting exceptions are the five objects with M_i ≲ 1 M_⊙, for which the MB tracks are slightly less luminous than those of VW. This leads to significantly higher extrapolated masses (M_i ∼ 0.8 M_⊙) for three of the objects when using the MB tracks, instead of the unrealistically low M_i ∼ 0.5 M_⊙ obtained with the VW tracks. To understand the differences in the predictions of the different theoretical models, and also to extract some physical insight from their comparison with the observations, it is necessary to keep in mind the different physical assumptions of each grid. The evolution of the surface abundances of AGB stellar models is particularly sensitive to the adopted physics on the AGB. In addition, the properties of the stellar models in advanced evolutionary stages, such as the AGB, are affected by the modeling of previous evolutionary stages. The latter is particularly true regarding the treatment of mixing processes such as rotationally induced mixing or convective boundary mixing (or overshooting) during H- and He-core burning stages.
Description of the Model Codes
We now briefly review the treatment of these key ingredients in the four grids adopted here for the comparison: the MONASH grid (Karakas 2014), the LPCODE grid (Miller Bertolami 2016), the ATON grid (Ventura et al. 2015; Di Criscienzo et al. 2016) and the FRUITY database (Cristallo et al. 2011, 2015). While all the models discussed here include an up-to-date treatment of the microphysics, and all of them neglect the impact of rotation, the theoretical models discussed in this section have some key differences in the modeling of winds and convective boundary mixing processes. These differences will affect the predicted evolution and final abundances during the TP-AGB. Based on the treatment of winds on the AGB, the models can be roughly divided into two groups. On one hand we have the MONASH and FRUITY models that adopt a single relation between the pulsational period P and the mass-loss rate Ṁ for both C-rich and O-rich AGB stars. The mass-loss recipe Ṁ(P) adopted by the MONASH models is the well-known formula by Vassiliadis & Wood (1993, eqs. 1, 2, & 5), while the FRUITY models adopt a similar prescription derived by Straniero et al. (2006, see their §5). On the other hand, we have the implementations by the ATON and LPCODE grids that incorporate a different treatment for the C-rich and the O-rich AGB winds. The ATON code adopts the empirical law by Bloecker (1995, eqs. 1 & 16 with η_R = 0.02) reduced by a factor of 50 for the O-rich phase and the theoretical mass-loss rates by Wachter et al. (2008, eqs. 1, 2 & 3) for C-rich winds. The LPCODE models adopt the empirical law by Groenewegen et al. (1998) for the C-rich phase, while winds for the O-rich phase mostly follow the Schröder & Cuntz (2005) law. These laws appear as eqs. 1, 2, 3, & 5 in Miller Bertolami (2016).
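A sketch of a Vassiliadis & Wood (1993)-style pulsation-period wind is given below: an exponential rise of Ṁ with period P, capped by a radiation-pressure-driven "superwind". The coefficients follow the commonly quoted VW93 fits, but the minimum-velocity floor and the use of a simple min() switch between regimes are our simplifications; consult the original paper before using these numbers quantitatively.

```python
C_LIGHT = 2.998e10          # speed of light (cm/s)
MSUN_PER_YR = 6.303e25      # 1 Msun/yr expressed in g/s

def mdot_vw93(P_days, L_cgs):
    """Approximate mass-loss rate (Msun/yr) for pulsation period P (days)
    and luminosity L (erg/s), following the VW93-style prescription."""
    mdot = 10.0 ** (-11.4 + 0.0123 * P_days)            # early-AGB regime
    v_exp = min(15.0, max(3.0, -13.5 + 0.056 * P_days)) # km/s, clamped
    superwind = L_cgs / (C_LIGHT * v_exp * 1e5) / MSUN_PER_YR
    return min(mdot, superwind)                          # regime switch

print(mdot_vw93(400.0, 3.8e37))  # ~10^4 Lsun star, hypothetical inputs
```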
Even more important than the treatment of winds is the treatment of convective boundary mixing (or overshooting) during the TP-AGB phase as well as in previous evolutionary stages. Again, the models can be roughly separated into two groups regarding the treatment of overshooting during core-burning stages. As before, on the one hand we have the MONASH and FRUITY models that do not include any kind of convective boundary mixing processes on the upper main sequence, where stars have convective cores. However, later, during the He-core burning stage, FRUITY models include convective boundary mixing in the form of semiconvection (Cristallo et al. 2011). And while the MONASH models do not include any explicit prescription for convective boundary mixing, a similar result would be expected from their adopted numerical algorithm to search for a neutrally stable point at the outer boundary of the convective core (Lattanzio 1986). On the other hand, the ATON and LPCODE models include overshooting on top of the H-burning core with its extension calibrated to fit the width of the upper main sequence. Both grids keep the same calibrated overshooting for the convective core during the core He-burning stage. From this difference alone in the treatment of convective boundary mixing before the TP-AGB, one should expect third dredge-up (TDU)
and hot bottom burning (HBB) to develop at lower initial masses (M_i) in the ATON and LPCODE models than in the models of the MONASH and FRUITY grids.
Regarding convective boundary mixing on the TP-AGB, two convective boundaries are key for the strength of TDU events during the TP-AGB [see Herwig (2000)]. These are the boundary mixing at the bottom of the pulse-driven convective zone (PDCZ) that develops in the intershell region during the thermal pulses, and the boundary mixing at the bottom of the convective envelope (CE). The inclusion of overshooting at both convective boundaries increases the efficiency of TDU and lowers the threshold in initial stellar mass above which TDU develops. In addition, the inclusion of overshooting at the bottom of the PDCZ leads to the dredging up of O from the CO core, increasing the intershell and surface O abundances.
The treatment of these convective boundaries varies widely in the four grids discussed here. The MONASH models do not include any explicit prescription for convective boundary mixing. However, some overshooting at convective boundaries does occur as a consequence of the adopted numerical algorithm for the determination of the convective boundaries (Lattanzio 1986). On the contrary, the FRUITY, ATON and LPCODE models adopt different implementations of an exponentially decaying mixing coefficient (Freytag et al. 1996) beyond the formally convective boundaries, and with different intensities. While FRUITY models include strong overshooting at the bottom of the CE but no overshooting at the PDCZ, LPCODE models adopt a moderate overshooting at the base of the PDCZ and no overshooting at the bottom of the CE. Finally, the ATON models adopt a very small amount of overshooting both at the bottom of the PDCZ and the CE.
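The exponentially decaying coefficient has a simple functional form; the sketch below follows the Freytag et al. (1996)-style parametrization as commonly implemented (e.g., with an efficiency parameter f scaling the pressure scale height). D0, f and Hp are illustrative values only, not those of any of the four grids.

```python
import numpy as np

def d_overshoot(z, D0=1e15, f=0.016, Hp=1e9):
    """Mixing coefficient (cm^2/s) a distance z (cm) beyond the formal
    convective boundary: D = D0 * exp(-2 z / (f * Hp)), where D0 is the
    convective diffusion coefficient at the boundary and Hp the local
    pressure scale height.  All values here are placeholders."""
    return D0 * np.exp(-2.0 * z / (f * Hp))

for z in (0.0, 1e7, 5e7):
    print(z, d_overshoot(z))  # decays by ~3 orders over a few 1e7 cm
```

The efficiency parameter f is what the groups calibrate (or choose) differently at each boundary, which is the quantitative content of the "strong", "moderate" and "very small" overshooting distinctions drawn above.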
While there are strong arguments in favour of the inclusion of moderate overshooting during the main sequence (Schaller et al. 1992; Pietrinferni et al. 2004; Weiss & Ferguson 2009; Ekström et al. 2012), the situation on the AGB is much less clear. In fact, trying to fit all available observational constraints by means of a simple overshooting prescription might not even be possible [see Weiss & Ferguson (2009); Karakas & Lattanzio (2014); Miller Bertolami (2016)]. This fact, together with the lack of compelling theoretical arguments and the lack of a common observational benchmark for AGB theoretical evolution models, has led authors to the adoption of very different approaches.
Finally, we note that convection in the ATON code is computed with the full spectrum of turbulence model, which leads to stronger HBB (Ventura & D'Antona 2005) when compared with models that adopt the standard mixing length theory [Cristallo et al. (2011) and Miller Bertolami (2016)].
In summary, we can roughly divide the four grids into two main groups: 1) the MONASH and FRUITY models that neglect convective boundary mixing during the main sequence, do not include overshooting in the PDCZ and adopt a single wind formula for both the C- and O-rich phases; and 2) the ATON and LPCODE models which calibrate overshooting during core H-burning to the width of the main sequence, adopt the same overshooting for the core He-burning phase, include some overshooting at the bottom of the PDCZ, and adopt different wind prescriptions for the C- and O-rich phases. Note, however, that all grids adopt different treatments of convective boundary mixing during the TP-AGB.
In addition to the differences in the adopted physics, there is another difference related to the point at which each sequence is terminated. Due to the convergence problems experienced by stellar models at the end of the AGB, different authors choose to stop their sequences at some point before the end of the AGB, missing the last thermal pulse(s). Although the efficiency of third dredge-up drops at the end of the AGB, some significant changes in the surface abundances can still happen in the last thermal pulses. This is because when the H-rich envelope mass has already been reduced by more than one order of magnitude, a much smaller amount of processed material needs to be dredged up to the surface to affect the final surface abundances. This is an important difference between the FRUITY, MONASH and ATON models, which do not reach the post-AGB phase, and the LPCODE grid models, which are computed up until the white dwarf stage. LPCODE models show abundance variations due to the timing of the last AGB thermal pulse.

[Figure 7 caption: The squares represent objects whose progenitor masses were determined using the evolutionary tracks of Vassiliadis & Wood (1994, Fig. 4), while the circles similarly refer to the tracks of Miller Bertolami (2016, Fig. 5). Error bars for individual objects have been suppressed for clarity, while a representative set of error bars is provided in the upper left corner of the plot. The horizontal black dotted line indicates the solar He/H value of 0.085 as determined by Asplund et al. (2009). Model predictions by the MONASH (red lines) and LPCODE (green lines) codes are shown for the metallicities given in the legend and designated in the graph by line type.]
Analysis
Our primary results involving the behavior of He/H, C/O and N/O versus progenitor mass appear in Figs. 7-9.
Objects in our sample are shown with connected pairs of open squares and circles. The squares represent objects whose progenitor masses were determined using the evolutionary tracks of Vassiliadis & Wood (1994, our Fig. 4), while the circles refer to the tracks of Miller Bertolami (2016, our Fig. 5); an unpaired symbol indicates that the two derived masses were identical. For clarity, only a representative set of error bars is provided in each graph, where the vertical bar indicates the average of the relevant uncertainties given in Table 5. Also included in the plots are model abundance predictions for PN ejecta by the MONASH, LPCODE, ATON and FRUITY grids (He/H predictions by the FRUITY and ATON grids were roughly constant at 0.10 and 0.095, respectively, and were not included in Fig. 7). Line colors and types refer to the specific grid and metallicity, respectively, as defined in the figure legend. The horizontal and vertical black dotted lines show the solar values (Asplund et al. 2009).
The behavior of He/H versus progenitor mass is shown in Fig. 7. Relative to the solar value, all of our sample members except two show He enrichment. Conspicuous outliers include NGC 6537 (He/H=0.17±.02) in the upper right, and IC 418 (He/H=0.07±.01) and NGC 2392 (He/H=0.08±.01), both located below the solar line. NGC 6537 is a Peimbert Type I PN, a class which characteristically shows an enhanced He abundance.
Considering the He/H uncertainties, the MONASH and LPCODE model grids span the area occupied by the majority of points. Note, though, that in the case of the MONASH models, some of this success is achieved only by including the Z=0.030 model set, i.e., a metallicity roughly twice the solar value. This result is at odds with the metallicities which we measured for our sample of objects, where nearly all have O/H values 8 in Table 5 at or below the solar level of 4.90 × 10^-4. In addition, both the MONASH and LPCODE models predict a slight rise in He/H with metallicity, but the observational uncertainties of He/H likely obscure this theoretically predicted trend; if it indeed exists, it would be difficult to see it in the data. And while deeper spectra may increase the S/N, accuracy would continue to be compromised by the errors introduced by flux calibration, dereddening, instrumental effects and uncertainties associated with atomic constants, including collisional corrections. We feel that uncertainties no smaller than ±0.005 (a vertical error bar of 0.01) could likely be obtained.
In general, the fact that most measured He/H ratios are above the solar value is in line with the expectations from stellar evolution theory, as all dredge-up events during post-main-sequence evolution lead to increases in the He/H ratio. It is well known that extra-mixing processes are needed to explain the abundance patterns in first red giant branch (RGB) stars located above the RGB bump (Charbonnel & Zahn 2007; Charbonnel & Lagarde 2010; Wachlin et al. 2011; Lagarde et al. 2012; Maeder et al. 2013). We refer here to mixing processes in addition to overshooting, such as rotationally induced mixing 9 or thermohaline mixing 10 . The fact that all grids fail to achieve the maximum observed values of He/H might be related to their neglect of extra-mixing processes in the pre-AGB evolution. According to Table 6, IC 418 had a progenitor mass of roughly 1.4±.5 M_⊙, while IC 4593's mass was originally 0.7±.5 M_⊙. The only model in Fig. 8 which predicts this much excess C within the mass range of the two progenitor stars is the one with M_i = 1.25 M_⊙ and Z = 0.010 in the LPCODE grid. Interestingly, that model attains its high surface carbon abundance due to a final thermal pulse when the mass of the central star is already reduced to 0.593 M_⊙. In this circumstance TDU leads to the mixing of M_TDU ≃ 0.003 M_⊙ from the H-free core into a H-rich envelope of M_env ≃ 0.027 M_⊙, significantly increasing the surface carbon abundance of the star. This example shows why it is necessary to keep in mind that final AGB thermal pulses coupled with low envelope masses can significantly change the surface abundances from those predicted by AGB stellar evolution models

8 We note that oxygen may not be a reliable metallicity indicator if significant amounts of O are dredged up to the surface or destroyed by hot bottom burning during the TP-AGB as predicted by some models -- see section 3.1.1 in Di Criscienzo et al. (2016) and Table 3 in Miller Bertolami (2016).

9 Rotationally induced mixing includes different types of mixing processes caused by the existence of rotation. These include mixing by meridional circulation and diffusion by shear turbulence in differentially rotating stars (Lagarde et al. 2012; Maeder et al. 2013).
10 Thermohaline mixing is a double-diffusive process that can develop in low-mass stars. This thermohaline instability takes place when the stabilizing agent (heat) diffuses away faster than the destabilizing agent (chemical composition), leading to a slow mixing process. Thermohaline mixing can happen in low-mass stars after the RGB bump and on the early AGB (Lagarde et al. 2012), where an inversion of molecular weight is created by the 3He(3He,2p)4He reaction on a dynamically stable structure.
Yet, it is necessary to emphasize that if the mass ejected after the last thermal pulse is too small, the final abundances of the central stars might be different from those displayed by their surrounding PN. The nebula might not be homogeneous and may be dominated by the material ejected before the star altered its surface composition in the last thermal pulse.
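To see how strongly such a late pulse can move the surface composition, here is a back-of-the-envelope dilution estimate for the M_TDU and M_env values quoted above; the envelope and intershell carbon mass fractions are assumed values of ours, not the model's.

```python
# Mix M_TDU of intershell material (carbon mass fraction X_C_shell)
# into the remaining H-rich envelope and recompute the surface value.
M_TDU, M_env = 0.003, 0.027          # Msun, from the LPCODE example
X_C_env, X_C_shell = 0.003, 0.20     # assumed mass fractions

X_C_new = (X_C_env * M_env + X_C_shell * M_TDU) / (M_env + M_TDU)
print(f"surface X(C): {X_C_env:.4f} -> {X_C_new:.4f}")  # several-fold jump
```

With these assumptions the surface carbon mass fraction rises by a factor of several from a single event, which is why a last pulse acting on a nearly exhausted envelope can dominate the final surface abundances.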
Each of the four sets of model tracks displayed in Fig. 8 generally predicts two trends regarding C/O in PN: 1) as progenitor mass increases, C/O increases slowly, peaks around 2.5-3.0 M ⊙ and then decreases; and 2) for constant progenitor mass, C/O increases with decreasing metallicity. Both of these predicted trends are well-known and the presumed causes are nicely summarized in Karakas & Lattanzio (2014, §3.3). In an AGB star, C is produced (and also dredged-up from the CO core) within the periodically unstable He shell by the triple alpha process and is subsequently transported to the H-rich outer envelope during TDU. According to models, the amount of C that is mixed up into the envelope is directly related to the efficiency of the dredge-up process, where the dredge-up efficiency is characterized by the ratio of the mass of material brought to the surface relative to the increase in mass of the C-O core during the process. Models indicate that this efficiency increases independently with increasing progenitor mass and decreasing metallicity. However, this process begins to be damped as the stellar mass approaches 4 M ⊙ in the case of the MONASH grid and 2.5-3 M ⊙ for the other three grids as C is converted to N via the CN cycle during HBB. The difference between the MONASH grid and the other three grids could be related to the lack of convective boundary mixing in the high mass models of the former grid, which leads to a less efficient HBB.
We turn now to the behavior of N/O versus progenitor mass featured in Fig. 9. Here we see that roughly 30% of our objects exceed the solar value of 0.14 for N/O by more than their uncertainties. We also observe an upward trend in N/O in the data with increasing birth mass up to about 3 M ⊙ .
The apparent nitrogen enrichment below 1.5 M_⊙ is likely the result of dredge-up events before the AGB phase. However, the upward trend beyond this point is rather substantial and likely is the product of HBB. Interestingly, the lowest mass at which HBB is predicted by the ATON and LPCODE models to begin is around 3 M_⊙, while MONASH and FRUITY models predict the onset of HBB at around 5 M_⊙ (outside of the figure range). This difference is mostly due to the implementation of overshooting during the main sequence evolution in ATON and LPCODE models.
Yet, the upward trend of N/O in our PN sample occurs at an even lower progenitor mass, with high N/O values corresponding to M_i ≳ 2.25 M_⊙. If our stellar mass determinations are reasonably correct, this result confirms the well established need to include overshooting in the modeling of the upper main sequence, and perhaps the need to include some additional mixing processes like rotation-induced mixing in main sequence intermediate mass stars (Ekström et al. 2012, Fig. 9).
An additional shortcoming of the models is that none of the sets spans the entire region occupied by our PN. In particular, the observations clearly suggest that stars with progenitor masses below 3 M_⊙ produce higher levels of N than are predicted by any of the models. As mentioned above, the failure of the models to account for the observed abundances of N in low-mass stars might be pointing to the need to include other mixing processes, such as rotation-induced mixing, during previous evolutionary stages (Charbonnel & Lagarde 2010). Figure 10 is a plot of C/O versus He/H, where we observe no apparent correlation between the values of these two ratios. As we saw earlier in Fig. 7, the observed He/H ratio for all but IC 418 and NGC 2392 is above the solar value. This strongly suggests that a majority of the objects in our sample experienced significant He enrichment during their evolution. All of the ATON models within the 1-4 M_⊙ range predict a He/H value of 0.10, hence the straight vertical lines for those models. We have offset their track for the 0.014 metallicity models slightly to the right to help distinguish the two tracks. The model tracks of the MONASH and LPCODE grids are consistent with the observations in the sense that each model set spans the space occupied by the bulk of the sample objects, i.e., those 11 PN which have He/H≥0.10. The ATON models appear to span the observed C/O values, but lack the range in He/H exhibited by the data.
The observational data in Fig. 11 suggest that the N enrichment seen earlier in Fig. 9 is accompanied by He enrichment; note that the plotted model range only reaches M_i = 3 M_⊙ for this grid. On the contrary, LPCODE models with Z=0.02 do show an upward trend in N/O as He/H increases, due to the action of TDU and HBB during a large number of thermal pulses in the M_i = 4 M_⊙ model. The ATON models seem to span the observed N/O values, although again there is no reported range in their He/H values. Overall, there is little theoretical evidence that any of the model grids completely spans the point positions of the observational data, a result we also see in Fig. 9. We conclude that the possible observational trend in Fig. 11, previously seen in Fig. 9, is likely reflecting the action of TDU and HBB.
Finally, Fig. 12 shows the relation of C/O versus N/O for the 13 objects for which we have C measurements. As we saw in Fig. 8, these data exhibit a wide variation in the C/O ratio, with several objects having values significantly larger than the solar value. These same objects also have relatively low values of N/O, where ratios range from near solar to slightly above it. Then there are the three PN with solar C/O values that appear to be decidedly enriched with N. All model sets predict a significant variation in enhanced C/O at relatively low N/O, while at higher N/O levels the C/O values approach the solar value of 0.55. The data appear to be consistent with the models, and generally speaking, all model sets appear to span the empirical data sufficiently, although the high N/O region contains only three PN. The data in this figure are consistent with the theoretical expectation that C and N are anti-correlated, as C from TDU is subsequently destroyed during HBB to produce N.
Summarizing our detailed comparison of models and observations, the empirical trends seen in Figs. 9 and 12, and perhaps 11, suggest the existence of HBB in stars with birth masses less than 4 M ⊙ , something that is only attained by models that include overshooting on the main sequence (ATON, LPCODE).
In more general terms, however, observations, when combined with model predictions of four independent model grids, currently demonstrate that all four grids are compatible with the data except in the case of N/O. That is, all grids seem capable of spanning the distribution of points in the cases of C/O and He/H. We suggest that future computational efforts consider the implication that the onset of HBB occurs at a lower initial mass than previously believed. This is the most important result of our study.
SUMMARY AND CONCLUSIONS
Helium, carbon and nitrogen are known through observations to be synthesized by stars within the mass range of 1-8 M ⊙ (low and intermediate mass stars, or LIMS). We demonstrated this plainly in Figs. 1-3, where we saw that the He/H, C/O and N/O abundance ratios as a function of metallicity in a large sample of PN systematically fall above ISM values for the same ratios measured by stars and H II systems.
To evaluate the significance of the relative contribution that LIMS make to the galactic chemical evolution of these three elements, we need to determine the amount of He, C and N that a star produces and releases into the interstellar medium, i.e., the stellar yield. Fortunately, a portion of this ejected matter forms a planetary nebula, and from the emission spectra produced by these objects, we are able to measure the abundances of He, C, N among other elements. Since theoretical models of LIMS predict both the total yield and the PN abundance, by comparing the observed abundances to theoretical predictions of the same we can simultaneously infer the yield.
The goal of this project has been to make a detailed comparison between observationally determined abundances of the elements He, C and N in planetary nebulae with theoretical predictions of the same by four different grids of stellar evolution models. We have carefully selected PN for which high quality spectra and good determinations of the luminosity and effective temperature of each associated central star are available. The optical and UV spectra consist exclusively of our own observations made with ground-based telescopes as well as HST/STIS and IUE.
To ensure homogeneity, all spectral data were reduced and measured in a consistent manner, and abundances were all determined using the same algorithms. Central star luminosities and effective temperatures in all but three cases were taken from Frew (2008), and central and progenitor star masses were inferred by plotting these values in L-T diagrams containing evolutionary tracks from Vassiliadis & Wood (1994) and Miller Bertolami (2016).
Our final sample contained 35 Galactic PN, 13 of which have C abundances measured from UV lines available. These 35 objects vary widely in morphology. All are either categorized as Peimbert type I or II. And most are located in the Galactic thin disk within 2 kpc of the Sun.
Combining the inferred abundances and stellar masses, we conclude the following: 1. The mean values of N/O across the observed progenitor mass range of 1-3 M ⊙ are well above the solar value. With respect to current theory, this is an unexpected result and suggests that extra-mixing is required in this stellar group to explain the N enrichment. Our results also suggest an increase in N/O with progenitor mass for M>2 M ⊙ , implying that the onset of hot bottom burning occurs at lower masses than previously thought.
2. All but two of our sample PN clearly show evidence of He enrichment relative to the solar value. This is expected, since both first and third dredge-up mix He-rich material into the stellar atmosphere prior to PN formation from expelled atmospheric matter.
3. The average value of measured C/O within our sample is 1.23, well above the solar value of 0.55 (Asplund et al. 2009). The standard deviation for the sample is 0.85. Evidence of C enrichment is present in roughly half of the sample of 13 objects for which we measured the C abundance. Interestingly, the PN with the higher C/O values seem to come from low mass progenitors with M ≈ 1 M_⊙.

4. The model grids to which we compared the observations successfully span the data points in the case of C/O. The models are also consistent with some, but not all, of the objects in terms of He/H. However, all of the models seem to fail in the case of N/O.
Our finding of elevated N/O in low mass stars, possibly due to an earlier-than-expected onset of HBB and/or the presence of extra-mixing, is the most significant result of our study. Further confirmation of this result will help markedly in the ongoing efforts to determine the provenance of N in the context of galactic chemical evolution. Because stars of masses between 1 and 3 M ⊙ are roughly five times more numerous than stars between 3 and 8 M ⊙ (assuming a simple Salpeter initial mass function), the potential impact of these low mass stars on the question of the chemical evolution of nitrogen is obviously significant. | 2017-09-01T16:51:08.000Z | 2017-08-29T00:00:00.000 | {
"year": 2017,
"sha1": "d678caa4a00048f53676a97913340eec7e090acc",
"oa_license": "CCBYNCSA",
"oa_url": "http://sedici.unlp.edu.ar/bitstream/handle/10915/93706/Documento_completo.pdf?sequence=1",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d678caa4a00048f53676a97913340eec7e090acc",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
537161 | pes2o/s2orc | v3-fos-license | Early intervention services, cognitive–behavioural therapy and family intervention in early psychosis: systematic review
Background: Early intervention services for psychosis aim to detect emergent symptoms, reduce the duration of untreated psychosis, and improve access to effective treatments. Aims: To evaluate the effectiveness of early intervention services, cognitive-behavioural therapy (CBT) and family intervention in early psychosis. Method: Systematic review and meta-analysis of randomised controlled trials of early intervention services, CBT and family intervention for people with early psychosis. Results: Early intervention services reduced hospital admission, relapse rates and symptom severity, and improved access to and engagement with treatment. Used alone, family intervention reduced relapse and hospital admission rates, whereas CBT reduced the severity of symptoms with little impact on relapse or hospital admission. Conclusions: For people with early psychosis, early intervention services appear to have clinically important benefits over standard care. Including CBT and family intervention within the service may contribute to improved outcomes in this critical period. The longer-term benefits of this approach and its component treatments for people with early and established psychosis need further research.
Early intervention services have been developed to address the needs of individuals with early psychosis. Typically, there is a delay between the onset of the first episode of psychosis and receiving an effective treatment -a period of untreated psychosis. 1 Reducing this duration of untreated psychosis (DUP) for people with schizophrenia may lead to an improved prognosis. [1][2][3][4] Early intervention services aim to detect emergent symptoms, reduce DUP, and improve early access to effective treatment, particularly in the 'critical period' (the first 3-5 years following onset). [5][6][7] Although at the time there was little evidence for the effectiveness of this approach, early intervention services were developed in Australia, the USA, Canada, New Zealand and elsewhere; and the widespread deployment of such services was recommended in the National Service Framework for Mental Health 8 and in the National Institute for Health and Clinical Excellence (NICE) guideline on schizophrenia for England and Wales. 9 Since then, the provision of early intervention services has steadily increased, 10 with 145 early intervention services currently operating in the UK, serving about 15 750 individuals (Care Services Improvement Partnership, personal communication, 2009). Early intervention teams have also gradually evolved and now often consist of community-based multidisciplinary mental health teams that provide a combination of pharmacotherapy, family intervention, cognitive-behavioural therapy (CBT), social skills training, problem-solving skills training, crisis management and case management. 11,12 However, although the evidence base for early intervention services is growing, their specific benefits have not been clearly demonstrated. 13,14 Therefore as part of an update of the NICE guideline on schizophrenia, 9,15 we conducted a systematic review of early intervention services for people with a first or early episode of psychosis. Because early intervention services typically include an individually tailored combination of evidence-based psychological interventions, we also examined the data on the separate use of CBT and family intervention used specifically in the context of early psychosis.
Search strategy and selection criteria
We identified randomised controlled trials (RCTs) of early intervention services, CBT or family intervention for people with early psychosis, using the original schizophrenia guideline 9 and five bibliographic databases (CINAHL, CENTRAL, EMBASE, MEDLINE, PsycINFO). The database search was conducted in September 2009 and restricted to English language papers or papers with an abstract in English. Full details of the search strategy can be found in the online supplement. Additional papers were identified by searching the reference list of retrieved articles, tables of contents of relevant journals, recent systematic reviews and meta-analyses of interventions in schizophrenia, and suggestions made by members of the schizophrenia Guideline Development Group (a comprehensive review protocol can be found in the updated edition of the full schizophrenia guideline, available from www.nccmh.org.uk). 15 Early psychosis was defined as a clinical diagnosis of psychosis within 5 years of the first psychotic episode or presentation to mental health services. Interventions addressing high-risk groups or 'pre-psychotic'/prodromal populations were excluded, as were studies where the main focus of the intervention was not on psychosis or where the duration since the first psychotic episode was greater than 5 years.
Quality assessment
All trials meeting the eligibility criteria were assessed for methodological quality using a modified version of the SIGN checklist. 16 Trials that were judged to be of adequate quality were included in the review. Trials that were not clearly described as randomised were excluded as were those with fewer than ten participants per intervention arm.
Declaration of interest
None.
Data extraction
Two of the authors (V.B. and J.M.) entered study details into a database and assessed methodological quality. Three of the authors (V.B., C.W. and P.P.) extracted outcome data into Review Manager (RevMan version 5.0.18 for Windows XP; The Cochrane Collaboration, Oxford, UK). The assessment of study quality and all outcome data were double-checked by one author (C.W.) for accuracy, with disagreements resolved by discussion.
Where available, data were extracted for the following outcomes: hospital admission; psychotic relapse (if appropriate criteria were used); DUP; and mean positive and negative symptoms as measured using the Positive and Negative Syndrome Scale (PANSS), 17 Brief Psychiatric Rating Scale (BPRS), 18 Scale for the Assessment of Positive Symptoms (SAPS), 19 and the Scale for the Assessment of Negative Symptoms (SANS). 20 Outcome data were extracted at both end of treatment and follow-up (based on mean end-point scores). In light of the fundamental aims of early intervention services, 12 data on remaining in contact with services and accessing psychosocial treatments were also extracted.
Statistical analysis
Meta-analysis was used, where appropriate, to synthesise the evidence using RevMan. Where possible, intention-to-treat with last observation carried forward data were used in the analyses. For binary outcomes, this approach assumes that participants leaving the study early, for whatever reason, had an unfavourable outcome. We calculated the standardised mean difference (SMD) for continuous outcomes, and relative risk (RR) for binary outcomes. For consistency, data from all outcomes (continuous and binary) were entered into RevMan in such a way that negative effect sizes or relative risks less than one favoured the active intervention. The number needed to treat for benefit (NNTB) 21 was calculated for statistically significant relative risks. Data from more than one study were pooled using a random-effects model, regardless of heterogeneity between trials, as this has recently been shown to be the most appropriate model in most circumstances. 22 Summary effects were assessed for clinical importance, taking into account both the point estimate and the associated 95% confidence interval (CI).
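To make these effect measures concrete, here is a minimal sketch of the relative-risk/NNTB arithmetic and of one standard random-effects pooling method (DerSimonian-Laird). This is illustrative only: the review itself used RevMan, and the DerSimonian-Laird estimator below is one common random-effects routine, not necessarily the exact one RevMan applies. The NNTB example reuses the pooled family-intervention rates reported later in the Results.

```python
import numpy as np

def relative_risk(events_tx, n_tx, events_ctl, n_ctl):
    """Relative risk: event rate in the intervention arm over that in the control arm."""
    return (events_tx / n_tx) / (events_ctl / n_ctl)

def nntb(risk_tx, risk_ctl):
    """Number needed to treat for benefit = 1 / absolute risk reduction."""
    return 1.0 / (risk_ctl - risk_tx)

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study effect estimates."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect (inverse-variance) weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)               # Cochran's Q statistic
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    return np.sum(w_re * y) / np.sum(w_re)

# Example: the pooled family-intervention rates reported later in the Results
# (14.5% relapse/admission v. 28.9% with standard care) give NNTB = 7.
print(round(nntb(0.145, 0.289)))
```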
Results
The search process and total number of trials included in the review are illustrated in Fig. 1. Details of all included trials can be found in Table 1, with further information about included and excluded studies available in online Tables DS1 and DS2.
Early intervention services
Four published trials (n = 800) were included in the meta-analysis of early intervention services: COAST (Croydon Outreach and Assertive Support Team); 23 LEO (Lambeth Early Onset); 11 the OPUS trial; 24 and OTP (Optimal Treatment Project). 12 Inspection of the Cochrane review of early interventions in psychosis 13 identified three additional trials; however, these were excluded as they failed to meet our inclusion criteria regarding the population studied and comparison used. All included trials recruited participants from local mental health services such as community mental health teams, in-patient and out-patient services. However, the trials varied as to whether the participant was a new referral, with LEO 11 including only those making contact for the first or second time, whereas COAST, 23 OPUS 24 and OTP 12 considered people who had a documented first contact within a specified time period, ranging from 12 weeks to 5 years.
Interventions often included a case manager or care coordinator, with a lower case-load than in standard care. In addition to medication management, all participants allocated to early intervention services were offered a range of psychosocial interventions, including CBT, 11,12,23 social skills training 24 and family intervention 12,23,24 or family counselling, 11 and vocational strategies such as supported employment. 11,12,23 The psychosocial and vocational interventions were usually adapted to the needs of first-episode psychosis and offered on an 'as-required' basis. The frequency and duration of contact differed between trials, with the duration of the intervention lasting up to 2 years. Outcomes were reported at 9 months to 5 years post-randomisation.
Cognitive-behavioural therapy
Four published trials of CBT [25][26][27][28] were included in the review (n = 620). One paper 27 published in Chinese but with an English abstract was translated subsequent to publication of the schizophrenia (update) guideline 15 and included in this analysis.
Participants were recruited from a range of services which included early intervention services, community mental health clinics and in-patient psychiatric wards. In two trials, participants were exclusively in their first episode of psychosis. 25,27 Another trial 26 additionally included participants who had been admitted for a second time, providing the episode occurred within 2 years of the first admission (17% of their sample). The fourth trial 28 included participants who had consulted a mental health professional for psychosis for the first time in the past 2 years. Cognitive-behavioural therapy was delivered individually in three out of the four trials, [25][26][27] with a group-based approach in the fourth. 28 Two of the interventions specifically adapted the CBT approach for early psychosis, 25,28 with the remaining two interventions targeting positive symptoms 26 and insight building. 27 The frequency of sessions and the duration of treatment varied across trials, with the total duration ranging from 5 weeks (plus booster sessions) 26 to 1 year. 25 At up to 2 years post-treatment follow-up, when compared with standard care alone, CBT significantly reduced mean positive symptoms with a pooled SMD of -0.60 (95% CI -0.79 to -0.41; heterogeneity I² = 0%, P = 0.44) and mean negative symptoms with a pooled SMD of -0.45 (95% CI -0.80 to -0.09; heterogeneity I² = 62%, P = 0.07). These benefits were not evident at the end of treatment in terms of both positive (SMD = -0.05, 95% CI -0.22 to 0.12; heterogeneity I² = 0%, P = 0.92) and negative symptoms (SMD = 0.03, 95% CI -0.17 to 0.23; heterogeneity I² = 0%, P = 0.41), or relapse within the 2-year follow-up period (27.8% v. 32.2%, P = 0.44; heterogeneity I² = 79%, P = 0.03). Rates of hospital admission up to 2 years follow-up also failed to demonstrate any additional benefit for CBT compared with standard care (38.4% v. 38.5%, P = 0.94; heterogeneity I² = 0%, P = 0.36).
Family intervention
Three trials (n = 288) assessing family intervention in early psychosis were included in the review. [29][30][31] Participants were recruited from psychiatric services, including in-patient units, and were either first or second admissions, 29,31 or had made first contact with services within the past 6 months. 30 Two trials 29,30 included the individual with psychosis in the family sessions, whereas in Zhang et al 31 the majority of family sessions did not include the patient. The interventions delivered in each trial included an element of psychoeducation and problem-solving, with crisis management also evident in one trial. 29 Interventions varied in their mode of delivery, with two trials 29,30 utilising an individual family approach and the remaining trial combining individual and group-based family sessions. Only one trial 29 reported relapse and a further two trials 30,31 reported hospital admission; these outcomes were combined to increase statistical power.
The combined analysis indicated that at the end of treatment, participants receiving family intervention were less likely to relapse or be admitted to hospital compared with those receiving standard care (14.5% v. 28.9%; NNTB = 7, 95% CI 4 to 20; heterogeneity I 2 = 0%, P = 0.40). At up to 2 years follow-up, one study 29 reported a numerically lower risk of relapse (23.1% v. 30.8%, P = 0.38), although this was not statistically significant. None of the included family intervention trials provided data on mean positive and negative symptoms.
Main findings
For people with early psychosis, in four trials of early intervention services, four trials of CBT, and three trials of family intervention, meta-analysis demonstrated advantages over standard care. By the end of treatment, early intervention services produced clinically important reductions in the risk of both relapse and hospital admission. In addition, small effects favouring early intervention services were shown in terms of reduced symptom severity and improved access to and engagement with treatment (including psychological therapies). Family intervention also produced clinically important reductions in the risk of relapse and hospital admission when compared with standard care. In the 2 years following the intervention, medium effects favouring CBT were demonstrated in terms of reduced positive and negative symptom severity. We found no data on the effect of family intervention on symptoms and insufficient evidence to reach a conclusion about the impact of CBT on relapse or hospital admission.
Early intervention services
Compared with a previous review of early interventions in psychosis, 13 our meta-analysis found stronger evidence to support the effectiveness of early intervention services overall. The earlier review included fewer trials that specifically focused on service-level interventions delivered during the 'critical period' following onset of psychosis. Furthermore, although the previous review included both discrete psychosocial and multicomponent service-level interventions, there was a lack of comparable trials for any conclusions to be drawn. Our findings do, however, substantiate those previously reported in a narrative review of randomised and non-randomised studies by Penn and colleagues, 14 who concluded that early interventions had beneficial effects across a range of domains, although further investigation was needed to establish the robustness of these findings. 14 Our review attempts to overcome these limitations and provides the first meta-analytic evidence indicating that both early intervention services and discrete psychological interventions improve outcomes for early psychosis.

Summary of results for early intervention services v. standard care:
Hospital admission 11,12,24 - end of treatment, 3 trials, n = 342/280, RR = 0.67 (0.54 to 0.83)
Relapse (full or partial) 11,12 - end of treatment, 2 trials, n = 91/81, RR = 0.66 (0.47 to 0.94)
Positive symptoms (PANSS or SAPS) 11,24 - end of treatment, 2 trials, n = 260/208, SMD = -0.21 (-0.42 to -0.01)
Negative symptoms (PANSS or SANS) 11,24 - end of treatment, 2 trials, n = 260/208, SMD = -0.39 (-0.57 to -0.20)
Not receiving a psychological intervention 11,12,24 - end of treatment, 3 trials, n = 344/286, RR = 0.67 (0.46 to 0.97)
Not in contact with index team 11,24 - end of treatment, 2 trials, n = 314/266, RR = 0.60 (0.39 to 0.92)
Leaving the study early for any reason 11,12,23,24 - end of treatment
In the present review, the early intervention services provided in all of the trials included the provision of psychosocial interventions, pharmacological treatment and some form of case management involving smaller case-loads (1:10) and an assertive approach to treatment. All of the components were tailored to meet the needs of the individual patient and offered at the earliest opportunity. These elements were not present in treatment as usual, although an assertive approach to treatment is so common that it cannot be specifically excluded. The psychological interventions used in the included trials were CBT and either family intervention 12,23,24 or family counselling. 11 It is possible that the reduced case-loads and more appropriate use of pharmacological interventions within early intervention services may account for some of the clinically and statistically important improvements demonstrated. Although further research is needed to investigate the beneficial contributions of these features of early intervention, given the positive effects of CBT and family intervention when delivered as discrete interventions for people with early psychosis, it is just as likely that these two psychosocial interventions have contributed to some of the benefits of early intervention services in this review.
Gleeson and colleagues 32 recently demonstrated that the addition of a cognitive-behavioural and family therapy-based relapse prevention programme to an early intervention service for individuals in remission from a first episode of psychosis was more likely to prevent or significantly delay a second episode when compared with an early intervention service alone. In this trial the early intervention service alone included only family psychoeducation and peer support. This study provides some evidence to support our hypothesis: that an important part of the overall effectiveness of the early intervention teams included in our meta-analysis derives from the inclusion of two evidence-based psychological interventions, namely, CBT and family intervention. In our review we have shown that the likelihood of a service user receiving a psychosocial intervention in an early intervention team is double that found in a community mental health team.
Limitations
One limitation of the present review is the paucity of trials included in each meta-analysis. We excluded trials focusing on high-risk groups or prevention of psychosis because of the possible ethical implications of targeting interventions at these individuals. 5 Another limitation is the variability in long-term follow-up measures available in different trials, making some comparisons difficult. Only one trial of an early intervention service provided long-term data (up to 5 years post-randomisation), 24 whereas all four trials of CBT [25][26][27][28] and one of family intervention 29 included long-term follow-up measures. Therefore, it remains to be determined whether the effects of early intervention services are sustained.
Psychological interventions
Despite the limitations, our findings regarding the efficacy of CBT and family intervention are consistent with, and reflect, the wider evidence base found in the treatment and management of later psychotic episodes. The updated edition of the schizophrenia guideline 15 recommends that both interventions should be offered to people experiencing an acute episode of schizophrenia and for promoting recovery in those with established schizophrenia.
The evidence presented here suggests that CBT for early psychosis has longer-term benefits in terms of reducing symptom severity. Consistent with the wider evidence base for CBT for established psychosis, the present review failed to find any evidence that CBT reduced relapse rates in early psychosis, which suggests that the main benefits of this intervention are likely to be a reduction in symptoms and distress in early and established psychosis. This finding confirms a recent review assessing both RCTs and non-randomised studies of CBT in first-episode psychosis, which also failed to demonstrate positive effects on relapse and readmission. 33 Although the number of RCTs for family interventions for early psychosis was limited in our review, the evidence is consistent with the larger body of evidence for the role of family interventions in established schizophrenia, in that family intervention reduced combined hospital admission and relapse rates. The review conducted for the updated edition of the schizophrenia guideline 15 also found robust evidence for the efficacy of family intervention in established schizophrenia in reducing symptoms at the end of treatment. However, in the present review, none of the included trials reported measures that allowed us to assess this in the context of early psychosis. It is, therefore, anticipated that family intervention in first-episode psychosis may also reduce symptom levels.
Critical period
The studies included in the present review did not provide any data relating to DUP, as all papers focused on people with an agreed diagnosis, not on populations at high risk of becoming psychotic and receiving a diagnosis. A number of other reviews assessing DUP as a predictor have indicated that longer DUP is subsequently associated with poorer outcomes, including reduced adherence to CBT, 34 altered response to antipsychotic medications, 35 poorer social functioning 36 and increased levels of disability. 37 There is some suggestion from studies assessing the impact of early intervention programmes on high-risk and ultra-high-risk populations that education and awareness of psychosis may significantly reduce DUP. 38 However, further research is needed to clarify issues surrounding DUP. 39 The present review focused on the first 3-5 years following the onset of illness. This period has been defined as a critical period, when many of the psychological, clinical and social deteriorations associated with psychosis might occur, 5,7,40 and when interventions might potentially have their greatest positive impact on prognosis. 5,6 Although the current evidence to support this idea is limited, intervening at the earliest possible opportunity makes both practical and ethical sense, and hope remains that such intervention might reduce subsequent symptom severity, loss of functioning and other negative consequences of psychosis such as social exclusion. 41 Intervening early may also help to reduce the adverse social and societal consequences of the disorder for both individuals and their family and carers. However, it can also be argued that excellent care and access to a range of appropriate and effective psychological, pharmacological and vocational interventions should be available at any stage of psychosis. 42,43

Implications

On balance, the evidence reviewed here suggests that early intervention services are an effective way of delivering care for people with early psychosis and can reduce hospital admission, relapse rates and symptom severity, while improving access to and engagement with a range of treatments. The characteristics of these early intervention services include the provision of multimodal psychosocial interventions, pharmacotherapy, and some form of case management with lower case-loads and an assertive approach to treatment, all within the context of intervening as early as possible. Our review also suggests that providing evidence-based psychological interventions as part of a comprehensive early intervention service may contribute to improving outcomes for people with early psychosis. It is important to note that these psychological interventions have been shown rather more robustly to be effective for people with established schizophrenia. This raises the possibility that comprehensive services comparable to those described here as early intervention services, which include a full range of evidence-based psychological interventions, should be considered for people with established psychosis.
| 2018-04-03T06:17:59.042Z | 2010-11-01T00:00:00.000 | {
"year": 2010,
"sha1": "24795fe3e271dfa0163010f0d5fc3cf09bcbf918",
"oa_license": "CCBYNC",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/27B4BBAFFD9D8E29458290B843EB0E10/S0007125000253439a.pdf/div-class-title-early-intervention-services-cognitive-behavioural-therapy-and-family-intervention-in-early-psychosis-systematic-review-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "3a339b2b907e6e09f6dca3c0cce499b5226483d9",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52288182 | pes2o/s2orc | v3-fos-license | Authentication of cyber-physical systems under learning-based attacks
The problem of attacking and authenticating cyber-physical systems is considered. This paper concentrates on the case of a scalar, discrete-time, time-invariant, linear plant under an attack which can override the sensor and the controller signals. Prior works assumed the system was known to all parties and developed watermark-based methods. In contrast, in this paper the attacker needs to learn the open-loop gain in order to carry out a successful attack. A class of two-phase attacks is considered: during an exploration phase, the attacker passively eavesdrops and learns the plant dynamics, followed by an exploitation phase, during which the attacker hijacks the input to the plant and replaces the input to the controller with a carefully crafted fictitious sensor reading with the aim of destabilizing the plant without being detected by the controller. For an authentication test that examines the variance over a time window, tools from information theory and statistics are utilized to derive bounds on the detection and deception probabilities with and without a watermark signal, when the attacker uses an arbitrary learning algorithm to estimate the open-loop gain of the plant.
I. INTRODUCTION
The recent technological advances in wireless communications and computation, and their integration into networked control and cyber-physical systems (CPS) [1], open the door to a myriad of new and exciting opportunities in transportation, health care, agriculture, energy, and many others.
However, the distributed nature of CPS is often a source of vulnerability [2]-[4]. Security breaches in CPS can have catastrophic consequences ranging from hampering the economy by obtaining financial gain, through hijacking autonomous vehicles and drones, and all the way to terrorism by manipulating life-critical infrastructures [5]-[9]. Real-world instances of security breaches in CPS that were discovered and made available to the public include the revenge sewage attack in Maroochy Shire, Australia [10], the Ukraine power attack [11], the German steel mill cyber-attack [12] and the Iranian uranium enrichment facility attack via the Stuxnet malware [13]-[17]. Consequently, studying and preventing such security breaches via control-theoretic methods has received a great deal of attention in recent years [18]-[29].
An important and widely used class of attacks on CPS is based on the "man-in-the-middle" (MITM) attack technique (cf. [30]): an attacker takes over the physical plant's control and sensor signals. The attacker overrides the control signals with malicious inputs in order to push the plant toward an alternative trajectory, often unstable and catastrophic. Consequently, many CPS constantly monitor/sense the plant outputs with the objective of detecting a possible attack. The attacker, on the other hand, aims to overwrite the sensor readings in a manner that would be indistinguishable from the legitimate ones.
The simplest instance of a MITM attack is the replay attack [31]-[33], in which the attacker observes and records the legitimate system behavior across a long period of time and then replays it at the controller's input; this attack is reminiscent of the notorious attack on video surveillance systems, in which previously recorded surveillance footage is replayed during a heist. A well-known example of this attack is that of the Stuxnet malware, which used an operating system vulnerability to enable a twenty-one-second-long replay attack during which the attacker drove the centrifuges at a uranium enrichment facility toward excessively high and destructive speeds [34]. The extreme simplicity of the replay attack, which can be implemented with zero knowledge of the system dynamics and sensor specifications, has made it a popular and well-studied topic of research [31]-[33], [35]-[37].
In contrast to the replay attack, a paradigm that follows Shannon's maxim, a restatement of Kerckhoffs's principle ("the enemy knows the system"), was considered by Satchidanandan and Kumar [38] and Ko et al. [39]. This paradigm assumes that the attacker has complete knowledge of the dynamics and parameters of the system, which allows the attacker to construct arbitrarily long fictitious sensor readings that are statistically identical to the actual signals, without being detected.
To counter both the replay and the "statistical-duplicate" attacks, Mo and Sinopoli [32], and Satchidanandan and Kumar [38], respectively, proposed to superimpose a random watermark, unknown to the attacker, on top of the (optimal) control signal. In this way, by testing the correlation of the subsequent measurements with the watermark signal, the controller is able to detect the attack. By superimposing watermarking at different power levels, improved attack detection probability can be traded for an increase in the control cost.
The two interesting approaches described above suffer from some shortcomings. First, in the case of a replay attack the usage of the watermarking signal is unnecessary: by taking a long enough detection window, the controller is always able to detect such an attack even in the absence of watermarks by simply testing for repetitions. A watermark is only necessary when the detection window of the controller is small compared to the recording (and replay) window of the attacker. Second, in the case of a statistical-duplicate attack, we must assume that the attacker has no access to the signal generated and applied by the controller. Since this type of attack assumes the attacker has full system knowledge, if it also has access to the control signal then it can construct fictitious sensor readings containing any watermark signal inscribed by the controller. Assuming there is no access to the control signal seems a questionable assumption for an attacker who is capable of hijacking the whole system and overriding the control signal.
The two approaches constitute two extremes: the replay attack assumes no knowledge of the system parameters, and as a consequence it is relatively easy to detect. The statistical-duplicate attack assumes full knowledge of the system dynamics, and as a consequence it requires a more sophisticated detection procedure, as well as additional assumptions to ensure it can be detected.
In the current work, we explore a model that lies between these two extremes. We assume that only the controller has perfect knowledge of the system dynamics. This is a reasonable assumption, since the controller is tuned to the plant over a much longer period than the attacker and can therefore learn the system dynamics to a far greater precision. On the other hand, we assume the attacker knows that the system is linear and time-invariant, but does not know the actual open-loop gain. It follows that the attacker needs to "learn" the plant first, before being capable of generating a viable fictitious sequence of sensor readings. In this setting, we also consider the case when the attacker has full access to the control signals, and we investigate the robustness of different attacks to system parametric uncertainty. To determine whether an attack can be successful or not, we rely on physical limitations of the system's learning process, similar to an adaptive control setting [40], rather than on cryptographic/watermarking techniques.
Our approach is reminiscent of parametric linear system identification (SysID), but in contrast to classical SysID our attacker is constrained to passive identification. Specifically, we consider two-phase attacks akin to the exploration and exploitation phases in reinforcement learning/multi-armed bandit problems [41], [42]: in the exploration phase the attacker passively listens and learns the system parameter(s); in the exploitation phase the attacker uses the parameter(s) learned in the first phase to try and mimic the statistical behavior of the real plant, in a similar fashion to the statistical-duplicate attack. For the case of two-phase linear attacks, we analyze the achievable performance of a least-squares (LS) estimation-based scheme and a variance detection test. We also provide a lower bound on the attack-detection probability under the variance detection test and any learning algorithm. We provide explicit results for the case where the duration of the exploitation phase tends to infinity. To enhance the security of the system, we also extend the results to the case of a superimposed watermark (or authentication) signal.
An outline of the rest of the paper is as follows. We set up the problem in Sec. II, and state the main results in Sec. III, with their proofs relegated to the appendix. Simulations are provided in Sec. IV. We conclude with a discussion of future research directions in Sec. V.
A. Notation
We denote by N the set of natural numbers. All logarithms, denoted by log, are to base 2. For two real-valued functions g and h, g = o(h) means that g(x)/h(x) tends to zero. We denote by x_i^j = (x_i, ..., x_j) the realization of the random vector X_i^j = (X_i, ..., X_j) for i, j ∈ N, i ≤ j. ||·|| denotes the Euclidean norm. P_X denotes the distribution of the random variable X with respect to (w.r.t.) probability measure P, whereas f_X denotes its probability density function (PDF) w.r.t. the Lebesgue measure, if it has one. An event happens almost surely (a.s.) if it occurs with probability one. For real numbers a and b, a << b means a is much less than b, in some numerical sense, while for probability distributions P and Q, P << Q means P is absolutely continuous w.r.t. Q. dP/dQ denotes the Radon-Nikodym derivative of P w.r.t. Q. The Kullback-Leibler (KL) divergence between probability distributions P_X and P_Y is defined as

D(P_X || P_Y) ≜ E_{P_X}[ log (dP_X/dP_Y)(X) ],

where E_{P_X} denotes the expectation w.r.t. probability measure P_X. The conditional KL divergence between probability distributions P_{Y|X} and Q_{Y|X}, averaged over P_X, is defined as

D(P_{Y|X} || Q_{Y|X} | P_X) ≜ E_{P_X}[ D(P_{Y|X}(·|X̄) || Q_{Y|X}(·|X̄)) ],

where (X, X̄) are independent and identically distributed (i.i.d.). The mutual information between random variables X and Y is defined as I(X; Y) ≜ D(P_{XY} || P_X P_Y). The conditional mutual information between random variables X and Y given random variable Z is defined as I(X; Y | Z) ≜ D(P_{XY|Z} || P_{X|Z} P_{Y|Z} | P_Z).
II. PROBLEM SETUP
We consider the networked control system depicted in Fig. 1, where the plant dynamics are described by a scalar, discrete-time, linear time-invariant (LTI) system

X_{k+1} = a X_k + U_k + W_{k+1},    (1)

where X_k, a, U_k, W_k are real numbers representing the plant state, open-loop gain of the plant, control input, and plant disturbance, respectively, at time k ∈ N. The controller, at time k, observes Y_k and generates a control signal U_k as a function of Y_1^k. We assume that the initial condition X_0 has a known (to all parties) distribution and is independent of the disturbance sequence {W_k}, which is an i.i.d. process with a PDF known to all parties. We assume that U_0 = W_0 = 0.
Fig. 1. (a) Exploration: during this phase, the attacker eavesdrops and learns the system, without altering the input signal to the controller (Y_k = X_k).
(b) Exploitation: during this phase, the attacker hijacks the system and intervenes as an MITM in two places: acting as a fake plant for the controller (Y_k = V_k) by impersonating the legitimate sensor, and as a malicious controller (Ũ_k) for the plant.

With a slight loss of generality and for analytical purposes, we assume

Var[W²] < ∞.    (2)

Moreover, to simplify the notation, let Z_k ≜ (X_k, U_k) denote the state-and-control input at time k, and its trajectory up to time k by Z_1^k ≜ (Z_1, ..., Z_k).
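To make the setup concrete, here is a minimal closed-loop simulation sketch of the plant (1) (ours, not from the paper; the gain a = 1.2 and unit-variance Gaussian disturbance match the choices later used in Sec. IV, while the deadbeat controller U_k = -aY_k is the r = 0 LQ policy discussed in Remark 4):

```python
import numpy as np

rng = np.random.default_rng(0)

a, T, sigma = 1.2, 1000, 1.0      # open-loop gain, horizon, std of W_k (illustrative)
X = np.zeros(T + 1)               # plant state; X_0 = 0 for simplicity
U = np.zeros(T)                   # control inputs

for k in range(T):
    Y = X[k]                      # no attack: the controller observes the true state
    U[k] = -a * Y                 # deadbeat controller (the r = 0 LQ policy)
    X[k + 1] = a * X[k] + U[k] + sigma * rng.standard_normal()   # dynamics (1)
```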
In what follows, it will be convenient to treat the open-loop gain of the plant as a random variable A that is fixed in time, whose PDF f_A is known to the attacker, and whose realization a is known to the controller. We assume all random variables to exist on a common probability space with probability measure P, and denote the probability measure conditioned on A = a by P_a. Namely, for any measurable event C, we define

P_a(C) ≜ P(C | A = a).

A is further assumed to be independent of X_0 and {W_k | k ∈ N}.
We consider the (time-averaged) linear quadratic (LQ) control cost [43]:

J_T ≜ (1/T) E[ Σ_{k=1}^{T} ( q X_k² + r U_k² ) ],    (3)

where the weights q and r are non-negative known (to the controller) real numbers that penalize the cost for state deviations and control actuations, respectively.
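Continuing the simulation snippet above, a minimal sketch of the empirical (single-run) time-averaged LQ cost, with q and r as above:

```python
def lq_cost(X, U, q, r):
    """Empirical time-averaged LQ cost of a single realized trajectory."""
    T = len(U)
    return np.mean(q * X[1:T + 1] ** 2 + r * U ** 2)

print(lq_cost(X, U, q=1.0, r=0.0))    # cost of the trajectory simulated above
```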
A. Adaptive Integrity Attack
We define Adaptive Integrity Attacks (AIA), which consist of a passive and an active phase, referred to as exploration and exploitation, respectively.
During the exploration phase, depicted in Fig. 1a, the attacker eavesdrops and learns the system, without altering the input signal to the controller, i.e., Y k = X k during this phase.
On the other hand, during the exploitation phase, depicted in Fig. 1b, the attacker intervenes as a MITM in two different parts of the control loop with the aim of pushing the plant toward an alternative trajectory (usually unstable) without being detected by the controller: it hijacks the true measurements and feeds the controller with a fictitious input Y_k = V_k instead. Furthermore, it issues and overrides a malicious control signal Ũ_k to the actuator instead of the signal U_k that is generated by the controller, as depicted in Fig. 1b. Remark 1. Attacks that manipulate the control signal by tampering with the integrity of the sensor readings, while trying to remain undetected, are usually referred to as integrity attacks, e.g., [32]. Since in the class of attacks described above the attacker learns the open-loop gain of the plant in a fashion reminiscent of adaptive control techniques, we refer to attacks in this class as AIA. •
B. Two-Phase AIA
While in a general AIA the attacker can switch between the exploration and exploitation phases back and forth or try to combine them together in an online fashion (see Sec. V-A), in this work, we concentrate on a special class of AIA comprising only two disjoint consecutive phases as follows.
• Phase 1: Exploration. As illustrated in Fig. 1a, for k ∈ [0, L] the attacker observes the plant state and control input, and tries to learn the open-loop gain a. We denote by Â the attacker's estimate of the open-loop gain a.
• Phase 2: Exploitation. As illustrated in Fig. 1b, from time L + 1 onwards the attacker hijacks the system, feeding a malicious control signal Ũ_k to the plant and a fictitious sensor reading Y_k = V_k to the controller. •
C. Linear Two-Phase AIA
A linear two-phase attack is a special case of the two-phase AIA of Sec. II-B, in which the exploitation phase of the attacker takes the following linear form:
V_{k+1} = Â V_k + U_k + W̃_{k+1},    k = L, ..., T − 1,    (4)

where W̃_{k+1}, for k = L, ..., T − 1, are i.i.d. with f_W̃ = f_W; U_k is the control signal generated by the controller, which is fed with the fictitious virtual signal V_k by the attacker; V_L = X_L; and Â is the estimate of the open-loop gain of the plant at the conclusion of Phase 1. The controller/detector, being aware of the dynamics (1) and the open-loop gain a, attempts to detect possible attacks by testing for statistical deviations from the typical behavior of the system (1). More precisely, under legitimate system operation (corresponding to the null hypothesis [44, Ch. 14]), the controller observation Y_k behaves according to

Y_{k+1} = a Y_k + U_k + W_{k+1}.    (5)

Note that in case of an attack, during Phase 2 (k > L), (5) can be rewritten as

Y_{k+1} − a Y_k − U_k = V_{k+1} − a V_k − U_k    (6a)
                      = (Â − a) V_k + W̃_{k+1},    (6b)

where (6b) follows from (4). Hence, the estimation error (Â − a) dictates the ease with which an attack can be detected. While the controller can in general carry out different statistical tests of the validity of (5), such as the Kolmogorov-Smirnov and Anderson-Darling tests [44, Ch. 14], we consider in Sec. III-A a specific test that requires knowledge of only the second-order statistics.
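Continuing the simulation snippet of Sec. II, a minimal sketch of the exploitation phase (4) together with the innovations (6) observed by the controller; the exploration length L = 50 is an illustrative choice, and the attacker's estimate is computed here by least squares, cf. (16) in Sec. III-B.1:

```python
L = 50   # illustrative exploration length

# Phase 1 (exploration): the attacker records (X_k, U_k) from the nominal loop above
# and fits the open-loop gain by least squares, cf. (16) in Sec. III-B.1.
a_hat = np.sum(X[:L] * (X[1:L + 1] - U[:L])) / np.sum(X[:L] ** 2)

# Phase 2 (exploitation): the controller is fed V_k instead of X_k, cf. (4).
V = np.zeros(T + 1)
V[L] = X[L]
innov = []   # residuals Y_{k+1} - a*Y_k - U_k computed by the controller/detector
for k in range(L, T):
    U_k = -a * V[k]                # the controller acts on the fictitious reading
    V[k + 1] = a_hat * V[k] + U_k + sigma * rng.standard_normal()
    innov.append(V[k + 1] - a * V[k] - U_k)    # equals (a_hat - a)*V_k + W~, cf. (6)
```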
D. Deception and Detection Probabilities
Define the hijack indicator at time k as

Θ_k ≜ 1 if the system has been hijacked at or before time k, and Θ_k ≜ 0 otherwise.    (7)

At every time k, the controller uses Y_1^k to construct an estimate Θ̂_k of Θ_k. We consider the following events.
• Θ_k = 0, Θ̂_k = 0: There was no attack, and no attack was declared by the detector.
• Θ_k = 1, Θ̂_k = 1: An attacker hijacked the controller observation before time k but was caught by the controller/detector. In this case we say that the controller detected the attack. The detection probability at time k is defined as

P_{Det}^{a,k} ≜ P_a(Θ̂_k = 1 | Θ_k = 1).    (8)

• Θ_k = 1, Θ̂_k = 0: An attacker hijacked the observed signal by the controller before time k, and the controller/detector failed to detect the attack. In this case, we say that the attacker deceived the controller or, equivalently, that the controller misdetected the attack [44, Ch. 3]. The deception probability at time k is defined as

P_{Dec}^{a,k} ≜ P_a(Θ̂_k = 0 | Θ_k = 1).    (9)

• Θ_k = 0, Θ̂_k = 1: The controller falsely declared an attack. We refer to this event as a false alarm. The false alarm probability at time k is defined as

P_{FA}^{a,k} ≜ P_a(Θ̂_k = 1 | Θ_k = 0).    (10)

Clearly,

P_{Det}^{a,k} = 1 − P_{Dec}^{a,k}.    (11)

The controller wishes to achieve a low false alarm probability, while guaranteeing a low deception probability and a low control cost (3). In addition, in case of an attacker that knows (or has perfectly learned) the system gain a, and replaces {X_k} of (1) with a virtual signal {V_k} that is statistically identical to and independent of it, the controller has no hope of correctly detecting the attack. We further define the deception, detection, and false alarm probabilities w.r.t. the probability measure P, without conditioning on A, and denote them by P_{Dec}^k, P_{Det}^k, and P_{FA}^k, respectively. For instance, P_{Det}^k is defined as

P_{Det}^k ≜ P(Θ̂_k = 1 | Θ_k = 1) = ∫ P_{Det}^{a,k} f_A(a) da.    (12)
III. STATEMENT OF THE RESULTS
We now describe the main results of this work. We start by describing a variance-based attack-detection test in Sec. III-A. We derive upper and lower bounds on the deception probability in Sec. III-B. The proofs of the results in this section are relegated to the appendix.
A. Attack-Detection Variance Test
A simple and widely used test is one that seeks anomalies in the variance, i.e., a test that examines whether the empirical variance of (5) is equal to E[W²]. In this way, only the second-order statistics of W need to be known at the controller. The price, of course, is the test's inability to detect higher-order anomalies.
Specifically, this test sets a confidence interval of length 2δ > 0 around the expected variance, i.e., it checks whether

(1/T) Σ_{k=0}^{T−1} ( Y_{k+1} − a Y_k − U_k )² ∈ ( E[W²] − δ, E[W²] + δ ),    (13)

where T is called the test time. That is, as is implied by (6), the attacker manages to deceive the controller (Θ̂_T = 0) whenever the statistic in (13) falls inside the confidence interval. Eq. (5) suggests that the false alarm probability of the variance test (13) is

P_{FA}^{a,T} = P_a( (1/T) Σ_{k=0}^{T−1} W_{k+1}² ∉ ( E[W²] − δ, E[W²] + δ ) ).

By applying Chebyshev's inequality and (2), we have

P_{FA}^{a,T} ≤ min{ 1, Var[W²] / (T δ²) }.    (14)

As a result, as T → ∞ the probability of false alarm tends to zero. Hence, in this limit, we are left with the task of determining the behavior of the deception probability (9). We note that the asymptotic assumption T → ∞ simplifies the presentation of the results. Nonetheless, a similar treatment can be obtained in the non-asymptotic case.
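A direct implementation sketch of the variance test (13), applied to the innovation sequence from the attack snippet of Sec. II-C (the threshold δ = 0.1 is an illustrative choice; E[W²] = σ² for the zero-mean disturbance):

```python
def variance_test_alarm(innov, var_W, delta):
    """Variance test (13): alarm iff the empirical second moment of the innovations
    Y_{k+1} - a*Y_k - U_k falls outside (var_W - delta, var_W + delta)."""
    return abs(np.mean(np.square(innov)) - var_W) >= delta

# Applied to the attacked loop above: a small estimation error |a_hat - a| keeps
# the statistic inside the confidence interval, and the attack goes undetected.
print(variance_test_alarm(innov, var_W=sigma ** 2, delta=0.1))
```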
B. Bounds on the Deception Probability Under the Variance Test
In what follows, we assume that there exists β > 0 such that

lim sup_{T→∞} (1/T) Σ_{k=L}^{T−1} V_k² ≤ 1/β,  P-a.s.    (15)

Remark 2. Assuming the control policy is memoryless, namely U_k depends only on Y_k, the process V_k is Markov for k ≥ L + 1. By further assuming that L = o(T) and using the generalization of the law of large numbers for Markov processes [45], we deduce

lim_{T→∞} (1/T) Σ_{k=L}^{T−1} V_k² = E[V²]  a.s.

Consequently, in this case we have β ≤ 1/Var[W]. In addition, when the control policy is linear and stabilizes (4), that is, U_k = −ΩY_k and |Â + Ω| < 1, it is easy to verify that (15) holds true for β = (1 − (Â + Ω)²)/Var[W]. •

We now provide lower and upper bounds on the deception probability (9) of any linear two-phase AIA (4), where Â in (4) is constructed using an arbitrary learning algorithm.
1) Lower Bound: To provide a lower bound on the deception probability P_{Dec}^{a,T}, we consider a specific estimate Â constructed at the conclusion of the first phase by the attacker, assuming a controller that uses the variance test (13). To this end, we use least-squares (LS) estimation due to its efficiency and amenability to recursive update over observed incremental data, which makes it the method of choice for many applications of real-time parametric identification of dynamical systems [40], [46]-[52]. The LS algorithm approximates the overdetermined system of equations

X_k ≈ â X_{k−1} + U_{k−1},    k = 1, ..., L,
by minimizing the Euclidean distance

Â = argmin_â Σ_{k=1}^{L} ( X_k − â X_{k−1} − U_{k−1} )²

to estimate (or "identify") the plant, the solution to which is

Â = Σ_{k=1}^{L} X_{k−1} ( X_k − U_{k−1} ) / Σ_{k=1}^{L} X_{k−1}².    (16)

Remark 3. Since we assumed that W_k has a PDF for all times k, the probability that X_k = 0 is zero. Consequently, (16) is well-defined. •

Using the LS estimate (16) achieves the following asymptotic deception probability.

Theorem 1. Consider any linear two-phase AIA (4) with fictitious-sensor reading power that satisfies (15) and a control policy {U_k}. Then, the asymptotic deception probability when using the variance test (13) is bounded from below as follows.

Remark 4. Thm. 1 guarantees lim_{T→∞} P_{Dec}^{a,T} = 1 for the choice U_k = −aY_k, where Y_k = X_k during the exploration phase. U_k = −aX_k is the optimal control policy for the LQ control cost (3) with r = 0 (no penalty on the control actions). An important consequence is that, for this choice, even without having any prior knowledge of the open-loop gain of the plant, the attacker can still carry out a successful attack. •

Theorem 2. Suppose that f_A is supported on an interval of length R, for some R > 0, and consider any control policy {U_k} and any linear two-phase AIA (4) with fictitious-sensor reading power (15) that satisfies √β ≤ R. Then, the asymptotic deception probability when using the variance test (13) is bounded from above as in (17). In addition, if for all k ∈ {1, ..., L}, A → (X_k, Z_1^{k−1}) → U_k is a Markov chain, then for any sequence of probability measures {Q_{X_k | Z_1^{k−1}}}, the bound (18) holds.

Remark 5. By looking at the numerator in (17c), it follows that the bound on the deception probability becomes looser as the amount of information revealed about A by the observation Z_1^L increases. On the other hand, by looking at the denominator, the bound becomes tighter as R increases. This is consistent with the observation of Zames [53] (see also [49]) that SysID becomes harder as the uncertainty about the open-loop gain of the plant increases; in our case, a larger uncertainty interval R corresponds to a worse estimation of the open-loop gain A by the attacker, which leads, in turn, to a decrease in the attacker's achievable deception probability. The denominator can also be interpreted as the intrinsic uncertainty of A when it is observed at resolution √(δβ), as it corresponds to the entropy of the random variable A when quantized at that resolution. •

In conclusion, Thm. 2 provides two upper bounds on the deception probability. The first, (17), clearly shows that increasing the privacy of the open-loop gain A (manifested in the mutual information between A and the state-and-control trajectory Z_1^L during the exploration phase) reduces the deception probability. The second bound, (18), allows freedom in choosing the auxiliary probability measure Q_{X_k | Z_1^{k−1}}, making it a rather useful bound. An important instance is that of an i.i.d. Gaussian plant disturbance sequence W_k ∼ N(0, σ²); by choosing Q_{X_k | Z_1^{k−1}} ∼ N(0, σ²) for all k ∈ N, for this case we can rewrite the upper bound (18) in terms of E_P[(A X_{k−1} + U_{k−1})²] as follows.

Corollary 1. If for all k ∈ {1, ..., L}, A → (X_k, Z_1^{k−1}) → U_k is a Markov chain, and W_k ∼ N(0, σ²) is an i.i.d. Gaussian plant disturbance sequence, then the upper bound (19) on the asymptotic deception probability holds, with the numerator given by (20).

Remark 6. While the upper bound in (17c) is valid for all control policies, the upper bound in (18), and consequently also the one in (19), is only valid for control policies for which A → (X_k, Z_1^{k−1}) → U_k forms a Markov chain for all k ∈ {1, ..., L}. To demonstrate this, choose U_k = −AX_k and evaluate the bounds in (17c) and (19). Clearly, (20) is finite.
On the other hand, I(A; Z_1^L), and hence also the upper bound in (17c), is infinite, since, given X_k and U_k, A can be fully determined. •
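For reuse in the simulation sketches of Sec. IV, a minimal implementation of the LS estimator (16):

```python
import numpy as np

def ls_estimate(X, U, L):
    """Least-squares estimate (16) of the open-loop gain from L exploration steps:
    a_hat = sum_k X_{k-1}*(X_k - U_{k-1}) / sum_k X_{k-1}**2, for k = 1, ..., L."""
    return np.sum(X[:L] * (X[1:L + 1] - U[:L])) / np.sum(X[:L] ** 2)
```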
C. Watermarking
To increase the security of the system, at any time k the controller can add an authentication (or watermarking) signal Γ_k to an unauthenticated control policy {Ū_k | k ∈ N}:

U_k = Ū_k + Γ_k.    (21)

We refer to such a control policy U_k as the authenticated version of the control policy Ū_k. We denote the states of the system that would be generated if only the unauthenticated control signal Ū_1^k were applied by X̄_1^k, and the resulting trajectory by Z̄_1^k ≜ (X̄_1^k, Ū_1^k). A "good" authentication signal entails little increase in the control cost (3) compared to its unauthenticated version while providing enhanced detection probability (9) and/or false alarm probability. Remark 7. In both the replay-attack [32] and the statistical-duplicate [38] models, no access to the control signal U_k by the attacker was allowed. Thus, to improve the detection probability of the controller in case of an attack, one could add an authentication/watermarking signal, which helped the controller to identify abnormalities by correlating the input watermarking signal with its contribution to the sensor reading.
However, since in the statistical-duplicate setting full system knowledge at the attacker was assumed, if the attacker has access to the control signal it could easily simulate the contribution of any inscribed watermarking signal to the sequence of fictitious sensor readings. In contrast, in the replay-attack setting, no system knowledge is assumed, rendering any knowledge of the control signal useless, unless learning the plant dynamics is invoked. In our setup the attacker has full access to the control signal. However, in contrast to the statistical-duplicate setting, it cannot perfectly simulate the effect of the control signal as it lacks knowledge of the open-loop gain. Thus, the watermarking signal here is used for a different purpose: to impede the learning process of the attacker. •
At first glance, one may envisage that superimposing any watermarking signal Γ_k on top of the control policy {Ū_k | k ∈ N} would necessarily enhance the detectability of an attack, since the observations of the attacker are in this case noisier. However, it turns out that injecting a strong noise may in fact speed up the learning process, as it improves the power of the signal magnified by the open-loop gain with respect to the observed noise [54].
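A minimal sketch of the authenticated control law (21) with an i.i.d. Gaussian watermark; the unauthenticated policy Ū_k = −0.85aY_k matches the one used in Sec. IV, while the watermark power is an illustrative choice:

```python
gamma_std = 0.5                           # watermark power level (illustrative)

def authenticated_control(Y, a, rng):
    """Authenticated control (21): unauthenticated policy plus a random watermark."""
    U_bar = -0.85 * a * Y                 # unauthenticated policy, as in Sec. IV
    gamma = gamma_std * rng.standard_normal()
    return U_bar + gamma
```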
The following corollary proposes a class of watermarking signals that provide enhanced guarantees on the deception probability P_{Dec}^T. Corollary 2. For any control policy {Ū_k | k ∈ N} with trajectory Z̄_1^k = (X̄_1^k, Ū_1^k) and its corresponding authenticated control policy U_1^k (21) with trajectory Z_1^k = (X_1^k, U_1^k), under the assumptions of Corollary 1, if (22) holds for all k ∈ {1, ..., L}, then, letting Ψ_{k−1} ≜ Σ_{j=1}^{k−1} A^{k−1−j} Γ_j, the majorization (23) of (20) holds.
IV. SIMULATIONS
In this section, we compare the empirical performance of the variance test with our developed bounds. At every time T, the controller tests the empirical variance for abnormalities over a detection window [T − T̄ + 1, T], where T̄ ≤ T, using a confidence interval of length 2δ > 0 around the expected variance (13). When T̄ = T, the statistical test used in the simulation, the hijack indicator Θ_T, and its estimate Θ̂_T at the controller reduce to the definitions of the variance test in (13), the hijack indicator in (7), and the estimate of the latter of Sec. II-D, respectively. In the interest of brevity, we state the exact simulation parameters only in the figure captions.
Fig. 2. Empirical false alarm rate of the variance test as a function of the detection window size T̄, along with the theoretical bound; {W_k} is i.i.d. standard Gaussian, and, since the false alarm rate is examined, Y_k = X_k. 500 Monte Carlo simulations were performed. A semi-logarithmic scale for the detection window size T̄ is used.
A. False Alarm Rate
We start by examining the false alarm rate P_{FA}^{a,T} under the variance test as a function of the detection window size T̄. We depict, in Fig. 2, the empirical false alarm rate evaluated using the Monte Carlo method along with the theoretical bound of (14). Both the theoretical bound and the empirical performance exhibit a similar behavior: the false alarm rate is large for small values of the detection window T̄, and decays to zero as T̄ → ∞. We note that, since Chebyshev's inequality is not tight in general, there exists a gap between the empirical and theoretical performance that can be closed using exponential bounds, which require further statistical knowledge of W_k.
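An illustrative Monte Carlo re-creation of this experiment (ours): under no attack the innovations in (5) are exactly the disturbances W_k, so it suffices to simulate W alone. The window size and number of trials mirror the figure, while δ = 0.1 is an assumed value:

```python
def false_alarm_rate(num_trials, T_win, delta, var_W=1.0):
    """Monte Carlo estimate of the variance-test false alarm rate (no attack)."""
    alarms = 0
    for _ in range(num_trials):
        W = rng.standard_normal(T_win)    # innovations equal W_k when Y_k = X_k, cf. (5)
        alarms += abs(np.mean(W ** 2) - var_W) >= delta
    return alarms / num_trials

print(false_alarm_rate(num_trials=500, T_win=800, delta=0.1))
```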
B. Replay Attack
The variance-test detection rate (8) P_{Det}^{1.2,800} of a replay attack as a function of the detection window size T̄ is depicted in Fig. 3, along with the corresponding false alarm rate. No watermarking signals are used in the simulation. The figure clearly shows that the variance-test detection rate of a replay attack saturates when the detection window size T̄ goes to infinity for a fixed confidence interval 2δ. On the other hand, as discussed in Sec. IV-A, the false alarm rate decays to zero as T̄ → ∞ in this case. Remark 8. Instead of fixing the confidence interval 2δ, one may choose to hold the false alarm rate fixed to a prescribed value by appropriately adjusting the confidence interval. Under this setup, the detection rate will decrease to zero upon increasing the detection window size. • As discussed in Sec. IV-A, the false alarm rate decays to zero as the size of the detection window T̄ tends to infinity. Hence, for a sufficiently large detection window size, the attacker's success rate could potentially tend to one.

Fig. 3. Detection rate of replay attacks with different recording lengths L, along with the alarm rate in the absence of an attack (false alarm rate). A semi-logarithmic scale for the detection window size T̄ is used. Simulation parameters: U_k = −0.85aY_k for all 1 ≤ k ≤ 800, a = 1.2, and {W_k} i.i.d. standard Gaussian; 500 Monte Carlo simulations were performed.
C. Adaptive Integrity Attack
As discussed in Sec. IV-A, the false alarm rate decays to zero as the size of the detection window 𝒯 tends to infinity; hence, for a sufficiently large detection window, the attacker's success rate can potentially tend to one. Indeed, such a behavior is observed in Fig. 4a, where the controller executes a control policy under the LQ cost criterion (3) with q = 1 and r = 0 (no penalty on the control action): the success rate of the AIA tends to one as 𝒯 → ∞, whereas the success rate of the replay attack saturates at a lower value. Fig. 4b depicts a similar comparison for U_k = −0.77aY_k, which corresponds to a control policy under the LQ cost criterion (3) with r = 1 and q = 2.239. In this case, the success rate of the AIA is lower than that for r = 0 (depicted in Fig. 4a). This can also be seen analytically by noting that the attacker uses the LS algorithm (16) of Sec. III-B.1 to construct Â, for which the deception probability of the attacker as T → ∞ (characterized in Thm. 1) is the one attained under the optimal control policy for r = 0 (U_k = −aY_k). Fig. 5 compares the attacker's success rate P_Dec^(1.2,800) as a function of the exploration-phase duration L for four different control policies (detailed in the caption of Fig. 5). As Fig. 5 shows, in agreement with Thm. 1, the deception probability decreases as the control policy moves farther away from the optimal control policy under an LQ control cost (3) with r = 0, when the attacker uses the LS algorithm (16) to construct Â.
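As a concrete illustration of the attacker's learning step, the sketch below implements one standard least-squares estimator for the scalar plant X_{k+1} = aX_k + U_k + W_k, namely Â = Σ_k (X_{k+1} − U_k)X_k / Σ_k X_k². The paper's estimator (16) is not reproduced in this excerpt, so this closed form is an assumption; it nevertheless shows how the squared estimation error shrinks with the exploration-phase duration L.

```r
set.seed(2)
a <- 1.2; n_mc <- 500

ls_error <- function(L, gain) {
  mean(replicate(n_mc, {
    X <- numeric(L + 1); U <- numeric(L); W <- rnorm(L)
    for (k in 1:L) {
      U[k] <- -gain * a * X[k]              # policy observed during the exploration phase
      X[k + 1] <- a * X[k] + U[k] + W[k]
    }
    a_hat <- sum((X[2:(L + 1)] - U) * X[1:L]) / sum(X[1:L]^2)
    (a_hat - a)^2
  }))
}

# Mean squared estimation error after exploration phases of different lengths,
# for the policy U_k = -0.85 a Y_k used in the simulations above
sapply(c(50, 200, 800), ls_error, gain = 0.85)
```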
D. Watermarking
We now evaluate the detection rate as a function of the power of the watermarking signal Γ_k under a variance test and an AIA that uses the LS algorithm (16) to construct Â. To this end, we fix the detection window to 𝒯 = T = 800, which guarantees a negligible false alarm probability as discussed in Sec. IV-A, and use Gaussian i.i.d. zero-mean watermarks {Γ_k} as in (21), with different powers; the control action is U_k = −0.85aY_k + Γ_k. The performance is depicted in Fig. 6.
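The same machinery can probe the double-edged effect of the watermark noted in the first bullet above: add an i.i.d. Gaussian watermark Γ_k to the control action, as in the simulated policy U_k = −0.85aY_k + Γ_k, and compare the attacker's LS error across watermark powers. The plant model and the LS form are the same assumptions as in the previous sketch.

```r
set.seed(3)
a <- 1.2; L <- 800; n_mc <- 500

ls_error_wm <- function(wm_sd) {
  mean(replicate(n_mc, {
    X <- numeric(L + 1); U <- numeric(L)
    W <- rnorm(L); G <- rnorm(L, sd = wm_sd)  # watermark Gamma_k
    for (k in 1:L) {
      U[k] <- -0.85 * a * X[k] + G[k]
      X[k + 1] <- a * X[k] + U[k] + W[k]
    }
    a_hat <- sum((X[2:(L + 1)] - U) * X[1:L]) / sum(X[1:L]^2)
    (a_hat - a)^2
  }))
}

sapply(c(0, 0.5, 1, 2), ls_error_wm)  # attacker's MSE as a function of watermark power
```

Because the attacker observes both X_k and U_k here, a stronger watermark enriches the excitation of the state and can reduce, rather than increase, the attacker's estimation error, consistent with the observation in the first bullet above.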
E. Random Open-Loop Gain
Thm. 1 provides a lower bound on the deception probability conditioned on A = a. Hence, by applying the law of total probability w.r.t. the PDF f_A of A, as in (12), we can apply the result of Thm. 1 to provide a lower bound also on the average deception probability for a random A. In this context, Fig. 7 compares the lower and upper bounds on the deception probability provided by Thm. 1 and Corol. 1, respectively, for A distributed uniformly over [−1, 1]. The bound (19) is valid only when the control input is not a function of the random gain A; hence, we assumed U_k = −0.045Y_k. In addition, a simple calculation shows that if the control input is proportional to Y_k, then the lower bound provided in Thm. 1 is not a function of the exploration-phase duration L, as is demonstrated in Fig. 7.
V. DISCUSSION AND FUTURE WORK

A. Online Learning
In our analysis we assumed that the exploration and exploitation phases are separate from each other. However, when the attacker hijacks the input and output control signals, it can continue "exploring" the plant (see Fig. 1b) and improving its estimation/identification of the plant. Securing systems against such online attacks is an interesting research problem.
B. Deception-Cost Tradeoff
Denote by Ū*_k the optimal unauthenticated control signal (recall Sec. III-C) and by J̄*_T its corresponding LQ cost. We further define an authenticated control signal U_k = f_k(Ū*_k) at time k, where f_k is some, possibly random, function; we denote the corresponding cost by J_T, and the increase in cost due to authentication, compared with the optimal (unauthenticated) control cost J̄*_T, by ΔJ_T ≜ J_T − J̄*_T. By invoking Thm. 2, we can optimize the upper bound (17) on the deception probability by solving the optimization problem of minimizing that bound over all {f_k} satisfying ΔJ_T ≤ θ.
C. Oblivious Controller
A more realistic scenario is one in which neither the attacker nor the controller is aware of the open-loop gain of the plant. In this scenario, both parties strive to learn the open-loop gain, posing two conflicting objectives to the controller, who, on the one hand, wishes to speed up its own learning process but, on the other hand, wishes to slow down the learning process of the eavesdropping attacker.
In such a situation, standard adaptive control methods [40], [47]- [49] are clearly insufficient, as no asymmetry between the attacker and the controller can be achieved under the setting of Sec. II. To create a security leverage over the attacker, the controller needs to utilize watermarking. Interestingly, a properly designed watermarking signal would enjoy a positive double effect by facilitating the learning process of the controller while hindering that of the attacker at the same time. Finally, we note that unless the controller is able to detect an MITM attack (the attacker's exploitation phase), its learning process will be hampered by the fictitious signal that is generated according to the virtual system of the attacker (4).
D. Vector Systems
We concentrated on scalar systems in this work. The extension of the established results to the vector (possibly partially observable) case is an interesting research avenue we would like to explore.
E. Continuous Testing
Throughout this work, we have assumed that the controller tests the integrity of the system at a specific time step T, which is taken, in turn, to be very large (tending to infinity). Since the controller does not know the exact time instant at which an attack might occur, a more realistic scenario would be that of continuous testing, i.e., one in which the integrity of the system is tested at every time step and in which the false alarm and deception probabilities are defined with a union over time. We leave this treatment for future research.
F. A Unified View of Cyber-Physical Security
A unified view of cyber-physical systems security is presented in Fig. 8. The different ways that switches φ1, φ2, φ3 and φ4, φ5, φ6 in Fig. 8 can be connected correspond to different attack scenarios, as detailed below.
• Denial-of-service (DoS) attack [23]–[26]. In this attack the attacker disrupts the service by overloading the system with superfluous requests. This can be modeled by an attacker that opens and closes switches φ1 → φ2 or φ4 → φ5, interchangeably, either in a random or in a deterministic fashion.
• False data-injection attack [18], [19]. In this attack, a false input signal is injected either to the plant or to the controller, corresponding to switches φ2 → φ3 and φ4 → φ5, or switches φ1 → φ2 and φ5 → φ6, being closed, respectively.
• Man-in-the-middle (MITM) attack. In this model the attacker intervenes in two places in the control loop, corresponding to switches φ2 → φ3 and φ5 → φ6 being closed. This scenario reduces to the one depicted in Fig. 1b and subsumes the replay attack, the statistical-duplicate attack, and the AIA discussed in this work.
APPENDIX I PROOF OF THEOREM 1
We break the proof of Thm. 1 into several lemmas that are stated and proved next.
In the case of linear two-phase attacks, in the limit of T → ∞, the empirical variance, which is used in the variance test (13), can be expressed in terms of the estimation error of the open-loop gain as follows.
Lemma 1. Consider any linear two-phase attack (4) with a fictitious-sensor reading power that satisfies (15), and some control policy {U_k}. Then, the variance test (13) reduces to a condition on the estimation error of the open-loop gain.

Proof. Since the exploitation phase of a linear two-phase attack starts at time k = L + 1, the claim follows using (1) and (6).

The following lemma establishes an upper bound on the squared estimation error (Â − a)² in terms of the state and disturbance sequences of the plant, when Â is constructed using LS estimation (16).
Lemma 3. If the attacker constructs Â using LS estimation (16), then the bound (25) below holds a.s.

Proof. The proof essentially follows [49, Lemma 1] for our scalar plant (1). As mentioned before, since W_k has a PDF for every time k, the probability that X_k = 0 is zero. Hence, the random variables S_k = U_k/X_k are well defined, except on a set of measure zero, for all k. Applying the Cauchy–Schwarz inequality, we deduce (25) a.s. Finally, noting that (a + S_k)X_k = aX_k + U_k = X_{k+1} − W_k and substituting this in (25) concludes the proof.
The proof of Thm. 1 now follows by combining the results of Lemmata 2 and 3, and noting that X_{k+1} − W_k = aX_k + U_k.
APPENDIX II PROOF OF THEOREM 2
The proof is based on continuum Fano inequalities [56]. We start by proving (17). Since the attacker observes the plant state and control input during the exploration phase, which lasts L time steps, and since A → (X_1^L, U_1^L) → Â constitutes a Markov chain, the continuous-domain version of Fano's inequality [56, Prop. 2] yields the desired bound whenever √(δβ) ≤ R. Using (11) and Lem. 2, we deduce lim_{T→∞} P_Det^(a,T) = P_a(|Â − a| ≥ δβ).
To prove (18), we further bound I(A; Z_1^L) from above via KL-divergence manipulations; similar arguments have been used previously, e.g., in [57]. The proof of the following lemma follows the arguments of [49] and is detailed here for completeness.
We next bound I(A; Z_k | Z_1^{k−1}) from above.
where we substitute the definition Z_k ≜ (X_k, U_k) to arrive at (28a); (28b) follows from the chain rule for mutual information and the Markovity assumption A → (X_k, Z_1^{k−1}) → U_k; we use the definition of the conditional mutual information in terms of the conditional KL divergence (recall Sec. I-A) to attain (28c) and (28d); the manipulation in (28e) is valid due to the condition P_{X_k|Z_1^{k−1}} ≪ Q_{X_k|Z_1^{k−1}} in the setup of the lemma; and (28f) follows from the non-negativity of the KL divergence.
Applying the bound of Lem. 4 to the first bound of the theorem (17) proves the second bound of the theorem. | 2018-09-17T05:22:38.000Z | 2018-09-17T00:00:00.000 | {
"year": 2019,
"sha1": "88a9dd508bd78f23836b81707e4b9b4f23b2acdb",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.ifacol.2019.12.183",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "cc3442efece8d0a8e2acf00a4e9b1c7f050e264f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
18992501 | pes2o/s2orc | v3-fos-license | Identification of biologically relevant subtypes via preweighted sparse clustering
Cluster analysis methods are used to identify homogeneous subgroups in a data set. Frequently one applies cluster analysis in order to identify biologically interesting subgroups. In particular, one may wish to identify subgroups that are associated with a particular outcome of interest. Conventional clustering methods often fail to identify such subgroups, particularly when there are a large number of high-variance features in the data set. Conventional methods may identify clusters associated with these high-variance features when one wishes to obtain secondary clusters that are more interesting biologically or more strongly associated with a particular outcome of interest. We describe a modification of the sparse clustering method of Witten and Tibshirani (2010) that can be used to identify such secondary clusters or clusters associated with an outcome of interest. We show that this method can correctly identify such clusters of interest in several simulation scenarios. The method is also applied to a large case-control study of TMD and a leukemia microarray data set.
Introduction
In biomedical applications, cluster analysis is frequently used to identify homogeneous subgroups in a data set that provide information about a biological process of interest. For example, in microarray studies of cancer, a common objective is to identify cancer subtypes that are predictive of the prognosis (survival time) of cancer patients (Bhattacharjee and others, 2001;Sorlie and others, 2001;van 't Veer and others, 2002;Rosenwald and others, 2002;Lapointe and others, 2004;Bullinger and others, 2004). In studies of chronic pain conditions, such as fibromyalgia or temporomandibular disorders (TMD), one may wish to develop a more precise case definition for the condition of interest by identifying subgroups of patients with similar clinical characteristics (Jamison and others, 1988;Bruehl and others, 2002;Davis and others, 2003;Hastie and others, 2005). However, conventional clustering methods (such as k-means clustering and hierarchical clustering) may produce unsatisfactory results when applied to these types of problems.
Identification of biologically relevant clusters in complex data sets presents several challenges.
It is common for the biologically relevant clusters to differ with respect to only a subset of the features. This is particularly true in genetic studies, where the majority of the genes are not associated with the outcome of interest. Moreover, it is possible that some other subset of the features form clusters that are not associated with the outcome of interest. In genetic studies, given that genes work in pathways, genes in the same pathway are likely to form clusters even if the pathway is not associated with the biological outcome of interest.
As a motivating example, consider the artificial data set represented in Figure 1. Observe that there are two sets of clusters in this data set: features 1-50 form one set of clusters, and features 51-250 form a separate set of clusters. Also, note that the difference between the cluster means is much greater for the clusters formed by features 51-250 than it is for the clusters formed by features 1-50. Thus, when conventional clustering methods are applied to this data set, they will most likely identify the clusters corresponding to features 51-250. However, if observations 1-100 are controls and observations 101-200 are cases, then we would be interested in the clusters corresponding to features 1-50, which would not be identified by most existing clustering methods.
See Nowak and Tibshirani (2008) for a more detailed discussion of this problem.
A number of methods exist for clustering data sets when the clusters differ with respect to only a subset of the features (Ghosh and Chinnaiyan, 2002; Friedman and Meulman, 2004; Bair and Tibshirani, 2004; Raftery and Dean, 2006; Pan and Shen, 2007; Koestler and others, 2010; Witten and Tibshirani, 2010). In particular, the method of Nowak and Tibshirani (2008) is designed specifically for the situation described in Figure 1. However, many of these methods are computationally intensive, and their running times may be prohibitive when applied to high-dimensional data sets. More importantly, with the exception of the method of Nowak and Tibshirani (2008), these methods only produce a single set of clusters. If the clusters identified by the method are not related to the biological outcome of interest, there is no simple way to identify the more relevant secondary clusters. Also, these methods generally do not consider an outcome variable or any other biological information that could help identify the clusters of interest. In other words, if these methods are applied to a data set similar to Figure 1, they are likely to produce clusters that are not related to the outcome of interest.
The problem of identifying clusters associated with an outcome variable has also not been studied extensively (Bair, 2013). In many situations, there is an outcome variable that is a "noisy surrogate" (Bair and others, 2006) for the true clusters. For example, in genetic studies of cancer, it is believed that there are underlying subtypes of cancer with different genetic aberrations, and some subtypes may be more responsive to treatment (Rosenwald and others, 2002; Bullinger and others, 2004; Bair and Tibshirani, 2004). These subtypes cannot be observed directly, but a surrogate variable (such as the patient's survival time) may be available. In other words, the outcome variable provides some information about the clusters of interest, but the true cluster assignments are still unknown for all observations. An artificial example of this situation is shown in Figure 2. In this example, the mean of the outcome variable for observations in cluster 2 is higher than the mean of the outcome variable for observations in cluster 1. However, there is considerable overlap in the distributions. Thus, higher values of the outcome variable increase the likelihood that an observation belongs to cluster 2, but any classifier that attempts to predict the cluster based on the outcome variable will have a high error rate.
We propose a novel clustering method that is applicable in situations where one wishes to identify secondary clusters associated with an outcome of interest (such as the scenario illustrated in Figure 1). It is based on a modification of the "sparse clustering" algorithm of Witten and Tibshirani (2010), which we call preweighted sparse clustering. It can be applied both to the general problem of identifying secondary clusters in data sets and to the special case where one wishes to identify clusters associated with an outcome variable. We will show that our proposed method produces more accurate results than competing methods in several simulated data sets and apply it to real-world studies of chronic pain and cancer.
Methods
This section will begin by briefly describing several existing methods for identifying clusters associated with a biological process of interest. We will then describe our proposed method as well as the data sets (both simulated and real) to which the proposed method will be applied.
Related Clustering Methods
2.1.1 Sparse Clustering

Suppose that we wish to cluster the n × p data matrix X, where n is the number of observations and p is the number of features. Assume that the clusters only differ with respect to some subset of the features. Witten and Tibshirani (2010) propose a method called "sparse clustering" to solve this problem. A brief description of sparse clustering is as follows: Let d_{i,i′,j} be any dissimilarity measure between observations i and i′ with respect to feature j. (Throughout the remainder of this discussion, we will assume that d_{i,i′,j} = (X_{ij} − X_{i′j})², the squared Euclidean distance between X_{ij} and X_{i′j}.) Then Witten and Tibshirani (2010) propose to identify clusters C_1, C_2, . . . , C_K and weights w_1, w_2, . . . , w_p that maximize the weighted between-cluster sum of squares (2.1), subject to the constraints Σ_j w_j² = 1, Σ_j |w_j| ≤ s, and w_j ≥ 0 for all j, where s is a tuning parameter and n_k is the number of elements in cluster k. To maximize (2.1), Witten and Tibshirani (2010) use the following algorithm:

1. Initialize the weights as w_1 = w_2 = · · · = w_p = 1/√p.
2. Fix the w_j's and identify C_1, C_2, . . . , C_K to maximize (2.1). This can be done by applying the standard k-means clustering method to the n × n dissimilarity matrix whose (i, i′) element is Σ_j w_j d_{i,i′,j}.

3. Fix the C_k's and identify w_1, w_2, . . . , w_p to maximize (2.1) subject to the constraints that Σ_j w_j² = 1 and Σ_j |w_j| ≤ s. See Witten and Tibshirani (2010) for a description of how the optimal w_j's are calculated.
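A compact R sketch of this alternating scheme is given below; for production use, Witten and Tibshirani's own sparcl package implements the same procedure. The closed-form update in step 3 soft-thresholds the per-feature between-cluster sums of squares and renormalizes, with the threshold chosen by binary search so that the ℓ1 constraint is met (s must lie in [1, √p]); the function names and the w_init argument, reused by the variants sketched later, are ours.

```r
soft <- function(x, d) sign(x) * pmax(abs(x) - d, 0)

sparse_kmeans <- function(X, K, s, w_init = NULL, n_iter = 10) {
  p <- ncol(X)
  w <- if (is.null(w_init)) rep(1 / sqrt(p), p) else w_init
  cl <- NULL
  for (it in seq_len(n_iter)) {
    # Step 2: k-means on features scaled by sqrt(w_j), which is equivalent
    # to clustering under the w-weighted squared Euclidean dissimilarity
    cl <- kmeans(sweep(X, 2, sqrt(w), `*`), centers = K, nstart = 20)$cluster
    # Per-feature between-cluster sum of squares (total SS minus within SS)
    a <- sapply(seq_len(p), function(j) {
      sum((X[, j] - mean(X[, j]))^2) -
        sum(tapply(X[, j], cl, function(v) sum((v - mean(v))^2)))
    })
    # Step 3: soft-threshold and l2-normalize; binary-search the threshold
    # so that the l1 norm of the normalized weights is at most s
    lo <- 0; hi <- max(a)
    for (b in 1:50) {
      mid <- (lo + hi) / 2
      w_try <- soft(a, mid)
      l1 <- if (any(w_try > 0)) sum(w_try) / sqrt(sum(w_try^2)) else 0
      if (l1 > s) lo <- mid else hi <- mid
    }
    w <- soft(a, hi)
    w <- w / sqrt(sum(w^2))
  }
  list(cluster = cl, w = w)
}
```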
This procedure requires a user to choose the number of clusters k and the tuning parameter s.
We will not discuss methods for choosing these parameters; see Witten and Tibshirani (2010) for an algorithm for choosing s, and see Tibshirani and others (2001), Sugar and James (2003), or Tibshirani and Walther (2005) for several possible methods for choosing k.
Although this method produces impressive results in a wide variety of problems, it tends to identify clusters that are dominated by highly correlated features with high variance, which may not be interesting biologically. It also does not consider the values of any outcome variables that may exist. Thus, in the situation illustrated in Figure 1, there is no guarantee that the clusters identified by this method will be associated with the outcome of interest.
Complementary Clustering
Methods have been developed to identify secondary clusters of interest that may be obscured by "primary" clusters consisting of large numbers of high-variance features (such as the situation illustrated in Figure 1). Nowak and Tibshirani (2008) proposed a method for uncovering such clusters, called complementary hierarchical clustering. Again assume that we wish to cluster the n × p data matrix X. The first step of this method performs traditional hierarchical clustering on X. This set of hierarchical clusters is used to generate a new matrix X′ that is defined to be the expected value of the residuals when each row of X is regressed on the group labels obtained when the hierarchical clustering tree is cut at a given height. The expected value is taken over all possible cuts. This has the effect of removing high-variance features that may be obscuring secondary clusters. Complementary hierarchical clustering is then performed on this modified matrix X′, yielding secondary clusters. Witten and Tibshirani (2010) proposed a modification of this procedure (called "sparse complementary clustering") using a variant of the methodology described in Section 2.1.1.
One significant shortcoming of these methods is the fact that they are only applicable to hierarchical clustering. To our knowledge there are currently no published methods for identifying secondary clusters based on partitional clustering methods (such as k-means clustering).
Semi-Supervised Clustering Methods
The situation where the observed outcome variable is a noisy surrogate variable for underlying clusters is very common in real-world problems. However, there are relatively few clustering methods that are applicable for this type of problem (Bair, 2013). Bair and Tibshirani (2004) propose a method that they called "supervised clustering." Supervised clustering performs conventional k-means clustering or hierarchical clustering using only a subset of the features. The features are selected by identifying the features that have the strongest univariate association with the outcome variable. For example, if the outcome is dichotomous, one would calculate a t-statistic for each feature to test the null hypothesis of no association between the feature and the outcome and then perform clustering using only the features with the largest (absolute) t-statistics. Koestler and others (2010) propose a method called "semi-supervised recursively partitioned mixture models" (or "semi-supervised RPMM").
This method is similar to the supervised clustering method of Bair and Tibshirani (2004) in that one first calculates a score for each feature (such a t-statistic) that measures the association between that feature and the outcome and then performs clustering using only the features with the largest univariate scores. The difference between semi-supervised RPMM and supervised clustering is that semi-supervised RPMM applies the RPMM algorithm of Houseman and others (2008) to the surviving features rather than a more conventional k-means or hierarchical clustering model.
These methods have successfully identified clinically relevant subtypes of cancer in many different studies (Bullinger and others, 2004; Chinnaiyan and others, 2008; Koestler and others, 2010). However, these methods have significant limitations. In particular, both supervised clustering and semi-supervised RPMM require a user to choose the number of features that are used to form the clusters, and the results of these methods can depend heavily on the number of "significant" features selected. Moreover, it is very unlikely that these methods will successfully identify the truly significant features that define the clusters while excluding irrelevant features.
Preweighted Sparse Clustering
To overcome the shortcomings of these methods, we propose the following simple modification of sparse clustering, which we call preweighted sparse clustering. The preweighted sparse clustering algorithm is described below:

1. Run the sparse clustering algorithm, as described previously.
2. For each feature j, calculate the F-statistic F_j (and associated p-value p_j) for testing the null hypothesis that the mean value of feature j does not vary across the clusters.
3. For each feature j, define w_j = 1/√m if p_j > α and w_j = 0 if p_j ≤ α, where m is the number of p_j's such that p_j > α.
4. Run the sparse clustering algorithm using these w j 's (beginning with step 2) and continuing until convergence.
In other words, the preweighted sparse clustering algorithm first performs conventional sparse clustering. It then identifies features whose mean values differ across the clusters. Then the sparse clustering algorithm is run a second time, but rather than giving equal weights to all features as in the first step, this preweighted version of sparse clustering assigns a weight of 0 to all features that differed across the first set of clusters. The motivation is that this procedure will identify secondary clusters that would otherwise be obscured by clusters which have a larger dissimilarity measure (such as the situation illustrated in Figure 1).
This procedure requires one to choose a p-value threshold α for deciding which features should be given nonzero weight. An obvious choice is α = 0.05/p, where p is the number of features. However, the user may choose a less or more stringent cutoff depending on the sample size and other considerations. Also note that this procedure may be repeated multiple times if the secondary clusters identified are still unrelated to the biological outcome of interest.
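Under the same assumptions as the sketch in Section 2.1.1 (our hypothetical sparse_kmeans with its w_init argument), the preweighting step can be written as follows; the per-feature F-tests are one-way ANOVAs of each feature against the first-round cluster labels.

```r
preweighted_sparse_kmeans <- function(X, K, s, alpha = 0.05 / ncol(X)) {
  fit1 <- sparse_kmeans(X, K, s)                  # step 1: ordinary sparse clustering
  cl1 <- factor(fit1$cluster)
  pvals <- apply(X, 2, function(x)                # step 2: one-way ANOVA F-test p-values
    anova(lm(x ~ cl1))[["Pr(>F)"]][1])
  keep <- pvals > alpha                           # step 3: zero out the features
  w0 <- ifelse(keep, 1 / sqrt(sum(keep)), 0)      #   that defined the primary clusters
  sparse_kmeans(X, K, s, w_init = w0)             # step 4: rerun from step 2
}
```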
Supervised Sparse Clustering
The preweighted sparse clustering algorithm described above is an unsupervised method, since it does not require or use an outcome variable. If an outcome variable is available and the objective is to identify clusters associated with the outcome variable, one may use the following variant of preweighted sparse clustering to incorporate such data, which we call supervised sparse clustering.
The supervised sparse clustering procedure is described below: 1. Let T j be a measure of the strength of the association between the jth feature and the outcome variable. (If the outcome variable is dichotomous, T j could be a t-statistic, or if the outcome variable is a survival time, T j could be a univariate Cox score.) Let T (1) , T (2) , . . . , T (p) denote the order statistics of the T j 's.
2. Run the sparse clustering algorithm with initial weights w_1, w_2, . . . , w_p, where w_j = 1/√m if T_j is among the m largest scores and w_j = 0 otherwise.

3. Repeat steps 2 and 3 of the standard sparse clustering algorithm until convergence.
In other words, supervised sparse clustering chooses the initial weights for the sparse clustering algorithm by giving nonzero weights to the features that are most strongly associated with the outcome variable. Note that no initial clustering step is required. This is similar to the semi-supervised clustering method of Bair and Tibshirani (2004) and the semi-supervised RPMM method of Koestler and others (2010).
The supervised sparse clustering procedure requires the choice of a tuning parameter m, which is the number of features to be given nonzero weight in the first step. Our experience suggests that the procedure tends to give very similar results for a wide variety of different values of m; therefore, optimizing the procedure with respect to this tuning parameter is unnecessary. As a default we suggest m = √ p, where p is the number of features. We will use this default throughout this manuscript unless otherwise noted.
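Again reusing the hypothetical sparse_kmeans from Section 2.1.1, a minimal sketch for a dichotomous outcome is shown below; two-sample t-statistics play the role of the T_j's, and univariate Cox scores would replace them for a survival outcome.

```r
supervised_sparse_kmeans <- function(X, y, K, s, m = floor(sqrt(ncol(X)))) {
  # Step 1: two-sample t-statistic per feature for the dichotomous outcome y
  tstat <- apply(X, 2, function(x) abs(t.test(x ~ factor(y))$statistic))
  # Step 2: nonzero initial weights only for the m top-scoring features
  top <- seq_along(tstat) %in% order(tstat, decreasing = TRUE)[1:m]
  w0 <- ifelse(top, 1 / sqrt(m), 0)
  # Step 3: iterate the standard sparse clustering updates to convergence
  sparse_kmeans(X, K, s, w_init = w0)
}
```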
Simulated Data Sets
We generated a series of simulated data sets to evaluate the performance of preweighted sparse clustering and compare it to the complementary clustering method of Nowak and Tibshirani (2008) and the complementary sparse clustering method of Witten and Tibshirani (2010). We generated simulated data sets similar to those in Nowak and Tibshirani (2008), who generated a series of p × 12 data matrices in which the ε_ij's are iid standard normal random variables. See Figure 3 for a graphical illustration of this data set. The first p_e rows represent the "primary clusters" in the data set and the final p_e rows represent the "secondary clusters." We considered four simulation scenarios (similar to the four simulation scenarios considered in Nowak and Tibshirani (2008)). We let a = 6 in all four scenarios. Unless otherwise specified, we also let b = 3, σ = 1, and n_a = 6 for each simulation scenario. For the first three scenarios, 1000 matrices were generated with p = 50 and p_e = 20. In the first scenario, we varied the value of b. In the second scenario, we varied the value of σ, and in the third scenario, we varied n_a. In the final scenario, we generated 100 matrices with p = 2000 and varied the value of p_e. (The first three scenarios are identical to the simulations of Nowak and Tibshirani (2008); the final scenario was modified slightly for computational reasons.) Preweighted sparse clustering, complementary clustering, and complementary sparse clustering were applied to each simulated data set. Each method identified a set of primary clusters and a set of secondary clusters, and the number of times each method identified the correct clusters was recorded.
We also generated a series of 1000 simulated data sets to test the supervised sparse clustering algorithm. Specifically, we generated 1000 5000 × 200 data matrices X, where I(x) is an indicator function, the u_ij's are iid uniform random variables on (0, 1), and the ε_ij's are iid standard normal, as before. We also defined a binary outcome variable y that is a "noisy surrogate" for the true clusters: y is related to the true clusters, but 30% of the y_i's are misclassified. This is consistent with what we might expect to observe in a study of chronic pain, where the only observed outcome variable is a patient's subjective pain report, which is not always a reliable indicator of case status.
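A minimal sketch of the outcome-generation step, keeping only the stated 30% misclassification rate (the variable names are ours), looks as follows.

```r
set.seed(4)
n <- 200
true_cluster <- rep(0:1, each = n / 2)   # latent labels driving the informative features
flip <- runif(n) < 0.3                   # 30% of outcome labels disagree with the truth
y <- ifelse(flip, 1 - true_cluster, true_cluster)
mean(y != true_cluster)                  # empirical misclassification rate, about 0.3
```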
The objective of this simulation is to determine if supervised sparse clustering can correctly identify the clusters that are associated with the y_i's (as opposed to the other sets of spurious clusters). Supervised sparse clustering was applied to each of the 1000 simulated data sets. Three other methods were also considered, namely conventional sparse clustering, the semi-supervised clustering method of Bair and Tibshirani (2004), and conventional 2-means clustering on the first three principal components of the data set. We also attempted to apply the semi-supervised RPMM method of Koestler and others (2010) to these simulated data sets, but in each case the procedure returned a singleton cluster.
OPPERA Data
We applied our preweighted sparse clustering method to a data set collected in the Orofacial Pain: Prospective Evaluation and Risk Assessment (OPPERA) study. The second OPPERA data set was the OPPERA prospective cohort study, which includes all 3258 initially TMD-free individuals. See Bair and others (2013) for a more detailed description of this cohort. This data set includes the same 116 predictor variables that were considered in the case-control data set. However, the outcome variable was the time until the development of first-onset TMD. Since some participants did not develop first-onset TMD before the end of the follow-up period, the outcome was treated as a censored survival time.
In our analysis of the OPPERA case-control data, we applied the preweighted sparse clustering algorithm as outlined in Section 2.2. Conventional sparse 2-means clustering was applied to the data set, after which the features that showed the strongest mean differences across the clusters were given a weight of 0 when the preweighted version of sparse clustering was applied. The preweighted version was then applied for a second time in the same manner to identify tertiary clusters. All features were normalized to have mean 0 and standard deviation 1 prior to performing the clustering. The associations between the primary and secondary clusters and chronic TMD were evaluated by calculating odds ratios and performing a chi-square test of the null hypothesis of no association between the clusters and TMD case status. We then applied preweighted sparse clustering to the OPPERA prospective cohort data set to produce a second set of primary and secondary clusters and evaluated the association between these cluster labels and the time until first-onset TMD using Cox proportional hazards models. Complementary hierarchical clustering was also applied to both data sets for comparison. (Complementary sparse hierarchical clustering was not considered for computational reasons.) We also applied our supervised sparse clustering, sparse clustering, semi-supervised clustering, and clustering on the (first five) principal component scores to the OPPERA case-control data and the prospective cohort data. We let k = 3 for the case-control data and k = 2 for the prospective cohort data. Both data sets were randomly partitioned into a training set and a test set with an equal number of chronic TMD cases (or cases of first-onset TMD) in both partitions.
Each clustering method was applied to the training data and a lasso model (Tibshirani, 1996; Friedman and others, 2010) was fit to the training data to predict the resulting clusters. This lasso model was then used to predict the clusters on the test data. The association between the clusters predicted by each of these methods and chronic TMD was again evaluated by calculating the odds ratios and performing chi-square tests, and the association between the predicted clusters and first-onset TMD was evaluated by fitting a Cox proportional hazards model.
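The lasso step that carries cluster labels from the training partition to the test partition can be sketched with the glmnet package; X_train, X_test, cl_train, and tmd_test are placeholder names for the trait matrices, the training-set cluster labels, and the test-set TMD case status.

```r
library(glmnet)
# Multinomial lasso fit to the training-set cluster labels, then used to
# assign each test-set observation to a (predicted) cluster
fit <- cv.glmnet(as.matrix(X_train), factor(cl_train), family = "multinomial")
cl_test <- predict(fit, as.matrix(X_test), s = "lambda.min", type = "class")
# Association between the predicted clusters and chronic TMD case status
chisq.test(table(cl_test, tmd_test))
```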
Leukemia Microarray Data
We applied our supervised sparse clustering algorithm to the leukemia microarray data of Bullinger and others (2004). This data set includes data for 116 subjects with acute myeloid leukemia. Gene expression data for 6283 genes are recorded for each subject, as well as survival times and outcomes.
Survival times ranged from 0 to 1625 days, with an average time of 407.1 days. The objective was to identify genetic subtypes (i.e. clusters) using the gene expression data that could be used to predict the prognosis of leukemia patients.
We applied our supervised sparse 2-means clustering method to this data set as well as conventional sparse clustering, semi-supervised clustering, and clustering on the PCA scores. Before applying any of the clustering methods, the data were randomly partitioned into a training set and a test set, each of which consisted of 58 observations. Each clustering method was applied to the training data. To identify the "most significant" genes prior to applying supervised sparse clustering and semi-supervised clustering, the association between each gene and survival was evaluated by calculating the univariate Cox score for each gene. See Beer and others (2002) or Bair and Tibshirani (2004) for more information. For each set of clusters, a nearest shrunken centroid model (Tibshirani and others, 2002) was fit to the clusters in the training data and then applied to the test data to predict cluster assignments on the test data. The association between the predicted clusters in the test set and survival was evaluated using Cox proportional hazards models for each clustering method.
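The univariate Cox scores and the final test-set evaluation can be sketched with the survival package; expr_train (genes in rows), the Surv objects surv_train and surv_test, and the predicted test-set labels cl_test are placeholder names.

```r
library(survival)
# Univariate Cox score (Wald z-statistic) for each gene, used to select the
# "most significant" genes before the supervised clustering methods
cox_score <- apply(expr_train, 1, function(g)
  summary(coxph(surv_train ~ g))$coefficients[, "z"])
# Hazard ratio and p-value for the clusters predicted on the test set
summary(coxph(surv_test ~ factor(cl_test)))
```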
Simulated Data Sets
The results of the first set of simulation scenarios are shown in Tables 1, 2, 3, and 4.
OPPERA Data
We applied the preweighted sparse 2-means clustering method to the OPPERA case-control data.
The weights for the primary, secondary, and tertiary clusters are shown in Figure 4. Observe that the measures of autonomic function had the largest feature weights for the primary clusters, whereas the measures of psychological distress had the largest feature weights for the secondary clusters. Measures of thermal pain had the largest feature weights for the tertiary clusters. Thus, the preweighted sparse clustering method revealed a biologically meaningful set of secondary and tertiary clusters that were not identified by the conventional sparse clustering algorithm.
The associations between chronic TMD and the primary and secondary clusters identified by preweighted sparse 2-means clustering and complementary hierarchical clustering are shown in Table 6. Observe that there is a significantly higher proportion of TMD cases in cluster 2 for both the primary and secondary clusters, but the association is stronger in the secondary clusters (OR = 1.9, p = 3.2 × 10⁻⁵) than in the primary clusters (OR = 1.6, p = 0.002). Thus, if the primary objective is to identify clusters associated with TMD, the secondary clusters are preferable to the primary clusters, indicating that the secondary clusters are not only biologically meaningful but may be more clinically relevant than the primary clusters. The clusters identified by complementary hierarchical clustering were also associated with chronic TMD, although they were more weakly associated with TMD than the clusters identified by preweighted sparse clustering.
We also applied the preweighted sparse clustering and complementary hierarchical clustering methods to the OPPERA prospective cohort data. The results of this analysis are summarized in Table 7. The primary clusters identified by preweighted sparse clustering were not significantly associated with first-onset TMD (HR = 1.2, p = 0.09). However, the secondary clusters were associated with first-onset TMD (HR = 1.9, p = 6.5 × 10⁻⁷). Such a result suggests that clusters associated with an outcome of interest (first-onset TMD in this scenario) may be obscured by a set of clusters unrelated to the outcome of interest. The preweighted sparse clustering method was able to identify these obscured clusters. Neither the primary nor the secondary clusters identified by complementary hierarchical clustering were significantly associated with first-onset TMD.
Finally, we applied supervised sparse clustering (as well as three other variants of clustering discussed earlier) to the OPPERA case-control data and prospective cohort data. The results are shown in Tables 8 and 9. While all four methods identified clusters that were associated with chronic TMD, the clusters produced by supervised sparse clustering and supervised clustering were much more strongly associated with TMD than the clusters produced by the methods that did not consider an outcome variable. Similarly, the two supervised clustering methods identified clusters associated with first-onset TMD whereas the clusters identified by the other two methods were not associated with first-onset TMD. This suggests that clustering methods that consider an outcome variable may do a better job of identifying biologically relevant clusters than methods that do not consider this information. Note that Tables 8 and 9 show the results for predicted clusters on an independent test data set, so they cannot be attributed to overfitting.
Leukemia Microarray Data
For each clustering method, the hazard ratio and associated p-values for the predicted test-set clusters are shown in Table 10. All four methods produced clusters that were associated with patient survival, although the clusters produced by supervised sparse clustering were more strongly associated with survival than the clusters produced by the other methods. This indicates that supervised sparse clustering can identify biologically meaningful and clinically relevant clusters in high-dimensional biological data sets. The fact that the predicted clusters were associated with survival on an independent test set suggests that this finding is not merely the result of overfitting.
Discussion
Cluster analysis is frequently used to identify subtypes in complex data sets. In many cases, the primary objective of the cluster analysis is to identify clusters that offer new insight into a biological question of interest or that can be used to more precisely phenotype (and hence diagnose and treat) a particular disease. However, in many cases, the clusters identified by conventional clustering methods are dominated by a subset of the features that are not interesting biologically or clinically.
Despite the fact that this problem is very common in cluster analysis, relatively few methods have been proposed to identify clusters in these situations. As noted earlier, the idea of "complementary clustering" was first proposed by Nowak and Tibshirani (2008), and Witten and Tibshirani (2010) proposed an alternative method based on sparse clustering. However, these methods have several drawbacks. They can only be used with hierarchical clustering. To our knowledge, our proposed method is the first complementary clustering algorithm that may be applied to k-means clustering or other clustering methods. Although we have only considered preweighted k-means clustering in this study, our methodology is easily applicable to sparse hierarchical clustering or any other clustering method that can be used within the sparse clustering framework of Witten and Tibshirani (2010). Furthermore, the complementary sparse hierarchical clustering method can be computationally intractable when applied to data sets with numerous observations. (We attempted to apply this method to the OPPERA data, but we were forced to abort the procedure as it was using over 40 GB of memory.) Finally, as observed in Sections 3.1 and 3.2, preweighted sparse clustering can identify clinically relevant clusters in some situations where these existing methods fail to identify such clusters.
The problem of finding clusters that are associated with an outcome variable has also not been studied extensively. Previously proposed methods include the semi-supervised clustering method of Bair and Tibshirani (2004) and the semi-supervised RPMM method of Koestler and others (2010). Semi-supervised clustering produces useful results in a variety of circumstances, but the clusters produced by semi-supervised clustering can vary depending on the choice of tuning parameters and sometimes have poor reproducibility. Semi-supervised clustering can also fail to identify the true clusters of interest when the association between these clusters and the observed outcome is noisy, as we saw in Section 3.1. Likewise, a drawback of semi-supervised RPMM is that it can fail to detect that clusters exist in a data set. (Indeed, semi-supervised RPMM produced a singleton cluster in each of the examples we considered in the present study.) Supervised sparse clustering has been shown to overcome these shortcomings and can produce reproducible clusters more strongly associated with the outcome in some situations (see Section 3.3).
One shortcoming of the proposed preweighted sparse clustering is the fact that the clusters obtained may vary with respect to the choice of the tuning parameter s in the sparse clustering algorithm (see Section 2.1.1). The question of how to choose this tuning parameter has not been studied extensively. Witten and Tibshirani (2010) propose a method for choosing s based on permuting the columns of the data, but in our experience this method tends to produce values of s that are too large, which sometimes results in clusters that are not associated (or less strongly associated) with the outcome of interest. Choosing a smaller value of s may produce better results.
The question of how to choose this tuning parameter is an area for further study.
Despite this limitation, we believe that preweighted sparse clustering and supervised sparse clustering are powerful tools for solving an understudied problem. These methods can be used to identify biologically meaningful clusters in data sets that may not be detected by existing methods. More importantly, these methods can be used to identify clinically relevant subtypes of diseases like TMD and cancer, ultimately leading to better treatment options.

Fig. 2. Artificial example of a situation where the outcome variable is a "noisy surrogate" for the true clusters. In this artificial example, the density functions of the outcome variable for observations in each of two clusters are shown above. Observations in cluster 2 are more likely to have higher values of the outcome variable than observations in cluster 1, but there is considerable overlap between the two groups. Thus, classifying observations to clusters based solely on the outcome variable will result in a high misclassification error rate.

Table 2. Results of the first simulation when the values of σ were varied. The clusters associated with the first p_e rows were defined to be "Effect 1," and the clusters associated with the final p_e rows were defined to be "Effect 2."

Table 3. Results of the first simulation when the values of n_a were varied. The clusters associated with the first p_e rows were defined to be "Effect 1," and the clusters associated with the final p_e rows were defined to be "Effect 2."

Table 5. Results of the second simulation study. The following methods were applied to the simulated data set described in Section 2.4: 1) supervised sparse clustering, 2) sparse clustering, 3) supervised clustering, 4) 2-means clustering on the top 3 principal component (PCA) scores. The mean number of misclassified observations (and associated standard errors) are shown for each method.

Table 6. The association between chronic TMD and the primary and secondary clusters identified by preweighted sparse clustering and complementary hierarchical clustering on the OPPERA case-control data. In each case, the cluster with the lower proportion of TMD cases was called cluster 1.

Table 7. The association between the incidence of first-onset TMD and the primary and secondary clusters identified by the preweighted sparse clustering method and complementary hierarchical clustering on the OPPERA prospective cohort data. A Cox proportional hazards model evaluated the null hypothesis of no association between TMD incidence and the cluster assignments. The hazard ratio and associated p-values of each cluster are reported below.

Table 8. Four different clustering methods were applied to the OPPERA case-control training data. Each observation in the test data was assigned to a cluster by fitting a lasso model to predict the clusters on the training data and applying this model to the test data. The table below shows the association between each (predicted) cluster and chronic TMD on the test data. In each case, the cluster with the lowest proportion of TMD cases was called cluster 1 and the cluster with the highest proportion of TMD cases was called cluster 3. The odds ratio for TMD in each cluster (relative to cluster 1) and corresponding p-values were also calculated.
"year": 2013,
"sha1": "5b1515ab6052288d9c76201b48cd8ce9022574dd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "dd6a8c9d4b3204991d762ee80c2d2c629e8c5723",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science",
"Biology"
]
} |
237278969 | pes2o/s2orc | v3-fos-license | Grazing alters species relative abundance by affecting plant functional traits in a Tibetan subalpine meadow
Abstract Domestic livestock grazing has caused dramatic changes in plant community composition across the globe. However, the response of plant species abundance in communities subject to grazing has not often been investigated through a functional lens, especially for belowground traits. Grazing directly impacts aboveground plant tissues, but the relationships between above- and belowground traits, and their influence on species abundance, are also not well known. We collected plant trait and species relative abundance data in grazed and nongrazed meadow plant communities in a species-rich subalpine ecosystem of the Qinghai–Tibet Plateau. We measured three aboveground traits (leaf photosynthesis rate, specific leaf area, and maximum height) and five belowground traits (root average diameter, root biomass, specific root length, root tissue density, and specific root area). We tested for shifts in the relationship between species relative abundance and all measured traits under grazing compared with the nongrazed meadow. We also compared the power of above- and belowground traits to predict species relative abundance. We observed a significant shift from a resource conservation strategy to a resource acquisition strategy. Moreover, this resource conservation versus resource acquisition trade-off can also determine species relative abundance in the grazed and nongrazed plant communities. Specifically, abundant species in the nongrazed meadow had aboveground and belowground traits that are associated with high resource conservation, whereas aboveground and belowground traits that are correlated with high resource acquisition determined species relative abundance in the grazed meadow. However, belowground traits were found to explain more variance in species relative abundance than aboveground traits in the nongrazed meadow, while aboveground and belowground traits had comparable predictive power in the grazed meadow. We show that species relative abundance in both the grazed and the nongrazed meadows can be predicted by both aboveground and belowground traits associated with a resource acquisition versus conservation trade-off. More importantly, we show that belowground traits have higher predictive power for species relative abundance than aboveground traits in the nongrazed meadow, whereas in the grazed meadows, above- and belowground traits had comparably high predictive power.
| INTRODUCTION
Understanding the impact of disturbance on community assembly remains a key challenge in ecology and conservation science (Mouillot et al., 2013). Grazing by domestic livestock is one of the most globally widespread land uses and has led to dramatic changes in plant community composition and species diversity (Chapin et al., 2000; Li et al., 2017). Thus, there is an urgent need to better understand how grazing alters plant community assembly if we are to restore plant diversity in grazed meadows.
The trade-off between resource acquisition and resource conservation is key to explaining variation in plant community composition, and plant species with high resource acquisition generally increase in relative abundance after grazing (Augustine & McNaughton, 1998; Díaz et al., 2001; Evju et al., 2009). In contrast, plant species with high resource conservation tend to dominate nongrazed meadows. Indeed, a meta-analysis has found that, at the global scale, annual species dominate grazed meadows, whereas perennial species dominate nongrazed meadows (Díaz et al., 2007). Usually, annual species tend to have high resource acquisition, whereas perennial species tend to follow a resource conservation strategy (Roumet et al., 2006, 2016). The shift from resource conservation in the nongrazed meadow to resource acquisition in the grazed meadow is reflected in changes in functional traits (presumed adaptive morphological and physiological traits; Strauss & Agrawal, 1999). For example, under high grazing pressure, higher resource acquisition rates tend to be associated with shorter maximum height and higher specific leaf area (Díaz et al., 2001; Li et al., 2017), whereas under nongrazed conditions, high resource conservation rates tend to be associated with lower specific leaf area and taller stature as an adaptation to resource limitation (Díaz et al., 2007).
Thus, functional traits can have high power in predicting grazing-induced variations in species abundances (Niu et al., 2010). However, other studies analyzing the response of Australian dry shrublands and woodlands found that functional traits show very low power to predict variation in plant species abundance in response to grazing (Vesk et al., 2004). Moreover, research in West African savannas, semiarid Australian shrublands, subhumid grasslands, and subalpine and alpine meadows (Cingolani et al., 2005; de Bello et al., 2005; Díaz et al., 2007; Kahmen et al., 2002; Klimesova et al., 2008; Kühner & Kleyer, 2008; Li et al., 2013; Meers et al., 2008; Niu et al., 2010; Wang et al., 2018; Yang et al., 2015) has failed to reach a consensus that functional traits are indeed good indicators of grazing-induced differences in species abundance.
Two main problems persist when investigating trait–abundance relationships: comparisons between grazed and nongrazed communities have focused mainly on aboveground traits (Niu et al., 2010), and belowground traits remain overlooked in grazing-related studies and global databases (Bergmann et al., 2017).
Belowground traits have been found to be highly correlated with the resource acquisition versus resource conservation trade-off (de la Riva et al., 2018; Prieto et al., 2015) and thus may also play a key role in mediating grazing-induced alterations in community composition (Klumpp et al., 2009; McInenly et al., 2010). However, belowground traits do not necessarily fall along the same strategy gradients as aboveground traits (Bergmann et al., 2017, 2020). Moreover, few studies have reported strong belowground trait–abundance relationships in both grazed and nongrazed meadows. Thus, quantifying relationships between species relative abundance and both aboveground and belowground traits in grazed and nongrazed communities may help to understand how grazing modifies community assembly processes. Compared with aboveground traits, belowground traits are much harder to measure, but they can better capture plant resource conservation, as they more directly reflect a plant's ability to acquire limiting nutrients (e.g., nitrogen; Bardgett et al., 2014). However, it has been found that both aboveground and belowground traits can shift from resource conservation in the nongrazed meadow to resource acquisition in the grazed meadow (Roumet et al., 2016). Moreover, strong correlations between aboveground and belowground traits have been reported (de la Riva et al., 2018).
Aboveground traits are thus assumed to be a good proxy for belowground traits (Klumpp et al., 2009). If this is true, aboveground traits alone are sufficient to understand how grazing-induced shifts in the resource acquisition versus resource conservation trade-off affect community assembly. If this is not true, both belowground and aboveground traits should be measured; otherwise, using aboveground traits alone cannot reveal how grazing-induced shifts in resource acquisition and resource conservation strategies influence community assembly. Quantifying the relative contribution of aboveground and belowground traits to species relative abundance can help verify this, but nearly no study has investigated the relative importance of aboveground and belowground traits in predicting species relative abundance in nongrazed and grazed meadows.
| KEYWORDS
aboveground and belowground traits, grazing, resource-based trade-offs, restoration, subalpine meadows, trait–abundance relationships

Here, we compared plant functional traits in a nongrazed meadow and a grazed meadow on the Qinghai–Tibet Plateau, both of which have 30 years of well-documented grazing history. We collected an extensive dataset consisting of: (a) three plant aboveground traits (maximum photosynthesis rate, specific leaf area, and maximum height); (b) five plant belowground traits (root diameter, root biomass, specific root length, root tissue density, and specific root area); and (c) plant species relative abundance. The selection of these traits was based on their strong associations with plant resource acquisition and resource conservation strategies (de la Riva et al., 2018). For example, shorter plant maximum height and higher specific leaf area are associated with high resource acquisition despite high grazing pressure (Díaz et al., 2001; Li et al., 2017).
In contrast, in the nongrazed condition, lower specific leaf area and higher plant maximum height may be a strategy for high resource conservation to adapt to resource limitation (Westoby et al., 1999). The grazed communities generally have higher light availability (Rook et al., 2004) than the nongrazed communities and thus may have different photosynthesis rate–abundance relationships. Moreover, a higher photosynthesis rate is indicative of higher growth rates (Kirschbaum, 2011) and thus may reflect plant resource acquisition. Belowground traits often exhibit a resource acquisition versus resource conservation trade-off, with resource acquisition warranting high specific root length and specific root area but low root tissue density, root biomass, and root diameter, and resource conservation resulting in high root tissue density, root biomass, and root diameter but low specific root length and specific root area (Feng et al., 2018; Klumpp et al., 2009; McInenly et al., 2010; Roumet et al., 2016). However, it remains unknown whether these aboveground and belowground traits can determine species relative abundance in the nongrazed and grazed meadows.
Employing this extensive dataset, our goal was to evaluate the effect of excluding grazing from an ecosystem with a long history of both managed grazing and utilization by native herbivores.
Specifically, we aimed to quantify: (a) whether grazing and grazing removal alter plant aboveground and belowground traits such that they shift between resource conservation and resource acquisition on the Qinghai-Tibet Plateau; (b) whether the resource acquisition versus resource conservation trade-off reflected in both aboveground and belowground traits can determine species relative abundance in the nongrazed and grazed meadows; and (c) whether aboveground and belowground traits have comparable predictive power for species relative abundance in the grazed and nongrazed meadows.
| Study site
Field sampling was conducted in two species-rich subalpine meadows located in Luqu, in the eastern section of the Qinghai-Tibet Plateau (Figure S1a). Mean annual precipitation in the region is ~530 mm, 70% of which falls from June to August.
The mean annual temperature is 2.4°C. Vegetation at the study sites is dominated by Elymus nutans, Kobresia humilis, and Thermopsis lanceolata, and soils are classified as alpine meadow soils. The focal meadows lie within a large area of 4,000 ha.
| Sampling
We sampled one nongrazed meadow (control) and one grazed meadow. The land-use histories over the last 30 years for the grazed and nongrazed meadows were obtained by interviewing local farmers. In 2000, the Chinese government fenced part of this grazed meadow for protection, thereby creating the nongrazed meadow. Thus, the nongrazed (fenced) meadow had not been grazed for 18 years prior to the study.
Yak grazing is the only disturbance, and grazing by small mammals has not been reported or observed in either the grazed or the nongrazed meadow.
We placed 30 0.25-m² quadrats in each of the grazed and nongrazed plots (Figure S1b) in August 2018 (peak growing season).
Quadrats were regularly spaced at 20-m intervals along six parallel transects. The grazed and nongrazed plots were about 500 m apart.
Species relative abundance was measured using both the number of individuals and biomass (Morlon et al., 2009), because many meadow plant species are clonal and aboveground biomass may better reflect species relative abundance than the number of individuals alone.
To quantify aboveground biomass for each species, we harvested all aboveground parts of each species and oven-dried them at 80°C for 2 days before weighing. Species relative abundance was then calculated as the ratio of the aboveground biomass of each species to the total aboveground biomass of all species across all 30 0.25-m² quadrats in the nongrazed and grazed meadows, respectively (Table S2). Measurements were done as described in Bergmann et al. (2017), and further details are given in the Supplementary Material (Text S1). Specifically, the 30 quadrats were used to generate a species list, and each species was then assigned a mean value for each trait based on measurements of 15 individuals per species. This process was repeated for both the grazed and the nongrazed treatments. For the 42 species present in both the nongrazed and grazed meadows, we measured traits in both meadows. In this way, we aimed to ensure that intraspecific variation was appropriately incorporated in our analyses.
| Statistical analysis
First, we used Pearson correlations to examine the bivariate relationships among all aboveground and belowground plant traits for all species found in the nongrazed and grazed meadows. Second, we ran a principal component analysis (PCA) to evaluate whether species in the grazed and nongrazed meadows could be significantly differentiated by the eight aboveground and belowground traits. Third, we tested whether species relative abundance for all species in the nongrazed and grazed meadows could be predicted by the PCA axes (PC1 and PC2) using linear regression. We also examined whether the relative abundance of each plant species could be predicted by its corresponding aboveground and belowground traits in the nongrazed and grazed meadows individually, again using linear regression. Finally, we used variance partitioning to quantify the relative contribution of aboveground and belowground traits to species relative abundance for all species in the grazed and nongrazed meadows, thereby revealing whether aboveground traits can serve as a good proxy for belowground traits in predicting species relative abundance in each meadow. Specifically, the variance in species relative abundance explained by traits can be divided into four complementary components: (a) "purely aboveground traits," variance explained by aboveground traits alone; (b) "shared aboveground and belowground traits," variance explained jointly by aboveground and belowground traits; (c) "purely belowground traits," variance explained by belowground traits alone; and (d) "unexplained residual variation" (Legendre et al., 2009). For both the nongrazed and the grazed meadows, variance partitioning was done using the function "varpart" in the "vegan" R package (Oksanen et al., 2016). The percentage of variation was obtained as the adjusted R-square from variance partitioning (Legendre et al., 2009).
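To make the partitioning logic concrete, the following is a minimal Python sketch of the adjusted-R² bookkeeping behind "varpart" (written in Python rather than R); the input file and column names are hypothetical, and this illustrates the method rather than reproducing the study's analysis code.

```python
import numpy as np
import pandas as pd

def adjusted_r2(y, X):
    """Adjusted R^2 of an ordinary least-squares fit of y on X."""
    n, p = X.shape
    X1 = np.column_stack([np.ones(n), X])          # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1 - resid.var() / y.var()
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

df = pd.read_csv("traits_abundance.csv")           # hypothetical input
above = ["max_height", "photosynthesis", "sla"]
below = ["root_diam", "root_biomass", "srl", "rtd", "sra"]

# Log-transform right-skewed abundance and trait data (see text below).
y = np.log(df["rel_abundance"].to_numpy())
Xa = np.log(df[above].to_numpy())
Xb = np.log(df[below].to_numpy())

r2_a = adjusted_r2(y, Xa)                          # aboveground only
r2_b = adjusted_r2(y, Xb)                          # belowground only
r2_ab = adjusted_r2(y, np.hstack([Xa, Xb]))        # both trait sets

pure_above = r2_ab - r2_b                          # fraction (a)
shared = r2_a + r2_b - r2_ab                       # fraction (b)
pure_below = r2_ab - r2_a                          # fraction (c)
unexplained = 1 - r2_ab                            # fraction (d)
print(pure_above, shared, pure_below, unexplained)
```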
Our analyses were parametric and assumed normally distributed data. Across all sites, both species relative abundance and trait values were strongly right-skewed, so we log-transformed both species relative abundance and trait data, which resulted in near-normal distributions for both.
| RESULTS
We sampled 63 and 46 native species in the nongrazed and grazed meadow, respectively (Table S1). Species richness was higher in the nongrazed meadow than in the grazed meadow, and 42 of the 46 species in the grazed meadow were also present in the nongrazed meadow (Table S1). However, species composition differed: Native perennial species dominated the nongrazed meadow, while native annual species were more prominent in the grazed meadow ( Figure S2).
We observed strong correlations among the five root traits, as well as among the three aboveground traits, for all species in the nongrazed and grazed meadows (Figure 1). For example, root diameter (RD), maximum height (H), root biomass (RB), and root tissue density (RTD) were all significantly positively correlated (Figure 1). Similarly, specific root length, photosynthesis rate, specific root area, and specific leaf area were also all significantly positively correlated (Figure 1). However, RD, H, RB, and RTD were significantly negatively associated with specific root length, photosynthesis rate, specific root area, and specific leaf area (Figure 1).
The PCA revealed that the nongrazed and grazed species were significantly separated along PC2 (Table S3 and Figure 2). PC2 was also a good predictor of species relative abundance in both the grazed and the nongrazed meadows (Figure S3).
All aboveground and belowground traits were good predictors of species relative abundance in both the heavily grazed and the nongrazed meadow communities (p < .001; Figures 3 and 4). The relationships between species relative abundance and plant traits were often in opposite directions in the nongrazed and grazed meadows.
Abundant species in the nongrazed meadow exhibited relatively high maximum height, root biomass, and root tissue density, but low photosynthesis rate, specific leaf area, specific root length, and specific root area-the converse was true for nonabundant species (p < .001; Figures 3 and 4). In contrast, in the grazed meadow, abundant species had high photosynthesis rate, specific leaf area, specific root length, and specific root area, but low maximum height, root biomass, and root tissue density, and the opposite patterns were evident for nonabundant species (p < .001; Figures 3 and 4).
Variance partitioning analyses showed that in the nongrazed meadows, belowground traits explained a greater proportion of the variance (51%) than aboveground traits (31%) in predicting species relative abundance ( Figure 5). However, in the grazed meadows, above-and belowground traits had comparable high predictive power of species relative abundance (76% and 70% of the total variance, respectively; Figure 5).
| DISCUSSION
This study provides empirical evidence that grazing and grazing removal alter plant aboveground and belowground traits such that they shift from resource conservation in the nongrazed meadow to resource acquisition in the grazed meadow.

FIGURE 1 Interrelationships among aboveground traits (maximum height, photosynthesis rate, and specific leaf area) and belowground traits (root diameter, root biomass, specific root length, specific root area, and root tissue density) in the nongrazed and grazed meadows. * indicates p < .05 based on correlation analysis.

Here, we found significant correlations among all aboveground and belowground traits we measured in both the nongrazed and the grazed meadows (cf. Wright et al., 2004). This indicated that grazing and grazing removal altered plant aboveground and belowground traits such that they shifted from resource conservation to resource acquisition on the Qinghai-Tibet Plateau.
The principal component analysis revealed that the nongrazed and grazed species were significantly separated along PC2, whose lower and higher values are strongly associated with the resource acquisition versus resource conservation trade-off. Thus, the shift from resource conservation to resource acquisition led to significantly different trait values between the nongrazed and the grazed meadows. We also found that PC2 was significantly correlated with species relative abundance in both the nongrazed and the grazed meadows, indicating that this shift from resource conservation to resource acquisition can also determine species relative abundance in both meadows. As a result, variation in aboveground and belowground traits reflecting a shift from resource conservation in the nongrazed meadow to resource acquisition in the grazed meadow may determine variation in species relative abundance in the two meadows. Indeed, we found that annuals with high resource acquisition dominated the grazed meadow, and perennial species with high resource conservation dominated the nongrazed meadow. We also found significant but differing trait-abundance relationships in the nongrazed and grazed meadows for both aboveground and belowground traits.

FIGURE 2 Principal component analysis of belowground traits (root diameter, root biomass, specific root length, specific root area, and root tissue density) and aboveground traits (maximum height, photosynthesis rate, and specific leaf area) in the nongrazed and grazed meadows.

FIGURE 3 Relationships between species relative abundance (%) and aboveground traits: photosynthetic rate (µmol m⁻² s⁻¹), plant maximum height (cm), and specific leaf area (cm²/g) in the nongrazed (above) and grazed (below) meadows. Each point represents the mean value of a single species. Fitted red lines are generated from linear regression with corresponding significance (P).
Thus, the resource conservation versus resource acquisition tradeoff at both aboveground and belowground levels determined species relative abundance in both the nongrazed and the grazed meadows.
Given the strong correlations between aboveground and belowground traits and the significant trait-abundance relationships for both trait sets, aboveground traits seem to be a good proxy for belowground traits in predicting species relative abundance in the nongrazed and grazed meadows. This is evident in the grazed meadow, where aboveground and belowground traits explained comparable fractions of the variance (70% and 76%, respectively) in species relative abundance. Moreover, the variance explained jointly by above- and belowground traits was very high (68%), which can likely be attributed to the strong correlations between aboveground and belowground traits. These results indicate that grazing may select for species with high resource acquisition strategies, possibly within a very narrow range of trait values, resulting in above- and belowground traits with comparable predictive power for species relative abundance. Compared with belowground traits, aboveground traits are relatively easier to measure and have been much more widely studied across global communities (Wright et al., 2004). Thus, in future work on the Qinghai-Tibet Plateau, measuring aboveground traits alone may be sufficient to quantify trait-abundance relationships in the grazed meadow.
However, aboveground traits cannot serve as a good proxy for belowground traits in the nongrazed meadow, because there belowground traits explained a greater proportion of the variance (51%) than aboveground traits (31%) in predicting species relative abundance. One possible reason is that resource competition may favor a resource conservation strategy in the nongrazed meadow, which in turn may allow a wider range of trait combinations, leading to above- and belowground traits with different predictive power for species relative abundance.
Indeed, our previous work reported that competition for nitrogen uptake was a major influence on community assembly in the nongrazed meadows of this subalpine ecosystem. Perennials generally have larger and more established root systems than annuals (Roumet et al., 2006). Thus, compared with aboveground traits, a plant's ability to acquire limiting nutrients (e.g., nitrogen; Bardgett et al., 2014) can be better reflected by belowground traits. It is therefore not surprising that we found belowground traits to be stronger predictors of species relative abundance than aboveground traits in the nongrazed meadow. These findings imply that belowground traits may be more critical than aboveground traits for quantifying the role of the species niche in community assembly and the maintenance of diversity in nongrazed meadows. This indicates that aboveground traits alone cannot reveal the strong effects of resource conservation on community assembly in nongrazed meadows. Thus, both aboveground and belowground traits should be measured in nongrazed meadows.

FIGURE 4 Relationships between species relative abundance and five measured belowground traits: root average diameter (mm), root biomass (g), specific root length (cm/g), root tissue density (g/cm³), and specific root area (cm²/g) in the nongrazed (above) and grazed (below) meadows. Each point represents the mean value of a single species. Fitted red lines are generated from linear regression with corresponding significance (P).

FIGURE 5 The percentage of variation in species relative abundance explained by four predictor types: (a) "purely aboveground traits"; (b) "shared aboveground and belowground traits," variance explained by both above- and belowground traits; (c) "purely belowground traits"; and (d) "unexplained variation," in the nongrazed and grazed meadows, respectively. The percentage of variation is obtained as the adjusted R-square from variance partitioning.
Despite these compelling results, our data had an inherent limitation: although we measured three aboveground and five belowground traits that capture grazing-induced shifts between resource acquisition and resource conservation, foliar nutrient contents, which may explain the unaccounted variance in species relative abundance (22% and 48%, respectively), should also have been measured and tested. Many studies on the Tibetan Plateau involving fertilization treatments in grazed and nongrazed meadow communities have established soil-available nitrogen and phosphorus as critical limiting nutrients with substantial impacts on community structure (Niu et al., 2016; Yang et al., 2015; Zhang, Gilbert, Wang, et al., 2013). There is no doubt that foliar nutrient contents are important and influence species relative abundance in these environments, and they merit further investigation. Moreover, data on traits, species relative abundance, and abiotic conditions (i.e., soil moisture and nutrients) in the nongrazed and grazed meadows should be collected over several time steps in the future, to provide insight into whether these two sites are actually diverging in terms of species composition.
| CONCLUSION
We provide novel insights into the different contributions of above- and belowground traits to plant species relative abundance in nongrazed and grazed meadows. Grazing disturbance is expected to affect mostly aboveground plant parts, but by imposing strong selection on whole-plant resource strategies it also shapes belowground traits and their relationships with species relative abundance.
CONFLICT OF INTEREST
The authors declare no conflict of interest.
DATA AVAILABILITY STATEMENT
The data used in this manuscript have been provided in Appendix S1.
Resistance Exercise Program in Cognitively Normal Older Adults: CERT-Based Exercise Protocol of the AGUEDA Randomized Controlled Trial
To provide a comprehensive CERT (Consensus on Exercise Reporting Template)-based description of the resistance exercise program implemented in the AGUEDA (Active Gains in brain Using Exercise During Aging) study, a randomized controlled trial investigating the effects of a 24-week supervised resistance exercise program on executive function and related brain structure and function in cognitively normal older adults. Ninety cognitively normal older adults aged 65 to 80 were randomized (1:1) to either 1) a resistance exercise group or 2) a wait-list control group. Participants in the exercise group (n = 46) performed 180 min/week of resistance exercise (3 supervised sessions per week, 60 min/session) for 24 weeks. The exercise program consisted of a combination of upper and lower limb exercises using elastic bands and the participant's own body weight as the main resistance. The load and intensity were based on the resistance of the elastic bands (7 resistances), the number of repetitions (individualized), the motor complexity of exercises (3 levels), sets and rest (3 sets/60 s rest), execution time (40-60 s), and velocity (as fast as possible). The maximum prescribed target intensity was 70-80% of the participants' maximum rating of perceived exertion (7-8 RPE). Heart rate, sleep quality, and feeling scale were recorded during all exercise sessions. Those in the wait-list control group (n = 44) were asked to maintain their usual lifestyle. The feasibility of the AGUEDA project was evaluated through retention, adherence, adverse events, and a cost estimation of the exercise program. This study details the exercise program of the AGUEDA trial, including well-described multi-language manuals and videos, which can be used by public health professionals or members of the general public who wish to implement a feasible and low-cost resistance exercise program. The AGUEDA exercise program appears feasible given its high retention (95.6%) and attendance (85.7%) rates, very low rate of serious adverse events (1%), and low economic cost (144.23 €/participant/24 weeks). We predict that a 24-week resistance exercise program will have positive effects on brain health in cognitively normal older adults.
Exercise is medicine, and as such it has documented beneficial effects on the brain (1). Physical exercise is a promising strategy to prevent cognitive decline (2)(3)(4)(5)(6)(7)(8) and is a treatment for improving global cognition and brain-related health outcomes (4, 9, 10). Various types and doses of exercise likely have their own specific neurophysiological responses (11) and associated cognitive benefits (5). Indeed, certain doses (e.g., 10 METs-h/week) may be required for detecting exercise-induced cognitive changes (12). However, there are still important gaps in knowledge that limit well-designed and targeted exercise programs for health benefits. Some of these identified gaps are (i) lack of information about the detailed type and dose of exercise (i.e., what), (ii) potential moderators (i.e., for whom), and (iii) the underlying mechanisms (i.e., how) (13, 14).
High-quality and detailed reporting of exercise interventions is needed (14) to improve quality appraisal, enable evidence synthesis and replication, and improve translation of resistance exercise programs with the aim of improving brain health.To address this challenge, the Consensus on Exercise Reporting Template (CERT) provides a standardized format to report exercise intervention programs (38).The CERT guidelines include 16 items as the minimal amount of information necessary to report exercise interventions, and to allow development, guidance, evaluation, interpretation and assistance with an effective exercise program for everyday clinical practice (38).
Collectively, well-designed CERT-based randomized controlled trials (RCTs) should report detailed information about the FITT exercise principles (Frequency, Intensity, Time, and Type) to establish accessible (which favors applicability to any population and context), robust, and replicable evidence-based exercise recommendations for brain health. Thus, the aim of this study is to provide the rationale for, and a comprehensive CERT-based description of, the 24-week resistance exercise program of the AGUEDA trial, whose primary outcome is the effect of a 24-week resistance exercise program on executive function in cognitively normal older adults. This may serve researchers and public health practitioners who would like to implement a feasible and low-cost resistance exercise program with expected positive effects on cognitive and brain outcomes in cognitively normal older adults.
Participants and recruitment
A total of 90 cognitively normal older adults (65-80 years old) from Granada (Spain) participated in the AGUEDA trial. Participants were recruited through local media (television, radio, newspapers), promotional flyers, announcements to local aging and senior citizen agencies, online sites, and social media. The recruitment process started in March 2021 and finished in May 2022.
Inclusion and exclusion criteria were defined as follows: (i) older adults between 65 and 80 years; (ii) physically inactive (i.e., not participating in any resistance exercise program in the last 6 months or accumulating less than 600 METs-min/week on the International Physical Activity Questionnaire (IPAQ) (39)); (iii) classified as cognitively normal according to the Spanish version of the modified Telephone Interview of Cognitive Status.
Randomization
Community-dwelling older adults were randomized into a resistance exercise group (n=46) or a wait-list control group (n=44). Participants assigned to the exercise group attended 3 supervised exercise sessions per week for 24 weeks, while the wait-list control group was asked to maintain their usual lifestyle. Participants assigned to the wait-list control group were given the opportunity to attend the 24-week exercise program after completion of the wait-list period. The trial protocol was in accordance with the principles of the Declaration of Helsinki and was approved by the Research Ethics Board of the Andalusian Health Service (CEIM/CEI Provincial de Granada; #2317-N-19, May 25th, 2020). All participants provided informed consent once all study details were explained. Recruitment, enrollment, and randomization occurred on a rolling basis.
Exercise program structure
The 24-week resistance exercise program implemented in the AGUEDA trial followed the CERT guidelines (Table S1) and is summarized in Figure 1. The AGUEDA resistance exercise program was designed based on previous evidence on health outcomes (i.e., quality of life or cognitive benefits) in older populations (4, 15-20) and followed the guidelines for resistance training in older adults from the International Conference on Frailty and Sarcopenia Research (ICFSR) (45) and the American College of Sports Medicine (ACSM) (46). The resistance exercise program consisted of a combination of upper and lower limb exercises using elastic bands and the participant's own body weight. This enabled easy application, low cost, and feasible translation to different contexts (i.e., homes and clinical practices). The exercise program was performed in groups of 4-6 participants and was conducted by professional trainers with a bachelor's degree in Sport and Exercise Sciences. The exercise program was carried out in a fitness room at the University Institute of Sports and Health (IMUDS) in the city of Granada, Spain.
Exercise equipment
The data collection materials for the training sessions (i.e., training sheets and adverse event questionnaires), as well as the audiovisual and complementary resources (i.e., exercise program guide), are deposited in a GitHub repository (https://github.com/aguedaprojectugr/CERT_AGUEDA). Table S2 shows the list of the files available in the repository.
The elastic bands used were Thera-Band® (47), which are available in 7 different colors corresponding to different resistances (i.e., yellow: soft, red: medium, green: strong, blue: extra strong, black: special strong, silver: athletic, and gold: olympic). All the elastic bands have a length of 1.50 m, and the grip was standardized at 1 m of length for all participants.
For the execution of exercises with elastic bands, 4 anchor points were used: knee (e.g., woodcutter), waist (e.g., row), head (e.g., face pull), and above the head (e.g., lat pulldown). The participant maintained the maximum elongation distance from the anchor point at which no force was yet applied to the elastic band. A detailed description of the exercise equipment is shown in Table S3.
Exercise doses
The total prescribed volume of the AGUEDA trial for each participant assigned to receive exercise was 24 weeks with 3 sessions per week of 60 min duration. Therefore, each participant was prescribed a total of 72 training sessions. The volume was controlled by (i) the number of sets (i.e., 3), (ii) the number of repetitions (i.e., as many as possible), and (iii) the set execution time (i.e., 40 to 60 seconds).
Intensity was measured primarily by the 10-point Borg Rating of Perceived Exertion (RPE) scale, with a target rating of 7-8 (i.e., very difficult) (48). The Thera-Band RPE scale (47) was also used during the familiarization weeks (i.e., the first 2 weeks) to help participants learn to use the RPE scale with elastic bands. The modifiable variables used to control the intensity were (i) velocity of execution (i.e., as fast as possible), (ii) rest (i.e., 60 seconds of rest between sets), and (iii) levels: basic (level 1), intermediate (level 2), and advanced (level 3). These different levels (8 weeks per level), each including 3 training sessions per week, progressively increased the motor complexity of exercises and the elastic band resistance (Table 1). Motor complexity involves increasing the technical difficulty of the exercise, which raises the demand on other physical abilities during resistance exercise and stimulates multi-systemic (or multi-component) adaptations (coordination, balance, core stability, power, agility, among others) (49). These multi-systemic adjustments are due to the different characteristics of the resistance exercises: (i) unilateral execution of exercises, which increases the coordination demand and changes the activation pattern of trunk stabilizer muscles (e.g., unilateral push press) (46); (ii) performance of exercises with non-cyclical movement patterns (e.g., Turkish get-up), which elevates the level of coordination and improves motor control; (iii) multi-segmental exercises that raise the level of stress on the neuromuscular and motor control system (e.g., woodcutter rotation), promoting continuous adaptations (45); and (iv) instability exercises, demanding balance, joint stability, and core stability (e.g., walking lunge) (49).
Furthermore, increasing the motor complexity of resistance training exercises in older populations enhances functional capacity, which can transfer to daily activities and independence (49).
A summary of the type and load is presented in Figure 2. However, the total dose was quantified on an individualized and standardized basis, as explained below.
Standardized load
Participants were prescribed 3 sets of 8 exercises per session. The duration of each set was 40 to 60 seconds according to each week's assigned level, with a rest of 60 seconds between sets. The resting time between exercises was adjusted to the instructions and preparation for the next exercise. The execution time increased equally at each level to ensure that each participant achieved the targeted intensity with proper technique: 40 seconds per set during the first 3 weeks, 50 seconds during the next 3 weeks, and 60 seconds during the 7th and 8th weeks. This fixed number of sets and set time allowed each participant to perform a number of repetitions according to their own capabilities. Velocity was used as another means to modulate the intensity and adaptations at the muscular level, using an as-fast-as-possible but controlled execution speed (i.e., 1 s eccentric and 1 s concentric phase) (50). In addition, the maximum prescribed target intensity was 70-80% of the participants' maximum rating of perceived exertion (7-8 RPE). RPE was recorded after each exercise and at the end of the entire session (47) (File A1). Participants started the exercise program with 2 weeks of familiarization. The familiarization period focused on the exercise techniques described above using a controlled execution speed (i.e., 2 s eccentric and 2 s concentric phase), not on intensity.
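As an illustration of this standardized schedule, the sketch below computes the prescribed set-execution time and rough per-session work and rest totals for any week of the program. The week-to-seconds mapping follows the text above; the function itself is our own illustrative code, not AGUEDA study software.

```python
def set_duration(week: int) -> int:
    """Prescribed set-execution time (seconds) for a given week (1-24)."""
    week_in_level = (week - 1) % 8 + 1   # weeks 1-8 within each level
    if week_in_level <= 3:
        return 40                        # first 3 weeks of a level
    elif week_in_level <= 6:
        return 50                        # next 3 weeks
    return 60                            # 7th and 8th weeks

def session_work(week: int, sets: int = 3, exercises: int = 8,
                 rest_between_sets: int = 60) -> dict:
    """Rough time-under-tension and rest totals for one session."""
    work = sets * exercises * set_duration(week)
    rest = sets * exercises * rest_between_sets
    return {"level": (week - 1) // 8 + 1,
            "set_seconds": set_duration(week),
            "work_seconds": work,
            "rest_seconds": rest}

for wk in (1, 4, 8, 9, 24):              # sample weeks across all levels
    print(wk, session_work(wk))
```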
Individualized external load
It was important to consider the individual limitations and physical differences among participants to achieve the desired intensity, particularly in older populations. All participants started the exercise program with a soft elastic resistance (e.g., yellow: soft) for familiarization with the different exercises, but the resistance of the elastic bands was then increased on an individual basis. Elastic resistance was progressively increased at the judgement of the trainer and participant, aiming to reach the weekly target RPE, on the standardized basis described above.

Table 1. Training sessions (1, 2 and 3) for each level (1, 2 and 3). The warm-up (8 minutes) consists of (i) myofascial massage with a tennis ball, (ii) joint mobility, and (iii) movement of the main muscles involved during the training session.
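A minimal sketch of this individualized progression rule follows, assuming a simple "step up one band when the reported RPE falls below the 7-8 target" policy; this is our reading of the text, not a published AGUEDA algorithm.

```python
# Thera-Band colors ordered from softest to hardest resistance.
BANDS = ["yellow", "red", "green", "blue", "black", "silver", "gold"]

def next_band(current: str, session_rpe: float,
              target_rpe: float = 7.0) -> str:
    """Band for the next session given the last reported RPE."""
    i = BANDS.index(current)
    if session_rpe < target_rpe and i < len(BANDS) - 1:
        return BANDS[i + 1]        # exercise felt too easy: step up
    return current                 # at target (or hardest band): hold

band = "yellow"                    # every participant starts soft
for rpe in [5, 6, 7.5, 6.5, 8]:    # example RPE reports across sessions
    band = next_band(band, rpe)
    print(rpe, "->", band)
```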
Structure of sessions and exercises
The exercise sessions included (i) a warm-up, (ii) the main part of resistance exercises, and (iii) a cool-down phase (Figure 3). Data collected during the sessions were recorded on a paper training group sheet during the session (i.e., date, duration of phases, and the individualized data mentioned above) (File A2), and these data were then registered in individual and group registration Excel sheets for all participants (File A3 and File A4). A detailed description of the AGUEDA exercise training program is given in a multi-language manual and a series of videos (File A5 and File A6).
Exercise
The main exercise part lasted about 45 min. There were 3 different exercise sessions per level, and each exercise session included 8 different exercises. Progression from one exercise to the next was sequential (i.e., 3 sets of exercise 1 - rest - 3 sets of exercise 2). Table 1 describes the exercises included in each exercise session at each level. The selected exercises included basic movement patterns (51) involving large muscle groups: horizontal traction, vertical traction, horizontal thrust, vertical thrust, hip extension and flexion, hip-dominant, knee-dominant, anti-rotation, anti-extension, anti-flexion, and anti-lateral flexion (52). Each session started with exercises in the standing position (except training session 1 of level 3, which began with push-ups) and ended with specific lumbopelvic exercises in the prone or supine position, to avoid abrupt changes in body position that may cause dizziness (53).
Cool-down
The cool-down lasted approximately 7 min. This phase aimed at relaxation, including myofascial release (54), joint mobility, or stretching (i.e., static or dynamic), and focused on the trained muscle groups.
Modifications and adaptations of exercises
Exercise adaptation is an important factor when designing exercise programs, particularly for older adults (53, 55, 56). The AGUEDA exercise program included specific adaptations of the exercises for different reasons: (i) Pre-existing pain: the prevalence of pain in older adults increases with age (55). We added different modifications and adaptations for shoulder, knee, and back pain, such as unilateral exercises with the pain-free limb (e.g., right or left leg), isometric exercises for the painful muscles (e.g., isometric wall squat), and a change in the plane of movement (e.g., plank on wall).
(ii) Injury during the intervention: falls are the leading cause of injury among adults aged ≥65 years (56). After rehabilitation and/or recovery, participants were asked to rejoin the training sessions. The trainers modified and adapted the exercises under the supervision and clearance of a physician.
(iii) Vestibular symptoms and dizziness: these symptoms are a common and significant problem in the elderly (53). Exercises that resulted in dizziness were modified by changing the exercise position from lying to standing (e.g., standing dead bug) or the exercise from dynamic to static (e.g., isometric squat instead of squat).
Other variables recorded
The AGUEDA research team collected additional training-related variables during all sessions: (i) Participant heart rate (HR): HR was recorded using HR monitors (chest strap Polar H10, Polar, Kempele, Finland) during all sessions, for safety purposes, to identify abnormal or excessive cardiovascular responses during exercise (a hedged sketch of such a screen follows this list). The data were stored in an online training diary using 2 apps (Polar Flow and Elite HRV) for analyzing exercise intensity, matching it with the RPE scale, and further understanding the cardiovascular adaptations to this resistance exercise program. The validity of the Polar chest strap and the apps has been examined and described in previous studies (57-59). Each participant had a specific email account for the apps, associated with a specific mobile phone (Huawei MI A2 Lite), which was carried by the participant throughout the session.
(ii) Participant feeling and sleep quality: the feeling scale was measured before and after each exercise session using a validated question: "How do you feel before/after the session?" (60). In addition, the previous night's sleep quality was reported by participants before each training session using a single validated question: "How well did you sleep last night?" (61).
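Returning to the heart-rate recording in (i), the sketch below illustrates one way such a safety screen could work, flagging samples above a fraction of an age-predicted maximum. The Tanaka formula (208 - 0.7 * age) is our assumption; the study does not state which HRmax estimate, if any, was used.

```python
def hr_max(age: int) -> float:
    """Age-predicted maximum heart rate (Tanaka formula, assumed)."""
    return 208 - 0.7 * age

def flag_excessive(hr_samples: list[float], age: int,
                   threshold: float = 0.90) -> list[float]:
    """Return HR samples above the chosen fraction of predicted HRmax."""
    limit = threshold * hr_max(age)
    return [hr for hr in hr_samples if hr > limit]

# Example: a 72-year-old's session samples, 90% threshold (hypothetical).
print(flag_excessive([95, 120, 158, 162], age=72, threshold=0.90))
```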
Retention
Retention was calculated as the number of participants in the intervention group who completed the post-assessments (65). Study retention was 95.65% (44 of 46 participants completed the post-assessments).
Adherence
Adherence to the exercise program was measured as session attendance: the proportion of completed sessions out of the total prescribed sessions. An attendance of 80% was required for the per-protocol analysis (i.e., > 57 exercise sessions). Exercise sessions continued on a regular basis during holidays (i.e., summer and Christmas). Any missed session was registered and rescheduled to an alternative date to maximize program adherence. In exceptional cases where rescheduling was not feasible, online sessions were conducted through a video call using the following materials: (i) mobile phone, (ii) Polar H10 band, (iii) elastic bands, (iv) RPE scale, and (v) the online training program guide described in detail previously (File A8).
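The attendance bookkeeping described above can be illustrated with a small sketch (all numbers in the example are invented):

```python
PRESCRIBED = 72  # 24 weeks x 3 sessions

def attendance_rate(attended: int, rescheduled: int = 0) -> float:
    """Fraction of prescribed sessions completed, counting make-ups."""
    done = min(attended + rescheduled, PRESCRIBED)
    return done / PRESCRIBED

print(attendance_rate(58))                 # 0.806: meets the 80% bar
print(attendance_rate(55, rescheduled=4))  # recovered via make-up sessions
```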
The average recorded attendance of the 46 participants was 84.12%. Of the missed sessions, 53 out of 454 (11.7%) were rescheduled, increasing the average attendance to 85.71%. Two participants had less than 60% attendance, 3 participants had between 60% and 80%, and 41 participants achieved more than 80%. Factors external to the exercise program were identified as the cause of missed sessions. The reason for the lowest adherence (<60% attendance) was unwillingness to continue the exercise program because of a health condition; the other 3 participants missed sessions due to health conditions.
In addition, specific strategies were implemented to promote participant engagement: (i) Music: a speaker played music during the exercise sessions, chosen by the participants.
(ii) Extrinsic motivation: well-internalized extrinsic motivation by the trainers in charge, such as personally valuing certain outcomes of the exercises as a particularly important factor for initial adoption (66) (e.g., "With this exercise you will gain muscle mass in your back").
(iii) Intrinsic motivation: individual intrinsic feedback in a close and encouraging attitude by the trainers (67) (e.g., "Inhale and exhale slowly").
(iv) Group feedback: positive group feedback in order to promote feelings of competence and self-confidence in the participants (e.g., "All of you have improved a lot, keep going").
(v) Workshops: bimonthly body-mind, mobility, or game workshops were held to maintain motivation and contact with the wait-list control group during its 24-week waiting period.
Upon program completion, three additional strategies were used to help participants continue engaging in exercise: (i) Guide to the AGUEDA exercise program: a complete manual of the AGUEDA exercise program, including a multi-language visual and theoretical description of each training session, was delivered to participants to facilitate autonomous physical exercise practice (File A5 and File A6).
(ii) Wait-list control group: Participants assigned to the waitlist control group could perform the exercise program after finishing the post-evaluations.
(iii) Flyer: An invitation, with a welcome discount, to become a member of a training center in Granada (Spain) to help participants (i.e., exercise and wait-list control group) to maintain their training habit under the supervision of a professional personal trainer (File A9) after finishing the project.
Adverse Event
Any adverse event that occurred, such as an injury, emergency, or scheduled surgery, was recorded, reported, and evaluated by the research team in REDCap (68), an online platform designed to store and manage electronic data. Adverse events were recorded, where possible, at the time of the event, and were also asked about at the midpoint and post-assessments (i.e., at weeks 12 and 25) by phone call. A customized adverse event form (File A7) captured the seriousness, severity, chronicity, and resolution of events in participants, including events unrelated to the exercise program (e.g., COVID). The severity of adverse events was classified into 3 categories (i.e., mild, moderate, severe) (69). In the case of a joint or other injury, physician clearance was required before rejoining the resistance exercise program. There was 1 severe adverse event during the intervention period, but it had no causal relationship with the assessments or the exercise program.
Economic cost estimation
We report an estimation of the cost of delivering the AGUEDA exercise program, including human resources and equipment (Table S5). The cost is based on an average group size of 6 participants. The estimated total cost of the 24-week AGUEDA resistance training program was 6,623.58 €, with an approximate cost of 144.23 € per participant and 865.49 € per group.
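For transparency, a back-of-the-envelope version of this estimate follows. Only the total cost, the number of participants, and the group size come from the text; the computed figures differ slightly from the reported 144.23 € and 865.49 €, presumably because the study used a slightly different denominator.

```python
TOTAL_COST_EUR = 6623.58   # reported total for the 24-week program
N_PARTICIPANTS = 46        # exercise-group participants
GROUP_SIZE = 6             # average training-group size

per_participant = TOTAL_COST_EUR / N_PARTICIPANTS
per_group = per_participant * GROUP_SIZE
print(f"{per_participant:.2f} EUR/participant, {per_group:.2f} EUR/group")
# -> roughly 144 EUR/participant and 864 EUR/group
```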
The costs were calculated based on the minimal material required for replication in any context. The benefits of body-weight resistance training have been recognized by the ACSM as a functional way to exercise with minimal equipment and space, making it an inexpensive, convenient, and accessible form of exercise for people of all ages and fitness levels (70).
Likewise, the use of elastic bands is increasing as an alternative method for improving muscle strength (71, 72) in older adults for the following reasons: (i) they are low-cost and widely available, unlike weight machines (73-75); (ii) they are suited to different levels of physical fitness (76); and (iii) they are portable, allowing home or outdoor use (75).
Discussion
Although evidence supports the potential role of resistance exercise in brain health during aging (1), there is insufficient information to allow replication of exercise programs and to determine the appropriate characteristics of resistance exercise for cognitive and brain health benefits in cognitively normal older adults (14). The present study has provided a CERT-based description of the 24-week supervised resistance exercise program implemented in the AGUEDA trial, an RCT investigating its effects on brain health in cognitively normal older adults.
Previous literature has primarily focused on the effects of aerobic exercise (i.e., walking) on cognition (23, 77-80). However, the potential benefits of resistance training on cognitive functioning are being increasingly investigated. The AGUEDA exercise program was designed using the ACSM guidelines for older adults with an emphasis on resistance exercise (46), as well as scientific literature demonstrating the overall health benefits of resistance exercise for this population (9, 51, 88). The exercise program was performed with elastic bands and body weight, which is an effective strategy for improving basic movement patterns and instrumental activities of daily living (34), and for functional tasks related to safety when performed at high speed (89). In addition, the exercises were based on specific movements, such as pushing, pulling, lifting, holding, trunk flexion, trunk rotation, and stabilization, making them more functional, efficient, and beneficial for increasing absolute strength and power in older adults (51, 90). Furthermore, the current exercise program appears feasible in comparison with other interventions (91, 92), given its high retention (95.6%) and attendance rate (85.7%), minor adverse events (1%), and low economic cost (144.23 €/participant/24 weeks) relative to other types of training (i.e., weight training equipment (73-75)). The AGUEDA program's accessibility and portability allow it to be performed anytime and anywhere, making it a feasible exercise program for the older population (71). Its convenience and adaptability to different fitness levels and training goals make it easy to replicate in settings such as fitness centers, homes, or research laboratories (76). Moreover, we carefully considered several exercise adherence strategies, based on social-cognitive principles (93), to maximize participant engagement with the AGUEDA exercise program.
Limitations and strengths
The present study has several limitations. Recruitment started during COVID (i.e., March 2021) and was slower than expected, with training groups starting at different times of the year. This made it difficult to implement the 24-week exercise program at the same time of year for all training groups. Seasonal exercising may not have an impact on physical fitness (94), but weather should be considered when interpreting differences in physical activity patterns (95), including the use of a mask during the training sessions (96). However, the schedule was flexible, was largely driven by participants' needs, and was uninterrupted during the 24 weeks. Another limitation could be the previous exercise experience and physical condition of the participants, both of which were heterogeneous and could act as moderators of any outcomes reported in the study. For example, prior experience with exercise could influence the individualized prescriptions and the progress in resistance and repetitions across participants.
There are also several strengths that must be acknowledged. First, a detailed description of the resistance exercise program was provided; the program requires minimal equipment and can be applied and translated to different daily-life contexts (e.g., an older person at home) and clinical practices (e.g., physiotherapy). Second, the design included a resistance exercise program based on evidence-based exercise guidelines for older adults (97). Third, the study focused on cognitively normal older adults, a population for which the characteristics-response relationship of exercise is not yet clear. Last, the program included a variety of exercises as well as strategies that could increase adherence to exercise programs.
Conclusion
In conclusion, the comprehensive CERT-based description of the AGUEDA trial, including well-described multi-language manuals and videos, will serve researchers and public health practitioners who wish to implement a feasible and low-cost 24-week resistance exercise program in cognitively normal older adults.
Ethical Approval and consent to participate: Institutional review board approval was obtained from the Andalusian Health Service prior to the start of the trial (CEIM/CEI Provincial de Granada; #2317-N-19; approval date: 25/05/2020).
Figure 1. The resistance exercise program implemented in the AGUEDA trial according to the CERT guidelines. Rep: Repetition; RPE: Rating of perceived exertion.
Figure 2. Characteristics and periodization of the supervised AGUEDA resistance exercise program.

Figure 3. Structure of the exercise sessions.

* Exercises with elastic bands; the remaining exercises are performed with the participant's own body weight.
Table 2. Retention, adherence, and adverse events achieved in the AGUEDA trial. * All reasons were external to the exercise program. ** As much as possible. - Not applicable.
Table 3. Economic cost of the AGUEDA trial
Codimension One Homology of Noncompact Spaces with Nonnegative N-Bakry Émery Ricci Curvature
In this paper, we generalize topological results known for noncompact manifolds with nonnegative Ricci curvature to spaces with nonnegative $N$-Bakry \'Emery Ricci curvature. That is, we study the codimension one integral homology of noncompact spaces with nonnegative $N$-Bakry \'Emery Ricci curvature. For example, we prove that if $M^n$ is a complete, noncompact Riemannian manifold with positive $N$-Bakry \'Emery Ricci curvature where $N>n$, then $H_{n-1}(M,\mathbb{Z})$ is 0.
Remark 1.2. Note that Ric N X is a generalization of Ric N φ because if X = ∇φ, then Ric N X = Ric N φ . Similarly, we call Ric N φ a generalization of Ric because if φ is constant, then Ric N φ = Ric.
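For the reader's convenience, here is a short LaTeX sketch of the standard formulas behind this remark; these are the usual conventions in the literature for $N \neq n$, and we assume the paper's Definition 1.1 agrees with them:

\[
\mathrm{Ric}^N_X = \mathrm{Ric} + \tfrac{1}{2}\mathcal{L}_X g - \tfrac{1}{N-n}\, X^\flat \otimes X^\flat,
\qquad
\mathrm{Ric}^N_\varphi = \mathrm{Ric} + \mathrm{Hess}\,\varphi - \tfrac{1}{N-n}\, d\varphi \otimes d\varphi,
\qquad
\mathrm{Ric}^\infty_\varphi = \mathrm{Ric} + \mathrm{Hess}\,\varphi.
\]

Since $\tfrac{1}{2}\mathcal{L}_{\nabla\varphi} g = \mathrm{Hess}\,\varphi$, setting $X = \nabla\varphi$ in the first formula recovers the second; letting $N \to \infty$ recovers the third; and when $\varphi$ is constant all correction terms vanish, recovering $\mathrm{Ric}$.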
The main tool used by Shen and Sormani in [10] is the Cheeger-Gromoll Splitting Theorem, which states that if Ric ≥ 0 and M contains a line, then M is isometric to a product metric, R k × N , where N doesn't contain any lines [2].
There are also different versions of the Cheeger-Gromoll Splitting Theorem for the N-Bakry-Émery Ricci curvature. The Ric N X ≥ 0 assumption becomes a weaker hypothesis as N increases. N > n is our strongest premise, and the Splitting Theorem holds with no further assumptions [5, Theorem 2], [3, Theorem 1.3]. N < 1 or N = ∞ is a weaker premise, and the Splitting Theorem does not hold in general; however, if we include the additional assumptions that X = ∇φ and φ < K for K constant, then we do obtain a splitting [13, Corollary 1.3]. If N = 1, the Splitting Theorem does not hold, even when φ is bounded. However, if X = ∇φ with φ < K, then we get a more general warped product splitting [13, Theorem 1.2]. Here, we say that (M, g) has a warped product splitting if M is diffeomorphic to R × L, where L is an (n − 1)-dimensional manifold and there exists u : R → R+ such that g = dr² + u²(r)g₀ for a fixed metric g₀. We call g a warped product over R.
Using a remark by Fang-Li-Zhang, we can show that if Ric ∞ φ ≥ 0 and ∇φ → 0 at ∞, the Splitting Theorem holds [3, Remark 3.1]. This is interesting because, unlike the generalization of Myers' Theorem (which says that if Ric ∞ φ ≥ λg > 0 and |∇φ| is bounded, then M n is compact [4]), for the Splitting Theorem to hold for Ric ∞ φ we need ∇φ → 0 at ∞ rather than merely |∇φ| bounded. We will discuss this in more detail in Section 2.
The following table summarizes the known versions of the Splitting Theorem for Ric N X ≥ 0. If Ric N X ≥ 0, then:

N > n: isometric splitting, with no further assumptions [5, Theorem 2], [3, Theorem 1.3].
N < 1 or N = ∞, X = ∇φ, φ bounded above: isometric splitting [13, Corollary 1.3].
N = 1, X = ∇φ, φ bounded above: warped product splitting over R [13, Theorem 1.2].
N = ∞, X = ∇φ, ∇φ → 0 at ∞: isometric splitting [3, Remark 3.1].

Our main result generalizes the result of Shen-Sormani to all non-negative Bakry-Émery Ricci bounds where the Splitting Theorem is known.

Theorem 1.3. Let M n be complete and noncompact. Suppose one of the following holds:
1. Ric N X > 0 with N > n;
2. Ric N φ > 0 with N < 1 or N = ∞ and φ bounded above;
3. Ric 1 φ > 0 with φ bounded.
Then H n−1 (M, Z) = 0. Suppose instead one of the following holds:
4. Ric N X ≥ 0 with N > n;
5. Ric N φ ≥ 0 with N < 1 or N = ∞ and φ bounded above;
6. Ric 1 φ ≥ 0 with φ bounded.
Then H n−1 (M, Z) = 0 or Z.

Theorem 1.4. Let M n be complete and noncompact. If Ric ∞ φ ≥ 0 with ∇φ → 0 at ∞, then H n−1 (M, Z) = 0 or Z.

In Section 4, we will give examples to show that our results in Theorem 1.3 are optimal. In Example 4.1, we construct an example that satisfies the following: Ric N φ > 0 for N ≤ 1 or N = ∞, φ is unbounded, and H n−1 (M, Z) = Z. In Example 4.2, we give an example where Ric N φ > 0 for N = ∞, ∇φ does not converge to 0 at ∞, and the Splitting Theorem does not hold. Finally, in Example 4.3, we construct an example where Ric ∞ φ > 0, ∇φ is bounded, ∇φ does not converge to 0 at ∞, and H n−1 (M, Z) = Z.
Our approach follows that of Sormani, in [11], where she studies a property called the loops to infinity property. In [11,Theorem 1.7], Sormani proved that if M n is non-compact and doesn't satisfy the geodesic loops to infinity property, then there is a line in its universal cover. A manifold doesn't satisfy the loops to infinity property if for some ray γ, there exists h ∈ π 1 (M, γ(0)) and some compact set K such that every loop which is homotopic to h along γ must loop back to K, the compact set. We will review this result in more detail in the next section.
Next, we follow [11, Proposition 1.9], which says that if M n has nonnegative Ricci curvature and h ∈ π 1 (M) does not satisfy the geodesic loops to infinity property along a ray γ, then by the Cheeger-Gromoll Splitting Theorem the lift of γ must lie in the R direction of the universal cover of M. Then, in [10], Shen and Sormani used these theorems, along with methods from algebraic topology, to classify the codimension one integral homology of complete noncompact manifolds with nonnegative Ricci curvature.
In the proof of [11,Proposition 1.9], the only place Ric ≥ 0 is used is in the Splitting Theorem. However, when we have Ric 1 φ ≥ 0 with bounded (above and below) potential function, φ, we instead get a warped product splitting over R by [13,Lemma 4.4]. We will work around this issue by analyzing the R direction of the splitting in Section 3.
Definitions and Background Statements
In this section, we will give some definitions and review the proof of the Line Theorem (see Theorem 2.5), which is the main tool we use to prove the results in this paper. First, we recall the definitions of a ray and a line.

Definition 2.1. A ray is a geodesic γ : [0, ∞) → M such that d(γ(s), γ(t)) = |s − t| for all s, t ≥ 0.

Definition 2.2. A line is a geodesic γ : (−∞, ∞) → M such that d(γ(s), γ(t)) = |s − t| for all s, t ∈ R.

Next, we give the definition for the notion of a loop being homotopic to another loop along a ray.

Definition 2.3. Given a ray γ and a loop C : [0, L] → M based at γ(0), we say that a loop C̃ : [0, L̃] → M is homotopic to C along γ if there exists r > 0 with C̃(0) = C̃(L̃) = γ(r) and the loop constructed by joining γ from 0 to r with C̃ from 0 to L̃ and then with γ from r to 0 is homotopic to C in π 1 (M, γ(0)).
Finally, we define the geodesic loops to infinity property. Definition 2.4. An element h ∈ π 1 (M, γ(0)) has the geodesic loops to infinity property along γ if for any A ⊂ M compact, there exists a loop C ⊂ M \ A which is homotopic to a representative loop, C of h along γ.
We are ready to present Sormani's Line Theorem.
Theorem 2.5. [11,Theorem 1.7] If M n is a complete non-compact manifold which does not satisfy the geodesic loops to infinity property, then there is a line in its universal cover.
Proof.
Since M n is a complete, non-compact manifold, there exists a ray γ : [0, ∞) → M n . Let h ∈ π 1 (M, γ(0)) be an element which does not satisfy the loops to infinity property, and let C be a representative of h based at γ(0). Because h does not satisfy the loops to infinity property, there exists a compact set A ⊂ M such that any loop homotopic to C along γ must pass through A.
Through some computational details which we will omit, (See [11, Theorem 1.7] for more details), Let γ ∞ be the geodesic with these initial conditions. Then γ ∞ runs from lim Thus, we have constructed a line, namely γ ∞ , in M .
The following corollary follows from [11] and the generalizations of the Cheeger-Gromoll Splitting Theorem.
Corollary 2.6. Let M n be a complete, noncompact Riemannian manifold, and suppose one of the following holds: (1) Ric N X ≥ 0 with N > n; or (2) Ric N φ ≥ 0 with N < 1 or N = ∞ and φ bounded above. Then:

(i) If g ∈ π 1 (M), then either g or g² has the geodesic loops to infinity property.

(ii) If there exists g ∈ π 1 (M) which does not satisfy the loops to infinity property along a given ray γ, then for all h ∈ π 1 (M, γ(0)), either h or gh must satisfy the geodesic loops to infinity property along γ. Also, M must have a split double cover which lifts γ to a line.
(iii) If D is a precompact subset of M and ∂D is simply connected, then π 1 (D) can only contain elements of order 2.

(iv) If D is a precompact subset of M with smooth boundary, γ is a ray such that γ(0) ∈ D, and S is any connected component of ∂D containing a point γ(a), then the image N ⊂ π 1 (Cl(D), γ(a)) of the inclusion-induced map from π 1 (S, γ(a)) is such that π 1 (Cl(D), γ(a))/N contains at most two elements.
Corollary 2.7. Let M n be a complete, noncompact Riemannian manifold, and suppose one of the following holds:

1. Ric N X ≥ 0 with N > n, and there exists a point p ∈ M such that (Ric N X ) p > 0.
2. Ric ∞ φ ≥ 0 with φ bounded above, and there exists a point p ∈ M such that (Ric ∞ φ ) p > 0.
3. Ric N φ ≥ 0 with N ≤ 1 and φ bounded above, and there exists a point p ∈ M such that (Ric N φ ) p > 0.
4. Ric ∞ φ ≥ 0 with ∇φ → 0 at ∞, and there exists a point p ∈ M such that (Ric ∞ φ ) p > 0.

Then M n has the geodesic loops to infinity property.
Proof.
First, we will show that M and its universal cover M̃ have no lines. Suppose for the sake of contradiction that M̃ contains a line. We saw earlier in the paper that each of the four premises gives us a version of the Splitting Theorem; hence M̃ splits as R × N. However, Ric N φ (∂/∂r, ∂/∂r) = 0 along the split direction, which contradicts the strict positivity at p, thus proving our claim.
Ergo, by Sormani's Line Theorem, M has the loops to infinity property.
Before we state our next theorem, we will show that if Ric ∞ φ ≥ 0 and ∇φ → 0 at ∞, then the Splitting Theorem holds. According to a remark by Fang-Li-Zhang [3, Remark 3.1], if Ric ∞ φ ≥ 0 and φ grows at most sublinearly along rays, i.e., lim sup (t→∞) φ(γ(t))/t ≤ 0 for every unit speed ray γ, then the Splitting Theorem holds. We verify this condition.

Proof. Let γ(t) be a unit speed ray. Since ∇φ → 0 at ∞, for every ε > 0 there exists R > 0 such that |∇φ| ≤ ε outside the ball of radius R. Then for t sufficiently large,

d/dt φ(γ(t)) = ⟨∇φ, γ'(t)⟩ ≤ |∇φ| |γ'(t)| = |∇φ| ≤ ε,

where the first inequality follows from Cauchy-Schwarz. After integrating, for the same ε > 0 and R > 0, we get φ(γ(t)) ≤ εt + C, where C is a constant. Hence lim sup (t→∞) φ(γ(t))/t ≤ ε. Letting ε → 0, we get lim sup (t→∞) φ(γ(t))/t ≤ 0, as required.

In Theorems 1.3 and 1.4, we state that given our curvature bounds, we can determine H n−1 (M, Z). In the following theorem, we generalize further and give the (n − 1)-homology with coefficients in G, where G is an Abelian group.
Theorem 2.9. Let M n be a complete noncompact manifold satisfying either of the following: 1. Ric N X ≥ 0 with N > n; or 2. Ric 1 φ ≥ 0 with φ bounded.

Then we have the following cases:

(i) If M n has two or more ends and G is an Abelian group, then M n splits as R × N with N compact, and so H n−1 (M, G) ≅ H n−1 (N, G) (see [10, Proposition 3.1]).

(ii) If M n is one-ended with the loops to infinity property, then H n−1 (M, Z) = 0.

(iii) If M n is one-ended and does not have a ray with the loops to infinity property, and G is an Abelian group, then H n−1 (M, G) can be computed from the split double cover of M n (cf. Corollary 2.6(ii) and [10]).

We will give a sketch of the proof of Theorem 2.9, omitting the N = 1 case, which we will explore in the next section.
Proof of Theorem 2.9 omitting the N = 1 case. If M n is two-ended, then M n contains a line, so we can use the Splitting Theorem and follow the proof of [10, Proposition 3.1] to get (i).
If M n is one-ended and has the loops to infinity property, we follow [10, Proposition 3.2], using the Cheeger-Gromoll Splitting Theorem and the Universal Coefficient Theorem, to get (ii).
Finally, if M n is one-ended and doesn't have a ray with the loops to infinity property, we use Sormani's Line Theorem to get (iii).
Now, for the sake of completeness, we will also give a short proof of Theorem 1.3.
Proof of Theorem 1.3 omitting the N = 1 case.
Cases 4, 5, and 6 clearly follow from Theorem 2.9. We will focus on Cases 1, 2, and 3. Since the proofs of these cases are the same, without loss of generality, suppose Ric N X > 0 with N > n. Then by Corollary 2.7, M n must have the geodesic loops to infinity property. We will show that M n must be one-ended. Suppose, for the sake of contradiction, that M n is two-ended. Then M n must contain a line, which means M n must split by the Splitting Theorem. However, Ric N X (∂/∂r, ∂/∂r) = 0, which is a contradiction. Thus, M n must be one-ended, and so by Theorem 2.9 (ii), H n−1 (M, Z) = 0.
Warped Product Splitting and Geodesic Loops to Infinity
First, we recall the definition of a ray lying in the split direction. Before we prove the next proposition, we will first show that there exist examples of Riemannian manifolds with Ric 1 φ ≥ 0 not satisfying the loops to infinity property along a given ray γ and whose universal cover has a warped product splitting.
Example 3.2. Let φ ∈ C 2 be bounded with bounded first and second derivatives. By [13, Corollary 2.4], there exists λ large enough so that Ric 1 φ ≥ 0 and g = dt² + e^{2φ/(n−1)} g S n λ . Now consider M = (R × S n )/G, where G is the group generated by h(t, x) = (a − t, −x) for any constant a > 0. If we also assume φ(a − t) = φ(t), then h is an isometry and (M, g, φ) satisfies Ric 1 φ ≥ 0. h does not have the loops to infinity property along (t, 0) = (−t, a), so (R × S n )/G satisfies all of the necessary properties.
We state the next three theorems, which can be found in [13, Lemmas 4.2-4.4], because we will use them in the proof of our main result, Lemma 3.7.

(1) γ 2 is either constant or its image is a minimizing geodesic in (N, g N ).
(2) If γ 2 is not a constant and γ is a line in M , then the image of γ 2 is a line in N .
Next, we state remarks from [13, Lemma 4.4] and [8, page 208, Remark 8] which we will use in the proof of Lemma 3.7.
We are prepared to state our main result.

Lemma 3.7. Let (M, g, φ) be a Riemannian manifold with Ric 1 φ ≥ 0 and |φ| ≤ K for K > 0. Suppose there exists h ∈ π 1 (M) which does not satisfy the geodesic loops to infinity property along a given ray γ. Then the lift γ̃ of γ is in the split direction, and h∗(γ̃′(t)) = −γ̃′(t).
Proof.
Let (M̃, g̃) be the universal cover of M. By Theorem 2.5, there exists a line in M̃. By Theorem 3.3, we have the following cases: either M̃ = N × R^k and g̃ = g N + g R k , or M̃ = N × R with g̃ = e^{2f(r)/(n−1)} g N + dr², where N contains no lines.
If g̃ = g N + g R k , then we have a product metric, so we can follow the proof of [11, Proposition 1.9] to obtain the desired conclusion.
Suppose g̃ = e^{2f(r)/(n−1)} g N + dr², where N contains no lines. Recall the setup of Theorem 2.5. We know that there are minimal geodesics C̃ i running from γ̃(r i ) to h ∘ γ̃(r i ) (see [11, Theorem 1.7]). Let p N : M̃ → N and p R : M̃ → R be the projections onto the N component and the R component, respectively. Let C̃ i (t) = (x i (t), y i (t)), where x i (t) := p N (C̃ i (t)) and y i (t) := p R (C̃ i (t)), for t i ∈ (0, L i ) as in Theorem 2.5. The last equality follows from Remark 3.5.
By Theorem 3.4, since γ ∞ (t) is a line, x ∞ (t) is either the image of a line or constant. However, N doesn't contain any lines, so |x′ ∞ (0)| = 0. Now, using Remark 3.6, e^{4f(y(t i ))/(n−1)} ... Then, ... So, we know that lim ...

We want to show that for any t, there exists i 0 ∈ N such that for all i ≥ i 0 , |y′ i (t)| is strictly positive. Suppose for the sake of contradiction that there exists some t 1 such that for all i ≥ i 0 , y′ i (t 1 ) = 0. Then, since C̃ i (t) is unit speed, we have ..., which is a contradiction. Thus, for all t, there exists i large enough so that |y′ i (t)| is strictly positive. In particular, since |y′ i (t)| is never 0 in R, y′ i (t) never changes direction, and so d R (y(r i ), h(y(r i ))) = L(y i (t)) = ..., where |x′ i (t)|² → 0 uniformly by the above as i → ∞, and ε i → 0. Thus, ...

Since y(t) and h(y(t)) are in R, we can write y(t) = ∫_0^t y′(s) ds + y(0) and h(y(t)) = ∫_0^t h∗(y′(s)) ds + h(y(0)). Also, the only possible isometries of R are reflections, translations, and combinations of the two. We want to show that h∗ cannot be a translation.
Suppose for the sake of contradiction that h∗(y′(s)) = y′(s).
Taking the limit of both sides, we get lim i→∞ |h(y(r i )) − y(r i )|/L i = 0, which is a contradiction. Thus, h∗ must be a reflection, and ...

In order to show that γ̃ is in the split direction, along with showing (2), we must also show that |x′(s)| = 0 for all s. We proceed by using (2) to show that lim ... By (2), we have the following equality: ... By the Fundamental Theorem of Calculus and the Triangle Inequality,

... = |y(r i ) − y(0) − h(y(r i )) + h(y(0))|/L i ≥ |h(y(r i )) − y(r i )|/L i − |h(y(0)) − y(0)|/L i .
Taking the limit of both sides, and by (1), ... On the other hand, since |y′(s)|² = 1 − e^{2f(s)/(n−1)}|x′(s)|² ... Hence, |y′(s)| = 1, so |x′(s)| = 0, γ̃(t) = (x(0), y(t)), and γ̃ is in the split direction.

Corollary 3.8. If M n is a complete noncompact manifold with Ric 1 φ ≥ 0, |φ| bounded, and there exists an element h ∈ π 1 (M) which doesn't satisfy the loops to infinity property along a given ray γ, then M n is a flat normal bundle over a compact totally geodesic soul.
Proof.
We can follow the proof of [10, Theorem 3.1], except instead of using the Cheeger-Gromoll Splitting Theorem, we use Theorem 3.3.
We will give a sketch of the proof of the next corollary.
Examples
In this section, we will give examples to show that our main result (Theorem 1.3) is optimal. In the following example, we will give a space and metric where Ric N φ > 0 for N = ∞ and N ≤ 1, φ is unbounded, and H n−1 (M, Z) = Z.
Observe that in Example 4.1, the Splitting Theorem does hold. In our next example, we will construct an example where Ric ∞ φ > 0, lim ...

Now, we will give an example where Ric φ > 0, ∇φ is bounded, lim ρ→∞ (1/ρ²) ∫ φ(γ(s)) ds is nonzero and finite, the Splitting Theorem doesn't hold, and H n−1 (M, Z) = Z, rather than 0.

Example 4.3. Let M = R × S n−1 , where our metric is g = dr² + ρ²(r) g N . Let V be a vector in the tangent space of S n−1 . We wish to construct ρ(r) and φ(r) such that Ric φ > 0 everywhere and φ(r) and ρ(r) are smooth.
Let ρ be a function such that ..., where A and C are constants. (A graph illustrating what ρ might look like is omitted here.) Later in the example, we will consider ερ where ε > 0, so the space will look like a cylinder with a small dip around 0. We proceed with our calculations. Given our metric, Ric φ (∂/∂r, ∂/∂r) = −(n − 1)ρ″/ρ + φ″ and Ric φ (V, V) = (n − 2)(1 − ρ′²) − ρρ″ + φ′ρρ′ (see [9], page 69). | 2020-01-20T02:00:32.978Z | 2020-01-16T00:00:00.000 | {
"year": 2020,
"sha1": "5801871eb8ad70d42bbfbe3144b6a1243587cdba",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "5801871eb8ad70d42bbfbe3144b6a1243587cdba",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
119299089 | pes2o/s2orc | v3-fos-license | The Relationship Between Entanglement, Energy, and Level Degeneracy in Two-Electrons Systems
The entanglement properties of two-electron atomic systems have been the subject of considerable research activity in recent years. These studies are still somewhat fragmentary, focusing on numerical computations on particular states of systems such as Helium, or on analytical studies of model systems such as the Moshinsky atom. Some general trends are beginning to emerge from these studies: the amount of entanglement tends to increase with energy and, in the case of excited states, entanglement does not necessarily tend to zero in the limit of vanishing interaction between the two constituting particles. A physical explanation of these properties, shared by the different two-electron models investigated so far, is still lacking. As a first step towards this goal we perform here, via a perturbative approach, an analysis of entanglement in two-electron models that sheds new light on the physical origin of the aforementioned features and on their universal character.
I. INTRODUCTION
Entanglement constitutes one of the most fundamental phenomena in Quantum Mechanics [1][2][3][4][5][6]. Entangled states of composite quantum systems exhibit non-classical correlations that give rise to a rich variety of physical phenomena of both fundamental and technological significance. Quantum entanglement can be considered in two different and complementary ways. On the one hand, entanglement can be viewed as a resource. The controlled manipulation of entangled states is at the basis of several quantum information technologies. On the other hand, entanglement can be regarded as a fundamental ingredient for the physical characterization of natural quantum systems such as, for instance, atoms and molecules (for a comprehensive and up-to-date review on this subject see [6]). These two points of view are closely related to each other, although the latter is somehow less developed than the former. Concerning the second of the abovementioned approaches, several researchers have investigated in recent years the phenomenon of entanglement in atomic physics [6][7][8][9][10][11][12][13][14][15][16][17]. This line of enquiry is contained within the more general program of applying tools and concepts from information theory to the analysis of atomic and molecular systems [18][19][20][21][22][23][24][25][26][27]. Most of the studies on entanglement in two-electron systems focused on the properties of the concomitant ground states. However, the entanglement features exhibited by excited states of two-electron atomic systems have also been explored [13,14]. In this regard, the most detailed results have been obtained from analytical investigations of the entanglement properties of exactly soluble models, especially the Moshinsky one [13]. The behaviour of these soluble models is consistent with some (partial) results yielded by numerical explorations of entanglement in Helium based on high quality, state-of-the-art wave functions. Some general trends begin to emerge from these investigations. First, and not surprisingly, entanglement is found to increase with the strength of the inter-particle interaction. Second, entanglement also tends to increase with energy. Finally, the entanglement of excited states does not necessarily vanish in the limit of zero interaction. The last two properties are, perhaps, less intuitively clear than the first one. In fact, in a recent comprehensive review article on entanglement in atomic and molecular systems by Tichy et al. [6] it is said that ". . . the limit of vanishing interaction strengths does not necessarily yield a non-entangled state . . . it remains open whether this discontinuity effect has to be considered an artefact of the entanglement measures that are used, or whether a physical explanation will be provided in future.".
It is remarkable that the various two-electron models where entanglement has been studied so far share the basic qualitative features mentioned above. This suggests that these features may constitute generic properties of this kind of two-fermion model. In this regard, we share the opinion expressed by Tichy et al. [6]: "Due to the non-integrability of any non-hydrogen-like atom, theoretical studies of multielectron systems have, so far, mainly focused on exactly solvable model atoms. While such models differ strongly from real multielectron atoms as concerns the interelectronic interaction and the definition of the confining potential, they allow insight in some qualitative features." The aim of the present work is to clarify the origin of the aforementioned properties. To this end we are going to consider a perturbative approach to this problem, regarding the term in the Hamiltonian describing the interaction between the two electrons as a small perturbation. We shall show that the eigenvalue degeneracy of the unperturbed Hamiltonian (describing independent particles) plays a crucial role in explaining the entanglement features of the "perturbed" system.
II. QUANTUM ENTANGLEMENT IN SYSTEMS OF TWO IDENTICAL FERMIONS
Correlations between two identical fermions that are only due to the antisymmetric nature of the two-particle state do not contribute to the state's entanglement [28][29][30][31][32][33][34][35][36]. The entanglement of the two-fermion state is given by the quantum correlations existing on top of these minimum ones. For example, a two-fermion state of Slater rank one (that is, a state whose wave function can be expressed, in terms of an appropriate single-particle orthonormal basis, as one single Slater determinant) must be regarded as non-entangled. There are deep, fundamental physical reasons for this. On the one hand, the correlations exhibited by such states are not useful as a resource to perform non-classical information transmission or information processing tasks [28]. On the other hand, the non-entangled character of these states is consistent with the possibility of associating complete sets of properties to both parts of the composite system (see [29][30][31] for a detailed analysis of various aspects of this approach).
Two useful quantitative measures for the amount of entanglement of a pure state |ψ⟩ of a system of two identical fermions are expressed in terms of (see [37] and references therein) the linear entropy and the von Neumann entropy of the single-particle reduced density matrix ρ r . Notice that according to the entanglement measures given by Eqs.
(1) and (2) a pure state that can be represented by a single Slater determinant has no entanglement (that is, it is separable). The fermionic entanglement measures (1) and (2) are closely related to the Schmidt decomposition of pure states of systems constituted by two identical fermions [32]. For any pure state |ψ⟩ of two identical fermions it is possible to find an orthonormal basis {|i⟩, i = 0, 1, . . .} of the single-particle Hilbert space such that the state |ψ⟩ can be written as ..., where the Schmidt coefficients λ i verify 0 ≤ λ i ≤ 1 and Σ i λ i = 1 (in the case of systems with a single-particle Hilbert space of finite dimension N, we assume that N is even and that the sums on the index i run from i = 0 to i = N/2). Then one has that the entanglement measures (1) and (2) can be expressed in terms of the Schmidt coefficients of the state |ψ⟩ respectively as [32,37] ... and ... In the particular case of systems of two fermions with a single-particle Hilbert space of dimension four, the quantity 2ε L reduces to the entanglement measure (usually referred to as squared concurrence) studied in [28] (see also [35]). The entanglement measure given by equations (1) and (4) has been recently applied to the analysis of various physical systems or processes, including electron-electron scattering processes [34], the study of entanglement-related aspects of quantum brachistochrone evolutions [35], and the entanglement properties of two-electron atomic models [13]. As a final remark on entanglement in fermionic systems we mention that in the present work we deal with the fermionic case of the concept of entanglement between particles. This is not the only possible conception of entanglement in systems of identical particles. In particular, there is an approach to the study of entanglement in systems of indistinguishable particles which focuses on the entanglement between different modes (see, for instance, [6] and references therein).
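As a concrete illustration of how these measures act on the Schmidt coefficients, consider the following minimal Python sketch. The closed forms ε L = 1 − Σ i λ i ² and ε vN = −Σ i λ i ln λ i are assumed here to be the expressions lost from Eqs. (4) and (5) during extraction; they reproduce the benchmark values quoted later in the text (a single Slater determinant gives zero, and two equal coefficients give ε L = 1/2 and ε vN = ln 2).

import numpy as np

def fermionic_entanglement(lam):
    # Entanglement of a two-fermion pure state from the coefficients
    # lam of its Slater (fermionic Schmidt) decomposition.
    lam = np.asarray(lam, dtype=float)
    assert np.all(lam >= 0) and np.isclose(lam.sum(), 1.0)
    eps_L = 1.0 - np.sum(lam**2)            # linear-entropy measure
    nz = lam[lam > 0]
    eps_vN = -np.sum(nz * np.log(nz))       # von Neumann measure
    return eps_L, eps_vN

print(fermionic_entanglement([1.0]))        # single Slater determinant -> (0.0, 0.0)
print(fermionic_entanglement([0.5, 0.5]))   # -> (0.5, 0.693...) = (1/2, ln 2)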
III. PERTURBATIVE APPROACH
Let us consider a system of two identical fermions ("electrons") governed by a Hamiltonian of the form H = H 0 + λH ′ , where the unperturbed Hamiltonian H 0 corresponds to two independent (non-interacting) particles, λH ′ describes the interaction between the electrons, and λ is a small parameter. When this system is treated perturbatively, the perturbative corrections to the eigenenergies correspond to some "fine structure" sitting on top of the main pattern due to the spectrum of H 0 . It is plain that within this scenario the leading, zeroth-order contribution to the energy spectrum is independent of the detailed structure of the perturbation H ′ . As we shall presently see, the situation is completely different when, instead of the energy, we calculate the entanglement of the system's eigenstates. When the unperturbed energy eigenvalues are degenerate, the leading (zeroth-order) contribution to the eigenfunction's entanglement does depend, in general, on the details of the perturbation.
Let us consider an m-fold degenerate energy level of H 0 , with an associated set of m orthonormal eigenstates |ψ j ⟩, j = 1, . . . , m. Since H 0 describes two non-interacting particles, the m eigenstates |ψ j ⟩ can always be chosen to be Slater determinants written in terms of a family of orthonormal single-particle states |φ ⟩. All the members of the subspace H s spanned by the states |ψ j ⟩ are eigenstates of H 0 corresponding to the same eigenenergy. That is, energywise they are all equivalent. However, the different members of this subspace have, in general, different amounts of entanglement. Typically, the interaction H ′ will lift the degeneracy of the degenerate energy level. If we solve the eigenvalue problem corresponding to the (perturbed) Hamiltonian H and take the limit λ → 0, the perturbation H ′ will "choose" one particular basis {|ψ ′ k λ→0 ⟩} among the infinitely many possible bases of H s . The states constituting this special basis will in general be entangled. These states are of the form |ψ ′ k λ→0 ⟩ = Σ_{j=1}^{m} c kj |ψ j ⟩, and are determined (according to standard perturbation theory [38]) by the eigenvectors of the m × m matrix H̃ with elements given by H̃ ij = ⟨ψ i |H ′ |ψ j ⟩. It is then clear that in the limit λ → 0 the eigenstates of H will in general be entangled.
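The λ → 0 construction just described is easy to mimic numerically, as the following sketch shows. The 4 × 4 matrix below is a stand-in for H̃ ij = ⟨ψ i |H′|ψ j ⟩ with invented entries, used only to illustrate the procedure; in a real calculation the entries come from the chosen interaction.

import numpy as np

# Hypothetical matrix H~_ij = <psi_i|H'|psi_j> on a four-fold degenerate level
H_tilde = np.array([[1.0, 0.5, 0.0, 0.0],
                    [0.5, 1.0, 0.0, 0.0],
                    [0.0, 0.0, 1.0, 0.5],
                    [0.0, 0.0, 0.5, 1.0]])

# Diagonalizing H~ yields the basis "chosen" by the perturbation:
# column k of vecs holds the coefficients c_kj of |psi'_k> = sum_j c_kj |psi_j>
vals, vecs = np.linalg.eigh(H_tilde)
for k in range(4):
    print(f"first-order shift {vals[k]:+.3f}, c_k = {np.round(vecs[:, k], 3)}")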
Let m̃ be the number of different single-particle states within the family {|φ ⟩}. It is a quite typical behavior that m̃ tends to increase with the degree of degeneracy m of the energy levels of H 0 which, in turn, tends to increase with energy (that is, it tends to increase as one considers higher excited states). This explains (at least in part) why the range of entanglement values available to the eigenstates {|ψ ′ k λ→0 ⟩} tends to increase with energy. Indeed, the maximum amount of entanglement (as measured by (2)) that can be achieved by a linear combination of Slater determinants constructed from the single-particle states {|φ ⟩} is ln Ω, where Ω is the integer part of m̃/2. Expression (6) provides an upper bound for the entanglement of the states {|ψ ′ k λ→0 ⟩}. In the present work we are going to focus on the entanglement properties exhibited by the eigenstates {|ψ ′ k λ→0 ⟩} and on the entanglement upper bound (6). In this regard, our perturbative approach is unusual, since we are focusing on "zeroth-order properties". However, it must be stressed that the amounts of entanglement of the states {|ψ ′ k λ→0 ⟩} are in general finite quantities that do not vanish when λ → 0 and, consequently, constitute dominant aspects of the entanglement-related features characterizing the system.
IV. TWO INTERACTING SPIN-1/2 FERMIONS IN AN EXTERNAL CONFINING POTENTIAL
We apply now the previous considerations to a system consisting of two interacting spin-1/2 fermions in an external confining potential U(x). The interaction between the particles is described by the potential function V(x 1 − x 2 ), with V an even function. The Hamiltonian of this system is then H = −(1/2)(∂²/∂x 1 ²) − (1/2)(∂²/∂x 2 ²) + U(x 1 ) + U(x 2 ) + λV(x 1 − x 2 ), where x 1 and x 2 are the coordinates of the two particles. We use atomic units (m = 1, ℏ = 1). A relevant instance of this system corresponds to the case of harmonic confinement, U(x) = (1/2)ω²x², where ω is the natural frequency of the external harmonic field. This case includes the Moshinsky atom [7,13,39], where the interaction between the particles is also harmonic, λV(x 1 − x 2 ) = (1/2)λω²(x 1 − x 2 )², with λω² ≥ 0 being the square of the natural frequency of the interaction harmonic field. The Moshinsky atom is an exactly soluble system whose entanglement properties have been studied in detail. The examples considered here indicate that some important entanglement-related features of the Moshinsky model are also encountered in more general systems.
We now apply the formalism of perturbation theory to a system described by the Hamiltonian (7) with harmonic confinement and a generic interaction V between the particles. The unperturbed Hamiltonian is then ..., and the perturbation ... When λ = 0, the model consists of two independent harmonic oscillators with the same natural frequency. Let |n⟩ (n = 0, 1, 2, ...) be the eigenstates of each of these oscillators. Then, the kets |n, ±⟩ constitute a single-particle orthonormal basis (the signs ± correspond, in standard notation, to the spin state of the spin-1/2 particle). The eigenstates of H 0 are characterized by two quantum numbers n 1 and n 2 corresponding to the alluded pair of independent oscillators. The corresponding eigenenergies depend only on the value of the sum n 1 + n 2 and are m-fold degenerate, with m = 2(n 1 + n 2 ) + 1 if n 1 + n 2 is even and m = 2(n 1 + n 2 ) + 2 if n 1 + n 2 is odd. Assuming that n 1 + n 2 is odd, with n 1 = n 2 − 1, and taking spin into consideration, we can choose the following set of m antisymmetric eigenstates (all with the same energy), which are represented by single Slater determinants and consequently have zero entanglement. A similar set of separable eigenstates of H 0 can be chosen when n 1 + n 2 is even.
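The degeneracies quoted above can be checked by direct enumeration, as in the following short sketch: it counts one Slater determinant per unordered pair of distinct single-particle states (n, ±) with n 1 + n 2 fixed.

import itertools

def degeneracy(N):
    # number of antisymmetric two-fermion eigenstates of H0 with n1 + n2 = N
    sp = [(n, s) for n in range(N + 1) for s in (+1, -1)]   # (orbital, spin)
    pairs = [(a, b) for a, b in itertools.combinations(sp, 2)
             if a[0] + b[0] == N]
    return len(pairs)

for N in range(5):
    print(N, degeneracy(N))   # 1, 4, 5, 8, 9: matches 2N+1 (N even), 2N+2 (N odd)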
We consider now a harmonically confined two-fermion system with an interaction potential given by a repulsive Dirac delta function, λV(x 1 − x 2 ) = λδ(x 1 − x 2 ), λ > 0. For the first excited energy level of H 0 (n 1 + n 2 = 1), which is four-fold degenerate, we then have ..., and the corresponding eigenvectors can be written as ... In the limit of vanishing interaction, λ → 0, the eigenstates corresponding to the first two excited energy levels of the full Hamiltonian tend to the states (13), which have the following amounts of entanglement: ... It can be verified after some algebra that the eigenvectors (and the associated amounts of entanglement) obtained in this case coincide with those corresponding (in the λ → 0 limit of the first two excited energy levels) to a harmonic interaction. That is, they are the same as those associated with the Moshinsky model. We thus see that the harmonic and the Dirac delta interactions lead, for particles confined by an external harmonic well, to the same entanglement behaviour of the first excited states in the limit of weak interaction. We now consider the case of a generic external (one-dimensional) potential U(x) and interaction V(x 1 − x 2 ), with ..., where |0⟩ and |1⟩ are the ground and first excited eigenstates corresponding to the external confining potential U(x). The eigenvectors of H̃ are ... The values of entanglement exhibited by these states are ε L = ε vN = 0 for |ψ ′ 2 ⟩ and |ψ ′ 3 ⟩, and ε L = 1/2, ε vN = ln 2 for |ψ ′ 1 ⟩ and |ψ ′ 4 ⟩. This result generalizes the previous one, as we have solved the problem for generic interactions and external confining potentials. Here it is possible to obtain general results for an arbitrary confining potential U(x) for the case where one has (in the limit of vanishing interaction) one particle in the ground state and one particle in the first excited state of U(x), because the degeneracy of the concomitant energy level (of the two-particle system) can be determined directly without knowing the detailed energy spectrum associated with U(x). On the other hand, the properties exhibited by states of higher excitation in the limit of vanishing interaction do depend (via the degeneracy appearing in this limit case) on the detailed eigenenergies of the confining potential U(x). Consequently, the analysis of the limit of vanishing interaction can be performed only in a case-by-case way. In the next section we are going to consider higher excited states in the case of a generic interaction potential V(x 1 − x 2 ) and a harmonic confining potential.
V. ENTANGLEMENT UPPER BOUND FOR EXCITED STATES IN THE LIMIT OF WEAK INTERACTION
We consider now two spin-1/2 particles (in one dimension) confined by an external harmonic potential and having a generic interaction λV(x 1 − x 2 ). We shall calculate general upper bounds for the entanglement of the eigenstates of this system in the limit λ → 0. These bounds, expressed in terms of the quantum numbers n 1 and n 2 characterizing the eigenfunctions of H 0 , are

ε L (|n 1 n 2 ⟩) ≤ (n 1 + n 2 )/(n 1 + n 2 + 1), (21)

ε vN (|n 1 n 2 ⟩) ≤ ln(n 1 + n 2 + 1). (22)

Equation (22) constitutes a particular instance, corresponding to a harmonic confining potential, of the general upper bound (6). In Fig. 1 we plot the entanglement bounds against n 1 + n 2 . These curves represent the maximum possible entanglement compatible with those quantum numbers (the bounds do not depend on the interaction and are, in this sense, universal). In the particular case n 1 + n 2 = 2, besides the above upper bound, we can calculate the exact amount of entanglement (in the limit λ → 0) for an arbitrary interaction potential V(x 1 − x 2 ). This case corresponds to a five-fold degenerate energy level of H 0 shared by a set of eigenvectors of the form (10), with m = 5. The generic matrix H̃ is of the form ... with ... The corresponding normalized eigenvectors can be expressed as follows: |ψ ′ 4 ⟩ = (2r 1 ² + 1)^{−1/2} (−r 1 |ψ 2 ⟩ + r 1 |ψ 3 ⟩ + |ψ 5 ⟩) ..., where r 1 and r 2 are functions of the matrix elements b, c, d, e given by the expressions ... Now we calculate the amounts of entanglement of the states |ψ ′ 4 ⟩ and |ψ ′ 5 ⟩, which depend (in the same way) on r 1 and r 2 , respectively. The values adopted by these two constants depend on the form of the interaction V(x 1 − x 2 ). The general expressions for the amount of entanglement of the states |ψ ′ 4 ⟩ and |ψ ′ 5 ⟩ are ... We plot these expressions in Fig. 2. Note that the particular value r i = 1/√2 corresponds to the harmonic interaction in the Moshinsky model. The stars in both plots correspond to the entanglement amount for the Moshinsky atom (harmonic interaction). All depicted quantities are dimensionless.
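For reference, the bounds (21) and (22) are trivial to evaluate; the fragment below tabulates them for the first few levels (a minimal sketch of the curves plotted in Fig. 1).

import numpy as np

def entanglement_bounds(n1, n2):
    # upper bounds (21) and (22) in the weak-interaction limit
    N = n1 + n2
    return N / (N + 1), np.log(N + 1)

for N in range(1, 6):
    eL, evn = entanglement_bounds(N, 0)
    print(f"n1+n2 = {N}:  eps_L <= {eL:.3f},  eps_vN <= {evn:.3f}")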
VI. CONCLUSIONS
By recourse to a perturbative approach we studied the entanglement-related properties of a system consisting of two interacting spin-1/2 fermions ("electrons") confined by an external potential. Our present results clarify some aspects of the basic entanglement features exhibited by particular two-electron models studied previously, and shed some light upon the fact that these systems share important qualitative entanglement properties that are also observed in more general cases. Our analysis highlights the important role played by the degeneracy of the energy levels of the "unperturbed" (interaction-free) Hamiltonian H 0 . The non-vanishing entanglement exhibited by the interacting particles in the limit of vanishing interaction is due to the particular eigenbasis of H 0 "chosen" by the interaction. This amount of entanglement tends to increase with the alluded degeneracy which, in turn, tends to increase with energy. This sheds light on the physical reasons behind the fact (observed in all cases studied so far) that the amount of entanglement exhibited by the eigenstates of two-electron systems tends to increase with energy. These basic trends do not depend on the particular entanglement measure employed, as has been shown in the present work, where entanglement measures based upon the linear and the von Neumann entropies were considered. In connection with this point it is worth mentioning a relevant question raised in a recent review article on entanglement in atoms and molecules [6]: does the existence, for some excited states, of a finite amount of entanglement in the limit of vanishing interaction depend upon the particular entanglement measure employed? As already mentioned, the answer to this question is that the alluded feature constitutes an intrinsic property of the systems under consideration, which does not depend on the entanglement measure used.
As particular illustrations of the above considerations we have • Computed for general confining and interaction potentials, in the limit λ → 0, the entanglement measures based upon the linear entropy and the von Neumann entropy of the excited states associated with the four-fold degenerate unperturbed first excited state.
• Obtained for a harmonic confining potential and a generic interaction potential V (x 1 − x 2 ), the entanglement of the eigenstates corresponding in the above limit to the second excited unperturbed energy level (given by n 1 + n 2 = 2).
• Determined, for a harmonic confining potential and an arbitrary interaction, upper bounds on the amounts of entanglement exhibited by the system's eigenstates in the limit of vanishing interaction. These upper bounds are expressed in terms of the quantum numbers n 1 and n 2 .
The results advanced here corresponding to the entanglement in the limit of vanishing interaction are exact, and our procedure can in principle be applied to any excited state of systems of the kind considered in this work. In this sense, the perturbative method used here is not (in itself) the fundamental protagonist of our present considerations. The perturbative approach was used only as a tool to determine (exactly) the entanglement features of the system's eigenstates in the limit of vanishing interaction, in order to get some insight into the qualitative entanglement features of two-electron models. It is worth stressing that the entanglement exhibited by these systems in the limit λ → 0 constitutes a basic, dominant aspect of their entanglement-related characteristics. It is to be expected, on general physical grounds, that the entanglement-degeneracy relationship uncovered here constitutes a typical feature of atomic models. This provides a first step towards explaining the fact that the general entanglement features exhibited by soluble models such as the Moshinsky one are also observed in other two-electron systems.
The ideas discussed here may also be useful for the analysis of entanglement-related aspects of other scenarios involving interacting fermions, such as those appearing in molecular or solid state physics. Just to mention one example, a situation similar to the one observed in the atomic models considered here also occurs when studying the behaviour of electronic entanglement in the dissociation process of diatomic molecules [27]. One observes that, for instance, in the limit of large values of the reaction coordinate (corresponding to vanishing interaction between the electrons in the system) describing the dissociation of H 2 , the electronic entanglement does not tend to zero [27]. | 2012-09-24T15:23:05.000Z | 2012-03-01T00:00:00.000 | {
"year": 2012,
"sha1": "b277b3b5401f38f8dbc658095f750d1ad3dc8dec",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1209.5299",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b277b3b5401f38f8dbc658095f750d1ad3dc8dec",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
214698409 | pes2o/s2orc | v3-fos-license | Transient Thermal Analysis in an Intermittent Ceramic Kiln with Thermal Insulation: A Theoretical Approach
Department of Mechanical Engineering, Federal University of Campina Grande, Campina Grande 58429-900, Brazil Department of Chemical Engineering, Federal University of Campina Grande, Campina Grande 58429-900, Brazil Federal Institute of Education, Science and Technology of Paraíba, Patos 58700-000, Brazil Department of Food Engineering, Federal University of Campina Grande, Campina Grande 58429-900, Brazil
Introduction
Research studies indicate that the first ceramic parts appeared at Dolni Vestonice (Czech archaeological site) in approximately 26000 BC, followed by Siberia in 12000 BC, China in 11000 BC, and Mesopotamia in 8000 BC, and in Asia, the Middle East, and Europe in the late Neolithic period, between 7000 and 6000 BC [1][2][3][4]. The production of ceramic products consists of the following steps: preparation of the raw material, forming (molding), drying, and firing. In the manufacture of ceramic products, water is added to clay in order to facilitate workability in the molding step, i.e., to give plasticity to the material.
Although ceramic products have been used since ancient times, until the end of the nineteenth century, the production system of ceramic materials did not undergo major technological changes. Until this time, the production was manual, the drying was done in the sun, and the firing was carried out in field kilns, a type of intermittent kiln with low thermal efficiency [5].
In drying, an appreciable amount of thermal energy is used to evaporate the water that was added to the part during the forming period and to provide the necessary mechanical strength to reduce the chance of failure during the firing process. If the drying process is not performed correctly, it can lead to material defects, loss of productivity, and an increase in energy consumption. According to the literature [6][7][8][9][10][11][12][13][14][15][16][17][18], the operating variables that influence the drying process are temperature, relative humidity and velocity of the drying air, time, shape, thickness, area/volume ratio, particle size, initial moisture content, and properties of the ceramic material.
After the drying step, the molded part is subjected to high temperatures (firing) in order to provide it rigidity and mechanical resistance. The firing process of ceramic materials in kilns must follow a preestablished firing curve. The firing curve relates the temperature of the part to the processing time, indicating the heat transfer rate at which the part must be heated or cooled and the operating temperature at each instant of time. Figure 1 shows a typical firing curve for ceramic products.
For each material, there is a critical rate of heating and cooling such that, if it is exceeded, product quality losses will surely occur. Optimizing the firing curve for a given material means knowing the critical heating and cooling rates at each temperature and for each product type.
A kiln can be defined as a structure within which it is possible to heat materials at high temperatures; that is, it is the equipment where the drying and firing steps of the molded part occur. In the ceramics industry, much of this equipment is made of refractory materials, which are ceramics capable of maintaining their strength and physical integrity at high temperatures (above 3000°C). Kilns are classified into two main groups: intermittent or continuous (tunnel type) [4,5,19].
Various types of fuels can be used in ceramic kilns, such as biomass (in the form of untreated wood waste and firewood), fuel oil, and natural gas [20]. Among them, natural gas stands out because it is more efficient and is considered a clean source of energy, when compared to other fuels.
Despite the great importance of energy analysis in thermal equipment related to thermal insulators, involving both academia and the industrial sector, several works are directed toward a simple steady-state approach [21][22][23][24].
Thus, there is a need for more sophisticated studies, for example, those related to the transient heat transfer process.
Cavalcanti et al. [25] quantified heat losses in an intermittent kiln, concluding that convection heat loss is very low compared to radiation heat loss. The authors attribute this fact to the high surface temperature of the kiln and the low wind speeds in the surroundings (0.8 m/s).
Utlu and Hepbasli [26] performed energetic and exergetic analyses in a continuous kiln (tunnel type) for drying and firing of ceramic materials, using operational data. The energy and exergy efficiencies obtained were 39.98% and 16.41%, respectively.
Almeida et al. [27] studied numerically and experimentally the drying of hollow ceramic bricks in a continuous cross-flow industrial dryer. The authors found, through the energy efficiency, that the dryer wastes a large amount of energy (over 90%) during the drying process. Low exergy efficiency was also observed, ranging from 7% to 14% depending on temperature, indicating that the drying process in this dryer is typically dissipative and therefore has high thermal energy consumption.
Gomez et al. [28] quantified the heat transfers that occur in an intermittent ceramic kiln during the heating and cooling stages. Results indicate that the greatest heat loss occurs by radiation to the sidewalls of the equipment and that a considerable amount of energy is required to heat the base, ceiling, and sidewalls of the kiln. Further, it was observed, from numerical results, that the use of 25 mm thick ceramic fiber on the sidewalls of the kiln was sufficient to provide a considerable reduction in the maximum external surface temperature and an energy gain of approximately 35% when compared with the kiln without thermal insulation.
Thus, as a contribution to this research area, the main purpose of this work is to quantify the influence of the type and thickness of thermal insulators applied on the external sidewalls of an intermittent ceramic kiln, operating with natural gas as fuel, on the heat losses by convection and radiation, the temperature distribution in the insulating material, and the energy gain that this configuration provides for the system compared to the kiln without thermal insulation, using a transient approach.
Materials and Methods
This paper presents thermal analyses in an intermittent ceramic kiln, operating with natural gas as fuel, illustrated in Figure 2. The dimensions of the kiln and the experimental procedure for monitoring external ambient and kiln temperatures are reported in previous work [28].
To evaluate the influence of the type and thickness of thermal insulation on the response variables, it was considered that the temperatures on the external (T s,ext ) and internal (T int ) sidewalls of the kiln with thermal insulation during the heating step are equivalent to those obtained for the experiment performed with the kiln without thermal insulation (Figure 3). This consideration ensures that the firing curve is the same for all analyzed cases.
With this new configuration, the temperature of the outer surface of the insulation, T s2,ext , is unknown. Thus, it must be determined so that it is possible to calculate the heat lost by convection and radiation to the external environment.
Four types of thermal insulators for application in the kiln were analyzed, with thickness ranging from 0.5 mm to 100 mm. To select the types of thermal insulation, factors such as market availability, temperature range (whose upper limit must be greater than 300°C), and the availability of information regarding the main properties required for the calculations developed here were taken into consideration. Table 1 presents the thermal conductivity as a function of operating temperature as well as the recommended application range for each type of thermal insulation [29,30]. Table 2 indicates the density, specific heat, and emissivity of each insulating material [31][32][33][34], which do not change significantly with temperature. The general differential equation describing one-dimensional transient heat transfer is given by the following equation [35]:

ρ c p (∂T/∂t) = ∂/∂x (k ∂T/∂x) + S, (1)

where ρ, c p , T, k, and S are density, specific heat, temperature, thermal conductivity, and the source term, respectively. The initial condition for the thermal insulation in the sidewalls of the kiln is given by the following equation: (2). The boundary conditions on the kiln wall (x = 0) and on the outer surface of the insulation (x = L insu ) are described, respectively, by equations (3) and (4):

T(0, t) = T s,ext (t), (3)

−k insu (∂T/∂x)| x=L insu = (h c + h r )(T s2,ext − T amb ), (4)

where k insu , h r , h c , T s,ext , T s2,ext , and T amb are, respectively, the thermal conductivity of the thermal insulators, the radiation and convective heat transfer coefficients, the surface temperature outside the kiln, the surface temperature outside the insulation, and the external ambient temperature where the kiln is located.
To solve equation (1), the numerical method based on the finite volume formulation was used. The first step of the method is to discretize the domain, that is, to divide it into subdomains, also known as control volumes. The second step of the finite volume method is what distinguishes it from other numerical methods used in computational fluid dynamics: integrating the differential equation (equation (1)) over each of these subdomains, obtaining a set of algebraic equations relating the control volumes.
To understand the method, consider a one-dimensional domain that has been discretized into five control volumes, as shown in Figure 4. Taking the third control volume, its center is identified by the letter P. The east and west faces of the analyzed control volume are identified by e and w, respectively. Similarly, the centers of the control volumes to the east and west of it are identified by E and W. The distance between faces w and e is given by Δx, which is the length of the control volume P. The distances between the centroids P and E and between W and P are identified by δx PE and δx WP , respectively. Similarly, the distances between centroid P and face e and between face w and centroid P are identified by δx Pe and δx wP , respectively. The discretization technique was such that the west face of the first control volume coincides with boundary 1 (x = 0) and the east face of the last control volume coincides with boundary 2 (x = L insu ), ensuring that all control volumes are whole [36]. In addition, all control volumes are of the same size (δx WP = δx PE = Δx).
Integrating equation (1) over a given one-dimensional control volume and over a time interval from t to t + Δt gives the following equation: As this analysis is applied to a transient problem, the discretization scheme can be made explicit, implicit, or fully implicit. Details of these methods can be found in Patankar [37] and Smith [38]. The fully implicit scheme was employed because it is unconditionally stable [35].
First Control Volume (Boundary 1).
Disregarding the source term and solving the integrals of equation (5) for the first control volume, we obtain the following equation: Taking T t+Δt P = T P and T t P = T 0 P and rearranging equation (6), the equation discretized by the finite volume method for the first control volume is given by equation (7), as follows: where k f1 and T s,ext are, respectively, the thermal conductivity and the temperature of the thermal insulation on the face in contact with the kiln wall (x = 0).
Central Control Volume.
Disregarding the source term and solving the integrals of equation (5) for the central control volumes, we have ... Considering linear interpolation functions for temperature and making T t+Δt P = T P and T t P = T 0 P , we can write ... Rearranging, we obtain the discretized equation for the central control volumes as follows:
Last Control Volume (Boundary 2).
Disregarding the source term, solving the integrals of equation (5), and considering linear interpolation functions for temperature, we have the following equation for the last control volume: The surface temperature outside the insulation, T s2,ext , can be written as a function of the heat lost per unit area of the thermal insulation (q f2 ″) as follows: Substituting the expression of T s2,ext obtained in equation (12) into the boundary condition of the outer surface of the insulation (equation (4)) and rearranging the terms, we obtain the heat flux lost by the thermal insulation (q f2 ″) as follows: Substituting the surface temperature outside the insulation (equation (12)) as a function of the heat flux lost by the thermal insulation (equation (13)) in equation (11) and rearranging the terms, we obtain the equation discretized by the finite volume method for the last control volume as follows: As can be seen, both sides of the discretized equations (equations (7), (10), and (14)) contain temperatures at the new time step, characteristic of the fully implicit discretization scheme. Thus, a system of algebraic equations, with one equation for each control volume, must be solved at each step of the process. For solving such systems of equations, the iterative calculation tool available in Microsoft Excel software was used. The variables k w , k e , k f1 , and k f2 are, respectively, the thermal conductivities on the west face, the east face, the west face of the first control volume, and the east face of the last control volume. For the sake of simplicity, such thermal conductivity values were considered constant for all control volumes at a given time, but changing from one instant to another as a function of the average thermal insulation temperature, according to the following equation:

T average insu (t) = (1/N) Σ_{i=1}^{N} T i (t), (15)

where T average insu (t) is the average temperature of the thermal insulation at time t, T i (t) is the temperature at the center of control volume i at time t, and N is the number of control volumes. The variables h r and h c in equation (14) are, respectively, the radiation and convection heat transfer coefficients. The radiation heat transfer coefficient is calculated according to equation (16) [22], as follows:

h r = ε insu σ (T s2,ext + T amb )(T s2,ext ² + T amb ²), (16)

where ε insu is the emissivity of the thermal insulation, σ is the Stefan-Boltzmann constant (σ = 5.67 × 10⁻⁸ W/(m² · K⁴)), and T s2,ext and T amb are the absolute temperatures of the outer surface of the insulation (Figure 3) and of the external surroundings, respectively. The convective heat transfer coefficient can be calculated as a function of the average Nusselt number (Nu L ), according to the following equation:

h c = Nu L k f / L, (17)

where k f is the thermal conductivity of the fluid (ambient air) and L is the characteristic length, which corresponds to the height of the outer sidewall of the kiln. Modeling the kiln sidewalls as vertical flat plates, considering that the nature of the fluid flow is natural convection, and admitting laminar flow (10⁴ ≤ Ra L ≤ 10⁹), the correlation proposed by Churchill and Chu [39] was used to calculate the average Nusselt number, as follows:

Nu L = 0.68 + 0.670 Ra L ^{1/4} / [1 + (0.492/Pr)^{9/16}]^{4/9}, (18)

where Pr is the Prandtl number and Ra L is the Rayleigh number, calculated from equation (19), as follows:

Ra L = g β (T s2,ext − T amb ) L³ / (ν α), (19)

where ν is the kinematic viscosity, α is the thermal diffusivity, g is the gravity acceleration, β = 1/T f is the volumetric expansion coefficient, T s2,ext and T amb are the outer surface temperature of the thermal insulation and the external ambient temperature, respectively, and L is the height of the outer sidewall of the kiln.
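As a minimal numerical sketch of the fully implicit scheme derived above (one linear solve per time step), consider the following Python fragment. It assumes, for illustration only, the prescribed temperature T(0, t) = T s,ext (t) at the kiln wall and the combined convection-radiation condition at x = L insu , with constant properties and made-up values; the calculation described in the text instead updates k insu , h c , and h r iteratively at every step.

import numpy as np

def step_fully_implicit(T, Ts_ext, T_amb, k, rho, cp, h_c, h_r, dx, dt):
    # Advance the 1D insulation temperature field one time step (fully implicit).
    N = len(T)
    a0 = rho * cp * dx / dt                       # transient coefficient a_P^0
    A = np.zeros((N, N))
    b = a0 * np.asarray(T, dtype=float)
    for i in range(N):
        aW = 2 * k / dx if i == 0 else k / dx     # half-cell spacing at boundary 1
        aE = 0.0 if i == N - 1 else k / dx
        A[i, i] = a0 + aW + aE
        if i > 0:
            A[i, i - 1] = -aW
        if i < N - 1:
            A[i, i + 1] = -aE
    b[0] += (2 * k / dx) * Ts_ext                 # boundary 1: T(0, t) = Ts_ext
    # boundary 2: conduction through the last half cell in series with the film
    coef = 1.0 / (dx / (2 * k) + 1.0 / (h_c + h_r))
    A[-1, -1] += coef
    b[-1] += coef * T_amb
    return np.linalg.solve(A, b)

# one illustrative step: 25 mm layer, 10 control volumes, made-up properties
T = np.full(10, 30.0)
T = step_fully_implicit(T, Ts_ext=300.0, T_amb=30.0, k=0.08, rho=128.0,
                        cp=1130.0, h_c=5.0, h_r=10.0, dx=0.025 / 10, dt=60.0)
print(np.round(T, 2))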
The values of the parameters ν, α, Pr, and β are tabulated according to the fluid type and the film temperature (T f ), which corresponds to the average temperature between the external ambient temperature (T amb ) and the outer surface temperature of the thermal insulation (T s2,ext ); their values are calculated for each time of the process.
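A short sketch of the film-temperature evaluation of h c and h r described above may be useful. The laminar Churchill-Chu form is assumed for equation (18), and the fixed air properties below are rough values that in the actual procedure would be read from tables at T f ; all numbers are illustrative.

import numpy as np

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def h_coefficients(Ts2_ext, T_amb, L, eps_insu,
                   nu=2.0e-5, alpha=2.9e-5, Pr=0.70, k_f=0.028):
    # temperatures in kelvin; nu, alpha, Pr, k_f taken at the film temperature
    T_f = 0.5 * (Ts2_ext + T_amb)
    beta = 1.0 / T_f                                              # beta = 1/T_f
    Ra = 9.81 * beta * (Ts2_ext - T_amb) * L**3 / (nu * alpha)    # Eq. (19)
    Nu = 0.68 + 0.670 * Ra**0.25 / (1 + (0.492 / Pr)**(9 / 16))**(4 / 9)
    h_c = Nu * k_f / L                                            # Eq. (17)
    h_r = eps_insu * SIGMA * (Ts2_ext + T_amb) * (Ts2_ext**2 + T_amb**2)
    return h_c, h_r

print(h_coefficients(Ts2_ext=400.0, T_amb=300.0, L=0.5, eps_insu=0.75))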
To guarantee that the numerical results were independent of the number of control volumes, a mesh convergence analysis was developed using the methodology proposed by Celik et al. [40], which is based on Richardson's extrapolation [41,42]. Using this methodology, it is possible to determine the ideal mesh by calculating the grid convergence index (GCI). In addition, Richardson extrapolation allows one to estimate the exact solution from the results of existing meshes.
This methodology has been used in several works related to the most diverse areas of computational fluid dynamics [43][44][45][46][47][48]. The entire procedure for mesh convergence analysis is described in detail in Gomez et al. [28].
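A compact sketch of the Celik et al. procedure for a constant refinement factor r is given below; the three solution values are placeholders chosen only to illustrate a monotonically converging case.

import math

def gci(phi1, phi2, phi3, r=2.0, Fs=1.25):
    # phi1, phi2, phi3: solutions on the fine, medium, and coarse meshes
    e21, e32 = phi2 - phi1, phi3 - phi2
    C = e21 / e32                                   # 0 < C < 1: monotonic convergence
    p = math.log(abs(e32 / e21)) / math.log(r)      # apparent order
    phi_ext = (r**p * phi1 - phi2) / (r**p - 1)     # Richardson extrapolation
    gci21 = Fs * abs(e21 / phi1) / (r**p - 1)
    gci32 = Fs * abs(e32 / phi2) / (r**p - 1)
    return C, p, phi_ext, gci21, gci32

print(gci(100.0, 101.0, 104.0))   # made-up values: C = 1/3, p = ln 3 / ln 2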
In addition to the mesh convergence analysis, a time step independence study based on the absolute error was employed. The response variable (ϕ) for the two previously mentioned studies was the energy gain by the thermal insulation during the kiln heating process (Q in insu ), calculated using the composite trapezoidal rule, as follows:

Q in insu = (Δt/2) [q in insu (a) + 2 Σ_{i=1}^{n−1} q in insu (i) + q in insu (b)], (20)
where a is the time instant at which the kiln was turned on (a = 0 s), b is the time instant at which the kiln was turned off (b = 840 min, based on experiments), and n is the number of time steps in the heating step. q in insu (a), q in insu (b), and q in insu (i) are the rates of energy entering the thermal insulation at times a, b, and i, respectively.
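In code, equation (20) reduces to a single call to a trapezoidal integrator, as the following fragment sketches; the q_in array is a placeholder for the computed rate of energy entering the insulation at each stored time instant.

import numpy as np

t = np.linspace(0.0, 840.0, 841) * 60.0    # heating step: 0 to 840 min, in seconds
q_in = np.random.default_rng(0).uniform(50.0, 150.0, t.size)  # placeholder rates, W

Q_in_insu = np.trapz(q_in, t)              # composite trapezoidal rule, Eq. (20)
print(f"Q_in_insu = {Q_in_insu / 1e6:.2f} MJ")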
To quantify the influence of type and thickness of thermal insulation on the kiln efficiency, the energy gain variable was proposed, which can be calculated according to the following equation: ..., where E supplied and E supplied insu are the energies provided during the heating process for the uninsulated and thermally insulated kiln, respectively, and are calculated according to equations (23) and (24), as follows: ..., where Ė supplied and Ė supplied insu are the heat rates supplied to the kiln without and with thermal insulation, respectively, and are calculated according to equations (25) and (26), as follows:

Ė supplied = Ė st + q lost (base/ceiling) + q lost sidewalls , (25)

Ė supplied insu = Ė st + q lost (base/ceiling) + q lost insu + Ė st insu , (26)

where Ė st is the rate at which energy is stored in the kiln, q lost (base/ceiling) is the heat lost by the base/ceiling assembly of the kiln, and q lost sidewalls is the heat lost by the sidewalls of the kiln without thermal insulation. The calculation methodology for each heat transfer parameter in the kiln without thermal insulation is presented in detail in Gomez et al. [28]. q lost insu is the heat lost by the thermal insulation (equation (27)) and Ė st insu is the rate at which energy is stored in the thermal insulation, calculated according to equation (28):

Ė st insu = ρ insu L insu c p insu A (ΔT average insu / Δt), (28)
where A is the sum of the kiln sidewall areas; ρ insu , L insu , and c p insu are, respectively, the density, thickness, and specific heat of the thermal insulators; and ΔT average insu is the average temperature variation within the thermal insulation in the time interval ∆t.
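The quantities just defined translate directly into code; in the sketch below, the expression for the percentage energy gain (the relative reduction of the supplied energy) is an assumption consistent with the variables above, and all numerical inputs are illustrative.

def stored_energy_rate(rho, L, cp, A, dT_avg, dt):
    # Eq. (28): rate of energy storage in the insulation layer, W
    return rho * L * cp * A * dT_avg / dt

def energy_gain(E_supplied, E_supplied_insu):
    # assumed form of Eq. (22): relative saving of supplied energy, %
    return 100.0 * (E_supplied - E_supplied_insu) / E_supplied

print(stored_energy_rate(rho=128.0, L=0.025, cp=1130.0, A=12.0,
                         dT_avg=0.5, dt=60.0))      # W, illustrative values
print(energy_gain(5.0e9, 3.25e9))                   # -> 35.0 (compare the ~35% in [28])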
It is important to emphasize that, given the considerations that have been made (Figure 3), the curves of q lost (base/ceiling) and Ė st as a function of time are common to the kiln with and without thermal insulation. What changes from case to case is the heat lost by the sidewalls of the kiln, influenced by the type and thickness of the thermal insulation. Figure 5 shows the average temperatures of the outer and inner surfaces of the kiln without insulation, as well as the ambient air temperature in the surroundings of the kiln during the heating step, obtained by experiments. It is observed that the curve for the external surface temperature of the kiln, T s,ext , is one of the boundary conditions used in the analysis of the kiln with thermal insulation, as seen in equation (3). The curve for the ambient air temperature outside the kiln (T amb ) is one of the input parameters of the other boundary condition of the physical problem, as seen in equation (4).
Results and Discussion
As previously mentioned, a mesh convergence study was performed for the domain representing the thermal insulation used on the kiln sidewalls. For each thermal insulation thickness analyzed, three types of meshes were created, named M 1 , M 2 , and M 3 , with 20, 10, and 5 control volumes, respectively. Table 3 presents the representative mesh size as a function of the thermal insulation thickness, where l 1 , l 2 , and l 3 are the representative mesh sizes of M 1 , M 2 , and M 3 , respectively. In order to satisfy the recommendations proposed by Celik et al. [40], the number of control volumes of each mesh was defined such that the refinement factors between the meshes were equal, i.e., r 21 = r 32 = 2. It is worth noting that all analyses were performed using the same type of thermal insulation (ceramic fiber) and the same time step (Δt = 0.1 min). The response variable used for the mesh convergence analysis was the heat conduction energy gain by the thermal insulation during the heating stage (Q in insu ), according to equation (20). The values of this variable as a function of the mesh and thermal insulation thickness are reported in Table 4. Table 4 also presents the results of the mesh convergence study parameters as a function of thermal insulation thickness (L insu ).
The values of C in the table indicate monotonic convergence for cases where L insu ≥ 7.5 mm, since their values are in the range between 0 and 1, ensuring the applicability of the Richardson extrapolation method to the variable of interest in the given range [41]. For L ≤ 5.0 mm, it is observed that the convergence is oscillatory, since −1 < C < 0. Celik et al. [40] state that the oscillatory convergence should not be considered an unsatisfactory result, because if ε 21 = ϕ 2 − ϕ 1 or ε 32 = ϕ 3 − ϕ 2 is very close to zero, it may indicate that the exact solution has already been reached, as seems to be the case. It is important to emphasize that the step-by-step procedure to calculate the C parameter and the mesh convergence index (GCI) can be found in the previous work [28].
It is observed that there is a reduction in the convergence condition for all analyzed cases, since GCI 21 < GCI 32 , indicating that the dependence was reduced with the mesh refining. From the proximity of the values of GCI 32 and (r 21 )^p GCI 21 , it is possible to state that the asymptotic interval was reached and that the extrapolated solution is close to the exact solution for this variable, for all analyzed cases [49]. For all thickness values, there is also a good proximity between the extrapolated solution and the solutions for M 1 and M 2 . Thus, considering that the more refined the mesh, the greater the total simulation time, the 10-control-volume mesh (M 2 ) was chosen for further analysis.
For all analyzed cases, the energy balance is not satisfied for the 5-control-volume meshes. The 10- and 20-control-volume meshes have values below 10⁻⁹ for the global energy balance in the thermal insulation during all time instants of the heating process, considering a convergence criterion of 10⁻⁷. The next step was to perform a time step independence analysis using the same type of thermal insulation (ceramic fiber), the same mesh (M 2 ), and the same response variable (Q in insu ). For this analysis, three distinct time steps were used, as presented in Table 5. It is observed, for all analyzed cases, that the absolute error of the response variable between the time steps of 0.1 min and 1.0 min (ε 12 = ϕ 1 − ϕ 2 ) is considerably smaller when compared to the absolute error between the time steps of 1.0 min and 15.0 min (ε 23 = ϕ 2 − ϕ 3 ). Thus, considering that the smaller the time step used, the greater the total simulation time, a time step of Δt = 1 min was adopted for further analysis. Figure 6 illustrates the average temperature in the thermal insulation during the heating process for several ceramic fiber thicknesses. These values are calculated from the arithmetic mean of the temperatures obtained at the center of each of the 10 control volumes at each time instant.
Such results are essential for the analysis, since the thermal conductivity of the insulating material for each control volume at a given time is iteratively calculated as a function of the average temperature in the insulation at its respective time instant (equation (15)). From the analysis of this figure, it was verified that regardless of the thickness of the thermal insulation, the average temperature tends to increase along the heating process, as expected. As seen in Table 1, the thermal conductivity of the insulating materials increases with the increase of the average temperature in it; thus, the increase in the insulating material thickness promotes a decrease in the average temperature of the thermal insulation and, consequently, a reduction in its thermal conductivity. Figure 7 illustrates the external surface temperature of the thermal insulation during the heating process for several ceramic fiber thicknesses.
From the analysis of this figure, it is observed that the external surface temperature of the thermal insulation decreases with an increase in the material thickness, proving that the greater the thickness of the insulating material, the higher the thermal resistance to heat transfer. Thus, a higher thickness of thermal insulation implies a smaller difference between the external surface temperature and the ambient temperature, thus promoting a reduction in convection and radiation heat losses through the kiln sidewalls, as will be confirmed below. It is important to state that the 0 mm thickness curve in Figure 7 represents the temperature of the thermal insulation on the face in contact with the kiln wall, as seen in Figure 3 and equation (3), therefore being the first boundary condition (x = 0) of the physical problem for the thermally insulated kiln. Figure 8 shows the convective heat transfer coefficient during the heating process for several ceramic fiber thicknesses. The thermophysical properties, kinematic viscosity, thermal diffusivity, volumetric expansion coefficient, and
Prandtl number, tend to increase the value of the convective heat transfer coefficient as the thermal insulation thickness is increased. This happens because the increase in the thermal insulation thickness promotes a reduction in the surface temperature outside the insulation (Figure 7) and, consequently, a reduction in the film temperature. In contrast, the increase in the thermal insulation thickness promotes a decrease in the thermal conductivity of the fluid (kf), because of the reduction in the film temperature [22], and in the difference between the surface temperature outside the thermal insulation and the ambient temperature, as previously seen, both contributing, more significantly, to a reduction in the convective heat transfer coefficient. Figure 9 shows the radiation heat transfer coefficients during the heating process for several ceramic fiber thicknesses. The radiation heat transfer coefficient strongly depends on the temperatures of the outer surface of the thermal insulation and of the external surroundings, as seen in equation (16), making its value more strongly influenced by the thermal insulation thickness than the convection heat transfer coefficient. Figures 10 and 11 show the heat fluxes lost by convection and radiation, respectively, during the heating process, for several ceramic fiber thicknesses.
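Before turning to the heat-flux results, the two film coefficients discussed above can be sketched as follows. The linearized radiation coefficient is the standard form consistent with the strong temperature dependence attributed to equation (16); for convection, the Churchill-Chu vertical-plate correlation is assumed here, since the paper's exact Nusselt correlation is not reproduced in this excerpt.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m-2 K-4

def h_radiation(t_s, t_sur, emissivity):
    """Linearized radiation coefficient (temperatures in kelvin):
    h_r = eps * sigma * (Ts + Tsur) * (Ts**2 + Tsur**2)."""
    return emissivity * SIGMA * (t_s + t_sur) * (t_s**2 + t_sur**2)

def h_convection(ra_l, pr, k_f, length):
    """Free-convection coefficient for a vertical wall, assuming the
    Churchill-Chu correlation for the average Nusselt number."""
    nu_l = (0.825 + 0.387 * ra_l**(1.0 / 6.0)
            / (1.0 + (0.492 / pr)**(9.0 / 16.0))**(8.0 / 27.0))**2
    return nu_l * k_f / length
```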
As expected, the greater the thermal insulation thickness on the kiln sidewalls, the lower the heat lost by convection and radiation. It is also noted that, for a given thermal insulation thickness, the heat lost by radiation from the sidewalls is always greater than the heat lost by convection. This is because the radiation heat transfer coefficient is greater than the convection heat transfer coefficient throughout the kiln heating process. Figure 12 illustrates the rate at which energy is stored in the thermal insulation during the heating process for several ceramic fiber thicknesses.
The analysis of this figure shows that the greater the insulation thickness, the higher the rate at which energy is stored in it. Among the four types of thermal insulation analyzed, ceramic fiber has the highest accumulated energy rates. This is because both the density and the specific heat of the ceramic fiber are higher than those of the other insulating materials analyzed (Table 2), giving it a greater capacity to absorb heat. Figure 13 shows the heat rate that must be supplied to the kiln during the heating process for several ceramic fiber thicknesses.
From the analysis of this figure, it can be seen that up to approximately 100 minutes the heat transfer rates provided to the kiln are very close. From this moment on, the heat flux supplied to the kiln is strongly influenced by the thermal insulation thickness. The greater the thickness of the thermal insulation, the lower the energy rates that must be supplied to the kiln at each instant of time in order to maintain the same firing curve during the heating step. This can be explained by the fact that the increase in the thermal insulation thickness increases the thermal resistance by conduction, which reduces the external surface temperature of the thermal insulation (Figure 7), providing an increase in the convection and radiation thermal resistances. These phenomena reduce the heat loss and the energy rate that must be supplied to the kiln. Figure 14 shows the accumulated energy supplied to the kiln during the heating process for several ceramic fiber thicknesses.
Such curves are obtained by integrating in time the heat rate curves (Figure 13) for each type and thickness of thermal insulation throughout the heating process. It is evident that the greater the thickness of the thermal insulation is, the lower the thermal energy that must be supplied to the kiln during the heating step. Figure 15 shows the influence of type and thickness of thermal insulation on the total energy supplied during the kiln heating step. From this result and using equation (22), it was possible to calculate the energy gain as a function of the type and thickness of the thermal insulation, as shown in Figure 16.
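A sketch of this post-processing step is given below. The trapezoidal integration of the heat-rate curves is standard; the energy-gain definition is an assumption, since equation (22) is not reproduced in this excerpt, but it is consistent with the reported values (e.g., 39.67% for 100 mm of fiberglass).

```python
import numpy as np

def supplied_energy(time_s, q_w):
    """Energy supplied over the heating step: time integral of q(t)."""
    return np.trapz(q_w, time_s)  # joules

def energy_gain_percent(e_bare, e_insulated):
    """Assumed form of equation (22): gain relative to the bare kiln."""
    return 100.0 * (e_bare - e_insulated) / e_bare
```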
A number of observations can be made from the analysis of Figures 15 and 16, as follows. The greater the thermal insulation thickness, the lower the energy that must be supplied during the kiln heating process and, consequently, the greater the energy gain. It is important to emphasize that beyond a certain thickness value (approximately 50 mm), the energy gain increases only insignificantly with further increases in the thermal insulation thickness.
Fiberglass is the type of thermal insulation that provides the greatest energy gain for any thickness analyzed, followed by rockwool, calcium silicate, and finally ceramic fiber.
While 5.0 mm of fiberglass is sufficient to provide an energy gain of approximately 29.9%, the same thickness for rockwool, calcium silicate, and ceramic fiber provides, respectively, energy gains of 28.0%, 26.5%, and 24.3%.
For low insulation thicknesses, the influence of the insulation type on the energy gain is greater, while for high thicknesses the energy gain is little influenced by the type of thermal insulation. For a thickness of 2.5 mm, the energy gain ranges from 17.9% (ceramic fiber) to 24.1% (fiberglass), i.e., a variation of 6.2 percentage points. For a thickness of 100 mm, the energy gain ranges from 38.3% (ceramic fiber) to 39.7% (fiberglass), i.e., a variation of 1.4 percentage points.
The results presented here indicate that the lower the thermal conductivity of the material, the greater the energy gain, as expected. The use of a thermal insulator with lower thermal conductivity implies less heat loss, and therefore less energy must be supplied to the kiln, resulting in a higher energy gain. Figure 17 shows the influence of thermal insulation type and thickness on the maximum external surface temperature obtained during the kiln heating process.
A number of observations can be made from the analysis of Figure 17, as follows. The greater the thickness of the thermal insulation, the lower the maximum external surface temperature of the kiln.
Fiberglass is the insulating material which provides the greatest reduction in the maximum external surface temperature for any thickness analyzed, followed by rockwool, calcium silicate, and finally ceramic fiber. While 2.5 mm of fiberglass is sufficient to reduce the maximum external surface temperature of the kiln from 249.3°C (without thermal insulation) to 148.1°C, the same 2.5 mm of rockwool, calcium silicate, and ceramic fiber reduces the maximum external surface temperature to 150.9°C, 158.2°C, and 176.8°C, respectively.
For thicknesses above 5.0 mm, the higher the thickness of the thermal insulation, the lower the influence of the thermal insulation type on the maximum external surface temperature. For a thickness of 5.0 mm, the maximum external surface temperature ranges from 115.7°C (fiberglass) to 145.2°C (ceramic fiber), i.e., a range of 29.5°C. For a thickness of 100 mm, the maximum external surface temperature ranges from 42.8°C (fiberglass) to 49.3°C (ceramic fiber), i.e., a range of 6.5°C. The results indicate that the lower the thermal conductivity of the material, the lower the maximum external surface temperature.
Fiberglass is a fiber-reinforced polymer using glass fiber, also known as glass-reinforced plastic [50]. It is made of plied, spun, and texturized continuous glass filaments. Fiberglass has many advantages compared to other types of thermal insulation, such as low thermal conductivity, low density, very high strength, high sound absorption, flame retardancy (it does not burn), exceptional weathering resistance (low moisture absorption), and low cost [50][51][52]. Thus, fiberglass has been applied as thermal and acoustic insulation in various thermal equipment, pipelines, and building constructions, reducing heat transfer. It is also commonly used to prevent condensation of water vapor on cold surfaces, such as air conditioning ducts (ductwork) and cold-water pipes. Furthermore, fiberglass is not generally considered to be dangerous, but if not properly sealed off, it can get into air vents and circulate through the environment, irritating the skin and respiratory system of humans.
Conclusions
Considering what has been presented, it is important to use thermal insulation in processes where it is desired to reduce heat losses to the external environment and, consequently, to promote an increase in their thermal efficiency.
To ensure the reliability of the numerical results obtained, mesh convergence and time step analyses were performed for the domain representing the thermal insulation, based on Richardson extrapolation, grid convergence index (GCI), and absolute error calculations.
From the obtained results and the discussions made in this work, it can be concluded that:
(a) The proposed methodology can be easily applied to similar processes, given knowledge of the temperature and air velocity conditions in the surroundings of the kiln, as well as the external and internal surface temperatures of the kiln.
(b) The mathematical procedure developed in this paper required low computational effort when compared to commercial CFD solvers, such as STAR CD®, Ansys Fluent®, and Ansys CFX®.
(c) Results indicated that the higher the thickness of the thermal insulation, the lower the maximum external surface temperature and the greater the energy gain compared to the kiln without thermal insulation. While a 1 mm fiberglass thickness provides a 57.88°C reduction in the maximum external surface temperature and an energy gain of 15.37%, a 100 mm fiberglass thickness provides a reduction of 206.59°C in the maximum external surface temperature and an energy gain of 39.67% compared to the kiln without thermal insulation.
(d) Of the four types of thermal insulation analyzed, fiberglass is the insulating material that provides the greatest energy gain and, consequently, promotes the greatest economic and environmental gains.
(e) Fiberglass, due to excellent physical characteristics such as low thermal conductivity and density, is the thermal insulation that provides the greatest reduction in maximum external surface temperature, contributing to the greatest reduction in thermal discomfort and in the risk of work accidents during operation.
Finally, the results presented here can be used as a decision-making tool in choosing the type of thermal insulation and its thickness that are capable of providing the desired energy gain for the kiln, with a better benefit/cost ratio. It is important to emphasize that other factors must be considered when choosing the type of insulation, such as shelf life, maintenance cost, and effects on human health.
Abbreviations
aE, aP, a0P, aW, Su, SP: Coefficients (-)
A: Area (m2)
cp: Specific heat (J kg−1 K−1)
cp insu: Specific heat of the thermal insulation (J kg−1 K−1)
C: Mesh convergence parameter (-)
Ėst: Rate at which energy is stored in the kiln (W)
Ėst insu: Rate at which energy is stored in the thermal insulation (W)
Ėsupplied: Heat rate supplied to the kiln without thermal insulation (W)
Esupplied: Energy supplied during the heating process for the kiln without thermal insulation (J)
Ėinsu supplied: Heat rate supplied to the kiln with thermal insulation (W)
Esupplied insu: Energy supplied during the heating process for the kiln with thermal insulation (J)
g: Gravity acceleration (m s−2)
hc: Convection heat transfer coefficient (W m−2 K−1)
hr: Radiation heat transfer coefficient (W m−2 K−1)
GCI: Grid convergence index (-)
ke: Thermal conductivity on the east face of the control volume (W m−1 K−1)
kf: Thermal conductivity of the fluid (ambient air) (W m−1 K−1)
kf1: Thermal conductivity on the west face of the first control volume (W m−1 K−1)
kf2: Thermal conductivity on the east face of the first control volume (W m−1 K−1)
kinsu: Thermal conductivity of thermal insulation (W m−1 K−1)
kw: Thermal conductivity on the west face of the control volume (W m−1 K−1)
l: Representative mesh size (mm)
L: Characteristic length (height of the outer sidewall of the kiln) (m)
Linsu: Thermal insulation thickness (mm)
N: Number of control volumes (-)
NuL: Average Nusselt number (-)
Pr: Prandtl number (-)
qlost(base/ceiling): Heat lost by the base/ceiling assembly of the kiln (W)
qlost sidewalls: Heat lost by the sidewalls of the kiln without thermal insulation (W)
qin insu: Rate of energy entering the thermal insulation (W)
qconv insu: Heat flux lost by convection (W)
qrad insu: Heat flux lost by radiation (W)
Qin insu: Energy gain by the thermal insulation during the kiln heating stage (J)
RaL: Rayleigh number (-)
S: Source term (W m−3)
t: Time (s)
T: Temperature (°C)
Taverage insu: Average temperature of the thermal insulation (°C)
Tint: Surface temperature inside the kiln (°C)
Ts,ext: Surface temperature outside the kiln (°C)
Ts2,ext: Surface temperature outside the thermal insulation (°C)
Tamb = T∞: Ambient air temperature outside the kiln (°C)
Tf: Film temperature (°C)
TE: Temperature at the control volume E at time t + Δt (°C)
TP: Temperature at the control volume P at time t + Δt (°C)
T0P: Temperature at the control volume P at time t (°C)
TW: Temperature at the control volume W at time t + Δt (°C)
δxPE: Distance between the centroids P and E (mm)
δxWP: Distance between the centroids W and P (mm)
δxPe: Distance between the centroid P and face e (mm)
δxwP: Distance between the face w and the centroid P (mm)
Δx: Length of the control volume P (mm)
α: Thermal diffusivity (m2 s−1)
β: Volumetric expansion coefficient (K−1)
ε: Emissivity (-)
υ: Kinematic viscosity (m2 s−1)
ρ: Density (kg m−3)
σ: Stefan-Boltzmann constant (W m−2 K−4)
ϕ: Response variable (MJ)
ϕ21 ext: Extrapolated solution for the response variable used in the mesh convergence study (MJ)
Data Availability
The data used to support the findings of this study are included within the article.
"year": 2020,
"sha1": "a2dcb6b23160f538ba08cc64af557226c35fba89",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/amse/2020/6476723.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "90e167b3ac6c393198b0c6196d8926efa4034720",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Review of vegetation indices for studies of post-mining processes
Each phenomenon has its cause and effect. In research on post-mining processes, the causes can be found in the ongoing processes taking place after the end of mining exploitation; therefore, Geomonitoring is a very important aspect. Currently, the post-mining processes taking place all over the world should be considered as two groups: processes taking place in subsurface regions and processes on the surface of the Earth. Through integrated Geomonitoring it is possible to observe, among other things, the vegetation of mining areas. The observation of the state of vegetation is an aspect of research using remote sensing methods, e.g., vegetation indices. Of course, a reduction in plant vegetation may also be caused by other reasons. Therefore, an important task is to distinguish vegetation changes resulting from natural factors (climate change, long-term droughts or sudden weather phenomena) from those resulting from post-mining processes. This article presents the indices that have been developed and used in research on plant vegetation and that can be applied in post-mining studies.
The most important are data
Initial data are the starting point for studying every process, so it is always worth subjecting them to evaluation, analysis and interpretation [15]. The assessment of the quality of the preliminary data allows the characteristics of the data to be identified and the collected materials to be adapted to a given research project. In the case of remote sensing data, not all data are useful for analysis; some have distortions related to a given sensor. Therefore, before data collection is started, criteria should be defined on the basis of which the data will be assessed in terms of suitability for a given project. Possible criteria are:
- Spatial extent;
- Time resolution (revisit time);
- Cloud cover;
- Number and characteristics of spectral ranges (spectral resolution);
- Spatial resolution;
- Scale, in the case of maps.
Figure 1 presents the time spans of the individual satellite missions. The first data come from the 1972 Landsat 1 mission [17]. Of course, the first data are less accurate, but they are a "sentimental" source material, because thanks to them it is possible to visualize the surface as it was 50 years ago. It is noticeable that in recent years it has become possible to obtain satellite data from various sources, ranging from:
- NASA space missions: Landsat 7, Landsat 8, MODIS (Moderate Resolution Imaging Spectroradiometer);
- the joint space mission of NASA and Japan's Ministry of Economy: ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer);
- ESA Copernicus space missions: Sentinel 1, Sentinel 2, Sentinel 3 and Sentinel 5P.
As part of research on the observation of vegetation, one should distinguish the space missions that carry multispectral scanners. Figure 2 presents the spectral bands of the space missions. These instruments make it possible to calculate indices from the spectral bands. However, not all space missions are equipped with sensors appropriate for research on vegetation indices, so Figure 2 does not show data for Sentinel 1, Sentinel 3 and Sentinel 5P. They are used in other areas: Sentinel 1 for observation of Earth movements, Sentinel 3 for observation of the sea, and Sentinel 5P for the study of the composition of the Earth's atmosphere [27].
The spectral bands, i.e., the spectral resolution, of the individual space missions differ from one another. The oldest data, from Landsat 1 and Landsat 2, cover only visible spectral bands. Technological development has led to new instruments that cover new spectral bands. Landsat 3, which was equipped with the Multispectral Scanner (MSS) instrument, made it possible to measure thermal ranges. The Landsat 7, Landsat 8, Sentinel 2, MODIS and ASTER missions also provide spectral bands in the near-infrared (NIR) and short-wavelength infrared (SWIR). For vegetation research, the visible bands (red, green, blue) and the near-infrared band are the important range. Most vegetation indices take the above-mentioned spectral ranges into account in their formulations.
Another important aspect is the spatial resolution. Spatial or ground resolution means the size of the smallest object that can be distinguished by a sensor, corresponding to the ground pixel of a remote sensing image [32]. This is particularly important for the calculation of vegetation indices, as not all data have the same resolution. Therefore, before proceeding with a study, it is important to refer to Figure 3, which contains the spatial characteristics of the different satellite missions in relation to the frequencies of the spectral bands. The individual spatial resolutions range from 10 m for Sentinel 2 to 1000 m for MODIS. The higher the spatial resolution, the more accurate the resulting data will be.
The latest Landsat 7, Landsat 8, Sentinel 2 and ASTER data are characterised by the best fit of the spectral ranges, both in terms of spatial resolution and of the frequencies of the spectral ranges, for vegetation observations. Therefore, research in this area reported in the literature is mostly based on data from Landsat 7, Landsat 8 and Sentinel 2 [33]. Understanding the individual spectral bands of the satellites makes it possible to calculate the vegetation indices, which play an important role in the Geomonitoring of post-mining processes.
Vegetation Indices
Observation of the state of vegetation in a given area is one aspect of research work on the Geomonitoring of mining areas. The simplest form is the visual assessment of covered and uncovered areas. The results can be represented by a composition of the RGB spectral bands (red, green, blue), which can be used to identify vegetation degraded by long-term drainage [34].
In research on the monitoring of changes in vegetation, vegetation indices are used. Buczyńska [33] indicates four groups of indicators that can be observed in post-mining processes. These are indicators of: "vegetation, determining the water content, geological - enabling the identification of various types of rocks and materials, and intended for the detection of areas that have been damaged by fires".
The definition of the vegetation index was given by Jackson et al. [35]: "An ideal vegetation index would be highly sensitive to vegetation, insensitive to soil background changes, and only slightly influenced by atmospheric path radiance". Table 1 presents, in chronological order, a selection of vegetation indices that can be used in the Geomonitoring of post-mining processes. Table 1 also contains all the abbreviations of the indices used, as well as the formulas for their calculation. In this section, different classifications of vegetation indices available in the research literature will be presented.
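As a concrete reference for the formulas collected in Table 1, a minimal NDVI computation is sketched below. The band assignment (Sentinel-2 B4 as red, B8 as near-infrared) is an illustrative choice, not something prescribed by this review.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - RED) / (NIR + RED), computed per pixel.

    nir, red: reflectance arrays of identical shape, e.g. Sentinel-2
    bands B8 and B4; eps guards against division by zero on dark pixels.
    """
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)
```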
The first attempt to classify the vegetation indices was undertaken by Lautenschlager and Perry [36], who presented a classification into two categories. The first groups the indices that use the band 5 and band 7 channels (CH5 and CH7 in Fig. 4): AVI [37], NDVI7, PVI7, TVI7 and RVI7. The second groups the indices that use channels 4 and 6 (CH4 and CH6 in Fig. 4), as well as the indices containing three or four bands: NDVI6, PVI6, RVI6 and TVI6.
Figure 4. Idealized reflectance patterns of herbaceous vegetation and soil from 0.4 μm to … μm. Source: [38]
Bariou et al. [39] classified the vegetation indices according to the number of spectral bands combined. The first group is a combination of two spectral bands, while the second group is a combination of three or four spectral bands.
Huete et al. [40,41] classified the vegetation indices into two groups: ratio indices - SR [42,43], RVI [44,45], NDVI [46] as well as TVI [45] - and orthogonal indices - PVI [45] and GVI [47]. "The orthogonal indices are distinct from the ratio indices in that isolines of equal 'greenness' do not converge at the origin but instead remain parallel to the principal axis of the soil line. The first operate by direct measurement while the second work by indirect measurement. Hence, it can be noted that the difference between ratio indices and orthogonal indices is a difference in 'objective' between indices" [48, p.114].
NIRsoil = a · REDsoil + b, (1)
where a is the slope of the bare soil line and b is the intercept of the soil line [55].
Determining the slope and intercept of the soil line requires fitting a line through the cloud of points representing the soil in NIR-RED space (Fig. 5).
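A sketch of this soil-line fit, and of the perpendicular vegetation index that uses it, is shown below; the PVI form follows the classical perpendicular-distance definition, and the variable names are illustrative.

```python
import numpy as np

def soil_line(red_soil, nir_soil):
    """Least-squares soil line NIR = a * RED + b, fitted to the cloud of
    bare-soil pixels in NIR-RED space (cf. equation (1) and Fig. 5)."""
    a, b = np.polyfit(red_soil, nir_soil, 1)
    return a, b

def pvi(nir, red, a, b):
    """Perpendicular Vegetation Index: signed distance of a pixel from
    the soil line, PVI = (NIR - a * RED - b) / sqrt(a**2 + 1)."""
    return (nir - a * red - b) / np.sqrt(a**2 + 1.0)
```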
Figure 5. Vegetation spectral isolines in NIR-RED wavelength space as predicted by ratio, normalized difference and perpendicular vegetation indices. Source: [54]
The relationship between near-infrared and visible reflectance of bare soils is generally linear, and several vegetation indices have been developed using the coefficients of this relationship. The indices SAVI1 [54], SAVI2 [55], TSAVI1 [56], TSAVI2 [49], MSAVI1 [57], MSAVI2 [57] and OSAVI [58] attempt to account for the soil background, assuming that most soil spectra follow the soil line. A significant correction can be obtained when adaptation to the soil is performed [54]. The authors of [48] distinguish two groups of vegetation indices, referred to as the first and second generation of indices. The first generation of indices is based on empirical methods that do not take into account atmospheric effects, soil brightness or soil colour. This series was tied to the multispectral scanners mounted on the Landsat satellites and to clearly defined applications, without validating the coefficients for other regions. The second generation of indices offers significant improvements over the original indices thanks to mathematical as well as physical reasoning, i.e., phenomena explaining the interactions between electromagnetic radiation, the atmosphere, vegetation and the soil surface. These include PVI [40], SAVI1 [54], SAVI2 [55], MSAVI1 [57], MSAVI2 [57], TSAVI1 [56], TSAVI2 [49], ARVI [60], SARVI [53] and NDVI [46]. They are based on reflectance values corrected for sensor calibration and atmospheric effects [48].
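A sketch of the canonical member of this soil-adjusted family is given below; L = 0.5 is the soil-brightness correction proposed by Huete, and the TSAVI/MSAVI variants replace the fixed L with corrections derived from the soil-line coefficients a and b.

```python
def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index (Huete, 1988):
    SAVI = (1 + L) * (NIR - RED) / (NIR + RED + L).

    L -> 0 recovers the NDVI; L = 0.5 is the usual setting for
    intermediate vegetation densities.
    """
    return (1.0 + L) * (nir - red) / (nir + red + L)
```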
The classifications presented above show the scientific progress in the field of vegetation indices. In mining areas, mining activity can affect the vegetation through indirect environmental stress or direct damage [80]. Vegetation indices, which enable the detection of changes and trends in the vegetation of mining areas, are therefore a particularly important tool for the Geomonitoring of post-mining processes. Individual vegetation indices should also be considered in relation to the data available for the study area and from satellite missions. Not all indices listed in Table 1 are useful for studying post-mining processes. The selected indices from Table 1 that are not sensitive to biomass change, light soil colour, soil brightness or atmospheric changes, and can therefore be useful for vegetation observation in the Geomonitoring of post-mining processes, can be divided into 4 groups:
- The first group should be defined as basic, based on as few spectral channels as possible. These include NDVI [46], AVI [37], EVI1 [65], EVI2 [66], GNDVI [69], TTVI [52], NRVI [49], NDRE [71] and WDRVI [72].
- The second group comprises indices based on the soil line, such as SAVI1 [54], SAVI2 [55], TSAVI1 [56], TSAVI2 [49], MSAVI1 [57], MSAVI2 [57] and OSAVI [58].
- The third group are indices that take into account the influence of the atmosphere: ARVI [60], SARVI [60] and VARI [61].
- The fourth group consists of indices that study the chlorophyll content of vegetation: CARI [62], MCARI1 [63], MCARI2 [64], MCARI3 [64] and TCI [79].
The review of the world literature provides information on the current use of remote sensing data for the analysis of plant cover changes in post-mining areas. The most frequently undertaken studies concern the monitoring of the reclamation process, including the development of vegetation in degraded areas.
Raval et al. [86] attempted to estimate biomass production in a reclaimed coal mining area of Wise County (USA) on the basis of Landsat 5 satellite images for the period 2008-2010. The authors used the following indices in their study: NDVI [46], NDMI (Normalized Difference Moisture Index) [87], SVR (shortwave-infrared/visible ratio) [88] and MSR [68]. The authors of [80] conducted a study on monitoring changes in the Changhe River area (China) based on vegetation indices calculated from Landsat data over the period 2001-2013: SR [43], NDVI [46], SAVI1 [54], TSAVI2 [49], PVI [40], NLI [89], MSR [68] and TC greenness [90]. Padmanaban et al. [91] conducted a study on vegetation changes based on the NDVI [46] index calculated from Landsat data for 2013-2016. The authors indicated the location of two subsidence zones in the Kircheller Heide area (Germany); these phenomena were a consequence of a rising groundwater table.
Buczyńska and Blachowski [92] analysed changes in vegetation in the area of the closed Babina mine (Poland) using Landsat satellite images from the period 1989-2019. The authors used the following indicators in their publication: NDVI [46], GNDVI [69] and EVI1 [65].
The above case studies, which use individual vegetation indices in their calculations, confirm the legitimacy of using vegetation indices in the Geomonitoring of post-mining processes. This article presents a documented, structured procedure and a summary that aims to indicate which vegetation indices are justified in the observation of post-mining processes. According to Table 1, EVI1 is calculated as EVI1 = 2.5 · (ρNIR − ρRED)/(ρNIR + 6 · ρRED − 7.5 · ρBLUE + 1). EVI1 corrects the effects of atmospheric factors and soil signals at the same time, especially in areas with dense canopies. EVI2 is designed for sensors that do not include a blue channel, but is calibrated to achieve values similar to EVI1.
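Both enhanced indices can be sketched directly from these definitions; the EVI2 coefficient 2.4 follows the published two-band calibration against EVI1.

```python
def evi1(nir, red, blue):
    """EVI1 = 2.5 * (NIR - RED) / (NIR + 6 * RED - 7.5 * BLUE + 1)."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def evi2(nir, red):
    """Two-band EVI for sensors without a blue channel, calibrated to
    track EVI1: EVI2 = 2.5 * (NIR - RED) / (NIR + 2.4 * RED + 1)."""
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)
```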
According to Table 1, the RDVI is similar in its application to the SAVI; it suppresses the signal from bare ground and the sun. In contrast to the SAVI, its informative value is not very good in sparsely vegetated or dry areas. A further index noted in Table 1 uses the difference between the near-infrared and red wavelengths, along with the NDVI, to highlight healthy vegetation. It is described as insensitive to the effects of soil and sun-viewing geometry.
Conclusions
Post-mining processes on the Earth's surface have a negative impact on the surrounding environment, and Geomonitoring of these processes is a very topical issue. The observation of the state of vegetation using remote sensing methods is one aspect of Geomonitoring in post-mining areas. Remote sensing methods allow research problems to be addressed over large areas, which is an undoubted advantage of the approach. Before starting the research, it is necessary to become acquainted with the characteristics of the initial data, especially the spatial and spectral resolution. The best data characteristics, both in terms of spatial resolution and of the spectral ranges used to calculate vegetation indices, are offered by the Landsat 7, Landsat 8 and Sentinel 2 missions. These data are publicly available. One limitation that may occur is cloud cover. The higher the spatial resolution of the satellite data, the more accurate the obtained data will be. This study presented a chronological review of vegetation indices that can be used in post-mining research. The proposed groups of vegetation indices - basic, soil-line based, atmospheric and chlorophyll-content based - form the basis for further research in post-mining.
"year": 2021,
"sha1": "1b730079e73d355c3735e1dd467c20c429a85583",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/942/1/012034",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "1b730079e73d355c3735e1dd467c20c429a85583",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
Pseudomesotelomatous adenocarcinoma with three cases
Introduction
Pseudomesotelomatous adenocarcinoma is a rare, heterogeneous tumor with a poor prognosis. Its clinical, radiological, cytological and even histopathological findings are similar to those of epithelioid-type mesothelioma, and it is difficult to reach the diagnosis with conventional imaging modalities and routine pathologic examination. In this article, three cases from our clinic are presented, with the aim of drawing attention to the difficulty of differential diagnosis from mesothelioma and the methods used to achieve it.
Case 1
A sixty-five-year-old male patient was admitted to our clinic with complaints of cough and shortness of breath lasting 7-8 months. He was an active smoker with a forty-five pack-year history and had no asbestos exposure. 1000 cc of serohemorrhagic fluid was drained from the patient, and no malignant cells were detected on cytological examination. Thorax computerized tomography (CT) showed paraseptal emphysema in both lungs, volume loss in the right hemithorax, and advanced pleural effusion. On positron emission tomography (PET-CT), fluorodeoxyglucose (FDG) uptake (SUVmax 7) was present in plaque-like pleural thickening areas on almost all surfaces of the right hemithorax pleura, more pronounced in the middle and lower zones, together with FDG uptake (SUVmax 3.7-4.8) in several lymph nodes in the right subparatracheal and subcarinal areas of the mediastinum (Figure 1). Video-assisted thoracic surgery (VATS) was performed on suspicion of mesothelioma. Pathological examination of the wedge resection and partial parietal pleurectomy material taken from the right upper and lower lobes revealed a tumor with focal vascular invasion, in the form of multifocal small foci and nodulations on the visceral and parietal pleura. On immunohistochemical examination, TTF-1 (thyroid transcription factor-1), CEA (carcinoembryonic antigen) and CK7 (cytokeratin-7) were positive, while no staining was seen with CK20, CK5/6, Melan A, HMB45, WT1 (Wilms tumor) and calretinin (Figures 2A and 2B). Intracytoplasmic mucin staining was detected with PAS-Alcian Blue histochemical staining, and the primary tumor was evaluated as lung adenocarcinoma. In this patient, who underwent chemotherapy after pleurodesis, progression and pulmonary embolism developed after the third cycle. The patient died of respiratory insufficiency in the fifth month of follow-up.
Case 2
A sixty-year-old male patient had a drain placed by a thoracic surgeon after massive fluid was seen on the right side of a chest X-ray taken for shortness of breath. Partial decortication and pleurodesis were performed with VATS on the suspicion of malignancy in the fluid cytology. The biopsy revealed widespread mucinous adenocarcinoma infiltration of the visceral and parietal pleura. On immunohistochemical staining, WT-1, calretinin, thrombomodulin, CK20, CK5/6 and D2-40 were negative, while TTF-1 and CK7 were strongly positive and CEA was weakly positive (Figures 3A and 3B). The patient was referred to our clinic for treatment planning. He had a fifteen pack-year smoking history, had quit smoking thirty years earlier, and reported no asbestos exposure. PET-CT showed FDG uptake (SUVmax 6-7.6) in widespread pleural thickening areas, most evident in the upper and middle zones of the right hemithorax, as well as FDG uptake in right upper and lower paratracheal conglomerate lymphadenopathies (SUVmax 7) and in subcarinal (SUVmax 6) and left hilar (SUVmax 3.8) lymph nodes (Figure 4). The patient, who was scheduled for chemotherapy, did not accept the treatment and was lost to follow-up.
Case 3
A seventy-five-year-old male patient was referred to our clinic for dyspnea and involuntary weight loss. He was an active smoker with a one-hundred-thirty pack-year history and had no asbestos exposure. The patient had been hospitalized in our clinic for pleural effusion and infection six months earlier, and no malignancy was detected on thorax CT and fluid cytology. The effusion regressed with antibiotherapy, and the patient was discharged at the end of treatment with a follow-up appointment. Massive effusion was determined in the left hemithorax when the patient returned with increasing symptoms. On PET-CT, irregular FDG uptake (SUVmax 7.2-11) was present in areas of widespread linear-nodular thickening of the left hemithorax pleura (Figure 5). VATS revealed tumor cells composed of invasive glandular structures in the pleura, with mucinous material in the cytoplasm. Pleural fluid cytology was negative for tumor cells. On immunohistochemical staining, CK8/18, CD15, CEA and CK7 were positive, while D2-40, calretinin, WT-1, TTF-1 and CK20 were negative (Figure 6). Chemotherapy was started for the patient, who was diagnosed with invasive mucinous adenocarcinoma.
Discussion
Lung adenocarcinomas that resemble malignant mesothelioma in radiological, macroscopic and microscopic features but show immunohistochemical differences are called pseudomesotelomatous adenocarcinoma.1 The incidence was 0.46% among all cancers and 6% among diffuse pleural tumors.2 Pseudomesotelomatous carcinomas are very rare heterogeneous tumors, and cases with cell types other than adenocarcinoma have been reported. All of our cases had mucinous components.3 The clinical features of pseudomesotelomatous adenocarcinoma are similar to those of malignant mesothelioma. Lung adenocarcinoma usually occurs in non-smoker female patients with a nodular appearance; malignant mesothelioma, by contrast, mostly affects males with a history of asbestos exposure. Pseudomesotelomatous adenocarcinoma commonly occurs in males (83-94%), at a later age (mean age 63-68), in smokers (45-87%) and in people with asbestos exposure (21-76%).4 All three of our cases were male, with a mean age of 68.8 years; all had a smoking history but no asbestos exposure.
The most common radiological findings in pseudomesotelomatous adenocarcinoma are pleural fluid with or without a pleural mass, pleural thickening with a nodular or irregular surface, and circumferential pleural involvement.5 Most researchers have noted that it is difficult to distinguish with imaging methods such as chest X-ray, CT and magnetic resonance (MR) imaging. It is also difficult to distinguish primary lung carcinoma from mesothelioma with PET, which displays the metabolic characteristics through the SUV values of the lesions. PET-CT indicates a diffuse pleural tumor and may be helpful in determining the appropriate location for tissue biopsy. It also helps to exclude pleural metastasis from adenocarcinomas of other organs.6 In our cases, PET-CT showed FDG uptake in plaque-like widespread pleural thickening areas covering almost all surfaces of the hemithorax pleura, more pronounced in the middle and lower zones, as well as in several mediastinal lymph nodes.
Definitive diagnosis is not possible with cytologic evaluation of pleural effusion alone. VATS is often required to obtain a sample of adequate size for a definitive diagnosis.7 Histomorphological, histochemical and immunohistochemical findings should be evaluated together for the definitive diagnosis of pseudomesotelomatous adenocarcinoma. The specificities of TTF-1, CEA and CD15, the most useful markers in patients with adenocarcinoma, are reported as 100%, 97-98.8%, 93% and 67%, respectively. In malignant mesothelioma, TTF-1 is negative, and the specificities of calretinin and CK5/6 are 85-99.8% and 83%, respectively.8 The results of our cases were consistent with adenocarcinoma.
A correct diagnosis of the disease is important when making treatment decisions. According to the National Comprehensive Cancer Network (NCCN) guidelines, chemotherapy with platinum-based dual combinations is the standard approach in mutation-negative patients with lung adenocarcinoma. In mutation (EGFR, ALK, ROS1)-positive patients, new molecular targeted agents have become the standard first-line treatment. In advanced-stage malignant mesothelioma, the pemetrexed-platinum regimen has shown activity. Molecular therapy studies are underway, but no drug has yet shown efficacy.9 The clinical results of pseudomesotelomatous adenocarcinoma are not satisfactory because of its resistance to chemotherapy and radiotherapy.2,3 The average survival is eight months, similar to that of stage 4 non-small cell lung cancer (NSCLC) cases.10 In our only treated case, no response to treatment was obtained, and a survival of five months was achieved.
In conclusion, pseudomesotelomatous adenocarcinoma is a rare, heterogeneous group of tumors with a poor prognosis that can be confused with mesothelioma. A correct diagnosis is needed for appropriate treatment planning. Because the treatment approaches differ, it is important to evaluate adenocarcinoma cases for their eligibility for the newly available molecular therapies. Positron emission tomography is helpful for diagnosis, differential diagnosis and selection of the biopsy site. Immunohistochemical examinations are essential for a definitive tissue diagnosis.
"year": 2017,
"sha1": "35bbf5cc0d044035e66d53b80fe1336286b112b4",
"oa_license": "CCBYNC",
"oa_url": "https://medcraveonline.com/MOJCR/MOJCR-06-00174.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "8c9416ba9e3963582523697f5bb3f97fc4ce4ab0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Brain and Nasal Cavity Anatomy of the Cynomolgus Monkey: Species Differences from the Viewpoint of Direct Delivery from the Nose to the Brain
Based on structural data on the nasal cavity and brain of the cynomolgus monkey, species differences in the olfactory bulb and cribriform plate were discussed from the viewpoint of direct delivery from the nose to the brain. Structural 3D data on the cynomolgus monkey skull were obtained using X-ray computed tomography. The dimensions of the nasal cavity of the cynomolgus monkey were 5 mm width × 20 mm height × 60 mm depth. The nasal cavity was very narrow and the olfactory region was far from the nostrils, similar to rats and humans. The weight and size of the monkey brain were 70 g and 55 mm width × 40 mm height × 70 mm depth. The olfactory bulb of monkeys is plate-like, while that of humans and rats is bulbar, suggesting that the olfactory area connected with the brain of monkeys is narrow. Although the structure of the monkey nasal cavity is similar to that of humans, the size and shape of the olfactory bulb are different, which is likely to result in low estimation of direct delivery from the nose to the brain in monkeys.
Introduction
The efficient delivery of drugs to the brain is very important for drug therapy for brain disorders such as Parkinson's and Alzheimer's disease, multiple sclerosis, and autism spectrum disorder. In general, the blood-brain barrier (BBB) strictly inhibits the entry of hydrophilic drugs into the brain from systemic circulation after intravenous and oral administration. The BBB is a continuous cerebral vasculature with endothelial cells connected by tight junctions formed by cell adhesion proteins, including occludin. Endocytosis activity is very low in the cerebral vasculature [1]. These features result in the failure of brain disorder treatment by hydrophilic and high molecular weight drugs such as peptides.
Some reviews have summarized strategies to overcome poor systemic drug delivery to the brain [2]. One of them is a direct delivery route from the nasal cavity. Small drugs, peptides, and even genes and cells were reported to be delivered directly to the brain after nasal application in rats.
In our previous manuscripts, it was clarified that direct delivery from the nose to the brain (DDNB) enables two peptide drugs, oxytocin (MW: 1008 Da) [3] and CPN-116 (MW: 835 Da) [4], to act on the brain. Han et al. [5] showed that the concentration of plasmid DNA (7.2 kbase) in the brain after nasal administration was 10 times higher than the serum concentration, while after intravenous administration it was two orders of magnitude lower in the brain than in the serum. After nasal application of plasmid DNA encoding β-galactosidase, significantly higher expression was observed. According to Danielyan et al. [6], after mesenchymal stem cells and T406 human glioma cells stained with fluorescent dye are administered nasally, they can be observed in the thalamus, hippocampus, and cerebral cortex. After nasal administration of 3 × 10⁵ mesenchymal stem cells, 1000 cells were observed in the whole brain, indicating that 3% were successfully delivered. Similar results were described by Yu-Taeger et al. [7] and Galeano et al. [8].
In primary research, mice and rats have typically been used for in vivo studies on DDNB. However, the anatomy of the nasal cavity and brain and their connections in humans are different from those in animals. The data derived from animal studies should be carefully interpreted for extrapolation to humans. At the preclinical stage of the drug development process, monkeys are usually used to evaluate the pharmacokinetics and efficacy of drug candidates. The nasal cavities and brains of monkeys are expected to be more similar to those of humans than those of rats and mice. Invasive treatment is impossible in human studies, but determination of drug concentrations in the monkey brain is feasible. However, no manuscript has reported on the anatomy of the monkey brain and nasal cavity, while information on those of humans and rats is available [9][10][11][12]. In this study, to better understand the anatomy of the monkey nasal cavity, brain, and olfactory bulb, 3D data on the head of a cynomolgus monkey were obtained using X-ray computed tomography. Together with observations of the brain and the olfactory bulb, the anatomy of the brain and the nasal cavity of humans, monkeys, and rats was compared, and the influence of species differences on DDNB is discussed.
Materials and Methods
The skull and brain of the female monkey (5 years old) were provided by the New Drug Research Center, Inc. (Kobe, Japan). The head was taken from the monkey euthanized after another animal study by intramuscular ketamine and intravenous sodium pentobarbital followed by exsanguination. The animal study (study No.: 18905) was approved on 24 October 2018 by the Animal Care Council at the New Drug Research Center, Inc. (approval No.: 181019A). The study was conducted in accordance with relevant national legislation.
The acquisition of 3D data of the monkey skull was outsourced to JMC Corporation (Yokohama, Kanagawa, Japan), in which an industrial nanofocus CT scanner, Phoenix Nanotom M (GE Measurement & Control, Tokyo, Japan), was used. Images were obtained under the setup of the CT scanner, as indicated below. The 3D data were visualized with VGSTUDIO MAX 3.2 (Volume Graphics GmbH, Heidelberg, Germany).
Structure and Size of the Cynomolgus Monkey Nasal Cavity
Figure 1. (1) Horizontal section at 0 mm; (2) coronal section at 0 mm; (3) sagittal section at 0 mm.
Figure 2 shows horizontal sections at 2 mm intervals from the bottom (−8 mm) to the top (+10 mm) of the nasal cavity, which is very narrow. The paranasal sinus was confirmed to be a large round-shaped space just beside the nasal cavity. The sections from −8 to +10 mm indicate that the height of the nasal cavity is approximately 20 mm. According to the section at −6 mm, the bottom of the nasal cavity is wide (4 mm width). With increasing horizontal position, the nasal cavity becomes narrower. The sections at +2 mm, +4 mm, and +6 mm demonstrate that the structure of the upper part of the nasal cavity is complicated. This complication is likely due to nasal turbinates. Figure 3 shows coronal sections at 3 mm intervals from the nostril to the pharynx. Complex structures due to the nasal turbinates are observed in the six sections from −9 to +6 mm. Figure 4 shows sagittal sections of the nasal cavity at 1 mm intervals from the nasal septum (0 mm) to the side wall (+5 mm). These sections show that the width of the nasal cavity is 5 mm. It is noteworthy that a space extends from the brain to the upper right part of the nasal cavity (indicated by white circles). The space is visible in the 0 mm and +1 mm sections, but disappears in the +2 mm section, suggesting that it is very narrow (less than 2 mm wide).
Size of the Nasal Cavity and Relative Position with the Brain
The left part of Figure 5 shows the shape and size of the nasal cavity. As demonstrated by the structural data above, the dimensions of the nasal cavity are 5-6 mm width, 20 mm height, and 50 mm depth. These data also indicate that the olfactory region, the area around the cribriform plate connected to the olfactory bulb, is 40 mm away from the nostril. Efficient delivery of drugs to the olfactory area using a normal spraying device is likely difficult. On the right side of Figure 5, the sagittal cross-section of the monkey skull is shown. The dissected brain was placed at the original position to show the locations of the nasal cavity and brain. As shown in the photo, the brain is located at the upper right position of the nasal cavity.
Size of the Brain
Photos of the monkey brain taken from different angles are shown in Figure 6. The weight of the brain is 70 g. The dimensions of the brain are 45 mm height, 55 mm width, and 70 mm depth. The white circles indicate the right olfactory bulb. In the front view, the olfactory bulb has a plate-like structure. In the bottom view, the long olfactory tract connecting the olfactory bulb to the brain is shown. The appearance of the olfactory bulb is different from that of rats and humans.
Discussion
Monkeys have been used for some relevant in vivo studies. According to Kumar et al., after nasal spraying of progesterone, the concentration in the cerebrospinal fluid (CSF) in female rhesus monkeys was significantly higher than that after intravenous injection, although there were similar plasma concentrations [13]. A prostaglandin D agonist exhibited more potent pharmacological activity (sleep-inducing effect) in monkeys after nasal administration than after intravenous administration [14]. In the cynomolgus monkey, nasally administered interferon-β1b reached levels one order of magnitude higher in the olfactory bulb and trigeminal nerve than in other peripheral tissues. The brain distribution of interferon-β1b was visualized by autoradiography [15]. Iwasaki et al. investigated species differences in DDNB [16]. They used the ratio Kp,in/Kp,iv as an index of DDNB and clarified that the delivery of six compounds to the olfactory tract and the rest of the brain after nasal administration to monkeys was higher than that in rats, while the uptake by the olfactory bulb and trigeminal nerve was lower. The authors suggested that no mechanism offers better delivery in monkeys. A rare case of a human pharmacokinetic study was reported by Born et al. [17]. MSH/ACTH(4-10), vasopressin, and insulin were nasally applied to human volunteers, and the peptide concentrations in the lumbar CSF and serum were measured. There were much higher concentrations of MSH/ACTH(4-10) in the CSF after a 5 mg dose. Serum levels after 1 mg and 5 mg doses were similar to those in the placebo group.
Two cranial nerves connected to the nasal cavity, the olfactory and trigeminal nerves, are involved in the direct pathway from the nose to the brain [18]. The olfactory nerve extends from the olfactory bulb through small holes in the cribriform plate to the olfactory epithelium in the nasal cavity. Olfactory receptors are expressed at the terminal membranes of olfactory neurons. The role of the olfactory nerve is to transmit information on smells captured by olfactory receptors. The pathway in which the olfactory nerve is involved is called the olfactory route, through which it is feasible to deliver drugs to the cerebrum. In contrast, the trigeminal nerve extends from the pons to the nasal cavity. The route of the trigeminal nerve from the pons through the skull to the nasal cavity is much longer than that of the olfactory nerve. Drugs can be delivered directly to the pons and cerebellum through the trigeminal route. For this pathway, it is not yet clear how or through which route drugs are transported to the pons and cerebellum. Because the targets of many drugs acting on the brain are located in the cerebrum, this research focuses on the olfactory pathway. Figure 7 shows the location of the olfactory bulb and cribriform plate in the monkey skull. The sagittal section at 1 mm is shown on the left side. The connection between the nasal cavity and brain is indicated by a white square. On the right side, the section in the white square is expanded. The narrow space, indicated by a white dashed line, is shown in the skull. Based on its shape, the narrow space fits the olfactory bulb, as the inset photo shows. The cribriform plate is located between the olfactory bulb and the nasal cavity. The thickness of the cribriform plate is likely 3-5 mm. The size of the cribriform plate is a determinant of the efficiency of DDNB. Judging from the shape of the olfactory bulb, the area of the cribriform plate of the monkey is likely small, leading to the underestimation of DDNB.
Figure 8 shows a schematic representation of the locations of the nasal cavity and brain in rats, monkeys, and humans. In rats, walking quadrupedally, the nasal cavity is located in front of the brain and the spinal cord extends horizontally. In contrast, in humans, walking bipedally, the nasal cavity is translocated below the brain and the spinal cord extends vertically downward. The shift in the position of the nasal cavity relative to the brain likely represents evolutionary changes between rats and humans. The relative position of the nasal cavity in the monkey, walking bipedally at some times and quadrupedally at others, is possibly an intermediate between rats and humans.
Conclusions
The anatomy of the monkey nasal cavity is similar to that of humans and rats. The nasal cavity is narrow, and the olfactory region is distant from the nostril. However, the olfactory bulb differs: the monkey olfactory bulb is plate-like, whereas that of humans and rats is bulbar. This difference suggests a small olfactory region, possibly resulting in the underestimation of DDNB compared with humans. | 2020-12-24T09:13:48.296Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "c34e38b3d8e0f8b92612afc016633c541f83d3b4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4923/12/12/1227/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "194210f7c7a7ac6b8b5d34fb2c3b13a0f3b3f1f8",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
118921149 | pes2o/s2orc | v3-fos-license | Folded Inflation, Primordial Tensors, and the Running of the Scalar Spectral Index
I discuss folded inflation, an inflationary model embedded in a multi-dimensional scalar potential, such as the stringy landscape. During folded inflation, the field point evolves along a path that turns several corners in the potential. Folded inflation can lead to a relatively large tensor contribution to the Cosmic Microwave Background, while keeping all fields smaller than the Planck scale. I conjecture that if folded inflation generates a significant primordial tensor amplitude, this will generically be associated with non-trivial scale dependence in the spectrum of primordial scalar perturbations.
I. INTRODUCTION
The spectrum of primordial perturbations is unlikely to be strictly scale invariant or, equivalently, the scalar spectral index, $n_\mathcal{R}$, is unlikely to be exactly unity. Conversely, it is often assumed that $n_\mathcal{R}$ is itself independent of the wavenumber $k$. However, the first year's data from the Wilkinson Microwave Anisotropy Probe [WMAP] yielded two tantalizing hints of scale dependence in the perturbation spectrum: a lack of power at small $l$ and, when combined with other datasets, a weak preference for $dn_\mathcal{R}/d\ln k \neq 0$ [1]. Any hint that the primordial universe is non-vanilla [2] is of crucial importance, since this would constrain both inflation and competing scenarios of the early universe.

The evidence for scale dependence is tentative. At small $l$ the data is accurate, but the lack of power may be due to cosmic variance [3]. Conversely, the apparent evidence for $dn_\mathcal{R}/d\ln k \neq 0$ could easily be a statistical artifact, but if $dn_\mathcal{R}/d\ln k$ is close to the current central value we will soon know this with some certainty. Even if both effects are real, they may both be manifestations of the same underlying physics, or two different phenomena.

It is often said that a substantial value for $dn_\mathcal{R}/d\ln k$ is unlikely, since it suggests a significant departure from smoothness in the inflaton potential. Cosmological measurements probe a small range in $k$-space, corresponding to around 10 e-foldings of inflation, and putting a "feature" in the potential that affects scales inside this narrow window requires tuning. However, this paper presents a scenario where scale dependence is not only permissible, but expected.
My starting point is Lyth's observation that at high energies the total excursion made by the inflaton exceeds the Planck scale [4]. As Lyth points out, this is worrying if the inflationary potential has a stringy or supergravity origin: in these models the potential is steep when any field value exceeds the Planck scale. If the inflation scale is low (relative to the GUT scale), the inflaton evolves comparatively slowly, resolving the conundrum. The primordial tensor spectrum rises with the energy scale, and Lyth argued that if inflation is consistent with supergravity the tensor contribution must be very small. Suppose inflation is driven by several distinct fields, but only one field is typically evolving at any given time, so the overall inflationary trajectory follows a path with several "corners". We dub this model folded inflation. On the face of it, folded inflation is extremely contrived. However, in string theory the potential surface for the light scalar fields is now thought to be a complicated and rugged landscape. This is an unknown multi-dimensional function with as many as 500 scalar degrees of freedom [5]. The landscape has an exponentially large number of extrema, and it also has an exponentially large number of paths. Some small subset of these paths will be suitable candidates for folded inflation. During folded inflation the individual fields remain sub-Planckian but the total change in all the fields may exceed the Planck scale. The hope is thus that of the huge number of different "downhill" paths within the landscape, at least one of them can drive a cosmologically acceptable period of inflation without the need for additional tuning.
For folded inflation to work at high scales, Lyth's argument forces the presence of several corners during the last 60 e-folds of inflation. These corners will usually be associated with significant scale dependence in the perturbation spectrum. At a sufficiently high energy scale, any given 10 e-folding window in the spectrum will almost certainly contain a corner-induced feature. A similar argument for a scale-dependent spectrum is presented in [6,7]. For folded inflation, $dn_\mathcal{R}/d\ln k \neq 0$ is thus natural if inflation occurs at high energy densities.

In what follows, I review the connection between the evolution of the inflaton and the energy scale, and summarize the perturbation spectra in models with multiple fields. I show that the spacing between corners in the inflationary trajectory is correlated with the tensor amplitude. If the tensors are readily detectable, folded inflation leads to a significant value for $dn_\mathcal{R}/d\ln k$. Without a clear description of the stringy landscape, the discussion in this paper is necessarily more qualitative than quantitative. However, we identify lines of enquiry that will sharpen our understanding of the inflationary phenomenology of the overall stringy landscape.
II. THE LYTH BOUND
Lyth's argument is very simple, and we repeat it here in the notation of [1]. The scalar (density) and tensor perturbations for a single, slowly rolling field are
$$\mathcal{P}_\mathcal{R} = \frac{1}{24\pi^2}\,\frac{V}{\epsilon M_{Pl}^4}\,, \qquad \mathcal{P}_h = \frac{2}{3\pi^2}\,\frac{V}{M_{Pl}^4}\,.$$
Here $M_{Pl} = 2.43 \times 10^{18}$ GeV is the reduced Planck mass, and $\epsilon$ is the first term in the (potential) slow roll expansion,
$$\epsilon = \frac{M_{Pl}^2}{2}\left(\frac{V'}{V}\right)^2.$$
The scalar and tensor spectral indices are
$$n_\mathcal{R} - 1 = -6\epsilon + 2\eta\,, \qquad n_h = -2\epsilon\,,$$
where $\eta = M_{Pl}^2\,V''/V$ and, as usual, scale independent spectra correspond to $n_\mathcal{R} = 1$ and $n_h = 0$. The ratio of the tensor and scalar spectra is correlated with the slope of the tensor spectrum, leading to the slow-roll consistency condition
$$r \equiv \frac{\mathcal{P}_h}{\mathcal{P}_\mathcal{R}} = 16\epsilon = -8 n_h\,.$$
We know the amplitude of the density perturbation spectrum accurately from the 1-Year WMAP data; the quoted amplitude is conventionally normalized in terms of the CMB temperature $T$ in µK and evaluated at a fiducial wavenumber $k_0$. Since this will be a qualitative discussion we can safely omit uncertainties. Using the number of e-folds $N = \ln(a)$ as a time variable, $dN/dt = \dot{a}/a = H$, and in the slow-roll limit
$$\frac{d\phi}{dN} = -M_{Pl}^2\,\frac{V'}{V}\,.$$
The right hand side of this equation is proportional to $\sqrt{\epsilon}$ and thus $\sqrt{r}$, so
$$\left|\frac{d\phi}{dN}\right| = M_{Pl}\sqrt{\frac{r}{8}}\,.$$
During $\Delta N$ e-folds, $\Delta\phi$ is
$$\Delta\phi \approx M_{Pl}\sqrt{\frac{r}{8}}\,\Delta N\,.$$
For quasi de Sitter expansion, $\Delta N \sim \Delta\ln k \sim \Delta\ln l$, where $l$ labels the CMB multipole. Consequently, $\Delta N \approx 7.6$ over the range of scales probed by the CMB up to $l_{max} = 2000$, whereas $\Delta N \sim 60$ over the physically accessible portion of the inflationary era. Finding that $\Delta\phi \gtrsim M_{Pl}$ is a significant embarrassment if we also imagine that the inflationary potential is embedded in a supergravity model, or the stringy landscape. The potential acquires significant corrections when $|\phi| \sim M_{Pl}$, ensuring that $V'/V$ is large, ruining the flatness needed to support inflation. If $\Delta\phi \gtrsim M_{Pl}$, the field must evolve into a region where the potential has a substantial slope, telling us that our assumptions are not mutually compatible.
Lyth argued for a comparatively low value for the tensor amplitude, in order to ensure that inflation is consistent with supergravity. Lyth took the minimal detectable value of $r$ to be 0.07, and deduced that tensors are probably forever undetectable. Today we can be more optimistic: with heroic efforts $r \sim 6 \times 10^{-4}$ [10] or better [11] might be possible. For now, the observational bound on $r$ remains high, $r < 0.7$ in the 1-Year WMAP dataset [1]. If we assume that the maximum excursion allowed for the field is $\Delta\phi \sim M_{Pl}$ and $\Delta N = 60$ then we have $r \lesssim 0.002$, which is well beyond the sensitivity of Planck [12], but may be possible in the distant future.
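As a quick numerical check of this bound, saturate the excursion formula above with $\Delta\phi = M_{Pl}$ and solve for $r$:
```latex
% Worked check of the Lyth bound, using r = 16 \epsilon:
\Delta\phi = M_{Pl}\sqrt{\tfrac{r}{8}}\,\Delta N = M_{Pl}
\;\Rightarrow\;
r = \frac{8}{(\Delta N)^2}
\;\xrightarrow{\;\Delta N = 60\;}\;
r = \frac{8}{3600} \approx 0.002\,.
```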
III. FOLDED INFLATION
The analysis in the previous section implicitly assumed that inflation is driven by a single field, but paths in the stringy landscape can be folded into several different directions, such that $\sum_i |\Delta\phi_i| > M_{Pl}$ while each $|\Delta\phi_i| < M_{Pl}$.

The full expression for the perturbations produced by multiple scalar fields is [13]
$$\mathcal{P}_\mathcal{R} = \left(\frac{H}{2\pi}\right)^2 \sum_a \left(\frac{\partial N}{\partial\phi_a}\right)^2 \qquad (15)$$
if all the fields have canonical kinetic terms. The index $a$ runs over all the fields. The tensor amplitude is unchanged, since it is a function of the overall density. The consistency condition is now an inequality,
$$r \le -8 n_h\,. \qquad (16)$$
The inequality is saturated if the field is free to move in only one direction [14], possibly after a redefinition of the fields. Consequently, tensor modes are most likely to be detectable in the CMB if the inflationary potential has a unique "downhill direction". The spectral index can again be written in terms of derivatives of the potential.
To lowest order in the slow roll expansion, $n_\mathcal{R}$ depends on a mixture of first and second derivatives, whereas third derivatives appear in $dn_\mathcal{R}/d\ln k$. Looking at (15) we see why a saddle point in a multifield potential is often associated with a dramatic rise in the power spectrum [15]. Firstly, near a saddle, a small change in the field value in the "downhill" direction can cause a large change in the number of e-folds, leading to a large value for the relevant derivative in (15). Secondly, within some region around the saddle point there are two downhill directions, not one, leading to a further amplification of the density perturbation spectrum. This phenomenon is seen in the final phase of locked inflation [16,17], where a saddle in the potential leads to large perturbations over some range of $k$, with runaway black hole production in the post-inflationary universe.
Consequently, for folded inflation to produce a realistic cosmology, the functional form of the corners in the potential must ensure they do not lead to an overproduction of black holes. This is analogous to the tight constraints on the parameter range open to locked inflation [17]. In principle, one could design a folded inflation potential with a corner that did not lead to a detectable $dn_\mathcal{R}/d\ln k$ signal. However, this is an additional and unnecessary constraint as there is no strong phenomenological reason for stipulating that $dn_\mathcal{R}/d\ln k = 0$. The WMAP team's analysis of the inflationary implications of their results [1] included an analysis of inflation driven by the single-field potential [7]
$$V(\phi) = \frac{1}{2} m^2 \phi^2 \left[1 + c \tanh\left(\frac{\phi - \phi_s}{d}\right)\right]. \qquad (17)$$
This is not a multi-field potential, but the best-fit values of its parameters [1] lead to a spectrum that is clearly $k$-dependent. While $\epsilon$ remains small, the higher order slow roll parameters are large, with $|\xi| \sim 100$ near $\phi_s$, signalling both strong scale dependence and a breakdown in the slow-roll approximation.
Given that $\xi$ can take on large values over a small range of field values in the single field case, we expect that a generic corner in a random folded potential would correspond to a local feature in the perturbation spectrum. This is especially true if we are relying on purely combinatorial arguments to motivate the existence of a folded inflation potential embedded in the stringy landscape. Since the observational data accommodates a locally large value of $\xi$ (and its multi-field generalization) as well as $dn_\mathcal{R}/d\ln k$, insisting that the spectrum does not change significantly as the field point passes through the corner would amount to an ad hoc and needless tuning.
While folded inflation provides a way out of Lyth's constraint on the inflation scale, there is a correlation between the spacing of corners in the potential, and the tensor amplitude. Using data from both the CMB and large scale structure, we can probe the power spectrum over $\Delta\ln k \sim 10$. To ensure that the field point does not turn a corner as these scales leave the horizon, we need $r < 0.08$. This is a conservative bound, since it assumes that the observable part of the spectrum is exactly matched to the period of inflation between the corners. Moreover, as the results for potentials like (17) indicate, a localized feature in the potential leads to a broader feature in $k$-space. Also, any feature is smeared in $l$-space, since each CMB multipole samples a range of $k$-values. In fact, two corners in the potential could overlap when viewed in $l$-space. If we guess that the average separation of corners is roughly $M_{Pl}$ (or deduce the likely separation by studying the combinatorics of the landscape) we could estimate the likelihood that the spectrum contains overlapping corners, as a function of $r$.
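The origin of the $r < 0.08$ figure follows from the same excursion formula, if we take the corner spacing to be of order $M_{Pl}$ and require the field excursion across the observable window to fit between corners:
```latex
% Require the excursion over the observable window to stay below one corner spacing:
\Delta\phi = M_{Pl}\sqrt{\tfrac{r}{8}}\,\Delta N < M_{Pl},
\quad \Delta N \sim \Delta\ln k \sim 10
\;\Rightarrow\;
r < \frac{8}{10^2} = 0.08\,.
```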
A different scenario, assisted inflation [8], relies on many fields rolling simultaneously, which cooperate to produce inflation. In the stringy landscape, we could imagine starting many fields with the same initial value and letting them roll toward the center of the hypercube defined where all fields are sub-Planckian. There are two reasons why assisted inflation is unlikely to be realized within the stringy landscape. Cross-couplings between the fields tend to undermine assisted inflation [9], and while all the fields in the string landscape need not couple at tree level, we do expect each field to be coupled to some of the others. Secondly, assisted inflation requires a large number of fields to be prepared in roughly the same initial state, effectively reducing the dimensionality of the landscape. In the case of folded inflation, we have proposed the existence of a special path, but the usual objection to doing so is undermined by the combinatorics of a multidimensional potential. Reducing the effective dimensionality of the landscape undercuts the strength of these combinatorial arguments, making it unlikely that assisted inflation can be set up in this context. However, assisted inflation often possesses a formal attractor solution [18], so it can be written in terms of an effective field theory where only one field is evolving, suggesting that the inequality in (16) is saturated, providing a different mechanism by which multi-field models can evade Lyth's constraint on the tensor amplitude.
IV. SUMMARY AND FUTURE DIRECTIONS
The above qualitative analysis strongly suggests that if a) the inflationary potential is embedded within the stringy landscape or any other theory that contains supergravity, and b) the tensor to scalar ratio in the CMB satisfies $r \gtrsim 0.08$, then the scalar spectrum is likely to exhibit nontrivial scale dependence. This is effectively the multi-field generalization of Lyth's argument that $r$ is low if inflation is driven by a single field.

Conversely, we do not need $r$ to be large in order to observe a non-trivial $dn_\mathcal{R}/d\ln k$. For instance, [19] describes a model with significant scale dependence, associated with a multi-component field moving in the string moduli space, but at an energy scale low enough to ensure that there is no observable tensor contribution to the CMB. This is a quantitative analysis of a particular potential and trajectory that realizes folded inflation. Several other inflationary models previously discussed in the literature can be interpreted as folded models [20,21,22]. Burgess et al. [23] examine realistic inflationary models motivated by the KKLMT proposal [24], finding a possible tensor signal and models with a running index. This calculation applies to a specific model, but its conclusions overlap with those obtained on much more general grounds here.

Even if the inflationary trajectory does contain corners, there is no guarantee that this will have an observable impact on the spectrum. Inflation can last until the density is well below the GUT scale, so that the region of the spectrum corresponding to the corners in the trajectory lies far outside the present horizon. Secondly, while I argued that a generic corner will result in a non-trivial $dn_\mathcal{R}/d\ln k$, this outcome is not guaranteed. Furthermore, if corners produce localized features analogous to those associated with potentials like equation (17), the resulting spectrum will not have a "constant" running. In future work, I plan to examine the possible types of corners that could arise in an arbitrary multi-dimensional potential, using a multi-field generalization of Monte Carlo reconstruction [25]. Likewise, progress in understanding string phenomenology will give a better understanding of how many folded trajectories exist within the landscape.

Since the stringy landscape can easily support many possible inflationary trajectories, there is almost inevitably an anthropic element to this discussion. However, by requiring the phenomenological parameters that describe our universe, both at the astrophysical and particle levels, to be mutually consistent, it may still be possible to make qualitative predictions based on the overall properties of the landscape. For example, Arkani-Hamed and Dimopoulos [26] recently used landscape based arguments to identify particle physics signals correlated with the absence of low-energy supersymmetry. It will be interesting to examine anthropic bounds on the scale dependence of $n_\mathcal{R}$. If a large running is natural, an upper limit on $dn_\mathcal{R}/d\ln k$ should be saturated in our universe, using a variant of Weinberg's argument for a non-zero cosmological constant [27]. Anthropic bounds on the amplitude of the density fluctuations have been proposed [28,29] and generalizations of these arguments should lead to constraints on the scale dependence of $n_\mathcal{R}$.
In this paper, I have argued that when inflation occurs at high energies, a scale dependent spectrum is a natural result. This is in contrast to the usual theoretical prejudice against running in the scalar spectrum, and if this correlation is confirmed observationally it will provide circumstantial evidence for the existence of the stringy landscape. | 2019-04-14T02:52:56.962Z | 2004-07-06T00:00:00.000 | {
"year": 2004,
"sha1": "c6a6fd9f41f74d157bd43e954f0aac04119c505a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c6a6fd9f41f74d157bd43e954f0aac04119c505a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
271571757 | pes2o/s2orc | v3-fos-license | Using DeepLabCut-Live to probe state dependent neural circuits of behavior with closed-loop optogenetic stimulation
Abstract

Background: Closed-loop behavior paradigms enable us to dissect the state-dependent neural circuits underlying behavior in real-time. However, studying context-dependent locomotor perturbations has been challenging due to limitations in molecular tools and techniques for real-time manipulation of spinal cord circuits.

New Method: We developed a novel closed-loop optogenetic stimulation paradigm that utilizes DeepLabCut-Live pose estimation to manipulate primary sensory afferent activity at specific phases of the locomotor cycle in mice. A compact DeepLabCut model was trained to track hindlimb kinematics in real-time and integrated into the Bonsai visual programming framework. This allowed an LED to be triggered to photo-stimulate sensory neurons expressing channelrhodopsin at user-defined pose-based criteria, such as during the stance or swing phase.

Results: Optogenetic activation of nociceptive TRPV1+ sensory neurons during treadmill locomotion reliably evoked paw withdrawal responses. Photoactivation during stance generated a brief withdrawal, while stimulation during swing elicited a prolonged response likely engaging stumbling corrective reflexes.

Comparison with Existing Methods: This new method allows for high spatiotemporal precision in manipulating spinal circuits based on the phase of the locomotor cycle. Unlike previous approaches, this closed-loop system can control for the state-dependent nature of sensorimotor responses during locomotion.

Conclusions: Integrating DeepLabCut-Live with optogenetics provides a powerful new approach to dissect the context-dependent role of sensory feedback and spinal interneurons in modulating locomotion. This technique opens new avenues for uncovering the neural substrates of state-dependent behaviors and has broad applicability for studies of real-time closed-loop manipulation based on pose estimation.

Manuscript Highlights:
- Closed-loop system probes state-dependent behaviors at pose-modulated instances
- Bonsai integrates DeepLabCut models for real-time pose estimation during locomotion
- Phase-dependent TRPV1+ sensory afferent photostimulation elicits context-specific withdrawal responses
Introduction
State dependent behaviors in animals are influenced by internal or external factors that alter activity in a context dependent manner. Locomotion is a context dependent motor behavior enabling animals to navigate their environment by means such as walking, flying, or swimming. The dynamic nature of locomotion is facilitated by sensory inputs and spinal interneurons that modulate motor neuron activity to influence the temporal and alternating coordination of flexor and extensor muscles (Engberg & Lundberg, 1969; Grillner & El Manira, 2020; Harnie et al., 2022; Rossignol, 1996). To achieve coordinated and adaptable movements, these motor outputs must be responsive to incoming sensory inputs, such as unexpected external obstacles that may perturb locomotion. The stumbling corrective reflex, a flexor response in the swing phase and an extensor response in the stance phase, exemplifies this sensorimotor integration (Forssberg, 1979; Forssberg et al., 1977; Mayer & Akay, 2018; Wand et al., 1980). Studying the context dependent role of locomotor perturbations has been challenging due to molecular and technical limitations. This study aims to bridge this gap by providing a framework to couple nearly instantaneous pose tracking with pose modulated optogenetic stimulation at different phases of the step cycle.

Mice are excellent models for studying the neural mechanisms of locomotion due to the availability of genetic tools that allow for targeted manipulation of specific cell populations (Sjulson et al., 2016). Combining these genetic approaches with optogenetics enables researchers to control neuronal activity with high spatiotemporal precision (Boyden et al., 2005; Deisseroth, 2011). Optogenetic photostimulation occurs in real-time when light-sensitive proteins, expressed in genetically defined neuronal populations, are activated or inhibited by light (Yizhar et al., 2011). This technique offers the ability to target specific neurons at precise times, without inducing long-term compensatory mechanisms or neural rewiring that may occur with other techniques such as neuronal ablation or chemogenetics (Roth, 2016). Optogenetic manipulation of spinal cord neurons has been challenging due to the limited penetration of light through tissue. However, recent advances in optogenetic tools, such as the development of red-shifted opsins (Chuong et al., 2014; Klapoetke et al., 2014) and highly sensitive opsins (Mardinly et al., 2018), have enabled the manipulation of deeper populations of spinal interneurons. By integrating optogenetics with genetically encoded tools in mice, researchers are starting to dissect the role of specific neural circuits in locomotion with unprecedented specificity and temporal resolution (Kiehn, 2016; Kiehn et al., 2010).

Advances in open-source machine learning techniques for recording and quantifying animal behavior have paralleled improvements in the precision of cell targeting for refined neuronal manipulation. Supervised machine learning-based pose estimation tools, such as DeepLabCut (DLC) and SLEAP, require minimal human labeling to train a network to detect and track animal postures (Mathis et al., 2018; Pereira et al., 2022). DLC is particularly advantageous due to its ability to reliably capture user-defined features, such as individual hindlimb joints, using high-performance feature detection and sophisticated deep learning models to analyze hindlimb kinematics (Mathis et al., 2018; Nath et al., 2019). The exceptional tracking performance of DLC becomes even more valuable when integrated into event-based frameworks like Bonsai, which allow for the processing of multiple data streams in real-time (Lopes et al., 2015). By incorporating DLC pose tracking, Bonsai enables the creation of a closed-loop feedback system capable of triggering an LED pulse at specific pose-modulated instances, opening up new possibilities for studying the neural mechanisms underlying behavior (Kane et al., 2020; Lopes et al., 2015).

The confluence of genetic targeting, spatiotemporal manipulation, and machine learning has created an unprecedented opportunity for closed-loop feedback in neuroscience research. By leveraging these powerful tools, we can now investigate the causal relationships between neural activity and behavior with unparalleled precision and specificity. In this study, we demonstrate the feasibility of this approach by instrumenting DeepLabCut-Live (Kane et al., 2020), a real-time pose estimation system, and optogenetics to manipulate a specific population of primary sensory afferents during locomotion in mice. By using DeepLabCut-Live to track the animal's hindlimb kinematics and trigger optogenetic stimulation at specific phases of the locomotor cycle, we can probe the context-dependent role of these sensory afferents in modulating motor output and uncover the neural mechanisms underlying adaptive behaviors, such as the stumbling corrective reflex. This closed-loop system opens up new avenues for investigating the neural circuits underlying state-dependent behaviors and highlights the immense potential of integrating cutting-edge techniques from genetics, optogenetics, and machine learning in neuroscience research.
Animal housing, surgery, and behavioral experiments conformed to Rutgers University's Institutional Animal Care and Use Committee (IACUC protocol #: 201702589). All mice used in experiments were housed in a regular light cycle room (lights on from 08:00 to 20:00) with food and water available ad libitum.
Joint labeling for video tracking
Animals were anesthetized with 1.5%-2% isoflurane to remove fur over the right hindlimb and right side of the abdomen. Using a White Oil-Based Paint Marker (Sharpie), dots were placed directly onto the skin over five anatomical landmarks: iliac crest (IC), hip, ankle, metatarsophalangeal joint (MTP), and the tip of the second toe. DLC is a markerless posture tracking system, but explicitly marking the hindlimb joints improves the ease of manual labeling and model training speeds. Due to dermal slippage, the knee joint is not marked. Instead, the lengths of the femur and tibia bones were measured to triangulate the position of the knee post-hoc given the 2D coordinates of the hip and ankle joints.
Treadmill locomotion training and video recordings
Two days prior to experimentation, mice were trained once each day to locomote on the treadmill by gradually increasing the belt speed from 5-20 cm/s. On experimentation day, animals were habituated to the behavior room for at least thirty minutes prior to placement in the Digigait™ motorized treadmill (Mouse Specifics, Inc., Boston, MA). Once accustomed to treadmill locomotion at average walking speeds of 20 cm/s, at least 5 step cycles were captured perpendicular to the treadmill using a Promon U1000 monochrome high-speed camera (510113-00-0000, AOS Technologies AG, Switzerland). The AOS Technologies Imaging Studio software suite was utilized on a high performance Windows machine to control camera capture with 864 x 796 pixel resolution at 415 frames per second (FPS). Infrared lights were placed on either side of the treadmill behind the camera to illuminate the scene and reduce glare (Table 1).
Fiber optic probe implantation, optogenetic stimulation, and c-Fos validation
Optic probe implantation was performed at 8-12 weeks of age. The optogenetic probe surgery was performed as previously described (Smith et al., 2019). In brief, animals were anesthetized with isoflurane (5% initial, 1.5-2% maintenance) and shaved over the thoracolumbar region. While positioned in a stereotaxic frame, a longitudinal incision (~3 cm) was made over the T10-L1 vertebrae. The paraspinal musculature was removed to expose the spinal vertebra and the space between T12 and T13 was cleared to expose the spinal cord. Surgical staples were attached to T12 and T13 to provide a fixation point for the optic probe. A 400 µm core, 1 mm length fiber optic probe (Thorlabs) was then positioned over the exposed spinal cord, lateral to the midline, and secured to the surgical staples using layered dental cement (Ivoclar) and super glue (Krazy glue). The surgical site was closed with surgical staples, and the animals were allowed to recover for two weeks prior to behavior experimentation (Figure 1A).

To deliver LED stimulation, a Python-compatible pulse generator, the Pulse Pal (Sanworks), was connected to an LED driver (Thorlabs), which was in turn connected to a fiber-coupled LED (Thorlabs) (Table 1). The Pulse Pal was programmed to stimulate at 2.2 mW for 10 ms upon receiving a trigger signal from the host computer.

Optogenetic activation of TRPV1+ afferent terminals was assessed by delivering spinal photostimulation to anesthetized TRPV1 Cre ;Advillin FlpO ;R26 LSL-FSF-ChR2 (Ai80) mice and then processing spinal cord tissue for c-Fos. Mice were anesthetized for 1 hr prior to photostimulation. Unilateral photostimulation (2.2 mW, 10 ms pulses, 10 Hz for 30 min) was delivered to the spinal cord by positioning an optic fiber probe (400 µm core, 1 mm fiber length, ThorLabs) above the spinal cord surface. Following photostimulation, mice remained under anesthesia for a further 1 hr before transcardial perfusion with heparinized saline followed by 4% paraformaldehyde in PBS. Tissue was dissected and post-fixed in 4% paraformaldehyde at 4°C for 2 hrs. Transverse sections (50 µm thick) were collected using a vibrating microtome (Leica VT1000S) and processed for immunohistochemistry as previously described (Hughes et al., 2012). Sections were incubated in a cocktail of primary antibodies: chicken anti-GFP (1:1000, Aves), rabbit anti-c-Fos (1:1000, Synaptic Systems), and mouse anti-NeuN (1:2000, Millipore). Primary antibody labeling was detected using species-specific secondary antibodies (1:500). Sections were incubated in primary antibodies for 72 hrs and in secondary antibodies for 12-18 hrs at 4°C. All antibodies were made up in a 0.1 M phosphate buffer with 0.3 M NaCl and 0.3% Triton X-100.
Training DLC model for offline and online tracking
For offline tracking of hindlimb body parts, a DeepLabCut model was trained using a ResNet-50 backbone. A training dataset was prepared by adding 9 videos with an average of 2210 overall frames and extracting an average of 125 frames per video to manually label the five anatomical landmarks of interest. The dataset was shuffled and split 95:5 for training and testing, respectively. The network was trained for 300,000 iterations using the training subset, and then evaluated on the held-out testing subset. Using a confidence threshold of 0.9, we observed an average test error of 2.86 pixels and an average train error of 2.67 pixels, compared to human-provided annotations.
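For readers who want to reproduce this style of offline pipeline, each step above maps onto a call in the standard DeepLabCut Python API. The sketch below is illustrative rather than the authors' actual script; the project name, video paths, and split fraction (set via the config's TrainingFraction) are placeholders:

```python
import deeplabcut

# Create a project and register the training videos (paths are hypothetical).
config = deeplabcut.create_new_project(
    "hindlimb-offline", "lab",
    ["videos/mouse1.avi", "videos/mouse2.avi"],
    copy_videos=True,
)

# Extract frames for manual annotation, then label the five landmarks
# (IC, hip, ankle, MTP, toe) in DeepLabCut's labeling GUI.
deeplabcut.extract_frames(config, mode="automatic", algo="kmeans")
deeplabcut.label_frames(config)

# Build the train/test split (95:5 via TrainingFraction in config.yaml) and train.
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config, maxiters=300000)

# Evaluate against held-out human annotations; errors are reported in pixels.
deeplabcut.evaluate_network(config, plotting=True)
```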
Following this procedure, an additional model was trained for 24,500 iterations to track the 3x2 calibration grid to generate a pixel-to-millimeter conversion for every video recording. The training dataset was prepared by adding 5 videos with an average of 735 overall frames and extracting an average of 40 frames per video to manually label the six grid points of interest. The dataset was shuffled and split 95:5 for training and testing, respectively; using a confidence threshold of 0.7, we observed an average test error of 3.47 pixels and an average train error of 3.42 pixels, compared to human-provided annotations.
For real-time tracking of hindlimb body parts, a DeepLabCut model was trained using a MobileNet backbone. A training dataset was prepared by adding 6 videos recorded at 60, 125, and 415 FPS with an average of 887 overall frames and extracting an average of 100 frames per video to manually label the five anatomical landmarks of interest. The dataset was shuffled and split 95:5 for training and testing, respectively. The network was trained for 400,000 iterations using the training subset, and then evaluated on the held-out testing subset. Using a confidence threshold of 0.7, we observed an average test error of 1.73 pixels and an average train error of 1.59 pixels, compared to human-provided annotations. Using the export model function within DLC, this model was exported into the Protocol Buffer format (.pb file) for seamless integration into Bonsai.
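The export step corresponds to a single call in the DeepLabCut API; a minimal sketch (the config path is a placeholder):

```python
import deeplabcut

# Exports the trained network as a frozen TensorFlow graph (.pb) plus its
# pose configuration, which DeepLabCut-Live and the Bonsai DLC nodes can load.
deeplabcut.export_model("path/to/realtime-project/config.yaml")
```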
Offline pose analysis
Outputs from DLC were filtered and confidence thresholded. Pixel coordinates were converted to millimeters using calibration information obtained by a separate DLC model. The tracked body parts include the iliac crest (IC), hip, ankle, metatarsophalangeal joint (MTP), and the tip of the second toe. The position of the knee was inferred by measuring the lengths of the femur and tibia bones and then triangulating the knee position using a custom MATLAB script.
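The knee triangulation is a two-circle intersection: the knee lies at distance L_femur from the hip and L_tibia from the ankle. The authors' MATLAB implementation is not reproduced here; the Python sketch below shows only the geometry, and the choice between the two intersection candidates is an assumption that depends on the camera's coordinate convention:

```python
import numpy as np

def triangulate_knee(hip, ankle, l_femur, l_tibia):
    """Infer the 2D knee position from hip and ankle coordinates.

    The knee is one of the two intersection points of a circle of radius
    l_femur around the hip and a circle of radius l_tibia around the ankle.
    """
    hip, ankle = np.asarray(hip, float), np.asarray(ankle, float)
    d = np.linalg.norm(ankle - hip)               # hip-to-ankle distance
    a = (l_femur**2 - l_tibia**2 + d**2) / (2 * d)
    h = np.sqrt(max(l_femur**2 - a**2, 0.0))      # clamp numerical noise
    mid = hip + a * (ankle - hip) / d             # foot of the perpendicular
    perp = np.array([-(ankle - hip)[1], (ankle - hip)[0]]) / d
    k1, k2 = mid + h * perp, mid - h * perp
    # Pick the solution with the larger x-coordinate; which candidate is
    # anatomically anterior depends on the video's coordinate system.
    return k1 if k1[0] > k2[0] else k2
```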
The step cycle was analyzed using custom MATLAB scripts (https://github.com/tischfieldlab/Closed_Loop_Opto_Stimulation). In brief, local extrema of the toe position were used to identify phase boundaries, with the local maxima of the toe indicating the start of the stance phase and the local minima of the toe indicating the start of the swing phase.
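As a rough Python equivalent of this extrema-based segmentation (the published scripts are MATLAB; the minimum cycle spacing below is an illustrative, tunable guess):

```python
import numpy as np
from scipy.signal import find_peaks

def segment_step_cycles(toe_x, fps=415, min_cycle_s=0.1):
    """Return stance-onset and swing-onset frame indices from the toe trace.

    Following the paper's convention, local maxima of the toe position mark
    stance onset and local minima mark swing onset. `min_cycle_s` enforces a
    minimum spacing between detected events (an assumed value).
    """
    dist = int(min_cycle_s * fps)
    stance_onsets, _ = find_peaks(toe_x, distance=dist)
    swing_onsets, _ = find_peaks(-toe_x, distance=dist)
    return stance_onsets, swing_onsets

def normalize_cycle(trace, start, end, n=100):
    """Resample one step cycle onto a common 0-1 phase axis for averaging."""
    phase = np.linspace(0, 1, n)
    src = np.linspace(0, 1, end - start)
    return np.interp(phase, src, trace[start:end])
```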
Hardware
To record and quantify hindlimb kinematics with real-time manipulation of neural activity during treadmill locomotion, six primary sets of hardware were utilized: a high performance Windows machine, an OpenCV-compatible high-speed camera, infrared lighting, a motorized treadmill, a pulse generator, and optogenetics supplies (Table 1). An OpenCV-compatible camera is required to interface with Bonsai on a Windows machine. In this study, two cameras were purchased, one to record at high frame rates for offline pose estimation and a second with OpenCV compatibility for real-time pose estimation.
Software
All software tools have been compiled in Table 2. DeepLabCut-Live has three modes of operation: a standalone GUI (DeepLabCut-Live! GUI), or pretrained model integration into Bonsai or Autopilot. In this study, we coupled a pre-trained DLC model with Bonsai to enable a closed-loop feedback system that detects a pose, performs an operation, and returns a processed pose (Kane et al., 2020).
Experimental setup and validation of pose tracking
Prior to data collection, animals were anesthetized, shaved on their right hindlimb area, and marked with white oil-based paint at five anatomical landmarks: iliac crest (IC), hip, ankle, metatarsophalangeal joint (MTP), and the tip of the second toe (Figure 2A). Animals were allowed to recover from anesthesia and then trained to locomote on the treadmill apparatus. Once the animals were trained to walk at speeds of 20 cm/s, we proceeded to capture high-speed video.

To acquire video of a mouse walking on a treadmill, we used a motorized treadmill and positioned a tripod-mounted camera, oriented perpendicular to the direction of treadmill movement, approximately 1.5 m from the treadmill. To increase illumination of the scene, while minimizing animal discomfort, we positioned two infrared spotlights on either side of the camera and oriented them to focus on the treadmill. In each video recording, a calibration grid was placed in the field of view to convert pixels to metric units. The output of this behavioral experiment is a dataset of video files used for subsequent analysis (Figure 2B).

We collected 5 videos, consisting of a range between 5 and 27 step cycles per mouse, across 5 mice. Each video was trimmed to capture a minimum of 5 consecutive step cycles, forming a dataset of 9 videos. From this initial set of videos, we prepared a dataset totaling 1125 frames extracted from the captured videos, with each frame labeled by an expert annotator. We then trained a DeepLabCut model for 300,000 iterations to accurately infer the positions of the five labeled body parts. Evaluation of the body part coordinates inferred by the model showed that the model predictions were accurate, having a mean test error of 2.38 pixels on held-out data compared to human annotations (Figure 3A).
Offline analysis and validation of step cycle and toe tracking
The ability to accurately detect step cycle boundaries, as well as specific events within the step cycle, is a critical requirement for this closed-loop intervention. This is also important for post-hoc analysis of experiments concerning the effect of such interventions. A given step cycle begins with the stance phase (the time between initial paw contact and liftoff) and progresses into the swing phase (the time between paw liftoff and contact with the surface again). To eliminate variability in step cycle durations that may confound comparisons within and between groups, step cycles were normalized from 0 to 1. We analyzed the step cycle in wild-type animals at a treadmill belt speed of 20 cm/s, with no fiber implant, and observed an average step cycle duration of 260 ms, depending on the stride frequency of the individual mouse and consistent with previously published results (Leblond et al., 2003). We generated hindlimb skeletal stick plots to represent the progression of a step cycle using joint coordinates and phase boundaries. We also plotted the average trajectory of the toe coordinate over 10 consecutive normalized step cycles and found that in a normal animal, the toe height increased during the swing phase to an average peak height of 5.58 mm (SEM ± 1.25) (Figure 3B). These data provide a useful baseline characterization of the step cycle in wild-type animals without surgical manipulation.
Establish criteria for pose modulated stimulation
Offline analysis of step cycles from body part tracking data has the benefit of access to the entire time series of poses; however, a real-time closed-loop system only has access to the past and present (no future access). Additionally, offline analysis can tolerate heavier computational demands and long latency, while a real-time system must make fast decisions with only instantaneous pose information. We therefore set out to craft simple, low-latency heuristics that accurately infer specific step cycle events from instantaneous pose data.

Using the complete hindlimb skeleton video recordings, we inspected videos within and between animals to craft rules for identifying specific phases of the step cycle, such as the initiation of stance, stance, the initiation of swing, and swing. In this study, differences in the x-coordinates of individual joints define phase criteria, given the assumption that the animal is always horizontally oriented with its nose pointing towards the right edge of the video and the tail pointing towards the left of the video, and a coordinate system with the origin in the top right of the video. The initiation of stance occurs when the X-position of the MTP is greater than the X-position of the IC. Stance occurs when the X-position of the ankle is greater than the X-position of the hip. The initiation of swing occurs when the X-position of the ankle is greater than the X-position of the MTP. Swing occurs when the X-position of the toe is greater than the X-position of the IC (Figure 4A).
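Because these four criteria are simple pairwise comparisons on instantaneous x-coordinates, they translate directly into a few lines of code. A hypothetical Python rendering (the body part names, pose layout, and confidence cutoff are assumptions):

```python
# Pose is assumed to be a dict of body part name -> (x, y, confidence),
# in the paper's coordinate convention (origin at the top right of the frame).
PHASE_CRITERIA = {
    "stance_init": ("MTP", "IC"),     # x_MTP   > x_IC
    "stance":      ("ankle", "hip"),  # x_ankle > x_hip
    "swing_init":  ("ankle", "MTP"),  # x_ankle > x_MTP
    "swing":       ("toe", "IC"),     # x_toe   > x_IC
}

def phase_event(pose, event, min_conf=0.7):
    """True when the chosen step-cycle criterion is met with confident tracking."""
    a, b = PHASE_CRITERIA[event]
    xa, _, ca = pose[a]
    xb, _, cb = pose[b]
    if min(ca, cb) < min_conf:
        return False  # skip frames with low-confidence tracking
    return xa > xb
```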
Closed-loop system integration
We next modified our experimental apparatus to support a closed-loop stimulation experimental paradigm by adding hardware for optogenetic stimulation. First, a Pulse Pal v2 was connected to the acquisition computer via a USB cable. The digital output of the Pulse Pal was then connected to an LED driver, which was then connected to an LED. The output of the LED was routed via a fiber optic cable to the treadmill arena, where it terminated in a rotary joint. From there, a fiber optic patch cable can be connected between the rotary joint and the cannula previously implanted above the spinal cord.

To capture and control the camera acquisition, we used the Basler Ace 2 (a2A1920-160umPRO) camera and the Basler pylon camera suite to preview camera settings prior to real-time pose estimation acquisition. To analyze incoming data and make the decision to stimulate or not, we programmed an experimental workflow in the Bonsai-RX environment with the following packages: DeepLabCut Library, DeepLabCut Design Library, Pylon Library, PulsePal Library, and PulsePal Design Library. First, a Camera Capture node was configured to acquire images from the Basler camera at 150 FPS, and the images were routed to a Video Writer node to save the raw video stream to an AVI video file. Images from the Camera Capture node were also routed to a PredictPose node, which submits the input images to a DeepLabCut model for pose estimation and outputs the detected body part coordinates. Predicted body part coordinates were routed to a CsvWriter node, which writes these to a CSV file for later analysis. GetBodyPart nodes receive pose estimations and pick coordinates corresponding to the selected body parts (e.g., IC and MTP), selecting the X coordinate via Position.X nodes, which are combined via a Zip node and then subtracted from one another through a Subtract node. The result is converted to a boolean value through a GreaterThan node, comparing the result of the subtraction to zero, and the boolean result is then inverted by comparison to False in an Equal node. To prevent multiple triggers (i.e., one on each frame after a condition is met until the condition is no longer met), the output is filtered by a DistinctUntilChanged node, which only produces distinct contiguous results. To give the experimenter control over valid experimental periods where stimulation may be triggered, a Gate node is paired with a KeyDown node to allow triggering only within 30 seconds after the experimenter has pressed a keyboard key. When a sample is allowed through the gate, it is routed to a TriggerOutput node, which triggers the Pulse Pal to begin playing the stimulation sequence, and the timestamp of the gate signal is recorded via a CsvWriter node (Figure 4B).

For stimulation targeted to different step cycle events (e.g., stance initiation, stance, swing initiation, swing), we simply change the body parts compared by changing the body part name parameter in each of the two GetBodyPart nodes. Here, we collected a total of 16 videos, with 4 videos per mouse for each step cycle stimulation event, across 4 mice. Each video had a range between 10-14 single pulse stimulation triggers.
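The published pipeline is built graphically in Bonsai, but the same loop can be expressed in plain Python with the DeepLabCut-Live package, which may help readers follow the logic without Bonsai. This is an illustrative sketch, not the authors' code: the exported-model path, camera index, serial port, body part indices, and the Pulse Pal trigger call (per Sanworks' Python API; verify for your version) are all assumptions.

```python
import cv2
from dlclive import DLCLive
from PulsePal import PulsePalObject  # Sanworks Python API (assumed available)

dlc = DLCLive("exported-model/")      # folder produced by deeplabcut.export_model
pulsepal = PulsePalObject("COM3")     # serial port name is a placeholder
cam = cv2.VideoCapture(0)

IC, MTP = 0, 3       # row indices of IC and MTP in the pose array (assumed order)
armed = True         # in the Bonsai workflow, gated by the KeyDown/Gate pair
prev = False

ok, frame = cam.read()
dlc.init_inference(frame)             # warm up the network once

while ok:
    pose = dlc.get_pose(frame)        # rows of (x, y, confidence)
    met = pose[MTP, 0] > pose[IC, 0] and min(pose[MTP, 2], pose[IC, 2]) > 0.7
    # Rising-edge detection stands in for Bonsai's DistinctUntilChanged node.
    if armed and met and not prev:
        pulsepal.triggerOutputChannels(1, 0, 0, 0)  # fire the programmed pulse
    prev = met
    ok, frame = cam.read()
```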
Validation of closed-loop LED triggering
Closed-loop systems have strict latency requirements in order to ensure desired interventions can be delivered at the proper moment. Latency can arise from several sources (e.g., camera, pose estimation, pose analysis, stimulus delivery), and latency is additive across the experimental workflow. Additionally, the developed workflow must be effective and robust in detecting desired step cycle events and subsequently triggering stimulus delivery.

To validate the effectiveness of the Bonsai workflow, a pilot study was designed to test the accuracy and latency of real-time DLC tracking and TriggerOutput commands once the optogenetic stimulation criterion is satisfied (Figure 5A). A treadmill-trained wild-type mouse was placed on the motorized treadmill and the fiber-coupled LED was taped to the side panel of the treadmill, in view of the camera (Figure 5B). Video recordings captured the mouse completing at least ten consecutive step cycles. Within the workflow, the Gate node recorded each instance that the gate opened to enable LED stimulation via TriggerOutput once a key was pressed on the keyboard and the stimulation criterion was satisfied. Stimulation accuracy was then verified using the video file and CSV outputs of Bonsai to coordinate gate-open timestamps with camera capture timestamps, aligning these timestamps to their corresponding frames in the video file (Figure 5C).

Visual inspection to align when the LED is ON with an instance such as the initiation of swing, when the x-coordinate of the ankle exceeds that of the MTP, yields a latency estimate whose resolution depends on the frame rate. At 150 FPS, a frame is generated every 6.67 ms, and the LED may turn ON within this interval before it is visible in the video frame. True latency is determined by calculating the latency of real-time DLC tracking and the overall system latency, such as computer processing speeds and Pulse Pal trigger output speeds, and subtracting the delay between instance detection and the command to trigger the LED. This first-pass testing corroborates that the overall system is effective at triggering an LED to turn ON at pose-modulated instances and indicates its potential use to photostimulate genetically encoded tools in mice.
Validation of optogenetic activation of TRPV1 + primary afferent fibers
To validate our ability to successfully activate nociceptive primary afferent fibers, we anesthetized TRPV1 Cre ;Advillin FlpO ;R26 LSL-FSF-ChR2 (Ai80) mice and applied direct photostimulation (2.2 mW, 10 ms pulses, 10 Hz for 30 min) to the spinal cord by positioning an optic fiber probe (400 µm core, 1 mm fiber length, ThorLabs) above the spinal cord surface. Animals were maintained under anesthesia, then perfused transcardially with 4% paraformaldehyde 1 hr following photostimulation. Spinal cord sections from photostimulated segments were processed and immunolabeled to visualize EYFP (to label TRPV1-ChR2+ afferent terminals), the activity marker c-Fos (to label activated spinal cord neurons), and NeuN (to label dorsal horn neurons). In line with previous characterizations of TRPV1+ sensory afferents (Caterina et al., 1999, 2000; Samineni et al., 2017), EYFP expression was largely restricted to the superficial dorsal horn (LI-LII, Figure 1B). As expected from the location of EYFP+ terminals, we observed a significant increase in c-Fos+ neurons within LI-II ipsilateral to photostimulation (Figure 1B-C). Together, these data demonstrate our ability to optogenetically stimulate TRPV1+ primary afferent fibers within the dorsal horn of the spinal cord.
Characterizing the effects from photostimulation during specific step cycle events
We finally sought to evaluate the effects of in vivo photostimulation of TRPV1+ primary afferent fibers in the dorsal horn of the spinal cord in awake, behaving mice at different phases of the step cycle. TRPV1+ primary afferent fibers are known to transmit nociceptive information, and their activation has previously been shown to evoke nociceptive responses (i.e., withdrawal of the paw in response to pain) (Beaudry et al., 2017; Samineni et al., 2017). We hypothesized that stimulation of these neurons would cause reflexive paw withdrawal and interrupt the normal progression of step cycle dynamics.

To test this hypothesis, TRPV1 Cre ;Advillin FlpO ;R26 LSL-FSF-ChR2 (Ai80) animals were surgically implanted with a fiber optic probe positioned to illuminate the dorsal horn of the spinal cord at the L3-4 level (Figure 1A). Mice were allowed to recover and subsequently trained to walk on the treadmill at a belt speed of 20 cm/s. On the day of experimentation, mice were habituated to the behavior room and gently placed on the treadmill. Using the Bonsai workflow described in 3.4, we recorded animals walking while simultaneously evaluating the poses of the right hindlimb. After an animal successfully performed at least 5 step cycles, the experimenter pressed a key (via the KeyDown node) to allow the Gate node to enable pose-triggered photostimulation at the chosen step cycle event. Photostimulation consisted of a single 2.2 mW, 10 ms pulse. The data collected were then processed offline for analysis of step cycle and paw withdrawal responses.

As expected, animals were responsive to the photoactivation of TRPV1+ sensory afferents during treadmill locomotion (Figure 6A). Stick plots visualize the trajectory of each hindlimb joint during each stimulation event and highlight the elevation of the paw following stimulation and during the swing phase (Figure 6B). Variability between stick plot representations is attributed to a subject's individual stepping frequency and differences in responses between the step cycle stimulation events (i.e., prolonged paw elevation as opposed to brief responses).

Peak paw withdrawal is defined as the highest paw elevation response following photostimulation within the defined step cycle event. With a treadmill belt speed of 20 cm/s, by the time the right hindlimb strikes the ground to initiate stance, the left hindlimb is grounded and preparing to initiate swing. During optogenetic stimulation at the initiation of stance, the mice generated a brief paw withdrawal with an average elevation peak of 2.89 mm (SEM ± 0.46), compared to the average 0.03 mm (SEM ± 0.02) without stimulation. The subjects consistently responded with an elevation of the right hindlimb, and on many occasions, the mice used their left hindlimb to propel themselves forward, as if to normally start the swing phase, and then both hind paws were briefly elevated in the air. This reaction frequently caused the mice to generate a second paw elevation, as seen with the first two peaks in paw withdrawal heights, prior to readjusting their step and continuing with a more normal swing phase. During stimulation of the stance phase, the mice generated an average peak paw withdrawal height of 2.54 mm (SEM ± 0.85), compared to the average 0.14 mm (SEM ± 0.09) without stimulation. This stimulation generated variable responses in which the mice may briefly elevate the right paw, with the paw stuttering before coming in contact with the ground, or the right paw exhibits a large withdrawal elevation and, once it comes in contact with the ground, the mouse is prepared to continue with normal step cycles (Figure 6B-C).

At 20 cm/s, when the right hindlimb is preparing to initiate swing, the left hindlimb is grounded because it recently initiated stance. Optogenetic stimulation at the initiation of swing generates an initial average paw withdrawal peak height of 4.86 mm (SEM ± 1.42), compared to the average 0.99 mm (SEM ± 0.39) without stimulation. During this stimulation event, the mice frequently responded with increased knee flexion and an elevated right hindlimb, followed by an elevation of the left hindlimb, so that both hind paws are in the air until either hindlimb contacts the ground (Figure 6B-C). The combination of right and left hindlimb elevation increased the overall response time. Photostimulation within the stance phase and at the initiation of the swing phase generates similar responses, with elevation during the stimulation period followed by an extended duration and more elevated paw position prior to transitioning into a normal step. Lastly, during stimulation of the swing phase, the mice generated an average peak paw withdrawal elevation of 7.22 mm (SEM ± 0.83), compared to the average 3.51 mm (SEM ± 0.39) without stimulation. The first hump of the withdrawal elevation is the onset of a normal swing phase, and the following peaks are the elevations that occur due to stimulation (Figure 6C). Variation in the stimulation timing may be attributed to differences in stride frequency. Overall, these responses highlight the phase dependence of nociceptive withdrawal responses and, though not quantified here, point to an important role for left-right hindlimb coordination.
Discussion
4.1 A Novel Closed-Loop System for Probing State-Dependent Neural Circuits
Our study demonstrates a novel closed-loop system that integrates real-time pose estimation with optogenetic manipulation to probe state-dependent neural circuits during locomotion. By combining DeepLabCut-Live pose tracking with phase-specific optogenetic stimulation of nociceptive sensory neurons, we began to investigate how sensory inputs modulate locomotor output in a context-dependent manner. This approach addresses a key challenge in studying adaptive locomotor behaviors by allowing precise temporal control of neural manipulation based on the ongoing motor state.
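To make the closed-loop logic concrete, the following sketch outlines the core acquire-detect-trigger cycle. The `DLCLive` class is the published DeepLabCut-Live interface (Kane et al., 2020); the camera call (`grab_frame`), the TTL call (`send_ttl`), the model path, the keypoint indices and the likelihood cutoff are all placeholders, and the swing-initiation rule (ankle x-coordinate ahead of the MTP) follows Figure 5 rather than any documented default.

```python
from dlclive import DLCLive   # DeepLabCut-Live (Kane et al., 2020)

def grab_frame():
    """Placeholder: return the latest camera frame as a numpy image."""
    raise NotImplementedError

def send_ttl():
    """Placeholder: pulse the TTL driver that gates the LED."""
    raise NotImplementedError

ANKLE, MTP = 2, 3              # keypoint row indices; the real order is model-specific

def is_swing_initiation(pose, p_cutoff=0.6):
    # Swing-initiation rule from Figure 5: ankle x-coordinate ahead of the MTP.
    ankle, mtp = pose[ANKLE], pose[MTP]
    if min(ankle[2], mtp[2]) < p_cutoff:   # skip low-confidence detections
        return False
    return ankle[0] > mtp[0]

dlc = DLCLive("path/to/exported_model")    # hypothetical exported-model path
dlc.init_inference(grab_frame())           # first call warms up the network
while True:                                # one iteration per acquired frame
    pose = dlc.get_pose(grab_frame())      # array (n_keypoints, 3): x, y, likelihood
    if is_swing_initiation(pose):
        send_ttl()
```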
4.2 Phase-Dependent Effects of Sensory Stimulation on Locomotor Output
The results from our proof-of-principle experiments with TRPV1+ sensory afferent stimulation reveal intriguing phase-dependent effects on locomotor output. Photostimulation during both stance and swing phases initiated a paw withdrawal response, but the characteristics of these responses differed markedly depending on the phase of stimulation. When stimulation occurred during stance, with the paw on the ground, it elicited a brief withdrawal response with an average peak height of 2.54 mm (Figure 6C). In contrast, stimulation during swing, when the paw was already in the air, produced a more pronounced response. This swing-phase stimulation led to an extended withdrawal with an average peak height of 7.22 mm and was accompanied by increased knee flexion (Figure 6B-C).
These phase-dependent differences in response magnitude and kinematics align with our understanding of how sensory input is processed differently depending on the current state of the limb. The more pronounced response during swing, characterized by greater elevation and increased knee flexion, may reflect the engagement of protective reflexes similar to the stumbling corrective response (Mayer & Akay, 2018). This observation underscores the importance of state-dependent sensorimotor integration in shaping adaptive locomotor behaviors.
4.3 Bilateral Responses to Unilateral Stimulation: Implications for Spinal Circuit Coordination
Intriguingly, while our optogenetic stimulation was unilateral, as confirmed by c-Fos immunostaining (Figure 1B), we observed bilateral responses in the hindlimbs during stance and swing stimulations. Though we only tracked and quantified right hindlimb kinematics, visual inspection of video recordings demonstrated an initial withdrawal occurring in the stimulated (right) hindpaw, followed by elevation of the contralateral (left) hindpaw (Figure 6A). This bilateral response to unilateral stimulation highlights the intricate left-right coordination mechanisms within the spinal locomotor circuitry and suggests that sensory inputs can modulate locomotor patterns across both sides of the spinal cord, likely in a stimulation intensity- and phase-dependent manner. Indeed, previous work has shown that motor responses of one limb can be initiated by cutaneous stimulation applied to the contralateral limb (Gauthier & Rossignol, 1981; Perl, 1957). These contralateral responses are mediated by spinal cord commissural neurons that are responsive to cutaneous stimulation (Laflamme et al., 2023), as well as by descending serotonergic modulation (Abbinanti et al., 2012; Butt & Kiehn, 2003). The phase-dependent nature of this bilateral response further underscores the context-specific integration of sensory information within the locomotor circuitry, which may be influenced by both local spinal circuits and descending control.
4.4 Future Applications and Potential Enhancements of the System
The ability to manipulate specific neuronal populations at precise phases of the step cycle opens new avenues for investigating the neural control of locomotion. Our system's capacity to elicit and measure these phase-dependent and bilaterally coordinated responses demonstrates its utility in probing the complex interactions within spinal circuits during ongoing behavior. Future studies could leverage this approach to further dissect the neural mechanisms underlying interlimb coordination, potentially by combining our technique with physiological recordings of muscle (Chung et al., 2023; Pearson et al., 2005) and/or spinal interneurons (Lavaud et al., 2024). Moreover, the system's versatility allows for broader applications in studying state-dependent behaviors. The real-time pose estimation component also offers the potential to trigger manipulations based on more complex behavioral events or postures, extending beyond simple phase-based criteria.
To further enhance the system's capabilities and reduce latency, several improvements could be implemented. Incorporating a forward prediction filter, such as a Kalman filter, could help compensate for processing delays by anticipating future poses (Kane et al., 2020). Optimizing camera frame rates and resolution, as well as utilizing high-performance computing hardware, could further minimize overall system latency. Additionally, expanding to multi-camera setups for 3D motion analysis would provide a more comprehensive view of the animal's behavior and potentially improve the accuracy of pose estimation and phase detection by mitigating occlusions.
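As a sketch of the forward-prediction idea, a constant-velocity Kalman filter can track one keypoint coordinate and extrapolate it ahead by the measured processing latency. This is a minimal textbook filter, not the implementation of Kane et al. (2020); the noise parameters are illustrative guesses that would need tuning.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 1-D constant-velocity Kalman filter for one keypoint coordinate."""
    def __init__(self, dt, q=1.0, r=4.0):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition (pos, vel)
        self.H = np.array([[1.0, 0.0]])               # we observe position only
        self.Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                               [dt**3 / 2, dt**2]])   # process noise
        self.R = np.array([[r]])                      # measurement noise (px^2)
        self.x = np.zeros((2, 1))
        self.P = np.eye(2) * 100.0

    def update(self, z):
        # Predict one frame ahead, then correct with the new measurement z.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        y = np.array([[z]]) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P

    def forecast(self, latency):
        # Extrapolate the pose forward by the measured processing latency (s).
        return float(self.x[0, 0] + self.x[1, 0] * latency)
```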
4.5 Conclusion: Bridging Cellular Manipulations with Complex Behaviors
Our integrated DeepLabCut-Live (Kane et al., 2020) and optogenetics approach provides a powerful new tool for probing the neural circuits underlying adaptive motor behaviors. By enabling precise, state-dependent manipulation of neural activity, this system helps bridge the gap between cellular-level manipulations and complex behavioral outputs. The phase-dependent and bilaterally coordinated responses we observed highlight the complex nature of sensorimotor integration during locomotion and demonstrate the potential of this approach in unraveling the intricacies of motor control. As the field of neuroscience continues to emphasize the importance of studying neural circuits in the context of naturalistic behaviors, tools like the one presented here will be crucial in advancing our understanding of how the nervous system generates and modulates adaptive behaviors.
Figure 2 - Experimental setup for video recording. (A) Prior to recording, labels are placed over five anatomical landmarks: iliac crest (IC), hip, ankle, metatarsophalangeal joint (MTP), and the tip of the second toe. The tibia and femur lengths are measured to triangulate the knee joint post-hoc. (B) A camera is mounted perpendicular to the treadmill to record sequences of images of an animal performing the task. In each recording, a calibration grid is placed in the field of view to convert pixels to metric units post-hoc. Recordings are stored on a computer as AVI files for post-hoc kinematic analysis.
Figure 3 - Pipeline for video analysis. (A) A DeepLabCut model is trained with manually labeled key points from a minimal subset of extracted video frames. Following training and testing, the DeepLabCut model produces and saves keypoint labels for every frame of the video recording for post-hoc analysis. (B) Using keypoint coordinates from (A), custom MATLAB scripts were generated to measure the X and Y coordinates of keypoints. Example data show a stick plot with each keypoint labeled and the toe height tracked across a single step cycle (normalized 0-1).
Figure 4 - Pipeline for closed-loop system integration. (A) Video recordings are examined to establish user-defined thresholds for optogenetic stimulation. Here, the relative x-coordinates of keypoints were used to define specific phases of the step cycle, namely: Stance initiation, Stance, Swing initiation, and Swing. (B) Bonsai workflow for user-defined optogenetic stimulation. Bonsai uses camera-captured video acquisition coupled with a pre-trained DeepLabCut model to predict body part coordinates in real time. Using these coordinates, user-defined instances (A) are detected to trigger optogenetic stimulation.
Figure 5 - Validation of closed-loop triggering. (A) Example user-defined instance to identify swing initiation (Ankle > MTP). (B) Behavioral setup is as in Figure 1, with the addition of an optic fiber in the camera field of view. Here, the Bonsai workflow is integrated into the behavioral setup to provide an external command to a TTL driver that triggers LED activation at user-defined instances. (C) Example visual inspection of closed-loop optogenetic stimulation at a swing initiation. Right: instance detection of the open Gate and camera timestamps are compared to validate closed-loop triggering.
Figure 6 - Closed-loop optogenetic stimulation of nociceptive primary afferents during locomotion. (A) TRPV1-Cre; Advillin-FlpO; R26 LSL-FSF-ChR2 (Ai80) mice were implanted with an optic probe at the L3-4 spinal cord level. During treadmill locomotion (20 cm/s), user-defined instances were used to trigger optogenetic stimulation at specific phases of the step cycle: Control - no stimulation; Stance initiation; Stance; Swing initiation; Swing. Timing of optogenetic stimulation is depicted with a blue outline surrounding the frame. (B) Representative stick plots for each instance with key points labeled. (C) Quantification of paw height tracked across the step cycle (normalized 0-1) and maximum paw withdrawal height in control (gray, no optogenetic stimulation) and experimental (blue, optogenetic stimulation) steps. The average optogenetic stimulation time and SEM are indicated by vertical lines. Paired t-tests; Stance initiation, **p = 0.0097; Stance, ns p = 0.0804; Swing initiation, ns p = 0.1192; Swing, *p = 0.0170. n = 4 mice.
Table 2. Software specifications for offline and real-time pose estimation as outlined in blue.
| 2024-08-01T13:11:59.463Z | 2024-07-29T00:00:00.000 | {
"year": 2024,
"sha1": "81aa8a7894dfacdf14aed47d4f509a7b26e6fb0f",
"oa_license": "CCBYNCND",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11312470",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "4aa79c954b727aa349627f247ad7f254ca47df58",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
35827036 | pes2o/s2orc | v3-fos-license | Schistosomiasis Mansoni in Bananal (State of São Paulo, Brazil). I. Efficiency of Diagnostic and Treatment Procedures
Bananal is an important focus of Schistosoma mansoni in the State of São Paulo. Accordingly, a programmed active search for human cases, annual coproscopic surveys and treatment of infected cases were started in 1998, aiming at producing a sharp drop in the prevalence rate by the year 2000. S. mansoni eggs were searched for in two Kato-Katz slides per patient. Cases were followed up according to the routine of the local Family Health Program. In 1998, 130 samples out of 3,860 showed S. mansoni eggs; in 1999, 105 out of 3,550; and in 2000, 64 out of 3,528. Prevalence rates were 3.4%, 2.9%, and 1.8%, and average egg-counts 59, 64, and 79 eggs per gram of feces, respectively. Prevalence rates decreased steadily after treatment, but persistently positive cases showed no significant decrease in parasite burdens. Egg count variation depended on sex and age bracket. Persistent residual cases admittedly preclude the eradication of this infection by only searching for and treating carriers. In addition, resistance to therapy and the low sensitivity of fecal examinations cannot be ignored. Moderate to heavy worm burdens, frequently associated with hepatomegaly elsewhere, produced no serious cases in Bananal.
The municipality of Bananal is situated in the eastern part of the State of São Paulo, latitude 22º40'44"S and longitude 44º19'08"W, at 560 m above sea level. It has an area of 615 km². About one half of the estimated 16 thousand inhabitants live in the urban areas. In the year 1975, the discovery of six cases of schistosomiasis mansoni in the municipality was announced by Piza (1976). However, Corrêa et al. (1962) and Ramos and Piza (1971) had already found planorbid vectors in that municipality. Even though the collection of infected snails was at that time a requisite for admitting the transmission of Schistosoma mansoni at a site, it is presumable that cases of schistosomiasis in Bananal had been known earlier to the personnel of the Campanha de Combate à Esquistossomose (CACEsq) - Campaign to Combat Schistosomiasis - subsequently known as Superintendência de Controle de Endemias, Sucen (Superintendency of Control of Endemic Diseases).
In a recent paper, Teles (2001) gave an account of the cases diagnosed in Bananal from 1979 onwards, with the aid of stool examinations performed at Sucen laboratories. The author stresses that the transmission of schistosomiasis in the municipality of Bananal is now practically restricted to the urban areas and that the annual rate of notification of new autochthonous cases has varied, mostly for reasons only indirectly associated with the implementation of the control program and, to a lesser extent, because of an actual variation in the risks of transmission of the parasite.
There is a broad consensus that S. mansoni was introduced into Brazil from Africa with the slave trade, as supported by Lutz (1919), but a controversy persists about the time at which the first foci of S. mansoni were established in Bananal. Machado (1977) was perhaps the author mainly responsible for the diffusion of the idea that, as a result of internal migration, schistosomiasis mansoni spread rather recently across the Brazilian territory towards São Paulo and other Southern states. Piza et al. (1959) hypothesize that S. mansoni possibly reached the valley of the river Paraíba for the first time with the arrival of infected workers from Northeastern Brazil, who took part in the construction of the Rio-São Paulo highway in the early 1920s, lived in military camps during the Constitutionalist Revolution, or carried out repair works on the Estrada de Ferro Central do Brasil (Central Railroad of Brazil) that lasted until the end of the following decade. Silva (1983) and Chieffi and Waldman (1988) admit that foci of S. mansoni already existed during the mid 1800s, before slavery was abolished. At that time, a large slave contingent migrated to São Paulo to work in coffee plantations, the most economically important activity until the mid 1900s. In this context, Silva (1983) suggests that, as the municipality of Bananal remained outside the path of relatively recent migratory pressures, schistosomiasis likely became endemic in that region earlier than might be surmised.
Concerning the intermediate hosts, Teles (1996) reported that, besides Biomphalaria straminea, found in one water body, B. tenagophila is mostly responsible for the transmission of S. mansoni in Bananal.
In view of the persistence of transmission of schistosomiasis and its detrimental effects upon the health of local inhabitants, a specific plan was devised in 1998, as a result of collaboration between Sucen and local health authorities, to strengthen the actions against schistosomiasis in Bananal. A reduction of its prevalence to less than 1% by the year 2000 was among the aims of this plan, which included a parasitological survey, the treatment of infected people, and general prophylactic measures such as improvement of the water supply and sewer systems. The object of this paper is to evaluate the efficacy of such measures and the current likelihood that serious long-term sequelae will develop.
MATERIALS AND METHODS
Yearly examination of fecal samples from residents of the urban area of the municipality of Bananal was programmed to detect and eventually treat subjects infected with S. mansoni. The Sucen staff was responsible for the distribution of containers and the examination of fecal samples. Field work was done in one district at a time, beginning with those with the highest prevalence records.
Microscopic diagnosis of S. mansoni was performed using the quantitative direct examination technique described by Katz et al. (1972), known as the Kato-Katz method. We examined two preparations per stool sample, as recommended by the WHO (1985). The results are given in terms of eggs per gram (epg) of feces. All subjects diagnosed as infected with S. mansoni were treated at the local outpatient service with oxamniquine (Mansil®) not later than 10 days after receipt of results. After treatment, the subjects remained under observation to check for any reaction.
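For illustration, the conversion from slide counts to epg could be computed as below. The 41.7 mg template (multiplier 24 per slide) is the usual Kato-Katz convention and is an assumption here, since the paper does not state the template size; note that one egg found across two such slides corresponds to the 12 epg detection floor mentioned in the Discussion.

```python
def epg_kato_katz(counts, template_mg=41.7):
    """Eggs per gram of feces from Kato-Katz slide counts.

    counts      : list of egg counts, one per slide (two per sample here).
    template_mg : mass of feces per slide; 41.7 mg is the usual template,
                  giving the familiar multiplier of ~24 per slide (assumed).
    """
    factor = 1000.0 / template_mg          # ~24 eggs/g per egg counted
    return factor * sum(counts) / len(counts)

print(epg_kato_katz([2, 3]))   # ~60 epg, near the 1998 average of 59 epg
print(epg_kato_katz([1, 0]))   # ~12 epg: one egg on two slides, the detection floor
```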
RESULTS
The urban area of the municipality of Bananal, where our work was done, is shown in the Figure. The course of the river Bananal within the urban area separates the districts of Niterói, Laranjeiras and Cerâmica, on the left-hand bank, from Vila Bom Jardim, Centro and Palha on the opposite bank.
Table I lists the numbers of samples examined and the proportions of positive cases per year. Until 1997 the periodicity of stool examinations had been irregular. The plan to enhance the activities of prophylaxis and control of schistosomiasis included a more extensive collection of fecal samples.
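As a sketch of how the yearly prevalences could be summarized with a measure of uncertainty, the snippet below computes 95% Wilson score intervals from the counts given in the abstract; the paper itself reports only point prevalences, so the intervals are an illustrative addition.

```python
from math import sqrt

def wilson_ci(pos, n, z=1.96):
    """Proportion with its 95% Wilson score interval."""
    p = pos / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return p, centre - half, centre + half

for year, pos, n in [(1998, 130, 3860), (1999, 105, 3550), (2000, 64, 3528)]:
    p, lo, hi = wilson_ci(pos, n)
    print(f"{year}: {100*p:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f}%)")
```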
Table II lists the numbers of stool examinations and the proportions of positive cases per district and per year for the period from 1994 to 2000.
Table III lists the same data for the years 1998, 1999 and 2000, plus the results of egg-counts. These data suggest that the incidence of this parasitosis was higher in Fazenda Três Barras and Palha in 1998, in Vila Bom Jardim and Laranjeiras in 1999, and in Centro, Bom Jardim and Palha again in 2000. On account of some rather high egg-counts, the occurrence of hepatomegaly cannot be ruled out. No evidence of a positive association between prevalence and egg-counts was found. In some cases, low proportions of positives coincided with high egg-counts.
During the three years in which the plan has been in operation, the active search for infected people has increased significantly: the number of examinations performed amounted to 70% of those performed during the whole period from 1994 to 2000, and 47% of the new cases were diagnosed and treated, as shown in Table II. More than half of the cases diagnosed since 1994 came from a locality known as Fazenda Três Barras and from the districts of Palha and Cerâmica, where the highest prevalences of the municipality are found.
The distribution of cases by age bracket and sex is shown in Table IV. The highest proportions of positivity and egg-counts were found in male subjects. Females, whose egg-counts had been decreasing, showed a sharp increase in prevalence, while only a slight increase was observed in males during the year 2000. Infected males make up about three-quarters of all cases diagnosed between 1998 and 2000. During this period, the male/female proportion remained practically invariable, but average egg-counts were highest in males of all age brackets. Egg-counts of less than 100 epg predominate; 13 to 16% correspond to moderate infections. One case observed in the year 2000 had epg values compatible with serious disease.
Clinical examination revealed no hepatosplenic abnormalities attributable to schistosomiasis. Upon registering at the Programa Saúde da Família (PSF) - Family Health Program - all patients were given oxamniquine (Mansil®) not later than 10 days after being diagnosed. Inspection of records revealed 7 cases with positive results in both 1997 and 1998; 6 in 1998 and 1999; 3 in 1999 and 2000; and 2 in 1998 and 2000.
DISCUSSION
On inspection, records of the cases entered since 1979 indicate that the transmission of schistosomiasis in Bananal has been stable. Teles (2001) presumes that variations in the number of new cases since then might have been predominantly due to a more extensive use of diagnostic means and only to a lesser degree to a change in the epidemiological picture.
Some discontinuities in the distribution of stool samples examined per district since 1994 are observed, which changed the records of new cases detected. During the late 1980s the program for controlling schistosomiasis was changed by Sucen. From that time onwards, the population samples under study were surveyed with a periodicity that depended on the prevalence observed during the previous year. It is pertinent to the matter under discussion that, somewhat earlier in that decade, the fight against vectors of resurgent and emergent diseases, such as yellow fever and dengue, took priority over the epidemiological vigilance against chronic endemic infections such as schistosomiasis. This tendency resulted in serious operational problems, primarily due to a paucity of financial support. Another drawback to a proper attitude towards the problem of schistosomiasis might be attributed to a misinterpretation of the prophylactic role of chemotherapy and the consequent expectation of a short-term extinction of this parasitosis. In view of the complexities peculiar to such an endemic infection, attention has been drawn away from schistosomiasis to other subjects currently in the limelight.
In spite of such obstacles, the prime concern of those involved in organizing programs to control schistosomiasis should include feedback from the effects of previous prophylactic actions and an analysis of the most pertinent variables involved. Conceição and Coura (1978), Conceição (1978) and Dias et al. (1982) state that the effects of chemotherapy on the reduction of prevalence and parasite burdens vary widely according to local epidemiological peculiarities and that different stages of the action require different strategies.
The information provided by quantitative stool examinations is doubtless epidemiologically pertinent, but some peculiarities of the life cycle of S. mansoni add to the difficulties of microscopic diagnosis, such as the possibility of an infection caused by a small number of worms, possibly all of one sex, irregular oviposition, non-random distribution of the eggs in the fecal mass, and the development of host immunity (Gryseels 1996). The effects of such factors can be serious when only one fecal sample is collected from each subject. Oxamniquine-resistant worms, mentioned by Cioli et al. (1978), Coura et al. (1980), Prata et al. (1980), Bina and Prata (1980), Dias et al. (1982), Kloetzel (1982) and Coelho et al. (1997), also set important limits on the evaluation of prevalence, parasite burden, and risk of infection. All such drawbacks notwithstanding, given that the parasitological surveys were comprehensive, it can be concluded that the prophylactic measures introduced in Bananal succeeded in reducing schistosomiasis prevalence. However, the effect of treatment on parasite burdens was not so evident. The highest percentages of positive cases registered during the last six years (6.14%) occurred in districts situated on the left-hand bank of the river Bananal: Niterói, Laranjeiras, Cerâmica and Três Barras, while the districts Centro, Palha and Bom Jardim contributed 3.8% of the cases. It is interesting to note that about 65% of the urban population of Bananal lives in districts situated on the right-hand bank of the same river. It is possible that residents of the left-hand bank increase their chance of infection when wading across the river on their way to the center of Bananal. Although the impact of chemotherapy on egg counts has not been immediately evident, a reduction of the number of eggs discharged into water bodies will eventually result. As remarked by MacDonald (1965), a small number of S. mansoni eggs discharged into the environment will sustain the transmission of this parasite. That is why the stress on proper sanitation cannot be overestimated. For this reason, the epidemiological situation regarding schistosomiasis in Bananal has been surveyed yearly.
In Bananal, transmission of schistosomiasis occurs in the absence of recent migratory pressures, under fairly good socio-economic conditions and better-than-average sanitation. Even in quarters inhabited by low-income families, living standards are higher than those observed on the outskirts of big cities, where Teles (1994) and Coura Filho (1997a, b) observed more favorable conditions for the urbanization of schistosomiasis.
For the moment, in spite of the progress made, it seems prudent to proceed with surveying the population at risk, aiming at diagnosing infection cases and applying chemotherapy to the infected ones. To improve the reliability of diagnosis, serological techniques of high sensitivity and specificity will also be used. High parasite burdens, persisting in spite of a reduction in prevalence, tend to cause serious disease in which, in addition to intestinal lesions, there is hepatosplenic involvement. To evaluate such cases, careful clinical examinations are mandatory. Unsatisfactory effects of chemotherapy and relapses showed up in the parasitological surveys, indicating reinfection or drug resistance. It is thus advisable, in such cases, to switch to praziquantel, a well-tested drug whose efficacy is similar to that of oxamniquine.
The distribution of infected people and parasite burdens by sex and age bracket gives further information about the risks of infection and the adequate strategy to control schistosomiasis. There are definite indications that the active search for infected people among the most exposed groups, schoolchildren for instance, has been adequate. In Bananal, positive cases and the heaviest parasite burdens are found mostly in the 7 to 14 year age bracket. Routine assistance of town dwellers by the Family Health Program solves the problems of clinical diagnosis and follow-up.
As most of the egg counts are at the lower limit of sensitivity of the method (12 epg), a significant proportion of false negative results is expected. Even though a low susceptibility to S. mansoni has been attributed to B. tenagophila, the characteristics of the foci found in Bananal, of which a low estimated density of the worm population is not the least important, give clear proof that transmission can nonetheless be sustained. The situation observed in Bananal confirms the views of Conceição and Coura (1978), Katz (1980) and Barbosa and Barbosa (1998), who remark that the probability of success in the control of this endemic infection increases when the peculiarities of each epidemiological situation are taken into account.
Efforts to make sanitation as effective as possible, although recognized as extremely important in preserving the health of a population, produce results that, as regards the prophylaxis of schistosomiasis, depend on the region under consideration. Thus, the impact of sanitation on prevalence differs even among districts where treated water and adequate sewage disposal are available to most of the inhabitants. For instance, during 1999 and 2000 the districts of Bom Jardim and Palha already had satisfactory sanitation conditions; however, the expected sharp decrease in the proportions of positive stools was not observed. The identification of new cases every year indicates repeated contact of the town inhabitants with contaminated water during activities such as extracting sand and stones from the river bed, washing animals and fishing, which may cause the infection to continue even if no drug-resistant parasite strain is demonstrated. In fact, only two subjects, possible cases of infection with drug-resistant strains of S. mansoni, were observed to pass eggs of this worm after chemotherapy while denying any further contact with suspect water bodies.
A continuous decrease of the prevalence as a function of time was observed in the districts of Palha, Laranjeiras, Cerâmica and Centro. A steady increase has been observed in the Niterói district.
Figure - Urban area plan of Bananal, State of São Paulo, Brazil.
TABLE I - Samples, cases and percentages of Schistosoma mansoni diagnosed in the Bananal districts, State of São Paulo, Brazil.
TABLE III - Cases and worm burdens (eggs/g of feces) by age group and sex in Bananal, State of São Paulo, Brazil (1998-2000).
TABLE IV - Case distribution by worm burden (eggs/g of feces) in the Bananal districts, State of São Paulo, Brazil (1998-2000).
| 2017-08-30T13:20:44.751Z | 2002-01-01T00:00:00.000 | {
"year": 2002,
"sha1": "4938a393c5016b5c32cd098d1f9b160be1c17075",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/mioc/a/P8V3QLY8fxGnBVq7bsjgrbm/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4938a393c5016b5c32cd098d1f9b160be1c17075",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
44011609 | pes2o/s2orc | v3-fos-license | What do we get from navigation in primary THA?
Navigation in primary total hip arthroplasty has a history of over 20 years. During this process, imageless computer navigation can be particularly helpful in optimally restoring the hip’s biomechanics. This involves the accurate placement of the acetabular component with the determination of the anteversion and abduction, whereby the navigated femur-first technique also allows for a calculation of the combined anteversion. Additional critical parameters such as the reconstruction of the rotation centre, as well as the femoral and acetabular offset, can also be optimally adjusted. Last but not least, an intra-operative evaluation and equalisation of the leg length is possible. Nonetheless, the disadvantages of this surgical technique in terms of the high costs in the acquisition and preservation of the necessary devices, as well as the longer operation time, must be taken into account. However, economic aspects are not the only thing preventing widespread use of the navigation technique. Determining the plane of reference (APP) for the optimal orientation of the implants is based on palpation of the bony landmarks – and this is influenced by the thickness of the soft tissue layer. Furthermore, the experience of the surgeon constitutes a variable that influences the accuracy of navigation. In summary, hip navigation certainly offers an interesting technique for the optimisation of total hip arthroplasty with reconstruction of proper biomechanics. At the same time, there is currently a lack of high-quality randomised controlled long-term trials that evaluate the clinical advantage for the patients, together with cost utility and survival rates. Cite this article: Renner L, Janz V, Perka C, Wassilew GI. What do we get from navigation in primary THA? EFORT Open Rev 2016;1:205-210. 10.1302/2058-5241.1.000034.
Introduction
The short-and long-term success of primary total hip arthroplasty (THA) is associated with the correct reconstruction of the hip biomechanics. This includes the reconstruction of the rotation centre and offset, the correct positioning of the cup and shaft (anteversion, inclination and antetorsion) and the equalisation of leg length. Deviations in these parameters arising through planning errors and intra-operative misinterpretation can lead to a higher rate of complications such as reduced range of motion (ROM), and can raise the risk of impingement of the components, which in turn leads to increased wear, inlay breakage and dislocation. 1 These complications result in higher revision rates, a shortening of the implant service life and, not least, dissatisfied patients. [2][3][4] For the use of the conventional freehand technique, both precise pre-operative planning and intra-operative re-evaluation are essential for correct implant positioning and optimal function. Simple methods include marking the implants on printed radiograph images with the help of planning films or intra-operative fluoroscopy. 5,6 However, this requires excellent three-dimensional thinking on the part of the surgeon and is obviously dependent on the surgeon's experience. 7, 8 Earlier studies were nevertheless able to show that even experienced surgeons only seldom managed to achieve a reliable and reproducible implant position using the conventional freehand technique. For example, an acetabular cup position outside the target zone recommended by Lewinnek et al 9 was observed in 50% of cases, even where experienced surgeons were involved. 10 Navigation, on the other hand, promises an accurate reconstruction of the aforementioned biomechanical parameters, while decreasing the number of outliers. 11 The first clinical use of a CT-assisted surgical robot for femoral canal preparation took place in 1992. 12 In subsequent years, the technique progressed to the use of passive navigation systems, which were initially image-based, specifically in computerised tomography or fluoroscopy. The navigation system currently most in use is based on infrared wave communication and is referred to as an 'imageless' navigation system. [13][14][15] It uses optical tracking arrays that, for example, are fixed on the ipsilateral iliac crest and serve as a reference point throughout the entire surgery.
The anterior pelvic plane (APP), which is determined by both anterior superior iliac spines and the symphysis, serves as the reference plane for the abduction and anteversion of the socket. The femoral antetorsion can, depending on the navigation system, be referenced intra-operatively over the epicondylar axis or the most dorsal points of the femoral condyles. In the case of imageless navigation systems, these points are read with a blunt tracker over the soft tissue. Thus, the accuracy of the navigation systems depends on the accurate acquisition of these planes.
Criticisms of navigation include: the increased operation time, an inaccurate reading of the anatomical landmarks especially in overweight patients, high acquisition costs and the potential danger of an electronic malfunction.
This review provides a consolidated overview of the advantages and disadvantages of navigation in primary THA and, last but not least, recommendations regarding further use.
Acetabular component positioning
Lewinnek et al 9 describe the so-called 'safe zone' for acetabular components as having an abduction of between 30° and 50° and an anteversion of between 5° and 25°. Dislocation of cups implanted outside this zone was four times more likely. There are now differing recommendations; 16 nonetheless, component placement plays a decisive role in the wear and stability of the THA. 17 Studies show an improved positioning of the acetabular components with the use of navigation. [18][19][20] Lass et al 14 compared freehand positioning and imageless computer navigation in a prospective randomised study. They observed higher accuracy with the navigation system and a significant difference concerning anteversion. No difference was observed for abduction.
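As a trivial illustration of how such a target zone can be applied to post-operative measurements, the check below encodes the Lewinnek limits; the example series of cup orientations is hypothetical.

```python
def in_lewinnek_zone(abduction_deg, anteversion_deg):
    """True if a cup orientation lies within the safe zone of Lewinnek et al.:
    abduction (inclination) 30-50 degrees, anteversion 5-25 degrees."""
    return 30.0 <= abduction_deg <= 50.0 and 5.0 <= anteversion_deg <= 25.0

# Hypothetical post-operative measurements: (abduction, anteversion) in degrees
cups = [(42, 18), (55, 12), (38, 3)]
outlier_rate = sum(not in_lewinnek_zone(a, v) for a, v in cups) / len(cups)
print(f"outliers: {100 * outlier_rate:.0f}%")   # 2 of 3 here
```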
The comparison of different guidance methods in 1,980 total hip arthroplasties showed a significantly higher rate of acetabular components within Lewinnek's safe zone when using robotic- and navigation-guided techniques. 21 A prospective, randomised, controlled study comparing cup position showed placement of the acetabular components within the zone suggested by Lewinnek et al 9 in 90% of cases with imageless navigation and 80% with conventional placement (p = 0.661). 15 Kalteis et al 20 also reported that 53% of the cups (16 of 30) implanted with the freehand technique were outside the safe zone, compared to 7% (two of 30) implanted using an imageless navigation system. In a prospective randomised study, Parratte and Argenson 19 observed an outlier rate of 57% using the freehand technique and 20% with an imageless navigation system. The use of this imageless navigation system indeed led to a marked improvement, but showed significant errors for overweight patients with a BMI ⩾ 27.
The high prevalence of incorrect socket position is surprising when one considers the results of an imageless navigation validation experiment, in which an average precision of 1° for abduction and 1.3° for anteversion was proven for an imageless navigation system. 22 In this study, the landmarks were read ex vivo without the error-inducing soft tissue, which explains the high accuracy. Spencer et al observed a large intra- and inter-individual variation as well as significant errors during APP-landmark acquisition by means of an imageless navigation system in a cadaver model. 23 In a similar model, Parratte et al 24 were then able to observe that the imageless navigation system is influenced by the thickness of the tissue over the bony APP landmarks. Richolt and Rittmeister 25 demonstrated with the aid of an ultrasound examination that the fatty layer is three times thicker over the symphysis than over the anterior superior iliac spine. In contrast to the anterior superior iliac spine, the fat over the symphysis is not moveable, but can only be compressed by the blunt registration pointer. A direct reading of the bony landmarks over the symphysis is theoretically only possible in extremely slender patients. A nearly correct reading of both anterior superior iliac spines, combined with errors in the registration of the symphysis, creates a reference plane that does not correspond to the bony APP. Parratte et al also referred to the resulting plane as a 'cutaneous Lewinnek plane'. 24 It is through these registration errors that the significant version errors arise in the study discussed above. 24 Wolf et al developed a mathematical model which can calculate the errors in anteversion and abduction that result from registration errors of the APP. 26 Thus, a total error in the APP registration of only 4 mm can result in an error of 7° in the anteversion and 2° in the abduction. 26 In order to avoid these errors and enable exact positioning, ultrasound was integrated into the navigation workflow. The advantage of this technique in comparison to pointer-based imageless navigation has since been proven in ex-vivo and clinical studies.
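The sensitivity described by Wolf et al can be illustrated geometrically. The sketch below is not their model: it simply assumes Murray's radiographic angle definitions and a rigid pitch of the registered APP about the lateral axis (for instance from a too-anterior symphysis registration), and shows that the apparent anteversion shifts several times more than the abduction.

```python
import numpy as np

def cup_axis(incl_deg, antev_deg):
    # Radiographic definitions (Murray): x lateral, y anterior, z cranial
    ri, ra = np.radians([incl_deg, antev_deg])
    return np.array([np.cos(ra) * np.sin(ri), np.sin(ra), np.cos(ra) * np.cos(ri)])

def apparent_angles(axis, pitch_deg):
    """Cup angles as seen in an APP pitched by pitch_deg about the lateral axis."""
    t = np.radians(pitch_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(t), np.sin(t)],
                   [0, -np.sin(t), np.cos(t)]])
    x, y, z = Rx @ axis
    return np.degrees(np.arctan2(x, z)), np.degrees(np.arcsin(y))

axis = cup_axis(40, 15)              # a cup in the middle of the safe zone
print(apparent_angles(axis, 5.0))    # ~(41.0, 18.8): anteversion shifts ~4x more
```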
In a cadaver study, the integration of the 2D-B mode ultrasound in the navigation algorithm allows for a very exact and reproducible registration of the APP. 27 This exact registration causes only a minor error in the acetabular cup position as shown by the navigation system and in the actual post-operative result. In this case, less experience on the part of the surgeon and a higher BMI in the cadaver do not result in a clinically relevant error. It was shown in a prospective randomised study that the ultrasound-based navigation has an outlier rate of 25% compared with 30% in the imageless navigation group. 28
Femoral component positioning
A malposition of the femoral component can also lead to complications after THA. For a positioning of the acetabular cup within the safe zone according to Lewinnek, an antetorsion of 15° is recommended for the femoral component. This corresponds to the native femoral antetorsion reported by Toennis and Heinicke. 29 In contrast to the acetabular component, conventional implantation of the femoral component shows a large variation in antetorsion, with a range of up to 44°. Due to the individual anatomy of the proximal femur, this can lead to rotatory and sagittal malalignment, even with cement-free implants. Consequently, it is difficult to achieve an antetorsion of 15° in every case. This fact clearly demonstrates the disadvantage of separate target zones for the shaft and socket: with the freehand technique it is not possible to compensate for the incorrect positioning of one component by modifying the other during surgery.
It should be emphasised, therefore, that the concept of combined anteversion of the acetabular and femoral components, 30 which in a finite element and mathematical model attains a value of 37.7°, theoretically enables an impingement-free range of motion for the THA and, simultaneously, higher stability. 31 The navigated femur-first technique offers the possibility of achieving an optimised combined anteversion. 32,33 For the shaft navigation, the dorsal femoral condyles are used as reference points. Dorr et al achieved an accuracy of 4.8° with respect to femoral component positioning using imageless navigation, 32 though with a similarly large variability compared to the freehand technique. 7 The antetorsion of the shaft, especially with surgical approaches that allow a good view of the femur, appears to be easily identifiable by experienced surgeons. However, an exact measurement allows the combined anteversion to be optimised and adjusted in relation to the position of the acetabular cup. One of the latest studies on the potential impingement-free range of movement after THA showed a better result with the navigated femur-first technique than with conventional minimally invasive implantation. 34
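The femur-first logic above can be expressed as a one-line calculation: once the stem antetorsion has been measured, the cup anteversion is chosen to hit the combined target. The sketch below assumes a plain sum; some published models weight the femoral contribution (e.g. a factor of about 0.7), which would change the result, so the weighting is exposed as a parameter.

```python
def cup_target_anteversion(stem_antetorsion_deg, combined_target_deg=37.7,
                           stem_weight=1.0):
    """Cup anteversion needed to reach a combined-anteversion target.

    A plain sum (stem_weight=1.0) is assumed here; some models weight
    the femoral contribution (e.g. ~0.7), which alters the result.
    """
    return combined_target_deg - stem_weight * stem_antetorsion_deg

# Femur-first: the stem is navigated first, then the cup is matched to it.
for stem in (5, 15, 25):
    print(stem, "->", cup_target_anteversion(stem))
# 5 -> 32.7, 15 -> 22.7, 25 -> 12.7 (only the latter two fall in Lewinnek's 5-25)
```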
Leg length equalisation
Leg length difference after THA varies on average between 1 mm and 15.9 mm according to the current literature. 35,36 Lengthening of over 6 mm and shortening of over 10 mm are perceived by patients. Moreover, leg length discrepancy is one of the most frequent reasons for court claims from patients and is crucial to post-operative satisfaction.
Without navigation, leg length is difficult to evaluate directly during surgery, as an exact measurement is hardly possible given the obstacles of the patient's position, pelvic tilt and sterile draping. Intra-operative fluoroscopy shows no significant difference in comparison to a control group with regard to leg length, yet leads to a marked increase in surgery time. 37 In a matched-pair study, Manzotti et al 38 showed a significantly better restored leg length six months after THA in patients operated on with computer-assisted navigation. The number of patients with a significant leg length difference of over 10 mm was also smaller in this group. In another study, the post-operative leg length difference in the computer-assisted group averaged 3 mm; however, there was no significant difference from the results of conventional methods. 39 In the study by Ellapparadja et al, 40 over 96% of the patients operated on had a leg length difference of under 6 mm. In other studies, the residual leg length difference even fell below 5 mm for 93% of patients. 13
Acetabular and femoral offset
The reconstruction of the femoral and acetabular offset is essential for adequate function of the THA. 41-43 A reduction of the offset can lead to a decrease in both the lever arm and abductor strength. This can in turn lead to THA impingement, to bone-to-bone impingement and even to the patient limping. 44 Furthermore, a reduction of the offset can increase the forces in the area of the bearing surface, which may result in increased wear. 45 In a recent study, 95.39% of the navigated hips had an offset similar (within 6 mm) to the opposite side. 40 The same was seen in another study, in which the global or femoral offset was reconstructed within 8 mm in 98% of cases. 13
Limitations and criticisms of navigation
The introduction of new surgical methods, especially in cases where the standard methods already achieve a good to very good outcome, has in particular to be evaluated with respect to its advantages and disadvantages.
One disadvantage of navigation in primary THA is the longer surgery time. 15,20,38 Kalteis et al 20 report a lengthening of operating time of 8 minutes, or as much as 10 minutes, depending on the steps necessary for the registration process. Manzotti et al 38 recorded a surgery time of 73.17 minutes (range 48-116) in the control group and 89.39 minutes (range 77-122) in the navigated group, a significant difference (p < 0.01). A prolonged operation time can be associated with an increased risk of complications. As the operation time in the studies mentioned is only increased by a few minutes, it is debatable whether this short period of time has a significant impact. Nonetheless, it must be taken into consideration that the surgery time may be longer while the surgeon is still working through the learning curve.
An additional criticism regards the increased costs. These comprise the acquisition costs for the system and the costs for disposables such as reflective markers, as well as the increased surgery time already mentioned. The acquisition of a navigation system therefore appears economically sensible only for high volume clinics, 46 although cost-effectiveness studies are lacking here.
Furthermore, the intra-operative determination of the anatomical reference plane can lead to errors in the socket positioning. Spencer et al 23 showed this in a cadaver study in which eight surgeons had to determine the anterior pelvic plane using pointer-based navigation. The abduction and anteversion of a socket theoretically implanted on the basis of the registered landmarks showed significant variation (anteversion SD 9.6°, abduction SD 6.3°), and landmark registration may be especially difficult in the case of overweight patients. 47 However, the study by Gupta et al 48 showed no difference in cup abduction and anteversion in patients with an elevated body mass index in the case of robotic-guided navigation. The integration of ultrasound into the navigation algorithm has also led to an improvement in landmark acquisition and positioning of the implant. Which technique wins through in the future depends above all on the practicability of the technique for the majority of surgeons, beyond its acceptance at highly specialised centres.
Finally, imageless computer navigation claims to be able to achieve a more accurate placement of the components as compared with the conventional implantation of a THA. In this context, the question arises of the existence of a safe zone for the acetabular components. Lewinnek et al 9 and McCollum et al 16 recommend different target areas. Other authors see no correlation between incidence of dislocation and a placement of the acetabular component in the safe zone. 49,50 Rather, they see an individual approach to the particular anatomy of the patient as being meaningful.
In view of the current lack of large, randomised controlled studies on navigation in primary THA with long-term follow-up, the above-mentioned advantages -while they can certainly be documented -are not yet relevant for daily clinical practice. Gurgel et al 15 showed no significant difference with respect to the absolute values of the abduction and anteversion in a prospective randomised controlled study with 20 THAs in each case (freehand placement versus imageless navigation). Most studies that entail a comparison to conventional implantation show similar values with respect to leg length equalisation and the reconstruction of the offset. 13 Meta-analyses comparing navigation and freehand positioning were only able to show an improved accuracy of the implant positioning and a reduction in the number of outliers. 51, 52 In the debate about the practicality of navigation in primary total hip arthroplasty, it is not only purely biomechanical sophistication with statistically significant differences that needs to be discussed -its significance in terms of clinical outcome as well as subjective patient satisfaction is more critical. 38 For example, one study showed a significant difference in the Harris hip score between navigated and conventionally implanted hips six weeks after surgery, but not six months or one year afterwards. 34 One of the first mid-term follow-up studies comparing navigated and conventional implantation found no differences in clinical outcome, bone density and polyethylene wear between five and seven years post-surgery. 53 When working with navigation in practice, it must be remembered that every surgeon must also be able to perform the surgery using the conventional method, given that, due to electronic error, a failure of the navigation technique is possible at any time.
Conclusions
Computer navigation in primary hip arthroplasty seems to be a valuable tool to achieve exact positioning of the components and an equal leg length. Studies have shown how the measurement of the anatomical landmarks can be improved using ultrasound. Nevertheless, navigation has disadvantages that may hinder its widespread use, including high costs, longer surgery time and a current failure to completely solve the determination of the APP as the reference plane. Randomised controlled studies with long-term follow-up will have to prove the clinical relevance of navigation techniques in primary THA in the future.
| 2018-04-03T05:52:46.973Z | 2016-05-01T00:00:00.000 | {
"year": 2016,
"sha1": "fd3588e0d92a38342ea394509a58b9eb50f59a60",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1302/2058-5241.1.000034",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fd3588e0d92a38342ea394509a58b9eb50f59a60",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
118850748 | pes2o/s2orc | v3-fos-license | On the Differential Rotation of Massive Main Sequence Stars
To date, asteroseismology has provided core to surface differential rotation measurements in eight main-sequence stars. These stars, ranging in mass from $\sim$1.5-9$M_\odot$, show rotation profiles ranging from uniform to counter-rotation. Although they have a variety of masses, these stars all have convective cores and overlying radiative regions, conducive to angular momentum transport by internal gravity waves (IGW). Using two-dimensional (2D) numerical simulations we show that angular momentum transport by IGW can explain all of these rotation profiles. We further predict that should high mass, faster rotating stars be observed, the core to envelope differential rotation will be positive, but less than one.
INTRODUCTION
Rotation is a key property of stars that has important consequences for their long term evolution and eventual demise. Rotation is particularly important in massive stars where it contributes significantly to chemical mixing (Zahn 1992;Talon et al. 1997) and may determine the eventual explosion energy and nucleosynthetic yield of the star (Heger & Langer 2000). Given its importance, it would be extremely beneficial if constraints could be placed on stellar internal rotation as well as on the dominant physical mechanisms responsible for such rotation. However, theoretically determining the internal rotation of stars is plagued by complicated hydrodynamic processes which are difficult to simulate numerically and until recently, observations have provided little constraint.
Fortunately, the observational landscape has recently changed due to space missions like Convection, Rotation and planetary Transits (CoRoT) and Kepler. With the continuous duty cycle provided by these missions observers have been able to place constraints on the internal rotation of hundreds of evolved stars using mixed modes (Beck et al. 2012;Mosser et al. 2012;Deheuvels et al. 2015). The overall consensus of these observations is that angular momentum coupling between the contracting core and envelope is far more efficient than previously expected. Although it is still unclear what the physical mechanism is that causes this efficient coupling, internal gravity waves (IGW) are a key contender (Fuller et al. 2014).
Core-envelope differential rotation has also been measured in eight intermediate and massive main sequence stars using both p- and g-modes (Aerts et al. 2003; Pamyatnykh et al. 2004; Briquet et al. 2007; Kurtz et al. 2015; Saio et al. 2015; Triana et al. 2015; Schmid et al. 2015). We note that the term core here is used to mean the region just outside the convective core, i.e. the inner radiative region. Throughout this text we will use this terminology for consistency's sake, but emphasize that "core" as used here does not refer to the convective core. This handful of observations has shown a variety of differential rotation profiles. The measurements of HD 157056 (Briquet et al. 2007), KIC 9244992, KIC 11145123 (Saio et al. 2015) and the binary system KIC 10080943 (Schmid et al. 2015) show fairly uniform rotation, although notably not exactly uniform. On the other hand, HD 29248 (Pamyatnykh et al. 2004) and HD 129929 (Aerts et al. 2003) show cores spinning more rapidly than their envelopes. Finally, KIC 10526294 (Triana et al. 2015) shows an envelope spinning faster than the core and in the opposite direction.
It has been shown previously that IGW generated by convection are very efficient at transporting angular momentum (Rogers et al. 2013). This is particularly true in stars which have convective cores and extended overlying radiative regions. In this configuration IGW generated at the convectiveradiative interface propagate outward into a region whose density is decreasing dramatically. This causes wave amplitudes to increase rapidly. Because of this increase in amplitude, very small amplitude perturbations at generation can lead to large perturbations in the envelope and therefore, lead to efficient angular momentum transport if the waves dissipate (Rogers et al. 2013).
Convection generates both prograde and retrograde waves at the convective-radiative interface. The initial symmetry breaking of a uniformly rotating medium caused by the dissipation of predominantly prograde or retrograde waves at the surface is a stochastic process, meaning the angular momentum transport by IGW could either speed up (if prograde waves are dissipated) or slow down (if retrograde waves are dissipated) the radiative region.
This initial symmetry breaking sets the stage for further angular velocity evolution, which will depend on the dominant dissipation mechanism. If waves dissipate through nonlinear wave breaking, then subsequent evolution can vary in sign and a strong mean flow may not develop. If waves dissipate predominantly through radiative dissipation then whichever sign flow dominates initially will grow and eventually reverse in time (as in the Quasi-Biennial Oscillation (Baldwin et al. 2001), although it is still unclear whether such an oscillation would proceed in a massive star (Rogers et al. 2013)). Similarly, if a critical layer develops, then any initial mean flow will be amplified, but on a much faster timescale (than radiative diffusion alone). In massive stars the density stratification is such that waves are likely to non-linearly break. However, whether or not a critical layer develops will depend on the surface wave flux, which depends on the convective flux, and the details of the stratification. Therefore, the outcome of IGW transport can vary from simple efficient angular momen-tum transport between the convective and radiative regions, to strong differential rotation if a critical layer develops.
While the observed stars vary in mass they share the common characteristic of having convective cores with overlying radiative regions, albeit with different extent. Given the limited number and resolution of the observations, here we use a single fiducial model of a star with a convective core and radiative envelope. By simply varying the initial rotation rate (to mimic different initial conditions) and convective flux (to mimic different masses and ages) we show that angular momentum transport by convectively driven IGW can explain the variety of observed rotation profiles.
ROTATION IN MASSIVE MAIN SEQUENCE STARS
To date, core-envelope differential rotation has been measured in eight main sequence intermediate and massive stars. Here we briefly summarize those results. The first measurement of core-envelope differential rotation (Ωc/Ωe) in a main sequence star was made by Aerts et al. (2003) for the B3V star HD 129929. That star was found to have a core rotating approximately 3.6 times faster than its envelope (Dupret et al. 2004; Aerts 2008). HD 29248, another B star of similar mass (Pamyatnykh et al. 2004; Ausseloos et al. 2004), was found to have a core spinning approximately 5 times faster than its envelope. Briquet et al. (2007) found a rotation profile consistent with uniform rotation for the ∼8 M⊙ star HD 157056. More recently, Kurtz et al. (2015) and Saio et al. (2015) have found nearly uniform rotation for the F stars KIC 9244992 and KIC 11145123. Though importantly, with high confidence, they find that KIC 9244992 has an envelope rotating slightly slower than its core (Ωc/Ωe = 1.03) and, conversely, that KIC 11145123 has an envelope spinning slightly faster than its core (Ωc/Ωe = 0.97). Similarly, Schmid et al. (2015) constrained core-envelope differential rotation in both components of the binary system KIC 10080943 and found that one component has a slightly faster core than envelope, while the other shows the opposite. 1 Finally, Triana et al. (2015) used 19 g-mode multiplets to perform a full inversion and obtain the radial differential rotation profile of KIC 10526294. They found that the envelope was spinning significantly faster than the core and, perhaps more surprisingly, in the opposite direction (Ωc/Ωe = −0.3); that is, the envelope is rotating in the opposite direction to the core (see Fig. 3).
With the exception of Triana et al. (2015), all of these stars only have a measurement of the ratio Ω c /Ω e and not an actual rotation profile. These ratios are derived from multiplets of g-modes, which are confined to the region just outside the convection zone, and multiplets of p-modes, which are confined to the surface regions. The differential rotation measurement is therefore really a measure of two regions of the star: just outside the convective core and just beneath the surface. Furthermore, mode identification is much easier in slower rotators, so all of the observed stars are slow rotators, perhaps unusually so. These observed stars may therefore not represent the rotation profiles of intermediate and massive main sequence stars as a whole. Consequently, in the following we consider a variety of initial rotation rates.
MODELING ANGULAR MOMENTUM TRANSPORT BY IGW
In order to model angular momentum transport by IGW in stellar interiors we solve the Navier-Stokes equations in the anelastic approximation (Gough 1969;Rogers & Glatzmaier 2005). The equations are solved in two dimensions (2D), representing an equatorial slice of the star. Here we use a 3M ⊙ star as a fiducial model. The radial domain extends from 0.01R ⋆ to 0.90R ⋆ , encompassing both the convective core and radiative envelope to accurately model the convective generation of waves. The reference state thermodynamic variables are calculated from a polynomial fit to a one-dimensional model calculated using the Cambridge stellar evolution code STARS for a 3M ⊙ star (Eggleton 1971), with a central hydrogen fraction X c = 0.47. At this age the convective core occupies 0.30R ⊙ , or 14% of the radial domain. Because of the steep density gradient, 90% of the angular momentum of the star resides within 60% of the radius.
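As a rough sketch of this reference-state construction (our own illustration; the function name, the choice to fit log-density, and the polynomial degree are assumptions rather than details taken from the paper), one could proceed as follows:

```python
# A minimal sketch: fit a smooth reference profile to a 1D stellar model.
# Assumption: fitting log(rho) handles the steep density stratification; the
# real setup fits each thermodynamic variable and enforces consistency.
import numpy as np

def reference_density(r, rho, degree=10):
    """Return a callable smooth density profile rho_ref(radius)."""
    coeffs = np.polyfit(r, np.log(rho), degree)
    return lambda radius: np.exp(np.polyval(coeffs, radius))

# hypothetical usage with a 1D model grid (r in units of R*):
# rho_ref = reference_density(r_grid, rho_1d); rho_ref(0.5)
```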
These simulations, like all hydrodynamic simulations, require higher than realistic diffusion coefficients for numerical stability (here we use ν = 4 × 10^13 and κ = 5 × 10^11 cm^2 s^-1 ). Such diffusion coefficients would damp IGW unrealistically on their journey to the surface of the star. To compensate for this enhanced diffusion we force the waves harder by forcing the convection harder, so that the waves reach the surface with more realistic amplitudes. Forcing the convection harder leads to convective velocities which are ∼10-20 times larger than expected from mixing-length theory, depending on the model. However, if we compare our simulated surface velocities to those calculated assuming mixing length theory for the convective velocities, proper density stratification and realistic diffusivities, those velocities are comparable. More details of the numerical model can be found in Rogers et al. (2013).
Because the observations span a range of masses, here we consider slightly different convective fluxes as a crude way to mimic different stellar masses and ages. Broadly, we expect higher convective fluxes (Q/c v ) to be associated with more massive stars, but given the other details we have neglected, such as varying stratification and age, this is not necessarily the case. We further consider different rotation rates to mimic different initial conditions. Table 1 lists the initial conditions and parameters of the models considered along with their resulting core-envelope differential rotation. Fig. 1 shows a typical time snapshot within the simulated domain of the temperature and vorticity for model M4.
SIMULATED CORE-ENVELOPE DIFFERENTIAL ROTATION
To mimic the regions probed by observations we average the core rotation over ∼ 0.2R ⋆ outside the convection zone (Ω c ) and the surface rotation over ∼ 0.1R ⋆ below the surface (Ω e ). The ratio of these two values varies substantially in time; therefore, to best illustrate the results we show the histogram of values obtained for each model in Fig.2 (details are in the figure caption). The distribution of values seen in Fig.2 indicates the stochastic nature of wave generation and dissipation. Hence, while most of the profiles are Gaussian with a clear average, there is some deviation and skewness. The values of Ω c /Ω e quoted in Table 1 are the mean values with errors of one standard deviation. The variability due to differences in spatial averaging is smaller than that in time, so long as Ω c is measured within the radiative region and away from convective overshoot. If the core value includes the convection zone, the ratio Ω c /Ω e becomes significantly more variable, tends to increase, and its distribution is often not Gaussian. This may be due to inadequate time resolution or reduced dimensionality, but is more likely due to the stochastic nature of turbulent convection. Each of the models is run for at least 20 wave crossing times of the entire radiative envelope for a typical wave (horizontal wavenumber 10 and frequency 10 µHz), or ∼ 100 convective turnover times, which amounts to ∼ 10^7 s. We note that some models are run substantially longer and do not show substantial variation, and certainly none outside the error bars quoted. In Fig.2 we immediately see that the range of differential rotation profiles seen in the simulations (−0.03 to 5) is similar to that observed (−0.3 to 5). More specifically, our low flux models with a variety of low rotation rates converge to core-envelope differential rotation values between ∼ 1 and 5, similar to seven of the eight observations of differential rotation (HD129929, HD29248, HD157056, KIC9244992, KIC11145123, KIC10080943). These models show a slight preference toward values closer to one than to five, similar to the observations. On mass grounds alone, we would expect HD129929, HD29248 and HD157056 to be described by high flux rather than low flux models. However, numerous other effects (such as stratification, the Brunt-Vaisala barrier, etc.; see Discussion) could affect the surface flux of waves, contributing to these stars appearing more like low flux models. Low flux, high rotation models also show values very close to one. Notably though, the averages are not exactly one. Therefore, in these low flux models, there is some angular momentum transport by waves but not enough to bring the system significantly away from its initially uniform state.

Table 1. Model parameters. Ω i is the initial rotation rate given in rad/s. Q/c v represents the convective forcing in units of K s^-1 , where c v is the specific heat at constant volume. The values 1.5 and 3 result in root-mean-squared convective velocities of ∼ 2.9 and 4.5 km s^-1 , respectively, values ∼10 and ∼20 times larger than predicted by mixing length theory. The differential rotation, Ω c /Ω e , represents the mean ratio of core to envelope rotation. The time and spatial averaging are discussed in the text. Errors quoted are due to variations in time, also discussed in the text. AM/AM i represents the integrated angular momentum compared to the initial angular momentum content of the system, demonstrating the level at which angular momentum is conserved in the system.
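A schematic reconstruction of this core/envelope averaging (array layout, function name, and exact window placement are our assumptions based on the description above, not the authors' code):

```python
# One core-to-envelope rotation ratio per time snapshot, as used for Fig. 2.
import numpy as np

def core_envelope_ratio(omega, r, r_conv, r_top):
    """omega: (n_time, n_radius) angular velocity; r: radius grid in R*."""
    core = (r > r_conv) & (r <= r_conv + 0.2)  # ~0.2 R* outside the convection zone
    env = (r >= r_top - 0.1) & (r < r_top)     # ~0.1 R* below the surface
    return omega[:, core].mean(axis=1) / omega[:, env].mean(axis=1)

# the histogram reported per model would then be, e.g.:
# counts, edges = np.histogram(core_envelope_ratio(omega, r, 0.15, 0.90), bins=50)
```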
[Fig. 2 caption, excerpt] This is particularly true in faster rotating models, where wave transport is less efficient: ratios are ∼1, and IGW transport some angular momentum but not enough to bring about substantial differential rotation. (c) High flux, low rotation models M5 and M8. In these models IGW transport significant angular momentum, causing the envelope to spin substantially faster than the core. In these low rotation models, negative ratios (envelope spinning retrograde) are favored, nicely explaining the observation of Triana et al. (2015). (d) High flux, high rotation models M9, M11 and M13. IGW transport is efficient enough to cause the envelope to spin faster than the core, so that Ω c /Ω e < 1, but in contrast to the slowly rotating models, these favor positive (prograde) surface rotation. Again, these stars have yet to be observed.
High flux models, on the other hand, converge to core-envelope differential rotation values generally between −1 and +1. In this case, IGW are particularly efficient at spinning up (in amplitude) the radiative envelopes and hence envelopes generally spin faster than cores. For slow rotators this transport is efficient enough, and predominantly due to retrograde waves, that negative values are common. In fast rotators, by contrast, prograde waves dominate and bring about a fast, but positive, rotation in the envelope. It is worth noting that, in general, slow rotators tend to favor retrograde wave deposition at the surface and hence retrograde envelope rotation, while fast rotators favor prograde wave deposition at the surface and hence prograde envelope rotation. At the moment, the theoretical reason for this tendency is unknown.
The high flux, slow rotator behavior seen in these simulations is similar to the differential rotation pattern observed in the star KIC10526294 (Triana et al. 2015). In Fig.3 we show the rotation profile inferred for KIC10526294 by Triana et al. (2015), with error bars, along with the time-averaged rotation profile from M4, which was initiated with a rotation rate similar to KIC10526294 and which develops a counter-rotating envelope. There we see that a low rotation model with high IGW flux can reproduce the observed rotation profile of KIC10526294. The outer layers of the star, which are spinning retrograde, represent only ∼1% of the angular momentum. Therefore, those models which only conserve angular momentum to ∼2% might not accurately capture the surface dynamics. However, in this particular case (M4), the angular momentum is larger than the original value, so the angular momentum discrepancy cannot be explained by the retrograde envelope. Therefore, while we have to be careful when interpreting our surface angular velocities, this likely does not affect these particular results. We should note that while our time-averaged Ω c /Ω e are similar to observed values, observations represent an instant in time, not a time average, so any of the values seen in Fig.2 could potentially be observed. Similarly, while we have been able to reproduce the rotation profile of KIC10526294 with a time-averaged profile, we expect that, run long enough, our simulated profile would evolve.
DISCUSSION
Based on these 2D numerical simulations we conclude that IGW can explain current observations of core-envelope differential rotation in main sequence stars with a convective core. These results lead to a few conclusions and predictions. Low flux models with low rotation can show a variety of differential rotation profiles, ranging from ∼1 to 5. Low flux models with high rotation have rotation profiles closer to uniform, though notably, not exactly uniform. High flux models with low rotation generally show faster and counter-rotating envelopes. Finally, high flux models with high rotation (which have not yet been observed) will have envelopes spinning faster than their cores, but with positive sign. In our simulations the transition between fast and slow rotators (or retrograde versus prograde envelopes) occurs at ∼ 10^-5 rad/s, but this will likely depend on the age and mass of the star.
Given that simply changing the convective flux by a factor of two in our simulations can lead to significantly different rotation profiles, it is worth discussing what could lead to a different convective flux in a real star, or more appropriately, a different surface wave flux. The first factor is mass, with higher mass stars having higher luminosities and therefore higher convective fluxes. The second is age: as a star evolves, its luminosity increases somewhat, which could lead to enhanced convective fluxes. However, as the star ages it also develops a severe gradient in the Brunt-Vaisala frequency at the convective-radiative interface, due to the chemical composition gradient left behind by converting hydrogen to helium. Such a gradient could act as a filter to waves propagating outward to the surface and thus reduce the surface wave flux, possibly causing a massive star to appear more like a low flux model. It is hard to know how these two effects combine to affect the surface wave flux. In reality, changes in both mass and age are also accompanied by changes in the stratification throughout the radiative region, which could affect the effective propagation and dissipation of waves, and hence the surface wave flux. Any of these effects could contribute to individual stars being better described by different model parameters than initially expected, and all of these effects should be considered in future models and as more observations become available.
This work, together with Aerts & Rogers (2015), is among the first to make direct comparisons between numerical hydrodynamic simulations and observations. Such comparisons clearly require some caveats. First and foremost, these simulations are carried out in 2D. We expect that IGW transport would be more efficient in 2D, as waves are not able to spread out over the sphere and because 2D turbulence has an inverse cascade. Therefore, we expect the timescales of angular momentum transport in these simulations to be shorter than in the actual star, but we cannot say by how much. This difficulty in extrapolating timescales arises because mean flow development depends on velocity correlations, and it is difficult to say how much more efficient these correlations are in 2D versus 3D. Furthermore, we do not know how wave transport will proceed at higher latitudes, but we expect it to be less efficient than at the equator. Therefore, the observations, which represent a latitudinal average, are likely a lower limit of our simulated equatorial differential rotation. Finally, treating mass and evolutionary state as simply a change in flux is inadequate.
We have shown that our numerical simulations of IGW can explain the differential rotation profiles observed in intermediate and massive main sequence stars. Given the shortcomings of these simulations (2D, increased viscosity, increased thermal diffusivity), it is surprising that the results agree as well as they do. This agreement is likely due to the limited observational constraints and to the fact that IGW transport can vary significantly. As observational constraints become more numerous, more sophisticated simulations which properly consider the mass and age of individual stars and proper dimensionality will be necessary. In turn, we expect that additional observational constraints can be used to constrain simulation parameters. One robust conclusion from both the observations and the numerical simulations is that stellar rotation is complex and can admit a variety of profiles. | 2015-11-12T08:02:28.000Z | 2015-11-12T00:00:00.000 | {
"year": 2015,
"sha1": "599c57b7e61d472f41b32aa7ebd285e54d804c68",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1511.03809",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "599c57b7e61d472f41b32aa7ebd285e54d804c68",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
38016029 | pes2o/s2orc | v3-fos-license | Mathematical Method to Search for Monic Irreducible Polynomials with Decimal Equivalents of Polynomials over Galois Field GF(p q )
Substitution boxes, or S-boxes, play a significant role in the encryption and decryption of bit-level plaintext and ciphertext, respectively. Irreducible Polynomials (IPs) have been used to construct 4-bit or 8-bit substitution boxes in many cryptographic block ciphers. In the Advanced Encryption Standard, the 8-bit elements of the S-box have been obtained from the Multiplicative Inverse (MI) of elemental polynomials (EPs) of the 1st IP over Galois field GF(2^8) by adding an additive element. In this paper, a mathematical method to obtain monic IPs over Galois fields GF(p^q) is illustrated with examples, together with the algorithm of the said method and a discussion of its execution time. The method is very similar to the polynomial multiplication of two polynomials over Galois fields GF(p^q) but differs in execution. The decimal equivalents of polynomials have been used to identify Basic Polynomials (BPs), EPs, IPs and Reducible Polynomials (RPs). The monic RPs are determined by this method and cancelled out to produce the monic IPs. The non-monic IPs are obtained by multiplying the monic IPs by α, where α ∈ GF(p) and assumes values from 2 to (p-1).
INTRODUCTION
Basic Polynomials (BPs) over the Galois field GF(p^q) are defined as the polynomials of highest degree q. Polynomials of degree less than q are termed Elemental Polynomials (EPs) over GF(p^q). Polynomials containing only a constant term are termed Constant Polynomials (CPs) over GF(p^q). BPs that have more than one non-constant EP as factors are termed Reducible Polynomials (RPs) over GF(p^q). The remaining BPs, which have only CPs and themselves as factors, are termed Irreducible Polynomials (IPs) over GF(p^q). BPs whose coefficient of the highest degree term, or leading coefficient, equals unity are termed monic BPs, and the rest, with leading coefficient greater than unity, are termed non-monic BPs. A basic polynomial BP(x) over the finite field or Galois field GF(p^q) is expressed as BP(x) = a_q x^q + a_(q-1) x^(q-1) + ... + a_1 x + a_0. BP(x) has (q+1) terms, where a_q is non-zero and is termed the leading coefficient. A BP is monic if a_q is unity; otherwise it is non-monic. GF(p^q) has (p^q - p) non-constant elemental polynomials ep(x), with decimal equivalents ranging from p to (p^q - 1), each of whose representation involves q terms with leading coefficient a_(q-1). The expression of ep(x) is written as ep(x) = a_(q-1) x^(q-1) + ... + a_1 x + a_0, where a_1 to a_(q-1) are not simultaneously zero.
Any BP(x) that has a non-constant elemental polynomial as a factor under GF(p^q) is termed reducible. Those BP(x) that have no such factors are termed irreducible polynomials IP(x), expressed as IP(x) = a_q x^q + a_(q-1) x^(q-1) + ... + a_1 x + a_0, where a_q ≠ 0.
In the Galois field GF(p^q), the decimal equivalents, or DEs, of BPs vary from p^q to (p^(q+1) - 1), while the EPs are those with decimal equivalents varying from p to (p^q - 1). Some of the monic BPs are irreducible, since they have no monic non-constant EP as a factor.
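To make this decimal-equivalent encoding concrete, here is a minimal sketch (our own illustration, not from the paper): a polynomial over GF(p) corresponds to the integer whose base-p digits are its coefficients.

```python
def decimal_equivalent(coeffs, p):
    """DE of a polynomial from its coefficient list, highest degree first."""
    n = 0
    for c in coeffs:
        n = n * p + c   # Horner's rule: coefficients become base-p digits
    return n

# x^2 + 2x + 2 over GF(3) -> 1*3^2 + 2*3 + 2 = 17, a monic BP of GF(3^2)
assert decimal_equivalent([1, 2, 2], 3) == 17
```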
The method in this paper looks for the DEs of monic RPs by multiplication, addition, and modulo reduction of the p-nary coefficients of each term of each pair of monic EPs, yielding the DE of a monic RP. The polynomials belonging to the list of RPs are then cancelled, leaving behind the monic IPs. A non-monic IP is computed by multiplying a monic IP by α, where α ∈ GF(p) and assumes values from 2 to (p-1).
In the literature, to the best knowledge of the present authors, there is no mention of a paper in which the composite polynomial method is translated into an algorithm and in turn into a computer program.
The survey of relevant literature is given in Sec. 2. For convenient understanding, the proposed mathematical method is presented in Sec. 3 for p=7 with q=7. The method can find all monic IPs, and after them all non-monic IPs IP(x), over GF(7^7). Sec. 4 demonstrates the obtained results and discusses the efficiency of the algorithm, showing that the proposed searching algorithm is able to search over any extension of any prime Galois field GF(p^q), where p = 3, 5, 7, ..., 101, ..., p and q = 2, 3, 5, 7, ..., 101, ..., q. Sec. 5 and Sec. 6 give the conclusion of the paper and the references. The complete lists of all monic IPs, in a sequential manner, over the Galois fields GF(7^7) and GF(101^3) can be found in refs. [SDS17] and [SDH17].
LITERATURE SURVEY
In the early twentieth century, Randolph Church initiated the search for irreducible polynomials over Galois fields GF(p^q) for p = 2, 3, 5 and 7: for p = 2, q = 1 through 11; for p = 3, q = 1 through 7; for p = 5, q = 1 through 4; and for p = 7, q = 1 through 3. Manual polynomial multiplication among the respective EPs gives the RPs in the said Galois field, and all RPs were cancelled from the list of BPs to give the IPs over the said Galois field GF(p^q) [RC35]. Later, the necessary condition for a BP to be an IP was generalized to even characteristic 2; it was also applied to RPs, giving irreducible factors mod 2 [RS62]. Next, elementary techniques to compute over finite fields, or Galois fields GF(p^q), were described with proper modifications [TD63]. The factorization of polynomials over Galois fields GF(p^q) was then elaborated [EB67]. Later, appropriate coding techniques for polynomials over Galois fields GF(p^q) were illustrated with examples [TK68]. The earlier idea of factorizing polynomials over Galois fields GF(p^q) [EB67] was also extended to large values of p, i.e., large finite fields [EB70]. Later, a few probabilistic algorithms to find IPs over Galois fields GF(p^q) of degree q were elaborated with examples [MR80]. The factorization of multivariate polynomials over Galois fields GF(p) was then introduced to the mathematics community [AL85]. With that, the separation of irreducible factors of BPs [EB67] was also introduced [RM87]. Next, the factorization of BPs under the Generalized Riemann Hypothesis (GRH) was elaborated [LR88]. Later, a probabilistic algorithm to find irreducible factors of basic bivariate polynomials over Galois fields GF(p^q) was illustrated [DW90], and a conjectural deterministic algorithm to find primitive elements and relevant primitive polynomials over the binary Galois field GF(2) was introduced [MR90]. Some new algorithms to find IPs over Galois fields GF(p) were introduced at the same time [VS90]. Another use of the Generalized Riemann Hypothesis (GRH) to determine irreducible factors in a deterministic manner, and also for multiplicative subgroups, was introduced later [LR92]. A table of binary equivalents of binary primitive polynomials was given in the literature [MZ94]. A method to find roots of primitive polynomials over the binary Galois field GF(2) was introduced to the mathematical community [IS96]. A method to search for IPs in a random manner, and to factorize BPs or find irreducible factors of BPs in a random fashion, was introduced later [PX96]. After that, a new variant of Rabin's algorithm [MR80] was introduced with a probabilistic analysis of BPs with no irreducible factors [GP97]. Later, the factorization of univariate polynomials over Galois fields GF(p) in subquadratic execution time was reported [EV98], and a deterministic algorithm to factorize univariate IPs was introduced [EJ01]. An algorithm to factorize bivariate polynomials over Galois fields GF(p) with Hensel lifting was also reported [GA02]. Next, an algorithm was introduced to find factors of irreducible and almost primitive polynomials over the Galois field GF(2) [BZ03]. Later, a deterministic algorithm to factorize polynomials over Galois fields GF(p) into distinct-degree factors was reported [SE04]. A detailed study of multiples and products of univariate primitive polynomials over the binary Galois field GF(2) was also carried out [SM05]. Later, an algorithm to find optimal IPs over extended binary Galois fields GF(2^m) [MS07] and a deterministic algorithm to determine Pascal polynomials over the Galois field GF(2) [CF08] were added to the literature. The search for IPs and primitive polynomials over the binary Galois field GF(2) was also carried out successfully [AA09]; at the same time, square-free polynomials were factorized [CR09], and work on the divisibility of trinomials by IPs over the binary Galois field GF(2) was reported [RW09]. Later, a probabilistic algorithm to factor polynomials over finite fields was introduced [SM11]. An explicit factorization obtaining the irreducible factors of cyclotomic polynomials over Galois fields GF(p^q) was also reported [LQ12]. A fast randomized algorithm to obtain IPs over a certain Galois field GF(p^q) was then reported [JC13], and a deterministic algorithm to obtain the factors of a polynomial over Galois fields GF(p^q) was reported at the same time [DM14]. A review of the construction of IPs over finite fields and of algorithms to factor polynomials over finite fields was added to the literature [GH14][NC14]. An algorithm to search for primitive polynomials was also reported at the same time [WJ14]. The fact that the residue of division of BPs by IPs must be 1 was reported a bit later [SJ15]. IPs with several coefficients of different categories were illustrated in the literature somewhat later [HJ16]. The use of zeta functions to factor polynomials over finite fields was reported later [BP17]. Finally, integer polynomials were also described with examples [EWNN].
MATHEMATICAL METHOD TO SEARCH FOR MONIC IPS OVER GF(p^q)
In this section, the overview of the method behind the proposed algorithm is given in subsec. 3.1. The example of searching for monic IPs over the Galois field GF(7^7) is described in subsec. 3.2. The pseudo code of the proposed algorithm is given in subsec. 3.3, and its time complexity, together with a comparison against other algorithms, is illustrated in subsec. 3.4.
Overview of the Method
The idea behind this mathematical method and its algorithm is to choose any two non-constant monic EPs at a time and split the respective DEs into the p-nary coefficients of the respective EPs. The two EPs are multiplied through polynomial multiplication, i.e., multiplication by the said method, to obtain a BP. Since the obtained BP has two non-constant EPs as factors, it is termed a monic RP. After considering all possible two-EP combinations, all possible monic RPs have been generated. The monic RPs are cancelled out from the list of all monic BPs, leaving behind all monic IPs. The monic IPs are multiplied with all CPs to obtain all non-monic IPs.
In the multiplication of two monic EPs, the respective DEs are split into the coefficients of the respective EPs. Each coefficient of one EP is multiplied, by modulo-p multiplication, with each coefficient of the other, along with the corresponding powers of the variable. Next, the coefficients of terms of the same degree are added by modulo-p addition to obtain the concerned monic BP, i.e., a monic RP. The RPs are cancelled out from the list of monic BPs to obtain the monic IPs.
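The whole sieve condenses into a short program. The sketch below is our own reconstruction of the described procedure (function names are ours, and for brevity it tries all ordered EP pairs rather than the paper's d_1 ≤ (q-1)/2 blocking, which does not change the result):

```python
def digits(n, p):
    """Base-p coefficients of the polynomial with DE n, lowest degree first."""
    d = []
    while n:
        d.append(n % p)
        n //= p
    return d

def monic_irreducibles(p, q):
    """DEs of all monic IPs of degree q over GF(p), by cancelling monic RPs."""
    # Monic EPs of degree d have DEs in [p^d, 2*p^d), i.e. leading digit 1.
    eps = [n for d in range(1, q) for n in range(p**d, 2 * p**d)]
    reducible = set()
    for a in eps:
        for b in eps:
            ca, cb = digits(a, p), digits(b, p)
            prod = [0] * (len(ca) + len(cb) - 1)
            for i, x in enumerate(ca):
                for j, y in enumerate(cb):
                    prod[i + j] = (prod[i + j] + x * y) % p  # modulo-p multiply/add
            n = sum(c * p**i for i, c in enumerate(prod))    # DE of the product
            if p**q <= n < 2 * p**q:                         # a monic BP of degree q
                reducible.add(n)
    return [n for n in range(p**q, 2 * p**q) if n not in reducible]

print(monic_irreducibles(3, 2))  # -> [10, 14, 17]: x^2+1, x^2+x+2, x^2+2x+2
```

Non-monic IPs then follow, exactly as described above, by multiplying each monic IP's coefficients by α ∈ {2, ..., p-1} modulo p.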
Mathematical method to search for monic IPs over Galois field GF(7^7).
Here the interest is to find the monic IPs over the Galois field GF(7^7), where p=7 is the prime field and q=7 is the extension of that prime field. In general, the indices of multiplicand and multiplier are added to obtain the product. The extension q=7 can be written as a sum of two integers d_1 and d_2. The degree of the highest degree term present in EPs of GF(7^7) ranges from (q-1) = 6 down to 1. Polynomials whose highest degree term has degree 0 are constant polynomials; they play no significant role here and are neglected. Hence the two sets of monic elemental polynomials whose product is a monic BP for p=7, q=7 have highest-degree terms of degree d_1, d_2, where d_1 = 1, 2, 3 and the corresponding values of d_2 are 6, 5, 4. The number of coefficients in the monic basic polynomial is (q+1) = (7+1) = 8; they are denoted BP_0, BP_1, BP_2, BP_3, BP_4, BP_5, BP_6, BP_7, where the suffix indicates the degree of the corresponding term of the monic BP, and for monic polynomials BP_7 = 1. In this case, the total number of blocks is the number of integers in d_1 (or d_2), i.e., 3. In this way the DEs of all the monic BPs that are monic RPs are pointed out. The monic RPs belonging to the list of monic BPs are cancelled out, leaving behind the monic IPs. Non-monic IPs are computed by multiplying a monic IP by α, where α ∈ GF(7) and assumes values from 2 through 6.
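As a small worked illustration of this coefficient arithmetic (our own example, on the smaller field GF(7^2) rather than GF(7^7)): multiplying the monic EPs (x + 3) and (x + 5) over GF(7) gives x^2 + (3+5)x + (3·5) = x^2 + 8x + 15 ≡ x^2 + x + 1 (mod 7), whose decimal equivalent is 1·7^2 + 1·7 + 1 = 57. Hence 57 would be recorded as the DE of a monic RP of GF(7^2) and cancelled from the list of monic BPs.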
Generalized mathematical method to search for monic IPs over Galois field GF(p^q).
For general p and q, the interest is to find the monic IPs over the Galois field GF(p^q), where p is the prime field and q is the extension of that prime field. In general, the indices of multiplicand and multiplier are added to obtain the product. The extension q can be written as a sum of two integers d_1 and d_2. The degree of the highest degree term present in EPs of GF(p^q) ranges from (q-1) down to 1. Polynomials whose highest degree term has degree 0 are constant polynomials; they play no significant role here and are neglected. Hence the two sets of monic elemental polynomials whose product is a monic BP have highest-degree terms of degree d_1, d_2, where d_1 = 1, 2, 3, ..., (q-1)/2, and the corresponding values of d_2 are (q-1), (q-2), (q-3), ..., q-(q-1)/2. The number of coefficients in the monic basic polynomial is (q+1); they are denoted BP_0, BP_1, BP_2, BP_3, BP_4, BP_5, BP_6, BP_7, ..., BP_q, where the suffix indicates the degree of the corresponding term of the monic BP, and for monic polynomials BP_q = 1. In this case, the total number of blocks is the number of integers in d_1 (or d_2), i.e., (q-1)/2.
Time Complexity of the Given Pseudo Code
Since the pseudo code of the algorithm consists of three nested loops, its time complexity is O(n^3). The comparison of the time complexity of the proposed algorithm with Rabin's and modified Rabin's algorithms is given in Table 1.
DISCUSSION
The results obtained from the experiment on a C99 platform are shown in Table 2 below. Hand calculation and analysis of the results have been done for GF(3^3), GF(3^5), GF(3^7), GF(7^3), and GF(11^3), and it has been verified that the proposed algorithm works correctly on each of these Galois fields. On this basis, the lists of all monic IPs, in monotonically increasing order of DEs, have been uploaded to the links given in refs. [SDS17] and [SDH17]. From the table below and the hands-on calculations, the results appear to be correct and up to date.
From Table 1 it can be seen that the complexity of the other algorithms increases with the value of the prime p and the extension q, whereas the complexity of this algorithm is the same for all p and q. That is why, even for large values of p and q, the algorithm takes only a few minutes to produce the list of all monic IPs over the examined Galois field, so it proves to be a better algorithm. On the other hand, most other algorithms were developed for the binary Galois field GF(2) or the prime Galois field GF(p), whereas the proposed algorithm is designed for the extended Galois field GF(p^q). The proposed algorithm therefore has a broad range of application.
CONCLUSION
To the best knowledge of the present authors, there is no paper in the literature in which the composite polynomial method is translated into an algorithm and turned into a computer program. The new mathematical method is a much simpler method, similar to the composite polynomial method, to find monic IPs over Galois fields GF(p^q). It is able to determine the DEs of the monic IPs over Galois fields with a large value of the prime and also with large extensions. This method can therefore reduce the complexity of finding monic IPs over Galois fields GF(p^q) with a large value of the prime and with large extensions of the prime field, which would help the crypto community build S-boxes or ciphers using IPs over Galois fields with a large value of the prime and large extensions of the prime field.
The substitution box, or S-box, in block ciphers has been of utmost importance in cryptography from the initial days. A 4-bit S-box is defined as a box of 2^4 = 16 elements, varying from 0 to F in hex, arranged in a random manner, as used in the Data Encryption Standard or DES [AT90][HF71][NT77][NT99]. Similarly, for an 8-bit S-box, the number of elements is 2^8 = 256, varying from 0 to 255, as used in the Advanced Encryption Standard or AES [DR00][VM95]. So the construction of S-boxes has been a major issue in cryptology from the initial days. The use of irreducible polynomials to construct S-boxes has already been adopted by the crypto community, but the study of IPs has been limited almost entirely to the binary Galois field GF(2^q), as used in the AES S-boxes [DR00][VM95]. So the search for monic as well as non-monic IPs over general fields has remained an untouched area in cryptography.
| 2017-12-27T08:16:42.234Z | 2017-12-20T00:00:00.000 | {
"year": 2017,
"sha1": "5029400b539cef8687c2adb957f10d70e3c53bd7",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=82078",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "5029400b539cef8687c2adb957f10d70e3c53bd7",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
246165387 | pes2o/s2orc | v3-fos-license | Shareholder wealth implications of software firms’ transition to cloud computing: a marketing perspective
Moving into cloud computing represents a major marketing shift because it replaces on-premises offerings requiring large, up-front payments with hosted computing resources made available on-demand on a pay-per-use pricing scheme. However, little is known about the effect of this shift on cloud vendors’ financial performance. This study draws on a longitudinal data set of 435 publicly listed business-to-business (B2B) firms within the computer software and services industries to investigate, from the vendors’ perspective, the shareholder wealth effect of transitioning to the cloud. Using a value relevance model, we find that an unanticipated increase in the cloud ratio (i.e., the share of a firm’s revenues from cloud computing) has a positive and significant effect on excess stock returns; and it has a negative and significant effect on idiosyncratic risk. Yet these effects vary across market structures and firms. In particular, unanticipated increases in market maturity intensify the positive effect of moving into the cloud on excess stock returns. Further, unexpected increases in advertising intensity strengthen the negative effect of shifting to the cloud on idiosyncratic risk.
Over the past few years, the computer software and services industries have witnessed a rapid growth in cloud computing, with the global public cloud market expected to reach nearly $364 billion by 2022, up from $242 billion in 2019 (Gartner, 2020). Cloud computing is a technological innovation that grants customers on-demand access to hosted computing resources made available on a pay-per-use pricing model (Chen & Wu, 2013;Mell & Grance, 2011). Shifting to the cloud has dominated discussions among information technology (IT) firms because it involves substantial changes to the components of a vendor's marketing mix (see Moorman et al., 2018).
First, moving to the cloud amounts to a paradigm shift in the nature of a vendor's offerings: from providing IT as a product to delivering computing functionality as a service (see Cusumano et al., 2015). In particular, cloud solutions are delivered in a hosted environment operated by the vendor-unlike the in-house IT infrastructure deployed internally by customers (Fazli et al., 2018;Ma & Seidmann, 2015). Hosting arrangements provide computing resources as on-demand services; they neither constitute a license purchase nor provide customers with contractual rights to take possession of the underlying IT assets (Chen & Wu, 2013). Second, transitioning to the cloud entails a fundamental shift in the vendor's pricing strategy. Specifically, cloud offerings are typically billed on a pay-per-use basis; hence, they disrupt software firms' revenue streams hitherto characterized by lump-sum, up-front licensing fees (Breznitz et al., 2018;Burgelman & Schifrin, 2014). Third, moving into the cloud entails a profound shift in the firm's distribution strategy. In fact, the Internet-based delivery model in cloud arrangements establishes a direct online channel that can bypass traditional third-party distributors (e.g., software resellers and integrators).
In response to these extensive changes, there has been considerable variation in firms' reliance on cloud computing in their business models. For example, Oracle and Salesforce.com generated, respectively, about 5% and 93% of their revenues in 2015 from selling cloud-based solutions. The implication is that there are different opinions, among managers and investors, regarding the effectiveness of shifting to the cloud, as noted by Exact Holding's chief executive officer, Erik Van Der Meijden: "After we had completed an internal restructuring and we had put [our] cloud solutions on a solid growth trajectory, I wanted to grow even faster in the cloud. Our shareholders were divided on that, and we had to temporize our transformation in order not to alienate investors and the stock market from us." 1 We therefore need empirical research that documents (a) how moving into the cloud affects firm performance, and (b) how this effect varies across market structures and firms. Yet the literature on cloud computing is still relatively nascent, and largely focuses on the technological aspects of shifting to the cloud, leaving the research on the performance outcomes of cloud transition an underexploited area (Fazli et al., 2018).
Against this backdrop, the current study makes two key contributions. First, we investigate empirically the joint effects of unanticipated changes in the cloud ratio on a vendor's stock returns and stock risk. We define the cloud ratio as a firm's share of revenues that are generated by providing cloud-based solutions. We exploit unexpected changes in the cloud ratio to explore the value relevance of moving into the cloud. The reason is that, according to the efficient market hypothesis (Fama, 1970), the stock market reacts only to the release of unanticipated information that can change investors' expectations of future cash flows. Shareholders are likely to encounter cloud revenue surprises because, for example, prices in contract-based payment arrangements "are privately negotiated, opaque, and involve price discrimination" (Du et al., 2013, p. 625). Further, we use market-based measures as performance metrics because they are forward-looking and less easily manipulated by accounting practices (Edeling et al., 2020). Second, we develop a contingency framework that examines the moderating effects of unanticipated changes in market maturity and advertising intensity. Market maturity, or the extent of product commoditization and sluggish growth in a market, plays a leading role in shaping the dynamics and outcomes of innovations (Cusumano et al., 2015;Utterback & Abernathy, 1975). Unanticipated changes in market maturity may happen in response to the emergence of a dominant design or a new technological trajectory (Sood & Tellis, 2005). Similarly, advertising is a chief contributor to how effectively innovations create value for customers and competitive advantage for firms. Unanticipated changes in advertising intensity occur because, for instance, managers may unexpectedly use discretion in advertising expenditures to meet or beat analysts' earnings forecasts (Caylor, 2010;Mizik, 2010).
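As one concrete, hedged illustration of this construction (the first-order autoregressive expectation below is a common proxy and not necessarily the authors' exact specification; names are ours):

```python
# Unanticipated change = realized value minus its model-based expectation.
import pandas as pd
import statsmodels.api as sm

def unanticipated(series: pd.Series) -> pd.Series:
    """Residuals of x_t = a + b*x_{t-1} + e_t; e_t proxies the surprise."""
    df = pd.concat({'x': series, 'x_lag': series.shift(1)}, axis=1).dropna()
    fit = sm.OLS(df['x'], sm.add_constant(df['x_lag'])).fit()
    return fit.resid

# hypothetical usage, one yearly series per firm:
# surprise = cloud_ratio_by_firm.apply(unanticipated)
```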
To test our conceptual framework, we assemble a longitudinal data set of 2,008 yearly observations pertaining to 435 publicly traded B2B firms within the computer software and services industries (primary Standard Industrial Classification [SIC] codes of 7370-7379) from 2005 to 2019. Using a stock return response model, we find that an unanticipated increase in the cloud ratio enhances shareholder wealth by increasing excess stock returns and by decreasing idiosyncratic risk. 2 To the best of our knowledge, the current study presents the first systematic empirical evidence on the long-term return and risk implications of shifting to the cloud from the cloud providers' perspective. As such, our study complements that of Son et al. (2014), which examines the effect of adopting cloud computing on short-term announcement abnormal returns from the cloud users' viewpoint.
We also find that unanticipated increases in market maturity and advertising intensity enhance the performance effects of shifting to the cloud. Specifically, unexpected increases in market maturity intensify the positive effect of shifting to the cloud on excess stock returns. Further, unanticipated increases in advertising intensity strengthen the negative effect of moving into the cloud on idiosyncratic risk. These findings highlight the importance of integrating an industry life cycle perspective into the performance analysis of cloud computing as a technological innovation with the potential to disrupt current IT delivery models and hence the marketplace's competitive dynamics (see Cusumano et al., 2015). Furthermore, the results indicate that the success of adopting a cloud-based business model depends heavily on vendors' investment in marketing-as is evident from the testimony of practitioners, who state that "marketing is a core competency (sometimes the only one) of every successful cloud business" (Bessemer Venture Partners, 2012, p. 17).
The rest of this paper proceeds as follows. We start by developing our theory and hypotheses. Next, we describe the data, our measurement and operationalization of constructs, the model estimation procedures, and our results. We conclude by discussing our study's contributions, summarizing its limitations, and identifying directions for further research.
Conceptual background and hypotheses
The global IT market size is projected to total $4.2 trillion in 2021, an increase of 8.6% from 2020 (Gartner, 2021). An intriguing phenomenon is that, over the past several years, many software and IT service companies have been replacing their traditional, on-premises offerings with cloud-based solutions. The National Institute of Standards and Technology defines cloud computing as "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction" (Mell & Grance, 2011, p. 2).
Cloud computing is a technological innovation that enables the on-demand use of IT as a utility. The underlying technology in cloud computing represents a significant advance in the state of the art over on-premises IT solutions. For example, the multi-tenancy hosted architecture of the cloud allows vendors to share pooled resources across multiple customers. It follows that cloud providers can orchestrate the required support services centrally with possibly fewer debugging efforts (August et al., 2014); hence, shifting to the cloud enables vendors to spread their costs across scaled operations. In contrast, on-premises offerings are installed and maintained locally on the customers' in-house IT infrastructure-which requires the vendor to deliver regular maintenance, bug-fixing patches, and upgrades separately for each customer.
A related advantage of cloud computing is that the vendor can, by automatically changing its active number of servers, allow customers to scale their computational capacity up or down in (nearly) real time without requiring that they make capacity pre-commitments (Fazli et al., 2018). A fully scalable architecture eliminates the need to respond manually to traffic spikes that would otherwise call for additional resources. This increased flexibility helps customers lower operational costs and enhance performance reliability by seamlessly matching capacity to fluctuating demand (Ma & Seidmann, 2015). In contrast, on-premises offerings require customers to build their service set-ups with ample capacity to hedge against the risk of network congestion. The downside of that approach is that often a large proportion of in-house computing power then remains idle simply to ensure constant and "always on standby" service capacity, which increases the cost of keeping the IT infrastructure running (Ma & Seidmann, 2015).
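As a toy illustration of this capacity argument (entirely our own construction; all numbers are hypothetical):

```python
import math

def servers_needed(load, per_server=100.0, headroom=0.2):
    """Smallest server count covering the current load plus a safety margin."""
    return max(1, math.ceil(load * (1 + headroom) / per_server))

demand = [40, 250, 900, 300, 60]                 # hypothetical traffic (requests/s)
cloud = [servers_needed(d) for d in demand]      # autoscaled fleet: [1, 3, 11, 4, 1]
on_prem = max(cloud)                             # on-premises: provisioned for the peak
idle = 1 - sum(cloud) / (on_prem * len(demand))  # ~64% of in-house capacity sits idle
```

The same demand trace that an autoscaled deployment serves with a time-varying fleet forces an on-premises set-up to keep peak capacity permanently online, which is exactly the idle-capacity cost described above.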
According to Sood and Tellis (2005, p. 152), "understanding technological innovation is vital for marketers" because it "is perhaps the most powerful engine of growth." However, "academic research on cloud computing is still relatively new and most of the work done on this topic focuses on technological issues of the cloud" (Fazli et al., 2018, p. 3). For example, Choudhary and Zhang (2015) examine cloud vendors' optimal software release time and patching strategy; and August et al. (2014) investigate the security implications of offering cloud solutions. It follows that researchers and practitioners need a better understanding of the performance effects of shifting to the cloud-a technological innovation capable of transforming vendors' business models.
In light of these considerations, this study has two objectives. First, we examine, from the vendors' perspective, the link between adopting a cloud-based business model and firm performance. Thus, we explore the effects of unanticipated changes in the cloud ratio on firm return and firm risk. Toward that end, we use stock return response modeling because (i) the stock market reacts only to the release of unanticipated information that can change investors' expectations of future cash flows (Fama, 1970); and (ii) marketing actions often incorporate information that takes a long time before being fully reflected in stock prices (Pauwels et al., 2004). Using a value relevance model enables us to determine whether the new information contained in a firm's cloud ratio changes is associated with long-term changes in its stock price (see Sorescu et al., 2017).
As Sorescu and Spanjol (2008) point out, technological innovations can affect firm return and risk differently. Therefore, accounting for return and risk as separate dimensions of shareholder value yields a more granular insight into the performance implications of shifting to the cloud. Thus, we focus on excess stock returns as a measure of a firm return, thereby assessing the net value that the stock market bestows on a vendor's emphasis on cloud computing (see, e.g., Bharadwaj et al., 2011;Frennea et al., 2019;Mishra & Modi, 2016). Our proxy for firm risk is idiosyncratic risk, which captures the stock returns volatility that is unexplained by overall market movements. Idiosyncratic risk accounts for nearly 85% of the observed variation in stock prices (Goyal & Santa-Clara, 2003); hence, it is widely used as a measure of stock returns risk in the marketing literature (see, e.g., Frennea et al., 2019;Han et al., 2017).
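For concreteness, here is a minimal sketch of how these two measures are typically computed (the Carhart-style factor set, column names, and monthly frequency are our assumptions, not necessarily the paper's exact estimation choices):

```python
import numpy as np
import statsmodels.api as sm

def excess_return_and_idio_risk(stock_excess_ret, factors):
    """stock_excess_ret: firm returns minus the risk-free rate (e.g., monthly).
    factors: DataFrame with market, size, value, and momentum factor returns."""
    X = sm.add_constant(factors[['MKT', 'SMB', 'HML', 'MOM']])
    fit = sm.OLS(stock_excess_ret, X).fit()
    alpha = fit.params['const']          # excess (abnormal) stock return
    idio = np.std(fit.resid, ddof=1)     # volatility unexplained by the factors
    return alpha, idio
```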
Second, we develop a contingency framework that investigates the boundary conditions for the relationship between moving into the cloud and firm performance. Specifically, competition dynamics determined by an industry's life cycle stage are likely to influence the effectiveness of providing on-demand, hosted cloud solutions that substitute traditional, on-premises offerings (see Cusumano et al., 2015;Macdonald et al., 2016;Suarez et al., 2013). Therefore, we explore the role of unanticipated changes in market maturity as a potential factor that moderates the linkage between shifting to the cloud and firm performance. Market maturity is the phase of an industry's life cycle characterized by high levels of technological standardization and slow demand growth (Cusumano et al., 2015). "When these limits are reached, the only possible way to maintain the pace of progress is through radical system redefinition-that is, a move to a new technological platform" (Sood & Tellis, 2005, p. 154). As such, the emergence of maturity in a market is likely to affect the performance potential of cloud computing as a technological disruption.
Further, we explore the role of unanticipated changes in advertising intensity as a potential factor that can determine the effectiveness of transitioning to the cloud. Advertising has become an increasingly important part of B2B marketing (Swani et al., 2020). Expenditures on advertising make up nearly 13.8% of the typical B2B communications budget (Gopalakrishna & Lilien, 2012). 3 Investing in advertising is of direct importance to cloud providers for several reasons. First, customers' frequently cited concerns about cloud computing (e.g., security risks, service availability) can adversely affect their adoption of cloud-based solutions. Advertising can reinforce a cloud vendor's value proposition and provide a form of service quality assurance that mitigates customers' perceived risk of purchase (see Tuli et al., 2012). Second, vendors typically offer cloud solutions directly and online, rather than through independent distributors. Hence, they are less likely to benefit from the promotional activities performed by third-party distributors. Under such circumstances, a vendor's investment in advertising likely becomes an essential component of its cloud transition success. Fig. 1 depicts our conceptual framework. In developing our theoretical framework, we draw on the innovation literature to identify the pathways by which moving into the cloud might affect firm performance (see, e.g., Dotzel & Shankar, 2019;Fang et al., 2011;Rubera & Kirca, 2012;Sood & Tellis, 2005;Sorescu & Spanjol, 2008). According to this literature, technological disruptions could affect firm performance via several mechanisms. For instance, offering new customer value through innovations generates demand from existing and new customers, thereby resulting in increased cash flows (Dotzel & Shankar, 2019). Similarly, delivering innovation-based value differentiates a firm in the market and magnifies customers' switching costs, leading to a more stable customer base that promises a smoother cash flow stream (Sorescu & Spanjol, 2008). Accordingly, it would be reasonable to examine how shifting to cloud computing as a technological disruption influences firm performance from the innovation viewpoint.
Effects of unanticipated changes in the cloud ratio on excess stock returns
A firm's shift to cloud computing affects excess stock returns in several ways. First, cloud solutions generate greater value for customers (Son et al., 2014). For example, the hosted nature of cloud offerings relieves customers of the burdens associated with installing and maintaining expensive inhouse IT infrastructure and of the need to deal with such time-consuming administrative functions as security management and capacity planning (Ma & Seidmann, 2015). Cloud users also have quicker access to product upgrades because the cloud's hosted aspect enables vendors to deploy product enhancements centrally, reducing not only the intervals between software releases but also the time-to-market of new functionalities (Breznitz et al., 2018;Son et al., 2014).
Delivering added value to customers should enhance customer satisfaction and customer loyalty, which result in larger cash flows (Srivastava et al., 1998).
Second, transitioning to the cloud provides opportunities for the vendor to expand its customer base and thus to increase its cash flow. For instance, the pay-per-use pricing model enables cloud providers to attract financially constrained customers who would otherwise be excluded from the market (Breznitz et al., 2018). The reason is that the usage-based pricing model allows customers to convert their fixed IT-related expenses to costs that vary as a function of their usage rates-unlike perpetual licensing, which typically involves large up-front fees (Chen & Wu, 2013;Son et al., 2014). Along these same lines, the Web-based nature of cloud offerings allows vendors to expand their market to include almost any place with Internet access; hence, they can effectively reach geographically distant customers that have limited access to traditional distribution channels.
Third, moving into the cloud enables vendors to exploit economies of scale and a more efficient allocation of resources (Chen & Wu, 2013). In particular, the cloud's multi-tenancy hosted architecture allows vendors to share pooled resources across multiple customers. Hence, cloud providers can undertake support services centrally and may end up devoting less effort to debugging tasks (August et al., 2014). The presence of scale economies enables cloud vendors to concentrate on cost reduction and profit improvement. In contrast, because on-premises offerings are installed and maintained locally on the customer's inhouse IT infrastructure, they require the vendor to deliver regular maintenance, bug fixing, and upgrades separately for each customer. The consequence is reduced operational efficiency, which impairs vendors' profitability.
Fourth, the hosted nature of cloud solutions implies that vendors usually have complete access to detailed real-time data on customers' usage behavior. Accessing such information plays a central role in developing a customer-oriented marketing strategy (Kopalle et al., 2020), as noted by Mark Garret, Adobe's former chief financial officer: "Because we are operating in the cloud, we have a better read on their needs: we know who signed up for Creative Cloud, which apps they have downloaded, and which features they are using. We are using predictive analytics and our own marketing tools to listen to our customers and strengthen our relationships with them." 4 The ability to acquire and utilize customer usage data makes it possible for firms to be more precise in their processes of value creation and customer engagement (Kopalle et al., 2020). For example, leveraging usage history data helps cloud vendors identify unmet customer needs and thereby increase their cash flows by developing novel functionalities that satisfy those requirements (Liu et al., 2016).
Despite these benefits, several concerns may be raised about vendors' transition to the cloud. For instance, one could argue that users' sensitivity to security risks discourages them from adopting cloud-based solutions. Yet as Steve Daheb, senior vice president for Oracle Cloud, has stated, emerging technologies such as machine learning and artificial intelligence can be integrated within the cloud so that identifying potential threats and/or self-patching can be performed automatically, rendering the cloud more secure than in-house IT assets, specifically for customers with limited IT capabilities. 5 Another concern is that moving into the cloud makes a vendor's customer base more susceptible to competition because it lowers customers' switching costs by eliminating up-front investments in IT infrastructure. However, switching suppliers involves nontrivial expenditures on search, adaptation, and development (Burnham et al., 2003); hence, offering cloud-based solutions does not eliminate entirely the "lock-in" advantage. More importantly, vendors "tend to sign multi-year cloud contracts" (Yahoo! Finance 2015), which "not only give providers … a steady, stable revenue source they can rely on, but they also disincentivize customers from shopping around with competitors" (Insider, 2020). 6 Taken together, shifting to the cloud increases a vendor's cash flows by delivering superior value to customers, expanding its customer base, enhancing its operational efficiency, and generating customer intelligence. A larger cash flow enhances shareholder wealth through increased firm value (Rao & Bharadwaj, 2008). As such, unanticipated changes in a firm's cloud ratio are likely to convey credible information to shareholders about its prospective performance. According to the efficient market hypothesis, investors incorporate this information into security prices when assessing the firm's future financial health. Formally: H1 An unanticipated increase in the cloud ratio has a positive effect on excess stock returns.
Effects of unanticipated changes in the cloud ratio on idiosyncratic risk
An unanticipated increase in the cloud ratio reduces a vendor's idiosyncratic risk for several reasons. First, as discussed in the prior section, cloud-based solutions offer customers added value, in the form of cost reductions and/or productivity enhancements. Delivering superior value enhances customer satisfaction and engenders customer loyalty (Coulter and Coulter 2003;Mani et al. 2006). Loyal customers are less vulnerable to competition and hence provide vendors with a relatively smoother cash flow stream as they return to repurchase, cross-buy, or purchase add-ons (Bharadwaj et al., 2011). Second, cloud vendors' access to detailed, real-time information on customers' usage behavior enables them to forecast users' demand patterns more accurately and hence reduce the uncertainty associated with their future cash flows (see Kopalle et al., 2020). Third, unlike on-premises offerings with typically one-time payments, subscription-based cloud solutions generate recurring revenues that promise a more predictable cash flow stream (Breznitz et al., 2018). Fourth, cloud offerings are often delivered based on medium- to long-term contracts (see Yahoo! Finance 2015), a well-known safeguard against customer churn (see Bharadwaj et al., 1993). Taken together, the combination of delivering added benefits, accessing information on customers' usage behavior, establishing a recurring subscription-based revenue model, and enforcing contractual commitment helps a cloud vendor build a more stable customer base that offers a smoother revenue stream (Srivastava et al., 1998). Accordingly, unanticipated changes in a firm's cloud ratio signal credible information to shareholders about the stability of its future cash flows. Based on the efficient market hypothesis, shareholders integrate this information into their valuation of the firm. Hence, H2 An unanticipated increase in the cloud ratio has a negative effect on idiosyncratic risk.
Moderating effect of unanticipated changes in market maturity
An increase in market maturity is manifested by increased product commoditization and demand saturation (Cusumano et al., 2015). In that event, it becomes much more difficult to earn and sustain above-normal profits. We expect the effect of moving into the cloud on excess stock returns to be stronger in mature markets. The price sensitivity of customers in a market's mature phase makes cloud-based solutions more economically appealing to them (see Cusumano et al., 2015). For example, the hosted nature of cloud computing lowers customers' operating expenses compared with running IT infrastructure in house (Ma & Seidmann, 2015). The cloud's usage-based pricing scheme likewise enables customers to eliminate the costs of unused IT resources that stem from demand uncertainties (Chen & Wu, 2013). Moreover, shifting to the cloud allows vendors to expand their customer base by reaching new customer segments, including small- and medium-sized businesses with limited purchasing power as well as remotely located businesses with limited access to traditional distributors. The additional revenues from these customers help cloud vendors cope with the declining demand characteristic of a mature market.
In sum, as price-based competition increases in mature markets, shifting to the cloud becomes an indispensable source of value creation and hence of revenue generation. An unanticipated increase in market maturity therefore provides new information to investors about the performance potential of a move into cloud computing. Hence:

H3 An unanticipated increase in market maturity strengthens the positive effect of unanticipated cloud ratio increases on excess stock returns.
Similarly, we argue that an unanticipated increase in market maturity strengthens the negative effect of unexpected cloud ratio increases on a firm's idiosyncratic risk. The lack of technological differentiation in mature markets exacerbates competition by increasing the substitutability of vendors' offerings (Sawhney et al., 2003; Suarez et al., 2013). Delivering added value to customers in the form of cost reductions and/or productivity gains differentiates a cloud provider in mature markets and encourages customers' repurchasing (see Ulaga & Reinartz, 2011). Further, cloud-based solutions often lock customers into long-term contracts (see Yahoo! Finance, 2015). Contractual commitments prevent customers from switching to other suppliers, so cloud vendors can remove some of the market from the competitive arena and thereby smooth their earnings (see Bharadwaj et al., 1993). This effect becomes more prominent in mature markets, where customers often incur lower costs to switch to other suppliers (see Cusumano et al., 2015). Therefore, an unanticipated increase in market maturity provides new information to investors about the implications of shifting to the cloud for a vendor's earnings stability. Accordingly:

H4 An unanticipated increase in market maturity strengthens the negative effect of unanticipated cloud ratio increases on idiosyncratic risk.
Moderating effect of unanticipated changes in advertising intensity
We expect an unanticipated increase in advertising intensity to strengthen the positive effect of unexpected cloud ratio increases on excess stock returns. Specifically, advertising accelerates the adoption rate of innovative offerings by boosting brand awareness (Joshi & Hanssens, 2010) and by reducing customers' perceived risk of purchase. Leveraging these benefits is vital for cloud providers because they often provide cloud solutions directly and online, rather than through third-party distributors. Hence, cloud providers are less likely to benefit from promotional activities performed by independent distributors or sales representatives. Furthermore, customers' frequently cited concerns about shifting to the cloud (e.g., security risks, service availability) suggest that advertising can serve to mitigate their purchase risk by reinforcing a vendor's value proposition and providing a form of service quality assurance (see Tuli et al., 2012). In addition, the discretionary nature of advertising is such that it conveys credible signals to investors about a firm's potential for demand growth (Tuli et al., 2012). The competitive advantages derived from advertising investments make it easier for the firm to attract new customers and to nurture existing ones. Hence, an increase in advertising expenditures is indicative of a firm's potential to capitalize on the growth opportunities available from shifting to the cloud. The preceding remarks lead us to conclude that an unanticipated increase in advertising intensity conveys new information to shareholders regarding a cloud provider's intention to build the market-based competencies necessary for competitive success in the cloud environment. It also transmits a positive signal to the stock market about the firm's confidence in the prospects of its cloud business. We again reference the efficient market hypothesis in positing that the disclosure of this new information affects investors' assessment of the firm's shift to cloud computing, which should strengthen the relationship between cloud transition and excess stock returns. Formally, we postulate:

H5 An unanticipated increase in advertising intensity strengthens the positive effect of unanticipated cloud ratio increases on excess stock returns.
Similarly, we argue that an unanticipated increase in advertising intensity strengthens the negative effect of unexpected cloud ratio increases on idiosyncratic risk. A firm's advertising efforts create customer brand equity, an intangible market-based asset that enhances customer loyalty and retention (McAlister et al., 2007). Advertising also helps differentiate a firm's brand from those of competitors and hence makes it more costly for customers to switch their transactions to a different vendor (Anderson & Simester, 2013; Sridhar et al., 2016). Increased brand loyalty and differentiation function as hedging mechanisms (McAlister et al., 2007), enabling cloud vendors to build a more stable customer base that is less vulnerable to competition. Building on the efficient market hypothesis, we expect the disclosure of new information on a vendor's advertising intensity to influence the risk implications of the firm's transition to the cloud. Formally:

H6 An unanticipated increase in advertising intensity strengthens the negative effect of unanticipated cloud ratio increases on idiosyncratic risk.
Data and sample
To test our theoretical framework, we assemble a longitudinal data set from multiple sources. As the starting point for sample construction, we use the merged Center for Research in Security Prices (CRSP)-Compustat database to create a list of publicly traded firms operating in the computer software and services industries (primary SIC four-digit codes of 7370-7379). There are several reasons why these industries are a relevant context in which to study cloud computing. First, cloud solutions are increasingly replacing on-premises licensing, which has a strong effect on the revenue streams of traditional computer software and service providers (PwC, 2016). Second, computer software and service vendors typically disclose their revenues from cloud computing in their 10-K annual reports, which allows us to build the cloud ratio measure as a proxy for the degree of emphasis on cloud computing in a firm's business model.
We obtain accounting and stock returns data from, respectively, the merged CRSP-Compustat and the CRSP databases. To do so, we use PERMNOs as our firm identifier. We also use Kantar Media's Ad$pender database to collect the information on firms' advertising spending. We carefully match the company names from the merged CRSP-Compustat database with those in the Ad$pender dataset to retrieve the information on firms' advertising expenditures.
The merged CRSP-Compustat database includes 6,079 observations pertaining to 975 firms with SIC codes of 7370-7379, single class shares, and non-missing PERMNOs during the 2005-2019 time period. Given that our focus in this research is on B2B firms, we exclude from our sample those firms that sell business-to-consumer (B2C) offerings either solely or jointly with B2B solutions. This reduces the sample size to 4,707 observations for 753 firms. To compute the firms' cloud ratios, we draw on the information available in their 10-K annual reports. However, we exclude 383 observations pertaining to 142 firms from the sample because they do not provide enough information about whether or not they offer cloud-based solutions. This results in a sample of 4,324 observations for 691 firms that can be accurately classified as cloud vs. non-cloud providers.
Nonetheless, not all the firms that provide cloud-based solutions disclose their cloud revenues as a separate item in their income statements. Indeed, firms may bundle their cloud revenues with the revenues from non-cloud offerings; examples of the latter include perpetual licenses, post-contract customer support and maintenance services, and professional services (e.g., consulting services). For instance, in its 2020 annual report, NCR classifies cloud revenues as a part of the firm's "service revenue", which also includes "hardware and software maintenance revenue, implementation services revenue, … as well as professional services revenue" (p. 34). The bundling of cloud and non-cloud revenues prevents us from computing a firm's cloud ratio. Therefore, we exclude from our sample those firm-year observations in which a firm offers cloud-based solutions without separately disclosing its cloud revenues. This reduces the sample size to 2,725 observations from 515 firms.
Finally, the data requirements for estimating the autoregressive models used to operationalize the continuous explanatory variables as unanticipated changes (as detailed later), together with data availability on the control variables in our models, reduce the final usable sample to 2,008 yearly observations from 435 firms over the 2005-2019 time period. Table 1 presents the sample distribution across the primary SIC four-digit industries.
Measures
Excess stock returns Following Bharadwaj et al. (2011), we compute the compounded monthly stock return as:

SR_ijt = [∏_{k=1}^{12} (1 + Ret_ijk)] − 1. (1)

In Equation 1, and throughout the study, the subscripts i, j, t, and k respectively denote firm, 4-digit SIC industry, year, and month; SR_ijt is the compounded monthly stock return; and Ret_ijk reflects the holding-period return. To obtain excess stock returns, we subtract the return on US Treasury bonds, also known as the risk-free rate of return, from the compounded monthly return.
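For concreteness, Equation 1 can be computed as in the following sketch. This is an illustration under our own assumptions, not the authors' code: the panel layout and column names ("permno", "year", "ret", "rf") are hypothetical, and we compound the monthly risk-free rate over the same window before subtracting it.

```python
import pandas as pd

def annual_excess_returns(monthly: pd.DataFrame) -> pd.DataFrame:
    """Equation 1: compound the 12 monthly holding-period returns of a
    firm-year, then subtract the compounded risk-free rate."""
    grouped = monthly.groupby(["permno", "year"])
    sr = grouped["ret"].apply(lambda r: (1.0 + r).prod() - 1.0)  # SR_ijt
    rf = grouped["rf"].apply(lambda r: (1.0 + r).prod() - 1.0)   # risk-free
    out = pd.DataFrame({"sr": sr, "rf": rf})
    out["excess_return"] = out["sr"] - out["rf"]
    return out.reset_index()
```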
Idiosyncratic risk
To compute our measure of idiosyncratic risk, we use Carhart's (1997) four-factor model, which adds a "momentum" factor to the three-factor model of Fama and French (1993):

R_ijtd − R_f,td = α_0 + α_1 (R_m,td − R_f,td) + α_2 SMB_td + α_3 HML_td + α_4 UMD_td + ε_ijtd. (2)

Here, R_ijtd denotes the daily return on day d; R_f,td is the daily risk-free return; R_m,td denotes the daily return on a value-weighted market portfolio; SMB_td is the daily return on a portfolio of small stocks minus the return on a portfolio of large stocks; HML_td represents the daily return on a portfolio of stocks with a high book-to-market ratio minus the return on a portfolio of stocks with a low book-to-market ratio; UMD_td is the momentum factor; ε_ijtd denotes the error term; and α_0–α_4 are the regression parameters. The standard deviation of the estimated residuals in Equation 2 captures the idiosyncratic variation in stock returns.

Cloud ratio Computer software and service providers typically break out cloud and non-cloud revenues in their 10-K annual reports. We use keywords such as "cloud", "hosted" (vs. "in-house"), "on-demand" (vs. "on-premises" and "perpetual"), "Internet-based", "Web-based", "online", "Software-as-a-Service", "Platform-as-a-Service", and "Infrastructure-as-a-Service" to identify cloud-based revenue sources in the firms' annual reports. We compute a firm's cloud ratio in a given year as the sum of its revenues from cloud computing divided by its total revenues. Appendix A gives some examples of how we construct the cloud ratio measure.
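A minimal sketch of this keyword-based screening is given below. It is an illustration under simplifying assumptions (exact substring matching on line-item descriptions), not the authors' actual procedure, and the function names are hypothetical.

```python
CLOUD_KEYWORDS = (
    "cloud", "hosted", "on-demand", "internet-based", "web-based",
    "online", "software-as-a-service", "platform-as-a-service",
    "infrastructure-as-a-service",
)

def is_cloud_revenue_item(description: str) -> bool:
    """Flag a revenue line item as cloud-based if its 10-K description
    contains any of the cloud-related keywords."""
    text = description.lower()
    return any(keyword in text for keyword in CLOUD_KEYWORDS)

def cloud_ratio(revenue_items: dict) -> float:
    """Cloud ratio = sum of cloud revenues divided by total revenues;
    `revenue_items` maps line-item descriptions to revenue amounts."""
    cloud = sum(v for k, v in revenue_items.items() if is_cloud_revenue_item(k))
    return cloud / sum(revenue_items.values())
```

For example, cloud_ratio({"Subscription fees for enterprise cloud computing services": 4.0, "Professional services and other": 1.0}) evaluates to 0.8.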
Market maturity
We follow Suarez et al.'s (2013) approach to measure market maturity at the primary 4-digit SIC code level. In the growth stage of an industry's life cycle, market density-that is, the number of firms operating in a market-continues to increase as new firms enter the market.
Once the market enters its mature phase, however, density begins to decline because firms start to exit the market (Agarwal et al., 2002). We identify the onset of maturity as the peak of market density. Then, we compute market maturity as (−1/Density jt ) × 100 for the years before the onset of maturity and as (1/Density jt ) × 100 for the years thereafter, where Density jt denotes the market density of 4-digit SIC industry j in year t. As such, market maturity takes negative (resp., positive) and increasing values before (resp., after) the onset of maturity.
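A sketch of this scoring follows, assuming a panel of industry-year densities with illustrative column names; treating the peak-density year itself as the onset of maturity is our assumption, not a detail stated by the authors.

```python
import pandas as pd

def market_maturity(ind: pd.DataFrame) -> pd.DataFrame:
    """Score each industry-year: (-1/density)*100 before the density
    peak and (1/density)*100 from the peak onward."""
    def score(group: pd.DataFrame) -> pd.DataFrame:
        group = group.copy()
        onset = group.loc[group["density"].idxmax(), "year"]
        sign = group["year"].ge(onset).map({True: 1.0, False: -1.0})
        group["maturity"] = sign * (1.0 / group["density"]) * 100.0
        return group
    return ind.groupby("sic4", group_keys=False).apply(score)
```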
Advertising intensity In line with prior research (e.g., Malshe & Agarwal, 2015), we measure advertising intensity as the ratio of advertising expenditures to total sales. We use Kantar Media's Ad$pender database to obtain the information on firms' advertising spending. Given that Kantar Media does not cover all the firms in our sample, we follow Malshe and Agarwal (2015) and Malshe et al. (2020) to impute the missing advertising values.7 Specifically, we compute the ratio of advertising to selling, general, and administrative (SG&A) expenses for each firm with available advertising spending in a given year in the Ad$pender database; then, we obtain the annual average of the advertising-to-SG&A ratio at the 4-digit SIC code level. To estimate the missing value of advertising for a firm, we multiply the firm's SG&A by the corresponding yearly average advertising-to-SG&A ratio for its 4-digit SIC industry.
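The imputation can be sketched as follows; the column names are illustrative assumptions, not the authors' code.

```python
import pandas as pd

def impute_advertising(df: pd.DataFrame) -> pd.DataFrame:
    """Fill missing advertising with SG&A times the industry-year
    average advertising-to-SG&A ratio of firms with observed values."""
    df = df.copy()
    ratio = df["advertising"] / df["sga"]  # NaN where advertising is missing
    avg = ratio.groupby([df["sic4"], df["year"]]).transform("mean")
    df["advertising"] = df["advertising"].fillna(df["sga"] * avg)
    df["ad_intensity"] = df["advertising"] / df["sales"]
    return df
```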
Control variables
In our analysis, we control for several firm-and industry-specific factors that are likely to affect firm performance. In particular, the size of a firm is a key determinant of its security returns (Fama & French, 1992); hence, we control for firm size, computed as the log-transform of total sales (Kalaignanam et al., 2013). We also control for market share, or a firm's sales divided by the overall sales of its 4-digit SIC industry. A larger market share is likely to improve a firm's financial performance because it results in market-power advantages and enables the firm "to charge higher selling prices from customers and to negotiate lower purchase prices with suppliers" (Edeling & Himme, 2018, p. 3). In addition, we use firm profitability, or the ratio of earnings before interest and taxes (EBIT) to total sales, as a control (de Andrés et al., 2017). We also control for R&D intensity, or the ratio of R&D expenditures to total sales, as a proxy for the level of a firm's emphasis on research and development activities.
We control for accounts receivable intensity, or the ratio of receivables to total sales, because it has been shown to affect both firm return and risk (Frennea et al., 2019). We include financial leverage as a control because it affects stock returns through equity risk (Ozdagli, 2012); leverage is measured as the ratio of long-term debt to EBIT (see, e.g., Bates et al., 2009). To account for the effect of acquisition investments on changes in cloud-based revenues, we control for acquisitions expenditures, normalized by total assets (Bates et al., 2009). We control for financial slack because it affects a firm's ability to invest in growth opportunities (Fang et al., 2008). We operationalize financial slack as the ratio of working capital to total sales (Kim et al., 2018). We also control for intangible intensity, or 1 minus the ratio of net property, plant and equipment to total assets, because intangible assets are critical sources of competitiveness (Tuli et al., 2010). In addition, we include the dividends payout ratio, or the dividends-to-income ratio (He et al. 2020), as a control because changes in a firm's dividends policy may incorporate information about its future earnings (Benartzi et al., 1997).
We also control for competitive intensity, which is operationalized as 1 minus the Herfindahl-Hirschman index (Lee et al., 2015). In addition, we use market turbulence as a control because moving into cloud services may become a more prominent source of revenue in volatile markets (see, e.g., Fang et al., 2008). We operationalize market turbulence as the coefficient of variation for the overall sales in a given 4-digit SIC industry over the preceding five years (Claussen et al., 2018). Finally, we include year dummies as controls to capture the effect of global shocks on firm performance. Table 2 summarizes our constructs' definitions and how they are measured.
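For concreteness, several of these controls can be computed from standard accounting fields as sketched below; the column names are illustrative assumptions, not the authors' code.

```python
import numpy as np
import pandas as pd

def build_controls(df: pd.DataFrame) -> pd.DataFrame:
    """Compute a subset of the firm- and industry-level controls."""
    df = df.copy()
    df["firm_size"] = np.log(df["sales"])              # log of total sales
    ind_sales = df.groupby(["sic4", "year"])["sales"].transform("sum")
    df["market_share"] = df["sales"] / ind_sales
    df["profitability"] = df["ebit"] / df["sales"]
    df["rd_intensity"] = df["rd"] / df["sales"]
    df["receivables_intensity"] = df["receivables"] / df["sales"]
    df["leverage"] = df["ltdebt"] / df["ebit"]
    df["intangible_intensity"] = 1.0 - df["ppent"] / df["assets"]
    # Competitive intensity: 1 minus the Herfindahl-Hirschman index
    hhi = (df["market_share"] ** 2).groupby(
        [df["sic4"], df["year"]]).transform("sum")
    df["competitive_intensity"] = 1.0 - hhi
    return df
```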
Operationalization
The stock market reacts only to the release of unexpected information with critical implications for future firm performance (Fama, 1970). We therefore use a stock return response model that explores whether the new information contained in a construct is associated with long-term changes in a firm's stock price (see, e.g., Bharadwaj et al., 2011; Edeling et al., 2020; Frennea et al., 2019; Mishra & Modi, 2016). To capture the release of new information, we operationalize the continuous explanatory variables in our framework as unanticipated changes in those variables. For each firm, we disentangle the time-based unexpected changes in a continuous variable by estimating a first-order autoregressive time-series model as follows (Bharadwaj et al., 2011; Mishra & Modi, 2016; Mizik & Jacobson, 2004):

X_ijt = δ_0i + δ_1i X_ij,t−1 + ν_ijt, (3)

where X_ijt is the variable of interest; δ_0i is the firm-specific intercept; δ_1i reflects the persistence of the time series; and the predicted residuals (i.e., ν̂_ijt) reflect the unanticipated changes in variable X (i.e., UΔX_ijt). We provide the descriptive statistics and correlations for the variables in Table 3. To limit the influence of potential outliers in our estimations, we winsorize all the continuous variables at the 5% and 95% levels of their respective distributions.
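A sketch of Equation 3 and the winsorization step follows, assuming an illustrative panel layout; this is not the authors' code.

```python
import pandas as pd
import statsmodels.api as sm

def winsorize_series(s: pd.Series, p: float = 0.05) -> pd.Series:
    """Clip a variable at its 5th and 95th percentiles."""
    return s.clip(lower=s.quantile(p), upper=s.quantile(1.0 - p))

def unanticipated_changes(df: pd.DataFrame, var: str) -> pd.Series:
    """Equation 3, firm by firm: regress X_t on X_{t-1} and keep the
    residuals as the unanticipated changes (UdeltaX)."""
    pieces = []
    for _, g in df.sort_values("year").groupby("permno"):
        if len(g) < 3:
            continue  # too few years to fit the AR(1)
        X = sm.add_constant(g[var].shift(1).rename("lag"))
        fit = sm.OLS(g[var], X, missing="drop").fit()
        pieces.append(fit.resid)
    return winsorize_series(pd.concat(pieces)).reindex(df.index)
```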
Estimation
We build our stock return response model on Carhart's (1997) four-factor model. Since a firm's stock price at a given point in time reflects all available information about the firm up to that time, adding unanticipated changes in the variables of interest allows us to capture investors' reaction to the release of new information (Edeling et al., 2020). Therefore, to examine the effect of moving into the cloud on excess stock returns, we incorporate unanticipated changes in the continuous explanatory variables, along with year dummies, into Carhart's model (Equation 4); a parallel specification with idiosyncratic risk as the dependent variable constitutes Equation 5. Here, UΔCR_ijt denotes unanticipated changes in the cloud ratio; UΔMM_jt denotes unanticipated changes in market maturity; UΔADI_ijt denotes unanticipated changes in advertising intensity; and Ζ_ijt represents the matrix of control variables, which consists of unexpected changes in firm size, market share, profitability, R&D intensity, accounts receivable intensity, financial leverage, acquisition expenditures, financial slack, intangible intensity, the dividends payout ratio, competitive intensity, and market turbulence, as well as year dummies. The μ_ijt term captures unobservable variables; and β_0–β_9 and the vector Β denote the regression coefficients to be estimated.
Sample selection Our focal independent variable is unanticipated changes in the cloud ratio. Therefore, our sample includes only firms with non-missing, identifiable cloud revenue data. However, the software industry also includes firms that bundle their cloud sales with other sources of revenues in their 10-K reports. Excluding such firms from our sample could lead to selection bias because disclosing cloud revenues may be a non-random strategic decision. To address sample selection, we use the approach proposed by Heckman (1979). A firm's choice to disclose (S_ijt = 1) or not disclose (S_ijt = 0) cloud revenue data is a function of firm- and industry-level characteristics. Following Han et al. (2017), we use the proportion of peer firms (i.e., those firms operating in the same 4-digit SIC industry as a focal firm) with non-missing cloud revenue data as an exclusion restriction. We argue that our excluded variable satisfies both the instrument relevance criterion and the exclusion restriction. In fact, common industry norms in disclosing cloud revenues are likely to be related to a firm's decision to report its cloud sales. Yet it is unlikely that peer firms can collectively observe and/or act on the focal firm's omitted variables, suggesting that our excluded variable is uncorrelated with the omitted variables captured by the error terms in Equations 4 and 5 (see Srinivasan & Ramani, 2019). Therefore, we estimate the following probit model:

Pr(S_ijt = 1) = Φ(π_0 + π_1 PPF_CR_ijt + π_2 UΔMM_jt + π_3 UΔADI_ijt + Π′Ζ_ijt + ω_ijt), (6)

where PPF_CR_ijt denotes the proportion of peer firms that disclose data on their cloud revenues, ω_ijt is the error term, and π_0–π_3 alongside the vector Π are the regression coefficients to be estimated. We subsequently include the inverse Mills ratio obtained from Equation 6 in the final models to control for the selection bias.
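The first stage and the inverse Mills ratio can be sketched as follows; the control matrix is omitted for brevity, and all variable names are illustrative assumptions rather than the authors' code.

```python
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

def inverse_mills(df: pd.DataFrame) -> pd.Series:
    """Probit of the disclosure indicator on the peer disclosure
    proportion and other exogenous regressors, followed by the
    inverse Mills ratio phi(xb)/Phi(xb)."""
    X = sm.add_constant(df[["ppf_cr", "ud_mm", "ud_adi"]])
    probit = sm.Probit(df["disclosed"], X, missing="drop").fit(disp=0)
    xb = probit.fittedvalues  # linear index x'pi-hat
    return pd.Series(norm.pdf(xb) / norm.cdf(xb), index=xb.index)
```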
Control function approach Even with the extensive list of covariates used in Equations 4 and 5, we are unable to account for all the variables that could affect both unanticipated changes in the cloud ratio and firm performance. It follows that, in our model specification, the presence of time-invariant unobservable variables (e.g., organizational culture) that could be correlated with unexpected cloud ratio changes may bias our estimates. To overcome this challenge, we use a fixed-effects time-series panel model that removes time-invariant unobservable variables by applying a within-transformation to the data (see Bharadwaj et al., 2011; Wooldridge, 2009).
Further, the error terms in Equations 4 and 5 include unobserved time-varying components that may affect both firm performance and the cloud ratio. For example, performance is driven by many other variables, such as organizational agility in responding to changing market conditions, that could also influence the shift to cloud computing. Similarly, unobserved time-varying factors such as investment opportunities are likely to affect a firm's allocation of resources to strategic initiatives (e.g., Chakravarty & Grewal, 2011). If so, then there may be an endogeneity concern as regards advertising intensity in our models. Accordingly, one must control for such unobservable variables in order to correct for biases that may arise from the non-random nature of cloud transition or advertising investment decisions. However, information on these potentially crucial variables is not available in our data. Hence, there may be an omitted variable bias in our models' estimates of the relationship between unanticipated cloud ratio changes and firm performance or the moderating effect of unexpected changes in advertising intensity (see Germann et al., 2015; Papies et al., 2017).
Following previous studies (e.g., Sridhar et al., 2016; Srinivasan et al., 2018), we use the control function approach to address these sources of endogeneity. In doing so, we employ the average of peer firms' cloud ratios and advertising intensities as our instrumental variables. Using the information available about peer firms to construct our instruments is in accord with previous research in marketing (e.g., Germann et al., 2015; Jindal & McAlister, 2015; Srinivasan et al., 2018). Our excluded variables meet both the instrument relevance criterion and the exclusion restriction.
In terms of the relevance criterion, we argue that an increase in the average of peer firms' cloud ratios could be indicative of an overall increase in the market demand for cloud solutions. It is therefore reasonable to assume that a vendor places more emphasis on cloud computing when the average of peer firms' cloud ratios increases. Hence, we expect that the average of peer firms' cloud ratios will be positively related to unanticipated changes in a firm's cloud ratio. Similarly, "firms are known to look to their peers to guide their marketing actions", suggesting a high correlation between a firm's and its peers' degree of emphasis on advertising in their promotion mix "because they are guided by similar norms" (Sridhar et al., 2016, p. 47). Therefore, we expect that the average of peer firms' advertising intensities will be positively related to unanticipated changes in a focal firm's advertising intensity.
In terms of the exclusion restriction, we argue that a firm's omitted variables are difficult to observe and hence to assess. Therefore, it is most unlikely that peer firms can collectively measure such variables and/or act on them strategically (Germann et al., 2015; Sridhar et al., 2016). Thus, we can reasonably expect that our excluded variables are uncorrelated with the omitted variables that are captured by the error terms in Equations 4 and 5.
The control function method relies on a two-step procedure to condition out the variation in unobservable factors correlated with the endogenous variables of interest (for details, see, e.g., Petrin & Train, 2010; Wooldridge, 2015). First, we perform auxiliary regressions of unexpected cloud ratio and advertising intensity changes on our instrumental variables together with other exogenous variables as regressors (Equations 7 and 8), where AVGPF_CR_ijt represents the average of peer firms' cloud ratios; AVGPF_ADI_ijt denotes the average of peer firms' advertising intensities; IMR_ijt is the inverse Mills ratio; σ_ijt and ρ_ijt are random error terms; and γ_0–γ_4, τ_0–τ_4, and the vectors Γ and Τ are the regression parameters.
Second, the predicted residuals (i.e., σ̂_ijt and ρ̂_ijt) from these regressions are added to the final models to serve as the control functions that condition on the parts of unexpected cloud ratio and advertising intensity changes that depend on the error terms. After adding the predicted residuals, the remaining variation in unexpected cloud ratio and advertising intensity changes is independent of the error terms. Equations 9 and 10 specify our final models, where η_ijt and ζ_ijt denote the random error terms, and λ_0–λ_12, φ_0–φ_8, and the vectors Λ and Φ are the regression parameters. Following Petrin and Train (2010) and Wooldridge (2015), we bootstrap the entire estimation procedure based on 1,000 replications to obtain valid standard errors for the estimated coefficients.
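The two-step procedure with a firm-level bootstrap can be sketched as follows; the regressor lists are abbreviated (interactions and most controls omitted), and all names are illustrative assumptions, not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def cf_estimate(df: pd.DataFrame, outcome: str) -> pd.Series:
    """First-stage residuals for the two endogenous regressors enter
    the outcome model as control functions."""
    exog = ["ud_mm", "imr"]
    r1 = sm.OLS(df["ud_cr"], sm.add_constant(df[["avgpf_cr"] + exog]),
                missing="drop").fit().resid
    r2 = sm.OLS(df["ud_adi"], sm.add_constant(df[["avgpf_adi"] + exog]),
                missing="drop").fit().resid
    X = sm.add_constant(df[["ud_cr", "ud_adi"] + exog].assign(cf1=r1, cf2=r2))
    return sm.OLS(df[outcome], X, missing="drop").fit().params

def bootstrap_se(df: pd.DataFrame, outcome: str, reps: int = 1000) -> pd.Series:
    """Resample firms with replacement and re-run the entire two-step
    procedure to obtain bootstrapped standard errors."""
    rng = np.random.default_rng(0)
    firms = df["permno"].unique()
    draws = []
    for _ in range(reps):
        chosen = rng.choice(firms, size=len(firms), replace=True)
        boot = pd.concat([df[df["permno"] == f] for f in chosen],
                         ignore_index=True)
        draws.append(cf_estimate(boot, outcome))
    return pd.DataFrame(draws).std()
```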
Estimation results
We present the estimation results for the auxiliary models (i.e., Equations 6-8) in Table 4. As Model 1 shows, the proportion of peer firms with non-missing cloud revenues is a significant predictor of the selection probability (π = 2.441, p < .01). In Model 2, the average of peer firms' cloud ratios has a positive and significant effect on unanticipated changes in a firm's cloud ratio (γ = .031, p < .05). In Model 3, the effect of the average of peer firms' advertising intensities on unanticipated changes in a firm's advertising intensity is positive and significant (τ = .033, p < .01). In addition, the F-statistics in Models 2 and 3 are, respectively, 14.90 (p < .01) and 12.55 (p < .01), both above the recommended threshold of 10 (Staiger & Stock, 1997). These findings constitute strong evidence for the validity of our instrumental variables.
Hypotheses tests
We report the estimation results for Equations 9 and 10 in Table 5. In Model 1, the coefficient for unexpected cloud ratio changes is positive and significant (λ = 20.208, p < .05). We thus find evidence for a positive effect of an unanticipated increase in the cloud ratio on excess stock returns, which supports H1. In Model 2, unanticipated increases in the cloud ratio have a negative and significant effect on idiosyncratic risk (φ = −.338, p < .05), which supports H2.
In Model 3, after adding the interaction terms, the effect of unexpected cloud ratio changes on excess stock returns remains positive and significant (λ = 25.218, p < .05). Furthermore, we find that unexpected changes in market maturity positively moderate the effect of unexpected cloud ratio changes on excess stock returns (λ = 9.768, p < .05), in support of H3. However, the moderating effect of unanticipated changes in advertising intensity on the link between unexpected cloud ratio changes and excess stock returns is insignificant (λ = −348.473, n.s.); we thus fail to find support for H5. This is likely because unexpected changes in advertising intensity can have dual opposing effects that offset each other in the cloud environment. On the one hand, advertising expenditures provide credible signals to investors about the potential for demand growth. On the other hand, adopting a subscription-based business model may, at least initially, result in slow cash inflow (see Breznitz et al., 2018). Therefore, given the capital-intensive nature of advertising investments, an unexpected increase in advertising spending may raise investors' concerns about a cloud vendor's short-term profitability.
In Model 4, after including the interaction terms, the effect of unanticipated cloud ratio changes on idiosyncratic risk remains negative and significant (φ = −.393, p < .05). However, the moderating effect of unanticipated changes in market maturity on the relationship between unexpected cloud ratio changes and idiosyncratic risk is insignificant (φ = −.028, n.s.); we thus fail to find support for H4. This is likely because switching suppliers in the maturity phase of technology- and capital-intensive markets may still involve nontrivial expenditures on search, integration, and adaptation (see Burnham et al., 2003). This can, at least partially, substitute for the "lock-in" advantage available from moving into the cloud and hence lessen the competitiveness of cloud vendors. Finally, unanticipated changes in advertising intensity negatively moderate the effect of unexpected cloud ratio changes on idiosyncratic risk (φ = −21.416, p < .05), in support of H6. We plot these moderating effects in Fig. 2.
Sensitivity analyses
Alternative source of advertising spending data In our main analyses, we used Kantar Media's Ad$pender database to retrieve the information on firms' advertising expenditures. As an alternative data source, we use the merged CRSP-Compustat database to obtain the information on firms' advertising spending. As shown in Models 1 and 2 of Table 6, our findings are not sensitive to using this alternative source of advertising spending data.
Data requirement for calculating idiosyncratic risk
To operationalize the idiosyncratic risk measure, we estimate Equation 2 after restricting our sample to firms with at least 250 daily stock return observations in a given year. The results in Model 3 of Table 6 indicate that our findings are robust to imposing this constraint on our sample.8

Fig. 2 Moderating effects of unexpected changes in market maturity and advertising intensity

8 We found similar results after limiting our sample to firms with at least 30, 60, or 120 daily stock return observations in a given year. The results are available upon request.
General discussion
The invention of cloud computing as a disruptive technological paradigm for delivering IT products and services has shaken the software industry to its core. Pivoting to the cloud, in light of its profound effects on a vendor's mix of marketing elements, has dominated discussions among marketing managers at software firms (see Moorman et al., 2018). Yet the literature on the performance implications of moving into the cloud is sparse, leaving academics and practitioners with a limited understanding of the financial outcomes of adopting a cloud-based business model (Fazli et al., 2018). In addressing this gap, our study fulfills two objectives.
First, we document empirically the shareholder wealth implications of transitioning to the cloud from the vendors' perspective. Our results establish that an unanticipated increase in the cloud ratio enhances a firm's excess stock returns and reduces its idiosyncratic risk. By focusing on the shareholder value implications of moving into the cloud, our study responds to the growing body of marketing-finance interface research that calls for examining the value relevance of marketing-related innovations (e.g., Dotzel et al., 2013; Dotzel & Shankar, 2019; Geyskens et al., 2002; Sood & Tellis, 2009). Second, we highlight the roles of market maturity and advertising intensity as key determinants of the effectiveness of moving into the cloud. Our findings reveal that the effect of shifting to the cloud on firm return becomes stronger in the presence of unexpected increases in market maturity. Further, an unexpected increase in advertising intensity strengthens the negative relationship between moving into the cloud and idiosyncratic risk.
Theoretical contributions
Our research bears a number of important theoretical implications. To the best of our knowledge, this is the first large-scale empirical study to investigate, from the vendors' standpoint, the long-term return and risk implications of moving into the cloud. As such, our study complements that of Son et al. (2014) in three important ways.
First, Son et al. (2014) explore the effect of adopting cloud-based solutions from the users' perspective. In contrast, our study examines the performance implications of shifting to the cloud from the vendors' point of view. This is of direct importance to cloud providers because they are under intense pressure to determine how well they are operating in the cloud environment (McKinsey & Co., 2015). Second, technological innovations such as cloud computing can affect firm return and firm risk differently (see Sorescu & Spanjol, 2008). Therefore, developing a more granular insight into the performance implications of shifting to the cloud requires accounting for return and risk as separate dimensions of shareholder value. Although Son et al. (2014) examine the effect of adopting cloud computing on users' abnormal returns, they do not account for the potential risk implications of this shift. In this study, we use stock return response modeling to investigate the joint effects of moving to the cloud on a vendor's firm return and firm risk. Third, Son et al. (2014) explore how announcing the adoption of cloud computing affects customers' short-term abnormal stock returns. In the current study, we use the stock return response model approach to examine the long-term performance implications of vendors' transition to the cloud. This is of direct interest to cloud vendors and their shareholders because "it is well known that the economic return to a marketing activity, such as a new product introduction, is obtained over the long run" (Srinivasan et al., 2009, p. 30).
Furthermore, by examining the moderating role of market maturity in determining the effectiveness of shifting to the cloud, we complement previous studies in the innovation literature that underscore how an industry's life cycle affects the evolution of technological innovations (e.g., Cusumano et al., 2015; Sood & Tellis, 2005). In addition, our investigation into the role of advertising intensity in moderating the relationship between cloud transition and shareholder wealth contributes to the nascent literature on the performance effects of value creation and appropriation investments (e.g., Frennea et al., 2019).
Managerial implications
Our findings have critical implications for managerial practice. Despite software firms' increasing interest in cloud computing, there remains considerable skepticism among senior managers about the financial outcomes of transitioning to the cloud (PwC, 2017). Our study offers corporate executives a fresh perspective on the performance implications of moving into cloud computing. We show that shifting to the cloud can contribute to shareholder wealth by increasing excess stock returns. For an average firm in our sample, a 1 percentage point unexpected increase in the cloud ratio boosts excess stock returns by about .2 percentage point, which corresponds to an increase of $384 million in the firm's market capitalization. This finding should give top management the confidence to depart from traditional on-premises licensing schemes and to embrace cloud-based business models. It also has practical importance for the investment community because unexpected cloud ratio increases can convey credible signals about a firm's future financial health and hence must be integrated into portfolio composition analyses.
Our results also suggest that moving into the cloud increases shareholder wealth by reducing idiosyncratic risk. For an average firm in the sample, a 1 percentage point unexpected increase in the cloud ratio reduces idiosyncratic risk by about .004, which is equivalent to a 25% decrease in the firm's idiosyncratic risk. This finding is of direct relevance to managers because risk is a fundamental dimension of firms' financial performance (Han et al., 2017). An increase in risk makes bondholders and creditors more averse to uncertain payoffs and thereby exacerbates a firm's cost of raising external capital (Panousi & Papanikolaou, 2012). Risk has a similarly adverse effect on a firm's ability to invest in R&D and capital expenditures because uncertain cash flows increase the likelihood of a cash shortfall (Minton & Schrand, 1999).
In addition, we find that the relationship between moving to the cloud and shareholder wealth is contingent on industry-and firm-level factors. In particular, the effect of unexpected cloud ratio changes on firm return becomes stronger in the presence of unanticipated increases in market maturity. Our results establish that moving from the lowest to the highest quartile of unanticipated changes in market maturity amplifies the positive effect of unexpected cloud ratio increases on excess stock returns by approximately 4.9%. Hence, managers should be aware that the life cycle stage of an industry in which a firm operates bears implications for investing in the cloud as an IT delivery model. For example, the intense price-based competition that prevails in the mature phase of an industry's life cycle provides a highly suitable environment for the shift to cloud computing.
Moreover, an unexpected increase in advertising intensity strengthens the linkage between moving to the cloud and firm risk. Moving from the lowest to the highest quartile of unanticipated changes in advertising intensity increases the negative effect of unexpected cloud ratio increases on idiosyncratic risk by about 4.7%. This finding should interest marketing managers, who are under constant pressure "to demonstrate the contribution of advertising to financial performance" (Srinivasan et al., 2009, p. 24). It also illustrates that the stock market bestows higher values on shifting to the cloud when that strategy is backed by substantial advertising investments. Therefore, software firms should involve marketing managers in both the formulation and implementation of their shift to cloud computing so as to ensure that their business model objectives and marketing efforts are well aligned and integrated.
Limitations and opportunities for further research
Our study has limitations that translate into avenues for future research. First, this work is a crucial first step toward understanding the role of moving to the cloud in the context of firms' marketing strategies. Owing to data availability, we have focused on cloud computing in general. Yet our findings could be enriched by examinations of how different types of cloud solutions affect firm performance. Similarly, data availability limited us to using a firm's overall advertising spending when measuring its advertising intensity. Future studies are encouraged to expand our findings by distinguishing between cloud- vs. non-cloud-based advertising expenditures.9 Second, we followed previous empirical research in the marketing-finance interface literature by including only publicly traded software firms in our sample. Although our theory is applicable to a broad range of firms, future research could examine the generalizability of our findings by using a sample that includes private firms.
Third, our study focuses on the shareholder wealth implications of moving into the cloud from the vendors' perspective; however, it would also be instructive to examine how the stock market evaluates customers' migration to cloud computing. Anecdotal evidence shows a rapidly rising rate of cloud adoption: in a survey conducted by International Data Group, Inc. (2018), 73% of the respondents reported that they already have at least one application, or a part of their computing infrastructure, in the cloud. With regard to this topic, Son et al. (2014) examine how announcing the adoption of cloud computing affects customers' abnormal stock returns. Scholars could profit from adopting our value relevance approach to explore the long-term return and risk implications of this shift from the users' viewpoint.
Fourth, our sample ends in 2019, prior to the COVID-19 outbreak. However, as noted by PwC, "a confluence of existing factors driving cloud transition has been further accelerated by the COVID-19 crisis: Cloud spending rose 37% to $29 billion during the first quarter of 2020. This trend is likely to persist, as the exodus to virtual work underscores the urgency for scalable, secure, reliable, cost-effective off-premises technology services".10 Future studies can build on our findings to explore the performance implications of shifting to the cloud during and after the COVID-19 pandemic.11 Fifth, in order to ensure the consistency and integrity of our conceptual and empirical frameworks, we focus our analyses on B2B cloud providers. Although we expect our empirical results to be generalizable to the B2C context, future studies can expand our findings by examining the performance implications of shifting to the cloud in the B2C setting. Sixth, our findings highlight the importance of advertising investments as a key marketing promotional activity in the B2B cloud environment. "The massive budgets allocated towards marketing, and advertising in particular, suggests that B2B managers consider advertising a smart investment" (Swani et al., 2020, p. 582). Another important future extension is to investigate the moderating role of direct selling investments as a type of relationship marketing activity in the B2B cloud selling process.12

9 We thank an anonymous reviewer for raising this point.
10 https://www.pwc.com/us/en/industries/tmt/library/covid19-cloudinfrastructure.html
11 We thank an anonymous reviewer for raising this point.
12 We thank an anonymous reviewer for raising this point.
Appendix A. Examples of constructing the cloud ratio measure (excerpts)

Non-cloud revenue: License and hardware. "License and hardware revenues are generated from licensing the right to use its software products on-premises and, in certain instances, selling hardware as a component of the product offering."

Non-cloud revenue: Services. "Services revenues are generated primarily from professional services and educational services fees."

Cloud revenue: Subscription and support (Salesforce.com, fiscal year ended January 31, 2015). "... subscription fees from customers accessing our enterprise cloud computing services and from customers paying for additional support beyond the standard support that is included in the basic subscription fees ..."

Professional services and other (Salesforce.com). "... related professional services such as process mapping, project management, implementation services and other revenue. "Other revenue" consists primarily of training ..."
"year": 2022,
"sha1": "c98e83122eae096d728849e863d4dd1d46ad43d5",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11747-021-00818-7.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "c98e83122eae096d728849e863d4dd1d46ad43d5",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Characteristics of the clinical pharmacists' interventions at the main general tertiary care hospital in Qatar
Medication-related problems (MRPs) are prevalent throughout healthcare systems, and pharmacy-based interventions are pivotal to reducing their occurrence. In the Middle East, including Qatar, the professional roles of pharmacists have been expanding to improve patient safety. This study aimed to characterize and analyze pharmacist-led interventions among hospitalized patients in the leading general hospital in Qatar. A retrospective analysis of pharmacist interventions in the internal medicine ward, critical care unit, and emergency department (ED) was conducted. Data were extracted from three periods of 1 month (March 1–31, 2018, July 15–August 15, 2018, and January 1–31, 2019). A descriptive analysis was undertaken. A total of 340 patients with 858 interventions were analyzed. The average age of the study participants was 51 years (SD = 17.7). The study population was predominantly male (65%). The prevailing pharmacist intervention was adding drug therapy (27%), followed by medication discontinuation (18%) and dosage adjustments (16%). This pattern was maintained across all subpopulations, e.g., gender, age, and ward, except for the ED, where cessation of medication was the most frequent intervention (4%). The two pharmacological classes associated with most interventions were anti-infective and cardiovascular agents. Pharmacist interventions effectively identify, prevent, and resolve MRPs in general inpatient settings in Qatar.
INTRODUCTION
Patient safety is a core objective of several national and international healthcare systems.1,2 Since the release of "To Err Is Human" by the Institute of Medicine (IOM), tremendous global efforts have been dedicated to ensuring the provision of optimum care to patients and minimizing the occurrence of medication-related problems (MRPs).3 The Pharmaceutical Care Network Europe (PCNE) has defined MRPs as "an event or circumstance involving drug therapy that actually or potentially interferes with desired health outcomes."4 The PCNE classified the causes of MRPs into nine domains: drug selection, drug form, dose selection, treatment duration, dispensing, drug use process, patient-related, patient transfer-related, and others, e.g., no outcome monitoring.4 The most prevalent form of MRPs is medication errors, which could lead to patient harm, i.e., preventable adverse drug events (ADEs).5 Pharmacy-based interventions have been shown to identify and prevent such problems across healthcare settings.[10][11][12][13] For instance, clinical pharmacy-based interventions in the United States contributed to identifying and averting prescribing errors in 0.3%-1.9% of all medication orders.13 The characteristics of such interventions have also been described in various other countries and settings.[15][16][17][18][19][20][21] In Qatar, there have been two published studies on this topic. The first focused on MRPs identified on discharge in four primary healthcare facilities, while the other focused on medication errors in neonatal intensive care units (NICUs).22,23 Aligning with international standards, the pharmacist role in Qatar is substantially evolving and becoming more patient-centered. As the distribution of interventions varies between settings, it is essential to investigate the clinical pharmacy interventions performed in the different settings and their characteristics.

Patients admitted to the internal medicine ward, critical care unit, or emergency department (ED) of Hamad General Hospital (HGH) during the study follow-up duration were eligible for inclusion in the study. The study sample covered three 1-month periods, i.e., March 1-31, 2018, July 15-August 15, 2018, and January 1-31, 2019. We included all interventional recommendations made by clinical pharmacists or clinical pharmacy specialists for hospitalized patients. Clinical interventions are, initially, only suggestions for consideration by physicians, requiring their approval to be implemented in patient care. Thus, only interventions that physicians accepted were included in our study, without any content or quality assessment, as long as the intervention was reviewed and approved by the respective prescriber. Interventions reported by non-clinical pharmacists (staff/operational pharmacists) were excluded from this study. Staff pharmacists work in outpatient or inpatient pharmacies to verify or dispense medications, whereas clinical pharmacists work in inpatient settings alongside other healthcare providers to develop care plans for hospitalized patients. Missing or incomplete data from the intervention sheet were obtained from the patient's electronic patient record (EPR) system.
Outcomes
The primary outcome was to characterize clinical pharmacist interventions to prevent ADEs among patients admitted to the internal medicine, critical care, and emergency units of HGH. The interventions were categorized according to different characteristics (i.e., age, gender, medical disorder, pharmacological category, and hospital ward).
Data Extraction and Synthesis
The intervention details and relevant sociodemographic data were abstracted from the EPR system into a spreadsheet. Patients were classified according to hospital ward, gender, and age [adults (18-64 years old) and elderly (≥65 years old)]. To categorize the pharmacy clinical interventions, the authors developed a comprehensive data collection sheet that classifies interventions into 18 types to capture all the possible interventions that a clinical pharmacist could suggest.
Sample Size
Our sample size was based on duration, consistent with prior studies of pharmacist interventions,[24][25] and we believe that over 25% of the year (hence, 3 out of 12 months) is a sufficient sample size. To enhance the representativeness of our sample, we included interventions reported during three nonconsecutive months, covering the period immediately after the annual staff performance evaluation (first month of the year), the period before the annual evaluation (last month of the year), and a period in between. This is because the evaluation may influence the documentation of interventions by clinical pharmacists, whereby they may become more vigilant.
Statistical Analysis
Extracted data from patient records were populated into a data spreadsheet for descriptive analysis. The mean [± standard deviation (SD)] was calculated for continuous variables, while frequencies and percentages were calculated for categorical variables.
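A minimal sketch of this descriptive analysis, with illustrative column names rather than the authors' code:

```python
import pandas as pd

def describe(interventions: pd.DataFrame) -> None:
    # Mean and standard deviation for continuous variables such as age
    print(interventions["age"].agg(["mean", "std"]))
    # Frequencies and percentages for categorical variables
    counts = interventions["intervention_type"].value_counts()
    print(pd.DataFrame({"n": counts,
                        "percent": (100.0 * counts / counts.sum()).round(1)}))
```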
Characteristics of the Study Patients
During the 3-month study period, 340 patients admitted to HGH, with 858 associated clinical pharmacy interventions, were included in this study. The mean age of the population was 51 years, and 65.3% were male. The majority of patients were Arab (55.3%), followed by Asian (non-Arab) (34.4%), and most were admitted to the general internal medicine unit (53.8%), followed by the emergency unit (30.88%).
Emergency Department (ED)
Clinical pharmacists working in the ED recorded 146 (17.0%) of the 858 interventions included in this study. Distinct from the order of intervention prevalence observed in the overall study population and all previously discussed subcategories, the prevailing intervention in the ED was the discontinuation of medications (4.3%) (Table 2). The next most frequent interventions were the addition of another medication (3.9%), a change in medication route (1.4%), a decrease in dose (1.28%), and an increase in dose (0.93%). Half of the vaccine recommendations were performed in the ED, and there were only two documented instances of adding a prophylactic agent during hospitalization. Notably, most patients in this category were adults (n = 117) and male (n = 136) (Tables 3A and 3B). Anti-infective agents remained the class most commonly associated with interventions (n = 28), followed by fluids and electrolytes (n = 20), cardiovascular medicines (n = 18), and GI medications (n = 14) (Appendix 2).
DISCUSSION
To our knowledge, this is the first study to describe clinical pharmacist-delivered interventions in an inpatient setting at a general tertiary hospital in Qatar.
Findings from this study indicated that adding another medication was the most frequent intervention. This differs from previously published studies, which reported dose adjustment as the predominant intervention.10,21,25 However, only some of these studies included the addition of drug therapy in their classifications. Consistent with our results, one study conducted in outpatient clinics showed that pharmacists were actively involved in adjusting patients' therapies, which included prescribing medications.26 This adds a new dimension to pharmacist duties, including identifying untreated conditions. In most published studies conducted in various settings and at different levels of care, including the present one, discontinuation of inappropriate prescriptions and dose alterations featured high in the intervention categories.[26][27] Similar to previous studies, most interventions pertained to anti-infective agents and cardiovascular medications, possibly due to the frequency of prescribing these agents.21,25,28 Adjusting the dose and cessation of medicines were the most prevalent interventions for errors with anti-microbial agents. This is justifiable since three of the four most frequently identified agents require renal dosage adjustment, i.e., piperacillin-tazobactam, vancomycin, and meropenem.6 Of particular interest in our findings is that stopping an anti-infective agent was the prevailing intervention under this pharmacological class. This suggests that more efforts are required to plan and implement anti-microbial stewardship programs, especially since widespread reliance on ceftriaxone, the third top agent in our study, has been reported in the literature to be a significant driver of cephalosporin resistance.29 Among cardiovascular medications, the addition of a prophylactic agent during hospitalization was frequently reported, primarily due to the need for low molecular weight heparin (LMWH) and unfractionated heparin (UFH) as venous thromboembolism (VTE) prophylaxis.
Although one would expect prophylaxis against VTE to be most prevalent in patients admitted to the ICU, as they are at higher risk for VTE, our study showed that most prophylactic agents during hospitalization were added for patients in non-ICU settings. This could be due to the presence of an ICU protocol in HGH and other hospitals worldwide that incorporates VTE prophylaxis as part of the initial assessment upon admission and in the daily follow-up rounds.30 Our results reinforce the importance of pharmacist-led medication reconciliation, a common practice in all hospitals under Hamad Medical Corporation (HMC), as medication initiation and discontinuation were frequently reported. Medication reconciliation programs are recognized as an effective method to tackle the burden of medication discrepancies and the subsequent potential patient harm that could occur during the transition of care.11 Even though the addition of a vaccine is expected in older patients, as more vaccines are recommended for this age group, e.g., the flu vaccine and pneumococcal polysaccharide vaccine, most vaccination recommendations in our study were in younger adults.30 This could be attributed to the presence of national-level vaccination campaigns and the home health service in Qatar that follows up with elderly patients and ensures that they are up-to-date with their vaccines.
As fluctuations in glucose levels are anticipated in hospitalized patients owing to stress and changes in medications and diet, insulin was the single agent with the highest frequency of intervention-requiring errors.31 The most prevalent intervention for insulin was a change in the prescribed dose.
Medications associated with interventions in ICU patients were distinct from what has been identified in the overall population. Fluids and electrolytes were the second most cited class.
Previous studies support our results by demonstrating that fluid and electrolyte imbalances are among the most frequently encountered medical disorders in critically ill patients.32 Another noteworthy finding in our study was obtained from the ED, which reported medication discontinuation as the dominant intervention. This differs from other studies investigating interventions in the ED, where drug information inquiries and dosage modifications were more common.[35] It is also noteworthy that the male and female subcategories were unbalanced. This, however, could be considered representative of Qatar's population, as the latest demographic statistics in the country indicate that approximately 76% of the population consists of males.36 This study has limitations that should be acknowledged. The study design is retrospective, which has the inherent limitation of possibly less-than-perfect documentation of interventions.
In addition, all interventions that satisfied the eligibility criteria were included without a quality appraisal of their content; we assumed the interventions were inherently validated because each was reviewed by the intervening pharmacist and approved by a physician.
Overall, however, this study still provides crucial information on the prevalence and nature of MRPs and the extent of the clinical pharmacist's role in preventing ADEs. Additionally, findings from this research should be shared with prescribers to highlight the issue of ADEs, which will help them reflect on their current practices and subsequently take measures to reduce the occurrence of ADEs.
Looking at future directions, there is a need for frequent clinical auditing of pharmacist interventions to ensure their quality and validity. Furthermore, maintaining, if not expanding, the pharmacist's role, with better coordination of care among healthcare professionals, is necessary to ensure high-quality care for patients.
CONCLUSION
This retrospective analysis showed that clinical pharmacist interventions effectively identify, prevent, and resolve MRPs in hospitalized patients at the main general hospital in Qatar. The prevailing pharmacist intervention was the addition of a medication, followed by medication discontinuation and dose adjustment. This pattern held across all subcategories except the ED, where cessation of medication was the most frequently identified intervention.
APPENDICES
64 years old) and elderly (≥65 years old)]. To categorize the pharmacy clinical interventions, the authors developed a comprehensive data collection sheet that classifies interventions into 18 types to capture all the possible interventions that a clinical pharmacist could suggest.
Prevalence of the Different Types of Clinical Pharmacy Interventions
(5.7%) (Table 2). The addition of a culture test was identified only in the elderly female group, while five of the six vaccine interventions were in adult male patients. Of the 50 formulation change interventions, 35 were from the intravenous (IV) to the oral (PO) route, seven were from nasogastric tube (NGT) to PO, five recommended an alternative therapy that is in the formulary (in cases where the physician prescribed a non-formulary medication), and one each was PO to NGT, PO to a different PO formulation, and PO to patch.
Table 2. The distribution of pharmacy interventions according to age, gender, and hospital ward.
), vancomycin (n = 22), ceftriaxone (n = 19), and meropenem (n = 16). The top identified agent (n = 27) was followed by 13 interventions for both dalteparin and unfractionated heparin (UFH). Discontinuation of medication was reported in 17.2% of interventions under this drug category, followed by the addition of a prophylactic agent (16.4%).

Figure 1. Classes of medications implicated in errors.
(Table 2). Unlike the interventions obtained from general internal medicine units, an increase in medication dose (4%) was more common than a decrease in dose (2.5%). Only five interventions involved the addition of a prophylactic agent during hospitalization, two each concerned a decrease in medication duration and the addition of a vaccine, and only one intervention reported the addition of a culture test.

Table 3. (Cont.) 3B. The distribution of pharmacy interventions according to gender versus hospital ward.
"year": 2023,
"sha1": "4da0e9c19771971960afd56d0ed5e715d020242c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5339/qmj.2023.28",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "215edfbd4b556771814ad53d0dc9db68f200d3a7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.