| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
4917710
|
pes2o/s2orc
|
v3-fos-license
|
Optimal DoF region of the K-User MISO BC with Partial CSIT
We consider the $K$-User Multiple-Input-Single-Output (MISO) Broadcast Channel (BC) in which a transmitter equipped with $M$ antennas serves $K$ users, with $K \leq M$. The transmitter has access to partial channel state information of the users. This is modelled by letting the variance of the Channel State Information at the Transmitter (CSIT) error of user $i$ scale as $O(P^{-\alpha_i})$ for the Signal-to-Noise Ratio (SNR) $P$ and some constant $\alpha_i \geq 0$. In this work we derive the optimal Degrees-of-Freedom (DoF) region in this setting and we show that Rate-Splitting (RS) is the key scheme to achieve it.
I. INTRODUCTION
The use of multiple antennas at the transmitter has dramatically increased the capacity of wireless networks, as multiple antennas can help to achieve a larger number of Degrees-of-Freedom (DoF). However, in order to achieve the theoretical multiplexing gain, a sufficiently accurate Channel State Information at the Transmitter (CSIT) is required [1], [2]. Nonetheless, acquiring an accurate CSIT is a difficult task.
In this paper we investigate the DoF region of the K-User Multiple-Input-Single-Output (MISO) Broadcast Channel (BC), where the transmitter has partial knowledge of the users' channels. As in [3], [4], the partial CSIT is captured by letting the variance of the channel estimation error of user i decay as O(P^{-α_i}) for some exponent α_i ∈ [0, 1], which represents the CSIT quality. Under this setting, a great deal of research has focused on characterizing the Sum-DoF.
A key result was shown in [3], where, assuming that the CSIT qualities of the users are arranged as α_1 ≥ · · · ≥ α_K, it was proved that the Sum-DoF is upper-bounded by 1 + α_2 + · · · + α_K. Moreover, such an upper bound is achievable through a Rate-Splitting (RS) strategy [5]-[7]. While the Sum-DoF is an important quantity, it only describes the sum and reveals nothing about the individual DoF achieved by each user. The individual DoF of the users are instead characterized by the DoF region, which is the set of all achievable DoF tuples (d_1, . . . , d_K). Since the DoF achieved by each user must be accounted for individually, describing the DoF region is a challenging task.
In this work, to the best of our knowledge, we characterize for the first time the optimal DoF region in the above setting. Building upon the work in [3], we derive an outer-bound of the optimal region, which is a polyhedron. We then show the achievability of such an outer-bound, which is the main challenge of this work, since we have to show the achievability of each tuple (d_1, . . . , d_K) and not just the achievability of the sum. We introduce an original approach: instead of characterizing and showing the achievability of the corner points of the polyhedron, which appears infeasible for a large number of users, we characterize and show the achievability of each facet of the polyhedron. The key strategy for the achievability is RS with flexible power allocation. Hence, RS is optimal not only for achieving the Sum-DoF, but also for achieving the DoF region. (E. Piovano and B. Clerckx are with the Department of Electrical and Electronic Engineering, Imperial College London, SW7 2AZ, UK; e-mail: {e.piovano15, b.clerckx}@imperial.ac.uk. This work has been partially supported by the EPSRC of UK, under grant EP/N015312/1.)
II. SYSTEM MODEL
This work considers a setup where a transmitter, equipped with M antennas, serves K single-antenna users, with K ≤ M. The users are indexed by the set K = {1, . . . , K}. At the t-th channel use, the signal received by the i-th receiver is

y_i(t) = h_i^H(t) x(t) + n_i(t),     (1)

where h_i^H(t) ∈ C^{1×M} is the channel vector and x(t) ∈ C^{M×1} is the transmitted signal, which is subject to the power constraint E(‖x(t)‖^2) ≤ P. The term n_i(t) ∼ CN(0, 1) indicates the additive noise. We define the channel matrix H ∈ C^{K×M}, drawn from a continuous ergodic distribution such that the joint density of its elements exists. We assume that the matrix and all its sub-matrices are full rank. In addition, to avoid degenerate situations, we assume that the entries and the determinant of H(t) are bounded away from zero and infinity [3].

For each user i, the transmitter has a current estimate of the channel, indicated as ĥ_i(t). The partial CSIT is modelled as h_i(t) = ĥ_i(t) + h̃_i(t), where h̃_i(t) is the channel estimation error at the transmitter; ĥ_i(t) and h̃_i(t) are assumed to be uncorrelated. Furthermore, the CSIT error h̃_i(t) has i.i.d. entries CN(0, σ_i^2), with σ_i^2 ≤ 1, while the entries of ĥ_i(t) have a variance equal to 1 − σ_i^2. For notational convenience, the index t of the channel use is omitted in the rest of the paper. The variance σ_i^2 is assumed to decay with the SNR P as O(P^{-α_i}), where α_i is defined as the CSIT quality exponent. We can restrict the exponent to α_i ∈ [0, 1] since, from a DoF perspective, α_i = 0 offers no gain over the no-CSIT case, while α_i ≥ 1 corresponds to perfect CSIT. We assume, without loss of generality, that users are ordered with respect to their CSIT quality, i.e. α_1 ≥ α_2 ≥ . . . ≥ α_K. We also recall that, given a unit-norm Zero-Forcing (ZF) precoding vector v such that ĥ_i^H v = 0, we have E[|h_i^H v|^2] = O(P^{-α_i}).

The transmitter has messages W_1, . . . , W_K intended for the corresponding users. Codebooks, probability of error, achievable rate tuples (R_1(P), . . . , R_K(P)) and the capacity region C(P) are all defined in the Shannon theoretic sense. The DoF tuple (d_1, . . . , d_K) is said to be achievable if there exists (R_1(P), . . . , R_K(P)) ∈ C(P) such that d_i = lim_{P→∞} R_i(P)/log(P) for all i ∈ K. The DoF region, denoted by D*, is defined as the closure of the set of all achievable DoF tuples (d_1, . . . , d_K).
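As a quick numerical illustration of this CSIT model (not part of the original paper), the following Python sketch draws channels according to h_i = ĥ_i + h̃_i with error variance σ_i² = P^{-α_i}, picks a unit-norm vector v orthogonal to the estimate ĥ_i, and checks that the residual leakage E[|h_i^H v|²] indeed scales as O(P^{-α_i}); the function name and parameter values are illustrative.

```python
import numpy as np

def zf_leakage(M=4, alpha=0.6, P=1e4, trials=2000, rng=np.random.default_rng(0)):
    """Average |h_i^H v|^2 for a unit-norm v orthogonal to the channel estimate hhat_i,
    with CSIT error variance sigma_i^2 = P^{-alpha}; should be close to P^{-alpha}."""
    sigma2 = P ** (-alpha)                      # CSIT error variance
    leak = 0.0
    for _ in range(trials):
        hhat = np.sqrt((1 - sigma2) / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
        herr = np.sqrt(sigma2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
        h = hhat + herr                          # true channel h = hhat + htilde
        # pick any unit-norm v with hhat^H v = 0 (project a random vector)
        v = rng.standard_normal(M) + 1j * rng.standard_normal(M)
        v -= (np.vdot(hhat, v) / np.vdot(hhat, hhat)) * hhat
        v /= np.linalg.norm(v)
        leak += abs(np.vdot(h, v)) ** 2
    return leak / trials

print(zf_leakage())   # ~4e-3, i.e. close to P^{-alpha} = 1e4^{-0.6}
```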
III. MAIN RESULT
In order to state the main result of the paper, we define A as the set of all possible non-empty subsets of K with elements arranged in ascending order. For instance, in the case K = {1, 2, 3}, the set A is given by A = {{1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}}. Any element of A, which is itself a set, is indicated with a calligraphic upper-case letter and its elements are denoted with the corresponding lower-case letter (with numbered subscripts). For instance S = {s_1, s_2, . . . , s_|S|} ∈ A, where s_1 < s_2 < . . . < s_|S|. The main result is the following.
Theorem. The optimal DoF region D* of the K-User MISO BC with partial CSIT is given by all the real tuples (d_1, . . . , d_K) such that

d_i ≥ 0, for all i ∈ K,     (2)
Σ_{i∈S} d_i ≤ 1 + Σ_{i∈S\{s_1}} α_i, for all S ∈ A.     (3)

We denote by D the above region described by the inequalities (2) and (3). In order to show that D coincides with the optimal DoF region D*, we need to show that D is simultaneously an outer-bound of the optimal region and achievable. The fact that D is an outer-bound of D* follows after a few steps from [3, Th. 1], which states that the Sum-DoF of the K-User MISO BC, with K ≤ M, is upper-bounded by

d_1 + d_2 + · · · + d_K ≤ 1 + α_2 + · · · + α_K.     (4)

The result was shown assuming α_1 = 1 for the first user. However, since enhancing the CSIT does not harm the Sum-DoF, the same upper bound holds for a generic value of α_1 ∈ [0, 1]. The region D is constructed by applying such a Sum-DoF upper bound to an arbitrary subset of users S ∈ A, which yields the bound Σ_{i∈S} d_i ≤ 1 + Σ_{i∈S\{s_1}} α_i on the Sum-DoF of the users in S. Considering all possible subsets of users S ∈ A, and given that the DoF of each user is a non-negative real value, we obtain D as an outer-bound of the optimal DoF region D*. The challenge of the paper is to show the achievability of D, addressed in Section V. This means showing that each DoF tuple (d_1, . . . , d_K) of D, which takes into consideration the individual DoF achieved by each user and not just the sum, is achievable.
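For illustration (not from the paper), membership in the region D can be tested numerically by enumerating all non-empty subsets of users and checking inequalities (2) and (3). The sketch below assumes the users are indexed so that α_1 ≥ · · · ≥ α_K; the function name is ours.

```python
from itertools import combinations

def in_dof_region(d, alpha):
    """Check whether d = (d_1,...,d_K) lies in D: d_i >= 0 for all i, and for every
    non-empty subset S = {s_1 < ... < s_|S|}, sum_{i in S} d_i <= 1 + sum_{i in S\{s_1}} alpha_i."""
    K = len(d)
    if any(di < 0 for di in d):
        return False
    for size in range(1, K + 1):
        for S in combinations(range(K), size):       # S is already in ascending order
            bound = 1 + sum(alpha[i] for i in S[1:])  # drop s_1, the best-CSIT user in S
            if sum(d[i] for i in S) > bound + 1e-12:
                return False
    return True

alpha = [0.8, 0.5, 0.3]
print(in_dof_region([0.8, 0.5, 0.3], alpha))   # True:  sum = 1.6 <= 1 + 0.5 + 0.3
print(in_dof_region([1.0, 0.6, 0.3], alpha))   # False: violates the full-set constraint
```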
IV. RATE-SPLITTING SCHEME
In this section, we recall the RS scheme which will be used to show the achievability of D in Section V. In RS, we transmit two kinds of symbols that are superimposed in the power domain: a common symbol decoded by all users on top of private symbols decoded by the respective users only. This strategy has been shown to be more robust in treating interference under partial CSIT than conventional linear precoding schemes (where only private symbols are transmitted) [4]-[6].

Getting into the details of the scheme, the message of each user i ∈ K is split into a common part and a private part. The common parts of the K users are jointly encoded into the common symbol x^(c), which has to be decoded by all K users. Each private sub-message is encoded into a private symbol x_i^(p), which is decoded by user i only. It is assumed that all the symbols are drawn from a unit-power Gaussian codebook. Next, the symbols are linearly precoded and power allocated. The transmitted signal takes the form

x = √(P^(c)) v^(c) x^(c) + Σ_{i∈K} √(P_i^(p)) v_i^(p) x_i^(p),     (5)

where v^(c) and v_i^(p) are unit-norm precoding vectors, and P^(c) and P_i^(p) are the corresponding allocated powers with P^(c) + Σ_{i∈K} P_i^(p) ≤ P. Since the common symbol has to be decoded by all users, v^(c) is chosen as a random (or generic) precoding vector. On the other hand, the private symbols are precoded by ZF over the channel estimates, i.e. v_i^(p) is chosen such that ĥ_j^H v_i^(p) = 0 for all j ∈ K \ {i}. The private powers are allocated as P_i^(p) = O(P^{a_i}) for exponents a_i ∈ [0, 1], while the remaining power is given to the common symbol. The received signal in (1) for user j ∈ K is then given by

y_j = √(P^(c)) h_j^H v^(c) x^(c) + √(P_j^(p)) h_j^H v_j^(p) x_j^(p) + Σ_{i∈K\{j}} √(P_i^(p)) h_j^H v_i^(p) x_i^(p) + n_j.     (6)

All users decode the common symbol by treating the interference from all the private symbols as noise. From (6), it can be verified that the common symbol x^(c), in order to be successfully decoded by all users, can carry a DoF of

d^(c) = 1 − max_{j∈K} a_j.     (7)

The DoF of the common symbol can be split in all possible ways among the users in K. We denote by d_j^(c) the DoF of the common symbol given to user j; any non-negative real tuple (d_1^(c), . . . , d_K^(c)) summing to d^(c) is an admissible partition of the DoF carried by the common symbol among the users in K. Next, each user removes x^(c) by performing Successive Interference Cancellation (SIC) and proceeds to decode its own private symbol. From (6), the private symbol intended for user j ∈ K can carry a DoF of

d_j^(p) = (a_j − (max_{i∈K\{j}} a_i − α_j)^+)^+,     (8)

where (x)^+ = max{x, 0}. The total DoF achieved by user j is then

d_j = d_j^(p) + d_j^(c),     (9)

where (d_1^(c), . . . , d_K^(c)) indicates an admissible partition of the total DoF carried by the common symbol, which is given by (7), as described above.
RS outperforms conventional linear precoding schemes, such as Zero-Forcing Beamforming (ZFBF), in the case of partial CSIT. In particular, RS attains the Sum-DoF upper bound in (4), which is achievable by setting (a_j)_{j∈K} = b for any b such that α_2 ≤ b ≤ α_1, with an arbitrary split of the DoF carried by the common symbol (which is irrelevant to the Sum-DoF). In fact, from (8), such a power allocation leads to d_1^(p) = b and d_j^(p) = α_j for j ∈ K \ {1}, while the common symbol carries d^(c) = 1 − b from (7). Hence, the Sum-DoF is equal to (4). It is important to notice that ZFBF achieves a Sum-DoF of α_1 + . . . + α_K. Hence, it attains the upper bound in (4) only for α_1 = 1, while it falls short when α_1 < 1, where RS is needed.
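The comparison between RS and ZFBF at the Sum-DoF level can be made concrete with the small sketch below; it hard-codes the DoF breakdown described above (common symbol 1 − b, user 1's private symbol b, user j's private symbol α_j for j ≥ 2), which is our reading of the scheme rather than code from the paper.

```python
def rs_sum_dof(alpha, b=None):
    """Sum-DoF attained by RS with a uniform private power exponent b in [alpha_2, alpha_1]
    (alpha sorted in decreasing order): common symbol carries 1 - b, user 1's private symbol
    carries b, and user j (j >= 2) carries alpha_j."""
    alpha = sorted(alpha, reverse=True)
    if b is None:
        b = alpha[1]                       # any b in [alpha_2, alpha_1] gives the same sum
    assert alpha[1] <= b <= alpha[0]
    return (1 - b) + b + sum(alpha[1:])    # = 1 + alpha_2 + ... + alpha_K

def zfbf_sum_dof(alpha):
    """ZF beamforming with private symbols only attains alpha_1 + ... + alpha_K."""
    return sum(alpha)

alpha = [0.7, 0.5, 0.2]
print(rs_sum_dof(alpha))    # 1.7 = 1 + 0.5 + 0.2
print(zfbf_sum_dof(alpha))  # 1.4, strictly smaller since alpha_1 < 1
```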
V. PROOF OF THE ACHIEVABILITY OF D
In this section we show the achievability of D characterized in Section III. The region D is the K-dimensional polyhedron given by the intersection of the half-spaces described by (2) and (3). We show that D is achievable by induction over the number of users K, considering a number of antennas at the transmitter M ≥ K. The hypothesis is clearly true for K = 1. We assume that the hypothesis is valid for K = 1, . . . , k − 1 and we consider the case K = k. First, the half-spaces in (2) and (3) are delimited by the hyperplanes obtained by substituting the half-spaces' inequalities with equalities. In total, there are 2^K + K − 1 hyperplanes. Each of these hyperplanes contains a facet of the polyhedron D, and the set of all the facets corresponds to the boundary of D.
In this paper we show the achievability of D in a novel way. Instead of characterizing and showing the achievability of the corner points as in [4], we show the achievability of D by characterizing and showing the achievability of each of its facets. In fact, in [4], only the two-user case was considered. In that case the two-dimensional region boils down to a polygon and the corner points are simple to characterize. However, the characterization of the corner points appears infeasible in the K-dimensional case. Since a corner point is given by the intersection of K hyperplanes, characterizing the corner points means analysing each of the (2^K + K − 1 choose K) subsets of K hyperplanes to see if they intersect in a point. When a subset of K hyperplanes intersects in a point, we need to further verify whether such a point belongs to the outer-bound; if it does, it is a corner point. Such a procedure is infeasible for large K. Here, instead of finding the corner points, we propose a new approach where the facet contained in each of the hyperplanes delimiting D is first characterized and then the achievability of each point of the facet is shown. The facets from (3) will be shown to be achievable by RS with flexible power allocation and a flexible split of the common symbol, while the facets from (2) will be shown to be achievable by the induction hypothesis.

We first show the achievability of the facets contained in the hyperplanes which delimit the half-spaces in (3). Any of these hyperplanes is given by Σ_{i∈S} d_i = 1 + Σ_{i∈S\{s_1}} α_i, for a subset S ∈ A. We denote the facet contained in such a hyperplane by F_S. The facet F_S can be analytically characterized as the set of all the points contained in the hyperplane which satisfy all the other inequalities of the polyhedron in (2) and (3). Hence, F_S is the set of all non-negative real tuples (d_1, . . . , d_k) such that

Σ_{i∈G} d_i ≤ 1 + Σ_{i∈G\{g_1}} α_i, for all G ∈ A \ {S},     (10)
Σ_{i∈S} d_i = 1 + Σ_{i∈S\{s_1}} α_i,     (11)

where the elements of G (arranged in increasing order) are indicated as G = {g_1, . . . , g_|G|}. While the inequalities in (2) are satisfied by considering non-negative real tuples, (11) identifies the hyperplane containing F_S and the inequalities in (10) represent all the other inequalities of D in (3). Showing the achievability of F_S directly from (10) and (11) is a difficult task. We start by rewriting F_S in an equivalent form where the values which can be taken by d_j, for each user j ∈ K, are bounded through inequalities. This is obtained, for each j ∈ K, by comparing an inequality in (10), for a specific G, with the equality in (11). Then we show that the new form of F_S is achievable by RS. We first consider the case |S| ≥ 2. We start by analysing the elements j ∈ S. In the case j = s_1, we consider the inequality in (10) for the specific G = S \ {s_1} and the equality in (11).
By comparing the inequality and the equality, it follows that d_{s_1} ≥ α_{s_2}. We then move to the case j ∈ S \ {s_1}. Here, we consider the inequality in (10) for G = S \ {j} and the equality in (11).
By comparison, it follows that d_j ≥ α_j. Summarizing, for j ∈ S, we have d_{s_1} ≥ α_{s_2} and d_j ≥ α_j for j ∈ S \ {s_1}. Next, we analyse the elements j ∈ S̄, where S̄ = K \ S. The set S̄ is partitioned into three subsets, denoted S̄_1, S̄_2 and S̄_3, with S̄_1 = { j ∈ S̄ | j < s_1 }. In the case j ∈ S̄_1, we first compare the inequality in (10) for G = S ∪ {j} with the equality in (11).
We can conclude that the facet F_S is included in the set of all the non-negative real tuples (d_1, . . . , d_k) given by (16). Furthermore, it can be verified that each tuple (d_1, . . . , d_k) in (16) satisfies the conditions in (10) and (11). It follows that F_S coincides with the set of tuples described by the inequalities in (16); hence, (16) is equivalent to (10) and (11). We then show the achievability of each point of F_S through RS. First, we split F_S into two subsets, denoted F_{S,1} and F_{S,2}, on the basis of the value of d_{s_1}. The subset F_{S,1} contains all the tuples of F_S such that α_{s_2} ≤ d_{s_1} ≤ α_{s_1}, while F_{S,2} contains all the tuples of F_S such that d_{s_1} > α_{s_1}. For any value of d_{s_1}, the subsets S̄_{21} and S̄_{22} are defined as S̄_{21} = { j ∈ S̄_2 | α_j ≥ d_{s_1} } and S̄_{22} = { j ∈ S̄_2 | α_j < d_{s_1} }; they correspond to a partition of S̄_2 on the basis of the value of α_j compared to d_{s_1}. Each admissible tuple (d_1, . . . , d_k) of F_{S,1} is achieved by RS with a suitable choice of the power exponents (a_1, . . . , a_k). With such a power allocation, the DoF of the private symbols follow from (8). The common symbol's DoF, equal to d^(c) = 1 − d_{s_1} from (7), is then partitioned among the users so that equality in (9) is satisfied, and the achievability of the tuple (d_1, . . . , d_k) follows. The subset F_{S,2} is equal to F_S \ F_{S,1}. Each tuple (d_1, . . . , d_k) of F_{S,2} is likewise achieved by RS with a suitable choice of (a_1, . . . , a_k): the DoF of the private symbols follow from (8), and the DoF carried by the common symbol, equal to d^(c) = 1 − α_{s_1} from (7), is partitioned among the users so that equation (9) is satisfied and the tuple (d_1, . . . , d_k) is achievable. Since the subsets F_{S,1} and F_{S,2} are both achievable, F_S is achievable. Hence, the facets F_S for |S| ≥ 2 are achievable.

Next, we move to the case |S| = 1, i.e. S = {s_1}. The set S̄ = K \ S is partitioned into two subsets, denoted S̄_1 and S̄_2, with S̄_1 = { j ∈ S̄ | j < s_1 } and S̄_2 = { j ∈ S̄ | j > s_1 }. In the case j ∈ S̄_1, by comparing (10) for G = {j, s_1} with (11), we deduce that d_j ≤ α_{s_1}. Similarly, in the case j ∈ S̄_2, by comparing (10) for G = {s_1, j} with (11), we deduce that d_j ≤ α_j. As before, F_S is rewritten as the set of all the non-negative real tuples (d_1, . . . , d_k) bounded by these inequalities, and each such tuple is achieved by RS with a suitable choice of (a_1, . . . , a_k). The common symbol's DoF, equal to d^(c) = 1 − α_{s_1}, is given to user s_1 only, i.e. the partition is such that d^(c)_{s_1} = d^(c) and d^(c)_j = 0 for j ∈ K \ {s_1}.

We finally consider the facets contained in the hyperplanes which delimit the half-spaces in (2). Taking any j ∈ K, we denote the facet contained in the hyperplane d_j = 0 by F^(0)_j.
After removing the redundant inequalities, F^(0)_j is given by all the non-negative real tuples (d_1, . . . , d_k) which satisfy d_j = 0 together with the inequalities in (2) and (3) restricted to the subsets in Ā_j, where Ā_j is the set of all possible non-empty subsets of K \ {j} with elements arranged in ascending order. For instance, in the case K = {1, 2, 3} and j = 1, we have Ā_j = {{2}, {3}, {2, 3}}. Since d_j = 0 (user j is not served), the set of admissible tuples (d_i)_{i∈K\{j}} corresponds to the region in (2) and (3) obtained when considering the k − 1 users K \ {j}. Since we have M antennas, with M ≥ k (hence larger than k − 1), the facet F^(0)_j is achievable by the induction hypothesis. Since all facets of the polyhedron are achievable, all the remaining points of the polyhedron are achievable by time-sharing. Hence, the outer-bound D for K = k is achievable and coincides with the optimal DoF region D*.
VI. CONCLUSION
In this paper we have shown that RS is the key strategy to achieve the whole DoF region of the MISO BC with partial CSIT. The essence of RS, compared to conventional transmission techniques such as ZFBF which rely on the transmission of private symbols only, is the transmission of a common symbol on top of the private symbols. The presence of the common symbol makes it possible to tackle the multi-user interference originating from the partial CSIT more efficiently and, with a flexible power allocation for the private symbols and a flexible split of the common symbol, to achieve the entire DoF region. RS boils down to ZFBF in the case of perfect CSIT, where the common message is not needed and ZFBF is sufficient to achieve the whole DoF region.
|
2017-03-24T07:33:57.869Z
|
2017-03-21T00:00:00.000
|
{
"year": 2017,
"sha1": "dcbedf6f87296f1493d7944169a98f757bddff2f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1703.07191",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9aa79f1a81dadc2935c1e222d4fe48c298f928ee",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
}
|
244268372
|
pes2o/s2orc
|
v3-fos-license
|
Experimental Study of the Dynamic Shear Modulus Ratio and Damping Ratio of the Quaternary Sedimentary Soils in the Offshore Areas of the Yellow Sea
The effects of marine and continental sedimentary environments and geological ages on the dynamic shear modulus ratio and damping ratio of the Quaternary sedimentary soils in the offshore areas of the Yellow Sea were analyzed by using a resonant column device (GCTS, USA). The results show the following: (1) The G_max of the various marine soils increases with depth and shows a typical linear relationship. (2) The marine transgression has significantly different effects on the dynamic shear modulus ratio versus shear strain amplitude curves (i.e., G/G_max ~ γ_a curves) and the damping ratio versus shear strain amplitude curves (i.e., λ ~ γ_a curves) of the different soil types in the offshore areas of the Yellow Sea. The effects of marine transgression were strong on clays, moderate on silty clays, and minor on silts. (3) The geological ages have noticeable effects on the G/G_max ~ γ_a curves of the tested marine silty clays, marine silts, and continental silty clays, but the effects of geological ages on the λ ~ γ_a curves are minimal. The fitting parameters and recommended empirical equations of the G/G_max ~ γ_a and λ ~ γ_a curves for each type of the tested soils (silty clay, clay, and silt) were obtained, mirroring the effects of sedimentary environments and geological ages.
Introduction
The 21st century is widely considered to be the era of the ocean. All of the coastal countries have placed a higher priority on the ocean within their overall framework of national development. China has a very long coastline of more than 18,000 km, and thus China has successively proposed marine strategic plans such as the Belt and Road and the Yellow Sea Economic Circle. In particular, the Yellow Sea and its coastal areas are experiencing intensive planning and construction of a large number of offshore traffic projects and marine projects. The Yellow Sea and its coastal areas are located in the North China Seismic Zone, which has complex seismic geological structures and is frequently subject to seismic activity, including a magnitude 6 earthquake in the Yellow Sea in 1764, a magnitude 6.5 earthquake in the Yellow Sea in 1764, and a far-field magnitude 8.5 earthquake in Tancheng in 1668. There may be Late Pleistocene and Holocene faults in the zone, which increase the possibility of destructive earthquakes in the future. The offshore areas of the Yellow Sea contain a thick Quaternary sedimentary sequence comprised of soft soils, mainly marine plain deposits dominated by cohesive soils and saturated sandy soils. Strong earthquakes in this area may lead to a significant site amplification effect and result in the subsidence of soft soil or liquefaction of sandy soil, which poses a serious threat to the safety of major engineering structures and to socioeconomic activities.
The variation of the dynamic shear modulus ratio G/G max and the damping ratio λ against the shear strain amplitude γ a directly reflects the nonlinear and hysteretic characteristics of the stress-strain relationship of soils under dynamic loads. They are not only the basic dynamic parameters to describe the nonlinear hysteretic constitutive model but also essential for accurately analyzing the seismic response of soil layers.
The G/G_max and λ of soils are strongly region-specific, since soils in different regions might be from different geological ages, exist in different sedimentary facies, and have different sedimentogenesis [1][2][3][4][5][6]. At present, a large number of studies have been carried out on marine soils. For example, Koutsoftas and Fischer explored the influence of the overconsolidation ratio OCR on the dynamic characteristics of G and λ of two kinds of marine clays through resonant column and cyclic triaxial tests [7]. Liang et al. proposed a new correlation function method for the calculation of G and λ in triaxial tests and investigated the G/G_max ~ γ_a and λ ~ γ_a curves of saturated coral sand from the Nansha Islands, South China Sea, considering the influence of effective confining pressure and relative density [8]. Yang et al. established the empirical equations of G/G_max ~ γ_a curves for the undisturbed soils in the Yangtze River estuary [9]. Morsy et al. evaluated the dynamic characteristics of Egyptian calcareous sand in the range of small and medium shear strains [10]. Wu et al. researched the small-strain stiffness of marine silty sand [11]. Senetakis and Payan conducted small-strain resonant column tests in torsional and flexural modes of vibration which quantified two types of damping ratios [12]. Some tests have been conducted on the dynamic characteristics of fine-grained soil, especially high-plasticity clay [13]. Based on field and laboratory tests, researchers studied the static and dynamic properties of soils in Catania [14]. Głuchowski et al. carried out a laboratory characterization of a compacted unsaturated silty sand; the results showed that the compaction procedure caused an overconsolidation state dependent on the moisture content during the compaction effort [15]. Feng et al. used the resonant column test to evaluate the influence of confining pressure, mix ratio, curing age, and cement content on the dynamic characteristics of subsea sand-silt mixtures [16]. Khosravi et al. developed a new methodology to extend an existing small-strain shear modulus G_max model to determine the G_max of unsaturated silty soils along different paths of the soil water retention curve, including the scanning loops [17].
However, few studies have paid attention to the influence of sedimentary environment and geological age on the dynamic properties of soil, even though these factors can have a significant impact on the dynamic shear modulus ratio and damping ratio. Furthermore, experimental studies on the G and λ characteristics of the soils in the coastal area of the Yellow Sea have not been reported yet.
To fill this knowledge gap, in this study, the dynamic shear modulus ratios and damping ratios of the soils in a typical region of the coastal areas of the Yellow Sea are tested by resonant column tests. The effects of sedimentary facies and geological ages were explored in detail. The results of this study provide a scientific and theoretical basis to analyze seismic site effects for major engineering sites.
Engineering Geological Conditions of the Study Region
The study area is located near the coast of the Yellow Sea, and most parts of the region are less than 5 m above sea level, falling in the category of a coastal marine plain. The Quaternary sediments in the region are more than 200 m thick and have experienced marine transgression five times since the late Early Pleistocene. With the alternating formation of fluvial, lacustrine, and marine deposits, broad coastal facies and alluvial facies with soft clay layers and saturated sand layers have formed. The strata follow a clear chronological sedimentary sequence, and the sedimentary environments and geological ages of the soils have a significant impact on their dynamic deformation characteristics. The soil samples were identified based on their colors and the existence of shells, calcareous nodules, and iron-manganese oxides. These identifications, in conjunction with the comparison between the borehole logs and the relevant geological maps, account for the classification of soil categories, sedimentary facies, and geological ages of the tested samples. The classification, summarized in Table 1, reveals that there are 19 clayey soil samples in the shallow layers within 100 m below the surface. 75% of the silt samples were deposited in the Pleistocene, which is attributed to the fact that the seawater in the marine transgression carried a large amount of granular soils from the rivers, lakes, and marine facies into the flat areas where separation and sedimentation took place. The Holocene silt sand samples were mainly deposited in the marine facies, while the silty clay samples from both the marine and continental facies were primarily deposited in the Holocene and Pleistocene.
Soil Sampling and Testing
3.1. Soil Sampling. A total of 89 undisturbed soil samples were collected from 14 boreholes in the coastal area of the Yellow Sea by using the in situ soil tube method. The depths of the soil samples range from 0 to 100 m. As shown in Figure 1, the borehole sites were distributed close to each other so as to reveal the dynamic characteristics of the soils in more detail. Each undisturbed soil sample was prepared into a solid cylindrical specimen with a diameter of 50 mm and a height of 100 mm.

The resonant column tests were performed with the GCTS TSH-100 apparatus (Figure 2). The torque (or rotation) and the cell pressure are controlled independently by the apparatus. The consolidation pressure is provided by a pneumatic servo system, and a fully automated floating torsional drive is attached to excite the top of the samples. Isotropic consolidation was conducted after the soil specimen was installed into the test apparatus, with a membrane filmed outside it. The effective confining pressure was determined according to the depth of the soil layer. The durations of the consolidation were more than 3 hours for cohesionless soils and 12 hours for cohesive soils. After consolidation, resonant column testing was conducted by applying multistage frequency-sweeping excitation to the top of the specimen following ASTM D4015-92 to measure the shear modulus G and damping ratio λ in the shear strain range of 10^-6 to 10^-3. The schemes of the resonant column tests, specifically the index properties of the specimens and the corresponding effective confining pressures, are listed in Table 1.
Testing Results and Analysis
Since the sedimentary environment and geological age of the soils have a significant impact on their dynamic shear modulus ratios and damping ratios, the soil samples were examined to determine their colors and whether or not they contained shells, calcareous nodules, and/or iron-manganese oxides. The observations were made in conjunction with a comparative analysis of the borehole logs and the relevant geological maps in order to categorize the soil samples based on their soil properties, sedimentary facies, and geological ages. The results (Table 2) revealed that there were many clayey soils in the shallow layers within 100 m of the surface, while the silt samples were mainly deposited in the Pleistocene, which is attributed to the fact that the seawater in the marine transgression carried a large amount of granular soils from the rivers, lakes, and marine facies into the flat areas where separation and sedimentation took place. In contrast, the silty clay samples from both the marine and continental facies were primarily deposited in the Holocene and Pleistocene, and the Holocene clay samples were mainly deposited in the marine facies. Figure 3 shows typical results of the resonant column test. The strain amplitude of the sample under different excitation frequencies is shown in Figure 3(a), from which the resonance frequency f_1, at which γ_a reaches its maximum under the corresponding excitation load, can be obtained. The strain time history of the sample at the resonance frequency is shown in Figure 3(b), and the strain time history under free vibration is shown in Figure 3(c).
4.1. Change of G_max with Depth. As an important parameter for evaluating the dynamic characteristics of soil and characterizing its maximum elastic stiffness, the maximum dynamic shear modulus G_max is usually defined as G when γ_a ≤ 10^-6. According to the hyperbolic relationship between soil dynamic modulus and dynamic strain under small vibration proposed by Hardin and Drnevich [18], a linear relationship between 1/G and γ_a is obtained as 1/G = a + b γ_a. The hyperbolic model can then be extrapolated to γ_a → 0 to obtain the maximum dynamic shear modulus G_max of the marine soil, i.e., G_max = 1/a.
The TSH-100 resonant column test system developed by the GCTS company can measure the dynamic shear modulus G of the soil in the shear strain range of 10^-6 to 10^-3. Using Equation (1), the G_max of the marine soils at different depths was obtained, ranging from 15 to 140 MPa. Figure 4 shows the G_max values of the various marine soils and their variation with depth. It can be seen that the G_max of the various marine soils increases with depth and shows a typical linear relationship. The predictive relationship between G_max and depth can be expressed as G_max = 15.37 × h + 1.13, where h is the depth of the sample in m.
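As an illustrative sketch (not the authors' processing code), G_max can be estimated from resonant-column data by fitting the Hardin-Drnevich linearization 1/G = a + b γ_a and extrapolating to γ_a → 0, i.e. G_max = 1/a; the data below are synthetic.

```python
import numpy as np

def estimate_gmax(gamma_a, G):
    """Least-squares fit of 1/G = a + b*gamma_a; G_max is the extrapolation
    gamma_a -> 0, i.e. G_max = 1/a (intercept)."""
    slope, intercept = np.polyfit(np.asarray(gamma_a), 1.0 / np.asarray(G), 1)
    return 1.0 / intercept

# synthetic check: data generated from an H-D hyperbola with G_max = 80 MPa
gamma = np.array([1e-6, 5e-6, 2e-5, 1e-4, 5e-4])
G = 80.0 / (1.0 + gamma / 3e-4)            # reference strain gamma_0 = 3e-4
print(round(estimate_gmax(gamma, G), 1))   # -> 80.0
```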
4.2. Effects of the Sedimentary Environment on the G/G_max ~ γ_a and λ ~ γ_a Curves. The three-parameter Martin-Davidenkov model was adopted to investigate the variation of the dynamic shear modulus ratio G/G_max with the shear strain amplitude γ_a of the tested soils, since it has been proven to fit experimental data well for soil samples in Jiangsu Province, China [5]. The model is expressed by Equation (2), where A, B, and γ_0 are the best-fitting parameters. In particular, in the case A = 1 and B = 0.5, the Martin-Davidenkov model simplifies to the H-D hyperbolic model [19], where γ_0 denotes the reference shear strain and is equal to the shear strain value at which G/G_max = 0.5 [20].
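Equation (2) itself did not survive conversion; the sketch below assumes the commonly used three-parameter Martin-Davidenkov form, which matches the stated properties (it reduces to the H-D hyperbola for A = 1, B = 0.5). This is our assumption about the functional form, not a reproduction of the paper's equation.

```python
import numpy as np

def g_ratio_md(gamma_a, A, B, gamma0):
    """Assumed Martin-Davidenkov form: G/G_max = 1 - [x / (1 + x)]^A with x = (gamma_a/gamma0)^(2B).
    For A = 1 and B = 0.5 this reduces to the H-D hyperbola 1 / (1 + gamma_a/gamma0)."""
    x = (np.asarray(gamma_a) / gamma0) ** (2 * B)
    return 1.0 - (x / (1.0 + x)) ** A

gamma = np.logspace(-6, -3, 4)                       # working strain range of the device
print(g_ratio_md(gamma, A=1.0, B=0.5, gamma0=3e-4))  # hyperbolic special case
print(g_ratio_md(gamma, A=1.1, B=0.4, gamma0=3e-4))  # generic fitted-parameter example
```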
The damping ratio versus shear strain amplitude (λ ~ γ_a) curves of each tested specimen were fitted and analyzed using the empirical Equation (3) proposed by Chen et al. [5], where λ_min is the basic damping ratio of a soil sample under very small strain, which is related to the soil properties and consolidation state, and λ_0 and β are the shape coefficients of the λ ~ γ_a curve.

Figure 5 illustrates the effects of marine and continental sedimentary environments on the G/G_max ~ γ_a and λ ~ γ_a curves for the various types of soils. For Holocene silty clays, the G/G_max ~ γ_a curves of the marine silty clays are slightly lower than those of the continental silty clays, while the λ ~ γ_a curves of the marine silty clays are higher than those of the continental silty clays. This is mainly attributed to the fact that the marine silty clays were dominated by muddy silty clay deposited during the Holocene, and therefore they exhibit stronger nonlinearity. For Pleistocene clays, the G/G_max ~ γ_a curves of the marine clays are higher than those of the continental clays, while the λ ~ γ_a curves of the marine clays are lower than those of the continental clays, which may be due to the higher strength of the clay crust formed during the marine transgression. The differences between the marine and continental silty clays in terms of their G/G_max ~ γ_a and λ ~ γ_a curves are smaller than the differences between the marine and continental clays. The G/G_max ~ γ_a curves of the marine silty clays are slightly lower than those of the continental silty clays, while the marine and continental silty clays have similar λ ~ γ_a curves. For the Pleistocene silt sands, the marine G/G_max ~ γ_a and λ ~ γ_a curves are similar to the continental ones. In general, within the working shear strain range (10^-6 to 10^-3) of the resonant column device, the G/G_max ~ γ_a curves of the marine soils are slightly higher than those of the continental soils, and the differences between the λ ~ γ_a curves of the two types of soils are even smaller.
4.3. Effects of Geological Age on the G/G_max ~ γ_a and λ ~ γ_a Curves. Due to the limited number of sampling sites, the Pleistocene soils were not further classified into different Pleistocene stages. The G/G_max ~ γ_a and λ ~ γ_a curves of the various soil types were compared for the two geological age categories, the Pleistocene versus the Holocene (Figure 6). The results show that, in general, the geological age strongly affects the G/G_max ~ γ_a and λ ~ γ_a curves of the marine soils (silty clays and silt sands). The G/G_max ~ γ_a curves of the Pleistocene marine soils are higher than those of the Holocene marine soils, while the λ ~ γ_a curves of the Pleistocene marine soils are lower than those of the Holocene marine soils. For continental deposits, geological age has a clear effect on the G/G_max ~ γ_a curves of the continental silty clays: the G/G_max ~ γ_a curves of the Pleistocene continental silty clays are significantly higher than those of the Holocene continental silty clays. However, the effect of geological age on the λ ~ γ_a curves is quite weak. Overall, geological age has less effect on the λ ~ γ_a curves of the continental soils than on those of the marine soils. (Figure 6: Influence of geological age on the G/G_max ~ γ_a and λ ~ γ_a fitting curves of the different deposits.)
As previously discussed, the soils formed in the Quaternary marine sedimentary environment in the offshore areas of the Yellow Sea have significantly different dynamic shear modulus ratios and damping ratios from those formed in the Quaternary continental sedimentary environment. The effect of geological age on G/G_max is similar in character to its effect on λ, though slightly greater in magnitude. For ease of application in practical engineering, the fitting parameters of the G/G_max ~ γ_a and λ ~ γ_a curves were obtained for each type of the classified samples (Table 3). For silty clay and silt, the more recent the sedimentary age, the smaller the value of A and the larger the value of B. The average G/G_max and λ values at various shear strain levels, calculated with the recommended parameters, are presented in Table 4.
Conclusions
The sedimentary characteristics of the Quaternary soils formed during the transgression in the offshore areas of the Yellow Sea were investigated. The dynamic shear modulus ratios and the damping ratios of the soil samples were tested considering the effects of sedimentary environments, geological ages, and soil types. The main conclusions are as follows.
(1) The G_max of the various marine soils increases with depth and shows a typical linear relationship. (2) The effects of marine transgression on the G/G_max ~ γ_a and λ ~ γ_a curves of the Quaternary soils in the offshore areas of the Yellow Sea are significant. The effects are strong on the clays, moderate on the silty clays, and minor on the silts. The G/G_max ~ γ_a and λ ~ γ_a curves of the Pleistocene marine clays are, respectively, higher and lower than those of the continental clays, which may be due to the higher strength of the clay crust formed during the marine transgression. (3) The effects of geological ages on the G/G_max ~ γ_a and λ ~ γ_a curves of the Quaternary soils in the offshore areas of the Yellow Sea differ significantly between soil types. The effects were strong on the marine silty clays, marine silts, and continental silty clays. (4) Compared with the sedimentary environment, geological age generally had a greater effect on the G/G_max ~ γ_a and λ ~ γ_a curves of the various types of Quaternary soils in the offshore areas of the Yellow Sea. The fitting parameters of the G/G_max ~ γ_a and λ ~ γ_a curves were obtained for each soil type under different sedimentary environments and geological ages, and the averaged curves for each type of soil are recommended for application in practical engineering.
Data Availability
The data are generated from experiments and can be available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
|
2021-10-18T17:33:01.167Z
|
2021-09-30T00:00:00.000
|
{
"year": 2021,
"sha1": "d3ccca2335d73267ad9a7dd130d1692588b1fc88",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2021/8374741",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d0c1bf506c959e77448081129b5b7d4585c1af4c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Geology"
]
}
|
54900631
|
pes2o/s2orc
|
v3-fos-license
|
Differential Expression of Serum Ceruloplasmin And α2-HS Glycoprotein among Nasopharyngeal Carcinoma Patients
The two-dimensional gel electrophoresis (2-DE) approach was used to evaluate the simultaneous expression of serum proteins in patients with nasopharyngeal carcinoma (NPC) and to detect differentially expressed proteins which could be used as specific and sensitive biomarkers for early diagnosis of the disease. We subjected unfractionated whole sera of ten newly diagnosed Malaysian Chinese patients with World Health Organization (WHO) type III NPC to 2-DE and image analysis. The results obtained were then compared to those generated from the sera of ten normal healthy controls of the same ethnic group and age range. Our data showed higher expression of ceruloplasmin (CPL) and lower expression of α2-HS glycoprotein (AHS) in the serum high-abundance 2-DE protein profiles of NPC patients as compared to those of the normal healthy controls. The ceruloplasmin (CPL) and α2-HS glycoprotein (AHS) spots were identified by mass spectrometric analysis and Mascot database search. In conclusion, this information may help in the early diagnosis of nasopharyngeal carcinoma.
Introduction
Nasopharyngeal carcinoma (NPC) is the cancer occurring in the mucosal lining (squamous, columnar, and transitional epithelium) and the minor salivary glands of the nasopharynx [1]. It is a rare malignancy in western countries (less than 1 case per 100,000 person-years) and is one of the most confusing, commonly misdiagnosed, and poorly understood diseases. However, it occurs more frequently in China (more than 20 cases per 100,000 person-years among Southern Chinese) and Southeast Asia [2]. The rate of incidence generally increases from age 20 years to around 50 years [3]. In Malaysia, NPC is the third most common cancer in men (males: 8.5 cases per 100,000 person-years; females: 2.6 cases per 100,000 person-years), and the rate is highest among Chinese men (15.9 cases per 100,000 person-years) [4].
According to the World Health Organization (WHO), nasopharyngeal carcinoma is classified into three histological categories. Type I represents well to moderately differentiated squamous cell carcinomas with keratin production. Type II includes nonkeratinizing carcinomas. Type III comprises a diverse group of carcinomas, and these lesions often are described as undifferentiated carcinomas or lymphoepitheliomas. Types II and III are commonly associated with elevated Epstein-Barr virus (EBV) titers and have a better prognosis than type I [2]. Nasopharyngeal carcinoma shows a remarkably high cure rate for early-stage disease, and early detection is critical to improve the overall prognosis of patients. However, the presenting clinical features of NPC are often nonspecific, and examination of the nasopharynx requires expertise, which renders early detection difficult. Thus, diagnosis of NPC is mainly made by biopsy of the nasopharyngeal mass [5,6]. At present, the detection of IgA (Immunoglobulin A) antibodies to EBV-specific antigens is the most common serological aid to the diagnosis of NPC [7].
The proteomics approach offers a paradigm shift in studies of the simultaneous expression of serum proteins in patients with cancer. Detection of selectively or aberrantly expressed serum proteins in cancer patients may prompt investigations of their potential application as novel diagnostic or prognostic biomarkers. A few biomarkers have been identified and proposed by proteomics technologies in NPC, including ceruloplasmin, fibronectin, Mac-2 binding protein, plasminogen activator inhibitor 1, and inter-alpha-trypsin inhibitor precursor [3,6,8]. Some metabolites also appear promising as biomarkers for the diagnosis of NPC, namely hydroxyphenylpyruvate, N-acetylglucosaminylamine, N-acetylglucosamine, and kynurenine [6].
In the present study, it was thought worthwhile to apply two-dimensional gel electrophoresis (2-DE) to the serum of Malaysian NPC patients to detect differentially expressed proteins which could be used as specific and sensitive biomarkers for early diagnosis of the disease. Hence, we subjected unfractionated whole sera of ten Malaysian Chinese patients with NPC and ten normal healthy controls of the same ethnic group and age range to 2-DE and image analysis. The differentially expressed proteins were identified by mass spectrometric analysis and Mascot database search.
Serum samples
Ten serum samples were obtained from Malaysian Chinese patients with NPC (7 males and 3 females), with ages ranging from 40 to 65 years. The samples had been collected for routine laboratory testing prior to treatment. Through histopathological tests, all patients were confirmed as having undifferentiated carcinoma or WHO Type III NPC, of either stage T1N1M0 or stage T2N2M0 [2]. Control sera were obtained from another ten normal healthy Malaysian Chinese volunteers with the same gender distribution and age range as the patients. All serum samples were kept at -20°C and subjected to the same treatment. This research was conducted in the Proteomic and Drug Discovery laboratory at the Forest Research Institute of Malaysia (FRIM). The research proposal was approved by the Ethics Committee of SEGi University.
Two-dimensional gel electrophoresis
2-DE was performed as described previously, using precast immobilized dry strips from Amersham Biosciences, Uppsala, Sweden [8][9][10][11]. Seven μl (450 μg protein) of unfractionated whole human serum was loaded onto rehydrated 11 cm precast immobilized dry strips with a pH range of 4 to 7 for isoelectric focusing. For the second-dimension analysis, the strips were subjected to electrophoresis on an 8% to 18% gradient polyacrylamide gel in the presence of sodium dodecyl sulphate. Tests were conducted and analyzed in triplicate.
Silver staining
The 2-DE gels were developed by silver staining as previously described by Heukeshoven and Dernick [12]. For mass spectrometric analysis, gels were stained according to the method of Shevchenko [13].
MALDI-ToF Pro analysis
All resolved protein spots appearing in the protein profiles were identified using the standard plasma protein reference, SWISS ExPASy [14]. The identities of α1-antitrypsin (AAT), α1-B glycoprotein (ABG), α2-HS glycoprotein (AHS), complement factor B (CFB), clusterin (CLU) and the β chain of haptoglobin (HAP) were confirmed as described before [9]. The identification of the ceruloplasmin (CPL) and α2-HS glycoprotein (AHS) spots was carried out similarly by using the Ettan MALDI-ToF Pro, while in-gel trypsin digestion was performed according to the method of Shevchenko [13]. Mass analyses were conducted using a mixture of 1 μl of extracted sample with an equal volume of matrix solution consisting of 10 mg/ml alpha-cyano-4-hydroxycinnamic acid in 0.5% trifluoroacetic acid (TFA) and 50% acetonitrile (ACN). An aliquot of 0.3 μl of the solution was placed onto the slide loader.
Database search
This study utilized the Mascot program (www.matrixscience.com) to search the protein database. The program uses peptide mass fingerprints (PMFs) to locate matching peptides from known proteins in the database. The following parameters were used during the search: trypsin digests (one missed cleavage allowed), species: Homo sapiens, mass values: monoisotopic, peptide mass tolerance: ± 0.1 Da, peptide charge state: 1+, and the NCBInr database. Protein identifications were further subjected to the Amersham Biosciences Ettan MALDI software for confirmation.
Image analysis
Using the Molecular Analyst PD Quest densitometry software (Bio-Rad, Hercules, Calif., USA), protein spots were analyzed in terms of volume. The background noise was subtracted and the analysis was confined to ten clusters of protein spots with M_r ≥ 30,000 that were distinctively separated by 2-DE, comprising AAT, ABG, AHS, CFB, CLU, CPL, HAP and three unidentified proteins termed PR1, PR2 and PR3. The majority of the remaining proteins, such as albumin, serum polypeptides having idiotypic and/or allotypic variations (the heavy and light chains of all immunoglobulin isotypes and the α chains of haptoglobin), and low-M_r protein spots were not assessed. In this study, the percentage of volume contribution refers to the volume percentage of a protein taken against the total spot volume of all proteins, including the unresolved peptides, in each gel.
Statistical analysis
Values are presented as the mean ± SD (standard deviation). The independent-samples t test was used to analyze the significance of differences between normal subjects and patients. A p value of less than 0.05 was considered statistically significant.
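A minimal sketch of this analysis pipeline (percentage of volume contribution followed by an independent-samples t test) is given below; the numerical values are hypothetical and only illustrate the procedure, they are not the study's data.

```python
import numpy as np
from scipy import stats

def percent_volume(spot_volume, total_volume):
    """Percentage of volume contribution of a protein spot relative to the total
    spot volume of all proteins (including unresolved peptides) in the same gel."""
    return 100.0 * spot_volume / total_volume

def compare_groups(patient_values, control_values, alpha=0.05):
    """Independent-samples t test on the percent-volume values of one protein."""
    t, p = stats.ttest_ind(patient_values, control_values)
    return {"mean_patient": np.mean(patient_values), "sd_patient": np.std(patient_values, ddof=1),
            "mean_control": np.mean(control_values), "sd_control": np.std(control_values, ddof=1),
            "t": t, "p": p, "significant": p < alpha}

# hypothetical CPL percent-volume values for 10 patients and 10 controls
cpl_patients = [2.1, 2.4, 2.0, 2.6, 2.3, 2.5, 2.2, 2.7, 2.4, 2.3]
cpl_controls = [1.5, 1.7, 1.4, 1.6, 1.8, 1.5, 1.6, 1.4, 1.7, 1.6]
print(compare_groups(cpl_patients, cpl_controls))
```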
Results and Discussion
Our procedure of separating the unfractionated whole sera of normal healthy individuals by 2-DE generated typical high-resolution serum profiles consisting of only the high-abundance proteins. Under the conditions of our experiments, the protein spots usually generated and observed by silver staining were albumin, the heavy and light chains of IgA, IgG and IgM, α1-antitrypsin, α1-B glycoprotein, α2-HS glycoprotein, complement factor B, clusterin, the β chain of haptoglobin and three other unidentified protein spot clusters (Figure 1a). The identities of the above known high-abundance acute-phase proteins have been reported before [9]. Similarly, when we subjected unfractionated serum samples from the 7 male and 3 female newly diagnosed patients with NPC to 2-DE and silver staining under the same experimental conditions, comparable results were obtained, with the exception of the differentially expressed proteins CPL and AHS. Figure 1b demonstrates a typical representative unfractionated serum protein profile of patients with NPC. Image analysis comparing the profiles of NPC patients and normal controls indicated significantly higher expression of CPL (p=0.0001) and lower expression of AHS (p=0.01) in patients. However, comparable results were obtained for all the other high-abundance serum proteins analyzed (Figure 2). Both the CPL (Figure 3) and AHS (Figure 4) spots were identified by using the Ettan MALDI-ToF Pro mass spectrometer via in-gel trypsin digestion. The results of this study showed that 2-DE of NPC sera is highly reproducible; a previous report indicated that image analysis performed on triplicate 2-DE gels of serum samples produced minimal relative standard deviations in the percentage values of volume contribution of all serum protein spots analyzed [9]. Another previous study has also shown overexpression and the highest serum levels of CPL in nasopharyngeal cancer patients [8], and higher levels of ceruloplasmin (CPL) in the sera of patients with various other types of cancers have also been described [15-23]. (Figure 3: MALDI-ToF Pro mass spectrum of ceruloplasmin (CPL); the identity of CPL was established in reflector mode with a MASCOT database search of the peptide mass fingerprints. Figure 4: MALDI-ToF Pro mass spectrum of α2-HS glycoprotein (AHS); the identity of AHS was established in reflector mode with a MASCOT database search of the peptide mass fingerprints.)
In the present study, higher expression of CPL was again detected in the sera of all NPC patients. In addition, the expression of AHS was lower in the sera of patients, while most of the other high-abundance serum proteins resolved by 2-DE were comparable to those of the control sera. A lower serum level of AHS in nasopharyngeal cancer was previously reported [24], which supports our finding in this study. The CPL and AHS spots were identified by mass spectrometric analysis and Mascot database search. The sera of NPC patients generated typical protein profiles that also differed from the previously reported 2-DE protein profiles of patients with breast cancer and fibrocystic disease of the breast [9]. Our data showed higher expression of CPL and lower expression of AHS in the serum high-abundance 2-DE protein profiles of NPC patients as compared to those of the normal healthy controls. In conclusion, this information may help in the early diagnosis of nasopharyngeal carcinoma.
|
2019-03-12T13:05:57.185Z
|
2015-05-30T00:00:00.000
|
{
"year": 2015,
"sha1": "94f0e17e1a5d8f521e3fa3aa64a1a6c6705d3ab8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4172/2155-9929.s2-010",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f5070c8f971523e3605f078e5b8a0287666bae43",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
118429057
|
pes2o/s2orc
|
v3-fos-license
|
Impurity-doped Kagome Antiferromagnet: A Quantum Dimer Model Approach
The doping of quantum Heisenberg antiferromagnets on the kagome lattice by non-magnetic impurities is investigated within the framework of a generalized quantum dimer model (QDM) describing a) the valence bond crystal (VBC), b) the dimer liquid and c) the critical region on equal footing. Following the approach by Ralko et al. [Phys. Rev. Lett. 101, 117204 (2008)] for the square and triangular lattices, we introduce the (minimal) extension of the QDM on the kagome lattice to account for the spontaneous creation of mobile S=1/2 spinons at finite magnetic field. Modulations of the dimer density (at zero or finite magnetic field) and of the local field-induced magnetization in the vicinity of impurities are computed using Lanczos Exact Diagonalization techniques on small clusters (48 and 75 sites). The VBC is clearly revealed by its pinning at impurities, while, in the dimer liquid, crystallization around impurities involves only two neighboring dimers. We also find that a next-nearest-neighbor ferromagnetic coupling favors VBC order. Unexpectedly, a small-size spinon-impurity bound state appears in some region of the dimer liquid phase. In contrast, in the VBC phase the spinon delocalizes within a large region around the impurity, revealing the weakness of the VBC confining potential. Lastly, we observe that an impurity concentration as small as 4% enhances dimerization substantially. These results are confronted with the Valence Bond Glass scenario [R.R.P. Singh, Phys. Rev. Lett. 104, 177203 (2010)], and implications for the interpretation of the Nuclear Magnetic Resonance spectra of the Herbertsmithite compound are outlined.
I. INTRODUCTION
The spin-1/2 quantum Heisenberg antiferromagnet (QHAF) on the kagome lattice (see Fig. 1(a)) is believed to be the paradigm of frustrated quantum magnetism. It is also the best candidate in which to observe the exotic (non-magnetically ordered) phases predicted by theorists. Among those, an algebraic spin liquid 1, a gapped dimer- or spin-liquid 2,3, and a number of valence bond crystals (VBC) with 12-site 4, 18-site 5 and 36-site [5-7] unit cells or with columnar dimer order 8 have all been proposed as possible ground states (GS) of the kagome QHAF.
On the experimental side, Herbertsmithite 9 is an almost perfect realization of a kagome QHAF. In this material Copper atoms carrying spin-1/2 degrees of freedom (oxidation state +2) are located on the sites of (very weakly coupled) kagome layers. In addition to the crystallographic structure, the closeness to an ideal system is supported by the remarkable absence of any magnetic ordering at the lowest attainable temperatures 10 , as expected for a highly frustrated quantum magnet.
However, small deviations from a perfect kagome QHAF are known in this material.
In addition to a small Dzyaloshinskii-Moriya (DM) anisotropy 11, Herbertsmithite contains a small amount of non-magnetic Zinc impurities substituting for Copper atoms in the kagome planes. Nuclear Magnetic Resonance (NMR) offers a powerful tool to probe impurities via the change induced by a nearby impurity on the local magnetic field at the nucleus whose resonance is being measured. The first estimations of the Zinc concentration from NMR 12 range from 6 to 10%. It is therefore not clear which experimental features in Herbertsmithite can really be attributed to intrinsic properties of the kagome QHAF and which of them are induced, e.g., by Zinc substitution.
One of the strong motivations of this work is to investigate the effect of spinless impurities in quantum disordered (i.e. non-magnetic) phases, which are the most serious candidates for the Herbertsmithite ground state (GS). Starting from a simple description of this material in terms of an SU(2)-symmetric kagome spin-1/2 QHAF (neglecting the small DM anisotropy), recent calculations of NMR spectra 13 seem to reflect key experimental features 12. On the other hand, the local susceptibilities (Knight shifts) obtained by high-temperature series expansion 14 were claimed to agree with an algebraic (gapless) spin liquid phase. Here we provide an alternative analysis based on a generalized quantum dimer model (QDM) which can naturally describe a) a topological (so-called Z_2) spin liquid, b) a VBC and c) the critical region on equal footing, and distinguish the features around impurities which are specific to each phase. The paper is organized as follows: first, in Section II, we describe the QDM framework, the modeling of the static dopants and the simplest extension of the QDM to account for a small magnetization (induced by a finite magnetic field) in the ground state. In Section III, the results for a single embedded impurity are described. This includes the investigation of the pinning of the VBC and the calculation of the spin density of a doped mobile spinon around the vacancy. Then, in Section IV, we move on to the case of several impurities and show how the dimerization pattern can be enhanced. A discussion of the experimental NMR data in the light of our results is given in Section V, as well as some concluding remarks in Section VI.

A. Impurities described as vacancies

Let us first discuss qualitatively the effect of a single impurity 34. A Zinc impurity is a spinless ion and therefore can be described as a vacant site, as depicted in Figs. 1(b,c). In other words, the bond exchange interaction S_i · S_j (in units of the exchange coupling J) acts on all bonds of the kagome lattice except on the four bonds connected to the impurity. Using Lanczos exact diagonalizations (LED) of small clusters of the kagome QHAF 15, it has been shown that a single impurity tends to 'localize' two singlet bonds next to it [16][17][18]. Although this local phenomenon is well captured even on very small clusters 13, LED of the QHAF does not allow one to investigate reliably the perturbation of the medium at larger distances from the impurity, which can be probed e.g. by NMR. Therefore, we shall use here an effective description based on the recently developed QDM, allowing us to handle larger clusters 19,20.

B. Competing singlet phases in the pure system

Starting from a projection of the QHAF onto the (non-orthogonal) NN singlet basis 21 (an approximate resonating VB description), followed by a transformation onto an ad-hoc orthogonal 'dimer' basis and, finally, by a series expansion (in terms of a small overlap parameter α = 1/√2), one gets an effective QDM of the type shown in Fig. 2. Here we use the amplitudes obtained in Ref. 20, V_6 = 1/5, V_8 = 2/63, V_10 = 1/255, V_12 = 0, J_6 = −4/5, J_8 = 16/63, J_10 = −16/255, J_12 = 0 (in units of J), which defines our "Heisenberg" effective QDM H_Heis. For convenience, the labels γ in the amplitudes J_γ and V_γ correspond to the lengths of the associated resonance loops. An exactly solvable "Rokhsar-Kivelson" (RK) QDM H_RK, defined by vanishing diagonal amplitudes V_γ = 0 and equal kinetic amplitudes J_γ (set here to −1/4), has also been introduced 23. Following Ref. 22, we shall consider a linear interpolation between the two Hamiltonians, H_eff(λ) = λ H_Heis + (1 − λ) H_RK. As we shall see next, it is remarkable that this "generalized" QDM can describe the topological dimer liquid, the VBC and the critical region on equal footing. Whether this model (for λ = 1 or around this value) describes faithfully the original (unprojected) microscopic QHAF on the kagome lattice is still under debate. However, many insights into how impurities behave generically in the quantum disordered phases described by this effective model can be obtained. So, prior to the introduction of impurities, let us give a brief summary of the results for the pure generalized QDM (in zero magnetic field). Generically, short- or long-range ordering of the dimers can occur, yielding dimer liquids or VBCs, respectively. Numerical results 22 show that a 36-site unit cell VBC (shown schematically in Fig. 3(a)) is stabilized for λ > λ_C ≈ 0.94, in particular at the "Heisenberg" point λ = 1. A remarkable (topological) gapped Z_2 dimer liquid, which takes an exact analytical form at the RK point λ = 0, is stable for λ < λ_C, as shown in the phase diagram of Fig. 3(b). Besides, at exactly λ = 1, J_12 vanishes, so that the undetermined chiralities of the pinwheel (or star) resonances of the VBC pattern (see e.g. Ref. 22 for details) lead to an extra (Ising-like) macroscopic degeneracy. These findings at λ = 1 are in perfect agreement with recent series expansions on the QHAF 7. Note that the proximity to a Quantum Critical Point (QCP) suggests that dimer fluctuations remain strong and that the VBC amplitude is weak at λ = 1. However, it should be stressed here that the derivation of the λ = 1 effective QDM is only semi-quantitative, since it relies i) on the projection of the Heisenberg Hamiltonian onto the singlet subspace spanned by the NN VBs and ii) on a (controlled) truncation of a series expansion. Besides, the real material might contain longer-range exchange interactions. This suggests that a faithful description of the material in terms of a generalized QDM might not correspond exactly to λ = 1. We can however consider λ as a "phenomenological" parameter which enables us to "tune" the system into one of the two quantum disordered phases or into the critical region in between, in order to investigate the generic role of impurities in each case.

C. Extending the QDM to finite magnetic field

Spinon hoppings - In an NMR setup a small magnetic field is applied to slightly polarize the system. The local site-dependent magnetization is then experimentally accessible via the measured Knight shift.
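To make the interpolation concrete, the short sketch below (ours, not code from the paper) tabulates the diagonal (V_γ) and kinetic (J_γ) amplitudes of H_eff(λ) = λ H_Heis + (1 − λ) H_RK from the values quoted above, for the RK point, the estimated critical point λ_C ≈ 0.94 and the "Heisenberg" point.

```python
# Amplitudes of the generalized QDM: "Heisenberg" values (in units of J) and RK values.
HEIS = {"V": {6: 1/5, 8: 2/63, 10: 1/255, 12: 0.0},
        "J": {6: -4/5, 8: 16/63, 10: -16/255, 12: 0.0}}
RK   = {"V": {g: 0.0 for g in (6, 8, 10, 12)},
        "J": {g: -0.25 for g in (6, 8, 10, 12)}}

def interpolated_amplitudes(lam):
    """Amplitudes of H_eff(lambda) = lambda*H_Heis + (1-lambda)*H_RK for loop lengths 6, 8, 10, 12."""
    return {kind: {g: lam * HEIS[kind][g] + (1 - lam) * RK[kind][g] for g in (6, 8, 10, 12)}
            for kind in ("V", "J")}

for lam in (0.0, 0.94, 1.0):   # RK point, estimated critical point, "Heisenberg" point
    print(lam, interpolated_amplitudes(lam))
```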
A. Impurities described as vacancies
Let us first discuss qualitatively the effect of a single impurity 34 . A Zinc impurity is a spinless ion and can therefore be described as a vacant site, as depicted in Figs. 1(b,c). In other words, the bond exchange interaction S i · S j (in units of the exchange coupling J) acts on all bonds of the kagome lattice except on the four bonds connected to the impurity. Using Lanczos exact diagonalizations (LED) of small clusters of the kagome QHAF 15 , it has been shown that a single impurity tends to 'localize' two singlet bonds next to it [16][17][18] . Although this local phenomenon is well captured even on very small clusters 13 , LED of the QHAF does not allow one to investigate reliably the perturbation of the medium at larger distances from the impurity, which can be probed e.g. by NMR. Therefore, we shall use here an effective description based on the recently developed QDM, which allows one to handle larger clusters 19,20 .
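To make the vacancy modelling concrete, the following minimal Python sketch shows how the bonds touching an impurity site are simply removed from the list of active exchange bonds before any effective Hamiltonian is built. The bond list, site labels and helper name are toy placeholders of ours; they are not the 48- or 75-site kagome clusters used below.

```python
# Minimal sketch of the vacancy modelling: an impurity is a spinless vacant site, so every
# exchange bond touching it is dropped before building the effective (dimer) Hamiltonian.
# The bond list below is a toy example, not one of the actual kagome clusters studied here.

def remove_impurity_bonds(bonds, impurity_sites):
    """Keep only bonds whose two endpoints are both occupied (non-impurity) sites."""
    vacant = set(impurity_sites)
    return [(i, j) for (i, j) in bonds if i not in vacant and j not in vacant]

# Toy bond list (pairs of site indices); on the kagome lattice each site touches 4 bonds.
bonds = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2), (3, 4), (1, 5), (2, 6)]
print(remove_impurity_bonds(bonds, impurity_sites=[0]))  # the 4 bonds attached to site 0 are gone
```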
B. Competing singlet phases in the pure system
Starting from a projection of the QHAF on the (non-orthogonal) NN singlet basis 21 (an approximate resonating VB description), followed by a transformation onto an ad-hoc orthogonal 'dimer' basis and, finally, by a series expansion (in terms of a small overlap parameter α = 1/√2), one gets an effective QDM of the type shown in Fig. 2. Here we use the amplitudes obtained in Ref. 20, V_6 = 1/5, V_8 = 2/63, V_10 = 1/255, V_12 = 0, J_6 = −4/5, J_8 = 16/63, J_10 = −16/255, J_12 = 0 (in units of J), which defines our "Heisenberg" effective QDM H_Heis. For convenience, the labels γ in the amplitudes J_γ and V_γ correspond to the lengths of the associated resonance loops. An exactly solvable "Rokhsar-Kivelson" (RK) QDM H_RK, defined by vanishing diagonal amplitudes V_γ = 0 and equal kinetic amplitudes J_γ (set here to −1/4), has also been introduced 23 . Following Ref. 22, we shall consider a linear interpolation between the two Hamiltonians, H_eff(λ) = λ H_Heis + (1 − λ) H_RK (see the short parameter-level sketch below). As we shall see next, it is remarkable that this "generalized" QDM can describe the topological dimer liquid, the VBC and the critical region on equal footing. Whether this model (for λ = 1 or around this value) describes faithfully the original (unprojected) microscopic QHAF on the kagome lattice is still under debate. However, lots of insight can be obtained on how impurities behave generically in the quantum disordered phases described by this effective model. So, prior to the introduction of impurities, let us give a brief summary of the results of the pure generalized QDM (in zero magnetic field). Generically, short- or long-range ordering of the dimers can happen, yielding dimer liquids or VBC, respectively. Numerical results 22 show that a 36-site unit cell VBC (shown schematically in Fig. 3(a)) is stabilized for λ > λ_C ≈ 0.94, in particular at the "Heisenberg" point λ = 1. A remarkable (topological) gapped Z_2 dimer liquid, which bears an exact analytical form at the RK point λ = 0, is stable for λ < λ_C as shown in the phase diagram of Fig. 3(b). Besides, at exactly λ = 1, J_12 vanishes so that the undetermined chiralities of the pinwheel (or star) resonances of the VBC pattern (see e.g. Ref. 22 for details) lead to an extra (Ising-like) macroscopic degeneracy. These findings at λ = 1 are in perfect agreement with recent series expansions on the QHAF 7 . Note that the proximity to a Quantum Critical Point (QCP) suggests that dimer fluctuations remain strong and that the VBC amplitude is weak at λ = 1. However, it should be stressed here that the derivation of the λ = 1 effective QDM is only semi-quantitative since it relies i) on the projection of the Heisenberg Hamiltonian onto the singlet subspace spanned by the NN VB and ii) it is subject to a (controlled) truncation of a series expansion. Besides, the real material might contain longer-range exchange interactions. This suggests that a faithful description of the material in terms of a generalized QDM might not correspond exactly to λ = 1. We can however consider λ as a "phenomenological" parameter which enables one to "tune" the system into one of the two quantum disordered phases or into the critical region in between, so as to investigate the generic role of impurities in each case.

C. Extending the QDM to finite magnetic field

Spinon hoppings - In an NMR setup a small magnetic field is applied to slightly polarize the system. The local site-dependent magnetization is then experimentally accessible via the measured Knight shift.
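Before constructing the spinon sector, here is the parameter-level sketch of the λ-interpolation announced above. It is only a bookkeeping illustration, assuming the (J_γ, V_γ) amplitudes quoted in the previous subsection; the construction of the dimer Hilbert space and the Lanczos diagonalization are of course not reproduced.

```python
from fractions import Fraction as F

# Loop amplitudes (in units of J) of the two limiting QDMs quoted in the text:
# the "Heisenberg" model H_Heis (from the overlap expansion of Ref. 20) and the
# solvable Rokhsar-Kivelson model H_RK (V_gamma = 0, all J_gamma = -1/4).
HEIS = {"J": {6: F(-4, 5), 8: F(16, 63), 10: F(-16, 255), 12: F(0)},
        "V": {6: F(1, 5),  8: F(2, 63),  10: F(1, 255),   12: F(0)}}
RK   = {"J": {g: F(-1, 4) for g in (6, 8, 10, 12)},
        "V": {g: F(0)     for g in (6, 8, 10, 12)}}

def interpolated_amplitudes(lam):
    """Amplitudes of H_eff(lambda) = lambda*H_Heis + (1-lambda)*H_RK.
    Since the Hamiltonian is linear in the amplitudes, interpolating the operators
    is equivalent to interpolating (J_gamma, V_gamma) term by term."""
    return {kind: {g: lam * HEIS[kind][g] + (1 - lam) * RK[kind][g]
                   for g in (6, 8, 10, 12)}
            for kind in ("J", "V")}

# Example: kinetic amplitudes close to the estimated transition lambda_C ~ 0.94.
for gamma, j in interpolated_amplitudes(F(94, 100))["J"].items():
    print(f"J_{gamma} = {j} = {float(j):+.4f}")
```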
However, ground states with finite magnetization cannot be addressed theoretically within the above QDM, which only describes the singlet subspace dynamics. Nevertheless, a finite field/magnetization setup can be realized theoretically by introducing extra spin-1/2 degrees of freedom (named "spinons" hereafter) in the QDM description, as first proposed in Ref. 27. Physically, such a spinon (polarized along the magnetic field) can be viewed as resulting from the breaking of a singlet bond by the introduction of a vacant site (impurity). Spinon nearest-neighbor pairs can also appear or disappear by exchanging with dimers (see below). The bond exchange interaction of the QHAF leads to the motion of the spinons followed by some dimer "backflow" 28 . Since the generalized QDM has been derived in Ref. 20 assuming a fermion representation of the (original) SU(2) dimers, consistency implies that the mutual (bare) statistics of the spinons should be fermionic 35 . Here, we extend the procedure of Ref. 20 to derive from the microscopic QHAF the simplest effective Hamiltonian (at lowest order α^2) governing the dynamics of the (fermionic) spinons, sufficient to capture the main features of spinon delocalization. The relevant processes are shown in Figs. 1(d-f). The overlap matrix elements between the initial and final SU(2) configurations |φ⟩ and |ψ⟩ involved in these processes are O_φ,ψ = −1/2. The corresponding Heisenberg matrix elements H_φ,ψ equal +1/2 (in units of J) for the last two processes of Figs. 1(e,f) and vanish for the first one of Fig. 1(d). Note that, for consistency with the definition of the QDM of Fig. 2, we subtract a singlet-bond energy and include a 4/3 multiplicative factor 29 . Using the expansion scheme given by (the first two terms of) Eq. (41) in Ref. 20, we deduce easily that, in the effective QDM, the first process vanishes and the two remaining kinetic processes shown in Figs. 1(e,f) involve the same "spinon hopping" amplitude t_sp = +1/2. This procedure also generates (at order α^4) a potential term of amplitude V_sp = 1/4 which "counts" the number of possible "spinon hoppings" of each kind. Strictly speaking, this (minimal) effective spinon Hamiltonian is most relevant to the physical case λ = 1, but it is also interesting to investigate its properties in an extended range, e.g. 0.5 ≤ λ ≤ 1.

Spinon chemical potential - The density of spinons n_sp = N_sp/N is tuned by the magnetic field H. Indeed, N_sp spinons polarized along the field gain Zeeman energy −(h/2) N_sp, where h = (4/3) gμ_B H, which is balanced by a creation energy term (Δ_B/2) N_sp. Note that we rescale here the magnetic field by the same 4/3 factor as the QDM 20,29 . Physically, the parameter Δ_B corresponds to the energy cost of breaking a singlet bond into a triplet made of two polarized static NN spinons 30 . Practically, (h − Δ_B)/2 plays the role of a chemical potential for the spinons and the (grand canonical) GS energy becomes E(h) = min_{N_sp} [ E_0(N_sp, N_imp) − (h − Δ_B) N_sp / 2 ]  (1), where E_0(N_sp, N_imp) is the GS energy of the QDM depicted in Fig. 2 doped with N_sp mobile spinons and N_imp (random) impurities, and where the minimization over N_sp has been carried out. Although, strictly speaking, one expects Δ_B = 1 (in units of J) 29 within the procedure of Ref. 20 to derive the effective QDM, its "optimum" value for a given order of the expansion scheme is unclear. However, Δ_B can also formally be related to the physical spin gap Δ_S of the pure system, i.e. the minimal energy cost to create a triplet (S = 1) two-spinon state in zero magnetic field (see below).
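As a minimal numerical illustration of Eq. (1), the sketch below selects, for each field h, the spinon number that minimizes the grand-canonical energy. The canonical energies E_0(N_sp) used here are made-up placeholders standing in for Lanczos results; only the chemical-potential bookkeeping is meaningful.

```python
# Grand-canonical ground-state energy with the spinon chemical potential (h - Delta_B)/2,
# as in Eq. (1): for each field h the spinon number N_sp is chosen so as to minimise
# E_0(N_sp, N_imp) - (h - Delta_B)/2 * N_sp.
# The E_0 values below are hypothetical placeholders standing in for Lanczos energies.

def grand_canonical_gs(e0_vs_nsp, h, delta_b):
    """Return (E_GS(h), optimal N_sp). e0_vs_nsp maps N_sp -> canonical GS energy E_0."""
    mu = 0.5 * (h - delta_b)                        # chemical potential per spinon
    return min(((e0 - mu * nsp, nsp) for nsp, e0 in e0_vs_nsp.items()),
               key=lambda t: t[0])

e0 = {0: -10.00, 2: -9.20, 4: -8.30}                # hypothetical E_0(N_sp) at fixed N_imp
for h in (0.0, 1.0, 2.0, 3.0):
    energy, nsp = grand_canonical_gs(e0, h, delta_b=1.0)
    print(f"h = {h:.1f}: E_GS = {energy:+.2f}, N_sp = {nsp}")
```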
Lastly, we note that two spinons can form a (triplet) bound state 36 .
A. Local dimer modulation
As a preliminary study, we now start by inserting a single spinless impurity (no spinon). As stated above, we model the impurity as a vacant site, as shown in Figs. 1(b,c). In the framework of the generalized QDM, the vacancy suppresses the resonant and diagonal terms of all loops passing through the vacant site.
[Figure caption residue (Fig. 5): (a) ... showing the location of the phase transition; for λ < λ_C, the large spectral gap above the quasi-degenerate topological GS corresponds to the single-vison gap of the Z_2 liquid. (b) Average dimer density versus Manhattan distance from the impurity on the same N = 75 cluster; the short-distance behavior is shown separately in the inset for clarity, and the mean value corresponds to 1/4 dimer per bond.]
In the very diluted impurity limit, one does not expect to fundamentally modify the bulk properties of the quantum magnet. However, a single impurity can e.g. pin VBC order and, in some way, offers a very interesting probe of its existence. To test this, we have considered the 5√3 × 5√3 periodic cluster of Fig. 4(a) with N = 75 sites and a single impurity (to comply with the hard-core dimer constraints such a cluster has to contain an odd number of sites) 24 . The full spectrum of this cluster is shown in Fig. 5(a) as a function of the parameter λ. Here we have distinguished between the four topological sectors (for such a torus topology) 38 and used the reflection symmetry w.r.t. one of the C_2 axes passing through the impurity site (σ = ±1). A quantum phase transition 25 occurs as a function of λ at λ = λ_C ≈ 0.9373, as evidenced by the sharp peak in the GS generalized susceptibility χ_λ = ∂²E_GS/∂λ² shown in the inset 26 . A topological Z_2 liquid (with four-fold degeneracy on a torus) is stable for λ < λ_C . At λ = λ_C the single-vortex (or "vison") gap 39 corresponding to the first odd (σ = −1) energy excitation vanishes. For λ > λ_C the system spontaneously breaks reflection symmetry (the GS is two-fold degenerate with σ = ±1 quantum numbers) as expected in the 36-site VBC. These results and, in particular, the estimation of the critical value λ_C ≈ 0.9373 are fully consistent with the results obtained on a pure 108-site cluster 22 .
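As a side remark, a generalized susceptibility of this kind can be estimated by central finite differences once E_GS has been computed on a grid of λ values. The sketch below illustrates this on a synthetic energy curve with an artificial kink near λ ≈ 0.94; it is not the Lanczos data of Fig. 5(a).

```python
import numpy as np

# Sketch: estimate chi_lambda = d^2 E_GS / d lambda^2 by finite differences from
# ground-state energies tabulated on a grid of lambda values. The curve below is a
# synthetic stand-in with a weak kink near lambda ~ 0.94, NOT actual Lanczos data.

lam = np.linspace(0.85, 1.0, 151)
e_gs = -1.0 - 0.3 * lam - 2.5e-4 * np.log(np.cosh((lam - 0.94) / 0.005))

chi = np.gradient(np.gradient(e_gs, lam), lam)     # second derivative on the grid
lam_c_estimate = lam[np.argmax(np.abs(chi))]
print(f"peak of |chi_lambda| at lambda ~ {lam_c_estimate:.3f}")
```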
It is interesting to examine the behavior of the dimer density as a function of the Manhattan distance (see Fig. 1(c)) from the impurity, as reported in Fig. 5(b) for different values of λ between 0.5 and 1 (a minimal numerical sketch of this distance-binned average is given after this discussion). One striking feature is the large value of the dimer density on the two bonds facing the impurity (distance d = 2). This can be interpreted as a (singlet) dimer crystallization, as found in prior studies of the Heisenberg quantum antiferromagnet [16][17][18] . Our study reveals that this phenomenon is very robust and depends weakly on the supposed phase, liquid (λ < λ_C) or crystalline (λ > λ_C), of the model. Therefore this feature cannot be used practically as an experimental fingerprint. In contrast, when considering longer distances, strong differences in the bond modulation occur in the supposed liquid and VBC phases. For λ < λ_C, in the Z_2 dimer liquid, the dimer density becomes very uniform beyond distance 1. When increasing λ, the appearance of strong modulations is exactly correlated with the crossing of the critical value and the entry into the VBC phase. For λ = 1, the relative bond modulation amplitude is of order 20%.

A next-nearest-neighbor (NNN) exchange j_2 modifies the loop amplitudes of the effective QDM. From the expression giving the general form of these matrix elements, one gets A_γ = j_2 Σ_{(i,j)∈γ,NNN} ε_{i,j} / (γ/2 + Σ_{(i,j)∈γ,NN} ε_{i,j}), where each loop topology now has to be distinguished and ε_{i,j} = −1 (+1) if the sites i and j on the loop γ are separated by an odd (even) distance. Note that for γ = 12 (star resonance), the NN QHAF matrix element γ/2 + Σ_{(i,j)∈γ,NN} ε_{i,j} vanishes, so that the leading contributions to J_12 and V_12 are directly proportional to j_2. Using the general expansion scheme given by Eq. (41) of Ref. 20, it is straightforward to obtain the leading contributions (at order α^10 and α^20 respectively), J_12 = −(3/8) j_2 and V_12 = −(3/256) j_2. The excitation spectrum for λ = 1, shown as a function of j_2 in Fig. 6(a), reveals two gapped phases separated by a narrow gapless region located around j_2^c ∼ 0.05 (which might become a quantum critical point in the thermodynamic limit). The j_2 < j_2^c region (including j_2 = 0) corresponds to the above pinned VBC, as confirmed by the GS quasi-degeneracy and the enhanced VBC patterns shown in Fig. 6(b) for weak ferromagnetic j_2. In contrast, for AFM NNN exchange j_2 > j_2^c, only a small dimerization is observed in Fig. 6(b), except at short distances from the impurity. This might be the signature of a dimer liquid or of a very weakly pinned VBC, and more work (on pure systems) is needed to resolve this case.
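As announced above, the distance-binned average behind a profile like Fig. 5(b) amounts to grouping the bond dimer densities by their Manhattan distance from the impurity. The sketch below shows this reduction on toy data; the bond indices, distances and densities are placeholders, not the 75-site cluster results.

```python
from collections import defaultdict

# Sketch of the reduction used for a profile like Fig. 5(b): average the bond dimer
# densities over all bonds sitting at the same Manhattan distance from the impurity.
# Distances and densities below are toy placeholders.

def density_vs_distance(bond_densities, bond_distances):
    """bond_densities[b] = <n_b>, bond_distances[b] = Manhattan distance of bond b
    from the impurity; returns {distance: average density}."""
    sums, counts = defaultdict(float), defaultdict(int)
    for b, n in bond_densities.items():
        d = bond_distances[b]
        sums[d] += n
        counts[d] += 1
    return {d: sums[d] / counts[d] for d in sorted(sums)}

densities = {0: 0.41, 1: 0.39, 2: 0.18, 3: 0.26, 4: 0.24, 5: 0.25}   # toy <n_b> per bond
distances = {0: 2, 1: 2, 2: 3, 3: 4, 4: 5, 5: 5}                      # toy Manhattan distances
print(density_vs_distance(densities, distances))  # far from the impurity the mean approaches 1/4
```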
C. Does a spinon bind to an impurity?
Notoriously, spinons in dimer liquids (VBC) are known to be deconfined (confined). However, confinement/deconfinement is an asymptotic (e.g. long-distance) property of the system and unexpected behaviors can appear at shorter distance (comparable e.g. to the typical impurity separation). Indeed, a spinon could bind to an impurity (acting here as a "static spinon") even in a dimer liquid or, conversely, a spinon could spread over a very large area (e.g. beyond the cluster sizes available numerically) even in a VBC. To investigate such issues, we now consider a 4√3 × 4√3 (48-site) cluster 24 (j_2 = 0) doped with a static vacancy and a mobile spinon. The maps of both the (bond) dimer density and the site magnetization are shown in Fig. 7 for three distinct regimes: (i) λ = 0.5 < λ_C , deep in the dimer liquid phase; (ii) λ = 0.9 < λ_C and (iii) λ = 1 > λ_C , in the dimer liquid and VBC phases respectively, in the vicinity of λ_C . The dimer density maps in this lightly polarized system closely resemble the previous zero-field data of Fig. 5(b): the dimer liquid phase at λ = 0.5 exhibits a very homogeneous dimer density apart from the two "crystallized" bonds next to the impurity, while strong dimer patterns are clearly visible in the VBC phase. Note that the dimer pattern gets more pronounced in the dimer liquid phase when approaching the dimer liquid-VBC transition, as e.g. for λ = 0.9. The magnitudes of the dimer and of the (magnetic-field-induced) spin density modulations seem to scale with each other. The behavior in the dimer liquid phase is fully consistent with the expected spinon deconfinement mechanism found by LED of small clusters of the kagome QHAF 16,31 . In contrast, the VBC sample does not seem to show, at the length scale of the cluster, the (opposite) confinement behavior found e.g. in the VBC phase of the QHAF on the checkerboard lattice 31 .
Although probably not directly relevant to experiments, the data of Fig. 7(a), deep in the dimer liquid phase, are of particular theoretical interest: here, although the dimer pattern is very uniform (as expected in a dimer liquid phase) and no confining potential is expected, the spinon still forms a tight bound state with the impurity. We believe however that this behavior is not generic and depends strongly on the details of the spinon Hamiltonian: here the bound state is stabilized by an increase of spinon kinetic energy in the vicinity of the impurity. Whether such a bound state can appear spontaneously without magnetic field depends on the value of the spinon chemical potential, i.e. on Δ_B. If this is the case, the impurity-doped system becomes gapless for magnetic excitations (see below).
IV. FINITE IMPURITY CONCENTRATION
We now consider the case of N imp = 2 impurities doped into the same cluster of N = 48 sites, attempting to (crudely) mimic a finite density of impurities n imp = N imp /N of order 4%. For simplicity, we fix the relative position between the two impurities at a "typical" distance, as shown in Fig. 4(b). We shall also consider the same values of the parameter λ as above in the investigation of a single impurity, namely λ = 0.5, 0.9 and 1, for comparison. Hereafter, we also set j 2 = 0.
A. Enhanced dimerization
Let us first assume zero total magnetization, i.e. N_sp = 0. The corresponding data are shown in Figs. 8(a-c). We clearly observe a sizable increase of the average dimerization compared to N_imp = 1. This is particularly clear for λ = 0.9, a parameter corresponding, in the pure system, to the dimer liquid phase close to the phase transition at λ = λ_C . In fact, with two doped impurities, the new dimerization pattern in Fig. 8(b) resembles very closely the one at λ = 1 in Fig. 8(c).
It is interesting to also note that the local dimerization patterns around impurities can be different. Indeed, e.g. in Figs. 8(b) & (c), while one impurity is surrounded by two (quasi-isolated) strong dimer bonds, the second impurity stabilizes two neighboring symmetric resonating hexagons. All these findings are in fact consistent with the proposed picture of the Valence Bond Glass 33 .
B. Spin gap
We now consider the possibility that the ground state carries a small magnetization proportional to the (doped) spinon density, m = (1/2) n_sp. If a spin gap exists at h = 0, we expect that a large enough magnetic field will eventually suppress the gap and induce a small magnetization. In our picture, the spin gap is the minimal energy to create a pair of mobile spinons. From Eq. (1), the spin gap at low field h is therefore given by Δ_S(h) = [E_0(N_sp = 2, N_imp) − E_0(N_sp = 0, N_imp)] + Δ_B − h, where Δ_B can be taken as a phenomenological parameter (considering the fact that our spinon Hamiltonian is only a minimal model and does not include the potentially important higher-order longer-range hoppings). As in the conventional picture, the spin gap (if any) vanishes linearly at a critical field. We have numerically computed the spin gap in the same N = 48 cluster doped with two impurities (same λ's also as above), i.e. for an impurity concentration n_imp = N_imp/N ≈ 4%, and compare it to the equivalent data in the pure system (n_imp = 0) in Fig. 9. Interestingly, we find that doped impurities lead to a reduction of the spin gap in the dimer liquid phase, e.g. for λ = 0.5. In the VBC phase the effect is much less pronounced. We believe that the existence of an impurity-spinon bound state at λ = 0.5 (see above) is responsible for this trend: if spinons get bound to available impurities (see below) the energy cost to create two of them is reduced.
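The linear closing of the gap is immediate from the expression above: Δ_S(h) vanishes at a critical field h_c = Δ_S(h = 0). The short sketch below just evaluates this, with placeholder energies (in units of J) standing in for the cluster values of Fig. 9.

```python
# Field dependence of the spin gap implied by the expression above:
#   Delta_S(h) = [E_0(N_sp=2) - E_0(N_sp=0)] + Delta_B - h,
# which closes linearly at a critical field h_c = Delta_S(h=0).
# The energies below are hypothetical placeholders, not the N = 48 cluster values.

def spin_gap(e0_pair, e0_vac, delta_b, h):
    return (e0_pair - e0_vac) + delta_b - h

e0_vac, e0_pair, delta_b = -10.00, -9.40, 1.0        # placeholder numbers (units of J)
h_c = spin_gap(e0_pair, e0_vac, delta_b, h=0.0)      # critical field where the gap vanishes
for h in (0.0, 0.5 * h_c, h_c):
    print(f"h = {h:.2f} J: Delta_S = {spin_gap(e0_pair, e0_vac, delta_b, h):.2f} J")
```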
C. Spin density modulations
We have also examined the spinon density map (equivalent to the local magnetization map) in the two-spinon, two-impurity doped N = 48 cluster, again for the same values of λ as above. The features shown in Figs. 10(a-c) are consistent with our previous results for a single impurity. In the λ = 0.5 dimer liquid, a spinon clearly forms a bound state around each impurity. In contrast, in Figs. 10(b) & (c), which we might view as some local region of a Valence Bond Glass 33 , spinons seem to be repelled (at short distance) from the impurities. At larger distances, strong spin density modulations are induced by a strong (average) dimerization (which, conversely, is weakly affected by the small magnetization of the sample).
A. General considerations
The different magnetization maps in the liquid and VBC phases are expected to lead to different characteristic (theoretical) NMR spectra which can be confronted with experiments in Herbertsmithite 12 . The theoretical Copper NMR spectrum is defined by the histogram of the local site magnetizations. Since Oxygen is located between two Copper sites, the effective local magnetization is obtained by summing up the contributions from both sites (referred to here as the bond magnetization). Both site and bond magnetization histograms have been computed for λ = 0.5, λ = 0.9 and λ = 1 using the spinon density maps of Figs. 7(a-c) (one impurity on 48 sites) and Figs. 10(a) & (c) (two impurities on 48 sites), providing the theoretical Copper and Oxygen NMR low-temperature spectra shown in Fig. 11. Note that, for convenience, we work here at fixed magnetization 40 corresponding to one spinon per impurity, i.e. m = (1/2) n_imp. This is a natural choice at very low temperatures if spinons bind to impurities or if the spin gap is very small. In any case, this corresponds physically to a magnetic field h > Δ_S(h = 0). For convenience, the Copper and Oxygen Knight shifts (x-axis) have been normalized w.r.t. the bulk values corresponding to a uniform spatial distribution of the magnetization (mimicking high temperatures). Note however that, even for such a uniform distribution of the Copper spin density, the Oxygen nuclei located on the bonds connected to the impurity site see only one Copper atom instead of two, yielding a small "satellite" peak in the spectrum of Fig. 11(b) at half the bulk Oxygen NMR shift.
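The construction of these theoretical spectra is simple enough to be summarized in a few lines: histogram the site magnetizations for Copper and the sums over pairs of adjacent sites for Oxygen, after normalizing by the uniform-distribution reference. In the sketch below the magnetization map and the Oxygen bond list are random placeholders, not the cluster maps of Figs. 7 and 10.

```python
import numpy as np

# Sketch of the spectra construction described above: the Copper spectrum is the histogram
# of local site magnetizations, the Oxygen spectrum the histogram of "bond magnetizations"
# (sum of the two Copper sites adjacent to each Oxygen), both normalized to the bulk value
# of a uniform spin-density distribution. Inputs below are random/toy placeholders.

rng = np.random.default_rng(0)
n_sites, m_total = 48, 0.5                                # one spinon -> total S_z = 1/2
site_m = rng.dirichlet(np.ones(n_sites)) * m_total        # toy inhomogeneous magnetizations
bonds = [(i, (i + 1) % n_sites) for i in range(n_sites)]  # toy Oxygen positions (site pairs)

cu_bulk = m_total / n_sites                               # uniform "high-temperature" reference
o_bulk = 2 * cu_bulk
cu_shifts = site_m / cu_bulk                              # normalized Copper Knight shifts
o_shifts = np.array([site_m[i] + site_m[j] for i, j in bonds]) / o_bulk

cu_hist, _ = np.histogram(cu_shifts, bins=20)
o_hist, _ = np.histogram(o_shifts, bins=20)
print("Cu spectrum (counts per normalized-shift bin):", cu_hist)
print("O  spectrum (counts per normalized-shift bin):", o_hist)
```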
Comparing quantitatively high- and low-temperature (in fact here T = 0) spectra obtained at a low fixed magnetic field would in principle require the knowledge of the temperature-dependent magnetic susceptibility χ(T), which is beyond our numerical capability. However, a number of robust features of these theoretical NMR spectra can be linked to physical behaviors of the spinons around impurities. E.g. when spinons spontaneously bind to impurities (as is the case in the λ = 0.5 dimer liquid phase within our model), one expects low-energy magnetic excitations, possibly at arbitrarily small energy scales, so that the broad theoretical spectra (at fixed m) of Fig. 11(c,d,i,j) should provide a good sketch of low-field NMR spectra. On the other hand, Figs. 11(e,f) correspond to the opposite case where spinons are strongly delocalized around impurities, leading to narrower structures in the spectra (in the same units). However, if the system retains a finite spin gap (as expected in this type of dimer liquid), for a small magnetic field, the average position of the spectrum should be strongly pushed downwards w.r.t. its high-temperature reference value.
B. Herbertsmithite described as a Valence Bond Glass
Let us now discuss our results in the light of the recently obtained 17O NMR experimental spectra on the kagome QHAF material Herbertsmithite, which contains an intrinsic small Zinc substitution in the Copper kagome planes 12 . Although the experimental NMR spectra are strongly broadened by quadrupolar effects typical of powders, two separate structures can clearly be followed from high to low temperatures. The average shift of the "satellite" peak is roughly half of the main one, so that this peak has been interpreted as resulting from the resonance of the four Oxygen nuclei (labelled as "α" in Fig. 11) next to an impurity site. Our results show that dimerization is enhanced by impurity doping, so that the proposed Valence Bond Glass 33 should be stabilized in an enlarged interval of λ compared to the region of stability of the VBC in the pure model.
[Figure caption residue (Fig. 11): NMR spectra computed from the maps of Fig. 7(a-c), obtained for a single impurity on 48 sites (effective concentration of ∼2%), and of Fig. 10(a) and (c), obtained for two impurities on 48 sites (effective concentration of ∼4%). The Knight shift variables are normalized w.r.t. the values corresponding to an ideal uniform distribution of the spin density, defining the reference NMR spectra (a,b) labeled as "high-temperature". Zero-temperature spectra for ∼2% impurity concentration: λ = 0.5 (c,d), λ = 0.9 (e,f) and λ = 1 (g,h); same for ∼4% impurity concentration: λ = 0.5 (i,j) and λ = 1 (k,l). For a single impurity, the impurity peaks are labelled according to the positions of the resonating nucleus shown in the insets (see also Fig. 1(c)).]
It has been recently argued that confined spinons in the Valence Bond Glass can form a random-singlet phase with a wide distribution of spin gaps down to zero energy, ensuring the presence of fluctuating spins at arbitrarily low temperatures, as indeed observed experimentally. The low-temperature, low-field magnetic response of a Valence Bond Glass is therefore dominated by isolated impurities in rare regions of the sample, each of them carrying an almost free spinon loosely bound by a (shallow) VBC confining potential 41 . Within this scenario the T, h → 0 NMR spectrum is qualitatively given by the dominant magnetic response of the above regions. If this physics is at play in Herbertsmithite, Fig. 11(h) should therefore give a crude qualitative sketch of the T → 0 Oxygen NMR spectrum. This spectrum vaguely shows two broad structures (as in experiments), but the assignment of the small-shift structure to the α Oxygen is not so clear, as shown in Fig. 11(h). Note that the NMR spectrum would become more complicated when raising the temperature since more and more regions of the sample will contribute (as e.g. Fig. 10(c), giving Fig. 11(l)).
VI. DISCUSSIONS AND CONCLUSIONS
Theoretically, our approach is certainly based on an approximate framework relying on i) the truncation within the NN VB basis, ii) an overlap expansion up to a finite order and iii) a "minimal" spinon effective Hamiltonian. However, we believe it still captures a number of important features and unambiguously shows that even a small (experimentally unavoidable) impurity doping should have important experimental consequences. Fig. 10(c) probably gives a fairly reliable picture of what is to be expected in a small area of the experimental system at finite magnetization. Even an impurity concentration as small as 4% leads to an important increase of the average dimerization and to a fairly inhomogeneous spin density distribution. This property should be robust in a more elaborate treatment of the NN QHAF, for example when using a refined spinon Hamiltonian including longer-range hoppings.
It has been recently argued that confined spinons in the VBC can form a random-singlet phase with a wide distribution of spin gaps 33 , ensuring the presence of fluctuating spins at arbitrarily low temperatures, as indeed observed experimentally. Interestingly, our data on small clusters show that the VBC pattern is not strongly confining, probably due to the weakness of the VBC amplitude compared to the spinon kinetic energy. On the scale of the typical impurity separation, spinons tend to delocalize around impurities. However, different types of dimerization patterns get pinned around different impurities. In this sense, our findings have some similarity with the picture of the Valence Bond Glass 33 . Not surprisingly, we have not found any appreciable reduction of the spin gap (compared to the pure case) for the (two-)impurity configuration we have chosen (except when spinon-impurity bound states form at λ = 0.5): we believe our cluster is still too small to realize (rare) configurations with isolated impurities that could exhibit an (almost) free spin within some (large) confining length. This issue certainly needs further work.
On the experimental side, a more realistic description of Herbertsmithite may require going beyond the simple NN QHAF. It is possible that longer-range exchange interactions destabilize the VBC ground state in favor of the Z_2 dimer liquid or some other candidate GS. It might also happen that the VBC phase is only stable at temperatures lower than those experimentally reached. Finally, we note that, experimentally, a large enough DM anisotropy 11 could fill in the spin gap of the Z_2 dimer liquid or VBC phases.
The effect of temperature on the mechanical properties and workability of rock salt
Abstract Underground salt mining accounts for about 16 percent of the total salt production worldwide. When excavating salt rock, the cutters of the road header come into contact with the rock. This produces friction and, consequently, a rise in temperature. Generally, as temperature increases, salt gradually loses its plasticity. The extent of these alterations depends on the presence of other minerals in the rock. This paper presents the results of laboratory tests on regularly shaped samples of salt. An analysis was performed of the results of compressive, tensile and induced-shear strength, and of Young's modulus, Poisson's ratio, cuttability index and side chipping angle. The testing was conducted on samples with a temperature of about 20°C and samples heated to 50°C and 80°C. The tests showed that as temperature increased, so did compressive and tensile strength, and longitudinal and transverse strain of salt. The temperature increase caused, however, a decrease in shear strength. The cuttability index and the side chipping angle also decreased when the heated samples were being cut. The percentage changes in the parameters within the 60-degree temperature range were as high as several dozen percent.
INTRODUCTION
Underground rock salt mining accounts for about a sixth of the global production of this resource. Although salt deposits are sedimentary in nature, saline sedimentation varied at different depths, often leaving salt rocks with components of carbonate rocks (mainly gypsum), as well as clay minerals, and even metamorphic rocks (Garlicki 2013, Mortazavi et al. 2017). Depending on the form of deposition, geological and mining conditions and sediment purity, salt deposits can be exploited in a number of ways (Andrusikiewicz 2008, Poborska-Młynarska 2015), using both blasting and mechanical techniques. Salt rocks usually exhibit poor or average strength, depending on mineralogical and petrographic properties (Cyran et al. 2016). In addition, their elastic-plastic behaviour in terms of strain-stress characteristics is not a typical rock behaviour, causing continuous strains as a function of time (Kolano and Flisiak 2013, Phatthaisong et al. 2018). During salt rock excavation using mechanical techniques, the cutting heads come into contact with the rock mass, producing friction and, consequently, an increase in temperature (Powell 1969). Hence, before solid salt rock can be cut by milling or shaping, the rock's cuttability and strength must be investigated (Andrusikiewicz 2008), including at increased temperatures. Extensive research literature exists on mechanical alterations in salt due to changes in temperature. Sriapai et al. (2012) demonstrated that a considerable increase in temperature, reaching close to 200°C, caused a linear decline in both the compressive and tensile strength of salt rock mass. In their paper Ostadhassan and Tamimi (2014) reported that compressive strength at similar temperatures increases logarithmically. With regard to strain parameters, researchers have found that elasticity, expressed by Young's modulus, decreases as temperature rises. Within the range of 200°C, it might decline by half, whereas transverse strain, expressed by Poisson's ratio, usually increases by more than 10 percent (Iverson et al. 2012, Sartkaew & Fuenkajorn 2013, Phatthaisong et al. 2018).
Since the rise in temperature of the rock mass due to friction by cutter heads is short-lived and localized, no such significant increases in temperature are recorded. This paper presents the results of laboratory tests of geomechanical and cuttability properties of rock salt. The tests were performed on samples cut from a lump of rock salt and heated to about 50°C and 80°C. An analysis was performed of the results of compressive, tensile and induced-shear strengths, and of Young's modulus, Poisson's ratio, the cuttability index and the side chipping angle.
TESTS OF MECHANICAL PROPERTIES
The tests were performed on Miocene salt from a Carpathian region (Garlicki 2013). The lumps of salt sampled for testing varied in colour from white to grey. The brightest (white) variety was the purest; it was nearly monomineralic. The chemical and mineralogical analyses showed that this variety was 99.7 percent halite. Other identified minerals included anhydrite and potassium and iron aluminosilicates - most likely clay minerals. The darkest varieties were 98.17 percent halite. Other identified minerals in these salt samples included anhydrite, a very small portion of which had transformed into gypsum.
To determine compressive, tensile and shear strengths, cut-out cube samples with a side length of 50 mm were used. Elastic parameters testing (Young's modulus and Poisson's ratio) involved samples with a slenderness of 1.5. This is consistent with the guidelines of the International Society for Rock Mechanics (ISRM), although for strain tests of rocks it is recommended to use samples with a slenderness of 2-2.5. Due to the atypical elastic-plastic behaviour of the salt samples under load, the values obtained should not differ significantly from those provided by tests on more slender samples. Four to eight determinations per parameter were made for each sample at pre-defined temperatures of 23°C, 50°C, and 80°C. Compressive strength was determined according to the PN-G 04303:1997 standard. The rate of loading, relative to the transverse cross-section of the sample, depended on the rock strength. In the case under analysis, the rate was 0.4 kN/s (approx. 0.12 MPa/s). The tensile strength was determined using the transverse compression method (the so-called Brazilian test) according to the PN-G 04302:1997 standard. The test involved splitting the sample by applying a compressive load distributed uniformly along the side of the sample and perpendicular to the layers, with the use of a specially designed mould (Fig. 1). The rate of loading relative to the sample's transverse cross-section was 0.05 kN/s. Shear strength was tested using a method proposed by ISRM in 1975 involving an induced shear angle of 45° (Fig. 2). This involved compressing the sample in special moulds with one washer that adjusted the concentric alignment of the moulds to ensure equal shear area. The longitudinal strain modulus (Young's modulus) and transverse strain modulus (Poisson's ratio) were determined in line with ISRM's guidelines (Ulusay & Hudson 2007) in two ways: as a value within the complete elastic region, understood as the linear relationship between stress and longitudinal strain (Young's modulus), and as a secant value between 0 and 0.5 Rc, relative to the strain increase between these stress values (deformation modulus). During the tests, the outer temperature, as well as the inner temperature - after failure - were monitored continuously with a thermographic camera. An analysis of the dispersion of compressive and tensile strength results (Figs. 3 and 5) indicated that the samples at room temperature exhibited much greater strength variability than the samples heated to 50°C and 80°C. During prolonged determinations of compressive strength, which involved recording transverse and longitudinal strains to determine the moduli, the samples were found to release heat rapidly, and their temperature decreased at a high rate as well. The samples heated to 50°C cooled to a maximum surface temperature of 38°C, while those heated to 80°C changed their temperature locally to 46°C. It should be added, however, that such sharp drops in sample temperatures occurred only in several cases. In most cases temperatures decreased by a dozen-odd degrees. Figure 6 shows two halves of a sample split after the Brazilian tensile test in which the temperature was still about 50°C. The obtained average shear strength values decreased proportionally to rising temperatures (Fig. 7). In a sense, shear strength is a parameter that provides the closest reflection of rock behaviour during mechanical salt rock mining.
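For clarity, the secant (deformation) modulus described above is simply the stress increment between 0 and 0.5 Rc divided by the corresponding strain increment read off the recorded curve. The sketch below evaluates it on a synthetic stress-strain curve; the strength value and curve shape are placeholders, not the measured data.

```python
import numpy as np

# Sketch of the secant (deformation) modulus evaluated between 0 and 0.5*Rc on a recorded
# stress vs. longitudinal-strain curve. The curve below is synthetic, not measured data.

def secant_modulus(stress_mpa, strain_long, rc_mpa):
    """Secant modulus between 0 and 0.5*Rc: stress increment over strain increment."""
    s_half = 0.5 * rc_mpa
    eps_half = np.interp(s_half, stress_mpa, strain_long)   # strain at 0.5*Rc
    return (s_half - stress_mpa[0]) / (eps_half - strain_long[0])

strain = np.linspace(0.0, 0.02, 200)                 # longitudinal strain (dimensionless)
rc = 25.0                                            # hypothetical compressive strength, MPa
stress = rc * (1.0 - np.exp(-strain / 0.006))        # toy elastic-plastic loading curve
print(f"secant deformation modulus ~ {secant_modulus(stress, strain, rc) / 1e3:.2f} GPa")
```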
Fig. 7 Shear strength range
The cutting operation by the knives of road headers can be compared with rock mass shear. Therefore, shear strength can be considered a parameter that describes the process of separating rock pieces. Figure 8 shows the relationship between shear strength and displacement at a pre-defined angle of 45°. The tests showed that salt was so plastic as to require a displacement of nearly 6 mm to reach critical shear strength.
Fig. 8 A diagram from the testing machine during the shear test
At high temperatures, salt elasticity expressed as Young's modulus (Fig. 9) decreased as well, which is consistent with literature reports.
Fig. 9 Young's modulus range
With regard to Poisson's ratio (Fig. 10), determined within a very narrow range (0-0.5), one value can often disturb a certain trend. Thus, it is difficult to state whether temperature has an effect on salt rock strain behaviour.
CUTTABILITY INDEX TEST
The laboratory test to determine the cuttability index A and the side chipping angle ψ involved making open cuts in the rock samples using a standard test cutter. The cutting depth was predefined. The tests were designed to measure the components of the cutting resistance (Ps - cutting force, Pd - contact force, Pb - lateral force). Subsequently, the cuts were measured to determine the actual cutting depth gs and cutting width bs. The resulting values Ps, Pd, Pb, gs and bs could then be used to determine the cuttability index A (the ratio of the resultant cutting resistance to the cutting depth) and the side chipping angle ψ (the arctangent of the difference between the cut width and the opening width divided by twice the cutting depth). For this purpose, a test facility was used that recorded the components of the rock sample's cutting resistance. Figure 11 illustrates this facility.
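Under this reading of the two definitions, the indices can be computed directly from the recorded quantities. In the sketch below the resultant cutting resistance is taken as the vector sum of the three measured force components, and b0 denotes the nominal opening (cutter) width; this b0, like all the numerical inputs, is our assumption rather than a value reported in the study.

```python
import math

# Sketch of the two cutting indices, under our reading of the definitions above:
#   A   = resultant cutting resistance / cutting depth, with the resultant built from the
#         three measured force components Ps (cutting), Pd (contact) and Pb (lateral);
#   psi = arctan[(b_s - b_0) / (2 * g_s)], where b_0 is the assumed nominal opening width.
# All numerical inputs below are placeholders, not measured values from the study.

def cuttability_index(ps_kn, pd_kn, pb_kn, gs_cm):
    resultant = math.sqrt(ps_kn**2 + pd_kn**2 + pb_kn**2)
    return resultant / gs_cm                                  # kN/cm

def side_chipping_angle(bs_mm, b0_mm, gs_mm):
    return math.degrees(math.atan((bs_mm - b0_mm) / (2.0 * gs_mm)))

A = cuttability_index(ps_kn=1.0, pd_kn=0.4, pb_kn=0.2, gs_cm=0.5)   # 5 mm cut depth
psi = side_chipping_angle(bs_mm=20.0, b0_mm=6.0, gs_mm=5.0)
print(f"A ~ {A:.2f} kN/cm, psi ~ {psi:.1f} deg")
```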
Fig. 11 Shaper-based cutting resistance test stand with a strain-gauge head
The cutting resistance test facility consists of a horizontal shaper, a strain-gauge head (test cutter handle) fitted with a standard test radial cutter, and a rock-sample specimen holder on the shaper table. The signals from the gauge head are sent through conductors via the strain-gauge amplifier to the measurement computer for recording and further processing and analysis. The cutting tests were performed for temperatures of 22°C and about 50°C and 80°C. The measurements were taken on cuts with a depth of gs = 5 mm at a cutting speed vs of about 1 m/s. Each consecutive cut was made at an interval ts of at least 30 mm. Also, thermographic images were captured during the cutting tests. Each cut sample was heated to about 55°C in an oven and placed in the specimen holder on the shaper. Once the sample's surface had been evened again, the procedure was repeated with a cutting depth of 5 mm. After cutting, signs of cracking were noticed in the midsection of the sample. Next, after the sample was heated to about 83°C and its surface was again evened, new cuts were made with a cutting depth of 5 mm. Figure 12 shows the sample surface with these cuts. In the sample midsection, crack propagation can be seen leading down to the bottom surface of the sample.
Fig. 12 Sample with cuts made at about 75°C
A comparison of the cut temperatures (Figures 13 and 14) showed that at 22°C the cut surface temperature rose to about 27.5°C along its entire length, while at 55°C the temperature rose only in the end section of the cut, by 4°C. An analysis of the results for the cuttability index A and the side chipping angle ψ indicated that the salt behaviour was similar to that observed in the shear strength measurements. As the sample temperature rose, the cuttability index A and the side chipping angle ψ decreased markedly. After excluding outliers (for measurements at 20°C, this was due to a big piece of salt chipping off from the lateral surface, and for measurements at 55 and 77°C, due to salt-sample cracking), the average values of the cuttability index A and side chipping angle ψ were:
• 20°C - A = 2.192 kN/cm, ψ = 55.8°
• 55°C - A = 1.742 kN/cm, ψ = 51.2°
• 77°C - A = 1.653 kN/cm, ψ = 48.7°.
Data on the cuttability of the studied material, based on the cuttability index A and the side chipping angle ψ (and on the coal cuttability classification provided by CMG KOMAG Gliwice), indicate that at room temperature the salt sample can be classified as a medium-cuttable material. When heated to 50°C, it becomes a highly workable material. However, the side chipping angle ψ decreases instead of growing.
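Using the averaged values listed above, the relative decrease over the roughly 60-degree temperature interval follows from simple arithmetic:

```python
# Relative change of the averaged cutting parameters between 20 degC and 77 degC,
# using the mean values listed above.
A = {20: 2.192, 55: 1.742, 77: 1.653}        # kN/cm
psi = {20: 55.8, 55: 51.2, 77: 48.7}         # degrees

for name, vals in (("A", A), ("psi", psi)):
    drop = 100.0 * (vals[20] - vals[77]) / vals[20]
    print(f"{name}: {drop:.1f}% decrease from 20 degC to 77 degC")
```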
A case of multiple fibroid uterus, complete placenta praevia, antepartum haemorrhage, myomectomy and obstetric hysterectomy: a near miss
'Near Miss' in obstetrics is defined as a very ill pregnant or recently delivered woman who nearly died but survived a complication during pregnancy, childbirth or within 42 days of termination of pregnancy. SAMM (Severe Acute Maternal Morbidity) refers to a life-threatening disorder that can end up in near miss with or without residual morbidity or mortality. Women who develop SAMM during pregnancy share many pathological and circumstantial factors related to their condition. Although some of these women die, a proportion of them narrowly escape death. Near miss cases and maternal deaths together are referred to as severe maternal outcome (SMO). Severe morbidity data are vital for policy planners to know requirements of essential and emergency obstetric care (EmOC) to manage these. It is also assumed to be a better indicator than maternal mortality alone for designing, monitoring, follow up and evaluation of safe motherhood programs. Antepartum haemorrhage (APH) is defined as bleeding from or into the genital tract, occurring from 24+0 weeks of pregnancy and prior to the birth of the baby. The most important causes of APH are placenta praevia and placental abruption, although these are not the most common. APH complicates 3-5% of pregnancies and is a leading cause of perinatal and maternal mortality worldwide. Up to one-fifth of very preterm babies are born in association with APH, and the known association of APH with cerebral palsy can be explained by preterm delivery. Obstetric haemorrhage encompasses both antepartum and postpartum bleeding. Here, we present such a case report of a near miss.
INTRODUCTION
'Near Miss' in obstetrics is defined as a very ill pregnant or recently delivered woman who nearly died but survived a complication during pregnancy, childbirth or within 42 days of termination of pregnancy. SAMM (Severe Acute Maternal Morbidity) refers to a life-threatening disorder that can end up in near miss with or without residual morbidity or mortality. Women who develop SAMM during pregnancy share many pathological and circumstantial factors related to their condition. Although some of these women die, a proportion of them narrowly escape death. Near miss cases and maternal deaths together are referred to as severe maternal outcome (SMO). Severe morbidity data are vital for policy planners to know requirements of essential and emergency obstetric care (EmOC) to manage these. It is also assumed to be a better indicator than maternal mortality alone for designing, monitoring, follow up and evaluation of safe motherhood programs. [1][2][3] Antepartum haemorrhage (APH) is defined as bleeding from or into the genital tract, occurring from 24+0 weeks of pregnancy and prior to the birth of the baby. The most important causes of APH are placenta praevia and placental abruption, although these are not the most common. APH complicates 3-5% of pregnancies and is a leading cause of perinatal and maternal mortality worldwide. Up to one-fifth of very preterm babies are born in association with APH, and the known association of APH with cerebral palsy can be explained by preterm delivery. Obstetric haemorrhage encompasses both antepartum and postpartum bleeding. Here, we present such a case report of a near miss.
CASE REPORT
A 20-year-old primigravida, an unbooked case at a period of gestation of about 26 weeks, presented with torrential bleeding in haemorrhagic shock; fetal cardiac activity was present. Internal examination was deferred; however, vaginal toileting was done to assess blood loss and the collection of blood in the introitus, and bits of placental tissue were also present. She was being seen at another centre and was a known case of pregnancy with multiple fibroids and complete placenta praevia covering the os. She had visible clinical pallor (Hb 4 gm%), tachycardia (146/min), thready peripheral pulses, hypotension (60 mmHg), oliguria (15 ml of dark urine drained on catheterization), tachypnoea (36/min) and air hunger, with fresh bleeding per vaginam along with passage of clots. Her blood group was A negative. She was taken up for emergency hysterotomy/caesarean section with preoperative consent for obstetric hysterectomy, under GA, ASA (V), with a central line in situ. Simultaneous resuscitation with blood, colloids and crystalloids was done preoperatively and intraoperatively using pressure cuffs, with two wide-bore intracaths in the upper and lower limbs. Four units of whole blood were transfused preoperatively and intraoperatively and four units were kept on standby.
The abdominal incision was midline vertical, as per existing guidelines in standard textbooks, and the intraoperative findings were as shown in Figures 1-2: a large 15×15 cm intramural fibroid in the lower uterine segment (LUS), a 10×10 cm intramural fibroid on the anterior wall, an 8×8 cm intramural fibroid on the fundus, a 6×6 cm intramural fibroid on the left cornual edge and a 4×4 cm intramural fibroid on the posterior wall. The fetus and placenta could not be delivered from the LUS or the midline vertical hysterotomy, as both fibroids were obstructing the approach to the fetus. A deliberate decision to perform myomectomy of the anterior wall fibroid was made for extraction of the fetus. After completion of this myomectomy the fetus still could not be delivered, as the LUS fibroid was also obstructing; this was also removed (Figures 3-5). Delivery of a live, extremely premature, very-low-birth-weight (900 g) male fetus was possible after all these manoeuvres. The fetus was handed over to the paediatrician, immediately put on a ventilator, and given surfactant.
The placenta, which was covering the os, was delivered with a large retroplacental clot; however, the patient continued to bleed on the table. The next of kin were informed again by the anaesthesiologist and a rapid obstetric hysterectomy (TAH) was performed with all other fibroids in situ. Haemostasis was achieved. Intra-abdominal and subcutaneous drains were placed and the abdomen was closed. She was transfused with four units of whole blood postoperatively along with parenteral antibiotics, analgesia and supportive measures. She was ambulated on the first postoperative day and oral fluids were administered. Parenteral antibiotics, analgesia and fluids were stopped after 48 hours and an oral soft diet was given to the patient. The catheter was removed after 48 hours and the subcutaneous drain was removed after 72 hours. The intra-abdominal drain was removed on the 5th postoperative day. She had an uneventful recovery and sutures were removed on the 14th postoperative day. A contrast MR urogram was performed as a cautionary investigation to exclude ureteric or bladder injuries, and was normal. The patient was placed on HRT (low-dose estrogen only) along with calcium and vitamin D3. The couple was counselled about surrogacy for future fertility.
The premature infant expired after 72 hours on ventilatory support.
DISCUSSION
The core obstetric complications predisposing pregnant women to near-miss events are almost always similar. The major chunk is formed by hemorrhagic disorders which may be antepartum, peripartum, or postpartum. Pregnancies complicated with hypertension-related disorders (eclampsia) and disorders of morbid placenta become more prone to obstetric haemorrhage. In this study, 45.6% of near-miss cases were caused by PPH and 37% by hypertensive disorders. In comparison, the literature also reports haemorrhage and hypertensive disorders to be the major predictors of near-miss cases as well as maternal mortality. 4-6 Some pregnancy-related complications leading to high-risk childbirth are almost unavoidable. The benefit of evaluating near-miss events in depth is that the records of these patients and the hindrances they had to witness can help in creating safer and more approachable obstetric health care for future patients. Some of these factors may be associated with things lacking at the patient's end such as desire for home delivery to maintain tradition, inadequate antenatal care, non-compliance with healthcare practitioner's advice, disbelief in modern medicine, and others. Some factors are associated with delay in reaching a tertiary care institution due to longer distances, lack of transport or funds. Factors related to health system include delay in providing immediate relief and/or referral, lack of adequate intensive care facility, well-trained staff, and others. [7][8][9] This case has been highlighted because it was a young Primigravida with no living issues and multiple obstetric complications which required a tough decision by surgeon to perform obstetric hysterectomy to save maternal life with concurrence of intensivist and relatives. Availability of blood and blood products was extremely important in management of the case. There is a lack of uniform criteria for identification of cases of severe obstetric morbidity or maternal near miss. Identification of cases is complex and varies across studies. Three major criteria have been mentioned in a review conducted by the WHO, these are described in (Table 1).
There are not many studies available from India on maternal near miss in spite of very high maternal mortality ratios and poor maternal health care. The causes of near miss vary in different geographical areas of the world and also there are variations within countries. Haemorrhage, hypertensive disorders, sepsis and obstructed labour are the most important causes in the developing countries. 10 Causes of near miss are similar to causes of maternal deaths prevailing in the area. A systematic review to determine the causes of maternal deaths conducted by the WHO recorded wide regional variation. Haemorrhage was the leading cause of maternal deaths in Africa (33.9%) and in Asia (30.8%), while in Latin America and the Caribbean, hypertensive disorders were responsible for 25% of deaths. Anaemia was reported as an important cause in 12.8% of deaths in Asia, 3.7% in Africa and none in the developed countries. Studies from our country have also reported anaemia as an important cause and contributor to maternal mortality and severe maternal morbidity.
To understand the gaps in access to adequate management of obstetric emergencies leading to severe maternal complications and death three delays have been identified. The first delay is in deciding to seek care by the woman and/or her family as they are unaware of the need for care, this occurs as the danger signs are not recognized or there is lack of support of the family. The second delay is in reaching an adequate health-care facility as the services may not exist or may be inaccessible for reasons such as distance, lack of transport, cost or socio-economic barriers. The third delay occurs in receiving adequate care at that facility resulting from errors in diagnosis and clinical decision-making, or lack of medical supplies and of staff proficiency in the management of obstetric emergencies. In developing countries, about 75% of women with severe obstetric morbidity are in a critical condition upon arrival, underscoring the significance of the first two delays. Availability, accessibility, cost of health-care and behavioural factors play an important role in the utilization of maternal health services.
The causes of APH include placenta praevia, placental abruption and local causes (for example bleeding from the vulva, vagina or cervix). It is not uncommon to fail to identify a cause for APH, which is then described as 'unexplained APH'. There are no consistent definitions of the severity of APH. It is recognised that the amount of blood lost is often underestimated and that the amount of blood coming from the introitus may not represent the total blood lost (for example in a concealed placental abruption). It is important therefore, when estimating the blood loss, to assess for signs of clinical shock. The presence of fetal compromise or fetal demise is an important indicator of volume depletion. Definitions of blood loss: (a) Spotting - staining, streaking or blood spotting noted on underwear or sanitary protection; (b) Minor haemorrhage - blood loss less than 50 ml that has settled; (c) Major haemorrhage - blood loss of 50-1000 ml, with no signs of clinical shock; (d) Massive haemorrhage - blood loss greater than 1000 ml and/or signs of clinical shock (IUFD in the presence of any blood loss is massive haemorrhage); (e) Recurrent APH is the term used when there are episodes of APH on more than one occasion (RCOG GTG 63).
Here the lady presented to us in haemorrhagic shock, with no available alternative to prevent maternal morbidity.

Table 1. Criteria for identification of maternal near-miss cases (as described in a WHO review).

1. Clinical criteria related to a specific disease entity: disease-specific definitions used for common conditions, with clinical criteria defined for severe morbidity, e.g. pre-eclampsia and eclampsia. Advantages: easy to interpret; cases can be identified retrospectively; quality of care can be assessed. Disadvantages: all problems may not be covered; the condition may be difficult to define and quantify.

2. Management-specific criteria: management or intervention given for the disease, e.g. hysterectomy, blood transfusion or admission to ICU. Advantages: simple to use in identification of cases. Disadvantages: depends on other variables such as availability of ICU beds and indications for hysterectomy.

3. Organ-system dysfunction or failure: based on the concept that there is a sequence of events leading from good health to death, with death preceded by organ dysfunction or failure; specific conditions are listed, e.g. HELLP, DIC, AFLP, AFE, PPCMP, jaundice in pre-eclampsia and puerperal sepsis. Advantages: allows for identification of critically ill women; keeps the focus on severe diseases. Disadvantages: dependent on the existence of a minimum level of care, including functioning laboratories and basic critical-care monitoring.
CONCLUSION
Obstetric emergencies demand prompt life-saving measures. Accepting the concept of near miss and identifying the clinical characteristics of these patients is a substantial step towards preventing maternal mortality. Combating these issues at the level of primary health care facilities has become essential, with the availability of a functional operating theatre (OT), blood bank services and dedicated obstetric HDU staff. Evaluating patients for risk factors and providing high-risk and SAMM patients with obstetric HDU care can further decrease the maternal mortality ratio. In order to reduce the incidence of near-miss cases, it is important to address women at all levels, including awareness about antenatal compliance, hygienic deliveries in proper health care facilities, availability of trained staff, and birth spacing. Maternal near miss has emerged as an adjunct to the investigation of maternal deaths, as the two represent similar pathological and circumstantial factors leading to severe maternal outcome.
As the number of maternal near-miss cases exceeds that of maternal deaths, and the women are alive to directly report the problems and obstacles that had to be overcome during the process of health care, they provide useful information on the quality of health care at all levels.
Thus, there is a need for application of the maternal near-miss concept for assessment of maternal health and quality of maternal care. RCOG guidelines are helpful in the management of such cases with multiple comorbidities; however, resource-poor nations must develop their own protocols for management of obstetric emergencies in peripheral settings.
Development of a delayed chronic subdural haematoma 3 years after traumatic brain injury with urinary incontinence: a case report
Introduction: The authors present a case of a delayed chronic subdural haematoma, a rare occurrence that manifested 3 years after a traumatic brain injury, accompanied by an unexpected symptom of urinary incontinence. Chronic subdural haematoma (CSDH) is a well-known condition characterised by the accumulation of old, liquefied blood under the dura mater, usually following minor head trauma. However, the atypical presentation of CSDH in a young patient without predisposing factors and the association with urinary incontinence challenge conventional understanding. This report explores the clinical manifestations, radiological findings, and management of this exceptional case, providing valuable insights into this unusual presentation. Case presentation: In this report, the authors present the case of a 23-year-old male with an unremarkable medical history, devoid of prior neurological deficits, who presented with persistent headaches, memory impairment, left-right disorientation, slurred speech, and urinary incontinence, troubling him for the past month. The patient had a history of a traumatic brain injury from a road traffic accident 3 years earlier, initially devoid of concerning symptoms. Imaging revealed a large heterogeneous mass lesion in the left fronto-parietal lobe consistent with a chronic subdural haematoma. The patient underwent surgical evacuation and excision of the haematoma, leading to the successful resolution of symptoms. Clinical discussion: Conventionally, chronic subdural haematoma is observed in elderly individuals following minor head trauma. However, this case challenges the traditional understanding by highlighting its delayed occurrence in a young patient without known predisposing factors. This case emphasises the need to consider delayed presentations even without immediate neurological deficits. The unexpected symptom of urinary incontinence underscores the necessity of comprehensive evaluations to understand the associated neurological effects of CSDH. A surgical approach was crucial for both diagnosis and treatment, underscoring the significance of prompt intervention in such atypical cases. Conclusion: This exceptional case sheds light on a delayed chronic subdural haematoma occurring years after traumatic brain injury in a young patient without known risk factors. The presence of urinary incontinence as a symptom further amplifies the uniqueness of this case. Understanding and recognising atypical presentations of CSDH is vital for accurate diagnosis and timely intervention. This report underscores the importance of vigilance and an integrated approach to managing patients with subdural haematomas, particularly in unexpected demographics and circumstances, to ensure optimal outcomes and patient well-being.
Introduction
A subdural haematoma refers to a condition where blood accumulates beneath the dura mater, which is one of the protective layers surrounding the brain, often resulting from bleeding of the bridging veins situated between the brain's protective membranes [1,2]. Chronic subdural haematoma (CSDH) is a distinctive type, characterised by the gradual accumulation of aged, partially liquefied blood, forming a chronic subdural haematoma [3]. Typically, CSDH becomes evident ~3 weeks after a head injury [1,3]. This condition predominantly affects the elderly population and is frequently associated with neurological deficits, even following minor traumatic events [4]. Symptoms can vary widely, ranging from alterations in mental status, such as mild confusion to coma [5], and commonly presenting as posttraumatic headaches and weakness or paralysis on one side of the body, often interspersed with clear intervals. Healthcare practitioners use specific timeframes to categorise CSDH presentations, distinguishing them as acute (within 3 days of trauma), subacute (within 4-21 days), or chronic (after 21 days).
Several factors contribute to the risk of developing CSDH, including advanced age, a history of prolonged alcohol misuse, and prior traumatic brain injuries, which can lead to significant cerebral atrophy [6,7]. This cerebral atrophy subsequently heightens the susceptibility to subdural haematoma (SDH) following minor head injuries or whiplash, even in the absence of direct physical impact. Additionally, the use of anti-platelet medications, direct oral anticoagulants, or vitamin K antagonists further increases the vulnerability to SDH [8].
Although urinary incontinence may not seem directly related to CSDH, it is crucial to understand that CSDH can exert pressure on the brain [9]. This pressure can disrupt the brain's critical role in regulating bladder function, ultimately leading to urinary incontinence. While urinary incontinence is a symptom observed in various other medical conditions, its presence can serve as an indicator of a lesion within the cerebrum.
The management of urinary incontinence in patients with CSDH typically involves a comprehensive approach that addresses both the underlying haematoma and the associated neurological symptoms. Surgical drainage of the haematoma is a common intervention aimed at relieving pressure on the brain and alleviating neurological symptoms [10]. Furthermore, rehabilitation and physical therapy may be necessary to assist patients in regaining bladder control and restoring other functional abilities that may have been affected by the haematoma.
This case report presents a unique patient with delayed chronic SDH observed on brain computed tomography (CT), initially devoid of clinical abnormalities. The presented case is notable for its atypical presentation of a delayed CSDH in a young patient without predisposing factors, who did not report any deficits or alterations in the level of consciousness (ALOC) or fluctuations in the Glasgow Coma Scale (GCS) following the initial traumatic incident or in the subsequent years. This case highlights the importance of considering radiological imaging in all patients presenting with neurological symptoms, emphasising the necessity of prompt referral to further evaluate their condition.
The reporting of the following case adhered to the Surgical CAse REport (SCARE) guidelines [11] .
Case presentation
A 23-year-old male patient with no known comorbidities presented to our outpatient department (OPD) with a range of concerning symptoms. These included persistent headaches, left-right disorientation, acalculia (difficulty with arithmetic calculations), memory impairment, slurring of speech, and shameless urinary incontinence. These issues had been troubling him for the past month. According to the patient, he was in good health until three years ago when he was involved in a road traffic accident (RTA) due to a motorcycle slip. He was initially treated at a local hospital for a laceration on his left frontal skin. However, due to the unavailability of a CT scan at the time, no further imaging was performed. The patient was discharged home as he was clinically stable and did not exhibit any apparent injuries or fractures.
The patient's recent symptoms began with a diffuse and dull headache that was progressively worsening. Notably, this headache did not respond to over-the-counter painkillers and was aggravated by lying down and bending forward. Additionally, the headache was accompanied by memory impairment and shameless urinary incontinence. His previous medical history was insignificant.
Upon examination, the patient's vital signs and sub-vitals were found to be within normal limits. A mini-mental examination revealed that the patient was oriented to time, place, and person. However, he displayed signs of memory impairment, acalculia, left-right disorientation, and fluent but disordered speech. Other than these cerebral signs, there were no observable cranial nerve deficits, cerebellar signs, or motor and sensory abnormalities.
Initial baseline investigations, including a chest X-ray (CXR), complete blood count (CBC), urea and creatinine levels (UCEs), liver function tests (LFTs), prothrombin time (PT), activated partial thromboplastin time (APTT), and viral markers, all returned within normal ranges. The haematological and biochemical profile of the patient is shown in Table 1.
HIGHLIGHTS
• A 23-year-old male presented with a delayed chronic subdural haematoma (CSDH) 3 years after a traumatic brain injury, challenging conventional understanding due to the absence of immediate neurological deficits.
• The patient exhibited symptoms uncommon for CSDH, including persistent headaches, urinary incontinence, memory impairment, and left-right disorientation without immediate paralysis or weakness.
• Surgical evacuation and excision of the haematoma were performed, leading to the successful resolution of symptoms and highlighting the importance of prompt intervention in atypical cases.
• The case challenged conventional understanding as it occurred in a young patient without known predisposing factors and underscored the need for considering delayed presentations even without immediate neurological deficits.
A plain CT brain scan (Fig. 1) revealed a concerning finding: a large heterogeneous mass lesion with a hyperdense rim and internal septae in the left fronto-parietal lobe, causing mass effect and sub-falcine herniation. Subsequently, an MRI brain scan (Fig. 2) with contrast was conducted, which provided further insights. The MRI showed a large extra-axial abnormal signal intensity area along the left fronto-parietal region, displaying heterogeneous hyperintense signals on T1-weighted images with a hypointense outer membrane and similarly heterogeneous hyperintense signals on T2-weighted/FLAIR images. Multiple internal hypointense septae and a hypointense outer membrane were also observed. Post-contrast sequences demonstrated enhancement of both the inner and outer membranes. Additionally, restricted diffusion was observed on DWI/ADC mapping, with the lesion measuring ~13.7 × 4.7 cm in anteroposterior (AP) and transverse (TR) dimensions and 6.1 cm in craniocaudal (CC) dimension.
The SWI sequence revealed areas of susceptibility adjacent to the dural surface and septae, as well as surrounding vasogenic oedema in the left frontal lobe. Susceptibility foci corresponding to contusions were also identified. Notably, the mass effect caused significant compression of adjacent brain parenchyma, leading to the effacement of sulci and gyri in the left fronto-parietal lobe, as well as the frontal horn and body of the left lateral ventricle. Additionally, there was significant sub-falcine herniation with a midline shift measuring ~1.9 cm towards the right side. Furthermore, the mass effect caused medial displacement of the left anterior cerebral artery, M4 segment of the left middle cerebral artery, and left posterior cerebral artery. Interestingly, there was asymmetry with hypoplasia of both cerebellar hemispheres, along with prominent vascular spaces identified at the right basal ganglia region. Magnetic resonance angiography (MRA) demonstrated patent vessels, and magnetic resonance venography (MRV) showed no filling defects within the sinuses.
In the surgical procedure, a meticulous and complex approach was taken to address the brain lesion. A reverse question mark incision on the left side, guided by preoperative MRI findings, provided access to the targeted area. A fronto-temporo-parietal craniotomy was executed, allowing the surgical team to remove a portion of the skull to reach the brain. The dura mater, a protective membrane surrounding the brain, was opened in a C-shaped manner with the base oriented towards the middle fossa. Notably, the dura was found to be avascular and devoid of blood vessels. Upon opening the lesion capsule, foul-smelling and dirty contents were found (Fig. 3). Biopsies and samples of these contents were meticulously collected for histopathology and culture analysis. The procedure also involved an internal debulking process, aided by the observation that the lesion was extra-axial and easily separated from the surrounding brain while maintaining the arachnoidal plane. A gross total resection was successfully achieved. To ensure a secure closure, the dura was meticulously closed in a water-tight manner, and the wound was closed in layers. Additionally, an autologous subdural (ASD) procedure was performed intraoperatively, indicating a comprehensive surgical approach aimed at addressing this complex brain lesion. The postoperative CT of the patient is shown in Figure 4.
The final biopsy and histopathological report were consistent with haematoma formation. Upon gross examination, the specimen consisted of a single dura-covered tissue piece measuring 5.5 × 4 × 1 cm. The cut surface appeared light brown to dark brown and firm. Microscopic examination revealed a thick fibrous wall containing fibrin and haemorrhage. There was no evidence of significant inflammation, granulomata, or neoplastic processes. The diagnosis rendered is a posterior fossa SOL with features compatible with haematoma formation. No evidence of any neoplastic process was identified.
Discussion
Neurological deficits represent a hallmark of CSDH. Hemiparesis, characterised by weakness or paralysis on one side of the body, is a prevalent neurological deficit in CSDH, occurring in up to 58% of cases. Additional presentations encompass focal neurological deficits, weakness or paralysis in a single limb (monoplegia), seizures, altered mental status, or extra-pyramidal symptoms, including movement disorders. However, our case diverges from the typical pattern, as the patient, despite having a large CSDH, did not display any post-trauma paralysis or weakness, indicating an unusual absence of immediate neurological deficits. Symptoms only manifested after 3 years, persisting for a month, primarily as headaches, urinary incontinence, and left/right disorientation or slurring of speech, which are atypical for CSDH.
The mechanism underlying CSDH generally involves minor head trauma, causing bleeding from bridging veins within the brain into the meninges [12]. Over time, the accumulated blood partially liquefies and encapsulates, forming a chronic subdural haematoma. If left unevacuated, it can undergo calcification or ossification [13]. However, in this case, the absence of calcification or ossification after 3 years suggests that the haematoma did not result from immediate post-trauma, as calcification typically begins around 6 months, leading to calcified CSDH, occasionally progressing to ossification [13].
CSDH commonly affects elderly individuals, presenting with altered mental status, monoplegia, headache, and seizures [3]. In contrast, our case is unique, involving a 23-year-old patient without known comorbidities, no alcohol use, and no blood thinners or anti-platelet drugs. Factors such as advanced age, chronic alcohol abuse, and prior traumatic brain injury (TBI) predispose individuals to significant cerebral atrophy, heightening the risk of SDH from trivial head injury or whiplash without direct physical impact [3,5]. Most documented cases involve delayed traumatic intraparenchymal or extradural haematomas. However, the occurrence of delayed acute SDH or delayed chronic SDH in patients devoid of coagulation disorders or risk factors is infrequent and poorly understood. Notably, our patient developed urinary incontinence, suggesting that the voiding symptoms stemming from the subdural haematoma might have resulted from compression of the descending corticospinal tracts [14]. The haematoma's location might have led to compression of the area responsible for detrusor innervation.
The management of urinary incontinence in patients with CSDH typically involves a comprehensive approach that addresses both the underlying haematoma and the associated neurological symptoms. Surgical drainage of the haematoma is a common intervention aimed at relieving pressure on the brain and alleviating neurological symptoms [10]. Furthermore, rehabilitation and physical therapy may be necessary to assist patients in regaining bladder control and restoring other functional abilities that may have been affected by the haematoma.
When evaluating patients with neurological symptoms, it is crucial to recognise that not all cases will conform to the typical patterns or expectations.This particular case underscores the need for a high index of suspicion and thorough diagnostic investigations, even in the absence of immediate neurological deficits or significant clinical abnormalities.
Given the unique nature of this case, it is recommended that healthcare practitioners maintain a vigilant approach to patients presenting with neurological symptoms, regardless of their age or apparent risk factors.Radiological imaging, such as CT or MRI of the brain, should be considered as an essential diagnostic tool to identify potential underlying pathologies, including subdural haematomas.Early recognition and appropriate referral for imaging studies can help prevent delayed diagnoses and ensure timely interventions.
This case challenges conventional understanding, highlighting the need for a more comprehensive grasp of atypical presentations and underlying mechanisms of CSDH.A multidimensional evaluation considering various factors is essential for precise diagnosis and effective management in such uncommon cases.
Conclusion
CSDH is well reported after trauma and in the elderly, but it is rare in young patients without any comorbidities, presenting years after trauma without prior neurological deficits or calcified changes. Seizures and paralysis are common presentations in CSDH patients, but urinary incontinence and disruption of higher cerebral function are uncommon presentations. An initial CT scan is crucial in determining the aetiology as well as the management. Although this phenomenon is uncommon, emergency physicians must be vigilant for this possibility in the face of persistent or delayed post-traumatic symptoms, even if an initial CT scan is normal.
Ethical approval
This study was approved by the ethics committee of the institution.
Figure 2. MRI scan of brain-chronic subdural haematoma, haemorrhagic contusions at left frontal lobe with surrounding vasogenic oedema, and absence of cerebral venous sinus filling defect on magnetic resonance venography.
Figure 4. Postoperative computed tomography scan of brain-craniotomy defect with pneumocephalus and subgaleal collection (2.3 cm) in left fronto-parietal bone, crescentric hypodensity indicating subdural collection (2.3 cm) with compression effects on adjacent brain parenchyma, partial effacement of left lateral ventricle, and 1.4 cm midline shift towards the left side.
Table 1
Haematological and biochemical profile of the patient. HCT, haematocrit; MCH, mean corpuscular hemoglobin; MCHC, mean corpuscular hemoglobin concentration; MCV, mean corpuscular volume; RBC, red blood cell; RDW-CV, red cell distribution width-coefficient of variation; RDW-SD, red cell distribution width-standard deviation.
Two Birds, One Stone: Double Hits on Tumor Growth and Lymphangiogenesis by Targeting Vascular Endothelial Growth Factor Receptor 3
Vascular endothelial growth factor receptor 3 (VEGFR3) has been known for its involvement in tumor-associated lymphangiogenesis and lymphatic metastasis. The VEGFR3 signaling is stimulated by its main cognate ligand, vascular endothelial growth factor C (VEGF-C), which in turn promotes tumor progression. Activation of VEGF-C/VEGFR3 signaling in lymphatic endothelial cells (LECs) was shown to enhance the proliferation of LECs and the formation of lymphatic vessels, leading to increased lymphatic metastasis of tumor cells. In the past decade, the expression and pathological roles of VEGFR3 in tumor cells have been described. Moreover, the VEGF-C/VEGFR3 axis has been implicated in regulating immune tolerance and suppression. Therefore, the inhibition of the VEGF-C/VEGFR3 axis has emerged as an important therapeutic strategy for the treatment of cancer. In this review, we discuss the current findings related to VEGF-C/VEGFR3 signaling in cancer progression and recent advances in the development of therapeutic drugs targeting VEGF-C/VEGFR3.
Introduction
Vascular endothelial growth factor receptor (VEGFR) tyrosine kinases are critical regulators in the development and maintenance of blood and lymphatic vascular systems. In mammals, VEGFRs consist of three membrane proteins referred to as VEGFR1 (FLT1), VEGFR2 (KDR/FLK1), and VEGFR3 (FLT4) [1][2][3][4]. The activity of VEGFRs is modulated by five secreted glycoproteins, the vascular endothelial growth factors (VEGFs), which include VEGF-A, VEGF-B, VEGF-C, VEGF-D, and PLGF. The VEGF ligands bind to and activate three different VEGFRs, resulting in the stimulation of angiogenesis and lymphangiogenesis [5][6][7]. The VEGFR1 gene produces two major proteins, a full-length receptor and a soluble VEGFR1 (sFlt-1). Full-length and soluble VEGFR1 are high-affinity receptors for VEGF-A, VEGF-B, and PLGF, and have been shown to function as negative regulators of VEGFR2 signaling [8][9][10][11]. In response to VEGF-A binding, VEGFR1 only exerts low activation of intracellular signaling and serves as a decoy receptor for VEGF-A, preventing its binding to VEGFR2 [12]. Although the kinase activity of VEGFR1 is relatively low compared with that of VEGFR2, the binding of PLGF can induce survival signals in endothelial cells and enhance angiogenesis [13]. In addition, several studies have shown that VEGFR1 signaling is critical for tumor growth, metastasis, activation of monocyte/macrophages, and macrophage migration [14][15][16][17][18]. VEGFR2 is another signaling receptor for VEGF-A and has been shown to play an important role in angiogenesis.
Figure 1. The signaling pathways of vascular endothelial growth factors and vascular endothelial growth factor receptors (VEGFs/VEGFRs) and their biological functions. The three tyrosine kinase (TK) receptors have specific binding capabilities. VEGF-A, VEGF-B, and PLGF can bind to VEGFR1 and mediate its biological functions. The binding of VEGF-A, VEGF-C, and VEGF-D can stimulate the activation of VEGFR2, resulting in cell proliferation and angiogenesis. VEGF-C and VEGF-D bind to VEGFR3 and induce downstream signaling which mediates cell survival and lymphangiogenesis. Neuropilin 1 (NRP1) and neuropilin 2 (NRP2) can function as co-receptors for VEGFR2 and VEGFR3. The binding of VEGF-A isoforms and NRP1 can form a complex with VEGFR2, leading to the induction of downstream signaling which regulates the proliferation and migration of endothelial cells. VEGF-C/D bind to NRP2 and forms a complex with VEGFR3, activating the VEGFR3 signaling which enhances the proliferation of lymphatic endothelial cells (LECs) and lymphangiogenesis. MKK4, Mitogen-activated protein kinase kinase-4; JNK1/2, c-Jun N-terminal kinase-1/2; PI3K, phosphoinositide-3 kinase; AKT/PKB, AKT/protein kinase B; PKC, protein kinase C; ERK, extracellular signal-related kinase; SHC-GRB2, Src homology domain containing growth factor receptor-bound protein 2.
Regulation of VEGFR3 Signaling
VEGFRs consist of seven immunoglobulin-like (IG) domains that comprise the ligand-binding part, a single transmembrane domain, and a cytoplasmic tail which contains the split kinase domains for transducing growth factor signals. However, IG domains of VEGFR3 are different from that of other VEGFRs, where the fifth IG domain of VEGFR3 is cleaved and the two processed parts are held together through a disulfide bond [26] (Figure 1). The first and second IG domains of VEGFR3 are responsible for ligand binding, whereas the fourth to seventh IG domains are important for receptor homodimerization, heterodimerization (VEGFR2/VEGFR3), and receptor activation [27,28]. It has been known that VEGF-C and VEGF-D have a high affinity for VEGFR3. A previous study shows that VEGF-C is essential for sprouting of the first lymphatic vessels from embryonic veins. In Vegfc−/− mice, endothelial cells can commit to the lymphatic endothelial lineage but do not form lymphatic vessel sprouts from the embryonic veins [25]. In contrast, no defects in formation of lymphatic vessel sprouts from the embryonic veins were observed in Vegf-d-deficient mice [29]. However, one study demonstrates that endogenous Vegf-d in mice is dispensable for lymphangiogenesis during development, but its expression significantly contributes to lymphatic metastasis of tumors [30].
VEGF-C binding induces VEGFR3 dimerization and enhances the phosphorylation of tyrosine kinases in the cytoplasmic tail, resulting in the increase of downstream signaling. These phosphotyrosine residues then serve as docking sites for recruiting cytoplasmic signaling mediators that elicit diverse cellular responses such as cell proliferation, migration, and survival. Phosphorylated Tyr1337 has been proposed to be a binding site for the Src homology domain containing growth factor receptor-bound protein 2 (SHC-GRB2) complex, which activates the KRAS signaling pathway and regulates the transformation activity of VEGFR3 [31]. VEGF-C-induced phosphorylation of Tyr1230 and Tyr1231 stimulates the AKT/protein kinase B (AKT/PKB) and extracellular signal-related kinase (ERK) signaling pathways, contributing to proliferation, migration, and survival of lymphatic endothelial cells (LECs) [32,33]. Phosphorylation of Tyr1063 of VEGFR3 mediates cell survival by recruiting CRK I/II and inducing c-Jun N-terminal kinase-1/2 (JNK1/2) signaling via mitogen-activated protein kinase kinase-4 (MKK4) [33]. VEGFR3 phosphorylation also triggers phosphoinositide-3 kinase (PI3K)-dependent activation of AKT and protein kinase C (PKC)-dependent activation of ERK1/2 pathways. Stimulation of both signaling pathways promotes the proliferation of lymphatic endothelial cells [32] (Figure 1).
The signaling via VEGFRs is also modulated through interactions with their coreceptors, such as neuropilin 1 (NRP1) and neuropilin 2 (NRP2). Originally, neuropilins were found to be expressed in the nervous and vascular systems and were identified as axonal guidance factors implicated in nerve development. NRP1 is mainly expressed in arteries, whereas NRP2 is expressed in veins and LECs [34,35]. It has been described that NRP1 specifically binds to VEGF-A isoforms such as VEGF-A165 and forms a complex with VEGFR2. The formation of VEGF-A165/NRP1/VEGFR2 complex induces VEGFR2 phosphorylation and downstream signaling, which regulates the proliferation and migration of endothelial cells [36,37]. In the vascular system, the expression of NRP2 and VEGFR3 is mainly in lymphatic vessels [38,39]. Nrp2-deficient mice show small lymphatic vessels and capillaries, which implies that the expression of NRP2 is critical for the development of lymphangiogenesis [38]. Although the mechanism of NRP2-mediated lymphangiogenesis remains unclear, increasing evidence suggests that NRP2 binds to VEGF-C/D and forms a complex with VEGFR3, thereby activating the VEGFR3 signaling which enhances the proliferation of lymphatic endothelial cells and lymphangiogenesis [40][41][42].
VEGFR3 is initially expressed in all vascular endothelial cells during embryogenesis and early postnatal development but later becomes restricted to LECs and certain fenestrated capillaries [43,44]. Since VEGFR3 expression is restricted to lymphatic vessels, it has been used as a marker for lymphatic vessels [45]. However, increasing evidence suggests that VEGFR3 is upregulated in blood vessels in some tumors and chronic wounds during active angiogenesis [46][47][48][49]. VEGFR3 has also been shown to be expressed in neuronal progenitors, osteoblasts, and macrophages [50][51][52]. Furthermore, recent studies have indicated that VEGFR3 expression is detected in different types of cancers and it contributes to tumor progression and lymphatic metastasis (Table 1).
Table 1 footnote: +, the expression of VEGFR3 is correlated with angiogenesis or lymph node metastasis; −, the expression of VEGFR3 is not correlated with angiogenesis or lymph node metastasis.
Functional Roles of VEGFR3 in Lymphatic Endothelial Cells
Lymphatic vessels are an integral part of the cardiovascular system, and are important for tissue fluid homeostasis, immune surveillance, and lipid absorption. The lymphatic vasculature collects extracellular fluids, proteins, lipids, and immune cells through lymphatic capillaries and drains lymph into pre-collector vessels that contain valves, ultimately transporting into the venous circulation [94,95]. Defective development of lymphatic vessels causes several disorders including vascular malformation, lymphoedema, and lymphangiectasia [96], whereas enhanced lymphangiogenesis is associated with tumor metastasis and tissue inflammation [97]. It has been shown that growth of lymphatic vessels occurs upon the exposure of LECs to VEGF-C-induced VEGFR3 signaling [25]. Available data support that VEGFR3 is critical for lymphatic vessel development. For example, VEGFR3 mutations identified in human and mice are known to cause lymphoedema [24,98,99]. Moreover, mice with Vegfr3 deletion die at around E10.5 due to failure of cardiovascular development [100]. Furthermore, VEGF-C/VEGFR3 signaling is also implicated in modulating the remodeling and homeostasis of lymphatic vessels. A study of Vegf-c-deficient mice suggested that VEGF-C signaling was required for the migration of LECs and the formation of lymphatic vessel sprouts from embryonic veins [25]. A recent study shows that LECs of Vegf-c-deficient mouse embryos fail to detach from the cardinal vein and are unable to form the dorsal peripheral longitudinal lymphatic vessel (PLLV) and the ventral primordial thoracic duct (pTD), which results in lethality of mouse embryos [101]. Results obtained from genetically engineered animals further support the essential role of VEGF-C in lymphangiogenesis showing that depletion of the matrix-binding adapter protein CCBE1 reduces proteolytic processing of VEGF-C by protease A disintegrin and ADAMTS3 metalloprotease, resulting in the attenuation of the VEGFR3 signaling and lymphangiogenesis [102,103]. In addition, overexpression of VEGF-C induces the proliferation of LECs and hyperplasia of the lymphatic vasculature through VEGFR3 [104].
Clinical Significance of VEGF-C/VEGFR3 Expression in Tumors
Lymphangiogenesis is an important step in tumor progression. Dysregulation of lymphangiogenic factors has been known to promote lymphangiogenesis, which induces the formation of new lymphatic vessels that connect with the surrounding lymphatic vessels and provide routes for the transport of tumor cells to distant sites. The potential roles of the VEGFR-C/VEGFR3 axis in regulating tumor lymphangiogenesis and progression have been suggested. The expression of VEGF-C is detected in a variety of human tumors [105][106][107][108][109][110][111][112] and the increased level of VEGF-C is significantly correlated with lymph node metastasis, distant metastasis, and poor prognosis [97,113]. VEGF-C overexpression in breast cancer cells activates the VEGF-C/VEGFR3 axis in LECs and induces the formation of lymphatic vessels within and around tumors, resulting in enhanced tumor metastasis through lymphatic vessels [114,115] (Figure 2). In addition, mice bearing VEGF-C-overexpressing human breast carcinoma cells exhibited increased lymphangiogenesis and tumor metastasis via the lymphatic vessels [116]. Moreover, a soluble form of VEGFR-3, a potent inhibitor of VEGF-C/VEGF-D signaling, can inhibit lymphangiogenesis and suppress tumor metastasis [117].
Expression and Roles of VEGFR3 in Tumor Cells
VEGFR3 is primarily expressed in LECs, but is also expressed in non-endothelial cells, such as tumor cells (Table 1). Recently, Batsi et al. reported that the expression of VEGFR3 was detected in the nuclei of tumor cells and endothelial cells of tumor vessels in both primary urothelial bladder carcinoma and their recurrent tumors. However, the expression of VEGFR3 was not correlated with tumor grade and clinical stage [53]. Previous studies have also demonstrated that VEGFR3 protein was detected in breast cancer specimens. High expression levels of several angiogenesis-related proteins, including VEGFR3, are observed in patients with early-stage breast cancer and are associated with clinicopathological parameters and survival outcome [54]. It has been shown in a mouse model that the expression of VEGF-C and VEGFR3 promotes tumor growth and metastasis in an autocrine manner, whereas treatment with a VEGFR3 antagonist significantly suppresses tumor growth and lung metastasis [55]. Eroglu et al. also found that, while VEGFR3 is expressed in breast cancer cells, its expression is not associated with lymph node metastasis [56].
A recent study demonstrated that tumor-associated macrophages induced the expression of VEGF-C and VEGFR3 in lung adenocarcinoma cells, resulting in enhanced migration and invasion of cancer cells. Blockade of VEGFR3 signaling inhibits tumor growth and markedly suppresses the migration and invasion of tumor cells by upregulating the expression of p53 and PTEN. Furthermore, the study's data revealed that the inhibition of VEGFR3 enhances chemosensitivity of doxorubicin in lung adenocarcinoma cells [58].
VEGFR3 expression has also been found in ovarian cancer cells and activation of the VEGFR3 signaling is induced by VEGF-C, which is produced by tumor-associated myeloid cells. The inhibition of VEGFR3 signaling results in the down-regulation of BRCA expression and cell cycle arrest. Moreover, VEGFR3 blockade chemosensitizes ovarian cancer to cisplatin chemotherapy in vitro and in vivo [59]. Decio et al. confirmed that VEGF-C and VEGFR3 were expressed in ovarian tumor tissues. VEGF-C released by tumor cells stimulates the VEGFR3 signaling in a paracrine and autocrine manner, leading to an increase in tumor growth and metastasis. Targeting the VEGF-C/VEGFR3 pathway decreases tumor burden and dissemination of ovarian tumors [60]. In renal cell cancer (RCC), the expression of VEGFR3 has been demonstrated in several studies [61,62]. Furthermore, Zhang et al. showed that VEGFR3 expression is correlated with histological grade, the status of lymph node, and metastasis in papillary renal cell carcinoma. Moreover, the expression of VEGFR3 can serve as a prognostic marker for papillary renal cell carcinoma and is also a predictor of lymph node metastasis as well [63].
Immunohistochemical analysis and qRT-PCR studies have demonstrated that the expression of VEGFR3 was increased in endometrial carcinomas compared with normal endometrium [64,65]. Additionally, VEGFR3 expression was significantly associated with tumor stage and poor disease-free survival in endometrial carcinomas [64]. Zhu et al. found that VEGFR3 was highly expressed in tissue samples of colorectal cancer. High expression of VEGFR3 was associated with the TNM (tumor, node, metastasis) stage and lymph node metastasis of colorectal cancer. The authors further illustrated that lipopolysaccharide (LPS) could upregulate VEGFR3 expression through increasing the binding of NF-κB to the promoter of VEGFR3, thereby promoting the migration and invasion of colorectal cancer cells [66]. Another study also showed that VEGFR3 expression was found in colorectal cancer and its expression was associated with lung metastasis [67].
A previous study showed that VEGFR3 expression was also found in gastric cancer and correlated with poorer prognosis, TNM stage, and lymphatic metastasis [69]. Recently, Dai et al. showed in an orthotopic mouse model that treatment with VEGFR3 antibody-conjugated ginsenoside Rg3 nano-emulsion might inhibit the expression of VEGF-C, VEGF-D, and VEGFR3, resulting in the suppression of tumor growth and lymphatic metastasis of human gastric cancer [70]. The expression of VEGFR3 mRNA and protein were also detected in multiple cancers, including bladder, oral, head and neck, esophageal, and cervical cancers [71][72][73][74][75][76][77].
In prostate cancer, Yang et al. demonstrated that VEGF-C mRNA and VEGFR3 were highly expressed in tumorous prostate tissue. The expression of VEGFR3 is higher in VEGF-C mRNA-positive tumors compared to VEGF-C mRNA-negative tumor tissues. Thus, VEGFR3 expression is associated with poor prognosis and metastasis in human prostate cancer [79]. High expression levels of VEGFR2 and VEGFR3 were also detected in several medullary thyroid carcinoma (MTC) samples [80]. Another study investigated the influence of RAS mutation on the expression of TKI target proteins in MTC tumors. The results showed that VEGFR3 protein is expressed in few RAS-positive tumors and VEGF is frequently expressed in wild-type tumors. These findings could improve the selection of MTC patients for targeted therapy [81]. Kurenova et al. demonstrated that focal adhesion kinase (FAK) and VEGFR3 form a complex to promote cell proliferation in pancreatic ductal adenocarcinoma (PDA). They further showed that a small molecule inhibitor C4 could disrupt the interaction of FAK and VEGFR3 and inactivate FAK/VEGFR3 signaling to suppress cancer cell growth. Moreover, the combination of C4 and gemcitabine showed a significant synergistic effect on tumor suppression in PDA [82]. Another small molecule inhibitor C10 was also found to target the FAK/VEGFR3 complex and inhibit the growth of pancreatic tumor in vivo [83].
The expression of VEGFR3 has been detected in neuroblastoma cell lines, and the blockade of FAK-VEGFR3 interaction by C4 has also reduced cellular migration and proliferation. In addition, the combination of C4 and doxorubicin significantly suppressed tumor growth in a xenograft animal model [84,85]. VEGFR3 expression has been found in melanoma. Targeting FAK-VEGFR3 interaction by the small molecule C4 significantly inhibits melanoma tumor growth in vivo [87]. Recently, VEGF-C and VEGFR3 were found to be expressed in basal cell carcinoma (BCC). Yeh et al. demonstrated that the VEGF-C/VEGFR3 axis enhances the migration, invasion, and stemness of skin cancer cells via the KRAS/YAP1/Slug pathway. Targeting the VEGF-C/VEGFR3 axis by VEGFR3 blocking peptide significantly suppressed skin cancer progression [92].
Expression and Function of VEGFR3 in Immune Cells
Lymphatic vessels transport fluid, soluble antigens, and immune cells from peripheral tissues to draining lymph nodes (dLNs), where adaptive immunity and tolerance are modulated [118,119]. In addition to providing the routes for the trafficking of peripherally activated dendritic cells (DCs) into dLNs to activate immune response, lymphatic vessels also provide the routes for cellular egress leading to immune resolution [120,121]. It has been reported that lymphangiogenesis often occurs in chronic inflammatory tissues, including inflammatory bowel disease, chronic airway inflammation, and psoriasis [120,122,123]. VEGF-C and VEGFR3 are largely responsible for the development of lymphatic vessels. The pathogenic roles of VEGF-C and VEGFR3 in chronic inflammatory diseases and immune response have been well characterized in recent investigations. A previous study showed that the systemic inhibition of VEGFR3 increases the formation of inflammatory edema and inflammatory cell accumulation despite the inhibition of lymphangiogenesis in a Keratin 14 (K14)-VEGF-A transgenic (Tg) mouse model. Chronic delivery of VEGF-C or VEGF-D (which activates VEGFR3 signaling) into the skin of K14-VEGF-A mice significantly suppressed chronic skin inflammation, epidermal hyperplasia, and accumulation of CD8 cells. Similar results were also found by intracutaneous injection of recombinant VEGF-C156S mutant protein, a specific VEGFR3 ligand, which significantly reduced skin inflammation [121]. D'Alessio et al. demonstrated that increased lymphangiogenesis and lymphatic function reduced inflammatory bowel disease. The authors found that the VEGF-C/VEGFR3 signaling mediates "resolving" macrophage activation and mobilization in a STAT6-dependent manner, resulting in bacterial antigen clearance from the inflammatory area to the draining lymph nodes [120]. Furthermore, the expression of VEGFR3 in different immune cells has been reported. Hamrah et al. demonstrated the expression of VEGFR3 in corneal DCs and its up-regulation in inflammation. The authors further characterized that VEGFR3 + DCs are CD11c + CD45 + CD11b + and mostly major histocompatibility (MHC) class II − CD80 − CD86 − , which belong to immature DCs of the monocytic lineage [124]. In addition, Fernandez Pujol et al. reported that VEGFR3 is detected in immature DCs. In the presence of angiogenic growth factors, the immature DCs can differentiate into endothelial-like cells [125]. The expression of VEGF-C, VEGF-D, and VEGFR3 in tumor-associated macrophages (TAMs) has been shown in human cervical cancer [126]. The study indicates that VEGF-C/VEGFR3-expressing TAMs may play an important role in peritumoral lymphangiogenesis. Moreover, Su et al. also demonstrate that the VEGF-C/VEGFR3 axis is critical for macrophage infiltration in lung cancer, and VEGFR3-mediated macrophage infiltration may be involved in the radiosensitization of lung cancer [127]. More recently, Zhang et al. show that Gram-negative bacterial infection or LPS stimulation can elevate the expression of VEGFR3 and VEGF-C through TLR4-NF-kB signaling in macrophage, whereas VEGF-C ligation of VEGFR3 forms a negative feedback loop to inhibit TLR4-induced inflammatory responses. Their results represent a self-control mechanism to prevent uncontrolled inflammation in macrophages during bacterial infection [128]. The expression of VEGFR3 was also reported in natural killer (NK) cells. Lee et al. 
showed that the NK cells from acute myeloid leukemia (AML) express higher levels of VEGFR3 and lower levels of IFN-γ compared to the NK cells from healthy donors [93]. Moreover, increased lymphatic vessels and lymph drainage are correlated with tumor progression and tumor-associated lymphangiogenesis to enhance immune tolerance [129][130][131]. Emerging evidence also demonstrates that inflammatory lymphangiogenesis is correlated with graft rejection in renal and renal transplants [132,133]. Therefore, it is likely that VEGFR3 in immune cells might play complex roles in stimulation and resolution of immune response.
Development of Drugs That Target VEGF-C/VEGFR3 Signaling
As mentioned previously, tumor-associated lymphangiogenesis plays a critical role in the mediation of tumor metastasis and has emerged as a novel target for cancer treatment [134,135]. Currently, multiple therapeutic strategies have been developed for targeting VEGF-C/VEGFR3 signaling, including (1) small molecule receptor tyrosine kinase inhibitors (TKIs) of VEGFR3; (2) monoclonal antibodies or receptor traps targeting VEGF-C; and (3) neutralizing antibodies or peptides that block the VEGFR3 signaling.
Small Molecule TKIs of VEGFR3
Several TKIs have been developed for inhibiting the kinase activity of VEGFRs. Four TKIs that can be administered orally, namely, sorafenib, sunitinib, pazopanib, and axitinib, have been approved by the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) for clinical use [136,137] (Figure 3). The therapeutic efficacy of sorafenib monotherapy has been shown in patients with advanced renal cell carcinoma (RCC) and hepatocellular carcinoma (HCC) [138,139]. Sunitinib monotherapy has also shown significant improvement in progression-free survival (PFS) in patients with metastatic RCC [140]. The activity of pazopanib monotherapy was assessed in locally advanced or metastatic RCC, which showed improvement in PFS [141]. Recently, promising therapeutic efficacy of axitinib has been demonstrated in metastatic renal cell carcinoma (mRCC). Therefore, axitinib has been approved by the US FDA and EMA for the treatment of mRCC [137].
Cediranib is an oral VEGFR TKI and has been shown to suppress the activity of VEGFR2 and VEGFR3, leading to the inhibition of angiogenesis and lymphangiogenesis [142]. In the phase III ICON 6 trial, cediranib monotherapy has shown promising efficacy in platinum-sensitive relapsed ovarian cancer [143]. Brivanib, a selective dual inhibitor of VEGFRs and fibroblast growth factor receptors (FGFRs), has been evaluated in patients with advanced HCC. However, the results from phase III trials suggest that brivanib as an adjuvant therapy to transarterial chemoembolization (TACE) did not improve overall survival [144]. Moreover, the efficacy and safety of vandetanib in patients with advanced RET-rearranged non-small-cell lung cancer (NSCLC) was assessed in phase II trials. The clinical anti-tumor activity and a manageable safety profile of vandetanib were observed in patients with advanced RET-rearranged NSCLC [145,146]. Another TKI, motesanib, was tested in phase III trials in combination with paclitaxel and carboplatin (P/C) in advanced NSCLC patients. However, motesanib plus P/C did not significantly improve PFS [147] (Figure 3). Although the anti-tumor activity of TKIs has been reported, they are not highly selective since most of them target the ATP binding pocket. For example, sorafenib and sunitinib have been demonstrated to inhibit VEGFRs, platelet-derived growth factor receptors (PDGFRs), FGFRs, KIT, RET, and FLT3. These multi-targeted TKIs block a variety of kinases in addition to VEGFRs, resulting in adverse effects unrelated to VEGFR blockade. Therefore, the development of more specific VEGFR TKIs will improve anti-lymphangiogenic and anti-tumor activity with fewer off-target effects. Very recently, a small molecule TKI, SAR131675, has been reported to be highly specific for VEGFR3. The treatment of SAR131675 suppresses lymphangiogenesis and lymphatic metastasis in several experimental tumor models [148,149] (Figure 3).
Monoclonal Antibody Targeting VEGF-C/VEGFR3
Targeting the VEGF/VEGFRs signaling axis by using monoclonal antibodies has been demonstrated in recent years. The humanized anti-VEGF monoclonal antibody, bevacizumab, is an antibody approved by the US FDA for clinical use [150][151][152]. Bevacizumab-induced VEGF-A neutralization can prevent the binding of VEGF-A to VEGFR1 and VEGFR2, suppressing their activation and subsequent signaling cascades. Another neutralizing antibody against VEGFR2, ramucirumab, has also been approved for the treatment of various cancers including advanced gastric or gastro-esophageal junction adenocarcinoma, NSCLC, and advanced or metastatic urothelial carcinoma. Recently, a specific anti-VEGFR3 monoclonal antibody, IMC-3C5, has been assessed and has completed phase I trials in patients with advanced solid tumors and colorectal cancer (CRC). The results from the phase I study indicated that IMC-3C5 was well-tolerated up to the highest planned dose, but anti-tumor activity was not significant in CRC [153]. Another drug targeting the VEGF-C/VEGFR3 axis is VGX-100, a fully humanized VEGF-C neutralizing antibody which specifically binds to VEGF-C protein and thereby prevents its binding to VEGFR3. The therapeutic activity of VGX-100 was assessed in patients with advanced solid tumors in clinical phase I, and the trial was recently completed (ClinicalTrials.gov Identifier: NCT01514123) [154] (Figure 3). However, the results have not yet been published.
Numerous antibodies, soluble receptor proteins, and IgG fusion proteins targeting the VEGF-C/VEGFR3 axis have been investigated in preclinical studies. Jimenez et al. developed a bispecific antibody which binds to both VEGFR2 and VEGFR3 in a dose-dependent manner and inhibits the interaction of VEGF-A/VEGFR2 and VEGF-C/VEGFR3. Their results showed a simultaneous dual blockade of VEGFR2 and VEGFR3 by the antibody, subsequently inhibiting the migration of endothelial cells [155]. A previous study demonstrated that a soluble VEGFR3 decoy receptor, sVEGFR3-Fc, expressed by a recombinant adeno-associated viral vector, potently suppressed tumor-associated lymphangiogenesis and lymphatic metastasis in highly metastatic melanoma, renal cell carcinoma, and prostate cancer models [156]. By using antibody phage-display, Rinderknecht et al. developed a human monoclonal antibody fragment (single-chain fragment variable, scFv) that specifically binds to VEGF-C with high affinity and inhibits VEGF-C/VEGFR3 signaling [157]. A new receptor-immunoglobulin (Ig) fusion protein, VEGFR3-Ig, that could simultaneously bind to VEGF-A and VEGF-C has been reported recently. VEGFR3-Ig has been shown to block tumor-associated angiogenesis, lymphangiogenesis, and metastasis in a highly metastatic HCC model [158]. In addition, Yeh et al. showed that VEGF-C/VEGFR3-mediated KRAS/YAP1/Slug pathway could be suppressed by treatment with anti-VEGFR3 peptide, leading to the inhibition of migration, invasion, and stemness of skin cancer cells [92] (Figure 3).
Conclusions
The VEGF-C/VEGFR3 axis has been implicated in cancer progression by directly affecting tumor cells or modulating lymphangiogenesis and immune response. High expression of VEGF-C/VEGFR3 has been demonstrated to be correlated with increased lymphatic metastasis and poor prognosis in numerous types of cancers (Table 1). Over the last two decades, tumor-associated lymphangiogenesis has been considered a potential target for treating metastatic diseases. Therefore, the development of drugs targeting the VEGF-C/VEGFR3 signaling has received much attention, which could be beneficial for patients with VEGF-C/VEGFR3-driven cancers. Multiple VEGFR TKIs have been tested in clinical/preclinical studies, and several VEGFR TKIs have been approved for clinical use (Table 2). However, these agents might inhibit multiple kinases in addition to VEGFR3, and the "off-target" effects might increase adverse effects. Hence, development of more selective and specific anti-VEGFR3 TKIs is required. In addition, the VEGF-C/VEGFR3 signaling has been shown to be involved in regulating immune tolerance and suppression [93,120,121]. Targeting the VEGF-C/VEGFR3 axis could enhance anti-tumor immune responses. Currently, the number of studies focused on VEGF-C/VEGFR3-mediated immunobiology in LECs and immune cells is growing. The results from these studies will increase our understanding of how the VEGF-C/VEGFR3 axis affects immunity and will provide the rationale for the development of new immunotherapeutic strategies for cancer therapy.
Conflicts of Interest:
The authors declare no conflicts of interest.
De novo assembly of a new Olea europaea genome accession using nanopore sequencing
Olive (Olea europaea L.) is internationally renowned for its high-end product, extra virgin olive oil. An incomplete genome of O. europaea was previously obtained using shotgun sequencing in 2016. To further explore the genetic and breeding utilization of olive, an updated draft genome of olive was obtained using Oxford Nanopore third-generation sequencing and Hi-C technology. Seven different assembly strategies were used to assemble the final genome of 1.30 Gb, with contig and scaffold N50 sizes of 4.67 Mb and 42.60 Mb, respectively. This greatly increased the quality of the olive genome. We assembled 1.1 Gb of sequences of the total olive genome to 23 pseudochromosomes by Hi-C, and 53,518 protein-coding genes were predicted in the current assembly. Comparative genomics analyses, including gene family expansion and contraction, whole-genome replication, phylogenetic analysis, and positive selection, were performed. Based on the obtained high-quality olive genome, a total of nine gene families with 202 genes were identified in the oleuropein biosynthesis pathway, which is twice the number of genes identified from the previous data. This new accession of the olive genome is of sufficient quality for genome-wide studies on gene function in olive and has provided a foundation for the molecular breeding of olive species.
Introduction
Olive (Olea europaea L.), belonging to the family Oleaceae, is one of the most important and widely distributed fruit trees in the Mediterranean Basin. It has a history of more than 4000 years and has been planted in more than 40 countries. China began importing olive seeds and seedlings from Albania in the 1960s and now cultivates olive trees in 14 provinces, mainly Gansu, Sichuan, and Yunnan. Olive oil is a world-famous high-grade cooking oil that is rich in unsaturated fatty acids and distinct micronutrients, such as oleuropein, squalene, and hydroxytyrosol 1. Olive is also well known for its biological functions, including its anti-inflammatory, antiviral, cardiotonic, anti-carcinogenic, antioxidant, and antihypertensive properties 2,3.
Olive has economic, ecological, cultural, and scientific value and is widely appreciated 4. The selection of olive varieties has always been based on traditional breeding practices, thus rendering molecular breeding a challenge. An important contributor to this challenge has been the lack of a high-quality genome. Thus far, the genomes of two olive varieties (Olea europaea L. subsp. europaea var. europaea cv. 'Farga' and Olea europaea L. sylvestris) have been sequenced 5,6. The two versions of the genome are mainly based on the next-generation sequencing method, which generated genomes of 1.31 Gb and 1.48 Gb with contig N50 values of 52.35 kb and 25.49 kb, respectively. A large number of scaffolds were assembled from the contigs, but none of them were completely anchored to the chromosomes. It is relatively difficult to obtain high-quality plant genomes, as plant genomes are generally large, with high heterozygosity and high numbers of repetitive sequences 7. Olive has high heterozygosity, high numbers of repetitive sequences, and a large genome, which has hindered the production of a high-quality reference assembly in the two previous versions of the olive genome. Technological improvements have increased the yield and length of genome sequencing, particularly third-generation sequencing technologies, such as PacBio third-generation sequencing and Oxford Nanopore third-generation sequencing (ONT) technology 8,9. In addition, a genome-wide chromosome conformation capture technique, Hi-C, is often used to further assemble chromosome-scale genomes based on a sequenced draft genome 10.
In this study, an olive cultivar (Olea europaea L. subsp. europaea cv. 'Arbequina') that is suitable for mechanized harvesting and dense planting was sequenced using ONT sequencing (Fig. 1). Hi-C technology was used to generate a chromosome-scale assembly for the high-quality olive genome. We compared the results of different genome assembly strategies, namely, Canu, Wtdgb, and SMARTdenovo, with the single assembly; merged the assembled results in pairs; and merged the assembled results from the three methods. We discovered that SMARTdenovo performed best when used as a single assembly strategy, while the strategy of merging the results of the three methods produced the longest contig N50 of 4.67 Mb. Using the obtained high-quality olive genome, we performed gene family expansion and contraction analysis, whole-genome replication analysis, phylogenetic analysis, positive selection analysis, and comparative genomics analysis.
Preliminary characterization of the olive genome
Due to the wide variety of olives, it was necessary to obtain information on the genome size, heterozygosity, and repeat content of this new accession of the olive genome. Three 350 bp libraries were constructed using genomic DNA from leaf samples, and 96.48 Gb of high-quality data were generated on the Illumina NovaSeq 6000 sequencing platform and filtered. The total sequencing depth was ~75×, and the sequencing data Q30 ratio was above 91.10% (Supplementary Table S1). Flow cytometry and k-mer analysis (Fig. 2b) of this dataset indicated that the olive genome has a high level of heterozygosity (1.09%) with a repeat sequence content of 56.18% and a genome size of ~1.3 Gb, which is slightly smaller than that of the previous olive genome (Olea europaea subsp. europaea; 1.38 Gb) 6 and the oleaster genome (Olea europaea var. sylvestris; 1.46 Gb) 5 .
ONT sequence, genome assembly, and annotation
High-quality, high-molecular-weight genomic DNA was extracted and sequenced following ONT standard protocols 11 . A total of 9,009,932 raw reads with 146,825,799,392 bases were obtained. After filtering out adapters, low-quality reads, and short fragments (length < 2000 bp), we obtained 4,708,203 clean reads totalling 129 Gb of sequence (representing ~100× coverage; Supplementary Table S2). Notably, the average read length was 27,311 bp, the read N50 was 30,890 bp, and the longest read approached 1 Mb (962,647 bp). The clean read length distributions of all reads are shown in Supplementary Table S3. Contaminated DNA not only reduces the amount of valid data but also affects the accuracy of subsequent analyses and results in large deviations in genomic characteristics, such as the genome size, heterozygosity, repeat sequence ratio, and GC content, which ultimately affect the genome assembly. If a certain proportion (1% or more) of reads match a distantly related species, the data may be contaminated. To determine whether the sequenced data were contaminated, we randomly selected 2000 reads from the sequencing data and performed a BLAST alignment against the nucleotide (Nt) database 12 , which showed that most of the reads were aligned to O. europaea (oleaster), Hesperelaea palmeri, Vitis vinifera, Sesamum indicum, and other plant species (Supplementary Table S4), verifying that the data were not contaminated. If the extranuclear DNA content in a sequencing library is too high, it increases the difficulty of genome assembly and might even cause errors. We therefore performed a SOAP alignment of the three 350 bp Illumina libraries against the chloroplast sequence (NCBI Accession No. NC_015623) of olive (Supplementary Table S5) 13 . Approximately 2-3% of reads were mapped to the chloroplast, and these sequences were removed before assembly. The results also showed that the enriched DNA was mainly olive nuclear DNA.
The sequenced ONT clean data were then assembled into the final genome using seven different assembly strategies, namely, Canu, Wtdbg, SMARTdenovo, Canu+Wtdbg, SMARTdenovo+Canu, Wtdbg+SMARTdenovo, and Wtdbg+SMARTdenovo+Canu, according to the standard protocols for each strategy (Table 1). Canu was used to precorrect the original reads 14 . A total of 1290 contigs were ultimately obtained by the combined Wtdbg+SMARTdenovo+Canu assembly strategy, with a contig N50 of 4.67 Mb and a total contig length of 1.30 Gb, and the largest contig in this assembly was 25.18 Mb. This is a great improvement over previous studies (with a contig N50 of 52.35 kb for O. europaea subsp. europaea and a contig N50 of 25.49 kb for oleaster) 5,6 (Fig. 2c).
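For orientation, the contiguity metric used to rank the seven strategies can be computed as in the minimal sketch below, which derives the N50, total length, and largest contig from a list of contig lengths; the toy lengths are illustrative and are not the study's actual contigs.

```python
def n50(lengths):
    """Return the N50: the length L such that contigs of length >= L
    together cover at least half of the total assembly size."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

# Toy contig lengths in bp (illustrative only; the real assembly has ~1290 contigs).
contigs = [25_180_000, 4_670_000, 3_100_000, 900_000, 450_000]
print("total length (bp):", sum(contigs))
print("largest contig (bp):", max(contigs))
print("contig N50 (bp):", n50(contigs))
```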
Three strategies were next used to evaluate the completeness of the assembled genome. First, 99.33% (656,154,460 of 660,556,220) of Illumina DNA-Seq reads were mapped to the assembled genome, and the properly mapped read rate (paired-end reads mapped to the genome with a distance consistent with the length distribution of the sequenced fragments) was 86.81%. Second, BUSCO was used to search for conserved plant genes (1614 conserved plant genes in the database) in the assembled olive genome, and 1521 genes, accounting for 94.24% of the total genes in the database, were identified (Supplementary Table S6). This ratio is similar to that for cv. 'Farga' (1501 genes, accounting for 92.99% of the total genes in the database) but much higher than that for var. sylvestris (1380 genes, accounting for 85.50% of the total genes in the database). BUSCO analysis of the gene sets was also conducted for these three versions of the olive genome, which likewise showed a high ratio of 92.87% in cv. 'Arbequina', much higher than that in var. sylvestris (85.25%) 5,6 (Supplementary Table S6). Third, 438 conserved genes (95.63%) out of the 458 eukaryotic conserved sequences were identified using CEGMA (Supplementary Table S7). These high mapping rates indicate the high completeness of the assembled olive genome 15 .
Hi-C scaffolding
A total of 232.97 Gb of clean data were obtained from Hi-C sequencing, covering the O. europaea genome at nearly 180×. After statistical evaluation and error correction of the genome sequences with the Hi-C data, a total of 962 scaffolds, with a scaffold N50 of 42.60 Mb (Fig. 3a and Supplementary Table S8), were obtained. The derived scaffolds were then assembled into 23 chromosomes using the LACHESIS analysis tools 16 . To assess the results of the Hi-C assembly, the 23 chromosomes were cut into 100 kb bins of equal length, and the number of Hi-C read pairs linking any two bins was used as a signal of the strength of the interaction between the two bins. Twenty-three chromosomes could be clearly distinguished, and the intensity of the interaction at the diagonal positions was higher than that at the off-diagonal positions, indicating that the interaction between adjacent sequences within the chromosomes of the Hi-C assembly was strong and confirming that the assembled genome was of high quality (Fig. 3b). In total, 1.1 Gb of sequence was mapped onto the chromosomes. The sequences whose order and direction could be determined totalled 976.51 Mb, accounting for 95.03% of the total length of the mapped sequence (Supplementary Table S9).
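The bin-level interaction signal described above can be sketched as follows: uniquely mapped Hi-C read pairs are assigned to 100-kb bins and contact counts are accumulated per bin pair. The input tuple format and function names are assumptions for illustration; the actual pipeline relied on LACHESIS and related tools.

```python
from collections import defaultdict

BIN_SIZE = 100_000  # 100-kb bins, as used to assess the Hi-C assembly

def contact_counts(read_pairs):
    """Count Hi-C read pairs linking pairs of 100-kb bins.

    `read_pairs` is assumed to be an iterable of (chrom1, pos1, chrom2, pos2)
    tuples for uniquely mapped pairs. Returns a dict mapping an unordered
    bin pair ((chrom, bin_index), (chrom, bin_index)) to its contact count.
    """
    counts = defaultdict(int)
    for chrom1, pos1, chrom2, pos2 in read_pairs:
        b1 = (chrom1, pos1 // BIN_SIZE)
        b2 = (chrom2, pos2 // BIN_SIZE)
        key = tuple(sorted((b1, b2)))  # order-independent bin pair
        counts[key] += 1
    return counts

# Toy read pairs; in the real data, near-diagonal (adjacent-bin) counts dominate.
pairs = [("chr01", 120_000, "chr01", 180_000),
         ("chr01", 130_000, "chr01", 210_000),
         ("chr01", 140_000, "chr02", 4_400_000)]
for bin_pair, n in contact_counts(pairs).items():
    print(bin_pair, n)
```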
Repeat sequences were predicted using the LTR_FINDER and RepeatScout software packages. A total of 1,815,585 sequences with a total length of 743,103,344 bp, accounting for 67.37% of the olive genome, were predicted (Fig. 3 and Supplementary Table S10). Genes were predicted using de novo, homologous species, and RNA-seq unigene prediction strategies. A total of 53,518 protein-coding genes were predicted in the current assembly, a gene number similar to those in previous studies 5,6 . A genome-wide comparison was performed against the Rfam database. A total of 118 microRNAs and 192 rRNAs were identified using Blastn, and 674 tRNAs were identified using tRNAscan-SE 17 . Next, the predicted genes were annotated in the Gene Ontology (GO), Kyoto Encyclopedia of Genes and Genomes (KEGG), EuKaryotic Orthologous Groups (KOG), TrEMBL, and Nonredundant (Nr) databases. A total of 50,969 genes were annotated, accounting for 95.24% of the total predicted genes (Supplementary Table S11).
Comparative genomics analysis
Comparative genomics analysis of O. europaea was performed with the genome sequences of 11 plant species (Helianthus annuus, Glycine max, Arachis hypogaea, Ricinus communis, Arabidopsis thaliana, Populus trichocarpa, Sesamum indicum, Oryza sativa, Citrus sinensis, Amborella trichopoda, and Olea europaea var. sylvestris). A total of 51,805 gene families were obtained; 2487 gene families were common among all 12 species, while 806 families were specific to olive (Fig. 4a and Supplementary Table S12). These olive-specific gene families were then annotated to GO terms and KEGG pathways (Fig. S1). The GO annotations were mainly related to metabolic process, cellular process, and response to stimulus in the "biological process" term; cell part, cell, and organelle in the "cellular component" term; and catalytic activity, binding, and transporter activity in the "molecular function" term (Fig. S1a). The KEGG pathway analysis showed that "carbon metabolism" and "protein processing in endoplasmic reticulum" were the most represented pathways (Fig. S1b). Gene family copy number analysis showed that olive gene families range from one to more than four copies, which is similar to sunflower and soybean, and olive has a large proportion of genes in families with four or more members (Fig. 4b).
Further analysis of gene family expansion and contraction revealed that 252 gene families expanded and 52 gene families contracted in the olive genome (Fig. 4c). These 252 expanded gene families were then annotated to GO terms and KEGG pathways. The GO annotations were mainly related to response to stimulus, cellular process, and metabolic process in the "biological process" term; cell organelle, cell, and cell part in the "cellular component" term; and transporter activity, binding, and catalytic activity in the "molecular function" term (Fig. S2). The KEGG pathway analysis showed that "oxidative phosphorylation", "photosynthesis", and "plant-pathogen interaction" demonstrated the largest gene family expansion (Fig. S3).
Phylogenetic analysis was conducted using the single-copy protein sequences of the 12 species. As expected, oleaster and olive had the closest genetic relationship and diverged from their common ancestor at ~5-20 Mya. Synteny analysis was carried out on olive (O. europaea) and oleaster (O. europaea var. sylvestris), and the variations in genome structure and homologous gene pairs were analyzed. There was a high degree of collinearity between the olive and oleaster genes (Fig. 5a); a total of 52,991 olive genes were found to be in synteny with oleaster. The synteny between chromosomes was partially dislocated (Fig. 5b), which may be a result of only ~50% of the oleaster sequences being anchored to chromosomes.
Positive selection analysis identified 34 genes containing significantly positively selected sites. GO analysis showed that these genes were mainly in the "obsolete ATP catabolic process" category of the biological process term, "plasmodesma" in the cellular component category, and "organic cyclic compound binding" in the molecular function category. The KEGG pathway analysis revealed that these positively selected genes were mostly involved in pyruvate metabolism, nucleotide excision repair, and homologous recombination pathways. Whole-genome duplication (WGD) analysis was carried out using fourfold synonymous (degenerative) third-codon transversion (4DTv) values and distributions of synonymous substitutions per synonymous site (Ks). One main peak was observed in the O. europaea genome based on the distribution of 4DTv values (peak at 0.09) and Ks values (peak at 0.25), indicating that O. europaea had experienced a WGD event. The genomes of C. sinensis, H. annuus, and S. indicum were used to calculate 4DTv and Ks values from synteny blocks with O. europaea, which suggested that O. europaea experienced large-scale gene duplication more recently than these three closely related species (Fig. 6).
Identification of oleuropein and fatty acid biosynthesis genes in olive
Oleuropein and fatty acid biosynthesis pathway genes were identified based on their homology with known genes from transcriptome data 3 . A total of nine gene families with 202 genes in the oleuropein biosynthesis pathway and 14 gene families with 128 genes in the fatty acid biosynthesis pathway were identified, which is more than in the previous transcriptome data (Fig. 7). In the oleuropein biosynthesis pathway, geranyl diphosphate is first catalytically converted to geraniol by geraniol synthase (GES); 29 GES genes were identified, many more than in the previous study (four GES genes). Thirty geraniol 8-hydroxylase (G8H) genes are involved in the hydroxylation of geraniol to 8-hydroxygeraniol, which is twice the number of genes identified from the previous transcriptome data. 8-Hydroxygeraniol is then oxidized by 8-hydroxygeraniol oxidoreductase (8-HGO) to form 8-oxogeranial, and nine 8-HGO genes are involved in this step, fewer than the 13 genes previously identified. Iridoid synthase (ISY) forms iridodial from 8-oxogeranial, and two ISY genes were identified in this step, similar to the number identified in the earlier transcriptome study. Iridotrial and 7-deoxyloganetic acid are then synthesized; the structural gene involved in these reactions is iridoid oxidase (IO), and 23 IO genes were identified in the olive genome, many more than in the transcriptome data. An O-glucosyl group is then added to 7-deoxyloganetic acid to form 7-deoxyloganic acid via the catalysis of 7-deoxyloganetic acid-O-glucosyl transferase (7-DLGT), and 21 7-DLGT genes were identified. 7-Deoxyloganic acid hydroxylase (7-DLH) then forms loganic acid by the hydroxylation of 7-deoxyloganic acid, and 41 7-DLH genes were identified in this step, 10 more than in the previous transcriptome data. A methyl group is added to loganic acid to form loganin by loganic acid methyltransferase (LAMT); eight LAMT genes were identified in the olive genome, similar to the previous study. Finally, secologanin is synthesized by secologanin synthase (SLS), and 39 SLS genes were obtained, many more than the four SLS genes in the earlier transcriptome data (Fig. 7).
The hydroxytyrosol biosynthesis pathway is initiated from tyrosine, which is catalyzed by polyphenol oxidase (PPO), primary amine (copper-containing) oxidase (CuAO), and tyrosine decarboxylase (TDC) to form dihydroxyphenylalanine (DOPA), p-hydroxyphenylacetic acid (p-HPAA), and tyramine, respectively. Sixteen PPO, eight CuAO, and five TDC genes were identified in the olive genome, with gene numbers similar to those in the previous study. DOPA is then catalyzed by DOPA decarboxylase (DDC) to produce dopamine, and five DDC genes were identified. Dopamine and tyramine are then oxidized by CuAO to form 3,4-dihydroxyphthalic acid (3,4-DHPA) and 4-hydroxyphenylacetic acid (4-HPA), respectively. 3,4-DHPA finally generates hydroxytyrosol through the catalysis of 10 alcohol dehydrogenases (ALDHs). In parallel, 4-HPA is catalyzed by phenylacetaldehyde reductase (PAR) to produce tyrosol; a total of nine PAR genes were identified in the olive genome. Finally, the formation of oleuropein from secologanin and tyrosol is catalyzed by other enzymes (Fig. 7).
The above results are based on the analysis of the whole-genome data, but some genes annotated in the genome may not be expressed in plant tissues. A combined analysis of the transcriptome and proteome of tree peony seeds on different days after pollination was previously conducted to better understand the transcriptional and translational regulation of seed development and oil biosynthesis; it indicated significant differences in the number and abundance of differentially expressed genes and proteins but a high level of consistency in expression patterns and metabolic pathways 18 . To study the expression levels of the above genes in different tissues of O. europaea, the second-generation RNA-seq transcriptome data were reanalyzed against the new genome. Samples of fruits (F), fully expanded leaves from the shoots (NL), and fully expanded leaves from the base of the stem (OL) were examined for gene expression levels (Fig. 7 and Supplementary Table S13) 19 . Fatty acid biosynthesis analysis detected 95 of the 128 genes expressed in the F, NL, and OL tissues (Supplementary Table S13). As in other plants, fatty acid biosynthesis in olive mainly occurs in the fruit tissue; the transcriptome data also showed that the expression levels of most fatty acid synthesis-related genes in the fruits of cv. 'Arbequina' were much higher than those in the leaves. A total of 149 genes were expressed in the above tissues, accounting for about three-quarters of the total number of genes (202 genes in total were identified in the oleuropein biosynthesis pathway). Heatmap analysis showed that each gene family has its own expression characteristics, implying functional differences among the family members. Using the 8-HGO gene family as an example, in terms of tissue differences, the three tissues were clustered into different categories; in terms of expression level, all genes could be divided into two categories, and these two categories exhibited similar expression trends across the different tissues.

Fig. 7 Identification and expression levels of oleuropein and hydroxytyrosol biosynthesis genes in olive. Blue text gives the gene abbreviation, red numbers in brackets represent the number of detected genes in the gene family, and the green number in brackets is the number of total transcripts in the family. Heatmaps show the expression levels of differentially expressed genes: yellow to red represents higher log 10 (FPKM) values; green to blue represents lower log 10 (FPKM) values. Samples are fruits (F1-F3 indicate the three biological replications), fully expanded leaves from shoots (NL1-NL3 indicate the three biological replications), and old leaves from the base of the stem (OL1-OL3 indicate the three biological replications). GES geraniol synthase, G8H geraniol 8-hydroxylase, 8-HGO 8-hydroxygeraniol oxidoreductase, ISY iridoid synthase, IO iridoid oxidase, 7-DLGT 7-deoxyloganetic acid-O-glucosyl transferase, 7-DLH 7-deoxyloganic acid hydroxylase, LAMT loganic acid methyltransferase, SLS secologanin synthase, PPO polyphenol oxidase, DDC DOPA decarboxylase, CuAO primary amine (copper-containing) oxidase, ALDH alcohol dehydrogenase, TDC tyrosine decarboxylase, PAR phenylacetaldehyde reductase, p-HPAA p-hydroxyphenylpyruvic acid, p-HPAA p-hydroxyphenylacetic acid, DOPA dihydroxyphenylalanine, 3,4-DHPA 3,4-dihydroxyphthalic acid, 4-HPA 4-hydroxyphenylacetic acid.
Discussion
Angiosperms, the flowering plants, provide essential resources for human life, such as food, energy, oxygen, and materials. To date, a large number of angiosperm genomes have been sequenced 20 . Previous studies used the shotgun sequencing method to sequence and assemble the genomes of two oil olive varieties (cultivated olive and oleaster) 5,6 . However, olive has a large genome with high heterozygosity and high repeat content, so assemblies based on next-generation sequencing alone are of limited quality. The previous sequencing studies obtained a contig N50 of 52 kb and a scaffold N50 of 228 kb for cultivated olive, and a contig N50 of 25 kb and a scaffold N50 of 443 kb for oleaster (Fig. 2). Genomes of such quality are insufficient for studying genetics and gene function in olive. To improve the quality of the olive genome, we used the Oxford Nanopore sequencing method to sequence and assemble the olive genome. A contig N50 of 4.67 Mb, a total contig length of 1.30 Gb, and a largest contig of 25.18 Mb were obtained (Fig. 2). These sequences were then assembled into 23 chromosomes by Hi-C (Fig. 3).
In addition to third-generation sequencing technology, different assembly strategies have been used to obtain high-quality genomes. The best approach for assembling genomes sequenced by Oxford Nanopore third-generation sequencing technology has not been established 7,21 . The Canu, Wtdbg, and SMARTdenovo software packages have all been considered suitable for assembling Oxford Nanopore third-generation sequencing data 14,22 ; however, it is unclear which software package is optimal for assembly. Therefore, we first assembled the olive genome with each assembler separately, then merged the results in pairs, and ultimately merged the results of the three strategies. On the basis of our assembly results, SMARTdenovo was optimal when used alone, while Wtdbg+SMARTdenovo performed best among the pairwise merges. Merging the assembly results from the three strategies produced the longest contig N50 of 4.67 Mb and the fewest fragments, which proved to be the best strategy for Oxford Nanopore third-generation sequencing assembly in olive (Table 1). The acquisition of this high-quality reference genome provides a good foundation for studies on gene function and molecular breeding in olive.
Oleuropein, the most abundant olive secoiridoid, is a desirable component of high-quality olive oil and strongly influences flavor due to its bitter and pungent sensory notes 23 . Given the particularity and importance of oleuropein, it is important to identify genes related to the oleuropein biosynthesis pathway. Thus far, the identification of oleuropein biosynthesis genes has been limited to transcriptional data, which are incomplete and not conducive to future research into the biological functions of related genes 3,24 . This study systematically identified the genes in the oleuropein biosynthesis pathway based on the high-quality oil olive genome. Compared with previous studies, the present work identified more genes that participate in the regulation of oleuropein synthesis (Fig. 6).
Very little is known about the pathway of secoiridoid synthesis at this stage, and what is known is limited to a few species, such as Catharanthus roseus 25 . As a consequence, the structural genes involved in this biosynthetic pathway have not been completely determined. The pathway from geranyl diphosphate to secologanin has been elucidated, but the subsequent reactions remain unclear 26 . Based on this, we illustrated a biosynthetic pathway map containing the structural genes necessary for oleuropein synthesis (Fig. 6). A total of 202 genes were identified in the oleuropein biosynthesis pathway, which is double the number of genes identified from the previous transcriptome data. This confirmed that the obtained olive genome is nearly complete, facilitating future research into the olive genome.
The economic value, cultural value, and academic value of olive are widely acknowledged worldwide. In this study, a chromosome-level, high-quality olive genome was obtained using Oxford Nanopore third-generation sequencing and Hi-C technology, which produced large improvements over the previous version of the genome. The genome is of sufficient quality for genome-wide studies on the functions of olive genes and has provided a foundation for the molecular breeding of olive species.
Genome survey
The physical fragmentation method (ultrasonic vibration) was used to break the extracted genomic DNA into fragments of ~350 bp, from which three small-fragment sequencing libraries were constructed through the steps of end repair, A-tailing, adapter ligation, target fragment selection, and PCR. The libraries were then sequenced using the NovaSeq 6000 system. To determine whether the extracted sample DNA was contaminated, 10,000 single-end reads were randomly selected from the three 350 bp libraries obtained by sequencing and compared against the Nt database with BLAST 12 . The three 350 bp libraries obtained by Illumina sequencing were also compared with the chloroplast sequence (NC_015623.1, 155,896 bp) of oleaster, a relative of olive, to determine whether there was non-nuclear DNA contamination 13 . The library data were used to construct a k-mer distribution map with k = 21 and to assess the genome size, ratio of repeated sequences, and heterozygosity. The k-mer analysis was carried out using "k-mer freq stat" software (developed by Biomarker Technologies Corporation, Beijing, China). Genome size (G) was estimated based on the following formula: G = k-mer number / average k-mer depth, where k-mer number = total k-mers − abnormal k-mers (k-mers with abnormally low or high frequency).
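The formula above can be illustrated with a minimal sketch, assuming the k-mer counter's histogram has already been loaded as a mapping from depth to the number of distinct k-mers; the depth cutoffs and the use of the histogram peak as the "average k-mer depth" are simplifying assumptions for illustration.

```python
def estimate_genome_size(hist, min_depth=3, max_depth=10_000):
    """Estimate genome size as G = k-mer number / average k-mer depth.

    `hist` maps k-mer depth to the number of distinct 21-mers observed at
    that depth. K-mers with abnormally low depth (mostly sequencing errors)
    or abnormally high depth (e.g., organellar DNA, collapsed repeats) are
    excluded, and the depth of the main histogram peak is used as an
    approximation of the average k-mer depth.
    """
    usable = {d: n for d, n in hist.items() if min_depth <= d <= max_depth}
    kmer_number = sum(d * n for d, n in usable.items())   # total retained k-mers
    peak_depth = max(usable, key=usable.get)              # main (homozygous) peak
    return kmer_number / peak_depth

# Toy histogram (depth -> distinct k-mers); real histograms come from a k-mer counter.
hist = {1: 5_000_000, 2: 1_200_000, 74: 800_000, 75: 900_000, 76: 850_000}
print(f"estimated genome size: {estimate_genome_size(hist) / 1e6:.1f} Mb")
```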
Genome sequencing and de novo assembly
Leaf samples of O. europaea cv. 'Arbequina' were collected in the olive grove of the Research Institute of Forestry, Chinese Academy of Forestry, in Mianning, Sichuan Province. The genome was sequenced using the Oxford Nanopore third-generation sequencing platform. Clean data were corrected by Canu software, following which Wtdbg (https://github.com/ruanjue/wtdbg), SMARTdenovo (https://github.com/ruanjue/smartdenovo) 14,27,28 , and Canu were used for genome assembly based on the corrected data, and then the genomes assembled by the three software packages were merged by Quickmerge software (https://github.com/mahulchak/quickmerge). Racon software was used to perform three rounds of correction on the integrated genome, and then the next-generation DNA-seq data were used to perform three rounds of correction using Pilon software, ultimately obtaining the genome sequence 29 . CEGMA and BUSCO were used to assess the completeness of the genome assembly 15,30 .
The repeat sequence database of the olive genome was constructed using two software programs, namely, LTR_FINDER and RepeatScout 31,32 . PASTEClassifier was used to categorize the database, which was then combined with the Repbase database as the final repeat sequence database 33,34 . RepeatMasker was used to predict the repeat sequences of the olive genome based on the constructed repeat sequence database 35 . Genscan, Augustus, GlimmerHMM, GeneID, and SNAP software were used to make de novo predictions of the gene structures of the genome [36][37][38][39] ; GeMoMa was used to make predictions based on homologous species 40 ; and then EVM software was used to integrate the prediction results 41 . Hisat and Stringtie software 42,43 were used for transcript assembly (accession numbers: SRR10743047, SRR10743049, SRR10743048, SRR10743044, SRR10743045, SRR10743046, SRR10743041, SRR10743042, and SRR10743043); TransDecoder (http://transdecoder.github.io) and GeneMarkS-T software were used for gene prediction 44 .
Hi-C library construction and chromosome assembly
The Hi-C library was constructed and sequenced using the in situ Hi-C protocol, which mainly includes cell crosslinking, endonuclease digestion, biotinylation, cyclization, DNA purification, capture, and sequencing 45,46 . Fresh leaf tissue was cross-linked with formaldehyde, and the cross-linked DNA was then digested with the HindIII restriction enzyme. The sticky ends of these fragments were end-repaired, marked with biotin, and then blunt-end proximity-ligated to generate circular molecules. Subsequently, these circular DNA molecules were fragmented into 300-500 bp fragments, and the DNA was enriched by biotin pulldown and processed for paired-end sequencing (150 bp paired-end). After library construction had been completed, the library concentration and insert size were determined using a Qubit 2.0 fluorimeter and an Agilent 2100 Bioanalyzer, respectively, and the effective concentration of the library was accurately quantified using quantitative PCR to ensure library quality. The Illumina NovaSeq 6000 platform was then used for high-throughput sequencing with a read length of PE150. The obtained Hi-C data were used for chromosome-level assembly. The draft contigs were divided into fragments with a length of 50 kb and clustered by LACHESIS software using valid interaction read pairs 16 . We assessed the quality of each fragment with HiC-Pro (v2.8.1) 35 and removed duplicates 47 , and the Hi-C data were then mapped to the segments using BWA (v0.7.10-r789) software 48 . The uniquely mapped data were retained for scaffold assembly using LACHESIS with the parameters CLUSTER_N = 10, CLUSTER_MIN_RE_SITES = 48, ORDER_MIN_N_RES_IN_TRUNK = 14, CLUSTER_MAX_LINK_DENSITY = 2, CLUSTER_NONINFORMATIVE_RATIO = 2, and ORDER_MIN_N_RES_IN_SHREDS = 15.
Gene cluster analysis and phylogenetic tree construction
Orthofinder software was used to classify the protein sequences of the 12 species into families (the alignment method used was diamond, and the alignment e-value was 0.001), and the PANTHER database was used to annotate the obtained gene families 49,50 . GO and KEGG enrichment analyses were then performed for the olive-specific gene families 51 . MAFFT was used to align each single-copy gene family sequence (parameter: localpair -maxiterate 1000), and then Gblocks (parameter: b5 = h) was used to remove regions with poor sequence alignment or large differences. All the gene family sequences were concatenated end-to-end to obtain a supergene 52,53 . ModelFinder, the model selection tool built into IQ-TREE, was used for model detection, and the best model obtained was JTT + F + I + G4. This best model was then used to construct an evolutionary tree using the maximum likelihood (ML) method, with the number of bootstrap replicates set to 1000 54 . MCMCTREE, a program included in the PAML package, was used to calculate divergence times 55 .
Gene family expansion and contraction analysis

CAFE (Computational Analysis of gene Family Evolution) software was used to analyze divergence times and gene family expansion and contraction 56 . The results of the evolutionary tree and gene family clustering were used to estimate the number of gene families of the ancestors in each phylogenetic tree branch, thereby predicting gene family contraction and expansion. The criterion for defining significant expansion or contraction was a P-value < 0.05.
Positive selection analysis
The CodeML module in PAML was used for positive selection analysis 55 . Single-copy genes of C. sinensis, H. annuus, O. europaea, O. europaea var. sylvestris, and S. indicum were obtained, and the protein sequence of each gene family was aligned using MAFFT (parameter: localpair -maxiterate 1000). The "chi2" program in the PAML package was used to perform likelihood ratio tests comparing Model A (assuming that the foreground branch ω was under positive selection, i.e., ω > 1) with the null model (in which the ω value of any site was not allowed to be > 1), with significance assessed at P < 0.01. The Bayes empirical Bayes (BEB) method was used to identify positively selected sites (a posterior probability greater than 0.95 is usually considered to indicate a significantly positively selected site), and the genes under significant positive selection were ultimately obtained.
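The likelihood ratio test carried out by PAML's chi2 program compares the log-likelihoods of the null model and Model A. The sketch below shows the underlying calculation, assuming one degree of freedom and using illustrative log-likelihood values; the actual branch-site test in PAML additionally handles boundary conditions of the models, which this simplified sketch does not.

```python
from scipy.stats import chi2

def likelihood_ratio_test(lnL_null, lnL_alt, df=1):
    """Return the LRT statistic 2*(lnL_alt - lnL_null) and its chi-squared p-value."""
    stat = 2.0 * (lnL_alt - lnL_null)
    return stat, chi2.sf(stat, df)

# Illustrative log-likelihoods, not values from the study.
stat, p = likelihood_ratio_test(lnL_null=-12345.6, lnL_alt=-12338.9)
print(f"2*dlnL = {stat:.2f}, p = {p:.2e}")
```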
Synteny analysis
Diamond software was used to compare the gene sequences of the two species to determine similar gene pairs (e < 1e − 5, C score > 0.5, where JCVI software was used to filter the C score value) 57 . Next, MCScanX software was used to determine whether similar gene pairs were adjacent on the chromosome, ultimately obtaining all the genes in the synteny block 58 . Samples for RNA-seq discussed in the "Identification of oleuropein and fatty acid biosynthesis genes in olive" section were analyzed according to a previous study 19 .
Comparative Network Analysis of Preterm vs. Full-Term Infant-Mother Interactions
Several studies have reported that interactions of mothers with preterm infants show differential characteristics compared to those of mothers with full-term infants. Interaction of preterm dyads is often reported as less harmonious. However, observations and explanations concerning the underlying mechanisms are inconsistent. In this work 30 preterm and 42 full-term mother-infant dyads were observed at one year of age. Free play interactions were videotaped and coded using a micro-analytic coding system. The video records were coded at one-second resolution and studied by a novel approach using network analysis tools. The advantage of our approach is that it reveals the patterns of behavioral transitions in the interactions. We found that the most frequent behavioral transitions are the same in the two groups. However, we have identified several high- and lower-frequency transitions which occur significantly more often in the preterm or full-term group. Our analysis also suggests that the variability of behavioral transitions is significantly higher in the preterm group. This higher variability mostly results from the diversity of transitions involving non-harmonious behaviors. We have identified a maladaptive pattern in the maternal behavior of the preterm group, involving intrusiveness and disengagement. Application of the approach reported in this paper to longitudinal data could elucidate whether these maladaptive maternal behavioral changes place the infant at risk for later emotional, cognitive and behavioral disturbance.
Introduction
Understanding and predicting human behavior has been a central question in the history of mankind. Recently, interest has turned to quantitative analysis of human activities using mathematical models and network tools, addressing temporal and structural features of human communication [1,2]. To gain new insights into one of the most fundamental parts of human activities, we compare preterm and full-term babies' and mothers' behaviors in dyadic situations. Prematurity is not an illness and does not unconditionally cause a developmental delay; however, preterm babies are at risk of impaired cognitive and social development [3][4][5]. A preterm infant's developmental prospects depend on risk and protective factors. Understanding and predicting the long-term outcome of development have been addressed by applying perinatal risk scales [6] and by analyzing environmental factors such as socio-economic status and the quality of life [7]. Because the explanatory power of these approaches was found to be weak, research focus turned toward caregiver-infant interactions, which have been found to contribute to the developmental outcome through complex transactions between infant characteristics and caregiver behaviors [8][9][10]. A growing amount of evidence suggests that maternal behaviors toward preterm babies may have differential characteristics, which are either adaptive or maladaptive in light of the preterm baby's atypical needs [11].
The premature baby's developmental lag and weaker self-regulation require a higher degree of adaptation from the mother [12].
Failure of adaptation to the baby's atypical needs can put the interaction at risk. Neonatal neurological functions normally developing in intrauterine conditions have to develop outside. This overburdens the under-developed nervous system of the very young baby. Preterm infants are often difficult to interact with: they tend to be less organized, less optimally alert, less responsive to stimulation and provide less clear signals [13] which makes the interaction less pleasant or rewarding for the dyads. For instance, Crnic et al [14] found that mothers of preterm infants smile less often and their infants show less positive affect throughout the first year than full-terms do. The differences were most noticeable at 12 months.
Premature birth may find the parents unprepared for welcoming the baby both in physical and psychological terms. The maternal attitudes are influenced by a host of negative emotions, like disappointment, feeling of guilt, resentment, or anxiety about the baby's survival and potential impairment as well as by the often shocking appearance of the preterm baby, the long separation, and the behavioral manifestations of the immature, stressed nervous system [13]. However, the reported data on the characteristics of the preterm mothers' behaviors are inconsistent. In some studies the mothers of preterm infants were more active and responsive than mothers of full-term infants [14,15], whereas other authors found the opposite: the preterm mothers were less active, less sensitive and responsive, and expressed fewer emotions [10,13,16].
Various reasons may account for the apparent inconsistency, e.g. the degree of immaturity and perinatal complications in the infant, maternal preparedness and support, the infant's age at the observation, and the context of interaction [17,18].
In addition, there are distinct ways of how data are derived from the observed events. The majority of studies on mother-infant interactions used global rating scales [19,20], which may be helpful in detecting certain features of the interaction but do not catch patterns in the sequences of behaviors. Microanalytic (frame by frame) coding systems, in contrast, are suitable for recording bidirectional transactions [21,22]. These systems preserve the chronology of events, and also allow observation of rare events. Microanalytic coding systems have been developed for the analysis of different interactions, including physician-patient, couples, and mother infant interactions [23][24][25].
In this paper we present a comparative study on the early mother-infant relationship. Our novel approach is summarized in Figure 1. This approach involves utilization of network analysis tools, which have recently been applied for quantitative analysis of human activities, e.g. addressing temporal and structural features of human communication [1,2]. Preterm and full-term infants' and mothers' behaviors were observed in dyadic situations and coded micro-analytically. Coded data were analyzed through forming interaction networks and identifying transition patterns between combined infant/mother states in order to capture the key characteristics of preterm and full-term infant-mother interactions. Our network approach reveals the interaction pattern of all behavioral states and can also highlight potential interaction paths. In-depth analysis of a vast observational material by our novel approach provides new insights into human interactions which could not be found by the conventional analysis tools used in psychology.
Design
The data presented and analyzed in this paper are a subset (age of 1 year ± 2 weeks) of the data from a prospective longitudinal quasi-experiment aiming at detecting the determinants of developmental outcome of preterm children. In this study the preterm group is compared to a control group containing full-term mother-infant dyads. The study is a quasi-experiment because it lacks random assignment of subjects to groups [26]. The research protocol was approved by the Research Ethics Committee of the Institute of Psychology of the Hungarian Academy of Sciences. Signed informed consents were obtained from the parents for participating in the study, as well as from the parents on behalf of their children that they also participated in the study.
Subjects
Seventy-two singleton infants and their mothers participated in the study. Thirty of these infants were born preterm, at 28-33 weeks of gestation (mean GA 30.9 weeks, SD 1.5 weeks), with birth weights of 800-1990 grams (mean BW 1437 grams, SD 260 grams). The children possessed no congenital abnormalities or obvious sensory deficiencies, and their perinatal course was free of severe complications. The ages of the preterm infants were corrected according to their expected birthday. Risk scores on the Parmelee Obstetric and Postnatal Complication Scales [27] ranged between 6-17 (mean 10.4, SD 2.9), and they were regarded by the neonatologists as low- to moderate-risk babies. The male/female ratio was 50/50 (none of the perinatal variables were related to gender). Mothers of preterm babies were recruited soon after childbirth in a Neonatal Intensive Care Unit in Budapest (Hungary).
The gestational age range for the preterm infants was chosen with certain considerations in mind. After 28 weeks of gestation, with good perinatal care and if the organism is otherwise healthy, the degree of maturation enables the central nervous system to adapt the vital autonomic processes to the extrauterine conditions without life-threatening difficulties. On the other hand, it is an extremely important period in the development of alertness and state regulation, and in this respect these preterms are expected to be still markedly different from the full-term neonates.
The comparison group of 42 healthy full-term infants (GA ≥ 37 weeks, mean BW 3421 g, SD 374.3 g, range 2650-4350 g, 52% boys, 48% girls) and their mothers were selected from the subjects of the Budapest Parent-Infant Study [28]. The mean age of the mothers was 28.3 years in the preterm group (range: 20-42) and 26.6 years in the comparison group (range: 19-34). The two groups were comparable in demographic variables (living conditions, fathers' education, parents' profession). However, mothers of full-term babies had somewhat higher levels of education, χ²(3, N = 72) = 14.39, p < 0.05.

Figure 1. Design of the experimental approach. Preterm and full-term infants' and mothers' interactions were videotaped in dyadic situations and coded micro-analytically. Coded data were analyzed through formation of complex interaction networks and by identification of transition patterns between combined infant/mother states. The participants shown on the photograph have given written informed consent, as outlined in the PLOS consent form, to publication of their photograph.
Procedure
Mother-infant dyads were observed at the infant's age of 12 months in a play situation. To reduce potential reactivity [29], observations were made at the subjects' home. Observational sessions were recorded by a female researcher using a handheld video camera. Each visit began with a familiarization period lasting about 10 minutes. Subsequently the mother was asked to play with her infant as she ordinarily would, and to disregard the researcher's presence as much as possible. The mean length of the interactions was 415 seconds (~7 min) (SD 118 s).
Behavioral Recordings
The videotaped events were coded separately for mother and infant resulting in two parallel behavioral state streams. Using a mutually exclusive and exhaustive micro-analytic category system, every second of the behavior was coded with a single category within each mother/infant behavioral stream. Hence within each mother/infant behavioral stream the beginning of a new behavioral state necessarily implies the end of the previous behavioral state.
Behavioral Categories
Interactions were discerned in aspects of (1) whether there is a joint activity or not, (2) how a harmonious/disharmonious play interaction is developed and broken up, (3) how infant and mother are related to each other: (i) whose play idea is accepted or who leads the interaction, (ii) how leadership gets accepted or refused by the other. The categories were the following: Infant: 1: plays (plays with a toy of his/her interest); 2: explores (searches for/approaches new toy); 3: obeys (the child passively acts in accordance to the mother's commands, verbal or non-verbal initiative, interference or physical control, without showing either negative or positive emotional reaction); 4: cooperates (the infant follows maternal verbal or physical interactive actions, initiations with an interested and/or positive emotional expression (e.g. smile, laugh, positive vocalization, gesture of excitement); 5: defies (actively opposes the mother's idea/command); 6: neglects (ignores mother or her ideas, does not comply with the mother's command but does not oppose explicitly); 7: passive (is not involved in any activity); 8: other (none of the above categories).
Mother: 10: other (none of the categories below); 11: follows (follows the infant's playing activity, she adapts herself to the infant, they focus on the same thing, mother is involved); 12: enriches (enriches the infant's play with her own ideas, but does not change toy/game, elaborates the infant's play, shows a new aspect of how to use a toy); 13: physically forces (physically forces or prevents the infant from doing something); 14: commands (verbally demands the infant to do something); 15: directs attention (intrusively directs the infant's attention. She insists on her own idea, irrespective of the infant's involvement in doing something else); 16: interrupts (interrupts the infant's play activity with anything else other than directing the infant's attention to another toy, e.g. cleans the nose, adjusts clothes of the infant, etc.); 17: passive (not doing anything and being uninvolved); 18: neglects (not playing with the infant, and actively doing something else); 19: inappropriate (any behavior not satisfying the infant's obvious need, expressing disappointment about the infant's behavior, or expressing developmentally unreachable expectation towards the infant); 20: manipulates toy (not playing but manipulating the toy to promote the infant's activity, e.g. assembling a toy).
Based on previous reports [30,31] we considered the interaction to be the most harmonious when infant was engaged in a play (1) and the mother followed or enriched his activity (11,12). More generally, interaction was considered to be smooth if the mother (11,12) or the infant (3,4) adjusted to the partner's idea. When leadership was not accepted by the other, conflict occurred and interaction was found disharmonious (infant: 5, 6, mother: 13,14,15). Neglecting behavior (6, 18) expressed lack of joint activity and ignorance toward the other person.
Inter-rater reliability was established by coding 14% of the sample by two independent coders. Time-unit kappa k = 0.82 was based on whether the coders agreed with the behavior category within 2 seconds, and computed by GSEQ [32].
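For orientation, plain Cohen's kappa on second-by-second codes can be computed as in the sketch below; note that this is a simplified stand-in, since the reported time-unit kappa tolerated disagreements of up to 2 seconds, which GSEQ handles and this sketch does not, and the codes shown are illustrative.

```python
from sklearn.metrics import cohen_kappa_score

# Toy second-by-second infant behavior codes from two independent coders (illustrative).
coder1 = [1, 1, 1, 2, 2, 4, 4, 1, 1, 3]
coder2 = [1, 1, 2, 2, 2, 4, 4, 1, 1, 3]
print("Cohen's kappa:", round(cohen_kappa_score(coder1, coder2), 3))
```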
Construction of Interaction Networks
In order to get an insight into the dyadic nature of the interaction, i.e. how the behavior of one party affects the actions and reactions of the other party, we applied network analysis tools. A behavioral transition was defined as a change in either the infant's and/or the mother's behavioral state. These transitions were extracted from the coded behavioral streams using custom MatLab (MathWorks, version R2010b) scripts.
Behavioral transitions were visualized as a network using the freely available network visualization software Cytoscape [33]. Each node in the network represents a combination of infant and maternal behaviors (termed ''state'') and the links between the nodes represent transitions between these states (Figure 2). Most of the nodes are connected by links in both directions, but for simplicity the arrows indicating the directions of transitions are not shown on the network figures. The most frequent state (infant plays/mother follows, 1-11) is highly connected in both the preterm and full-term groups; therefore we placed it in the center of the networks in Figure 2. The networks do not contain any time component; they do not preserve information about how transitions (links between nodes) occur in time relative to each other (time sequence). Each transition has been quantified by counting the number of occurrences of the particular transition in a given group and normalizing it by the total number of transitions observed in that group. The obtained value, termed ''transition rate'', can be considered a percentage, as it is the weight of a certain transition in relation to the total number of transitions. In this way, the transition rates are normalized to both the number of infants in each group and the different lengths of the individual recordings. There are 62 links (1729 transitions) in the full-term behavioral network and 69 links (1864 transitions) in the preterm network. We found transition rates to be in the range of 0 to 5% for all transitions. The 'other' states (8 and 10) were omitted from the analysis because they cannot be linked to a specific behavior.
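The transition extraction and normalization described above can be sketched as follows (the published analysis used custom MatLab scripts; the Python below reproduces the described logic under assumed input formats: one integer code per second for each partner, with the 'other' codes 8 and 10 discarded).

```python
from collections import Counter

def transitions(infant_stream, mother_stream):
    """Extract behavioral transitions from two parallel per-second state streams.

    A transition is recorded whenever the combined (infant, mother) state
    changes from one second to the next; combined states containing the
    'other' codes (8, 10) are skipped.
    """
    states = list(zip(infant_stream, mother_stream))
    result = []
    for prev, curr in zip(states, states[1:]):
        if prev == curr:
            continue
        if 8 in prev or 10 in prev or 8 in curr or 10 in curr:
            continue
        result.append((prev, curr))
    return result

def transition_rates(dyads):
    """Pool transitions over all dyads of a group and normalize to percentages.

    `dyads` is a list of (infant_stream, mother_stream) pairs; the returned
    dict maps each transition to its share of all transitions in the group.
    """
    counts = Counter()
    for infant, mother in dyads:
        counts.update(transitions(infant, mother))
    total = sum(counts.values())
    return {t: 100.0 * n / total for t, n in counts.items()}

# Toy dyad: infant plays (1) then explores (2); mother follows (11) / enriches (12).
dyad = ([1, 1, 1, 2, 2], [11, 12, 12, 11, 11])
print(transition_rates([dyad]))
```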
Comparing Full-term and Preterm Behavioral State Transitions
In order to compare the interaction patterns of the two groups, their transition networks (described above) were subtracted from each other. In this way the distinctive transitions become visible. The subtracted interaction network has been generated by subtracting the transition rate of a given preterm transition from that of the same transition in the full-term network, thus obtaining the difference between the two groups. Positive differences above 0.5 and negative differences below −0.5 are visualized in the subtracted network. This threshold has been set to be well above the majority of the links and is justified by the fact that most but not all the differences are statistically significant. Group-distinctive transitions are termed ''distinct transitions''. In addition, the average time spent in each state was subtracted between the two groups and used for the scaling of node sizes. The differences have not been normalized.
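A minimal sketch of this subtraction, operating on the two groups' transition-rate dictionaries as computed in the previous sketch; the ±0.5 percentage-point threshold follows the text, while the function and parameter names are assumptions.

```python
def subtract_networks(fullterm_rates, preterm_rates, threshold=0.5):
    """Full-term minus preterm transition rates, in percentage points.

    Only links whose absolute difference exceeds `threshold` are kept;
    positive values mark full-term-distinctive transitions, negative values
    preterm-distinctive ones.
    """
    links = set(fullterm_rates) | set(preterm_rates)
    diff = {}
    for link in links:
        d = fullterm_rates.get(link, 0.0) - preterm_rates.get(link, 0.0)
        if abs(d) > threshold:
            diff[link] = d
    return diff
```

Applied to the pooled transition rates of each group, the resulting dictionary corresponds to the links drawn in the subtracted network.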
Statistical Significance of the Differences in the Transitions
The significance of distinct transitions between the groups was tested using χ² tests, as the test against random networks can be used only for the transition with the highest occurrence. The transition with the highest occurrence (1-11→1-12, infant plays, mother follows/enriches) was tested against randomized networks to get a measure of the significance of this transition. The randomized networks were generated from the original interaction networks of the data by swapping the end-nodes of two randomly picked links while keeping the weight with the link. To generate one random network, 20000 link-swaps were performed, although a swap was only accepted if the resulting transitions were not present before the swap. In this way the general parameters of the network, such as the number of nodes, the number of links, and the number of connections each node has to other nodes in the network (degree distribution), are conserved [34]. 5000 random full-term and 5000 random preterm networks were derived. Single full-term and preterm randomized networks were subtracted in the same way as for the original data analysis. To see whether the transition between the two combined states 1-11 and 1-12 was a result of the network structure and not a finding in the data, we recorded the number of times this transition favored the full-term group by at least the same amount as in the real data. The p value is then this number divided by the total number of tests. The observed strength of the full-term transition from 1-11 to 1-12 is thus likely not accidental (p < 0.005).
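A sketch of the degree-preserving randomization and the empirical p-value used for this test; the link-swap acceptance rule follows the description above, while the self-loop guard, helper function, and variable names are assumptions introduced for illustration.

```python
import random

def randomize_network(links, n_swaps=20_000, seed=None):
    """Degree-preserving randomization of a weighted, directed transition network.

    `links` maps (source_state, target_state) -> weight. Two randomly picked
    links have their end-nodes swapped, each weight staying with its link; a
    swap is accepted only if neither resulting link already exists.
    """
    rng = random.Random(seed)
    edges = dict(links)
    for _ in range(n_swaps):
        (a, b), (c, d) = rng.sample(list(edges), 2)
        if a == d or c == b:                     # avoid creating self-loops
            continue
        new1, new2 = (a, d), (c, b)
        if new1 in edges or new2 in edges or new1 == new2:
            continue
        w1, w2 = edges.pop((a, b)), edges.pop((c, d))
        edges[new1], edges[new2] = w1, w2
    return edges

def empirical_p(observed_diff, randomized_diffs):
    """Fraction of randomized network pairs whose group difference for the
    focal link is at least as large as the observed full-term advantage."""
    hits = sum(1 for d in randomized_diffs if d >= observed_diff)
    return hits / len(randomized_diffs)
```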
Sequence Analyses of Behaviors
The three preceding transitions were analyzed starting from a ''distinct transition'' in order to detect the sequences of group-distinctive behaviors. All the states leading up to a distinct transition were recorded, and the occurrence of each state was counted. For each of these states, we again recorded which state occurred right before, how many times each state occurred, and so on. From this we could see whether there is a certain pattern in the preceding transitions of states leading up to the transition of interest. We could also compute a rate (in percentage of all states) of transitions leading up to the specific transition.
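The backward tallying described above can be sketched as follows, assuming each dyad's interaction is available as a time-ordered list of transitions (as extracted earlier); the function and parameter names are illustrative.

```python
from collections import Counter

def preceding_state_tallies(transition_list, focal, depth=3):
    """For every occurrence of `focal` in a time-ordered transition list,
    tally the states found 1, 2, ..., `depth` transitions earlier.

    Returns a list of Counters, one per look-back position, mapping the
    originating state of the earlier transition to its occurrence count.
    """
    tallies = [Counter() for _ in range(depth)]
    for i, transition in enumerate(transition_list):
        if transition != focal:
            continue
        for back in range(1, depth + 1):
            j = i - back
            if j >= 0:
                tallies[back - 1][transition_list[j][0]] += 1
    return tallies
```

Normalizing each Counter by its total then gives the rate, in percent, of the states leading up to the transition of interest at each look-back position.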
Structural Features of Interaction Networks
The number of links connected to any given node (connectivity) in the network falls into one of five coarse categories (see Figure 3). The nodes were largely connected across the networks in both groups, and very few sub-network structures were found. A sub-network would suggest that certain behavioral states and transitions are isolated from the rest of the network and only reachable through the connecting node. The majority of nodes have multiple incoming and outgoing links in both the preterm and the full-term networks (Figure 3); however, the preterm network has a significantly higher fraction of nodes with high connectivity than the full-term network, χ²(1, N = 131) = 12.6, p = 0.00038. Higher connectivity suggests higher variability in behavioral transitions. The two networks show differences in the occurrence of the other four possible node statuses (Figure 3), which occur more often in the full-term group. However, only the difference in the number of nodes which are not linked to any other nodes is statistically significant, χ²(1, N = 131) = 5.51, p = 0.019. Most of these nodes correspond to states where the infant is passive (7-17, 7-19, 7-20) or disobeys (5-17, 5-18, 5-19), and to states where the mother is passive (3-17, 5-17, 7-17) or inappropriate (5-19, 7-19). Nodes corresponding to the ''defies'' behavior in infants are generally less connected in the full-term network (Figure 2, top and middle panels). The structural features of the interaction networks are summarized in Table 1.
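A sketch of the coarse-grained connectivity classification and the χ² comparison, assuming node in- and out-degrees have been computed from each group's network; the contingency-table counts below are illustrative and are not the study's actual tallies.

```python
from scipy.stats import chi2_contingency

def connectivity_category(in_degree, out_degree):
    """Assign a node to one of the five coarse connectivity categories."""
    if in_degree == 0 or out_degree == 0:
        return "no input or no output"
    if in_degree == 1 and out_degree == 1:
        return "one input, one output"
    if in_degree > 1 and out_degree == 1:
        return "multiple inputs, one output"
    if in_degree == 1 and out_degree > 1:
        return "one input, multiple outputs"
    return "multiple inputs, multiple outputs"

# Illustrative 2x2 table: high-connectivity nodes vs. all other nodes per group.
table = [[40, 25],   # preterm
         [28, 38]]   # full-term
chi2_stat, p_value, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2_stat:.2f}, dof = {dof}, p = {p_value:.4f}")
```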
Group-distinctive Transitions
The 6 most frequent transitions (A-F) were common in the two groups (see also Tables 2 and 3): (A) While the mother and the infant are involved in a play initiated by the infant, the mother elaborates it occasionally.
(B) The infant stops being involved in a game and starts a new activity or the infant starts a new game and gets involved in it, while the mother follows his switch. (C) The mother stops following the infant's activity and attempts to direct the infant's attention to a new toy, while the infant does not change behavior. Also vice versa in the full-term group: the mother stops directing the infant and starts to follow his/her activity. (D) The infant starts a new activity, and the mother stops following him/her and tries to redirect his/her attention to the object of her own interest. (E) The infant chooses a new toy, the mother attempts to direct his/her attention to another toy but the infant starts to neglect the mother. (F) The infant neglects the directing mother but subsequently accepts the mother's idea without sign of joy.
Subtraction of the two networks (Figure 2, bottom panel) revealed the most prominent differences in the occurrences of transitions. Group-distinctive behavioral transitions (full-term: G-N; preterm: O-Q) are described below: (G) The mother directs the infant, who is initiating a new activity. The infant stops initiating and obeys the mother's direction (accepts her idea without sign of joy). (H) The mother initiates and directs the activity, and the infant plays according to the mother's idea joylessly. Subsequently the infant shows signs of enjoying the activity. (I) The mother initiates an activity and the infant plays according to the mother's idea happily, while the mother follows him. Subsequently the infant initiates a new game and the mother follows his switch. (J) The infant plays according to the mother's idea and shows signs of joy while the mother enriches the activity. Then the infant chooses a new toy, and the mother enriches his/her activity. (K) The infant chooses a new toy, and the mother enriches his/her activity. Subsequently the mother directs the infant's attention to another object. (L) The mother enriches the infant's activity, and the infant happily plays the game suggested by the mother. Then the infant changes his activity according to his own idea, and the mother follows him/her. (M) The mother enriches the infant's activity, and the infant happily plays the game suggested by the mother. Then the mother directs the infant's attention to another object. (N) The mother directs the infant's attention to a new activity, and the infant happily plays along. Then the mother stops directing and follows the infant, who keeps playing.

Figure 3. Connectedness of nodes in the full-term (left) and preterm (right) infant-mother interaction networks. A coarse-grained degree distribution analysis has been performed on the data, separating the combined mother-infant behavioral states into 5 categories: the very sparsely connected states which have either one input and one output (1) or no input or no output (5), states which have multiple inputs but only one output (2), states which have only one input but multiple outputs (3), and states which have multiple inputs and multiple outputs (4).
The statistical significance of these differences was evaluated using χ² tests (Tables 4 and 5). Based on the results of the χ² tests, transitions A, C, H, J, K, L, M, and N occur significantly more often in the full-term group than in the preterm group, and G and I show a tendency in the same direction. In transitions A, B, C, I, J, L, and N the mother adapts or switches to adapt to the infant's activity, while in transitions G and H the infant accepts the mother's idea.
Based on the results of the χ² tests, transitions O, P, and Q can be considered to occur significantly more often in the preterm group than in the full-term group. None of these distinctive preterm transitions belong to the high-frequency transitions: (O) The infant neglects the directing mother, and subsequently the mother applies physical force. 30% of the preterm dyads have this transition (9 out of 30), and 4 out of the 9 dyads have this transition multiple times, compared to the full-term group, where it occurs in only 4 of the 42 dyads, one of them having it twice. (P) The infant plays based on his/her own idea while the mother does not pay attention to him/her and is actively engaged in another activity. Then the mother gets involved in the infant's play. This transition occurs only twice in the full-term group (in 2 of the 42 dyads), and 12 times in the preterm group (in 6 of the 30 preterm dyads, 3 of which have it more than once). (Q) The participants neglect each other; there is no relationship between them. Subsequently the mother directs the infant's attention to a toy. Vice versa: the mother directs the infant, and the infant neglects the mother, and subsequently the mother disengages and starts to do something else, resulting in no relationship between the two. It happens very rarely (only 4 times) in the full-term group (6-18→6-15 in the case of 1 infant, and 6-15→6-18 for 3 infants). However, we found at least one such transition in 30% of mother-preterm infant observations, and in 23% of the cases we observed more than one transition.
To gain insight into how the distinctive preterm transitions affect the mother-infant interaction, we asked how often and how fast harmonious play (1-11 or 1-12) developed after the O and Q transitions (P is itself a transition to 1-11). When harmonious play is reached after transition O (6-15→6-13), it occurs after about 80 s (mean: 79.7 s, SD: 53.6 s) in the preterm group, and after about 21 s in the full-term group (mean: 20.67 s, SD: 2.52 s). 18.75% of the observed preterm O transitions (3 out of 16) and 40% of the O transitions in the full-term group (2 out of 5) were not followed by harmonious play within the recorded time.
In the full-term group a harmonious state (1-11 in all cases) was reached relatively soon after transition Q (mean = 29 s, SD = 25 s). In the preterm group transition <Q> occurred more frequently (p < 0.05). We found at least one <Q> transition in 30% of the mother-preterm infant observations, and in 23% of the cases we observed it more than once. In these transitions infants neglect their mother's attention-directing attempt, to which mothers of preterms often respond by withdrawing from the interaction (neglecting the infant), and then trying to direct the infant's attention again. Interestingly, in the preterm group the 6-18→6-15 transition (mother switches from neglecting to directing the infant while the infant neglects the mother) led to harmonious play (1-11 or 1-12) only after about 2 minutes (117 s) or longer (mean = 248 s, SD = 91 s), and in 21% of the cases the interaction never returned to harmonious after this transition. In the case of the 6-15→6-18 (Q>) transition (mother switches from directing to neglecting the infant while the infant neglects the mother), we found only one occasion where harmonious play (1-11) was reached in a short time (10 seconds), which represents about 5% of all the cases.
Table 3. The most frequent behavioral transitions in the full-term group identified by network analysis (Figure 2). doi:10.1371/journal.pone.0067183.t003. Table 2. The most frequent behavioral transitions in the preterm group identified by network analysis (Figure 2, middle panel). Table rows recovered here: G. directs (15), explores (2) → directs (15), obeys (3); H. directs (15), obeys (3) → directs (15), cooperates (4); I. follows (11), cooperates (4) → follows (11), explores (2); J. enriches (12), cooperates (4) → enriches (12), explores (2). Directions of behavioral transitions are also indicated in the codes (< and >).
Comparing Full-term and Preterm Behavioral State Transition Sequences
The subtracted transition network suggests that there are potentially distinctive paths in the system, although the network on its own does not reveal the time sequence of transitions. In Figure 4 we show the distribution of states preceding a selected distinctive full-term transition (<A, 1-12→1-11) and a distinctive preterm transition (<Q, 6-18→6-15). Transition <A>, which is the most frequent transition in this study, occurs in the full-term group more often than in the preterm group, and is often periodic. In this transition the infant plays based on his/her own idea, while the mother alternates between following and enriching his/her activity. Interestingly, during the sequence preceding the 1-12→1-11 (<A) transition mothers of full-term infants are predominantly in states 11 (follows), 12 (enriches), or 20 (handles toy), while mothers of preterm infants can often be found in state 14 (commands) or 15 (directs attention), both controlling the infant's activity, or 17 (being passive). Similar to the full-term group, in the preterm group the most likely transition preceding 1-12→1-11 (<A) is 1-11→1-12 (A>) and vice versa (Figure 4).
Transition <Q>, which was found only once in the full-term group, is likely to happen periodically in the preterm group (mother directs/neglects infant while infant neglects mother, Figure 4). It was preceded by states where the infant was in the 'neglect' state in all cases, and in most of the cases the mother was trying to direct the attention of the neglecting infant (6-15). This non-beneficial pattern of interaction between the mother and the infant is difficult to break once the mother and infant have entered it.
Discussion
In this work we present a novel approach for analyzing mother-infant interactions, focusing on behavioral changes. The method was applied to compare the interactions of mothers with 12-month-old preterm and full-term infants. The most frequent behavioral transitions were the same in both groups (A to F, Tables 2 and 3). States with the infant playing and the mother following or enriching his/her activity occurred remarkably often. This kind of interaction is often considered optimal in Western cultures [35]. In such cases the infant chooses what to play, and the mother stays involved in the interaction and helps to maintain the infant's attention by occasionally enriching and elaborating his/her ideas. This maternal behavior is favorable in various respects: it (i) enhances the infant's focused attention by keeping him/her longer in a certain activity, (ii) enriches the infant's knowledge and repertoire of skills, (iii) allows the infant to experience that he is an able-to-act individual, and (iv) provides mutual joy and satisfaction in the interaction.
Mothers mostly adjusted to the infant's activity. This finding is in agreement with the observation of van Beek [16], who called this phenomenon 'infant dominance'. However, occasionally
Group Differences in Behavioral Transitions
Besides the major similarities, preterm dyads showed differences in the mother-infant interactions one year postpartum. We analyzed the possible patterns of transitions, i.e. how many different behaviors precede and follow a given behavior. Interaction patterns are generally diverse in both groups: the majority of behavioral states can be reached from several different states and can also lead to many different behaviors. However, significantly more behavioral states belong to this category in the preterm group (94% vs 71%, Figure 3). Behavioral states in the preterm group generally have higher connectivity, i.e. transition paths are more diversified, suggesting that interaction sequences in the preterm group are more heterogeneous than in the full-term group. Also, there are several behavioral states which occur only in the preterm group (Figure 3). In these states the infant is either passive or defies, and the mother is passive or inappropriate.
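To make the connectivity analysis concrete, the in- and out-degree bookkeeping described above can be sketched as follows; the short state sequence is a made-up placeholder, not data from the study, and the state labels only mimic the paper's coding scheme.

```python
# Hedged sketch: build a directed transition network from a coded sequence of
# combined mother-infant states and count, for each state, how many distinct
# states precede it (in-degree) and follow it (out-degree).
import networkx as nx

sequence = ["1-11", "1-12", "1-11", "6-15", "6-18", "6-15", "1-11"]  # placeholder
G = nx.DiGraph()
for a, b in zip(sequence, sequence[1:]):
    if a != b:                      # only genuine state changes count as transitions
        G.add_edge(a, b)

for state in G.nodes:
    print(state, "in:", G.in_degree(state), "out:", G.out_degree(state))
```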
Our results generally suggest that interactions of full-term infants and their mothers are more focused and harmonious, while the preterm transition pattern is more evenly spread. The method was also able to capture differences in the occurrences of certain behavioral transitions between the two groups. We identified 8 significant distinctive transitions which occurred substantially more frequently in the full-term group, and 3 distinctive transitions for the preterm group. Our results suggest that full-term infants spend more time playing based on their own ideas than their preterm peers, and transitions occur more frequently between playing, cooperating, exploring and obeying (Table 4). The major difference in maternal behavior is that the transition pattern of mothers of full-term infants is focused on three states: following and enriching the infant's activity and directing the infant's attention to her ideas. These transitions are more frequent than in the case of mothers of preterm infants. Our results on a low-to-medium risk preterm sample support the conclusions of several previous reports which found that the interactions of preterm dyads are less harmonious [10,13,14,36,37].
Maternal Intrusiveness and Disengagement in the Preterm Group
Our findings do not support previous reports that mothers of preterm infants are either more [14,15] or less [10,13] active than mothers of full-term infants. Using network analyses on micro-analytic data rather suggests that during the interaction they can occasionally become both active (intrusive) and neglecting (disengaged).
Several studies have observed elevated maternal intrusiveness among mothers of preterm infants [38], but to our best knowledge none of them examined closely how a 1-year-old infant handles maternal intrusiveness. Infants in our preterm sample often do not pay attention to their attention-directing mother; instead they neglect her. This behavior is similar to, though more deliberate than, that of young infants, who show gaze aversion in response to maternal overstimulation or attention-attracting activity [18,39]. The neglecting behavior of the infant can be an attempt to cope with the emotional distress caused by the intrusive mother. In response to the neglecting behavior of their infants, mothers often increased control over the infant by using physical force (O) or disengaged from them (Q) (e.g. cleaned up the room). After these transitions harmonious play rarely developed, and even when it did, it took a prolonged period, presumably because both infants and mothers got frustrated.
Much of the mother-infant interaction research has aimed at better understanding maternal intrusiveness, and not much effort has been focused on examining the effects of maternal disengagement. Neglecting is not equivalent to the non-responsive or active/passive dimension, which received relatively high attention in the past decades [14,15]. According to our coding definition, a neglecting mother, despite the instruction of the researcher, does not play with the infant and is actively involved in doing something else (e.g. tries to contact the cameraman, or cleans up the room).
Previous studies suggested that unpredictable alternation of maternal behavior between intrusiveness and disengagement may be particularly detrimental to the development of a young child, because the child cannot anticipate it and engage accordingly [40]. Despite its infrequency, negative control, maternal intrusiveness and hostility in the early mother-infant interaction can most likely be associated with behavioral and emotional symptoms of the child. Directive maternal behavior has also been found to be associated with poorer language development [41][42][43].
Conclusion
Our approach allowed an in-depth insight into the mother-infant interaction unattainable using the traditional methods of psychology. In addition to corroborating the existing view of the importance of preterm birth in mother-infant interactions, our findings supplemented the picture with additional details. In the context of mother and infant playing together, the most frequent behavioral transitions did not differ between the two groups: infant playing or exploring with the mother following, enriching or directing. However, the transitions in the preterm dyads were found to be more diverse than in their full-term counterparts, and they were also unfavorable, as they tended to make the interactions disharmonious (mother neglecting, directing or forcing the infant). Because these maladaptive maternal behavioral changes are likely to place the infant at risk for later emotional, cognitive and behavioral disturbance, future cross-cultural research with larger samples is needed to confirm our conclusions. Also, longitudinal studies should clarify how the coupling of an over-sensitive infant with an intrusive/disengaged mother affects the developmental outcome.
Hyperon sigma terms for 2+1 quark flavours
QCD lattice simulations determine hadron masses as functions of the quark masses. From the gradients of these masses, and using the Feynman-Hellmann theorem, the hadron sigma terms can then be determined. We use here a novel approach of keeping the singlet quark mass constant in our simulations, which, upon using an SU(3) flavour symmetry breaking expansion, gives highly constrained (i.e. few-parameter) fits for the hadron masses in a multiplet. This is a highly advantageous procedure for determining the hadron mass gradients as it avoids the use of delicate chiral perturbation theory. We illustrate the procedure here by estimating the light and strange sigma terms for the baryon octet.
Introduction
Hadron sigma terms, σ_l^{(H)} and σ_s^{(H)}, eq. (1), are defined from the scalar quark matrix elements of a hadron H, where we have taken the u and d quarks to be mass degenerate, m_u = m_d ≡ m_l. (The superscript R denotes a renormalised quantity.) Other contributions to the hadron mass come from the chromo-electric and chromo-magnetic gluon pieces and the kinetic energies of the quarks [2]. Sigma terms are interesting because they are sensitive to chiral symmetry breaking effects. Experimentally the value of σ_l^{(N)} has been deduced from low energy π-N scattering. A delicate extrapolation to the chiral limit [1] gives a result for the isospin-even amplitude σ_{πN}/f_π^2 (with σ_{πN} ≡ σ_l^{(N)}), from which the sigma term may be found. The precise value obtained this way has been under discussion for many years. However, within the limits of our lattice calculation this will not concern us here, and for orientation we shall just quote a range of results: earlier analyses [3,4] gave 45(8) MeV, while a later dispersion analysis [5] suggested a much higher value of 64(7) MeV. An estimate using heavy baryon chiral perturbation theory gave 45 MeV [6]. A more recent estimate gave 59(17) MeV [7]. Even less is known about the nucleon strange sigma term. Eq. (1) is usually rewritten (in particular for the nucleon) in terms of σ_l^{(N)} and the strangeness fraction y^{(N)R}, eq. (2), i.e. we consider σ_l^{(N)} and y^{(N)R} rather than σ_l^{(N)} and σ_s^{(N)}. The simplest calculation, e.g. [1] (which we will discuss in more detail later), uses first order in SU(3) flavour symmetry (octet) breaking to give eqs. (3) and (4), where m_s^R/m_l^R is the ratio of the strange to light quark masses; using the leading order PCAC formula for this ratio then gives eq. (5). The Zweig rule, ⟨N|(s̄s)^R|N⟩ ∼ 0, would then fix these values, while any non-zero strangeness content, y^{(N)R} > 0, would increase σ_l^{(N)} and σ_s^{(N)} (and indeed, due to the large coefficient, σ_s^{(N)} quite rapidly). Determination of the strange sigma term (and in particular y^{(N)R}) is important in constraining the cross section for the detection of dark matter. WIMPs would be scattered off nuclei by the exchange of scalar particles, such as the Standard Model Higgs particle, which interact more strongly with heavier quark flavours. This coupling can be parameterised in terms of the fractional contribution of a quark flavour q to the nucleon's mass M_N, f_{T_q} = m_q^R ⟨N|(q̄q)^R|N⟩/M_N. While the contributions of the charm and heavier flavours approach a constant that is proportional to the gluonic contribution f_{T_g}, there is a strong dependence of the cross section on the value of f_{T_s}; see e.g. [8,9] and references therein.
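For orientation, the definitions conventionally used for these quantities (and consistent with the numerical estimates quoted later in this article, although the paper's own eqs. (1)-(2) may differ in detail) are

\sigma_l^{(H)} = m_l^R \,\langle H|(\bar{u}u+\bar{d}d)^R|H\rangle ,\qquad
\sigma_s^{(H)} = m_s^R \,\langle H|(\bar{s}s)^R|H\rangle ,\qquad
y^{(H)R} = \frac{2\,\langle H|(\bar{s}s)^R|H\rangle}{\langle H|(\bar{u}u+\bar{d}d)^R|H\rangle} .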
In this article, we shall investigate this simple picture as described in eqs. (3), (5) and in particular test the linearity assumption of SU(3) flavour symmetry breaking.
Flavour symmetry expansions
Lattice simulations start at some point in the (m_s^R, m_l^R) plane and then approach the physical point (m_s^{R*}, m_l^{R*}) along some path. (In the following we denote the physical point with a *.) As we shall be considering flavour symmetry breaking, we start here at a point on the flavour symmetric line m_l^R = m_s^R and then consider the path keeping the average quark mass constant, m̄ = const. The SU(3) flavour group (and quark permutation symmetry) then restricts the quark mass polynomials that are allowed [23], giving for the baryon octet the expansion of eq. (7), where A_1 and A_2 are unknown coefficients. So to linear order in the quark mass we only have two unknowns (rather than four). A similar situation also holds for the pseudoscalar and vector octets (one unknown) and the baryon decuplet (also one unknown). These functions highly constrain the numerical fits. (At O(δm_l^2) only the baryon decuplet has a further constraint.) Permutation invariant functions of the masses X_S (the 'centre of mass' of the multiplet) can be defined which have no linear dependence on the quark mass; for the baryon octet this gives eq. (10). (The corresponding result for the pseudoscalar octet is given later in eq. (29).) Furthermore, expanding about a specific fixed point m_l = m_s = m_0 on the flavour symmetric line and allowing m̄ to vary, we obtain eq. (11). We will see that A_1, A_2 give all the non-singlet hyperon sigma terms and M′(m_0) the singlet terms.
As an example of the quark mass expansion from a point on the flavour symmetric line, in Fig. 1 we plot the baryon octet masses M_H/X_N against M_π^2/X_π^2, together with a linear fit (eq. (7), and implicitly eq. (29)), using 2+1 O(a)-improved clover fermions at β = 5.50 [24] and two starting values for the quark mass on the flavour symmetric line, namely κ_0 = 0.12090 and 0.12092.
All the points have been arranged in the simulation to have constant m̄. We see that a linear fit provides a good description of the numerical data from the symmetric point (where M_π ∼ X_π^* = 410.9 MeV) down to the physical pion mass. In a little more detail, the bare quark masses are defined in eqs. (12) and (13), so once we decide on a κ_l this then determines κ_s. Note that κ_{0;c} drops out of eq. (13), so we do not need its explicit value. The initial κ_0 values chosen here, namely κ_0 = 0.12090 and 0.12092, are close to the path that leads to the physical point (κ_0 = 0.12092 being slightly closer). (This is discussed in more detail in [23], which also contains numerical tables and phenomenological values for the hadron masses. Results not included there are given in Appendix C.) This path is also illustrated later in section 4.3, Fig. 4. Although finite size effects tend to cancel in ratios of quantities from the same multiplet, we nevertheless fit just to the results from the 32^3 × 64 lattices (filled circles) using the linear fit of eq. (7). Finally, note that we also have a similar flavour expansion for the pseudoscalar octet as for the baryon octet, as discussed in section 4.3.
(Hyperon) scalar matrix elements
Scalar matrix elements can be determined from the gradient of the hadron mass with respect to the quark mass by using the Feynman-Hellmann theorem, eq. (14), which is true for both bare and renormalised quantities. So if we take the derivative with respect to the bare quark mass we get the bare q̄q matrix element, while if we take the derivative with respect to the renormalised quark mass we get the renormalised matrix element. In the left panel of Fig. 2 we show the nucleon masses (green diamonds) and the flavour symmetric nucleon masses (maroon squares) against 1/κ_l and 1/κ_0 respectively (from eq. (12) these are proportional to the bare quark mass). From the Feynman-Hellmann theorem, the slope of the masses (maroon squares) gives the total Σ_{q=u,d,s} ⟨N|q̄q|N⟩, while the slope of the masses (green diamonds) gives the valence contribution. The difference between the two contributions gives the disconnected contribution. Because here all three quark masses are equal, the disconnected contribution for all three quarks will be the same. The two slopes thus give estimates of Σ_q ⟨N|q̄q|N⟩_con and Σ_q ⟨N|q̄q|N⟩ as bare lattice quantities.
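For reference, the Feynman-Hellmann relation invoked here has the familiar schematic form (up to the paper's precise normalisation conventions)

\frac{\partial M_H}{\partial m_q} = \langle H|\,\bar{q}q\,|H\rangle ,

which, as stated in the text, holds both for bare and for renormalised masses and operators.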
To look at renormalised matrix elements, we need a plot against the renormalised mass, (aM_π)^2 (as in leading order PCAC, M_π^2 is proportional to the renormalised quark mass, eq. (31)). This is shown in the right panel of Fig. 2. The slopes are now much closer to each other. We now find the estimates for renormalised lattice quantities, giving y^{(N)R} ∼ 2 × 0.085/(1 − 0.085) ∼ 0.19. So although for bare matrix elements there is a significant strange quark content, this is reduced in the renormalised matrix element.
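As a quick numerical check of the estimate just quoted (treating the 0.085 as the disconnected share of a single flavour's scalar matrix element, which is our reading of the text rather than an explicit statement in it):

```python
# Hedged check of y^(N)R ~ 2 x 0.085 / (1 - 0.085) ~ 0.19 as quoted above.
f = 0.085                      # assumed disconnected fraction per flavour
y = 2 * f / (1 - f)
print(f"y^(N)R ~ {y:.2f}")     # prints ~0.19
```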
We shall now try to make these considerations a little more quantitative.
(Hyperon) σ equations
Renormalisation
For Wilson (clover) fermions the singlet and non-singlet pieces of the quark mass renormalise differently [25,26]; the relations are given in eqs. (17) and (18). In the action the term Σ_q m_q q̄q = Σ_q m_q^R (q̄q)^R is a renormalisation group invariant (RGI) quantity. Upon writing this in matrix form and inverting, we see that for α_Z ≠ 0 there is always mixing between the bare operators.
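A common way to parameterise this singlet/non-singlet mixing (our notation; the paper's eqs. (17)-(18) may use slightly different conventions) is

m_q^R = Z_{NS}\,(m_q - \bar{m}) + Z_S\,\bar{m} = Z_{NS}\left[\,m_q + \alpha_Z\,\bar{m}\,\right], \qquad \alpha_Z \equiv \frac{Z_S - Z_{NS}}{Z_{NS}} ,

so that the mixing between bare quantities disappears only when Z_S = Z_{NS}, i.e. α_Z = 0.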
As an example of where this manifests itself, the relation between the bare y^{(H)} and the renormalised y^{(H)R}, cf. eq. (2), shows that y^{(H)R} ≠ y^{(H)} for clover fermions. Additionally, since α_Z > 0 and y^{(H)} ≳ 0, we find that y^{(H)R} < y^{(H)}, i.e. it is reduced. Useful quark combinations are the octet and singlet combinations. Furthermore, using the Feynman-Hellmann theorem, eq. (14), together with the hadron flavour expansion, eq. (7), and eq. (11), gives eqs. (21) and (22). Eq. (21), the equation for the matrix element of an octet operator, only involves c_H (the hadron mass expansion keeping the singlet quark mass constant), while eq. (22), the matrix element of a singlet operator, only involves M′_0 (occurring when changing the singlet quark mass). Eq. (21) also leads to eq. (3), as discussed in the introduction.
Finally, note that the quantities in eqs. (21), (22) are RGI: all Z factors cancel when they are renormalised. Linear combinations of these two quantities are also RGI, in particular the combination used previously; the individual pieces considered separately are not RGI, see eqs. (17), (18). The renormalised quantities are mixtures of the two lattice quantities, and α_Z is needed to relate lattice values to continuum values. Referring back to Fig. 2, we see that the bare lattice strange sigma term is much larger than the renormalised strange sigma term, due to a cancellation between the two terms in eq. (18).
σ equations
Multiplying the renormalised quark mass, eq. (17), with eqs. (21), (22) (or more generally with eq. (18)), we can form RGI combinations (i.e. a form where the renormalisation constant Z_NS cancels). In particular we find relations, eq. (24) and the following, in which r, the ratio of the quark masses, appears. Thus we have to find the (fixed) coefficients (1 + α_Z)m_0 c_H and m_0 M′_0(m_0). We then determine the physical values of the sigma terms by extrapolating to the point where the quark mass ratio takes its physical value, i.e. r = r*.
We observe that we have two simultaneous equations, which can easily be solved to give the sigma terms, eq. (27). We see that the smallness of σ_s^{(H)} follows from these relations; again, as seen in section 3, y^{(H)R} only depends on gradients and not on the physical point.
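Numerically, once the two RGI combinations have been evaluated at r = r*, extracting the two sigma terms is just a 2×2 linear solve. The following is only a generic sketch with placeholder coefficients, not the actual entries of eqs. (24)-(27):

```python
# Hedged sketch: solve two linear relations for (sigma_l, sigma_s).
# A and b are placeholders for whatever eqs. (24)-(27) give at r = r*.
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, -2.0]])          # placeholder coefficient matrix
b = np.array([60.0, 10.0])           # placeholder right-hand sides (MeV)
sigma_l, sigma_s = np.linalg.solve(A, b)
print(sigma_l, sigma_s)
```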
It is now convenient to normalise the coefficients by X_N, so we now need to find the coefficients (1 + α_Z)m_0 c_H/X_N(m_0) and m_0 M′_0(m_0)/X_N(m_0).
Determination of the coefficients
The hint for determining the coefficients from our lattice data is given in section 3, where we consider gradients with respect to a renormalised or physical quantity, here taken as the pion mass. As in eq. (7), we also have a similar expansion for the pseudoscalar octet, eq. (29). This gives a good representation of the data, as can be seen from Fig. 12 of [23]. Analogously to eq. (10) we can define a flavour singlet quantity X_π. However, as well as eq. (7), we have the additional constraint from PCAC, eq. (31). If we now consider an expansion in the (physical) pion mass, then eliminating δm_l between eq. (7) and eq. (29) gives the expansion from the point on the symmetric line m̄ = m_0. Thus if we plot M_H/X_N versus M_π^2/X_π^2 (holding the singlet quark mass m̄ constant), then the gradient immediately yields (1 + α_Z)m_0 c_H/X_N. The only assumption is that the 'fan' plot splittings remain linear in δm_l down to the physical point. In Fig. 1 we show this plot together with the resulting gradients, which give M_π^{2′}(m_0) = 2α(1 + α_Z). So now eliminating (m̄ − m_0) between eqs. (11), (35) shows that in a plot of X_N(m̄)/X_N(m_0) versus X_π^2(m̄)/X_π^2(m_0) the gradient immediately gives the required ratio m_0 M′_0(m_0)/X_N(m_0). We have also replaced M_N by X_N and M_π^2 by X_π^2 (which allows us to use all the 32^3 × 64 data available for a particular m̄). In Fig. 3 we plot X_N(m̄)/X_N(m_0) versus X_π^2(m̄)/X_π^2(m_0). Finally, the quark mass ratio r must be estimated from Fig. 4. As in section 2, we see that for constant m̄ the data points lie on a straight line (i.e. there is an absence of significant non-linearity). Furthermore the gradient is fixed at −2. (Indeed, leaving the gradient as a fit parameter for κ_0 = 0.12090 confirms that this gradient is very close to −2.) Together with PCAC, eq. (31), the x-axis is proportional to m_l^R while the y-axis is proportional to m_s^R, and thus the ratio gives r. We take our physical scale to be defined from M_π^2/X_N^2|* (i.e. from the x-axes of these plots).
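The slope extraction described above amounts to a straight-line fit; a minimal sketch (with placeholder numbers, not the paper's data) is:

```python
# Hedged sketch: linear fit of M_H/X_N against M_pi^2/X_pi^2 at constant mbar.
# The gradient estimates (1 + alpha_Z) m_0 c_H / X_N(m_0).
import numpy as np

x = np.array([1.00, 0.80, 0.55, 0.30, 0.10])   # M_pi^2 / X_pi^2 (placeholder)
y = np.array([1.00, 1.02, 1.05, 1.08, 1.10])   # M_H / X_N      (placeholder)
gradient, intercept = np.polyfit(x, y, 1)
print(f"gradient = {gradient:.3f}")
```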
Curvature effects
What can we say about corrections to the linear terms? The simple linear fit describes the data well, from the symmetric point to our lightest pion mass, both along the m = const. line and the flavour symmetric line. To see qualitatively the possible influence of curvature we now compare linear fits with quadratic fits. These will be used to estimate possible systematic effects. We briefly discuss these effects here.
In Fig. 5 we compare the results of a quadratic fit and a linear fit, both for the 'fan' plot (left panel) and for the baryon masses along the flavour symmetric line (right panel). The results in the next section include systematic error estimates from both these curvature sources, combined in quadrature. In Appendix B we give some more details.
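The curvature check itself can be sketched in the same spirit (again with placeholder points): fit the same data linearly and quadratically and compare the slopes at the expansion point.

```python
# Hedged sketch of the linear-vs-quadratic comparison used for Fig. 5.
import numpy as np

x = np.array([0.0, 0.2, 0.4, 0.6, 0.8])        # quark-mass splitting (placeholder)
y = np.array([1.00, 1.03, 1.05, 1.08, 1.11])   # mass ratio (placeholder)

lin = np.polyfit(x, y, 1)                      # [slope, intercept]
quad = np.polyfit(x, y, 2)                     # [a, b, c] for a*x^2 + b*x + c
print("linear slope:", lin[0])
print("quadratic slope at x = 0:", quad[1])    # derivative of the quadratic at x = 0
```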
Results
We can now numerically determine y^{(H)R} and the sigma terms, including possible systematic effects from higher order terms, as discussed in section 4.4. We see that there is an order of magnitude increase in the fraction of ⟨H|(s̄s)^R|H⟩ compared to ⟨H|(ūu + d̄d)^R|H⟩ as we increase the strangeness content of the baryon from the nucleon (no valence strange quarks) to the Ξ (two valence strange quarks).
Turning to the sigma terms themselves, from eq. (24) we can find an indication of the magnitude of σ_l^{(H)} and σ_s^{(H)}; the results are collected in Table 1. (Again the first error is the statistical error, while the second, systematic error is due to possible quadratic effects.) While the data for κ_0 = 0.12090 are more complete than for κ_0 = 0.12092 (cf. the plots in Fig. 1) and demonstrate linear behaviour, the path starting at κ_0 = 0.12092 is closer to the physical point (cf. Fig. 4), so we shall use these values as our final values. These results are illustrated in Fig. 6 for y^{(H)R*} where H = N, Λ, Σ, Ξ.
By varying r in eq. (27), we plot in Fig. 7 the sigma terms as functions of r; σ_s^{(H)} is rapidly decreasing as r increases. (Using, for example, the results from the left panel of Fig. 4, r may be re-written in terms of the quantities plotted there; cf. Table 1 for κ_0 = 0.12092.)
Conclusions
Keeping the average quark mass constant gives very linear 'fan' plots from the flavour symmetric point down to the physical point. This implies that an expansion in the quark mass from the flavour symmetric point will give information about the physical point. In this article we have applied this to estimating the sigma terms (both light and strange) of the baryon octet. There has been no use of a chiral perturbation expansion (indeed that is the opposite expansion to the one used here, as it expands about zero quark mass).
Our results are given in section 5, and we quote from there the values for the nucleon sigma terms, eq. (40). (The first error is the fit error, while the second error indicates possible effects from higher order terms in the flavour expansion.) Note that expansions about the SU(3) flavour symmetric line require consistency between many QCD observables: here, for example, not only the baryon octet under consideration, but also the pseudoscalar octet, PCAC and the ratio of the light to strange quark masses.
Of course there are several more avenues to investigate. Numerically, an increase in statistics for the masses along the flavour symmetric line would reduce the dominant error (both statistical and systematic) and so directly help in decreasing the present errors. Our approach here has been to emphasise linearity at the expense (presently) of reaching exactly the physical point. This can be addressed by interpolating between a small set of constant-m̄ lines about the physical point. Additionally, the use of partial quenching will also help to get closer to the physical pion mass. With more data, a systematic investigation of quadratic quark mass terms in the flavour expansion should be considered, to reduce the systematic errors. Finally, while the use of linear or quadratic terms along the line of constant m̄ is unproblematic, so that it is unlikely that eq. (40) will change by much, more subtle is the relation involving X(m̄) (i.e. the gradient when changing m̄). For the example of clover fermions we have g̃^2(m̄) = (1 + b_g a m̄) g^2, which clearly does not change if m̄ = constant, but will change slightly when m̄ does. However this is probably not a large effect (as b_g seems small). For a discussion of some aspects of this issue see [29,30].
, is the part of the hadron mass due to the quark and gluon kinetic energy, interaction energy, etc., [2], i.e. the part of the hadron mass which is not due to the coupling with the Higgs vacuum expectation value.
We can use the higher order mass equations in [23]
B Higher order effects
In this Appendix we discuss a little more quantitatively the systematic errors induced by the inclusion of the quadratic terms in the fit formulae. We concentrate particularly on the nucleon sigma terms, σ_l^{(N)} and σ_s^{(N)}. By comparing c_H from the linear fit with c_H + 2b_H δm_l^* from the quadratic fit, we can estimate the maximum possible change.
We use the data at κ_0 = 0.12090, because this is the case where we have the most data, covering the largest range in quark mass splitting δm_l. In this case we have data covering about 3/4 of the gap from the symmetric point to the physical point, so we have the best chance of seeing curvature effects if they are present.
For the fan plot (left panel of Fig. 5), the curvature terms are found to be small, and statistically compatible with zero curvature. In Fig. 9 we compare the nucleon sigma terms from the slopes of the two fits, using eq. (27) together with eq. (48). Again we see that the curvature effect is very small in the case of σ_l^{(N)}, particularly at small m_l, and much larger for σ_s^{(N)}. Can we explain this difference? In σ_l^{(N)} the non-singlet term is responsible for about 25% of the quantity, so a 10% change in slope translates into a 2.5% change in σ_l^{(N)}. For σ_s^{(N)} the situation is different: the singlet and non-singlet terms appear with opposite signs, so σ_s^{(N)} is given by the difference between two large quantities. Thus a 10% change in the non-singlet matrix element is leveraged into a 25% change in σ_s^{(N)}. Repeating this procedure for the other hadrons gives similar non-singlet uncertainties.
B.2 Curvature along the symmetric line
We also use a linear fit to describe the baryon masses along the symmetric line (the line with all three quark masses equal). What is the effect of using a quadratic fit to determine the slope along this line?
In the right panel of Fig. 5 we compare a quadratic and a linear fit to the symmetric baryon masses. As before, the quadratic term is compatible with zero curvature. Indeed the quadratic term is probably too large and is likely due to having a short lever arm and low statistics at the lightest point, rather than being a real effect. (Also we would expect that chiral perturbation theory would predict a downward curve.) Feeding these values into eq. (27) gives an estimate of the possible effect of quadratic terms, due to curvature along the symmetric line, which we include in our final error estimate. This curvature effect is the same for every hadron, giving an uncertainty of ∼ 4 MeV for σ_l and ∼ 55 MeV for σ_s. However, because the shift is universal, it does not affect splittings, so the systematic error in σ_l^{(H)} − σ_l^{(H′)} is still given by the ∼ 1 MeV value of the previous subsection. For y^{(H)R}, using the first equation in eq. (4), this gives percentage changes of 60% for y^{(N)R} and 30% for y^{(Λ)R}, y^{(Σ)R} and y^{(Ξ)R}.
C Hadron Masses
We collect here in Tables 2-5 numerical values for the pseudoscalar meson octet and baryon octet not given in [23]. (All the data sets used here comprise ∼ 2000 configurations for the 24^3 × 48 volumes and ∼ 1500-2000 configurations for the 32^3 × 64 volumes, except for κ_0 = 0.12099, which has ∼ 500 configurations.) Errors are from a bootstrap analysis.
Simultaneous removal of fluoride, manganese and iron by manganese oxide supported activated alumina: characterization and optimization via response surface methodology
Fluoride, iron and manganese can simultaneously exceed the standard limits in groundwater in northeastern China. This work aims to apply a highly efficient method combining adsorption and oxidation for the synchronous removal of these inorganic ions. An innovative adsorbent (manganese-supported activated alumina) was synthesized by the impregnation method and showed a significantly better adsorption capacity than that of fresh activated alumina. The characterization results (scanning electron microscopy; Brunauer, Emmett and Teller analysis; X-ray diffraction and Fourier transform infrared spectroscopy) verified the successful introduction of MnOOH and MnO2, and the improvement of the surface microstructure enhanced the removal ability. The effect of single factors, such as pH value, reaction time or dosage, on the removal performance was verified. The maximum removal efficiencies of fluoride, iron and manganese were optimized via Response surface methodology, considering the independent factors in the ranges of MO@AA dosage (5-9 g/L), pH (4-6) and contact time (4-12 h). Note that, compared with the control, MO@AA exhibited a 59.4% improvement in fluoride removal performance. At a pH of 5.79, a contact time of 12 h and 8.21 g/L of MO@AA, the fluoride, iron and manganese removals were found to be 91, 100 and 23%, respectively. Hence, MO@AA showed good applicability for the treatment of fluoride-, iron- and manganese-containing groundwater.
to the limited set of experimental conditions provided by the software. Meanwhile, the optimal process parameters and operating conditions can be found (Zhang et al. 2020; Zhao et al. 2020). Response surface methodology, which has been confirmed to have high accuracy, is suitable for multi-factor experiments. This paper mainly describes the synthesis of a promising adsorbent by the impregnation method. Characterizations of the adsorbent and the effect of various parameters, such as adsorption time, adsorbent dosage and initial pH, were conducted using manganese oxide supported activated alumina (MO@AA). In addition, Response surface methodology and Design-Expert software were used to determine the optimal operating parameters.
Preparation of the adsorbent
As shown in Figure 1, MO@AA was prepared by the impregnation method (Liping 2011). Briefly, 30 g of the particles were impregnated in 150 mL of MnSO4 solution (0.05 mol/L) with continuous stirring and heating for 6 h at 115 °C (magnetic stirrer), and then 2.76 mL of 30% H2O2 was added. At this point the particles turned light yellow. Subsequently, 45 mL of 25% NH3·H2O was poured into the mixture. The brown precipitates generated on the surface of the samples in this step were confirmed to be MnOOH and MnO2. Next, the mixture was washed thoroughly in a constant-temperature oscillation incubator with hot DI water (30-40 °C) for 40 min at a speed of 150 r/min. After several rounds of flushing, the adsorbent was dried for 4 h at 100 °C and cooled to room temperature for further use.
Characterizations of the adsorbent
The surface microstructure of MO@AA was explored by scanning electron microscopy (SEM) (S-4800, Hitachi, Japan). Photomicrographs were recorded at an accelerating voltage of 5 kV and constant temperature. The particle surface areas and pore sizes were measured by N2 adsorption-desorption on an automatic static physical adsorption instrument (Autosorb-IQ2-MP, Quantachrome, America). The total specific surface area was calculated based on the multipoint Brunauer, Emmett and Teller (BET) equation (P/P0 = 0.005-0.3). The total pore volume was measured at P/P0 = 0.99. X-ray diffraction (XRD) was measured using an X'Pert Pro instrument (Spectris, Holland). Measurement conditions were as follows: tube voltage 40 kV, tube current 40 mA, Cu Kα radiation source, λ = 0.15406 nm, scanning range 5-90°, scanning speed 5°/min. Lastly, transmission spectra were analyzed using Fourier transform infrared spectroscopy (FTIR) (NICOLET iS50, Thermo Nicolet Corporation, America) to obtain infrared absorption spectra. The infrared range was set to 4,000-400 cm⁻¹, and each sample was scanned 32 times.
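For reference, the multipoint BET equation referred to above is conventionally written in its linearised form as

\frac{1}{v\left[(P_0/P) - 1\right]} = \frac{c - 1}{v_m c}\,\frac{P}{P_0} + \frac{1}{v_m c},

where v is the quantity of N2 adsorbed at relative pressure P/P0, v_m is the monolayer capacity and c is the BET constant; the specific surface area then follows from v_m.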
Single-factor study
The prepared MO@AA was used for the removal of fluoride, manganese and iron from aqueous solution. All trials were conducted at room temperature (25 ± 0.1 °C) under acidic conditions (hydrochloric acid was used to adjust the pH) in batch scale. Here, 100 mL of fluoride, iron and manganese solution with initial concentrations of 0.26 mmol/L, 0.04 mmol/L and 0.02 mmol/L, respectively, was mixed with a certain mass of adsorbent. Then, the mixture was shaken in a constant-temperature oscillation chamber for a predetermined reaction time at a speed of 120 r/min. The residual concentrations of fluoride, manganese and iron were detected using fluorometric spectrophotometry and a spectrophotometer, respectively. The limits for fluoride, manganese and iron concentrations in the Standard Test Method for Drinking Water (GB/T 57750-2006) are 0.05 mmol/L, 0.01 mmol/L and 0.002 mmol/L.
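Since the concentrations are quoted here in mmol/L, a quick conversion to the more familiar mg/L (using standard molar masses; the concentrations are those quoted above, everything else is illustration) is:

```python
# Hedged conversion of the initial concentrations from mmol/L to mg/L.
molar_mass = {"F": 19.00, "Fe": 55.85, "Mn": 54.94}     # g/mol
initial_mmol_per_L = {"F": 0.26, "Fe": 0.04, "Mn": 0.02}

for ion, c in initial_mmol_per_L.items():
    print(f"{ion}: {c} mmol/L = {c * molar_mass[ion]:.2f} mg/L")
```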
In the single-factor study, pH value, adsorbent dosage and contact time were the basic factors affecting the removal efficiency. Based on the ranges established in the single-factor tests, Response surface methodology was then conducted to obtain the corresponding response values. The concentrations of contaminants were detected at certain time intervals, and the adsorption quantity q_t (mg/g) was calculated using the following equation: q_t = (C_0 − C_t)V/w, where C_0 (mg/L) and C_t (mg/L) are the initial concentration and the concentration at time t (min), w (g) is the weight of the adsorbent and V (L) is the volume of the solution. The removal efficiency of contaminants (%) was calculated using the following expression: Removal (%) = (C_0 − C_t)/C_0 × 100, where C_0 (mg/L) and C_t (mg/L) are the initial and final solution concentrations.
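A worked example of the two expressions above, with illustrative (not measured) numbers for a 100 mL batch with 0.8 g of adsorbent (8 g/L):

```python
# Hedged example of the adsorption quantity q_t and removal efficiency.
C0, Ct = 5.0, 0.5        # initial / residual concentration, mg/L (illustrative)
V, w = 0.1, 0.8          # solution volume (L) and adsorbent mass (g)

qt = (C0 - Ct) * V / w                # adsorption quantity, mg/g
removal = (C0 - Ct) / C0 * 100.0      # removal efficiency, %
print(f"q_t = {qt:.3f} mg/g, removal = {removal:.1f}%")
```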
Box-Behnken experimental design
Box-Behnken experiments were carried out to investigate the interaction of three independent process variables (pH, adsorbent dosage, contact time) and to optimize the maximum percent removal efficiency. The scheme, a total of 17 runs, consisted of three levels (low, medium and high). The independent factors pH, contact time and adsorbent dosage are written as A, B and C. The response values (removal rates of fluoride, iron and manganese) are denoted Y1, Y2 and Y3, respectively. Analysis of variance was performed to quantify the influence of the individual linear, quadratic and interaction terms. The applicability of the model was checked using the coefficient of determination (R²) and the coefficient of variation (C.V. %). The commonly used second-order polynomial equation can be expressed as Y = b0 + Σ bi·xi + Σ bii·xi² + ΣΣ bij·xi·xj + ε, where Y is the response value (%), the b's are regression coefficients and ε is the error of the model. The main effects and interactions between factors were determined. Through the Response surface methodology model, the parameters (coefficients of correlation, P-value, F-value, residual analysis and predicted values) were validated against the data obtained. Then, the mechanism of fluoride, manganese and iron removal can be discussed.
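The second-order model above can be fitted by ordinary least squares once the 17 Box-Behnken runs are coded; the sketch below uses random placeholder data and coded factor levels, not the experimental design or measurements of this study:

```python
# Hedged sketch: least-squares fit of the full quadratic (RSM) model
# Y = b0 + sum(bi*xi) + sum(bii*xi^2) + sum(bij*xi*xj) + error.
import numpy as np

def design_matrix(X):
    A, B, C = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), A, B, C,
                            A * B, A * C, B * C, A**2, B**2, C**2])

X = np.random.uniform(-1, 1, size=(17, 3))   # coded (A, B, C) settings, placeholder
y = np.random.uniform(20, 95, size=17)       # removal %, placeholder
coeffs, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print(coeffs)                                # b0, linear, interaction, quadratic terms
```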
SEM analysis
As depicted in Figure 2, the surface microstructures of fresh alumina and MO@AA were observed by SEM. The surface appearance changed obviously during the modification process. As shown in Figure 2(a), the pore structure of AA was inconspicuous, with only a few pores, and obvious bulks were heaped up on the surface. In contrast, Figure 2(b) shows that after impregnation of AA, the pores became more uniform and extensive impurities were removed, which improved the adsorption performance. In addition, the surface of the modified AA presented a convex spinous structure. These alterations possibly occurred because the manganese oxides introduced functional groups onto the surface of AA, disrupting the crystal structure and giving a higher adsorption energy. Thus, it can be inferred that manganese oxides were successfully loaded onto AA.
BET analysis
To examine the pore properties of AA before and after modification, BET analysis was carried out. As displayed in Table 3, the specific surface area increased by 24% compared with the unloaded alumina beads, which is expected to improve the adsorption ability. However, the total pore volume and average pore diameter were slightly reduced, attributed to a small amount of manganese oxides entering the pore channels. With respect to the adsorption-desorption isotherm plots shown in Figure 3(a) and 3(b), their nature was similar to type II isotherms with a typical H2 hysteresis loop (Thommes 2016). This indicated that the process was unrestricted monolayer-multilayer adsorption and that MO@AA was mesoporous. Additionally, the N2 adsorption capacity increased with the build-up of relative pressure, revealing that the pore structure was not damaged during the modification process. The curves of MO@AA did not coincide but formed larger hysteresis loops, which indicated better mesoporous properties, consistent with the SEM analysis. The hysteresis loop at high pressure resulted from the occurrence of condensation and evaporation at different relative pressures. The above results indicated that loading with manganese oxides had little effect on the pore structure of the samples.
XRD analysis
XRD measurements were conducted to investigate the main constituent elements and chemical phases of AA and modified AA, respectively. The XRD analysis of AA before and after treatment is exhibited in Figure 4. It can be concluded that the modification process had little effect on the crystal structure of the samples, indicating that the basic framework of AA showed no obvious change. Taking the reference patterns into account (Yang et al. 2021), this revealed that the major oxide of fresh AA was Al2O3 and that manganese oxide was successfully loaded onto the surface of the modified samples after impregnation.
FTIR analysis
It is well known that the chemical groups involved in the adsorption process directly affect the performance of fluoride removal. Hence, to verify the functional groups, FTIR spectroscopy was performed. Figure 5 shows the FTIR spectra of fresh alumina and MO@AA. Peaks located at around 3,455 cm⁻¹, 1,617 cm⁻¹, 1,384 cm⁻¹ and 586 cm⁻¹ were observed. The adsorption band between 3,455 cm⁻¹ and 3,626 cm⁻¹ was due to -OH stretching vibrations, and the peak became sharp. The shift was possibly caused by the increase in hydroxyl groups after the manganese oxides were loaded. When the concentration of hydroxyl groups increases, the association effect is enhanced and the stretching vibration peak becomes sharp. The intensified water molecule bending vibration at 1,617 cm⁻¹ indicated that the adsorption process might involve hydrogen bonding with the hydroxyl group. As can be seen from the images, no other chemical groups were formed, which suggests electrostatic adsorption. After adsorption, the sharpness of the hydroxyl group peaks implied that an ion exchange reaction had occurred and the hydroxyl group had been replaced. The band at 1,384 cm⁻¹ was the CO3²⁻ symmetric stretching vibration. This can be explained by the hydrophilicity of AA, which inevitably absorbed H2O and CO2 from the air. The peak at 592 cm⁻¹ was attributed to Al-O bond vibration in unmodified AA, while the band at 586 cm⁻¹ was due to the combination of Al-O and Mn-O bonds. In general, the peak positions of activated alumina before and after modification showed little variation, and the peak shapes were basically the same, which demonstrated that the basic skeleton of AA had no obvious change in the process of modification. In the process of adsorption, the functional groups did not change much, but an increased -OH stretching vibration peak was obtained, so the content of hydroxyl groups and the number of active sites increased.
Effect of contact time
The contact time is related to the degree of reaction: the longer the reaction time, the better the pollutant removal. As shown in Figure 6, the contact time exhibited a major influence on pollutant removal. Fluoride and iron removal showed a similar increasing tendency with increasing time. The adsorption capacity for fluoride, calculated by Equation (1), increased dramatically in the preliminary stage (0-4 h), which was mainly due to external diffusion. Subsequently, from 4 to 12 h, the adsorption growth rate slowed down. This can be explained by MO@AA gradually becoming covered by fluoride ions, especially on the active sites and in the pores. Then, in the final stage (12-24 h), the amount of fluoride adsorbed tended to be stable. That is to say, the adsorption reached an equilibrium state; the maximum fluoride removal efficiency achieved was 98% (Equation (2)). In the process of reducing the concentrations of iron and manganese ions, the adsorption and contact oxidation processes worked together. Manganese oxide accumulated on the surface of MO@AA, and the contact oxidation capacity was enhanced. After 24 h of reaction, the maximum iron and manganese removal rates of the effluent water reached 83 and 12% (Equation (2)), respectively. When a long contact time was set in acid solution (pH 4), leaching of manganese was observed. This can be explained by the reaction of excess hydrogen ions with MnOOH (Bochatay & Persson 2000):
Effect of adsorbent dosage
With the increase in adsorbent dosage, the active sites and the amount of manganese oxide provided by MO@AA were reinforced, and the fluoride and iron removal rates gradually improved, as observed from Figure 7. This may be due to the enhanced attraction between MO@AA and the contaminants. The fluoride, iron and manganese removal rates increased by 50, 63 and 9% (Equation (2)), respectively, over the dosage range selected (1-11 g/L). Further increasing the dosage did not markedly change the removal efficiency, which may be due to saturation of the surface of MO@AA.
Effect of pH value
In a strongly acidic medium (pH < 2.00), HF generation affects the removal of fluoride ions. Here, the initial pH was set from 3.00 to 10.00 (Chen et al. 2021). Apparently, as demonstrated in Figure 8, pH had a significant influence on the removal process. An increase in pH led to a decrease in the removal of fluoride, which could be attributed to the fact that positively charged MO@AA under acidic conditions combined with negatively charged fluoride ions. When pH increased, the excess OH⁻ ions may compete with the negatively charged fluoride ions, which weakened the electrostatic interaction (Roy et al. 2018). In general, the removal capacity for fluoride was high at low pH, whereas high pH favors the oxidation of iron and manganese ions. Therefore, pH 4-6 was chosen as the optimum pH range for the simultaneous removal of fluoride, manganese and iron.
Analysis of response surface methodology and the model fitting
The ANOVA results for fluoride, iron and manganese are shown in Tables S1-S4 in the Supplementary Materials. The F and P values indicate the significance of the fitted equations. The smaller the P-value and the larger the F-value, the more significant the effect of the term on the response value (Thommes 2016). By analyzing the results obtained, all three Response surface methodology models showed good predictability.
In the regression model for fluoride (Table S1), the F-value was 9.59 and the P-value was 0.0035 < 0.005. Values of P less than 0.005 indicate that model terms are significant. In this case, A, B, C, AB and B² were significant factors. Adeq precision measures the signal-to-noise ratio; a value greater than 4 is desirable. Thus, 11.357 indicates an adequate signal, and this model can be used to navigate the design space. The values of the correlation coefficient (R²) and adjusted R² were 0.9250 and 0.8296, respectively. A high R² value (more than 0.8) is expected. Only 2% of the total variation was not explained by the model. All of this demonstrated the good fitness of the model. Results of the ANOVA for iron are shown in Table S2. The model F-value of 71.53 demonstrated that the model was accurate; there was only a 0.01% chance that a model F-value this large could occur due to noise. P-values less than 0.05 indicate that model terms are significant. In this case, B, C, BC, A² and B² are significant factors. The order of importance of the independent terms was B² > C > A² > BC > AB > A > AC > C². The R² of 0.8680 (>0.8) was in reasonable agreement with the adjusted R² of 0.9754 (>0.8). Furthermore, the Adeq precision in this model was 26.409 > 4. This showed that the model fits the actual situation well, with good stability, high test reliability and accuracy.
The ANOVA results for the manganese model are given in Table S3. The high F-value (347.5) implied the reliability of the model. The P-value of < 0.0001 indicated that the model was highly significant. In this case, A, B, C, AB, A², B² and C² were significant variables. A high Adeq precision value of 35.942, a high R² of 0.9924 and an adjusted R² of 0.9826 were found, reflecting that only 1% of the total variation cannot be explained. Hence, this model can be used.
The correlation between the actual and predicted values of removal efficiency is displayed in Figure 9(a)-9(c). The points were distributed in the vicinity of a straight line, suggesting that the developed model was adequate in predicting the response variables for the experiment; the reliability of the data was thus proven. The simulation results showed that the model could be evaluated at a 95% confidence level. To further investigate the effect of pH and reaction time, a multivariate coupling experiment was carried out. The 3D Response surface plots shown in Figure 10(a)-10(c) verify the joint effect of the two parameters on the simultaneous removal of fluoride, iron and manganese. With the increase in pH from 4 to 6, the fluoride removal capacity slowed down and the removal efficiency of manganese and iron was enhanced. With increasing contact time, the removal rate correspondingly improved. In addition, the slope of the plots was large, and a large slope indicates a strong impact of the independent factors on the response values. Figure 11(a)-11(c) shows the 3D surfaces of the impact of pH and dosage on the response values. When the pH value rose continually, an increase was observed in the removal of fluoride along with a decrease in iron and manganese removal efficiency. As the reaction time was extended, the solid-liquid two-phase system gradually reached equilibrium, leading to an increase in the removal effect. A maximum efficiency of 94, 100 and 24% was observed in this plot for 8 g/L of adsorbent.
Effect of variation in contact time and dosage
In Figure 12, contact time (with a higher slope) showed a greater coefficient impact than adsorbent dosage on the response value. These results and observations were also confirmed by the ANOVA results presented in Tables S1-S3.
Optimization and validation of the model
To meet the drinking water standards, an optimal condition can be selected based on the Response surface methodology model. In this work, raw water with fluoride, iron and manganese ion concentrations of 0.26 mmol/L, 0.04 mmol/L and 0.02 mmol/L was mixed with MO@AA at an initial pH of 5.79, a reaction time of 12 h and an adsorbent dose of 8.21 g/L (Table 6). The desirability of the optimum process condition was 0.955. Under the optimized conditions, the maximum removal rates were achieved. The measured values had errors of only 1, 0 and 1% with respect to the simulation results. Consistent with the above analysis, the Response surface methodology model has good accuracy and desirability.
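The optimisation step reported above can be reproduced, in spirit, by maximising a fitted quadratic response surface over the coded factor ranges; the coefficients below are placeholders, not the study's fitted values:

```python
# Hedged sketch: maximise a fitted quadratic response surface within bounds.
import numpy as np
from scipy.optimize import minimize

coeffs = np.array([80, 5, 3, 4, 1, -0.5, 0.5, -2, -3, -1])   # placeholder b-coefficients

def predicted_removal(x):
    A, B, C = x
    terms = np.array([1, A, B, C, A * B, A * C, B * C, A**2, B**2, C**2])
    return coeffs @ terms

res = minimize(lambda x: -predicted_removal(x), x0=[0.0, 0.0, 0.0],
               bounds=[(-1, 1)] * 3)
print("optimal coded settings:", res.x, "predicted removal:", -res.fun)
```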
Mechanism of fluoride, manganese and iron removal
Compared with the adsorbents reported in Table 2, which showed good performance in the removal of fluoride, iron or manganese individually, MO@AA is effective for the simultaneous removal of these three inorganic solutes. In addition, MO@AA worked effectively over a wider range of pH (3-9). The comparison of fresh alumina and MO@AA is demonstrated in Figure 13: the adsorption capacity for fluoride rose sharply on MO@AA compared to the control. The possible reasons, analyzed on the basis of the characterization (SEM and BET), are as follows: the number of active sites was enhanced, the specific surface area increased, a convex spinous structure was present and the pore structure was improved. The mechanism of adsorption on MO@AA was deduced from the experimental results and the XRD and FTIR analyses: the adsorption process may rest on electrostatic attraction and ion exchange. For one thing, under acidic conditions the surface hydroxyl groups were protonated, making the surface of MO@AA electropositive (Kumari et al. 2020). Thus, the negatively charged fluoride ions attached to the positively charged surface by electrostatic force. In addition, hydroxyl was released in acidic medium, resulting in an increase in pH. Furthermore, MO@AA absorbed water molecules, leading to surface hydroxylation (Teng et al. 2009). Active sites were enhanced in this process and MnOOH was reduced to Mn(OH)2. Simultaneously, fluoride ions exchanged with hydroxyl groups due to their similar hydrated ionic radii. The chemical reaction can be expressed as (Yang 2013): R-(Al2O3)n·Mn(OH)2 + 2F⁻ = R-(Al2O3)nMnF2 + 2OH⁻ (10). The removal of ferrous ions may be due to the oxidation of MnOOH by dissolved oxygen in the aquatic environment, which generated a high-valent manganese compound. Subsequently, MnO·Mn2O7 further oxidized the divalent iron in solution to form Fe(OH)3 (Weili 2020). For manganese removal, two steps were involved: (i) the dissolved Mn²⁺ was adsorbed by MnO2; (ii) the adsorbed Mn²⁺ was oxidized to high-valence manganese compounds. At the beginning of the reaction, influenced by the low pH, MO@AA showed poor oxidation performance, and only a small amount of Mn²⁺ was adsorbed on the surface of the particles (Equation (15)). As the hydroxyl groups undergo ion exchange with fluoride ions, hydroxide is released into the aqueous solution, and the higher pH facilitates the oxidation of manganese ions. As the contact time increases, the contact oxidation capacity becomes stronger and stronger, and contact oxidation then dominates the manganese removal (Equations (16)-(18)). The equations are given below (Liping 2011):
CONCLUSIONS
In this work, a novel adsorbent for contaminant reduction was successfully fabricated. The fluoride removal increased from 31% to 91% after the impregnation method was applied. Results obtained from SEM proved the better adsorption performance, with a convex spinous structure and more active sites on the surface of MO@AA. XRD also showed the introduction of MnOOH and MnO2 after impregnation. The specific surface area increased by 24% compared with the unloaded alumina beads on the basis of BET; with respect to the adsorption-desorption isotherm, it was verified that the process was multilayered and the modified AA was mesoporous. The FTIR spectroscopy analysis implied that an ion exchange reaction had occurred. By modelling, the effects of the three parameters, pH, contact time and dosage, were evaluated. Under optimized conditions, the maximum removal efficiencies reached 90.59%, 100% and 23.46%. Furthermore, it can be deduced that the mechanism of adsorption consisted of electrostatic attraction and ion exchange. The oxidation process played a major role in treating the iron- and manganese-containing simulated water. Thus, MO@AA showed great potential for fluoride, iron and manganese removal and is a promising adsorbent for groundwater treatment. In follow-up trials, MO@AA will be applied to real groundwater to remove excess fluoride, iron and manganese ions. More cost-effective methods can be explored in the next phase.
Prevalence of Verotoxin-Producing Escherichia coli (VTEC) in a survey of dairy cattle in Najaf, Iraq
Background and Objectives Dairy cattle have been implicated as a principal reservoir of Verotoxin-Producing Escherichia coli (VTEC), with undercooked ground beef and raw milk being the major vehicles of food-borne outbreaks. VTEC has been implicated as an etiological agent of individual cases and outbreaks in developed countries. This study was designed to determine the prevalence of VTEC in diarrheic dairy calves up to 20 days of age in Najaf, Iraq. Materials and Methods A total of 326 fecal samples from diarrheic calves were collected for isolation of Escherichia coli O157:H7 and non-O157 VTEC isolates. Non-sorbitol fermentation, the enterohemolysin phenotype, and slide agglutination with antisera were used for screening and detection of these serotypes. Results Nineteen (5.8%) non-sorbitol fermenting and 3 (0.9%) enterohemolysin-producing E. coli were obtained. Only 9 were agglutinated with the available antisera and none of them belonged to the O157:H7 serotype. Three were found to be verotoxin positive on Vero cell monolayers. These included serotype O111 (2 isolates) and serotype O128 (1 isolate). All three VTEC isolates were resistant to ampicillin and streptomycin. Two exhibited an adherence phenotype on HEp-2 cells. Conclusion The E. coli O157:H7 serotype is not prevalent in diarrheic dairy calves, and VTEC is not a frequent cause of diarrhea in calves in Najaf, Iraq.
INTRODUCTION
Verotoxin-producing Escherichia coli (VTEC), including O157:H7, was identified in 1982 as an important human pathogen causing bloody diarrhea or hemorrhagic colitis (HC), which can lead to life-threatening sequelae such as hemolytic-uremic syndrome (HUS), and has been reported with increased frequency during the past decade as a cause of human illness (1). VTEC has been implicated as an etiological agent of individual cases and outbreaks in developed countries (2). In developing countries, the situation is different: although an outbreak of bloody diarrhea due to VTEC has been reported in Cameroon, it is not recognized as a significant cause of human disease in Bangladesh and India. Although the number of serotypes of VTEC causing human disease is increasing, E. coli O157:H7 continues to be the dominant cause of HC and HUS (7).
Dairy cattle have been implicated as principal reservoir of VTEC, with undercooked ground beef and raw milk being the major vehicles of food borne outbreaks (8). Earliest surveys of cattle herds performed in the States of Washington and Wisconsin showed a higher prevalence of this organism in heifers and calves than in adult cattle (9). Other studies in Ontario, Canada found that the prevalence of VTEC in calves (2 weeks to 3 months) was significantly higher than cows (10). Subsequent studies have consistently shown that young animals have the highest prevalence rates, although the youngest animals show relatively low rates. The relatively high prevalence in young animals is consistent with the fact that calves, when infected experimentally with this bacterium, shed the organism for a longer period of time than do older cattle (1).
Extensive efforts have been made to isolate VTEC from cattle in various geographical regions across the world, but there has been no report of VTEC in Iraqi cattle. This study was designed to determine the prevalence of VTEC in diarrheic dairy calves up to 20 days of age in Najaf, Iraq.
MATERIALS AND METHODS
Study animals. Fecal samples, from diarrheic calves, were collected from some private dairy farms in the vicinity of Najaf city, from March to August 2006. Diarrheic calves were divided into three groups according to age. Group 1 comprised calves between 2 weeks to 2 months old. Group 2 comprised calves between 3 to 5 months old. Group 3 comprised calves of more than 5 months of age.
Sample processing. Fecal samples were collected by rectal swabs using sterilized cotton-tipped applicators and placed in a tube containing 5 ml of sterile enrichment broth (trypticase soy broth, Oxoid, with 50 µg/L cefixime and 4 mg/L vancomycin) and incubated at 37 ºC for 18-24 h (11).
Phenotypic characterization.
A loopful from the growing culture in enrichment broth was sub-cultured onto sorbitol MacConkey agar for detection of non-sorbitol fermenting E. coli, and onto washed sheep blood agar (tryptose blood agar base with 0.11% CaCl2 and 5% washed defibrinated sheep blood) for detection of enterohemolysin-producing E. coli (12). After overnight incubation, non-sorbitol fermenting or enterohemolysin-producing isolates were identified using traditional biochemical tests, including indole, methyl red, Voges-Proskauer, citrate, urease, and Kligler iron agar. For the detection of E. coli O157:H7, further biochemical tests, including cellobiose fermentation, β-glucuronidase production, and KCN broth turbidity, were applied to the bacteria identified as E. coli.
Preparation of bacterial lysate. Bacterial lysates were prepared as described by O'Brien and LaVeck (13). A 100 ml portion of syncase broth medium was dispensed into each of several 250 ml Erlenmeyer flasks and then inoculated with single E. coli colonies grown on trypticase soy agar plates. The flasks were incubated at 37 ºC for 48 h with shaking (Shaker, Sigma, USA) at 180 rpm. The bacteria were harvested by centrifugation at 10,000 rpm (4 ºC) for 20 min, washed twice with saline solution, and re-suspended in buffer (3.72 g KCl, 2 g MgCl2, 2.42 g tris-hydrochloride, and 1000 ml distilled water, pH 7.4). The cells were disrupted by 3 min of intermittent sonic oscillation (Soniprep 150, MSE, UK). The sonic extracts were clarified by centrifugation at 12,000 rpm at 4 ºC for 1 h and sterilized with a 0.22 µm Millipore filter (Difco, USA).
Determination of verotoxin production. Verotoxin production was determined in 96-well microtiter plates (Lab-Tek, USA) containing, per well, 0.1 ml of cell culture medium (MEM 199 with Hank's salts and glutamine, Flow Lab, UK) supplemented with 10% fetal calf serum (Flow Lab), 100 units/ml penicillin G and 100 µg/ml streptomycin. Vero cell (purchased from Central Health Lab, Baghdad) monolayers were obtained by seeding 1.6 × 10^4 cells per well and incubating for 1 to 2 days at 36 ºC in a 5% CO2 incubator (Memmert, Germany) before use. Four-fold dilutions of bacterial lysates in cell culture medium were added in 0.1 ml quantities to the wells, and the plates were incubated at 36 ºC in a 5% CO2 incubator. Wells containing cells not exposed to bacterial lysates served as negative controls. Vero cells were examined daily for 7 days and morphological effects were recorded (14).
Antibiotic susceptibility testing. The following antibiotics, provided by the manufacturers, were studied: nalidixic acid, tetracycline, ampicillin, chloramphenicol, carbenicillin, kanamycin, gentamycin, and trimethoprim-sulphamethoxazole. In vitro susceptibility tests were performed using the agar diffusion method on Mueller-Hinton agar medium. Results were interpreted according to the recommendations of the National Committee for Clinical Laboratory Standards (15).
Tissue culture adhesion test. Adhesion patterns to HEp-2 cells (purchased from Central Health Lab., Baghdad) in culture were assessed as previously described (16). Monolayers of HEp-2 cells grown on cover slips (diameter 13 mm) in 24-well plates (Lab-Tek) were prepared in the absence of antibiotics. Two-day-old monolayers of HEp-2 cells were used for the tests. Bacterial isolates were grown overnight at 37 ºC in trypticase soy broth. Before the test, the monolayers were washed once with Dulbecco's PBS (BDH, UK). One ml of MEM medium without antibiotics or sera was added to each well. The overnight bacterial culture (20 µl) was inoculated into each well and the plates were incubated at 37 ºC for 30 min. The monolayers were washed 6 times with PBS, and 1 ml of the medium was added to each well. After a further 3 h incubation period, the monolayers were washed 3 times with PBS, fixed with absolute methanol, stained with 10% (v/v) Giemsa stain (BDH), and examined for bacteria adhering to the HEp-2 cells.
Statistical analysis. The chi-square (χ2) test was used for statistical analysis. P < 0.05 was considered to be statistically significant.

RESULTS

Table 1 shows the number of non-sorbitol fermenting and enterohemolysin-producing E. coli isolates recovered from the diarrheic calves. A total of 326 samples from diarrheic calves were examined for detection of these isolates. Non-sorbitol fermenting E. coli isolates were obtained from 5.7% of calves in group 1 (2 weeks to 2 months), 8.7% in group 2 (3 to 5 months), and 2.7% in group 3 (more than 5 months). However, no significant difference was found among the cattle of different age groups (P > 0.05). On the other hand, enterohemolysin-positive E. coli isolates were detected only in age groups 2 and 3, at very low percentages; no enterohemolysin-positive E. coli isolates were detected in age group 1 of the tested calves. The isolation rate of enterohemolysin-producing E. coli showed no significant association among the groups of calves tested (P > 0.05). All affected calves excreted watery manure of grayish or yellow color, and blood flecks and/or mucus were present in 20 samples (6.1%).
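As a minimal illustration of the chi-square comparison described above, the snippet below tests whether the isolation rate of non-sorbitol fermenting E. coli differs across the three age groups. The group sizes used here are placeholders chosen only to approximate the reported percentages; they are not the actual counts from Table 1.

```python
# Sketch of the chi-square (X^2) test across age groups.
# Counts below are illustrative placeholders, not the study's actual data.
from scipy.stats import chi2_contingency

# rows: age groups 1-3; columns: [non-sorbitol fermenting isolates, other samples]
table = [
    [7, 116],   # group 1 (2 weeks to 2 months), ~5.7% positive
    [9, 94],    # group 2 (3 to 5 months), ~8.7% positive
    [3, 97],    # group 3 (> 5 months), ~3% positive
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p_value:.3f}")
print("significant" if p_value < 0.05 else "not significant (P > 0.05)")
```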
E. coli phenotyping and serotyping.
Biochemical characterization of the non-sorbitol fermenting (19 isolates) and enterohemolysin-producing E. coli (3 isolates) showed that they behaved as typical E. coli when grown on the classic screening medium, sorbitol MacConkey agar. These isolates were further screened serologically. Only 9 isolates were agglutinated with the available antisera (Table 2). Three isolates belonged to serotype O111, 2 isolates to serotype O44, and 1 isolate to each of serotypes O128, O119, O86, and O26. The serotype O157:H7 was not detected in this study.
Determination of verotoxin production. In this investigation, an attempt was made to evaluate the frequency of verotoxin production in the 9 isolates (Table 2). The results demonstrated that the cell lysates of 3 (0.9%) isolates (obtained from the 326 diarrheic calves) produced the same irreversible cytopathic effect in Vero cell monolayers (the Vero cells appeared round and shriveled, and many floated free in the medium). Of the three VTEC isolates, two belonged to serotype O111 and one belonged to serotype O128. Two isolates (1.6%) were from calves 3 to 5 months of age (group 2). The results also showed that 2 (66.7%) of the three VTEC isolates were enterohemolysin positive and one isolate was a non-sorbitol fermenter (Table 2).
Antibiotic susceptibility testing. The results of the in vitro antibiotic susceptibility testing of the 3 VTEC isolates are presented in Table 3. All isolates were resistant to ampicillin and streptomycin. Whereas 2 (66.7%) isolates belonging to serotype O111 were resistant to cephalosporin and tetracycline, one (33.3%) isolate belonging to serotype O128 was resistant to gentamicin. However, nalidixic acid, chloramphenicol, carbenicillin, and kanamycin were highly effective against the VTEC isolates tested.

Tissue culture adhesion test. Examination of the adherence properties of the VTEC isolates found that 2 of the 3 VTEC isolates were adherent to HEp-2 cells and that two adherence patterns were detected. In one isolate (VTEC O128), the bacteria were bound to localized areas of the HEp-2 cells, where they formed very clear-cut microcolonies. This pattern is called localized adherence (Fig. 1). In the other isolate (VTEC O111), the bacteria were clumped with a characteristic stacked-brick appearance on the surface of the HEp-2 cells and on the glass slide free from cells. This pattern is called aggregative adherence (Fig. 2).
DISCUSSION
The classic screening medium for E. coli O157:H7 is sorbitol MacConkey agar. This method exploits the fact that E. coli O157:H7, unlike 90% of other E. coli isolates does not ferment sorbitol rapidly (17). Other studies reported that sorbitol MacConkey agar medium is a useful, rapid, reliable screening aid for the detection E. coli O157:H7 in stool samples, but it is not generally useful for detection of VTEC strains of serotypes other than O157:H7 (18). However, the study of Ojeda et al. (19) showed that all 19 VTEC strains isolated from patients with hemolytic-uremic syndrome were sorbitol negative. On the other hand, it has been suggested that the enterohemolytic phenotype, detected on washing sheep blood agar is highly efficient for detection of most of the VTEC strains that are pathogenic to humans and animals (3). Beutin et al. (20) found that 89% of 64 VTEC isolates showed a correlation between enterohemolysin and verotoxin production. Results also showed that non-sorbitol fermenting E. coli isolates were detected in 5.8% of all diarrheic cattle tested (Table 1). Several investigators declared that more than 5% of E. coli isolates were unable to ferment sorbitol rapidly (21).
The inability of this study to detect the serotype O157:H7 among non-sorbitol fermenting and enterohemolysin-producing E. coli isolates in cattle confirms the results obtained by other authors, who reported that this serotype is uncommon in cattle and its isolation rates are much lower than those of non-O157:H7 serotypes (8,22,23). On the contrary, Wells et al. (9) determined the prevalence of E. coli O157:H7 among cattle of different age groups and found that this organism was isolated from 5 of 210 calves (2.3%), but only 1 isolate was documented of 662 adult cows (0.15%). Surveys of United States dairy and beef cattle have found E. coli O157:H7 in 0 to 2.8% of animals, with the highest isolation rates reported from younger rather than older animals (9, 24). The morphological changes in Vero cell monolayers in this study were the same as that described in other studies (25).
Results of this study revealed that out of three VTEC isolates, two were from calves with 3 to 5 months of age. This result agreed with the finding obtained by Dutta et al. who found that 4 (6.5%) of the 61 samples from diarrheic cattle were positive for VTEC and all the positive samples were from calves below 6 months of age (5). Results also revealed that two (66.7%) of the three VTEC isolates recovered from the diarrheic cattle in the present study were enterohemolysinproducers (Table 2). Other studies reported that verotoxin production and enterohemolysin production were closely associated (26, 7). Djordjevic et al. (27) showed that 75 (89.3%) of 84 VTEC strains isolated from 1,623 diarrheic sheep in Australia expressed enterohemolysin. On the other hand, Reissbrodt et al. (28) found no genetic linkage between VT production and sorbitol fermentation.
The high sensitivity of the VTEC isolates in the present study to most of the antibiotics tested may be due to the low number of VTEC isolates obtained. However, all the isolates were resistant to ampicillin and streptomycin. Orden et al. reported that E. coli strains isolated from diarrheic dairy calves showed low resistance to cephalosporin and quinolone antibiotics (29). It has been suggested that resistance to antibiotics has become more prevalent in VTEC isolates. In the United Kingdom, the proportion of O157 VTEC resistant to at least one antibiotic increased from 10% in 1992 to 20% in 1994 (30).
A high adherence capacity has been considered an important factor for the maintenance of bacteria on the mucosal surface of the host organism. Different studies from various parts of the world (16, 31) have incriminated this pattern as a virulence factor. The present study detected two adherence patterns among the VTEC isolates: the localized and the aggregative adherence patterns. Other studies have also examined the adherence of VTEC strains. Willshaw et al. (32) found that 13 of 48 human VTEC isolates exhibited a localized adherence phenotype on HEp-2 cells. Aslani et al. (31) showed that 18 of 70 VTEC isolates manifested different adherence patterns on HeLa cells. Based on the observations in this study, it can be concluded that the E. coli O157:H7 serotype is not prevalent. This study also showed that VTEC is not a frequent cause of diarrhea in calves in Najaf.
Anatomy of a wrong diagnosis: false Sinus Venosus Atrial Septal Defect
In contrast with transthoracic echocardiography, transesophageal echocardiography provides a sure way to make the diagnosis of sinus venosus atrial septal defect; on the other hand, this abnormality is more complex than the secundum atrial septal defect, and inexperienced operators may fail to recognize the defect properly. Despite the high reported sensitivity of transesophageal echocardiography, its specificity is difficult to assess, due to possible underreporting of diagnostic errors. We describe a false positive diagnosis of sinus venosus atrial septal defect in the setting of enlarged right chambers of the heart caused by pressure overload. The modified anatomy of the heart, together with the presence of a prominent linear structure (probably the Eustachian valve) and an incomplete examination, made image interpretation in this case very prone to error. In this anatomical setting the transesophageal longitudinal "bicaval" view may be sub-optimal for examining the atrial septum, potentially showing false images that need to be known for correct image interpretation. Nonetheless, a scan plane taken more accurately at the superior level would have demonstrated or excluded the pathognomonic feature of sinus venosus atrial septal defect in the high atrial septum, between the fatty limbus and the inferior aspect of the right pulmonary artery; moreover, TEE provides morphological information about the posterior structures of the heart that need to be investigated in detail for a complete diagnosis.
Background
Transthoracic echocardiography has high sensitivity to detect secundum-type atrial septal defects(ASDs) and up to 100% for defects of the interatrial foramen primum, while its diagnostic usefulness for more uncommon causes of shunting at the atrial level is considerably less [1,2]. In particular, diagnostic images of the sinus venosus atrial septal defect (SVD) usually are not obtainable in most adults [1].
In contrast to clinical examination or transthoracic echocardiography, transesophageal echocardiography (TEE) provides a sure way to make the diagnosis of SVD; on the other hand this abnormality is more complex than that seen with the secundum ASDs, and inexperienced operators may fail to recognize the defect. TEE is a highly accurate means to diagnose SVD, especially when performed by experienced operators [3].
Despite the high reported sensitivity of TEE for SVD diagnosis, its specificity is difficult to assess, due to possible underreporting of diagnostic errors.
The following case report is, to date, the first in the medical literature to describe a false positive diagnosis of SVD, although this is probably not such a rare mistake, particularly in the setting of modified heart anatomy.
Case report
A 30-year-old woman presented for evaluation of multiple syncopal spells. She had been in her usual state of health until four months earlier, when the first syncopal episode occurred; since then she has had five similar episodes. She was not using any type of medication; five years earlier she gave birth to a healthy baby following an uncomplicated pregnancy. She described her syncopal episodes as typical for true syncope: the episodes were 1) transient, 2) self-limited, 3) associated with falling, and 4) of relatively rapid onset with spontaneous recovery.
ECG pattern was suggestive for right ventricle enlargement and strain (Fig 1).
Echocardiogram showed: massive dilatation of both the right atrium and the right ventricle, with a high estimated systolic pulmonary artery pressure = 75 mmHg. There was no clear evidence for any of the commonly diagnosed adult congenital abnormalities causing left to right shunt (ASDs, ventricular septal defect, patent ductus arteriosus) (Fig 1).
CT scan of the chest, performed with the intent to rule out pulmonary thromboembolism, was negative but confirmed right cavity enlargement with a consequent abnormal anatomical orientation of the heart; a leftward shift of the atrial septum was evident too (Fig 2). A nuclear pulmonary perfusion scan was also normal, definitively excluding pulmonary thromboembolism. Transesophageal echocardiography (TEE) showed the presence of a large ASD (maximum diameter = 29 mm) (Fig 3). When agitated saline was injected from the right antecubital vein, the air bubbles showed to-and-fro shunting across the SVD with changing atrial pressures during the cardiac cycle (Video 1 - see additional file 1).
On the contrary, cardiac catheterization ultimately demonstrated the absence of any ASD, while confirming the presence of pulmonary hypertension, which was consequently diagnosed as primary.
We analyzed all the TEE images once again, trying to figure out where the diagnostic error originated; the longitudinal "bicaval" view is usually employed in TEE to investigate the atrial septum at its supero-posterior limbus, on a vertical plane running from the inferior vena cava to the superior vena cava. In this case it is possible that this very view, usually ideal for detection of the most common secundum ASDs, was misleading and inappropriately obtained, so that the high atrial septum was not shown.

Figure 1: ECG and transthoracic echocardiogram. The ECG shows a qR pattern in V1, QRS right axis deviation and diffuse repolarization abnormalities; the transthoracic echocardiogram (apical 4-chamber view) shows dilatation of the right atrium and of the right ventricle (max. diameter = 57 mm).
In fact, going back to the CT scan image, it appears that the abnormal heart morphology and orientation caused by right pressure overload, relative to the normally positioned esophagus, modified the structures usually encountered by the ultrasound beam when it is oriented through both venae cavae (these are used as a marker for correct longitudinal beam orientation); in this case, because of the massive right atrium dilatation, the beam was directed from the esophagus (not visualized) directly through the right atrium, "skipping" the two structures usually encountered in normal hearts, namely the left atrium and (even more importantly) the atrial septum (Fig. 4).

Figure 2: Contrast-enhanced chest CT scan, four-chamber view of the heart. Right cavity enlargement with consequent abnormal anatomical orientation of the heart; the leftward shift of the atrial septum is clearly shown.

Figure 3: TEE, bicaval longitudinal view.

Figure 4: Same 4-chamber image as in Figure 2; the direction of the ultrasound beam, from the esophagus through the right atrium, is represented in yellow.

Figure 5: Fig. 3 correctly reinterpreted. The image is reinterpreted in light of the atypical anatomy produced by the cut-plane used, indicated by the yellow line in Fig. 4; the caption in the image is changed accordingly. RA = right atrium, IVC = inferior vena cava, SVC = superior vena cava.
In this setting the presence of a linear echogenic structure, mistakenly interpreted as the atrial septum, apparently showed the absence of the superior limbus of the atrial septum, typical for SVD (Fig. 5). Very low patient compliance for probe in the high esophageal position limited the possibility to obtain potentially helpful images of the SVC-RA junction on the horizontal and longitudinal imaging-planes.
We can only speculate about the nature of this "false atrial septum" structure: it probably represents a prominent Eustachian Valve or it could be part of the Chiari network or it could simply be generated by right atrial extreme distortion and enlargement.
Nonetheless TEE examination was at best incomplete in this case, since we obtained only mid-atrial longitudinal images ( fig. 3, fig. 5, Video 1-see additional file 1) not only inappropriate for SVD-Superior Vena Cava type diagnosing, but misleading in this particular anatomical setting.
This type of ASD can be correctly diagnosed only by a higher longitudinal scan with respect to the one we performed during the above-mentioned TEE exam.
A scan plane taken more accurately at a more superior level would have demonstrated the pathognomonic feature of SVD-Superior Vena Cava type, in the atrial septum between the fatty limbus and the inferior aspect of the right pulmonary artery (Fig. 6).
Moreover, TEE has the potential to provide morphological information about the posterior structures of the heart, which always need to be investigated in detail for a complete diagnosis since, in one third of cases, the SVD-superior vena cava type is associated with anomalous entry of the right pulmonary veins into the heart.

Conclusion

a) An incomplete TEE exam, with no images of the most superior part of the atrial septum (between the fatty limbus and the inferior aspect of the right pulmonary artery), and b) the unlucky contextual presence of a prominent Eustachian valve and a very dilated right heart were together responsible for the wrong image interpretation in this case; nonetheless, careful evaluation of different views with different probe/beam orientations by a more experienced operator could have established the correct diagnosis.
Echocardiographers performing TEE should be aware of this pitfall when examining patients with enlarged right chambers and abnormal heart orientation; this may be more relevant for "real-world" adult echocardiographers, who are generally poorly trained in the recognition of the rarest congenital abnormalities.

Figure 6: a) Same image as in Fig. 5, reoriented to be compared with Fig. 6b; the right part is added to show the "virtual" anatomy (yellow line-drawing in the grey sector), should the probe be placed higher in the esophagus. b) Modified from "The echo manual, 2nd edition" by Oh JK, Seward JB, Tajik AJ. SVD image obtained using the correct scan plane (higher transducer positioning compared with Fig. 6a) for imaging of the higher atrial septum and eventual SVD; the presence of the right pulmonary artery (RPA), shown here in its short axis, posterior to the fatty limbus of the atrial septum, confirms correct positioning for SVD imaging (SVD is definitely present in this case).
Mid-atrial longitudinal "bicaval" view, normally utilized to diagnose most common ASDs, is sub-optimal for SVD-Superior Vena Cava type; it is not only insufficient for a complete examination of the higher atrial septum, but, particularly when confronted with modified heart anatomy, it may potentially show false images that need to be taken into consideration for correct image interpretation.
Surgical resection identified pseudo‐invasion with submucosal dense fibrosis in early colorectal cancer existing beyond the planned endoscopic submucosal dissection line: A case report
Abstract Pseudoinvasion is a phenomenon in which adenomatous tissue deviates into the submucosa with the mucosal lamina propria in colorectal epithelial tumors. A relatively large, stalked, neoplastic lesion of the sigmoid colon is considered at high risk of pseudoinvasion. A few reports have described endoscopic mucosal resection or polypectomy for colorectal tumors with pseudoinvasion, but the vertical margins were not sufficiently assessed. Because a positive margin can be a risk factor for recurrence, endoscopic treatment for pseudoinvasion should be carefully considered. We herein report a case in which even endoscopic submucosal dissection (ESD) was not adequate for curative resection of pseudoinvasion in early colorectal cancer. The endoscopic findings of a 25‐mm Type 0‐Is lesion in the sigmoid colon suggested a low possibility of carcinoma invasion into the deep submucosa. Although ESD was considered to be indicated in this case, laparoscopic sigmoid colon resection was eventually performed because we observed a broadly pulled muscle layer and an almost undetectable submucosal layer during ESD. The surgical specimen showed that the tumor glands of pseudoinvasion existed beyond the planned ESD dissection line, indicating that the vertical margin would have been positive if we had continued ESD. Whether pseudoinvasion was associated with the infeasibility of ESD remains unclear. This case indicates that diagnosing the presence and depth of pseudoinvasion by magnified endoscopy with narrow‐band imaging is challenging and that preoperative examinations, such as endoscopic ultrasound, may be needed for a tumor with a high risk of pseudoinvasion.
INTRODUCTION
Pseudoinvasion is a phenomenon in which adenomatous tissue deviates into the submucosa with the mucosal lamina propria in colorectal epithelial tumors. Pseudoinvasion is thought to occur when repeated mechanical stress on the tumor lesion, such as torsion or traction associated with peristalsis, causes the intramucosal tumor tissue to move into the submucosa. 1,2 Pseudoinvasion tends to be observed in patients with a relatively large, stalked, neoplastic lesion of the sigmoid colon. 3 Although a few reports have described endoscopic mucosal resection or polypectomy of colorectal tumors with pseudoinvasion, the vertical margins were not sufficiently assessed. We herein report a case in which the surgically resected specimen of an intramucosal carcinoma with pseudoinvasion demonstrated that the pseudoinvasion existed beyond the planned dissection line of endoscopic submucosal dissection (ESD). This case indicates that even ESD may not be adequate for curative resection of pseudoinvasion and underscores the importance of establishing preoperative examinations for pseudoinvasion.
Case report
A 48-year-old Japanese man underwent colonoscopy at his local clinic because a fecal occult blood test was positive, and a Type 0-Is polyp was detected in the sigmoid colon.He was referred to our hospital for endoscopic treatment.Repeat colonoscopy was performed for a detailed evaluation and showed a 25-mm Type 0-Is lesion in the sigmoid colon.The distorted lesion appeared tall, heavy, erythematous, and protruding with several shallow depressions (Figure 1a).The ridge was not tense, and the dividing lobe was maintained.The base exhibited fold convergence, but the mobility of the lesion was good.These nonmagnified endoscopic findings were thought to be compatible with intramucosal carcinoma.Magnified endoscopic images with narrow-band imaging in the depressed area demonstrated a vascular pattern of uninterrupted vascularity, varied caliber, and meandering.The surface pattern showed an uneven distribution with irregularity.Based on these findings, a JNET classification type 2B colorectal tumor was diagnosed (Figure 1b).Crystal violet staining showed a type IVv pit pattern in most areas.The pit of the depressed area exhibited slight marginal irregularity, but the devastation was not apparent.These findings were considered compatible with a type Vi pit pattern with mild irregularity (Figure 1c).The endoscopic findings suggested a low possibility of carcinoma invasion into the deep submucosa, and we thus considered that ESD was indicated for the lesion.
ESD was performed after the patient provided adequate informed consent.Shortly after starting the incision for ESD, a broadly pulled muscle layer was observed; however, the submucosal layer was almost undetectable (Figure 1d).Although the traction method was employed, it was difficult to determine the dissection line.Considering the high risk of perforation, ESD was stopped.Laparoscopic sigmoid colon resection was performed 11 days later.The histopathological assessment of the surgical specimen demonstrated that most of the lesion consisted of tubulovillous adenoma characterized by tubulovillous proliferation of columnar atypical cells with moderately enlarged nuclei.In some areas of the apex of the tumor lesion, atypical epithelium with highly enlarged and overlapping nuclei formed irregular glandular ducts, compatible with well-differentiated tubular adenocarcinoma (Figure 2).Adenomatous tissue was observed in the submucosal tissue as well as the intramucosal lesion, and it was accompanied by dense fibrosis from the front area of the adenomatous lesion to the muscularis propria and interstitial tissue that was compatible with the mucosal lamina propria.The adenoma lesion extended to the base and even existed on the originally planned line for ESD (Figure 3).Immunostaining for desmin was performed because it can provide diagnostic information for differentiating pseudoinvasion from the submucosal invasion of carcinoma.The muscularis mucosa-like structure can be observed in the interstitium of invasive carcinoma, and it appears similar to pseudoinvasion.However, the muscularis mucosa-like structure in cases of invasive carcinoma is negative on desmin immunostaining, whereas the crosses of muscularis mucosa observed in cases of pseudoinvasion are positive on desmin immunostaining. 4In the present case, we observed fractures and crosses of muscularis mucosa that were positive on desmin immunostaining (Figure 4).Finally, we diagnosed the lesion as intramucosal welldifferentiated tubular adenocarcinoma in tubulovillous adenoma with pseudoinvasion.Whether the presence of pseudoinvasion contributed to the pulled muscle layer observed during ESD, in this case, remains inconclusive.
DISCUSSION
In this case, we encountered a large 25-mm tumor in the sigmoid colon that had characteristics compatible with those of previously reported tumors with a high risk of pseudoinvasion. 3 The four main pathological features of pseudoinvasion are cystic lesion formation, a continuous adenoma lesion within the intramucosal lesion and the submucosal tissue, interstitial tissue considered to be the mucosal lamina propria around the tumor glands, and the absence of desmoplasia or a desmoplastic reaction (which is characteristic of invasive carcinoma). 5,6 In the present case, the second through fourth above-described features were observed. Additionally, the desmin immunostaining result was consistent with pseudoinvasion. It is noteworthy that the pseudoinvasion was accompanied by solid fibrosis from the front area of the pseudoinvasion to the muscularis propria in this case. There is a possibility that, if there had been no fibrosis in this area, we might have been able to better identify the submucosal layer and complete the ESD. It has been reported that one of the causes of the muscle-retracting sign (MR sign) is fibrosis caused by mechanical forces generated between the submucosa and the muscularis layer due to intestinal peristalsis. We believe that pseudoinvasion has a similar mechanism of occurrence and coexisted with the MR sign in this case. 7,8 Given that such mechanical forces may also affect the development of pseudoinvasion, the coexistence of the MR sign and pseudoinvasion could occur, although the causal link between these findings remains unestablished.
We believe the lessons to be gained from the present case are as follows.Because pseudoinvasion can exist even beyond the ESD dissection line, the diagnosis and assessment of pseudoinvasion must be carefully considered before endoscopic treatment.The surgical specimen in the present case showed the presence of tumor glands of pseudoinvasion beyond the ESD dissection line, indicating that the vertical margin would have been positive if we had continued ESD.In one study, among insufficiently treated colorectal tumors with positive lateral margins, recurrence occurred in 9.4% of cases after ≥6 months of follow-up. 9To the best of our knowledge, no clinical study has focused on the association between recurrence and a positive vertical margin.However, given the above-mentioned finding regarding positive lateral margins, we believe the risk of recurrence due to a positive vertical margin should be considered.Further studies are necessary to evaluate this risk.Some case reports have discussed the usefulness of endoscopic ultrasound (EUS) before endoscopic treatment of a colorectal tumor with pseudoinvasion. 10With EUS, the lesion of pseudoinvasion is observed mainly in the third layer (submucosa) as a hypoechoic mass with strong echo attenuation.EUS before ESD might have allowed for the prediction of pseudoinvasion in the present case.Given the known characteristics of a high risk of pseudoinvasion, preoperative EUS might be useful for a large tumor in the sigmoid colon.Determining which patients are candidates for EUS before ESD is a crucial clinical challenge.
FIGURE 1 Endoscopic findings of the lesion before and during endoscopic submucosal dissection. (a) A tall, heavy, erythematous, protruding, and distorted lesion with several shallow depressions at the apex was observed in the sigmoid colon. (b) The surface pattern showed an uneven distribution with irregularity, compatible with JNET classification type 2B. (c) A slight irregularity was observed in the depressed area, diagnosed as Vi with mild irregularity. (d) A broadly pulled muscle layer was observed, and the submucosal layer was almost undetectable.

FIGURE 2 Histological findings with hematoxylin and eosin staining. (a) Magnification of the left-side box area in panel b. In some areas of the apex of the tumor lesion, atypical epithelium with greatly enlarged and overlapping nuclei formed irregular glandular ducts, compatible with well-differentiated tubular adenocarcinoma. (b) The presence of tubular adenocarcinoma was confirmed in the area indicated by the red arc. (c) Magnification of the right-side box area in panel b. Most of the lesion consisted of tubulovillous adenoma with tubulovillous proliferation of columnar atypical cells containing moderately enlarged nuclei. (d) Magnification of the lower box area in panel b. This is a tubular adenoma with tubular growth in the submucosa. The glands are accompanied by a surrounding intramucosal interstitium.
FIGURE 3 Histological findings with hematoxylin and eosin staining and the endoscopic submucosal dissection cut line. (a) The adenoma tissue (yellow arrowheads) extended to the base, and the adenoma tissue of pseudoinvasion was observed beyond the endoscopic submucosal dissection line (red arrow). (b) Magnification of the endoscopic submucosal dissection line (red arrow). (c) Dense fibrosis was observed between the pseudoinvasion and the muscularis propria.
FIGURE 4 Histological findings with desmin immunostaining. (a) Fractures and crosses of muscularis mucosa were observed (red arrowheads). (b) Magnification of the box area in panel a.
Time-Inhomogeneous Feller-type Diffusion Process with Absorbing Boundary Condition
A time-inhomogeneous Feller-type diffusion process with linear infinitesimal drift α(t)x + β(t) and linear infinitesimal variance 2r(t)x is considered. For this process, the transition density in the presence of an absorbing boundary in the zero-state and the first-passage time density through the zero-state are obtained. Special attention is dedicated to the proportional case, in which the immigration intensity function β(t) and the noise intensity function r(t) are connected via the relation β(t) = ξ r(t), with 0 ≤ ξ < 1. Various numerical computations are performed to illustrate the effect of the parameters on the first-passage time density, by assuming that α(t), β(t) or both of these functions exhibit some kind of periodicity.
with A1(x0, t0) and A2(x0, t0) given in (1), to be solved by imposing the initial delta condition (4) and the absorbing boundary condition (5) in the zero-state. Furthermore, consider the random variable describing the first-passage time (FPT) through the zero-state starting from X(t0) = x0 > 0, and denote its probability density by g(0, t|x0, t0). We note that the FPT density g(0, t|x0, t0) is not affected by the boundary condition on the zero-state, provided that it is attainable. The problem of determining FPT densities for the Feller-type diffusion process arises in a variety of fields, including neurobiology, population dynamics, queueing systems and mathematical finance (cf., for instance, Linetsky [30], Masoliver and Perelló [35], Buonocore et al. [36], D'Onofrio et al. [37], Giorno et al. [38,39], Albano and Giorno [40], Di Nardo and D'Onofrio [41]). For instance, in population dynamics g(0, t|x0, t0) describes the extinction density, whereas in queueing systems it represents the busy-period density. Lavigne and Roques in [18] focus on the distribution of the extinction times of a population whose size is described by a time-inhomogeneous Feller-type diffusion process with infinitesimal drift A1(x, t) = α(t) x and infinitesimal variance A2(x, t) = σ^2 x, where α(t) is a continuous function and σ^2 is a positive constant.
The functions (2) and (7) are intimately related; indeed, one has

g(0, t|x0, t0) = − ∂/∂t ∫_0^{+∞} f_a(x, t|x0, t0) dx.    (8)

Relation (8) shows that the determination of g(0, t|x0, t0) requires the explicit evaluation of the transition pdf f_a(x, t|x0, t0) in the presence of an absorbing boundary at the zero-state.
Plan of the Paper
The paper is organized into five sections and seven appendices, in which the proofs of the main results are reported. In Sect. 2, for the time-inhomogeneous Feller-type diffusion process X(t) with infinitesimal moments (1), we give some preliminary results concerning the Laplace transform (with respect to x0) of the transition pdf f_a(x, t|x0, t0) in the presence of an absorbing boundary in the zero-state. The proportional case, in which the immigration intensity function β(t) and the noise intensity function r(t) are related as β(t) = ξ r(t), with 0 ≤ ξ < 1, is also analyzed. In Sect. 3, the transition pdf f_a(x, t|x0, t0) is obtained for the process (1) in the general case, by distinguishing the cases x = 0 (Sect. 3.1) and x > 0 (Sect. 3.2). In Sect. 4, we focus on the FPT of X(t) through the zero-state in the general case and we determine the expression of the FPT pdf g(0, t|x0, t0). In Sects. 3 and 4, we also show how the results of the proportional case can be derived from the general case. In Sect. 5, various numerical computations are performed, making use of MATHEMATICA, to illustrate the effect of periodic intensity functions on the FPT pdf g(0, t|x0, t0). The FPT mean t1(0|x0, t0) and the coefficient of variation CV(0|x0, t0) = sqrt(Var(0|x0, t0))/t1(0|x0, t0) are also analyzed.
Preliminary Results
In this section, we determine the Laplace transform (according to x 0 ) of the transition pdf f a (x, t|x 0 , t 0 ) in the general case. Furthermore, the explicit expressions of the transition pdf and of the FPT density through the zero-state are obtained in the proportional case.
Laplace Transform
For t ≥ t 0 and x ≥ 0, we consider the Laplace transform: We determine Z a (x, t|s, t 0 ) so that, by taking its inverse Laplace transform, we obtain f a (x, t|x 0 , t 0 ). Multiplying both sides of (3) by e −sx 0 , integrating with respect to x 0 over the interval [0, +∞) and making use of the boundary condition (5), we have the following partial differential equation to solve with the initial condition derived from (9) by using the initial condition (4).
For t ≥ t 0 , we have: where Proof The proof is given in Appendix A.
Proportional Case
For all t ≥ 0, we suppose that the continuous functions β(t) and r(t) are proportional, i.e. β(t) = ξ r(t), with 0 ≤ ξ < 1. (14)
Proposition 2 Under the assumption (14), for t ≥ t 0 one has: Furthermore, the transition pdf of X (t) in the presence of an absorbing boundary in the zero-state is: with A(t|t 0 ) and R(t|t 0 ) defined in (13) and where denotes the modified Bessel function of the first kind.
Proof The proof is given in Appendix B.
Note that the first of (16) follows by taking the limit as x ↓ 0 in the second, recalling that for fixed ν and z → 0 one has (cf. Abramowitz and Stegun [42], p. 375, no. 9.6.7): If (14) holds, for t ≥ t0, x > 0 and x0 > 0, from (16) it follows: Proposition 3 Under the assumption (14), for t ≥ t0 and x0 > 0 one has: with R(t|t0) given in (13) and where γ(a, z) = ∫_0^z e^{−y} y^{a−1} dy, Re a > 0, (20) denotes the incomplete gamma function.
General Case
We assume that α(t), β(t) and r(t) are continuous functions such that α(t) ∈ R, β(t) ∈ R, r(t) > 0 and β(t) ≤ ξ r(t), with 0 ≤ ξ < 1, and with A(t|t0) and R(t|t0) given in (13). We note that V_a(x, t|s, t0) does not depend upon β(t). Therefore, to obtain the transition pdf f_a(x, t|x0, t0) for X(t) with infinitesimal moments (1), we proceed as follows: (1) we determine the transition pdf f_a(0, t|x0, t0) for x0 > 0 and t ≥ t0 by taking the inverse Laplace transform of Z_a(0, t|s, t0); (2) we find the inverse Laplace transform v_a(x, t|x0, t0) of (29) and we calculate the transition pdf f_a(x, t|x0, t0) as a convolution, with respect to x0, between f_a(0, t|x0, t0) and the function v_a(x, t|x0, t0) for x > 0, x0 > 0 and t ≥ t0.
General Case: x = 0
In this section, we obtain the transition pdf in the presence of an absorbing boundary in the zero-state when the process X (t) reaches x = 0 at time t ≥ t 0 . By setting x = 0 in (12), for t ≥ t 0 we obtain: with A(t|t 0 ) and R(t|t 0 ) defined in (13).
In the sequel, we denote by B n (d 1 , d 2 , . . . , d n ) the complete Bell polynomials, recursively defined as follows: Proposition 5 Under the assumption of Proposition 1, for t ≥ t 0 and x 0 > 0 the transition pdf of the time-inhomogeneous Feller-type diffusion process X (t) with an absorbing boundary in the zero-state is where with A(t|t 0 ) and R(t|t 0 ) defined in (13), B n (d 1 , d 2 , . . . , d n ) given in (31) and in (32), and denoting the Laguerre polynomials.
Proof The proof is given in Appendix C.
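Since the expression in Proposition 5 involves the complete Bell polynomials, a short numerical sketch may be helpful. The recursion implemented below is the standard one (B_0 = 1, with B_{n+1} built from binomially weighted lower-order terms); we assume it coincides with the definition referenced in (31), which is not reproduced in the extracted text.

```python
# Sketch: complete Bell polynomials B_n(d_1, ..., d_n) via the standard recursion
#   B_0 = 1,
#   B_{m+1}(d_1,...,d_{m+1}) = sum_{k=0}^{m} C(m, k) * B_{m-k}(d_1,...,d_{m-k}) * d_{k+1}.
# This is assumed to match the recursive definition referenced as (31) in the text.
from math import comb

def complete_bell(d):
    """Return [B_0, B_1, ..., B_n] evaluated at the arguments d = [d_1, ..., d_n]."""
    n = len(d)
    B = [1.0] + [0.0] * n
    for m in range(n):                    # builds B_{m+1}
        B[m + 1] = sum(comb(m, k) * B[m - k] * d[k] for k in range(m + 1))
    return B

# Quick check: with d_i = 1 for all i, B_n equals the n-th Bell number (1, 1, 2, 5, 15, ...).
print(complete_bell([1.0, 1.0, 1.0, 1.0]))   # -> [1.0, 1.0, 2.0, 5.0, 15.0]
```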
General Case: x > 0
In this section, we obtain the transition pdf f_a(x, t|x0, t0) for x > 0. Proposition 6 Under the assumption of Proposition 1, for x0 > 0 and t ≥ t0, one has: with A(t|t0) and R(t|t0) defined in (13), where δ(x) denotes the Dirac delta function and I_ν(z) represents the modified Bessel function of the first kind.
Proof The proof is given in Appendix D. (40) is the sum of two terms. The second term in (40) identifies For x > 0, the transition pdf f a (x, t|x 0 , t 0 ) can be obtained via a convolution, according to x 0 , between the pdf f a (0, t|x 0 , t 0 ) and the function v a (x, t|x 0 , t 0 ), determined in Propositions 5 and 6, respectively: Proposition 7 Under the assumption of Proposition 1, for t ≥ t 0 , x > 0 and x 0 > 0 one has: with A(t|t 0 ) and R(t|t 0 ) given in (13) and Ψ (t|z, t 0 ) defined in (34).
Note that, by taking the limit as x ↓ 0 in (42), we obtain (33).
The First-Passage Time Through the Zero-State
We now focus on the distribution function of the FPT through the zero-state for the timeinhomogeneous Feller-type diffusion process X (t), with infinitesimal moments (1), when α(t), β(t) and r (t) are continuous functions such that α(t) ∈ R, β(t) ∈ R, r (t) > 0, β(t) ≤ ξ r (t), with 0 ≤ ξ < 1. The FPT problem of X (t) through the zero-state can be studied starting from Eq. (8) and making use of (42).
Proof The proof is given in Appendix E.
Proof
The proof is given in Appendix F.
Special Cases
Under the assumption (14), we analyze the cases in which the growth intensity function α(t), the immigration intensity function β(t), or both of them have some kind of periodicity. These cases are of interest in various applied fields, such as population growth and queueing systems. Indeed, periodic immigration intensity functions play an important role in describing the evolution of systems influenced by seasonal immigration or other regular environmental cycles. Furthermore, periodic growth intensity functions express the existence of fluctuations in population dynamics and the presence of rush hours occurring on a daily basis in queueing systems.
Periodic Immigration Intensity Function
We consider the time-inhomogeneous Feller-type process X(t) defined by (53), with α ∈ R and 0 ≤ ξ < 1, where r(t) is the periodic function given in (54), ν > 0 is the average of r(t) over its period Q, and c is the amplitude of the oscillations, with 0 ≤ c < 1. From (13), for t ≥ t0 one has A(t|t0) = α (t − t0) and R(t|t0) as given in (55). From (55) one then obtains, by virtue of (23), that the FPT through the zero-state is a certain event for α ≤ 0. Moreover, for α = 0 the FPT moments (26) are divergent. In Figs. 1, 2 and 3, the FPT distribution G(0, t|x0, t0) = 1 − ∫_0^{+∞} f_a(x, t|x0, t0) dx, obtained making use of (19), and the FPT pdf g(0, t|x0, t0), given in (22), are plotted as functions of t for the diffusion process (53) for some choices of the parameters. In Fig. 4, the mean t1(0|x0, t0) and the coefficient of variation CV(0|x0, t0), obtained making use of (26), are plotted as functions of ν for ξ = 0, 0.3, 0.6. We note that as ν increases, the FPT mean t1(0|x0, t0) decreases, whereas the coefficient of variation increases. Instead, as ξ increases in [0, 1), the FPT mean increases and the coefficient of variation decreases, owing to the rise of the immigration intensity function.
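As an independent, purely numerical cross-check of these closed-form results, the FPT distribution can also be estimated by simulating sample paths of the process with drift α x + ξ r(t) and infinitesimal variance 2 r(t) x, with absorption at the zero-state. The sketch below uses a simple Euler-Maruyama scheme; the specific periodic form assumed for r(t), namely ν(1 + c sin(2πt/Q)), is our reading of (54) (not reproduced in the extracted text) and all parameter values are illustrative only.

```python
# Monte Carlo sketch: first-passage time through zero for a time-inhomogeneous
# Feller-type process
#   dX = (alpha * X + xi * r(t)) dt + sqrt(2 * r(t) * X) dW,   X(t0) = x0 > 0.
# Assumed periodic noise intensity (our reading of (54)):
#   r(t) = nu * (1 + c * sin(2 * pi * t / Q)).
import numpy as np

rng = np.random.default_rng(0)
alpha, xi = -0.5, 0.3              # growth parameter and proportionality constant
nu, c, Q = 1.0, 0.5, 2.0           # average, amplitude and period of r(t)
x0, t0, t_max, dt = 2.0, 0.0, 30.0, 1e-2
n_paths = 5_000

def r(t):
    return nu * (1.0 + c * np.sin(2.0 * np.pi * t / Q))

x = np.full(n_paths, x0)
fpt = np.full(n_paths, np.inf)
alive = np.ones(n_paths, dtype=bool)
t = t0
while t < t_max and alive.any():
    rt = r(t)
    drift = alpha * x[alive] + xi * rt
    diff = np.sqrt(np.maximum(2.0 * rt * x[alive], 0.0))
    x[alive] = x[alive] + drift * dt + diff * np.sqrt(dt) * rng.standard_normal(alive.sum())
    t += dt
    absorbed = alive & (x <= 0.0)
    fpt[absorbed] = t
    alive &= ~absorbed

# Empirical FPT distribution G(0, t | x0, t0) at a few time points.
for tt in (5.0, 10.0, 20.0, 30.0):
    print(f"G(0, {tt:>4.1f}) ~ {(fpt <= tt).mean():.3f}")
```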
Periodic Growth Intensity Function
We consider the time-inhomogeneous Feller-type process X(t) defined by (56), with r > 0 and 0 ≤ ξ < 1, where α(t) is the periodic function given in (57), η ∈ R is the average of α(t) over its period Q1, and b determines the amplitude of the oscillations, with 0 ≤ b < 1. In Fig. 5, the intensity function (57) is plotted as a function of t for some choices of the parameters η, b and Q1; the dotted lines refer to the average cases, in which α(t) = η with η = −5 (bottom) and η = 5 (top). From (13), for t ≥ t0 one has A(t|t0) as given in (58) and R(t|t0) as given in (59). From (59) one then obtains, by virtue of (23), that the FPT through the zero-state is a certain event for η ≤ 0. Moreover, for η = 0 the FPT moments (26) are divergent.
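To see concretely how the sign of η controls whether the FPT is a certain event, A(t|t0) and R(t|t0) can be evaluated numerically. In the sketch below we take α(t) = η(1 + b sin(2πt/Q1)) as an assumed reading of (57), and we assume that R(t|t0) has the usual Feller-type form ∫_{t0}^{t} r(τ) exp(−A(τ|t0)) dτ referenced in (13); both assumptions and all parameter values should be checked against the original formulas. Under these assumptions, R(t|t0) keeps growing with t when η ≤ 0, consistent with the FPT through the zero-state being a certain event, while it saturates for η > 0.

```python
# Sketch: numerical evaluation of A(t|t0) and R(t|t0) for a periodic growth intensity.
# Assumed forms (to be checked against (13) and (57) in the original paper):
#   alpha(t) = eta * (1 + b * sin(2 * pi * t / Q1)),
#   A(t|t0)  = integral_{t0}^{t} alpha(u) du,
#   R(t|t0)  = integral_{t0}^{t} r(tau) * exp(-A(tau|t0)) dtau,  with r(t) = r constant here.
import numpy as np

def A_and_R(eta, b, Q1, r_const, t0, t, n=100_000):
    u = np.linspace(t0, t, n)
    alpha = eta * (1.0 + b * np.sin(2.0 * np.pi * u / Q1))
    dA = 0.5 * (alpha[1:] + alpha[:-1]) * np.diff(u)          # trapezoid increments
    A = np.concatenate(([0.0], np.cumsum(dA)))                # cumulative A(u|t0)
    integrand = r_const * np.exp(-A)
    R = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(u))
    return A[-1], R

for eta in (-1.0, 0.0, 1.0):
    A_T, R_T = A_and_R(eta=eta, b=0.5, Q1=1.0, r_const=1.0, t0=0.0, t=50.0)
    print(f"eta = {eta:+.1f}:  A(50|0) = {A_T:9.2f},  R(50|0) = {R_T:.3e}")
```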
In Fig. 6, the FPT pdf g(0, t|x 0 , t 0 ), given in (22), is plotted as function of t for the process (56) for some choices of parameters. Instead, in Fig. 7, the mean t 1 (0|x 0 , t 0 ) and the coefficient of variation CV(0|x 0 , t 0 ), obtained making use of (26), are plotted as function of r for ξ = 0, 0.3, 0.6. We note that as r increases, the FPT mean t 1 (0|x 0 , t 0 ) decreases, whereas the coefficient of variation increases. Moreover, the FPT mean and the coefficient of variation increase with ξ in [0, 1).
Periodic Immigration and Growth Intensity Functions
We consider the time-inhomogeneous Feller-type process X(t) defined by (60), with 0 ≤ ξ < 1, r(t) defined in (54) and α(t) given in (57). Recalling (13), for t ≥ t0 one obtains A(t|t0) as given in (58) and R(t|t0) as given in (61); the explicit expression of R(t|t0) in (61) is obtained in Appendix G. We note that lim_{t→+∞} R(t|t0) diverges for η ≤ 0, so that, due to (23), the FPT through the zero-state is a certain event for X(t).
In Fig. 8, the FPT pdf g(0, t|x0, t0), given in (22), is plotted as a function of t for the process (60) for some choices of the parameters. Comparing Figs. 6 and 8, we note the effect of the different periodicities of the growth intensity function α(t), with Q1 = 1, and of the immigration intensity function β(t) = ξ r(t), with Q = 2. In Fig. 9, the mean t1(0|x0, t0) and the coefficient of variation CV(0|x0, t0), obtained making use of (26), are plotted as functions of ν for ξ = 0, 0.3, 0.6. As ν increases, the FPT mean t1(0|x0, t0) decreases whereas the coefficient of variation increases. Instead, as ξ increases in [0, 1), both the FPT mean and the coefficient of variation increase.
Concluding Remarks
In this paper, we have considered a time-inhomogeneous Feller-type diffusion process X(t) with infinitesimal moments (1), and we have assumed that the zero-state represents an absorbing boundary for X(t). This process plays a relevant role in different fields, including physics, biology, neuroscience, finance and others. For instance, in population biology α(t) represents the growth intensity function and can be positive, negative or zero at different time instants, β(t) describes the immigration intensity function, and the noise intensity function r(t) takes into account the environmental fluctuations. For this process, the transition density f_a(x, t|x0, t0) in the presence of an absorbing boundary in the zero-state and the FPT density g(0, t|x0, t0) from X(t0) = x0 to the zero-state have been obtained. Special attention has been dedicated to the proportional case, in which the immigration intensity function and the noise intensity function are related as β(t) = ξ r(t), with 0 ≤ ξ < 1. Various numerical computations have been performed to illustrate the effect of periodic intensity functions on the FPT pdf g(0, t|x0, t0), by assuming that α(t), β(t) or both of these functions exhibit some kind of periodicity.
with the initial conditions: so that i.e. the condition (5) is satisfied.
Quantization as Asymptotics of Diffusion Processes in the Phase Space
This work is an extended version of the paper arXiv:0803.2669v1 [math-ph], in which the main results were announced. We consider a certain classical diffusion process for a wave function on the phase space. It is shown that at a time of order 10^{-11} sec this process converges to a process considered by quantum mechanics and described by the Schrodinger equation. This model studies the probability distributions in the phase space corresponding to the wave functions of quantum mechanics. We estimate the parameters of the model using the Lamb-Retherford experimental data on the shift in the spectrum of the hydrogen atom and the assumption on the heat reason of the considered diffusion process. In the paper it is shown that the quantum mechanical description of processes can arise as an approximate description of more exact models. For the model considered in this paper, this approximation arises when the Hamilton function changes slowly under deviations of coordinates, momenta, and time on intervals whose length is of the order determined by the Planck constant and by the diffusion intensities.
Introduction
In this paper we propose a model which, on the one hand, allows one to estimate the probability distribution of a quantum particle in the phase space in a low-temperature heat field. For the first time this problem was solved by Wigner [3], but he constructed "quasi-distributions" on the phase space which can be negative and hence have no direct physical meaning. On the other hand, the proposed model yields one more construction of quantization of mechanical systems and can be used in a new approach to the foundation of the classical quantization procedure. This is an old problem. Various approaches to this problem, in particular probabilistic ones, can be found in [4,5,6,7,8,9]. These works essentially influenced the author during the construction of the present model.
In this paper we consider a classical model of a diffusion process for a wave (complex valued) function on the phase space. The analysis of the differential equation of the model shows that the motion in the model splits into a rapid and a slow motion. The result of the rapid motion is that the system, starting from an arbitrary wave function on the phase space, goes to a function belonging to a certain distinguished subspace. The elements of this subspace are parameterized by the wave functions depending only on the coordinates. The slow motion along the subspace is described by the Schrodinger equation.
Using the assumption of the thermal origin of the diffusions and the correspondence of the consequences of the model with the known physical experiments of Lamb and Retherford [10] (the Lamb shift in the spectrum of the hydrogen atom), we estimate the diffusion coefficients and the time of the transition process from the classical description, in which the Heisenberg indeterminacy principle does not hold, to the quantum description in which the Heisenberg principle already holds. The time of the transition process has order (1/T) · 10^{-11} sec, where T is the temperature of the medium.
The results of this work have been announced in the paper [1]. Proofs of Theorems 4 and 5 are instructive but rather technical, hence they are given in the Appendix. The estimate of the parameters of the model is also given in the Appendix.
The author is grateful to professor G. L. Litvinov, who was attentive to this work, made a lot of editorial comments, and stimulated an essential revision of the text, and to professor A. V. Stoyanovsky, who translated this paper to English.
Description of the model
We consider a mathematical model of a process whose state at each moment of time is given by a wave function, which is a complex valued function ϕ(x, p), where (x, p) ∈ R^{2n}, and n is the dimension of the configuration space. In contrast to quantum mechanics, where the wave function depends only on coordinates or only on momenta, in our case the wave function depends both on coordinates and on momenta. As in quantum mechanics, it is assumed that the superposition principle holds for the wave functions, and the probability density ρ(x, p) on the phase space, corresponding to the wave function ϕ(x, p), is given by the standard formula ρ(x, p) = ϕ*(x, p)ϕ(x, p) = |ϕ(x, p)|^2. (1) In the present work we consider a classical model of a diffusion process for the wave function ϕ(x, p) on the phase space. It is assumed that each complex vector of the wave function is simultaneously in 4 motions: (i) the base point of the complex vector moves along the classical trajectory given by the Hamilton function H(x, p); (ii) the base point of the vector moves randomly with respect to coordinates and momenta, being in a diffusion process with constant diffusion coefficients a^2 and b^2 with respect to coordinates and momenta, respectively; (iii) the base point of each vector moves along a random trajectory as a result of the motions described in the two preceding points, and the vector itself rotates with very large constant angular velocity ω = mc^2/h in the coordinate system related with this point, where m is the mass of the particle, c is the light velocity, and h is the Planck constant; (iv) the length of all complex vectors of the wave function at the moment t of time is multiplied by exp(abnt/h) (this is a purely technical requirement which does not affect the relative probabilities of position of the particle in the phase space).
It is assumed that the wave vector ϕ(x, p, t) at the point (x, p) at the moment t of time equals, by the superposition principle, to the sum of wave vectors given by the distribution of vectors ϕ(x, p, 0) at the initial moment of time which get to the point (x, p) at the moment t due to the motions described above.
The mathematical model of the process
Consider the diffusion process on the phase space in which the wave function ϕ(x, p, t) at the moment t satisfies the differential equation where H(x, p) is the Hamilton function; a 2 and b 2 are the diffusion coefficients with respect to coordinates and momenta, respectively. If we omit the last summand in equation (2), then we obtain a first order partial differential equation ∂ϕ/∂t = Aϕ, where This part of equation (2) Note that in the case when the configuration space is three dimensional where τ = mc 2 dt/H, in accordance with the formulas of special relativity theory, is the proper time in the coordinate system related with the particle moving with the momentum p. I. e. in this case, the vector whose base point moves along the classical trajectory, rotates with the constant angular velocity ω = mc 2 /h in the coordinate system related with this point. On the contrary, if in the right hand side of equation (2) we leave only the last summand of the form (3), then we obtain the equation This equation describes the diffusion component of the motion of vectors ϕ(x, p, t) in the phase space. In this motion, the base points of the vectors move according to the classical homogeneous diffusion process with the diffusion coefficients with respect to coordinates and momenta equal to a 2 and b 2 , respectively. And the vector itself is parallel transported during small random transports from a point (x, p) to the point (x + dx, p + dp), and its length at moment t is multiplied by exp(abnt/h). Note that the parallel transport of vectors on the phase space is given by a connection expressed by the following formula: L (dx,dp) ϕ(x, p) − ϕ(x, p) ≈ −(i/h)ϕ(x, p)pdx, where L (dx,dp) ϕ(x, p) is the parallel transport of the vector ϕ(x, p) from the point (x, p) along the infinitely small vector (dx, dp).
In the particular case when the configuration space is three dimensional, such the connection on the phase space is related to synchronization of moving clocks at the points of the phase space. Indeed, if a particle with coordinates x = (x 1 ; x 2 ; x 3 ) moves with the velocity v = (v 1 ; v 2 ; v 3 ), then, according to formulas of special relativity theory, proper time is expressed through the observer time t by the formula where xv = x 1 v 1 + x 2 v 2 + x 3 v 3 is the scalar product of the vectors x and v, and c is the velocity of light.
For a free particle with the momentum p = (p 1 ; p 2 ; p 3 ) and the stationary mass m, the energy E = c √ p 2 + m 2 c 2 and, respectively, Substituting these expressions into (6), after computations we obtain: Consider the distribution of complex vectors ϕ(x, p, t) on the phase space, rotating with constant angular velocity ω = mc 2 /h in the proper time τ , which, at the moment τ = 0, are equal to one and the same vector ϕ 0 . We have Substituting formula (8) into this expression, we obtain Hence, if L (△x,0) ϕ(x, p, t) is the shift of the vector ϕ(x, p, t) = ϕ 0 exp(−imc 2 τ /h) by the vector △x along coordinates x without change of proper time, then In the limit of infinitely small △x we obtain the required formula for this case: On the other hand, if we have a shift L (0,△p) of the vector ϕ(x, p, t) = ϕ 0 exp(−imc 2 τ /h) along momentum by △p without change of proper time, then, since in the special relativity approximation, acceleration does not change the proper time of a particle, we have the equality L (0,△p) ϕ(x, p, t) = ϕ(x, p, t) and L (0,dp) ϕ(x, p) − ϕ(x, p) = 0.
Note that under these assumptions, the derivation with respect to the vector of infinitely small shift along the k-th coordinate corresponds to the differential operator D x k = ∂/∂x k − ip k /h, and the derivation with respect to the shift along the k-th momentum corresponds to the usual differential operator D p k = ∂/∂p k , where k = 1, ..., n.
Note also that these operators of shift along coordinates and momenta do not commute. The commutators of these differential operators read [D_{p_k}, D_{x_k}] = −i/h and [D_{p_k}, D_{x_j}] = 0, where k ≠ j and k, j = 1, ..., n.
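These commutation relations are straightforward to check symbolically. The sketch below (sympy, one pair of variables, so only the k = j relation is exercised; the symbol h stands for the reduced Planck constant, as in the text) applies the two shift operators to a generic test function and confirms that the commutator acts as multiplication by −i/h.

import sympy as sp

# One-dimensional check of [D_p, D_x] = -i/h for D_x = d/dx - i p/h and D_p = d/dp.
x, p = sp.symbols('x p', real=True)
h = sp.symbols('h', positive=True)          # reduced Planck constant
f = sp.Function('f')(x, p)

Dx = lambda g: sp.diff(g, x) - sp.I * p / h * g
Dp = lambda g: sp.diff(g, p)

print(sp.simplify(Dp(Dx(f)) - Dx(Dp(f))))   # -I*f(x, p)/h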
Thus, the shifts along coordinates and momenta of wave functions on the phase space realize a representation of the Heisenberg group.
Analysis of the diffusion component of the equation
Consider the diffusion equation (5) in more detail. This equation can be represented as follows: where It is natural to call the operator ∆ a,b by the diffusion operator for the representation of the Heisenberg group with the diffusion intensities a and b with respect to coordinates and momenta, respectively.
Let us look for a solution of equation (12) in the form Substituting this expression into equation (12), dividing both parts of the equality by exp(ixp/h) and transferring the summand (nab/h)ϕ 0 into the left hand side, we obtain the equation where ϕ 0 = ϕ 0 (x, p, t) is some function.
To solve equation (14), let us decompose the function ϕ_0(x, p, t) into the Fourier integral with respect to p, i.e., let us represent the function ϕ_0(x, p, t) in the form (16). Substituting this expression for ϕ_0(x, p, t) into the equation (14), we obtain that ψ_0(x, y, t) satisfies the equation (17). The right hand side of this equation is a self adjoint operator with discrete spectrum consisting of negative numbers. Indeed, the equation for eigenvalues of this operator reads

∑_{k=1}^{n} [ a^2 ∂^2χ/∂x_k^2 − (b^2/h^2)(x_k − y_k)^2 χ ] = λχ, (18)

where χ = χ(x, y) is a function of x and y. This equation is the stationary Schrodinger equation for the harmonic oscillator, and it is well studied (see, for instance, [13], p. 94). In particular, it is known that on the set of functions which tend to zero as x tends to infinity, equation (18) has a discrete spectrum consisting of negative eigenvalues λ_1 > λ_2 ≥ .... The greatest eigenvalue λ_1 = −nab/h corresponds to the eigenfunction χ_1(x, y) = (b/(aπh))^{n/4} exp(−b(x − y)^2/(2ah)). The next eigenvalues are less than λ_1, and the difference is greater than or equal to ab/h.
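The spectral claims for (18) are easy to check numerically in one dimension. The sketch below (assuming n = 1, y = 0, and units a = b = h = 1; the grid parameters are arbitrary discretization choices) builds a finite-difference approximation of a^2 d^2/dx^2 − (b^2/h^2) x^2; its largest eigenvalue should be close to −ab/h = −1 and the next one close to −3ab/h, so the gap is 2ab/h ≥ ab/h, as stated above.

import numpy as np

# Finite-difference spectrum of a^2 d^2/dx^2 - (b^2/h^2) x^2 (n = 1, a = b = h = 1, y = 0).
a, b, h = 1.0, 1.0, 1.0
N, L = 1200, 15.0
xs = np.linspace(-L, L, N)
dx = xs[1] - xs[0]
lap = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)) / dx**2
op = a**2 * lap - np.diag((b**2 / h**2) * xs**2)
ev = np.sort(np.linalg.eigvalsh(op))
print(ev[-1], ev[-2])   # approximately -1.0 and -3.0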
Since the eigenfunctions χ k (x, y) of the operator (18) form a complete system of functions in the class of functions tending to zero as x tends to infinity, an arbitrary function ψ 0 (x, y, t) from this class can be represented as a series where c k (y, t) = R n ψ 0 (x, y, t)χ k (x, y)dx (19) are the coefficients of the decomposition of the function ψ 0 (x, y, t) with respect to eigenfunctions χ k (x, y). Substituting the expression of the function ψ 0 (x, y, t) in the form of this series into equation (17), we obtain that this equation in the orthonormal basis of eigenfunctions χ k (x, y), k = 1, 2, . . ., splits into an infinite system of equations: where λ 1 + nab/h = 0 and λ k + nab/h ≤ −ab/h for k > 1.
Hence c 1 (y, t) = c 1 (y, 0), and c k (y, t) = c k (y, 0) exp((λ k + nab/h)t) exponentially decay with time for k > 1. Hence the summand in ψ 0 corresponding to the first eigenvalue will give the main contribution into the function ψ 0 after time of order (h/ab).
Respectively, since by definition ϕ 0 (x, p, t) = Fhψ 0 (x, y, t) and Fourier transform is continuous, we have Fhψ 0 (x, y, t) = Fh(c 1 (y, 0)χ 1 (x, y)). (21) Since we will not use other eigenfunctions, introduce the notation To make the formulas shorter, let us also denote ψ(y) def = c 1 (y, 0), where c 1 (y, 0) is expressed by the formula (19) with k = 1 and t = 0. That is, Since by formula (13) ϕ(x, p, t) = ϕ 0 (x, p, t) exp(ixp/h), then by formulas (21), (22) and using the notation ψ(y) def = c 1 (y, 0) above and also the equality (23) and notation (16), we obtain the following theorem. Theorem 1. Let ϕ(x, p, 0) be an arbitrary function such that Fourier transform of the function ϕ(x, p, 0) exp(−ixp/h) with respect to p tends to zero as x → ∞. Then the solution ϕ(x, p, t) of the diffusion equation (5) exponentially with time (with the number in the exponent equal to −abt/h) tends to a stationary solution of the form and Note that χ 2 (x − y) is the probability density of the normal distribution with respect to x with the mathematical expectation y and dispersion ah/(2b). If the quantity ah/(2b) is small, then the function χ 2 (x − y) is close to the delta function of x − y.
The composition of expressions (25) and (24) yields a projector P 0 form the space of wave functions defined on the phase space onto certain subspace. The elements of this subspace are parameterized by functions of the form ψ(y), where y ∈ R n , i. e., by wave functions on the configuration space.
Theorem 2. The projection operator P 0 given by composition of expressions (25) and (24), has the form The operator P 0 is self-adjoint and commutes with the operator ∆ a,b . Proof of this theorem is obtained by substitution of formulas (25) and (26) into (24), by an algebraic transformation of the number in the exponent, and by computation of the integral over y. The integral over y is the Fourier transform of the exponent of a quadratic polynomial, whose analytical expression is known. The author performed the computations using the system Mathematica [14], supporting symbolic mathematical computations.
The commutativity of the operators P 0 and ∆ a,b follows from the fact that the orthogonal projector P 0 distinguishes the subspace of eigenvectors of the self-adjoint operator ∆ a,b with the zero eigenvalue.
Formulas (24) and (1) imply the following statement. Theorem 3. If ψ(x) is a wave function on the configuration space and ϕ(x, p) is the wave function on the phase space corresponding to it by formula (24), then the probability density in the phase space ρ(x, p) = |ϕ(x, p)| 2 is expressed by the formula In contrast to quasi-distributions [3], the density ρ(x, p) in the phase space, given by expression (28), is always nonnegative. Its expression differs from the expression for the Wigner function by exponents under integral, which give the densities of distributions close to delta functions. To prove Theorem 3, one should substitute into formula (1) expression (24). We obtain where χ(x − y) is given by relation (26). After substitution of (26) into (29), the change of coordinates y = x + (x ′′ − x ′ )/2 and y ′ = x + (x ′′ + x ′ )/2 under integral and a transformation of the numbers in the exponents, we obtain formula (28).
The algebra of observables given by real functions on the phase space but averaged over the probability densities of the form (28), has been studied in [15].
The function ρ(x) of density of probability distribution in the configuration space is expressed through the density ρ(x, p) in the phase space by the formula ρ(x) = R n ρ(x, p)dp. Hence, integrating expression (29) over p, we obtain the following statement.
Corollary 1. If ψ(x) is a wave function on the configuration space, then the corresponding probability density ρ(x) in the configuration space is given by the formula (30), where χ(x − y) is given by (26). That is, ρ(x) is obtained from |ψ(x)|^2 by smoothing (convolution) with the density of the normal distribution with dispersion ah/(2b), and the accuracy of defining the coordinate is bounded by the quantity ∼ √(ah/(2b)).
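A minimal numerical illustration of Corollary 1 (one dimension, with the dispersion ah/(2b) set to 0.5 in arbitrary units): take a normalized ψ(x), convolve |ψ(x)|^2 with the normal density of that dispersion, and check that the total probability is preserved. The wave function used below is purely illustrative and is not taken from the text.

import numpy as np

# Smoothing of |psi|^2 by a normal density with dispersion a*h/(2b) (here set to 0.5).
var = 0.5
xs = np.linspace(-12.0, 12.0, 4001)
dx = xs[1] - xs[0]

psi = np.exp(-(xs - 2.0) ** 2) + np.exp(-(xs + 2.0) ** 2)      # illustrative wave function
psi /= np.sqrt(np.trapz(np.abs(psi) ** 2, xs))                 # normalize it

kernel = np.exp(-xs ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
rho = np.convolve(np.abs(psi) ** 2, kernel, mode='same') * dx  # rho(x) of Corollary 1
print(np.trapz(rho, xs))                                       # ~1.0: probability is preserved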
As is known [16], in quantum electrodynamics the minimal error of measuring the coordinates of an electron in the stationary system is bounded by the quantity h/(mc), where m is the mass of the electron and c is the light velocity. Hence the statement of Corollary 1, although not corresponding to non-relativistic quantum mechanics (in which it is assumed that coordinates can be measured with any degree of exactness), does not contradict a more exact theory, quantum electrodynamics.
If one assumes that the diffusion is induced by heat action on the electron, then the diffusion coefficients with respect to coordinates and momenta are expressed in statistical physics (see, for example, [17], Ch. 7, §4 and §9) through the temperature T by the formulas a^2 = kT/(mγ) and b^2 = γkTm, where k is the Boltzmann constant, m is the mass of the electron, and γ is the friction coefficient of the medium per unit of mass. Hence a/b = (γm)^{-1} and ab = kT. That is, in this case, the quantity a/b, which enters expression (26) and determines the dispersion of smoothing in Corollary 1, does not depend on the temperature. On the other hand, the time t of the transformation process determined in Theorem 1 has the form t ∼ h/(ab) = h/(kT) = T^{-1} · 7.638 · 10^{-12} sec. More detailed formulas for the estimate of the quantity h/(ab) are given in Appendix 3.
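The numerical value quoted here is just h/k evaluated with the standard values of the constants (h standing for the reduced Planck constant, as elsewhere in the text); a one-line check in SI units:

hbar = 1.054571817e-34   # J s (reduced Planck constant)
k_B  = 1.380649e-23      # J / K (Boltzmann constant)
print(hbar / k_B)        # ~7.64e-12 s*K, i.e. t ~ (1/T) * 7.64e-12 sec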
Analysis of the model of the process
Let us return to the study of the main equation (2). Taking into account the estimate made at the end of the previous subsection, let us consider the quantity h/(ab) in equation (2) as a small parameter, and let us assume that coordinates and momenta change little during this time in the classical motion defined by the Hamiltonian H(x, p); we also assume that the function H(x, p) and all its derivatives grow at infinity no faster than a polynomial.
Theorem 4. The motion described by equation (2) asymptotically splits, as h/(ab) → 0, into a rapid motion and a slow one.
1) As a result of rapid motion, an arbitrary wave function ϕ(x, p, 0) turns at the time of orderh/(ab) into a function of the form (24): where The wave functions of the form (31) form a linear subspace. Elements of this subspace are parameterized by wave functions ψ(y) depending only on coordinates y ∈ R n .
2) The slow motion starting from a nonzero wave function ϕ(x, p, 0) of the form (31) from this subspace, goes inside this subspace, and is parameterized by the wave function ψ(y, t) depending on time. The function ψ(y, t) satisfies the Schrodinger equation of the form ih∂ψ/∂t =Ĥψ, wherê and χ(x − y) is given by formula (32). Proof of Part 1 of Theorem 4 is postponed till Appendix, due to its large volume and technicalities.
Proof of Part 2 of Theorem 4. In the first Part of Theorem 4 it is stated that after the rapid motion, the initial distribution ϕ(x, p, 0) turns to the form with just a small difference with (31). After the slow motion the distribution remains in the class of functions of the form (31), but changes in time.
To study the slow motion, let us look for a solution of equation (2) in the form (31) in which ψ = ψ(y ′ , t) is considered as dependent on time.
Let us substitute expression (31) into equation (2). Since by construction, this expression for ϕ satisfies equation ∆ a,b ϕ = 0, then after substitution of expression (31) into equation (2) and after dividing both parts of the equation by exp (ixp/h), we obtain the following equation: If in the obtained equation one opens the brackets, reduces the similar terms, multiplies both parts of the equation by 1/(2πh) n/2 χ(x − y) exp(iyp/h) and integrates by p and by x, then, taking into account the equality R n χ 2 (x − y)dx = 1, we obtain the following equation: where the operatorĤ is obtained from the function H by formula (33), required in Part 2 of Theorem 4. Theorem 5. If ah b is a small quantity and H(x, p) = p 2 2m + V (x), then the operatorĤ, up to terms of order ah/b, has the form Proof of Theorem 5 is given in Appendix 2.
The first two summands in formula (36) give the standard Hamilton operator. The last summand is a constant and can be neglected. The previous summand before the last one will be considered (due to the smallness of ah/b) as a perturbation of the Hamilton operator.
Assuming that the deviations in the spectrum of the hydrogen atom (the Lamb shift) observed in the Lamb-Retherford experiments [10] are induced by the next-to-last summand in formula (36), one can estimate the quantity a/b. The computations by the standard method of perturbation theory, analogous to the computations of [18], give the following estimate: a/b = 3.41 · 10^4 sec/g (see the computations in Appendix 3). Hence, the standard deviation of the normal distribution χ^2, with which we make the smoothing in formula (30), is √(ah/(2b)) = 4.24 · 10^{-12} cm. This quantity is much less than the radius of the hydrogen atom, and it is close to the Compton wave length of the electron h/(mc) = 3.86 · 10^{-11} cm.
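These numbers are mutually consistent; a quick check in CGS units (taking the quoted estimate a/b = 3.41 · 10^4 sec/g as given rather than re-deriving it from the Lamb shift):

# CGS consistency check of the quoted numbers.
hbar = 1.0546e-27        # erg s
m_e  = 9.109e-28         # g (electron mass)
c    = 2.998e10          # cm / s
ab_ratio = 3.41e4        # s / g, the estimate of a/b quoted in the text

print((ab_ratio * hbar / 2.0) ** 0.5)   # ~4.24e-12 cm, the smoothing standard deviation
print(hbar / (m_e * c))                 # ~3.86e-11 cm, the Compton wavelength of the electron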
Conclusion
In this paper it is shown that the standard quantum mechanical description of a process can arise as an approximation of a certain classical model of a diffusion process for a wave function in the phase space. The computations show that the proposed model, in the form of the differential equation (2), describes physical processes rather adequately in the standard cases with a standard Hamiltonian. But this model can also be applied to computations of processes with a nonstandard Hamiltonian or a Hamiltonian rapidly changing in time, as in sudden perturbations [19] or for a periodically changing potential with frequency of order ab/h, and it can be compared with experimental data.
Appendix 1
Proof of Part 1 of Theorem 4 Let ϕ(x, p, t) be a solution of equation (2). In the notations (3) and (4) equation (2) takes the form where A is a skew Hermitian operator, and ∆ a,b is the self-adjoint diffusion operator.
Consider the derivative with respect to time of ||ϕ|| 2 (the square of the norm of the function ϕ). We have Hence, by linearity of the scalar product ; and by the equalities Aϕ; ϕ = − ϕ; Aϕ and ∆ a,b ϕ; ϕ = ϕ; ∆ a,b ϕ , we obtain the equality Denote byφ def = ϕ/||ϕ|| the normalized function ϕ. If the function ϕ satisfies equation (37), then, taking into account formula (38), we have By definition of the functionφ its norm ||φ|| ≡ 1, and it is natural to call equation (39) the equation for the normalized wave function.
Consider now the projection of the functionφ(x, p, t) on the subspace of stationary solutions of the diffusion equation (5).
For the proof of the first Part of Theorem 4 one needs to show that the quantity ||ϕ 1 || 2 = 1 − ||ϕ 0 || 2 becomes small after time of orderh/(ab).
To make formulas shorter, introduce the notation Statement 1. If ϕ satisfies equation (37), then the quantity η(t) = ||ϕ 0 || 2 satisfies the differential equatioṅ where The maximum in the latter expression is taken over all normalized functions ϕ 0 from the subspace of stationary solutions of the diffusion equation (5), for which the function |φ 0 (x, p)| 2 gives a probability distribution in the physical region of the phase space for the given process.
We have the following equalities which follow from the definition of the function ϕ 0 , from independence of the operator P 0 of time, from equality (39), from linearity of the scalar product ; with respect to each argument, from commutativity of P 0 with ∆ a,b , from self-adjointness of the operators P 0 and ∆ a,b , from skew Hermitian property of the operator A and from the projection property P 0 = P 2 0 :
Let us now estimate the quantity β(t)= − 2 ∆ a,bφ1 ;φ 1 . To this end, decompose the functionφ 1 with respect to the orthonormal eigenfunctions χ i of the self-adjoint operator ∆ a,b . We haveφ 1 = ∞ i=0 c i χ i . Since the vectorφ 1 is orthogonal to the vectors with eigenvalue equal to 0, we have ∆ a,bφ1 = i: λ i =0 λ i c i χ i and i: λ i =0 c 2 i = 1. Hence where λ max is the maximal nonzero eigenvalue of the self-adjoint operator ∆ a,b . Using this inequality and also knowing that λ max = −ab/h (see Proof of Theorem 2), we obtain the estimate required in Statement 1: where the integral is taken over the interval (ε, 1 − ε), on which the denominator of the expression under the latter integral is positive. Simple computations show that for this the following inequalities should hold: If one takes into account that β min = 2ab/h (see Statement 1) and α max = (C + oh(1))/ √h (see Statement 2), then, ash → 0, the right hand side of the previous inequality can be represented in the following form by decomposing into the Taylor series: This and inequality (46) imply that the number ε can be chosen to be arbitrary (small, ash → 0), satisfying the inequality The last integral in inequality (45) of the form has been computed using the system Mathematica 5.0 (see [14]), and the result was decomposed into the series with respect to β min as β min → ∞. These computations yield the following equality: where arctanh(x) = (ln(1 + x) − ln(1 − x))/2 is the hyperbolic arctangens. If one assumes that ε andh are small quantities, then, substituting into this equality the expressions for β min = 2ab/h and α max = (C + oh(1))/ √h from Statements 1 and 2, and decomposing the obtained expression into a series with respect to ε, one can obtain the following estimate: Inequality (45), formulas (47), (48), (50), and the definition of the function η(t) (40) immediately imply the following statement. Statement 3. Letφ(x, p, t) be a normalized solution of equation (2), and, at the initial moment for t = 0, let the following inequality hold: η(0) def = ||P 0φ (x, p, 0)|| 2 ≥ ε, where P 0 is the projection operator from Theorem 2 and ε is an arbitrary number satisfying the inequalities Then, for any t > t ε , where the quantity η(t) def = ||P 0φ (x, p, t)|| 2 ≥ 1 − ε, i. e., the square of the distance from the functionφ(x, p, t) to the subspace of stationary solutions of the diffusion equation, described by Theorem 1, will be less than or equal to ε.
Part 1) of Theorem 4, being proved at this section, immediately follows from Statement 3. For conclusion of the proof it remains to prove Statement 2.
Proof of Statement 2. In the proof of Statement 2 we shall need computations of some integrals containing the function χ(y) given by expression (26) from Theorem 1. The results of computations of these integrals are listed in the following two Lemmas. Lemma 1. Letχ be the Fourier transform of the function χ, where χ(y) = (b/aπh) n/4 exp(−by 2 /(2ah)). Then the following equality holds: On the contrary, the Fourier transform ofχ yields χ. I. e., One has more general integrals: Functions χ andχ satisfy the following relations: The derivatives of the functions χ(y) andχ(k) have the form ∂χ(y) ∂y j = −b/(ah) y j χ(y); ∂χ ′ (k) ∂k j = −a/(bh) k jχ (k).
The functions χ 2 andχ 2 are the densities of the normal distribution in the configuration space and in the space of momenta, respectively, with the zero mathematical expectations and the dispersions equal to ah/(2b) and bh/(2a). I. e., the following equalities for the function χ hold:
The other moments have order o(h). Analogous equalities hold for the functionχ:
∫_{R^n} χ̃^2(ξ) dξ = 1. Besides that, the following equalities hold: All integrals from Lemma 1, except for the last one (61), are well known. The latter integral has been computed by the author using the system Mathematica 5.0 [14].
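The one-dimensional versions of the normalization and dispersion relations of Lemma 1 can also be verified symbolically; a small sympy sketch (n = 1, with h again denoting the reduced Planck constant of the text):

import sympy as sp

# Verify, for n = 1, that chi(y) = (b/(a*pi*h))**(1/4) * exp(-b*y**2/(2*a*h)) satisfies
# integral(chi**2) = 1 and integral(y**2 * chi**2) = a*h/(2*b), as listed in Lemma 1.
y = sp.symbols('y', real=True)
a, b, h = sp.symbols('a b h', positive=True)
chi = (b / (a * sp.pi * h)) ** sp.Rational(1, 4) * sp.exp(-b * y**2 / (2 * a * h))

print(sp.simplify(sp.integrate(chi**2, (y, -sp.oo, sp.oo))))          # 1
print(sp.simplify(sp.integrate(y**2 * chi**2, (y, -sp.oo, sp.oo))))   # a*h/(2*b)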
14)
R 4n Proof of Lemma 2 is based on the use of formulas from Lemma 1. Let us compute here the first integral of Lemma 2. The other integrals are computed in a similar way.
For the first integral, after substitution into it, instead of D, of expression (62), we obtain 1 (2πh) n 2 n/2 R 4n In the proof of Statement 2 we shall also need the averaging of functions F (x, p) on the phase space (x, p) ∈ R 2n with respect to the density of the distribution This density looks similar to Wigner's quasidistribution, but does not coincide with it. Denote byF W ′ ψ the average of the function F (x, p) with respect to the density W ′ ψ . That is, Lemma 3. If F (x, p) is a smooth function which, together with all its derivatives, grows at infinity no faster than a polynomial, and ψ(x) is an arbitrary smooth complex valued function rapidly decreasing at infinity, then the following equality holds: Proof of Lemma 3. Let us make the change of variables k = p/h under the integral (64), and let us integrate over y. We obtain whereψ * (k) = 1 (2π) n/2 R n ψ * (y)e iyk dy is the Fourier transform of the function ψ * (y). Since the function ψ * (y) is rapidly decreasing, its Fourier transformψ * (k) is also a function rapidly decreasing at infinity.
Sinceh is a small quantity, let us decompose the smooth function F (x,hk) overhk by the Taylor formula up to the terms of first order. We have where θ = (θ 1 , ..., θ n ) and 0 ≤ θ i ≤ 1 for i = 1, ..., n.
Let us substitute this expression of the function F (x,hk) into (65). We obtain Let us estimate the coefficient beforeh in the second summand of the obtained equality. We have The latter equalities follow from the fact that by the statement of Lemma 3, the expression in the integral, standing under the operation max, grows no faster than a polynomial of certain degree N with respect to each variable, and |ψ(x)| and |ψ * (k)| are rapidly decreasing functions (decreasing at infinity faster than any power), and the limit as r → ∞ of the integral of a positive rapidly decreasing function exists and is equal to some M. Formula (67) and the boundedness of the coefficient beforeh → 0 in the second summand of this formula imply that Hence The latter equality is obtained by computing the integral over k. This integral over k is the inverse Fourier transform of the functionψ * (k), and it yields the function ψ * (x). Lemma 3 is proved. The distribution W ′ ψ, as Wigner's distribution, is not nonnegative, and the distribution ρ ψ , given by expression (29) for the wave function ψ, is nonnegative.
Denote byF ρ ψ the average of the function F (x, p) with respect to the distribution ρ. That is, F (x, p) is a smooth function which, together with its derivatives, grows at infinity no faster than a polynomial, and ψ(x) is an arbitrary smooth complex valued function rapidly decreasing at infinity, then the following equality holds:F where oh (1) is an infinitely small quantity with respect toh. Proof of Lemma 4. ConsiderF ρ ψ given by expression (69). Let us represent the functionψ * (y ′ ), using composition of the direct and inverse Fourier transform (15), in the following form: i. e., in the following form:
Let us substitute this expression into expression (69). After simple transformations under integral we obtain
In this expression, let us integrate over y ′ using formula (54) from Lemma 1. We obtain In this expression, let us make change of variables, introducing new variables ξ = p − k and η = x − y. Then, p = k + ξ, x = y + η, and Assuming η and ξ to be small quantities, let us decompose the function F (y + η, k + ξ) by the Taylor formula at the point (y, k) up to the terms of second order. We obtain the following expression, in which the values of the function F and its derivatives are taken at the point (y, k): Let us substitute this decomposition instead of the function F (y+η, k+ξ) into the latter integral, and let us integrate over the variables η and ξ using the formulas written out in Lemma 1. We obtain is the second summand under the integral in expression (75) divided byh, and the average valuesF W ′ ψ andS W ′ ψ of the functions F and S with respect to the distribution W ψ are given by expression (64).
Since by statement of Lemma 4, the function F (x, p) grows at infinity, together with its derivatives, no faster than a polynomial, the same property holds for the function S(x, p). Let us apply Lemma 3 to the function S(x, p). We obtain thatS W ψ is bounded ash → 0. This and equality (75) imply the equalityF ρ ψ =F W ′ ψ + O(h), which is equivalent to the equality required in Lemma 4.
Thus, all the Lemmas necessary for the proof of Statement 2, are proved. Let us now proceed to the proof of Statement 2 itself.
By Statement 1, α max def = 2·maxφ 0 ||Aφ 0 −P 0 Aφ 0 ||, where A is given in our case by expression (4), the projection operator P 0 is given by expression (27), and normalized functionsφ 0 are given by expression (24), and the function ϕ 0φ * 0 is the probability distribution in the physical region of the phase space for the given process.
If the operator A is represented as a sum A = 2n+1 j=1 A j , then, by the property of the norm stating that the norm of the sum of vectors is no greater than the sum of norms of these vectors, we have Hence, for the estimate of the quantity ||Aφ 0 − P 0 Aφ 0 || it suffices to estimate the quantities ||A sφ0 − P 0 A sφ0 ||, for s = 1, . . . , 2n + 1.
In our case the operator A is given by expression (4): and , for j = 1, . . . , n; Note also that since P 0 is a self-adjoint projection operator, the vectors P 0 A sφ0 and A sφ0 − P 0 A sφ0 are orthogonal. The sum of these vectors equals A sφ0 . Hence the following equality holds: Let us start estimating these quantities, starting with s = 2n + 1.
Let us substitute this expression into expression (82) for I int . We obtain Let us compute the latter integral over y ′ by formula (54) from Lemma 1. After substitution we obtain Let us substitute the obtained expression for I int into expression (81) for h 2 ||A 2n+1φ0 || 2 . After simple transformations we obtain dk dy ′′ dy dx dp. (85) In the obtained expression, let us make change of variables, introducing new variables ξ = p − k and η = x − y. Then p = k + ξ, x = y + η, and Assuming that η and ξ are small quantities, let us decompose the function f 2 (y + η, k + ξ) into the Taylor series at the point (y, k) up to terms of the second order. We obtain the following expression, in which the values of the function f and its derivatives are taken at the point (y, k), Let us substitute this expression, instead of function f 2 (y + η, k + ξ), into the latter integral and let us perform integration over the variables η and ξ using the formulas written out in Lemma 1. We obtain ah 2b 1.
2. An estimate of ||P 0 A 2n+1φ0 || 2 . Let us now estimate the expression subtracted in (79), i. e., ||P 0 A 2n+1φ0 || 2 = P 0 A 2n+1φ0 ; A 2n+1φ0 , where the operator A 2n+1 is the multiplication by the smooth function −i/hf (x, p). Expanding this expression with the scalar product and substituting into it the expression for the operator A 2n+1 , expression (24) forφ 0 and expression (27) for the operator P 0 , represented, in the notations of Lemma 1 in the form dx ′ dp ′ , and multiplying both parts of the equality byh 2 , we obtain dy ′ dy dx ′ dp ′ dx dp ×f (x, p) I int dy dx ′ dp ′ dx dp, where The integral I int is the same as in (82) in the computation ofh 2 ||A 2n+1φ0 || 2 . Let us substitute into expression (89) the representation of the integral I int in the form (84). After simple transformations we obtain, dk dy ′′ dy dx ′ dp ′ dx dp.(90) In the obtained integral, let us make a change of variables, introducing Then, After substitution of x ′ , x, p, p ′ , expressed through the new variables, and after simple transformations, we obtain Since the functions χ 2 andχ 2 yield the densities of normal distributions with small dispersions, let us decompose the function f (y + η, k + ξ + ξ ′ )f (y + η + η ′ , k + ξ) in the previous expression into the Taylor series at the point (y, k) up to terms of the second order, assuming the quantities η, η ′ , ξ, ξ ′ to be small. We have, After substitution of the decomposition of the function f (y+η, k+ξ+ξ ′ )f (y+ η + η ′ , k + ξ) in the form (92) into expression (91) and computing integrals over η, η ′ , ξ, ξ ′ using the integrals of Lemma 2, we obtain Thus, we have estimated the expressionh 2 ||A 2n+1φ0 || 2 by formula (88) and expressionh 2 ||P 0 A 2n+1φ0 || 2 by formula (93). Let us substitute these formulas into expression (79), let us reduce similar terms, and let us divide both parts of the equality byh 2 . We obtain The last row of expression (94) is the density of the distribution W ′ψ given by expression (63). Let us apply Lemma 4 to the right hand side of the equality (94). We obtain where ρψ(, k) is the nonnegative density of distribution given by expression (29) for the functionψ().
To estimate the required expression maxφ 0 ||A 2n+1φ0 − P 0 A 2n+1φ0 ||, introduce the constant C 2n+1 by the following equality: where the maximum is taken over the physical domain U of values of the coordinates and momenta for the given process, which contains the support of the density function of the probability distribution ρψ. Then, equality (95) implies that The latter equality is obtained from the normalization condition for the density ρψ, i. e., from the equality R n ρψdydk = 1. Hence, taking the square root from both parts of the inequality, we finally obtain This finishes examining Case 1.
Case
where I int is given by expression (82). Let us substitute here, instead of I int , its expression in the form (84). After simple transformations and after substitution of the derivative of the functionχ in the form (57), we obtain dk dy ′′ dy dx dp.
In the obtained expression, let us make the change of variables η = x − y and ξ = p − k. Then, x = y + η, p = k + ξ, and where in the function ∂H ∂x j 2 , instead of variables x and p, we have substituted y + η and k + ξ, respectively.
Assuming η and ξ to be small quantities, let us decompose the function ∂H ∂x j 2 (y + η, k + ξ) into the Taylor series up to the zero order. We have Let us substitute this expression into the previous one, and integrate it over η and ξ, using the integral (61) from Lemma 1. Finally we obtain 2.
2. An estimate of ||P 0 A 2j−1φ0 || 2 . By construction, this expression satisfies the inequalities Hence and from relations (79) and (100) we obtain The last row of expression (101) is the density of the distribution W ′ψ given by expression (63). Let us apply Lemma 4 to the right hand side of equality (101). We obtain where ρψ(, k) is the nonnegative density of distribution given by expression (29) for the functionψ().
Introduce the constant C 2j−1 by the following equality: where the maximum is taken over the physical region U of values of coordinates and momenta for the given process. Taking into account this notation, inequality (102) implies that The latter equality is obtained, as in the first case, from the normalization condition for the density of the probability distribution ρψ. Hence, taking the square root from both parts of the latter inequality, we finally obtain This finishes examining Case 2.
Case
Let us estimate the quantity ||A 2jφ0 −P 0 A 2jφ0 ||. If one applies the operator A 2j = − ∂H ∂p j ∂ ∂x j to the functionφ 0 of the form (24), i. e., to the function ϕ 0 (x, p) = 1 (2πh) n/2 R nψ (y)χ(x − y)e i(x−y)p/h dy, then, taking into account formula (57) for the derivative of the function χ, we obtain the following equalities: Hence, using the property of the norm that the norm of a sum of vectors is no greater than the sum of norms of these vectors, we obtain the inequality Note that the operator A ′ 2j coincides with the operator A 2j−1 (see the previous Case) if one replaces the function ∂H ∂x j in it to −ib a ∂H ∂p j . Therefore, using formulas (104) and (103), we obtain On the other hand, the operator A ′′ 2j coincides with the operator A 2n+1 (see Case 1), in which f (x, p) = p j ∂H ∂p j . Hence, using formulas (98) and (96), we obtain . Then using inequalities (105), (106), and (107), and using the introduced notation C 2j , we finally obtain This finishes examining Case 3. Now we are ready to finish the proof of Statement 2. By Statement 1, α max def = 2 · maxφ 0 ||Aφ 0 − P 0 Aφ 0 ||. Hence, applying to inequality (76) relations (98), (104), and (108), we obtain where C def = 2 2n+1 s=1 C s . Q. E. D. Statement 2 is proved.
Proof of Theorem 5
For proof of Theorem 5, consider formula (33) from Theorem 4 for the case when H(x, p) = p 2 2m + V (x). We havê Thus, in this caseĤψ is represented as the sum of three integralsĤψ = I 1 + I 2 + I 3 , where Note that the expression R n χ(x − y)χ(x − y ′ )dx from the first integral can be transformed to the following form: Hence the integral I 1 is transformed to the form e ī h (y−y ′ )p ψ(y ′ , t)dy ′ dp.
As known from the formulas for Fourier transform, for any smooth function f (y ′ ) the following equality holds: Below we shall often use this equality.
In particular, one has the equality e ī h (y−y ′ )p ψ(y ′ , t)dy ′ dp.
Let us differentiate both parts of the latter equality with respect to y k . We obtain, taking into account relation (115), e ī h (y−y ′ )p ψ(y ′ , t)dy ′ dp e ī h (y−y ′ )p ψ(y ′ , t)dy ′ dp.
To compute the integrals I k 4 , consider the following equality which is a particular case of equality (115): e ī h (y−y ′ )p ψ(y ′ , t)dy ′ dp.
Let us differentiate both parts of this equality with respect to y k . We obtain ×e − b(y−y ′ ) 2 4ah e ī h (y−y ′ )p ψ(y ′ , t)dy ′ dp.
Hence, taking into account notation (120) for the integral I k 4 and relation (115), we obtain the equalities ψ(y, t).
Let us express I k 1 from this equality: Taking the sum of the obtained equality over all k from 1 to n, we obtain an expression for the integral I 1 : Let us pass to computing the integral I 2 given by expression (111). Consider equality (113) of the form Let us differentiate both parts of this equality with respect to y ′ k . We obtain , or, after omitting common factors, Substituting this equality into expression (111) for the integral I 2 , we obtain e ī h (y−y ′ )p ψ(y ′ , t)dy ′ dp.
Comparing the latter expression with expression (120) for the integrals I k 4 , we obtain I 2 = −h 2 /m n k=1 I k 4 . Substituting here the computed expressions (123) for the integrals I k 4 , we finally obtain Now consider the integral I 3 given by expression (112). In accordance with formula (115), this integral can be transformed to the form where χ 2 (x − y) = (b/(aπh)) n/2 exp(−b(x − y) 2 /(ah)) is the density of the normal distribution, or, after the change of variables x ′ k = x k − y k , to the form Whereas the previous integrals have been computed exactly, let us compute this integral approximately, assuming the dispersion ah/2b of the normal distribution χ 2 (x ′ ) to be a small quantity, decomposing the function V (y+x ′ ) into the Taylor series at the point y with respect to x ′ up to the second order, and decomposing ∂V (y + x ′ )/∂y k up to the first order. We have Hence, since for the density of the normal distribution χ 2 (x ′ ) the following relations hold: R n χ 2 (x ′ )dx ′ = 1, R n x ′ k χ 2 (x ′ )dx ′ = 0, R n x ′ k x ′ k ′ χ 2 (x ′ )dx ′ = 0 for k = k ′ , and R n x ′ k x ′ k χ 2 (x ′ )dx ′ = ah/(2b), we finally obtain Thus, sinceĤψ = I 1 + I 2 + I 3 , where the integrals I 1 , I 2 , I 3 are given respectively by expressions (110), (111), (112), then, substituting here their computed values as expressions (124), (126), (127) and reducing similar summands, we obtain the expression forĤ required in Theorem 5: from Theorem 5, wherê Thus, under these assumptions one can suppose that the diffusion coefficient with respect to coordinates a 2 = kT /(mγ), and the diffusion coefficient with respect to momenta (as in the Fokker-Planck equation) b 2 = γkT m.
On the other hand, the time of the transformation process to the process described by the Schrodinger equation is expressed, by Theorem 4, by the quantity h/(ab) = h/(kT). This time for T = 1 K equals 7.638 · 10^{-12} sec.
Five-Dimensional Gauged Supergravity and Supersymmetry Breaking in M Theory
We extend the formulation of gauged supergravity in five dimensions, as obtained by compactification of M theory on a deformed Calabi-Yau manifold, to include non-universal matter hypermultiplets. Even in the presence of this gauging, only the graviton supermultiplets and matter hypermultiplets can couple to supersymmetry-breaking sources on the walls, though these mix with vector supermultiplets in the bulk. Whatever the source of supersymmetry breaking on the hidden wall, that on the observable wall is in general a combination of dilaton- and moduli-dominated scenarios.
Introduction
One of the most promising recent developments for attempts to construct satisfactory unified models in the context of string theory has been the realization that the strong-coupling limit can be treated using an eleven-dimensional approach [1,2,3,4]. In particular, this offers the possibility of reconciling the GUT scale M_GUT, estimated on the basis of low-energy data from LEP and elsewhere, with the string unification scale calculated in terms of the four-dimensional Planck scale [4,5,6,7]. This reconciliation is possible in the strong-coupling limit with a fifth dimension L_5 that is considerably larger than M_GUT^{-1}. According to this scenario, six of the original eleven dimensions are compactified at a length scale comparable to M_GUT^{-1}, beyond which physics is described by an effective five-dimensional supergravity, which is reduced further to an effective four-dimensional theory at length scales larger than L_5 [2,4,6,8,9,10,11,12,13,14,15,16,17,18,19].
Physics on the boundaries of the fifth dimension is also described by effective four-dimensional supersymmetric theories. The effective five-dimensional supergravity theory in the bulk space between these boundary walls serves to communicate between them, and provides, e.g., the essential framework for describing the mediation of supersymmetry breaking by gravitational interactions through the bulk, between a suspected source on the hidden wall and physics in the observable sector [20,21,6,8,13,14,18,22]. The general structure of five-dimensional supergravity theories has been studied, as have specific features of the effective theory obtained from the original eleven-dimensional theory by compactification on a six-dimensional Calabi-Yau manifold [21,23,24,25]. In this case, the multiplicities of vector supermultiplets and matter hypermultiplets are related to the topological data h^{(1,1)} and h^{(2,1)} of the Calabi-Yau manifold, and the structure of the Chern-Simons terms and the geometries of the scalar-field manifolds are also related to properties of the Calabi-Yau manifold. Furthermore, consistent compactification requires a deformation [2] of the Calabi-Yau manifold along the fifth dimension that induces a gauging [23,25] of the effective five-dimensional supergravity theory.
In a previous paper [24], we discussed the issue of mediation of supersymmetry breaking through the five-dimensional bulk, stressing in particular that only the gravity supermultiplet and the universal and non-universal matter hypermultiplets can couple to supersymmetry breaking on the walls. The vector supermultiplets lack such a coupling because a parity symmetry forbids their supersymmetry variations from having expectation values on either boundary. This previous discussion was not formulated explicitly in the gauged form of the five-dimensional supergravity theory.
In this paper, we supplement this previous discussion, first by extending the construction of the gauged supergravity [23,25] to include non-universal hypermultiplets, and then by discussing the ensuing coupled dynamics of the gravity and vector supermultiplets and the matter hypermultiplets in the bulk, including terms related to the Calabi-Yau deformation [2]. We find that there is non-trivial dynamical mixing in the bulk, but confirm that the vector supermultiplets cannot couple directly to the breaking of supersymmetry on the walls. The possible types of supersymmetry breaking correspond to the dilaton- and moduli-dominated scenarios for supersymmetry breaking discussed originally in the context of weakly-coupled heterotic string theory. However, even if just one of these is dominant on the hidden wall, the dynamical mixing in the bulk may cause both of them to be present on the observable wall.
In particular, we are interested in the specific source of supersymmetry breaking provided by a condensate of strongly-interacting gauge fermions on the hidden wall. We demonstrate that, in the standard-embedding version of the Horava-Witten model [1,2,3], the reduction from eleven dimensions down to five dimensions of the coupling between the bulk moduli and the gaugino condensate living on the wall is the same in both gauged and non-gauged versions of the effective five-dimensional supergravity. We stress also the fact, already demonstrated in our previous paper, that the effective five-dimensional coupling of the condensate to moduli includes a direct coupling of the condensate, not only to the universal hypermultiplet scalars and to scalars from the gravity and vector multiplets, but also to Z_2-even and Z_2-odd scalars from the non-universal hypermultiplets, including the type-(2,1) moduli.
We use the following conventions for the space indices throughout this paper. We use the capital Latin characters A, B, C for the eleven-dimensional space-time. The Calabi-Yau coordinates we denote by small Latin characters a, b, c when we use real coordinates, or by i, j, ī, j̄ in the case of complex coordinates. For the five dimensions orthogonal to the Calabi-Yau manifold we write α, β, γ, and finally the four-dimensional Minkowski space we denote by µ, ν. Since it will not be ambiguous in any case, we will also use the capital Latin alphabet to label the harmonic (2,1) and (1,1) forms on the Calabi-Yau manifold.
Primer of Five-dimensional Supergravity
We first recall some essential features of the five-dimensional supergravity theory that describes M-theory dynamics in the bulk after compactification on a Calabi-Yau manifold. It contains h^{(1,1)} vector fields A^i_µ, of which one is the graviphoton and the remaining h^{(1,1)} − 1 belong to vector supermultiplets (all the notation we use in this paper is compatible with that in [24] and [26]). These are accompanied by h^{(1,1)} scalars X^I, a complex scalar C and the three-form C_{αβγ}, which is dual to a scalar D. The five-dimensional supergravity theory then contains a universal hypermultiplet whose bosonic components are (V, D; C, C̄), where V ≡ (1/6) d_{ABC} X^A X^B X^C represents the Calabi-Yau volume. The shape moduli represent the h^{(1,1)} − 1 independent scalar components of the vector supermultiplets, and the graviphoton is the model-dependent combination (2)
which is orthogonal to the hypersurface (1), with respect to the metric where the V A form a basis for the (1, 1) forms and A = 1, ..., h (1,1) . The combination (2) is, however, not the same as the combination of vector fields that participates in the gauging induced by the deformation of the Calabi-Yau manifold, as we now discuss.
The linearized solution for the eleven-dimensional Bianchi identities in the standard-embedding case is This is the expression appropriate on the half-circle x 11 ∈ (0, πρ 0 ). To continue to the other half-circle, we have to remember that G abcd is Z 2 -odd, and hence has to change sign when it crosses any of the fixed planes. It is important to note that the G ABCD vacuum does not depend explicitly on the coordinate x 11 . On a Calabi-Yau manifold, the vacuum configuration for G must be a (2, 2) form. Since h (2,2) = h (1,1) on a three-fold, it is convenient to choose as a basis of H (2,2) the forms Y B related to duals of V A : In this way, one has The constants α B are given a geometric interpretation through the representation where C B is the four-cycle dual to the form Y B , and we have included explicitly the antisymmetric step function ǫ(x 11 ) in the formula (6), in order to recall that α B is also Z 2 -odd, like the background G abcd itself.
We now recall briefly the way the gauging arises [23,25] in connection with the non-trivial vacuum solution for the components of the antisymmetric-tensor field and its strength, which is linear in x 11 to lowest non-trivial order in κ 2/3 (4). As we discuss in more detail later, in order to construct the effective five-dimensional theory, one expands the Lagrangian around this nontrivial eleven-dimensional background, and treats five-dimensional zero modes as fluctuations in that non-vanishing background [23,25]. Substituting such an expansion into the topological C ∧ G ∧ G term in the eleven-dimensional supergravity Lagrangian, one finds, among other terms, a new coupling between zero modes of the form ∂ µ D C µ , where D is in our language the imaginary part of the complex even scalar S from the universal hypermultiplet, and in the language of the effective four-dimensional theory on a wall is simply the universal axion, and C µ is the combination of the h (1,1) U(1) gauge fields in the bulk, where the coefficients α B are given by (6). Thus, its composition depends on the orientation of the gauge and gravitational instantons with respect to the cohomology basis used to define the zero modes.
This mixing between a vector boson and a derivative of a pseudoscalar, which is dual to the component of G αβγδ with all indices five-dimensional, is reminiscent of a Higgs mechanism. The only way to accommodate it in an explicitly supersymmetric theory is as part of a squared covariant derivative in the gauged five-dimensional supergravity, where the gauging is of translations along the imaginary direction of the complex Z 2 -even scalar S = V + i D + ... 2 . There are other terms in the Lagrangian which arise from the gauging, for instance the scalar potential coming from the Killing prepotentials, and these terms can also be found via the reduction on the nontrivial background given above [25]. In this paper, we extend the analysis of [25] to include fields coming from the non-universal hypermultiplets. Since some important expressions in the effective Lagrangian become notably more complex, we discuss some key points of the reduction in more detail in the following sections of this paper.
Since the coupling of the scalar D to the gauge boson C is of order O(κ 2/3 ), it is of higher order than the kinetic couplings in the bulk which we considered in [24]. Likewise, the necessary supersymmetrization involves higher-order bulk couplings. These can be obtained from formulae given in [27], as well as the corresponding modifications to the supersymmetry transformation laws [25] discussed in Section 6. In particular, we note that the potential term related by supersymmetry to the O(κ 2/3 ) mixing, analogous to D terms in four-dimensional supersymmetry, is of order O(κ 4/3 ): see Section 5. This exemplifies the fact that the new couplings in the gauged theory are of higher order in κ 2/3 than the σ-model couplings we considered in [24].
Couplings to Non-Universal Hypermultiplets
We start with key steps in the dimensional reduction in the presence of non-universal (2,1) moduli. The scheme for the reduction follows [26] closely, as in [24]. The basic modifications compared to the compactifications with just a single universal hypermultiplet are already visible in the reduction of the three-form field. We consider the expansion of the three-form field C_{IJK} into harmonics, distinguishing between three different configurations of the indices I, J, K. To give non-vanishing zero modes, the indices have to be either all non-compact, one non-compact and two compact, or all compact. This is because, on a Calabi-Yau manifold, only the (3,0), (2,1), (1,1), (0,0) harmonic forms and their Hodge duals are non-vanishing. Taking this into account, we may write the following decomposition Using the basis V_A, A = 1, ..., h^{(1,1)}, of harmonic (1,1) forms, we can write the above expansion as The non-trivial part of (9) is the term with three compact indices. We concentrate on its expansion in terms of non-vanishing harmonic (2,1) forms in the Dolbeault cohomology basis in H^{(2,1)}: and the (3,0) form which constitutes the Dolbeault cohomology basis for H^{(3,0)}. It is important to notice that (10) includes h^{(2,1)} + 1 forms, which are not all linearly independent, since by definition we only have h^{(2,1)} non-vanishing harmonic (2,1) forms. The convenience of the choice (10) is due to an obvious analogy with homogeneous coordinates, which we discuss below.
In order to discuss the H^3 cohomology sector, we introduce a real de Rham cohomology basis (α_I, β_I), where I = 0, ..., h^{(2,1)}, one of our aims being to impose the invariance of C under symplectic transformations Sp(2h^{(2,1)} + 2) [26]. This basis is dual to a canonical homology basis for H_3(M, Z), which we denote by (A_I, B_I). The two bases are defined in such a way that We introduce the periods Z̃_I and F_I of the holomorphic (3,0) form Ω (11) via and Following [28], one can show that the complex structure of the manifold M is entirely determined by the Z̃_I, implying that F_I = F_I(Z̃_I). It is clear from the definition (14) that rescaling Z̃_I → λZ̃_I, where λ is a non-zero number, corresponds to a rescaling of Ω. Therefore the Z̃_I can be regarded as projective coordinates for the complex structure: Z̃_I ∈ PH^{(2,1)}, with Ω being homogeneous of degree one in these coordinates: As already mentioned, these homogeneous coordinates, although convenient in our case, cannot be a good coordinate system, since the space PH^{(2,1)} is an h^{(2,1)}-dimensional quaternionic manifold, whilst there are h^{(2,1)} + 1 coordinates Z̃_I. We can define inhomogeneous coordinates by (17), for example.
We now use the real cohomology basis (α I , β I ) to expand the holomorphic (3,0) form (11). Since Ω is a complex form, we can perform the expansion only if we complexify the real basis, and we therefore work in the complexified basis. Kodaira has derived [29] a decomposition in which the K I are coefficients that depend on the Z̃ I , but not on the coordinates of the Calabi-Yau space, and the Φ I are (2,1) forms. As mentioned before, the forms Φ I are not linearly independent. It is, however, convenient to use the above set of h (2,1) + 1 forms, remembering that they satisfy the condition (20), which leaves the right number of linearly independent degrees of freedom. We will show in the following paragraphs that the constraint (20) is consistent with previous definitions.
One can easily derive the inner-product relations satisfied by Ω. Using the expansion (18), we conclude from (23) that the functions F I (Z) have the property (24). It follows from (24) that F I is the gradient of a homogeneous function of degree two; it is also useful to notice that (25) can be rewritten in the form (26). As stated previously, we use the corresponding Dolbeault cohomology basis in H (3,0) and H (2,1) , with the additional condition (20). In writing (28), we used the fact that (Φ I , Ω) = 0 and expressed K I in terms of Ω, by taking the inner products of both sides of (28) with Ω. The condition (20) follows from equations (28), (18) and (26). It is easy to see that Z̃ I Ω I = Ω, which immediately gives (20).
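To spell out the homogeneity statement, a short sketch (assuming only what is stated in the text, namely that F I is the gradient of a function F homogeneous of degree two):

F_I = \frac{\partial F}{\partial Z^I} , \qquad F(\lambda Z) = \lambda^2 F(Z) \;\Longrightarrow\; Z^I F_I = 2 F , \qquad F_I = F_{IJ} Z^J ,

where the last relation follows from Euler's theorem applied to the degree-one functions F_I .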
We can now use the integrals over the real cohomology basis (α, β) to express everything in terms of the moduli Z̃ I and the holomorphic function F (Z). In the rest of this section, we drop the tilde from Z̃ in order to make the equations more readable, not forgetting that at the end we must pass to the inhomogeneous coordinates given by (17). One easily finds a set of relations which will be useful in what follows.
The real cohomology basis which we have introduced in this section proves to be very useful [26] in performing the expansion of the three-form field C abc in terms of harmonic forms. As was argued in [26], it enables us to fix arbitrary coefficients appearing in the expansion. We do not repeat all the technical discussion here, and present only the result. We use the notation Ĉ for the three-form field and C for the five-dimensional scalar field. In the expansion derived in [26], the C I : I = 0, ..., h (2,1) are the complex five-dimensional scalar fields in the bulk hypermultiplets, and a IJ , b I J are coefficients which, as was argued in [28], depend explicitly on the moduli Z but not on the coordinates of the Calabi-Yau space. The basis (α, β) can then be expressed in terms of (2,1) and (3,0) forms, with N IJ = (1/4)(F IJ + F̄ IJ ), where we have omitted all the indices. Using the above expressions, we can write Ĉ in terms of F ′′ = F IJ . Using arguments given in [26], one can show finally that the result holds up to a numerical factor γ; it can also be written in terms of the real basis (α, β).
Coupling to the Gaugino Condensate
In addition to extending the study of this gauged supergravity to include the non-universal hypermultiplets, and to calculate explicitly the potential for the scalar fields associated with the vector multiplets and the hypermultiplets, we also include into the gauged supergravity picture the coupling of the bulk moduli to the gaugino condensate living on the hidden wall. This is of particular phenomenological importance, as hidden-sector gaugino condensation remains a primary candidate source of supersymmetry breaking in M-theory models. We shall treat the reduction of the wall-bulk coupling rather completely, in order to make explicit the additional couplings of the condensate to non-universal hypermultiplets, which have, so far, only been studied in our previous paper [24].
We use (37) to write down the expression (39) for the field strength of the field C abc , where F 3 ≡ F abc and we have not written explicitly the three internal indices a, b, c. We see in (39) that there is a term proportional to the holomorphic three-form, and it is this term which will couple to the gaugino condensate. The fields C, being odd, have to vanish or to have a discontinuity on the walls, as do the derivatives with respect to x 5 (x 11 ) of the even moduli Z and S. However, each part of the above equation contains an even number of Z 2 -odd objects, so each can have a well-defined non-vanishing limit on the wall, and couple there to any gaugino condensate.
The above calculation shows that the coupling to the gaugino condensate involves not only the universal hypermultiplet, but also scalar fields from non-universal hypermultiplets. To be more explicit, we consider the function F (Z) that characterizes the simple model discussed in [24], namely F (Z) = (Z 0 ) 2 − (Z a ) 2 , a = 1, ..., h (2,1) .
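For this choice the matrix N can be evaluated explicitly; assuming the definition N IJ = (1/4)(F IJ + F̄ IJ ) quoted in the previous section, a one-line computation gives

F_{IJ} = \partial_I \partial_J F = 2\,\mathrm{diag}(1, -1, \dots, -1) , \qquad N_{IJ} = \mathrm{diag}(1, -1, \dots, -1) \equiv \eta_{IJ} , \qquad (N^{-1})^{IJ} = \eta^{IJ} ,

so that in this model N coincides with its own inverse.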
This gives a Calabi-Yau space with a nontrivial moduli sector, sufficient to study the questions we want to ask, although the corresponding Yukawa couplings vanish, since these are given by the third derivatives of F . The choice (41) of F leads to a simple form for the (h (2,1) + 1)-dimensional matrix N and its inverse. The combination of moduli and their derivatives which couples directly to the condensate is given in (43) in terms of physical, untilded, quantities, and should also be multiplied by a factor V CY , since above we have been working in the metric which is canonically normalized in eleven dimensions. We note that the above expression contains different powers of moduli fields and their derivatives. The lowest-order part is simply 2 ∂ 11 C 0 , i.e., the derivative of the Z 2 -odd component of the universal hypermultiplet with respect to x 11 . The result (43) is nothing other than the component form of the five-dimensional σ-model expression (44) given in [24], where we assume, as mentioned above, the conventional wisdom that the four-dimensional gaugino condensate must be proportional to the Calabi-Yau (3, 0) form Ω ijk . We note that the coupling (44) also includes the possibility of switching on the part of the background for the Chern-Simons forms which is proportional to Ω ijk , L → L + < Ω CS >, as discussed in [14] and in [18]. If one considers switching on a part of the background for the Chern-Simons form that is proportional to the (2, 1) forms Φ I , such a background would couple to the corresponding combinations of the massless modes. The components of the background proportional to heavy modes of the Laplacian on the Calabi-Yau space decouple from the massless modes.
The calculation given in some detail above constitutes the derivation of the effective five-dimensional coupling (44) from the eleven-dimensional Lagrangian given in [1,20]. The result of this procedure is not sensitive to the gauging of the five-dimensional supergravity, as the background value of G abc11 which solves the consistency equations to order κ 2/3 in eleven dimensions vanishes for the standard embedding. Since in eleven dimensions only G abc11 couples to the condensates, the non-trivial backgrounds for the other components of the antisymmetric tensor field strength G do not affect the coupling.
We return at this point to the reduction of the C ∧ G ∧ G term from the original eleven-dimensional action, to see in more detail how the coupling to the gauge boson arises. This coupling must be proportional to the background value of the field strength G, and the only components of G that have vacuum expectation values are those with all indices tangent to the Calabi-Yau space. Hence, from the decomposition of the three-form field into zero modes, we can identify the terms affected by the background. Using the decompositions (9) of C and (5) of G cdef and integrating over the Calabi-Yau space, we immediately find the five-dimensional coupling. Remembering that, upon using the equations of motion, the four-form G αβγδ is seen in five dimensions to be dual to a closed one-form, which may be represented locally by the derivative of a scalar, we see that we have found the mixed bilinear term. Taking into account the possible index structures of the four-form G, we see that this scalar, which we shall call D, is the only one which can couple directly to any vector field. To describe the correspondence between D and G αβγδ more precisely, we note that to perform the duality transformation correctly we have to take into account two other terms. The first and obvious one is the square G αβγδ G αβγδ from the kinetic term, and the second one is from the topological C ∧ G ∧ G term. Using again the decompositions of C and G which we have given earlier in (37, 39), we obtain the expression (49), together with the only non-trivial gauge-covariant derivative. It is straightforward to see that, in the case where h (2,1) = 0, the complicated expression in the bracket in (49) reduces to −4i(C 0 ∂ µ C̄ 0 − C̄ 0 ∂ µ C 0 ), which is the limit considered in [25].
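Schematically, and suppressing the moduli-dependent prefactor as well as the shifts generated by the topological term, the duality invoked here takes the form

G_{\mu\nu\rho\sigma} \;\propto\; \epsilon_{\mu\nu\rho\sigma\lambda}\, \partial^\lambda D ,

so that eliminating G_{\mu\nu\rho\sigma} through its equation of motion trades it for the derivative of the scalar D; substituting this into the bilinear found above produces the mixing between the vector field and ∂ µ D discussed at the beginning of this paper. This is a sketch of the structure only; the exact coefficient is fixed by the terms listed in the text.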
This completes the construction of the coupling of the gauged five-dimensional supergravity to a gaugino condensate living on a four-dimensional boundary. This coupling is the only part of the construction where the enhancement of the hypermultiplet sector plays an important role. However, it is precisely this part that turns out to be insensitive to the gauging. The nature of the gauging does not change either, as it is still the gauging of the translation of the scalar dual to G αβγδ , which is the imaginary part of the complex modulus S [24].
We conclude this section with the observation that the symmetry of the quaternionic manifold which is gauged, the translation of Im(S), is broken down to a discrete subgroup on the boundaries by the instantons of the gauge bundles living there.
Scalar Potential in Gauged Supergravity
We now construct the scalar potential in the bulk which appears due to the gauging, including the non-universal hypermultiplets. This will provide the final ingredient needed for a discussion of the modifications to the analysis of supersymmetry breaking and its transmission given in [24].
First we recapitulate the basics of the gauged supergravity structure given in [27]. When one compactifies supergravity from eleven dimensions down to five dimensions, one finds vectors, moduli scalars and associated fermions in the five-dimensional gravity supermultiplet, h (1,1) − 1 vector multiplets which also contain associated scalars, and the h (2,1) + 1 hypermultiplets discussed above. The complex scalars (zero-forms) z i : i = 1, ..., n, where n ≡ h (1,1) − 1, that come from the n vector multiplets span a special Kähler manifold SM. The real scalar fields q u (u = 1, ..., 4m) coming from the m = h (2,1) + 1 hypermultiplets can be regarded as coordinates of a quaternionic manifold QM.
As shown in [27], the gauge potential can be expressed in the following form, using purely geometrical objects. Here, the indices Λ and Σ run from 0 to n (they correspond to the vector fields including the graviphoton), the indices i and j take values 1 to n, and the indices u and v take values 1 to 4m, corresponding to the hypermultiplets. Additionally, we note that g ij * in (51) is the special Kähler metric for the scalars z i coming from the vector multiplets, h uv is the metric on the quaternionic manifold, and k i Λ and k u Λ are the Killing vectors for the special Kähler and quaternionic manifolds, respectively. The vectors L Λ are (parts of) covariantly holomorphic sections, (∂ i * − 1/2 ∂ i * K)L Λ = 0, where K is the Kähler potential (satisfying condition (4.27) from [27]), of the (2n+2)-dimensional symplectic vector bundle with structure group Sp(2n + 2, R) over the special Kähler manifold SM, and the f Λ i are covariant derivatives of L Λ . Finally, the P x Λ are triplets (x = 1, 2, 3) of prepotentials associated with each Killing vector on the quaternionic manifold QM.
The non-holomorphic sections L Λ can be related to holomorphic sections X Λ by a rescaling with the Kähler potential K, L Λ = e K/2 X Λ . In the case of most interest here, we can regard X Λ as a set of homogeneous coordinates on SM. This means [27,30] that we can write the X Λ in terms of the special coordinates z, with z 0 = 1. Using the holomorphic function F (X), we can determine the rest of the geometric structure of SM, and in particular the functions f Λ i . The object we need in order to determine the scalar potential is P x Λ . Following [27], the triplet of zero-form prepotentials P x Λ associated to each Killing vector is given by (54), where ω y = ω y v dq v is the Sp(2) connection, and Ω x = Ω x uv dq u ∧ dq v is the corresponding curvature. The quaternionic manifold admits three complex structures J x : x = 1, 2, 3, that satisfy the quaternionic algebra J x J y = −δ xy 1 + ε xyz J z .
As a generalization of the Kähler form on a complex manifold, one can define a triplet of two-forms, called the Hyper-Kähler form. Part of the definition of a quaternionic manifold is the requirement that the curvature of the principal SU(2) bundle be proportional to the Hyper-Kähler two-form. It is useful to define the vielbein one-form U Aα , where C αβ is the flat Sp(2m)-invariant metric, and ε AB is the flat Sp(2) ≈ SU(2)-invariant metric. We can express the curvature Ω x in terms of the vielbein. On the other hand, we can easily find the connection ω y by requiring the vielbein to be covariantly closed with respect to both the SU(2) connection ω z and some Sp(2m, R)-Lie algebra valued connection ∆ αβ . This connection has been calculated explicitly in terms of scalar fields in [30], and approximate explicit expressions can be found in [24].
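In the conventions of the general N = 2 literature (the normalization constant λ below is left unspecified, and the index placement is our assumption), the objects just described read

K^x = h_{uw}\,(J^x)^{w}{}_{v}\; dq^u \wedge dq^v , \qquad \Omega^x = \lambda\, K^x , \qquad h_{uv} = U^{A\alpha}_{u}\, U^{B\beta}_{v}\, \epsilon_{AB}\, C_{\alpha\beta} ,

i.e. the Hyper-Kähler two-forms are built from the complex structures and the metric, the SU(2) curvature is proportional to them, and the vielbein reproduces the quaternionic metric.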
Having calculated both Ω x and ω x , we are now able to determine the prepotential P x Λ using (54). To keep the discussion simple, we concentrate hereafter on the specific example with just one non-universal hypermultiplet. We treat our scalar fields q u as the real and imaginary parts of complex fields: (S, C 0 ) from the universal hypermultiplet and (Z 1 , C 1 ) from the non-universal one. The complex field S is a combination of the real scalars V and D introduced previously, defined by S = V + iD. The vielbein U Aα has the form given in [30], built from four one-forms defined in terms of the scalars together with a quantity N. In the vielbein (62), the index A takes values 1 and 2 corresponding to the fundamental representation of Sp(2) ≈ SU(2), and the index α corresponds to the fundamental representation of Sp(4, R) and takes values from 1 to 4. Using (60), we are now able to calculate the curvature entering into (54). With the Sp(4) and Sp(2) invariant metrics written explicitly, (60) yields the curvature, for which we use the matrix notation Ω = Ω x σ x . The Sp(2) connection is given in [30]; in matrix notation we write ω = ω x σ x .
We can read the form of the Killing vector generating the isometries off from the covariant derivative appearing in the reduction to five dimensions, where D = (1/2i)(S − S̄). Changing variables gives the Killing vector explicitly. Anticipating the final result, we assume that only the x = 3 component of P x Λ is non-zero, i.e., P Λ ∝ σ 3 . This is an Ansatz, but it is straightforward to see that in this way we obtain an exact solution. In this case, (54) simplifies. The components Ω x uv and ω x v are easily read off (78) and (79). It is useful to note that, because of (82), the only relevant parts of Ω 3 are those proportional to dS and dS̄. On the other hand, it is only the one-form v which contains dS. This simplifies greatly the whole analysis and, for example, (84) can now be written in a compact form. Finally, we obtain a simple and exact solution for P x Λ . This result and (51) lead immediately to the form of the scalar potential, where F is the so-called kinetic matrix, which is related to the objects in the potential (51). The above potential can be written less formally in terms of the objects defined at the beginning of this paper. In particular, in the case where we have just one non-universal hypermultiplet, the potential can be written in terms of G AB , the inverse metric of the Kähler manifold spanned by the scalars in the vector multiplets, defined in (3), and α A , which has the geometrical interpretation given in (6).
Supersymmetry Breaking Transmission between Walls
We now examine the modifications to the supersymmetry transformation laws for fermionic fields that are induced by the gauging. First we simply list the relevant new parts of the respective transformation laws; here g is, as in [27] and as in earlier parts of this paper, a formal gauge coupling which counts gauging-induced terms. Using (6), (80) and (92), we see immediately that all the above new contributions are Z 2 -odd, and hence discontinuous across, or vanishing on, the fixed planes. This means that they provide no new channels for communication of the supersymmetry breaking from the visible wall to the five-dimensional bulk, or from the bulk to the observable wall, beyond the channels already identified in [24] in the context of ungauged supergravity. Similarly, the covariant derivatives do not introduce any new complication in the analysis, because the fifth component of the new gauge-field term gA Λ k Λ is Z 2 -odd, so that D µ Φ has the same Z 2 properties as ∂ µ Φ, for any field Φ, and hence behaves in exactly the same way.
As a result, the only effects of the gauging on the transmission of supersymmetry breaking are indirect, through the mixing of moduli scalars in the newly-created scalar potential, and possibly via other higher-order interactions in the bulk. We recall that the gauge potential itself is of order κ 4/3 relative to the σ-model interactions which were taken into account in the earlier analysis [24]. Therefore, it seems that the gauging introduces higher-order effects that do not affect qualitatively the previous analysis.
However, one should also bear in mind the qualitatively new possibility that supersymmetry breaks down in the bulk, and that this gets communicated to the walls via the channels discussed previously. Unfortunately, it is not obvious how to generate in this way the hierarchy of supersymmetry breaking required in the observable sector. The only possibility appears to be the introduction of new parameters having the form of generalized Fayet-Iliopoulos terms, through P x Λ → P x Λ + ξ x Λ , see [27] for technical details. However, the analysis of this possibility involves a more complex study of the dynamics of the bulk σ model, that lies beyond the scope of this paper.
To visualize the relevance of the lowest-order solution to the equations of motion in the bulk, even in the presence of sources and nonlinearities, let us consider the equation of motion for the volume modulus of the Calabi-Yau space, S r = V (x 11 ), in the model obtained explicitly in [23,25], freezing all the other variables at some specific expectation values. The sources given there are due to F 2 and R 2 terms on the walls, and are of order κ 2/3 . The simplified Lagrangian gives the equation of motion (103). Since the first derivative of S r is already of order κ 2/3 , the middle term in (103), being quadratic, is of order κ 4/3 , and hence subdominant compared with the other two, at least formally. Thus, at the lowest non-trivial order κ 2/3 , (103) is exactly of the form which has been studied in [22,24], and its solution is given by a linear combination a |x 11 | + b |x 11 − πρ 0 | + c. By choosing the coefficients of this linear combination properly, one recovers exactly the linear part of the full solution announced in [23,25], which corresponds to Witten's solution. It is likely that adding any additional nonlinear terms in the bulk at order κ 4/3 is not going to affect the leading linear behaviour of the background.
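To make the order counting explicit, here is a schematic version of the equation of motion (103) under the assumptions stated above; the coefficients c, J 1 and J 2 are placeholders, not the actual values of the model:

S_r'' - c\,\frac{(S_r')^2}{S_r} = \kappa^{2/3}\left[ J_1\,\delta(x^{11}) + J_2\,\delta(x^{11} - \pi\rho_0) \right] .

For a profile that is linear at order \kappa^{2/3}, the quadratic middle term is of order \kappa^{4/3} and may be dropped; matching the delta-function sources, using \partial_{11}^2 |x^{11}| = 2\,\delta(x^{11}), then gives S_r = a\,|x^{11}| + b\,|x^{11} - \pi\rho_0| + \mathrm{const} with 2a = \kappa^{2/3} J_1 and 2b = \kappa^{2/3} J_2 , which is precisely the quoted linear combination.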
We now complete this analysis by giving the complete equations of motion between walls for the moduli which are relevant for the supersymmetry breaking transmission, S and C i , i = 0, 1, ..., h (1,1) . We consider for simplicity the case when the expectation values of the complex structure moduli Z i are set to zero. The interesting observation is that, after diagonalization of the matrix of second derivatives, the contribution to the equations coming from the scalar potential survives only in the equation of motion for the volume modulus S. It drops out from the equations of motion for the moduli C i . Hence, the only equation which gets modified with respect to the equations considered in [24] is the one which contains S ′′ . The modified equation with the sources involves the moduli with i, j = 0, 1, ..., h (1,1) , and the corresponding boundary conditions on the half-circle are given by the limits at the wall positions, with ̺ v,h determined by the source terms on the walls. Again, exactly as found in [24], one can check that the singularities cancel among themselves. The new term involves G AB , the inverse metric of the Kähler manifold spanned by the scalars from the vector multiplets, defined in (3), and α A , which can be given a geometrical interpretation as in (6). This constitutes an analog of a bulk 'charge' density, and in general depends on the vacuum configuration of the shape moduli t A . We note that, although superficially the scalar potential in the Kählerian frame looks somewhat pathological, with hypersurfaces of singularities and a potential run-away along the direction of S, its contribution to the equations of motion can be, at least in the cases we consider here, quite regular. Using sources on the walls, one obtains, upon integration of the modified equations, configurations which are qualitatively very similar to those discussed in [24]. Hence we expect the main features of the physics of the transmission of supersymmetry breaking to remain unchanged.
When one sets the expectation values of the Z 2 -odd fields C 0,i and of the moduli Z i to zero, one recovers the BPS solution found in the papers [23,25]. However, as in the ungauged case of [24], when a condensate on any wall is switched on, it becomes impossible to set all these expectation values to zero. This means that the actual solution in that case has to depart from the BPS solution found in [23,25]. Unfortunately, it is difficult to find analytically the solution corresponding to non-zero condensates: straightforward numerical integration gives backgrounds which again break all supersymmetries in the bulk. This point merits further study.
We would like to emphasize an important point where the gauged supergravity constitutes an essential advance over the leading linear solution. If one substitutes the simple linear backgrounds for the S and t A fields, as dictated by the leading solutions to the equations of motion, into the ungauged supersymmetry transformations in five dimensions, one finds that supersymmetry is apparently completely broken in the bulk. This does not agree with the original result of Witten [2]. The point is that, when one works directly in eleven dimensions [2], one can cancel the harmful contributions to the supersymmetry transformations by rotating the internal spinor η lying in the space tangent to the Calabi-Yau three-fold. On the other hand, when one works directly in five dimensions, there is no spinor η which could be rotated. Suitable counterterms which would restore supersymmetry must then be added by hand to the transformation rules. This is easier said than done, since one has to worry about the closure of the supersymmetry algebra if one modifies the rules. However, the gauged supergravity comes to the rescue. Supersymmetrizing the gauging introduces corrections to the transformation laws which act precisely in the way one needs. They restore part of the supersymmetry in the bulk, in the spirit of the original eleven-dimensional results of [2].
Conclusions
To summarize the outcome of this analysis, we first recall that the coupling to bulk fields of a source on the wall, such as a gaugino condensate, is already suppressed by a power of κ 2/3 . The effects of the new gauge-related terms in the bulk on the transmission of supersymmetry breaking are formally of higher order, and hence unlikely to change qualitatively the conclusions we reached in [24], working with only the leading-order Lagrangian. They do contribute additional mixing of the scalars and vectors living in the bulk, and one should check further the supersymmetry transformation laws, which are modified. However, as has already been noticed in [23], the corrections to these transformations, which are linear in the non-trivial background, are not only formally of higher order but also discontinuous across the walls, since the background to which they are proportional is itself discontinuous. This means that these corrections do not appear on the walls, and hence do not open up any new channels of communication of supersymmetry breaking from the hidden wall to the bulk, or from the bulk to the fields living on the visible wall, beyond those already identified in our earlier paper [24].
We observe that the origins of the non-trivial backgrounds of certain five-dimensional zero modes, such as the real part of S which represents the Calabi-Yau volume, are traceable to non-trivial sources living on the walls. These are coupled to zero modes that change quasilinearly across the bulk. The roles of such sources, which we studied in our previous paper in the leading-order Lagrangian, continue to hold to leading order also in the presence of the terms associated with the gauging, as do our conclusions.
In connection with the analysis of the transmission of supersymmetry breaking in the presence of gauging, we have found the extension of the gauged supergravity model of [23,25] which includes a minimal sector of zero modes associated with (2, 1) forms on the Calabi-Yau space, which manifest themselves as non-universal hypermultiplets in five dimensions. In particular, we have determined the way in which the non-universal multiplets couple to gaugino condensates, which are a primary candidate for hierarchical supersymmetry breaking in the framework of M theory.
The results of this paper open the way to a more phenomenological analysis of the transmission of supersymmetry breaking from the hidden wall to the observable wall through the bulk.
Conservation Gaps in Traditional Vegetables Native to Europe and Fennoscandia
Vegetables are rich in vitamins and other micronutrients and are important crops for healthy diets and diversification of the food system, and many traditional (also termed underutilized or indigenous) species may play a role. The current study analyzed 35 vegetables with a European region of diversity in an effort to map their conservation status in Fennoscandia and beyond. We mapped georeferenced occurrences and current genebank holdings based on global databases and conducted a conservation gap analysis based on representativeness scores in situ and ex situ. Out of the 35 target species, 19 received a high priority score for further conservation initiatives, while another 14 species received a medium priority score. We identified a pattern where traditional vegetables are poorly represented in genebank holdings. This corresponds well to a lack of attention in the scientific community, measured in the number of published papers. Considering the grand challenges ahead in terms of climate change, population growth and demand for sustainability, traditional vegetables deserve greater attention. Our contribution is to provide a basis for conservation priorities among the identified vegetable species native to Fennoscandia.
Introduction
The Russian scientist N.I. Vavilov linked crop diversity to the region of domestication and established the concept of "centers of origin" [1], later termed "regions of diversity" [2]. To develop and maintain food production we need genetic diversity, and in addition to the variety found in cultivars and landraces, crop wild relatives represent such diversity. They can be gene sources for pest and disease resistance but also for more robust plants adapted to a more unpredictable climate [3][4][5]. Khoury et al. [6] pointed to conservation of genetic resources as a global concern; people consume food with genetic resources from outside their borders, often from other continents, so conservation of and access to plant genetic resources is an international issue. Countries have committed themselves to safeguard genetic resources [7] and each country aims to map and conserve its diversity. Details on access are regulated under the International Treaty for Plant Genetic Resources for Food and Agriculture [8].
The northern part of Europe was not included in any of the centers proposed by Vavilov; however, Zeven and Zhukovsky [2] included a European-Siberian region of diversity, which covered Central, Northern and Eastern Europe, and Russia towards Mongolia and Kazakhstan. They linked several forage grasses and legumes, but also vegetables, herbs and spices, and fruit and berries to this region. As this is a huge region, we narrowed the focus of this paper to vegetables, herbs and spices native to Fennoscandia. Fennoscandia is located between the Atlantic Ocean and Eurasia and includes Norway, Sweden, Denmark, Finland and a part of North-Western Russia. Precambrian granites and gneisses dominate the bedrock, and boreal forest grows over large parts of the area and continues eastwards into Siberia [9]. High mountains in the west have a large impact on precipitation, and there is a maritime climate in the west and a more continental one in the east. Agriculture is located mostly along the coastline and in valleys but also in forests. Outlands are grazed by sheep and reindeer, and to a lesser extent by cattle.
Vegetables are among the plants especially important for healthy diets and food security [10]. Traditional vegetables (also termed underutilized or indigenous vegetables) represent a huge potential for diversification and are often grown in household gardens [11]. Such vegetables, or their wild ancestors, are present throughout Europe. The plants have been collected and domesticated, and for many of the species cultivation has since disappeared. For Fennoscandia, root and bulb plants were especially valuable as they could be stored and used during the long winter, but leaf vegetables were valuable too as they were used fresh during the summer [12]. Today, fresh vegetables are year-round commodities, but local production and plant traditions are still important. Promotion of recipes with local ingredients is popular, both as a marker of identity and as a way of marketing a region or a country [13]. Furthermore, it is a way of appreciating local or indigenous knowledge. In Fennoscandia, we know that the Sami people collected plants, such as angelica (Angelica archangelica L.) and common sorrel (Rumex acetosa L.), and they later started to cultivate these vegetables [14].
Taking a bird's-eye view, one could argue that Fennoscandia and Northern Europe are not an important region for crop diversity, especially if compared to mega-centers, such as Central Asia, the Mediterranean and Latin America [15]. We still wanted to review Fennoscandia with a focus on vegetables, also including herbs and spices. Few studies have been carried out on such species from the region and we believe the region could harbor valuable genetic resources due to its northern location. The species in focus include both well-recognized crops and species that are partly domesticated or previously cultivated but forgotten today. We included annuals, biennials and perennials. Cultivation of most species came to Fennoscandia from the south or east, like the cultivation of carrot (Daucus carota L.) and beet (Beta vulgaris L.), which were domesticated in the Mediterranean, and asparagus (Asparagus officinalis L.), whose primary center of origin is on the salt-steppes of Eastern Europe or Western Asia [2]. These species are global vegetables today but have genetic resources present in Fennoscandia, also as wild or semi-wild populations. These populations are the northernmost in the world; thus, they may harbor valuable traits for breeding for crop adaptation.
Different approaches have been suggested for conducting gap analysis for genebanks. Focused Identification of Germplasm Strategy (FIGS) is one, and it works on the premise that adaptive traits in the material mirror the environmental conditions of their place of origin [16]. Diversity can be maximized by sampling accessions based on their geographic regions and collections could be built to cover a range of environments [17]. Another approach that shares some of the same premises is to use spatial analysis [18] but to include both ecological, geographical and sampling representativeness to calculate a final conservation score both in situ and ex situ [19]. For the present study we chose the latter approach.
As far as we know, no gap analysis of vegetables native to Europe and especially Northern Europe has been done. Work has been done on temperate grasses and pulses [20] and on crops with origin in other parts of the world, such as potato [21], cowpea [22], beans [23], eggplants [24], wild cucurbits [25] and wild chili pepper [19]. Traditional vegetables and their genetic resources have been promoted as important in tropical and subtropical regions [26,27] but seldom in regions such as Europe and Fennoscandia.
The current study aims at producing new insight into traditional Fennoscandian vegetables and their conservation status. We wanted to identify priority species and priority areas for germplasm collection missions, in order to safeguard diversity with a focus on Fennoscandia and beyond. We wanted to highlight the value of in situ conservation and to recognize genetic resources as an ecosystem service. We did not aim to attach economic value to such a service but just to pinpoint that plants growing in the forests and mountains are of interest not only for the local communities but also on a global scale, as plants harbor traits of importance for future crop improvement and for increased resilience and diversification of our food systems.
Selection of Target Species
We selected target species on the basis of previous literature and on geographical criteria. One key source of information was the study of Zeven and Zhukovsky [2], who listed around 200 species belonging to the European-Siberian region of diversity (Table S1 (Supplementary Materials)). We categorized these species according to use and selected the vegetables, herbs and spices for further consideration. The next selection criterion was nativity to Fennoscandia. We used the Global Biodiversity Information Facility (GBIF) [28] and Mossberg and Stenberg [29] for this. We also removed hybrids, such as Mentha × piperita L. and Mentha × rotundifolia (L.) Huds., due to taxonomic uncertainties in the wild populations of these hybrids. We ended up with 35 target species of vegetables that we could term Fennoscandian traditional vegetables (Table 1).
Table 1 column headings: Species (Common Name); Annex 1 A; Comments on Use B.
We categorized the species according to the primary products used (leaf, bulb, tuber, root, stem, flower, seed), their cultivation status (widely cultivated, previously cultivated/rarely cultivated), and according to FAO's Annex 1 list of priority species for conservation under the International Treaty for Plant Genetic Resources for Food and Agriculture [8].
Data Collection and Validation
We surveyed the occurrences from The Global Biodiversity Information Facility (GBIF) [28], applying the scientific names and countries as filters in the search function. GBIF occurrences can be from natural populations, herbarium specimens, seed collections, or other records. The total number per species and clusters of georeferenced occurrences were compiled by using the map function of the GBIF website. The information was used to illustrate where natural populations are expected to be found. We surveyed the global ex situ genebank holdings using the GENESYS online facility, the Global Gateway to Genetic Resources [30]. We applied the scientific names, countries of origin, and biological status of accessions as filters in the search functions and restricted the search to wild accessions only, comprising natural, semi-natural/wild, and semi-natural/sown. The top two provenance countries for each species were compiled from the summary function of the database, as were the Fennoscandian countries Norway, Sweden, Finland and Denmark. A reported accession is a seed sample or another conserved propagation material unit maintained in genebanks that are reporting to GENESYS. Many genebanks, and especially the large ones, such as the Nordic genebank, report to the facility.
We then collated the occurrence data of the target species from both GBIF and GENESYS. Presence data from GBIF were programmatically downloaded using the R [31] package "rgbif" [32]. Presence data from GENESYS were downloaded manually from the web platform applying the filter "wild" on the biological status of the accession. For the data cleaning and filtering we applied the following criteria: (i) remove occurrences with missing geographical coordinates, or outside the reported administrative boundaries [33]; (ii) remove locations reported to be collected before 1960 to match with the baseline climate; (iii) remove locations from country centroids or with no decimals as it is a sign that these points were only taken at the country level and had low precision; (iv) re-assign coordinates located in coastal waters to their nearest location on the coastline using a 10 arc-min buffer; (v) remove duplicated records within the same grid cell at a 5 arc-min resolution [34].
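As an illustration of steps (i)-(v), here is a minimal R sketch operating on a generic occurrence table with Darwin Core style columns (decimalLongitude, decimalLatitude, year). The column names and the simple grid-cell helper are assumptions for illustration, not the exact code used in the study, and step (iv), which requires a coastline layer, is omitted.

# occ: data.frame with columns decimalLongitude, decimalLatitude, year
clean_occurrences <- function(occ, min_year = 1960, cell_deg = 5 / 60) {
  # (i) drop records with missing coordinates
  occ <- occ[!is.na(occ$decimalLongitude) & !is.na(occ$decimalLatitude), ]
  # (ii) drop records collected before the climate baseline year
  occ <- occ[!is.na(occ$year) & occ$year >= min_year, ]
  # (iii) drop low-precision records whose coordinates have no decimals
  occ <- occ[occ$decimalLongitude %% 1 != 0 | occ$decimalLatitude %% 1 != 0, ]
  # (v) keep a single record per 5 arc-min grid cell
  cell <- paste(floor(occ$decimalLongitude / cell_deg),
                floor(occ$decimalLatitude / cell_deg))
  occ[!duplicated(cell), ]
}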
For each target species, the relative importance of the target region (here Fennoscandia) was calculated. Narrowing this down further would be difficult due to the lack of georeferences in the GENESYS ex situ conservation database. We therefore extracted the number of wild (natural, semi-natural or semi-wild) accessions with provenance in Norway, Sweden, Finland and Denmark in the database.
To track the number of scientific publications for the target species, we used the Web of Knowledge Core Collection [35]. The facility covers more than 12,000 international journals in the areas of the natural and social sciences and arts and humanities. We did not restrict our searches to any period, language or publication type and searched for the scientific names (with AND between genus and species) with the topic (TS command). The topic in Web of Science includes words from titles, abstracts and key words. Thus, topic will always catch a higher number of records than title searches.
Data Analysis
The distribution of the target species was modeled within longitudes −25 and 60 and latitudes 34 and 71. The analyses were performed in R [31,37] applying an ensemble method for species distribution modeling (SDM) implemented by the R package "BiodiversityR" [38]. The procedure consists of calculating the suitability as a weighted average of all probabilities predicted by the SDM algorithms [39] following three steps: (i) calibrate the models by assessing the performance of all SDM algorithms in a 10-fold cross-validation and computing the area under the curve (AUC) [40][41][42]; (ii) retain the algorithms that contributed at least 5% to the whole ensemble of models measured with the weighted AUCs from each algorithm; (iii) generate the suitability maps using the predictions from the selected algorithms in step 2. To generate the presence-absence maps, we converted the ensemble suitability from step 3 using the threshold of maximum specificity + maximum sensitivity [43]. In Figure S1 (Supplementary Materials) we show the AUC values for the selected algorithms for each ensemble of the target species.
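A schematic R illustration of the weighted-average ensemble and the maximum sensitivity + specificity threshold described above; the inputs suit (a matrix of per-algorithm suitability predictions at evaluation points), auc (their cross-validated AUC values) and obs (1 = presence, 0 = background) are hypothetical, and this is a simplified sketch rather than the BiodiversityR implementation itself.

ensemble_presence <- function(suit, auc, obs) {
  # step (ii): keep algorithms contributing at least 5% of the total AUC weight
  w <- auc / sum(auc)
  keep <- w >= 0.05
  w <- w[keep] / sum(w[keep])
  # step (iii): weighted-average ensemble suitability
  ens <- as.vector(as.matrix(suit)[, keep, drop = FALSE] %*% w)
  # presence-absence threshold maximizing sensitivity + specificity
  ths <- sort(unique(ens))
  score <- sapply(ths, function(t) {
    pred <- ens >= t
    sum(pred & obs == 1) / sum(obs == 1) + sum(!pred & obs == 0) / sum(obs == 0)
  })
  list(suitability = ens, threshold = ths[which.max(score)])
}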
Gap Analysis
For the gap analysis we followed the methods described by Khoury et al. [19] using the R package "GapAnalysis" [44]. Four scores were calculated for the ex situ and in situ conservation statuses: the sampling representativeness score (SRS), the geographic representativeness score (GRS), the ecological representativeness score (ERS) and the final conservation score (FCS). The metrics range from 0 to 100, where a score of 0 means a poor conservation state and 100 a well-conserved state. For this we used the distribution models described in the previous section, with the records from GBIF and GENESYS.
The SRS ex situ provides a general indication of how comprehensive the collections in genebanks are. It compares the total number of reported germplasm accessions (G) available in GENESYS against the number of reference (H) records in GBIF, where an ideal ratio would be 1:1. We used all references from GBIF and germplasm records from GENESYS, regardless of the existence of geographical records. The GRS ex situ measures the proportion of the geographical range comprised by the species distributions that is conserved in genebanks. We created buffers of 0.5 arc-degrees around each genebank collection site and estimated the areas where genebank accessions were collected within the modeled geographical range of the species. The ERS ex situ assesses the proportion of ecoregions that are represented in the genebank collections. Ideally, a species is well conserved when it has reported collections for every potential ecoregion covered by its geographical distribution. To estimate this index we used the TNC (The Nature Conservancy) terrestrial ecoregions [45]. We obtained the ex situ final conservation score (FCS ex situ) by taking the average of the previous ex situ metrics.
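A condensed R sketch of how the three ex situ metrics combine into FCS ex situ; the exact normalization of SRS (here capped at the ideal 1:1 ratio) follows our reading of the method and should be treated as an assumption rather than the exact GapAnalysis code.

# g: germplasm accessions (GENESYS); h: reference records (GBIF)
# area_buf / area_range: modeled range area within 0.5-degree collection buffers / total range area
# eco_buf / eco_range: ecoregions represented in the buffers / in the whole modeled range
fcs_exsitu <- function(g, h, area_buf, area_range, eco_buf, eco_range) {
  srs <- if (h == 0) 0 else min(g / h, 1) * 100
  grs <- min(area_buf / area_range, 1) * 100
  ers <- min(eco_buf / eco_range, 1) * 100
  mean(c(srs, grs, ers))
}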
To assess the in situ conservation we applied the four metrics described previously, focusing on the state of conservation of the target species within officially protected areas. In situ conservation maintains genetic variation in its natural environments. The current system with nature reserves and national parks tends to focus on habitats or ecosystems rather than specific plant genetic resources. However, plant genetic resources may also be found outside protected areas. Thus, we should be aware that limiting the analysis of in situ conservation to officially recognized protected areas may capture only a fraction of the biodiversity found in situ. We used the World Database of Protected Areas (WDPA) [46], selecting all terrestrial and coastal protected areas within the study region. For the SRS in situ we computed the proportion of GBIF occurrences that falls within the protected areas. For the GRS in situ we compared the proportion of area of the modeled distribution for each species that is covered by a protected area. The ERS in situ compares the variation in ecoregions that comprises the distribution range of each species within the protected areas. The in situ final conservation score (FCS in situ) was computed by taking the average of the previous in situ metrics. We computed the Final Conservation Score by averaging all ex situ and in situ scores. As applied by Khoury et al. [19], we categorized the final conservation score into priority categories for further conservation as follows: (i) high priority when scoring <25; (ii) medium priority when scoring ≥25 to <50; (iii) low priority when scoring ≥50 to <75; and (iv) sufficiently conserved when scoring ≥75.
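Finally, the combined score and priority category, using the thresholds given above; we assume here that "averaging all ex situ and in situ scores" means taking the mean of FCS ex situ and FCS in situ.

conservation_priority <- function(fcs_ex, fcs_in) {
  fcs <- mean(c(fcs_ex, fcs_in))
  category <- if (fcs < 25) "high priority" else if (fcs < 50) "medium priority" else if (fcs < 75) "low priority" else "sufficiently conserved"
  list(fcs = fcs, category = category)
}
# Example: conservation_priority(30, 15) returns a score of 22.5, i.e. "high priority".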
Genebank Holdings and Ex Situ Conservation Gaps
A total of 39,541 accessions were found in GENESYS for the targeted 35 vegetables, and of these 5968 were classified as wild (including wild/natural and semi-wild/semi-natural). Furthermore, less than 64% of these wild accessions were georeferenced, with an average of 53% (±27%) georeferenced accessions per species. The provenance countries of the accessions are provided in Table 2.

Table 2. Overview of georeferenced GBIF occurrences of the target species, with the main cluster of occurrences and the total number of records in GBIF, together with an overview of the total number of accessions in the GENESYS genebank holdings, how many of these are classified as wild (natural, semi-natural or semi-wild), and how many of these wild accessions are from Norway (NOR), Sweden (SWE), Finland (FIN) and Denmark (DEN). The numbers of georeferenced accessions are given in brackets.

We next examined gaps in the ex situ conservation system of genetic resources of the target species as reported by GENESYS. First, groundnut peavine (Lathyrus tuberosum L.) had no accessions at all in the database, neither of cultivated nor of wild types. Thereafter, leafy goosefoot, master-wort, allgood (Chenopodium bonus-henricus L.) and tuberous-rooted chervil had fewer than 30 accessions in total in GENESYS. Looking at the proportions of wild (wild/natural and semi-wild/semi-natural) compared to cultivated biological status of the material (commercial cultivars, landraces and breeding lines), a low representation of wild material (less than 3% of the total number of accessions) was detected for swede/rapeseed (Brassica napus (L.) Mill.) and turnip, followed by horseradish (Armoracia rusticana (Lamm.) Gaertner. Mey and Scheb.), garden orache, garden lovage and black mustard (Brassica nigra (L.) Koch.), all with 3-15% wild accessions. At the other end, with more than 75% wild accessions, we found watercress (Nasturtium officinale R. Br.), great burnet (Sanguisorba officinalis L.), sea kale, Rampion bellflower (Campanula rapunculus L.) and broad-leaved dock.
A clear correlation (R 2 = 0.74) was found between GENESYS records, in the form of the number of accessions per species, and publication records in the ISI Web of Science (Figure 4). The minor crops were poorly represented, and the lowest numbers of publications were for groundnut peavine (four publications), garden myrrh (7), tuberous-rooted chervil (13), Rampion bellflower (16), leafy goosefoot (16), sand leek (21), sea kale (24) and allgood (25). Eleven species were represented with more than 800 publications each, led by swede/rapeseed (16, ...). Looking closer at the Fennoscandian provenance, no wild accession from this region was detected in GENESYS for swede/rape, green mint, garden sorrel, leafy goosefoot, tuberous-rooted chervil, patience dock, small burnet (Sanguisorba minor Sp.), watercress, great burnet, master-wort, Rampion bellflower and broad-leaved dock. Species with a low representation from Fennoscandia, where less than 2% of the wild accessions in GENESYS were from Fennoscandia, were hops, turnip, black mustard and carrot. At the other end of the scale, beet, asparagus, parsnip, angelica, caraway and garden myrrh had a significant proportion of wild accessions from Fennoscandia.
Conservation Status
The analysis of the conservation gaps showed that 19 of the 35 target vegetables fell into the high priority category for Europe, based on the final conservation score across the in situ and ex situ scores as explained in the Materials and Methods section (Figure 5). Another 14 target species were scored within the medium priority category, while only carrot and beet scored in the low priority category.
Discussion
Our point of departure was that conservation of plant genetic resources is a global concern where relevant scientific communities in each country need to take part. There are several uncertainties about the introduction of plants to Northern Europe as references are rare prior to the 16th century [63][64][65]. Plants may have been introduced but escaped from cultivation and now exist as semi-wild populations. We showed this pattern in the predicted species richness, where the higher diversity is found in northwestern Europe. We could regard such populations as part of our biocultural heritage, as they have survived for hundreds of years and thus have good traits for adaptation and hardiness. Many accessions have been safeguarded in ex situ genebanks, and wild plants continue to survive in situ, at least if their natural environments remain unchanged. Still, there is much to be done, especially for minor crops such as traditional vegetables.
With the gap analysis (Figure 5) we showed that only two out of the 35 target crops could be classified as low priority crops for further conservation. The low rate of georeferenced accessions reported to GENESYS (~53% per species) may limit our understanding of the full conservation status of the 35 target vegetables in Europe. Uncertainty remains as to whether the conservation scores could be higher if all the sampled data were complete. This calls not only for prioritizing collection missions for these species, especially the 19 with high priority, but also for best practices in reporting accessions and in the data management of existing information, to prevent duplicated efforts in new collections.
There are different explanations for the finding that many of our traditional target vegetables are poorly represented in genebanks. One explanation is that minor or underutilized crops are simply not in focus. One could argue that such crops have little value. Our results showed that the species with the highest number of publication records were those with the highest number of conserved accessions, and vice versa. Traditional vegetables on the high priority list, such as master-wort, leafy goosefoot and tuberous-rooted chervil, have no or very few accessions, and this holds across all genebanks reporting to GENESYS. The mentioned species are also relatively rare to find as wild populations (Table 2, Figure 1). Some details on a few of the species: master-wort (belonging to the Apiaceae) has been cultivated in Fennoscandia and used to flavor beverages or for medicinal purposes [66]. Recent research has identified bioactive constituents such as different furanocoumarins [67]. The plant is said to have been introduced to Fennoscandia in mediaeval times. It is very rare to find today and grows in small populations in old meadows or close to farmhouses [68]. Few measures have been taken so far to conserve these populations. Leafy goosefoot was included in Rudbeck's Catalogus plantarum published in 1658 about Swedish wild or semi-wild plants [69]. It was previously cultivated as a leafy vegetable, but cultivation has disappeared. Today the plant is rare to find and most populations are in or close to cultivated landscapes and in the southern parts of Fennoscandia. Tuberous-rooted chervil (an Apiaceae) could be an alternative to potato and is rich in starch, with a dry matter content exceeding 35% [70]. In France, tuberous-rooted chervil is considered a gourmet vegetable and initiatives have been made to extend cultivation by breeding varieties of the plant [71]. In Fennoscandia, the plant has most likely been introduced and is naturalized but rare, and most of these populations are found in the southern landscapes. A closely related species, often termed a subspecies C. prescottii, is widespread especially in northern areas of Fennoscandia and seems to have a Russian origin [72].
Other species on the high priority list were garden orache, garden lovage, sand leek, and the two Mentha species, green mint and apple mint (M. suaveolens Ehrh.), the latter with a very limited distribution in Fennoscandia. Garden orache has been known from the area since medieval times [69]. According to Linné [73], the plant was common as a weed but has now almost disappeared. Garden lovage was described as naturalized in Sweden in the 18th century but was also introduced to Fennoscandia.
Wild leek (Allium ampeloprasum L.), watercress, patience dock and Rampion bellflower are high priority species but very rare to find as wild populations in Fennoscandia. Horseradish and hops are common but are maintained in clonal archives. Such archives were established in the Nordic countries by the national programs for plant genetic resources, but they only maintain genotypes with a documented cultivation history and not wild/semi-wild plants [74]. Allgood and groundnut peavine received a medium final score. Up to the 19th century groundnut peavine was grown in Europe as a vegetable used for its edible tubers [75]. Now it is more of an ornamental plant but has potential for use in diversification of horticulture. Allgood (belonging to the Amaranthaceae) is another priority species. The plant has been used as a potherb and a medicinal plant. In Sweden, people used the leaves against pancratium [76], and the roots have been used to treat diarrhea and lung infections [77]. Today the plant is very rare in Fennoscandia and to our knowledge is in decline.
As our overview has shown, some species are poorly represented in genebanks. Conserving more than a few accessions per species is of great value, as different accessions may have different nutrient contents [78] as well as different adaptation traits. Thus, collections from Fennoscandia might have a special value in a global context. A few plants were domesticated in Fennoscandia, such as angelica, but the majority were introduced, most likely through trading routes and monasteries in medieval times [79]. Little is known about the details. However, it is documented that certain species tend to have their habitats at historical sites [80,81]. Ethnobotanical studies are relevant for understanding the distribution and use of plants. The targeted vegetables have been necessities used as food and spices but also for other purposes. For Fennoscandia the work of Høeg [12] is especially valuable for compiling traditional plant knowledge. For the eastern and southern parts of Europe the works of Pieroni, Soukand and Dogan are important [82][83][84][85][86][87]. Luczaj et al. [88] presented an overview of the changes in the contemporary use of wild food plants in Europe using examples from Poland, Italy, Spain, Estonia and Sweden. A general decline in use and knowledge has been identified across Europe. Our concern is that such a lack of knowledge may lead to a lack of care. Habitat destruction is more likely to take place when knowledge is lacking. Indigenous people had ways to secure plant populations. They also used a wide range of species [89,90]. Starting to use our bio-cultural heritage is positive, as shown by the New Nordic Cuisine [13]. However, over-exploitation is a risk factor if wild species are commercialized without being put into agricultural production or a proper sustainable harvesting regime. Independent of today's situation, conservation of genetic resources is important for future generations.
Conclusion
We have demonstrated that traditional vegetable genetic resources are sparsely represented in genebank collections. A hundred years after Vavilov published his ideas on centers of origin, we still have a long way to go in Europe. Gaps have been identified. Traditional vegetables have a great potential for healthy diets, for diversification and for local innovation and food culture [11,91,92]. Conservation action and promotion of such crops should be carried out. The impacts of climate change on current habitats and current production systems are not fully understood [3,93,94]. The importance of diversity seems to be increasing.
Supplementary Materials:
The following are available online at http://www.mdpi.com/2077-0472/10/8/340/s1. Table S1: Excel file with complete list of species with a European region of diversity.
Funding: This work was supported by Interreg Sweden-Norway, the European Regional Development Fund.
|
2020-08-13T10:07:53.150Z
|
2020-08-06T00:00:00.000
|
{
"year": 2020,
"sha1": "0d1f4b67a4d49affb37873f8a8c7f738924b508e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0472/10/8/340/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "db2e63281859b1a025cac722bf88fc3053942f01",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
}
|
11274149
|
pes2o/s2orc
|
v3-fos-license
|
Effect of cisplatin on in vitro production of lipid peroxides in rat kidney cortex.
Cisplatin (cis-diamminedichloroplatinum II), an antitumor agent with a dose-limiting adverse effect of nephrotoxicity, increased lipid peroxidation in a time- and concentration-dependent manner in rat renal slices incubated in vitro. The addition of an antioxidant, N-N'-diphenyl-p-phenylenediamine (DPPD), to the incubation medium completely inhibited this increase. We also studied the in vitro effects of agents that modify cisplatin nephrotoxicity on lipid peroxidation in the slices caused by cisplatin. Mannitol, which protects against cisplatin nephrotoxicity, almost completely inhibited the increase in lipid peroxidation caused by cisplatin. Methionine, which potentiates cisplatin nephrotoxicity, made the slices more susceptible to peroxidation. The decrease with cisplatin in p-aminohippurate (PAH) accumulation in incubated kidney cortical slices, the accumulation being a representative biochemical process in the transport ability of renal cells, was partially inhibited when DPPD was in the medium. The results suggested that cisplatin directly affected renal tissues in which free radicals generated by cisplatin may interact with membrane lipids to cause the production of lipid peroxides that damage membrane function. Compounds that modify cisplatin nephrotoxicity such as mannitol and methionine may act by affecting the production of renal lipid peroxides by cisplatin.
Cisplatin (cis-diamminedichloroplatinum II) is an important chemotherapeutic agent. It is highly effective in the treatment of ovarian, testicular, and bladder carcinomas, cancers of the head and neck, and other malignancies (1). The most important adverse effect of cisplatin, which limits the doses that can be used, is nephrotoxicity (2)(3)(4)(5). The mechanism responsible for this nephrotoxicity is not known. Orgotein, which has superoxide dismutase activity, ameliorates the nephrotoxicity of cisplatin (6). Pretreatment of rats with an antioxidant, α-tocopherol or N-N'-diphenyl-p-phenylenediamine (DPPD), decreases the cisplatin-induced nephrotoxicity (7). These results suggest that the production of free radicals may be responsible for the nephrotoxicity. The injection of cisplatin into rats increases lipid peroxides in the kidney (8). Abnormal levels of free radicals react with polyunsaturated fatty acids and cause an increase in lipid peroxides (9, 10). An increase in the lipid peroxidation of polyunsaturated fatty acids in plasma and subcellular membranes may cause cellular disturbances. Lipid peroxidation has been implicated in the development of renal injury caused by cephaloridine, mercuric chloride, gentamicin, and ischemia and reflow (11)(12)(13)(14).
In this study, we examined the effects of cisplatin on lipid peroxidation in renal cortical slices and measured p-aminohippurate (PAH) accumulation in the slices as a biochemical index for an in vitro evaluation of cellular damage. We investigated whether DPPD, a powerful antioxidant (15, 16), had an effect on the changes in lipid peroxidation and PAH accumulation caused by cisplatin. We report that cisplatin caused a significant increase in lipid peroxidation and a decrease in PAH accumulation in the renal cortical slices in vitro and that the antioxidant inhibited these changes.
Materials and Methods
Preparation of renal slices: Male Sprague-Dawley rats weighing about 200 g were used. They were fed standard chow and given free access to water. The rats were decapitated, and the kidneys were rapidly removed, decapsulated and placed in ice-cold isotonic saline medium (0.9% NaCl). Thin renal slices (0.3-0.4 mm in thickness) were prepared free-hand with a razor blade on an ice-cold Petri dish. The slices were immersed in the chilled saline medium until use.
Measurement of lipid peroxidation in the slices: Renal slices weighing about 200 mg were incubated at 37°C in 5 ml of incubation medium containing 150 mM KCl and 20 mM Tris-HCl buffer, pH 7.4, with or without cisplatin and another agent such as DPPD. The measurement of lipid peroxidation in renal slices was conducted in Tris-HCl buffer rather than Hepes buffer because slices incubated in the former had a lower baseline value for lipid peroxides than slices incubated in the latter, although the production of lipid peroxides in the incubated slices was affected by cisplatin to the same extent in both buffers. After incubation, the slices were removed, weighed, and homogenized in the same medium. Lipid peroxidation in homogenates prepared from renal tissues was monitored by measuring malondialdehyde production with the thiobarbituric acid assay described by Buege and Aust (17).
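As a minimal sketch of how a thiobarbituric acid readout can be converted into the units reported here (nmol malondialdehyde per g tissue), the following Python snippet applies the Beer-Lambert relation. The molar extinction coefficient of 1.56 × 10^5 M^-1 cm^-1 is the value commonly associated with this assay, and the assay volume and absorbance shown are purely illustrative assumptions, not values taken from this study.

```python
def mda_nmol_per_g(a532, assay_volume_ml, tissue_mass_g,
                   extinction=1.56e5, path_cm=1.0):
    """Convert TBA-assay absorbance at 532 nm into nmol malondialdehyde per g tissue.

    Assumes Beer-Lambert behaviour of the MDA-TBA adduct; `extinction` is the
    molar extinction coefficient (M^-1 cm^-1) commonly used with this assay.
    """
    molar = a532 / (extinction * path_cm)                  # mol/L of adduct in the assay
    nmol_total = molar * (assay_volume_ml / 1000.0) * 1e9  # total nmol in the assay volume
    return nmol_total / tissue_mass_g

# Illustrative numbers only (not data from this study):
print(mda_nmol_per_g(a532=0.26, assay_volume_ml=4.0, tissue_mass_g=0.2))
```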
PAH accumulation during the incubation of the slices: The experimental procedures for the measurement of PAH accumulation in the slices were essentially the same as described elsewhere (18). Renal cortical slices weighing approximately 150 mg were incubated at 37°C in 10 ml of incubation medium containing 134 mM NaCl, 5.9 mM KCl, 1.5 mM CaCl2, 1.2 mM MgCl2, 11.5 mM glucose, 5.8 mM N-2-hydroxyethylpiperazine-N'-2-ethanesulfonic acid buffer (Hepes) (pH 7.4), 0.074 mM PAH and 1% inulin, the last added to estimate the extracellular space of the slices. After incubation, samples of the slices and medium were used for spectrophotometric analyses of PAH and inulin by the methods of Bratton and Marshall (19) and Roe et al. (20), respectively. Accumulated PAH was expressed as the slice-to-medium concentration ratio (S/M) for PAH.
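The slice-to-medium ratio itself is a simple quotient. The sketch below shows one common way of computing it, with an optional correction of the tissue PAH content for the extracellular space estimated from inulin; the correction step and all numbers are illustrative assumptions rather than the exact calculation used in the cited method.

```python
def slice_to_medium_ratio(pah_tissue_umol_per_g, pah_medium_umol_per_ml,
                          inulin_space_ml_per_g=0.0):
    """Slice-to-medium (S/M) concentration ratio for PAH.

    Optionally subtracts the PAH attributable to the extracellular space,
    estimated from the inulin distribution (ml of extracellular fluid per g slice).
    """
    extracellular_pah = inulin_space_ml_per_g * pah_medium_umol_per_ml
    intracellular_pah = pah_tissue_umol_per_g - extracellular_pah
    return intracellular_pah / pah_medium_umol_per_ml

# Illustrative values only: tissue PAH 0.60 umol/g, medium PAH 0.074 umol/ml (0.074 mM),
# inulin space 0.3 ml/g.
print(slice_to_medium_ratio(0.60, 0.074, inulin_space_ml_per_g=0.3))
```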
Chemicals and statistics: Cisplatin was purchased from Sigma Chemical Co. (St. Louis, MO). DPPD was obtained from Tokyo Kasei Kogyo Co., Ltd. (Tokyo, Japan) and dissolved in 99.6% methyl alcohol, which was added to the medium at a final concentration of 0.4%. Other chemicals were of the highest purity available from commercial sources.
Data are expressed as means ± S.E. Statistical analysis was done by analysis of variance, and multiple comparisons against the control were evaluated with Dunnett's test (21); differences were considered significant at P < 0.05.
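For readers who want to reproduce this type of comparison, the following hedged Python sketch runs a one-way analysis of variance followed by Dunnett's test against the untreated control group. It assumes SciPy 1.11 or later (which provides scipy.stats.dunnett), and the group values are invented for illustration, not taken from the study.

```python
import numpy as np
from scipy import stats

# Illustrative malondialdehyde values (nmol/g tissue); not the study's raw data.
control   = np.array([32.1, 34.0, 33.5, 33.0])
cisplatin = np.array([44.8, 47.2, 46.1, 45.5])
cis_dppd  = np.array([33.2, 32.8, 34.1, 33.6])

# One-way analysis of variance across all groups.
f_stat, p_anova = stats.f_oneway(control, cisplatin, cis_dppd)

# Dunnett's test: each treatment group vs. the untreated control (SciPy >= 1.11).
res = stats.dunnett(cisplatin, cis_dppd, control=control)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
for name, p in zip(["cisplatin", "cisplatin + DPPD"], res.pvalue):
    print(f"{name} vs control: p = {p:.4f}")
```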
Results
Cisplatin-induced lipid peroxidation in slices: In vitro studies are useful for investigating whether cisplatin directly affects the production of lipid peroxides in tissues. We used renal slices incubated with or without 2 mM cisplatin at 37°C for various periods of time (Fig. 1). Although incubation of the slices with the drug produced no change in lipid peroxidation up to 30 min, incubation for 60 min produced a significant increase. When incubation of the slices with cisplatin was carried out for 120 min, the increase in lipid peroxidation was concentration-dependent (Fig. 2).
Incubation of the slices without either compound resulted in the production of 33.13±1.04 nmol of malondialdehyde/g tissue, which we took as the baseline value. The extra lipid peroxidation caused by 2 mM cisplatin was completely inhibited by 1 µM DPPD (Table 1).
It has been reported that mannitol diuresis is useful in reducing the nephrotoxicity of cisplatin (22, 23). Figure 3 shows that the presence of 2 mM mannitol in the incubation medium, a concentration reported to be sufficient to scavenge free radicals generated by a nephrotoxic drug (24), almost completely inhibited the increase in lipid peroxidation caused by cisplatin. The production of lipid peroxides by cisplatin was further accelerated in the presence of 4 mM methionine, although the addition of methionine (4 mM) had no effect on lipid peroxidation in the absence of cisplatin (Fig. 4). This acceleration by methionine was observed at concentrations of the amino acid from 0.5 to 10 mM in a concentration-dependent manner, although the highest concentration of methionine (10 mM) itself had a stimulatory effect on lipid peroxidation in the slices (data not shown).
Effects of cisplatin and DPPD on PAH accumulation in slices: We examined the effects of cisplatin with or without DPPD on PAH accumulation in rat renal cortical slices, which was measured for the evaluation of biochemical changes in the transport ability of renal cells. Cisplatin caused a significant decrease in PAH accumulation in incubated slices (Fig. 5). This result was consistent with the results reported by other investigators (25,26). The decrease in PAH accumulation in slices during 90 min of incubation with cisplatin was partially inhibited by DPPD. The antioxidant alone had no effect on PAH accumulation. Treatment of rats with DPPD (0.5 g/kg, i.p.) inhibited the decrease in PAH accumulation in the slices induced by cisplatin (0.5 mM or 1 mM) (Fig. 6).
Discussion
Previous studies have demonstrated the effect of cisplatin injection into rats on the malondialdehyde level in the kidney (8), showing an increase in the malondialdehyde level at 72 hr after cisplatin injection (5 mg/kg, i.p.). Pretreatment of rats with an antioxidant (α-tocopherol or DPPD) prevents the increase in the renal malondialdehyde level caused by cisplatin administration (8). The direct effect of cisplatin on renal tissues should be discussed in terms of the increase in malondialdehyde production. The present in vitro experiments showed for the first time that cisplatin in the incubation medium increased the malondialdehyde level in rat renal slices. The increase in the malondialdehyde level caused by cisplatin was concentration- and time-dependent, and it was prevented by the addition of DPPD. These results suggest that the increased lipid peroxidation with cisplatin may be due to a direct effect of the drug on the kidney and support the hypothesis that free radicals generated by cisplatin interact with membrane lipids to cause the production of lipid peroxides. It is not yet clear, however, what free radical cisplatin generates.
Mannitol protects against the nephrotoxicity of cisplatin (22, 23) and scavenges hydroxyl radicals (27). The prevention by mannitol of the increase in malondialdehyde formation in renal slices caused by cisplatin in this study may therefore arise from the scavenging of reactive oxygen species by mannitol. Cisplatin nephrotoxicity monitored by blood urea nitrogen is increased by concurrent administration of methionine (28). In the present study, the presence of methionine made the slices more susceptible to cisplatin-induced lipid peroxidation.
The mechanism of such an effect of methionine on cisplatin-induced lipid peroxidation remains to be established. Mannitol and methionine may modify the nephrotoxicity of cisplatin by affecting the production of lipid peroxides.
The decrease in PAH accumulation caused by cisplatin was inhibited not only by the addition of DPPD to the medium but also by pretreatment of rats with DPPD. Csallany and Draper showed that DPPD, following its administration to rats, accumulates in the liver and skeletal muscle (29). However, they did not examine the distribution of DPPD in the kidney. Ramsammy et al. (30) observed that injection of DPPD into rats reduces the increase in renal lipid peroxidation induced by aminoglycoside, suggesting that DPPD injected into rats may also be distributed to the kidney.
Preliminary experiments in our laboratory have shown that addition of DPPD to the medium does not prevent the in vitro inhibition of sodium-potassium-activated ATPase activity by cisplatin in rat renal microsomes.
If cisplatin interacted chemically with DPPD, this inhibition would also have been prevented, which was not the case. Therefore, the inhibitory effect of DPPD on the decrease in PAH accumulation caused by cisplatin was probably not due to a chemical interaction of cisplatin with DPPD.
Platinum accumulates in the kidney and is found in many subcellular sites of the kidney, mostly in the cytosolic compartment, when cisplatin is injected into rats (31,32). Rat renal cortical slices concentrate cisplatin from the incubation medium by energy- and temperature-dependent systems (33). The intracellular site at which cisplatin exerts its nephrotoxic effect is unknown. If an excess of free radicals, overwhelming the cellular protective mechanisms, is formed close to plasma membranes by cisplatin, these radicals may cause changes in membrane components such as lipids and proteins and consequently change transport activity. The prevention by DPPD of the decrease in PAH accumulation caused by cisplatin may reflect an action of the drug on the membranes involving the formation of free radicals.
Details of the mechanisms involved in the increase in lipid peroxidation in renal tissues caused by cisplatin are still unknown. Together with the in vivo findings that antioxidants which attenuate the increase in blood urea nitrogen caused by cisplatin (7) also protect against the increase in lipid peroxidation caused by the drug (8), the findings in this study lead us to speculate that the nephrotoxic effect of cisplatin is associated, at least in part, with lipid peroxidation mediated by free radicals.
|
2018-04-03T00:21:05.620Z
|
1987-01-01T00:00:00.000
|
{
"year": 1987,
"sha1": "63ad7293fdb1bc20986cd3b265ff00c80d883dce",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/jphs1951/44/1/44_1_71/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "11dd7207c54e9f9e0569b4746b1bd71c828da3f9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine",
"Biology"
]
}
|
46619889
|
pes2o/s2orc
|
v3-fos-license
|
Comparison of susceptibilities to the effects of antipsychotic drugs on lever-press avoidance responses between mice and rats.
Effects of antipsychotic drugs, chlorpromazine, haloperidol and tetrabenazine, on lever-press avoidance responses under Sidman avoidance (response-shock interval = 30 sec and shock-shock interval = 5 sec) and discriminated avoidance (intertrial interval = 25 sec and warning duration = 5 sec) situations in male mice of the dd strain were investigated. The results were compared with those in male rats of the Wistar strain. All the drugs tested were administered s.c., and the avoidance responses of the mice and rats were observed for 1 hr after each administration. Chlorpromazine, haloperidol and tetrabenazine suppressed the avoidance responses of the mice and rats in a dose-dependent manner. The susceptibilities of the mice to these drugs were calculated to be 1/5-1/6 as low as those of the rats. However, the potencies of the avoidance-suppressing effects of chlorpromazine, haloperidol and tetrabenazine were estimated to be 1:20:1.3 and 1:18:1.4 by the Sidman avoidance responses in the mice and rats, respectively, and 1:18:1.1 by the discriminated avoidance responses in both the mice and rats. These results suggest that the conditioned lever-press avoidance responses in mice, as well as those in rats, are applicable for the evaluation of antipsychotic drugs.
Conditioned lever-press avoidance responses in Sidman and/or discriminated avoidance situations in rats have usually been utilized to study the characteristics of the central effects of drugs, in particular antipsychotic drugs (1). This is because antipsychotic drugs specifically suppress the avoidance responses, and the avoidance-suppressing activities correlate well with the clinical potencies and the daily doses in psychotic patients (2,3). Although mice offer many advantages as experimental animals, there are few reports on the effect of an antipsychotic drug, chlorpromazine, on the discriminated lever-press avoidance response in mice (4).
Recently, we (5,6) demonstrated that dd strain male mice acquired the conditioned lever-press avoidance responses well in both Sidman and discriminated avoidance situations and that the avoidance responses established were stable for a long time. We (6) also reported that the changes in the avoidance responses in mice after diazepam, an antianxiety drug, were almost identical with those in rats. However, it is unclear whether the changes in the lever-press avoidance responses in mice after antipsychotic drugs are similar to or different from those in rats.
The purpose of this experiment was to compare susceptibilities to the effects of three different types of antipsychotic drugs, chlorpromazine, haloperidol and tetrabenazine, on conditioned lever-press avoidance responses in Sidman and discriminated avoidance situations between the dd strain mice and the Wistar strain rats.
Materials and Methods
Animals: The experimental animals were adult male mice of the dd strain and adult male rats of the Wistar strain. These animals were provided by the Institute of Experimental Animal Research, Gunma University School of Medicine. Groups of 8 mice each had been housed in aluminium cages of 30(W) x 20(D) x 10(H) cm with a wooden-flake floor mat. Groups of 4 rats each had been housed in stainless steel wire mesh cages of 45(W) x 25(D) x 20(H) cm. Solid diet (MF: Oriental Yeast Co., Tokyo) and tap water were freely available except during the avoidance tests. The breeding room was artificially illuminated by fluorescent lamps on a 12 hr light-dark schedule (lights on at 6:00 and off at 18:00), and the room temperature was regulated to 22±2°C. However, the humidity was not controlled.
When the mice and rats were 10 weeks of age, weighing 30-32 g and 280-300 g, respectively, conditioning of the lever-press avoidance response in either the Sidman or the discriminated avoidance situation was started.
Apparatus: The operant chambers for mice, 18(W) x 9(D) x 10(H) cm, and for rats, 25(W) x 20(D) x 19(H) cm, were made of acrylfiber and aluminium boards. A stainless steel lever was set in the right side wall of each chamber. A microswitch attached to the lever was activated when the mouse or rat pressed the lever with forces of more than 1.5 g and 10 g, respectively. The avoidance-controlling and data-recording apparatus (GT 7705 and GT 7710, respectively; O'hara & Co. Ltd., Tokyo) used in the present experiments for both the mice and rats were the same as those used in our previous experiments (3,(5)(6)(7).
Avoidance schedules: The temporal factors of the Sidman avoidance schedule (8) were a response-shock interval of 30 sec and a shock-shock interval of 5 sec. The shock was an electric current of 100 V, 1 mA, 50 Hz AC, and was delivered to the animal through the stainless steel floor grid of the operant chamber for 0.3 sec. The indices of the Sidman avoidance response were the response rate (lever-presses/min) and the shock rate (shocks/min) during the 1-hr avoidance session. The temporal factors of the discriminated avoidance schedule (7, 9, 10) were an intertrial interval of 25 sec and a warning duration of 5 sec. The warning signal was an 800 Hz pure tone from a speaker. The shock was of the same intensity and duration as that presented in the Sidman avoidance situation. However, during the conditioning sessions, an escape contingency was inserted in the schedule to enable the animals to acquire the discriminated avoidance response rapidly, as mentioned above (7). The indices of the discriminated avoidance response were the response rate and the avoidance rate (number of correct responses/number of avoidance trials) during the 1-hr avoidance session.
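The session indices defined above are straightforward to compute from raw session counts; the short sketch below is an illustrative Python version, with invented counts rather than data from this study.

```python
def sidman_indices(lever_presses, shocks, session_min=60):
    """Response rate (presses/min) and shock rate (shocks/min) for a Sidman session."""
    return lever_presses / session_min, shocks / session_min

def discriminated_indices(lever_presses, correct_avoidances, trials, session_min=60):
    """Response rate and avoidance rate (% of trials avoided) for a discriminated session."""
    return lever_presses / session_min, 100.0 * correct_avoidances / trials

# Illustrative session counts only:
print(sidman_indices(lever_presses=540, shocks=18))                    # (9.0, 0.3)
print(discriminated_indices(480, correct_avoidances=96, trials=120))   # (8.0, 80.0)
```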
Each avoidance session lasted 1 hr per day and was held every other day during the conditioning period and every day during the drug testing period. The procedures for the conditioning of the avoidance responses in mice and rats were the same and have been reported in previous papers (5-7, 10, 11). After 15 conditioning sessions, the mice and rats that achieved a shock rate of less than 0.5/min in the Sidman avoidance situation and an avoidance rate higher than 75% in the discriminated avoidance situation, with a stable response rate, were selected. The drug tests were carried out using these animals. All the avoidance tests were held between 9:00 and 18:00.
Drugs: The drugs tested were chlorpromazine HCl (Contomin Inj.; Yoshitomi Pharm. Co., Osaka), haloperidol (Serenace Inj.; Dainippon Pharm. Co., Osaka) and tetrabenazine HCl (powder; Pfizer-Taito Co., Tokyo). The commercial preparations of chlorpromazine and haloperidol were diluted with a physiological saline vehicle, and tetrabenazine powder was dissolved in the saline vehicle. All the drugs were administered s.c. immediately before the start of the avoidance session, and the avoidance response was observed for 1 hr thereafter. The administration volume was fixed at 1 ml/100 g body weight for the mice and 1 ml/kg body weight for the rats. The doses tested (shown in Figs. 1 and 2) are expressed as the salt forms. The drug testing sessions were held once a week, and on the preceding days the saline vehicle was administered as the control sessions. On the other days, except Sunday, the avoidance response was monitored without administration of the drug or the saline vehicle to check the stability of the avoidance response. The drugs were tested in the order chlorpromazine, haloperidol and tetrabenazine, and the doses administered were changed from lower to higher.
Data analysis: The data obtained were statistically analyzed by Student's t-test within the same species. P values equal to or less than 0.05 were considered to indicate significant differences.
In order to compare the susceptibilities of the mice to the avoidance-suppressing effects of the drugs with those of the rats, the doses that increased the shock rate to 1/min in the Sidman avoidance situation and decreased the avoidance rate to 50% in the discriminated avoidance situation were graphically estimated from the dose-effect relation curves for these measurements, as mentioned above (3). These doses were considered to be the effective doses for suppression of the Sidman and discriminated avoidance responses.
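The graphical estimation described here amounts to reading off the dose at which the dose-effect curve crosses a criterion value. A minimal Python sketch of an equivalent numerical estimate, interpolating linearly on a log-dose scale, is given below; the example doses and shock rates are illustrative assumptions only.

```python
import numpy as np

def effective_dose(doses_mg_per_kg, responses, criterion):
    """Dose at which the dose-effect curve crosses `criterion`,
    obtained by linear interpolation on a log10 dose scale."""
    log_dose = np.log10(doses_mg_per_kg)
    # np.interp requires the x-coordinate (response) to be increasing.
    order = np.argsort(responses)
    return 10 ** np.interp(criterion, np.asarray(responses)[order], log_dose[order])

# Illustrative data (shock rate, shocks/min, versus dose in mg/kg):
doses  = [0.5, 1.0, 2.0, 4.0]
shocks = [0.3, 0.6, 1.4, 2.5]
print(effective_dose(doses, shocks, criterion=1.0))  # dose giving 1 shock/min
```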
Results
After 15 sessions of conditioning, about 60% and 80% of the mice achieved the critical levels of a shock rate of less than 0.5/min in the Sidman avoidance situation and an avoidance rate higher than 75% in the discriminated avoidance situation, respectively. About 90% of the rats achieved the critical levels after conditioning on both schedules.
Chlorpromazine, haloperidol and tetrabenazine suppressed the Sidman avoidance response of both the mice and rats, inducing a dose-dependent decrease in the response rate and an increase in the shock rate (Fig. 1). The response rate was significantly lower than the saline vehicle-administered control value when doses of more than 1 mg/kg of chlorpromazine, 0.1 mg/kg of haloperidol and 2 mg/kg of tetrabenazine were administered to the mice and when doses of more than 0.5 mg/kg, 0.025 mg/kg and 0.5 mg/kg, respectively, were administered to the rats. The shock rate was significantly higher than the control value when doses of more than 1 mg/kg of chlorpromazine, 0.1 mg/kg of haloperidol and 1 mg/kg of tetrabenazine were administered to the mice and when doses of more than 0.25 mg/kg, 0.018 mg/kg and 0.25 mg/kg, respectively, were administered to the rats.
Chlorpromazine, haloperidol and tetrabenazine also suppressed the discriminated avoidance response of both the mice and rats, inducing a dose-dependent decrease in the response and avoidance rates (Fig. 2). The response rate was significantly lower than the saline vehicle-administered control value when doses of more than 2 mg/kg of chlorpromazine, 0.2 mg/kg of haloperidol and 4 mg/kg of tetrabenazine were administered to the mice and when doses of more than 0.5 mg/kg, 0.035 mg/kg and 0.5 mg/kg, respectively, were given to the rats. The avoidance rate was significantly lower than the control value when doses of more than 2 mg/kg of chlorpromazine, 0.1 mg/kg of haloperidol and 1 mg/kg of tetrabenazine were administered to the mice and when doses of more than 0.25 mg/kg, 0.025 mg/kg and 0.25 mg/kg, respectively, were given to the rats.
The susceptibilities of the mice to the avoidance-suppressing effects of the drugs tested were calculated to be 1/5.5 to 1/5.8 and 1/4.7 to 1/4.9 as low as those of the rats by the Sidman and discriminated avoidance tests, respectively (Table 1). The effective doses for avoidance suppression of chlorpromazine, haloperidol and tetrabenazine were slightly lower in the Sidman avoidance situation than in the discriminated avoidance situation.
However, the ratios of the avoidance-suppressing potencies of chlorpromazine, haloperidol and tetrabenazine were almost the same in the mice and rats in both the Sidman and discriminated avoidance situations, i.e., chlorpromazine : haloperidol : tetrabenazine = 1 : 17-20 : 1.1-1.4. The ratios inversely correlated with the daily clinical doses in psychotic patients (12), as shown in Table 1.
Discussion
The main purposes of a behavioral study of a drug include predicting its clinical effects. The present experiment demonstrates that even though both the mice and rats show almost the same levels of shock rate and avoidance rate, the mice emit a higher baseline response rate than the rats in both the Sidman and discriminated avoidance situations. This result is probably due to a difference in the characteristics of mice and rats, i.e., mice show higher locomotor activity than rats in general. Gross observation also revealed that the mice emit higher rates of post-shock burst responses and intertrial responses.
However, the present experiment demonstrates that three different types of antipsychotic drugs, chlorpromazine, haloperidol and tetrabenazine, i.e., phenothiazine, butyrophenone and benzoquinolizine derivatives, respectively, suppress both the Sidman and discriminated avoidance responses in the mice as well as in the rats in a dose-dependent manner. These results are consistent with those reported in rats by many investigators (1,3,13,14).
In addition, the dose-effect relation curves of chlorpromazine, haloperidol and tetrabenazine for the shock rate and avoidance rate, and the effective doses for suppression of the avoidance responses, reveal that the susceptibility of mice to the avoidance-suppressing effects of these drugs is 1/5 to 1/6 as low as that of rats. The species difference in the susceptibility to drugs, as well as in the baseline behavior, is considered to be due to differences in drug-metabolizing rates, neural activities, etc. A further study is required to elucidate the species difference in the susceptibility to drug effects.
However, the dose-effect relation curves for the shock rate and avoidance rate in mice are almost identical with parallel-shifted curves in rats. Moreover, the ratios of the potencies of the avoidance-suppressing effects of chlorpromazine, haloperidol and tetrabenazine estimated from the effective doses for avoidance suppression are almost the same between mice and rats in both the Sidman and discriminated avoidance situations. These ratios also inversely correlated fairly well with the daily doses in psychotic patients (12).
In these respects, it can be concluded that the conditioned lever-press avoidance responses in the Sidman and discriminated avoidance situations in mice, as well as those in rats, are applicable for the preclinical evaluation of antipsychotic drugs.
|
2018-04-03T06:25:00.465Z
|
1983-01-01T00:00:00.000
|
{
"year": 1983,
"sha1": "73a5ac460b8b7a2209bc3541fb76f06a94b75653",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/jphs1951/33/6/33_6_1127/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "dfffb5fa58984bc60a0e3855ddb77a8ff99e140d",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
}
|
237216560
|
pes2o/s2orc
|
v3-fos-license
|
RNase III and RNase E Influence Posttranscriptional Regulatory Networks Involved in Virulence Factor Production, Metabolism, and Regulatory RNA Processing in Bordetella pertussis
ABSTRACT Bordetella pertussis has been shown to encode regulatory RNAs, yet the posttranscriptional regulatory circuits on which they act remain to be fully elucidated. We generated mutants lacking the endonucleases RNase III and RNase E and assessed their individual impact on the B. pertussis transcriptome. Transcriptome sequencing (RNA-Seq) analysis showed differential expression of ∼25% of the B. pertussis transcriptome in each mutant, with only 28% overlap between data sets. Both endonucleases exhibited substantial impact on genes involved in amino acid uptake (e.g., ABC transporters) and in virulence (e.g., the type III secretion system and the autotransporters vag8, tcfA, and brkA). Interestingly, mutations in RNase III and RNase E drove the stability of many transcripts, including those involved in virulence, in opposite directions, a result that was validated by qPCR and immunoblotting for tcfA and brkA. Of note, whereas similar mutations to RNase E in Escherichia coli have subtle effects on transcript stability, a striking >20-fold reduction in four gene transcripts, including tcfA and vag8, was observed in B. pertussis. We further compared our data set to the regulon controlled by the RNA chaperone Hfq to identify B. pertussis loci influenced by regulatory RNAs. This analysis identified ∼120 genes and 19 operons potentially regulated at the posttranscriptional level. Thus, our findings revealed how changes in RNase III- and RNase E-mediated RNA turnover influence pathways associated with virulence and cellular homeostasis. Moreover, we highlighted loci potentially influenced by regulatory RNAs, providing insights into the posttranscriptional regulatory networks involved in fine-tuning B. pertussis gene expression. IMPORTANCE Noncoding, regulatory RNAs in bacterial pathogens are critical components required for rapid changes in gene expression profiles. However, little is known about the role of regulatory RNAs in the growth and pathogenesis of Bordetella pertussis. To address this, mutants separately lacking ribonucleases central to regulatory RNA processing, RNase III and RNase E, were analyzed by RNA-Seq. Here, we detail the first transcriptomic analysis of the impact of altered RNA degradation in B. pertussis. Each mutant showed approximately 1,000 differentially expressed genes, with significant changes in the expression of pathways associated with metabolism, bacterial secretion, and virulence factor production. Our analysis suggests an important role for these ribonucleases during host colonization and provides insights into the breadth of posttranscriptional regulation in B. pertussis, further informing our understanding of B. pertussis pathogenesis.
Bacterial pathogens must be able to rapidly adapt to the range of environments encountered within the host during infection. Successful colonization requires tightly regulated gene expression balancing stress state responses, resisting specific immune and nutritional challenges, and production of virulence factors (1,2). Many environmental signals are detected by sensor kinases that coordinate rapid transcriptional changes through the phosphorylation of response regulators (3). However, rapid changes in gene expression can also be coordinated at the posttranscriptional level by regulatory RNAs. In recent years, there has been substantial expansion in the number of identified regulatory RNAs and in the understanding of the wide range of adaptive responses in which they are involved. Regulatory RNAs have been shown to play roles in modulating quorum-sensing responses, changing cell surface structures, and fine-tuning bacterial metabolism (2,4,5). There are three main classes into which regulatory RNAs are grouped: (i) regulatory elements found in the 5′ untranslated region (UTR), such as riboswitches and RNA thermometers; (ii) cis-encoded, antisense RNAs (asRNA) that are transcribed from the opposite strand of the target mRNA; and (iii) small noncoding regulatory RNAs (sRNA), 50- to 400-nucleotide trans-encoded transcripts that require an RNA chaperone, such as Hfq, to catalyze interactions with the target transcript (6). Interactions between regulatory RNAs and their mRNA targets often lead to changes in the secondary structure of the target mRNA. These changes can modulate rates of translation by altering accessibility to ribosome binding sites or leading to cleavage of the target mRNAs. Cleavage events can result in either the degradation or stabilization of the target mRNA, whereby the regulatory RNAs are associated with directing the activity of one or several ribonucleases that are involved in cleavage of their target RNA(s) (7).
Ribonucleases involved in regulatory RNA turnover can digest from the terminal ends of an RNA transcript (exonucleases) or cleave at an internal site (endonucleases) (8). Two key endonucleases involved in regulatory RNA processing in bacterial pathogens are RNase III and RNase E. RNase III is an endoribonuclease with specific activity for double-stranded RNA (dsRNA). It acts as a homodimer with a key role in the maturation of ribosomal RNAs (rRNAs) and transfer RNAs (tRNAs) (9,10). RNase III has been shown to influence Staphylococcus aureus virulence through cleavage of the RNAIII transcript (11,12) and has an important role in mediating changes of metabolic state in Escherichia coli (9). Due to its specificity for double-stranded RNA, RNase III has also been implicated in processing asRNAs and their targets (13).
RNase E is an essential endoribonuclease that forms a homotetramer associated with the bacterial inner membrane. The N-terminal RNase domain containing the catalytic site is required for the maturation of rRNAs but is also involved in the degradation of many cellular transcripts. Three other proteins (RhlB, enolase, and PNPase) complex with RNase E through its C-terminal half, forming a multicomponent structure known as the degradosome (8,14). The C-terminal half has been shown to be required for interactions with the RNA chaperone Hfq, with the sRNA and mRNA molecules bound to the proximal and distal faces of Hfq, respectively (15,16). Through these interactions with Hfq and its role in sRNA turnover, RNase E has been shown to regulate the expression of virulence factors and mediate several stress responses at the posttranscriptional level (17)(18)(19).
In Bordetella pertussis, the causative agent of whooping cough, virulence factor production is centrally controlled by the BvgAS two-component system (20). The system coordinates two main transcriptional states known as the Bvg+ and Bvg− modes. When the cells are grown at 35 to 37°C, B. pertussis enters the Bvg+ phase, in which the sensor kinase BvgS phosphorylates the response regulator BvgA. BvgA-P is then able to promote the expression of virulence-associated genes (vags) and suppress the expression of virulence-repressed genes (vrgs) (20)(21)(22). BvgS can be modulated by lowering the growth temperature or by adding millimolar concentrations of nicotinic acid or MgSO4 to growth media in vitro. This Bvg− mode is further characterized by the repression of vags and the expression of vrgs. Finally, a third gene expression profile can be produced by the BvgAS system. Known as the Bvg intermediate mode (Bvg^i), it is characterized by the expression of vrgs, vags not requiring full BvgA-P activation, and Bvg^i-phase-specific polypeptides (Bips), of which BipA has been the most widely studied (23,24). Indeed, the regulation of genes within the virulence regulon has many complexities, since several overlapping regulatory systems have been shown to modulate the expression of genes in the Bvg+ or Bvg− modes (25,26). Interestingly, Hfq has been shown to be required for the expression of many virulence factors (27). Additionally, recent studies have also identified potential virulence-associated sRNAs, suggesting the involvement of regulatory RNAs in pathogenesis (27)(28)(29)(30).
Even though regulatory RNAs have been identified, there are few examples in B. pertussis in which the targets for these regulatory RNAs have been identified. To understand more about the contribution of regulatory RNAs to posttranscriptional gene regulation in Bordetella, we generated mutant alleles of both the endonucleases RNase III and RNase E and analyzed their impact on the B. pertussis global transcriptome using RNA-Seq. Here, we show that both endonucleases are involved in processing approximately 25% of the B. pertussis transcriptome, including transcripts associated with bacterial metabolism and virulence factor production. In this regard, we further examined two genes encoding virulence factors which are representative of both the subtle and more dramatic differential expression patterns observed within our data set; brkA and tcfA encode autotransporter proteins involved in serum resistance and host colonization, respectively (31,32). Validating the robustness of the RNA-Seq analysis, we showed that changes in transcript abundance for both autotransporters is mirrored at the protein level. Through comparisons of our RNA-Seq data set to published studies (28,30), we determined more than 100 gene loci that are potentially influenced by regulatory RNAs. This study thus probes the posttranscriptional landscape of B. pertussis in showing how altered RNA processing impacts the B. pertussis transcriptome, while also identifying gene loci potentially influenced by asRNA and sRNA.
RESULTS
Generation and growth characteristics of the RNase III mutant, BPrncD45A. Cis-encoded asRNAs are transcribed from the opposite strand of the target RNA and can interact with the target through complementary base pairing (6). Since RNase III specifically cleaves RNA-RNA duplexes, it has been implicated in several asRNA control mechanisms. In B. pertussis, the rnc gene encoding RNase III is part of a five-gene operon consisting of lepA, lep, rnc, era, and recO (33). Both lep and era have been identified as essential genes in B. pertussis (34) and overlap with the start and stop codon of the rnc gene, respectively. Repeated attempts to delete rnc were unsuccessful, likely due to potential polar effects. Previous in vitro studies of Escherichia coli RNase III have shown that mutations at the active site aspartic acid (D45) resulted in an approximately 30,000-fold reduction in RNase activity (35). Approaches in Streptomyces coelicolor to diminish RNase III nuclease activity while maintaining other important biological functions likewise targeted the active site aspartic acid (36). Furthermore, the importance of this residue in endonuclease activity was also modeled using the structure of RNase III from Aquifex aeolicus (37). Amino acid sequences of the rnc genes from B. pertussis, E. coli, S. coelicolor, and A. aeolicus were aligned (Fig. S1A in the supplemental material) and residues in B. pertussis RNase III corresponding to known active site motifs were mapped. As shown in Fig. S1A, the D45 residue shown to be important for catalytic activity (35)(36)(37) is highly conserved in B. pertussis. We thus chose to replace this aspartic acid residue with alanine in the chromosomal copy of the rnc gene using allelic exchange (38) and assessed the impact of this mutation on asRNA regulation in B. pertussis.
When grown on Bordet-Gengou (BG) agar, the B. pertussis RNase III mutant (BPrncD45A) produced hemolytic colonies which appeared to be smaller than wild type. This growth defect was further observed when the strain was grown in complete Stainer-Scholte medium (SS-medium; Fig. 1), with the RNase III mutant strain showing an extended lag phase yet reaching comparable optical densities to wild type over the course of the growth curve. This growth defect was reversed through complementation with the wild-type copy of the gene constitutively expressed from a low-copy-number plasmid (Fig. S2A).
Generation and growth characteristics of the RNase E mutant, BPrneΔCT. The turnover of mRNAs targeted by sRNAs is efficiently catalyzed by the endonuclease RNase E. Through interactions with the RNase E C-terminal half, the RNA chaperone Hfq bound to an sRNA is proposed to assist in directing the endonuclease to target transcripts for degradation (39). RNase E is an essential RNase required for the maturation of ribosomal RNAs and tRNA precursors (8). Despite the essentiality of RNase E, investigators working with E. coli and Salmonella have established viable RNase E mutations, allowing studies of the enzyme in vivo (40,41). Indeed, mutations resulting in the truncation of the RNase E C-terminal domain have been shown to reduce RNase E function without negatively impacting the enzyme's role in rRNA maturation (42). Importantly, the RNase E C-terminal half is required for mRNA targeting mediated by Hfq and small RNAs (15).
To understand more about the role RNase E plays in transcript stability and turnover in B. pertussis, 465 amino acids were deleted from the C-terminal half of RNase E to generate the strain BPrneΔCT (Fig. S1B). This mutation was designed to mimic the rne-105 mutation in E. coli (42). When grown in SS-medium, the mutant strain grew to an optical density greater than that of the wild-type strain and aggregated less in liquid medium (Fig. 1). When grown on BG agar, the colonies of the RNase E mutant were similar in size to those of the wild-type strain from which it was derived; however, the RNase E strain was nonhemolytic. This phenotype was reversed when complemented with the full-length protein (Fig. S2B and C).
Impact of mutations in RNase III and RNase E on the B. pertussis transcriptome. The mutations generated in the B. pertussis rnc and rne genes were designed to alter RNA turnover in the cell. Overall, this would impact RNA metabolism as well as many systems associated with posttranscriptional regulatory circuits. To determine the changes to the regulation of the B. pertussis transcriptome, the wild-type and mutant strains were grown to mid-log phase (optical density at 600 nm [OD600] = 0.6 to 0.7) and RNA was extracted for transcriptomic analysis by RNA-Seq. Sequencing reads were aligned to the Tohama I genome (NCBI:txid257313) (33) and gene expression changes were compared to wild type using DESeq2 (43). Differentially expressed genes were defined as having a fold change of ≥1.5 compared to wild type and an adjusted P value ≤0.05 across five biological replicates.
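The thresholds stated above translate directly into a filter on a DESeq2 results table. The following Python sketch shows one way this could be done on an exported table, assuming the default DESeq2 column names (log2FoldChange, padj); the file name is hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical export of a DESeq2 results table (column names follow DESeq2 defaults).
res = pd.read_csv("rnc_vs_wt_deseq2_results.csv", index_col=0)

fold_change_cutoff = 1.5   # expressed as a linear fold change
padj_cutoff = 0.05

de = res[(res["log2FoldChange"].abs() >= np.log2(fold_change_cutoff))
         & (res["padj"] <= padj_cutoff)]

up = de[de["log2FoldChange"] > 0]
down = de[de["log2FoldChange"] < 0]
print(f"{len(de)} differentially expressed genes: {len(up)} up, {len(down)} down")
```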
In the RNase III mutant, there were 1,062 genes differentially expressed when compared to the wild type ( Fig. 2A). As shown in volcano plots (Fig. 2B) mapping the spread of differentially expressed genes, the expression of 585 of these genes was negatively affected by the loss of RNase III activity, while 477 were upregulated. Compared to wild type, ptlEFGH were among the most downregulated genes in this data set, while the type three secretion system (T3SS) chaperone protein bp2265 was among the most upregulated (Table S1). Specific pathways, identified using Generally Applicable Gene-set Enrichment (GAGE) (44), were shown to be dysregulated in the RNase III mutant (Fig. 2C). This included pathways associated with quorum sensing (bpe02024), ABC transporters (bpe02010), and tRNA biosynthesis (bpe00970).
The RNase E mutant showed 956 genes differentially expressed when compared to wild type (Fig. 2A). As in the RNase III mutant, more than 580 genes were downregulated by the mutation to RNase E (Fig. 2B). Comparing the two data sets, 444 genes were commonly differentially expressed and, of these, 131 genes (Table S1) showed opposing patterns in fold change in the two mutant backgrounds. Here, the autotransporters vag8 and tcfA were the most downregulated genes, showing a >25-fold reduction when compared to wild type (Table S1). Interestingly, the rne gene itself was among the most upregulated genes, suggesting a potential impact on RNase E autoregulation (8). The RNase E mutant also showed enrichment in pathways associated with B. pertussis pathogenesis (bpe05133), bacterial secretion (bpe03070), and metabolic pathways (bpe01100) (Fig. 2C).
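The overlap between the two mutant data sets, and the subset of genes changing in opposite directions, amount to simple set operations on the per-mutant gene lists. The sketch below illustrates this in Python; the input files and column name are hypothetical stand-ins for tables like those described above.

```python
import pandas as pd

# Hypothetical per-mutant tables of differentially expressed genes,
# each indexed by locus tag with a log2FoldChange column.
rnc = pd.read_csv("rnc_DE_genes.csv", index_col=0)   # RNase III mutant
rne = pd.read_csv("rne_DE_genes.csv", index_col=0)   # RNase E mutant

shared = rnc.index.intersection(rne.index)
opposing = [g for g in shared
            if rnc.loc[g, "log2FoldChange"] * rne.loc[g, "log2FoldChange"] < 0]

print(f"{len(shared)} genes differentially expressed in both mutants")
print(f"{len(opposing)} of these change in opposite directions")
```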
RNase E is involved in the processing of sRNAs in various bacterial pathogens, yet RNase III can also play a role in sRNA turnover due to RNA duplex formation with the target mRNA (45). Several putative targeted sRNAs have been previously identified by in silico analysis of the B. pertussis genome (46). Using the predicted genome coordinates of these 17 sRNAs, we examined whether the endonuclease mutations altered the expression of this subset of putative regulatory RNAs in B. pertussis. Even though sRNAs are typically processed by RNase E, it was surprising that only two of the predicted sRNAs (bprK and bprM') were differentially expressed in the RNase E mutant, although four sRNAs (bprB, bprJ, bprM, and bprM') were found to be differentially expressed in the RNase III mutant. One of these sRNA species, bprM', was differentially expressed in both mutants, but it was upregulated in the RNase III mutant and downregulated in the RNase E mutant (Table S1).
Taken together, the endonuclease mutations each affected approximately one quarter of the B. pertussis transcriptome. Even though each mutation ultimately impairs endonuclease function, roughly half of the differentially expressed transcripts became less abundant upon the loss of full RNase III or RNase E activity. These data suggest that these endonucleases may contribute to stabilizing many mRNAs within the B. pertussis transcriptome. Another outstanding observation was the substantial variation, and only moderate overlap, between the impacts of these two endonucleases, implying distinct roles with a degree of redundancy.
Validation of RNA-Seq data set. To verify the gene fold changes reported in the RNA-Seq data set, we experimentally characterized the differential expression of two genes in the mutant backgrounds. Specifically, we examined the expression changes of the autotransporters BrkA and TcfA, virulence factors that were also shown to be differentially expressed in the Δhfq mutant (28). Fig. 3A demonstrates the normalized counts of both brkA and tcfA and shows a near 2-fold increase in transcript abundance in the RNase III mutant. A stronger phenotype was observed in the RNase E mutant, where there was a 4-fold reduced abundance of brkA transcripts and, strikingly, a normalized count for tcfA approaching zero, which translated to a 28-fold reduction in transcript abundance compared to wild type (Table S1). To verify these data at the RNA and protein level, fresh cultures were inoculated into SS-medium and samples for quantitative PCR (qPCR) were taken at late log phase for RNA analysis. For protein expression, samples for immunoblots were taken across a time course at early log phase, late log phase, and stationary phase. The results from qPCR showed an approximately 2-fold increase in brkA and tcfA transcripts in the RNase III mutant compared to wild type. In the RNase E mutant, qPCR showed a 2-fold and 20-fold reduction in brkA and tcfA, respectively (Fig. 3B). Differences in protein abundance were estimated by analyzing the density of bands corresponding to expression of BrkA and TcfA in immunoblots. Using ImageJ (47) for quantification, the RNase III mutant showed an approximate 1.5-fold increase in BrkA and TcfA levels when compared to wild type. In the RNase E mutant, protein expression of BrkA was reduced by approximately 2-fold, whereas TcfA protein expression was almost nondetectable across the time course (Fig. 3C). These data thus corroborated the RNA-Seq analysis by showing gene and protein expression patterns that mirrored what was seen in the transcriptome analysis of the two mutants. Notably, phenotypes for each endonuclease mutant at both the RNA (qPCR) and protein (immunoblot) levels were returned to wild-type levels upon complementation with the wild-type copy of the appropriate gene (Fig. S2D and E).
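Although the exact quantification pipeline is not spelled out here, relative transcript abundance from qPCR is commonly calculated with the 2^-ΔΔCt method, and band densities from ImageJ can be expressed as a simple mutant-to-wild-type ratio. The following sketch illustrates both calculations under those assumptions, with invented numbers rather than values from this study.

```python
def ddct_fold_change(ct_target_mut, ct_ref_mut, ct_target_wt, ct_ref_wt):
    """Relative transcript abundance (mutant vs. wild type) by the 2^-ddCt method."""
    dct_mut = ct_target_mut - ct_ref_mut
    dct_wt = ct_target_wt - ct_ref_wt
    return 2 ** -(dct_mut - dct_wt)

def band_fold_change(density_mut, density_wt):
    """Relative protein abundance from immunoblot band densities (e.g., ImageJ area values)."""
    return density_mut / density_wt

# Illustrative numbers only:
print(ddct_fold_change(ct_target_mut=24.8, ct_ref_mut=18.0,
                       ct_target_wt=22.5, ct_ref_wt=18.1))    # ~0.2-fold (downregulated)
print(band_fold_change(density_mut=15200, density_wt=10100))  # ~1.5-fold
```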
Mutations in RNase III and RNase E drove virulence factor stability in opposite directions. The BvgAS two-component regulatory system sits at the center of virulence gene expression in Bordetella. In the Bvg+ mode, more than 500 genes are regulated by phosphorylated BvgA (Table 1) (48), including several genes associated with virulence. Since both RNase III and RNase E are implicated in bacterial virulence factor production and pathogenesis (49), they might influence or be influenced by the BvgAS regulon. Comparison between the RNA-Seq data set collected here and the recent transcriptomic analysis of the BvgAS regulon (21) showed that approximately 50% of the genes in the BvgAS regulon were differentially expressed in the RNase III and RNase E mutant strains (Table 1; Tables S2 and S3).
B. pertussis expresses several virulence factors in the Bvg+ mode, with many characterized through in vitro and in vivo infection models (48). As part of the BvgAS regulon, these well-characterized virulence factors number approximately 70 genes and comprise those encoding toxins, secretion systems, and colonization factors. To determine whether the mutations in RNase III and RNase E impacted the stability of these virulence gene transcripts, the fold changes of these transcripts were identified within our RNA-Seq data set. Fig. 4 and Table S4 detail the fold changes of the 71 well-characterized virulence genes in the two endonuclease mutants. Interestingly, 66 of these virulence factor transcripts were found to be less abundant in the RNase E mutant, including the cya operon, which is likely responsible for the nonhemolytic phenotype observed with this mutant. The autotransporters tcfA and vag8, and the putative type 3 secretion chaperone bscW, were more than 25-fold downregulated when compared to wild type. In the RNase III mutant, 36 of these 71 virulence factors were significantly differentially expressed. Of these virulence factors, 10 were found to be more abundant when compared to wild type and shown to be less abundant in the RNase E mutant; however, it should be noted that the ptl locus was downregulated in both mutant backgrounds. The BvgAS system can also produce a third gene expression profile, intermediary between Bvg+ and Bvg−, known as the Bvg intermediate mode (Bvg^i) (23). The marker protein of the Bvg^i phase, BipA, showed a similar pattern in transcript abundance wherein its transcript abundance was reduced in the RNase III mutant yet increased in the RNase E mutant. Finally, although the Tohama-1 strain of B. pertussis does not have a functional T3SS when grown in vitro (i.e., in the absence of host cells) (28,50), transcripts associated with the T3SS locus were less abundant in both mutants, with the tip complex protein Bsp22 being more abundant in the RNase III mutant.
Taken together, these data identify an interesting relationship in which RNase III and RNase E have opposing activities on select virulence factor transcripts. This association suggests these endonucleases may be involved in modulating transcript stability during pathogenesis as a potential means of fine-tuning virulence factor expression within the host.
RNase III and RNase E processed transcripts associated with metabolism in the Bvg+ mode. During the Bvg+ mode, phosphorylated BvgA alters the expression of many genes associated with metabolic and catabolic pathways. This suggests that B. pertussis coordinates substantial changes to cell homeostasis between the Bvg+ and Bvg− modes. Since both RNase E and RNase III are associated with controlling metabolic gene expression in E. coli (9), we sought to identify differentially expressed metabolism genes associated with the Bvg+ mode (21).
This comparison showed that 69/119 of the Bvg-associated metabolism genes were also affected by the mutation to RNase III. Of this subset, all genes associated with the RNase III mutation were less abundant (Fig. 5). We performed the same analysis on the RNase E mutant and showed that 64/119 of the Bvg-associated metabolism genes were differentially regulated. Of these 64 differentially expressed genes, 50 were shared between the RNase E and RNase III mutants, and the transcripts associated with many members of this gene subset were less abundant compared to wild type (Fig. 5). Further analysis showed that 23 of these 50 genes were ABC transporters associated with amino acid uptake (Fig. 5, Table S5) found in GAGE pathway bpe02010 (Fig. 2C). Taken together, these data suggest that both RNase III and RNase E play a role in the maintenance of many metabolism transcripts in the Bvg+ mode. Since many of the genes involved appeared to be associated with resource uptake, the data suggest that these endonucleases may somehow be involved in coordinating the transition in cell homeostasis between the Bvg+ and Bvg− modes. Comparative analysis of B. pertussis transcriptomic data provided insights into regulatory RNA networks. Previous work in B. pertussis has identified several regulatory RNAs, yet not much is known about the target transcripts on which they act. As endonucleases are often involved in the processing of the target mRNAs coupled with regulatory RNAs, we compared the findings from the transcriptomic analysis of the endonuclease mutants here to previously published studies regarding the B. pertussis primary transcriptome (30) and the hfq regulon (28) to identify putative gene loci which might be regulated by either asRNAs or sRNAs.
As recently detailed in the report of the B. pertussis primary transcriptome (30), a majority of the asRNA transcripts are produced from internal promoters found within the large number of insertion (IS) elements present within the B. pertussis genome. Transcription from promoters present within the insertion elements has been shown to run into the adjacent genes. In some cases, these are found in antisense to the adjacent gene and can influence transcript stability (51). Since RNase III is involved in cleaving double-stranded RNA, we investigated whether RNase III was involved in the resolution or degradation of transcripts at IS antisense junctions.
The B. pertussis primary transcriptome identified 192 genes adjacent to IS elements and influenced by an antisense transcript (30). Of these, 39 genes (20%) were found to be differentially expressed in the RNase III mutant (Table S6). Genes such as fim2, metX, and recR, as well as other genes in this subset, were less abundant when compared to wild type, suggesting that endonuclease RNase III may be required to stabilize the interacting transcripts produced from these loci.
We also compared our findings to the regulon controlled by the RNA chaperone Hfq, since Hfq plays a substantial role in mediating the interactions, stability, and processing of sRNAs with their mRNA targets (27,28). This analysis established that approximately 50% of the genes in the hfq regulon were differentially expressed in both endonuclease mutants (Table 1, Table S7). This overlap of genes included many ABC transporters and virulence factors, including the autotransporters brkA, tcfA, and vag8. Moreover, many genes required for the type III secretion system were downregulated in all three mutant backgrounds. Given this, we looked for additional operons that may be influenced in a similar way. This analysis identified 19 unique operons (Table S8), including several ABC transporters, a quinol oxidase biosynthesis cluster, and the ptl operon required for pertussis toxin secretion (52).
Thus, using the known relationship between RNase E and Hfq and the specific activity for double-stranded RNA by RNase III, our work identified a number of gene loci that were previously not known to be regulated at the posttranscriptional level.
DISCUSSION
Regulatory RNAs provide bacterial pathogens with a mechanism to rapidly modify gene expression profiles while under stress. Often processed by one of several ribonucleases, cleavage of the target mRNA can substantially alter the stability of the transcript and/or modulate rates of translation. RNase III and RNase E are endonucleases involved in processing of cis-encoded asRNA and trans-encoded sRNAs, respectively. Here, mutants of both RNase III and RNase E were generated by allelic exchange and the impact of these mutations on the B. pertussis transcriptome was assessed by RNA-Seq.
The mutations to RNase III and RNase E resulted in the differential expression of approximately 1,000 genes, representative of about 25% of the B. pertussis genome. Figure 2B shows that many of the transcripts found to be differentially expressed in the two mutants were less abundant when compared to wild type. With the impaired activity in both endonuclease mutant backgrounds, we expected to observe increases in transcript abundance. However, it was surprising to see more than half of the differentially expressed transcripts were reduced in abundance in the mutant strains. In the case of the RNase III mutation, this reduction in transcript abundance might have been, in part, a consequence of the slow growth phenotype associated with the mutation (53). For example, many of the lipid A biosynthesis pathway genes (lpxABDHK) were approximately 2-fold reduced when compared to wild type (Table S1), corresponding to the approximately 2-fold reduction in the doubling rate of the mutant cells. In contrast, there was no growth defect associated with the C-terminal truncation of RNase E, and yet a similar proportion of transcripts were reduced in BPrneΔCT, the RNase E mutant strain. Interestingly, the Lgm locus (locus tags BP0397 to BP0399), which is involved in the modification of lipid A (54), was increased almost 6-fold in the RNase E mutant (Fig. 2B). Changes in transcript abundance may be associated with several mechanisms involved in RNA half-life, such as changes in rates of translation and a loss of cleavage events mediated by these two endonucleases, resulting in alternate RNA secondary structures (55). Differential processing of regulatory RNAs may also contribute to the stability of these transcripts (56).
RNase E is associated with the turnover of sRNAs in bacteria through the C-terminal domain coordinating interactions with the RNA chaperone Hfq (57). RNase III is also involved in regulatory RNA circuits by cleaving RNA-RNA duplexes, such as those formed from asRNA binding to its mRNA target (4,58). Taking our RNA-Seq data set, we compared the data to other published studies to establish associations between the endonuclease mutants and known sRNAs and asRNAs. Using this approach, we showed that 5 previously identified sRNAs (46) were differentially expressed in our data set. Of these, 4 of the sRNAs were unique to the RNase III mutant. Even though RNase III is typically associated with cleaving asRNAs, this endonuclease also processes trans-encoded regulatory RNAs (11,12,59,60). Furthermore, as the RNase III mutation influenced many metabolic processes, this might have resulted in several stress responses contributing to the production of sRNAs enriched under these conditions (2,5). It is interesting that RNase III also played a role in regulating many genes found to be in an antisense orientation to an adjacent insertion element. Many of these genes were downregulated in the mutant strain, suggesting that the full catalytic function of RNase III is potentially required for the activation of many genes near insertion elements.
One of the most striking phenotypes of the RNase E mutant was the loss of hemolysis (Fig. S2B) accompanied by marked reduction in expression of CyaA (Fig. S2C). Hemolysis in Bordetella pertussis is due to the activity of adenylate cyclase toxin, CyaA, which is produced and exported by the genes found in the cya operon (61). The RNA-Seq analysis here showed that in the RNase E mutant, the cya operon was approximately 2-fold reduced compared to wild type. This would suggest a decrease in hemolysis, although in fact a complete absence was observed. Interestingly, diminished adenylate cyclase activity was also observed in the Δhfq strain (27). Of note, qPCR analysis of cyaA in this hfq mutant showed that transcript abundance was similar to the wild-type strain, implying that the cyaA mRNA requires posttranscriptional modification for optimal translation. Taken together, this suggests that RNase E is likely involved in processing of the cyaA transcript in an Hfq-dependent manner. Hfq is known to catalyze the interactions with a small RNA (15,16); however, the associated regulatory RNA involved with modulating cyaA translation is yet to be identified. Additionally, the reduction in the cya operon transcript abundance in the RNase E mutant may also suggest that this endonuclease is involved in stabilizing the mRNA of the entire locus. Overall, this provides several layers of complexity around the regulation of adenylate cyclase toxin expression in B. pertussis, although the importance of this regulation in Bordetella pathogenesis is yet to be determined.
To identify loci potentially regulated by regulatory RNAs, we compared the transcriptomes of each endonuclease mutant to the hfq regulon (28). This revealed a substantial overlap with genes that were also differentially expressed in the mutant hfq background. From this, it was observed that much of the T3SS was downregulated in both the RNase III and RNase E mutant strains, as well as in the Δhfq strain, further implying a role for regulatory RNAs in the function of the secretion system. In many bacteria, the control of the T3SS tends to be complex and both regulatory RNAs and endonucleases have been implicated in this process (62). In Yersinia, both RNase E (63) and Hfq (64, 65) were shown to play a role in the function of the secretion system.
Among the Bordetella spp., regulation of T3SS consists of the extracytoplasmic sigma factor BtrS, a cognate anti-sigma factor BtrA, and partner-switching modules BtrVW (66). In both endonuclease mutants described here, these regulatory proteins were differentially expressed compared to wild type under these conditions. However, there were no significant changes in transcript abundance of BtrASVW in the hfq background (28). These data add an additional layer of complexity to the regulation of the T3SS, potentially involving factors outside the T3SS locus. Further investigation would be required to determine a mechanism by which this may work.
Alongside the T3SS, more than 90% of widely studied virulence factors involved in B. pertussis pathogenesis (21) were decreased in abundance in the RNase E mutant, with more than half of these well-characterized virulence factors being more than 4-fold reduced, and some more than 20-fold reduced in abundance. This was quite a striking finding, since similar mutations in E. coli are often associated with a more subtle impact on transcript stability (42). Surprisingly, many of these virulence factors were upregulated in the RNase III mutant. Of the 444 genes that shared differential expression in both mutants, 131 of these genes had opposing fold changes in the two mutant backgrounds. The opposite effects on transcript stability produced by these two endonucleases have been observed previously; however, the reason for this remains unclear (56). Gene transcripts showing opposite stabilities in the mutant backgrounds include the response regulator bvgA and autotransporters vag8, tcfA, and brkA. It is intriguing that the mutations in RNase III and RNase E exerted such a drastic impact on the transcripts for tcfA and vag8, since both these autotransporters were also strongly downregulated in the Δhfq strain of B. pertussis (27,28). Taken together, these data suggest that the abundance of these autotransporter transcripts could be determined by regulatory RNAs. Complex regulatory interactions involving RNase III, RNase E, and Hfq have been characterized previously (56,60,67). For example, the inactivation of the sodB transcript in E. coli under iron-limited conditions requires RNase III, RNase E, and the sRNA RyhB in complex with Hfq (59). However, the regulatory RNA influencing expression of vag8 and tcfA in B. pertussis, and the mechanism by which it interacts with these genes, are yet to be elucidated.
Currently, there are few well-defined mechanisms by which regulatory RNAs work in B. pertussis. The work here highlights additional regulation of metabolism and key virulence factors involved in B. pertussis pathogenesis. Furthermore, by comparing our data to the currently available data on the B. pertussis transcriptome, we have highlighted gene loci potentially influenced by regulatory RNAs, including the Lgm locus (54), which we are currently investigating. Even though there are other ribonucleases present in B. pertussis, the mutant strains generated here provide details on the posttranscriptional landscape of B. pertussis while also contributing another layer of understanding of the complex regulatory circuits responsible for Bordetella pathogenesis.
MATERIALS AND METHODS

Construction of endoribonuclease mutants in B. pertussis. To generate the point mutation in the rnc gene on the B. pertussis chromosome, ~500 bp upstream and downstream of the rnc D45 codon were amplified using primers detailed in Table 2. Fragments were designed to be joined to each other using an NheI site. The position of the NheI site, when incorporated into the B. pertussis genome, would provide the desired D45A mutation, while not influencing the adjacent serine codon. This ~1,000-bp fragment was then cloned into pSS4894 (38) between the MluI and BamHI restriction sites to generate pGI-rncD45A. This construct was first transformed into E. coli RHO3 before conjugation into B. pertussis.
The truncation of the rne C-terminal half was generated by first amplifying ~500-bp regions flanking the 1,398 bp of the rne C-terminal half that was to be removed. Flanking regions were ligated together at EcoRI sites and introduced to pSS4894 at SalI and BamHI restriction sites. This construct was transformed into RHO3 for conjugation. Allelic exchange plasmids were delivered into B. pertussis by conjugation with minor alterations to the published procedure (38). Briefly, recipient strain BP338 was grown on BG agar in the presence of 50 mM MgSO 4 and nalidixic acid for 3 days. These cells were then harvested from plates using a sterile swab and resuspended in 1.5 ml of SS broth supplemented with DAP. The optical density at 600 nm (OD 600 ) of the resuspended BP338 was then measured. The donor RHO3 strains were grown overnight and OD 600 was measured. BP338 and donor RHO3 strains were then mixed at a ratio of 2:1 and spotted onto SS-agar plates containing SS-supplements, BSA (0.15%), and 50 mM MgSO 4 . Conjugation mixtures were incubated for 6 h at 37°C before being swabbed and reswabbed onto fresh BG plates with nalidixic acid (30 mg/ml), gentamicin (100 mg/ml), and 50 mM MgSO 4 . After 4 days incubation at 37°C, single colonies were restreaked onto BG plates with nalidixic acid to allow activation of the BvgAS system and I-SceI expression driven by the pertussis toxin promoter. These plates were grown until distinct colonies could be observed. From this, single colonies were then screened by PCR for the presence of the mutant allele.
PCR-confirmed mutants were then grown for ~72 h in complete SS-medium and genomic DNA was extracted using the DNeasy blood and tissue kit (Qiagen) as per the manufacturer's instructions. Gene fragments, including upstream and downstream regions used for allelic exchange, were then amplified and the mutations confirmed by Sanger sequencing (Genewiz).
RNA-Seq and bioinformatic analyses. Wild-type and mutant cells were grown to an OD 600 of 0.6 to 0.7 in SS broth and 2 ml of cells was harvested into 4 ml RNA Protect Bacteria (Qiagen). RNA was extracted from cell pellets using the RNAqueous RNA extraction kit (Ambion) following kit instructions, and DNA was then removed using the Turbo DNase kit. RNA concentrations were measured by nanodrop spectrophotometer and DNA removal was verified by PCR. Total RNA quality was assessed using a BioAnalyzer 2100, with each sample producing an RNA integrity number greater than nine. rRNA depletion was completed using the RiboZero bacteria kit (Illumina). The KAPA Stranded Total RNA kit (KAPA Biosystems) was used for library construction (five replicates per genotype), and sequencing was done by the University of British Columbia's Sequencing and Bioinformatics Consortium on an Illumina HiSeq2500. Single-end 100-bp reads were checked for quality using FastQC v0.11.7 (70) and aligned to the B. pertussis sp. strain Tohama I reference genome from NCBI (NCBI: txid257313 [33]) using the alignment program STAR v2.6.1a (71). Counts of aligned reads were generated using HTSeq's count function v0.9.1 (72).
Raw library sizes for all samples had a minimum of 1,032,961, median of 4,355,770, and maximum of 8,888,916 reads after removing low count genes (i.e., fewer than 10 counts across the number of biological replicates). Differentially expressed genes between each of the mutants and wild-type bacteria were determined using the Wald statistical test implemented in the Bioconductor package DESeq2 v1.24.0, R version 3.5 (43). Results were corrected for multiple testing using the Benjamini-Hochberg method, and were filtered for genes with an adjusted P value ≤ 0.05 and an absolute fold change ≥ 1.5. Functional enrichment of these differentially expressed genes was performed using an overrepresentation analysis via the R package Gage v2.34.0 (44), testing for enriched pathways from the KEGG database for B. pertussis sp. strain Tohama I. Pathways resulting from this analysis were considered significant based on a false discovery rate (FDR)-corrected q value threshold of ≤0.1. To generate counts of mapped reads for each sample and genotype along a defined region of interest, we used IGVtools' count function v2.4.14 (73).
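As a hedged illustration of the filtering step described above, the following Python sketch applies the same thresholds (adjusted P ≤ 0.05 and absolute fold change ≥ 1.5) to a DESeq2-style results table exported to CSV. The file name is hypothetical, the column names follow DESeq2 defaults, and this is not the authors' R workflow.

```python
# Filter a DESeq2-style results table with the thresholds used in this study.
import numpy as np
import pandas as pd

results = pd.read_csv("deseq2_results_mutant_vs_wt.csv", index_col=0)  # hypothetical export

# A linear fold change of 1.5 corresponds to |log2 fold change| >= log2(1.5).
lfc_cutoff = np.log2(1.5)

significant = results[
    (results["padj"] <= 0.05) & (results["log2FoldChange"].abs() >= lfc_cutoff)
]

up = significant[significant["log2FoldChange"] > 0]
down = significant[significant["log2FoldChange"] < 0]
print(f"{len(significant)} differentially expressed genes "
      f"({len(up)} more abundant, {len(down)} less abundant than wild type)")
```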
Volcano plots were generated by plotting log2 fold change against −log10 of the P value for all genes detected in our RNA-Seq analysis. Genome coordinates of putative sRNAs (46) present in the Bordetella genome were added prior to DESeq2 analysis to determine any differential expression among these potential regulatory RNAs. Subsets of differentially expressed genes were generated by filtering locus tags present in other transcriptomic analysis data sets of Bordetella pertussis (21,28,30).
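A volcano plot of this kind can be drawn directly from such a results table; the Python sketch below is a minimal example under the same assumptions about the input file and column names, not the figure-generation code used for the paper.

```python
# Minimal volcano plot: log2 fold change versus -log10(P value) per gene.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

results = pd.read_csv("deseq2_results_mutant_vs_wt.csv", index_col=0).dropna(subset=["pvalue"])

x = results["log2FoldChange"]
y = -np.log10(results["pvalue"])
sig = (results["padj"] <= 0.05) & (x.abs() >= np.log2(1.5))   # same thresholds as above

plt.scatter(x[~sig], y[~sig], s=5, color="grey", label="not significant")
plt.scatter(x[sig], y[sig], s=5, color="red", label="differentially expressed")
plt.xlabel("log2 fold change (mutant vs. wild type)")
plt.ylabel("-log10(P value)")
plt.legend()
plt.tight_layout()
plt.savefig("volcano_plot.png", dpi=300)
```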
Quantitative PCR. Wild-type and mutant cells were grown in SS-medium at 37°C. Cultures were harvested at mid-log to late log phase and at stationary phase. Cell pellets were resuspended in 100 μl Tris-EDTA pH 7.0 with 1 mg/ml lysozyme. RNA was then extracted following the RNAqueous RNA extraction kit protocol (Ambion) and treated for DNA contamination using Turbo DNase I (Ambion). RNA concentrations were measured by nanodrop spectrophotometer and DNA removal was verified by PCR.
To verify gene transcript abundance changes, cDNA was made from 250 ng of each RNA sample using the iScript cDNA synthesis kit (Bio-Rad) following kit instructions. All primers used are detailed in Table 2 and were designed to have an annealing temperature of 60°C. Quantitative PCR (qPCR) reactions were set up using SsoAdvanced SYBR green master mix and measured on a Bio-Rad CFX96 instrument. Each sample was run in duplicate with each 10-μl reaction containing 500 nM of each primer pair and 12.5 ng cDNA. The thermal cycling program consisted of an initial melt step of 98°C for 3 min, then 95°C for 15 s and 60°C for 30 s for 40 cycles, followed by melt curve analysis. Results were normalized using rpoB as a housekeeping gene and fold changes calculated using the threshold cycle (2^−ΔΔCt) method (74).
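For clarity, the threshold-cycle calculation works as in the minimal Python sketch below, with rpoB as the reference gene; the Ct values shown are placeholders rather than measured data.

```python
# Relative expression by the 2^-ddCt (threshold cycle) method, normalized to rpoB.
def fold_change_ddct(ct_target_mut, ct_rpoB_mut, ct_target_wt, ct_rpoB_wt):
    """Expression of the target gene in the mutant relative to wild type."""
    delta_ct_mut = ct_target_mut - ct_rpoB_mut   # normalize mutant to the housekeeping gene
    delta_ct_wt = ct_target_wt - ct_rpoB_wt      # normalize wild type to the housekeeping gene
    delta_delta_ct = delta_ct_mut - delta_ct_wt
    return 2 ** (-delta_delta_ct)

# Example with placeholder Ct values: a ddCt of +1 corresponds to a 2-fold reduction.
print(fold_change_ddct(ct_target_mut=24.0, ct_rpoB_mut=18.0,
                       ct_target_wt=23.0, ct_rpoB_wt=18.0))   # -> 0.5
```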
Immunoblotting. B. pertussis BP338 wild type and endonuclease mutant strains were grown in complete Stainer-Scholte medium to the desired growth phase, mid-log (OD 600 0.5 to 0.6), late log (OD 600 0.8 to 0.9), and stationary phase (time point = 108 h) or from plates at 72 h. For each time point, 1 ml of culture was harvested by centrifugation and cell pellets were resuspended to an OD 600 of 5.0 in SS salts (75) to adjust for equal loading. An equal volume of 2× sample buffer was added and the samples heated to 95 to 100°C for 5 min. Each time point sample was run on 12% SDS-polyacrylamide gels and transferred to polyvinylidene difluoride (PVDF) membranes. Membranes were probed with polyclonal antibodies raised against BrkA (76), TcfA (77), and monoclonal antibodies to CyaA (Sigma). Primary antibodies were visualized using horseradish peroxidase (HRP)-conjugated secondary antibodies. The band intensities were then measured using ImageJ (47) to estimate fold changes in protein expression.
Data availability. Sequence and count files are available at the Gene Expression Omnibus under accession number GSE164312.
SUPPLEMENTAL MATERIAL
Supplemental material is available online only.
Pediatric and adolescent mood disorders: An analysis of factors that influence inpatient presentation in the United States
Background Mental health is an essential aspect of health and wellbeing that the general population often overlooks. This study aims to utilize a nationwide sample [Healthcare Cost and Utilization Project (HCUP) Kid’s Inpatient Database (KID)] to analyze the factors affecting inpatient mood disorder admissions in the United States. Methods A total of 295,472 cases ages 1–20 were identified to meet the criteria (Appendix A) for the selected mood disorders from the HCUP KID 2016 dataset. We conducted descriptive statistics of the individual diagnosis. We evaluated the relationships with variables such as age (grouped), sex, region, disposition, household income, race, rural-urban demographics, and mean charges. We also conducted association tests for the variables of interest. Results An average of six days LOS was observed for mood disorders compared to four days LOS for other pediatric inpatient admissions nationwide. The highest prevalence rate (per 100,000) of single (5050), recurrent (2284) episode MDD and bipolar disorder (2445) was observed among no charge (uninsured) populations. The Native American population had the highest prevalence rate of single episode MDD (3274) and the highest rates of extreme and significant loss of function at presentation. The highest manic episode presentation rate was observed among Black (12) and Native American (9) populations. Manic episodes and bipolar disorder were higher among young adults (47 and 4554); teenagers (13–17) showed a higher presentation rate for all other mood disorders. Conclusion No charge (uninsured) populations, teenagers (13–17), females, Native Americans, and the South and Midwest regions showed a higher rate of mood disorder presentations. Understanding these variances could play a vital role in highlighting the need for new innovative care approaches. Comprehensive mental health programs in collaboration with educational and community organizations and other stakeholders could be vital to addressing mood and mental health among these populations. This approach tackles several social influencers, such as stigma and support, to ensure effectiveness and sustainability.
Introduction
Pediatric and adolescent mental and psychological health is an essential aspect of health and wellbeing that health-care providers and the general population have often overlooked. Global estimates from the World Health Organization indicate that roughly 10–20% of adolescents experience mental health conditions [1]. In the United States, mood disorders are among the leading causes of mortality and morbidity among pediatric and adolescent populations [2]. Additionally, mood disorders in children consist of some of the most debilitating categories of emotional and behavioral disturbances in youth, resulting in academic, social, and interpersonal relationship difficulties [3]. These trends necessitate careful evaluation and concerted effort with multiple stakeholders to address this problem. Several studies have established the relationship between mood, psychological disorders, and other medical and health conditions [4,5]. These populations' medical and psychological changes are often exhibited by acting out and changes in behavior that directly or indirectly impact their wellbeing and health in general [6]. The presence of multiple risk factors has been shown to directly impact these presentations [7]. Factors such as conformity with peers, autonomy, sexual exploration, and ease of communication via social media and other technology directly impacts the outcomes of mood disorder presentations in these populations [3,8]. The likelihood of inpatient presentation and potential comorbidities directly or indirectly impacts these patients' socioeconomic outlook [9].
The prevalence of other psychiatric comorbidities, including family history and irritability, is likely to be observed for those presenting with manic episodes and other mood disorders [10]. The ability to effectively diagnose variances in performance and factors that impact such presentations needs to be adequately studied to develop effective approaches in addressing them. Persistent irritability and lack of adequate mitigation efforts often lead to violent outbursts with potentially dangerous consequences to patients and society [11]. Many studies have shown an increased rate of bipolar diagnosis in the pediatric and adolescent population [12–14]. Multiple factors show an impact in the prevalence of pediatric bipolar, including an increase in diagnosing and recognition, side effects of stimulant medications, and changes in diagnostic criteria [15]. The chronic nature of these conditions [16] makes it imperative for timely diagnosis and treatment.
An estimated 2–5% of children and adolescents experience significant depression [17]. The presence of a family history of emotional or psychological difficulties plays a significant factor in the prevalence of these psychiatric presentations [18]. The lack of active engagement and of a concerted effort to recognize and treat depression among this population risks escalating recurrence and worsening of other health problems. Given the full range of presentation of depression, a provider-educator-parent partnership must develop multifaceted strategies to encourage youth discussion and openness about their feelings and experiences. A significant percentage of those who experience a single episode are likely to have recurrent events in the future [6]. The interconnectivity between these psychiatric disorders necessitates a concerted effort to create a multifaceted support system for these populations. Adolescents diagnosed with Major Depression Disorders (MDD) have been found to have neurocognitive impairments as part of a varied range of health issues that arise or worsen due to this comorbidity [19]. Research has shown that individuals with persistent depression had frequent health-care resource utilization and increased care costs [20]. There has been consistency regarding the presence of depression and comorbidities; moreover, the lack of effective coordination between primary care and mental health services impacts health outcomes. These comorbidities have been shown to lead to "hidden prevalence, underdiagnosed, and undertreated depression, especially in the underserved populations" [21].
It is estimated that over 43 billion dollars in costs can be attributed to the care of depression in the United States [22]. Mood disorder presentations with comorbidities such as mania and depression have been shown to present with a worse prognosis. Distinguishing between mood disorders and creating an effective plan for screening in a primary care setting is imperative in improving overall health outcomes in these populations. The impact of developmental factors plays a vital role in these mood disorders in the pediatric and adolescent population. Furthermore, these presentations often negatively affect their social, environmental, family, and academic activities [23]. Several approaches are effective in treatment for these populations [24]. A multifaceted approach must be taken to ensure adequate resources to aid in addressing such a problem.
This research is focused on evaluating the inpatient rates of adolescent and pediatric mood presentations and factors that directly or indirectly impact these presentations. Understanding these factors will help implement innovative approaches to care that are targeted at this age group.
Methods
We included 295,472 cases that met the inclusion criteria based on ICD-10 codes for each of the mood disorders (Appendix 1) in the Kid's Inpatient Dataset (KID 2016). The KID 2016 includes only ICD-10 data elements for inpatient presentations. The dataset includes 44 states, which account for a significant majority of inpatient presentations across the country.
Study cohort
We included all patients with the diagnosis described in Appendix 1 and aged between 1 and 20 years with a primary or secondary diagnosis of the selected mood disorders.
Analysis
We conducted descriptive statistics of the individual diagnosis (Appendix A). We evaluated the relationships with variables such as age (grouped), sex, region, disposition, household income, race, rural-urban demographics, and mean charges. Disease severity and mortality for all patients grouped by age, race, location and income were further analyzed. We also conducted a logistic regression to evaluate the predictors of presentation for each diagnosis. All analyses were performed with weights per HCUP guidelines. All variable analysis was performed with IBM (IBM Corp., Armonk, NY) Statistical Package for Social Sciences (SPSS) Version 23. Power BI (2019; Microsoft Corporation, Redmond, WA) was used to calculate the rates of presentation and to generate graphs.
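For illustration only, the Python sketch below shows how a weighted presentation rate per 100,000 hospitalizations could be computed from HCUP KID-style records; the file names and column names ("DISCWT" for the discharge weight, "diagnosis", "age_group") are assumptions, and this is not the SPSS/Power BI workflow actually used in the study.

```python
# Weighted presentation rate per 100,000 hospitalizations, by group.
import pandas as pd

mood_cases = pd.read_csv("kid_2016_mood_disorders.csv")      # hypothetical extract of mood disorder records
all_admissions = pd.read_csv("kid_2016_all_admissions.csv")  # hypothetical extract of all pediatric admissions

def weighted_rate_per_100k(cases: pd.DataFrame, denominator: pd.DataFrame, group: str) -> pd.Series:
    """Sum of discharge weights for cases divided by the weighted total, per 100,000."""
    num = cases.groupby(group)["DISCWT"].sum()
    den = denominator.groupby(group)["DISCWT"].sum()
    return (num / den * 100_000).round(0)

bipolar = mood_cases[mood_cases["diagnosis"] == "bipolar"]
print(weighted_rate_per_100k(bipolar, all_admissions, group="age_group"))
```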
Results
A total of 295,472 cases were identified as meeting the criteria for the selected mood disorders based on ICD-10 classification. Of the total included in this study, recurrent MDDs, Bipolar disorders, and single episodes of MDD comprised the highest numbers of cases, presenting with 22%, 19%, and 46% of the total number of cases, respectively. As shown in Fig. 1, young adults (18–20 years) and females had the highest bipolar presentation rates of 4555 and 1042 per 100,000 hospitalizations, respectively. Teenagers within the 13-17-year group had the highest rates of a single (68275) and recurrent MDD (36035) and unspecified mood disorder (13548) diagnosis. Unspecified mood disorder presentations were similar among pre-teens and young adults, while the highest presentations were seen among teenagers (13–17) and men at a rate of 2678 and 459 per 100,000 presentations, respectively. Further analysis also indicated a statistically significant difference (P < .001) in Length of Stay (LOS) for mood disorder presentation (Pre-Teen = 7, Teenagers = 6, Young Adults = 6) compared to the general population (Pre-Teen = 4, Teenagers = 4, Young Adults = 4). Females showed the highest presentations in all categories, except with manic episodes and unspecified mood disorders, as shown in Fig. 1. Single and recurrent MDD presentations showed the highest variance in presentations between males and females.
Evaluation of patient disposition showed that most of the patients were discharged routinely after receiving inpatient medical care. A significant percentage also discharged themselves against medical advice. Table 1 shows that the highest mortality was observed among patients with a single episode of MDD and bipolar disorder with 238 and 66 cases, respectively. Table 1 also shows higher mortality and discharges against medical advice rates among single episode major depression and bipolar compared to other diagnoses. Individuals diagnosed with depression were more likely to be transferred than those with other mood disorders.
Income by zip code
The presentation rate was generally evenly distributed across income demographics, as depicted in Fig. 2. Single, recurrent MDD and bipolar disorders had the highest rates across all income groups. The lowest income groups showed a higher presentation of bipolar disorder with a rate of 920 and 930 compared to 885 and 887 per 100,000 hospitalizations among high-income groups. Recurrent MDD prevalence was ubiquitous across income classification, with about 16,000 cases in every income category. An average of 6 days LOS was observed for mood disorders compared to 4 days LOS for other pediatric inpatient admissions nationwide. Payer demographics also differed significantly (P < .001) compared to the general population with Medicaid (47%), private insurance (45%), other (4%), and self-pay (3%), among mood disorder presentations. As shown in Fig. 2, the highest rates of presentations for all mood disorders were among the population without health insurance categorized as no charge, followed by Medicare recipients.
Race
As shown in Table 2, the race demographics indicate a higher single and recurrent MDD presentation rate among Native Americans and Whites, followed by Blacks, Hispanics, Other, and Asians. The rate of bipolar and manic episodes among the population was significantly higher among White and Black populations than other racial demographics. Further analysis shows a higher LOS (days) for Native Americans (9), Asians, and Pacific Islanders (7) compared to other races with an average LOS of 6 days. The rate of severity presentation varied considerably by race and region. The Native American population shows a higher rate of severity presentation relative to population size with extreme loss of function and significant function loss. Extreme mortality risk was observed among Native Americans and the White population.
Region & Rural-Urban Classification
Mood disorder presentation varied by region, with the highest rate presentation in all diagnostic categories found in the Midwest, as shown in Fig. 3. Average LOS (days) also varied, with Northeast (7), Midwest (5), South (6), and West (6) for patients with mood disorders and an average of 4 LOS (days) for non-mood disorder hospitalization. Extreme loss of function rate was high in the Midwest, West, Northeast, and South. Presentation by region varied considerably, with a higher presentation of single and recurrent depression in the Midwest, Northeast, South, and West. The Northeast witnessed a higher manic episode rate and was the third highest in both single and recurrent MDD presentation. The Western part of the country had the lowest inpatient presentation in every mood disorder diagnosis, as observed in Fig. 3.

Fig. 4 provides an overview of presentations in a rural-urban setting. The rate of manic episodes among the population was largely the same across the classification areas. Metropolitan areas with 50,000–250,000 population size had a higher presentation of single and recurrent MDD and the second-highest bipolar disorder rate (1030 per 100,000) presentations. Higher mortality and severity risk at presentation were observed among smaller micropolitan and rural areas than large urban centers. Most of the population presented with a moderate and significant loss of function, as shown in Fig. 5. The rate of major and extreme loss of function was high among White (7266 & 5174) and Native Americans (6594 & 4952). Further analysis of severity at presentation shows a higher rate of major and significant loss of function among the highest income demographics (51st to 75th & 76th to 100th income percentiles).

Table 3 shows higher charges for patients with manic episode and single MDD presentations, at $39,287 and $31,182, respectively, compared to the general pediatric population. There are minor differences in the charges (bipolar, persistent & unspecified mood disorders) for other diagnoses compared to the general population given the very small percentage of mood disorders. These charges associated with presentations are much higher than the most frequent reasons for pediatric inpatient presentations, as shown by other studies [25–27].
Discussions
Understanding the prevalence of mood disorders in children and adolescents is imperative in developing effective strategies for providing quality care to these patient populations. Given that the highest prevalence is observed among teenagers (13–17 years) and females, a concerted public and population health effort is needed to ensure adequate education on the subject matter, as indicated by several researchers [3,28,29]. Teen-targeted approaches are likely to aid in addressing self-care and better disease management to prevent inpatient hospitalizations. The relatively high number of individuals that discharge themselves against medical advice across all age groups supports the need for concerted local and national educational efforts on psychological disorders among the pediatric population. Several studies have found a relationship between severity and increased re-admission likelihood in populations that discharge against medical advice [30,31]; this calls for a combined population and public health approach to improving education and understanding such decisions, especially in the pediatric and adolescent population. The mortality and morbidity rate stratified by race and income points to specific demographics disproportionately impacted [32–34]. The variance in the rate of mood disorder presentation by race requires further studies to ascertain the influence of nature [35] (genetics) and nurture (social, environment) on disease prevalence [36,37]. Given the disproportionality of health resources availability across the United States [38], the creation of educational programs that educate and empower minority women and youth would be valuable in addressing mood disorders [39]. The longer LOS and higher severity observed among specific populations (such as Native Americans) supports the idea that more investment in such communities is needed to improve health outcomes [40]. Higher mortality and morbidity (severity) rates in small metropolitan and rural areas further support the notion of health-care inequities and their impact on patient outcomes [41]. The income gap observed in disease presentation also supports the relationship between social determinants of health and mental health [33].
The payer demographics findings support the notion that governmental funds (Medicaid and Medicare) pay for a significant proportion of the disease burden [42]. The utilization of emergency medical services by region for such presentations shows the need for concerted efforts in certain parts of the country to provide preventive behavioral and psychological care to these populations. As shown in Table 3, the cost variance clearly indicates a higher cost burden for individuals with specific mood disorders than for the general population; this finding is supported by other studies on the increasing cost of mental health services in the United States [43]. It is imperative to develop local and national comprehensive pediatric and adolescent mental health educational programs [44–47]. Such programs should be directly related to school curricula to ensure adequate education and minimize stigma and shame associated with such medical conditions.
Conclusion
Several factors influence pediatric and adolescent mood disorders in inpatient admissions. Understanding these variances could play a vital role in highlighting the need for new innovative care approaches. The lack of sufficient programs tailored to provide care to children and adolescents directly impacts health-care outcomes and hence cost of care, as demonstrated by this study. Factors such as region, age, income, sex, rural-urban, payer, severity, mortality, and LOS play a role in the likelihood of inpatient presentation. Comprehensive mental health programs in collaboration with educational and community organizations and other stakeholders are vital to addressing mood and mental health among this study population. Such an approach should tackle several social influences such as stigma and support to ensure effectiveness and sustainability. Health policymakers need to note that about 50% of mood disorder presentations were identified through ER visits across all regions and rural-urban demographics. Such findings call for a reevaluation of current health-care policy on comprehensive care to the pediatric and adolescent population, especially regarding mental health. More work needs to be done on mental health for pediatric, adolescent, and early adult populations across the country. Approaching such essential issues through a community and population health perspective could hold the key to providing quality care to these populations.
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Ethical approval
This article does not contain any studies involving human participants performed by any of the authors.
This research article does not contain any studies involving animals performed by any of the authors.
CRediT authorship contribution statement
Clinical Implementation and Initial Validation of Respiratory-Gated Stereotactic Body Radiotherapy for Thoracoabdominal Tumors Under Abdominal Compression Using an Anzai Laser-Based Gating Device With Visual Guidance on an Elekta Linear Accelerator
We have clinically implemented gated stereotactic body radiotherapy under abdominal compression using an Anzai laser-based gating device with visual guidance in combination with an Elekta linear accelerator. To ensure accuracy, we configured the gating window for each patient by correlating the respiratory curve from the laser sensor and the tumor positions from the 4D computed tomography (CT) images reconstructed with the aid of the respiratory curve. This allowed us to define a patient-specific gating window to keep the tumor displacement below 5 mm from the end-expiration, assuming the reproducibility of the tumor trajectories and the laser-based body surface measurements. Results are summarized as follows: 1) A patient-specific gating window internal target volume (ITV) with a prespecified maximum tumor displacement relative to the end-expiration was obtained by acquiring a 4D CT consisting of 20 phase CT sets and a respiratory curve from the Anzai system. 2) Respiratory hysteresis was managed by setting two different thresholds on the respiratory curve based on the predetermined maximum tumor displacement relative to end-expiration. 3) Abdominal compression increased gating window width, thereby presumably leading to faster gated-beam delivery. 4) Gamma index pass rates in sliding-window gated intensity-modulated radiotherapy (IMRT) were superior to those in gated volumetric modulated arc therapy (VMAT). 5) Intrafraction gated cone-beam computed tomography (CBCT) demonstrated that the tumor appeared to remain within the gating window ITV during the stereotactic gated sliding-window IMRT. In conclusion, we have successfully implemented gated stereotactic body radiotherapy at our clinic and achieved a favorable clinical validation result. More cases need to be evaluated to increase the validity.
Introduction
Respiratory-gated radiotherapy was first proposed in 1989 by Ohara at the University of Tsukuba [1]. Until the mid-1990s, it was only used by this institution [2,3]. According to a recent survey report by the American Association of Physicists in Medicine (AAPM) Task Group (TG) 324, out of 536 institutions, 60% used the internal target volume (ITV) method, 14% used breath hold, 11% used abdominal compression, and 10% used gating [4]. The ITV method is a standard motion management technique, where a treatment volume is defined by the envelope of the lesion delineated on all phases of the 4D CT. The AAPM TG 76 report, which is nearly identical to the AAPM 91 report, recommends considering respiratory management techniques if a range of motion greater than 5 mm in any direction is observed [3,5]. In other words, if abdominal compression fails to achieve the 5 mm range of motion, a gating technique may need to be considered. Respiratory gating aims to reduce doses to organs at risk without impairing doses to the tumor. A normal tissue complication probability (NTCP) model estimated that respiratory gating reduced pneumonitis risk from 43% to 32% for the highest-risk lung cancer patients [6]. Previous gating techniques have included free-breathing gating with patient body surface measurements, or with fluoroscopic imaging of metal markers embedded near the tumor as a surrogate [1,7-10]. Respiratory gating with audio guidance and visual feedback was also reported to improve the reproducibility and regularity of respiratory amplitude and period [8]. To the authors' knowledge, no studies have reported respiratory gating under abdominal compression on a standard linear accelerator (linac).
The purpose of this study was to clinically implement and validate our respiratory-gated stereotactic radiotherapy technique under abdominal compression using an Anzai laser-based gating device with visual guidance in combination with an Elekta linac.
Technical Report
We started respiratory-gated radiotherapy using an Elekta linac, Axesse (Elekta AB, Stockholm, Sweden), and an Anzai respiratory gating system, AZ-733VI (Anzai, Tokyo, Japan), including a laser sensor with visual guidance unit, ABLE (Anzai, Tokyo, Japan), where the gating was performed under abdominal compression. The Elekta Response (Elekta AB, Stockholm, Sweden) gating control interface was used to transfer the gating signal from the Anzai system to the Elekta linac, and the linac gun hold-on time was adjusted to the maximum allowable value of 6.50 sec to minimize the gate-on delay [11]. It was reported that the abdominal compression helped maintain the reproducibility of tumor motion trajectories during the computed tomography (CT) simulation, the pre-treatment tumor localization, and the beam delivery sessions [12]. To ensure accuracy, we configured the gating window for each patient by correlating the respiratory curve from the laser sensor and the tumor positions from the 4D CT images reconstructed with the aid of the respiratory curve. This allowed us to define a patient-specific gating window to keep the tumor displacement below 5 mm from the end-expiration, assuming the reproducibility of the tumor trajectories and the laser-based body surface measurements among the above three periods. This study was approved by the institutional review board of Tokyo Medical University with an approval ID of T2023-0199. Written informed consent was obtained from the patients involved in this study.
Figure 1 shows a CT simulation for lung tumor treatment planning with a SOMATOM Emotion 16 (Siemens Healthineers AG, Erlangen, Germany) under abdominal compression using the vacuum-assisted fixation device, BodyFIX (Elekta AB, Stockholm, Sweden), and an in-house compression block made of Styrofoam (18 × 18 × 4 cm³) with a commercial carbon fiber frame, BodyFIX Diaphragm Control (Elekta AB, Stockholm, Sweden), to apply pressure to the block. A narrow laser beam was projected perpendicular to the patient's abdomen with an aimed laser-abdomen distance of 12 cm. The laser spot was projected on the subcostal line midway between the inferiormost thoracic cage border and the L3 vertebral body to detect respiratory motion while minimizing the effect of the aortic pulsation. The projected position was marked to accurately reproduce the measurement during the course of the treatment. The reflected light was detected on a one-dimensional array sensor, where the detected signal position indicates the distance to the abdomen. The laser sensor detects abdominal displacement caused by breathing and generates a respiratory curve, which was then exported to the CT unit to reconstruct multi-phase 4D CT images, where amplitude-based respiratory phase information was obtained from the respiratory curve. A surgical tape was applied to the laser spot on the BodyFIX vacuum cover sheet, thereby avoiding irregular reflection. A portable display included in the ABLE unit was placed over the patient's face to provide visual guidance for stable breathing as shown in Video 1. Treatment plans were created by the Monaco 5.1.1 (Elekta AB, Stockholm, Sweden) treatment planning system (TPS) using the end-expiratory CT and the other two CT image sets giving 5 mm displacements before and after reaching the end-expiration. A gating window ITV was defined by these three CT image sets.
VIDEO 1: Patient guidance display to help stabilize breathing.
Breathing in and breathing out duration can be configured separately.
View video here: https://vimeo.com/939069793

Figure 2 shows respiratory-gated lung tumor stereotactic radiotherapy under abdominal compression using the vacuum-assisted BodyFIX and the compression block with a 20 cm wide strap to apply pressure. The laser sensor detects abdominal displacement caused by respiration and generates a respiratory curve. Gating thresholds were set on the respiratory curve by referring to the tumor displacement relative to the end-expiration. The resulting gating signal was then exported to the linac to control the beam delivery. Again, the portable display was placed over the patient's face to visually guide stable breathing (Video 1). The carbon fiber frame used in the CT room was found to be too large for use with the linac, as its gantry head would hit the frame when the tumor was located far from the center of the body. The strap did not have this problem. The room-in to room-out for a patient required approximately 22 minutes including five minutes for fixation and setup, two minutes for pre-treatment CBCT imaging, one minute for pre-treatment couch adjustment, seven minutes for noncoplanar beam delivery with two minutes for couch rotation, two minutes for in-treatment gated CBCT imaging, and three minutes for release from the fixation. In addition, the first fraction required an additional minute for breathing practice.
Figure 3a depicts a plot of the tumor displacement in the superior-inferior direction relative to end-expiration measured on 4D CT images as a function of the normalized height of the respiratory curve relative to end-expiration measured by the laser sensor, where the right half of the horizontal axis shows the respiratory amplitude moving from end-expiration (0%) to end-inspiration (100%) in 10% increments, while the left half shows the respiratory amplitude moving from end-inspiration (100%) to end-expiration (0%) in 10% increments. In this plot, 20-phase CT images were generated and utilized. The time course is indicated by the white arrows. Figure 3b displays the corresponding respiration curve measured by the laser displacement sensor as a function of time, on which the beam-on and beam-off thresholds were configured by correlating the tumor displacement with the respiratory curve. If the tumor displacement is allowed to be less than 5 mm relative to end-expiration during beam delivery, abdominal compression will increase the gating window from a (20%, 0%) range to a (40%, 0%, 50%) range in this particular case. This will result in a longer beam-on time per respiratory cycle.
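The following Python sketch illustrates, under simplifying assumptions, how branch-specific thresholds could be read off from such a correlation: the displacement-versus-amplitude curve from the 20-phase 4D CT is interpolated separately for the inhale and exhale branches to find the normalized amplitude at which the tumor displacement reaches 5 mm. The sample points are placeholders, not patient data, and the actual clinical workflow relies on the Anzai system and the 4D CT described above.

```python
# Derive branch-specific gating thresholds (in % of respiratory amplitude)
# from tumor displacement measured on the 4D CT phases. Values are placeholders.
import numpy as np

# Normalized respiratory amplitude (% relative to end-expiration = 0%) and
# superior-inferior tumor displacement (mm) for each 4D CT phase, per branch.
exhale_amp  = np.array([0, 10, 20, 30, 40, 50, 100])   # inspiration -> expiration branch
exhale_disp = np.array([0, 1.5, 3.0, 4.2, 5.5, 7.0, 12.0])
inhale_amp  = np.array([0, 10, 20, 30, 40, 50, 100])   # expiration -> inspiration branch
inhale_disp = np.array([0, 2.0, 4.0, 6.0, 8.0, 9.5, 12.0])

def amplitude_threshold(amp_percent, disp_mm, max_disp_mm=5.0):
    """Normalized amplitude at which the displacement reaches max_disp_mm,
    found by linear interpolation along a monotonically increasing branch."""
    return float(np.interp(max_disp_mm, disp_mm, amp_percent))

threshold_exhale = amplitude_threshold(exhale_amp, exhale_disp)   # threshold on the exhale branch
threshold_inhale = amplitude_threshold(inhale_amp, inhale_disp)   # threshold on the inhale branch
print(f"Beam on below {threshold_exhale:.0f}% on the exhale branch "
      f"and below {threshold_inhale:.0f}% on the inhale branch")
```

Because the two branches are interpolated independently, the resulting gating window is generally unsymmetrical when respiratory hysteresis is present, which is consistent with the behavior described in the text.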
FIGURE 4: A comparison of tumor displacements relative to end-expiration for another patient with and without abdominal compression, as a function of the normalized height of the respiratory curves relative to end-expiration.
The blue diamonds and red squares are with and without abdominal compression, respectively. The white arrows indicate the time course. If we allow the tumor displacement to be less than 5 mm relative to end-expiration for beam delivery, the abdominal compression increases the gating window from a range of (20%, 0%) to a range of (40%, 0%, 50%) in this particular case, resulting in a longer beam-on time per respiratory cycle.
Video 2 demonstrates the use of pre-treatment 4D CBCT imaging for daily registration of a lung tumor under abdominal compression. The patient couch was translated using the calculated registration errors. The video shows that after the couch adjustment, the tumor at end-expiration is observed at the superiormost position in the gating window ITV (in sky blue color), where the gating window ITV was defined as an ITV considering only respiratory phases within the gating window [13,14].
VIDEO 2: Pre-treatment 4D CBCT imaging for daily registration of a lung tumor under abdominal compression.
The patient couch was translated using the calculated registration errors. The video shows that the tumor at the end-expiration is observed at the superiormost position in the gating window ITV (in sky blue color) after the couch adjustment, where the gating window ITV was defined as an ITV considering only respiratory phases within the gating window. In other words, 4D CBCT imaging allows us to accurately and precisely position the tumor at the end-expiration, which is very important in our workflow. Respiratory hysteresis was also observed in the video, presumably leading to an unsymmetrical gating window as typically shown in Figures 3a, 3b.
CBCT: cone-beam computed tomography; ITV: internal target volume

View video here: https://vimeo.com/933611169

For further validation, intrafraction kV CBCT imaging was performed during gated-beam delivery. The kV projection images were acquired only when the measured respiratory signal amplitude corresponded to the preconfigured gate-on period, as typically shown in yellow in Figure 3b. The reconstructed CBCT images can be used to visualize the time-weighted tumor position during the gated-beam delivery. Figure 5 shows intrafraction gated CBCT images acquired during stereotactic gated sliding-window intensity-modulated radiation therapy (IMRT) for a lung tumor case in Video 2. The noncoplanar seven-port sliding-window technique was used with couch angles of 0°, 10°, and 350°. Specifically, the intrafraction half-scan CBCT images were acquired immediately after completion of the first ports with a 0° couch angle, while the linac gantry was manually rotated 200° for CBCT imaging each time the measured respiratory signal amplitude met the preconfigured gate-on condition. The patient was instructed to maintain the same breathing pattern during this CBCT data acquisition. The figure displays the gating window ITV in sky blue and the planning target volume (PTV) in red, where the PTV was defined by adding an isotropic margin of 3 mm to the gating window ITV. The tumor appeared to remain within the gating window ITV during the stereotactic gated sliding-window IMRT.
To verify the delivered dose distribution, we created three-arc noncoplanar stereotactic flattening filter-free (FFF) volumetric modulated arc therapy (VMAT) plans and seven-port noncoplanar stereotactic sliding-window FFF IMRT plans with a photon energy of 6 MV using the Monaco TPS. These plans were created using the same CT image set of a lung tumor patient with a prescription dose of 50 Gy in five fractions. The plans were exported to the Elekta linac and the beams were delivered to the ArcCHECK quality assurance phantom (Sun Nuclear, Florida, USA). Global gamma pass rates were evaluated using a threshold of 10% and a criterion of 3 mm/3% in absolute dose mode.
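For context, the Python sketch below illustrates a simplified global gamma evaluation on a 2D dose plane with a 3 mm / 3% criterion and a 10% low-dose threshold. It is only meant to make the metric concrete; the clinical analysis was performed with the vendor's quality assurance software, which implements a more sophisticated search, and the arrays and grid spacing here are placeholders.

```python
# Simplified global gamma pass rate on 2D dose planes (brute-force local search).
import numpy as np

def global_gamma_pass_rate(dose_eval, dose_ref, spacing_mm,
                           dd_percent=3.0, dta_mm=3.0, threshold_percent=10.0):
    """Fraction of reference points above the low-dose threshold with gamma <= 1."""
    norm = dose_ref.max()                              # global normalization dose
    dd = dd_percent / 100.0 * norm                     # absolute dose-difference criterion
    ny, nx = dose_ref.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    search = int(np.ceil(3 * dta_mm / spacing_mm))     # limit the spatial search window
    passed, total = 0, 0
    for iy in range(ny):
        for ix in range(nx):
            if dose_ref[iy, ix] < threshold_percent / 100.0 * norm:
                continue                               # skip the low-dose region
            y0, y1 = max(0, iy - search), min(ny, iy + search + 1)
            x0, x1 = max(0, ix - search), min(nx, ix + search + 1)
            dist2 = ((yy[y0:y1, x0:x1] - iy) ** 2 + (xx[y0:y1, x0:x1] - ix) ** 2) * spacing_mm ** 2
            ddiff2 = (dose_eval[y0:y1, x0:x1] - dose_ref[iy, ix]) ** 2
            gamma2 = dist2 / dta_mm ** 2 + ddiff2 / dd ** 2
            total += 1
            passed += gamma2.min() <= 1.0
    return passed / total

# Toy check: identical planes should give a 100% pass rate.
ref = np.random.default_rng(0).random((50, 50)) * 2.0  # placeholder dose plane in Gy
print(global_gamma_pass_rate(ref.copy(), ref, spacing_mm=1.0))  # -> 1.0
```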
Table 1 compares the global gamma pass rates and beam delivery times for non-gated and gated 6 MV FFF non-coplanar VMAT and IMRT deliveries using identical patient CT images and the dose prescriptions/constraints. The gamma pass rates (3 mm/3%) were 94% for VMAT and 99% for sliding-window IMRT cases. The beam delivery times were comparable between non-coplanar VMAT and non-coplanar sliding-window IMRT under either gated or non-gated conditions.
Discussion
We have clinically implemented gated stereotactic body radiotherapy under abdominal compression using the Anzai laser-based gating device with visual guidance in combination with our Elekta linac.
We showed that a patient-specific gating window with a prespecified maximum tumor displacement relative to the end-expiration can be obtained by acquiring a 4D CT consisting of 20 phase CT sets and a respiratory curve with the aid of the Anzai device. Using the Anzai device, respiratory hysteresis was also managed by setting two different thresholds on the respiratory curve based on the predetermined maximum tumor displacement relative to end-expiration. Without setting the two different thresholds, the gating window would not have been optimized for cases with respiratory hysteresis.
Figure 4 showed that the abdominal compression minimized instantaneous tumor motion during 4D CT image acquisition, thereby leading to more stable respiratory management for the gating window setting. Another finding was that abdominal compression increased gating window width per cycle, thereby presumably leading to faster gated-beam delivery. To our knowledge, no studies have reported respiratory gating under abdominal compression on a standard linac. In addition, this is the first report suggesting that abdominal compression would accelerate the gated-beam delivery.
Video 2 shows that 4D CBCT imaging allows us to accurately and precisely position the tumor at end-expiration, which is very important in our workflow. Respiratory hysteresis was also observed in the video, presumably leading to an asymmetrical gating window, as typically shown in Figures 3a, 3b.
Figure 5 shows that the tumor appeared to remain within the gating window ITV during the stereotactic gated sliding-window IMRT, as the intrafraction gated CBCT provided the time-weighted tumor location during the gated IMRT delivery. This was made possible because the tumor position at end-expiration was preadjusted immediately before beam delivery with the use of the 4D CBCT, as shown in Video 2. To the best of our knowledge, no previous studies have reported the visualization of the time-weighted tumor volume during intermittent delivery of gated IMRT beams.
Table 1 shows that the sliding-window gated IMRT had a better gamma index pass rate than the gated VMAT. Snyder reported that gated VMAT on an Elekta linac resulted in gantry overrun and rewind and showed reduced pass rates depending on the gantry speed [15]. The sliding-window gated IMRT does not have this issue, and therefore our results appear to be consistent with Snyder's observation.
Limitations of this study include limited sample sizes, lack of comparative analyses against other respiratory management techniques, and potential institutional biases.Further studies with broader scope and larger sample sizes are necessary to further validate and refine the approach.
Conclusions
We have described the clinical implementation of our respiratory-gated stereotactic body radiotherapy technique under abdominal compression using an Anzai laser-based gating device with visual guidance in combination with an Elekta linac. An initial validation result of the technique was demonstrated by acquiring intrafraction gated-CBCT images, showing that the tumor appeared to remain within the gating window ITV during the stereotactic gated sliding-window IMRT. More cases need to be evaluated to increase confidence in the technique.
FIGURE 1: A photograph showing a CT simulation for lung tumor treatment planning under abdominal compression using a vacuum-assisted fixation device and an in-house compression block with a carbon fiber frame to apply pressure to the block.

FIGURE 2: A photograph showing respiratory-gated lung tumor stereotactic radiotherapy under abdominal compression using the vacuum-assisted fixation device and the compression block with a 20 cm wide strap to apply pressure to the block.

FIGURE 3: The tumor displacement in the superior-inferior direction and the corresponding respiration curve as a function of time. (a) A plot of the tumor displacement in the superior-inferior direction relative to end-expiration measured on 4D CT images as a function of the normalized height of the respiratory curve relative to end-expiration measured by the laser sensor; the right half of the horizontal axis shows the respiratory amplitude moving from end-expiration (0%) to end-inspiration (100%) in 10% increments, while the left half shows the respiratory amplitude moving from end-inspiration (100%) to end-expiration (0%) in 10% increments. Twenty-phase CT images were generated and utilized for this plot, and the white arrows indicate the time course. (b) The corresponding respiration curve as a function of time, measured by the laser displacement sensor. The beam-on and beam-off thresholds were individually configured on the respiration curve by correlating the tumor displacement with the respiration curve; the thresholds were defined with respect to the tumor displacement plot in (a) and the respiratory amplitude plot in (b), with the tumor displacement obtained relative to end-expiration. In these figures, the two thresholds were defined on the respiratory curve where the tumor displacement was 5 mm. The red and purple circles in (a) and (b) are the thresholds giving a tumor displacement of 5 mm relative to end-expiration, and the gate-on periods are shown in yellow in (b). CT: computed tomography

FIGURE 5: Respiratory-gated CBCT images acquired during stereotactic gated IMRT for a lung tumor case in Video 2 using a noncoplanar seven-port sliding-window technique with couch angles of 0°, 10°, and 350°.
A prospective study on outcomes of stainless steel proximal femoral nail for unstable intertrochanteric fractures in rural population
Introduction: Intertrochanteric fractures are common in the geriatric population. They account for 34% of all hip fractures. Surgical treatment with the proximal femoral nail (PFN) is being replaced with newer generation nails. The stainless steel PFN is still widely used in resource-limited settings. Material and Methods: This is a prospective study in a tertiary care hospital serving a rural population in southern India. The study included 20 patients with unstable intertrochanteric fractures with a minimum follow-up of 6 months. Intraoperative blood loss, surgical time, and radiation exposure in seconds were recorded. Patients were evaluated at each visit using the Harris Hip Score. Duration of hospital stay, time to union, and mobility status were recorded. Results: The mean operating time in our study was 70.15 (SD 9.39) minutes. The mean C-arm exposure was 119.60 seconds (SD 8.16). The average blood loss per patient was 221 ml (SD 41.66). The mean duration of hospital stay was 17.5 days. In our study, 14 patients had excellent results, 4 patients had good results, 1 patient had a fair result, and 1 patient had a poor result according to the Harris Hip Score. Conclusion: Stainless steel proximal femoral nailing remains a relevant choice for unstable intertrochanteric fractures in rural and resource-limited settings.
Introduction
Intertrochanteric fractures of the hip are very common fractures encountered by orthopaedic surgeons across the world [1]. The incidence is expected to increase with the rise in the geriatric population [2]. Various surgical treatments are available, such as the dynamic hip screw (DHS), proximal femoral nail (PFN), proximal femoral locking plate (PFLP), and proximal femoral nail anti-rotation (PFNA). In resource-limited settings, the stainless steel (SS) DHS and PFN are still popularly used owing to the increased cost of newer implants [3]. The DHS is the preferred treatment for stable intertrochanteric (IT) fractures, whereas the PFN or other cephalomedullary fixation techniques are useful in unstable IT fractures [4]. In this article, we discuss the functional outcome of the SS PFN in unstable intertrochanteric fractures.
Material and Methods 2.1 Study design:
This prospective study was conducted in the department of orthopaedics at a tertiary care hospital in a rural part of southern India. Institutional Ethics Committee approval was obtained. The study period was from October 2015 to January 2017, with a minimum follow-up of 6 months. Twenty consecutive patients presenting to the casualty and the orthopaedic out-patient department with closed unstable intertrochanteric fractures and fulfilling the inclusion and exclusion criteria described below were included in the study.
Pre-operative workup
All fractures were classified according to the Evans classification. All patients satisfying the inclusion criteria were assessed both clinically and radiologically before the decision for surgical intervention was made. In all patients, preoperative Buck's skin traction was applied to the fractured lower limb. Oral or parenteral NSAIDs and tramadol were used for pain management. The preoperative work-up consisted of routine blood investigations, a chest x-ray, and a traction internal rotation view of the hip. Venous Doppler was done to rule out deep vein thrombosis (DVT) in patients presenting more than 3 days after injury. Preoperative medical fitness was obtained in all cases. A preanaesthetic examination was carried out a day prior to the surgery. Written consent for the surgery was obtained from all patients.
Implant
The stainless steel proximal femoral nail we used is a cephalomedullary implant, which has biomechanical advantages over extramedullary implants. The Western version has a proximal diameter of 17.5 mm, a load-bearing femoral neck screw of 11.0 mm, and a derotation screw of 6.5 mm. The Indian version has a proximal diameter of 13 mm, a load-bearing femoral neck screw of 8 mm, and a derotation screw of 6 mm to suit the proximal femur of Indian patients. The distal locking screws are 4.9 mm in size, and the distance between the proximal screws is 5 mm in both versions [5]. We used the short PFN in all cases. A femoral nail cap was not used.
Post-operative follow up
Serial radiographs and the Harris Hip Score were used to assess the radiological and functional outcomes, respectively. Patients were assessed at 2 weeks, 4 weeks, 2 months, 3 months, and 6 months.
Statistical analysis
Data were entered using Epi Info version 7.2.1.0 and analysed using SPSS version 24.0. Categorical study variables such as gender, age group, side of the fracture, mode of injury, post-operative shortening, functional outcome, and post-operative complications were described in terms of frequency and percentage. Continuous variables such as age, blood loss, operating time, radiation exposure, and duration of hospital stay were described as mean and standard deviation. The association of age, sex, and side of fracture with the functional outcome was assessed using Fisher's exact probability test.
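As a rough illustration of the type of association testing described above, the sketch below applies Fisher's exact test to a hypothetical 2x2 table of functional outcome by sex. The actual analysis was performed in SPSS on the study data; the counts here are placeholders only.

import numpy as np
from scipy.stats import fisher_exact

# Hypothetical 2x2 table (placeholder counts, not the study data):
# rows = sex (male, female), columns = outcome (excellent/good, fair/poor).
table = np.array([[14, 1],
                  [4, 1]])

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, Fisher exact p = {p_value:.3f}")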
Results
Mean age of the study participants was 65.6 years (Figure 1). A male to female ratio of 3:1 was noted. The right hip was more commonly involved (65%). Trivial household trauma (self-fall) was the mode of injury in the majority of patients (80%), followed by road traffic accidents (20%). In our study, 14 patients (60.9%) were operated on within 7 days from the time of injury, and 6 patients (26.1%) were operated on after 7 days from the time of injury owing to associated comorbid conditions such as diabetes mellitus, hypertension, and cardiac disease, after obtaining fitness for surgery from anaesthesia. The mean time interval between injury and surgery was 5.15 days. Mean blood loss was 221 ml (SD 41.66) (Table 1). The mean C-arm exposure was 119.60 seconds (SD 8.16) (Table 2). The operating time was calculated from skin incision to skin closure. The mean operating time was 70.15 (SD 9.39) minutes. The mean duration of hospital stay was 17.5 days.
Discussion
The mean age in the study conducted by Kumar R et al. [6] was 62.3 years. The mean age in the study by Yamauchi K et al. [7] was 79.7 years. The average age of patients was 67.8 years in the study by Bhahat et al. [8] in 2013. Females are more commonly affected than males due to post-menopausal osteoporosis. Chang KP et al. [9] reported an epidemiological study of 229 hip fractures. They suggested that, with an increase in age, there was an increase in the incidence rates of hip fractures in both male and female patients. Even then, the most striking difference in incidence between sexes [116 (women) and 0 (men) per 100,000 person-years] was seen in the 60-64 year age group; this difference decreased as age progressed [2597 (women) and 1187 (men) per 100,000 person-years in the 85+ year age group]. Our study group consisted of more males than females (3:1), and the majority (50%) were aged between 60 and 70 years. The most common associated medical problem was anaemia in 6 cases (40%), followed by hypertension in 4 cases (26.7%) and diabetes in 3 cases (20%). Preoperative blood transfusion was done in 4 patients (26.7%) in view of anaemia. The mean blood loss in our study was 221 ml (SD 41.66). The results were comparable with Schipper I.B. et al. [10], in which the mean blood loss was 220 ml. In the study by Pajarinen J et al. [11], the mean blood loss was 330 ml. The mean operating time in our study was 70.15 ± 9.39 minutes. In comparison, Pajarinen J et al. [11] reported a mean operating time of 55 minutes.
The mean C-arm exposure in our study was 119.60 ± 8.16 seconds, which is comparable to other studies [12,13]. There was one incidence of postoperative superficial surgical site infection. Postoperatively, one patient had shortening of 2 cm. The mean postoperative day (POD) of mobilization in this study was 4.25 ± 2.05 days. The delay in mobilisation compared with other fractures is probably due to the elderly age of the patients.
Restoration of walking ability was earlier in patients treated with the proximal femoral nail than with the dynamic hip screw [11]. Studies have shown comparable outcomes after treatment with both the proximal femoral nail and the gamma nail [10]. Multiple factors, such as implant design, fracture stability, operative technique, surgeon skill, and the learning curve, have been implicated in achieving good results [14]. Optimal reduction of the fracture, confirmation of reduction in both anteroposterior and lateral views, and accurate positioning of the nail and screws should be obtained at all times to prevent screw cut-out. Reduction in distal nail diameter and pre-reaming of the femoral canal to a size bigger than the implant decrease the complication rate of femoral shaft fractures. A narrow femoral canal and abnormal curvature of the proximal femur are relative contraindications to intramedullary fixation with the proximal femoral nail. Patients with a small neck size (<24.5 mm) sometimes cannot accommodate two screws; such cases need to be preoperatively planned for other implants [5,15]. The duration of hospital stay varied from 2 weeks to 4 weeks in our study. The mean duration of hospital stay was 17.5 days (SD 4.8). Socioeconomic factors make it expensive for the disabled post-operative patient to visit the hospital multiple times for wound care, sometimes costing more than the surgery itself; hence, in such cases patients are discharged only after sutures are removed and they are mobilised adequately. In the study conducted by Schipper et al. [10], the mean duration of stay was 21.4 days. The mean duration of hospital stay was 12.96 days in the study conducted by Kumar R et al. [6]. The patients were followed up at 4 weeks, 2 months, 3 months, and 6 months post-operatively. In our study, none of the patients were lost to follow-up. The fractures united within 10 to 15 weeks in 15 patients, within 15 to 20 weeks in 2 patients, in less than 10 weeks in 2 patients, and in more than 20 weeks in one patient. In total, 65.2% of fractures united during 10 to 15 weeks, with a mean of 13.10 ± 2.92 weeks. At the end of 6 months, 11 patients walked without any support, 7 patients walked with the help of a cane, and 2 patients used a walker. In one patient there was proximal screw back-out at the 3-month follow-up. One patient had screw cut-out on follow-up. The other 17 patients had no complications and were performing their day-to-day activities. A significant difference (p = 0.004) was noted in functional outcome between age groups (Table 4). Pajarinen J et al. [11] treated 108 patients with intertrochanteric femoral fractures. In their results, they suggested that the use of the proximal femoral nail may allow faster post-operative restoration of walking ability when compared with the dynamic hip screw. Bhahat U et al. [8] in 2013 conducted a comparative study between proximal femoral nailing and the dynamic hip screw in the treatment of intertrochanteric fractures of the femur. They observed that even though the PFN involves more radiation exposure than the DHS, it has less bleeding, a shorter operative time, earlier ambulation, and a better Harris Hip Score in the early period. In the long term, both implants had similar functional outcomes. The strength of our study is that it systematically discusses a versatile implant that is slowly being replaced by newer and costlier implants. The limitations of our study are the small sample size and, being a hospital-based study, its susceptibility to selection bias.
Conclusion
Proximal femoral nailing is a good choice for fixation of unstable intertrochanteric fractures in adults, provided the right implant is chosen and the proper surgical technique is followed. The procedure offered an excellent, pain-free, mobile hip, with easy rehabilitation and a rapid return to functional level. Proximal femoral nailing reduced the complications of prolonged immobilization and prolonged rehabilitation. The procedure offers early mobilization, a rapid return to the pre-injury level, and improved quality of life, and it provides a long-term solution in elderly patients with unstable intertrochanteric fractures of the femur, with results comparable to those of extramedullary implants and newer cephalomedullary implants. Hence, the stainless steel PFN is a suitable implant for unstable intertrochanteric fractures, especially in price-sensitive patients in developing countries. There is a need for a cost analysis between the stainless steel PFN and other cephalomedullary fixation devices.
Changes in pathogen distribution in the blood culture of neonates before and after the COVID-19 pandemic, Henan, China
The changes in Streptococcus pneumoniae and Haemophilus influenzae infection in children before and after the Coronavirus disease 2019 (COVID-19) pandemic in Henan, China, recently caught our interest. However, no data were available on the changes in pathogen distribution in the blood culture of neonates before and after the COVID-19 pandemic.
The reports of Li et al. 1 and Zhou et al. 2 in this journal, which demonstrated the changes in Streptococcus pneumoniae and Haemophilus influenzae infection in children before and after the Coronavirus disease 2019 (COVID-19) pandemic in Henan, China, recently caught our interest. However, no data were available on the changes in pathogen distribution in the blood culture of neonates before and after the COVID-19 pandemic.
Bloodstream infection is a serious infectious disease. Owing to their immature immune systems, neonates are more prone to infection by pathogens, which can lead to bloodstream infection. 3 Neonatal bloodstream infection is a major disease that seriously threatens the life of newborns. The incidence rate of neonatal bloodstream infection is 4.5-9.7 per 1000. 4 Bloodstream infection is primarily caused by bacteria, fungi, and viruses. The distribution of bloodstream infection pathogens may vary across regions. In addition, in response to COVID-19, many countries have implemented strict interventions, such as social distancing, wearing masks, limiting crowd gatherings, and restricting outdoor activities. 5 COVID-19, as well as its prevention and control measures, has severely affected people's lifestyles and may also affect the epidemiology of pathogens. 6 Analyzing the pathogen distribution in bloodstream infections in neonates before and after the COVID-19 pandemic can provide a basis for hospital infection prevention and clinical management strategies.
In this study, we compared the number of positive blood cultures, the positive blood culture rate, and the constituent ratio of pathogens to explore the impact of the COVID-19 pandemic on the pathogen distribution in bloodstream infection in neonates. As shown in Fig. 1, we analyzed the number of blood culture samples sent, the number of positive blood cultures, and the positive infection rate before and after the COVID-19 pandemic. The results showed that both the number of blood culture samples and the number of positive blood cultures decreased in 2020 and then increased slightly in 2021 (after the COVID-19 pandemic) compared with 2018 and 2019 (before the COVID-19 pandemic). The positive rate in blood culture gradually decreased from 2018 to 2021.
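To make the year-over-year comparison of positive rates concrete, the sketch below computes per-year positive rates and a chi-square test across years. The counts used are placeholders (the actual numbers are reported in Fig. 1), and the original letter does not state that its comparison was performed this way.

import numpy as np
from scipy.stats import chi2_contingency

# Placeholder counts per year: [positive cultures, negative cultures].
years = [2018, 2019, 2020, 2021]
counts = np.array([[120, 1880],
                   [110, 1890],
                   [80, 1720],
                   [85, 1915]])

positive_rate = counts[:, 0] / counts.sum(axis=1)
for year, rate in zip(years, positive_rate):
    print(f"{year}: positive rate = {100 * rate:.1f}%")

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi-square across years: chi2 = {chi2:.2f}, p = {p:.3g}")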
Further, we analyzed the pathogen distribution in the blood cultures of neonates before and after the COVID-19 pandemic (Table 1). Our data showed that the pathogenic microorganisms present in newborn blood cultures were primarily Klebsiella pneumoniae, Escherichia coli, and coagulase-negative staphylococci both before and after the COVID-19 pandemic, together accounting for more than 60% of pathogenic microorganisms. Among the pathogens, the abundance of Klebsiella pneumoniae, Escherichia coli, and coagulase-negative staphylococci gradually decreased from 2018 to 2021, whereas the percentage of Klebsiella pneumoniae gradually decreased from 2018 to 2020 and increased in 2021. The percentage of Escherichia coli gradually increased from 2018 to 2020 and decreased in 2021. The percentage of coagulase-negative staphylococci gradually increased from 2018 to 2021. In addition, Saccharomycetes and Ochrobactrum anthropi were not detected in 2020 and 2021 (after the COVID-19 pandemic), whereas Pseudomonas aeruginosa was detected in both 2020 and 2021 but not in 2018 and 2019.
Our data showed that the COVID-19 pandemic decreased the pathogen detection rate and changed the pathogen distribution in the blood culture of neonates. With the gradual control of the COVID-19 pandemic, people's lives will return to normal, and the pathogen distribution of blood culture in neonates will also change. For example, the proportion of Klebsiella pneumoniae decreased during the period of strict control of COVID-19 (2020), whereas the proportion increased during the recovery period of the pandemic (2021). The changes in pathogen distribution in neonatal blood cultures before and after the COVID-19 pandemic require attention.
In conclusion, we found that the number and positive rate of pathogens in the blood of neonates decreased during COVID-19, and the distribution of pathogens also changed. Continuous monitoring of the changes in pathogen distribution in the blood cultures in neonates can be helpful for preventing neonatal infection by pathogens.
Declaration of Competing Interest
None.
Predictive Models for the Transition from Mild Neurocognitive Disorder to Major Neurocognitive Disorder: Insights from Clinical, Demographic, and Neuropsychological Data
Neurocognitive disorders (NCDs) are progressive conditions that severely impact cognitive function and daily living. Understanding the transition from mild to major NCD is crucial for personalized early intervention and effective management. Predictive models incorporating demographic variables, clinical data, and scores on neuropsychological and emotional tests can significantly enhance early detection and intervention strategies in primary healthcare settings. We aimed to develop and validate predictive models for the progression from mild NCD to major NCD using demographic, clinical, and neuropsychological data from 132 participants over a two-year period. Generalized Estimating Equations were employed for data analysis. Our final model achieved an accuracy of 83.7%. A higher body mass index and alcohol drinking increased the risk of progression from mild NCD to major NCD, while female sex, higher praxis abilities, and a higher score on the Geriatric Depression Scale reduced the risk. Here, we show that integrating multiple factors—ones that can be easily examined in clinical settings—into predictive models can improve early diagnosis of major NCD. This approach could facilitate timely interventions, potentially mitigating the progression of cognitive decline and improving patient outcomes in primary healthcare settings. Further research should focus on validating these models across diverse populations and exploring their implementation in various clinical contexts.
Introduction
Neurocognitive disorders (NCDs) encompass a wide range of conditions characterized by a decline in cognitive functioning, with Alzheimer's disease being the most well-known.According to the Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-5), mild neurocognitive disorder (mild NCD) is defined as a noticeable decline in cognitive functioning that does not significantly interfere with daily activities [1,2].This includes conditions like mild cognitive impairment (MCI), which serves as a precursor to major neurocognitive disorder (major NCD) [3][4][5].Major NCD is characterized by a significant decline in cognitive abilities that impairs daily life, affecting memory, language, and other cognitive functions [6].Major NCD, previously known as dementia, is defined by the Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-5), as a significant cognitive decline from a previous level of performance in one or more cognitive domains (complex attention, executive function, learning and memory, language, perceptual-motor function, and social cognition).This decline must be substantial enough to interfere with independence in daily activities [7].Major NCD can arise from various etiologies, including Alzheimer's disease, vascular disease, traumatic brain injury, and other conditions.Symptoms vary depending on the cause but generally include memory loss, impaired judgment, language difficulties, and changes in personality and behavior [8].The diagnostic process for major NCD involves a thorough clinical assessment, including medical history, cognitive testing, and possibly neuroimaging or laboratory tests to identify the underlying cause and rule out other conditions.Clinicians use standardized tools like the Mini-Mental State Examination (MMSE) or the Montreal Cognitive Assessment (MoCA) to evaluate cognitive decline [9].
Neurocognitive disorders represent a significant and growing public health concern as populations age.Alzheimer's disease, the most prevalent form of major NCD, affects millions worldwide and incurs substantial emotional and economic costs [10].These disorders not only affect the individuals diagnosed but also have profound impacts on families and caregivers, often necessitating long-term care and support.Understanding the distinctions between mild NCD and major NCD is crucial for early intervention and management [11].While mild NCD might not severely disrupt daily life, it signals a higher risk of progressing to more severe forms, highlighting the need for early detection and therapeutic strategies.
The intermediate phase between minor NCD and major NCD is characterized by mild cognitive impairments that, while noticeable, do not severely impact daily activities.Common symptoms include memory loss, difficulty with problem solving, challenges with planning and organizing, and language difficulties, as well as decreased attention span and concentration [12].Patients in this stage often retain independence but may need help with complex tasks and may experience frustration or anxiety due to their cognitive challenges, with changes sometimes first noticed by friends and family [1].Studies on minor NCD often include patients in this intermediate phase to track progression to major NCD, highlighting the importance of longitudinal research to monitor cognitive, structural, and biomarker changes over time [2].Progression from minor to major NCD varies widely, typically occurring within 3 to 5 years, and is influenced by factors such as age, genetics, and comorbid conditions [13].
Early detection and intervention in neurocognitive disorders are paramount for improving patient outcomes.Predictive models that can accurately identify individuals at risk for progressing from mild NCD to major NCD are crucial for implementing timely interventions that can slow cognitive decline [14].Recent advancements have demonstrated the transformative impact of artificial intelligence (AI)-based techniques in early detection and diagnosis, emphasizing the potential of these tools in clinical settings [15,16].AI approaches have been applied to detect early signs of cognitive impairment, allowing for more timely and precise diagnoses.This is essential as the aging population grows and the prevalence of dementia increases globally.
Timely intervention can significantly alter the trajectory of neurocognitive disorders.For instance, lifestyle modifications, cognitive training, and pharmacological treatments can be more effective if initiated during the early stages of cognitive decline.AI and machine learning models are particularly promising in this regard.By analyzing vast datasets, these technologies can identify subtle patterns and predictors of disease progression that might be missed by traditional diagnostic methods.Moreover, AI can enhance the accuracy and efficiency of neuropsychological assessments, reducing the burden on healthcare systems and providing more accessible diagnostic options for patients.
Several studies have focused on identifying predictive markers for the transition from mild NCD to major NCD.Cognitive performance, particularly episodic memory deficits, has consistently been highlighted as a robust predictor of progression.Delayed recall in episodic memory tests is one of the most significant indicators of future decline [17][18][19].
Biomarkers play a critical role in the early detection of neurocognitive disorders.For example, AI-driven approaches have shown potential in improving predictive accuracy by analyzing complex patterns of brain connectivity.Another study highlighted the importance of transdiagnostic biomarkers, which can provide insights across different neurocognitive and psychiatric disorders, thereby improving the overall understanding of disease mechanisms [23].Integration of multiomics profiles and multimodal electroencephalographic (EEG) data has contributed significantly to personalized diagnostic strategies, enhancing the precision of early detection methods [24].
The field of biomarker research is rapidly evolving, with significant advancements in both molecular and imaging techniques.CSF biomarkers, such as tau protein and amyloidbeta, have been extensively studied for their roles in the pathophysiology of Alzheimer's disease [21].Elevated levels of these proteins are associated with neuronal damage and plaque formation, key features of Alzheimer's pathology.Additionally, neuroimaging techniques, including magnetic resonance imaging (MRI) and positron emission tomography (PET) scans, provide critical insights into brain structure and function, enabling the identification of atrophy patterns and metabolic changes associated with cognitive decline [25].
Functional biomarkers, such as changes in brain connectivity observed through restingstate functional MRI, have also shown promise in early detection.These biomarkers can reveal disruptions in neural networks that precede clinical symptoms, offering a window for early intervention [26].Moreover, advancements in genomics and proteomics are paving the way for the discovery of novel biomarkers that could further refine diagnostic accuracy and prognostic assessments [27].
Genetic factors also play a crucial role in predicting cognitive decline.The presence of the apolipoprotein E (APOE) ε4 allele has been strongly associated with an increased risk of Alzheimer's disease and faster cognitive decline [18].Demographic factors, including age, education, and gender, further influence the risk of progression from mild NCD to major NCD.For instance, older age and lower educational attainment are associated with higher risks [21,22].The interplay between genetics and environmental factors is complex and multifaceted [28][29][30].While the APOE ε4 allele is the most well-established genetic risk factor for Alzheimer's disease, other genes are also being investigated for their roles in neurocognitive disorders [31].Genome-wide association studies (GWASs) have identified numerous genetic variants that contribute to disease susceptibility, highlighting the polygenic nature of these conditions [32].Understanding the genetic architecture of neurocognitive disorders can inform personalized medicine approaches, where interventions are tailored based on an individual's genetic profile [33].
Demographic factors also offer valuable insights into disease risk and progression.Studies have shown that individuals with higher educational attainment or greater cognitive reserve tend to exhibit a slower rate of cognitive decline [34].This suggests that lifelong cognitive engagement may confer protective effects against neurocognitive disorders.Additionally, sex differences in disease prevalence and progression rates have been observed, with women generally exhibiting a higher risk of developing Alzheimer's disease [35].Hormonal factors, lifestyle differences, and genetic variations may all contribute to these disparities [36].
Despite advancements in identifying individual risk factors, comprehensive models that integrate multiple data sources to predict progression to major NCD are still lacking.Most studies have examined cognitive markers, genetic factors, and biomarkers separately, but there is a need for a unified approach that combines these elements to enhance predictive capabilities.Furthermore, longitudinal studies are needed to better understand the trajectory of cognitive decline and biomarker changes in individuals with mild NCD [37].
Recent findings underscore the urgent need for innovative and cost-effective early-stage intervention strategies to address the growing global challenge of dementia [38].
The development of comprehensive predictive models involves integrating diverse datasets, including clinical, genetic, biomarker, and neuroimaging data [39].Machine learning and artificial intelligence are instrumental in this endeavor, offering powerful tools to analyze complex datasets and generate predictive models with high accuracy [40].These models can identify individuals at high risk for progression to major NCD, facilitating early intervention and personalized treatment plans.Longitudinal studies are particularly valuable in understanding the natural history of neurocognitive disorders.By following individuals over time, researchers can observe the progression of cognitive decline and identify early markers of disease.Such studies also allow for the assessment of intervention efficacy, providing critical insights into which treatments are most effective at different stages of the disease [41].
In conclusion, the integration of AI-based techniques, biomarkers, genetic factors, and comprehensive predictive models holds significant promise for the early detection and intervention of neurocognitive disorders.As the prevalence of dementia continues to rise globally, these advancements are crucial for enhancing diagnostic accuracy, understanding disease mechanisms, and developing personalized treatment strategies.Ongoing research and longitudinal studies are essential to refine these approaches and ensure their effective implementation in clinical settings.By leveraging the power of technology and multidisciplinary research, we can make significant strides in combating the challenges posed by neurocognitive disorders and improving the quality of life of affected individuals.
The current study aims to develop and validate predictive tools that enhance the screening and risk assessment of major NCD in primary healthcare settings by integrating clinical, demographic, and neuropsychological data.
Materials and Methods
A brief description of the study's methodological approach towards the inclusion of predictive factors and the implementation of specific models for data analysis follows.The methodology of this study was carefully designed to ensure robust and reliable data collection, analysis, and interpretation.The study aimed to develop and validate predictive models for the transition from minor neurocognitive disorder (minor NCD) to major neurocognitive disorder (major NCD).We employed a comprehensive approach that integrates demographic, clinical, and neuropsychological assessments to identify significant predictors of cognitive decline.The study design and methodology are grounded in established research practices and supported by the relevant literature to enhance the validity of our findings.Prior studies have demonstrated the importance of longitudinal data in understanding the progression of neurocognitive disorders [12,13].Additionally, the use of neuropsychological tests, such as the CAMCOG and the GDS, has been validated in various settings to predict cognitive decline [42,43].Our approach ensures a comprehensive evaluation of factors contributing to the transition from minor to major NCD, providing a robust framework for early intervention strategies.
Furthermore, the international literature emphasizes the importance of focusing on the structure of the data and correlations in Generalized Estimating Equations (GEE) analysis rather than a fixed minimum number of observations per variable [44][45][46].GEE analysis can be applied even to small sample sizes [47].However, it is worth noting that, in our research, we adhered to the general assumption for regression models that requires a minimum threshold of 10 observations per independent variable [48][49][50].In our case, there were 394 observations/24 variables = 16.4 observations per variable.
Subjects
Data regarding 132 participants in an ongoing registry of the Neurology Department of the University Hospital of Alexandroupolis with minor NCD and with available diagnostic follow-up assessments for at least 2 years were included in the study. The final sample consisted of participants who visited the outpatient dementia clinic and underwent the examination as part of their routine neuropsychological assessment (Figure 1). All participants signed an informed consent form prior to their participation. For patients with dementia, approval was also required from their caregiver and/or a legal representative. Approval for the study was granted by the Ethics Committee of the University Hospital of Alexandroupolis (ΔΣ1/Θ68/06-04-2020). The data were analyzed anonymously.
Inclusion/Exclusion Criteria
The interview collected biographical information and medical data, including information regarding any medical diagnosis; history of cardiovascular, metabolic, and neurological syndromes; and history of affective diseases.Individuals with NCDs underwent a neurological examination, neuropsychological assessment, neuroimaging, and specific biochemical and hematological testing.
A total of 132 individuals at baseline met the diagnostic criteria for minor NCD as defined in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-V) [51].These criteria include the following: (a) self-reported or observed decline in cognitive functioning by the patient, family member, or clinician; (b) cognitive impairment for the individual's age demonstrated by formal neuropsychological testing; (c) evidence of gradual cognitive decline in objective tasks beyond normal aging but not meeting criteria for dementia; (d) preserved general cognitive and daily function; and (e) no prior diagnosis of dementia or other conditions (e.g., depression, delirium, intoxication, or psychosis) that could explain the impairment.Additional inclusion criteria were as follows: age over 40 years; mild cognitive decline based on Mini-Mental State Examination (MMSE) [52,53] and Montreal Cognitive Assessment (MoCA) score [54,55], defined as 1 to 1.5 standard deviations (SDs) below the mean for age-and education-adjusted norms based on normative data; absence of other neurological diseases; not currently taking cholinesterase inhibitors, antipsychotics, and/or anticholinergic drugs.The inclusion criterion of age over 40 years in studies of minor NCD is significant for capturing early markers of cognitive decline and understanding the progression of the disorder.While cognitive decline is more common in older adults, including patients starting at age 40 helps identify early symptoms of minor NCD, as this age group may begin to show subtle signs of cognitive impairment [12].This inclusion is associated with several critical parameters: genetic factors like the presence of the APOE ε4 allele may start influencing cognitive decline in individuals in their 40s [2]; lifestyle factors, such as diet, exercise, and smoking, as well as comorbidities like hypertension, diabetes, and cardiovascular diseases, begin to impact cognitive health in midlife, increasing the risk of minor NCD [13]; and neuropsychological changes in cognitive performance and brain health can show measurable changes in this age group, making it a critical period for early detection and intervention [1].Therefore, including patients over 40 years old provides valuable insights into the early onset and progression of cognitive decline.
The exclusion criteria were as follows: secondary causes of cognitive deficits confirmed with laboratory tests, including vitamin B12/folate determination and thyroid functioning tests; structural lesions on conventional brain MRI, such as territorial infarction, intracranial hemorrhage, brain tumor, hydrocephalus, and traumatic brain injury.The subjects were followed annually with two repeated clinical visits after baseline.
Conversion to Major NCD
Our primary study outcome was progression to major NCD. In order to avoid circularity, the major NCD diagnostic criteria did not include any of our predictive neuropsychological testing markers; we used criteria based on modes of assessment that were independent of our neuropsychological decline measures. Participants were diagnosed with major NCD [56] based on the DSM-V, a decline in activities of daily living, and total MMSE and MoCA scores falling 2 or more standard deviations (SDs) below the mean based on available normative data.
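As a rough illustration of how these cut-offs translate into practice, the sketch below converts raw MMSE and MoCA scores into z-scores against age- and education-adjusted norms and applies the study's 2 SD outcome threshold. The normative means and SDs shown are hypothetical, and the clinical DSM-5 criteria still apply on top of such screening.

def cognitive_z(score, norm_mean, norm_sd):
    """z-score of a cognitive test result against age/education-adjusted norms."""
    return (score - norm_mean) / norm_sd

def flag_conversion(mmse_z, moca_z, adl_decline):
    """Illustrative coding of the outcome threshold: total scores 2 or more SDs
    below the normative mean plus a decline in activities of daily living."""
    worst = min(mmse_z, moca_z)
    if worst <= -2.0 and adl_decline:
        return "meets score criterion for major NCD (full clinical assessment required)"
    return "does not meet score criterion"

# Hypothetical normative values for one age/education stratum:
mmse_z = cognitive_z(score=21, norm_mean=27.5, norm_sd=2.3)
moca_z = cognitive_z(score=18, norm_mean=24.8, norm_sd=2.9)
print(flag_conversion(mmse_z, moca_z, adl_decline=True))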
Neuropsychological Predictive Factors
All participants underwent a battery of neuropsychological tests comprising the Greek version of the Cambridge Cognitive Examination Scale (CAMCOG) as part of the Cambridge Mental Disorders of the Elderly (CAMDEX) [42,57].The cognitive domains assessed through the subtasks were abilities of praxis, orientation, understanding, language, memory, and perception.Confrontation naming was evaluated by the Boston Naming Test (BNT) [58] and executive functioning by the Functional Cognitive Assessment Scale (FUCAS) [59].The Functional Rating Scale of Symptoms of Dementia (FRSSD) was administered to assess the patient's functionality in daily activities based on the caregiver's perspective.Emotional status was evaluated by the Geriatric Depression Scale (GDS) for the detection of depressive symptoms [43,60].The Hamilton Depression Scale (HAM-D) [61] was administered to assess patients' emotional states through 17 questions from the caregiver's perspective.Patients' neuropsychiatric disturbances were assessed through the Neuropsychiatric Inventory (NPI) administered to caregivers [62].The neuropsychological examinations took place in a quiet room, and every participant was tested individually by the same neuropsychologist of the Neurology Department.
Demographic and Clinical Predictive Factors
Age (years), sex (male/female), and education (years) were the demographic factors studied, while the clinical variables included duration of minor NCD (years), body mass index (BMI) ≥ 25 [63] (yes/no), smoking (yes/no), alcohol consumption (yes/no), history of cardiovascular disease (yes/no), presence of white matter lesions (yes/no), and cerebrovascular burden (at least one of the following risk factors: atrial fibrillation, cerebrovascular diseases, hypertension, diabetes, and hypercholesterolemia).
Statistical Analysis
Data were analyzed using a Generalized Estimating Equations (GEE) framework to account for the longitudinal nature of the data with repeated measures over time (baseline, first year, and second year) [64,65]. The dependent variable was diagnosis, coded as 0 for minor NCD and 1 for major NCD. The independent variables included in the model are shown in Tables 1 and 2. The model was specified with a binomial distribution and a logit link function. The working correlation structure was set as unstructured, allowing for a general form of the covariance matrix for the repeated measures [66,67]. Employing an unstructured correlation matrix is beneficial when the intervals between measurements are uniform across subjects [44], which aligns with the methodology of our study. Wald χ² tests were used to determine the significance of the predictors, with a 95% confidence interval. All analyses and graphical representations were performed using SPSS (version 25.00) software, and the significance level was set at p < 0.05.
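For readers who want to reproduce this type of model outside SPSS, the sketch below fits an analogous binomial-logit GEE in Python with statsmodels. The file name and column names are assumptions rather than the registry's actual variable names, and an exchangeable working correlation is used here as a simpler stand-in for the unstructured structure specified in the study.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Long-format data: one row per participant per visit (baseline, year 1, year 2).
# Columns (assumed names): id, diagnosis (0 = minor NCD, 1 = major NCD), sex,
# bmi_ge_25, alcohol, camcog_praxis, gds, ...
df = pd.read_csv("ncd_longitudinal.csv")

# The study fit the model in SPSS with an unstructured working correlation;
# here an exchangeable structure is used as a simpler stand-in.
model = smf.gee(
    "diagnosis ~ sex + bmi_ge_25 + alcohol + camcog_praxis + gds",
    groups="id",
    data=df,
    family=sm.families.Binomial(),          # binomial distribution, logit link
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())                     # Wald tests and 95% CIs per predictor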
Results
The Quasi-Likelihood under Independence Model Criterion (QIC) was 428.865, and the Corrected Quasi-Likelihood under Independence Model Criterion (QICC) was 454.655, indicating the model's adequacy in fitting the data in comparison to the alternative models that were tested. The model performed well, with an accuracy of 83.7%. Table 1 shows the descriptive statistics for the categorical (nominal/ordinal) and continuous (scale) variables of the model.
The interaction effects of time with sex, BMI ≥ 25, and alcohol consumption were visually explored through estimated marginal means plots. The charts in Figure 2 show the estimated probabilities of major NCD diagnosis for various subgroups based on sex, BMI ≥ 25, and alcohol consumption. The diagrams present the mean values and the 95% confidence intervals for each subgroup at three time points. The charts reveal the following: (A) males have a higher likelihood of major NCD diagnosis compared to females at all time points, with the differences being more pronounced in the second year; (B) individuals with excessive BMI (BMI ≥ 25) appear to have a higher likelihood of major NCD diagnosis in the second year compared to those without excessive weight; (C) individuals who consume alcohol have a greater likelihood of major NCD diagnosis in the second year compared to those who do not consume alcohol.

Figure 3 includes two charts with the 95% confidence intervals for the variables CAMCOG-praxis and GDS among patients with minor NCD and patients with major NCD over the time periods. In both the first and second year, individuals with major NCD showed lower values in CAMCOG-praxis and GDS. Indeed, even visually, it appears that for CAMCOG-praxis the differences are statistically significant.

In summary, the results from the GEE show the influence of categorical variables like sex, BMI, and alcohol consumption on the likelihood of being diagnosed with major NCD. For instance, females showed a reduced likelihood of a major NCD diagnosis compared to males, those with a BMI over 25 were more likely to receive a major NCD diagnosis, and alcohol consumers also showed a higher probability of a major NCD diagnosis. Additionally, higher scores on the CAMCOG-praxis and higher scores on the GDS were associated with a decreased likelihood of a major NCD diagnosis.
Discussion
This study aimed to address the critical need for early detection and effective management of neurocognitive disorders by developing a validated predictive model for the transition from minor neurocognitive disorder (minor NCD) to major neurocognitive disorder (major NCD). By focusing on demographic, clinical, and neuropsychological factors, we aim to enhance early intervention strategies within primary healthcare settings, ultimately improving patient outcomes and quality of life. The ultimate goal of this research is to facilitate early identification of individuals at high risk for major NCD, enabling timely interventions that can delay or prevent severe cognitive decline. The primary challenge lies in integrating diverse data sources (demographic information, clinical history, and neuropsychological test results) into a cohesive and accurate predictive model. Overcoming this challenge requires extensive knowledge in neuropsychology, geriatrics, and advanced statistical modeling techniques.
Our study highlights a crucial predictive model for identifying individuals at risk of major NCD among those with minor NCD. Baseline clinical profiles identified increased BMI (BMI ≥ 25) and alcohol consumption as significant factors associated with heightened major NCD risk. Conversely, certain predictors appeared to lower the likelihood of major NCD, including being female, exhibiting higher scores on the GDS, and achieving higher scores in praxis assessments. Of note, to achieve the study's goals, a deep understanding of neurocognitive disorders and the factors influencing their progression is essential. This includes expertise in neuropsychological assessment, familiarity with demographic and lifestyle risk factors, and proficiency in statistical methods for analyzing longitudinal data.
Alcohol consumption, defined as consuming more than one glass of alcohol per day based on self-reported questionnaires, emerged as a predictive factor for progression to major NCD. Considering previous research, Kuang et al. [68] highlighted alcohol consumption as a significant predictive factor in various models for dementia, emphasizing its importance in assessing cognitive health alongside other variables, such as age, activities of daily living questionnaire score, and smoking status. Conversely, Koch et al. [69] emphasized caution in alcohol consumption among individuals with minor NCD, consistent with Xu et al. [70] who identified a J-shaped relationship between alcohol intake and cognitive decline, suggesting that light to moderate drinking might reduce dementia risk in this vulnerable population. While moderate alcohol intake has been linked to potential cognitive benefits in certain contexts, such as reduced progression to major NCD in individuals with minor NCD [71], caution is warranted. Xu et al. [72] demonstrated a nonlinear association between alcohol consumption and dementia risk, suggesting that excessive drinking may actually elevate the risk.
The present study identified BMI (BMI ≥ 25) as a predictive factor for conversion to major NCD. This finding aligns with previous research exploring the complex relationship between BMI and cognitive decline. Hessler et al. [73] investigated cardiovascular health metrics and dementia risk in older adults. While their study did not find a direct association between BMI and dementia risk, it emphasized the importance of other cardiovascular risk factors, such as smoking, physical activity levels, and glucose levels, which could indirectly impact cognitive health. A more recent study [74] examined the effects of the MIND diet on cognition in older adults with a higher BMI and suggested that dietary interventions alone may not significantly alter cognitive outcomes, highlighting the complexity of factors influencing dementia risk beyond BMI. Mirza et al. [75] explored trajectories of depressive symptoms and their association with dementia risk. Although not directly related to BMI, their findings underscored the importance of mental health factors in cognitive decline, which could interact with BMI-related mechanisms. Another study [76] provided insights into the obesity paradox, where late-life lower BMI was associated with increased cortical amyloid burden and dementia risk, especially in APOE4 carriers.
In our comprehensive investigation into predictive factors associated with the conversion to major NCD, a notable finding emerged regarding the impact of female sex on dementia risk. Our analysis indicated that female sex is linked to a lower risk for major NCD progression, a conclusion that draws upon a body of research exploring sex-specific influences on cognitive health. Studies [77,78] have consistently observed a higher prevalence and elevated risk of AD among women compared to men. These findings suggest potential sex-specific vulnerabilities to certain forms of dementia, underscoring the need to explore biological and environmental factors that may contribute to these disparities. Conversely, other researchers [68,79] have highlighted educational disparities and hormonal fluctuations, particularly estrogen changes, as potential contributors to variations in dementia risk between the sexes. Moreover, recent studies [80,81] have reported differential associations of risk factors with dementia among men and women, indicating that certain risk factors may have distinct impacts based on sex.
Our research finding that increased scores on the GDS predict a slower progression to major NCD adds a significant dimension to the discourse on the potential of depressive symptoms as early indicators of cognitive impairment. This result is particularly insightful in the context of our study methodology, where participants identified with elevated GDS scores at baseline were subsequently enrolled in a comprehensive monitoring program. This intervention included both pharmacological treatment and counseling services for the participants and their family systems, aimed specifically at mitigating the impact of depressive factors on cognitive health. Supporting our approach, the study by Mirza et al. [75] aligns well with our findings, suggesting that increasing trajectories of depressive symptoms over time are associated with a higher risk of dementia, thereby reinforcing the notion that actively managing these symptoms could significantly alter the progression trajectory of cognitive decline. In contrast, another study [82] found that mild depressive symptoms did not predict AD, highlighting the importance of the severity and management of symptoms in their role as potential predictors. Moreover, while Nation et al. [83] emphasize the utility of neuropsychological markers in predicting dementia, the integration of therapeutic interventions in response to depressive symptomatology in our study provides a model for combining psychological and neuropsychological assessments to enhance predictive accuracy.
Our investigation aligns with previous studies, showing that improved praxis abilities can have a protective effect against cognitive decline. These abilities are integral to our daily cognitive functions and may play a role in mitigating the symptoms or delaying the progression of neurocognitive disorders. One study highlights the positive impact of literacy and education on visuo-constructional abilities, suggesting that educational activities enhancing these abilities could support cognitive health in the elderly [84]. This is particularly evident in individuals who engage in complex cognitive activities, showing better performance in cognitive tests compared to those with lower engagement levels. However, the relationship between praxis abilities and cognitive health is not straightforward. Martins-Rodrigues et al. found that certain visuo-constructional tasks did not effectively differentiate between individuals with minor NCD and healthy controls, suggesting that the protective effects of enhanced praxis abilities may be limited to specific cognitive functions or early stages of cognitive decline [85].
In summary, our results highlight the importance of specific demographic and lifestyle factors in predicting the transition from minor NCD to major NCD. Increased BMI and alcohol consumption were associated with a higher risk, while female sex, higher GDS scores, and better praxis abilities were linked to a lower likelihood of progression. These findings underscore the need for comprehensive risk assessments in primary care settings to identify and support at-risk individuals effectively.
This research is significant as it addresses a growing public health concern: the rising prevalence of neurocognitive disorders among aging populations. Early detection of individuals at high risk for major NCD can lead to timely interventions, potentially delaying the onset of severe cognitive decline and reducing the overall burden on healthcare systems.
Strengths and Limitations of the Study
One of the study's primary strengths is the integration of multiple types of predictive data, including clinical risk factors and neuropsychological tests, which makes it a useful tool for general practitioners in primary healthcare settings. Additionally, the study focused on developing and validating predictive tools that can be effectively implemented within community-based healthcare frameworks. These tools are designed to facilitate early intervention strategies and improve patient outcomes, making them particularly valuable in remote areas where they can aid general practitioners in referring patients to specialized centers. Moreover, the longitudinal study design, which included follow-ups, allowed for a more accurate assessment of disease progression from minor NCD to major NCD.
The study has certain limitations. The sample size of 132 participants from a single institution may limit the generalizability of the findings to broader populations or different geographic locations, potentially affecting its utility in diverse or remote settings. There is also a need for further validation studies across diverse populations to ensure the model's generalizability and clinical applicability. Additionally, the study acknowledges gaps in the literature where comprehensive models integrating various data sources (cognitive, genetic, and biomarker data) are still lacking.
Potential Clinical Implications and Future Research
The clinical applications of our findings are substantial. The predictive model can aid in the early identification of patients at risk of major NCD, facilitating timely referrals to specialized care and the implementation of personalized intervention strategies. The findings from this study pave the way for several avenues of future research. First, further refinement of predictive models is necessary to enhance their accuracy and reliability. This can be achieved by incorporating additional biomarkers and genetic information alongside the current demographic, clinical, and neuropsychological factors. Advanced machine learning techniques could also be employed to analyze these complex datasets more effectively. Secondly, the impact of targeted interventions based on the predictive model should be explored. Moreover, it is crucial to validate the predictive model across diverse populations. Finally, the integration of the predictive model into primary healthcare systems requires careful consideration of its implementation and usability by general practitioners. Research should focus on developing user-friendly tools and training programs that facilitate the adoption of these predictive models in everyday clinical practice.
Conclusions
The study successfully developed and validated predictive tools that integrate clinical risk factors and neuropsychological tests tailored for implementation in primary healthcare settings. These tools enable effective screening and risk assessment for neurocognitive disorders (NCDs), thereby facilitating early intervention strategies which can significantly improve patient outcomes. The findings underscore the influence of demographic factors, such as sex, with females exhibiting a lower risk of major NCD progression compared to males, suggesting potential sex-specific interventions. Additionally, lifestyle factors like BMI and alcohol consumption were significant in major NCD progression, presenting opportunities for targeted interventions to delay or prevent severe cognitive impairments. Moreover, certain neuropsychological predictors appeared to lower the likelihood of major NCD, including exhibiting higher scores on the GDS and achieving higher scores in praxis assessments. The proposed comprehensive and validated model can be adaptable to various healthcare settings and is poised to significantly influence clinical practice, especially in primary care environments, by providing tools that support early diagnosis and tailored intervention strategies.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of the University Hospital of Alexandroupolis, Greece (∆Σ1/Θ68/06-April 2020).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Figure 2. Estimated probabilities of major NCD diagnosis over time with 95% confidence intervals of means by sex, body mass, and alcohol consumption.
Figure 3. Changes in mean values of CAMCOG-praxis (A) and GDS (B) with 95% confidence intervals by diagnosis and time.
Table 1. Descriptive statistics for baseline demographic, clinical, and cognitive data.
Table 2. Parameter estimates for predictors of major NCD diagnosis.
Effects of Grassland Management on Overwintering Bird Communities
Birds that depend on grassland and successional-scrub vegetation communities are experiencing a greater decline than any other avian assemblage in North America. Habitat loss and degradation on breeding and wintering grounds are among the leading causes of these declines. We used public and private lands in northern Virginia, USA, to explore benefits of grassland management and associated field structure on supporting overwintering bird species from 2013 to 2016. Specifically, we used non-metric multidimensional scaling and multispecies occupancy models to compare species richness and habitat associations of grassland-obligate and successional-scrub species during winter in fields comprised of native warm-season grasses (WSG) or non-native cool-season grasses (CSG) that were managed at different times of the year. Results demonstrated positive correlations of grassland-obligate species with decreased vegetation structure and a higher percentage of grass cover, whereas successional-scrub species positively correlated with increased vegetation structure and height and increased percentages of woody stems, forb cover, and bare ground. Fields of WSG supported higher estimated total and target species richness compared to fields of CSG. Estimated species richness was also influenced by management timing, with fields managed during the previous winter or left unmanaged exhibiting higher estimated richness than fields managed in summer or fall. Warm-season grass fields managed in the previous winter or left unmanaged had higher estimated species richness than any other treatment group. This study identifies important winter habitat associations (e.g., vegetation height and field openness) with species abundance and richness and can be used to make inferences about optimal management practices for overwintering avian species in eastern grasslands of North America. © 2019 The Authors. Journal of Wildlife Management Published by Wiley Periodicals, Inc. on behalf of The Wildlife Society.
The effects of land management on grassland and early successional bird communities have been a topic of attention in recent decades because these species have declined more steeply than any other guild of birds in North America (Samson 1994, Askins et al. 2007). Land management activities for grasslands such as burning (Churchwell et al. 2008), mowing (Bollinger et al. 1990, Blank et al. 2011), use of agricultural chemicals (Martin et al. 2000, Bartuszevige et al. 2002, Newton 2004, Mineau et al. 2005), and conservation buffers (Burger et al. 2006, Berges et al. 2010) have negative and positive effects on breeding populations of grassland and successional-scrub birds. For example, earlier and more frequent hay harvests result in increased nest failures for Savannah sparrows (Passerculus sandwichensis), bobolinks (Dolichonyx oryzivorus), and other grassland-dependent species (Perlut et al. 2006, 2011). In contrast, establishing conservation buffers around the edges of managed fields increases breeding bird abundance, species richness, and diversity (Berges et al. 2010) and improves nest success (Adams et al. 2013). Although knowledge has been gained from decades of research on breeding habitats, there is limited knowledge of the habitat needs of grassland and successional-scrub birds outside of the breeding season. For long-distance migrant species in general, the loss and degradation of winter habitat in Central and South America has been hypothesized as a major contributing factor in bird declines (Hostetler et al. 2015, Marra et al. 2015a). The majority of North American grassland and successional-scrub birds, however, are residents or short-distance migrants whose winter distributions are restricted to North America (Igl and Ballard 1999). Therefore, there is a need to understand non-breeding season requirements, especially for imperiled populations.
Events occurring on wintering sites can affect the survival and reproduction of several long-distance migrants, influencing population dynamics in subsequent breeding seasons (Marra et al. 1998, 2015b; Studds et al. 2008; Costantini et al. 2010; Harrison et al. 2011). For example, winter and staging habitats containing abundant food sources and cover result in earlier departure dates and improved survival during migration in long-distance migratory species such as American redstarts (Setophaga ruticilla; Marra et al. 1998, 2015b; Studds et al. 2008) and snow geese (Chen caerulescens atlantica; Bêty et al. 2003). In short-distance migrants and resident birds, increased food availability in winter can increase survival (Jansson et al. 1981), advance breeding dates (Salton et al. 2015) and laying dates, and increase fledging success (Robb et al. 2008, Costantini et al. 2010). Of 56 species that breed in grassland and successional-scrub vegetation communities in eastern North America (Sauer et al. 2011), nearly half also winter in the United States, including eastern grasslands. Many current land management recommendations for eastern grassland and successional-scrub birds, however, only pertain to breeding habitats, leaving a deficit of information available on best management practices for lands with overwintering species.
Vegetation structure and composition are important environmental measures for predicting bird species richness and abundance, but optimal measures vary considerably between species groups (MacArthur and MacArthur 1961, Tews et al. 2004). For breeding grassland and successional-scrub birds, the structure and composition of vegetation can have a significant influence on bird communities; increased structural heterogeneity is correlated with increased bird community diversity and stability (Fuhlendorf and Engle 2001, Fuhlendorf et al. 2006, Hovick et al. 2015). In winter, heterogeneous vegetation structure can provide thermal protection (Ginter and Desmond 2005), improve foraging opportunities (Bechtoldt and Stouffer 2005, Ginter and Desmond 2005), and decrease predation risk (Watts 1996). Grasslands in the eastern United States, however, are often managed to leave minimal structure during winter months.
For example, hay fields in eastern grasslands are harvested as late as September (Plantureux et al. 2005) and pastures are stockpiled with cattle for winter grazing (Poore et al. 2000). This leads to reduced seed resources (Maron and Jefferies 2001) and limited forage and shelter opportunities for birds during winter. Thus, the timing of grassland management could affect winter habitat and associated bird survival.
Although the majority of hay and pasture lands in the eastern United States are comprised of non-native cool-season grasses (CSG), there are also a growing number of fields being restored to native warm-season grasses (WSG), often through state conservation prescriptions that recommend diverse seed mixes for optimizing wildlife (Moorman et al. 2017). When augmented with multiple forb and legume species, warm-season grass fields increase the structural heterogeneity of fields during the growing season and are associated with higher diversity in mammals (Mengak 2004), arthropods (McIntyre and Thompson 2003), pollinators (Myers et al. 2012), and birds (Flanders et al. 2006, Harper et al. 2015). Best management practices for WSG in the eastern United States are designed to increase structural heterogeneity and minimize invasions by non-native species (Washburn et al. 2000), which also benefits breeding grassland and successional-scrub bird populations (Flanders et al. 2006). There is limited research on habitat use by winter bird communities in WSG, however, with recent work (<15 studies) focused in ecoregions of the Midwest, Great Plains, and southern United States (Conover et al. 2007, Plush et al. 2013, Monroe and O'Connell 2014, Hovick et al. 2015, Saalfeld et al. 2016). Nevertheless, non-native vegetation negatively influences the density of several grassland-obligate species overwintering in the Texas, USA, coastal plains (Saalfeld et al. 2016) and overwintering bird diversity in the Flint Hills of Kansas and Oklahoma, USA, is positively associated with increased vegetation height (Monroe and O'Connell 2014).
Currently, there are no studies that address habitat use by winter bird communities in grasslands in the eastern United States. These grasslands, which include pastures, hayfields, abandoned agricultural fields, and successional scrub, comprise >20% of the landscape from Maine to Florida, USA (Sleeter et al. 2013). With variation in species assemblages, ecoregion attributes, land use, and resulting vegetation structure between eastern grasslands and other North American grasslands (Omernik 1987), it is important to understand differing responses in bird communities to optimize conservation opportunities specific to eastern grasslands.
Our objective was to understand how the winter bird community responds to grassland management and associated vegetation structure in Virginia, USA, as a proxy for eastern grasslands of the mid-Atlantic. We hypothesized that vegetation structure, which varies by field type (CSG or WSG) and management regime, would strongly influence the avian community. We expected higher species richness and abundance in fields associated with greater structural heterogeneity and that heterogeneity would be lowest in recently managed fields. We also hypothesized that fields comprised of non-native vegetation would exhibit lower avian richness and abundance because of increased homogeneity in the vegetation structure.
STUDY AREA
We conducted this study for 3 winters from 2013 to 2016 on 25 properties (property size range = 14-2,000 ha) across 11 counties in Virginia that were either in public (n = 4) or private (n = 21) ownership (Fig. 1). We recruited and surveyed all field sites through Virginia Working Landscapes, a research-based conservation initiative convened by the Smithsonian Conservation Biology Institute (SCBI) in Front Royal, Virginia (www.VAWorkingLandscapes.org; accessed 7 Oct 2018). This program works collectively with landowners, scientists and private citizens to study effects of land use and management on native biodiversity, including avian and vegetation communities, and has access to >120 properties for conducting research. Field sites were opportunistically selected based on landowner permissions, field size (>8 ha), and management (i.e., not managed during the survey season). The 11-county region used for this study was characterized by rolling hills (elevation range = 0-1,359 m) over igneous and metamorphic bedrock with stretches of karst topography throughout the western portion (Hyland 2005). The center of the study region was intersected by Shenandoah National Park along the Blue Ridge Mountains. Dominant mammal fauna in the region included white-tailed deer (Odocoileus virginianus), coyote (Canis latrans), American black bear (Ursus americanus), red fox (Vulpes vulpes), Virginia opossum (Didelphis virginiana), and raccoon (Procyon lotor; Handley 1991). The land cover was dominated by eastern temperate deciduous forest with grasslands comprising approximately 30% of the study region (National Land Cover Database 2011). Most grasslands in the region were managed for grazing livestock or growing hay and were comprised of non-native CSG such as tall fescue (Schedonorus arundinaceus), Kentucky bluegrass (Poa pratensis), and orchard grass (Dactylis glomerata). These fields were generally managed homogenously with grass removal (through haying or grazing) occurring frequently throughout the year. Fields converted to WSG contained a mix of native grasses. Fields comprised of WSG in this region were generally managed once every 1-3 years by burning or bush-hogging. This region experienced 4 distinct seasons with hot, humid summers (Jun-Aug) and moderately cold winters (Dec-Feb). Average temperature for the study months (Dec-Feb) and area ranged between −11°C and 16°C (x̄ = 1.5°C) and the average snowfall was 14 cm (National Oceanic and Atmospheric Administration 2017).
METHODS
Fields (n = 41) were ≥8 contiguous ha of grassland and included varying compositions of forbs (0-100%, x̄ = 12.75%) and woody vegetation (0-62%, x̄ = 8.5%) but were divided into WSG (n = 20) or CSG (n = 21). Fields were also categorized by management timing: fall (Sep-Nov), summer (May-Aug), and previous winter-unmanaged (Jan-Apr). Nine fields were managed differently in subsequent years so were surveyed multiple years as independent sites (n = 50). No fields were managed during survey months. We combined fields managed in the previous winter with unmanaged fields because they had ≥7 months of growth prior to being surveyed. Management included burning (annual; 10 fields), mowing (≥2 times annually; 11 fields), continuous grazing (5 fields), or bush-hogging (annual; 24 fields). In all cases, management resulted in the removal of field vegetation. Therefore, for the purpose of this study we combined all management activities and focused on vegetation attributes relative to time since last disturbance. In each field, we established 3 200-m-long transects ≥100 m from field edges and 200 m apart. If a property contained >1 survey field, we separated adjacent survey clusters by >400 m to reduce the probability of double-counting birds between survey fields (Davis et al. 2013).
Bird and Vegetation Surveys
We visited fields 3 times during the survey period (once per month in Dec, Jan, and Feb). A single observer used variable width transect surveys and distance sampling techniques to sample bird abundance (Buckland et al. 2001, Diefenbach et al. 2003) between 0900 and 1300 (EST) on days with no precipitation and wind speeds <20 km/hour (Gabrey et al. 1999). In contrast to the breeding season, birds can be surveyed throughout the day during the non-breeding season (Fletcher et al. 2000). The observer traveled southwest from the northeast to avoid sun glare at a rate of approximately 40 m/minute and recorded the estimated perpendicular distance of detected birds from the centerline to the nearest 5 m over 5 minutes. The observer recorded all birds regardless of detection method (e.g., flushing from ground, perched in vegetation, vocalizing). We also recorded temperature, date, time, wind speed, snow cover, and cloud cover for each site visit. Bird survey methods were reviewed by Smithsonian's Institutional Animal Care and Use Committee and met all requirements of the institution. We measured vegetation and structural heterogeneity of each field along each line transect once a year between January and February at a time of no snow cover. We surveyed 6 1-m 2 plots at 40-m intervals along each transect. A single observer visually estimated and recorded percent ground cover of grasses, forbs, woody stems, leaf litter, and bare ground for each plot (Daubenmire 1968). We estimated habitat openness along multiple dimensions in each plot using cone of vulnerability (COV) and vertical visual obstruction. The cone of vulnerability is a 3-dimensional view of visual obstruction and is used as a measure of habitat structure for ground-dwelling species (i.e., northern bobwhites [Colinus virginianus]; Kopp et al. 1998). Briefly, we used a 1-m polyvinyl chloride pole placed at the center of each plot to measure angles in 8 directions (N, NE, E, SE, S, SW, W, NW) from 10 cm above the ground to the top of the nearest obstructing vegetation. We used the average of the 8 measurements to estimate the angle of obstruction for each plot, and the volume of unoccupied space above each point. A higher value indicates less structural cover and therefore increased vulnerability to aerial predators. For vertical visual obstruction, we used a modification of the Robel method (Robel et al. 1970) using a 1-m polyvinyl chloride pole divided into 10-cm segments. We recorded Robel measurements in 2 opposite corners of each plot, 2 m from the pole, resulting in 12 measurements/transect. Briefly, we counted the number of visible 10-cm segments, leaving those segments fully obstructed by vegetation to account for height of vertical obstruction to the nearest 5 cm. We also used a segmented polyvinyl chloride pole to measure the height of the tallest plant in each plot to the nearest 5 cm.
Statistical Analyses
We considered each annual survey for a field to be an independent survey because some sites were managed differently each year, and because all but 7 sites were surveyed 2 out of the 3 years. We calculated site-level covariates using the means of all variables for each field including percent cover of grasses, forbs, woody stems, and bare ground; vegetation height; COV; and visual obstruction. Categorical covariates included field type (WSG vs. CSG) and management timing (1 = fall, 2 = summer, 3 = late winter-no management). We then combined the 2 management prescriptions into a single measure (management = WSG 1, WSG 2, WSG 3, CSG 1, CSG 2, CSG 3). We examined the correlation between covariates using Spearman's rank correlation coefficient and no 2 variables had a correlation |r s | >0.7. We used a 1-way analysis of variance (ANOVA) to compare site-level covariates between field types, management timing, and combined management. With significant ANOVA results (P <0.01), we calculated Tukey post hoc pairwise comparisons to determine significant differences between groups. To account for multiple testing, we used the Bonferroni correction and considered significant only those covariates for which P < 0.05/7 = 0.007 (Legendre and Legendre 1998). We classified 16 of the detected species as target species. These species included grassland (n = 3), successional-scrub species (n = 8), or other (n = 5) according to the Breeding Bird Survey's groupings (Sauer et al. 2011; Table 1). We included species classified as other in the analysis because of their frequent occurrence on our study sites in winter. Although these birds do not breed in grassland and successional-scrub vegetation communities, they frequented our sites during winter, and it is likely these species are similarly affected by management activities in winter. We calculated relative abundances by dividing the number of detections for each species by the number of transects surveyed in each field during each year of the study.
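As a rough illustration of the covariate comparisons described above, a minimal sketch in R (with hypothetical column names for the 7 site-level covariates in a data frame `sites`; not the authors' code) might look like:

```r
# One-way ANOVA per site-level covariate with a Bonferroni-corrected
# threshold (0.05/7), followed by Tukey post hoc pairwise comparisons.
covariates <- c("grass", "forbs", "woody", "bare", "height", "cov", "robel")  # illustrative names

for (v in covariates) {
  fit <- aov(reformulate("management", response = v), data = sites)
  p   <- summary(fit)[[1]][["Pr(>F)"]][1]
  if (!is.na(p) && p < 0.05 / length(covariates)) {
    print(TukeyHSD(fit))   # pairwise differences between treatment groups
  }
}
```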
To explore potential relationships between relative abundance of target species and habitat characteristics, we used non-metric multidimensional scaling (NMDS; Minchin 1987) based on Bray-Curtis dissimilarity (Bray and Curtis 1957, Faith et al. 1987). Specifically, we used relative abundances and the metaMDS and envfit functions from the vegan package (Oksanen et al. 2013) in Program R version 3.2.2 (R Core Team 2015) to project a summary of habitat use for the 16 target species. We chose to use the Bray-Curtis distance metric in NMDS because it is sensitive to differences in the most abundant species and less sensitive to infrequently encountered species (Pillsbury et al. 2011). We visualized the results using a triplot of sample points, bird species, and environmental variables, to identify the most prominent habitat characteristics to include in subsequent occupancy models. We used multispecies occupancy models (MSOMs; Zipkin et al. 2010) to determine the effects of grassland management and associated structure on non-breeding bird diversity. These models are an extension of the single-species occupancy model (MacKenzie et al. 2002) that analyzes detections of all species encountered during replicated surveys at a set of sites. We defined occupancy as a binary variable where presence equals one for any species that occurred within 50 m of transect counts and zero otherwise. Replicated surveys over multiple visits allowed for a distinction between species that are absent and species that are present but not detected (Royle et al. 2005). We assumed that occurrence and detection probabilities varied by species and were influenced by habitat management, structural characteristics, and survey-specific features. We modeled the occurrence probabilities for all species and target species at each transect dependent on whether transects were in WSG fields or CSG fields. This allowed for species-level effects to differ between the 2 field types. We also incorporated effects of management timing because this influenced vegetation structure. In addition, we included 2 structural characteristics, COV and vegetation height, based on NMDS results. For the detection model, we included vegetation height, temperature, minutes after sunrise, and day of season (1 Dec = 1, 28 Feb = 90) as possible species-specific detection covariates. We standardized continuous covariates for the occurrence and detection models to have a mean of zero. We conducted Bayesian analysis of the model using data augmentation techniques described by Royle et al. (2007), which allow for an estimation of the number of species in the community, including those that were unobserved during sampling. Analysis by data augmentation ensures increased precision of occurrence estimation and improved analysis of community species richness. We analyzed the model using a Bayesian approach in Program R version 3.2.2 (R Core Team 2015) and WinBUGS (Lunn et al. 2000). For each MSOM we ran 2 chains of length 10,000 after a burn-in of 5,000 and thinned the posterior chains by 5. We assessed convergence using the R̂ statistic (Zipkin et al. 2010). We used the MSOM results to compare species richness, including unobserved species (all species, n = 50; target species, n = 25) between the 2 field types and under different management treatments (field type + management timing) by averaging the number of species estimated by the models for each treatment group.
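The NMDS step named above can be sketched as follows; this is a minimal illustration with placeholder object names (abundance matrix `abund`, site covariates `env`), not the authors' exact script.

```r
# NMDS of field-level relative abundances with fitted habitat vectors,
# following the metaMDS/envfit workflow named above (vegan package).
library(vegan)

ord <- metaMDS(abund, distance = "bray", k = 2, trymax = 100)
ord$stress                                   # final stress (0.143 reported in Results)

ef <- envfit(ord, env, permutations = 999)   # fit habitat variables to the ordination
plot(ord, type = "t")                        # species and site scores
plot(ef, p.max = 0.05)                       # overlay significant habitat vectors
```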
We used 1-way analysis of variance (ANOVA) to compare species richness between field types and management. With significant ANOVA results (P < 0.01), we calculated Tukey post hoc pairwise comparisons of species richness. We also compared transect-specific associations of species richness with COV and vegetation height.
RESULTS
Warm-season grass fields, on average, had a higher percentage of bare ground (F1,226 = 44.51, P < 0.001), greater visual obstruction (F1,226 = 47.88, P < 0.001), and taller vegetation (F1,226 = 75.78, P < 0.001) than CSG (Table 2). Cool-season grasses had higher COV (F1,226 = 50.99, P < 0.001) and percent grass cover (F1,226 = 81.26, P < 0.001). Percent woody vegetation and percent forb cover did not differ between field types. Cool-season fields that were not managed in fall or summer had a higher percentage of woody cover than all other field types (F1,226 = 42.45, P < 0.001) and relatively high percent forb cover. Warm-season grass fields had the lowest mean COV, and fields managed in the summer (timing = 2), the previous winter, or unmanaged (timing = 3) had lower COV than those managed in the fall, regardless of field type (Table 2). We detected 7,505 individuals of 41 species of birds during winter transect surveys (Table A1). The model estimated 47.1 species in the region (95% posterior interval = 44-57). We selected 16 target species for NMDS analysis based on their vegetation association groupings or for their frequent use of our grassland sites in winter (Table 1). The NMDS ordination resulted in a 2-axis solution, with a final stress of 0.143, which is within the range reliable for community data and unlikely to have been obtained by chance (Oksanen et al. 2013). The 2 axes together represented 87.6% of the variance in target bird communities, using a fit-based R² measure. Visualization of the NMDS demonstrated positive correlations of grassland-obligate species to higher COV, lower visual obstruction, and a higher percentage of grass cover (Fig. 2A).
Table 1. List of 16 target overwintering bird species used to quantify the difference in bird communities between field types across years (2013-2016) and sites (grasslands in Northern Virginia, USA). Asterisks indicate breeding occurrence, in addition to winter, in study region. Superscript represents level of conservation concern; 1 = Partners in Flight (https://www.partnersinflight.org/; accessed 14 Jun 2018) common bird in steep decline; 2 = Appalachian Mountains Joint Venture (http://amjv.org/; accessed 14 Jun 2018) priority species.
In contrast, successional-scrub species positively correlated with taller vegetation and increased percentages of woody stems, forb cover, and bare ground. Cone of vulnerability explained the most variation in species composition between the survey points (multi-response permutation procedure; A = 0.498; P < 0.001; Fig. 2B). Fields with higher COV values had a higher abundance of grassland-obligate species, red-winged blackbirds (Agelaius phoeniceus), and eastern bluebirds (Sialia sialis). Successional-scrub species were most abundant in fields with lower COV. One MSOM considered observed (n = 41) and estimated (n = 50) species for the total bird community and another considered the observed (n = 16) and estimated (n = 25) species for only the target species. Estimates of total and target species richness were higher in WSG than CSG fields (total species: F 1,226 = 76.21, P < 0.001; target species: F 1,226 = 75.21, P < 0.001; Fig. 3A) though occurrence probabilities for many species were similar in the 2 field types (Fig. 3B). Values of total species richness estimated by the model were similar to observed species richness in WSG (observed = 5.62 ± 0.33 vs. estimated 5.49 ± 0.22) and CSG fields (observed = 2.78 ± 0.35 vs. estimated = 2.86 ± 0.19).
Estimated total and target species richness was also influenced by management timing, with fields managed during the previous winter or left unmanaged exhibiting higher estimated richness than fields managed in summer or fall (total species: F 2,225 = 59.36, P < 0.001; target species: F 2,225 = 81.07, P < 0.001; Fig. 4A). When combining management timing and field type, WSG fields managed in the previous winter or left unmanaged (WSG 3) had higher estimated total and target species richness than any other treatment group (total species: F 5,222 = 32.01, P < 0.001; target species: F 5,222 = 40.96, P < 0.001; Fig. 4B). Though CSG 3 fields had higher estimated total and target richness than CSG 1 and CSG 2 fields (P < 0.001), they were not different from WSG 2 fields (total and target species: P = 0.999). Target species richness was higher in CSG 3 fields than WSG 1 (P = 0.019) but not different for total species (P = 0.521).
Structural characteristics also influenced species richness and individual species occurrence probabilities. Estimated total and target species richness was higher in fields with tall vegetation (total species: R² = 0.32, P < 0.001; target species: R² = 0.40, P < 0.001; Fig. 5A) and lower in fields with high COV measurements (total species: R² = 0.27, P < 0.001; target species: R² = 0.35, P < 0.001), indicating that field openness reduced species occupancy. However, the results of the NMDS indicated not all species respond similarly to low vegetation height. The abundance estimates of some grassland-obligate species, like horned larks (Eremophila alpestris) and eastern meadowlarks (Sturnella magna), demonstrated a negative correlation with vegetation height. In addition, vegetation height negatively influenced species-specific detection probability (Fig. 5B).
DISCUSSION
We investigated effects of grassland management timing and field type on associated habitat structure for the winter bird community using a network of public and private lands. Our results showed that fields comprised of WSG had taller vegetation, more vertical and horizontal structure, and more bare ground than fields comprised of non-native CSG, and supported our hypothesis that non-native vegetation would exhibit lower avian richness. Our results also demonstrated that, regardless of field type, fallow fields supported more species than fields managed during the summer or fall, supporting our hypothesis that species richness would be associated with greater structural heterogeneity and therefore lower in recently managed fields. We do not know how this grassland management affects survival and subsequent reproduction, but our results indicate the potential for significant effects on winter food availability and protective cover. In addition, our research demonstrates the importance of recognizing regional differences in grassland management and the resulting vegetation compositions and associated bird communities.
Warm-season grasses have been endorsed by conservation managers to improve breeding habitat for grassland and successional-scrub species (West et al. 2016). Several recommended management practices for WSG enhance habitat quality and structural heterogeneity, resulting in increased bird diversity and reproductive success during the breeding season (Fuhlendorf et al. 2006, Churchwell et al. 2008). For example, studies in tallgrass prairies have demonstrated that patch-burning and grazing, a process by which a field is burned or grazed in patches, creates a more heterogeneous landscape and increases the variety of grassland bird communities across the landscape (Fuhlendorf et al. 2006) and improves reproduction for nesting birds (Churchwell et al. 2008). Warm-season grass fields in the eastern United States are generally managed using patch-burning techniques when conditions allow, but many are also bush-hogged every 1-3 years to help slow succession. In contrast, traditionally managed CSG in the eastern United States, such as hayfields or pastures, are managed homogenously with disturbances occurring frequently and uniformly across the field. Uniform management limits field heterogeneity and therefore only satisfies the habitat requirements of a limited suite of species (Fuhlendorf et al. 2006). This was demonstrated in our study by a reduced suite of species occupying CSG fields in winter compared to WSG fields. In contrast, many of the WSG fields included in the study were managed using patch-burning techniques and had less frequent disturbances, resulting in increased structural heterogeneity, and thereby increased bird species richness.
The response of bird communities to management timing can vary greatly during the breeding season (Brawn et al. 2001, Perkins et al. 2009), though few studies have explored the response of wintering bird communities. Furthermore, no studies have compared the response of winter bird communities to the timing of management in CSG versus WSG fields. Hovick et al. (2014) explored the response of 6 overwintering grassland species to time since disturbance in a tallgrass prairie and found varying results, with each species demonstrating habitat associations at different stages of regrowth. In their study, time since disturbance occurred over a larger time frame than in our study, however, with the shortest time since disturbance being <12 months and the longest being >24 months. Our study explored the response of the winter bird community on a shorter temporal extent (<12 months) because grasslands in the study region can be managed several times throughout the year. For example, traditionally managed hayfields in the eastern United States are harvested earlier and more frequently than WSG, with as many as 3-4 cuttings annually (Savoie et al. 1985). In contrast, WSG fields managed for hay or biomass production are harvested later in the season to accommodate a later growing season, and only allow for 1 or 2 harvests a year (Vogel et al. 2002). Fields managed for wildlife are generally managed during winter months to coincide with optimal burning conditions (e.g., prescribed burns) or to stimulate forb growth (e.g., disking) and thus are not disturbed during the growing season (Harper 2007). Our findings demonstrate that fields left undisturbed through the growing season, regardless of grass type, have higher bird species richness during the winter months than traditionally managed CSG fields. Thus, leaving CSG fields fallow throughout the growing season, or longer, can promote similar structure to WSG, resulting in increased heterogeneity and species richness. Warm-season grass fields managed throughout the growing season did not differ from CSG fields that had not been managed in fall or summer. This result suggests that WSG fields improve winter food availability and cover regardless of management timing. One explanation of this result is that WSG develop later in the growing season than CSG (Newman and Moser 1988) resulting in later emergence and dispersal of seeds, potentially providing an important winter food source. Another explanation is that WSG are harvested at a taller height (20-40 cm stubble; Forwood and Magai 1992) than traditionally managed CSG fields (5-15 cm stubble; Gillen and Berg 2005), leaving more cover for winter birds. Our vegetation surveys estimated a higher average height of fall- and summer-managed WSG fields (0.67 m and 1.17 m, respectively), compared to the average height of fall- and summer-managed CSG fields (0.21 m and 0.74 m, respectively). However, our study had few fall- (n = 5) and summer-managed (n = 3) WSG fields. Therefore, further work needs to focus on identifying optimal management timing in WSG for wintering grassland and successional-scrub bird species.
The results of this study suggest that WSG fields improve winter habitat structure for a suite of successional-scrub species, and therefore can be promoted as a conservation tool for declining species in the eastern United States. The NMDS results demonstrated that successional-scrub species were most abundant in fields with increased structural heterogeneity, which was related to WSG fields. In contrast, fields with low structural heterogeneity supported more grassland-obligate species except for Savannah sparrows. Similar to previous winter studies, Savannah sparrows had less-specific requirements and were observed in a range of field and management types, vegetation heights, and COV values (Hovick et al. 2014, Saalfeld et al. 2016). Previous studies have also made this observation of eastern meadowlarks (Hovick et al. 2014, Saalfeld et al. 2016), though our study had almost exclusive occupancy of this species in recently managed fields regardless of field composition. This could be due to differences in regional responses of vegetation to management, which differ as a function of rainfall and soil type, in addition to season of management (Baldwin et al. 2007, Twidwell et al. 2012). It is possible that relatively greater rainfall during the growing season and more productive soils in eastern regions result in denser vegetation, which deters meadowlarks during the breeding season (West et al. 2016). Given that meadowlarks occur in the study region year-round, this result further emphasizes the importance of considering regional differences in habitat structure and associated habitat use, especially for species of concern.
Conditions during the non-breeding season can influence individual performance in subsequent seasons (Harrison et al. 2011). For example, high-quality wintering sites (i.e., those containing abundant food sources and ample cover) are associated with earlier arrival dates on the breeding grounds and increased fledgling survival in American redstarts (Norris et al. 2004). Barn swallows (Hirundo rustica) demonstrate earlier arrival dates with favorable winter conditions, resulting in increased frequency of second broods and a higher number of fledged offspring (Saino et al. 2004). It is likely that short-distance migrant and resident species occupying North American grasslands in winter are similarly influenced in subsequent seasons by winter habitat quality, though these findings have not been elucidated. Therefore, timing of grassland management could have effects on winter habitat quality and associated bird survival and reproduction in the breeding season. Though the results of our study do not reflect habitat-associated survival, they demonstrate patterns of habitat-use for wintering bird communities that provide a foundation for future research. For example, recent advances in the use of intrinsic markers provide opportunities to quantify winter habitat quality using habitat-specific isotopic signatures (Marra et al. 1998, Norris and Marra 2007, Rushing et al. 2016). Thus, future work should focus on comparing the quality of CSG and WSG fields as wintering habitat and the associated survival and subsequent reproduction of birds that overwintered in these fields.
Figure 4. Mean estimated species richness by management timing (A) and in fields dominated by cool-season grasses (CSG) versus native warm-season grasses (WSG) managed in fall, summer, or winter-not managed (B). Regardless of field type, fields managed in the previous winter or not managed had higher richness than all other treatments (A). When management timing was combined with field type (B), warm-season grass fields managed in the previous winter or left unmanaged had the highest estimated species richness. Letters indicate significant differences in total species richness between treatments.
MANAGEMENT IMPLICATIONS
Our results demonstrate that the composition of plant species and the timing of their management influences vegetation structure, and therefore the bird communities overwintering in eastern grasslands. We recommend (when possible) deferring the annual mowing or bush-hogging of fields until late winter or early spring (Feb-Apr) to optimize vegetative cover for birds throughout winter. To further improve structural heterogeneity, we also recommend the conversion of fields from CSG to WSG, to support a more diverse overwintering bird community. The results of this study also demonstrate that species-specific recommendations may differ from recommendations for whole bird communities. For example, field management for horned larks or eastern meadowlarks may differ from the rest of the bird community. Regardless, this work increases our understanding of avian habitat associations during winter and provides support for optimizing management practices to sustain birds overwintering in eastern grasslands.
ACKNOWLEDGMENTS
We thank the landowners of Virginia Working Landscapes for providing permissions to survey their properties for the purpose of this study. We are also grateful to the Smithsonian-Mason fellowship program that supported A. E. Johnson.
Rectum adenocarcinoma metabolic subtypes analysis and a risk prognostic model construction based on fatty acid metabolism genes
Fatty acid metabolism is an essential part of cancer research due to its role in cancer initiation and progression. However, its characteristics and prognostic value in rectum adenocarcinoma have not been systematically evaluated. We collected fatty acid metabolism gene expression profiles and clinical information from the cancer genome atlas and gene expression omnibus databases. After excluding individuals lacking clinical information and the presence of genetic mutations, we performed consistent clustering of the remaining patients and selected stable clustering results to group patients. Differentially expressed genes and gene set enrichment analysis were compared between subgroups, while metabolic signature identification and decoding the tumor microenvironment were performed. In addition, we explored the survival status of patients among different subgroups and identified signature genes affecting survival by least absolute shrinkage and selection operator regression. Finally, we selected signature genes to construct a risk prognostic model by multivariate Cox regression and evaluated model efficacy by univariate Cox regression and the receiver operating characteristic curve. By consensus clustering, patients were distinguished into 2 stable subpopulations, gene set enrichment analysis and metabolic signature identification effectively defined 2 completely different subtypes of fatty acid metabolism: fatty acid catabolic subtype and fatty acid anabolic subtype. Among them, patients with the fatty acid catabolic subtype had a poorer prognosis, with a significantly lower proportion of myeloid dendritic cells infiltration within the tumor microenvironment. Aquaporin 7 (hazard ratio, HR = 2.064 (1.4408–4.5038); P < .01), X inactive specific transcript (HR = (0.3758–0.7564), P = .045) and interleukin 4 induced 1 (HR = 1.34 (1.13–1.59); P = .034), were selected by multivariate Cox regression, which constructed a risk prognostic model. The independent hazard ratio of the model was 2.72 and the area under curve was higher than age, gender and tumor stage, showing better predictive efficacy. Our study revealed the heterogeneity of fatty acid metabolism in rectum adenocarcinoma, defined 2 completely distinct subtypes of fatty acid metabolism, and finally established a novel fatty acid metabolism-related risk prognostic model. The study contributes to the early risk assessment and monitoring of individual prognosis and provides data to support individualized patient treatment.
Introduction
As one of the most common digestive tract cancers, annual colorectal cancer incidences and fatalities could reach over 1.9 million and 940,000, respectively, accounting for 10% of all cancer cases and deaths. [1] Over the past few decades, physicians seem to be accustomed to describing malignancies in the colon and rectum as colorectal cancer. However, a growing body of research has shown that colon adenocarcinoma and rectum adenocarcinoma (READ) have apparent differences in developmental origin, genetic characteristics, gut microbiota, and risk factors. [2-4] In addition, the incidence of READ in Asia is much higher than that of colon adenocarcinoma. The proportion of early-onset READ cases (diagnosed under 50) increases year by year, resulting in a significant public health concern worldwide. [5,6] Smoking cessation, a healthy diet, and proper exercise can prevent READ. At the same time, colonoscopy screening, stool testing, and other methods can be conducive to the early diagnosis of READ. [1-3,7] Nevertheless, the prognosis of READ patients has not substantially improved due to the high cost of colonoscopies, inadequate medical services, and recurrent diseases. [1,2] Therefore, reliable and reasonable indicators must be identified to assess individual prognostic risk and guide individualized treatment so that the survival of READ patients can be improved.
Over the past decade, continuous efforts have been invested in exploring READ screening strategies and prognostic markers. The proposal and application of the TNM international staging system, C-reactive protein, carcinoembryonic antigen, and other methods have significantly improved READ diagnosis and treatment. [8,9] Notably, these screening strategies and prognostic markers have numerous limitations, and novel effective tools are required to complement and replace them. Fortunately, the continuous accumulation of high-throughput data has provided a deeper and more comprehensive understanding of the genetic characteristics and pathogenesis of READ, making it possible to develop READ prognostic markers based on sequencing data. [10][11][12] More and more evidence has shown that fatty acid metabolism profoundly affects cancer occurrence, development, and treatment resistance through adenosine triphosphate generation (beta-oxidation), glycerophospholipid synthesis, protein acylation, and sustained redox homeostasis. [2,3] One clinical study showed significantly varied medium-chain fatty acid concentrations among individuals in the READ progression stage and identified capric acid as a highly effective READ biomarker. [13] Meanwhile, Zhu et al [14] demonstrated that blocking the fatty acid metabolizing enzyme could reduce lipid metabolism in mice and inhibit the occurrence of READ. Therefore, in-depth exploration of fatty acid metabolism in READ is of great significance for identifying potential targets of READ and improving the treatment.
In this study, based on publicly available data in the cancer genome atlas (TCGA) and gene expression omnibus (GEO), we distinguished READ patients into fatty acid catabolic and anabolic subtypes through a comprehensive analysis of fatty acid metabolism genes; patients with the fatty acid catabolic subtype had not only a poorer prognosis but also a relatively low proportion of myeloid dendritic cell (mDC) infiltration within the tumor microenvironment. Subsequently, we applied least absolute shrinkage and selection operator (LASSO) regression and multivariate Cox regression and integrated the computational results of different cohorts to finally construct a highly specific and sensitive risk prognostic model consisting of 3 fatty acid metabolism genes. The research flow of this paper is shown in Figure 1.
Data collection and preprocessing
RNA-seq data, microarray data, and the corresponding clinical information were obtained from the TCGA data portal (https://portal.gdc.cancer.gov/) and the NCBI GEO database (https://www.ncbi.nlm.nih.gov/geo/). Raw counts were normalized to transcripts per million using the gene lengths in the annotation file (GDC.h38 Flattened GENCODE v22 GFF). The log2-scale transformed microarray data were not further normalized. READ patients were screened from the datasets, excluding individuals whose RNA expression did not match the clinical information. In addition, we also excluded individuals with genetic mutations from the microarray data. A total of 123 TCGA-READ patients and 109 READ patients from GSE87211 [15] were included in the study cohorts, and their clinical characteristics are presented in Table 1.
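The counts-to-TPM step can be sketched in a few lines of R. This is an illustrative reconstruction rather than the authors' script; the object names counts (a genes x samples matrix of raw counts) and gene_length_kb (gene lengths in kilobases from the annotation file) are assumptions.

```r
# Convert raw counts to transcripts per million (TPM); `counts` is a
# genes x samples matrix and `gene_length_kb` a vector of gene lengths in kb.
counts_to_tpm <- function(counts, gene_length_kb) {
  rpk <- counts / gene_length_kb          # reads per kilobase of transcript
  scaling <- colSums(rpk) / 1e6           # per-sample scaling factor
  sweep(rpk, 2, scaling, FUN = "/")       # TPM: each column sums to 1e6
}

# tpm <- counts_to_tpm(counts, gene_length_kb)
```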
Consensus clustering
The "HALLMARK_FATTY_ACID_METABOLISM" genes in the molecular signatures database [16] were retrieved and defined as fatty acid metabolism genes. After excluding genes with <10% expression, a matrix constructed from the fatty acid metabolism genes was used for clustering in ConsensusClusterPlus (v.1.58.0; parameters: reps = 500, pItem = 0.8, pFeature = 1, distance = "spearman"). [17] The best clustering result among different cluster numbers was determined by comparing the consensus cumulative distribution function and the Delta Area plot. t-distributed stochastic neighbor embedding was executed in Rtsne (v.0.15) to validate the accuracy of the clustering. Finally, the clusters were visualized as heatmaps in ComplexHeatmap (v.2.10.0). [18]
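A minimal R sketch of the consensus clustering call with the parameters reported above; the input matrix fam_expr, the choice of maxK and the clustering algorithm are assumptions, since the text does not state them.

```r
library(ConsensusClusterPlus)

# `fam_expr`: fatty acid metabolism gene expression matrix (genes x patients),
# filtered as described above.
set.seed(1)
cc <- ConsensusClusterPlus(
  d          = as.matrix(fam_expr),
  maxK       = 6,            # candidate cluster numbers 2..6 (assumed range)
  reps       = 500,
  pItem      = 0.8,
  pFeature   = 1,
  distance   = "spearman",
  clusterAlg = "pam",        # assumed; the algorithm is not stated in the text
  title      = "fam_consensus",
  plot       = "png"
)

# Cluster labels for the retained solution, e.g. k = 2 (clusters A and B).
cluster_labels <- cc[[2]]$consensusClass
```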
Metabolic signature identification
The "FATTY ACID METABOLISM"-related gene sets were extracted from the 225 eigengene dataset built into the IOBR package. [22] Subsequently, based on the consensus clustering results, the signature score of each cluster against these gene sets was evaluated by single-sample gene set enrichment analysis, [23] with the minimal and maximal values rescaled to 0 and 1, respectively. In addition, Student t test was applied to analyze whether there were significant differences in the signature scores between groups.
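The signature scoring step can be approximated with the ssGSEA implementation in the GSVA package; this is a sketch under the assumption that fa_sets is a named list of fatty-acid-metabolism gene sets (e.g., taken from the IOBR signature collection) and that expr and cluster_labels come from the previous steps.

```r
library(GSVA)

# Single-sample GSEA scores: one row per gene set, one column per patient.
ssgsea_scores <- gsva(as.matrix(expr), fa_sets, method = "ssgsea")

# Rescale every signature to the [0, 1] range, as described above.
rescale01 <- function(x) (x - min(x)) / (max(x) - min(x))
ssgsea_scores <- t(apply(ssgsea_scores, 1, rescale01))

# Student t test of one (hypothetical) signature between the two clusters.
t.test(ssgsea_scores["fatty_acid_beta_oxidation", ] ~ cluster_labels)
```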
Decoding the tumor microenvironment
Based on the expression matrix, we quantitatively scored and normalized the abundance of immune and stromal cell populations in READ patients via MCP Counter [24] in IOBR. Furthermore, the differences in the relative abundance of the same cell population in different clusters were compared, and the correlation between individual cell abundance and the fatty acid metabolic signature score was analyzed by calculating the Pearson correlation coefficient.
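A short R sketch of the deconvolution and correlation analysis; the paper runs MCP-counter through IOBR, while here the standalone MCPcounter package is used for illustration, and the objects expr, cluster_labels and ssgsea_scores are assumed from the previous steps.

```r
library(MCPcounter)

# Relative abundance of immune and stromal populations per patient.
mcp <- MCPcounter.estimate(as.matrix(expr), featuresType = "HUGO_symbols")

# Compare mDC abundance between clusters and correlate it with a
# fatty-acid signature score (signature name is hypothetical).
mdc <- mcp["Myeloid dendritic cells", ]
t.test(mdc ~ cluster_labels)
cor.test(mdc, ssgsea_scores["fatty_acid_beta_oxidation", ], method = "pearson")
```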
Kaplan-Meier survival analysis
The matrix containing the outcome events and follow-up times for each patient was collated by cluster, and the follow-up times were converted to months. Kaplan-Meier survival analysis with the grouping as categorical information was conducted in survival (v.3.2-13) and presented using survminer (v.0.4.9).
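A minimal sketch of the survival comparison in R, assuming a data frame clin with per-patient overall-survival time in months (os_months), the event indicator (os_event) and the consensus cluster label (cluster); the column names are assumptions.

```r
library(survival)
library(survminer)

# Kaplan-Meier fit stratified by consensus cluster.
km_fit <- survfit(Surv(os_months, os_event) ~ cluster, data = clin)

# Curves with the log-rank P value and a risk table.
ggsurvplot(km_fit, data = clin, pval = TRUE, risk.table = TRUE)
```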
LASSO regression
glmnet (v.4.1-3) was applied to the expression matrix, with the cluster assignment of each patient used as the response. Variables were selected under the LASSO penalty, and models were retained at both the minimum lambda and the one-standard-error (1se) lambda of the cross-validation. A model was constructed from the screened variables, and its predictive ability was evaluated by receiver operating characteristic (ROC) curves and the area under the curve (AUC) in ROCR (v.1.0-11).
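A hedged sketch of the LASSO selection and ROC evaluation; x (the patients x genes expression matrix restricted to the fatty acid metabolism genes) and y (the binary cluster label) are assumed inputs, and the in-sample evaluation shown here is only illustrative.

```r
library(glmnet)
library(ROCR)

set.seed(1)
cv_fit <- cv.glmnet(x = as.matrix(x), y = y, family = "binomial", alpha = 1)

# Genes retained at the two standard penalties (28 and 16 genes in the paper).
coef_min <- coef(cv_fit, s = "lambda.min")
coef_1se <- coef(cv_fit, s = "lambda.1se")

# ROC curve and AUC of the fitted model.
prob <- as.numeric(predict(cv_fit, newx = as.matrix(x),
                           s = "lambda.min", type = "response"))
pred <- prediction(prob, y)
auc  <- performance(pred, "auc")@y.values[[1]]
plot(performance(pred, "tpr", "fpr"))
```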
Multivariate cox regression and risk prognostic model construction
To reduce the complexity and increase the interpretability of the model, multivariate Cox regression was applied to the included variables for stepwise regression. The regression coefficients and expression levels corresponding to each variable were then combined to obtain the risk score formula as follows.
Risk score $= \sum_{i=1}^{n} \beta_i \times \mathrm{Exp}_i$,

where $\beta_i$ is the Cox regression coefficient of gene $i$ and $\mathrm{Exp}_i$ its expression level. Accordingly, patients were divided into 2 groups (high risk and low risk), and the survival statuses of the 2 groups as well as the distribution of each variable's expression levels were visualized. In addition, we stratified the patients according to age, gender, and tumor stage in turn, thus determining the distribution of individuals in the 2 groups at different levels. Finally, the prognostic risk prediction ability of the model was validated through survival analysis.
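The risk score and the median split can be reproduced with a few lines of R, assuming a data frame clin_expr that contains the survival columns and the expression of the retained genes (AQP7, XIST, IL4I1); the column names are assumptions.

```r
library(survival)

# Multivariate Cox model on the genes kept after stepwise selection.
cox_fit <- coxph(Surv(os_months, os_event) ~ AQP7 + XIST + IL4I1, data = clin_expr)

# Risk score = sum_i beta_i * Exp_i, followed by a median split.
beta <- coef(cox_fit)
clin_expr$risk_score <- as.numeric(as.matrix(clin_expr[, names(beta)]) %*% beta)
clin_expr$risk_group <- ifelse(clin_expr$risk_score > median(clin_expr$risk_score),
                               "high risk", "low risk")

# Survival comparison between the two risk groups.
survfit(Surv(os_months, os_event) ~ risk_group, data = clin_expr)
```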
Assessing the independent prognostic value of the model
To more comprehensively assess the independent prognostic value of the model, we calculated the risk score for each patient and integrated it with age, gender, and tumor stage in univariate Cox regression analyses. In addition, we also analyzed the accuracy of the model based on the ROC curve.
Statistics and visualization
The statistical information for every experiment is detailed in the figures, figure legends, and figure notes. The violin plots and scatter plots were calculated and visualized with ggstatsplot (v.0.9.1). The other plots (such as the heatmaps and the volcano plot) were calculated with rstatix (v.0.7.0) and drawn with ggplot2 (v.3.3.5) or ggtree (v.3.2.1). [25] For the split violin plots, Student t tests between 2 categorical variables were performed with Bonferroni correction. The Pearson correlation coefficient was used to check the association between 2 continuous variables in the scatter plots. The log-rank test was used to assess survival differences between groups. For repeated statistical methods or color schemes in the paper, only the first occurrence is annotated, with the following occurrences consistent with the first annotation.
Patient classification based on fatty acid metabolism genes
Based on the expression levels (transcripts per million) of the fatty acid metabolism genes, we employed consensus clustering under different cluster numbers for the TCGA cohort comprising 123 READ patients (Fig. 2A).
The association between patient prognosis and fatty acid metabolic subtypes
To further explore the potential biological phenomena distinguishing cluster A and cluster B, we performed GSEA for the different clusters in the TCGA cohort and GEO cohort separately based on the ranked gene sets. Cluster A was significantly enriched in pathways related to fatty acid catabolism, including fat digestion and absorption and the PPAR signaling pathway (Fig. 3A-B). Cluster B was associated with fatty acid anabolism, such as the terpenoid backbone biosynthesis and steroid hormone biosynthesis pathways (Fig. 3A-B). Furthermore, in the metabolic signature identification for the different cohorts, cluster A scored significantly higher than cluster B in metabolic pathways such as fatty acid beta-oxidation and the fatty acid catabolic process, with the latter cluster preferring fatty acid biosynthesis (Fig. 3C-F). Accordingly, we defined cluster A as the fatty acid catabolic subtype and cluster B as the fatty acid anabolic subtype. More importantly, we found that patients attributed to cluster A had a poorer prognosis (P < .05) (Fig. 3G-H).
The composition of the tumor microenvironment in different fatty acid metabolic subtypes
Based on the MCPcounter [22] in IOBR, we found that the relative abundance of immune and stromal cells in the tumor microenvironment differed between READ patients with different fatty acid metabolic subtypes. Among them, cluster A individuals had a lower relative abundance of CD8+ T cells, mDCs, NK cells, endothelial cells, and fibroblasts than cluster B. Notably, only the group difference in mDCs was consistent between the 2 cohorts (Fig. 4A-B; Figure S3A).
The correlation of the expression of aquaporin 7 (AQP7), X inactive specific transcript (XIST), and interleukin 4 induced 1 (IL4I1) with fatty acid metabolic subtypes and patient prognosis
In this study, we first identified signature genes (min: 28, 1se: 16) that could distinguish the fatty acid metabolic subtypes of patients by LASSO regression and constructed a model based on the regression coefficients of the signature genes. The ROC curves indicated that the prediction accuracy of the model was acceptable, and high expression of AQP7 and IL4I1 predicted a poorer prognosis, whereas high expression of XIST predicted a better prognosis (Fig. 5E-F).
Risk prognostic model constructed from key genes with accurate predictive utility
Although we have identified key genes that accurately predict the fatty acid metabolic subtype and prognosis of patients, their roles lack effective validation. Therefore, we analyzed the risk scores of patients based on the multivariate Cox regression model to discriminate them into high risk and low risk groups.
According to the median risk score, we demonstrated the effectiveness of this classification in the different cohorts. The expression of the 3 key genes in the different risk groups corresponded exactly to their hazard ratios (Fig. 6A-B). After stratifying patients by age, gender, and tumor stage, we found a higher proportion of patients defined as high risk among older individuals and those with advanced tumors (III/IV), while younger individuals and those with early-stage tumors (I/II) were more inclined to be low risk. At the same time, gender did not yield consistent results across cohorts (Fig. 6C-D). In addition, survival analyses of the different cohorts all indicated that patients in the high risk group had a worse prognosis than those in the low risk group (Fig. 6E-F). Notably, the univariate Cox regression results showed that the risk score exhibited a high hazard ratio in both independent cohorts [TCGA-READ: HR = 2.72 (1.43-5.16); GSE87211: HR = 2.72 (1.66-4.46)] and was statistically significant (P < .01) (Fig. 7A-B). More importantly, for the different cohorts, the AUC value of the risk score was 0.91 and 0.708, respectively. In contrast, the AUC of the tumor stage, which is often used to predict patient prognosis in clinical practice, was only 0.63 and 0.584 (Fig. 7C-D).
Discussion
Metabolic abnormality is an important feature of cancer cells. [26] It actively promotes the occurrence and development of cancer by regulating gene expression, resisting apoptosis, and promoting cell proliferation and migration. [26,27] In-depth studies of the metabolic profile of cancer cells have shown that nutrients such as acetic acid, fatty acids, lactic acid, branched-chain amino acids, serine, and glycine directly participate in cell transformation or support cancer growth through metabolism. [28,29] Not all studies suggest that abnormal intracellular metabolism promotes cancer development, and whether certain substances thought to be key to cancer metabolism necessarily promote the proliferation and migration of cancer cells remains controversial. [30,31] However, it is undeniable that metabolism-based research provides reliable and effective biomarkers for cancer diagnosis and prognosis, and metabolism-based targeted cancer therapy has broad application prospects. This study found differential expression of fatty acid metabolism genes in READ patients. Based on the expression patterns, patients could be divided into 2 stable subpopulations. Pathway enrichment and metabolic signature identification showed that the 2 patient subpopulations corresponded to 2 completely different fatty acid metabolic subtypes: a fatty acid catabolic subtype and a fatty acid anabolic subtype. Subsequent survival analyses indicated that the patient subpopulation corresponding to the fatty acid catabolic subtype had a poorer prognosis. Notably, fatty acid beta-oxidation has been shown to promote cancer cell growth and proliferation in a variety of cancers, including READ. [27,29,32]

In addition to their unique metabolic profile, the local microenvironment of cancer cells is rich in immune cells and stromal cells, which build the complex ecosystem of cancers and together contribute to cancer development and progression. [27,29,33] In this study, we used the MCPcounter, [22] a validated and robust quantitative method, to assess the relative abundance of immune and stromal cells within the tumor microenvironment of READ patients. Subsequently, we compared the differences in the relative abundance of the same cells between individuals of different fatty acid metabolic subtypes, where the relative abundance of mDCs was significantly lower in patients with the fatty acid catabolic subtype than in the others. Furthermore, the relative abundance of mDCs in both groups was significantly negatively correlated with the signature score of the fatty acid catabolic pathway and positively correlated with the signature score of the fatty acid anabolic pathway. As specialized antigen-presenting cells of the immune system, dendritic cells (DCs) have the unique ability to attract and activate naïve CD4 and CD8 cells. [34] Among them, DC-induced cytotoxic T cell responses are an important pathway by which individuals fight against tumors. [34,35] Notably, by comparing the ability of different subtypes of DCs to initiate cytotoxic T cells, Nizzoli et al [36] found that mDCs produce higher levels of cytotoxic molecules (IL12) compared to plasmacytoid dendritic cells and are the main effector cells for inducing cytotoxic T cell responses. Furthermore, unlike conventional monocyte-derived DC approaches, tumor-targeting immunotherapy with mDCs does not require an extensive culture period, is clinically reproducible, and has demonstrated encouraging immunological and clinical results in completed clinical trials. [37,38]

Figure 7. Assessment of the predictive efficacy of the risk prognostic model. (A-B) The forest plot of age, gender, stage and the risk score (**, P ≤ .01). (C-D) The ROC curve of age, gender, stage and the risk score. ROC = receiver operating characteristic.

Taken together, we found that READ patients exhibit fatty acid metabolic heterogeneity that significantly affects individual prognosis. We also suggest that targeted metabolic regimens may yield better clinical outcomes for the fatty acid catabolic subtype with low mDC infiltration, while the potential efficacy of mDC-mediated immunotherapy is relatively stronger for the fatty acid anabolic subtype.
Further analysis showed that the expression levels of AQP7, XIST, and IL4I1 could predict the prognostic risk of patients: high expression of AQP7 and IL4I1 indicated a poorer prognosis, and high expression of XIST indicated a better prognosis. As an aquaglyceroporin, AQP7 promotes the hydrolysis of intracellular triglycerides into glycerol and fatty acids while enhancing the transport of extracellular glycerol and intracellular fatty acids, which provide adenosine triphosphate through β-oxidation. [39,40] As an immunosuppressive enzyme, IL4I1 induces cancer immune escape by inhibiting T-cell proliferation, [41] and our study found that high expression of IL4I1 may alter fatty acid metabolism patterns in READ patients and increase individual prognostic risk. Similarly, IL4I1 was included in the prognostic model of renal clear cell carcinoma constructed by Zhu et al. [42] XIST is a long noncoding RNA that mediates female mammalian development by recruiting protein complexes and is considered a female characteristic gene. [43] Multiple studies [44,45] suggested that women appear to be less likely to develop READ than men, and men are more likely to die from READ than women, which is consistent with the finding that XIST protects READ patients in this study. Subsequently, we established a risk prognostic model based on AQP7, XIST, and IL4I1 and calculated the risk score for each patient. Based on the risk score, we divided patients into a high risk group and a low risk group, in which the survival rate of patients in the high risk group decreased significantly with longer follow-up time and was much lower than that of the low risk group. In addition, we found that older patients and those with advanced tumors were more inclined to obtain a higher risk score, that is, a poorer prognosis, which is consistent with the effect of age and tumor stage on patient prognosis in clinical practice, [2,3] indicating the validity of the model. More importantly, the univariate Cox results suggest that the risk score has good independent prognostic value, while the AUC value of the risk score is much higher than that of age, gender, and tumor stage, indicating that it is a more accurate predictor of patient prognosis.
This study has certain limitations. First, the fatty acid metabolism genes defined in the study may not be completely accurate due to data incompleteness, data update delay, and other issues in the molecular signatures database itself. Second, the source of data used in the risk prognostic model is limited to the TCGA and GEO databases, and the utility and credibility of the model will further improve if it is applied for prospective clinical trial cohorts.
Conclusion
In conclusion, this study revealed the heterogeneity of individual fatty acid metabolism and effectively defined 2 mutually independent fatty acid metabolic subtypes through systematic analysis of fatty acid metabolism genes in READ patients. In addition, the study confirmed that READ patients with fatty acid catabolic subtypes have a poorer prognosis and lower infiltration of mDCs in the tumor microenvironment, which is a good guide for the selection of therapeutic measures. More importantly, we constructed a risk prognostic model based on metabolic subtypes and individual survival status and demonstrated its desirable predictive utility. We expect that this study will help clinicians achieve early assessment and long term monitoring of the prognosis of READ patients and provide a reference for individualized patient treatment.
Assessment of textural differentiations in forest resources in Romania using fractal analysis
Deforestation and forest degradation have several negative effects on the environment, including a loss of species habitats, disturbance of the water cycle and a reduced ability to retain CO2, with consequences for global warming. We investigated the evolution of forest resources from development regions in Romania affected by both deforestation and reforestation using a non-Euclidean method based on fractal analysis. We calculated four fractal dimensions of forest areas: the fractal box-counting dimension of the forest areas, the fractal box-counting dimension of the dilated forest areas, the fractal dilation dimension and the box-counting dimension of the border of the dilated forest areas. Fractal analysis revealed morpho-structural and textural differentiations of forested, deforested and reforested areas in development regions with dominant mountain relief and high hills (more forested and compact organization) in comparison to the development regions dominated by plains or low hills (less forested, more fragmented, with small and isolated clusters). Fractal analysis has the advantage of analyzing the entire image rather than studying only local information, thereby enabling quantification of the uniformity, fragmentation, heterogeneity and homogeneity of forests.
Introduction
Forests have been managed intensively due to increased demand for wood and cropland [1][2][3].Sustainable forest management is imperative when facing important environmental and socio-economic changes in forest resources, as pressure on the forests' resources has an adverse effect on several components of ecosystems (water pollution, reducing the biodiversity, climate change, soil erosion) [4].Specifically, long-term changes in forest environments have direct effects on climate change, carbon cycling, and carbon-nitrogen interactions, thus underlining the need for novel management strategies to maintain and preserve the global carbon storage and ecosystem roles of the forest [5,6].The European continent hosts a variety of forest biomes, from the temperate to boreal regions, generally characterized by considerable anthropogenic influence causing environmental changes [7][8][9][10].
Romania incorporates large continuous virgin forest ecosystems in the temperate zone of Europe [11,12].In the recent decades, the deforestation rate in Romania has rapidly intensified in the Carpathians and Subcarpathians through a surge in the demand from the wood industry, timber export growth and manufacturing of wood [3,13].The natural forest cover in Romania is under threat due to the high population growth rate [14].Urbanization processes (new buildings for residential, commercial, academic and business purposes) and the conversion of rural areas into urban developed areas have caused substantial changes in land use and land cover [15].
Romania is a competitive player on the European wood market. It produces a quarter of the European hardwood lumber output because it has the lowest selling prices on the continent [16]. According to the National Institute of Statistics [17], the amount of roundwood produced in 2002 was 4.3 million m3 from coniferous forests, 4 million m3 from broadleaved forests, 1.1 million m3 from hardwood forests and 1.1 million m3 from softwood forests. A considerable increase in production has occurred since the turn of the new millennium, with 2015 values of 6 million m3 for coniferous forest, 6.6 million m3 for broadleaved forest, 5 million m3 for hardwood forest, and 1.1 million m3 for softwood forest. The Carpathian Mountains generally have a total growing stock of about 250-300 m3/ha for coniferous species, with moderately lower stock rates in the Western Carpathian Mountains (about 200-250 m3/ha) [18]. The utilization rate of forest, expressed as a percentage of the annual increment, decreased to 48% in 2005 from 60% in 1990 [19].
Based on the forest management literature in Romania, sustainable forestry should consider:
1. The preventive role of forests through the stability of slopes, mitigating landslides, floods and flash floods, and catastrophic inundations of river watersheds [20-23].
2. The ecological function of forests (providing ecosystem services such as biodiversity) [24].
3. The socio-economic trends governed by political changes after the post-socialist period [25,26] and forest restitutions [27,28] to former private owners and legal entities [29].
The assessment of forest cover change is a first necessary step to identify critical areas of land use changes [30].Methods of forest change mapping are mainly based on an assessment from remotely sensed imagery based on changes in spectral characteristics though time [30,31] or from indices designed for mapping forest changes such as the disturbance index [32][33][34].This index was applied to map forest cover changes in the protected areas of the Eastern Carpathians Mountains in Romania (Maramureş Mountains Nature Park, Rodna Mountains National Park, Călimani National Park) [28].
However, a spatially explicit identification and quantification of forest fragmentation, together with quantification of the spatial patterns and dynamics during deforestation processes, are needed. A powerful quantitative approach for this was presented in [35], based on fractal analysis (fractal geometry). Fractal analysis provides a complementary approach to traditional per-pixel assessment of deforestation and forest disturbance by including a contextual dimension in the analysis. Fractal analyses in the form of the box-counting dimension, pyramid dimension, Minkowski dimension, Tug-of-War dimension and Fractal Fragmentation Index (FFI) were recently used in the analysis of forests [36-38]. In this study, we used a different approach to fractal analysis of the textural differentiations of forest resources, analyzing the effects of fractal object dilation on the fragmentation and heterogeneity of the forest spatial evolution. We applied this approach to forest resources from the eight development regions (DRs) in Romania to obtain information about the spatial effects and distribution of deforestation and reforestation. Through fractal analysis, we provide a quantification of the degree of fragmentation of forested areas. Ultimately, this research contributes by assisting institutional interventions to alleviate the imbalances that are the consequence of deforestation in Romania.
Study Area
This study includes eight administrative regions (development regions; DRs) in Romania, namely, North-East, South-East, South Muntenia, South-West Oltenia, West, North-West, Center and Bucharest-Ilfov DRs (Figure 1).
This study includes eight administrative regions (development regions; DRs) in Romania, namely, North-East, South-East, South Muntenia, South-West Oltenia, West, North-West, Center and Bucharest-Ilfov DRs (Figure 1).The development regions of Romania were established in 1998 to facilitate the effective regional coordination in Romania following the European Union (EU) accession.According to the law 315/2004, development regions are not administrative-territorial units and are therefore not legal entities of the counties, thus posing challenges to forest management [39].Forest cover in Romania of 63,990 km 2 includes beech (Fagus sylvatica), fir (Abies alba), spruce (Picea abies), oak (Quercus robur), ash (Fraxinus excelsior) and pine (Pinus) [40].The largest forest area of 51.9% is in the Carpathian Mountains (formed by coniferous and deciduous species) followed by 37.2% in the hills (the Subcarpathians, the Getic Plateau, the Transylvanian Plateau, the Western Hills) and 10.9% in the plains (the West Plain and the Romanian Plain).Forest exploitations in Romania is in continuous growth in the Carpathian Mountains and in the hilly areas [20].The most deforested areas are in the The development regions of Romania were established in 1998 to facilitate the effective regional coordination in Romania following the European Union (EU) accession.According to the law 315/2004, development regions are not administrative-territorial units and are therefore not legal entities of the counties, thus posing challenges to forest management [39].Forest cover in Romania of 63,990 km 2 includes beech (Fagus sylvatica), fir (Abies alba), spruce (Picea abies), oak (Quercus robur), ash (Fraxinus excelsior) and pine (Pinus) [40].The largest forest area of 51.9% is in the Carpathian Mountains (formed by coniferous and deciduous species) followed by 37.2% in the hills (the Subcarpathians, the Getic Plateau, the Transylvanian Plateau, the Western Hills) and 10.9% in the plains (the West Plain and the Romanian Plain).Forest exploitations in Romania is in continuous growth in the Carpathian Mountains and in the hilly areas [20].The most deforested areas are in the Eastern Carpathians (counties Suceava (SV), Maramures , (MM), Harghita (HR), Covasna (CV), Vrancea (VN)) and in the Southern Carpathians (counties Arges , (AG), Vâlcea (VL), Hunedoara (HD)).
The analysis was conducted at the level of development regions because of the pronounced differences in the way forest resources have been managed between regions which in turn reflected the levels of deforestation and reforestation.
Image Preprocessing
Digital images obtained from the GFC (Global Forest Change) 2000-2012 database of the Department of Geographical Sciences at the University of Maryland were used as input for the fractal analysis. This database is the result of analyzing Landsat 7 ETM+ (30 m spatial resolution) data of forest areas during the period 2000-2012 [41], representing global information on forest disturbance with the highest spatial resolution currently available. The Landsat spatial resolution provides sufficient spatial detail for a good fractal analysis, and we used a time-series of satellite images representing annual time steps from 2000 to 2012 to quantify and analyze areas of deforestation and reforestation.
The initial projection of the Landsat GFC data was converted from WGS 1984 World Mercator (EPSG 3395) to the Stereographic 70 coordinate system (EPSG 31700).Specific information about Landsat imagery used for the analysis mask is presented in Table 1.The satellite images were stored in uncompressed color TIFF format by using ArcGIS and the raster images were transformed to vector (Raster to point) with the resolution of each pixel being equivalent to the dimension of a point.To extract surfaces for each territorial-administrative unit (UAT), a spatial join was conducted, allowing for points to be grouped for each territorial-administrative unit.The calculation of forest change areas for each UAT and for each year from 2000 to 2012 was repeated for the three types of Landsat 7 ETM+ assessments available in the GFC database (forest cover loss, forest cover gain and year of gross loss).The Landsat 7 ETM+ imagery was segmented using the Color Deconvolution vector H & E DAB plugin for the ImageJ software version 1.49t [42] and subsequently converted to binary images, using a threshold of 1-255.The binarized images only contained black pixels with 0 value (background pixels, non-fractal pixels, meaning non-forest areas) and white pixels with 255 values (foreground pixels, fractal pixels, meaning forested, deforested and reforested areas).
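The thresholding step can be illustrated with a small R helper, independent of the ImageJ plugin actually used in the study; img is assumed to be a numeric matrix of pixel values read from the exported TIFF.

```r
# Binarize a grey-level image with the 1-255 threshold described above:
# value 0 stays background (non-forest), anything in 1-255 becomes foreground
# (forested, deforested or reforested pixels).
binarize <- function(img, lower = 1, upper = 255) {
  ifelse(img >= lower & img <= upper, 255L, 0L)
}

# bin <- binarize(img)
```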
Fractal Analysis
We conducted a fractal analysis of the digital binarized images of the forest area evolution affected by deforestation and reforestation for eight DRs.Fractal analysis was conducted because of the ability to provide additional textural information on the effects incurred by deforestation which provides complementary information on forest disturbance as compared to traditional per-pixel forest disturbance analysis.Fractalyse 2.4.software (Research Centre ThéMA, University of Franche-Comté, Besançon, France) was employed for fractal analysis [43,44].
The box-counting dimension was used for determining the fractal dimension of the forest areas (D Surf-dil0 ), the fractal dimension of the dilated forest areas (D Surf-dil3 ) and the fractal dimension of the border of the dilated forest areas (D Bord-dil3 ).This is the most used method to estimate fractal dimensions.The image is covered by a quadratic grid and the grid size ε is varied.For each value ε, the number of squares N(ε) containing any black pixels are counted.In Fractalyse 2.4, grids of different sizes are overlaid on a two-dimensional black and white image of forest areas and the number of black cells is counted for each grid size.The black cells represent the forest areas while the white ones indicate an absence of forest areas.A double logarithmic plot of N(ε), the number of occupied cells versus 1/ε is drawn and subsequently the slope of a linear regression gives an estimate of the box dimension.
The mathematical formula used for the calculation of the box-counting fractal dimension was:

$$D_{B\text{-}C} = \lim_{\varepsilon \to 0} \frac{\log N(\varepsilon)}{\log(1/\varepsilon)},$$

where $D_{B\text{-}C}$ is the box-counting fractal dimension, $\varepsilon$ is the side length of the box, and $N(\varepsilon)$ is the number of non-overlapping and contiguous boxes of side $\varepsilon$ required to cover the area of the fractal object [40,45].
As the zero limit cannot be applied to digital images, $D_{B\text{-}C}$ was estimated as

$$D_{B\text{-}C} \approx d,$$

where $d$ is the slope of the graph of $\log N(\varepsilon)$ against $\log(1/\varepsilon)$ [45].
The box-counting algorithm cannot offer a perfect coverage of a fractal pattern, so the number of occupied boxes is approximative.Hence, the estimation of the box-counting dimension requires a sufficient number of levels of analysis [43] but also the largest possible range of box sizes [46].
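The box-counting estimate described above (computed with Fractalyse in the study) can be sketched in R for a binary matrix bin (0 = background, non-zero = forest pixel); the grid sizes chosen here are an assumption.

```r
# Estimate the box-counting dimension: count, for each box side eps, the boxes
# containing at least one foreground pixel, then regress log N(eps) on log(1/eps).
box_counting_dimension <- function(bin, sizes = 2^(0:6)) {
  n_boxes <- sapply(sizes, function(eps) {
    rows <- ceiling(nrow(bin) / eps)
    cols <- ceiling(ncol(bin) / eps)
    occupied <- 0
    for (i in seq_len(rows)) {
      for (j in seq_len(cols)) {
        block <- bin[((i - 1) * eps + 1):min(i * eps, nrow(bin)),
                     ((j - 1) * eps + 1):min(j * eps, ncol(bin)), drop = FALSE]
        if (any(block != 0)) occupied <- occupied + 1
      }
    }
    occupied
  })
  fit <- lm(log(n_boxes) ~ log(1 / sizes))
  list(dimension = unname(coef(fit)[2]),       # slope d, the estimate of D_B-C
       r = cor(log(n_boxes), log(1 / sizes)))  # quality check (accepted if > 0.99)
}
```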
Four fractal dimensions were considered in this study: the fractal box-counting dimension of the forest areas (DSurf-dil0), the fractal box-counting dimension of the dilated forest areas (DSurf-dil3), the dilation fractal dimension (Ddil3) and the fractal box-counting dimension of the border of the dilated forest areas (DBord-dil3). We analyzed DSurf-dil3, Ddil3 and DBord-dil3 to determine whether large variation in the fractal dimension was a result of a transition from a fractal organization with a lot of free spaces (DSurf-dil0) to another organization with fewer free spaces, or a more compact organization (DSurf-dil3) (Figure 2). The first step of the morphological characterization of the forest patterns involves measuring DSurf-dil0, which describes how the forest mass is concentrated across scales on a given area. DSurf-dil0 can take any value between 0 and 2. DSurf-dil0 equal to 0 corresponds to a limiting case where the pattern is made up of a single point (e.g., a single patch of forest of only 0.022 km2). DSurf-dil0 < 1 corresponds to a pattern made up of unconnected elements, detached clusters (the patches of forest are highly dispersed and mostly isolated from one another, similarly to a Fournier dust). DSurf-dil0 > 1 corresponds to a mix of connected elements forming large clusters, connected elements forming small clusters, and isolated elements. When DSurf-dil0 is close to 2, the forest pattern is quite uniformly distributed following only a single scale, with a high degree of homogeneity. In this case, the forest is uniformly distributed and most elements are connected to each other; the overall shape is then mainly a single large and compact cluster. An increasing DSurf-dil0 indicates a denser and more homogeneous pattern of forest areas, with increasing filling of the existing gaps. A decreasing DSurf-dil0 indicates a growing fragmentation into multi-scale clusters and a more uneven distribution of forest areas. This uneven fragmentation of deforested or reforested areas can be imposed by human activity or by natural conditions themselves. The quality of the fractal dimension estimation is controlled using the Pearson correlation coefficient R. Estimations were accepted when R exceeded the value of 0.99 [47]. If the fit between the empirical curve and the estimated curve is low, it can be concluded that either the pattern being studied is not fractal, or that it is multi-fractal [43].
The second step of the morphological characterization of the forest pattern involves measuring Ddil3. The principle of dilation is based on an algorithm introduced by Minkowski and Bouligand [48] to establish the dimension of an object using measure theory; it involves surrounding each object point with additional border points of increasing size at each iteration step. During the first dilation, each pixel is surrounded by a border of one pixel in width, so the structuring element is a square of 3 × 3 pixels. At the second iteration step, each pixel is surrounded by a border of two pixels in width, and the structuring element is then a square of 5 × 5 pixels [44]. At the third iteration step, each pixel is surrounded by a border of three pixels in width, and the structuring element is then a square of 7 × 7 pixels.
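A compact R sketch of the dilation step, assuming the binary matrix bin from the preprocessing; three successive dilations with a one-pixel border reproduce the 3 × 3, 5 × 5 and 7 × 7 structuring-element sequence described above.

```r
# One Minkowski dilation: every foreground pixel is surrounded by a border of
# k pixels, i.e. a (2k + 1) x (2k + 1) square structuring element.
dilate <- function(bin, k = 1) {
  out <- matrix(0L, nrow(bin), ncol(bin))
  idx <- which(bin != 0, arr.ind = TRUE)
  for (p in seq_len(nrow(idx))) {
    r <- idx[p, 1]; c <- idx[p, 2]
    out[max(1, r - k):min(nrow(bin), r + k),
        max(1, c - k):min(ncol(bin), c + k)] <- 255L
  }
  out
}

# Three-fold dilation (dil3), after which the smallest free spaces disappear.
dil3 <- dilate(dilate(dilate(bin)))
```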
In the third step, we measured the box-counting dimension of forested patterns after three dilations.With three dilations, all details referring to 1.078 km 2 areas were unclear, meaning that the smallest spaces disappeared.The pattern is fully connected in a fractal manner and the fractal dimension of the dilated forested area (D Surf-dil3 ) should be higher than forest-free areas; D Surf-dil3 should also be higher than 1.If D Surf-dil3 differs substantially from D Surf-dil0 , the way of combining the smallest free spaces with the surrounding forest pixels follows a fractal organization very different from the spatial organization of the forest pattern at larger scales.On the contrary, if D Surf-dil3 is not very different from D Surf-dil0 , the smallest free spaces do not follow a multi-scale pattern; they only influence the value of D Surf-dil0 slightly by introducing noise into the measure [49].
The fourth step of the morphological characterization of the forest pattern was to measure the box-counting dimension of its dilated borders (D Bord-dil3 ).The morphology of the borders is different from that of the areas and involved different ways to describe the geometry of forest areas.We used fractal analysis of dilated borders in order to eliminate the tiniest free spaces whose spatial forest organization is not well known and tends to be rather irregular.
Because of the forests' fragmentation and heterogeneity, many borders of forest areas tend to be irregular, with DBord-dil3 greater than 1. For more compact patterns, DSurf-dil3 tends to be higher than DBord-dil3 and vice versa, because the shapes of forested patterns are more diverse in the largest counties. This may lead to slight differences in the absolute values of the fractal dimensions, without affecting the general conclusions of the present paper.
Spatial Evolution of Forest Covered Areas
Initially, the spatial evolution of forested, deforested and reforested areas was determined using ImageJ based on the Landsat GFC data (Table 2). The forest-covered areas generally decreased during the period 2000-2012. Only in a few DRs (Center DR and S-W Oltenia DR) was the observed reforestation higher than the deforestation. At the national level, DSurf-dil0 of forested areas was 1.50, which indicated an increased complexity, fragmentation and irregular arrangement of predominantly connected forested clusters. Due to the accentuated non-uniformity of the spatial distribution of the reforested and deforested areas, DSurf-dil0 was as low as 0.93 for deforested and 0.88 for reforested areas, respectively, indicating primarily detached forest clusters (Table 3). We observed that the West and Central development regions (DRs) had the highest values of DSurf-dil0, due to the greater homogeneity of their forested areas, arranged in contiguous clusters, followed by the North-West, West, South-West and North-East DRs. Scarcely forested DRs such as South-East Dobrogea, South Muntenia and Bucharest-Ilfov exhibited reduced values of DSurf-dil0, lower than the national average. These DRs comprise counties characterized by limited areas of forest with a dominant relief of plains or low hills (Figure 3a). A clear difference was observed between the values of the fractal dimension for forested areas as compared to deforested/reforested areas (Figure 3b).
The reforested areas display a smaller fractal dimension because these areas are smaller, but on the other hand they cover a large part of the fragmentation and irregularity induced by the deforestation.The fractal dimension DSurf-dil0 of a reforested area is higher than the deforested area in South-West Oltenia and Center DRs where important reforestation campaigns were implemented.On the contrary, differences are observed between DSurf-dil0 of deforested areas and reforested areas in the Bucharest-Ilfov region.The forests appear as small clusters, isolated patches or are distributed along the waters.Particularly, values were smaller (1.25) for the counties characterized by plains, as Brăila (BR) and Călăras , i (CL) and also for Bucharest (B) county (0.92) (Figure 4a).The reforested areas display a smaller fractal dimension because these areas are smaller, but on the other hand they cover a large part of the fragmentation and irregularity induced by the deforestation.The fractal dimension DSurf-dil0 of a reforested area is higher than the deforested area in South-West Oltenia and Center DRs where important reforestation campaigns were implemented.On the contrary, differences are observed between DSurf-dil0 of deforested areas and reforested areas in the Bucharest-Ilfov region.
Dilated Fractal Dimension (Ddil3)
For the forested areas, Ddil3 provides interesting information about the fractal dimension obtained after dilation (Figure 5a,b). Two situations can be observed: the usual case of counties characterized by lower landscapes (plains, low hills, sparsely forested counties), where the fractal dimension Ddil3 is less than DSurf-dil0 (e.g., DSurf-dil0 − Ddil3 for Galați (GL) was 0.32), and an atypical situation for counties with rugged landscapes (mountainous, hilly, densely forested), where Ddil3 was greater than DSurf-dil0 (e.g., Ddil3 − DSurf-dil0 for Vrancea (VN) was 0.25) (Figure 6a). The main cause was that complex, compact forestry areas markedly influence the Ddil3 computation.
For the deforested areas, the relation Ddil3 > DSurf-dil0 disappeared and was replaced by Ddil3 < DSurf-dil0. DSurf-dil0 − Ddil3 varies between 0.2 in Bucharest (B) and 0.6 in Vâlcea (VL). At the DR level, the lowest values were found in the Bucharest-Ilfov and South-Muntenia regions, with the lowest extent of deforestation, and in the West and South-West Oltenia DRs, with the most deforestation (Figure 6b). The same situation was observed for the reforested areas, where DSurf-dil0 − Ddil3 varied between 0.18 in Bucharest (B) and 0.76 in Bihor (BH). The difference was larger than in the case of the deforested areas due to the lesser reforestation of forest areas (Table 4). At the level of DRs, the lowest values were recorded in the South Muntenia, South-East Dobrogea and Bucharest-Ilfov regions, where the reforestation was less pronounced, while the highest values were recorded in the West, North-West and South-West Oltenia DRs, with significant reforestation (Figure 6c).
Fractal Dimension of the Dilated Forest Areas (D surf-dil3 )
After applying a three-fold dilation (Ddil3) (Figure 7a), the fractal dimension DSurf-dil3 of the forest areas increased compared to DSurf-dil0 in all situations. The South-West Oltenia, North-East, North-West, Center, and West DRs, with rugged landscapes including mountains and high hills, had a lower increase of DSurf-dil3 compared to DSurf-dil0, with percentage increases between 9.6% and 23.9% according to the fractal analysis (Figure 7b).
For the Bucharest-Ilfov, South-Muntenia, and South-East Dobrogea DRs, with landscapes characterized by plains and low hills, there was a decrease of the fractal dimension by 4.5%-7.7%. The situation was much clearer at the county level (Figure 8a). The counties with mountainous landscapes and widespread forested areas (Harghita (HR), Suceava (SV), and Caraș-Severin (CS)) had lower DSurf-dil3 values compared to DSurf-dil0 (between 2.8% and 4.3%). In counties dominated by landscapes of plains/low hills with scarce forested areas (Bucharest (B), Călărași (CL), Teleorman (TR)), more significant increases of DSurf-dil3 compared to DSurf-dil0 were observed (with values between 11.3% and 36.6%).
It follows that as the difference between DSurf-dil3 and DSurf-dil0 increased, the fractal fragmentation also increased, especially for counties characterized by plain landscapes, because the artificial "fractalization" of space induced by the dilation was more pronounced. At the level of DRs, DSurf-dil3 of the deforested areas increased proportionally with the degree of dilation, except for a higher increase of DSurf-dil3 in South-West Oltenia compared to the North-East DR (Table 5). This may be explained by the smaller areas of forest in Vaslui (VS), Iași (IS) and Botoșani (BT), mainly arranged in strips. At the county level, there was a very strong positive correlation between DSurf-dil0 and DSurf-dil3, with R2 = 0.99, indicating that the deforested area was relatively small, compact and unevenly distributed.
The Center and North-West DRs, with varied landscapes including mountains, hills and highlands, had a lower increase of DSurf-dil3 compared to DSurf-dil0 (an increase of 31.1% to 32.6%). The Bucharest-Ilfov, South-East Dobrogea, and South-Muntenia DRs, with lower plains and hills, showed a dilation-induced increase of the fractal dimension with values between 47.5% and 109.8%. The mountain landscapes of highly deforested counties (Suceava (SV), Vrancea (VN), Harghita (HR)) showed a lower increase of DSurf-dil3 compared to DSurf-dil0, with values between 19.6% and 34% (Figure 8b). In counties dominated by lower plains and hills landscapes and with a low degree of forest areas, such as Bucharest (B), Constanța (CT), and Vaslui (VS), the increases of DSurf-dil3 compared to DSurf-dil0 were larger, with values between 54.3% and 161.5%. DSurf-dil3 of the reforested areas increased proportionally to the dilation, with the exception of the higher increase in the West DR compared to the North-West DR (due to higher values of DSurf-dil3 in Caraș-Severin (CS) and Timiș (TM)), because the reforestation has been conducted in small and relatively uniform patches. At the county level, there was a strong correlation between DSurf-dil0 and DSurf-dil3 (R2 = 0.83), but this correlation was weaker than for the deforested areas because the reforestation was distributed more chaotically. Regions with varied landscapes including mountains, hills and highlands, such as South-West Oltenia, North-West and Center, showed a lower increase of DSurf-dil3 compared to DSurf-dil0, between 30.5% and 40.2%, compared to DRs with landscapes of lower plains and hills, such as Bucharest-Ilfov, South-East Dobrogea, and South Muntenia, where the dilation-induced increase of the fractal dimensions ranged between 58.7% and 211.8%. The values of DSurf-dil3 − DSurf-dil0 for the reforested areas were much higher than those for the deforested areas because of the high extent of fragmentation. However, the counties with landscapes dominated by mountains and dense forests (Gorj (GJ), Suceava (SV), Maramureș (MM)) showed a lower increase of DSurf-dil3 compared to DSurf-dil0 (from 35% to 18.3%). In counties with landscapes dominated by lower plains and hills with limited areas of forest (Bucharest (B), Ilfov (IF)), a more pronounced increase of DSurf-dil3 was detected compared to DSurf-dil0, with values between 61.3% and 334.1% (Figure 8c).
Fractal Dimension of the Dilated Forest Areas Borders (D Bord-dil3 )
Extracting the border of the three-fold dilated images (dil3) allowed for the calculation of the fractal dimension DBord-dil3. DBord-dil3 provides information about the fractal shape of the forested area borders, and it increases with the growing irregularity of the borders (Figure 9a,b).
At the county level, the smallest fractal dimensions of forested areas (indicating lower fragmentation and complexity of the border) were found for the forested areas of Bucharest (B), Vrancea (VN) and Caraș-Severin (CS) (very small but compact), whereas the largest fractal dimensions were found in Vaslui (VS), Galați (GL) and Iași (IS), as the forested areas were distributed as elongated strips (Figure 10a).
Lowest values of DBord-dil3 were recorded in South-East Dobrogea and Bucharest-Ilfov where deforestation was sporadic and relatively compact, the highest in N-E where the fragmentation of forest areas was intense due to deforestations and South-West Oltenia.At the county level, for Bucharest, but also for Vrancea (VN), Covasna (CV), and Caraş-Severin (CS) counties for deforested areas, the lowest DBord-dil3 indicated fragmentation and less complexity due to very small but compact forested areas (where the deforestation was of a compact shape) and the highest fractal dimensions were found in Vaslui (VS), Cluj (CJ), and Iași (IS) counties (where the deforestation was shaped as elongated strips) (Figure 10b).At the level of DRs, the lowest values of DBord-dil3 were recorded in South-Muntenia, South-East Dobrogea and Bucharest-Ilfov regions, with small and relatively homogeneous deforested areas.The highest values of DBord-dil3 were registered in DRs with widespread deforestation of more irregular shape due to the configuration topography, and with forest poaching, such as the Center, West and North-West.The lowest values of DBord-dil3 of reforested areas were calculated, similarly to deforested areas, in South Muntenia, South-East Dobrogea and Bucharest-Ilfov DRs, with low and relatively homogeneous reforestation, especially by plantation.The highest values of DBord-dil3 were registered in DRs with fragmented landscapes (e.g., Center, West and North-West DRs), where self-regeneration and plantation campaigns were imposed (Table 6).
At the county level, the lowest fractal dimension values (between 0.57 and 1.14) were calculated for the borders of forest areas in the regions of plains or low hills, e.g., Bucharest (B), Teleorman (TR), Botoșani (BT), and Vaslui (VS) counties, indicating their fragmentation and lower complexity. In contrast, the highest fractal dimension values of 1.36 and 1.44 were calculated for Alba (AB), Argeș (AG), Harghita (HR), and Suceava (SV) counties (Figure 9c). The lowest values of D Bord-dil3 were recorded in South-East Dobrogea and Bucharest-Ilfov, where deforestation was sporadic and relatively compact, and the highest in the North-East, where fragmentation of forest areas was intense due to deforestation, and in South-West Oltenia. At the county level, for deforested areas, the lowest D Bord-dil3 values, found in Bucharest but also in Vrancea (VN), Covasna (CV), and Caraș-Severin (CS) counties, indicated fragmentation and less complexity due to very small but compact forested areas (where the deforestation was of a compact shape), and the highest fractal dimensions were found in Vaslui (VS), Cluj (CJ), and Iași (IS) counties (where the deforestation was shaped as elongated strips) (Figure 10b).
At the level of DRs, the lowest values of D Bord-dil3 were recorded in the South-Muntenia, South-East Dobrogea and Bucharest-Ilfov regions, with small and relatively homogeneous deforested areas. The highest values of D Bord-dil3 were registered in DRs with widespread deforestation of more irregular shape, due to the configuration of the topography and to forest poaching, such as the Center, West and North-West. The lowest values of D Bord-dil3 of reforested areas were found, similarly to deforested areas, in the South-Muntenia, South-East Dobrogea and Bucharest-Ilfov DRs, with low and relatively homogeneous reforestation, especially by plantation. The highest values of D Bord-dil3 were registered in DRs with fragmented landscapes (e.g., the Center, West and North-West DRs), where self-regeneration and plantation campaigns were implemented (Table 6). Data source: Data were extracted from images provided by [41].
Forest Changes in Romania
Our results showed important spatial forest changes in the Romanian DRs during recent decades. By using satellite-based information on forest changes and fractal analysis, we provide information describing the state of and changes in forest areas in all eight DRs of Romania during the period 2000-2012. The satellite-based per-pixel assessment of forested, deforested and reforested areas was complemented by fractal analysis revealing morpho-structural and textural differentiations of forest disturbance in different DRs, providing a quantification of changes in the uniformity, homogeneity/heterogeneity and compaction/fragmentation of forests.
Forest areas in Romania are characterized by a general downward trend over the period covered by the study, due to an increased level of legal and illegal logging [50]. This development reflects the economic changes during this period and was spurred by changes in legislation which encouraged deforestation [11,[36][37][38]. In a longer historical perspective, forest cover in Romania has been characterized by significant changes over the last 100 years, mostly due to socio-political factors that have had a significant impact on forest management [51]. The 1926 to 1948 period was characterized by a decrease in private land ownership, whereas from 1948 to 1989, in the socialist period, the forests were entirely state-owned. In the post-socialist period, the increase in private lands is a result of the three privatization laws adopted from 1990 to 2004 [27,52]. Logging was particularly intense during the socialist era in the 1960s and 1970s, but forest harvesting declined during the 1990s as a consequence of the collapse of the timber market, lower institutional strength and weak law enforcement, triggering an increase in illegal logging at the expense of planned logging [27,51,52]. Areas of primary forest in Romania covered approximately 4000 km² in 1984, but this was reduced to 2185 km² in 2004 [28,53], and consequently the habitats for species were also diminished [24].
Usage of Fractal Analysis for Quantification of Forest Changes
We have evaluated the forest resources [41] using the box-counting method [44], determining the fractal dimension of the forest areas (D Surf-dil0), the fractal dimension of the dilated forest areas (D Surf-dil3), the fractal dimension of the border of the dilated forest areas (D Bord-dil3) and the dilation fractal dimension (D dil3). Our results add new insights into forest cover change compared to previous studies based on fractal analysis [3,11,13,[36][37][38]. The findings of this study clearly confirmed the hypothesis that fractal analysis can provide an interesting new way of quantifying the degree of fragmentation and organization of forest areas characterized by deforestation and reforestation. Here, we used fractal analysis to study the characteristics of forestry morphology for the various levels of forested, deforested and reforested areas in Romania. Fractal analysis was used because of the ability of the fractal dimension to provide a quantitative description of the forest environment. By measuring the contrast of forestry patterns across scales, this method is applicable to the evaluation and characterization of forest area types. Through fractal analysis, we obtained information on deforestation and reforestation/regeneration complementary to classical change detection analysis based on image classification, because fractal analysis is able to analyze irregular spatial structures.
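As a rough illustration of the box-counting estimator referred to above, the sketch below computes the fractal dimension of a binary mask by counting occupied boxes at several scales and fitting the slope of log(count) versus log(1/box size). The function name, the chosen box sizes and the random input mask are assumptions made for illustration; the actual implementation and parameters used for the Romanian rasters may differ.

```python
import numpy as np

def box_counting_dimension(mask: np.ndarray, box_sizes=(1, 2, 4, 8, 16, 32, 64)) -> float:
    """Estimate the box-counting (fractal) dimension of a binary mask.

    For each box size s, the image is tiled into s x s boxes, the boxes
    containing at least one foreground pixel are counted, and the slope of
    log(count) versus log(1/s) estimates the fractal dimension.
    """
    counts = []
    for s in box_sizes:
        # Pad so the image divides evenly into s x s boxes.
        h = int(np.ceil(mask.shape[0] / s)) * s
        w = int(np.ceil(mask.shape[1] / s)) * s
        padded = np.zeros((h, w), dtype=bool)
        padded[:mask.shape[0], :mask.shape[1]] = mask
        # Reduce each s x s block to a single occupancy flag and count them.
        blocks = padded.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(blocks.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Example with a synthetic "forest" mask (random pixels); a real input would be
# a binary raster of forested, deforested or reforested pixels per county or DR.
rng = np.random.default_rng(0)
forest_mask = rng.random((256, 256)) > 0.7
print(f"D_Surf-dil0 ~ {box_counting_dimension(forest_mask):.2f}")
```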
In the fractal analysis of forested areas, we used the box-counting method to describe spatial patterns [11,[36][37][38] and the dynamics of deforestation [34]. In previous studies, the fractal analysis of forest areas was carried out using the Fractal Fragmentation Index [13,36] or the Minkowski Dimension and Pyramid Dimension [33]. Until now, the dilation method had been used to evaluate the quality of commune or urban built-up areas [44]; here we clarified the importance of the dilation method for understanding the textural organization of forest areas and the effects generated by deforestation and reforestation. The main findings are summarized and discussed in the following five points:
1. We show that fractal analysis can offer valuable textural information about the spatial organization of forest resources. Fractal analysis also highlights the spatial effects of deforestation on forest resources.
2. D dil3 was shown to be a useful index that indicates the degree of compactness of forested areas. D dil3 showed higher values than D Surf-dil0 only for densely forested areas with a great degree of compactness. Otherwise, for less forested, deforested or reforested areas, D dil3 was lower than D Surf-dil0.
3. The difference between D Surf-dil3 and D Surf-dil0 can be used as an index of the complexity of forest area organization: the value of D Surf-dil3 − D Surf-dil0 increases for more heterogeneous, fragmented and scattered forest areas, whereas if forest areas were compact and organized in big clusters, the difference tended to be lower (see the sketch after this list).
4. D Bord-dil3 offered important textural information about deforestation and reforestation. The more compact the deforestation or the reforestation, the lower the value of D Bord-dil3.
5. The fractal analysis highlighted morpho-structural and textural differences between forested, deforested and reforested areas grouped by DRs or counties. Differences were observed between landscapes dominated by mountains and high hills (more forested, with a compact organization) and landscapes of lower plains or hills (less forested, more fragmented, with many small and isolated clusters). Obvious differences were also found between the morpho-structural and textural patterns of forested areas (more compact) and those of deforested and reforested areas (more isolated points and patches randomly distributed). At the level of DRs, areas of deforestation not followed by sufficient reforestation show a change from a compact structure of forest areas to a chaotic structure with multiple free spaces of non-forested pixels.
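To make point 3 concrete, the short end-to-end sketch below (based on the same dilation and box-counting ideas sketched above) contrasts a compact toy mask with a scattered one: the difference D Surf-dil3 − D Surf-dil0 comes out noticeably larger for the fragmented mask. All names and the synthetic inputs are illustrative assumptions; a real analysis would run on the county- or DR-level forest masks.

```python
import numpy as np
from scipy import ndimage

def box_dim(mask, sizes=(1, 2, 4, 8, 16, 32)):
    # Box-counting dimension: slope of log(occupied boxes) vs log(1/box size).
    counts = []
    for s in sizes:
        h = -(-mask.shape[0] // s) * s  # round shape up to a multiple of s
        w = -(-mask.shape[1] // s) * s
        p = np.zeros((h, w), dtype=bool)
        p[:mask.shape[0], :mask.shape[1]] = mask
        counts.append(p.reshape(h // s, s, w // s, s).any(axis=(1, 3)).sum())
    return np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)[0]

# Two toy masks: a compact block of "forest" and a scattered one.
compact = np.zeros((128, 128), dtype=bool)
compact[32:96, 32:96] = True
rng = np.random.default_rng(2)
scattered = rng.random((128, 128)) > 0.9

for name, mask in [("compact", compact), ("scattered", scattered)]:
    d0 = box_dim(mask)
    d3 = box_dim(ndimage.binary_dilation(mask, iterations=3))
    # The difference D_Surf-dil3 - D_Surf-dil0 is larger for fragmented masks.
    print(f"{name}: D_Surf-dil0={d0:.2f}, D_Surf-dil3={d3:.2f}, diff={d3 - d0:.2f}")
```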
Perspectives for Sustainable Forest Management
The current study shows the importance of the dilation method in understanding the textural organization of forest areas and the effects generated by deforestation and reforestation. The method, based on fractal analysis, has the advantage of revealing the effects and patterns of forest disturbance altogether. The fractal analysis of both D Surf and D Bord is involved in characterizing forestry morphology.
Modelling deforestation through fractal analysis, targeting forest textural differentiation, provides interesting complementary information to traditional per-pixel deforestation mapping. By combining such methods, it is expected that decision makers can gain an improved starting point for understanding the underlying driving forces of deforestation, which is instrumental for the optimal planning of sustainable forestry and the future exploitation of forest resources. Specifically, the morpho-structural and textural information on forest fragmentation and heterogeneity provided by fractal analysis can be of added value in assessing the risk of landslides, floods and flash floods, which is higher in the watersheds of DRs characterized by high deforestation rates in the Carpathians and Subcarpathians [21,22]. Forest areas also have an important ecological function in relation to ecosystem services such as biodiversity. Here, increased landscape fragmentation is known to be a major determinant of biodiversity decline [54,55], and fractal analysis offers a powerful way of quantifying landscape or forest fragmentation and of assessing temporal changes therein from remote sensing time-series.
However, the model did not sufficiently highlight the complexity of morphology of forest areas at a local scale, such as the ratio of clusters and isolated clumps of forest.This shortcoming may be resolved by a complementary model of local fractal analysis including mass-dimension (for areas) and ruler-dimension (for the border of the principal cluster).
It must be kept in mind that the Landsat based global forest cover (GFC) maps [41] used as an input for the fractal analysis are based on definitions of forest loss.Forest loss in the GFC data is defined as the disturbance or complete removal of tree cover canopy (below 25% tree canopy cover) including any conversion of natural forests by local communities, also including deforestation for establishment of plantations.Therefore, the forest mapping should be intersected with ground knowledge for the correct interpretation of patterns of forest loss.Moreover, the Landsat based forest cover maps are based on optical remote sensing that is known to suffer from shortcomings in the mapping of forest degradation, mainly due to the lack of sensitivity of optical instruments to changes within and below the forest canopy [56].
Finally, a more detailed analysis, e.g., differentiating the type of forests (deciduous or coniferous, private or state-owned), might help in establishing novel forest management better suited for practical implementation.
Conclusions
Fractal analysis of forest areas provides the possibility to calculate the fractal dimension both of shapes and areas.Therefore, the fractal analysis is a tool to evaluate spatially explicit geographic phenomena in more detail and can improve our knowledge of the spatial organization of disturbances in forested areas.We analyzed changes in forest resources from Romanian development regions affected by both deforestation and reforestation by using fractal analysis.Through calculation of four forest area fractal dimensions we generated the quantitative morpho-structural and textural information about the degree of uniformity, fragmentation, heterogeneity and homogeneity complementary to traditional non-textural estimates of changes in the spatial extent of deforestation and reforestation.Therefore, we recommend the application of fractal analysis to the time-series of satellite remote sensing data as a new approach to providing additional information for the improvement of forest area management.
Figure 1 .
Figure 1.Geographical study area by outlining the development regions in Romania.
Figure 2 .
Figure 2. Workflow of data pre-processing, GIS analysis and fractal analysis of deforestation processes for Romania.
Figure 3 .
Figure 3. (a) D Surf-dil0 of forested, deforested and reforested areas from development regions (DRs) of Romania. (b) Boxplots with notches represent the median, interquartile and extreme values of D Surf-dil0 of forested, deforested and reforested areas from DRs of Romania.
Figure 5 .
Figure 5. (a) D dil3 of forested, deforested and reforested areas from development regions of Romania. (b) Boxplots with notches represent the median, interquartile range and extreme values of D dil3 of forested, deforested and reforested areas from Romanian DRs.
Figure 7 .
Figure 7. (a) D Surf-dil3 of forested, deforested and reforested areas from development regions of Romania; (b) Boxplots with notches represent the median, interquartile range and extreme values of D Surf-dil3 of forested, deforested and reforested areas from DRs of Romania.
Figure 9 .
Figure 9. (a) D Bord-dil3 of forested, deforested and reforested areas from development regions of Romania; (b) Boxplots with notches represent the median, interquartile range and extreme values of D Bord-dil3 of forested, deforested and reforested areas from DRs of Romania.
Figure 10 .
Figure 10.(a) The fractal box-counting dimension of the border of the dilated forest areas (D Bord-dil3 ) of forested areas; (b) The fractal box-counting dimension of the border of the dilated forest areas (D Bord-dil3 ) of deforested areas; (c) The fractal box-counting dimension of the border of the dilated forest areas (D Bord-dil3 ) of reforested areas.
Table 2 .
Characteristics of forest resources per development region during the period 2000-2012.
Table 4 .
Fractal dimension of forested, deforested and reforested areas (D dil3 ) from development regions of Romania between 2000 and 2012.
IS RESTORATIVE JUSTICE AN APPROPRIATE LEGAL REMEDIATION FOR SEXUAL VIOLENCE?
This paper questions the applicability of restorative justice in cases of sexual violence. The specific nature and serious consequences of sexual violence are the reason why this question arises. In order to find the answer, the authors present the characteristics, mechanisms and nature of restorative justice, concurrently offering a comparison of the arguments in favor of and against the applicability of restorative justice to this particularly sensitive type of criminal offence. Together with a review of diverse theoretical approaches to this matter, the authors test the applicability of restorative justice in cases of sexual violence in Bosnia and Herzegovina. In this paper, normative, comparative and historical scientific methods have been used.
INTRODUCTION
The establishment of restorative justice, brought about by the criticism of the application of retributive justice over the centuries, gave hope to both scholars and practitioners that the voice of victims would be heard (more) loudly and that a new cathartic approach toward the perpetrator would facilitate the preventive function, which is a goal of the criminal law system 1 . In 1970, restorative justice, which was a natural method of solving conflict in some native cultures for centuries 2 , turn to be modern criminal law movement that spread fast within contemporary criminal law systems. From that period until now, restorative justice has gone through growing pains 3 and in the contemporary criminal law it has consistently gained credibility as a powerful alternative to the traditional responses to crime throughout the world. For those states that haven't introduced it at all or just minimally, there are recommendations given on the international level for it to be introduced and applied into the criminal law 4 . A good example is the recommendation to the Member States to "facilitate the referral cases as appropriate to restorative justice services, together with establishment of procedures and guidelines" from Article 12 of Victims Directive 2012/29/EU, which raises the awareness of the necessity for turning towards restorative justice. 1 Marilin Armour states that eighty-five studies and four meta analyses that have been generated over the past thirty years show consistently high rates of participant satisfaction in a variety of forms of restorative justice applications. For example, the recent meta analysis of 12 000 juveniles in juvenile cases has shown "25 percent reduction in recidivism, leading the researchers to claim that victim-offender mediation is a well-established, empirically supported intervention for reducing juvenile recidivism". See at: Marilyn Armour, "Restorative Justice: some facts and history", July 20, 2018. https://charterforcompassion.org/restorative-justice/restorative-justice-some-facts-and-history. 2 Aboriginals, Maori, Native Americans and other native cultures are representatives of cultures that used original forms of restorative justice for centuries in order to resolve the conflict and bring the justice. "The Origins of Restorative Justice". May 29, 2018, http:// www.restorativeapproaches.eu/origins. 3 In this paper the process of acceptance of restorative justice is referred to as "growing pains" since it has taken time for practitioners who had been applying traditional retributive system for years to understand benefits of restorative justice and positive results of its implementation. Until that has happened, it had been usually underestimated and ignored. Apart from the problem of introducing restorative justice de lege, nowadays it is questioned how much of restorative justice is applied de facto. 4 One of them is for instance Victims Directive 2012/29/EU, which is according to Emanuella Biffi "the first binding and enforceable legal instrument at the EU level that defines restorative justice and provides the legal safeguards to protect victims of crimes informed about and participating in a restorative justice process". See: Emanuella Biffi Even though we can testify to its evolution and point out numerous discussions and debates among scholars and practitioners on general aspects of restorative justice, it is important to wonder whether restorative justice is applicable to all criminal offences, regardless of their nature 5 . 
In particular, the subject of our interest in this paper is to establish whether restorative justice is applicable to sex offences.
The reason for our concern for this particular group of criminal offences is their specific nature. Sex offences are neither a new group of criminal offence nor a territorially limited negative social phenomena. Some of them existed even in the ancient times 6 and are present even nowadays throughout the world in a variety of forms 7 . The nature of sex offences is what makes them peculiar as compared to other criminal offences. They are deeply intimate criminal offences, which even when diverse in its forms, cause universally scientifically recognised consequences to victims 8 . Those 5 Their nature is correlated to the object of protection. Some criminal offences are more intimate (crimes against family, sexual liberties and moral), some of them don't have recognisable victims and therefore lack the intimate nature (crimes of endangerment). 6 For instance in Roman Law, Rape as a criminal offences existed to protect both males and females. More in: MarinkaCetinić, «Pogled na pitanje da li je potrebno preispitivanje inkriminacije silovanja», u Anali Pravnog fakulteta u Beogradu, Pravni fakultet u Beogradu, Beograd, (1995): 81. 7 See World Health Organization Report On Sexual And Reproductive Health, https://www.who.int/reproductivehealth/topics/violence/sexual_violence/en/. 8 Consequences of sexual violence usually fall into physical, psychological and social categories. Physical consequences are all possible forms of violation of physical health of victim. According to the WHO, they include bruises, vaginal bleedings, infections, fibroids, genital irritations, etc. One of them is (unwanted) pregnancy which is in 18% a common consequence of rape in Ethiopia and in 15-18% in Mexico. See more in: WORLD Report On Violence And Health, World Health Organization, 2002., www.who.int/violence.../ violence/world_report/en. Psychological consequences last even longer than physical ones and they directly control the possibility of a victim to reintegrate into society after the critical event. They might be: frigidity, lack of sexual interest, lack of self-confidence, lack of trust and confidence towards others, apathy, depression, etc. See in World Report On Violence And Health, World Health Organization, 2002., www.who.int/violence.../violence/ world_report/en, [last access: 10.07.2018]. Social consequences are dependent on the type of society and community a victim lives in. For example, in traditional societies, victims of sexual violence are stigmatized and in various cases, because of the lack of empathy and understanding from society, it founds them responsible for their own victimization. Instead of being protected as a victim, they are being judged and secondarily victimized. criminal offences are violating victim's sexuality, intimacy, and trust. They are leaving life-long feeling of shame and pain and questions unsolved such as "why has this happened to me?", "am I the one to be blamed?", "how am I going to trust another person ever again?". Perpetrators (sometimes) don't regret and even don't see their victim as a vulnerable human being who has to live with the consequences of their crime for their lifetime. If the tools of restorative justice involve the active participation of the victim and the offender in the resolution of the conflict and require their interaction, then we have to wonder what their interaction in cases of sex offences would be like or would it be possible at all?
The deep consequences of those crimes have required society to classify such actions as criminal offences and to establish criminal sanctions for its perpetrators. Presently, all countries in the world have parts of their criminal codes that define which actions are deemed sex offences 9 . The important fact here is that there is collective conscience of all of society for a reaction, a prevention, and a condemnation of those actions 10 . Having that in mind, it is doubtless that there is constant emphasis on prevention of those criminal offences. 9 The content of those provisions is globally varied in terms of forms of actions and sanctions. For example, entire chapters of some modern criminal codes are devoted to incrimination of sex offences: the Criminal Code Of Federation Of Bosnia And Herzegovina, Chapter 19, "Criminal Offences against sexual liberties and morality"; the Criminal Code Of the Republic Of Germany, Chapter 13, "Criminal Offences against sexual orientation"; the Criminal Code Of the Republic Of France, Chapter 2, "Criminal Offences against physical and psychological integrity". While some sex-violence-including actions are criminal offences in one country, they might not be criminal offences in other countries; while some of them in the same country are being decriminalized throughout time, some of them are being criminalized (For example, while in the Criminal Code Of the Federative Nation Republic Of Yugoslavia from 1951, in Article 186 had prescribed criminal offence "Unnatural adultery ", that referred to a sexual activity between men. Bosnia and Herzegovina, that is its successor doesn't incriminate homosexuality as unnatural adultery or as a criminal offence). 10 The aim of this paper is to establish if the application of restorative justice, for the purpose of crime prevention, would be possible for such deeply intimate cases, or would its mechanisms result in creation of an environment for the possibility of a secondary victimization. Assisted with dogmatic, normative, and comparative scientific methods, this discussion will fall into three main parts: Part II is a brief overview of restorative justice, establishing its aim and its principles and tools, so that they may be tested as to whether they would be appropriate to use for sex offence cases. Part III identifies all possible benefits of its application together with examples of successfully applied restorative justice in those cases. Finally, Part IV questions the applicability and eligibility of restorative justice in sex offence cases in Bosnia and Herzegovina, by bringing in the correlation of positive criminal law provisions referring to restorative justice and sex offences.
DEFINING RESTORATIVE JUSTICE
The term restorative justice is believed to have been invented and first used by Albert Eglash in 1977, who noted three types of a criminal justice system: "a retributive justice based on the penal system; a distributive justice based on the therapeutic treatment of the offender, and a restorative justice based on restitution, or compensation of the damage caused by the criminal offence". 11 During the nineteen-seventies, the inefficiency of the traditional retributive criminal justice system had become a "trigger" for many critical comments. Classical, retributive justice, that had consisted of the idea of punishment for the perpetrator and the condemnation for his criminal actions, existed for centuries. 12 According to Cragg "…punishment seems to conflict with values like forgiveness, mercy, compassion, and benevolence, all of which reflect non-punitive ways of solving problems of human conflict" 13 . The preventive function of the criminal law in practice couldn't have been guaranteed since some perpetrators negated its preventive function by perpetrating new criminal offences after being released from prison.
The relation between the perpetrator and the victim was primary unfriendly, their communication ended with the criminal offence, and during criminal procedure their interests were confronted. The victims' role were of secondary importance, even though the entire criminal proceeding started because the critical action towards that victim had been perpetrated. According to Christie 14 , the state stole the conflict from its original owners (perpetrator and victim). That is why this traditional, retributive system of criminal justice was the subject of criticism 15 . Hudson 16 states that retributive justice has failed in cases where victims are women and when it comes to intimate violence. Additionally, Cragg notes that "retributive justice is incompatible with values like compassion and it has inhuman aura" 17 .
This criticism has spurred the invention of new, alternative methods of responding to crime, which are more human, more efficient and deprived of their retributive properties 18 14 Nils Christie, "Conflicts as property", The British Journal On Criminology, (1977): 1. 15 According to Van Ness and Strong, retributive justice didn't end up in repentance and rehabilitation. Even the rehabilitation was "simply an impossible goal", and "...failed policy". They are referring to comments of other scholars, suggesting some of the reasons for failure of previous forms of justice: from the improper screening of participants, overoptimistic view of human nature, the idea that state can see the deficiencies of an offender, that subject them to the treatment with the aim to help them, but it fails. See more in: Daniel W. Van ment of victimology and victim protection movements, efforts have been made towards advocacy for more active involvement of the victim in the existing criminal justice system and giving the opportunity to the victim, but also the community, to participate in the process of rehabilitation of the offender. All together, this has resulted in the development of a modern concept of restorative justice, which is considered to be one of the most important achievements of the contemporary criminal justice system and criminal policy. 19 Tony Marshall has defined 20 restorative justice as "a procedure in which all parties, or participants in a specific criminal offense, meet, in order to decide together on the resolution of the consequences of the criminal offense and its implications for the future" 21 . According to Johnstone and Van Ness, it is "a constructive and progressive alternative to more traditional ways of responding to crime and wrongdoing" 22 . Additionally, they add that restorative justice "is a distinctive state of affairs that we should attempt to bring about in the aftermath of criminal wrongdoing , and which might be said to constitute the 'justice'" 23 .
Unlike the retributive criminal justice system, the restorative justice model changes the angle of viewing the phenomenon of criminality in the sense that the interest shifts towards the victim of a crime, whose interests are marginalized in the traditional criminal justice system. It also shifts the perspective towards the damaged relations between a perpetrator and a victim in a way that attempts to restore damaged relations by resolving 19 See in: Johannes Wheeldon , "Finding Common Ground: Restorative Justice and its Theoretical Construction(s) (online)", Contemporary Justice Review, Oxford: Routledge, 2009, 91. 20 According to Ćopić , the notion of restorative justice is one of those terms that have been mostly discussed in the modern criminal and criminological literature, but yet, there is no universally accepted definition. See in: Sanja Ćopić, "Pojam i osnovni principi restorativne pravde", Temida, vol. 10 the conflict that has occurred. Accordingly, since the main emphasis is not on the punishment but on restoration of the broken relationship, the application of restorative justice means more humane treatment of the perpetrator, avoidance of his/her stigmatization and social exclusion as well as the improvement of reintegration and inclusion. These are the key points that differ the restorative approach from the classic retributive criminal procedure.
Like any system, restorative justice rests on a set of certain coherent principles 24 that reflect its system and above all the complexity. These principles constitute a kind of a guideline for the further development of restorative justice programs and the ground work for their evolution. They recognize that the criminal offense is a violation of human relations and that the focus should be placed on repairing damages done due to wrongful actions rather than on the rules that have been violated 25 . According to Bazemore et al., restorative justice is based on four principles as it follows: 26 (1) the perception of the crime, first of all, as a violation of people and interpersonal relations;(2) correction of damage caused by a criminal offense; (3) creating the conditions that the offender understands and takes responsibility for his work (active responsibility); and (4) reintegration of the offender and the victim.
Sexual Offences-Cases' Sensitivity
If we take into consideration the definition of sexual violence given by the World Health Organization in their report titled World Report on Violence and Health, from 2002, it is quite visible that those offences include "any sexual act, attempt to obtain a sexual act, unwanted sexual comments or advances, or acts to traffic, or otherwise directed, against a person's sexuality using coercion, by any person regardless of their relationship to the victim, in any setting, including but not limited to home and work" 27 . Basically, those criminal offences are committed with the lack of will of a victim or by misuse of their confidence 28 , with intentions to violate another human's dignity.
Sexual offences are intimate offences. In those cases, there are usually two quite opposite sides of a story and a lack of witnesses. Consequently, the victim is the main witness. The intimate character of those criminal offences causes many of them to remain within the dark figure of crime 29 . Victims, mostly because of shame or fear, decide not to press charges (report) against perpetrators, and thus very often some of those actions are never prosecuted.
The consequences that they leave are extremely serious. According to the same WHO Report 30 , besides the physical effects, those offences leave their traces in the psyche of victims (together with all other possible con- 27 World Report On Violence And Health, 2002. 28 Centre for Disease Control and Prevention offers the definition of Sexual Violence, that corresponds to its nature. It is as a sexual act committed against someone without that person's given consent freely. It includes: completed or attempted forced penetration of a victim, completed or attempted alcohol or drug-facilitated penetration of a victim, completed or attempted forced acts in which a victim is made to penetrate someone, completed or attempted alcohol or drug-facilitated acts in which a victim is made to penetrate someone, non-physically forced penetration which occurs after a person is pressured to consent or submit to being penetrated, unwanted sexual contact and non-contact unwanted sexual experience. See details at: "Preventing Sexual Violence". [ sequences 31 ). Having that in mind, it is disputable whether classical retributive justice would be acceptable for him/her: the form of justice where that same victim would be only a secondary subject of criminal proceedings, who would be interested in high punishment of perpetrator, and who would be (if possible without facing the perpetrator) giving a statement as the witness, fulfill its role in the criminal proceedings?! Or would it be advisable to actively involve the victim in the proceedings to make the victim responsible for the outcome of the case? Would that encourage him/her to understand that his/her opinion counts and is important for the outcome of the case? Would that bring closure to the victim? Between those two solutions, there are some, very understandable, disputes in theory, that consequently offer reasons for and against restorative justice in sex offences cases. Wosner reviews some of the arguments against the applicability of restorative justice in those cases, which include: "[the] possible causation of stress to the victim in the moment of facing the perpetrator, secondary victimization during their meeting, and confidentiality violation." 32 But even with those understandable arguments, that author recommends the application of restorative justice, in all forms of sexual offences. 33 Wosner is not the only author who argues against restorative justice in sex offences cases. Mercer et al. 34 point out that mechanisms of restorative justice are riskier for sex offences than for other crimes. They express the emotional dimension of the application of restorative justice that shows up while interfacing with a perpetrator and which finally can be a trigger for the victim's re-traumatization 35 . Victims of sex offences are more vulnerable than in the other crimes 36 . Nevertheless, even though their concern in this matter is shown, in order not to generalize 31 See supra note 10. 32 all critical situations and not to eliminate the possibility of application of restorative justice, they advise experts to make a clear distinction between restorative risks and criminological ones. While restorative risks refer to the risk present in restorative practice to create potential harm to either party, the criminologist refers to factors that made the offender perpetrate the criminal offence and may influence recidivism 37 . So, not every risk or harm is caused by restorative justice but instead might have a criminological cause.
The other side of the coin
Without any doubt, all of the above-mentioned arguments against the active involvement of the victim in the resolution of his or her own case are quite understandable and reasonable. They focus on the consequences of victimization and are the results of profound research of a psychological rather than a legal nature 38 . Even with justified opinions against the application of restorative justice to sex offences, it is necessary to determine whether there are opposing views among scholars and practitioners and, moreover, to establish whether there are positive examples of its application in practice.
Hudson favors applying restorative justice to sexual offences, on the ground that classical criminal justice has not prevented sexual offences so far 39 . Additionally, restorative justice aims to help the victim, not to make her/him go through secondary victimization, but to obtain redress and to be given primary importance in the case. She has analysed four very important consequences of the application of restorative justice to sexual violence-related cases, which prove its importance for the victim and the outcome of the case, and show the desirability of the application of restorative justice to them (when possible) 40 . According to Hudson, those four impacts are: the perpetrator, by admitting guilt in an interface with the victim, shows that he/she appreciates the victim; admits that the criminal offence is bad and harmful; shows that he/she is embarrassed; and understands that it would be best for him/her to avoid the perpetration of criminal offences 41 . 37 Id., 13. 38 In literature, the COSA Model (Circle of Support and Accountability) is mentioned as a possible hybrid alternative to restorative justice in the cases of Sexual Offences. It emerged in 1998 in Canada. In its basis it includes both reparation and restoration of a relationship. It consists of an inner circle (volunteers that are trained by circle coordinators) and an outer circle (community, professionals such as therapists, etc.). The research on its efficiency of prevention of recidivism shows that the COSA Model is successful since recidivism has been reduced by up to 83% in the COSA Groups. See: "Restorative justice in cases of sexual violence", www.just.ee, 6. 39
The first impact would have a therapeutic influence on the victim, so that questions such as "why has this happened to me?" or the fear of socialization would be overcome. The three remaining impacts would have a therapeutic and cathartic effect on the perpetrator. Namely, feeling shame for his/her wrongdoings or showing remorse to the victim would bring relief to the perpetrator and would motivate him/her to abstain from criminal offences. That would deter the perpetrator from possible recidivism into criminal activity. Accordingly, those four aspects are the most desirable outcomes of restorative justice in such cases: not only are they more emotionally satisfying to the victim than retributive justice (which has kept victims far from interfering in the resolution of the case), since restorative justice makes the victim feel important, but they may also result in the prevention of crime. The preventive effect of restorative justice in sex offence cases is stressed by McAlinden, who says that "…restorative justice is more humane re-integrative approach in sex offences cases…" 42 and that it "…has effectiveness in prevention of such criminal offences" 43 .
The everlasting fear of secondary victimisation, which is used as an argument against restorative justice in sex offence cases, might be questioned with McAlinden's response to it 44 , which, rather than presuming what might happen with the use of restorative justice, gives a strong argument that trivializes the fear of secondary victimization as an opposing argument. In this regard, a study made in South Australia on 400 cases of sex offences has shown that conferences (as a mechanism of restorative justice) are less victimizing to the victim than the court regime 45 .
In sex offence cases, due to their nature, the power balance between the parties is infringed. A setting in which the victim is considered an active party, is asked for an opinion and interfaces with the perpetrator would reverse the imbalance of power that was created by the perpetration of the criminal offence.
Mercer et al. 46 justifiably point out other possible benefits of the application of restorative justice to sex offences cases. According to them, offering a victim restorative justice in order to resolve a broken relationship with the perpetrator and to overcome the critical event, would help the victim to re-imagine a safer and positive future. That argument is of high importance since the establishment of communication between them would help the victim to overcome the fear and the shame, so the victim would be able to have a normal life again. How much that communication is important for victims is confirmed in a case in Denmark where the victim of a sex offence was interviewed by a therapist and said, "…the thing that would help the best would be talk with the offender" 47 . Additionally, according to Wosner 48 , scientific research made in Texas and Ohio shows that in 82% of cases where the dialog between perpetrators and sexual offence victims had been established, the victims felt rehabilitated, healed and in the process of personal growth.
Van Ness and Strong advocate the application of restorative justice to sex offence cases, and they quote judge John Kelly, who said that "a purpose of the criminal law should be to heal the wounds caused by crime -wounds such as those of the rape victim for whom even the offenders' conviction and sentencing had not been enough" 49 . 45 Id. 46
A few examples of the usage of restorative justice in sex offence cases
The arguments mentioned above result from of a critical review of the implementation of forms of restorative justice in sex offence cases in the countries with deep history of its implementation, such as Australia, New Zealand, Canada and the USA. Mercer et al. give an example of the application of restorative justice to a case of a sexual offence that had been perpetrated in a family 50 . Girl Courtney was only 15 years old when her brother Lee (17) raped her. She told her mother about what had happened and her mother reported the offence to the police. Lee was sentenced to community supervision with a therapeutic program. The case direction-changing moment happened when their mother asked for restorative justice to be implemented in this case, since she wanted to discuss the critical event with family members to help her son reintegrate into his family 51 . The same author gives another example of successfully used restorative justice in the case Jo vs. Darren 52 , since the final outcomes of its application for the victim were: "emotional recognition and empathy, shame management, self-forgiveness and understanding why she was the target" 53 .
Another example of the successful application of restorative justice is the "Restore Program" that has been applied in Arizona. It addresses date and acquaintance rape cases perpetrated by offenders who violated the law for the first time 54 . In that way, first-time offenders are given the opportunity to be diverted from the classical criminal procedure and to go through a restorative justice program that might be powerful enough to prevent them from recidivism. A similar approach is present in South Australia, where younger offenders who plead guilty can be diverted from the criminal procedure and can participate in certain forms of restorative justice 55 . 49 They also mentioned the example of a rape case judged by judge John Kelly. Within the procedure the victim was told that she had no fault for the rape that had been committed against her. That was the moment when her psychological healing started. See more in: D. Van
Furthermore, there are two Restore Programs: one applied in New Zealand and another in the USA. They are applied to sexual offences and, according to Bolitho and Freeman 56 , they have received positive comments such as "satisfying and procedurally fair" 57 . In New Zealand, certain conditions are set for the use of restorative justice in those cases. The conditions represent the true spirit of restorative justice, since they consist of an imperative of bilateral voluntary commitment to restorative justice from both the perpetrator and the victim. Moreover, their joint desire for restorative justice to be applied in their case should be visible through a formal agreement. Finally, when applied, restorative justice offers therapy and support for the victim. 58
Which forms of restorative justice would be recommendable for sex offence cases?
Among many authors 59 there is agreement that, if applied, the most recommendable forms of restorative justice would be mediation and restorative justice conferences. The substantive difference between mediation and conferences 60 is that the facilitator of mediation is a mediator (an objective party), while in conferences a wider circle of society might participate in the resolution of the case (family, social workers, etc.) 61 . However, the aim of both approaches is to re-establish the ruined communication between the parties through talk in a non-legal language. The applicability of conferences is, in our opinion, still disputable. If the active role of the family is dominant in those cases, what would happen in cases where the family does not support the victim but, as sometimes happens, finds the victim to be the one to blame for causing the victimization? That approach is, unfortunately, quite often visible in traditional societies. Together with those two forms of restorative justice, Mercer et al. emphasize the importance of the Face to Face Method 62 , which includes a small number of participants (which increases the confidentiality of the case) and has at least two facilitators. The focus of this method is on both the victim and the offender, with the main aim of establishing a dialogue between them.
***
Even though there are respectful arguments that show the potential risks of restorative justice in cases of sexual offences, the opposing arguments show that there are considerable benefits of the application of restorative justice and potential possibilities for overcoming those risks. Having in mind, on the one hand, that restorative justice aims to fix broken relationships and to help both parties to overcome the critical event and, on the other hand, the previously mentioned critical points within sexual offences, restorative justice should coexist with the retributive system. It should be an elective type of justice, instead of being an obligatory one. The decision on whether it should be applied should be made after establishing some basic checkpoints 63 , in a sense to check whether there are statutory possibilities for its application (severity of the criminal offence in general). In that way, severe criminal offences would be automatically excluded from its application, preventing the possibility of their trivialization through an easier approach towards the perpetrator. Then, having in mind that one of the most important principles of criminal law is poenalia sunt restringenda 64 , after checking case suitability, the case should be screened to determine whether that particular criminal offence is statutorily fit for restorative justice and whether it would be recommendable to apply it. Case screening would mean that therapists 65 should talk with the victim, in order to establish whether the victim's physical and psychological condition makes restorative justice feasible in that instance. Only after those steps should restorative justice be applied. With its electiveness, it remains an option for the victim to use if he/she feels it would help more than classical retributive justice, while not mandating the victim to go through it and potentially be secondarily victimized.
A FEW NOTES ON THE APPLICABILITY OF RESTORATIVE JUSTICE TO SEXUAL OFFENCES IN BOSNIA AND HERZEGOVINA
Having in mind the nature of restorative justice and the argument that, in order to apply restorative justice to sexual offence cases, case suitability and case screening should be checked, so as to avoid possible negative effects on the victim and to achieve all the benefits of it, the next few pages will be devoted to testing the applicability of restorative justice to sex offences in Bosnia and Herzegovina. To this end, after a short overview of the situation regarding the application of restorative justice in general in Bosnia and Herzegovina, the analysis of case suitability will be done through a legal analysis of the relevant provisions of positive criminal law in Bosnia and Herzegovina. 64 It refers to the need to determine all relevant issues in criminal law through each case separately. 65 Vince Mercer, Karin Sten Madsen, Marie Keenan and Estelle Zinsstag also recommend therapy to precede the decision upon restorative justice, since in sex offences many emotions are involved and the will of the victim can best be questioned through therapy. See 26.
4.1. The Role of Restorative Justice in the BH Criminal Justice System
The idea of a justice system that would provide victims with an important role in the criminal case and in which the victim-offender relation might be re-established was turned into reality in 1998 in Bosnia and Herzegovina with the first Criminal Code adopted after the war 66 . The next criminal law reform in Bosnia and Herzegovina also allowed for restorative justice as a possibility. If asked, most scholars and practitioners would say that the establishment of restorative justice was a major step towards the harmonization of the criminal law of Bosnia and Herzegovina with international standards. Even though restorative justice is celebrating its 20th anniversary in Bosnia and Herzegovina, most would still say that its prescription remained merely an aim, because, when compared with its true application in practice, it has remained more a de iure than a de facto issue.
Nevertheless, according to the positive criminal law(s) of Bosnia and Herzegovina, restorative justice has been introduced into the criminal system at a very fragmented level, through some institutes which need to be analyzed in depth in each case before concluding that they are forms of restorative justice and that restorative justice is appropriate. While some forms are applicable to adults, others are applied to juveniles only.
a) Forms of restorative justice for adults
The forms applicable to adults are the property claim 68 , work for the general good 69 , and the commitment of an offender to fulfil certain obligations when a conditional sentence is pronounced 70 .
One of the manifestations of restorative justice is present in the possibility of mediation, when a victim files a property claim. A property claim 71 is the right of each victim whose personal or property rights have been endangered or damaged by a criminal offense 72 and it will be discussed in criminal proceedings (at the request of an authorized person) if it does not significantly delay that procedure. A property claim as such would not be an example of restorative justice if there were not a possibility for the court to propose the mediation procedure to the injured party 73 and the defendant, if it finds that the property claim is of such nature that it would be desirable to finalize it through mediation. The proposal for the mediation may also be given by the injured party and the defendant until the completion of the main hearing.
The substantive criminal legislation of Bosnia and Herzegovina governs work for the common good at liberty 74 (hereinafter "WCGL"), a sanction incorporated into all criminal laws in Bosnia and Herzegovina in 2003, which has restorative elements. This sanction is used when a conditional/suspended sanction would not achieve the main aim of sanctions but, at the same time, the execution of imprisonment is not needed to achieve it; WCGL is thus the middle and most convenient path. The conditions for imposing it are set out in the codes and refer to the situation in which the court reviews and imposes a sentence of imprisonment of up to one year but at the same time determines that the imposed sentence (with the consent of the accused) will be replaced by work for the common good at liberty 75. The restorative element is recognizable in the substance and type of the work itself, since it may be targeted for the benefit of the victim.
One of the warning measures in the substantive criminal law of Bosnia and Herzegovina 76 is the conditional sentence, which contains elements of restorative justice in the broader sense. The court determines a sanction for the offender and at the same time decides that it will not be executed if the convicted person does not commit a new criminal offence during a certain period of time 77. A suspended sentence is used for minor offences, where the prescribed punishment does not exceed two years of imprisonment or a fine 78. The restorative element may be found in the fact that, together with a conditional sentence, the court may order the convicted person to compensate the damage caused by his or her criminal offence.
b) Forms of restorative justice for juveniles
When it comes to juveniles 79, three forms of restorative justice may be applied: educational recommendations 80, the police warning, and the delayed pronouncement of the sentence of juvenile imprisonment. Educational recommendations are alternative measures in the criminal law of Bosnia and Herzegovina. They may be used when the objective and subjective conditions of the particular case have been fulfilled: the objective condition is that the prescribed sanction for the offence is a monetary fine or imprisonment not exceeding three years, while the subjective conditions include, inter alia, that the juvenile shows an interest in reconciling with the victim 81 and that the victim consents to reconciliation 82. While the first element of restorative justice may be recognized in the perpetrator's willingness to reconcile with the victim, which opens the possibility of repairing the broken relationship and resolving their conflict, the remaining restorative element is recognizable in the type of educational recommendations that may be imposed on a juvenile. There are six of them 83, and only two of them have a restorative justice nature: a personal apology to the injured party and monetary compensation.
Regarding the police warning, according to Article 23 of the code on the protection and treatment of juveniles in criminal proceedings in Bosnia and Herzegovina, "it may be given if, objectively, a monetary fine or imprisonment not exceeding three years is prescribed by the law for the criminal offence, and, subjectively, if the perpetrator has willingly pleaded guilty, there is enough evidence that the crime was committed, and a police warning had not been imposed on that person before". The restorative justice element may be found in it when observed from a wider perspective: with a police warning, the aim of punishment is achieved without punishing or conducting criminal proceedings at all. That brings efficiency to the criminal proceedings and thereby benefits society as a whole.
4.2. Brief Overview of the Suitability of Restorative Justice for Sexual Offences in BH Positive Law
Having in mind the forms of restorative justice prescribed by BH positive law, this part of the paper sets the incriminated sexual offences against that background in order to establish the suitability of restorative justice for them.
Due to its specific constitutional organization, the criminal law in Bosnia and Herzegovina is distributed over two levels: the state level and the level of the entities. There is no special chapter within the CCBH governing sex offences, because that group of criminal offences is prescribed by the entity-level codes. Chapter XIX of the CCFBH, titled "Crimes against sexual liberties and morality", prescribes the following sexual offences: rape, sexual intercourse with a helpless person, sexual intercourse with abuse of a position, sexual intercourse, sexual intercourse with a child, indecent activities, sexual satisfaction in front of a child or juvenile, inducement to prostitution, exploitation of a child or juvenile for pornography, child pornography, incest, human trafficking and organized human trafficking 88.
The same offences are prescribed by the CCBD in its Chapter XIX 89. The CCRS distinguishes between sexual offences perpetrated against adults and against juveniles, so Chapter XIV 90 and Chapter XV 91 cover the offences against sexual integrity: rape, sexual blackmail, sexual intercourse with a helpless person, sexual intercourse with the abuse of a position, inducement to prostitution, sexual harassment, sexual intercourse with a child younger than fifteen, sexual abuse of a child younger than fifteen, instructing a child to attend child pornography, exploiting children for pornography, exploiting children for pornographic performances, introducing children to pornography, exploiting a computer network or communication by other technical means for committing criminal acts of sexual abuse or exploitation of a child, satisfaction of sexual urges in front of others, and inducement of a child to prostitution.
As mentioned above, mediation vis-à-vis a property claim, work for the common good at liberty, the conditional sentence, the police warning and educational recommendations are, in general, the possible forms/manifestations of restorative justice in the BH legal system. After testing the objective condition (the sanction prescribed for sex offences) for the application of those forms of restorative justice (the suitability check), we have come to the following conclusions. Mediation and the property claim may, according to the positive criminal codes, be applied to all basic forms of sexual offences, since their objective criterion refers to all criminal offences. Yet when it comes to mediation, having in mind the specific state of victims of sexual violence, in the authors' opinion additional conditions should be imposed on top of the main rules for becoming a mediator 92 (prescribed in the Book of Rules for Mediators, which require a faculty degree and registered practice and training): mediators should be specialized in and experienced with work with victims of sexual violence. Their active involvement is crucial for avoiding the negative effects that might arise from bringing the perpetrator and the victim to the same place, and their knowledge and experience may prevent possible stress and conflict between the parties in mediation.
Additionally, when it comes to the property claim, even though it is an open possibility under positive law, statistical data show that in practice it is seldom used in war crime cases (in which rape is one of the underlying acts). For example, Šimić and Kazić 93 refer in their paper to decisions on property claims within war crime cases: in the practice of the State Court of Bosnia and Herzegovina, "out of 162 final verdicts, the decision on the property claim within the criminal procedure was made in 5 cases and out of the 182 first degree verdicts that have not yet been finalized, this decision was made in 2 cases" 94.
Generally, for reasons of efficiency, most property claims are referred to civil proceedings to be decided there. Deciding those claims in civil rather than criminal proceedings would not be a problem were it not for the cases in which witnesses (who are victims) are protected by a pseudonym or other protective measures. Since in civil procedure the claimant's identity must be known, there is a risk either that claims will not be brought in civil proceedings out of fear of uncovering the victim's true identity, or that bringing a claim will reveal the victim's identity.
Since a conditional sentence may be applied when the sanction in concreto would be a monetary fine or imprisonment not exceeding two years, this form of restorative justice may be applied to indecent activities 111, forcing a child to attend sexual intercourse 112, exploitation of children for pornography 113, the offences of using and introducing children to pornography 114, exploiting a computer network or communication by other technical means for committing criminal acts of sexual abuse or exploitation of a child 115, satisfaction of sexual urges in front of others 116, and inducement of a child to prostitution 117. The remaining criminal offences carry much harsher penalties, with mandatory minimums exceeding the objective condition, and are therefore not suitable for this form of restorative justice.
Bearing in mind the objective condition for the application of work for the common good at liberty 118, which refers to the situation in which the court determines a sentence of up to one year of imprisonment, and comparing it with the sanctions prescribed for sexual offences in concreto (and their prescribed minimums), this restoratively natured sanction may be applied to the same sex offence cases as the conditional sentence.
Finally, regarding educational recommendations and the police warning, when checked against the sanctions prescribed for sexual offences, they may be applied in the cases of 119: sexual intercourse with the abuse of a position, indecent activities, sexual satisfaction in front of a child or juvenile, child pornography and incest. They are not applicable to the other sex offences, which are more severe offences for which restorative justice would not be suitable.
CONCLUSION
Restorative justice is, without any doubt, an approach that views socially unacceptable behaviour, and responds to it, in a fundamentally different way from the retributive criminal justice system. It has gained increasing credibility as a powerful alternative to traditional responses to crime. This paper has discussed whether it is advisable to apply some forms of restorative justice to sex offence cases, namely cases which some scholars consider unsuitable for restorative justice because of the intimate relation between the perpetrator and the victim, the level of victimization and stress, and the deep consequences of the offence.
We have shown the positive effects that restorative justice can have on victims. The satisfaction that the victim might feel at the perpetrator's possible remorse, and the fact that victims are crucial parties to the process, can have a positive effect on the victim. It is also a way for the victim to regain the balance lost in the relationship with the perpetrator and, finally, to find the closure that is crucial for continuing with her or his life. Additionally, the shame and remorse that the perpetrator might feel after facing the victim can prevent him or her from committing a new criminal offence.
The argument against restorative justice based on the fear of secondary victimization can be overcome relatively easily if each case is screened by specialists and therapists who establish whether restorative justice should be applied in that particular case. Alongside that measure, restorative justice should be checked for suitability, which is usually defined by the criminal law through the objective conditions it sets for its application.
This paper has shown that restorative justice in Bosnia and Herzegovina is accepted only in fragments, mostly due to a lack of understanding of its importance. The forms/manifestations of restorative justice in the BH legal system are mediation within a property claim, work for the common good at liberty and the conditional sentence (for adults), and educational recommendations and the police warning (for juveniles). Most of them are applicable to the basic forms of sexual offences, except for the conditional sentence and work for the common good at liberty, which cannot be applied in cases of human trafficking and organized human trafficking. In the case of mediation, it has been noted that its application should be considered with great care: the effect on the victim of facing the perpetrator should go through checkpoints, and mediators should be specialized in work with victims of sexual offences in order to prevent possible secondary victimization. Regarding the property claim, even though it is provided for as a possibility and as a right of the wronged party, it is often left to the civil court to decide upon. This is problematic in cases where victims, as witnesses, are protected by measures that mask their identity, since in civil proceedings the claimant's identity must be known.
In the authors' opinion, limitations on the application of restorative justice to sexual offences should be considered only for severe cases with aggravating elements (such as aggravated consequences of the offence, i.e. death or injury; perpetration of the offence several times against the same person; perpetration in a cruel way; or perpetration against a child). Exactly this approach is accepted in Bosnia and Herzegovina, since most forms of restorative justice are applicable only to the main (basic) forms of those offences. It is therefore possible to conclude that the positive law of Bosnia and Herzegovina provides the opportunity to apply those forms of restorative justice to the above-mentioned cases. The recommendation for reaching the optimum use of restorative justice in general would be to make restorative justice an elective form of justice, chosen jointly by the perpetrator and the victim: if they want this kind of resolution of their conflict, it should be offered to them, and if either party does not elect it, it should not be mandatory.
A Generalized Cluster-Free NOMA Framework Towards Next-Generation Multiple Access
A generalized downlink multi-antenna non-orthogonal multiple access (NOMA) transmission framework is proposed with the novel concept of cluster-free successive interference cancellation (SIC). In contrast to conventional NOMA approaches, where SIC is successively carried out within the same cluster, the key idea is that the SIC can be flexibly implemented between any arbitrary users to achieve efficient interference elimination. Based on the proposed framework, a sum rate maximization problem is formulated for jointly optimizing the transmit beamforming and the SIC operations between users, subject to the SIC decoding conditions and users’ minimal data rate requirements. To tackle this highly-coupled mixed-integer nonlinear programming problem, an alternating direction method of multipliers-successive convex approximation (ADMM-SCA) algorithm is developed. The original problem is first reformulated into a tractable biconvex augmented Lagrangian (AL) problem by handling the non-convex terms via SCA. Then, this AL problem is decomposed into two subproblems that are iteratively solved by the ADMM to obtain the stationary solution. Furthermore, to reduce the computational complexity and alleviate the parameter initialization sensitivity of ADMM-SCA, a Matching-SCA algorithm is proposed. The intractable binary SIC operations are solved through an extended many-to-many matching, which is jointly combined with an SCA process to optimize the transmit beamforming. The proposed Matching-SCA can converge to an enhanced exchange-stable matching that guarantees the local optimality. Numerical results demonstrate that: i) the proposed Matching-SCA algorithm achieves comparable performance and a faster convergence compared to ADMM-SCA; ii) the proposed generalized framework realizes scenario-adaptive communications and outperforms traditional multi-antenna NOMA approaches in various communication regimes.
I. INTRODUCTION
Wireless communications are currently undergoing an unprecedented revolution. It is predicted by Cisco that the number of wireless-enabled devices will increase to more than 40 billion by 2023 [1]. Furthermore, the types of future wireless-enabled devices will vary from smart phones to connected cars, wearables, sensors, collaborative robots, and so on. Due to the explosive demand for wireless traffic and the emergence of various innovative wireless applications, the next-generation wireless network, also referred to as the sixth generation (6G), is evolving towards a new era of the Internet of Everything (IoE) [2]. Driven by this exciting vision, 6G is expected to embrace broadband-hungry transmissions, pervasive access, and massive connectivity in diverse and heterogeneous communication scenarios [3]. To meet these challenges, the realization of 6G requires the full integration and seamless convergence of different multiple access technologies, namely next-generation multiple access (NGMA) [4]. As a promising multiple access technology, power-domain non-orthogonal multiple access (NOMA) 1 [5] has become an indispensable component of NGMA. By exploiting signal superposition at transmitters and successive interference cancellation (SIC) at receivers, NOMA enables users served by the same time/frequency/space/code resource block to be further multiplexed and distinguished in the power domain. Hence, it can dramatically enhance network capacity and user connectivity, as well as reduce the outage probability [6], [7].
On the road from NOMA to NGMA, the combination of NOMA and multiple-antenna technologies has been regarded as one key aspect [4], [8]. On the one hand, multiple-antenna technologies can enable space-division multiple access (SDMA) and provide additional spatial degrees of freedom (DoFs) to assist NOMA communications [9]. On the other hand, NOMA opens up new dimensions and opportunities for resource reuse, which is capable of increasing the affordable traffic load of multiple-antenna communication systems [10]. Therefore, multi-antenna NOMA provides a promising way to significantly improve the spectral efficiency and connection density of next-generation wireless systems [8].
A. Prior Works
In the past few years, an extensive literature has been devoted to the development of multi-antenna NOMA systems. Existing multi-antenna NOMA systems can be mainly classified into two categories [11], namely beamformer-based NOMA and cluster-based NOMA, which differ in the strategies of both the multi-antenna beamforming and the SIC operation designs.
1) Studies on beamformer-based NOMA: Beamformer-based NOMA [12]- [15] directly serves different users via distinct beamforming vectors, whose beamforming strategy is similar to that of conventional multiple-antenna communication systems. Meanwhile, by carrying out SIC between the multiplexed users, the spatial interference that cannot be effectively mitigated by beamforming can be further suppressed by leveraging NOMA. Based on a minorization-maximization algorithm, the authors of [12] optimized the beamformer to maximize the sum rate of a multi-user downlink multiple-input single-output NOMA (MISO-NOMA) system. Simulation results indicated that beamformer-based NOMA outperforms traditional multi-antenna communication systems in severely overloaded systems, where the number of users is much larger than the number of transmit antennas. Additionally, the authors of [13] investigated the optimal power allocation in a two-user downlink multiple-input multiple-output NOMA (MIMO-NOMA) system, which can achieve the capacity region of the MIMO broadcast channel under the derived channel state information (CSI) condition. The authors of [14] derived the condition of quasi-degraded channels, based on which a low-complexity precoding scheme was proposed for multi-user MISO-NOMA transmission to approach the rate region of dirty paper coding. By considering both perfect and imperfect CSI cases, the authors of [15] further proposed low-complexity beamforming and user selection schemes to improve the sum rate and the outage probability of beamformer-based NOMA systems.
2) Studies on cluster-based NOMA: Different from beamformer-based NOMA, cluster-based NOMA [8], [16]- [18] typically groups highly channel-correlated users into the same cluster, where each cluster shares the same beamforming vector. While the inter-cluster interference is mitigated via beamforming, the intra-cluster interference is suppressed by carrying out SIC within each cluster [8]. In [16], the authors analysed the achievable sum rates of the cluster-based NOMA system and the multiple-input multiple-output orthogonal multiple access (MIMO-OMA) system, which suggested the superior performance of the optimized multi-antenna NOMA scheme. The authors of [17] considered a downlink cluster-based NOMA system with a ZF beamforming design, and then proposed efficient user clustering and power allocation algorithms.
The authors of [18] proposed a two-stage beamforming scheme for a two-user downlink MISO-NOMA system, which performs ZF beamforming to eliminate inter-cluster interference, while designing intra-cluster beamformers for users in the same cluster by minimizing the transmit power. Furthermore, the author of [19] investigated two different NOMA beamforming schemes, in which the NOMA user either shares a spatial beam with legacy SDMA users or exploits a dedicated beam. The optimal solutions for both schemes were analyzed, and the study showed that sharing a spatial beam can significantly reduce the computational complexity at the expense of a slight performance loss.
B. Motivations and Contribution
Note that SIC plays an important role in NOMA, and the design of SIC operations between users is crucial for the eventual performance achieved by NOMA. As discussed above, current multi-antenna NOMA approaches [8], [12]- [18], [20] generally assume that SIC is sequentially carried out within the same cluster, namely cluster-specific SIC, which leads to both benefits and drawbacks. To be more specific, on the one hand, beamformer-based NOMA assigns all users to a single cluster, which is shown to be capable of achieving the same performance as the dirty paper coding scheme in some specific scenarios [20]. However, given the sequential nature of cluster-specific SIC, users with higher SIC decoding orders have to implement a large number of SIC operations before decoding their own signals, thus leading to a high system complexity. Moreover, beamformer-based NOMA also encounters the SIC overuse issue [4], especially when users' channels are weakly correlated. This is because the SIC decoding conditions impose spatial interference on weakly channel-correlated users, even if this interference could have been eliminated via spatial multiplexing. On the other hand, cluster-based NOMA partially alleviates the SIC overuse issue by dividing users into different clusters, where the inter-cluster and intra-cluster interference can be mitigated via spatially separated beamforming and SIC, respectively. Therefore, it can support a large number of users with a moderate SIC complexity. However, cluster-based NOMA relies on the assumption that users in the same cluster have high channel correlations while users in different clusters experience low channel correlations, which may not always hold given the randomness of wireless channels.
It can be observed that both beamformer-based NOMA and cluster-based NOMA are scenario-centric, i.e., their effectiveness depends on specific scenarios, and thus they cannot meet the heterogeneous scenario challenges of next-generation wireless networks. Against this background, and to pave the way towards NGMA, this paper proposes a novel generalized downlink multi-antenna NOMA transmission framework with the concept of cluster-free SIC. It enables SIC to be flexibly implemented over any arbitrary non-orthogonal users to achieve efficient interference elimination, thus breaking the constraints of the existing cluster-specific multi-antenna NOMA approaches. Mathematically, it provides a generalized model, which not only unifies the existing approaches but also provides more flexible transmission options, thus overcoming the shortcomings of existing approaches. This enables a paradigm of scenario-adaptive multi-antenna NOMA for NGMA. The contributions of this paper can be summarized as follows.
• We propose a novel generalized downlink multi-antenna NOMA transmission framework with the concept of cluster-free SIC, which enables flexible SIC operations between users to facilitate efficient interference elimination. The proposed framework can overcome shortcomings of traditional methods and empower a scenario-adaptive multi-antenna NOMA paradigm. We formulate a sum rate maximization problem for jointly optimizing the transmit beamforming and the SIC operations subject to SIC decoding conditions and users' data rate constraints.
• We develop an alternating direction method of multipliers-successive convex approximation (ADMM-SCA) algorithm to tackle the formulated mixed-integer nonlinear programming (MINLP) problem, which is highly coupled and non-convex. The original problem is first reformulated into a tractable augmented Lagrangian (AL) problem, where the non-convex terms are handled by invoking the SCA method. The obtained biconvex AL problem is then decomposed into two convex subproblems, which are iteratively solved by ADMM to obtain a stationary solution.
• We propose a Matching-SCA algorithm to further reduce the computational complexity and overcome the parameter initialization sensitivity of ADMM-SCA. The SIC operations between users are modelled as a two-sided many-to-many matching with externality. Then, we extend the conventional swap-based matching to efficiently solve the SIC operation problem, while employing the SCA to jointly optimize the corresponding transmit beamforming. The proposed Matching-SCA can converge to an enhanced exchange-stable matching, which guarantees the local optimality.
• Numerical results demonstrate that the proposed Matching-SCA algorithm results in comparable performance and a faster convergence compared to the ADMM-SCA algorithm, especially in the overloaded regime. It is also shown that the proposed generalized multi-antenna NOMA framework is capable of achieving efficient SIC operations and scenario-adaptive communications, and outperforms traditional transmission schemes regardless of system loading and users' channel correlations.
C. Organization and Notation
The rest of this paper is organized as follows. Section II presents the generalized downlink multi-antenna NOMA transmission framework and formulates the sum rate maximization problem. In Section III, an ADMM-SCA algorithm is developed for solving the formulated joint optimization problem. Furthermore, a low-complexity and fast-convergent Matching-SCA algorithm is proposed in Section IV by extending the conventional many-to-many matching theory.
Section V presents numerical results to demonstrate the efficiency of the proposed algorithms, and Section VI concludes the paper.

Since the proposed framework eliminates the concept of clusters, each user is not required to share a beamforming vector with any other user. Denote the transmit beamforming matrix by W = [w_1, w_2, ..., w_K] ∈ C^{N×K}. For each user k ∈ K, the received signal can be expressed as
$$ y_k = \mathbf{h}_k^H \sum_{i \in \mathcal{K}} \mathbf{w}_i s_i + n_k, \qquad (1) $$
where s_i denotes the data symbol intended for user i and n_k denotes the additive noise at user k with variance σ².

[Fig. 1: illustration of the flexible SIC decoding order enabled by the proposed cluster-free framework.]

To efficiently mitigate the inter-user interference suffered by each user, the proposed framework introduces a novel cluster-free SIC concept, which differs from traditional methods in that it enables SIC to be flexibly implemented between any two non-orthogonal users without predefined user clusters. Mathematically, we define the binary indicator α_ik, ∀i, k ∈ K, which specifies whether an SIC operation is carried out at user i to decode the signal of user k.
Specifically, α ik = 1 indicates that user i will first employ the SIC to decode the signal of user k before decoding its own signal for eliminating interference from user k, and α ik = 0 otherwise.
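For readers who wish to experiment with the framework numerically, the short sketch below (ours, not from the paper; variable and index names are illustrative) shows one simple way to store the binary SIC indicators α_ik as a K×K matrix and to read off which signals each user decodes before its own.

```python
import numpy as np

K = 4  # number of users (illustrative)

# alpha[i, k] = 1 means user i first decodes (and cancels) user k's signal
# before decoding its own signal; the diagonal is unused and kept at 0.
alpha = np.zeros((K, K), dtype=int)
alpha[1, 0] = 1   # user 1 cancels user 0's signal (0-based indices)
alpha[3, 0] = 1
alpha[3, 1] = 1

# For each user i, list the users whose signals it decodes via SIC.
for i in range(K):
    decoded = np.flatnonzero(alpha[i]).tolist()
    print(f"user {i}: decodes signals of users {decoded} before its own")
```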
As it is generally impossible to mutually implement SIC decoding at both users, we have
$$ \alpha_{ik} + \alpha_{ki} \le 1, \quad \forall i, k \in \mathcal{K},\ i \neq k. \qquad (2) $$
As implied by (2), the variables α also determine the SIC decoding order. The achievable rate of the proposed framework can be modelled as follows.
1) Communication rate modelling: When user k decodes its own signal, the observed interference Intf_{k→k} after SIC operations can be expressed as
$$ \mathrm{Intf}_{k \to k}(\boldsymbol{\alpha}, \mathbf{W}) = \sum_{i \in \mathcal{K} \setminus \{k\}} \left(1 - \alpha_{ki}\right) \left|\mathbf{h}_k^H \mathbf{w}_i\right|^2, \qquad (3) $$
where α is the matrix defined as α = [α_1, ..., α_K], with α_k = [α_{1k}, α_{2k}, ..., α_{Kk}]^T being the SIC operation vector for user k. Therefore, the signal-to-interference-plus-noise ratio (SINR) for user k to decode its own signal can be given by
$$ \mathrm{SINR}_{k \to k}(\boldsymbol{\alpha}, \mathbf{W}) = \frac{\left|\mathbf{h}_k^H \mathbf{w}_k\right|^2}{\mathrm{Intf}_{k \to k}(\boldsymbol{\alpha}, \mathbf{W}) + \sigma^2}. \qquad (4) $$
As a result, the achievable data rate R_{k→k}(α, W) for user k to decode its own signal is R_{k→k}(α, W) = log_2(1 + SINR_{k→k}(α, W)).
(ii) On the other hand, if no SIC operation is carried out between user k and user u, i.e., α ku = α uk = 0, and user i employs SIC to decode the signals of both u and k, i.e., α iu = α ik = 1, then user i would sequentially decode signals of user k and user u according to the ascending order of their channel gains. Therefore,when user i decodes user k's signal, the interference from user u can be eliminated via SIC if h u 2 ≤ h k 2 , as depicted in Without loss of generality, we assume that users are sorted in ascending order of their channel gains, i.e., h u 2 ≤ h k 2 , ∀u < k. Therefore, given α ik = 1, when u < k, the interference from user u cannot be eliminated if α iu = 0 or α iu = α uk = 1. Moreover, when u > k, the interference from user u cannot be eliminated if α iu = 0 or α ku = 0. Mathematically, the interference Intf i→k (α, W) for decoding the signal of user k at user i can be formulated by (5) The corresponding SINR for user i to decode user k's signal, defined as SINR i→k (α, W), can be computed by Moreover, the achievable data rate for the SIC decoding can be given by To completely eliminate the interference via the SIC as described above, the following condition has to be satisfied to ensure the successful SIC decoding when The sum rate of the proposed generalized cluster-free NOMA framework can be given by Essentially, by introducing the cluster-free SIC, the proposed framework provides a generalized and unified modelling, where the beamformer-based NOMA, cluster-based NOMA, and SDMA can be all regarded as the special cases of the proposed framework, as analysed as follows.
1) Special case 1 -Beamformer-based NOMA:
When there is only one SIC decoding sequence that involves all the connected users, i.e., α_ik = 1, ∀i > k, and α_ik = 0 otherwise, the proposed generalized framework is equivalent to beamformer-based NOMA. In this case, the achievable sum rate can be given by
$$ R_{\mathrm{BB}}(\mathbf{W}) = \sum_{k \in \mathcal{K}} \log_2\left(1 + \frac{\left|\mathbf{h}_k^H \mathbf{w}_k\right|^2}{\sum_{i > k} \left|\mathbf{h}_k^H \mathbf{w}_i\right|^2 + \sigma^2}\right). $$
2) Special case 2 - Cluster-based NOMA: When user i and user k are served by two aligned beamforming vectors, i.e., ∃ c_ik ∈ R such that w_k = c_ik w_i, i, k ∈ K, they are considered to share the same spatial beam, which is similar to traditional cluster-based NOMA systems. If SIC decoding is sequentially carried out only between users served by aligned beamforming vectors, then the proposed generalized framework reduces to cluster-based NOMA, where α_ik = 1 if i > k and ∃ c_ik ∈ R such that w_k = c_ik w_i, and α_ik = 0 otherwise. Suppose there exist G clusters, indexed by G = {1, 2, ..., G}, and denote the user set of cluster g by K_g. Then, the achievable sum rate of the cluster-based NOMA system can be written as
$$ R_{\mathrm{CB}}(\mathbf{W}) = \sum_{g \in \mathcal{G}} \sum_{k \in \mathcal{K}_g} \log_2\left(1 + \frac{\left|\mathbf{h}_k^H \mathbf{w}_k\right|^2}{\sum_{i \in \mathcal{K}_g,\, i > k} \left|\mathbf{h}_k^H \mathbf{w}_i\right|^2 + \sum_{i \notin \mathcal{K}_g} \left|\mathbf{h}_k^H \mathbf{w}_i\right|^2 + \sigma^2}\right). $$
3) Special case 3 - SDMA: If there is no SIC operation between any users, i.e., α_ik = 0, ∀i, k ∈ K, i ≠ k, then the proposed generalized NOMA framework is equivalent to SDMA. The sum rate of the SDMA system can be expressed as
$$ R_{\mathrm{SDMA}}(\mathbf{W}) = \sum_{k \in \mathcal{K}} \log_2\left(1 + \frac{\left|\mathbf{h}_k^H \mathbf{w}_k\right|^2}{\sum_{i \in \mathcal{K} \setminus \{k\}} \left|\mathbf{h}_k^H \mathbf{w}_i\right|^2 + \sigma^2}\right). $$
In addition to unifying the traditional methods, the proposed framework also enables more flexible SIC operations. A specific example is shown in Fig. 1, where the users cannot be ideally divided into a single cluster or multiple user clusters, and the cluster-specific SIC schemes are not flexible enough. To empower efficient interference elimination, the proposed cluster-free scheme breaks the clustering limitations: it can flexibly enable SIC operations between highly channel-correlated users (e.g., user 2 and user 4, user 1 and user 4), while adaptively preventing ineffective SIC operations between less channel-correlated users (e.g., user 1 and user 2).
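The three special cases correspond to particular choices of the SIC-operation matrix, which can be constructed mechanically. The helper below is our own, with an illustrative cluster layout.

```python
import numpy as np

def alpha_bb_noma(K):
    """Beamformer-based NOMA: one SIC sequence over all users,
    alpha[i, k] = 1 for every i > k (users sorted by ascending gain)."""
    return np.tril(np.ones((K, K), dtype=int), k=-1)

def alpha_sdma(K):
    """SDMA: no SIC operation at all."""
    return np.zeros((K, K), dtype=int)

def alpha_cb_noma(clusters, K):
    """Cluster-based NOMA: SIC only between users of the same cluster,
    again following the ascending-gain order inside each cluster."""
    alpha = np.zeros((K, K), dtype=int)
    for users in clusters:
        for i in users:
            for k in users:
                if i > k:
                    alpha[i, k] = 1
    return alpha

K = 4
print(alpha_bb_noma(K))
print(alpha_cb_noma([[0, 2], [1, 3]], K))   # two illustrative clusters
print(alpha_sdma(K))
```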
Remark 1.
The proposed framework provides a generalized model that unifies traditional methods, and enables more flexible transmission options with cluster-free SIC to achieve adaptive inter-user interference mitigation. Therefore, it can overcome the defects of traditional methods and reap their gains to deal with the diverse scenarios facing next-generation wireless communications.
Owing to these merits, we can straightforwardly conclude that the achievable sum rate of the proposed framework outperforms, or is at least no worse than, that of the traditional approaches, i.e.,
$$ \max_{\boldsymbol{\alpha}, \mathbf{W}} R_{\mathrm{sum}}(\boldsymbol{\alpha}, \mathbf{W}) \ge \max\left\{ \max_{\mathbf{W}} R_{\mathrm{BB}}(\mathbf{W}),\ \max_{\mathbf{W}} R_{\mathrm{CB}}(\mathbf{W}),\ \max_{\mathbf{W}} R_{\mathrm{SDMA}}(\mathbf{W}) \right\}. $$
B. Problem Formulation
Our goal is to maximize the sum rate while guaranteeing the SIC decoding conditions and users' data rate requirements by jointly optimizing the transmit beamforming and the cluster-free SIC operations between users. Mathematically, the optimization problem can be formulated as 2
$$ \mathcal{P}_0:\ \max_{\boldsymbol{\alpha}, \mathbf{W}}\ \sum_{k \in \mathcal{K}} R_{k \to k}(\boldsymbol{\alpha}, \mathbf{W}) \qquad (10a) $$
$$ \text{s.t.}\quad \alpha_{ik} R_{k \to k}(\boldsymbol{\alpha}, \mathbf{W}) \le R_{i \to k}(\boldsymbol{\alpha}, \mathbf{W}),\ \forall i, k \in \mathcal{K},\ i \neq k, \qquad (10b) $$
$$ \phantom{\text{s.t.}\quad} R_{k \to k}(\boldsymbol{\alpha}, \mathbf{W}) \ge R_k^{\min},\ \forall k \in \mathcal{K}, \qquad (10c) $$
$$ \phantom{\text{s.t.}\quad} \sum_{k \in \mathcal{K}} \left\|\mathbf{w}_k\right\|^2 \le P_{\max}, \qquad (10d) $$
$$ \phantom{\text{s.t.}\quad} \alpha_{ik} + \alpha_{ki} \le 1,\ \forall i, k \in \mathcal{K},\ i \neq k, \qquad (10e) $$
$$ \phantom{\text{s.t.}\quad} \alpha_{ik} \in \{0, 1\},\ \forall i, k \in \mathcal{K}, \qquad (10f) $$
where constraint (10b) represents the SIC decoding conditions rearranged from (7), (10c) guarantees the minimum data rate R_k^min of each user, and (10d) ensures that the transmit power of the BS does not exceed P_max. Furthermore, (10e) indicates that user i and user k, i ≠ k, cannot mutually implement SIC decoding, and (10f) is the binary variable constraint.
Nevertheless, it is challenging to solve P_0 for the following reasons. Firstly, the SINR expressions in (10a)-(10c) are neither convex nor concave with respect to the optimization variables. Additionally, the design of SIC operations introduces the binary constraint (10f). Furthermore, the optimization variables are highly coupled with each other in both the interference terms and the objective function. Therefore, P_0 is a non-convex and highly coupled MINLP problem, which is non-deterministic polynomial-time hard (NP-hard). This makes it difficult to find the globally optimal solution. To deal with these difficulties, locally optimal algorithms are proposed in the following sections.
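To see why the binary part of P_0 alone is already combinatorial, the sketch below (our own enumeration code) lists all SIC-operation matrices satisfying only (10e)-(10f) for small K. The count grows as 3^(K(K-1)/2), which is one reason why the exhaustive-search benchmark used later in the paper is viable only for very small numbers of users.

```python
from itertools import product
import numpy as np

def feasible_alphas(K):
    """Enumerate all binary SIC-operation matrices satisfying
    alpha[i, k] + alpha[k, i] <= 1 (constraints (10e)-(10f))."""
    pairs = [(i, k) for i in range(K) for k in range(i + 1, K)]
    for choice in product([(0, 0), (1, 0), (0, 1)], repeat=len(pairs)):
        alpha = np.zeros((K, K), dtype=int)
        for (i, k), (a_ik, a_ki) in zip(pairs, choice):
            alpha[i, k], alpha[k, i] = a_ik, a_ki
        yield alpha

for K in (3, 4, 5, 6):
    print(K, "users ->", 3 ** (K * (K - 1) // 2), "candidate SIC patterns")
# e.g. K = 6 already gives 3^15 = 14,348,907 patterns to test per beamformer.
```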
III. ADMM-SCA BASED SOLUTION
In this section, an ADMM-SCA algorithm is developed to solve P 0 . The highly coupled MINLP is first equivalently reformulated into a tractable AL problem with continuous variables.
By invoking SCA to handle the non-convex terms, the AL problem can be approximately transformed into a series of biconvex optimization problems. Based on the strongly convergence-guaranteed ADMM method, we further decompose the biconvex problem into two convex subproblems, which are iteratively solved to reach a stationary solution.
Moreover, since the interference term Intf_{i→k}(α, W) in (5) suffers from the highly coupled variables α_ki, α_iu, α_uk, and W, we equivalently transform it as follows to make it tractable. We introduce the auxiliary variables β = {β_ik}, which satisfy the coupling equality constraints
$$ \beta_{ik} + \alpha_{ik} - 1 = 0, \quad \forall i, k \in \mathcal{K}, \qquad (11) $$
$$ \alpha_{ik} \beta_{ik} = 0, \quad \forall i, k \in \mathcal{K}, \qquad (12) $$
while the binary variables are relaxed to continuous ones, i.e.,
$$ 0 \le \alpha_{ik} \le 1, \quad 0 \le \beta_{ik} \le 1, \quad \forall i, k \in \mathcal{K}. \qquad (13) $$
Since both {α_ik} and {β_ik} are binary variables at any feasible point of (11)-(12), the coupling terms in (5) can be directly recast as
$$ \max\left\{1 - \alpha_{iu},\, \alpha_{iu}\alpha_{uk}\right\} = \max\left\{\beta_{iu},\, 1 - \beta_{iu} - \beta_{uk}\right\}, \qquad (14) $$
$$ \max\left\{1 - \alpha_{iu},\, 1 - \alpha_{ku}\right\} = \max\left\{\beta_{iu},\, \beta_{ku}\right\}. \qquad (15) $$
Therefore, the interference terms Intf_{k→k}(α, W) and Intf_{i→k}(α, W) in (3) and (5) can be equivalently rewritten as functions of β, namely Intf_{k→k}(β, W) = Σ_{i∈K\{k}} β_ki |h_k^H w_i|^2 and
$$ \mathrm{Intf}_{i \to k}(\boldsymbol{\beta}, \mathbf{W}) = \sum_{u < k} \max\left\{\beta_{iu},\, 1 - \beta_{iu} - \beta_{uk}\right\} \left|\mathbf{h}_i^H \mathbf{w}_u\right|^2 + \sum_{u > k} \max\left\{\beta_{iu},\, \beta_{ku}\right\} \left|\mathbf{h}_i^H \mathbf{w}_u\right|^2. \qquad (16) $$
Lemma 1. The interference term Intf_{i→k}(β, W) in (16) is convex with respect to β.
Proof. For i = k, Intf_{i→k}(β, W) is a linear function of β. Therefore, we only need to verify the convexity for i ≠ k. According to the derivation in [22], the pointwise maximum function g(β) = max{g_1(β), g_2(β)} is convex if g_1(β) and g_2(β) are both convex functions. Since (14) and (15) are both pointwise maxima of affine functions of β, it can be concluded that Intf_{i→k} is convex with respect to β, which completes the proof.
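As a sanity check of the recast above, the snippet below verifies, over all binary inputs, that the β-based pointwise-maximum indicators reproduce the α-based elimination rules stated in the text. The specific max-forms are our reconstruction under the substitution β_ik = 1 − α_ik and should be read as an illustration rather than the verbatim expressions.

```python
from itertools import product

def survives_alpha(a_iu, a_uk, a_ku, u_before_k):
    """1 if interference from user u survives when user i decodes user k,
    following the two elimination cases stated in the text (alpha form)."""
    case_i = (a_iu == 1 and a_ku == 1)
    case_ii = (a_iu == 1 and a_ku == 0 and a_uk == 0 and u_before_k)
    return 0 if (case_i or case_ii) else 1

def survives_beta(b_iu, b_uk, b_ku, u_before_k):
    """Same indicator written with beta = 1 - alpha as a pointwise max
    of affine terms (our reconstruction of (14)-(15))."""
    if u_before_k:                      # u < k
        return max(b_iu, 1 - b_iu - b_uk)
    return max(b_iu, b_ku)              # u > k

for a_iu, a_uk, a_ku in product([0, 1], repeat=3):
    if a_uk + a_ku > 1:
        continue                        # mutual-exclusion constraint (2)
    for u_before_k in (True, False):
        lhs = survives_alpha(a_iu, a_uk, a_ku, u_before_k)
        rhs = survives_beta(1 - a_iu, 1 - a_uk, 1 - a_ku, u_before_k)
        assert lhs == rhs, (a_iu, a_uk, a_ku, u_before_k)
print("alpha- and beta-based indicators agree on all binary inputs")
```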
To deal with the non-convex data rate expressions, we further introduce a series of auxiliary variables S = {S_ik}_{∀i,k∈K}, I = {I_ik}_{∀i,k∈K}, and r = {r_ik}_{∀i,k∈K}. Specifically, I_ik indicates an upper bound on the interference Intf_{i→k}(β, W), ∀i, k ∈ K, while S_ik and r_ik signify lower bounds on the effective channel gain and the achievable rate for decoding user k's signal at user i, ∀i, k ∈ K, respectively. Therefore, the intractable MINLP problem (10) can be rewritten as a continuous problem P_1 over the variables {α, β, W, S, I, r}, in which constraints (17b)-(17d) bound the auxiliary variables as described above and the remaining constraints collect the SIC decoding conditions, the rate and power requirements, and the relaxed constraints (11)-(13). The optimal solutions of P_1 and P_0 are equivalent, as shown next.

Proof. Owing to the monotonicity of the log_2(·) function, the constraints (17b)-(17d) in problem P_1 always hold with equality at the optimum. Therefore, at the optimum of P_1 the auxiliary variables coincide with the quantities they bound, ∀i, k ∈ K, which demonstrates the equivalence between the optimal solutions of P_1 and P_0.

Now we can invoke the strongly convergence-guaranteed ADMM framework [23] to deal with the resulting problem P_1. By dualizing and penalizing the coupling equality constraints (11) and (12) into the objective function, the AL problem of P_1 can be formulated as [24]
$$ \mathcal{P}_{\mathrm{AL}}:\ \max_{\boldsymbol{\alpha}, \boldsymbol{\beta}, \mathbf{W}, \mathbf{S}, \mathbf{I}, \mathbf{r}}\ f_0(\mathbf{r}) - \sum_{i, k \in \mathcal{K}} \mathcal{L}^{(1)}_{ik}(\boldsymbol{\alpha}, \boldsymbol{\beta}, \boldsymbol{\lambda}) - \sum_{i, k \in \mathcal{K}} \mathcal{L}^{(2)}_{ik}(\boldsymbol{\alpha}, \boldsymbol{\beta}, \boldsymbol{\lambda}'), $$
where f_0(r) = Σ_{k∈K} log_2(1 + r_kk) is the original objective function, and L^(1)(α, β, λ) and L^(2)(α, β, λ') respectively denote the AL terms corresponding to the equality constraints (11) and (12), given by
$$ \mathcal{L}^{(1)}_{ik} = \lambda_{ik}\left(\beta_{ik} + \alpha_{ik} - 1\right) + \frac{\rho}{2}\left(\beta_{ik} + \alpha_{ik} - 1\right)^2, \qquad \mathcal{L}^{(2)}_{ik} = \lambda'_{ik}\,\alpha_{ik}\beta_{ik} + \frac{\rho}{2}\left(\alpha_{ik}\beta_{ik}\right)^2, $$
where λ = {λ_ik} and λ' = {λ'_ik} are the dual variables and ρ is the non-negative penalty parameter. As proven in [24], by alternately optimizing the primal variables {α, β, W, S, I, r} and the dual variables λ, λ' of the AL problem, the residuals of constraints (11) and (12), i.e., (β_ik + α_ik − 1) and α_ik β_ik, ∀i, k, converge to zero, and the binary constraint can therefore be satisfied.
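The dual update that drives the residuals of (11) and (12) to zero is the standard method-of-multipliers step; a minimal sketch is given below. The update rule shown is the textbook one and is meant only to illustrate the mechanism, not to reproduce the paper's exact expressions.

```python
import numpy as np

def dual_ascent_step(alpha, beta, lam1, lam2, rho):
    """One augmented-Lagrangian dual update for the two coupling
    constraints beta + alpha - 1 = 0 and alpha * beta = 0 (elementwise).
    Standard method-of-multipliers rule, shown for illustration."""
    res1 = beta + alpha - 1.0      # residual of constraint (11)
    res2 = alpha * beta            # residual of constraint (12)
    lam1 = lam1 + rho * res1
    lam2 = lam2 + rho * res2
    return lam1, lam2, np.abs(res1).max(), np.abs(res2).max()

# Illustration: residuals vanish exactly when alpha is binary and beta = 1 - alpha.
K = 3
alpha = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
beta = 1.0 - alpha
_, _, r1, r2 = dual_ascent_step(alpha, beta, np.zeros((K, K)), np.zeros((K, K)), rho=1.0)
print("max residuals:", r1, r2)   # both 0.0
```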
B. ADMM-SCA Algorithm
According to Lemma 1, the AL problem P_AL is convex over β. However, constraint (17b) is non-convex, since it involves log_2(I_ik + S_ik) minus a concave function of I_ik, i.e., a difference of concave functions. Furthermore, constraint (17c) is non-convex over W. To handle these non-convex constraints, we integrate the SCA method [25] into the ADMM framework [23]. Utilizing SCA, the non-convex components can be approximately and sequentially linearized into a series of convex expressions based on the first-order Taylor approximation at a given local point. Thus, the AL problem can be further decomposed into convex subproblems that can be optimized based on ADMM in an alternating and iterative manner.
Let w̄_k, Ī_ik, ᾱ_ik, and r̄_kk denote the values of the optimization variables w_k, I_ik, α_ik, and r_kk obtained from the previous SCA iteration, respectively. We first define the function q_1(w_k) = |h_i^H w_k|^2, which is convex in w_k. Based on the first-order Taylor approximation around w̄_k, i.e.,
$$ q_1(\mathbf{w}_k) \ge \tilde{q}_1(\mathbf{w}_k, \bar{\mathbf{w}}_k) = \left|\mathbf{h}_i^H \bar{\mathbf{w}}_k\right|^2 + 2\,\mathrm{Re}\left\{\bar{\mathbf{w}}_k^H \mathbf{h}_i \mathbf{h}_i^H \left(\mathbf{w}_k - \bar{\mathbf{w}}_k\right)\right\}, $$
we can recast the constraint (17c) as
$$ S_{ik} \le \tilde{q}_1(\mathbf{w}_k, \bar{\mathbf{w}}_k). \qquad (21) $$
Similarly, by taking the first-order Taylor expansion of the function q_2(I_ik) = log_2(I_ik) at the point Ī_ik, we obtain
$$ q_2(I_{ik}) \le \tilde{q}_2(I_{ik}, \bar{I}_{ik}) = \log_2 \bar{I}_{ik} + \frac{1}{\ln 2} \cdot \frac{I_{ik} - \bar{I}_{ik}}{\bar{I}_{ik}}. \qquad (22) $$
After rearrangement, the constraint (17b) can be transformed into
$$ r_{ik} + \tilde{q}_2(I_{ik}, \bar{I}_{ik}) \le \log_2\left(I_{ik} + S_{ik}\right). \qquad (23) $$
Furthermore, to decouple α_ik and r_kk in constraint (17e), the term α_ik r_kk can be rearranged as α_ik r_kk = ¼(α_ik + r_kk)^2 − ¼(α_ik − r_kk)^2. To deal with this difference-of-convex expression, we linearize the non-convex term −¼(α_ik − r_kk)^2 using the first-order Taylor expansion, i.e.,
$$ -\frac{1}{4}\left(\alpha_{ik} - r_{kk}\right)^2 \le -\frac{1}{4}\left(\bar{\alpha}_{ik} - \bar{r}_{kk}\right)^2 - \frac{1}{2}\left(\bar{\alpha}_{ik} - \bar{r}_{kk}\right)\left(\left(\alpha_{ik} - r_{kk}\right) - \left(\bar{\alpha}_{ik} - \bar{r}_{kk}\right)\right). \qquad (24) $$
Considering (24), constraint (17e) can be transformed into a convex constraint in which the resulting upper bound on α_ik r_kk is required not to exceed r_ik. Based on the above analyses, the AL problem P_AL can be approximately transformed, during each SCA update, into a problem P_2 in which the non-convex constraints are replaced by their convex surrogates. The resulting problem P_2 is a biconvex problem, which can be decomposed into two nested convex subproblems over the two variable blocks {{α_ik}_{∀i≠k}, W} and {β_ik}_{∀i≠k}. Based on the ADMM framework, these convex subproblems can be solved alternately at each iteration, after which the dual variables λ and λ' are updated. In light of this, we propose an ADMM-SCA algorithm, which has three steps in each iteration.
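Before the three steps are described, the surrogate functions introduced above can be checked numerically: the linearization of |h^H w|^2 is a global lower bound, the linearization of log_2(I) is a global upper bound, and the quarter-square decoupling of α·r with the linearized concave part yields a valid upper bound. The helper names in the sketch below are ours.

```python
import numpy as np
rng = np.random.default_rng(1)

def q1(h, w):                      # |h^H w|^2, convex in w
    return np.abs(np.vdot(h, w)) ** 2

def q1_lb(h, w, w_bar):            # first-order Taylor expansion at w_bar
    # For a convex function the tangent is a global lower bound.
    g = np.vdot(h, w_bar) * h
    return q1(h, w_bar) + 2.0 * np.real(np.vdot(g, w - w_bar))

def q2(I):                         # log2(I), concave in I > 0
    return np.log2(I)

def q2_ub(I, I_bar):               # tangent at I_bar is a global upper bound
    return np.log2(I_bar) + (I - I_bar) / (I_bar * np.log(2.0))

def bilinear_ub(a, r, a_bar, r_bar):
    # a*r = 1/4 (a+r)^2 - 1/4 (a-r)^2; linearize the concave part -1/4 (a-r)^2.
    d, d_bar = a - r, a_bar - r_bar
    return 0.25 * (a + r) ** 2 - 0.25 * d_bar ** 2 - 0.5 * d_bar * (d - d_bar)

for _ in range(1000):
    h = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    w, w_bar = (rng.standard_normal(4) + 1j * rng.standard_normal(4) for _ in range(2))
    assert q1_lb(h, w, w_bar) <= q1(h, w) + 1e-9
    I, I_bar = rng.uniform(0.1, 10.0, size=2)
    assert q2(I) <= q2_ub(I, I_bar) + 1e-9
    a, r, a_bar, r_bar = rng.uniform(0, 1), rng.uniform(0, 5), rng.uniform(0, 1), rng.uniform(0, 5)
    assert a * r <= bilinear_ub(a, r, a_bar, r_bar) + 1e-9
print("all surrogate bounds hold on sampled points")
```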
Firstly, given β, ᾱ, W̄, Ī, r̄, λ, λ', the ADMM-SCA algorithm jointly optimizes the SIC operations {α_ik}_{∀i≠k} and the transmit beamforming W by solving the first convex subproblem, denoted by (27). Secondly, given the updated {α, W} and the dual variables, the auxiliary variables {β_ik}_{∀i≠k} are updated by solving the second convex subproblem, denoted by (29), where ω^(1)_ik and ω^(2)_ik denote the Lagrangian multipliers associated with constraints (13). Since (29) is a convex optimization problem, it can be easily solved by the interior-point method using a standard convex optimization tool such as CVX [26]. Thirdly, the dual variables λ and λ' are updated according to the residuals of constraints (11) and (12).
The overall ADMM-SCA algorithm can be summarized as Algorithm 1. The computational complexities of solving the convex subproblems (27) and (29) via the interior-point method are O((4K^2 + MK)^{3.5}) and O((3K^2)^{3.5}), respectively [22]. Therefore, the overall computational complexity of the ADMM-SCA algorithm is O(T((4K^2 + MK)^{3.5} + (3K^2)^{3.5})), where T denotes the number of iterations needed to reach convergence. According to the analyses in [27] and [28], the ADMM-SCA algorithm converges to a feasible and stationary solution of problem P_1 with polynomial time complexity. However, the ADMM framework generally suffers from slow convergence and requires a high computational complexity. Moreover, the obtained discrete variables α and β are usually highly sensitive to the initialized parameters, which thus significantly impacts the resulting performance. Hence, we randomly initialize W and test N_ini groups of initialized parameters for {β, α, S, I} to empirically choose the initialization points in different communication regimes.
IV. LOW-COMPLEXITY MATCHING-SCA BASED SOLUTION
Although the ADMM-SCA algorithm achieves monotonic convergence to a desirable suboptimal solution, it may need a large number of iterations for convergence and require high computational complexity when user number increases. Moreover, the achieved performance is typically highly sensitive to the initialized parameters due to the discrete optimization. To overcome these shortcomings, in this section we further propose a novel low-complexity and efficient strategy, which solves the non-convex NP-hard MINLP based on the matching game theory and the inexact SCA method.
A. Many-To-Many SIC Matching Problem
Firstly, we model the cluster-free SIC optimization as a dynamic two-sided matching game among the connected users. We define two virtual user sets U and V with logically disjoint entries, where U consists of the users that execute SIC to cancel the interference imposed by users from V. Without loss of generality, we define U = V = K. If user u ∈ U is scheduled to carry out SIC to eliminate interference from user v ∈ V, we say user u and user v are matched to each other, which is denoted by (u, v).
The SIC operations can be formulated as a matching problem, which yields the following definitions and remarks.
Definition 1 (Many-to-Many Matching). A many-to-many matching µ is a function from the set U ∪ V to the set of all subsets of U ∪ V, such that: (i) |µ(u)| ≤ N_u, ∀u ∈ U; (ii) |µ(v)| ≤ N_v, ∀v ∈ V; (iii) µ(u) ⊆ V, ∀u ∈ U, and µ(v) ⊆ U, ∀v ∈ V; (iv) v ∈ µ(u) if and only if u ∈ µ(v). In the above definition, condition (i) means that each user u ∈ U can carry out SIC for a subset of users in V, and the cardinality of µ(u) cannot exceed N_u. Condition (ii) indicates that the interference from each user v ∈ V can be eliminated via SIC by at most N_v users from U. Condition (iii) states that the mapping of each user u ∈ U is a subset of V, and vice versa. Condition (iv) implies that if u ∈ U matches with v ∈ V, then v matches with u as well. Without loss of generality, we set N_u = N_v = K here. Thus, both the cluster-based and beamformer-based NOMA approaches are included as special cases of the proposed strategy.
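A minimal data structure for the matching µ, together with a check of conditions (i)-(iv) of Definition 1, could look as follows; the helper is our own, with N_u = N_v = K as in the text.

```python
def is_valid_matching(mu_U, mu_V, K, N_u=None, N_v=None):
    """mu_U[u] is the set of users in V whose signals user u cancels via SIC;
    mu_V[v] is the set of users in U that cancel user v's signal.
    Checks the four conditions of the many-to-many matching definition."""
    N_u = K if N_u is None else N_u
    N_v = K if N_v is None else N_v
    users = set(range(K))
    ok_i = all(len(mu_U[u]) <= N_u for u in users)                    # (i)
    ok_ii = all(len(mu_V[v]) <= N_v for v in users)                   # (ii)
    ok_iii = all(mu_U[u] <= users for u in users) and \
             all(mu_V[v] <= users for v in users)                     # (iii)
    ok_iv = all((v in mu_U[u]) == (u in mu_V[v])                      # (iv)
                for u in users for v in users)
    return ok_i and ok_ii and ok_iii and ok_iv

K = 4
mu_U = {0: set(), 1: {0}, 2: set(), 3: {0, 1}}   # user 3 cancels users 0 and 1
mu_V = {0: {1, 3}, 1: {3}, 2: set(), 3: set()}
print(is_valid_matching(mu_U, mu_V, K))          # True
```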
During the matching process, each user u ∈ U and v ∈ V has its individual preference list. Given a matching µ, we formulate the utility function U^µ_k, i.e., the preference value of each user k ∈ U ∪ V over the matching µ, as its achievable data rate while fixing the matching states of the other users, which is given in (32). Here, (32) returns the achievable data rate of user u as implied by the SIC constraint α_ik R_{k→k} ≤ R_{i→k} from (10b), where R^µ_k, α^µ, and W^µ denote the data rate of user k, the SIC operations, and the beamforming coefficients corresponding to the matching µ, respectively. Therefore, the total utility over the matching µ can be expressed as U^µ = Σ_{k∈K} U^µ_k.
Lemma 2. The formulated two-sided many-to-many matching problem has the properties of externality and non-substitutability.
Proof. The properties can be demonstrated as follows. i) Externality: Owing to the features of the multi-antenna NOMA system, the interference suffered by each user k ∈ U ∪ V varies with the matching states of the other users. Therefore, the achievable data rate and the preference of each user k also depend on the other users, and each user should take the internal relationships among the other users into account when determining its matching state. This renders the externality of the SIC matching problem. ii) Non-substitutability: Given the two virtual user sets U and V, each user u ∈ U prefers to match with a subset V_u of V, which is defined as the choice of u in V, denoted by C_u(V) = V_u. Here, user u prefers V_u to any other subset of V, denoted as V_u ≻_u V', ∀V' ⊆ V, V' ≠ V_u, so that the members of V_u cannot be substituted by other users, which renders the non-substitutability. This completes the proof.

To address the externality and ensure exchange stability, we first introduce the matching swap operation defined in conventional matching theory [29], [30],
$$ \mu_{u u'}^{v v'} = \left\{ \mu \setminus \{(u, v), (u', v')\} \right\} \cup \{(u, v'), (u', v)\}, \qquad (33) $$
which means that users u and u' exchange their matched users v and v' while keeping all other users' matching states unchanged. Here, we also consider swap operations involving "holes", i.e., the empty set ∅ that does not contain any user. Specifically, the matching state of the user pair (u, v) can be changed from matched to unmatched by the swap operation µ^{∅∅}_{uv}, and from unmatched to matched by µ^{∅v}_{u∅}. Based on the matching swap operation, the swap-blocking pair can be defined as follows.
Definition 2 (Swap-Blocking Pair). Given a matching µ with v ∈ µ(u) and v' ∈ µ(u'), the pair (u, u') is a swap-blocking pair if and only if (i) ∀k ∈ {u, u', v, v'}, U_k(µ^{v v'}_{u u'}) ≥ U_k(µ), and (ii) ∃k ∈ {u, u', v, v'} such that U_k(µ^{v v'}_{u u'}) > U_k(µ), where U_k(µ) denotes the utility of a user k ∈ U ∪ V over the matching µ.
B. Extended Many-To-Many Matching for SIC Operations
In the proposed framework, the SIC operations α determine both the user matching and the decoding order, as implied by (2). However, the traditional swap operation in (33) only optimizes the user matching, and cannot dynamically and jointly optimize the NOMA SIC decoding order.
To address this problem, we extend the traditional matching algorithm to efficiently optimize α for the generalized cluster-free multi-antenna NOMA.
In contrast to the traditional matching model, the matched pairs (u, v) and (v, u) in the formulated matching game have completely different physical meanings, which leads to swaps of the SIC decoding order. Therefore, we propose a decoding order swap operation, which enables the exchange of the SIC decoding order between two matched users u and v: the decoding order swap operation µ̃^{vu}_{uv} replaces the matched pair (u, v) by (v, u) while keeping all other users' matching states unchanged (34). Combining the conventional user matching swap operation (33) with the decoding order swap operation (34), we define the enhanced swap-blocking pair as follows. Definition 3 (Enhanced Swap-Blocking Pair). A pair of users (u, u') is an enhanced swap-blocking pair if it satisfies either of the following conditions: (i) for v ∈ µ(u) and v' ∈ µ(u'), we have α_{v'u} = α_{vu'} = 0 and U(µ^{v v'}_{u u'}) > U(µ); (ii) for v' = u and v = u', we have v ∈ µ(u) and U(µ̃^{vu}_{uv}) > U(µ).
The above condition (i) indicates that, after a conventional matching swap (33), the overall utility should be increased; moreover, to ensure that the matching swap does not conflict with the existing SIC decoding relations, the constraint α_{v'u} = α_{vu'} = 0 is imposed in condition (i). Condition (ii) implies that, after a decoding order swap operation (34), the overall utility should be improved.
Note that Definition 3 determines the swap rule of the formulated matching. To be more specific, if there exists an enhanced swap-blocking pair satisfying either of the above conditions (i) and (ii), then the matching is not stable, and the corresponding swap µ^{v v'}_{u u'} or µ̃^{vu}_{uv} would be "approved". Therefore, by extending the concept of exchange stability in conventional matching [31], we can define the enhanced exchange stability as follows.
Definition 4 (Enhanced Exchange-Stable Matching). The two-sided matching µ is an enhanced exchange-stable matching if and only if there does not exist an enhanced swap-blocking pair.
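The swap search implied by Definitions 3-4 can be sketched as a simple hill-climbing loop: it keeps applying improving matching swaps and decoding-order swaps and stops when no improving swap remains, i.e., at an enhanced exchange-stable point. In the sketch below (ours), the total-utility evaluation is passed in as a callback standing in for the rate computation with re-optimized beamforming, the pairwise exchange between two different matched pairs is omitted for brevity, and the toy utility at the end is purely illustrative.

```python
import numpy as np

def greedy_swap_search(alpha, utility, max_rounds=50):
    """Hill-climbing over SIC patterns using two of the swap moves described
    in the text: (a) adding/removing a single match via a swap with a 'hole',
    and (b) exchanging the decoding order of an already matched pair.
    `utility(alpha)` must return the total utility of a candidate pattern."""
    K = alpha.shape[0]
    best = utility(alpha)
    for _ in range(max_rounds):
        improved = False
        for u in range(K):
            for v in range(K):
                if u == v:
                    continue
                cand = alpha.copy()
                if alpha[u, v] == 0 and alpha[v, u] == 0:
                    cand[u, v] = 1                   # hole swap: add (u, v)
                elif alpha[u, v] == 1:
                    cand[u, v] = 0                   # hole swap: remove (u, v)
                val = utility(cand)
                if val > best + 1e-9:
                    alpha, best, improved = cand, val, True
                if alpha[u, v] == 1 and alpha[v, u] == 0:
                    cand = alpha.copy()              # decoding-order swap
                    cand[u, v], cand[v, u] = 0, 1
                    val = utility(cand)
                    if val > best + 1e-9:
                        alpha, best, improved = cand, val, True
        if not improved:
            break                                    # enhanced exchange-stable
    return alpha, best

# Toy utility: reward cancelling strong cross-interference (illustration only).
rng = np.random.default_rng(2)
X = rng.random((4, 4)); np.fill_diagonal(X, 0)
util = lambda a: float((a * X).sum() - 0.3 * a.sum())
alpha0 = np.zeros((4, 4), dtype=int)
alpha_star, val = greedy_swap_search(alpha0, util)
print(alpha_star, val)
```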
C. Matching-SCA Based Joint Optimization
Based on the proposed extended matching, we further develop a dual-loop iterative algorithm to jointly optimize the SIC operations and the transmit beamforming. In the outer loop, the matching state µ of the SIC operations is updated by the extended many-to-many matching while fixing the transmit beamforming. In the inner loop, given the SIC operations, the transmit beamforming W is sequentially optimized via an SCA process.
The inner-loop transmit beamforming optimization can be illustrated as follows. Given the current matching state µ and the corresponding SIC operation variables α^µ, the beamforming W can be optimized by invoking the SCA method, as analysed in Section III-B. By fixing α^µ, W can be optimized by sequentially solving a convex problem of the form (35), which maximizes Σ_{k∈K} log_2(1 + r_kk) over {W, S, I, r} subject to the linearized constraints of Section III-B (with α fixed to α^µ) together with (17f) and (17h). The developed joint optimization algorithm, namely Matching-SCA, can be summarized as Algorithm 2. Firstly, the initial matching states of all users are set as unmatched, i.e., α = I_{K×K}. Moreover, the transmit beamforming W is randomly initialized with a feasible point. The iterative procedure exploits a dual-loop structure. Specifically, given the current matching state µ, the beamforming coefficients W are sequentially optimized via SCA in the inner loop to an inexact solution that is not required to be locally converged (step 7-step 12). Thereafter, by fixing the transmit beamforming W, we further perform an enhanced swap-matching process in the outer loop (step 13-step 20). By searching for enhanced swap-blocking pairs, preferable matching swaps and decoding order swaps are applied to improve the sum rate. Based on the resulting matching α^µ and W, we further update the auxiliary variables I and S accordingly, and the inner-loop counter is updated as l ← l + 1.
The above process is repeated until the termination criterion is reached.
D. Theoretical Analysis
The properties of the proposed Matching-SCA algorithm with regard to the stability, convergence, and optimality can be theoretically analysed as follows.
Proposition 2 (Stability). The proposed Algorithm 2 eventually reaches an enhanced two-sided exchange-stable matching.
Proof. We prove this proposition by contradiction. Assume that there exists an enhanced swap-blocking pair (u, u') in the resulting matching µ*, which satisfies condition (i) or condition (ii) in Definition 3. According to step 14-step 20 in Algorithm 2, the proposed algorithm continues to perform swap operations until no enhanced swap-blocking pair satisfying the swap conditions exists in the current matching. In that case, µ* would not be the resulting matching, which contradicts the initial assumption.
Therefore, it can be concluded that an enhanced exchange stability can be achieved by the proposed Matching-SCA Algorithm 2 eventually. This completes the proof.
Here, c = corr × e^{jφ} in the channel-generation model, with φ being a randomly generated phase within [0, 2π] and corr controlling the mean channel correlation between users. For different communication regimes, N_ini = 20 groups of initialization parameters are tested to empirically select the initialization points for the ADMM-SCA algorithm.

A. Convergence Behavior

In Fig. 3(a), we first compare the convergence behaviors under different user numbers. Here, we introduce the exhaustive search-SCA algorithm as a benchmark, which solves the beamforming coefficients W by SCA while obtaining the globally optimal α by exhaustively searching all possible combinations. As shown in Fig. 3(a), if the initialized parameters are well tuned, the ADMM-SCA algorithm can achieve the near-optimal SIC operations indicated by exhaustive search-SCA. However, ADMM-SCA incurs a high computational complexity and takes more than 60 iterations to converge when the number of users increases. Moreover, the Matching-SCA algorithm yields performance close to that of ADMM-SCA while achieving much faster convergence: it converges within 30 inner-loop iterations even in the severely overloaded system.
It is worth pointing out that Matching-SCA may outperform ADMM-SCA, especially in overloaded systems. This is because the binary SIC operation variables α obtained by ADMM-SCA are highly sensitive to the initialized parameters, whereas Matching-SCA can alleviate this over-dependence on the parameter initialization.
B. Performance Comparisons
To demonstrate the performance of the proposed generalized cluster-free framework, we compare it with the BB-NOMA, CB-NOMA, and SDMA baselines; the comparison shows that the proposed framework exploits highly channel-correlated users more adequately. In Fig. 4(b), we further compare the SIC decoding complexity, i.e., the number of matched user pairs Σ_{k∈K} Σ_{j∈K\{k}} (α_kj + α_jk) that implement SIC decoding in the different approaches. Owing to the cluster-free scheme, the SIC decoding complexity of the proposed framework increases with the users' channel correlations and is higher than that of CB-NOMA/SDMA but lower than that of BB-NOMA, which demonstrates that it achieves scenario-adaptive SIC operations and efficient interference suppression. We set corr = 0.6 and corr = 0.9 for the low and high channel correlation cases, respectively; the corresponding results are summarized in Table I. It can be observed that the proposed generalized cluster-free framework yields the highest performance regardless of the system loading and channel correlations.
VI. CONCLUSIONS
A novel generalized multi-antenna NOMA framework has been proposed based on cluster-free SIC, which can reap the gains and overcome the shortcomings of traditional approaches, thus enabling a scenario-adaptive multi-antenna NOMA paradigm for NGMA. The transmit beamforming and the SIC operations were jointly optimized to maximize the sum rate subject to the SIC decoding conditions and the data rate constraints of users. To tackle the resulting highly coupled NP-hard MINLP problem, an ADMM-SCA algorithm was developed to obtain a stationary solution. Furthermore, to accelerate convergence and overcome the over-dependence of ADMM-SCA on parameter initialization, a Matching-SCA algorithm was proposed. Based on an extended many-to-many matching procedure, the proposed Matching-SCA converges to an enhanced exchange-stable matching, which guarantees local optimality. Our numerical results showed that the proposed Matching-SCA algorithm has comparable performance to ADMM-SCA and achieves fast convergence despite the increase in the number of connected users. Numerical results also verified that the proposed framework outperforms traditional multi-antenna NOMA approaches under varying channel correlations and in both underloaded and overloaded regimes, which confirms the effectiveness of the proposed framework and motivates future research on generalized cluster-free NOMA for empowering NGMA.
APPENDIX A: PROOF OF PROPOSITION 1
Let {α^{t*}, W^{t*}, I^{t*}, S^{t*}, r^{t*}} and {W^{t,l}, I^{t,l}, S^{t,l}, r^{t,l}} denote the optimal values obtained from outer-loop iteration t and the corresponding inner-loop iteration l of Algorithm 2, respectively. In each inner-loop SCA iteration l, the functions q_1(w_k) and q_2(I_ik) defined in Section III-B are linearized at the previous iterates w_k^{t,l−1} and I_ik^{t,l−1}. From (21), by solving problem (35), the obtained S_ik^{t,l} and w_k^{t,l} always satisfy
$$ S_{ik}^{t,l} \le \tilde{q}_1\left(\mathbf{w}_k^{t,l}, \mathbf{w}_k^{t,l-1}\right) \le q_1\left(\mathbf{w}_k^{t,l}\right) = \left|\mathbf{h}_i^H \mathbf{w}_k^{t,l}\right|^2, \quad \forall i, k \in \mathcal{K}. $$
Furthermore, considering q_2(I_ik^{t,l}) ≤ q̃_2(I_ik^{t,l}, I_ik^{t,l−1}) from (22), the inequality
$$ r_{ik}^{t,l} + q_2\left(I_{ik}^{t,l}\right) \le r_{ik}^{t,l} + \tilde{q}_2\left(I_{ik}^{t,l}, I_{ik}^{t,l-1}\right) \le \log_2\left(I_{ik}^{t,l} + S_{ik}^{t,l}\right) $$
holds, ∀i, k ∈ K. Hence, given the fixed SIC operation variables α^{t*}, any feasible solution {W^{t,l}, I^{t,l}, S^{t,l}, r^{t,l}} of problem (35) is also feasible for P_1.
ABO Incompatible Kidney Transplantation—Current Status and Uncertainties
In the past, ABO blood group incompatibility was considered an absolute contraindication for kidney transplantation. Progress in desensitization practice and immunologic understanding has allowed increasingly successful ABO incompatible transplantation during recent years. This paper focuses on the history, reported outcomes, desensitization modalities and protocols, posttransplant immunologic surveillance, and antibody-mediated rejection in transplantation with an ABO incompatible kidney allograft. The mechanisms underlying accommodation and antibody-mediated injury are also described.
Introduction
The most effective treatment of end-stage renal disease is kidney transplantation, but a severe donor shortage has significantly limited this treatment. To overcome this profound donor shortage, immunologic barriers historically considered as absolute contraindications to transplantation are being reevaluated. One such barrier is the ABO blood group incompatibility. Kidney transplantation across the ABO blood group barrier has the potential to expand the pool of donors, increase the availability of transplantable organs, and decrease the prolonged time on the waiting list for a kidney [1][2][3][4]. In addition, aided by a better understanding of the underlying immunologic mechanisms and by various effective regimens for controlling them, ABO-incompatible kidney transplantation (ABOi KT) is being performed with increasing frequency [5].
To clarify the current status and uncertainties in this area, the present paper focuses on recently reported outcomes of ABOi KT, preconditioning methods before transplantation, posttransplant monitoring and management, diagnosis and treatment of antibody-mediated rejection, and the basic elucidation of immune tolerance and accommodation.
History of ABO-Incompatible Kidney Transplantation
Brief History of ABOi KT. The use of an ABO-incompatible (ABOi) kidney is not a recent development. The first attempt at ABOi KT was reported in 1955 by Chung et al. [6]. In their experience, eight of ten ABOi kidney allografts did not work successfully within the first few postoperative days. Although further attempts at ABOi KT have been sporadically reported, these series revealed similar poor outcomes with graft survival rates of approximately 4% at one year [7][8][9][10]. Therefore, ABOi KT was largely abandoned. An interesting clinical trial was reported in 1987, when Thielke et al. [11] showed that 12 of 20 transplants from blood group A2 donors into O recipients maintained long-term allograft function. This procedure is based on the finding that the expression of the A antigen on the red blood cell in the A2 donor was much weaker than that in the A1 donor. Regrettably, this technique can be used only in a small minority of KT candidates.
In 1987, Alexandre et al. introduced an effective desensitization protocol to achieve success in ABOi living donor KT [11]. This protocol included repeated pretransplant plasmapheresis to reduce the titers of anti-A or -B antibodies, antilymphocyte globulin-based induction, triple maintenance immunosuppression with cyclosporine, azathioprine, and corticosteroids, and concomitant splenectomy [12]. A one-year graft survival of 75% and a recipient survival of 88% were achieved in the 23 recipients [2]. While their results were impressive and became the basis of subsequent desensitization protocols for ABOi KT, ABOi KT remained uncommon in the West.
These efforts regarding ABOi KT were significantly expanded in Japan because of the near absence of deceased donors and the fact that only 0.15% of living donors belong to blood group A2. The largest number of ABOi KTs since 1989, more than 1000 cases, has been performed in Japan [13]. The percentage of ABOi KT surgeries has reached 14% of all living donor KTs performed in Japan [11]. Following the remarkable results reported by Japanese centers utilizing modern desensitization techniques, together with the development of new immunosuppressive therapies, ABOi KT began receiving new interest in Europe and the USA in the early 2000s [12].
Published Clinical Outcomes of ABOi KT.
Short-term results from the protocols described above have been notable. For instance, in the study of Tydén et al. [14], recipients with a baseline anti-A or -B IgG titer of up to 1:128 were successfully transplanted with no episode of acute rejection. Montgomery [15] reported one-year patient and graft survivals of 96.3% and 98.3%, respectively, in a cohort of 60 consecutive ABOi KTs using a variety of protocols. Oettl et al. [16] demonstrated a 100% survival rate of both patients and grafts at one year after transplant.
Moreover, long-term results of ABOi KT reported by western and Japanese transplant centers have also shown that ABOi KT is equivalent to ABO-compatible KT [12]. Genberg et al. [17] reported that ABOi KT had no negative impact on long-term graft function compared to that of ABO-compatible KT in terms of patient survival, graft survival, or incidence of acute rejection after a mean follow-up of three years. Tydén et al. [18] found that graft survival was 97% for ABOi KT compared with 95% for ABO-compatible KT in their three-center experience at five-year follow-up. Patient survival was 98% in both KT groups.
In their analysis of UNOS data, Gloor and Stegall [19] concluded that a long-term immunological response against ABO incompatibility has little effect on graft survival with current immunosuppressive protocols and patient monitoring. Tanabe [20] summarized the outcomes of 851 ABOi KTs performed in 82 institutions in Japan between 1989 and 2005. The five-year graft survival in their study was 79%, with patient survival at 90%. Montgomery [15] obtained five-year patient and graft survivals of 89% following ABOi KT at Johns Hopkins Hospital. Fuchinoue et al. [21] reported a five-year graft survival rate of 100%, whereas Ishida et al. [13] achieved a graft survival of 57% and patient survival of 89% at ten years postoperatively for more than 130 cases of ABOi KT.
General Methodology of Desensitization.
For the successful performance of ABOi KT, the antibody-mediated response must be understood and targeted. Over the past 20 years, several strategies have been developed to resolve and modulate this response. These strategies, or desensitization protocols, are all based on the same principles [5,12,22]: the removal of preexisting antibodies directed at the donor ABO antigen, and waiting to transplant until the anti-ABO antibody titer is below a set target. Additionally, the prevention of further production of new recipient anti-ABO antibodies before and after transplantation is another founding principle. Ordinarily, several pretransplant apheresis sessions are required for antibody removal. To prevent reformation of the antibody, apheresis is followed by intravenous immunoglobulin, a combination of immunosuppressive therapies, and, in some protocols, splenectomy [11,12]. This procedure usually occurs over a period of one to two weeks.
Antibody Depletion Technique.
In the field of ABOi KT, currently used antibody depletion techniques include therapeutic plasma exchange, double-filtration plasmapheresis, and antigen-specific immunoadsorption. The great difference among these techniques is their degree of selectivity [12].
The simplest and most common method to remove antibody from plasma is therapeutic plasma exchange, in which large amounts of plasma are withdrawn and replaced with colloid solutions [23]. This procedure eliminates approximately 20% of the anti-ABO antibodies with each session. However, this technique is not selective: in addition to the anti-ABO antibodies, it also removes coagulation factors, hormones, and protective antiviral and antibacterial immunoglobulin G (IgG) and immunoglobulin M (IgM). The removal of these factors increases the risk of bleeding or infection [24]. Nevertheless, this technique is by far the least expensive means of removing antibodies [12].
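As a rough back-of-the-envelope illustration of the figure quoted above (about 20% of anti-ABO antibodies removed per session), the short sketch below estimates how many exchange sessions would be needed to bring circulating antibody down to a chosen fraction of baseline. It assumes the same removal fraction applies at every session and ignores antibody rebound between sessions, which later sections of this paper show is not negligible; the function and the example numbers are purely illustrative and are not part of any cited protocol.

```python
import math

def sessions_needed(removal_per_session=0.20, target_fraction=0.125):
    """Estimate plasma-exchange sessions needed to reduce circulating
    anti-ABO antibody to `target_fraction` of baseline, assuming each
    session removes a fixed fraction and there is no rebound in between."""
    remaining_after_one = 1.0 - removal_per_session          # e.g. 0.80
    # remaining(n) = remaining_after_one ** n  <=  target_fraction
    n = math.log(target_fraction) / math.log(remaining_after_one)
    return math.ceil(n)

# Example: dropping from roughly 1:64 to about 1:8 corresponds to an
# 8-fold (1/8) reduction in antibody, i.e. target_fraction = 0.125.
print(sessions_needed(0.20, 0.125))   # -> 10 sessions under these assumptions
```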
The selective techniques of double-filtration plasmapheresis and antigen-specific immunoadsorption are safe and more effective and are therefore usually the first choice. Because no coagulation factors are eliminated, large plasma volumes can be processed, and the resultant efficacy is increased compared to that of therapeutic plasma exchange [12]. Using a second filter, double-filtration plasmapheresis is capable of eliminating the plasma fraction containing the immunoglobulins [12] and decreases the amount of plasma discarded [23]. In the process of immunoadsorption, the plasma is processed through a Glycosorb ABO immunoadsorbent column and reinfused into the patient. There are no volume losses, and thus the number of adsorption cycles has no limit [23]. Although ABOi KT is an American Society for Apheresis (ASFA) category II indication, no clinical trial has compared the different antibody reduction procedures, nor is there a standardized monitoring protocol [4].
Intravenous Immunoglobulin.
Intravenous immunoglobulin plays a role in the downregulation of the antibody-mediated immune response [5]. It not only blocks the Fc receptor on mononuclear phagocytes but also directly neutralizes the alloantibody. Further, it inhibits CD19 expression on activated B cells, as well as complement and alloreactive T cells. Although the alloantibody rebounds within days of the discontinuation of plasmapheresis, the benefit of intravenous immunoglobulin may continue for many months after drug administration.
Splenectomy.
Traditionally, concurrent splenectomy was an important prerequisite of the desensitization protocol for ABOi KT, based on the idea that it contributed to the reduction of the antibody-producing B-cell pool [5]. Alexandre et al., the early investigator of ABOi KT, suggested that the splenectomized recipient had a much smaller risk of antibody-mediated rejection.
However, whether splenectomy is essential for successful ABOi KT remains unproven. Sonnenday et al. [2] found that the suppression of anti-ABO antibody after splenectomy was not significantly different from that in non-splenectomized patients, and reported that splenectomized recipients had a 25% greater mortality at 84 months compared with non-splenectomized recipients. Gloor et al. [25] reported that splenectomy is not necessary even for patients with high baseline anti-ABO antibody titers. Takahashi et al. [26] demonstrated that splenectomy is not necessary to inhibit antibody production because significant numbers of memory cells exist in the bone marrow.
There is a growing consensus that splenectomy is not necessary in ABOi KT recipients [19]. Most investigations have commonly substituted anti-CD20 monoclonal antibody induction [5].
Anti-CD20 Monoclonal Antibody.
The anti-CD20 monoclonal antibody, rituximab, directly inhibits B-cell proliferation and induces cellular apoptosis through the binding of complement. Complement, in turn, mediates antibody-dependent cell-mediated cytotoxicity and subsequent cell death [27].
Several centers have established the use of rituximab as a chemical splenectomy due to its potent ablation of the B-cell compartment [20]. The advantage of rituximab over splenectomy is that it ablates the B cell during the period of greatest risk of antibody-mediated rejection and then allows the humoral immune system to heal with an intact spleen [28]. Fuchinoue et al. [21] reported that patients who received rituximab induction had a lower incidence of acute antibody-mediated rejection and a better five-year graft survival than ABO-compatible or -incompatible KT recipients who were not treated with rituximab.
However, rituximab's mechanism of action in ABOi KT is not clear. Since the CD20 antigen is not expressed on plasma cells, rituximab is effective against pre-B and B cells but not against plasma cells, which contribute to antibody production [26]. Some data [24] suggest that rituximab has a much weaker impact on the memory B-cell population, the progenitor of the IgG-secreting plasma cell. The appropriate dosage and the optimal timing of administration also remain unclear. Toki et al. [29] indicated that low-dose rituximab (less than 375 mg/m²) has a potent impact on the depletion of B cells in the spleen and peripheral blood. They demonstrated that a single dose of rituximab, even at 50 mg/m², depleted B cells from the peripheral blood as effectively as did the 375 mg/m² dose. Fuchinoue et al. [21] revealed that there was no difference in serum creatinine at one year after transplant, irrespective of the dose of rituximab.
As an extension of this idea, these findings have led to a modified protocol that consists of an antibody-depleting procedure and intravenous immunoglobulin, with no long-term B-cell suppression from splenectomy or rituximab. Flint et al. [30] and Montgomery et al. [31] have applied this protocol in their cohort studies. Montgomery's results showed that the five-year graft survival rate was 88.7%, with no instance of antibody-mediated rejection or graft loss.
Quadruple Immunosuppression.
Immunosuppressive regimens targeting both T-cell-mediated and B-cell-mediated immunity are required, similar to those used for ABO-compatible KT [26]. Calcineurin inhibitors (cyclosporine and tacrolimus) and antimetabolites (mycophenolate mofetil and azathioprine) are mainly used with low-dose steroids. In addition, monoclonal or polyclonal antibody agents (anti-CD25 antibody or antithymocyte globulin) are also often used during the induction period. Tanabe [20] started to use tacrolimus in combination with mycophenolate mofetil as a basic immunosuppressant after ABOi KT and reported greatly improved graft survival compared with that achieved with cyclosporine.
Antimetabolites seem to take seven to ten days to become effective as in vivo immunosuppressants. Therefore, immunosuppressive therapy for desensitization should be started before transplantation in order to adequately inhibit antibody production [20].
Current Desensitization Protocols.
In order to achieve a successful outcome with ABOi KT, many centers have employed their own independent desensitization protocols. Although there are slight differences in the preconditioning modalities depending on the transplant center, most include a combination of pretransplant plasmapheresis, intravenous immunoglobulin, and tacrolimus-mycophenolate-based immunosuppression with antibody induction. Splenectomy or rituximab administration is used selectively. After transplant, very close monitoring of the anti-ABO antibody titer is typically carried out for a minimum of two weeks. If necessary, plasmapheresis is added to eliminate a rebounding antibody level [5,15]. The details are shown in Table 1.
Graft Accommodation and B-Cell Tolerance.
If anti-ABO antibodies are removed prior to transplantation, one of three types of immune response may occur: rejection, immune tolerance, or accommodation [32]. (1) About 2-5% of patients produce antibodies to the incompatible ABO antigen, which mediate allograft rejection. (2) Some recipients seem to have immunologic tolerance to the incompatible ABO antigen because they do not reject the allograft or produce anti-ABO antibody against it. (3) The others display a state of accommodation toward the allograft [33]. Theoretically, natural anti-ABO antibody can induce antibody-mediated rejection in ABOi KT [26], which can manifest as hyperacute rejection or as acute or delayed humoral rejection. Antibody-mediated damage can result in rapid and irreversible graft thrombosis due to complement activation or can contribute to long-term graft dysfunction [34]. However, even though the anti-ABO antibody usually reaccumulates and persists after successful ABOi KT, the recipient maintains satisfactory graft function. This resistance of the allograft to antibody-mediated rejection despite the significant presence of anti-ABO antibodies in the recipient serum is known as accommodation [1,5,35,36].
Park et al. [1] defined the criteria of accommodation in ABOi KT to include (1) detectable anti-ABO antibody in the recipient serum, (2) normal graft histology according to light microscopy, (3) the presence of A or B antigen in the graft, and (4) renal function similar to that of ABO-compatible patients (GFR greater than 45 mL/min at one year after transplant). In 2006, the American Society of Transplantation reached a consensus on accommodation, stating that it occurred when C4d deposition was detected with normal function and structure of the graft [35].
How accommodation is induced and through what mechanism it is maintained are not well understood. Various hypotheses have been proposed to describe the mechanism responsible for accommodation [34,37], including changes in the function of antidonor antibody, changes in the antigen, acquired resistance of the allograft through the expression of an antiapoptotic gene, or an expression of complement regulatory protein.
The study of Ishida et al. [33] presented the difference in quality between antibodies produced by accommodated and non-accommodated recipients. Kirk et al. [38] suggested that accommodation is related to a shift from the IgG isotype to the IgG2 isotype that is less effective at activating complement and that competitively inhibits the binding of the more cytotoxic isotype. Chang and Platt [39] discovered that healthy organs could absorb antidonor antibody in large amounts, for which the accommodated functioning graft served as a sink. According to these results, accommodation may reflect a change in the properties of the antibody or antigen.
Park et al. [1] and Delikouras and Dorling [40] reported that the Bcl-2 and Bcl-xl, antiapoptotic molecules, were found in the accommodated ABO incompatible kidney graft. However, Bax, a proapoptotic marker, was not detected.
Salama et al. [41] demonstrated upregulation of Bcl-xL in the microvascular endothelial cells of accommodated grafts.
These findings are consistent with the hypothesis that the endothelium of the kidney allograft will be initially exposed to low titers of anti-ABO antibody, which will in turn instigate a series of protective changes that manifest as accommodation.
Consistent with this concept, data from Stegall et al. [42] suggest that a decrease in tumor necrosis factor-α and alteration in SMAD (mothers against decapentaplegic homolog) gene expression may be important in long-term accommodated grafts. Griesemer et al. [35] introduced the concept that the upregulation of a complement regulatory protein such as CD59 seems to be involved in accommodation by preventing the formation of the membrane attack complex (MAC) on the accommodated graft.
Accommodation is different from immune tolerance. The accommodated allograft kidney remains protected even though it is transplanted into a new recipient. However, an immunotolerant allograft preserves the potential to reject the tissue from the same donor. Immune tolerance is a state of immune unresponsiveness to the presence of specific non-self-antigens in the absence of long-term immunosuppression. Published studies have provided evidence that B cells have an important role in tolerance. Kirk et al. [38] postulated that the prolonged depletion of alloreactive B cells in tolerant mice is achieved through the dominant and active suppression of T helper cells. Ogawa et al. [32] suggested that prolonged T-cell suppression in the ABOi KT recipient may result in a similar induction of tolerance to that of the incompatible ABO antigen.
Posttransplant Monitoring and Desensitization (Posttransplant Immunologic Surveillance).
The monitoring of the anti-ABO antibody titer is critical for determining the effectiveness of desensitization and the optimum time to permit graft implantation. After transplantation, the anti-ABO antibody level must be monitored to detect its reaccumulation, which may indicate or induce antibody-mediated rejection. In patients with a higher rebound in serum antibody production after the incompatible transplant, desensitization therapy, especially antibody-depletion procedures, should be repeated. Most studies [3,43] agree that posttransplant double-filtration plasmapheresis (DFPP) is less effective than therapeutic plasma exchange at preventing the rebound of anti-ABO antibody titers.
The Johns Hopkins group of Montgomery et al. [15,28] determined the initiation of posttransplant plasmapheresis based on the pretransplant baseline antibody level before preconditioning. Others [44] have suggested that preemptive posttransplant plasmapheresis may be dispensable, favoring an on-demand strategy according to posttransplant elevations of the antibody titer. However, there are conflicting opinions on which antibody titer is meaningful and how long antibody monitoring should continue.
Kayler et al. [45] found that all patients whose posttransplant antibody titer remained below 1:8 exhibited stable renal function. Those patients who had an increased titer above 1:64 experienced allograft failure. Stegall et al. [42] recommended initiating plasmapheresis if the antibody titer increases to 1:16 in the first two weeks after transplantation. Gloor et al. [25] showed that humoral rejection was rare when the antibody level was maintained below 1:8 in the first week and 1:16 in the second week after transplantation. They then allowed antibody titers to rise if the graft function and surveillance biopsies were normal. Takahashi [46,47] asserted that anti-ABO antibody titers should be suppressed to the lowest level during the first week after transplant, when surface antigenicity is increasing in the vascular endothelial cells of the graft kidney.
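To make the dilution arithmetic behind such thresholds explicit, the sketch below converts a two-fold serial dilution titer (for example 1:16) into its number of doublings and checks it against the week-dependent limits attributed to Gloor et al. above. This is only an illustration of the arithmetic; the helper names and the decision rule are hypothetical and should not be read as a clinical algorithm.

```python
def titer_doublings(titer: str) -> int:
    """Convert a two-fold dilution titer such as '1:16' into the number of
    doublings (1:16 -> 4, since 16 = 2**4). Assumes a power-of-two titer."""
    denom = int(titer.split(":")[1])
    return denom.bit_length() - 1

def exceeds_threshold(titer: str, week_after_transplant: int) -> bool:
    """Illustrative check against the 1:8 (first week) / 1:16 (second week)
    limits mentioned in the text; not a clinical decision rule."""
    limit = "1:8" if week_after_transplant <= 1 else "1:16"
    return titer_doublings(titer) > titer_doublings(limit)

print(exceeds_threshold("1:32", week_after_transplant=1))  # True
print(exceeds_threshold("1:16", week_after_transplant=2))  # False
```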
On the other hand, Park et al. [1] demonstrated that anti-ABO antibody titers return to detectable levels in all accommodated and nonaccommodated recipients, even in the absence of humoral rejection or chronic graft damage. Tobian et al. [3] also demonstrated that the clinical significance of an increased posttransplant anti-ABO antibody level is variable, and that there was no dependable correlation for antibody-mediated rejection. These findings are supported by the fact that a high titer of antibody is generally necessary but is not sufficient for antibody-mediated rejection.
Acceptable Anti-Blood Group Titer at the Time of Transplant.
Before the initiation of preconditioning, the baseline anti-ABO antibody titer is well known to be a significant predictor of the severity of antibody-mediated graft injury as well as of graft survival [19,42]. Although a few recent reports have shown that a high baseline antibody titer is no longer predictive of poor graft outcome in patients who received tacrolimus- or mycophenolate mofetil-based immunosuppression [6,19], antibody removal should be as complete as possible.
Most centers performing ABOi KT have adhered to the guideline that serum anti-ABO antibody titers should be 1:16 or lower before transplantation [26]. However, the acceptable upper limit of anti-ABO antibody titers is based exclusively on empirical evidence rather than deductive reasoning. Wilpert et al. [48] decreased the antibody titer to below 1:4 before transplant, while Chung et al. [6] chose a limit of 1:32 in their preconditioning protocol.
Some institutions use the antiglobulin IgG antibody titer endpoint as the critical titer when assessing patients before and after transplantation. Others consider both IgM and IgG antibody titers [4]. Which type of anti-ABO antibody, IgM or IgG, causes antibody-mediated graft damage due to the unique characteristics of the antibody remains unclear [49]. If anti-ABO IgG antibodies are believed to be responsible for worse graft outcomes, it is expected that blood group O recipients will be more likely to suffer graft damage than will A or B recipients.
Antibody-Mediated Rejection.
Antibody-mediated rejection (ABMR) is known as the primary cause of graft loss in ABOi KT. It is clear that ABMR has a negative influence on short-term outcome following ABOi KT. Recent studies have reported that ABMR occurred in 17.9% to 30% of ABO-incompatible kidney transplants [22,50,51]. Toki et al. [50] demonstrated that anti-ABO IgG antibody titers of 1:32 at the time of transplantation and the presence of donor-specific anti-HLA antibodies were independent risk factors for ABMR. Although the development of desensitization protocols has improved graft survival [50], the outstanding results are largely due to aggressive surveillance, early detection, and an enhanced therapeutic approach for ABMR [45].
Hyperacute rejection due to anti-ABO antibody does not occur within the first 24 hours, a window called the silent period. The greatest incidence of acute antibody-mediated rejection occurs two to seven days after transplant, and it does not typically occur after more than one month. Some researchers have therefore defined the first two weeks after transplantation as the critical period during which accommodation is usually induced and established. Once accommodation is established, acute antibody-mediated rejection does not occur, leading to the stable period [26].
Takahashi [26] classified acute ABMR into two types on the basis of antigen stimulation. Type I ABMR is caused by resensitization due to recovery of the ABO antigen. ABO antigen in the graft directly stimulates immunological responses, resulting in an explosive antibody production early in the critical period, typically, IgG antibody. Type II ABMR is caused by primary sensitization due to an ABO blood-group-associated antigen. In response to bacterial infection, an ABO-antigen-like substance is found on the surfaces of bacterial cells, acting as a cross-reacting antigen to cause sensitization and antibody production, mainly IgM. This type II rejection usually progresses more slowly and is less severe than is type I rejection.
Diagnosis of Acute Antibody-Mediated Rejection.
Clinically, ABMR was suspected when the serum creatinine level was increased relative to the previous value, together with a decrease in urine output. Renal biopsy is typically performed in such suspected cases [21]. Acute ABMR after ABOi KT is diagnosed on the basis of morphologic, immunohistologic, and serologic features. Morphologic evidence includes (1) leukocyte (neutrophil, monocyte, and macrophage) infiltration into the peritubular capillary and/or glomeruli; (2) arterial fibrinoid necrosis; (3) glomerular and arterial thrombi; (4) acute tubular injury. Immunohistologic evidence involves (1) peritubular capillary C4d deposition and (2) immunoglobulin and/or complement in arterial fibrinoid necrosis. For serologic evidence, circulating specific antidonor antibodies at the time of biopsy should be found. Overall, at least one finding in each of the three categories must be present for a biopsy to be diagnosed as acute ABMR [22,25,52]. These diagnostic criteria were established by the National Institutes of Health and the Banff working group. The former group also requires clinical evidence of graft dysfunction, while the latter group accepts the possibility of subclinical acute ABMR, defined as C4d staining and leukocyte margination in the peritubular capillary in a protocol biopsy of a well-functioning graft [52].
Antibody-mediated rejection is thought to be caused by endothelial cell activation in the graft [39]. Peritubular capillary C4d deposition has been considered to be an important histologic indicator of antibody-endothelial cell interaction and is a key element in the diagnosis of ABMR. The presence of donor-specific antibody or the presence of C4d alone in the peritubular capillary is not diagnostic of acute ABMR in the setting of ABOi KT [5,19]. Racusen and Haas [52] reported that C4d staining was associated with ABMR and graft injury in the malfunctioning graft, whereas it reflected graft accommodation in the stably functioning graft. Setoguchi et al. [53] detected C4d staining in the peritubular capillary in 94% of their protocol biopsy specimens but in only 28% of subclinical ABMR cases. Haas et al. [54,55] observed C4d deposition in 80% of protocol biopsies in the absence of allograft dysfunction or other histologic abnormalities suggestive of acute ABMR. Meanwhile, they [54] suggested that deposition of C3d, alone or in combination with C4d, may identify a more severe form of ABMR associated with a high risk of graft loss.
Treatment Strategy for Acute Antibody-Mediated Rejection.
Basically, ABMR was treated through the reinitiation of plasmapheresis to remove circulating antibodies [19,53]. Plasma exchange for the treatment of rejection was first reported in 1977 [43]. Just et al. [56] reported an immunoadsorption-based protocol to eliminate deposited IgG from the allograft.
Standard treatment for ABMR consists of repeated plasmapheresis-plasma exchange or immunoadsorption and intravenous immunoglobulin [57]. Various combinations of these therapeutic modalities have been successfully used to treat ABMR and improve outcomes. Most institutes [25,58,59] treated ABMR with a series of plasma exchanges followed by low-dose IVIG in addition to methylprednisolone until clinical improvement was achieved or until ABMR was histologically determined to have been resolved. Racusen and Haas [52] reported that reversal rates for ABMR were approximately 90% using such protocols, contrasted to reversal rates of less than 50% with traditional immunosuppression alone. Rituximab, an immunosuppressive agent which controls antidonor antibody production, and several complement inhibitors have also been reported to obtain significant efficacy [60].
The use of rituximab is intended to deplete B cells and thereby suppress antibody production. Several doses of rituximab at 375 mg/m² were intravenously administered to resolve ABMR [27]. Garrett et al. administered rituximab as a first-line therapy for ABMR. Sarwal et al. noted that allografts with CD20+ cells in biopsy specimens were strongly associated with the clinical phenotype of glucocorticoid resistance and chose to treat ABMR with rituximab.
Together with tacrolimus-mycophenolate rescue, the majority of ABMR cases have received antithymocyte globulin at the time of plasmapheresis [57]. Toki et al. [50] administered OKT3 at a dose of 5 mg/day for seven days in patients with persistent antibody-mediated rejection.
Previously mentioned anti-ABMR therapies including plasmapheresis, intravenous immunoglobulin, antithymocyte globulin, and rituximab have provided suboptimal results [61]. One reason for this insufficient success is that they do not exert direct effects on the mature plasma cell. A proteasome inhibitor, such as bortezomib, depletes both transformed and nontransformed plasma cells in animal and human transplant recipients. Recent reports have also shown that eculizumab, a humanized monoclonal antibody against terminal complement protein C5, is an effective therapy to inhibit terminal complement activation and prevent antibody-induced injury. All of these modes are useful in the treatment of refractory ABMR [24,57].
Splenectomy may be a possible option as a rescue treatment for severe ABMR resistance to standard treatment after ABOi KT. Kaplan et al. [60] reported the first early experience with rescue splenectomy and suggested that the procedure may specifically and irreversibly deplete memory B cells, thus offering an additive effect to the standard treatment.
Chronic Antibody-Mediated Rejection.
Although successful strategies have been developed to treat acute ABMR, humoral alloreactivity in the early posttransplant period adversely impacts long-term allograft survival and contributes to chronic rejection [54]. Toki et al. [50] revealed that a high panel-reactive antibody value was significantly associated with the development of chronic ABMR characterized by transplant glomerulopathy. Recent studies [19,50] have found that a prior history of ABMR was significantly associated with the development of transplant glomerulopathy, with an incidence of 22% at one year after transplantation. Smith et al. [37] clarified sequential stages of chronic ABMR using an animal model and determined that the first symptom was circulating alloantibody, C4d, or a combination of the two. Only rarely was transplant glomerulopathy observed in the absence of C4d or alloantibody.
The National Institutes of Health suggested diagnostic criteria for chronic ABMR in ABOi kidney allograft [52]. Their criteria require three of the following four lesions: (1) arterial intimal fibrosis, (2) interstitial fibrosis/tubular atrophy, (3) duplication of the glomerular basement membrane, or (4) lamination of the peritubular capillary basement membranes.
Minimizing Immunosuppressive Therapy.
ABOi KT is considered an increased immunologic risk; therefore, aggressive immunosuppressive protocols traditionally have been used [11]. Most centers have adopted polyclonal antibody for the induction and chronic maintenance of immunosuppression based on tacrolimus and mycophenolate, starting two weeks before the transplantation.
Nevertheless, Chuang et al. [62] showed that maintenance immunosuppressive therapy did not affect isoagglutinin titers in ABOi KT. In addition, no significant difference in isoagglutinin titer was observed between the tacrolimus and cyclosporine groups. Flint et al. [30] demonstrated that standard immunosuppression alone could produce a successful ABOi KT as long as an adequate desensitization protocol was employed. Far from increasing immunologic risk, the avoidance of excessive immunosuppression is a potential benefit for ABOi KT recipients. Magee [5] avoided lymphocyte-depleting antibody because it is no more effective in preventing ABMR and is associated with a higher incidence of infectious complications.
The long-term need for steroids remains a question in ABOi KT. Crew and Ratner [24] and Galliford et al. [58] reported that ABOi KT can be successfully accomplished using a steroid-sparing regimen without resulting in steroid-resistant rejection.
Conclusions
The idea that ABO blood group incompatibility should be considered an absolute contraindication to kidney transplantation has been challenged in the past two decades. As pretransplant and posttransplant desensitization protocols have developed and changed across many different settings, satisfactory results have been observed. As the body of immunologic knowledge, including that regarding antibody-mediated rejection, has grown, allograft outcomes have improved. Overall success rates are now comparable with those of ABO-compatible kidney transplantation. Owing to these results, the pool of potential ABOi KT candidates has increased.
Because the long-term outcome of ABOi KT has not yet been determined, uncertain and contentious ideas regarding its use still exist. Despite this, ABOi KT has become one of the established therapies. These inspiring circumstances forecast a hopeful future for ABO-incompatible kidney transplantation.
Rectal and Vaginal Eradication of Streptococcus agalactiae (GBS) in Pregnant Women by Using Lactobacillus salivarius CECT 9145, A Target-specific Probiotic Strain
Streptococcus agalactiae (Group B Streptococci, GBS) can cause severe neonatal sepsis. The recto-vaginal GBS screening of pregnant women and intrapartum antibiotic prophylaxis (IAP) to positive ones is one of the main preventive options. However, such a strategy has some limitations and there is a need for alternative approaches. Initially, the vaginal microbiota of 30 non-pregnant and 24 pregnant women, including the assessment of GBS colonization, was studied. Among the Lactobacillus isolates, 10 Lactobacillus salivarius strains were selected for further characterization. In vitro characterization revealed that L. salivarius CECT 9145 was the best candidate for GBS eradication. Its efficacy to eradicate GBS from the intestinal and vaginal tracts of pregnant women was evaluated in a pilot trial involving 57 healthy pregnant women. All the volunteers in the probiotic group (n = 25) were GBS-positive and consumed ~9 log10 cfu of L. salivarius CECT 9145 daily from week 26 to week 38. At the end of the trial (week 38), 72% and 68% of the women in this group were GBS-negative in the rectal and vaginal samples, respectively. L. salivarius CECT 9145 seems to be an efficient method to reduce the number of GBS-positive women during pregnancy, decreasing the number of women receiving IAP during delivery.
Introduction
Neonatal sepsis contributes substantially to neonatal morbidity and mortality and is a major global public health challenge worldwide. According to the age of onset, neonatal sepsis is divided into early-onset sepsis (EOS) and late-onset sepsis (LOS). EOS has been variably defined based on the age at onset, with bacteremia or bacterial meningitis occurring at ≤72 h in infants hospitalized in the neonatal intensive care unit versus <7 days in term infants, and usually reflects transplacental or ascending infections from the maternal genitourinary tract [1]. samples were submitted to an enrichment step in Todd Hewitt broth (Oxoid) to facilitate the isolation of S. agalactiae in CNA plates.
Initially, identification of the bacterial strains (at least one isolate of each colony morphology per medium and per sample) was performed by 16S rDNA sequencing using the primers and PCR conditions described by Kullen et al. [20]. Sequencing reactions were prepared using the ABI PRISM ® BigDye™ Terminator Cycle Sequencing kit with AmpliTaq DNA polymerase according to the manufacturer's instructions (Applied Biosystems, Foster City, CA, USA) and were run on an ABI 377A automated sequencer (Applied Biosystems). The resulting sequences were used to search sequences deposited in the EMBL database using the BLAST algorithm. The identity of the strain was determined on the basis of the highest (>98%) scores.
Identification of yeasts and confirmation of the initial 16S rDNA-based bacterial identifications was performed by MALDI-TOF (VITEK MS, BioMerieux, Marcy-L'Étoile, France) [21]. Identification of S. agalactiae isolates was also confirmed by using a latex agglutination test (Streptococcal grouping kit, Oxoid, Basingstoke, UK), following the instructions of the manufacturer.
Those isolates identified as belonging to the genus Lactobacillus were preserved for further studies. For such a purpose, an MRS-C broth culture of each isolate was mixed with glycerol (30%, v/v) and kept at −80 °C until required. A total of 89 different Lactobacillus strains were isolated from the vaginal swabs and submitted to Random Amplification of Polymorphic DNA (RAPD) genotyping as described [22] in order to avoid duplication of isolates. Among them, 10 Lactobacillus salivarius strains were selected for further characterization on the basis of the following criteria: (1) absence of S. agalactiae, Gardnerella vaginalis, Candida spp., Ureaplasma spp. and Mycoplasma spp. in the vaginal samples from which the lactobacilli were originally isolated; (2) Qualified Presumption of Safety (QPS) status conceded by EFSA; and (3) the ability of the strain to grow rapidly in MRS broth under aerobic conditions (≥1 × 10^6 cfu/mL after 16 h at 37 °C).
Antimicrobial Activity of the Lactobacilli Strains against GBS
Initially, an overlay method [23] was used to determine the ability of the lactobacilli strains to inhibit the growth of 12 different S. agalactiae strains. Among them, 6 strains had been isolated from blood or cerebrospinal fluid in clinical cases of neonatal sepsis (Hospital Universitario Ramón y Cajal, Madrid, Spain) while the remaining 6 had been isolated from vaginal samples of pregnant women (our own collection). The assay was performed using MRS agar plates, on which the lactobacilli strains were inoculated as approximately 2 cm-long lines and incubated at 37 °C for 48 h. The plates were then overlaid with the indicator S. agalactiae strains vehiculated in 10 mL of Brain Heart Infusion (BHI, Oxoid) broth supplemented with soft agar (0.7%), at a concentration of ~10^4 colony-forming units (cfu)/mL. The overlaid plates were incubated at 37 °C for 48 h and then examined for clear zones of inhibition (>2 mm) around the lactobacilli streaks. All experiments assaying inhibitory activity were performed in triplicate.
Production of Specific Antimicrobials (Bacteriocins, Lactic Acid, Hydrogen Peroxide) by the Lactobacilli Strains
Bacteriocin production was assayed using an agar diffusion method as described by Dodd et al. [24] and modified by Martín et al. [25], using the S. agalactiae strains as the indicator bacteria employed for the overlay method. The lactobacilli strains were screened for hydrogen peroxide production following the procedure described by Song et al. [26]. In the case of positive strains, hydrogen peroxide production was also measured by the quantitative method of Yap and Gilliland [27]. The concentration of L- and D-lactic acid in the supernatants of MRS cultures of the lactobacilli strains was quantified using an enzymatic kit (Roche Diagnostics, Mannheim, Germany), following the manufacturer's instructions. The pH values of the supernatants were also measured. All these assays were performed in triplicate and the values were expressed as the mean ± SD.
Coaggregation and Co-culture Assays
The ability of the lactobacilli strains to aggregate with cells of the S. agalactiae strains was investigated following the procedure of Younes et al. [28]. The suspensions were observed under a phase-contrast microscope after Gram staining.
To test the anti-S. agalactiae activity of the lactobacilli in a broth assay format, tubes containing 20 mL of MRS broth were co-inoculated with 1 mL of a Lactobacillus strain culture (7 log10 cfu/mL) and 1 mL of an S. agalactiae strain (7 log10 cfu/mL). Subsequently, the cultures were incubated for 6 h at 37 °C under aerobic conditions. Immediately after the co-inoculation and after the incubation period, aliquots were collected, serially diluted and plated on MRS-C plates and CHROMagar StrepB agar plates (CHROMagar, Paris, France) for the selective enumeration of lactobacilli and streptococci, respectively. Correct taxonomic assignment was confirmed by MALDI-TOF analysis as described previously.
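The selective enumeration described above rests on standard serial-dilution plate counting. As a generic illustration (not code used in the study), the helper below shows the usual back-calculation from a colony count to cfu/mL given the dilution plated and the volume spread on the plate, together with the log10 transformation in which such counts are typically reported.

```python
import math

def cfu_per_ml(colonies: int, dilution: float, volume_plated_ml: float = 0.1) -> float:
    """Back-calculate cfu/mL of the original culture from a plate count.

    colonies         -- colonies counted on the plate
    dilution         -- dilution factor of the plated sample, e.g. 1e-6
    volume_plated_ml -- volume spread on the plate, in mL
    """
    return colonies / (dilution * volume_plated_ml)

# Example: 85 colonies on a 10^-6 dilution, 0.1 mL plated
count = cfu_per_ml(85, 1e-6, 0.1)          # 8.5e8 cfu/mL
print(f"{count:.2e} cfu/mL = {math.log10(count):.2f} log10 cfu/mL")
```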
Survival After In Vitro Exposure to Saliva and Gastrointestinal-Like Conditions
The survival of the strain under conditions resembling those found in the human digestive tract (saliva, human stomach and small intestine) was assessed in the in vitro system described by Marteau et al. [29], with the modifications reported by Martín et al. [25]. For this purpose, the strain was vehiculated in UHT-treated milk (25 mL) at a concentration of 10^9 CFU/mL. The values of the pH curve in the stomach-like compartment were those recommended by Conway et al. [30]. Different fractions were taken at 20, 40, 60, and 80 min from this compartment and exposed for 120 minutes to a solution with a composition similar to that of human duodenal juice [30]. The survival rate of the strain was determined by culturing the samples on MRS agar plates, which were incubated at 37 °C for 48 h.
Adhesion to Caco-2, HT-29 and Vaginal Cells and to Mucin
The ability of the strains to adhere to HT-29 and Caco-2 cells was evaluated as described by Coconnier et al. [31] with the modifications reported by Martín et al. [25]. HT-29 and Caco-2 cells were cultured to confluence in 2 mL of DMEM medium (PAA, Linz, Austria) containing 25 mM of glucose, 1 mM of sodium pyruvate and supplemented with 10% heat-inactivated fetal calf serum, 2 mM of L-glutamine and 1% non-essential amino acid preparation. At day 10 after confluence, 1 mL of the medium was replaced with 1 mL of DMEM containing 10^8 CFU/mL of the strains. Adherence was measured as the number of lactobacilli adhered to the cells in 20 random microscopic fields. The assay was performed in triplicate.
Adherence to vaginal epithelial cells collected from healthy premenopausal women was performed as described previously [32].
The adhesion of the lactobacilli strains to mucin was determined according to the method described by Cohen and Laux [33].
Sensitivity to Antibiotics
The sensitivity of the strains to antibiotics was tested using the lactic acid bacteria susceptibility test medium (LSM) [34] and the microtiter VetMIC plates for lactic acid bacteria (National Veterinary Institute of Sweden, Uppsala, Sweden), as described previously [35]. In parallel, minimum inhibitory concentrations (MICs) were also determined by the E-test (AB BIODISK, Solna, Sweden), following the instructions of the manufacturer. Results were compared to the cut-off levels proposed by the European Food Safety Authority [36].
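The comparison against the EFSA cut-off values amounts to a per-antibiotic threshold check. The sketch below illustrates that check with invented MIC values and placeholder breakpoints; the real EFSA cut-offs are species-specific and must be taken from the cited guidance [36], not from this example.

```python
# Placeholder breakpoints (mg/L) -- illustrative values only, NOT the official
# EFSA cut-offs, which are species-specific and must be taken from EFSA guidance.
breakpoints = {"ampicillin": 4, "gentamicin": 16, "tetracycline": 8}

# Hypothetical measured MICs (mg/L) for one strain
mics = {"ampicillin": 1, "gentamicin": 8, "tetracycline": 32}

for antibiotic, mic in mics.items():
    cutoff = breakpoints[antibiotic]
    status = "susceptible" if mic <= cutoff else "resistant (above cut-off)"
    print(f"{antibiotic}: MIC {mic} mg/L vs cut-off {cutoff} mg/L -> {status}")
```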
Hemolysis, Formation of Biogenic Amines and Degradation of Mucin
For investigation of hemolysis, strains were streaked onto layered fresh horse blood agar plates and grown for 24 h at 37 °C. Zones of clearing around colonies indicated hemolysin production. The capacity of the strains to synthesize biogenic amines (tyramine, histamine, putrescine and cadaverine) from their respective precursor amino acids (tyrosine, histidine, ornithine and lysine; Sigma-Aldrich) was evaluated using the method described by Bover-Cid and Holzapfel [37]. The potential of the strains to degrade gastric mucin (HGM; Sigma) was evaluated in vitro as indicated by Zhou et al. [38].
Acute and Repeated Dose (4-Weeks) Oral Toxicity Studies in a Rat Model
Wistar male and female rats (Charles River Inc., Margate, Kent, UK) were used to study the acute and repeated dose (4-week) oral toxicity of L. salivarius CECT 9145 in a rat model. Acclimation, housing and management (including feeding) of the rats were performed as previously described [39]. The rats were 56 days old at the initiation of treatment. The acute (limit test) and repeated dose (4-week) studies were conducted in accordance with the European Union guidelines (EC Council Regulation No. 440), and authorized by the Ethical Committee on Animal Research of the Complutense University of Madrid (protocol 240111).
In the acute (limit test) study, 24 rats (12 males, 12 females) were distributed into two groups of 6 males and 6 females each. After overnight fasting, each rat received skim milk (500 µL) orally (control group or Group 1), or a single oral dose of 1 × 10^10 CFU of L. salivarius CECT 9145 dissolved in 500 µL of skim milk (treated group or Group 2). Doses of the test and control products were administered by gavage. At the end of a 14-day observation period, the rats were weighed, euthanized by CO2 inhalation, exsanguinated, and necropsied.
The repeated dose (4-week) (limit test) study was conducted in 48 rats (24 males, 24 females) divided into four groups of 6 males and 6 females each (control group or Group 3; treated group or Group 4; satellite control group or Group 5; and satellite treated group or Group 6). Rats received a daily oral dose of either skim milk (Groups 3 and 5) or 1 × 10^9 CFU of L. salivarius CECT 9145 dissolved in 500 µL of skim milk (Groups 4 and 6) for 4 weeks. All rats of Groups 3 and 4 were deprived of food for 18 h, weighed, euthanized by CO2 inhalation, exsanguinated, and necropsied on Day 29. All animals of the satellite groups (Groups 5 and 6) were kept for a further 14 days without treatment to detect delayed occurrence, persistence or recovery from potential toxic effects. All rats of Groups 5 and 6 were deprived of food for 18 h, weighed, euthanized by CO2 inhalation, exsanguinated, and necropsied on day 42.
Behavior and clinical observations, blood biochemistry and hematology analyses, organ weight ratios and histopathological analyses were carried out as described previously [39]. Bacterial translocation to blood, liver or spleen, and total liver glutathione (GSH) concentration were evaluated following the methods described by Lara-Villoslada et al. [40].
In this prospective pilot clinical assay, 57 pregnant women (39 rectal and vaginal GBS-positive women; 18 rectal and vaginal GBS-negative women at the start of the intervention), aged 25-36, participated in the study. All met the following criteria: a normal pregnancy and a healthy status. Women ingesting probiotic supplements or receiving antibiotic treatment in the previous 30 days were excluded. Women with lactose intolerance or a cow's milk protein allergy were also excluded because of the excipient used to administer the strain. All volunteers gave written informed consent to the protocol (10/017-E), which had been approved by the Ethical Committee of Clinical Research of the Hospital Clínico San Carlos Madrid (Spain).
Volunteers were distributed into 3 groups (1 probiotic group and 2 placebo groups). All the volunteers in the probiotic group (n = 25) were GBS-positive and consumed a daily sachet with ~50 mg of freeze-dried probiotic (~9 log10 cfu of L. salivarius CECT 9145) from week 26 to week 38 of the pregnancy. Placebo subgroup 1 (n = 14) included GBS-positive women (pregnancy week ranging from 19 to 30) that were going to receive IAP because they had a previous baby that suffered a GBS sepsis. Placebo subgroup 2 (n = 18) included GBS-negative women (pregnancy week ranging from 14 to 26). Women in both placebo subgroups received a daily sachet containing 50 mg of the excipient used to carry the probiotic strain. In all cases, the intervention lasted from its start until week 38. Probiotic- and excipient-containing sachets were kept at 4 °C throughout the study. All volunteers were provided with diaries to record compliance with the study product intake. The minimum compliance rate (% of the total treatment doses) was set at 86%.
Recto-vaginal GBS screening was performed at 28, 32 and 38 weeks. Rectal and vaginal exudate samples collected during the trial were serially diluted and plated on Granada (Biomerieux; isolation of hemolytic GBS, which appear as orange colonies) and CHROMagar StrepB (CHROMagar; for isolation of hemolytic and non-hemolytic GBS, which appear as purple colonies) agar plates. To avoid sensitivity-related problems, samples were submitted to a GBS enrichment step in Todd-Hewitt broth (Oxoid). After 24 h at 37 °C, the broth cultures were spread on CHROMagar agar plates. Correct taxonomic assignment was confirmed by MALDI-TOF and latex agglutination analyses, as described previously. At the last sampling time (week 38), recto-vaginal GBS screening was performed not only in our laboratory but also in the laboratories of the hospitals in which the respective women were going to deliver their babies.
Microbiological data were recorded as CFU/mL and transformed to logarithmic values before statistical analysis. Two-way ANOVA was used to investigate the effect of the individual (woman) and sampling time on the semiquantitative S. agalactiae counts in vaginal swabs. Statistical significance was set at P < 0.05. Statgraphics Centurion XVI version 17.0.16 (Statpoint Technologies Inc, Warrenton, Virginia) was used to carry out statistical analyses.
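The analysis itself was run in Statgraphics, as stated above. Purely to illustrate the same workflow (log-transform the counts, then fit a two-way ANOVA with woman and sampling time as factors), a roughly equivalent sketch in Python using pandas and statsmodels, with hypothetical column names and made-up counts, could look like the following.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: one row per vaginal swab culture
df = pd.DataFrame({
    "woman":  ["w01", "w01", "w01", "w02", "w02", "w02"],
    "week":   [28, 32, 38, 28, 32, 38],
    "cfu_ml": [3.2e5, 1.1e4, 9.0e2, 7.5e4, 6.0e3, 1.2e3],
})

# Log-transform the counts before analysis, as described in the text
df["log_cfu"] = np.log10(df["cfu_ml"])

# Two-way ANOVA with woman and sampling time as factors
model = ols("log_cfu ~ C(woman) + C(week)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)   # significance assessed at P < 0.05
```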
Microbiological Analysis of Vaginal Swabs Obtained from Pregnant and Non-Pregnant Women
Bacterial growth was detected in all the samples when they were inoculated on MRS (2.70-8.08 log10 colony-forming units (cfu); mean 5.36 log10 cfu), CNA (3.00-7.92 log10 cfu; mean 5.13 log10 cfu) and GAR (2.70-8.10 log10 cfu; mean 5.24 log10 cfu) agar plates. Similar bacterial groups grew in these three media. Growth on MCK, SDC or Mycoplasma plates was detected in only a small percentage of samples (from 0% on Mycoplasma plates to ~40% on SDC plates).
S. agalactiae could be isolated from both non-pregnant (~25%) and pregnant (~19%) women. Candida albicans and other yeasts were isolated from approximately 7 and 36% of the non-pregnant and pregnant women, respectively. Gardnerella vaginalis was isolated in ~7% of the pregnant women. In both groups, Lactobacillus was the dominant genus since it was detected in ~93% of the participating women.
Among the Lactobacillus isolates obtained in this study, a few were selected to evaluate their potential as probiotics to control GBS populations on the basis of the following criteria: (1) absence of S. agalactiae, Gardnerella vaginalis, Candida spp., Ureaplasma spp., and Mycoplasma spp. in the vaginal samples from which the lactobacilli were originally isolated; (2) Qualified Presumption of Safety (QPS) status (European Food Safety Authority, EFSA); and (3) ability of the strain to grow rapidly in MRS broth under aerobic conditions (≥1 × 10^6 cfu/mL after 16 h at 37 °C). In fact, only 10 strains (V3III-1, V4II-90, V7II-1, V7II-62, V7IV-1, V7IV-60, V8III-62, V11I-60, V11III-60, and V11IV-60) met all the criteria, and all of them belonged to the same species (Lactobacillus salivarius). These strains were then selected for further characterization. Later, L. salivarius V4II-90 was deposited in the Spanish Collection of Type Cultures (CECT) as L. salivarius CECT 9145 and, therefore, this is the name used for this strain in this article.
Antimicrobial Activity of the Lactobacilli Strains Against GBS and the Production of Potential Antimicrobial Compounds
Initially, the antimicrobial activity of the 10 selected lactobacilli against the S. agalactiae strains was determined by an overlay method. Clear inhibition zones (ranging from 2 to 20 mm) were observed around the lactobacilli streaks.
In relation to the antimicrobial compounds that may be responsible for such activity, the concentrations of L- and D-lactic acid and the pH of the supernatants obtained from MRS cultures of the lactobacilli are shown in Table 1. The global concentration of L-lactic acid was similar (~10 mg/mL) in all the supernatants. In contrast, D-lactic acid was not detected in the supernatants of the tested strains. In addition, all the strains acidified the MRS broth medium to a final pH of ~4 after 16 h of incubation; among them, L. salivarius V7IV-1 showed the highest acidifying capacity (final pH of 3.8).
No bacteriocin-like activity could be detected against the tested S. agalactiae strains. Two strains (L. salivarius CECT 9145 and V7IV-1) were able to produce hydrogen peroxide (7.29 ± 0.69 and 7.46 ± 0.58 µg/mL, respectively) (Table 1).
Table 1. The pH and concentrations of L- and D-lactic acid (mg/mL; mean ± SD) and hydrogen peroxide (µg/mL; mean ± SD) in the supernatants obtained from the MRS cultures of the lactobacilli (n = 4).
The capacity of the lactobacilli strains to form large, well-defined co-aggregates with S. agalactiae was strain-dependent. Strains V3III-1, V7IV-60 and V11IV-60 coaggregated with 5 S. agalactiae strains; strains V8III-62, V11I-60 and V11III-60 with 7; strain V7II-62 with 9 S. agalactiae strains; and strains CECT 9145, V7II-1 and V7IV-1 with 10 S. agalactiae strains (Figure 1). The ability of the lactobacilli strains to interfere with or inhibit the growth of four S. agalactiae strains was evaluated using MRS broth co-cultures. Co-culture with S. agalactiae seemed not to affect the growth of any of the L. salivarius strains (Table 2). In contrast, most of the L. salivarius strains were able to interfere, to a higher or lower degree, with the growth of the different S. agalactiae strains included in this assay. Among them, L. salivarius CECT 9145 showed the highest ability to inhibit the growth of S. agalactiae, since two of the four S. agalactiae strains were no longer detectable in the co-cultures and the concentration of the other two showed a ~2.5 log10 decrease after an incubation period of only 6 h at 37 °C (Table 2). Interestingly, no viable streptococci could be detected when the co-cultures were incubated for 24 h (Table 2).
Survival After In Vitro Exposure to Saliva and Gastrointestinal-Like Conditions
The viability of the strains after exposure to conditions simulating those found in the gastrointestinal tract varied from ~64% (L. reuteri CR20, L. salivarius CECT 9145) to 30% (L. salivarius V3III-1) (Table 3).
Adhesion to Caco-2, HT-29 and Vaginal Cells and to Mucin
In this study, the lactobacilli strains tested were strongly adhesive to both Caco-2 and HT-29 cells, with the exception of the negative control strain (L. casei imunitas), which showed a low adhesive potential (Table 4). In addition, all showed adhesion to vaginal epithelial cells. Among the L. salivarius strains, L. salivarius CECT 9145 globally displayed the highest ability to adhere to both intestinal and vaginal epithelial cells (Table 4). The lactobacilli strains tested showed a variable ability to adhere to porcine mucin (Table 4). L. salivarius CECT 9145 and L. salivarius V7IV-1 were the strains that showed the highest adherence ability.
Sensitivity to Antibiotics
The MIC values of the lactobacilli strains for the 16 antibiotics assayed are shown in Table 5. All the strains were sensitive to most of the antibiotics tested, including clinically relevant antibiotics such as gentamicin, tetracycline, clindamycin, chloramphenicol, and ampicillin, showing MICs equal to or lower than the breakpoints defined by EFSA (EFSA, 2018). All the strains were resistant to vancomycin and kanamycin, which is an intrinsic property of L. salivarius at the species level.
Hemolysis, the Formation of Biogenic Amines and the Degradation of Mucin
The strains did not show the ability to produce biogenic amines, and they were neither hemolytic nor able to degrade gastric mucin in vitro.
Acute and Repeated Dose (4 Weeks) Oral Toxicity Studies in a Rat Model
All animals survived both oral toxicity trials. The development of the treated animals during the experimental periods corresponded to their species and age. There were no significant differences in body weight or body weight gain among the groups treated with L. salivarius CECT 9145 (including the satellite ones) in comparison to the control groups at any time point of the experimental period. No abnormal clinical signs, behavioral changes, body weight changes, hematological or clinical chemistry parameters, macroscopic or histological findings, or organ weight changes were observed. There were no statistical differences in body weight among the groups and, similarly, no statistically significant differences in body weight gain or food and water consumption were observed between the groups. No significant differences in liver GSH concentration were observed between the control and treated groups (9.54 ± 1.21 vs. 9.37 ± 1.39 mmol/g, P > 0.1). L. salivarius CECT 9145 could be isolated from the colonic material and vaginal swab samples of all the treated animals (probiotic groups) at the end of the treatment. The concentration ranged between 5.39 and 8.85 log10 cfu/g of colonic material, and between 3.34 and 6.14 log10 cfu/swab in the vaginal samples. The strain could not be detected in any sample from the placebo group.
The Efficacy of L. salivarius CECT 9145 to Eradicate GBS from the Intestinal and Vaginal Tracts of Pregnant Women: A Pilot Clinical Trial
At inclusion in the study, GBS was detected in both rectal and vaginal swabs obtained from 39 women, out of a total of 57 participating women, while the rest of the women (n = 18) were GBS-negative (Table 6). This last group of GBS-negative women, who did not ingest the L. salivarius strain, also had negative GBS cultures from rectal and vaginal swabs taken regularly at 28, 32 and 36-38 weeks (Table 6). A group of GBS-positive women at the start of the study (n = 14) did not receive the probiotic, and the routine screening results for vaginal and rectal GBS at 28, 32 and 36-38 weeks were all positive (Table 6). Significantly, the group of GBS-positive women that started taking the probiotic (9 log10 cfu daily) when they were enrolled in this study (from 26 weeks) also tested positive for GBS at 28 weeks, but an increasing number of GBS-negative results appeared in the successive swabs collected until delivery (Table 6). At 30 weeks, the culture of rectal swabs taken from four women of this group rendered a negative result, and the number of these samples increased to 18 (72% of the participants) at 38 weeks. Similar results were obtained by culturing vaginal swabs obtained from this group, although the proportion of women testing negative for GBS was always slightly higher for rectal swabs than for vaginal swabs (Table 6).
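As an illustrative (non-authoritative) way to gauge the magnitude of this contrast, the short sketch below compares the 38-week rectal conversion in the probiotic group (18 of 25 women becoming GBS-negative) against the untreated GBS-positive group (0 of 14) with a Fisher exact test; this calculation is ours and is not an analysis reported in the study.

```python
from scipy.stats import fisher_exact

# 2x2 table at 38 weeks (rectal swabs): rows = probiotic vs. untreated
# GBS-positive women; columns = became GBS-negative vs. remained positive.
table = [[18, 25 - 18],
         [0, 14]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"Fisher exact p-value: {p_value:.4g}")
```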
The estimation of the concentration of GBS in vaginal swabs taken regularly up to delivery from all participants is shown in Figure 2. There were no significant changes in the semiquantitative estimation of GBS in either the GBS-negative women (n = 18) or the GBS-positive women (n = 14) without oral administration of L. salivarius CECT 9145. However, the number of vaginal swabs in which GBS could not be detected increased at successive sampling times in the group that initially tested positive for GBS and took 9 log10 cfu of L. salivarius CECT 9145 (n = 25). The mean S. agalactiae count decreased significantly with the administration time of L. salivarius CECT 9145, from a mean value of 5.14 cfu/mL at 26 weeks (n = 25) to 3.80 cfu/mL at 38 weeks (n = 9) (Figure 2).
No adverse effects arising from the intake of L. salivarius CECT 9145 were reported by any of the women who participated in this study. The results of the GBS status obtained in our laboratory at week 38 were identical to those obtained in the hospitals where the recruited women were screened for GBS and, as a result, none of the women who became GBS-negative in this study received IAP.
Discussion
In this work, the GBS colonization rates were 25% and 20% among non-pregnant and pregnant women, respectively. In pregnant women, GBS colonization is found in up to 30% of rectovaginal samples [2,41] and stable colonization with the same clone for several years has been demonstrated [41]. Previous studies have shown that the presence of GBS is not linked to an abnormal microbiome or a reduction of the predominant Lactobacillus genus in the vaginal tract of the mother [42][43][44].
In contrast, a study involving a low number of participants found significant taxonomic differences in the stools of 6-month-old infants whose mothers were GBS carriers, as compared to non-carriers [45]. In any case, there is no epidemiological evidence for a correlation between neonatal colonization with GBS and specific shifts in the maternal intestinal or vaginal microbiome.
In the USA and many other countries (including Spain), women are routinely screened in the late third trimester (between 35 and 37 weeks' gestation) for GBS colonization by rectovaginal swabs and subsequent cultures. If the rectovaginal swab is culture-positive, or if the patient has GBS in the urine, or has a prior history of GBS perinatal infection, intrapartum prophylactic antibiotics are administered to prevent vertical transmission of GBS to the neonate during labor and delivery. Some European countries (e.g., UK) have not adopted the GBS screening program but, instead, administer antibiotics upon the development of a risk factor for GBS neonatal disease (e.g., prolonged rupture of membranes). However, none of these approaches has eliminated neonatal GBS infections. This is because these prevention strategies do not address the risk of ascending infection, which can potentially occur anytime during pregnancy, leading to preterm birth or stillbirth.
Overall, the prevention of GBS infection in pregnancy is still a complex question, with risk likely associated with several factors, including the pathogenicity of the GBS strain, host factors, influence of the vaginal/rectal microbiome, false-negative screening results, and/or changes in GBS antibiotic resistance [6,46]. Currently, strategies are mainly focused on the prevention of GBS transmission during labor and delivery through the use of antibiotics. This strategy does not fully capture the biology of the GBS infection, nor does it completely address the full burden of the GBS disease. Moreover, antibiotic resistance is increasing and the use of antibiotics during pregnancy has consequential effects for neonatal health that are only now being appreciated [47]. To successfully eradicate the burden of disease, interventions need to be specifically targeted while having minimal detrimental effects on the microbiome. Therefore, there is a need for alternatives that are respectful of the neonatal and infant microbiota, and that do not compromise the health of future generations. In this context, the final objective of this work was the selection of safe probiotic strains with the in vitro and in vivo ability to eradicate GBS from the intestinal and genitourinary tracts of pregnant women and/or their infants.
The genus Lactobacillus constitutes the dominant bacterial group of the vaginal tract in most healthy women, playing a key role in the genitourinary homeostasis [48][49][50][51][52]. In this study, all the vaginal isolates (from either pregnant or non-pregnant healthy women) that fulfilled the initial selection criteria belonged to the species L. salivarius. This species is part of the indigenous microbiota of the human gastrointestinal tract, oral cavity, genitourinary tract and milk, and some strains have been studied as probiotics because of their in vitro and in vivo antimicrobial, anti-inflammatory and immunomodulatory properties [53][54][55][56][57][58][59][60][61][62][63][64]. Previous studies have shown the ability of L. salivarius strains to inhibit the growth of vaginal pathogens, including Gardnerella vaginalis and Candida albicans and, therefore, we have suggested their potential to be used as probiotics for the treatment or prevention of vaginal infections [65,66].
Administration of probiotic bacteria benefits the host through a wide array of mechanisms that are increasingly recognized as being either species- and/or strain-specific [67]. A comparative genomics study that included 33 L. salivarius strains isolated from humans, animals or food revealed that this species displays a high level of genomic diversity [68]. Therefore, the selection of L. salivarius strains for probiotic use requires the experimental validation of target-tailored phenotypic traits. Some L. salivarius strains have been shown to be efficient in preventing infectious diseases, such as mastitis caused by staphylococci and streptococci, when administered during late pregnancy [69]. Moreover, the oral administration of L. salivarius strains is also a valid strategy for the treatment of such a condition during lactation and, in fact, one of the strains was more efficient than antibiotics for this target [70]. In this work, the target was antagonism towards GBS and, as a consequence, properties such as antimicrobial activity against S. agalactiae strains or coaggregation with this species were considered particularly relevant.
The production of antagonistic substances such as bacteriocins, hydrogen peroxide or organic acids represents an important contribution to the defense mechanisms exerted by intestinal and vaginal lactobacilli [59,71]. Some L. salivarius strains produce bacteriocins or display bacteriocin-like activity against a variable spectrum of Gram-positive bacteria, including S. agalactiae strains [72]. However, none of the L. salivarius strains selected in our study displayed bacteriocin-like activity against S. agalactiae strains. Therefore, the antimicrobial activity that the selected L. salivarius strains exhibited against S. agalactiae must be related to the production of other antimicrobial compounds, such as organic acids. The ability of lactobacilli to acidify the vaginal milieu contributes to the displacement and inhibition of pathogen proliferation [73] and, more specifically, acid production by lactobacilli has been directly correlated with the inhibition of GBS growth [74]. Another antimicrobial defense mechanism attributed to some intestinal or vaginal lactobacilli is the production of hydrogen peroxide, a compound that is toxic for catalase-negative bacteria, such as streptococci [75]. The production of this compound by L. salivarius has already been reported [59,76,77]. In our study, L. salivarius CECT 9145 (the strain that showed the highest anti-GBS activity) produced high amounts of lactic acid and, in addition, was able to produce hydrogen peroxide.
The ability to adhere to intestinal or vaginal epithelial cells or to mucin, and to co-aggregate with potential pathogens, constitutes one of the main mechanisms for preventing pathogen adhesion and colonization of mucosal surfaces. Therefore, it is not strange that such properties are considered relevant to the selection of probiotic strains [28]. The high adherence of L. salivarius strains to Caco-2 and HT-29 cells or to mucin has been previously observed [53,59,78]. Globally, L. salivarius CECT 9145 showed the best combination of adherence to epithelial cells, co-aggregation with S. agalactiae and inhibition of S. agalactiae strains in broth co-cultures. This strain also showed a high survival rate during transit through an in vitro gastrointestinal model; survival of lactobacilli when exposed to the conditions found in the gastrointestinal tract seems to be a critical prerequisite for a probiotic strain when its use as a food supplement is pursued, as was the case here.
Some vaginal strains of L. gasseri and L. reuteri have also been reported to co-aggregate with GBS [78]. In contrast, no co-aggregation activity between S. agalactiae and other vaginal lactobacilli belonging to the species L. acidophilus, L. gasseri and L. jensenii was observed in another study [32], suggesting that this property is a highly strain-specific trait. In relation to broth co-cultures, the capacity of lactobacilli strains belonging to different species, including L. salivarius, to antagonize the growth of S. agalactiae has been previously reported [79,80]. Similar to our results, this activity was strain-dependent [79].
One of the most important criteria for the selection of probiotic strains is the assessment of their safety, particularly for the target population. In this work, no adverse effect was reported by any of the women who participated in the clinical trial and were assigned to the probiotic group (thus receiving L. salivarius CECT 9145 at 9 log10 cfu daily for several weeks). Previously, other L. salivarius strains have been shown to be well-tolerated and safe in animal models [40] and in human clinical assays [70,[81][82][83], including one involving pregnant women [69].
The L. salivarius strains included in this study were very susceptible to most of the antimicrobials tested. In fact, their MICs were lower than the cut-offs established for lactobacilli for seven out of the eight antibiotics required for this species by the European Food Safety Authority [36]. The only exception was kanamycin. The intrinsic resistance of lactobacilli to kanamycin and other aminoglycosides (such as neomycin or streptomycin) has been repeatedly reported [84,85], and this is thought to be an L. salivarius species-specific trait due to the lack of cytochrome-mediated transport of this class of antibiotics [86]. The L. salivarius strains were also resistant to vancomycin, but the assessment of vancomycin sensitivity is not required by EFSA in the case of homofermentative lactobacilli (including L. salivarius) since they are intrinsically resistant to this antibiotic, probably due to the presence of D-Ala-D-lactate in their peptidoglycan structure [87]. Therefore, L. salivarius CECT 9145 and the other strains evaluated in this study can be considered safe from this point of view.
Lactobacilli are among the Gram-positive bacteria with the potential to produce biogenic amines and these substances can cause several toxicological problems and/or may act as potential precursors of carcinogenic nitrosamines [37]. The screened L. salivarius strains neither produced histamine, tyramine, putrescine or cadaverine nor harbored the gene determinants required for their biosynthesis. Additionally, they were unable to degrade gastric mucin in vitro.
Some studies have been focused on the potential of different lactic acid bacteria strains or their metabolites to inhibit the growth of S. agalactiae in vitro or in murine models [74,80,[88][89][90][91][92][93][94][95]. However, few studies have evaluated the efficacy of probiotic strains for the rectal and vaginal eradication of GBS in pregnant women. Ho et al. [96] examined the effect of Lactobacillus rhamnosus GR-1 and Lactobacillus reuteri RC-14 taken orally on GBS-positive pregnant women at 35-37 weeks of gestation, and found that GBS colonization changed from positive to negative in 42.9% of the women in the probiotic group. The rate of women that became GBS-negative was lower than in our study and this might be due to the fact that, in the cited study, the recruited women started the probiotic intake many weeks later. A second study using the same two strains (L. rhamnosus GR-1 and L. reuteri RC-14) provided non-conclusive results due to the low adherence to the probiotic treatment since only seven of 21 women in the intervention group completed the entire 21 days of probiotics [97].
It is important to highlight that nutrition may also play a key role in creating mucosal conditions favoring the action of bacterial strains that are able to improve the rectal and vaginal environments, as it is the case of L. salivarius CECT 9145. Such conditions may include the selective fermentation of dietary fiber, the production of relevant bioactive compounds, such as short-chain fatty acids [98], or the use of hyaluronic acid, which has been shown to be useful in the treatment of female recurrent genitourinary infections [99]. The impact of diet on the outcomes of clinical assays involving probiotic-interventions is often underrated and should be taken into account in future studies.
Our study includes the whole process from strain isolation to a pilot clinical study specifically targeting GBS eradication in pregnant women. The criteria followed for the selection of the best candidate for such a target (L. salivarius CECT 9145) allowed a notable reduction in the rate of GBS-colonized women and led to a reduction in the use of antibiotics during the peripartum period. As a conclusion, the administration of L. salivarius CECT 9145 to GBS-positive pregnant women is a safe and successful strategy to significantly decrease the rates of GBS colonization during pregnancy and, therefore, to reduce the exposure of pregnant women and their infants to intrapartum prophylaxis. Work is in progress to study the mechanisms involved in GBS antagonism, including the study of the strain genome, and to initiate a well-designed multicenter clinical trial involving a higher number of women.

Funding: This work was supported by Laboratorios Casen Recordati SL. The funding agency had no role in study design, data collection and analysis.
Pd(II)-Catalyzed Enantioselective C(sp3)–H Arylation of Cyclopropanes and Cyclobutanes Guided by Tertiary Alkylamines
Strained aminomethyl-cycloalkanes are a recurrent scaffold in medicinal chemistry due to their unique structural features that give rise to a range of biological properties. Here, we report a palladium-catalyzed enantioselective C(sp3)–H arylation of aminomethyl-cyclopropanes and -cyclobutanes with aryl boronic acids. A range of native tertiary alkylamine groups are able to direct C–H cleavage and forge carbon-aryl bonds on the strained cycloalkane framework as single diastereomers and with excellent enantiomeric ratios. Central to the success of this strategy is the use of a simple N-acetyl amino acid ligand, which not only controls the enantioselectivity but also promotes γ-C–H activation over other pathways. Computational analysis of the cyclopalladation step provides an understanding of how enantioselective C–H cleavage occurs and revealed distinct transition structures to our previous work on enantioselective desymmetrization of N-isobutyl tertiary alkylamines. This straightforward and operationally simple method simplifies the construction of functionalized aminomethyl-strained cycloalkanes, which we believe will find widespread use in academic and industrial settings relating to the synthesis of biologically active small molecules.
■ INTRODUCTION
Strained cycloalkanes displaying an aminomethyl substituent are common features in pharmaceutical candidates and approved drugs, as well as agrochemicals. These small polar scaffolds frequently convey important physical features that lead to enhanced biological properties when compared with linear N-alkyl congeners (Figure 1A). 1 In particular, cyclopropane and cyclobutane derivatives can boost metabolic stability and reduce lipophilicity when used as bioisosteres of gem-dimethyl, isopropyl, or phenyl groups, which results from a combination of high coplanarity of the ring-carbon atoms, relatively shorter C−C bonds, enhanced π-character, and shorter and stronger C−H bonds. Furthermore, the well-defined exit vectors of these rigid cycloalkanes make them ideal scaffold candidates through which to probe distinct spatial environments, particularly through their deployment as single enantiomers. 2 As a result of these properties, the preparation of functionally diverse nonracemic aminomethyl-cyclopropanes (AMCPs) and aminomethyl-cyclobutanes (AMCBs) represents an important challenge for chemical synthesis. While the synthesis of simple unfunctionalized variants of aminomethyl-strained cycloalkanes can be achieved via N-alkylation, reductive amination, or amide reduction with readily available strained cycloalkane-containing starting materials, the synthesis of more complex, densely functionalized variants frequently requires multiple steps as a result of the problematic amine functionality that precludes the effective use of many of the well-established ring formation protocols. 3 Metal-catalyzed C(sp3)−H functionalization of simple monofunctionalized strained cycloalkane frameworks has emerged as a powerful alternative strategy (to de novo methods) 3 for the synthesis of higher order variants, in particular on cyclopropane scaffolds (Figure 1B). Yu and co-workers have reported a series of Pd(II)-catalyzed C(sp3)−H functionalization reactions on cyclopropane derivatives directed by N-arylcarboxamides, 4a,b N-triflamides, 4c carboxylic acids, 4d and primary amines, 4e many of which can be rendered enantioselective. Cramer and co-workers exploited oxidative addition to a pendant bromoarene motif to direct intramolecular Pd(0)-catalyzed C(sp3)−H arylation onto triflimide-protected N-aryl-aminomethyl-cyclopropanes. 5a This approach was also extended to a number of other tethering units to formulate an approach to the synthesis of bicyclic systems containing a substituted cyclopropane unit and, in many cases, could be carried out enantioselectively. 5c,d Xu and co-workers reported an Ir-catalyzed C(sp3)−H borylation directed by a carboxamide motif. 6 In contrast, the deployment of Pd(II)-catalyzed C(sp3)−H functionalization strategies on cyclobutane scaffolds is less common (Figure 1B). Yu and co-workers were able to extend their seminal carboxamide-directed C(sp3)−H arylation of cyclopropanes to the corresponding cyclobutane frameworks. 7a−c Subsequent advances enabled the deployment of native carboxylic acids, 7d ketones (via transiently generated imines), 7e and oximes 7f as directing groups for a selection of C(sp3)−H functionalization reactions, many of which could, again, be rendered enantioselective using a range of ligand-controlled strategies. Baran and Reisman have shown, independently, that reactivity-augmenting auxiliary-directed C−H arylation can be leveraged for the synthesis of di- and trisubstituted cyclobutane derivatives. 8
Finally, Davies and co-workers reported a nondirected C−H arylation of arylcyclobutanes through the reaction of catalytically generated Rh-carbenoids. 9 Considering the demonstrated importance of aminomethyl-cyclopropanes and -cyclobutanes, harnessing the native tertiary amine functionality to direct C−H transformations on the ring framework would provide a powerful tool for the streamlined synthesis of complex variants of these substituted strained cycloalkanes.
Here, we report the development of a Pd(II)-catalyzed process capable of effecting enantioselective desymmetrizing arylation of methylene-C(sp3)−H bonds in aminomethyl-cyclopropanes and -cyclobutanes (Figure 1C). The reaction platform exploits the versatile coordination capacity of native, unbiased tertiary alkylamines, which are free of reactivity-augmenting auxiliary groups. A broad scope is presented across a series of strained cycloalkanes and transferring aryl groups, leading to nonracemic cis-substituted cyclic products with high enantiomeric ratios. The multifaceted role of a commercial N-acetyl amino acid ligand not only enables the cycloalkane desymmetrization process but can also be applied in a kinetic resolution-type mode to form trisubstituted aminomethyl-cyclopropanes, which together with the basic transformation will be of interest to practitioners of synthetic chemistry tasked with preparing biologically active small molecules.
■ RESULTS AND DISCUSSION
Over the last 7 years, our group has established the use of unprotected free(NH)-alkylamines in Pd(II)-catalyzed C(sp3)−H functionalization. 10 The use of amines in their native form significantly advances their synthetic utility by precluding the need for additional multistep procedures to add and remove auxiliary directing functionalities. Central to the success of many of these transformations was the exploitation of an intramolecular hydrogen bond between the carbonyl oxygen atom of the Pd(II)-bound carboxylate and the NH motif of the ligated amine, which oriented the substrate such that the C−H bond aligned with the requisite carboxylate ligand for C−H bond cleavage. 11 However, this platform cannot be extended to tertiary alkylamine-directed C(sp3)−H activation because there is no NH feature in these substrates. In addressing this, we discovered a ligand-directed strategy wherein an N-acyl amino acid ligand 12 was able to promote a C(sp3)−H activation event over the competitive β-hydride elimination pathways that had presumably precluded the use of tertiary alkylamines in C−H activation reactions prior to our work (Figure 2A). Crucial to the success of this activation platform was a relay effect originating from the α-substituent on the amino acid ligand, which oriented the acetamide group in perfect alignment for γ-C−H bond cleavage in preference to the corresponding β-hydride elimination pathway. Accordingly, a general γ-C(sp3)−H arylation platform was developed which coupled γ-methyl groups in a wide range of tertiary alkylamines with aryl-boronic acids. 13 Furthermore, the chiral nature of the N-acetyl-t-leucine ligand was exploited through an enantioselective desymmetrization method for N-isobutyl-derived tertiary alkylamines (Figure 2B). The origin of the enantioselectivity is thought to arise from minimization of 1,3-diaxial interactions between the nonreacting N-substituent and the nonreacting methyl group on the reacting alkyl chain of the substrate within the two lowest-energy conformations of chair-like six-membered ring transition structures. However, asymmetric induction was highly dependent on the structure of the nonreacting amine substituents: acyclic tertiary alkylamines delivered products in good yield and with high enantioselectivity, whereas substrates directed through an N-heterocycle motif performed modestly across a range of examples and ultimately limited the wider efficacy of the transformation. In these cases, we believe that interactions between the catalyst and the saturated heterocycle framework (not present with smaller acyclic substituents) disturb the ideal conformation of the transition structures and lead to poorer enantioselectivity.
As part of the evolution of the tertiary alkylamine-directed platform, we questioned whether enantioselective γ-methylene C(sp3)−H arylation could be achieved on the strained ring framework of aminomethyl-cyclopropanes and -cyclobutanes. If the reaction were able to accommodate an unbiased range of N-substituents on the tertiary alkylamine function, then the products of such a transformation could have widespread utility in the construction of nonracemic complex strained cycloalkane scaffolds that are prevalent in biologically relevant small molecules.
Investigations toward the development of a γ-methylene C(sp3)−H arylation on AMCP scaffolds began by reacting amine 1a with phenyl boronic acid 2a under conditions related to our previous studies (Table 1, entry 1). 13 With 3 equiv of amine 1a, a reaction using 10 mol % of Pd(OAc)2, 20 mol % of N-Ac-(L)-Tle-OH, 2.5 equiv of Ag2CO3, and 2.0 equiv of 1,4-benzoquinone at 50 °C delivered a 94% assay yield (determined by 1H NMR) of a single cis-substituted γ-arylated cyclopropane (3a), with a 99:1 enantiomeric ratio (e.r.). However, we were surprised to find that a reaction without the ligand delivered a 12% assay yield of racemic 3a (entry 2), which is in contrast to the corresponding γ-methyl C(sp3)−H arylation on linear N-propyl tertiary alkylamines, where no background reaction was observed. 13 Given that the acetate anion of the Pd(OAc)2 appears capable of effecting the γ-methylene C(sp3)−H activation on AMCPs, albeit at low conversion, we were concerned that in less reactive systems this deleterious pathway might become more dominant and thereby erode enantioselectivity. We reasoned that a palladium catalyst without the acetate counteranion might obviate the background reaction. We were pleased to find that when using 10 mol % of Pd(PhCN)2Cl2, the reaction still had excellent assay yield and enantioselectivity, but importantly afforded no background reaction in the absence of the N-acetyl amino acid ligand (entries 3 and 4). Further tuning of the reaction parameters delivered an optimized protocol that involved stirring a DMF solution of phenyl boronic acid, amine 1a (1.5 equiv), benzoquinone (1.0 equiv), Pd(PhCN)2Cl2 (10 mol %), and N-acetyl tert-(L)-leucine (20 mol %) at 40 °C for 15 h, to afford an 82% yield of product 3a, after chromatographic purification, with an e.r. of >99:1 (entry 5). It is interesting to note that a reaction using amine 1a as the limiting reagent (with 2 equiv of PhB(OH)2 2a) gave a 58% assay yield of 3a. We believe it is possible that a modest excess of amine is required to compete with product inhibition through ligation to the palladium catalyst.
In lieu of a crystalline sample of product 3a, we initially predicted that the model for γ-methyl C(sp3)−H arylation of N-isobutyl tertiary alkylamines would provide an accurate rationale for the stereochemical outcome on the cyclopropane system; minimization of the 1,3-diaxial interactions between nonreacting groups on the nitrogen atom and the cyclopropane ring in the reacting chain would be the dominating feature determining the lowest energy pathway (Figure 2B). However, the rigid cyclopropane framework would likely instill geometric restrictions into the chair-like transition structures based on the N-isobutyl tertiary alkylamine model. Accordingly, we calculated new transition structures for the γ-methylene C(sp3)−H activation on the aminomethyl-cyclopropane scaffold (Figure 3A) and found that amine 1a generated boat-like TS1 as the lowest-energy form. TS1 displays the empirically required conformation for C(sp3)−H cleavage, where the amido-palladium (OC−N−Pd) dihedral angle of 11.5° serves to arrange the cyclopropane ring so that its steric interactions with the nonreacting N-substituents are minimized. 14 TS2, an alternative boat-like transition structure, is substantially higher in energy and displays interactions between the cyclopropane ring and the nonreacting N-substituent. A chair-like transition structure (TS3), similar to that found for the reaction of N-isobutyl tertiary alkylamines, appears to be destabilized by a pseudo 1,3-diaxial interaction between one of the nonreacting N-substituents (axial) and a CH2 unit of the cyclopropane, increasing the energy by 5.1 kcal·mol−1. A final transition state that is worthy of comment is TS4, which was found to be 5.9 kcal·mol−1 higher than TS1 and appears to be destabilized by torsional interactions. Therefore, a pathway through TS1 would deliver palladacyclic intermediate int-I, and benzoquinone-assisted reductive elimination would be expected to form the (1R,2S)-aryl-substituted cyclopropane 3a (Figure 3B).
With a set of optimized conditions for a γ-methylene C(sp3)−H arylation on AMCPs and a basic understanding of the factors controlling the stereoinduction, we set about exploring the scope of this new enantioselective transformation (Chart 1). An important part of these studies was determining the range of nonreacting amine substituents that were accommodated in the reaction. Our previous studies on a γ-methyl C(sp3)−H arylation on N-isobutyl tertiary alkylamines had shown a clear limitation in the scope of the amine heterocycles amenable to this transformation; the e.r. of the products was substantially elevated only when acyclic substituents were displayed as part of the amine. Therefore, we were pleased to find that a piperidine-derived AMCP also reacted well under the standard conditions and produced the arylated product 3b with >96:4 e.r. (Chart 1A). A selection of other nitrogen-containing six-membered ring heterocycle-derived AMCPs (3c−h), displaying a variety of functional motifs and features common to pharmaceutical agents, also performed well, giving products with >99:1 e.r. For example, piperazine (3d) and morpholine (3e)-derived substrates produced reasonable yields of the corresponding arylated products. Lower yields were obtained in the presence of competing Pd(II)-coordinating functionality (isoxazoline in 3h). The configuration of the product confirmed our calculations for the boat-type transition structure and validated our model for asymmetric induction. Our previous work on γ-methyl C(sp3)−H arylation on pyrrolidine-derived substrates failed to generate any of the desired arylated products because the competitive β-hydride elimination pathways dominated the reaction, leading to decomposition of the substrate. However, we were pleased to find that, despite competitive β-hydride elimination, the reaction of a pyrrolidine-derived AMCP gave 3i with an e.r. > 96:4 in a modest, yet synthetically usable, yield. Similarly, azetidine- and spirocyclic-derived substrates also produced their arylated products (3j,k) with excellent e.r's and represent attractive small-molecule fragments of interest in the design of biologically active molecules. A bicyclic amine substrate failed to generate its corresponding product (3l), likely due to the hindered nature of the nitrogen lone pair, which prevents efficient coordination with the Pd(II) center. While we did not extensively explore the scope of aminomethyl-cyclopropanes with acyclic nonreacting substituents (3a,m−o), we did find that ester and N-carbamylazetidine functionality did not adversely affect the reaction and gave products 3m and 3n in high e.r. The reaction was able to accommodate Lewis-basic heteroarene functionality, but the product (3o) was formed with lower yield and enantioinduction, possibly as a result of competitive coordination which affects the stability of the required transition structure. Substrates containing more hindered tertiary alkylamine motifs were also tolerated by the reaction and produced the corresponding arylated cyclopropanes with high enantiomeric ratios (3p and 3q). Interestingly, we found that further substitution on the cyclopropane at the same position as the aminomethyl group gave substrates amenable to the γ-methylene C(sp3)−H arylation, although the yield and e.r. of the products 3r−t were lower than their lesser-substituted congeners.
While we are not certain of the origins of this reduced enantioselectivity, it seems likely that the additional geminal substituent on the cyclopropane ring would lead to a syn-pentane-like interaction in the corresponding TS1, thereby raising its energy such that other transition structures may come into play. This further substitution did, however, allow us to assess several selectivity factors in substrates containing more than one suitably proximal C−H bond.
We prepared a substrate that presented a competing γ-methyl C−H bond in addition to the γ-methylene C−H bond of the cyclopropane. Reaction under the standard conditions produced an approximately 3.5:1 mixture of products in favor of C−H arylation on the cyclopropane ring (3s). In spite of the enhanced reactivity of cyclopropane C−H bonds, the selectivity observed over the classically more reactive γ-methyl C−H bonds is surprising. When the reaction was challenged with a substrate displaying a proximal aryl group and the γ-methylene C(sp3)−H bond of the cyclopropane, we observed an approximately 2:1 ratio in favor of arylation on the arene (to 4b); the arylated cyclopropane was produced with an e.r. of 96:4, which provides a modest but usable yield of the highly substituted enantioenriched aminomethyl-cyclopropane (3t). Neither primary nor secondary aminomethyl-cyclopropanes were productive substrates in this reaction.
Following the assessment of the amine motif, the focus shifted toward assessing the scope of the boronic acid component (Chart 1B). It was initially found that arylboronic acids substituted with electron-withdrawing groups delivered lower reaction yields, due to the significant formation of the homocoupled biaryl (see Supporting Information for details). However, better conversion to the desired γ-methylene C(sp3)−H bond arylation product was achieved when carrying out the reaction at 40 °C for longer reaction times and with N,N-dimethylacetamide (DMA) as solvent. With this subtle change to the reaction conditions, a variety of aryl groups with substituents at the meta- or para-positions underwent transfer in good yields: aryl groups containing carbonyls (3aa−ab), halogens (3ac−ad), N-substituted arenes (3ae−af), alkoxy ethers (3ag, 3aj), nitro groups (3ah), trifluoromethyl (3ai), extended aromatic systems (3ak), and dioxolane groups (3al). A selection of pyridyl-boronic acids were also compatible with the reaction and transferred the Lewis-basic heterocycles to the cyclopropane scaffold with excellent e.r's, albeit in lower yield compared to benzene derivatives (3am−ao); 3-pyridyl boronic acid, chosen as a representative unsubstituted Lewis-basic heteroarene, was unsuccessful in the reaction, with homocoupled heteroarene observed as the major product. Unfortunately, arylboronic acids displaying ortho-substituents or free amino groups failed to deliver the desired product under these reaction conditions (3an−ao). All arylated aminomethyl-cyclopropanes displayed excellent levels of enantioselectivity, suggesting that the boronic acid component is not involved in the enantiodetermining step.
With the γ-methylene C(sp3)−H bond arylation of AMCPs displaying a broad substrate scope in both components and a good understanding of the transition structures governing the enantioselective C−H cleavage, we questioned whether this transformation would be amenable to kinetic resolution of racemic substituted cyclopropanes. 15 We chose trans-substituted cyclopropane 5 with which to test this potentially useful transformation, as the presence of a substituent on the opposite face to the reacting C−H bond should not affect the amine conformations depicted in TS1. Accordingly, reaction of 2.0 equiv of disubstituted cyclopropane 5, under our standard conditions, delivered a 61% yield of trans-diaryl amine 6 with an e.r. of 98:2 (Scheme 1). The formation of 6 was accompanied by a small amount of an isomeric trisubstituted aminomethyl-cyclopropane 7 arising from γ-methine arylation of the (R,R)-isomer of aminomethyl-cyclopropane 5 at the benzylic position on the strained ring in >99:1 e.r. The remaining aminomethyl-cyclopropane starting material, 5, was recovered with an e.r. of 75:25. A similar reaction with only 1.0 equiv of amine 5 produced modest yields of the trisubstituted aminomethyl-cyclopropane 6 with a 93:7 e.r. and 16% of the starting material (5) recovered with an e.r. of 97:3. Unfortunately, the conversion of amine 5 to two different arylated products made the calculation of the selectivity factor for this transformation not possible. Despite this, the "kinetic resolution" can be applied in a practical manner to form enantioenriched, differentially trans-diarylated trisubstituted aminomethyl-cyclopropanes, compounds that would be difficult to make in a straightforward fashion via contemporary methods.

Scheme 1. Reaction of Racemic Disubstituted Aminomethyl-cyclopropanes to Form Enantioenriched Trisubstituted Products
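For readers unfamiliar with the selectivity factor mentioned above, the sketch below evaluates the standard Kagan expression for a simple kinetic resolution, s = ln[(1 − c)(1 − ee)]/ln[(1 − c)(1 + ee)]; the conversion value used is a hypothetical placeholder (the paper explicitly states that s could not be computed here because 5 is consumed through two different arylation pathways), so this is purely an illustration of the quantity, not a result from the study.

```python
import math

def selectivity_factor(conversion: float, ee_recovered_sm: float) -> float:
    """Kagan's expression for a simple kinetic resolution; conversion and the
    ee of the recovered starting material are both given as fractions (0-1)."""
    c, ee = conversion, ee_recovered_sm
    return math.log((1 - c) * (1 - ee)) / math.log((1 - c) * (1 + ee))

# Hypothetical illustration only: a recovered 75:25 e.r. corresponds to
# ee = 0.50; the conversion of 0.5 is an assumed placeholder, not a value
# measured in the paper.
print(selectivity_factor(0.5, 0.50))
```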
Next, we turned our attention to the development of the, a priori more demanding, C−H arylation of AMCBs (8). Guided by the studies on γ-C(sp3)−H arylation of the cyclopropane series, we found that the same conditions also led to the formation of arylated aminomethyl-cyclobutane 9a in 63% assay yield. Increasing the reaction temperature to 60 °C, however, provided an optimal 78% assay yield (73% after purification by silica gel chromatography) of 9a with an e.r. > 97:3 (Chart 2A). In this case, the e.r. was determined by 1H NMR analysis after treatment of 9a with methyl iodide (to make the tetraalkyl ammonium salt) and counterion exchange with a chiral hexa-coordinate phosphate salt (see Supporting Information for details). 16 In exploring the scope of the cyclobutane arylation, we found that amine motifs containing common functional groups like esters (9b), electron-rich heteroarenes (9c), protected alcohols (9d), and amines (9e) all delivered good yields of their corresponding arylated aminomethyl-cyclobutanes with excellent e.r's. A range of substrates containing the saturated heterocyclic tertiary alkylamines piperidine (9f−g), morpholine (9h), and piperazine (9i) also worked well, although the yields and e.r's were slightly diminished compared to the corresponding cyclopropane systems (Chart 2A). The C−H bonds in cyclobutanes are less reactive than those in cyclopropanes as a result of their having less sp2 character, which likely explains the lower yields. 17 Similar to that observed with cyclopropanes, the presence of a substituent in a geminal position to the directing amine can still deliver the expected arylation (9j), but a slightly lower e.r. of 88.5:11.5 was observed. A selection of substituted arylboronic acids (2) worked well in the reaction to form aminomethyl-cyclobutane products (9aa−ag) displaying a range of useful functional groups (Chart 2B). Interestingly, we found that the use of a ligand based on phenylalanine generally gave better yields.
Enantiomeric ratios were routinely high although the yields were lower than those obtained for the corresponding cyclopropane series.
To test whether the reaction was competent on more complex substrates, we submitted the pharmaceutical agent, Ivabradine, 18 to the reaction conditions (Chart 2C). To complement the actual enantiomer of Ivabradine, the D-form of the amino acid ligand is used in combination with the otherwise standard catalytic reaction conditions to provide a modest, but synthetically usable yield of the phenylated product 11 as a single diastereoisomer. This late-stage functionalization tactic potentially provides access to modular arylated variants of Ivabradine that would be difficult to access using other methods if required.
Arylated aminomethyl-cyclobutane 9ad provided a single crystal as its hydrochloride salt, from which we were able to determine its absolute configuration through analysis of the X-ray diffraction data. Accordingly, this enabled us to investigate whether our model for the cyclopropane reaction was consistent with the four-membered ring system. Computational calculations determined that the nonplanar AMCBs have access to a few more diastereomeric transition states than the rigid cyclopropane ring (Figure 4). Although a number of transition structures could be identified, only the most relevant pairs are detailed here; a more detailed analysis can be found in the Supporting Information. The lowest transition structure was found to be TS5, where a twist-boat conformation (observed between palladium, nitrogen, the 3-carbon backbone, and the cleaving hydrogen atom) minimizes the eclipsing interactions within the substituted cyclobutane as a result of the puckered conformation of the four-membered ring. The lack of steric interactions contrasts with TS7, where an H-to-H distance of 2.03 Å is observed between the methylene group of the cyclobutane ring and the N-methyl substituent, resulting in an energy difference of 3.9 kcal·mol−1. Interestingly, two other transition states (TS6 and TS8) were found to proceed through a chair-like conformation, resembling the ones predicted when C−H activation is attempted on linear N-isobutyl alkylamines (Figure 2B). When the system loses its strained character, the chair-like transition states recover their predominant stability among other conformations. TS8 exhibits a 1,3-diaxial-type interaction between the cyclobutane and the N-methyl substituent, which makes it significantly higher in energy. TS6 presents no detrimental steric interactions, and the reason for its 2.1 kcal·mol−1 energy difference compared to TS5 lies in the presence of torsional strain within the backbone of the substrate. It is important to emphasize that the most stable transition states within each diastereomeric complex (TS5 and TS6) are devoid of destabilizing steric interactions with the ligand, and that the predicted enantiomeric ratio relies on a much more subtle torsional strain within the aminomethyl-cyclobutane backbone.
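A back-of-the-envelope check of how these computed barrier differences map onto enantiomeric ratios can be made with the standard transition-state-theory relation e.r. ≈ exp(ΔΔG‡/RT). The sketch below is illustrative only: the temperatures used and the assumption that the computed cyclopalladation step is fully enantiodetermining are ours, not statements from the paper.

```python
import math

R = 1.987e-3  # gas constant in kcal mol^-1 K^-1

def er_from_ddg(ddg_kcal: float, temp_k: float) -> tuple[float, float]:
    """Major:minor ratio predicted from the free-energy difference between two
    competing diastereomeric transition states (Curtin-Hammett-type limit)."""
    ratio = math.exp(ddg_kcal / (R * temp_k))
    major = 100 * ratio / (1 + ratio)
    return round(major, 1), round(100 - major, 1)

# Cyclobutane case: TS5 vs TS6 differ by 2.1 kcal/mol; arylation run at 60 C
# (assumed to be the relevant temperature) -> roughly 96:4, in line with the
# observed e.r. > 97:3.
print(er_from_ddg(2.1, 333.15))

# Cyclopropane case: TS1 vs TS3 differ by 5.1 kcal/mol at 40 C -> essentially >99:1.
print(er_from_ddg(5.1, 313.15))
```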
■ CONCLUSION
In summary, we have developed a method for the selective C−H arylation of strained cycloalkanes displaying an appendant tertiary amine functionality. With the aid of an inexpensive chiral ligand, it was possible to synthesize a wide range of arylated cycloalkane products, all displaying exclusive cis diastereoselectivity and enantiomeric ratios frequently >95:5. Common saturated N-heterocycles, such as piperidines, piperazines, morpholines, pyrrolidines, and azetidines, as well as acyclic tertiary alkylamine substituents, were amenable to this γ-methylene C(sp3)−H arylation strategy. Computational studies were able to accurately predict the observed enantioselectivity for both types of ring-strained systems, and the origin of enantioselectivity relied on the restricted geometry of the internal amidate base, which limits the conformations of the reacting substituent through which C−H activation can proceed. We believe that this operationally simple method will be of interest to those interested in the synthesis of conformationally defined, biologically active, functionalized cycloalkane scaffolds in industrial and academic institutions.

The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/jacs.1c11921. All experimental procedures, extended mechanistic discussion, computational calculations, and compound characterization (including 1H and 13C NMR spectra, IR, HRMS, and X-ray data) are available in the document (PDF).
A Wideband Reconfigurable CMOS VGA Based on an Asymmetric Capacitor Technique with a Low Phase Variation
This paper presents a wideband digitally controlled variable gain amplifier (VGA) with a reconfigurable gain tuning range and gain step in a 65 nm CMOS process. A unique asymmetric capacitor-based reconfigurable technique is proposed to extend the gain tuning range and realize gain step reconfiguration. An active neutralization topology based on a stackless transistor is utilized to compensate for the additional phase shift introduced by the gain tuning. Moreover, a current-type digital-to-analog converter (DAC) is also integrated for easier precise gain control. With the asymmetric capacitor varying from 1000 fF to 200 fF with a step of 400 fF, the proposed VGA achieves a 12.2/9.2/6.1 dB gain tuning range with a 0.4/0.3/0.2 dB gain resolution, respectively. At the maximum gain tuning range mode, the measured minimum root-mean-square (RMS) phase error is 1.7° at 23.4 GHz. At the finest gain step control mode, the RMS phase error measured across 20-30 GHz is lower than 1.9°. The tested result also shows the proposed VGA achieves a peak gain of 13 dB with a 3 dB bandwidth of 21.4-29 GHz, and the output 1 dB compression point (OP1dB) is up to 8.6 dBm at 25 GHz.
Introduction
For fifth-generation (5G) new radio (NR) phased array beamformers, the variable gain amplifier (VGA) is a key building block, which has attracted increasing attention from industrial and academic fields. The VGA is designed to serve two purposes. Firstly, it can effectively compensate for the different losses caused by the phase shifter during phase shifting [1,2]. Secondly, it can provide enough amplitude weighting for a phased array to achieve high sidelobe suppression [3][4][5][6][7][8][9]. For millimeter-wave (mm-wave) phase shifters (PSs), a 6 dB gain tuning range of the VGA is sufficient to cover the loss variation [2]. However, it is more desirable for the VGA to have a high gain resolution and low phase variation, to avoid introducing extra gain errors and degrading the phase resolution of the PSs. For phased array systems, sidelobe suppression is very important, as it directly determines the signal quality of the entire link. In order to achieve less than -30 dB sidelobe suppression for a 16-element phased array, a gain tuning range of about 12 dB is required, according to Taylor's method [10]; an illustrative calculation is sketched below. Based on the two applications mentioned above, a new generation of VGAs should have a reconfigurable gain tuning range and gain step to simultaneously serve both purposes, which can greatly increase the flexibility of phased array systems. To the best of the authors' knowledge, a VGA with a reconfigurable gain tuning range and gain step size has not yet been reported so far.
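The sketch below (Python, using SciPy's Taylor window, available in SciPy ≥ 1.5) estimates the amplitude dynamic range a 16-element, -30 dB sidelobe Taylor taper demands of the VGA. The nbar value (the number of near-in sidelobes held at the design level) and the resulting number are our assumptions and need not match reference [10] exactly.

```python
import numpy as np
from scipy.signal import windows

# 16-element Taylor amplitude taper targeting a -30 dB sidelobe level.
# nbar=4 is an assumed design choice, not a value taken from [10].
w = windows.taylor(16, nbar=4, sll=30)

# Gain tuning range the element-level VGAs must cover (dB): ratio of the
# largest (array-center) weight to the smallest (array-edge) weight.
dynamic_range_db = 20 * np.log10(w.max() / w.min())
print(f"required gain tuning range ~= {dynamic_range_db:.1f} dB")
```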
Moreover, a VGA with low additional phase shift during gain tuning is also very important, which can greatly simplify the complexity of phased array calibration procedures [9]. To this end, various phase-invariant VGAs have been proposed [11][12][13][14][15], such as the study by [12], which achieves a 7.5 dB gain tuning range with <3.5° root-mean-square (RMS) phase error across 27-42 GHz by introducing interstage inductance. However, the designs mentioned above all adopt multiple-stack transistor structures, which suffer from more complex circuit topologies and higher supply voltage values, compared with stackless topologies under the same technology node and the normal supply voltage recommended by the vendor.
To address these issues, a new technique, termed the asymmetric capacitor-based reconfigurable technique, is proposed to extend the gain tuning range and reconfigure the gain step. Based on a stackless transistor structure, an active neutralization technique is adopted to minimize the additional phase shift during gain tuning. Furthermore, to achieve accurate gain control and reduce gain error, the chip also integrates a high-resolution digital-to-analog converter (DAC).
Design and Analysis of VGA
Unlike the widely used current-steering (Figure 1a) and Gilbert-cell-based (Figure 1b) VGA structures [16][17][18][19][20][21], the proposed reconfigurable digitally controlled VGA used a stackless common source (CS) topology, as plotted in Figure 1c, which has advantages such as a simpler circuit structure and a lower supply voltage. Figure 2a presents the full circuit schematic of the proposed reconfigurable VGA; in both stages, a differential CS structure was used. The input stage is a variable gain stage to achieve gain tuning, and the output stage is a fixed gain stage to realize high output power. Furthermore, in order to more easily achieve accurate gain control and high robustness against process, supply voltage, and temperature (PVT) variations [16][17][18][19][20], the designed VGA used a digital control method. The control voltages Va and Vb were generated by a 7-bit current-type DAC control circuit [22]. Additionally, to achieve wideband matching and a compact layout, transformer-based high-order matching networks were employed.
Figure 2. (a) Schematic of the proposed reconfigurable digitally controlled VGA and (b) diagram of the structure of the asymmetric capacitors Cx and Cy, where Cx is an adjustable capacitor and Cy is a fixed capacitor.
Asymmetric Capacitor-Based Reconfigurable Technique
The core idea behind the proposed asymmetric capacitor-based reconfigurable technique was to connect the asymmetric capacitors Cx and Cy in series at the gate nodes of the transistors (M1-M4), respectively, as shown in Figure 2a. It should be pointed out that, since the value of Cx is different from that of Cy, they are called asymmetric capacitors. Adjusting the values of the asymmetrical capacitors provides another dimension of gain control, and hence a reconfigurable gain tuning range and gain step can be achieved. Figure 2b depicts the structure of the applied asymmetric capacitors. To achieve three configurations, the adjustable capacitor Cx was designed to be composed of three capacitors and two switching transistors, which are controlled by the bias voltages Ctr1 and Ctr2. In addition, to ensure the reconfigurable effect, it is necessary that the designed capacitances of the asymmetric capacitors are as close as possible to the desired theoretical values. Based on this, both asymmetric capacitors used in this study adopted a metal-insulator-metal (MIM) topology, because of its high resistance to process deviation. Meanwhile, Cx and Cy used similar capacitor arrays.
To investigate the reconfigurable mechanism, the core of the variable gain stage is shown separately, and its simplified schematic diagram is shown in Figure 3a. For a further and more intuitive theoretical analysis, the corresponding half-side small-signal equivalent circuit is also established, as shown in Figure 3b. Based on Figure 3b, the voltage gain can be calculated from (1), where gm refers to the transconductances of transistors M1 and M4; Cgd and Cds describe the parasitic gate-to-drain and drain-to-source capacitances, respectively; ro represents the channel output resistance; and ZL characterizes the load impedance. In addition, A1 and A2 are obtained from (2) and (3), where gm1 and gm4 are biased by Va and Vb, respectively, which are generated by a 7-bit DAC. According to (1), the voltage gain is proportional to the difference between Va and Vb; that is, the greater the difference between the two, the higher the gain. Based on this, to achieve gain tuning of the VGA, Va should not be equal to Vb. Furthermore, when Va is set less than Vb, the maximum gain tuning range ΔGmax can be derived from (1) as in (4), where gm,min and gm,max are the transconductances of the transistors biased at the minimum and maximum control voltages. It should be pointed out that, since the amplifier gain is in decibels, the voltage gain is converted into decibels before the logarithmic operation is performed; the difference operation therefore becomes a division inside the logarithm, so the final expression of the gain tuning range appears as a ratio. It is also worth mentioning that Cgd is not ignored but neutralized, based on the proposed topology, in deriving Equations (1)-(4). The detailed proofs of Equations (1)-(4) are presented in Appendix A. Then, based on (4), the gain resolution Gr of the VGA can be calculated from (5), where n is the number of control bits of the DAC.

According to (4) and (5), the proposed asymmetric capacitor-based reconfigurable technique provides a new method to configure ΔGmax and Gr by adjusting the coefficients A1 and A2. Meanwhile, in order to keep the maximum gain of the VGA, A2 needs to be set ≈ 1. As shown in Figure 4a,b, the conventional methods obtain ΔGmax and Gr only by controlling the transconductance gm of the transistors, whereas the proposed technique offers another dimension of ΔGmax and Gr control. When A1 is varied from 1 to 0, ΔGmax and Gr are reconfigured. When the asymmetric capacitors Cx and Cy are designed to be greater than 1000 fF across 20-30 GHz, it can be calculated from (2) and (3) that the values of the coefficients A1 and A2 are approximately equal to one. Conversely, when they are smaller than 1000 fF, the values of A1 and A2 will be less than one. Thus, the gain tuning range and gain step can be reconfigured by adjusting the capacitors Cx and Cy. Based on the above, the capacitance of Cx was adjusted by a 2-bit switched-capacitor array, which provides 200/600/1000 fF; Cy was designed to be a fixed value with a capacitance close to 1000 fF, so that the coefficient A2 was approximately equal to one, as shown in Figure 3a. By configuring different Cx values, the simulated small-signal gains versus frequency are plotted in Figure 4c; in order to observe the reconfigurable effect more intuitively, only the maximum and minimum gain states are shown in the results.
In the minimum gain control mode, the gain of the VGA changes greatly as Cx increases from 200 fF to 1000 fF, while in the maximum gain control mode the gain is almost the same. As a result, the reconfigurable gain tuning range and gain step can be realized. The simulation results shown in Figure 4c agree well with the theoretical analysis.
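As a quick back-of-the-envelope illustration of the reconfiguration mechanism described above, the sketch below models A1 as the capacitive divider formed by the series capacitor Cx and an effective transistor gate input capacitance; the value of that input capacitance (100 fF) and the simple divider model are assumptions for illustration only, since Equations (1)-(3) are not reproduced in this text.

```python
# Minimal sketch, not the paper's exact Eq. (2): A1 approximated as the
# capacitive divider between the series capacitor Cx and an assumed
# effective gate input capacitance C_in of the main transistor.
C_in = 100e-15  # F, assumed effective input capacitance (illustrative only)

for Cx in (200e-15, 600e-15, 1000e-15):    # the three switchable Cx values
    A1 = Cx / (Cx + C_in)                   # fraction of the input swing reaching the gate
    print(f"Cx = {Cx*1e15:4.0f} fF -> A1 = {A1:.2f}")

# A large Cx acts almost as a short circuit (A1 -> 1), while a small Cx divides
# the input swing (A1 < 1); this is the extra control dimension that widens the
# gain tuning range for the 200 fF setting and narrows it for the 1000 fF setting.
```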
Phase Compensation Technique
For mm-wave VGAs, low additional phase shift during gain tuning is also a very important performance metric [9]. As mentioned before, many scholars have conducted extensive research to realize low phase variation during gain tuning. For VGAs with CS topologies, the parasitic capacitor Cgd has been discussed in detail [23] as the most important factor causing the phase variation. In order to eliminate Cgd, a method widely used in amplifier design introduces a positive feedback capacitor, which is called the capacitive cross-coupled neutralization (CCCN) technique [24]. This technique has the advantages of a simple structure and an obvious neutralization effect. However, its disadvantage is also obvious: good neutralization can only be achieved in a narrow frequency band, as the required positive feedback capacitance changes with frequency. This greatly limits the use of this technique in wideband VGA design. To overcome this problem, the active neutralization technique from the previous study [23] was employed. The core idea of this technique is to replace the positive feedback capacitor of the conventional CCCN technique with an active transistor. Since the auxiliary transistors (M3 and M4) and the main transistors (M1 and M2) have the same size, their Cgd can be guaranteed to be the same as the frequency changes, thus achieving good neutralization over the wide frequency band. Figure 5 plots the simulated maximum phase variation in the classic and the proposed active neutralization-based CS VGAs across 20-30 GHz under the same gain adjustment range. Both simulations were conducted with the same circuit configuration. Across 20-30 GHz, the maximum phase variation in the proposed VGA with the active neutralization technique was below 0.3° during gain tuning. In contrast, the maximum phase variation results for the two classic CS VGAs were relatively large: the VGA with the conventional CCCN technique exhibited a maximum phase variation of 1.6°, while the VGA without any neutralization technique showed 8.6° phase variation at 30 GHz. These results illustrate two points: (1) the proposed VGA with active neutralization can effectively eliminate Cgd and achieve low phase variation; (2) good phase compensation can be realized over the wide frequency band, which is very suitable for broadband VGA design.
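The benefit of neutralization can also be seen from a rough numerical estimate: the sketch below evaluates the residual feedback susceptance ω(Cgd − Cn) across the band for a perfectly matched and for a 20% undersized neutralization capacitance. The capacitance value and the mismatch are illustrative assumptions, not the actual device parasitics.

```python
import numpy as np

Cgd = 15e-15                              # F, assumed gate-drain parasitic (illustrative)
freqs = np.array([20e9, 25e9, 30e9])      # Hz, band of interest

for Cn, label in ((Cgd, "matched neutralization"), (0.8 * Cgd, "20% undersized Cn")):
    residual = 2 * np.pi * freqs * (Cgd - Cn)      # leftover feedback susceptance, S
    print(label, [f"{b*1e6:6.1f} uS" for b in residual])

# A matched Cn cancels the feedthrough at every frequency, which is what the
# active (transistor-based) neutralization aims for as the parasitics track over
# the band; a fixed, mismatched capacitor leaves a frequency-dependent residual
# that turns gain changes into phase changes.
```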
Measurement Results
The proposed reconfigurable digitally controlled CS VGA was fabricated in a 65 nm CMOS process, and its die micrograph is presented in Figure 6. Excluding pads, the core area of the chip was 0.758 mm × 0.23 mm. The two-stage VGA had a total power consumption of 98 mW with a 1 V supply, and the selection of the 1 V supply voltage followed the vendor's recommendation for the corresponding process node. It is worth mentioning that, to achieve active neutralization, the auxiliary pairs in the proposed structure were kept on, which consumed slightly extra power to maintain the same gain. Even so, the total power consumption of the variable gain stage with the DAC was only 29 mW (including the auxiliary pairs). The VGA presented in this paper consumed relatively high power because it was designed for large output power. If a system does not need such high output power, the bias voltage of the output stage can be decreased, and the total dc power consumption of the proposed VGA would be reduced accordingly.
The VGA gain was controlled by Va and Vb, which were generated by the designed seven-bit DAC. The measured 32 states of gains under different control modes are shown in Figures 7a, 8a and 9a, respectively. It can be observed that the proposed VGA achieved a 12.2/9.2/6.1 dB gain tuning range, respectively, when the bias voltages Ctr1 and Ctr2 were configured from 11 to 10 and then to 00 (Cx varying from 1000 to 200 fF, stepping 400 fF). The 6.1 dB gain tuning range was aimed at compensating the insertion loss of PSs in phased array systems. The 12.2 dB gain tuning range was intended for gain tuning in each element to suppress the sidelobes. As for the intermediate state, it was a compromise reserved according to actual design requirements. Meanwhile, the peak gain remained basically constant with the changes in Cx. The measured 3 dB bandwidth was as wide as 7.6 GHz, from 21.4 to 29 GHz, with an approximately 13 dB peak gain.
To further achieve an ultrahigh average sidelobe suppression ratio of the beamforming, so as to ensure beamforming quality and chain data rate, the VGA was suggested to achieve a gain step of less than 0.5 dB [10]. As for insertion loss compensation of PSs, higher gain steps were also required, to avoid introducing extra gain errors. Based on the above considerations, the proposed VGA was designed to have a 0.4/0.3/0.2 dB gain resolution, respectively. To achieve such high-precision gain step control, it was necessary to design a high-precision DAC. According to (5) and the measured gain tuning ranges, the three gain resolutions were calculated accordingly.

Furthermore, the three configurations for the min, mean, and max gain step versus the six-bit binary control word over the whole frequency band are plotted in Figures 7b, 8b and 9b, respectively. The proposed VGA basically realized three reconfigurable gain steps of 0.4/0.3/0.2 dB, respectively. Then, the measured input return loss (S11) and output return loss (S22) of the VGA for the different configurations are plotted in Figures 7c, 8c and 9c, respectively. It can be seen from the results that changing the capacitance of Cx slightly affects S11, because Cx slightly changes the imaginary part of the source impedance. Since the output stage was a fixed gain stage, S22 basically remained unchanged versus the 32 control words.
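Because Equation (5) is not reproduced in this extracted text, the following sketch simply assumes the usual uniform-step relation Gr = ΔGmax/(2^n − 1), with n = 5 effective control bits corresponding to the 32 measured gain states, and checks it against the measured tuning ranges quoted above.

```python
# Assumed relation (Eq. (5) is not reproduced here): a uniform gain step over the
# 2**n - 1 intervals spanned by the 2**n measured gain states.
n_states = 32                       # 32 measured gain states per configuration
ranges_dB = (12.2, 9.2, 6.1)        # measured gain tuning ranges, dB

for dG in ranges_dB:
    Gr = dG / (n_states - 1)        # dB per step
    print(f"dG_max = {dG:4.1f} dB -> G_r = {Gr:.2f} dB/step")
# -> about 0.39, 0.30 and 0.20 dB, consistent with the reported 0.4/0.3/0.2 dB steps.
```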
In addition, it is worth mentioning that, to achieve wideband matching and a compact layout, transformer-based high-order matching networks were adopted. By adjusting the coupling coefficient k of the primary and secondary coils, the resonance frequencies (ωL and ωH) could be adjusted, thereby realizing the adjustment of the bandwidth [25]. Figure 10 presents the measured small-signal gains versus the 32 gain modes at 25.4 GHz, which verifies that the proposed VGA achieves a reconfigurable gain tuning range and gain resolution, in addition to realizing linear gain control.
The RMS phase error was used to characterize the phase variation in the VGA, computed as the root-mean-square deviation of the per-state phases from their average, where θi is the phase at the i-th state and θave is the average of the phases of all states. In this paper, the phase was extracted from the argument of S21. The RMS phase errors for all gain states were measured by taking the maximum gain state as a reference, and they are shown in Figure 11a. At the maximum gain tuning range mode, the measured RMS phase error was 1.7° at 23.4 GHz, and across 20-30 GHz the measured RMS phase error was less than 5.5°. At the medium gain tuning range mode (0.3 dB gain step), the measured minimum RMS phase error was 0.5° at 25.2 GHz, and across 20-30 GHz the RMS phase error was less than 2.4°. At the finest gain step control condition, the RMS phase error measured across 20-30 GHz was lower than 1.9°; at 30 GHz, a minimum phase error of 0.22° was achieved. Meanwhile, under the maximum gain state, the OP1dB was up to 8.6 dBm at 25 GHz, as plotted in Figure 11b.
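For reference, the RMS phase error described above can be computed with a few lines of code; the sketch below implements the standard root-mean-square definition using phases extracted from the argument of S21, and the sample values are placeholders rather than measured data.

```python
import numpy as np

def rms_phase_error(phases_deg):
    """RMS deviation of the per-state phases from their average, in degrees."""
    phases = np.asarray(phases_deg, dtype=float)
    return np.sqrt(np.mean((phases - phases.mean()) ** 2))

# Placeholder example: arg(S21) in degrees for each gain state at one frequency,
# referenced to the maximum-gain state.
phases = [0.0, -0.4, 0.3, -0.8, 0.6, -0.2]
print(f"RMS phase error = {rms_phase_error(phases):.2f} deg")
```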
Table 1 summarizes the performance of this research and compares it to prior studies. The proposed VGA is the only one that can reconfigure its gain tuning range and gain resolution, which will greatly increase the flexibility of phased array systems. Additionally, it is the only one that adopts a stackless transistor topology, thus achieving the lowest supply voltage compared with other state-of-the-art multiple-stack transistor structures.
Conclusions
A 21.4-29 GHz reconfigurable digitally controlled VGA based on a stackless CS structure with novel asymmetric capacitor-based reconfigurable and active neutralization phase compensation techniques was introduced. The proposed VGA achieved a 12.2/9.2/6.1 dB gain tuning range with a 0.4/0.3/0.2 dB gain step while keeping the peak gain constant. At the finest gain step mode, the measured RMS phase error was <1.9° across 20-30 GHz. At 30 GHz, a minimum phase error of 0.22° was achieved. At the maximum gain state, it achieved a measured 13 dB peak gain and 8.6 dBm OP1dB. The measurement characteristics demonstrate that the proposed VGA is suitable for 5G mm-wave NR phased array beamformers that require a reconfigurable gain tuning range and gain step with low phase variation.
Data Availability Statement:
The study did not report any data.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A
Derivations of Equations (1)-(4)
Figure A1 shows a small-signal equivalent circuit of a common source (CS) amplifier with parasitic elements when the effect of the asymmetric capacitors is considered.
According to Kirchhoff's current law (KCL) and Kirchhoff's voltage law (KVL), the following equations can be obtained.
Figure A1. Small-signal equivalent circuit with asymmetric capacitor Cx.
Figure A2 shows the small-signal equivalent circuit of a CS amplifier with parasitic elements when the effect of the asymmetric capacitors is not considered.
According to Kirchhoff's current law (KCL) and Kirchhoff's voltage law (KVL), the node equations of the circuit in Figure A1 can be derived. In order to facilitate the operation, let Av = Vout/Vin; Equation (A6) can then be simplified, and Equations (A5) and (A11) can be simplified as well. Based on Equations (A12) and (A13), A1 is obtained, and similarly A2; according to Equations (A14) and (A15), Equation (A5) can then be written as Equation (A16). Qualitatively analyzing Equation (A16), compared with the conventional symmetrical capacitor structure, the A1 coefficient is generated by the voltage divider formed by the asymmetrical capacitor Cx. Therefore, if the capacitance of Cx is large (equivalent to a short circuit, with no voltage division), A1 is approximately equal to 1; on the contrary, if the capacitance of Cx is small and a certain voltage division occurs, A1 will be less than 1. Figure A3a shows the half-side small-signal equivalent circuit of the proposed variable gain stage with asymmetric capacitors. Since it is a half-side circuit, Vin becomes one-half. According to the above derivation, the coefficients A1 and A2 are introduced due to the influence of the asymmetric capacitors. Therefore, in order to express the derivation of Equation (1) more clearly and concisely, we first use the symmetrical capacitor structure to derive Equation (1); based on Equation (A16), for the asymmetric capacitor structure, the coefficients A1 and A2 simply need to be added to the numerator afterwards. Figure A3b presents the half-side small-signal equivalent circuit of the variable gain stage without asymmetric capacitors. According to Kirchhoff's current law (KCL) and Kirchhoff's voltage law (KVL), the corresponding equations can be derived. Since (1/2)Vin+ = -(1/2)Vin-, the left side of Equation (A18) can be simplified to Equations (A19) and (A20), respectively.
Holographic Complexity Growth Rate in Horndeski Theory
Based on the complexity = action (CA) conjecture, we calculate the holographic complexity of AdS black holes with planar and spherical topologies in Horndeski theory. We find that the rate of change of holographic complexity for neutral AdS black holes saturates the Lloyd's bound. For charged black holes, we find that there exists only one horizon, and thus the corresponding holographic complexity cannot be expressed as the difference of some thermodynamical potential between two horizons, as it can for the Reissner-Nordstrom AdS black hole in Einstein-Maxwell theory. However, the Lloyd's bound is not violated for charged AdS black holes in Horndeski theory.
Introduction
The gauge/gravity duality establishes connections between quantum gravity, in string theory or more general settings, and strongly coupled gauge field theories living on the boundary of the gravity background [1][2][3][4]. It has brought remarkable insight into the understanding of various strongly coupled systems, such as low energy QCD, the quark gluon plasma and condensed matter theory [5][6][7][8]. Recently, two proposals about quantum computational complexity have emerged, namely the conjecture of complexity = volume (CV) [9,10] and the conjecture of complexity = action (CA) [11,12]. Many studies have been done on these two conjectures; see [13][14][15][16][17][18][19][20][21][22][23][24][25][26][27] and references therein. In this paper, we shall follow the CA conjecture. The CA conjecture was proposed by Brown et al. [11,12]; it states that the quantum complexity of the ground state of the CFT is given by the classical action evaluated on the "Wheeler-DeWitt patch" (WDW), where the WdW patch is the spacetime region enclosed by future and past light rays that start from a bulk Cauchy slice, are reflected at the boundaries and end on another bulk Cauchy slice; see, e.g., Fig. 1 in Section 2.
The conjecture thus reads C = I_WDW/(πħ). (1.1) It was found that the late time action growth of various neutral black holes in Einstein gravity is proportional to the black hole mass M [11,12], dI/dt = 2M, (1.2) which suggests that the neutral black hole saturates the Lloyd's bound on the rate of computation [28]. Later, the gravitational action growth for charged and/or rotating AdS black holes in Einstein gravity was studied [29]; in the resulting expression, Ω and J are the angular velocity and angular momentum of the black hole, while µ and Q are its electrical potential and charge. The result was further generalized: the gravitational action growth dI/dt can be written as the difference of the generalized enthalpy between the two corresponding horizons [30], where F is the free energy, T and S are the temperature and entropy of the black hole, and H = F + TS is the generalized enthalpy. It was pointed out that the result still holds for higher derivative theories. Recently, it was explicitly shown that the result is true for AdS black holes in f(R) gravity, massive gravity theories [31][32][33][34] and Lovelock gravity [35].
However, it was pointed out that the action growth expression is different for charged black holes with a single horizon in the Einstein-Maxwell-Dilaton and Born-Infeld theories [36]. It is thus worthwhile to take a further step and explore the pattern of the action growth of charged AdS black holes with only one horizon in higher derivative gravity theories. In this paper, we shall study the action growth of AdS black holes in Horndeski gravity theory.
Horndeski theory is a higher derivative scalar-tensor theory which shares a property with Lovelock gravity: the Lagrangian involves terms with more than two derivatives, but the equations of motion consist of terms with at most two derivatives acting on each field [37]. AdS black holes have been constructed in Horndeski gravities in [38,39] and their thermodynamics was studied in [40,41]. The stability and causality of these black holes were studied in [42][43][44]. Holographic applications of Horndeski theory were investigated in [45][46][47][48][49][50][51][52].
Neutral black holes in Horndeski theory
In this section, we consider Einstein-Horndeski theory in four spacetime dimensions, in which the Einstein tensor G_µν = R_µν - (1/2) R g_µν is coupled to a scalar field χ. For a static ansatz the theory admits asymptotically AdS black holes with planar and spherical topologies [38].
Planar black hole
The metric profile of the planar black hole solution is the same as that of the Schwarzschild-AdS black hole. The solution has two integration constants: µ is related to the black hole mass, and β is related to the scalar field χ and should be positive. The AdS radius l = 1/g is not determined by the cosmological constant Λ but by the ratio of α over γ, which is precisely the situation at the critical point, and thus the holographic a-theorem holds for this system, as pointed out in [51]. When µ = 0, the solution turns out to be an AdS vacuum with the scalar χ being logarithmic in the radial coordinate r. The conformal symmetry of the AdS is broken down to the Poincare symmetry plus the scaling symmetry because of the logarithmic scalar χ, which means the dual field theory is scale invariant. The Horndeski coupling γ does not have a smooth zero limit and should not be treated as a perturbative parameter. It was also shown that the kinetic term of the scalar perturbation δχ is non-negative as long as γ is greater than zero [52].
The mass of the black hole follows from the solution; for more detail about the thermodynamics of the black hole we refer to [40]. Now, we follow the method in [13] to calculate the late time action growth. In this method, null coordinates are introduced and the metric can be written in the corresponding form. For the choices of (t, r), (u, r) and (v, r), we have the corresponding expressions, where w = t, u, v and Ω is the volume of the two-dimensional transverse space.
The Wheeler-DeWitt patch of the black hole is defined by future light rays starting inside the black hole and reaching the boundaries, joined to past light rays ending at the future singularity, as illustrated in Fig. 1. The left part of Fig. 1 shows the two patches corresponding to the actions I(t0) and I(t0 + δt), which differ by a time δt. In the right part of Fig. 1, the shaded area represents the difference of the actions, consisting of bulk, surface and joint contributions. Here, we choose the convention in [13] that the contributions from the null boundaries are zero. K is the Gibbons-Hawking term and a will be defined later. Due to the equation of motion, the Horndeski term in the action vanishes, so the bulk contribution takes a simpler form, and the ρ0(v) terms cancel out in the whole expression. Since it is a small variation from v0 to v0 + δt, so is the variation of the radius, and the expression can be simplified accordingly. At late times, the surface B approaches the black hole event horizon, where r0 is the radius of the event horizon.
Next, we consider the boundary terms involving K. The normal vector is n_α = (1/√f) ∂_α r, and K follows accordingly. It is understood that the value in the square root is taken as an absolute value, i.e. √h stands for √|h|. For our black hole solution, K takes a simple form. Finally, we consider the joint contribution ∮ a dS, where a is defined as in [13]. With those contributions, at late times r_B → r_0, where r_0 is the radius of the event horizon, and the joint term follows.
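Since the explicit expression for K did not survive the extraction, the following minimal symbolic sketch only reproduces the standard divergence formula K = ∇_µ n^µ for a static metric of the form ds^2 = -h dt^2 + dr^2/f + r^2(dx^2 + dy^2), working in the region f > 0; it is a generic cross-check, not taken from the paper.

```python
import sympy as sp

r = sp.symbols('r', positive=True)
f = sp.Function('f')(r)   # g^{rr} = f(r), assumed positive here (outside the horizon)
h = sp.Function('h')(r)   # -g_{tt} = h(r)

# Volume element for ds^2 = -h dt^2 + dr^2/f + r^2 (dx^2 + dy^2)
sqrt_g = sp.sqrt(h / f) * r**2

# Unit normal of a constant-r surface: n_r = 1/sqrt(f)  =>  n^r = sqrt(f)
n_up = sp.sqrt(f)

# Trace of the extrinsic curvature: K = (1/sqrt(-g)) d_r( sqrt(-g) n^r )
K = sp.simplify(sp.diff(sqrt_g * n_up, r) / sqrt_g)
print(K)                              # general case with h != f (spherical-type profiles)
print(sp.simplify(K.subs(h, f)))      # planar neutral case h = f: f'/(2*sqrt(f)) + 2*sqrt(f)/r
```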
Spherical black hole
The spherical black hole solution in Einstein-Horndeski theory was constructed in [38], with χ given in terms of 3βg^2 r^2 (3g^2 r^2 + 1). The profile is more complicated and h is no longer equal to f. The mass of the black hole follows from the solution. We shall follow the same method as in the last subsection to calculate the action growth; the steps and situation are similar, so hereafter we shall just present the main results of the calculation. The WdW patches are the same as in the planar case, and the late time action difference consists of three parts: the bulk, surface and joint contributions.
Charged black holes in Horndeski theory
In this section we consider Einstein-Horndeski-Maxwell theory in four-dimensional spacetime. For a static ansatz it was found that the theory admits black hole solutions with planar and spherical topologies [39].
Planar black hole
First, we take a look at the charged planar black hole. The solution is given by

f = [36 g^4 r^8 (βγ + 4κ)^2 / (κ q^2 - 6 g^2 r^4 (βγ + 4κ))^2] h ,
h = g^2 r^2 - µ/r + κ q^2 / [r^2 (βγ + 4κ)] - κ^2 q^4 / [60 g^2 r^6 (βγ + 4κ)^2] .

The thermodynamics was fully analysed in [40]. The mass, charge and electrical potential of the black hole follow from the solution; we chose the gauge in which the electrical potential vanishes on the horizon. Different from the neutral case, there is an additional curvature singularity r_* where f diverges,

r_*^4 = κ q^2 / [6 g^2 (βγ + 4κ)] . (3.6)

In order to describe a black hole, we require that the singularity r_* lies inside the event horizon, which ensures that the temperature is always positive, T > 0. From (3.6) we can see that in the limit q → 0 the singularity r_* → 0, going back to the usual singularity.
However, it is worth pointing out that this charged black hole has no extremal limit: the solution has one and only one horizon. In order to see this property more clearly, we express the profile h in terms of r_*; hereafter, a prime denotes the derivative with respect to r. We find that the local extremes of h are positive for the black hole solution, which is quite different from the RN-AdS black hole.
We now turn to the calculation of the action difference; the method we follow is the same as in the neutral case. The WdW patch is similar, too, except that the past light cone ends at the curvature singularity r_* rather than at the usual singularity r = 0. Hence, we shall skip the intermediate steps and present the final result for the total action difference and the resulting action growth. As mentioned previously, we can see from (3.6) that r_* → 0 when q → 0; thus, in the limit q → 0, the action growth δS/δt → 2M, going back to the neutral case as expected. It is obvious from (3.10) that C_0 is positive, so the action growth rate is less than 2M, satisfying the Lloyd's bound.
Spherical black hole
The solution is given by the corresponding metric functions, subject to the associated constraints. The mass, charge and electrical potential of the black hole are given in (3.14); for more detail about the thermodynamics of the black hole we refer to [41]. Here ω is the volume of the unit sphere, and we choose a gauge so that the electrical potential a vanishes on the black hole event horizon. Again, there is an additional singularity r_* where f diverges. In order to avoid a naked singularity, we require that the singularity r_* lies inside the event horizon, which ensures that the temperature is always positive, T > 0. As in the planar case, this charged black hole has no extremal limit either: the solution has one and only one horizon. With the same strategy, we can do the analysis by using r_*. We find that the local extremes of the profile h, where h' = 0, are equal to

h_e = [(r_e^2 - r_*^2)/2] [3 g^2 (r_e^2 + r_*^2)(βγ + 4κ) + 4κ] .

Now we are in the position to calculate the action difference with the same procedure. The total action difference includes three parts, the bulk, boundary and joint parts, and the action growth follows from it. Again, for small q, r_* approaches q/(2√2); thus, in the limit q → 0, the combination QΦ + C_1 approaches zero and the action growth reduces to the neutral case, δS/δt → 2M. For small q the combination QΦ + C_1 can be expanded explicitly; here, we set κ = 1 and g = 1 for simplicity. We can easily see that the combination is greater than zero for a not very small r_0. However, as we mentioned, the singularity should lie inside the black hole event horizon, r_0 > r_*, so r_0 cannot be arbitrarily small: when q is small, the black hole radius r_0 should be greater than q/(2√2), and in this limit the combination is obviously greater than zero. So, we find that the combination QΦ + C_1 is positive for small q. For a general parameter range we cannot prove that the combination QΦ + C_1 is always greater than zero; however, we plotted QΦ + C_1 as a function of q and r_0 for a large number of parameter choices, which imply that QΦ + C_1 is greater than zero, and we present several of them in Fig. 2. It seems that the combination QΦ + C_1 is always greater than zero, and the holographic complexity satisfies the Lloyd's bound.
Conclusions
In this paper, we studied the holographic complexity in Horndeski gravity theories through the "Complexity = Action" conjecture. In particular, we calculated the gravitational action growth of neutral and charged AdS black holes in Horndeski gravities. We found that the rate of change of the action for neutral black holes with planar and spherical topologies is 2M, which is the same as the universal result [11,12] and saturates the Lloyd's bound [28].
The charged black holes are more complicated. We analysed the metric profiles carefully and found that the charged black holes with planar and spherical topologies both have only one event horizon, which is quite different from the RN black holes. We computed the gravitational action growth for the charged black holes with planar and spherical topologies. It turns out that the action growth for the planar topology is less than 2M and thus satisfies the Lloyd's bound. For the spherical case, we showed that the action growth is less than 2M when q is small. Though we did not prove analytically that the result holds for the whole range of parameters, we studied a substantial set of parameter choices numerically and found that the action growth is less than 2M, which leads us to believe that the action growth is always less than 2M and satisfies the Lloyd's bound.
Here, we only studied the late time rate of change of the holographic complexity; it is worthwhile to go a step further and study the effect of the higher-derivative non-minimally coupled Horndeski term on the complexity of formation [54] and the subregion complexity [16, 55-57], and also to explore the full time dependence of the holographic complexity [58][59][60].
Nanoparticle emissions from gasoline vehicles DI & MPI
The nanoparticles (NP) count concentrations are limited in the EU for all Diesel passenger cars since 2013 and for gasoline cars with direct injection (GDI) since 2014. For the particle number (PN) of MPI gasoline cars there are still no legal limitations. In the present paper some results of investigations of nanoparticles from five DI and four MPI gasoline cars are represented. The measurements were performed at the vehicle tailpipe and in the CVS-tunnel. Moreover, five variants of "vehicle - GPF" were investigated. The PN-emission level of the investigated GDI cars in WLTC without GPF is in the same range of magnitude, very near to the actual limit value of 6.0 × 10^12 1/km. With GPF's of better filtration quality, it is possible to lower the emissions below the future limit value of 6.0 × 10^11 1/km. The modern MPI vehicles also emit a considerable amount of PN, which in some cases can attain the level of Diesel exhaust gas without DPF and can exceed the actual limit value for GDI (6.0 × 10^12 1/km). The GPF-technology offers in this respect further potentials to reduce the PN-emissions of traffic.
Introduction
The invisible nanoparticles (NP) from combustion processes penetrate easily into the human body through the respiratory and olfactory pathways and carry a potential for numerous harmful health effects. The nanoaerosol in vehicle exhaust is known to be a complex mixture of different volatile and non-volatile species, often showing a bimodal particle size distribution with a nucleation mode smaller than 20 nm and a larger accumulation mode that mainly contains aggregates of primary particles.
The larger accumulation mode is usually composed of more graphitic soot particles with an elemental carbon (EC) structure, whereas the particles in the nucleation mode are reported to be mainly volatile organics, especially when sulphur is absent from the fuel and lubrication oil [1][2][3][4]. However, recent studies also detected low-volatility particle fractions in the ultrafine size range when sampling was carried out according to the PMP protocol at 300°C [5][6][7].
These particles are suspected to be nucleated metal oxides originating from metal additives in lubrication oil or fuels [8][9][10][11]. The formation of this particulate fraction was especially observed when the soot content was low, as in the idle condition of diesel vehicles. These particles mainly appear in the ultrafine size range < 23 nm. While the mass contribution of these ultrafine particles in vehicle emissions is very low, their contribution to the number concentration is significant. Moreover, these ultrafine particles may contribute to the surface composition of the aerosol and therefore have a significant impact on health effects associated with pollution.
Knowledge about the emission level, chemistry and formation mechanisms of these particles is an important objective in order to assess their toxic potential, and to propose effective measures to reduce these emissions.
Studies of gasoline fueled internal combustion engines pointed out that this vehicle class can also emit remarkable amounts of particles [6,12,13]. Especially gasoline direct injection (GDI) technology shows particle number (PN) emissions significantly higher than modern diesel cars equipped with the best available DPF technology. Since the trend towards gasoline vehicles with GDI technology is increasing, a significant rise in emissions is predicted for the near future.
The nanoparticle emissions are produced especially at cold start and warm-up conditions and at dynamic engine operation [14]. The lube oil contributes to this emission in terms of number concentrations in the nuclei mode and of composition [8][9][10].
The investigations of the morphology of the nanoparticles from gasoline direct injection engines revealed principally graphitic structures, which can store some metal oxides in certain conditions and can be overlapped by condensates [15,16].
Car manufacturers and suppliers of exhaust aftertreatment technology offer several mature GPF solutions for the efficient elimination of nanoparticles from DI SI engines [17,18].
Gasoline vehicles with MPI (multipoint port injection) also emit nanoparticles, some of them emitting high amounts of PN and PM. In a study by AFHB [19], an older model with MPI was found to emit, at stationary part load operation, up to 4 orders of magnitude more nanoparticles than a lower emitting GDI car. The main reason for this increased PN-emission was attributed to the increased lube oil consumption. Nevertheless, an inferior quality of mixture preparation cannot be excluded.
The MPI technology has a big share of the worldwide market because of its lower costs and simplicity, and in several countries this technology will remain the primary option for several years to come.
From this perspective, and taking into account the progressing exhaust gas legislation aiming at increased health and environmental protection, it is necessary to include the cars with MPI in the efforts to reduce PN & PM.
Some investigations in the present paper were performed at AFHB (Laboratories for IC-Engines and Exhaust Emission Control of the Berne University of Applied Sciences, Biel CH) as part of the network project GasOMeP, together with the Swiss research institutions EMPA, FHNW and PSI.
This paper presents: comparisons of NP-emissions of five GDI vehicles and four MPI vehicles at steady state (SMPS) and at transient (CPC) operation, as well as the emissions reduction potentials with different gasoline particle filters (GPF's) on some GDI cars.
Tested vehicles
Table 1 summarizes the most important GDI vehicle data. As a reference for the best available technology concerning the reduction or elimination of PM- and PN-emissions, a modern Diesel passenger car with a high-quality DPF was included in the tests (vehicle ).
Fuels and lube oils
The gasoline used was from the Swiss market, RON 95, according to SN EN228. A larger batch of gasoline was purchased for the project and it was analysed at the INTERTEC laboratory. The most important data are given in Table 3.
The lube oils for the GDI-vehicles were also analysed at the EMPA laboratory; Table 4 shows the 9 most prominent metals and the sums of all 21 analysed metals. For all GDI-vehicles, except for vehicle , the same lube oil was applied.
For the Diesel car as well as for the MPI cars the lube oils were not changed and not analysed.
Test methods and instrumentation
The vehicles were tested on a chassis dynamometer at constant speeds and in the dynamic driving cycles WLTC, with cold & warm engine start.
Chassis dynamometer - the following test systems were used:
- roller dynamometer: AFHB GSA 200
- driver conductor system: Tornado, version 3.3
- CVS dilution system: Horiba CVS-9500T with Roots blower
- air conditioning in the hall: automatic (intake and dilution air)
The driving resistances of the test bench were set according to the legal prescriptions, corresponding to a horizontal road.
Nanoparticle analysis
The measurements of the NP size distributions were conducted with different SMPS-systems, which enabled different ranges of size analysis at steady state operation:
- SMPS: DMA TSI 3081 & CPC TSI 3772 (10-429 nm)
- nSMPS: nDMA TSI 3085 & CPC TSI 3776 (2-64 nm)
For the dilution and sample preparation an ASET system from Matter Aerosol was used (ASET: aerosol sampling & evaporation tube). This system contains:
- primary dilution - MD19 tunable rotating disc minidiluter (Matter Eng. MD19-2E)
- secondary dilution - dilution of the primary diluted and thermally conditioned sample gas at the outlet of the evaporation tube
- thermoconditioner (TC) - sample heating at 300°C
This sample preparation system fulfills the requirements of the PMP and it was used for all measurements. At steady state operation (SSC, see next section) this system worked with summary dilution factors DF = 100 to 500.
The estimated accuracy of the PN-measurement in the size range of 80-120 nm, with DF = 100, is ±6%. In the tests the gas sample for the NP-analysis was taken from the undiluted exhaust gas at the tailpipe for stationary operation (SMPS) or from the diluted exhaust gas in the CVS tunnel at transient operation (CPC). The schematic of the general sampling setup is represented in Fig. 1.
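To make the role of the two dilution stages explicit, the short sketch below back-calculates the raw tailpipe concentration from a diluted instrument reading; the numerical values are placeholders and the simple multiplication is the generic bookkeeping, not a calibration procedure.

```python
def raw_concentration(measured_per_cm3, primary_df, secondary_df):
    """Undo the two dilution stages: raw = measured * (primary_df * secondary_df)."""
    return measured_per_cm3 * primary_df * secondary_df

# Placeholder reading after an MD19 primary dilution of 20 and a secondary
# dilution of 10, i.e. a summary DF = 200, inside the DF = 100-500 range used here.
measured = 2.5e4   # 1/cm^3 at the instrument
print(f"tailpipe concentration ~ {raw_concentration(measured, 20, 10):.1e} 1/cm^3")
```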
Driving cycles
The vehicles were tested on a chassis dynamometer at constant speeds (SSC) and in the dynamic driving cycles.
The steady state cycle (SSC) consists of 20 min-steps at 95, 45 km/h and idling, performed in the sequence from the highest to the lowest speed.
Fig. 2 shows the steady state cycle (SSC) with the resulting tailpipe temperatures (t_exh) for gasoline vehicle (MPI). This gives the magnitude of the temperatures at the particulate sampling point "tailpipe" during the steady state measurements (SMPS).
The efforts to find a harmonized world-wide driving cycle were successfully concluded with the development of the WLTP (world-wide harmonized light duty test procedure). The WLTC (world-wide light duty test cycle) represents typical driving conditions around the world.
This cycle (Fig. 3) has also been used in this study. It represents different driving conditions: urban, rural, highway and extra-highway. Fig. 3 shows the time-courses and Table 5 summarizes the most important data of these driving cycles.
Steady state operation (SSC)
The consideration of the particle size distributions at steady state operation gives a basic view of the PN-concentrations at the tailpipe and allows some reflections about the nanoparticle production. Nevertheless, this is not a legal measuring procedure and therefore the results are not to be compared with the legal PN limit values.
Fig. 4 shows, as an example, the SMPS particle size distributions (PSD) of all tested vehicles (V1 to V5) at the tailpipe, without GPF, at the same constant speeds and at idling.
At 95 and 45 km/h the maxima of the PSD's show in certain cases particle count concentrations (PC) in the range of 10^6 to 10^7 1/cm^3, which is similar to Diesel engines (without DPF). At idling, the PC values are roughly one order of magnitude lower.
For vehicle , strong fluctuations of the PC-concentration during the period of scanning (over the size range) are visible. During the constant speed operation of this vehicle (at 95 and 45 km/h), periodic fluctuations of the gaseous emissions (CO, HC, NOx) were observed (not represented here), confirming a continuous switching of the operation between lean and rich. This means that this vehicle changes between the stratified or homogeneous (lean) and homogeneous (rich) operating strategies, which also implies the switching of parameters like ignition timing, injection timing, injection quantity and eventually EGR. This can influence the NP-emissions, as demonstrated. The relationships of NP-emissions between different vehicles can vary depending on the operating condition. As an example: vehicle has at 95 km/h the lowest and at 45 km/h and idling the highest particle count concentrations. This is also visible in the summary representation of the integral PN-emissions at all tested constant speeds, Fig. 5.
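The integral PN values summarized in Fig. 5 follow from summing the SMPS size distribution over its size bins; the sketch below shows that bookkeeping for a synthetic lognormal accumulation mode, with all numbers chosen purely for illustration.

```python
import numpy as np

# Synthetic SMPS output: dN/dlogDp on logarithmically spaced diameter bins
Dp = np.logspace(np.log10(10e-9), np.log10(429e-9), 64)   # m, SMPS size range
dlogDp = np.diff(np.log10(Dp)).mean()

# Illustrative accumulation mode: lognormal centred at 70 nm with sigma_g = 1.7
N_tot, cmd, sg = 5e6, 70e-9, 1.7                           # 1/cm^3, m, -
dNdlogDp = (N_tot / (np.sqrt(2 * np.pi) * np.log10(sg))
            * np.exp(-(np.log10(Dp / cmd))**2 / (2 * np.log10(sg)**2)))

PN = np.sum(dNdlogDp * dlogDp)    # integral particle number concentration
print(f"integral PN ~ {PN:.2e} 1/cm^3")
```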
There are different interacting processes during mixture preparation, combustion and gas flow in the exhaust system which sensitively influence the generation of nanoparticle emissions. The following discussion gives some ideas and hypotheses about the reasons for the observed differences of the PN-results between the different vehicles. An important question is the mixture preparation: ideally, the mixture preparation should atomize and evaporate all the fuel used and bring it into the combustion chamber as homogeneously premixed as possible.
For MPI there is usually a portion of fuel deposited on the walls of the intake port, which can, especially at transient operation, arrive in the combustion chamber as liquid non-premixed droplets. A part of this "unprepared" fuel burns heterogeneously and is a source of soot production.
These effects are stronger in DI technology; especially when the liquid fuel reaches the wall and, as is also possible, interacts with the lube oil layer, the production of nanoparticles is particularly increased [20,21].
The chemistry of oil and fuel, their HC-matrix and additive packages have a significant influence on the NP's.
Further factors to consider are: the passage of the aerosol through the exhaust system, the history of the temperature drop, catalysis, chemistry, spontaneous condensation and store/release effects in the exhaust system. All of them ultimately influence what will be measured at the tailpipe.
The processes influencing the NP-production depend on the engine operating conditions. Without doubt, the NP-emissions vary with the operating point and are increased at transient operation.
The measurements of all PSD's at constant speeds were simultaneously performed with two systems SMPS (size range 10-429 nm) and nano-SMPS (size range 2-64 nm).
Fig. 6 shows, as an example, the particle size distributions measured with SMPS and with nSMPS for the higher emitting (vehicle ) and the lower emitting (vehicle ) vehicles at 95 km/h. There are no PC at sizes below 6 nm and the PC in the size range 6 to 10 nm can be considered negligible.
Generally there is a very good agreement between the PSD's measured with the two systems SMPS and nSMPS in the common size range (10-64 nm). Fig. 7 shows an example of scans with and without GPF. It confirms the excellent agreement of the scans from both systems, it also confirms the very good particle count filtration efficiency (PCFE) of the tested GPF and it particularly shows the total elimination of nanoparticles with sizes below 30 nm.
The opinion of the authors, resulting from these tests as well as from previous experience with GDI-vehicles [19], is that additional research or discussions about NP's with sub-10 nm sizes, and further restrictions of the legislation for sub-23 nm sizes, are not necessary. Five variants of "vehicle - GPF" were tested. The GPF's were randomly obtained for the tests and were mounted in the exhaust systems of the cars approximately 60 cm downstream of the TWC. They were neither developed nor optimized for this application. The specific data of the GPF's are not available.
Fig. 8 summarizes the filtration efficiencies (PCFE) obtained at the constant speeds. The PCFE-values are between 91% and 100%. GPF3 and GPF4 show a clearly lower filtration efficiency than GPF1 and GPF2. This result indicates that the filtration efficiencies can be adapted by optimizing the substrate to fulfil different objectives or requirements.
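The PCFE values quoted above are commonly obtained by comparing the PN emission measured with and without the filter; the sketch below assumes that definition and uses placeholder numbers.

```python
def pcfe(pn_without_gpf, pn_with_gpf):
    """Particle count filtration efficiency in percent (assumed common definition)."""
    return 100.0 * (1.0 - pn_with_gpf / pn_without_gpf)

# Placeholder values at one operating point, in 1/km
print(f"PCFE = {pcfe(5.0e12, 2.0e11):.1f} %")   # -> 96.0 %
```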
In comparison, the quality requirements for DPF retrofitting are: PCFE ≥ 97% for the Swiss Confederation OAPC and PCFE ≥ 99% for the VERT Association. This is in the sense of the "best available technology for health protection".
Transient operation
The results at transient operation are obtained with a CPC (according to PMP) at the end of the CVS-dilution tunnel. These results can be compared with the legal PN limit values. All mechanisms influencing the NP-production in the combustion chambers and in the exhaust system are variable at transient operation and mostly overlap each other. A known and accepted fact is that the peak values of the NP-emissions coincide with the acceleration or deceleration events in the driving cycle. Fig. 9 summarizes the average PN emissions in WLTC cold and hot. The emission level of the "hot" cycles is generally lower than the emission level of the "cold" cycles. Vehicles which are equipped with a GPF have, as expected, lower PN-emissions. Vehicle is a Diesel car with an original DPF of very good quality; it sets a quality level which is only roughly attained by the vehicle with GPF1.
From all variants with GPF's the GPF3 and GPF4 have the highest emissions.These two filters also have the lowest average filtration efficiencies, Fig. 10.
Finally, it can be concluded that the PN-emission level of the investigated GDI cars in WLTC without GPF is in the same range of magnitude very near to the actual limit value of 6.0 × 10 12 1/km.With the GPF's with better filtra-tion quality it is possible to lower the emissions below the future limit value of 6.0 × 10 11 1/km.Fig. 11 gives an example of the SMPS-particle size distributions (PSD) with the MPI vehicles at 95 km/h and at idling.The indicated particle counts concentrations are mostly in the range of ambient background level (10 2 to 10 4 1/cm 3 ).Nevertheless, there are some exceptions such as a clearly higher PN-emission with vehicle at 95 km/h and a higher PN-emission with vehicle at idling.At the highest speed (95 km/h) vehicle causes the particle count concentrations, which are up to 3 ranges of magnitude higher, then with the other vehicles.
Fig. 12 compares the PSDs measured with SMPS and with nSMPS for the highest-emitting vehicle and for the lowest-emitting vehicle at 95 km/h.
For vehicle there is a very good correlation between the results obtained with nSMPS and with SMPS in the common size range (10-64 nm). The particle numbers in the size spectrum below 10 nm are zero or negligible.
For vehicle there is no clearly pronounced size distribution, but random indications of particle counts at the ambient level. There is also very good agreement between the two measuring systems.
Transient operation
The legal emission limits are established for transient operation, which causes higher PN values. The particle counts are measured as the sum over all particle sizes in the diluted exhaust in the CVS tunnel, by means of a CPC.
Fig. 13 summarizes the integral PN results of the four MPI vehicles in all transient cycles. It can be noted that the ranking of the emission levels of the vehicles is the same in all driving cycles.
In the driving cycles with cold start the PN emissions are higher than with warm start. One of the vehicles would not pass the present limit value of 6 × 10^12 1/km, and three of the vehicles would not pass the future limit value of 6 × 10^11 1/km.
The following points have to be mentioned:
- in the cycle RTS95 vehicle had to be accelerated at full load to follow the cycle trace of the driving conductor,
- at higher speeds and accelerations there is a particularly higher PN emission with vehicle ,
- in the ADAC130 high-speed cycle none of the vehicles could follow the cycle; all vehicles were fully accelerated; this caused very high CO emissions - in one case, with vehicle , the CO in the bag went over range,
- vehicle , with two injection valves per cylinder, yielded the lowest PN emissions - it can be supposed that a better mixture preparation and a lower portion of liquid fuel film deposited in the intake channels contributed much to this improvement.
Fig. 14 impressively illustrates the residues on the PM measuring filters of two vehicles in all driving cycles. There is a high carbonaceous fraction in the particle emission of the high-emitting vehicle . RTS95, which has higher accelerations than WLTC, produces with cold start the highest amount of black carbon emission. Comparing all four vehicles in WLTC cold, an increase of the blackness of the filter residue in the sequence V10 < V7 < V9 < V8 can be noted. This is the same sequence as for the increase of the PN emissions.
Conclusions
The most important statements of this work can be summarized as follows:
- The PN emission level of the investigated GDI cars in WLTC without GPF is in the same range of magnitude as, and very near to, the actual limit value of 6.0 × 10^12 1/km.
- With GPFs of better filtration quality it is possible to lower the emissions below the future limit value of 6.0 × 10^11 1/km.
- The filtration efficiency of a GPF can attain 99%, but it can also be optimized to lower values - in this respect the requirement of "best available technology for health protection" should be considered.
- The present work demonstrated that modern SI vehicles with MPI also emit a considerable amount of PN and PM. In an extreme case the PN emission was in the range of a Diesel car (without DPF).
- The relationships of NP emissions between different vehicles can vary depending on the operating conditions.
- Generally, there is a very good agreement between the PSDs measured with the two systems, SMPS and nSMPS, in the common size range (10-64 nm).
- For the investigated vehicles with gasoline DI and MPI, there is no increase of PCs in the nuclei mode (below 10 nm) at the measured constant speeds; the particle counts below 10 nm are negligible.
- Due to the electronic regulation of the engine, the NP emissions of some vehicles (here vehicle ) fluctuate periodically.
The present paper focuses solely on solid nanoparticle emissions. The tested GDI cars, except vehicle , used a homogeneous combustion concept, and all of them (GDI & MPI) represented a modern TWC technology. Accordingly, the emissions of the gaseous legislated components (CO, HC, NOx) were very low.
The present research on MPI vehicles showed some tendencies of significantly increased PN emissions. With this knowledge, and taking into consideration the immense number of MPI vehicles worldwide, legal PN limits for MPI should be progressed quickly.
The present high filtration quality of Diesel vehicles (DPF) sets high requirements on the filtration quality in the gasoline sector (GPF).
Fig. 1. Set-up of exhaust gas sampling for PN analysis. For the measurements of the summary PN at transient operation a CPC TSI 3790 (PMP conform) was used. In the tests, the gas sample for the NP analysis was taken from the undiluted exhaust gas at the tailpipe for stationary operation (SMPS) or from the diluted exhaust gas in the CVS tunnel at transient operation (CPC). The schematic of the general sampling set-up is represented in Fig. 1.
Fig. 2. Steady State Cycle (SSC) and tailpipe temperature of vehicle 7 (MPI). In the test program with MPI vehicles the cycles RTS 95 and ADAC 130 were used. The RTS95 is a short chassis dynamometer test cycle representing aggressive driving, used for development purposes as a short procedure replacing the WLTC. The ADAC 130 cycle represents highway driving.
Fig. 9. Comparison of PN emissions in WLTC cold and hot for different vehicles.
Fig. 10. PCFEs of the investigated GPFs in WLTC hot.
Fig. 11. SMPS particle size distributions at constant speeds with different MPI vehicles.
Fig. 14. PM results of the lowest- and highest-emitting vehicles in different transient cycles.
Table 1a. Data of the investigated cars.
Table 2. Data of the investigated MPI vehicles.
Table 3. Data of the gasoline.
Table 4. Data of the utilized lube oils (* analysis, others: specifications).
Penile secondary lesions: a rare entity detected by PET/CT
While penile metastases are rare, PET/CT has facilitated their detection. We aimed to describe penile secondary lesions (PSL) identified by PET/CT. We reviewed 18F-FDG and Ga68-PSMA PET/CT records performed in a single center during May 2012-March 2020, for PSL. Of 16,774 18F-FDG and 1,963 Ga68-PSMA-PET scans, PSL were found in 24 (0.13%) men with a mean age of 74. PSMA detected PSL in 12 with prostate cancer; FDG identified PSL in 4 with lymphoma, 3 with colorectal cancer, 2 with lung cancer, and one each with bladder cancer, pelvic sarcoma, and leukemia. Mean SUVmax of PSL was 7.9 ± 4.2, with focal uptake in 13 (54%). Mean lesion size was 16.5 ± 6.8 mm; 8 at the penile root, 4 along the shaft, and 1 at the glans. CT detected loss of the penile texture in 15 (63%). PSL were observed only during relapse or follow-up of disseminated disease. Among those with prostate cancer, PSA varied widely. Fifteen (62.5%) died, at a mean of 13.3 ± 15.9 months following PSL demonstration; nine had non-prostate malignancies. PET/CT identified and characterized PSL in a fraction of cancer patients, most commonly those with prostate cancer. PSL universally surfaced in advanced disease and signaled high mortality, especially in non-prostate cancers.
penile lesion. Imaging data were retrieved from the picture archive and communication system (PACS, Carestream Health 11.0, Rochester, NY), and clinical data were obtained from the computerized medical records at our hospital. Clinical data, including medical history, laboratory work, and biopsy results were reviewed. Study inclusion criteria were: (1) men aged 18 years and older; (2) a history of a malignancy with a primary site that was non-penile; and (3) findings consistent with PSL on PET/CT. Exclusion criteria included the presence of abnormal findings in the genital area on PET/CT. These included urinary retention in the penile urethra, without clear demonstration of a suspicious penile lesion, and evidence of radiotracer contamination in the penile area.
This retrospective study evaluated consecutive patients. During the designated period, a total of 48,731 18-F-FDG PET/CT scans were performed, mainly for assessment of known or suspected oncological diseases; of them, 22,875 scans were performed on 16,774 unique male patients. From 2015 onward, PET/CT using Ga 68-PSMA was performed in men with diagnosed or suspected prostate cancer or with biochemical recurrence. In total, 2,087 Ga 68-PSMA scans were carried out on 1,963 men. From a total of 18,737 unique PET/CT studies, the reports of 57 men mentioned the word "penis" or "penile", and 24 (0.13%) met the study eligibility criteria. Of the 1963 men with Ga 68-PSMA studies, 12 (0.61%) met the eligibility criteria. The remaining were excluded due to the absence of findings compatible with PSL.
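The record selection described above amounts to a keyword search over report text followed by application of the eligibility criteria. A minimal sketch of such a query is shown below; the field names (`report_text`, `sex`, `age`, `primary_site`) and the tabular data structure are assumptions for illustration, not the actual PACS or medical-record interface used at the institution.

```python
import pandas as pd

def select_candidates(reports: pd.DataFrame) -> pd.DataFrame:
    """Return studies whose report mentions the penis and that meet the basic
    eligibility criteria (male, >= 18 y, non-penile primary malignancy).

    Expected (hypothetical) columns: report_text, sex, age, primary_site.
    Flagged studies still require manual image review before final inclusion.
    """
    keyword = reports["report_text"].str.contains(r"\bpenis\b|\bpenile\b",
                                                  case=False, na=False)
    eligible = (reports["sex"] == "M") & (reports["age"] >= 18) \
               & (reports["primary_site"].str.lower() != "penis")
    return reports[keyword & eligible]

# usage with a toy table
df = pd.DataFrame({
    "report_text": ["focal uptake at the penile root", "no abnormal uptake"],
    "sex": ["M", "M"], "age": [74, 68],
    "primary_site": ["prostate", "lung"],
})
print(select_candidates(df))
```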
PET/CT image acquisition. PET/CT scanning was performed using a combined PET-CT protocol with a 16-detector-row helical CT scanner (Gemini GXL, Phillips Healthcare). This scanner enables simultaneous acquisition of up to 45 transaxial PET images, with interslice spacing of 5 mm in one bed position; and provides an image from the vertex to the thigh in about 10 bed positions. The transaxial fields of view and pixel sizes of the PET images reconstructed for fusion were 57.6 cm and 4 mm, respectively, with a matrix size of 144 × 144 mm. The CT component was performed in accordance with the hospital's standard protocol, with routine use of both oral and intravenous contrast media unless either was contraindicated. The following technical parameters were used for CT imaging: pitch 0.8, gantry rotation speed 0.5, 120 kVp, 250 mAs, 3-mm slice thickness, and specific breath-holding instructions [18][19][20] .
Depending on the type of PET/CT scan (with FDG or PSMA), the patient received an intravenous injection of 370 MBq 18F-FDG after 4-6 h of fasting, or an injection of 148 MBq Ga 68-PSMA in the absence of a prerequisite fast. About 60 min later, CT images were obtained from the vertex to the mid-thigh for about 32 s. A contrast-enhanced CT scan was captured 60 s after injection of 2 mL/kg of non-ionic contrast material (Omnipaque 370, GE Healthcare). An emission PET scan followed in 3D acquisition mode for the same axial image range, 2.0-2.5 min per bed position. The diagnostic CT images were used for fusion with the PET data and to produce a map for attenuation correction. PET images were generated with CT attenuation correction utilizing a line-of-response protocol, and the reconstructed images were constructed for review (EWB, Extended Brilliance Workstation, Philips Medical Systems, Cleveland OH, USA) 18-20.
Image interpretation. All available images were interpreted by experienced specialists in nuclear medicine and radiology, and re-reviewed by one of the study co-authors with 20 years' experience and dual certification in radiology and nuclear medicine. Readers were not blinded to clinical information. Consensus was achieved on interpretation of all of the images.
Ga 68-PSMA and 18F-FDG activity were quantified by calculating a maximum standardized uptake value (SUVmax). This was done by manually generating a region of interest over the sites of abnormally increased radioactive material activity. Increased Ga 68-PSMA/ 18F-FDG uptake that was not explained by the normal bio-distribution, or uptake that was higher than the physiological uptake in the surrounding tissue was considered pathological/positive. Positive PET findings were analyzed with respect to their intensity of uptake (SUV max) and pattern of uptake enhancement (focal or diffuse). Cases with more than 2 focal penile findings were considered diffuse.
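For readers unfamiliar with SUVmax, the sketch below shows the standard body-weight-normalized calculation over a region of interest. The decay-correction convention, function names, and example numbers are generic assumptions; the clinical workstations used in the study perform this computation internally.

```python
import numpy as np

def suv_max(roi_activity_kbq_per_ml, injected_dose_mbq, body_weight_kg,
            uptake_time_min=60.0, half_life_min=109.8):
    """Body-weight-normalized SUVmax over a region of interest.

    roi_activity_kbq_per_ml: voxel activity concentrations in the ROI (kBq/mL);
    injected_dose_mbq: administered activity (MBq);
    half_life_min defaults to 18F (~109.8 min); use ~67.7 min for 68Ga.
    """
    # decay-correct the injected dose to the imaging time
    dose_at_scan_mbq = injected_dose_mbq * 0.5 ** (uptake_time_min / half_life_min)
    # SUV = activity concentration / (dose / body weight); with a tissue
    # density of ~1 g/mL the units cancel, since MBq/kg == kBq/g
    suv = np.asarray(roi_activity_kbq_per_ml) / (dose_at_scan_mbq / body_weight_kg)
    return float(suv.max())

# illustrative numbers only, not patient data
print(suv_max([3.1, 5.4, 7.9], injected_dose_mbq=370, body_weight_kg=80))
```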
CT images were examined for abnormalities of the penis that corresponded to PET findings, and classified as loss of the typically smooth penile texture. The following descriptions were used: a focal hyperdense soft tissue mass, a focal hypodense lesion, a diffuse heterogeneous soft tissue mass infiltration along the penis, or the absence of detectable CT findings. For focal lesions, the diameter length was measured and the location within the penis was evaluated, and labeled as: at the root, along the shaft, or at the glans.
We also examined the involvement of other structures within the pelvis (prostate, prostatic bed after prostatectomy, seminal vesicles, bladder, pelvic lymph nodes) and organs outside the pelvis.
We analyzed penile findings on ultrasound and MRI when available. For men with more than one PET/CT scan, we assessed the dynamic changes in penile lesions with regard to size and the persistence of increased uptake in all the consecutive imaging studies.
Clinical data extraction. Since prostate cancer was the most common primary cancer in this study, we assessed specifically the characteristics of the men who had this cancer. For this subgroup, we examined the history of prostate resection including the type of operation, the method of approach, the values of prostate specific antigen (PSA), and Gleason's score of the primary tumor lesion.
Statistical analysis. Data are represented as means ± standard deviations (SDs) for continuous variables and as percentages for categorical parameters. The Chi-square test, Fisher's exact test, and the Student's t-test were used for statistical comparison as appropriate. The analysis was performed with the use of SPSS version 21.0 (SPSS, IBM, USA).
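The tests named above are also available in standard open-source libraries; a minimal scipy.stats sketch is given below. The input tables and arrays are illustrative placeholders, not the study data.

```python
import numpy as np
from scipy import stats

# 2x2 table, e.g. (prostate vs non-prostate primary) x (focal vs diffuse uptake)
table = np.array([[5, 7],
                  [9, 3]])                       # illustrative counts only
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
odds, p_fisher = stats.fisher_exact(table)       # preferred for small counts

# two-sample t-test, e.g. SUVmax with vs without prior prostate resection
suv_group_a = [4.1, 6.3, 8.0, 5.5]               # illustrative values
suv_group_b = [7.2, 9.1, 6.4, 10.3]
t_stat, p_ttest = stats.ttest_ind(suv_group_a, suv_group_b)

print(p_chi2, p_fisher, p_ttest)
```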
Results
Features of disease. Baseline characteristics. The mean age of the men included in the study was 74 ± 12.1 years (range 49-95). Sixteen men (66.7%) had primary genitourinary and lower gastrointestinal cancers. Prostate was the most common primary site, comprising 12 (50%) cases; their mean age was 78 ± 9.8 years (range 62-95). Of the other malignancies, four (16.7%) were lymphoma, three (12.5%) colorectal cancer (CRC), two (8.3%) lung cancer, one bladder cancer, one pelvic sarcoma, and one leukemia.
Two men with lung cancer and one man with lymphoma also had histories of prostate cancer. One of the patients with prostate cancer had a history of bladder cancer. Primary cancers were determined for this study based on both review of the notes written by the treating oncologists and on the PET/CT appearance of the penile lesions compared with other malignant appearing lesions.
Morphology of prostate tumors and Gleason's score. Histological classification was available for 11 of the 12 men with primary prostate cancer. These 11 patients showed cell histology characteristic of adenocarcinoma of the prostate. Further subtyping was available for only 3 men, two were categorized as having acinar histology and the third as ductal. Three of the tissue biopsies with adenocarcinomas showed perineural invasion. For one man whose prostate tissue revealed adenocarcinoma, a biopsy of an inguinal lymph node (LN) revealed two cell populations. One of these was consistent with carcinoma with neuroendocrine differentiation, and the other with morphology favoring prostatic origin. For the patient with prostate cancer and lymphoma, and for the patient with both prostate and lung cancers, biopsies showed adenocarcinoma without further subtyping listed. Gleason's scores, as assessed from prostate biopsies, were available for 10 men, and ranged from 6 to 10, with a mean of 7.6 ± 1.2.
History of surgical prostate removal. Of the 12 men with primary prostate cancer, 6 (50%) underwent prostatectomy. Three (25%) underwent transurethral resection of the prostate (TURP). Two men underwent robot-assisted laparoscopic radical prostatectomy. The last patient underwent radical prostatectomy for which the exact approach was not available.
In addition, the patient with lymphoma and a history of prostate cancer had undergone radical retropubic prostatectomy. The patient with bladder cancer, whose primary tumor invaded into the prostate, underwent suprapubic prostatectomy followed by TURP. Finally, one of the patients with lung cancer and a history of prostate cancer had undergone TURP.
Morphology of non-prostatic tumors.
In the patient with primary bladder cancer, cell histology from the bladder biopsy was consistent with high grade urothelial carcinoma (UC) with necrotic features. The patient with primary prostate cancer who also had a history of bladder cancer had high grade UC with squamous differentiation. In the two men with lung cancers, the cell histology of one was consistent with non-small cell lung cancer (NSCLC) undifferentiated subtype, while the other patient had adenocarcinoma of the lung. All 4 patients with lymphoma had diffuse large B cell lymphoma (DLBCL). The 3 men with CRC had adenocarcinoma. The man with pelvic sarcoma had a high grade leiomyosarcoma. The patient with leukemia had acute myeloid leukemia (AML).
A penile biopsy was performed in only one man in our cohort, one of the patients with CRC. The tissue histology revealed cellular features of colorectal adenocarcinoma.
Findings on PET/CT. Penile lesions on PET.
SUVmax of PSL. For the entire cohort, the mean SUVmax of the penile lesions was 7.9 ± 4.2, with a range of 2.5-18.2. For one patient with prostate cancer who was being treated with hormonal therapy, SUVmax at the penis was too low to be measured. The mean SUVmax for the remaining 11 men with prostate cancer was 6.8 ± 3.8, with a range of 2.5 to 17.2 (Table 1). No correlation was detected between cancer histology and the degree of tracer uptake (p = 0.240). In the subgroup of patients with prostate cancer, there was no significant difference in SUVmax between those who had undergone prostate resection and those who had not (p = 0.300).
The pattern of uptake of PSL. For the entire cohort, uptake along the penile lesion was focal in 13 (54.2%) and diffuse in 11. Among the 12 men with prostate cancer, uptake in the penis was diffuse in 7 (58.3%) (Fig. 1) and focal in 5. Among the men with non-prostate primary cancer, enhancement was diffuse in one of the 3 with CRC, and in 3 of the 4 with DLBCL (Fig. 2). Increased uptake was focal in the remaining 2 patients with CRC; in one with DLBCL; in the 2 with NSCLC; and in the patients with UC, pelvic leiomyosarcoma and AML (Figs. 3, 4, and 5). No correlation was observed between cancer type and the pattern of tracer uptake (p = 0.680), or between the occurrence of previous prostatectomy in prostate cancer patients and the tracer pattern (p = 0.567).
The location and size of focal PSL. Among the men with primary cancers other than the prostate, 8 had penile lesions with focal uptake. In the 2 men with CRC and focal enhancement, the lesion was present at the penile root. Similarly, for those with UC, AML, and pelvic leiomyosarcoma, penile enhancement was seen at the root. In the 2 men with NSCLC, penile enhancement was localized along the shaft. In the patient with DLBCL and focal tracer uptake, the penile lesion was focused at the glans.
Among the 5 men with prostate cancer and focal enhancement, the lesion was visible at the penile root in 3 (Fig. 6). In the other 2, focal enhancement was present along the penile shaft. In the 14 men with focal penile lesions, diameter length ranged from 10 to 30 mm, with a mean of 16.5 ± 6.8 mm. In the 5 men with prostate cancer and focal lesions, the diameter length ranged from 10 to 17 mm, with a mean of 12.2 ± 2.9 mm.
Penile changes on CT. In 15 (62.5%) of the men, CT imaging showed loss of the typical penile texture. Of the 12 with prostate cancer, nine (75%) had corresponding CT findings. Penile changes on CT were also visible in 2 of the 3 men with CRC, one of 2 with NSCLC, one of 3 with DLBCL, the man with UC, and the man with pelvic leiomyosarcoma (Fig. 7). For the 9 remaining patients, CT did not demonstrate clear corresponding findings.
[Figure caption fragment: (B) A 49-year-old man with metastatic adenocarcinoma of the lung (patient #20, Table 1).]
Additional metastatic sites. In 22 (91.7%) men, PET/CT revealed lesions with enhanced uptake, suspicious for metastases, in sites other than the penis or pelvic LNs (Table 2). Bone lesions were most common, present in 13 (54.2%) men. The second most common site for enhancement was the liver, in 6 (23.1%) men, followed by the bladder with discrete supraphysiological uptake in 5 (20.8%) men.
PET/CT detected metastatic lesions, outside the penis and LNs, in 11 (91.7%) of the 12 men with prostate cancer. Bony lesions were also the most frequent site of secondary lesions among the men with prostate cancer, found in 8 (66.7%), followed by the liver and bladder, each identified in 4 (33.3%) patients.
The involvement of neighboring soft tissue structures within the pelvis was noted in 4 men, each with non-prostate cancers. Increased uptake in the seminal vesicles and testicles was observed in one man with DLBCL and in the patient with pelvic leiomyosarcoma, in the testicles of another patient with DLBCL, and in the seminal vesicles and prostate in the man with UC. Previous PET/CT studies available in our system had not revealed PSL. The duration of time from the initial diagnosis of primary malignancy to the finding of PSL on PET/CT was available for 23 men. Among these men, and among the subset of 11 with primary prostate cancer for whom this information was accessible, the length of time from diagnosis to PSL identification ranged from less than 1 year to 16 years, with a median of 4 years. PSL were not detected in any of the men during initial staging. In 19 men, the first observed penile findings were detected on PET/CT scans performed during relapse of the primary cancer. In the remaining 5, the scans were done as part of monitoring of known advanced disease over the course of the first year following diagnosis: three men with prostate cancer, one with CRC, and one with DLBCL.
Imaging follow-up. For 11 (48.5%) men, follow-up PET/CT studies were performed between one and 40 months following the initial study that showed PSL. Four were done in men with prostate cancer using PSMA and 7 were FDG scans from men with other primary malignancies. In the 4 with prostate cancer, PSL were more pronounced in 3; while in one, the lesion decreased in size in parallel with treatment. Among the others with repeat scans, three had CRC; these showed more pronounced uptake in one, a widening of the lesion in another, and without apparent change in the third. PSL appeared stable on follow-up in the patient with AML. In the man with DLBCL and the man with NSCLC, the PSL enlarged. In the patient with pelvic leiomyosarcoma, PSL decreased in size with treatment.
The time elapsed from the first finding of penile lesions on PET/CT to death. The occurrence of death during the study period was known to us for 15 (62.5%) of the men in our cohort. Chart review revealed that each of these patients had died as a result of complications related to their malignancies. Among those who died, the survival time from the first scan that demonstrated PSL ranged from one to 48 months; the median was 8 months. In men with prostate cancer, proof of death during the study period was available for 6 (50%), with survival ranging from one to 48 months and a median of 11.5 months. The other 9 who died included the 2 with NSCLC, the man with UC, three of those with DLBCL, two with CRC, and the patient with AML.
Discussion
This study reviewed all PSL identified by the PET/CT scans performed in our medical center over an 8-year period. To the best of our knowledge, this is the first case series to describe PET/CT findings of PSL in a relatively large group of patients with diverse primary malignancies. We found evidence of PSL in 0.13% of all male patients who had PET/CT scans performed for evaluation of known or suspected oncological diseases, and in 0.61% of those with suspected or confirmed prostate cancer who had undergone Ga 68-PSMA studies. While the rate of PSL among all cancers is unknown, the reported incidence among men with prostate cancer ranges from 0.1%, as described in a recently published paper by an Australian group that identified 5 men with prostate cancer and PSMA-avid PSL, to 0.5%, based on an older autopsy study 21,22. Cancer of the prostate was the primary cancer for half our cohort; together with cancers of the colorectum and bladder, these comprised two-thirds of the primary cancers. However, we also detected PSL in patients with NSCLC, DLBCL, AML, and pelvic leiomyosarcoma. PSL have previously been recognized in lung cancer, lymphomas, AML, and pelvic leiomyosarcoma, though more than 90% originate in cancers of the genitourinary and gastrointestinal tracts [23][24][25][26]. Our 4 patients with lymphoma had DLBCL, corroborating reports that while found in both T- and B-cell lymphomas, PSL appear most commonly in DLBCL 27,28.
On PET, we observed a mean SUVmax of close to 8 in the uptake of PSL, with no correlation between cancer histology and the degree of tracer uptake. The pattern of the PSL was focal in just over half, and diffuse in the rest 2,4,21,29. We found no association between a history of prostatectomy and the degree or pattern of uptake. Seeding of cancer cells into the urethra has been theorized to occur during TURP, though some have postulated that this is highly unlikely. To the best of our awareness, no associations have been found between the method of prostate resection and penile metastases 5,30. Among our patients with focal uptake, PSL localized most commonly to the root in three fifths, to the shaft in almost one third, and to the glans in only one patient with DLBCL. PSL have previously been demonstrated at the root and shaft at equivalent rates, and at the glans in one quarter of the cases 9,25,31. Uptake along the shaft in our 2 patients with NSCLC strongly implied vascular spread. In the 4 men with involvement of adjacent pelvic soft tissue structures, who all had penile uptake that was either diffuse or localized to the base, PSL likely resulted from local invasion. In the rest of our patients, however, such inferences were more difficult to draw. While the route of dissemination is generally uncertain, enhancement at the shaft or glans has been cautiously suggested to imply cancer invasion by venous spread, and enhancement at the root is more likely to imply local invasion 6.
CT contributed to the characterization of 63% of the PSL detected on PET, as exhibited by hyperdense focal soft tissue mass, diffuse soft tissue infiltration, or hypodensity. For the remaining PSL detected in PET, we did not find corresponding lesions on CT, and specifically not in 4 of our 5 patients with lymphomas and leukemia. While the nonappearance of penile changes on CT may be explained by the absence of contrast media during most of these exams, it underscores the value of PET in revealing PSL. Other studies have also reported the lack of penile findings on the CT portion of PET/CT 32 . PSL persisted in 82% of our patients who had follow up scans, and decreased in size in only 2 patients, in concurrence with treatment. This corroborates the evidence that lesions reflect PSL. Likewise, lesion persistence on repeat PET/CT in the aforementioned paper also demonstrated its consistency with PSL 32 .
Seventy percent of PSL in the current study were detected in patients in their 7th and 8th decades, and only one patient was younger than 50 years. Similarly, the mean age of PSL presentation in other reviews ranges between 60 and 80 years 5 . PSL were identified early in the disease course in some of our patients with extensive disease at presentation, and up to 16 years following diagnosis in the context of relapsed cancer. In more than 90% of our patients, extra-penile metastases were evident on PET/CT, illustrating the advanced nature of the disease. Prior reviews have illustrated that penile lesions mostly emerge years into the course of disease, though in rare instances, they present very early. Moreover, though PSL generally emerge in the setting of disseminated disease, they have also been described in the absence of other metastases 5,33 . In addition, we found PSL in patients with a wide range of PSA values, as has been previously demonstrated 34 .
We observed a mean survival rate of 13.3 months, and more than half died within 6 months following PET/ CT detection of PSL. The median survival was 3.5 months longer in those with prostate cancer than with other malignancies. Likewise, others reported a mean survival of 14 months in those with PSL, with few surviving more than 2 years, and with poorer prognosis in those with cancers of non-urological origin 3,15 . www.nature.com/scientificreports/ Our study has several limitations, including its retrospective nature and restricted sample size, and its being conducted at a single institution. PSMA was only introduced at the midpoint of the study, yet it appears to be a much more sensitive tool than FDG for the detection of metastatic lesions in men with prostate cancer, particularly at the penis. Furthermore, the prevalence calculated for PSL among cancer patients and prostate cancer patients alone are likely understated because they are based on the total number of FDG and PSMA PET/CT studies for suspected and confirmed malignancies, and we could not distinguish between those with definite cancers and in particular, those with disseminated disease. Finally, a pathological report was available for only one of the penile lesions seen on imaging. Notably, all 24 of our patients had advanced disease and the penile lesions were indicative of metastatic spread. For these reasons, and due to the exceeding rarity of primary penile cancer, there was no clinical need for pathologic correlation at this stage as has been reported elsewhere, and this was decided against by the treating clinicians 35 .
Despite these limitations, we were able to detect pathological PET/CT penile lesions in a series of 24 patients with a variety of malignancies. Even though we could not correlate these findings morphologically, careful characterization of penile findings using the dual modality of PET/CT with clinical correlation and evaluation of follow-up studies strongly support the postulation that the described PET/CT lesions of the penis represent PSL. Larger studies are warranted to assess the clinical implications that emerge from identification of PSL.
Conclusion
In malignancies with avidity for FDG and PSMA, PET/CT enables the detection of PSL. Our study confirms their rarity, though likely understates the true prevalence. Detecting PSL is important because these represent a possible metastatic site in a range of cancers. Moreover, PSL contribute to morbidity and portend a poor prognosis, and appropriate treatment measures must be instituted promptly. Accordingly, PET/CT is a critical instrument for assessment of the male genitalia, especially in men with advanced disease and for whom relapse is a concern.
Diet and Stress Impair Ovarian Function in Mid-life, Increasing Risk of Chronic Diseases of Aging in Primates
Abstract Ovarian dysfunction increases risk for chronic diseases of aging including cardiovascular disease, depression, cognitive impairment, and bone and muscle loss which promote frailty. Psychosocial stress disrupts ovarian function and recent observations suggest that Western diet may also. Determination of causal relationships among diet, psychosocial stress, and ovarian physiology is difficult in humans. Nonhuman primates provide relevant opportunities to investigate diet and psychosocial effects on ovarian physiology and aging because, like humans, they have monthly menstrual cycles and recapitulate many aging-related processes similar to humans. We examined ovarian function in 38 socially housed, middle-aged females fed either a Western or Mediterranean diet for 26 months (~ an 8-year period for humans). During the last 12 months, we examined cycle length, peak progesterone per cycle, and frequency of anovulatory cycles using blood sampling (3/week) and vaginal swabbing (6/week). Repeated measures analysis revealed that like middle-aged women, cycle length increased, and progesterone levels fell over time, suggesting that ovarian dysfunction generally increased in our sample with time. In addition, both Western diet and the stress of low social status reduced progesterone levels, disrupting ovarian function, and increasing risk of chronic diseases of aging. This study demonstrates the additive negative effects of poor diet and psychosocial stress on ovarian physiology in mid-life and lays the groundwork for future investigations to uncover associated metabolic signatures of accelerated aging. The results also suggest that a Mediterranean diet may exert a protective influence against ovarian dysfunction and its pathologic sequelae.
Carolina, United States, 3. Moscow State University, Moscow, Moskva, Russia, 4. UCLA, Los Angeles, California, United States
Heterochronic parabiosis is a powerful rejuvenation model in aging research. Due to limitations in the duration of blood sharing and/or physical attachment, it is currently unclear if parabiosis retards the molecular signatures of aging or affects healthspan/lifespan in the mouse. Here, we describe a long-term heterochronic parabiosis model, which appears to slow down the aging process. We observed a "deceleration" of biological age based on molecular aging biomarkers estimated with DNA methylation clock and RNA-seq signature analysis. The slowing of biological aging was accompanied by systemic amelioration of aging phenotypes. Consistent with these findings, we found that aged mice, which underwent heterochronic parabiosis, had an increased healthspan and lifespan. Overall, our study re-introduces a prolonged parabiosis and detachment model as a novel rejuvenation therapy, suggesting that a systemic reset of biological age in old organisms can be achieved through exposure to a young environment.
DIET AND STRESS IMPAIR OVARIAN FUNCTION IN MID-LIFE, INCREASING RISK OF CHRONIC DISEASES OF AGING IN PRIMATES
Brett Frye, Suzanne Craft, Thomas Register, Susan Appt, Mara Vitolins, Beth Uberseder, Marnie Silverstein-Metzler, and Carol Shively, Wake Forest School of Medicine, Winston Salem, North Carolina, United States
Ovarian dysfunction increases risk for chronic diseases of aging including cardiovascular disease, depression, cognitive impairment, and bone and muscle loss, which promote frailty. Psychosocial stress disrupts ovarian function, and recent observations suggest that Western diet may also. Determination of causal relationships among diet, psychosocial stress, and ovarian physiology is difficult in humans. Nonhuman primates provide relevant opportunities to investigate diet and psychosocial effects on ovarian physiology and aging because, like humans, they have monthly menstrual cycles and recapitulate many aging-related processes similar to humans. We examined ovarian function in 38 socially housed, middle-aged females fed either a Western or Mediterranean diet for 26 months (~ an 8-year period for humans). During the last 12 months, we examined cycle length, peak progesterone per cycle, and frequency of anovulatory cycles using blood sampling (3/week) and vaginal swabbing (6/week). Repeated measures analysis revealed that, like middle-aged women, cycle length increased and progesterone levels fell over time, suggesting that ovarian dysfunction generally increased in our sample with time. In addition, both Western diet and the stress of low social status reduced progesterone levels, disrupting ovarian function and increasing risk of chronic diseases of aging. This study demonstrates the additive negative effects of poor diet and psychosocial stress on ovarian physiology in mid-life and lays the groundwork for future investigations to uncover associated metabolic signatures of accelerated aging. The results also suggest that a Mediterranean diet may exert a protective influence against ovarian dysfunction and its pathologic sequelae.
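The repeated-measures analysis described in this abstract can be approximated with a linear mixed model in which animal identity is a random effect. The sketch below is a generic illustration using statsmodels on synthetic data; the column names and the simple random-intercept structure are assumptions, not the authors' actual model or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical long-format data: one row per animal per month of follow-up
rng = np.random.default_rng(0)
rows = []
for i in range(8):
    diet = "western" if i % 2 else "mediterranean"
    rank = "low" if i < 4 else "high"
    baseline = rng.normal(7.0, 0.5)
    for month in range(1, 13):
        prog = (baseline - 0.10 * month
                - (0.8 if diet == "western" else 0.0)
                - (0.6 if rank == "low" else 0.0)
                + rng.normal(0, 0.4))
        rows.append({"animal": f"a{i}", "diet": diet, "social_rank": rank,
                     "month": month, "peak_progesterone": prog})
df = pd.DataFrame(rows)

# a random intercept per animal accounts for repeated measures on each female
model = smf.mixedlm("peak_progesterone ~ diet + social_rank + month",
                    data=df, groups=df["animal"])
print(model.fit().summary())
```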
EXERCISE DURING CHILDHOOD PROTECTS AGAINST CARDIAC DYSFUNCTION LATER IN LIFE
Danielle Bruns, 1 MacKenzie DeHoff, 1 Aykhan Yusifov, 1 Sydney Polson, 1 Ross Cook, 1 Emily Schmitt, 1 and Kathleen Woulfe, 2 1. University of Wyoming, Laramie, Wyoming, United States, 2. University of Colorado-Denver, Aurora, Colorado, United States
Cardiovascular disease continues to be a major cause of morbidity and mortality, particularly in aging populations. Exercise is amongst the most cardioprotective interventions identified to date, with early-in-life exercise, such as during the juvenile period, potentially imparting even greater cardioprotective outcomes due to the plasticity of the developing heart. To test the hypothesis that juvenile exercise would impart later-in-life cardioprotection, we exercised juvenile male and female mice via voluntary wheel running from 3-5 weeks of age and then exposed them to cardiac stress by isoproterenol (ISO) at 4-6 and 18 months of age, in adulthood and older age, respectively. We compared cardiac function and remodeling to sedentary control animals, sedentary animals who received ISO, and adult and aged mice that exercised for two weeks immediately before ISO exposure. Juvenile mice engaged in voluntary wheel running, with male mice running 1.3 ± 0.8 km and female mice 2.8 ± 1.0 km a day. Echocardiography suggested that these juvenile animals underwent running-induced cardiac remodeling, as evidenced by higher ejection fraction and stroke volume compared to sedentary controls. Exercise in the juvenile period attenuated ISO-induced cardiac hypertrophy and remodeling later in life compared to sedentary animals and those that exercised immediately before ISO administration. The mechanisms by which early versus late exercise is protective in adult and aged mice are under investigation. Further ongoing work will identify the adaptations induced by exercise in the juvenile heart that may help improve cardiac aging.
EXERCISE-INDUCED TRANSCRIPTIONAL CHANGES IN AGED SKELETAL MUSCLE
Yori Endo, 1 Yuteng Zhang, 2 Shayan Olumi, 3 Mehran Karvar, 2 and Indranil Sinha, 4 1. Harvard Medical School, Boston, Massachusetts, United States, 2. Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States, 3. Northwestern University, Evanston, Illinois, United States, 4. Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
Exercise is beneficial for physical function across all ages. However, with aging the response to exercise shifts away from anabolism, resulting in limited gains in muscle strength and endurance. These changes likely reflect age-related alterations in the transcriptional response underlying the muscular adaptation to exercise. The exact changes in gene expression accompanying exercise, however, are largely unknown, and elucidating them is of great clinical interest for optimizing exercise-based therapies for sarcopenia. In order to characterize the exercise-induced transcriptomic changes in aged muscle, paired-end RNA sequencing was performed on the rRNA-depleted total RNA extracted from the gastrocnemius muscles of 24-month-old mice after 8 weeks of regimented exercise (exercise group) or sedentary activities (sedentary group). Differential gene expression analysis revealed
Higher spin currents in the critical O(N) vector model at 1/N^2
We calculate the anomalous dimensions of higher-spin singlet currents in the critical O(N) vector model at order 1/N^2. The results are shown to be in agreement with the four-loop perturbative computation in φ^4 theory in 4 − 2ε dimensions. It is known that the order-1/N anomalous dimensions of the higher-spin currents happen to be the same in the Gross-Neveu and the critical vector model. On the contrary, the order-1/N^2 corrections are different. The results can also be interpreted as a prediction for the two-loop computation in the dual higher-spin gravity.
that, from the CFT point of view, correspond to the anomalous dimensions γ_s of the higher-spin currents,
m_s^2 = m_0^2(s) + δm_s^2 ,   m_0^2(s) = (d + s − 2)(s − 2) − s ,   δm_s^2 = γ_s (d − 4 + 2s + γ_s) .   (1.1)
Recently, the order-1/N anomalous dimensions have been computed in all the four basic Chern-Simons matter theories [30]. The result is that they are given by two functions of spin, one of them being γ_s above, times simple factors that depend on the parity-violating parameter. It is interesting that the two spin-dependent functions were found to be the same for all four theories, a particular case being (1.2). In [31] the order-1/N^2 anomalous dimensions were computed for the Gross-Neveu model. The results of the present paper reveal that the order-1/N^2 anomalous dimensions are different in the critical vector and the Gross-Neveu models. It would be interesting to extend the results to Chern-Simons matter theories. The paper is organized as follows. In Section 2 we describe the model and review a technique for the calculation of critical exponents. In Section 3 we discuss the renormalization of higher-spin currents and present our results for the anomalous dimensions of these currents at order 1/N^2 in arbitrary dimension d. The anomalous dimensions in d = 3 are discussed in Section 4. The details of the calculations and some numerical results are collected in several Appendices.
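The mass formula (1.1) gives a direct dictionary between a boundary anomalous dimension γ_s and the shift of the bulk higher-spin mass. The snippet below simply evaluates that relation; the sample value of γ_s is a placeholder, not one of the results derived later in the paper.

```python
def hs_mass_squared(d, s, gamma_s=0.0):
    """AdS mass squared of a spin-s field dual to a current with anomalous
    dimension gamma_s, following Eq. (1.1):
        m_s^2      = m_0^2(s) + delta m_s^2,
        m_0^2(s)   = (d + s - 2)(s - 2) - s,
        delta m_s^2 = gamma_s * (d - 4 + 2*s + gamma_s).
    """
    m0_sq = (d + s - 2) * (s - 2) - s
    delta_sq = gamma_s * (d - 4 + 2 * s + gamma_s)
    return m0_sq + delta_sq

# conserved spin-2 current in d = 3 (gamma_s = 0) gives the unshifted m_0^2(2);
# the s = 4 value uses an illustrative, made-up anomalous dimension
print(hs_mass_squared(3, 2))
print(hs_mass_squared(3, 4, gamma_s=1e-3))
```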
Critical O(N) model
The O(N)-invariant ϕ^4 model (where ϕ is an N-component real field), with u = g/16π^2, is critically equivalent to the nonlinear σ-model; see [1,12] for a review. The latter describes a system of two interacting fields - the basic field ϕ and the "auxiliary" field σ - with the action (2.3). The partition function is given by the path integral (2.4). The 1/N expansion for this model is constructed as follows [12,34]. One represents the action (2.3) in the form S = S_0 + S_int, etc. Thus the kernel K is the inverse propagator of the σ field. It is fixed by the condition that the term σKσ in S_int cancels the LO ϕ-loop insertions into the σ lines. Since D_σ = K^{-1} ~ 1/N, one gets a systematic 1/N expansion for (2.4). However, despite the fact that one considers the theory in non-integer dimensions, the loop diagrams are divergent and the theory has to be regularized. The most convenient way to do this is to modify the kernel K in the free part (S_0) of the action [34]. The function C(∆) is arbitrary except that it has to satisfy the condition C(0) = 1. Different choices of C(∆) result in a finite renormalization of the Green functions but do not affect the critical exponents. We fix the function C(∆) by the requirement that the σ-field propagator takes a particular form. The divergences in the diagrams arise as poles in ∆ and are removed by the R operation. From now on we will assume the MS scheme, i.e. the Z factors are series in 1/∆, Z = 1 + Σ_{k≥1} c_k(1/N)/∆^k. The renormalized action takes the form given in [34]. Note, however, that the renormalization is not multiplicative, i.e. S_R(ϕ, σ) ≠ S(ϕ_0, σ_0). This means that the knowledge of the renormalization factors is not sufficient for determining the critical exponents [34,35]. Nevertheless it was shown in [24] that, to 1/N^2 accuracy, the anomalous dimensions can be expressed via the corresponding renormalization factors in a simple way. The recipe is the following: we rescale the propagator of the σ field by a parameter u, Eq. (2.11). Then the contribution of each diagram G to the renormalization constant comes with the factor u^{n_G}, where n_G is the number of σ-lines in the diagram. Let Z be the renormalization factor for an operator O. In the MS scheme it takes the form Z = 1 + Σ_{k≥1} Z_k(u)/∆^k, where Z_k(u) = Σ_j z_{kj}(u)/N^j. Then, to order 1/N^2, the anomalous dimension of the operator O can be obtained as [24] γ_O = 2u ∂_u Z_1(u). For more details see [24,36]. In certain situations the conventional techniques of self-consistency equations [37,38] and conformal bootstrap [9] are, of course, more effective. However, the approach outlined above is very convenient for the analysis of composite operators, especially those with a nontrivial mixing pattern.
Higher-spin operators
We are interested in the critical dimensions of the higher-spin (traceless and symmetric) singlet operators. In what follows we will not display Lorentz indices explicitly and adopt a shorthand notation for the operator, J_s ≡ J_{µ1,...,µs}. The operator J_s mixes under renormalization with operators that are total derivatives. However, since the mixing has a triangular form, it is irrelevant for the calculation of the anomalous dimensions and can be neglected. Thus the renormalized operator takes the form
The leading-order diagrams contributing to the renormalization factor are shown in Fig. 1. The left diagram in this figure is the only one contributing at this order to the renormalization of the non-singlet operator. The right diagram, with a closed ϕ-line cycle, contributes to the singlet operator only. With this in mind we write the answer for the anomalous dimension of the singlet operator in the form
Here the index η determines the anomalous dimension of the field ϕ, η = 2γ_ϕ; γ_ns(s) is the anomalous dimension of the non-singlet operator, and ∆γ(s) is the contribution due to diagrams with a closed ϕ-line cycle. All contributions except ∆γ(s) are known to NLO accuracy. The first two expansion coefficients for the index η = η_1/N + η_2/N^2 + O(1/N^3) take the form [38], where
The LO anomalous dimension of the σ field (γ_σ = γ_{σ,1}/N + …)
The non-singlet anomalous dimension γ_ns(s) has been calculated in [24] at order 1/N^2. The first two coefficients of its 1/N expansion involve the canonical conformal spin, j_s = s + µ − 1, and the functions S(j) and R_s(µ).
Singlet operators exist only for even spins, so from now on we assume that s is even. At order 1/N only one diagram - the rightmost diagram in Fig. 1 - contributes to the pure singlet anomalous dimension. Calculating this diagram and using Eq. (2.13) we reproduce the known result [23]. Thus, to leading order in 1/N, the singlet anomalous dimension is given by (3.13). Note that for s = 2 the anomalous dimension vanishes, as it should, since the spin-two current corresponds to the energy-momentum tensor. We also remark that the spin dependence of the LO singlet anomalous dimension (3.13) is exactly the same as in the Gross-Neveu model, see e.g. [29,31].
Singlet current at 1/N^2
The diagrams which contribute to the pure singlet anomalous dimension at order 1/N^2 can be split into two groups. The first one comprises the self-energy and vertex corrections to the leading-order diagram (eight different diagrams in total). The diagrams from the second group are shown in Fig. 2. The diagrams from the first group can be calculated effectively with the help of the technique developed in [24]. We give some details of this calculation in Appendix C. Next, the first three diagrams in Fig. 2 are easy to calculate. All other diagrams have only a superficial divergence. Since we are interested only in the residue at the ∆ pole, the regulator ∆ can be removed from the σ-lines and placed on one of the ϕ-lines. For ∆ = 0 the basic σϕ^2 vertex has the property of uniqueness and can be transformed with the help of the star-triangle relation (whose right-hand side carries the factor π^µ a(α, β, γ)). Using the standard technique [12] one can find rather straightforwardly the contribution of each diagram to the renormalization constant of the singlet current. We collected the answers for the individual diagrams in Appendix B.
Before presenting the answer for the singlet anomalous dimension, let us note that if one is interested only in the d = 3 result, the calculation of the last three diagrams can be greatly simplified. It should be stressed here that we are talking about the pole part of the diagrams only. Using the star-triangle relation one derives in d = 3:
Since d = 2µ = 3, the horizontal line in the rightmost diagram disappears. In this way it is easy to check that the contributions of the third and fourth diagrams in the second line of Fig. 2 vanish in d = 3, while the last diagram is reduced to a simple ladder-type diagram by application of the chain rule. Collecting all terms, our answer for the NLO singlet anomalous dimension takes the form (3.14), with the quantities entering it defined in (3.16).
The expression (3.14) passes several consistency checks. First, it can be verified that for s = 2 the singlet anomalous dimension vanishes, γ(s = 2) = 0. We remark also that the non-singlet spin-one current is conserved and, hence, its anomalous dimension vanishes, η + γ_ns(s = 1) = 0. Second, the large-spin asymptotics of γ(s) complies with the CFT prediction [39,40]. It was noticed in [41,42] that if one represents the anomalous dimensions of higher-spin operators in terms of a function f(j) of the conformal spin, then the asymptotic expansion of f(j) has a rather specific form. Namely, it is given by a sum of terms
Excluding the prefactor, this series is invariant under j → 1 − j, save that the coefficients a_{q,k} are allowed to be functions (polynomials) of ln(j − 1/2). For more detail see Refs. [39,40]. Comparing this with (3.14), one finds that f_1(j_s) is the LO anomalous dimension, while the second term in (3.19) is contained in the first two lines of (3.14). Thus the large-spin expansion of all the other terms in (3.14), starting from the third line, has to have the same form.
d = 3 reduction and higher-spin masses
In three dimensions the results can be simplified considerably. After some simplifications we obtain explicit expressions for the non-singlet anomalous dimensions in three dimensions, and similarly for the anomalous dimensions of the singlet currents. It may be interesting to compare the results of the large-N expansion with the perturbative results in 4 − 2ε dimensions for ε = 1/2; this comparison is displayed in Fig. 4 for the s = 4 current. We see that as N increases the two approximations converge to each other. Now we can write down the effective masses of the higher-spin fields in AdS_4:
(4.2)
The order-1/N correction is linear in 1/N, while the effective mass up to order 1/N^2 can be written as
Summary
We have calculated the 1/N^2 corrections to the anomalous dimensions of the singlet higher-spin currents in the O(N) vector model. Also, using the results of Ref. [43] we recovered the four-loop anomalous dimensions in the O(N) model and checked that the 1/N and ε expansions for the anomalous dimensions are in complete agreement with each other. The 1/N^2 expression for the anomalous dimensions (3.14) is rather involved but simplifies considerably in three dimensions. We have also related them to the two-loop radiative corrections to the masses of higher-spin fields.
It has been known that the LO critical dimensions of the singlet higher-spin currents coincide in the O(N) and Gross-Neveu models with some identification of the expansion parameters. Our result shows that this is no longer true at NLO, even in d = 3. This also implies that the NLO anomalous dimensions of the higher-spin currents in Chern-Simons matter theories have a more complicated form than the one observed at LO.
Acknowledgements
[…] association with Lebedev Physical Institute and by the DFG Transregional Collaborative Research Centre TRR 33 and the DFG cluster of excellence "Origin and Structure of the Universe" (E.Sk.).
Appendices A Numerical Values
We collect below numerical values of the order-1/N^2 anomalous dimensions of the singlet currents. It is worth stressing that η_1 = 8/(3π^2) is the same for the Gross-Neveu and for the critical vector models. It is convenient to give the anomalous dimensions as multiples of (η_1)^2, see Eq.
B Diagrams
In this appendix we collect the results for the diagrams shown in Fig. 2. The expressions below give the divergent parts of the diagrams with subtracted counterterms. The symmetry factors are already included in these expressions. We obtained, for the diagrams D_k, k = 1, …, 9:
For the last diagram we get
C Vertex and self-energy corrections
In this Appendix we discuss the calculation of the self-energy and vertex correction diagrams. In total there are eight different diagrams which arise from the SE and vertex corrections to the LO pure singlet diagram. The calculation of the SE diagrams is rather straightforward but cumbersome, while the vertex corrections can be rather involved. The reason for this is that the diagrams with vertex corrections evidently contain a divergent subgraph and, therefore, one cannot remove the regulator ∆ from the σ lines and use the uniqueness property of the σϕ^2 vertex. However, it is helpful to take into account that the model under consideration is a conformal one. The form of the two- and three-point correlators in a CFT is fixed, up to normalization factors, by the scaling dimensions of the fields. Namely, the dressed (full) propagators and the 1PI irreducible three-point function Γ_σϕϕ have the form
The explicit expressions for the factors A, B, Z at order 1/N can be found in Ref. [24]. One can use this information in order to avoid a tedious calculation of the individual diagrams. Namely, it was shown in [24] that the contribution to the anomalous dimension at order 1/N^2 due to the SE and vertex corrections to the 1/N diagram can be extracted from the same diagram with dressed propagators and vertices.
Figure 6. Vertex and self-energy correction diagrams for the pure singlet diagram.
Let us consider a logarithmically divergent diagram. For such a diagram the number of ϕ lines is equal to the number of basic vertices and to twice the number of σ lines. Replacing propagators and vertices by full propagators and vertices, one gets a superficially divergent diagram. It has to be regularized by introducing the regulator ∆ in any line. The resulting diagram has a simple pole in ∆. The contribution to the anomalous dimension which comes from the SE and vertex correction diagrams is equal to
δγ_{SE+V} = −2 r^{(1)}/N^2 ,   (C.6)
where r^{(1)} comes from the expansion of the residue R in 1/N, R = r^{(0)}/N + r^{(1)}/N^2 + O(1/N^3). The triple vertices in the modified diagram have the uniqueness property, which usually simplifies the calculation greatly. Moreover, it is not necessary to replace all vertices and propagators at once. The contributions from lines and vertices are additive [24]. One can replace a subset of lines and vertices, S_1 ⊂ S, satisfying the condition (C.4), calculate the corresponding diagram G^{(1)} and find the coefficient r^{(1)}_1. Then the same can be done for the next subset, S_2, and so on. If the sets S_k are non-intersecting, ∩_k S_k = ∅ and ∪_k S_k = S, then δγ = −(2/N^2) Σ_k r^{(1)}_k. If the sets S \ ∪_k S_k = S_+ and ∩_k S_k = S_− are not empty, then we have to add the contributions from the elements in S_+ and subtract those in S_−. We illustrate this rule with the example of the pure singlet diagram, Fig. 1. The corresponding decomposition is shown in Fig. 6.
In the leftmost diagram we replaced the two left vertices, the left σ line and the two horizontal ϕ lines; in the middle diagram, the two right vertices, the right σ line and the two horizontal ϕ lines. The contribution of the horizontal lines is thus counted twice. Therefore we have to add the contribution from the lines attached to the operator vertex and subtract the contribution from the horizontal lines, which is done by adding the rightmost diagram. All the diagrams are superficially divergent and have to be regularized by shifting the index of one of the σ lines by ∆. All these diagrams (the first two are obviously equal to each other) can be easily calculated with the help of the chain integration rule and the star-triangle relation.
After a simple calculation we find for the residues r^(1) of the simple poles of the diagrams D_10, D_11 and D_12: where χ = −η − γ_σ = χ_1/N + O(1/N^2) and
Deep Markov Random Field for Image Modeling
Markov Random Fields (MRFs), a formulation widely used in generative image modeling, have long been plagued by a lack of expressive power. This issue is primarily due to the fact that conventional MRF formulations tend to use simplistic factors to capture local patterns. In this paper, we move beyond such limitations, and propose a novel MRF model that uses fully-connected neurons to express the complex interactions among pixels. Through theoretical analysis, we reveal an inherent connection between this model and recurrent neural networks, and thereon derive an approximated feed-forward network that couples multiple RNNs along opposite directions. This formulation combines the expressive power of deep neural networks and the cyclic dependency structure of MRFs in a unified model, bringing the modeling capability to a new level. The feed-forward approximation also allows it to be efficiently learned from data. Experimental results on a variety of low-level vision tasks show notable improvement over the state of the art.
Introduction
Generative image models play a crucial role in a variety of image processing and computer vision tasks, such as denoising [1], super-resolution [2], inpainting [3], and image-based rendering [4]. As repeatedly shown by previous work [5], the success of image modeling, to a large extent, hinges on whether the model can successfully capture the spatial relations among pixels.
Existing image models can be roughly categorized as global models and low-level models. Global models [6,7,8] usually rely on compressed representations to capture the global structures. Such models are typically used for describing objects with regular structures, e.g. faces. For generic images, low-level models are more popular. Thanks to their focus on local patterns instead of global appearance, low-level models tend to generalize much better, especially when there can be vast variations in the image content.
Over the past decades, Markov Random Fields (MRFs) have evolved into one of the most popular models for low-level vision. Specifically, the clique-based structure makes them particularly well suited for capturing local relations among pixels. Whereas MRFs as a generic mathematical framework are very flexible and provide immense expressive power, the performance of many MRF-based methods still leaves a lot to be desired when faced with challenging conditions. This occurs due to the widespread use of simplistic potential functions that largely restrict the expressive power of MRFs. In recent years, the rise of Deep Neural Networks (DNN) has profoundly reshaped the landscape of many areas in computer vision. The success of DNNs is primarily attributed to their unparalleled expressive power, particularly their strong capability of modeling complex variations. However, DNNs in computer vision are mostly formulated as end-to-end convolutional networks (CNN) for classification or regression. The modeling of local interactions among pixels, which is crucial for many low-level vision tasks, has not been sufficiently explored.
The respective strengths of MRFs and DNNs inspire us to explore a new approach to low-level image modeling, that is, to bring the expressive power of DNNs to an MRF formulation. Specifically, we propose a generative image model comprised of a grid of hidden states, each corresponding to a pixel. These latent states are connected to their neighbors -together they form an MRF. Unlike in classical MRF formulations, we use fully connected layers to express the relationship among these variables, thus substantially improving the model's ability to capture complex patterns.
Through theoretical analysis, we reveal an inherent connection between our MRF formulation and the RNN [9], which opens an alternative way to MRF formulation. However, they still differ fundamentally: the dependency structure of an RNN is acyclic, while that of an MRF is cyclic. Consequently, the hidden states cannot be inferred in a single feed-forward manner as in an RNN. This poses a significant challenge: how can one derive the back-propagation procedure without a well-defined forward function?
Our strategy to tackle this difficulty is to unroll an iterative inference procedure into a feed-forward function. This is motivated by the observation that while the inference is iterative, each cycle of updates is still a feed-forward procedure. Following a carefully devised scheduling policy, which we call the Coupled Acyclic Passes (CAP), the inference can be unrolled into multiple RNNs operating along opposite directions that are coupled together. In this way, local information can be effectively propagated over the entire network, where each hidden state can have a complete picture of its context from all directions.
The primary contribution of this work is a new generative model that unifies MRFs and DNNs in a novel way, as well as a new learning strategy that makes it possible to learn such a model using mainstream deep learning frameworks. It is worth noting that the proposed method is generic and can be adapted to various problems. In this work, we test it on a variety of low-level vision tasks, including texture synthesis, image super-resolution, and image synthesis.
Related Works
In this paper, we develop a generative image model that incorporates the expressive power of deep neural networks with an MRF. This work is related to several streams of research efforts, but moves beyond their respective limitations.
Generative image models. Generative image models generally fall into two categories: parametric models and non-parametric models. Parametric models typically use a compressed representation to capture an image's global appearance. In recent years, deep networks such as autoencoders [10] and adversarial networks [11,12] have achieved substantial improvement in generating images with regular structures such as faces or digits. Non-parametric models, including pixel-based sampling [13,14,15] and patch-based sampling [16,17,18], instead rely on a large set of exemplars to capture local patterns. Whereas these methods can produce high-quality images with local patterns directly sampled from realistic images, exhaustive search over a large exemplar set limits their scalability and often leads to computational difficulties. Our work draws inspiration from both lines of work. By using DNNs to express local interactions in an MRF, our model can capture highly complex patterns while maintaining strong scalability.
Markov random fields. For decades, MRFs have been widely used for low-level vision tasks, including texture synthesis [19], segmentation [20,21], denoising [1], and super-resolution [2]. Classical MRF models in earlier work [22] use simple hand-crafted potentials (e.g., Ising models [23], Gaussian MRFs [24]) to link neighboring pixels. Later, more flexible models such as FRAME [25] and Fields of Experts [26] were proposed, which allow the potential functions to be learned from data. However, in these methods, the potential functions are usually parameterized as a set of linear filters, and therefore their expressive power remains very limited.
Recurrent neural networks. Recurrent neural networks (RNNs), a special family of deep models, use a chain of nonlinear units to capture sequential relations. In computer vision, RNNs are primarily used to model sequential changes in videos [27], visual attention [28,29], and hand-written digit recognition [30]. Previous work explores multi-dimensional RNNs [31] for scene labeling [32] as well as object detection [33]. The most related work is perhaps the use of 2D RNNs for generating gray-scale textures [34] or color images [35]. A key distinction of these models from ours is that 2D RNNs rely on an acyclic graph to model spatial dependency, e.g. each pixel depends only on its left and upper neighbors, which severely limits the spatial coherence. Our model, instead, allows dependencies from all directions via iterative inference unrolling.
MRF and neural networks. Connections between both models have been discussed long ago [36]. With the rise of deep learning, recent work on image segmentation [37,38] uses mean field method to approximate a conditional random field (CRF) with CNN layers. A hybrid model of CNN and MRF has also been proposed for human pose estimation [39]. These works primarily target prediction problems (e.g. segmentation) and are not as effective at capturing complex pixel patterns in a purely generative way.
Deep Markov Random Field
The primary goal of this work is to develop a generative model for images that can express complex local relationships among pixels while being tractable for inference and learning. Formally, we consider an image, denoted by x, as an undirected graph with a grid structure, as shown in Figure 2 (left). Each node u corresponds to a pixel x_u. To capture the interactions among pixels, we introduce a hidden variable h_u for each pixel, denoting the hidden state corresponding to the pixel x_u. In the graph, each node u has a neighborhood, denoted by N_u. In particular, we use the 4-connected neighborhood of a 2D grid in this work.
Joint Distribution. We consider three kinds of dependencies: (1) the dependency between a pixel x_u and its corresponding hidden state h_u, (2) the dependency between a hidden state h_u and a neighbor h_v with v ∈ N_u, and (3) the dependency between a hidden state h_u and a neighboring pixel x_v. They are respectively captured by factors ζ(x_u, h_u), φ(h_u, h_v), and ψ(h_u, x_v). In addition, we introduce a regularization factor λ(h_u) for each hidden state, which gives us the leeway to encourage certain distributions over the state values. Bringing these factors together, we formulate an MRF to express the joint distribution. Here, V and E are respectively the sets of vertices and edges in the image graph, and Z is a normalizing constant. Figure 2 shows its structure.
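The displayed equation did not survive extraction; a plausible reconstruction from the factors listed above (the exact grouping of the ψ terms is our assumption) is

$$p(\mathbf{x}, \mathbf{h}) = \frac{1}{Z} \prod_{u \in V} \zeta(x_u, h_u)\, \lambda(h_u) \prod_{\{u,v\} \in E} \phi(h_u, h_v)\, \psi(h_u, x_v)\, \psi(h_v, x_u).$$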
Choices of Factors. Whereas the MRF provides a principled way to express the dependency structure, the expressive power of the model still largely depends on the specific forms of the factors that we choose. For example, the modeling capacity of classical MRF models is limited by their simplistic factors. Below, we discuss the factors that we choose for the proposed model. First, the factor ζ(x_u, h_u) determines how the pixel values are generated from the hidden states. Considering the stochastic nature of natural images, we formalize this generative process as a Gaussian mixture model (GMM). The rationale behind this is that pixel values lie in a low-dimensional space, where a GMM with a small number of components can usually provide a good approximation to an empirical distribution. Specifically, we fix the number of components to be K, and consider the concatenation of component parameters as a linear transform of the hidden state, where Q is a weight matrix of model parameters; the factor ζ(x_u, h_u) can then be written in terms of this GMM. To capture the rich interactions among pixels and their neighbors, we formulate the relational factors φ(h_u, h_v) and ψ(h_u, x_v) in fully connected form. Finally, to control the value distribution of the hidden states, we further incorporate a regularization term over h_u. Here, η is an element-wise nonlinear function and d is the dimension of h_u. In summary, the use of the GMM in ζ(x_u, h_u) effectively accounts for the variations in pixel generation, the fully-connected factors φ(h_u, h_v) and ψ(h_u, x_v) enable the modeling of complex interactions among neighbors, while the regularization term λ(h_u) provides a way to explicitly control the distribution of hidden states. Together, they substantially increase the capacity of the MRF model.
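As a concrete illustration of the emission factor described above, the sketch below draws the K component parameters from a linear transform Q of the hidden state. This is a hypothetical parameterization written for this summary: the softmax over mixture weights and the exponential for the variances are our assumptions, not details given in the paper.

```python
import numpy as np

def emission_factor(x_u, h_u, Q, K):
    """Hedged sketch of the GMM factor zeta(x_u, h_u).

    The concatenated GMM parameters (mixture logits, means, log-variances
    of K components) are taken to be the linear transform Q @ h_u of the
    hidden state, as the text describes.
    """
    params = Q @ h_u                          # expected shape: (3 * K,)
    assert params.shape[0] == 3 * K
    logits, means, log_vars = np.split(params, 3)
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                  # assumed softmax for mixture weights
    variances = np.exp(log_vars)              # assumed exp keeps variances positive
    # Gaussian mixture density evaluated at the scalar pixel value x_u
    comp = np.exp(-0.5 * (x_u - means) ** 2 / variances) / np.sqrt(2 * np.pi * variances)
    return float(weights @ comp)
```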
Inference of Hidden States. With this MRF formulation, the posterior distribution of the hidden state h_u, conditioned on all other variables, follows from the factorization above. Here, h_u depends on its neighboring states, the corresponding pixel value, as well as those of its neighbors. Since the pixel x_u and its neighboring pixels x_{N_u} are highly correlated, to simplify our later computations, we approximate the posterior distribution accordingly. We verified this approximation with numerical simulations; the two distributions are indeed very close to each other, as illustrated in Figure 3. Consequently, the MAP estimate of h_u can be approximately computed from its neighbors. It turns out that this optimization problem has an analytic solution. Here, σ is an element-wise function related to η by σ^{-1}(z) = η'(z), where η' is the first-order derivative of η and σ^{-1} is the inverse function of σ.
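Eq. (7) itself is not reproduced here; the sketch below shows the kind of neighbor-driven, RNN-like update the text describes, with the element-wise nonlinearity σ playing the role defined above. The weights W, R and bias b are illustrative names, not the paper's notation.

```python
import numpy as np

def map_update(h_neighbors, x_neighbors, W, R, b, sigma=np.tanh):
    """Approximate MAP update of one hidden state from its neighborhood.

    h_neighbors : list of neighboring hidden-state vectors
    x_neighbors : list of neighboring pixel values (scalars)
    W, R, b     : assumed parameters of the relational factors phi and psi
    sigma       : element-wise nonlinearity; in the paper it is tied to the
                  regularizer eta through sigma^{-1}(z) = eta'(z)
    """
    pre = np.asarray(b, dtype=float).copy()
    for h_v, x_v in zip(h_neighbors, x_neighbors):
        pre += W @ h_v + R * x_v
    return sigma(pre)
```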
Connections to RNNs. We observe that Eq. (7) has a form that is similar to the feed-forward computations in Recurrent Neural Networks (RNNs) [9]. In this sense, we can view the feed-forward RNN as a MAP inference process for MRF models. Particularly, given the RNN computations in the form of Eq. (7), one can formulate an MRF as in Eq. (1), where the regularization function η can be derived from σ according to the relation σ^{-1}(z) = η'(z). Here, b is the minimum of the domain of h, which can be −∞, and C is an arbitrary constant. This connection provides an alternative way to formulate an MRF model. More importantly, in this way, RNN models that have been proven to be successful can be readily transferred to an MRF formulation.
Learning via Coupled Recurrent Networks
Except for special cases [41], inference and learning on MRFs is generally intractable. Conventional estimation methods [42,8,43] either take an overly long time to train or tend to yield poor estimates, especially for models with a high-dimensional parameter space. In this work, we consider an alternative approach to MRF learning, which allows us to draw on deep learning techniques [44,45] that have been proven to be highly effective [40].
Variational Learning Principle. Estimation of probabilistic models based on the maximum likelihood principle is often intractable when the model contains hidden variables. Expectation-maximization [46] is one of the most widely used ways to tackle this problem; it iteratively calculates the posterior distribution of h_i (in E-steps) and then optimizes θ (in M-steps). Here, θ = {W, Q, R} denotes the model parameters, x_i is the i-th image, and h_i is the corresponding hidden state. As exact computation of this posterior expectation is intractable, we approximate it based on ĥ_i, the MAP estimate of h_i. This is the learning objective of our model. Here, f is the function that approximately infers the latent state ĥ_i given an observed image x_i. When the posterior distribution p(h_i|x_i, θ) is highly concentrated, which is often the case in vision tasks, this is a good approximation. For an image x, log p(x|h, θ) can be further expanded as a sum of terms defined on individual pixels. For our problem, this learning principle can be interpreted in terms of encoding/decoding: the hidden states ĥ = f(x, θ) can be understood as a representation that encodes the observed patterns in an image x, while log p(x|ĥ, θ) measures how well ĥ explains the observations.
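The displayed objective was lost in extraction; gathering the pieces stated in the text, the approximate learning problem should read roughly as follows (our reconstruction, with the per-pixel expansion written out):

$$\hat{\theta} = \arg\max_{\theta} \sum_i \log p\big(x_i \mid \hat{h}_i, \theta\big), \qquad \hat{h}_i = f(x_i, \theta), \qquad \log p(x \mid h, \theta) = \sum_{u \in V} \log p(x_u \mid h_u, \theta).$$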
Coupled Acyclic Passes. In the proposed model, the dependencies among neighbors are cyclic. Hence, the MAP estimate ĥ = f(x, θ) cannot be computed in a single forward pass. Instead, Eq. (7) needs to be applied across the graph in multiple iterations. Our strategy is to unroll this iterative inference procedure into multiple feed-forward passes along opposite directions, such that these passes together provide a complete context to each local estimate.
Fig. 4: Coupled acyclic passes. We decouple an undirected cyclic graph into two directed acyclic graphs, each of which allows feed-forward computation. Inference is performed by alternately traversing the two acyclic graphs, while coupling their information at each step.
Specifically, we decompose the underlying dependency graph G = (V, E), which is undirected, into two acyclic directed graphs G f = (V, E f ) and G b = (V, E b ), as illustrated in Figure 4, such that each undirected edge {u, v} ∈ E corresponds uniquely to an edge (u, v) ∈ E f and an opposite edge (v, u) ∈ E b . It can be proved that such a decomposition always exists and that for each node u ∈ V , the neighborhood N u can be expressed as N u = N f (u) ∪ N b (u), where N f (u) and N b (u) are the set of parents of u respectively along G f and G b .
Given such a decomposition, we can derive an iterative computational procedure, where each cycle couples a forward pass that applies Eq. (7) along G_f and a backward pass along G_b. After the t-th cycle, the state h_u has been updated by both passes. As stated above, we have N_u = N_f(u) ∪ N_b(u); therefore, over a cycle, the updated state h_u incorporates information from all its neighbors. Note that a given graph G can be decomposed in many different ways. In this work, we specifically choose the one that forms a zigzag path. The advantage over a simple raster-line order is that the zigzag path traverses all the nodes continuously, so it preserves spatial coherence by making each node depend on all the nodes visited before it. The forward and backward passes resulting from this decomposition are shown in Figure 4. This algorithm has two important properties. First, the acyclic decomposition allows feed-forward computation as in Eq. (7) to be applied. As a result, the entire inference procedure can be viewed as a feed-forward network that couples multiple RNNs operating along different directions. Therefore, it can be learned in a way similar to other deep neural networks, using Stochastic Gradient Descent (SGD). Second, the feedback mechanism embodied by the backward pass facilitates the propagation of local information and thus the learning of long-range dependencies.
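A minimal sketch of the coupled-pass inference is given below, assuming the map_update-style cell sketched earlier and a plain raster ordering; the paper's zigzag schedule is a refinement of this ordering and is not reproduced here.

```python
import numpy as np

def coupled_acyclic_inference(x, h, update_fn, n_cycles=1):
    """Unrolled inference on an H x W grid with 4-connected neighbors.

    Simplified sketch: the forward pass uses the up/left parents and the
    backward pass the down/right parents, so each undirected edge is
    traversed once in each direction per cycle.
    """
    H, W = x.shape[:2]
    for _ in range(n_cycles):
        for i in range(H):                       # forward pass
            for j in range(W):
                parents = [(i - 1, j), (i, j - 1)]
                parents = [(a, b) for a, b in parents if a >= 0 and b >= 0]
                h[i, j] = update_fn([h[a, b] for a, b in parents],
                                    [x[a, b] for a, b in parents])
        for i in reversed(range(H)):             # backward pass
            for j in reversed(range(W)):
                parents = [(i + 1, j), (i, j + 1)]
                parents = [(a, b) for a, b in parents if a < H and b < W]
                h[i, j] = update_fn([h[a, b] for a, b in parents],
                                    [x[a, b] for a, b in parents])
    return h

# Example wiring with the earlier sketch (hypothetical parameters W, R, b):
# update_fn = lambda hs, xs: map_update(hs, xs, W, R, b)
```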
Discussions with 2D-RNN. Previous work has explored two-dimensional extensions of RNNs [31], often referred to as 2D-RNNs. Such extensions, however, are formulated upon an acyclic graph, and can be considered a trimmed-down version of our algorithm. A major drawback of the 2D-RNN is that it scans the image in a raster-line order and is not able to provide a feedback path. Therefore, the inference of each hidden state can only take into account 1/4 of the context, and there is no way to recover from a poor inference. As we will show in our experiments, this may cause undesirable effects. Whereas bidirectional RNNs [47] may partly mitigate this problem, they decouple the hidden states into multiple ones that are independent a priori, which would lead to consistency issues. Recent work [48] also finds them difficult to use in generative modeling.
Implementation Details. For inference and learning, to make the computation feasible, we take just one forward pass and one backward pass. Thus, each node is only updated twice while still being able to use information from its full context. The training patch size varies from 15 to 25 depending on the specific experiment. Overall, if we unroll the full inference procedure, our model is thousands of layers deep. We use rmsprop [45] for optimization and we do not use dropout for regularization, as we find it makes the training oscillate.
Experiments
In the following experiments, we test the proposed deep MRF in three scenarios for modeling natural images. We first study its basic properties on texture synthesis, and then we apply it to a prediction problem, image super-resolution. Finally, we integrate global CNN models with the local deep MRF for natural image synthesis.
Texture Synthesis
The task of texture synthesis is to synthesize new texture images that possess similar patterns and statistical characteristics as a given texture sample. The study of this problem originated from graphics [13,14]. The key to successful texture reproduction, as we learned from previous work, is to effectively capture the local patterns and variations. Therefore, this task is a perfect testbed to assess a model's capability of modeling visual patterns.
Our model works in a purely generative way. Given a sample texture, we train the model on randomly extracted patches of size 25 × 25, which are larger than most texels in natural images. We set K = 20, initialize x and h to zeros, and train the model with back-propagation along the coupled acyclic graph. With a trained model, we can generate textures by running the RNN to derive the latent states while sampling the output pixels at the same time. As our model is stationary, it can generate texture images of arbitrary sizes. We work on two texture datasets, Brodatz [49] for grayscale images and VisTex [50] for color images. From the results shown in Figure 5, our syntheses visually resemble high-resolution natural images, and the quality is close to that of the non-parametric approach [13]. We also compare with the 2D-RNN [34]. As we can see, the results obtained using the 2D-RNN, which synthesizes based only on the left and upper regions, exhibit undesirable effects and often degenerate into black in the bottom-right parts.
Fig. 5: Texture synthesis results (textures D12, D34, D104, flowers, bark, clouds; columns: input, 2D-RNN [34], graphics [13], ours).
Two fundamental parameters control the behavior of our texture model. The training patch size decides the farthest spatial relationships that can be learned from data, while the number of Gaussian mixture components controls the dynamics of the texture landscape. We analyze our model by varying these two parameters. As shown in Figure 6, a larger training patch size and a larger number of mixture components consistently improve the results. For non-parametric approaches, a bigger patch size dramatically increases the computational cost, whereas for our model the inference time stays the same regardless of the patch size the model is trained on. Moreover, our parametric model is able to scale to large datasets without additional computation.
Image Super-Resolution
Image super-resolution is the task of producing a high-resolution image from a single low-resolution one. Whereas previous MRF-based models [2,55] work reasonably well, the quality of their results is inferior to the state-of-the-art models based on deep learning [52,54]. With the deep MRF, we wish to close this gap. Unlike in texture synthesis, the generation in this task is driven by a low-resolution image. To incorporate this information, we introduce additional connections between the hidden states and the corresponding pixels of the low-resolution image, as shown in Figure 7. It is noteworthy that we input just a single pixel (instead of a patch) at each site; in this way, we can test whether the model can propagate information across the spatial domain. As the task is deterministic, we use a GMM with a single component and fix its variance. In the testing stage, we output the mean of the Gaussian component at each location as the inferred high-resolution pixel. This approach is very generic: the model is not specifically tuned for the task and no pre- or post-processing steps are needed. We train our model on a widely used super-resolution dataset [56] which contains 91 images, and test it on Set5, Set14, and BSD100 [57]. The training is on patches of size 16 × 16 and rmsprop with momentum 0.95 is used. We use PSNR for quantitative evaluation. Following previous work, we only consider the luminance channel in the YCrCb color space. The two chrominance channels are upsampled with bicubic interpolation.
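To make the adaptation concrete, a hedged variant of the earlier update sketch is shown below: the only change is an extra term driven by the co-located low-resolution pixel (the weight U and the name y_lowres are our notation, not the paper's).

```python
import numpy as np

def map_update_sr(h_neighbors, x_neighbors, y_lowres, W, R, U, b, sigma=np.tanh):
    """State update with an additional connection to the low-resolution pixel."""
    pre = np.asarray(b, dtype=float) + U * y_lowres   # extra input from the low-resolution image
    for h_v, x_v in zip(h_neighbors, x_neighbors):
        pre += W @ h_v + R * x_v
    return sigma(pre)
```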
As shown in Table 1 and Table 2, our approach outperforms the CNN-based baseline [52] and compares favorably with the state-of-the-art methods dedicated to this task [53,54]. One possible explanation for this success is that our model not only learns the mapping, but also learns the image statistics of high-resolution images. The training procedure, which unrolls the RNN into thousands of steps that share parameters, also reduces the risk of overfitting. The results also demonstrate the particular strength of our model in handling large upscaling factors and difficult images. Figure 8 shows several examples visually.
Fig. 7: Adapting deep MRFs to specific applications. Image super-resolution: the hidden state receives an additional connection from the low-resolution pixel. Image synthesis: the deep MRF renders the final image from a spatial feature map, which is jointly learned by a variational auto-encoder.
Natural Image Synthesis
Images can be roughly considered as a composition of textures with the guidance of scene and object structures. In this task, we move beyond the synthesis of homogeneous textures, and try to generate natural images with structural guidance.
While our model excels in capturing spatial dependencies, learning weak dependencies across the entire image is both computationally infeasible and analytically inefficient. Instead, we adopt a global model to capture the overall structure and use it to provide contextual guidance to MRF. Specifically, we incorporate the variational auto-encoder (VAE) [10] for this purpose -VAE generates feature maps at each location and our model uses that feature to render the final image (see Figure 7). Such features may contain information of scene layouts, objects, and texture categories.
We train the joint model end-to-end from scratch. During each iteration, the VAE first encodes the image into a latent vector, then decodes it to a feature map with the same size as the input image. We then connect this feature map to the latent states of the deep MRF. The total loss is defined as the sum of the Gaussian-mixture loss in image space and the KL divergence in the high-level VAE latent space. For training, we randomly extract patches from the feature map; the gradients from the deep MRF back to the VAE thus only cover the extracted patches. During testing, the VAE randomly samples from the latent space and decodes the sample to generate the global feature maps. The output pixels are sampled from a GMM with 10 mixture components along the coupled acyclic graph. We work on the MSRC [58] and SUN [59] databases and select scene categories with rich natural textures, such as Mountains and Valleys. Each category contains about a hundred images. As we will see, our approach generalizes much better than the data-hungry CNN approaches. We train the model on images of size 64 × 64 with a batch size of 4. For each image, we extract 16 patches of size 15 × 15 for training. Figure 9 shows several images generated from our models, in comparison with those obtained from the baselines, namely the raw VAE [10] and DCGAN [60]. The CNN architecture described in the DCGAN paper [60] is shared across all methods to ensure a fair comparison. We can see that our model successfully captures a variety of local patterns, such as water, clouds, walls and trees. The global appearance also looks coherent, real and dynamic. The state-of-the-art CNN-based models, which focus too much on global structures, often yield sub-optimal local effects.
Conclusions
We present a new class of MRF models whose potential functions are expressed by powerful fully-connected neurons. Through theoretical analysis, we draw close connections between probabilistic deep MRFs and end-to-end RNNs. To tackle the difficulty of inference in cyclic graphs, we derive a new framework that decouples a cyclic graph into multiple coupled acyclic passes. Experiments show state-of-the-art results on a variety of low-level vision problems, which demonstrate the strong capability of MRFs with expressive potential functions.
Acknowledgment. This work is supported by the Big Data Collaboration Research grant (CUHK Agreement No. TS1610626) and the Early Career Scheme (ECS) grant (No: 24204215). We also thank Aditya Khosla for helpful discussions and comments on a draft of the manuscript.
Exploring Influencers’ Commercial Content on Instagram
Abstract This study, which explored how commercial products and services are displayed by different influencer categories on Instagram, was motivated by the need for a more transparent picture of the commercial content consumed by followers. Drawing from an ethnographic content analysis of 3,278 Instagram posts, we found that social media influencers with a minor following had a higher number and a broader variety of commercial posts. Additionally, products and services were displayed through subtle patterns, often integrated into the influencers’ lifestyles and social activities. Although social media influencers with many followers had fewer commercial posts, their display of products and services was more direct and informative. The study closes a literature gap by providing a more refined understanding of social media influencers’ commercial content on Instagram. It offers managerial implications based on the societal impact of the commercial content that people consume.
and efficacy of advertisements (Hudders, De Jans, and De Veirman 2020). According to reviews by Hudders, De Jans, and De Veirman (2020) and Vrontis et al. (2021), limited studies have focused on the commercial content of social media influencers' posts, especially the practices among different influencer categories. In academia and in practice, it is common to categorize social media influencers on Instagram according to the number of followers they have (Campbell and Farrell 2020; Inzpire.me 2021; Klear 2021). Number of followers is an indication of how many people influencers can reach with their commercial content. Each category plays different roles in the influencer marketing field (Kay, Mulcahy, and Parkinson 2020), and their different influential capabilities are debated (Campbell and Farrell 2020; Domingues Aguiar and van Reijmersdal 2018). For example, the so-called micro-influencers (1,000-10,000 followers) are frequently understood to provide intimacy, almost like a distant friend, as they have fewer followers and brand collaborations (Campbell and Farrell 2020). In contrast, the so-called macro-influencers (500,000-1 million followers) are less intimate but play a more prominent role as information amplifiers because they reach a more extensive and more diverse base of followers (Abidin 2021). For the most part, studies have yet to investigate how different influencer categories display commercial content. A few exceptions exist (e.g., Alassani and Göretz 2019; Britt et al. 2020), and although these studies offer valuable insights, they focus on one type of sponsored post or on selected influencer categories. As such, this study contributes to the discussion by taking a broader approach, focusing on all forms of products and services and all influencer categories displaying commercial content. By taking this perspective, we can provide a broad and nuanced overview of the commercial content people consume.
This study addresses the following research question: How are commercial products and services displayed by different influencer categories on Instagram? To answer this question, we conducted an ethnographic content analysis of 3,278 Instagram posts from 33 social influencers. The data collection focused on females between the ages of 18 and 34 who are active on Instagram, as this is the dominant influencer group and Instagram is a leading platform in the industry (Statista 2023). Due to its dominance in influencer marketing, we selected the fashion and beauty domain as the specific context (Djafarova and Rushworth 2017). We focused our study on Scandinavia because most of the literature has examined other regions, such as North America and Southeast Asia (Abidin et al. 2020). We followed a categorization scheme developed by Abidin (2021) specifically for the Scandinavian market that is based on the number of followers: micro-influencers (1,000-10,000 followers), influencers (10,000-500,000 followers), macro-influencers (500,000-1 million followers), and mega-influencers (1 million+ followers). Abidin's (2021) categorization calculates the number of followers in each category against the population and size of the Scandinavian countries. According to Abidin (2021), nano-influencers, with fewer than 1,000 followers, constitute a fifth category. However, this category is rarely considered for commercial purposes in the Scandinavian market (e.g., Inzpire.me 2021) and was not investigated in this study. We focus on Instagram "posts" in this study. Other formats such as "stories," "reels," and "channels" may be interesting to study, but posts offer the flexibility to share photos, video, carousels (multiple images or videos in one post), and longer-form captions (up to 2,200 characters) that appear permanently on the social media influencer's profile and on followers' feeds (Caldeira, Van Bauwel, and De Ridder 2021). Therefore, we considered posts to broadly represent content display and prioritized this format.
Our main findings demonstrate that micro-influencers and influencers had a higher number and a broader variety of commercial posts. Additionally, products and services were displayed through subtle patterns, often integrated into the influencers' lifestyles and social activities. Macro- and mega-influencers had fewer commercial posts, but their display of products and services was more direct and informative. The study provides two main contributions to the influencer marketing literature. First, the study reveals how the increased establishment and commercialization of influencer marketing contribute to a new generation of micro-influencers. Second, we contribute insights into the subtle nature of micro-influencers' and influencers' commercial displays as opposed to macro- and mega-influencers' more direct and informative displays of commercial content. These findings close a literature gap by providing a broad and nuanced understanding of the commercial content displayed (Hudders, De Jans, and De Veirman 2020; Vrontis et al. 2021). Our study also emphasizes how the increased commercialization of the influencer industry has contributed to the complex challenges created when people's social lives are seamlessly connected to commercial activities. Although the study does not investigate commercial transparency per se, the findings are valuable for consumer authorities and for conversations taking place in the literature (Borchers and Enke 2022; Hogsnes, Grønli, and Hansen 2023; Karagür et al. 2022) concerned with the societal implications of the consumption of commercial content.
Background
Instagram has evolved into an established marketing platform. Marketing on Instagram includes several activities, including the development of business profiles for organic interactions with customers, the strategic placement of ads (Instagram 2023), and influencer marketing (Martínez-López et al. 2020). Instagram has become increasingly valuable for marketers because they can interact directly with customers, create engaging and inspirational content, and target specific customers in their social space (Casaló, Flavián, and Ibáñez-Sánchez 2020). It is also common for marketers to collaborate with social media influencers, often referred to as "influencer marketing" (Hudders, De Jans, and De Veirman 2020), where they display products, brands, organizations, or ideas on their social media profiles (De Veirman, Cauberghe, and Hudders 2017). Marketers typically pay social media influencers to convey commercial messages on their behalf, invite them to exclusive events, or send them free products in the hope that they will showcase them on their social media profiles (De Veirman, Cauberghe, and Hudders 2017). Social media influencers can also be brand ambassadors, and those with many followers develop their own brands (Rundin and Colliander 2021). Typically, the number of followers on Instagram defines an influencer's position in the marketing field (Abidin 2021; Campbell and Farrell 2020). Because our study investigated the commercial content displayed by different social media influencer categories, our theoretical background covers three areas: influencer categories, commercial content, and display of commercial content.
Influencer Categories
It is not viable to conceptualize all social media influencers as the same (Kay, Mulcahy, and Parkinson 2020). As illustrated in Table 1, they can be divided into five main categories (Abidin 2021).
When investigating social media influencers in Scandinavia, Abidin (2021) emphasized the importance of accounting for specific categories scaled to the population. In countries such as Denmark (population 5.8 million), Norway (population 5.3 million), and Sweden (population 10.2 million), a mega-influencer has more than 1 million followers, whereas those with fewer than 1,000 followers are considered nano-influencers. Number of followers in each category estimates how many people influencers reach on Instagram versus the country's population. Different approaches exist, such as categories based on language groups or social media platforms. One might also categorize social media influencers by the number of "likes" (Kay, Mulcahy, and Parkinson 2020) or their genre (Abidin 2021). In this study, we utilize Abidin's (2021) categorization because it was developed specifically for Scandinavian countries and fits Instagram and the fashion and beauty industry. Nano-influencers are rarely involved in commercial collaborations in the Scandinavian market (e.g., Inzpire.me 2021) and are therefore not included in this study. From now on, we refer to all categories in general as "social media influencers." Studies have investigated the role of each influencer category in the influencer marketing field (Campbell and Farrell 2020; Park et al. 2021). Mega- and macro-influencers are often associated with broadcast media, where they act as information amplifiers because they reach large audiences (Abidin 2021). Macro- and mega-influencers also tend to promote their own brands or those they co-designed (Rundin and Colliander 2021), emphasizing their status in marketing. However, other studies have found that the established position of macro- and mega-influencers contributes to decreased influential power. For example, because macro- and mega-influencers are involved in a broader range of commercial collaborations, they use more professional photographs and tag more brands than micro-influencers (Alassani and Göretz 2019). This increased commercialization may decrease the perception of intimacy. Influencers are often viewed as opinion leaders (Abidin 2021) and play an essential role in the information flow (Casaló, Flavián, and Ibáñez-Sánchez 2020). Information from mass media flows through a mediation process in which influential people process the information and transmit it to the public (Lin, Bruning, and Swarna 2018). Micro- and nano-influencers are thought to be persuasive converters (Abidin 2021). Because micro-influencers have small followings, some researchers have found that they are less likely to "sell out" than other categories (Campbell and Farrell 2020). They represent a more significant niche and appear to be more similar to those who follow them, so they tend to convey a greater sense of trust and authenticity (Campbell and Farrell 2020; Park et al. 2021). To expand on these conversations, our study explores the commercial content displayed by these different influencer categories.
Commercial Content
We define "content" as the resources available in a network (Kane, Alavi, and Borgatti, 2014) and "commercial" as intended for commercial purposes such as monetizing, selling, promoting, and advertising a product, business, or service (Merriam Webster 2024).Being a social media influencer involves displaying commercial content on social media (Gross and Von Wangenheim 2022).Influencers' marketing value, however, is based on nonsponsored posts about their everyday lives (Gross and Von Wangenheim 2022).In contrast to commercial content, nonsponsored posts are unrelated to brands and do not include any display of brands or products.They often contain personal stories, entertainment, inspirational images, and life captures (Zarei et al. 2020).As such, social media influencers' Instagram profiles link nonsponsored posts and those containing commercial content.
Commercial content consists of sponsored posts, nonsponsored commercial posts, and hidden sponsored commercial posts.Sponsored posts integrate brands and brand messages that are compensated by a sponsor (Zarei et al. 2020).Such posts are incentivized and influenced by companies, which have some control over the advertising message (Zarei et al. 2020).Sponsored posts tend to have captions with clear advertising messages.Captions are textual information added to a post that often aligns with the visual image or video (Caldeira, Van Bauwel, and De Ridder 2021).Marketers use captions as an advertising tool; they contain advertising messages and provide helpful information about brands and products, such as where to buy a product, where to get the best offers, or the social influencer's experience with a product (Gross and Von Wangenheim 2022, 291).
However, the degree to which a marketer can control sponsored posts depends on the marketing activity.
Social media influencers often receive monetary rewards through complimentary products or invitations to exclusive "Instagrammable" events, in the hope that the products will be displayed (De Veirman, Cauberghe, and Hudders 2017).In some cases, it is common for marketers to refrain from declaring a commercial collaboration in their captions, even if they are paying to convey a specific message.This is called a hidden sponsored post (Zarei et al. 2020).In this case, a post may include captions with product recommendations, brand tags, or emojis, for example, but it omits captions declaring the use of monetary incentives to post the content (Abidin et al. 2020).In addition to sponsored and hidden sponsored posts, social media influencers often post about products and brands for which they receive no monetary rewards (Jorge, Marôpo, and Nunes 2018), such as by recommending a product, tagging clothing brands they wear, or tagging restaurants they visit.We refer to this as a nonsponsored commercial post.Because our study is motivated by the need to gain insight into the commercial content followers consume, we study all forms of posts in which products and services are displayed under the umbrella of "commercial content."
Display of Commercial Content
Specific visual displays are used in photos and videos when social media influencers post commercial content.In most cases, they appear at the center of the image, displaying products or services being worn or used (Abidin 2016), such as in a selfie, whole-figure, or half-figure photo conceptualized as a "portrait" (Bainotti, Caliandro, and Gandini 2021).They also tend to display material assemblages, such as shoes or bags characterized as "material objects" at the center of a post.These portraits or material objects are photographed and combined to create "instaworthy" posts (Vanninen, Mero, and Kantamaa 2023) that align with specific Instagram visual aesthetics (Bainotti, Caliandro, and Gandini 2021).Displaying specific "settings" such as landscapes, celebrations, and general surroundings is common (Bainotti, Caliandro, and Gandini 2021).Personal captions are often attached to the visuals, often related to the influencer's preference and identity.Hidarto (2021) found that captions include a vast amount of language intended to establish familiarity with the audience.In other scenarios, the captions include limited information, letting the photo or video "speak for itself" when the visual surroundings are essential (Vanninen, Mero, and Kantamaa 2023).In this case, commercial elements appear through brand tags or hashtags.
In their textual and visual displays, social media influencers are concerned with creating personal intimacy or forming an intimate bond with their followers (Jorge, Marôpo, and Nunes 2018;Caldeira, Van Bauwel, and De Ridder 2021).For example, Caldeira, Van Bauwel, and De Ridder (2021) identified several visual displays of social media influencer portraits, often accompanied by captions that included brand tags or hashtags or captions that exalted the influencer's enjoyment of the brand.The motive behind such subtle commercial displays is to provide a source of inspiration and enjoyment (Djafarova and Bowes 2021).The commercial elements in this case are less direct and more subtle in nature (Campbell and Grimm 2019).Gross and Von Wangenheim (2022) refer to this type of commercial display as transformational advertising, emphasizing information about the experience of using the brand or product instead of promoting the product solely from an objective perspective.
Product information and more descriptive messages are also common in influencers' displays of products and services.Hidarto (2021) found that social media influencers tend to include written captions with descriptive information about a product's quality and "promises."Jorge, Marôpo, and Nunes (2018) found that influencers often carefully explain why they genuinely like a product.In this case, although the negotiated authenticity and commercialism are more direct, persuading followers to purchase the product is based on their own experiences (Hidarto 2021).Compared to more subtle tones, such displays contain a clear commercial message.These practices can be understood as informational advertising, based on providing rational information directly linked to the advertised brands and products.
Although research on social media influencers' commercial content display is expanding, there is a need for a broader and more refined understanding of how they display that commercial content (Hudders, De Jans, and De Veirman 2020;Vrontis et al. 2021).
Methods
We conducted an ethnographic content analysis (Altheide 1987; Bainotti, Caliandro, and Gandini 2021; Rose 2014) to investigate how different influencer categories on Instagram display commercial content. This method gave us direct access to the commercial content posted by social media influencers. It involved manual coding, which has demonstrated good validity because it allows researchers to consider text and image context and social embeddedness (Altheide 1987). The ethnographic content analysis involved three steps: (1) social media influencer selection, (2) coding, and (3) data analysis.
Social Media Influencer Selection
First, we identified females aged 18 to 34 as the dominant group in the influencer industry (Statista 2021) in Norway, Sweden, and Denmark. Although a broader age range could have enriched the analysis, the results would not have been representative of the dominant commercial display in the industry. We focused on Scandinavia because other researchers have mainly considered the influencer market in the United States and Southeast Asia (Abidin et al. 2020). Given the rapid growth of influencer marketing worldwide, it is crucial to expand the geographic coverage of this research (Vrontis et al. 2021). Second, we selected the fashion and beauty industry because this is the most prominent market in the influencer marketing industry (Statista 2021). Third, we studied the micro-influencer, influencer, macro-influencer, and mega-influencer categories identified by Abidin (2021) and selected candidates based on their number of Instagram followers, aligned with the population and size of Scandinavian countries. We did not assess nano-influencers, who rarely participate in the Scandinavian market (Inzpire.me 2021). To identify candidates based on our criteria, we applied Klear (2021), an artificial intelligence (AI) tool used by marketers when searching for influencers because it allows demographic analysis and campaign management. We were able to apply filters such as "female," "18-34," "fashion," "beauty," "Sweden," "Norway," and "Denmark" to ensure a selection that fit our criteria. To confirm that the candidates met our criteria, we applied a second tool called Inzpire.me (2021), as there is no consensus on what defines a top influencer. Inzpire.me provides insights on influencers' Instagram profiles, such as demographics and engagement. Inzpire.me and Klear are leading platforms in the Scandinavian market's approach to influencer searches. We applied both to ensure that we found appropriate candidates.
Nine micro-influencers and nine influencers were selected, with three representing each Scandinavian country. Eight macro-influencers and seven mega-influencers were selected (three Swedish and three Danish per category). Due to the lack of representation in these categories, we could analyze only two macro-influencers and one mega-influencer from Norway. In total, we had 33 candidates for analysis.
Coding
We developed codes to guide the ethnographic content analysis (Altheide 1987) on both denotative and connotative levels (Bainotti, Caliandro, and Gandini 2021). The denotative analysis focused on the objective representation of the content at first glance, such as whether the post contained a picture or a video, whether the post contained a portrait or a material object, and whether the caption included emojis or hashtags. The connotative analysis determined whether there was an association different from the literal meaning. At this level, the subjective meaning behind the posts was essential (Bainotti, Caliandro, and Gandini 2021). We developed six steps to guide our coding on these two levels.
Codes on the denotative level involved five steps. The first step identified the number of commercial posts (sponsored, nonsponsored, and hidden sponsored) posted by the different influencer categories. The second step identified commercial captions to increase our knowledge of whether the post contained a clear commercial message. A caption was recognized as commercial if it contained brand tags, visible brand labels, or commerce-related information. The third step identified the format of each commercial post, distinguishing among single photos, single videos, and carousels (a combination of both).
The fourth and fifth steps developed visual and textual codes of the commercial posts identified in the first step to identify how each influencer category displayed commercial products and services.Our goal was to identify what each photo, video, and carousel represented both visually and textually, that is, the textual elements in the captions attached to the post.We familiarized ourselves with the 33 candidates' postings by reviewing the features of their posts to gain a preliminary understanding of the content (Altheide 1987).We recognized many portraits of themselves as selfies or full-figure or half-figure photos.We also recognized many images of objects and social gatherings.One coauthor made brief notes about standard features, and we discussed our observations with one another to ensure their validity.This led to the development of three visual codes: portrait, material object, and setting.Regarding the textual codes, we found six main patterns: emojis, hashtags, brand tags, product descriptions, texts describing personal opinions about the products, and texts describing the influencer's mood.Given the exploratory nature of ethnographic coding (Altheide 1987), other descriptive and analytical codes were expected and allowed to emerge (Bainotti, Caliandro, and Gandini 2021).
The sixth step involved gathering information on a connotative level.At this stage, we described the subjective meaning behind each post in one or two lines, using Excel (Bainotti, Caliandro, and Gandini 2021).Specifically, the goal was to grasp the commercial context, such as what the post represented beyond the objective meanings identified in the first five steps.For example, on a denotative level, we identified a portrait in carousel format with commerce-related information in the caption, whereas the connotative level allowed us to consider the surroundings of the post and whether it promoted the influencer's own brand or a brand the influencer co-designed or represented as an ambassador.We also identified whether the influencers tagged brands or friends in the posts and the emotional state of each post, such as whether it had a positive or negative tone (Bainotti, Caliandro, and Gandini 2021).Including both levels gave us a broad and nuanced view of how each influencer category displays commercial content.
Table 2 provides an overview of the six steps we followed on a denotative and a connotative level.
Once the steps were planned and the codes were developed, we started coding the posts of the 33 selected social influencers. To get a representative number, we selected their 100 latest Instagram posts. The coding process took three months, coding one post at a time in Excel as it was viewed. One of the social media influencers in the study had only 78 posts, so we ended up with a data set of 3,278 coded posts for data analysis.
Data Analysis
The data were first sorted into four separate Excel sheets with data from each influencer category (micro, influencer, macro, mega). Then the data were analyzed on a denotative level (steps 1-5) using a pivot diagram (Bainotti, Caliandro, and Gandini 2021) to count the following: (1) number of commercial posts, (2) number of commercial captions, (3) format (photo, video, or carousel), (4) visual codes used (portrait, material object, setting), and (5) textual codes used (hashtags, brand tags, emojis, descriptions, personal taste, mood). Once all the data were counted, we calculated the percentage distributions. On a connotative level, the one to two lines of description for each post underwent a rigorous qualitative process involving an open, axial, and selective coding strategy (Strauss and Corbin 1998) to capture patterns in the commercial context for each influencer category. After the data were gathered, we performed axial coding, drawing connections between the data using a color-coded approach. We examined all four Excel sheets (micro, influencer, macro, mega) and assigned similar colors to data with a certain link. For example, many of the posts showed influencers posing in a carousel format, in wearables with brand tags, accompanied by short captions. All these posts were coded with a similar color. Once all the data were color-coded, we analyzed the data assigned similar colors and created one central category for each pattern that connected the codes (Strauss and Corbin 1998). In this way, we could identify patterns of how each influencer category displayed commercial content.
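To illustrate the denotative-level counting described above (this is not the authors' actual analysis script, and the column names are hypothetical), a pivot-style tally could look like this:

```python
import pandas as pd

# Hypothetical coded data: one row per post, columns follow the denotative codes
posts = pd.DataFrame({
    "category": ["micro", "micro", "influencer", "mega"],
    "commercial": [True, False, True, True],
    "commercial_caption": [True, False, True, True],
    "format": ["carousel", "photo", "photo", "video"],
})

# Share of commercial posts per influencer category (step 1)
share_commercial = posts.groupby("category")["commercial"].mean() * 100

# Share of commercial captions among commercial posts (step 2)
commercial = posts[posts["commercial"]]
share_captions = commercial.groupby("category")["commercial_caption"].mean() * 100

# Format distribution of commercial posts, in percent (step 3)
format_share = pd.crosstab(commercial["category"], commercial["format"], normalize="index") * 100

print(share_commercial, share_captions, format_share, sep="\n\n")
```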
Two of the authors discussed and evaluated the denotative and connotative levels of analysis to ensure the validity by performing an "ethnographic ethic" (Altheide and Johnson 1994, 587).The ethnographic ethic included five elements the authors considered during data collection and analysis: (1) the substance of the analysis, such as the relationship between the observed commercial content and its considerable cultural, historical, and organizational contexts; (2) the relationship between the author conducting the analysis and the commercial content analyzed; (3) the point of view when rendering an interpretation of the ethnographic data; (4) the role of the reader in the final product; and (5) the authorial style used to render the description or interpretation.
Results
This section presents our findings regarding how each influencer category on Instagram displays commercial products and services.
Micro-Influencers
We began our analysis of micro-influencers by identifying the percentage of commercial posts.Of 900 posts analyzed, 501 contained commercial elements.Thus, micro-influencers had one of the highest percentages of commercial posts, with 56%.Of the 501 posts, 215 contained commercial captions (43%), making micro-influencers the category with the lowest percentage of commercial captions.In the next step we identified the format, visual codes, and textual codes used in micro-influencers' commercial posts (Figure 1).The patterns were not mutually exclusive.
On a connotative level, we identified four main patterns: portraits displaying wearables (n = 351), daily life captures (n = 72), close-ups of material objects (n = 49), and portraits applying material objects (n = 29). The two dominant patterns are explained in greater detail.
First, many posts included portraits displaying one or multiple wearables (Figure 2). The surroundings in these posts were usually a big city, the seashore, or at home, intended to display wearables as inspirational daily life captures. Commercial elements usually appeared only in brand tags attached to the micro-influencers' clothing. The images were generally accompanied by short captions with single emojis and brand tags; they rarely featured commercially descriptive elements. An interesting finding was that micro-influencers tended to tag a broader variety of brands, and they also tended to tag public relations agencies in their posts. Second, we identified that micro-influencers posted carousels displaying daily life captures. These posts often included photos and videos representing a day or a week in the influencers' lives, and commerce often appeared as tags in what the micro-influencers were wearing. Such posts rarely included long descriptive text related to commercial collaborations with brands.
Influencers
Out of 900 posts analyzed, 552 contained commercial elements (61%).Influencers were thus the category with the highest percentage of commercial postings.Of the 552 posts, 243 contained commercial captions (44%).We then calculated the percentage of visual formats, visual codes, and textual codes displayed in their commercial posts (Figure 3).The patterns were not mutually exclusive.
On a connotative level, we identified five main patterns: portraits displaying wearables (n = 309), close-ups of material objects (n = 121), social events and gatherings (n = 62), daily life captures (n = 41), and portraits applying material objects (n = 19). The two dominant patterns are explained in greater detail.
First, like the micro-influencers, influencers displayed many commercial posts showing wearables from multiple angles to highlight their taste and clothing preferences. In contrast to micro-influencers, however, influencers included more textual elements describing their taste. Second, the large number of single photos displaying material objects distinguished influencers from the other three categories (Figure 4).
In these posts, the commercial product was generally centered in the photo, with visible brand labels, but the post's context was not necessarily commercial or sales oriented.The captions of such posts rarely described a commercial collaboration, and brands were not necessarily tagged.However, a common feature of these posts was that most of the products promoted were from luxury brands, such as Chanel and Dior.Most of the captions included a single emoji or a short sentence describing the influencer's taste, such as "My favorite."In this pattern, there seemed to be a connection between the product displayed and the influencer's preferences; rather than promoting brands to others, these posts used an object to promote the influencer's profile.
Macro-Influencers
As in the other two categories, we began by identifying the percentage of commercial posts.Out of 800 posts analyzed, 371 contained commercial elements (46%).Surprisingly, macro-influencers were the category with the lowest percentage of commercial posts.Of the 371 commercial posts, 174 contained commercial captions (47%).We then calculated the visual formats, visual codes, and textual codes used in their commercial posts (Figure 5).The patterns were not mutually exclusive.
On a connotative level, we identified five main patterns: portraits displaying wearables (n = 224), social activities (n = 65), close-ups of material objects (n = 23), portraits applying material objects (n = 32), and sponsored posts with clear advertising messages about their own brands (n = 27).
Like the other influencer categories, macro-influencers had an overwhelming number of posts showing themselves posing in wearables from multiple angles in either a single photo or a carousel format (Figure 6).
However, in contrast to micro-influencers, macro-influencers often attached longer and more descriptive captions related to commercial collaborations. In some cases, the products displayed were also more visible in their posts. In contrast to the other categories, macro-influencers' photos were more "professionally" constructed, and there was often a clear advertising message. Interestingly, some posts by macro-influencers included a clear advertising message about their own brands.
Mega-Influencers
Of 678 posts analyzed, 325 contained commercial elements (48%), and 166 of these 325 posts contained commercial captions (51%).Mega-influencers thus had the highest percentage of commercial captions.Figure 7 shows the distribution of formats, visual codes, and textual codes used by mega-influencers.The patterns were not mutually exclusive.
Mega-influencers' extensive use of descriptions in their posts was an exciting finding that contrasted with the practice of influencers, who often attached personal opinions such as "My favorite" or "Love this bag."We also discovered that mega-influencers had fewer variations in brand collaboration, and many posts displayed their own brands.Such posts had a clear sponsorship message and usually included portraits followed by a close-up of the product.Like the patterns identified among macro-influencers, mega-influencers' portraits displayed wearables in a more "professional" manner (Figure 8).
Discussion
Having presented our findings on how each influencer category displays products and services, in this section we take an integrated perspective and discuss how our findings measure up against established literature in the field.Specifically, we present two main contributions to influencer marketing literature, followed by a discussion of the implications and limitations of our study and topics for future research.
First, our study demonstrated that micro-influencers and influencers display a higher number and a broader variety of commercial posts. Studies have argued that micro-influencers are less commercially active or that they facilitate greater intimacy compared with other categories (Campbell and Farrell 2020; Djafarova and Rushworth 2017; Kay, Mulcahy, and Parkinson 2020; Park et al. 2021). For example, Kay, Mulcahy, and Parkinson (2020) argued that micro-influencers appear to be more like their followers, so they tend to be more persuasive. Campbell and Farrell (2020) found that micro-influencers' recommendations seem more genuine than those made by macro-influencers, who may be viewed as more likely to "sell out." In our study, however, we found that micro-influencers had one of the highest percentages of commercial posts (56%). They also promoted a broader variety of brands and tagged public relations agencies in their posts. One explanation for our findings could be that the growth of influencer marketing has contributed to a new generation of micro-influencers who have become better established and more commercialized.
Second, this study contributes insights into the main differences in the commercial content displays of the various influencer categories.We found that micro-influencers and influencers were more likely to subtly integrate products and services into their displayed lifestyles than the other two categories.Micro-influencers' commercial posts resembled daily life captures.The captions were often short and contained limited information about brands or products.As such, we argue that social media influencers with fewer followers are more concerned about displaying products and services in "regular" settings, based on an appealing lifestyle (Abidin 2016).The motive behind such subtle commercial displays could be the desire to be a source of inspiration and enjoyment (Djafarova and Bowes 2021).These findings also align with transformational advertising, emphasizing the experience of using the featured products (Gross and Von Wangenheim 2022).
Macro-and mega-influencers are more direct in their commercial displays, often attaching long descriptive captions with detailed product information, as found in a study by Rundin and Colliander (2021).We argue that macro-and mega-influencers act as informative sources for commercial messages to a greater extent than the other two categories.In this case, they are informers who share their knowledge with others and provide informational, educational, and supportive commercial content about products and services.This type of display can be understood as informational advertising, motivated by the goal of providing rational information directly linked to the advertised brands and products.These findings have implications for influencer marketing literature, as they contribute to the ongoing conversations regarding each influencer category's role in the marketing field (Campbell and Farrell 2020;Kay, Mulcahy, and Parkinson 2020).
Managerial Implications
Our study suggests that it is essential for consumer authorities to pay attention to micro-influencers, as also found by Kay, Mulcahy, and Parkinson (2020). Micro-influencers and influencers were among the categories with the most commercial postings and the fewest commercial captions. They also posted products and services more subtly than macro- and mega-influencers. Although we could not separate hidden commercial posts from nonsponsored commercial posts, the results indicate a lack of commercial disclosure among social media influencers who are likely in the early stages of their careers and may not have the necessary information to obey the rules and regulations. Overall, however, our results suggest that social media influencers are highly commercially active, regardless of category. Due to the increased commercialization of the influencer industry, it is important to recognize that using Instagram involves engaging with commercial content.
Limitations and Future Research
Our study has some limitations that need to be acknowledged.First, its focus was the Scandinavian market, where the populations of Norway and Denmark are just a little over 5 million citizens each.As such, fewer social media influencers are operating there, which may result in more established commercial positions among influencers with smaller follower bases.Future studies should focus on the commercial practices of micro-influencers in other countries.Second, we focused on Instagram posts but not Instagram stories, reels, or channels.Because these other formats are popular among social media influencers, investigating them in future studies could capture a more complete picture of how influencers integrate commerce into their content displays.Third, we focused on Instagram because it is a dominant platform, but many influencers operate on multiple platforms.Comparing the commercial content displayed on other social media platforms versus that on Instagram would be a relevant goal of future studies.
Conclusion
This study investigated how different influencer categories on Instagram display commercial products and services. We provided two main contributions to the influencer marketing literature. First, we found that micro-influencers and influencers had a higher number and a broader variety of commercial posts than macro- and mega-influencers. Second, we identified differences in each influencer category's method of displaying products and services: micro-influencers and influencers often integrate their products and services into their displayed lifestyles, while macro- and mega-influencers are more direct and informative. Our study thus contributes to the conversations about influencer categories (Campbell and Farrell 2020; Djafarova and Rushworth 2017; Kay, Mulcahy, and Parkinson 2020; Park et al. 2021), emphasizing the differences in how they display commercial content. The study also provided managerial implications by emphasizing that people's social lives involve engaging with commercial activities. Our findings contribute to discussions in the literature about the societal implications of the commercial content people consume (Borchers and Enke 2022; Hogsnes, Grønli, and Hansen 2023; Karagür et al. 2022).
Figure 1. Overview of format, visual codes, and textual codes displayed by micro-influencers.
Figure 3. Overview of format, visual codes, and textual codes displayed by influencers.
Figure 5. Overview of format, visual codes, and textual codes displayed by macro-influencers.
Figure 7. Overview of format, visual codes, and textual codes displayed by mega-influencers.
Table 2. Codes developed for the ethnographic content analysis.
|
2024-04-05T16:35:33.918Z
|
2024-04-02T00:00:00.000
|
{
"year": 2024,
"sha1": "cc081c0f704b6646d4948d69aed0549c479b3bf6",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/15252019.2024.2316114?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "43633e710169da0dc04d427ad055c981c893a677",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
}
|
39401509
|
pes2o/s2orc
|
v3-fos-license
|
Experiences of nurses working in a rural primary health-care setting in Mopani district, Limpopo Province
Professional nurses working in rural primary health-care settings are experiencing burnout due to serious shortages of personnel. This is exacerbated by the brain drain of nurses leaving the country. Rural settings are resource constrained in terms of personnel and equipment. This results in dissatisfaction among nurses due to the unbearable working conditions which result in stress and frustration. A qualitative, explorative, descriptive study was conducted to explore and describe the experiences of nurses working in a rural primary health-care setting in the greater Letaba sub-district in Limpopo Province. Purposive sampling was used to identify the participants. Data was collected in the form of in-depth interviews. The study revealed that nurses working in primary health-care settings were experiencing emotional and physical strain as a result of the shortage of human resources. It was recommended that policies that meet the health-care needs of rural communities be developed, and that strategies to retain professional nurses in primary health-care settings be formulated.
Introduction and background of the study
The goal of the Healthy People 2010 Initiative is to reduce or eliminate health disparities in vulnerable populations, including populations with rural and minority ethnic backgrounds (Averill, 2002:624). Despite the progress made by African countries in the last decade in terms of developing national systems based on primary health-care principles, the issue of balancing the demand and supply of the health-care workforce in rural areas is still a problem. In developing sub-Saharan African countries, health issues have proved to be the most complicated and difficult policy issues to resolve (Akisola & Ncube, 2000:49-50). As a result of a lack of social amenities, attracting and retaining medical doctors and nurses has long been a problem.
The demands on the nurses in rural areas are multiple and diverse (Busby & Busby, 2001:306). Often, in rural primary health-care settings, one registered nurse is placed on duty with only a nursing assistant and no attending physician at the clinic, either during the day or at night. The primary health-care facility is managed on a daily basis by a single qualified professional nurse. This contributes to excessively high workloads and the poor performance of these nurses, which can tarnish the reputation of nurses in the eyes of the communities. There are some nurses who find job satisfaction in the greater autonomy they are afforded in these areas. They enjoy serving the communities and convert the problems they experience into challenges and opportunities. In settings where professional nurses work cohesively with other health professionals, according to Hegney, McCarthy, Clark and Gorman (2002:130) and Fuszard, Green, Kujala and Talley (1994b:38-9), it has been demonstrated that clients feel a strong ownership of their health.
In 1994, the new government of South Africa committed itself to the integration and sustainability of its programme through the Reconstruction and Development Programme (RDP). One of the objectives set at this time was that primary health care be made available to pregnant women and children younger than six years old at no cost. The aim was to later expand this service to all citizens. Ironically, there was no programme initiated to increase staff or material resources to maintain and run these health services.
Low staff ratios, high workloads and a growing population led to an increase in the utilisation of personnel with fewer skills and a decline in the quality of care offered. The working conditions caused many nurses to decide to leave the profession or to emigrate from South Africa. The supermarket approach, or the integration of services approach, contributed to the stress and burnout of rural nurses working in primary health care. The lack of both material and human resources resulted in poor performance in nursing in rural primary health-care settings.
In the Greater Letaba sub-district the workload problem still exists, with newly qualified nurses being appointed to primary health-care facilities which should be run by experienced professional nurses. These nurses work independently, fulfilling all day-to-day supervisory and managerial duties, while they are not yet skilled or knowledgeable in this sphere of service. Primary health-care nurses working in rural communities are faced with problems as they are working in areas suffering from gross shortages. In the Greater Letaba sub-district, nurses render a 24-hour service to the community and are therefore often on call for 24 hours a day, seven days a week, taking breaks at lunch and suppertime. The integration of services approach is practised in this area. The purpose of the study on which this article is based was to explore and describe the experiences of nurses working in rural primary health-care settings. This article presents the findings of the study, and their potential implications for policy implementation. It recommends how to strengthen policy implementation and service delivery in South Africa.
Research design and methods
A qualitative, descriptive and exploratory research design was used to answer the research question. The population of this study comprised professional nurses working in clinics that render either eight-hour or 24-hour services in the greater Letaba sub-district in the Mopani district in Limpopo Province. The participants were selected purposively from four clinics. Purposive sampling was used to identify the participants. The sample of participants included nurses both trained and untrained in primary health care who had worked in the primary health-care setting for more than six months. In this study data were collected by means of unstructured interviews which lasted 45 to 60 minutes each, using the direct contact approach. According to Brink (1996:158), unstructured interviews are conducted more like normal conversation, but with a purpose.
During the interviews probing questions were asked in order to elicit more information from the participants and show participants that the researcher was interested in their experiences. The researcher collected the data personally in order to ensure that it was done in a systematic manner. The interviews were recorded by means of a tape recorder to prevent loss of data, and transcripts were made of the recordings. The researcher made appointments with the participants and interviewed them while they were off duty at the clinics where they worked, or at their homes. The researcher ensured that the interviews remained consistent by asking one broad central research question:
What are the experiences of nurses working in rural primary health-care settings?
Data were collected until saturation was reached. Saturation was reached by the ninth participant. The researcher continued with another three interviews, hoping that new information might come up. In the end the sample comprised eleven female professional nurses who had worked in the clinic for more than six months. The interviews were held in Northern Sotho as the participants felt more comfortable communicating in their mother tongue. The researcher then translated the interviews into English.
Credibility
According to Polit and Beck (2004:36), credibility is an aspect of research that is achieved when confidence in the truth of the data and interpretation is attained. De Vos (1998:312) notes that credibility is established if it is demonstrated that the research was conducted in a manner that ensures that the phenomena were accurately identified and described. In this study the researcher ensured the credibility of the research through immersion in the field, making use of a variety of sources of data, and building trust and rapport with the participants.
Transferability
Applicability is an important factor in transferability, and allows the researcher to present and describe data in such a manner that another person can compare them to the findings of other studies (Lincoln & Guba, 1985, in Klopper, 1998:316). Thick descriptions of the methodology were used to describe the experiences of the primary health-care nurses in rural settings to enable the reader to understand the phenomenon to the extent that he/she would be able to transfer the results to other settings.
Dependability
Dependability is determined by the extent to which the findings of the study would be consistent if the enquiry were replicated with the same subjects in a similar context (Lincoln & Guba, 1985, in Klopper, 1998:316). Prolonged engagement with participants increased the dependability of the research.
Confirmability
To achieve confirmability, it is important that the researcher remains neutral. The researcher should not be biased and his/her perspectives and motivations should not impact on the study. In this study the researcher obtained valued information through prolonged contact with the participants, observing them during data collection and decreasing the distance between the researcher and the participants' conversation, without allowing bias or own perspectives to influence the conversations. An audit trail was done by giving the tapes to an independent coder who also listened to the interviews. A meeting was held with the independent coder to compare and verify the findings. The supervisors of the study also listened to the tapes to verify the conclusions, interpretations and recommendations.
Ethical considerations
Permission to conduct the study was granted by the Limpopo Province Department of Health Research Committee. Once the letter of permission was received, copies were sent to the manager of primary health care in the Greater Letaba sub-district, as well as the participants. The researcher informed the participants regarding the purpose, methods and procedure of the study. The participants made an informed choice to take part in the study, and did so freely and voluntarily. They were asked to sign a form to indicate that they had given their informed consent to be interviewed, and were informed that they could refuse to answer any question or discontinue their participation at any time (Averill, 2002:656). The privacy of the participants was respected throughout the study and all information collected during the study was kept strictly confidential (Polit & Hungler, 1991:35). The participants' anonymity was ensured by substituting their names with numbers or codes. Participants were treated fairly and any unclear information was clarified for them during the study (Polit & Hungler, 1991:35).
Discussion of findings
According to Polit and Hungler (1991:460), data analysis is a systematic organisation of data synthesis and testing of the research hypothesis using those data. In the study qualitative analysis of the data was done. Similar topics were clustered together, categorised and sub-categorised, and common themes identified. The Tesch method of data analysis was used (De Vos, 2002:318). Data from the interview transcripts were grouped into five categories: emotions, constraints in caring, infrastructural constraints, relationship constraints and personal issues.
Category 1: Emotions
The following themes emerged from the category 'emotions': Participants expressed emotions such as anger, sadness, fear and suffering. They also indicated that in certain instances they felt frustrated and hopeless. They said that working 24-hour shifts, being on duty from 07:00 to 19:00 and on call from 19:00 to 07:00, was strenuous, especially because patients did not understand that when they were on call they dealt with emergencies only. Patients did not want to stand in long queues during the day. They stayed at home and pitched up during the night for minor ailments.
Participants experienced emotions such as anger at management for introducing policies such as "Batho pele" (people first) and the patients' rights charter. They said that they understood and appreciated that those policies were formulated to protect patients' rights. However, they felt left out as patients often quoted those policies to force them to work long hours and attend to minor complaints after hours. The following was said:
"I know that 'Batho pele ' principles and P a tie n ts ' R ig h ts C h a rte r w ere form ulated to bring harmony between the client and us but sometimes I feel lik e th ey w e re fo r m u la te d f o r the community to oppress us fo r even if they come during the night to fetch condoms. One cannot send them away because o f the Batho pele principles. "
The Batho pele principle emphasises that clients must have access to services and that clients must be treated with courtesy (White Paper on Transforming Public Service Delivery). The participants verbalised that those two principles stated that every client/patient must be assisted and not turned away. Participants reported that they were the ones who experienced the strenuous part of the job while policy-makers who formulated "Batho pele" were in their offices dictating to them what to do and what not to do; thus they experienced it as an unfair practice on their part. Busby and Rauh (1991:19) state that nurses working in rural areas need to have a good relationship with community members as rural residents are acquainted with one another. In this situation, information is quickly disseminated among community members, especially when the local news concerns matters of life, illness and death. Turning away a patient at night may cause a stir and dissatisfaction among clients and consequently among members of the community. In turn, when patients with minor ailments come to the facility for consultation at night they find nurses exhausted by the day's routine. According to Wiens (1990:16), lack of time due to inadequate staffing precludes quality patient care. Chalmers, Bramadat and Andrusyszyn (1993:113) mention that a lack of rural human resources can impose an additional burden on nurses, thus contributing to anger, sadness, suffering and frustration, which lead to high staff turnover.
Category 2: Infrastructural constraints
The participants expressed that they experienced infrastructural constraints, including a lack of basic necessities such as accommodation, communication systems, water and electricity. The participants felt frustrated about the shortage of water. Safe water is a basic need and should be made available to all. Managers should respond promptly to requests made by professional nurses at rural primary health-care facilities, as they serve the community in isolated areas and are thus the eyes and ears of their managers.
As I am talking right now we have not had water for the past three weeks; patients assist by bringing water with small tubs. Families are expected to bring water along when they bring a woman in labour. The problem has not been attended to despite repeated requests. The toilets are a big health hazard when we are without water. In this situation how are we expected to teach the community about a safe water supply and usage?
Van der Merwe (1999:1273) states that people in rural areas have limited access to electricity and water. Dennill et al. (1995:5) conclude that primary health care includes a supply of safe water and sanitation. Water shortages increase the likelihood of the community's contracting infectious diseases such as cholera, typhoid and diarrhoea.
In support of the lack of electricity, participants expressed the following problems: "We have to walk by torchlight at night from the nurses' home to the clinic and we can be bitten by snakes." "I have to suture a woman's perineum by candlelight." "The candle can fall and the place be set alight. Candles have many hazards." Participants stated that it was difficult to perform any task without electricity and that health services came to a standstill without lights. The participants reported that maternity cases were attended to by candlelight and that could hamper the delivery of quality care. To cut and suture episiotomies by candlelight may lead to complications that could be harmful to patients. This is supported by Lipinge and Botes (2002:22) who say that rural communities lack essential physical and social structures including water, communication and electricity.
Inadequate accommodation was a general problem experienced by the participants who rendered a 24-hour service to the community. The accommodation arrangements at rural primary health-care facilities were inadequate. Two nurses shared a four-roomed house. It was difficult for them to have members of their own family with them.
The millions of rand spent by the South African Department of Health to recruit nurses to rural areas were wasted if management of the facility did not have a family-friendly policy (Hegney et al., 2002:132).It was evident that participants believed that this kind of policy did not exist at the nurses' facilities, as their children or spouses were not allowed to visit them.Nurses were forced to live in staff accommodation of a poor standard due to the high cost of private rental.The issue of accommodation was challenging, and support was needed for nurses in this area.Some of the nurses expressed the problem of communication.One of them said:
"I f e a r w hen th e re is no telecommunication, what might happen if an emergency case can come, who would need urgent tran sportation to hospital. "
Most primary health-care facilities in Letaba sub-district in the Mopani region used a two-way radio communication system as a means of communication. According to the Department of Health (South Africa, 2001:14), there should always be a working means of communication, such as a radio or telephone, in order to manage a primary health-care facility effectively. According to Doherty and Price (1998:315), ambulances based at hospitals responded to calls from members of the community or transferred the call to other hospitals some distance away. In an evaluation of rural ambulances, Doherty and Price (1998:315) found that radiophones were the principal means of communication between clinics and ambulances. Van der Merwe (1999:1274) states that the telecommunication systems in rural areas are poor. Chalmers et al. (1998:113) state that nurses are expected to identify needs and determine how to meet them. Nurses should create a climate of healing for their clients. It is therefore important that their telecommunication system be improved for better and effective communication when rendering care.
Category 3: Constraints in caring
The following themes emerged from the category 'constraints in caring': shortage of staff, long working hours, poor maintenance of aseptic techniques and inadequate supply of drugs. The participants named staff shortage as a barrier to the provision of adequate health care to their clients, as it exhausted them. One participant made the following remarks:
"We are terribly understaffed, we work very hard, and most o f the time one is totally exhausted. When one nurse is on m aternity leave or sick leave, there is no replacement, we have to cover her p a rt o f the work. This is tough. "
Participants perceived the shortage of staff as their greatest challenge, a challenge that required them to use all their knowledge and various skills. At rural primary health-care facilities, professional nurses managed large numbers of patients every day. They had to assess, plan, implement and evaluate treatments, as well as conduct home visits. Due to the shortage of staff nurses, they sacrificed and worked long hours, which contributed to fatigue, stress and burnout. A lack of adequate staffing and organisational resources appeared to be most characteristic of nurses practising in rural areas (Muus, Stratton, Dunkin & Juhl, 1993:39).
Akisola and Ncube (2000:52) add that patients expect the best treatment, no matter what the staffing situation is.In some primary health-care facilities the number of professional nurses employed remained the same despite the increase in the use of their services.In a study of the effect of free health services on primary health-care nurses in the Vhembe district of Limpopo Province, Nemathaga (2002:39) found that the introduction of free health services was a factor that contributed to the shortage of nursing staff.
The shortage of staff and infrastructure problems, such as water shortages, affected the provision of quality health care. One of the participants said:
"We stay three to four days without water in the clinic, yet we are supposed to wash hands between patient examinations."
The inability to maintain aseptic techniques demoralised professional nurses working in rural primary health-care settings. They feared that they might end up being ill due to poor hygienic practices and walking long distances to fetch water when they were off duty. Participants mentioned inadequate supplies of drugs as a constraint to caring for clients. According to these participants, the supply of drugs did not cover the number of clients at the facility. The supply was exhausted before the next order was due. According to Thornton (1996:495), the Rural Health Clinic Service Act was passed in 1978 in order to improve primary health care for those who were medically underserved. Participants found that patients feared the consequences of the delayed delivery of chronic medication and felt that clients suffered when the drugs were not available. Participants mentioned that chronic patients feared that if medication from the hospital was not delivered in time they might experience problems. Participants mentioned that they had discovered that chronic patients lent medication to each other, and if one ran out of medication they even reduced the strength of their dose so that they would not run out of medication.
Category 4: Relationship
The theme that emerged from this category was the participants' separation from their families. Participants who rendered 24-hour services to the community were concerned that they could not be with their children. Participants felt that their children needed their mothers' guidance, as well as assistance with school work and other reassurance. Without their mothers' presence, children's progress at school often declined. In a study of the job satisfaction of nurses in rural Australia, Hegney and McCarthy (2000:348) found that nurses worked in rural areas primarily for family and social reasons.
According to Hegney et al. (2002:132), rural nurses would choose to remain in rural areas if managerial structures recognised that they had roles and responsibilities, such as child care and housework, apart from their waged work. Flexible scheduling not only meets patients' needs, but also attracts nurses who cannot work traditional nursing shifts (Fuszard, Green, Kujala & Talley, 1994a:26). Separation from families for long periods of time could negatively affect nurses' marriages and their relationships with their children. The kinds of feelings expressed by the participants were often suppressed as a result of the culture of the work environment. Professional nurses had to be responsible and display an accurate image of the profession, and the life of the patient had to be their first consideration (Gattuse & Bevan, 2000:893).
Category 5: Personal issues
The following sub-categories emerged from the category 'personal issues': Participants' negative experiences and participants' positive experiences.
In a study of rural nurses, Fuszard et al. (1994:42) found that salary compression is an acute problem in rural settings.The participants appreciated the salary they received, but said that part of the salary was usually spent on work-related items, such as phone calls to ambulances.
One participant attributed her lack of a full uniform to wear every day while on duty to an inadequate uniform allowance.Participants reported having to buy their own uniforms with their salaries.Stratton, Dunkin, Juhl, Ludtke and Gellar (1991:30) support this statement by indicating that working in rural areas makes it difficult for nurses' spouses to supplement their families' income.
Participants felt that providing 24-hour services for seven days in succession was tiring and as a result of calls during the night, they were not getting enough sleep.Unresolved issues that affected nurses were patient load, job stress, length of shifts, professional salaries versus hourly wages, provision of health care, career mobility and professional autonomy (Purnell et al., 2001:179).
Participants were very positive when speaking of being provided with in-service training on aspects such as tuberculosis (TB) and victim empowerment. According to Fuszard et al. (1994:21), rural administrators must develop their staff, so that they will have the employees they need to fulfil their mission. The effectiveness of such a strategy is supported by Busby and Busby (2001:308), who state that a continuing education programme is needed to prepare nurses to teach the community about safety issues, health promotion, illness, and prevention and management of chronic health problems in order to reduce their need for emergency services. Fuszard et al. (1994:36) state that professional growth is the responsibility of institutions as well as professionals, regardless of the setting.
Participants were satisfied with the delivery of medication for minor ailments and said they never ran out of stock, despite the large number of clients visiting the facility daily. According to Fuszard et al. (1994:38), some facilities have a pharmacy that supports them and is available virtually 24 hours a day. Hegney et al. (2002:131) add that a cohesive and effective team is facilitated by good communication.
Recommendations
• Incentives
The promotion of professional practice incentives and clinical ladders are strategies that should be researched, disseminated and implemented in primary health-care facilities. A possibility is to create agency nurses for facilities from their own nursing staff. These nurses would do any extra work at the agency rate, thereby limiting unnecessary leave and lowering absenteeism rates. This would also ensure a higher standard of client care and build the morale of the staff, as they would be paid for the extra work rather than being refunded for it with extra days off.
• Scheduling of work hours
Management should develop an overtime policy for their health-care facilities. This would supplement the professional nurses' salaries and assist with filling the gap in available human resources. The option of flexible scheduling should be created in order to meet the needs of the clients and attract professional nurses who cannot work traditional nursing shifts. Night duty should be introduced in place of a call system.
• Water supply
Ground water should be accessed by means of boreholes at every primary health-care facility to supply water for use at the clinic and nurses' home.A standby tank of water should be kept full at all times.
• Accommodation
More houses should be built in order to accommodate professional nurses with families.A more family-friendly facility would meet the needs of professional nurses and the facility would be more likely to retain their services.
• Communication
Telephones should be installed at all facilities and a cell phone should be supplied by management with a limited number of units per month to be kept on hand for the use of nurses in cases of emergency.
• Electricity
A solar system should be installed at all facilities to supplement the electricity supply during power cuts.
Conclusions
This study revealed that nurses working in the primary health-care facilities in the rural areas of the Greater Letaba sub-district in the Mopani district were experiencing emotional and physical strain as a result of a shortage of human and material resources, work overload, long working hours and infrastructural constraints. The areas with which they were dissatisfied outnumbered those they found satisfactory. They expressed concern about the lack of infrastructure and felt that their problems did not receive consideration when raised with the management of the facilities. Role overload caused a high rate of burnout. A great deal needed to be done if primary health-care services were to become acceptable, accessible and available to the community. The problem of human and material resource shortages needed to be dealt with. The emotional and physical strain nurses experienced in rural primary health-care facilities created obstacles to the provision of high-quality care and caused tension in the relationships between nurses and clients. Clients' rights had to be respected, but some of these rights were a burden to the nurses. Professional nurses also had rights which had to be respected by the clients and the community. The smooth running of health-care services was dependent on the application of Batho Pele principles by health professionals, clients and the community.
A major challenge for the government is to develop specific policies regarding health programmes and services that meet the emerging health-care needs of rural communities. Administrators in primary health-care settings in rural areas should bring the needs of their facilities to the attention of those who influence health-care policy. Strategies to retain professional nurses in primary health-care settings must be formulated.
List of references
problems: " We have to walk by torchlight a t night from the nurses' home to the clinic and we can be bitten by snakes." "I have to su tu re a w o m a n 's p e r in e u m by candlelight." "The candle can fa ll and place be set alight.Candles have many hazards." AKISOLA, HY & NCUBE, E 2000: Rural health care provision in Botswana: The context of nursing, a practice and the expanded role of the nurse.Africa Journal of Nursing and Midwifery.1:49-55.AVERILL, JB 2002: Voices from the Gila: Health care issues for rural elders in so u th -w estern M exico.Jo u rn al o f Advanced Nursing.40(6):654-62.B R IN K , H I 1996: Fundam entals of research m ethodology fo r h ealth professionals.Cape Town: Juta.BUSBY, A & BUSBY, A 2001: Critical access hospitals: Rural nursing issues.jona,31(6):301-10.BUSBY, A & R A U H , JR 1991: Implementing an ethics committee in rural institutions.Jona, 21 (12): 18-25.CHALM ERS, KI; BRAMADAT, IJ & A N D R U SY SZ Y N , M A 1998: The changing environment of community health p ractice and ed u catio n : P ercep tio n s of s ta ff n u rses, administrators and educators.Journal of Nursing Education.37(3): 109-117.DENNILL, K; KING, L; LO C K , M & SW A N EPO EL, T 1995: Aspects of primary health care.Halfway House: Thomson.D E V O S , AS 1998: R esearch at grassroots level: A primer for the caring profession.Pretoria: Van Schaik.D O H ER TY , J & P R IC E , M 1998: Evaluation of rural ambulance service.World Health Forum.19(3):315-8.
|
2017-06-06T01:12:10.035Z
|
2008-09-28T00:00:00.000
|
{
"year": 2008,
"sha1": "b41f0f7763a6099a0e86bff951f169a3a4cae3a4",
"oa_license": "CCBY",
"oa_url": "https://curationis.org.za/index.php/curationis/article/download/984/921",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5b86373275fd884936eba3050cb54fafd6c7023f",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
15325699
|
pes2o/s2orc
|
v3-fos-license
|
Predicting Novel Tick Vectors of Zoonotic Disease
With the resurgence of tick-borne diseases such as Lyme disease and the emergence of new pathogens such as Powassan virus, understanding what distinguishes vector from non-vector species, and predicting undiscovered tick vectors is an important step towards mitigating human disease risk. We apply generalized boosted regression to interrogate over 90 features for over 240 species of Ixodes ticks. Our model predicted vector status with ~97% accuracy and implicated 14 tick species whose intrinsic trait profiles confer high probabilities (~80%) that they are capable of transmitting infections from animal hosts to humans. Distinguishing characteristics of zoonotic tick vectors include several anatomical structures that facilitate efficient host seeking and blood-feeding from a wide variety of host species. Boosted regression analysis produced both actionable predictions to guide ongoing surveillance as well as testable hypotheses about the biological underpinnings of vectorial capacity across tick species.
Introduction
Ticks transmit a greater diversity of pathogenic agents than any other arthropod (Durden, 2006) and are responsible for vectoring at least 30 zoonotic infectious diseases worldwide (GIDEON, 1994). With global warming, tick-borne diseases are projected to increase even more drastically (Levi et al., 2015). Unsurprisingly, a large volume of research is dedicated to understanding tick biology, with substantial bias towards a fraction of tick species known to vector pathogens from animals to humans (zoonotic vectors). Among the hard-bodied ticks (Family Ixodidae), the most species-rich genus, Ixodes, contains 244 species, of which 34 are known zoonotic vectors. What enables these few species to acquire and transmit zoonotic disease? Identifying which features distinguish effective zoonotic vectors from non-vector species is essential for understanding what drives vectorial capacity, and for developing preemptive approaches to reducing tick-borne diseases in humans. We applied a machine learning method called generalized boosted regression (Elith et al., 2008; Ridgeway, 2013) to identify which features best predict tick species capable of transmitting zoonotic diseases. This machine learning algorithm determines which features are most important for correctly predicting a response variable (here, a binary variable designating whether the tick species is a zoonotic vector) by building thousands of linked trees that successively improve upon the predictions of the previous tree. This model-free approach does not rely on distributional assumptions and is ideal for high-dimensional ecological data containing hidden, nonlinear interactions, and non-random patterns of missing data (Han et al., 2015; Di Marco & Santini, 2015). Our goal was to determine which tick species might harbor undiscovered zoonotic pathogens, and to identify traits of Ixodes ticks that best predict their status as vectors of human zoonotic disease.
Data Collection
Based on nomenclature from a standard reference (Guglielmone et al., 2014), we searched published literature for species binomials of 244 ticks of genus Ixodes. The response variable for our analysis was a binary score assigned to each species (0 or 1) based on their zoonotic vector status as established by the GIDEON database (Berger, 2005), which we used to provide the public health consensus on the status of each tick species as vectors for at least one zoonotic disease. From peer-reviewed primary literature, we collated a total of 104 traits across three life stages (larvae, nymph, and adult) per tick species. These traits can be partitioned into four categories: anatomy, biology, geography, and pathology. All anatomical features were standardized to millimeters.
Analysis
Using similar approaches applied successfully in previous studies (Han et al., 2015), we applied generalized boosted regression via the gbm package (Ridgeway, 2013) in R (Team, 2014). We tuned the model to build an ensemble of 30,000 trees using 10-fold cross-validation with a shrinkage rate of 0.00025 and an interaction depth of 3. Boosted regression models accommodate missing data by treating missingness as a value and by using surrogate splits (Ridgeway, 2013), which draws from the correlation structure among trait variables. However, we also set a minimum threshold of 1% data coverage across tick species as a criterion for inclusion in the model in order to remove those variables with extremely low coverage. The difference between using all traits and removing those with less than 1% coverage was negligible to model performance. Data were randomly partitioned into training (70%) and test data (30%). Prediction accuracy was measured by AUC.
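The authors fit this model with the gbm package in R. As a rough illustration only, the following Python sketch mirrors the same workflow (70/30 split, small shrinkage, many trees, AUC on held-out data) with scikit-learn's histogram gradient boosting, which also tolerates missing trait values; the file and column names are assumptions, not the authors' data.

```python
# Rough Python analogue of the boosted-regression workflow described above;
# the original analysis used gbm in R, and all column names here are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score

ticks = pd.read_csv("ixodes_traits.csv")   # hypothetical trait table, NaNs allowed
X = ticks.drop(columns=["species", "is_vector", "citation_count"])  # intrinsic traits only
y = ticks["is_vector"]                      # 1 = known zoonotic vector

# 70/30 train/test split, mirroring the partition described in the text
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)

# Small learning rate with many boosting iterations, loosely following the
# shrinkage and tree-depth settings reported for the R gbm model
model = HistGradientBoostingClassifier(
    learning_rate=0.00025, max_iter=30_000, max_depth=3)
model.fit(X_train, y_train)

# Prediction accuracy measured as AUC on the held-out test set
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Test AUC: {auc:.3f}")
```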
Study Bias
While many epidemiological metrics (e.g., prevalence of tick-borne disease in humans) are biased by study effort (i.e., the amount of healthcare or research spending per country), traits describing intrinsic vector biology (e.g., body size, clutch size) are less subject to the same biases.
However, if variation in study effort across tick species leads to significant differences in data coverage (i.e., biological features are only known for vectors), this type of bias can affect model results. To diagnose this possible issue, we produce a plot showing the probability of a tick species being a novel vector as a function of citation count (Figure 1).
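A minimal sketch of this diagnostic, continuing the hypothetical scikit-learn example above, is shown below; it simply plots predicted vector probability against citation count, with the "citation_count" column and color scheme assumed for illustration.

```python
# Minimal sketch of the study-bias diagnostic (cf. Figure 1): predicted vector
# probability versus citation count; continues the earlier hypothetical example.
import matplotlib.pyplot as plt

ticks["p_vector"] = model.predict_proba(X)[:, 1]      # model and X from the sketch above
colors = ticks["is_vector"].map({0: "grey", 1: "red"})  # known vectors highlighted

plt.scatter(ticks["citation_count"], ticks["p_vector"], c=colors, alpha=0.6)
plt.xscale("log")   # citation counts are typically heavily skewed
plt.xlabel("Citation count (study effort)")
plt.ylabel("Predicted probability of being a zoonotic vector")
plt.title("Checking whether predictions simply track study effort")
plt.show()
```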
Results
Our model was able to distinguish vector from non-vector species with 97.8% accuracy. The best predictors of zoonotic vector status included host breadth (number of orders and families that the tick feeds on); tarsus I length of larvae; capitulum lengths of larvae, nymph, and female adult stages; female adult scutum length; clutch size; and female body length (Figure 1).
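In gbm, such a predictor ranking comes from the model's relative-influence summary. A rough analogue in the hypothetical scikit-learn sketch above is permutation importance on the held-out data, as in the snippet below; the specific trait names it would print depend entirely on the assumed input table.

```python
# Rough analogue of gbm's relative-influence ranking: permutation importance
# on the held-out test data from the earlier hypothetical sketch.
from sklearn.inspection import permutation_importance

result = permutation_importance(model, X_test, y_test,
                                scoring="roc_auc", n_repeats=20, random_state=0)
ranking = (pd.Series(result.importances_mean, index=X_test.columns)
             .sort_values(ascending=False))
print(ranking.head(10))   # e.g. host breadth, larval tarsus I length, ...
```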
Compared to non-vectors, tick species that vector zoonotic disease tend to have several distinctive characteristics. These zoonotic vectors have wider host breadth, feeding on host species from five or more families and four or more orders. In addition, the larvae possess shorter tarsus I lengths (the length of the first segment of the first pair of legs; <0.18mm), whereas those of non-vectors are generally longer. Larvae and adult female ticks have shorter capitulum lengths than non-vectors, but nymphs exhibit the opposite pattern, with nymphs of known zoonotic vectors having longer capitula than non-vectors. Species that are known zoonotic vectors have capitulum lengths of 1.00mm, 0.125mm, and 0.400-0.800mm for adult females, larvae, and nymphs, respectively. Lastly, the adult females have larger clutches (>1000 eggs), greater scutum length (>1.0mm), and longer bodies, both while engorged (>6.0mm) and unengorged (>2.5mm) compared to adult females of non-vector species.
Of 244 Ixodes species, 34 species are currently recognized as vectors of human diseases (Berger, 2005). Our model identifies 14 additional potential zoonotic vectors, which share similar trait profiles with those 34 species. Among these 14 predicted vectors, 10 species are recognized by primary literature as possibly parasitizing humans (Table 1). The remaining 4 species, I. canisuga, I. trichosuri, I. eldaricus, and I. aragaoi, are novel vectors that have not yet been identified as human parasites but reflect >80% probability of vectoring a zoonotic pathogen.
We found that intrinsic traits poorly predicted which tick species were well studied. Although tick species that are known vectors for at least one zoonosis have higher citation counts, there are also several tick species (vectors and non-vectors) that are reasonably well studied with both low and high probabilities of being zoonotic vectors. Taken together, these results confirm that the tick trait profile reported here reflects that of a zoonotic vector rather than that of a well-studied tick.
Discussion
Predicting tick vectors for future zoonotic diseases is a critical step toward disease prevention and will rely on understanding what features enable ticks to be better human disease vectors. Here, we report a profile of tick traits that distinguish zoonotic vectors from non-vectors with more than 90% accuracy, and we identify several tick species with high probabilities of vectoring one or more zoonotic diseases as potential targets for increased investigation and surveillance.
We found that the most important predictor of zoonotic vector status in Ixodes ticks was the diversity of vertebrate species that the tick parasitizes. This finding is consistent with the general principle that the probability of vectoring a zoonotic disease correlates directly with host breadth (Davies & Pedersen, 2008;Woolhouse & Gowtage-Sequeria, 2005). This pattern has been widely postulated for vector groups such as mosquitoes, and we find evidence that this is also true for Ixodes ticks.
Several anatomical features were highly predictive of vector status. Larvae of vector species tend to have shorter tarsus I lengths (length of the first segment of the first pair of legs) compared to non-vectors. The larval stage is important because acquisition of zoonotic pathogens (e.g., Lyme spirochetes) often occurs during the blood meal at this stage (Matuschka et al., 1992). Moreover, if infected at this stage, larvae have two potentially infectious bites through which to transmit the pathogen compared to one bite if it is infected as a nymph. Tarsus I contains many important sensory organs that promote important behaviors across all 3 life stages, including seeking ideal habitat, host-seeking, and mate-seeking behaviors. In addition, tarsus I contains Haller's organ, a vital organ for determining host location, detecting host odors and pheromones, and serving other environmental sensory functions (Sonenshine & Roe, 2013). We are unaware of studies examining allometric scaling patterns of tarsus I and/or Haller's organ, but from these results we speculate that if the length of tarsus I is correlated to host-seeking at the larval stage, a shorter tarsus I may cause larvae to be less selective about hosts or environments, thereby causing vectors at this life stage to be less selective about where and from whom the first blood meal is acquired. Similarly, if Haller's organ correlates directly with the length of tarsus I, larvae with shorter tarsus I lengths would be less selective about vertebrate host species. Reduced host selectivity could lead to more generalized feeding preferences across a wide diversity of host types, increasing the possibility of contact with hosts infected with zoonotic pathogens.
Another important trait distinguishing vectors from non-vectors was the capitulum length in larvae, nymphs, and adult females. Capitulum length is determined by hypostome length and salivarium size in ticks. The hypostome is the ratchet-like anchor within the capitulum that is inserted into the host body (Richter et al., 2013), and the salivarium is a repository that collects and delivers tick saliva. Tick saliva contains bioactive molecules responsible for facilitating blood meals as well as zoonotic pathogens such as Borrelia burgdorferi (causative agent of Lyme disease), Francisella tularensis (causative agent of tularemia), and others (Reuben Kaufman, 2010). We found that capitulum lengths were shorter in adult female and larval vectors than those of non-vectors, and that capitulum length was longer in nymphs of zoonotic vectors. This pattern is consistent with widely documented patterns of vector competence of Ixodes species that transmit pathogens that cause anaplasmosis, babesiosis, and Lyme disease: of the three developmental stages, the nymphal stage is disproportionately responsible for human transmission (Matuschka et al., 1992). With softer substrates like those encountered in human and other mammal hosts, ticks benefit from a more secure anchor conferred by deeper penetration of mouthparts that comprise the capitulum (Richter et al., 2013). Secure attachments lead to increased feeding times, which increase the probability of successful transmission for tick-borne diseases such as Lyme disease (Kazimírová & Štibrániová, 2013). Thus, capitulum length at the nymphal stage may be a reliable indicator of the vectorial capacity of Ixodes tick species for zoonotic pathogens.
Our analysis also revealed that tick vectors have a fecundity advantage over non-vector ticks. This profile supports the fecundity advantage model, which asserts that larger females produce larger clutches (Shine, 1988). Specifically, body size, scutum length, and clutch size of adult females are all larger for zoonotic vectors compared with non-vectors. Larger body sizes enable the ingestion of larger blood meals from hosts, leading to greater available resources for egg production (Ford & Seigel, 1989). Combined, the trait profile produced by our analysis shows that zoonotic tick vectors are most likely to be species in which adult females produce a larger number of eggs, which develop into larvae that may feed on a greater diversity of species. These larvae develop into nymphs whose capitula allow for more efficient and longer feeding times on soft-bodied hosts compared with non-vector species, which produces larger adult females with greater fecundity.
In addition to identifying a profile distinguishing tick species which are zoonotic vectors from non-vectors, our model identified 14 Ixodes tick species with 80% probabilities of being undiscovered vectors of zoonotic disease (Table 1). The majority of these species reside in Nearctic or Palearctic biomes, and all of them are habituated to forest or grassland habitats (Guglielmone et al., 2014). Some of the ticks we identify are suspected in the primary literature as being likely disease vectors, but these species are not currently recognized by the public health community as zoonotic vectors per se. For example, one species, Ixodes acuminatus, is capable of transmitting Borrelia burgdorferi sensu lato, though it is not considered an important vector for human disease in nature, perhaps due to low contact with human populations (Morse, 1995). The saliva of another species, Ixodes rubicundus, can cause paralysis in sheep (Fourie et al., 1989), but there is no record of this species transmitting infectious disease to humans. Given that many of the 14 predicted vector species are understudied and some of them are already suspected to have potential health consequences for humans (Table 1), our study offers new utility for identifying tick species whose intrinsic traits suggest they should be surveillance targets.
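The probabilities reported above come from the study's trait-based model. As a purely illustrative sketch of how such species-level probabilities can be generated from a trait table, the snippet below fits a classifier to known vectors and flags unrecognized species above a threshold; the input file, column names and model choice (gradient-boosted trees) are hypothetical stand-ins, not the authors' actual pipeline.

```python
# Illustrative sketch only: a trait-based classifier that outputs, for each tick
# species, a probability of being a zoonotic vector. File name, column names and
# the model choice are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

traits = pd.read_csv("ixodes_traits.csv")          # hypothetical trait table
feature_cols = ["host_diversity",                  # vertebrate host richness
                "tarsus_I_length_larva",
                "capitulum_length_larva",
                "capitulum_length_nymph",
                "scutum_length_female",
                "clutch_size"]
X = traits[feature_cols]
y = traits["known_vector"]                         # 1 = documented zoonotic vector

model = GradientBoostingClassifier(random_state=0)
model.fit(X, y)

# Probability of vector status for every species; currently unrecognized species
# above 0.8 would be flagged as candidate surveillance targets.
traits["p_vector"] = model.predict_proba(X)[:, 1]
candidates = traits.loc[(traits["known_vector"] == 0) & (traits["p_vector"] >= 0.8),
                        ["species", "p_vector"]].sort_values("p_vector", ascending=False)
print(candidates)
```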
In addition to informing the biological basis by which some ticks vector zoonotic pathogens, our study underscores the crucial importance of basic research on ticks and other arthropod vectors, since understanding the biological basis of transmission ecology will rely fundamentally on understanding the innate characteristics distinguishing vector from non-vector species.
Conceptual model of start-up business for strengthening Madrasah funding using Dynamic System approaches
A start-up business is generally developed with limited finances, so it grows slowly. This is also the case for the Cassava Meatballs start-up business run by Madrasah Al-Binaa, an Islamic non-formal boarding school. Therefore, some alternative tactics are required to provide liquidity and generate profits that will encourage the start-up business to grow, develop and be sustainable. The start-up business contains some trade-offs, so this study uses a Dynamic System approach to simulate the interrelationships among the variables. This study aims to develop a conceptual model of the start-up business run by Madrasah Al-Binaa. It is expected that the conceptual model can represent the real system of the Madrasah start-up business. The conceptual model development begins by observing the phenomena of the existing start-up business process; a causal loop diagram and its equations are then built. The structure of the developed conceptual model consists of seven elements, i.e. customers, products, liquidity, debt, reputation, workers and household spending. The research method is descriptive analytics. The result shows that the developed model differs from the generic model: in the developed conceptual model, the entrepreneurs are not paid a private payoff from profit, but household spending is considered instead.
Introduction
Education is a joint responsibility of the government and society. A Madrasah is a private Islamic boarding school, a form of schooling that has long been developed in Indonesia. The Madrasah has a very strategic role in education, especially in giving children in remote areas access to education. Besides a formal curriculum, the Madrasah also provides religious knowledge to build an Islamic character of faith and devotion to Allah Subhanahu Wata'ala. Therefore, its existence in society is needed to accelerate improvement in the quality of human resources.
The quality of education should be supported by sufficient funding. The limited funds owned by the Madrasah encourage the owner to find alternative sources of funding so that the Madrasah can operate independently. Madrasah Diniyah Albinaa is a Madrasah located at Jl. Raya Arjasari km 7 Kampung Sukarasa RT 01/RW 14, Arjasari District, Bandung. Its human resources consist of 7 Ustadz, 20 women joined to the ta'lim group, and 80 students. Until now, Madrasah Diniyah Albinaa has relied on funds for its activities from infaq, sodaqoh and zakat given by a limited number of donors.
Against this background of meeting the funding for its activities independently, over the past 2 years the Madrasah has tried various start-up businesses, e.g. Catfish Farming, Catfish Crackers, Cassava Meatballs and Cilok. Some of the start-up businesses have been able to survive, but the Catfish Farming and its processed products are no longer produced. Challenges faced include marketing and product quality. Recently, the start-up business that gives hope of growing and developing is the Cassava Meatballs business. These products are made of Cassava from local Arjasari agricultural production, which is quite abundant. The Cassava is sold very cheaply, i.e. Rp 800–Rp 1,000/kg, while the Cassava Meatballs can be sold at a higher price due to the added value. Therefore, a deeper study using a model is needed to make this start-up business grow, develop and be sustainable.
The start-up business is a complex system with dynamic behavior. Complex systems can be modeled strategically with a Dynamic Systems approach, and the dynamics of the system behavior can be simulated using a computer [1]. Therefore, the research method used in this research is the Dynamic System approach, which is built on data and information about the structure and behavior of key elements. The variables involved have complicated interrelations and many trade-offs.
The problem in this study is how the Madrasah can improve the Cassava Meatballs start-up business so that it can develop and be sustainable, and ultimately the Madrasah can operate its activities independently. This study aims to develop a conceptual model of the start-up business run by the Madrasah using a Dynamic Systems approach, so that the start-up business can develop and be sustainable.
In the literature, Schwarz and Schöneborn developed a dynamic model of the evolution of small start-up businesses based on simplified company theory. This model explains the evolution of a company, covering both growth and decline into bankruptcy [2]. Generic model templates for entrepreneurs to develop their own companies using a Dynamic Systems model have been proposed by Huang and Kunc [3]. Abdelkafi and Täuscher developed a business model for sustainability (BMfS) that creates value for various stakeholders and the natural environment [4]. The BMfS is built on a feedback loop that reinforces the value created by the customer, the value captured by the company, and the value for the natural environment, using a Dynamic Systems approach. Cosenz explored how System Dynamics modelling can provide methodological support to business model design with the intent to better communicate business strategy and manage performance [5].
The business start-up life cycle includes three stages, i.e. the bootstrap stage, the seed stage, and the creation stage [6]. The main challenges of a business start-up include "entrepreneurship", "innovation", "technology", and "economics" [6], but these are critical success factors of start-ups as well [7]. The growth of start-up businesses must be supported by the availability of information, efficient processes, attitudes that support start-up business owners, training opportunities and supporting services [8], the use of social media as a promotion tool and even as part of business strategy [9], access to crowdfunding platforms [10], extensive pre-startup activities, and various expansion activities [11].
Based on the previous research, it can be seen that Dynamic System approach can be used to study variables that make a start-up business grows, develops and be sustainable. This study will develop a start-up business model using a Dynamic Systems approach by referring to Huang and Kunc model [3].
This paper is organized as follows. Section 2 describes the Methodology, Section 3 explains the Result, Section 4 discusses the result, and Section 5 shows the conclusion
Method
This study uses descriptive analytics methods to develop a conceptual model of the Cassava Meatballs start-up business. The conceptual model refers to the start-up business generic model proposed by Schwarz and Schöneborn [2]. The steps of model development follow the stages of the Dynamic System approach.
Start-up business generic model
The start-up business generic model was constructed by Huang and Kunc, as seen in figure 1 [3]. The start-up business includes eight main resources, i.e. potential customers, customer base, order/product in process, services in process, staff, cash availability, assets and company reputation. In addition, financial variables, in the form of financial information, are considered very important elements of the start-up, because this information affects business survival. The financial variables are revenue, profit, cash in, cash out, service account for revenue, product account for revenue, total operating costs, total production costs, total staff costs, hiring budget, marketing budget and investment budget.
Conceptual model of cassava meatballs start-up business
The conceptual model of the Cassava Meatballs start-up business is constructed by referring to the Business Process of Cassava Meatballs, described as follows. The main business run by Madrasah Diniyah Albinaa starts from the market potential of the Cassava Meatballs products and ends with product sales. When orders are received from customers, materials are purchased. This increases inventory and inventory costs. The order is produced when the materials and staff are available. Completed products are then shipped. The number of orders processed and the number of products completed determine the backlog. If a backlog occurs, additional employees are needed to adjust capacity so that the product can be delivered just in time. The number of products shipped and the product price determine revenue. The revenue is used to buy materials and to cover holding costs, employee costs and marketing costs. The difference between revenue and total costs (material cost, employee cost, holding cost and marketing cost) is called profit. The model of Huang and Kunc states that all payments, including the private payoff, determine cash availability, which represents liquidity [3]. All unavoidable payments decrease liquidity. However, in the practical situation of the Madrasah start-up business, the entrepreneur uses the revenue for household spending; the entrepreneur is not paid from the profit as a private payoff.
General equations formulation
Referring to the Causal Loop Diagram in figure 2, the general equations are formulated as follows:
- New customers = potential customers + targeted customers (1)
- Product completed = capacity, if order > capacity; order, if order < capacity (2)
- Order backlog = orders - product completed (3)
- Staff = staff needed + current staff (4)
- Stock of material = amount of raw material purchased - material to produce (5)
- Material cost = amount of material for completed products * material price (6)
- Marketing cost = marketing budget allocation (7)
- Staff wages = number of staff * wage per staff (8)
- Stock costs = material stock * holding cost per unit (9)
- Revenue = (product shipped * product price) + service cost (10)
- Profit = revenue - (material costs + employee costs) (11)
- Liquidity = profit + debt - stock cost - marketing cost (12)
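To make the interplay of these equations concrete, the following is a minimal, illustrative Python sketch that steps the conceptual model through discrete periods. All numerical parameter values (prices, wages, order stream, household spending) are hypothetical placeholders and not data from the Madrasah case, and material stock is approximated by per-period throughput for brevity.

```python
# Minimal sketch of the conceptual start-up model, stepped over discrete periods.
# All parameter values below are hypothetical placeholders, not data from the case study.

def simulate(periods=12):
    capacity = 100             # units producible per period with current staff
    material_price = 1.0       # cost of material per unit
    product_price = 2.5        # selling price per unit
    holding_cost = 0.1         # holding cost per unit of material stock
    wage_per_staff = 50.0
    staff = 2
    marketing_budget = 20.0
    household_spending = 80.0  # entrepreneur draws from revenue, not from a private payoff
    liquidity, debt, backlog = 0.0, 0.0, 0.0

    orders_stream = [80, 90, 110, 120, 100, 95, 130, 140, 120, 110, 100, 90]

    for t in range(periods):
        orders = orders_stream[t % len(orders_stream)] + backlog
        product_completed = min(orders, capacity)            # eq. (2)
        backlog = orders - product_completed                 # eq. (3)

        material_cost = product_completed * material_price   # eq. (6)
        stock_cost = product_completed * holding_cost        # eq. (9), stock ~ throughput here
        staff_wages = staff * wage_per_staff                  # eq. (8)
        revenue = product_completed * product_price           # eq. (10), no service income
        profit = revenue - (material_cost + staff_wages)       # eq. (11)

        # eq. (12), extended with household spending as in the developed model
        liquidity += profit - stock_cost - marketing_budget - household_spending
        if liquidity < 0:                                      # cash deficit covered by debt
            debt += -liquidity
            liquidity = 0.0

        print(f"period {t+1:2d}: completed={product_completed:5.0f} "
              f"backlog={backlog:5.0f} profit={profit:7.1f} "
              f"liquidity={liquidity:7.1f} debt={debt:7.1f}")

simulate()
```

Under these assumptions, debt accumulates whenever household spending and fixed costs exceed the operating profit, which is the deficit-and-debt dynamic discussed in the next section.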
Discussion
In figure 2 it can be seen that the Cassava Meatballs start-up business model run by Madrasah Diniyah Albinaa consists of 7 elements, namely potential customers, customer base, services, product completed, liquidity and company reputation. In the developed conceptual model there is no asset element, because investment is not made when staff are added, as it is in the generic model. In the Cassava Meatballs case, investments are made when capacity is increased by buying the main equipment. In addition, in the Cassava Meatballs case the entrepreneur is not paid a private payoff as in the generic model; instead, the entrepreneur uses the income to fulfil household needs. This happens because the Cassava Meatballs business is the main source of income for the entrepreneur's family. This leads to a cash deficit, so debt is required for operational costs. The debt increases the cash, but it also increases the payments. This happens repeatedly, so the business is increasingly burdened by the debt. The Cassava Meatballs start-up business finally stopped producing because it was unable to obtain debt for operational costs. In addition, the marketer had an accident that left him unable to work.
Conclusion
The developed conceptual model has two differences from the generic model: the entrepreneur is not paid from the profit as a private payoff but instead uses revenue to cover household spending, and the asset element is not used in the developed conceptual model.
The developed model should be evaluated to validate it, and then the right strategy should be found to make the Cassava Meatballs start-up business sustainable. This requires further research.
Optimization of the proliferation and persistency of CAR T cells derived from human induced pluripotent stem cells
The effectiveness of chimaeric antigen receptor (CAR) T-cell immunotherapies against solid tumours relies on the accumulation, proliferation and persistency of T cells at the tumour site. Here we show that the proliferation of CD8αβ cytotoxic CAR T cells in solid tumours can be enhanced by deriving and expanding them from a single human induced-pluripotent-stem-cell clone bearing a CAR selected for efficient differentiation. We also show that the proliferation and persistency of the effector cells in the tumours can be further enhanced by genetically knocking out diacylglycerol kinase, which inhibits antigen-receptor signalling, and by transducing the cells with genes encoding for membrane-bound interleukin-15 (IL-15) and its receptor subunit IL-15Rα. In multiple tumour-bearing animal models, the engineered hiPSC-derived CAR T cells led to therapeutic outcomes similar to those of primary CD8 T cells bearing the same CAR. The optimization of effector CAR T cells derived from pluripotent stem cells may aid the development of long-lasting antigen-specific T-cell immunotherapies for the treatment of solid tumours.
…cells were used to induce CD4CD8 double-positive T cells (DP cells). The differentiation efficiency from CD4CD8 double-negative cells (DN cells) to DP cells was the same for 4-1BBz-based second-generation and third-generation CARs; however, it was substantially decreased if the iPSCs were modified by a first-generation or 28z-based second-generation CAR, compared with the differentiation efficiency of CAR-unmodified iPSCs (Fig. 1b). We confirmed that no molecules targeted by the CAR, such as CD19, were expressed on iPSCs or differentiating cells during the culture (Extended Data Fig. 1a) and that differentiation to DP cells remained uninterrupted under the scFv-deleted construct (Fig. 1b). To confirm the effects of CAR expression on T-cell differentiation from iPSC-derived haematopoietic progenitor cells, we transduced a doxycycline-inducible CAR-harbouring lentiviral vector (inducible 1928z; Fig. 1a) into iPSCs. CAR induction at the early T-cell differentiation stage (days …) interfered with the differentiation of DP cells (Extended Data Fig. 1b). Moreover, we confirmed that CAR induction of 1928z, but not of 1928bbz, in differentiating cells caused the phosphorylation of CD3ζ, suggesting non-specific activation signalling (Fig. 1c). Recent investigations reported that 1928z transduction into HSCs impaired T-cell differentiation capability 13,14 and promoted NK-like cell development by suppressing the transcription factor BCL11B, which is indispensable for T-lineage development of lymphoid progenitors during early phases of ex vivo T-cell generation. This could also be a possible reason for the impaired T-cell differentiation from 1928z CAR-transduced iPSCs 5 .
CAR induction at the DP cell-enriched stage (days 35-42) decreased the number of CD4CD8 DP cells and increased the number of CD8-single positive (SP) cells (Extended Data Fig. 1c). In addition, we found that 1928z induction increased the expression of NK cell-related genes, such as CD161, DNAM-1, CD56, NKG2D, NKG2A, NKp46 and NKp44 (Extended Data Fig. 1d). As we confirmed that 1928bbz-transduced iPSCs efficiently differentiated into CD4CD8DP cells compared with 1928z-transduced iPSCs (Fig. 1d), we decided to use the third-generation construct for further study.
GPC3 CAR-iPSCs generated two types of iCAR-T cells with distinct phenotypes
A previous study 5 has reported that regenerated 1928z CAR-T cells derived from CAR-engineered iPSCs exerted anti-tumour function comparable to that of γδ T cells transduced with the same CAR. Therefore, the generation of CD8αβ-positive T cells from CAR-iPSCs via CD4CD8 DP is expected to enhance the anti-tumour function. Recently, we and other groups reported the generation of adaptive-like CD8αβ-positive T cells from iPSCs 6,9,10 . However, how the lineage modification impacts cell function, especially in vivo, requires more study. Although CAR-T therapies are effective against haematological malignancies, they are less effective against solid tumours. To develop an iPSC-derived CAR-T therapy for solid tumours, we recently generated CAR-expressing iPSC-derived NK/innate lymphocyte cells (ILCs) that target glypican3 (GPC3)-expressing tumours 15 , such as hepatocellular carcinoma and ovarian clear cell carcinoma. GPC3 is a cancer-specific cell membrane protein and a promising target for cancer immunotherapy. To understand how the T-cell lineage derived from CAR-iPSCs impacts anti-tumour function, we compared progeny cells from CD4CD8 DP with previously reported innate-like iPS-T cells.
The third signal is a cytokine signal transmitted by JAK/STAT. Different γ-chain cytokines, such as interleukin (IL)-2, IL-7, IL-15 and IL-21, have been used as the third signal. Effective modulation of these signals in T cells is expected to enhance the functions of killer and helper cells, prolong T-cell survival and avoid T-cell exhaustion to increase the therapeutic efficacy of T-cell therapy.
Emerging technologies to regenerate cytotoxic immune cells, such as T cells and natural killer (NK) cells, from pluripotent stem cells are expected to provide a universally accessible approach to cancer immunotherapy 5,6 . Regeneration of antigen-specific T cells via induced pluripotent stem cells (iPSCs) was first reported in 2013 (refs. 7,8) and is a potential source of cytotoxic T lymphocytes (CTLs) 6 . In this strategy, T cells are reprogrammed into iPSCs (T-iPSCs), which are subsequently differentiated back into T cells (iPS-T cells) but with rejuvenated phenotypes 7 . (For the remainder of this study, all comments about iPSCs refer to T-iPSCs unless otherwise stated.) However, previous reports have shown that iPS-T cells can have unexpected phenotypes. Especially in the case of CAR modifications in iPSCs, cytotoxic T cells derived from second-generation CD19 CAR-transduced iPSCs were reported to have properties of γδ T cells, partially expressing CD8αα but not CD8αβ, according to the gene expression profiles and CAR-independent cytotoxicity 5 . Recent reports indicated improved differentiation protocols to synthesize CD8αβ-expressing T cells that showed effector functions more closely resembling primary T cells 6,9,10 . CAR transduction into such iPS-T cells was confirmed to work as effectively as primary T cells in a B-cell malignancy animal model when iPS-T cells were supported with an IL-15-mediated third signal 11 . However, unlike haematological malignancy, solid tumours are more refractory to cellular immunotherapies with respect to accessibility and sustained T-cell effector function at the local tumour site.
In this Article, to overcome the obstacles using CD8αβ iPS-T cells derived from CAR-modified iPSCs, we selected optimal CAR constructs without tonic signalling during CD8αβ T-cell differentiation. Next, the CD3ζ-mediated signal pathway was enhanced by inhibiting the intracellular immunological checkpoint molecules, namely DGKα and DGKζ, by clustered regularly interspaced short palindromic repeats (CRISPR)-Cas9, to allow the proliferation of iCAR-T cells in the tumour. Finally, membrane-bound IL-15/IL-15Rα was transduced into iCAR-T cells to prolong their survival by enhancing the third signal. The therapeutic efficacy of modified iCAR-T cells was confirmed in tumour-bearing animal models, demonstrating the generation of highly effective iCAR-T cells through a combination of selecting the appropriate CAR constructs, optimizing the differentiation process and enhancing the CAR and cytokine signalling by genetic manipulation. These findings suggest that our iCAR-T cells have therapeutic effects against solid tumours; they are comparable to primary CAR-T cells but with longer persistency. Thus, these findings indicate the potential to enhance the cancer immunity of CAR-T-cell therapies using iPSC technology.
A CAR-targeting GPC3 (Fig. 2a) was transduced into iPSCs (CAR-iPSCs) using a lentiviral vector, and the cells were cloned by limiting dilution after selection by stably expressing humanized Kusabira Orange 1 (hKO1). CAR-iPSCs were induced to form immature T cells (immature iCAR-T), which correspond to a CD4CD8 DP population, via haematopoietic progenitor cells (Fig. 2b). After subsequent maturation by an anti-CD3 antibody with dexamethasone 6 , immature iCAR-T cells were differentiated into iCAR-T cells post-maturation, a population that includes CD8αβ SP cells. The analysis of surface molecules related to the memory T-cell phenotype revealed that immature iCAR-T and iCAR-T post-maturation showed heterogeneous expression profiles regarding the surface antigens CD5 and CD8β that are expressed in peripheral CD8 T cells. Moreover, the cells showed heterogeneous expression patterns for CD62L, CCR7, CD27 and CD28, which are primarily expressed in naïve and central memory T cells (Fig. 2c,d).
To clarify the functional properties of CAR-iPSC-derived CD5CD8β DP cells through in vitro and in vivo assays, we expanded the cells and subsequently confirmed the conserved expression of CD8αβ (iCAR-T CTL ; Extended Data Fig. 2a). Furthermore, primary CD8 T cells modified with the same CAR (pCAR-T CTL ) and previously reported iPS-T cells with ILC-like function (iCAR-T ILC ) were prepared. iCAR-T ILC , which were directly induced from DN cells using an agonistic stimulation protocol 7,16,17 , expressed low levels of CD5, high levels of CD56 and CD161, and did not express CD8β, a phenotype that is consistent with that of ILCs 18 (Extended Data Fig. 2a,b), although the pre-rearranged T-cell receptor (TCR)-derived CD3 expression was preserved (Extended Data Fig. 2a). To avoid any biases from clone-specific phenotypes, we used the same clone, TKT3V1-7, to generate both iCAR-T CTL and iCAR-T ILC .
Although the three types of cells commonly expressed certain T/NK cell lineage markers, TCRαβ/CD3 complex and transduced CAR, differences existed in the expression of cell lineage-related markers (Extended Data Fig. 2b). To further characterize the three types of CAR-expressing cells, we conducted a transcriptional analysis of 259 human T-cell-related genes (Supplementary Table 1) at the single-cell level. Among differentially expressed genes (DEGs) between iCAR-T CTL and iCAR-T ILC , the expression of naïve/memory-related genes, such as SELL, CCR7, TCF7, IL7R and CD27, was high in iCAR-T CTL , suggesting that CD5CD8β DP iCAR-T CTL maintained a suitable phenotype for therapeutic efficacy in vivo compared with iCAR-T ILC even after proliferation (Extended Data Fig. 2c). Gene Ontology (GO) analysis of those genes revealed a high enrichment of genes related to T-cell differentiation (fold enrichment 47.34, false discovery rate (FDR) 2.24 × 10⁻¹¹) and T-cell activation (fold enrichment 34.04, FDR 1.06 × 10⁻¹¹). On the other hand, the top 30 DEGs for pCAR-T CTL compared with iCAR-T CTL included IL2, IFNG, TNF and IL7R, indicating possible enrichment of terms for cytokine-mediated signalling pathways (fold enrichment 21.12, FDR 8.18 × 10⁻²³) and lymphocyte activation (fold enrichment 23.71, FDR 3.75 × 10⁻¹³). These results suggested that iCAR-T CTL could be functionally closer to pCAR-T CTL than iCAR-T ILC (Extended Data Fig. 2d). However, it could be insufficient in multiple aspects of cancer immunity in comparison with pCAR-T CTL .
Fig. 1 legend: a, CAR constructs 19z, 1928z, 19bbz, 1928bbz and w/oED28z CAR (w/oED28z lacks the extracellular scFv domain of 1928z CAR). Each construct was inserted at the indicated internal promotor(s) of the lentiviral vector pCS. b, Comparison of surface antigen profiles of immature iCAR-T cells from various kinds of iCARs. Haematopoietic progenitor cells derived from iPSCs were transduced with CARs (see a) and differentiated into T-cell lineages. To compensate for well-to-well variation, we used hKO1-negative cells in the same well as internal controls. Differentiation efficiencies were calculated as follows: CD5CD7 DP = (percentage of CD5CD7 DP cells in hKO1-positive cells)/(percentage of CD5CD7 DP cells in hKO1-negative cells); CD4CD8 DP = (percentage of CD4CD8 DP cells in hKO1-positive cells)/(percentage of CD4CD8 DP cells in hKO1-negative cells). Each dot represents biological replicates (n = 6). One-way ANOVA with Dunnett's multiple comparisons test. c, Haematopoietic progenitor cells derived from inducible 1928z or 1928bbz CAR-transduced T-iPSCs were divided into two groups and subsequently differentiated on FcDLL4 for 2 days in the presence (2 μg ml⁻¹) or absence of doxycycline. Representative FCM data representing the phosphorylation of CD3ζ in the two groups are shown. d, Representative FCM data comparing surface antigen expression of CD4 and CD8β on immature iCAR-T cells from 1928z-transduced and 1928bbz-transduced iPSCs. NTD, not transduced.
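The hKO1-normalized differentiation efficiency defined in the figure legend above can be written as a small helper function; the sketch below is illustrative only, and the example numbers are hypothetical placeholders rather than values from the study.

```python
# Differentiation efficiency normalized to hKO1-negative internal controls, as defined
# in the Fig. 1 legend: (% DP cells among hKO1+ cells) / (% DP cells among hKO1- cells).
# The example values below are hypothetical placeholders.

def differentiation_efficiency(pct_dp_hko1_pos: float, pct_dp_hko1_neg: float) -> float:
    """Ratio of DP-cell percentages in hKO1-positive vs hKO1-negative cells of the same well."""
    if pct_dp_hko1_neg == 0:
        raise ValueError("hKO1-negative control percentage must be non-zero")
    return pct_dp_hko1_pos / pct_dp_hko1_neg

# e.g. 12% CD4CD8 DP among hKO1+ cells vs 30% among hKO1- controls -> efficiency 0.4,
# i.e. CAR-transduced cells differentiate at 40% of the rate of untransduced cells in the same well.
print(differentiation_efficiency(12.0, 30.0))
```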
iCAR-T CTL suppressed tumour progression and accumulated at the tumour site in vivo better than iCAR-T ILC but did not reach pCAR-T CTL
To elucidate the functions of the three cell types, we first evaluated the target-mediated CAR-dependent cytokine production of iCAR-T CTL and iCAR-T ILC by co-culturing the cells with SK-Hep-1 (a liver cancer cell line) transduced with GPC3 (SK-Hep-GPC3) as a stimulator. Although both cell types produced IFN-γ and TNF upon stimulation, the levels were lower than those of expanded pCAR-T CTL (Extended Data Fig. 2e). We did not observe any difference between the impact of 4-1BBz-based second-generation and third-generation CAR on T-cell differentiation. Thus, we compared cytokine production between second-generation BBz iCAR-T CTL and third-generation 28BBz iCAR-T CTL , and found that 28BBz iCAR-T CTL produced IFN-γ and TNF significantly better than BBz iCAR-T CTL following SK-Hep-GPC3 stimulation (Extended Data Fig. 3). Thus, we selected third-generation 28BBz iCAR-T CTL for further experiments. Next, to examine CAR-dependent and CAR-independent cytotoxicity, which characterizes the target-antigen specificity, iCAR-T CTL , iCAR-T ILC and pCAR-T CTL were co-cultured with GPC3-absent SK-Hep-1(SK-Hep-Vector), K562 (leukaemia cell line), GPC3-present SK-Hep-GPC3, KOC7c (ovarian cancer cell line) and HepG2 (hepatocellular carcinoma cell line) (Fig. 2e). iCAR-T CTL and pCAR-T CTL showed similar levels of CAR target-specific cytotoxicity in all GPC3-positive cell lines.
In contrast, iCAR-T ILC showed CAR-dependent and CAR-independent cytotoxicity, as estimated by NK cell-activating receptor-mediated signalling 16,17 , resulting in stronger cytotoxicity against GPC3-expressing cells than iCAR-T CTL . iCAR-T CTL showed similar tumour-suppressive function as iCAR-T ILC in a peritoneal dissemination model of ovarian cancer, KOC7c (Extended Data Fig. 4a-d).
In the cancer immunity cycle 19 , an effective T-cell therapy depends on the trafficking of T cells to the tumour site, expansion of T cells accompanied by their recognition of tumour cells, and the duration of the effector function. Accordingly, we investigated these functions in iCAR-T CTL and iCAR-T ILC . The cellular kinetics were evaluated using NSG mice carrying either SK-Hep-Vector or SK-Hep-GPC3, transplanted subcutaneously at each flank on day −14. Afterward, iCAR-T CTL , iCAR-T ILC or pCAR-T CTL engineered to express luciferase were injected into the tail vein of tumour-bearing mice on day 0. Bioluminescence imaging revealed that the luminescence intensity increased gradually and selectively at the SK-Hep-GPC3-transplanted site from days 3 to 7, significantly in iCAR-T CTL -injected mice and considerably in pCAR-T CTL -injected mice (Fig. 2f,g and Extended Data Fig. 5a). Almost none of iCAR-T CTL , iCAR-T ILC or pCAR-T CTL accumulated at the SK-Hep-transplanted site. In addition, we confirmed the accumulation of iCAR-T CTL by pathological analysis and flow cytometry (FCM) analysis. SK-Hep-GPC3 tumour sections from mice infused with iCAR-T CTL showed an enhanced infiltration of CD3-positive T cells compared with iCAR-T ILC infusion (Extended Data Fig. 5b). Infiltrating iCAR-T CTL were positive for the active cell cycle molecule Ki-67; moreover, some were positive for the cytotoxic molecule Granzyme B, as demonstrated by immunohistopathology (Extended Data Fig. 5c). Next, regarding the iCAR-T CTL group, the characteristics of iCAR-T CTL observed in the tumour were evaluated by comparing them with iCAR-T CTL found in the spleen. Global transcriptional profiles of iCAR-T CTL in the tumour and spleen revealed that the two groups have distinguishable transcriptional profiles (Extended Data Fig. 5d). In total, 1,492 genes were upregulated in iCAR-T CTL isolated from the tumour, and further analysis demonstrated that iCAR-T CTL in the tumour showed enrichment of gene sets related to proliferation, such as mitotic cell cycle, cell division, M phase and mitotic cell cycle phase transition, and related to effector function, such as cytokine signalling in the immune system and interferon signalling (Extended Data Fig. 5e,f).
Signal 1 enhancement by DGK deletion improved the accumulation, persistency and effector function of iCAR-T CTL
Although iCAR-T CTL showed preferable kinetics in a systemic injection model, their therapeutic efficacy in a local injection model and accumulation at the tumour site in the systemic injection model were inferior to those of primary CAR-T. This finding led us to consider whether the tumour microenvironment negatively affected the functions of iCAR-T CTL . As several signals (first signal: TCR signal; second signal: co-stimulatory signal; third signal: cytokine signal) are necessary to efficiently activate T cells, we hypothesized that a combinatory enhancement of these signals would overcome the insufficient function of iCAR-T CTL . To enhance the antigen receptor-mediated first signal, we modified PD-1 signalling. A PD-1-deleted iPSC line was established and differentiated into iCAR-T CTL to assess whether PD-1 deletion was effective in keeping the differentiated iCAR-T CTL activated (Extended Data Fig. 6a). PD-1 deletion significantly but only slightly improved cytotoxicity and proliferation, and did not improve ERK phosphorylation or cytokine production of iCAR-T CTL (Extended Data Fig. 6b-g). The tumour-suppressive capability was not enhanced by combining iCAR-T CTL with PD-1-blocking antibody (Extended Data Fig. 6h). As a different approach, CAR overexpression slightly improved cytotoxicity; however, it led to less proliferation and cytokine production, with increasing expression of exhaustion markers (Extended Data Fig. 7a-g).
Next, we focused on controlling the diacylglycerol (DAG) metabolism. Following TCR-or CAR-mediated phosphorylation of CD3ζ, DAG recruits and activates Ras guanyl nucleotide-releasing protein 1 (Ras-GRP1) to activate the MEK/ERK pathway. The two major DAG kinase isoforms, DGKα and DGKζ, found in T cells are known to attenuate the MEK/ERK signalling by degrading DAG 20 . The inhibition of DGK enhances the RAS/ERK pathway-mediated AP-1 and NF-κB activation 21 . In addition, disruption of DGK enhanced T-cell effector function 22,23 . Therefore, we disrupted both DGKα and DGKζ by CRISPR-Cas9 at the iPSC stage (DGK-dKO-iCAR) (Extended Data Fig. 8a), and subsequently differentiated iPSCs into iCAR-T CTL . Although we observed decreased efficiency of T-cell differentiation along with disruption of both DGKs ( Supplementary Fig. 1a), which is compatible with the previous observation in DGK-dKO mice 24 , we successfully obtained DGK-dKO-iCAR-T CTL that were confirmed to have no DGKα and DGKζ proteins (Extended Data Fig. 8b). Next, we evaluated their phenotype by FCM and performed gene expression analysis to compare with iCAR-T CTL (Extended Data Fig. 8c and Supplementary Fig. 2a,b). DGK disruption did not considerably affect naïve/memory phenotype except slight upregulation of CCR7. It increased the expression of metabolic fitness genes, activation genes and immune regulatory genes, and decreased NK cell-related activating receptor genes and exhaustion genes such as HAVCR2 (TIM3) and PDCD1 (PD-1).
Next, a Phosflow assay was performed to evaluate phosphorylation downstream of DAG in the tested T cells. Although ERK1/2 phosphorylation in iCAR-T CTL was lower than that of pCAR-T CTL , DGK-dKO-iCAR-T CTL showed enhanced ERK1/2 phosphorylation that was similar to pCAR-T CTL when the cells were stimulated by the target antigen (Fig. 3a). Consistent with the recovered ERK phosphorylation, the proliferation capability of DGK-dKO-iCAR-T CTL was significantly improved compared with DGK-unmodified iCAR-T CTL in response to SK-Hep-GPC3 (Fig. 3b). In addition, improved IFNγ and TNF production from DGK-dKO-iCAR-T CTL was observed in response to SK-Hep-GPC3 (Supplementary Fig. 2c). To determine if DGK disruption improved the effector function and therapeutic persistency of iCAR-T CTL at the tumour site, we treated an intraperitoneal dissemination model of an ovarian cancer cell line, KOC7c, in NSG mice by intraperitoneally injecting DGK-dKO-iCAR-T CTL . As expected, DGK-dKO-iCAR-T CTL significantly suppressed tumour growth compared with iCAR-T CTL .
Next, to evaluate the effect on survival and proliferation of DGK-dKO-iCAR-T CTL in vivo, we injected DGK-dKO-iCAR-T CTL into the tail vein of JHH7-xenografted NSG mice and performed in vivo imaging of the subcutaneous tumour. We found that DGK-dKO-iCAR-T CTL accumulated more and persisted longer at the subcutaneous tumour site than iCAR-T CTL (Fig. 3g-i). FCM analysis of the resected tumour supported the improved presence of DGK-dKO-iCAR-T CTL in the tumour (Extended Data Fig. 8d); however, the therapeutic impact was limited (Extended Data Fig. 8e).
Signal 3 enhancement by mbIL15 transduction improved the accumulation, persistency and effector function of iCAR-T CTL
We next focused on the augmentation of the third signal. As IL-15 increased the proliferation of iCAR-T CTL in response to anti-CD3 antibody and a target antigen-expressing cell line the most effectively among the three kinds of signal 3 cytokines tested (IL-15, IL-7 and IL-21; Supplementary Fig. 3a), we focused on enhancing the IL-15 signal pathway to improve persistency in vivo and maintain the memory phenotype. The membrane-bound IL-15/IL-15Rα (mbIL15) gene has been previously reported to improve the persistence and anti-tumour effect of genetically modified primary T cells in a xenograft animal model 25 . We transduced the mbIL15 gene (mbIL15tg) with a retroviral vector (pMX-mbIL15, kindly provided by Dr Nakayama; Fig. 4a) into iCAR-T CTL and pCAR-T CTL (Supplementary Fig. 3b) and checked the impact of mbIL15 on their phenotype by FCM (Supplementary Fig. 3c,d) and gene expression profile (Supplementary Fig. 2a,b). The mbIL15-transduced iCAR-T CTL significantly increased the expression of memory-related markers such as CCR7 and CD62L, showing no elevation of exhaustion-related marker expression (Supplementary Fig. 3c,d). With respect to the gene profile, mbIL15 overexpression increased the expression of certain early memory-related markers in iCAR-T CTL …
Fig. 3 legend (partial): … DGK-dKO-iCAR-T CTL and pCAR-T CTL 60 min after co-culturing with irradiated SK-Hep-GPC3. One-way ANOVA with Tukey's multiple comparisons test. b, Target cell-mediated proliferation of iCAR-T CTL and DGK-dKO-iCAR-T CTL was determined by co-culturing with irradiated SK-Hep-GPC3 and using a standard 3H-thymidine incorporation assay at 72 h (n = 3, mean ± s.e.m.). Two-sided Student's t-test. c, Therapeutic efficacy of iCAR-T CTL and DGK-dKO-iCAR-T CTL in a peritoneal ovarian cancer model, and treatment schedule of the ovarian cancer peritoneal dissemination xenograft model: NSG mice were injected intraperitoneally (i.p.) with 5 × 10⁵ KOC7c cells expressing luciferase, and from the third day after the ovarian cancer inoculation, 5 × 10⁶ iCAR-T CTL , DGK-dKO-iCAR-T CTL or pCAR-T CTL were injected intraperitoneally twice a week for 2 weeks (n = 8 for each group). d, In vivo bioluminescence imaging of luciferase-labelled KOC7c in NSG mice treated with iCAR-T CTL , DGK-dKO-iCAR-T CTL or pCAR-T CTL . e,f, Change in the total body flux as the total tumour volume (mean ± s.e.m.) (e) and survival (f) were evaluated at the indicated timepoints after the injection. One-way ANOVA with Tukey's multiple comparisons test and log-rank test with Bonferroni multiple comparisons test. g, Therapeutic efficacy of iCAR-T CTL and DGK-dKO-iCAR-T CTL with a subcutaneous liver cancer model; treatment schedule of the liver cancer subcutaneous xenograft model. NSG mice were injected intraperitoneally with 2 × 10⁵ JHH7 cells 3 days before treatment. In total, 1 × 10⁷ iCAR-T CTL and DGK-dKO-iCAR-T CTL were injected intravenously on days 0 and 7. h, In vivo bioluminescence imaging of injected T cells in NSG mice treated with iCAR-T CTL , DGK-dKO-iCAR-T CTL . i, Total flux (photons s⁻¹) of the injected iCAR-T cells in the JHH7 tumour was quantified at the indicated timepoints (n = 8, mean ± s.e.m.). Two-sided Student's t-test.
We next compared the proliferative potential upon co-culture with SK-Hep-GPC3 in the presence or absence of rhIL-15. We found that the transduction of mbIL15 enhanced the target-dependent expansion of iCAR-T CTL in vitro (Fig. 4b). In addition, the transduction of mbIL15 prolonged the persistence and enhanced the anti-tumour effect of both iCAR-T CTL and pCAR-T CTL in vivo (Fig. 4c-f and Supplementary Fig. 4a-d).
Combinatorial enhancement of signals 1 and 3 on iCAR-T CTL improved anti-tumour function of cells in tumour-bearing animal model
To assess the function of combinatorial signal-enhanced CAR-T cells, we transduced mbIL15 into DGK-dKO-iCAR-T CTL . We examined the impact of each signal enhancement on the cell phenotype by comparing the gene expression profiles of iCAR-T CTL , iCAR-T CTL -mbIL15tg, DGK-dKO-iCAR-T CTL and DGK-dKO-iCAR-T CTL -mbIL15tg. Principal component analysis and hierarchical clustering analysis revealed that iCAR-T CTL -mbIL15tg and DGK-dKO-iCAR-T CTL formed a distinct population from iCAR-T CTL (Supplementary Fig. 2a). The transduction of the mbIL15 gene resulted in an enrichment of gene sets related to DNA conformation change, DNA replication, chromosome organization, DNA metabolic process, DNA-dependent replication and cell cycle, whereas DGK-dKO induced enrichment of inflammatory response, regulation of response to external stimulus, locomotion, cell migration, regulation of cell proliferation and cell motility (Supplementary Table 2). DGK-dKO increased the expression of activation- and co-stimulation-related genes, whereas mbIL15tg increased the expression of naïveness-related genes and decreased that of exhaustion-related genes. When both manipulations were combined, DGK-dKO-iCAR-T CTL -mbIL15tg showed additional expression of genes … (Supplementary Fig. 2b). In vitro, these types of cells had similar cytotoxicities (Supplementary Fig. 2d); however, the antigen-dependent cytokine production capacity was enhanced by DGK deletion (Supplementary Fig. 2c). To evaluate if the combination of IL-15 signalling and DGK-dKO enhanced the anti-tumour effect at the tumour site in vivo, we administered 1 × 10⁶ test cells, which is equivalent to 1/20 of the cells administered in Extended Data Fig. 4, into the peritoneal cavity of a KOC7c pre-inoculated peritoneal dissemination mouse model (Fig. 5a). Although iCAR-T CTL lost their tumour-suppressive capability with this small number of cells, the cohort of mice treated with the mbIL-15 gene-modified DGK-dKO-iCAR-T CTL showed significantly prolonged survival (Fig. 5b,c) and significantly better cell persistence than other cohorts (Fig. 5d,e). In addition, we intravenously injected 1 × 10⁶ test cells, which is equivalent to 1/20 of the cells administered in Fig. 4, into a JHH7 pre-inoculated subcutaneous solid tumour model (Fig. 6a). Similar to the results obtained from the peritoneal injection model, DGK-dKO-iCAR-T CTL -mbIL15tg showed significantly better tumour control and survival than other cohorts (Fig. 6b-d), whereas the same combinational signal enhancement of pCAR-T CTL could not be proved to be effective, which could be attributed to the limited efficiency of genome editing with the current protocol 23 that we applied (Supplementary Fig. 2e). FCM analysis of the peripheral blood at 28 days after the injection confirmed that only DGK-dKO-iCAR-T CTL -mbIL15tg was detected among all cohorts (Supplementary Fig. 2f), indicating that the IL-15-mediated improved persistency of DGK-dKO-iCAR-T CTL contributed to the therapeutic efficacy in the solid tumour model.
Next, we evaluated the therapeutic impact of these genetic modifications on iCAR-T ILC in comparison with iCAR-T CTL to understand if these modifications generated better iCAR-T ILC than iCAR-T CTL (Extended Data Fig. 9 and Supplementary Figs. 4-6). In comparison with DGK-dKO iCAR-T CTL , we did not find any advantage of DGK-dKO iCAR-T ILC with respect to ERK phosphorylation and proliferation in co-culture with SK-Hep-GPC3 (Supplementary Fig. 5). In addition, we did not observe any significant results with respect to peritoneally disseminated tumour control when test cells were directly injected into the peritoneal cavity of immunodeficient mice (Extended Data Fig. 9). iCAR-T ILC exhibited stronger but non-specific cytotoxicity to tumour cell lines than iCAR-T CTL (Supplementary Fig. 6) after transduction of the mbIL-15 gene; however, tumour accumulation of iCAR-T ILC -mbIL15tg was still inferior to that of iCAR-T CTL -mbIL15tg when they were intravenously injected into JHH7-bearing mice (Supplementary Fig. 7). On the basis of these results, we conclude that iCAR-T CTL -based modified cells would be useful for solid tumour immunotherapy.
To determine whether this combinatory modification strategy could be applied to other CARs, we transduced a second-generation 19bbz CAR into the above-characterized GPC3 iCAR-T CTL and DGK-dKO GPC3 iCAR-T CTL -mbIL15, and found a signal enhancement and proliferation advantage of the combination of IL-15 expression and DGK disruption in vitro (Extended Data Fig. 10a-e), as well as enhanced T-cell survival and tumour-suppressive capability in vivo (Extended Data Fig. 10f-i). These results suggest that enhancing the combinational signals of …
Discussion
We recapitulated the differentiation process from iPSCs to acquire potent CD8αβ cytotoxic T cells in vitro, deciphered the characteristics of CAR-modified iPS-T cells (iCAR-T cells) by engineering the lymphopoiesis pathway through selection of co-stimulatory signalling components, and enhanced iCAR-T cell effector function against solid tumours. To achieve this, we knocked out immunological checkpoint molecules and transduced cytokine/cytokine receptor chimaeric genes. These enhanced features led to better therapeutic effects in tumour-bearing animal models.
A tonic signal is attributed to the aggregation of CAR molecules that cause CAR-T exhaustion 12 and are reported to inhibit the expression of master transcription factor BCL11B by inhibiting the Notch signalling that affects the lymphopoiesis of CAR-modified haematopoietic stem and progenitor cells 14,22 . We investigated how the engineering of iPSCs impacts their differentiation propensity and found that the 28z construct decreased the differentiation efficiency of iPSCs to CD4CD8 DP cells through tonic signalling. In addition, we found that replacement with 4-1BBz or additional 4-1BB signalling to 28z attenuated the tonic signal during differentiation. This finding is consistent with that reported in the literature on primary CAR-Ts that an additional 4-1BB signal to 28z-based CAR restricted the downstream Zap70 phosphorylation at a basal level as well as after antigenic stimulation, thus preserving the therapeutic function by different affinity CARs 26 . Although the detailed mechanism of how additional 4-1BB signalling rescues T-cell differentiation from iPSC needs to be elucidated, we believe a mechanism compatible with that reported previously should be present. Certain genetic modifications on iPSCs for functional enhancement may inhibit differentiation. We observed decreased efficiency of T-cell differentiation from DGK dKO iPSC, possibly by enhanced duration and magnitude of TCR signalling, resulting in negative selection to a part of differentiating DP cells ( Supplementary Fig. 1a), which is compatible with the previous observation in DGK-dKO mice 24 . Although the low differentiation efficiency decreased the collected T-cell yield from iPSCs, we could sufficiently expand the differentiated T cells through TCR stimulation 10 . We believe this is one of the advantages of using iPSC as the unlimited suppliable starting material. The IL-15-mediated signal is known to drive common lymphoid progenitor cells to NK cell progenitor in combination with AKT/mTOR signal activation 27 . In the presence of IL-15 during T-lineage cell differentiation from iPSCs, we observed NK-lineage-biased differentiation to CD4/CD8β double-negative NK progenitor cells 17 . This is compatible with previous observations in IL15tg mice 28 . Therefore, in this study, we modified matured iCAR-T cells by mbIL15 during TCR stimulation-mediated expansion after T-cell differentiation. Good temporal controls of these engineering steps could contribute to the large-scale production of therapeutic T cells as we indicated the concept of temporal control by using of DGKα-/-DGKζf/f iPSCs, which partially rescued the efficiency of T-cell differentiation ( Supplementary Fig. 1a-d). In addition, this strategy could be applied to the temporal control of mbIL15tg, whereas other kinds of temporal control such as endogenous promoters (CD4, TRAC and so on) and syn-Notch would be available for further studies 29,30 . iPSCs can serve as the cell source for an infinite number of genetically engineered cells. This can be an advantage over peripheral blood T cells in terms of certainty, safety, and applicability for the mass production of cell therapies.
iCAR-T CTL retains the basic properties of CAR-T cells such as homing, proliferation and cytotoxicity at the tumour site. However, as exemplified by the results of ERK phosphorylation, signalling in the differentiated cells is generally attenuated compared with primary CAR-T cells, although the underlying causes remain unclear. We consider this could be a result of the overall quality in each step of ex vivo manipulation to induce haematopoietic progenitor cells, progenitor T cells, CD4CD8 DP T cells, matured CD8αβ T cells and so on. Therefore, each differentiation step needs to be improved physiologically, similar to the differentiating cells in our body. A recently reported method for iPS-T-cell differentiation using organoid culture could improve the quality of iCAR-T CTL 31 . In contrast, we here report methods to improve the deficient functions by genetic manipulations. Thus, the optimization of signals 1-3 has an independent impact on iPSC-derived T cells in terms of their induction and function. Specifically, signal 1 impacts the proliferation and effector functions, signal 2 impacts the activation and tonic signalling, and signal 3 impacts cell survival and persistency. A combination of DGK deletion and mbIL15 transduction is one example of such modifications. Because of great advances achieved in CRISPR-Cas9-based human genome editing, several new clinical applications are under development, especially in the field of T-cell immunotherapy. For example, certain clinical trials using primary T cells with TCR- and/or HLA-knockout are known to diminish the alloreactivity of T cells 32,33 , whereas PD-1 or CTLA-4 knockout can overcome immunosuppression of the tumour microenvironment 34,35 . As iPSCs can be manipulated as a single-cell clone, it is possible to achieve 100% genome editing accuracy of the target without off-target effects. By targeting DGK, we demonstrated that gene editing can enhance T-cell function by regulating the expression of intracellular molecules, which cannot be manipulated by antibody administration such as anti-PD1 and anti-CTLA4 antibodies. Inhibition of PD-1, likewise a checkpoint molecule, unexpectedly exerted no impact. However, it is known that PD1 expression does not increase in iPS-T cells without frequent stimulation 10 . Therefore, it is inferred that the response was not as strong as that of pCAR-T. Conversely, a combination of DGK deletion and mbIL15 transduction exerted a limited effect on T cells (Figs. 5 and 6); however, it was effective in boosting the function of less-responsive iCAR-T cells. In addition to the approach used in this experiment, several target genes exist that can improve each signal; it is important to optimize the combination of manipulations of these genes, especially for iPSC-derived T cells, because they tend to have less responsiveness, as mentioned above. It is possible that a more intense approach could be more useful than that for primary T cells.
Although cytotoxic ILCs (NK cells) are similar to CTLs in their ability to eliminate target cells, they differ substantially in the manner they sense target cells and in their kinetics 36 . It is reported that iPSC-derived T cells with properties similar to ILC/NK appear during T-cell differentiation from iPSCs. A direct comparison of iCAR-T ILC and iCAR-T CTL differentiated from identical CAR-iPSCs revealed their different properties that were compatible with those of their physiological counterparts, namely NK cells and CTLs. mbIL-15, which was found to be useful in iCAR-T CTL in this study, has been reported to be useful in cord-blood-derived NK cells, and in improving the persistency of CD19CAR iPS-T cells. In this study, the DGK deletion for iCAR-T ILC resulted in improved effector function comparable to DGK-dKO-iCAR-T CTL , and mbIL-15 improved the persistency in vivo. However, the lack of improvement in the subcutaneous tumour accumulation suggested that these two modifications are insufficient for iCAR-T ILC and may indicate differences in properties from iCAR-T CTL . However, iCAR-T ILC has an advantage in cytotoxicity, and the expression of mbIL15 from the iPSC stage may be advantageous for NK cell differentiation, which would be reflected in future clinical development 15 .
Finally, we enhanced the therapeutic effect of iCAR-T therapy so that its effectiveness matched that of healthy-volunteer-derived CAR-modified CD8 T cells by modifying multiple T-cell signalling pathways. However, this approach could also increase the risk of unwanted immune reactions or tumourigenesis after administration of the cell therapy 37,38 . Suicide genes, such as tEGFR 39,40 , iCasp9 (ref. 41) and HSV-TK 42 , or antigen-specific induction systems such as syn-Notch receptors 30 could reduce these risks. Focusing on the risk of graft-versus-host disease caused by an allogeneic TCR, deleting the endogenous TCR genes in iPSCs by knocking in a pleiomorphic TCR such as a γδTCR or iNKT-TCR, or by knocking in the CAR at the TRAC locus 29 , would reduce this risk. In addition, the risk of cell rejection by the host could be reduced by HLA matching and HLA editing of the iPSCs 43,44 . Another challenge associated with iCAR-T-cell therapies is the generation of CD4 iCAR-T cells, as CD4 CAR-T cells have important therapeutic effects. Although the mechanism is not completely understood, the induction of T cells expressing CD4 from iPSCs has been reported in three-dimensional culture 45 . These technologies demonstrate the potential of iPSCs for the production of effective and safe allogeneic cancer immunotherapies.
In summary, several genetic modifications have been attempted to enhance the therapeutic effect of CAR-T therapies, and the artificial design of T cells is being pursued as a cancer immunotherapy strategy against solid tumours. iPS-T cells are feasible for combinatorial signalling that can enhance iCAR-T therapies for solid tumours.
Mice
Mice used in this study were 6-to 12-week-old female NOD-SCID IL2Rγc null (NSG) mice purchased from Oriental Bio. The mice were housed under controlled conditions, humidity and light/dark cycle in a specific-pathogen-free facility. All animal experiments were performed in accordance with the Ethical Review Body at Kyoto University.
Cell lines
HepG2 and JHH7 cell lines were purchased from the JCRB Cell Bank. KOC7c, SK-Hep-1, SK-Hep-1 transduced with GPC3, and K562 were provided by the National Cancer Institute, Japan. The mycoplasma status of the cells was routinely checked. JHH7, HepG2, KOC7c and SK-Hep-1 were maintained in Dulbecco's modified Eagle medium supplemented with 10% foetal bovine serum. K562 cells were maintained in Roswell Park Memorial Institute 1640 medium supplemented with 10% foetal bovine serum. The cells were cultured under 5% CO 2 at 37 °C. TKT3v1-7, an iPSC line established from the T cells of a healthy donor, was maintained on mouse embryonic fibroblasts in a human iPSC medium (Dulbecco's modified Eagle medium/F12 FAM supplemented with 20% knockout serum replacer, 2 mM l-glutamine, 1% non-essential amino acids, 10 mM 2-mercaptoethanol and 5 ng ml −1 basic fibroblast growth factor (bFGF)) or iMatrix-511 (Matrixome)-coated plates in StemFit AK03 medium (Ajinomoto) at 5% CO 2 and 37 °C.
Generation of CAR-engineered iPSCs
The composition of GC33-CD28-41BB-T2A EGFR was designed by the authors and produced by GenScript. The constructs were subcloned into the lentivirus vector CS-UbC-RfA-IRES-hKO1, which contains the hKO1 fluorescent protein gene with an internal ribosomal entry site (IRES) and is under the control of the human ubiquitin C (UbC) promoter. For the inducible CAR construct, we used the CD28 transmembrane domain 46 . The recombinant lentiviral vector was produced using the method described previously 6 . TKT3v1-7 was transduced with a lentiviral vector by spin infection in 24-well plates. hKO1-positive iPSCs were FACS sorted, cloned by limiting dilution and expanded.
Generation of mbIL-15-transduced CAR-T cells
RD18 cells that can produce a retroviral vector encoding mbIL15-T2A-NGFR were a kind gift from Dr Kazuhide Nakayama of Takeda Pharmaceuticals. CAR-T cells were stimulated with Dynabeads Human T Activator anti-CD3/28 (Thermo Scientific). Three days after the stimulation, the cells were transduced with a retroviral vector encoding mbIL15-T2A-NGFR. NGFR-positive cells were purified by FACS.
Generation of DGK-deleted primary T cells
Primary T cells were stimulated with Dynabeads Human T Activator anti-CD3/28 (Thermo Scientific). Four days after the stimulation, electroporation was performed using an Amaxa P3 Primary Cell Kit and a 4D-Nucleofector (Lonza). Ten micrograms of recombinant S. pyogenes Cas9 (Thermo Fisher Scientific) and 1.25 μg of chemically synthesized sgRNAs for DGKα and DGKζ were incubated for 20 min before electroporation to generate Cas9-gRNA RNP complexes. A total of 5 × 10^5 cells were resuspended in P3 buffer and mixed with the pre-incubated Cas9-gRNA RNP complexes. Cells were nucleofected using the program EO-115. One week after the electroporation, the cells were collected and the genomic DNA was isolated. The frequency of indels was calculated by Tracking of Indels by Decomposition (TIDE) analysis.
FCM analysis
Stained cell samples were analysed using an LSR or FACS AriaII Flow Cytometer (BD Biosciences), and the data were processed using FlowJo (Tree Star). All human cells were first gated on FSC/SSC according to cell size and granularity, using stained human PBMCs as a positive control and reference for cell size, granularity and staining intensity. Unstained samples were used to set up negative gates, and stained human PBMCs were used to set up positive gates. Dead cell populations were excluded using propidium iodide staining (Supplementary Fig. 8). For T-cell phenotyping, the following antibodies were used: CD3-eFluor 450 (clone UCHT1, eBioscience), CD3-APC-Cy7
Cytokine release, 51Cr release assays and non-RI cytotoxic assay
T-cell cytokine secretion was measured using the Human Th1/Th2/Th17 Kit (BD Bioscience). 51Cr release assays were performed to evaluate the cytolytic ability of effector cells. Target tumour cells were loaded with 1.85 MBq 51Cr for 1 h, after which 5,000 tumour cells were co-incubated with effector cells for 5 h at effector-to-target (E:T) ratios ranging from 2.5:1 to 20:1. Supernatants were collected, and 51Cr release was quantified using a beta counter. The percentage lysis was calculated as lysis (%) = (experimental lysis − spontaneous lysis)/(maximal lysis − spontaneous lysis) × 100%, where maximal lysis was induced by incubation in 2% Triton X-100 solution. For the non-RI cytotoxic assay, target cells were labelled with the N-SPC non-radioactive cellular cytotoxicity assay kit (Techno Suzuta) according to the manufacturer's instructions. The target cells were pulsed with the BM-HT reagent at 37 °C and washed thrice, and 5 × 10^3 cells were seeded into each well of a 96-well plate. Effector cells were added to the wells at E:T ratios ranging from 2.5:1 to 20:1 and co-cultured for 4 h. Next, 20 μl of the co-culture supernatant was mixed with 100 μl of Eu solution, and time-resolved fluorescence was measured with the EnVision 2105 multimode plate reader (PerkinElmer). Percentage lysis was calculated with the same formula, where maximal lysis was induced by incubation in a detergent solution provided by the manufacturer.
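The specific-lysis calculation above is the same for the 51Cr and non-RI readouts; as an illustration only (not the authors' code), it can be written as a short R function, with the count values below being hypothetical:

```r
# Percentage specific lysis, as defined in the text:
# lysis (%) = (experimental - spontaneous) / (maximal - spontaneous) * 100
# 'experimental', 'spontaneous' and 'maximal' are mean per-well signals
# (51Cr counts or Eu time-resolved fluorescence); the values below are made up.
percent_lysis <- function(experimental, spontaneous, maximal) {
  100 * (experimental - spontaneous) / (maximal - spontaneous)
}

# Example at one effector-to-target ratio (hypothetical counts)
percent_lysis(experimental = 4200, spontaneous = 900, maximal = 9800)  # ~37.1%
```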
3H-thymidine uptake proliferation assay and cell trace reagent proliferation assay
The proliferation of T cells was measured using the 3H-thymidine uptake assay. In total, 1 × 10^5 effector cells were co-cultured with 1 × 10^4 irradiated target cells for 3 days. 3H-thymidine was then pulsed, and 16 h later its incorporation was measured with a beta counter. For the cell trace reagent proliferation assay, effector cells were stained with 1 μM CellTrace Violet before stimulation with target cells. Three days after stimulation, the decline in fluorescence was evaluated by FCM.
Analysis of tumour-infiltrating CAR-T cell model
On day −14, 5 × 10^6 SK-Hep-1 cells transduced with GPC3 were injected subcutaneously into the left flank of NSG mice. Luciferase-knock-in CAR-T cells were intravenously injected on day 0. Mice were killed on day 7, and subcutaneous tumours were collected. Each tumour was divided into two fractions. One fraction was extensively minced with a razor and digested gently with a MACS Dissociator. The fraction was then filtered through 70-μm nylon mesh cell strainers, and red blood cells were lysed. Single-cell suspensions were stained with fluorochrome-conjugated antibodies as indicated. The other fraction was prepared for histopathological analysis.
Subcutaneous tumour animal model 1
On day −14, 5 × 10^6 SK-Hep-1 cells transduced with GPC3 were injected subcutaneously into the left flank of NSG mice, and 5 × 10^6 non-transduced SK-Hep-1 cells were injected subcutaneously into the right flank. Luciferase-knock-in CAR-T cells were intravenously injected on day 0. Luciferase activity from T cells was measured by in vivo bioluminescence imaging.
Subcutaneous tumour animal model 2
A total of 2 × 10^5 JHH7 cells were injected subcutaneously 3 days before treatment, and 1 × 10^7 or 1 × 10^6 CAR-T cells were injected intravenously on days 0 and 7, or on day 0 only. Tumour dimensions were measured with calipers, and tumour volumes were calculated using the formula V = LW^2/2, where L is the length (longest dimension) and W is the width (shortest dimension).
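For reference, the caliper-based volume estimate described above can be written out directly; the R sketch below uses hypothetical dimensions in millimetres:

```r
# Tumour volume from caliper measurements: V = L * W^2 / 2,
# with L the longest and W the shortest dimension (hypothetical values, in mm).
tumour_volume <- function(L, W) L * W^2 / 2

tumour_volume(L = 12, W = 8)  # 384 mm^3
```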
Peritoneal dissemination tumour animal model
A total of 5 × 10^5 KOC7c cells were injected intraperitoneally on day 0, and 5 × 10^6 CAR-T cells were intraperitoneally injected twice per week for 2 weeks from day 3, or 1 × 10^6 CAR-T cells were intraperitoneally injected on day 3. Luciferase activity from T cells or tumour burden was measured by in vivo bioluminescence imaging (IVIS 100 Imaging System, Caliper).
RNA sequencing of iCAR-T CTL in tumours or spleens of xenograft models
Libraries for RNA sequencing were prepared using the SMART-seq v3 Ultra Low Input RNA Kit for Sequencing according to the manufacturer's instructions. Briefly, 10-20 cells were lysed, followed by reverse transcription and amplification. The libraries were sequenced for 96 cycles with 8-bp dual indexing using the HiSeq SR Cluster Kit v2 (Illumina) and HiSeq Rapid SBS Kit v2 on a HiSeq2500 operated with the HiSeq Control Software 2.2.58. All sequence reads were extracted in FASTQ format using the BCL2FASTQ Conversion Software 1.8.4 in the CASAVA 1.8.2 pipeline. The sequence reads were mapped to the hg19 reference genome (downloaded on 10 December 2012) using TopHat v2.0.8b and quantified using RPKMforGenes. Hierarchical clustering was conducted using the Euclidean distance and the Ward method of the hclust function in R 3.2.1.
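The clustering step can be sketched in outline with base R; the snippet below assumes a log-scale expression matrix `expr` (rows = genes, columns = samples) and uses `ward.D2` as one common Ward implementation in `hclust`, so it is an approximation of, not a copy of, the authors' script:

```r
# Hierarchical clustering of samples on a log-scale expression matrix 'expr'
# (rows = genes, columns = samples); 'expr' is a hypothetical placeholder object.
d  <- dist(t(expr), method = "euclidean")   # Euclidean distance between samples
hc <- hclust(d, method = "ward.D2")         # Ward's method, as implemented in hclust
plot(hc, main = "Sample clustering")
```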
Whole transcriptome sequencing using the Ion AmpliSeq Transcriptome Human Gene Expression Kit
Total RNA was extracted from 100,000 cells using the NucleoSpin RNA Kit (Macherey-Nagel, 740902.250) according to the manufacturer's instructions. A total of 1 ng of RNA was reverse transcribed into complementary DNA using the SuperScript VILO cDNA Synthesis Kit (Life Technologies, 11754050). Next, the cDNA was amplified using the Ion AmpliSeq Transcriptome Human Gene Expression Core Panel. Amplified libraries were evaluated for quality using an Agilent 2100 Bioanalyzer (Agilent Technologies) and quantified by qPCR using the Ion Library Quantitation Kit (Life Technologies, 4468802). The templated libraries were loaded onto an Ion 540 Chip (Life Technologies, A27765) using the Ion Chef System and subsequently sequenced on an Ion S5 XL.
Read alignment and differential gene expression analysis
The AmpliSeq sequencing data were analysed using the ampliSeqRNA plugin available in Ion Torrent sequencing platforms. This plugin utilizes the Torrent Mapping Alignment Program, which aligns raw Ion Torrent sequencing reads against a custom reference sequence containing the targeted transcripts included in the AmpliSeq gene expression kit. Differential gene expression analysis was performed using R after quality control, which included counts-per-million conversion, log transformation and the filtering of lowly expressed genes. Normalization was performed using the TMM normalization method. DEG analysis between two groups was performed using the voom method.
Enrichment analysis was performed using clusterProfiler v3.14.3 and ReactomePA v1.30.0 with upregulated and downregulated genes, respectively. Fisher's method was used to combine P values from each test, and the combined values were adjusted for multiple comparisons using the Benjamini-Hochberg procedure. A P value of less than 0.05 was considered significant for GO terms and Reactome pathways.
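As a rough illustration of the workflow described above (filtering, TMM normalization, voom-based DEG analysis and enrichment), a minimal R sketch is given below; `counts` and `group` are hypothetical placeholders, gene identifiers are assumed to be Entrez IDs, and the exact filters, contrasts and package versions used by the authors may differ:

```r
library(edgeR)            # TMM normalization
library(limma)            # voom transformation and linear modelling
library(clusterProfiler)  # GO enrichment
library(ReactomePA)       # Reactome enrichment

# 'counts' (genes x samples) and the two-level factor 'group' are placeholders
dge  <- DGEList(counts = counts, group = group)
keep <- filterByExpr(dge)                              # drop lowly expressed genes
dge  <- calcNormFactors(dge[keep, ], method = "TMM")   # TMM normalization

design <- model.matrix(~ group)
v   <- voom(dge, design)                               # log-CPM with precision weights
fit <- eBayes(lmFit(v, design))
deg <- topTable(fit, coef = 2, number = Inf, p.value = 0.05)

# Enrichment of up-regulated genes (Entrez IDs assumed); BH adjustment as in the text
up  <- rownames(deg)[deg$logFC > 0]
ego <- enrichGO(up, OrgDb = "org.Hs.eg.db", ont = "BP", pAdjustMethod = "BH")
epa <- enrichPathway(up, pAdjustMethod = "BH")
```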
Statistics
All data are presented as mean ± standard error of the mean (s.e.m.) unless otherwise noted. All statistics were performed using Prism (GraphPad software). Analysis of variance (ANOVA) with Tukey's multiple comparison test was used to compare multiple groups, and a two-sided Student's t-test was used to compare two groups for parametric data. Values of P < 0.05 were considered significant.
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Data availability
The primary data supporting the results in this study are available within the paper and its Supplementary Information. The raw and analysed datasets generated during the study are too large to be publicly shared, yet they are available for research purposes from the corresponding author on reasonable request. Source data are provided with this paper.

Reporting Summary

Corresponding author(s): Shin Kaneko. Last updated by author(s): Oct 20, 2022.
Data

The primary data supporting the results in this study are available within the paper and its Supplementary Information. Source data for tumour burden are provided with this paper. The raw and analysed datasets generated during the study are too large to be publicly shared, yet they are available for research purposes from the corresponding author on reasonable request.
Authentication
The expression of glypican3 was checked by flow-cytometric analysis.
Mycoplasma contamination
All cell lines were confirmed negative for mycoplasma contamination.
Commonly misidentified lines (See ICLAC register) None of the cell lines used in the study are listed in the ICLAC Database of Cross-contaminated or Misidentified Cell Lines.
Animals and other research organisms
Laboratory animals: 6-12-week-old female NOD-SCID IL2Rγc null (NSG) mice were purchased from Oriental Bio (Yokohama, Japan). Mice were exposed to 12 h:12 h light-dark cycles with free access to water and food. The ambient temperature was maintained at 20-26 degrees Celsius and room humidity at 40-70%.
Wild animals
The study did not involve wild animals.
Reporting on sex
Female mice were used.
Field-collected samples The study did not involve samples collected from the field.
Ethics oversight
All animal experiments were performed in accordance with the Kyoto University School of Medicine Ethical Committee.
Flow Cytometry
Sample preparation
All stains were performed with < 1x10^6 cells per 100 μL staining buffer (PBS + 2% FBS) with 1:100 dilution of each antibody, 20 min on ice in dark.
Instrument
Stained cell samples were analyzed using LSR or FACS AriaII flow cytometer (BD Biosciences).
Software
The data were processed using FlowJo (Tree Star).
Cell population abundance
Sorted samples were confirmed for purity post-sort via flow cytometry. Sorted populations were confirmed to be of >95% purity.
Gating strategy
All human cells were first gated on FSC/SSC according to cell size and granularity, using stained human peripheral mononuclear cells (PBMCs) as a positive control and reference for cell size, granularity and staining intensity. Unstained samples were used to set up negative gates, and stained human PBMCs were used to set up positive gates. Dead-cell populations were excluded using PI staining.
Long non‐coding NR2F1‐AS1 is associated with tumor recurrence in estrogen receptor‐positive breast cancers
The tenacity of late recurrence of estrogen receptor (ER)-positive breast cancer remains a major clinical issue to overcome. The administration of endocrine therapies within the first 5 years substantially minimizes the risk of relapse; however, some tumors reappear 10-20 years after the initial diagnosis. Accumulating evidence has strengthened the notion that long noncoding RNAs (lncRNAs) are associated with cancer in various respects. Because lncRNAs may display high tissue/cell specificity, we hypothesized that this might provide new insights into tumor recurrence. By comparing transcriptome profiles of 24 clinical primary tumors obtained from patients who developed distant metastases and patients with no signs of recurrence, we identified the lncRNA NR2F1-AS1, whose expression was associated with tumor recurrence. We revealed the relationship between NR2F1-AS1 and hormone receptor expression in ER-positive breast cancer cells. Gain of function of NR2F1-AS1 steered cancer cells into a quiescence-like state through the upregulation of dormancy inducers and pluripotency markers, and activated representative events of the metastatic cascade. Our findings implicate NR2F1-AS1 in the dynamics of tumor recurrence in ER-positive breast cancers and introduce a new biomarker with therapeutic potential, providing favorable prospects for translation into the clinical field.
Introduction
Nearly 10-40% of women with estrogen receptor (ER)-positive tumors develop metastases long after the cessation of their treatment, and metastasis is responsible for the majority of breast cancer deaths. The administration of therapy within the first 5 years substantially reduces the risk of local and distant recurrence; however, tumors may recur 10-20 years after initial diagnosis (Pan et al, 2017; Zhang et al, 2013). Time-to-recurrence varies between tumor types. ER-negative tumors are more aggressive, and relapse tends to occur around 2-5 years after diagnosis. In contrast, ER-positive tumors have a lower risk of recurrence in the first 5 years after diagnosis (Early Breast Cancer Trialists' Collaborative Group (EBCTCG), 2005; Hess et al, 2003). Hence, metastases in ER-positive subtypes generally become clinically apparent after long asymptomatic periods. It has been suggested that the variability in time-to-recurrence may be related to the ability of specific cancer cells to disseminate, colonize distant tissues, and establish premetastatic niches (Gomis and Gawrzak, 2017; Zhang et al, 2013). Disseminated tumor cells (DTCs) enter dormancy in secondary organs and remain dormant for extended periods (Aguirre-Ghiso and Sosa, 2018; Gomis and Gawrzak, 2017). Thus, late recurrence is thought to arise from awakened proliferative DTCs. Decisive factors in the dynamics of the dormant-to-awakened switch seem to reside in the microenvironment. Factors such as TGFβ2, BMPs, GAS6, NR2F1, and DEC2, which are involved in the regulation of stem cell fate and pluripotency, are in fact dormancy inducers (Bragado et al, 2013; Sosa et al, 2014; Sosa et al, 2015). Although the development of new models recapitulating dormancy programs has provided great insights into metastatic processes, the mechanisms that steer DTCs into quiescence are yet unclarified (Aguirre-Ghiso and Sosa, 2018; Sosa et al, 2014).
Long noncoding RNAs (lncRNAs) are transcripts with more than 200 nucleotides, generally expressed at low levels, which can display high tissue/cell-specific activities, and are involved in multiple mechanistic roles of gene and genome regulation (Sanchez Calle et al, 2018;Ulitsky & Bartel 2013;Li et al, 2015). Because of their involvement in disease and developmental defects, lncRNAs have gained more attention as possible biomarkers or therapeutic targets. In the context of cancer, increasing evidence supports the implication of lncRNAs in tumor suppression and tumorigenesis. Specifically, in breast cancer, several lncRNAs have been assigned cooperative functions in tumorigenesis (Tracy et al, 2018). The lncRNA MALAT1 was shown to contribute to tumor progression in ER-positive (also known as luminal) cell lines and has been shown to control the expression of CD133 in the dedifferentiation process of breast cancer cells (Jadaliha et al, 2016;Zhang et al, 2012). Interestingly, the depletion of Malat1 in a metastasis-prone transgenic mouse model of breast cancer reduced lung metastases; however, primary tumors were not different in size (Arun et al, 2016). HOTAIR has been proposed as predictive marker for metastatic progression and overall survival in early-stage tumors of breast cancer (Gupta et al, 2010). On the other hand, NEAT1 and PTENP1, commonly known as tumor suppressors in various cancer types, have been shown to potentiate cell growth and tumor progression in breast cancer (Ke et al, 2016;Yndestad et al, 2017;Yndestad et al, 2018). Collectively, these nuances emphasize the tissue/ cell specificity of lncRNAs and their ability to selectively target genes and impact signaling cascades in a confined manner.
We hypothesized that specific lncRNAs might be associated with late recurrence in breast cancer. To this end, we compared transcriptional profiles of primary tumors obtained from 10 recurrent and 14 nonrecurrent ER-positive breast cancer patients. We successfully identified the lncRNA NR2F1 antisense RNA1 (NR2F1-AS1) as a main lncRNA linked to recurrence. We unveil that the regulation of NR2F1-AS1 expression is mediated by the transcriptional complex formed by the progesterone receptor (PR) and ER, and show that its gain of function steers cancer cells into a quiescence-like state through the upregulation of dormancy inducers and pluripotency markers, in addition to the activation of metastatic events. Thus, ER-positive breast cancer cells expressing NR2F1-AS1 could benefit from the activation of prosurvival signaling cascades, upregulate metastasis-related biological processes, and bear the ability to enter dormancy.
Clinical specimens
Clinical specimens from luminal breast cancer patients were provided by the National Cancer Center Hospital (Tsukiji, Tokyo, Japan). This study was approved by the Internal Review Board of the National Cancer Center, Tokyo, Japan (no. 2013-173) and conducted according to the Declaration of Helsinki, and all participants gave their written consent. In total, 24 primary tumor samples were collected by needle biopsy. Fourteen samples were classified as nonrecurrent (no recurrence observed for at least 10 years), and 10 samples were primary tumors from patients who relapsed within 10 years. Clinical information is shown in Table 1 and Table S1 (treatment information after surgery).
Cell lines, culture conditions, and transfections
The cell lines were purchased from ATCC in 2016 and authenticated using STR profiling. All cell lines were routinely cultured in RPMI 1640 supplemented with 10% FBS without antibiotics at 37°C and 5% CO 2 . The plasmids transfected into the BT474 cell line were pcDNA3.1-P2A-eGFP containing the sequence of the short Var4 of NR2F1-AS1 and pcDNA3.1/Hygro(+) containing the sequence of the long Var1 of NR2F1-AS1; pcDNA3.1-P2A-eGFP and pcDNA3.1/Hygro(+) were used as negative controls (GenScript, Piscataway, NJ, USA). Transfection was performed by nucleofection using the same parameters (Nucleofector TM 2b Device, Lonza Bioscience, Basel, Switzerland).
RNAi
Cell lines were transfected for 72 h with 5 nM target siRNA or the negative control No. 1 Silencer Select, Ambion #4457171 (Life Technologies, Tokyo, Japan) using Lipofectamine RNAiMAX #13778075 (Invitrogen, Thermo Fisher Scientific, Tokyo, Japan). The sequences targeting human ER and PR are described in Table S2.
RNA isolation and quantification
cDNA was synthesized from 1 μg of total RNA isolated from tissue or cells using the High-Capacity cDNA Reverse Transcription Kit #4374967 (Applied Biosystems, Tokyo, Japan). Target genes were detected using probes from TaqMan Gene Expression Assays (Thermo Fisher Scientific, Tokyo, Japan) or using specific primers as shown in Table S2 with Platinum SYBR Green qPCR SuperMix-UDG (Thermo Fisher Scientific). Threshold cycle values were normalized to ACTB, and relative expression levels of target genes were calculated using the delta Ct method.
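The ΔCt normalization mentioned above is simple enough to write out explicitly; the sketch below assumes the common 2^-ΔCt form of the delta-Ct method and uses hypothetical threshold-cycle values:

```r
# Relative expression by the delta-Ct method: 2^-(Ct_target - Ct_ACTB).
# The Ct values below are hypothetical examples.
rel_expr <- function(ct_target, ct_actb) 2^-(ct_target - ct_actb)

rel_expr(ct_target = 27.4, ct_actb = 18.9)   # target expression relative to ACTB
```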
Chromatin immunoprecipitation assay
A commercially available SimpleChIP Plus Kit (Magnetic Beads) #9005 (Cell Signaling Technology Japan, K.K.) was used according to the manufacturer's instructions. Antibodies to the progesterone receptor (Cell Signaling Technology, 6A1, #3172) and estrogen receptor α (Cell Signaling Technology, Tokyo, Japan, D8H8, #8644) were used. The PCR primers for the promoter region of NR2F1-AS1 are shown in Table S2.
Anoikis assay
For anoikis analysis, we used the CytoSelect Anoikis Assay (CBA-081, Cell Biolabs, Inc., San Diego, CA, USA). NR2F1-AS1 variant 1, variant 4, or control plasmid was transiently transfected with Lipofectamine 3000 reagent (Invitrogen, Thermo Fisher Scientific). The transfected cells (4 × 10^4 cells per well) were plated into normal and anchorage-resistant 96-well plates. The MTT assay and fluorometric assay with calcein-AM (green fluorescence, live cells) and EthD-1 (red fluorescence, dead cells) were performed following the manufacturer's instructions. The rate of anoikis resistance was estimated by comparing cell viability and cell death rates between the normal and anchorage-resistant conditions.
In vivo analysis
NR2F1-AS1 variant 1 and control plasmids were transiently transfected into BT474 cells with Lipofectamine 3000 reagent (Invitrogen, Thermo Fisher Scientific). Two days after transfection, the transfected cells (5 × 10^5 cells) were intravenously transplanted into immunodeficient mice. Three days after injection, the mice were euthanized and dissected, and lung tissues were collected. Metastasized cells were detected by quantitative PCR of gDNA with a human-specific primer (Funakoshi et al., 2017) and a mouse-specific primer (Duleba et al., 2020), as shown in Table S2.
Microarray
Total RNA was amplified and labeled with Cy3 using a Low Input Quick Amp Labeling Kit, one color (Agilent Technologies, Tokyo, Japan), following the manufacturer's instructions. For each hybridization, 0.60 μg of Cy3-labeled cRNA was fragmented and hybridized at 65°C for 17 h to an Agilent SurePrint G3 Human GE v2 8x60K Microarray (design ID: 039494). The microarray chips were scanned using an Agilent DNA microarray scanner. Intensity values for each scanned feature were quantified using AGILENT FEATURE EXTRACTION software version 11.5.1.1, which performs background subtraction. Normalization was performed with AGILENT GENESPRING version 13.1.1 (per chip: normalization to 75th percentile shift). The altered transcripts were quantified using the comparative method. Raw and normalized microarray data are available in the Gene Expression Omnibus database (accession numbers GSE128600 and GSE128617). The intensity values were log2-transformed and imported into Partek Genomics Suite 6.6 (Partek Inc., Chesterfield, MO, USA). One-way analysis of variance was performed to identify differentially expressed genes. Fold change and P values were calculated for each analysis. Unsupervised clustering and heat map generation were performed on sorted datasets by Pearson's correlation or Ward's method with selected probe sets in Partek Genomics Suite 6.6.
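A rough equivalent of the normalization and differential-expression steps described (log2 transformation, per-chip 75th-percentile shift and per-gene one-way ANOVA) can be sketched in R as follows; `intensity` and `group` are hypothetical stand-ins for the background-subtracted signal matrix and the sample labels, and the commercial GeneSpring/Partek pipelines may differ in detail:

```r
# 'intensity': background-subtracted probe intensities (probes x samples);
# 'group': factor of sample labels. Both are hypothetical placeholders.
logint <- log2(intensity + 1)

# Per-chip 75th-percentile shift: subtract each sample's 75th percentile
shifted <- sweep(logint, 2, apply(logint, 2, quantile, probs = 0.75), "-")

# One-way ANOVA per probe, plus a log2 fold change between the two groups
anova_p <- apply(shifted, 1, function(x) anova(lm(x ~ group))$`Pr(>F)`[1])
fc      <- rowMeans(shifted[, group == levels(group)[2]]) -
           rowMeans(shifted[, group == levels(group)[1]])
```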
Dataset sources
The clinical TCGA datasets for breast cancer (TCGA-BRCA) were downloaded from the data portal of the Genomic Data Commons (GDC, https://portal.gdc.cancer.gov/projects/TCGA-BRCA). Kaplan-Meier plots of overall survival (OS) and distant metastasis-free survival (DMFS) were estimated for breast cancer with the complete analysis tool KM plotter (www.kmplot.com). Gene set enrichment analysis (GSEA) and Ingenuity Pathway Analysis (IPA) were used. Activated upstream regulators were considered when the IPA activation z-score value was between 2- and 4-fold (P < 0.001). For IPA, the analysis was performed following the manufacturer's instructions (https://www.qiagenbioinformatics.com/products/ingenuity-pathway-analysis/).
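The Kaplan-Meier comparison performed with the KM plotter web tool can also be reproduced locally with the survival package; the sketch below is illustrative only, with `clin` a hypothetical data frame containing follow-up time (months), event status and NR2F1-AS1 expression per patient:

```r
library(survival)

# Split patients by median NR2F1-AS1 expression (hypothetical 'clin' data frame)
clin$expr_group <- ifelse(clin$NR2F1_AS1 > median(clin$NR2F1_AS1), "high", "low")

fit <- survfit(Surv(time, event) ~ expr_group, data = clin)       # Kaplan-Meier curves
plot(fit, xlab = "Months", ylab = "Overall survival", col = c("red", "blue"))
survdiff(Surv(time, event) ~ expr_group, data = clin)             # log-rank test
```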
Statistics
Data are presented as mean ± SD of n = 3 biological samples in triplicate. For two-group comparisons, statistical significance was determined by Student's t-test or the Chi-square test. For multiple comparisons, the significance of differences in average values was analyzed using one-way ANOVA with Tukey's HSD or Dunnett's post hoc test. The limit of statistical significance for all analyses was defined as *P < 0.01 and **P < 0.001. For analyses of TCGA_BRCA datasets, Kruskal-Wallis and Wilcoxon tests were applied when P < 0.05 by the Shapiro-Wilk test.
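The test-selection rule stated in the last sentence (nonparametric tests when the Shapiro-Wilk test rejects normality) can be expressed compactly; the R sketch below is illustrative, with `x` and `y` hypothetical numeric vectors for two groups:

```r
# Two-group comparison following the rule described in the text: use the Wilcoxon
# test when either group departs from normality (Shapiro-Wilk P < 0.05),
# otherwise Student's t-test. 'x' and 'y' are hypothetical numeric vectors.
compare_groups <- function(x, y) {
  nonnormal <- shapiro.test(x)$p.value < 0.05 || shapiro.test(y)$p.value < 0.05
  if (nonnormal) wilcox.test(x, y) else t.test(x, y)
}
```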
Transcriptome analysis of 24 ER-positive breast primary tumors
To elucidate a distinctive molecular signature of recurrence, we performed transcriptome analysis on 24 nontreated clinical needle-biopsy samples from ER-positive primary breast tumors (Table 1): 10 tumors that recurred after treatment and 14 that did not. Principal component analysis (PCA) of the whole transcriptome did not show a clear separation by recurrence status or by luminal subtype (Fig. 1A). Differentially expressed genes (DEG, Fig. 1B) were clustered based on recurrence status; the clusters did not match the luminal subtypes (Fig. 1C), but a gain of cancer-related genes in the recurrence group was clearly observed (Fig. 1D). Additionally, gene set enrichment analysis (GSEA) identified enriched gene sets related to EMT, focal adhesions, and cancer stem cell-associated markers (P < 0.05) (Fig. 1E). Thus, the transcriptome data suggest that primary tumors that recurred after treatment have expression profiles distinct from those of primary tumors that did not recur.
LncRNA NR2F1-AS1 is associated with recurrence
Since lncRNAs may display highly tissue/cell-specific activities, we questioned whether this could represent an optimal feature to signify tumor recurrence in ER-positive breast cancers. Thus, we analyzed the differentially expressed lncRNAs associated with tumor recurrence. When the expression of lncRNAs was compared by recurrence status, only 35 lncRNAs were upregulated and 17 lncRNAs were downregulated in the tumors that recurred after treatment (Fig. 2A). Although only a few lncRNAs were distinctly expressed, they enabled separation of the recurrent and nonrecurrent cases (Fig. 2B). To find lncRNAs associated with both luminal A and B types, we also compared the subtypes separately and narrowed the list down to 5 candidates (Fig. 2C). The expression of these lncRNAs was high in both luminal A and B types (Fig. 2D). Further validation by quantitative PCR confirmed NR2F1-AS1 as a lncRNA likely related to recurrence (Fig. 2E). For the other 4 candidates, a trend toward higher expression in recurrent cases was observed but was not statistically significant (Fig. S1).
Clinical relevance of NR2F1-AS1 in ER-positive breast cancer
To expand our knowledge of the presence of NR2F1-AS1 in breast cancer subtypes, we analyzed datasets from The Cancer Genome Atlas Breast Cancer (TCGA_BRCA). Because of the differences in relapse between ER-negative and ER-positive subtypes, we stratified the datasets into 3 main phenotypes, luminal (ER+), HER2-positive (ER−/PR−/HER2+), and TNBC (ER−/PR−/HER2−), and extracted the cases with available information about relapse status (Fig. 3A). Interestingly, HER2-positive subtypes displayed higher expression of NR2F1-AS1 (P = 0.011, Fig. 3B). However, when the cases with relapse were isolated, the presence of NR2F1-AS1 was more prominent in ER-positive luminal cases, although not statistically significant (P = 0.058, Fig. S2A). Thus, we extracted the ER-positive luminal subtypes and found a significant expression of NR2F1-AS1 in the recurrence group (P = 0.004, Fig. 3C). Also, we found that the expression of NR2F1-AS1 is significantly associated with lymph node status (Fig. S2B) and with patients who received their initial diagnosis under 50 years of age (Fig. S2C). In 2012, Curtis et al. introduced a novel classification of breast cancer subtypes based on a meta-analysis of copy number variation from 2000 breast tumors. Recently, the same group reported the associated risk of recurrence for each subtype (Curtis et al, 2012; Rueda et al, 2019). The latter study shows that the IntClust subtypes with the highest risk of late relapse, up to 20 years, are enriched in ER+/HER2− tumors. In line with these findings, we extracted the ER+/HER2− cases and divided them according to relapse status. Strikingly, the relation between recurrence, ER+/HER2− status, and the expression of NR2F1-AS1 was significant (P = 0.017), supporting the association of NR2F1-AS1 with late recurrence (Fig. 3D). Additionally, using another public database, a Kaplan-Meier analysis of breast cancer patients indicated that high NR2F1-AS1 levels correlated with poor overall survival (OS) and distant metastasis-free survival (DMFS), even when restricted to ER-positive cases (Fig. 3E,F).
ER and PR negatively regulate NR2F1-AS1 transcription
To understand whether NR2F1-AS1 is related to the ER-positive subtype, we first addressed whether its presence was associated with the hormone receptors. When only recurrence cases were used, we noted an inverse correlation between NR2F1-AS1 and PR (Fig. 4A), although ER showed only a weak correlation with NR2F1-AS1. In contrast, nonrecurrent cases did not show a significant correlation with any hormone receptor (Fig. S3), suggesting that the presence of NR2F1-AS1 is more tightly related to recurrence than to the ER-positive subtype itself.
To interrogate the biological relevance of the presence of NR2F1-AS1, we screened its expression in 9 genotypically distinct breast cancer cell lines. Briefly, we observed that NR2F1-AS1 expression was higher in the absence of hormone receptors (Fig. 4B); this was also confirmed by the correlation of NR2F1-AS1 with hormone receptor expression in the cell lines (Fig. S4). Cell lines expressing higher NR2F1-AS1 levels included those representative of the TNBC and HER2-positive subtypes and, interestingly, the ER-positive luminal-type MCF7 and T47D cell lines. Other ER-positive cell lines, such as BT483, ZR-75-1, and BT474 cells, showed no quantifiable expression of NR2F1-AS1. Notably, the MCF7 and T47D lines are derived from metastatic sites of pleural effusion, whereas the other ER-positive cell lines were originally derived from nonmetastatic sites. This finding prompted us to consider that the expression of NR2F1-AS1 is linked to the kinetics of metastasis.
Previous studies have reported that the physical interaction of PR and the ER transcriptional complex can activate and redirect transcriptional outputs in breast cancer cells (Carroll et al, 2017). Since our clinical recurrence samples showed an inverse correlation between PR and NR2F1-AS1, we evaluated the potential chromatin binding of PR and ER to the NR2F1-AS1 promoter region. We employed ChIP-qPCR in the ER-positive cell lines expressing higher levels of NR2F1-AS1, namely MCF7 and T47D cells. A gain of enrichment for PR over ER was observed and was more apparent in T47D cells, which have markedly higher PR levels than MCF7 cells (Fig. 4C). Our data suggested that the transcriptional regulation of NR2F1-AS1 is most likely mediated by PR. However, the inverse correlation was indicative of repression of NR2F1-AS1 expression. To confirm this, we transiently knocked down the expression of PR and ER by siRNA (Fig 4D and E). Consistently, the expression levels of NR2F1-AS1 increased upon the transient depletion of ER and PR in both cell lines ( Fig. 4F and G).
To further confirm whether ER-PR signaling inhibits the expression of NR2F1-AS1, we exposed 3 different ER-positive breast cancer cell lines to low doses of tamoxifen for 72 h to avoid compromising cell viability. After treatment with low doses of tamoxifen, the expression of ER decreased slightly, and the levels of NR2F1-AS1 markedly increased in MCF7 and T47D cells (Fig. 4G). The BT474 cell line, which does not show detectable levels of NR2F1-AS1, showed a slight increase in NR2F1-AS1 expression when exposed to 10 nM TAM for 72 h. Collectively, our data indicated that the ER-PR transcriptional complex negatively mediates the transcriptional expression of NR2F1-AS1. Interestingly, in the early stage of ER-positive breast cancer, high levels of PR are linked to decreased metastasis (Mohammed et al, 2015; Thomas and Gustafsson, 2015). Thus, we wondered whether this could relate to the presence of NR2F1-AS1 (Fig. S5).
Cell viability became progressively compromised, and the cell population was dramatically reduced to a few cells.
The remaining cells were maintained, and at 60 days, small colonies could be observed. After 75 days, the colonies displayed remarkable morphological changes compared with control BT474 cells (Fig. 5B). We also confirmed p21 and p27 gene expression and protein levels in BT474-Var1 and BT474-Var4, and increases in p21 and p27 levels were observed (Fig. S6).
With overexpression of both NR2F1-AS1 variants, a large number of genes were differentially expressed (Fig. 5C). PCA mapping with whole transcriptome revealed that Var1 and Var4 showed distinct expression profiles (Fig. 5D). We confirmed that surviving colonies overexpressed their corresponding transfected NR2F1-AS1 variants (Fig. 5E, left). Strikingly, the overexpression of Var1 induced the upregulation of endogenous Var4, but the converse was not observed, suggesting coordinated transcriptional activity. In line with our previous correlations, PR and ER were downregulated upon the overexpression of both NR2F1-AS1 variants; in particular, the presence of Var4 seemed to exert a major effect on ER and PR expression (Fig. 5E, right).
Tumor cell dormancy can be fueled through distinct cues, such as the protein-coding genes TGFβ2 and NR2F1 (Sosa et al, 2015). Thus, since BT474-Var1 and BT474-Var4 showed attenuated cell growth and proliferation, we assessed the expression of TGFβ2 and NR2F1 (Fig. 5F, left). Contrary to our expectations, only BT474-Var1 cells displayed increased levels of TGFβ2, while NR2F1 was downregulated in both populations. Because quiescence status is closely related to the stemness required for the survival of dormant cells (Aguirre-Ghiso and Sosa, 2018), we also evaluated the pluripotency markers NANOG and OCT4, which were upregulated only in BT474-Var1 cells (Fig. 5F, right). This finding underlined the functional divergence caused by the simultaneous coexpression of the two NR2F1-AS1 variants versus Var4 alone. Then, we examined the expression of commonly known dormancy inducers and cyclins involved in cell cycle arrest (Fig. 5G). Although their expression levels differed between Var1 and Var4, both showed equal stimulation of the transcription factor differentially expressed in chondrocytes 2 (DEC2), which is known to induce dormancy (Aguirre-Ghiso et al, 2013; Aguirre-Ghiso and Sosa, 2018; Gomis and Gawrzak, 2017; Sosa et al, 2014). Next, we addressed the differentially represented pathways by GSEA and found that both populations strongly downregulated proliferation-related pathways such as E2F targets and G2M checkpoints (Fig. 5H), as well as MYC targets and mitosis-related processes (Fig. S7A). To further investigate the function of NR2F1-AS1, we knocked down NR2F1-AS1 by siRNA in the MCF7 cell line (Fig. S8A). Although the knockdown of NR2F1-AS1 was confirmed by qRT-PCR, no significant change was observed in the NR2F1-AS1 knockdown cells. Moreover, GSEA could only report a slight downregulation of the TGFβ signaling pathway at P < 0.01 (Fig. S8B).
NR2F1-AS1 may endow metastatic potential to ER-positive breast cancer cells
We further scrutinized significantly enriched pathways in BT474-Var1 and BT474-Var4 cells and found hypoxia and glycolysis, with predominant upregulation of immune-related pathways based on GSEA (Fig. 6A). These representative pathways have been considered indicators of dormancy in previous studies of dormant hematopoietic stem cells (Cabezas-Wallscheid et al, 2017) and of nonproliferative cells from the inner mass of multicellular spheroids of colon carcinoma cells. GSEA could only differentiate EMT and KRAS signaling as enriched in BT474-Var4 cells compared to BT474-Var1 cells (Fig. S7B), suggesting that the NR2F1-AS1 variants mainly elicit the activation of similar pathways. Using Ingenuity Pathway Analysis (IPA), we identified the top biological functions and diseases from the annotated genes that were differentially expressed in BT474-Var1 and BT474-Var4 cells (Fig. 6B). Notably, the overexpression of both variants increased biological functions encompassed in the metastatic network. Similar to the GSEA results, BT474-Var4 cells presented a remarkably enriched oncogenic signature compared to BT474-Var1 cells. Hence, albeit both variants enhance the metastatic potential of BT474 cells, they may trigger differential transcriptional responses.
Next, we selected the upstream regulators that were commonly activated in BT474-Var1 and BT474-Var4 cells and were linked to dormancy programs (Fig. 6C). Among them, we found all-trans retinoic acid (atRA), p38 MAPK, and STAT1. Importantly, atRA has been ascribed a role in sustaining dormancy (Cabezas-Wallscheid et al, 2017; Müller-Hermelink et al, 2008). Similarly, proliferating squamous cell carcinoma cells entered dormancy and induced TGFβ2 in a p38-dependent manner upon treatment with atRA (Sosa et al, 2015). Furthermore, high p38 MAPK and low ERK1/2 levels are required for tumor cell quiescence because the activation of p38 may induce growth arrest (Sosa et al, 2014; Zhang et al, 2013). Another upstream regulator is STAT1, which has been implicated in the arrest of cell proliferation by means of JAK2/STAT1 (Vander Griend et al, 2005). The phosphorylation of STAT1 and p38 MAPK was examined (Fig. S9); the results showed increased phosphorylation of STAT1 in both BT474-Var1 and BT474-Var4, although phosphorylation of p38 MAPK was observed only in BT474-Var4. Because the BT474-Var1 population exhibited a higher degree of complexity, we unilaterally dissected its molecular signature. Among the activated upstream regulators, we could recognize patterns of both dormant and proliferative DTCs (Fig. 6D). Since late metastasis has been ascribed to the reactivation of dormant DTCs, these results highlighted the coexistence of cell subpopulations at different points of the dormant-to-awakened state. To resume proliferation, quiescent cells activate EGF, RAS, TGFβ1, and TGFβ3, whose upregulation has been attributed to higher malignancy in breast tumors (Lo et al, 2006). To investigate the metastatic potential of NR2F1-AS1, we first tested whether NR2F1-AS1 influences anoikis resistance in BT474 cells. As shown in Fig. S10, both variants tended to increase anoikis resistance based on cell viability and cell death under an anchorage-independent condition. Next, we transplanted BT474-Var1 cells intravenously into immunodeficient mice and examined the metastatic potential of NR2F1-AS1 in the mouse lung by detecting transplanted BT474 cells with qPCR using a human-specific gDNA primer. Although the difference was not statistically significant, BT474-Var1 cells were more frequently detected than control cells in the mouse lungs (Fig. S11). Collectively, our results suggest that NR2F1-AS1 supports tumor cell survival through the activation of metastasis-related events and dormancy programs, but is not sufficient to sustain prolonged quiescence without the support of extrinsic microenvironmental factors.
Discussion
The metastatic cascade encompasses the events of invasion, neoangiogenesis, intravasation, dissemination, extravasation, dormancy, and colonization (Dasgupta et al, 2017; Giancotti, 2013). After tumor cells have extravasated into secondary organs, they may enter dormancy and remain dormant for long asymptomatic periods. Thus, late recurrences are thought to arise from awakened DTCs that establish premetastatic niches and colonize the new tissue. We found that the expression of the NR2F1-AS1 variants activates biological processes related to the metastatic cascade (Fig. 6B). Further enrichment of EMT, hypoxia, and inflammatory response pathways, along with activated upstream regulators such as HIF1α, VEGFA, and ICAM-1, was also found in both populations (Fig. 6A,C). It is broadly accepted that circulating tumor cells must display an EMT signature to overcome hostile environments throughout the multistep metastatic cascade (Dasgupta et al, 2017). Chemokines that participate in the inflammatory response can regulate biological processes of cell differentiation and survival, and processes of neovascularization and extravasation require the activation of VEGFA, hypoxia, and ICAM-1 (Figenschau et al; Nobre et al, 2018; Schröder et al, 2011). Recently, a report has shown the co-regulation of hypoxia and dormancy programs in posthypoxic ER-positive DTCs from patient-derived xenografts (PDX) and a transgenic mouse model (Fluegen et al, 2017). Thus, it is likely that NR2F1-AS1-expressing tumor cells activate events of the metastatic cascade, including cell survival and dormancy.
The viability of NR2F1-AS1-transfected BT474 cells was severely affected (Fig. 5B). The activation of apoptotic signaling was confirmed by GSEA, alongside the prosurvival TNFα/NF-κB signaling pathway, which was strongly enriched (Fig. 6A). Interestingly, the dormancy inducer DEC2, which was equally upregulated in BT474-Var1 and BT474-Var4, activates antiapoptotic signaling in breast cancer cells, and its expression appears to be regulated by TNFα/NF-κB (Li et al, 2003; Olkkonen et al, 2015). Comparative analyses of genomic occupancy sites by chromatin isolation by RNA purification sequencing (ChIRP-seq) suggested that the NR2F1-AS1 variants can bind distinct genomic loci acting in trans, eliciting different transcriptional responses (Ang et al., 2019). The same study revealed that NR2F1-AS1 preferentially binds DNA regions rich in basic helix-loop-helix (bHLH) motifs; bHLH proteins constitute a family of transcription factors implicated in circadian rhythm, cell differentiation, and hypoxia. Another bHLH family member, DEC1, has been attributed with inducing proapoptotic cues and mediating the repression of DEC2 (Li et al, 2003; Liu et al, 2010). Hence, the divergence in cell survival fate dictated by the overexpression of NR2F1-AS1 could hinge on the affinity of the NR2F1-AS1 variants for genomic regions enriched in bHLH motif-containing factors. This would impose a clonal selection on the cell population, whereby residual cells expressing NR2F1-AS1 would have activated the transcription of DEC2, steering tumor cells into quiescence.
In addition to a slower cell cycle, the overactivation of NR2F1-AS1 induced phenotypical changes in tumor cells that were apparent at the transcriptomic level. The hierarchical clustering heatmap indicated 1893 DEG between BT474-Var1 and control BT474 cells, and 1544 DEG for BT474-Var4 (Fig. 5C). Among these, PR and ER were found downregulated (Fig. 5D). These observations, together with the upregulation of NR2F1-AS1 upon low doses of tamoxifen (Fig. 4H), prompted us to question whether NR2F1-AS1 could serve as a backup plan for the downregulation of ER and PR. This supposition became more consistent with a preliminary drug screen on MCF7, in which residual cells displayed gradually enhanced NR2F1-AS1 expression after the administration of a combined treatment with TAM and the CDK4/6 inhibitor palbociclib (in a mol:mol ratio) for 5 days (Fig. S12). Therefore, ER-positive breast cancer patients presenting high levels of NR2F1-AS1 would be at an increased risk of recurrence when receiving endocrine therapies.
When IPA of the activated upstream regulators was restricted to BT474-Var1, we observed 2 trends of molecular patterns corresponding to dormancy cues and proliferative cues (Fig. 6D; Sosa et al, 2014), indicating the existence of tumor cells at different points of the dormant-to-awakened state. The molecular intricacy of BT474-Var1 is likely given by the simultaneous expression of the two variants. Seemingly, Var1 acts as the main trigger of dormancy cues, with the activation of quiescence inducers and pluripotency markers, whereas, as indicated by the GSEA results (Fig. S2A, Fig. 6A), the activation of Var4 would foster EMT and the upregulation of KRAS signaling, most likely supporting the resumption of proliferative cues. The activation of HER2/Neu signaling appears to be a consequence of the overactivation of NR2F1-AS1. Interestingly, DTCs are currently characterized by the expression of multiple markers, and positive expression of HER2 is commonly observed among DTCs of different cancer types (Hosseini et al, 2016).
Although the data presented here demonstrate that NR2F1-AS1 expression is positively related to dormancy in luminal-type breast cancer, one limitation of this study is that we could not identify the key molecules or signals that allow dormant cells to awaken and expand into secondary tumors at distal organs. As shown in Fig. 6D, we found that BT474-Var1 possessed both dormancy and proliferation cues, but the growth of BT474-Var1 cells nearly stopped for the long term in vitro. One possible explanation of how the cells awaken might simply be the silencing of NR2F1-AS1 expression in breast cancer cells. To further investigate the dormant-to-awakened state in breast cancer, novel in vitro and in vivo models will be necessary to screen for the key factors that awaken cells from dormancy.
Conclusions
Collectively, we identified the biological relevance of NR2F1-AS1 in the kinetics of tumor recurrence in ER-positive breast cancers and elucidated the regulation of its expression by the PR/ER transcriptional complex. We also showed that NR2F1-AS1 overactivation induced a quiescence-like state in ER-positive breast cancer cells. These findings bring favorable prospects for developing new predictive approaches and new therapeutic strategies.
Supporting information
Additional supporting information may be found online in the Supporting Information section at the end of the article. Fig S1. qPCR validation of other candidates of lncRNAs associated with recurrence. Fig S2. A-C. Sequencing reads of TCGA_BRCA data for NR2F1-AS1 in Luminal, HER2-positive and TNBC subtypes accounting for recurrence cases (A), subtracted clinical cases with incidence of positive lymph nodes (B), age at initial diagnosis with 50 years as the delineation point (C). Fig S3. Pearson correlation of ER, PR and ERBB2 versus NR2F1-AS1 restricted to no recurrence samples. Fig S4. Pearson correlation of ERa, ERb, PR and ERBB2 versus NR2F1-AS1 in 9 breast cancer cell lines. Fig S5. Ki67 staining in NR2F1-AS1-transfected BT474 cells. Fig S6. p21 and p27 levels in NR2F1-AS1-transfected BT474 cells. Fig S7. GSEA analysis for BT474-NR2F1-AS1. Fig S8. Analysis of NR2F1-AS1 knockdown in MCF7 cells. Fig S9. Phosphorylation levels of STAT1 and p38 MAPK in NR2F1-AS1-transfected BT474 cells. Fig S10. Anoikis resistance of NR2F1-AS1-transfected BT474 cells. Fig S11. Metastatic potential of NR2F1-AS1. Fig S12. Expression of NR2F1-AS1 in a drug treatment with MCF7 cells after 5 days of combined administration of TAM and palbociclib (in a mol:mol ratio). Table S1. Treatment information of 24 patients. Table S2. List of primers and siRNAs.
High-throughput identification of FLT3 wild-type and mutant kinase substrate preferences and application to design of sensitive in vitro kinase assay substrates
Acute myeloid leukemia (AML) is an aggressive disease that is characterized by abnormal increase of immature myeloblasts in blood and bone marrow. The FLT3 receptor tyrosine kinase plays an integral role in haematopoiesis, and one third of AML diagnoses exhibit gain-of-function mutations in FLT3, with the juxtamembrane domain internal tandem duplication (ITD) and the kinase domain D835Y variants observed most frequently. Few FLT3 substrates or phosphorylation sites are known, which limits insight into FLT3’s substrate preferences and makes assay design particularly challenging. We applied in vitro phosphorylation of a cell lysate digest (adaptation of the Kinase Assay Linked with Phosphoproteomics (KALIP) technique and similar methods) for high-throughput identification of substrates for three FLT3 variants (wild-type, ITD mutant, and D835Y mutant). Incorporation of identified substrate sequences as input into the KINATEST-ID substrate preference analysis and assay development pipeline facilitated the design of several peptide substrates that are phosphorylated efficiently by all three FLT3 kinase variants. These substrates could be used in assays to identify new FLT3 inhibitors that overcome resistant mutations to improve FLT3-positive AML treatment.
Introduction
Acute myeloid leukemia (AML) is an aggressive cancer with a diverse genetic landscape. The FLT3 gene encodes a receptor tyrosine kinase (FLT3) that regulates hematopoiesis, and perturbations to its signaling pathways appear to promote AML disease progression. In fact, FLT3 is implicated as a major factor in AML relapse. 1 Thirty percent of AML cases have mutations in FLT3 that render the kinase constitutively active, 2,3 most commonly in the juxtamembrane domain and the kinase domain. 2,4,5 Internal tandem duplication (FLT3-ITD) in the juxtamembrane domain or the first tyrosine kinase domain (TKD) occurs when a segment is duplicated (head to tail), leading to the loss of repressive regions in the protein. 6 A second common mutation is a substitution of aspartic acid 835 to a tyrosine residue (D835Y) in the TKD. Both ITD and TKD mutants can activate and dimerize with wild-type FLT3. 7 The effects of these mutations on FLT3 signaling are still unclear, but one possibility is that mutant FLT3-TKD and FLT3-ITD activate alternative signaling pathways, or activate standard FLT3 pathways aberrantly, compared with the WT. Mutations in FLT3 are correlated with poor long-term prognosis 8,9 and, while patients with FLT3 mutations achieve initial disease remission similar to those with wild-type FLT3, they have an increased risk of relapse. 2,8,10 In vitro studies show that FLT3-ITD mutant-expressing cell lines are resistant to cytosine arabinoside (the primary AML therapeutic). 8 These findings prompted the use of combinatorial AML therapies that include FLT3 tyrosine kinase inhibitors (TKIs), which are frequently successful initially but often lead to FLT3 inhibitor resistance and subsequent disease relapse.
The current FDA-approved TKIs used to inhibit FLT3 were not developed specifically to target FLT3. [11][12][13] Sorafenib is a type II pan-TKI that is FDA approved for use in combination with AML chemotherapy, but it elicits no response in FLT3 variants with tyrosine kinase domain mutations. 8,[14][15][16][17] Efforts to develop FLT3 mutant-specific TKIs led to the discovery of the type II TKI quizartinib, which can inhibit the FLT3-ITD mutant and is currently undergoing phase III clinical trials for AML. 18 However, quizartinib has no activity against FLT3-TKD point mutations, and thus these mutations are the primary mode of quizartinib monotherapy resistance. [18][19][20][21] Quizartinib also has potent activity towards the Platelet Derived Growth Factor receptor (PDGFR) and c-KIT kinases, and produces side effects that may be related to their inhibition in patients undergoing a FLT3 TKI regimen. 22,23 Crenolanib, a TKI designed to target the α and β isoforms of PDGFR, has demonstrated activity against a broad range of FLT3 mutations. 1,24 Unlike quizartinib, crenolanib does not inhibit c-KIT (the main kinase implicated in undesirable side effects of quizartinib) at safe plasma concentrations, and it is undergoing phase II clinical trials in relapsed AML patients with a driver FLT3 mutation (NCT01657682). 22,25 However, recent reports have shown that secondary point mutations within the kinase domain of FLT3 can reduce crenolanib's clinical efficacy, which suggests it is only a matter of time until crenolanib-resistant mutations are found in a clinical setting. 22,25 The complex abnormality landscape of AML reduces the possibility that a single FLT3 TKI would be a viable monotherapy for AML. Although crenolanib is a promising TKI, efficient development of new inhibitors will require better assays than those currently available, and adaptable strategies that effectively screen inhibitors against mutant forms of FLT3 are especially needed. 18 Since very little is known about FLT3 substrate preferences, there are few options available when designing FLT3 activity assays. Current activity assays are limited by inefficient substrate phosphorylation, and/or phosphorylation of the substrates by the mutant variants has not been characterized. In this manuscript, we describe the development of several novel and efficient peptide substrates for FLT3 and two clinically significant mutant variants (the ITD and D835Y mutants). We adapted the "Kinase Assay Linked with Phosphoproteomics" (KALIP) 26,27 strategy (from the Tao lab) to perform high-throughput determination of FLT3's preferred peptide substrate motif, in a manner similar to other previously reported methods (e.g., Kettenbach et al from the Gerber group). 28 In these approaches, a cell lysate digest is stripped of endogenous phosphorylation and used in a kinase reaction as a pseudo-"library" of peptides to determine kinase substrate preferences; phosphorylated sequences are then identified via enrichment and mass spectrometry (ideal for high-throughput analysis of many substrates simultaneously without requiring radioactivity or other labeling). 29,30 We then used the identified substrate preferences to rationally design a panel of candidate peptides incorporating key sequence features predicted to make them favorable for phosphorylation by the FLT3 kinase variants, following our previously reported substrate development pipeline KINATEST-ID. 31 We demonstrated that these substrates enable efficient inhibitor screening for all three forms of FLT3.
These peptides could be used in many different types of drug discovery settings to more rapidly and efficiently screen for and validate FLT3 inhibition.
Cell Culture and Endogenous Peptide Sample Preparation-KG-1 cells (ATCC) were
maintained in IMDM media (Gibco) supplemented with 10% heat inactivated fetal bovine serum (FBS), 1% penicillin/streptomycin in 5% CO2 at 37 °C. KG-1 cells were washed with 30 mLs of phosphate buffered saline (PBS) 5 times. The cells were then pelleted at 1,500 RPM for 5 minutes and lysed with buffer containing 8 M urea, 0.1 M ammonium bicarbonate pH 8.5, 20% acetonitrile (ACN), 20 mM dithiothreitol (DTT), and 1X Pierce Phosphatase Inhibitor tablet (Roche) pH 8.0. Lysed cells were incubated on ice for 15 minutes and then were subjected to probe sonication to shear the DNA. Lysates were treated with 40 mM iodoacetamide and incubated at room temperature (protected from light) for 60 minutes. Samples were then centrifuged at 15,000 RPM for 30 minutes to remove cellular debris. Urea concentration was diluted to 1.5 M using 50 mM ammonium bicarbonate buffer (pH 8.0) and the samples were set up for trypsin digestion at a 1:50 trypsin (ThermoScientific) ratio and incubated at 37 ˚C overnight. Trypsin digestion was quenched by adding 10% trifluoroacetic acid (TFA) in water to lower the pH below 3. Subsequently, the tryptic digest was desalted using hydrophilic-lipophilic balanced copolymer (HLB) reverse phase cartridges (Waters) and vacuum dried.
Alkaline Phosphatase Treatment-Samples were reconstituted in alkaline phosphatase dephosphorylation buffer containing 50 mM tris(hydroxymethyl)aminomethane hydrochloride (Tris-HCl) and 0.1 mM ethylenediaminetetraacetic acid (EDTA) at pH 8.5. Alkaline phosphatase (6 U, Roche) was added to each sample, followed by incubation for 90 minutes at 37 ˚C. The reaction was quenched by incubating the samples at 75 ˚C for 15 minutes (Figure 1). 26,27
Phosphopeptide Enrichment-The enrichment kit is made up of four components: 1) loading buffer, 2) PolyMAC magnetic beads, 3) wash buffers 1 and 2, and 4) elution buffer. In brief, the dried peptides were re-suspended in loading buffer and 100 µL of PolyMAC capture beads were added to the mixture. The phosphopeptide-PolyMAC mixture was mixed at 700 RPM for 30 minutes. Subsequently, the mixture was centrifuged briefly and placed on a magnetic rack to remove the un-phosphorylated peptide solution. The beads were washed twice with wash buffer 1 and rocked for 5 minutes at 700 RPM. The phosphopeptide-PolyMAC complex was placed on the magnetic stand until the beads were immobilized by the magnet, and the supernatant was discarded; the process was repeated using wash buffer 2. The phosphopeptides were eluted from the capture beads using 300 µL of elution buffer and then vacuum dried.
False discovery rate (FDR) analysis was activated for each individual search. ProteinPilot 5.0 used a reverse database as the decoy to calculate the FDR for each independent search, 35 and we set the global 1% FDR score as our cutoff threshold.
LC-MS/MS Data Acquisition-Samples
Data processing and KINATEST-ID substrate candidate prediction.
Streamlined data processing of LC-MS data as input for KINATEST-ID algorithm substrate design-A series of novel scripts were developed to prepare and analyze the results from KALIP to design potential substrates in the KINATEST-ID platform. These steps and scripts are described in more detail in the supplemental methods, and detailed instructions on running each script in sequence are provided in the supporting information file "kinatestsop.docx". To extract and reformat the phosphopeptide sequences from the ProteinPilot distinct peptide report, we created the KinaMine program and GUI that extracts all sequences from a ProteinPilot 5.0 (SCIEX) Distinct Peptides Report output file that have phosphorylated tyrosine residues identified at a 99% confidence (1% FDR), and creates "Substrate" and "Substrate Background Frequency (SBF)" files, which contain the observed substrate sequences and the UniProt (uniprot.org) accession numbers and calculated representation of all amino acids for the proteins from which substrate sequences were identified, respectively. We created the "commonality and difference finder.r" script to identify the phosphopeptides from the "substrates" and SBF files that are shared by all of the FLT3 kinase variants, and generated the "SHARED-16H" substrate and SBF files. We extracted the UniProt accession numbers from the SBF lists and used them to download a customized FASTA file from the UniProt website that contained entries only for those protein sequences, and converted that into .csv format using the "FASTAtoCSV" script. We created the "Negative Motif Finder.r" script to extract (and in silico trypsin digest) all tyrosine centered sequences present in any of the "background" proteins, and compared them to the substrate list (KinaMINE output) to return the sequences that were not observed in the phosphoproteomics data as a best estimate of "non-substrates."
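As a rough illustration of this filtering step, the sketch below (Python, with hypothetical column names, since the exact ProteinPilot report layout is not reproduced here) selects phosphotyrosine-containing peptides passing a 99% confidence (1% FDR) cutoff and tallies background amino-acid frequencies from the parent proteins; it is a simplified stand-in, not the KinaMine program itself.

```python
import csv
from collections import Counter

def extract_py_substrates(report_csv, min_confidence=99.0):
    """Keep peptides with a phosphotyrosine passing the confidence cutoff.

    Assumes hypothetical columns 'Sequence', 'Modifications',
    'Confidence', and 'Accession' (a real ProteinPilot distinct
    peptide report uses different headers).
    """
    substrates, accessions = [], set()
    with open(report_csv, newline="") as handle:
        for row in csv.DictReader(handle):
            if float(row["Confidence"]) < min_confidence:
                continue
            if "Phospho(Y)" not in row["Modifications"]:
                continue
            substrates.append(row["Sequence"])
            accessions.add(row["Accession"])
    return substrates, accessions

def background_frequencies(protein_sequences):
    """Amino-acid composition of the proteins the substrates came from."""
    counts = Counter("".join(protein_sequences))
    total = sum(counts.values())
    return {aa: n / total for aa, n in counts.items()}
```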
KINATEST-ID streamlined processing in R. Using the Substrates, Substrate Background Frequency, Non-substrate Motifs, and Screener.csv files, the scripts "Kinatestpart1.R" and "Kinatestpart2.R" were written to replicate the functionality of the KINATEST-ID workbooks previously described. 31
Peptide Synthesis and Characterization-Peptides were prepared by standard Fmoc solid-phase synthesis, with deprotection using piperidine in dimethylformamide (DMF, Iris Biotech GMBH) over two 5-minute cycles. The peptides were purified to >95% purity by preparative C18 reverse phase HPLC (Agilent 1200 series) over a 5-25% acetonitrile/0.1% TFA and water/0.1% TFA gradient and characterized using HPLC-MS (Agilent 6300 MSD). Peptide substrates were dissolved in a PBS solution containing 5% dimethyl sulfoxide (DMSO). Absorbance measurements at 280 nm were used to determine the peptide concentration using the Beer-Lambert law (peptide extinction coefficients were calculated using Innovagen's peptide property calculator, https://pepcalc.com/).
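For the concentration step, the Beer-Lambert law gives c = A / (ε·l); a minimal sketch (assuming a 1 cm path length and an extinction coefficient taken from the peptide property calculator) is shown below with made-up numbers.

```python
def peptide_concentration_uM(a280, extinction_coeff_M_cm, path_length_cm=1.0):
    """Beer-Lambert law: A = epsilon * c * l, so c = A / (epsilon * l).

    a280: background-corrected absorbance at 280 nm
    extinction_coeff_M_cm: molar extinction coefficient (M^-1 cm^-1)
    Returns the concentration in micromolar.
    """
    molar = a280 / (extinction_coeff_M_cm * path_length_cm)
    return molar * 1e6

# Example with made-up values: A280 = 0.25, epsilon = 1490 M^-1 cm^-1
# (a single tyrosine) gives roughly 168 uM.
print(round(peptide_concentration_uM(0.25, 1490), 1))
```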
In Vitro Kinase Assays-Recombinant
Experimental design and statistical rationale-The trypsin digestion "library" preparation was performed for two independent replicates, each of which was subjected to an FLT3 kinase variant enzyme in triplicate (Figure S1). Upon conversion of the raw mass spectrometer files into MGF format, the resulting six kinase-treated files per FLT3 variant were combined into one. An extra sum-of-squares F test with a p value threshold of 0.05 was performed to identify differences in the reported IC50 curves for each TKI against the FLT3 kinase variants.
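The extra sum-of-squares F test compares a constrained fit (for example, one shared IC50 across variants) with an unconstrained fit (a separate IC50 per variant). A generic sketch of the F statistic and p value, given the residual sums of squares and degrees of freedom from two nested fits, is shown below; it is a simplified stand-in for the Prism calculation, with fabricated example numbers.

```python
from scipy import stats

def extra_sum_of_squares_f_test(ss_null, df_null, ss_alt, df_alt):
    """F test for nested curve fits.

    ss_null/df_null: residual sum of squares and degrees of freedom of the
    simpler (shared-parameter) model; ss_alt/df_alt: those of the richer model.
    """
    f_stat = ((ss_null - ss_alt) / (df_null - df_alt)) / (ss_alt / df_alt)
    p_value = stats.f.sf(f_stat, df_null - df_alt, df_alt)
    return f_stat, p_value

# Example with made-up fit results; reject the shared-IC50 model if p < 0.05.
f_stat, p = extra_sum_of_squares_f_test(ss_null=12.4, df_null=28, ss_alt=7.9, df_alt=26)
print(f_stat, p)
```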
In vitro kinase reaction to identify substrates for input/analysis with the KINATEST-ID pipeline
Similar to the KALIP method 26 and others, 37 we used trypsin-digested cell lysate as a non-randomized peptide "library" to determine FLT3 kinase substrate preferences. Briefly, AML KG-1 cells were grown to log phase, lysed with a urea lysis buffer and digested with trypsin as described above. Following trypsin digest, peptides were treated with alkaline phosphatase to remove endogenous phosphorylation from tyrosines. The phosphatase-treated digest was then divided into aliquots that were processed in parallel: one treated with kinase reaction mixture but no added kinase (negative control), and the others treated with each FLT3 kinase variant.
Overall, we identified more than 10-fold more substrates for FLT3 and the two mutant variants than we had curated for most of the kinases we had evaluated in our previous publication using KINATEST-ID, in which substrate numbers ranged from ~15 to ~170. 31 Generally, for all variants, acidic amino acids were overrepresented and basic amino acids were underrepresented N-terminal to the phosphotyrosine, while hydrophobic amino acids and glutamine/asparagine were slightly overrepresented C-terminal to the phosphotyrosine. Given the lack of substantial differences between the PSMs for the WT and mutant variants, we focused on the substrates that were observed in common for all three kinase variants to move forward with design of novel peptides that could be used as substrates in FLT3 activity assays.
KINATEST-ID-based design of novel FLT3 Artificial Substrate peptides (FAStides)
We then used the "SHARED-16H" dataset and employed the next steps of the KINATEST-ID approach to design a set of candidate sequences for synthesis and biochemical testing. This process used the KINATEST-ID "Generator" tool (via the Kinatest part 2.r script) to create a list of sequences comprising all the permutations of the amino acids overrepresented at each position by at least two standard deviations from the mean ( Figure 3A-B). One caveat was that while tyrosine was observed as overrepresented at -1 and +1 to the phosphotyrosine, we chose to exclude it from the preference motif for this iteration of substrate design, due to the potential ambiguity in assay signal that could ultimately be introduced by having more than one phosphorylatable residue in the designed substrates. Additionally, F was included as an option at position +1 (despite having relatively low representation) as a hydrophobic alternative to the more highly represented I and V in an attempt to provide better specificity, due to the high frequency of those two amino acids at that position in the motifs for other tyrosine kinases.
Permutation of the motif was then followed by scoring of the resulting 19,201 sequences against the PSMs for WT FLT3, the two mutant variants, and a panel of other kinases, 31 using the KINATEST-ID "Screener" tool ( Figure 2). The sequences and their scores against the PSMs are summarized in Figure 3C while their "off-target" kinase PSM scores are summarized in Figure 5G. We chose one set of sequences that scored well for FLT3 but scored poorly for other kinases (sequences A, D, G and H). Since the highest scoring sequences for FLT3 also scored well for several other off-target kinases in the Screener panel, we selected another set of those sequences (sequences B, C, E and F) to have a higher likelihood of obtaining an efficient (though potentially not FLT3-specific) substrate. Additionally, we synthesized two control sequences (Tables 1 and 2) that were previously reported to be phosphorylated by FLT3 (ABLtide and FLT3tide). 39,40 (figure 3 goes here)
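Conceptually, the Generator/Screener steps amount to enumerating every combination of the over-represented residues at each position and scoring each candidate against a position-specific scoring matrix. The toy sketch below (Python, with an invented preference set and made-up log2-odds values rather than the real FLT3 motif or matrices) shows the general idea; it is not the actual KINATEST-ID code.

```python
from itertools import product

# Hypothetical allowed residues per position around the central Y (not the real motif).
preferences = {-3: "DE", -2: "DE", -1: "DN", +1: "FIV", +2: "EN", +3: "FPT"}

# Hypothetical log2-odds scores (observed vs. background) for a few residues.
psm = {
    (-3, "D"): 1.2, (-3, "E"): 0.8, (-2, "D"): 1.0, (-2, "E"): 0.9,
    (-1, "D"): 0.7, (-1, "N"): 0.5, (+1, "F"): 0.3, (+1, "I"): 0.9,
    (+1, "V"): 0.8, (+2, "E"): 0.6, (+2, "N"): 0.7, (+3, "F"): 0.4,
    (+3, "P"): 0.5, (+3, "T"): 0.3,
}

def score(candidate):
    """Sum the positional log-odds contributions for one candidate sequence."""
    return sum(psm.get((pos, aa), 0.0) for pos, aa in candidate.items())

positions = sorted(preferences)
candidates = [dict(zip(positions, combo))
              for combo in product(*(preferences[p] for p in positions))]
ranked = sorted(candidates, key=score, reverse=True)
print(len(candidates), "candidates; best score:", round(score(ranked[0]), 2))
```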
In vitro validation of FAStide sequences as FLT3 kinase variant substrates
We tested the candidate sequences via in vitro kinase assays using anti-phosphotyrosine antibody 4G10 in a chemifluorescent Enzyme-Linked Immunosorbent assay (ELISA) as the readout to detect peptide phosphorylation, with sample aliquots quenched at 2 and 60 minutes. Figure 4A shows that the sequences that were most efficiently phosphorylated by FLT3-WT, D835Y and ITD contained the DXDXYXNXN motif. Figure 4A shows that S was well-tolerated at position -3 (while N and H were less tolerated), both N and D were tolerated at position -1, and F was the preferred amino acid at position +1 while A was not well tolerated. Sequences that contained F, P or T residues at position +3 were phosphorylated by all FLT3 kinases. The control peptides (FL-ABLtide and FLT3tide) were also scored against the PSMs for all of the datasets ( Figure 3C) and assayed in parallel with the FAStide sequences against the FLT3 kinase variants. The previously reported substrate FLT3tide 39 scored poorly against our models and was a poor FLT3 substrate in our assays ( Figure 4A). ABLtide, a previously reported FLT3 substrate, 40 scored moderately against the SHARED-16H dataset model, and performed moderately as a substrate.
Evaluating the relationship between substrate input datasets and resulting PSM model scores vs. biochemical assays
FAStide sequences A, B, D, E, F and G generally had higher PSM scores than C and H, and were all phosphorylated more efficiently than C and H in the assays. The FLT3tide reference peptide scored poorly in all the matrices and was phosphorylated very poorly in the assays, and FL-ABLtide scored moderately and was phosphorylated moderately relative to the best of the FAStide sequences in the assays. For the longer KALIP kinase reactions (16 hours), the strongest correlations were found between each variant's biochemical assay results at 60 min and the WT-16H dataset-derived PSM scores ( Figure 3C). The FAStide sequences scored the lowest with the D835Y-16H PSM model, which contained the largest substrate list (2010 substrates), and their scores had lower Spearman correlations with assay results from the WT, D835Y, and ITD variants, respectively. This is most likely attributable to a combination of the larger size of the input dataset for that PSM and that the FAStides were designed based on the selected subset of that data shared with the other two variants, rather than the entire dataset used to derive this PSM model. The FAStide sequences received higher scores using PSM models with the smallest substrate lists (SHARED-16H and ITD-16H), which was also primarily an artifact of all or nearly all of the sequences in those smaller datasets being used to design the FAStides in the first place. However those scores were less correlated with the biochemical assay results, which might also arise from smaller dataset artifacts or some other, as yet unidentified factor affecting the accuracy of the predictive models from those datasets.
Evaluating the length of time of in vitro kinase treatment on substrate motif prediction
We also examined the effect of reaction time in the kinase treatment step on the ability of the KALIP-KINATEST-ID process to identify efficient substrates that can be used in enzyme assays.
We performed a two-hour kinase reaction using FLT3-WT kinase and processed the data (referred to hereafter as WT-2H) as described above to determine if the KALIP kinase treatment time affected 1) the characteristics of the preference motif arising from a given dataset, and 2) its utility for subsequent substrate design. We identified 888 phosphopeptides from the 2H KALIP FLT3-WT kinase treatment (relative to 1559 for the 16-H treatment as described above).
We compared the "WT-2H" to the "WT-16H" substrate list and found 559 sequences shared by both. These sequences, referred to as the "WT-OVERLAP" substrate and background frequency lists, represented sequences that were likely to have been phosphorylated rapidly and robustly ( Figure 4B). The corresponding SDV values were compared to those for each substrate list from the WT-2H and WT-16H experiments, as shown in Figure S3.
(figure 4 goes here)
Overall, the preferences at each position as represented by the SDV tables were similar ( Figure S3), however subtle differences in the WT-2H dataset from the two hour incubation resulted in a scoring model that appeared to more accurately reflect the substrate phosphorylation efficiency for the WT kinase as observed in the biochemical experiments relative to the models derived from the WT-16H dataset. PSM scores derived from the WT-2H, WT-16H and WT OVERLAP datasets were compared with the assay results for the eight FAStides and two control peptides ( Figure 4A). Spearman correlations are shown in Figure 4C. Similar to the WT-16H, the assay signal at 60 min and the PSM scores generated via the WT-2H and WT-OVERLAP datasets were very highly correlated, appearing slightly stronger for WT-2H than for WT-OVERLAP and WT 16H. This may indicate that shorter KALIP kinase treatment time is better for determining more efficiently (i.e. rapidly) phosphorylated substrates, and that longer treatment may "dilute" the substrate preference motif with less efficient sequences.
In vitro characterization of FAStide FLT3 specificity
While in vitro recombinant kinase assays using the novel peptides identified here would not require exquisite specificity, since they would typically employ purified kinase, we wanted to perform a limited assessment of the "off-target" phosphorylation of these peptides using a small panel of recombinant tyrosine kinases. Additionally, our panel included the receptor tyrosine kinase ALK, which we previously observed to phosphorylate similar sequences to those phosphorylated by SRC. 41 To ensure each of those recombinant kinases was active, we performed an in vitro kinase assay with several reference peptides that have been previously characterized in our laboratory for the kinases in the panel.
The "universal" tyrosine kinase substrate that we previously reported U5 (DEAIYATVA) 34 was the reference peptide chosen for KIT, PDGFRβ and BTK, and an ALK substrate we previously reported (ALAStide) 41,42 was used for ALK ( Figure 5A-F). SFAStide-A (DEDIYEELD) 31 was used as the reference peptide for SRC and LYN kinases. Briefly, the kinases were preincubated with kinase reaction mixture and the in vitro reaction was initiated by the addition of substrate peptide. The samples were quenched at 2 and 60-minute time points as described above.
Phosphorylation was measured using the previously described ELISA-based assay 31,36 and results are shown in Figure 5.
Overall, the sequences with the least off-target phosphorylation in this panel were FAStide-A, which was phosphorylated moderately by c-KIT and PDGFR (both FLT3 family members) and LYN after 60 minutes but not the others in the panel, and FAStide-G, which was phosphorylated to a relatively low degree after 60 minutes only by c-KIT and SRC ( Figure 5A and 5E). FAStide-B, -D, -E and -F were phosphorylated by more of the off-target kinases by 60 min and FAStide-C was a robust substrate for both SRC and LYN. PDGFRβ and BTK, on the other hand, did not phosphorylate any of the artificial sequences over a 60-minute incubation. The off-target in vitro kinase assay results were mostly but not entirely consistent with the Screener predictions, which is not surprising given that Screener is limited by the PSM models built into its cross-referencing algorithm-the main caveat is that the PSM models in Screener all come from the previously developed KINATEST-ID package 31 that did not have KALIP phosphoproteomics data as input.
A future goal is to update current kinase PSMs with newly generated KALIP data, as well as adding data from more kinases to improve the cross-referencing depth.
Detection of FLT3 kinase variant inhibition through FAStide in vitro phosphorylation
To demonstrate how the FLT3 artificial substrates can be used to monitor TKI efficacy, we performed dose-response (DR) assays for FLT3-WT, FLT3-D835Y and FLT3-ITD with sorafenib, quizartinib and crenolanib, three TKIs that have been characterized against the three FLT3 variants. 20,22,25,43 FAStide-E and FAStide-F were chosen for the DR assays due to their efficient phosphorylation by all three FLT3 kinase variants, and employed in parallel experiments. Each FLT3 kinase variant was pre-incubated in the kinase reaction mixture (containing ATP) with the respective TKI (0.00001 to 100 nM) for 15 minutes at 37˚C without substrate, and the kinase reaction was initiated via the addition of the substrate (37.5 µM).
Reactions were quenched after 30 min and wells analyzed using ELISA as described above.
Fluorescence values (relative fluorescence units, RFU) were collected and normalized to values for wells containing vehicle control (DMSO). In general, both substrates exhibited dose-response curves and IC50 values that were consistent with what was expected for the given inhibitor against each FLT3 variant, with one notable exception (further described below) (Figure 6, Table 1). All three inhibitors potently inhibited WT FLT3 (with IC50 values in the ~1-30 pM range). Potency towards the ITD mutant was lower for all three inhibitors (IC50 values between ~40-800 pM), with crenolanib more potent than the other two. The D835Y mutant's dose-response curves were also as expected for sorafenib and crenolanib, with sorafenib being significantly less potent than it was against the WT (~200-250-fold higher IC50), while crenolanib maintained a pM-range IC50. For quizartinib, on the other hand, dose-response curves were different for the assays performed using FAStide-E compared to FAStide-F. Quizartinib is a type II inhibitor, known to bind to the inactive "DFG-out" conformation of the kinase. Bulky, hydrophobic mutations at position 835 in FLT3 are thought to confer resistance to quizartinib, due to the effects of the side chains on the structure and dynamics of the DFG loop in the kinase domain, with the extra steric bulk disrupting the stability of the inactive conformation. 44,45 The FAStide-E quizartinib dose-response results for the D835Y mutant were consistent with this model, with essentially no significant inhibition even at concentrations as high as 100 nM (>10,000-fold higher than the IC50 for quizartinib against the WT FLT3). However, using FAStide-F as the substrate, inhibition was observed in the same IC50 range as for sorafenib. This suggests that substrate interactions may affect inhibitor binding stability, perhaps by playing a role in DFG-in/-out dynamics.
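A typical way to reduce such data is to normalize each well to the vehicle control and fit a four-parameter logistic curve to recover the IC50. The sketch below (Python/SciPy, with fabricated readings) illustrates that reduction; it is not the exact GraphPad analysis used for Table 1.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Standard 4PL dose-response model (activity vs. inhibitor concentration)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Fabricated example: inhibitor concentrations (nM) and raw RFU readings.
conc = np.array([0.001, 0.01, 0.1, 1, 10, 100])
rfu = np.array([980.0, 940.0, 700.0, 320.0, 120.0, 90.0])
vehicle_rfu = 1000.0

activity = rfu / vehicle_rfu  # fraction of the vehicle-control signal

params, _ = curve_fit(four_param_logistic, conc, activity,
                      p0=[0.05, 1.0, 0.5, 1.0],
                      bounds=([0.0, 0.5, 1e-6, 0.1], [0.5, 1.5, 1e3, 5.0]))
bottom, top, ic50, hill = params
print(f"fitted IC50 ~ {ic50:.2g} nM")
```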
Discussion
Drug resistance in AML has been a major factor in the poor 5-year survival and clinical remission rates. Treatments targeting patients with FLT3-positive AML have seen promising results, but inhibitor resistance has been detrimental to clinical efficacy. FLT3 remains a viable drug target in AML, 40,46 however, none of the current TKIs used in therapy are FLT3 specific and even once those are developed, rapid emergence of mutations that abrogate drug binding will be a continuing challenge. 11,12,47,48 This highlights the importance of having efficient assays that can be used as tools to identify specific and selective TKIs that target FLT3 and mutant variants.
In this work, we coupled in vitro kinase reactions on cell lysate digests with the KINATEST-ID pipeline to design, synthesize and validate a panel of sequences to detect the activity of a kinase that has few known substrates. Our process created a novel panel of peptides that can be used in kinase assays and provide higher phosphorylation efficiency than previously reported substrates. These findings demonstrate how the streamlined combination of KALIP and the KINATEST-ID pipeline can be used to identify novel artificial kinase substrates.
The original KINATEST-ID pipeline 31 relied on literature-validated sequences, including some from proteomic databases and positional scanning peptide library (PSPL) assays, as input for the matrices. Each sequence was manually curated by further literature examination (looking for corroboration of upstream kinase evidence via e.g. testing of non-phosphorylatable A/F mutations). This severely limited the number of substrates that could be included in the "true positive" input list, which likely resulted in less accurate predictions of optimal substrate sequences. While this was sufficient to develop effective substrates for several kinases that showed reasonable degrees of selectivity for their targets, 31 it was not optimal; further, if a given kinase did not have sufficient known substrates or PSPL assay data available, then it was not possible to make any prediction at all. Using cell lysate digest as a peptide library for high-throughput identification of FLT3 kinase variant substrates enabled a large increase in the numbers of bona fide substrate sequences that could be used to build positional scoring models.
The improved positional scoring matrix models developed from these large, empirically detected substrate sequence datasets enabled a prediction for FLT3's preferred amino acid motif, which was then used to design several potential novel substrate peptides. Scores from the positional scoring matrix models correlated well with the relative biochemical behavior of the novel substrates, especially when the input dataset comprised sequences that were observed as phosphorylated after a short kinase incubation time (2 h, which is closer to the reaction time scale used for biochemical assays in practice). This suggests that even though endogenously derived tryptic peptide libraries are somewhat biased relative to randomized/unbiased synthetic libraries 29 (given the sequence constraints imposed by their genomic origin), they are still able to provide sufficient sequence diversity to enable discovery of hundreds to thousands of substrates and accurately reveal substrate preferences for a given kinase. It also suggests that although performing the reaction at the protein level 30 may be better for developing prediction models for identifying endogenous protein substrates, performing substrate preference analyses at the peptide level in vitro is sufficient for designing peptide probes.
Intriguingly, we observed substrate-dependent inhibition for the well-characterized TKI quizartinib against the FLT3 D835Y mutant. This suggests that the particular substrate used in a screening assay might bias the interpretation of whether an inhibitor is or is not potent against a given enzyme. This highlights the importance of the substrate in inhibitor assays, and suggests that expanding the range of efficient substrates available as drug discovery tools would be beneficial. Work is ongoing to determine whether this is specific to the FLT3 D835Y mutant or is a more general issue for kinases. Other next steps will be to expand the application of this approach to the kinases previously built into the KINATEST-ID "Screener" panel, 31 in order to improve the accuracy of the positional scoring matrix models for the "off-target" kinases and achieve better predictions of selectivity during the substrate design process. While the original, previously published Screener panel was accurate enough to offer the practical ability to prefilter a large list of potential sequences down to a more manageable number, clearly the selectivity prediction was limited by the same factors (comprehensiveness of the input dataset) as the preference prediction. Ongoing efforts to apply the KALIP adaptation approach reported here to more kinases should facilitate improvement of this aspect, as well.
In summary, in this work we demonstrate the utility of generating a large dataset of bona fide substrate information, using a relatively cheap and easily produced peptide pseudo-"library" derived from cellular proteins via proteolytic digest, for defining substrate preference motifs and scoring models that enable design of efficient peptide substrates for kinase enzymes. This strategy enabled discovery of multiple substrates, some of which may influence inhibitor interactions with the enzyme and affect conclusions about inhibitor efficacy. We also anticipate that this process can be applied to orphan kinases for which little to nothing is known, to first identify substrate sequences through the in vitro proteolytic peptide library kinase reaction followed by prediction of the preference motifs from those data. Those motifs could be used to design, synthesize and validate artificial substrates, which can assist in chemical biology and drug discovery efforts to identify novel and potent inhibitors to study their biology and/or become therapeutic leads. Furthermore, this workflow could potentially be generalized to any enzyme driven disease for which substrate preference data can be determined from proteolytically or synthetically prepared peptide libraries 49 and used to design novel substrates for use in assays.
This will greatly enhance the generalization of the novel substrate probe design process we initially implemented in our first report of KINATEST-ID, 31 broadening the scope for drug discovery assay development.
Table 1. IC50 values measured by monitoring the phosphorylation of FAStide-E or -F in ELISA-based assays.
NT1014, a novel biguanide, inhibits ovarian cancer growth in vitro and in vivo
Background NT1014 is a novel biguanide and AMPK activator with a high affinity for the organic cation-specific transporters, OCT1 and OCT3. We sought to determine the anti-tumorigenic effects of NT1014 in human ovarian cancer cell lines as well as in a genetically engineered mouse model of high-grade serous ovarian cancer. Methods The effects of NT1014 and metformin on cell proliferation were assessed by MTT assay using the human ovarian cancer cell lines, SKOV3 and IGROV1, as well as in primary cultures. In addition, the impact of NT1014 on cell cycle progression, apoptosis, cellular stress, adhesion, invasion, glycolysis, and AMPK activation/mTOR pathway inhibition was also explored. The effects of NT1014 treatment in vivo were evaluated using the K18-gT121+/−; p53fl/fl; Brca1fl/fl (KpB) mouse model of high-grade serous ovarian cancer. Results NT1014 significantly inhibited cell proliferation in both ovarian cancer cell lines as well as in primary cultures. In addition, NT1014 activated AMPK, inhibited downstream targets of the mTOR pathway, induced G1 cell cycle arrest/apoptosis/cellular stress, altered glycolysis, and reduced invasion/adhesion. Similar to its anti-tumorigenic effects in vitro, NT1014 decreased ovarian cancer growth in the KpB mouse model of ovarian cancer. NT1014 appeared to be more potent than metformin in both our in vitro and in vivo studies. Conclusions NT1014 inhibited ovarian cancer cell growth in vitro and in vivo, with greater efficacy than the traditional biguanide, metformin. These results support further development of NT1014 as a useful therapeutic approach for the treatment of ovarian cancer.
Background
Ovarian cancer is a highly fatal disease that is estimated to cause 14,240 deaths in 2016 in the USA alone [1,2]. Despite advances in treatment, the 5-year overall survival for ovarian cancer is approximately 40 %. While 80 % of patients will initially respond to cytoreductive surgery and platinum-based combination chemotherapy, the vast majority of women with advanced ovarian cancer will ultimately develop a recurrence and chemoresistant disease. Thus, there is an urgent need to develop novel therapies for this deadly disease [3,4].
Obesity is associated with increased risk and worse outcomes for ovarian cancer [5]. The Ovarian Cancer Association Consortium reported that a high BMI, at all stages of life, was associated with an increased risk of developing ovarian cancer [6], while a large prospective cohort study and two systematic reviews reported an increased risk of mortality from ovarian cancer in obese patients [7][8][9]. In addition to obesity, type II diabetes appears to affect ovarian cancer survival. A recent study following 642 cases of ovarian cancer over a 10-year period found that diabetics with ovarian cancer had significantly worse overall survival as compared to non-diabetics, even after multivariable adjustment [10]. In addition, our laboratory has shown that the metabolic effects of obesity promote ovarian cancer progression and aggressiveness in a genetically engineered mouse model of serous ovarian cancer [11].
The biguanide, metformin, is one of the most widely prescribed treatments for type II diabetes. Epidemiological studies suggest that metformin use for the treatment of type II diabetes may reduce the risk of developing ovarian cancer. This reduction in risk may be due to inhibition of cellular proliferation via AMPK-dependent or AMPK-independent pathways and/or by reducing elevated systemic insulin levels [12][13][14]. Several recent studies have reported that metformin has an ability to inhibit cell proliferation, adhesion, migration, and angiogenesis in ovarian cancer cell lines and mouse models [15][16][17][18]. Our laboratory has found that metformin inhibits cell proliferation in a dose-dependent manner in ovarian cancer cell lines and reduces tumor growth in a genetically engineered mouse model of serous ovarian cancer fed with high-fat and low-fat diets (submitted).
Metformin is transported into cells by organic cation transporters (OCTs) 1, 2, and 3. These transporters are expressed at varying levels in different organs including the liver, muscle, ovary, and kidney [19,20]. OCT1 and OCT3 are highly expressed in epithelial ovarian cancer and ovarian germ cell tumors, respectively [21,22]. OCT2 is predominantly expressed in the kidney and is responsible for metformin clearance in the urine. Urinary excretion of metformin results in the short half-life of metformin as well as the wide range of peak to trough drug levels seen, particularly in patients with impaired renal function [19,23]. Recent studies have shown that inhibition of OCT2 activity by the OCT2 inhibitor cimetidine in patients treated with cisplatin resulted in a decreased cisplatin-induced nephrotoxicity by restricting the accumulation of cisplatin in the kidney [24][25][26]. Thus, development of novel biguanide agents, designed to increase their affinity for OCT1 and OCT3 while minimizing their affinity for OCT2, may result in more potent drugs with a longer plasma half-life than metformin. Biguanides with this profile may have profound effects on metabolic parameters as OCT1 is highly expressed in the liver whereas OCT3 is expressed in the skeletal muscle [19,27,28].
We have recently designed, synthesized, and screened approximately 140 biguanides in an attempt to identify compounds with a high affinity for OCT1 and OCT3 and with a reduced activity at OCT2. The biguanide, NT1014, has activity for OCT1 and 3 and reduced potency for OCT2. Moreover, NT1014 at 20 % the dose of metformin was demonstrated to result in activation of AMPK, inhibition of hepatic glucose output in rat hepatocytes, reduced rate of gastric emptying in mice, increased glucose disposal, and glucose-stimulated glucagon-like peptide-1(GLP1) release (data not shown). In the present study, we investigated the potential of NT1014 as a therapeutic agent for ovarian cancer by evaluating the anti-tumor effects of NT1014 as compared to metformin in human ovarian cancer cell lines and a genetically engineered mouse model of serous ovarian cancer.
Cell proliferation assay
The MTT assay was employed to measure cell proliferation. Briefly, the IGROV-1 and SKOV3 cells were seeded in 96-well plates at a density of 4000 cells/well and allowed to attach overnight. The culture medium was replaced with fresh medium containing NT1014 or metformin (from 0.01 to 3000 μM), and cells were incubated for 72 h. After drug treatment, MTT (5 mg/ml) was added to the 96-well plates at 5 μl/well for an additional incubation time of 1 h. The MTT reaction was terminated through the replacement of the media by 100 μl DMSO. The results were determined by measuring the absorbance at 575 nm with a micro-plate reader (Tecan, Morrisville, NC). The effect of NT1014 and metformin was calculated as a relative percentage of control cell growth obtained from DMSO (0.1 %)-treated cells grown in the same 96-well plates. Each experiment was performed in triplicate and repeated three times to assess for consistency of results.
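The readout reduces to percent-of-control arithmetic; a small sketch of that normalization (Python, with made-up absorbance values) is shown below.

```python
import numpy as np

def percent_of_control(treated_a575, control_a575, blank_a575=0.0):
    """Relative cell growth as a percentage of DMSO-treated control wells."""
    treated = np.asarray(treated_a575, dtype=float) - blank_a575
    control = np.mean(np.asarray(control_a575, dtype=float) - blank_a575)
    return 100.0 * treated / control

# Made-up triplicate absorbances for one NT1014 dose vs. DMSO-treated controls.
print(percent_of_control([0.42, 0.40, 0.44], [0.80, 0.78, 0.82], blank_a575=0.05))
```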
Cell cycle analysis
The effect of NT1014 on cell cycle progression was measured using Cellometer (Nexcelom, Lawrence, MA). Briefly, the IGROV1 and SKOV3 cells were plated at 2.5 × 10 5 cells/well in six-well plates and incubated overnight. Plates were then treated with NT1014 (from 0.1 to 1000 μM) for 24 h. The cells were harvested by trypsin digestion and washed with phosphate-buffered saline (PBS), before being re-suspended and fixed in 90 % pre-chilled methanol and stored at −20°C overnight. The cells were treated with 50 μl RNase A solution (250 μg/ml, 10 mM EDTA) for 30 min at 37°C and then stained with 50 μl of staining solution (containing 2 mg/ml propidium iodide (Hayward, MA), 0.1 mg/ ml azide, and 0.05 % Triton X-100). The final mixture was incubated for 15 min in the dark before being analyzed by Cellometer. The results were analyzed using FCS4 express software (Molecular Devices, Sunnyvale, CA). The experiments were performed in triplicate and repeated three times for assessment of consistency.
Annexin V assay
The percentage of cells actively undergoing apoptosis was assessed with the annexin V FITC assay kit. The IGROV-1 and SKOV3 cells (2 × 10⁵ cells/well) were treated with NT1014 (from 0.1 to 1000 μM) for 24 h. The cells were then collected, washed with PBS, resuspended in 100 μl of annexin V and propidium iodide (PI) dual-stain solution (0.1 μg of annexin V FITC and 1 μg of PI), and allowed to incubate for 15 min in the dark. The samples were then analyzed via Cellometer. The results were analyzed by FCS4 express software. All experiments were performed in triplicate and repeated three times to assess for consistency of response.
Cleaved caspase 3 assay
Cleaved caspase 3 was detected using the cleaved caspase 3 activity assay kit. The IGROV-1 and SKOV3 cells were seeded at 6000 cells/well in a 96-well plate for 24 h and then treated with media containing different concentrations of NT1014 (1-1000 μM) for 4 h. We then added 100 μl of caspase 3 assay loading buffer into each well, mixed gently, and incubated the cells for 60 min at room temperature. The fluorescence intensity was measured at an excitation wavelength of 350 nm and an emission wavelength of 450 nm using a plate reader (Tecan). All experiments were performed at least twice to assess for consistency of response.
ROS assay
The alteration of total production of reactive oxygen species caused by NT1014 was measured using a DCFH-DA fluorescent dye. The IGROV-1 and SKOV3 cells (1.0 × 10 4 cells/well) were seeded in black 96-well plates. After 24 h, the cells were treated with NT1014 (0.1 to 1000 μM) for 4 h to induce reactive oxygen species (ROS) generation. After the cells were incubated with DCFH-DA (20 μM) for 30 min, the fluorescence was monitored at an excitation wavelength of 485 nm and an emission wavelength of 530 nm using a plate reader (Tecan). All experiments were performed at least twice to assess for consistency of response.
Adhesion assay
Each well in a 96-well plate was coated with 100 μl laminin-1 (10 μg/ml) and incubated at 37°C for 1 h. This fluid was then aspirated, and 200 μl blocking buffer was added to each well for 45-60 min at 37°C. The wells were then washed with PBS, and each plate was allowed to chill on ice. Next, 2.5 × 10 3 cells were added with PBS to each well, followed by varying concentrations of NT1014. Each plate was then allowed to incubate at 37°C for 2 h. After this period, the medium was aspirated, and cells were fixed by adding 100 μl of 5 % glutaraldehyde and incubating for 30 min at room temperature. Adherent cells were then washed with PBS and stained with 100 μl of 0.1 % crystal violet for 30 min. The cells were then washed repeatedly with water, and 100 μl of 10 % acetic acid was added to each well. After 5 min of shaking, the absorbance was measured at 570 nm using a micro-plate reader (Tecan). Each experiment was repeated at least twice for consistency of response.
Invasion assay
Ninety-six-well HTS transwells (Corning Life Sciences, Durham, NC) coated with 0.5-1X BME (Trevigen, Gaithersburg, MD) were used to examine the effect of NT1014 on the ability of ovarian cancer cells to invade. The IGROV-1 and SKOV3 cells (50,000 cells/well) were starved for 12 h and then added in the upper chambers of the wells in 50 μl FBS-free medium. The lower chambers were filled with 150 μl medium with various concentrations of NT1014. The plate was then incubated for 4 h at 37°C to allow invasion into the lower chamber. After washing the upper and lower chambers with PBS, 100 μl calcein AM solution was added into the lower chamber and incubated at 37°C for 30-60 min. The lower chamber plate was measured by the plate reader (Tecan) using an excitation wavelength of 485 nm and an emission wavelength of 520 nm. Each experiment was performed at least twice for consistency of response.
ATP assay
ATP production was detected by using the luminometric ATP assay kit (AAT bioquest, Sunnyvale, CA), following the manufacturer's instructions. Each well of a 96-well white plate was seeded with 5 × 10 3 cells and incubated overnight. Wells were then treated with different doses of NT1014 for 24 h. Next, 100 μl of ATP assay solution was added into each well, gently mixed, and allowed to incubate for 20 min at room temperature. The luminescence intensity was measured using the luminometer mode on a plate reader (Tecan). Finally, the measured ATP levels were normalized based on viable cell counts as measured by MTT assay. The experiments were performed in triplicate and repeated three times for consistency of response.
Lactate production assay
The L-Lactate Assay Kit was used to measure L-lactate production in the medium. Briefly, after we treated cells with different concentrations of NT1014 for 24 h, 10 μl of the culture medium was transferred into a new 96-well plate, and 40 μl of distilled water was added to each well. Each well was mixed with another 50 μl of lactate assay solution, incubated for 30 min at 37°C without CO 2 . The lactate level was measured at wavelength of 490 nm using a plate reader (Tecan). The experiments were performed in triplicate and repeated twice to assure consistency.
Glucose uptake assay
The IGROV-1 and SKOV3 cells were seeded into 96-well black plates at 4000 cells/well overnight and then treated with NT1014 under varying concentrations of glucose for 24 h. After treatment, cells were cultured with 2-NBDG (100 μg/ml) in glucose-free medium for 15 min. The 2-NBDG uptake reaction was stopped by removing the medium and washing the cells twice with 200 μl HBSS (Life Technologies Corporation, Grand Island, NY). Fluorescence intensity was measured at an excitation wavelength of 485 nm and an emission wavelength of 530 nm using a plate reader (Tecan). Relative glucose was assayed compared with untreated control. Data were normalized based on the viable cell counts measured by MTT assay. All the experiments were performed in triplicate and repeated three times.
Western blot analysis
The IGROV-1 and SKOV3 cells were collected at the end of drug treatment, and total protein was extracted using RIPA buffer (Boston Bioproducts, Ashland, MA) supplemented with protease/phosphatase inhibitor. Equal amounts (30 μg) of total protein were loaded and separated by 10-12 % SDS-PAGE and then transferred to a PVDF membrane. The blot was subsequently blocked in 5 % non-fat milk and incubated with a 1:1000 dilution of primary antibodies at 4°C overnight. The membranes were then washed and incubated with the appropriate secondary antibodies for 1 h at room temperature before development. The bands were developed and quantified using an Alpha Innotech Imaging System (San Leandro, CA, USA). After developing, the membranes were stripped or washed and re-probed using antibodies against total AMPK or pan-S6 and α-tubulin (for all proteins), respectively. The intensity of bands was measured and normalized to α-tubulin. Each experiment was repeated at least twice for consistency of results.
KpB mouse model
The K18-gT121+/−; p53fl/fl; Brca1fl/fl (KpB) mouse model has been described previously in detail [11,29]. All mice were handled according to protocols approved by the UNC-CH Institutional Animal Care and Use Committee (IACUC). The KpB mice were injected with recombinant adenovirus Ad5-CMV-Cre (AdCre, Transfer Vector Core, University of Iowa) into the left ovarian bursa cavity at 6-8 weeks of age. The mice were randomly divided into three groups, with one group receiving NT1014 by daily oral gavage (75 mg/kg for 4 weeks), one group receiving metformin by oral gavage (75 mg/kg for 4 weeks), and the other group receiving placebo once the ovarian tumor size had reached 0.1 × 0.1 cm in diameter by palpation. Tumor size was monitored twice-weekly using palpation until tumors had grown to a size amenable to caliper measurement. All mice were euthanized after 4 weeks of NT1014 and placebo treatment. Tumor volume was calculated using the following: (width² × length)/2. Tumor tissues and blood samples were collected for immunohistochemical (IHC) staining and VEGF assay.
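For reference, the tumor volume formula can be written as a one-line helper; with caliper measurements of, say, width 0.6 cm and length 1.0 cm (made-up example numbers), it gives 0.18 cm³.

```python
def tumor_volume(width_cm, length_cm):
    """Ellipsoid approximation used here: (width^2 x length) / 2."""
    return (width_cm ** 2 * length_cm) / 2.0

print(tumor_volume(0.6, 1.0))  # 0.18 cm^3
```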
Immunohistochemical analysis
Five micrometer paraffin sections were prepared from the KpB mice tumors and were used for IHC analysis. Staining procedures were performed at the IHC Mice Core Facility at UNC. The following primary antibodies were used: Ki-67, phosphorylated-AKT, phosphorylated-AMPK, phosphorylated-S6, and MMP9. Further processing was carried out using ABC-Staining Kits (Vector Labs, Burlingame, CA) and hematoxylin. IHC slides were scanned by Aperio and scored by ImageScope software (Vista, CA).
Statistical analysis
Data are expressed as mean ± SEM and were compared using a two-tailed Student's t test; p < 0.05 was considered significant. Data were analyzed using Prism (GraphPad Software, La Jolla, CA, USA).
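An equivalent two-group comparison in Python (SciPy, with fabricated tumor-weight values; the study itself used Prism) would look like the following.

```python
from scipy import stats

# Fabricated tumor weights (g) for control vs. NT1014-treated mice.
control = [1.10, 0.95, 1.20, 1.05, 0.98]
treated = [0.35, 0.42, 0.30, 0.38, 0.33]

t_stat, p_value = stats.ttest_ind(control, treated)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant difference
```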
NT1014 has high affinity for OCT1 and OCT3
NT1014 was designed, synthesized, and identified in targeted screening using the ethidium bromide uptake assay in HEK293 cells (Fig. 1a). The cells were stably transfected with hOCT1, 2, or 3, and the uptake of metformin and NT1014 was measured in each cell type and in neo (control) cells at 37°C for 2.5 min (Fig. 1b).
NT1014 had a higher affinity for OCT1 and OCT3 and a reduced activity for OCT2 compared to metformin (Fig. 1c). In addition, MTT assays indicated that NT1014 treatment resulted in a 16-fold increase in growth inhibition (IC50 value) in HEK293 cells stably transfected with OCT1 compared to HEK293 control cells and a 5-fold increase in growth inhibition (IC50) in HEK293 cells stably transfected with OCT3 (Fig. 1d). These results when compared to metformin demonstrate the improved affinity of NT1014 for OCT1 and OCT3 and the reduced affinity for OCT2.
NT1014 inhibits cell proliferation in ovarian cancer cells
The IGROV-1 and SKOV3 ovarian cancer cell lines were found to express OCT1, OCT2, and OCT3 by Western blotting analysis (Fig. 2a). Using the MTT cytotoxicity assay, the IGROV-1 and SKOV3 ovarian cancer cell lines were found to have a progressive decrease in cell viability with increasing concentrations of NT1014 for 72 h (Fig. 2b). The IC50 values for the IGROV-1 and SKOV3 cells were 200 and 450 μM, respectively, suggesting that IGROV-1 cells are more sensitive to NT1014 than the SKOV3 cells. Subsequently, we compared the effect of NT1014 and metformin on cell proliferation in both cell types. We observed that NT1014 and metformin at low doses (0.01 to 10 μM) produced the same inhibitory effects on cell proliferation. However, at high doses NT1014 produced greater growth inhibition in both cell lines than metformin at the same dosages, such that the IC50 values were lower for NT1014 than for metformin (Fig. 2c, d). To further determine the growth-inhibitory function of NT1014, we examined the effect of NT1014 and metformin in primary cultures of human ovarian cancers. Cell proliferation in the nine primary cell cultures was assessed by MTT assay after exposure to NT1014 or metformin for 72 h. All nine primary cultures responded to NT1014 or metformin treatment. Lower IC50 values were found for NT1014 as compared to metformin in 6/9 of the primary cultures (Fig. 2e). These results suggest that NT1014 may have improved potency over metformin in inhibition of cell proliferation.
To investigate the effects of NT1014 on expression of OCT1, OCT2, and OCT3/4 in the IGROV-1 and SKOV3 cells, we treated both cell lines with 500 μM NT1014 in a time course fashion. NT1014 decreased OCT1 and OCT3/4 expression in both cell lines, with the greatest effects seen in both cell lines after 24 h of exposure to NT1014. NT1014 did not affect OCT2 expression in the IGROV-1 cells and slightly increased OCT2 expression after 6 h of treatment in the SKOV3 cells. Next, we treated the cells with different doses of NT1014 for 24 h and evaluated the effect of different concentrations of NT1014 on the expression of the OCTs. The level of OCT1 and OCT3/4 protein expression in both cells was decreased in a dose-dependent manner (Fig. 2f). To ascertain whether the effect of NT1014 was mediated by AMPK pathway, we characterized the effect of NT1014 on downstream targets of the AMPK/mTOR/S6 pathway. NT1014 increased phosphorylation of AMPK and decreased phosphorylation of S6 expression in both cell lines after 24 h of treatment (Fig. 2g).
NT1014 induced cell cycle G1 arrest and cellular apoptosis
The effects of NT1014 on cell cycle progression and apoptosis were evaluated in the IGROV-1 and SKOV3 cell lines. The cells were treated with NT1014 at varying concentrations for 24 h, and Cellometer was used to analyze the cell cycle. NT1014 treatment resulted in G0/G1 cell cycle arrest and reduced S phase in a dose-dependent manner in both cell lines (Fig. 3a, b). While the percent of cells in G1 phase increased from 68.2 to 87.7 %, the S phase cell population decreased from 9.6 to 5.5 % with increasing concentrations of NT1014 in the IGROV-1 cells. NT1014 also increased the percent of cells in G1 phase by 9.7 %, with a concordant reduction of S phase cells by 2.2 %, at the dose of 1000 μM in the SKOV3 cell line.
Fig. 1 NT1014 has high affinity for OCT1 and OCT3. Molecular structure of NT1014 (a). HEK293 cells were stably transfected with hOCT1, 2, and 3 (b). Blue color represents nuclei. Affinity for OCT1, OCT2, and OCT3 after treatment with NT1014 or metformin (c). MTT assays were used to assess growth inhibition by NT1014 and metformin in HEK293 cells transfected with OCT1 and OCT3 (d). *p < 0.05 and **p < 0.01
To further characterize NT1014's effects on cell cycle arrest, cell cycle-related proteins were analyzed in NT1014-treated IGROV-1 and SKOV3 cells. Western blotting results showed that NT1014 down-regulated cyclin D1, CDK4, and CDK6 protein expression and upregulated cell cycle inhibitor p21 and p27 expression in both cell lines (Fig. 3c). To confirm whether the growth inhibition of ovarian cancer cells was related to apoptosis, the apoptotic effect of NT1014 was evaluated in the IGROV-1 and SKOV3 cells by annexin V FITC stain analysis. Annexin V FITC detects the phospholipid phosphatidylserine (PS) translocation from the inner (cytoplasmic) leaflet of the cell membrane to the external surface in early apoptotic cells. The apoptotic cell population significantly increased in a dose-dependent manner in both cell lines after 24 h of exposure to NT1014 (Fig. 3d). We next determined whether the mitochondrial apoptosis pathway, which leads to caspase activation and induces cell death, was involved in NT1014-induced apoptosis in the ovarian cancer cell lines. We treated both cell lines with increasing concentrations of NT1014 for 4 h, and the activity of cleaved caspase 3 was detected by ELISA assay. A dose-dependent increase in the activity of cleaved caspase 3 was found in both cell lines in response to NT1014 (Fig. 3e). Furthermore, NT1014 produced a decrease in protein expression of BCL-XL and MCL-1 in a dose-dependent manner after treatment with NT1014 for 24 h in both cell lines (Fig. 3f). These results suggest that NT1014 inhibits cell proliferation through the induction of mitochondrial apoptosis and cell cycle G1 arrest in ovarian cancer cells.
NT1014 induces cellular stress in ovarian cancer cells
ROS have been implicated in the cellular response to stress and are involved in the mediation of apoptosis via mitochondrial DNA damage [20]. Metformin has been shown to induce cell stress in different types of cancer [30]. To investigate the involvement of oxidative stress in the anti-proliferative effect of NT1014, intracellular ROS levels were examined using the ROS fluorescence indicator DCF-DA. NT1014 and metformin (0.1-1000 μM) significantly increased ROS production in a dose-dependent manner in the IGROV-1 and SKOV3 cells after 4 h of treatment (Fig. 4a, b). In addition, NT1014 significantly increased ROS levels in both cell lines compared to metformin at a dose of 1000 μM. We next examined the alterations of endoplasmic reticulum (ER) stress-related markers after 24 h of NT1014 treatment in both cell lines. Our Western blotting results showed that NT1014 significantly induced the protein expression of Bip, PERK, and calnexin in a dose-dependent manner (Fig. 4c). These results indicate that an increase in ROS production and ER stress might also be involved in the anti-tumorigenic effects of NT1014 in ovarian cancer cells.
NT1014 inhibits cell adhesion and invasion in ovarian cancer cells
In vitro adhesion and invasion assays were performed to evaluate the effect of NT1014 on metastatic activity. Cell adhesion assays were performed using laminin-1 as an adhesion substrate. NT1014 (100 and 1000 μM) treatment of the IGROV-1 and SKOV3 cells for 2 h showed a significant reduction in adhesion to laminin-1 compared with untreated control (17-24 % in IGROV-1 cells and 10-18 % in SKOV3 cells, p < 0.05) (Fig. 5a). Both cell lines were again treated with NT1014 at different concentrations for 24 h to determine the effect of NT1014 on cell invasion. NT1014 (100 and 1000 μM) significantly decreased cell invasion activity after 24 h of treatment (15-28 % in IGROV-1 cells and 11-23 % in SKOV3 cells, p < 0.05), as determined by the transwell invasion assay (Fig. 5b).
Cell adhesion and invasion are mediated by a variety of membrane proteins as well as modulation of cytoskeletal assembly. To further analyze the effect of NT1014 on cell motility and migration of ovarian cancer cells, the levels of expression of E-cadherin, β-catenin, Slug, and vimentin were analyzed by Western blot. After 24 h of treatment, NT1014 increased expression of E-cadherin and decreased expression of β-catenin, Slug, and vimentin (Fig. 5c). Collectively, these results demonstrate that NT1014 inhibits the adhesion and invasion of ovarian cancer cells.
The effect of NT1014 on glycolytic metabolism
It is well documented that cancer cells undergo a metabolic shift to adapt and survive under harsh environments by enhancing aerobic glycolysis (i.e., the Warburg effect). Cancer cells exhibit increased expression of glucose transporters as a means to enhance glucose uptake, which in turn increases the rate of glycolytic ATP production and ultimately leads to enhanced tumor growth [31]. In order to investigate whether NT1014 affects glycolysis in ovarian cancer cells, the IGROV-1 and SKOV3 cells were incubated with NT1014 in concentrations up to 1000 μM for 24 h. The cellular ATP level, as well as glucose uptake and lactate level, was assayed. NT1014 increased glucose uptake and lactate production in both ovarian cancer cell lines (Fig. 6a, b). Compared to control cells, treatment with NT1014 caused a time-dependent increase in Glut1 expression in both cell lines, as well as a concentration-dependent increase in IGROV-1 cells, suggesting that NT1014 stimulates glycolytic activity (Fig. 6d). Interestingly, NT1014 treatment resulted in a decrease in ATP production in the SKOV3 cells and an increase in ATP in the IGROV-1 cells (Fig. 6c).
Fig. 3 NT1014 induced cell cycle G1 arrest and apoptosis in ovarian cancer cells. NT1014 caused G0/G1 arrest and reduced S phase in a dose-dependent manner in both cell lines (a, b). The effect of NT1014 on cell cycle-related proteins (p21, p27, cyclin D1, CDK4, and CDK6) was assessed by Western blotting (c). Both cells were treated with varying doses of NT1014 for 24 h, and cell apoptosis was examined by an annexin V FITC assay via Cellometer. NT1014 significantly increased annexin V expression in a dose-dependent manner (d). The activity of cleaved caspase 3 was detected by ELISA assay after treatment of NT1014 for 4 h (e). Western blotting showed that NT1014 decreased the expression of BCL-xL and Mcl-1 in the IGROV-1 and SKOV3 cell lines after exposure to NT1014 for 24 h (f). α-tubulin used as a loading control. *p < 0.05 and **p < 0.01
To validate the causal relationship between ATP levels and glycolytic activity, we next examined the effect of NT1014 on glycolytic pathway. The expression of pyruvate dehydrogenase (PDH), a critical regulator of transforming pyruvate into acetyl-CoA, and lactate dehydrogenase (LDHA), a key enzyme of converting pyruvate into lactate, were analyzed after incubation with NT1014 for 24 h. We observed increased LDHA protein expression in both cell lines after 24 h of treatment (Fig. 6e), suggesting a direct effect of NT1014 on glucose metabolism and enhanced activity of the glycolytic pathway in the ovarian cancer cells. In addition, we also found that PDH expression was elevated in the IGROV-1 cells and was decreased in the SKOV3 cells after 24 h of treatment, suggesting that the function of complex I in SKOV3 cells compared to IGROV-1 cells was more profoundly influenced by NT1014. Given that biguanides target complex I and subsequently increase glycolytic activity in cancer cells, the differential metabolic reactions of NT1014 on the glycolytic pathway and complex I suggest that ovarian cancer cells have a different metabolic state in response to NT1014 compared to other biguanides such as metformin.
NT1014 decreased tumor growth in the KpB serous ovarian cancer mouse model
The in vivo anti-tumor efficacy of NT1014 was evaluated in the KpB serous ovarian cancer mouse model. The KpB mice were divided into three groups (n = 15/group) and were treated with NT1014, metformin (75 mg/kg/day, 6 times/week), or placebo for 4 weeks. Tumor growth during the treatment period was monitored by twice-weekly palpation. NT1014 and metformin were well tolerated; the mice showed no overt signs of toxicity and maintained normal activities throughout treatment. Twice-weekly measurements showed no changes in blood glucose or mouse weight during NT1014 and metformin treatment (data not shown). After 4 weeks of treatment, the mice were euthanized, and the ovarian tumors were removed, photographed, and weighed. Both NT1014 and metformin resulted in significant suppression of tumor growth relative to the control. NT1014 inhibited tumor growth more strongly than metformin at the same dose, as evidenced by a decrease in tumor weight of approximately 70 % in the NT1014 group versus 46 % in the metformin group (p < 0.05) (Fig. 7a, b).

Fig. 4 NT1014 induced cellular stress in ovarian cancer cells. The IGROV-1 and SKOV3 cells were treated with NT1014 and metformin at the indicated doses for 4 h, and ROS production was determined using the DCFH-DA assay. NT1014 increased the ROS level in a dose-dependent manner (a, b). The expression of cellular stress proteins (PERK, Bip, and calnexin) was detected by Western blotting after treatment with NT1014 for 24 h (c). *p < 0.05 and **p < 0.01
To further investigate the anti-tumor activity and mechanism of NT1014 in vivo, the expression of Ki-67, phosphorylated (phos)-AKT, phos-AMPK, phos-S6, and MMP9 in the ovarian tumor tissues was evaluated by IHC. Consistent with our in vitro results, the expression of phos-AMPK and phos-AKT was induced in the mice treated with NT1014, whereas the levels of phos-S6 were reduced in the NT1014-treated mice relative to the untreated mice (Fig. 7d). These findings suggest that NT1014, like other biguanides, inhibits tumor growth of ovarian cancer in vivo via AMPK activation and inhibition of the mTOR pathway. Additionally, Ki-67 and MMP9 expression were significantly reduced following NT1014 treatment compared to the untreated controls (Fig. 7d). Serum VEGF levels were measured by ELISA at the end of the treatment; the mean VEGF level in the treated group was significantly lower than that in the control group (Fig. 7c). These results further support the role of NT1014 in the inhibition of tumor adhesion and invasion in ovarian cancer in vivo.
Discussion
NT1014 was designed as a novel AMPK activator with high affinity for OCT1 and OCT3. We find that NT1014 is a potent anti-tumorigenic agent that suppresses ovarian cancer cell proliferation and in vivo ovarian tumor growth through activation of AMPK. In addition, our results demonstrate that NT1014 inhibits cell proliferation and tumor growth with higher potency than metformin at similar doses in both ovarian cancer cell lines and in the KpB mouse model. Along with AMPKinduced inhibition of mTOR signaling which has been shown to trigger the anti-tumorigenic activities of metformin, NT1014 interferes with multiple AMPKdependent downstream signaling pathways that regulate survival, energy metabolism, oxidative stress, and cell migration in ovarian cancer cells.
Cell proliferation assays revealed a dose-dependent inhibition of cell growth by NT1014 in both human ovarian cancer cell lines tested. The drug concentrations required to induce growth inhibition were much lower than those of metformin in the treatment of ovarian cancer cells in vitro. In addition, NT1014 treatment (75 mg/kg) led to profound inhibition of ovarian tumor growth in vivo and had increased efficacy compared to the same dose of metformin (58 versus 33 %). Moreover, the data support that NT1014 binds to OCT1 with higher affinity than metformin, giving it the potential to more effectively enter OCT1-expressing ovarian cancer cells. Recent studies using metformin and AICAR (pharmacological AMPK activators) have confirmed their ability to induce apoptosis and cell cycle arrest in a variety of cancer cell types [31,32]. Our data showed that induction of apoptosis and cell cycle G1 arrest are key components of the anti-tumorigenic effects of NT1014 in ovarian cancer cells, as evidenced by induced expression of annexin V, p27, and p21 as well as reduction of BCL-xL, Mcl-1, cyclin D1, and CDK expression. These effects may be the result of activation of AMPK and direct inhibition of the mTOR signaling pathway, given that NT1014 treatment increased phosphorylation of AKT in the KpB mouse model.

Fig. 5 The effect of NT1014 on adhesion and invasion in ovarian cancer cells. The IGROV-1 and SKOV3 cells were cultured for 24 h and then treated with NT1014 (1-1000 μM) in a laminin-coated 96-well plate or a BME-coated 96-well transwell plate for 2 or 24 h to assess adhesion and invasion in a plate reader. The data represent relative inhibition in each cell line (a, b). The expression of E-cadherin, β-catenin, Slug, and vimentin was analyzed by Western blotting (c). *p < 0.05 and **p < 0.01
The underlying mechanism responsible for NT1014- as well as metformin-induced growth inhibition has not been entirely defined. Numerous studies have demonstrated that metformin significantly inhibits cell proliferation through activation of the AMPK pathway in ovarian cancer cells; furthermore, long-term use of metformin has been associated with decreased risk of ovarian cancer and improved outcomes in patients with or without diabetes [12,14,33,34]. Together, these findings suggest that AMPK is an ideal target for the prevention and treatment of ovarian cancer. Recent reports have shown that AMPK activation by metformin is associated with increased oxidative stress leading to upregulated cell cycle arrest and induction of apoptosis in breast cancer and leukemia [35,36]. The level of oxidative stress correlated with metformin-dependent apoptosis induction in breast cancer [37]. A dose-dependent increase in reactive oxygen species formation with NT1014 treatment was found in this study. Similarly, NT1014 treatment at different concentrations resulted in an increase in expression of PERK, Bip, and calnexin, which are markers of oxidative stress associated with apoptosis [36]. Our data not only confirm that NT1014 induces cell cycle arrest and apoptosis but also demonstrate that NT1014 induces oxidative stress in the endoplasmic reticulum of treated ovarian cancer cells. Apoptotic death may be triggered by the inability of ovarian cancer cells to adequately respond to oxidative stress in the endoplasmic reticulum.

Fig. 6 The effect of NT1014 on glycolytic metabolism in ovarian cancer cells. The IGROV-1 and SKOV3 cells were incubated with NT1014 in concentrations of up to 1000 μM for 24 h. The glucose uptake (a), lactate level (b), and cellular ATP level (c) were assayed. NT1014 increased glucose uptake and lactate production. NT1014 treatment resulted in a decrease in ATP production in the SKOV3 cells and an increase in ATP in the IGROV-1 cells (c). The expression of Glut1, LDHA, and PDH was measured by Western blotting after treatment with NT1014 for 24 h (d, e). *p < 0.05 and **p < 0.01
Cell migration is a highly complex process, which involves orchestrated dynamic remodeling of the actin cytoskeleton and microtubule network [38]. AMPK has recently been documented to be involved in this process in cancer cells [38]. Pharmacological activation of AMPK by metformin and other biguanides disturbs cancer cell migration and invasion. Several studies have shown that AMPK inhibits cell migration, which could occur through different mechanisms including disruption of the mTOR, TGF-b, Pdlim5, CXCL12, NF-kB, and Akt-MDM2-Foxo3a pathways [38][39][40][41][42][43]. Thus, it is reasonable to believe that AMPK activity affects directional cell migration by regulating cell epithelial-mesenchymal migration. In this study, our results demonstrated that treatment with NT1014 modifies the phenotype of ovarian cancer cells from a mesenchymal to an epithelial phenotype, as evidenced by increased expression of the epithelial marker E-cadherin and decreased expression of the mesenchymal marker Slug. Thus, NT1014 may have an improved ability to inhibit ovarian cancer metastasis and progression through regulation of the epithelial-mesenchymal transition.

Fig. 7 The effect of NT1014 on ovarian tumor growth in the KpB serous ovarian cancer mouse model. Thirty KpB mice were divided into three groups and treated with NT1014 and metformin (oral gavage, 75 mg/kg/day, 6 times/week) or placebo for 4 weeks. The graph shows weekly tumor volumes for each group (a) and tumor weight after treatment (b). The level of VEGF in the NT1014 group was significantly lower than that in the control group (c). The changes in Ki-67, phos-AKT, phos-AMPK, phos-S6, and MMP9 were assessed by immunohistochemistry in the ovarian cancer tissues. The expression of Ki-67, phos-S6, and MMP9 was significantly reduced and phos-AKT and phos-AMPK were increased in the NT1014 treatment group compared with the control group (d). *p < 0.05 and **p < 0.01
One of the principal metabolic alternations related to cell proliferation in tumors is the upregulation of aerobic glycolytic metabolism [44]. Activating AMPK by different agonists results in differential effects on glycolytic metabolism in cancer cells [45]. Metformin is believed to have an inhibitory effect on mitochondrial oxidative phosphorylation via inhibiting respiratory complex I, thus boosting glycolysis as a compensation mechanism. The effects of NT1014 on glycolysis in ovarian cancer cells parallel those reported for metformin in other types of cancer [45][46][47][48][49]. NT1014 exposure led to an increase in phosphorylation of AMPK and glucose uptake consistent with an increase in glycolysis, characterized by increased lactate production and increased levels of LDHA. Emerging evidence has confirmed that metformin effectively reduces mitochondrial ATP production [49,50]. In contrast, we found an increase in ATP levels with increased PDH expression in IGROV-1 cells, but a decrease in ATP levels and PDH expression in SKOV3 cells, after treatment of NT1014 for 24 h, despite phosphorylation of AMPK in both cell lines after 12 h of treatment. These results suggest that increased ATP production in IGROV-1 cells via increased PDH expression and acetyl-CoA represents a mechanism that partially compensates for the NT1014-associated decrease in ATP production by oxidative phosphorylation. Therefore, NT1014 may possess an alternative mechanism to regulate glucose metabolism and inhibit cell proliferation compared to metformin in ovarian cancer cells.
Conclusions
In conclusion, the results from this study show that NT1014 is a novel, orally bioavailable, and well-tolerated AMPK activator with a high affinity for OCT1 and OCT3. NT1014 causes significant inhibition of ovarian cancer cell proliferation in vitro and has anti-tumorigenic activity in vivo against ovarian cancer through modulation of multiple signaling pathways associated with cancer cell survival, metabolism, and progression. These results show promise for the use of NT1014 as a potential anti-cancer agent for the treatment of ovarian cancer. The efficacy of NT1014 in other ovarian cancer cell lines and mouse models, given alone or in combined therapy, will be explored in future studies.
A Graph Theoretic Analysis of Leverage Centrality
In 2010, Joyce et al. defined the leverage centrality of vertices in a graph as a means to analyze functional connections within the human brain. In this metric the degree of a vertex is compared to the degrees of all its neighbors. We investigate this property from a mathematical perspective. We first outline some of the basic properties and then compute leverage centralities of vertices in different families of graphs. In particular, we show there is a surprising connection between the number of distinct leverage centralities in the Cartesian product of paths and the triangle numbers.
Introduction
In a social network people influence each other, and those with many friends often have more leverage (or influence) than those with fewer friends. However, the true influence of a person depends not only on the number of friends that they have, but also on the number of friends that their friends have. A person that is well connected can pass information to many friends, but if their friends are also receiving information from others, their influence on others is lessened. The extreme case of influence occurs with a person who has a large number of friends, each of whom has the original person as their only source of information. In this situation, the original person has the highest possible influence and all of the others have the lowest possible influence.
The level of influence can be quantified by a property defined by Joyce et al. [6] known as leverage centrality. We recall that the degree of a vertex v is the number of edges incident to v and is denoted deg(v). We next give a formal definition of leverage centrality [6].
Definition 1 (leverage centrality) Leverage centrality is a measure of the relationship between the degree of a given node v and the degree of each of its neighbors v_i, averaged over all neighbors N_v, and is defined as

l(v) = (1/|N_v|) Σ_{v_i ∈ N_v} (deg(v) − deg(v_i)) / (deg(v) + deg(v_i)).

This property was used by Joyce et al. [6] in the analysis of functional magnetic resonance imaging (fMRI) data and has also been applied to real-world networks including airline connections, electrical power grids, and coauthorship collaborations [8]. However, despite these studies, leverage centrality has yet to be explored from a mathematical standpoint. The formula gives a measure of the relationship between a vertex and its neighbors. A positive leverage centrality means that the vertex has influence over its neighbors, whereas a negative leverage centrality indicates that a vertex is being influenced by its neighbors. We begin with an elementary result involving the bounds of leverage centrality (Li et al. [8]).
Lemma 2 Let G be a graph with n vertices. For any vertex v, |l(v)| ≤ 1 − 2/n. Furthermore, these bounds are tight in the cases of stars and complete graphs.
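As a concrete illustration of Definition 1 and of the tightness of the bound in Lemma 2 for the star, here is a small Python sketch; the adjacency-dictionary representation and the vertex labels are our own choices, not part of the paper.

```python
from fractions import Fraction

def leverage_centrality(adj, v):
    """l(v) = (1/|N_v|) * sum over neighbours u of (deg(v) - deg(u)) / (deg(v) + deg(u))."""
    deg = {w: len(nbrs) for w, nbrs in adj.items()}
    return sum(Fraction(deg[v] - deg[u], deg[v] + deg[u]) for u in adj[v]) / len(adj[v])

# Star K_{1,4} (n = 5): the centre attains 1 - 2/n = 3/5 and each leaf -(1 - 2/n) = -3/5.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(leverage_centrality(star, 0), leverage_centrality(star, 1))

# Complete graph K_5 (a regular graph): every vertex has leverage centrality 0.
k5 = {i: {j for j in range(5) if j != i} for i in range(5)}
print({i: leverage_centrality(k5, i) for i in range(5)})
```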
We note that the bounds are also tight for regular graphs. There exist graphs G where the leverage centrality of all vertices is equal and graphs where the leverage centralities of the vertices are distinct. It is clear that if G is a regular graph then l(v) = 0 for every v ∈ G. We give an example below of a graph that has distinct leverage centralities. Intuitively one would think that the sum of the leverage centralities over a graph would be zero. This is in fact the case when a graph is regular. However, for non-regular graphs the sum of leverage centralities is negative. This arises since each edge between two vertices of different degrees contributes a negative amount to the sum of the leverage centralities. Let G be the graph K_3 with a pendant edge (see Figure 2). Writing a for the vertex of degree 3, b and c for the vertices of degree 2, and d for the pendant vertex of degree 1, a direct computation gives l(a) = 3/10, l(b) = l(c) = −1/10 and l(d) = −1/2, so the sum of the leverage centralities is −2/5. We can regroup the sum edge by edge: the edges ab, ac and ad join vertices of different degrees, while the edge bc joins two vertices of equal degree. Since the first three parts are negative and the last part is zero, the sum must be negative.
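The example, and the edge-by-edge regrouping that the proof below makes precise, can be checked numerically with a few lines of Python (the vertex labels are ours):

```python
from fractions import Fraction

# K3 on {a, b, c} with a pendant edge ad.
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("a", "d")]
deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1

def edge_contribution(u, v):
    """Contribution of edge uv to the sum of leverage centralities:
    -(d(u) - d(v))**2 / (d(u) * d(v) * (d(u) + d(v)))."""
    du, dv = deg[u], deg[v]
    return Fraction(-(du - dv) ** 2, du * dv * (du + dv))

print(sum(edge_contribution(u, v) for u, v in edges))   # -2/5; only edge bc contributes zero
```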
Theorem 3 For any graph G, the sum of the leverage centralities over all vertices satisfies Σ_{v ∈ V(G)} l(v) ≤ 0, with equality if and only if G is regular.

Proof. If G is a regular graph, then l(v) = 0 for all v, and hence Σ_{v ∈ G} l(v) = 0. If G is not regular, there must exist an edge e with end vertices u and v where d(u) > d(v). We note that the contribution of each edge uv to the sum of the leverage centralities is

(1/d(u)) · (d(u) − d(v))/(d(u) + d(v)) + (1/d(v)) · (d(v) − d(u))/(d(u) + d(v)) = −(d(u) − d(v))² / (d(u) d(v) (d(u) + d(v))),

which is nonpositive and is strictly negative exactly when d(u) ≠ d(v). Hence for a non-regular graph, the sum of the leverage centralities is negative.

Vertices with Positive / Negative Leverage Centrality

A vertex of lowest degree cannot have a positive leverage centrality and a vertex of highest degree cannot have a negative leverage centrality. However, it is possible for all the vertices in a graph except one to have negative leverage centrality, or for all but one to have positive leverage centrality. The star graph K_{1,n−1} has n − 1 vertices with negative leverage centrality. We show in the next theorem that there exist graphs where n − 1 vertices have positive leverage centrality.
Theorem 4
The maximum number of vertices with positive leverage centrality is n − 1.
Proof. Since the sum of the leverage centralities over all vertices in a graph is less than or equal to zero, it is impossible for a graph to have n vertices with positive leverage centrality. Let G be a graph with vertices v_1, ..., v_n, where n ≥ 11, and edges chosen so that all vertices other than one have positive leverage centrality. We present a second example. Let G be a graph with n ≥ 12 vertices v_1, v_2, ..., v_n and edges chosen analogously; the leverage centrality of the relevant vertices is positive when n > 11.531.
Leverage Centrality vs. Degree Centrality
Degree centrality weights a vertex based on its degree. A vertex with higher (lower) degree is deemed more (less) central. This property has been well-studied (for early works see Czepiel [1], Faucheaux and Moscovici [2], Freeman [3], Garrison,[4], Hanneman and Newman [5], Kajitani and Maruyama [7], Mackenzie [9], Nieminen [10], [11], Pitts [12], Rogers [13], and Shaw [14]). For some families of graphs the leverage centrality and degree centralities of vertices are closely related. For example, in scale-free networks where the distribution of degrees follows the power law, vertices with large degree will be adjacent to many vertices with much lower degrees. Hence the leverage centrality of these vertices will also be high.
However, for other families of graphs leverage centrality and degree centrality are not closely related. We show in the following example it is possible to construct infinite families of graphs where the vertex of largest degree does not have the highest leverage centrality. We do this by connecting nearly complete graphs as shown in Figure 3.
Let u be a vertex in K_{n+1} that has a neighbor vertex on the K_n graph. Then deg(u) = n and, as n → ∞, it follows that deg(u) → ∞. Let v be the vertex that is the base of the claw graph found on the right side of the graph shown in Figure 3. The degree of v always equals 4 and therefore, for all n ≥ 5, deg(u) > deg(v). Since we know the degrees of the neighbors of u, we can calculate the leverage centrality of u; taking the limit as n → ∞ gives lim_{n→∞} l(u) = 0. We can also calculate the leverage centrality of v directly and obtain l(v) = 8/15. Since the leverage centrality of u converges to 0 as n → ∞ and the leverage centrality of v is equal to 8/15, we have l(v) > l(u) for all n ≥ 5.
Leverage Centrality Zero
We note that bounds given in Lemma 2 are tight for regular graphs, where the leverage centrality of all vertices is zero. In fact, it is straightforward to show that l(v) = 0 for every vertex v if and only if G is a regular graph. It is also clear that for a vertex v with degree k that if all of the neighbors of v have degree k, then l(v) = 0. However, it is possible for a vertex to have a leverage centrality of zero without all of its neighbors having the same degree as the original vertex. We investigate this property below.
Example 5 Let G be a graph containing a vertex v of degree k where k − 1 of v's neighbors have degree k + 2 and the remaining neighbor has degree 1. Then l(v) = 0.
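A quick check of this example, under the reading that the k − 1 heavier neighbors have degree k + 2:

```python
from fractions import Fraction

def example5_leverage(k):
    """Leverage centrality of a degree-k vertex with k-1 neighbours of degree k+2
    and one neighbour of degree 1."""
    heavy = (k - 1) * Fraction(k - (k + 2), k + (k + 2))
    light = Fraction(k - 1, k + 1)
    return (heavy + light) / k

print([example5_leverage(k) for k in range(2, 9)])   # every entry is 0
```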
We also give an example of a graph with a vertex v whose neighbors all have distinct degrees and l(v) = 0.
Complete Multipartite Graphs
We use K_{t_1,t_2,...,t_r} to denote the complete multipartite graph with parts of sizes t_1, t_2, ..., t_r, in which each vertex in a part is adjacent to every vertex in each of the other parts. As noted in [8], for vertices in the star graph K_{1,n−1} the leverage centrality meets the two extremes: the vertex in a part by itself has leverage centrality 1 − 2/n, while every other vertex has leverage centrality (1 − (n − 1))/(1 + (n − 1)) = −1 + 2/n. We can extend the same idea to the general case of complete multipartite graphs. We will use G = K_{t_1,t_2,...,t_r} to denote a complete multipartite graph with r parts n_1, n_2, ..., n_r where each part n_i has order t_i for all 1 ≤ i ≤ r.
Theorem 7 Let G = K_{t_1,t_2,...,t_r} where t_i is the order of part n_i, and let T = t_1 + t_2 + ... + t_r. Then, for any vertex v_i in part n_i,

l(v_i) = (1/(T − t_i)) Σ_{j ≠ i} t_j (t_j − t_i) / (2T − t_i − t_j).

Proof. Due to the nature of a complete multipartite graph, it follows that v_i has t_j neighbors in part n_j for every j ≠ i and no neighbors inside its own part n_i. Note that every vertex v_k ∈ n_k has degree Σ_{j ≠ k} t_j = T − t_k; in particular deg(v_i) = T − t_i. Thus the leverage centrality of v_i can be calculated as follows:

l(v_i) = (1/(T − t_i)) Σ_{j ≠ i} t_j · ((T − t_i) − (T − t_j)) / ((T − t_i) + (T − t_j)) = (1/(T − t_i)) Σ_{j ≠ i} t_j (t_j − t_i) / (2T − t_i − t_j).

This completes the proof.
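The closed form reconstructed above can be checked against a brute-force evaluation of Definition 1 on a small example; the part sizes (1, 2, 3) below are an arbitrary choice for illustration.

```python
from fractions import Fraction

sizes = [1, 2, 3]           # t_1, t_2, t_3
T = sum(sizes)

def formula(i):
    """Closed form from Theorem 7 for a vertex in part i."""
    ti = sizes[i]
    return Fraction(1, T - ti) * sum(
        Fraction(tj * (tj - ti), 2 * T - ti - tj) for j, tj in enumerate(sizes) if j != i
    )

def brute_force(i):
    """Direct evaluation of Definition 1: every vertex in part j has degree T - t_j."""
    terms = [
        Fraction((T - sizes[i]) - (T - sizes[j]), (T - sizes[i]) + (T - sizes[j]))
        for j in range(len(sizes)) if j != i
        for _ in range(sizes[j])
    ]
    return sum(terms) / len(terms)

for i in range(len(sizes)):
    assert formula(i) == brute_force(i)
    print(i, formula(i))
```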
Cartesian Product of Graphs
We next present an elementary result from graph theory (Lemma 9): in the Cartesian product of two graphs, the degree of a vertex (u, v) is the sum of the degrees of u and v in the respective factors.

Theorem 10 Let G be a graph and let G_r be a regular graph where each vertex has degree r. Let u ∈ V(G_r) and let v_i and v_j be vertices in G with degrees k_i and k_j respectively.

Proof. By Lemma 9 we have that deg((u, v_i)) = m − 1 + deg(v_i) and, for all neighbors v_j of vertex v_i, that deg((u, v_j)) = m − 1 + deg(v_j). The result then follows.
Cartesian Products of P n
In this section we will consider the lattice ×^m P_n, the Cartesian product of m copies of the path P_n. As the calculation of the degrees of vertices in a lattice is straightforward, we will present results involving only the degrees without proof. We continue with some definitions. An inner corner vertex of ×^m P_n is a vertex v = (v_1, v_2, ..., v_m) such that v_i ∈ {2, n − 1} for every i ∈ {1, ..., m}.
It follows by definition that all vertices that are inner corner vertices are also non-corner vertices.
We note that for the remaining element of the m-tuple (v
General Lemmas
We begin with a basic result involving the degrees of vertices and its neighbors in a lattice.
Lemma 14
Let G be a lattice × m P n . Any vertex adjacent to a vertex with degree k must have degree k − 1, k, or k + 1.
Extreme Leverage Centralities
We next identify vertices with the minimum and maximum leverage centralities. We will show that the vertices with the minimum leverage centrality are the corners and the vertices with the maximum leverage centrality are the inner corners. Furthermore, we will bound the leverage centrality of any vertex v in the lattice G = ×^m P_n between these two extreme values.

Minimum Leverage Centrality

We first characterize the vertices with the minimum leverage centrality. We begin by stating two elementary lemmas involving degrees of vertices in a lattice.

Proof. Let v be a non-corner vertex in G with degree k. We know from Lemma 16 that at least one adjacent node has degree at most k. We know from Lemma 14 that the remaining adjacent nodes can have degree at most k + 1.
Let v have one adjacent node with degree k and k − 1 adjacent nodes with degree k + 1. We now calculate the leverage centrality of v:

l(v) = (1/k) [ (k − k)/(k + k) + (k − 1) · (k − (k + 1))/(k + (k + 1)) ] = (1 − k)/(k(2k + 1)).
From Theorem 17, we have that for a corner vertex v_c of degree k, the leverage centrality is l(v_c) = −1/(2k + 1). Given that the degree of any adjacent node must be greater than 0, we know that 0 ≤ (k − 1)/k < 1. It follows that −1/(2k + 1) < (1 − k)/(k(2k + 1)) and hence l(v_c) < l(v). If the neighbors of any non-corner vertex u differ from those of v, then it follows from our construction of v and Lemma 14 that for any corresponding neighbors u_i of u and v_i of v, deg(u_i) ≤ deg(v_i). This implies that l(v_c) < l(u), which completes the proof.

Maximum Leverage Centrality

We next show that the inner corner vertices attain the maximum leverage centrality in G = ×^m P_n, and that this maximum equals 1/(2(4m − 1)).

Proof. Let v_ic be an inner corner vertex of G. By Lemma 19, we know that deg(v_ic) = 2m. We are also given that m neighbors of v_ic have degree 2m and that m neighbors of v_ic have degree 2m − 1. The leverage centrality of v_ic is

l(v_ic) = (1/2m) [ m · (2m − (2m − 1))/(2m + (2m − 1)) + m · (2m − 2m)/(2m + 2m) ] = (1/2m) · m/(4m − 1) = 1/(2(4m − 1)),

which proves the second part of the theorem. Let u be a vertex in G that is not an inner corner vertex of G. We have that there exists u*_i ∈ u = (u_1, u_2, . . . , u_m) such that u*_i ∈ {1, 3, . . . , n − 2, n}. Without loss of generality, we can assume that u_i = v_i when u_i ≠ u*_i, and thus u and v_ic differ only in one element, u*_i ∈ u and v*_i ∈ v_ic where u*_i ≠ v*_i. We see that two cases arise in calculating the leverage centrality of u.
(i) Let u * i ∈ {1, n} and v * i ∈ {2, n − 1} By Lemma 19, we have that deg(u) = 2m − 1. In calculating the leverage centrality of u, we see that l(u) and l(v ic ) can differ only in one term of Equation 1 such that: For the differing terms for the expressions for leverage centrality of u and v ic we see that and it follows that l(u) < l(v ic ).
(ii) If u * i ∈ {3, n − 2} and v * i ∈ {2, n − 1} By Lemma 19, we know that deg(u) = 2m. In calculating the leverage centrality of u, we see that l(u) and l(v ic ) can differ only in one term of Equation 1 such that: From the proof of Case (i), we already have l(v ic ) l(u) = q + 1 2m 2m − 2m 2m + 2m = q, and For the differing terms for the expressions for leverage centrality of u and v ic we see that 0 < 1 2m 1 4m − 1 and it follows that l(u) < l(v ic ).
In both Cases (i) and (ii), we find that l(u) < l(v ic ) which proves that first part of the theorem and completes the proof.
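For m = 2 these extreme values can be verified directly on the 5 × 5 lattice; the short script below (our own illustration) recomputes l for a corner and an inner corner from Definition 1 and recovers −1/(2m + 1) = −1/5 and 1/(2(4m − 1)) = 1/14.

```python
from fractions import Fraction
from itertools import product

n, m = 5, 2
verts = list(product(range(1, n + 1), repeat=m))

def nbrs(v):
    return [v[:i] + (y,) + v[i + 1:] for i, x in enumerate(v) for y in (x - 1, x + 1) if 1 <= y <= n]

deg = {v: len(nbrs(v)) for v in verts}

def lev(v):
    return sum(Fraction(deg[v] - deg[u], deg[v] + deg[u]) for u in nbrs(v)) / len(nbrs(v))

print(lev((1, 1)), lev((2, 2)))   # -1/5 and 1/14
```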
Convergence of Leverage Centrality as m → ∞
We next consider the leverage centrality of different vertices as the number of dimensions is increased. By the results of the previous section, for any vertex v in G the leverage centrality is bounded as follows: −1/(2m + 1) ≤ l(v) ≤ 1/(2(4m − 1)). We see that both bounds tend to 0 as m → ∞. It follows that lim_{m→∞} l(v) = 0, which completes the proof.
Leverage Centralities in Lattices and Triangle Numbers
In this section we investigate the number of distinct leverage centralities for lattices and show there is a surprising connection to the triangle numbers, the binomial coefficients C(m + 2, 2), where m ≥ 1. We can label the vertices of ×^m P_n using m-tuples v = (v_1, v_2, ..., v_m) such that v_i ∈ {1, . . . , n} for all i ∈ {1, . . . , m}. For simplicity we will denote v_{r,s,t} by (r, s, t).
• For P n × P n × P n where n ≥ 5, we have ten different leverage centralities: We next restate a well-known combinatorial formula.
Lemma 23 The number of solutions in nonnegative integers to the equation x_1 + x_2 + x_3 = m is C(m + 2, 2).

Using Lemma 23, the number of solutions to this equation is the (m + 1)-st triangle number, C(m + 2, 2). Hence we have the following upper bound.
Theorem 24 If n ≥ 5 the number of distinct leverage centralities in G = ×^m P_n is less than or equal to C(m + 2, 2).
For small cases of m this bound is in fact tight. The first three cases have been shown above. In the next theorem we show that this holds for m < 7.
1. If t_j is the jth triangular number for 0 ≤ j ≤ m and r = t_j + i where 0 ≤ i ≤ j, then the leverage centrality of v_r is given by

l(v_r) = (1/(m + j)) [ (j − i)/(2(m + j) − 1) − (m − j)/(2(m + j) + 1) ].

2. The number of distinct leverage centralities in G is less than or equal to C(m + 2, 2). Moreover, if m < 7 equality holds.

From this set of vertices V = {v_1, v_2, ..., v_k} we can see that the degree of each vertex v_r is m + j, where r = t_j + i and t_j is the jth triangular number. The degrees of the vertices adjacent to v_r are as follows: there are m − j vertices of degree m + j + 1, there are j − i vertices of degree m + j − 1, and there are j + i vertices of degree m + j. Therefore, for 0 ≤ j ≤ m and 0 ≤ i ≤ j the leverage centrality of each vertex v_r is

l(v_r) = (1/(m + j)) [ (m − j) · (−1)/(2(m + j) + 1) + (j − i) · 1/(2(m + j) − 1) + (j + i) · 0 ] = (1/(m + j)) [ (j − i)/(2(m + j) − 1) − (m − j)/(2(m + j) + 1) ].
In our proof of Property 2, we show that the leverage centralities of all vertices v r are distinct if m < 7. From a direct calculation on the formula found in above the leverage centrality satisfies the following orders. The first three cases were covered at the beginning of Section 5.
If m = 4, then
2. If m = 5, then 3. If m = 6, then This completes the proof. We have checked this computationally for all graphs × ⇒ k i = k j , x i = x j , and y i = y j . However this appears to be a complex problem.
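The computational check referred to above can be reproduced for small m with a short brute-force script; the implementation below is our own sketch, counting distinct leverage centralities in ×^m P_n and comparing the count to the bound C(m + 2, 2) of Theorem 24.

```python
from fractions import Fraction
from itertools import product
from math import comb

def distinct_leverage_count(m, n):
    """Number of distinct leverage centralities in the m-fold Cartesian product of P_n."""
    verts = list(product(range(1, n + 1), repeat=m))
    def nbrs(v):
        return [v[:i] + (y,) + v[i + 1:] for i, x in enumerate(v) for y in (x - 1, x + 1) if 1 <= y <= n]
    deg = {v: len(nbrs(v)) for v in verts}
    levs = {
        sum(Fraction(deg[v] - deg[u], deg[v] + deg[u]) for u in nbrs(v)) / len(nbrs(v))
        for v in verts
    }
    return len(levs)

for m in (1, 2, 3):
    print(m, distinct_leverage_count(m, 5), comb(m + 2, 2))   # counts meet the bound for these m
```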
We have also found that the number of distinct leverage centralities for graphs of the form ×^m P_n^k is linked to the polygonal numbers, which are numbers that can be represented by a regular geometrical arrangement of equally spaced points. For the first few cases, the triangle numbers are given by P_2(m) = C(m + 1, 2), the tetrahedral numbers are given by P_3(m) = C(m + 2, 3), and the pentatope numbers are given by P_4(m) = C(m + 3, 4). In general, P_{k+1}(m) = C(m + k, k + 1). Since we do not consider the case of a single vertex, we start all our leverage centrality calculations with the second polygonal numbers. Hence the general formula translates to C(m + k + 1, k + 1). Based on our findings for small values of k we pose the following conjecture.
THE EFFECTS OF DIFFERENT SOLVENTS ON BIOACTIVE METABOLITES AND “ IN VITRO ” ANTIOXIDANT AND ANTI-ACETYLCHOLINESTERASE ACTIVITY OF GANODERMA LUCIDUM FRUITING BODY AND PRIMORDIA EXTRACTS
1 University of Maribor, Laboratory for Separation Processes and Product Design, Faculty of Chemistry and Chemical Engineering, Smetanova 17, 2000 Maribor, Slovenia 2 Institute for Natural Sciences, Ulica bratov Učakar 108, 1000 Ljubljana, Slovenia, MycoMedica d.o.o. Podkoren 72, 4280 Kranjska Gora, Slovenia 3 University of Ljubljana, Biotechnical Faculty, Department of Wood Technology, Rožna dolina, Cesta VIII/34, 1000 Ljubljana, Slovenia
INTRODUCTION
Ganoderma lucidum (Fr.)Karst is a member of the mushroom family Polyporaceae and has been used in complementary medicine for over 2000 years [1,2].G. lucidum develops from a nodule, or pinhead, less than two millimeters in diameter, called a primordium, which is typically found on or near the surface of the substrate.It takes approximately 25 days from primordium formation to the development of a mature fruiting body that is ready for harvest.Spores are the mushroom's reproductive cells formed from hymenium of G. lucidum after the fruiting bodies become mature [1].
With aging, various pathological conditions and chronic diseases develop as a result of oxidative stress.The attack of free radicals on biomolecules, (lipids, proteins and DNA) eventually leads to many chronic diseases such as atherosclerosis, cancer, diabetes, rheumatoid arthritis, post-ischemic perfusion injury, myocardial infarction, cardiovascular diseases, chronic inflammation, stroke and septic shock, aging and other degenerative diseases in humans [14].The use of strong radical scavengers such as antioxidants may potentially delay neurodegeneration in diseases such as Parkinson's, Huntington's, amyotrophic lateral sclerosis and Alzheimer's [15,16].One of the most important strategies for the treatment of the neurodegenerative disorder Alzheimer's disease is to control the levels of acetylcholine in the brain through the inhibition of acetylcholinesterase (AChE) [17].Many synthetic chemicals such as butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT), tert-butylhydroquinone (TBHQ) and propyl gallate (PG) are being used as strong radical scavengers; there is, however, growing interest in dietary antioxidants contained in many foods because of their natural origin and relative safety [14].Among the sources of natural derived antioxidants, edible mushrooms such as G. lucidum are receiving attention as a potential commercial source of antioxidants at present [18,19].
The antioxidant activity of G. lucidum extracts has been found to be correlated to their polysaccharide content as well as to their total phenolic contents [20].In addition to phenols and polysaccharides, bioactive proteins from G. lucidum have been reported as antioxidants [21], [22].Hasnat demonstrated the potential of G. lucidum extract as a valuable source of antioxidants exhibiting antiacetylcholinesterase activity [23].
In past studies, we observed that extracts obtained from G. lucidum fruiting bodies using supercritical carbon dioxide or hot water under different extraction conditions greatly vary in their bioactive capacities [24,25].
Recently published genomes have revealed the full potential of G. lucidum as a source of biologically active compounds [26].The authors studied variations in gene expression and triterpenoid content across three developmental stages of G. lucidum: areal mycelia, primordium and fruiting body.The results of a later study showed that significant numbers of genes (4,668) are up-or downregulated during at least one of the developing stage transitions.Further, triterpenoid content markedly increased in the primordium; however, their biological activity was not evaluated.New findings demonstrate that the primordium of G. lucidum has been overlooked in terms of being a source of bioactive compounds.
Therefore, the aim of our investigation was to provide some insight into the chemical composition as well as the bioactivity (antioxidant and anti-AChE) of extracts obtained from G. lucidum primordia and fruiting bodies using various conventional solvents. This is one of the first reports describing G. lucidum primordium as a source of biologically active phenols, polysaccharides, fatty acids and proteins.

) was purchased from Fluka (Germany). Sulfuric acid (99.99%) and Folin-Ciocalteau phenol reagent were purchased from Merck (Germany). Hexane (⩾95%) (CAS Number: 110-54-3) and ethanol (⩾99.9%) (CAS Number: 64-17-5) were purchased from Carlo Erba (Italy), and methanol (⩾99.8%) (CAS Number: 67-56-1) was purchased from J. T. Baker Chemicals (Netherlands).
Extraction of phenolic compounds
G. lucidum fruiting bodies (GL) and its primordia (GL-P) were lyophilized, crushed using liquid nitrogen, and milled. Two types of extraction, hot (H) (at the boiling point of the solvent) and cold (C) (at 25 °C), were performed using different types of solvents (distilled water (H2O), ethanol (EtOH), acetone (AcOH), methanol (MeOH) and hexane (Hex)). 5 g of GL or GL-P material was introduced into a flask and 100 ml of solvent was added. Extraction at the boiling temperature of the solvent was performed in a flask with a reflux condenser, while extraction at 25 °C was carried out in a closed flask. Extraction time was 3 hours with constant stirring. After that, the extract solution was filtered and the filtrate was evaporated to remove the solvent. Extraction yield (%) was determined as:

yield (%) = [mass of dried extract (g) / mass of fruiting body (GL) or primordia (GL-P) raw material (g)] × 100.
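As a numerical illustration of the yield formula above (the extract mass used here is a hypothetical value chosen to reproduce the highest reported yield):

```python
def extraction_yield(extract_mass_g, raw_material_g=5.0):
    """Extraction yield (%) = mass of dried extract / mass of raw material * 100."""
    return extract_mass_g / raw_material_g * 100.0

# 1.165 g of dried extract from 5 g of lyophilized material gives 23.3 %,
# the order of the highest yield reported for hot-water extraction.
print(extraction_yield(1.165))
```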
Extraction of polysaccharides
The polysaccharidic extracts were obtained using three different procedures: (1) by conventional extraction and purification of polysaccharides described by Villares et al. [27], (2) by hot water extraction and precipitation with ethanol as described by Skalicka-Woźniak et al. [28], in both cases using 5 g of GL or GL-P, and (3) extraction with methanol (T = 67.7 °C and p = 1 bar), in order to remove phenolic compounds and other related molecules [27].Then, the methanolic extract was filtered and residue of GL or GL-P was extracted with hot water (T = 100 °C).The water extract was filtered (filtrate 1) and the remaining solid material was extracted with an aqueous basic solution of 2% w/v of NaOH at 100 °C.Again, the extract was filtered (filtrate 2) and GL or GL-P was discharged.Both filtrates were combined and proteins removed by precipitation with trifluoroacetic acid (TFA) (20% w/v).Then, proteins were separated by centrifugation.
Purified polysaccharidic extract of fruiting bodies in addition as -GL-PS ext.-1 and polysaccharidic extract of primordia in addition as GL-P-PS ext -1 were finally precipitated from the supernatant by the addition of EtOH in a 2:1 ratio (v/v).In the second procedure, GL or GL-P was extracted with 100 ml of distilled water at 85 °C for 6 hours with stirring.The crude hot water extracts were filtered and polysaccharidic extract of fruiting bodies (GL-PS ext.-2) and polysaccharidic extract of primordia (GL-P-PS ext.-2) were separated as described by Skalicka-Woźniak.Briefly, cold ethanol in a ratio of 3:1 v/v was added to concentrated hot water extracts and polysaccharides were precipitated overnight at +4 °C.The precipitated polysaccharides were collected after centrifugation (in an Eppendorf 5804 R refrigerated centrifuge) at 3100 rpm for 10 min.
Extraction of proteins
Five g of GL or GL-P was extracted with 100 ml of methanol for 3 hours under constant stirring to remove phenolic compounds.After the hot water extraction of residue of GL or GL-P was performed, the extract was filtered and proteins removed from the filtrate using trifluoroacetic acid (20% w/v).Then, protein extract of fruiting bodies, hereafter GL-P ext, and protein extract of primordial, hereafter GL-P-P ext, was separated by centrifugation [27].
Total phenol content
The concentration of total phenols in the extracts was measured by UV spectrophotometry (Varian, USA), based on a colorimetric oxidation/reduction reaction.The total phenols were determined according to the Folin-Ciocalteau method (1927) with some modifications [29].Briefly, all extracts were diluted in methanol at concentrations of 1 mg•ml -1 .2.5 ml of Folin-Ciocalteau reagent (diluted with water at a 1:10 ratio) was mixed with 0.5 ml of extract solution and 2.5 ml of Na 2 CO 3 (75 g•l -1 ).Prepared samples were then thermostated in a water bath at 50 °C for 5 min.After cooling, the absorbance was measured at 760 nm.As a control, 0.5 ml of methanol was used instead of the extract solution.
Quantification was determined based on the standard curve of gallic acid (GA).The amount (%) of phenol content (w GA ) was expressed as mg GA per gram (g) of extract.
Protein content
The total protein content was measured using the Bradford method.Briefly, 100 mg of Coomassie Brilliant Blue was mixed with 50 ml of 95% ethanol, and 100 ml of 85% (v/v) H 3 PO 4 solution was diluted with distilled water to 1 liter.The quantification was determined based on the standard curve of Bovine Serum Albumin (BSA).The calibration curve was in the range of 0.0 mg•ml -1 to 1 mg•ml -1 [30].
GL-P ext or GL-P-P ext was diluted with water to a concentration of 1 mg•ml -1 .An aliquot (20 μl) was mixed with 1 ml of Bradford reagent and the absorbance was measured at 595 nm.Total protein content was expressed in mg of BSA per gram (g) of extract (mg BSA•g -1 GL-P ext or GL-P-P ext ).
Polysaccharide content
Polysaccharide content was determined using the phenol-sulfuric acid method with D-glucose as the standard [31]. Polysaccharidic extract solutions (1 mg·ml-1) were prepared in distilled water. Then, 0.5 ml of solution was mixed with 0.5 ml of 5% aqueous phenol solution and 2.5 ml of concentrated sulfuric acid. The mixture was stirred for 30 min. The total sugar content was determined from the standard curve for glucose (0.0047 mg·ml-1 to 0.15 mg·ml-1) at a wavelength of 490 nm. The results were expressed as mg of glucose equivalents per gram of polysaccharidic extract dry weight.
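All of the colorimetric quantifications in this section (gallic acid for phenols, BSA for proteins, glucose for sugars) follow the same standard-curve pattern; a minimal sketch is given below. The absorbance readings are invented placeholder values, not measured data, and only the glucose standard range comes from the text.

```python
import numpy as np

# Hypothetical calibration points spanning the stated glucose range (mg/ml vs A490).
std_conc = np.array([0.0047, 0.025, 0.05, 0.10, 0.15])
std_abs  = np.array([0.03, 0.16, 0.33, 0.66, 0.98])

slope, intercept = np.polyfit(std_conc, std_abs, 1)   # linear fit: A = slope * c + intercept

def conc_from_absorbance(a490):
    """Convert a sample absorbance into mg glucose equivalents per ml."""
    return (a490 - intercept) / slope

# Extract solutions were prepared at 1 mg/ml, so c mg/ml of sugar corresponds to
# c * 1000 mg of glucose equivalents per gram of extract.
sample_a490 = 0.14                                     # invented example reading
print(conc_from_absorbance(sample_a490) * 1000, "mg GLC per g of extract")
```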
Scavenging effect on 1,1-diphenyl-2-picrylhydrazyl (DPPH) radicals
For the assay, 3.9 ml of 0.06 mM DPPH* radical (Sigma, CAS Number: 1898-66-4) was added to 0.1 ml of G. lucidum fruiting body or primordia extract. The reaction mixture was vortexed and the absorbance measured at 515 nm using a spectrophotometer, with methanol as a control. The decrease in absorbance was monitored until the reaction reached a plateau. The DPPH* free radical scavenging activity, expressed as a percentage of radical scavenging activity, was calculated as follows:

scavenging activity (%) = [(A_0 − A_s) / A_0] × 100,

where A_0 is the absorbance of 0.06 mM methanolic DPPH and A_s is the absorbance of the reaction mixture after 30 min [32].
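A one-line helper for this calculation, with invented absorbance values chosen only to land near the highest inhibition reported below for the hot acetone extract:

```python
def dpph_scavenging(a0, a_s):
    """Radical scavenging activity (%) = (A0 - As) / A0 * 100."""
    return (a0 - a_s) / a0 * 100.0

print(dpph_scavenging(0.93, 0.71))   # about 23.7 %
```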
Anti-acetylcholinesterase activity
The inhibition of acetylcholinesterase (AChE) was measured according to the Ellman method (1961), using acetylthiocholine iodide (1mM) as the substrate in 100 mM potassium phosphate buffer, pH 7.4, at 25 °C, and electric eel AChE as the source of enzyme (6.25 U•ml -1 , Sigma) (Ellman et al. 1961) [33].The hydrolysis of acetylthiocholine iodide was measured on a Kinetic Microplate Reader (Varian, USA) at 405 nm.The concentration of the extracts was 1 mg/ml and AChE inhibition was monitored for 5 min.All readings were corrected for their appropriate controls, and a run with only acetylthi-ocholine chloride served as a positive control assay.Galathamine, a common AChE inhibitor, was used as a control.Every measurement was repeated at least two times.Extraction yield (η) using different types of solvents (distilled water (H 2 O), methanol (MeOH), ethanol (EtOH), acetone (Ac) and hexane (Hex)) is presented in Table 1.
Table 1. Yield (%) after extraction of G. lucidum fruiting body (GL) and G. lucidum primordia (GL-P); columns: solvent, GL fruiting body Y (%), GL-P Y (%).

When water was used as a solvent, the highest yield (23.30%) was obtained. The yield decreased in the order H2O > MeOH > EtOH > Ac > Hex. Only slight changes in the total yield were observed when comparing the hot and cold extraction procedures. A higher extraction yield does not necessarily mean higher medicinal activity of the extracts.
The relative polarity of the solvents used in the present study increases in the order of hexane Hex < Ac < EtOH < MeOH < H 2 O. Extraction efficiency also increases in that order.The polarity of the solvent thus greatly influences the extraction yields.Higher temperatures usually lead to higher yields of extraction.However, in our study, the polarity of the solvent shows a greater influence on yield.Solvents such as methanol, ethanol and acetone are often used for the extraction of phenolic compounds, while hexane is more often used for the isolation of fatty acids.The yield in hexane solvent was the lowest.This corresponds to the fact that G. lucidum contains very low amounts of fatty acids [34].In general, the average yields for GL-P are slightly higher than those for GL.Mushrooms store components in the initial phase, which are needed for their further development, and we expected these to show slightly higher extraction efficiency values.We conclude that many of the components such as GL and GL-P are polar and that non-polar components are significantly less common.
Total phenols in extracts obtained with conventional solvents
The contents of total phenols in extracts obtained from G. lucidum fruiting bodies (GL ext ) and G. lucidum primordia (GL-P ext ) using various conventional solvents are presented in Figure 1 and Figure 2, respectively.
The concentration of total phenolic compounds in GL ext.ranged from 9.01 mg GA •g ext -1 to 74.36 mg GA •g ext -1 , depending on solvent selection and temperature during the extraction process.The highest concentration of phenolic compounds is 74.36 mg GA •g ext -1 EtOH GL ext , decreasing in the order EtOH > MeOH > H 2 O > AcOH > Hex.The influence of temperature during the extraction process on the total amount of phenols in GL ext was noticed.Extraction with EtOH at its boiling point resulted in 74.36 mg GA •g ext -1 GL ext while the amount was 65.72 mg GA •g ext -1 when EtOH at 25 °C was used.The effect of extraction temperature on total phenolic content was also noticed in the case of AcOH (Fig. 1).However, when MeOH was used, a higher content of total phenols, 52.11 mg GA /g ext , was obtained when extraction was performed at 25 °C, while performing extraction using MeOH at boiling point resulted in the production of 32.86 mg GA •g ext -1 in GL ext .The effect of solvent type on total phenols in different mushrooms was previously studied by Tsai et al. [35], who noticed that the highest yield of phenolic compounds was in ethanol, which ranked second after water.Orhan et al. (2011) determined the total phenol content of ethanolic extracts in a number of mushroom species growing in Turkey [36].The phenol content was in the range of 2.5 mg GA •g ext -1 to 51.7 mg GA •g ext -1 .Heleno et al. extracted G. lucidum from fruiting bodies with methanol : water (80:20) at 20 °C for 2 hours [37].The total phenolic content was 28.64 mg GA •g ext -1 in methanol/water extracts of the G. lucidum fruiting bodies.In our study, a two-fold higher concentration of total phenolic content at 25 °C was observed in GL ext produced with methanol extraction.
Celik et al. determined a higher total phenol content (49.52 mg GA •g ext -1 ) in G. lucidum ethanol extracts in comparison with methanol extracts obtained by Soxhlet extraction [38].
The influence of solvent type as well as temperature during the extraction process has an effect on the total phenol content in the GL ext obtained.There is no report on the influence of different types of solvents at different temperatures on G. lucidum phenol content., while 64.54 mg GA •g ext -1 of phenols were present in extract obtained with cold (C) Ac. (C) extraction generally resulted in a lower amount of phenolic compounds, except for the case of MeOH (Fig. 2).The expected phenolic compounds were not present in the extract obtained using hexane as a solvent.
From Figures 1 and 2, it can be seen that there is similar trend if we compare only temperatures using the same solvent type.In general, the amounts of total phenols in GL-P ext are higher than in GL ext .
DPPH free radical scavenging activity of extracts obtained with conventional solvents
The results of DPPH* radical scavenging activities of GL ext and GL-P ext obtained using hot or cold extraction are presented in Figure 3 and Figure 4, respectively.The highest DPPH* radical scavenging inhibitory activity was observed for hot AcOH (GL ext -H-Ac) at 23.66%.DPPH* radical scavenging inhibitory activity for cold and hot MeOH and EtOH vary between 12.90% and 16.84% (Fig. 3).
In the present study, the highest total phenol contents did not result in the highest DPPH* radical scavenging activity of the GL ext obtained (Figs. 1 and 2).These results correspond to the observations of Orhan et al. [36].
In general, positive correlations were found between the total phenolic content and the DPPH free radical scavenging activities elicited (Fig. 2 and Fig. 4).
It can be seen from Figure 4 that the maximum inhibition of DPPH radicals occurs when hot acetone solvent is used, followed by hot ethanol, methanol, water and hexane.The values of inhibition of DPPH radicals by GL-P ext were significantly higher than GL ext .
The use of different solvents can result in the extraction of various types of metabolites from G. lucidum fruiting bodies and primordia, with varying radical scavenging activities.Furthermore, increased temperatures during the extraction process may result in denaturation and a reduction of the loss of ability to act as an antioxidant.
The lowest radical-scavenging activities were observed for the extracts obtained in Hex, which is mostly used for extraction of lipids; therefore, polar compounds with recognized high antioxidant capacity could not be obtained.
De Bruin et al. investigated the antioxidant properties of extracts obtained from Grifola gargal mushrooms, which are of the same order as our results [39].
136
If we compare the effect of temperature in the same solvent and different material, GL and GL-P can be seen to have similar trends.The reason for the higher values of inhibition of DPPH* radicals in GL-P ext is that in the early stage of fun-gus growth, it produces substances which are necessary for its growth.Thus, the contents of total phenols, as the values of the inhibition DPPH* radicals for GL-P ext , are higher than in GL ext .
. Inhibition of acetylcholinesterase (AChE) by extracts obtained with conventional solvents
Since antioxidants play an important role in the protection against aging processes and neurodegenerative diseases such as Alzheimer's disease (AD), the ability of GL ext and GL-P ext to inhibit the AChE enzyme was determined.Both GL ext.and GL-P ext.contain polar and non-polar components which have the capability to inhibit AChE.
The AChE inhibitory activities of GL ext and GL-P ext were quantified using Ellman's method and the results are summarized in Figure 5 and Figure 6.The AChE inhibition of GL ext. was between 18.1% for (H) AcOH and 32.5% for (H) EtOH (Fig. 5).A similar result was observed in the case of total phenol content (Fig. 1).Generally, the use of different solvents did not significantly affect AChE inhibitory activity.Furthermore, extraction temperatures did not greatly influence the AChE inhibitory activity of GL ext (Fig. 5).
For primordia, the highest AChE inhibitory activity of 29.48% was obtained when GL-P ext -H-Hex was applied.The effect of solvents and/or extraction temperature is rather small; moreover, there is a correlation between the content of total phenols and AChE inhibitory activities.
There are no reports comparing AChE inhibitory activity of GL ext or GL-P ext obtained with different solvents.
Bioactivity of polysaccharides
In both extraction procedures, GL and GL-P material was first pre-extracted with MeOH in order to remove phenolic compounds.Afterwards, the procedures described in section 2.2 were followed.
Total phenols, total polysaccharides, DPPH* free radical scavenging and AChE inhibitory activities were determined for all polysaccharidic extracts.As a control in AChE activity determination, the inhibitor galanthamine, with an inhibitory activity of 93%, was used.The results are summarized in Table 2 , as observed by other researchers [27].Semi-purification can result in a lower polyphenol content in the extracts obtained.An additional purification step involved removal of phenols using MeOH from G. lucidum; this resulted in the lowest content of total phenols (GL-PS ext.-1, Table 2).The polysaccharide content between GL-PS ext -1/-2 did not change drastically (Table 2), indicating that purification steps did not result in a loss of polysaccharides during the extraction process from the initial G. lucidum material.The results of the present study show that the highest DPPH* activity of 20.46% was obtained for GL-PS ext -1, with the lowest content of total phenols, indicating that purified G. lucidum extract has greater antioxidant activity than extract.
Both GL-P-PS ext -1/2 contained about 17 mg GLC per g of GL-PS-P ext .The results of chemical analysis showed the presence of phenolic compounds in both extracts.The purification step using MeOH resulted in the lowest content of total phenols (GL-P-PS ext -1, Table 2).
In general, the results in Table 2 show that GL-PS ext from G. lucidum fruiting bodies have higher DPPH* activity and higher anti-AChE activity compared with GL-P-PS ext from primordia.The amount of total phenols is higher in GL-P-PS ext obtained from primordia.
Bioactivity of proteins
Extraction of proteins from G. lucidum fruiting bodies and primordia was performed as described by Villares et al. [27] using trifluoroacetic acid (TFA) for precipitation from crude hot water extract, pre-extracted with MeOH.This procedure resulted in protein rich extract (GL-P ext , GL-P-P ext ).Total protein and total phenol content, as well as DPPH* and anti-acetylcholinesterase activities, were measured.The total protein content was measured using a method proposed by Bradford.The results of total phenols, total proteins, DPPH and anti-AChE activity for GL-P ext , GL-P-P ext are summarized in Table 3.
Chemical content and bioactivity of polysaccharides obtained from G. lucidum fruiting bodies (GL-P ext )
and Ganoderma lucidum primordia (GL-P-P ext ).Data are means ± SD from two replicates, where two independent experiments were performed.
Sample ID Total phenols (mg GA •g ext. -1 ) ±SD From the results obtained, it can be observed that GL-P ext contains 13.3% of proteins as well as a surprisingly high content of phenols of 60% (Table 3).Antioxidant and anti-AChE activities of GL-P ext , being 17.33% and 24.40%, respectively, are probably the result of high phenol content.
Chemical analysis showed that GL-P-P ext contains 16.3% proteins and 26.04% phenols (Table 3).GL-P-P ext shows small DPPH activity, while AChE inhibitory activity was noticed with 13.62% inhibition.As a control inhibitor, galanthamine, which has an inhibitory activity of 88%, was used.
The extraction procedure described resulted in the production of GL-P ext from G. lucidum fruiting bodies .It is well recognized that phenolic compounds present in mushrooms may be complexed to soluble β-D-glucans by weak chemical linkages; therefore, the precipitation of proteins with TFA could result in the breakage of those links and the resulting high content of phenols in GL-P ext , as observed in the present study.
Polysaccharide and protein content from G. lucidum fruiting body and primordium also elicited antioxidant and acetylcholinesterase inhibitory activities.
The results indicate that the fruiting body and primordium of G. lucidum are rich in higher molecular weight phenolic compounds with strong DPPHfree radical scavenging activity and moderate AChE inhibitory activities.G. lucidum primordia and fruiting bodies potentially present a novel source of natural inhibitors of acetylcholinesterase enzyme.
CONCLUSION
The aim of our study was to investigate the antioxidant and acetylcholinesterase inhibitory activities of Ganoderma lucidum fruiting bodies and primordia extracts.To the best of our knowledge, this is the first report evaluating the biological activity of compounds extracted solely from G. lucidum primordia.
Extracts taken from G. lucidum fruiting bodies and primordia using various extraction procedures were characterized.Two different procedures were used for polysaccharide extraction and the extraction of proteins.
The extracted components proved to be very effective.In general, we conclude that the components of G. lucidum fruiting bodies as well as primordia are more polar than non-polar.The total phenol content is higher in the case of primordia extracts.The results indicate that fruiting body and primordia of G. lucidum are rich in higher molecular weight phenolic compounds with strong DPPH free radical scavenging activity and moderate AChE inhibitory activities.G. lucidum primordia and fruiting body present a novel potential source of natural inhibitors of acetylcholinesterase enzyme.
There are no reports on AChE inhibitory activity of GL ext or GL-P ext obtained using other solvents.The ability to inhibit AChE is crucial in the development of Alzheimer's disease (AD).Since antioxidants play an important role in the protection against aging processes and neurodegenerative diseases such as AD, the rate of inhibition of AChE was determined for GL ext and GL-P ext extracts, which contain both polar and non-polar components with the capability of inhibiting AChE.
According to these results, both types of extracts present a potential source of compounds for slowing the aging and neurodegenerative processes.
Fig. 2 .
Fig. 2. The content of total phenols in extracts obtained from G. lucidum (GL-P ext ) primordium.Data are presented as mgGA/g GL-P ext and are expressed as means ± SD from two replicates.Extraction temperature: hot (H) and cold (C); extraction solvent: H 2 O, MeOH, EtOH, Ac and Hex.
Fig. 3 .
Fig. 3. DPPH radical scavenging activity of extracts obtained from G. lucidum fruiting body (GL ext ).Data are expressed as inhibition in %.Extraction temperature: hotat boiling point of the solvent (H), and coldat 25 °C (C); extraction solvent: H 2 O, MeOH, EtOH, Ac and Hex.Data are means ± SD from two replicates, where two independent experiments were performed.
Fig. 4 .
Fig. 4. DPPH radical scavenging activity of extracts obtained from G. lucidum primordia (GL-P ext ).Data are expressed as inhibition in %.Extraction temperature: hotat boiling point of the solvent (H) and coldat 25 °C (C); extraction solvent: H 2 O, MeOH, EtOH, Ac and Hex.Data are means ± SD from two replicates, where two independent experiments were performed.
Fig. 5 .
Fig. 5. Acetylcholinesterase (AChE) inhibitory activities of G. lucidum extracts (GL ext. ) obtained using hot (at boiling point of the solvent) / cold (at 25 °C) (H/C) extraction solvent (H 2 O, MeOH, EtOH, AcOH, Hex).Data are expressed as percentage (%) of AChE inhibition.Data are means ± SD from two replicates, where two independent experiments were performed.As a control inhibitor, galanthamine, which has an inhibitory activity of 95%, was used.
Fig. 6 .
Fig. 6.Acetylcholinesterase (AChE) inhibitory activities of G. lucidum primordium extracts (GL-P ext ) in c = 1mg/ml obtained using hot (at boiling point of the solvent) / cold (at 25 °C) (H/C) extraction solvent (H 2 O, MeOH, EtOH, Ac, Hex).Data are expressed as the percentage (%) of AChE inhibition.Data are means ± SD from two replicates, where two independent experiments were performed.As a control, the inhibitor galanthamine was used, with an inhibitory activity of 94%.
Table. Chemical content and bioactivity of polysaccharides obtained from G. lucidum fruiting body (GL-PS ext.-1/-2) and primordia (GL-P-PS ext.-1/-2) with two extraction procedures. Data are means ± SD from two replicates; two independent experiments were performed.
Single-cell sequencing reveals the reproductive variations between primiparous and multiparous Hu ewes
Background In the modern sheep production systems, the reproductive performance of ewes determines the economic profitability of farming. Revealing the genetic mechanisms underlying differences in the litter size is important for the selection and breeding of highly prolific ewes. Hu sheep, a high-quality Chinese sheep breed, is known for its high fecundity and is often used as a model to study prolificacy traits. In the current study, animals were divided into two groups according to their delivery rates in three consecutive lambing seasons (namely, the high and low reproductive groups with ≥ 3 lambs and one lamb per season, n = 3, respectively). The ewes were slaughtered within 12 h of estrus, and unilateral ovarian tissues were collected and analyzed by 10× Genomics single-cell RNA sequencing. Results A total of 5 types of somatic cells were identified and corresponding expression profiles were mapped in the ovaries of each group. Noticeably, the differences in the ovary somatic cell expression profiles between the high and low reproductive groups were mainly clustered in the granulosa cells. Furthermore, four granulosa cell subtypes were identified. GeneSwitches analysis revealed that the abundance of JPH1 expression and the reduction of LOC101112291 expression could lead to different evolutionary directions of the granulosa cells. Additionally, the expression levels of FTH1 and FTL in mural granulosa cells of the highly reproductive group were significantly higher. These genes inhibit necroptosis and ferroptosis of mural granulosa cells, which helps prevent follicular atresia. Conclusions This study provides insights into the molecular mechanisms underlying the high fecundity of Hu sheep. The differences in gene expression profiles, particularly in the granulosa cells, suggest that these cells play a critical role in female prolificacy. The findings also highlight the importance of genes such as JPH1, LOC101112291, FTH1, and FTL in regulating granulosa cell function and follicular development. Supplementary Information The online version contains supplementary material available at 10.1186/s40104-023-00941-1.
Background
Small ruminants, particularly native breeds, make a significant socio-economic contribution to the livelihoods of a considerable part of the human population [1][2][3].
Thus, combined efforts emphasizing management and genetic progress to improve animal outputs are of decisive importance [4][5][6]. The economic and biological efficiency of sheep production enterprises generally improves with increasing productivity and reproductive performance of ewes [7][8][9][10][11]. Hu sheep is a first-class protected local livestock breed in China and a world-renowned multiparous sheep breed. It has early sexual maturity, estrus in all four seasons, two or three litters a year, and an average lambing rate of 277.4% [12]. Hu sheep are currently bred on a large scale in China's mutton sheep production system, and the litter size of ewes clearly impacts economic efficiency.
The ovary is a critical reproductive organ consisting of follicles at several different developmental stages. The number of lambs produced is an important indicator of sheep fertility. It is also a complex quantitative trait regulated by genetic, epigenetic, and hormonal factors, with a heritability of 0.03-0.10 [13]. The number of lambs produced by each ewe is influenced by the number of ovulations, and ovulation can be genetically regulated by a single main effector gene or by several micro-effector polygenes [14, 15], such as BMPRIB [16], BMP15 [17], and GDF9 [18].
The ovary is a heterogeneous organ co-regulated by multiple cell types, which underlies the complexity of ovarian function. Follicular development is a highly coordinated process in sheep. Cyclic follicle recruitment, spatial displacement, follicular atresia, and ovulation are interrelated events driven by molecular signals released by somatic cells. These cells have different functions in the specific biological cycle of the ovary and contribute to the maturation of follicles [19]. Previous studies have focused on the impact of follicles (including granulosa cells and oocytes) on ovarian function [20, 21]. There is a growing recognition of the functional role of ovarian somatic cells (endothelial cells, stromal cells, perivascular cells, and immune cells) in follicular development [22, 23]. However, few studies have investigated the effect of the different cell types of the ovary on reproductive performance in sheep. Therefore, a functional analysis based on the different ovarian cell types and their specific physiological roles is important to explore ovarian function and elucidate the mechanisms underlying differences in lambing number.
With the development of sequencing technologies, single-cell RNA sequencing (scRNA-seq) has been employed to detect the expression profiles of different tissue cells. Thousands of single cells from a single biopsy can be analyzed by introducing unique molecular identifiers (UMI) in droplet-based protocols, reducing amplification errors and facilitating the detection of small populations of cells whose transcriptional programs are often not detected by bulk RNA sequencing [24]. scRNA-seq has revealed that cells express different marker genes at distinct developmental stages [25][26][27]. In addition, this technology has been used to explore the different cellular functions and developmental trajectories of the ovary in humans and some model animals [28][29][30]. In sheep, the technique is currently being used to investigate sperm-related functions in males [31][32][33], whereas few studies have focused on the ovarian function of domestic animals using scRNA-seq. In the current study, this technique was used to explore the cellular mechanisms underlying differences in lambing number in Hu sheep.
In sheep breeds with high fecundity, five main causal genes control ovulation and lambing numbers [34]. However, except for the FecB locus in the BMPR-1B gene, none of these loci is associated with high fecundity traits in Hu sheep [34]. There is still a gap between the specific gene regulatory networks and lambing number. In this study, we used 10× Genomics scRNA-seq to analyze ovarian tissue and explore the molecular mechanisms underlying high fecundity in Hu ewes, providing new targets for molecular breeding and a theoretical basis for further studies.
Ethical statement
The present study was approved by the Animal Care and Use Committee of Northwest A&F University, China (Approval No. DK2021113). All methods and experiments were performed in accordance with the relevant guidelines and regulations.
Sheep management
Six estrous Hu ewes (average age: 3.6 years) were divided into two groups based on the litter size recorded over three consecutive parities. The highly reproductive group (HLS, n = 3, body weight: 40.16 ± 1.19 kg) comprised ewes with a litter size of ≥ 3, while the lowly reproductive group (LLS, n = 3, body weight: 42.01 ± 0.31 kg) comprised ewes with a litter size of 1. Sheep weights and production records are shown in Table S1 (Additional file 1).
The 6 randomly selected ewes were slaughtered within 12 h of estrus.The ram test was used to determine the estrous status.Venous blood samples were collected before slaughter for testing blood biochemical and hormone levels.Unilateral ovarian tissues were collected, placed in a protective solution (MACS ® Tissue Storage Solution, Miltenyi, Bergisch Gladbach, Germany), stored at 4 °C, and analyzed by scRNA-seq.
Sample preparation and library construction
The entire unilateral ovaries of the Hu ewes were minced, digested with collagenase I for 30 min and trypsin for 10 min, sieved, centrifuged, and lysed for cell counting. scRNA-seq libraries were prepared with Chromium Single Cell 3' Reagent v3 Kits (10× Genomics, Pleasanton, California, USA) according to the manufacturer's protocol. Briefly, single-cell suspensions were loaded on the Chromium Single Cell Controller (10× Genomics, Pleasanton, California, USA) to generate single-cell GEMs. After GEM generation, full-length cDNA was obtained through barcoded reverse transcription reactions, followed by disruption of the emulsions using the recovery agent and cDNA clean-up with DynaBeads MyOne Silane Beads (Thermo Fisher Scientific, Waltham, Massachusetts, USA). cDNA was amplified by polymerase chain reaction (PCR). The amplified cDNA was fragmented, end-repaired, A-tailed, ligated to index adaptors, and amplified to generate the libraries. The libraries were sequenced on the MGISEQ-T7 platform (MGI Tech, Shenzhen, China).
Data preprocessing
The Cell Ranger software pipeline (v5.0.0) provided by 10× Genomics was used to demultiplex cellular barcodes; reads were mapped to the genome and transcriptome using the STAR aligner and down-sampled as required to generate normalized aggregate data across samples, producing a matrix of gene counts versus cells. The UMI count matrix was processed using the R package Seurat (v3.1.1) [35]. To remove low-quality cells and multiplet captures, a major concern in microdroplet-based experiments, cells were filtered out if they had fewer than 200 detected genes, fewer than 1,000 UMIs, or a log10GenesPerUMI below 0.7. We also discarded low-quality cells in which > 10% of the counts belonged to mitochondrial genes or > 5% belonged to hemoglobin genes. The DoubletFinder package (v2.0.2) [36] was applied to identify potential doublets. After applying these QC criteria, 41,150 single cells were included in downstream analyses. Library-size normalization was performed with the NormalizeData function in Seurat [35] to obtain normalized counts. The global-scaling normalization method "LogNormalize" normalized the gene expression measurements for each cell by the total expression and multiplied the result by a scaling factor (10,000 by default). The results were log-transformed.
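As a minimal illustration of the quality-control and normalization steps described above, the following R sketch applies the reported thresholds with Seurat. The input path, object names, and the gene-name patterns used to flag mitochondrial and hemoglobin genes are illustrative assumptions and are not taken from the authors' pipeline.

```r
library(Seurat)

# Hypothetical input: one sample's filtered gene-count matrix from Cell Ranger
counts <- Read10X(data.dir = "cellranger_out/filtered_feature_bc_matrix")
ovary  <- CreateSeuratObject(counts = counts, project = "HuEweOvary")

# Per-cell QC metrics; the gene-name patterns below are placeholders and would
# need to match the sheep genome annotation actually used
ovary[["percent.mt"]] <- PercentageFeatureSet(ovary, pattern = "^MT-")
ovary[["percent.hb"]] <- PercentageFeatureSet(ovary, pattern = "^HB[AB]")
ovary[["log10GenesPerUMI"]] <- log10(ovary$nFeature_RNA) / log10(ovary$nCount_RNA)

# Filters reported in the Methods (doublets are removed separately with DoubletFinder)
ovary <- subset(ovary, subset = nFeature_RNA > 200 &
                                nCount_RNA > 1000 &
                                log10GenesPerUMI > 0.7 &
                                percent.mt < 10 &
                                percent.hb < 5)

# Library-size normalization ("LogNormalize", scale factor 10,000), then log-transform
ovary <- NormalizeData(ovary, normalization.method = "LogNormalize", scale.factor = 1e4)
```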
Top variable genes across single cells were identified using the method described by Macosko et al. [37]. The most variable genes were selected using the FindVariableGenes function (mean.function = FastExpMean, dispersion.function = FastLogVMR) in Seurat [35]. Principal component analysis (PCA) was performed to reduce the dimensionality with the RunPCA function in Seurat [35]. Graph-based clustering was performed to cluster cells according to their gene expression profiles using the FindClusters function in Seurat [35]. Cells were visualized using a 2-dimensional Uniform Manifold Approximation and Projection (UMAP) algorithm with the RunUMAP function in Seurat [35]. The FindAllMarkers function (test.use = presto) was used in Seurat [35] to identify the marker genes of each cluster. For a given cluster, FindAllMarkers identified positive markers compared with all other cells.
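A corresponding sketch of the clustering workflow is given below. The paper reports the older FindVariableGenes call; the sketch uses the equivalent Seurat v3 interface, and the number of principal components and the clustering resolution are illustrative assumptions, since they are not reported.

```r
# Variable features, scaling, PCA, graph-based clustering and UMAP embedding
ovary <- FindVariableFeatures(ovary, selection.method = "vst", nfeatures = 2000)
ovary <- ScaleData(ovary)
ovary <- RunPCA(ovary, features = VariableFeatures(ovary))

ovary <- FindNeighbors(ovary, dims = 1:30)        # number of PCs chosen for illustration
ovary <- FindClusters(ovary, resolution = 0.8)    # resolution is an assumption
ovary <- RunUMAP(ovary, dims = 1:30)
DimPlot(ovary, reduction = "umap", label = TRUE)

# Positive marker genes for every cluster
markers <- FindAllMarkers(ovary, only.pos = TRUE, min.pct = 0.25, logfc.threshold = 0.25)
```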
Differentially expressed genes (DEGs) were identified using the FindMarkers function (test.use = presto) in Seurat. A P value < 0.05 and |log2 fold change| > 0.58 were set as the thresholds for significant differential expression. Gene Ontology (GO) enrichment and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses of the DEGs were performed in R based on the hypergeometric distribution.
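A hedged sketch of the DEG thresholding is shown below. The 'group' metadata column, the cell-type subset, and the use of clusterProfiler for the hypergeometric enrichment are assumptions; the paper does not name the enrichment package it used.

```r
# DEGs between groups within one cell type; 'group' is a hypothetical metadata
# column holding the HLS/LLS labels, and "Granulosa" assumes clusters were renamed
gc_cells <- subset(ovary, idents = "Granulosa")
Idents(gc_cells) <- "group"
deg <- FindMarkers(gc_cells, ident.1 = "HLS", ident.2 = "LLS")

# Thresholds from the Methods (the fold-change column is avg_log2FC in recent
# Seurat releases and avg_logFC in older ones)
deg_sig <- deg[deg$p_val < 0.05 & abs(deg$avg_log2FC) > 0.58, ]

# One possible hypergeometric-based KEGG enrichment; the significant DEGs would
# first need to be mapped to Ovis aries NCBI gene IDs, which depends on the annotation
library(clusterProfiler)
entrez_ids <- rownames(deg_sig)  # placeholder: symbol-to-gene-ID mapping needed here
ekegg <- enrichKEGG(gene = entrez_ids, organism = "oas", pvalueCutoff = 0.05)
```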
Pseudotime analysis
Pseudotime analysis was performed with the Monocle2 package [38]. The raw counts were converted from the Seurat object into a CellDataSet object with the importCDS function in Monocle. The differentialGeneTest function of the Monocle2 package was used to select ordering genes (qval < 0.01) that were informative for ordering cells along the pseudotime trajectory. Dimensionality reduction was performed with the reduceDimension function, followed by trajectory inference with the orderCells function using default parameters. Gene expression was plotted with the plot_genes_in_pseudotime function to track changes over pseudotime.
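A hedged R sketch of this Monocle2 trajectory analysis is shown below; the granulosa-cell Seurat object and the model formula used to select ordering genes are assumptions for illustration.

```r
library(monocle)

# 'gc_seurat' is a hypothetical Seurat object containing only the granulosa cells
cds <- importCDS(gc_seurat)
cds <- estimateSizeFactors(cds)
cds <- estimateDispersions(cds)

# Ordering genes: informative genes selected by differential testing (qval < 0.01);
# the formula term assumes the Seurat cluster labels were carried over as metadata
diff_test      <- differentialGeneTest(cds, fullModelFormulaStr = "~seurat_clusters")
ordering_genes <- row.names(subset(diff_test, qval < 0.01))
cds <- setOrderingFilter(cds, ordering_genes)

# Dimensionality reduction (DDRTree) and trajectory inference with default parameters
cds <- reduceDimension(cds, max_components = 2, method = "DDRTree")
cds <- orderCells(cds)

plot_cell_trajectory(cds, color_by = "State")
plot_genes_in_pseudotime(cds[c("JPH1", "LOC101112291"), ], color_by = "State")
```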
GeneSwitches analysis
GeneSwitches (v0.1.0) [39] was used to discover the order in which genes are switched on and off during cell state transitions at single-cell resolution. Gene expression data were binarized to a 1 (on) or 0 (off) state using the binarize_exp function (fix_cutoff = TRUE, binarize_cutoff = 0.05) of the GeneSwitches package. A mixture model of two Gaussian distributions was fitted to the input expression of each gene and used to calculate a threshold for binarizing that gene. Genes without a significant on-off bimodal distribution were removed, and the binary state of gene expression (on or off) was modeled using the find_switch_logistic_fastglm function (downsample = TRUE). The top 50 best-fitting genes (highest McFadden's pseudo R2) were plotted along the pseudotime timeline. Genes turned on along the timeline are shown above the horizontal axis, and genes turned off along the timeline are shown below the horizontal axis.
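The GeneSwitches steps described above could look roughly as follows in R; the branch states and the log-expression matrix are placeholders, and the helper calls follow the package vignette rather than the authors' script, so exact arguments may differ.

```r
library(GeneSwitches)

# One branch of the Monocle2 trajectory is analysed at a time; the state numbers
# and the log-expression matrix 'gc_logexpr' are placeholders
sce_branch <- convert_monocle2(monocle2_obj = cds, states = c(1, 2, 3), expdata = gc_logexpr)

# Binarize expression into on/off states with the cutoff reported in the Methods
sce_branch <- binarize_exp(sce_branch, fix_cutoff = TRUE, binarize_cutoff = 0.05)

# Fit logistic models along pseudotime to estimate each gene's switching time
sce_branch <- find_switch_logistic_fastglm(sce_branch, downsample = TRUE, show_warning = FALSE)

# Keep the 50 best-fitting switch genes and plot them along the pseudotime axis
top_switches <- filter_switchgenes(sce_branch, allgenes = TRUE, topnum = 50)
plot_timeline_ggplot(top_switches, timedata = sce_branch$Pseudotime, txtsize = 3)
```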
Histological observation
Hematoxylin and eosin staining
The ovary samples were fixed in 4% paraformaldehyde, embedded in paraffin, sectioned, and stained with hematoxylin and eosin (H&E) for histological observation of the ovarian tissues. SlideViewer 2.5.0 (3DHistech, Budapest, Hungary) was used for imaging.
Statistical analysis
The Student's t-test was used to analyze the blood biochemical data and the cell-type proportions in SPSS software, version 24.0 (IBM, Armonk, New York, USA). Statistical significance was set at P < 0.05.
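For illustration, this group comparison corresponds to a standard two-sample t-test; the sketch below uses randomly generated placeholder values rather than the study data.

```r
# Two-group comparison with Student's t-test in R (three ewes per group);
# the values below are random placeholders, not the study measurements
set.seed(1)
hls <- rnorm(3, mean = 40.2, sd = 1.2)   # e.g., a blood index in HLS ewes (placeholder)
lls <- rnorm(3, mean = 42.0, sd = 0.3)   # e.g., a blood index in LLS ewes (placeholder)
t.test(hls, lls, var.equal = TRUE)       # significance declared at P < 0.05
```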
Blood biochemical and body weight of sheep
Twelve hours before the ovaries were sampled, venous blood samples were collected to evaluate the physiological status of the sheep. As shown in Table 1, the blood biochemical parameters and body weight of the sheep did not differ between the two groups.
Clustering and identification of the ovarian somatic cells
In this study, ovaries were obtained from 6 ewes (3 replicates per group) with different litter sizes, and H&E staining was performed to observe the ovarian structure. Follicles at different developmental stages were observed in the ovaries of both groups (Fig. 1A), meaning that our subsequent analysis covers the cell types from the various developmental states present during estrus.
The ovaries were digested for 10× Genomics scRNA-seq (Fig. 1B). After stringent cell filtration, 38,921 cells were retained. The number of cells obtained per sample ranged from 5,796 to 7,335, the average number of UMIs per cell ranged from 6,227 to 10,384, the average number of genes per cell ranged from 2,160 to 3,003, and the average proportion of mitochondrial UMIs per cell ranged from 0.0330 to 0.0480 (Fig. 1C). The mapping rate of every sample was higher than 85%, indicating the reliability of the scRNA-seq data in this study. Based on the sequencing data, a Seurat-based workflow was used for cell clustering, and a total of 20 clusters (C) were identified by uniform manifold approximation and projection (UMAP) analysis (Fig. 1D). All clusters were present in both the HLS and LLS groups (Fig. 1E).
Differences of ovarian somatic cell expression profiles between the HLS and LLS groups
The differences between the somatic cell expression profiles of primiparous and multiparous Hu ewes were compared. In our study, the comparison of somatic cell expression profiles was based on cell type. The proportions of the cell types differed between the HLS and LLS groups: the proportions of endothelial and immune cells in HLS were lower than in LLS, whereas those of GCs and stromal cells were higher, and the distribution of perivascular cells was consistent in both groups (Table 2, Fig. 4A). A P-value < 0.05 and a fold change > 1.2 were used as screening criteria. The numbers of up-regulated DEGs were 61, 115, 65, 74, and 105, and the numbers of down-regulated genes were 179, 168, 108, 181, and 179 in endothelial cells, GCs, stromal cells, perivascular cells, and immune cells, respectively (Fig. 4B). GO and KEGG enrichment analyses were conducted on the identified DEGs (Fig. 4C). The enrichment results showed that the functions up-regulated in the ovarian somatic cells of the HLS group relative to the LLS group were associated with the ribosome. Up-regulation of GO terms such as structural components of ribosomes, cytoplasmic large ribosomal subunits, and translation was observed in endothelial cells, GCs, stromal cells, and perivascular cells. The KEGG enrichment results for the ovarian somatic cells were closely related to cellular functions and an enhanced ribosome pathway. The "ovarian steroidogenesis" pathway was up-regulated in GCs, and enhanced enrichment of the "oxidative phosphorylation" pathway was observed in stromal cells. Decreased functional enrichment in somatic cells was associated with extracellular structures and cell adhesion. Among the GO terms, cell adhesion enrichment was decreased in endothelial cells, and functional enrichment was also decreased in stromal and perivascular cells. In the KEGG enrichment, a corresponding decrease in functional enrichment (ECM-receptor interaction) was observed in stromal and perivascular cells. Compared to the LLS group, the HLS group also showed decreased nutrient metabolic functions in the KEGG results, such as decreased "cholesterol metabolism" enrichment in endothelial cells and decreased enrichment of "protein digestion and absorption" in stromal and perivascular cells. This difference was more pronounced in GCs, where the HLS group showed significantly down-regulated enrichment of "glycolysis/gluconeogenesis", "oxidative phosphorylation", and "thermogenesis". Comparing the expression profiles of the ovarian somatic cells of Hu ewes with different litter sizes, the number of differential genes and the functional changes closely related to ovulation were greater in GCs than in the other somatic cells. Thus, the difference in GCs between the two groups required further analysis.
GC subtype identification and developmental trajectory of sheep ovary
We further subdivided the identified GCs into 11 sub-clusters (CL1-CL11) (Fig. 5A). CL1, CL3, CL5, and CL8 were recognized as early GCs (eGCs) through the high expression of WT1 [45], TNNI3 [41], and WNT6 [46], and the low expression of VCAN [47]. Although we recognized these cells, the marker mapping was not ideal, so we performed cell function analysis with GO and KEGG (Fig. S1). Based on the GO and KEGG enrichment results, we found that the enriched GO terms and KEGG pathways in CL1 were related to signal transduction and the response to peptide hormones and insulin, and that key pathways such as the "Rap1 signaling", "FoxO signaling", "AMPK signaling", "mammalian target of rapamycin (mTOR) signaling", and "WNT signaling" pathways were enriched in CL1.
Positive regulation of transcription by RNA polymerase II, nucleus, and DNA-binding functions were enriched in CL3, together with the "MAPK signaling", "steroid biosynthesis", "focal adhesion", "PI3K-Akt signaling", and "SMAD binding" pathways. In CL5, the enriched GO terms were related to the nucleus and RNA binding, and the enriched pathways were "regulation of the actin cytoskeleton", "focal adhesion", and "relaxin signaling". GO terms related to the nucleus, transcription corepressor activity, transcription factor binding, and the response to cAMP were enriched in CL8, along with the "PI3K-Akt signaling" and "MAPK signaling" pathways. Previous studies have shown that WNT signal activation occurs exclusively at the primordial follicle stage [46], while the "FoxO signaling", "mTOR signaling", "MAPK signaling", and "PI3K-Akt signaling" pathways have key functions in the activation of primordial follicles [48, 49]. Through these functional enrichment results, we confirmed that these cell clusters belonged to the eGCs. Mural GCs (mGCs) (CL2 and CL9) were identified based on the expression levels of the reported cell markers CITED2 [50], FSHR [51], GJA4, IGFBP5, and CYP11A1 [52, 53]. CL4, CL6, and CL7 were cumulus GCs (CCs), given the high expression levels of the marker genes IHH, INHBB, and IGFBP2 [43], while CL10 and CL11 were recognized as atretic GCs (aGCs), since the expression levels of GJA1 and CDH2 are lower in the GCs of atretic follicles than in those of healthy follicles. We found very extensive co-expression across the different GCs, indicating that substantial cell differentiation takes place during estrus. Although the ovaries were collected at a single time point, the special histological structure of the ovary allowed follicles in different developmental states to be obtained (Fig. 1A). Cell trajectory analysis with Monocle [38] was used to explore the differentiation trajectory of the GCs. From the trajectory analysis, the GCs were divided into seven states. All CLs were represented in the trajectory: CL1 was present in almost all states; CL2 in states 6 and 7; CL3 in states 1, 2, and 3; CL4 in state 4; CL5 in states 1 and 4; CL6 in state 6; CL7 in states 6 and 7; CL8 in states 1 and 4; CL9 in state 7; and CL10 in state 4 (Fig. S2). We identified the GC subtypes and mapped them onto the trajectory. The identified eGCs were located in the early states, and the GCs then differentiated in two broad directions, towards mGCs and CCs. Through the trajectory analysis, among the eGCs, CL1 appeared in all states, CL3 and CL8 appeared only in the early developmental states, whereas CL5 tended to differentiate into CCs (Fig. 5D). We performed a heatmap analysis of the cells in different developmental states, and the granulosa cells exhibited four patterns of gene expression. Genes in the pre-branch showed the model1 and model3 patterns, with high expression in the early developmental stages, while genes in branch1 and branch2 showed similar patterns of high expression late in the two different branches (Fig. 5E).
KEGG and GO enrichment analyses were conducted on the genes with the different expression patterns. The enrichment results of model1 and model3 were similar to those of the eGCs, being enriched in the "adherens junction", "Notch signaling", "WNT signaling", "thyroid hormone signaling", "MAPK signaling", "Hippo signaling", and "mTOR signaling" pathways. The growth hormone synthesis, secretion, and action pathway was enriched in model1, indicating that the cells were in a period of rapid growth at this stage. In model2, significant functional enrichment was related to the transfer of genetic material, such as the ribosome, proteasome, DNA replication, RNA transport, and mismatch repair, as well as to enhanced cell metabolism activities such as oxidative phosphorylation, thermogenesis, and the citrate cycle, suggesting that cell metabolism is activated during the development of early GCs into cumulus cells. The enrichment results of model4 included functions related to the regulation of the actin cytoskeleton, endocytosis, and the phagosome (Fig. 5F). These results support the GC subtype identification strategy. The marker genes of each GC subtype were also explored (Additional file 2): in Hu ewes, the marker genes identified in this study were WT1 and CD34 for eGCs, AMH and INHA for CCs, and HTRA3 for mGCs.
To investigate the key genes involved in the developmental timeline of the granulosa cells, GeneSwitches analysis was conducted. Along the timeline of GCs developing into CCs (branch1), we observed gene switch-off throughout the entire timeline, accompanying the development of early GCs into CCs. In the later stage, more transcription factors (TFs), such as JUNB and FNDC3B, and key genes such as FOSB participated in the switch-off process. At 23.4 h, the expression of LOC101112291 was switched off. The expression levels of these genes decreased during the final transformation into CCs (Fig. 6A and B). Along the timeline of GC development into mGCs (branch2), gene expression levels increased in the early stage, and the expression of some genes related to energy metabolism, such as ACTG1, LDHB, ATP5MG, ATP5MC3, and ATP5F1E, was initiated. In the late stage of the timeline, we observed the initiation of JPH1 expression. These genes were expressed at higher levels when early GCs were transformed into mGCs (Fig. 6A and B).
We performed KEGG pathway enrichment analysis on the key genes in the different branches. In branch1, only the "adherens junction" pathway was switched off; the other pathways were switched on. The "adherens junction" pathway was enriched at low levels in the early stages of development and decreased along the timeline, while "proteasome", "thermogenesis", and "oxidative phosphorylation" increased along the timeline and decreased rapidly before the endpoint, and "RNA polymerase" increased only in the late stages of development. During the development of branch2, the pathways that were switched on centered on cell energy metabolism ("oxidative phosphorylation" and "TCA cycle") and on functions related to cellular protein synthesis ("RNA transport", "protein export", and "protein processing in the endoplasmic reticulum") (Fig. 6C).
Differences of ovarian GC expression profiles between the HLS and LLS group
By comparing the proportions of the different cell subtypes, the proportion of aGCs was found to be significantly higher in the LLS group than in the HLS group (Fig. 7A). In the LLS group, the physiological status of the GCs was altered, resulting in an increase in follicles with a propensity for atresia. Changes in cell function were analyzed by KEGG enrichment on a cell subtype basis. Pathways associated with apoptosis and necroptosis were inhibited, while pathways associated with cell survival were up-regulated, and these changes were cell subtype specific. In mGCs, the enrichment of the necroptosis pathway was elevated; the genes enriched in this pathway were FTH1, FTL, and H2AZ1. FTH1 and FTL in the necroptosis pathway exert a negative feedback regulatory effect on the ferroptosis pathway. In the current study, the high expression of FTH1 and FTL in the HLS group reduced necroptosis by reducing the release of reactive oxygen species (ROS) following decreased lysosomal membrane permeabilization. The expression levels of the ferroptosis-resistance genes FTH1 and FTL were up-regulated in HLS. We then retrieved the location of FTH1 and FTL in the ferroptosis pathway and analyzed their upstream and downstream genes. The expression levels of the downstream key genes MAP1LC3A, ATG5, ATG7, and NCOA4 were down-regulated in the HLS group (Fig. S3), thus reducing ferroptosis by inhibiting the Fenton reaction. On the other hand, the enrichment of the FoxO signaling pathway was down-regulated in HLS, reducing apoptosis through decreased expression of IRS2, EP300, BCL2L11, and SGK1 (Fig. S3). In CCs, the enrichment of ECM-receptor interaction in the HLS group was up-regulated through increased expression of COL4A4 (Fig. 7D), whereas the enrichment of the cAMP signaling, oxytocin signaling, tight junction, and thyroid hormone signaling pathways was down-regulated in the multi-lamb group, with decreased expression of CALM1, PLD1, OXT, PLN, ATP2B1, F2R, ACTG1, and MYL6 (Fig. 7D, Fig. S3). In eGCs, the differences between the two groups were reflected in the down-regulation of the AP-1 transcription factor complex, composed of FOS, FOSB, and JUN (Fig. 7B and C).
Discussion
Every estrous cycle in sheep typically consists of three or four waves of follicle development during the inter-ovulatory interval [54], and about 1-3 mature follicles ovulate [55]. Estrus is a special time window: in peripheral blood, the luteinizing hormone peak can be observed, estradiol decreases rapidly from its maximal values, progesterone is at its lowest level, and ewes usually ovulate about 20 h after the onset of estrus [56]. Thus, understanding the transcriptional profiles of ovarian somatic cells during estrus is essential to investigate the mechanism of ovulation. In the present study, we investigated the differences in the expression profiles of ovarian somatic cells between primiparous and multiparous Hu ewes by scRNA-seq, providing insight into the mechanisms underlying the differences in ovulation numbers. A total of five types of somatic cells were identified, and the corresponding expression profiles were mapped in the ovaries of Hu ewes. Subtype identification of the GCs was performed, and the key genes involved in the transitions between subtypes were analyzed. The differences in the cellular expression profiles were compared to identify the key factors regulating different litter sizes. These findings provide a theoretical basis for breeding high-fertility sheep and propose new targets for molecular genetics-based selection.
Identification of ovarian somatic cells
Various cells in the ovary act in synergy to enable ovarian function, yet existing research has not paid much attention to the function of these somatic cells, with the exception of GCs. The ovarian stroma comprises mostly incompletely characterized stromal cells (e.g., fibroblast-like, spindle-shaped, and stromal cells) [57]. In recent years, the role of stromal cells in the ovary has been revisited; studies have identified estrogen receptors α and β in the cytoplasm and nucleus of bovine stromal cells, and unlike fibroblasts, these cells are oval with lipid droplets and vacuoles [58]. Progesterone receptor α has been identified in stromal cells of pregnant and postpartum rabbit ovaries [59]. In the present study, a large amount of energy metabolism occurred in the ovaries during estrus to support ovulation. The ovary has an extensive blood supply, which is involved in the formation of dominant follicles, and the endothelium participates extensively in the angiogenic process. The importance of co-transplanting ovarian endothelial cells with stromal cells when performing follicular transplantation in individuals with premature ovarian failure has been demonstrated, to ensure the formation of a well-vascularized and well-structured ovarian-like stroma [60]. A previous study proposed that perivascular cells are multipotent progenitors that contribute to the granulosa, thecal, and pericyte cell lineages in the ovary, thereby supporting folliculogenesis [23]. The main functions of immune cells in the ovary are defense, remodeling of the ovarian structure, signaling, and ovarian aging [22, 61]. In the present study, the marker genes of ovary somatic cells proposed in former studies [44] were confirmed and are also available as stromal cell gene signatures for Hu ewes. Among the ovarian somatic cells, the best studied are the GCs. The GC is a somatic cell surrounding the oocyte, co-located with the oocyte in the same follicular microenvironment. Its functions include responding to gonadotropin stimulation during ovulation and supporting follicular development. GCs secrete factors, including gonadal steroids, growth factors, and cytokines, that are critical for GC survival and follicular growth [62, 63]. In contrast, the identification of GC subtypes remains controversial, especially in sheep. In the human ovary, the expression pattern of early-stage GCs is WT1^high/EGR4^high/VCAN^low/FST^low, the expression pattern of CCs is VCAN^high/FST^high/IGFBP2^high/HTRA1^high/INHBB^high/IHH^high, and the expression pattern of mGCs is WT1^low/EGR4^low/KRT18^high/CITED2^high/LIHP^high/AKIRIN1^high [43]. In domestic animals, the GCs of goats have been identified based on developmental trajectory: ASIP and ASPN were highly expressed in early GCs, INHA, INHBA, MFGE8, and HSD17B1 were highly expressed in GCs during the growth phase, and IGFBP2, IGFBP5, and CYP11A1 were also highly expressed during the growth phase of GCs [42]. However, that study did not give subtype classification markers based on cell function. The present study defined GC subtypes by combining existing marker genes with functional analysis of the different sub-clusters, and the reliability of the GC subtype identification was verified using pseudotime analysis. We found that WT1 and CD34 are marker genes for eGCs, AMH and INHA are marker genes for CCs, and HTRA3 is a marker gene for mGCs. These marker genes are applicable for identifying sheep GC subtypes.
Five somatic cell lineages were identified in the sheep ovaries based on their gene expression signatures, and the GCs were further characterized into subtypes. The marker genes of each cell type were expressed only in specific "regions" of the UMAP plot and in the immunofluorescence profiles, consistent with the anatomy of the ovary [64]. These results illustrate the reliability of the scRNA-seq data from this study. However, no luteal cells were detected in our dataset, which is consistent with our previous study [40] and implies either the degradation of luteal cells during the sample collection period (estrus) or that luteal cells are difficult to collect.
The transition of different GC subtypes
CCs and mGCs interact with the oocyte differently within the follicle. CCs carry out bidirectional information transfer with the oocyte through gap junctions, contributing to oocyte maturation, fertilization, and early embryonic development [65]. In contrast, mGCs carry multiple receptors on their surface and can secrete various hormones and cytokines that regulate follicular growth and maturation in an autocrine and paracrine manner [66]. In the present study, key genes involved in the transitions between GC subtypes were identified using GeneSwitches. The switching off of LOC101112291 expression led to the differentiation of eGCs into CCs, while the initiation of JPH1 expression led to the differentiation of eGCs into mGCs. A previous study investigating the molecular mechanism of lambing in Hanper sheep using ovarian tissue revealed that LOC101112291 (XIST) regulates lambing number through methylation [20]. On the other hand, the protein encoded by the JPH1 gene belongs to the junctophilins (JPHs), a family of structural proteins that connect the plasma membrane with intracellular organelles such as the endoplasmic/sarcoplasmic reticulum (ER/SR). The anchoring of these membrane structures creates highly organized subcellular junctions that play an important role in signal transduction in all excitable cell types [67]. Our study found that the expression of these genes was switched on or off along the GC developmental trajectory. Therefore, LOC101112291 and JPH1 may potentially regulate the direction of differentiation of early GCs.
Differences in transcriptional profiles of GCs in Hu sheep with different litter size
In modern sheep production systems, the reproductive performance of female animals determines the economic profitability of farming, and how to increase the number of lambs has always been a central focus of sheep breeding and reproduction research. Previous studies have shown that the GC is vital for follicle development [62, 63, 68]. Our data revealed that the number of differential genes and the key functional differences between primiparous and multiparous Hu ewes were concentrated in the GCs, so we paid particular attention to these cell clusters. Li et al. [42] studied the gene expression of GCs at different stages in two populations of Jining Gray goats and found differences in the enrichment of GO terms of GCs at different periods between the litter-size groups; that study showed, from a functional perspective, differences in the expression profiles of GCs for different litter sizes. In this study, the definition of the subtypes of Hu ewe GCs enabled us to discover differences in the functions of GCs between the two groups. Follicular atresia was increased in the LLS group, and this was mainly caused by ferroptosis of the GCs. Healthy growing follicles have a granulosa layer aligned with the follicular basement membrane, with no apoptotic cells present. In the early stages of follicular atresia, apoptotic GCs gradually increase, and in advanced atretic follicles most GCs undergo apoptosis, leading to severe disruption of the granulosa layer and clearance of the follicle. Apoptosis is initiated in the GCs on the inner surface of the granulosa layer, while the oocytes, as well as the inner and outer membrane layers, are not affected by apoptosis in the early stages of atresia [69], suggesting that GC apoptosis plays an initiating role in follicular atresia [70, 71]. Ferroptosis is a form of cell death caused by iron-dependent lipid peroxidation and ROS accumulation, characterized by the reduction or loss of mitochondrial cristae, rupture of the outer mitochondrial membrane, and condensation of the mitochondrial membrane [72]. Zhang et al. [73] found that transferrin expression was significantly reduced and PCBP expression significantly increased in early atretic porcine follicles, suggesting that iron accumulation begins early in follicular atresia and that ferroptosis has an essential regulatory role in follicular atresia. Another study on female infertility found that induced iron overload in GCs led to ferroptosis and suppressed oocyte maturation through exosomes released from the GCs, suggesting that ferroptosis of GCs is detrimental to oocyte development [74]. This study found that the GCs of multiparous ewes suppressed ferroptosis by increasing the expression levels of the anti-ferroptosis genes FTH1 and FTL, which promotes oocyte maturation and prevents follicular atresia, contributing to the multiparous trait.
Conclusion
In our study, we identified differences in the expression profiles of ovarian somatic cells between primiparous and multiparous Hu ewes. These differences were mainly attributed to the GCs. The expression states of JPH1 and LOC101112291 emerged as significant indicators of the evolutionary directions of the granulosa cells. Additionally, FTH1 and FTL potentially regulate litter size by inhibiting granulosa cell ferroptosis and promoting follicle development. This study provides new insights into the molecular mechanisms underlying the high reproductive rate of Hu sheep.
Fig. 1 Single-cell transcriptome sequencing of somatic cells in the Hu sheep ovary. A H&E staining of the ovary; 1, primordial follicle; 2, growing follicle; 3, antral follicle; 4, Graafian follicle. B Procedure of ovary single-cell transcriptome sequencing. C Quality control of the single-cell transcriptome data. D UMAP of the ovary single-cell transcriptome sequencing data and cluster distribution in the HLS and LLS groups
Fig. 2 Identification of somatic cells in the Hu sheep ovary. A Identification of the five different cell types on the UMAP. B Dot plot of the expression levels of the marker genes of the different cell types. C Feature plots and immunofluorescence of representative marker genes of ovary somatic cells; green indicates the gene, and blue indicates DAPI. Scale bar: 200 μm
Fig. 3 Heatmap of ovary somatic cell marker genes and functional enrichment
Fig. 4 Comparison of cell type expression profiles between single- and multi-lamb sheep ovaries. A Somatic cell type differences on the UMAP. B Differentially expressed genes (DEGs) in the somatic cells of the sheep ovary. C Top 5 Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment terms; light red indicates up-regulated, light blue indicates down-regulated
Fig. 5 Granulosa cell subtype identification in the sheep ovary. A UMAP of the re-clustered granulosa cells. B Heatmap of granulosa cell sub-cluster marker genes; "-" indicates low expression and "+" indicates high expression of the corresponding gene. C Granulosa cell subtype identification on the UMAP. D Granulosa cell pseudotime trajectory. E Heatmap of granulosa cells in different developmental states. F Top 10 KEGG enrichment terms of the pseudotime heatmap genes
Fig. 6 Granulosa cell developmental trajectory with pseudotime. A Key genes involved in the developmental process of the granulosa cells. The horizontal axis is pseudotime and the vertical axis is the goodness-of-fit R2. Genes turned on with pseudotime are shown above the horizontal axis, and genes turned off are shown below it. Genes satisfying the following conditions were selected for mapping: 1. the percentage of zero-expressing cells is below 90%; 2. the top 50 genes with the highest goodness-of-fit. B Feature plots of representative gene expression. C Top 10 KEGG enrichment terms of the genes involved in the granulosa cell developmental process; up, functional enrichment turned on; down, functional enrichment turned off
Table 2 Comparison of different cell type proportions between the HLS and LLS groups
Farm animal genetic resources in agro ecosystem of north east India
North Eastern Region of India is the homeland of diverse animal genetic resources and represents a unique agro-ecosystem with an integrated subsistence low-input tribal production system in which farm animals play an important role in improving the socio-economic status and livelihood of the people. The total livestock and poultry population of this region is about 70.13 million (6.85% of India), of which 92.76% is indigenous. Among the 183 registered breeds of livestock and poultry in India, this region has 19 registered breeds, which include two cattle, one buffalo, two goat, two sheep, four pig, two horse and pony, one yak, four chicken and one duck breed. Besides these, many uncharacterized farm animal breeds/populations, known by their local names, are reared by tribal farmers in the region. The review enumerates the farm animal genetic resources of this region and their current status, descriptions, unique features, utility, economic valuation and cultural importance, as well as future conservation strategies. Precise and reliable estimation and evaluation of the different economic and climate-resilient traits of indigenous farm animal germplasm, together with their economic valuation, genetic characterization, documentation and registration, is highly warranted. A model is also suggested and proposed for the implementation of strict policies by central and state agencies to facilitate in situ conservation with active community participation and ex situ conservation through the application of modern biotechnological tools, which is warranted to maintain the diversity of farm animals in the north east region of India.
The North-Eastern region of India is one of the major biodiversity hotspots in the world. This region not only contributes to plant diversity but also represents a huge diversity of animal genetic resources. The unique domestic species such as the yak and mithun, and wild species such as the one-horned rhino and pygmy hog, are the heart throb of this region and are well known globally. This region of India lies between 21.5º N to 29.5º N latitude and 85.5º E to 97.5º E longitude and comprises Assam, Arunachal Pradesh, Manipur, Meghalaya, Mizoram, Nagaland, Sikkim and Tripura. It occupies about 8% of the total land area and holds 4% of the total population of the country (Census 2011). This region has a unique agro-ecosystem, with high annual rainfall (2500-3000 mm), a subtropical to alpine climate, and undulating and hilly terrain with altitudes ranging from 1,000 to 3,000 m above mean sea level. About 65.59% of the geographical area is covered by forest (India State of Forest Report 2015), which is mostly under private or community ownership. This unique geographical setting leads to diversity in animal genetic resources and their production systems. By and large, this region practices an integrated subsistence low-input tribal production system in which livestock and poultry play a complementary and vital role in improving the socio-economic status and livelihood of the people.
The total livestock and poultry population of this region is about 70.13 million, which includes 13.29 million cattle, 0.58 million buffalo, 0.57 million sheep, 7.85 million goat, 3.95 million pig, 23,000 horse and pony, 1,000 mule, 2,000 donkey, 18,000 yak, 0.30 million mithun and 43.53 million poultry (Table 1). Among them, 92.76% is indigenous and the remainder is crossbred (Table 2). Although there are 183 registered breeds of livestock and poultry in India, this region has only 19 registered breeds, which include two cattle, one buffalo, two goat, two sheep, four pig, two horse and pony, one yak, four chicken and one duck breed. However, many uncharacterized farm animal breeds/populations, described as indigenous local, are reared by tribal farmers in the region. The diversity of domesticated livestock and poultry breeds developed through years of evolution within a specific niche as a result of adaptation and selection. These indigenous animal genetic resources play a vital role in the food and livelihood security of the people and in maintaining genetic diversity in the ecosystem. These indigenous animals are able to survive and reproduce under adverse agro-climatic conditions, even in low- or zero-input production systems. The objective of this review is to enumerate the farm animal genetic resources of the region and their descriptions, unique features, utility and importance, which will be an important aspect for conservation and breeding strategies in this region of India.
Cattle genetic resources
The total cattle population of the NE region is about 13.29 million, of which 93.34% is indigenous. However, most of the animals are of non-descript type, except in Assam and Sikkim where the two indigenous registered cattle breeds, Lakhimi and Siri, are found. Among the eight states of the NE region, Assam possesses 77.36% of the total cattle population, followed by Tripura (7.14%) and Meghalaya (6.74%). Mizoram has the lowest cattle population, with only 35,000 heads (Livestock Census 2012). The trend in cattle population dynamics from 2007 to 2012 is generally towards negative growth, except in Assam, Meghalaya and Sikkim, where the total cattle population increased by 2.66%, 1.01% and 3.70% respectively.
Siri cattle: It is a medium-sized zebu cattle breed of the NE region, distributed in Sikkim and the Darjeeling district of West Bengal. It is said to be a native of Bhutan, where it is called Nublang. The population drastically declined from 79,000 to 13,948 during 2003-2012 and came under the threatened category. Siri is the largest cattle breed compared to the other cattle breeds of the NE region of India.
A typical cervico-thoracic hump and long hairs are the characteristics of this breed. The animal has excellent draught ability in hilly terrain because of its strong legs and feet. The daily milk yield of Siri cattle ranges from 2.0 to 6.5 kg, with a fat content of 2.8 to 5.5% (Tantia et al. 1996).
Lakhimi cattle: It is a small-sized zebu cattle breed distributed across Assam. The total population of Lakhimi cattle is about 79 lakh. Relatively short legs and a small bowl-shaped udder are the characteristics of this breed. The average milk production per lactation is about 270-375 kg (NBAGR 2017). Bullocks are excellent draught animals and are used in agricultural operations. Besides these two registered breeds, each state of the NE region has its own local cattle known by local names such as Manipuri cattle, Arunachali cattle and Mizo cattle. However, their descriptions and specific features are not well documented.
Buffalo genetic resources
The NE region of India is the land of the swamp buffalo and has seen important evolutionary divergence from wild to swamp and from swamp to riverine buffalo (Mishra et al. 2009). This region has 0.58 million buffaloes, of which Assam possesses 75.12% (0.43 million), followed by Manipur with 11.40% (0.066 million) (Livestock Census 2012).
Luit (Swamp) buffalo: It is a medium-sized, black-coloured buffalo mostly found in the upper Brahmaputra valley of Assam and in parts of Mizoram, Manipur and Nagaland bordering Assam. A compact and strongly built body with light white stockings up to the knees on both fore and hind legs is characteristic of this breed. The average lactation milk yield of Luit buffalo ranges from 385 to 505 kg. Bullocks have excellent draught ability for carting and ploughing, especially in muddy fields for paddy cultivation (NBAGR 2018).
Assamese buffaloes: It is a medium-sized buffalo with a primarily black body coat colour, found in the upper, lower and central Brahmaputra river valley regions of Assam. The buffaloes are reared in a traditional nomadic system of management under zero-input conditions, locally termed the khuti system. They are mainly reared for the sale of young male calves, which are primarily used for carting and agricultural operations. The milk of Assamese buffaloes is popularly known as Khuti milk. The average milk yield ranges from 0.5 to 6 litres/day (Mishra et al. 2008).
Manipuri buffaloes: They are distributed in the hilly as well as the valley/plain regions of different parts of Manipur. Based on habitat and distribution, Manipuri buffaloes are of two types, viz. the hill type, locally termed Chingi-eroi, and the valley type, locally named Tamgi-eroi. The population of Manipuri buffaloes is estimated at around 0.066 million. It has been found to be a pure domesticated swamp type based on its karyotypic profile (Mishra et al. 2009). Typical white markings on either side of the muzzle and lower jaw are characteristics of the Manipuri buffalo. They are mainly used for meat as well as for different agricultural operations and carting (Mishra et al. 2009).
Sikamese buffaloes: These buffaloes are natives of the state of Sikkim. They are small-sized, black or grey coloured buffaloes, mostly found in different districts of Sikkim. These buffaloes are well suited to hilly terrain for carrying heavy loads because of their short strong legs and compact hardy body. The milk yield of this buffalo is very poor (Pathak and Singh 2001).
Pig genetic resources
The NE region contributes a major share of the pig genetic resources of India, accounting for 38.38% of the country's pig population (Livestock Census 2012). The total pig population of this region is about 3.95 million, of which Assam possesses 41.37%, followed by Meghalaya (13.73%), Nagaland (12.74%) and Tripura (9.18%). Sikkim has the lowest pig population, with only 30,000 heads (Livestock Census 2012). Among the 8 registered pig breeds of India, 4 breeds belong to the NE region, namely Niang Megha, TenyiVo, Doom and Zovawk (NBAGR 2017). Besides these, an important indigenous pig known as the Mali pig is very popular in this region.
Niang Megha: It is also known as the Khasi local pig, mostly distributed in the Garo, Khasi and Jaintia hills of Meghalaya. The estimated population of Niang Megha pigs is about 4.3 lakh. They are well known for their nesting behaviour before farrowing and strong mothering ability. The body coat of this pig is covered with long and coarse bristles, which protect it from cold weather. Pigs attain sexual maturity at an early age; the average age at sexual maturity and age at first farrowing are about 197 and 326 days respectively (Zaman et al. 2014).
TenyiVo: It is a small-sized black-coloured pig mostly found in the Chakesang, Mao, Tuensang and Angami districts of Nagaland. The name TenyiVo literally translates to "pig from Angami". Among the Sema tribe this breed is called Suho, and among the Lotha tribe it is known as Votho. The estimated population of TenyiVo pigs is only 60,000-70,000. Early sexual maturity and good mothering ability are characteristics of this breed. The average age at first estrus and age at first farrowing are 182 and 298 days respectively (Chusi et al. 2016).
Doom: It is a medium-sized black-coloured pig, mostly distributed in the lower parts of the Brahmaputra valley of Assam. The estimated population of Doom pigs is only about 3,000. They are comparatively larger than the other local pig breeds of this region and migrate in groups.
Zovawk: It is a black-coloured pig with a white spot on the forehead and white patches on the belly, mostly distributed in different parts of Mizoram. The estimated population of Zovawk pigs is about 39,000. A concave top line and long bristles along the midline are characteristics of the Zovawk pig. The average age at first fertile service and age at first farrowing are 323 and 437 days respectively (Kalita et al. 2018).
Mali pig: Mali is a black-coloured indigenous pig breed widely distributed in different parts of Tripura. Short legs and a drooping rump are characteristics of this breed. These pigs are well known for their early sexual maturity. The average age at puberty and age at first farrowing are 127 and 281 days respectively (Dandapat et al. 2010).
Goat genetic resources
The NE region has a goat population of 0.79 million, of which Assam contributes 78.50%, followed by Tripura (7.78%), Meghalaya (6.01%) and Arunachal Pradesh (3.89%). Mizoram has the lowest goat population, with only 22,000 heads (Livestock Census 2012). The goat population in this region showed 32.13% growth from 2007 to 2012. Among the 34 registered goat breeds of the country, this region has only two registered breeds, known as Sumi-Ne and Assam hill goat.
Sumi-Ne: It is a medium-sized goat, also known as the Nagaland long hair goat, found in different parts of Nagaland. They are mostly reared in a traditional open-range system with almost zero inputs by the Sumi tribes of Nagaland. The estimated population of Sumi-Ne goats is 4,500. They are mainly reared for silky fibre production. The long silky fibres obtained from these goats are used by local people for making traditional items of socio-cultural significance (NBAGR 2017).
Assam hill goat: It is a small-sized breed of goat, mostly found in the hilly terrain of the North Cachar and Karbi-Anglong districts of Assam and in the adjoining hilly tracts of Meghalaya. They are well known for their good quality meat, high prolificacy and adaptability to low-input, poor management conditions. Twinning is very common in the Assam hill goat (Zeshmarani et al. 2007). The average age at first heat and age at first kidding were 266 and 439 days respectively (Kadirvel et al. 2013).
Sheep genetic resources
The total sheep population in the NE region is about 0.57 million, of which Assam accounts for 90.40%, followed by Meghalaya.
Banpala sheep: It is a medium-sized sheep with a compact body covered with coarse wool, found in different parts of Sikkim and in neighbouring western Bhutan and eastern Nepal. The breed derives its name from the fact that it is mostly reared inside the forest ('ban' means forest and 'pala' means rearing). This sheep is reared mostly by the traditional shepherd tribe called the Gurung. It is a typical dual-purpose breed reared for both coarse wool and meat production. Banpala sheep produce 1 kg of coarse wool per year, which is obtained in two shearings (Bhutia et al. 2006).
Tibetan sheep: It is a medium-sized sheep mostly distributed in northern Sikkim and the Kameng district of Arunachal Pradesh. The Tibetan sheep is famous for producing excellent lustrous carpet-quality wool. The fleece of this sheep is relatively fine and dense on the belly and leg regions. Animals are shorn twice a year, with an average greasy fleece weight per clip ranging from 400 to 900 g (Kumar et al. 2017).
Equine genetic resources
The North East region of India is bestowed with diverse indigenous animal genetic resources, including equines. However, in the last decade the equine population has shown a declining trend in the region. The equine population in this region is 23,000 (Livestock Census 2012). Although the horse population is limited, this region has two important registered breeds, viz. the Manipuri pony and the Bhutia horse.
Manipuri pony: Locally named as Meitei Sagol is found mainly in Manipur and different parts of Assam. They are descendants from Asian wild horse. Manipuri ponies are intelligent, extremely tough with tremendous endurance and reared in semi wild system. It has 11-13 hands wither height with a good shoulder, short back and well developed quarters (Gupta et al. 2012). Manipuri pony are extensively used for polo game throughout the world. They are also utilized for transportation, hunting and racing.
Bhutia: It is a small-sized mountain horse, also known as the Bhotia pony or Bhote-Ghoda, mostly found in Sikkim and Arunachal Pradesh. A short neck, large head with pronounced jaw and very strong short legs are the characteristics of this breed (Gupta et al. 2012). Bhutia ponies are well known for their unnerving habit of always keeping to the extreme edge of a mountain path while moving, so as to avoid bumping the luggage carried on either side of their body against the cliff wall on the inner side.
Poultry genetic resources
The NE region of India is famous for different groups of chicken and duck breeds reared by farmers under traditional systems of management. The total poultry population in this region is around 43.53 million. With the exception of Manipur, Mizoram and Tripura, the poultry population in this region showed positive growth from 1997 to 2012 (Livestock Census 2012). Among the 19 registered chicken breeds of India, the NE region has 4 registered chicken breeds, viz. Chittagong, Daothigir, Miri and Kaunayen. This region also has the only registered duck breed of India, namely the Pati duck (NBAGR 2017). Besides these, a local duck breed known as the Nageswari duck is very popular in this region.
Miri fowl: These chickens are mostly found in the Dhemaji, North Lakhimpur, Sibsagar, Dibrugarh and Majuli districts of Assam. The name of the bird is derived from the Miri or Mising tribe, the tribal people who rear them. They are reared mostly for meat as well as eggs. The dressing percentage ranges from 65 to 74%. Average egg production is around 60 to 70 eggs per year (Vijh et al. 2005).
Daothigir: It is a chicken breed mostly distributed in the Kokrajhar, Bongaigaon, Barpeta, Dhubri and Nalbari districts of Assam. The name of the breed is derived from the name of a local plant called Thigir (Dillenia indica), whose flower colour resembles the plumage colour of these birds and whose flower shape resembles their comb. In the Bodo language 'Dao' means bird, and hence these birds are known as Daothigir. It is a dual-purpose breed kept for both egg and meat production. Annual egg production is about 60-70 (SAPPLPP 2013).
Kaunayen chicken: It is an indigenous chicken breed, locally known as Kaunayen/Kwakman/Koman, found mostly in the valley of Manipur. The word Kaunayen is a combination of two Manipuri words: 'Kauna' meaning kick/fighting and 'yen' meaning hen/poultry. An elongated body with a long neck and long legs is characteristic of this breed. Kaunayen birds are mainly used commercially for cock fighting because of their martial qualities (Vij et al. 2016).
Chittagong fowl: These birds are locally known as Malay, and are mostly distributed in Meghalaya and Tripura bordering Bangladesh. They are comparatively larger than other breeds of chicken in this region; the average body weights of cock and hen are 3.5-4.5 and 3-4 kg respectively (Yadav et al. 2017). They possess the characteristic features of a good game bird. Chittagong fowl are reared for both meat and egg production, and have cultural and economic significance.
Pati/Desi duck: The Pati breed of duck is distributed in different parts of Assam and constitutes about 85.6% of the total duck population of the state (Islam et al. 2002). Pati ducks are mainly reared under natural conditions and lay about 60 to 70 eggs annually. They are more resistant to disease and better acclimatized to the local environmental conditions (Islam et al. 2002).
Nageswari duck: These birds, locally named Nagi, are mainly distributed in the Barak valley of Assam bordering Meghalaya, Tripura, Mizoram and the neighbouring country Bangladesh. They are reared under a scavenging or free-range system with flock sizes ranging from 5 to 200 (Zaman et al. 2005). Adult ducks forage in the rice fields throughout the day and are confined at night in a bamboo house (called Ugartol). The average annual egg production is about 140 to 150 (Islam et al. 2002).
Other livestock genetic resources
Mithun: It is a unique bovine mostly found at altitudes varying from 300 to 3000 m above mean sea level in the sub-tropical rain forests of the NE region of India. Mithun are primarily a meat-type bovine with a long and massive body; the average body weight ranges from 400 to 500 kg (Gupta et al. 1999). Mithuns are associated with the social and cultural life of the people of this region. The total mithun population of the NE region is about 0.29 million, of which Arunachal Pradesh has the highest number (0.25 million) followed by Nagaland (0.03 million); apart from these two states, Manipur and Mizoram have about 10,000 and 3,000 mithun respectively (Livestock Census 2012). The mithun population in this region showed an increasing trend, with an increase of 12.5% from 2007 to 2012. Two distinct types of mithun, viz. Nagami mithun and Arunachali mithun, have been described by Verma (1996) based on their distinct habitat and geographical distribution in the NE region of India.
Yak: It is a multipurpose domesticated bovine found at high altitudes ranging from 3000 to 6000 m above mean sea level and is well adapted to cold weather. Yaks can even survive without food for several days without any appreciable adverse effect on their health (Arora et al. 1998). The yak has a relatively heavy head with a wide convex forehead and a long, narrow and slightly dished face (Nivsarkar et al. 1998). Yaks have two types of hair coat, an outer coarse hairy coat and an undercoat of fine woolly fibre. They are termed horse-tailed buffaloes because of their peculiar horse-like tail with long hair. In the NE region, yaks are found in the cold, humid mountains of Sikkim and Arunachal Pradesh. The yak population in this region is about 18,000, having decreased by 5.56% from 2007 to 2012 (Livestock Census 2012). Two distinct types of yak, viz. Arunachali yak and Sikkimese yak, have been described by Pal et al. (1994) based on their geographical distribution in the NE region of India.
Utility of indigenous animals of NE region of India
In contrast to other parts of the country, livestock play an integral part in the social, cultural and economic livelihood, as well as the nutritional security, of the people of this region. These animals serve multiple purposes for their owners and form an important component of the farming system of the NE region of India. The indigenous animals are extensively used as draught power for different agricultural operations and as a means of transportation in the undulating hilly areas of this region. The manure of indigenous animals is in great demand as a source of bio-fertilizer for organic production, because this region is the emerging organic hub of the country (Rahman et al. 2009). Many of the indigenous animals of the NE region play an important role in the culture and religion of particular communities. For example, yak and mithun are essential to the social and cultural identity of the tribal people of this region. Ownership of mithun indicates the prosperity and social status of an individual in many tribes of Arunachal Pradesh (Gambo 2015). The Adi tribe of Arunachal Pradesh believes that sacrificing a mithun during a marriage ceremony will bring glory and blessings to the newly married couple (Nimasow et al. 2015). A mithun is also sacrificed at the time of death of a person to appease God for the peace of his/her soul (Nimasow et al. 2015). Exchange of live mithuns is the most common bride price among most of the tribes of Arunachal Pradesh and Nagaland in India (Verma 1996). Almost all body parts of the yak carry cultural and religious significance for the Monpa tribe. The skull of the yak is used for the writing of mantras and is kept at prominent places such as the Buddhist temple (Gonpa) and houses as a symbol of strength and safety (Norbua and Riba 2015). When a Tibetan girl marries a young herder, a yak is always given as dowry along with the bride (Wu Ning 2003). The meat and blood of the yak are also valued for their medicinal properties (Meyer 1976). Manipuri ponies are famous for the game of polo in this region. Buffalo bull fighting (MohJuj) is a prestigious game during the 'Bihu' festival of the Assamese community. Various food items such as Chilu (yak fat stored in an empty sheep stomach and used as edible oil) and Surpi (a product of yak milk) in Sikkim; Satchu (smoke-dried yak meat) in Arunachal Pradesh and Sikkim; Dohjem (a pork dish) and Tungrymbai (pork cooked with fermented soybean, sesame seed paste and chopped ginger) in Meghalaya; pork pickle in Nagaland; and Bongsha Rep
(smoked beef) in Mizoram are traditional foods of this region that are popular throughout the world (Kadirvel et al. 2018, Hazorika 2013). Curd prepared from local buffalo milk and duck meat are special food items during Bihu in Assam. In poultry, the Kaunayen breed of chicken of Manipur is used extensively for cock fighting, a very popular and valued game of this region (Vij et al. 2015). Traditional cultural instruments such as the Pepa made from buffalo horn, the Dhol made from animal skin, and trophies of different local animals reflect the need for and importance of farm animal genetic resources in the NE region of India. The indigenous animals of this region possess some unique and climate-resilient traits that distinguish them from the animals of other parts of the country. The local pigs possess long coarse bristles on their body coat to protect them from cold weather; the bristles are utilized for the preparation of different types of brushes, viz. painting brushes, carpet-cleaning brushes, grooming brushes for pet animals etc. (Mohan et al. 2014). The local pig attains sexual maturity at an early age, which increases lifetime productivity, and is also well adapted to the low-input hilly ecosystem (Kumaresan et al. 2007). The indigenous animals of this region have relatively short and strong legs, which help them to carry heavy loads in hilly, undulating areas. The milk of Assamese buffaloes, commonly known as Khuti milk, is in great market demand due to its high fat percentage and is often preferred for the preparation of curd and ghee, which fetch high prices (Alam et al. 2017). Twinning and triplet kidding in the Assam hill goat (Zeshmarani et al. 2007), the adaptability of the Doom pig to a group migratory scavenging system (Banik et al. 2016) and the specific milk composition of yak and mithun are the pride of the NE region of India.
Economic valuation of indigenous animals of North East India
At present, the economic valuation of a particular breed or animal is based on production, reproduction and/or economic traits, and the real economic value of the indigenous farm animals of this region is ignored. To judge the actual economic value, the utility and contribution of indigenous animals in every respect should be considered. Therefore, an economic valuation that includes lifetime productivity, social and cultural value, ecosystem maintenance, sustainability and productivity in low-input production systems, the market value of indigenous animal products, climate-resilient traits and the value of animal waste in organic production is of utmost importance for the indigenous farm animals of the NE region. Various methods of economic valuation of indigenous animals, drawn from environmental economics, are already available in the literature. These methods are broadly classified into three types (Drucker et al. 2001), viz. determining the actual economic importance of the breed/population, determining the appropriate cost of a conservation programme, and priority setting in breeding programmes. Implementation of these methods for economic valuation of the indigenous farm animal genetic resources of this region is a prerequisite and a useful component for protecting the farm animal biodiversity of the NE region of India.
Conservation of indigenous animals of North East India
The indigenous farm animals are the result of long evolutionary processes. However, their population sizes are declining because of genetic dilution due to crossbreeding, and they are facing degeneration. Some of the indigenous breeds or populations, such as Bonpala sheep, Manipuri pony, Doom pig and all the indigenous poultry breeds of this region, are at risk of extinction. Conservation programmes for the protection of indigenous animals are therefore urgently needed in this region of India.
Existing conservation policy: The government of India as well as the state governments of the North-Eastern states have implemented many schemes and policies for the conservation of indigenous animals. For example, the National Programme for Bovine Breeding (NPBB) was initiated in 2014 to conserve, develop and propagate selected indigenous bovine breeds of high socio-economic importance. The Assam Livestock Development Agency (ALDA), a state implementing agency of the NPBB, launched a programme for local swamp buffalo improvement and conservation with the establishment of a nucleus farm at Barhampur (Assam). Similarly, the National Livestock Mission was launched in 2014-15 for piggery development in the North Eastern region of India. The government of India sanctioned a project in 2011 for the conservation of Banpala sheep; accordingly, a nucleus farm was established in west for propagation of this threatened germplasm. The government of Manipur implemented a conservation policy for the Manipuri pony in 2016. This policy includes the development of a breeding tract for the Manipuri pony and a complete ban on crossbreeding of Manipuri ponies until the population stabilizes, with the participation of the local community. The policy also facilitates cryopreservation of semen from good pedigree stallions for ex situ conservation. In poultry, the Rural Backyard Poultry Development (RBPD) Programme was launched in the NE states in 1999-2000 for the conservation of local indigenous birds by increasing hatching and brooding facilities and strengthening poultry farms. However, conservation schemes or policies have been implemented for only a few indigenous breeds/populations of this region, so it is essential to establish suitable breeding policies and conservation strategies for each indigenous animal breed/population to protect the indigenous animal biodiversity of the North Eastern region of India.
Proposed conservation strategies: The conservation of indigenous farm animals should be carried out by two methods, viz. in situ conservation and ex situ conservation. For in situ conservation, each state department of the NE region should work in collaboration with central agencies for the identification, documentation and registration of each indigenous breed available in this region. The proposed strategy for conservation of indigenous animals of the NE region of India is given in Fig. 1. The state departments should establish organized/nucleus breeding farms for each indigenous breed/population at different places within their breeding habitat. Community participation plays a key role in in situ conservation of indigenous animals in this region because the majority of the land is community owned and most of the indigenous breeds are associated with a particular community. The state and central agencies should facilitate community-based conservation with sustainable and valuable use of indigenous animals in this region. This can be done by creating awareness and building the capacity of local communities through technical and institutional support mechanisms. The farmers/communities of this region should be provided with incentives to enhance their attention to and interest in rearing indigenous animals. Establishment of community breeding farms and supply of pure germplasm at the village level for the propagation of indigenous animals, development of community participatory business models to promote indigenous animal products, and entrepreneurship development through training on value addition in indigenous animal products, their packaging, labelling, branding and marketing will be prerequisites for community-based conservation in this region. Such community participation will provide long-term benefits for the conservation of indigenous animals of the NE region of India.
For ex situ conservation, the R&D institutes of this region, in collaboration with state and central agencies, should establish organized animal research stations for each indigenous animal breed at different locations for scientific study. In vitro conservation of genetic material such as cryopreserved semen, oocytes, embryos, stem cells, live tissue and cells for each indigenous breed of this region, with periodic evaluation, is warranted. The R&D institutes of this region, such as Assam Agricultural University, Central Agricultural University-Manipur and the ICAR-RC for NEH region, should create gene/DNA banks for the indigenous livestock and poultry breeds of this region for future use. These institutes should also conduct extensive research applying modern biotechnology to regenerate the endangered breeds of this region.
Conclusion
The farm animal genetic resources of this region possess unique features of socio-cultural significance to the local people that distinguish them from the farm animals of other parts of the country. However, the introduction of exotic germplasm and unrestricted crossbreeding have become a threat to many indigenous animals, resulting in a decline of their populations. Therefore, precise estimation and evaluation of the important economic and climate-resilient traits of indigenous germplasm, together with their genetic characterization, documentation and registration, are necessary. Implementation of strict policies by central and state agencies, institutional support mechanisms to promote community participation in in situ conservation, and application of modern biotechnological tools in ex situ conservation are urgently needed for the conservation of indigenous livestock and poultry in the north-east region of India.
Changes in salivary electrolyte concentrations in mid‐distance trained sled dogs during 12 weeks of incremental conditioning
Abstract Regular exercise improves the health status of dogs; however, extreme exertion in the absence of adequate fluid and electrolyte replacement may negatively impact health and performance due to dehydration and cardiovascular stress. Unlike humans and horses, dogs thermoregulate predominantly through respiration and salivation, yet there is a dearth of literature defining exercise-induced changes to canine salivary electrolytes. The study objective was to investigate the effects of exercise on salivary electrolyte concentrations, and to determine if adaptations may occur in response to incremental conditioning in client-owned Siberian Huskies. Sixteen dogs were used, with an average age of 4.8 ± 2.5 years and body weight of 24.3 ± 4.3 kg. A 12-week exercise regimen was designed to increase in distance each week, but weather played a role in setting the daily distance. Saliva samples were collected at weeks 0 (pre-run, 5.7 km), 5 (pre-run, 5.7, 39.0 km), and 11 (pre-run, 5.7, 39.0 km). Samples were analyzed for sodium, chloride, potassium, calcium, magnesium, and phosphorus using photometric and indirect ion-selective electrode analysis. When compared across weeks, sodium, chloride, potassium, and calcium concentrations did not differ at any sampling time point; however, phosphorus and magnesium concentrations increased from baseline. Data were then pooled across weeks to evaluate changes due to distance and level of conditioning. Sodium, chloride, and magnesium concentrations increased progressively with distance run, suggesting that these electrolytes are primarily being lost as exercising dogs salivate. Repletion of these minerals may assist in preventing exercise-induced electrolyte imbalance in physically active dogs.
These losses cause decreases in total body water and plasma volume and, depending on the type of fluid loss (e.g., hypotonic, isotonic, or hypertonic), can increase plasma osmolarity. The reduced blood volume lessens cardiac filling and stroke volume while increasing heart rate, resulting in a degree of cardiovascular stress greater than that caused by exercise itself (Cheuvront, Carter, & Sawka, 2003). This may result in reduced blood flow to heat dissipation sites (e.g., skin, respiratory tract including mouth), resulting in inadequate heat transfer to the environment and excessive increases in body heat storage (Geor, McCutcheon, Ecker, & Lindinger, 2000). This will compromise the ability to dissipate heat, further compounding the issues related to dehydration (Von Duvillard, Braun, Markofski, Beneke, & Leithäuser, 2004). For example, in dogs induced with extracellular hyperosmolality (an outcome of hypertonic dehydration), internal temperature increased by nearly 2°C during 1 hr of submaximal exercise, an increase that was ~0.5°C greater than reported in the control dogs (Kozlowski, Greenleaf, Turlejska, & Nazar, 1980). Unlike humans and horses though, dogs primarily thermoregulate by way of convection and radiation during thermal panting, as well as by conduction through the skin; though, the thermoregulatory role that conductive cooling plays for dogs is relatively minor, particularly when ambient temperatures are below 30°C (Hammel, Wyndham, & Hardy, 1958). Thus, dogs likely lose electrolytes by way of salivation (Blatt, Taylor, & Habal, 1972; Ermon, Yazwinski, Milizio, & Wakshlag, 2014; Villiger et al., 2018); however, fluid losses that occur during thermal panting are considered "insensible" and consequently can be challenging to measure. It has been reported, though, that when dogs perform aerobic exercise in a state of hypertonic dehydration, they will preserve body water by greatly reducing the rate of fluid loss via salivary secretions (Baker, Doris, & Hawkins, 1983).
Exercise in the absence of adequate fluid and electrolyte replacement can result in hypertonic dehydration, where total body water (TBW) decreases while blood plasma osmolality and the concentration of sodium (Na) in the extracellular (interstitial and plasma) fluid increase. Measures of both blood plasma osmolality and plasma Na concentrations are regularly used as biomarkers for evaluating hydration status. Reports from human studies indicate that saliva osmolality also increases with progressive dehydration, similar to those changes seen in plasma; however, these changes may be influenced by the type of dehydration or fluid loss (Villiger et al., 2018). Comparatively, researchers cannulated the submaxillary gland of the dog and demonstrated that electrolyte minerals are concentrated into canine saliva in amounts approaching those found in normal blood plasma, as acinar cells of the submaxillary glands concentrate electrolytes (e.g., Na and potassium, K) directly from plasma into the saliva (Henriques, 1961). Furthermore, it has been hypothesized that for humans, changes in salivary electrolyte profiles, such as increased concentrations of Na and chloride (Cl), may be related to the level of work achieved during aerobic exercise (Chicharro et al., 1994). While these reports together may help to explain the link between exercise-induced dehydration and changes in salivary electrolyte concentrations, there remains a dearth of literature reporting the effects of exercise on salivary electrolyte concentrations in dogs.
In order to better understand, and ultimately reduce, the risks associated with electrolyte imbalance in exercising dogs, it is essential to further our understanding of changes in salivary electrolyte concentrations. As such, the objective of this study was to evaluate exercise-induced changes in canine salivary electrolyte concentrations, and to identify the electrolytes that are primarily lost in saliva during short and extended bouts of exercise. We hypothesized that salivary concentrations of Na and Cl would increase, and that concentrations of salivary K and calcium (Ca) would decrease, in response to exercise. Additionally, we hypothesized that the concentrations of Na and Cl would continue to increase as the duration of exercise increases.
Animals and housing
The present experiment was approved by the University of Guelph's Animal Care Committee (animal use protocol # 4008). Sixteen client-owned domestic Siberian Huskies (nine females: four intact, five neutered; seven males: two intact, five neutered), with an average age of 4.8 ± 2.5 years and body weight (BW) of 24.3 ± 4.3 kg (mean ± standard error, SE), were used in the study. Dogs were housed and trained at an off-site, privately owned facility (Rajenn Siberian Huskies, Ayr, ON) that had been visited and approved by the University of Guelph's Animal Care Services. During the study, dogs were pair or group-housed in free-run, outdoor kennels that ranged in size from 3.5 to 80 square meters and contained between 2 and 10 dogs each. Two dogs were removed from the trial (one on week 7, one on week 9) due to exercise-related injuries; all data collected up until their respective points of removal were included in this report.
Diets and study design
For 2 weeks prior to the study period, all dogs were acclimated to a dry extruded kibble diet (Champion Petfoods LT.) that met or exceeded all National Research Council (2006) and Association of American Feed Control Officials (2016) nutrient recommendations and fed at intake levels predetermined from their historical feeding records. For additional information regarding the nutrient content of the diet as well as the ingredient deck, refer to Templeman et al. (2020). During the acclimation period and throughout the entire trial, dogs were fed once daily at 1700 hr. Initial BW were recorded at week -1 and thereafter, BW was measured weekly and food allotments were adjusted to maintain the dogs' week -1 BW. At feeding, all dogs were fed individually to allow adequate monitoring of food intake. Any orts were weighed and recorded daily. Throughout the study, all dogs were allowed ad libitum access to fresh water.
A 12-week exercise regimen was proposed whereby exercise intensity and duration would increase incrementally. However, decisions regarding the distance run each day and the number of stops (e.g., for water) were made with consideration of the ambient temperature and humidity. The average daily run distance (average of 4 days of running) for each week was as follows: 6.9, 12.9, 17.5, 23.8, 31.0, 37.2, 42.2, 30.0, 31.3, 34.5, 31.6, and 34.2 ± 4.4 km (mean ± SE) for weeks 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, and 11, respectively (Templeman et al., 2020). All training bouts commenced at approximately 08:00 hr. On the days of saliva collection, the exact distances run were as follows: 5.7 km for week 0, 39.0 km (week 5), and 39.0 km (week 11). All saliva sampling days occurred on the same day (Monday, first training day of the week) for each week (0, 5, and 11). Training consisted of dogs running on a standard 16-dog gangline. The gangline was attached to an all-terrain vehicle with one rider who controlled the machine in its lowest gear. A pace of ~15 km/hr was maintained throughout the training period. Running pace and distance travelled were measured using a digital speedometer and odometer on the all-terrain vehicle. Dogs were provided with water approximately every 10 km and were always provided with water immediately following a bout of exercise, or immediately following a post-run saliva sampling. For additional details regarding the anticipated run distances for the proposed incremental exercise regimen, refer to Templeman et al. (2020).
Sample collection and analysis
Using sterile gauze and forceps, saliva samples were collected from all dogs at week 0 (pre-run, 5.7 km), week 5 (pre-run, 5.7, 39.0 km), and week 11 (pre-run, 5.7, 39.0 km). Samples were collected by rolling gauze around forceps and positioning the gauze under the dog's tongue and/or throughout the lining of the cheek within the buccal cavity for 30 s per sample (German, Hall, & Day, 1998). The gauze was then transferred to a sterile 50 ml centrifuge tube (Thermo Fisher Scientific) and samples were centrifuged at 3,500 g for 30 min at 4°C (Lavy, Goldberger, Friedman, & Steinberg, 2012;Tenovuo, Illukka, & Vähä-Vahe, 2000) using a Beckman J6-MI centrifuge (Beckman Coulter).
An estimation of osmolarity was calculated according to Rasouli (2016) using the following equation: (number of) osmoles = n' × (number of) moles, in which the unit of n' is milliequivalent (mEq) per mmol, and n' is defined as the number of mEq of particles produced during solvation of 1 mmol of solute. Each of the electrolytes presented herein dissolves in water without ionization (n' = 1). Osmolarity was calculated based on the salivary electrolyte concentrations at each week, and at each sampling time point; however, it should be noted that this estimation is calculated without accounting for bicarbonate (HCO3) which, along with PvCO2, was reported to decrease in dogs subjected to short bouts of strenuous exercise (Robbins, Ramos, Zanghi, & Otto, 2017).
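As a worked illustration of this estimate, when n' = 1 mEq/mmol for every solute the calculation reduces to summing the measured electrolyte concentrations. The following minimal Python sketch is hypothetical; the concentration values are placeholders and are not data from this study.

```python
# Hypothetical sketch of the osmolarity estimate described above: with n' = 1
# for every electrolyte, estimated osmolarity is the sum of the measured
# concentrations (mEq/L). Bicarbonate is ignored, as noted in the text.

def estimated_osmolarity(concentrations_meq_per_l):
    """Sum electrolyte concentrations (mEq/L); with n' = 1 this equals mOsm/L."""
    return sum(concentrations_meq_per_l.values())

sample = {"Na": 30.0, "Cl": 25.0, "K": 20.0, "Mg": 1.5, "P": 4.0}  # placeholder values
print(f"Estimated osmolarity: {estimated_osmolarity(sample):.1f} mOsm/L")
```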
Statistical analysis
Data were analyzed using PROC MIXED of SAS (v.9.4; SAS Institute Inc.). Dog was treated as a random effect, and week and sampling time point were treated as fixed effects.
Week was treated as a repeated measure and means were separated using the Tukey adjustment. The data were also pooled across weeks and analyzed using PROC MIXED of SAS with dog treated as a random effect and sampling time point treated as a fixed effect and as a repeated measure. Significance was declared at a p ≤ .05.
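For readers who prefer an open-source workflow, an analogous model can be sketched in Python. This is illustrative only and is not the SAS code used in the study; the file and column names (dog, week, time_point, sodium) are assumptions, and a random intercept per dog only approximates the repeated-measures covariance structures available in PROC MIXED.

```python
# Analogous mixed-model analysis sketched in Python rather than SAS PROC MIXED.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("saliva_electrolytes.csv")  # hypothetical long-format data

# Week and sampling time point as fixed effects, dog as a random effect.
model = smf.mixedlm("sodium ~ C(week) * C(time_point)", data=df, groups=df["dog"])
fit = model.fit(reml=True)
print(fit.summary())

# Pooled across weeks: Tukey-adjusted pairwise comparisons by sampling time point.
print(pairwise_tukeyhsd(df["sodium"], df["time_point"], alpha=0.05))
```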
RESULTS
Pre-run concentrations of P on week 5 were greater than weeks 0 or 11, and on week 11 were greater than week 0 (p ≤ .05; Table 1a). Concentrations of P at the 5.7-km sampling time point were greater at weeks 5 and 11 compared to week 0 (p ≤ .05; Table 1b). Concentrations of Mg at 5.7 km were greater at week 11 than week 0 (p ≤ .05); however, levels at week 5 did not differ from either week 0 or 11 (p > .05; Table 1b). At the 39.0-km sampling time point, Mg concentrations at week 11 were greater than week 5 (p ≤ .05; Table 1c). Salivary concentrations of Ca, Na, Cl, and K did not differ with week for any of the sampling time points (p > .05; Tables 1a-c).
Data were then pooled across week to evaluate changes due to run distance. Pre-run P and Ca concentrations were greater than at 5.7 and 39.0 km (p ≤ .05), but concentrations at 5.7 and 39.0 km did not differ from each other (p > .05; Table 2). Magnesium concentrations at 39.0 km were greater than at 5.7 km, and at 5.7 km were greater than at the pre-run sampling time point (p ≤ .05; Table 2). Pre-run Na concentrations were lower than at 5.7 and 39.0 km (p ≤ .05), but concentrations at 5.7 and 39.0 km did not differ from each other (p > .05; Table 2). Chloride concentrations at 5.7 km were similar to pre-run and 39.0 km; however, Cl concentrations at 39 km were greater than pre-run concentrations (p ≤ .05; Table 2). Pooled K concentrations did not differ with any sampling time point (p > .05; Table 2).
Saliva osmolarity was calculated using the combined mEq of Na, Cl, K, Mg, and P at each sampling time point for each week. No differences were observed within any sampling time point across any week (p > .05); however, differences were observed across sampling time points within week (Table 3). In week 0, estimated osmolarity at 5.7 km was significantly greater than at the pre-run time point, with an increase of more than 15% (p ≤ .05; Table 3). At week 5, estimated osmolarity did not differ between the pre-run and 5.7-km sampling time points (p > .05), but osmolarity at 39 km was greater than at both pre-run and 5.7 km (p ≤ .05; Table 3), with increases of approximately 13.5% and 15%, respectively. Finally, by week 11, no differences in estimated osmolarity were observed across any sampling time point (p > .05).
[Table 1. Mean electrolyte concentrations (±SE) across weeks of training (0, 5, and 11) at the pre-run (a), 5.7 km (b), and 39.0 km (c) sampling time points.]
DISCUSSION
To the best of our knowledge, this is the first study to report salivary electrolyte concentrations in exercising dogs while investigating how the duration of exercise and physical conditioning affects these concentrations. The data presented herein indicate that when dogs participate in aerobic exercise, salivary concentrations of Na, Mg, and Cl increase, suggesting that these are the electrolyte minerals primarily lost in saliva during a bout of exercise. Moreover, electrolyte concentrations appear to change depending on the duration of a bout of exercise (e.g., Cl and Mg), suggesting that duration or intensity of aerobic exercise may influence changes in salivary electrolyte concentration. This indicates that electrolyte repletion may help prevent exercising dogs from entering states of electrolyte imbalance, especially dogs participating in extended and/or repetitive bouts of aerobic exercise.
Due to constraints related to the study design and in-field sample collection, the effects of dehydration and changes in salivary production could not be directly measured; however, these pilot data will hopefully provide the groundwork necessary for future studies investigating the effects of exercise on canine salivary electrolytes. The calculated osmolarity data (Table 3) indicate that the level of pre-exercise hydration for these dogs did not differ throughout the 12-week trial period, but that signs of hypertonic dehydration were present at the end of the runs in week 0 and week 5. Of interest is a possible adaptive response to conditioning, as calculated osmolarity was elevated at 5.7 km in week 0, but not in subsequent weeks, and calculated osmolarity was then elevated at 39 km in week 5, but not subsequently in week 11. While the mechanism(s) underlying these reductions in salivary osmolarity remain unknown, the evidence suggests that the dogs had adapted to the short bout of exercise within 5 weeks, and to the extended bout of exercise within 11 weeks. These data provide evidence of dehydration in exercising dogs without having measured changes in body mass, and also indicate that as dogs become physically conditioned, they appear to employ adaptive measures to reduce the severity of hypertonic dehydration induced by either short or extended bouts of aerobic exercise.
In humans subjected to increasing levels of aerobic exercise, secretion of certain salivary electrolytes displays a biphasic response. At low-to-moderate levels of exercise, concentrations of Na, Cl, and K remain relatively unchanged (comparable to what was reported in other studies: Dawes, 1981; Rutherfurd-Markwick, Starck, Dulson, & Ali, 2017; Shannon, 1967); however, once a certain work rate is achieved, Na and Cl concentrations in saliva increase dramatically while K remains stable (Chicharro et al., 1994). A similar pattern was observed in the current study, as Na and Cl concentrations increased during exercise while K concentrations stayed constant. As well, the Cl concentration increased only once the dogs were subjected to an extended bout of exercise, suggesting that, as with humans, salivary Cl concentrations may also follow a biphasic response depending on the work rate. This may also indicate that the work rates reached in previous studies with humans were simply not high enough to elicit a change in salivary electrolyte concentrations. Convertino, Keil, Bernauer, and Greenleaf (1981) reported that a minimum intensity of 40% VO2 max was required to elicit a change in plasma osmolality in humans, supporting the potential of a relationship between electrolytic shifts and the intensity of work performed. Or perhaps, this biphasic response of salivary electrolytes to level of work may have merely been a function of the study subjects achieving differing degrees or types (hypotonic, isotonic, hypertonic) of dehydration. While dehydration was not analytically evaluated in either the current study or the work by Chicharro et al. (1994), based on calculated osmolarity in the current study, it appears as though this response (or perhaps, the degree/type of dehydration) may be influenced not only by duration of exercise (e.g., work rate), but also by degree of aerobic conditioning. At baseline, a significant increase in salivary osmolarity was observed with only a short bout of exercise; however, after the dogs were subjected to 5 weeks of aerobic conditioning, equivalent changes in osmolarity were only evident following an extended bout of exercise. Moreover, as the dogs progressed further into the conditioning regimen (by week 11), no changes in salivary osmolarity were observed after either short or extended bouts of exercise. In the future, a comparative analysis of salivary electrolyte concentrations and osmolarity in dogs subjected to varying durations of aerobic exercise over an extended conditioning period is warranted to confirm whether the duration of a single bout of exercise, or the degree of aerobic conditioning, affects the type or degree of dehydration that is occurring, and to identify the influence those changes may have on salivary electrolyte profiles.
In exercising horses exposed to cool and dry ambient conditions, similar patterns were reported to that of Chicharro et al. (1994), with sweat osmolality, as well as Na and Cl concentrations increasing as a bout of exercise persisted (McCutcheon, Geor, Hare, Ecker, & Lindinger, 1995). The exercise induced increases in sweat osmolality were paralleled by an increase in sweating rate (McCutcheon et al., 1995), and a similar relationship has been identified between saliva flow rate and salivary electrolytes in humans. Using stimulated parotid saliva, Na and Cl concentrations have been reported to increase with saliva flow rate while K was unaffected (Asking & Emmelin, 1985;Henriques, 1961;Thaysen, Thorn, & Schwartz, 1954). Recent studies in humans have demonstrated that sympathetic stimulation during submaximal exercise may actually decrease saliva flow rate; however, this appears to occur simultaneously with an increase in the concentration of some salivary electrolytes such as Na and Mg (Chicharro et al., 1999). Though, it should be noted that the sympathetic response seen in humans may vary from those of dogs due to considerable differences in their cardiovascular capacity (Poole & Erickson, 2011). In order to better understand the mechanisms behind these exercise duration-induced changes in electrolyte concentration, future studies should attempt to evaluate salivary flow rate as well as salivary electrolyte concentrations under similar conditions in exercising dogs.
The compartmentalization of electrolytes may also, in part, explain the differences observed in salivary concentration depending on duration of exercise in the current study. Sodium and Cl are primarily extracellular ions, while Mg, Ca, and K are largely found within cells. During exercise in the absence of sufficient fluid replacement, plasma volume decreases as plasma osmolality and concentration of extracellular fluid Na simultaneously increase, resulting in the onset of hypertonic dehydration (Villiger et al., 2018). The osmolarity data in the current study suggest that postexercise signs of hypertonic dehydration were present at weeks 0 and 5, indicating that even though the dogs had access to water, the intake may not have been enough to offset fluid losses. As these shifts in plasma volume, plasma osmolarity, and extracellular Na occur, fluid is mobilized from the intracellular to the extracellular space in an effort to maintain the extracellular and circulating volume (Nose, Mack, Shi, & Nadel, 1988), suggesting that prolonged exercise may cause greater shifts in intercompartmental fluid and electrolyte movement. Since the acinar cells-the secretory units of the submaxillary gland-facilitate the shift of both water and electrolytes from the plasma into the salivary secretion, the concentrations of salivary electrolytes closely reflect the plasma concentrations (Henriques, 1961;Villiger et al., 2018). As the concentration of Na in the extracellular compartment steadily increases, this theoretically diminishes the necessity for water movement from plasma to the saliva; therefore, alongside the decrease in salivary flow rate, this causes an increase in saliva osmolality as well as protein concentrations (Muñoz et al., 2013;Walsh, Montague, Callow, & Rowlands, 2004). Since changes observed in salivary and plasma osmolality are linked, future research should aim to monitor these parameters simultaneously in response to exercise.
Studies have investigated the exercise-induced changes to plasma electrolyte concentrations in sled dogs (Ermon et al., 2014; Hinchcliff, Reinhart, Burr, Schreier, et al., 1997); however, these studies were done during multi-day, extreme distance races (e.g., 300+ mile races) and under in-field conditions where elements related to diet, water, and supplement intake can be difficult for researchers to control. Ermon et al. (2014) followed teams of Alaskan Huskies during the multi-day ~1,000-mile Yukon Quest race and reported mild, yet significant decreases in plasma Na concentrations and unchanged K concentrations. However, approximately 30% of dogs on study were receiving dietary NaCl supplements, a finding that likely contributed to the lack of differences observed (Ermon et al., 2014). In fact, the authors of both studies (Ermon et al., 2014; Hinchcliff, Reinhart, Burr, Schreier, et al., 1997) relied on musher reports for diet and water intakes of the dogs involved, and it is possible that these reports were not accurate. Hinchcliff, Reinhart, Burr, Schreier, et al. (1997) reported that plasma Na and K concentrations decreased, and Cl concentrations remained unchanged, in conditioned Alaskan Huskies following a 70-hr 300-mile race. Hyponatremia was reported in response to multi-day running in cold environments, which contrasts with the results of the present study; however, this appears to be related to the timing of blood samples with respect to the ingestion of "watered" foods. While this suggests that dogs may be able to rehydrate during rest periods (which is supported by Greenleaf et al. (1976), who reported a decrease in the exercise-induced rise in internal temperature within 5 min of water consumption), it does not address the dehydration that may have occurred during the extended periods of exercise itself. The serum values for the competing dogs also indicate that dehydration may have been present prior to exercising, as prerace concentrations of Na and Cl were either at or above the highest reference values for healthy adult dogs. Hinchcliff, Reinhart, Burr, Schreier, et al. (1997) report prerace concentrations for Na and Cl of 148.5 and 116 mEq/L, respectively, while the reference ranges for these electrolytes in dogs are 137-149 and 99-110 mEq/L, respectively (Campbell & Chapman, 2000). Ultimately, these reports exemplify the challenges associated with controlling conditions in multi-day races and the importance of the timing of blood collections.
In the future, evaluating plasma and salivary electrolyte concentrations simultaneously in dogs subjected to various levels of exercise while being maintained in a controlled environment (e.g., control over diet, food/water intake, exercise) may provide data to support the findings presented herein. These additional data may also help to further our understanding of the fluid and electrolyte shifts that occur between the extracellular and intracellular fluid compartments during exercise. Additionally, since the ability to dissipate heat is hindered by dehydration (Kozlowski et al., 1980;Walsh et al., 2004), researchers should also consider monitoring changes to internal body temperature (e.g., utilizing rectal temperature probes) so as to evaluate how various types or degrees of dehydration and electrolyte imbalance may affect internal temperature during single or repetitive bouts of exercise.
Ultimately, physically active dogs may benefit from an electrolyte supplement prior to, during, and/or after extended bouts of exercise in order to replenish electrolytes lost via exercise-induced salivation. Furthermore, ingestion of electrolyte-supplemented fluids prior to a bout of exercise may also assist in the maintenance of lower peak core body temperatures in dogs performing rigorous physical activity, particularly in warmer ambient conditions (Niedermeyer et al., 2020). Additionally, when formulating an electrolyte supplement, other compounds that may aid in mineral absorption and exercise recovery (e.g., glucose, citrate, prebiotics) should be considered aside from just the electrolyte minerals lost during exercise. For example, both glucose and citrate have been reported to increase the absorption of Na (Ermon et al., 2014;Patra, Rahman, Wahed, & Al-Mahmud, 1990), while inulin-based prebiotic fibres have been shown to increase solubility of minerals such as Ca and Mg in the gut (Legette et al., 2012;Schuchardt & Hahn, 2017). Finally, the addition of flavorings or palatants to electrolyte-enriched fluids should be considered as they may play a role in increasing the consumption and/or acceptance by physically active dogs .
IMPLICATIONS
The data presented herein indicate that salivary concentrations of Na, Mg, and Cl increase during exercise, suggesting that these are the electrolytes primarily lost in saliva by exercising dogs. As well, it appears as though changes in the concentrations of select electrolytes (e.g., Cl and Mg) may be related to the duration or intensity of aerobic exercise performed, and as such physically active dogs may benefit from electrolyte repletion alongside basic hydration strategies. While much still remains unknown regarding processes such as how electrolyte mineral compartmentalization may influence secretion of electrolytes into saliva, these data provide the groundwork necessary to move forward in the investigation of production and turnover of canine salivary electrolytes during exercise. Further work is necessary, though, to better our understanding of fluid and electrolyte shifts in exercising dogs and also to evaluate the effectiveness of electrolyte repletion. Follow-up studies should aim to include a comparative analysis of plasma and salivary electrolyte concentrations and osmolarity, an evaluation of internal temperature in response to various levels of aerobic exercise in dogs, and a quantification of physical conditioning that aligns with the aforementioned parameters.
DISCLOSURES
M.I.L. works for The Nutraceutical Alliance (Burlington, ON, Canada). J.R.T., N.M., and A.K.S. declare that they have no conflicts of interest. The project was funded by the start-up money received by A.K.S from the University of Guelph.
AUTHORS' CONTRIBUTIONS
J.R.T., M.I.L., and A.K.S. designed the research. J.R.T. and N.M. conducted the research, and all authors analyzed the data and wrote the manuscript. A.K.S. had primary responsibility for the final content. All authors read and approved the final manuscript.
Universal SARS-CoV-2 Testing of Emergency Department Admissions Increases Emergency Department Length of Stay
Study objective Our institution experienced a change in SARS-CoV-2 testing policy as well as substantial changes in local COVID-19 prevalence, allowing for a unique examination of the relationship between SARS-CoV-2 testing and emergency department (ED) length of stay. Methods This was an observational interrupted time series of all patients admitted to an academic health system between March 15, 2020, and September 30, 2020. Given testing limitations from March 15 to April 24, all patients receiving SARS-CoV-2 tests were symptomatic. On April 24, testing was expanded to all ED admissions. The primary and secondary outcomes were ED length of stay and number needed to test to obtain a positive, respectively. Results A total of 70,856 patients were cared for in the EDs during the 7-month period. The testing change increased admission length of stay by 1.89 hours (95% confidence interval 1.39 to 2.38). The number needed to test was 2.5 patients and was highest yield on April 1, 2020, when the state positivity rate was 39.7%; however, the number needed to test exceeded 170 patients by Sept 1, 2020, at which point the state positivity rate was 0.5%. Conclusion Although universal SARS-CoV-2 testing of ED admissions may meaningfully support mitigation and containment efforts, the clinical cost of testing all admissions amid low community positivity is notable. In our system, universal ED SARS-CoV-2 testing was associated with a 24% increase in admission length of stay alongside the detection of only 1 positive case every other day. Given the known harms and risks of ED boarding and crowding, solutions must be developed to support regular operational flow while balancing infection prevention needs.
INTRODUCTION
Despite expanding SARS-CoV-2 testing resources and availability, COVID-19 continues to spread and remains a persistent public health threat. Given evidence of viral transmission by asymptomatic persons, pandemic mitigation efforts necessitate identification of asymptomatic individuals. Depending on local disease prevalence, rates of asymptomatic patients testing positive range from 1% to as high as 30%. 1 Within hospitals, identification and isolation of asymptomatic individuals with SARS-CoV-2 has garnered much attention as a method to prevent outbreaks, reduce bed transfers of cohorted patients, and allay fears of hospital-acquired COVID-19. Universal preadmission patient testing continues to be a challenge. Similarly to influenza, identifying which patients require isolation facilitates early cohorting of infected patients, which helps to limit staff and patient exposure and allows more efficient bed management. 2,3 A similar approach has been employed in emergency departments with regard to COVID-19. However, in low-prevalence areas, this approach may unnecessarily delay care, given that most molecular tests utilized in hospital-based EDs require extended turnaround times. As ED volumes continue to rebound toward pre-COVID-19 numbers, further extending length of stay exacerbates ED crowding and boarding, both of which have been associated with poor outcomes. 4,5 Specifically, our institution transitioned from symptomatic to universal SARS-CoV-2 screening of ED admissions congruent with a community prevalence that changed from one of the highest in the nation in March 2020 to one of the lowest 7 months later. We examined the association between ED-based SARS-CoV-2 screening approaches and ED length of stay.
Editor's Capsule Summary
What is already known on this topic
COVID-19 surges have overwhelmed emergency departments at different times during the pandemic, but it is unclear how universal testing impacted patient throughput once it was available.
What question this study addressed
This study looked at the influence of universal testing in one academic ED during differing levels of community prevalence throughout a 6-month period in 2020.
What this study adds to our knowledge
Even with greater testing capability, COVID testing of all ED hospital admissions led to patient throughput delays.
How this is relevant to clinical practice
Balancing larger public health needs with the judicious ordering of tests is necessary to meet the needs of the patient population. Universal COVID testing can delay care with variable yield based on community prevalence at the time.
Study Design and Sample Acquisition
This was an observational interrupted time series of all ED patients seen in a tertiary care health system composed of an academic, community, and freestanding ED with a combined total annual visit volume exceeding 190,000 patients. Functionally, patients could transfer between health care system hospital sites based on inpatient bed availability. Processes for admission were the same across all sites, with the only exception being that patients admitted from the freestanding ED were transferred to one of the other 2 sites on bed availability. Of note, these EDs do not have observation units, and patients in observation are managed by inpatient teams. Given this is a billing distinction without effect on bed assignment, observation or inpatient status was not subanalyzed. We constructed a data set inclusive of all ED timestamps, diagnoses, and SARS-CoV-2 tests from the institutional data warehouse between March 15, 2020, and September 30, 2020. Given testing limitations, from March 15, 2020, to April 24, 2020, only patients under investigation with lower respiratory tract infection symptoms, fever, or clinical suspicion for COVID-19 were tested. On April 24, testing was expanded from symptomatic patients to all ED admissions. Although there was greater overall test availability, rapid molecular tests were still limited and preferentially used for ED specimens.
SARS-CoV-2 Test Characteristics
RT-PCR testing was performed locally using either an emergency use authorized variation of the Centers for Disease Control and Prevention (CDC) protocol or GeneXpert Xpress (Cepheid). Internal validation data support the comparability of these assays, and a real-life application of these specific assays has been published. 6,7 Before universal testing, patients under investigation were admitted to an isolation floor, swabbed for SARS-CoV-2, and appropriately reassigned based on results. After universal testing was implemented, GeneXpert Xpress tests were prioritized for the ED, and other testing platforms were used only if Xpress was not available. Samples were not subject to batch loading. Xpress tests were run locally. Appendix E1 (available at http://www.annemergmed.com) shows average turnaround time by site.
Analyses and Outcomes
The primary analysis examined the relationship between ED testing strategy (symptomatic versus universal) and ED length of stay. The primary outcome was ED length of stay stratified between admitted and discharged patients. Consistent with national metrics, ED length of stay was defined as the time in minutes between ED arrival and ED departure. 8 Boarding length of stay was defined as the time in minutes from ED admit order to ED departure. We constructed autoregressive integrated moving average regression models (ARIMA) adjusting for ED census, ICU admissions, COVID-19 inpatient count, non-COVID-19 inpatient count, net hospital admissions, and week of testing. 9 ED census was used as a marker of ED crowding. 10,11 Hospital active capacity (as opposed to total overall beds) is dependent on staffing resources and can change unpredictably. The daily net hospital admissions were calculated as discharges subtracted from admissions as a proxy for daily changes in hospital capacity, which has been adapted from the Scottish Government, who used this metric to measure dynamic capacity of COVID-19 patients throughout the pandemic. 12 Of note, the 3 EDs were pooled because several operational processes, such as load balancing arrivals between EDs, cross campus ED transfers, and cross campus hospitalization, make length of stay a reflection of the total system and not a site-specific phenomenon. 13 However, site-specific analysis is available in Appendix E1. Ten of 200 (6.5%) data were imputed using Kalman filtering, as justified in prior literature for COVID-19 and ARIMA modeling. 14,15 The secondary analysis focused on diagnostic yield of ED screening for SARS-CoV-2. The diagnostic yield was measured as the proportion of ED SARS-CoV-2 tests returning a positive result. For this analysis, we report descriptive statistics of the diagnostic yield as well as the number needed to test at the weekly level. Furthermore, to provide context, we concurrently report the community prevalence of COVID-19 from publicly available information. 16,17 Data analysis was conducted using R version 3.6.3. This study was approved by the University Institutional Review Board.
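As an illustration of this modelling approach, an interrupted time series with a policy step indicator and exogenous covariates can be sketched in Python. This is not the analysis code used in the study (which was run in R); the file name, column names, and ARIMA order below are assumptions for demonstration only.

```python
# Illustrative interrupted-time-series sketch with ARIMA errors and exogenous regressors.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

daily = pd.read_csv("ed_daily_metrics.csv", parse_dates=["date"]).set_index("date")

# Step indicator for the universal-testing policy (effective April 24, 2020).
daily["universal_testing"] = (daily.index >= "2020-04-24").astype(int)

exog_cols = ["universal_testing", "ed_census", "icu_admissions",
             "covid_inpatients", "non_covid_inpatients", "net_admissions"]

# ARIMA order chosen for illustration only; the state-space filter also tolerates
# occasional missing outcome values, loosely paralleling Kalman-filter imputation.
model = SARIMAX(daily["admit_los_hours"], exog=daily[exog_cols], order=(1, 0, 1))
fit = model.fit(disp=False)
print(fit.summary())  # coefficient on universal_testing = estimated change in LOS (hours)
```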
Study Characteristics
A total of 70,856 patients were cared for in the EDs during the 7-month study period. There were 11,541 (16.3%) patients in the preuniversal testing period, and of these, 3,910 (33.9%) were admitted and 3,364 (86%) were symptomatic and tested. Of the patients seen after the policy change, 18,311 (30.9%) were admitted, and all were tested (Appendix E1).
Primary Outcome: Policy Change and ED Length of Stay
Given the setting of declining ED visits and rising ED admissions, we found significant effects of the universal testing policy on ED length of stay (adjusting for covariates) for admitted patients (Figure 1). 18 The universal testing policy was associated with a 1.89-hour increase in ED admitted length of stay (95% confidence interval [CI] 1.39 to 2.38) and represents a 24% increase in admission length of stay (full model, Appendix E1). Similarly, ED discharge length of stay increased by 0.19 hours (95% CI 0.09 to 0.3) after the policy change. Finally, ED boarding length of stay increased by 1.58 hours (95% CI 1.15 to 2.01).
Secondary Outcome: Number Needed to Test and Community Positivity Rate
With regard to the secondary outcome, boarding length of stay increased by 1.18 hours (95% CI 0.37 to 2.0). Finally, given the increase in ED length of stay with universal testing, we calculated the number needed to test to obtain a single positive test among ED admissions. The lowest number needed to test was 2.5 patients the week beginning April 1, 2020, when the state positivity rate was at its highest (39.7%), and the highest number needed to test exceeded 170 patients the week beginning September 1, 2020, concurrent with the lowest state positivity rate (0.5%) (Figure 2).
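The number needed to test reported above is simply the reciprocal of the positivity rate among tested admissions; the short sketch below makes the arithmetic explicit using made-up weekly counts chosen only to mirror the magnitudes in the text.

```python
# Number needed to test (NNT) = tests performed per positive result.
def number_needed_to_test(positive_tests: int, total_tests: int) -> float:
    return total_tests / positive_tests

# Hypothetical weekly counts for illustration only.
print(number_needed_to_test(positive_tests=40, total_tests=100))   # 2.5, as at the April peak
print(number_needed_to_test(positive_tests=4, total_tests=700))    # 175, similar to the September nadir
```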
LIMITATIONS
There are several limitations to this study. First, generalizability may be limited because this was a single-institution study; however, the conceptual framework of analyzing length of stay as a function of testing strategy can be applied elsewhere to inform policy decisions. Furthermore, the benefit of this universal testing policy is dependent on the COVID-19 burden in the region, so our results must be taken in the context of individual regional COVID-19 trends. Given the dates of the study, vaccines were not yet available, and if our hospital relaxed testing for asymptomatic, fully vaccinated individuals, the results may be more muted. To date, despite one of the highest vaccination rates and lowest infection rates in the country, we have not relaxed testing standards. However, we could expect that increased vaccinations are likely to only increase the number needed to test and, in turn, extend these effects on hospital flow. Finally, we did not explore patient-specific outcomes such as left without being seen, mortality, or clinical decompensation within this study.
DISCUSSION
As the COVID-19 pandemic evolves across the United States with heterogeneous community prevalence and variable access to testing resources in the ED, hospitals will increasingly face the need to make operational decisions balancing COVID-19 mitigation efforts with operational pressures. We found that the common operational change to universal screening was associated with a 24% increase in ED length of stay for admitted patients and that this delay in care was sustained as community prevalence of COVID-19 declined and health system resources rebounded.
Despite increased testing availability and the ED SARS-CoV-2 rapid test taking as little as 45 minutes laboratory time to complete, overall ED length of stay increased. This is due to inefficiencies related to manual and often fragmented processes around collection, transport, analysis, and bedding systems that have a large cumulative effect. Notably, specimens must be hand-delivered to the laboratory due to infection prevention concerns about using a pneumatic tube system. Of note, the clinical admission decision was not affected by the policy, and the influence of universal testing directly increased boarding times for admitted patients. Qualitatively, clinicians ordered this test early during clinical workups, but a delay in leaving the ED was still observed. Finally, despite allowing high-suspicion COVID-19 patients to be placed on a "COVID-19 Unit" before test results returned, prolonged ED length of stay was still observed.
We confirmed Ford et al's 1 findings of a low positivity rate among asymptomatic individuals across a broader time frame of 7 months (Appendix E1). There is likely a meaningful infection prevention benefit to universal screening of ED admissions; however, it must be balanced with potential harms of ED crowding and inefficient resource usage during times of low diagnostic yield. When examining the community positivity rate at the COVID-19 peak compared to its nadir against the number of ED patients needed to test to obtain a positive, the onerous effects are exponential when community prevalence is low. With 100 admissions per day across the EDs, community positivity of 0.5% translates to a number needed to test of nearly 170. Although extended ED length of stay may be warranted at times of high community prevalence and higher risk of within-hospital transmission, it is less clear at times of low prevalence when a positive test might be expected every other day. This number needed to test is likely to continue to rise as community vaccination rates improve and the prevalence rate of COVID-19 decreases alongside adherence to other nonpharmaceutical interventions (eg, social distancing). However, the effects of COVID-19 variants on viral prevalence are not yet fully understood, and loosening masking requirements and increasing occupancy limits may further promote community spread. Furthermore, our number needed to test estimates are likely conservative, given that false positives are more likely at times of low prevalence. A universal testing strategy is needed to identify asymptomatic individuals and can be possible without creating delays by exploring alternative policies related to bedding asymptomatic patients pending SARS-CoV-2 results or less-sensitive screening tests with confirmatory molecular tests, which has been successful at some institutions. 19 Unfortunately, the rapid testing approach with reflex to PCR was not commercially available until late 2020, given initial supply was bought by the federal government. 20 Additionally, early CDC guidance was to not use pneumatic tube systems for SARS-CoV-2 swab transport, despite such systems being used with other viral samples to significantly decrease turnaround times associated with sample movement from patient to the laboratory. 21,22

We recognize that simple symptom-based strategies may be too simplistic given the evolution of the pandemic with new variants, what we know about asymptomatic transmission, and widespread availability of new vaccines. Thus, symptom-based strategies might be explored for populations in which infection prevention strategies can be preserved without hindering length of stay, such as symptom-based testing for fully vaccinated individuals.
In conclusion, although universal COVID-19 testing of ED admissions may support mitigation and containment efforts, the clinical cost of testing all ED admissions, particularly amid low community prevalence, is notable. In our system, universal COVID-19 testing was associated with a nearly 3-hour increase in ED length of stay for all admitted patients alongside the detection of only 1 positive case every other day. Furthermore, as vaccination reduces community prevalence, innovative admission and infection prevention practices that can facilitate patient admission prior to SARS-CoV-2 testing results may offer a more practical solution to reducing patient exposure to the known harms and risks of ED crowding and boarding. Future research needs to include cost-benefit/effectiveness to better understand how to balance safety and patient
Analysis of apoB Concentrations Across Early Adulthood and Predictors for Rates of Change Using CARDIA Study Data
The cumulative exposure to apolipoprotein B (apoB)-containing lipoproteins in the blood during early adult life is a central determinant of atherosclerotic cardiovascular disease risk. To date, the patterns and rates of change in apoB through early adult life have not been described. Here, we used NMR to measure apoB concentrations in up to 3055 Coronary Artery Risk Development in Young Adults (CARDIA) Study participants who attended the years 2 (Y2), 7 (Y7), 15 (Y15), 20 (Y20), and 30 (Y30) exams. We examined individual-level spaghetti plots of apoB change, and we calculated average annualized rate of apoB concentration change during follow-up. We used multivariable linear regression models to assess the associations between CARDIA participant characteristics and annualized rates of apoB change. Male sex, higher measures of adiposity, lower HDL-C, lower Healthy Eating Index, and higher blood pressures were observed more commonly in individuals with higher apoB level at Y2 and Y20. Inter- and intra-individual variation in apoB concentration over time was substantial—while the mean (SD) rate of change was 0.52 (1.0) mg/dl/year, the range of annualized rates of change was −6.26 to +9.21 mg/dl/year. At baseline, lower first apoB measurement, female sex, White race, lower BMI, and current tobacco use were associated with apoB increase. We conclude that the significant variance in apoB level over time and the modest association between baseline measures and rates of apoB change suggest that the ability to predict an individual’s future apoB serum concentrations, and thus their cumulative apoB exposure, after a one-time assessment in young adulthood is low.
Supplementary key words: apolipoprotein B • epidemiology • change • young adults

Apolipoprotein B100 (apoB) is the primary structural protein and binding ligand for the atherogenic lipoprotein particles: VLDL, remnant, IDL, LDL, and lipoprotein (a) (Lp(a)) (1,2). Each atherogenic lipoprotein particle contains one molecule of apoB. Genetic polymorphisms at different loci that mediate lipid metabolism (i.e., LDLR, LPA, and others) and differences in LDL-receptor density, cholesterol and triglyceride (TG) exchange, insulin resistance, and lifestyle factors result in substantial interindividual differences in the number of apoB-containing particles for a given total mass of cholesterol and TG (3,4). Thus, measurement of apoB concentration captures the aggregate burden of atherogenic lipoprotein particles present in the blood more precisely than the measurement of the cholesterol concentration in LDL (LDL-C) or non-HDL (non-HDL-C).
The retention of apoB-containing lipoproteins in the subendothelial space, as well as the chronic inflammatory response to these particles, is central to the pathogenesis of atherosclerosis and subsequent atherosclerotic cardiovascular disease (ASCVD) events (5,6). ApoB blood concentrations have strong associations with subclinical atherosclerosis and incident ASCVD (7). Because of its ability to quantify the burden of atherogenic lipoprotein particle number more completely and accurately than non-HDL-C, multiple studies suggest that apoB is a superior marker of ASCVD risk, particularly in the 8-20% of people who have higher or lower than average apoB levels for a given non-HDL-C level (8). Nonetheless, other clinical markers of atherogenic lipoprotein burden like non-HDL-C and LDL-C are commonly used in clinical practice.

*For correspondence: John T. Wilkins, j-wilkins@northwestern.edu.
Since apoB is a causal determinant of atherosclerosis (9), and exposure to atherogenic lipoproteins across early adult life is a strong predictor of ASCVD risk (10,11), it is important to understand normative apoB concentrations in young adults, their patterns and rates of change through midlife, and the predictors of change. Furthermore, understanding how these rates of change compare to other commonly used measures of atherogenic burden may help inform the clinical utility of apoB measurement in young adults, as more stable measures may serve as more reliable markers of an individual patient's expected future burden of atherogenic lipoproteins, and they may perform better as long-term ASCVD risk markers in young adults. To date, the patterns and rates of change in apoB concentrations within the same individuals across young adulthood have not been described. Here, we report the first descriptions of the rates of intraindividual apoB change, as well as predictors of change, and we compare these rates of change for other commonly used measures of atherogenic lipoprotein burden, across early adult life using unique data from over 3,055 Coronary Artery Risk Development in Young Adults (CARDIA) study participants.
Study sample
The CARDIA study recruited 5,115 black and white men and women aged 18-30 years in 1985-1986 from four sites across the United States: Birmingham, AL; Chicago, IL; Minneapolis, MN; and Oakland, CA. Participants were sampled to achieve a cohort balanced by race (52% black and 48% white), sex (55% female and 45% male), education (40% with 12 years and/or younger, 60% with more than 12 years of education), and age (45% 18-24 years and 55% 25-30 years). CARDIA participants have undergone in-person examinations at baseline (year 0: Y0) and at Y2, Y5, Y7, Y10, Y15, Y20, Y25, and Y30 (12). Retention rates among surviving participants at each in-person examination have been high, at 91, 86, 81, 79, 74, 72, 72, and 71%, respectively. Contact is maintained with participants via telephone, mail, or e-mail every 6 months, with annual medical history ascertainment between in-person examinations. Over the last 5 years, >90% of the surviving cohort members have been directly contacted, and follow up for vital status is virtually complete through related contacts and intermittent National Death Index searches.
Inclusion criteria
Since we were interested in understanding the natural history of apoB concentration change across early adulthood, we included samples from participants who had samples available at Y20 and at least two of three from Y15, Y7, or Y2 (n = 3,055). For patients who met these inclusion criteria, we also included data from Y30 when it was available (N = 2,474). Of the 3,055, we measured apoB in all those with available serum or plasma samples at Y2 (N = 1,881), Y7 (N = 2,582), Y15 (N = 2,474), Y20 (N = 3,055), and Y30 (N = 2,474). The characteristics of included and excluded participants are presented in supplemental Table S2.
Traditional risk factor and lifestyle measurement
Age, race, and sex were determined via self-report. Height, weight, and waist circumference were measured with the participants in light clothing using a standardized stadiometer, tape measure, and calibrated scale; BMI (kg/m²) was calculated. Detailed dietary data are available for Y0, Y7, and Y20. Dietary data obtained at baseline were used to represent Y2 dietary patterns. The CARDIA dietary data were obtained via an interview-administered method that included a short questionnaire regarding general dietary practices followed by a comprehensive questionnaire about typical intake of foods (about 100 header questions such as "do you eat meat," followed by open-ended responses in answer to that header question). Detailed data on portion sizes, frequency of consumption, and common additives were also collected for foods that were regularly consumed (12,13). Dietary data were designed to reflect the habitual intake (past month) of CARDIA participants. From these diet history data, the Healthy Eating Index (HEI) at Y2 and Y20 was calculated and used to represent overall dietary quality (14).
At each CARDIA examination, participants were given the interviewer-administered Physical Activity (PA) History Questionnaire. In brief, participants were asked about self-reported leisure time activity and the frequency of participation in 13 specific PA categories (eight of vigorous and five of moderate intensity) of recreational sports, structured exercise, home maintenance, and occupational activities during the preceding 12 months. Intensity of each activity was expressed as metabolic equivalents (15,16). The PA score summed frequency times intensity over the 13 activities to obtain total activity (in exercise units).
Blood pressure was measured after 5 min of rest in the seated position using a random zero mercury sphygmomanometer, replaced from Y20 forward with an Omron oscillometer (calibrated to the random zero). The means of the second and third systolic and diastolic measurements were used. Alcohol intake, smoking habits, and educational attainment were determined with the use of standardized and validated questionnaires. Blood pressure- and cholesterol-lowering medication use was determined by self-report.
After a 12-h fast, blood was drawn from a vein in the antecubital fossa into a Vacutainer, coated with EDTA for plasma. Serum and plasma samples were obtained and stored at −80 °C for future analysis. Plasma samples were transported on dry ice to the Northwest Lipid Research Center in Seattle, Washington. Plasma concentrations of total cholesterol and TGs were measured using a standard enzymatic assay. HDL-C was quantified after precipitation with dextran sulfate-magnesium chloride on an ABA 200 Biochromatic instrument (Abbott Laboratories, North Chicago, IL). LDL-C was calculated using the Friedewald equation (17) and non-HDL-C by the difference between total cholesterol and HDL-C.
ApoB and LDL particle (LDL-P) were measured from frozen serum (Y2) and EDTA plasma (Y7, Y15, Y20, and Y30) samples by NMR spectroscopy at Labcorp (Morrisville, NC) using the high-throughput Vantera® Clinical NMR Analyzer platform. ApoB was quantified using partial least squares regression modeling of the lipid methyl and methylene spectral region as previously described (18). ApoB concentrations produced by this assay have been extensively validated as equivalent to those measured by immunoassay (R = 0.98), with precision and accuracy verified quarterly by blinded Centers for Disease Control and Prevention Lipids Standardization Program proficiency testing (18). LDL-P was quantified by NMR LipoProfile® analysis using the LP4 deconvolution algorithm (18,19).
Only serum was available for the Y2 samples. Because of well-described dilution effects that occur in EDTA plasma tubes (20), we applied a previously derived CARDIA-specific correction factor for total cholesterol of 0.9666 to the NMRderived apoB and LDL-P measures that were obtained from the Y2 serum separator tubes.
Statistical analysis
To describe the characteristics associated with different apoB concentrations in midlife, we stratified participants into quartiles of apoB concentration at Y20 and showed the participant characteristics by Y20 quartiles at the Y2, Y7, Y15, Y20, and Y30 exams. We compared demographics, anthropometrics, lifestyle behaviors, and traditional risk factor data across quartiles using ANOVA and Chi-square tests as appropriate.
To understand the shifts in apoB concentration distribution over time in CARDIA, we generated separate distributions of apoB at exam years 2, 7, 15, 20, and 30. To visualize individual-level change in apoB, LDL-P, LDL-C, and non-HDL-C concentrations, we generated spaghetti plots of the individual participant's lipoprotein concentrations across exams. Since the overall pattern of change appeared mostly linear, we calculated annualized rates of lipoprotein change. Individual annualized rates of change were calculated by subtracting the first measured value from the last measured atherogenic lipoprotein value (Y20 or Y30). This difference was then divided by the time between the first and last atherogenic lipoprotein measurements. We then calculated separate quartiles of atherogenic lipoprotein annualized rate of change, and Y2, Y20, and Y30 characteristics were compared across quartiles. We also created a distribution of atherogenic lipoprotein change as well as an individual-level waterfall plot of annualized rate of change. To compare intraindividual variability over time, we generated distributions of the percent annual change in each lipid measure.
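The annualized rate-of-change calculation described above can be sketched in a few lines; the pandas code below is illustrative only, and the file and column names (apob_y2 through apob_y30) are hypothetical.

```python
# Annualized rate of change: (last observed value - first observed value) / years between them.
import pandas as pd

wide = pd.read_csv("cardia_apob_wide.csv")   # hypothetical file: one row per participant
exam_cols = {"apob_y2": 2, "apob_y7": 7, "apob_y15": 15, "apob_y20": 20, "apob_y30": 30}

def annualized_change(row):
    observed = [(year, row[col]) for col, year in exam_cols.items() if pd.notna(row[col])]
    (first_year, first_val), (last_year, last_val) = observed[0], observed[-1]
    return (last_val - first_val) / (last_year - first_year)

wide["apob_rate"] = wide.apply(annualized_change, axis=1)
wide["rate_quartile"] = pd.qcut(wide["apob_rate"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
```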
We used linear regression models to assess the associations between characteristics and rates of apoB change. Models were adjusted as follows: model 1: first measurement of apoB to account for the level at which the participant's apoB began; model 2: model 1 + demographics; model 3: model 2 + commonly measured clinical characteristics at baseline (HDL-C, systolic blood pressure, BMI, blood pressure-lowering therapy, diabetes mellitus [DM] status, current tobacco use, PA [exercise units], serum glucose, and diet [HEI]); and model 4: model 2 + yearly averages (for characteristics not assessed at each exam) or cumulative values (for those characteristics assessed at all exams) of the commonly measured clinical characteristics used in model 3. Multicollinearity was tested between the selected independent variables using the variance inflation factor, and none was found.
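A schematic version of the staged models described above (the study itself used SAS 9.4) might look like the ordinary least squares sketch below, which reuses the hypothetical wide data frame from the previous sketch; the variable names and the abbreviated covariate list are illustrative, not the study's actual specification.

```python
# Staged regression models for predictors of annualized apoB change (illustrative).
import statsmodels.formula.api as smf

model1 = smf.ols("apob_rate ~ apob_first", data=wide).fit()
model2 = smf.ols("apob_rate ~ apob_first + age + sex + race", data=wide).fit()
model3 = smf.ols("apob_rate ~ apob_first + age + sex + race + hdl_c + sbp + bmi"
                 " + bp_meds + diabetes + smoker + pa_units + glucose + hei",
                 data=wide).fit()

print(model3.params)   # beta coefficients analogous to those reported in Table 5
```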
To assess the association of lipid-lowering therapy during follow-up, we repeated the analyses outlined above after removing all participants (N = 717) who reported taking lipid-lowering pharmacotherapy from examination Y2 through Y30 as a sensitivity analysis. Furthermore, more than 2,000 participants did not meet NMR analysis criteria and were not included in the primary analysis. To impute the missing factors, we conducted multiple imputation by chained equations with fully conditional specification using the SAS MI package (SAS Institute, Cary, NC) (21). NMR lipid measurements and related risk factors from all visits were included sequentially in the model specification. We excluded those participants who died prior to exam year 7 (two exam visits for annualized apoB change calculation), and finally, 5,066 participants were included in the imputation model. We created 10 imputed datasets, and the imputed values were set to missing after participants died. The distribution of covariates between the observed and imputed dataset was similar. Each regression analysis described above was performed separately in each of the imputed datasets, and the results were combined using Rubin's rules. SAS, version 9.4, was used for the analysis (22).
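For readers unfamiliar with Rubin's rules, the pooling step across the 10 imputed datasets amounts to averaging the per-imputation estimates and combining within- and between-imputation variance; the short sketch below illustrates the formula with hypothetical numbers and is not the SAS code used in the study.

```python
# Rubin's rules: pool one coefficient across m imputed datasets.
import numpy as np

def pool_rubin(estimates, variances):
    m = len(estimates)
    q_bar = np.mean(estimates)          # pooled point estimate
    u_bar = np.mean(variances)          # mean within-imputation variance
    b = np.var(estimates, ddof=1)       # between-imputation variance
    total_var = u_bar + (1 + 1 / m) * b
    return q_bar, np.sqrt(total_var)    # pooled estimate and its standard error

# Hypothetical per-imputation estimates and squared standard errors.
estimate, se = pool_rubin([0.41, 0.44, 0.39, 0.46, 0.42], [0.01] * 5)
print(estimate, se)
```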
Cohort participant characteristics
Participant characteristics at Y2 and Y20, stratified by quartile of apoB concentration at Y20, are presented in Tables 1 and 2, respectively. The mean (SD) apoB at the Y2 exam was 85.2 (20.2) mg/dl. At the Y2 exam, participants in the higher Y20 apoB quartiles were more likely to be men, have a higher BMI, greater waist circumference, higher blood pressure, and have lower HEI. Those in the higher Y20 apoB strata had lower HDL-C and higher Y2 apoB, LDL-C, non-HDL-C, and TG levels as well.
Participant characteristics at Y20, stratified by quartile of apoB concentration at Y20, are presented in Table 2. The mean (SD) apoB level was 95.9 (20.5) mg/dl at the year 20 exam. As was seen for the Y2 characteristics, when compared with those in lower quartiles of apoB level, CARDIA participants with higher apoB levels at Y20 exam (mean age = 45 years) were more likely to be men and have higher BMI, waist circumference, and blood pressures. Participants with higher apoB at Y20 were also more likely to have lower HDL-C and higher glucose. As expected, the cholesterol and TG concentrations in the apoB lipoproteins were higher as well. There were no significant differences in self-reported saturated fat intake, though the HEI was lower in higher apoB groups. There were no significant differences in self-reported PA level at Y20 across apoB quartiles.
The characteristics for exam Y7, Y15, and Y30 are presented in supplemental Table S1A-C. The associations between characteristics and apoB quartile were similar in Y7, Y15, and Y30 when compared with Y20, though the absolute differences in these characteristics at Y7 and Y15 were smaller than were observed at Y20 and Y30.
Distributions of apoB
The distributions of apoB stratified by race and sex across exam years 2, 7, 15, 20, and 30 are presented in
Correlations between apoB and other measures of atherogenic lipoproteins
As expected, the correlation coefficients for apoB and non-HDL-C, LDL-C, and LDL-P measured at the Y20 exam are 0.89, 0.87, and 0.94, respectively. The correlations are similar across exam years 2, 7, 15, and 30.
Intraindividual change in apoB over time
Spaghetti plots representing individual-level change over time for CARDIA participants, stratified by the decile of their first apoB concentration measurement, are presented in Fig. 2.
Qualitatively, across all strata of first baseline apoB measurement, the interindividual variance in apoB appears to increase over the early adult life course (i.e., there is a widening distribution over time among individuals with very similar baseline apoB concentrations). The pattern of change for most participants appears to be fairly linear, with those in the lower baseline level more likely to increase over time and those in highest strata somewhat more likely to decrease over time. Although several outlier measurements are present for each exam year, on subsequent measurements, the values for those individuals appear more normative. Patterns of change were similar for non-HDL-C, LDL-C, and LDL-P though the absolute ranges of these values at baseline and at Y30 were greater (data not shown).
Distribution of annualized rate of change
The waterfall plot of individual-level annualized rate of absolute change in apoB is presented in Fig. 3. The Y2 characteristics of quartiles of annualized rate of change are presented in Table 3. The mean (SD) rates of change in the lowest to the highest quartile of change were −0.7 (0.7), +0.3 (0.2), +0.8 (0.2), and +1.7 (0.6) mg/dl/year, respectively. Notably, the quartile with the lowest rate of increase (−0.7 mg/dl/year) had the highest initial apoB concentration of 102.7 (21.0) mg/dl, and the group with the highest rate of increase (+1.7 mg/dl/year) had the lowest starting apoB concentration of 76.1 (16.1) mg/dl. Participants in the higher rate of increase quartiles were more likely to be women and have lower BMI, waist circumference, glucose, blood pressure, higher PA and HEI, and a lower prevalence of DM than was observed in the lowest rate of change quartile (Table 3).
The Y20 characteristics of quartiles of annualized rate of change are presented in Table 4. The higher rate of change quartiles had a higher HDL-C and higher concentrations of atherogenic cholesterol fractions. Interestingly, the TG concentrations were highest in the lowest and highest apoB change quartiles. Those in the higher rate of change groups had lower Y20 systolic blood pressures, lower BMI and waist circumference, and lower serum glucose levels than were observed in the lower rate of change groups. There were higher HEI and PA scores in participants in the highest rate of change quartiles when compared with the lowest rate of change. Regular alcohol use was higher in the higher rate of apoB change quartiles.
Spaghetti plots of intraindividual change, stratified by baseline apoB quartile and annualized rate of change quartile, are shown in supplemental Fig. S2. Of those in the lowest quartile of first measured apoB, the mean (SD) rate of change was +1.0 (0.69) mg/dl/year. Further decreases in apoB were unusual in this baseline stratum. Of those in the middle two quartiles of baseline apoB measurement, the distribution across strata of change was more balanced than was seen in the highest or lowest baseline strata. In those in the highest baseline stratum of apoB, the mean (SD) rate of change was −0.2 (1.1) mg/dl/year.
Baseline and cumulative predictors of rate of change
The beta coefficients of linear regression models assessing the participant characteristics that are associated with rates of change in apoB are presented in Table 5. In multivariable analysis, lower first measured apoB, female sex, white race, lower BMI, and current tobacco use at the Y2 exam were significant predictors of a higher rate of change in apoB across early adulthood. When cumulative participant characteristic levels across follow-up were considered (model 4) in multivariable modeling, female sex, white race, lower HDL-C, lower glucose, and alcohol use were associated with an increasing annualized rate of apoB change.
Sensitivity analyses. The characteristics of those included versus excluded participants are presented in supplemental Table S2. When using an imputed dataset, we observed similar findings with regard to the mean and range of apoB across all exam years and the average annualized rate of apoB change between the first and last measured apoB. In multivariable models, all the predictors as well as their relative strength of association with apoB change were similar to those in the unimputed dataset. When CARDIA participants who initiated lipid-lowering therapy during follow-up were removed from the analysis dataset, the mean BMI, TG level, glucose, and DM prevalence in the lowest rate of change quartile at Y20 were lower than observed in the primary analysis, suggesting that some of the decrease in apoB in that group was due to the use of lipid-lowering therapy in higher risk individuals. However, removal of treated participants resulted in very minor differences in the annualized rates of change and had no effect on multivariable predictors of apoB change over time.
Comparison of the variation in rates of change between apoB and non-HDL-C, LDL-C, and LDL-P
The distributions of percent annual change for apoB, non-HDL-C, LDL-C, and LDL-P are shown in Fig. 4. The ranges of percent change for LDL-P, LDL-C, and non-HDL-C are larger than the range of percent change for apoB. The SDs of the percent annualized rate of change for apoB, non-HDL-C, LDL-C, and LDL-P were 1.2, 1.4, 1.5, and 1.7, respectively. As shown in Fig. 4, close to 40% of CARDIA participants had a negative annualized percent change in non-HDL-C, LDL-C, and LDL-P during early adult life, whereas approximately 20% of CARDIA participants had a negative annualized percent change in apoB. The mean percent change was +0.77, +0.46, +0.20, and +0.62 for apoB, non-HDL-C, LDL-C, and LDL-P, respectively. Across all exams, the absolute ranges of non-HDL-C, LDL-C, and LDL-P values are greater than apoB, thus the modest differences in percent change in LDL-C, non-HDL-C, and LDL-P reflect, on average, larger annual changes in absolute value over time in these measures (per percent change) when compared with apoB.
DISCUSSION
Leveraging unique data from a well-phenotyped cohort of young adults followed for two decades with multiple measures of apoB, we describe distributions of apoB concentrations and annualized rates of apoB change, which ranged from −6.26 to +9.21 mg/dl/year. The baseline apoB concentration was significantly and inversely associated with apoB change, whereas female sex, lower BMI, and higher HDL-C level had more modest associations with increasing apoB during the early adult life course. The substantial interindividual variation in apoB change over time as well as the relatively modest associations between baseline clinical characteristics (other than first apoB measurement) and apoB change suggest that commonly measured clinical characteristics (at least at one time point) are unlikely to predict future apoB levels well.
The cross-sectional distributions of apoB that we observed across the CARDIA exam years are consistent with distributions that were reported in the National Health and Nutrition Examination Survey (NHANES) III study (23). However, our study is the first description of untreated intraindividual change in apoB concentration across 28 years of the early adult life course. In the NHANES III study, differences in the mean apoB concentration between the 20- to 30- and 40- to 50-year-old age groups were 21 mg/dl for men and 8 mg/dl for women, which suggested that apoB levels increase with age (23). However, NHANES III was a serial cross-sectional survey, and the different age ranges were not derived from a cohort. Thus, differences in apoB concentration by age group observed in the NHANES III study could be due to birth cohort effects and differences in the sampling of individuals that represented different age groups. Thus, the patterns and rates of change in apoB concentration and their correlates that we report in this article could not have been determined from the NHANES III cross-sectional data alone. ApoB blood concentrations are determined by the rates at which apoB lipoproteins are produced and cleared from the plasma (2,24). In most individuals, the majority of apoB lipoproteins present in the blood are VLDL and LDL-Ps. Thus, the rates of production and clearance of VLDL and LDL-Ps determine most of the intraindividual and interindividual differences in apoB. Although the mediators of apoB synthetic rate and VLDL particle assembly are not completely understood, feeding and lipid kinetic studies have demonstrated that overnutrition, exogenous intake of saturated fats and simple carbohydrates, and/or insulin resistance causes increases in VLDL particles and therefore increased total apoB (25-27). Thus, chronic excess caloric intake, weight gain, and insulin resistance that occurred during follow-up likely account for some of the apoB increases observed in this analysis. Binding of the apoB molecules present on LDL-Ps to the LDL receptor causes removal of the LDL-P from the serum and downregulation of cholesterol synthesis (28). Therefore, to the extent that removal of LDL-Ps depends on the activity of the LDL receptor pathway, a decrease in LDL receptor density with age may partially explain the trend to higher levels of apoB seen in many individuals. However, apoB is present on remnant, VLDL, and Lp(a) as well as LDL-Ps. The relative molar concentration of each of these lipoprotein species can vary within and across individuals over time, and some of the non-LDL apoB lipoprotein species are not directly cleared by the LDL receptor (29). Therefore, change in apoB concentration should not be solely attributed to change in LDL receptor density, and alternative exposures and biologic pathways must contribute to the variation in apoB concentration observed with aging over the early adult life course.
Regression to the mean likely contributes to some of the observed patterns of change that we report, as those in the highest and lowest first apoB measurement were more likely, on average, to have repeated values that were closer to normative apoB values. Repeated measures of biological phenomena typically provide a more accurate estimate of an individual's "usual" values; the extent to which this phenomenon is driven by measurement error versus true biologic variation cannot be determined from this study. Nonetheless, regression to the mean is seen in clinical practice when serial lipid testing is performed in individual patients. Thus, our findings are instructive and inform what may be seen in individual patient care. The substantial changes observed over time, as well as the modest associations with baseline predictors reported above, suggest that clinicians will not be able to predict using readily available clinical information what a young adult patient's apoB level may be 5-15 years after a one-time measurement. Thus, serial measurement would be needed to monitor a patient's apoB exposure across early adult life. Nonetheless, the rates of change that we report from CARDIA (mean rate = 0.52 mg/dl/year) may be useful when serial apoB testing is performed, as clinicians can now have a reference value for the normative, though not necessarily optimal, rates of change in apoB concentration.
Although we observed substantial variation in apoB levels across early adult life, the intraindividual variation in apoB level across early adult life was less than that observed for non-HDL-C, LDL-C, and LDL-P. These observations suggest that of the indices of atherogenic lipoprotein burden, apoB is the most stable during early adult life. Thus, in young adults, one-time measures of apoB may be a better (though still likely inadequate) marker of expected future cumulative atherogenic lipoprotein exposure than non-HDL-C, LDL-C, and LDL-P. However, further research is needed to determine if these differences in stability of these measures of atherogenic lipoprotein burden during early adult life translate into meaningful differences in long-term risk estimation for young adults. Similarly, the lower intrinsic variability in apoB during early adult life, in the context of its well-known mediating effect on ASCVD risk, may indicate that apoB is a better target for lifestyle optimization or lipid-lowering therapy in some.
This study has several notable strengths. First, this article represents the first description of rates of change in apoB across the early adult life course using multiple longitudinal measurements in black and white men and women in a community-based sample. Second, in addition to statistics of central tendency (mean, median, SD, etc.), we show individual-level patterns of change in the spaghetti plots and waterfall plots, which can help contextualize patient-level observations in clinical practice. Third, the quality of demographic and behavioral assessment as well as traditional risk factor measurements in the CARDIA study is excellent. Fourth, our observations of apoB are put in context of other commonly used atherogenic lipid measures.
Several limitations should be considered as well. First, CARDIA enrolled exclusively self-reported black and white Americans. Thus, it is unclear if the patterns and rates of change that we report are generalizable to other race and ethnicity groups. Second, since our interest was in describing longitudinal patterns of apoB change, we included CARDIA participants who attended the Y20 and at least two of the previous exams (Y15, Y7, or Y2). Thus, participants who were lost to follow-up or died prior to the Y20 exam were not included in this analysis, which disproportionately excluded young black men because of lower rates of follow-up in this group (30). However, sensitivity analyses that imputed missing data did not significantly change our reported results. Our inclusion criteria also excluded some participants with chronic diseases (e.g., HIV) who either died or did not attend the Y20 exam. Patterns of apoB change may be different in young individuals with severe chronic illness; thus, our results may not correctly inform the expected apoB rates of change in specific subgroups of patients with chronic illness. Third, NMR does not provide a direct measurement of apoB but a derived value from magnetic resonance decay signals of lipid methyl and methylene groups. However, we are confident in the accuracy and precision of the apoB values provided by NMR as NMR apoB measures have been previously validated against immunonephelometry on standard assays with r² values of 0.98 (31). Furthermore, if the accuracy of the NMR apoB measure was poor but consistent, then the patterns and rates of change as well as the variance in these rates that we report would not be affected. On the other hand, if precision of apoB quantification was worse than other atherogenic lipid measures, we would have expected greater variance in apoB change than is observed in LDL-C, non-HDL-C, and LDL-P, but we observed the opposite: apoB had less variance over time than directly measured lipid values.
In summary, this article represents the first description of intraindividual apoB concentration change over the early adult life course. ApoB concentrations over time are dynamic, with the average person experiencing a +0.52 mg/dl/year increase (∼15 mg/dl over 28 years). Furthermore, the interindividual variation in change over time is substantial as well (range of −6.2 to +9.2 mg/dl/year), and the ability to predict an individual's rate of change using one-time assessment of traditional clinical variables appears modest. However, although absolute variation of apoB blood concentration was significant across early adult life, it was less than was observed for non-HDL-C, LDL-C, and LDL-P, suggesting that of these measures, apoB is the most consistent measure of atherogenic lipoprotein burden in young adults. Nonetheless, in total, these observations suggest that serial apoB testing is needed if one aims to quantify the cumulative burden of apoB atherogenic particles across early adult life. Furthermore, an improved understanding of the risks associated with different apoB concentrations, as well as the presence of potentially critical thresholds of exposure or potentially critical periods of exposure during early adult life, is needed to inform testing guidelines for this central determinant of ASCVD risk.
Data Availability
The data that support the findings of this study are available in the CARDIA study by request.
Supplemental Data
This article contains supplemental data.
THE STUDY OF THE FEATURES OF ESTONIAN ECONOMIC GROWTH: THE EXPERIENCE FOR UKRAINE
The authors study the features of Estonia's economic growth, which began with the successful monetary reform, followed by the taxation reform and, after the world economic crisis, the anti-crisis reforms. It is established that the basis of Estonia's economic development today is the creation of the digital society e-Estonia, together with the unified commercial register and the register of real estate. The work also defines the features of the Estonian taxation system, characterized by relatively low taxes and ease of use, and examines the banking system, whose financial stability depends substantially on Swedish banks, which in turn depend on inflation and the economic situation in the country. The work also analyses Estonian economic growth in six stages, each of which has its own cause and result. The first, third and fifth stages showed a negative GDP index, caused respectively by the introduction of the monetary reform, the taxation reform and the world economic crisis. The second, fourth and sixth stages were characterized by economic growth with a positive GDP index, which demonstrates the expedience and effectiveness of the monetary, taxation and anti-crisis reforms in the country.
Introduction
The Estonian economy is today open and stable, with a high capacity to adopt innovations. Estonia follows a conservative budgetary policy, has one of the best taxation systems in the European Union and a balanced banking system, grants foreign citizens substantial rights to own land, and allows 100-percent transfer of profit. Estonia is therefore one of the most favorable countries for doing business.
During the economic crisis Estonia carried out fundamental structural reforms, especially in legislation, which is among the most liberal in the EU. The effectiveness of its financial policy is demonstrated by the fact that, after a long period of decline, Estonia managed to meet the requirements for joining the Eurozone.
It must be noted that Estonian legislation is today well harmonized with that of the EU. Estonia is the most transparent and least corrupt state in Central and Eastern Europe (in the Transparency International corruption index for 2015 it ranked 23rd among 168 countries [1]). Estonia also has highly developed e-communications (good internet access, ID-cards, online projects) as well as relatively good production infrastructure that is being rapidly developed and modernized (ports, roads, telecommunications, storage facilities). The state's economic freedom is among the highest in the world and the first in the Central and Eastern European region (8th of 178 countries in the 2015 world ranking of economic freedom [1]), and the regulatory environment favors opening and conducting business in Estonia (according to the 2015 IMF report on the ease of doing business, 16th of 189 countries [1]). Estonia also holds high international credit ratings (Standard & Poor's: AA-, Moody's: A1, Fitch IBCA: A+ [1]). The Estonian experience is therefore of considerable interest for Ukraine, which is oriented toward European integration. The study, outlining and correct implementation of selected steps from the Estonian experience of economic growth in the legislative, tax and banking spheres would allow Ukraine to adapt to the EU.
Analysis of literary data and outlining of problem
Many scientists and leading economists have studied the experience of Estonian economic growth. In particular, Friedel Taube and Olena Perepadya underline in their works that Estonia develops not only in the IT industry, stressing that this Baltic country is an exemplary EU member in other branches as well [2]. According to the latest data of the European Commission, in 2012 Estonia demonstrated the lowest level of newly created debt relative to its own GDP among the EU member states, at only 0.3 percent. The Baltic state is followed by Sweden, Bulgaria and Luxembourg. For comparison, Spain accumulated new debt equal to 10.6 percent of its GDP in 2012. Even the estimate of Estonia's overall debt load testifies to its exemplary economic success: the state debt of Estonia is 10.1 percent of GDP, the lowest in Europe [3]. Mart Laar, the head of the supervisory council of the Bank of Estonia, prime minister of Estonia (1992-1994, 1999-2002) and defense minister of Estonia (2011-2012), stresses that Estonia grew faster than any other EU member state and argues that other countries must use new technologies and techniques, because catching up requires abandoning old approaches, even those used in developed countries, and concentrating on new methods [2]. Mychailo Mikytyn concludes in his research that Estonia is a rather good model for the imitation of reforms, especially for countries that emerged after decades of communist enslavement [4]. Hanna Korbut and Taavi Roivas, the current prime minister of Estonia, stress that Estonia became an EU member and attained economic growth primarily due to the reforms [5]. Hanna Savina identifies the main seven principles necessary to build a functioning e-government and states that the number of "e-Estonians" will exceed 10 million by 2025 [6]. Timophiy Kramariv in his research regards the introduction of the Estonian experience of economic growth in Ukraine skeptically, not because of its ineffectiveness, but because of the unwillingness of existing corrupt native officials to introduce personal e-cards and create a digital society [7].
Aim and tasks of research
The aim of this work is to study the features of economic growth in Estonia as an experience for Ukraine.
The following tasks were defined to attain this aim:
1. To study the economic growth of Estonia from 1991 to 2016.
2. To define the features that favored or restrained Estonian economic growth.
3. To estimate the influence of the taxation and banking systems on Estonian economic growth.
4. To outline the possibilities of implementing the experience of Estonian economic growth in the economy of Ukraine.
Materials and methods of research
The research materials were the statistical data of the Trade-industrial house of Estonia, the Taxation-Custom Department of Estonia and the Central Bank of Estonia, as well as e-resources and scientific-pedagogical periodicals.
The empirical data on economic growth in Estonia allow six stages to be distinguished, each characterized by a specific influence of reforms or crises. Three periods had a negative GDP index: under the influence of the introduction of the monetary reform at the first stage, the taxation reform at the third stage, and the world financial crisis at the fifth stage. Correspondingly, the second, fourth and sixth stages of Estonian economic growth are characterized by a positive GDP index, which testifies to the effectiveness and success of the monetary, taxation and anti-crisis reforms in the country.
Analysis of the taxation and banking systems was used as the method for identifying the factors that ultimately favored Estonian economic growth.
Results of research
In 1991, after the collapse of the Soviet Union, Estonia began to re-establish itself as an independent republic. Since the reforms were rather weak at that time, the first years of Estonian independence were characterized by serious economic problems. By the beginning of 1993, industrial production had fallen by 45 % compared with 1989 (the last year of the "old" economic regime). In 1992 inflation in Estonia exceeded 1000 %, and living standards decreased substantially.
The preconditions for further reforms were weak: 80 % of the economy belonged to the state, and there was no experience of private property ownership. The country depended on Russian energy sources, 92.5 % of its trade was with Russia, and exports to Western Europe were minimal. The countries of Central and Eastern Europe had started certain economic reforms before complete economic collapse. The aims of the transition differed from country to country. The main aims of Estonia were to strengthen its sovereignty, to reorient the country from East to West and to integrate into the European Union (EU) and the North Atlantic Treaty Organization (NATO). To do so, Estonia had to:
1. Create a working democracy with effective institutions (requiring a new Constitution, a radical reform of state government and a decrease in corruption).
2. Pass from a command economy to a market economy.
3. Raise Estonian living standards to the EU level.
4. Consolidate and integrate Estonian society (the need to reassure Estonians about the future, to protect their language and culture and also to integrate the ethnic population).
The real transitional reforms started in the summer of 1992. The first radical reform was the monetary one, adopted in May 1992 and implemented on 20 June 1992. The leading economic problem of 1992 was hyperinflation, and Estonia aimed to restore its own national monetary unit as fast as possible. The idea of a monetary council, authorized by the Estonian parliament in May 1992, appeared in several places at the same time. The main reason for this choice was its simplicity. It appeared to be the most effective means of fighting inflation, because it required the government to maintain a fixed exchange rate and a balanced state budget, and it prohibited the Bank of Estonia from crediting the state. From the very beginning the council promised that the crown would be convertible, which coincided with the Estonian choice of free external trade. Initially the International Monetary Fund (IMF) resisted the idea of creating a monetary council, preferring a traditional bank with permission to extend domestic credit, but the IMF was compelled to accept Estonia's decision (Gansson and Sax, 1992; Lainela and Sutela, 1994) [2].
The new Estonian government also tried to change the court system completely. Since the law had been under the influence of the German juridical tradition since Hanseatic times, and since Estonia wanted to join the European Union, the government chose Germany as the model for its juridical system. Estonia simply introduced the German civil and commercial codes to bring its legislation into line with EU standards. Nineteen acts were adopted to construct the modern European three-level court system.
Liberalization of the economy was the first step for all transitional economies of Central and Eastern Europe. Liberalization often led to an abrupt acceleration of inflation and an abrupt worsening of economic conditions, as happened in Estonia. The policy of openness created a transparent environment with distinct market signals for producers. Openness also favors subcontracting, which makes it possible to use the highly qualified but cheap labor force that exists in transitional economies. Estonia liberalized its trade policy from the very beginning: almost all export restrictions were eliminated by 1992. The liberal trade policy favored export growth and gave the country a possibility to earn the foreign currency it so badly needed for imports. Estonia also eliminated all import tariffs, except those on tobacco, alcohol and fuel, for which customs duties were introduced (Gansen and Sorsa, 1994) [2]. These brave steps caused serious discussions in Estonia and beyond. When Estonia started negotiations with the European Union about free trade in 1994, the EU representatives could not believe that an economy without any customs tariffs could exist. They spent a whole day studying all the corresponding acts and regulations to convince themselves that it was really possible.
The former Soviet republic surprised economists throughout the world with economic growth of more than 10 percent at the end of the 1990s and the beginning of the 2000s.
At that time Estonia was called "the Baltic tiger". Professor Riner Kattel, one of the most famous Estonian political economists of the present time, considers that among the most significant factors of Estonian success is its geographic location next to Finland and Sweden. This brings not only flows of tourists but also technologies and investments. Exports to these countries are regarded as the very base of Estonian economic growth: about 23 percent of exports go to Finland and about 15 percent to Sweden, respectively [3].
In 2016 Estonia took 23rd place among 141 countries in the Global Innovation Index. The rating is led by Switzerland, Great Britain and Sweden; Finland is in 6th place.
Estonia takes part in the production of high-technology equipment as a link in production chains. The country's role in the direct development of innovative products is insignificant; Estonia plays the role of a cheap production subdivision for many leading companies. The Estonian IT sector turned out to be oriented mainly toward the local market, and the low competitiveness of the internal market limits the income of Estonian companies. It is worth noting that in 2016 Japan became the first big country to introduce the personal code My Number and an electronic ID-card based on it.
As a result, the Estonian economic development can be divided in several stages by GDP growth. The first stage from 1991 to 1994 was characterized with establishing, at which the money reform was realized, and GDP was from 14 % to -2 %. The second stage (1995-1998) was characterized with economic growth as a result of reform and GDP reaches almost 12 %. The third stage (1999) was characterized with introduction of the new system of enterprises profit taxation that resulted in minus GDP index (-0,3 %), but proved that this taxation system provided the following stable economic growth for 8 years that can be characterized as the fourth stage (2000)(2001)(2002)(2003)(2004)(2005)(2006)(2007). Obviously, the World economic crisis influenced also the Estonian economy that determined the fifth stage, at which GDP index reached the mark -14,3 %. The sixth stage that lasts till today can be characterized as after-crisis one. Due to the correct anti-crisis arrangements GDP growth was restored and in 2011 it was 7,5 %, and since 2013 is kept on the level near 2 %.
The defined stages distinctly prove the correctness of accepted reforms in Estonian legislative sphere, especially in taxation and bank systems that led to the effective economic growth in the country. Let's consider the features of these systems.
As for the Estonian taxation system, it should be noted that the flat ("single") tax was first introduced in 1994 with a rate of 26 percent. The rate has been gradually lowered since then and currently stands at 20 percent. Some consider the most surprising aspect of the Estonian flat tax to be its simplicity: five minutes are enough to pay taxes in Estonia. The code fully integrates the taxes on personal and corporate income, meaning that corporate profit is taxed only once, either at the individual level or at the level of the legal person. In this respect the Estonian flat tax is much better for growth than the American system, which suffers from destructive double taxation. In total, the tax rate on corporate profit is 20 % in Estonia (the integrated tax rate on corporate profit in the USA is 56 %). The taxation system in Estonia thus looks more advanced than that of the USA, not to mention Italy, Greece or Mexico.
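To make the difference between single-level and double taxation concrete, the short Python sketch below computes the integrated burden on one unit of corporate profit; the 20 % and 56 % figures come from the text above, while the split of the double-taxation case into corporate-level and shareholder-level rates is only an illustrative assumption.

    def integrated_rate(corporate_rate: float, shareholder_rate: float) -> float:
        """Share of one unit of corporate profit taken in total when the profit
        is taxed first at the corporate level and the remainder is taxed again
        on distribution to the shareholder (classic double taxation)."""
        after_corporate = 1.0 - corporate_rate
        return corporate_rate + after_corporate * shareholder_rate

    # Estonian-style flat tax: profit is taxed once, at a single 20 % rate.
    estonia = integrated_rate(corporate_rate=0.20, shareholder_rate=0.0)

    # Illustrative double-taxation case: the split is an assumption, chosen only
    # so that the combined burden lands near the 56 % quoted in the text.
    double_taxed = integrated_rate(corporate_rate=0.35, shareholder_rate=0.32)

    print(f"Single-level (Estonia-style): {estonia:.0%}")        # 20%
    print(f"Double taxation (illustrative): {double_taxed:.0%}")  # ~56%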
Today the institution that collects state taxes is the Tax and Customs Board, which operates within the sphere of the Ministry of Finance. Tax officers must verify the correctness of tax payments, assess the amounts of taxes due, determine the interest payable in cases established by law, collect tax arrears and apply sanctions against violators of tax legislation.
The aim of modern Estonian tax policy is to shift the tax burden from labor to consumption. A unique system of corporate profit taxation has functioned in Estonia since 2000: retained (undistributed) corporate profit is not taxed at all. Local taxes play an insignificant role in the Estonian taxation system. Most annual declarations of natural persons (96 % in 2014) are filed electronically. Estonian VAT legislation is based on the EU directive on the common system of VAT (2006/112/EC). The standard VAT rate is 20 % and the reduced rate is 9 %; a reverse-charge mechanism is also used in Estonia.
It should also be noted that a significant stimulus for the formation of an open and transparent business environment attractive to investors is the creation of a unified commercial register and a register of real estate, which eliminate any doubts about ownership and property rights. At the same time, the Estonian government makes an unprecedented volume of tax information available for study on the internet, using the most modern technologies.
As for the banking system, the central bank of Estonia is Eesti Pank, the upper level of the two-tier system of Estonian credit institutions. Since Estonia's transition to the euro in 2011, it maintains price stability in the domestic market and conducts general control and supervision over the activities of the country's banks, while exchange rates are regulated by the European Central Bank [8]. Eesti Pank functions as a prescriptive and supervisory body: it monitors all transactions above 50,000 euros as well as the cash volume, credit rates and pricing policy of credit institutions. About 90 % of the Estonian banking sector is divided between the two largest Swedish banks, Swedbank AS and SEB Pank AS. Both banks belong to large groups, have successful histories and enjoy guaranteed Swedish backing. The range of services offered by these banks is very wide: multicurrency accounts, credits, factoring, letters of credit, investments in stocks and precious metals, various payment cards, internet banking, deposits and so on. Swedbank is convenient because its branches and cash machines operate in many countries of the former CIS and Europe.
Both banks open accounts for non-residents of Estonia, either in the client's presence or remotely. To open accounts for offshore companies, the client's personal presence and an interview with the bankers are required.
The remaining 10 % of the Estonian market is divided between five more banks and several branches of foreign ones, namely Krediidipank, Tallinna Äripank, Sampo Pank (part of the Danske Bank group, the largest Scandinavian concern) and Marfin Pank (since April 2012 more than 70 % of its share capital has belonged to UKRSELHOSPROM PCF LLC). The latter is a rather popular bank with offices in many countries, and it offers the widest range of banking services to residents and non-residents of Estonia. E-banking is highly developed in Estonia: almost 80 % of all residents carry out their everyday banking operations over the internet. Banks are also developing and expanding mobile banking via the WAP system.
It must be noted, however, that if the risks on the Swedish market grow, they will also affect the Estonian market of banking services and the banking system as a whole. Such a strong presence of Swedish capital in the Estonian banking system testifies, to a certain extent, to the low level of financial safety of the Estonian banking system.
In particular, in the third quarter of 2016 the banks and branches earned a profit of 81 million euros, which is 9 % less than in the second quarter. The income of the banking sector was 10 % lower. Other income shrank most of all, i.e. the income that had grown sharply in the second quarter against the background of commercial deals by several banks, from which they received extraordinary profit. Excluding these deals, the profit of the third quarter would have been 23 % higher than that of the second. Among the most important types of income, interest income increased by 3.5 % over the quarter and income from services by 0.4 %. The cost-to-income ratio increased slightly and reached 48.3 %.
Despite negative Euribor rates, the banks increased their interest income and maintained profitability, mainly owing to the fast growth of lending and to measures taken in cost management. Since net interest income grew more slowly than the credit portfolio, the net interest margin decreased.
The share of non-residents' deposits shrank substantially during the third quarter; deposits from offshore and other regions decreased. Since non-residents' deposits are highly volatile, this tendency increases the stability of bank funding and reduces liquidity risk. The outflow of non-residents' deposits also contributed to the overall growth of the deposit balance being lower than usual during the past year.
The annual growth of the credit portfolio of Estonian banks slowed from 11.7 % to 9.4 % in the third quarter of 2016, while the annual growth of deposits accelerated from 2.2 % to 3.6 %. Bank liquidity was strong and continued to improve, amounting in this period to 19.8 % of total bank assets, with other liquid assets making up a further 7.7 % of assets. Bank funding grew by 1.4 % over the quarter, to 20.8 billion euros. The return on equity of Estonian banks in the third quarter remained at about 12 % [9].
According to the autumn economic forecast of the European Commission, the Estonian economy will grow by 1.1 % this year, and external demand may stimulate it further. Inflation and unemployment will remain low, and the state budget will remain balanced. In the eurozone and the European Union in general, slow growth will continue. For 2018 the European Commission forecasts 2.3 % economic growth for Estonia, but inflation will also increase from today's 0.8 % to 2.6 %. The unemployment rate in the coming years should remain low (6.5 % this year, 7.4 % next year), and the budget will remain balanced (0.5 % of GDP this year and 0.4 % of GDP in 2017). According to the forecast for next year, the eurozone economy will grow on average by 1.5 % and the European Union as a whole by 1.6 %; the figures for the current year are 1.7 % and 1.8 % respectively. Most probably, economic growth in the EU will remain modest in the future as well [10].
Until 2018 private consumption will remain the main driver of growth, supported also by the expected improvement in employment and a modest increase in wages. The overall eurozone budget deficit should slowly decrease, while fiscal policy is expected to remain accommodative and investment should increase accordingly.
Political instability, slow economic growth outside the EU and weak global trade may dampen growth prospects. There is also a risk that growth will be held back by the weak economic results of recent years. In the coming years the European economy will no longer receive external support such as falling oil prices and a depreciating currency.
Although economic results still differ substantially across countries, GDP in the EU is now at its pre-crisis level, and in some EU member states it is even 10 % higher than at the trough. Over the whole forecast period economic activity in all EU member states should gradually accelerate, but it will remain uneven.
Discussion of results
In pursuing the stated aim, we want to stress the topicality and necessity of applying the Estonian experience of economic growth in Ukraine. The problem addressed in this research is broad and interesting and requires more detailed study in the future. The presented analysis of the Estonian taxation reform should be drawn on in the Ukrainian taxation system, and the monetary reform, correspondingly, in the banking system of Ukraine. The study of the introduction of a digital society in Ukraine is important, but the features of its implementation in Ukraine need further analysis and research.
Conclusions
As a result of the research: 1. The work analyses Estonian economic growth in six stages, each of which has its own cause and outcome. The first, third and fifth stages showed a negative GDP index, caused respectively by the monetary reform, the taxation reform and the world economic crisis. The second, fourth and sixth stages were characterized by economic growth with a positive GDP index, which proves the expedience and effectiveness of the monetary, taxation and anti-crisis reforms in the country and provides a good example for Ukraine.
2. A functioning e-government must be built in Ukraine, capable of distinguishing one citizen from another. In Estonia this is achieved with a universal ID card. It is used to log in to the websites of banks, the Tax and Customs Board, other state organizations and hospitals; in total about four thousand different services can be used, from buying a fishing license to paying for public transport. To give people the possibility to interact with the state directly, in 2000 the government permitted any document to be signed with a digital signature. Estonians have left more than 200 million such signatures since then.
3. Ukraine must develop local companies instead of relying on foreign solutions. Estonia does not pay license fees to the big international IT companies: free software and the products of local companies are used in the country. Big data make it possible to analyze the context and offer services to each user individually; for example, "context services" can be provided, in which particular options are offered to users depending on their personal situation.
4. Ukraine must make a great effort so that residents begin to use e-services as much as possible. For that, great attention must be paid to teaching computer literacy. In Estonia all schools have had access to the internet since the end of the 1990s. At the same time the Tiigrihüpe fund, created by the government to support new technologies, introduced the teaching of programming in senior classes, and recently its leaders have proposed a methodology for teaching computer literacy to pre-school children. The government also invests in the education of the older generation, which is a good example for Ukraine.
All these innovations will restore the population's confidence in the government and, as a result, economic growth in Ukraine will be restored.
Retrograde cerebral embolism and pulmonary embolism caused by patent ductus arteriosus: a case report
Background Although rare, paradoxical embolism sometimes occurs with patent ductus arteriosus (PDA). This study presents a case of PDA-associated paradoxical embolism with acute ischemic stroke (AIS) and pulmonary embolism (PE) following thoracoscopic surgery. Case Presentation A 65-year-old woman developed acute-onset aphasia and right hemiparesis on the third day following thoracoscopic resection for a right lung tumor. Brain magnetic resonance imaging revealed multiple infarcts, and lower extremity venous Doppler ultrasound revealed deep vein thrombosis. The patient subsequently developed dyspnea, tachycardia, and hypoxemia. PE was confirmed by percutaneous transfemoral venous selective pulmonary angiography, which meanwhile demonstrated a PDA lesion. The patient, after receiving catheter-directed thrombolysis and inferior vena cava filter placement, improved in both neurological and respiratory status. Conclusion For an uncommon but potentially fatal case with PDA-induced paradoxical embolism causing AIS and PE, early recognition and treatment are vital. Further studies are warranted to determine the optimal management and prognosis of patients with PDA-related embolic events. Supplementary Information The online version contains supplementary material available at 10.1186/s13019-024-02901-w.
Background
Acute ischemic stroke (AIS) is a major global challenge. In China, the stroke mortality rate is four times higher than that in Europe and the United States [1], highlighting the need for improved understanding and management of this condition in different populations and settings.
Paradoxical embolism represents a relatively rare but devastating AIS subtype, characterized by a complex thromboembolic trajectory that involves both systemic and pulmonary circulations [2]. The prevalence of paradoxical embolism in ischemic stroke, according to documented postmortems, varies from 7 to 40% [3]. Usually, paradoxical embolism arises in patients with venous thromboembolism, which originates in the lower extremities and might lead to neuroarterial embolization [4]. Among the possible intracardiac sources, patent foramen ovale (PFO) has been extensively reported and studied, while patent ductus arteriosus (PDA) is seldom referred to [5].
The concurrence of acute pulmonary embolism (PE) is a strong predictor of poor prognosis and high in-hospital mortality for patients with AIS [6], emphasizing the importance of timely diagnosis and treatment. Unfortunately, due to variable and nonspecific presentations, nearly half the PE cases associated with progressive stroke remain undiagnosed until death [7].
Here, we present a unique case of PDA with retrograde cerebral embolization and PE following thoracic surgery. Based on a rapid and accurate diagnosis, the patient underwent neurointerventional recanalization and anticoagulation therapy, attaining a favorable outcome.
Case presentation
A 65-year-old female patient was admitted to the thoracic surgery department on November 29, 2021 for a right lung nodule. She denied any significant past medical history and was generally healthy. A detailed enhanced chest computed tomography (CT) scan validated the existence of the nodule (a suspected malignant tumor) in the upper lobe of her right lung. A surgical resection was arranged for treatment. Preoperative brain magnetic resonance imaging (MRI) showed multiple low-signal lesions on diffusion-weighted imaging (DWI) of both cerebral hemispheres (Fig. 1), while lower extremity venous Doppler ultrasound results were unremarkable.
On December 4, 2021, she underwent a successful thoracoscopic resection (under general anesthesia) of the apical posterior segment of the right upper lobe and middle lobe nodule, along with lymph node dissection. Postoperative monitoring and symptomatic treatment, including oxygen therapy, were normal.
Unexpectedly, at 16:10 (on December 7, 2021), the patient developed sudden speech difficulty and right-sided weakness, and was promptly diagnosed with AIS. Physical examination revealed coarse breath sounds in both lungs (accentuated on the right side) and moist rales. Her vital signs remained stable, with a heart rate of 75 beats per minute (bpm) and blood pressure at 117/72 mmHg. Cardiac, vascular, and abdominal examinations were unremarkable. No edema was present in the lower extremities.
However, neurological examination identified impaired consciousness and speech. Cranial nerve functions … Subsequent brain MRI unveiled multiple patchy lesions on DWI across both cerebral hemispheres (Fig. 2), consistent with the imaging features of stroke of other determined etiology (SOE) at an early stage.
The patient presented with persistent stroke symptoms 1 h after onset. Intravenous thrombolysis was not an option due to recent major surgery and ongoing mild bloody chest tube drainage. Cerebral angiography performed at 17:20 showed complete obstruction of the left middle cerebral artery M2 superior division distal branch, indicative of an acute embolism (Fig. 3A). Unfortunately, mechanical thrombectomy was ruled out, as no accessible target was identified. Intra-arterial tirofiban was administered as a treatment for suspected small distal vessel occlusion, while repeat angiography after 15 min showed no improvement.
After the surgery (at 18:15), the patient regained consciousness with notable symptoms, including incomplete motor aphasia, right-sided facial and limb weakness, and a positive right Babinski sign. The NIHSS score was recorded at 8 (language: 1 point; facial palsy: 2 points; right arm: 2 points; right leg: 2 points; and ataxia: 1 point), indicating moderate severity of stroke. Immediate medical interventions included the administration of tirofiban (5 mL/h) for antiplatelet effects, atorvastatin for plaque stabilization, edaravone for neuroprotection, and dibenzyline to enhance collateral circulation. Supportive care and fluid replacement were also promptly initiated to optimize her recovery.
Venous Doppler ultrasound of the bilateral lower extremities revealed deep vein thrombosis (Fig. 3B). Percutaneous transfemoral venous selective pulmonary angiography confirmed acute PE and incidentally discovered PDA (Fig. 3C). The patient underwent inferior vena cava filter placement and systemic anticoagulation (heparin 2000 IU intravenously) under local anesthesia, which resulted in symptom improvement. Then, the patient received mask oxygenation, immobilization of the bilateral lower extremities, low molecular weight heparin (4250 IU intramuscularly every 12 h), and continuous infusion of tirofiban (5 mL/h) followed by anticoagulation with aspirin. Long-term treatment with low molecular weight heparin and aspirin was given for 2 weeks.
Two weeks after initiating anticoagulation and antiplatelet therapies, the patient showed significant improvement in neurological and functional status. At the follow-up examination, she was alert and oriented, with near-complete resolution of prior dysarthria. Her muscle strength improved to grade 4+ on the right and normal grade 5 on the left. Sensation and coordination were intact, except for a lingering positive Babinski sign on the right. Serum biomarkers, including cardiac troponin and D-dimer, were also normalized. Her NIHSS score was reduced to 3 (language: 1 point, right arm: 1 point, right leg: 1 point).
The patient was subsequently discharged in stable condition on December 22, 2021 following a regimen of aspirin, anticoagulant, and statin therapy. The electrocardiogram before discharge was normal (Supplemental Fig. S2). One month later, a follow-up brain MRI revealed encephalomalacia and localized cystic signals in the left frontal lobe and at the frontoparietal junction (Fig. 4). Cerebral angiography confirmed a left frontal lobe and frontoparietal watershed ischemic stroke due to acute occlusion at the distal end of the left middle cerebral artery M2 segment.
Discussion and conclusions
We reported a case of paradoxical embolism triggered by PDA, along with AIS and PE, after thoracic surgery. This condition, characterized by a complex thromboembolic pathogenesis involving both systemic and pulmonary circulation, has rarely been reported in the literature [5,6]. A mortality rate exceeding 60% is attributed to this complex condition [6]. In our case, the patient met the diagnostic criteria for paradoxical embolism, characterized by venous source thromboembolism and right-to-left cardiac shunting due to a structural anomaly. Additionally, unique clinical features were observed. The complex pathophysiological mechanisms of paradoxical embolism resulting from PDA are highlighted by the detailed interplay between systemic and pulmonary circulation during thromboembolic events [8].
First, we noted a significant improvement in symptoms and partial recanalization following the injection of tirofiban into the distal internal carotid artery during angiography [9]. This observation is pivotal, as it hints at the potential efficacy of high-pressure delivery of contrast agents in disrupting distal cerebral emboli, typically resistant to chemical thrombolysis or mechanical thrombectomy [10]. This rarely studied phenomenon aligns with previous findings [11-13] underscoring enhanced clot dissolution and improved patient outcomes via targeted thrombolytic therapy. The integration of localized drug delivery and mechanical interventions could potentially redefine treatment protocols for distal cerebral emboli and warrants comprehensive investigation to validate its efficacy and safety. Based on the patient's medical history and clinical manifestation, the factors behind this series of embolisms involved a hypercoagulable state of the blood after lung surgery, the PDA, and the relatively high pressure of the pulmonary artery.
Second, the incidental discovery of PDA on pulmonary angiography is also noteworthy. While PFO is known to be the most common cause (Fig. 5B), right-to-left shunting via PDA can occur more readily without the need for pulmonary hypertension or elevated right atrial pressure (Fig. 5A) [5,14]. Even if there was a possibility that the tiny emboli moved from the pulmonary veins to the cerebral arteries along the antegrade flow, paradoxical embolism through the PDA seems more logical, as the cerebral infarction occurred before the pulmonary embolism. Our case highlights the importance of considering this situation, especially in patients with a history of thoracic surgery. Early recognition combined with neuroendovascular intervention might contribute to favorable outcomes, despite the known dismal prognosis.
Furthermore, multimodal imaging plays a vital role in the diagnosis and management of paradoxical embolism caused by PDA. In our case, percutaneous transfemoral venous selective pulmonary angiography was conducted for suspected pulmonary embolism (PE). In doing so, the aortic arch was meanwhile visualized quickly and clearly, indicating the presence of patent ductus arteriosus (PDA) as well as relatively high pressure of the pulmonary artery. This supported the speculation of paradoxical embolism. We also performed a transesophageal echocardiogram 1 month after discharge (Supplemental Fig. S1), which showed an intact atrial septum, effectively ruling out a PFO. These imaging techniques also enabled us to evaluate the severity of the condition, the degree of vascular occlusion, and the effect of the treatment. Additionally, we performed a follow-up brain MRI at 9 months to assess the long-term outcome and the sequelae of the ischemic stroke.
The significance of multimodal imaging in the early diagnosis of conditions such as stroke, PE, and PDA has been confirmed by numerous studies [15-17]. These studies underscore the essentiality of cutting-edge imaging techniques for identifying and facilitating timely interventions. In light of this evidence, we advocate the integration of a multimodal imaging strategy to ensure accurate diagnosis and effective management of these intricate and uncommon conditions. This report is based on a single observational case study, which limits the ability to establish direct causation due to the absence of a comparison group and randomization. The long-term prognosis remains uncertain and requires further follow-up. Moreover, additional functional and imaging studies could have better characterized patient recovery beyond hospital discharge. This case report illuminates the complex clinical and academic landscape surrounding the diagnosis and treatment of paradoxical embolism caused by PDA, a rare and devastating disease. The presented case underscores the critical role of prompt disease recognition through multimodal imaging, which enables timely neuro-interventional and anticoagulation therapies that are pivotal in reducing mortality rates.
Fig. 1 Preoperative brain MRI. The MRI shows low signal intensity in both cerebral hemispheres on DWI (red circles) without significant infarct lesions in the left frontal lobe (A) or at the junction of the left frontoparietal lobes (B). This indicates that the patient had no cerebrovascular abnormalities before thoracic surgery.
Fig. 2 Brain MRI after AIS. The MRI demonstrates patchy signal intensity (red circles) in the left frontal lobe (A) and at the junction of the left frontoparietal lobes (B) on DWI, suggesting the presence of cerebral infarction at a very early stage.
Fig. 3 Vascular imaging during AIS and PE. (A) Cerebral angiograms showing (I) extracranial contrast of the anterior arch (type I) with no significant stenosis of the bilateral subclavian, internal carotid, and vertebral arteries (red arrows); (II) intracranial contrast angiography depicting no stenosis of the right middle cerebral artery but unclear distal filling of the left middle cerebral artery M2 superior trunk (red arrows); (III) anterior view of the left carotid artery demonstrating unclear distal filling of the left middle cerebral artery M2 superior trunk (red circles); and (IV) lateral view of the left carotid artery indicating unclear distal filling of the left middle cerebral artery M2 superior trunk, suggesting possible thrombosis. (B) Lower extremity venous Doppler ultrasounds revealing (I) expanded muscular veins behind the right calf (maximum diameter ~9.2 mm) with hypoechoic nonvascular cavities, and (II) a similar expanded hypoechoic venous cavity behind the left calf (maximum diameter ~4.0 mm), indicating deep vein thrombosis. (C) Pulmonary arteriography showing partial pulmonary artery embolism (I) and PDA (II, red circle and red arrow).
Fig. 4 Brain MRI a month after AIS. The MRI reveals brain substance and local fluid signal alterations (red circles) in the left frontal lobe (A) and frontoparietal junction (B), suggesting liquefactive necrosis of the left frontal and frontoparietal infarcts.
Fig. 5 Paradoxical embolism pathway diagram. (A) The diagram illustrates the flow direction of the thrombus from PDA to paradoxical embolism. (B) The diagram depicts the flow direction of the thrombus from PFO to paradoxical embolism.
Electric Field Induced Macroscopic Cellular Phase of Nanoparticles
A suspension of nanoparticles with very low volume fraction is found to assemble into a macroscopic cellular phase under the collective influence of AC and DC voltages. Systematic study of this phase transition shows that it was the result of electrophoretic assembly into a two-dimensional configuration followed by spinodal decomposition into particle-rich walls and particle-poor cells mediated principally by electrohydrodynamic flow. This mechanistic understanding reveals two characteristics needed for a cellular phase to form, namely 1) a system that is considered two dimensional and 2) short-range attractive, long-range repulsive interparticle interactions. In addition to determining the mechanism underpinning the formation of the cellular phase, this work presents a method to reversibly assemble microscale continuous structures out of nanoscale particles in a manner that may enable the creation of materials that impact diverse fields including energy storage and filtration.
Introduction
Electric fields provide a flexible means to manipulate soft matter; however, they interact with materials in electrolytic solutions through numerous distinct phenomena, making the outcome of field-directed assembly difficult to predict. When a suspension of polarizable particles experiences a spatially uniform electric field, their induced dipoles lead them to form isolated chains along field lines, causing solidification through the electrorheological effect. [1][2][3] In contrast, suspensions have also been experimentally observed to assemble into macroscopic porous structures with particle-rich walls and particle-poor voids, [4][5][6][7][8] even though this phase has not been completely recapitulated in simulation. [3,9,10] While these porous structures suggest a path to realizing continuous mesoporous solids with extremely low densities, the origin of this phase is not clear even though, and perhaps because, the cellular phase was observed in vastly different systems spanning orders of magnitude in particle size, particle volume fraction, and electric excitation. While electrohydrodynamic (EHD) flow was identified as the primary mechanism of formation in two instances [4,5] and electroosmotic (EO) flow in another, [8] the other two examples list the interactions between induced dipoles as the origin of the structure. [6,7] Overall, the lack of a cohesive and encompassing explanation for the formation of the cellular phase hinders the ability to design macroscopic porous structures.
Here, we find that a nanoparticle suspension exhibits a macroscopic cellular phase when an AC voltage VAC and a DC voltage VDC are simultaneously applied, despite using quantum dot (QD) particles with a diameter and volume fraction both at least an order of magnitude smaller than in all prior examples of the cellular phase. Indeed, systematic study revealed that the cellular phase only formed in the presence of both VDC and VAC at volume-fraction-dependent critical voltages. The complex interactions required to produce a cellular phase in this system include (1) electrochemistry to generate a DC current, (2) electrophoresis to aggregate particles into a 2D arrangement on one electrode, and (3) an instability driven by the long-range repulsive and short-range attractive EHD flow that nucleates at regions on the electrode with high local field enhancement. Notably, EO and other purely attractive interactions compete with the cellular phase and instead drive the system towards a cluster phase (i.e. pearl chaining). This mechanistic explanation was compared to all previous examples of the electrically mediated cellular phase to identify a set of unifying factors that appear to always be present, namely that the system adopts an effective 2D arrangement and features an in-plane interaction that is short-range attractive and long-range repulsive. This understanding paves the way towards the concerted formation of hierarchical porous structures that may impact fields including energy storage and filtration. [11,12]
Experimental Methods
As a model system for assembly, poly(maleic anhydride-alt-1-octadecene) (PMAO)-coated CdSe/CdS quantum dots (QDs) [13,14] were suspended in dilute borate buffer (3.125 mM, 5.5 nm Debye screening length) [15] at a volume fraction φ = 6×10⁻⁵, which is equivalent to a 25 nM particle concentration. For a typical assembly experiment, indium tin oxide (ITO) slides (2277 - University Wafer, 703176 - Sigma Aldrich) were prepared by sonicating them in acetone and subsequently in isopropanol for 5 min each before drying them under an N2 stream. The ITO slides were then placed into the 3D-printed frame pictured in Fig. S2(a). A laser-cut polyimide spacer (2271K72 - McMaster) with a height fixed at 177 ± 1 µm was then placed onto one of the ITO slides, 4 µL of the suspension was pipetted onto the ITO-coated glass, and a second ITO-coated glass slide was placed on top to form a fluid cell as shown in Fig. 1(a).
This complete cell was then transferred to an Olympus BX43 microscope with a GS3-U3-120S6M-C Grasshopper camera. A filter cube with an emission wavelength at 642 nm, 75 nm BW (67-036 -Edmund Optics Inc.), a short-pass excitation filter with a cutoff at 500 nm (84-706 -Edmund Optics Inc.), and a dichroic with cutoff at 550 nm (DMLP550R -ThorLabs Inc.) were used to visualize the photoluminescent QDs. Alligator-clip leads were attached to a corner of each ITO-coated slide as shown in Fig. S2
Results and Discussion
Simultaneously applying VAC and VDC across the QD suspension resulted in a cellular phase at strikingly low φ, particle size, and field intensities. Specifically, setting VDC = 2.2 V and VAC = 2 V amplitude at 500 kHz, the particles assembled into a cellular phase over the course of a few minutes as shown in Fig. 1(b). To determine whether this process was reversible, the field was subsequently switched off, which led the suspension to gradually homogenize through diffusion, as seen in Fig. 1(c). Given that this phase has not been previously observed for particles with hydrodynamic diameters < 100 nm, we considered whether this could be specific to these QDs.
Thus, we repeated this experiment with commercially available fluorophore-doped polystyrene nanoparticles and again observed the cellular phase (Fig. S3), showing that this phase is not restricted to these QDs.
To explore the mechanism of the cellular phase and whether it originated from forces between induced dipoles, we compute the non-dimensional parameter Λ = αV²/(8H²k_BT), which reflects the importance of induced-dipole interactions between particles relative to thermal energy, given particle polarizability α, applied voltage V, electrode separation H, Boltzmann's constant k_B, and temperature T. Here, we estimate Λ ~ 0.008, indicating that induced-dipole-mediated assembly should not occur and that other interactions must drive the formation of the cellular phase. Another potential mechanism is suggested by the resemblance of the cellular phase to Bénard cells, where gravity-driven natural convection from density gradients produces similar cells. [16][17][18] Thus, we repeated the experimental conditions shown in Fig. 1 with the cell rotated 90° such that gravity pointed along the electrodes; the same cellular structure formed (Fig. S4), indicating that natural convection is not responsible for the cellular phase. To explore the mechanism of the cellular phase further, we examined the contributions of VDC and VAC. Specifically, we performed a series of experiments holding VAC fixed while VDC was incrementally increased from 0 to 2.6 V in steps of 0.2 V. A fluorescence micrograph was taken at each increment after equilibrating for 4 min. The gradual transition from a uniform suspension to an ordered cellular phase was apparent, as shown in Fig. 2(a). To analyze these experiments, the fluorescent pattern in each image was manually classified as having (1) … While VDC was required to form the cellular phase, the origin of the ~1.8 V threshold voltage, or the dominant physical effect of VDC, was not clear. The relevance of the DC field is especially noteworthy when one considers that electrolyte ions will accumulate on oppositely charged electrodes, giving rise to an electrode polarization that screens the DC field, thus preventing a strong DC component in the bulk. [19,20] One process that could maintain a steady-state DC field is a constant flow of ions across the chamber mediated by their electrochemical generation/annihilation at the anode and cathode. Electrochemical reactions at the electrodes could also explain the VDC threshold, as highly non-linear currents are common in electrochemistry due to reaction-specific standard reduction/oxidation potentials and mass-transport effects. [21] To determine whether electrochemical currents were present, two-electrode cyclic voltammetry measurements were conducted on the cell, confirming that electrochemical reactions were present and resulted in an appreciable current when VDC > 1.5 V (Fig. S6). The observed reactions were likely due to a combination of water electrolysis and ITO degradation. [22,23] While the current turn-on behavior was commensurate with the onset of the cellular phase, we sought to establish a more definitive link between electrochemical reactions on the electrodes and the cellular phase.
Thus, we coated the surface of the ITO electrodes with an insulating layer of poly(methyl methacrylate) (PMMA) to prevent electrochemical reactions from occurring and repeated the assembly experiment. After this treatment, the cellular phase did not form at any voltage ( Fig. S7), verifying that electrochemistry was required for the cellular phase to form.
Due to the electrochemical reactions at the electrodes, a steady-state DC field persisted in the chamber and led to electrophoresis of the QDs and a subsequent increase in their local concentration on one electrode. Since these QDs had a slightly negative zeta potential, [25] they accumulate on the positively charged electrode and form a thin particle-dense film, calculated to be about a monolayer in thickness. To understand the fate of this initially uniform film, it is useful to consider that the magnitude of VAC determines whether it will adopt a cellular or cluster phase, as shown in Fig. 2(b). Particles on a substrate will interact through two types of electrically induced fluid flows: EO flow and EHD flow. [21,25,26] While EO flow is a DC phenomenon, EHD flows arise from both VAC and VDC. [20,24] Interestingly, these effects are expected to produce contrasting flow fields, in which EO draws particles together in the plane while EHD flow, despite being short-range attractive, repels particles at long ranges. While a Cahn-Hilliard analysis of these interactions revealed that both interactions can drive a spinodal decomposition, and will do so in a concentration-dependent manner, the instigator of the cellular phase was not clear from this analysis alone.
To determine what interaction drove the spinodal decomposition from a film to the cellular phase, we performed a series of experiments at various φ in which VDC was held constant while VAC was gradually increased. First, we prepared a sample with φ = 3×10⁻⁵ and VDC = 1.9 V, and increased VAC from 0.5 to 5.5 V in steps of 0.5 V. To quantify the critical AC voltage V*AC at which the cellular phase forms, the images were analyzed to count the number N of cells in each image (Fig. S8). Fitting N vs. VAC to a sigmoid (Eq. S16) allowed us to quantify V*AC. A typical experiment is shown in Fig. 3(a). Eight conditions were tested in triplicate (at four values of φ and both VDC = 1.9 V and VDC = 2.2 V) over the range of VAC, enabling a Cahn-Hilliard analysis of the cellular phase formation. In this framework, spinodal decomposition is predicted to occur when the interparticle interaction (i.e. EO or EHD) leads perturbations to grow faster than they dissipate through diffusion. Due to these competing effects, a general relationship is expected wherein the strengths of EHD and EO are assigned unknown, but concentration- and field-independent, prefactors βEHD and βEO; EHD and EO scale with the electric field quadratically and linearly, respectively. [21] Thus, the data in Fig. 3(b) were fit to the resulting functional form (Eq. S18), where b reflects that while AC and DC voltages can both give rise to EHD, they may have different intensities. [21] Using nonlinear least-squares fitting, we found βEHD = 0.004 ± 0.001 V⁻², b = 9.7 ± 0.8, and βEO = −0.9 ± 0.2 V⁻¹. Critically, since both βEHD and b were positive, EHD promoted the formation of the cellular phase. In contrast, βEO being negative means that EO flow inhibited the cellular phase formation. Interestingly, these results implied that VDC played two competing roles by contributing to both EO and EHD. These results demonstrated that EHD flow was critical to the formation of the cellular phase. After allowing the suspension to homogenize with the field off and then repeating the same experiment, we found that the structure of the cellular phase was repeatable, with voids occurring at the same locations, as shown in Fig. S9(a). To explore this further, an experiment was performed in which a solution was exposed to conditions that led to a cellular phase (VDC = 2.2 V, VAC = 3.0 V, φ = 6×10⁻⁵), the system was then allowed to homogenize with the field off, and it was then exposed to conditions that led to a cluster phase (VDC = 2.4 V, VAC = 0 V). Importantly, Fig. S9(b) shows that many of the cluster phases were co-localized with the centers of the voids of the cellular phase. Together, these results suggest that features of the underlying substrate, likely asperities that enhance the local electric field, [28] break the symmetry of the system and nucleate the phase transition. Furthermore, the fact that the same location can lead to voids through repulsive EHD flows or to clusters through attractive EO flows further suggests that the mode of spinodal decomposition is fundamentally different between EHD- and EO-mediated phases.
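As a rough illustration of how a critical AC voltage can be extracted from cell counts, the Python sketch below fits N versus VAC to a generic logistic curve with SciPy; the exact parameterization of Eq. S16 is not reproduced here, so the functional form, the synthetic data, and the identification of the half-rise voltage with V*AC are assumptions made for illustration only.

    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(v_ac, n_max, v_half, width, offset):
        """Generic logistic curve: cell count rises from `offset` to
        `offset + n_max` around the half-rise voltage `v_half`."""
        return n_max / (1.0 + np.exp(-(v_ac - v_half) / width)) + offset

    # Synthetic example data: number of cells N counted at each AC voltage.
    v_ac = np.arange(0.5, 6.0, 0.5)
    n_cells = np.array([0, 0, 1, 2, 8, 20, 33, 38, 40, 41, 41])

    popt, pcov = curve_fit(sigmoid, v_ac, n_cells, p0=[40.0, 2.5, 0.3, 0.0])
    n_max, v_half, width, offset = popt
    print(f"Estimated critical AC voltage V*_AC ~ {v_half:.2f} V")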
These experiments and analyses coalesced into a proposed mechanism for the cellular phase formation involving electrophoresis, EO flow, and EHD flow. Once VDC was applied to the particle suspension, electrochemistry at the electrodes led to a DC current that electrophoretically pulled the particles to one side of the chamber, as shown in Fig. 4(a). Once assembled into a film, EO led to an attractive flow that promoted particle aggregation, as shown in Fig. 4(b). However, VAC and VDC also produced an EHD flow, as depicted in Fig. 4(c), that was short-range attractive but repulsive at long ranges. Both flow profiles in Fig. 4(d) were replotted from Ristenpart et al. [21]
Depending on which flow dominated, the spinodal decomposition sketched in Fig. 4(e) began through the nucleation of either an excess or a depletion of particles at the high-field regions, which subsequently led to the cluster phase or the cellular phase, respectively. A similar dichotomy of spinodal decompositions has been observed in simulations of colloids with competing interparticle interactions. [29,30] Interestingly, at VDC > 2.4 V, a transition from a cellular phase to a cluster phase was observed, but this is qualitatively different from the low-VAC cluster phase as it occurs at the nodes of the cells. We therefore attribute it to the vertices of the cells becoming tall enough to span the chamber, at the expense of the structure becoming thinner, at which point particles are recirculated into the voids. With a greater understanding of the mechanism of cellular phase formation, we hypothesized that the cell arrangement could be controlled. Having shown that a polymer coating prevented the formation of the cellular phase, we reasoned that photoresist could serve as a patternable coating to localize assembly. To test this, we patterned a star on the ITO slide connected to the positive lead and performed an assembly experiment. After applying VAC = 2 V and VDC = 2.2 V for 4 min, the cellular structure formed with cells present only at the points of the star (Fig. 5).
Interestingly, this simple method allowed for the location and orientation of five cells to be controlled, suggesting further opportunities for crafting complex macro-porous arrangements with very low particle densities. Considering the mechanism of formation identified in the present work and the characteristics of prior work that resulted in the cellular phase, two commonalities emerge that unify all observations of the electrically mediated cellular phase. The first commonality is that each system can be effectively reduced to 2D by way of gravity or VDC pulling particles to one side of the chamber, [4,5,8] or through the formation of chains spanning the entire chamber. [6,7] The second commonality is that all feature interactions that are short-range attractive and long-range repulsive in the plane of the electrode, either through dipolar interactions of particle chains or through EHD flow. Indeed, a qualitatively similar cellular phase has been observed in suspensions of magnetic particles under the influence of triaxial magnetic fields that were effectively 2D systems in which the interaction was short-range attractive and long-range repulsive, [31][32][33] suggesting that these features can contribute towards a more general and complete understanding of the cellular phase.
Conclusion
We observed an ultra-low-density macroscopic cellular phase through the electrically mediated assembly of nanoparticles that were an order of magnitude smaller than in previous examples. Additional control experiments helped tease out the factors that contribute to cellular phase formation, such as EO and EHD flows and the importance of electrochemistry at the electrode surface. This interplay between electrochemistry, electrophoresis, EO, and EHD results in a unique porous structure made of nanoparticles in which the characteristic length of the pores is 10,000 times larger than the size of the particles. Importantly, by comparing to prior work, we identified two characteristics that appear to be required to form the cellular phase: particles that are confined in some way to 2D and an interparticle interaction that is short-range attractive and long-range repulsive. This level of understanding is essential to bridging the gap between observing and utilizing the unique structures produced by this assembly process.
I. Cahn-Hilliard Analysis
To analyze the spinodal decomposition of the particles from a uniform distribution to a cellular phase, we perform a Cahn-Hilliard analysis. As an initial state for this analysis, we posit that the DC electric field E will lead the particles to assemble onto the electrode that is positively charged.
To justify this, we estimate the electrophoretic speed of the QDs, which in the thin-double-layer (Smoluchowski) limit is given by [1] u = (ε ζ_p / η) E for κR ≫ 1, (S1) with particle radius R, inverse Debye length κ, particle zeta potential ζ_p, medium permittivity ε, and medium viscosity η. For our system of QDs suspended in 3.125 mM borate buffer, κ = 0.18 nm⁻¹ and R = 8.5 nm, so κR > 1. Based on Eq. S1, a particle with ζ_p ≅ −30 mV [2] at room temperature in water will move ~0.2 mm/s when 2 V is applied across 200 µm. Under these conditions, the particle would traverse the fluid cell in ~1 s, so all the particles will concentrate on the electrode essentially immediately upon application of E. If the QDs have a packing fraction of 0.74 and are dispersed at a bulk volume fraction of 6×10⁻⁵, they are expected to form a film ~12 nm thick on the surface of the electrode, which is approximately a monolayer of particles. Once assembled into a two-dimensional film, the movement of particles can be described by the convection-diffusion equation [3] ∂n/∂t = −∇·(nU) + D∇²n, (S2) where the first term describes the change in particle areal concentration n with time t at a location r on the surface of the electrode, the second term describes the convection of particles due to the flow field U, and the final term represents the diffusion of the particles with diffusion coefficient D, which is assumed to be constant. The total number of particles does not change, so no source or sink term is included.
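As a quick sanity check on these numbers, the short Python sketch below evaluates the Smoluchowski mobility of Eq. S1 for the stated zeta potential and field; the relative permittivity and viscosity of water at room temperature are assumed textbook values, not parameters reported in the paper.

    # Order-of-magnitude check of the electrophoretic transit time (Eq. S1).
    eps0 = 8.854e-12        # vacuum permittivity, F/m
    eps_r = 78.5            # relative permittivity of water at ~25 C (assumed)
    eta = 1.0e-3            # viscosity of water, Pa*s (assumed)
    zeta = -30e-3           # particle zeta potential, V (from the text)
    V = 2.0                 # applied DC voltage, V
    H = 200e-6              # electrode separation, m

    E = V / H                                   # field strength, V/m
    u = eps0 * eps_r * zeta / eta * E           # Smoluchowski velocity, m/s
    transit_time = H / abs(u)                   # time to cross the chamber, s

    print(f"|u| ~ {abs(u)*1e3:.2f} mm/s")          # ~0.2 mm/s
    print(f"transit time ~ {transit_time:.1f} s")  # ~1 s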
After assembly onto one electrode due to electrophoresis, the initial areal concentration n₀ on the positively charged electrode is set by the bulk volume fraction φ₀ and the chamber height H (Eq. S3). In the Cahn-Hilliard analysis, a plane-wave perturbation n′ is added, and the growth or decay of this term determines the stability of the film. This leads to the expression n(x, t) = n₀ + n′ = n₀ + a(t) exp(ikx/R), (S4) where a(t) is the amplitude of the wave perturbation, k is the non-dimensional wave vector normalized by R, and x is a direction along the electrode. We assume that initially the perturbation n′ is small compared to n₀. Due to this perturbation, the flow field can be separated into U = U₀ + U′, where U₀ is due to n₀ and U′ is due to n′. However, because the initial distribution of particles is uniform, we take U₀ = 0. Thus, Eq. S2 can be linearized (Eq. S5), and introducing Eq. S4 into Eq. S5 and simplifying yields Eq. S6. Drawing from the analysis performed by Hardt et al. [4], we define U′ generally through an integral over the perturbed concentration field (Eq. S7), where u(r) is the flow velocity as a function of the magnitude of the location r in the x-y plane and an angular coordinate describes the direction of r. Importantly, two flow types were present in our system, electroosmotic (EO) and electrohydrodynamic (EHD) flow. The EHD and EO velocities can be written as u_EHD(r) = β_EHD V² f_EHD(r) (S8) and u_EO(r) = β_EO V f_EO(r), (S9) where the functions f(r) describe the flow at a point r away from a single particle, based on the flow profiles in Fig. 4(d), which were replotted from the theory described by Ristenpart et al. [5], and the constants β describe the strength of the flow field, with subscripts denoting EHD and EO flow. It is known that EHD flow is proportional to the voltage squared, whereas EO flow scales directly with the applied voltage [5], as described in Eqs. S8 and S9. For each flow, the integral in Eq. S7 was solved numerically as a function of k, so that Eq. S7 can be expressed in terms of I_EHD(k) and I_EO(k) (Eq. S10), which are the results of the double integral described by Eq. S7 for EHD and EO, respectively. The plot in Fig. S1 for EHD shows that at low k (i.e., long wavelength) repulsion between particles is expected.
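To see how such a k-dependent interaction can select a finite cell size, the Python sketch below evaluates a Cahn-Hilliard-type growth rate of the form σ(k) = n₀β·I(k) − Dk² for a toy kernel transform I(k); the difference-of-Gaussians form, the prefactors, and the units are illustrative assumptions and do not reproduce the actual EHD flow profile or integrals used in this analysis.

    import numpy as np

    # Toy linear-stability estimate: a perturbation of wavevector k grows when
    # the interaction term beats diffusion, sigma(k) = n0_beta*I(k) - D*k**2.
    k = np.linspace(0.01, 5.0, 500)          # non-dimensional wavevector

    # Toy kernel transform I(k): repulsive (negative) at small k / long range,
    # attractive (positive) at intermediate k -- a difference of Gaussians.
    I_k = np.exp(-(k - 2.0)**2) - 0.8 * np.exp(-(k / 0.7)**2)

    D = 0.05          # diffusion strength (arbitrary units, assumed)
    n0_beta = 0.3     # product of areal concentration and flow strength (assumed)

    sigma = n0_beta * I_k - D * k**2
    unstable = k[sigma > 0]
    if unstable.size:
        print(f"Unstable band: k ~ {unstable.min():.2f} to {unstable.max():.2f}")
        print(f"Fastest-growing mode: k ~ {k[np.argmax(sigma)]:.2f}")
    else:
        print("No unstable modes for these parameters.")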
To apply this understanding to our data, Eq. S10 was introduced into Eq. S6 and simplified (Eq. S11), which describes the evolution of the amplitude over time. When the term multiplied by a(t) is positive, the perturbation grows, leading to spinodal decomposition. Thus, Eq. S11 can be used to compute a critical voltage V* at which an instability will occur (Eq. S12). With Boltzmann's constant k_B and temperature T, Eq. S12 can be simplified (Eqs. S13 and S14). Importantly, Eq. S14 shows that V* is inversely related to n₀, which implies that as the volume fraction increases, the voltage necessary to observe the cellular phase decreases. Additionally, V* must be expanded into its AC and DC components, V*² = (V*_AC)² + b V_DC², (S15) where the dimensionless constant b allows V_AC and V_DC to contribute to EHD with intensities reflecting the different complex conductivities at DC and at high frequencies [7]. In our experiments, the critical AC voltage V*_AC at which the cellular phase was observed was determined by fitting the experimental data to a sigmoid (Eq. S16), as seen in Fig. 3(a), where N is the number of cells, N_max is the maximum number of cells observed in that experiment, and c₁, c₂, and c₃ are additional fitting parameters. Incorporating Eqs. S15 and S3 into Eq. S14 yields Eq. S17, which was simplified to the functional form used to fit the data in Fig. 3(b) (Eq. S18), where b, β_EHD, and β_EO are fitting parameters. Equation S18 captures the behavior in our experimental determination of V*_AC shown in Fig. 3(b) and confirms that V*_AC increases as φ₀ and V_DC decrease.
Supplementary figure caption (fragment): V_AC was initially set to a value from 0 V to 2.5 V and V_DC was then increased from 0 V to 2.6 V in steps of 0.2 V; after each increase of V_DC, the system was allowed to stabilize for 4 min before images were taken.
Fig. S9 (a) Micrographs of QDs at φ = 6×10⁻⁵ with V_DC = 2.2 V and V_AC = 2 V after equilibrating for 4 min, then with the field turned off for 40 min to allow the particles to redistribute, and finally with the same voltage applied again for 4 min. Scale bar, 500 µm, applies to both (a) and (b). (b) From left to right: a micrograph of QDs at φ = 6×10⁻⁵ after V_DC = 2.2 V and V_AC = 3 V had been applied for 4 min, a micrograph taken 40 min after the field had been turned off, and a micrograph taken after V_DC = 2.4 V was subsequently applied for 4 min. The locations of bright spots with high QD concentration are indicated by white circles; these same spots correspond to the locations of voids in the left-most micrograph.
Effect of Arteriovenous Anastomosis on Blood Pressure Reduction in Patients With Isolated Systolic Hypertension Compared With Combined Hypertension
Background Options for interventional therapy to lower blood pressure (BP) in patients with treatment‐resistant hypertension include renal denervation and the creation of an arteriovenous anastomosis using the ROX coupler. It has been shown that BP response after renal denervation is greater in patients with combined hypertension (CH) than in patients with isolated systolic hypertension (ISH). We analyzed the effect of ROX coupler implantation in patients with CH as compared with ISH. Methods and Results The randomized, controlled, prospective ROX Control Hypertension Study included patients with true treatment‐resistant hypertension (office systolic BP ≥140 mm Hg, average daytime ambulatory BP ≥135/85 mm Hg, and treatment with ≥3 antihypertensive drugs including a diuretic). In a post hoc analysis, we stratified patients with CH (n=31) and ISH (n=11). Baseline office systolic BP (177±18 mm Hg versus 169±17 mm Hg, P=0.163) and 24‐hour ambulatory systolic BP (159±16 mm Hg versus 154±11 mm Hg, P=0.463) did not differ between patients with CH and those with ISH. ROX coupler implementation resulted in a significant reduction in office systolic BP (CH: −29±21 mm Hg versus ISH: −22±31 mm Hg, P=0.445) and 24‐hour ambulatory systolic BP (CH: −14±20 mm Hg versus ISH: −13±15 mm Hg, P=0.672), without significant differences between the two groups. The responder rate (office systolic BP reduction ≥10 mm Hg) after 6 months was not different (CH: 81% versus ISH: 82%, P=0.932). Conclusions Our data suggest that creation of an arteriovenous anastomosis using the ROX coupler system leads to a similar reduction of office and 24‐hour ambulatory systolic BP in patients with combined and isolated systolic hypertension. Clinical Trial Registration URL: http://www.clinicaltrials.gov. Unique identifier: NCT01642498.
Arterial hypertension is the most prevalent and major modifiable risk factor for cardiovascular morbidity and mortality worldwide. 1 Although several effective and safe antihypertensive drug classes are available, the prevalence rate of treatment-resistant hypertension (TRH) remains ≈8% to 15%. 2,3 Moreover, it has been reported that within a median of 1.5 years after initiation of antihypertensive treatment, 1 of 50 patients develops TRH. 4 This is of crucial importance, since the diagnosis of TRH carries significantly greater cardiovascular risk compared with patients without TRH. 5,6 Therefore, innovative therapeutic strategies are needed to achieve blood pressure (BP) control and reduction of cardiovascular mortality in this population.
Several interventional approaches for lowering BP in patients with TRH have recently been introduced. Most depend on modulation of sympathetic activity, for example through renal denervation (RDN) or baroreflex activation. However, the BP response to RDN is markedly heterogeneous and it is not fully known whether this is due to technical failure or a diminished role of renal sympathetic signaling in nonresponders. [7][8][9] It has been suggested that the effect of reduced sympathetic activity (due to RDN), and hence the potential to decrease BP in the short term may be limited in patients with advanced vascular remodeling. 10 Furthermore, in patients with isolated systolic hypertension (ISH) (office BP ≥140 mm Hg systolic and <90 mm Hg diastolic), indicative of arterial stiffness, BP reduction due to RDN was attenuated compared with patients with combined hypertension (CH) (office BP ≥140/≥90 mm Hg). 11 This was confirmed in a post hoc analysis of pooled data from the Symplicity HTN-3 trial and the Global SYMPLICITY Registry. Even though patients with ISH had a reduction in systolic BP (SBP) 6 months after RDN, the magnitude of SBP reduction was less pronounced than that seen in patients with CH. 12 An alternative approach to nonpharmacological BP reduction targeting mechanical aspects of the circulation is the percutaneous creation of a therapeutic arteriovenous anastomosis using the ROX coupler system, thereby increasing arterial compliance and reducing total peripheral resistance. 13 In the randomized controlled ROX Control Hypertension Study (NCT01642498), a central iliac arteriovenous anastomosis resulted in significant reductions in both office and 24-hour ambulatory BP (ABP) compared with medically managed patients. 14 The aim of the current post hoc analysis was to assess the effects of ROX coupler implantation on office and 24-hour ABP in patients with CH compared with patients with ISH using data from the ROX Control Hypertension Study.
Study Design and Cohort
The ROX Control Hypertension Study was conducted between October 2012 and April 2014, and its design has been published elsewhere. 14 In brief, the study was a European, open-label, multicenter, prospective, randomized, controlled trial assessing the safety and efficacy of an arteriovenous anastomosis for BP-lowering purposes in patients with TRH. Inclusion criteria were age between 18 and 80 years and presence of TRH (office SBP ≥140 mm Hg and average daytime ABP ≥135/85 mm Hg despite treatment with at least 3 antihypertensive drugs including a diuretic) on a stable drug regimen (without change in dose or medication) for at least 2 weeks. Exclusion criteria were secondary hypertension other than sleep apnea, RDN within the previous 6 months, an estimated glomerular filtration rate (eGFR) <30 mL/min per 1.73 m 2 and type 1 diabetes, current diagnosis of unstable cardiac disease requiring intervention, history of heart failure, recent myocardial infarction, unstable angina, coronary angioplasty or bypass surgery within last 6 months, current severe cerebrovascular disease or stroke within the previous year, and significant peripheral arterial or venous disease. Furthermore, patients in the intervention group with pulmonary arterial hypertension (mean pulmonary artery pressure >25 mm Hg) and/or elevated pulmonary capillary wedge pressure (>15 mm Hg) were excluded.
Patients were randomly (stratified by study site and previous treatment with RDN) assigned to intervention (percutaneous creation of an arteriovenous anastomosis) plus continuation of antihypertensive medication or maintenance of antihypertensive mediation alone in a 1:1 fashion. However, for this post hoc analysis, only patients who were randomized to ROX coupler implementation and were not lost to follow-up were included.
The study was approved by the ethics committees of the participating centers and was performed according to the Declaration of Helsinki and Good Clinical Practice guidelines. Written informed consent was obtained from all patients before study entry. The study was registered at www.clinicaltrials.gov (ID: NCT01642498).
Creation of an Arteriovenous Anastomosis
The procedure for creation of an arteriovenous anastomosis is described in detail elsewhere. 14 In brief, the placement of the ROX coupler creates a fixed caliber 4-mm arteriovenous anastomosis between the distal external iliac artery and vein in a standard cardiovascular catheterization laboratory setting under fluoroscopic guidance. The self-expanding nitinol device permits a controlled shunt volume of 800 to 1000 mL/min. 15 Use of anticoagulation was determined on an individual basis by the interventionalist.
Office and 24-Hour ABP Monitoring
Office BP was measured according to standard recommendations in the nondominant arm, and the average of 3 measurements was taken. If BP values were more than 15 mm Hg apart, measurements were repeated and the means of the last 3 consecutive consistent readings were taken. ABP measurements were performed with validated automatic portable devices. Readings were taken every 30 minutes during daytime and every 60 minutes during nighttime. Measurements were deemed acceptable if there were at least 70% successful readings over 24 hours or if 14 successful readings during daytime and 7 during nighttime were recorded. Patients were graded according to their dipping pattern into dippers (nighttime BP fall ≥10%) and nondippers (nighttime BP fall <10%).
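The acceptance and dipping criteria above translate directly into a small helper; the sketch below is illustrative only (function and variable names are not from the study protocol).

```python
# Helper encoding the ABPM rules described above: a 24-hour recording is
# acceptable if at least 70% of all readings succeeded, or if at least
# 14 daytime and 7 nighttime readings succeeded; a patient is a "dipper"
# if nighttime systolic BP falls by >= 10% relative to daytime.
# Names and structure are illustrative only.

def abpm_acceptable(n_ok_day: int, n_day: int, n_ok_night: int, n_night: int) -> bool:
    total, total_ok = n_day + n_night, n_ok_day + n_ok_night
    if total > 0 and total_ok / total >= 0.70:
        return True
    return n_ok_day >= 14 and n_ok_night >= 7

def dipping_status(day_sbp_mean: float, night_sbp_mean: float) -> str:
    fall = (day_sbp_mean - night_sbp_mean) / day_sbp_mean
    return "dipper" if fall >= 0.10 else "nondipper"

print(abpm_acceptable(n_ok_day=20, n_day=28, n_ok_night=9, n_night=10))
print(dipping_status(day_sbp_mean=150.0, night_sbp_mean=140.0))
```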
A responder was defined as a patient with office SBP reduction ≥10 mm Hg 6 months after intervention.
Statistical Analysis
All analyses were performed using IBM SPSS Statistics for Windows, version 21.0 (IBM Corp., Armonk, NY). Following our hypothesis, patients were categorized into CH or ISH groups according to their baseline office BP. Data were compared by paired and unpaired Student t tests, Wilcoxon and McNemar tests, and Fisher exact test as appropriate, and were presented as mean±SD in the text and mean±SEM in the figures, respectively. A general linear model was used to assess interaction and adjust for possible influencing factors between the two groups. A 2-sided P value of <0.05 was considered statistically significant.
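For readers reproducing this style of two-group comparison outside SPSS, the sketch below illustrates analogous tests with scipy; the example data are simulated from the summary statistics reported in this analysis and are not the study's actual per-patient values.

```python
# Illustrative two-group comparison in the spirit of the analysis above:
# unpaired Student t test for a continuous change score, Fisher's exact test
# for the responder rates.  Data are simulated from reported summaries only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
delta_sbp_ch = rng.normal(-29, 21, size=31)    # simulated CH change scores
delta_sbp_ish = rng.normal(-22, 31, size=11)   # simulated ISH change scores

t_res = stats.ttest_ind(delta_sbp_ch, delta_sbp_ish)
print("unpaired t test p-value:", round(t_res.pvalue, 3))

# Responder counts (>=10 mm Hg office SBP reduction): 25/31 vs 9/11.
table = [[25, 31 - 25], [9, 11 - 9]]
_, p_fisher = stats.fisher_exact(table)
print("Fisher exact test p-value:", round(p_fisher, 3))
```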
Results
Baseline characteristics of patients stratified according to type of hypertension (CH [n=31] versus ISH [n=11]) are given in Table 1. Office SBP and 24-hour systolic ABP were higher in patients with CH compared with those with ISH, but the difference did not reach statistical significance. Per definition, office diastolic BP (DBP) and 24-hour diastolic ABP were higher in patients with CH compared with those with ISH. There was no difference in the number of patients with prior RDN between the two groups (P=0.372).
Office BP
There was a significant reduction of office SBP and DBP after 6 months by −29±21/−24±13 mm Hg (both P<0.001) in the CH group and by −22±31/−10±13 mm Hg (both P<0.05) in the ISH group. Most importantly, the change in office SBP did not significantly differ between the two groups (P=0.445) (Figure 1). The general linear model did not reveal an interaction between baseline office SBP and type of hypertension (P=0.226). After adjusting for baseline office SBP, there was no difference in office SBP reduction 6 months after ROX coupler implementation between the two groups (P=0.991). Even after full adjustment (sex, age, and office SBP and DBP), no difference in office SBP reduction was detected (P=0.669).
Responder Rate
A total of 25 patients in the CH group (81%) and 9 patients in the ISH group (82%) had an office SBP reduction ≥10 mm Hg (usually defined as a BP responder after an interventional strategy of BP lowering), which was not significantly different (P=0.932).

24-Hour ABP

Twenty-four-hour ambulatory systolic BP was also significantly reduced in both groups, without a significant difference between them (CH: −14±20 mm Hg versus ISH: −13±15 mm Hg, P=0.672) (Figure 2). There was also no difference in the change of 24-hour systolic ABP reduction after adjustment for baseline 24-hour systolic ABP (P=0.695) as well as after full adjustment (sex, age, 24-hour systolic and diastolic ABP) (P=0.940). Similar (nonsignificant) findings were found for daytime systolic ABP reduction (adjustment for baseline daytime systolic ABP: P=0.765, and full adjustment [sex, age, and daytime systolic and diastolic ABP]: P=0.940) and nighttime systolic ABP reduction (adjustment for baseline nighttime systolic ABP: P=0.649, and full adjustment [sex, age, and nighttime systolic and diastolic ABP]: P=0.786).
Antihypertensive Medication
There was no difference in number and type of antihypertensive medication between the two groups at baseline (Table 2).
Antihypertensive medication (net effect of change) was decreased/increased in 8/2 patients in the CH subgroup and in 2/2 patients in the ISH group, respectively, while antihypertensive medication remained unchanged in 28 of 42 patients during follow-up. Overall, there was no significant difference in (net effect of change) antihypertensive medication between the subgroups (P=0.499).
Renal Function
There was no change in eGFR between baseline and 6-month follow-up in the two groups. eGFR changed from 79.1±20 to 77.6±21 mL/min per 1.73 m² (P=0.420) in patients with CH and from 67.8±19 to 65.2±16 mL/min per 1.73 m² (P=0.234) in patients with ISH. Accordingly, no significant mean change in eGFR from baseline was documented between the groups (CH: −1.5±10 versus ISH: −2.6±6 mL/min per 1.73 m² [P=0.906]).
Discussion
The main finding of our current analysis is that percutaneous creation of a central iliac arteriovenous anastomosis reduced office and ABP to a similar extent in patients with CH and ISH. The magnitude of the BP-lowering effects in patients with CH is similar to results achieved with other interventional techniques such as RDN. However, it was observed that in TRH patients with ISH, BP reduction of both office BP and 24-hour ABP after RDN was clearly reduced in contrast to our observation following creation of an arteriovenous anastomosis. 11,12 This discrepancy may be due to the fact that the underlying treatment mechanism targets different pathophysiologic concepts. In fact, recent expert consensus statements on RDN noted that the failure of RDN to lower BP in some individuals could be the consequence of arterial stiffness with subsequent inability to dilate and decrease vascular resistance, rather than due to technical failure of the procedure itself. 16,17 From a biophysical standpoint, creating a fixed-caliber central iliac arteriovenous anastomosis adds a low-resistance, high-compliance venous segment to the central arterial tree, resulting in a reduction of systemic vascular resistance. 18 Activation of the Frank-Starling mechanism due to increased venous return increases cardiac output, but not commensurate with the reduction of systemic vascular resistance. Most important, the addition of a highly compliant venous parallel compartment, compared with the chronically hypertrophied and maximally filled arterial tree, reduces the effective arterial blood volume. This small reduction of effective arterial blood volume restores arterial compliance to some extent by modulating the stress-strain curve of the aorta, which shifts to the left with aging and in ISH. 19 Improvement of structural alterations may change the stress-strain relationship back towards the right, resulting in increased arterial compliance for any given BP, thereby restoring the Windkessel effect.
It is worth noting that reduction in effective arterial blood volume is achieved without depleting the intracellular, interstitial, and venous capacitance spaces, and hence without activation of the neurohormonal system. As early as 1937, Hallock and Benson 20 analyzed the relationship between vascular stiffness, aging, and volume expansion and were able to demonstrate that with aging and stiffening of the arteries, a small increase in arterial blood volume is associated with an exaggerated increase in BP. In contrast, diuretics reduce intracellular, interstitial, and venous capacitance volumes before reducing effective arterial blood volume and this is accompanied by activation of the sympathetic nervous system and the renin-angiotensin system. 21,22 In a crossover study comprising patients with TRH, low-versus high-salt diet resulted in a marked decrease in both office and 24-hour ABP, as well as a tendency toward decreased vascular stiffness. Notably, the magnitude of BP reduction induced by sodium restriction is substantially greater in patients with TRH than in normotensive or (stage 1 or 2) hypertensive patients. 23 These findings support the hypothesis that in patients with TRH, increased sodium retention (and hence intravascular volume expansion) is a major contributor to resistance to antihypertensive therapy, particularly when associated with increased arterial stiffness.
Additional analyses strengthened the concept of comparable BP reductions in patients with and without stiffened arteries following ROX coupler implementation. Pulse pressure (PP) is a valid and widely applicable proxy for arterial stiffness. 24 An office PP >60 mm Hg in the elderly is an acknowledged marker of target organ damage that influences prognosis and is used for stratification of total cardiovascular risk. 25 Dichotomization (and full adjustment) of our cohort for PP below versus above this threshold revealed a similar BP reduction in both subgroups (data not shown). Moreover, even after stratifying (and full adjustment) the cohort according to presence of marked ISH (defined as 24-hour ambulatory PP ≥63 mm Hg), 26,27 comparable office BP and 24-hour ABP reduction was evident (data not shown). Notably, only one patient had neither an office PP ≥60 mm Hg nor 24-hour ambulatory PP ≥63 mm Hg, indicating that all patients were at high cardiovascular risk.
We observed that the responder rate to coupler therapy did not significantly differ between patients with CH and ISH. Our findings are also not influenced by changes in antihypertensive medication. Notably, the responder rates were also similar whether patients were stratified according to office PP ≥60 mm Hg or 24-hour ambulatory PP ≥63 mm Hg (data not shown).
From a clinical perspective, ISH is difficult to treat with no formal evidence-based guidance, but it is nonetheless responsible for a substantially increased risk of cardiovascular morbidity and mortality. [28][29][30] The effectiveness of antihypertensive medication may also be limited by vascular aging and arterial stiffness, both known to contribute to treatment resistance. 31 Indeed, studies have consistently shown lower rates of SBP than DBP control in patients with ISH. 32,33 A central iliac arteriovenous anastomosis may therefore offer a new therapeutic option to treat ISH and may result in an improvement in renal and cardiovascular outcomes. [34][35][36]
Study Limitations
Several limitations should be discussed. Our findings are based on post hoc analyses with a small sample size, and thus further corroboration by additional studies is required. The ROX Control Hypertension Study was not sham-controlled, but immediate BP reduction after arteriovenous coupler implantation and the resulting palpable thrill in the ipsilateral groin may limit or even jeopardize any attempt to perform a sham-controlled randomized controlled trial. Direct parameters of arterial stiffness (eg, pulse wave velocity) were not measured, but data from a single patient undergoing central arteriovenous anastomosis formation revealed a large reduction in pulse wave velocity (before: 15.2 versus 4 months: 13.7 m/s), which appears (partly) independent of associated BP reduction. 37 Data on cardiovascular outcome are still lacking, but it is well known that the relative risk of cardiovascular mortality is estimated at 2:1 (2% reduction of mortality for each 1-mm Hg BP reduction). Further investigations are necessary, and hence the Global Registry study (www.clinicaltrials.gov: NT1885390) was initiated to further evaluate the ROX coupler. In addition, the consequences of the small shunt were not exhaustively assessed. In one case report, it was shown that ROX coupler implementation resulted in an immediate as well as long-term (6-month follow-up) reduction of systemic vascular resistance and increment of cardiac output, indicating coupler-induced venous filling and hemodynamic unloading of the left ventricle. 38 Moreover, extensive experience in patients with end-stage renal disease and similarly sized shunts for dialysis access suggests that the risk of cardiovascular decompensation is low. In patients with end-stage renal disease, high-output cardiac failure may occur, but volumes exceeding 30% of cardiac output 39 and flow rates of at least 2.0 L/min are necessary. 40 In contrast, the fixed-caliber arteriovenous coupler permits flow of only 0.8 to 1.2 L/min. 15 Moreover, the arteriovenous anastomosis can be closed (with a covered stent), if necessary, therefore eliminating its clinical risk. Dipping status was not improved after ROX coupler implementation, which might be related to the poor reproducibility of the classification of patients into dippers and nondippers over time. 41,42

Conclusions

Our analyses suggest that percutaneous creation of a fixed-caliber arteriovenous anastomosis using the ROX coupler, and therefore modifying the mechanical properties of the arterial vascular tree, reduces office SBP and ambulatory SBP to the same extent in patients with CH and ISH. These data contrast with the results of diminished BP reduction in patients with ISH after RDN. Given the primacy of effective arterial volume as a determinant of BP, this is perhaps not surprising, and the >90% response rate to coupler therapy observed in the ROX Control Hypertension Study attests to this. Ongoing studies are examining hemodynamic effects of the coupler in greater detail, and future studies should address whether patients with TRH due to ISH would benefit from treatment targeting mechanical properties of the circulation (arteriovenous anastomosis formation) as a first choice rather than RDN.
Random weighted averages, partition structures and generalized arcsine laws
This article offers a simplified approach to the distribution theory of randomly weighted averages or $P$-means $M_P(X):= \sum_{j} X_j P_j$, for a sequence of i.i.d.random variables $X, X_1, X_2, \ldots$, and independent random weights $P:= (P_j)$ with $P_j \ge 0$ and $\sum_{j} P_j = 1$. The collection of distributions of $M_P(X)$, indexed by distributions of $X$, is shown to encode Kingman's partition structure derived from $P$. For instance, if $X_p$ has Bernoulli$(p)$ distribution on $\{0,1\}$, the $n$th moment of $M_P(X_p)$ is a polynomial function of $p$ which equals the probability generating function of the number $K_n$ of distinct values in a sample of size $n$ from $P$: $E (M_P(X_p))^n = E p^{K_n}$. This elementary identity illustrates a general moment formula for $P$-means in terms of the partition structure associated with random samples from $P$, first developed by Diaconis and Kemperman (1996) and Kerov (1998) in terms of random permutations. As shown by Tsilevich (1997) if the partition probabilities factorize in a way characteristic of the generalized Ewens sampling formula with two parameters $(\alpha,\theta)$, found by Pitman (1992), then the moment formula yields the Cauchy-Stieltjes transform of an $(\alpha,\theta)$ mean. The analysis of these random means includes the characterization of $(0,\theta)$-means, known as Dirichlet means, due to Von Neumann (1941), Watson (1956) and Cifarelli and Regazzini (1990) and generalizations of L\'evy's arcsine law for the time spent positive by a Brownian motion, due to Darling (1949) Lamperti (1958) and Barlow, Pitman and Yor (1989).
Introduction
Consider the randomly weighted average or P-mean $\overline{X} := \sum_j X_j P_j$ of a sequence of random variables $(X_1, X_2, \ldots)$, where $P := (P_1, P_2, \ldots)$ is a random discrete distribution, meaning that the $P_j$ are random variables with $P_j \ge 0$ and $\sum_j P_j = 1$ almost surely, where $(X_1, X_2, \ldots)$ and $P$ are independent, and it is assumed that the series converges to a well defined limit almost surely. This article is concerned with characterizations of the exact distribution of $\overline{X}$ under various assumptions on the random discrete distribution P and the sequence $(X_1, X_2, \ldots)$. Interest is focused on the case when the $X_i$ are i.i.d. copies of some basic random variable X. Then $\overline{X}$ is a well defined random variable, called the P-mean of X, whatever the distribution of X with a finite mean, and whatever the random discrete distribution P independent of the sequence of copies of X. These characterizations of the distribution of P-means are mostly known in some form. But the literature of random P-means is scattered, and the conceptual foundations of the theory have not been as well laid as they might have been. There has been recent interest in refined development of the distribution theory of P-means in various settings, especially for the model of distributions of P indexed by two parameters (α, θ), whose size-biased presentation is known as GEM(α, θ) after Griffiths, Engen and McCloskey, and whose associated partition probabilities were derived by Pitman (1995). See e.g. Regazzini et al. (2002), Regazzini et al. (2003), Lijoi and Regazzini (2004), James et al. (2008a), James (2010a,b). See also Ruggiero and Walker (2009), Petrov (2009), Canale et al. (2017), Lau (2013) for other recent applications of the two-parameter model and closely related random discrete distributions, in which settings the theory of (α, θ)-means may be of further interest. So it may be timely to review the foundations of the theory of random P-means, with special attention to P governed by the (α, θ) model, and references to the historical literature and contemporary developments. The article is intended to be accessible even to readers unfamiliar with the theory of partition structures, and to provide motivation for further study of that theory and its applications to P-means. The article is organized as follows. Section 2 offers an overview of the distribution theory of P-means, with pointers to the literature and following sections for details. Section 4 develops the foundations of a general distribution theory for P-means, essentially from scratch. Section 5 develops this theory further for some of the standard models of random discrete distributions. The aim is to explain, as simply as possible, some of the most remarkable known results involving P-means, and to clarify relations between these results and the theory of partition structures, introduced by Kingman (1975), then further developed in Pitman (1995), and surveyed in Pitman (2006, Chapters 2, 3, 4). The general treatment of P-means in Section 4 makes many connections to those sources, and motivates the study of partition structures as a tool for the analysis of P-means.
Scope
This article focuses attention on two particular instances of the general random average construction $\overline{X} := \sum_j X_j P_j$.
(i) The $X_j$ are assumed to be independent and identically distributed (i.i.d.) copies of some basic random variable X, with the $X_j$ independent of P. Then $\overline{X}$ is called the P-mean of X, typically denoted $M_P(X)$ or $\overline{X}_P$.
(ii) The case $\overline{X} := X_1 P_1 + X_2 \bar{P}_1$, with only two non-zero weights $P_1$ and $\bar{P}_1 := 1 - P_1$. It is assumed that $P_1$ is independent of $(X_1, X_2)$. But $X_1$ and $X_2$ might be independent and not identically distributed, or they might have some more general joint distribution.
Of course, more general random weighting schemes are possible, and have been studied to some extent. For instance, Durrett and Liggett (1983) treat the distribution of randomly weighted sums $\sum_i W_i X_i$ for random non-negative weights $W_i$ not subject to any constraint on their sum, and $(X_i)$ a sequence of i.i.d. random variables independent of the weight sequence. But the theory of the two basic kinds of random averages indicated above is already very rich. This theory was developed in the first instance for real valued random variables $X_j$. But the theory extends easily to vector-valued random elements $X_i$, including random measures, as discussed in the next subsection.
Here, for a given distribution of P, the collection of distributions of $M_P(X)$, indexed by distributions of X, is regarded as an encoding of Kingman's partition structure derived from P (Corollary 9). That is, the collection of distributions of $\Pi_n$, the random partition of n indices generated by a random sample of size n from P. For instance, if $X_p$ has Bernoulli(p) distribution on {0, 1}, the nth moment of the P-mean of $X_p$ is a polynomial in p of degree n, which is also the probability generating function of the number $K_n$ of distinct values in a sample of size n from P: $E(M_P(X_p))^n = E\,p^{K_n}$ (Proposition 10). This elementary identity illustrates a general moment formula for P-means, involving the exchangeable partition probability function (EPPF), which describes the distributions of $\Pi_n$ (Corollary 22). An equivalent moment formula, in terms of a random permutation whose cycles are the blocks of $\Pi_n$, was found by Diaconis and Kemperman (1996) for the (0, θ) model, and extended to general partition structures by Kerov (1998). As shown in Section 5.7, following Tsilevich (1997), this moment formula leads quickly to characterizations of the distribution of P-means when the EPPF factorizes in a way characteristic of the two-parameter family of GEM(α, θ) models defined by a stick-breaking scheme generating P from suitable independent beta factors. Then the moment formula yields the Cauchy-Stieltjes transform of an (α, θ) mean $\overline{X}_{\alpha,\theta}$ derived from an i.i.d. sequence of copies of X. The analysis of these random (α, θ) means $\overline{X}_{\alpha,\theta}$ includes the characterization of (0, θ)-means, commonly known as Dirichlet means, due to Von Neumann (1941), Watson (1956), and Cifarelli and Regazzini (1990), as well as generalizations of Lévy's arcsine law for the time spent positive by a Brownian motion, due to Lamperti (1958), and Barlow, Pitman, and Yor (1989).
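The displayed moment identity is easy to check numerically for any concrete choice of P. The sketch below (illustrative only) takes P to be a symmetric Dirichlet distribution on m atoms, i.e. Fisher's finite species-sampling model recalled in Section 5, and compares Monte Carlo estimates of the two sides.

```python
# Monte Carlo check of the moment identity  E[(M_P(X_p))^n] = E[p^{K_n}]:
# X_p is Bernoulli(p), K_n is the number of distinct values in a sample of
# size n from P.  As a concrete random discrete distribution P we use a
# symmetric Dirichlet on m atoms (Fisher's finite model); this choice is
# illustrative -- the identity holds for any P.
import numpy as np

rng = np.random.default_rng(0)
m, theta = 20, 2.0          # number of atoms and total Dirichlet mass (illustrative)
p, n = 0.3, 4               # Bernoulli parameter and moment order
reps = 20_000

lhs = np.empty(reps)        # samples of (M_P(X_p))^n
rhs = np.empty(reps)        # samples of p^{K_n}
for r in range(reps):
    P = rng.dirichlet(np.full(m, theta / m))    # one realization of P
    X = (rng.random(m) < p).astype(float)       # i.i.d. Bernoulli(p) marks
    lhs[r] = float(X @ P) ** n                  # P-mean of X_p, raised to n
    sample = rng.choice(m, size=n, p=P)         # sample of size n from P
    rhs[r] = p ** len(np.unique(sample))        # p^{K_n}

print("E[(M_P(X_p))^n] ~", round(lhs.mean(), 4))
print("E[p^{K_n}]      ~", round(rhs.mean(), 4))
```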
Random measures
To illustrate the idea of extending P-means from random variables to random measures, suppose that the $X_j$ are random point masses $X_j := \delta_{Y_j}(\cdot)$ for a sequence of i.i.d. copies $Y_j$ of a random element Y with values in an abstract measurable space (S, S), with $\cdot$ ranging over S. Then
$$P(\cdot) := \sum_j P_j\,\delta_{Y_j}(\cdot) \tag{2}$$
is a measure-valued random P-mean. This is a discrete random probability measure on (S, S) which places an atom of mass $P_j$ at location $Y_j$ for each j. Informally, $P(\cdot)$ is a reincarnation of $P = (P_j)$ as a random discrete distribution on (S, S) instead of the positive integers, obtained by randomly sprinkling the atoms $P_j$ over S according to the distribution of Y. In particular, if the distribution of Y is continuous, on the event of probability one that there are no ties between any two Y-values, the list of magnitudes of atoms of $P(\cdot)$ in non-increasing order is identical to the corresponding reordering $P^\downarrow$ of the sequence $P := (P_j, j = 1, 2, \ldots)$. The original random discrete distribution P on positive integers, and the derived random discrete distribution $P(\cdot)$ on (S, S), are then so similar that using the same symbol P for both of them seems justified. The integral of a suitable real-valued S-measurable function g with respect to $P(\cdot)$ is just the P-mean of the real-valued random variable g(Y):
$$\int_S g(s)\,P(ds) = M_P(g(Y)) := \sum_j g(Y_j)\,P_j. \tag{3}$$
As a consequence, for $g(s) = 1(s \in B)$ in (3), so that $g(Y)$ has the Bernoulli(p) distribution on {0, 1} for $p = P(Y \in B)$, the simplest Dirichlet mean (3) for an indicator variable has a beta distribution. See Section 5.3 for further discussion. Replacing the gamma process by a more general subordinator makes $P(\cdot)$ a homogeneous normalized random measure with independent increments (HRMI) as studied by Regazzini et al. (2003) and James et al. (2009), from the perspective of Bayesian inference for $P(\cdot)$ given a random sample of size n from $P(\cdot)$. Basic properties of P-means derived from normalized subordinators are developed here in Section 5.2.
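As a small illustration of the construction (2)-(3), the sketch below (with an arbitrary choice of weights and of the distribution of Y) builds a truncated version of the random discrete measure and checks that integrating a test function g against it is just the weighted sum $\sum_j g(Y_j) P_j$.

```python
# Construct the random discrete measure P(.) = sum_j P_j * delta_{Y_j}(.)
# from a weight sequence P and i.i.d. locations Y_j, and integrate a test
# function g against it.  The weights come from a Dirichlet vector and the
# locations are standard normal; both choices are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
m = 100
P = rng.dirichlet(np.ones(m))        # a random discrete distribution (P_j)
Y = rng.normal(size=m)               # i.i.d. locations Y_j in S = R

def integrate(g):
    """Integral of g with respect to P(.), i.e. sum_j g(Y_j) * P_j."""
    return float(np.sum(g(Y) * P))

g = lambda s: s ** 2
print("integral of g dP :", integrate(g))
print("P-mean of g(Y)   :", float(g(Y) @ P))   # same number by construction
```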
Splitting off the first term
It is a key observation that the P-mean of an i.i.d. sequence can sometimes be expressed as a $(P_1, \bar{P}_1)$-mean by splitting off the first term. That is the decomposition
$$\overline{X}_P = X_1 P_1 + \overline{X}_R\,\bar{P}_1$$
with $R_j := P_{j+1}/\bar{P}_1$ the residual probability sequence, defined on the event $\bar{P}_1 > 0$ by first conditioning P on {2, 3, . . .} and then shifting back to {1, 2, . . .}. In general, the residual sequence R may be dependent on $P_1$. Then $\overline{X}_R$ and $P_1$ will typically not be independent, and analysis of $\overline{X}_P$ will be difficult. However, if $P_1$ and $(R_1, R_2, \ldots)$ are independent, then $P_1$, $X_1$ and $\overline{X}_R$ are mutually independent. So $\overline{X}_P = X_1 P_1 + \overline{X}_R\,\bar{P}_1$.
The right side is the $(P_1, \bar{P}_1)$-mean of $X_1$ and $\overline{X}_R$, with $P_1$ independent of $X_1$ and $\overline{X}_R$, which are independent but typically not identically distributed. This basic decomposition of a P-mean by splitting off the first term leads naturally to discussion of P-means for random discrete distributions defined by a recursive splitting of this kind, called residual allocation models or stick-breaking schemes, discussed further in Section 5.1.
Lévy's arcsine laws
An inspirational example of splitting off the first term is provided by the work of Lévy (1939) on the distributions of the time $A_t$ spent positive up to time t, and the time $G_t$ of the last zero before time t, for a standard Brownian motion B. See e.g. Kallenberg (2002, Theorem 13.16) for background. To place this example in the framework of P-means:
• Let $P_1 := 1 - G_1$ be the length of the meander interval $(G_1, 1)$.
• Let $(P_j, X_j)$ for $j \ge 2$ be an exhaustive listing of the lengths $P_j$ of excursion intervals of B away from 0 on $(0, G_1)$, with $X_j$ the indicator of the event that $B_t > 0$ for t in the excursion interval of length $P_j$.
If the lengths $P_j$ for $j \ge 2$ are put in a suitable order, for instance by ranking, then $(X_j, j \ge 1)$ will be a sequence of i.i.d. copies of a Bernoulli(1/2) variable $X_{1/2}$, with $(X_j, j \ge 1)$ independent of the excursion lengths $(P_j, j \ge 1)$. Then by construction, $A_1 = \sum_j X_j P_j$ is the P-mean of a Bernoulli(1/2) indicator $X_{1/2}$, representing the sign of a generic excursion. This is so for any listing P of excursion lengths of B on [0, 1] that is independent of their signs. But if $P_1 := 1 - G_1$ puts the meander length first as above, then the residual sequence $(R_1, R_2, \ldots)$ is identified with the sequence of relative lengths of excursions away from zero of B on $[0, G_1]$. But that is also the list of excursion lengths of the rescaled process $B^{br} := (B(uG_1)/\sqrt{G_1},\, 0 \le u \le 1)$, with corresponding positivity indicators $(X_2, X_3, \ldots)$. Lévy showed that $B^{br}$ is a standard Brownian bridge, equivalent in distribution to $(B_u, 0 \le u \le 1 \mid B_1 = 0)$, and that a last exit decomposition of the path of B at time $G_1$ makes the length $P_1$ of the meander interval independent of $B^{br}$, hence also independent of the residual sequence $(R_1, R_2, \ldots)$ and the positivity indicators $(X_2, X_3, \ldots)$, which are encoded in the path of $B^{br}$. Let $A^{br}_1$ denote the total time spent positive by this Brownian bridge $B^{br}$. So $A^{br}_1 \stackrel{d}{=} (A_1 \mid B_1 = 0)$, while also $A^{br}_1 = \sum_{j=1}^{\infty} R_j X_{j+1}$ by the previous construction. Then the last exit decomposition provides a splitting of $A_1 = M_P(X)$ of the general form (12). In this instance, $A_1 = X_1 P_1 + A^{br}_1 \bar{P}_1$, where on the right side
• $X_1$, $P_1$ and $A^{br}_1$ are independent, with
• $P_1$ the meander length,
• $A^{br}_1$ the total time spent positive by $B^{br}$, and
• $\bar{P}_1 := 1 - P_1 = G_1$ the last exit time.
Lévy showed the meander interval has length $P_1 \stackrel{d}{=} \beta_{\frac{1}{2},\frac{1}{2}}$, known as the arcsine law because of the form of its density, while the bridge occupation time has the uniform [0, 1] distribution $A^{br}_1 \stackrel{d}{=} \beta_{1,1}$. Lévy then deduced from (13) that the unconditioned occupation time $A_1$ has the same arcsine distribution (15) as $P_1$ and $G_1 = \bar{P}_1$.
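Lévy's deduction can be checked directly by simulating the three independent ingredients on the right side of the splitting (13) and testing the resulting occupation time against the arcsine law; the sketch below is illustrative.

```python
# Check Levy's splitting A_1 = X_1*P_1 + A1_br*(1 - P_1) by simulation:
# with P_1 ~ Beta(1/2,1/2) (meander length), A1_br ~ Uniform[0,1] (bridge
# occupation time) and X_1 ~ Bernoulli(1/2), all independent, the resulting
# A_1 should again follow the Beta(1/2,1/2) arcsine law.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
reps = 200_000
P1 = rng.beta(0.5, 0.5, size=reps)      # meander length
Abr = rng.random(reps)                  # bridge occupation time, uniform
X1 = rng.integers(0, 2, size=reps)      # sign of the meander excursion
A1 = X1 * P1 + Abr * (1.0 - P1)

# Compare with the arcsine distribution via a Kolmogorov-Smirnov statistic.
ks = stats.kstest(A1, stats.beta(0.5, 0.5).cdf)
print("KS statistic against Beta(1/2,1/2):", round(ks.statistic, 4))
```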
Generalized arcsine laws
Lévy's arcsine laws (15) for the Brownian occupation time $A_1$, the time $G_1$ of the last zero in [0, 1], and the meander length $P_1 := 1 - G_1$, and his associated uniform law for the Brownian bridge occupation time $A^{br}_1$, have been generalized in several different ways. One of the most far-reaching of these generalizations gives corresponding results when the basic Brownian motion B is replaced by a process with exchangeable increments. Discrete time versions of these results were first developed by Andersen (1953). Feller (1971, §XII.8 Theorem 2) gave a refined treatment, with the following formulation for a random walk $S_n := X_1 + \cdots + X_n$ with exchangeable increments $(X_i)$, started at $S_0 := 0$: the random number of times $\sum_{i=1}^n 1(S_i > 0)$ that the walk is strictly positive up to time n has the same distribution as the random index $\min\{0 \le k \le n : S_k = M_n\}$ at which the walk first attains its maximum value $M_n := \max_{0 \le k \le n} S_k$. In the Brownian scaling limit, Sparre Andersen's identity implies the equality in distribution $A_1 \stackrel{d}{=} G^{\max}_1$, the last time in [0, 1] that Brownian motion attains its maximum on [0, 1]. That the distribution of $G^{\max}_1$ is arcsine was shown also by Lévy, who then argued that $G^{\max}_1 \stackrel{d}{=} G_1$, the time of the last zero of B on [0, 1], by virtue of his famous identity in distribution of reflecting processes
$$M - B \stackrel{d}{=} |B|, \tag{16}$$
where $M_t := \max_{0 \le s \le t} B_s$ is the running maximum process derived from the path of B.
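Sparre Andersen's identity is exact for every finite n and is easy to verify by simulation for any walk with exchangeable increments; the sketch below uses i.i.d. Gaussian increments (an arbitrary choice) and compares the two empirical distributions.

```python
# Verify Sparre Andersen's identity for a random walk with i.i.d. (hence
# exchangeable) increments: the number of strictly positive partial sums
# S_1,...,S_n has the same distribution as the first index k at which
# S_k = max(S_0,...,S_n).  Gaussian increments are an arbitrary choice.
import numpy as np

rng = np.random.default_rng(3)
n, reps = 20, 50_000
counts_pos = np.empty(reps, dtype=int)
argmax_idx = np.empty(reps, dtype=int)
for r in range(reps):
    steps = rng.normal(size=n)
    S = np.concatenate(([0.0], np.cumsum(steps)))   # S_0, ..., S_n
    counts_pos[r] = int(np.sum(S[1:] > 0))
    argmax_idx[r] = int(np.argmax(S))               # first index of the max

# The two empirical distributions on {0,...,n} should agree up to noise.
for k in (0, 1, n // 2, n):
    print(k, np.mean(counts_pos == k).round(4), np.mean(argmax_idx == k).round(4))
```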
Many other generalizations of the arcsine law have been developed, typically starting from one of the many ways this distribution arises from Brownian motion, or from one of its many characterizations by identities in distribution or moment evaluations. See for instance Kallenberg (2002, Theorem 15.21) for the result that Lévy's arcsine law (15) extends to the occupation time $A_1$ of (0, ∞) up to time 1 for any symmetric Lévy process X with $P(X_t = 0) = 0$ instead of B, with $G_1$ replaced by $G^{\max}_1$, the last time in [0, 1] that X attains its maximum on [0, 1], and $P_1$ replaced by $1 - G^{\max}_1$. See also Takács (1996a,b, 1998, 1999), Petit (1992) and Mansuy and Yor (2008, Chapter 8) regarding the distribution of occupation times of Brownian motion with drift and other processes derived from Brownian motion. See Getoor and Sharpe (1994), Bertoin and Yor (1996), Bertoin and Doney (1997) for more general results on Lévy processes, and Knight (1996) and Fitzsimmons and Getoor (1995) for an extension of the uniform distribution of $A^{br}_1$ for Brownian motion to more general bridges with exchangeable increments, and Yano (2006) for an extension to conditioned diffusions. Watanabe (1995) gave generalized arcsine laws for occupation times of half lines of one-dimensional diffusion processes and random walks, which were further developed in Kasahara and Yano (2005) and Watanabe et al. (2005). Yet another generalization of the arcsine law was proposed by Lijoi and Nipoti (2012).
The focus here is on generalized arcsine laws involving the distributions of Pmeans for some random discrete distribution P . The framing of Lévy's description of the laws of the Brownian occupation times A 1 and A br 1 , as P -means of a Bernoulli( 1 2 ) variable, for distributions of P determined by the lengths of excursions of a Brownian motion or Brownian bridge, inspired the work of Barlow, Pitman, and Yor (1989) and . These articles showed how Lévy's analysis could be extended by consideration of the path of (B t , 0 ≤ t ≤ T ) for a random time T independent of B with the standard exponential distribution of γ(1). For then G T /T d = G 1 by Brownian scaling, while the last exit decomposition at time G T breaks the path of B on [0, T ] into two independent random fragments of random lengths G T and T − G T respectively. Thus This realizes the instance r = s = 1 2 of the beta-gamma algebra (6) in the path of Brownian motion stopped at the independent gamma(1) distributed random time T . A similar subordination construction was exploited earlier by Greenwood and Pitman (1980) in their study of fluctuation theory for Lévy processes by splitting at the time G max T of the last maximum before an independent exponential time T . See Bertoin (1996) and Kyprianou (2014) for more recent accounts of this theory. This involves the lengths of excursions of the Lévy process below its running maximum process M . Lévy recognized that for a Brownian motion B his famous identity in law of processes M − B d = |B|, as in (16), implied that the structure of excursions of B below M is identical to the structure of excursions of |B| away from 0. This leads from the decomposition of M − B at the time G max T of the last zero of M − B on [0, T ] to the corresponding decomposition for |B|, discussed earlier. The same method of subordination was exploited further in Pitman and Yor (1997a, Proposition 21), in a deeper study of random discrete distributions derived from stable subordinators.
The above analysis of the P-mean $M_P(X)$, for an indicator variable $X = X_{1/2}$, and P the list of lengths of excursions of a Brownian motion or Brownian bridge, was generalized by Barlow, Pitman, and Yor (1989) to allow any discrete distribution of X with a finite number of values. That corresponds to a linear combination of occupation times of various sectors in the plane by Walsh's Brownian motion on a finite number of rays, whose radial part is |B|, and whose angular part is made by assigning each excursion of |B| to the ith ray with some probability $p_i$, independently for different excursions. The analysis up to an independent exponential time T relies only on the scaling properties of |B|, the Poisson character of excursions of |B|, and beta-gamma algebra, all of which extend straightforwardly to the case when |B| is replaced by a Bessel process or Bessel bridge of dimension $2 - 2\alpha$, for $0 < \alpha < 1$. Then P becomes a list of excursion lengths of the Bessel process or bridge over [0, 1], while $G_T$ and $T - G_T$ become independent gamma(α) and gamma(1 − α) variables with sum T that is gamma(1). So the distribution of the final meander length in the stable (α) case is given by
$$P_1 \stackrel{d}{=} \beta_{1-\alpha,\alpha} \tag{17}$$
by another application of the beta-gamma algebra (6). The excursion lengths P in this case are a list of lengths of intervals of the relative complement in [0, 1] of the range of a stable subordinator of index α, with conditioning of this range to contain 1 in the bridge case. In particular, for 0 < p < 1, the P-mean of a Bernoulli(p) indicator $X_p$ represents the occupation time of the positive half line for a skew Brownian motion or Bessel process, each excursion of which is positive with probability p and negative with probability 1 − p. The distribution of such a P-mean, say $M_{\alpha,0}(X_p)$, associated with a stable subordinator of index α ∈ (0, 1) and a selection probability parameter p ∈ (0, 1), was found independently by Darling (1949) and Lamperti (1958). Darling indicated a representation of this law, and also presented a formula for the cumulative distribution function of $M_{\alpha,0}(X_p)$, corresponding to the probability density (19). Zolotarev (1957) derived the corresponding formula for the density of the ratio of two independent stable(α) variables $T_\alpha(p)/(T_\alpha(1) - T_\alpha(p))$ by Mellin transform inversion. This makes a surprising connection between the stable(α) subordinator and the Cauchy distribution, discussed further in Section 3. Lamperti (1958) showed that the density of $M_{\alpha,0}(X_p)$ displayed in (19) is the density of the limiting distribution of occupation times of a recurrent Markov chain, under assumptions implying that the return time of some state is in the domain of attraction of the stable law of index α, and between visits to this state the chain enters some given subset of its state space with probability p. Lamperti's approach was to first derive the Stieltjes transform (20), where q := 1 − p. The associated beta(1 − α, α) distribution of $P_1$ appearing in (17) is also known as a generalized arcsine law. In Lamperti's setting of a chain returning to a recurrent state, the results of Dynkin (1961), presented also in Feller (1971, §XIV.3), imply that Lamperti's limit law for occupation times holds jointly with convergence in distribution of the fraction of time since last visit to the recurrent state to the meander length $P_1$ as in (17), along with the generalization to this case of the distributional identity (13), which was exploited by Barlow, Pitman, and Yor (1989).
Due to the results of Sparre Andersen mentioned earlier, this beta(1−α, α) distribution also arises from random walks and Lévy processes as both a limit distribution of scaled occupation times, and as the exact distribution of the occupation time of the positive half line for a limiting stable Lévy process X t with P(X t > 0) = 1 − α for all t. But in the context of the (α, 0) model for P , this beta(1 − α, α) distribution appears either as the distribution of the length of the meander interval P 1 , as in (17), or as the distribution of a size-biased pick P * 1 from P . See also and (Pitman and Yor, 1997b, §4) for closely related results, and James (2010b) for an authoritative recent account of further developments of Lamperti's work.
Fisher's model for species sampling
A parallel but independent development of closely related ideas, from the 1940's to the 1990's, was initiated by Fisher (1943). See Pitman (1996b) for a review. Fisher introduced a theoretical model for species sampling, which amounts to random sampling from the random discrete distribution $(P_1, \ldots, P_m)$ with the symmetric Dirichlet distribution with m parameters equal to θ/m on the m-simplex of $(P_1, \ldots, P_m)$ with $P_i \ge 0$ and $\sum_{i=1}^m P_i = 1$. See Section 5.3 for a quick review of basic properties of Dirichlet distributions. Fisher showed that many features of sampling from this symmetric Dirichlet model for P have simple limit distributions as m → ∞ with θ fixed. Ignoring the order of the $P_i$, the limit model may be constructed directly by supposing that the $P_i$ are the normalized jumps of a standard gamma process on the interval [0, θ]. That model for a random discrete distribution, called here the (0, θ) model, was considered by McCloskey (1965) as an instance of the more general model, discussed in Section 5.2, in which the $P_i$ are the normalized jumps of a subordinator on a fixed time interval [0, θ], which for a stable (α) subordinator corresponds to the (α, 0) model involved in the Lévy-Lamperti description of occupation times. McCloskey showed that if the atoms of P in the (0, θ) model are presented in the size-biased order $P^*$ of their appearance in a process of random sampling, then $P^*$ admits a simple stick-breaking representation by a recursive splitting like (9) with i.i.d. factors with the beta(1, θ) distribution. Engen (1975) interpreted this GEM(0, θ) model as the limit in distribution of size-biased frequencies in Fisher's limit model. This presentation of the (0, θ) model was developed in various ways by Patil and Taillie (1977), Sethuraman (1994), and Pitman (1996a). In this model for $P = P^*$ in size-biased random order, the basic splitting (12) holds with a residual sequence R that is identical in law to the original sequence P, hence also $\overline{X}_R \stackrel{d}{=} \overline{X}_P$. Then (12) becomes a characterization of the law of $\overline{X}_P$ by a stochastic equation which typically has a unique solution, as discussed in Feigin and Tweedie (1989), Diaconis and Freedman (1999), Hjort and Ongaro (2005). See also Bacallado et al. (2017) for a recent review of species sampling models. Ferguson (1973) and Kingman (1975) further developed McCloskey's model of P derived from the normalized jumps of a subordinator, working instead with the ranked rearrangement $P^\downarrow$ of P with $P^\downarrow_1 \ge P^\downarrow_2 \ge \cdots \ge 0$. However, it is easily seen that the distribution of the P-mean of a sequence of i.i.d. copies of X is unaffected by any reordering of terms of P, provided the reordering is made independently of the copies of X. So for any random discrete distribution P, and any distribution of X, there is the equality in distribution $M_P(X) \stackrel{d}{=} M_{P^*}(X)$, where $P^*$ can be any random rearrangement of terms of P. This invariance in distribution of P-means under re-ordering of the atoms of P is fundamental to understanding the general theory of P-means. In the analysis of $M_P(X)$ by splitting off the first term, the distribution of $M_P(X)$ is the same, no matter how the terms of P may be ordered. But the ease of analysis depends on the joint distribution of $P_1$ and $(P_2, P_3, \ldots)$, which in turn depends critically on the ordering of terms of P.
Detailed study of problems of this kind by Pitman (1996a) explained why the size-biased random permutation of terms P * , first introduced by McCloskey in the setting of species sampling, is typically more tractable than the ranked ordering used by Ferguson and Kingman. The notation P * will be used consistently below to indicate a size-biased ordering of terms in a random discrete distribution.
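Before turning to the two-parameter family, the (0, θ) model just described is easy to experiment with numerically. The sketch below truncates the stick-breaking representation with beta(1, θ) factors and checks the classical fact, recalled around (3) and in Section 5.3, that the (0, θ)-mean of a Bernoulli(p) variable has a beta distribution; the specific beta(θp, θq) parametrization used as the reference law is a standard Dirichlet-process fact assumed here rather than printed in this excerpt.

```python
# The (0, theta) model: weights from the GEM(0, theta) stick-breaking scheme
# with i.i.d. Beta(1, theta) factors (truncated at a finite depth).  For
# X ~ Bernoulli(p), the resulting P-mean is compared against the
# Beta(theta*p, theta*(1-p)) law (assumed reference distribution).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
theta, p = 2.0, 0.3
depth, reps = 300, 20_000

samples = np.empty(reps)
for r in range(reps):
    W = rng.beta(1.0, theta, size=depth)            # stick-breaking factors
    P = W * np.concatenate(([1.0], np.cumprod(1.0 - W)[:-1]))
    X = (rng.random(depth) < p).astype(float)       # i.i.d. Bernoulli(p)
    samples[r] = float(X @ P)

ks = stats.kstest(samples, stats.beta(theta * p, theta * (1 - p)).cdf)
print("KS statistic against Beta(theta*p, theta*q):", round(ks.statistic, 4))
```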
The two-parameter family
The articles of Perman et al. (1992) and Pitman and Yor (1997a) introduced a family of random discrete distributions indexed by two parameters (α, θ), which includes the various examples recalled above in a unified way. Various terminology is used for different encodings of this family of random discrete distributions and associated random partitions.
• The distribution of the size-biased random permutation P * is known as GEM(α, θ), after Griffiths, Engen and McCloskey, who were among the first to study the simple stick-breaking description of this model recalled later in (150).
• The distribution of the corresponding ranked arrangement P ↓ is known as the two-parameter Poisson-Dirichlet distribution (Pitman and Yor, 1997a), (Feng, 2010).
• The corresponding random discrete probability measure on an abstract space (S, S), constructed as in (2) by assigning the GEM or Poisson-Dirichlet atoms i.i.d. locations in S, has become known as a Pitman-Yor process. (Ishwaran and James, 2001).
• The corresponding partition structure is governed by the sampling formula of Pitman (1995) which is a two parameter generalization of the Ewens sampling formula, recently reviewed by Crane (2016).
The (α, θ) model refers here to this model of a random discrete distribution P, whose size-biased presentation is GEM(α, θ). For such a P the associated P-mean will be called simply an (α, θ)-mean, with similar terminology for other attributes of the (α, θ) model, such as its partition structure. Following further work by numerous authors including Cifarelli and Regazzini (1990), Diaconis and Kemperman (1996) and Kerov (1998), a definitive formula characterizing the distribution of an (α, θ) mean $\overline{X}_{\alpha,\theta}$, for an arbitrary distribution of a bounded or non-negative random variable X, was found by Tsilevich (1997): for all (α, θ) for which the model is well defined, except if α = 0 or θ = 0, the distribution of $\overline{X}_{\alpha,\theta}$ is uniquely determined by the generalized Cauchy-Stieltjes transform (22). Companion formulas for the (α, 0) case with θ = 0, 0 < α < 1, trace back to Lamperti for $X = X_p$ a Bernoulli(p) variable, as in (20), while the (0, θ) case with α = 0, θ > 0 is the case of Dirichlet means due to Von Neumann (1941) and Watson (1956) in the classical setting of mathematical statistics, involving ratios of quadratic forms of normal variables, and developed by Cifarelli and Regazzini (1990) and others in Ferguson's Bayesian non-parametric setting. These formulas are all obtained as limit cases of the generic two-parameter formula (22), naturally involving exponentials and logarithms due to the basic approximations of these functions by large or small powers as the case may be, e.g. $e^x = \lim_{n\to\infty}(1 + x/n)^n$ and $\log x = \lim_{\alpha \downarrow 0}(x^\alpha - 1)/\alpha$ for x > 0. For θ = α ∈ (0, 1) the transform (22) was obtained earlier by Barlow et al. (1989) in their description of the distribution of occupation times derived from a Brownian or Bessel bridge, by a straightforward argument from the perspective of Markovian excursion theory. But Tsilevich's extension of this formula to general (α, θ) is not obvious from that perspective. Rather, the simplest approach to Tsilevich's formula involves analysis of the partition structure associated with the (α, θ) model, as discussed in Section 5.7. Further development of the theory of (α, θ) means was made by Vershik, Yor, and Tsilevich (2001). See also the articles by James, Lijoi and coauthors, listed in the introduction, for the most refined analysis of (α, θ)-means by inversion of the Cauchy-Stieltjes transform.
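For completeness, the stick-breaking recipe behind the GEM(α, θ) model can be sketched in a few lines. The beta(1 − α, θ + iα) factors used below are the standard choice recalled later in (150) of the source but not printed in this excerpt, so they should be read as an assumption of the sketch.

```python
# Sample the size-biased weights P* of the two-parameter (alpha, theta) model
# via GEM(alpha, theta) stick-breaking, truncated at a finite depth.  The
# independent factors W_i ~ Beta(1 - alpha, theta + i*alpha) are the usual
# choice for this model (assumed here).
import numpy as np

def gem_weights(alpha, theta, depth, rng):
    """Truncated GEM(alpha, theta) stick-breaking weights (size-biased order)."""
    i = np.arange(1, depth + 1)
    W = rng.beta(1.0 - alpha, theta + i * alpha)
    stick = np.concatenate(([1.0], np.cumprod(1.0 - W)[:-1]))
    return W * stick

rng = np.random.default_rng(5)
P = gem_weights(alpha=0.5, theta=1.0, depth=200, rng=rng)
print("first five weights:", np.round(P[:5], 4), " total mass:", round(P.sum(), 4))
```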
Transforms
Typical arguments for identifying the distribution of a P -mean involve encoding the distribution by some kind of transform. This section reviews some probabilistic techniques for handling such transforms, by study of some key examples related to ratios of independent stable variables. See Chaumont and Yor (2003) for further exercises with these techniques, and James (2010b) for many deeper results in this vein.
Proposition 1. [Talacko-Zolotarev distribution]. Let C denote a standard Cauchy variable with probability density $P(C \in dc) = \pi^{-1}(1 + c^2)^{-1}\,dc$ for $c \in \mathbb{R}$, and let
$$C_\alpha := C\,\sin\alpha\pi - \cos\alpha\pi. \tag{23}$$
Let $S_\alpha$ be a random variable with the conditional distribution of $\log C_\alpha$ given the event $(C_\alpha > 0)$, with $S_1 = 0$ and the distribution of $S_0$ defined as the limit distribution of $S_\alpha$ as α ↓ 0.
For each fixed α with 0 ≤ α < 1, the distribution of $S_\alpha$ is characterized by each of the following three descriptions, to be evaluated for α = 0 by continuity in α, as detailed later in (34): (i) by the symmetric probability density
$$f_\alpha(s) = \frac{\sin\alpha\pi}{2\pi\alpha\,(\cosh s + \cos\alpha\pi)}, \qquad s \in \mathbb{R}; \tag{25}$$
(ii) by the characteristic function (26); (iii) by the moment generating function (27).

Proof. The linear change of variable (23) from the standard Cauchy density of C makes
$$P(C_\alpha \in dx) = \frac{\sin\alpha\pi\,dx}{\pi(1 + 2x\cos\alpha\pi + x^2)}, \qquad x \in \mathbb{R}. \tag{28}$$
Restrict to x > 0, and divide by $P(C_\alpha > 0)$ to obtain $P(C_\alpha \in dx \mid C_\alpha > 0)$. For x > 0, make the change of variable $s = \log x$, $ds = x^{-1}dx$, $x = e^s$ in (28) to obtain the density $P(\log C_\alpha \in ds \mid C_\alpha > 0) = f_\alpha(s)$ as in (25), with constant $2\pi P(C_\alpha > 0)$ in place of $(2\pi\alpha)$. To check $P(C_\alpha > 0) = \alpha$, use the standard formula (29) and the fact that $0 < \sin\pi\alpha < 1$ for $0 < \alpha < 1$, to calculate (30). This proves (i). Now (ii) and (iii) are probabilistic expressions of the classical Fourier transform
$$\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{i\lambda s}\,\frac{\sin\alpha\pi}{\cosh s + \cos\alpha\pi}\,ds = \frac{\sinh\alpha\pi\lambda}{\sinh\pi\lambda}. \tag{31}$$
This Fourier transform is equivalent, by analytic continuation, and the change of variable $x = e^s$ as above, to the classical Mellin transform of a truncated Cauchy density
$$\int_0^\infty \frac{x^r\,dx}{1 + 2x\cos\alpha\pi + x^2} = \frac{\pi}{\sin\alpha\pi}\,\frac{\sin\alpha\pi r}{\sin\pi r} \qquad (|r| < 1). \tag{32}$$
Whittaker and Watson (1927, Example 4, p. 119) attribute this Mellin transform to Euler, and present it to illustrate a general technique of computing Mellin transforms by calculus of residues. This Mellin transform also appears as an exercise in complex variables in Morse and Feshbach (1953, Part I, Problem 4.10). Talacko (1956) gave details of the derivation of the Fourier transform (31) by contour integration. A more elementary proof of the key Fourier transform (31) is indicated below.
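Since (32) is stated with explicit constants, it is straightforward to confirm numerically; the following sketch (illustrative only) compares quadrature values of the left side with the right side for a few choices of α and r.

```python
# Numerical check of the classical Mellin transform (32):
#   integral_0^inf  x^r / (1 + 2 x cos(a*pi) + x^2) dx
#     = (pi / sin(a*pi)) * sin(a*pi*r) / sin(pi*r),   for |r| < 1.
import numpy as np
from scipy.integrate import quad

def lhs(r, a):
    val, _ = quad(lambda x: x**r / (1 + 2 * x * np.cos(a * np.pi) + x**2),
                  0, np.inf, limit=200)
    return val

def rhs(r, a):
    return (np.pi / np.sin(a * np.pi)) * np.sin(a * np.pi * r) / np.sin(np.pi * r)

for a, r in [(0.3, 0.5), (0.7, -0.4), (0.5, 0.25)]:
    print(a, r, round(lhs(r, a), 6), round(rhs(r, a), 6))
```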
The Fourier transform (31) appears also in Zolotarev (1957, formula (21)), attributed to Ryzhik and Gradshtein (1951, p. 282), but with a typographical error (the lower limit of integration should be −∞, not 0). Chaumont and Yor (2012, 4.23) present some of Zolotarev's results below their (4.23.4), including (31) with the correct range of integration, but missing a factor of 2: the 1/π on their left side should be 1/(2π) as in (31). Talacko (1956) regarded the family of symmetric densities f α (s) for 0 ≤ s < 1 as a one-parameter extension of the case α = 1 2 , with and the limit case α = 0 with These probability densities and their associated characteristic functions were found earlier by Lévy (1951) in his study of the random area swept out by the path of two-dimensional a Brownian motion ((X t , Y t ), t ≥ 0) started at X 0 = Y 0 = 0. In terms of the distribution of S α defined by the above proposition, Lévy proved that Lévy first derived the characteristic functions φ 0 and φ 1 2 by analysis of his area functional of planar Brownian motion. He showed that the distributions of S 0 and S 1 2 are infinitely divisible, each associated with a symmetric pure-jump Lévy process, whose Lévy measure he computed. He then inverted φ 0 and φ 1 2 to obtain the densities f 0 and f 1 2 displayed above by appealing to the classical infinite products for the hyperbolic functions. Lévy's work on Brownian areas inspired a number of further studies, which have clarified relations between various probability distributions derived from Brownian paths whose Laplace or Fourier transforms involve the hyperbolic functions. See Biane and Yor (1987), and Pitman and Yor (2003) for comprehensive accounts of these distributions, their associated Lévy processes, and several other appearances of the same Fourier transforms in the distribution theory of Brownian functionals, and Revuz and Yor (1999, §0.6) for a summary of formulas associated with the laws of S 0 and S 1 2 . Note from (26) and (34) that the characteristic function φ α of S α is derived from φ 0 by the identity corresponding to the identity in distribution where S 0 and S α are assumed to be independent. That is to say, the distribution of S 0 is self-decomposable, as discussed further in Jurek and Yor (2004). An easier approach to these Fourier relations (33) and (34) for α = 1 2 and α = 0, which extends to the Fourier transform (31) for all 0 ≤ α < 1, is to recognize the distributions involved as hitting distributions of a Brownian motion in the complex plane. The Cauchy density of C α in (28) is well known to be the hitting density of X T on the real axis for a complex Brownian motion (X t + iY t , t ≥ 0) started at the point on the unit semicircle in the upper half plane X 0 + iY 0 = cos(1 − α)π + i sin(1 − α)π = − cos απ + i sin απ and stopped at the random time T := inf{t : Y t = 0}. Let X t + iY t = R t exp(iW t ) be the usual representation of this complex Brownian motion in polar coordinates, with radial part R t and continuous angular winding W t , starting from R 0 = 1 and W 0 = (1 − α)π. Then by construction According to Lévy's theorem on conformal invariance of Brownian motion, the process Pitman and Yor (1986) for further details of this well known construction. The conclusion of the above argument is summarized by the following lemma, which combined with the next proposition provides a nice explanation of the basic Fourier transform (31).
Proposition 3. With the notation of the previous lemma, and the Talacko-Zolatarev densities and characteristic functions f α and φ α defined as in Proposition 1, the joint distribution of Φ T and Θ T is determined by any one of the following three formulas, each of which holds jointly with a companion formula for (Θ = 0) instead of (Θ = π), with θ replaced by π − θ on the right side only, so sin θ = sin(π − θ) is unchanged, and cos θ is replaced by cos(π − θ) = − cos θ: (ii) The corresponding cumulative distribution function is (iii) The corresponding Fourier transform is Proof. By the well known description of hitting probabilities for Brownian motion in terms of harmonic functions, the P θ distribution of (Θ T , Φ T ) is the harmonic measure on the boundary of the vertical strip {(θ, s) : 0 < θ < π, s ∈ R} for Brownian motion with initial point (θ, 0) in the interior of the strip. Formula (38) is then read from the classical formula for the Poisson kernel in the strip, which gives the hitting density on the two vertical lines. This formula is mentioned in Hardy (1926) and derived in detail by Widder (1961). As indicated by Widder, the formula for the Poisson kernel for the strip follows easily from the corresponding kernel for the upper half plane, by the method of conformally mapping θ + is to e i(θ+is) = e −s e iθ . This proves (i), and (ii) follows by integration. As for (iii), it is easily seen that conditionally given T and Θ T the distribution of Φ T is Gaussian with mean 0 and variance T . Hence where the last equality is a well known formula for one-dimensional Brownian motion (Revuz and Yor, 1999, Exercise II.3.10), which holds because (exp(±λΘ t − 1 2 λ 2 t), t ≥ 0) is a martingale for each choice of sign ± and λ > 0. The average of these two martingales is M λ,t := sinh(λΘ t ) exp(− 1 2 λ 2 t). So P θ governs (M λ,t , t ≥ 0) as a martingale with continuous paths which starts at M λ,0 = sinh(λθ), and is bounded by As a check on (40), its limit as λ → 0 gives P θ (Θ T = π) = θ/π.
Laplace and Mellin transforms
The Laplace transform of a non-negative random variable X, can always be interpreted probabilistically as follows for λ ≥ 0. Let ε d = γ(1) be a standard exponential variable independent of X. By conditioning on X, This basic formula presents φ X (λ) as the survival probability function of the random ratio ε/X, whose distribution is the scale mixture of exponential distributions, with a random inverse scale parameter X. See Steutel and van Harn (2004) for much more about such scale mixtures of exponentials. This formula (43) works with the convention ε/X = +∞ if X = 0. For instance, if X = T α has the standard stable(α) law with Laplace transform (18) then (43) gives and hence for λ = x 1/α That is to say, in view of the uniqueness theorem for Laplace transforms, the standard stable(α) distribution of T α is uniquely characterized by the identity in law is an exponential variable with mean 1, independent of T α . Equate real moments in (46) to see that the distribution of T α has Mellin transform This provides another characterization of the standard stable(α) law of T α , by uniqueness of Mellin transforms. This derivation of (46) and (47) is due to Shanbhag and Sreehari (1977). A more general Mellin transform for stable laws appears much earlier in (Zolotarev, 1957, Theorem 3). Consider now the ratio R α := T α /T α of two independent standard stable(α) variables. Immediately from (47), the Mellin transform of R α α is Equivalently, by the change of variable r = x 1/α , so x = r α , dx = αr α−1 dr, By calculus, the density (50) of R α has derivative at r > 0 which is a strictly negative function of r multiplied by Analysis of this quadratic function of x explains the qualitative features of the densities of R α displayed in Figure 1 for selected values of α. [Figure 1 caption: the left panel shows the densities of R α α , that is the density of C α in (23) conditioned to be positive, for selected values of α; the curves are identified by their values at 0, which decrease as α increases, and their values at 1, which increase with α. The corresponding densities of Rα can be identified similarly in the right panel.] By unimodality of the Cauchy density, in the left panel each density of R α α is unimodal, with maximum density at 0 for α ≤ 1 2 , and at sin(α − 1 2 )π for α ≥ 1 2 . Each density of Rα in the right panel has an infinite maximum achieved at 0+. The discriminant of the quadratic (51) is ∆(α) := 2(cos 2 απ + α 2 − 1) which is negative for α ≤ αc, where αc ≈ 0.736484 is the unique root α ∈ (0, 1) of the equation ∆(α) = 0. So the density of Rα is strictly decreasing for α ≤ αc, with strictly negative derivative for α < αc, and with a unique point of inflection for α = αc at ( 1 − α 2 c /(1 + αc)) 1/αc ≈ 0.278018. For α > αc, as for the top two curves with α = 6/8 and α = 7/8, the density of Rα is bimodal, with a local minimum at r − (α) and a local maximum at r + (α) where r ± (α) := (x ± (α)) 1/α for x ± (α) the two roots in [0, 1] of the quadratic (51). A common feature of the laws of R α α and Rα for all 0 < α < 1 is that each law has median 1, due to Rα d = R −1 α , and each law has infinite mean. As α ↑ 1, both laws converge to the distribution degenerate at 1. But as α ↓ 0, the behavior is different. At each x > 0, the density of R α α converges to (1 + x) −2 , which is the density of the limit in distribution of R α α . In parallel with this convergence, as α ↓ 0, the density of Rα converges pointwise to 0, as the distribution of Rα converges vaguely to an atom of 1 2 at 0 and an atom of 1 2 at +∞. This pointwise convergence of densities as α ↓ 0 is apparent in both panels.
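As a numerical complement to this discussion (a sketch, not from the text; α = 0.6 and the sample sizes are arbitrary), one can simulate R α as a ratio of two independent one-sided stable(α) variables, generated by the standard Kanter/Chambers–Mallows–Stuck representation (a standard fact assumed here, not taken from the text), and check two of the claims above: that the median of R α is close to 1, and that R α α is distributed like the Cauchy variable C α of (28) — location − cos απ and scale sin απ — conditioned to be positive. Since R α is a ratio, the check does not depend on the normalization chosen for the stable law.

    import numpy as np

    rng = np.random.default_rng(1)

    def stable_pos(alpha, size):
        # One-sided stable(alpha): with U uniform on (0, pi) and W standard exponential,
        #   T = sin(alpha U) * sin(U)**(-1/alpha) * (sin((1-alpha) U)/W)**((1-alpha)/alpha)
        # has Laplace transform exp(-lambda**alpha)  (Kanter / Chambers-Mallows-Stuck).
        u = rng.uniform(0.0, np.pi, size)
        w = rng.exponential(1.0, size)
        return (np.sin(alpha * u) * np.sin(u) ** (-1.0 / alpha)
                * (np.sin((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

    alpha, n = 0.6, 200000
    r = stable_pos(alpha, n) / stable_pos(alpha, n)      # R_alpha = T_alpha / T'_alpha
    print("median of R_alpha:", np.median(r))             # should be close to 1

    # R_alpha**alpha versus C_alpha conditioned to be positive
    def cauchy_cdf(t):
        return 0.5 + np.arctan((t + np.cos(alpha * np.pi)) / np.sin(alpha * np.pi)) / np.pi
    x = np.array([0.25, 0.5, 1.0, 2.0])
    cond_cdf = (cauchy_cdf(x) - cauchy_cdf(0.0)) / (1.0 - cauchy_cdf(0.0))
    emp_cdf = np.array([(r ** alpha <= t).mean() for t in x])
    print("conditioned Cauchy cdf :", np.round(cond_cdf, 3))
    print("empirical cdf of R^a   :", np.round(emp_cdf, 3))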
Cauchy-Stieltjes transforms
For a real valued random variable X, the Cauchy-Stieltjes transform of X is commonly defined to be the function of a complex variable z There are inversion formulas both for this transform, as well as for the generalized Cauchy-Stieltjes transform of X of order θ, say G X,θ (z), obtained by replacing the power −1 in (52) by −θ:

[Figure caption: the left panel plots cos 2 απ + α 2 − 1, proportional to the discriminant of the quadratic (51), with αc ≈ 0.736484 the unique root of this function in (0, 1). The right panel shows the two graphs of r ± (α) := (x ± (α)) 1/α for x ± (α) the two roots in [0, 1] of the quadratic equation (51), for αc ≤ α < 1. The lower curve r − (α) gives the location of the unique minimum in (0, 1) of the density of Rα; this location decreases from r ± (αc) ≈ 0.278018 to 0 as α increases from αc to 1. The upper curve r + (α) is the location of the unique local maximum of the density on (0, ∞); this modal value is always less than 1, and increases from r ± (αc) ≈ 0.278018 to the median value of 1 as α increases from αc to 1.]

See Demni (2016) for a recent article about this transform with references to earlier work. For X with values in [0, 1] it is more pleasant to deal with the variant of this transform
where the series is convergent and equal to E(1 − λX) −θ for every |λ| < 1 by dominated convergence. A distribution of X on [0, 1] is uniquely determined by its moment sequence (EX n , n = 0, 1, 2, . . .), hence also by its generalized Cauchy-Stieltjes transform of order θ, for any fixed θ > 0. For unbounded non-negative X, including X with EX = ∞, for which there is not even a partial series expansion (54) for λ in any neighbourhood of 0, it is typically easier to work with Here the left side is evidently a well defined and analytic function of λ with positive real part. The right side may be understood by analytic continuation of G X,θ (z) from non-real values of z. But arguments by analytic continuation can often be avoided by the following key observation. By introducing γ(θ) with gamma(θ) distribution, independent of X, and conditioning on X, the expectation in (55) is that is the ordinary Laplace transform of γ(θ)X. This determines the distribution of X, by uniqueness of Laplace transforms, and the following lemma, which has been frequently exploited (Pitman and Yor, 2001, p. 358), Chaumont and Yor (2012, 1.13, 4.2, 4.24), (McKinlay, 2014, Theorem 3). As a general rule, in reading formulas involving generalized Stieltjes transforms of probability distributions of X, especially X ≥ 0, matters are often simplified by interpreting the generalized Stieltjes transform as the Laplace transform of γ(θ)X.
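The key observation just made — that the generalized Stieltjes transform of order θ is the ordinary Laplace transform of γ(θ)X — is easy to test numerically. The following sketch (the law of X, and the values of λ and θ, are arbitrary choices, not from the text) compares the two Monte Carlo estimates.

    import numpy as np

    rng = np.random.default_rng(2)

    # Check that E (1 + lam*X)**(-theta) = E exp(-lam * gamma(theta) * X)
    # for gamma(theta) independent of X; this follows from the gamma Laplace transform.
    theta, lam, n = 2.5, 0.7, 10**6
    x = rng.uniform(0.0, 3.0, n)     # any non-negative X will do; uniform(0, 3) chosen here
    g = rng.gamma(theta, 1.0, n)     # gamma(theta) variables, independent of X
    lhs = np.mean((1.0 + lam * x) ** (-theta))
    rhs = np.mean(np.exp(-lam * g * x))
    print(lhs, rhs)                  # the two estimates should agree to about 3 decimals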
Lemma 4. [Cancellation of independent gamma variables] For random variables or random vectors X and Y , and γ(θ) with gamma(θ) distribution independent of both X and Y , for each real a there is the equivalence of identities in distribution Proof. Consider first the case of real random variables.
So by conditioning it may as well be assumed that both X and Y are strictly positive, when there is no difficulty in taking logarithms. It is known (Gordon, 1994) that the distribution of log γ(θ) is infinitely divisible, hence has a characteristic function which does not vanish. The conclusion in the univariate case follows easily, by characteristic functions. An appeal to the Cramér-Wold theorem takes care of the multivariate case.
To illustrate these ideas, let us derive the ordinary Cauchy-Stieltjes transform of the ratio R α := T α /T α of two i.i.d. standard stable (α) variables, whose Mellin transform and probability density were already indicated above. From above, the problem is to calculate for independent random variables ε d = γ(1) and T α d = T α . But we already know from Thus the distribution of R α is uniquely characterized by the simple Cauchy-Stieltjes transform It is notable that the explicit formula (50) for the density of R α with Laplace-Stieltjes transform (1+λ α ) −1 is much simpler than the corresponding inversion for the common where is the classical Mittag-Leffler function with parameter α. This is an entire function of z ∈ C, for each α ∈ C with strictly positive real part, with α ∈ (0, 1) here. This formula was found by Pillai (1990). See also (Mainardi et al., 2001, (3.9) and (4.37)) for closely related transforms, and Gorenflo et al. (2014) for a recent survey of Mittag-Leffler functions and their applications. Compare also with the density of T α , given by Pollard (1946) Only for α = 1 2 , when T 1 2 d = 1/(2γ( 1 2 )) is there substantial simplification of this series formula. But see Penson and Górska (2010) for explicit expressions for the density (62) in terms of the Meijer G function for rational α, and Schneider (1986) for a general representation of stable densities in terms of Fox functions. See also Ho et al. (2007).
Returning to the context of random discrete distributions, if P α,0 is governed by the (α, 0) model defined by normalizing the jumps of a stable(α) subordinator on some fixed interval of length say s > 0, then it is evident that for X = X p the indicator of an event of probability p, the distribution of the P α,0 mean of X p is determined by The distribution of M α,0 (X p ) is thus obtained from that of R α by a simple change of variable. Moreover, for any real X, the identity allows the Cauchy-Stieltjes transform of (1 + cX) −1 to be expressed directly in terms of that of X. In particular, for the ratio of independent stable variables X = R α with the simple Cauchy-Stieltjes transform (60), and c := (q/p) 1/α with q := 1 − p, this algebra simplifies nicely to give in (63) This is the Stieltjes transform (20) found by Lamperti. See (Pitman and Yor, 1997b, §4) for further discussion.
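The change of variable relating M α,0 (X p ) to R α can be checked by simulation. In the sketch below (not from the text; α, p, the truncation level and the sample sizes are arbitrary), the weights of the (α, 0) model are obtained by normalizing the jumps Γ i −1/α of a stable(α) subordinator, where Γ 1 < Γ 2 < · · · are the arrival times of a unit-rate Poisson process — a standard representation assumed here — each atom is retained independently with probability p, and the resulting mean of the indicator X p is compared in distribution with 1/(1 + c R α ) for c = (q/p) 1/α , using the stable simulator from the earlier sketch.

    import numpy as np

    rng = np.random.default_rng(4)

    alpha, p = 0.5, 0.3
    q = 1.0 - p
    n_atoms, reps = 5000, 2000

    def stable_pos(size):
        # same Kanter / Chambers-Mallows-Stuck representation as in the earlier sketch
        u = rng.uniform(0.0, np.pi, size)
        w = rng.exponential(1.0, size)
        return (np.sin(alpha * u) * np.sin(u) ** (-1.0 / alpha)
                * (np.sin((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

    # Direct simulation of the (alpha, 0) mean of an indicator of probability p,
    # with the ranked weights proportional to Gamma_i**(-1/alpha), truncated at n_atoms.
    means = np.empty(reps)
    for r in range(reps):
        gam = np.cumsum(rng.exponential(1.0, n_atoms))     # Poisson arrival times
        w = gam ** (-1.0 / alpha)
        keep = rng.random(n_atoms) < p                     # mark each atom with probability p
        means[r] = w[keep].sum() / w.sum()

    # The same distribution written as a function of a ratio of independent stable variables.
    c = (q / p) ** (1.0 / alpha)
    ratio = 1.0 / (1.0 + c * stable_pos(reps) / stable_pos(reps))

    for prob in (0.25, 0.5, 0.75):
        print(prob, np.quantile(means, prob), np.quantile(ratio, prob))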
Some basic theory of P -means
This section presents some general theory of P -means, for an arbitrary random discrete distribution P , and its relation to Kingman's theory of partition structures, relying only on the simplest examples to motivate the development. This postpones to Section 5.7 the study of the rich collection of examples associated with the (α, θ) model.

Partition structures

Kingman (1978) introduced the concept of the partition structure associated with sampling from a random probability distribution F . That is, the collection of probability distributions of the random partitions Π n of the set [n] := {1, . . . , n}, generated by a random sample Y 1 , . . . , Y n from F , meaning that conditionally given F the Y i are i.i.d. according to F . The blocks of Π n are the equivalence classes of the restriction to [n] of the random equivalence relation i ∼ j iff Y i = Y j . A convenient encoding of this partition structure is provided by its exchangeable partition probability function (EPPF) (Pitman, 1995). This is a function p of compositions (n 1 , . . . , n k ) of n, that is to say sequences of k positive integers (n 1 , . . . , n k ) with k i=1 n i = n for some 1 ≤ k ≤ n. The function p(n 1 , . . . , n k ) gives, for each particular partition {B 1 , . . . , B k } of [n] into k blocks, the probability
where #B i is the size of the block B i of indices j with the same value of Y j . A random partition Π n of [n] is called exchangeable iff its distribution is invariant under the natural action of permutations of [n] on partitions of [n]. Equivalently, its probability function is of the form (65) for some function p(n 1 , . . . , n k ) that is non-negative and symmetric. The sum of these probabilities (65), over all partitions {B 1 , . . . , B k } of [n] into various numbers k of blocks, must then equal 1. This constraint is most easily expressed in terms of the associated exchangeable random composition of n N ex •:n := (N ex 1:n , N ex 2:n , . . . , N ex Kn:n ) defined by listing the sizes of blocks of Π n in an exchangeable random order. This means that conditionally given that the number K n of components of Π n equals k for some 1 ≤ k ≤ n, and that Π n = {B 1 , . . . , B k } for some particular sequence of blocks (B 1 , . . . , B k ), which may be listed in any order, for instance their order of least elements, N ex •:n := (#B σ(1) , . . . , #B σ(k) ) where σ is a uniform random permutation of [k]. As indicated in Pitman (2006, (2.8)), the usual probability function of this random composition of n is the exchangeable composition probability function (ECPF) P(N ex •:n = (n 1 , . . . , n k )) = p ex (n 1 , . . . , n k ) := (1/k!) (n!/(n 1 ! · · · n k !)) p(n 1 , . . . , n k ). (66)
These probabilities must sum to 1 over all compositions of n. So the normalization condition on an EPPF is that for p ex derived from p using the multiplier in (66), the sum of p ex (n 1 , . . . , n k ) over all 1 ≤ k ≤ n and all compositions (n 1 , . . . , n k ) of n into k parts equals 1.
Here and in similar sums below, (n 1 , . . . , n k ) ranges over the set of n − 1 choose k − 1 compositions of n into k parts. To understand (66), observe that putting the components of Π n in an exchangeable random order creates a random ordered partition of [n], with block sizes N ex •:n . So P(N ex •:n = (n 1 , . . . , n k )) is the sum, over all ordered partitions of [n] into k blocks of the specified sizes, of the probability of each ordered partition of those sizes. Each particular ordered partition has probability p(n 1 , . . . , n k )/k!, and the number of these ordered partitions with sizes (n 1 , . . . , n k ) is the multinomial coefficient. For Π n generated by sampling from a random discrete distribution with atoms of sizes (P j ), let (J 1 , . . . , J n ) denote the corresponding sample of positive integer indices. Then for each particular partition {B 1 , . . . , B k } of [n] as in (65) Hence, by conditioning on P , where the sum is over all sequences of k distinct positive integers (j 1 , . . . , j k ). As observed by Kingman, as n varies, the partition structure associated with sampling from a random distribution is subject to a consistency condition: the restriction of Π n+1 to [n] must be Π n for every n ≥ 1. In terms of the EPPF, this consistency condition implies where n = (n 1 , . . . , n k ) ranges over compositions of n, and n (i+) for 1 ≤ i ≤ k + 1 is n with the ith component incremented by 1, with n (i+) for i = k + 1 meaning (n, 1), obtained by appending a 1 to n. See Pitman (2006, §3.2) for further discussion. The instance of the general formula (3), when (S, S) is the unit interval [0, 1] with Borel sets, and the Y j = U j are i.i.d. uniform [0, 1] variables, independent of P , is of particular importance. Write F P for the random probability measure on [0, 1] which sprinkles the atoms of P at i.i.d. uniform random locations. So by definition, for all bounded or non-negative measurable g In particular, for Note that F P [0, 0] = 0 and F P [0, 1] = 1 almost surely.
The following proposition summarizes some well known facts: Proposition 5. [Kallenberg (1973), Kingman (1978)] The random c.d.f. F (v) := F P [0, v], derived as above for 0 ≤ v ≤ 1 from a random discrete distribution P , is a process with exchangeable increments, meaning that for each m = 1, 2, . . . the sequence of increments (F (j/m) − F ((j − 1)/m), 1 ≤ j ≤ m) is exchangeable. The collection of distributions of these exchangeable sequences is an encoding of the partition structure generated by P , as is the collection of finite-dimensional distributions of P ↓ , the ranked re-ordering of P , and the collection of finite-dimensional distributions of P * , the size-biased permutation of P . In other words, for two random discrete distributions P and Q, with associated random c.d.f.s with exchangeable increments F P and F Q , and exchangeable partition probability functions p P and p Q , the following conditions are equivalent: • p P (n) = p Q (n) for all compositions of positive integers n; • F P and F Q share the same finite dimensional distributions.
Proof. As indicated by Kallenberg, the finite-dimensional distributions of F = F P determine those of the list P ↓ of ranked jumps of P , and conversely. It is obvious that the laws of P ↓ and P * determine each other, and that either of these laws determines the EPPF p P , by application of formula (68) with P replaced by P ↓ or P * . That the law of P ↓ can be recovered from the partition structure was shown by Kingman (1978).
See also Pitman (2006, Theorem 3.1) for an explicit formula expressing the EPPF in terms of product moments derived from P * .
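For a concrete check of the sampling formula (68) (a Monte Carlo sketch; the choice of the (0, θ) model with θ = 2, the truncation level, and the sample sizes are arbitrary), one can estimate the EPPF value p(2, 1) in two ways: by sampling three values from P and recording how often the induced partition of {1, 2, 3} is {{1, 2}, {3}}, and by the product-moment expression E( j P j 2 − j P j 3 ) obtained from (68) with (n 1 , n 2 ) = (2, 1). For the (0, θ) model both estimates should be near θ/((1 + θ)(2 + θ)), which is 1/6 at θ = 2 by the Ewens sampling formula — a known fact, mentioned here only as a check.

    import numpy as np

    rng = np.random.default_rng(3)

    def gem_weights(theta, m):
        # Stick-breaking with i.i.d. beta(1, theta) residual factors, truncated at m sticks;
        # this is the GEM(0, theta) model in size-biased order.
        b = rng.beta(1.0, theta, m)
        return b * np.concatenate(([1.0], np.cumprod(1.0 - b)[:-1]))

    theta, m, reps = 2.0, 2000, 4000

    # (a) sampling estimate of p(2,1) = P(J1 = J2 != J3) for a sample of size 3 from P
    hits = 0
    for _ in range(reps):
        w = gem_weights(theta, m)
        j = rng.choice(m, size=3, p=w / w.sum())
        hits += (j[0] == j[1]) and (j[0] != j[2])
    print("sampling estimate of p(2,1):", hits / reps)

    # (b) product-moment estimate from (68):
    #     p(2,1) = E sum_{j1 != j2} P_{j1}^2 P_{j2} = E [ sum_j P_j^2 - sum_j P_j^3 ]
    vals = []
    for _ in range(reps):
        w = gem_weights(theta, m)
        w = w / w.sum()
        vals.append((w ** 2).sum() - (w ** 3).sum())
    print("moment formula estimate of p(2,1):", np.mean(vals))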
A nice exercise in Kallenberg's encoding of P by an exchangeable random c.d.f. F := F P is provided by the following construction, proposed by Patil and Taillie (1977, Example 2.10), in an insightful review article which appeared a year before the general theory of partition structures was offered by Kingman (1978). Suppose P is a random discrete distribution with P(P i > 0) = 1 for each i = 1, 2, . . .. Let (U i ) be a sequence of i.i.d. uniform variables, independent of P , and for each 0 < p < 1 consider the sequence P i 1(U i ≤ p) obtained by annihilating each P i with U i > p and keeping each P i with U i ≤ p. Then a new random discrete distribution P (p), called a p-thinning or p-screening of P , is obtained by ignoring the annihilated entries P i with U i > p, and listing the remaining entries P i with U i ≤ p in their original order, renormalized by their sum F (p), where τ (p, j), the index of the jth retained entry, is the sum of j independent copies of τ (p, 1) with the geometric(p) distribution P(τ (p, 1) = k) = pq k−1 for q := 1 − p, and the sequence of indices (τ (p, j), j = 1, 2, . . .) is independent of P . In terms of the random c.d.f. with exchangeable increments F (u) := i P i 1(U i ≤ u), whose jumps in some order are the P i , the p-thinning P (p) is by construction a listing of jumps of the random c.d.f. with exchangeable increments (F (up)/F (p), 0 ≤ u ≤ 1). In terms of P -means, for suitable distributions of X, the P (p)-mean of X is the ratio of two jointly distributed P -means: A particularly appealing instance of this construction is described by the following proposition: Proposition 6. (Patil and Taillie, 1977, Theorem 2.5) If P is governed by the GEM(0, θ) model, then (i) the fraction F (p) has the beta(pθ, qθ) distribution; (ii) the p-thinned random discrete distribution P (p) has GEM(0, pθ) distribution; (iii) the fraction F (p) is independent of the random discrete distribution P (p).
Proof. As indicated by Patil and Taillie, this is a consequence of the representation of P by random sampling from the random c.d.f. F (u) = γ(uθ)/γ(θ) derived from the standard gamma subordinator. See Pitman (2006, §4.2) for a proof of McCloskey's result that the size-biased representation of jumps of this F gives P governed by the GEM(0, θ) model with i.i.d. beta(1, θ) distributed residual factors. Granted the gamma representation of P , part (i) is just the basic beta-gamma algebra (6). Part (ii) holds by the identification of F (up)/F (p) = γ(upθ)/γ(pθ), 0 ≤ u ≤ 1 as the c.d.f. with exchangeable increments associated with P (p). Part (iii) appeals to independence part (7) of the beta-gamma algebra, which makes F (p) = γ(pθ)/γ(θ) independent of the process (F (up)/F (p), 0 ≤ u ≤ 1), hence also independent of its list of jumps P (p) in their order of discovery by a process of uniform random sampling.
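The distributional facts in this proof are easy to test by simulation. The sketch below (θ, p, the truncation level and the sample size are arbitrary choices, not from the text) thins a truncated GEM(0, θ) sequence with independent uniform marks, compares the mean and variance of F (p) with those of the beta(pθ, qθ) distribution suggested by the gamma representation F (p) = γ(pθ)/γ(θ), and compares the mean of the first entry of the renormalized thinned sequence with the mean 1/(1 + pθ) of a beta(1, pθ) variable, as part (ii) requires.

    import numpy as np

    rng = np.random.default_rng(6)

    theta, p, m, reps = 3.0, 0.4, 2000, 5000
    q = 1.0 - p

    fp = np.empty(reps)
    first = np.empty(reps)
    for r in range(reps):
        b = rng.beta(1.0, theta, m)
        w = b * np.concatenate(([1.0], np.cumprod(1.0 - b)[:-1]))   # GEM(0, theta), truncated
        u = rng.random(m)
        kept = w[u <= p]                  # p-thinning: keep the atoms with U_i <= p
        fp[r] = kept.sum()
        first[r] = kept[0] / fp[r]        # first surviving entry of P(p)

    a, bpar = p * theta, q * theta
    print("F(p) mean:", fp.mean(), "vs beta:", a / (a + bpar))
    print("F(p) var :", fp.var(), "vs beta:", a * bpar / ((a + bpar) ** 2 * (a + bpar + 1)))
    print("first entry of P(p) mean:", first.mean(), "vs beta(1, p*theta):", 1.0 / (1.0 + p * theta))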
As remarked by Patil and Taillie, the above proposition holds also with GEM(0, θ) replaced by its decreasing rearrangement, the Poisson-Dirichlet (0, θ) distribution. Various components of the proposition can be broken down and generalized as follows.
Proposition 7. Let P (p) be the random discrete distribution obtained by p-thinning of a random discrete distribution P with P(P i > 0) = 1 for each i = 1, 2, . . ..
(iv) if P is in either ranked or size-biased order, then the following two conditions are equivalent: A is a stable (α) subordinator for some 0 < α < 1.
in which case P * is governed by the GEM(α, 0) model with independent residual Proof. Part (i) is obvious. To see part (ii), observe that P = P * may be constructed by listing the jumps of the associated random c.d.f. with exchangeable increments F in the order they are discovered by a process of random sampling from F . But then by construction as above, P (p) is the list of sizes of jumps of F in [0, p], relative to their sum F (p), in the order of their discovery in sampling from F . But the successive values of the sample from F which fall in [0, p] form a sample from F conditioned on [0, p]. Thus P (p) is just the list of atoms of this random conditional distribution in the order of their discovery by a process of random sampling, and it follows that P (p) is in size-biased random order. Part (iii) is just a reprise of part (ii) of the previous proposition, with a general subordinator instead of the gamma process. As for part (iv), if F is derived from a stable subordinator, it is easily seen that the distribution of the process for either ranked or size-biased ordering of P , by (i) and (ii). Conversely, it is known (Pitman and Yor, 1992, Lemma 7.5) that for a subordinator A the distribution of A(1) is determined up to a scale factor by that of the process . It is well known that for a subordinator A this condition implies that A is stable with some index α ∈ (0, 1) as indicated in (74).
The only part of Proposition 6 which does not extend to a subordinator more general than the gamma process is the independence of F (p) and P (p). This is a consequence of independence of A(t) and (A(ut)/A(t), 0 ≤ u ≤ 1), which is well known to be a characteristic property of A(t) = aγ(bt) for some a, b > 0. See Pitman (2006, §4.2) and work cited there. See also Pitman (2003) and Émery and Yor (2004) for more about bridges with exchangeable increments obtained by normalizing a subordinator.
The construction of infinitely divisible semi-stable laws by Lévy (1954, §58) shows for each fixed q ∈ (0, 1) there exist non-stable subordinators such that (73) holds if p = q n for some n = 1, 2, . . . but not for all 0 < p < 1. Let P (α,0) denote a random discrete distribution governed by the (α, 0) model, say in size-biased order for simplicity, but it could just as well be ranked. Part (iv) of the above proposition implies that for each probability distribution π on (0, 1), which might be regarded as a prior distribution on the stability index α, the formula defines a mixture of (α, 0) laws, which governs P with the invariance property (73) under p-thinning for all 0 < p < 1.
Problem 8. Are there any other laws besides (75) of random discrete distributions P such that P(P i > 0) = 1 for all i and P (p) d = P for all 0 < p < 1?
P -means and partition structures
The present point of view is that the collection of distributions of P -means M P (X), indexed by various distributions of X, should be regarded as yet another encoding of the partition structure associated with P . That point of view is justified by the following corollary of Proposition 5, which does not seem to have been pointed out before. Call a random variable simple if it takes only a finite number of possible values.
Corollary 9.
[Characterization of partition structures by P -means] For each random discrete distribution P , the collection of distributions of its P -means M P (X), as X ranges over simple random variables, is an encoding of the partition structure of P . That is to say, for any two random discrete distributions P and Q, the condition can be added to the list of equivalent conditions in the Proposition 5.
Proof. As remarked earlier around (21), the distribution of M P (X) remains unchanged if P is replaced by P ↓ , and the same for Q instead of P . So P ↓ d = Q ↓ implies M P (X) d = M Q (X). For the converse, the Cramér-Wold theorem shows that the finite-dimensional distributions of F P are determined by the collection of one-dimensional distributions of finite linear combinations of F P [0, v], 0 ≤ v ≤ 1, each of which is a P -mean by application of (70). Thus M P (X) d = M Q (X) for all simple X implies that the finite dimensional distributions of F P and F Q are the same. Hence the conclusion, by the preceding proposition.
Part of how the partition structure of P is determined by the distributions of P -means M P (X), as the distribution of X varies, is found by consideration of the P -means of indicator variables X, that is X = 1(U ≤ v), whose P -mean is F P (v). So there is the following proposition, which also does not seem to have been noticed before, though it is the easiest case, for an indicator variable, of the general moment formula for P -means, due to Kerov, which is presented later in Corollary 22.
Proposition 10. Let F (v) := F P [0, v] be the random cumulative distribution function with exchangeable increments on [0, 1] derived from a random discrete distribution P , and let K n be the number of distinct values in a random sample of size n from either P or from F . Then the nth moment of F (v) is a polynomial in v of degree at most n, which equals the probability generating function of K n evaluated at v: where P(K n = k) is determined by the ECPF p ex of P according to the formula where the sum is over the n − 1 choose k − 1 compositions of n into k parts. Consequently, the collection of one-dimensional distributions of K n , for n = 1, 2, . . . determines the collection of one-dimensional distributions of F (v) for 0 ≤ v ≤ 1, and vice versa.
Proof. Formula (76) displays two different ways of evaluating the probability of the event that all n values in a sample of size n from F P fall in [0, v]. It is known (Nacu, 2006) that another equivalent condition is equality in distribution of the two sequences (K n , n ≥ 1) generated by sampling from P and Q respectively.
Problem 11. Does equality of the one-dimensional distributions of K n , generated by sampling from P and Q for each n, imply equality of partition structures?
By Proposition 10, this condition is the same as equality of one-dimensional distributions of F P [0, p] and F Q [0, p] for each 0 ≤ p ≤ 1. So the issue is whether the finite-dimensional distributions of an increasing process with exchangeable increments are determined by its one-dimensional distributions. Kallenberg (1973) established a result in this vein, that the distribution of any process on [0, 1] with exchangeable increments and continuous paths is determined by its one-dimensional distributions.
It appears that the distribution of an exchangeable random partition Π n on [n], with restrictions Π m to [m] for m ≤ n, is determined by the collection of distributions of K m , the number of blocks of Π m , for 1 ≤ m ≤ n, for n ≤ 11 but not for n = 12. To see this, consider the # part (n) probabilities of individual partitions of n in the distribution of the partition of n induced by the ranked block sizes of Π n , where # part (n) is the number of partitions of n. These # part (n) probabilities are subject only to the constraints of being non-negative, with sum 1, so the range of # part (n) − 1 of these probabilities contains some open ball in R #part(n)−1 . The P(K m = k) for 1 ≤ k < m ≤ n then form a collection of n(n − 1)/2 linearly independent linear combinations of the # part (n) probabilities. It is easily checked that # part (n) − 1 ≤ n(n − 1)/2 for 1 ≤ n ≤ 11, but # part (12) − 1 = 76 > 66 = 12 · 11/2. Hence the conclusion. However, it does not seem at all obvious how to construct such an example which is part of an infinite partition structure derived by sampling from a random discrete distribution.
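The counting in this argument is elementary to verify. The following sketch (the partition-counting routine is a standard dynamic program, not taken from the text) checks that # part (n) − 1 ≤ n(n − 1)/2 for all n ≤ 11, with equality at n = 11, and that # part (12) − 1 = 76 > 66 = 12 · 11/2.

    def num_partitions(n):
        # number of integer partitions of n, by the standard dynamic programming recursion
        table = [1] + [0] * n
        for part in range(1, n + 1):
            for total in range(part, n + 1):
                table[total] += table[total - part]
        return table[n]

    for n in range(1, 13):
        print(n, num_partitions(n) - 1, n * (n - 1) // 2)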
The following proposition develops the meaning of the terms p ex (n 1 , . . . , n k ) in the sum (77) for P(K n = k), in the context of the preceding proof.
Proposition 12. Let V 1 , . . . , V n be a sample from F P , meaning that conditionally given F P the V i are i.i.d. with common distribution F P . Let K n be the number of distinct values among V 1 , . . . , V n , and let N ex •:n be the numbers of repetitions of these values in the sample V 1 , . . . , V n , in increasing order of V -values. Then N ex •:n is an exchangeable random composition of n with the probability function p ex featured in formulas (66) and (77).
Proof. By construction, K n is the number of blocks of Π n , the random partition of [n] generated by sampling from P . On the event of probability one that there are no ties among the U -values, the association V i = U Ji pairs distinct V -values with distinct J-values in a sample J 1 , . . . , J n of indices of P . Thus K n is the number of distinct values in a sample of size n from P , and the distinct V -values are the uniform order statistics U 1:Kn < U 2:Kn < · · · < U Kn:Kn where for k = 1, 2, . . . the U 1:k < U 2:k < · · · < U k:k are the order statistics of the first k i.i.d. uniform variables U 1 , . . . , U k . It is well known that U i = U σ k (i):k for a random permutation σ k of [k] that is independent of these k order statistics. Hence N ex •:n is an exchangeable random composition whose probability function (66) encodes the partition structure of P .
P -means as conditional expectations
The point of view taken here is that a random discrete distribution P may be regarded as a probabilistic mechanism for turning a suitable random variable X into another random variable M P (X). Considered in this way, M P becomes an operator on random variables X, whose properties are those of a conditional expectation operator. In the first instance, the definition M P (X) := j X j P j , makes M P an operator on probability distributions, which converts the common distribution of X and the X j into the distribution of the new random variable M P (X). There is no specification of which of the many identically distributed variables X j should be regarded as X.
This construction of X := M P (X) puts X on the same probability space as all the copies X j of X. But the joint distribution of X and X j will typically depend on j. So there is no well defined joint distribution of X and a generic representative X of the terms X j without some further precision. For instance, if E(X) = 0 and EX 2 < ∞, then the covariance E( XX j ) = (EP j )EX 2 will typically depend on j. Only exceptionally, as in the case of exchangeable P 1 , . . . , P m , does the joint law of ( X, X j ) not depend on j for some finite range 1 ≤ j ≤ m. This apparent lack of a joint distribution of X and X := M P (X) should be contrasted with conditional expectations X := E(X | G) for G any sub σ-field of events in a probability space (Ω, F, P) on which X is defined and integrable. For then X and X are defined on the same probability space, with an induced joint probability distribution P((X, X) ∈ •) on R 2 . There are however many indications in the literature of particular P -means, that the operation which transforms a random variable X into M P (X) shares properties of a conditional expectation operator E(X | G). Most obviously, M P is a positive operator: X ≥ 0 implies M P (X) ≥ 0, and M P is a linear operator, meaning that if (X, Y ) has some arbitrary joint distribution, such that both X := M P (X) and Y := M P (Y ) are well defined almost surely, then the natural construction of a random pair ( X, Y ) := M P (X, Y ), using one copy of P and an i.i.d. sequence (X j , Y j ) of copies of (X, Y ), makes M P (aX + bY ) = aM P (X) + bM P (Y ).
It is also easily shown there is a monotone convergence theorem for P -means: with the same coupling construction All of which supports the idea that P -means should be regarded as some kind of conditional expectation operator. In fact, for any prescribed distribution of X on an abstract measurable space, there is the following canonical construction of X jointly with a sequence of i.i.d. copies (X j ) of X and a random discrete P with any desired distribution, and a suitable σ-field of events G, which makes for all bounded or non-negative measurable functions g. Assume that the (X j ) and (P j ) are defined together with a uniform [0, 1] variable U , as needed for further randomization, on some probability space (Ω, F, P), with (X j ) , (P j ) and U independent. Conditionally given (X j ) and P = (P j ) let J be a random draw from P : P(J = j | X 1 , X 2 , . . . , P 1 , P 2 , . . .) = P j (j = 1, 2, . . .), which may be constructed in the usual way by letting Then set X := X J .
So X is not any particular X j , but X = X J for J picked at random according to P , independently of the entire sequence of X j -values. Then the following proposition is easily verified: Proposition 13. Let X := X J be defined in terms of an i.i.d. sequence (X j ) and a random discrete distribution (P j ) independent of (X j ) by this canonical construction, with the random index J picked according to P , independently of (X j ). Then • the distribution of X is the common distribution of the X j ; • for each measurable function g with E|g(X)| < ∞, let the P -mean of g(X) be defined by Then the series converges absolutely both almost surely and in L 1 , and M P [g(X)] is the conditional expectation . . , P 1 , P 2 , . . .] a.s.
• In particular, if X is real-valued with E|X| < ∞, and X := M P (X), then E(X | X) = X so the sequence (EX, X, X) is a three term martingale.
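The canonical construction is easy to realize numerically. The sketch below (the (0, θ) model, the value of θ, the exponential law of X, the truncation level and the binning are all arbitrary choices, not from the text) draws P and the X j , forms both M P (X) and X = X J , and checks that E X and E M P (X) agree, that E M P (X) 2 ≤ E X 2 , and that within bins of values of M P (X) the conditional average of X is close to the bin average of M P (X), as the martingale property E(X | M P (X)) = M P (X) requires.

    import numpy as np

    rng = np.random.default_rng(7)

    theta, m, reps = 1.5, 1000, 10000

    pm = np.empty(reps)    # the P-mean  M_P(X) = sum_j X_j P_j
    xs = np.empty(reps)    # X = X_J for J a draw from P, independent of the X_j given P
    for r in range(reps):
        b = rng.beta(1.0, theta, m)
        w = b * np.concatenate(([1.0], np.cumprod(1.0 - b)[:-1]))
        w = w / w.sum()
        x = rng.exponential(1.0, m)          # i.i.d. copies of X (exponential chosen arbitrarily)
        pm[r] = (x * w).sum()
        xs[r] = x[rng.choice(m, p=w)]

    print("E X ~", xs.mean(), "   E M_P(X) ~", pm.mean())
    print("E X^2 ~", (xs ** 2).mean(), " >= E M_P(X)^2 ~", (pm ** 2).mean())

    # crude check of E(X | M_P(X)) = M_P(X): conditional averages over quantile bins of M_P(X)
    edges = np.quantile(pm, np.linspace(0, 1, 6))
    idx = np.digitize(pm, edges[1:-1])
    for k in range(5):
        sel = idx == k
        print(k, xs[sel].mean(), pm[sel].mean())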
Consequently, for each random discrete distribution P , the transformation from the distribution of X to that of its P -mean X enjoys all the well known general properties of a conditional expectation operator. So P -means should properly be understood, like conditional expectations, as a kind of partial averaging operator. Some of these properties of P -means inherited from conditional expectations are listed in the following corollary. Recall that the convex partial order on the distributions of real valued random variables X and Y with finite means is defined by This relation X cx ≤ Y should be understood as a relation between the distributions of X and of Y , subject to E|X| < ∞ and E|Y | < ∞, comparable to the usual stochastic order X d ≤ Y , meaning that Eφ(X) ≤ Eφ(Y ) for all bounded increasing φ. Because every convex function φ(x) is bounded below by some affine function ax + b, the assumption E|X| < ∞ implies Eφ(X) has a well defined value which is either finite or +∞ for every convex φ, and similarly for Y . So for X and Y with both E|X| < ∞ and E|Y | < ∞, the meaning of the condition (80) can be made more precise in either of the following equivalent ways: • (80) holds for all convex φ, allowing +∞ as a value on one or both sides; • (80) holds for all convex φ such that both Eφ(X) and Eφ(Y ) are finite.
It is known (Shaked and Shanthikumar, 2007, §2.A) that further equivalent conditions are • EX = EY and the inequality (80) holds for φ(x) = (x − a) + for all a ∈ R; • EX = EY and the inequality (80) holds for φ(x) = |x − a| for all a ∈ R.
Given some prescribed distributions on the line for X and for Y , a coupling of X and Y is a construction of random variables X and Y with these distributions on a common probability space. It is well known that X d ≤ Y is equivalent to existence of a coupling of X and Y with P(X ≤ Y ) = 1: simply take X = F −1 X (U ) and Y = F −1 Y (U ), where F −1 X and F −1 Y are the usual inverse distribution functions, and U has uniform [0, 1] distribution.
By Jensen's inequality for conditional expectations, X cx ≤ Y is implied by • there exists a martingale coupling of X and Y , that is a construction of X and Y on a common probability space with E(Y | X) = X. That remark is all that is needed to deduce the following Corollary from Proposition 13.
It is a well known result of Strassen that X cx ≤ Y implies the existence of a martingale coupling of X and Y . But the construction is quite difficult and not explicit in general. See Hirsch, Profeta, Roynette, and Yor (2011) and Beiglböck, Nutz, and Touzi (2017) for this result and more about the convex order.
Corollary 14. Let X be a random variable with E|X| < ∞, and let X := M P (X) be its P -mean for some random discrete distribution P . Then X cx ≤ X. In particular: (iii) The distributions of X and X cannot be the same, except if either P(X = x) = 1 for some x, or P(P j = 1 for some j) = 1.
Proof. All but part (iii) follow immediately from Proposition 13. These statements also follow from the definition X := j X j P j by applying Jensen's inequality φ( j X j P j ) ≤ j φ(X j )P j before taking expectations. As for (iii), it is well known (Durrett, 2010, Exercise 5.1.12) that if a martingale pair ( X, X) has X d = X, then P( X = X) = 1. It is easily seen that for X := M P (X) this can only be so in one of the two exceptional cases indicated.
Part (i) of this Corollary, and the instance of part (ii) for r = n a positive integer, can also be deduced from the formula for E X n presented later in Corollary 22. Part (iii) appears in Yamato (1984, Proposition 3) for the case of Dirichlet (0, θ) means.
As an operator mapping a distribution of X to a distribution of X, one property of P -means extends those of a typical conditional expectation operator: the P -mean of X may be well defined and finite by almost sure convergence, even if E|X| = ∞. For instance, there is the following easy generalization of a result of Yamato (1984) for Dirichlet (0, θ) means, and Van Assche (1987) for the uniformly weighted mean X 1 P 1 + X 2 (1 − P 1 ) for P 1 with uniform distribution on [0, 1].
Proposition 15. Suppose that X d = a + bY for some fixed a and b and Y with the standard Cauchy distribution P(Y ∈ dy) = π −1 (1 + y 2 ) −1 dy. Then, no matter what the random discrete distribution P , the P -mean X is well defined as an almost surely convergent series, with X d = X.
Proof. This can be shown by a computation with characteristic functions after conditioning on P , as in Yamato (1984). Alternatively, using the well known scaling property Y (p) d = pY (1) of a standard Cauchy process with stationary independent increments (Y (t), t ≥ 0), assumed independent of P , the P -mean X may be constructed as the limit of a + bY (Σ j i=1 P i ) as j → ∞. It is easily seen by conditioning on (P 1 , P 2 , . . .) that the limit exists and equals a + bY (1) almost surely.
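A quick simulation illustrates the proposition (a sketch; θ, the truncation level, and the choice a = 0, b = 1 are arbitrary): for P a truncated GEM(0, θ) and X j i.i.d. standard Cauchy, the P -mean j X j P j should again be standard Cauchy, so its empirical quantiles should match tan(π(u − 1/2)).

    import numpy as np

    rng = np.random.default_rng(8)

    theta, m, reps = 2.0, 1000, 20000
    vals = np.empty(reps)
    for r in range(reps):
        b = rng.beta(1.0, theta, m)
        w = b * np.concatenate(([1.0], np.cumprod(1.0 - b)[:-1]))
        w = w / w.sum()                              # weights summing to 1
        vals[r] = (rng.standard_cauchy(m) * w).sum() # P-mean of i.i.d. standard Cauchy copies

    for u in (0.1, 0.25, 0.5, 0.75, 0.9):
        print(u, np.quantile(vals, u), np.tan(np.pi * (u - 0.5)))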
For the case of X = X 1 P 1 + X 2 (1 − P 1 ) with P 1 uniform on [0, 1], Van Assche (1987, Theorem 2) obtained the conclusion of this proposition by a more complicated argument involving Stieltjes transforms. But he also obtained a converse: the equality in distribution X d = X implies that X d = a + bY for some real a and b and Y standard Cauchy. It appears that this converse is true under very much weaker conditions on P . But some condition is required to avoid the case P 2 = 1 − P 1 with the distribution of P 1 concentrated on terms of a geometric progression (q n , n = 1, 2, . . .) for some 0 < q < 1. For Lévy (1954, §58) established the existence of infinitely divisible semistable laws of X such that X d = pX + (1 − p)X if p = q n for some n, besides the family of strictly stable Cauchy laws aY + b, which is characterized by this property for all p ∈ (0, 1).
Refinements
For P and R two random discrete distributions, say that R is a refinement of P if there is a coupling of P and R on a common probability space such that both P = (P i ) and R = (R i ) may be indexed by i ∈ N := {1, 2, . . .} in the usual way, while some rearrangement of atoms of R may be indexed by (i, j) ∈ N 2 as R i,j with P i = ∑ j R i,j for each i. The following proposition provides a simple explanation of many monotonicity results for P -means: Proposition 16. If R is a refinement of P , then M R (X) cx ≤ M P (X) for every X with E|X| < ∞.
Proof. It must be shown that for arbitrary convex φ, and X with E|X| < ∞ where (X i,j , i, j ∈ N) is a doubly indexed array of copies of X, independent of R, and (X i , i ∈ N) is a singly indexed list of copies of X, independent of P . By conditioning on the coupling (P, R), it is enough to establish (81) for a fixed, non-random discrete distribution R, which is a refinement of some other fixed, non-random discrete distribution P . A further reduction, by easy limit arguments, shows it is enough to establish (81) when R has only a finite number of non-zero atoms. Moreover, by induction on the number of these atoms, it is enough to consider the case when only one atom of P is split to obtain R from P . That case reduces easily by conditioning and scaling to the base case Eφ(M R (X)) ≤ Eφ(X) of Corollary 14.
By general theory of the convex order of distributions on the line, recently reviewed by Letac and Piccioni (2018), the above proposition implies it is possible to realize the sequence (EX, M R (X), M P (X), X) on a suitable probability space as a four term martingale. It is well known however that the general construction of such a martingale, from a sequence of distributions increasing in the convex order, is not at all explicit or elementary, and the proof sketched above does not help much either. So it is natural to ask if the canonical martingale construction of (M P (X), X) in Proposition 13 can be extended to provide an explicit martingale (M R (X), M P (X), X) on a suitable probability space, whenever R is a refinement of P . The following argument shows how this is possible. But the argument is quite tricky, and it does not seem obvious how to extend it to a sequence of successive refinements in any nicer way than by forcing the martingale to be Markovian with prescribed two-dimensional laws.
Martingale proof of Proposition 16. The aim is to construct R and P jointly with X on some common probability space (Ω, F, P) so that M R (X) = E(X | R) and M P (X) = E(X | P) for some sub σ-fields R ⊆ P ⊆ F. Note well that while R is a refinement of P , the associated σ-field R must be coarser than P. It is possible to make such a construction quite generally. But the definition of the σ-fields involved is tricky. So as in the previous proof, let us rather argue that by conditioning on (R, P ) it is enough to consider the case of deterministic R and P . So consider a fixed pair of discrete distributions (R, P ), and let (I, J) be a random element of N 2 which conditionally given X •• := (X i,j , i, j ∈ N) is a pick from R: and set X := X I,J = i,j X i,j 1((I, J) = (i, j)) to make To involve P as well, for i with P i > 0 let J i be a random index with the conditional distribution of J given I = i, that is P(J i = j) = R i,j /P i . Suppose that the J i are independent, forming a sequence J • := (J i ) with i ranging over {i : P i > 0}. Assume further that the sequence J • is independent of the double array X •• of copies of X. Now a random pair (I, J) as in (82), and X := X I,J subject to (84), is conveniently constructed from the double array X •• of copies of X and the sequence of conditional indices J • as J := J I for a single random index I with so that X := X I,J = i X i,Ji 1(I = i) and hence where it is easily argued that (X i,Ji ) is a sequence of independent copies of X, with this sequence independent of P by (85). Thus we obtain a coupled pair of representations M R (X) = E(X | R) and M P (X) = E(X | P) with R ⊆ P for R the σ-field generated by X •• , and P generated by X •• and J • . Hence the desired conclusion (81), by Jensen's inequality for conditional expectations.
As an application of this proposition, there are known constructions of the (0, θ) model which are refining as θ increases (Gnedin and Pitman, 2007). For instance, let (V i , Y i ) be the points of a Poisson process with intensity dvdy/(1 − v) in the strip (0 < v < 1) × (0 < y < ∞). Then let P 0,θ,j be the length of the jth component interval of the relative complement in [0, 1] of the random set of points {V i : 0 < Y i ≤ θ}, reading the intervals from left to right. As shown by Ignatov (1982), this construction makes P 0,θ,j = H j,θ (1 − H 1,θ ) · · · (1 − H j−1,θ ) where the H j,θ are i.i.d. copies of β 1,θ , which is the characteristic property of the size-biased ordering of the (0, θ) model. This construction refines the random discrete distributions P 0,θ as θ increases, hence the following corollary of Proposition 16: Corollary 17. (Letac and Piccioni, 2018, Theorem 1.2) For every X with E|X| < ∞, as θ increases on [0, ∞) the family of distributions of (0, θ) means of X is decreasing in the convex order of distributions on the line, starting from the distribution of X at θ = 0, and converging to the constant E(X) in the limit as θ ↑ ∞.
See (Letac and Piccioni, 2018) for many more refined results regarding the family of Dirichlet curves in the space of probability distributions on the line, meaning the laws of (0, θ) means of a fixed distribution of X as a function of θ. It is an implication of Corollary 17 and a well known result of Kellerer, discussed further in (Letac and Piccioni, 2018, §2), that for each distribution of X with finite mean, it is possible to construct a Markovian reversed martingale ( X θ , θ ≥ 0) with X 0 = X and lim θ→∞ X θ = EX almost surely, such that X θ d = M 0,θ (X) for each θ ≥ 0. However, there is no known way to explicitly construct the transition kernel of such a Markov process. The construction indicated above gives an explicit enough process for X j i.i.d. copies of X and (P θ,j , j = 1, 2, . . .) the family of coupled copies of GEM(0, θ) generated by Ignatov's Poisson construction. Even for the simplest choice of Bernoulli (p) distributed X j , when we know X θ d = β pθ,qθ , it seems difficult to provide any explicit description of the joint law of ( X θ , X φ ) for 0 < θ < φ, or even to determine whether or not this process is Markovian, or a reversed martingale. It is known however (Gnedin and Pitman, 2007) that a corresponding process of compositions of n, obtained by sampling from this model, is Markovian with a simple transition mechanism, and it might be possible to proceed from this to some analysis of ( X θ , θ ≥ 0) defined by (88).
One final remark about Proposition 16. The converse is completely false. Consider the classical example with P n the deterministic uniform distribution on [n], discussed further in Section 4.8. It is well known that M Pn (X) := (X 1 + · · · + X n )/n is a reversed martingale, for any distribution of X with E|X| < ∞. So the distribution of M Pn (X) is decreasing in the convex order, but P n is a refinement of P m iff m divides n.
Problem 18. What more explicit condition on a pair of random discrete distributions P and R is equivalent to M R (X) cx ≤ M P (X) for all X with a finite mean?
Even for deterministic P and R this seems to be a non-trivial problem. A discussion of various measures of diversity for random discrete distributions, and concepts of comparison of P and R with respect to such measures, with many references to earlier work, was provided by Patil and Taillie (1977). That article discusses relations between four different partial orderings on distributions of random discrete distributions, each of which provides some sense in which R may be stochastically more diverse than P , denoted SD2, SD3, SD4, SD5. It appears that all of these orderings are implied by the ordering by refinement, call it SD1, as that notation was not used by Patil and Taillie, and the refinement ordering SD1 seems to be both the simplest and strongest of all these orderings. Already in Fisher (1943) there is the idea that in his limit model for species sampling, called here the (0, θ) model, the parameter θ > 0 (which Fisher called α, not to be confused with the second parameter α ∈ (0, 1) of the (α, θ) model) should be regarded as some kind of index of diversity in the random distribution of species frequencies in the population. This idea was confirmed by Patil and Taillie (1982, Theorem 2.9), according to which the (0, θ) family is increasing in stochastic diversity according to the partial order SD3. As discussed above, the (0, θ) family is increasing in the refinement order SD1, hence also in all of the other orders considered by Patil and Taillie. A sixth partial order, say SD6, defined by M R (X) cx ≤ M P (X) for all X with a finite mean, is implied by SD1, and is perhaps the same as one of partial orders proposed by Patil and Taillie. One of these partial orders, denoted SD4 by Patil and Taillie, is the condition that R ↓ [n] := n i=1 R ↓ i is stochastically smaller than P ↓ [n] for each n: That is to say, for each fixed n it is possible to construct a coupling of R ↓ and P ↓ with P(R ↓ [n] ≤ P ↓ [n]) = 1. A stronger stochastic ordering condition, say SD7, with SD7 =⇒ SD4, is that there exists a single coupling of R ↓ and P ↓ such that It is easily shown that the refinement ordering SD1 =⇒ SD7, but not conversely, due to the counterexample with P n and P m mentioned above. It is also the case that the two variants of the stochastic ordering condition, SD4 with different couplings for different n, and SD7 with a single coupling for all n, are not equivalent. This can be seen from the following simple example: • Let P = P ↓ be equally likely to be (3, 3, 0)/6 or (4, 1, 1)/6.
, for each n = 1, 2, 3. But it is impossible to couple P and R so that P(R[n] ≤ P [n] for n = 1, 2) = 1.
This is really a fact about arbitrary fixed ranked distributions, which applies also to random ranked distributions. To see (91), for 0 ≤ λ ≤ 1 consider the convex combination P ↓ (λ) := (1 − λ)R ↓ + λP ↓ , which is evidently another ranked discrete distribution, and differentiate i P ↓ i (λ) 2 with respect to λ. This derivative is a linear function of λ, which is of the requisite positive sign for all 0 ≤ λ ≤ 1 iff it is positive for λ = 0 and λ = 1. But that is easily checked using the condition that both P ↓ and R ↓ are ranked. A connection with the convex order of means is that if X has mean 0 and finite mean square, then, as discussed further in Section 4.7, it is easily seen that So a necessary condition for M R (X) cx ≤ M P (X) for all X with a finite mean is that This is obviously implied by the existence of a coupling of R ↓ and P ↓ with i (R ↓ i ) 2 ≤ i (P ↓ i ) 2 , as implied by (91), but is clearly a lot weaker than that condition. Other necessary conditions for M R (X) cx ≤ M P (X) are implied by the generalization of (92) to higher powers presented later in Corollary 22. So much remains to be clarified regarding these various orderings with respect to stochastic diversity.
Reversed martingales in the Chinese Restaurant
This section, which can be skipped at a first reading, explains how in the canonical construction of (EX, M P (X), X) as a three term martingale, as in Proposition 13, the X and M P (X) are the first term and the almost sure limit of the reversed martingale constructed in the following proposition.
Proposition 19. Let (J 1 , J 2 , . . .) be a random sample from a random discrete distribution P , with (J 1 , J 2 , . . .) and P independent of the i.i.d. sequence (X 1 , X 2 , . . .). Let J * k be the kth distinct value observed in the sequence (J 1 , J 2 , . . .), with J * k = ∞ if there is no such value. Let so P n = (P n,k , k = 1, 2, . . .) is the random empirical distribution of sample values J 1 , . . . , J n reindexed by their order of appearance. For a measurable function g, let so in particular M P1 (g(X)) := g(X) for X := X J1 = X J * 1 . Then for each g with E|g(X)| < ∞ the sequence of P n -means M Pn (g(X)) is a reversed martingale, which converges both almost surely and in L 1 to Proof. The equality of the two expressions for M Pn (g(X)) follows easily from the definitions. The rest of the argument is a variation of the proof of Kingman's representation of partition structures by Aldous (1985). It is easily checked that the sequence (X Ji , i = 1, 2, . . .) is exchangeable, so M Pn (g(X)) is a reversed martingale by standard theory of exchangeable sequences. The remaining conclusions follow easily.
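Here is a small simulation sketch of this reversed martingale (not from the text; θ, the law of X, g, the truncation level and the sample size n are arbitrary choices): for a truncated GEM(0, θ) distribution P , draw J 1 , . . . , J n and i.i.d. copies X 1 , X 2 , . . ., and watch the empirical means M P n (g(X)) = n −1 (g(X J 1 ) + · · · + g(X J n )) approach the P -mean j g(X j )P j as n grows.

    import numpy as np

    rng = np.random.default_rng(9)

    theta, m, n = 1.0, 5000, 200000
    b = rng.beta(1.0, theta, m)
    w = b * np.concatenate(([1.0], np.cumprod(1.0 - b)[:-1]))
    w = w / w.sum()
    x = rng.normal(0.0, 1.0, m)              # i.i.d. copies of X attached to the atoms of P
    g = lambda t: t ** 2                     # any g with E|g(X)| finite

    target = (g(x) * w).sum()                # the P-mean M_P(g(X)) for this realization of P
    j = rng.choice(m, size=n, p=w)           # the sample J_1, ..., J_n from P
    running = np.cumsum(g(x[j])) / np.arange(1, n + 1)
    for k in (10, 100, 1000, 10000, n):
        print(k, running[k - 1], "  -> target", target)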
The Chinese Restaurant Process provides a visualization of successive random partitions generated by the cycles of random permutations π n of [n], where π n+1 is obtained from π n by inserting element n + 1 into one of n + 1 possible places relative to the cycles of π n . Various aspects of this metaphor are developed in Pitman (2006, §3.1). In terms of the Chinese Restaurant, the random distribution P n with support {1, . . . , K n } is the empirical distribution of how the first n customers are assigned to tables j for 1 ≤ j ≤ K n , where K n is the number of distinct values in the sample J 1 , . . . , J n from P . In this picture, table k is brought into service when the kth distinct value J * k appears, and that kth table is labeled by the positive integer J * k . The (n + 1)th customer is given the random value J n+1 picked from (P 1 , P 2 , . . .), and assigned to whichever table has label equal to J n+1 , if that label has appeared before, and otherwise, if there are K n = k tables in use, with k different labels, customer n + 1 is assigned to a new table k + 1 with value J * k+1 = J n+1 . Suppose that in addition to its index k in order of appearance and its label J * k , the kth table is assigned value X J * k for (X 1 , X 2 , . . .) an i.i.d. sequence with values in an arbitrary measurable space, independent of P and the sample (J 1 , J 2 , . . .) from P which drives the Chinese Restaurant Process. Say X J * k is the table color of the kth table brought into service in the restaurant. Then the sequence of table colors encountered by customers as they enter the restaurant, that is (X J1 , X J2 , . . .), is an exchangeable sequence of random variables which generates a partition structure which may be coarser than the partition of customers by tables, if there are ties among the X-values, but which will be identical to the partition of customers by tables if the distribution of X is continuous so the X-values are almost surely distinct. Note that the sequence (P * j , j = 1, 2, . . .) is a size-biased random permutation of the original random discrete distribution (P j ) driving the Chinese Restaurant Process, by a mechanism that is independent of the X-sequence.

Fragmentation operators and composition of P -means

Pitman and Yor (1996, §6) introduced the composition operation on two random discrete distributions P and Q which creates a new random discrete distribution R := P ⊗ Q as follows. Let P := (P i ) be independent of (Q i,j , j = 1, 2, . . .), a sequence of i.i.d. copies of Q, and let P ⊗ Q denote the ranked ordering of the collection of products (P i Q i,j , i = 1, 2, . . . , j = 1, 2, . . .). Intuitively, each atom of P is fragmented by its own copy of Q, and these fragments are reassembled in non-increasing order to form R := P ⊗ Q. Clearly, R is a very special kind of refinement of P , as discussed in Section 4.4. The composition operation ⊗ may be regarded either as an operation on ranked discrete distributions, as in Pitman and Yor (1996, §6), or on their corresponding partition structures, as detailed in Pitman (1999, Lemma 35).
Independent of (P i ) and (Q i,j ) as above, let (X i,j ) be an array of i.i.d. copies of X, assumed to be either bounded or non-negative. Then Hence the following proposition: Proposition 20. The operation P ⊗ Q of composition of random discrete distributions P and Q corresponds to composition of their mean operators M P and M Q : for all bounded or non-negative X. Consequently, for three random discrete distributions P , Q and R, the following two conditions are equivalent: for every X with a finite number of values; Proof. The first sentence summarizes the preceding discussion. The second sentence follows from the characterization of partition structures by their P -means (Corollary 9).
Typically, the operation of composition of random discrete distributions is quite difficult to describe explicitly. A remarkable exception is the result of Pitman and Yor (1997a, Proposition 22) that for the P α,θ governing the (α, θ) model, there is the simple composition rule corresponding to the identity in distribution of corresponding P -means for all bounded or non-negative random variables X. See Pitman (2006, §3.4) for an account of how the identity (97) was first discovered by a representation of the (α, θ) model for 0 < α < 1 and θ > 0 as the limiting proportions of various classes of individuals in a continuous time branching process. See also Pitman (1999, Theorem 12) for a proof of the more general result that which has a similar interpretation in terms of P -means. See also Pitman (2006, §5.5) for further discussion and combinatorial interpretations of (97) and (99). As indicated in Section 5.7 these composition rules for (α, θ) means are closely related to Tsilevich's formula (22) for the generalized Stieltjes transform of an (α, θ) mean. See also James et al. (2008a, Theorem 2.1) where a presentation of (98) was derived from Tsilevich's formula (22). But the equivalence of (97) and (98) is only hinted at there, by a reference to Gnedin and Pitman (2005), which contains related results for interval partitions and random discrete distributions derived from self-similar random sets. A result of Pitman (1999, Theorem 12). establishes a close connection between the operation of fragmentation of one random discrete distribution by another, and a kind of dual coagulation operation. Curiously, while this coagulation operation has a simple description in terms of composition of associated processes with exchangeable increments, it does not seem to have any simple description in terms of P -means. See Pitman (2006, §5) and Bertoin (2006) for further discussion of fragmentation and coagulation operations and associated Markov processes whose state space is the set of ranked discrete distributions.
Moment formulas
Let ( X, Y ) := M P (X, Y ) be the pair of P -means of two random variables X and Y with some joint distribution. It is a basic problem to calculate the expectation E X Y , in particular E X 2 in the case X = Y . This problem was first considered by Ferguson (1973) for the (0, θ) model of P . Following Ferguson's approach in that particular case, expand the product as and take expectations to conclude that where p(2) := E ∑ j P j 2 and p(1, 1) := E ∑ j≠k P j P k are the two most basic partition probability formulas encoded in the EPPF p derived from the random discrete distribution by (68), that is p(2) = P(J 1 = J 2 ) and p(1, 1) = P(J 1 ≠ J 2 ) for (J 1 , J 2 ) a sample of size 2 from P . In the Dirichlet case considered by Ferguson (1973, Theorem 4) P is governed by the (0, θ) model, which makes p(2) = 1/(1 + θ) and p(1, 1) = θ/(1 + θ). This method extends easily to a product of three P -means, say X Y Z, with a different sum appearing for each of the 5 partitions of the index set [3], according to ties between indices of summation: by (68). Continuing to a product of n factors, the corresponding moment formula is given by the following proposition. This is a variant of product moment formulas due to Kerov and Tsilevich (2001, Proposition (10.1)), for the two-parameter model, and Ishwaran and James (2003) for a general random discrete distribution P , possibly even defective, as in Section 4.9.
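The second-moment formula just derived is easy to check by simulation. In the sketch below (arbitrary choices, not from the text: the (0, θ) model with θ = 2, exponential X, and the truncation level), E M P (X) 2 is estimated by Monte Carlo and compared with p(2) EX 2 + p(1, 1)(EX) 2 for p(2) = 1/(1 + θ) and p(1, 1) = θ/(1 + θ), as stated above for the (0, θ) model.

    import numpy as np

    rng = np.random.default_rng(11)

    theta, m, reps = 2.0, 2000, 20000
    vals = np.empty(reps)
    for r in range(reps):
        b = rng.beta(1.0, theta, m)
        w = b * np.concatenate(([1.0], np.cumprod(1.0 - b)[:-1]))
        w = w / w.sum()
        x = rng.exponential(1.0, m)          # X exponential(1): EX = 1, EX^2 = 2
        vals[r] = ((x * w).sum()) ** 2

    p2, p11 = 1.0 / (1.0 + theta), theta / (1.0 + theta)
    print("Monte Carlo E M_P(X)^2       :", vals.mean())
    print("p(2) EX^2 + p(1,1) (EX)^2    :", p2 * 2.0 + p11 * 1.0)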
[Product moment formula for P-means] Let (Ȳ_i, 1 ≤ i ≤ n) = M_P(Y_1, . . . , Y_n) be the random vector of P-means derived from some joint distribution of (Y_1, . . . , Y_n). For instance, if Y_i = g_i(X) for some sequence of measurable functions g_i and some basic random variable X, then Ȳ_i := Σ_j g_i(X_j) P_j for (X_1, X_2, . . .) a sequence of i.i.d. copies of X, independent of P with EPPF p. Then, assuming the Y_i are either all bounded or all non-negative,

E Π_{i=1}^n Ȳ_i = Σ_{k=1}^n Σ_{{B_1, . . . , B_k}} p(#B_1, . . . , #B_k) Π_{j=1}^k μ(B_j),        (101)

where #B is the size of block B and μ(B) := E Π_{i∈B} Y_i, and where for each k the inner sum is over the set of all partitions of [n] into k blocks {B_1, . . . , B_k}.
Proof. Expand the product according to the partition generated by ties between indices. For each particular partition {B 1 , . . . , B k }, the corresponding expectation is evaluated using the basic formula (68) for the EPPF.
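The n = 2 case (100) is easy to check numerically. The following sketch is a rough Monte Carlo illustration rather than part of the development above: it assumes the stick-breaking (GEM(0, θ)) representation of the (0, θ) model discussed later among the models for random discrete distributions, truncates the infinite sequence of weights, and compares an empirical estimate of E[X̄²] with p(2) E[X²] + p(1,1) (E X)², using p(2) = 1/(1 + θ) and p(1,1) = θ/(1 + θ).

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n_terms, n_reps = 3.0, 200, 20000

# Truncated GEM(0, theta) stick-breaking weights; the neglected mass has mean
# (theta/(1+theta))**n_terms, which is negligible for these parameter values.
H = rng.beta(1.0, theta, size=(n_reps, n_terms))
stub = np.concatenate([np.ones((n_reps, 1)), 1.0 - H[:, :-1]], axis=1)
P = H * np.cumprod(stub, axis=1)

X = rng.exponential(size=(n_reps, n_terms))     # i.i.d. copies of X ~ Exp(1)
Xbar = (P * X).sum(axis=1)                      # truncated P-mean of X

p2 = 1.0 / (1.0 + theta)                        # p(2) for the (0, theta) model
p11 = theta / (1.0 + theta)                     # p(1, 1)
EX, EX2 = 1.0, 2.0                              # first two moments of Exp(1)
print((Xbar ** 2).mean())                       # Monte Carlo estimate of E[Xbar^2]
print(p2 * EX2 + p11 * EX ** 2)                 # right side of (100); equals 1.25 here
```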
Observe that no matter what the joint distribution of the Y_i, if Π_n is the random partition generated by a sample of size n from P, and the definition of the product moment function μ(B) on subsets B of [n] is extended to a partition Π = {B_1, . . . , B_k} of [n] by μ(Π) := Π_{j=1}^k μ(B_j), then the product moment formula (101) becomes simply

E Π_{i=1}^n Ȳ_i = E μ(Π_n).        (102)

It is tempting to think this formula somehow evaluates E Π_{i=1}^n Ȳ_i by conditioning on Π_n in a suitable construction of the product jointly with Π_n to make E( Π_{i=1}^n Ȳ_i | Π_n ) = μ(Π_n), which would obviously imply (102). However this thought is completely wrong. Just consider the simplest case (100) for n = 2 for X = Y with E(X) = E(Y) = 0. We know from examples that the distribution of X̄² can be continuous, with p(1,1) > 0. But then there is no event E with probability p(1,1) on which the conditional expectation of X̄² vanishes, as the conditioning interpretation would require. Be that as it may, the probabilistic form (102) of the product moment formula for P-means explains why this formula reduces easily in special cases, by manipulation of E μ(Π_n). For instance, if the joint distribution of (Y_1, . . . , Y_n) is exchangeable, then μ(B) depends only on #B, say μ(B) = μ(#B), where the definition of the moment function μ is extended to positive integers m by μ(m) := E Π_{i=1}^m Y_i, that is, the mean product of any collection of m of the variables. In this case, μ as a function of partitions of [n] simplifies to μ({B_1, . . . , B_k}) = Π_{j=1}^k μ(#B_j). This is a symmetric function of the sizes of the blocks of Π_n, which can be evaluated by listing the sizes of these blocks in any order, say (N_{1:n}, N_{2:n}, . . . , N_{K_n:n}). So for exchangeable (Y_1, . . . , Y_n) formula (102) becomes

E Π_{i=1}^n Ȳ_i = E Π_{j=1}^{K_n} μ(N_{j:n}),        (103)

where μ(m) is the expected product of any m of the Y_i; equivalently, the right side is E Π_{i=1}^n μ(i)^{C_i(Π_n)}, where C_i(Π_n) is the number of blocks of Π_n of size i. In the important special case when Y_i ≡ X for every 1 ≤ i ≤ n, μ(m) = E X^m, and (103) may be recognized in Kerov (1998, Theorem (4.2.2)) in the equivalent form

E X̄^n = E Π_{i=1}^n (E X^i)^{c(i, π_n)},        (104)

where π_n is a random permutation of [n] which conditionally given Π_n is uniformly distributed over all permutations of [n] whose cycle partition is Π_n, as generated by the Chinese Restaurant Construction of Π_n, and c(i, π) is the number of cycles of size i in π. See also Diaconis and Kemperman (1996, §2), where the formula (104) was first derived for the (0, θ) model of P, which generates the Ewens(θ) distribution on random permutations, with probability proportional to θ^{K_n(π)} for K_n(π) the number of cycles of π. Here is a version of Kerov's moment formula (104) in terms of the ECPF of P, as introduced in (66):

[Moment formula for P-means] Let P be a random discrete distribution with ECPF p^ex. For every distribution of X with E|X|^n < ∞, the nth moment of X̄_P, the P-mean of a sequence of i.i.d. copies of X, is finite and given by the formula

E X̄_P^n = Σ_{k=1}^n Σ_{(n_1, . . . , n_k)} p^ex(n_1, . . . , n_k) Π_{i=1}^k E X^{n_i},        (106)

where the inner sum is over all (n−1 choose k−1) compositions of n into k parts. In particular, if E exp(tX) < ∞ for t in some open interval I containing 0, as for a bounded random variable X, then for every random discrete distribution P, • the distribution of X̄_P is uniquely determined by its moment sequence (106).
Proof. For non-negative X, this is read from (103) for Y i ≡ X and the particular choice of the exchangeable random presentation N ex •:n of sizes of blocks of Π n . Then take the usual difference X = X + − X − for signed X. The rest is read from Corollary 14 and standard theory of moment generating functions.
A good check on this general moment formula for P-means is provided by taking X to be the constant random variable X = 1 in (106). Then X̄ = 1 too, and the moment formula confirms that p^ex(n_1, . . . , n_k) is a probability function on compositions of n for each n, as in (67). A basic example is provided by the ECPF p^ex_m for sampling from the uniform distribution on [m] := {1, . . . , m},

p^ex_m(n_1, . . . , n_k) = (m choose k) n!/(n_1! · · · n_k!) m^{−n}.        (107)

The above moment formulas for P-means then reduce to classical formulas for moments of the arithmetic mean of a sequence of i.i.d. random variables, discussed further in Section 4.8. The ECPF (107) can be derived quickly as follows. Each of n balls indexed by 1 ≤ i ≤ n is equally likely to be painted any one of m colors j ∈ [m], and given there are k different colors used, the clusters of balls by color are put in any one of k! different orders by a uniform random permutation of [k]. Then p^ex_m(n_1, . . . , n_k) is the probability that the sequence of cluster sizes (n_1, . . . , n_k) is achieved by this random ordering. But there are k! (m choose k) different ways to choose the sequence of k different colors (j_1, . . . , j_k) generated by this ordering, and for each of these choices of k colors, the probability of achieving the counts (n_1, . . . , n_k) by this sequence of colors is the probability 1/k! that the particular k colors are put in the desired order, times the multinomial probability of achieving counts (n_1, . . . , n_k) for these colors (j_1, . . . , j_k), and count 0 for all other colors, in a simple random sample with replacement of n colors from [m]. Multiplying these factors gives (107).
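The ball-colouring derivation is easy to test numerically. The sketch below is a rough check, using the expression for p^ex_m implied by the argument above: it verifies that the values sum to 1 over all compositions of n, and compares one entry with a direct simulation of the colouring scheme (the helper names and the chosen parameters are arbitrary).

```python
import math
import numpy as np

def p_ex_m(comp, m):
    """ECPF for sampling from the uniform distribution on [m], as derived above."""
    n, k = sum(comp), len(comp)
    multinom = math.factorial(n) / math.prod(math.factorial(c) for c in comp)
    return math.comb(m, k) * multinom / m ** n

def compositions(n):
    """All compositions of n, i.e. ordered sequences of positive parts summing to n."""
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

m, n = 4, 5
print(sum(p_ex_m(c, m) for c in compositions(n)))     # should print 1.0

# Monte Carlo version of the ball-colouring story for one particular composition.
rng = np.random.default_rng(0)
target, trials, hits = (3, 1, 1), 100000, 0
for _ in range(trials):
    colors = rng.integers(m, size=n)                   # paint n balls with m colors
    used, counts = np.unique(colors, return_counts=True)
    order = rng.permutation(len(used))                 # uniform ordering of the colors used
    if tuple(counts[order]) == target:
        hits += 1
print(hits / trials, p_ex_m(target, m))                # both should be near 80/1024 = 0.078
```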
Problem 23. Suppose that p ex is a symmetric function of compositions (n 1 , . . . , n k ) such that for some random discrete distribution P the moment formula (106) holds for all simple random variables X. If p ex is known to be an ECPF, then p ex = p ex P the ECPF of P , by Corollary 9. But this is not very obvious algebraically. What if p ex is not known to be an ECPF? Can it still be concluded that p ex = p ex P ? If not, what further side conditions (e.g. non-negativity) might be imposed to obtain this conclusion?
As a simple case in point, for each m = 1, 2, . . ., the classical moment formula for arithmetic means shows that the moment formula (106) holds for all simple random variables X and the function p ex displayed in (107). Does formula (106) alone imply that p ex = p ex m is in fact the ECPF for sampling from the uniform distribution on [m]? For small n 1 +· · ·+n k = 1, 2, 3, 4 it seems easy enough to conclude that by varying the distribution of X over two values that there are enough independent linear equations to force p ex (n 1 , . . . , n k ) = p ex P (n 1 , . . . , n k ). But as n increases, it seems necessary to involve three or more values of X, in which case the necessary linear independence of these equations does not seem to be obvious.
Arithmetic means
The study of averages of i.i.d. random variables has a long history. Borel and Kolmogorov established almost sure convergence of X̄_m := Σ_{j=1}^m X_j / m to E(X) as m → ∞. In this instance, X̄_m is the P-mean of X for the non-random weights P_j := 1(j ≤ m)/m that are uniform on the set [m] := {1, . . . , m}, and it is assumed that E|X| < ∞. Characterizations of the exact distribution of X̄_m in terms of the distribution of X are provided by the theory of moments, moment generating functions and characteristic functions, developed specifically for this purpose, as described in every textbook of probability theory. For X with a moment generating function (m.g.f.) E exp(tX) that is finite for t in some neighborhood of 0, the m.g.f. of m X̄_m is

E exp(t m X̄_m) = ( E exp(tX) )^m,        (109)

from which the nth moment of m X̄_m can be extracted by equating coefficients of t^n:

E (m X̄_m)^n = n! [t^n] ( E exp(tX) )^m,

where [t^n] g(t) is the coefficient of t^n in the expansion of g(t) in powers of t. In expanding the product of m factors on the right side of (109), each product of terms contributing to the coefficient of t^n involves some subset I ⊆ [m] with say #I = k factors involving some t^{n_i} with n_i > 0 for i ∈ I and n_i = 0 otherwise. Hence, for all positive integers m and n, the classical moment formula for the arithmetic mean of m i.i.d. copies of some basic variable X:

E X̄_m^n = m^{−n} Σ_{k=1}^n m(m−1) · · · (m−k+1) Σ_{(n_1, . . . , n_k)} (1/k!) ( n!/(n_1! · · · n_k!) ) Π_{i=1}^k E X^{n_i},        (110)

where (n_1, . . . , n_k) ranges over the set of (n−1 choose k−1) compositions of n into k parts, that is sequences of k positive integers with sum n. The term indexed by (n_1, . . . , n_k) is a symmetric function of (n_1, . . . , n_k), which remains unchanged if (n_1, . . . , n_k) is replaced by its non-increasing rearrangement (n↓_1, . . . , n↓_k), called a partition of n. This partition of n is often encoded by the sequence of counts c_j := #{1 ≤ i ≤ k : n_i = j} for 1 ≤ j ≤ n, in terms of which k = Σ_j c_j and n = Σ_j j c_j, and the right side of (110) involves the multinomial coefficient n!/(n_1! · · · n_k!). So the classical moment formula may be rewritten as a sum over partitions of n with a multiplicity factor counting the number of compositions for each partition, or as a similar sum over permutations of [n], with a different multiplicity factor, using the cycle structure of the permutations to index partitions of n.
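A direct numerical check of the classical moment formula (110) is straightforward; in the sketch below X is taken to be exponential, so that all the moments E X^j = j! are available in closed form, and the composition sum is compared with a Monte Carlo estimate (the parameter values are arbitrary).

```python
import math
import numpy as np

def compositions_k(n, k):
    """All sequences of k positive integers summing to n."""
    if k == 1:
        yield (n,)
        return
    for first in range(1, n - k + 2):
        for rest in compositions_k(n - first, k - 1):
            yield (first,) + rest

def mean_moment(mu, m, n):
    """E[Xbar_m^n] from the classical moment formula (110), given mu(j) = E[X^j]."""
    total = 0.0
    for k in range(1, n + 1):
        falling = math.prod(m - i for i in range(k))        # m(m-1)...(m-k+1)
        for comp in compositions_k(n, k):
            multinom = math.factorial(n) / math.prod(math.factorial(c) for c in comp)
            total += falling * multinom / math.factorial(k) * math.prod(mu(c) for c in comp)
    return total / m ** n

mu = lambda j: float(math.factorial(j))                      # moments of X ~ Exp(1)
m, n = 5, 4
exact = mean_moment(mu, m, n)                                # equals 1680/625 = 2.688
rng = np.random.default_rng(0)
mc = (rng.exponential(size=(200000, m)).mean(axis=1) ** n).mean()
print(exact, mc)                                             # agree to Monte Carlo accuracy
```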
The classical moment formula shows explicitly how the moments of X̄_m are determined by those of X, in the first instance for X with a m.g.f. that converges in a neighborhood of 0. But then, by standard arguments involving formal power series, the formula holds also for every X with E|X|^n < ∞. Instances and applications of this formula are well known. For instance, the case n = 2 of (110) gives

E X̄_m² = (E X)² + Var(X)/m,    so that    E (X̄_m − E X)² = Var(X)/m,

hence the weak law of large numbers for such X, by Chebychev's inequality. And the case n = 4 of (110) gives, after centering,

E (X̄_m − E X)⁴ = O(m^{−2})    as m → ∞, for X with E X⁴ < ∞,

hence the strong law of large numbers for such X, by Chebychev's inequality and the Borel-Cantelli Lemma (Durrett, 2010, Theorem 2.3.5). The classical moment formula (110) and its variant with summation over partitions have been known for a long time. It was used already by Markov in one of the first proofs of the central limit theorem. See e.g. Uspensky (1937, Appendix II). It was also used by Nelson (1967) to establish the Gaussian nature of increments in his proof of Lévy's martingale characterization of Brownian motion. See also Ferger (2014) for a recent discussion without acknowledgement of the classical literature. The above derivation of moments of the arithmetic mean X̄_m of a sequence of i.i.d. copies of X can be adapted to P-means by first conditioning on P. This gives

E( exp(t M_P(X)) | P ) = Π_j E( exp(t P_j X) | P ).

Now the coefficient of t^n involves expanding the infinite product, picking out some finite number k of the factors, say those indexed by j_1, . . . , j_k, with the factor indexed by j_i contributing t^{n_i} with n_i > 0, for 1 ≤ i ≤ k, and then summing over all choices of (j_1, . . . , j_k) and all compositions (n_1, . . . , n_k) of n. This provides another proof of the moment formula for P-means (106). Kingman (1978) showed that to provide a general representation of sampling consistent families of random partitions of positive integers n, it is necessary to treat not just sampling from random discrete distributions (P_i) with P_i ≥ 0 and Σ_i P_i = 1, but also to consider sampling from (P_i) with P_i ≥ 0 and Σ_i P_i ≤ 1. This more general model may be interpreted to mean that the P_i with P_i > 0 are the jumps of some random distribution function F, but that F may also have a continuous component whose total mass is the defect P_∞ := 1 − Σ_i P_i.
Improper discrete distributions
Call P proper iff P ∞ = 0, and defective or improper if P ∞ > 0. It was shown in Pitman (1999, Proposition 26) how improper random discrete distributions arise naturally in the study of random coalescent processes. See (Möhle, 2010, §3) and work cited there for more recent developments in this vein. Kerov (1998) indicated the right generalization of the definition of the P -mean M P (X) to defective random discrete distributions P . Restrict discussion to X with E|X| < ∞, and set M P (X) := j X j P j + P ∞ EX for (X j ) as usual a sequence of i.i.d. copies of X. This definition is justified by the way that defective distributions of P arise as weak limits of proper discrete distributions. For instance, if P m is the uniform distribution on [m] as in the previous section, then P m d → P := (0, 0, . . .) as m → ∞, in the sense of convergence of finite dimensional distributions. In this case the limit P has P ∞ = 1, and Kolmogorov's law of large numbers gives M Pm (X) := m −1 m i=1 X i → E(X) almost surely. This justifies the definition (114) in the extreme case P j ≡ 0 and P ∞ = 1. More generally, it is known (Pruitt, 1966) that if (a n,k ) is a Toeplitz summation matrix (i.e., lim n a n,k = 0 for each k, lim n k a n,k = 1, and k |a n,k | is bounded in n), and X n := k a n,k X k , then for any non-degenerate distribution of X with E|X| < ∞, there is convergence X n → E(X) in probability iff max k |a n,k | → 0 as n → ∞. As an easy consequence of this fact, there is the following proposition, whose proof is left to the reader: Proposition 24. Assume E|X| < ∞. Let P n be a sequence of proper discrete distributions, with P ↓ n d → P ↓ , meaning that the finite-dimensional distributions of P ↓ n converge in distribution to those of P ↓ , for P ↓ some possibly improper random discrete distribution. Then M Pn (X) d → X := M P ↓ (X) defined by (114). Moroever, this conclusion continues to hold for a sequence of possibly defective discrete distribution P n , provided (114) is taken as the definition of M Pn (X).
In other words, for X with E|X| < ∞, the definition (114) is the only definition of M P (X) which agrees with the definition in the proper case, and which makes P ↓ → M P ↓ (X) weakly continuous as a mapping from laws of possibly defective random ranked discrete distributions P ↓ to laws of M P ↓ (X). Beware that the above proposition is false if the assumption P ↓ n d → P ↓ is replaced by P n d → P : just take P n to be certain to be a unit mass at n. Then P n d → (0, 0, . . .), but M Pn (X) d = X for every n, which does not converge to EX unless X is constant.
For more about improper discrete distributions, and the tricky issue of extending the notion of a size-biased permutation to this case, see Gnedin (1998).
Models for random discrete distributions
This section recalls some of the basic models for random discrete distributions. These models all arose from applications of random discrete distributions, and spurred the development of a general theory of distributions of P -means and its relation to partition structures.
Residual allocation models.
Consideration of P-means by splitting off the first term suggests that their study should be simplest for those P which can be presented in some order by a residual allocation model, or stick-breaking scheme, involving a recursive splitting like (9). That is, assuming the terms of P have already been put in the right order for such a recursion, there is the stick-breaking representation

P_i = H_i Π_{j<i} (1 − H_j)        (115)

for a sequence of independent stick-breaking factors H_i with H_i ∈ [0, 1]. Freedman (1963) studied Bayesian estimation for such P given a sample J_1, . . . , J_n from P, assuming the stick-breaking representation (115) for suitable H_i. Assuming the stick-breaking form (115) for P := (P_1, P_2, . . .) derived from (H_1, H_2, . . .), let R := (R_1, R_2, . . .) be the residual random discrete distribution derived correspondingly from (H_2, H_3, . . .). Then, assuming only that H_1 is independent of (H_2, H_3, . . .), for M_P(X) the P-mean of a sequence of i.i.d. copies of X, there is the decomposition

M_P(X) = P_1 X_1 + (1 − P_1) M_R(X),        (116)

where on the right side, P_1, X_1 and M_R(X) are independent, with X_1 =_d X. The case of independent stick-breaking when P_1 =_d β_{r,s} for some r, s > 0 is of particular interest, due to the ease of computation of moments of M_P(X) in this case. Multiply (116) by an independent γ_{r+s} variable, and appeal to the beta-gamma algebra (7) to see that (116) implies

γ_{r+s} M_P(X) =_d γ_r X_1 + γ_s M_R(X),

where on the right side, X_1 and M_R(X) are independent, independent also of γ_r and γ_s, which are independent gamma variables with the indicated parameters. This identity can also be expressed in terms of moment generating functions, by conditioning on all except the gamma variables. For instance, it applies when X_p := 1(U ≤ p) is the indicator variable of an event with probability p, and P_1 =_d β_{r,s} is independent of the residual fractions (R_2, R_3, . . .). Formula (117) is a generalization of Proposition 3 of Hjort and Ongaro (2005), which is the particular case with r = 1 and s = θ > 0 of greatest interest in Bayesian nonparametric inference. See also Proposition 4 of Hjort and Ongaro (2005). If the stick-breaking factors are i.i.d., then (116) holds with R =_d P, implying that the distribution of X̄ := M_P(X) solves the stochastic equation

X̄ =_d P_1 X_1 + (1 − P_1) X̄,

where on the right side P_1, X_1 and X̄ are independent. As shown by Feigin and Tweedie (1989) and Diaconis and Freedman (1999), this stochastic equation uniquely determines the distribution of X̄ under mild regularity conditions. See Hjort and Ongaro (2005, Proposition 9) regarding the important case of the (0, θ) model with P_1 =_d β_{1,θ} for some θ > 0.
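A minimal numerical sketch of the residual allocation construction (115): the helper below works for any sequence of stick-breaking factors, and the example uses independent beta(1, θ) factors, the choice that (as discussed below) produces the GEM(0, θ) model.

```python
import numpy as np

def stick_breaking(factors):
    """Residual allocation (115): P_i = H_i * prod_{j < i} (1 - H_j)."""
    factors = np.asarray(factors, dtype=float)
    leftover = np.concatenate(([1.0], np.cumprod(1.0 - factors[:-1])))
    return factors * leftover

rng = np.random.default_rng(1)
theta = 2.0
H = rng.beta(1.0, theta, size=60)    # independent stick-breaking factors H_i
P = stick_breaking(H)                # first 60 terms of the random discrete distribution
print(P[:5])
print(P.sum())                       # approaches 1 as more factors are included
```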
Normalized increments of a subordinator
A well known method of construction of random discrete distributions P = (P 1 , P 2 , . . .) is to start from a sequence of non-negative random variables (A 1 , A 2 , . . .), and then normalize these variables by their sum A Σ : Here it is assumed that P(A Σ > 0) = 1, which provided P(A i > 0) > 0 for some i can always be arranged by conditioning on the event (A Σ > 0).
is an increasing process with stationary independent increments, and the A i are the independent increments of A(•) over consecutive intervals of lengths θ i with i θ i = θ. The normalizing factor A Σ in (120) is then A Σ = A(θ).
A closely related, but more important construction, with the same normalizing factor A(θ), is obtained by supposing that A i = A i (θ) in (120) are some exhaustive list of the jumps ∆A(r) := A(r)−A(r−) with ∆A(r) > 0 and 0 ≤ r ≤ θ, for a subordinator with no drift component, meaning that almost surely Precise definition of the A i (θ) and the corresponding P i (θ) in (120) requires an ordering for these jumps. However, according to Corollary 9, the distribution of P -means M P (X), and all other aspects of the partition structure derived from P , do not depend on what ordering of jumps is chosen. As shown by Lévy's analysis of occupation times of Brownian motion, it may be possible to identify the distributions of various P -means by suitable decompositions like (12), even without fully specifying the ordering in a construction of P from a countable collection of interval lengths. Historically, this was done by Lévy and Lamperti, decades before analysis of the size-biased orderings of jumps of a subordinator by McCloskey, and the ranked jumps by Ferguson and Klass (1972) and Kingman (1975). According to the Lévy-Itô theory of subordinators, the jumps A i (θ) in (121) are the points of a Poisson point process on (0, ∞) with intensity measure Λ(•), for some Lévy measure Λ on (0, ∞), which is uniquely determined by the Lévy-Khintchine representation of the Laplace exponent of the subordinator Φ(λ) : The joint law of ranked jumps A ↓ (θ) is then easily read from the Poisson description of the associated counting process (122), as detailed in Ferguson and Klass (1972). More or less explicit descriptions of the finite dimensional distributions of (P ↓ j (θ), j = 1, 2, . . .) are known. See Pitman and Yor (1997a, Proposition 22) which reviews earlier work on ranked discrete distributions. But to derive partition probabilities or distributions of P -means, ranked discrete distributions are impossible to work with. For such purposes, a much better ordering is the size-biased ordering P * introduced in this setting by McCloskey (1965). McCloskey imagined each A i (θ) to be a Poisson intensity rate of trapping, called the abundance of some species labeled by i, in a species sampling model driven by a collection of independent Poisson point processes of random rates A i (θ), for some fixed parameter value θ > 0. McCloskey showed that for A i (θ) the jumps of a standard gamma process (γ(r), 0 ≤ r ≤ θ), in the size-biased order of their discovery in the Poisson species sampling model, the resulting random discrete distribution P * has i.i.d. beta(1, θ) distributed residual fractions, and that beta(1, θ) is the only possible distribution of i.i.d. residual fractions which generates a random discrete distribution with its components in size-biased random order. Later work showed that this GEM(0, θ) model for P * introduced by McCloskey is the size-biased presentation of limit frequencies associated with the limit model proposed earlier by Fisher (1943), with partition probabilities governed by the Ewens sampling formula. Before discussing the GEM(0, θ) this model in more detail, the following proposition presents a fundamental connection between the more elementary model (120) with (P 1 , P 2 , . . .) the normalized increments of some subordinator A(•) over some fixed sequence of intervals of lengths θ i with i θ i = θ, and the model obtained from the same subordinator by some ordering of its relative jump sizes.
Proposition 25. Let P θ (•) := j 1(Y j ∈ •)P j (θ) be the random probability measure on an abstract space (S, S) defined as in (2) by assigning i.i.d. random locations Y i to each normalized jump P i (θ) of a subordinator up to time θ. Then for every ordered partition (S 1 , S 2 , . . .) of S into disjoint measurable subsets with θP(Y j ∈ S i ) = θ i , there is the equality in distribution of discrete random distributions on the positive integers where on the right side the A i (θ i i) are the independent increments of the subordinator A over a partition of [0, θ] into a succession of disjoint intervals of lengths θ i with Proof. This is a straightforward consequence of standard marking and thinning properties of Poisson point processes, which make the This proposition yields a fairly explicit description of the finite dimensional distributions of the random measure P θ (•) on S, as well as the distribution of various P -means: Corollary 26. Let P (θ) := (P j (θ), j = 1, 2, . . .) be the sequence of normalized jumps of a subordinator (A(r), 0 ≤ r ≤ θ) governed by a Lévy measure Λ with infinite total mass. Then every discrete random variable X := i a i X pi , with distinct possible values x i , and X pi the Bernoulli(p i ) indicators of disjoint events (X = x i ) with p i := P(X = x i ) subject to i p i = 1, the distribution of M P (θ) (X), the P (θ)-mean of a sequence of i.i.d. copies of X independent of P (θ), is determined by the equality in distribution where the right side is a corresponding normalized linear combination of independent increments A i (θp i ) of the subordinator A over a partition of [0, θ] into disjoint intervals, as in (125). If X has an infinite number of possible values, (126) means that if either side is well defined by almost sure absolute convergence, then so is the other, and the distributions of both sides are equal.
Proof. The case of a finite sum is read immediately from the previous proposition. The case of infinite sums then follows by an obvious approximation argument.
These distributions of P -means can be described much more explicitly in the particular cases of gamma and stable subordinators, as discussed further below. See also Regazzini, Lijoi, and Prünster (2003), regarding more general subordinators.
Dirichlet distributions and processes.
The model for a random discrete distribution derived from normalized increments of a subordinator is of special interest for the standard gamma subordinator A(r) = γ(r) for r > 0, defined by the standard gamma density (4). The convolution property of gamma distributions, that γ(r) + γ (s) for independent gamma variables of the indicated parameters r, s > 0, is part of the basic beta-gamma algebra (6)-(7) which underlies all the following calculations with the gamma process. First of all, this property allows the construction of the standard gamma subordinator with stationary independent increments. For any subordinator A, it is known (Sato, 1999, Corollary 8.9) that for each continuity point > 0 of its Lévy measure Λ(•), the restriction of Λ(•) to ( , ∞) is the weak limit as r ↓ 0 of the same restriction of the measure r −1 P(A(r) ∈ •). For the gamma density (4), in this limit there is the pointwise convergence of densities at each x > 0 because rΓ(r) = Γ(r + 1) → Γ(1) = 1. This identifies the Lévy measure of the gamma process hence the Lévy-Khintchine exponent which is a Frullani integral. The corresponding Laplace transform is obtained more easily by integration with respect to the gamma(r) density (4): The negative binomial expansion of this Laplace transform in powers of −λ encodes the moments of γ(θ): Hence, by equating coefficients of λ n , the list of integer moments of a gamma(θ) variable: Apart from the last equality, this moment evaluation holds also for all real n > −θ, by direct integration and the definition of the gamma function. Easily from (131) by beta-gamma algebra, or by direct integration, there is the corresponding beta moment formula: where for non-negative integers r and s, the right side involves just factorial powers of r, s and r + s, and the formula extends to all real n > −r and m > −s with the general definition (131) of the Pochhammer symbol (θ) n . This Pochhammer symbol, appearing in most formulas involving Dirichlet distributions with total weight θ, is often best understood through beta-gamma algebra as the nth monent of a gamma(θ) variable, that is the magic multiplier which makes the Dirichlet components independent. The Dirichlet distribution of P with weights (θ 1 , θ 2 , . . .) is the distribution obtained as P i := A i /A(θ) from the normalized subordinator increments construction (120), The finite Dirichlet (θ 1 , . . . , θ m ) distribution of P , is the distribution of (P 1 , . . . , P m ) on the m-simplex m i=1 P i = 1 so obtained by taking θ i = 0 for i > m. This distribution can be characterized in a number of different ways. For instance, by the joint density of (P 1 , . . . , P m−1 ) at (u 1 , . . . , u m−1 ) relative to Lebesgue measure in R m−1 , which is or by its product moments which are easily obtained by beta-gamma algebra, like the case (132) for m = 2.
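A small numerical illustration of the normalized-gamma construction of the finite Dirichlet distribution. The check below compares one simulated product moment with the standard closed form E[P_1 P_2] = θ_1 θ_2 / (θ(θ + 1)), and one marginal mean with θ_1/θ; the particular weights are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
weights = np.array([0.5, 1.0, 2.5])            # (theta_1, ..., theta_m), with theta = 4
theta = weights.sum()
reps = 200000

# Construction (120): independent gamma increments normalized by their sum.
A = rng.gamma(shape=weights, size=(reps, len(weights)))
P = A / A.sum(axis=1, keepdims=True)

print((P[:, 0] * P[:, 1]).mean(),              # simulated E[P_1 P_2]
      weights[0] * weights[1] / (theta * (theta + 1.0)))
print(P[:, 0].mean(), weights[0] / theta)      # P_1 is beta(theta_1, theta - theta_1)
```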
The symmetric Dirichlet distribution with total weight θ, denoted here by Dirichlet(m||θ), is the particular case with θ i ≡ θ/m for 1 ≤ i ≤ m. As examples: • the distribution of the m consecutive spacings between order statistics of m − 1 independent uniform [0, 1] variables is the Dirichlet(m||m) distribution with m weights equal to 1.
• For any integer composition (m 1 , . . . , m k ) of m, a finite Dirichlet (m 1 , . . . , m k ) random vector can then be constructed from suitable disjoint sums of terms in a Dirichlet(m||m) random vector, by property (ii) in the following proposition.
This proposition summarizes some well known properties of the Dirichlet model for P .
(ii) For each partition of positive integers into a finite number of disjoint subsets B 1 , . . . , B m , the distribution of (P (B i ), 1 ≤ i ≤ m) is the finite Dirichlet (θP (B i ), 1 ≤ i ≤ m) distribution on the m-simplex.
(iv) This model is identical to the residual allocation model (115) with independent beta distributed factors Proof. Straightforward applications of the basic beta-gamma algebra (6)- (7).
These definitions and properties of Dirichlet distributions allow Proposition 25 and its corollary to be combined and restated as follows, for the Dirichlet random discrete distributions on abstract spaces introduced by Ferguson (1973).
Proposition 28. Let P θ (•) := j 1(Y j ∈ •)P j (θ) be the random probability measure on an abstract space (S, S) defined as in (2) by assigning i.i.d. random locations Y j to each normalized jump P j (θ) of a standard gamma subordinator up to time θ. Then for every ordered partition (S 1 , S 2 , . . .) of S into disjoint measurable subsets with θP(Y j ∈ S i ) = θ i , the sequence (P θ (S i ), i ≥ 1) has the Dirichlet distribution with parameters (θ i , i ≥ 1). That is where the γ i (θ i ) are the independent gamma(θ i ) distributed increments of the gamma subordinator over a partition of [0, θ] into disjoint intervals of lengths θ i . Moreover, for each discrete distribution of X := i a i X pi as in (126), there is the particular case of (126) where P (θ) is a random discrete distribution defined by any exhaustive listing of the normalized jumps P j (θ) of a standard gamma subordinator up to time θ.
Finite Dirichlet means
As a general remark, if the X i in a random average X := i X i P i are either constants, or made so by conditioning, say X i = x i for some bounded sequence of numbers x i , then as (x i ) ranges over bounded sequences, the collection of distributions of X, or a suitable collection of moments or transforms of those distributions, provides an encoding of the joint distribution of random weights P i . This approach works very nicely for the Dirichlet model: Proposition 29. [Von Neumann (1941), Watson (1956) ] For each fixed sequence of non-negative coefficients (x 1 , . . . , x m ) and (P 1 , . . . , P m ) with Dirichlet (θ 1 , . . . , θ m ) distribution with m i=1 θ i = θ, the distribution of the finite Dirichlet mean m i=1 x i P i is uniquely determined by the following Laplace transform of γ(θ) m i=1 x i P i , for γ(θ) with gamma(θ) distribution independent of (P 1 , . . . , P m ): (136) For λ = 1, with the left side regarded as the multivariate Laplace transform of the random vector γ(θ)(P 1 , . . . , P m ) with arguments x 1 , . . . , x m , this formula uniquely characterizes the Dirichlet (θ 1 , . . . , θ m ) distribution of (P 1 , . . . , P m ).
Proof. After multiplying both sides of (136) by an independent γ(θ) variable, the betagamma algebra makes the P i γ(θ) a collection of independent gamma(θ i ) variables, hence for independent γ i (θ i ) with sum γ(θ), as above. Hence by taking Laplace transforms: Condition on all the P i , and integrate out the gamma variables using the Laplace transform (129), to obtain the two further expressions in (136). For each fixed choice of coefficients x i , this formula determines the Laplace transform of γ(θ) i x i P i , hence the distribution of γ(θ) i x i P i , hence also the distribution of the finite Dirichlet mean i x i P i , by Lemma 4. The basic Dirichlet mean transform (136) has a long history, dating back to Von Neumann (1941), who gave a more complicated derivation in the case of particular interest in mathematical statistics, with parameters θ i = k i /2 for some positive integers k i with m i=1 k i = k when i for a sequence of i.i.d. standard Gaussian variables Z i . So in this instance, which provided the original motivation for study of the finite Dirichlet distribution in mathematical statistics i x i P i is the ratio of two dependent quadratic forms in a sequence of k i.i.d. standard Gaussian variables. As observed by Von Neumann, for half integer θ i , the basic beta-gamma algebra behind the above formulas, especially the key independence (7) of the Dirichlet distributed ratios and their gamma distributed denominator, follows from the symmetry of the joint distribution of the underlying Gaussian variables in R k with respect to orthonormal transformations. Watson (1956) gave the simple general argument indicated above using beta-gamma algebra. Watson also supposed each θ j to be a multiple of 1/2, but his argument generalizes immediately to general θ i as above. Watson indicated how the same method yields a transform of the joint law of any finite number of linear combinations of Dirichlet variables. Simply take λ = 1 and x j = i t i j x i,j D j in (136) to obtain a joint Laplace transform of i j x i,j D j , 1 ≤ i ≤ m for any matrix of real coefficients x i,j , 1 ≤ i ≤ m, 1 ≤ j ≤ k. This trick, of turning what looks at first like a univariate transform into a multivariate transform, has been rediscovered many times, often without recognizing that it can done so simply by a change of variables. See also Mauldon (1959), Weisberg (1971) Diniz et al. (2002 for detailed studies of the distributions and joint distributions of linear combinations of Dirichlet variables, motivated by applications to linear combinations of order statistics and their spacings. The above proposition was formulated for a fixed sequence of coefficients x 1 , . . . , x m . But a corresponding result for random coefficients (X 1 , . . . , X m ) follows immediately by conditioning: Corollary 30. Let (X 1 , . . . , X m ) be a sequence of random variables independent of (P 1 , . . . , P m ) with Dirichlet (θ 1 , . . . , θ m ) distribution with m i=1 θ i = θ. Then: • the distribution of the random Dirichlet mean i X i P i is uniquely determined by the following Laplace transform: for γ(θ) independent of (P 1 , . . . , P m ), and λ ≥ 0 • If the X i are independent, this holds with E i replaced by i E in the rightmost expression. In particular, if the X i are i.i.d. copies of X, so M P (X) := i X i P i is the P -mean of X for this Dirichlet distribution of P , then • As a special case, for X m||θ the P -mean of X for P = (P 1 , . . . 
, P_m) with the symmetric Dirichlet(m||θ) distribution with total weight θ,

E exp(−λ γ(θ) X̄_{m||θ}) = ( E (1 + λ X)^{−θ/m} )^m.        (141)

To illustrate the basic transform (141) of the distribution of a symmetric Dirichlet mean, observe that for a, b > 0 the beta(a, b) distribution is characterized by

E exp(−λ γ(a + b) β_{a,b}) = (1 + λ)^{−a}        (142)

for γ(a + b) independent of β_{a,b}. Hence easily from (141), for a + b = θ/m,

X =_d β_{a,b}  if and only if  X̄_{m||θ} =_d β_{ma,mb}.        (143)

In the particular case a = b = 1/2, for the symmetric Dirichlet(m||m) mean of i.i.d. copies of X with the arcsine distribution of β_{1/2,1/2}, the implication ⇒ in (143) was established in Roozegar and Soltani (2014) by a more difficult argument involving Stieltjes transforms. See also Homei (2017) where the same case is derived by moment calculations, involving the instance for Dirichlet(m||m) of the general moment formula (106) for P-means.
To illustrate (143) for 0 < p < 1 and q := 1 − p, if a unit interval is cut into m segments by m−1 independent uniform cut points, and a beta(p, q)-distributed fraction of each segment is painted red, independently from one segment to the next, then the total length of red segments has beta(mp, mq) distribution.
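This painted-segments statement is easy to check by simulation. The sketch below compares the empirical mean and variance of the total red length with those of the claimed beta(mp, mq) distribution; the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
m, p = 5, 0.3
q = 1.0 - p
reps = 200000

cuts = np.sort(rng.uniform(size=(reps, m - 1)), axis=1)
edges = np.concatenate([np.zeros((reps, 1)), cuts, np.ones((reps, 1))], axis=1)
segments = np.diff(edges, axis=1)               # m uniform spacings: Dirichlet(m||m) weights
fractions = rng.beta(p, q, size=(reps, m))      # beta(p, q) painted fraction of each segment
red = (segments * fractions).sum(axis=1)        # total red length

a, b = m * p, m * q                             # parameters of the claimed beta(mp, mq) law
print(red.mean(), a / (a + b))                  # both should be near p = 0.3
print(red.var(), a * b / ((a + b) ** 2 * (a + b + 1.0)))
```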
Infinite Dirichlet means
The extension of the basic transforms of Corollary 30 from finite to infinite Dirichlet means is surprisingly easy: Corollary 31. [Infinite Dirichlet mean transform: Cifarelli and Regazzini (1990)] For every non-negative random variable X, and P 0,θ the random discrete distribution derived from the normalized jumps of standard gamma process on [0, θ], the distribution of the distribution of the P 0,θ -mean X 0,θ of X is uniquely determined by the Laplace transform of γ(θ) X 0,θ , for γ(θ) independent of X 0,θ , according to the formula for λ > 0 For unbounded X ≥ 0, this formula should be read with the convention (1+λ∞) −θ = e −∞ = 0, implying P( X 0,θ < ∞) = 1 or 0 according as E log(1 + X) < ∞ or = ∞.
Proof. Suppose first that X is a simple random variable X = Σ_{i=1}^m x_i X_{p_i} for Bernoulli(p_i) indicators X_{p_i} of m disjoint events with probabilities p_i = θ_i/θ. Proposition 28 gives X̄_{0,θ} =_d Σ_i x_i P_i for (P_1, . . . , P_m) with the finite Dirichlet distribution with parameters (θ p_i, 1 ≤ i ≤ m). So Proposition 29 gives

E exp(−λ γ(θ) X̄_{0,θ}) = Π_{i=1}^m (1 + λ x_i)^{−θ p_i} = exp( −θ Σ_{i=1}^m p_i log(1 + λ x_i) ).

This is (144) for simple non-negative X. The case of general X ≥ 0 follows by taking simple X_n with 0 ≤ X_n ↑ X and appealing to the monotone convergence theorem for P-means (79).
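As a numerical illustration of the transform (144): integrating out the independent γ(θ) factor turns the left side into E[(1 + λ X̄_{0,θ})^{−θ}], which can be estimated by simulating X̄_{0,θ} with a truncated GEM(0, θ) stick-breaking representation of the (0, θ) model. The sketch below does this for X uniform on [0, 1], for which E log(1 + λX) has a simple closed form; the equality being checked is the version of (144) written out above.

```python
import numpy as np

rng = np.random.default_rng(4)
theta, lam = 2.0, 1.5
reps, n_terms = 20000, 200

# Truncated GEM(0, theta) weights and the corresponding P-mean of uniform X.
H = rng.beta(1.0, theta, size=(reps, n_terms))
stub = np.concatenate([np.ones((reps, 1)), 1.0 - H[:, :-1]], axis=1)
P = H * np.cumprod(stub, axis=1)
X = rng.uniform(size=(reps, n_terms))
X_mean = (P * X).sum(axis=1)

lhs = ((1.0 + lam * X_mean) ** (-theta)).mean()
# E log(1 + lam*U) for U uniform on [0, 1] equals ((1+lam)log(1+lam) - lam)/lam.
E_log = ((1.0 + lam) * np.log(1.0 + lam) - lam) / lam
rhs = np.exp(-theta * E_log)
print(lhs, rhs)    # should agree up to Monte Carlo and truncation error
```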
See also Sethuraman (2012) for a nice proof of this result without use of transforms. The problem of inverting the transform (144) to obtain more explicit formulas for the distribution of a (0, θ) mean X 0,θ has attracted a great deal of attention. One of the first appearances of the right side of formula (144) in connection with the distribution of a (0, θ) mean X 0,θ is in Hannum et al. (1981, Theorem 2.5), where for X with E|X| < ∞ it is shown that for each real x the formula with defines the characteristic function of a random variable T x , which is a limit in distribution of a linear combination of independent gamma variables with suitable Dirichlet distributed weights. Provided P(X = x) < 1 the distribution of T x is continuous, and such that P( X 0,θ ≤ x) = P(T x ≤ 0).
The c.d.f. of X 0,θ is therefore determined by inversion of the characteristic function (146). Something missing in this discussion of Hannum et al. (1981) identification which is evident by inspection of formula (144) for λ = −it. This observation makes both the identity (148) and the continuity of the distribution of T x completely obvious. It is also clear from Corollary 32 that this description of the distribution of X 0,θ is valid for any X with E log(1+|X|) < ∞. Closely related generalized Stieltjes transforms of the distribution of X 0,θ appear also in Cifarelli and Regazzini (1990), with references to earlier work by those authors. For a later treatment with further references, and explicit inversion formulas for the density of X 0,θ , see (Regazzini et al., 2002, Proposition 2) which is a Fourier variant of Corollary 31, with subsequent analysis involving (148) and inversion of the Fourier transform (146). Surprisingly, none of the above references mention the simple interpretation (149) of T x .
The two-parameter model
As recalled in Section 2.7, following the initial development of the basic infinite Dirichlet model with a single parameter θ by Fisher (who used α instead of θ for the parameter), subsequent work of McCloskey, Ewens, Ferguson and Engen, and the work of Lévy, Lamperti, Dynkin and others on last exit times and occupation times of various stochastic processes related to the stable subordinator of index α ∈ (0, 1), Perman, Pitman, and Yor (1992) developed the two-parameter extension of these basic models for random discrete distributions. The partition structure of this (α, θ) model was described by Pitman (1995), following which Pitman and Yor (1997a) gave an account of the corresponding ranked discrete distributions, and Tsilevich (1997) characterized the distributions of P α,θ -means for the complete range of parameters (α, θ). The (α, θ) model is most easily described by a residual allocation model (115) for generating its size-biased permutation P * , commonly known as the GEM(α, θ) distribution. This is obtained by the particular choice of distributions for independent factors H i with H i d = β 1−α,θ+αi (i = 1, 2, . . .).
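A minimal numerical sketch of this GEM(α, θ) residual allocation model (150), using the beta(1 − α, θ + iα) factors just specified; the EPPF induced by this choice is discussed next.

```python
import numpy as np

def gem_sample(alpha, theta, n_terms, rng):
    """Size-biased frequencies of the (alpha, theta) model via the residual
    allocation model (150), with H_i ~ beta(1 - alpha, theta + i*alpha)."""
    i = np.arange(1, n_terms + 1)
    H = rng.beta(1.0 - alpha, theta + i * alpha)
    leftover = np.concatenate(([1.0], np.cumprod(1.0 - H[:-1])))
    return H * leftover

rng = np.random.default_rng(5)
P = gem_sample(alpha=0.5, theta=1.0, n_terms=2000, rng=rng)
print(P[:5])
print(P.sum())    # close to 1, though the tail is heavier than in the alpha = 0 case
```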
It is easily shown that this EPPF corresponds to the above choice of beta distributed factors in the residual allocation model, and that this choice leads to a well defined random discrete distribution P iff one of following three cases obtains. See Pitman (2006, §3.1) for details and references to original sources.
• GEM(0, θ) = size-biased Dirichlet(∞||θ). This is the case α = 0 and θ ≥ 0, which is the weak limit of the Dirichlet(m||θ) model as m → ∞. In this model, P j > 0 a.s. for all j if θ > 0. Statistical aspects of this limit process were first considered by Fisher (1943). As first shown by McCloskey, the GEM(0, θ) model is the size-biased ordering of relative sizes of jumps of the standard gamma process on [0, θ], relative to their gamma(θ) distributed total. This is also the size-biased distribution of atom sizes of any Dirichlet random measure governed by a continuous measure with total weight θ. The corresponding partition structure is governed by the Ewens sampling formula.
• (α, 0). This model with θ = 0 is the size-biased ordering of relative sizes of jumps of a stable process of index α on [0, s], for any fixed time s. Equivalently in distribution, an interval partition of [0, 1] may be created by the collection of maximal open intervals in the complement of the range of the stable subordinator, relative to [0, 1]. Then the GEM(α, 0) distributed (P j ) may be obtained either as a size-biased ordering of the lengths of these intervals, or by letting P 1 be the last (meander) interval with right end 1, and size-biasing the order of the rest of the intervals.
• (α, α). This case with θ = α ∈ (0, 1), is derived from the previous construction by conditioning the stable subordinator to hit the point 1. So there is no last interval, rather an exchangeable interval partition, whose lengths in size-biased order are GEM(α, α). Equivalently, this is the sequence of lengths of excursions, in size-biased random order, for the excursions of a Bessel bridge of dimension (2 − 2α) from (0, 0) to (1, 0).
• (α, θ) for general 0 < α < 1 and θ > −α. The GEM(α, θ) model for generating P, and a random sample from P from which the partition structure is created, is absolutely continuous relative to the GEM(α, 0) model, with density factor c_{α,θ} S_α^{θ/α}, where S_α, the α-diversity of P, is the almost sure limit of K_n/n^α as n → ∞ for K_n the number of distinct elements in a sample of size n from P, and c_{α,θ} := Γ(1 + θ)/Γ(1 + θ/α) is a normalization constant. So if E_{α,θ} is the expectation operator governing P as a GEM(α, θ), and a sample (J_1, J_2, . . .) from P, then for every non-negative random variable Y which is a measurable function of P and the sample (J_1, J_2, . . .) from P:

E_{α,θ} Y = c_{α,θ} E_{α,0} [ S_α^{θ/α} Y ].

In the 1990's, this (α, θ) model for a random discrete distribution P, and its associated partition structures and P-means, were extensively studied in a series of articles cited in Section 5.6. Since around 2000, the merits of this (α, θ) model for a random discrete distribution P have been widely acknowledged, and there is by now a substantial literature of developments and applications of this model in various contexts, as mentioned in the introduction.
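The α-diversity statement above, that K_n/n^α converges almost surely to S_α, can be illustrated numerically. The sketch below assumes the standard (α, θ) Chinese-restaurant prediction rule, which is not spelled out in the text here: after m draws in which k distinct values have appeared, a new value appears with probability (θ + kα)/(m + θ). Only the number of distinct values needs to be tracked for this purpose.

```python
import numpy as np

def diversity(alpha, theta, n, reps, rng):
    """Simulate K_n, the number of distinct values in a sample of size n from
    the (alpha, theta) model, via the Chinese-restaurant prediction rule."""
    K = np.zeros(reps, dtype=int)
    for r in range(reps):
        k = 0
        for m in range(n):
            if rng.random() < (theta + k * alpha) / (m + theta):
                k += 1
        K[r] = k
    return K

rng = np.random.default_rng(6)
alpha, theta, n = 0.5, 1.0, 5000
K = diversity(alpha, theta, n, reps=200, rng=rng)
ratio = K / n ** alpha
print(ratio.mean(), ratio.std())   # K_n / n^alpha fluctuates around the random limit S_alpha
```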
Two-parameter means
Looking at the general moment formula for P -means (106), it is evident that this formula will simplify greatly if the EPPF factors as for some pair of weight sequences v(k), k = 1, 2, . . . and w(m), m = 1, 2, . . .. For then by (66) the corresponding ECPF factors as p ex (n 1 , . . . , n k ) = v(k)/k! c(n)/n! k i=1 w(n i )/n i !
It was shown by Kerov (2005) that apart from some degenerate limit cases, the only EPPFs of the form (153), defined for all positive integer compositions and subject to the consistency constraint (69) for all n, are those in displayed in (151), corresponding to a random discrete distribution P whose size-biased presentation follows the GEM(α, θ) residual allocation model (150). Assuming that (154) is an EPPF, which we know is possible for suitable choices of weights v(k), w(n) and c(n), the general moment formula (106) reduces easily to the identity which for X = X = 1 gives Thus the general formula (106) for moments of P -means has the following corollary.
[Composite moment formula for (α, θ)-means; Tsilevich (1997)]. For any presentation of an (α, θ) EPPF in the product form (153) for some sequences of weights v(k) and w(n) with exponential generating functions V and W as above, these generating functions are convergent in some neighborhood of the origin, and for each bounded random variable X the distribution of the (α, θ)-mean X is the unique distribution whose positive integer moments are determined by the identity of formal power series in λ E[V • W (λ X)] = V (EW (λX)).
To check the claim of convergence of the generating functions, it seems necessary to check case by case as below. But this composite moment formula for (α, θ)-means provides a remarkable unification of a number of different formulas that were first discovered in the special cases listed below. This composite moment formula for Pmeans is a variation of the compositional or Faà di Bruno formula, which shows how the coefficients c(n) of the composite function C(λ) = V • W (λ) are determined the two weight sequences v(k) and w(m). See Pitman (2006, §1.2). Consider the product π(n 1 , . . . , n k ) := v(k) k i=1 w(n i ) appearing in (153), without the factor of c(n) in the denominator. Starting from any two sequences of weights v(k) and w(m) such that this product is non-negative for all (n 1 , . . . , n k ), the compositional formula (157) determines the sequence of non-negative coefficients c(n) that is necessary to make p(n 1 , . . . , n k ) := π(n 1 , . . . , n k )/c(n) the EPPF of some exchangeable random partition Π n of [n] for each n. However, for these Π n to be derived by sampling from some random discrete distribution P , it is necessary that they be consistent as n varies in the sense of (69), and it is this consistency requirement that limits the scope of application of the composite moment formula to the (α, θ) model.
which allows the product form (153) The corresponding exponential generating functions then all simplify by negative binomial expansions: which magically combine as they must according to the composite formmula (157): This argument simplifies a similar argument due to Tsilevich (1997), by working consistently with compositions rather than partitions of n. A puzzling feature of the argument is that for 0 < α < 1, there is no obvious interpretation of the weight sequence w(m) = (−α) m in probabilisitic or combinatorial terms, due to negativity of the weight for m = 1. This is compensated by the alternating sign in the definition of v(k), which ensures that the product (153) is positive, as it must be for all compositions of positive integers (n 1 , . . . , n k ). Still, the result of this algebraically simple calculation is a remarkable unified formula for what appear at first to be extremely different cases of the (α, θ) model, that is the elementary symmetric Dirichlet (m||θ) case with only a finite number m of positive P i , and the fat tailed (α, θ) models for 0 < α < 1.
Also, for α = 0, θ = 0 and all X with E|X| n < ∞ for some n = 1, 2, . . . the nth moment of X α,θ is well defined, and given by the equality of coefficients of λ n in the formal power series And for 0 < α < 1 and arbitrary θ > −α • X α,θ is finite with probability one for all θ > −α if EX α < ∞; • X α,θ is infinite with probability one for all θ > −α if EX α = ∞.
Proof. Formula (166) is read from Corollary 33, in the first instance for bounded X, when the convergence of all power series is easily justified. The formula then extends to unbounded X ≥ 0 by monotone convergence, using the consequence of Proposition 13 that P-means X̄ and Ȳ of X and Y with 0 ≤ X ≤ Y can always be constructed so that X̄ ≤ Ȳ, for (X_i, Y_i) a sequence of i.i.d. copies of (X, Y). It follows easily that if E|X|^n < ∞ for some n = 1, 2, . . . then the nth moment of X̄_{α,θ} is well defined, and can be evaluated as indicated by equating coefficients in the formal power series. The conclusions regarding finiteness of X̄_{α,θ} follow similarly by monotone approximation, in the first instance for θ = 0, and then for all θ > −α by the result of Pitman and Yor (1997a) that for each fixed 0 < α < 1 the laws of GEM(α, θ) distributions are mutually absolutely continuous as θ varies.
0-pi transition in SFS junctions with strongly spin-dependent scattering
We develop a theory of the proximity effect in superconductor - GMR alloy - superconductor trilayers, which takes into account the strong spin dependence of electron scattering off compositional disorder in a diluted ferromagnetic alloy. We show that in such a system the critical current oscillations as a function of the thickness of the ferromagnetic layer, with the period of $v_{F}/2I$, decay exponentially with a characteristic length of the order of the mean free path.
The recent observation 1,2,3,4 of Josephson junctions with negative coupling, 5,6 also known as π junctions, has attracted a lot of attention to hybrid superconductorferromagnet -superconductor (SFS) structures.In contrast to conventional Josephson junctions, such as superconductor -normal metal systems, where the ground state corresponds to the superconducting phase difference ϕ of zero, the phase difference in a SFS trilayer can take both ϕ = 0 and π values, depending on the thickness of the ferromagnetic layer.Both 0 and π states in SFS trilayers have been deduced from the measurements of the density of states 1 and the critical current as a function of magnetic flux and temperature. 2,4,7,8In particular, the critical current exhibits oscillations superimposed on the exponential decay as a function of the thickness of the ferromagnetic layer. 3,9The decay length ξ d and the period ξ o of these oscillations have been measured, providing comparable yet unequal experimental values of these two parameters.
In ballistic SFS structures, the critical current is expected to oscillate with the period ξ_o = v_F/2I. For a ferromagnetic metal with a strong exchange splitting I, fluctuations of the width of the ferromagnetic layer suppress the appearance of the proximity effect, despite the fact that in ballistic structures Cooper pairs decay with distance according to a power law rather than exponentially. Moreover, it has been shown 10 that, when the electron motion in a ferromagnetic film with large I is diffusive, the randomisation of the oscillation phase over paths of different lengths leads to the exponential suppression of proximity at the length of the mean free path: ξ_d ∼ l for the case of Iτ ≫ 1 (where τ is the electron mean free time). To enhance the proximity effect in SFS multilayers, one may want to use weakly ferromagnetic alloys, where the exchange field I is reduced by diluting the magnetic component. The analysis of diluted systems with Iτ ≪ 1, based upon modelling disorder in SFS junctions as spin-independent impurities, has shown that the decay length may be expected 6,11 to extend beyond the mean free path range, such that ξ_d ∼ ξ_o = √(D/I), where D = v_F² τ/3. In this paper, we show that the possibility of prolonging the extent of the superconducting proximity effect in SFS structures by making them of diluted magnetic alloys is strongly limited. Following the theory of suppression of superconductivity by magnetic impurities 12, earlier theories 13,14 took into account the effect of magnetic disorder by including in the Usadel equation a weak Cooper pair relaxation described by a phenomenological spin relaxation rate τ_s^{-1}. Keeping in mind that even in a weak ferromagnet an electron spin flip is an inelastic process and should be accompanied by the excitation of a magnon, we attribute the pair breaking in a ferromagnetic alloy to a giant magnetoresistance (GMR) type effect. As noticed in earlier GMR studies 15,16, a feature of ferromagnetic alloys is that elastic electron scattering in them is strongly spin dependent. Indeed, one scattering event off strongly spin-dependent disorder, seen differently by spin-up and spin-down electrons, is enough to break a singlet Cooper pair. In such a case, the decay length of a Cooper pair is of the order of the mean free path, ξ_d ∼ l. Since, in this case, the use of the Usadel equations adopted in the previous studies of disordered SFS junctions 6,11 does not hold, here we employ a nonlocal approach based on the solution of the Eilenberger equation 6,10,13,17 to describe 0-π Josephson oscillations as a function of the thickness of the diluted ferromagnetic alloy layer.
To describe a dilute ferromagnetic alloy, we use the following Hamiltonian (a 2 × 2 matrix in the spin space), adopted 15 in GMR theory, where V and J describe magnetic atoms embedded into a normal metal, and σ is the vector of Pauli matrices.The average J = e z I determines the exchange splitting for conduction band electrons, and V = 0. Since every magnetic atom produces both scalar V and exchange J potentials, we use the following correlation functions for magnetic and nonmagnetic disorder, The starting point for quantitative description is Eilenberger equation for the retarded component of the semi-classical Green's function, where (f + ) αβ (r, t; n, ω) = −[f αβ (r, t; −n, −ω)] * , τ 3 acts in the Nambu space, n = p/p, the self-energy has the form and ǧ = ǧ d 2 n/4π is the Green's function averaged over momentum direction.For the weak proximity effect, Eq. ( 2) can be linearized around the zero-order Green's function ǧ0 = τ 3 .Performing the expansion up to first order, we obtain The linearization of the Eilenberger equation and subsequent analysis are based upon the assumption of weak coupling between superconductors and the ferromagnet, which is realized, for instance, if these are separated by an opaque barrier with the low transparency Θ ≪ 1.The appropriate boundary conditions have been derived by Zaitsev, 18 where ǧs/a i = (ǧ i (n z )± ǧi (−n z ))/2 (i = S and i = F for a superconductor and a ferromagnetic alloy, respectively), n z > 0, where n z is the projection of n = p/p onto the direction normal to the SF interface, and ǧs ± = (ǧ s S ± ǧs F )/2.In the case of low transparency Θ ≪ 1, we find that in the first order in Θ, where ǧ(0) S and ǧ(0) F are the Green's functions in the two materials when those are detached (Θ = 0).Together with Eq. ( 4) this gives us closed set of equations.
It is convenient to represent the semiclassical Green's function f (n, z) as a combination of two functions of a positive argument, n z > 0: In this representation the boundary conditions take the form where d F is the thickness of the ferromagnetic layer.The equations for f 1 and f 2 take the form where n z > 0 and α = (τ is a 2 × 2 matrix acting on the 2 × 2 matrix f , and The averaged Green's function equals In the case of a thick ferromagnetic layer, such that e −dF /l ≪ 1, where l = v F τ is the mean free path, one can write down the formal solution of Eqs.(6) as The subsequent algebra includes adding and averaging Eqs.(9), which leads to the integral equation for f .Having presented f (z) as the sum, we find that the (matrix) function h(z) satisfies Fredholm equation of the second type, where 1 0 e −z/nzl dn z .Up to this point, we could still reduce our equations to Usadel equations provided the diffusion approximation holds, (1 − α)|| Im λ|| ≪ 1.In the rest of the paper, work outside this regime and consider the ballistic situation.For α = 0, the exact solution of Eq. ( 11) is h(z) = K(λz)/2.Generalizing, we find that in the ballistic case the solution is determined by behavior of functions K and G which at z ≫ l are K(z) ≈ G(z) ≈ e −z/l l/z.Assuming that solution falls off exponentially as e −λz/l , one can see that in Eq. ( 11) the last term in the integral can be neglected everywhere except for a small region near the boundary, z = d F .This enables us to split the solution of Eq. ( 11) into two parts, The first term is relevant everywhere and is the main term of the solution, whereas the second one is only important close to the boundary, d F − z ∼ l, when the exponents become of the same order.Each of the matrix functions h i (i = L, R) satisfies the equation where G(λz) = G(λz)e λz/l and S L (λz) = K(λz) exp(λz).
Far from the left boundary, z ≫ l, we parameterize h L (z) = A(z)l/λz, 1/λ ≡ λ −1 .Substituting it into Eq.( 13) and keeping the leading order in l/z, we obtain the equation for the diagonal matrix A, where the matrix ξ = λ ∞ 0 h L (z)e −2λz dz/l does not depend on z.The last term in Eq. ( 14) in the leading order in ln −1 z is A(z)(γ + ln(λz/l)) with γ being Euler's constant.Subsequently, we obtain a differential equation for the function z 0 A(z ′ )dz ′ /z ′ .The solution far from the boundaries reads where a constant δ(α, λ) is of order one; at α = 0 the exact solution gives δ(0, λ) = 2. Numerical calculations show that δ(α, λ) is still close to 2 even for α = 1.
Having solved the equation for $h_L$, we use it to determine the matrix function $S_R$, and from it the solution for the function $h_R(z)$. Within the approximations used in the above analysis of the Eilenberger equation for the anomalous Green function $f$, the Josephson current density in the SFS structure can be represented as a frequency integral weighted by the Fermi distribution function $n(\omega)$. Substituting the expressions for $h(z)$ into Eq. (10) and Eqs. (9), we find that the result involves a matrix function $Z$ which depends on $d_F$ logarithmically and for $\alpha = 0$ equals $Z(0,\lambda,d_F) = 1$. Generally, in the leading order in $d_F^{-1}$, $Z$ is expressed through $\xi = \lambda \int_0^{\infty} h_L(z)\,e^{-2\lambda z}\,dz/l$ and $\zeta = \alpha^{-1}\lambda^2 d_F \int_0^{\infty} h_R(z)\,K(\lambda z)\,dz/l$. For $d_F \gg l$ the quantity $\xi$ is constant, and $A$ depends on it logarithmically; the quantity $\zeta$ depends on $d_F$ in the same way as $A$.

For $\Delta\tau \ll \|\lambda\| \sim 1$, the calculation of the frequency integral leads to an expression that is most conveniently represented as a sum over Matsubara frequencies $\omega_n = 2\pi T(n + 1/2)$. This is equivalent to the replacement $i\omega \to \omega_n$ in the above expressions involving the matrix $\lambda$. As a result, $\lambda$ becomes a diagonal matrix with two complex-conjugate eigenvalues, so that in the Matsubara representation $Z$ has the same property and can be written in terms of a modulus $|Z|$ and a phase $\varphi_Z$. Note that for $\omega_n\tau \lesssim \Delta\tau \lesssim 1$, $|Z|$ and $\varphi_Z$ are two parameters of the structure, independent of the Matsubara frequency. For $\alpha = 0$, one finds $Z = 1$. The dependence of $|Z|$ on the parameter $\alpha$ is plotted in Fig. 1. Although the parameters $|Z|$ and $\varphi_Z$ depend on the quantities $\alpha = (\tau_J - \tau_V)/(\tau_V + \tau_J)$, $I\tau$, and $d_F/l$, this fact does not qualitatively affect the results. Outside the regimes $I\tau \ll 1$ and $\tau_J \ll \tau_V$ (where our results are not applicable), $Z$ is a smooth function of $d_F$ of order 1 that does not contain any dependence on scales of order $\xi_d \sim l$ or $\xi_o = v_F/2I$. Finally, we arrive at the expression for the critical current density [in $j = j_c \sin(\varphi_L - \varphi_R)$], given in Eq. (19). In the limiting cases, the summation over Matsubara frequencies $\omega_n$ can be calculated explicitly. For a ferromagnetic layer with thickness much greater than the coherence length in the superconductor, $d_F \gg v_F/\Delta$, the sum equals $2T\,\sinh^{-1}(2\pi T d_F/v_F)$. In the opposite case of a thin layer, $d_F \ll v_F/\Delta$, one obtains $(\Delta/4)\tanh(\Delta/2T)$. At zero temperature, the sum can be converted into an integral, which equals $\mathrm{Ci}(a)\sin(a) + \bigl(\pi/2 - \mathrm{Si}(a)\bigr)\cos(a)$, where $a = 2\Delta d_F/v_F$, and the functions $\mathrm{Si}$ and $\mathrm{Ci}$ are the sine and cosine integrals, respectively. For high temperature, $d_F \gg v_F/T$, only the lowest Matsubara frequency contributes to the sum.

The dependence of the critical current on the ferromagnetic layer thickness described by Eq. (19) for a weakly ferromagnetic layer with $d_F > l$ is shown in Fig. 2. Even when the dilution of the ferromagnetic layer is such that the exchange energy in it is weak, $I\tau \ll 1$, oscillations of $j_c$ as a function of the layer thickness, with the period $\xi_o = v_F/2I$, decay exponentially at the length scale of the mean free path, $\xi_d = l$, similarly to what happens in a disordered ferromagnetic layer with a strong exchange field, $I\tau \gg 1$ (Ref. 10). Our results for $I\tau \gg 1$ coincide with those of Ref. 10: for strong fields, the phase randomization of the order parameter is effective irrespective of the nature of the scatterers. The dependence of the critical current on the thickness of the ferromagnetic layer in Eq. (19) resembles the experimentally observed suppression of the proximity effect at a length scale comparable to the mean free path measured in the same material (Ref. 2). Note that theories involving the generation of a triplet order parameter due to nonuniform (spiral) magnetization in the ferromagnet (Ref. 19) arrive at the opposite conclusion, predicting a weaker decay of the order parameter.
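The zero-temperature limiting factor quoted above, $\mathrm{Ci}(a)\sin(a) + (\pi/2 - \mathrm{Si}(a))\cos(a)$, and the finite-temperature Matsubara sum can be checked numerically. The sketch below is only illustrative: it assumes a simple ballistic kernel $e^{-2\omega_n d_F/v_F}$ for the Matsubara sum, which reproduces the qualitative exponential suppression with temperature but is not the full expression entering Eq. (19).

```python
import numpy as np
from scipy.special import sici  # sici(x) returns (Si(x), Ci(x))

def zero_temperature_factor(a):
    """Zero-T factor Ci(a)*sin(a) + (pi/2 - Si(a))*cos(a), with a = 2*Delta*d_F/v_F."""
    si, ci = sici(a)
    return ci * np.sin(a) + (np.pi / 2.0 - si) * np.cos(a)

def matsubara_sum(T, dF_over_vF, n_max=5000):
    """Illustrative sum 2*T * sum_n exp(-2*omega_n*d_F/v_F), omega_n = 2*pi*T*(n + 1/2).

    The simple exponential kernel is an assumption made for this sketch; it decays
    roughly as exp(-2*pi*T*d_F/v_F) for a thick layer, matching the trend in the text.
    """
    n = np.arange(n_max)
    omega_n = 2.0 * np.pi * T * (n + 0.5)
    return 2.0 * T * np.sum(np.exp(-2.0 * omega_n * dF_over_vF))

if __name__ == "__main__":
    # Units: energies in Delta, the length d_F/v_F in 1/Delta.
    print(zero_temperature_factor(1.0))
    for T in (0.05, 0.1, 0.2):
        print(T, matsubara_sum(T, dF_over_vF=3.0))
```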
In conclusion, we have developed a theory of the proximity effect in superconductor / weakly ferromagnetic GMR alloy / superconductor trilayers that takes into account the strong spin dependence of electron scattering off compositional disorder. The result, Eq. (19), describes the 0–π transition of the Josephson effect as a function of the thickness of the ferromagnetic layer $d_F$: oscillations occur with the period $\xi_o = v_F/2I$ and decay exponentially with a characteristic length $\xi_d = l$ of the order of the mean free path, even in the regime $I\tau \ll 1$. This result complements previous studies of the spin-singlet proximity effect in superconductor–ferromagnet hybrid structures performed for ballistic and diffusive systems with spin-independent scattering (Refs. 6, 10, 13), as well as theories of the suppression of the order-parameter oscillations caused by spin-active interfaces (Ref. 20).
FIG. 2: The dependence of the critical current in the SFS trilayer on the ferromagnetic layer thickness. Here, we normalize the current using a notional $j_0 = 2\pi e \nu v_F \Theta^2 \Delta$ and show it for $T = 0$, $\Delta\tau = 0.1$, and several values of the parameters $\alpha$ and $I\tau$: $\alpha = 0$ (dashed lines) and $\alpha = 0.3$ (solid lines). Dips in the value of $j_c$ indicate positions where it disappears and changes sign, thus resulting in a sequence of 0–π transitions. Neighbouring dips always correspond to the same value of $I\tau$, demonstrating only weak dependence of the results on the parameter $\alpha$. For comparison, we also show the decay of the Josephson proximity effect in an SNS structure heavily doped by magnetic scatterers ($I\tau = 0$).
Supporting Care by Interpretation of Expressions about Patient Experience with Machine Learning
Our research aims to develop data analysis of communication related to care seeking and primary care, to discover how interpretations of health expressions evolve with personal growth and learning, and to address the identified needs in measuring quality of life. We provide an overview of the development of a new research methodology that exploits machine learning to analyze patient experience expressions in order to support personalized care and coping in everyday life. The research relies on an online questionnaire in which representatives of various population groups perform interpretation tasks. Dependencies between the answers to the interpretation tasks and the respondents' background information are analyzed with machine learning methods. The research creates new ways to interpret and address the language usage of different groups of patients and impaired people, carefully and distinctively, as part of everyday life and care events.
I. INTRODUCTION
We provide an overview of the development of a new research methodology that exploits machine learning to analyze patient experience expressions in order to support personalized care and coping in everyday life. Our description relies on the detailed planning and implementation carried out for a new research project titled "Development of method for interpretation of health expressions based on machine learning to support various care events and persons" (the DIHEML research project). The research idea and research plan of the DIHEML project were conceived and developed by Lauri Lahti, and their initial form and main features have been published in his scientific publication (Lahti 2017, http://urn.fi/URN:NBN:fi:aalto-201712298340). Lauri Lahti is the responsible researcher of the DIHEML research project.
The DIHEML research aims to develop a method based on machine learning (artificial intelligence) that enables semantic classification of health expressions and dialogues concerning various events, processes and persons in healthcare. To develop the computational methods, the DIHEML research needs to acquire a broad data collection, from groups of patients and impaired people as well as other population groups, about essential viewpoints concerning health and wellbeing. The data are gathered with an online questionnaire that asks a person to interpret health expressions by giving answers on various measurement scales. In addition, the person is asked to give answers about his/her background. In brief, the research aims at finding out what kind of dependencies emerge between the answers to the interpretation tasks and the background information. A successful implementation of the DIHEML research relies on a diverse set of collaborating organizations that enable data acquisition from various fields of life and detailed analysis with multidisciplinary research partners.
II. PREVIOUS RESEARCH
Previous research has identified the importance of developing computational methods that enable interpretation of biomedical measurement data with a natural language used in thinking and communication (Califf 2014; Tsai et al. 2015). Previous research has also explored ways to identify significant patterns in linguistic health data (Brown et al. 2010; Ashutosh 2014) and how different populations interpret linguistic expressions in respect to affectivity (Bradley & Lang 1999a; Warriner et al. 2013) and associations (Higginbotham et al. 2015; Fitzpatrick et al. 2015).
It has been suggested that various characteristics of a person have an impact on the state of health (Marmot et al. 2003). An aspect of a patient's health status that can be directly measured by the patient himself/herself (i.e., without a need for interpretation of the patient's response by someone else) can be referred to as a patient-reported outcome, and it is typically measured with a self-administered questionnaire referred to as a patient-reported outcome measure (PROM) (Prinsen et al. 2018). In addition, a large amount of health-related information is available online, including descriptions of diagnosis and treatment. This online information can offer resources for developing computational methods to support care with a wide range of information content, such as authorized healthcare guidelines (Terveyskirjasto 2018; Lahti 2016) and general discussion forums (The Suomi 24 Corpus 2016; Lahti et al. 2018).
In our previous work (Lahti 2017) we trained a convolutional neural network model (adapted from the model of Kim (2014)) with sentences representing two classes: 1000 sentences from the discussion topic group Children's health (Lasten terveys, in Finnish) and 1000 sentences from the discussion topic group Health (Terveys, in Finnish), based on Finnish online discussions (The Suomi 24 Corpus 2016). The trained model then correctly classified 88.6 percent of the new Children's health sentences and 91.4 percent of the new Health sentences.
III. METHOD
Motivated by the previous research, the DIHEML research aims to develop a method based on machine learning that enables semantic classification of health expressions and dialogues concerning various events, processes and persons in healthcare. First, our research aims to develop data analysis of communication that can support evaluating the urgency of the need for treatment and decision making in care seeking and primary care. Second, it aims to discover how interpretations of health expressions evolve with personal growth and learning, in particular among students studying to become healthcare professionals, and how this can support the educational system and learning methods. Third, it aims to address the needs identified for developing the measurement of quality of life.
A. Acquisition of data
The acquisition of data for DIHEML research relies on an online questionnaire in which the person participating in the research is asked to give an answer on various measurement scales to a) claims shown in text, b) images and c) videos. These answers indicate how the person interprets the shown health expressions and this activity is referred to as performing health expression interpretation tasks. In addition, in the acquisition of data the person is asked to give answers about his/her background concerning the conditions of life and quality of life, including information of the state of health and the contact information to enable later longitudinal research.
The answers about the interpretation tasks and background gathered from the person can be considered a collection of patient-reported outcome measures (PROMs) that are used to identify patterns of health behavior. The research aims at finding out what kind of dependencies emerge between the answers to the interpretation tasks and the background information given by the person participating in the research. Based on the gathered data, the research develops machine learning methods (artificial intelligence) that support care and wellbeing. The gathered data, and the methods developed from their analysis, can be used in various ways to implement personalized health services and to address different needs in everyday life and care. The DIHEML research is carried out in the context of Finnish language usage and the Finnish healthcare system and society, but the results are expected to generalize to other contexts as well when the similarities and differences with respect to the Finnish context are taken into account.
The population groups asked to answer the online questionnaire consist of a) the groups of patients and impaired people, and b) the students of primary school, secondary school, high school and educational institutes in the domain of healthcare. The groups of patients and impaired people are recruited via organizations of patients and impaired people, and the students are recruited via the authorities of educational institutes. The goal is to have 5000 persons participating in the research. The acquisition of data is based on various experiment series (online sessions). A person can choose how many experiment series he/she wants to participate in. One experiment series is assumed to last about one hour.
B. Formulation of interpretation tasks based on health expressions
In brief, DIHEML research relies on a collection of thousands of health expressions and with randomization a set of 100 expressions at a time are given to the person to be interpreted by giving answers on various measurement scales. In addition, the person is asked to give answers about his/her background.
In DIHEML research the patient experience is considered to modularly consist of health expression events (HEE) and a health expression event can be described with a health expression (HE), for example "I have a feeling of illness" or "I am in a situation where the doctor tells me the diagnosis".
A health expression event series (HEES) is a group of health expression events that occur as a consecutive series and/or as a parallel series. In practice a health expression event series can be described as a kind of dialogue which possibly contains consecutive and/or parallel expressions.
In the questionnaire the interpretation tasks can have various formulations. In the interpretation task the person can be asked to interpret the shown expression like it was an imagined experience of his/her own person (imagined my person's experience, see Figures 1a and 1c) or like it was an imagined experience of another person (imagined other person's experience, see Figure 1b).
In rating interpretation for example the expression "I have a feeling of illness" is shown and then the person is asked to interpret what kind of impression this expression induces in him/her in respect to worriedness, on a scale 0-10 where 0 means the smallest possible worriedness and 10 means the greatest possible worriedness (see Figure 1a). In comparison interpretation for example two expressions "I have a feeling of illness" and "I have a feeling of faintness" are shown and then the person is asked to interpret which one of these two expressions induces in him/her a greater impression in respect to worriedness (see Figure 1b).
If the person is asked to interpret for example two expressions that occur as a consecutive series then the interpretation task can be referred to as the interpretation of consecutive expressions (see Figure 1c). If the person is asked to interpret for example two expressions that occur as a parallel series then the interpretation task can be referred to as the interpretation of parallel expressions.
An interpretation task can ask the person to give an answer on various measurement scales to claims shown in text, images or videos. Figure 1 illustrates two text-based interpretation tasks (a and b) and an image-based interpretation task (c).
An image-based interpretation task is created on the basis of a corresponding text-based interpretation task using HEE image templates (template variations currently include "a situation with one human figure where the person thinks" and "a situation with two human figures where the person thinks"; see images 1 and 2 in Figure 1c). In the HEE image templates the simplified form of the human figure, referred to as a tadpole, aims at making the human figure intuitively recognizable for people at any age (Foley & Mullis 2008).
A video-based interpretation task is created on the basis of a corresponding text-based interpretation task or image-based interpretation task. Thus a video-based interpretation task is a dynamic slide show which presents consecutively one textual view or image at a time all the stimuli that are included in the corresponding text-based or image-based interpretation task. Video-based interpretation tasks currently include the following variations: transitions automatically, transitions activated by the person, interpretation holistically, interpretation sequentially, interpretation cumulatively, a replay option is offered, and a replay option is not offered.
C. Adapting to the person's age and sensitivity to become anxious

DIHEML research is active from June 2018 to May 2021. The research is funded by scholarships given for research purposes. The persons participating in the research are not paid for the participation. The research is implemented with respect for the privacy of the participating persons, the academic research ethical principles, the data protection and privacy regulation of the European Union and the guidelines of the Finnish research ethics authority TENK.
For the acquisition of data in DIHEML research various procedures have been prepared to ensure carefully that the persons participating in the research cannot be harmed in any way. Preparations have been made concerning possible problem situations encountered during the acquisition of data so that when giving answers with the online questionnaire there are guidelines and personal guidance available via a specific online form.
In respect to psychological stress intensity there is an aim to adapt the questionnaire contents (interpretation tasks and questions) to the person's age and sensitivity to become anxious which are asked instantly in the beginning of the data acquisition. This adaptation is carried out according to the corresponding age limits defined for media contents (KAVI 2018), and following them Lauri Lahti has classified all information that is shown in the DIHEML research.
In the following are some examples of expressions belonging to each psychological stress intensity class: "I have pain." (permitted to people of any age), "I have a bleeding nose." (at least 7-year-olds), "I have an open fracture." (at least 12-year-olds), "I have anorexia eating disorder." (at least 16-year-olds), and "I have suicidal thoughts." (at least 18-year-olds).
In the evaluation of the sensitivity to become anxious, the person is asked to answer the "generalized anxiety" part of the SCARED questionnaire (Birmaher et al. 1997). If the total score of the person's answers is below the threshold value 9, the research shows the person contents that belong to the intensity class of his/her age or to lower intensity classes (the sensitivity to become anxious is thus considered low for the person). If the total score is at least the threshold value 9, the research shows the person only contents that belong to the intensity class "permitted to people of any age" (the sensitivity to become anxious is thus considered high for the person).
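The following sketch illustrates how the content-filtering rule described above could be implemented. The class labels, the age limits taken from the example expressions quoted earlier, and all function and variable names are illustrative assumptions for this sketch, not part of the DIHEML implementation.

```python
# Illustrative content-filtering rule based on the description above.
# Age limits follow the example intensity classes quoted in the text (KAVI-style limits);
# the data structure and names are assumptions made for this sketch.

INTENSITY_CLASSES = [0, 7, 12, 16, 18]  # minimum age permitted for each class

def allowed_intensity_classes(age, scared_generalized_anxiety_score, threshold=9):
    """Return the list of intensity classes (minimum-age labels) a participant may see."""
    if scared_generalized_anxiety_score >= threshold:
        # High sensitivity: only content permitted to people of any age.
        return [0]
    # Low sensitivity: content of the participant's age class or any lower class.
    return [c for c in INTENSITY_CLASSES if c <= age]

# Example: a 14-year-old with a score of 5 may see the classes 0, 7 and 12.
print(allowed_intensity_classes(age=14, scared_generalized_anxiety_score=5))
```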
D. Authorization of the caregiver
The information document about the DIHEML research, which contains an online link to the questionnaire, is given to persons younger than 15 years old only after it has been confirmed that an authorization has been obtained from the caregiver named for the person in question. This confirmation is performed by the responsible researcher of the DIHEML research or someone who assists him (for example a teacher of an educational institute). The confirmation is carried out by sending the caregiver of the participating person an information document that contains an online link to a form for giving the authorization.
When the caregiver is giving the authorization, the caregiver expresses the age and sensitivity to become anxious concerning the cared person. This information sets the initial upper limit for the psychological stress intensity class (the age and sensitivity to become anxious expressed by the cared person himself/herself can set the upper limit for the intensity class still even lower).
In addition, the caregiver is required to give beforehand an answer whether or not the cared person is allowed to start answering the questionnaire independently, or is the cared person allowed to start answering the questionnaire only when the caregiver is present or only when the teacher of the cared person is present. Furthermore, the caregiver gets beforehand an access to familiarize himself/herself with and accept the questionnaire contents that are used as a basis for the interpretation tasks of the cared person, and the caregiver is provided with an opportunity to remove tasks that he/she considers as harmful for the cared person so that the cared person will not be exposed to them at all.
IV. EXPERIMENT
As a part of the planning and implementation of DIHEML research project we have carried out an experiment to identify suitable materials and methods to be used in the questionnaire for data acquisition with the representatives of target populations. Especially we have tried to identify suitable formulations concerning the information asked about the background of the person and the interpretation tasks given to the person.
Based on a literature review and some preliminary testing with representatives of the target populations, we suggest that the following entities are useful information to ask about the background of the person:
- the conditions of life (among others gender, birth time, place of living, native language, profession, state of health, chronic diseases/disability/impairment, care received for chronic diseases/disability/impairment)
- responses about the age and sensitivity to become anxious, answered by the person and by the caregiver

Health expression interpretation tasks rely on the resource Lahti (2018a, "Expression collection of everyday life") that has been conceived and developed by Lauri Lahti. Since this resource lists health expression interpretation tasks, it also lists both the expressions that are asked to be measured (measuring material, for example "I have a feeling of illness") and the measuring scales (measuring dimensions, for example "worriedness"). All this measuring material and these measuring dimensions are based on text strings that have been considered significant when extracted by Lauri Lahti with algorithms (some algorithms are discussed in Lahti et al.). Two samples from the resource Lahti (2018a, "Expression collection of everyday life") are illustrated in Table 1 and Table 2. Table 1 shows all 10 main categories of health expressions included in the measuring material and the number of expressions and subcategories. Table 2 shows all 52 measuring dimensions (measurement scales used to give answers about interpretations of health expressions).
Resembling our previous work (Lahti 2017), we carried out a new sentence classification experiment. We trained a convolutional neural network model (adapted from Kim (2014) and Britz (2015)) with a sample of 2000 unique sentences of Finnish online discussions from the discussion topic group "Terveys" (Health, in Finnish; The Suomi 24 Corpus 2016). We trained the model with two classes of 1000 sentences: we had beforehand labeled 1000 sentences as pleasurable and 1000 sentences as unpleasurable based on affective ratings of the words occurring in a sentence (Warriner et al. 2013).
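The exact architecture and hyperparameters used in the experiment are not reproduced here; the following sketch shows a minimal Kim (2014)-style convolutional sentence classifier for two classes, written with Keras, under the assumption that the labeled sentences are available as plain-text lists. All layer sizes and names are illustrative.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def build_sentence_classifier(train_texts, max_tokens=20000, seq_len=60):
    """Minimal Kim-style CNN for binary sentence classification (illustrative sizes)."""
    vectorizer = layers.TextVectorization(
        max_tokens=max_tokens, output_sequence_length=seq_len)
    vectorizer.adapt(train_texts)

    inputs = tf.keras.Input(shape=(1,), dtype=tf.string)
    x = vectorizer(inputs)
    x = layers.Embedding(max_tokens, 128)(x)
    # Parallel convolutions over 3-, 4- and 5-token windows, as in Kim (2014).
    pooled = [layers.GlobalMaxPooling1D()(layers.Conv1D(100, k, activation="relu")(x))
              for k in (3, 4, 5)]
    x = layers.Concatenate()(pooled)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # pleasurable vs. unpleasurable

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Hypothetical usage, assuming `sentences` (list of str) and `labels` (list of 0/1):
# model = build_sentence_classifier(sentences)
# model.fit(np.array(sentences).reshape(-1, 1), np.array(labels), validation_split=0.2, epochs=10)
```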
V. DISCUSSION
The acquisition of data for the DIHEML research project produces a data collection that makes it possible to investigate what kind of dependencies emerge between the answers to the interpretation tasks and the background information given by the person participating in the research. Based on the gathered data, the research develops machine learning methods (artificial intelligence) that support the care and wellbeing of the groups of patients and impaired people and other population groups. The promising results of our new sentence classification experiment relying on a convolutional neural network model give support to our central aim to exploit machine learning to identify dependencies in similar health-related data that can be gathered in the DIHEML research.
All the data gathered in the DIHEML research is planned to be permanently archived by the research group. In addition, the information planned to be permanently archived includes the research registry, which contains approval documents and documents for decoding code names, the detailed descriptions of the predictive models and the measurement tool developed in the research, as well as the database consisting of the measurement results. In all this archived data, unique personal information is stored, i.e. identification information, which enables later longitudinal research related to the data collection. The processing and storing of personal information are needed to enable combining a person's answers given in separate sessions, especially for longitudinal research purposes, and also to investigate the dependencies between the person's state of health and his/her other answers.
Besides developing methods the research will publish an appropriately anonymized version of the measurement results gathered by the research freely available for everybody.
DIHEML research creates new ways to interpret and address the meanings of the language usage of different groups of patients and impaired people, carefully and distinctively, as a part of everyday life and care events (reflecting the meanings of language usage with respect to, among other things, context, background and personal profile). The research offers new ways to interpret the language usage of representatives of groups of patients and impaired people so that misunderstanding can be prevented and agreement can be advanced. The research also offers ways to highlight, in the language usage of these groups, topics that are important to address in the implementation and development of healthcare services. Thus the research aims at progress that allows the personal needs and rights of patients and impaired people to be better addressed in decision making. The research aims at advancing the creation of support services needed by different special population groups in everyday life and care, thereby advancing equality for different population groups and for population groups with special needs.
The answers obtained from the students of educational institutions supplement the answers obtained from the groups of patients and impaired people, thus also providing comparison data representing the general population. In addition, this offers an opportunity to find out how the interpretations evolve along the personal growth and learning process, and furthermore separately among persons attending educational programs that aim at a profession in the field of healthcare.
Negative pressure wound therapy promotes muscle‐derived stem cell osteogenic differentiation through MAPK pathway
Abstract Negative pressure wound therapy (NPWT) has been revealed to be effective in the treatment of open fractures, although the underlying mechanism is not clear. This article aimed to investigate the effects of NPWT on muscle‐derived stem cell (MDSC) osteoblastic differentiation and the related potential mechanism. The cell proliferation rate was substantially increased in NPWT‐treated MDSCs in comparison with a static group for 3 days. There was no observable effect on the apoptosis of MDSC treated with NPWT compared with the control group for 3 days. The expression levels of HIF‐1α, BMP‐2, COL‐I, OST and OPN were increased on days 3, 7 and 14, but the expression level of Runx2 was increased on days 3 and 7 in the NPWT group. Pre‐treatment, the specific inhibitors were added into the MDSCs treated with NPWT and the control group. ALP activity and mineralization were reduced by inhibiting the ERK1/2, p38 and JNK pathways. The expression levels of Runx2, COL‐I, OST and OPN genes and proteins were also decreased using the specific MAPK pathway inhibitors on days 3, 7 and 14. There were no significant effects on the expression of BMP‐2 except on day 3. However, the expressions of the HIF‐1α gene and protein slightly increased when the JNK pathway was inhibited. Therefore, NPWT promotes the proliferation and osteogenic differentiation of MDSCs through the MAPK pathway.
Introduction
NPWT is an effective treatment method for various complex wounds. High-energy trauma, open fractures and excessive soft tissue damage are often seen in clinical work. Many studies have reported that NPWT can promote the growth of granulation tissue, reduce tissue oedema, increase local wound blood supply and decrease the incidence of infection [1][2][3][4].
One way that NPWT promotes wound healing is by creating a subatmospheric environment, acting at the level of the interstitium to eliminate unwanted oedema, inflammatory mediators and bacteria, and by removing the volume that obstructs the inflow and out-flow, thereby allowing greater nutrient and oxygen inflow as well as venous drainage [5,6]. Furthermore, the mechanical strain allows microdeformation and stretch at the cellular level, permitting cellular chemotaxis, angiogenesis and new tissue formation [6]. Numerous reports have documented that NPWT could successfully promote wound healing and has no harmful effect on fracture healing [7][8][9][10], but the advantages of NPWT on bone treatment remain under debate.
MDSCs are a type of stem cell characterized by self-renewal and differentiation capacity. Liu et al. [11] reported that MDSCs were recognized as one of the key cell types during open fracture healing. They found that the contribution of MDSCs to the healing of callus tissues was insignificant in closed tibia fractures; however, approximately 40% of the cells in an open fracture with periosteal stripping were MDSCs. MDSCs show a strong osteogenic tendency when induced with bone morphogenetic protein 2 (BMP-2) or BMP-4 [12][13][14]. The mitogen-activated protein kinase (MAPK) signalling pathway, which includes extracellular-regulated kinase 1/2 (ERK1/2), p38 MAPK and c-Jun N-terminal kinase (JNK), occupies a central role in osteogenic differentiation. Payne et al. [15] described that MDSCs could differentiate into osteoblasts via ERK1/2 and p38 upon induction with BMP2. Guicheux et al. [16] reported that the p38 and JNK pathways participate in BMP-2-induced osteoblastic differentiation. Our previous study found that NPWT could promote the proliferation and osteogenic differentiation of periosteum-derived mesenchymal stem cells (P-MSCs) [17]. Therefore, we suggest that NPWT-promoted open fracture healing might be related to the osteogenic differentiation of MDSCs. However, whether the proliferation and osteogenic differentiation of MDSCs change under continuous negative suction has not been reported.
In this study, we illustrate that NPWT promotes MDSC proliferation and osteogenic differentiation and investigate the underlying mechanism. Therefore, we discovered that NPWT could promote MDSC proliferation through cell counting kit-8 (CCK-8) analysis, but there were no obvious effects on apoptosis. NPWT could promote MDSC osteogenic differentiation by analysis of alkaline phosphatase (ALP) activity, alizarin red staining and expression of osteoblast-related genes and proteins. Moreover, ALP activity, mineralization, expressions of osteoblast-related genes and proteins were decreased when the ERK1/2, p38 and JNK pathways were inhibited. Therefore, we reveal that NPWT could promote MDSC osteogenic differentiation through the MAPK pathway.
Preparation of NPWT bioreactor
We assembled the bioreactor according to previous studies [17,19]. In short, a moderately sized foam (VSD Medical Technology Inc., Wuhan, China) was placed above the prepared cell matrix containing 2 × 10^4 MDSCs. A drape was used on the top of the well to ensure the well was sealed. A scalp needle passed through the 3M bumpon into the foam and was then connected to a vacuum negative pressure pump (VSD Medical Technology Inc.). The negative pressure pump was set to uninterrupted suction at −125 mmHg. Another needle was passed through the O-ring elastomeric disc and reached the bottom of the plate. This needle was connected to a peristaltic pump (Longer, Baoding, China), which injected OSM at 7 ml per 24 hrs per well. The static group, which had no vacuum suction, was cultured in the same bioreactor. These groups were cultured in an incubator with 5% CO2 at 37°C.
CCK-8 assay
The cell clots were treated with NPWT or static conditions for 3 days. Fresh medium containing a CCK-8 (Dojindo, Kumamoto, Japan) was added to these cells, transferred to 96-well plates and then measured on a microplate reader (DR-200Bs, Beijing, China) with absorbance at 450 nm after incubation at 37°C for 3 hrs.
TUNEL assay
MDSCs were cultured under NPWT or static conditions for 3 days. These cells were fixed with 4% paraformaldehyde for 30 min., washed with PBS (HyClone; three times, 5 min. each) and then permeabilized in 0.5% Triton X-100 for 2 min. Then, the freshly prepared TUNEL reaction mixture (50 µl TdT and 450 µl dUTP) was added for 60 min. The cell nuclei were stained with DAPI for 5 min. We then used a confocal microscope (LSM710, ZEISS, Jena, Germany) to visualize and photograph the cells. The apoptosis rate was calculated as the number of TUNEL-positive cells divided by the total number of cells in randomly selected regions.
Matrix mineralization
We used alizarin red staining to demonstrate mineralization of the clot cells on days 3, 7 and 14. The clots were incubated with 2% alizarin red S (Sigma-Aldrich) for half an hour, washed with PBS (HyClone) and then imaged using an Olympus Inverted Microscope (Olympus, Tokyo, Japan). Alizarin red staining was quantified using the stained areas and the integral optical density (IOD).

ALP activity assay

ALP activity was detected by the p-nitrophenyl phosphate (pNPP) method. Briefly, the cells were lysed on days 3, 7 and 14 of treatment with or without NPWT. Lysates were incubated with pNPP (Sigma-Aldrich) solution for 15 min. under the culture conditions, and the reactions were then stopped by adding NaOH. Subsequently, the ALP activity was determined by measuring the OD values for absorbance at 405 nm and expressed as nmol pNPP/µg of total cellular protein.
Quantitative real-time PCR analysis
Real-time fluorescent quantitative PCR (RT-PCR) was performed according to a previously reported protocol [17]. Briefly, the total RNA was extracted on days 3, 7 and 14 of treatment with or without NPWT using TRIzol reagent (Invitrogen). The first strand of cDNA was obtained from the total RNA using oligo-dT primers and reverse transcriptase (Takara Bio, Kusatsu, Shiga, Japan). RT-PCR was completed in the StepOne Real-Time PCR instrument (Life Technologies, Waltham, MA, USA). GAPDH mRNA expression was used as an endogenous control. The fold changes were calculated according to the manufacturer's instructions (Takara Bio). The PCR consisted of a first denaturation step at 95°C for 1 min., followed by a total of 40 cycles at 95°C for 15 sec., 58°C for 20 sec. and 72°C for 45 sec., with quantification using the 2^−ΔΔCt method. The primer sequences used in the present study are listed in Table 1.
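For readers unfamiliar with the 2^−ΔΔCt calculation referenced above, the following sketch shows the standard relative-quantification arithmetic with GAPDH as the endogenous control; the Ct values are made-up illustrative numbers, not data from this study.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt (Livak) method, normalized to a reference gene."""
    dct_treated = ct_target_treated - ct_ref_treated   # normalize to GAPDH in treated sample
    dct_control = ct_target_control - ct_ref_control   # normalize to GAPDH in control sample
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# Illustrative Ct values only (not measured data):
# target gene Ct 24.1 vs GAPDH 18.0 in the NPWT sample; 26.3 vs 18.1 in the static control.
print(fold_change_ddct(24.1, 18.0, 26.3, 18.1))  # ~4.3-fold up-regulation
```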
Western blot analysis
We obtained the total protein from MDSCs cultured under NPWT or static conditions, or from NPWT-treated MDSCs with MAPK pathway-specific inhibitors, on days 3, 7 and 14 using a Total Protein Extraction Kit (Aspen, Wuhan, China). Equal amounts of protein obtained from the cell lysate were loaded onto 5% SDS polyacrylamide gel (Aspen) and transferred to polyvinylidene fluoride (PVDF) membranes (Millipore, Darmstadt, Germany). The membranes were blocked with 5% BSA in TBS and then incubated overnight at 4°C with the following primary antibodies: rabbit anti-GAPDH antibody (1:10000
Statistical analysis
All values are expressed as the mean ± S.D. Two groups were compared using Student's unpaired t-test. The statistical significance of comparisons between multiple groups was determined using an ANOVA test. All tests were carried out using SPSS v.18.0 (SPSS Inc., Chicago, IL, USA). P-values < 0.05 were defined as statistically significant.
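As an aside, the two comparisons described above (an unpaired t-test for two groups and a one-way ANOVA for several groups) can be reproduced outside SPSS; the sketch below uses SciPy on made-up measurement arrays, which are illustrative only.

```python
import numpy as np
from scipy import stats

# Illustrative measurements only (e.g., ALP activity), not data from this study.
npwt   = np.array([1.8, 2.1, 2.0, 2.3, 1.9])
static = np.array([1.2, 1.1, 1.4, 1.3, 1.2])
inhib  = np.array([1.4, 1.5, 1.3, 1.6, 1.4])

# Two-group comparison: Student's unpaired t-test.
t_stat, p_two_groups = stats.ttest_ind(npwt, static)

# Multi-group comparison: one-way ANOVA.
f_stat, p_anova = stats.f_oneway(npwt, static, inhib)

print(f"t-test p = {p_two_groups:.4f}, ANOVA p = {p_anova:.4f}")
print("significant" if p_two_groups < 0.05 else "not significant")
```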
Characterization of stem cell surface markers of MDSCs
We examined the cells isolated from the gastrocnemius muscle by the pre-plate method [18]. The PP6 cells are small, round and scattered with a ray. As shown in Figure 1, the flow cytometry analysis showed that the cultured cells expressed Desmin (99.6%), Sca-1 (99.7%) and CD34 (99.4%) and almost no expression of CD45 (6.31%), which is consistent with the results of previous studies [12,20]. These results demonstrated that the PP6 cells were MDSCs, and the PP6 cells were used for subsequent experiments.
NPWT promotes MDSC proliferation
The CCK-8 assay was performed to determine the proliferation effect of NPWT on MDSCs. The data showed that the proliferation of MDSCs increased significantly when treated with NPWT for 3 days compared with control (Fig. 2C). Moreover, TUNEL analysis showed that the apoptosis rate has no obvious effect on MDSCs treated with NPWT for 3 days compared with the control group ( Fig. 2A and B). The results revealed that NPWT could promote the proliferation of MDSCs.
NPWT promotes MDSC osteogenic differentiation
The activities of ALP, alizarin red staining and expression levels of the osteogenic genes and proteins were measured to assess the effect of NPWT treatment on MDSC differentiation. There were extremely rare mineralization nodules under static conditions, but these nodules could be seen in the NPWT group. The nodules were big and dense in the NPWT group on days 7 and 14 (Fig. 3A). The ALP activity substantially increased in the MDSCs treated with NPWT compared with controls; this activity was in accordance with the alizarin red staining results. The ALP activity analysis showed noteworthy increases in the NPWT group compared to the control group (Fig. 3B).
In the NPWT group, the Runx2 mRNA expression showed an initial increase (Fig. S2D) that peaked at day 7; then, there was a slight decline (Fig. 4). The mRNA expression levels of ALP, COL-I, OST, OPN, HIF-1a and BMP-2 increased from day 3 to day 14 (Fig. 4). The Western blot analyses were in accordance with the RT-PCR results (Fig. 5).
Effects of specific inhibitors on osteogenic differentiation
ERK1/2, p38 and JNK pathway-specific inhibitors were used to examine their effect on osteogenic differentiation. As shown in Figure 3A, we found that inhibition of the ERK1/2, p38 and JNK pathways led to a decreasing trend in mineralization compared to the NPWT group. The results of the IOD analysis were in accordance with the alizarin red staining (Fig. S1). Inhibition of the ERK1/2, p38 and JNK pathways caused a decreasing tendency in ALP activity in MDSCs treated with NPWT (Fig. 3B). HIF-1a and BMP-2 gene expression was not affected by inhibition of the ERK1/2 pathway in NPWT-treated MDSCs (Fig. 4A and D), although Runx2, COL-I, OST and OPN gene expression showed a decrease (Fig. 4B-F). Inhibition of the p38 pathway by the addition of SB203580 to NPWT-treated MDSCs did not show a relevant effect on BMP-2 (Fig. 4D) or on HIF-1a on day 3 (Fig. S2A), while the expression of the other genes decreased (Fig. S2B-F). Inhibition of the JNK pathway by the addition of SP600125 to NPWT-treated MDSCs showed a significant effect on the expression of all genes (Fig. 4) except BMP-2 on day 7 (Fig. 4F). The Western blot assay also showed that the levels of Runx2, COL-I, OST and OPN on day 3 and day 7 were significantly down-regulated in the presence of the inhibitors of the ERK1/2, p38 and JNK pathways when compared with the NPWT group (Fig. 5). These experiments revealed that NPWT promoted MDSC osteogenic differentiation through the MAPK pathway.
Discussion
It is universally acknowledged that NPWT is a successful and useful therapeutic method for traumatic wounds. Many clinical reports have been published on its application in treatment of open fractures. However, the underlying mechanisms are less obvious. In this research, we provide new evidence that NPWT promotes osteogenic differentiation of MDSCs perhaps via the MAPK signalling pathway.
Fracture healing is a complicated process that depends upon various cells and factors. The osteocompetent progenitors originating from the periosteum and bone marrow play an important role during the process of bone repair [21,22]. However, open fractures caused 517 by high energy often lead to periosteum and soft tissue damage, which have a much higher probability of non-union or delayed union. In addition to the above cells involved in fracture healing, Liu et al. [11] reported that approximately 40% of the cells in open fractures with periosteal stripping were MDSCs. Therefore, we suggested that MDSCs might be considered to be one of the key cell types during the process of open fracture healing.
To the best of our knowledge, cell proliferation is the pivotal step of wound healing. Therefore, the influences of NPWT on MDSCs needed further investigation. McNulty AK et al. [23] reported that NPWT-treated cells showed significantly greater cell proliferation than cells under static conditions. However, the apoptosis rate showed no obvious distinction between these two groups. Other studies [17,24,25] also showed that NPWT could promote cell proliferation and that cell proliferation was caused by micromechanical deformation produced by foam and continuous suction. Consistent with this research, our present study demonstrated that NPWT also promoted the proliferation of MDSCs at day 3 under sustained subatmospheric pressure. Compared with static conditions, the cell apoptosis rate was slightly increased in the MDSCs treated with NPWT, although there was no statistically significant difference between these two groups at day 3. Therefore, these results illustrated that NPWT could promote MDSC proliferation, and there was no remarkable effect on MDSC apoptosis.
Osteogenic differentiation plays a vital role in fracture healing. To further investigate whether NPWT could promote osteogenic differentiation of MDSCs in vitro, we examined the expression of osteogenic markers. Related articles showed that NPWT could promote the healing of open fractures in clinical and experimental studies [9,26]. In our previous research, we also found that NPWT could promote MSC osteogenic differentiation [17]. In the present study, our results exhibited that ALP activity expression and mineralization were elevated in MDSCs treated with NPWT compared to the control group. Furthermore, the osteogenic gene and protein expression also showed the same results when MDSCs were treated with NPWT. However, Runx2 initially increased on day 3 and reached its peak on the seventh day before declining. These results were in accordance with a previous study [27] and were contrary to the reports of the effect of continuous mechanical strain stimulation on osteogenic differentiation of MSCs [28]. In conjunction with our previous study [17], we suggested that mechanical stretch and hydrostatic pressure have a direct effect on the osteogenic differentiation of MDSCs. In addition, through preliminary experiments [17], we found that the cells in the fibrin matrix might be more sensitive to fluid shear stress, which might play a dominant role in mechanical stimulation by NPWT. Therefore, we concluded that NPWT could promote MDSC osteogenic differentiation.
Regional hypoxia is one of the principal mechanisms of NPWT. HIF-1a is up-regulated under hypoxic conditions. A hypoxic environment can create a microenvironment favourable to osteogenesis and thus maintain the survival of osteoblasts [29]. A previous study demonstrated that the expression of BMP-2 is elevated in a low-oxygen environment through the activation of multiple signalling pathways, including the MAPK signalling pathways [30]. Payne KA et al. [15] confirmed that MAPK pathways participate in the osteogenic differentiation of MDSCs induced by BMP-4 in vitro. Furthermore, many studies have shown that the MAPK family is activated by a variety of external stimuli [31][32][33][34]. With these external stimuli, BMP plays a major role in osteogenic differentiation. In our study, the ERK1/2, p38 and JNK pathways also played a pivotal role in the osteogenic differentiation of MDSCs treated with NPWT. Our results showed that ALP activity and mineralization were decreased in NPWT-treated MDSCs upon addition of the specific chemical inhibitors of the ERK1/2, p38 and JNK pathways, PD98059 (25 µM), SB203580 (10 µM) and SP600125 (25 µM), respectively. The osteogenic gene and protein expression showed the same result when the MAPK pathway-specific inhibitors were added to NPWT-treated MDSCs. Previous research has shown that the MAPK signalling pathway is involved in MDSC osteogenic differentiation [15]. In that research, ALP activity and mineralization were increased when the ERK1/2 pathway was inhibited, whereas inhibition of the p38 pathway decreased osteogenesis in BMP4-induced MDSCs. In the pluripotent C2C12 myoblast line, BMP-2 has been shown to activate ERK1/2 and p38, but not JNK [35]. However, others reported that BMP-2 primarily activates p38 and JNK in MC3T3-E1 and calvaria-derived osteoblastic cells, whereas BMP-2 barely affects the activation of ERK1/2 [16]. We suggest that two reasons might lead to these differences.
The mechanism of NPWT is very complex, involving mechanical stimulation, regional hypoxia and other mechanisms [36,37]. Previous articles have indicated that mechanical stimulation can promote the osteogenic differentiation of MDSCs and MSCs in response to shear stress, with activation of the ERK1/2, nitric oxide, p38 and Ca2+ signalling pathways [27,38]. On the other hand, as a previous study demonstrated, because of cross-talk between these pathways, specific chemical inhibitors might affect multiple pathways. Furthermore, many studies have revealed that the ERK1/2, p38 and JNK pathways might be associated with BMP-activated Smads [32,39,40]. Therefore, we propose that the mechanism by which NPWT promotes the osteogenic differentiation of MDSCs is rather complex, and a large number of signals and factors might be involved in this process. The detailed mechanism needs further study; in future work we will establish an open fracture animal model to confirm the role of NPWT in bone healing in vivo.
We demonstrated that NPWT could promote MDSC proliferation and osteogenic differentiation through experiments in this article. We found that osteogenic differentiation of MDSCs treated with NPWT was influenced by the addition of MAPK pathway-specific inhibitors. The results showed the influence of NPWT on MDSC osteogenic differentiation via the MAPK pathway. We hope this study might provide a scientific basis to prove the positive role of NPWT in open fracture or bone defects.
Declaration
This manuscript has never been partly or wholly published in any other journals. It is not being submitted to any other journal.
Optimal Tracking Performance of MIMO Discrete-Time Systems with Network Parameters
The optimal regulation properties of multi-input and multi-output (MIMO) discrete-time networked control systems (NCSs) over additive white Gaussian noise (AWGN) fading channels, based on a state-space representation, are investigated. An average performance index is introduced. Moreover, the regulation performance is measured by the control energy and the error energy of the system, and fundamental limitations are obtained. Two kinds of network parameters, fading and additive white Gaussian noise, are considered. The best attainable regulation performance limitations can be obtained from the limiting steady-state solution of the corresponding algebraic Riccati equation (ARE). Simulation results are given to demonstrate the main results of the theoretical development.
Introduction
In recent years, growing attention has been devoted to the study of feedback control over communication networks [1][2][3][4][5][6][7][8][9], because, compared with classical feedback control systems, NCSs have advantages such as low cost, flexibility, reduced weight and power requirements, and simple installation and maintenance. However, there exist many challenging problems in the stability and performance analysis of NCSs owing to the presence of networks. Researchers have focused on the communication constraints in networks, for example, quantization effects [10,11], time delay [12][13][14][15], data rate constraints [16,17], and data packet dropout [12,18,19]. Nevertheless, the performance limitation of NCSs remains a puzzle.
Performance limitation of control systems has been receiving an increasing amount of interest in the control community; see [20][21][22][23][24] for details. A partial review of previous work on feedback performance over a communication channel is given as follows. By invoking Shannon entropy as a measure of performance, a universal lower bound was obtained in [25]. Reference [26] derived a conservation law dictating that causal feedback cannot reduce the differential entropy inserted in the loop by external sources and an inequality unveiling that the feedback loop must be able to convey information originating from initial states of the physical plant and exogenous disturbance signals. By using nonlinear time-varying communication and control strategies, [27] proposed a lower bound on the performance achievable at a specified terminal time and pointed out that the bound can be achieved by linear strategies. Reference [28] showed the performance limitations for scalar systems under either bounded or Gaussian disturbances, and the two kinds of disturbances were treated in a unified manner using appropriate entropies and distortions. However, in [28], the achievable performance had not been improved even if the maximum information constraint is relaxed to an average information constraint. Optimal tracking performance issues were studied for multi-input and multi-output linear time-invariant systems under networked control with limited bandwidth and an additive colored white Gaussian noise channel in [22]. In [1], the optimal tracking performance of NCSs with encoder-decoder was studied. The optimal tracking performance of single-input single-output (SISO) discrete-time NCSs with packet dropouts and channel noise is studied in [2]. The communication channel is characterized by three parameters: the packet dropouts, channel noise, and the encoding and decoding. In [3], the optimal tracking performance of MIMO discrete-time NCSs with bandwidth and coding constraints is studied by using the spectral factorization technique. In [4], the limitations in stabilization and tracking of MIMO networked feedback systems are studied. The reference is considered as a random reference signal with finite power. The optimal tracking performance by linear time-invariant (LTI) controllers subject to a channel input power constraint is obtained. The adopted model can be found in many real systems. For example, in the telemedicine system of robot-assisted neurosurgery, patient and robot are, respectively, the plant and the controller. The remote expert obtains information via the network transmission, and the instruction of the expert is then sent back to the robot via the network transmission. In addition, for leader-follower multiagent systems [29], provided that the position, velocity, and direction information of a leader are considered as the reference signal, the controller is designed to achieve the minimal tracking error between the leader and the follower.
In this paper, we investigate optimal regulation performance issues pertaining to MIMO feedback control systems over multiple AWGN fading channels. An average performance index is introduced, the regulation performance is measured by the control energy and the state energy of the system, and fundamental regulation limitations are obtained. The stability or stabilization problem for networks with a fading channel has been considered in a few works [30,31]; however, few results on the performance limitation analysis of networks with a fading channel are currently available. Due to the impact of multiplicative noise in the fading channel, the performance limitation with fading channels is difficult to process and analyze by the frequency domain method. Therefore, in this paper, the performance limitation is considered from another angle, namely the state-space method. Additionally, in most of the existing results, the best achievable performance is analyzed under a transfer function representation [1-4, 21, 22, 26, 32]. However, from a modern control theoretic point of view, this is not the only possible line of research to pursue. The goal of this paper is to derive the regulation performance under a state-space representation. The contributions of this paper can be summarized as follows. Firstly, a model with multiple AWGN fading channels is considered, which is more practical than most existing literature focusing on AWGN channel models, for instance [4,21,22]. Secondly, we are mainly devoted to studying the performance limitations for the NCS with fading channels and AWGN, which differs from the existing results [30,31] that focus on the stability or stabilization problem for the NCS with a fading channel. Furthermore, the best attainable regulation performance limitations can be reached by the limiting steady-state solution of the associated algebraic Riccati equation (ARE).
The rest of the paper is organized as follows. The feedback regulation performance limitations are studied by parameter
Regulation Performance Limitations
In this work, we consider a feedback control system with a network in the upstream channel as shown in Figure 1, where the plant model is a rational transfer function matrix. The network model is an unreliable network in the path from the controller to the plant, which contains a fading channel and an additive white Gaussian noise (AWGN) channel. Assume that the plant is strictly proper and unstable, and that one of its minimal state-space realizations is given, with the plant state, the plant input, and the measured output as real-valued vectors of appropriate dimensions. The output matrix has full row rank, and the realization triple is stabilizable and detectable.
The input–output relationship of the AWGN fading channel is as follows: the channel output equals the faded channel input corrupted by a vector of uncorrelated zero-mean white noises, each element having its own power spectral density, collected in a diagonal spectral-density matrix; the noise vector and the channel output are independent of each other. The model of the fading channel is given in a memoryless multiplicative form: the network control input is multiplied, element by element, by fading gains that are independent random variables at each time index, and these fading processes are assumed to be white noise processes satisfying appropriate positivity conditions on their statistics.

Remark 1. Besides the fading phenomenon, the model (3) can describe other uncertainties of a digital network, such as packet dropouts and quantization errors [30]. Specifically, the model (3) covers packet dropout described by identically and independently distributed (i.i.d.) Bernoulli processes [30]; in that case, the fading gain represents the packet-loss process as a 0-1 binary-valued scalar.
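To make the channel model concrete, the sketch below simulates one sampling step of a memoryless multiplicative fading channel with additive white Gaussian noise; the chosen fading statistics (a positive-mean Gaussian gain per channel) and all variable names are illustrative assumptions rather than the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

def awgn_fading_channel(u, fading_mean, fading_std, noise_std):
    """One step of v(k) = diag(xi(k)) u(k) + n(k): elementwise fading plus AWGN.

    u            : channel input vector (network control input)
    fading_mean  : per-channel mean of the multiplicative gain xi_i(k)
    fading_std   : per-channel std of xi_i(k) (white in time; Gaussian here by assumption)
    noise_std    : per-channel std of the additive white Gaussian noise n_i(k)
    """
    xi = fading_mean + fading_std * rng.standard_normal(u.shape)
    n = noise_std * rng.standard_normal(u.shape)
    return xi * u + n

# Example: a 3-channel input sent through the unreliable network.
u = np.array([1.0, -0.5, 0.2])
v = awgn_fading_channel(u, fading_mean=0.8, fading_std=0.2, noise_std=0.1)
print(v)
```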
The following lemmas are useful for subsequent development and thus are introduced first.
Lemma 2 (see [31]). If the plant is a nonminimum-phase, strictly proper, right-invertible transfer function matrix, then a state-space realization can be chosen in which one subsystem is stable, all the poles of the other subsystem lie on or outside the unit circle, and the corresponding pair of matrices is controllable.

Lemma 3 (see [33]). The linear matrix equation considered has a solution if and only if a corresponding consistency condition holds. Moreover, the general solution is parameterized by an arbitrary matrix.
State Feedback Regulation Performance Limitations.
In this subsection, we consider the feedback system of Figure 1, where the channel input is based on static state feedback; to simplify the notation, a shorthand is used for the channel input. The average performance index to be minimized in the present subsection weighs the state (error) energy against the input energy through a parameter in [0, 1], chosen in advance, which may be used to weigh the relative importance of the regulation objective versus that of constraining the input energy; the corresponding weighting matrices are formed from this parameter together with a symmetric positive definite matrix of appropriate dimension. The minimization is carried out over the class of all stabilizing state feedback controllers.
The problem under study can be described as follows.
Problem 4. For a discrete-time NCS as depicted in Figure 1, find a network control input in the class of stabilizing state feedback controllers such that the minimum of the performance index (11) is attained.
Theorem 5. Consider the feedback system of Figure 1 with the fading channel described by (2) and (3), and suppose the plant is unstable, nonminimum phase, and strictly proper, with a minimal state-space realization as above. Then the minimum state regulation performance is determined by the unique solution of a discrete-time ARE, and the optimal controller sequence is expressed in terms of that solution.

Proof. The optimal problem is formulated in terms of the state covariance matrices and the gain matrices. By a simple calculation it can be seen that a deterministic optimal control problem is equivalent to the original problem (1), (2), (3), (10), (11), and (13) with a feedback control of the form (10), subject to the corresponding constraints. Firstly, consider the associated performance index. In order to obtain the optimal performance under the conditions (19), we construct a Lagrangian function from a Hamiltonian together with a parameter matrix and symmetric matrices of Lagrangian multipliers [34]. The necessary conditions for optimality then yield a set of AREs. It is known that the general finite-horizon ARE has a unique positive definite solution at every step. If the system (1) is stabilized by the feedback control, the corresponding regulation performance limitation must exist; namely, the initial-time solution in (25) exists, and the limiting solution of the general ARE (24) exists as the horizon tends to infinity. Then (24) can be written in a stationary form, and by using Lemma 3 we can obtain its general solution. In particular, taking the pseudo-inverse-based particular solution and noting (13), (20), (25), and (27), the optimal performance, the control gain, and the discrete-time ARE are obtained.

Remark 6. If the performance index is considered in the traditional (unaveraged) sense, the proof of Theorem 5 shows that, in view of (26), the corresponding performance limitation (32) grows without bound; this is why the average performance limitation (11) was introduced.
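The steady-state solution of a discrete-time ARE, on which the performance limit in Theorem 5 is built, can be computed numerically. Because the paper's ARE incorporates the fading statistics and the channel-noise spectral densities and is not reproduced here, the sketch below solves the standard LQR-type discrete ARE as a stand-in, either in closed form via SciPy or by iterating the finite-horizon recursion until it converges, which mirrors the limiting argument used in the proof. The example matrices are arbitrary.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Arbitrary example data (not taken from the paper).
A = np.array([[1.2, 0.1], [0.0, 0.9]])   # unstable plant matrix
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                             # state (error) weighting
R = np.array([[1.0]])                     # input-energy weighting

# Closed-form steady-state solution of the standard discrete ARE.
P = solve_discrete_are(A, B, Q, R)

# Equivalent limiting solution obtained by iterating the Riccati recursion,
# analogous to letting the horizon of the finite-horizon problem go to infinity.
P_iter = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P_iter @ B, B.T @ P_iter @ A)
    P_iter = Q + A.T @ P_iter @ (A - B @ K)

K_opt = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal state-feedback gain
print(np.allclose(P, P_iter, atol=1e-6), K_opt)
```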
When the unreliable network does not contain the fading channel, that is, it contains only a white Gaussian noise channel, we can obtain the following corollary.
Corollary 7. Consider the feedback system of Figure 1 with a white Gaussian noise channel, where the system () is unstable and nonminimum phase, corresponding to Lemma 2; then the minimum state regulation performance is given by where is the unique solution of the discrete-time ARE: And the optimal controller sequence is where Then And then, the following equations can be obtained: Applying Lemma 2, the above equation can be transformed into Therefore, we have where the unstable poles and nonminimum phase zeros deteriorate the optimal regulation performance.
Conclusions
In this paper, we have investigated the optimal regulation performance of networked control systems over an unreliable network in the path from the controller to the plant. The unreliable network contains a fading channel and an additive white Gaussian noise (AWGN) channel. We consider two types of feedback control, state feedback and output feedback, and fundamental limitations on the regulation performance are obtained for each. The optimal regulation performance limitations can be obtained from the limiting steady-state solution of the associated algebraic Riccati equation (ARE). Finally, some simulation results are given to illustrate the obtained results.
Furthermore, the results of this paper can be easily extended to the continuous-time case. When the networked control system contains nondeterministic or hybrid switching, the issue of performance limitation also deserves further study.
Figure 1: State feedback control by one-parameter controller over AWGN fading channels.
|
2019-01-02T08:30:15.342Z
|
2016-08-23T00:00:00.000
|
{
"year": 2016,
"sha1": "55567a35040515df4ed8ec8b3f4770ff42195ff4",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ddns/2016/6826130.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "55567a35040515df4ed8ec8b3f4770ff42195ff4",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
258178411
|
pes2o/s2orc
|
v3-fos-license
|
U3 snoRNA‐mediated degradation of ZBTB7A regulates aerobic glycolysis in isocitrate dehydrogenase 1 wild‐type glioblastoma cells
Abstract Aims The isocitrate dehydrogenase (IDH) phenotype is associated with reprogrammed energy metabolism in glioblastoma (GBM) cells. Small nucleolar RNAs (snoRNAs) are known to exert an important regulatory role in the energy metabolism of tumor cells. The purpose of this study was to investigate the role of C/D box snoRNA U3 and transcription factor zinc finger and BTB domain‐containing 7A (ZBTB7A) in the regulation of aerobic glycolysis and the proliferative capacity of IDH1 wild‐type (IDH1WT) GBM cells. Methods Quantitative reverse transcription PCR and western blot assays were utilized to detect snoRNA U3 and ZBTB7A expression. U3 promoter methylation status was analyzed via bisulfite sequencing and methylation‐specific PCR. Seahorse XF glycolysis stress assays, lactate production and glucose consumption measurement assays, and cell viability assays were utilized to detect glycolysis and proliferation of IDH1WT GBM cells. Results We found that hypomethylation of the CpG island in the promoter region of U3 led to the upregulation of U3 expression in IDH1WT GBM cells, and the knockdown of U3 suppressed aerobic glycolysis and the proliferation ability of IDH1WT GBM cells. We found that small nucleolar‐derived RNA (sdRNA) U3‐miR, a small fragment produced by U3, was able to bind to the ZBTB7A 3′UTR region and reduce ZBTB7A mRNA stability, thereby downregulating ZBTB7A protein expression. Furthermore, ZBTB7A transcriptionally inhibited the expression of hexokinase 2 (HK2) and lactate dehydrogenase A (LDHA), which are key enzymes of aerobic glycolysis, by directly binding to the HK2 and LDHA promoter regions, thereby forming the U3/ZBTB7A/HK2 LDHA pathway that regulates aerobic glycolysis and proliferation of IDH1WT GBM cells. Conclusion U3 enhances aerobic glycolysis and proliferation in IDH1WT GBM cells via the U3/ZBTB7A/HK2 LDHA axis.
| INTRODUCTION
At present, glioblastoma (GBM) is the most malignant intracranial tumor, with high morbidity and mortality rates. 1 GBM can be classified on the basis of mutations in isocitrate dehydrogenase 1 (IDH1) into IDH1 wild-type (IDH1 WT) and IDH1 mutant-type (IDH1 Mut), of which IDH1 R132H is the most common point mutation. [4][5][6] Active aerobic glycolysis, known as the Warburg effect, is a typical metabolic feature of GBM cells. Glycolysis can provide energy and raw materials for the rapid proliferation of GBM cells. 7,8 Studies have found that the aerobic glycolytic and proliferative capacities of IDH1 WT GBM cells are significantly higher than those of IDH1 Mut GBM cells, and IDH1 WT GBM has been associated with a shortened survival time compared with IDH1 Mut GBM. 6,9 Therefore, it is essential that the molecular mechanisms regulating aerobic glycolysis in IDH1 WT GBM cells be elucidated.

It has been reported that the methylation status of CpG islands in DNA promoter regions is closely linked with the expression of small nucleolar RNAs (snoRNAs). 11 snoRNAs, which are noncoding small RNAs, are located mainly in the nucleus and participate in the tumorigenesis of various tumors. 12,13 For example, SNORD44 is generally expressed at low levels in gliomas; however, the high expression of SNORD44 inhibits GBM proliferation, invasion, and migration and promotes apoptosis. 14 SNORD12B expression is elevated in GBM and promotes glycolipid metabolism and proliferation. 15 U3 belongs to the RNU3 nuclear small RNA family, which is located at the core of the nonhomologous recombinant palindrome sequence on the long arm of chromosome 17 and is closely correlated with tumorigenesis. 16,17 Currently, the differential expression of U3 in IDH1 WT and IDH1 Mut GBM and the mechanisms regulating aerobic glycolysis in IDH1 WT GBM remain unclear.
The transcription factor ZBTB7A (zinc finger and BTB domain-containing 7A) is a member of the POK (POZ/BTB and Krüppel) protein family and is known to repress target genes by recruiting co-repressors. 18 In osteosarcoma, ZBTB7A represses linc00473 transcription and expression and regulates the sensitivity of osteosarcoma cells to cisplatin chemotherapy. 19 In melanocytes, ZBTB7A transcriptionally represses the expression of the key adhesion protein MCAM and regulates the migratory and invasive abilities of melanoma cells. 20 However, there have been no reports on the role of ZBTB7A in aerobic glycolysis and the proliferative capacity of IDH1 WT GBM cells. Hexokinase 2 (HK2) and lactate dehydrogenase A (LDHA) are key enzymes that regulate aerobic glycolytic energy metabolism. Increased expression of HK2 and LDHA in GBM cells can promote the aerobic glycolytic capacity of cells, which in turn can promote cell proliferation, migration, and invasion. 7,8,21 In the present study, we demonstrated the differential expression of U3 and ZBTB7A in IDH1 WT and IDH1 Mut GBM tissues and cells, as well as the intermolecular interactions between them.
Furthermore, we explored the mechanism responsible for their effects on aerobic glycolysis and the proliferative capacity of IDH1 WT GBM cells. This study identified a new mechanism of tumorigenesis and explored new molecular targets for the treatment of IDH1 WT GBM from the perspective of aerobic glycolysis.
| Clinical specimens
The normal brain specimens and glioma specimens used in this study were collected from the Department of Neurosurgery, Shengjing Hospital. Clinical information for the specimens is summarized in Table S1.
For more detailed information, refer to Supporting Information of Materials and Methods.
| Western blot
Western blot was performed as previously described. 22,23 For more detailed information about the experimental procedure and antibody information, please refer to Supporting Information of Materials and Methods.
The qRT-PCR and western blot were utilized to confirm transfection efficacy ( Figure S1). For more detailed information about experimental procedure, please refer to Supporting Information of Materials and Methods.
| Extracellular acidification rate
Extracellular acidification rate (ECAR) assay was performed as previously described. 22,23 For more detailed information about experimental procedure, please refer to Supporting Information of Materials and Methods.
| Glucose utilization and lactate production assays
Glucose utilization and lactate production was measured as previously described. 22,23 For more detailed information about experimental procedure, please refer to Supporting Information of Materials and Methods.
| CCK-8
Cell proliferation was assessed via CCK-8 assay as previously described. 22,23 For more detailed information about experimental procedure, please refer to Supporting Information of Materials and Methods. Table S4.
| Northern blot
Northern blot was performed with the Signosis High Sensitive miRNA Northern Blot Assay Kit (Signosis Inc., Santa Clara, CA, USA) following the manufacturer's protocol. Total RNA (20 μg) was separated on a 10% acrylamide denaturing gel, blotted onto a nylon Hybond N membrane, and analyzed using an oligonucleotide probe complementary to U3-miR.
| RNA immunoprecipitation assay
The RNA immunoprecipitation (RIP) assay was performed as previously described. 22
| Dual-luciferase reporter assay
Dual-luciferase reporter assay was performed as previously described. 22,23 For more detailed information about experimental procedure, please refer to Supporting Information of Materials and Methods.
| Tumor xenograft in nude mouse
Animal experiments were performed as previously described. 22,23 The stable transfected and expressing cells were utilized to establish xenograft models in nude mice. For more detailed information about experimental procedure, please refer to Supporting Information of Materials and Methods.
| Statistical analysis
All values are presented as mean ± standard deviation (SD). Each experiment was conducted three times independently. The normality of the data was analyzed using the Shapiro-Wilk test. Data from two groups were analyzed using Student's t-test. Data from more than two groups were analyzed using one-way or two-way analysis of variance (ANOVA) followed by Dunnett's multiple comparisons test or Sidak's multiple comparisons test. For data not normally distributed, nonparametric tests were used. Statistical analysis was conducted via GraphPad Prism v8.4, and a p value <0.05 was considered statistically significant.
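The study used GraphPad Prism for these analyses; purely as an illustration of equivalent tests, the following is a minimal sketch (with made-up measurement values, not the study's data) of the normality check, two-group comparison, and multi-group comparison using SciPy. Multiple-comparison corrections such as Dunnett's or Sidak's tests would follow the ANOVA and are omitted here.

```python
# Minimal sketch of the described statistics pipeline (hypothetical data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(1.00, 0.10, size=3)    # e.g., relative expression, n = 3
knockdown = rng.normal(0.45, 0.10, size=3)
overexpr = rng.normal(1.60, 0.10, size=3)

# Normality check (Shapiro-Wilk) for each group
for name, g in [("control", control), ("knockdown", knockdown)]:
    w, p = stats.shapiro(g)
    print(f"Shapiro-Wilk {name}: W={w:.3f}, p={p:.3f}")

# Two groups: Student's t-test
t, p = stats.ttest_ind(control, knockdown)
print(f"t-test control vs knockdown: t={t:.2f}, p={p:.4f}")

# More than two groups: one-way ANOVA (post hoc tests would follow separately)
f, p = stats.f_oneway(control, knockdown, overexpr)
print(f"one-way ANOVA: F={f:.2f}, p={p:.4f}")
```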
| The glycolytic capacity and proliferation of IDH1 WT GBM cells were significantly higher than those of IDH1 R132H GBM cells
Sanger sequencing was used to confirm that the GBM cell lines U87 and U251 were IDH1 WT GBM cells (Figure 1A). IDH1 R132H and IDH1 WT overexpression vectors were stably transfected into GBM cells, and western blot detected IDH1 R132H and IDH1 WT protein expression using IDH1 R132H and IDH1 WT antibodies (Figure 1B). Subsequently, we analyzed the differences in aerobic glycolytic and proliferative capacity among control, empty vector (negative control, NC), IDH1 WT vector, and IDH1 R132H vector transfected U87 and U251 cells. The aerobic glycolytic capacity of IDH1 WT GBM cells was significantly higher than that of IDH1 R132H GBM cells, while overexpression of IDH1 WT did not increase aerobic glycolytic capacity in IDH1 WT GBM cells (Figure 1C-E). Glucose consumption was significantly higher in IDH1 WT GBM cells than in IDH1 R132H GBM cells, and the overexpression of IDH1 WT increased glucose consumption (Figure 1F). Similarly, IDH1 WT GBM cells had a significantly higher proliferative capacity than IDH1 R132H GBM cells, and the overexpression of IDH1 WT enhanced cell proliferation capacity (Figure 1G,H). The above results demonstrated that IDH1 WT GBM cells had higher aerobic glycolytic and proliferative capacity when compared to IDH1 R132H GBM cells. Overexpression of IDH1 WT enhanced cell proliferation but did not increase aerobic glycolytic capacity.
| U3 was highly expressed in IDH1 WT GBM and U3 knockdown inhibited glycolytic capacity and proliferation
Based on data from Gene Expression Profiling Interactive Analysis (GEPIA, http://gepia.cancer-pku.cn/index.html), we found that U3 was highly expressed in glioma (Figure 2A). Using qRT-PCR experiments, we found that U3 expression was significantly elevated in glioma tissues, and U3 expression was higher in IDH1 WT glioma tissues when compared to IDH1 R132H glioma tissues (Figure 2B). Similarly, qRT-PCR experiments revealed that U3 expression was significantly higher in IDH1 WT GBM cells when compared to normal human astrocyte (NHA) cells, and U3 expression was higher in IDH1 WT GBM cells than in IDH1 R132H GBM cells (Figure 2C). Based on the above results, we constructed U3 knockdown U87 and U251 IDH1 WT GBM cells (Figure S1A) to investigate the effect of U3 on the aerobic glycolytic and proliferative capacity of IDH1 WT GBM cells. Using western blot assay, we found that HK2 and LDHA expression decreased upon U3 knockdown (Figure 2D), and the extracellular acidification rate (ECAR) assay and colorimetric quantification revealed a significant decrease in the aerobic glycolytic capacity of IDH1 WT GBM cells upon U3 knockdown (Figure 2E-G). Colony formation and CCK-8 assays revealed that the proliferative capacity of IDH1 WT GBM cells was also reduced upon U3 knockdown (Figure 2H,I). Similarly, we observed that U3 knockdown also inhibited glycolytic capacity and proliferation in IDH1 WT GBM cells that overexpressed the IDH1 WT vector (Figure S2).
| Hypomethylation of the CpG island in the promoter region of U3 promoted U3 expression in IDH1 WT GBM
Next, we analyzed the mechanisms that led to the differential expression of U3 in IDH1 WT and IDH1 R132H GBM cells. The CpG island analysis website (http://www.urogene.org/methprimer/) was used to identify the presence of two CpG islands in the U3 promoter region (Figure 4A). We designed specific primers for pyrosequencing (Figure 4B) to identify whether there was a difference in CpG1 and CpG2 methylation levels in IDH1 WT and IDH1 R132H GBM cells. We found that the CpG2 methylation status was not significantly different between IDH1 WT and IDH1 R132H GBM cells; however, CpG1, which is closer to the transcription start site (TSS), was hypomethylated in IDH1 WT GBM cells and hypermethylated in IDH1 R132H GBM cells (Figure 4C). The methylation-specific PCR (MSP) assay also indicated that CpG1 methylation in IDH1 WT GBM cells was lower than that in IDH1 R132H GBM cells (Figure 4D). At the tissue level, we found that CpG1 methylation in IDH1 WT glioma tissues was lower than that in normal brain tissues (NBTs), and the level of CpG1 methylation in

FIGURE 1: The isocitrate dehydrogenase 1 wild-type (IDH1 WT) molecular phenotype is correlated with enhanced aerobic glycolysis and proliferation ability. (A) U87 and U251 glioblastoma (GBM) cell lines were identified as IDH1 WT via Sanger sequencing for the IDH1 codon 132. (B) Western blot analysis detected the indicated proteins in control, empty vector (negative control, NC), IDH1 WT vector, and IDH1 R132H vector transfected U87 and U251 cells using IDH1 R132H antibodies and IDH1 WT antibodies. (C) Aerobic glycolytic ability was measured using extracellular acidification rate (ECAR) assay in the abovementioned transfected U87 cells. (D) Aerobic glycolytic ability was measured using ECAR assay in the abovementioned transfected U251 cells. (E) Lactate production was measured in the abovementioned transfected cells. (F) Glucose consumption was measured in the abovementioned transfected cells. (G) Cell viability was analyzed in the abovementioned transfected cells using CCK-8 assay. (H) The proliferation ability of the abovementioned transfected cells was analyzed using a colony formation assay. **p < 0.01 versus NC group; **p < 0.01 versus IDH1 WT (+) group. Data are presented as the mean ± SD of three independent experiments per group, unless otherwise specified. Data were statistically analyzed using one-way analysis of variance (ANOVA).

FIGURE 2: U3 expression was higher in IDH1 WT GBM, and the knockdown of U3 suppressed aerobic glycolysis and proliferation. (A) U3 expression in gliomas based on the GEPIA database. (B) U3 expression was detected in normal brain tissues (NBTs, n = 10), IDH1 R132H GBM tissues (n = 10), and IDH1 WT GBM tissues (n = 10) using qRT-PCR. *p < 0.05 versus NBT group; **p < 0.01 versus NBT group; ## p < 0.01 versus IDH1 R132H GBM group. (C) U3 expression in normal human astrocytes (NHA), IDH1 R132H, and IDH1 WT GBM cells as determined by qRT-PCR. **p < 0.01 versus NHA; ## p < 0.01 versus IDH1 R132H GBM. (D) Hexokinase 2 (HK2) and lactate dehydrogenase A (LDHA) protein expression was detected using western blot after U3 knockdown in IDH1 WT GBM cells. (E) Aerobic glycolytic ability was measured after U3 knockdown in IDH1 WT GBM cells using an ECAR assay. (F) Glucose consumption was measured after U3 knockdown in IDH1 WT GBM cells. (G) Lactate production was measured after U3 knockdown in IDH1 WT GBM cells. (H) Proliferation ability was detected after U3 knockdown in IDH1 WT GBM cells using a colony formation assay. (I) Cell viability was detected after U3 knockdown in IDH1 WT GBM cells using a CCK-8 assay. **p < 0.01 versus U3(−)NC group. Data are presented as the mean ± SD of three independent experiments per group, unless otherwise specified. Data were statistically analyzed using one-way ANOVA.
| U3 downregulated ZBTB7A expression via the formation of sdRNA U3-miR and enhanced glycolytic capacity and proliferation in IDH1 WT GBM cells
Several studies have revealed that snoRNAs can form snoRNA-derived RNA (sdRNA) via their stem-loop structure, which is further processed by the Dicer enzyme and bound to the Ago2 protein to exert microRNA-like functions. 26,27 RT-PCR experiments revealed that U3 was able to form the short fragment sdRNA U3-miR (Figure 5A), and U3-miR had an increased expression in IDH1 WT GBM cells when compared to IDH1 R132H GBM cells (Figure 5B). RIP experiments confirmed that U3-miR was able to bind to the Ago2 protein (Figure 5C).
Following knockdown of Dicer in IDH1 WT GBM cells (Figure S1C,D), northern blot experiments revealed no significant change in U3 expression, whereas U3-miR expression was significantly reduced (Figure 5D).
Comparative sequence analysis revealed that U3-miR was highly similar to hsa-miR-496, and by searching and analyzing the starBase database (http://starbase.sysu.edu.cn/), we found that ZBTB7A was the primary target gene (Figure 5E). We further investigated whether U3-miR has a microRNA-like function in regulating ZBTB7A expression. We found that U3-miR possessed a sequence complementary to the 3′UTR of ZBTB7A, and the dual-luciferase assay demonstrated that U3-miR could bind to the 3′UTR of ZBTB7A (Figure 5F). The actinomycin D assay revealed that ZBTB7A mRNA stability increased and ZBTB7A protein expression was elevated upon U3 knockdown (Figure 5G,H). The above results suggest that U3 promotes the degradation of ZBTB7A mRNA and downregulates ZBTB7A protein expression by forming the short fragment sdRNA U3-miR to perform microRNA-like functions. Western blot experiments revealed that ZBTB7A expression was lower in GBM tissues compared with NBTs, and ZBTB7A protein expression was significantly lower in IDH1 WT GBM than in IDH1 R132H GBM (Figure 5I). Furthermore, ZBTB7A protein expression was lower in GBM cells, and ZBTB7A was expressed at significantly lower levels in IDH1 WT GBM cells than in IDH1 R132H GBM cells.
| U3 enhanced glycolytic capacity and proliferation of IDH1 WT GBM cells via the regulation of ZBTB7A expression
On the basis of the knockdown of U3 expression, we interfered with the expression of ZBTB7A and observed the glycolytic and proliferative capacity of the cells. We detected U3 and ZBTB7A expression via qRT-PCR or western blot (Figure S3). Compared with the U3 knockdown alone group, overexpression of ZBTB7A after U3 knockdown significantly inhibited the expression of HK2 and LDHA proteins (Figure 6A).
| Knockdown of U3 inhibited the growth of subcutaneous xenograft IDH1 WT GBM tumors and prolonged the survival of nude mice
To further demonstrate the effect of U3 knockdown on the inhibition of IDH1 WT GBM progression, nude mice were randomly divided into four groups: control, U3(−), ZBTB7A(+), and U3(−) + ZBTB7A(+).
The subcutaneous xenograft tumor assay revealed that compared with the control group, the graft tumor volume decreased in the U3(−) and ZBTB7A(+) groups; however, the graft tumor volume was the smallest in the U3(−) + ZBTB7A(+) group ( Figure 8A,B). In addition, we injected IDH1 WT GBM cells into the right striatal area of nude mice and detected differences in the survival periods of these groups. Compared with the control group, the survival period of nude mice in the U3(−) and ZBTB7A(+) groups was significantly longer and the survival period in the U3(−) + ZBTB7A(+) group was the longest ( Figure 8C). Taken together, our results demonstrate the mechanism by which the U3/ZBTB7A/HK2 LDHA pathway promotes aerobic glycolysis and proliferation of IDH1 WT GBM cells ( Figure 8D).
| DISCUSSION
In this study, we found that the aerobic glycolytic and proliferative capacity of IDH1 WT GBM cells were higher than those of IDH1 R132H GBM cells. It has been reported that IDH1 WT and IDH1 Mut molecular phenotypes are closely correlated with glioma tumorigenesis and patient prognosis. Patients with secondary GBM carrying IDH1 R132H generally have a better prognosis and longer overall survival than patients with IDH1 WT primary glioblastoma (31 months vs. 15 months). Studies have found that the IDH1 molecular phenotype is also closely related to glycolytic energy metabolism in glioma cells, and the aerobic glycolytic capacity of IDH1 WT GBM cells is significantly higher than that of IDH1 R132H GBM cells, 28,29 which is consistent with results obtained in this study. The differential expression of the LDHA protein, a key glycolytic enzyme, in different molecular phenotypes of IDH1 may lead to heterogeneity in glycolytic energy metabolism in glioma cells and may affect glioma cell migration, invasion, and proliferation. 25,30,31 We found that HK2 and LDHA expression was higher in IDH1 WT GBM cells than in IDH1 R132H GBM cells, and the glycolytic and proliferative capacity of IDH1 WT GBM cells were higher than those of IDH1 R132H GBM cells.
U3 belongs to the RNU3 gene family of noncoding snoRNAs that cause genomic instability and increase susceptibility to genetic rearrangements, which play crucial roles in tumorigenesis and tumor progression. 16 Studies have shown that U3A increases the sensitivity of breast cancer cells to 5-FU chemotherapy by upregulating UMPS expression. 17 Furthermore, U3 is highly expressed in osteosarcoma cells and promotes cellular resistance to doxorubicin chemotherapy. 32 It has also been reported that SNORD3B-1 is highly expressed in hepatocellular carcinoma and is utilized as a diagnostic molecular marker for early-stage hepatocellular carcinoma and AFP-negative hepatocellular carcinoma. 33 This study revealed higher U3 expression in IDH1 WT GBM cells than in IDH1 R132H GBM cells. Furthermore, the knockdown of U3 inhibited the aerobic glycolytic and proliferative capacity of IDH1 WT GBM cells, suggesting that U3 exerts a pro-oncogenic role via its involvement in abnormal energy metabolism in IDH1 WT GBM cells. snoRNAs can form short RNA fragments of sdRNA via stem-loop structures. The sdRNAs formed by snoRNAs were found to be either 17-19 nt in length or greater than 26 nt. 34,35 sdRNAs are processed by the Dicer enzyme and bind to the Ago2 protein to perform microRNA-like functions. 34,36,37 Studies have shown that SNORA-93 regulates pipox expression and promotes breast cancer cell invasion by generating the miRNA-like sdRNA-93. 38 Furthermore, in prostate cancer, the high expression of sdRNA-D19b and sdRNA-A24 enhances prostate cancer cell proliferation, metastatic ability, and resistance to chemotherapy by regulating the expression of CD44 and CDK12. 39 In this study, we found that U3 could form sdRNA U3-miR in GBM cells. Consistent with parental U3 expression, U3-miR had an increased expression in IDH1 WT GBM cells when compared to IDH1 R132H GBM cells. We found that U3 sdRNA needs to be processed by the Dicer enzyme to form U3-miR, and U3-miR can then bind to the Ago2 protein to exert a microRNA-like function in downregulating ZBTB7A expression.

FIGURE 7: ZBTB7A transcriptionally regulated HK2 and LDHA expression. (A) HK2 mRNA expression was analyzed after ZBTB7A overexpression or knockdown via qRT-PCR in IDH1 WT GBM cells. (B) LDHA mRNA expression was analyzed after ZBTB7A overexpression or knockdown via qRT-PCR in IDH1 WT GBM cells. **p < 0.01 versus ZBTB7A(+)NC group; ## p < 0.01 versus ZBTB7A(−)NC group. (C) Diagram showing the ZBTB7A binding site of the HK2 promoter (above). Chromatin immunoprecipitation (ChIP) assay revealed ZBTB7A bound to the HK2 promoter (below). (D) Schematic diagram of luciferase reporter construction and relative luciferase activity analyzed in cells co-transfected with pEX3-ZBTB7A or empty vector and the HK2 promoter (−1000 to 0 bp) (or the HK2 promoter without the putative ZBTB7A binding site). (E) Diagram showing the ZBTB7A binding site of the LDHA promoter (above). ChIP assay revealed ZBTB7A bound to the LDHA promoter (below). (F) Schematic diagram of luciferase reporter construction and relative luciferase activity analyzed in cells co-transfected with pEX3-ZBTB7A or empty vector and the LDHA promoter (−1000 to 0 bp) (or the LDHA promoter without the putative ZBTB7A binding site). **p < 0.01 versus pEX3 empty vector group. Data are presented as the mean ± SD of three independent experiments per group, unless otherwise specified. The data were statistically analyzed via one-way ANOVA.
We also found that ZBTB7A expression was reduced in GBM and was significantly lower in IDH1 WT GBM than in IDH1 R132H GBM. In summary, this study demonstrated that U3 is highly expressed and ZBTB7A is expressed at low levels in IDH1 WT GBM. We found that U3 formed the sdRNA U3-miR, which functioned similarly to microRNA by binding to the ZBTB7A 3′UTR region to downregulate ZBTB7A expression. ZBTB7A transcriptionally repressed the expression of HK2 and LDHA and regulated the aerobic glycolytic and proliferative capacity of IDH1 WT GBM cells. We revealed a new mechanism by which the U3/ZBTB7A/HK2 LDHA pathway promotes tumorigenesis in IDH1 WT GBM cells, thus providing new targets and strategies for the treatment of IDH1 WT GBM.
AUTHOR CONTRIBUTIONS
XL, YX, WD, and YL were involved in the study conception and design.
CONFLICT OF INTEREST STATEMENT
The authors have no relevant financial or nonfinancial interests to disclose.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2023-04-18T06:17:48.388Z
|
2023-04-17T00:00:00.000
|
{
"year": 2023,
"sha1": "97997c1d88164c42a16b06900f8ea5e25391fc50",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/cns.14218",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "7e57eb9693274944a2b1813cc32c4e3d6a6d8ea6",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
49431576
|
pes2o/s2orc
|
v3-fos-license
|
Paraoxonase (PON)-1: a brief overview on genetics, structure, polymorphisms and clinical relevance
Paraoxonase-1 (PON1) is a high-density lipoprotein-associated esterase and is speculated to play a role in several human diseases including diabetes mellitus and atherosclerosis. Low PON1 activity has been associated with increased risk of major cardiovascular events, therefore a variety of studies have been conducted to establish the cardioprotective properties and clinical relevance of PON1. The major aim of this review was to highlight the important studies and to subsequently assess if PON1 has clinical relevance. A review of the literature showed that there is currently insufficient data to suggest that PON1 has clinical relevance. It is our opinion that robust studies are required to clarify the clinical relevance of PON1.
Introduction
Human serum paraoxonase-1 (PON1) is a calcium-dependent hydrolytic enzyme that is found in a variety of mammalian species. Abraham Mazur 1 and Norman Aldridge 2 played a pivotal role in the identification and classification of PON1 in the mid-1940s to early 1950s. Initially, the enzymes were referred to as "A"-esterases, but later became universally known as paraoxonases due to their ability to detoxify the organophosphate compound paraoxon which is the toxic metabolite of parathion, a commonly used agricultural insecticide. 3 PON1 belongs to a family of three serum paraoxonases, including PON2 and PON3; however, PON1 remains the most popular member of this family. 4 This is largely due to the elegant studies by Mackness et al that described the role of high-density lipoprotein (HDL)-associated PON1 in decreasing lipid peroxide accumulation on low-density lipoprotein (LDL). [5][6][7] This highlighted a link to PON1 and cardiovascular disease, which sparked the research interest in PON1, mainly to elucidate the precise physiological mechanisms of the enzyme. In addition, PON1 hydrolyzes homocysteine thiolactone. Homocysteine thiolactonase activity of PON1 protects against N-homocysteinylation, which is detrimental to protein structure and function. 8
PON1 contains two calcium ions; the ion located in the central tunnel has a structural role that is critical for the conformational stability of PON1. The other calcium ion, which lies at the bottom of the active site cavity, has a catalytic role and is important for substrate positioning and ester bond activation. Three helices are located above the active site of PON1: H1, H2 and H3, where H1 and H2 have functions in PON1-HDL interactions. 9
Genetics of PON1
The human PON1 gene is a member of a multigene family consisting of three members in total. PON1, PON2 and PON3 are located next to each other on chromosome 7 and share extensive structural homology. Interestingly, PON1 can be differentiated from PON2 and PON3 by the three extra nucleotide residues in exon 4. 11 The genes for this family are expressed in various mammalian tissues, with PON1 and PON3 primarily synthesized in the liver and mostly found associated with HDL in the plasma. 12,13 PON1 forms part of a repertoire of HDL-associated enzymes, including lecithin-cholesterol acyltransferase and platelet-activating factor acetyl-hydrolase, responsible for the antioxidative activity of HDL.
PON1 polymorphisms
Human PON1 has many single-nucleotide polymorphisms (SNPs); eight have been identified on the promoter region and 176 within the gene sequence, 14 some of which exert changes in PON1 level and activity. These polymorphisms may also affect the risk for disease development and the severity of disease. 15 Studies have identified two common polymorphisms in the coding region (at positions 55 and 192) that have been reported to affect the activity and concentration of PON1. 16 The leucine/methionine polymorphism at position 55 of the amino acid sequence (L55M) has been associated with changes in PON1 serum concentrations, and an association with the occurrence of cardiovascular disease was also observed. 17 The glutamine/arginine polymorphism at position 192 (Q192R) has been shown to affect PON1 activity, where the Q192 isoform was demonstrated to hydrolyze paraoxon and metabolize oxidized LDL more effectively than the R192 isoform. 18 The Q192R polymorphism is regarded as the chief biomarker of oxidant status, where LDL oxidation is prevented the most in QQ homozygous patients and the least in RR patients. 19 In addition, three common SNPs (G-907C, A-162G and C-108T) were identified in the promoter sequence of the PON1 gene. 14 These polymorphisms are associated with considerable differences in PON1 concentration and activity, and they have also been implicated in the presence of coronary heart disease. 20 It has been well established that low PON1 activity is linked with an increased risk of cardiovascular disease, implicating PON1 as a physiologically important enzyme.
Clinical relevance
PON1 is an HDL-associated protein that has the ability to hydrolyze oxidized LDL-cholesterol, with potential atheroprotective effects. 21 Furthermore, PON1 can cleave phospholipid peroxidation adducts with potential cytoprotective functions. 22 Given these potential atheroprotective effects, and the large burden of atherosclerotic cardiovascular disease, considerable work has focused on elucidating the clinical relevance of PON1.
Animal studies 23,24 have suggested atheroprotective benefits of PON1. Transgenic mice overproducing human PON1 were protected from atherosclerosis when compared to wild-type mice. 23 In addition, PON1-deficient mice are at greater risk of developing atherosclerosis than wild-type mice. 24 Animal studies have various limitations and direct extrapolation to humans cannot be made. However, they are useful for "transiting" from in vitro studies to clinical studies.
With respect to clinical studies, Mackness et al 20 investigated the effects of the C-108T and G-909C promoter polymorphisms on PON1 levels and the presence of coronary heart disease (CHD). It was a case-control study, with 417 people with CHD and 282 healthy controls. PON1 activity and concentration were significantly lower in the CHD population compared to controls, regardless of their C-108T and G-909C genotype (p<0.001). Both promoter polymorphisms were not associated with CHD presence and the authors concluded that PON1 status was significantly lower in people with CHD.
Azarsiz et al 25 investigated PON1 activity in patients with angiographically confirmed CHD. Twenty-four healthy volunteers and 101 patients were enrolled in the study; 68 patients had coronary artery disease, which was confirmed by coronary angiography. PON1 activities of patients with CHD were lower than those of controls; however, this difference was statistically insignificant. In contrast to Azarsiz et al's study, 25 Sharma et al 26 demonstrated that serum PON1 activity was significantly low in CHD patients. Sharma et al investigated PON1 activity in 200 patients suffering from CHD and 150 normal individuals. CHD patients were classified into two groups on the basis of associated risk factors (diabetes mellitus, hypertension): group 1 (n=120; CHD patients with associated risk factors) and group 2 (n=80; CHD patients with no associated risk factors). 26 Smoking is an important independent risk factor for CHD, and a study by Han et al 27 showed that cigarette smoking status together with the presence of common PON1 SNPs plays a role in the risk of developing CHD. 27 In this case-control study nested within the Singapore Chinese Health Study, which was a prospective cohort of 63,257 participants recruited over 5 years, Han et al evaluated 1,914 Singaporean Chinese: 688 cases and 1,226 controls. The participants were stratified according to cigarette smoking status as ever-smokers (n=813), who answered yes to smoking at least one cigarette a day for a year or longer, and never-smokers (n=1,101). The T allele of the PON1 rs662 polymorphism was shown to be associated with an increased risk of CHD among ever-smokers only (odds ratio [OR]=1.35, 95% CI 1.08-1.68; adjusted p=0.036), whereas another PON1 SNP, rs3735590, was shown to be associated with an increase in CHD among never-smokers only (OR=1.53, 95% CI 1.11-2.11; adjusted p=0.036), 27 highlighting that more research is required on PON1 polymorphisms and CHD.
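For readers unfamiliar with how such odds ratios and confidence intervals are derived, the following is a minimal sketch using an entirely hypothetical 2×2 table of risk-allele carriers among cases and controls; the counts are illustrative only and are not taken from Han et al or any study cited above.

```python
# Odds ratio and 95% CI from a hypothetical case-control 2x2 table.
import math

# Hypothetical counts (carriers / non-carriers of a risk allele)
cases_carrier, cases_noncarrier = 260, 428
controls_carrier, controls_noncarrier = 390, 836

or_hat = (cases_carrier * controls_noncarrier) / (cases_noncarrier * controls_carrier)

# Woolf's method for the 95% confidence interval of ln(OR)
se_ln_or = math.sqrt(1/cases_carrier + 1/cases_noncarrier +
                     1/controls_carrier + 1/controls_noncarrier)
lo = math.exp(math.log(or_hat) - 1.96 * se_ln_or)
hi = math.exp(math.log(or_hat) + 1.96 * se_ln_or)

print(f"OR = {or_hat:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

Published studies such as those above typically adjust these estimates for covariates (e.g., via logistic regression), which a crude 2×2 calculation does not capture.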
Kunutsor et al 28 investigated the association of PON1 with cardiovascular risk. The study included prospectively measuring PON1 activity in 6,902 study subjects. The study subjects were followed up for a mean of 9.3 years, with 730 adverse cardiovascular events being recorded. There is an approximately log-linear inverse association between PON1 activity and CVD risk, which is partly dependent on HDL-cholesterol levels. Kunutsor et al performed a further meta-analysis of six studies with 15,064 study subjects and 2,958 incident adverse cardiovascular events. Based on the findings of Kunutsor et al, although there is a negative relationship between PON1 and adverse cardiovascular events, PON1 does not add value to cardiovascular risk stratification beyond conventional cardiovascular risk factors. 28 Given that atherosclerotic cardiovascular disease is a major cause of mortality, it was hypothesized that older individuals may have lower PON1 activity when compared to younger individuals. Seres et al 29 investigated the relationship between age and serum PON1 activity. One hundred twenty-nine healthy subjects aged between 22 and 89 years were included in their study. Serum PON1 activity significantly decreased with age (r=−0.38, p<0.0001). HDL concentrations remained unchanged with age; however, Apo A1 concentration showed a slight negative, but significant, correlation with age (r=−0.19, p<0.027). Moreover, the total cholesterol concentration was positively and significantly correlated with age (r=0.40, p<0.001). The authors also noted that HDL from elderly subjects was more susceptible to oxidation than HDL from young subjects, measured by a higher lipid peroxidation rate. The study was limited by a relatively small sample size, but did demonstrate a reduction in PON1 activity with age.
Numerous further studies investigated whether PON1 is a longevity gene. Lescai et al 30 carried out a meta-analysis of these studies that included 5,962 subjects: 2,795 young controls (<65 years of age) and 3,167 old subjects (>65 years of age). R carriers demonstrated a significant result with an overall OR of 1.16 (95% CI 1.04-1.30, p=0.006). The QR genotype also showed a significant result, with an overall OR of 1.14 (95% CI 1.02-1.27, p=0.016). The authors concluded that PON1 gene variants at codon 192 impact the probability of attaining longevity, and those subjects carrying RR and QR genotypes (R+ carriers) are favored in reaching extreme ages. However, subsequent meta-analyses with larger numbers of patients 31,32 have suggested that there is no effect of PON1 on human longevity. However, population-specific effects could not be excluded. The aforementioned meta-analyses may be limited by publication bias and variations in the analytical procedures used to measure PON1 activity.
Another meta-analysis based on 30 publications analyzed the risk of cancer in relation to the PON1 Q192R polymorphism. 33 The results indicated that the PON1-192 R allele was associated with a reduced risk of overall cancers compared to the 192 Q allele (OR=0.842, 95% CI 0.725-0.979); however, when the results were analyzed according to cancer type, both increased and decreased risks of specific cancer subtypes were observed under heterozygous, homozygous, dominant and recessive models. 33 It is well established that oxidative stress and increased free radicals may lead to an increased risk of cancer; therefore, the antioxidant properties of the genetic variants of PON1 should be studied in more detail to fully understand their role in cancer.
Diabetes mellitus is characterized by increased oxidative stress and damage, possibly as a result of glycosylation of LDL by glucose. 34 Various studies [35][36][37][38][39] have demonstrated a reduction in PON1 in type 2 diabetic patients. Furthermore, reduced PON1 activity in type 2 diabetes mellitus has been associated with increased risk of cardiovascular disease. 40 Rozek et al postulated that reduced PON1 activity in diabetic patients results in reduced HDL protective activity against cell membrane peroxidation, contributing to increased arteriosclerosis in diabetic patients. 41 Studies demonstrating reduction in PON1 levels in diabetic patients are contradicted by studies that show no changes in the levels of PON1 in diabetic subjects. [42][43][44] However, although the aforementioned studies showed no absolute reductions in PON1 levels, they demonstrated qualitative reductions in PON1 activity.
Nie et al conducted a meta-analysis on the relationship of PON genes and Alzheimer's disease. 45 Fifteen studies (involving five polymorphisms) were included in the metaanalysis. The authors concluded that the "SS genotype of PON2 S311C polymorphism had a significant association with Alzheimer's disease in the studied population, and the A allele of PON1 rs705379 polymorphism was positively related to AD in the Caucasian population as well as the GG genotype decreased AD risk significantly in Caucasians." 45 The meta-analysis is limited by the quality of studies included; further robust studies are required to elucidate the role of PON in Alzheimer's disease.
Liu et al conducted a systematic review and meta-analysis of PON gene polymorphisms and ischemic stroke. 46 Twenty-eight studies were included in the meta-analysis. The R allele or RR genotype of the PON1 Q192R polymorphism was associated with an increased risk of ischemic stroke in the general population, but there was no significant association between other genetic variants of the PON gene and ischemic stroke. 46 Again, the quality of the studies included in the meta-analysis and systematic review lacks robustness, and thus global inferences cannot be made from this study.
Organophosphates are chemicals commonly used in insecticides. They are also sometimes ingested by humans either accidentally or intentionally to commit suicide. PON1 has shown activity against organophosphates, and individuals with higher levels of PON1 may be protected against the harmful effects of organophosphates. 47 However, PON1 levels are not employed routinely during the management of organophosphate poisoning, and probably will not be included in future management algorithms because they are unlikely to affect treatment. Figure 2 illustrates the potential clinical role of PON, and Table 1 lists the key teaching points.
Pharmacological interactions of PON1
A detailed review of the relationship between PON1 and pharmacological agents is beyond the scope of this review. A comprehensive review by Mahrooz 48 describes the interactions of PON1 with cardiovascular drugs, antidiabetic drugs, antibiotics, anticancer drugs, antidepressants and contraceptives. There remains a large amount of incongruence in the study findings, and the clinical relevance of PON1 remains to be further investigated. Mahrooz attributes the variability of study results to "dosage and type of drug, length of treatment, genetic variations, particularly loss-of-function polymorphisms, and the model used (cultured cells, animal studies, or human studies)". 48 As an example, we will describe the studies of PON1 and the antiplatelet drug, clopidogrel. Bouman et al 49 investigated the clinical relevance of the PON1 Q192R genotype in a population of individuals with coronary artery disease who underwent stent implantation and received clopidogrel therapy. PON1 QQ192 homozygous individuals showed a considerably higher risk of stent thrombosis than RR192 homozygous individuals, as well as lower PON1 plasma activity, lower plasma concentrations of active metabolite and lower platelet inhibition. 49 The findings of Bouman et al were contradicted by a systematic review and meta-analysis. 50

Table 1 Key teaching points
- PON1 is an HDL-associated calcium-dependent enzyme involved in decreasing oxidized LDL-cholesterol
- PON1 is a member of a multigene family and is primarily synthesized in the liver
- The PON1 gene has single-nucleotide polymorphisms which influence the enzyme level and activity
- A variety of studies have been performed to investigate the clinical relevance of PON1 in a number of diseases including cardiovascular disease, diabetes, neurologic diseases and cancer
- Preclinical and clinical data are currently insufficient and, in some cases, contradictory
- Further robust studies are required to elucidate the precise role of PON1 in human diseases
|
2018-06-29T00:30:07.949Z
|
2018-06-01T00:00:00.000
|
{
"year": 2018,
"sha1": "718dfa314409d7b2ddb726553b641fe576396d27",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=42668",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b4cc9bd76a45c923d58c9d2a059f66c90b1848a6",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
134421233
|
pes2o/s2orc
|
v3-fos-license
|
Low Enriched Uranium based Nuclear Rocket Propulsion Technology: Mars Exploration Mission
Many space agencies like NASA and SpaceX have promised to send humans to the red planet in the future. So, considering their projects of Mars colonization, nuclear rocket propulsion would be the better option. Replacing chemical rockets with nuclear rockets may reduce the mission duration and can also reduce the mass of the propellant used. In chemical rockets the propellant releases energy through combustion, but in the case of nuclear rockets the propellant, i.e. hydrogen, is heated up by a controlled fission reaction in a nuclear reactor inside the rocket engine. The specific impulse of the nuclear rocket is greater than that of the chemical rocket. This helps in providing gigantic thrust, and as a result the mission duration is decreased. The challenging aspect is maximizing the specific impulse by increasing the exhaust core temperature. The fuel is selected in such a way that the required exhaust temperature can be obtained. The (U, Zr)C-graphite fuel is selected because it has a high uranium density and a melting point equivalent to the exhaust core temperature, which is sufficient to enhance the reactivity of the fissile material and thus to increase the rocket performance. A mathematical analysis shows that the percentage of propellant mass used in a Mars mission will be less than that of chemical rockets because the specific impulse is expected to be higher with nuclear propulsion. The specific impulse obtained from the CFD analysis of the rocket nozzle is 979 sec with an exit velocity of 9604 m/s.
I. INTRODUCTION
The concept of nuclear rocket propulsion arises when we talk about long-distance missions like sending humans to Mars or to asteroids. Nuclear energy is one of the most enabling and most proposed technologies in space exploration. Basically, Nuclear Thermal Propulsion (NTP) consists of liquid hydrogen as the propellant, a nuclear reactor, and a nozzle. Unlike conventional chemical rockets, it uses a nuclear reactor that heats the propellant, which then expands through the nozzle. Recently, the focus has been on the development of engines with a target specific impulse of approximately 1000 sec. In order to achieve this value, the nuclear reactor should heat a specified flow rate of liquid hydrogen to a temperature of 2700 K. The reactor operating temperature must not exceed the melting points of the surrounding rocket components.
Conceptually, the NTP system is remarkably simple, but control of the nuclear fission reaction is somewhat challenging. This is managed with the help of control drum positioning. Research and development of Nuclear Thermal Propulsion has already been carried out by the USA and Russia under the ROVER and NERVA (Nuclear Engine for Rocket Vehicle Applications) programs. [2] In this design model of NTP, the liquid hydrogen is pumped into and heated up in the nuclear reactor with the help of a turbine pump, as shown in the figure.
II. SPECIFIC IMPULSE
Rocket propulsion is achieved by providing a thrust to the rocket, which is obtained by the ejection of the propellant mass through the nozzle. The specific impulse is considered an important parameter in rocket propulsion and is defined as the time integral of the thrust F(t) per unit weight "w" of the propellant.
The total weight of the propellant is given as w = g ∫ ṁ(t) dt, where ṁ(t) is the mass flow rate of the propellant and g is the acceleration due to gravity (9.8 m/s²). The specific impulse helps to indicate the rocket efficiency and also helps to compare different rocket engines. It also informs the sizing of the engine: if the value of the specific impulse is known, the propellant mass required to provide the required thrust can be determined.
In 1903, a Russian school teacher, Konstantin Tsiolkovsky, formulated an equation to determine the equivalent velocity change of the rocket. This equation is known as Tsiolkovsky's rocket equation: [1] ΔV = Ve ln((Mp + Mf)/Mf), where Ve is the effective exhaust velocity, Mp is the mass of the propellant used, and Mf is the final mass of the rocket. The equivalent velocity change ΔV is around 10 km/s for any rocket travelling from Earth to low Earth orbit (LEO).
Since the exhaust velocity satisfies Ve = Isp·g, the required propellant-to-final-mass ratio follows as Mp/Mf = exp(ΔV/(Isp·g)) − 1. The specific impulse of nuclear rockets is expected to be about 1000 sec, whereas it is about 500 sec for chemical rockets. The following observations are made on the relationship between the specific impulse and the ratio of propellant mass to final mass for both chemical rockets and nuclear thermal rockets.
As the specific impulse increases, the required ratio of propellant mass to final mass decreases, as illustrated by the comparison below.
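The following is a minimal sketch that computes this ratio from the rocket equation for a representative chemical (about 500 sec) and nuclear (about 1000 sec) specific impulse, assuming the roughly 10 km/s Δv quoted above for Earth to LEO; the specific values are illustrative only.

```python
# Propellant-to-final-mass ratio Mp/Mf = exp(dv / (Isp * g0)) - 1
import math

g0 = 9.8          # m/s^2
dv = 10_000.0     # m/s, Earth to LEO (approximate, as stated in the text)

for label, isp in [("chemical rocket", 500.0), ("nuclear thermal rocket", 1000.0)]:
    ratio = math.exp(dv / (isp * g0)) - 1.0
    frac = ratio / (1.0 + ratio)        # propellant fraction of initial mass
    print(f"{label}: Isp = {isp:.0f} s, Mp/Mf = {ratio:.2f}, "
          f"propellant = {100*frac:.0f}% of initial mass")
```

Under these assumptions the chemical rocket needs roughly 87% of its initial mass to be propellant while the nuclear thermal rocket needs roughly 64%, which illustrates the text's point that a higher specific impulse markedly reduces the propellant mass for the same Δv.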
III. SELECTION OF FUEL IN NUCLEAR ROCKET
The low molecular weight gas, i.e. hydrogen, is selected as the propellant. The fuel is selected in such a way that it has a high uranium density in order to overcome the degradation of fissile materials and also to protect it from various thermal attacks. Graphite-based fuels and cermet fuels are the potential fuels for a nuclear rocket propulsion engine. Also, since the maximum exhaust temperature is required, carbide-based fuels are preferable to any other fuel because of their higher melting points. [10]

Carbide | Melting point (K) | Thermal conductivity (W/m·K)
(U, Zr)C | 3350 | 10-30
(U, Zr, Nb)C | 3800 | 20-100
(U, Zr, Ta)C | 3900 | 20-100

The (U, Zr)C-graphite fuel is selected; it contains 35% carbide and has a uranium density of 0.64 g/cm^3, which is sufficient to increase the reactivity of the fissile materials and enhance the rocket performance. [9] Thus, owing to the low melting point of cermet fuels, the (U, Zr)C-graphite fuel is chosen to attain the maximum exhaust temperature. The selection of the moderator has a vital role in enabling the use of LEU fuels in rocket propulsion. The moderator should have great neutronic performance, melting point and thermal conductivity. Thus, ZrH1.8 is selected as the baseline moderator.
IV. DEVELOPMENT OF LOW ENRICHED URANIUM
Research on the development of LEU is being performed at NASA Marshall Space Flight Center under the Space Capable Cryogenic Thermal Engine (SCCTE) program. [11] Thorough analysis of LEU-NTP shows that LEU fuel can be applied in nuclear propulsion technology and that the performance of LEU fuel will be similar to that of highly enriched uranium (HEU). A series of baseline cores have been proposed to exhibit the development needed for LEU rocket propulsion systems. The two reference cores are the Space Capable Cryogenic Thermal Engine (SCCTE) core and the Superb Use of Low Enriched Uranium (SULEU) core. These cores are analyzed on the basis of the NERVA/ROVER geometry using the same materials and components.
The fuel being implemented is the primary difference between the two reference cores, as SULEU uses a (U, Zr)C-graphite composite fuel whereas SCCTE uses an enriched tungsten-184 cermet fuel. The uranium enrichment of both cores is 19.75%, and both are designed to operate with a nominal thrust of around 35 klbf. [9] The specific impulse for SULEU is considered to be 897.9 sec, whereas for SCCTE it is 894 sec, as shown in Table 2.
V. MARS EXPLORATION & RADIATION PROTECTION
Obviously, Mars seems to be a distant goal for human exploration. Sending humans to Mars is a challenging adventure, but it is not impossible. Perhaps after 2030 we will see humans on Mars because of advanced technology in rocket/spacecraft propulsion. Advancements in entry, descent, and landing (EDL) are required to land heavy payloads like human beings and electric rovers on the Mars surface. Previous technology, like the Mars Science Laboratory's sky crane, was not capable of landing payloads greater than 2 metric tons, but to land human beings on the Mars surface, payloads greater than 40 metric tons are required. To reduce the weight of payloads, a rigid aeroshell concept has to be implemented, i.e. an ellipsled entry system. [8] Astronauts will explore Mars in a pressurized rover, which allows them to move beyond the landing site. Perhaps the Lunar Electric Rover (LER) would be the better rover for Mars exploration too. The average speed of the LER is 10 km/h, and it can cover 60 km in a day. To power this rover, lithium-ion batteries as well as regenerative fuel cells are provided. [8] During the Mars exploration mission, astronauts will be exposed to harmful, highly energetic cosmic rays. These space radiations may cause cancer and other harmful effects on the crew's health. Due to this, radiation shielding has to be considered and designed. Materials containing hydrogen atoms may be considered more effective at attenuating protons and heavy ions. Thus, a lightweight shielding material, i.e. polyethylene, is being studied.
VI. DESIGN AND ANALYSIS OF NTP ROCKET NOZZLE
To examine the exhaust velocity of the rocket, a nozzle was designed which converts the thermal energy obtained in the nuclear reactor chamber into kinetic energy. In this work, a convergent-divergent (C-D) nozzle was selected and its geometry was created using ANSYS Workbench 19.1, as shown in Figure 4.
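As a rough cross-check of what the CFD analysis computes, the following is a minimal quasi-one-dimensional sketch under ideal-gas, isentropic assumptions. The chamber temperature of 2700 K is taken from the text, while the exit-to-throat area ratio, specific-heat ratio, and hydrogen gas constant are illustrative assumptions, so the result will not match the CFD value exactly.

```python
# Quasi-1D isentropic estimate of nozzle exit velocity (illustrative assumptions).
import math

gamma = 1.40          # assumed specific-heat ratio for hot hydrogen
R = 4124.0            # J/(kg*K), gas constant of H2 (8.314 / 0.002016)
T0 = 2700.0           # K, chamber (stagnation) temperature from the text
area_ratio = 100.0    # assumed exit-to-throat area ratio

def area_mach(M, g=gamma):
    """A/A* as a function of Mach number (isentropic flow)."""
    return (1.0 / M) * ((2.0 / (g + 1.0)) * (1.0 + 0.5 * (g - 1.0) * M * M)) \
           ** ((g + 1.0) / (2.0 * (g - 1.0)))

# Solve area_mach(M) = area_ratio on the supersonic branch by bisection.
lo_m, hi_m = 1.0001, 50.0
for _ in range(100):
    mid = 0.5 * (lo_m + hi_m)
    if area_mach(mid) < area_ratio:
        lo_m = mid
    else:
        hi_m = mid
M_exit = 0.5 * (lo_m + hi_m)

T_exit = T0 / (1.0 + 0.5 * (gamma - 1.0) * M_exit**2)
V_exit = M_exit * math.sqrt(gamma * R * T_exit)

print(f"exit Mach ~ {M_exit:.2f}, exit velocity ~ {V_exit:.0f} m/s, "
      f"Isp ~ {V_exit/9.8:.0f} s")
```

A full CFD solution accounts for real-gas properties, viscous losses, and the actual nozzle contour, which is why the reported exit velocity and specific impulse differ from this idealized estimate.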
VII. RESULT AND DISCUSSIONS
The pressure is maximum at the inlet and decreases continuously toward the exit, as shown in Fig. 7. The pressure at the exit is observed to be 0.3154 bar. The pressure decreases at the exit due to the occurrence of a shock wave after the throat section.
Fig 7: Pressure contour
The velocity is minimum at the inlet of the nozzle and increases continuously toward the exit. Since this is the nozzle of a nuclear rocket, its exhaust velocity is higher than that of chemical rockets. The maximum velocity obtained at the exit is found to be 9604 m/s, as shown in Fig. 8.
Fig 8: Velocity contour
Since the exit velocity from the nuclear thermal propulsion nozzle is approximately 9604 m/s, the specific impulse obtained is Isp ≈ Ve/g ≈ 9604/9.8 ≈ 979 sec.
VIII. CONCLUSION
Nuclear power has made an indisputable impact on the world for power generation. The need for a high specific impulse in rocket propulsion is analyzed and the performance of the nuclear rocket engine is studied. The higher specific impulse of the nuclear thermal rocket can reduce the mission duration compared with a chemical rocket. Low enriched uranium (LEU) fuel helps in the advancement of nuclear rocket propulsion technology. Thus, the mass of the propellant used can also be reduced by NTP technology.
In the future, if the fuel is selected in such a way as to achieve a 900-1000 sec specific impulse, then NTP would be a better option for space exploration. To reduce the risks induced by the hydrogen propellant, such as gas leakage and material degradation, some alternative propellants can also be used, such as NH3, CH4, and N2H2.
DECLARATION
The author has disclosed no conflicts of interest.
|
2019-04-27T13:12:38.084Z
|
2019-01-01T00:00:00.000
|
{
"year": 2019,
"sha1": "6b33139a4769a21e4032b5d3ccf195770b6fc334",
"oa_license": null,
"oa_url": "https://doi.org/10.21276/ijre.2019.6.1.4",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "c4d7193094095229b24ca71acf973626622045be",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
252503189
|
pes2o/s2orc
|
v3-fos-license
|
A Security Policy Protocol for Detection and Prevention of Internet Control Message Protocol Attacks in Software Defined Networks
Owing to the latest advancements in networking devices and functionalities, there is a need to build future intelligent networks that provide intellectualization, activation, and customization. Software-defined networks (SDN) are one of the latest and most trusted technologies, providing a method of network management that enables network virtualization. Although traditional networks still have a strong presence in the industry, software-defined networks have begun to replace them at faster rates. As network technologies continue to emerge, SDN will be implemented at higher rates in the upcoming years in all fields. Although SDN technology removes the complexity of tying the control and data planes together as in traditional networks, certain aspects such as security, controllability, and economy of network resources remain vulnerable. Among these aspects, security is one of the main concerns that must be viewed seriously as far as the applications of SDN are concerned. This paper presents the most recent security issues in the SDN environment, followed by preventive mechanisms. This study focuses on Internet control message protocol (ICMP) attacks in SDN networks. It proposes a security policy protocol (SPP) to detect attacks that target devices such as switches and the SDN controller in SDN networks. The mechanism is based on ICMP attacks, which are the main source of flooding attacks in SDN networks. The proposed model focuses on two aspects: security policy process verification and client authentication verification. Experimental results show that the proposed model can effectively defend against flooding attacks in SDN network environments.
Introduction
The latest advancements in software-defined networks (SDN) have provided new mechanisms for simplified network operations [1]. This approach provides a simple abstraction mechanism for network operators by removing the complexity of the network topology. The separation of hardware and software components provides a flexible way to design and program a network [2]. The complexity of switches in the network is further reduced using the concept of dynamic and adaptive network management. The control system and monitoring are part of the controller, and the components of the data plane are organized according to the instructions of the controller.
The SDN controller collects all information on the network topology from components such as host location, link status, and switch information. The collected information is used by the controller for further processing [3]. Developers can modify the SDN controller task by including their inventions, strategies, and applications on top of the controller. The OpenFlow protocol acts as an interface between the switch and the controller for data transmission [4]. The main responsibility of the controller is to define the traffic flow between components in the network. Figure 1 shows the architecture of SDN. The application layer of a software-defined network comprises several network tools, devices, and a variety of commercial applications that interact with the control layer through an SDN controller [5]. The infrastructure layer is the foundation layer of the SDN architecture. The main responsibility of this layer is to forward network traffic and collect network statistics, usage, and network topology. This layer is responsible for handling packets constructed in the direction of the SDN controller [6]. It contains virtual and physical network equipment such as routers, switches, and other network devices that are used to forward network traffic. The control layer acts as the interface between the application and infrastructure layers. The northbound interface enables communication between the applications and the SDN controller, while the southbound interface is responsible for communication between the SDN controller and the infrastructure layer. Instructions from the application layer are processed by the controller and delivered to the network components through the southbound interface [7].
Traditional networks consist of static protocols for network equipment, such as routers and switches, where it is not possible to implement custom network protocols. Therefore, network administrators find it difficult to define custom routing protocols [8]. The SDN controller eliminates routing issues in the SDN network with the use of logical connections. This technology also opens a gateway for clients to track environmental changes. Once clients in the network have complete controller information, they can execute a malicious attack on the controller [9]. This is a huge challenge for administrators in preserving network information in SDN networks. The first goal of this research was to distinguish malicious attacks in SDN networks. The second goal was to provide a security policy protocol to avoid unauthorized attacks on the network. The final goal was to evaluate the proposed model in different scenarios with several parameters to identify the suitability of the model for deployment in an SDN environment.
The remainder of this paper is organized as follows. Section 2 presents recent work on software-defined networks, followed by SDN security in Section 3. Section 4 details the Internet control message protocol, followed by ICMP attacks in Section 5. Section 6 presents the proposed methodology, followed by the experimental setup in Section 7. Section 8 presents the experiments and results. Finally, Section 9 concludes the paper.
The contributions of the paper include, but are not limited to, the following. The study highlights ICMP protocol-based attacks and their impact on SDN environments. It presents a new security policy protocol (SPP) and client authentication model to avoid unauthorized attacks on SDN networks. The proposed solution proved to be accurate in tackling potential attacks in SDN, and its performance was evaluated using parameters such as CPU utilization, channel bandwidth, packet delivery ratio, response time, and the number of flow requests. The proposed model has the capability to detect different attacks, including ping flood and Smurf attacks, that often originate from undefended legacy equipment. It adds to the knowledge relating to the security of the Internet control message protocol and indeed cybersecurity in general.
Literature Review
Security has been identified as an unnerving task in communication networks because of the nature of underlying network complexities and parameter-based security solutions that are difficult to manage [10]. The authors of ref. [11] proposed an approach to divert traffic from an attacked device and remove unwanted instructions from the attacked switch. However, there is no consideration of attacks on controller resources, which is an important security aspect of SDN.
Wang et al. [12] stated that sniffing a network is possible without any significant impact on the SDN controller; however, this work is not suitable for detecting or preventing slow attacks on the controller. The authors of ref. [13] presented an approach in which the controller checks the authentication of every incoming packet and installs certain instructions to prevent the intruder from using the underlying network resources. These methods require more instructions to be stored in the affected device, which makes the controller more vulnerable and affects performance.
The authors of [14] addressed several security threats in the control plane of an SDN. This study also proposed an enhanced security framework based on attribute-based encryption. Tree-structure-based encryption was used to achieve a fine-grained access control mechanism for SDN. Liang et al. [15] presented a security architecture for SDN-based 5G networks. They focus mainly on mobile networks by implementing network and security domains with a low degree of coupling, which makes it easy to deploy services or equipment without disturbing normal functionality.
The authors of [16] presented a pictorial model for attack detection using a graph theory approach. An attack path prediction model was developed to identify critical components and devices in an SDN network. They have mainly focused on reconnaissance, topology poisoning, and forensic attacks. Vijay et al. [17] proposed a hybrid architecture with a security management application on the SDN controller to detect the attacking device before a request is sent from the attacker host to the controller. They also addressed the dynamic management of security policies for data planes for flooding and injection attacks. This study focused only on a single SDN domain where there will be a limited number of attack types.
The authors of ref. [18] presented an approach based on the extension of the controller to deal with only topology poisoning attacks using fingerprint methods of the device for authentication. The main issue with this approach is that all fingerprints should be maintained only by the controller, which will create more burdens for the controller in complex environments. The authors of [19] presented a policy-based security architecture for distributed SDN network platforms. They implemented an access policy rule to validate the MAC address and the original IP address of the end devices such as switches to drop the packet when the address is spoofed. They primarily focused on man-in-middle and spoofing attacks.
Hau et al. [20] addressed the integrity issues of the link layer discovery protocol, which is primarily used in network topology discovery. They proposed a detection algorithm for worm-hole attacks based on path latencies in SDN environments using three topologies: Nsfcnet, Shentel, and Neol. They also introduced a gravity model to generate network traffic by using real data. The authors of [21] addressed DDoS and IP spoofing attacks in SDN environments and proposed a variable security management solution. They developed an abstract grammar to implement security policies with compilers and employed an optimal algorithm to place the rules across the switches to avoid unwanted traffic.
Most recent studies focused on either diverting network traffic to remove unwanted instructions from attacked devices or using encryption methods to fine-grain the access control of DCN environments. From the literature, it was identified that using security policy mechanisms is an ideal solution for the detection of spoofing attacks in SDN networks and to prevent them from taking full control over network environments. This study focuses on ICMP attacks, namely man-in-the-middle attacks and flooding attacks, which are critical security attacks in SDN environments.
Software Defined Network Security
Software-defined networks provide capable solutions for handling the complications of traditional networks in the modern era. Although these models offer more advantages for the concerned organizations, attackers can execute different forms of attacks in SDN environments [22]. The controller is a vital component used by the attacker to execute security attacks. Malicious traffic can be generated to attack the controller and control plane communication. Once this is completed, the clients that are connected to the switches can execute the attacks. Flooding attacks are critical attacks that can bring down an entire network.
These attacks target flooding the control plane first and then the data plane and SDN controller bandwidth. Because the controller acts as the intelligence agent of the entire network that controls a large number of devices and applications, attacks block the entire traffic and fill up the total memory of the SDN switch [23]. Once the total memory is full, it is not possible to accept any new incoming requests or configure the rules from the SDN controller, which leads to packet dropping. The main reason behind this is that the rate of inward flows is very high because of malicious requests, which fills the buffer memory and leads to higher bandwidth consumption.
Attackers use different approaches to execute attacks, including network-based approaches such as ICMP, UDP, or TCP packets, to exploit the memory structure, algorithms, or authentication protocols [24]. Figure 2 presents the attack scenario in which the ICMP protocol is used to flood the controller bandwidth from host D. Switch S2 and the controller are the victims where, after a certain time interval, the entire traffic will be congested and new requests will be discarded. These types of attacks not only affect specific hosts but also all the devices in SDN environments, as shown in Figure 3. Figure 4 clearly shows the utilization of available resources by ICMP attacks at specific time intervals, and Figure 5 shows the effects of ICMP on total traffic. This clearly shows that, in a short period, ICMP attacks use 90% of the available bandwidth in the entire network.
The attacker continuously sends ICMP request messages to the destination machine over the network. This keeps the host busy replying to the ICMP request messages, which leads to unwanted flooding in the network and degrades the network performance. The host machine with IP 172.16.23.142 is attacked, as shown in Figure 3. The host is then used to send unwanted traffic in the form of requests to switch S2 in the SDN network. Within a couple of minutes, the host sends n requests to the switch and creates unwanted traffic over the entire network. Figure 6 shows the situation of the attack. In this scenario, the other protocols are also affected by the attack; the HTTP protocol was used unnecessarily in this attack scenario to send and receive resources. Figure 7 shows the protocol distribution for the attack scenario.
Internet Control Message Protocol
All IP-enabled end systems and intermediate devices such as routers frequently use the ICMP protocol for troubleshooting the network [25]. The ICMP protocol is used to report issues in the network or in intermediate devices such as routers, hubs, and switches. Some of the important features of ICMP are reporting when end systems (ES) do not respond to a request, congestion in the network, IP header issues, and other network-related issues. The protocol is frequently used by network administrators to track the working functionality of end systems (ES). It is also used to check whether a router correctly directs packets to their intended destination. Its communications are not transferred directly to the data link layer; although it belongs to the network layer, messages are encoded into IP datagrams before being sent to the lowest layer. The protocol field has a value of one, indicating that the IP data are of the ICMP message category. Consequently, ICMP is primarily a mechanism for any IP-enabled device to deliver error messages to another IP machine in the network. ICMP has several message formats that allow the transfer of different types of data. For example, in response to a message delivered by Host0 to Host1 and forwarded by router R0, router R1 generates ICMP packets: when the MTU of the link between router R0 and router R1 is smaller than the size of the IP packet, and the packet has the do-not-fragment (DF) bit set in the IP header, an ICMP message is delivered to Host0.
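To illustrate the fragmentation-needed case described above, the following minimal sketch uses the Scapy library (an assumption; the paper does not name any tooling) to send a packet with the DF bit set and observe the ICMP "destination unreachable, fragmentation needed" reply (type 3, code 4) that a router returns when the packet exceeds a link MTU. The destination address is a placeholder, not taken from the paper.

from scapy.all import IP, ICMP, Raw, sr1

# Hypothetical destination; replace with a reachable host when testing.
probe = IP(dst="192.0.2.10", flags="DF") / ICMP() / Raw(b"X" * 1400)
reply = sr1(probe, timeout=2, verbose=0)

# A router on a path with a smaller MTU answers with ICMP type 3, code 4.
if reply is not None and reply.haslayer(ICMP):
    print("ICMP type", reply[ICMP].type, "code", reply[ICMP].code)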
A. ICMP MESSAGES
One of the most significant protocols of the TCP/IP protocol suite is the Internet control message protocol (ICMP). It is mostly used by the underlying platform to transmit error messages to devices in the network. ICMP [26] is a critical element required for IP to function. It differs from TCP and UDP in that it is rarely utilized for data transmission between end systems. User network programs or devices rarely use this protocol, except for the ping and traceroute commands. Unannounced network faults, such as the inaccessibility of a host or of a network portion owing to a malfunction, are among the issues it reports.
ICMP-based tools send a TCP or UDP packet to a specified port number in the network without any destination information. The router in the network buffers the packet when there are more packets to be transmitted within a specific time interval, which assists in the troubleshooting process. The echo function in ICMP simply sends a message back and forth between two hosts [27]. The ping command is a popular network administration tool for determining the availability of a device in the network; it sends out a series of packets to calculate the loss percentage and the average round-trip time. Timeouts are also announced through ICMP: when the TTL field of an IP packet reaches zero, the router or any intermediate device discards the packet from the network and sends an ICMP message to the source to denote the packet delivery issue. Traceroute is a command that uses small-TTL packets to map network paths while monitoring the resulting ICMP timeout notifications.
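The TTL-expiry behaviour underlying traceroute can be sketched as follows. This is a hedged illustration using Scapy (again an assumption, not a tool named in the paper); the target address and hop limit are placeholders. Each probe with an insufficient TTL elicits an ICMP "time exceeded" (type 11) reply from the router that dropped it.

from scapy.all import IP, ICMP, sr1

# Minimal traceroute-style probe: send echo requests with increasing TTL
# and read the ICMP "time exceeded" replies returned by intermediate routers.
target = "192.0.2.10"   # placeholder destination
for ttl in range(1, 11):
    reply = sr1(IP(dst=target, ttl=ttl) / ICMP(), timeout=2, verbose=0)
    if reply is None:
        print(ttl, "*")                        # no answer within the timeout
    elif reply.haslayer(ICMP) and reply[ICMP].type == 11:
        print(ttl, reply.src)                  # router that expired the TTL
    else:
        print(ttl, reply.src, "(destination reached)")
        break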
B. ICMP MESSAGE TYPES
Network errors are reported to the host using ICMP messages. Faults may be in the network, in a router, or in any other intermediate device. The source can quickly determine the cause of the errors by observing these types of messages. Query and error reporting are the two types of ICMP messages that can be used to troubleshoot network issues. When intermediate devices such as a host or router process an IP packet, these error reporting schemes can report the errors encountered. Destination unreachable, source quench, time exceeded, parameter problem, and redirection are some of the error reporting messages provided by the ICMP protocol to host devices or routers [28].
A pair of query messages will assist intermediate devices, such as hosts, routers, or network managers in obtaining error-related information from a host or router in the network [29][30][31][32][33][34]. Devices in the network can locate any router and collect router information for further processing. Even routers can assist devices (hosts) with redirection messages using updated information about the router and routing table. Echo messages, timestamps, router advertisements, and solicitation are the message types provided by the query message of the ICMP protocol [35][36][37][38]. The following are some key points to remember regarding the ICMP error messages: (1) ICMP Messages will not be generated for the messages that contain error messages of ICMP type. (2) There is no provision for using the ICMP error messages for fragmented datagram.
(3) ICMP messages will not be generated for messages that contain a multicast address.
C. ICMP MESSAGE FORMAT
An ICMP message's structure can be conceived as having a common component and a unique part [29]. The common part of all ICMP messages consists of three fields of the same size and meaning (although the values in the fields vary depending on the ICMP message type). Each message form has its own set of fields in the unique portion. Figure 8 shows the ICMP packet format.
Network troubleshooting involves identifying and resolving networking issues to maintain the best performance of a network. The primary role of a network administrator is to maintain network connectivity for all devices. To assist administrators, ICMP plays a vital role in tracking the status of connections and improving their performance. First, ICMP traffic should be captured on the network in order to troubleshoot it. A network analyzer can be used to record all TCP/IP traffic while filtering the ICMP traffic. After configuring the network analyzer to filter the ICMP traffic, we examined the ICMP traffic that passes through the network. Although some redirect messages are common (especially during morning start-up hours), if one device is frequently redirected before talking to other network devices, then it is necessary to designate a different default gateway.
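The capture-and-filter step described above can be approximated with a short script in place of a dedicated network analyzer. The sketch below uses Scapy's sniff function with a BPF filter as an assumed stand-in; the packet count and the callback are illustrative choices, not part of the paper's setup.

from scapy.all import sniff, ICMP

# Capture only ICMP traffic (BPF filter) and summarise each packet,
# mirroring the "filter ICMP in the analyzer" step described above.
def show(pkt):
    if pkt.haslayer(ICMP):
        print(pkt[ICMP].type, pkt[ICMP].code, pkt.summary())

sniff(filter="icmp", prn=show, store=False, count=100)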
A. FLOODING ATTACKS
Services are affected by flooding attacks. Generally, denial of service (DoS) and distributed denial of service (DDoS) are common flooding attacks on SDN networks [30]. With this type of attack, server resources are exposed to every device in the network. It also slows down services and degrades the performance of servers and controllers, including memory, forwarding rates, and flow control in the SDN environment.
B. MAN-IN-MIDDLE ATTACKS
Most SDN networks permit third-party applications to be installed in a network. Thus, there is a possibility of active and passive attacks that directly route applications between the client and gateway. These applications provide false network information to the other devices in the network and initiate various attacks.
C. REPLICATION ATTACKS
This is considered to be one of the most dangerous attacks in an SDN network. A node is replicated in the network to manipulate a particular segment. Once replicated, the node gains complete control over the network, including devices such as servers, switches, and controllers. It is very difficult to detect replicated nodes because of the complexity of the network.
D. NETWORK MANIPULATION ATTACK
Manipulation attacks provide false information in a network and then execute the attack. Such attacks occur in the control plane and gain access to all the devices in the network. Detecting manipulation attacks in SDN environments is very difficult owing to their complex nature.
E. TRAFFIC DIVERSION ATTACKS
These attacks are executed in the components of SDN networks, which redirect the traffic flow from a trusted path to a malicious path. Once this is done, the attacker can gain complete access to all components in the SDN network. After a certain period, the total services will be blocked.
Flooding attacks can be mitigated in the SDN network using security middleboxes such as IDS, anti-malware, and firewalls. These approaches should be integrated into virtualized environments to prevent security attacks. Several malware shields are available in the market to cope with the security challenges of SDN networks; however, the performance of such shields is very poor under massive attacks [30]. The use of machine learning classifiers also requires more computing resources, resulting in overhead for all network devices.
Proposed Methodology
The control plane waits until a stable topology is present in the network. Then, the control plane creates the forwarding table used to send data from the source to the destination port via the forwarding plane. The client sends a request toward the controller through the switch for certain services. Once the switch receives the request, it performs the following: (a) the SDN switch sends the request as a packet-in message to the SDN controller, (b) drops the packet in case of invalid authentication, or (c) provides the service based on previous records. The main aim of this research was to prevent malicious attacks from end hosts connected to the SDN. It was also identified that if the nearest source of the attack is detected at the initial stage, it is possible to reduce the traffic to the controller and minimize the wastage of network bandwidth and complex computations. The proposed architecture consists of a security policy protocol (SPP) component that checks all incoming packets before they reach the controller. This component is placed close to the controller, between the application and data planes, so that it is possible to implement security policies in the data plane. The proposed security component comprises a database that stores all authenticated host details. The database consists of complete details of authorized clients that have previously accessed resources in the SDN network. If a host requests a service for the first time, the SPP checks for the real IP address from the request and verifies it; it then either adds an entry to the database or discards the packet. The data-path ID uniquely identifies devices in an SDN environment. SPP operates at two levels: (a) the security policy process and (b) the client filtering process.
A. SECURITY POLICY PROCESS
The security policy process plays a vital role in providing security to all devices connected to an SDN network. In this study, 1 SDN controller, 16 intermediate switches, and 60 host machines were used in the initial stage. Once the security policies are formulated, it is possible to add multiple controllers and devices to the network. When a host requests a service from the SDN controller for the first time, the intermediate switch sends a packet-in message to the SPP, which in turn checks the packet header field information against the database, as shown in Algorithm 1. The host request is processed by the controller only after the authentication process, and only authorized hosts are permitted to use the network resources; the SDN edge switch drops unauthorized packets from the network. Second, the SPP runs the client filtering process when a client initiates a connection with the SDN controller. For instance, when the client initiates a connection to the SDN controller through switch S1, S1 queries the SPP to identify the client. If the client passes the authentication by the SPP, it is allowed to connect and to perform other operations in the network; otherwise, the request is dropped by the edge switch (S1). Table 1 shows the structure of the database using only the sample devices used in this experiment.
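A minimal sketch of what the packet-in verification described above could look like is given below. Algorithm 1 and Table 1 are only outlined in the paper, so the database layout, field names, the example entry, and the verify_real_ip helper are assumptions introduced for illustration, not the authors' implementation.

# Hypothetical SPP database keyed by (datapath ID, host IP);
# field names are illustrative, not taken from the paper's Table 1.
AUTHORIZED_HOSTS = {
    ("dpid-1", "10.0.0.5"): {"mac": "00:00:00:00:00:05"},
}

def verify_real_ip(src_ip, src_mac):
    # Placeholder for the address-verification step; reject by default here.
    return False

def handle_packet_in(dpid, src_ip, src_mac):
    """Return True to forward the request to the controller, False to drop."""
    entry = AUTHORIZED_HOSTS.get((dpid, src_ip))
    if entry is None:
        # First-time host: verify that the source IP is genuine before either
        # registering it in the database or discarding the packet.
        if verify_real_ip(src_ip, src_mac):
            AUTHORIZED_HOSTS[(dpid, src_ip)] = {"mac": src_mac}
            return True
        return False
    # Known host: drop the packet if the recorded MAC does not match (spoofing).
    return entry["mac"] == src_mac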
B. CLIENT FILTERING MECHANISM
Another important issue related to SDN is flooding attacks, as discussed in the previous section. ICMP attacks are considered dangerous malicious attacks that block machines or other network resources from end users. Most of the hosts/devices affected by ICMP attacks use high memory, CPU, and bandwidth, which slows down the entire network and its components. To overcome these types of attacks, the SPP uses a filtering algorithm to filter unwanted traffic in the SDN. First, the SPP monitors the total traffic in the SDN network. If unwanted traffic is found, for example continuous ICMP flooding messages, filtering is executed to control the flow of ICMP requests in the network. Two different criteria were used to filter the ICMP packets: (i) if the number of ICMP packets exceeds 60, the filtering process is applied; (ii) if the size of the packet exceeds 78, the filtering process is applied, as shown in Algorithm 2. Two-way filtering is an ideal methodology for SDN. Figure 9 shows how the filtering process works in the simulated SDN networks. In this experiment, both ICMP and IGMP packets were filtered by the SPP; there is no special condition check for the IGMP packets. The filtering algorithm is executed only if the switch receives the maximum threshold value (n), where n = 60; if the value is less than 60, the ICMP packets are allowed in the network. An ICMP attack continuously generates a large number of flows with a small number of packets over a short period; therefore, based on flow analysis, it is possible to determine the severity of ICMP attacks. A formula was used to compute the relevant percentages: the size of a normal ICMP packet is much smaller than the Ethernet frame size in the network, and when ICMP packets are affected their size increases, so it is possible to determine the degree of attack at the end switch based on the percentage of small-byte ICMP packets (Equation (1)). In addition, most ICMP flooding packets are invalid; thus, the corresponding flow rules issued by the SDN controller will not last long before the timeout, and the percentage of such flows increases sharply over a short period, which can be determined using a second equation.
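The two filtering criteria above can be sketched as follows. This is a hedged illustration rather than the paper's Algorithm 2: the per-source counting, the interpretation of the 78 threshold as a byte length, and the window-reset function are assumptions introduced for clarity.

from collections import defaultdict

MAX_ICMP_PER_WINDOW = 60   # threshold n from the text
MAX_ICMP_SIZE = 78         # packet-size threshold from the text (assumed bytes)

icmp_counts = defaultdict(int)   # per-source ICMP packet counts in the current window

def filter_icmp(src_ip, packet_len):
    """Return 'drop' when either filtering criterion is met, else 'allow'."""
    icmp_counts[src_ip] += 1
    if icmp_counts[src_ip] > MAX_ICMP_PER_WINDOW:
        return "drop"            # criterion (i): too many ICMP packets
    if packet_len > MAX_ICMP_SIZE:
        return "drop"            # criterion (ii): oversized ICMP packet
    return "allow"

def reset_window():
    icmp_counts.clear()          # called at the end of each monitoring window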
Experimental Setup
The implementation was tested using the Mininet simulator running on a virtual machine with a Windows operating system. Hyper-V virtualization technology, which enables virtualized computer systems on the Windows platform, was used. Analytical modelling, measurement, and evaluation were identified as the three main approaches commonly used to evaluate communication network systems. The results of the evaluation were used to set the network performance indices given the traffic workload and network configuration. Sixty Core i7 CPUs with 3.40 GHz, 1 IBM Intel server, 8 GB RAM, and 64-bit Windows operating systems were used for the evaluation. To evaluate the proposed work, two existing models were used: the RYU SDN Framework and detection and mitigating DoS (DDS) [30][31][32][33][38][39][40][41][42][43][44]. Figure 10 shows the simulation topology.
Before analyzing the impact of attacks in an SDN environment, threshold values were set for network components with different parameters. After the attacks were evaluated, these values were compared to obtain accurate results. The threshold parameters were CPU utilization, memory utilization, storage space, and throughput [44][45][46][47]. Table 2 presents the parameter values recorded before the attack scenario.
Experiments and Results
The proposed model was evaluated using five parameters: CPU utilization, channel bandwidth, packet delivery ratio, response time, and the number of flow requests. CPU utilization is an important parameter for evaluating system performance in SDN networks, and it may vary depending on the deployment of additional security protocols, particularly in SDN. This section presents the parameters used to evaluate the proposed scheme.
A. CPU UTILIZATION
The total CPU processing power is consumed by ICMP attacks during the attack period. There is a continuous installation of unwanted requests from the host machine in SDN networks during the attack period. Therefore, in this case, normal traffic is affected and there is less provision for trusted requests. Figure 11 shows a comparison of CPU utilization for the proposed model with RYU and DDS, and Table 3 presents the CPU utilization based on two sets of evaluation.
Figure 11. CPU utilization.
B. PACKET DELIVERY RATIO
The packet delivery ratio is the ratio of the number of packets received by the destination machine to the total number of packets sent by the source machine. The packet loss ratio also plays a vital role in evaluating the packet delivery ratio. In our experiment, TCP packets were sent from the source host to the destination host, and a counter was used to store the number of successful and unsuccessful packets. Table 4 presents the packet delivery ratio based on two sets of evaluation. The proposed scheme achieved a 98% delivery ratio, compared with RYU (87.25%) and DDS (73.25%), as shown in Figure 12. The delivery ratio is calculated as the number of packets received divided by the number of packets sent, expressed as a percentage.
C. CONTROL CHANNEL BANDWIDTH
When host requests pass through a control channel during ICMP attacks, the channel becomes unavailable owing to the lack of bandwidth. The proposed SPP protocol reduces the load by blocking malicious traffic when the threshold value reaches ≥ 60, as shown in the previous section (Figure 9). During the attack period, the bandwidth increased in both baseline models, whereas in the SPP model the bandwidth was constant. The reason for this is the blocking of malicious traffic in SDN. Table 5 presents the units for channel bandwidth. The proposed scheme achieved constant bandwidth, as shown in Figure 13.
D. FLOW CONTROL
Flow requests are a vital component of SDN traffic. Because of DoS attacks, more flow rules will be installed by the end switch. Most attacks are executed using flow request features to overload the SDN controller. The proposed scheme blocks all malicious attacks before they reach the controller; this is why the SPP protocol is installed very close to the SDN controller. Figure 14 indicates that SPP packet-in messages are much fewer compared to the other two models during the attack, and it also clearly shows that there are more unwanted packet-in requests in RYU during the attack. The average number of packet-in messages is below 1000 messages per minute in the SPP model, as shown in Figure 14, proving its better suitability to SDN environments than other approaches [48][49][50][51].
E. RESPONSE TIME
The response time increases during DoS attacks owing to fake requests on the controller, which causes a delay in responses. The proposed SPP overcomes this issue by blocking the unwanted traffic. Figure 15 shows that the average response time of SPP is 5.23 milliseconds, compared to the other two models with 6.03 and 7.24 milliseconds, respectively.
Conclusions
When network technologies emerge at a steady rate, SDN will be implemented at higher rates in the upcoming years in all fields. Although SDN technology removes the complexity of tying the control and data planes together over traditional networks, it makes certain aspects, such as security, controllability, and the economy of network resources, vulnerable. Among these aspects, security is one of the main concerns to be viewed seriously as far as the applications of SDN are concerned. This study addressed recent ICMP protocol-based attacks and their impact on SDN environments. This work proposes a security policy protocol (SPP) and client authentication model to avoid unauthorized attacks on SDN networks. The proposed model was evaluated using parameters such as CPU utilization, channel bandwidth, packet delivery ratio, response time, and the number of flow requests. The final experimental results showed that the proposed model performs with high efficiency and minimum overhead, and that it can effectively defend against flooding attacks in SDN network environments. The proposed SPP protocol achieved 92% accuracy in ICMP detection compared with traditional approaches.
Working with Lesbian-Headed Families : What Social Workers Need to Know
More gay men and lesbian women are choosing parenthood. One common challenge facing lesbian-headed families is how to navigate interactions with societies that are largely homophobic, heterocentric, or unaware of how to embrace nontraditional families. Systems may struggle to adjust services to meet the needs of modern family structures, including families led by lesbian women. The following are three areas of intervention (knowledge, creating affirmative space, and ways to incorporate inclusive language), informed by current literature, that allow social workers to create successful working relationships with members of lesbian-headed families.
family vary from household to household, but there are a few key themes that social workers must be aware of in order to best support their lesbian clients.
Within lesbian communities, especially lesbian feminist communities, the ideas of lesbian and mother have been mutually exclusive categories. Some lesbian scholars believe that lesbian women do not always recognize the political implications of lesbian motherhood (Corley & Pollack, 1997), and this sentiment can be alienating for some lesbian mothers (Lewin, 1994). Lesbian women encounter oppression due to their sexual orientation from every aspect of their public and private lives. This oppression is intensified when lesbian women make the decision to have children, because they make this choice in a society that frequently and boisterously protests the lesbian-headed family (Short et al., 2007). Lewin (1994) followed 135 lesbian women who were raising children for five years and has offered much insight into the plight of mothering while gay. In addition to examining the experience of motherhood, she presents a dichotomous range of lesbian women's perspectives on paternity and paternal involvement. She reported that lesbian women all seemed to consider the role of "father" in their children's lives, but the responses in how to define "father" and to include, or exclude, that person varied widely. For instance, one woman described the void created by lack of male involvement as strictly financial and saw government assistance as sufficient to fulfill that role (Lewin, 1994). Some women felt that raising their children without an active father figure was an advantage. Other women, however, saw filling the "father" role as necessary and felt they were responsible to find diverse male figures, such as grandparents, brothers, or male friends, to provide a positive male role model in their children's lives. Regardless of a lesbian woman's opinion of the place of a traditionally defined father figure, Western culture places value on paternal involvement and considers it to be essential for healthy child development. However, the construct and culture of fatherhood is fluid and dependent upon societal, historical, and economic contexts (Goldberg & Allen, 2007). The fluid definition of fatherhood provides lesbian women opportunities to redefine traditionally ascribed "father" roles into gender-neutral paradigms that fit within their families.
Lesbian mothers have the opportunity to redefine other family roles as well. Social workers must keep in mind that lesbian-headed households may not be strongly connected to their families of origin, depending on whether or not their biological family is aware of their kin's sexuality or their reaction to the coming out process. Lesbians often rely less on family of origin and more on families of choice (Erwin, 2007). These selected individuals represent a safe community that understands and supports them (Erwin, 2007). Social workers must remember to validate and include, if necessary, families of choice when working with lesbian mothers. When working with couples of color or mixed race couples this is doubly important, since extended family and friends are often included in family trees as a cultural standard (Erwin, 2007).
Lesbian mothers have self-reported a variety of strategies and resources to ensure the wellbeing of their families (Short et al., 2007). Women cited that developing rich social networks and intentionally seeking relationships with people from diverse family backgrounds were important methods of creating a strong and unique family identity (Short et al., 2007). This research is valuable in that these coping strategies came from lesbian women themselves, rather than counselors or researchers attempting to claim expertise in lesbian women's experiences. Social service, legal, and political systems in the U.S. are struggling to find ways to meet the needs of increasingly varied models of "family" that are beginning to challenge traditional stereotypes and traditional methods of service provision. As the idea of family expands and shifts to meet the needs of modern families, social workers may be called on to provide direct services to lesbian-headed families. The following are several interventions, informed by current literature, which will allow social workers to create successful working relationships with lesbian-headed families.
Interventions
Know the Facts
Common cultural LGBT myths. While American society has succeeded in dispelling many myths about gay men and lesbian women, several harmful and invalid stereotypes persist, perpetuating homophobic and heterosexist attitudes, which can negatively impact delivery of effective and competent services. Social workers are not unaffected by homophobia (Black, Oles, & Moore, 1998; Messinger & Topal, 1997). Further, the NASW Code of Ethics (1999) calls for social workers to continually strive to improve their knowledge and practice. Some of the harmful myths that exist in our culture include assuming that all LGBT individuals want to be "out" to society, assuming that lesbians dislike or even hate men, assuming that a list of all LGBT people within a community exists or that all gay people know or want to know each other, assuming that identifying as LGBT is not compatible with religion or spirituality, assuming that LGBT individuals are liberal or democratic, assuming that LGBT individuals do not want to be married or have children, and assuming that gender norms are derived entirely from nature rather than society. A particularly harmful stereotype that has prevailed in our society is an association between sexual orientation and child sexual abuse. This list, though long, is by no means exhaustive. The resulting damage from these assumptions and societal myths to individuals and families can be devastating. Thus, in an effort to dispel remaining misinformation, the following is a brief overview of empirically-based information about lesbian and gay-headed families that contradicts some of these cultural myths that permeate our society.
Despite research to the contrary, debate continues about how being raised by gay or lesbian parents will affect a child's development and whether or not children of gay or lesbian parents are more likely to be sexually abused (Erwin, 2007). However, research suggests that children raised by gay or lesbian parents may have developmental advantages over children raised by heterosexual parents (Goldberg, Smith, & Perry-Jenkins, 2012; Mallon, 2011; Patterson, 2000; Stacey & Biblarz, 2001). This comparison to heterosexual families is also a form of oppression and heterosexism, as it defines heterosexual families as the norm to which other families are compared (Erwin, 2007; Pollack, 1987). Pollack (1987) discusses the real danger of assuming that lesbian mothers are just like other mothers, explaining that doing so "thickens the veil of invisibility" that surrounds lesbian women. Furthermore, available research focusing on lesbian families alone consists of lesbian-headed families of well-educated, white, middle class background, with children from previous marriages. This limiting image further prevents a clear and accurate portrayal of lesbian women, their children and their experiences. This cycle of oppression can affect lesbian women's identities, as sexual orientation is defined and perceived through diverse political, cultural, and ethnic lenses. Internalizing homophobia and heterosexism can affect lesbian women's psychological health and parenting ability (Erwin, 2007). Continuing to view the family through a heterosexual lens will serve to further the oppression experienced by lesbian women and anyone existing outside of a nuclear, heterosexual family context.
Many lesbian-headed families, which can be created through adoption, donor insemination, or mixed families with children from prior relationships, have been found to display higher levels of equality between partners in regards to economic contribution as well as performing work in the home such as childcare and home and property maintenance, and they display advanced parenting skills (Goldberg et al., 2012). Children of lesbian-headed families demonstrate higher levels of attachment when compared to children of heterosexual-headed families (Goldberg et al., 2012; Mallon, 2011; Patterson, 2000; Stacey & Biblarz, 2001). Despite the cultural myths associating lesbians with a lack of desire and adequacy for partnership or motherhood, the trends of increasing lesbian-headed families and the positive outcomes of their children suggest that these women are desiring and fully capable of creating legitimate and healthy families. One of the most crucial understandings to have when working with LGBT families is to ask about their personal experiences and not make assumptions regarding cultural myths on their individual lives.
Protections and discrimination. Societal oppression exists through the abovementioned myths; however, laws throughout the United States support legal discrimination. Many in our society believe that law protects freedom from oppression, but this is often not the case for individuals who identify as LGBT. While civil rights have expanded to many historically disenfranchised groups, LGBT individuals have often been left behind, though things are constantly in flux in different states or cities within states. Various laws regarding bullying, employment protections, fair and equal access to housing, health care for pregnancy planning and partner coverage, marriage equality, availability to petition for second-parent adoption, child custody, donor insemination, and other issues that influence the lives of LGBT individuals, as well as their families, are often not inclusive of LGBT individuals. This lack of legitimized recognition can cause an increase in traumatic experiences and anxiety that would not occur with individuals who do not identify as LGBT, and the large disparity from location to location creates additional inequality of experiences and quality of life (Knauer, 2012). Even the increased political dialogue surrounding elections can increase negative psychological experiences including anxiety, depression, and posttraumatic stress disorder in LGBT individuals (Russel, Bohan, McCarroll, & Smith, 2011). Currently there is a lack of federal oversight, which leaves civil rights decisions up to state and local governing bodies. Thus it is crucial for practitioners to be educated regarding local and national laws as well as the personal experiences of their clients regarding geographical location and legal discrimination.
Create Affirmative Space
Rapport-building begins when the client enters the agency, or often prior to physical introduction to the agency when the client receives paperwork to complete. Experience has shown that this introductory process is especially important when working with marginalized populations, such as lesbian households, because it sets the stage for further development of strong rapport and a trusting therapeutic relationship. Lesbian women who are raising children have unique ways in which they navigate their sexual identity, with varying levels of disclosure depending on the context. Lindsay and colleagues (2011) characterize the degree to which sexual orientation is disclosed on a continuum of proud, selective, and private. They further explain that lesbian women who are considered proud are those women who articulate a commitment to active disclosure of their sexual orientation as a means of advocacy or protection for their children. Selective disclosure refers to women who are just that, selective, regarding to whom and when they disclose their sexual orientation. Women who attempt to disguise their relationships, especially relationships to the non-legal or non-biological parent, choose to do so because they feel out of place, unwelcomed, or excluded when working within heteronormative systems. Finally, private denotes deliberate and active non-disclosure. Levels of authenticity and disclosure are directly related to the perceived level of acceptance and support within the social context, and lower levels of disclosure are related to the desire to keep their children safe in systems that are homophobic (Lindsay et al., 2011).
Lesbian clients may find it difficult to ask for or accept help if the physical environment is not affirmative (Hunter & Hickerson, 2003). Creating affirmative space extends beyond an individual social worker's office to include the entire agency area. One way to create affirmative space agency-wide is to include pictures, periodicals, or other media that include various family constellations, equality organizations such as Parents and Friends of Lesbians and Gays (PFLAG) or the Human Rights Campaign (HRC), and written statements about the agency's commitment to providing equal services (Mallon, 2000). Providing images of same-sex couples, lesbian-headed families, or the like sends the message that all families are valued and encouraged to attend and be fully open about their sexual orientation and family constellations. Within each social worker's office he/she could include resources that are specifically lesbian and gay friendly (Eldridge & Barnett, 1991). Mercier and Harold (2003) interviewed 21 lesbian-headed families about their interfaces with schools and found that many reported feeling that their school systems were attempting to be inclusive of their families, but were doing so with a limited array of resources. Further, lesbian-headed families are often knowledgeable about resources regarding their families and eager to share books, resources, pictures, or similar materials to create systems that affirm their families (Mercier & Harold, 2003).
Members of lesbian-headed families consistently mention heteronormative systems' inability to "see" them as they are (Eldridge & Barnett, 1991; Mercier & Harold, 2003). Lesbian parents frequently report that even when they are intentionally out with child care providers, school personnel, or other helping professionals, the helping professional reorganizes their family structure into a more common, heteronormative structure, mistaking partners for sisters, grandparents, or the like (Mercier & Harold, 2003; Skattebol & Ferfolja, 2007), perpetuating invisibility and oppression. When lesbian women have parts of their family system minimized, or restructured to fit within the norm, the message of otherness, of being less than, is internalized, further marginalizing lesbian parents and their children. One solution is to be aware of personal assumptions about families, let the client lead introductions of themselves and their families, and ask clarifying questions when necessary.
Lack of these LGBT-affirming environments, images, and literature can lead clients to feel marginalized and unwelcome, and can increase the likelihood of internalized homophobia from living in a heterocentric society (Lindsay et al., 2011; Szymanski & Chung, 2008). Having a safe, confidential space is crucial in contributing to a strong therapeutic relationship and experience for lesbian women (Pixton, 2003).
Use Inclusive Language
The first encounters with a practitioner or agency are generally intake or registration forms or informational surveys. Often forms present heterosexist language including "married, single or divorced" or describe relationships to the client with words such as "spouse," "mother," and "father." Redesigning forms to include language that is inclusive of all family structures will send a signal to lesbian clients that their families are understood and valued by the practitioner as well as by the entire agency. In general, it is important to always provide an option for "other" and a blank space for the client to provide appropriate information. Utilize the client's language and always ask for definitions or clarification rather than making assumptions. For a comprehensive list of replacement options for current agency forms, please see Table 1.

In situations where adoption is not a preferred or viable option for lesbian women, an important issue to focus on is the role of the "co-mother," or the non-biological parent in a lesbian family. Motherhood is associated with biology and childbirth in society, making the role of co-mother a difficult and often isolating position. In her published diary, Gray (1987) writes of the disconnection she experienced during the first month after the birth of her partner's child. She explains she felt "very left out because I can't feed him" as well as "hurt by Kathleen's seeming unwillingness to share that fundamental task" (Gray, 1987). She later comments, "no one's allowed to have two mommies," as was evident when people approached her family and asked, "Who's the mom?" or "Whose baby is it?" (Gray, 1987).
How lesbian co-mothers are perceived and treated in society is another indicator of heterosexism, along with the denial of basic cultural celebrations and landmarks to lesbian and gay families, including marriage, anniversaries, baby showers, and so forth. Social workers have the opportunity to encourage lesbian families to create their own rituals to celebrate their union, and to be inclusive of two mothers, two fathers, and other non-normative family structures in agency celebrations. Family rituals and being included in agencies' events validate and empower lesbian couples, as well as create a sense of legitimacy and family identity (Erwin, 2007). Utilizing the correct language and exploring the individual narrative of clients' families is crucial to establishing a trusting relationship that validates all family structures and is required for a working therapeutic alliance.
Conclusion
The attitudes of Americans toward lesbian women have changed dramatically within the last ten years. The majority of Americans support same-sex marriage and even more support equal protections regardless of sexual orientation. While the political landscape continues to grow more tolerant of sexual minorities, lesbian women still face real risks when disclosing their sexual orientation. Further, many lesbian women are hesitant to disclose their sexual orientation. The growing forms of non-traditional family structures pose considerable challenges to heteronormative systems.
Because of their direct contact with clients and their ethical commitment to oppose oppressive systems, social workers are likely to be the agency representatives who are in the best position to advocate for corrections within heteronormative systems that marginalize lesbian-headed families. Suggestions for social workers presented in this paper include: 1) knowing the facts about the challenges faced by lesbian-headed families, with special attention paid to the legal and social risks associated with "coming out" as a lesbian or a lesbian mother; 2) empowering social workers to create affirmative space within their agencies for lesbian-headed families by, for example, supporting changes to policy, paperwork, and physical surroundings that suggest lesbian-headed families are seen and valued; and 3) reworking language to be inclusive of lesbian-headed families. In summary, working with lesbian-headed families may be challenging for social workers, who often report feeling unaware of the needs of this population. However, addressing these three key areas (knowledge, space, and language) can transform an agency into an affirmative and efficient resource for lesbian-headed families.
Table 1. Commonly Utilized Language Contrasted with More Inclusive Language Options
T Cell Interactions in Mycobacterial Granulomas: Non-Specific T Cells Regulate Mycobacteria-Specific T Cells in Granulomatous Lesions
Infections with pathogenic mycobacteria are controlled by the formation of a unique structure known as a granuloma. The granuloma represents a host–pathogen interface where bacteria are killed and confined by the host response, but also where bacteria persist. Previous work has demonstrated that the T cell repertoire is heterogeneous even at the single granuloma level. However, further work using pigeon cytochrome C (PCC) epitope-tagged BCG (PCC-BCG) and PCC-specific 5CC7 RAG−/− TCR transgenic (Tg) mice has demonstrated that a monoclonal T cell population is able to control infection. At the chronic stage of infection, granuloma-infiltrating T cells remain highly activated in wild-type mice, while T cells in the monoclonal T cell mice are anergic. We hypothesized that addition of an acutely activated non-specific T cell to the monoclonal T cell system could recapitulate the wild-type phenotype. Here we report that activated non-specific T cells have access to the granuloma and deliver a set of cytokines and chemokines to the lesions. Strikingly, non-specific T cells rescue BCG-specific T cells from anergy and enhance the function of BCG-specific T cells in the granuloma in the chronic phase of infection when bacterial antigen load is low. In addition, we find that these same non-specific T cells have an inhibitory effect on systemic BCG-specific T cells. Taken together, these data suggest that T cells non-specific for granuloma-inducing agents can alter the function of granuloma-specific T cells and have important roles in mycobacterial immunity and other granulomatous disorders.
Introduction
Approximately one-third of the world's population is infected with Mycobacterium tuberculosis (Mtb), earning it the distinction of the "world's most successful pathogen" [1]. Despite a vigorous immune response, chronic infection persists, and reactivation or reinfection can occur. The primary cause of morbidity and mortality in tuberculosis is the failure of the host response to control reactivation during the chronic phase of infection. Thus, an understanding of the host response to Mtb during latent infection is critical to controlling the disease. In this work, we describe how activated T cells lacking specificity for the infection can contribute to protection during chronic infection.
CD4 + T cells have a central role in protection against mycobacterial diseases by orchestrating the formation of a delayed-type hypersensitivity site, the granuloma [2][3][4]. The granuloma is a host-pathogen interface where the host immune response controls bacteria, but also where bacteria persist. Although the granuloma environment may hold the key to bacterial persistence and the absence of sterilizing immunity, knowledge of the biology of this compartment remains scarce. Mice lacking CD4 + T cells due to deficiency in recombinase activating gene (RAG), TCR β chain [5], CD4, or MHC class II [6,7] succumb to infections with mycobacteria. Furthermore, adoptive transfer of CD4 + T cells to deficient hosts confers protection [8] and promotes granuloma formation [9]. CD4 + T cells participate in every aspect of granuloma formation [10] and can mediate protection through a variety of mechanisms. The cytokines TNF [11][12][13] and IFN-γ [14,15] are key factors in protection against mycobacteria, though many other factors are known to play a role.
Control of bacterial numbers during the chronic phase of infection depends upon a balance between bacterial proliferation and T cell immunity. Declining CD4 + T cell counts in the late stages of HIV/AIDS results in the reactivation of previously latent mycobacterial infections [16]. In mouse models, antibody depletion of CD4 + T cells during the chronic phase of infection also results in reactivation [17]. Previous work from our laboratory and other groups has demonstrated that the T cell repertoire is broad even at the level of the single granuloma and differs from lesion to lesion [9,18,19]. Despite this broad repertoire, a single monoclonal BCG-specific T cell population is sufficient to mediate protective granuloma formation [9]. Although protection is equivalent under the conditions studied thus far, we have noted differences in the activation phenotype of CD4 + T cells in wild-type mice infected with BCG as compared with mice possessing monoclonal populations of CD4 + T cells specific for BCG. In this work, our goal was to determine whether a two T cell system could recapitulate some of the properties of infection of wild-type mice.
In genetically intact or wild-type mice, the diversity of the normal T cell repertoire makes it difficult to distinguish pathogen-specific T cells from non-specific T cells. To test how non-specific T cells contribute to granulomatous lesions, we constructed a model system in which mycobacteria-specific and non-specific T cells were specified as two monoclonal T cell populations, in contrast to the millions of specificities present in the normal repertoire. PCC-specific CD4 + TCR Tg 5CC7 RAG −/− mice were infected with recombinant M. bovis strain bacille Calmette-Guérin (BCG) expressing PCC (PCC-BCG). These mice were adoptively transferred with T cells from the conalbumin (CA)-specific CD4 + TCR Tg D10 RAG −/− mice as sentinels for a BCG-non-specific T cell population. We demonstrate that BCG-non-specific D10 T cells, when activated, have access to the granulomatous inflammatory site. Additionally, these non-specific T cells affect the function of BCG-specific T cells in the granuloma. These non-specific T cells also contribute cytokines and influence macrophage activation in the granuloma. These data suggest a role for activated non-specific T cells in boosting the activity of local antigen-specific T cells and in directly affecting the antimicrobial function of the granuloma. Our data indicate that non-specific and specific T cells can cooperate to control bacteria at the site of infection.

Mice

5CC7 RAG1 −/− TCR Tg mice [20], specific for PCC residues 88-104 in the context of I-Ek, were purchased on the B10.A background from Taconic Farms Emerging Models Program (Tarrytown, NY, USA) and bred onto the B10.BR RAG1 −/− background. D10 RAG1 −/− mice [21], specific for CA residues 121-136 in the context of I-Ak, were maintained on the B10.BR RAG −/− background. Mice for all experiments were used at 6-10 weeks of age. Mice were bred and housed at the University of Wisconsin Animal Care Unit (Madison, WI, USA) under specific pathogen-free (SPF) conditions in filter-top cages with autoclaved cages, water, bedding and feed, according to the guidelines of the Institutional Animal Care and Use Committee (IACUC).
Testing Transgenic T Cells for Reactivity to PCC-BCG Antigens
Frozen stocks of PCC-BCG were sonicated with three pulses at high power in a Sonicator® ultrasonic processor (Heat Systems, Newtown, CT, USA) to disrupt the cell wall. The BCG sonicate was then centrifuged at 14,000 rpm in an Eppendorf centrifuge to separate a "lysate" fraction, and the "pellet" fraction was resuspended in PBS. D10 spleen cells were plated at 10^6 per well in a 96-well plate in 200 µL complete RPMI 1640 plus 10% FBS (cRPMI10) with either 10 µL or 50 µL of BCG lysate or pellet. After 72 h of culture, cells were harvested, washed and analyzed for their expression of activation markers by flow cytometry.
Immunofluorescence
All incubations were performed at room temperature unless otherwise stated. Five-µm-thick cryosections were cut from frozen O.C.T.-embedded tissues and fixed for 30 min in 4% PFA in PBS. Sections were then washed three times with PBS and outlined with a Pap pen. Sections were blocked for 30 min with 1% BSA and 40 µg/mL 2.4G2 antibody to block FcR binding and stained for 30 min with Alexa 568-labeled KJ25 (anti-Vβ3) and FITC-labeled F23.1 (anti-Vβ8) in the presence of 1% BSA and 40 µg/mL 2.4G2. Unbound antibody was washed away by three washes in PBS and sections were coverslipped with Gel/Mount (Biomeda, Goleta, CA, USA). Slides were viewed on an Olympus IX-70 fluorescent microscope equipped with an Optronics DEI 750 digital camera. Images were acquired with LaserSharp Acquisition Software and analyzed with BioRad Confocal Assistant.
Histopathology
Tissue was fixed in 10% neutral buffered formalin and processed for paraffin embedding by standard methods. Five-µm-thick sections were stained with H&E for tissue morphology and by the Ziehl-Neelsen method to identify acid-fast bacilli (AFB). To quantitate granuloma lesion size, digital images of H&E-stained sections were acquired at 400× total magnification and the granuloma area was determined by outlining each lesion in the Scion Image program version 1.62c (National Institutes of Health, Bethesda, MD, USA). Quantitation of granuloma burden was performed by counting the number of liver granulomas per field at 100× total magnification. To quantitate bacterial load, the number of AFB per lesion was counted at 1000× total magnification on Ziehl-Neelsen-stained slides.
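Once lesion outlines and per-field counts have been exported, the size and burden measurements described here reduce to simple arithmetic. The sketch below illustrates that arithmetic only; the lesion areas, per-field counts, and field area are hypothetical stand-ins for exported values and are not part of the original Scion Image workflow or data from this study.

```python
# Minimal sketch of the granuloma quantitation arithmetic (hypothetical values).

def mean_lesion_area(lesion_areas_um2):
    """Mean granuloma area from per-lesion outlines (square microns)."""
    return sum(lesion_areas_um2) / len(lesion_areas_um2)

def granuloma_burden(granulomas_per_field, field_area_um2):
    """Granulomas per square micron, averaged over all fields examined."""
    total_granulomas = sum(granulomas_per_field)
    total_area_um2 = field_area_um2 * len(granulomas_per_field)
    return total_granulomas / total_area_um2

# Hypothetical example values, not study data
lesion_areas = [1.2e4, 2.3e4, 1.8e4]   # outlined lesion areas at 400x, um^2
counts_per_field = [3, 5, 4, 2]        # granulomas counted per 100x field
field_area = 2.0e6                     # assumed area of one 100x field, um^2

print(f"mean lesion area: {mean_lesion_area(lesion_areas):.0f} um^2")
print(f"granuloma burden: {granuloma_burden(counts_per_field, field_area):.2e} per um^2")
```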
Multiplex Cytokine Assay
Spleen or granuloma cells (10^6 per well of a 96-well plate) were cultured in triplicate with 100 µg/mL PCC or CA in cRPMI10; 72 h later, culture supernatants were harvested and stored at −80 °C. Frozen supernatants were assayed on the 22-plex Mouse Cytokine and Chemokine cytometric bead array as a service by LINCO Research (St. Charles, MO, USA). Intra-assay and inter-assay variances were less than 10% and 20%, respectively.
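Intra- and inter-assay variability of this kind is conventionally expressed as a coefficient of variation (CV = standard deviation divided by the mean). The snippet below is a minimal illustration of that calculation using made-up replicate values; it is not drawn from the assay data reported here.

```python
import statistics

def cv_percent(values):
    """Coefficient of variation as a percentage (sample SD / mean * 100)."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate measurements of one control sample (e.g., pg/mL)
intra_assay = [412.0, 398.0, 405.0]   # triplicate wells within one run
inter_assay = [410.0, 365.0, 440.0]   # the same control across separate runs

print(f"intra-assay CV: {cv_percent(intra_assay):.1f}%")   # expected to fall below 10%
print(f"inter-assay CV: {cv_percent(inter_assay):.1f}%")   # expected to fall below 20%
```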
Results
In 5CC7 RAG −/− mice chronically infected with PCC-BCG, 5CC7 T cells acquire a resting phenotype. Previous work has demonstrated that 5CC7 RAG −/− mice expressing a monoclonal T cell population specific for PCC-BCG control bacteria similarly to wild-type mice possessing the normal repertoire of T cells [9]. Figure 1A,B (rightmost panels) shows that both wild-type and 5CC7 RAG −/− mice have high levels of acid-fast bacilli (AFB) at three weeks but low levels of AFB during the chronic stage of infection at six weeks. However, the activation state of granuloma-infiltrating T cells in infected 5CC7 RAG −/− mice, as measured by cell size and by cell surface levels of LFA-1, decreases to a much greater extent during chronic infection than the activation state of granuloma-infiltrating T cells in wild-type mice (Figure 1A,B, first and second panels from left). The repertoire of T cells in wild-type chronic granulomas is broad and contains both pathogen-specific T cells and pathogen-non-specific T cells [9]. To test the hypothesis that non-specific T cells contribute to the T cell activation phenotype in wild-type granulomas, we introduced a BCG-non-specific T cell population from CA-specific D10 RAG −/− TCR Tg mice into the monoclonal 5CC7 T cell system.
D10 T Cells Are Not Activated by PCC-BCG
To assess for reactivity of D10 T cells to PCC-BCG antigens, a BLAST search of the Mtb and BCG genomes for DNA sequences encoding the CA 121-136 epitope was performed and yielded no sequences with significant homology. D10 T cells cultured in vitro with PCC-BCG exhibited no evidence of blasting ( Figure 2, second column from the left) or elevation of LFA-1 expression (Figure 2, third column from the left). Finally, D10 T cells adoptively transferred to PCC-BCG-infected mice demonstrated no evidence of expansion or activation (Figure 3, second row) after one week, while immunization with CA Ag resulted in robust expansion and activation ( Figure 3, third row). These data demonstrate that D10 T cells have no reactivity for PCC-BCG antigens in vitro or in vivo, and that D10 T cells are a useful sentinel population for BCG-non-specific T cells. Thus, 5CC7 RAG −/− mice adoptively transferred with D10 cells represent a two T cell system in which one monoclonal T cell population is specific for BCG, while the other is not specific for BCG.
Acutely Activated BCG-Non-Specific T Cells Have Access to the Granuloma
We used this two T cell network model to test the hypothesis that T cells lacking specificity for granuloma antigens can accumulate in the granuloma. 5CC7 RAG −/− mice were infected with PCC-BCG i.p. to induce liver granulomas. At 5 weeks post-infection, when granuloma formation is chronic, mice were transferred with D10 RAG −/− spleen cells equivalent to 10^6 CD4+ cells and immunized s.c. with either CA antigen or PBS. One week post-transfer, spleen and granuloma-infiltrating cells were isolated and stained to distinguish the two transgenic T cell populations by flow cytometry. Previous work using RAG −/− mice infected with PCC-BCG and adoptively transferred with PCC-specific T cells demonstrated that one week is sufficient to allow protective granuloma formation [9]. Immunization results in a robust activation as measured by LFA-1 expression (Figure 3B). In addition, immunization resulted in approximately 80-fold expansion of transferred D10 T cells in the spleen (mean from three independent experiments of 2.23 × 10^4 splenic D10 T cells in the PBS-immunized mice vs. 1.85 × 10^6 D10 T cells in the CA-immunized mice; Figure 3C,D). In the granuloma, activation of D10 T cells by antigen allowed them to accumulate in granulomas and outnumber 5CC7 (PCC-BCG-specific) T cells by a ratio of approximately 5:1 (Figure 4A,B). As in the spleen of CA-immunized BCG-infected mice, granuloma-infiltrating D10 T cells express high levels of LFA-1. These data suggest that activated non-specific T cells can upregulate the necessary factors, such as adhesion molecules, for accumulation at the BCG inflammatory site. In addition, they suggest that if activated T cells dominate systemically, they dominate the local inflammatory site as well. To confirm the localization of D10 T cells to the granuloma, frozen sections of liver tissue from these mice were stained with antibodies specific for 5CC7 (red) and D10 (green) T cell receptors and counterstained with DAPI (blue) to identify lesions. As shown in Figure 4C, granulomatous lesions in non-activated mice contain only PCC-specific cells, whereas in activated mice, the CA-specific T cells dominate the lesions. In addition, the CA-specific T cells are confined largely to the granulomatous lesions.
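The roughly 80-fold expansion quoted above follows directly from the reported mean splenic counts; the short check below simply restates those two means and divides them, with no additional assumptions.

```python
# Back-of-the-envelope check of the fold expansion reported in the text.
pbs_mean_d10 = 2.23e4   # mean splenic D10 T cells, PBS-immunized mice
ca_mean_d10 = 1.85e6    # mean splenic D10 T cells, CA-immunized mice

fold_expansion = ca_mean_d10 / pbs_mean_d10
print(f"{fold_expansion:.0f}-fold expansion")   # ~83-fold, i.e. roughly 80-fold
```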
Acute Activation of Local Antigen Non-Specific T Cells Increases the Activation of Granuloma Antigen Specific T Cells
The large proportion of CA-specific T cells in the PCC-BCG-induced granulomas of activated mice suggested that these activated cells might have an effect on the ongoing immune response to PCC-BCG. Cytokines produced during expansion of antigen-specific T cells can induce activation of bystander T cells [23]. At this chronic stage in PCC-BCG infection, splenic 5CC7 cells have already downregulated LFA-1 (Figure 3A, middle column). As expected, LFA-1 levels on 5CC7 T cells are higher in the granuloma than in the spleen since granulomas accumulate activated T cells. Activation of transferred D10 T cells shows a trend towards increased activation of the BCG-specific 5CC7 T cells, but this did not reach statistical significance (Figure 3A, middle column, and quantified in Figure 3B). At the same time, in the granulomas of CA-immunized mice, there was a nearly fivefold increase in the proportion of activated 5CC7 T cells as measured by high cell surface levels of LFA-1 (Figure 4A, middle column, and quantified in Figure 4B) and by cell size (Figure S1). Remarkably, these results suggest that activation of non-specific T cell populations can increase the activation state of granuloma antigen-specific T cells. As discussed later (Figure 4D), activated non-specific T cells alter the functional properties of granuloma-infiltrating BCG-specific T cells as well.
CA Restimulation of Granuloma-Infiltrating D10 T Cells from CA-Immunized Mice Elicits a Profile of Cytokines That Can Contribute to Protection against Mycobacteria
Given the access of D10 T cells to BCG granulomas in mice immunized with CA/IFA, we examined the cytokine signature of spleen and granuloma cells from these mice in response to in vitro restimulation with CA using a cytometric bead array (CBA, Table 1). CA restimulation induced high levels of IL-2 production from granuloma cells and even higher levels from spleen cells. In addition, spleen and granuloma cells restimulated with CA secreted high levels of Th1 cytokines IFN-γ and TNF, while secreting lower levels of Th2 cytokines IL-4, IL-5, IL-9, and IL-13. This is important as it suggests that homing of D10 T cells to granulomas increases the levels of cytokines that are known to be important in protection against mycobacterial infection. With regard to chemokine secretion, granuloma cells secreted higher constitutive levels of MCP-1, RANTES, and IP-10, which attract monocytes and T cells, with a preference for antigen-experienced T cells. Again, these data suggest that the presence of activated non-specific T cells can augment the recruitment of cells to granulomas. While the range of cytokines produced by CA-restimulated spleen and granuloma cells in CA-immunized mice is similar, there are differences between the two sites. Most notably, granuloma-infiltrating cells produce much more MIP-1α than spleen cells, suggesting that granuloma-infiltrating non-specific T cells have a cytokine profile better suited to the granuloma environment and distinct from the profile of systemic non-specific T cells. Taken together, these data indicate that activated non-specific T cells can directly contribute to cytokine secretion and cellular recruitment in granulomas.
PCC-Stimulated Cytokine Secretion by Granuloma-Infiltrating Cells Is Upregulated by Recruited Activated BCG-Non-Specific T Cells
Table 2 illustrates the profile of cytokines secreted by spleen and granuloma cells in response to PCC and highlights the effect of activation of D10 T cells by CA immunization on PCC-stimulated cytokine secretion. In contrast to the spleen, 5CC7 T cells in chronic granulomas are anergic, as they do not secrete IL-2 or IFN-γ in response to restimulation with cognate antigen (PCC). The most important finding is that the presence of activated D10 T cells in granulomas alters not only the activation phenotype of 5CC7 T cells (Figure 4A), but also their ability to secrete IL-2 and IFN-γ (Table 2), indicating rescue from anergy. In addition, recruitment of acutely activated D10 T cells also leads to higher constitutive levels of TNF, MCP, RANTES, IP-10, KC, GM-CSF and G-CSF. Overall, these data indicate that activation of non-specific (D10) T cells by immunization partially reverses anergy of BCG-specific 5CC7 T cells in the granuloma and results in a Th1-biased PCC-induced cytokine secretion profile. In addition, activation of non-specific T cells leads to the secretion of chemokines that can recruit more macrophages and activated T cells. Taken together, these data suggest that mycobacteria-non-specific T cells alter and potentially augment the function of mycobacteria-specific T cells in chronic granulomas.
Activation of Non-Specific T Cells Suppresses Cytokine Secretion by BCG-Specific Spleen Cells
Interestingly, while activation of non-specific T cells by CA immunization results in increased cytokine secretion by granuloma cells in response to in vitro PCC restimulation, CA immunization leads to decreased cytokine secretion by splenic cells in response to PCC ( Table 2, left half). In general, spleen cells from non-transferred and D10-transferred, PBS-immunized mice that are restimulated with PCC exhibit a Th1-biased cytokine profile characterized by high levels of IFN-γ and TNF and low levels of IL-4, IL-5, IL-9 and IL-13. This suggests that while 5CC7 T cells in the local BCG inflammatory site are anergic, systemic T cells are not. In contrast, spleen cells from D10-transferred, CA-immunized mice restimulated with PCC secreted approximately tenfold less IL-2, IFN-γ and MIP-1α than PCC-restimulated spleen cells from D10-transferred, PBS-immunized mice. The magnitude of these changes suggests that D10 T cells have an active suppressive effect on 5CC7 T cells in the spleen. This observation suggests that when both cells are activated, they compete with each other in lymphoid organs, whereas they seem to cooperate and enhance each other's function at the local inflammatory site.
Accumulation of Activated D10 T Cells Results in Increased Granuloma Macrophage Activation
To assess the in vivo effect of IFN-γ on granuloma function, we measured cell surface expression of I-Ak (Figure 4D) as an indicator of macrophage activation. Consistent with elevated levels of both 5CC7 and D10 activation as well as increased IFN-γ secretion in CA-immunized mice, MHC class II was moderately elevated on splenic macrophages and significantly elevated on granuloma-infiltrating macrophages from CA-immunized mice relative to PBS-immunized mice. These results suggest that acutely activated non-specific T cells can indirectly contribute to granuloma function by rescuing antigen-specific T cells from anergy. In addition, activated non-specific CD4 + T cells can directly participate in effector functions in the granuloma, including IFN-γ production to effect bacterial killing and chemokine production to aid in recruitment of additional macrophages and T cells to the granuloma.
Effect of Non-Specific T Cells on Granuloma Structure and Function
Given the increased numbers of activated effector cells present in the CA-immunized two T cell granuloma, we questioned if this altered cell population would lead to increased control of BCG infection or altered granuloma morphology. Figure 5A presents representative micrographs from the three experimental groups alongside micrographs from wild-type C57BL/6 and RAG −/− mice for comparison. BCG-induced granulomas formed in the mice possessing either one or two T cell specificities appear similar to wild-type granulomas. Quantitation of granuloma area using digital images revealed no statistically significant differences in granuloma size between the different groups ( Figure 5B). Quantitation of the number of granulomas per unit area (granuloma burden, Figure 5C) revealed a modest increase in the number of granulomas per square micron in CA-immunized mice. Thus, there are more granulomatous lesions in CA-immunized mice, but they have similar size and composition. Taken together, these data suggest that increased T cell activation, macrophage activation, and expression of chemokines does not appear to significantly alter the structure of the granuloma. The number of AFB was counted on Ziehl-Neelsen stained formalin fixed liver sections ( Figure 6) to determine the effect of the activation of non-specific T cells on bacterial control. Despite differences observed in macrophage activation and secretion of IFN-γ, no statistically significant differences were observed in the number of AFB per lesion. The lack of "improved" control of infection may reflect that there is already sufficient protective granuloma formation in both 5CC7 RAG −/− mice and 5CC7 RAG −/− mice transferred with D10 T cells even without the help of activated BCG-non-specific D10 T cells. For comparison, granulomas from wild-type B10.BR mice with a normal T cell repertoire contain on average 3 AFB per lesion while B10 RAG −/− mice contain on average 23 AFB per lesion [15]. Thus, BCG infection appears to be well controlled in both 5CC7 RAG −/− and 5CC7 RAG −/− mice transferred with D10 T cells. These data are consistent with our previous findings that a single T cell is sufficient for protective granuloma formation and further suggest that the presence of non-specific activated T cells does not detract from protective granuloma formation and additionally is not able to enhance it any further. It also indicates that anergized T cells can induce granulomas and control bacterial expansion, and that factors other than IFN-γ can contribute to these activities.
Discussion
In chronic mycobacterial infections, mycobacteria are confined and controlled within granulomas. CD4 + T cells are central in granuloma formation [24][25][26]. Depletion of CD4 + T cells results in disorganized granuloma structure, reactivation and dissemination of infection, and eventually death of the animal. The relative role of BCG-specific and BCG-non-specific T cells in granulomas is not understood, and it is the subject of this paper. In contrast to wild-type granulomas, CD4 + T cells in mice expressing a monoclonal population of BCG-specific T cells have a resting phenotype. We hypothesized that the activated phenotype observed in the granulomas of wild-type mice at six weeks postinfection was due to a complex T cell repertoire that includes BCG-non-specific T cells. To test this hypothesis, we adoptively transferred D10 T cells representing a BCG-non-specific T cell population into this monoclonal T cell system. D10 T cells were shown by a number of criteria to be nonreactive to PCC-BCG (Figure 2, middle row; Figure 3, middle row). In this work, we show that activated non-specific T cells are able to accumulate in the granuloma. In addition, these granuloma-infiltrating non-specific T cells secrete a broad range and large amount of cytokines and chemokines, can influence BCG-specific T cells at that site, and can potentially affect granuloma function. These data suggest that non-specific T cells in the context of a normal T cell repertoire may play a role in chronic granuloma formation. Finally, while D10 T cells appear to enhance the activity of granuloma-infiltrating BCG-specific T cells, they appear to suppress some functions of BCG-specific T cells at the systemic level.
Accumulation of Non-Specific T Cells
Accumulation of T cells at an inflammatory site depends both on activation molecules expressed by T cells as well as on the presence of antigen at the site and its effect on retention of antigen-specific cells. Studies examining the homing of T cells to inflamed skin in the presence or absence of antigen illustrate that while T cells have access to inflammatory sites independent of antigen specificity, local antigen aids in retention of antigen-specific T cells in the absence of proliferation [27]. Similarly, effector or memory but not naïve OT-1 T cells were able to accumulate in influenza-infected lungs [28]. Studies from our lab describing the homing of a CNS-antigen-specific T cell line [29] and of LCMV-specific T cells [18] to granulomatous inflammatory sites have reached similar conclusions. The present work shows that activated non-specific T cells can not only accumulate in the granuloma, as observed previously, but can even dominate the site. In addition, the induction by activated granuloma-infiltrating non-specific T cells of the chemokines MIP-1α, MCP-1, RANTES and IP-10, which preferentially attract effector T cells [30,31], helps serve as a positive feedback loop in the recruitment of non-specific T cells at the site [32]. All these argue that the systemic activated T cell repertoire has access to inflammatory sites. Early in infection, in the presence of high levels of infectious agent, a lot of the systemic activated T cells are likely to be specific for the infectious agent. In the chronic stage when pathogen loads are lower, pathogen-specific T cells represent a smaller proportion of the systemic activated T cell repertoire. Thus, in chronic granulomas the proportion of pathogen-non-specific T cells is likely to be higher. Recent work using a systems biology simulation approach has suggested that non-specific T cells may actually compete with Mtb-specific T cells for access to infected macrophages [33]. This would seem to conflict with our data that recruitment of non-specific T cells actually increases activation of mycobacteria-specific T cells and results in increased macrophage expression of MHC class II implying activation by T cell-derived IFN-γ. One possibility is that IL-2 secreted by non-specific activated T cells directly activates local anergized specific T cells.
Anergy and Granuloma Function
In this study, we report that chronic infection of mice with a monoclonal BCG-specific T cell population results in an anergic state marked by downregulation of the activation marker LFA-1 as well as an inability to secrete IFN-γ or IL-2 in response to restimulation with cognate antigen. In chronic mycobacterial infection, hyporesponsiveness of T cells to antigen has been reported. Peripheral blood T cells of lepromatous leprosy patients express lower levels of TCR ζ chains, p56lck, and NF-κB p65, correlating with decreased transcriptional activity from the IFN-γ promoter [34]. Similarly, peripheral blood T cells from TB-infected patients demonstrate lower levels of TCR ζ chains [35]. The same work demonstrated that Mtb granuloma-infiltrating T cells produce little IFN-γ when restimulated with antigen, but that IFN-γ secretion could be increased by the addition of IL-2. Similar results have been reported for synovial-infiltrating T cells in rheumatoid arthritis patients [36]. Taken together, these data suggest that chronic inflammation, such as that induced by BCG infection in our system, can induce T cell anergy.
T cell hyporesponsiveness can be induced by the presence of excess antigen or by other factors in the T cell's local microenvironment. Chronic infection of mice with LCMV results in deletion of LCMV tetramer-specific T cells [37]. In addition, there is a progressive loss of the ability of the remaining LCMV tetramer-specific T cells to secrete IL-2, TNF, and IFN-γ similar to what is observed in chronic PCC-BCG infection of 5CC7 RAG −/− mice in the present work. The severity of this deletion and inactivation correlates with the level of LCMV antigens present in the mouse. Interestingly, adoptive transfer of LCMV GP-specific CD4 + or CD8 + TCR Tg T cells demonstrated that CD4 + T cells took 5-6 weeks to become anergized [38], similar to the timeframe seen in our model of BCG infection. In a mouse model of tolerance, a panel of hen egg lysozyme (HEL) transgenic mice was crossed to the 3A9 CD4 + TCR Tg mouse. In this system, the level of T cell inactivation correlated with the amount of antigen present in the lymph nodes and the affinity of the TCR-pep-MHC interaction [39]. In another system, male antigen-specific T cells become anergic when transferred into male nude mice, but this anergy can be reversed by retransferring these cells into female nude mice [40]. How this applies to the BCG model system is unclear. In our system, BCG bacterial load is high at three weeks but drops significantly by six weeks. High levels of antigen early in infection may induce T cell anergy which then persists for the duration of the experiment.
Alternatively, hyporesponsiveness of granuloma-infiltrating T cells could be the result of antigen deprivation of T cells. In BCG or Mtb granulomas, the display of bacterial antigen is very low, even in the acute phase. Overexpression of antigen by recombinant BCG activates T cells to much higher levels of cytokine production, indicating that only a fraction of the host effector capacity is used [41,42]. Antigen deprivation may also occur through competition for antigen presenting cells between mycobacteria-specific T cells and non-specific T cells. In Mtb granulomas, the proportion of bacteria-specific T cells is reported to be less than 5% of granuloma T cells [43]. Additionally, the granuloma structure itself, in which T cells are localized to the periphery and most infected macrophages reside in the center, might sequester T cells away from antigen presenting cells.
Antigen-independent factors in the granuloma microenvironment may also play a role in inducing T cell anergy. Chronic exposure of T cells to TNF in vitro has been shown to cause downregulation of TCR ζ chains and hyporesponsiveness to antigen restimulation [44,45]. Sarukhan et al. [46] demonstrated that less than half of mice doubly transgenic for a TCR specific for influenza hemagglutinin and for expression of hemagglutinin in pancreatic beta cells develop clinical diabetes. The remaining mice, which do not develop diabetes, have infiltration of pancreatic islets by transgenic T cells that proliferate poorly when restimulated with antigen. This hyporesponsiveness was correlated with a higher secretion of TNF by islet-infiltrating T cells, providing circumstantial evidence for a role for TNF in promoting anergy at this site. Furthermore, expression of TNF under the control of the same beta-cell-specific promoter prevented the development of autoimmune diabetes in the susceptible NOD mouse strain [47]. The expression of immune checkpoint molecules can also restrict granulomatous responses. Chronic granulomas possess dendritic cells expressing PD-L1 and T cells expressing PD-1 [48]. Inhibiting PD-1-PD-L1 interactions allows T cells to produce higher levels of IFN-γ [48]. Other inhibitory receptors on T cells and regulatory cells can further limit granuloma T cell responses [25]. Taken together, these data suggest a role for inflammatory cytokines such as TNF or immune checkpoint molecules such as PD-1-PD-L1 in rendering T cells hyporesponsive during chronic inflammation.
We report that the presence of an activated non-specific T cell population can partially reverse the hyporesponsiveness of 5CC7 T cells despite all the limiting factors discussed above (see schematic, Figure 7). Classically, IL-2 is thought to be the most important cytokine in reversal of anergy [49]. This may be relevant in our system, since activated D10 T cells secreting IL-2 are recruited to the granuloma and in addition induce 5CC7 T cells to produce IL-2. Also, D10 T cells in our system produce a broad range of cytokines. Other cytokines such as IL-18 [50,51] and cytokine combinations such as IL-2, IL-6 and TNF [52] have been shown to induce T cell activation in the absence of antigen. Alternatively, chemokine secretion by D10 T cells (Table 1) may allow recruitment of non-anergic 5CC7 T cells from the systemic pool, which are clearly able to secrete IFN-γ and IL-2 (Table 2). Further study will clarify the mechanism by which anergy is broken.
Interestingly, despite a minimal ability to secrete Th1 cytokines, PCC-specific T cells are still able to form granulomas similar in appearance to those of wild-type mice and are also able to control bacterial numbers to similar levels (Figures 5 and 6). This seems paradoxical since depletion of CD4 + T cells during the chronic phase of infection results in reactivation [17]. One possibility is that anergic T cells can under some conditions still produce IFN-γ [53,54]. Another possible resolution of this paradox is that factors besides the cytokines and chemokines assayed may compensate for their loss. For example, although IFN-γ and TNF are thought to be essential cytokines in protection against mycobacterial infections, IFN-γ- and TNF-independent mechanisms are known to exist for protection against M. tuberculosis [25,55,56]. In addition, while granuloma-infiltrating 5CC7 T cells are anergic, systemic 5CC7 T cells retain the ability to secrete cytokines, and a low level of recruitment to the granuloma may be enough to provide some protection. Finally, the protection observed may be a remnant of the more active T cell response seen at earlier times during infection, and BCG infection may eventually reactivate in 5CC7 RAG −/− mice at times later than those tested in the present work. These data support one interpretation that anergized T cells contribute to granuloma maintenance and bacterial control and that factors other than IFN-γ may be responsible for these activities.
Activated D10 T Cells Suppress Systemic PCC-Specific T Cells
In contrast to the T cell cooperation observed in the granulomas of these mice, activated D10 T cells appear to inhibit cytokine secretion by 5CC7 cells in the systemic compartment as represented by the spleen. As shown in Table 1, CA-induced secretion of several cytokines, notably IFN-γ and IL-2, is correlated with a tenfold or more decrease in PCC-induced secretion of the same cytokines. Since the D10 T cells constitute only half of the T cells in the spleens of D10-transferred, CA-immunized mice, this difference cannot be explained by cell numbers alone and suggests an active mechanism of suppression in contrast to the passive competition observed between T cells for peptide-MHC complexes on antigen presenting cells [57][58][59]. Thus, while activated non-specific T cells help resting granuloma-infiltrating T cells, activated non-specific T cells compete with activated granuloma-infiltrating T cells. Suppression of IFN-γ secretion by IL-4 is observed in Th2 polarization protocols, but this seems unlikely in this system given that D10 T cells are secreting a Th1 profile of cytokines. Work by Duthoit and colleagues [60] demonstrated that recently activated CD4 T cells can suppress the proliferation of and secretion of IL-2 by naïve T cells when the two are cultured together. This effect is not diminished by addition of up to a 50:1 ratio of irradiated stimulator spleen cells to T cells and persists even after the naïve cells are separated from the activated cells arguing against competition for access to antigen presenting cells. The mechanism of suppression is not clear but appears to depend on cell-cell contact. In our system, D10 cells are acutely activated and may have similar suppressive effects on 5CC7 either in vivo or in vitro during the assay. In summary, T cells can interact to either positively or negatively regulate each other. It is clear from the data presented here that the T cell compartment (in this case, systemic vs. effector site) is one factor influencing whether the interaction is positive or negative. Further study will be necessary to determine how this occurs.
Summary
The large size of the normal T cell repertoire obscures our understanding of the basic rules governing their interactions with each other and ultimately determining the shape and character of the immune response. In this work, we employ a two T cell network of known specificities to understand how T cells interact in the context of a chronic infection. Using this system, we observed a number of interesting phenomena. Similar to previous work, activated non-specific T cells were able to accumulate in the granuloma and even dominate the site. These non-specific T cells were also able to significantly increase the activation state of and cytokine secretion by BCG-specific T cells at that site. Indeed, granuloma-infiltrating BCG-specific T cells exhibited behavior consistent with anergy, and the presence of activated non-specific T cells allowed a partial reversal of this state. Together with the heterogeneous nature of the TCR repertoire at the single granuloma level previously observed, these data suggest that non-specific T cells may play a role in maintaining the activation state of T cells in chronic granulomatous diseases. In contrast, activation of non-specific T cells appeared to suppress cytokine secretion by systemic BCG-specific T cells. This suggests that the nature of T cell interactions is highly dependent on the T cell compartment. In summary, the use of a small network of T cells of defined specificity has revealed interesting properties of T cell interactions, which would not have been evident in the context of the full T cell repertoire. Further study will be required to understand the mechanisms behind these phenomena and their significance in T cell interactions within the full wild-type repertoire.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/cells10123285/s1, Data demonstrating the size of transgenic T cells as a measure of activation are provided in Supplemental Figure S1: Activated non-specific T cells increase the activation of BCG-specific T cells as measured by cell size. Granuloma cells were prepared from mice treated as described in the legend to
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Acknowledgments:
We are grateful to Derek Sant'Angelo for providing the D10 mice and 3D3 clonotypic hybridoma used in this study.
Conflicts of Interest:
The authors declare no conflict of interest.
Peritoneal catheter insertion: combating barriers through policy change
ABSTRACT Barriers to accessing home dialysis became a matter of life and death for many patients with kidney failure during the coronavirus disease 2019 (COVID-19) pandemic. Peritoneal dialysis (PD) is the more commonly used home therapy option. This article provides a comprehensive analysis of PD catheter insertion procedures as performed around the world today, barriers impacting timely access to the procedure, the impact of COVID-19 and a roadmap of potential policy solutions. To substantiate the analysis, the article includes a survey of institutions across the world, with questions designed to get a sense of the regulatory frameworks, barriers to conducting the procedure and impacts of the pandemic on capability and outcomes. Based on our research, we found that improving patient selection processes, determining and implementing correct insertion techniques, creating multidisciplinary teams, providing appropriate training and sharing decision making among stakeholders will improve access to PD catheter insertion and facilitate greater uptake of home dialysis. Additionally, on a policy level, we recommend efforts to improve the awareness and feasibility of PD among patients and the healthcare workforce, enhance and promulgate training for clinicians—both surgical and medical—to insert PD catheters and fund personnel, pathways and physical facilities for PD catheter insertion.
INTRODUCTION
Peritoneal dialysis (PD) is the home therapy option of choice for most patients with kidney failure worldwide. Best-practice PD today provides mortality and quality-of-life outcomes similar to those achieved with in-center hemodialysis (HD) [1,2]. PD is cost saving in most global settings and recommended by professional societies, including the International Society of Nephrology (ISN) [3]. Further, while patients receiving in-center HD were 5-20 times more likely to be infected with severe acute respiratory syndrome coronavirus 2 compared with the age-matched general population, the risk for PD patients was the same as the general population [4][5][6][7].
A lack of focus on PD catheter insertion has been identified as an important barrier preventing PD utilization [8]. Inadequate skill training in the range of catheter insertion techniques leaves the newly trained clinician poorly equipped to optimally utilize this modality, pushing patients to the default in-center HD option. While there are excellent and dedicated surgeons passionate about PD catheter insertion, it is given low priority in some programs and, due to clinical prioritization, operating rooms are preferentially used for other procedures. Further, in some locations, PD catheter insertion was classified as a nonurgent procedure during the coronavirus disease 2019 (COVID-19) pandemic, which deprived many patients starting dialysis of the opportunity to benefit from PD.
There are plenty of data from around the world to show that, compared with catheters inserted by surgeons or interventional radiologists, catheter insertion by trained nephrologists without the need for operating theater access or general anesthesia is associated with similar or higher rates of successful PD utilization, both for elective and urgent-start PD [8]. A shared care model has been recommended, in which the decision on the initial insertion approach is made after a risk estimation, with nephrologists performing insertion in uncomplicated cases and surgeons taking care of high-risk cases that might require (advanced) laparoscopy, including additional procedures such as adhesiolysis, omentopexy or hernia repair.
Despite the acknowledgement of the critical role played by the timely creation of access to the peritoneal cavity in the success of this treatment modality, this topic needs greater attention. Therefore the aim of this article is to discuss the importance of PD catheter insertion, barriers to successful and timely insertion, and potential solutions.
HISTORY OF PD CATHETER INSERTION
Wegner performed the first PD experiments in rabbits in 1877 [9]. The first intermittent PD trial in a human patient was carried out by Ganter in 1923 [10]. He introduced 1.5 L of saline solution intraperitoneally in a woman with ureter obstruction and observed a slight improvement. In 1927, Heusser and Werder [11] conducted dialysis on three uremic patients by the continuous method, but there was no improvement, perhaps because too little fluid was used. Two techniques were used: intermittent dialysis with one tube, where both the infusion and drainage occurred through the same tube, and continuous dialysis, with two tubes placed in the peritoneal cavity, one used for the inflow of the PD solution and the other for outflow.
When Dr. Fred Boen defended his PhD dissertation in Amsterdam in 1959, PD was still 'crawling', but already saving and prolonging the lives of patients with acute kidney injury (AKI) and kidney failure. His thesis contains a narrative of the PD technique, including case histories of 22 patients treated with 32 treatments [12,13]. An excerpt of his thesis gives us a clear perspective of PD access insertion: Technique: The surgeon (Dr. van der Reyden) made an incision on the left and right side of the abdomen at the level of the spina iliaca anterior superior and brought two rubber drains into the abdomen through these openings, one tube being used for the inflow and the other for the outflow. When difficulties were encountered in the outflow, the direction of the flow was reversed. During the course of the dialysis, an enormous leakage occurred from both incisions. This was not abolished after stitching the wound again.
Present-day considerations
With the creation of the first successful indwelling peritoneal catheter by Henry Tenckhoff in 1968, PD became more regularly utilized. Over time, the vital importance of a functioning PD catheter for the patient came to be realized.
Who inserts PD catheters, the exact location of service, and the methods used are influenced by a number of factors. In some countries, regulatory constraints limit who can perform PD catheter insertion or where it can be done, which limits the opportunities for PD catheter insertion for patients. This underscores the need to educate not only doctors, nurses, and patients, but also people in the regulatory and political spheres in every country.
Practitioners need to learn from centers, regions, or countries that have successfully developed and implemented PD catheter insertion programs in renal services. Such programs exist in many countries, including South Africa, Saudi Arabia, Mexico, Thailand, China, Guatemala, Dominican Republic, and Brazil. However, only a few of these programs have published their findings (Appendix D).
GLOBAL PICTURE OF PD CATHETER INSERTION TECHNIQUES
This section summarizes catheter insertion techniques, followed by a discussion of conditions that influence variations in observed PD access practice patterns around the world.
The insertion techniques available can be divided into percutaneous, open surgical and (advanced) laparoscopic. Not all countries have access to all techniques. Often there is substantial variability of the relative availability between centers within the same country. Variations are influenced by the availability and skill of practitioners, clinical demands, reimbursement policies, and cultural and regional historical practices.
Percutaneous technique
The percutaneous technique utilizes either a trocar or a blind modified Seldinger approach and is usually performed in a procedure room or at the bedside under local anesthesia by a nephrologist/radiologist or nurse practitioner [14][15][16][17]. The disadvantage of the trocar approach is that the large-bore trocar is placed without visualization, risking bowel or vascular injury as well as creating a track that is larger than the catheter, which may result in leakage, and this technique has been largely abandoned. A modification of this approach uses a needle, guidewire and peel-away sheath through which the catheter is inserted. As these techniques rely on the blind introduction of a needle/trocar, they are most suitable for patients who are not obese and have not had previous abdominal surgery, peritonitis or other reasons for suspecting intra-abdominal adhesions. The modified Seldinger technique can be supported with radiological assistance such as ultrasound to assess for visceral slide, which reassures that significant bowel adhesions at the point of insertion are unlikely and helps in determining the depth from the skin to the peritoneum. Fluoroscopic visualization can be used to determine entry into the peritoneum and appropriate positioning of the guidewire and catheter. Percutaneous catheter insertion is widely used in resource-poor countries, enabling patients to receive lifesaving therapy, and has been shown to be associated with excellent outcomes [18][19][20][21].
Open surgical technique
The open surgical technique involves dissection to the peritoneum followed by either blind insertion of the catheter in the direction of the pelvis or through a mini-laparotomy-guided direct visualization of catheter placement in the pelvis. The advantage of this technique over the percutaneous approach is the ability to visualize entering the peritoneum. It is therefore safer for patients who have had previous abdominal surgery or are obese. The advantage over the laparoscopic approach is that it is more cost effective and may be performed by nephrologists and surgeons without laparoscopic skills in a procedure room or at the bedside under local anesthesia with sedation [16,22]. However, this technique does not allow proper visualization of the peritoneal cavity, including confirmation of the pelvic position of the PD catheter and permits only limited adhesiolysis and omentopexy at the point of entry into the peritoneum.
Laparoscopic technique
This technique involves insertion of the catheter into the pelvis under direct vision, resulting in certainty of the position of placement. This technique may be supplemented by adjunctive procedures [23,24] including hernia correction, epiploic appendicectomy, colpopexy, musculofascial tunneling, omentopexy and fixation in the paracolic gutter when the pelvis is not accessible due to adhesions [25]. These advanced techniques have been shown to produce superior outcomes than standard laparoscopic placement [24]. The relatively small incisions and ability to suture port sites allow urgent use of catheters with minimal risk of leakage.
The laparoscopic and image-guided percutaneous techniques require more sophisticated equipment and practitioner skill and are not available everywhere [26]. Image-guided insertion is typically performed under local anesthesia with or without sedation in a radiology department or operating room [27][28][29].
Peritoneoscopic placement
The peritoneoscopic approach is a proprietary laparoscopic-assisted technique of peritoneal catheter placement. The procedure can be conducted in a treatment room under local anesthesia. The peritoneoscope is inserted through a sleeve introduced around a trocar and is used to confirm peritoneal entry and guide catheter placement. Studies have shown comparable or better survival and complication rates with this technique compared with the open surgical method. This technique is practiced preferentially in some locations [30][31][32][33]. Like the percutaneous method, peritoneoscopic insertion is not advisable for patients with obesity and in those with prior peritonitis, multiple abdominal operations or the inability to lie flat.
The Peritoneal Dialysis Outcomes and Practice Patterns Study in collaboration with the International Society for Peritoneal Dialysis (ISPD) performed a survey of five high-income countries for which data were available describing the international variation in PD catheter practices [34]. Table 1 summarizes PD insertion practices by country.
Although organizations such as the ISPD have released best practices for PD catheter insertion [23], how best to scale known best procedural practices to better facilitate PD adoption worldwide remains unclear. One observational study at a regional PD center in the USA showed significant gaps in adherence to ISPD best practices, with 30% of patients not being evaluated for hernias and 20% not being provided follow-up care instructions. A total of 41% of patients developed a complication postoperatively [35].
Despite numerous comparative studies and meta-analyses comparing the various PD catheter insertion techniques, a clear benefit of one technique over another has never been demonstrated, and most studies are underpowered and at risk of bias in multiple domains. As a result, international guidelines state that the choice of technique should be determined by skill and availability [8,23,24,36,37]. Those studies that show benefits of one technique over another are likely more representative of the technical ability and enthusiasm of the practitioner than of the modality per se. For example, in the hands of a skilled laparoscopic surgeon using advanced laparoscopic techniques the complication rate and long-term outcomes are likely excellent, but in a center where the surgeon performing the procedure is not a dedicated access surgeon or where procedures are delegated to untrained or junior surgeons, these results may be significantly worse. Therefore, the insertion technique should be determined by patient, practitioner, and health resource factors, as these are far more likely to impact on outcomes than the specific technique used. Finally, all centers should strive to set up a multidisciplinary access team to select the most appropriate catheter type, insertion technique and insertion and exit site locations for individual patients, as specified in the ISPD guidelines [23].
Views on PD access insertion may differ between nephrologists and surgeons. Although many surgery programs train residents in catheter insertion, this training is often limited and affects a surgeon's willingness and ability to perform the procedure or remedy complications. Some surgeons are perceived as reluctant to respond to referrals for PD catheter insertion in a timely fashion and may delegate it to a junior member of the team [38]. However, this deceptively simple operation can result in complications that are time consuming to resolve. Some surgeons (more often those who are not properly embedded in multidisciplinary PD teams) may also be unaware of the importance of a functional PD access from the patient's perspective and how failure impacts patients' lives. Further, PD access often comprises only a small part of their surgical practice and reimbursement often does not compensate well for the time invested. The low overall use of PD by the health system potentiates the problem by preventing surgeons from gaining enough experience in peritoneal access, perpetuating a vicious cycle of poor outcomes, dissatisfaction with PD and low PD utilization [38]. This point highlights the need for PD access teams consisting of a PD nurse, nephrologist and surgeon, thus ensuring the most appropriate use of skills for patient outcome.
In many regions, nephrologists are increasingly taking on the role of PD access providers [14][15][16][17]22]. Multiple studies have demonstrated catheter insertion by nephrologists is associated with equal or better outcomes compared with programs relying on surgical insertions (Fig. 1) [15,[39][40][41][42]. In part, this may be because of case selection, with difficult insertions being performed by surgeons. In a systematic review, no significant difference in catheter survival was noted between percutaneous placement of PD catheters [8] by nephrologists and surgical insertion, while the peritonitis risk was lower with percutaneous insertion. However, like surgical training, many nephrology fellowship programs do not offer opportunities for training in catheter insertion. The use of more technologically intensive techniques, such as (advanced) laparoscopic and image-guided techniques is relatively limited in most countries and continues to be deprioritized because of a lack of skill and training and low demand. However, these methods expand the pool of patients eligible for PD and improve their outcomes, and it is important to ensure appropriate availability and optimal utilization of these techniques, especially in regional centers of excellence [23,24,29].
PRACTICE BEFORE AND DURING COVID-19
The ISPD has published guidelines on recommended PD catheter insertion techniques and postoperative care [23]. The extent of adherence to these recommendations, however, is not known. Few outcome data are available, such as from the North American PD Catheter Registry [43]. Laparoscopic PD catheter insertion approaches [44][45][46][47] have a reported 5-year patency rate of 96-99% [47]. Similar success has been reported by interventional radiologists utilizing ultrasonographic and fluoroscopic guidance [25]. The Cleveland Clinic published their 10-year experience with laparoscopic PD catheter insertion and reported minimal immediate postoperative complications (0.9%). More than half of patients were undergoing PD or were transplanted in the long term (median follow-up of 4 years); the median survival time for patients on PD was 8 years [48]. Other scenarios where timely PD catheter insertion can be lifesaving include in the emergency room [49], as urgent start [50,51] and in patients with vascular access failure [52]. The Saving Young Lives Project of the ISN has used acute PD as a lifesaving treatment for children with AKI in Africa, Asia, and Latin America [53].
Beginning in March 2020, PD initiation was hampered when several countries suspended the insertion of PD catheters along with other elective surgeries during the COVID-19 pandemic. This seriously impacted the ability of patients with kidney failure to benefit from home dialysis. However, other health systems continued PD catheter placement programs. One study conducted in the Dominican Republic presented data from 946 patients treated across seven centers during the pandemic. Over the course of 3 months, 95 catheters were placed in incident patients, 72 by surgical and 23 by a percutaneous technique; 64 started treatments at home and the remaining patients were in training at the time of the report. The procedure followed the routine protocol applied in the clinics [54]. Similar experience has been reported from Saudi Arabia with ambulatory PD (APD) [55,56]. Furthermore, patients on APD stayed at home, followed up by a telemonitoring system, obviating the need for in-person follow-up visits.
PD can deliver outcomes equivalent to those reported with other dialysis options, including continuous renal replacement therapy and intermittent HD for patients with AKI [57][58][59][60][61][62], as demonstrated during the COVID-19 pandemic [63]. Based on this experience, the US Centers for Medicare & Medicaid Services, Ontario Renal Network and UK Renal Association provided recommendations that PD catheter insertion was lifesaving and/or nonroutine during COVID-19. These policies allowed surgical PD catheter insertions to resume where they had been interrupted [64].
Trends in PD catheter insertion, including E-survey
To better understand the global trends in PD catheter insertion procedures and the impact of COVID-19, an electronic questionnaire was administered to 82 nephrologists from 17 countries (Appendix B). The target audience of this voluntary survey was clinicians involved in starting patients on home dialysis taken from the mailing lists from the International Home Dialysis Roundtable [65] (Table 2). The completion rate for this survey was 38%. Forty-five percent (14/31) of the respondents indicated that surgeons place PD catheters. About 77% (24/31) noted that there were no rules to govern who can place a PD catheter. Additionally, 52% (16/31) of respondents indicated that the procedure currently takes place exclusively in the operating room, although 67.7% (21/31) stated that there are no legal requirements about where the procedure is conducted.
[Displaced table text: Awareness of advantages of PD to patients and the healthcare system; adequate reimbursement to individuals and hospitals for PD catheter insertion in private systems; provision of finance to support an appropriate number of professionals and facilities.]
Specifics about the types of barriers to conducting the procedure varied. However, 52% (16/31) of respondents identified availability of physical space, time and practitioners as important impediments to optimizing PD catheter insertion. Survey respondents were split on the impact of COVID-19 on practitioners' ability to continue the procedure, with 48% (15/31) saying the pandemic increased delays or barriers and 52% (16/31) indicating it did not, although the vast majority of those that did see barriers also noted that they were resolved during the course of the pandemic.
In many cases, the challenges around capacity and staff availability improved as COVID-19-related hospitalizations decreased, with the shift of policy to accommodate patients with COVID-19. According to one respondent from Sweden, hospital staff created regular planning meetings for better coordination and to prioritize access to resources. In the UK, a respondent noted that the facility reorganized operating theaters to be better equipped to optimize catheter insertion opportunities for patients with COVID-19. In one system in Lebanon, the key to continuing PD catheter placement was improving public health protocols and guidance for staff and patients regarding COVID-19 vaccines, testing and lockdowns. Similarly, in the Netherlands, protocols regarding percutaneous PD catheter insertion by fluoroscopic guidance and urgent-start PD were published [66]. Respondents from Canada and the USA indicated that decisions to make catheter placement for dialysis an essential procedure were critical to continuing the practice through the pandemic.
In some cases, operations that were implemented prior to or immediately at the start of the pandemic helped facilitate greater PD catheter insertion access and capacity. In the USA before the pandemic began, one institution created a program for PD units to invite surgeons to 'meet the PD nurses, tour the clinic, see patients in training, see dialysis equipment, and view a brief PowerPoint...regarding medical and economic benefits of PD as renal replacement therapy', which helped improve engagement. Another facility in the USA noted that at the beginning of the pandemic, one key to increasing capacity and moving beyond surgeon placement was to also have interventional radiologists place catheters.
However, a majority of the responses were from Europe and North America, which limits the generalizability of the survey findings.
PD CATHETERS: SUGGESTED ACTIONS FOR IMPROVEMENT
Three factors should be addressed to increase the availability and quality of PD catheter insertion. As shown in Table 3, these are patient and healthcare awareness of PD feasibility, training of clinicians to insert PD catheters and funding to maximize personnel, pathways and physical facilities for PD catheter insertion.
Awareness of PD feasibility
The steps needed to increase the uptake of PD at the patient and system levels have been discussed in depth by Blake et al. [67]. The need for shared decision making with patients and their caregivers at all stages of PD delivery has been reinforced by the recent ISPD prescribing recommendations [68]. The misconception that people with previous abdominal surgery, obesity, aortic aneurysms, or polycystic kidneys are not eligible for PD has been addressed by the 2019 ISPD PD access guideline [23]. With the use of advanced laparoscopic PD catheter insertion techniques, there are few situations where there is an absolute surgical contraindication to PD catheter insertion [69].
PD catheter insertion training
The respective merits of different PD catheter insertion techniques have been discussed in depth in the 2019 ISPD PD access guideline [23]. Although advanced laparoscopic techniques were promoted as the gold standard, the guideline recognizes the many situations where percutaneous catheter insertion is preferred. Having access to more than one method of PD catheter insertion maximizes access to PD. As an example, if access to surgery is limited, as occurred during the COVID-19 pandemic surges in many centers, PD centers with access to percutaneous insertion techniques could continue PD catheter insertions [70]. Therefore training clinicians in all insertion techniques needs to be prioritized to improve the rate-limiting step of PD catheter insertion to enable greater uptake of PD as a dialysis modality.
Enabling and training nurses to insert PD catheters percutaneously will increase access to PD catheters; this is already routinely done in some centers in the UK and Brazil. Training is often done locally on a one-to-one basis supported by national training curricula. Webinars on percutaneous PD catheter insertion for nephrologists have been developed. The ISPD has made E-learning videos on enhancing the technique with image-guided techniques (available at www.pduinir.com). E-learning needs to be followed up with practical hands-on experience. This can be done by setting up links with centers willing to provide training and by organizing workshops at local, national and international meetings, such as through the ISPD and ISN Sister Renal Center Programs. AVATAR (avatar.org.in) is another example of an international initiative that promotes PD catheter insertion through workshops open to international participants.
Training in surgical PD catheter insertion also needs to be readily available [24]. Laparoscopy is generally available in many countries, but to insert PD catheters successfully, surgeons need training in the associated advanced techniques, including omentopexy and adhesiolysis, and how to address the complications of PD catheter insertion. Theoretical training is provided by the ISPD PD University Programme (www.pdusurgeons.com). Hands-on workshops are limited by the expense of skills labs, obtaining the models and other equipment needed. Proctoring sessions can then be provided within specialist centers or by providers in the center of the trainee.
PD catheter insertion pathways
As discussed by Blake et al. [67], PD catheter insertion is potentially a rate-limiting step for starting patients on PD. Along with trained 'inserters', access to operating theaters with appropriate equipment, availability of anesthetists and hospital facilities for pre-and postoperative care are all needed and need to be funded. Furthermore, PD catheter insertion needs to be prioritized so procedures are not cancelled because of other pressures on surgical resources. Provision should be made for urgent surgery, such as for urgent-start PD, repositioning nonfunctioning catheters and to replace catheters for infection reasons. Enabling PD catheter insertion is going to depend on healthcare systems [71]. In private systems, PD catheter insertion prioritization will depend on reimbursement to the hospital and the individual surgeon. In public healthcare systems, awareness of the advantages of PD both for the patient and the healthcare system is needed to enable the necessary pathways to be adequately financed and resourced.
CONCLUSIONS
The COVID-19 pandemic has reignited the interest in PD as a dialysis modality, both for those with chronic kidney failure (to protect them from COVID-19) and for managing the surge of cases with AKI due to COVID-19. Despite the long history of the technique, PD catheter insertion remains neglected and has been identified as a rate-limiting step for starting patients on PD. The challenges range from those related to policies regarding catheter insertion (who can perform it, where it can be done and insufficient prioritization as essential care) to issues related to individuals and hospitals (insufficient training in catheter insertion in nephrology and surgical training programs, lack of availability of operating rooms, and reimbursement issues). The value to the patient of a successfully inserted and well-functioning catheter is often not appreciated. These problems can be overcome and sustainable and scalable programs established, as has been shown in examples of successful programs worldwide. In recent years, leadership by the PD community in PD catheter insertion has allowed rapid scale-ups. A judicious mix of expertise coupled with a referral/collaboration mechanism with expert centers that can undertake catheter insertion in more complex cases will expand the pool of patients and improve choices for patients who need dialysis. There are several areas of improvement, including patient selection, appropriate technique, multidisciplinary teams, appropriate training and shared decision making. Several stakeholders, such as the ISPD and ISN, have developed resources and training tools on enhancing insertion techniques. Awareness of the advantages of PD both for the patient and health system is needed to enable the necessary pathways to be adequately financed and resourced, and prioritizing PD catheter insertion is foundational to the success of PD.
SUPPLEMENTARY DATA
Supplementary data are available at ckj online.
Blood Glucose Control During Lockdown for COVID-19: CGM Metrics in Italian Adults With Type 1 Diabetes
To prevent the spread of COVID-19, lockdown was imposed in many countries with rigid restrictions on all outdoor activities, also limiting attendance at diabetes clinics. In patients with diabetes, lockdown implies lifestyle changes related to physical activity, stress, and nutrition that are likely to adversely affect glycemic control (1). Conversely, during lockdown, individuals with type 1 diabetes (T1D) can be expected to have a more regular lifestyle, more closely respecting time schedules and insulin administration timing.
We evaluated the impact of lockdown on glucose control in 207 Italian adults with T1D attending the Diabetes Outpatient Clinic of the Federico II University Hospital, Naples: 96 females/111 males, mean ± SD age 38.4 ± 12.7 years, 104 on multiple daily insulin injections (MDI), and 103 on insulin pump (continuous subcutaneous insulin infusion [CSII]). Inclusion criteria were continuous glucose monitoring (CGM) for at least 6 months, including a 2-week period with CGM use >70% before (January–February) and during (March–April 2020) lockdown, while maintaining the same device: FreeStyle (n = 130), Guardian 3 (n = 47), Dexcom G6 (n = 18), and Eversense (n = 12). Each participant gave informed consent for the use of her or his data. No participant reported COVID-19 infection during the study.
Time in target range (TIR) (3.9-10.0 mmol/L), time above target range (TAR) (>10.0 mmol/L and >13.9 mmol/L), and time below target range (TBR) (<3.9 mmol/L and <3.0 mmol/L) expressed as percentage of all CGM readings, mean glucose, and glycemic variability (coefficient of variation [CV%]) were analyzed (2). An online questionnaire provided data on physical activity, dietary habits, and sleeping pattern. The primary outcome was change in TIR (%) from prelockdown to lockdown period. Secondary end points were changes in TAR, TBR, and CV%.

Results are shown in Tables 1 and 2. During lockdown, TIR increased significantly (P = 0.002) in the whole cohort and subgroups of sex, age (<35 or ≥35 years), and insulin regimen (MDI or CSII). Glycemic variability (CV%) decreased significantly (P = 0.001), with the change being more relevant in relation to lower age (P < 0.001 vs. ≥35 years), male sex (P < 0.001 vs. female), and MDI use (P < 0.001 vs. CSII). This improvement was due to reduction of hypoglycemia <3.0 mmol/L (P < 0.001), more evident in MDI participants (P = 0.025 vs. CSII), and hyperglycemia >13.9 mmol/L (P = 0.052). During lockdown, participants reduced their physical activity, had a more regular meal pattern with a higher food intake and more frequent snacking, and went to bed later and woke up later. Participants who increased sleep duration (n = 63) showed a greater, although not statistically significant, increase in TIR than those who decreased it (n = 53) (4.1 ± 13.2% vs. 0.17 ± 11.5%, P = 0.088). Changes in physical activity during lockdown were significantly positively associated with changes in glucose CV% (Pearson correlation, r = 0.155, P = 0.038) but not with changes in TIR (r = 0.019, P = 0.800).

This study shows that during lockdown for COVID-19, patients with T1D had improved glucose control indicated by increased TIR, reduced glucose variability, and reduced hyperglycemia and severe hypoglycemia. These findings are somewhat unexpected considering that, because of home confinement, patients had no access to outpatient diabetes clinics (although interacting with their diabetes team by teleconsulting) and less opportunity to perform physical activities. We can hypothesize that the improved glucose control observed in our patients could result from a more regular lifestyle, including reproducible mealtimes and more time for self-care, as also supported by the increased TIR associated with increased sleep duration (3,4). The reduction in physical activity may have also played a role, considering the well-known difficulties to appropriately modulate carbohydrate intake and insulin doses in relation to exercise. In fact, in our study, the reduction in physical activity was associated with reduced glucose variability but unchanged TIR, in line with the evidence that physical activity, while exerting favorable effects on weight, cardiovascular fitness, lipid profile, and psychological wellbeing (5) in people with T1D, does not clearly associate with improved glycemic control.
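To make the metrics above concrete, the short sketch below computes TIR, TAR, TBR, mean glucose and CV% from a list of CGM readings in mmol/L. It is only an illustration of the definitions used in this report, not the software used for the analysis, and the example readings are invented.

```python
from statistics import mean, stdev

def cgm_metrics(readings_mmol_l):
    """Compute consensus CGM metrics from glucose readings (mmol/L).

    TIR/TAR/TBR are expressed as percentages of all readings; CV% is the
    coefficient of variation (SD / mean * 100).
    """
    n = len(readings_mmol_l)

    def pct(condition):
        return 100.0 * sum(condition(g) for g in readings_mmol_l) / n

    return {
        "TIR (3.9-10.0)": pct(lambda g: 3.9 <= g <= 10.0),
        "TAR (>10.0)": pct(lambda g: g > 10.0),
        "TAR (>13.9)": pct(lambda g: g > 13.9),
        "TBR (<3.9)": pct(lambda g: g < 3.9),
        "TBR (<3.0)": pct(lambda g: g < 3.0),
        "mean glucose": mean(readings_mmol_l),
        "CV%": 100.0 * stdev(readings_mmol_l) / mean(readings_mmol_l),
    }

# Example with a handful of purely illustrative readings.
print(cgm_metrics([4.2, 5.6, 7.8, 10.5, 12.1, 3.4, 6.9, 9.0]))
```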
A strength of our study is that COVID-19 restrictions represented an unprecedented, hopefully unique condition in which to evaluate the effects of home confinement in a free-living T1D population. Moreover, CGM cloud platforms provide new metrics of glucose control including glycemic variability. A limitation is that lifestyle data were mainly qualitative. Moreover, the lack of a control group does not allow assignment of the observed changes to lockdown. However, because of the extraordinary condition patients were facing, these changes were very unlikely due to general trends or other unmeasured factors.
In conclusion, in adults with T1D, glucose control improved during lockdown, highlighting the importance of a more stable rhythm of life, including more regular mealtimes. Lifestyle changes in patients with T1D should take into consideration not only diet and physical activity but also a more regular and less stressful life.
Multi-stage chemical heating for instrument-free biosensing
Improving the portability of diagnostic medicine is crucial to alleviating global access-to-care deficiencies. This requires not only designing devices that are small and lightweight but also autonomous and independent of electricity. Here, we present a strategy for conducting automated multi-step diagnostic assays using chemically generated, passively regulated heat. Ligation and polymerization reagents for Rolling Circle Amplification of nucleic acids are separated by melt-able phase-change partitions, thus replacing precise manual reagent additions with automated partition melting. To actuate these barriers and individually initiate the various steps of the reaction, field ration heaters exothermically generate heat in a thermos while fatty acids embedded in a carbonaceous matrix passively buffer the temperature around their melting points. Achieving multi-stage temperature profiles extends the capability of instrument-free diagnostic devices and improves the portability of reaction automation systems built around phase-change partitions.
Access to healthcare remains one of the primary challenges of modern medicine. Technological advances often remain concentrated in wealthy urban centers, out of reach to rural and poor populations in developing and developed nations alike. Alleviating this disparity is not simply a matter of making existing techniques affordable: many traditional assay platforms are incompatible with field-use at any cost. Instead, alternative technologies must be designed for high portability (small, lightweight, not reliant on electricity) and ease-of-use (simple and autonomous, with minimal hands-on steps).
Two primary approaches have emerged to answer this need: chip- and paper-based microfluidics. 1,2 Traditional microfluidic devices that utilize microfabricated fluidic networks are capable of housing numerous reactions with myriad components that proceed in a well-orchestrated pattern, yet the requisite pumps and other peripheral equipment severely impair portability. 3 While paper devices significantly improve the portability of biosensing reactions, the simplicity that makes them easy to use limits the throughput and complexity of assays they can support. 4 To address this gap between miniaturized assays suitable only for laboratory use and those restricted to field use, we recently described the novel approach of employing thermally-reversible barriers to sequester reagents within a common PCR tube. 5 This approach demonstrated the potential to offer the tight reaction control of microfluidics with portability and ease-of-use that parallels paper devices.
These "phase-change partitions" consist of ordinarily-solid purified hydrocarbon waxes that exhibit sharply-defined melting transitions at distinct temperatures. Reagents for each step of a multi-part reaction remain isolated from one another until the respective barrier is melted, at which point a sample solution sinks through the now-molten alkane and mixes with the reagent beneath. This approach allows arbitrarily long reaction stages and at least five distinct reagent zones within a single 200 µL PCR tube. While the temperature range spanned by the alkanes we employed is narrow enough to remain accessible to simple heating devices, the melting transitions are discrete enough to avoid the need for tightly-calibrated temperature control. Indeed, we demonstrated actuation of these phase-change partitions in a simple water-bath as well as a commercial thermocycler.
However, even a temperature-regulated water-bath requires a consistent source of electricity, unavailable in field settings or low-resource clinics. A similar challenge is faced by many isothermal nucleic acid amplification techniques such as LAMP (loop-mediated isothermal amplification) 6,7 and RPA (recombinase-polymerase amplification) 8,9 , which require elevated temperatures to achieve highly sensitive detection of pathogens. Numerous groups have employed chemically-generated heat with thermal buffers to reach the incubation temperature for these reactions. [10][11][12] This is typically achieved using the exothermic hydration of calcium oxide or the galvanic corrosion of MgFe alloys in the presence of saline. 12,13 This latter reaction is extensively employed in military Meal-Ready-to-Eat (MRE) field ration heaters and thus has been thoroughly optimized to rapidly reach boiling temperatures while remaining compact and lightweight.
To achieve prolonged incubation at temperatures amenable to biochemical reactions rather than a brief burst of excessive heat, researchers have employed phase-change materials (PCMs) as thermal buffers in these exothermic systems. [10][11][12][13][14][15][16] These materials surround the reaction compartment so that, as they melt, the temperature of the reaction remains near that of the compound's melting point (Figure S-1, Supplementary Material). 17 Phase-change materials with specified thermal characteristics are commercially available 18 but can also be inexpensively fashioned from materials such as fatty acids and hydrated salts with high latent heats of fusion and desirable melting temperatures. 19,20 Previous reports have described systems which are designed for only a single operating temperature; here, we present the use of MRE heaters with blended PCMs to achieve multi-stage temperature profiles. We leveraged this platform to sequentially actuate two phase-change partitions in a PCR tube, simultaneously providing ideal operating temperatures for the ligation and polymerization stages of Rolling Circle Amplification (RCA). Our results demonstrate the potential for platforms based on phase-change partitions to automate the field use of complex, multi-stage biosensing reactions without the need for electricity.
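As a rough illustration of the thermal-buffering principle described above, the sketch below estimates how long a given mass of PCM can hold a temperature plateau from its latent heat of fusion and an assumed average heat input. The latent heat and heater power values are illustrative assumptions, not measurements from this work.

```python
def buffering_time_minutes(pcm_mass_g, latent_heat_j_per_g, heat_input_w):
    """Time (minutes) a melting PCM can absorb heat before it is fully molten.

    Energy absorbed during melting = mass * latent heat of fusion; dividing by
    the average heat input gives the approximate duration of the plateau.
    """
    energy_j = pcm_mass_g * latent_heat_j_per_g
    return energy_j / heat_input_w / 60.0

# Illustrative values only: ~180 J/g is a typical latent heat for a fatty acid,
# and 1 W is an assumed average rate of heat reaching the PCM compartment.
for mass_g in (10, 20, 40):
    t = buffering_time_minutes(mass_g, latent_heat_j_per_g=180.0, heat_input_w=1.0)
    print(f"{mass_g} g PCM -> roughly {t:.0f} min plateau")
```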
Materials and Methods
Materials. Carbon black (CB) (99.9%), lauric (dodecanoic) acid (LA) (98%), and palmitic (hexadecanoic) acid (PA) (95%) were purchased from Alfa Aesar (Haverhill, MA). Sodium thiosulfate pentahydrate, sodium acetate trihydrate, and carboxymethyl cellulose (CMC) were purchased from Sigma. 0.2 mL high-profile PCR tubes were purchased from USA Scientific (Ocala, FL). RCA reagents were purchased from New England Biolabs. Tahoe Trails 10 oz vacuum insulated double wall stainless steel travel tumblers, 4 x 5 inch sealable tea bags, Kayose natural tea filter bags, and the iTouchless handheld heat bag sealer were purchased from Amazon. 1 mm nylon mesh sieves were purchased from Component Supply Company. The custom-designed thermos insert was 3D printed in ABS with a Zortrax M200; the .stl file can be found in the Supplementary Material. All remaining materials were purchased from MilliporeSigma (Burlington, MA).
Preparation of Encapsulated PCMs. Fatty acids and hydrated salts were melted on a hot plate under magnetic stirring. Carbon black or activated carbon was mixed with melted fatty acids at specified weight ratios. The hydrated salts were mixed first with 5 wt% CB and subsequently with 10 wt% CMC; a small amount of methanol was added to allow CB to mix with the molten salt hydrate. The resulting pastes were spread on aluminum foil to cool, ground in a mortar and pestle, and sieved to obtain granules <1 mm in diameter. To produce systems with multiple temperature stages, multiple PCMs were encapsulated separately then mixed after cooling so as not to impact their individual melting points.
Form-Stability of Encapsulated PCMs. To assess the stability of the encapsulated PCMs against melt leakage, ~4 g composite was placed in Kayose tea bags, sandwiched between paper towels and then aluminum foil, and placed on an 80 °C hotplate. Mass of the composite was taken before and after one hour of incubation.
Differential Scanning Calorimetry (DSC). Approximately 10 mg samples were sealed in aluminum hermetic pans (TA Instruments) using a sample encapsulation press. DSC measurements were made on a TA Instruments DSC Q100. Samples were held isothermal at 0 °C for 5 min, then heated to 100 °C and cooled to 0 °C at a rate of 3 °C min−1, ± 0.20 °C amplitude, with a modulation period of 60 s for two continuous cycles.
Chemical Heating. MRE heaters were used as is to determine the temperature profile of the exothermic reaction between saline and the MgFe alloy in the MREs. A single packet of the MgFe alloy was added to a thermos followed by 100 mL of saline. The reaction was examined using NaCl concentrations of 0, 0.1, 0.3, 0.5, 1.0, and 1.5 wt%. Temperature was recorded using a Sparkfun waterproof temperature sensor (DS18B20) and an Arduino Uno.
Passively Regulated Temperature. MRE heaters were repackaged using heat-sealable tea bags, as described previously. MgFe alloy granules from MREs were distributed into each tea bag in 3.70 g allotments, sealed, then placed in the bottom of the vacuum thermos. The exothermic reaction was initiated by adding 100 mL of 0.1% saline, after which a 3D-printed insert made of ABS was placed in the container and covered with aluminum foil, suspending the PCM above the saline level. A Styrofoam lid provided insulation at the top of the thermos, while a small hole allowed hydrogen gas produced in the chemical reaction to vent. The temperature within the PCM was recorded using a Vernier Labquest Mini and a stainless steel temperature probe.
Phase-Change Partitioned RCA. Phase-change partitioned RCA reactions were heated as described above with an MRE heater initiated by 0.1% saline. Reactions marked L in Figure 4 were removed once the vessel temperature reached 40 °C, and reactions marked F were removed one hour after the vessel temperature exceeded 55 °C.
Gel Electrophoresis. Denaturing polyacrylamide was used to analyze RCA products. Reactions were removed at the respective time and halted by immersion in ice water. Reaction solutions were extracted, mixed with two parts 12 M urea, heated to 95 °C for five minutes, then run on a 15% gel at 500V for 15 minutes in a BioRad mini PROTEAN.
Results
The usability of phase-change partitions for diagnostic reactions in low resource settings heavily depends on being able to easily manipulate the heat source without the use of electricity or additional equipment. Our multi-stage heating device consisted of an off-the-shelf vacuum thermos separated by a 3D-printed insert: a lower chamber contained the MRE alloy packet while the encapsulated PCMs and reaction tubes were housed in an upper chamber above the saline level (Figure 1A). We were able to easily fine tune the temperature profile of the saline-activated MgFe alloy by changing the concentration of salt in the solution (Figure 1B). The saline pack included with the MREs contained 1.5 wt% salt and caused a rapid increase in temperature up to 97 °C. Reducing the salt concentration increased the time it took for the MgFe alloy to reach its maximum temperature while lowering that maximum, reducing the thermal burden needed to be buffered by PCMs.
We investigated two fatty acids (LA, m.p. ~43 °C, and PA, m.p. ~63 °C) and two hydrated salts (sodium thiosulfate pentahydrate, Na2S2O3·5H2O, m.p. ~48 °C; and sodium acetate trihydrate, NaOAc·3H2O, m.p. ~58 °C) for use as PCMs. Ideally, PCMs must be encapsulated to prevent leakage of the melted material during operation and, in the case of hydrated salts, to prevent phase-separation in the molten state; doing so also has the advantage of improving the thermal conductivity of the material. There is an extensive body of literature devoted to such encapsulation techniques for the purposes of solar heating as well as "smart" construction and textile materials, most of which entail either formation of core-shell microparticles or distribution of the PCM within a porous matrix. 21 Here, we chose carbon black (CB) as an encapsulant for fatty acids due to its affordability, ease of encapsulation, and thermal-conduction properties. The fatty acid was melted, mixed rapidly with CB to penetrate the porous matrix, cooled, ground, and sieved (Figure 2A). Upon subsequent re-melting, surface tension caused the molten fatty acid to remain entrapped within CB pores. This composite exhibited minimal bulk leakage at elevated temperatures when the CB mass fraction was 20% or greater (Figure 2B); curiously, CB provided greater form-stability than activated carbon, despite the latter's nominally higher surface area to volume ratio and prominent position in the PCM literature. 19,22 For hydrated salts, 5% CB with 10% CMC achieved adequate form-stability. 22

We used differential scanning calorimetry (DSC) to investigate the thermal properties of our encapsulated PCMs. As shown in Figure 2C and Figure S-2 (Supplementary Material), encapsulation resulted in minimal change in PCM melting point (defined as the temperature at maximal specific heat capacity), implying no chemical interaction between core and matrix materials. We also examined a mixture of encapsulated LA and encapsulated PA, which displayed significantly lower melting points when melted a second time. This suggests that the molten fatty acids migrate between the carbon black particles, mixing with one another and mutually depressing their respective melting points.
This mixture of encapsulated fatty acids provided an adequate thermal buffer to produce a multi-stage temperature profile from an MRE heater. By combining 20 g encapsulated PA with 20 g encapsulated LA, the temperature profile generated by an MRE heater with 0.1% saline was successfully modulated to exhibit an approximately 1 hour hold between 30 and 40 °C and a greater than 1 hour hold between 55 and 65 °C (Figure 3A). This temperature profile allowed well-controlled actuation of phase-change partitions, demonstrated by stepping a pH indicator solution through sequential eicosane and tetracosane barriers to mix with various buffers (Figure 3B). Combinations of encapsulated hydrated salt also produced multiple temperature stages (Figure S-3, Supplementary Material).
The lowered melting points of the encapsulated fatty acid mixture provided ideal temperature regimes to achieve Rolling Circle Amplification. RCA is a two-step method for DNA detection: a template sequence is first ligated into a circle, then a complementary trigger sequence is extended by a polymerase to continuously replicate the template. 23,24 While each stage requires 30-60 minutes at only a single temperature, the ligase enzyme is most active between 30 and 40 °C and the fastest polymerase enzymes are active between 55 and 65 °C; furthermore, the two steps must be performed separately, since premature extension of the trigger sequence along an un-circularized template prevents ligation and continuous amplification.
We constructed a partitioned RCA reaction by placing a dumbbell-forming template DNA sequence above a layer of octadecane (m.p. 30 °C), followed by a buffer containing ligase, a layer of tetracosane (m.p. 52 °C), and finally a buffer containing trigger DNA and polymerase ( Figure 4A). Six such reactions were run in parallel in the MRE-PCM system described above. Three were removed once the thermos reached 40 °C and the remaining three were incubated further until the thermos had spent an hour above 55 °C, during which time the temperature never exceeded 65 °C. As demonstrated by gel electrophoresis (Figure 4B), both ligation and polymerization proceeded efficiently; furthermore, the trigger sequence is not present in the reactions incubated only until 40 °C, indicating that the phase-change partitions completely sequestered the various reaction components.
Conclusion
We have demonstrated the electricity-free automation of a multi-step biosensing reaction. The phase-change partition platform reported previously enabled stable separation of reactants with thermally-reversible alkane barriers; the current work provides a system capable of actuating these partitions in an automated, field-compatible manner. Tempering the saline concentration added to MRE heater granules metered the rate of the accompanying exothermic reaction, while phase-change materials buffered the reaction temperature within multiple sequential ranges amenable to biochemical reactions. Encapsulating the PCMs within porous matrices prevented bulk leakage, enabling re-use. When used alone, encapsulated LA produced a temperature hold of approximately 45 °C, but when combined with encapsulated PA, the first temperature hold occurred at a temperature regime more amenable to T4 DNA Ligase, between 30 and 40 °C. The melting point of PA was similarly depressed, remaining within an optimal region for Bst polymerase. Additionally, future investigations should explore numerical approaches to quantitatively model PCM-MRE temperature profiles and accelerate the development cycle of noninstrumented diagnostic assays.

Figure 4: (A) The phase-change partitioned RCA assay is initiated by melting of an octadecane layer once the tube exceeds 30 °C, causing ligase enzyme to ligate the template DNA into a circle. The amplification stage is initiated by melting of a tetracosane layer once the tube exceeds 52 °C, at which point the polymerase extends a trigger sequence to continuously replicate the template. (B) Denaturing acrylamide gel electrophoresis reveals successful ligation in all reactions, and successful generation of amplicon in those incubated for the full duration (F). The absence of trigger DNA in reactions incubated only for the ligase portion (L) confirms the integrity of the tetracosane barrier below its melting point. Note that the apparent difference in circular template band intensity between Ligase-only and Full reactions is due to the further dilution by the polymerase solution in the latter.
We successfully actuated a phase-change partition system with passively-buffered chemical heating to demonstrate the capacity of this system to automate nucleic acid amplification. This report extends the compatibility of the phase-change partition platform to include not only well-equipped laboratories (via thermocyclers) and generic clinics (via water baths), but also resource-poor settings and field operation (via multi-stage PCM-MRE heaters). The key advantage of such broad compatibility is that it enables a common form factor to be employed in diverse settings: the same assay can be given to a central lab technician and a field nurse. Our results demonstrate that phase-change partitions have the potential to bridge the current gap between centralized and remote diagnostic platforms. Now further developments are necessary to adapt a wide range of clinical assays to this system and support efforts to close the urban-rural divide that persists in 21 st century medicine.
Supplementary Material
Fatty Acid Characterization. Each well of a PCR strip was filled with 200 µl of a pure fatty acid. A thermocouple was placed in each well and the fatty acid solidified around the thermocouple. The melting profile of each fatty acid was then monitored as the temperature increased to 80 °C at a constant rate of 0.1 °C/min in the thermocycler. Data were fit with a smoothing spline.
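For illustration, a minimal smoothing-spline fit of a melting profile is sketched below using SciPy; the data are synthetic placeholders standing in for a recorded thermocouple trace, and the smoothing factor is an arbitrary assumption rather than the value used in this work.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Placeholder data standing in for a recorded melting profile:
# temperature ramp (x, in deg C) and a noisy thermocouple-like response (y).
temperature = np.linspace(25, 80, 200)
response = np.tanh((temperature - 43.0) / 2.0) + np.random.normal(0, 0.05, temperature.size)

# Smoothing spline; the smoothing factor s is a tunable assumption.
spline = UnivariateSpline(temperature, response, s=0.5)
smoothed = spline(temperature)

# The steepest point of the smoothed curve approximates the melting transition.
melting_estimate = temperature[np.argmax(np.gradient(smoothed, temperature))]
print(f"Estimated melting transition near {melting_estimate:.1f} deg C")
```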
FoodRepo: An Open Food Repository of Barcoded Food Products
In the past decade, digital technologies have started to profoundly influence healthcare systems. Digital self-tracking has facilitated more precise epidemiological studies, and in the field of nutritional epidemiology, mobile apps have the potential to alleviate a significant part of the journaling burden by, for example, allowing users to record their food intake via a simple scan of packaged products barcodes. Such studies thus rely on databases of commercialized products, their barcodes, ingredients, and nutritional values, which are not yet openly available with sufficient geographical and product coverage. In this paper, we present FoodRepo (https://www.foodrepo.org), an open food repository of barcoded food items, whose database is programmatically accessible through an application programming interface (API). Furthermore, an open source license gives the appropriate rights to anyone to share and reuse FoodRepo data, including for commercial purposes. With currently more than 21,000 items available on the Swiss market, our database represents a solid starting point for large-scale studies in the field of digital nutrition, with the aim to lead to a better understanding of the intricate connections between diets and health in general, and metabolic disorders in particular.
INTRODUCTION
Metabolic disorders, such as diabetes or obesity, have become a major public health concern, with increasingly large parts of the global population affected (1,2). Nutritional epidemiologists hope to better understand the underlying causes, the potential treatments and prevention strategies by analyzing population and individual patterns through studies that generally rely on surveying dietary habits. Traditional food-intake survey methods are based on questionnaires filled by participants at a given frequency. The frequency of diet records is an important factor contributing to the accuracy of the study (3). Multiple-day diet records might provide good accuracy when not based on memory, but require strong motivation and time commitment by the participants. Approaches like multiple/single 24-h recalls (involving a specialized interviewer performing surveys in person or on the phone with the participants) require less engagement, but pose issues with missing data as they rely on short-term memory. Finally, so-called Food Frequency Questionnaires, where participants are asked to indicate the frequency of intake of certain foods over long periods of time (typically 1 year), demand minimal participants' commitment, therefore allowing for large cohort studies on long-term dietary habits. However, the likelihood of missing or incorrect data increases as they count on participants' long-term memory. Overall, self-reported dietary data present biases which limit their applications, especially when they heavily rely on participants' memory (4). Such limitations, which should be properly addressed in further epidemiological studies, may be overcome with more advanced recording methodologies such as dietary biomarkers and digital technologies (5).
Recent technological advances, and in particular the emergence and almost complete market penetration of smartphones, have offered interesting surveying alternatives. In particular, mobile phones have been successfully deployed in several food-related studies (6), for example using food photography (7)(8)(9)(10)(11)(12). Other research has also explored the possibility of recording dietary habits by asking participants to scan the barcodes of their consumed food (13,14). Although further investigations are required to assess self-reporting biases, these advances in nutritional research have triggered the release of mobile apps oriented mainly toward diabetes and weight-loss self-management (15)(16)(17)(18)(19), showing the willingness and interest of users to monitor their food intake if it provides potential health benefits.
The further expansion of self-monitoring for research and medical purposes relies on comprehensive and continuously updated food databases. A few databases of barcoded products already exist, for example Open Food Facts (20) or the USDA Food Composition Databases (21). While they each have their strengths, not all of them are openly accessible, they often have limited product coverage, and they are often not regularly updated. For Switzerland, we did not find any database whose product coverage was sufficiently high, where the data was completely open, and easily accessible through an Application Programming Interface (API). The last point was particularly important to us, as APIs are necessary for third parties to dynamically use the data in their products and services. Our approach was therefore to build an openly accessible database of barcoded food products with sufficiently high coverage, accessible through a stable API. Rather than focusing on a wide geographic range, we focused on a small country (Switzerland) in order to obtain the necessary coverage. The focus on the Swiss market further benefits from the need to support multiple languages from the beginning, thus making the system readily expandable to other countries, which we are now planning to do.
Here, we present this system, which we call FoodRepo (https:// www.foodrepo.org), an openly accessible database of barcoded food products, and we describe the data-acquisition framework, its quality control and maintenance. Here, the word repository is meant to be understood as a data repository, where the community can deposit an increasing number of datapoints on food products. The growing community around FoodRepo and the validation of new products make our database robust, scalable and self-sustainable in the long run. Currently, the FoodRepo database mostly holds products sold in Switzerland, from the main grocery stores in the country. Its international expansion is under development.
Any item in the database is accessible through the FoodRepo website (for an example of products contained in the FoodRepo database, please see Figure 1A) or via our API, described in section Usage Notes. The CC-BY-4 license under which our database is released will allow its exploitation by different type of users, from academic researchers to commercial partners. For instance, a Swiss consumers association is using FoodRepo data in their NutriScan mobile app (22) to make the food package information more accessible, and to provide their users with an overall nutritional score.
Beyond this specific example, the FoodRepo database opens the way for promising research opportunities in the field of digital epidemiology and personalized nutrition. Notably, we foresee that, through dietary live-tracking, this database can support studies which combine other recent technological developments and new findings in our understanding of the human metabolism. For example, phone-connected devices for continuous monitoring of blood glucose levels have recently been made available to diabetic patients (23,24), as well as numerous direct-to-consumer devices to estimate glucose levels have appeared on the market. A plethora of other wireless sensors are now also available to record various physiological parameters such as heart rate or blood pressure, marking a new era of "high-throughput human phenotyping" (25). Studies that would simultaneously track participants' parameters, food intake, glycemic response and physical activity might provide detailed insights on the variability of individual metabolic responses. Interestingly, one of the factors which has recently been found to account for a large part of this variability is microbiota (26)(27)(28)(29)(30). Large-scale testing of these hypotheses through self-tracking could contribute to the assessment of the complex metabolic response of the human body to different energy sources. This requires detailed records of food intake that includes nutritional information as well as eating times (31) and food portion sizes (32)(33)(34), all challenges that FoodRepo may help to overcome.
However, we highlight an important limitation of all food databases. Generally, the curators of such repositories cannot ensure the validity of the data reported by the producers on the nutrition facts labels. It is indeed well known in the literature that there might be large discrepancies between the reported nutrients and the actual food content, due to different factors, such as food pre-processing or the different industry standards (35)(36)(37)(38)(39)(40). Therefore, all studies using databases such as the one presented here would do well to assess the validity of such data and ideally quantify the reporting errors, especially when using the reported data on nutritional values.
Analyses of the database evolution will give interesting indication on the dietary trends and on the overall modification of the nutritive quality of packaged food. Although the database itself does not inform on the buying frequency, the continuous introduction of specific products in the market and thus in the database can potentially indicate how retailers react to customer demands and changing dietary habits.
METHODS
The database building and maintenance process relies on the following steps: (i) collection of product pictures from local retailers, (ii) data extraction from the pictures, (iii) validation of the extracted data, and (iv) permanent storage in the database (Figure 2). For the initial build of the database, we designed a specific pipeline (bootstrap workflow, Figure 2A), which allowed us to validate the first 20,000 food products in a few months. Given the dynamic nature of our data and the cost of the bootstrap workflow, we designed a second pipeline (currently under development) which relies on the growing FoodRepo community. This workflow (community-based, Figure 2B) allows us to keep up with the new and seasonal products introduced to the market by the retail shops, as well as to ensure the scalability and self-sustainability of FoodRepo in the long run.
FIGURE 1 | (A) Screenshot from the webpage of a product on the FoodRepo website. (B) Schematic representation of the pipeline behind our API. When a user or an application (left column) sends a call to the API, the request is handled by the server that hosts the API (middle column). This server then sends a query to the server which hosts the FoodRepo database (right column), where the query is handled by the Elastic Search engine. The data is returned to the API server, which performs final formatting before giving it back to the user or the application. (C) Distribution of API response times, color-coded according to the different sections of the back-end pipeline, as shown in (B). In green (main plot and inset), the response times of the Elastic Search server to the application server; in blue, the full time needed for a user to have the data after a call to our API.

The bootstrap workflow (Figure 2A) consists of 3 main steps. The first step entailed a massive manual data collection from three large grocery stores in Switzerland upon approval from the shops (specifically Migros, Coop, and Lidl). We hired students to take pictures of all barcoded food items in retail shops located in the Lausanne area. To facilitate the data collection, we specifically designed a simple phone app with which students could scan the products' barcode and take pictures of the front and back of the package, the product's name, ingredients list, and nutrition facts. These pictures were then automatically uploaded to the database. At the end of this step, students had collected on average 4.4 pictures per item. The second step focused on the extraction of information contained in the pictures. Due to the presence of multi-language ingredients and the often wrinkled surfaces of item packaging, Optical Character Recognition (OCR) systems could not achieve a reliable accuracy. We therefore opted for a crowd-sourced solution and in particular we decided to recruit workers on Amazon Mechanical Turk (41) (AMT). AMT is a platform connecting requesters to workers, the latter being financially compensated to achieve tasks requiring human intelligence (HITs, Human Intelligence Tasks). Here, we designed a graphical user interface (GUI) allowing workers to transcribe the text they could read from product pictures. Specifically, the GUI presented text boxes where AMT workers
provided the product name, nutritional values (in a table format) and ingredients, in every language present on the label (German and/or French for almost all items; Italian and/or English in addition for some products). Three different HITs were set up: one for nutrients, one for product name and one for ingredients. For the last two, we set up qualification rounds for AMT workers as their transcription involved some language skills. AMT workers could choose to either enter from scratch the information they saw on the pictures, or to approve/modify the suggestions given by an OCR (42) system. At the end of the second step, all annotated products were uploaded into the database, flagged as ready for validation. The third step was thus dedicated to data validation, which was based on extensive manual checking by the FoodRepo team, and was additionally informed by manual reports from visitors to the FoodRepo website and with error-detection analyses of nutritional values. Such online reports are encouraged by the presence of a "report an issue" button on each product web-page, which prompts a visitor to file an issue when spotting a potential error. Details about the error-detection analyses are given in the Technical Validation section. Before the final validation of the data, the FoodRepo team as well as students manually checked all products thoroughly.
The community-based workflow (Figure 2B) is similar to the bootstrap workflow, but instead of counting on AMT workers, it relies on the growing FoodRepo community. As new products become available in retail shops, FoodRepo users can submit them by uploading the corresponding package pictures, using the FoodRepo smartphone app. Currently, the information extraction is still performed by the FoodRepo team, but additional features are being implemented in the app, which will allow users to directly type the product details contained on the package. Before user-provided information is permanently stored in the FoodRepo database, consistent entries will need to be submitted by at least three different FoodRepo users. If such a consensus is not reached after seven independent submissions (i.e., there are still fewer than three consistent entries), the item will be manually analyzed by the FoodRepo team for definitive validation and inclusion into the database. This procedure will ensure minimal intervention from our team, while still guaranteeing the reliability of the data. The FoodRepo team is currently fostering the development of an active community through which the continuity of FoodRepo is assured, and which will likely accelerate the emergence of independent uses of the database, by both public and private partners.
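As a rough illustration of the consensus rule described above (at least three consistent entries, manual review after seven submissions without consensus), the following Python sketch shows one possible way such a check could be implemented; the function name and the data representation are our own and do not come from the FoodRepo codebase.

```python
from collections import Counter

def consensus_status(entries, required_matches=3, max_submissions=7):
    """entries: list of user-submitted values for one product field.
    Returns 'accepted', 'needs_manual_review', or 'pending'."""
    counts = Counter(entries)
    if counts and counts.most_common(1)[0][1] >= required_matches:
        return "accepted"             # at least three users agree
    if len(entries) >= max_submissions:
        return "needs_manual_review"  # no consensus after seven submissions
    return "pending"                  # wait for more submissions

print(consensus_status(["250 g", "250 g", "250g"]))           # pending (only 2 exact matches)
print(consensus_status(["250 g", "250 g", "250 g", "25 g"]))  # accepted
```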
DATA RECORDS
Product records are stored in the database (see Table 1). Programmatic access to the database is provided by an API, described in the section Usage Notes. As an example of the stored fields, the Pictures field contains the URL to the front picture of the sample product (e.g., https://goo.gl/PyjjNa). While here we only provide the link to the front image of the product, an API call would provide the links to all pictures available for the requested products. A complete description of the fields provided by the API is available in the API documentation, on the project's GitHub repository.
TECHNICAL VALIDATION
As described in the Methods section, during the bootstrap stage (Figure 2A) the final validation was performed manually by the FoodRepo team, while in the community workflow (Figure 2B), the accuracy of the data is ensured by the consensus test (the FoodRepo team intervenes only if fewer than three matches are achieved after the uploads of the same product by seven different users). We highlight here that FoodRepo strictly reflects the information printed on product packages, even when suspicious values are present on the labels. All validation processes have thus been set up to detect transcription errors. Within this rationale, computational analyses were implemented for the detection of outliers, in particular regarding the nutritional values. These tests reflect basic constraints, such as the mass upper-limit

p + f + c <= 100,   (1)

where p, f, c are respectively the product's protein, fat and carbohydrate concentrations expressed in grams per 100 g of product. From Equation (1), one can also derive other linear inequalities for a single nutrient or couples of nutrients, namely p + f <= 100, p + c <= 100, and c + f <= 100. These simple tests allowed us to detect transcription errors in earlier versions of the database, as illustrated by the outliers in Figure 3A, which shows the distribution of products in the fat-carbohydrates space together with the joint mass boundary. Similarly, other typos could be spotted by checking that the concentration of a subclass of nutrient is smaller than the one of the parent class. This is the case, for instance, of sugars vs. carbohydrates, or saturated fat vs. fat, shown in Figure 3B.
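A minimal sketch of how such consistency checks could be expressed in code is given below; this is our own illustration, not the FoodRepo implementation, and the argument names are assumptions.

```python
def mass_consistency_issues(p, f, c, sugars=None, sat_fat=None):
    """Flag nutrition facts that violate the basic mass constraints (per 100 g)."""
    issues = []
    if p + f + c > 100:                    # Equation (1): joint mass upper-limit
        issues.append("protein + fat + carbohydrates exceed 100 g/100 g")
    if sugars is not None and sugars > c:  # a subclass must not exceed its parent class
        issues.append("sugars exceed total carbohydrates")
    if sat_fat is not None and sat_fat > f:
        issues.append("saturated fat exceeds total fat")
    return issues

print(mass_consistency_issues(p=25.0, f=40.0, c=55.0, sugars=60.0))
```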
Another simple relation that helps check products' nutrition facts can be derived from the standard approximation of the energy density based on nutrient composition (45),

E ≈ 4p + 9f + 4c,   (2)

where the product's energy content E is expressed in kCal/100 g. Combining Equations (1) and (2) provides upper and lower boundaries for the energy content (see for example Figure 3C). In this case, however, not all dots that fall outside the boundaries were due to typos in transcription. Indeed, the approximation in Equation (2) does not take into account the different contribution to energy of complex carbohydrates such as polyols, which account for less than 4 kCal/g. This is why products such as candies and chewing gums would fall below the energy boundaries.
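The corresponding energy check could look roughly as follows. This is an illustrative sketch only: the tolerance parameter is chosen arbitrarily by us, and the coarse 4-9 kCal/g bounds are one possible way of combining Equations (1) and (2), not necessarily the exact rule used by FoodRepo.

```python
def energy_outlier(E, p, f, c, rel_tol=0.25):
    """Compare the labelled energy E (kCal/100 g) with the estimate of Equation (2)
    and with coarse bounds implied by the macronutrient masses."""
    estimate = 4 * p + 9 * f + 4 * c               # Equation (2)
    total_mass = p + f + c
    lower, upper = 4 * total_mass, 9 * total_mass  # each gram contributes roughly 4-9 kCal
    if not (lower <= E <= upper):
        return True
    return abs(E - estimate) > rel_tol * max(estimate, 1.0)

print(energy_outlier(E=520, p=7, f=28, c=57))  # plausible chocolate-like product: False
print(energy_outlier(E=150, p=7, f=28, c=57))  # flagged: energy too low for the macros
```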
USAGE NOTES
In order to facilitate access to the database, we built an openly accessible API. Any terminal user, including third-party apps or services, can send API requests to retrieve specific data. The API pipeline is illustrated in Figure 1B. Users' requests are handled on an application server, where an Elastic Search (ES) application handles the queries on another cloud computing service, based in Ireland. The ES response is then returned to the user after JSON formatting and compression (on demand). We checked that handling the request between the two servers does not critically compromise the total user-response time. We ran a series of single-page API calls, every 6 h, over a week, in order to measure the full response time and the application server response time. We observed that the latter was consistently fast across all experiments (in the range of 20-50 ms) and that the bottleneck was rather the transmission between the terminal user and the application server (the average full response time was about 250 ms; see Figure 1C). For a quick introduction to the API endpoints, users are welcome to try them out on the API Playground page (46). Furthermore, on the project's GitHub repository, one can also find usage cases (47) in Python, Ruby, Curl and JavaScript, as well as examples of complex queries which include fuzzy searches (48). When fetching a large amount of data, we suggest using the option of compressed data and the possibility to include/exclude specific fields of each product [see for details the API documentation (46)]. In this way, one could reduce the response payload size by up to a factor of 10.
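As a concrete illustration of such an API call, the short Python snippet below requests one page of products with gzip compression enabled. The endpoint path, the header used for the API key, and the query parameter names are assumptions on our part and should be checked against the official API documentation and Playground.

```python
import requests

API_BASE = "https://www.foodrepo.org/api/v3"  # assumed base URL; verify in the API docs
API_KEY = "YOUR_API_KEY"                      # personal key obtained from FoodRepo

def fetch_products(page=1, page_size=20):
    headers = {
        "Authorization": f"Token token={API_KEY}",  # assumed auth header format
        "Accept-Encoding": "gzip",                  # ask for compressed responses
    }
    params = {"page[number]": page, "page[size]": page_size}  # assumed pagination params
    resp = requests.get(f"{API_BASE}/products", headers=headers, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()

data = fetch_products()
for product in data.get("data", []):
    print(product.get("id"), product.get("name_translations"))
```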
We remind readers that all contents (other than computer software) made available by FoodRepo on its websites, apps or services are licensed under the Creative Commons Attribution 4.0 International License. We would, however, like to highlight that product images may contain copyrighted data such as brand logos.
ACKNOWLEDGMENTS
We are grateful to Migros, Coop, and Lidl for access to their retail shops.
NOMENCLATURE
• API: Application Programming Interface - a set of tools and methods that allow different types of software to communicate. The FoodRepo API allows other applications to get and use the data.
• CC-BY-4: Creative-Commons public license, with the "Attribution" term. It implies that anyone is free to share and transform the content of FoodRepo, even for commercial purposes, with the obligation to properly give credit to FoodRepo, and to display any modification without claiming direct endorsement from FoodRepo. For a detailed description, see the license text at https://creativecommons.org/licenses/by/4.0/
• OCR: Optical Character Recognition - tools that allow for automatic conversion of text contained in images to machine-readable formats.
A Composite Indicator to Assess Sustainability of Agriculture in European Union Countries
Few studies have been conducted to assess agricultural sustainability in the European Union (EU), and all of them fail to provide a holistic view of sustainability in a relevant temporal horizon that could effectively support the design of policies. In this paper, a composite indicator is constructed based on the geometric aggregation of 12 basic indicators measured yearly in the period 2004–2020 (17 years) on all EU countries plus the United Kingdom, with weights determined endogenously according to the Benefit of Doubt (BoD) approach. Our composite indicator has a two-level hierarchical structure accounting for the contributions of the economic, social and environmental dimensions of sustainability. In our results, Bulgaria, Croatia, Lithuania and Poland are the countries with the strongest growth rate of sustainability, while countries reaching the 90th percentile of the score in sustainability include Austria, Czechia, Estonia, France, Germany, Hungary, Latvia, Lithuania, Slovakia and Sweden. Overall, the social and the environmental dimensions have similar levels, while the level of the economic dimension is definitely higher. Interestingly, several countries with a high level of sustainability are characterized by a decline of the economic dimension, including Austria, Finland, Italy, Latvia and Slovakia. The reliability of our composite indicator is supported by the substantial agreement of sustainability scores with subsidies attributed by the Common Agricultural Policy (CAP). Therefore, our proposal represents a valuable resource not only to monitor the progress of EU member countries towards sustainability objectives, but also to refine the scheme for the attribution of CAP subsidies in order to stimulate specific sustainable dimensions.
Introduction
Nowadays, the agricultural sector is called upon to face, at the front line, the challenge of satisfying the food demand of a rapidly increasing world population. For this reason, the sustainability of agriculture has become a widespread theme among international decision makers. Specifically, the Food and Agriculture Organization (FAO) has outlined five principles of sustainable agriculture: (i) increase of productivity, employment and value addition in food systems, (ii) protection and enhancement of natural resources, (iii) improvement of livelihoods and promotion of inclusive economic growth, (iv) enhancement of the resilience of people, communities and ecosystems, (v) adaptation of governance to new challenges (Food and Agriculture Organization 2014). Also, the concept of agricultural sustainability has been integrated into the objectives of the Common Agricultural Policy (CAP) of the European Union (EU), and has found a significant place in the EU scientific research program Horizon 2020 (European Commission 2011) and in the 2030 agenda for the Sustainable Development Goals (SDGs) of the United Nations (UN General Assembly 2015). However, despite the widely acknowledged importance of sustainable agriculture for economic systems around the world, consensus on how agricultural sustainability should be defined, pursued and measured is still far from being achieved (Zhang et al. 2021).
Several different tools have been developed to assess sustainability of agriculture in a holistic view, including RISE (Response-Inducing Sustainability Evaluation, Hani et al. 2003), SAFE (Sustainability Assessment of Farming and the Environment, Van Cauwenbergh et al. 2007), IDEA (Indicateur de Durabilité des Exploitations Agricoles, Zahm et al. 2008), SEAMLESS (System for Environmental and Agricultural Modelling Linking European Science and Society, Van Ittersuma et al. 2008), SAFA (Sustainability Assessment of Food and Agriculture, Food and Agriculture Organization 2013), PG (Public Goods, Gerrard et al. 2012), and the COSA method (Committee on Sustainability Assessment 2020). In these tools, agricultural sustainability is conceptualized into three main pillars (sustainable dimensions), which are measured through sets of indicators: (i) the economic dimension, pertaining to the efficient production of goods and services, (ii) the social dimension, concerning the improvement of conditions in rural areas, and (iii) the environmental dimension, referring to the management of natural resources.
Indicators are widely used in assessment tasks because they provide a quantitative and simplified view of specific phenomena. Therefore, in principle, even the assessment of sustainability may benefit from their employment. Unfortunately, indicators involve significant difficulties in the selection and aggregation processes, which are reflected by the wide variability in the methodology across existing tools for the assessment of agricultural sustainability (De Olde et al. 2016;Chopin et al. 2021). Several guidelines for the selection of indicators have been proposed in the literature, with emphasis on the principles of parsimony, sufficiency and availability (Latruffe et al. 2016;Talukder et al. 2020). However, existing assessment tools are still a long way from converging towards a common core set of indicators, raising doubts about the achievement of standardized tools with general validity and applicability (De Olde et al. 2016).
The selection of indicators is not the only step determining the validity of assessment tasks. In fact, once indicators are selected, the underlying information should be extracted, interpreted and communicated in an easily intelligible form to policy makers. Currently, there is no consensus among existing assessment tools on whether the indicators should be aggregated or considered individually (Chopin et al. 2021). Aggregation of indicators into composite indicators (Organisation for Economic Co-Operation and Development 2008) is an appealing approach as it provides one or a few synthetic measures of sustainability that ease comparisons across different systems. However, the construction of composite indicators is subject to several arbitrary choices that may influence the final results, especially the aggregation method and the weighting scheme (Terzi et al. 2021). In order to reconcile the two approaches, some authors have suggested employing both individual and aggregated indicators, where the former are used to analyse each system and the latter to make comparisons among systems (Bockstaller et al. 2008).
Issues of agricultural sustainability may differ across the various geographical scales, i.e., farms, regions and countries. Therefore, in order to achieve a holistic view of agricultural sustainability, the integration of different geographical scales is just as important as the integration of sustainable dimensions. Many policies, management programs and assessments targeting the conservation of ecosystems and well-being fail because they do not properly address such integration (Millennium Ecosystem Assessment 2005). Also, the temporal attribute plays an important role, as it makes it possible to assess not only the level, but also the trend of sustainability.
In this paper, we focus on the assessment of agricultural sustainability in the EU. In our review of the literature, we found a total of twelve studies: five conducted at farm level and seven conducted at country level. Surprisingly, all of these studies fail to provide a holistic view of agricultural sustainability in a relevant temporal horizon that could effectively support the design of policies. On one hand, all studies conducted at farm level cover all the three sustainable dimensions but rely on cross-sectional data. On the other hand, among studies conducted at country level, some cover only a subset of the sustainable dimensions, others focus on a small set of countries, still others rely on cross-sectional data. Studies conducted at farm level have the opportunity to directly adopt existing assessment methods, especially for what concerns the selection of indicators, and data can be collected through direct interviews. However, the results are difficult to generalize at higher geographical scales, thus they have limited relevance to policy makers. Instead, studies conducted at country level provide a more general information which is suited to international policy making, but computing the indicators suggested by existing assessment tools may be impracticable due to scarce availability, or even unavailability, of data at the national scale (see the set of indicators proposed by Talukder et al. 2020 based on the existing literature). Clearly, whichever the geographical scale, the problem of data unavailability is further emphasized when the temporal evolution is considered, thus justifying the small number of studies based on longitudinal data at both farm and country level.
This paper aims to fill the gap in existing empirical studies assessing sustainability of EU agriculture by achieving a holistic view in a relevant temporal horizon. A composite indicator is constructed based on the geometric aggregation of 12 basic indicators measured yearly in the period 2004-2020 (17 years) on all EU countries plus the United Kingdom, with weights determined endogenously according to the Benefit of Doubt (BoD) approach (Cherchye et al. 2007; Zhou et al. 2010; Vidoli et al. 2015). Our composite indicator has a two-level hierarchical structure accounting for the contributions of the three sustainable dimensions. Geometric aggregation allows a small degree of compensation to reflect the fact that sustainable development is achieved only when all or most individual sustainability goals are pursued, while the BoD weighting scheme, which has never been applied in the assessment of agricultural sustainability in the EU, makes it possible to infer the relative importance of each basic indicator and sustainable dimension in the achievement of sustainability without relying on subjective opinions. This paper is structured as follows. In Sect. 2, the literature on the assessment of agricultural sustainability in the EU is reviewed. In Sect. 3, the selection of indicators and the data collection process are described. In Sect. 4, the methodology employed in the construction of the composite indicator is detailed. In Sect. 5, the results are presented and discussed, including the comparison with the attribution of CAP subsidies and the analysis of sensitivity to different aggregation methods and weighting schemes. Section 6 contains concluding remarks and proposals for future work.
Literature Review
The characteristics of existing studies assessing agricultural sustainability in the European Union (EU) are briefly reviewed in Table 1. We see that studies conducted at farm level (Gómez-Limón and Sanchez-Fernandez 2010;Majewski 2013;Ryan et al. 2016;Gaviglio et al. 2017) cover all the three sustainable dimensions, but they all rely on cross-sectional data and thus they disregard the temporal evolution of sustainability. Instead, among studies conducted at country level, some cover only a subset of the sustainable dimensions (Cristache et al. 2018;Czyzewski et al. 2020), others focus on a small set of countries (Radovanović and Lior 2017;Mili and Martínez-Vega 2019), still others rely on cross-sectional data (Nowak et al. 2019;Cataldo et al. 2020). The study in Magrini (2022) is the only exception covering all the three sustainable dimensions and considering a broad set of countries longitudinally, although the assessment is focused on the growth rate of sustainability and not on its level.
The method of assessment differs across the studies in Table 1, but the construction of composite indicators, employed by nine studies out of twelve, is the most common approach. In Gómez-Limón and Sanchez-Fernandez (2010), both arithmetic and geometric aggregation is considered and combined with weights based on prior judgements and on principal component analysis. In Majewski (2013), Ryan et al. (2016), and Mili and Martínez-Vega (2019), one composite is constructed for each sustainable dimension using arithmetic aggregation and uniform weights, i.e., admitting full compensation and attributing the same importance to each indicator. In Radovanović and Lior (2017), a composite is constructed using arithmetic aggregation and different weighting methods based on several scenarios. Multi-criteria decision analysis is adopted by three studies: the Agri-environmental Footprint Index (AFI, Purvis et al. 2009) is employed in Gaviglio et al. (2017) and in Dabkiene et al. (2021), while the method of similarity to the ideal solution (TOPSIS, Hwang and Yoon 1981) is applied by Nowak et al. (2019). In Cataldo et al. (2020), an innovative weighting method based on Partial Least Squares Path Modelling (PLS-PM) with second-order formative constructs is proposed. This method has the advantage of providing one weight for each indicator and each sustainable dimension, thus making the results easier to interpret, and of not requiring correlation among basic indicators. In Magrini (2022), EU countries are clustered according to common trends of sustainable objectives through group-based multivariate trajectory modelling (Nagin et al. 2018).
Arithmetic aggregation (weighted sum) and geometric aggregation (weighted product) are distinguished by the degree of compensation. Specifically, arithmetic aggregation admits full compensation, i.e., it allows to cancel a bad performance in a basic indicator through a performance of the same intensity but of opposite sign in another basic indicator. On the contrary, using geometric aggregation, the compensation of a bad performance in a basic indicator requires a good performance of higher intensity in other basic indicators.
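To make the difference in compensation concrete, the toy example below (our own illustration, not taken from any of the cited studies) compares the arithmetic and geometric aggregation of two equally weighted, normalized indicators: the unit with one very poor score is penalized much more heavily under geometric aggregation.

```python
import math

def arithmetic(scores, weights):
    return sum(w * s for w, s in zip(weights, scores))

def geometric(scores, weights):
    return math.prod(s ** w for w, s in zip(weights, scores))

weights = [0.5, 0.5]
balanced = [0.6, 0.6]    # moderate performance on both indicators
unbalanced = [1.0, 0.2]  # excellent on one indicator, poor on the other

for label, s in [("balanced", balanced), ("unbalanced", unbalanced)]:
    print(label, round(arithmetic(s, weights), 3), round(geometric(s, weights), 3))
# arithmetic: both profiles score 0.6 (full compensation)
# geometric: 0.6 vs about 0.447 (the poor indicator is only partially compensated)
```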
Although arithmetic aggregation is often preferred in the assessment of agricultural sustainability in the EU (see Table 1), we believe that the underlying assumption of full compensation is undesirable because sustainable development is achieved only when all or most individual sustainability goals are pursued. In this view, we believe that geometric aggregation is more suited to the assessment of sustainability due to its low degree of compensation, even if establishing the correct degree of compensation to assume remains challenging due to the lack of consensus on how agricultural sustainability should be defined, pursued and measured (Zhang et al. 2021).
As can be noted from Table 1, existing studies assessing agricultural sustainability in the EU adopt different weighting methods. This variability reflects the existence of several different approaches without a widely accepted methodology. Essentially, weights can be set uniformly to give the same importance to each indicator and/or dimension, defined a priori with the help of experts' and stakeholders' opinions, or computed endogenously (i.e., empirically from data). A good review of weighting methods can be found in Organisation for Economic Co-Operation and Development (2008) and in Terzi et al. (2021). Uniform and a priori weighting are the most common schemes adopted by existing studies assessing sustainability of EU agriculture. Uniform weighting is easy to understand and replicate, but it cannot provide insights into the importance of indicators and may involve the risk of double weighting. The a priori definition of weights, commonly performed through multi-criteria decision analysis (see, for example, Talukder et al. 2018), represents the most transparent way to construct composite indicators, but it is potentially affected by bias due to scientific consensus or policy priorities, and it may be difficult or even impossible to generalize across different geographical regions.
Several methods to determine the weights endogenously have been proposed to avoid sources of subjectivity. For instance, principal components and factorial analysis can be exploited to determine the weights based on empirical correlations. Although weights determined in this way can be interpreted as correlations with some underlying constructs, this approach has been criticized because the importance of indicators does not necessarily depend on their covariance structure. The Benefit of Doubt (BoD) approach (Cherchye et al. 2007; Zhou et al. 2010; Vidoli et al. 2015) is an alternative weighting method based on benchmarking arguments, i.e., weights are assigned in order to maximize the overall performance of units. Therefore, a unit with a relatively good (or bad) performance in a specific indicator indicates that such a unit considers the underlying objective as more (or less) important to achieve a good overall performance. BoD weighting is superior to correlation-based schemes because it does not require indicators to be correlated and is unit invariant (Cooper et al. 2000, p. 39), thus normalization of indicators is not needed. However, it may attribute excessively low or high weights to indicators, with the consequent risk of cancelling the contribution of some weak objectives. A commonly adopted solution to attenuate this inconvenience is the use of proportion constraints in order to bound the relative contribution of each indicator to the composite (Cherchye et al. 2007).
Although the BoD weighting scheme has not yet been adopted to assess agricultural sustainability in the EU, it has received large popularity in the last two decades, as witnessed by several notable applications to a large variety of research fields, including human development (Despotis 2005), technological achievement (Cherchye et al. 2006), quality of life (Morais and Camanho 2011), internal market (Cherchye et al. 2007), competitiveness (Bowen and Moesen 2011), student evaluation (Rogge 2011), environmental performance (Zanella et al. 2011), digital access (Gaaloul and Khalfallah 2014), and health system evaluation (Lauer et al. 2004; Vidoli et al. 2015). In this view, the use of the BoD weighting scheme to assess sustainability of EU agriculture is definitely attractive.
Partially Ordered Sets (POSets) have been recently proposed as an alternative to composite indicators (Alaimo et al. 2021a, b;Fattore 2017). In essence, POSets can provide a (partial) order on the combinations of basic indicators' values, thus aggregation is avoided. Although POSets overcome several limitations of composite indicators, they are designed for ordinal basic indicators and involve a computational complexity that is exponential in the number of indicators and of their categories. Therefore, we believe that POSets are not suited to the assessment of agricultural sustainability because, according to existing assessment tools, a large number of indicators should be considered and most of them are quantitative.
Selection of Indicators and Data Collection
The selection of indicators was based on guidelines outlined in Van Cauwenbergh et al. (2007). Although these guidelines were published more than ten years ago as part of the SAFE assessment tool, they have inspired several recent assessment tools and have often been appreciated in recent critical reviews (see, for example, Latruffe et al. 2016; De Olde et al. 2016; Talukder et al. 2020). Also, SAFE consists of a smaller set of objectives compared to most existing assessment tools, thus making it possible to select indicators that can be computed at country level based on publicly released statistics.
Our procedure for selecting indicators and collecting data was the following. Firstly, we identified all the objectives suggested in Van Cauwenbergh et al. (2007) that could be measured by at least one indicator for which data are released by international institutions and organizations. Secondly, we selected a set of indicators and a temporal window as large as possible balancing representativeness of the three sustainable dimensions (economic, social and environmental dimensions) and availability of time series data. In the data collection process, we tolerated the occurrence of at most six missing values for each time series (one third), with no more than two consecutive missing values internally to the time series and no more than one missing value at the extremes.
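The tolerance criteria just stated can be phrased as a simple programmatic filter. The sketch below is our own illustration of one possible reading of the rule (at most six missing values in total, at most two consecutive missing values inside the series, at most one missing value at each extreme), not the authors' code.

```python
def acceptable_series(values, max_missing=6, max_consecutive_internal=2, max_edge=1):
    """values: list of floats with None for missing entries."""
    missing = [v is None for v in values]
    if sum(missing) > max_missing:
        return False
    # missing values at the extremes (length of leading / trailing runs of None)
    lead = next((i for i, m in enumerate(missing) if not m), len(missing))
    trail = next((i for i, m in enumerate(reversed(missing)) if not m), len(missing))
    if lead > max_edge or trail > max_edge:
        return False
    # consecutive missing values strictly inside the series
    run = 0
    for m in missing[lead:len(missing) - trail]:
        run = run + 1 if m else 0
        if run > max_consecutive_internal:
            return False
    return True

print(acceptable_series([1.0, None, 2.0, None, None, 3.0, 4.0]))  # True
print(acceptable_series([None, None, 1.0, 2.0, 3.0]))             # False (two missing at the start)
```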
The resulting dataset comprises twelve indicators: five for the economic, three for the social and four for the environmental dimension, measured yearly on all the 27 EU countries plus United Kingdom in the period 2004-2020 (17 years). Table 2 contains a brief description, objective and data source of the selected indicators, while a detailed description is provided in Sect. 3.1. Details on the imputation of missing values and on cointegration analysis are given in Sect. 3.2.
Selected Indicators
The selected indicators for the economic dimension of agricultural sustainability cover the following objectives in Van Cauwenbergh et al. (2007):
- "Agricultural activities are economically and technically efficient", measured through the Total Factor Productivity (TFP) index of agriculture with base year 2015 computed by the United States Department of Agriculture (X_1);
- "Land tenure arrangements are optimal", measured through the ratio of net capital stocks to gross value added (X_2, source: Faostat);
- "Inter-generational continuation of farming activity is ensured", measured through the ratio young/elderly for farm managers (X_3, source: Common Monitoring and Evaluation Framework for the CAP 2014-2020), where young managers are those with less than 25 years and elderly managers are those with more than 55 years;
- "Farm income is ensured", measured through the real income of agricultural factors per paid annual work unit (X_4) and the net entrepreneurial income of agriculture per unpaid annual work unit (X_5), both computed by Eurostat as indices with base year 2010.
Unfortunately, economic objectives outlined in Van Cauwenbergh et al. (2007) related to farmer's training, market activities and dependency on external finance were disregarded due to data unavailability.
For what concerns the social dimension of agricultural sustainability, we covered the objective "Equity in the farm community is maintained or increased" in Van Cauwenbergh et al. (2007) through the following three indicators: the median equivalised net income in rural areas (X_6), the at-risk-of-poverty rate in rural areas (X_7) and the unemployment rate in rural areas (X_8), all sourced from Eurostat. Unfortunately, we did not find reliable data on the social objectives outlined in Van Cauwenbergh et al. (2007) related to food quality, integration, labour and health conditions. However, it is worth noting that the indicator X_10 (area under organic cultivation), selected for the environmental dimension as described below, partially covers the provision of food of good quality. All the shortcomings in the coverage of the social dimension of agricultural sustainability were not due to a lack of effort on our part; in fact, the lack of data on social indicators is widely recognized (Latruffe et al. 2016) and, to our knowledge, the three indicators that we selected are the only measures for which time series data referring to EU countries are publicly available.
The selected indicators for the environmental dimension of agricultural sustainability cover the following objectives in Van Cauwenbergh et al. (2007):
- "Energy flow is adequately buffered", measured through the production of renewable energy from agriculture (X_9, source: Common Monitoring and Evaluation Framework for the CAP 2014-2020);
- "Soil physical and chemical quality is maintained or increased", measured through the area under organic cultivation (X_10, source: Faostat);
- "Pollution levels are reduced", measured through greenhouse gas emissions due to agriculture (X_11, source: Faostat);
- "Soil loss is minimized", measured through the gross nitrogen balance (X_12, source: Eurostat).
Unfortunately, the measurement of the nutrient balance was limited to nitrogen because the time series for the other available nutrients (phosphorus and potassium) contain a large number of missing values. For the same reason, we disregarded environmental objectives outlined in Van Cauwenbergh et al. (2007) related to natural conservation, soil mass flux, water supply and ecosystem services. Summary statistics of the selected indicators are shown in Table 3. From a first look at the data, it is apparent that the average annual changes in the period 2004-2020 across the considered EU countries are consistent with sustainability for most indicators. The ones with the highest change are the real income of agricultural factors per paid annual work unit (X_4, +2.28%), the median equivalised net income in rural areas (X_6, +3.14%) and the production of renewable energy from agriculture (X_9, +19.85%). The huge growth rate of this last indicator is explained by the commitment of EU countries to obtain 20% of their energy from renewable sources by 2020. The ratio young/elderly for farm managers (X_3) is the only indicator with an average annual change not consistent with sustainability (−2.84%).
Imputation of Missing Values and Cointegration Analysis
In the data collection process, we tolerated the occurrence of at most six missing values for each time series (one third), with no more than two consecutive missing values internally to the time series and no more than one missing value at the extremes. The only exception was represented by the ratio young/elderly for farm managers (X_3), which is systematically observed in 2005, 2007, 2010, 2013 and 2016, and missing otherwise. Given the regular pattern of observed values in the considered period (2004-2020) and the valuable and not substitutable information provided by this indicator, we decided not to exclude it from the analysis. It is worth remarking that half of the selected indicators have a missing value in 2020, specifically the agricultural TFP index (X_1), the ratio young/elderly for farm managers (X_3), and all the environmental indicators (X_9, X_10, X_11 and X_12). However, this does not constitute a violation of the data collection criteria, and, moreover, the consideration of year 2020 allows our study to account for the most recent available information.
In order to obtain a complete dataset, we imputed missing values based on a Vector Auto-Regressive (VAR) model with fixed intercepts for countries. The procedure was the following. Firstly, we imputed missing values internally to the time series through linear interpolation. Secondly, we performed a graphical check of stationarity and noted that all the time series were definitely non-stationary, as confirmed by the ADF (Dickey and Fuller 1981) and KPSS (Kwiatkowski et al. 1992) tests. Therefore, we specified the VAR model on logarithmic returns to avoid spurious regression (Granger and Newbold 1974). Let x_{i,t} = (x_{i,t,1}, ..., x_{i,t,p})' be the multivariate observation of the p indicators on country i at time t. The vector of logarithmic returns for country i at time t is y_{i,t} = (Δ log x_{i,t,1}, ..., Δ log x_{i,t,p})', where Δ log x_{i,t,j} = log x_{i,t,j} − log x_{i,t−1,j}, which approximates the relative change in the value of the j-th indicator with respect to the previous time point. The adopted VAR specification for a given lag length L ∈ ℕ+ was

y_{i,t} = μ_i + Σ_{l=1}^{L} Φ_l y_{i,t−l} + u_{i,t},   (2)

where Φ_l is the p × p matrix of coefficients at lag l, μ_i = (μ_{i,1}, ..., μ_{i,p})' is the p-dimensional vector of fixed intercepts for country i, and u_{i,t} = (u_{i,t,1}, ..., u_{i,t,p})' is the vector of random errors for country i at time t, assumed to be serially and cross-sectionally independent with zero mean and common covariance matrix. Note that, since the VAR model in formula (2) is specified on logarithmic returns, the intercepts μ_i represent the coefficients of linear deterministic trends. The Expectation-Maximization (EM) algorithm (Dempster et al. 1977) was employed to compute the expected value of missing data for L = 1, 2, 3, 4, and the imputation provided by the model with the minimum Bayesian information criterion was retained as the final one. The EM algorithm was implemented as follows:
0. missing values are randomly initialized to obtain a complete dataset;
1. (E-step) the VAR model in formula (2) is fitted to the complete dataset;
2. (M-step) missing values are filled by their prediction based on the fitted model to obtain a new complete dataset;
3. the procedure is iterated from step 1 until convergence of the likelihood.
All the computations were performed in R Core Team (2022) through a program developed by the authors. Among the different lag lengths under consideration ( L = 1, 2, 3, 4 ), we found L = 1 as the optimal one.
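For illustration only, the following Python/numpy sketch mimics the fit-and-fill iteration described above for a single country and lag length 1. The authors' actual implementation is an R program with panel fixed effects and likelihood-based convergence, so the function name, the simple mean-based initialization and the fixed number of iterations here are our own simplifications.

```python
import numpy as np

def impute_var1(Y, mask, n_iter=50):
    """Y: (T, p) array of log-returns with NaNs at missing entries.
    mask: boolean (T, p), True where the value is missing."""
    Y = Y.copy()
    # initialise missing entries with the column means of the observed values
    col_means = np.nanmean(Y, axis=0)
    Y[mask] = np.take(col_means, np.where(mask)[1])
    for _ in range(n_iter):
        # fit a VAR(1) with intercept by least squares: Y_t = c + A Y_{t-1} + u_t
        X = np.hstack([np.ones((Y.shape[0] - 1, 1)), Y[:-1]])
        B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)   # B has shape (p+1, p)
        # refill the missing entries with their one-step-ahead predictions
        pred = X @ B
        fill = mask[1:]
        Y[1:][fill] = pred[fill]
    return Y
```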
Before constructing the composite indicator, we tested whether the time series of the selected indicators, after imputation of missing values, were cointegrated (Engle and Granger 1987). Cointegration ensures the existence of a long-term relationship among non-stationary time series, thus it is important to justify the multivariate analysis of the selected indicators. Since our data are structured as a panel, we tested cointegration according to Pedroni (1999). We found that, for half of the selected indicators, the majority of the eleven statistics proposed by Pedroni (1999) led to the rejection of the hypothesis of no cointegration. Instead, for the other half of the indicators, few or none of the statistics confirmed cointegration. This result appears satisfactory given the small length of the time series (17 time points), because cointegration tests are notoriously characterized by low power in small samples.
Methodology
Our composite indicator for agricultural sustainability in EU countries is based on the weighted product method (geometric aggregation of basic indicators), with weights determined endogenously according to the Benefit of Doubt (BoD) approach (Cherchye et al. 2007; Zhou et al. 2010; Vidoli et al. 2015). The BoD approach consists of selecting the weights by maximizing the score of each observation. The BoD weighting scheme is unit invariant, i.e., weights are adapted to the units of measurement of basic indicators (Cooper et al. 2000, p. 39), thus normalization is not required. Nevertheless, basic indicators should have the same polarity, thus we preliminarily applied the reciprocal function to all indicators negatively correlated with sustainability, which include the at-risk-of-poverty rate in rural areas (X_7), the unemployment rate in rural areas (X_8), greenhouse gas emissions due to agriculture (X_11) and the gross nitrogen balance (X_12). Let i = 1, ..., n denote the countries, j = 1, ..., p the basic indicators, and t = 1, ..., T the time points. Also, let x_{i,j,t} and w_{i,j,t} be, respectively, the measurement and the weight of the basic indicator X_j for country i at time t. The score in sustainability for country i at time t is defined as

SUS_{i,t} = ∏_{j=1}^{p} x_{i,j,t}^{w_{i,j,t}}.   (5)

Since the basic indicators can be partitioned into the economic (ECO), social (SOC) and environmental (ENV) sustainable dimensions, the score in sustainability given by formula (5) can be decomposed into the product of the score in each sustainable dimension:

SUS_{i,t} = ECO_{i,t} · SOC_{i,t} · ENV_{i,t},   (6)

where each dimensional score is the product of the weighted basic indicators belonging to that dimension. For each pair (i, t), the weights w_{i,1,t}, ..., w_{i,j,t}, ..., w_{i,p,t} are determined by maximizing the score SUS_{i,t}, subject to a constraint that bounds between 5% and 15% the contribution of each basic indicator to the composite; this last constraint is introduced to avoid excessively low or high weights.
Note that the logarithm of the composite indicator SUS in formula (5) is a linear combination of the logarithmic values of the basic indicators:

log SUS_{i,t} = Σ_{j=1}^{p} w_{i,j,t} log x_{i,j,t}.

Therefore, the optimization problem defining the weights becomes linear after logarithmic transformation of the basic indicators: for each pair (i, t), the weights w_{i,1,t}, ..., w_{i,j,t}, ..., w_{i,p,t} are determined by maximizing log SUS_{i,t} subject to linear constraints on the weights. This optimization was performed in R Core Team (2022) through a program developed by the authors.
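To convey the benchmarking logic behind BoD weighting, the following Python sketch implements the classic additive BoD formulation (maximize each unit's own composite, subject to no unit exceeding a score of 1 under the same weights). It is an illustration of the general principle only, not the authors' exact geometric/logarithmic formulation with 5-15% contribution bounds.

```python
import numpy as np
from scipy.optimize import linprog

def bod_scores(X):
    """X: (n_units, p_indicators) matrix of positive indicator values.
    Returns the classic additive BoD composite score of each unit."""
    n, p = X.shape
    scores = np.empty(n)
    for i in range(n):
        # linprog minimises, so negate the objective to maximise X[i] @ w
        res = linprog(c=-X[i],
                      A_ub=X, b_ub=np.ones(n),   # benchmark constraint: X @ w <= 1 for all units
                      bounds=[(0.0, None)] * p,  # non-negative weights
                      method="highs")
        scores[i] = -res.fun
    return scores

# toy example: 4 units, 3 indicators
X = np.array([[0.9, 0.4, 0.7],
              [0.5, 0.8, 0.6],
              [0.7, 0.7, 0.7],
              [0.3, 0.2, 0.4]])
print(bod_scores(X))
```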
Note that, on the logarithmic scale, the contribution of each basic indicator to the composite can be expressed as a share:

r_{i,j,t} = w_{i,j,t} log x_{i,j,t} / log SUS_{i,t}.

We refer to r_{i,j,t} as the relative importance of the basic indicator X_j for country i at time t. Analogously, the relative importance of the sustainable dimensions ECO, SOC and ENV for country i at time t can be computed as the ratio of the logarithmic score in each dimension to the logarithmic score in sustainability, e.g., log ECO_{i,t} / log SUS_{i,t} for the economic dimension. In order to assess the change in time of the composite indicator SUS and of its components ECO, SOC and ENV, we adopt the mobility index proposed by Giambona and Vassallo (2014). Let ΔR_{i,t} = R_{i,t} − R_{i,t−1} be the first-order difference in rank at year t for country i, and ΔS_{i,t} = S_{i,t} − S_{i,t−1} be the first-order difference in score at year t for country i. The mobility index for country i is the mean of the first-order changes in rank weighted by the first-order changes in score. Therefore, it accounts not only for the absolute change of the country, but also for its relative change with respect to the other countries. The mobility index for a country takes a positive (or negative) value in case of increasing (or decreasing) relative performance in the considered period, while a null value indicates an overall stability of the performance.
Results and Discussion
In this section, we report and discuss the results of our composite indicator. Sections 5.1 and 5.2 focus, respectively, on the trajectories of sustainability and on the relative importance of sustainable dimensions and basic indicators. Section 5.3 provides a comparison with the results of existing studies, while Sect. 5.4 compares our results with subsidies attributed by the Common Agricultural Policy (CAP). Finally, Sect. 5.5 reports the analysis of the sensitivity to different aggregation methods and weighting schemes. Figure 1 shows the trajectories of the composite indicator SUS (in red) and of its economic (ECO, in blue), social (SOC, in orange) and environmental (ENV, in green) components in the period 2004-2020. We see that the trend of sustainability is fairly stable or has a moderate growth rate for most countries, and that no country has a definitely decreasing trend of sustainability. Countries showing a trajectory of sustainability with a strong growth rate include Bulgaria, Croatia, Lithuania and Poland. Among countries with a non-decreasing trajectory, those reaching the 90th percentile of the score in sustainability are Austria, Czechia, Estonia, France, Germany, Hungary, Latvia, Lithuania, Slovakia and Sweden. Cyprus, Malta and Netherlands show an irregular trajectory of sustainability, but with a non-decreasing trend in recent years. For what concerns sustainable dimensions, the social and the environmental ones have similar levels for most countries, while the level of the economic dimension is definitely higher for all countries. It can be noted that the trend of the economic dimension is decreasing for Austria (which is the country with the highest level of sustainability), Finland, Italy, Latvia and Slovakia. Also, Cyprus and Sweden show a decreasing trend of the social dimension, while Czechia is characterized by a decreasing trend of the environmental dimension. Mobility indices can be inspected to get an in-depth insight into the trajectories of sustainability of EU countries. In fact, the mobility index accounts for the evolution of the performance of each country relative to the other ones. Average scores and mobility indices are shown in Table 4 and displayed in Figures 2 and 3. From Figure 2, it can be noted that the countries with an average score in sustainability (SUS) above the third quartile are Austria (AT), Slovakia (SK), Sweden (SE), Hungary (HU), France (FR) and Czechia (CZ); all of them have a positive mobility index, implying an overall improvement of sustainability in the period 2004-2020, with the exception of Czechia, for which the mobility index is negative. Therefore, Czechia requires attention in the near future to prevent a degradation of its sustainability performance; interventions aimed at targeting specific weak sustainability objectives should be considered. Figure 3 provides a comparison between average scores and mobility indices by sustainable dimension. Such a comparison may help, on one hand, in supporting the design of policies in favour of countries with a low level of sustainability, and, on the other hand, in monitoring the importance that countries with a high level of sustainability attribute to the sustainable dimensions.
For instance, we see that the most problematic dimension for Cyprus (CY) and Luxembourg (LU) is the environmental one (average score below the first quartile and negative mobility index), followed by the social dimension (average score above the third quartile but negative mobility index), while the economic dimension is characterized by a low average score but with positive mobility. Instead, the weakest dimension for Belgium (BE) and Netherlands (NL) is the economic one, characterized by a low average score and negative mobility. The weak performance of Austria (AT) in the economic dimension, despite the excellent performance in sustainability (SUS), was previously noted.
Relative Importance of Sustainable Dimensions and Basic Indicators
Figure 4 displays the trend of the relative importance of the sustainability dimensions by country in the period 2004-2020, while Tables 5 and 6 report, for each country, the mean and average annual change of the relative importance of each sustainable dimension and basic indicator. We see that the economic dimension has the highest relative importance, with an average across countries equal to 42.9%, followed by the environmental dimension (23.8%) and by the social dimension (22.4%). The ranks of the relative importance of sustainable dimensions differ within countries, but it can be noted that the economic dimension is ranked first for all countries except Belgium and Netherlands, for which it is ranked second after the social dimension. Instead, the social dimension is ranked first only for Belgium and Netherlands, and the environmental dimension is never ranked first.
Among the considered basic indicators, the net entrepreneurial income of agriculture (X_5, economic dimension) has the highest relative importance, with an average across countries equal to 13.74%, followed by the TFP index of agriculture (X_1, economic dimension, 12.59%), the median equivalised net income in rural areas (X_6, social dimension, 11.35%), greenhouse gas emissions due to agriculture (X_11, environmental dimension, 8.96%), and the real income of agricultural factors (X_4, economic dimension, 8.88%). The other basic indicators have an average relative importance across countries between 5% and 8%, although the ranks of their relative importance differ significantly within countries and no definite patterns can be deduced.
Comparison with Existing Studies
Our findings are not directly comparable with those of existing studies, because our study is the first in the literature performing a longitudinal assessment of all three sustainable dimensions for an exhaustive number of countries. Among existing studies, the most suited for a comparison with our results are Nowak et al. (2019) and Cataldo et al. (2020), where all the three sustainable dimensions and a non-trivial number of countries are considered, although they rely on cross-sectional data.
In Nowak et al. (2019), a composite indicator is constructed based on 2016 data using the TOPSIS method (Hwang and Yoon 1981), leading to a top ten list including seven transition economies (Slovakia, Czechia, Bulgaria, Latvia, Lithuania, Estonia and Hungary) and only three developed countries (Spain, Luxembourg and Austria). This result is apparently in contrast with our composite indicator, but it can be explained by the focus on a single year, where the increasing performance in sustainability for transition countries, also highlighted by our findings, may have been particularly favourable. However, it is reasonable to think that the discrepancies between the findings of Nowak et al. (2019) and ours are mainly due to the fact that the TOPSIS method is based on multicriteria decision analysis, and thus the weighting scheme differs substantially from the BoD one.
In Cataldo et al. (2020), Partial Least Squares Path Modelling (PLS-PM) with second-order formative constructs is exploited to construct a composite indicator based on 2017 data, leading to a higher weight for the economic dimension, followed by the social and the environmental dimensions. The disagreement between these findings and ours can be explained by a substantial difference in the considered indicators. In fact, the study of Cataldo et al. (2020) considers the system of indicators designed to monitor the Sustainable Development Goals (SDGs), which includes measures mainly related to agriculture but not limited to agricultural sustainability. Unfortunately, the main objective of Cataldo et al. (2020) is to derive the weights of sustainable dimensions, thus the ranks of countries are not reported and a comparison with ours is not possible.
Comparison with CAP Subsidies
The results of our composite indicator can be exploited to explore the effectiveness of subsidies attributed by the Common Agricultural Policy (CAP). For this purpose, we accessed the Farm Accountancy Data Network (FADN, European Commission 2020b) and downloaded the data at country level on the following indicators: "Total subsidies, excluding on investments" (SE605), "Subsidies on investments" (SE406), "Environmental subsidies" (SE621), "Subsidies for less favoured areas" (SE622), and "Other rural development payments" (SE623). Total CAP subsidies were obtained by summing the indicators SE605 and SE406. Also, we distinguished CAP subsidies based on economic, social and environmental objectives: environmental subsidies are directly measured by the indicator SE621, while social subsidies were proxied by summing the indicators SE622 and SE623. Finally, subsidies targeting the economic dimension were obtained by subtraction from total CAP subsidies. All the data on CAP subsidies were divided by the utilized agricultural area (UAA) to allow comparisons among countries. Table 7 reports the mobility index for scores and for CAP subsidies to utilized agricultural area (UAA) by country in the period 2004-2019 (data for year 2020 are not available in the FADN). The same mobility indices are compared in Figure 5, where a substantial agreement between the score in sustainability (SUS) and total CAP subsidies is apparent, with few exceptions. Countries with an evident incoherence include Netherlands (NL), which shows an increase in CAP subsidies despite a decreased score in sustainability (second quadrant of Figure 5), and Austria (AT), Slovenia (SI) and Malta (MT), which show a decrease in CAP subsidies despite an increased score in sustainability (fourth quadrant of Figure 5). Figure 6 provides a comparison between scores and CAP subsidies by sustainable dimension. Again, a substantial agreement is apparent for all the three sustainable dimensions, with few exceptions. In particular, countries with an increase in CAP subsidies despite a decreased score in sustainability (second quadrant) include: Austria (AT), Czechia (CZ), Estonia (EE), Latvia (LV) and Netherlands (NL) for the economic dimension; Belgium (BE), Czechia (CZ), Luxembourg (LU) and Sweden (SE) for the social dimension; Bulgaria (BG), Cyprus (CY), Croatia (HR) and Portugal (PT) for the environmental dimension. Instead, countries with a decrease in CAP subsidies despite an increased score in sustainability (fourth quadrant) include: Malta (MT) for the economic dimension; Estonia (EE), Lithuania (LT), Malta (MT), Portugal (PT) and Slovenia (SI) for the social dimension; Austria (AT), Belgium (BE), Finland (FI) and Netherlands (NL) for the environmental dimension.
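The derivation of the subsidy variables described above can be summarized in a few lines of code. The following pandas sketch is our own illustration: the column names are the FADN codes quoted in the text, while the DataFrame layout and the numbers are assumed for demonstration.

```python
import pandas as pd

# assumed layout: one row per country-year with the FADN indicators and the UAA
fadn = pd.DataFrame({
    "SE605": [380.0, 410.0],  # total subsidies, excluding on investments
    "SE406": [25.0, 30.0],    # subsidies on investments
    "SE621": [60.0, 70.0],    # environmental subsidies
    "SE622": [40.0, 45.0],    # subsidies for less favoured areas
    "SE623": [15.0, 20.0],    # other rural development payments
    "UAA":   [35.0, 36.0],    # utilized agricultural area
})

fadn["total"] = fadn["SE605"] + fadn["SE406"]
fadn["environmental"] = fadn["SE621"]
fadn["social"] = fadn["SE622"] + fadn["SE623"]  # proxy, as described in the text
fadn["economic"] = fadn["total"] - fadn["environmental"] - fadn["social"]

# express all subsidy variables per unit of utilized agricultural area
for col in ["total", "economic", "social", "environmental"]:
    fadn[col + "_per_uaa"] = fadn[col] / fadn["UAA"]

print(fadn[["total_per_uaa", "economic_per_uaa", "social_per_uaa", "environmental_per_uaa"]])
```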
The substantial agreement between mobility indices of scores and mobility indices of CAP subsidies supports the reliability of our composite indicator. Therefore, it represents a valuable resource to refine the scheme for the attribution of CAP subsidies in order to stimulate specific sustainable dimensions.
Sensitivity Analysis
Sensitivity analysis is an important step to evaluate the robustness of the composite indicator with respect to alternative methodological choices, i.e., the selection of indicators, the normalization procedure, the aggregation method, and the weighting scheme (Terzi et al. 2021). Since we selected the basic indicators based on theory, guidelines in the literature and data availability, the robustness of our composite indicator with respect to the use of different basic indicators was not investigated. Also, since the BoD weighting scheme is unit invariant, we disregarded the effect of different normalization procedures and concentrated only on the comparison between different aggregation methods and weighting schemes. Specifically, we computed three alternative composite indicators: (i) arithmetic aggregation with BoD weights, (ii) geometric aggregation with uniform weighting, (iii) arithmetic aggregation with uniform weighting. In these alternative composite indicators, uniform weighting was obtained by setting the weights equal to the reciprocal of the variance of each basic indicator. Table 8 shows average ranks and mobility indices for our composite indicator and the three alternative composites. We see that there is a substantial difference in the results obtained with the different methodological choices; in particular, the two composites based on arithmetic aggregation (BoD versus uniform weighting) are the most dissimilar from each other (Spearman correlation equal to 0.399). This analysis highlights a clear dependence of the results on the aggregation method and the weighting scheme, confirming the core importance of methodological choices that, in this research, have been clearly motivated in favour of geometric aggregation with BoD weights.
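For readers who wish to reproduce this kind of comparison on their own composites, the agreement between two alternative sets of scores can be quantified with the same rank correlation used in the text; a minimal sketch with made-up scores is shown below.

```python
from scipy.stats import spearmanr

bod_geometric = [0.91, 0.78, 0.85, 0.60, 0.73]      # made-up scores for five countries
uniform_arithmetic = [0.88, 0.70, 0.66, 0.74, 0.80]

rho, pvalue = spearmanr(bod_geometric, uniform_arithmetic)
print(round(rho, 3))  # Spearman rank correlation between the two composites
```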
Concluding Remarks
In this paper, we have emphasized that few studies have been conducted to assess agricultural sustainability in the European Union (EU), and that all of them fail to provide a holistic view of sustainability in a relevant temporal horizon that could effectively support the design of policies. Our proposal is innovative with respect to existing studies because we considered: (i) all EU countries rather than a subset of them, (ii) a broad set of indicators (12 in total) to cover the economic, social and environmental dimensions of sustainability, (iii) longitudinal data over a long period (17 years). Also, the decomposition into the contributions of sustainable dimensions and the adoption of the BoD weighting scheme is novel in the assessment of agricultural sustainability in the EU.
The construction of composite indicators is subject to several arbitrary choices that may influence the final results, especially the aggregation method and the weighting scheme. Therefore, we paid particular attention to motivating our methodological choices. On one hand, geometric aggregation was preferred to the arithmetic one because it allows a small degree of compensation, reflecting the fact that sustainable development is achieved only when all or most individual sustainability goals are pursued. On the other hand, the BoD weighting scheme was selected because it makes it possible to infer the relative importance of each basic indicator and sustainable dimension in the achievement of sustainability without relying on subjective opinions. The core importance of methodological choices, and thus of their motivation, was also confirmed by the sensitivity analysis conducted on our composite indicator.
A valuable resource employed to discuss our results is the mobility index. The mobility index accounts for the evolution of the performance of each country relative to the other ones, and not simply for each country separately. Therefore, it allows an in-depth insight into the trajectories of sustainability of the various countries. For this reason, we hope that our work encourages the developers of composite indicators for longitudinal data to use the mobility index in the discussion of their results.
In order to check the reliability of our composite indicator, we inspected the relationship between the mobility indices of the scores and the mobility indices of the subsidies attributed by the Common Agricultural Policy (CAP). Our findings highlighted a substantial agreement between the two, both overall and by sustainable dimension. Therefore, our composite indicator represents a valuable resource not only to monitor the progress of EU member countries towards sustainability objectives, but also to refine the scheme for the attribution of CAP subsidies in order to stimulate specific sustainable dimensions.
The main critical point of our work lies in the quality and availability of data, an issue affecting all multidimensional assessments due to the practical difficulty of collecting reliable measurements on a large number of indicators. The national scale and the longitudinal nature of our analysis entail further complications, because the only data sources are international institutions and organizations, and the available time series are typically short and may present a number of missing values. In this paper, missing values have been imputed based on a vector auto-regressive model with fixed intercepts. Our imputation procedure has good properties because the Expectation-Maximization (EM) algorithm was employed to compute the expected value of the missing data. However, the limited length of the time series prevented us from effectively checking for the presence of cointegration, so our methodology has room for improvement. To this end, we plan to integrate the EM algorithm within the BoD optimization and to recompute the composite indicator as future data become available.
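As a rough illustration of the kind of imputation described above, the following sketch alternates between fitting a VAR(1) model by least squares and replacing missing entries with their model-based expectations. It is a simplified stand-in for the actual EM procedure with fixed intercepts used in the paper; all variable names and data are hypothetical.

import numpy as np

def impute_var1(Y, n_iter=50):
    """EM-style imputation for a multivariate time series Y (T x K) with NaNs."""
    Y = Y.astype(float).copy()
    missing = np.isnan(Y)
    # Initialize missing entries with column means of the observed values.
    col_means = np.nanmean(Y, axis=0)
    Y[missing] = np.take(col_means, np.where(missing)[1])
    for _ in range(n_iter):
        # Approximate M-step: fit y_t = c + A y_{t-1} + e_t by least squares.
        X = np.hstack([np.ones((Y.shape[0] - 1, 1)), Y[:-1]])   # regressors [1, y_{t-1}]
        B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)            # (K+1) x K coefficients
        # Approximate E-step: replace missing entries with their fitted values.
        fitted = X @ B                                            # predictions for t = 1..T-1
        Y[1:][missing[1:]] = fitted[missing[1:]]
    return Y

# Toy example: 10 yearly observations of 3 indicators with a few gaps.
rng = np.random.default_rng(0)
Y = np.cumsum(rng.normal(size=(10, 3)), axis=0)
Y[3, 1] = np.nan
Y[7, 0] = np.nan
print(impute_var1(Y))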
|
2022-05-10T16:29:30.488Z
|
2022-04-30T00:00:00.000
|
{
"year": 2022,
"sha1": "e36e0fcd0318f9281bf69cf32f7884be108b0da1",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11205-022-02925-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "23a9df4c1100073b62a110cbbc04f3c3d87c2168",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
}
|
261174476
|
pes2o/s2orc
|
v3-fos-license
|
Bio-organic fertilizers promote yield, chemical composition, and antioxidant and antimicrobial activities of essential oil in fennel (Foeniculum vulgare) seeds
The aromatic fennel plant (Foeniculum vulgare Miller) is cultivated worldwide due to its high nutritional and medicinal values. The aim of the current study was to determine the effect of the application of bio-organic fertilization (BOF), farmyard manure (FM) or poultry manure (PM), either individually or combined with Lactobacillus plantarum (LP) and/or Lactococcus lactis (LL) on the yield, chemical composition, and antioxidative and antimicrobial activities of fennel seed essential oil (FSEO). In general, PM + LP + LL and FM + LP + LL showed the best results compared to any of the applications of BOF. Among the seventeen identified FSEO components, trans-anethole (78.90 and 91.4%), fenchone (3.35 and 10.10%), limonene (2.94 and 8.62%), and estragole (0.50 and 4.29%) were highly abundant in PM + LP + LL and FM + LP + LL, respectively. In addition, PM + LP + LL and FM + LP + LL exhibited the lowest half-maximal inhibitory concentration (IC50) values of 8.11 and 9.01 μg mL−1, respectively, compared to l-ascorbic acid (IC50 = 35.90 μg mL−1). We also observed a significant (P > 0.05) difference in the free radical scavenging activity of FSEO in the triple treatments. The in vitro study using FSEO obtained from PM + LP + LL or FM + LP + LL showed the largest inhibition zones against all tested Gram positive and Gram negative bacterial strains as well as pathogenic fungi. This suggests that the triple application has suppressive effects against a wide range of foodborne bacterial and fungal pathogens. This study provides the first in-depth analysis of Egyptian fennel seeds processed utilizing BOF treatments, yielding high-quality FSEO that could be used in industrial applications.
Essential oils (EOs) extracted from medicinal and aromatic plants (MAPs) have been widely used for their antispasmodic, sedative, digestive, cardiotonic, diuretic, and tonic effects in alternative medicine 1 . They are routinely added to foods and are usually acknowledged as safe when these plants and/or their EOs are farmed organically using certified procedures. In addition, EOs have long been used as flavorings in the food industry, as well as in many other applications in cosmetics, hygiene products, pharmaceutical medications, and fragrances [2][3][4] . The natural antioxidant effects of EOs can also be used as alternative food preservatives 5,6 .
In 2021, 2.3 million hectares of MAPs were harvested worldwide, yielding over 2.7 metric tons of seeds 7 . Egypt has contributed to more than 32,000 hectares of the harvested area-the vast majority of which is distributed across areas negatively impacted by salinity-yielding about 29,000 tons of fruitful seeds. On average,
Materials and methods
Experimental location. Two field-scale trials were performed in 2019/2020 and 2020/2021. A factorial layout with a randomized complete block design (RCBD) was applied. The experiments were carried out on a plot of soil at a research farm in Fayoum governorate (29° 17′N; 30° 53′E), Egypt. Mean temperatures throughout the experimental period (from October to May) were 25 ± 3 °C/10 ± 2 °C for average day/night temperatures, with an average relative humidity of 75 ± 4%. Natural sunlight (11 h average daylight length) was sufficient for all growth stages of the fennel plants.
Land properties.
According to the climatic spectrum and aridity index 31 , the experimental site was in an arid area. Soil was classified as typic tropopsamments, siliceous, and hyperthermic based on Soil Survey Staff USDA 32 . Soil samples were collected from the upper soil layer (0.0-0.2 m in depth). All physio-chemical analyses of the studied soil were carried out according to the methods described by 33,34 . The soil used in this study was saline calcareous and sandy loam in texture (74.66% sand, 12.15% silt, and 13.19% clay). The ECe was 6.92 dS m −1 , CaCO 3 = 13.8%, pH = 7.64, OM = 0.89%, and the available N was 0.016% (Table S1).
Plant materials. Seeds of F. vulgare were obtained from the Institute of Medicinal and Aromatic Plants, Agricultural Research Center (ARC), Giza, Egypt. Fennel seeds were hand-bedded in hills 0.3 m apart (3-5 seeds hill −1 ) on October 27th of both seasons. Twenty-one days after germination (DAG), hills were thinned to 2-3 seedlings, and re-thinned again at 45 DAG to maintain only the strongest plant hill −1 . The experimental site was fertilized with the recommended doses of 75 kg P 2 O 5 ha −1 as (P), two equal applications of 150 kg N ha −1 applied at 45 and 75 DAG, and 50 kg K 2 O ha −1 (K) applied entirely with the second application of N-fertilization. Disper Complex GS (Chelated-Microelements, 0.5 g L −1 ), purchased from Sphinx International Trade Co., Nasr City, Egypt, was sprayed on the foliage of the fennel crop at 40 and 70 DAG. The whole matured fennel crop was hand-picked on May 8 in this 2-year study. The use of plants/plant parts in the present study complies with the international, national and/or institutional guidelines.
Treatments and experimental setup. Two organic fertilizers comprising farmyard manure (FM) or poultry manure (PM) were purchased from cattle and poultry producers (private farms) based in Fayoum city, Fayoum governorate, Egypt.
The chemical properties of FM and PM are presented in Table S2. FM and PM were applied individually at rates of 25 and 20 m 3 ha −1 , respectively, following the commercial agronomic regional practices for fennel, or applied in combination with the two lactic acid bacteria (LAB) strains, Lactobacillus plantarum (LP) and Lactococcus lactis (LL). LP and LL were applied individually or in a mixture as seed inoculation.
Each treatment was applied three times, with a total of 27 plots.The area of the experimental plot was 3 m in length × 3 m row width (9 m 2 ).Each plot contained 5 lines, each 3.0 m in length and 60 cm apart.
Bacterial strains. Two bacterial strains (L. plantarum subsp. plantarum ATCC 14917 and L. lactis subsp. lactis ATCC 11454) obtained from the Department of Agricultural Microbiology and Biotechnology, Ain Shams University, Egypt were used in the current study. Both strains were cultivated on de Man, Rogosa and Sharpe (MRS) agar (Lab M Limited, Lancashire, UK) and stored at 4 °C. Cell suspensions of the bacterial strains were obtained by inoculating each strain in double-strength MRS broth and cultivating overnight at 37 °C. The final concentration of cells reached 5 × 10 9 colony forming units (CFU) mL −1 . To inoculate fennel seeds, 100 mL of the cell suspension of each strain was transferred to a 250 mL flask and kept overnight at 37 °C. Fennel seeds were inoculated with a cell suspension of either LP or LL (1:1).
Extraction and analysis of FSEO. Extraction of FSEO. Air-dried fennel seed powder (100 g) from each plot was subjected separately to hydrodistillation in 1 L of double distilled water (DDW) and boiled for 4 h in a Clevenger apparatus 35 . The extracted oils from each plot were dried over anhydrous sodium sulfate (Advent Chembio PVT. LTD, Mumbai, India) to eliminate any traces of moisture, then weighed and kept in air-tight closed dark vials at − 80 °C until use (Fig. 1).
Analysis of FSEO.
FSEO from each plot was analyzed using trace gas chromatography (GC) (model GC1310-ISQ) mass spectrometry (MS; Thermo Scientific, Austin, TX, USA) equipped with a TG-5MS column (30 m × 0.25 mm × 0.25 μm film thickness), with helium as a carrier gas at a constant flow rate of 1 mL min −1 . The column oven temperature was initially held at 50 °C, then raised by 5 °C min −1 to 230 °C, held for 2 min, raised to the final temperature of 290 °C at 30 °C min −1 , and held for 2 min. The injector and MS transfer line temperatures were kept at 250 and 260 °C, respectively. The solvent delay was 3 min, and diluted samples of 1 µL were injected automatically using an AS1300 autosampler coupled with the GC in split mode. In full scan mode, electron ionization mass spectra were collected at 70 eV ionization voltage over m/z 40-1000. The ion source temperature was set at 200 °C. The components were identified by comparison of their retention times and mass spectra with those of the WILEY 09 and NIST 11 mass spectral databases.
After stirring vigorously for 1 min, the reaction mixture was kept at 35 ± 2 °C for 30 min in the dark. The decrease in absorbance was recorded at 517 nm via the U-2900 UV-Vis double-beam spectrophotometer (Hitachi, Tokyo, Japan). Three replications for each measurement were carried out. For each sample, the DPPH free radical scavenging activity (DPPH FRSA) was computed as DPPH FRSA (%) = [(Acs − Ats)/Acs] × 100, where Acs is the absorbance of the control sample and Ats is the absorbance of the treatment sample. The half-maximal inhibitory concentration (IC 50 ) values (the concentration required for 50% inhibition of viability) were assessed from the FRSA-versus-concentration curve of the respective sample.
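As a rough illustration of how these quantities are derived from the raw absorbance readings, the following sketch computes the scavenging percentage from the standard DPPH relation above and estimates IC 50 by linear interpolation of the dose-response points; the numbers are placeholders, not the measured data.

import numpy as np

def frsa_percent(a_control, a_treatment):
    """DPPH free radical scavenging activity (%) from absorbances at 517 nm."""
    return (a_control - a_treatment) / a_control * 100.0

# Placeholder dose-response data: FSEO concentrations (ug/mL) and treated absorbances.
concentrations = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
a_control = 0.900
a_treated = np.array([0.84, 0.72, 0.40, 0.20, 0.08])

activity = frsa_percent(a_control, a_treated)

# IC50: concentration giving 50% scavenging, by linear interpolation on the curve.
ic50 = np.interp(50.0, activity, concentrations)
print("FRSA (%):", np.round(activity, 1))
print(f"IC50 ~ {ic50:.2f} ug/mL")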
Determination of the antimicrobial effect. Microbial strain sources, culture conditions and inoculum preparation. The FSEO antimicrobial efficiency was tested against different bacterial and fungal strains, including two Gram positive bacteria, Staphylococcus aureus (ATCC 8095) and Bacillus subtilis (ATCC 13753), and two Gram negative bacteria, Escherichia coli (ATCC 25922) and Pseudomonas aeruginosa (ATCC 10662). Two fungal strains (Penicillium roqueforti and Aspergillus niger) known for food spoilage and mycotoxin production were also used in the present study.
All bacterial strains were obtained from the Agricultural Microbiology Department, Fayoum University, Egypt, while the Mycological Center, Assiut, Egypt provided the fungal strains. Bacterial cultures were grown on Luria-Bertani (LB) agar (Lab M), and the fungal cultures were cultivated on potato dextrose agar (PDA) (Lab M). All strains were stored at 4 °C and subcultured once a month.
Fungal cultures were grown on PDA for 7 days at 28 °C until good sporulation was obtained. For the preparation of the fungal spore suspension, 5 mL of a sterile saline solution (0.85%) containing Tween 80 (Sigma) (0.1%) was added to the surface of the cultures, followed by gentle scraping with a sterile needle. After settling down for 3 min, the homogeneous upper suspension was used as inoculum. The tests were then carried out using a suspension containing 10 8 spores mL −1 . Bacterial inocula were prepared by inoculating the culture into 50 mL of LB broth medium (Lab M) in an Erlenmeyer flask. The flasks were incubated in a shaker incubator at 37 °C for 24 h at 150 rpm. The bacterial inoculum was adjusted to 10 7 CFU mL −1 (0.5 McFarland).
Disc-diffusion assay. A disc-diffusion assay 38 was employed to determine the antimicrobial activity of FSEO against the tested strains. Solidified plates containing LB agar for bacterial strains and PDA for fungal strains were seeded with 0.2 mL of the inoculum suspension previously described. Different concentrations of FSEO were added to 9 mm Whatman #1 filter paper disks which were placed on the agar surface. The plates were left for 60 min to allow diffusion and then incubated at 37 °C for 24 h for bacteria and at 28 °C for 5 days for fungi. Antimicrobial activities were measured as the inhibition zone diameter around each disk. The antibiotics gentamycin and clotrimazole were used as positive controls for bacteria and fungi, respectively.
Effect of essential oils on hyphal morphology. The determination of the effects of volatile FSEO on hyphal morphology was performed as previously described 39 .
Statistical analysis.
All experiments were carried out with three replications for each FSEO concentration. Data were analyzed using two-way analysis of variance (ANOVA), and Duncan's multiple range test was used to determine statistical significance at P < 0.05. For all statistical analyses, the IBM® SPSS® statistical program (version 23, New York, USA) was used.
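For readers who want to reproduce this kind of analysis outside SPSS, a minimal two-way ANOVA could look like the sketch below (using statsmodels; the data frame columns and values are hypothetical, not the measured yields). Duncan's multiple range test is not included, as it is not part of the common Python statistics libraries.

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: organic fertilizer, LAB seed treatment, and oil yield (%).
df = pd.DataFrame({
    "fertilizer": ["FM", "FM", "FM", "PM", "PM", "PM"] * 3,
    "lab":        ["none", "LP", "LP+LL"] * 6,
    "oil_yield":  [1.1, 1.3, 1.6, 1.2, 1.5, 1.9,
                   1.0, 1.4, 1.7, 1.3, 1.5, 2.0,
                   1.2, 1.3, 1.8, 1.1, 1.6, 1.9],
})

# Two-way ANOVA with interaction, analogous to the factorial RCBD analysis described above.
model = ols("oil_yield ~ C(fertilizer) * C(lab)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)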
In Iran, FSEO content ranged from 2.7 to 4% 40 , but the FSEO yield in Pakistan was 2.81% 41 . Although the FSEO yield was 0.1% in Portugal 42 , the FSEO from 16 wild edible Tunisian F. vulgare ranged from 1.2 to 5.06% 43 . In addition, it was found that the FSEO yield from Egyptian organic fennel was 1.6% without any treatments 44 . The content of EO can mainly be influenced by the environmental and geographical conditions of the regions, climatic changes, the nature of the soil, and genetic factors 14 . Moreover, the technique and extraction process may have an effect 45 . According to 46 , effective agricultural and environmental practices would also help in enhancing the quality and yield of EOs.
GC-MS analysis of FSEO.
GC-MS analysis of FSEO led to the identification of 17 components, which represented 99.94-100% of the total composition, belonging to hydrocarbons, alcohols, ethers, ketones, esters, amines, fatty acids, monoterpenes and sterols (Table 1; Fig. S1). Ethers represent the most abundant component class in fennel seeds among all tested BOF treatments.
Ethers. Ethers were the most prevalent class in all applied treatments, accounting for 79.82-91.92% of the total volatiles; their antioxidative and antibacterial properties make them noteworthy dietary components 47 . Ethers reached 91.92% and 89.74% of the total volatiles in the triple combinations PM + LP + LL and FM + LP + LL, respectively, compared to the untreated control (81.8%). (E)-anethole and its isomer estragole (i.e., phenylpropanoid derivatives) are extensively found in different plants. In star anise (Illicium anisatum L.) and anise (Pimpinella anisum L.), (E)-anethole is the main volatile compound, while estragole is prevalent in sweet basil (Ocimum basilicum L.) and tarragon (Artemisia dracunculus L.) 48 . In the current study, (E)-anethole (No. 4, Table 1) was the major volatile compound in fennel seeds produced under all BOF treatments. The sweet, distinct, anise-like flavor that distinguishes fennel fruits can be attributed to (E)-anethole, which is also used as a flavoring and fragrance ingredient in the food industry and cosmetics 49 . In addition, (E)-anethole possesses various pharmacological properties, including anti-inflammatory, immunomodulatory, neuroprotective, and anti-diabetic effects. In contrast, estragole has no discernible effect on the total fennel aroma, despite its high affinity to alkenylbenzenes (e.g., methyleugenol and safrole; Fig. S2), which are classified as carcinogens (Class 2B) by the International Agency for Research on Cancer (IARC); this prompted the European Union (EU) to restrict the use of estragole in nonalcoholic beverages to 10 mg kg −1 50 .
Recently, estragole has received attention due to its genotoxic and hepatocarcinogenic properties 51 . These effects result from the 1′-hydroxyestragole sulfuric ester, an estragole metabolite, forming an adduct with DNA. Accordingly, the toxicity is not initiated by the parent compounds but by their highly reactive metabolites. On the other hand, it has been demonstrated that other plant components, such as the flavonoid nevadensin, can prevent the formation of estragole DNA adducts caused by sulfotransferase (SULT), which converts 1′-hydroxyestragole to the critical carcinogen 1′-sulfooxyestragole [52][53][54] .
In addition, the toxicokinetics of alkenylbenzenes, such as estragole versus trans-anethole, are influenced by structural differences among these compounds (Fig. S2). This influences the toxic (particularly genotoxic) potential of the various alkenylbenzenes, which must be considered when evaluating the possible dangers associated with exposure to these chemicals 54 . Recognizing these threats, the European Medicines Agency has advised pregnant women, nursing mothers and young children to minimize estragole supplementation. Even though no authority has banned the use of estragole-containing herbs, the European Union Commission has banned their use as food additives 55 .
Ketones. After ethers, the ketone fenchone (No. 6, Table 1) was the second major class of volatiles in all fennel treatments, amounting to 2.84-9.55%. In all treatments, only fenchone was found, and it was present at a much higher level in FM + LL (9.55%) compared to FM + LP and PM + LL at 5.89 and 5.82%, respectively. It was, however, found at a much lower level in PM + LP + LL at 2.84%, probably due to the impact of high levels of (E)-anethole. Our result agrees with a recently published report 56 where ketones scored 7.52% when fennel plants were treated with humic acid. Despite fennel's bitter aftertaste, fenchone is utilized as a food flavor owing to its camphor-like aroma 57 , in addition to its antifungal, acaricidal, and wound-healing properties 58 .
Monoterpene hydrocarbons (MTHCs).
After ethers and ketones, MTHCs were the third most plentiful class in the combinations PM + LL, FM + LL and PM + LP at 9.17, 8.27 and 7.88%, respectively, present to a lesser extent in the control, FM and FM + LP (6.51-6.12%), and reaching almost 4.25% in the remaining treatments (Table 1). This is consistent with a report in which MTHCs were found to constitute 7.15% of the total 59 .
The major MTHC identified was limonene (No. 14, Table 1), which was found at the highest levels in PM + LL (8.62%) and FM + LL (7.52%), respectively. Limonene, a key component of citrus fruits, is an additive to numerous food products for its lemon-like flavor and anti-inflammatory properties against multiple intestinal inflammations 60 . Further, limonene is used as a wetting, dispersion, resin, and dissolving agent. Small quantities of 3-pinanylamine were detected in all specimens, ranging between 0.21 and 0.90%. This branched monoterpene hydrocarbon is used to manufacture insecticides and solvents 61 .
Hydrocarbons/alcohols/esters/amines/fatty acids/sterols. The minor elements of the FSEO were hydrocarbons (0.29-0.63%). Alcohols were present in amounts ranging from 0.08% in FM + LP to 0.83% in FM. Linalool, which was highly abundant in PM (0.72%), followed by PM + LL, compared to other treatments (Table 1), is responsible for the aroma of clementine peel oil and can be utilized as a flavoring agent owing to its outstanding floral balmy odor 62 . Esters were abundant in PM (0.81%) compared to other fennel specimens, and amines were present in traces in all fennel treatments (0.01-0.07%) except in PM and PM + LL, where they reached 0.26 and 0.19%, respectively. Fatty acids were found in all fennel treatments, ranging from 0.47 to 1.25%, and other minor constituents of the FSEO were sterols, ranging from 0.06 to 0.16% in all fennel treatments; these components contributed to the overall aroma of fennel.
In conclusion, EOs obtained by hydro-distillation were rich in (E)-anethole (78.9-91.4%), fenchone (3.35-10.1%), limonene (2.94-8.62%) and estragole (0.50-4.29%) when fennel plants were treated with BOF (Fig. S3). These compounds are responsible for the most intense odor in fennel seed oil. They were best obtained by treating fennel with the combinations PM + LP + LL, FM + LP + LL, PM + LL and FM + LL. For this reason, fennel treated with organic and biofertilizers can be used to obtain volatile plant oil at an analytical scale and to obtain FSEO industrially as a replacement for traditional techniques based on fennel treated with chemical fertilizers.
Biological potential of FSEO. Antioxidant activity-DPPH assay. An attractive area of nutritional and pharmacological study is analyzing the antioxidant properties of essential oils as lipophilic secondary metabolites. Natural compounds derived from plants are increasingly replacing synthetic food additives because they are safe, efficient, and well-liked by consumers 63 . Fennel, as an edible and medicinal plant, is generally important in the neutralization of reactive oxygen species due to the existence of various secondary metabolites in the fennel oil. This would significantly contribute to their biochemical activities in preventing damage to lipids, DNA, and proteins, which is thought to be the principal cause of cell aging, oxidative stress-related conditions (neurodegenerative and cardiovascular diseases) and cancer 64 . FSEO exhibits high antioxidant activity relative to the positive control, l-ascorbic acid (Table 2). The IC 50 , which is defined as the substance concentration that causes a loss of 50% of the DPPH activity (color) 65 , was the criterion employed to measure the DPPH FRSA.
Furthermore, the antioxidant potential of F. vulgare treated with the triple combinations was stronger than that of untreated Egyptian F. vulgare (IC 50 = 141.82 mg mL −1 ) 44 . This variation in IC 50 values was probably due to the combined treatments of organic and biofertilizers. This led to differences in the content of the main component (E)-anethole, which recorded a significantly higher concentration in the triple combinations (Table 1). However, lower values have been reported for untreated Egyptian (46.26%) 66 , Chinese (54.26%) 44 , and Tajikistan (36.8%) 64 fennel.

Table 2. Antioxidant potential of FSEO determined through the DPPH assay. Treatments were: (1) C, control, no seed or soil treatment; (2) FM, soil treatment with farmyard manure; (3) FM + LP, soil treatment with farmyard manure + seed treatment with Lactobacillus plantarum; (4) FM + LL, soil treatment with farmyard manure + seed treatment with Lactococcus lactis; (5) FM + LP + LL, soil treatment with farmyard manure + seed treatment with Lactobacillus plantarum + Lactococcus lactis; (6) PM, soil treatment with poultry manure; (7) PM + LP, soil treatment with poultry manure + seed treatment with Lactobacillus plantarum; (8) PM + LL, soil treatment with poultry manure + seed treatment with Lactococcus lactis; (9) PM + LP + LL, soil treatment with poultry manure + seed treatment with Lactobacillus plantarum + Lactococcus lactis. The values are expressed as means (n = 3). Based on Duncan's multiple range test at P ≤ 0.05, the means of rows sharing different small letters (a-i) are significantly different. IC 50 , the half-maximal inhibitory concentration (the concentration required for 50% inhibition of viability). FSEO, fennel seed essential oil; DPPH, 2,2-diphenyl-1-picrylhydrazyl.
Except for (E)-anethole and estragole, all nine FSEO samples have comparable concentrations of all other significant components, which shows that the antioxidant activity was mainly related to the (E)-anethole concentration. One of the main distinctions between the chemical structures of (E)-anethole and estragole is that the double bond of the propenyl side chain in (E)-anethole is conjugated with the aromatic ring, whereas it is non-conjugated in estragole. Contrary to estragole, which can only produce a homobenzylic radical cation (Fig. S4), (E)-anethole readily forms a conjugated radical cation, which can be delocalized over the aromatic ring and is further stabilized by the methoxy group through 1,4 interactions. This variation between (E)-anethole and estragole was also seen in their photochemical and free radical dimerization, where anethole dimerized, but estragole did not, by forming the intermediate radical cation 68,69 . This observation may explain the variations in antioxidant activity between the studied FSEO.
As a result, the current work provides for the first time the IC 50 for FSEO treated with organic and biofertilizers as an evaluation of their antioxidant activity (Table 2). The present study emphasized that FSEO can act as a primary antioxidant, interacting with free radicals and inhibiting or scavenging free radicals in the human body, thus preventing their damage. In addition, it may be concluded that estragole is an excellent alkylation agent while (E)-anethole is a better radical scavenger, which may explain why estragole is suspected to be carcinogenic, since it can easily alkylate DNA molecules and establish covalent bonds with DNA bases 70 .
Antimicrobial effect of FSEO against pathogenic bacteria and fungi. Antibacterial potential. The antibacterial activities of FSEO were assessed against four food-borne pathogenic bacteria (S. aureus, B. subtilis, E. coli, and P. aeruginosa). Based on the inhibition zone diameters obtained, our results were divided into three categories according to 71 : resistant (< 7 mm), intermediate (> 12 mm) and sensitive (> 18 mm).
All FSEO samples from the current study exhibited significant antibacterial activities against all the tested strains except for P. aeruginosa. FSEO from PM + LP + LL and FM + LP + LL were the most efficient against all tested strains, giving inhibition zones larger than gentamycin by (23.5%, 25.0%, 6.6% and 16.6%) and (13.3%, 25.0%, 0% and 0%) for S. aureus, B. subtilis, E. coli and P. aeruginosa, respectively, when 10 µL disk −1 was provided (Table 3).
Our findings also showed that S. aureus and B. subtilis were the most sensitive bacteria tested, revealing the largest inhibition zones, while the smallest inhibition zone was for E. coli (Table 3). None of the studied FSEO effectively inhibited P. aeruginosa except those of PM + LP + LL and FM + LP + LL. Our results are in alignment with another study 72 , suggesting that FSEO has considerable antibacterial activity, particularly towards Gram positive bacteria compared to Gram negative isolates. According to 73 , FSEO inhibits various Bacillus species, while Gram negative bacteria are less sensitive to it.
It has been reported that these differences between Gram-positive and Gram-negative bacteria are caused by their distinct cell walls [74][75][76] .Such variations alter plasma coagulation, cause DNA destruction, modify enzymatic processes or increase plasma membrane permeability, which may result in greater leakage of fluid material from bacterial cells 77 and decrease microbial respiration 78 .
In conclusion, the FSEO treated with any of the triple combinations was highly effective against Gram positive and Gram negative bacteria, and may be employed as a natural antibacterial agent for the treatment of several infectious disorders initiated by these pathogenic bacteria. Antifungal potential. FSEO components were more effective and showed more fungicidal potential than clotrimazole (Fig. 3; Table 4). FSEO produced a complete zone of inhibition relative to the standard drug for A. niger. It also showed higher activity against P. roqueforti, forming a zone of inhibition 100% larger than that of the standard drug.
The effect of FSEO has been tested against A. niger mycelial growth. FSEO reduced the mycelial growth of A. niger, as there was no fungal sporulation on the 5th day in the FSEO-treated sample compared to the control without FSEO. Light microscopic examinations supported these findings. Microscopic observation of A. niger hyphae exposed to FSEO showed hyphal morphological changes compared to the normal morphology of control hyphae (Fig. 4). Compared with the thick, elongated and normal mycelial growth in controls (Fig. 4a-d), hyphae appeared thinner with cytoplasmic coagulation and looked empty, as if the hyphal cells had been drained of cytoplasm and organelles (Fig. 4f-h). We did not observe conidiospores under the microscope (Fig. 4f,g). The hyphal morphological changes can be attributed to the mechanism of action of the volatile oil 39,79 , which may result from the lipophilic character of EOs that gives them the ability to easily penetrate the fungal mycelia, causing loss of cell integrity and deformation of the fungal mycelia. Secondly, the effect of their significant components might increase the plasma membrane's permeability, resulting in hyphal function disorders and deformation.
Conclusion
For the first time, the current study presents variability in the FSEO concentration and chemical composition of Egyptian F. vulgare seeds grown in saline calcareous soil treated with OM and biofertilization, as well as their combinations. High oil yield and a higher content of the medicinal and culinary compounds (E)-anethole (78.9-91.4%), fenchone (3.35-10.19%), limonene (2.94-8.52%), and estragole (0.50-4.29%) were observed in F. vulgare fertilized with any of the triple combinations. According to the DPPH assay, the antioxidant activity of FSEO treated with PM + LP + LL and FM + LP + LL was four- and three-times higher than that of l-ascorbic acid, respectively. Compared to other treatments, FSEO treated with triple combinations showed relatively superior
Figure 1. Flow chart of the hydro-distillation of the fennel seed oil production process.
Figure 4. Effect of FSEO on the mould, Aspergillus niger, under the light microscope. A. niger growing in plates to determine the impact (a-d) without (control) or (e-h) with FSEO on the (a,e) morphological characteristics of hyphae; (b,f) sporangiophores; (c,g) number of spores; and (d,h) thickness and elongation of hyphae. Note that FSEO showed (e) inhibition of fungal growth (white arrow) and sporulation (red arrow); (f) deformed sporangiophore; (g) absence of spores, and (h) cytoplasmic coagulation in hyphae. Light micrographs of A. niger hyphae exposed to FSEO at ×40. FSEO, fennel seed essential oil.
Table 4. Effect of FSEO on the mycelial growth of fungal isolates. Treatments were: (1) C, control, no seed or soil treatment; (2) FM, soil treatment with farmyard manure; (3) FM + LP, soil treatment with farmyard manure + seed treatment with Lactobacillus plantarum; (4) FM + LL, soil treatment with farmyard manure + seed treatment with Lactococcus lactis; (5) FM + LP + LL, soil treatment with farmyard manure + seed treatment with Lactobacillus plantarum + Lactococcus lactis; (6) PM, soil treatment with poultry manure; (7) PM + LP, soil treatment with poultry manure + seed treatment with Lactobacillus plantarum; (8) PM + LL, soil treatment with poultry manure + seed treatment with Lactococcus lactis; (9) PM + LP + LL, soil treatment with poultry manure + seed treatment with Lactobacillus plantarum + Lactococcus lactis. Values with the same letter within a column for each treatment are not significantly (P > 0.05) different according to Duncan's multiple range test. FSEO, fennel seed essential oil; CI, complete inhibition; NA, no activity.
|
2023-08-27T06:17:52.971Z
|
2023-08-25T00:00:00.000
|
{
"year": 2023,
"sha1": "3316d44986ef70b576ff89093199caefadc18b2e",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-023-40579-7.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "61a59f75c99353e1133ec6dcaf5203b8cceb73de",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
209546288
|
pes2o/s2orc
|
v3-fos-license
|
ReXCam: Resource-Efficient, Cross-Camera Video Analytics at Scale
Enterprises are increasingly deploying large camera networks for video analytics. Many target applications entail a common problem template: searching for and tracking an object or activity of interest (e.g. a speeding vehicle, a break-in) through a large camera network in live video. Such cross-camera analytics is compute and data intensive, with cost growing with the number of cameras and time. To address this cost challenge, we present ReXCam, a new system for efficient cross-camera video analytics. ReXCam exploits spatial and temporal locality in the dynamics of real camera networks to guide its inference-time search for a query identity. In an offline profiling phase, ReXCam builds a cross-camera correlation model that encodes the locality observed in historical traffic patterns. At inference time, ReXCam applies this model to filter frames that are not spatially and temporally correlated with the query identity's current position. In the case of occasional missed detections, ReXCam performs a fast-replay search on recently filtered video frames, enabling graceful recovery. Together, these techniques allow ReXCam to reduce compute workload by 8.3x on an 8-camera dataset, and by 23x - 38x on a simulated 130-camera dataset. ReXCam has been implemented and deployed on a testbed of 5 AWS DeepLens cameras.
Introduction
The Internet of Things (IoT) has led to an explosion of data sources, and applications that rely on real-time inferences over these data. In parallel, the models making these inferences have improved in accuracy, even surpassing humans for certain vision tasks, but at increased resource cost. This work addresses the systems challenges of scaling up IoT applications to enable live video analytics on a fleet of cameras.
Figure 1: Cameras (on the y-axis) are plotted according to their mutual distances, e.g., c1 and c2 are spatially closer than c1 and c3. In searching for a query identity starting at frame t (marked in dark red), ReXCam eliminates some cameras entirely (spatial filtering), as well as frames t + 2 and t + 3 (temporal filtering). In this example, ReXCam searches first on c1, c2, and c3 (but not c4), finds the target vehicle in c3, and then searches only on c2 and c4 (but not c1 and c3). The cameras and the times at which they are searched are marked in green. The unmarked portions represent compute savings.

Live video analytics over a fleet of camera feeds embodies two key trends-massive data sources and compute-intensive inference (e.g., neural nets). On the one hand, enterprises deploy large camera networks for public safety and business intelligence [11]. For instance, Chicago and London police access footage from 30,000 and 12,000 cameras to respond to crimes in real time [4,5]. On the other hand, many applications rely crucially on cross-camera video analytics, i.e., detecting, associating and tracking queried "identities" in the live streams as these identities move across the camera feeds over time (e.g., high-value shoppers in a store [8,34] or suspects in a city [46,66]). However, cross-camera analytics applications are computationally more challenging than "stateless" single-camera vision tasks (such as object detection in one camera feed) as they entail discovering associations across frames and across cameras. Their compute cost thus grows with the number of cameras. Prior work falls short of addressing this challenge. Work in computer vision improves accuracy of cross-camera analytics (e.g., [55,58,70]), but it has largely ignored the prohibitive compute costs. Recent systems have accelerated analytics on live videos via frame sampling and/or cascaded filters for discarding frames [25,28,37,40,63,65]. However, they share a key drawback that they optimize the execution of analytics on single video feeds, independent of the other streams. Thus, the compute cost of cross-camera analytics still grows with more deployed cameras and longer activity time. Spatio-temporal correlations: Our main insight is that the cost of cross-camera analytics can be drastically reduced by exploiting the physical correlations of objects among the camera streams. We develop ReXCam, a cross-camera analytics system that leverages inherent spatio-temporal correlations to aggressively prune the set of camera streams to be processed, thus decreasing compute costs. In the ideal case, ReXCam reduces cost to the number of cameras that the queried object appears in at any point in time and not the total number of deployed cameras. A key property of cross-camera applications is that objects of interest appear only in a small number of cameras at any time, even in large camera deployments.
Spatial correlations indicate geographical association between cameras -the probability that objects seen in a source camera will move next to a particular destination camera's field of view. Temporal correlations indicate association between cameras over time -the probability that objects seen in a source camera will move next to a destination camera's view at a particular time. These spatio-temporal correlations enable ReXCam to guide its cross-camera inference search toward cameras and frames most likely to contain the query identity (see Figure 1). ReXCam's use of spatio-temporal correlations to cut the cost of cross-camera analytics is fundamentally different than the cross-camera correlations used by recent work (e.g., [37]) that optimizes the resource-accuracy profiling but not the live video analytics itself, which still executes on each stream independently. Challenges: ReXCam, at its core, applies the physical properties in the IoT world (spatio-temporal correlations across cameras) to high-level AI applications (cross-camera video analytics). This has led to three main challenges. First, automatically obtaining spatio-temporal correlations is expensive on unlabeled video data. Second, applying spatio-temporal correlations to existing single-camera inference modules (e.g., object trackers) is non-trivial and requires clean abstractions with the necessary system supports. Finally, any spatio-temporal profile is bound to have errors that will lead to missing objects, which need to be detected and rectified efficiently.
To tackle these challenges, ReXCam operates in three distinct phases. 1) In an offline profiling phase, it constructs a cross-camera spatio-temporal correlation model from unlabeled video data, which encodes the locality observed in historical traffic patterns. This is an expensive one-time operation that requires detecting entities with an offline tracker, and then converting them into an aggregate profile of cross-camera correlations. 2) At inference time, ReXCam uses this spatio-temporal model to filter out cameras that are not correlated to the query identity's current position (camera), and are thus unlikely to contain its next instance. 3) Occasionally, this filtering will cause ReXCam to miss query detections. In these cases, ReXCam performs a fast-replay search on recently filtered frames (that it stores), uncovers the missed query instances, and gracefully recovers into its live search. Evaluation Highlights: We evaluate ReXCam using the well-studied DukeMTMC video data [55] from the Duke campus. On this 8-camera dataset, ReXCam saves compute cost by 8.3× over a correlation-agnostic baseline (∼ 90% of the ideal savings). These savings come at a drop in recall of only 1.6%. We also use a simulated dataset of 130 cameras in Porto (using GPS trajectories) [10], and report savings of 23×-38×. Interestingly, ReXCam improves precision by 39%, perhaps because the spatio-temporal pruning acts as a "low pass filter". Finally, we have implemented and deployed ReXCam on a small testbed of 5 AWS DeepLens smart cameras [13]. Contributions: Our work makes three main contributions. 1) We quantify the potential for harnessing spatio-temporal correlations in cross-camera video analytics. 2) We build a cross-camera video analytics system that learns and applies spatio-temporal profiles on live videos. 3) We develop robust error-handling mechanisms to avoid missed detections by storing and searching on recent videos.
Motivation and Background
We explain some example cross-camera video analytics applications ( §2.1), the modules in their analytics pipelines ( §2.2), and then the compute models for video analytics ( §2.3).
Cross-camera analytics applications
Large camera networks are installed in cities (such as London, Beijing, and Chicago), transport facilities (traffic intersections, airports), and enterprise campuses (corporate offices, retail shops) [1,5,12,66]. A common class of applications in these camera deployments relies on re-identifying and following objects (e.g., people or vehicles) as they move across the views of the different cameras. The focus is on following select "objects of interest" that are typically provided by external entities (such as law enforcement). A key characteristic of cross-camera applications is that objects of interest occur only in a small fraction of the cameras at any given time. 1) Public safety. Cross-camera video analytics helps localize suspects after a security breach. For example, after a reported incident of a person pulling out a gun inside an office building, we will want to track that person (whose image can be obtained from the camera footage) across the cameras in the building while security personnel are dispatched.
Alternatively, after a major public attack (e.g., in a train), law enforcement may track the accomplices of the identified perpetrator, which may be obtained from police databases that store people frequently associated with the perpetrator [66]. Following these accomplices across the thousands of cameras in the city allows for effective police apprehension.
2) Vehicle tracking in traffic cameras. In the U.S. and Europe, AMBER alerts are raised on suspected child abductions [2]. The license plate and vehicle details are obtained from investigations, and alerts are broadcast to citizens in the area [2]. Tracking of the suspect's vehicle across the thousands of cameras on highways and city streets can keep tabs on the suspect and victim, even as police intervene [46].
Likewise, when traffic police notice a vehicle speeding or making a dangerous maneuver, they will note its details and will be interested in tracking the vehicle as it moves across the city using cross-camera analytics to assess its behavior.
3) Retail store cameras. Using computer vision to improve the shopping experience is a big thrust among retailers. "Special" shoppers (e.g., loyal customers, or customers in wheelchairs) are identified as they enter the store, and cross-camera analytics can be used to track them across the hundreds of cameras in the store to make sure they are provided timely attention (e.g., dispatching a store representative) when necessary.
Video analytics pipelines
Video analytics pipelines for cross-camera applications (in §2.1) typically consist of a series of modules on the decoded frames of the video stream: (1) an object detection module, which extracts and classifies objects of interest in each video frame (e.g., people, gun), and (2) a re-identification module, which, given a query image (e.g., of a person), returns positions of co-identical instances of the query in subsequent frames (if present). Cross-camera analytics pipelines detect objects in each camera, and track the objects across cameras. Core to this pipeline is the vision primitive of identity re-identification [39,50,56]. Given an image of a query identity q, a re-identification (re-id) algorithm ranks every image in a gallery G based on its feature distance to q; the lower the distance the higher the similarity (Figure 2). Typically, features are the intermediate representation of a neural network trained to associate instances of co-identical entities. Object detection and re-id are the most challenging steps of cross-camera video analytics -in terms of cost and accuracy -and our work focuses on improving both of them. Cost. Tracking in large camera networks is computationally expensive. Tracking even a single object of interest through a camera network, after an initial detection, can potentially require analyzing every subsequent frame in every camera (without good heuristics for geographic localization). 1 Accuracy. Re-id is a non-trivial problem in computer vision [59,68], being particularly difficult in crowded scenes and in large camera networks due to significant differences in lighting and viewpoint across cameras. Often, re-id models must rely on weak signals (like clothing), which makes re-identification difficult among a large gallery of objects in a frame.
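As a rough sketch of the re-id primitive described above (not the authors' actual model), the gallery ranking step can be expressed as a nearest-neighbour search over embedding vectors; the embedding dimension and data below are placeholders.

import numpy as np

def rank_gallery(query_feat, gallery_feats):
    """Rank gallery images by L2 feature distance to the query (ascending)."""
    dists = np.linalg.norm(gallery_feats - query_feat, axis=1)
    order = np.argsort(dists)           # index 0 is the most similar gallery image
    return order, dists[order]

# Placeholder 128-d embeddings standing in for the output of a re-id network.
rng = np.random.default_rng(0)
query_feat = rng.normal(size=128)
gallery_feats = rng.normal(size=(1000, 128))

order, dists = rank_gallery(query_feat, gallery_feats)
print("top-5 gallery indices:", order[:5])
print("top-5 distances:", np.round(dists[:5], 3))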
Our use of spatio-temporal correlations to prune the video frames to analyze -i.e., run object detection and re-id -significantly cuts down the inference space, thus improving both cost as well as accuracy. While our focus is on cross-camera applications, we also show how spatio-temporal correlations improve the cost of even single-camera applications ( §5.4).
Setup and compute model
Consistent with existing deployments [23,29,47], our focus is on "edge" computation of video analytics. In our setup, all the cameras are in a high-speed local network with sufficient bandwidth to an edge compute box (e.g., Azure Data Box Edge [3]) that is managed by the enterprise (that has deployed the cameras). For example, cameras in an office building are analyzed in an edge box located in the same building. Traffic cameras in a city are analyzed in the local traffic command center [45]. Videos are streamed to this edge box and the pipeline modules ( §2.2) including object detection and re-id are run on this edge. Reducing the compute load enables more video feeds to be processed on the edge box or alternately reduces the resources to be provisioned.
Our ideas also readily apply to a network of AI cameras (as we implement and deploy in §7), each of which consists of on-board compute, accelerators (e.g., GPUs), and storage [13,53]. Our techniques will enable each camera to be provisioned with much lower resources, thus lowering their cost.
Quantifying spatio-temporal correlations
We analyze the potential of using spatio-temporal correlations for cross-camera video analytics using the DukeMTMC dataset [55]. We study cross-camera identity tracking that involves tracking an object of interest, in real time, through a camera network. In particular, given an instance of a query identity q (e.g., a person) flagged in camera c q at frame f , we return all subsequent frames, across all cameras, in which q appears as it moves around. We measure the reduction in compute, i.e., the number of frames on which object detection and re-id operations ( §2.2) are executed.
Empirical analysis on cross-camera correlations
We now present an empirical study to quantify the cross-camera correlations in the DukeMTMC dataset [55], one of the most popular benchmarks in computer vision for person re-id and tracking [60,67]. This quantification motivates our design of a video analytics system that leverages such correlations to improve the performance of cross-camera analytics. The DukeMTMC dataset contains footage from eight cameras placed on the Duke University campus (see Figure 3), in an area with significant pedestrian traffic. The fields of view of the cameras mostly do not intersect, but the cameras are placed close enough that people frequently appear in multiple cameras, as is typical in camera deployments. The dataset contains over 2,700 unique identities across 85 minutes of footage, recorded at 60 frames per second [55].
Spatial correlation.
Figure 4: Spatial correlations in the DukeMTMC dataset [55]. Cells display the % of outbound traffic (individuals) from each camera that appears at other cameras. Each row corresponds to a particular source camera and each column to a destination camera; each row's values add up to 100%. The final column represents traffic that exits the camera network.

Cross-camera movement of individuals (or "traffic") demonstrates a high degree of spatial correlation. Here, "traffic" between cameras A and B is defined as the set of unique individuals detected in camera A that are next detected in camera B. (Note that a person that moves from A to B via camera C is excluded from the traffic count of A → B and instead included in the A → C traffic count.) We find that individuals seen at a camera c q move next to only a small number of c q 's peer cameras. On the 8-camera DukeMTMC dataset, only 1.9 of 7 potential peer cameras, on average, receive even 5% of the total outbound traffic (or individuals) from a given camera. Figure 4 shows the full pair-wise spatial correlations. Exploiting this insight can significantly reduce our compute workload, at little cost to accuracy, when searching for a query identity q (e.g., a person) that was first detected in camera c q . In comparison to a scheme that searches all n − 1 peers, a smarter scheme that searches only those camera feeds that receive at least 5% of the traffic from c q reduces our compute by 3.7× (we search only 1.9 cameras instead of 7, or 3.7× fewer frames to run object detection and re-id; see §2.2), while still capturing 95% of all detections as per our experiments.
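To make the arithmetic behind this estimate concrete, the following sketch derives the expected compute savings of a 5% spatial filter from a pairwise traffic matrix of the kind shown in Figure 4; the matrix values are illustrative, not the DukeMTMC numbers.

import numpy as np

# Hypothetical row-stochastic traffic matrix: entry [s, d] is the fraction of
# outbound traffic from source camera s that next appears at destination camera d.
traffic = np.array([
    [0.00, 0.55, 0.30, 0.15],
    [0.40, 0.00, 0.50, 0.10],
    [0.02, 0.65, 0.00, 0.33],
    [0.20, 0.03, 0.77, 0.00],
])

s_thresh = 0.05   # search a destination only if it receives >= 5% of the source's traffic

searched = traffic >= s_thresh
cameras_searched = searched.sum(axis=1)          # per source camera
coverage = (traffic * searched).sum(axis=1)      # fraction of traffic still captured

n_peers = traffic.shape[0] - 1
print("avg cameras searched:", cameras_searched.mean(), "of", n_peers)
print("compute reduction: %.1fx" % (n_peers / cameras_searched.mean()))
print("traffic captured per source:", np.round(coverage, 2))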
An interesting aspect is that geographical proximity is not necessarily a good spatial filter. Consider camera-5 (Figure 4), out of which a significant fraction of individuals (traffic) go to cameras 2 and 6 but not to 7 or 8, even though the latter are also spatially proximate. Likewise, little traffic moves out of camera 8 to cameras 2 and 5, even though these are physically proximate. Thus, learning these patterns in a data-driven fashion is a more robust approach (as we will quantify in §8.2). Data-driven learning also allows us to capture asymmetry in the traffic patterns between cameras; for e.g., over 50% of traffic from camera-7 moves to camera-6 but less than 25% of traffic moves in the reverse direction from camera-6 to 7.

Figure 5: Temporal correlations in the DukeMTMC dataset [55] (for two example destination cameras 2 and 4). Plots display the distribution of inter-camera travel times. Each plot corresponds to traffic to the particular destination camera. Each colored line represents a particular source camera.
Temporal correlation.
Cross-camera traffic also demonstrates a high degree of temporal correlation. As Figure 5 shows, travel times of individuals between a particular source camera and a destination camera in the DukeMTMC dataset are highly correlated. This is explained by the fact that these are static cameras and thus their pairwise distances are also static. Thus, for a given pair of cameras, the travel times for people to leave the feed of one camera and appear in the other camera are likely to be clustered around a mean value. In the DukeMTMC dataset, the average travel time between all camera pairs is 44.2s, and the standard deviation is only 10.3s (or only 23% of the mean).
Exploiting temporal correlations, even on its own, has the potential to provide compute savings. Given the task of locating a given query identity q, first identified in camera c q , in one of the n − 1 possible destination camera streams, we can simply search each of the n − 1 streams (ignoring spatial correlations) but only for the time window when the query identities are most likely to show up. We probabilistically set the time window to be when at least 98% of the objects appear. Such an approach has the potential to reduce our compute load by 7.5× compared to a naive approach that does not use such a (time) windowed search. This shows the considerable potential in leveraging the tight distribution of travel times of individuals between the views of the cameras.
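A small sketch of how such a time window could be derived from historical travel-time samples is shown below; the samples are synthetic values loosely matching the reported mean and standard deviation, and the naive search horizon is an assumption, not a number from the paper.

import numpy as np

# Synthetic travel times (seconds) from a source camera to one destination camera.
rng = np.random.default_rng(1)
travel_times = rng.normal(loc=44.2, scale=10.3, size=500)

# Time window covering 98% of historical arrivals (1% trimmed from each tail).
t1, t2 = np.percentile(travel_times, [1, 99])

fps = 60
max_wait = 180.0                      # seconds a naive scheme would keep searching (assumed)
frames_naive = max_wait * fps
frames_windowed = (t2 - t1) * fps
print(f"search window: [{t1:.1f}s, {t2:.1f}s] after departure")
print(f"compute reduction vs naive: {frames_naive / frames_windowed:.1f}x")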
Potential gains: spatial & temporal correlations
We now put together the gains due to spatial and temporal filtering combined, over a baseline that searches all n − 1 cameras (for a maximum duration). We assume ideal knowledge about the spatial correlations between the cameras as well as the temporal characteristics of travel times of individuals between the views of the cameras. Using the same thresholds as in §3.1, our analysis shows a potential gain of 9.4× savings in the compute cost. This encouraging potential for savings, even for an 8-camera dataset, motivates us to both learn and exploit the spatio-temporal correlations for cross-camera video analytics. As we will show in §8, ReXCam achieves an 8.3× reduction in compute cost, which is ∼ 90% of the potential. In addition, the filtering of frames to search also improves the precision of the results from 51% for the baseline approach to 90% with ReXCam, with little drop in recall.
ReXCam Overview
Building upon the strong spatial/temporal correlations across cameras seen in §3, we develop ReXCam, a resource-efficient cross-camera analytics system that leverages the correlations across cameras to reduce computing cost. As depicted in Figure 6, ReXCam provides two core functions for cross-camera video analytics applications.
The spatio-temporal model ( §5.1) describes the spatial and temporal correlation between cameras, and can be queried by applications. At a high level, one can query the model with two cameras, c s and c d , and a time window, and it will return how likely it is that an object leaving c s will appear in c d (i.e., the spatial correlation) and, if it appears in c d , how likely it is to appear within the time window (i.e., the temporal correlation). The forward and replay analysis ( §5.2 and §5.3) perform real-time inference on live videos (i.e., forward) as well as inference on historical video (i.e., replay). Both capabilities operate jointly, and replay search is inherently needed for spatio-temporal pruning: ignoring a camera due to weak spatial/temporal correlation will inevitably introduce false negatives that a baseline of searching all cameras would have avoided, so ReXCam provides the abstraction of replay search to allow faster-than-real-time search over some historical video (that was ignored) for error correction.
In §5.2 we demonstrate how cross-camera identity tracking (tracking an identity across cameras over time from a known starting point) is performed using spatio-temporal pruning. We also show the generality of ReXCam's functionality by applying spatio-temporal pruning to cross-camera identity detection (finding a queried identity, e.g., a lost child, in a large camera deployment) in §5.4, which is both an important single-camera application in its own right and ties into cross-camera identity tracking by providing it the starting point for its tracking.
Spatio-temporal correlations in ReXCam
We now describe ReXCam's solution for leveraging spatio-temporal correlations in cross-camera video analytics.
Defining the spatio-temporal model
Figure 6: Architecture of ReXCam (shared functions, forward and replay analysis, and the cameras with their underlying compute resources).

ReXCam builds upon the cross-camera correlations in §3. 1) Spatial correlations capture associations between camera pairs arising from the movement of traffic (individuals) between the views of the camera streams. The degree of spatial correlation S between two cameras c s , c d is quantified by the ratio of: (a) the number of individuals leaving the source camera's stream for the destination camera, n(c s , c d ), to (b) the total number of entities leaving the source camera:

S(c s , c d ) = n(c s , c d ) / Σ i n(c s , c i )

When a large fraction of individuals that leave c s 's view are seen next in a camera c i , we say that c i is highly correlated to camera c s . Note that S may be asymmetric (as seen in our analysis in §3.1.1); camera c s may not be highly correlated with camera c i , even if the converse is true. In cross-camera identity search, ReXCam exploits spatial correlations by prioritizing cameras that are highly correlated to the last camera where the queried identity q was spotted (called the query camera).
2) Temporal correlations capture associations between camera pairs over time. If a large fraction of the traffic leaving camera c_s for camera c_d arrives within durations t_1 and t_2, then camera c_d is said to be highly correlated in the time window [t_1, t_2] with camera c_s. The degree of temporal correlation T between two cameras c_s, c_d during a window [t_1, t_2] is the ratio of (a) the number of individuals reaching c_d from c_s within the window [t_1, t_2], n(c_s, c_d, [t_1, t_2]), to (b) the total number of individuals reaching c_d from c_s: T(c_s, c_d, [t_1, t_2]) = n(c_s, c_d, [t_1, t_2]) / n(c_s, c_d). Indeed, cameras in real-world deployments have substantial temporal correlation ( §3.1.2). In cross-camera identity search, ReXCam exploits temporal correlations by prioritizing the time window [t_1, t_2] in which a destination camera is most correlated with the query camera.
Spatio-temporal model: Given a source camera c_s, the current frame index f_curr (which serves as a timestamp), and a destination camera c_d, our proposed spatio-temporal model M outputs true if c_d is both spatially and temporally correlated with c_s at f_curr, and false otherwise. The thresholds for being spatially correlated with c_s, and temporally correlated with c_s at time f_curr, are model parameters. As an example, we may first wish to search cameras receiving at least s_thresh = 5% of traffic from c_s, during the time window containing the first 1 − t_thresh = 98% of traffic from c_s. These parameter settings exclude both outlier cameras (cameras receiving less than 5% of the traffic from c_s) and outlier frames (frames containing the last 2% of the traffic from c_s). Defining s_thresh and t_thresh as a percentage of traffic (or individuals) translates directly into the precision and recall of the entities being tracked. M is formally defined as: M(c_s, c_d, f_curr) = true if and only if S(c_s, c_d) ≥ s_thresh, f_curr ≥ f_0, and T(c_s, c_d, [f_0, f_curr]) < 1 − t_thresh. (1) Here f_0 is the frame index at which the first historical arrival at c_d from c_s was recorded. The reason for having f_0 is that it takes time to travel from c_s to c_d, and cost savings can be maximized by not searching frames while objects are still moving between cameras. As a result, our temporal filter checks whether the volume of historical traffic that arrived at c_d within [f_0, f_curr] is less than 1 − t_thresh of the total traffic. This ensures that f_curr falls in the "dense" part of the travel time distribution, where we are likely to find q. (Note that we must check that f_curr ≥ f_0; when f_curr < f_0, M is false.) Figure 7 shows an illustration of using M with f_0 values for each destination camera. (We construct the model M in §6.)
Search hits and misses: Leveraging the spatio-temporal model M allows us to explore the subset of the inference space (camera streams and time windows) that is most likely to contain q. A "hit" reduces cost, as we avoid searching the entire space. On the (rare) misses, we go back and find q in the past video frames over all the camera streams we had filtered out using M. In §5.3, we explain how we handle misses and mitigate the delay they introduce. Maximizing the cost savings from hits and minimizing the miss-induced delays is a tradeoff controlled by the parameters s_thresh and t_thresh.
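To make the model query concrete, the sketch below shows one possible in-memory realization of M, built from pairwise transition counts and per-pair travel-time histograms (as produced by the profiling step of §6). The class name, field layout, and the explicit f_query argument are illustrative choices, not ReXCam's actual API.

from bisect import bisect_right

class SpatioTemporalModel:
    """Illustrative realization of the spatio-temporal model M (Eq. 1)."""

    def __init__(self, pair_counts, travel_frames, s_thresh=0.05, t_thresh=0.02):
        # pair_counts[(cs, cd)]   -> number of individuals moving cs -> cd
        # travel_frames[(cs, cd)] -> sorted list of historical travel times (in frames) cs -> cd
        self.pair_counts = pair_counts
        self.travel_frames = travel_frames
        self.s_thresh = s_thresh
        self.t_thresh = t_thresh

    def spatial_corr(self, cs, cd):
        # S(cs, cd): fraction of traffic leaving cs that next appears at cd
        total = sum(n for (a, _), n in self.pair_counts.items() if a == cs)
        if total == 0:
            return False
        return self.pair_counts.get((cs, cd), 0) / total >= self.s_thresh

    def temporal_corr(self, cs, cd, f_query, f_curr):
        # f_query: frame at which the object was last seen leaving cs (assumed available)
        times = self.travel_frames.get((cs, cd), [])
        if not times:
            return False
        f0 = f_query + times[0]            # earliest historical arrival at cd
        if f_curr < f0:                    # the object cannot have arrived yet
            return False
        elapsed = f_curr - f_query
        arrived = bisect_right(times, elapsed)   # historical traffic that would have arrived by now
        return arrived / len(times) < 1 - self.t_thresh

    def query(self, cs, cd, f_query, f_curr):
        return self.spatial_corr(cs, cd) and self.temporal_corr(cs, cd, f_query, f_curr)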
Algorithm 1 Cross-camera identity tracking (only fragments of the listing survive extraction; the line numbers are those referenced in the text):
Input: spatial filter sp_corr(c_s, c_d) → {true, false}; temporal filter tp_corr(c_s, c_d, f) → {true, false} (lines 2-3)
4: for each query (q, f_q, c_q) ∈ Q do
5:   q_feat = features(q)        (extract image features)
6:   f_curr = f_q + 1            (init current frame index)
7:   M_q = []                    (init query match array)
8:   phase = 1                   (start phase one)
9:   f_curr = f_q + 1            (reset frame index)
     ... (lines 10-22: spatio-temporal filtering, detection and ranking, match update, and exit check; see §5.2)
23:  sp_corr = relax(sp_corr)
24:  tp_corr = relax(tp_corr)
25:  phase = 2                   (start phase two)
26: output: matched detections {M_q}
Cross-camera identity tracking
Algorithm 1 explains our cross-camera identity tracking. In cross-camera identity tracking, the input consists of a query image q, last seen in frame f_q on camera c_q. (If the input does not contain the frame f_q, we can first run the next application, multi-camera identity detection, to locate it.) The goal is to flag all subsequent frames, on all cameras, where q appears. Note that q can appear again on the same camera (c = c_q), on different cameras (c ≠ c_q), or else exit the network altogether.
For each query q, we begin by extracting image features q_feat and initializing an empty array of discovered matches M_q. For each frame, as explained in §2.2, we: (1) extract individuals (objects) from each frame using an object detection model, and (2) rank the objects based on their feature similarity distance to q using a re-id model (Figure 2). If the top-ranked detection is within a threshold (match_thresh in Algorithm 1), i.e., a co-identical instance is found by the re-id model, we add the detection to our array of matches M_q, update our query representation q_feat to incorporate the features of the new instance of q, update the query frame index f_q to f_curr, and proceed with tracking q (lines 14-18). We continue searching until the gap between the last detected instance of q and our current frame index exceeds a pre-defined exit threshold (exit_t in Algorithm 1). At this point, we conclude that q must have exited the camera network, and cease tracking q.
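A rough sketch of this per-frame matching step follows; the detect_people and embed helpers stand in for the object detection and re-id models, and the running-average update of q_feat is only one possible choice, since the exact update rule is not specified above.

import numpy as np

def match_in_frame(frame, q_feat, detect_people, embed, match_thresh=0.7):
    """Return (matched_detection, updated_q_feat), or (None, q_feat) when no match is found."""
    detections = detect_people(frame)                       # object detection model (assumed helper)
    if not detections:
        return None, q_feat
    feats = np.stack([embed(d.crop) for d in detections])   # re-id features of each detection
    dists = np.linalg.norm(feats - q_feat, axis=1)           # similarity distance to the query
    best = int(np.argmin(dists))
    if dists[best] > match_thresh:                           # no co-identical instance in this frame
        return None, q_feat
    q_feat = 0.5 * (q_feat + feats[best])                    # fold the new instance into the query representation
    return detections[best], q_feat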
We apply the spatio-temporal model to cross-camera tracking as follows (marked in blue in Algorithm 1). The model M has two filters (lines 2 and 3): (1) spatial_corr(c_s, c_d), which given a source camera c_s and a destination camera c_d returns true if c_d is correlated with c_s, and (2) temporal_corr(c_s, c_d, f), which given a source camera c_s, a destination camera c_d, and a frame index f, returns true if c_d is correlated with c_s at f. At query time, these two functions are passed to the filter function (line 10), which, given a list of video feeds V, returns the subset of cameras (V_corr) that are both spatially and temporally correlated with c_q at f_curr.
Applying filter reduces the inference search space, at each frame step f_curr, from all entity detections at f_curr on every camera to all entity detections at f_curr on correlated cameras. This allows us to abstain from running object detection and feature extraction models on non-correlated cameras, and reduces the size of the re-id gallery in the ranking step. If filter in Algorithm 1 were applied to the example in Figure 7, the set V_corr would be only C1 in the times [0, 10], only C2 in the times [10, 20], and the empty set at all other times.
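A minimal sketch of the filter step (line 10 in Algorithm 1) under these definitions; the (camera_id, feed) representation and the decision to always keep the query camera are our own simplifications, not necessarily ReXCam's exact behavior.

def filter_feeds(feeds, c_q, f_curr, spatial_corr, temporal_corr):
    """Return the subset V_corr of feeds correlated with the query camera c_q at frame f_curr."""
    v_corr = []
    for cam_id, feed in feeds:
        if cam_id == c_q:
            v_corr.append((cam_id, feed))   # assumption: the query camera itself is always searched
            continue
        if spatial_corr(c_q, cam_id) and temporal_corr(c_q, cam_id, f_curr):
            v_corr.append((cam_id, feed))
    return v_corr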
Handling pruning errors via replay search
Spatio-temporal pruning may cause a drop in recall: missing actual occurrences of the query identity q, which would be discovered by a baseline that exhaustively searches all the frames of all the cameras. When tracking on the spatially filtered cameras does not discover q after exit_t time (line 22 in Algorithm 1), we initiate a "second pass" through the video frames that we skipped; we call this replay search. Replay subset: We initiate replay search on a broader subset of cameras and timespans. In particular, we go back to the last camera on which the queried identity was seen, c_q (i.e., restart the tracking procedure from f_curr = f_q + 1, line 23, as f_q was the last frame in which the queried object was seen), and find all the cameras and time windows that c_q is correlated with using the spatio-temporal profile, but now with the thresholds s_thresh and t_thresh decreased by a factor of 10. If we do discover an instance of q, we proceed with tracking from that detection, initiating a new phase one in Algorithm 1. If we still do not, we search the entire camera network until the exit threshold is reached.
Note that despite relaxing s_thresh and t_thresh, the cameras over which we perform replay search will still be only a small fraction of the overall camera network, and only for a small duration in the past. This is because a vast majority of cameras (in a large deployment) will never have seen traffic (individuals) from c_q. Implicit to replay search is also the ability to store videos from the past. However, this only needs to be for the last few minutes (a few hundred MBs even for HD videos). Replay delay: Searching on videos from the past means that we are lagging behind in tracking the identity. Thus, it is desirable to speed up the search process. ReXCam processes the historical videos faster than real time. a) Skip frame mode - Process the historical videos at a lower frame rate (via frame sampling) and lower resolution (via frame downsizing) to increase the processing rate, at potentially lower accuracy. We use offline profiling [63,65] to decide the frame rates and resolutions that limit the drop in accuracy. b) Parallelism mode - Process the historical videos by parallelizing them across other cameras or edge machines (depending on the setup; §2.3) that are idle. As explained above, the broader replay search is likely still only a small subset of all the videos, so spare resources will be available.
We implement both solutions and investigate their tradeoffs on accuracy and delay in our evaluation ( §8.3).
Multi-camera identity detection
While our focus thus far has been on cross-camera video analytics, spatio-temporal models can also be applied to reduce the cost of single-camera analytics, e.g., finding a lost baby or a lost car in a mall's or city's cameras. This involves running object detectors independently on each camera stream, and is expensive for large camera deployments. In this section, we apply our cross-camera spatio-temporal model ( §5.1) to such single-camera "identity detection". Not only is it an application of wide relevance on its own, it also ties closely with cross-camera tracking ( §5.2) by providing it with the starting point of the query q (which we have been referring to as camera c_q).
Identity detection refers to finding a given identity q (e.g., an image of a lost baby or a suspect) in many camera streams. The intuition for why the spatio-temporal model helps is that if q is not found in camera C1, and the spatio-temporal model indicates that most objects appearing in camera C2 have recently appeared in C1, then camera C2 is unlikely to contain q. In other words, the model allows us to prune the cameras and time windows in which q is unlikely to be found, based on when and where q was not found earlier. At any point in time, we maintain, for each camera, the probability that it contains an object that has not been "scanned" (i.e., not found in the camera feeds we have searched so far). The cameras with high values of this probability are prioritized in the search.
Formally, we define P_{c,w} to be the probability of any unscanned object (i.e., an object that did not appear in any camera when it was searched) appearing in camera c in time window w. Thus, the greater P_{c,w} is, the more likely searching camera c in window w is to yield a "hit". We also define P*_c as the probability of the identity entering the whole camera network at camera c at any point in time. We estimate this value by looking at the history trace and dividing the number of objects that appear in camera c first by the total number of objects. Then P_{c,0} = P*_c, and P_{c,w} with w > 0 can be derived iteratively by the following equation: P_{c,w} = P*_c + Σ_{c_i} Σ_{w_j < w} I_{c_i,w_j} · P_{c_i,w_j} · S(c_i, c) · T(c_i, c, w), where I_{c_i,w_j} is a binary flag indicating whether camera c_i was searched at time window w_j (I_{c_i,w_j} = 0) or not (I_{c_i,w_j} = 1). The equation can be intuitively interpreted as follows: the probability of the query object q appearing in camera c and time window w is the sum of the probability of it entering the whole network at c (i.e., P*_c) and the probability of q moving to c from another camera c_i at time w_j, i.e., I_{c_i,w_j} · P_{c_i,w_j} · S(c_i, c) · T(c_i, c, w).
At any point in time, we search any camera c and time window w whose P_{c,w} is greater than a threshold θ. If the identity is found, the search ends. Otherwise, we set the corresponding I_{c,w} = 0 and update the other P_{c,w}. This is repeated until we find the queried identity. §8.5 evaluates our gains with identity detection.
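The iterative update just described can be sketched as follows, assuming S and T are the correlation functions of §5.1, P_star holds the per-camera entry probabilities, and windows is a sorted list of window indices starting at 0; all names are illustrative.

def update_probabilities(cameras, windows, P_star, S, T, searched):
    """Compute P[c][w]: probability that an unscanned object appears at camera c in window w.

    searched[(c, w)] is True if camera c was already searched in window w (I_{c,w} = 0 in the text);
    windows is assumed sorted in ascending order with windows[0] == 0."""
    P = {c: {w: 0.0 for w in windows} for c in cameras}
    for c in cameras:
        P[c][windows[0]] = P_star[c]                      # P_{c,0} = P*_c
    for w in windows[1:]:
        for c in cameras:
            total = P_star[c]                             # entering the network directly at c
            for ci in cameras:
                if ci == c:
                    continue
                for wj in windows:
                    if wj >= w:
                        break                             # only earlier windows contribute
                    if searched.get((ci, wj), False):
                        continue                          # I_{ci,wj} = 0: already scanned there
                    total += P[ci][wj] * S(ci, c) * T(ci, c, w)
            P[c][w] = total
    return P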
Profiling spatio-temporal correlations
A final piece of the ReXCam system is the profiling and maintenance of the spatio-temporal correlations. ReXCam takes an approach that builds on standard techniques from computer vision. Before ReXCam is deployed, we first use a multi-target, multi-camera (MTMC) tracker to label entities in a dataset of historical video, collected from the same camera deployment on which the live tracking is executed. Logically, such a tracker returns for each detected entity instance i a tuple, (c_i, f_i, e_i), containing the camera identifier c_i, frame index f_i, and entity identifier e_i for the detection, respectively.
Using these, we compute n(c s , c d , [t 1 ,t 2 ]), the number of entities leaving any source camera c s for any destination camera c d within a time interval [t 1 ,t 2 ]. These quantities translate directly to our spatio-temporal model M in Eq. 1 (see §5.1).
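One way such counts could be accumulated from the tracker's (c_i, f_i, e_i) tuples is sketched below: each entity's detections are grouped and ordered in time, consecutive detections on the same camera are collapsed into a "visit", and consecutive visits yield (source, destination, travel-time) transitions. The grouping logic and the definition of a visit are our own simplifications.

from collections import defaultdict

def profile_transitions(detections):
    """detections: iterable of (camera_id, frame_idx, entity_id) tuples from the MTMC tracker."""
    by_entity = defaultdict(list)
    for cam, frame, ent in detections:
        by_entity[ent].append((frame, cam))

    pair_counts = defaultdict(int)      # n(cs, cd): individuals moving cs -> cd
    travel_frames = defaultdict(list)   # travel times (in frames) for each cs -> cd pair
    for obs in by_entity.values():
        obs.sort()
        visits = []                     # each visit: [first_frame, last_frame, camera]
        for frame, cam in obs:
            if visits and visits[-1][2] == cam:
                visits[-1][1] = frame   # extend the current visit on the same camera
            else:
                visits.append([frame, frame, cam])
        for (_, f_out, cs), (f_in, _, cd) in zip(visits, visits[1:]):
            pair_counts[(cs, cd)] += 1
            travel_frames[(cs, cd)].append(f_in - f_out)

    for times in travel_frames.values():
        times.sort()
    return pair_counts, travel_frames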
However, directly using MTMC trackers to profile spatio-temporal correlations in the history video is computationally expensive, neutralizing the savings from the search pruning. This is because, unlike single-target tracking, an MTMC tracker will track all entities in the dataset. To limit the profiling overheads, we explore the trade-off between the robustness of offline profiling and the accuracy of subsequent single-target cross-camera tracking using the generated model. In particular, the profiling cost can be reduced by labeling fewer frames with the MTMC tracker (e.g., by selecting a lower frame sampling rate or choosing a smaller subset of the data to label). At first glance, this is likely to reduce search accuracy, as the spatio-temporal correlations are based on a sampled subset of entities. In practice, however, we found that despite labeling fewer frames for profiling, the drops in precision and recall are only mild, and thus our solution of labeling fewer frames significantly reduces the profiling cost without impacting accuracy. We show this empirically in §8.4.
Finally, ReXCam needs to cope with potential changes in the spatio-temporal correlations (e.g., road work may block a busy segment, which can reduce the correlation between two cameras). These changes are relatively infrequent, but when they do happen, ReXCam can automatically detect them and initiate re-profiling. In particular, ReXCam tracks the number of objects that are missed in the normal pruned search but detected in the subsequent replay search (in an "uncorrelated" time interval or camera), and triggers a re-profiling of the spatio-temporal correlations between the corresponding cameras when there is a spike in pruning errors. Note that errors in the spatio-temporal profile during re-profiling will not affect ReXCam's inference, but only increase latency, because the replay search handles the errors.
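One simple way to realize such a trigger is to track, per camera pair, the fraction of recently tracked identities that were recovered only by replay search, and flag the pair when that fraction spikes; the sliding-window size and the error-rate threshold below are illustrative, not values used in ReXCam.

from collections import defaultdict, deque

class ReprofileMonitor:
    """Flag camera pairs whose pruning-error rate spikes, so their correlations can be re-profiled."""

    def __init__(self, window=200, error_rate_thresh=0.2):
        self.recent = defaultdict(lambda: deque(maxlen=window))
        self.error_rate_thresh = error_rate_thresh

    def record(self, cs, cd, missed_by_pruning):
        # missed_by_pruning: True if the identity was found only by replay search for this pair
        self.recent[(cs, cd)].append(1 if missed_by_pruning else 0)

    def needs_reprofiling(self, cs, cd):
        hist = self.recent[(cs, cd)]
        return len(hist) == hist.maxlen and sum(hist) / len(hist) > self.error_rate_thresh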
System Implementation & Deployment
We implement ReXCam in 1.5K lines of Python code over AWS DeepLens smart cameras [13]. Each DeepLens camera runs Ubuntu 16.04 LTS, and is equipped with an Intel Gen9 GPU, an Intel Atom processor CPU, 8GB RAM, and 16GB of built-in storage. Our testbed includes five such cameras connected to each other via Wi-Fi and deployed on AnonCampus ( Figure 8). In our testbed, video analytics modules (object detection, re-id) run on DeepLens's on-chip GPU and CPU. The testbed of smart cameras contrasts with the alternative model for video analytics using nearby edge boxes ( §2.3).
We use a laptop (connected to the same Wi-Fi network as the cameras) to run the ReXCam controller. The ReXCam controller is responsible for profiling ( §6) and maintaining the spatio-temporal model of correlations among cameras. The connectivity between the controller and the cameras is used only to exchange "control messages" and not video data. We implement two main control messages ( Figure 8): 1. A trigger message from the controller to a camera triggers the camera to start (or stop) searching for a specified query identity in its video within a specified time interval. The trigger message can also be used to initiate search in history videos for replay search ( §5.3).
2. A feedback message from a camera to the controller notifies the controller of an interesting incident (e.g., the specified identity has just been detected, or has left the camera's view) in real time. A feedback message follows an activation message and is sent as soon as the incident occurs.
Fault tolerance: The cameras broadcast a heartbeat every few seconds to the controller to handle instances of cameras failing. The ReXCam controller can be replicated for resilience. The only persistent state held by the ReXCam controller is the model of spatio-temporal correlations, which is backed up and is updated only at coarse timescales. The spatio-temporal pruning algorithm (Algorithm 1) is also stateless, and is triggered by feedback messages from the cameras.
Figure 9: Example snapshots from AnonCampus (left) and DukeMTMC [55] (right) cameras.
Evaluation
Our evaluation of ReXCam shows the following highlights. 1) ReXCam's compute savings on the 8-camera DukeMTMC dataset are 8.3× (which is ∼ 90% of the potential; §3). ReXCam also improves precision from 51% to 90%. On the larger simulated dataset of 130 cameras from Porto, our savings grow with the number of cameras. ( §8.2, §8.3) 2) Deployment on the 5-camera testbed with AWS DeepLens cameras leads to 3.4× savings in compute. ( §8.2) 3) ReXCam's optimizations keep the profiling costs small without impacting precision and recall. ( §8.4) We evaluate ReXCam for single-camera analytics in §8.5.
Methodology
A. Datasets -We evaluate ReXCam on three datasets. 1) AnonCampus dataset ( §7) consists of 35 minutes of 1080p video recorded at 24 frames per second, captured by five DeepLens cameras deployed in a school building (see Figure 8). The dataset is manually labeled with person identities. 2) DukeMTMC dataset is a video surveillance dataset with footage from eight cameras installed on the Duke University campus (see Figure 3). The data consists of 85 minutes of 1080p video from each camera recorded at 60 frames per second. In all, the footage contains over 2,700 unique identities and over 4 million person detections (all labeled). Figure 9 shows snapshots from eight different cameras (four each) from the AnonCampus and DukeMTMC datasets.
3) The Porto dataset is generated from 1,710,671 trajectories obtained from 442 taxis running in the city of Porto, Portugal, between Jan. 2013 and June 2014 [10]. Each trajectory contains timestamps and GPS coordinates sampled every 15 seconds. To emulate cross-camera tracking, we manually pin 130 cameras at intersections of the city (we get the cameras' coordinates from Google Maps) and set each camera's field-of-view to be a square area centered at the camera with side length l = 100m. We assume the accuracy of object detection and re-id to be equal to the values reported for DukeMTMC-reID [7] for objects in a camera's view. The main objective is to measure ReXCam's gains in a large, city-wide setting of cameras.
B. Models -For our re-id model, we use an open-source, ResNet-50-based implementation of person re-id [6], trained in PyTorch on a subset of the Duke dataset called DukeMTMC-reID [7]. We then implement our tracking (Algorithm 1), which applies this model iteratively at inference time to discover all instances of a query identity in the Duke dataset. Since DeepLens uses clDNN and Intel GPUs, we leverage person-reidentification-retail-0076 from the OpenVINO model zoo [32] for re-id in the AnonCampus dataset.
To build our spatio-temporal model on unlabeled video data (simulating real deployment conditions), we apply an offline multi-target multi-camera (MTMC) tracker [9] ( §6) to label every person detection in a subset of the dataset (i.e., profile set with 16352 frames). We implement a profiler to extract spatial and temporal correlations from these labels.
C. Workload -We run a set of 100 tracking queries, {q_i}, drawn from the test query partition of the DukeMTMC-reID dataset [7] (20 from the AnonCampus dataset, and 100 from the Porto dataset). Each tracking query consists of multiple iterations. Each iteration involves searching for the next instance, q_i^j, of the query identity in the dataset, starting with the initial instance q_i^0. A tracking query terminates when no more instances can be found. Experiments on the DukeMTMC dataset were conducted on AWS EC2 p2.xlarge instances (each containing one Nvidia Tesla K80 GPU).
D. Metrics -We report four metrics, computed over the entire query set: compute cost, recall, precision, and delay. E. Compared Schemes -To evaluate our spatio-temporal filtering, we compare ReXCam against two baseline schemes: 1) Baseline (all) -Searches for the query identity q in all the cameras at every frame step. Uses the state-of-the-art re-id model [6]. No spatio-temporal filtering is utilized.
2) Baseline (GP) -Searches for the query identity q only in the cameras that are in geographical proximity to the query camera at every frame step. Uses the state-of-the-art re-id model [6]. For the DukeMTMC dataset, we manually set pairs of neighboring cameras using Figure 3, while for the Porto dataset we set the geographical proximity threshold to 4l (where l = 100m).
3) ReXCam -Searches for the query identity q only on cameras that are currently spatio-temporally correlated with c_q (as per Algorithm 1). The same person re-id model is used as in the baselines [6]. We consider various versions of Equation 1, corresponding to different spatio-temporal filters. Each version is coded as Ss-Tt, where s indicates the spatial filtering threshold and t indicates the temporal filtering threshold. Higher values of s and t indicate more aggressive filtering (no t value indicates no temporal filtering, and helps measure the gains of spatial filtering alone). For instance, S5-T2 filters cameras that receive <5% of the traffic from the query camera c_q. In addition, it filters frames outside the time window containing the first 98% of traffic from c_q.
8.2 Spatio-temporal filtering gains
Figure 10, Figure 11 and Figure 12 compare the performance of the baselines and various ReXCam versions on the three datasets, respectively. We find that ReXCam significantly outperforms both baselines, by (1) reducing compute cost and (2) improving precision, while maintaining comparable recall. It is noteworthy that the best thresholds for ReXCam are dependent on the dataset. ReXCam versions S30-T1, S5-T2, and S1-T1 offer the best trade-off between compute cost, recall, precision, and delay on the three datasets, and in general have to be tuned. We term these schemes ReXCam-O(ptimal). 1) Compute cost -Baseline (all) is by far the most compute-intensive, processing 98,760 frames for 20 queries and 45,638/85,890 frames for 100 queries on the DukeMTMC/Porto dataset, respectively. Baseline (GP) saves the cost quite a bit, but its performance fluctuates across settings due to the discrepancy between spatial correlation and geographical proximity (as also pointed out in §3.1.1). Each successive version of ReXCam achieves lower compute cost than its predecessor. For instance, in Figure 11, the most aggressive version of ReXCam, S10-T10, processes only 3,513 frames, and achieves 13× lower compute cost on 8 cameras than the all-camera baseline. Similarly, a maximal value of 3.6× compute savings can be achieved in Figure 10.
2) Recall (%) -Compared with both baselines, recall of the ReXCam versions declines slightly when spatial/temporal filtering is introduced. In Figure 11, for example, baseline (all) achieves a recall of 81.3%. Both spatial-only schemes achieve 79.3% recall. ReXCam-O achieves 79.7%, a 1.6% drop from the baseline. Similar patterns are observed in Figure 10 and Figure 12. The reason recall is lower in the AnonCampus deployment is the increased incidence of occlusions in indoor environments (see Figure 9). Note that in Figure 12, recall drops significantly from baseline (all) to baseline (GP), as a number of relevant cameras are mistakenly excluded by geographical proximity-based pruning.
3) Precision (%) -Baseline (all) achieves precision of 50.4%, 51.1% and 49.6% on three datasets, respectively. All versions of ReXCam improve on this, but ReXCam-O in particular achieves 71.7%/90.4%/85.8% precision, which is a gain of 21.3%/39.3%/36.2% over the baseline. Compared with baseline (GP), precision gain from ReXCam-O remains as high as 33.5%/15.6% on the DukeMTMC and Porto dataset. Higher precision is a key benefit of spatio-temporal filtering for crosscamera video analytics. By searching fewer irrelevant cameras, and fewer irrelevant frames, ReXCam is less likely to declare matches that do not actually match the query. 4) Delay (sec.) -Here we report total cumulative lag (lag in the absence of replay search ( §5.3)), averaged over all queries. We do not report the delay from the AnonCampus deployment since among all 20 queries, only one needed replay search. For both DukeMTMC and Porto results, we find that delay increases with more spatial or temporal pruning. This is expected as there are more instances of misses. ReXCam-O, in particular, incurs moderate delay -less delay than S5-T1 and S5-T10 but more delay than spatial-only filtering.
Large-scale camera data: The key objective of using the trajectories from the Porto dataset was to experiment with ReXCam's gains at scale ( §8.1); unfortunately there are no video datasets available for hundreds of cameras. Figure 13 shows the cost savings and precision of ReXCam versus Baseline (all) with an increasing number of cameras. Cost savings steadily grow with the number of cameras, achieving up to 38× lower cost than baseline (all) with ReXCam S12-T12 for 130 cameras. We believe this is an encouraging result for ReXCam's value in large camera deployments. Throughout, ReXCam maintains a 34.5% gain in precision with little impact on recall.
Frame skipping: Frame sampling is a key technique in prior work [28,37,65] to make single-camera analytics cheaper. Such techniques are orthogonal to ReXCam's spatio-temporal pruning for cross-camera analytics, and we quantify this point. Figure 14 measures the impact of frame skipping (uniformly skipping one in 3 frames, and one in 4 frames) on both baseline (all) and ReXCam. As shown in the figure, ReXCam maintains a much lower compute cost in both skipping cases. Specifically, the cost savings are 8.6× and 8.4×, which is in the same ballpark as the 8.3× achieved without frame skipping, thus showing the orthogonality of frame skipping to ReXCam.
Replay search
In this section, we evaluate the effectiveness of the two schemes proposed in §5.3 for reducing the lag of replay search: Skip frame mode - employ a 2× frame sampling rate to increase throughput on historical frames, at the price of lower accuracy via missed detections (2x skip); and Parallelism mode - employ a 2× frame processing rate to increase throughput, at the price of increased compute cost via increased resource usage (2x ff). Both schemes are applied to ReXCam-O, and compared to (a) the all-camera baseline and (b) ReXCam-O with the default real-time replay search, which incurs 2.6s of delay.
As Figure 15 shows, both 2x skip and 2x ff achieve delay reductions, decreasing the final cumulative lag to 1.8s and 1.3s, respectively. The reason 2x skip does not halve the delay is the skipped query instances during the first round of replay search, where s_thresh and t_thresh are decreased by a factor of 10. Also, the delay reductions from 2x skip and 2x ff come with different tradeoffs. 2x skip reduces recall by 1.2% to 78.0%, but increases precision from 90.37% to 90.87% and increases the compute cost savings over the baseline from 8.30× to 8.68× (by processing fewer historical frames). 2x ff does not impact recall and precision, but reduces the compute cost savings over the baseline from 8.30× to 8.27×.
Profiling cost vs. tracking accuracy
Profiling cost increases with the number of frames that must be processed by the MTMC tracker ( §6). We investigate the trade-off between profiling cost and subsequent tracking accuracy. Specifically, we test whether we can build a precise spatio-temporal model on smaller subsets of the training data obtained by uniformly sampling the frames. We apply sampling rates of 8×, 6×, 4×, 2×, and 1× (i.e., using X out of every 8 frames) to the profile partition of the Duke dataset ( §8.1) for profiling, which translates to correspondingly lower profiling costs.
As Figure 16 shows, recall of ReXCam during live tracking reaches its maximum of 80.1% with 6× sampling, i.e., when half of the frames are labeled for offline profiling to obtain the spatio-temporal model. Interestingly, on either side of this, recall falls. On the left side, the drop is caused by an insufficient amount of profiling data. On the right side, the small drop is because the extra data results in a spatio-temporal model that is overfit to the profile partition. This experiment indicates that the spatio-temporal model can be built on a reasonably small set of training data (i.e., 37.1 min). However, the exact amount of data needed to train the spatio-temporal model varies among datasets, and thus should be chosen carefully. Precision remains stable (∼90%) in Figure 16 when more than 4K frames (i.e., 2× sampling) are used for training. If we combine the profiling cost with the cost of the live video analytics, we see that ReXCam would need to run only 34 live tracking queries to break even with locality-agnostic tracking (calculations omitted). This represents a small fraction of the expected annual workload in large video analytics operations [65,66] that track many hundreds of thousands of queries. Hence ReXCam's profiling costs are small and will not dent the gains, which remain sizable.
Identity detection
Lastly, we evaluate ReXCam's spatio-temporal pruning on identity detection, the single-camera application described in §5.4. As Figure 17 shows, ReXCam achieves as high as a 7.6× cost reduction with θ = 0.95 on the 8-camera DukeMTMC dataset (θ is the likelihood threshold for searching a camera's stream). Similar to the trends in cross-camera tracking, the gain in precision far outweighs the drop in recall. In fact, for θ = 0.75, recall does not drop at all while precision improves by 28%, even as cost savings stay at 6.6×. This experiment shows the generality of applying ReXCam to both cross-camera and single-camera applications.
Related Work
Video Analytics Systems. A sizable body of work on video analytics has emerged recently [28,40,46,65]. Chameleon exploits correlations in camera content (e.g., velocity of objects) to amortize profiling costs, but not the cost of the video analytics itself [37]. These works leave three problems unexplored, each of which ReXCam addresses. First, they focus on single-frame tasks (e.g., object detection and classification), which are stateless. In contrast, surveillance applications, like the real-time tracking we focus on, involve multi-frame tracking, where future questions depend on past inference results. Second, they study single-camera analytics. Thus, they do not explore the complexities involved in cross-camera inference on live video (e.g., occlusions) that define applications such as person re-id. Third, in contrast to classification tasks, many security applications search for new object instances (e.g., a suspicious person), where the training data is skewed toward negative examples. Our use of correlations, i.e., movement across cameras, however, yields substantial accuracy gains.
Visual Data Management. Image and video databases explore the use of classical computer vision techniques to index video efficiently [15,18,22,51,52]. Cross-camera inference with CNNs on live video entails substantially different challenges than the target domain of these works.
Mobility Modeling. Mobility modeling and prediction has long been a topic of interest in mobile computing. Studies have shown promising results in generating human/vehicle mobility models from call detail records [33,64], wireless signals [62], social media [38], and transactions in transportation systems [61,64]. While none of these works apply mobility models to video analytics, ReXCam could benefit from their techniques for building accurate spatio-temporal models.
Conclusions
Cross-camera analytics is a computationally expensive functionality that underpins a range of real-world video analytics applications, from suspect tracking to intelligent retail stores. We presented ReXCam, a system that leverages a learned model of cross-camera correlations to drastically reduce the size of the inference-time search space, thus reducing the cost of cross-camera video analytics. ReXCam directs its search towards the camera streams that likely contain the identity being tracked, while gracefully recovering from (rare) misses using a replay search on historical videos. Our results are promising: ReXCam reduces compute workload by 8.3× on the 8-camera DukeMTMC dataset, and improves inference precision by 39%. On a simulated dataset of 130 cameras, its gains grow with the number of cameras. We have deployed a five-camera testbed on campus, which we plan to expand for further experiments.
|
2018-11-03T19:15:15.000Z
|
2018-11-03T00:00:00.000
|
{
"year": 2018,
"sha1": "404000cb3ab8cc4a888b744fcb982d70be9a562e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "7c9c0747a465fd91b88b33a30564911af7e0a70f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
13294394
|
pes2o/s2orc
|
v3-fos-license
|
Assortativity and leadership emerge from anti-preferential attachment in heterogeneous networks
Real-world networks have distinct topologies, with marked deviations from purely random networks. Many of them exhibit degree-assortativity, with nodes of similar degree more likely to link to one another. Though microscopic mechanisms have been suggested for the emergence of other topological features, assortativity has proven elusive. Assortativity can be artificially implanted in a network via degree-preserving link permutations, however this destroys the graph’s hierarchical clustering and does not correspond to any microscopic mechanism. Here, we propose the first generative model which creates heterogeneous networks with scale-free-like properties in degree and clustering distributions and tunable realistic assortativity. Two distinct populations of nodes are incrementally added to an initial network by selecting a subgraph to connect to at random. One population (the followers) follows preferential attachment, while the other population (the potential leaders) connects via anti-preferential attachment: they link to lower degree nodes when added to the network. By selecting the lower degree nodes, the potential leader nodes maintain high visibility during the growth process, eventually growing into hubs. The evolution of links in Facebook empirically validates the connection between the initial anti-preferential attachment and long term high degree. In this way, our work sheds new light on the structure and evolution of social networks.
Many real-world networks (RWNs) are scale-free (SF), displaying a power-law degree distribution together with hierarchical clustering C_k ∼ k^{−ω} 7 . One ubiquitous feature of many RWNs is degree-degree correlations: two nodes are more likely to be linked to one another if they are of similar (assortative) or dissimilar (disassortative) degree. Assortativity is generally found in social and collaboration RWNs, while disassortativity is common in technological and biological RWNs 8,9 . SF networks have been studied in the context of generative models, and simple rules relating to the formation of new links have been shown to lead to power-law degree distributions with non-hierarchical 10,11 and hierarchical [12][13][14][15][16][17][18] traits. Static SF network models 19 have also been proposed with controlled assortativity 20,21 , and growing SF networks have been studied with assortative [22][23][24][25][26] , disassortative 10,27 and both types 11 of degree mixing.
In particular, a wide range of RWNs features assortativity 28 , including online social 29 and neural 30,31 networks. As it reflects a basic "birds of a feather flock together" property, it is not surprising that it is so ubiquitous. Rather, what is really surprising is that the contributions of different nodes to the graph assortativity level r strongly depend on the degree. Decomposing the assortativity spectrum, one can indeed describe the local assortativity or assortativeness 32 r_k of each set of nodes with a given degree k (see the Methods section). Many RWNs have a pronounced local maximum in r_k located near (but above) the average degree ⟨k⟩. In social networks such a feature even appears to be generic, while in technological and biological networks the maximum is less pronounced or even entirely absent. In Fig. 1 we show the qualitative difference in the inherent patterns of r_k between typical social networks (the friendship structure of Facebook users 29 , Fig. 1a, and the authors' collaboration graph from the arXiv's Astrophysics section [33][34][35] , Fig. 1d) and a technological one (the flights connecting the 500 busiest commercial airports in the United States 36 , Fig. 1b).
Results
Empirical observations. The way traditional methods imprint assortativity into pre-generated networks is via degree-preserving link permutations 9,37 . This approach, however, presents a number of problems. On the one hand, generating a graph with an ad-hoc imprinted SF distribution (Fig. 1c) and then rewiring connections does not yield the observed pattern of local assortativity; on the other hand, even starting from a configuration model (CM) retaining the original degree distribution 19 , this procedure is only able to reproduce the real assortativity pattern at the expense of destroying the other significant features, such as the inherent hierarchical structure of clustering (Fig. 1d and its bottom-right inset). This indicates that the systemic mechanisms leading to the emergence of degree correlation have a special signature, which is not captured when generating assortativity artificially, i.e., ex post facto.
Further striking evidence comes to light from a deeper analysis of social RWNs: in some cases the final leaders (i.e., the nodes that, at the end of the process, acquire a leading role in terms of their degree) actually behave anti-preferentially when entering the network. In Fig. 2, the Facebook network of Fig. 1a is examined, and one sees that, plotting the degree of the first linked node as a function of time, those nodes eventually becoming the network's leaders (i.e., the final hubs, red triangles) tend initially (at the moment at which they start forming part of the network) to link to existing nodes with low degree values (Fig. 2a). This is clearer from Fig. 2b, where the final degree k_f achieved by a given node, labeled as a red triangle (k_f > 400), is compared to the degree of its first neighbor at the time that node entered the network. A straightforward statistical analysis of the data shows in Fig. 2c that indeed the fraction of final hubs forming initial connections with nodes of low-to-medium degree is far larger than that of the nodes which ultimately acquire intermediate and low degrees.
Figure 1 (caption excerpt): Together with the real data (blue triangles), r_k is reported for a configuration model (CM) reproducing the real degree sequence, after classical permutation methods have been applied, imposing the same r value observed in the real network (red stars) and a negative (r = −0.3) value (black circles). Insets in panels (a-d) show the log-log plots of the degree distributions P_k and clustering coefficient C_k.
The generative model. Following the empirical observation in Fig. 2 of a nexus between initial anti-preferential attachments and long-term high degrees, we propose a generative model which creates SF-like networks with tunable global assortativity and realistic local assortativity patterns, while also reproducing the hierarchical structure of the network's clustering. The model reflects a microscopic mechanism for a struggle for leadership between two competing populations of nodes: type I nodes (acting as followers and selecting connections so that a preferential attachment rule spontaneously emerges 10 ) and type II nodes (acting as potential leaders, i.e. adopting anti-preferential behavior which leads them to prefer lower degree nodes for the establishment of their initial links).
Figure 3 (caption excerpt): The new node links with probability p to the lowest degree nodes (nodes 1 and 2) or with probability 1 − p at random (nodes 3 and 5); the subgraph G_j is composed of a randomly chosen node j (node 5, green circle) and its nearest neighbors at time t − 1.
Under such a mechanism, a network of N nodes is created by sequentially adding units to an initial clique of m ≤ N_0 ≤ N vertices. The growing process occurs at discrete times: at each time step 1 ≤ t ≤ N − N_0 a new node enters the graph and forms m links with existing nodes, according to an attachment rule that is illustrated schematically in Fig. 3 and summarized as follows.
1. An anchor node j is selected uniformly at random from the nodes existing at time t − 1.
2. The subgraph G_j, composed of node j and all other nodes that are at distance less than or equal to ℓ from j, is examined.
3. With probability 1 − p, the new node behaves as a follower (type I): it selects m nodes from G_j uniformly at random, and links to them. With probability p, the new node behaves instead as a potential leader (type II): it forms links with the m lowest degree nodes in G_j.
The parameter ℓ is defined as the so-called penetration depth, i.e. the extent of local information (around the anchor j) accessible to the entering node. In the following, we set ℓ = 1, so that G_j is the subgraph containing j and all its nearest neighbors. Once ℓ = 1 is set, the model is uniquely determined by two parameters: the average degree ⟨k⟩ = 2m and p, the fraction of type II nodes. In the absence of potential leaders (p = 0), the growth of the resulting network exhibits emergent preferential attachment and hierarchical clustering 10 . This is actually due to the so-called friendship paradox 38 , stating that, averaged across the network, the neighbors of a node i will always have a higher average degree than k_i. Since, indeed, the number of subgraphs G_j in which a node i appears is equal to k_i + 1, higher degree nodes will tend to naturally receive more and more links. It is important to note that this preferential behavior is, in fact, emergent: the entering nodes do not require global knowledge of the degree levels in the system, nor any explicit preference for high degree nodes. In that sense, preferential attachment can be viewed as a kind of null behavior in which the rate of growth increases with size, much as the analogous Yule process is understood in evolutionary dynamics 39,40 .
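A minimal simulation of these growth rules can help reproduce the qualitative behavior; the sketch below assumes networkx, hard-codes ℓ = 1 (so G_j is the closed neighborhood of the anchor), and uses function and variable names that are our own.

import random
import networkx as nx

def grow_network(N, m, p, N0=None, seed=None):
    """Grow a network with followers (prob. 1 - p) and potential leaders (prob. p)."""
    rng = random.Random(seed)
    N0 = N0 or m + 1                                   # initial clique size (must satisfy N0 >= m)
    G = nx.complete_graph(N0)
    node_type = {i: "initial" for i in G}
    for new in range(N0, N):
        anchor = rng.choice(list(G.nodes))              # step 1: random anchor j
        subgraph = [anchor] + list(G.neighbors(anchor)) # step 2: G_j with penetration depth 1
        if rng.random() < p:                            # step 3a: potential leader (type II)
            node_type[new] = "II"
            targets = sorted(subgraph, key=G.degree)[:m]          # m lowest-degree nodes in G_j
        else:                                           # step 3b: follower (type I)
            node_type[new] = "I"
            targets = rng.sample(subgraph, min(m, len(subgraph))) # m nodes chosen uniformly in G_j
        G.add_node(new)
        G.add_edges_from((new, t) for t in targets)
    return G, node_type

With, e.g., G, types = grow_network(5000, m=2, p=0.3), the global assortativity of the generated graph can then be checked with nx.degree_assortativity_coefficient(G).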
When instead the population is split (with some nodes following the null preferential attachment, and some others linking in an anti-preferential manner), the local assortativity pattern shown in Fig. 1a, characterizing social systems, emerges. Namely, the contribution to assortativity from nodes of degree k i) increases with k from k = 1 to a local maximum located just above the average degree, ii) decreases to a subsequent local minimum, and then iii) increases again as k → ∞, i.e. qualitatively reproducing the generic tendency observed in social RWNs, which is only captured in randomly generated networks with artificially induced assortativity at the expense of obliterating the graph's clustering traits. The results of the model are summarized in Fig. 4. As p increases, the degree distribution of the resulting network deviates more and more from a pure SF configuration (Fig. 4a), but at the same time the hierarchical clustering traits are entirely preserved (Fig. 4b). The generated network is actually endowed with a fully controllable and tunable level of global assortativity r (as a function of m, as shown in Fig. 4c), while, more remarkably, the local assortativity pattern is fully reproduced (Fig. 4d).
Analytical description. We next move toward giving a more analytic description of the motivations and roots underlying the proposed model and the observed, emergent phenomena. We start by noting that links in this model are undirected, and this leads to a symmetry of interpretations: one can describe the type II nodes as preferring low-degree units (as in our generative model), or one can state that low-degree nodes are more likely to create links with type II newcomers. The second interpretation is actually in line with what arises from recent sociological studies, which indeed indicate that people are limited in the number of relationships they can maintain over time (with the exact number of maximal relationships being an open question). Starting from the seminal works by Dunbar 41,42 , the limitations on the number of active social connections have been extensively studied, and empirical support from online social networks has also been adduced 43 . In the present case, the emergence of positive assortativity is associated with the interplay of two mechanisms: an innate preferential attachment (resulting from nodes that nonhierarchically form connections with a pre-existing growing structure) and a limited ability of human beings to maintain many relationships. By comparing the average contribution to assortativity per node of degree k with the total contribution of nodes of degree k, one can actually understand the origin of the peak in the local assortativity. The average contribution for nodes of degree k increases monotonically with k (inset of Fig. 4d). However, the frequency of nodes decreases monotonically with k in pure scale-free networks (Fig. 4a). With the introduction of type II nodes, lower-medium degree nodes become more frequent, as observed in Fig. 4a for p = 0.6, even though an overall scale-free-like degree distribution is maintained. The combination of more-common-than-expected medium degree nodes and a per-node contribution to assortativity that increases with k leads to the characteristic bump observed in the model and the data.
As the network's growth proceeds, type II nodes actually tend to develop a higher degree on average. This is because new links are obtained with a probability that scales as 1/(N_t |G_j|), where N_t is the number of nodes in the system at time t and |G_j| is the size of the neighborhood subgraph of a given anchor node j. By choosing anchor nodes with small |G_j| (low degree), type II nodes actually increase their likelihood of being linked to by future, incoming nodes. Because this increased likelihood can be understood as type II nodes "placing themselves" in smaller neighborhoods, so that they are more likely to be linked to than when chosen at random, we understand this advantage as a kind of improved visibility to the linking process.
In fact, one can measure the number of neighbors at time t for each node type as described in the Methods section. The results are shown in Fig. 5, and point to the emergence of leadership of type II nodes at low values of p (Fig. 5a). At intermediate values of p (not shown) no significant differences are observed between the two node populations in the way the average degree increase evolves in time. Only at large p values (Fig. 5b), where anti-preferential nodes are vastly predominant in number, is the trend actually reversed, and type I nodes (the followers) now seem to be favored in attracting connections. Such a latter situation corresponds, however, to a rather homogeneous network, where a SF-like distribution is no longer observed (see Fig. 4 for the large deviations in the degree distribution already observed at p = 0.6).
Discussion
In summary, assortativity, hierarchical structure and fat-tailed degree distributions (well-approximated by power laws) are structural features manifested almost ubiquitously by RWNs, and until now no model had ever linked their emergence with microscopic growth assumptions. Furthermore, these features have a fundamental role in determining many relevant processes, and/or regulating the network's dynamics and functioning. Guided by the empirical observation of the growth of the friendship network of Facebook users, we have shown how the combination of preferential and anti-preferential attachment mechanisms acting together in the same generative model (via two distinct node populations) leads to the growth of heterogeneous networks with modified scale-free properties and tunable realistic assortativity, while maintaining the hierarchical clustering. Both our analytical predictions and numerical results indicate that networks constructed in this way match the patterns of local assortativity measured in real-world graphs. By presenting the first generative model with tunable assortativity, this work sheds new light on the structure and evolution of social networks, and counterintuitively suggests that anti-preferential attachment is a mechanism adopted by a fraction of the nodes during the network's growth, as a strategy for increasing their own leadership.
Methods
Local assortativity/assortativeness. In a network with N nodes, L links and degree distribution P_k, the local assortativity or assortativeness 32 r_j is defined as the contribution of each node j to the network assortativity r, so that r = Σ_j r_j.
Measuring the average degree of each node type. In order to compare the average degree of the two node populations as the model evolves, we label each node uniquely by the step in which it entered the network. This way, at time t, every node i will have m neighbors with indices j < i, and k_i(t) − m neighbors with indices j > i. To compare the degree growth rates of type I and type II nodes, we need to measure the characteristic time for new links to form. To do so, we consider, for each node i of type α, the set of differences in index values j − i for each neighbor j that linked to i at step j, and define f_α(t) = (1/N_α) Σ_{i of type α} |{ j : j links to i and 0 < j − i ≤ t }|, where N_α is the total number of nodes of type α. Thus f_α(t) provides the average number of new neighbors (k − m) that a node of type α will acquire after t steps.
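Under the labeling just described (node i enters at step i), f_α(t) can be computed directly from the graph; the sketch below assumes the node_type mapping produced, e.g., by the grow_network sketch above, and the value of t_max is illustrative.

import numpy as np

def degree_growth(G, node_type, kinds=("I", "II"), t_max=500):
    """f_alpha(t): average number of later-arriving neighbors j with j - i <= t, per node of type alpha."""
    out = {}
    horizon = np.arange(t_max + 1)
    for kind in kinds:
        nodes = [i for i, a in node_type.items() if a == kind]
        counts = np.zeros(t_max + 1)
        for i in nodes:
            gaps = np.sort([j - i for j in G.neighbors(i) if j > i])
            counts += np.searchsorted(gaps, horizon, side="right")   # neighbors acquired within t steps
        out[kind] = counts / max(len(nodes), 1)
    return out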
|
2016-03-07T10:31:19.000Z
|
2015-07-29T00:00:00.000
|
{
"year": 2016,
"sha1": "ff9e93161f8f26ab1f1b730860fbdb8e9431276d",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep21297.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ab33e7c6c83c4f8b56966c2997107709d865a044",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Physics",
"Computer Science",
"Medicine"
]
}
|
54661322
|
pes2o/s2orc
|
v3-fos-license
|
Can Turkish and US Co-operation in the Black Sea Region Increase Efficiency Gains?
ABSTRACT. This paper examines the impact of co-operation between Turkey and the US upon Turkish trade and investments towards the Black Sea region. The study is particularly important in the conjuncture of the US withdrawal from the Transatlantic Trade and Investment Partnership (TTIP) and in the wake of signing a free trade agreement with the EU. An additional matter of importance relates to the improved Turkey-Russia economic collaboration, especially after the "jet" incident and American involvement in the Middle East. A significant part of the latter is economic, as the US also has explicit economic interests in the Eastern Mediterranean. A gravity model has been employed using ordinary least squares on panel data with fixed effects to analyse aggregate trade. We have also categorized export groups of Turkey and the US separately. Our findings for both Turkish and US exports indicate that the per-capita GDP of Black Sea countries is highly persistent and positively correlated with increased efficiency gains and trade volumes. Regression results show that US exports to EU member countries are on average less than to non-EU member Black Sea countries. Hence, we question whether a possible co-operation between US and Turkish companies can help them gain better access to the Black Sea market for their exports.
Introduction
The aim of this paper is to examine the outcomes of a possible co-operation between Turkish and US companies in exporting goods towards the Black Sea (BS) region. We achieve this by using a gravity model in which we analyze primarily the determinants of aggregate exports of both Turkey and the US to the BS region. We then examine different categories of exports, such as food products, machinery and manufactures, separately. We find that both Turkish and US exports are highly persistent and positively correlated with the per-capita GDP of BS region countries. We also find that the determinants of export performance for each sub-category differ substantially. Moreover, regression results show that trade and investment agreements are relevant for US exports to the region, while this is not the case for Turkey. One other important finding relates to the impact of EU membership of the BS region partners upon exports of the US and Turkey. Total US exports to the non-EU member BS countries are on average higher than to those BS countries that are members of the EU. On the other hand, for Turkish exports the EU membership variable is not statistically significant. This suggests that, in order to increase their share in export markets in the region, a synergy can be generated between the US and Turkish firms. The following section summarizes the political economy of commercial relations of Turkey and the US with BS countries. The third section explains the methodology and the fourth the data used in the study. Then, we present findings of the empirical tests and sum up with policy recommendations and concluding remarks.
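For concreteness, a generic log-linear gravity specification of the kind estimated here by OLS on panel data with fixed effects can be written as follows (the exact regressor set and fixed-effect structure used in the estimations below may differ; this is only an illustrative form):

\ln X_{ijt} = \beta_0 + \beta_1 \ln \mathrm{GDPpc}_{jt} + \beta_2 \ln \mathrm{Dist}_{ij} + \beta_3 \, \mathrm{EU}_{j} + \beta_4 \, \mathrm{Agr}_{ijt} + \mu_i + \lambda_t + \varepsilon_{ijt},

where X_{ijt} denotes exports from exporter i (Turkey or the US) to BS country j in year t, GDPpc_{jt} is the partner's per-capita GDP, Dist_{ij} is bilateral distance, EU_j is an EU-membership dummy, Agr_{ijt} is a trade/investment agreement dummy, and \mu_i and \lambda_t are exporter and time fixed effects.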
Political Economy of BS Countries and Prospects for Turkey-US Co-operation
Differences among BS country economies are most profound, as their developmental stages are diverse. In 1992 Turkey and 10 other regional nations formed the Black Sea Economic Cooperation (BSEC) to expand regional trade and economic cooperation. This institution, envisioned by the late Turkish President Turgut Ozal, is important for the purpose of this paper. The BSEC created a powerful regional market of 400 million people from countries bordering or near the Black Sea. This region, rich in untapped natural resources and vital industries, is one of the most difficult terrains of business activity in the world. The agreement ultimately aims to eradicate Black Sea region countries' differences in economic fundamentals and business practices. Some of the BSEC countries, like Greece, Bulgaria and Romania, are in the European Union and are bound by the rules and regulations imposed by EU membership. Bulgaria, another EU member state, received significant amounts of FDI, and its successive governments were committed to economic reforms and responsible fiscal planning. The presence of Turkish companies in Bulgaria is vast and the operations of these firms cover a wide range of sectors, from food production to banking. This is facilitated by the presence of a large indigenous ethnic Turkish population. Romania is another Black Sea country, having joined the EU's ranks in 2007. Romania, a deficit country, achieved strong GDP growth in recent years thanks to domestic consumption and investment. Romania, one of the poorest countries in the EU, benefited from membership as it began to realize macroeconomic gains recently, spurring the creation of a middle class. Romania received $3.4bn in FDI inflows with a strong Turkish component. Turkish businesses there also span sectors as varied as food, textiles and banking. The other cluster of countries in the BSEC includes Georgia, Ukraine, Belarus and Russia. Georgia's recent improvements include growth in the construction, banking services, and mining sectors, but reduced availability of external investment and the slowing regional economy are emerging risks. The country imports nearly all of its needed supplies of natural gas and oil products. Belarus, on the other hand, has seen limited structural reform since 1995, when President Lukashenko launched the country on the path of "market socialism". In keeping with this policy, administrative controls over prices and currency exchange rates were re-imposed. In Belarus, continued state control over economic operations hampers market entry for businesses, both domestic and foreign. Ukraine is known as the bread basket of the ex-Soviet Union. It also has a diversified heavy industry, with market structures integrated with the other regions of the former USSR. Although Ukraine is dependent on Russia for its energy supplies, it is one of the most important economies in the region. The liberalization effort has failed, as Ukraine experienced a contraction. Ukraine is strategically an important country, as was proven by a two-week dispute that saw gas supplies to Europe cut off in the recent past. Ukraine has ended up with a ten-year gas supply and transit contract with Russia. The Russian economy was one of the hardest hit by the 2008-2009 global economic crisis, as oil prices plummeted and the foreign credits that Russian banks and firms relied on dried up. High oil prices buoyed Russian growth in the first quarter of 2011 and could help Russia reduce the budget deficit inherited from the lean years of 2008-2009,
but inflation and increased government expenditures may limit the positive impact of these revenues (Kuznetsov, 2017). Russia's long-term challenges include a shrinking workforce, a high level of corruption, difficulty in accessing capital for smaller, non-energy companies, and poor infrastructure in need of large investments. Russia, a surplus economy by $71bn with a growth rate of 4%, still receives a record-high FDI inflow of $43bn. The impact of FDI flows from neighbouring countries has more recently begun to attract some academic attention. For instance, Kuznetsov and Nevskaya (2017) draw attention to Russian inward FDI from the Visegrad countries (Poland, Czech Republic, Slovak Republic and Hungary). They argue that while these countries have been major FDI recipients in Central and Eastern Europe, they have become investors in Russia. For them, Visegrad group direct investments in Russia mostly come from enterprises which have removed the political component from their investment decisions. As Rossitsa Rangelova puts it, FDI from the West-European countries has been a major driver of CEE transformation since the beginning of the 1990s, including technological and structural renewal as well as new management methods and organizational rules (Rangelova, 1999). Briefly, the recent political economy of the region points to the severity of conflicts of interest between neighbouring countries. Some of the countries in the region drawn under the umbrella of the EU are unable to create competitiveness for their fledgling industries. Russia and Ukraine are trying to establish for themselves a respectable place in the newly built transatlantic power corridors and between the US and China. Turkey is no exception. The "Arab Spring" appeared to engineer for Turkey a role in the new global power structures, but this has faded away rapidly. In the meantime Turkey's bid for EU membership seems to falter. Turkey's geo-politics forces her to be innovative in developing a new role for herself in the Eurasian power architecture. In this respect there exists scope for Turkey to develop business collaborations with the countries in the Black Sea region. In a relatively short time span there have been fundamental economic and political changes in the region surrounding Turkey. The neighbours in the Black Sea region went through a systemic shift from a socialist economic set-up to a more liberalised one, while the neighbours to the south experienced a social upheaval popularly known as the "Arab Spring". Amidst these radical shifts Turkey tried to remain relatively stable despite some internal disturbances and external imbalances. Trade with the regions both to the north and to the south increased exponentially. Turkey's commercial and economic interests have become increasingly aligned with Russia, and the relationship has evolved towards a more strategic partnership, even though Turkey continues to be a member of the Western camp, especially through her membership of NATO. As a result, Turkey became a hub for the distributional corridors of Caspian and Central Asian oil and gas to the West. At the same time, up until recently Turkey was presented as the model case for developing nations in the region. In the past, US interest in the region was merely focused on security issues. This was due to the international relations of the cold war era. Hard-dying habits in establishing such interactions and the ways in which they are inherited in modern times have been studied more recently by A. Kuznetsov (Kuznetsov, 2017). This work
presents a comprehensive review of Russian-US relations and argues that the stagnation of mutual FDI flows began in 2009-2010 as economic and political considerations of the investors became more influential in investment decision making (p. 45). Since the American security architecture began to be redesigned along the "War on [...]
We now focus on the U.S.-Turkish perspective with respect to improving business co-operation in the Black Sea Region. We particularly focus on complementarities and indivisibilities in domestic country business set-up and their forward-looking prospects for collaboration in the region. This is especially important in the current conjuncture, whereby the new US administration has begun debating a change in its connection with the EU in terms of investments and trade. Turkey views this development with concern that it may divert trade away from the Transatlantic towards other parts of the world. Provided that this opportunity is used to promote Turkey's priorities, it may be helpful in meeting the Turkish government's target for 2023 of becoming the world's tenth largest economy (now 16th) and reaching an export volume of $500bn, while realising per capita incomes beyond $20 thousand. In order to meet these targets, what Turkey needs is to embark upon some radical shifts in reforming its labour force as well as opening herself up to a more global reach. The EU membership bid has provided significant social opportunities by developing closer economic and social co-operation. But the EU's engagement with the current global economic crisis prioritised its areas of assistance, in which Turkey did not have uppermost importance. On the other hand, Turkey's importance to the U.S. comes from its geographic location at the junction of the turbulent Middle East, the Caucasus, Central Asia, the Balkans and the Black Sea region that includes Ukraine and Russia. Recently, crisis dynamics and energy concerns in the globalised world economy drove this geo-strategically inspired relationship towards the economic sphere. In fact, Turkey has already become an important market for U.S. businesses, in terms of both trade and foreign investments, but this importance has now grown to such an extent that Turkey acts as a bridgehead towards third countries, particularly in the Middle East and the Black Sea Region. More recently, a variety of interactions between U.S.-Turkey economic councils show a stronger emphasis on working together to overcome obstacles and to seek out new ways to pursue mutual goals in the ex-Soviet countries and Central Asia. Authorities in both countries search for mechanisms for expanding shared priorities in third countries by promoting support for small and medium-sized exporters. The steps taken to enhance the investment climate include regulatory and intellectual property support for innovative industries, energy, biotechnology, pharmaceuticals, and government procurement. These steps are expected to deepen ties between the U.S. and Turkish private sectors. Given Turkey's economic growth and development over the last ten years, the U.S. decided to shift its emphasis from short-term technical assistance projects to longer-term business linkages between US and Turkish firms. Turkey applies the EU's common external customs tariff to third-country non-agricultural imports, including those from the United States, and imposes no duty on non-agricultural items from EU and European Free Trade Association (EFTA) countries. The U.S. Commerce Department has designated Turkey as
one of the ten Big Emerging Markets, forecasting great potential commercial opportunities. In 1993 the number of American firms operating in Turkey was 80; in 2013 this number was above 1,200. The U.S.-Turkish Bilateral Investment Treaty went into effect in April 1990. A double-taxation agreement has been signed. In 2009 the U.S. and Turkey launched the Framework for Strategic Economic and Commercial Cooperation, a new cabinet-level initiative focused on boosting trade and investment ties. The inaugural Framework for Strategic Economic and Commercial Cooperation meeting was held in Washington in October 2010. The Framework aims to reduce barriers to bilateral trade and investment, create opportunities for U.S. workers, farmers, and firms, and otherwise enhance bilateral economic ties. The Framework includes greater involvement of the private sector in both countries in the dialogue and deliberations between the two governments. This new engagement on economic matters will give momentum to bilateral commercial transactions and mutual investment flows. The Framework will help the business communities in both countries explore new business partnerships and execute commercial transactions. In addition to the new framework, the U.S. and Turkey hold annual meetings of the Trade and Investment Framework Agreement (TIFA) Council, which met in Washington in July 2010, and the Economic Partnership Commission (EPC), which met in March 2011. The U.S.-Turkish Business Council was established to enable private sector leaders from both markets to provide joint recommendations for improving the commercial relationship. Two-way trade (exports plus imports) between the United States and Turkey was valued at $18.7 billion during 2014, representing a modest trading relationship. While U.S.-Turkish trade was sharply impacted by the economic downturn in 2009, U.S. exports to Turkey increased in 2014. Leading U.S. exports to Turkey include aircraft, iron, steel, machinery and fabric, in addition to a wide range of agricultural products. Turkey predominantly exports vehicles, machinery, cement, and tobacco to the United States. The stock of U.S. foreign direct investment (FDI) in Turkey was $4.9 billion in 2007, and amounted to $6.3 billion in 2009, mostly concentrated in the wholesale trade and manufacturing sectors, while Turkish FDI in the United States was $218 million. Turkey attracted $325 million of FDI from the US in 2014. Americans expect Turkey to increase its trade advocacy and export promotion efforts, as well as to ease access to credit, especially for small- and medium-sized businesses involved in high value-added goods and services. According to the US Department of State, "Turkey must enforce international trade rules, ensure the transparency and timely execution of judicial orders, increase engagement with foreign investors on policy issues, and pursue policies to promote strong, sustainable, and balanced growth". U.S. FDI in Turkey is concentrated largely in the banking and manufacturing sectors. Almost all economic sectors open to investment by the Turkish private sector are fully open to foreign participation without screening or prior approval, although establishment in Turkey's financial and petroleum sectors requires permission. Foreign equity ownership is limited to 25 percent in broadcasting and 49 percent in maritime transportation. Turkey's parliament is considering draft legislation easing restrictions on foreign ownership in the media sector. All areas which are open to the
Turkish private sector are now open to U.S. participation and investment. Turkey grants U.S. businesses the same rights, incentives, exemptions and privileges that Turkish businesses receive. U.S. firms can participate in government-financed and/or subsidized research and development programs. Investment incentives include subsidized credit facilities and exemptions from corporate and value-added tax, customs fees and duties. Such openness makes Turkish-U.S. ventures more attractive for acting together in third countries. In addition to being an industrial and commercial market in its own right, Turkey is also a regional business hub, thereby offering tremendous opportunity for North American companies to penetrate the high-growth economies of the Middle East, North Africa, Central Asia and the Balkans. Both Turkey's political and its economic involvement in its vicinity have increased significantly in recent years. Delegations from the US, but also from other developed and developing countries, kept calling for cooperation with Turkey in order to invest in third countries in regions where the latter's influence is, or is about to be, big, such as the Black Sea, as well as the North Africa and Middle East regions. However, our inspiration for this research goes beyond recent political developments. Turkey's total exports to the Black Sea countries have overtaken those of the US as of 2005. Turkish exports to the BS countries have become more attractive compared to US exports to the same countries, starting from 2005.
The gravity model
The gravity model was first formulated by Tinbergen (Tinbergen, 1962), who argued that trade among countries is determined by the size of their incomes (which Tinbergen measures in gross national product, or GNP) and the geographic distance between them. Linneman (1966) added the population variable to the model. The gravity model has been successful in explaining trade flows but initially lacked a theoretical background. After a wave of criticism in the 1970s and 1980s, several authors including Anderson (Anderson, 1979), Bergstrand (Bergstrand, 1985, 1989, 1990) and Helpman and Krugman (Helpman, Krugman, 1985) proved the model had a strong theoretical background. An extensive review of empirical studies using gravity models to study international trade flows in recent years can be found in Kepaptsoglou, Karlaftis and Tsamboulas (2010). The simplest form of the gravity model is presented in the following equation:
(1) T_ij = Y_i^α Y_j^β / D_ij^θ
where T_ij is trade between countries i and j, Y_i is country i's gross domestic product (GDP), Y_j is country j's GDP and D_ij is the physical distance between the two countries. The parameters α, β and θ are generally estimated in the log-linear version of the model as follows:
(2) lnT_ij = α lnY_i + β lnY_j − θ lnD_ij
As noted in equations (1) and (2) above, the gravity model suggests that trade flows between two countries are positively related to their economic size and negatively related to the physical distance between them, which serves as a proxy for transportation costs. Since Tinbergen (1962) the model has been developed and extended in various forms, adding other variables that might affect trade flows, such as prices (Bergstrand, 1985, 1989; Anderson, 1979; Anderson, van Wincoop, 2003). Other variables referring to trade costs other than distance were also added to the model. These include dummies on borders, cultural or historical (colonial) links among countries, language similarities, and membership in free trade areas and/or other trade-related agreements. A number of studies have analyzed determinants of Turkey's trade flows through gravity models (Lejour, Mooij, 2005; Antonucci, Manzocchi, 2006; Akkemik, Göksal, 2010). These studies analyze different aspects of trade, for instance, the role of the EU in Turkey's trade flows, the effects of Chinese exports on Turkish exports, or the determinants of Turkish agricultural exports to the EU. In this paper, we seek to compare the trade determinants of Turkey's and the US' exports to the Black Sea region countries. We develop a gravity model and employ a panel dataset to investigate to what extent the determinants of Turkey's and the US' exports to this region differ. We use country-specific characteristics such as the distance between countries, membership in free trade area agreements, and membership in the EU and WTO. We also employ other variables that might affect trade flows between Turkey and the US and the BS countries, such as gas prices, real effective exchange rates, inflation and unemployment rates. In order to conduct our analysis, we construct an ordinary least squares panel data model with fixed effects separately for Turkey's and the US' exports toward the Black Sea region (BS) countries. For Turkey's exports to the BS countries we propose a log-linear variant of the gravity equation as follows:
(3) lnX_TUR,i,t = α_0 + α_1 lnX_TUR,i,t-1 + α_2 lnGDP_i,t + α_3 lnGDP_i,t-1 + α_4 lnDISTOIL_i,TUR,t + α_5 lnREER_TUR,t + α_6 lnREER_TUR,t-1 + α_7 GAS_t + α_8 INF_i,t + α_9 UNE_i,t + α_10 FTA_t + α_11 EU_t + α_12 WTO_t + ε_i,t
The dependent variable lnX_TUR,i,t indicates Turkey's exports to the BS countries at time t. Subscript i refers to Turkey's trading partners, namely the BS countries, whereas TUR refers
to Turkey. GDP_i,t indicates the per capita real GDP of the partner country. DISTOIL_i,TUR,t is a proxy variable developed by multiplying the distance between Turkey and the BS country by oil prices, which we discuss further in the next section. REER_TUR,t indicates the real effective exchange rate for Turkey. Meanwhile, GAS_t, INF_i,t and UNE_i,t refer to gas prices, inflation rates and unemployment rates, respectively. These variables are also believed to explain exports of Turkey to the BS economies. We include one lag of Turkish exports, GDP and real effective exchange rates as independent variables as well. Variable FTA_t is a dummy variable that indicates membership of Turkey and the trading partner from the BS countries in the same free trade area agreement (FTA). The dummy takes the value of 1 when a free trade agreement including Turkey and the BS trading partner has entered into force and 0 when there is no such FTA in force. EU_t and WTO_t are two other dummy variables that account for European Union (EU) and World Trade Organization (WTO) membership, respectively. The variables take the value of 1 when the BS trading partner is a member of the EU and/or WTO, and 0 otherwise. Meanwhile, for the US' exports to the BS countries we propose a log-linear variant of the gravity equation as follows:
(4) lnX_US,i,t = α_0 + α_1 lnX_US,i,t-1 + α_2 lnGDP_i,t + α_3 lnGDP_i,t-1 + α_4 lnDISTOIL_i,US,t + α_5 lnREER_US,t + α_6 lnREER_US,t-1 + α_7 GAS_t + α_8 INF_i,t + α_9 UNE_i,t + α_10 TA_t + α_11 EU_t + α_12 WTO_t + ε_i,t
The notation of the dependent and independent variables is the same as in equation (3), except for the subscript 'US', which indicates the US as the reference country, and the explanatory variable TA_t instead of FTA_t in equation (3). TA_t is a dummy variable that takes the value of 1 if a trade agreement exists between the US and the BS country, starting from the year the agreement was signed, and 0 otherwise. We run regressions first for aggregate exports and then separate regressions for five different categories of exported goods, namely food, crude materials, chemicals, manufactured goods and machinery. Categories of the Standard International Trade Classification (SITC Rev. 2) were used for developing these groups. Table 2 in the Annex summarizes all variables used in the model.
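To make the estimation procedure concrete, the following is a minimal sketch of how a fixed-effects panel OLS of the kind in equation (3) could be run in Python. The file name and all column names (partner, lnX, lnGDP, and so on) are illustrative assumptions, not the authors' actual dataset or code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per (partner country, year).
df = pd.read_csv("gravity_panel.csv")  # partner, year, lnX, lnGDP, lnDISTOIL, lnREER, GAS, INF, UNE, FTA, EU, WTO
df = df.sort_values(["partner", "year"])

# One-period lags of exports, GDP and the real effective exchange rate, as in equation (3).
for col in ["lnX", "lnGDP", "lnREER"]:
    df[f"{col}_lag"] = df.groupby("partner")[col].shift(1)
df = df.dropna()

# Country fixed effects enter through the C(partner) dummies; time-invariant distance
# alone would be absorbed by them, which is why the DISTOIL proxy is used instead.
formula = ("lnX ~ lnX_lag + lnGDP + lnGDP_lag + lnDISTOIL + lnREER + lnREER_lag"
           " + GAS + INF + UNE + FTA + EU + WTO + C(partner)")
fe_ols = smf.ols(formula, data=df).fit()
print(fe_ols.summary())
```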
Data
Our analysis covers the time period between 1992 and 2010. We restricted our time period because developments from 2010 onwards create anomalies in the data due to political developments in the region. The bilateral trade data are extracted from the United Nations Commodity Trade Statistics (COMTRADE) database. The samples for each of the models include exports of Turkey and the US to nine Black Sea countries, namely: Azerbaijan, Belarus, Bulgaria, Georgia, Greece, Kazakhstan, Romania, the Russian Federation and Ukraine. We had to exclude Armenia, as there is officially no trade between this country and Turkey. The trade data in current US dollars are deflated with the US consumer price indices (1982-1984 = 100) extracted from the US Bureau of Labor Statistics database. GDP per capita data in current US dollars were obtained from the International Monetary Fund's World Economic Outlook (IMF WEO) database. These data were also deflated with the US consumer price indices with 1982-1984 = 100. The inflation rates in average consumer prices and the unemployment rates as a percent of the total labor force were also obtained from the IMF WEO database. The real effective exchange rate data for Turkey and the US were obtained from the OECD Main Economic Indicators (MEI) database. The base year for the real effective exchange rate series is 2005. The data for natural gas and oil prices (the latter multiplied by distance to create the DISTOIL_i,TUR,t proxy) were obtained from the International Energy Agency/OECD Energy Statistics Division. Natural gas prices refer to the EU Pipeline Import prices measured in USD/MBtu, whereas oil prices indicate the Total Average IEA Member Country Crude Oil Import prices measured in USD/bbl. The data on the distance between Turkey and the BS countries as well as the distance between the US and the BS countries were obtained from the World Bank database produced by Nicita and Olarreaga (2006). The reason for developing the DISTOIL_i,TUR,t proxy variable is the use of an Ordinary Least Squares (OLS) regression with fixed effects, where distance as a time-invariant variable on its own would cause perfect multicollinearity. We must note here that the parameters corresponding to this variable were highly insignificant both when used as distance only in pooled panel regressions and when used as the DISTOIL_i,TUR,t proxy in the fixed effects OLS panel data model. We could draw two alternative conclusions from this: either distance is not important for Turkish and US exports to BS countries, or distance or the DISTOIL proxy is not a relevant variable to account for transportation costs. Bosker and Garretsen (2010), for example, argue that apart from transportation costs, other variables like 'tariffs and non-tariff barriers, but also less tangible costs arising from cross-border trade, due to institutional and language differences ...
' are other types of costs to trade among countries (Bosker, Garretsen, 2010). Other authors, like Head and Mayer (2010), have also raised the issue of effective distance measurement in gravity models. Returning to the other explanatory variables, the data for developing the dummy variables on EU and WTO membership were obtained from the official websites of the European Union and the World Trade Organization, respectively. FTA/TA is the only variable that contains different information for Turkish and US exports to the region. For the Turkish exports regression, FTA indicates the presence of a free trade agreement and takes the value of 1 from the year the free trade agreement enters into force. Meanwhile, for the US exports regression, TA indicates the presence of a bilateral trade agreement between the US and the respective country and takes the value of 1 starting from the year the agreement was signed. Information for this variable was obtained from the Office of the US Trade Representative.
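As a small illustration of how the DISTOIL proxy and the deflated series described above can be built, the following sketch assumes hypothetical tables dist (bilateral distances in km), oil (annual crude oil import prices in USD/bbl) and cpi (US CPI, 1982-1984 = 100); all values and column names are placeholders of ours, not the authors' data.

```python
import pandas as pd

# Hypothetical inputs (values are illustrative only).
dist = pd.DataFrame({"partner": ["RUS", "UKR"], "km": [1757, 1054]})
oil = pd.DataFrame({"year": [2005, 2006], "oil_usd_bbl": [50.6, 61.0]})
cpi = pd.DataFrame({"year": [2005, 2006], "us_cpi": [195.3, 201.6]})

# DISTOIL_{i,t} = distance_i x oil price_t: a time-varying stand-in for transport costs
# that survives the country fixed effects (pure distance would be absorbed by them).
distoil = dist.merge(oil, how="cross")
distoil["DISTOIL"] = distoil["km"] * distoil["oil_usd_bbl"]

# Deflating a nominal trade series with the US CPI (1982-1984 = 100).
trade = pd.DataFrame({"year": [2005, 2006], "exports_usd": [1.2e9, 1.5e9]})
trade = trade.merge(cpi, on="year")
trade["exports_real"] = trade["exports_usd"] / (trade["us_cpi"] / 100.0)
print(distoil.head(), trade.head(), sep="\n")
```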
Regression results
The results of the gravity model regressions are shown in Table 1 below. We note that all regressions exhibit high F-statistic and R-squared values, which indicates the models fit the data and are well-defined. The regression results show that both Turkey's and the US' exports towards Black Sea countries are highly persistent, that is, their trend in past years matters for determining that in the future. This is shown through the high level of significance (1 percent) of the first lags of the dependent variables l_[DEP.VARIAB.]_1, that is, aggregate exports and exports by each category, in all but one regression. We also notice that the significance of the independent variables varies among regressions with aggregate exports or different categories of exports as dependent variables. This implies our model is not bound to aggregate exports and that exports of various commodity categories are determined by different factors. The regressions indicate the per capita GDP of BS countries is important for both Turkish and US aggregate exports to these countries. The parameter of this variable for the Turkish exports regression is statistically significant and shows that a one percent increase in the GDP of BS countries will increase Turkish aggregate exports to these countries by about 0.5 percent. The figure for the US is about 0.39 percent, significant at a 5 percent significance level. These findings suggest there would be a higher demand for Turkish aggregate exports compared to US aggregate exports (by about 0.11 percentage points) by BS countries for the same increase in their per capita GDP. Moreover, the per capita GDP of BS countries is also statistically significant for Turkey's chemicals, manufactured goods and machinery export categories, as indicated in Table 1. Variation in Turkey's and the US' exports to the BS region is not explained by variation in the real effective exchange rate variable l_REER, nor by its lag l_REER_1. Both are statistically insignificant in almost all regressions. This implies the irrelevance of price competition in export flows from Turkey and the US to the BS region. Akkemik and Göksal (2010) obtain similar results when examining the effect of Chinese exports on Turkish exports. Such findings suggest exports of Turkey and the US to the BS countries are affected by real factors rather than price competitiveness. An interesting finding is that the presence of a trade agreement between the US and a BS country is statistically significant for almost all of the US' export categories, while this is not the case for its aggregate exports to BS countries. US exports of crude materials except fuels will be 59 percent higher to BS partners with which a trade agreement exists and has entered into force, compared to BS countries with which the US does not have a trade agreement. Similarly, US exports will be higher to those BS countries with which it has a trade agreement in force by 43.4 percent for chemicals and related products (l_CHEMIC), by 29 percent for manufacturing sector goods (l_MANUF) and by 40 percent for machinery (l_MACHINE). Regarding Turkish exports, a free trade agreement already signed has a statistically significant positive effect only for exports of machinery and transport equipment. The coefficient shows these exports will be 36.5 percent higher to BS countries with which Turkey has already signed a free trade agreement. The EU membership dummy EU has a negative sign and is statistically significant at a 10 percent significance
level for the regression where US total exports are the dependent variable. The coefficient indicates the US' total exports to an EU-member BS country will be about 28.6 percent lower than to those BS countries that are not EU members. We also notice Turkey's chemicals exports are negatively affected by BS countries' membership in the EU and WTO. Turkish chemicals exports will be about 20.4 percent lower to BS countries that are EU members and 21.6 percent lower to BS countries that are WTO members. The DISTOIL variable is statistically significant only for the regression where the US' food exports are taken as the dependent variable, where we get the expected negative sign. Regarding Turkish exports, the DISTOIL variable is statistically significant in the regressions where aggregate exports as well as exports of machinery and transport equipment are set as the dependent variable. In both regressions we get an unexpected positive sign, which suggests the proxy is not relevant or that there is a bias, considering that Russia, which is a powerful trade partner for Turkey, is the country with the largest distance in the region. All in all, we conclude that Turkish and US exports to the Black Sea region are persistent, that the weight of the factors affecting each export category differs, that the GDP of BS trade partners is an important factor driving exports up, especially for Turkish exports, and that exports of both Turkey and the US are affected not by price competitiveness but rather by real factors. Moreover, we notice trade agreements are relevant for US exports to the region but not for Turkish exports. EU and WTO membership of BS countries also affects US total exports and certain categories of Turkish exports to the region.
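Percentage effects of dummy variables in a log-linear regression, like those quoted above, are commonly obtained from the coefficient via 100·(exp(β) − 1). A minimal sketch of that transformation follows; the coefficient value is purely illustrative and is not an estimate taken from Table 1.

```python
import math

def dummy_pct_effect(beta: float) -> float:
    """Percentage change in the dependent variable implied by switching a dummy
    from 0 to 1 in a log-linear model: 100 * (exp(beta) - 1)."""
    return 100.0 * (math.exp(beta) - 1.0)

# Illustrative coefficient, not taken from the paper's Table 1.
print(f"beta = 0.30 -> {dummy_pct_effect(0.30):.1f}% change")  # about +35.0%
```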
Recommendations and policy options
Considering the regression results, we suggest the following policy options to be developed between Turkey and the US in order to boost their exports to the Black Sea region:
• In hindsight, several industry clusters can be identified as the emerging sectors: information technology, environmental technology, transportation, energy technology, health care technology and financial services. Opportunities for American companies together with their Turkish counterparts in these sectors are significant and expanding rapidly. Other business sectors with developmental prospects include electrical power systems, telecommunications (equipment and services), industrial chemicals, pollution control equipment, computers, medical equipment, and major infrastructure projects.
• Developing business interactions with Greece, Bulgaria and Romania would require alignment of business practices along EU lines, while business interaction with Russia, Georgia and Ukraine would follow strategic priorities.
• Russia, Ukraine, Belarus and Kazakhstan would be much more difficult markets to penetrate because of the size, scope and depth of their economies. In this case adopting a more flexible business approach might be required.
• Ukraine, due to its IMF straightjacket and its history of liberalization, might be a different case. The areas of business opportunity for Turkish-U.S. companies in Ukraine are diverse, but banking, pensions and insurance are likely sectors to be considered for investments.
• There exists a need to diversify Turkish-US trade relations. This follows from the implications of recent regional changes and new opportunities and challenges in working with third countries. Aligning Turkish and American business practices would bring about substantial opportunities between Turkish and American businesses, as this might enhance co-operation and collaboration.
• However, there exist certain problem areas in improving the business environment and economic co-operation. For instance, in addition to the energy sector where innovative partnerships can happen, U.S.-based pharmaceutical companies could also invest in Turkey, which would bring into Turkey an international investment of about $1bn. This would create spill-overs to the third countries in the region. The Turkish regulatory framework deters such investment. U.S. businesses often view the Turkish regulatory environment as inconsistent and non-transparent. Although the Office of the U.S. Trade Representative recognized the improvement of Turkey's Intellectual Property Rights (IPR) regime, Turkey is also on the U.S. piracy watch list.
• It is crucially important that possible disputes are minimized. Turkey has already set a strategy by establishing the necessary legal regulations and properly using administrative mechanisms on the path of EU accession, generating several rapid, transparent and new legal resolution mechanisms alternative to governmental adjudication, i.e. arbitration. But there exist cases of legal actions brought against the Government of the Republic of Turkey before the ICSID (International Centre for the Settlement of Investment Disputes) that are not resolved even after a very long period.
• Developing Istanbul as an international financial and logistics centre. This would require progress on the IFC action plan, determination of specific areas of cooperation, including exchanges of regulatory and policy experts, and potential cooperation on measures to make Turkey more attractive for foreign investment in the financial sector.
• There also exist
opportunities for cooperation in the energy sector, including efficiency and renewables. In both countries, the private sector needs to be involved, through global entrepreneurship programs, in the vital role of promoting co-operation in energy and innovative industries. Co-operating on entrepreneurship includes supporting similar programs for third countries, particularly those in the Black Sea Region, North Africa and sub-Saharan Africa.
• Facilitating co-operation between the two governments and private sectors requires leveraging new business opportunities in third markets. In addition, the promotion of such ventures in third countries can be facilitated by creating logistics centres in Turkey to manufacture and increase both Turkish and U.S. exports to third countries.
• Complementarities between U.S. and Turkish outward FDI. Studying complementarities between Turkish OFDI and U.S. business interests can provide ground for furthering co-operation, particularly towards third countries in the Black Sea Region.
• Given that EU markets are a strong rival to the US regarding exports to the region, and that Turkey is a member of the EU's customs union, Turkish and US firms exporting or willing to export to the BS countries could initiate trilateral or multilateral cooperation agreements with large Black Sea economies to boost their trade relations.
Conclusion
Turkey is an important strategic partner of the United States in areas such as security, regional peace and stability, and counter-terrorism. Despite strong political and military ties, trade relations between the two countries have not yet reached their full potential. The level of bilateral trade is inadequate, as 25% of US imports into Turkey are military goods. The establishment of market-driven democracies in the Black Sea Region can only survive with comparable economic transformation. The opportunities for US-Turkish co-operation in the Black Sea Region can be enhanced if based on a strategically designed model that supports strong reforms to integrate the region into the global economic order. The design of such a model towards the region needs new ideas to improve the balance in trade and investment between Turkish and American businesses. The growth of such co-operation, however, cannot be possible without the US realizing and recognizing the capacity of Turkey. The business community in both countries ought to bridge the divide in modus operandi between the two countries. Entrepreneurs should find incentives to promote collaboration. The Black Sea Region has huge potential, and job creation in the countries of the region is unlikely to be endogenous but instead will need foreign and external input, providing the opportunity for Turkish-American business collaboration. By combining the strengths of both Turkish and American business, the high level of risk can be reduced, since Turkey has experience in the Turkic countries of the ex-Soviet Union while U.S. firms provide credibility. Some of the problematic issues in the Turkey-U.S. economic partnership related to trade, investment, entrepreneurship, and third-country and sectoral co-operation can be dealt with by enhancing business-to-business ties while promoting co-operation in the agricultural sector and developing policies that encourage bilateral trade in agricultural goods. There are mutual impediments to improving agricultural trade. The machine manufacturing industry also holds strategic significance in development, as it defines the productive skills of other sectors through investment, intermediate goods and the services it offers. A developed machine manufacturing industry provides a critical competitive edge over other countries in the manufacturing industry. The growth of the Turkish machinery sector is backed by highly competitive and adaptable small and medium-sized businesses (SMEs), which form the bulk of industrial production in the country. As the drivers of growth in machinery and major contributors to the industrialization of the country, Turkish SMEs distinguish themselves from their peers in other countries by their utilization of the low-cost and highly skilled work force Turkey offers. Another indicator of the advanced level of the Turkish machinery industry is the rate of domestic input at the production stage. Around 85 percent domestic input not only reduces the dependency on foreign sources, but also helps other local industries. The combined advantage of the engineering capability required to compete in the international market and reasonable labour costs enables the Turkish machinery industry to offer a range of products and components that are both high-quality and affordable. The machinery industry in Turkey is labour intensive rather than capital intensive, and is expected to remain so in the near future. In this regard, the advantage of the Turkish machinery industry lies in the accumulation of
companies with different capabilities, strategies and products, so that this clustering provides a technological edge to the overall industry. The harmonization of legislation in accordance with Turkey's EU accession process has made it compulsory to obtain the necessary safety and compatibility certifications. As of July 2010, four Turkish national institutions had been authorized as notified bodies to ensure the compliance of local machinery producers with EU standards. Turkey's machinery industry has been given ambitious export targets for the country's 100th anniversary in 2023: to reach USD 100 billion with a share of 2.3 percent of the global market. This is one area where U.S. and Turkish companies can co-operate, with some spillover effects to third countries.
Table 1. Regression results for Turkish and US exports to Black Sea countries
Table 2. List of variables used in the analysis (recoverable entries include the distance between Turkey or the US and the partner country, in km, and oil prices, i.e. total average IEA member country crude oil import prices, in USD/bbl)
Load balancing policies with server-side cancellation of replicas
Popular dispatching policies such as the join shortest queue (JSQ), join smallest work (JSW) and their power-of-two variants are used in load balancing systems where the instantaneous queue length or workload information at all queues or a subset of them can be queried. In situations where the dispatcher has an associated memory, one can minimize this query overhead by maintaining a list of idle servers to which jobs can be dispatched. Recent alternative approaches that do not require querying such information include the cancel on start and cancel on complete based replication policies. The downside of such policies, however, is that the servers must communicate the start or completion of each service to the dispatcher and must allow cancellation of redundant copies. In this work, we consider a load balancing environment where the dispatcher cannot query load information, does not have a memory, and cannot cancel any replica that it may have created. In such a rigid environment, we allow the dispatcher to possibly append a server-side cancellation criterion to each job or its replica. A job or a replica is served only if it satisfies the predefined criterion at the time of service. We focus on a criterion based on the waiting time experienced by a job or its replica and analyze several variants of this policy under the assumption of asymptotic independence of queues. The proposed policies are novel and perform remarkably well in spite of the rigid operating constraints.
I. INTRODUCTION
Load balancing policies play a vital role in latency reduction in distributed systems such as large data centers and cloud computing. A typical load balancing system comprises a large number of homogeneous servers and a dispatcher that routes arriving jobs to the queues of these servers. When the instantaneous queue length of the different servers is known, an obvious approach would be to use the join-shortest-queue (JSQ) policy [1]. If, instead of the queue length, the workload, i.e., the pending amount of work at each server, is known, the optimal policy would be the join-smallest-work queue (JSW). Unfortunately, in most practical systems, the number of servers is large and therefore obtaining the instantaneous queue lengths from all servers is difficult.
A popular remedy for this is to consider the power of d choice variant of JSQ and JSW. In a JSQ(d) policy, the dispatcher samples d servers uniformly at random and queries their queue lengths. The job is then routed to a sampled server with the least number of waiting jobs. Implementing such a policy requires only 2d messages per job, and it was shown to have very good performance characteristics [2], [3]. The equivalent workload based policy JSW(d) also has a 2d query overhead per job and was analyzed recently [4], [5]. For many systems, a 2d query exchange is also a considerable overhead, especially when d is large or when the timescale for message exchange is comparable to the actual service requirement of a job. Recent efforts have therefore been directed towards bringing down the communication overhead using smart feedback techniques [6], [7]. [6] considers a hyper-scalable dispatching scheme where the dispatcher maintains queue length estimates for the different queues and sends an arriving job to the server with the least estimated queue length. Each server occasionally updates the dispatcher about its true queue length and this enables the dispatcher to synchronize its estimates with reality. [7] introduces the join-open-queue scheme where servers send busy alerts to the dispatcher at predetermined times.
Author * is with the Robert Bosch centre for cyber-physical systems, and the authors † are with the department of electrical communication engineering, all at Indian Institute of Science, Bangalore, Karnataka 560012, India. Email: {roojijinan, ajaybadita, parimal}@iisc.ac.in.
Author ‡ is with the department of electrical engineering, Indian Institute of Technology, Dharwad, Karnataka 580011, India. Email: tejaspbodas@iitdh.ac.in.
When a server is idle, it does not send the alert and thus the dispatcher can infer idle servers without considerable message exchanges.
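As a point of reference for the power-of-d routing discussed above, the following is a minimal simulation sketch of JSQ(d): sample d queues uniformly at random and join the shortest among them. It is our own illustration, not code from any of the cited works.

```python
import random

def jsq_d_route(queue_lengths, d, rng=random):
    """Return the index of the shortest queue among d queues sampled uniformly at random."""
    sampled = rng.sample(range(len(queue_lengths)), d)
    return min(sampled, key=lambda i: queue_lengths[i])

# Toy example: 10 queues, route one arriving job with d = 2.
queues = [3, 0, 5, 2, 7, 1, 4, 6, 2, 3]
target = jsq_d_route(queues, d=2)
queues[target] += 1
print(target, queues)
```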
The join idle queue policy is another load balancing approach which has a low message overhead and very good performance characteristics [8]. In this policy, idle queues willingly inform the dispatcher about their idleness and the dispatcher notes this in an associated memory. An arriving job is sent to an idle queue selected at random from the list if the list is non-empty, and therefore this policy has an overhead of at most 1 message per job. Some recent load balancing policies that make use of memory in their dispatching decisions appear in [9], [10].
Redundancy based load balancing policies, on the other hand, do not require querying instantaneous queue length or workload information. Two popular variants of redundancy-d based load balancing are cancel on start (c.o.s.) [11] and cancel on complete (c.o.c.) [12]. In these policies, independent replicas of an arriving job are sent to d randomly chosen servers. In c.o.s. (resp. c.o.c.), when one of the copies starts receiving service (resp. receives complete service), the other d−1 replicas are canceled. Such policies also have superior delay performance and are quite amenable to analysis. A detailed product form analysis characterizing the delay performance of both variants is presented in [13]. A major implementation problem with replication based policies is the synchronized cancellation of the redundant replicas. The sophistication required to implement such an approach may in fact be nontrivial. Further, depending on the operating scenario, instantaneous cancellation may not always be feasible, thereby adding an overhead on the system in terms of wasted service and making the system inefficient [14], [15].
The load balancing policies considered above either (a) involve communication of messages, (b) require a memory, or (c) require replication with cancellation. Such policies therefore always have an element of feedback from the servers to the dispatcher. In this work, we restrict to a working environment where there is no feedback of queue length or workload information from the servers. This renders any memory that the dispatcher may have of no use. While the dispatcher can possibly replicate jobs to different servers, the lack of communication prohibits cancellation of redundant copies. In such a rigid load balancing environment, we hope to find policies that outperform the random routing policy, which is the obvious policy in a no-feedback regime. Towards this, we offer the dispatcher the ability to append a server-side cancellation criterion to each replica. Before picking any replica for service, each server will check whether the appended criterion is satisfied. If the criterion is met, then the replica is served; otherwise it is dropped. We consider a criterion that depends on the waiting time of the replica in a queue. For example, one criterion that we consider is to serve the replica only if it has waited in the queue for less than T units of time. Such a criterion is easy for the server to validate and can be implemented by logging the arrival time information of each job/replica. The key essence of our approach is to exploit possible gains from replication of jobs, but at the same time prevent overloading the system with extra replicas by preemptively performing server-side cancellation of potentially wasteful replicas.
In a more formal description of our policy, we consider a load balancing system with N queues where jobs arrive according to a Poisson process with rate λN. Jobs have a service requirement that is characterized by a general service time distribution G(·). Servers are identical with service rate µ, and for each arriving job, referred to as the primary replica, the dispatcher creates d − 1 secondary replicas with probability p. The servers to which the replicas are sent are chosen randomly. Associated with the primary and secondary replicas are discard thresholds T_1 and T_2. A replica is discarded by the server if the waiting time experienced by the replica exceeds its discard threshold. We label our load balancing policy π(p, T_1, T_2) and provide a complete performance characterization.
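A minimal sketch of the dispatching side of π(p, T_1, T_2) as just described: one primary replica is always sent to a random server with threshold T_1, and with probability p, d − 1 other servers receive secondary replicas with threshold T_2. Function and parameter names are our own illustration, not the authors' implementation.

```python
import random

def dispatch(n_servers: int, d: int, p: float, T1: float, T2: float, rng=random):
    """Return the list of (server, discard threshold) pairs created for one arriving job
    under pi(p, T1, T2): one primary replica, and with probability p, d - 1 secondary
    replicas sent to other servers chosen uniformly at random."""
    primary = rng.randrange(n_servers)
    replicas = [(primary, T1)]
    if rng.random() < p:
        others = [s for s in range(n_servers) if s != primary]
        for s in rng.sample(others, d - 1):
            replicas.append((s, T2))
    return replicas

# Example: 100 servers, d = 3, replicate half of the jobs, T1 = 2.0, T2 = 0.5.
print(dispatch(n_servers=100, d=3, p=0.5, T1=2.0, T2=0.5))
```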
We observe that when T_1 and T_2 are both finite, arriving jobs could potentially be lost without service. Keeping this in mind, the two key performance metrics that we consider are the conditional mean response time of jobs admitted into the system and the loss probability of an arriving job. To analyze our policy, we make use of the cavity process method of [16], [17] along with a conjecture on the asymptotic independence of the workloads at the different queues as N → ∞. Under this mean field regime, we obtain the moment generating function (MGF) of the limiting workload at an arbitrary queue under the policy π(p, T_1, T_2). This function can be inverted to obtain the limiting workload distribution when the service time distribution is exponential. In this case, we derive closed-form expressions for the marginal workload distribution and the conditional mean response time of admitted jobs in terms of the system parameters.
The proposed load balancing policy π(p, T_1, T_2) is closely related to replication based policies without cancellation. A setting where cancellation of replicas is expensive or infeasible is considered in [18]. Their setting corresponds to the special case of our policy where the secondary replicas are always created, i.e. with probability p = 1, and the replicas are always served, i.e. with thresholds T_1 = T_2 = ∞. The idea of replication without cancellation has also been used in multipath routing in networks [19], [20]. Typically, flows are replicated along multiple paths to exploit the diversity in congestion levels across different paths. The proposed probabilistic redundancy policy π(p, T_1, T_2) is more generally applicable to such multipath routing in networks as well, provided the intermediate nodes/routers have the ability to drop certain flows based on an appropriate criterion. We do not pursue this idea any further in this article.
A. Contribution
We have listed our key contributions below.
1) We propose a load balancing policy with probabilistic redundancy, where secondary replicas are added probabilistically. The policy is distributed, since the dispatcher needs no feedback from the servers, and the replicas are discarded at the server if the waiting time exceeds a threshold.
2) Assuming asymptotic independence of the workloads as the number of servers grows, we find an expression for the conditional mean response time of a job, given that the job is admitted into the system.
3) We obtain closed-form expressions for the loss probability, the limiting marginal workload distribution and the conditional mean response time of admitted jobs when the service time distribution is exponential.
4) We empirically verify that the independence assumption on the marginal workload distribution is a good approximation even for a finite number of servers.
5) We provide design guidelines on the choice of the number of replicas d and the corresponding cancellation thresholds T_1, T_2 for the proposed policy.
B. Organization
We introduce the system model and notations in Section II. This is followed by a discussion on the cavity process method and its application to our problem along with the conjecture on the asymptotic independence of the workloads at different queues. In Section III, we compute the performance metrics for the proposed probabilistic redundancy policy π(p, T 1 , T 2 ) with server side cancellation for a general service time distribution, in terms of the limiting marginal workload distribution. In Section IV, we find the closed-form expression for marginal workload distribution when the service time distribution is exponential. We also compute the conditional mean of response time for admitted jobs, for some special cases of π(p, T 1 , T 2 ) policy. We conclude with a summary of our work and future directions in Section V.
II. SYSTEM MODEL AND PRELIMINARIES
We consider a load balancing system with N servers, where jobs arrive according to a Poisson process of rate λN . There is a dispatcher associated with this system whose objective is to minimize the response time experienced by each job by suitably balancing the workload across different servers. Owing to the popularity of redundancy based load balancing policies, we assume that the dispatcher has the ability to replicate an arriving job across multiple servers.
Throughout this article, we denote the set of the first n consecutive positive integers as [n] ≜ {1, . . . , n}, the set of non-negative integers as Z_+, the set of positive integers as N, the set of non-negative reals as R_+ and the set of positive reals as R_{++}. We also use the notation x ∧ y ≜ min{x, y}.
A. Replication
We denote the service time of the nth arriving job at the ith server by X_{n,i} ∈ R_+. We assume that the job service time sequence (X_{n,i} ∈ R_+ : n ∈ N, i ∈ [N]) is random and independent and identically distributed (i.i.d.) with common distribution G : R_+ → [0, 1] and common mean 1/µ. That is, we assume that the service time of each replica of a job is i.i.d. according to the same distribution G. Even if we consider all servers to be identical in terms of configuration and compute power, there could be some uncertainty in the time taken to service a job at any server due to other background processes [21]. The randomness assumption accommodates these uncertainties. Further, we also assume the service times to be exponentially distributed. Recent studies suggest that the service times in distributed computing systems can be modelled as having two components: a constant startup delay and a random memoryless component [22]-[24]. Although it is the shifted exponential model that best fits this profile, whenever the startup time is negligible the service time distribution can be approximated by an exponential distribution. This, along with analytical tractability, motivates us to assume that the service times follow an i.i.d. exponential distribution with rate µ. Also, we denote the tail distribution of the service time, or the complementary service time distribution, by Ḡ ≜ 1 − G. When we focus on a single queue i, we will drop the subscript i for brevity.
B. Threshold based cancellation
We assume that the dispatcher has limited functionality and that it cannot cancel redundant copies when one of the replicas has received (or starts receiving) service. Instead, we assume that the dispatcher can append a discard instruction to each replica. Before selecting a job/replica for service, each server will read the discard instruction and possibly discard the replica based on the instruction. We call this a redundancy based approach with server-side cancellation of replicas. For ease of exposition, we assume that the instruction is almost identical for all copies in the system and hence the overhead of implementing this approach is minimal. In this article, we restrict to instructions that are characterized by a threshold T ∈ [0, ∞). To elaborate, we assume that the server serves a replica if it is chosen for service within T units of its arrival, and discards the replica otherwise. We call T the discard threshold for brevity.
1) Primary replica and discard threshold: We consider the following dispatching policy based on the above idea of a discard threshold. When a job arrives, the dispatcher samples a single primary server uniformly at random and sends a primary replica of the job to the server along with the primary discard threshold T 1 .
2) Secondary replicas and discard thresholds: For each job arrival, the dispatcher decides to create secondary replicas independently with replication probability p. When the dispatcher decides to create secondary replicas, it samples d − 1 other servers uniformly at random and sends i.i.d. replicas of the same job to the sampled d − 1 servers, appending to each replica a secondary discard threshold of T_2, where T_2 ≤ T_1.
Since our policy is parametrized by probability of secondary replica p, primary discard threshold T 1 , and secondary discard threshold T 2 , we shall henceforth denote it by π(p, T 1 , T 2 ) for simplicity. The replication probability p controls the redundant load on the system. For example, we do not add any secondary replicas when p = 0, and we always add secondary replicas when p = 1. We choose the secondary discard threshold to be smaller than the primary discard threshold, since we expect the secondary replicas to be helpful only if the primary is delayed.
The following are some special cases of our discard-threshold based probabilistic d-replication-cancellation policy that we analyze in this article.
1) Selective replication with identical thresholds (π(p, T, T)): In this policy, each job is replicated d times and assigned to d servers chosen at random with probability p. Each job replica has a threshold of T time units, which can possibly result in loss of jobs. When T = ∞ and p = 1 the policy reduces to a simple replication policy without cancellation.
2) Selective replication with no loss (π(p, ∞, T_2)): This is a selective replication policy where the primary is always served, and the d − 1 secondary replicas are created only with probability p, reducing the overhead due to a large number of replicas. Since T_1 = ∞, each primary replica of a job is definitely served. The advantage of this policy is that no jobs are lost.
3) Selective replication on idle servers (π(p, ∞, 0)): This is a special case of the selective replication policy with minimal redundancy addition, since secondary replicas only join idle queues.
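The three special cases above differ only in how the parameters of π(p, T_1, T_2) are set. A small configuration-style illustration follows; the numerical values of p and the thresholds are arbitrary placeholders, and math.inf encodes T = ∞.

```python
import math

# Parameterizations of pi(p, T1, T2) corresponding to the special cases discussed above.
selective_identical = {"p": 0.5, "T1": 2.0, "T2": 2.0}             # pi(p, T, T)
selective_no_loss   = {"p": 0.5, "T1": math.inf, "T2": 0.5}        # pi(p, inf, T2): primary always served
selective_idle_only = {"p": 0.5, "T1": math.inf, "T2": 0.0}        # pi(p, inf, 0): secondaries join only idle servers
plain_replication   = {"p": 1.0, "T1": math.inf, "T2": math.inf}   # replication without cancellation
```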
C. Server
We assume that each server has an infinite sized buffer where arriving job replicas can wait for service on a first come first served (FCFS) basis. We let the random variable W_{n,i}, with distribution function F : R_+ → [0, 1], denote the waiting time experienced by the nth arriving job replica at server i ∈ [N]. Due to FCFS service, the random variable W_{n,i} is also the effective workload present at server i that must be served before this replica can receive service. An arriving replica is executed at a server i if its discard threshold T is larger than the observed workload W_{n,i}, and is discarded otherwise.
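Under FCFS, the workload seen by an arriving replica equals its waiting time, so the server-side rule reduces to a threshold check on the current workload. A minimal sketch of one server's bookkeeping between consecutive potential arrivals follows; it is our own illustration under that reading of the rule (a replica is accepted when its waiting time does not exceed its threshold).

```python
def offer_replica(workload: float, elapsed: float, service_time: float, threshold: float):
    """Update one FCFS server's workload at a potential arrival.

    workload: unfinished work just after the previous potential arrival,
    elapsed: time since that previous potential arrival,
    service_time: size of the arriving replica,
    threshold: discard threshold appended to the replica (T1 or T2).
    Returns (new_workload, accepted).
    """
    # Work drains at unit rate between arrivals.
    w = max(workload - elapsed, 0.0)
    # The replica would wait w units before reaching the head of the queue; it is served
    # only if that waiting time does not exceed its discard threshold.
    if w <= threshold:
        return w + service_time, True
    return w, False

# Example: a replica with threshold 0.5 arriving at a server with 2.0 units of backlog is dropped.
print(offer_replica(workload=2.0, elapsed=0.0, service_time=1.0, threshold=0.5))
```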
Each arriving job in the system results in a potential arrival at maximum d randomly sampled queues. Depending on the T and W n,i , the job either receives service or is discarded. If a replica is served, then it results in an actual arrival at the corresponding server queue.
D. Performance metrics
We consider the following two performance metrics, the mean response time and the loss probability. Since our dispatcher replicates each arriving job to at most d servers, the response time of an arriving job is the minimum of the sojourn times experienced by its different replicas. When both thresholds T_1 and T_2 are finite, every replica of a job can be discarded without service, leading to a loss. For lost jobs, the response time metric is meaningless. Hence, we obtain the mean response time of a job conditioned on the event that it is not discarded. A job is serviced when at least one of its replicas is not discarded at the servers sampled by the dispatcher, i.e. when the workload at one of these servers is smaller than or equal to the corresponding discard threshold. Definition 1. Let I_1 be the singleton set of servers where the primary replica is dispatched. Let ζ be the indicator that secondary replicas are created. Let I_2 be the candidate set of servers to which the secondary replicas are dispatched. For any server j, we define the indicators γ_{1,j} ≜ ½{j ∈ I_1} and γ_{2,j} ≜ ½{j ∈ I_2}, which indicate that the queue is selected as a primary or secondary server respectively. Definition 2. If a replica is dispatched to a server j ∈ I_1 ∪ I_2 with workload W_j, then we define the indicator that the job is not discarded at this server j as ξ_j ≜ γ_{1,j} ½{W_j ≤ T_1} + γ_{2,j} ½{W_j ≤ T_2}. We denote the set of servers where the job replicas are not discarded by I ≜ {j ∈ I_1 ∪ I_2 : ξ_j = 1}. A job is not discarded when I ≠ ∅, and we denote this by the indicator ξ ≜ ½{I ≠ ∅}. We can write this in terms of the indicator ζ, the sets of servers I_1, I_2, and the indicators ξ_j and ξ̄_j ≜ 1 − ξ_j for all j ∈ I_1 ∪ I_2, as ξ̄ = ∏_{j∈I_1} ξ̄_j (1 − ζ + ζ ∏_{j∈I_2} ξ̄_j). Definition 3. The loss probability for policy π(p, T_1, T_2) is denoted by P_L ≜ E[ξ̄].
Definition 4. We denote the response time of any job by R′ ∈ R_+ ∪ {∞} and the response time of an undiscarded job by the random variable R = ξR′ ∈ R_+, following the distribution function H : R_+ → [0, 1] with tail H̄(x) ≜ 1 − H(x) for all x ∈ R_+. We study the conditional mean response time of a job given that it is not discarded; specifically, we define the conditional mean response time as τ ≜ E[R′ | ξ = 1].
In this article, we analyze the performance of the π(p, T_1, T_2) load balancing policy for the different special cases listed in Section II-B, in terms of the two performance metrics of conditional mean response time and loss probability. Computing the limiting marginal workload distribution at a single queue is straightforward. However, a job's response time is the minimum of the response times of all of its replicas, and computing the conditional mean requires knowledge of the joint distribution of the workloads at all queues holding a replica. We therefore assume that, as the number of servers N grows large with the number of replicas d kept fixed, the workloads in different queues are independent of each other and the probability of creating secondary replicas does not depend on the existing workloads. The cavity process method and the conjecture on the asymptotic independence of the queues are stated in the next subsection.
E. Cavity process method
We first explain the principle of the cavity process method as applied to popular load balancing policies such as least loaded (LL(d)) or join the shortest queue (JSQ(d)), and then specialize the discussion to our policy π(p, T_1, T_2). See [4], [15]-[17] for more details about this approach. In the LL(d) (resp. JSQ(d)) system with N queues and Poisson arrival rate Nλ, d queues are sampled for each arriving job. The arriving job is executed on the sampled server with the smallest workload (resp. queue length). Let {H(t), t ≥ 0} denote a process of probability measures on R_+, called the environment process. We tag one of the queues in the N-queue system as the cavity queue and denote the cavity process by X_H(t), which represents the workload process (resp. queue length process) at the cavity queue under policy LL(d) (resp. JSQ(d)). The potential arrival rate of jobs to the cavity queue under both policies is λd. For a potential arrival at the cavity queue at time t, we compare d − 1 random variables with law H(t) against X_H(t−). The potential arrival becomes an actual arrival at the cavity queue if the value of X_H(t−) is lower than the values taken by the d − 1 other variables; otherwise the potential arrival does not materialize at the cavity queue. When the job is accepted, we have X_H(t) = X_H(t−) + 1 for the JSQ(d) policy and X_H(t) = X_H(t−) + x for the LL(d) policy, where x is the service requirement of the arriving job. When the job is not accepted, we have X_H(t) = X_H(t−). For the LL(d) policy, the workload X_H(t) at the cavity queue decreases at a unit rate, and for the JSQ(d) policy, the queue length X_H(t) of the cavity queue decreases by one at a unit rate. The process H(·) is called an equilibrium environment process if X_H(t) has distribution H(t) for all times t. If H(t) = H for all t, then H is called an equilibrium environment.
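The single potential-arrival update at the LL(d) cavity queue can be written in a few lines. The snippet below only illustrates the comparison step described above; the exponential environment law and unit-mean service requirement used in the example are arbitrary stand-ins, not quantities taken from the model.

```python
import random

def ll_d_cavity_update(x_cavity, d, sample_H, service):
    """One potential arrival at the LL(d) cavity queue.

    x_cavity : current cavity workload X_H(t-)
    sample_H : draws one workload from the environment law H(t)
    service  : service requirement x of the arriving job
    The arrival is accepted only if the cavity workload is the smallest among
    the d sampled values; otherwise the job is served at some other queue.
    """
    others = [sample_H() for _ in range(d - 1)]
    if all(x_cavity <= y for y in others):
        return x_cavity + service      # actual arrival at the cavity queue
    return x_cavity                    # potential arrival does not materialize

# Illustration only: exponential environment law and unit-mean service requirement.
x_new = ll_d_cavity_update(0.7, d=3,
                           sample_H=lambda: random.expovariate(1.0),
                           service=random.expovariate(1.0))
print(x_new)
```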
The cavity process method was used in [16], [17] to analyze the LL(d) and JSQ(d) policies. A key step in the analysis is to show asymptotic independence between the workload/queue length random variables at the different queues. While the analysis for LL(d) holds for any service requirement distribution, the proof for JSQ(d) is only known for the case when the service requirement of a job has a decreasing-hazard-rate distribution. In [4], this approach is used further to obtain the functional differential equation for the workload distribution of the cavity queue. In [15], several workload-based load balancing policies based on redundancy were considered and the cavity process method was used to identify the workload distribution for a wide range of load balancing policies. While the asymptotic independence of the queues was only conjectured there, it was very recently proved in [25] for a variety of such replication-based policies (covering most of the policies of [15]).
For our π(p, T_1, T_2) policy, we use this cavity process method along with the conjecture that the workload distribution across any finite subset of queues is asymptotically independent. For our policy, note that the potential arrival rate to the cavity queue is λ̃ ≜ λ(1 − p) + pλd = λ + pλ(d − 1). If the copy at the cavity queue is a primary replica, it is served if X_H(t−) ≤ T_1; similarly, if the replica at the cavity queue is a secondary one, it is served if X_H(t−) ≤ T_2. Clearly, the potential arrival at the cavity queue becomes an actual arrival based on the workload level at the queue. Remarkably, for our policy the d − 1 random variables with law H(·) have no influence on the cavity queue. With the following conjecture on the asymptotic independence of the workloads at the queues, using the cavity process approach, we can view the cavity queue as an M/G/1 queue with workload-dependent arrival rates. The workload distribution of the cavity queue is in fact the equilibrium environment H for our system. See [26] for one possible approach to obtaining the workload distribution for an M/G/1 queue with workload-dependent arrival rates. In the following we use a different approach, based on a Lindley-type recursion and the moment generating function (MGF), to obtain the workload distribution for the queue at the cavity. We believe that this approach is novel and can be applied to more general load balancing policies beyond this work.
Conjecture 5. Consider the load balancing policy π(p, T_1, T_2) and assume this system is stable for the chosen parameter values of p, T_1 and T_2 for any N. 1 Then as N → ∞, the system has a unique equilibrium workload distribution under which any finite number of queues are independent. Furthermore, this distribution is the same as the equilibrium distribution of the cavity process.
Remark 1. Based on this conjecture, we first obtain the MGF for the workload at the cavity queue. We then use this to obtain the conditional mean response time for the different policies. We illustrate the accuracy of our expressions in Appendix A by comparing them with simulation experiments for different values of N. As a validation of the conjecture, we see that as N increases, the mean response time from simulations approaches the analytical values.
III. PERFORMANCE ANALYSIS
We note that it is difficult to analyze the conditional mean response time and the loss probability under general conditions. However, based on the cavity process method and the assumption of asymptotic independence, we obtain expressions for both performance metrics. This computation is an approximation for a finite number of servers. However, we empirically verify that this approximation is quite accurate even for a small number of servers.
A. Asymptotic independence of queues
For the proposed discard-threshold-based dispatching policy, due to the assumption on the asymptotic independence of the queues, each queue in the system can be modeled as an M/G/1 queue with workload-dependent arrivals. When a replica of the nth job with discard threshold T arrives at a queue i with workload w, it is served at this queue if w ≤ T. Further, the workload in the queue after this actual arrival is incremented by the random service time X_{n,i} of the arriving replica. Note that every arriving job is replicated to at most d servers at the same time, and therefore the arrivals to different queues are correlated. We ignore this fact in our limiting analysis using the asymptotic independence, and hence our analysis is an approximate one.
B. Loss probability
When both primary and secondary thresholds are finite, some jobs can be discarded from the system. Under the asymptotic independence assumption, we compute the limiting loss probability of a job being discarded in the following Lemma.
Lemma 6. The limiting loss probability of a job under the discard-threshold-based dispatching policy π(p, T_1, T_2) with equilibrium workload distribution F and tail distribution of service time Ḡ is given by
P_L = F̄(T_1) (1 − p + p (F̄(T_2))^{d−1}).
Proof: From (2) and Definition 3, we obtain P_L = E[∏_{j∈I_1} ξ̄_j ∏_{j∈I_2} (ζ ξ̄_j + ζ̄)]. The result follows from the independence of the indicator ζ and the indicators (ξ_j : j ∈ I_1 ∪ I_2), the assumption of asymptotic independence across the servers j ∈ I_1 ∪ I_2, and the fact that Eζ = p and Eξ̄_j = F̄(T_1)γ_{1,j} + F̄(T_2)γ_{2,j} for all j ∈ I_1 ∪ I_2.
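As a quick sanity check of this expression, the sketch below compares it against a direct Monte Carlo evaluation of the indicators in Definitions 1-3 under the independence assumption. The exponential workload law used here is a stand-in chosen only so that F̄ has a simple closed form; it is not the equilibrium distribution derived later in the paper.

```python
import math
import random

def loss_closed_form(p, d, Fbar_T1, Fbar_T2):
    # P_L = Fbar(T1) * (1 - p + p * Fbar(T2)**(d - 1)), as in Lemma 6.
    return Fbar_T1 * (1 - p + p * Fbar_T2 ** (d - 1))

def loss_monte_carlo(p, d, T1, T2, sample_W, n=200_000, seed=0):
    rng = random.Random(seed)
    lost = 0
    for _ in range(n):
        primary_ok = sample_W(rng) <= T1                       # xi_j for the primary server
        zeta = rng.random() < p                                # are secondary replicas created?
        secondary_ok = zeta and any(sample_W(rng) <= T2 for _ in range(d - 1))
        lost += not (primary_ok or secondary_ok)               # every replica discarded
    return lost / n

# Stand-in workload law W ~ Exp(1), so Fbar(T) = exp(-T).
sample_W = lambda rng: rng.expovariate(1.0)
p, d, T = 0.8, 3, 1.5
print(loss_closed_form(p, d, math.exp(-T), math.exp(-T)))
print(loss_monte_carlo(p, d, T, T, sample_W))
```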
C. Conditional mean response time
Next, we characterize the mean response time for a job under the dispatching policy π(p, T_1, T_2). Note that when the discard thresholds T_1, T_2 are finite, a job is lost whenever every one of its replicas arrives at a server whose workload exceeds the corresponding threshold. For lost jobs, the response time metric is meaningless. Hence, we obtain the conditional mean response time given that the job is not discarded. A job is serviced when at least one of its replicas is not discarded at the servers sampled by the dispatcher, i.e., when the workload at one of these servers is smaller than or equal to the corresponding discard threshold.
Theorem 7. The conditional mean response time of an undiscarded job under the π(p, T_1, T_2) policy with equilibrium workload distribution F and tail distribution of service time Ḡ can be expressed in terms of the function k(x, T) ≜ E[1{W_j ≤ T} 1{X_j + W_j > x}], as derived in the following proof.
Proof: The tail distribution H̄ of the response time of an undiscarded job in the system is defined in Definition 4. Therefore, the mean response time for an undiscarded job can be written as E[R] = ∫_0^∞ x dH(x). Next, we derive an expression for the tail distribution H̄ of the response time for each undiscarded job under the π(p, T_1, T_2) policy. Recall that ζ is the indicator that secondary replicas are created, and I_1, I_2 denote the disjoint random sets of servers where primary and secondary replicas are dispatched. The indicator that the job replica at server j ∈ I_1 ∪ I_2 with workload W_j is not discarded is defined in (1). Recall that the set of servers where the job replicas are not discarded is denoted by I = {j ∈ I_1 : ξ_j = 1} ∪ {j ∈ I_2 : ζ = 1, ξ_j = 1}, and the indicator of an undiscarded job is ξ = 1{I ≠ ∅}. Therefore, we can write the indicator of the response time of an undiscarded job being larger than a threshold x ∈ R_+. Substituting (2) for the indicator ξ, using the fact that ξ_j ξ̄_j = ζζ̄ = 0, and re-arranging the terms, we can take expectations on both sides. Using the independence of the indicators ζ and (ξ_j : j ∈ I_1 ∪ I_2) with the respective means Eζ = p and E[ξ_j | I_1, I_2] = F(T_1)γ_{1,j} + F(T_2)γ_{2,j}, together with the definition of k(x, T), we obtain the tail distribution of the response time for an undiscarded job. Since the right-hand side does not depend on I_1, I_2, the same expression gives H̄(x). The result then follows from equation (3) for the conditional mean response time.
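Numerically, everything in the theorem reduces to evaluating k(x, T) and one further one-dimensional integral. The sketch below does this for one plausible reading of the result, namely the product-form tail H̄(x) = (F̄(T_1) + k(x, T_1))(1 − p + p(F̄(T_2) + k(x, T_2))^{d−1}) − P_L normalized by 1 − P_L. Since the printed display of Theorem 7 is not reproduced above, that form, the stand-in workload law, and all parameter values are assumptions of this sketch, not statements of the paper.

```python
import math

MU = 1.0                                         # service rate; Gbar(x) = exp(-MU*x), x >= 0
Gbar = lambda x: math.exp(-MU * max(x, 0.0))

def k(x, T, f, F0, grid=400):
    """k(x, T) = E[1{W <= T} 1{X + W > x}] = F(0)*Gbar(x) + int_0^T Gbar(x - w) f(w) dw.
    Finite thresholds only; f is the density of W on (0, inf), F0 the atom at 0."""
    if T <= 0:
        return F0 * Gbar(x)
    ws = [T * i / grid for i in range(grid + 1)]
    vals = [Gbar(x - w) * f(w) for w in ws]
    integral = sum(0.5 * (vals[i] + vals[i + 1]) * (ws[i + 1] - ws[i]) for i in range(grid))
    return F0 * Gbar(x) + integral

def tau(p, d, T1, T2, f, F0, Fbar, x_max=40.0, grid=800):
    """Conditional mean response time under the assumed product-form tail (see lead-in)."""
    PL = Fbar(T1) * (1 - p + p * Fbar(T2) ** (d - 1))
    xs = [x_max * i / grid for i in range(grid + 1)]
    hb = [(Fbar(T1) + k(x, T1, f, F0)) *
          (1 - p + p * (Fbar(T2) + k(x, T2, f, F0)) ** (d - 1)) - PL for x in xs]
    area = sum(0.5 * (hb[i] + hb[i + 1]) * (xs[i + 1] - xs[i]) for i in range(grid))
    return area / (1 - PL)

# Stand-in workload law: atom 0.3 at zero plus an Exp(0.8) density on the positive axis.
atom, rate = 0.3, 0.8
f    = lambda w: (1 - atom) * rate * math.exp(-rate * w)
Fbar = lambda T: (1 - atom) * math.exp(-rate * T)
print(tau(p=0.8, d=3, T1=2.0, T2=2.0, f=f, F0=atom, Fbar=Fbar))
```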
Remark 2. From the non-negativity of the distribution functions, we can exchange the two integrals by the monotone convergence theorem. Defining k(x) ≜ lim_{T→∞} k(x, T), we observe that ∫_{R_+} k(x) dx = EW + 1/µ.
Remark 3. When the replication probability p equals 1, secondary replicas are always created and the tail distribution of the response time simplifies accordingly.
Remark 4. When the thresholds T_1 and T_2 are infinite, the tail workload distributions satisfy F̄(T_1) = F̄(T_2) = 0 and we have k(x, T_1) = k(x, T_2) = k(x).
IV. WORKLOAD DISTRIBUTION UNDER EXPONENTIAL SERVICE TIMES
In this section, we evaluate the workload distribution F at the cavity queue under the various load balancing policies discussed in Section II-B when the service times of the jobs are independent and identically exponentially distributed with rate µ. We choose exponentially distributed service times because their memoryless property makes them amenable to analytical computation. Let us first introduce some preliminary definitions before presenting the results.
We denote the indicator that the jth server is selected by the nth job as a primary or a secondary server by γ^n_{1,j} and γ^n_{2,j}, respectively. Recall that the workload seen by the nth arriving job at server j is W_{n,j}, and the service time of the nth job if it joins server j is X_{n,j}. Since we are interested in a single cavity queue j, we drop the subscript j in the following. Using Lindley's recursion for the single-queue workload sequence (W_n : n ∈ N) in terms of the random service time sequence (X_n : n ∈ N), the inter-arrival time sequence (T_n : n ∈ N), and the indicator ζ_n denoting whether secondary replicas are created, we get
W_{n+1} = (W_n + A_n X_n − T_{n+1}) 1{W_n + A_n X_n > T_{n+1}}, where A_n ≜ γ^n_1 1{W_n ≤ T_1} + ζ_n γ^n_2 1{W_n ≤ T_2}.   (4)
That is, we have
W_{n+1} = max(W_n + A_n X_n − T_{n+1}, 0).   (5)
In order to derive the workload distribution at the cavity queue, we make use of the moment generating function of the workload.
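Before turning to transforms, the recursion above can also be iterated directly. The following sketch simulates one tagged queue under the recursion, with the tagged queue hosting the primary replica of a system arrival with probability 1/N and acting as a candidate secondary server with probability (d − 1)/N; the parameter values are illustrative only, and the empirical distribution it produces is what Conjecture 5 says the cavity process should match as N grows.

```python
import random

def cavity_workload_samples(lam, mu, N, d, p, T1, T2, n_jobs=200_000, seed=1):
    """Iterate the Lindley-type recursion for one tagged queue.

    System arrivals form a Poisson process of rate N*lam; the tagged queue hosts
    the primary replica with prob. 1/N and is a candidate secondary server with
    prob. (d-1)/N.  A replica is admitted only if the workload it sees does not
    exceed its threshold.  Returns the workloads seen by the potential arrivals.
    """
    rng = random.Random(seed)
    w, seen = 0.0, []
    for _ in range(n_jobs):
        seen.append(w)                        # W_n, the workload seen by the nth potential arrival
        u, add = rng.random(), 0.0
        if u < 1.0 / N:                       # primary replica at the tagged queue
            if w <= T1:
                add = rng.expovariate(mu)
        elif u < d / N:                       # candidate secondary server
            if rng.random() < p and w <= T2:
                add = rng.expovariate(mu)
        w = max(w + add - rng.expovariate(N * lam), 0.0)   # Lindley step to the next arrival
    return seen

ws = cavity_workload_samples(lam=0.3, mu=1.0, N=20, d=3, p=1.0, T1=1.5, T2=1.5)
print("empirical P(W = 0):", sum(x == 0.0 for x in ws) / len(ws))
```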
Definition 8. The moment generating function of the limiting workload W in a single queue, restricted to the different workload regimes, is defined as Φ_W(θ) ≜ E[e^{−θW}], Φ_1(θ) ≜ E[e^{−θW} 1{W > T_1}], and Φ_2(θ) ≜ E[e^{−θW} 1{W > T_2}].
Theorem 9. For an N-server system with i.i.d. exponential service times of rate µ and Poisson arrivals of rate Nλ, the moment generating function Φ_W(θ) of the waiting time of admitted jobs at any queue under the π(p, T_1, T_2) policy is given by equation (12).
Corollary 10. For an N-server system with i.i.d. exponential service times of rate µ and Poisson arrivals of rate Nλ, the single-queue workload distribution under the π(p, T_1, T_2) policy follows by inverting the moment generating function of Theorem 9.
Next, we study some special cases of the π(p, T_1, T_2) policy listed in Section II-B.
A. Selective replication with identical thresholds
First, we study the system under the selective replication with identical thresholds policy, π(p, T, T). The next result follows from Corollary 10 by substituting T_1 = T_2 = T.
Corollary 11. For an N-server system with i.i.d. exponential service times of rate µ and Poisson arrivals of rate Nλ, the workload distribution at the cavity queue at stationarity under the π(p, T, T) policy is obtained from Corollary 10 with T_1 = T_2 = T. Using the above corollary, we now compute the loss probability and the conditional mean response time; the tail F̄(T) and the probability of zero workload F(0) are given in Corollary 11. From Theorem 7, the conditional mean response time under the π(p, T, T) policy then follows.
Corollary 12. The loss probability of a job under the discard-threshold-based dispatching policy π(p, T, T) with equilibrium workload distribution F and tail distribution of service time Ḡ is given by P_L = F̄(T) (1 − p + p (F̄(T))^{d−1}).
Thus, we see that computing the term k(x, T) allows us to evaluate the mean response time of the N-server system under the policy π(p, T, T). The next lemma provides this result.
Fig. 1: For the policy π(p, T, T) with a fixed number of servers N = 20, arrival rate λ = 0.3, probability p = 1, and service rate µ = 1: the conditional mean response time τ as a function of the threshold T is plotted in Fig. 1a, the loss probability P_L as a function of T in Fig. 1b, and the tradeoff between τ and P_L in Fig. 1c.
Lemma 13. For an N-server system with i.i.d. exponential service times of rate µ and Poisson arrivals of rate Nλ, the constants F(0) and F̄(T) under the π(p, T, T) policy follow from Corollary 11, and the function k(x, T) is given by
k(x, T) = e^{−µx} ∫_{[0,T]} e^{µw} dF(w) for x ≥ T, and k(x, T) = e^{−µx} ∫_{[0,x]} e^{µw} dF(w) + F(T) − F(x) for x < T.
Proof: The service times are exponential, and hence the tail service time distribution is Ḡ(x) = e^{−µ(x)^+}, where (x)^+ = max{x, 0}. Therefore, we can write k(x, T) = ∫_{[0,T]} Ḡ(x − w) dF(w). Considering the two cases x ≥ T and x < T gives the expressions above. The result follows from the workload distribution F given in Corollary 11.
In Fig. 1, we consider a numerical plot for the policy π(p, T, T ) for the number of servers N = 20, the normalized arrival rate λ = 0.3, and probability of replication p = 1. We compare the conditional mean response time and the loss probabilities for different values of discard threshold T and for different choices of number of replicas d. From Fig. 1a, we see that the conditional mean response time increases in threshold T . At the same time, we observe that the loss probability decreases in threshold T in Fig. 1b. For values of discard threshold T between 0 and 1, there seems to be significant gain from using our policy as compared to random routing. Further, we note that the tradeoff in terms of the loss probability also seems marginal, since the maximum loss probability is observed to be around 0.095. The tradeoff between the conditional mean response time of admitted jobs and loss probability P L for different numbers of replicas d, is illustrated in Fig. 1c. We observe that for the number of replicas d > 1, the proposed policy can offer a significantly lower conditional mean response time compared to random routing policy by allowing a small loss probability.
In Fig. 2, we study the behaviour of the conditional mean response time and the loss probability for π(p, T, T) as the normalized arrival rate λ increases. We choose the number of servers N = 20, discard threshold T = 1.5, and probability of replication p = 1. Fig. 2a and Fig. 2b illustrate the advantages of our policy over random routing. For lower values of the normalized arrival rate λ, our policy for d > 1 outperforms random routing. Even for higher values of arrival rates, the proposed policy has a lower mean response time as compared to random routing. Moreover, as the discard thresholds are finite, the system stays stable for any arrival rate, unlike a random routing policy. However, this comes at the cost of loss probabilities, and we see that the loss probability rises up to 0.35 in the provided plots. Note that the case of a single primary replica, d = 1, is different from random routing. When d = 1 in the π(p, T, T) policy, only one primary replica exists and it is executed only if the workload at the randomly selected primary server is smaller than the discard threshold T. Fig. 2c illustrates the tradeoff between the two performance metrics for this policy. This study shows that the right value of the number of replicas d must be chosen for a given admissible loss probability in order to minimize the conditional mean response time.
Fig. 2: For the policy π(p, T, T) with a fixed number of servers N = 20, threshold T = 1.5, probability p = 1, and service rate µ = 1: the conditional mean response time τ as a function of the arrival rate λ is plotted in Fig. 2a, the loss probability P_L as a function of λ in Fig. 2b, and the tradeoff between τ and P_L in Fig. 2c.
B. Selective replication with no loss
We next study the N server system under the selective replication with no loss policy. Specifically, we assume that the primary discard threshold T 1 = ∞, and the secondary discard threshold T 2 < T 1 is finite. In this case, the system is stable only if λ < µ. First, we obtain the following result from Corollary 10 by substituting T 1 = ∞.
Corollary 14.
For an N-server system with i.i.d. exponential service times of rate µ and Poisson arrivals of rate Nλ, the stationary workload distribution at the cavity queue under the π(p, ∞, T_2) policy exists only for λ < µ and is given by Corollary 10 with T_1 = ∞, together with the probability mass F(0) at zero workload. Note that the loss probability is 0 under this policy. Then, from Theorem 7, we obtain the conditional mean response time. From the definition of k(x, T_2), it follows that F̄(T_2) + k(x, T_2) ≤ 1, and hence the conditional mean response time for this special case is minimized for p = 1.
The next lemma provides the terms k(x, T_2), k(x, ∞) and F̄(T_2) that enable us to compute the conditional mean response time τ under the scheduling policy π(1, ∞, T_2). Note that we provide the results only for the regime of arrival rates where the system is stable, that is, when λ < µ.
Lemma 15. For a stable N-server system with i.i.d. exponential service times of rate µ and Poisson arrivals of rate Nλ, the function k(x, T_2) under the π(p, ∞, T_2) policy, the function k(x, ∞), and the probability mass at 0 can all be computed in closed form from the workload distribution of Corollary 14.
Proof: Since the service time is exponentially distributed with rate µ, we get Ḡ(x) = e^{−µ(x)^+}. Therefore, we can write the function k(x, T) = ∫_{[0,T]} Ḡ(x − w) dF(w). Setting T = ∞, we get k(x, ∞) = e^{−µx} ∫_{[0,x]} e^{µw} dF(w) + F̄(x). Substituting the workload distribution F from Corollary 14 gives the result.
In Figure 3, we compare the conditional mean response time for jobs under the policy π(1, ∞, T_2) as a function of the normalized arrival rate λ for different numbers of replicas d, with the number of servers N = 20 and exponential service rate µ = 1. We choose the secondary discard threshold T_2 = 2, which is twice the mean service time of a job. As anticipated, lower values of d are preferable as the normalized arrival rate λ increases. That is, when the arrival rates are high, the creation of redundant replicas increases the load in the system, which adversely affects the performance. Recall that for d = 1, this policy is the same as the random routing policy, and it is observed to be the preferred policy in the high-load regime. Also evident from the figure is the fact that the stability condition is λ < µ, independent of the number of replicas d. We compare the mean response time under the policy π(1, ∞, T_2) as a function of the secondary discard threshold T_2 for different numbers of replicas d in Fig. 4. We choose the normalized arrival rate λ = 0.3 and the number of servers N = 20. We observe that the choice of the replication factor d affects the conditional mean response time, and the replication factor d that minimizes the conditional mean response time depends on the discard threshold T_2. Alternatively, if the number of secondary replicas d is chosen a priori, then the secondary discard threshold T_2 should be chosen carefully to minimize the conditional mean response time.
Remark 6. Let us consider the π(p, ∞, ∞) policy, which is a special case of π(p, T, T) for T = ∞ as well as of π(p, ∞, T_2) for T_2 = ∞. Note that no jobs are lost in such a system, and therefore the loss probability is zero. Further, as the arrival rate to any queue is λ̃, the system is stable only if λ̃ < µ. For this regime, using Lemma 13 it can be shown that k(x, ∞) = e^{−(µ−λ̃)x}. From this, the conditional mean response time for the exponential service time distribution can be computed. In Fig. 5, we plot this conditional mean response time for the policy π(p, ∞, ∞) as a function of λ for N = 20 servers, p = 1, and different values of d. The figure is indicative of the stability condition λ < 1/d for this policy. The performance gain from using larger values of d is also evident, but it comes at the cost of a stricter stability condition. Of course, the clear advantage of this policy over random routing (d = 1) is limited to lower arrival rates. At higher values of λ, the fact that the copies cannot be cancelled impacts the performance of the system severely. For better clarity, we also provide the percentage improvement in the conditional mean response time of the policy π(p, ∞, ∞) over the random routing policy across stable regions in Table I. From the above studies, we observe that the introduction of secondary replicas adds to the load in the system and deteriorates the system performance when arrival rates are high. Therefore, in the following section, we study a variant of the selective replication policy where we replicate only on idle servers.
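Before moving on, the p = 1 case of Remark 6 admits a compact numerical illustration. Under our reading of the remark (which we state as an assumption, since the printed expression is not reproduced above), each queue then behaves like an M/M/1 queue with arrival rate λd, the response time is the minimum of d sojourn times, and hence τ = 1/(d(µ − λd)); the snippet below evaluates this and the improvement over random routing in the spirit of Table I.

```python
def tau_no_discard(lam, mu, d):
    """Mean response time under pi(1, inf, inf), assuming each queue is M/M/1 with
    arrival rate lam*d and the job finishes at the fastest of its d replicas.
    Valid only in the stable regime lam * d < mu (an assumption of this sketch)."""
    assert lam * d < mu, "unstable: requires lam < mu / d"
    return 1.0 / (d * (mu - lam * d))

lam, mu = 0.1, 1.0
baseline = tau_no_discard(lam, mu, 1)          # d = 1 coincides with random routing
for d in (1, 2, 3):
    t = tau_no_discard(lam, mu, d)
    print(f"d={d}: tau={t:.4f}  ({100 * (baseline - t) / baseline:.1f}% improvement)")
```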
C. Selective replication on idle servers
As mentioned above, we next study the special case of the π(1, ∞, T_2) policy where the discard threshold T_2 = 0. This implies that the primary server is chosen uniformly at random from the N servers, and the servers for the (d − 1) secondary replicas are chosen uniformly at random from the remaining N − 1 servers. The secondary replicas are added only if the sampled secondary servers are idle. Since this policy is a special case of the selective replication with no loss policy, we can obtain the mean response time directly.
Lemma 16. The mean response time of any job under the dispatching policy π(1, ∞, 0), when the service times of jobs are i.i.d. exponential with rate µ and arrivals are Poisson with rate Nλ, is finite for λ < µ and is determined by F(0) = (µ − λ)/(µ + λ(d − 1)) and F̄(0) = 1 − F(0).
Proof: Substituting T_2 = 0 in Lemma 15 and substituting the resulting terms in (7) gives the result.
In Fig. 6, we compare the mean sojourn time under the dispatch policy π(1, ∞, 0) as a function of the normalized arrival rate λ for different numbers of replicas d. We have selected the total number of servers as N = 20 and the exponential service rate µ = 1. It follows from the figure that for lower arrival rates, a higher number of replicas d is preferred. Moreover, the performance of the policies with secondary replicas is never worse than that of the random routing policy. That is, as opposed to the other policies seen earlier, the additional replicas are executed only if the server is idle, so a higher choice of the replication factor d does not increase the system load significantly. For moderate to higher values of arrival rates, all the different choices of the number of replicas have similar performance, with the stability condition being λ < µ, independent of the number of replicas d. We also provide the percentage improvement in the conditional mean response time of the policy π(p, ∞, 0) over the random routing policy for various values of the normalized arrival rate in Table II.
V. DISCUSSION & FUTURE WORK
In this work, we consider load balancing policies that are suitable for rigid working environments with no feedback, no memory, and no synchronized replica cancellations. In such settings, the random routing policy is the default choice for load balancing. Equipped with the ability to create replicas and pass cancellation instructions to servers, we have introduced a new class of policies, namely π(p, T_1, T_2). An attractive feature of these policies is the server-side cancellation of replicas based on the discard thresholds T_1 and T_2. In this work we have shown that this policy (and several of its special cases) not only offers a marked improvement over the random routing policy (for suitable choices of the parameters λ, d) but does so without using any communication from the servers. We analyze this policy using the cavity queue approach and the conjecture on the asymptotic independence of queues. Using the MGF approach, we characterize the conditional mean sojourn time of a job and the loss probability for the policy as part of our key results.
A key assumption in our analysis has been the exponential service requirements for jobs, and that the copies of jobs require independent and identically distributed service time. We believe that relaxing these assumptions and analyzing the proposed π(p, T 1 , T 2 ) policy for more general service time distributions and for the case of identical replicas is an interesting open direction. One can think of more nuanced policies without feedback such as replicating only short jobs (if the service requirement of a job is known at arrival) or replicating only if the primary copy is discarded. Analyzing such policies is also part of our agenda. Finally, while the performance of π(p, T 1 , T 2 ) seems to be good (compared to random routing) for lower values of normalized arrival rates λ, it would be interesting to investigate if there exist such no feedback policies that are better than random routing even for higher values of normalized arrival rates λ.
APPENDIX A MODEL VALIDATION
In this section, we discuss the accuracy of our theoretical results and compare them with simulation experiments. We obtained the conditional mean sojourn time τ for undiscarded jobs and the probability of discard P_L under the proposed probabilistic redundancy policy π(p, T_1, T_2), based on the conjecture of the asymptotic independence of the queues. The workload distribution for the cavity queue under the policy π(p, T_1, T_2) has a closed-form expression for exponentially distributed service times and is provided in Corollary 10. The expression for the conditional mean sojourn time under the policy π(p, T_1, T_2) is complex, and hence we have omitted it. Instead, we restrict our validation results to three special cases: (a) deterministic d replicas with identical finite discard thresholds, π(1, T, T); (b) deterministic d replicas with no discard, π(1, ∞, ∞); and (c) deterministic d replicas with secondary replicas only at idle servers, π(1, ∞, 0).
Findings of the simulation experiments under the policy π(1, T, T) are reported in Fig. 7. We note that this is a lossy system, where some jobs can be discarded if none of the sampled servers has a workload smaller than the threshold T. We plot the conditional response time for π(1, T, T) as a function of the normalized arrival rate λ, when the jobs have i.i.d. exponential service times with unit mean. The identical discard threshold for primary and secondary replicas is taken as T_1 = T_2 = 5 and the total number of replicas is d = 3. Each experiment is run over 10^5 iterations, and we repeat this experiment for an increasing number of servers N. We empirically compute the average response time of undiscarded jobs as a function of the normalized arrival rate λ. We observe that the empirical curve approaches our analytical computation under the asymptotic independence conjecture as the number of servers N increases. This provides an empirical validation of the asymptotic independence conjecture, and hence of our theoretical results. In particular, it indicates that even for the most general of our policies, the asymptotic independence of queues appears to hold. When the primary discard threshold is infinite, all jobs get served. We illustrate a similar validation for the two special cases where the primary replica is never discarded. The results for deterministic d replicas with no discard (the π(1, ∞, ∞) policy) are presented in Fig. 8, and for deterministic d replicas with secondaries on idle servers (the π(1, ∞, 0) policy) in Fig. 9. The closed-form theoretical expressions for the conditional mean response time of these policies are provided in Remark 6 and Lemma 16, respectively. As in the case of π(p, T, T), we see that the empirically computed mean response time of undiscarded jobs converges to the corresponding theoretical expression as the number of servers N increases. This indicates that as the number of servers N increases, the workloads across queues tend to become independent, validating our conjecture on the asymptotic independence of queues. It is remarkable that the theoretical values and those obtained empirically from the simulation coincide even when the number of servers N is as low as 10.
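For readers who want to reproduce this kind of validation, the sketch below is a compact, arrival-driven simulator of the finite-N system under π(p, T_1, T_2). It is not the code used for the figures; the parameter values are illustrative, and replicas are given independent exponential service times at their respective servers, as assumed in the analysis.

```python
import random

def simulate_system(N, lam, mu, d, p, T1, T2, n_jobs=200_000, seed=7):
    """Arrival-driven simulation of the N-server system under pi(p, T1, T2).

    Workloads are tracked directly (FCFS, no cancellation of admitted copies), so a
    job's response time is the smallest 'workload seen + own service time' over its
    admitted replicas.  Returns (conditional mean response time, loss probability).
    """
    rng = random.Random(seed)
    work = [0.0] * N                                   # unfinished work at each server
    served, resp_sum, lost = 0, 0.0, 0
    for _ in range(n_jobs):
        dt = rng.expovariate(N * lam)
        work = [max(w - dt, 0.0) for w in work]        # every server drains at unit rate
        chosen = rng.sample(range(N), d)
        primary = chosen[0]
        replicas = chosen if rng.random() < p else [primary]
        best = float("inf")
        for j in replicas:
            threshold = T1 if j == primary else T2
            if work[j] <= threshold:                   # replica admitted at server j
                x = rng.expovariate(mu)
                best = min(best, work[j] + x)
                work[j] += x
        if best == float("inf"):
            lost += 1
        else:
            served += 1
            resp_sum += best
    return resp_sum / served, lost / n_jobs

print(simulate_system(N=20, lam=0.3, mu=1.0, d=3, p=1.0, T1=5.0, T2=5.0))
```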
Even though we have performed extensive validations for different values of p, T_1 and T_2 (for which closed-form results are available) and have observed similar behavior as the number of servers N increases, we present only a select few of the plots validating our model.
APPENDIX B PROOF OF THEOREM 9
This section provides the moment generating function based approach for deriving the stationary workload distribution of a single queue in an N-server system with i.i.d. service times, Poisson arrivals, and the threshold-based dispatching policy π(p, T_1, T_2). Although the proof is provided only for the case where the service times are exponentially distributed with rate µ, the same approach can be used when the service times follow a shifted exponential distribution. We omit the details due to space constraints. Let us now begin the proof by providing two simple results.
Lemma 17. For the interarrival time sequence (T_n : n ∈ N), we have the following identity.
Proof: Recall that the interarrival times (T_n : n ∈ N) are i.i.d. exponential with rate Nλ, and the duration T_{n+1} is independent of the past workloads (W_1, . . . , W_n) and the past and present service times (X_1, . . . , X_n) for all n ∈ Z_+. Hence, the result follows.
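The printed display of Lemma 17 is not reproduced above, but it is an identity of the following flavor: conditioning on the workload and integrating out an exponential interarrival time yields a simple closed form. The sympy check below verifies one representative identity of this type, E[e^{−θ(w−T)^+}] for T ~ Exp(ν) and a fixed w; this specific identity is our own illustration, not necessarily the exact statement of the lemma.

```python
import sympy as sp

theta, nu, w, t = sp.symbols('theta nu w t', positive=True)

# E[exp(-theta*(w - T)^+)] for T ~ Exp(nu): split over {T < w} and {T >= w}.
lhs = sp.integrate(nu * sp.exp(-nu * t) * sp.exp(-theta * (w - t)), (t, 0, w)) + sp.exp(-nu * w)
rhs = nu * sp.exp(-theta * w) * (1 - sp.exp(-(nu - theta) * w)) / (nu - theta) + sp.exp(-nu * w)

# Check the closed form at a few (exact, rational) parameter choices; each difference is 0.
for vals in [{nu: sp.Rational(3, 2), theta: sp.Rational(1, 3), w: 2},
             {nu: 2, theta: sp.Rational(5, 7), w: sp.Rational(9, 4)}]:
    print(sp.simplify((lhs - rhs).subs(vals)))
```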
In addition, we have the following identity for the service times.
Proof: The nth service time X_n is independent of the workloads (W_1, . . . , W_n) seen by the first n incoming arrivals. The first equality follows from this observation. The second equality follows from the fact that Φ_X(θ) = µ/(µ + θ).
Proposition 19. For an N-server system with i.i.d. exponential service times of rate µ and Poisson arrivals of rate Nλ under the π(p, T_1, T_2) policy, with the moment generating functions of the limiting workload W in a single queue as defined in Definition 8, the relation in equation (12) holds.
Proof: From (4), we can write the restricted moment generating function of W_{n+1} in terms of W_n. We assume that there exists a limiting workload distribution lim_{n→∞} P{W_n ≤ w} seen by an arriving customer, which equals the limiting distribution of the workload in the system by the PASTA property. At stationarity, we take the distribution of both W_{n+1} and W_n to be the limiting distribution F. Now let us compute Φ_W(θ). By definition, the moment generating function of the workload at the (n + 1)th arrival splits into three terms, which we derive separately. We first observe that in the region W_n > T_1, we have W_{n+1} = (W_n − T_{n+1}) 1{W_n > T_{n+1}} from (5). Using the identity in (9), we can write the first term. We next observe that in the region W_n ∈ (T_2, T_1], we have W_{n+1} = (W_n − T_{n+1}) 1{W_n > T_{n+1}} with probability 1 − 1/N, and W_{n+1} = (W_n + X_n − T_{n+1}) 1{W_n + X_n > T_{n+1}} with probability 1/N. For X_n = 0 the moment generating function Φ_X(θ) = 1, and hence combining the two cases we obtain the second term. We next observe that in the region W_n ≤ T_2, we have W_{n+1} = (W_n − T_{n+1}) 1{W_n > T_{n+1}} with probability 1 − λ̃/(Nλ), and W_{n+1} = (W_n + X_n − T_{n+1}) 1{W_n + X_n > T_{n+1}} with probability λ̃/(Nλ). Repeating the steps followed above for the region W_n > T_1 and rearranging, we obtain the third term. We observe that the LHS and RHS have the form f(θ) = f(Nλ) for an arbitrary function f and variables θ and λ. Therefore, we conclude that f(θ) = f(0). Further, note that Φ_i(0) = F̄(T_i) for i ∈ [2]. Then, using equation (11), we can write the expression for exponential service times. Now, we substitute Φ_1(θ) and Φ_2(θ) from equations (13) and (16), respectively, in the above equation. Further incorporating equations (14) and (17) and rearranging the terms yields equation (12).
Remark 7. Upon inverting the moment generating function in equation (12), we obtain the complementary workload distribution function F̄(w) for w ≥ 0. In addition, we can find the constant F(0) = 1 − λ̃/µ + ((λ̃ − λ)/µ) F̄(T_2) + (λ/µ) F̄(T_1).
Proposition 20. For an N-server system with i.i.d. exponential service times of rate µ and Poisson arrivals of rate Nλ under the π(p, T_1, T_2) policy, with the moment generating functions of the limiting workload W in a single queue as defined in Definition 8, Φ_1(θ) can be computed in closed form. This implies that for w > T_1, F̄(w) = F̄(T_1) e^{−µ(w−T_1)}.
Proof: The computation remains similar to the previous step, with the additional restriction W_{n+1} > T_1. We sequentially compute the first term, the sum of the first two terms, and the sum of all three terms as before. In the region W_n > T_1, we have e^{−θW_{n+1}} 1{W_{n+1} > T_1} 1{W_n > T_1} = e^{−θ(W_n − T_{n+1})} 1{T_{n+1} < W_n − T_1} 1{W_n > T_1}.
Then, it follows that the first term can be written down directly. Note that in the region W_n ≤ T_1, it is not possible to have W_{n+1} > T_1 unless the nth arrival with service time X_n is admitted at the cavity queue. This occurs with probability 1/N in the region T_2 < W_n ≤ T_1, and with probability λ̃/(Nλ) in the region W_n ≤ T_2. We can therefore write the corresponding terms for the region T_2 < W_n ≤ T_1 and for the region W_n ≤ T_2. Substituting the above three equations in equation (15) and simplifying as in the previous proof, we obtain the stated expression. The result follows by inverting the moment generating function and noting that Φ_1(0) = F̄(T_1).
In addition, F̄(w) on the remaining region can be characterized through Φ_2(θ), as shown next.
Proof: The computation remains similar to the previous case, but here we have the restriction W_{n+1} > T_2. We sequentially compute the first term, the sum of the first two terms, and the sum of all three terms as before. The indicator of W_{n+1} > T_2 implies that W_{n+1} cannot be zero. In the region W_n > T_1, the first term can be written directly. Similarly, in the region T_2 < W_n ≤ T_1, an external arrival is admitted with probability 1/N; when there is no arrival, W_{n+1} = W_n − T_{n+1}, and when the nth arrival with service time X_n is admitted at the cavity queue, which happens with probability 1/N, we have W_{n+1} = W_n + X_n − T_{n+1}. Combining these results in the region W_n > T_2, we obtain the corresponding contribution. In the region W_n ≤ T_2, it is not possible to have W_{n+1} > T_2 unless the nth arrival with service time X_n is admitted at the cavity queue; this occurs with probability λ̃/(Nλ). Combining the above equations and simplifying as in the previous proof, we obtain the transform. To prove the second statement, note that Φ_1(θ) = F̄(T_1) e^{−θT_1} Φ_X(θ) from equation (13). Substitution and simplification tell us that
Φ_2(θ) = 1/(µ + θ) − 1/(µ − λ + θ) µ F̄(T_1) e^{−θT_1} + λ̃ e^{−θT_2} (µ − λ + θ) e^{−µT_2} (Φ(−µ) − Φ_2(−µ))
when service times are exponentially distributed with rate µ. The result follows by inverting this moment generating function and the fact that Φ_2(0) = F̄(T_2).
Recurrent bacterial pneumonia in Irish Wolfhounds: Clinical findings and etiological studies
Background Increased incidence of bacterial pneumonia (BP) has been reported in Irish Wolfhounds (IWHs), and recurrence of BP is common. The etiology of recurrent pneumonia in IWHs is largely unknown. Objectives To describe clinical findings in IWHs with recurrent BP and investigate possible etiologies. Animals Eleven affected IWHs, 25 healthy IWHs, 28 healthy dogs of other Sighthound breeds, and 16 healthy dogs of other breeds. Methods Prospective cross‐sectional observational study. All affected IWHs underwent thorough clinical examinations including thoracic radiographs, thoracic computed tomography, electron microscopic evaluation of ciliary structure, and bronchoscopy and bronchoalveolar lavage fluid (BALF) cytology and culture. Serum and BALF immunoglobulin concentrations were measured using an ELISA method, and peripheral blood lymphocyte subpopulations were analyzed using flow cytometry. Esophageal function was assessed by fluoroscopy (n = 2). Results Median age of onset was 5.0 years (range, 0.4‐6.5 years), and when presented for study, dogs had experienced a median of 5 previous episodes of BP (range, 2‐6). The following predisposing factors to BP were detected: focal bronchiectasis (10/11), unilateral (2/9) and bilateral (1/9) laryngeal paralysis, and esophageal hypomotility (2/2). Local or systemic immunoglobulin deficiencies or primary ciliary defects were not detected. Conclusions and Clinical Importance Recurrent BP affects mostly middle‐aged and older IWHs without any evident immune deficit or primary ciliary defects. Focal BE was a frequent finding in affected dogs and likely contributed to the development of recurrent respiratory infections. Laryngeal and esophageal dysfunction identified in a minority of dogs may contribute to recurrent BP.
flora, which emphasizes the importance of predisposing factors in the development of BP. [3][4][5] Several predisposing factors to the development of BP have been described in dogs, including infections with respiratory viruses, ciliary defects, an immune deficit, and conditions predisposing to aspiration, such as laryngeal dysfunction, decreased esophageal motility, recent anesthesia, or neurological disease. 3,[6][7][8][9][10][11][12][13][14][15][16][17][18] An increased incidence of BP has been reported in the Irish Wolfhound (IWH), and recurrent BP has been identified frequently in this breed. [19][20][21] A recent questionnaire-based study described at least 1 episode of pneumonia in 37% of IWHs, and the majority of these dogs (53%) experienced recurrent episodes. 19 Bronchopneumonia is also 1 of the most common causes of death in IWHs along with neoplasia, cardiac disease, and musculoskeletal disorders. 19,[22][23][24] Additionally, a significantly shorter life span has been reported in IWHs with a history of pneumonia, indicating that episodes of BP are severe in this breed, and death as a consequence of BP is a notable problem. 19 The etiology of this breed predisposition is not well established. A retrospective study suggested aspiration as an etiology based on the acute onset of respiratory signs and radiographic changes in the dependent lung lobes. 20 However, a predisposing factor to aspiration was identified only in a minority (16%) of IWHs. 20 Supporting aspiration as a possible etiology, another study reported megaesophagus in a small number (9%) of IWHs with recurrent BP. 19 To our knowledge, prospective studies examining the etiology of recurrent BP in IWHs are lacking.
A unique rhinitis and bronchopneumonia syndrome (RBPS) has been described in young IWHs and is characterized by variable rhinorrhea present mostly from birth, accompanied by recurrent BP. [25][26][27] The clinical picture of RBPS resembles primary ciliary dyskinesia or primary immune deficiency. 25 These diseases have not been identified in affected dogs, but the possibility of immunoglobulin A (IgA) deficiency has been suggested. 25,28 Pedigree analysis of IWHs with RBPS has identified shared ancestors, suggesting a hereditary component in this disease. 25 Currently, it is still unclear whether recurrent BP in IWHs represents a disease entity distinct from RBPS.
Our aim was to describe the clinical as well as diagnostic imaging features in IWHs with recurrent BP and to investigate possible etiologies.
| Study design
This study was conducted as a prospective cross-sectional observational study. IWHs with a history of recurrent BP were eligible for inclusion. Dogs were included in the study only between pneumonia episodes, when they were clinically healthy and not receiving antimicrobial treatment.
| Study population
Privately owned healthy IWHs >6 years of age with no history or clinical findings suggestive of previous or current BP, as well as dogs of other Sighthound breeds and dogs of various other breeds, were recruited as healthy controls. These dogs had no clinical signs of illness and had normal physical examination findings as well as normal hematology and serum biochemistry findings. Additionally, 6 healthy purpose-bred laboratory Beagle dogs (normal physical examination findings, blood hematology, serum biochemistry and arterial blood gas analysis results, unremarkable thoracic radiographs and bronchoscopy findings, as well as a negative bacterial culture in bronchoalveolar lavage fluid [BALF]) were included as healthy controls for BALF comparisons.
| Diagnostic testing and sample collection
All dogs underwent a full clinical examination, and venous blood samples for hematology, serum biochemistry, immunoglobulin measurements, and lymphocyte differentiation were obtained. Additionally, in IWHs with recurrent BP and in healthy laboratory Beagles, thoracic radiographs (left and right laterolateral and ventrodorsal views) and fecal samples (3 consecutive days) were obtained, and arterial blood gas analysis for partial pressures of oxygen (PaO 2 ), carbon dioxide (PaCO 2 ), and alveolar-arterial oxygen gradient was performed. Laryngeal evaluation was performed in IWHs with recurrent BP under light anesthesia after IV butorphanol (Butordol 10 mg/mL, Intervet International B.V., Boxmeer, the Netherlands) and propofol (PropoVet Multidose 10 mg/mL, Fresenius Kabi AB, Uppsala, Sweden). Movement of the arytenoid cartilages was observed until either normal movement was observed or the dog was too awake to tolerate the examination. 29 Thoracic computed tomography (CT) was performed under general anesthesia in intubated patients during an expiratory pause with a helical scanner (Somatom Emotion Duo, Siemens Germany, and GE LightSpeed VCT 64, GE Healthcare, Fairfield, Connecticut). The CT examination was performed first in a dorsal recumbency and then in ventral recumbency. After the CT examination, bronchoscopy was performed with the dog in ventral recumbency using a 4.9-mm flexible endoscope (GIF-N180, Olympus Europa SE&Co. KG, Hamburg, Germany), and airway samples for cytology and semiquantitative bacterial culture were obtained by weight adjusted bronchoalveolar lavage (BAL). 30 After BAL, ciliary biopsy specimens were obtained from the distal trachea using a single-use endoscopic biopsy forceps, placed into a buffered glutaraldehyde solution, and shipped to a veterinary diagnostic pathology service (University of Liverpool, Neston, UK) for electron microscopy.
Evaluation of esophageal function was performed at a separate appointment by a fluoroscopic (BV Libra C-arm, Philips Medical Systems, Eindhoven, the Netherlands) swallow study using barium sulfate (Mixobar Colon 1 g/mL, Bracco Imaging S.p.A, Colleretto Giacosa, Italy) mixed in canned food (1:10) for those dogs in which signs suggestive of esophageal dysfunction were identified in the history or during the aforementioned investigations. The study was performed in awake standing animals.
Thoracic radiographs and CT images were assessed by the same radiologist (A.K.L.), who was blinded to the patient data. The presence and severity of bronchiectasis (BE) were assessed by using previously established criteria. 31 Each lung lobe was assessed for the presence of BE using transverse images, and the bronchoarterial (BA) ratio was measured at several locations, including at least 1 normal-appearing central and peripheral airway in each lung lobe as well as all abnormally wide-appearing airways. The largest BA ratio was recorded for each lung lobe.
| Sample handling and analysis
Hematology, serum biochemistry, arterial blood gas, and fecal analyses, as well as cytological evaluation and bacterial cultures of respiratory samples (semiquantitative aerobic bacterial culture and Mycoplasma spp. culture), were performed as previously described. 32 Swab samples were obtained from mucosal membranes (oral mucosa, nares, and perineum) to screen for methicillin-resistant Staphylococcus pseudintermedius (MRSP) colonization and were processed as described previously. 33 Serum and BALF samples obtained for immunoglobulin analysis were frozen immediately and stored at −80°C until analysis. 34 Immunoglobulins A, M, and G (IgA, IgM, and IgG) were measured in serum and in BALF with ELISA kits for canine samples (Bethyl Laboratories Inc., Montgomery, Texas). [35][36][37] Serum and BALF urea concentrations were measured with a clinical chemistry analyzer (Kone Specific, Thermo Fisher Scientific, Vantaa, Finland) using an enzymatic method (UREA UV 250, bioMérieux SA, Marcy l'Etoile, France), and the proportion of epithelial lining fluid (ELF) in the BALF was calculated from the serum and BALF urea measurements as follows: proportion of ELF = (concentration of urea in BALF / concentration of urea in serum) × 100%, as described previously. 30 Epithelial lining fluid immunoglobulin concentrations were calculated by using the known proportion of ELF in BALF.
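For clarity, the urea-based dilution correction amounts to the following small calculation; the numbers used here are made up for illustration and are not study data.

```python
def elf_immunoglobulin(ig_balf, urea_balf, urea_serum):
    """Correct a BALF immunoglobulin concentration for dilution with the urea method.

    proportion of ELF in BALF = urea_balf / urea_serum
    concentration in ELF      = ig_balf / proportion
    The two urea values must be expressed in the same units.
    """
    proportion_elf = urea_balf / urea_serum
    return ig_balf / proportion_elf

# Illustrative values only: IgA 0.05 g/L in BALF, urea 0.12 mmol/L in BALF and 6.0 mmol/L in serum.
print(elf_immunoglobulin(0.05, 0.12, 6.0))   # 2.5 g/L in epithelial lining fluid
```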
Fresh EDTA blood samples were stained with monoclonal antibodies to canine lymphocyte cell surface antigens (fluorescent mouse/rat anti-dog CD3, CD4, CD8, CD21, and MHC class II antibodies) as described previously (AbD Serotec, Oxford, United Kingdom). 38,39 Briefly, 100 μL aliquots of fresh EDTA blood were exposed to 3 different antibody combinations, using 5 μL of each antibody. A 4th 100 μL aliquot of EDTA blood was not exposed to antibodies. Additionally, aliquots of EDTA blood were stained with each single antibody separately and used as controls. Tubes were incubated for 30 minutes in the dark, and red blood cells were lysed with a commercial erythrocyte lysing buffer (Erythrolyse Red Blood Cell Lysing Buffer, AbD Serotec, Oxford, United Kingdom). Cells were washed with a washing solution (phosphate-buffered saline with 1% bovine serum albumin), and 0.4% paraformaldehyde was used as a cell-fixing solution. 40 Samples were analyzed within 48 hours of staining with a BD FACSAria II flow cytometer (BD Biosciences, San Jose, California) and BD FACSDiva software (BD Biosciences, San Jose, California). Lymphocytes were identified using an electronic gate based on cell size and granularity (forward and side-angle light scatter properties). A minimum of 50 000 events was recorded for each preparation. Absolute concentrations of lymphocyte subpopulations were calculated from the hematology analysis results in combination with the flow cytometry data. P values <.05 were considered statistically significant.
All IWHs with recurrent BP were examined between episodes of BP when they were clinically healthy. At the time of inclusion, 1 dog was receiving prednisolone (0.08 mg/kg PO q24h) and 3 dogs were receiving pain medication for orthopedic problems (carprofen, mavacoxib, and gabapentin).
| Ethical approval and owner consent
Affected IWHs had experienced their first BP at a median age of 5.0 years (range, 0.4-6.5 years). In 2 episodes of BP, the owners reported that the first antimicrobial was changed to another before clinical signs resolved. Owners reported full recovery in 43/47 episodes of BP. In 2 episodes of BP, clinical signs relapsed immediately after antimicrobials were discontinued, and in 2 episodes of BP, a mild cough persisted afterward.
| Review of patient records and radiographs
Patient records and radiographs from referring veterinarians were available from 8/11 dogs for retrospective review. In the remaining 3/11 dogs, the referring veterinarian had established a diagnosis of BP, and the acute onset of clinical signs (fever, tachypnea, dyspnea, and cough) as well as rapid response to antimicrobial treatment were highly supportive of the diagnosis. Radiographs were obtained at the referring veterinarian in 17/47 episodes of previous BP. Especially when BP recurred frequently, the diagnosis was based on typical clinical signs and increased serum C-reactive protein concentration. 32
| Clinical findings
Clinical examination findings included normal respiratory rate and character of breathing in all dogs. Lung auscultation was normal in 9/11 dogs and mild crackles were detected ventrally in 2/11 dogs.
None of the dogs coughed spontaneously, and a mild cough was provoked by tracheal palpation in 5/11 dogs. Cardiac auscultation, heart rate, and rhythm were normal in all dogs.
The results of blood hematology are presented in Table 1. Arterial blood gas analysis results are presented in Table 2. Fecal analyses were negative for lungworms and intestinal parasites in all Beagles and IWHs with recurrent BP. Three of 11 of the affected IWHs were found to be carriers of MRSP in their mucosal membranes.
Results of BALF cytology analysis are presented in Table 3
| Electron microscopy of ciliary biopsies
Changes indicating primary ciliary dyskinesia were not observed in any of the affected IWHs (Figure 3). Small numbers of compound cilia
(<10% of examined cilia) were detected in 10/11 dogs. Additionally, rod-shaped bacteria were detected on the luminal aspect of cells in 1 dog that had negative bacterial culture in BALF.
| Laryngeal and esophageal function
Laryngeal function was evaluated in 9/11 dogs. Normal function was observed in 6/9 dogs, 2 dogs had unilateral laryngeal paresis (grade 2), and bilateral laryngeal paralysis (grade 3) was diagnosed in 1 dog. 29 None of the affected IWHs had findings suggestive of megaesophagus in thoracic radiographs. Clinical signs were suggestive of esophageal dysfunction in 2 affected IWHs (regurgitation and eructation), and a fluoroscopic swallow study was performed in these dogs. The pharyngeal phase was normal, but esophageal transit time was longer than normal in both dogs. 42 In a 2.6-year-old intact male IWH with daily regurgitation, esophageal transit time was prolonged because of the food bolus remaining in the cervical esophagus for 10 seconds. In a 5.8-year-old intact female with daily eructation and occasional regurgitation, esophageal transit time was >4 minutes. In this dog, the diameter of the esophagus was estimated as being normal, but peristaltic waves were completely missing and food material accumulated in the thoracic esophagus during the entire study period. Both of these dogs had normal laryngeal function. Based on these observations, it is likely that recurrent BP represents a distinct disease entity.
| Follow-up
Affected dogs appeared to recover clinically from BP episodes: they were mostly free of clinical signs between episodes.
Table caption: The prevalence and characteristics of bronchiectasis (BE) in high-resolution computed tomography studies in Irish Wolfhounds with recurrent bacterial pneumonia. Bronchiectasis was defined as lack of tapering of the bronchial lumen toward the periphery, identification of visible bronchi within 1 cm of the lung margin, or a bronchoarterial (BA) ratio exceeding 2.0. Cylindrical BE was characterized as dilatation of the bronchi without tapering toward the periphery. A saccular BE presented as a focal saccular dilatation or a cyst-like structure, and a varicose BE was described as a focally dilated bronchial segment interposed between normal segments.
A case of ciliary dyskinesia has also been reported in an aged dog. 47,48 Therefore, these possibilities also were assessed in our study.
Predisposition to aspiration because of laryngeal or esophageal dysfunction was identified in some of the affected IWHs and could have contributed to the development of recurrent BP in these dogs.
However, predisposition to aspiration was not a consistent finding in affected dogs and the connection between subclinical laryngeal paralysis and recurrent BP was not fully established; comparison with the prevalence of subclinical laryngeal paralysis in healthy IWHs was not done. Furthermore, it has been shown previously that laryngeal dysfunction also may be detected under anesthesia in asymptomatic dogs. 29 Retention of food in the esophagus was severe in 1 affected to the protocol would have been ideal because significant differences in swallow metrics have been noted among liquid, puree, and kibble meals. 42 However, significant differences in esophageal transit time or the prevalence of food bolus retention have not been noted, and therefore adding liquid or kibble to the protocol was considered unlikely to have changed the assessment. 42 Defects in the ciliary ultrastructure suggestive of primary ciliary dyskinesia were not detected in any of the affected IWHs. A small number of compound cilia were commonly noted and most likely represent secondary changes caused by repeated BP. 50 However, normal ultrastructure of cilia does not fully eliminate a functional deficit; ciliary dyskinesia without typical ultrastructural changes has been reported rarely in both humans and dogs. 51,52 Ciliary function could be further assessed by using scintigraphic studies and measuring ciliary beat frequency, but such studies were not done in our dogs. 51,52 However, ciliary dyskinesia was considered unlikely in these dogs, because the purulent nasal discharge typical of ciliary dyskinesia was lacking, and accumulation of mucus in the bronchial tree was not noted during bronchoscopy. 6,[52][53][54] Immunoglobulin deficit has been suggested to underlie RBPS in IWHs and therefore local and systemic immunoglobulin concentrations were evaluated. 25 to severe (BA ratio, >3.0), and BE is likely to be an important factor predisposing to further infections. 31 Similarly, as reported previously, thoracic CT examination was more sensitive in detecting BE (10/11) than was thoracic radiography (0/11) or bronchoscopy (4/11). 64 Because BP is an acute potentially life-threatening bacterial infection, it needs to be treated with antimicrobials for animal welfare. Because affected IWHs typically respond rapidly to antimicrobial treatment and appear to recover clinically, owners tend to continue treating BP episodes despite the recurrent nature of the disease. The etiology of recurrent BPs still is largely unknown, and the methods of prevention, therefore, also are limited. Future research efforts could be aimed at identifying possible genetic factors connected with this disease as well as further investigating possible local immune deficits or factors leading to marked bronchial remodeling.
An inherent limitation in our study was that episodes of previous BP were mostly diagnosed and treated by the referring veterinarian and therefore could not be verified using uniform criteria, and patient records were available for retrospective review in only 8/11 affected IWHs. Additionally, healthy IWHs did not undergo all examinations performed in affected dogs and therefore CT, bronchoscopy, and BALF findings could not be compared with those from the healthy IWHs.
To conclude, recurrent BP affects mostly middle-aged and older IWHs.
Runners Employ Different Strategies to Cope With Increased Speeds Based on Their Initial Strike Patterns
In this paper we examined how runners with different initial foot strike patterns (FSP) develop their pattern over increasing speeds. The foot strike index (FSI) of 47 runners [66% initially rearfoot strikers (RFS)] was measured at six speeds (2.5–5.0 ms−1), with the hypotheses that the FSI would increase (i.e., move toward the fore of the foot) in RFS runners, but remain similar in mid- or forefoot strikers (MFS). The majority of runners (77%) maintained their original FSP with increasing speed. However, we detected a significant (16.8%) decrease in the FSI in the MFS group as a function of running speed, showing changes in the running strategy despite the absence of a shift from one FSP to another. Further, while both groups showed a decrease in contact times, we found a group by speed interaction (p < 0.001), specifically that this decrease was smaller in the MFS group with increasing running speeds. This could have implications for the metabolic energy consumption of MFS runners, typically measured at low speeds for the assessment of running economy.
INTRODUCTION
Foot strike patterns (FSP) describe the location of the first contact area of the foot with the ground (Cavanagh and Lafortune, 1980) during running. At comfortable speeds, runners most commonly strike with the rear part of the foot (∼78%), while the rest strike with the middle or the front part of the foot (Santuz et al., 2016). The two strategies provide very distinct running patterns, exhibiting differences in a plethora of biomechanical characteristics (Hayes and Caplan, 2012;de Almeida et al., 2014;Almeida et al., 2015;Strauts et al., 2015;Valenzuela et al., 2015). For instance, it is well accepted that runners that strike the ground with the heel exhibit a lower peak vertical ground reaction force, lower external dorsiflexion moment and range of motion, while having a higher loading rate of the vertical ground reaction forces and knee extension moment in comparison to runners with a more anterior point of force application (Almeida et al., 2015;Valenzuela et al., 2015). Moreover, certain FSPs have been linked to different injuries (Cheung and Davis, 2011;Daoud et al., 2012;Rice et al., 2013) and have been reported to affect performance (Di Michele and Merni, 2014;Ogueta-Alday et al., 2014;Ekizos et al., 2018).
The common strategy employed by humans to increase speed until ∼7 ms −1 is by exerting larger vertical ground reaction forces (Arampatzis et al., 1999;Weyand et al., 2000), which leads to increments in step length (Mercer et al., 2002;Dorn et al., 2012). Ground reaction forces subsequently increase the loading on the human system and have to be produced in shorter contact times that are associated with increasing velocities (Gatesy and Biewener, 1991;Arampatzis et al., 2000). Apart from the overall higher loading, the transition from a lower to a higher speed taxes the human system with an increased oxygen consumption. However, humans maintain similar energy costs (J/kg per meter distance) in a range of running speeds (Margaria et al., 1963;Carrier et al., 1984;Bramble and Lieberman, 2004). From a mechanical point of view, it has been suggested that these increases in running speed are achieved through a repositioning of the foot in relation to the ground. It is suggested that runners gradually adapt their FSP in order to modify the impact of loading or energy costs (Cheung and Davis, 2011;Cheung and Rainbow, 2014;Di Michele and Merni, 2014) and gradually employ a more anterior point of force application at first contact (Keller et al., 1996). However, previous reports did not find a consistent behavior regarding the changes of FSP with increasing speeds. Some studies report that the point of force application moves to the anterior with increasing speeds (Keller et al., 1996;Wang et al., 2018), but this alteration was not confirmed by other studies (Breine et al., 2014, 2019; Cheung et al., 2017). Furthermore, Forrester and Townend (2015), using the foot strike angle (i.e., angle of the foot with respect to the ground in the sagittal plane) as assessment parameter to classify FSP, found that most runners did not change their initial foot strike angle with increasing running speeds. However, they also identified a cluster of rearfoot strike (RFS) runners that showed a decrease in foot strike angle indicating a trend to midfoot strike (MFS) patterns at higher speeds (Forrester and Townend, 2015). It seems, therefore, that runners are using diverse strategies concerning the FSP behavior to cope with increasing speeds.
Until now, there is no established consensus regarding the changes in FSP with increasing speeds. FSP is a discrete rather than a continuous variable (Breine et al., 2014) and thus changes within a given strike pattern may not be captured when examining only the possible transfer from RFS to MFS and vice versa. Thus, a numerical continuous parameter like the foot strike index (FSI) may be a more appropriate way to investigate the modulation of FSP in different running speed conditions (Breine et al., 2014;Santuz et al., 2016). At speeds that can be sustained for longer periods of time, the human system is more comfortable exhibiting its preferential or more familiar FSP. When increasing speed, the system is forced to accommodate the higher loads, and alterations in FSI may be associated with the runners' FSP at the comfortable speed. Non-rearfoot strikers, for instance, have a lower margin to increase their FSI anteriorly compared to rearfoot runners. Consequently, runners with a non-rearfoot strike pattern may retain a similar FSI throughout increasing running speeds. It is therefore possible that the strategies of rearfoot and non-rearfoot strike runners could develop differently as speed progresses, and particularly an alteration of FSI toward the anterior only in rearfoot runners could be expected. In the current study, we examined the effect of speed on the FSI separately for runners with a rear and non-rear foot strike pattern. We hypothesized (1) a change of FSI in runners with a rear strike pattern toward the fore of the foot, leading to a higher percentage of non-rear foot runners with increasing running speed, and (2) that runners with an initial non-rear strike pattern would maintain the same strike pattern strategy.
Experimental Design
In the current study 47 young adults who were recreational runners (37 males and 10 females, training sessions per week: 3.7 ± 1.6, training duration per week 5.1 ± 2.8 h) were recruited (age: 27.8 ± 4.8 years, height: 177.3 ± 8.6 cm, mass 70.9 ± 9.2 kg). For each participant the measurement took place on a single day. None of the participants had any neuromuscular or musculoskeletal impairments at the time of the measurements. Moreover, in the 6 months prior to the day of the measurements, none of them had suffered any injury to the lower limbs. All participants gave informed consent and ethical approval was obtained from the appropriate committee of the Humboldt-Universität zu Berlin (HU-KSBF-EK_2018_0013).
For the measurements we used a treadmill (mercury, H-pcosmos Sports & Medical GmbH, Nussdorf, Germany) with an integrated pressure plate operating at 120 Hz (FDM-THM-S, zebris Medical GmbH, Isny im Allgäu, Germany). After a self-selected warm-up, the participants ran shod at six predefined sub-maximal fixed velocities: 2.5, 3.0, 3.5, 4.0, 4.5, and 5.0 ms −1 . The chosen speeds were comfortably attainable by all participants for short periods of time. While in non-homogeneous groups relative intensity can provide methodological advantages, the homogeneity presented in our cohort meant we could use fixed speeds. As such, possible differences in the relative intensity would not skew our results, and comparability with other studies is increased. The duration of the run at each speed was 2 minutes, of which the first minute was used for familiarization to the specific speed and the second minute was extracted for subsequent analysis.
To calculate the contact time of each step we used the pressure plate data from the treadmill. The time that each foot was in contact with the ground was calculated as the time difference between the first non-zero pressure sample after the swing phase and the first zero in the pressure data right after toe-off. We used the average of all contact times of both feet over all steps per trial per person for the statistical analysis. Cadence was calculated from the number of steps detected over the whole trial period. Subsequently, step length, step time and flight time were calculated based on these values. The duty factor was calculated as the ratio of contact time over step time.
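As a rough illustration of this step detection, the sketch below estimates contact times from a single-foot pressure signal sampled at 120 Hz; the function name, threshold handling, and array layout are assumptions made for illustration, not the authors' actual processing code.

```python
import numpy as np

def contact_times(pressure, fs=120.0, threshold=0.0):
    """Estimate ground contact times (s) from a single-foot pressure signal.

    A contact starts at the first sample whose pressure rises above the
    threshold after a swing phase and ends at the first sample that
    returns to the threshold right after toe-off.
    """
    on_ground = pressure > threshold
    # Rising (+1) and falling (-1) edges of the binary contact signal
    edges = np.diff(on_ground.astype(int))
    touchdowns = np.where(edges == 1)[0] + 1
    toeoffs = np.where(edges == -1)[0] + 1
    # Keep only toe-offs that follow the first detected touchdown
    if touchdowns.size:
        toeoffs = toeoffs[toeoffs > touchdowns[0]]
    n = min(len(touchdowns), len(toeoffs))
    return (toeoffs[:n] - touchdowns[:n]) / fs

# Example: mean contact time for one trial, averaged over both feet
# mean_ct = np.mean(np.concatenate([contact_times(left), contact_times(right)]))
```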
The FSP was numerically quantified using the pressure distributions from the instrumented treadmill, through the strike index. The FSI is defined as the distance from the heel to the center of pressure at first impact, relative to the total foot length, and was calculated based on the recorded foot pressure distribution using a validated custom algorithm (Santuz et al., 2016). In short, after physically measuring the shoe length (to account for incomplete footstrikes), the algorithm compares it to the calculated length (i.e., using the pressure plate data) and corrects the footstrikes when necessary (Santuz et al., 2016). The first recorded data (i.e., initial contact) at touchdown of each foot in every step are then referenced to the full length of the foot. The values, therefore, range from the most posterior part of the heel representing 0 up to the most anterior part of the toes representing 1 (non-dimensional). In our paper we aimed to show the behavior of the system as a whole and thus we used the average of the strike indexes of both feet over all steps per trial per person. The symmetry in the FSI between the left and right foot was high (R2 = 0.887), and differences were not statistically significant (p > 0.05) at all investigated speeds.
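The following is a minimal sketch of how a strike index of this kind can be computed from the center of pressure at initial contact relative to the heel-toe axis; it omits the shoe-length correction described above, and all names are illustrative rather than taken from the validated algorithm of Santuz et al. (2016).

```python
import numpy as np

def strike_index(cop_at_contact, heel_xy, toe_xy):
    """Strike index: distance from the heel to the initial center of pressure,
    projected onto the heel-toe axis and normalized by foot length.
    Returns a value in [0, 1]; <0.33 rearfoot, 0.33-0.66 midfoot, >0.66 forefoot.
    """
    foot_axis = np.asarray(toe_xy, dtype=float) - np.asarray(heel_xy, dtype=float)
    foot_length = np.linalg.norm(foot_axis)
    rel = np.asarray(cop_at_contact, dtype=float) - np.asarray(heel_xy, dtype=float)
    fsi = np.dot(rel, foot_axis) / foot_length**2
    return float(np.clip(fsi, 0.0, 1.0))
```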
Generally, footstrikes are divided into three distinct categories based on where the first impact is located in relation to the whole foot. An RFS is considered one with a strike index lower than 0.33, where first touch occurs at the heel of the foot; a midfoot strike provides values between 0.33 and 0.66 (approximate point of the metatarsophalangeal joints); and a forefoot strike provides values above 0.66 (Cavanagh and Lafortune, 1980;Hasegawa et al., 2007). Because forefoot strikers have a low prevalence in the general population (Hasegawa et al., 2007;Larson et al., 2011;Santuz et al., 2016), in this study the participants exhibiting a mid- or a forefoot strike were grouped together as MFS for all further analysis.
Statistics
We defined two groups based on the FSI at the slowest running speed (i.e., 2.5 ms −1 ). In that way an RFS (n = 31) and an MFS group (n = 16) were identified. To further examine the differences and development of FSI, contact time, cadence, step length, step time, flight time and duty factor with speed based on the identified groups, we performed a two-way repeated measures ANOVA. Speed was selected as a 6-level within-subject factor and group (RFS, MFS) as the between-subject factor. The level of significance was set to α = 0.05.
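A hedged sketch of how the described mixed-design ANOVA (speed as within-subject factor, group as between-subject factor) could be run in Python with the pingouin package is shown below; the long-format file and column names are assumptions.

```python
import pandas as pd
import pingouin as pg

# Long format: one row per participant x speed,
# with columns 'subject', 'group' ('RFS' or 'MFS'), 'speed', 'FSI'
df = pd.read_csv("fsi_long.csv")

# Two-way mixed ANOVA: speed (within) x group (between)
aov = pg.mixed_anova(data=df, dv="FSI", within="speed",
                     subject="subject", between="group")
print(aov)

# Post hoc: repeated measures ANOVA within each group
for name, sub in df.groupby("group"):
    rm = pg.rm_anova(data=sub, dv="FSI", within="speed", subject="subject")
    print(name, rm)
```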
RESULTS
Investigating the effect of running speed for the FSI we found an interaction between the two groups [F(2,5) = 5.2, p = 0.005; Figure 1]. The post hoc analysis by means of a repeated measures ANOVA revealed a significant decrease in the FSI in the MFS group [F(2,5) = 4.3, p = 0.018] and no significant differences in the RFS group [F(1,5) = 2.5, p = 0.104].
Contact times decreased significantly with increasing velocities [F(2,5) = 799.8, p < 0.001] in both the RFS and MFS groups, while between groups there was a significant effect [F(1,5) = 8870, p < 0.001] with a clearly higher contact time in the RFS group (Figure 2). We found a group by speed interaction [F(2,5) = 10.4, p < 0.001] in the contact time. In the post hoc analysis, both groups exhibited significantly decreased contact times with increasing velocities [RFS: F(1,5) = 718.4, p < 0.001; MFS: F(1,5) = 244.9, p < 0.001]; therefore, the interaction indicates a greater decrease of contact time in the RFS group.
DISCUSSION
In the current study, we examined the effect of speed on the FSI and contact time for RFS and MFS runners. Out of all 47 investigated participants, 31 (66%) were rearfoot striking at the initial examined speed (i.e., 2.5 ms −1 ) and 16 (34%) were mid- or forefoot striking. Only six participants (13%) exhibited a change in the FSP: three changed their FSP from RFS to MFS and three changed to an RFS while starting with an MFS. The rest of the participants did not change their initial FSP with increasing running speed. Further, we detected an overall decrease of FSI in the MFS group at higher running velocities. Both groups significantly decreased contact times with increasing speeds; however, the decrease was higher in the RFS group. We hypothesized an increase of FSI in the RFS runners leading to a higher percentage of MFS runners with increasing running speed. Since only three participants changed from RFS to MFS, our first hypothesis was rejected. However, based on our use of the FSI, it was also shown that RFS runners do not alter their point of first contact within their chosen pattern either. This means they are maintaining a similar way of striking the ground throughout the examined speeds. In bipedal locomotion contact times decrease with increasing velocities (Gatesy and Biewener, 1991;Arampatzis et al., 2000) and the RFS pattern is reported to have longer contact times than MFS (Hayes and Caplan, 2012;Di Michele and Merni, 2014;Ekizos et al., 2018). This could naturally lead participants that use RFS at lower velocities to change their strike pattern toward the fore of the foot. Dynamic stability during locomotion is a sine qua non concept, and acute changes in the mechanics of running can cause instabilities in the system. In previous studies we found that acute changes in foot strike patterns (i.e., alteration from RFS to MFS) decrease human dynamic stability during running (Ekizos et al., 2017, 2018). Maintenance of locomotor stability might therefore be a reason for the preservation of the foot strike patterns despite the increased running speed. Although contact time decreased significantly with the increased speed, the majority of the investigated RFS runners (87%) maintained the same FSP and minimized the changes in the FSI.
Midfoot strike runners also maintained their initial FSP throughout the examined speeds. However, in the MFS runners we found a significant (16.8%) decrease in the FSI with increasing speeds, which resulted in smaller differences in the FSI between RFS and MFS. This highlights that while the overall FSP did not change, the modification of the FSI within the MFS pattern indicates changes in the running strategy. At the same running speed, a lower FSI is associated with a longer contact time (Gruber et al., 2013;Di Michele and Merni, 2014;Ekizos et al., 2018), and therefore the decrease of the contact time with increased speed was smaller in MFS. The consequence was a reduction of the differences in the contact time between RFS and MFS runners with increasing speed. Both groups increased cadence and step length, and decreased step time, in a similar way (Table 1). The flight time, and consequently the duty factor, on the other hand showed a different trend between groups, indicating a greater time on the ground for the RFS runners with increasing speeds. There is evidence that the rate of metabolic energy consumption per body weight of running is inversely proportional to contact time (Kram and Taylor, 1990;Kram, 2000). Therefore, the smaller decrease of contact time in MFS could affect the energy cost of running.
The higher FSI in the MFS group results in distinct distributions of the muscular output in the lower extremities between RFS and MFS runners [i.e., higher moments at the ankle and lower moments at the knee joint for MFS (Kulmala et al., 2013;Kuhman et al., 2016)], and leads to improvements in the cost coefficient (Ekizos et al., 2018). However, the improved cost coefficient due to the higher FSI in MFS did not improve running economy because of the lower contact time and thus greater rate of ground reaction force development (Ekizos et al., 2018). Traditionally, running economy is investigated at running speeds between 2.5 and 4 ms −1 (Heise and Martin, 2001;Arampatzis et al., 2006;Albracht and Arampatzis, 2013;Gruber et al., 2013;Craighead et al., 2014;Bohm et al., 2019) or as a percentage of the lactate threshold (Fletcher et al., 2009;Andersson et al., 2021), and several studies reported no differences in running economy between RFS and MFS (Gruber et al., 2013;Ekizos et al., 2018). At these speeds the average difference in contact time between the investigated RFS and MFS runners was ∼9.6%; it was reduced to 6.5% at the 5.0 ms −1 speed, which may decrease the negative effect of the contact time on running economy in MFS. Elite distance runners, for instance, who commonly employ speeds >5.0 ms −1 (Hoogkamer et al., 2017), might have energetic benefits using MFS patterns. At the least, our findings indicate that the investigation of running economy between RFS and MFS should be extended to higher running speeds.
Here, we found that most runners maintain their initial FSP with increasing running speed and that MFS runners even move the point of force application to the posterior. This means that up to 5.0 ms −1 it is possible to increase the rate of force generation without a transition to a more anterior point of force application. However, based on our results we cannot answer how the FSI or other parameters will develop at speeds above 5.0 ms −1 . Future investigations could improve our understanding concerning the effects of FSP on running mechanics and energetics at increased speeds by considering measurements of metabolic energy consumption, lower leg kinetics and muscle mechanics (Arampatzis et al., 2006;Gruber et al., 2013;Bohm et al., 2019, 2021), as well as by including runners who are accustomed to speeds higher than 5.0 ms −1 .
CONCLUSION
Although the majority of runners maintained their original FSP with increasing speed, we found that RFS and MFS runners employed different strategies to cope with this increase. Specifically, RFS runners maintained a similar FSI throughout the examined speeds, but MFS runners exhibited a significant reduction in the FSI, without this reduction being enough to change the FSP. Compared to RFS, the MFS group also decreased contact times more slowly with increasing speeds, which could affect the measurement of energy consumption in MFS runners when it is assessed only at slow speeds.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Humboldt-Universität zu Berlin. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
AE designed the study, carried out the experiments, and drafted the manuscript. AS designed the study, carried out the experiments, and edited the manuscript. AA designed the study and drafted the manuscript. All authors gave final approval for publication.
USE OF INDIGENOUS BENEFICIAL BACTERIA (Lactobacillus spp.) AS PROBIOTICS IN SHRIMP (Penaeus monodon) AQUACULTURE
Keywords: Probiotic; Shrimp; Lactobacillus; Antagonistic effect; Aquaculture

The present study was conducted to examine in-vitro the antagonistic effect of Lactobacillus spp. against the shrimp-pathogenic bacterium Vibrio harveyi. For this purpose, shrimp samples were collected from three different ghers at Batiaghata upazilla, Khulna. Gills and intestines were taken from the samples to determine the loads of Lactobacillus spp. and Vibrio spp. The results revealed that the load of Lactobacillus spp. was higher than that of Vibrio spp. in both gills and intestines; the gills also contained a higher load of Vibrio spp. than the intestines. V. harveyi was separated from the isolated Vibrio spp. with different biochemical tests: Gram stain, motility test, indole test, VP test, MR test, arginine dihydrolase, salt tolerance test, growth at different temperature ranges, and colony color on TCBS agar media. The isolated V. harveyi was subjected to an in-vitro challenge test, in which the potential antagonistic effect of Lactobacillus spp. against V. harveyi was assessed at 0, 4, 8, and 12 hours of treatment. An interesting finding was that, over time, the load of V. harveyi was reduced gradually, and the lowest load was obtained after 12 hours of probiotic inoculation. The present study revealed an excellent in-vitro antagonistic probiotic effect of Lactobacillus spp. on V. harveyi. Therefore, the results suggest that probiotic treatment might be an effective alternative to the use of antibiotics in the treatment of bacterial diseases in shrimp aquaculture.
INTRODUCTION
Shrimp aquaculture has been recognized as a profitable business in Bangladesh. In spite of its great potential, this sector is affected by a wide range of microbial diseases, which are among the limiting factors to shrimp production. A dozen Vibrio species are responsible for bacterial disease; in particular, V. harveyi, V. logei, V. alginolyticus, V. pelagicus, V. splendidus, V. vulnificus, V. parahaemolyticus and V. damsella are commonly found as causative agents of vibriosis (Lightner, 1983). The Gram-negative bacterium V. harveyi is the causative agent of luminous bacterial disease and is considered a serious pathogen of larval shrimp in hatcheries (Lavilla-Pitogo et al., 1990; Karunasagar et al., 1994). Both juvenile and adult shrimp can be attacked by this bacterium, causing mass mortalities. Over the years, a variety of antibiotics have been used to control various pathogenic bacteria (Baticados et al., 1990). Disinfectants and antimicrobial drugs (antibiotics) are not easy to buy, and most of the time they are applied in semi-intensive and intensive aquaculture run by wealthier farmers. The situation is different in our country, where most farmers live below the poverty line at the start of the culture period and practice traditional or improved traditional culture systems; it is really difficult for them to afford such extra expenses. Furthermore, the use of antibiotics is not encouraged nowadays, as there is a growing concern about the abuse of antimicrobial drugs not only in human medicine and agriculture but also in aquaculture (Alberman, 1988). Frequent use of chemotherapeutic agents, especially antibiotics, leads to the emergence of resistant strains of bacteria (Akoki, 1975) pathogenic to the animals. If antibiotics or disinfectants are used to kill bacteria, some bacteria will survive because they carry genes for resistance; these will then grow rapidly because their competitors are removed (Moriarty, 1999). Vaccinations to prevent infections have been successful at laboratory scale but are yet to be proven under field conditions.
However, it is an urgent issue to find an alternative to antibiotics that would be easy to obtain and would not harm the environment in any way. A promising alternative to the use of antibiotics for controlling fish diseases is the use of probiotics, or beneficial bacteria. Probiotic bacteria could prevent the establishment of pathogenic bacteria by out-competing them for adhesion and colonization sites in the intestines and other tissues of the animal (Vine et al., 2004a). They could also produce inhibitory substances actively preventing pathogen establishment (Verschuere et al., 2000). When added to rearing water, they may act as bioremediation agents improving water and sediment quality, augment nutrient cycling in the system, and initiate colonization of other beneficial micro-flora, with an overall positive impact on growth rates and productivity (Prabhu et al., 1999). Based on the above background, the present in-vitro study was undertaken to identify beneficial bacteria (Lactobacillus spp.) from shrimp for possible use as probiotics against V. harveyi in infected shrimp.
Sampling and sample size
The present study was conducted on three ghers of Batiaghata upazilla, Khulna. The ghers were randomly selected for collecting shrimp samples. Nine (9) specimens of shrimp, Penaeus monodon (8.10–12.4 cm in length and 13.12–22.6 g in weight), were collected from the three ghers of the study area.
Preparation of stock solution of the target organs
The target organs (gill and intestine) of the samples were separated aseptically and weighed on an electric balance. The organs (gill weights: 0.22–0.34 g; intestinal tract weights: 0.13–0.25 g) were taken into Eppendorf tubes with peptone water (James and Hirsch, 1960) for isolating Lactobacillus spp., while alkaline saline peptone water was used for isolating Vibrio spp. They were then homogenized using a tissue homogenizer, and the homogenized solutions were centrifuged at 3000 rpm for 3 minutes (James and Hirsch, 1960). After centrifugation, the supernatant was collected with a micropipette, transferred into an Eppendorf tube, and preserved in a deep freezer.
Experimental design
Isolation of probiotic bacteria (Lactobacillus spp.)

Lactobacillus spp. was isolated from the gills and intestinal tracts of the collected shrimp samples. The stock solution was diluted (tenfold serial dilution) with peptone water (James and Hirsch, 1960). A 0.1 ml aliquot of a suitable dilution of each stock solution was inoculated on MRS Lactobacillus agar media and incubated at 37°C for 24-48 hours. After incubation, cream-colored colonies with yellow halos were collected and preserved for further experiments.
Isolation of Vibrio spp.
Vibrio spp. was isolated from the gills and intestinal tracts of the shrimp samples collected from the study area, following the ISO method. A 0.1 ml stock solution of each gill and intestinal tract was mixed with 0.9 ml alkaline saline peptone water (ASPW) in sterilized test tubes, and the mixture was incubated at 37 °C for 6 ± 1 h (first selective enrichment). After that, the whole culture of each test tube was transferred into another test tube containing 10 ml ASPW and incubated at 41.5 °C for 18 ± 1 h (second selective enrichment). Tenfold serial dilutions were then prepared, and 0.1 ml of a suitable dilution of each culture was inoculated on thiosulfate citrate bile salts sucrose (TCBS) agar plates. The inoculated TCBS agar plates were incubated at 37 °C, and after 24 ± 3 h of incubation the plates were examined for the presence of typical colonies of presumptive Vibrio spp. (ISO/TS 21872-1, 2007).
Biochemical tests
The isolates were identified at the species level with the use of a biochemical key.
Gram Staining
Gram staining was done to differentiate Gram-positive and Gram-negative bacteria (Cowan and Steel's, 1993).
Indole test
The indole test was performed on a 48 h culture in peptone water by adding about 1 ml of ether, shaking, and running 0.5 ml of Ehrlich's reagent down the side of the tube. A red color in the solvent indicates a positive reaction (Cowan and Steel's, 1993).
Voges-Proskauer (VP) test
The VP test was performed after completion of the methyl red test by adding 0.6 ml of 5% α-naphthol solution and 0.2 ml of 40% aqueous KOH solution; the tube was shaken well, sloped, and examined after 15 min and 1 h. A positive reaction is indicated by a strong red color (Cowan and Steel's, 1993).
Arginine hydrolysis
For arginine hydrolysis, 5 ml of arginine broth was inoculated and, after incubation for 24 h, 0.25 ml of Nessler's reagent was added. Arginine hydrolysis is indicated by the development of a brown color (Cowan and Steel's, 1993).
MR test
For the MR test, the isolated bacteria were inoculated into buffered glucose broth and incubated at 37°C for 48 h. After incubation, a few drops of methyl red solution were added to the culture and the result was read immediately. A red color represents a positive test (Cowan and Steel's, 1993).
Motility test
For the motility test, tubes of motility medium were stab-inoculated to a depth of about 5 mm and incubated at or below the optimum growth temperature. Motile organisms migrate throughout the medium, which becomes turbid; growth of non-motile organisms is confined to the stab inoculum (Cowan and Steel's, 1993).
Salt Tolerance Test
This test was done on nutrient agar media supplemented with varying amounts of NaCl (0%, 1%, 3%, 6%, 8%, and 10%). It was performed to study the salt tolerance range of the isolated species, and the optimum concentration, once determined, was supplemented in the various media required to test their biochemical properties (Cowan and Steel's, 1993).
Growth at different temperature range
The isolated Vibrio spp. was incubated after inoculation at 4°C, 28°C, 37°C, and 55°C and checked for survivability; colony formation was obtained at 28°C and 37°C (Cowan and Steel's, 1993).
In-vitro challenge test and determination of antagonistic activity of the probiotics
After identification of Vibrio harveyi by biochemical tests, one colony of V. harveyi was transferred with an inoculating loop into 0.9 ml peptone water in an Eppendorf tube to prepare a stock solution of V. harveyi (ISO/TS 21872-1, 2007). The stock solution was diluted (tenfold dilution) with peptone water (James and Hirsch, 1960) to prepare the test solution of V. harveyi. Then 0.5 ml of the isolated probiotic solution was mixed with 0.5 ml of the test solution (V. harveyi). A 0.1 ml suitable dilution of the mixture was inoculated on TCBS agar media at subsequent intervals of 4 hours up to 12 hours, and this procedure was done in duplicate. A test solution of V. harveyi without probiotics was also inoculated on TCBS agar media at 0, 4, 8, and 12 hours. All the inoculated TCBS agar plates were incubated at 37°C for 24 ± 3 hours, and a standard plate count was done after incubation.
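For illustration, a standard plate count can be converted to CFU per gram of tissue as sketched below; the helper function and the example numbers are hypothetical and only mirror the dilution scheme described above.

```python
def cfu_per_gram(colonies, dilution_factor, plated_volume_ml,
                 sample_mass_g, diluent_volume_ml):
    """Back-calculate CFU/g of tissue from a standard plate count.

    colonies          : colonies counted on the plate
    dilution_factor   : e.g. 1e-3 for the 10^-3 tube of a tenfold series
    plated_volume_ml  : volume spread on the plate (0.1 ml here)
    sample_mass_g     : mass of gill or intestine homogenized
    diluent_volume_ml : volume of peptone water used for the homogenate
    """
    cfu_per_ml = colonies / (dilution_factor * plated_volume_ml)
    return cfu_per_ml * diluent_volume_ml / sample_mass_g

# Example with made-up numbers: 47 colonies from the 10^-3 dilution,
# 0.1 ml plated, 0.25 g of gill homogenized in 0.9 ml peptone water
print(f"{cfu_per_gram(47, 1e-3, 0.1, 0.25, 0.9):.2e} CFU/g")
```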
Data collection and analysis
Collected data were stored, explored, and analyzed using Microsoft Excel (Microsoft Office, 2007) and the Statistical Package for the Social Sciences (SPSS version 16.0; SPSS, Inc., Chicago, IL). An independent-samples t-test was applied to assess the differences between treatment and control at the 1% significance level using SPSS (version 16.0).
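As a rough Python analogue of the SPSS analysis described above, an independent-samples t-test can be run with SciPy as sketched below; the replicate values are placeholders, not data from this study.

```python
from scipy import stats

# V. harveyi loads (CFU/g) at a given time point, with and without probiotics
control = [2.3e4, 2.1e4, 2.5e4]   # placeholder replicate counts
treated = [1.2e4, 1.1e4, 1.2e4]   # placeholder replicate counts

t_stat, p_value = stats.ttest_ind(control, treated)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}  (compare against alpha = 0.01)")
```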
Figure 2. In-vitro challenge test with isolated probiotics on Vibrio harveyi
The average loads of Lactobacillus spp. in the gills and intestinal tracts of the shrimp samples collected from the 3 experimental ghers were 1.65 × 10⁵, 1.65 × 10⁵, and 1.18 × 10⁵ CFU/g and 4.14 × 10⁴, 2.74 × 10⁴, and 3.10 × 10⁴ CFU/g, respectively (Table 1). The average loads of Vibrio spp. in the gills and intestinal tracts of the shrimp samples collected from the 3 experimental ghers were 1.29 × 10⁴, 1.90 × 10⁴, and 1.40 × 10⁴ CFU/g and 5.44 × 10³, 4.00 × 10³, and 8.49 × 10³ CFU/g, respectively (Table 1). Nine biochemical tests were performed (Table 2) to identify V. harveyi from the isolated colonies of Vibrio spp. The isolated colonies were inoculated in different biochemical media, supplemented with 1% NaCl to provide the optimum condition for growth, and incubated overnight at 30°C. The Gram stain result was observed under total magnification using a light microscope and showed all V. harveyi isolates to be Gram-negative short rods. All V. harveyi isolates showed positive results in the motility test, indole test, and MR test. The majority of V. harveyi isolates (96.7%) were able to grow in the salt tolerance test at 0% to 5%. These bacterial isolates showed inhibition at temperatures of 4°C and 55°C but were able to grow well at temperatures of 28°C and 37°C. Green and yellow colonies were observed on TCBS agar plates.
The biochemical tests were done with 10 colonies of Vibrio spp. From the confirmation tests it was found that 8 of the 10 Vibrio spp. colonies (80%) were V. harveyi. The isolated V. harveyi colonies were then further tested in-vitro to determine the antagonistic effect of Lactobacillus spp. on V. harveyi.
The results of the in-vitro challenge test with and without probiotics are presented in Figure 2. In-vitro challenge tests were performed to investigate the antibacterial effect of the isolated probiotics (Lactobacillus spp.) on the Vibrio harveyi of infected shrimp. Without probiotics, the average load of V. harveyi at hour 0 was 4.69 × 10³ CFU/g, and it was 2.30 × 10⁴, 2.36 × 10⁵, and 4.24 × 10⁵ CFU/g at the 4th, 8th, and 12th hour, respectively. However, after treatment with the isolated probiotics, the average loads of V. harveyi at the 4th, 8th, and 12th hour were reduced to 1.16 × 10⁴, 7.41 × 10³, and 1.13 × 10³ CFU/g, respectively. The present study showed a slight reduction of the V. harveyi load at the 4th hour of probiotic application, whereas a drastic and significant reduction in the load of V. harveyi was obtained at the 8th and 12th hour of probiotic application.
DISCUSSION
The present study showed that the loads of Lactobacillus spp. and Vibrio spp. were higher in the gills than in the intestinal tracts of the shrimp samples. The gill is an essential respiratory organ and always remains in contact with the aquatic environment, and water is the major source of various types of microorganisms; the gill therefore has a greater possibility of association with different types of bacteria. The present findings are in line with other research reporting that potential pathogens were able to maintain themselves in the external environment (water) of the animal and proliferate independently of the host animal (Hansen and Olafsen, 1999; Verschuere et al., 2000).
Various potential pathogens are taken up constantly by the animal through the processes of respiration, osmoregulation and feeding. This might be the main reason for the higher occurrence of Lactobacillus spp. and Vibrio spp. in the gills than in the intestinal tracts of the experimental samples. This result also agrees well with the findings of Ringo and Gatesoupe (1998), who reported that Lactobacillus spp. is less abundant in the intestinal tract. It is also well known that the population level of lactic acid bacteria associated with the digestive tract is affected by nutritional and environmental factors such as dietary polyunsaturated fatty acids, chromic oxide, and stress. These factors might be another reason why there was less abundance of Lactobacillus spp. in the intestinal tracts than in the gills. Moriarty (1990) reported that aquatic farmed animals are surrounded by an environmental milieu that supports opportunistic pathogens independently of the host animal, and so the pathogens can reach high abundance on the external organs of the animal. In aquaculture ponds, where animal and algal population densities are very high, Vibrio spp. numbers can also become high compared to the open sea. This might be another reason for the higher occurrence of Vibrio spp. in the gills than in the intestinal tracts of the experimental samples found in the present study. Verschuere et al. (2000) stated that although lactic acid bacteria are not dominant in the normal intestinal microbiota of larval or growing fish, several trials have been undertaken to induce an artificial dominance of lactic acid bacteria in aquatic animals. This statement supports the findings of the present study.
The present study revealed that Lactobacillus spp. loads were higher than Vibrio spp. loads both in the gill and in the intestine. Lactobacillus spp. are beneficial bacteria, and the presence of a higher amount of Lactobacillus spp. in the gill and intestine is a good sign, because these Lactobacillus spp. act as natural probiotics in infected shrimp. These results also agree with the findings of two separate experiments (Lee et al., 2000; Vine et al., 2004), which stated that successful probiotic bacteria are usually able to colonize the intestine, at least temporarily, by adhering to the intestinal mucosa. The adhesive probiotic bacteria could prevent the attachment of pathogens, such as coliform bacteria and clostridia, and stimulate their removal from the infected intestinal tract. Both of these experiments revealed the temporary colonization and adherence of probiotic bacteria. The gills may appear susceptible to bacterial penetration because they are covered by a thin exoskeleton (Taylor and Taylor, 1992), but their surfaces are cleaned by the setobranchs.
From the biochemical tests it was found that most of the Vibrio spp. were Vibrio harveyi. The present study showed a significant antagonistic effect of Lactobacillus spp. against V. harveyi. In the in-vitro challenge test, the potential antagonistic effect of Lactobacillus spp. against V. harveyi was gradually obtained at the 4th, 8th, and 12th hour of probiotic treatment. This finding is very similar to that of Balcázar (2003), who also worked on the effect of probiotics on shrimp and whose findings relate to those of the present study. He found that the administration of a mixture of bacterial strains (Bacillus spp. and Vibrio spp.) positively influenced the growth and survival of juvenile white shrimp and presented a protective effect against the pathogens Vibrio harveyi and white spot syndrome virus.
Sumon et al. (2018) also found an antagonistic effect of probiotic bacteria on Vibrio harveyi, which also supports the present study. Austin and Brunt (2008) also mentioned that probiotics actively interfere with the colonization of potential pathogens in the digestive tract by antibiosis or by competition for nutrients, oxygen and/or space, by alteration of microbial metabolism, and/or by stimulation of the innate immune response, including enhancement of phagocytic and respiratory burst activities and lysozyme reduction. Jiravanichpaisal and Chuaychuwong et al. (1997) successfully used Lactobacillus spp. as probiotic bacteria in the tiger shrimp (P. monodon Fabricius); their study was designed to investigate an effective treatment with Lactobacillus spp. against vibriosis and white spot disease in P. monodon. The results of the present study indicate that probiotic treatment offers a promising alternative to the use of antibiotics in shrimp aquaculture. These works strongly suggest the effective control of microflora in fish and shellfish in culture environments by antibiotic-producing bacteria.
CONCLUSION
The in-vitro challenge test with Lactobacillus spp. showed that it significantly reduced the V. harveyi load of the selected shrimp samples. This in-vitro test demonstrated that Lactobacillus spp. has the inhibitory properties of a biocontrol agent for use in the control of shrimp pathogens and might be useful for replacing commercial antibiotics. The present investigation also clearly demonstrated that putative probionts isolated from infected shrimp possess an excellent antibacterial effect and could be applied in aquaculture operations as an effective treatment tool to prevent and cure shrimp diseases caused by V. harveyi. On the basis of the results obtained in this work, it can be concluded that probiotics, as 'bio-friendly agents' such as Lactobacillus spp., could be introduced into the culture environment to control and compete with pathogenic bacteria as well as to promote the growth of the cultured organisms.
Figure 1. Average load of Vibrio spp. and Lactobacillus spp. in gill and intestine

Table 1. Load of Lactobacillus spp. in gills and intestinal tracts of collected shrimp

Table 2. Biochemical tests to identify Vibrio harveyi
Developing a Targeted Learning-Based Statistical Analysis Plan
Abstract The Targeted Learning estimation roadmap provides a rigorous framework for developing a statistical analysis plan (SAP) for synthesizing evidence from randomized controlled trials and real world data. Learning from these data necessitates acknowledging potential sources of bias, and specifying appropriate mitigation strategies. This article demonstrates how Targeted Learning informs different aspects of SAP development, including explicit representation of intercurrent events. Guiding principles are to (a) define the target parameter of interest separately from the model or estimation procedure; (b) use targeted minimum loss-based estimation (TMLE) and super learning for causal inference, flexible methodologies that can be entirely pre-specified while remaining data adaptive; and (c) carry out a nonparametric sensitivity analysis to evaluate the plausibility of a causal interpretation of the estimated treatment effect, and its stability with respect to violations of underlying causal assumptions. The roadmap promotes the principles and practices set forth in the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use Guideline. An annotated SAP, checklists for pre-specifying the TMLE and super learning procedures, and sample R code are provided as supplementary materials.
Introduction
Existing guidelines for developing a statistical analysis plan (SAP) stress their importance in providing clarity, transparency and reproducibility (United States Food and Drug Administration 2013; Public Policy Committee, International Society of Pharmacoepidemiology 2016; Gamble et al. 2017;ICH 2019). This article complements that literature by demonstrating the utility of following the Targeted Learning Roadmap during SAP development. Targeted Learning (TL) offers a framework for causal inference in randomized controlled trials (RCT), RCTs incorporating real world data (RWD), and observational studies (OS) (van der Laan and Rose 2011; Ho et al. 2021;Gruber et al. 2022). The TL approach defines a clinical question of interest in terms of a causal inference problem, and uses targeted minimum loss-based estimation (TMLE) with an advanced machine learning algorithm known as super learning (SL) to estimate treatment effects (van der Laan and Rose 2011). TMLE+SL addresses potential biases due to non-randomization, intercurrent events, and missing outcome data, and can be pre-specified in an entirely transparent and reproducible manner. The TL approach recommends sensitivity analyses to assess the impact of "causal bias" stemming from, for example, unmeasured confounding, on the validity and reliability of a causal interpretation of the study finding. The entire process is described in a step-by-step guide known as the TL estimation roadmap. Following the roadmap enhances the ability to adhere to principles and practices set forth by the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH-E9(R1) Guideline) (ICH 2019).
The Guideline lists five elements of a statistical estimand: population, treatment, outcome variable, summary measure, and intercurrent events that occur after treatment initiation, altering the treatment strategy or outcome of interest. Targeted Learning addresses each of these elements. In particular, Targeted Learning emphasizes precisely formulating a statistical question that clearly addresses the scientific causal question of interest. Unlike a traditional parametric modeling approach, the causal quantity of interest is defined in terms of the distribution of the data, not as a coefficient in a regression model. Separating the parameter definition from the choice of estimator strengthens interpretability of the findings. Invoking a counterfactual framework for defining the target causal estimand, such as the Neyman-Rubin causal model or Pearl's nonparametric structural equation models, allows for explicitly incorporating intercurrent events into the representation of the causal quantity (Pearl 2000;Sekhon 2008). Estimation using TMLE and SL can be completely pre-specified, while remaining data adaptive and providing valid inference.
Methods
The Targeted Learning Roadmap is a practical guide to statistical learning from data (Figure 1) (van der Laan and Rose 2011; Ho et al. 2021). Starting with a well-defined question that can be answered from data, the steps in the roadmap provide a systematic process for creating and evaluating the evidence extracted from data. These steps facilitate adhering to four central precepts extracted from the ICH-E9(R1) guidelines: (a) construct the estimand corresponding to a clinical question of interest; (b) the description of the estimand should reflect the clinical question of interest in respect of intercurrent events; (c) the statistical analysis should be aligned to the estimand; and (d) sensitivity analysis should explore the robustness of study findings under violations of untestable assumptions (ICH 2019).
Next we step through the TL roadmap, using a fictitious RCT as a running example, the Randomized Trial of Drug for Migraine And Headache Pain (RDMAP) study. This study compares the effect of study Drug A with active comparator Drug B on migraine and chronic headache at the end of 52 weeks of follow-up. A completed annotated sample SAP is provided as supplementary materials.
Step 0. Formulate a well-defined substantive question, and a precise description of the experiment generating the data

A well-defined question addresses all five elements of the statistical estimand defined by the ICH-E9(R1) Guidelines: population, treatment, outcome, summary measure, and potential intercurrent events (ICH 2019). In our example these elements are defined as follows.

• The study population consists of adults, 18-65 years, who have consulted a medical professional for migraine or chronic headache within the past 24 months. Inclusion criteria include self-report of at least two headaches per month in the past 6 months. Exclusion criteria are pregnancy or known malignancy, cluster headache, suspicion of serious pathological etiology, and cranial neuralgia.
• Subjects will be randomized 1:1 to the treatment arm (Drug A, 500 mg tablet, once daily) or comparator arm (Drug B, 500 mg tablet, once daily). A 12 week supply of the assigned drug will be dispensed at baseline, and at three subsequent in-person visits every 12 weeks.
• The primary outcome is the mean weekly headache score (MWHS) 12 months post-randomization. The MWHS will be an average of the weekly headache scores during the final four weeks of follow-up, and a weekly headache score is the sum of seven daily self-reported headache scores on a scale of 0-10.
• The summary measure of interest is the marginal additive treatment effect (ATE).
• Foreseeable intercurrent events include treatment noncompliance (discontinuation or switch) and incomplete capture of the covariates.
For a PP analysis the question of interest is, "What is the population-level effect of taking Drug A versus Drug B on migraine and chronic headache in adult patients who meet eligibility criteria if they are adherent to their assigned medication?" where the effect is measured as the difference in MWHS at 12 months post-randomization under each treatment. For the ITT analysis, the question is, "What is the population-level effect of being assigned treatment with Drug A versus Drug B on migraine and chronic headache in adult patients who meet eligibility criteria, regardless of adherence?" The effect is again measured as the difference in MWHS at 12 months post-randomization under each assigned treatment.
The timeline for data accrual in the RDMAP study is shown in Figure 2. The covariate vector at each time point, t, is denoted by L_t. Covariates collected at baseline (t = 0) and subsequent clinic visits are summarized in the figure. By convention, the final set of covariates, L_4, includes the outcome Y. Indicators of treatment received at baseline and subsequent clinic visits are denoted by A_0, A_1, A_2, A_3 (missing if post LTFU). Indicators of remaining uncensored are denoted by C_1, C_2, C_3, C_4. The longitudinal data structure is given by O = (L_0, A_0, C_1, A_1, L_1, C_2, A_2, L_2, C_3, A_3, L_3, C_4, L_4).
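One possible way to lay out this longitudinal structure as an analysis dataset, with one wide-format row per subject, is sketched below; the column names are illustrative assumptions, not the RDMAP case report form.

```python
import pandas as pd

# One row per subject; columns follow the time ordering of
# O = (L0, A0, C1, A1, L1, C2, A2, L2, C3, A3, L3, C4, L4)
columns = (
    ["L0_age", "L0_sex", "L0_baseline_headache_score", "A0"]
    + [f"{v}{t}" for t in (1, 2, 3) for v in ("C", "A", "L")]
    + ["C4", "L4_Y"]   # L4 includes the outcome Y (MWHS at 12 months)
)
rdmap = pd.DataFrame(columns=columns)
print(columns)
```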
Step 1. Define a realistic statistical model
A statistical model, M, is a collection of possible probability distributions of the data. A main terms linear model regressing outcome Y on binary treatment indicator A, (A = 1 for study Drug A, A = 0 for comparator Drug B), optionally adjusting for suspected covariates, W, is often specified by default. However, this defines an underlying statistical model, M, that contains only distributions of the data that enforce a monotonic relationship between each continuous covariate and the outcome, and preclude treatment effect heterogeneity. Such a model may be far from ideal. Targeted Learning instead defines a realistic statistical model, M, respecting the time ordering of the data generating process and consistent with study inclusion/exclusion criteria. In our example, since baseline treatment is randomly assigned, we know that P(A_0 = 1 | L_0) = 0.5, although there may be chance imbalances. However, because the dropout mechanism is unknown, we cannot realistically impose parametric modeling assumptions on the conditional probability of being LTFU.
Step 2. Specify a causal model and causal quantity of interest
A causal model describes presumed causal relationships in the data and known conditional independencies, even beyond those implied by the time ordering. Subject matter experts and statisticians can work together to construct a directed acyclic graph (DAG) that explicitly depicts this causal knowledge (Pearl 2000). Analyzing the structure of the DAG can facilitate identifying potential confounders of the associations between treatment, drop out, and the outcome. Carrying out this exercise during SAP development motivates planning to collect these data throughout the course of the study.
Our causal estimand of interest, the ATE, is defined as a mapping from the causal model to a marginal additive treatment effect. In counterfactual notation the estimand can be expressed in the point treatment setting as ψ_causal = E[Y_1] − E[Y_0], where Y_1 is the counterfactual outcome a subject would experience when treated with Drug A, and Y_0 is the counterfactual outcome a subject would experience when treated with Drug B. The expectation is with respect to the entire population of randomized subjects.
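For completeness, the point-treatment ATE and the standard g-formula identification that holds under randomization of A_0 (with no loss to follow-up), consistency, and positivity can be written in LaTeX as follows; this restates textbook identities rather than anything specific to the RDMAP example.

```latex
\psi^{\text{causal}}
  \;=\; E[Y_1] - E[Y_0]
  \;=\; E\!\big[\,E[Y \mid A_0 = 1, L_0]\,\big]
        - E\!\big[\,E[Y \mid A_0 = 0, L_0]\,\big],
```

where the second equality uses consistency, $Y_a \perp A_0 \mid L_0$ (guaranteed by randomization), and $0 < P(A_0 = 1 \mid L_0) < 1$.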
This point treatment formulation may be appropriate for an intention to treat (ITT) analysis, when any LTFU is only a function of pretreatment patient characteristics. However, it is problematic for a PP analysis, because it ignores intercurrent events that potentially disrupt the treatment-outcome association. As instructed by the ICH-E9 (R1) Guidelines, the Targeted Learning approach formalizes the PP causal quantity in a way that explicitly represents the true process giving rise to the data over time. This representation explicitly includes treatment noncompliance, and missing outcome data.
For a TL-based definition of the PP estimand, the causal contrast of interest is the difference between the mean counterfactual outcomes under treatment with Drug A at all time points and treatment with Drug B at all time points, with no LTFU; that is, the average treatment effect among the entire randomized population. Contrast this with an SAP where the PP population is defined as the subgroup that was observed to adhere to treatment, to be determined after the trial concludes, though possibly before outcomes are unblinded. The marginal treatment effect among this compliant population is not necessarily the same as the marginal effect among the randomized population (Imbens and Angrist 1994). If the compliant population is not easily characterized, it is not clear for whom the study finding holds. In the TL-based analysis, the effect estimate is with respect to the original study population.
In the causal inference literature treatment nodes and censoring nodes are collectively referred to as the set of intervention nodes, because the causal contrast of interest involves intervening to deterministically set these nodes to correspond to the longitudinal regime of interest. The causal parameter is defined in terms of counterfactual values for all the intervention nodes. The causal PP parameter is denoted ψ^causal_PP, where the subscripts indicate the counterfactual values when intervening to set A_0 through A_3 and C_1 through C_4 to their desired values.
For the ITT analysis, we are concerned with intervening on baseline treatment assignment at t = 0, and setting all censoring nodes to indicators of remaining uncensored through the end of follow-up. In the counterfactual model, the nodes that represent later treatments, (A_1, A_2, A_3), are no longer intervention nodes. The causal quantity of interest for the ITT analysis can be written as ψ^causal_ITT = E[Y(A_0 = 1, C_1 = ⋯ = C_4 = 1)] − E[Y(A_0 = 0, C_1 = ⋯ = C_4 = 1)]. The specifications for both the ITT and PP estimands anticipate that intercurrent events will happen. This explicit representation also guides the design of the study with respect to what variables need to be measured to make the sequential randomization assumption plausible.
When the outcome is binary the ATE is equivalent to a risk difference. Other summary measures that are commonly of interest include the risk ratio, odds ratio, hazard ratio, cumulative incidence difference or ratio. Though it is not a concern in our running example, in the presence of competing risks intervening on censoring nodes may be unrealistic, and alternate definitions of the causal parameter may better address the substantive question (Rudolph et al. 2020). One of several strategies suggested in the ICH-E9 (R1) Guidelines is to consider incorporating the competing risk into the outcome definition (ICH 2019). A continuous time TMLE for competing risks has been described in the literature (Rytgaard and van der Laan 2021). Each competing risk is considered to be an outcome that arises from a multivariate outcome process that jumps when the particular cause of failure occurs. The TMLE approach carries out a complete analysis of all absolute risks simultaneously.
Step 3. Specify the statistical estimand
Identifying assumptions link the causal parameter, defined in terms of counterfactuals, to a statistical parameter that can be estimated from data, ψ_stat. This statistical estimand can be written as a difference in conditional mean outcomes under two different treatment regimes of interest, ψ_stat = ψ_stat(Q̄_ā1) − ψ_stat(Q̄_ā0). In our example, for the PP analysis ā_1 is the treatment regime in which the subject receives Drug A at each time point and remains uncensored, and ā_0 is the treatment regime in which the subject receives Drug B at each time point and remains uncensored. Q̄_ā is defined as a series of K + 1 iterated conditional means, where K is the number of time points, Q̄_ā = (Q̄^ā_Y, Q̄^ā_L_{K−1}, . . . , Q̄^ā_L_0) (van der Laan and Gruber 2012; Schnitzer 2020). For example, under intervention ā = (1, 1, 1, 1, 1, 1, 1, 1), this series is given by first regressing Y on the observed past among subjects who followed the regime through the final time point, Q̄^ā_Y = E[Y | Ā_3 = (1, 1, 1, 1), C̄_4 = (1, 1, 1, 1), L̄_3], and then iteratively regressing each fitted conditional mean on the history one time point earlier, down to Q̄^ā_L_0 = E[Q̄^ā_L_1 | A_0 = 1, C_1 = 1, L_0]. Here L̄_t, Ā_t, and C̄_t denote covariate, treatment, and censoring histories, respectively, from t = 0 through t. ψ_causal is identifiable from data under the following causal assumptions. The sequential consistency assumption is the assumption that for each regime of interest, ā, and for any individual i observed to follow that regime, Y_i = Y_i(ā). In an ITT analysis this assumption is met by design, as long as the outcome is measured without error. In a PP analysis, if observed treatment is not concordant with the intervention of interest at some time t, then this assumption is not met. Recall that the subscripts in the representation of ψ^causal_PP describe the counterfactual value at each intervention node. Under treatment noncompliance at time point t the observed sequence of values no longer matches the desired counterfactual sequence. Thus, there is no basis to claim that the recorded outcome value corresponds to the counterfactual outcome under the intervention of interest. One must acknowledge that the counterfactual outcome is unavailable, by setting its value in the dataset to missing, and setting all indicators of being uncensored at times t + 1 through t = K to zero, or "censored." The sequential positivity assumption states that within strata defined by confounders, subjects have a nonzero probability of receiving each level of treatment. If this assumption is not met there are areas where the data provide no support for evaluating a causal contrast without imposing additional modeling assumptions. The positivity assumption must hold at each time point with respect to the cumulative product of the conditional probabilities associated with following the intervention of interest at each time, t. The probability of remaining uncensored can be 1, since no intervention of interest involves intervening to impose censoring. Denoting each cumulative product through time t as G_{0:t}, the positivity assumption states that 0 < G_{0:t} < 1. Randomization of baseline treatment assignment guarantees this assumption is met at t = 0. However, if there is LTFU, that guarantee does not extend to all time points t > 0. In a PP analysis, treatment switching also threatens the positivity assumption. Propensity score and missingness probability diagnostics allow us to assess this assumption with respect to covariates measured up to each time point.
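A minimal positivity diagnostic for the baseline (point-treatment) case is sketched below: estimate the propensity score and inspect its distribution, flagging estimated probabilities near 0 or 1. The model choice, thresholds, and function name are assumptions, and a longitudinal analysis would apply the same idea to the cumulative products G_{0:t}.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def positivity_check(L0, A0, bounds=(0.025, 0.975)):
    """Flag near-violations of positivity for baseline treatment A0 given
    baseline covariates L0 (an n x p array)."""
    g = LogisticRegression(max_iter=1000).fit(L0, A0)
    scores = g.predict_proba(L0)[:, 1]          # estimated P(A0 = 1 | L0)
    lo, hi = np.quantile(scores, bounds)
    print(f"propensity score range: [{scores.min():.3f}, {scores.max():.3f}]")
    print(f"{np.mean((scores < 0.05) | (scores > 0.95)):.1%} of subjects "
          "have estimated scores outside (0.05, 0.95)")
    return scores, (lo, hi)
```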
The sequential randomization assumption (SRA) states that treatment and/or censoring at time t is independent of the counterfactual outcome given the past. This is an extension to longitudinal settings of the missing at random (MAR) assumption: A_t, C_t ⊥ Y^ā | Ā_{t−1}, C̄_{t−1}, L̄_{t−1}. Under causal models that provide additional information about known conditional independencies in the data, a weaker version of this assumption might be sufficient. For example, if treatment and censoring at time t were known to depend only on baseline values and the most recent measures of time-varying covariates, then replacing L̄_{t−1} with (L_{t−1}, L_0) would be sufficient: A_t, C_t ⊥ Y^ā | Ā_{t−1}, C̄_{t−1}, L_{t−1}, L_0.
Step 4. Estimation from data, respecting M and statistical inference
Step 4 requires suitable methodology for estimating the statistical parameter defined in Step 3. The estimation tools of Targeted Learning are TMLE and SL. TMLE has been shown to produce reliable study findings while depending on weaker assumptions than traditional parametric modeling (Gruber and van der Laan 2013). TMLE can appropriately adjust for time-varying confounding due to, for example, components of L_t affected by prior treatment that also impact the outcome, while traditional parametric modeling approaches (e.g., linear, logistic, Cox proportional hazards) cannot. Although some other causal inference methods, including inverse probability weighting (Hernan et al. 2000) and G-computation (Bang and Robins 2005), can also correctly adjust for time-varying confounders, these are consistent and asymptotically linear only under a smaller, more restrictive statistical model.
As a double robust estimator, TMLE produces unbiased estimates of the treatment effect as long as either the outcome regression or both the PS and censoring mechanism are modeled correctly. For specific examples of carrying out longitudinal data analyses we refer the reader to publications that emphasize practical applications (Schomaker et al. 2019; Sofrygin et al. 2019; Ferreira et al. 2020; Gruber et al. 2022). For practical guidance on specifying a SL we refer the reader to a recent publication describing each essential step: choosing an appropriate loss function for the task at hand, defining the cross-validation scheme, and constructing the SL library based on characteristics of the data. The library should include a rich set of diverse algorithms, which can be coupled with screening algorithms to reduce dimensionality. In summary, analytic choices should be tailored to the characteristics of the problem, and to what is known about the data and the plausibility of the identifying assumptions linking the statistical and causal estimands.
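For orientation, the sketch below shows how such a longitudinal analysis might be pre-specified with the ltmle and SuperLearner R packages. All column names, node assignments, and learner choices are hypothetical placeholders, and the exact arguments should be checked against the package documentation and tailored to the study at hand.

```r
# Minimal, illustrative ltmle call for a longitudinal TMLE with a Super Learner.
# Variable names (L0, A0, C0, ..., Y) and the SL library are assumptions for
# illustration only, not the analysis specified in the accompanying SAP.
library(ltmle)
library(SuperLearner)

# Candidate learners for both the outcome regressions (Q) and the treatment /
# censoring mechanisms (g); a richer, pre-specified library is preferable.
sl_lib <- c("SL.glm", "SL.glmnet", "SL.mean")

result <- ltmle(
  data   = wide_data,                       # one row per subject, time-ordered columns
  Anodes = c("A0", "A1", "A2"),             # treatment at each time point
  Cnodes = c("C0", "C1", "C2"),             # censoring indicators (e.g., via BinaryToCensoring)
  Lnodes = c("L1", "L2"),                   # time-varying covariates
  Ynodes = "Y",                             # end-of-study outcome
  abar   = list(c(1, 1, 1), c(0, 0, 0)),    # always Drug A vs. always Drug B
  SL.library = sl_lib,
  estimate.time = FALSE
)
summary(result)   # ATE estimate, influence-curve-based SE, 95% CI, p-value
```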
Each choice can and should be pre-specified in the SAP. A priori specifications enhance analytic reproducibility, interpretability, and reliability. Checklists of options and settings for analyses using the tmle, ltmle, and SuperLearner R packages are provided as supplemental materials (Polley 2021; Schwab 2021). These checklists are intended to illustrate which decisions must be made in order to pre-specify the entire analysis. The values provided are example specifications, not general recommendations.
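As one small example of the kind of pre-specification these checklists cover, a stand-alone Super Learner fit might be spelled out as follows; the learner names, covariates, and folds are illustrative choices, not recommendations.

```r
# Illustrative pre-specified Super Learner fit for a single regression, with an
# explicit library and 10-fold cross-validation. Column names are placeholders.
library(SuperLearner)

sl_fit <- SuperLearner(
  Y = dat$Y,
  X = dat[, c("W1", "W2", "A")],
  family = binomial(),
  SL.library = c("SL.glm", "SL.glmnet", "SL.gam", "SL.mean"),
  cvControl = list(V = 10)
)
sl_fit$coef   # estimated weights given to each candidate learner
```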
Handling missing covariates: It is important to distinguish between missing values in covariates intended to be captured for all subjects and informed presence, the existence of covariate values due to extra surveillance in only some portion of subjects (Goldstein et al. 2016). The absence of a value sometimes indicates that there was no need to measure the covariate. Thus, its unknown underlying value is not related to a treatment or drop-out decision (United States Food and Drug Administration 2021). The absence of information on a baseline or time-dependent covariate, X, in this circumstance does not represent missing data. The value of X should only be viewed as missing when the underlying covariate is needed for the SRAs for treatment and censoring to hold. In essence, X functions as an interaction term, REQUIRED×X, that equals the recorded value when measuring X was required, and 0 otherwise.
On the other hand, if the SRAs only hold with respect to the underlying full measurement of X, then its value is truly missing. If the MAR assumption is reasonable, then in response, we impute a value, such as 0 or the mean or mode of the observed values. We also simultaneously create a binary indicator of missingness to add to the dataset. This indicator allows the pattern of missingness to itself be a predictor of subsequent treatment, drop out, or the outcome. There is no need to impute missing covariate values for data collected after a censoring event at time t cens , because only values from subjects who remain uncensored at times t > t cens contribute to estimating the components of Q and G at those subsequent time points. If the MAR assumption is not reasonable, then, if possible, one might change the design of the study by randomly sampling subjects among the total sample where these measurements are collected. Such designs are often referred to as two stage designs, where the second stage involves randomly sampling according to known sampling probabilities (Ho et al. 2021). Of course, changing the design of the study, or the definition of the causal estimand necessitates going through the steps of the roadmap from the beginning.
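A minimal sketch of this imputation-plus-indicator approach is shown below; the variable names are hypothetical and the choice of fill-in value (0, mean, or mode) should follow the SAP.

```r
# Illustrative handling of a truly missing baseline covariate X under MAR:
# impute a fill-in value and add a missingness indicator so that the pattern
# of missingness can itself predict treatment, drop-out, or the outcome.
dat$X_missing <- as.integer(is.na(dat$X))            # 1 = value was missing
dat$X[is.na(dat$X)] <- mean(dat$X, na.rm = TRUE)     # or 0, or the mode
# Both dat$X and dat$X_missing are then passed to the Q and g model fits.
```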
Handling missing outcomes: These general guidelines should be discussed with clinicians to reflect clinical context. Outcome values that are not recorded are set to missing (e.g., NA in R, '.' in SAS). In a PP analysis, outcomes are also set to missing for subjects who are noncompliant with assigned treatment. Note that each observation with a missing outcome value remains in the dataset, and contributes to estimation of the PS and missingness probabilities up to the time it is right censored. Internally, TMLE evaluates the targeted counterfactual outcomes under both treatment assignments. These values contribute to the parameter estimate, ensuring the marginal mean outcome is with respect to the original intended study population. Retaining these observations also reduces variance.
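As a sketch, the per-protocol recoding described above might be carried out as follows; the column names, the time point of deviation, and the "censored"/"uncensored" coding are assumptions for illustration, and the exact data layout should match the chosen software.

```r
# Illustrative per-protocol handling when observed treatment departs from the
# "always Drug A" regime at time 1: the outcome is treated as unavailable and
# the subject is coded as censored from the next time point onward.
noncompliant <- dat$A1 != 1              # deviation from the regime at t = 1
dat$Y[noncompliant]  <- NA               # counterfactual outcome unavailable
dat$C2[noncompliant] <- "censored"       # uncensored indicators set to censored
dat$C3[noncompliant] <- "censored"       # ... at times t + 1 through K
```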
TMLE provides valid inference, unlike most other causal effect estimation methodologies that incorporate machine learning. TMLE provides asymptotically valid 95% confidence intervals and controls the Type I error rate as a consequence of being a regular, asymptotically linear estimator. If the strong positivity assumption holds (i.e., the efficient influence curve for the target parameter is a bounded function of the observations), then reliable finite sample analytic standard error estimates are obtained with the sample variance or cross-validated sample variance of the efficient influence curve, an immediate byproduct of the TMLE (van der Laan and Rubin 2006). These standard error estimates can be used to construct p-values and Wald-type confidence intervals that have good coverage. However, if positivity is an issue, these influence curve based variance estimators can be anticonservative, under-estimating the variance, whereas targeted plug-in variance estimators remain robust (Tran et al. 2018). Estimates of the standard errors based on the influence curve of the TMLE, or on a robust, targeted plug-in estimate of the variance, are available in most of the standard TMLE software packages (Benkeser et al. 2017; Schwab 2021; Ju 2021).
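As a sketch, influence-curve-based inference reduces to a few lines; here `ic` is assumed to hold the estimated efficient influence curve values and `psi_hat` the TMLE point estimate (names of these outputs vary across packages).

```r
# Wald-type inference from the estimated efficient influence curve (IC).
n      <- length(ic)
se_hat <- sqrt(var(ic) / n)                            # sample-variance-based standard error
ci     <- psi_hat + c(-1, 1) * qnorm(0.975) * se_hat   # 95% Wald confidence interval
p_val  <- 2 * pnorm(-abs(psi_hat / se_hat))            # two-sided p-value against a null of 0
```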
There is also the option to use the nonparametric bootstrap to obtain confidence intervals that incorporate the higher order behavior of the TMLE. This has been shown to be a theoretically valid and finite-sample robust method when one uses a SL based on highly adaptive lasso (HAL) estimators and parametric model-based estimators (Cai and van der Laan 2020). If, on the other hand, one uses a SL with other types of machine learning algorithms, then ties must be removed from the bootstrap sample (i.e., analogous to subset sampling). Targeted model-based bootstrap resampling is a final option for obtaining robust confidence intervals that incorporate the higher order behavior of the TMLE with any SL (Coyle and van der Laan 2018). This illustrates that specifying the method for inference is another important detail to include in the SAP.
Step 5. Interpretation and sensitivity analyses to inform a substantive conclusion
Although the interpretation of ψ causal is clear, how closely ψ stat matches ψ causal merits discussion. To assess interpretability of ψ stat as a causal effect, one considers the plausibility of each of the identifying assumptions in turn. The consistency assumption holds trivially in an ITT analysis when outcomes are correctly measured. In a PP analysis, this assumption is likely to be met as long as treatment adherence over time was documented, and outcomes recorded under noncompliance are viewed as missing (exceptions are pre-specified allowances for grace periods, treatment and outcome assessment windows, etc.). The positivity assumption can be assessed with respect to measured confounders by examining the estimated PS and missingness distributions by treatment arm. Domain knowledge is needed to qualitatively assess the plausibility of the sequential randomization assumption. However, the true extent of violations of these assumptions is ultimately unknowable. Thus, TL prescribes a nonparametric assessment of how large and small departures from these assumptions would impact the substantive conclusion. This provides quantifiable insight into the level of support, including any lack of support, for regulatory decision making. This analysis complements other forms of sensitivity analyses, such as a tipping point analysis (Ratitch et al. 2013), multiple imputation, analyses assessing the impact of outlying values, assumptions on clustering or correlation, competing risks, and others (see Thabane et al. 2013).
The TL literature defines causal bias as the gap, δ = ψ_stat − ψ_causal, between the statistical estimand and the causal estimand, ignoring random variation (Diaz and van der Laan 2013). The nonparametric sensitivity analysis investigates the impact of potential, unknown, causal bias under a range of plausible values for δ, without imposing untestable modeling assumptions. The exercise illustrates how the effect estimate, p-value, and confidence interval bounds change, depending on the magnitude and direction of the hypothesized gap. The range of plausible values can be based on clinical context or prior evidence. In the following example, we examined a range large enough to show the gap size required for the 95% confidence interval to exclude the null, in both the positive and negative directions.
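A minimal sketch of this exercise is given below; `psi_hat` and `se_hat` denote the adjusted point estimate and its standard error from the primary analysis, and the grid of δ values is an illustrative assumption.

```r
# Nonparametric sensitivity analysis: shift the estimate by a hypothesized causal
# bias delta and recompute the confidence interval and p-value at each value.
delta   <- seq(-8, 8, by = 0.5)                 # plausible causal-bias values
shifted <- psi_hat - delta                      # estimate of psi_causal = psi_stat - delta
lower   <- shifted - qnorm(0.975) * se_hat
upper   <- shifted + qnorm(0.975) * se_hat
p_vals  <- 2 * pnorm(-abs(shifted / se_hat))
sens    <- data.frame(delta, shifted, lower, upper, p_vals)
sens    # inspect where the interval first includes or excludes the null
```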
Consider a hypothetical analysis to estimate the ATE from RCT data where there was 25% LTFU. The unadjusted effect estimate was ψ̂_ATE,unadj = −6.10. Using TMLE to adjust for confounders moved the point estimate to ψ̂_ATE,adj = −5.42, closer to the null value of 0. The substantive conclusion from the primary analysis is that treatment reduces the mean outcome. The upper bound of the 95% confidence interval is well below 0. An open question is whether the estimated treatment effect is biased due to confounding by unmeasured covariates partially responsible for LTFU. This is not directly testable from data. However, we can examine how the substantive conclusion would be impacted under a range of presumed causal bias.
We operationalized our sensitivity analysis by considering the potential bias due to unmeasured confounders or outcome mismeasurement. Figure 3 shows the shift in point estimates and confidence interval bounds for a range of causal bias sufficient for the confidence intervals to lie entirely above and below the null. The magnitude of the causal bias is shown on the x-axis labeled δ. Alternative axes show the bias relative to the SE of the adjusted estimate (SE units), and in terms of the difference between the unadjusted and adjusted estimates, Adj Units = 0.68. Only when causal bias is greater than approximately -3 does the confidence interval include the null. Causal bias would have to be more than 11 times larger than the difference between the adjusted and unadjusted estimates for the point estimate to be positive and the confidence interval to exclude the null. Subject matter experts can share insight into whether a causal gap of this size and direction is plausible. If it is highly unlikely, then there is strong support for a conclusion that on average Drug A either has no impact or reduces 12-month MWHS in this population, compared with Drug B. A conclusion that Drug A reduces 12-month MWHS in this population also has strong support in the data. However, a conclusion that treating with Drug A instead of Drug B increases 12-month MWHS has essentially no support.
Results
An annotated SAP, completed checklists for pre-specifying the estimation procedure, and sample data analysis R code are provided as supplementary materials.
Conclusion
Decisions regarding development and authorization of new drugs, biologics, and medical devices need to be based on solid, reliable, interpretable science. This paper demonstrates how applying the principles of Targeted Learning while writing a SAP fosters the development of reliable RWE. TL distinguishes between a realistic statistical model, a causal model, a causal estimand, and a statistical estimand. These clear distinctions are key to obtaining transparent, interpretable, actionable evidence from data. The TL roadmap complements the ICH E9(R1), going beyond the ICH focus on the statistical estimand.
Defining a realistic target of estimation with respect to intercurrent events for the primary analysis aligns with the ICH Guidelines. Nevertheless, proactively planning for how to avoid or minimize them remains an important study design component. This includes following current regulatory practice by capturing the reason for noncompliance or drop-out, and continuation of follow-up even after nonadherence (United States Food and Drug Administration 2008). Our sample SAP omits details often required in practice, to better highlight the contributions of the Targeted Learning framework. It illustrates how a Targeted Learning perspective can influence all stages from study design through data analysis and interpretation. The key recommendation is to follow the Targeted Learning Roadmap. Doing so produces a clear statement of the causal parameter explicitly with respect to intercurrent events. A model free definition of the corresponding statistical estimand leaves discretion in the choice of estimator.
TMLE is recommended over other double robust estimators by virtue of its being a plug-in estimator that respects bounds on the statistical model. TMLE also incorporates machine learning while preserving valid inference. Supplementary materials provide clear guidance on what features in TMLE and SL need to be pre-specified to ensure transparency and reproducibility. TMLE is more efficient than PS-based methods (e.g., inverse probability weighting, matching), and, unlike them, can remain unbiased when the PS and missingness mechanisms are poorly estimated (Lendle et al. 2013;Schnitzer et al. 2013;Colson et al. 2016). Even if this recommendation is not followed, principles underlying steps 1-3 and 5 of the TL Roadmap remain important guides to developing the SAP.
As in a standard analysis, a causal interpretation of the study finding depends on how well the identifying assumptions are met. Targeted Learning encourages explicitly considering each assumption in turn. This exposes potential gaps in identifiability that threaten the validity of a causal interpretation. Nonparametric sensitivity analyses quantify how such gaps affect point estimates, confidence intervals, and p-values. Results from such a sensitivity analysis, other sensitivity analyses, and diagnostics aid in assessing the strength of support in the data for a substantive conclusion drawn from the study findings.
Supplementary Materials
Annotated Statistical Analysis Plan: An annotated statistical analysis plan (SAP) titled, "A Fictitious Targeted Learning Example: Randomized Trial of Drug for Migraine And Headache Pain (TL-RDMAP)." The SAP appendix includes checklists and sample R code for pre-specifying the data analysis using the tmle or ltmle packages, and checklists for specifying super learner options. These specifications are for illustration only, and should be tailored for any particular data analysis.
|
2022-08-25T15:06:07.556Z
|
2022-08-23T00:00:00.000
|
{
"year": 2023,
"sha1": "b73594df2c04812464a7ffb4f20f3f907da90040",
"oa_license": "CCBY",
"oa_url": "https://figshare.com/articles/journal_contribution/Developing_a_Targeted_Learning-Based_Statistical_Analysis_Plan/20556501/1/files/36782568.pdf",
"oa_status": "GREEN",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "0349929f12d501b5ca414814f0c5c6d9915e13da",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
}
|
224852333
|
pes2o/s2orc
|
v3-fos-license
|
Mini-seminar project: An authentic assessment practice in speaking class for advanced students
Received Apr 18, 2020 Revised Aug 14, 2020 Accepted Sep 26, 2020
This paper reports one best practice in assessing the public speaking performance of advanced students at an Indonesian public university. The study involves an English course for an advanced class which was primarily related to public speaking skills. Considering that speaking is a productive skill that should be assessed through authentic assessment principles, the lecturers decided to assign the students a mini-seminar project as part of their final examination. This project required the students to conduct a real-life contextualised seminar in which the organisers, speakers, and audience were composed of the students themselves. This paper discusses the rationale behind the planning and implementation of this successful project, which involved a synthesis of assessment of, for, and as learning, and critically evaluates the procedures of the assessment, the rubric developed therein, and the challenges experienced by the lecturers within the classroom. After the implementation, it can be concluded that this mini-seminar project is a doable alternative authentic assessment model that is applicable in a speaking class which focuses on the development of students' public speaking skills. This mini-seminar project is recommended not only because it can be used as an alternative assessment model, but also because it encourages students to work together in teams and to work creatively, creating something new in order to perform better.
INTRODUCTION
As one of the core components of education, assessment can be generally understood as a systematic and continuous process or activity to collect, analyze, and interpret information about the process as well as the results of students' learning. Put simply, students' learning is assessed to identify whether (and often, to what extent) they have achieved the learning goals stated by the curriculum. The results are further used by the teachers to make instructional decisions based on certain criteria and considerations [1]. Assessment can also function as a tool to collect information related to the development of students. Assessment serves to measure the level of students' achievement in subjects learned, including mapping the learning problems they experience. In addition, assessment can serve as a tool through which teachers receive feedback on the quality of their own teaching [2]. Assessment has indeed played a crucial role in students' learning, as research shows that it has influenced the quality of student learning and enhanced deeper learning [3]. Assessment can be done at various stages and in multiple ways. In general, it can be done at the end of the learning process (summative evaluation, or assessment of learning) and while the learning process takes place (formative evaluation, or assessment for learning). Assessment can also be conducted as a metacognitive tool whereby the assessment task itself becomes a process of learning (assessment as learning) [4]. Assessment can be conducted in the form of tests and non-tests. Assessment in the form of tests usually appears in the form of objective tests, written tests, and oral tests, while assessment in the form of non-tests can be done in more varied forms, such as observation, performance, assignments, presentations, seminars, and other authentic forms.
Assessment is considered authentic when the tasks are real-to-life or have real-life value [3][4][5]. Varela et al. [6] describe authentic assessment as the multiple forms of assessment reflecting students' learning, achievement, motivation, and attitudes toward classroom instructional activities. They then mention three types of authentic assessment: performance assessment, portfolios, and student self-assessment. Performance assessment consists of oral reports, writing samples, individual or group projects, exhibitions, as well as demonstrations in which students respond orally.
In the last ten years, the Indonesian government has campaigned for the practice of authentic assessment since the 2006 curriculum along with an emphasis on the use of school-based curricula in primary and secondary schools in Indonesia [7]. This can be seen in Article 2 of paragraph 2 of Government Regulation number 14 of 2014 emphasising the use of authentic assessment in the process of evaluating learning outcomes by teachers. The forms of authentic assessment suggested by the government are observations, assignments to the field, portfolios, projects, products, journals, laboratory work, and performance, as well as self-assessment and peer evaluations. As mentioned by Azhar [8], this authentic assessment is expected to serve as a solution to problems of assessment in Indonesian schools.
In the context of higher education, assessment of learning should also be carried out comprehensively, covering all domains of knowledge, skills, and attitudes. Assessment should also emphasize learning processes and results. Just like learning at the elementary and secondary levels, instruments that can be used in assessments can be tests and non-tests. The application of authentic assessments is more likely to be the practical choice in higher education because student assignments, in general, tend to be directed more at solving problems in the real-world context. Students are not only introduced to theories/concepts in the scientific field but are also encouraged to deal with relevant issues around them.
Some projects have been conducted by other researchers to develop authentic assessment both in the Indonesian higher education and secondary schools context. These projects include a problem-based learning model through an authentic assessment based practicum to improve students' science process skills conducted by Duda and Susilo [9] in STKIP Persada Khatulistiwa Sintang, West Borneo, Indonesia. Another project was done by Rohmad [10] who developed documents of authentic assessment in assessing affective domain in Islamic Education and character education. Other studies have also developed some models of authentic assessment in assessing students' speaking performance [11][12].
However, studies by Ermawati and Hidayat [13] and Rukmini and Saputri [14] indicate that both lecturers and school teachers face several problems in the assessment process. First, there are obstacles in carrying out a comprehensive and consistent assessment, as well as difficulties in improvising and developing research instruments. Time constraint has also been identified as a major challenge for some teachers in conducting an authentic assessment [15]. Another study [16] also shows that teachers have encountered similar problems in conducting authentic assessment. These include time- and effort-consuming procedures, validity issues, reliability issues, resource administration, evidence transformation, and subjectivity.
Keeping such complexities in mind, this paper discusses the implementation of a mini-seminar project as a form of authentic assessment in a speaking course at a university in Indonesia. It also shows how this task involved the principles of assessment of, for, and as learning and synthesized them into one single activity. This project has been successfully implemented several times in the last two years, with its effectiveness demonstrated through anecdotal evidence of students' impressions and their positive feedback upon the completion of the project, as well as through their performances in subsequent summative tasks for speaking. This paper will further discuss how the procedures were implemented, the rationale behind the model of the assessment carried out, and how students responded to this authentic assessment.
TEACHING CONTEXT
The mini-seminar project was held as the final assignment in a Speaking 3 course for second-year students at Universitas Riau. Speaking 3 is a pre-requisite course with three credit hours, offered as a continuation of the Speaking 1 and 2 subjects. Thus, this class is an advanced-level class consisting of students who have passed previous speaking courses. As is typical of lecture classes in many universities in Indonesia, this class is considered a large class, consisting of 35 students. The teaching material in this class is directed at preparing students to have good public speaking skills. Therefore the course syllabus contains materials related to public speaking, such as how to deliver speeches, deliver presentations, debate, act as master of ceremonies, act as a moderator, give an impromptu speech, be a newsreader, and report news as a journalist.
a. Assessment procedures
For both teachers and students, coming to the assessment stage of a speaking course is always challenging. Speaking is an intricate skill involving many elements of both linguistic and non-linguistic factors [17]. It deals not only with testees' linguistic competency, but also with their state of being when the test is conducted, such as mood and fatigue, or even practicality issues such as the bad quality of a recording. This challenge is particularly obvious in a speaking course where public speaking skills development is the main objective, as is the case in this study. This is because a successful public speaking performance is distributed across multiple modalities, e.g., the speech content, voice and intonation, facial expressions, head poses, hand gestures, and body postures [18], and all of these need to be taken into account for a fair and complete assessment. In addition, because the assessment of such a task is synchronous, i.e., taking place at the same time as the delivery of the presentations, there are additional challenges in conducting it, often requiring a lot of experience from the teachers.
Considering the purpose and content of the course, we decided to do an authentic assessment at the end of the course. It is believed that an authentic assessment has the potential to enhance students' learning [6]. Ontologically, this assessment was developed following the principle of authenticity proposed by Vu and Dall'Aliba [5], arguing that authenticity need not be an attribute of tasks but, rather, is a quality of educational processes that engage students in becoming more fully human. In the context of English Language Teaching (ELT), this authentic assessment enables teachers or lecturers to put emphasis on the ability to function effectively through language in particular contexts of situation, rather than on linguistic accuracy [19].
The assessment of this speaking class was then carried out in the form of seminars on certain topics organized by students, with the speakers and all other seminar 'implementers' made up of the students themselves. We call this activity a 'mini-seminar project'. The procedures for this mini-seminar project, in which everyone in the class participates, are as follows:
a. Students were divided into two large groups, each consisting of around 16-17 people. Group division was done in the 12th week, or one month before the semester ended. Details of the task instructions were submitted in writing through Google Classroom, the application in which all students collaboratively engaged, as seen in Figure 1.
c. Among the roles that had to be prepared by each group were: master of ceremonies, seminar speakers, moderators, chair of the event committee, campus officials (who would give speeches), and the audience, who would ask questions and comment on the seminar session.
d. In addition to the above roles, several other students acted as journalists who would report live seminar sessions on their social media, including interviewing seminar speakers and several audience members at the end of the event. After the seminar was completed, this group was also tasked with reporting on the seminar activities on social media, such as YouTube.
e. During the seminar, the lecturer sat in one corner of the seminar room as a non-participating observer to make an assessment. The assessment was done by paying attention to individual performance and overall group performance. Individual performance accounted for 70% of the total rating; the rest was a group performance-based assessment.
f. Before the seminar, we first developed the assessment rubric. The contents of the rubric were adopted from the rubric developed by Schreiber, Paul, and Shibley [20] and the rubric by Ohio State University [21], as described in the next section.
b. Assessment rubric
There are several things that we considered in developing an assessment rubric. First of all, the authenticity aspect of the assessment: the assessment must be done within a real-life atmosphere to enable the students to perform their authentic public speaking skills. Our decision to make an assessment with this mini-seminar model was part of the implementation of the aspect of authenticity in the assessment. This seminar allows students to perform in a real-life-like situation [3].
Second, the aspect of public speaking. This is the core of the assessment in the rubric developed, because this class is a speaking course for advanced students with the main purpose of the learning process being to develop their public speaking skills in a number of situations, as discussed above. Several public speaking skills are included here to be assessed, including the topic delivered, presentation structure (organisation), engagement with the audience, non-verbal behavior, voice/tone clarity, and language quality.
Third, the aspect of teamwork or collaboration. The principle of collaboration and/or cooperation is an important part of what should be developed in our education today. UNESCO, for example, has long included the principle of cooperation in its 21st-century education vision [22], with collaboration included in the principle of learning how to live together, which UNESCO has emphasized in addition to other principles such as learning how to be, learning how to learn, and learning how to do.
Bernhardt [23] points out that in the context of the 21st-century education paradigm, collaboration has emerged as an important competency that must be developed by teachers in schools, including in universities. He further reminds us, "schools need to ensure students work collaboratively, base learning on authentic experience, incorporate multiple forms of representation, and stress fluency in multiple medias" (p.1).
Fourth is the aspect of creativity. The ability to create is also an important competency that teachers must develop. This is not only relevant to 21st-century competencies but also to Bloom's revised taxonomy [24], which is now often used as a reference in Indonesia for sequencing classroom tasks and activities based on cognitive load increment. One form of revision is in the cognitive domain, where the thinking abilities of analysis and synthesis are integrated into analysis only. The number of six categories in the previous concept did not change because Anderson included a new category, namely creating, which did not exist before. This is where creativity becomes very important to develop in the learning process, and it formed an important part of the assessment rubric. The final form of the rubric that was developed can be seen in Table 1.
IMPLEMENTATION
The mini-seminar was held in the sixteenth week, which was the last session of the course in the current semester. However, students were given three weeks to prepare. This preparation included the time to design the seminar program, determine the theme of the seminar, divide roles, prepare presentation slides, speech concepts, and the time to do the rehearsals. Preparations were also done in technical aspects, such as preparing the room, making invitations to potential audiences, making banners, and other technical matters.
Overall, during the seminar day, both groups performed very well. They prepared the event enthusiastically and in harmony, following the guidelines as expected. One group presented a seminar with the theme 'anti-bullying campaign' as displayed in Figure 2, while the other group presented a talk-show inviting a young figure who was successful in entrepreneurship. Each speaker delivered the topic of the seminar / talk show for about ten minutes.
The seminar was led by two masters of ceremonies who guided the event with clear instructions on the proceedings. This was followed by a speech from the project leader, from the study program coordinator, and finally from the 'dean' of the faculty. The event was then officially opened by the University of Riau's 'Chancellor'. As mentioned above, all these roles were play-acted by students themselves. After the presentation, the activity was followed by a question and answer session and discussion with attendees, who were all students from the Speaking 3 class as well as some students from other classes. The event was closed by giving souvenirs to the speaker(s) and a photo session, as seen in Figure 3. All seminar processes were carried out completely in English. In addition, the media team worked on the program, interviewing speakers and seminar/talk show participants. The audience members were asked about their impressions of the event and also the general messages they wanted to convey, including feedback on the performances. Interviews and the coverage of this event were then published on social media such as YouTube, as shown in Figure 4. This coverage was part of the exam assessment, especially related to how to be a journalist (news reporting), one of the skills taught in the Speaking 3 course.
POST ASSESSMENT REFLECTIONS
In terms of setting up and explaining the task, distributing and assigning roles, and conducting the task, the assessment project was overall well received and implemented. As intended, the seminar was successfully conducted as a tool to assess students' speaking performance in a real setting, involving the participation of all students in the class. As previously discussed, the students used their English public speaking skills by enacting the various assigned roles during the seminar. They did so through the performance of certain aspects of public speaking, such as speech and presentation skills during the seminar, in line with the assessment rubric, as seen in Table 1. The project kept the public speaking scenario real and contextual, as authentic assessment tasks are expected to do [25].
In addition to assessing students' public speaking performance, this project also integrated a number of soft skills into the assessment process. These soft skills include students' skills in collaboration and creativity, essential 21st-century skills. The assessment task facilitated the conditions under which they learned how to work as a team in planning and making a scenario for their seminar project. The task challenged them to exercise the higher-order cognitive skills of creative activities. They also learnt the content while doing the task.
The project not only served as a tool for testing (assessment of learning), in the sense that students received grades, but also as a medium of learning (assessment for learning) [26,27], in the sense that this involves self- and peer assessment. In addition, the mini-seminars themselves acted as a space where all students from the class improved their speaking skills through their participation in the various assigned roles, thereby making the task an assessment-as-learning task (Earl, 2003). The latter two types of assessment were evident in the students' social media posts, where they wrote about the enjoyable and collaborative ways in which they fulfilled the activity. They did not seem to feel the typical psychological problems such as anxiety and nervousness often experienced by test takers in other kinds of assessment (see Chapell et al., 2005; Nelson, 2016). This exam experience would probably last in their memory as an enjoyable and engaging learning experience. Despite the positive feedback and encouraging scores, we realised that this assessment model has room for further improvement. One area is probably the rubric descriptors, which need more comprehensive indicators for assessing students' individual performances. This is especially important as every student plays a different role during the seminar. To address the issue of fairness, for instance, the parameters should be made different for each role. The fact that the students were given quite a long time for rehearsal would probably affect the 'originality' of their real speaking skills. Their speaking performance might be different if, for instance, they were asked to speak in an impromptu or extempore situation.
CONCLUSION
Apart from the several weaknesses outlined above, we found this mini-seminar project to be a doable alternative authentic assessment model that can be applied in a speaking class which focuses on, among other issues, the development of students' public speaking skills. This mini-seminar project is recommended not only because it can be used as an alternative assessment model, but also because it encourages students to work together in teams and to work creatively, creating something new in order to perform better. These two competencies, collaboration and creativity, are among the competencies that teachers and lecturers must develop in the classroom so that students can have 21st-century skills to successfully respond to the challenges of today's life.
|
2020-10-19T18:05:34.682Z
|
2020-11-01T00:00:00.000
|
{
"year": 2020,
"sha1": "24988f2cb1fec1faf5b18a573522fa3cb44753b2",
"oa_license": "CCBY",
"oa_url": "http://edulearn.intelektual.org/index.php/EduLearn/article/download/16429/pdf_448",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "104ec181039400f6c6a2f734318dc7660f16dedb",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
}
|
251817254
|
pes2o/s2orc
|
v3-fos-license
|
Cathepsin L promotes chemoresistance to neuroblastoma by modulating serglycin
Cathepsin L (CTSL), a lysosomal acid cysteine protease, is found to play a critical role in chemosensitivity and tumor progression. However, the potential roles and molecular mechanisms of CTSL in chemoresistance in neuroblastoma (NB) are still unclear. In this study, the correlation between clinical characteristics, survival and CTSL expression was assessed in the Versteeg dataset. Chemoresistance to cisplatin or doxorubicin was detected using the CCK-8 assay. Western blot was employed to detect the expression of CTSL, multi-drug resistance proteins, autophagy-related proteins and apoptosis-related proteins in NB cells while knocking down CTSL. Lysosome staining was analyzed to assess the expression levels of lysosomes in NB cells. The expression of apoptosis markers was analyzed with immunofluorescence. Various datasets were analyzed to find potential proteins related to CTSL. In addition, a subcutaneous tumor xenograft model in M-NSG mice was used to assess tumor response to CTSL inhibition in vivo. Based on the validation dataset (Versteeg), we confirmed that CTSL served as a prognostic marker for poor clinical outcome in NB patients. We further found that the expression level of CTSL was higher in SK-N-BE (2) cells than in IMR-32 cells. Knocking down CTSL reversed the chemoresistance of SK-N-BE (2) cells. Furthermore, the combination of CTSL inhibition and chemotherapy potently blocked tumor growth in vivo. Mechanistically, CTSL promoted chemoresistance in NB cells by up-regulating the multi-drug resistance proteins ABCB1 and ABCG2 and inhibiting the autophagy level and cell apoptosis. Furthermore, we examined six datasets and found that Serglycin (SRGN) expression was positively associated with CTSL expression. CTSL could mediate chemoresistance by up-regulating SRGN expression in NB cells, and SRGN expression was positively correlated with poor prognosis of NB patients. Taken together, our findings indicate that CTSL promotes chemoresistance to cisplatin and doxorubicin by up-regulating the expression of multi-drug resistance proteins and inhibiting the autophagy level and cell apoptosis in NB cells. Thus, CTSL may be a therapeutic target for overcoming chemoresistance to cisplatin and doxorubicin in NB patients.
Introduction
Neuroblastoma (NB) is the most common extracranial solid tumor in children. NB accounts for 10% of all childhood cancers and also 15% of pediatric cancer deaths (Tolbert and Matthay, 2018). The main available treatment strategy for NB patients is chemotherapy followed by surgical resection. Cisplatin (DDP) and doxorubicin (ADM) are common chemotherapeutics for NB (Jemaà et al., 2020). However, the clinical efficacy of these drugs is limited by chemoresistance, which is the main cause of treatment failure in NB patients (Rodrigo et al., 2021). Thus, it is important to research the molecular mechanisms of chemoresistance in NB and find potential targets to overcome it.
Various molecular mechanisms have been implicated in chemoresistance. The ATP-binding cassette transporter (ABC) family of transmembrane proteins is associated with chemoresistance by promoting drug efflux. Among these, ABC transporter B1 (ABCB1/MDR1/P-glycoprotein), ABC transporter C1 (ABCC1/MRP1) and breast cancer resistance protein (ABCG2/BCRP) are closely related to poor platinum sensitivity (Domenichini et al., 2019). Resistance to apoptosis can also cause chemoresistance. Mutations, amplifications and overexpression of the genes encoding the anti-apoptotic BCL-2 family members and inhibitor of apoptosis proteins (IAPs) have been reported associated with chemoresistance of cancers. Autophagy is another mechanism of chemoresistance by promoting cancer cell survival during metabolic stresses induced by anticancer agents. Besides, alterations in drug metabolism, DNA damage repair, epigenetic changes, mutation of drug targets and the influence of tumor microenvironment may also contribute to chemoresistance of cancers (Holohan et al., 2013).
The ability of cancer cells to maintain an internal stasis is a critical characteristic of a neoplasm (Li et al., 2017). The important role of lysosomes in cellular stasis has been identified in many studies (Yang and Wang, 2021). The lysosome is connected to chemoresistance, cellular adaptation, immune response and cell death (Hraběta et al., 2020). Lysosomes contain more than 60 hydrolytic enzymes, including proteases and lipases (Davidson and Vander Heiden, 2017). Tumor stasis is a multidimensional process that is adjusted by cellular proteins, including the cathepsin family of proteases, protein-protein interactions, alternative splicing and expression of miRNAs (Mijanović et al., 2019). Cathepsins play important roles in malignant tumors. Cathepsin L (CTSL) is a cysteine protease which has been reported to be linked to tumor occurrence, development, and metastasis (Sudhan and Siemann, 2015). CTSL up-regulation has been identified in many human malignancies including gastric (Pan et al., 2020) and lung cancers. Significantly, CTSL has important roles in regulating cancer chemoresistance. Our previous study found that a CTSL up-regulation-induced EMT phenotype was associated with the acquisition of DDP or paclitaxel resistance in A549 cells (Han et al., 2016). However, the roles and mechanisms of CTSL in NB chemoresistance are still unclear and need to be studied further.
In the present study, we demonstrated that CTSL was a regulator of poor DDP and ADM sensitivity in NB cells, and the regulation of chemoresistance by CTSL was mediated through its effects on ABC proteins, autophagy and cell apoptosis. These findings indicate that CTSL may represent a novel therapeutic target to overcome poor DDP and ADM sensitivity in NB patients.
Validation of human datasets
Tissue array analysis results of NB patient tumor samples were obtained from the R2 Genomics Analysis and Visualization Platform (http://r2.amc.nl) using the following publicly available dataset: Versteeg (GEO: GSE16476) (Zhang H. et al., 2020), which included comprehensive information on the relevant clinical and prognostic factors selected for analysis. For Kaplan-Meier analysis, the best p value and corresponding cutoff value was selected according to the R2 Genomics Analysis and Visualization Platform.
Six datasets were examined at the same time, and different colors represented different datasets in the box plot after standardization. According to the criteria −0.6 < R < 0.6 and p < 0.001, and ranking in the top 20 of the datasets, the eligible common genes in the six databases could be screened out. Then the Spearman correlation analysis between the selected gene and CTSL was performed.
Cell lines and culture
The human NB cell lines, SK-N-BE (2) and IMR-32 were obtained from the Type Culture Collection of the Chinese Academy of Sciences, Shanghai, China. SK-N-BE (2) and IMR-32 cells were cultured in a 1:1 mixture of MEM and DMEM/F-12 with 10% fetal bovine serum, 1% Penicillin-Streptomycin. These cells were placed in an incubator with 5% CO₂ at 37°C and passaged every 72 h.
CCK-8 assay
The Cell Counting Kit-8 (CCK-8) assay was used to measure the viability and proliferation of cells. SK-N-BE (2) and IMR-32 cells in logarithmic growth phase, at a concentration of 5 × 10⁴/ml, were seeded into 96-well culture plates at 100 μl/well, cultured overnight in an incubator with 5% CO₂ at 37°C, and treated the next day. After pretreatment with different concentrations of DDP (HY-17394, MedChemExpress, Shanghai) or ADM (HY-15142A, MedChemExpress, Shanghai) for 24 h, 10 μl of CCK-8 solution was added to each well and incubated for 4 h at 37°C. The optical density was measured at 450 nm. All assays were performed in triplicate.
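For readers unfamiliar with how an IC50 is obtained from such dose-response data, the sketch below fits a four-parameter log-logistic curve in base R. The doses and viability values are made-up placeholders, and the study itself analyzed its data with GraphPad Prism, so this is only an illustrative assumption of one possible workflow.

```r
# Hypothetical illustration of estimating an IC50 from CCK-8 viability data.
dose      <- c(0.1, 0.5, 1, 2, 5, 10, 20)                  # drug concentration (arbitrary units)
viability <- c(0.98, 0.95, 0.85, 0.70, 0.45, 0.25, 0.12)   # OD450 relative to untreated control

# Four-parameter log-logistic fit; ic50 is the dose giving half of the response span.
fit <- nls(viability ~ bottom + (top - bottom) / (1 + (dose / ic50)^hill),
           start = list(bottom = 0.1, top = 1, ic50 = 4, hill = 1))
summary(fit)
coef(fit)["ic50"]   # estimated IC50
```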
For transfection, siRNA was mixed with Lipofectamine ® 3000 Reagent (Invitrogen) and then transfected into SK-N-BE (2) or IMR-32 cells. After 6 h, the supernatant was replaced with fresh medium containing 10% FBS and cultured for another 24 h. Three siRNA sequences were used for transfection (Table 1).
Cathepsin L overexpressing cell line establishment
A lentivirus carrying the CTSL gene was constructed by GeneChem (Shanghai, China). IMR-32 cells were seeded in 6-well plates and then infected with the lentivirus according to the protocols recommended by the manufacturer. After 16 h, the medium was replaced with complete medium. In order to obtain a stable CTSL-overexpressing cell line, the lentivirus-infected cells were selected by incubation with complete medium containing 2 μg/ml puromycin. The expression of CTSL in IMR-32 cell lines stably infected with the lentivirus was examined by Western blot.
Western blot analysis
The detailed procedure was as described in a previous study. The primary anti-human antibody against SRGN was purchased from Santa Cruz Biotechnology. CTSL, ABCB1, ABCG2, LC3, Bax, Bcl-2, and GAPDH primary anti-human antibodies were all purchased from Cell Signaling Technology.
Real-time quantitative PCR
Detailed procedure for these steps has been previously reported (Gu et al., 2018). The primer sequences employed for the PCR analysis were listed in Table 2. All primers were synthesized by Sangon Biotech (Shanghai, China).
Immunofluorescence
Cells were cultured on glass coverslips and then fixed with 4% paraformaldehyde. After a PBS wash, the cells were permeabilized with 0.1% Triton X-100, incubated in a blocking solution (PBS with 3% bovine serum albumin), then further incubated overnight at 4°C with the primary antibodies to Bcl-2, SRGN (Santa Cruz Biotechnology) and CTSL (Abcam). The fluorescent conjugated secondary antibodies were Alexa Fluor 488 and Alexa Fluor 594 (Invitrogen, Carlsbad, California, United States), and 4′,6-diamidino-2-phenylindole (DAPI) (Sigma Aldrich, St Louis, MO) was employed as a nuclear counterstain for 10 min. The coverslips were finally mounted onto slides with fluorescent mounting medium and immediately observed by confocal microscopy.
Lysosome staining
After removing the cell culture solution, the lysosome green fluorescent probe staining solution (Beyotime, Nantong, China) was added, and the lysosome green fluorescent probe solution and the cell culture solution were mixed at a ratio of 1/20,000, and then incubated with cells in an incubator with 5% CO₂ at 37°C for 60 min. The staining solution was discarded and new cell culture solution was added. Finally the cells were observed under the laser confocal microscope.
Apoptotic assays
Apoptotic cell death was assessed by flow cytometry using the AnnexinV-fluorescein isothiocyanate (FITC)/propidium iodide (PI) Apoptosis Detection Kit (KeyGEN). Cells were harvested and resuspended in binding buffer, then labeled with Annexin V-FITC/PI reagent for 15 min in the dark at room temperature. For each analysis, a minimum of 20,000 cells per sample were analyzed using a Beckman Coulter FACS machine. Results were analyzed and calculated by FlowJo V.10 software; the percentage of apoptosis was obtained from the sum of (Annexin V-FITC+/PI−) and (Annexin V-FITC+/PI+) cells.
Animal experiments
All mice experiments were conducted in accordance with the humane treatment of animals under institutional guidelines approved by the Ethical Committee of Children's Hospital of Soochow University. The mice were housed in individually ventilated cages in the Animal Laboratory of the Children's Hospital of Soochow University. Six-week-old male M-NSG (NOD-Prkdc^scid Il2rg^em1/Smoc) mice (Shanghai Model Organisms, Shanghai) were used in the study. Subcutaneous tumor transplantation was conducted using SK-N-BE (2) cells. Cells (n = 1 × 10⁷) were resuspended in 100 μl PBS and implanted into the right flank of nude mice under sterile conditions. After the formation of palpable tumors (tumor volume reached 100 mm³), mice were randomized into four groups (5 mice per group): control group (saline, i.p.), Z-FY-CHO (a specific CTSL inhibitor, HY-128140, MedChemExpress, Shanghai) group (5 mg/kg, i.p.), ADM group (1 mg/kg, i.p.), and Z-FY-CHO (5 mg/kg, i.p.) plus ADM (1 mg/kg, i.p.) group. Mice were injected with vehicle or with drugs three times weekly. The size of the tumor and the body weight of each mouse were measured as described previously. Mice were sacrificed on day 15, and tumor tissues were harvested.
Statistical analysis
All measurement data were expressed as the mean ± S.D., and at least three independent experiments were conducted. Differences in measured variables between the experimental and control groups were assessed by Student's t-test. The chi-square (χ²) test or Fisher exact test was used to compare qualitative variables. Differences were considered statistically significant at p values of <0.05. All analyses were performed using GraphPad Prism 5.0.
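A minimal R analogue of these comparisons is sketched below for orientation; the study itself used GraphPad Prism 5.0, and the numbers shown are placeholders rather than study data.

```r
# Illustrative two-group comparison and 2 x 2 contingency tests, mirroring the
# statistical approach described above. Values are invented for the example.
ctrl    <- c(0.92, 0.88, 0.95)                 # e.g., relative viability, NC group
si_ctsl <- c(0.55, 0.61, 0.58)                 # e.g., relative viability, si-CTSL group
t.test(ctrl, si_ctsl, var.equal = TRUE)        # Student's t-test

tab <- matrix(c(12, 8, 5, 15), nrow = 2)       # hypothetical 2 x 2 table of counts
chisq.test(tab)                                # chi-square test
fisher.test(tab)                               # Fisher exact test
```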
Results
High expression of cathepsin L was positively associated with poor prognosis of NB patients

To figure out the relationship between the expression of CTSL and the prognosis of NB patients, we analyzed the public dataset Versteeg to examine the correlation between the mRNA expression of CTSL and the survival rates in NB. In the Versteeg dataset, patients with high CTSL mRNA levels (cut-off: 615.00) showed significantly poorer overall survival (OS) (22-year OS; p = 0.0071; Figure 1A) and poorer recurrence-free survival (RFS) (22-year RFS; p = 0.0161; Figure 1B). Besides, increased CTSL expression was positively associated with advanced tumor stages, though there was no statistical significance (Supplementary Figure S1). In general, these results suggested that high expression of CTSL was positively correlated with poor prognosis of NB patients and that CTSL might be a potential prognostic marker in NB.
The expression of cathepsin L was associated with poor ADM and DDP sensitivity in NB cells
To determine whether CTSL expression might be related to chemoresistance in NB cells, the protein and mRNA levels of CTSL in two subtypes of NB cells were analyzed. Western blot and PCR analysis showed that the protein and mRNA levels of CTSL were low in IMR-32 cells and high in SK-N-BE (2) cells (Figures 2A,B). The CCK-8 assay was performed to detect the 50% inhibitory concentration (IC50) of the two cell lines after treatment with a gradient concentration of ADM and DDP for 24 h. It turned out that the IC50 values of SK-N-BE (2) cells under ADM and DDP treatment were higher than those of IMR-32 cells (p < 0.001; Figures 2C,D). These results suggested that SK-N-BE (2) cells were more chemoresistant to ADM and DDP than IMR-32 cells and that CTSL expression was associated with poor ADM and DDP sensitivity in NB cells.
FIGURE 1
Upregulation of CTSL correlated with poor prognosis in patients with NB (A) Kaplan-Meier analysis of overall survival (OS) was determined according to CTSL expression in 88 NB samples from Versteeg dataset (B) Recurrence-free survival (RFS) was also determined according to CTSL expression from Versteeg dataset. Statistical analysis was carried out using a log-rank test.
FIGURE 2
The expression of CTSL was associated with ADM and DDP sensitivity in NB cells (A) Western blot analysis and (B) real-time quantitative PCR analysis of CTSL levels in SK-N-BE (2) cells and IMR-32 cells. Cell viability curves for the two NB cell lines after ADM (C) and DDP (D) treatment were evaluated using the cell counting kit-8 assay (upper panel). The IC50 values were analyzed using the Mann-Whitney test (lower panel). *, p < 0.05, ***, p < 0.001.
Cathepsin L down-regulated ADM and DDP sensitivity in NB cells

We further researched whether chemoresistance was modulated by CTSL in NB cells. Western blot showed that all three selected siRNAs could significantly down-regulate CTSL expression in SK-N-BE (2) cells; we chose si-CTSL (450), which had the best interference effect, to knock down CTSL expression in SK-N-BE (2) cells (Figure 3A). The CCK-8 assay was performed to detect the IC50 of SK-N-BE (2) cells treated with a gradient concentration of ADM and DDP for 24 h, in reference to si-CTSL (450) and a specific CTSL inhibitor, Z-FY-CHO. We found that the IC50 values of the si-CTSL (450) group and the Z-FY-CHO group under the action of various concentrations of ADM and DDP were lower than those of the NC group and the control group, which showed that the chemosensitivity of SK-N-BE (2) cells to ADM and DDP increased after CTSL inhibition (Figures 3B,C; Supplementary Figures S2A,B). Considering that CTSL may be closely related to the development of chemoresistance, a lentivirus carrying the CTSL gene was constructed and infected into IMR-32 cells (Supplementary Figure S3A). Overexpression of CTSL rendered them resistant to ADM and DDP, as indicated by the CCK-8 assay in comparison with LV-Vector cells, confirming the chemoresistant role of CTSL in NB cells (Supplementary Figures S3B,C). Then, the 50% lethal doses of ADM and DDP were selected to treat SK-N-BE (2) cells. The morphological changes of the NC group and the si-CTSL (450) group after drug treatment were observed. In the NC group, the cells were inhibited to some extent and the number of cells decreased after administration of ADM or DDP, but some cells still survived. In the si-CTSL (450) group, the cell morphology changed from fusiform to round after administration of ADM or DDP, and the lethality to cells also increased greatly (Figure 3D). We further investigated the effect of CTSL on chemoresistance of NB in vivo.

FIGURE 3
Mice were injected with ADM or with Z-FY-CHO three times weekly. The tumor volumes were measured and the nude mice were weighed every 2 days until the mice were sacrificed. The curves are shown respectively. NS: not statistically significant, *, p < 0.05, **, p < 0.01, ***, p < 0.001.
The subcutaneous tumor xenograft model was established using SK-N-BE (2) cells, and the mice were then treated with Z-FY-CHO or ADM. As shown in Figure 3E, tumor volume was 2345.20 ± 561.75 mm3 in the control group, 1850.10 ± 255.30 mm3 in the Z-FY-CHO group, 1124.80 ± 343.89 mm3 in the ADM group, and 344.60 ± 156.03 mm3 in the Z-FY-CHO plus ADM group. The average volumes of tumors were remarkably decreased in the Z-FY-CHO plus ADM group compared to the ADM group (p < 0.05), which was consistent with the results of the experiments in vitro. The weights of the mice were also recorded. As shown, there was no significant difference in body weight among those groups (Figure 3F). The results of the tumor xenograft experiment confirmed that CTSL inhibition down-regulated the chemoresistance of NB.
Cathepsin L induced chemoresistance to ADM and DDP in NB cells by up-regulating the expression of multi-drug resistance proteins and inhibiting the autophagy level
To explore the mechanisms of chemoresistance in NB cells with high CTSL expression, Western blot was used to detect the expression levels of ABCB1 and ABCG2 in the NC group and si-CTSL group. It turned out that the expression levels of ABCB1 and ABCG2 were lower in the si-CTSL group than in the NC group (Figure 4A; Supplementary Figures S4A,B). The results showed that knockdown of CTSL down-regulated the expression of multi-drug resistance proteins. Further, to figure out whether CTSL expression was associated with autophagy in NB cells, we observed lysosome staining in the NC group and si-CTSL (450) group and found that the green fluorescent spots in the si-CTSL group were more numerous and brighter compared with the NC group (Figure 4B). This finding indicated that the autophagy level of the si-CTSL group may be higher than that of the NC group. Furthermore, Western blot was employed to detect the expression of the autophagy-related protein LC3-Ⅱ in the NC group and si-CTSL group at the same concentrations of ADM and DDP. It turned out that the expression of LC3-Ⅱ in the si-CTSL group was higher than that in the NC group (Figure 4C; Supplementary Figures S4C,D). These results suggested that CTSL down-regulated ADM and DDP sensitivity in NB cells by inhibiting the autophagy level. Thus, all these findings indicated that CTSL could induce chemoresistance to ADM and DDP in NB cells by up-regulating the expression of multi-drug resistance proteins and inhibiting the autophagy level.
Cathepsin L induced chemoresistance to ADM and DDP in NB cells by inhibiting cell apoptosis
At present, it is widely accepted that anti-apoptosis is a potent inducer of chemotherapy failure. To further characterise the mechanism underlying chemoresistance in NB, we evaluated the apoptotic response with flow cytometry in SK-N-BE (2) cells with different treatments. Apoptosis levels did not significantly increase in the si-CTSL (450) group compared with the NC group (Supplementary Figure S5A); by contrast, a remarkable increase in the number of late apoptotic cells was observed in the si-CTSL (450) plus ADM group compared with the ADM group (p < 0.001). Then, the expression levels of Bcl-2 protein and Bax protein were detected by western blot to determine whether knockdown of CTSL could induce cell apoptosis. We found that the expression level of the anti-apoptotic protein Bcl-2 in the si-CTSL group under the action of ADM and DDP at the same concentrations was lower than that of the NC group, and the expression level of the apoptosis-related protein Bax in the si-CTSL group under the action of ADM and DDP at the same concentrations was higher than that of the NC group (Figure 5A; Supplementary Figures S5B,C), which indicated that knocking down CTSL induced apoptosis of SK-N-BE (2) cells. Then, using a laser confocal microscope, we found that the fluorescence intensity of Bcl-2 protein in the si-CTSL group was weaker and its area smaller than in the NC group under the action of ADM and DDP at the same concentrations (Figure 5B). These results suggested that CTSL down-regulated ADM and DDP sensitivity in NB cells by inhibiting cell apoptosis.
Cathepsin L induced chemoresistance by up-regulating the expression of serglycin in NB cells
To further explore the downstream targets of CTSL in NB cells, six datasets were observed at the same time and showed that only Serglycin (SRGN) expression was associated with the expression levels of CTSL (−0.6 < R < 0.6 and p < 0.001; Figure 6A). Preliminary data analysis and outlier identification were performed using principal component analysis (PCA). PCA results before batch removal for multiple datasets showed that the three datasets were separated without any intersection, while PCA results after batch removal showed the intersection of the three datasets, which could be used as a batch of data for subsequent analysis (Figure 6B). Then, Spearman's correlation analysis showed that CTSL expression was positively correlated to SRGN expression (r: 0.19; P: 0.01577; 95% CI: 0.10-0.27; Figure 6C). The Versteeg dataset also demonstrated that patients with high SRGN mRNA levels (cut off: 890.90) and high CTSL mRNA levels showed significantly poor OS (22-year OS; p = 0.0014; Figure 6D) and poor PFS (22-year PFS; p = 0.0430; Figure 6E). Next, we detected the expression levels of SRGN in si-CTSL NB cells to confirm our findings. The protein and mRNA levels of SRGN were lower in the si-CTSL group than those in the NC group (Figures 6F,G). To further investigate the potential links between CTSL and SRGN, we overexpressed CTSL in IMR-32 cells through lentivirus transduction. As shown in Figure 7A, CTSL overexpression increased SRGN expression in IMR-32/LV-Over-CTSL cells. Then, we found that high co-expression of CTSL with SRGN was shown in IMR-32/LV-Over-CTSL cells, as detected by immunofluorescence staining (Figure 7B). To further examine the effect of SRGN on CTSL-induced chemoresistance, we suppressed SRGN by transfecting IMR-32/LV-Over-CTSL cells with three selected si-RNAs. Western blot results suggested that si-SRGN (295) inhibition evidently decreased the SRGN expression (Figure 7C). To elucidate the underlying mechanism, we performed a recovery experiment in IMR-32/LV-Over-CTSL cells which were transfected with si-SRGN (295). The results showed that the expression of SRGN was decreased in both the si-CTSL (450) group and the si-SRGN (295) group compared with the NC group (p < 0.001; Figure 7D). More importantly, we found that the IC50 values of the si-CTSL (450) group and si-SRGN (295) group under the treatment of ADM were significantly lower than that of the NC group, which showed that the chemosensitivity of IMR-32/LV-Over-CTSL cells was mediated by CTSL-SRGN regulation (Figure 7E). All results above indicated that CTSL could mediate chemoresistance by up-regulating SRGN expression in NB cells and SRGN expression was positively correlated with poor prognosis of NB patients.
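For readers unfamiliar with the correlation step, the following sketch shows how a Spearman correlation of the kind reported above can be computed; the expression vectors are made-up placeholders, not the datasets used in the study.

```python
# Illustrative only: Spearman correlation between CTSL and SRGN expression
# across samples. The two vectors below are hypothetical values.
import numpy as np
from scipy import stats

ctsl = np.array([5.1, 6.3, 4.8, 7.2, 6.0, 5.5, 6.8, 4.9])  # CTSL expression (placeholder)
srgn = np.array([2.0, 2.9, 1.7, 3.5, 2.4, 2.2, 3.1, 1.9])  # SRGN expression (placeholder)

rho, p_value = stats.spearmanr(ctsl, srgn)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
```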
Discussion
NB is considered as the most common pediatric solid tumor (Zafar et al., 2021). DDP and ADM are two main chemotherapeutics for NB. However, the chemoresistance of NB is the most critical reason for the failure of clinical chemotherapy (Gao and Wang, 2019). Therefore, it is important to explore the molecular mechanism that affects the chemoresistance of NB and find key targets to overcome it. Our study found that the expression level of CTSL in NB patients was positively correlated with poor prognosis and poor sensitivity of DDP and ADM. Moreover, we also found that CTSL mediated the decrease of DDP and ADM sensitivity of NB cells by up-regulating the expression of multidrug resistance proteins ABCB1 and ABCG2 and inhibiting autophagy and apoptosis. In addition, we also demonstrated that CTSL might up-regulate the expression of SRGN which was positively correlated with the poor prognosis of NB patients.
CTSL is a lysosomal enzyme, which has been demonstrated to be highly expressed in many malignant tumors such as lung cancer, breast cancer and cervical cancer (Han et al., 2016;Mao et al., 2019;Parigiani et al., 2020). CTSL plays a key role in the formation, growth, invasion and migration of malignant tumors (Sudhan and Siemann, 2015). Zhang et al. discovered that CTSL was overexpressed in ovarian cancer and CTSL could induce paclitaxel resistance in ovarian cancer cells (Zhang et al., 2016). Cui et al. also found that CTSL expression was higher in non-small cell lung cancer (NSCLC) cells and that overexpression of CTSL was positively correlated with gefitinib resistance in NSCLC (Cui et al., 2016). These results indicated that CTSL might be closely related to the chemoresistance of cancers.
However, the specific role and mechanism of CTSL in chemoresistance of NB have not been studied clearly. Therefore, our study mainly researched the role and molecular mechanism of CTSL in mediating chemoresistance of NB. Firstly, we selected two different NB cell lines for experiments, and found that SK-N-BE (2) cells showed higher expression of CTSL and were more resistant to ADM or DDP. These results indicated that CTSL might mediate the decrease of ADM and DDP sensitivity in SK-N-BE (2) cells. In order to verify the role of CTSL in mediating chemoresistance, we used CTSL-targeting si-RNAs to knock down CTSL, and the chemoresistance of SK-N-BE (2) cells to ADM decreased by about 1.7 times and that of SK-N-BE (2) cells to DDP decreased by about 2.6 times. Similarly, cotreatment with CTSL inhibition and ADM in M-NSG mice bearing subcutaneous tumors resulted in less tumor growth. To further verify the role of CTSL in regulating the decrease of the sensitivity to ADM and DDP, we detected the expression levels of the multidrug resistance proteins ABCB1 and ABCG2 by western blot and found that ABCB1 and ABCG2 expression were decreased in the si-CTSL group, which indicated that the sensitivity of SK-N-BE (2) cells to ADM and DDP was significantly increased after si-RNA interference of CTSL. According to the above results, it was demonstrated that CTSL was involved in regulating the decrease of ADM and DDP sensitivity in SK-N-BE (2) cells.
To explore the mechanisms of CTSL in the chemoresistance of SK-N-BE (2) cells, the autophagy of cells was observed from the cell morphology, and the green fluorescent spots of SK-N-BE (2) cells in the si-CTSL group were brighter and more numerous, indicating that the autophagy level was higher. With the autophagy protein LC3-II as an index, we observed that the autophagy level of SK-N-BE (2) cells in the si-CTSL group increased significantly under the treatment of ADM and DDP. After si-RNA interference of CTSL, the expression level of the pro-apoptotic protein Bax increased and the expression level of the anti-apoptotic protein Bcl-2 decreased under the treatment of ADM and DDP. The above results indicated that CTSL could mediate the chemoresistance of NB by regulating the autophagy and apoptosis of SK-N-BE (2) cells.
SRGN is a low molecular weight glycoprotein, which plays a critical role in the storage and secretion of some chemokines, cytokines and proteases; thus, it participates in many physiological and pathological processes (Zhu et al., 2021). SRGN has been demonstrated to be overexpressed in many cancers and is closely related to the occurrence and development of tumors (Zhang et al., 2017;Xie et al., 2021;Zhu et al., 2021). Guo et al. found that SRGN was highly expressed in NSCLC and its interaction with CD44 could promote the metastasis of NSCLC (Guo et al., 2020). Moreover, Zhang et al. discovered that the crosstalk of SRGN and the transcriptional coactivator YES-associated protein mediated the chemoresistance and stemness in breast cancer cells by regulating the expression of HDAC2 (Zhang Z. et al., 2020). We here found that CTSL could elevate SRGN expression and that the CTSL/SRGN axis induced chemoresistance in NB cells.
The study still has some limitations. Since this study did not construct NB chemoresistant cell lines, we have not been able to clarify the role of CTSL in NB chemoresistant cells, which needs to be further verified by more experiments. Meanwhile, the mechanism by which CTSL regulates SRGN needs to be clarified. As reported before, CTSL can enter the nucleus, and then process and activate certain transcription factors to perform transcription functions (Sudhan and Siemann, 2015). It was reported that SRGN could be transcriptionally regulated in tumor cells (Xu et al., 2018). Therefore, CTSL probably elevated SRGN expression through an indirect intranuclear transcriptional function.
In all, our study explored the role and molecular mechanism of CTSL in regulating the chemoresistance of ADM and DDP in NB cells, and found that CTSL could mediate the chemoresistance of NB by up-regulating the expression of SRGN. Therefore, CTSL seems to play a key role in determining the chemotherapy sensitivity of NB, and it may become an important target for improving the chemosensitivity of NB and the effect of ADM and DDP.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding authors.
Ethics statement
The animal study was reviewed and approved by the Ethical Committee of Children's Hospital of Soochow University. Written informed consent was obtained from the owners for the participation of their animals in this study. Written informed consent was obtained from the individual(s), and minor(s)' legal guardian/next of kin, for the publication of any potentially identifiable images or data included in this article.
Author contributions
WW conceived and designed the study. XD, LD, and SH performed experiments and prepared the preliminary results of the manuscript. FL, YY, and RT performed data analysis and participated in data collection. XD and WW were major contributors in writing the manuscript. XD and ZZ participated in data analysis and interpretation and critically reviewed the manuscript. All authors read and approved the final manuscript.
Funding
This work was supported by the grants from the National Natural Science Foundation of China (Grant No. 82172840, 81703532, and 81902320), Suzhou science and technology development plan project (SYSD2019183), and Gusu Health Talents Project of Suzhou Municipal Health Commission (GSWS2021033). The funders had no role in study design, data collection, analysis or interpretation of the data, preparation of the manuscript or decision to publish the results.
Protein Kinase D1 Has a Key Role in Wound Healing and Skin Carcinogenesis
Protein kinase D (PKD) is a family of stress-responsive serine/threonine kinases implicated in the regulation of diverse cellular functions including cell growth, differentiation, apoptosis, and cell motility. Although all three isoforms are expressed in keratinocytes, their role in skin biology and pathology is poorly understood. We recently identified a critical role for PKD1 during reversal of keratinocyte differentiation in culture, suggesting a potential pro-proliferative role in epidermal adaptive responses. Here, we generated mice with targeted deletion of PKD1 in epidermis to evaluate the significance of PKD1 in normal and hyperplastic conditions. These mice displayed a normal skin phenotype indicating that PKD1 is dispensable for skin development and homeostasis. Upon wounding however, PKD1-deficient mice exhibited delayed wound re-epithelialization correlated with a reduced proliferation and migration of keratinocytes at the wound edge. In addition, the hyperplastic and inflammatory responses to topical phorbol ester were significantly suppressed suggesting involvement of PKD1 in tumor promotion. Consistently, when subjected to two-stage chemical skin carcinogenesis protocol, PKD1-deficient mice were resistant to papilloma formation when compared to control littermates. These results revealed a critical pro-proliferative role for PKD1 in epidermal adaptive responses, suggesting a potential therapeutic target in skin wound and cancer treatment.
INTRODUCTION
Protein kinase D is a family of stress-responsive serine/threonine kinases, involved in the regulation of diverse biological and pathological processes including cell proliferation, differentiation, adhesion, migration, stress-induced cardiac hypertrophy, pathological angiogenesis, tumor cell proliferation and metastasis (Rozengurt, 2011). PKDs are effectors of diacylglycerol (DAG) and PKCs and are activated by a variety of stimuli, including growth factors, neuropeptides, hormones, and phorbol esters (Fu and Rubin, 2011). PKD isoforms (PKD1, PKD2 and PKD3) share highly homologous regulatory subdomains and can be activated by the same stimuli; however, they also have distinct functions based on their level of expression, tissue specificity and their interacting proteins (Fu and Rubin, 2011).
Despite recent progress in understanding the biological functions of PKD enzymes and their involvement in disease processes, the role of PKD in skin biology and pathology is poorly understood. PKD1 is the most studied member of the family. Earlier studies have shown correlation of PKD expression with proliferation status of cultured keratinocytes suggesting a pro-proliferative role for PKD1 in normal keratinocytes (Ernest Dodd et al., 2005;Rennecke et al., 1999). In addition, PKD expression was shown to be up-regulated in mouse carcinomas and human basal cell carcinomas, although the functional significance of PKD activation or up-regulation in these processes was not determined (Rennecke et al., 1999;Ristich et al., 2006). However, these biochemical and immunohistochemical studies have been difficult to interpret because of the presence of antibody cross-reactivity and a failure to distinguish between individual isoforms.
We have recently shown a critical pro-proliferative role for PKD1 in differentiated cultures of epidermis during de-differentiation in response to a low calcium switch (Jadali and Ghazizadeh, 2010). Specific knockdown of PKD1 to 20% of its normal level by RNA interference was sufficient to block re-initiation of proliferation and reversal of differentiation without affecting normal proliferation and differentiation of mouse keratinocytes (Jadali and Ghazizadeh, 2010). Notably, neither PKD2 nor PKD3 could compensate for the loss of PKD1 function in this process, suggesting a major role for PKD1 in stress-induced responses in keratinocytes.
Although there is compelling evidence in cell culture demonstrating a unique and critical role for PKD1 in keratinocyte de-differentiation, the in vivo relevance of these findings and the physiological role of PKD1 in skin remain to be determined. In the present study, we generated a conditional knockout of PKD1 in mouse stratified epithelia in order to characterize unique functions of PKD1 in skin. Our results identified a crucial role for PKD1 in wound healing, phorbol ester-induced hyperplasia and skin tumor formation.
Epidermal PKD1 is dispensable for mouse skin homeostasis
Disruption of the mouse pkd1 gene causes embryonic lethality (Fielitz et al., 2008); therefore, to investigate the role of PKD1 in skin epithelia, mice with targeted disruption of PKD1 in keratinocytes (PKD1-cKO) were generated. These mice were carrying three genetic modifications: (i) homozygously floxed pkd1 allele (PKD1 fl/fl ), (ii) Keratin 14 (K14)-Cre, and (iii) a Cre reporter, flox-STOP-flox-ROSA26-YFP. PKD1-cKO mice were born at the expected Mendelian ratio and appeared indistinguishable from their PKD1 fl/fl littermates. As shown in Figure 1, K14-regulated recombination in PKD1-cKO mice was highly efficient and specific, resulting in uniform YFP expression in skin epithelia (Figure 1d, lower panel). Analysis of transcript and protein levels of the three PKD isoforms confirmed efficient and specific loss of PKD1 in PKD1-cKO keratinocytes (Figure 1a-b). A lack of alteration in the expression of PKD2 and PKD3 indicated no compensatory upregulation of these closely related isozymes in the absence of PKD1. The observed residual PKD1 mRNA and protein in PKD1-cKO epidermis is most likely reflective of PKD1 expression in melanocytes and fibroblasts contaminating the primary epidermal cultures.
Previous studies have suggested a pro-proliferative and/or anti-differentiation role for PKD1 in normal keratinocytes (Ernest Dodd et al., 2005). Histological analysis of dorsal skin of adult mice showed no significant difference in skin and hair morphology, or in the number of proliferating epithelial cells (Ki67 staining), between PKD1-cKO and PKD1 fl/fl mice (Figures 1c and 3d). Furthermore, analysis of K14, an epidermal basal cell marker, and of differentiation markers including K10, involucrin (INV) and loricrin (LOR) by either immunofluorescent (Figure 1d) or western blot analysis (data not shown) did not indicate any alteration in epidermal proliferation and differentiation. Overall, these data indicated that under normal conditions PKD1 is dispensable for skin development and homeostasis.
Impaired wound healing by PKD1-deficient keratinocytes
PKD1 is a stress-responsive kinase and has been implicated in cell proliferation and motility, suggesting a potential role during wound healing (Olayioye et al., 2013). To investigate the role of epidermal PKD1 in wound healing, the dorsal skin of PKD1-cKO and age- and sex-matched PKD1 fl/fl mice was wounded with one 6 mm circular, splinted, full-thickness excisional wound, and monitored daily. As shown in Figure 2a, the kinetics of wound healing in PKD1-cKO mice was slightly but significantly slower than control. Histological analysis of wounds at 10 days post-wounding indicated the presence of a migrating tongue (Figure 2b, arrows) with a gap averaging 0.96±0.47 mm (n=3) in PKD1-deficient wounds, while control wounds were completely re-epithelialized (Figure 2b). Immunohistochemical analysis of 7-day-old wounds, when epidermal hyperplasia and a migrating tongue of keratinocytes are present in both groups (Figure 2c, arrows), demonstrated a lower proliferation rate (BrdU labeling index) for PKD1-deficient keratinocytes at the wound edge when compared to the control (Figure 2c-d). These data indicated a correlation between delayed wound healing and reduced proliferative response of keratinocytes in PKD1-cKO mice.
To confirm the involvement of PKD1 in wound re-epithelialization independent of wound inflammation and contraction, a skin explant culture assay that mimics the behavior of keratinocytes at the edge of skin wounds was used (Mazzalupo et al., 2002). As shown in Figure 2e-f, the areas of keratinocyte outgrowths were significantly smaller in PKD1-cKO explants (25±8 mm2) when compared to that of PKD1 fl/fl (52±12 mm2), confirming a role for PKD1 in wound re-epithelialization. Re-epithelialization of skin wounds results from increases in both mitotic activity and migration of keratinocytes at the wound edge (Gurtner et al., 2008). To determine if keratinocyte migration was affected by the loss of PKD1, control or PKD1-cKO explants were treated at 48 hours post-seeding with mitomycin C to irreversibly block mitosis, and the area of outgrowth was measured 5 days later. As shown in Figure 2g, although mitomycin treatment resulted in a significant reduction in the outgrowth area in both control and PKD1-deficient explants, the effects were more pronounced on the latter (45% in PKD1-deficient vs. 37% in control), indicating a defect in migration of PKD1-deficient keratinocytes. These data supported pro-proliferative and pro-migratory roles for PKD1 during wound healing.
PKD1 is a major mediator of TPA-induced epidermal hyperplasia and inflammation
To determine the significance of PKD1 in epidermal hyperplastic responses to other stimuli, the response of PKD1-deficient epidermis to a tumor promoter, 12-O-tetradecanoylphorbol-13-acetate (TPA), was examined. TPA is a DAG analogue and a known inducer of PKD activation in mouse keratinocytes (Ernest Dodd et al., 2005;Jadali and Ghazizadeh, 2010). Topical application of TPA is known to induce hyperplasia and inflammation, and is necessary for skin tumor development in two-stage chemical carcinogenesis (Rundhaug and Fischer, 2010). To determine the potential role of PKD1 in TPA-induced mitogenic responses in skin, PKD1-cKO and PKD1 fl/fl mice were treated with a single dose of TPA or acetone (control), and analyzed 48 hours later. As shown in Figure 3, a single dose of TPA in PKD1 fl/fl skin induced a robust proliferative response leading to a four-fold increase in the number of Ki67-positive keratinocytes and more than a five-fold increase in the epidermal thickness. In PKD1-cKO mice however, these responses were blunted with only a two-fold increase in proliferating basal keratinocytes and in the epidermal thickness. In addition, analysis of skin sections showed a marked suppression of TPA-induced inflammation in PKD1-cKO mice (Figure 3a). Immunofluorescent analysis of skin sections for S100A9, which is constitutively expressed on monocytes and neutrophils (Lagasse and Weissman, 1992), showed a five-fold reduction in the number of infiltrating leukocytes in PKD1-cKO mice (Figure 3e-f). These data indicate a critical role for PKD1 as a positive regulator of epidermal hyperplasia and inflammation in response to phorbol esters, suggesting a role in tumor promotion.
PKD1-deficient mice are resistant to tumor formation in two-stage chemically-induced skin carcinogenesis
To examine the potential effects of epidermal PKD1 in tumor promotion, two-stage chemical carcinogenesis experiments were carried out in PKD1-cKO and their normal littermates. The use of 7,12-dimethylbenz[α]anthracene (DMBA), to introduce oncogenic mutations primarily on the Hras1 gene, and TPA as a tumor promoter, to allow selective outgrowth of initiated cells, is a well-established chemical carcinogenic treatment that leads primarily to papilloma formation in the skin (Abel et al., 2009). Groups of 15 mice at 7-8 weeks of age were treated with DMBA followed a week later by twice weekly TPA for 20 weeks. Animals were examined weekly to determine tumor incidence and multiplicity. Six weeks after the last TPA treatment, tumors were quantified, harvested and analyzed. As shown in Figure 4, mice lacking PKD1 in the epidermis were refractory to papilloma formation. While all control mice developed tumors by 16 weeks of promotion, more than 60% of PKD1-deficient mice did not develop any tumor during the entire 26 weeks of observation (Figure 4a-b). In addition, the average number and size of tumors in tumor-bearing PKD1-cKO mice were markedly reduced (Figure 4c-d). Histological analysis of tumors revealed that most of the tumors formed in both groups were benign papillomas and keratoacanthomas (data not shown). At 26 weeks, the frequency of malignant conversion of these benign tumors was less than 3% and was restricted to the control mice. Malignant conversion in PKD1-cKO mice however, may have remained undetected because of the lower total number of tumors developed in these mice. The resistance of PKD1-cKO mice to carcinogen-induced tumorigenesis was not the result of increased apoptosis in PKD1-deficient keratinocytes, as the number of apoptotic keratinocytes following 24 hrs of DMBA treatment was comparable between the two groups (data not shown). These data identified PKD1 as the key transducer of the tumor promoting effects of TPA in chemically-induced skin carcinogenesis.
DISCUSSION
Most of the functions assigned to PKD1 have been characterized in cell culture systems and their in vivo relevance has yet to be determined. Using a conditional knockout of PKD1 targeted to stratified epithelia, we investigated the non-redundant role of PKD1 in epidermis. Although PKD1 was found to be dispensable for skin development and homeostasis, our study identified a critical role for this enzyme during wound healing and in the TPA-induced hyperplastic/inflammatory responses that are necessary for tumor development. Our findings are consistent with the PKD function as a stress-responsive kinase and provide direct genetic evidence supporting a pro-proliferation role for PKD1 in skin tumor development. PKD isoforms share high sequence homology and all isoforms could be activated by TPA (Fu and Rubin, 2011). Despite expression of all three PKD isoforms in mouse keratinocytes (Jadali and Ghazizadeh, 2010), disruption of PKD1 gene alone resulted in marked reduction in TPA-induced responses and tumor promotion. This indicated that PKD2 and PKD3 cannot fully compensate for the loss of PKD1 function during this process. The TPA-induced responses in PKD1-cKO mice however, were not completely blocked and may reflect some redundant functions of PKD2 and PKD3 during tumor promotion.
Previous studies using primary cultures of mouse keratinocytes have suggested a pro-proliferative and/or anti-differentiation role for PKD1 in normal keratinocytes (Ernest Dodd et al., 2005). The normal skin architecture and unaltered expression of proliferation and differentiation markers in PKD1-deficient epidermis did not support this hypothesis. Moreover, the growth and differentiation of primary cultures of PKD1-deficient keratinocytes was comparable to that of control littermates, arguing against a general pro-proliferation/anti-differentiation role for PKD1 in keratinocytes (data not shown). This is consistent with our previous studies using small inhibitory RNA to knock down PKD1 in mouse keratinocytes (Jadali and Ghazizadeh, 2010). Although there was no compensatory up-regulation of PKD2 and PKD3 in PKD1-null keratinocytes (Figure 1), the possible functional redundancy of PKDs or other stress-responsive kinases acting on common targets during normal growth and differentiation of keratinocytes cannot be excluded.
Using two complementary ex vivo and in vivo approaches we showed that disruption of PKD1 impaired re-epithelialization during wound healing. PKD1 has been implicated as an inhibitor or a promoter of directed cell migration depending on the cell type and the experimental condition (Olayioye et al., 2013). Consistent with our studies, PKD1 has been shown to be involved in regulation of hemidesmosome dynamics through direct phosphorylation of integrin β4 on its signaling domain, a process important in promoting keratinocyte migration and proliferation (Frijns et al., 2012;Nikolopoulos et al., 2005). In addition to the PKD1 regulation of hemidesmosomes in the basal keratinocytes, PKD1 has been shown to play a distinct pro-proliferation function during reversal of differentiation in keratinocytes (Jadali and Ghazizadeh, 2010). It is plausible to assume that PKD1 activation in differentiated cells in vivo may increase the size of the proliferative cell pool during wound healing where the normal differentiation process may be reversed (Morasso and Tomic-Canic, 2005).
PKDs are involved in a diverse set of signaling pathways important to tumor development and cancer progression and have been shown to be dysregulated in several cancer types (LaValle et al., 2010;Sundram et al., 2011). Our study underlines a role for PKD1 in skin tumor formation. The two-stage chemical carcinogenesis is widely used to study the mechanism of epithelial carcinogenesis (Rundhaug and Fischer, 2010). Our data identified PKD1 as a major downstream target of TPA/DAG, and a key mediator of skin tumor promotion. TPA is known to activate PKDs via a PKC-dependent mechanism. The PKCs directly bind, phosphorylate and activate PKDs, although classical PKCs, specifically PKCα, can also activate PKDs (Rozengurt et al., 2005). The δ, ε, and η isoforms are expressed in mouse keratinocytes; however, the δ and η isoforms are thought to be anti-tumorigenic, and the role of PKCε appears to be more complex (Rundhaug and Fischer, 2010). Mice overexpressing PKCε have been shown to be resistant to papilloma formation but develop papilloma-independent metastatic carcinomas independent of TPA treatment (Reddig et al., 2000). These studies suggest, at least in part, distinct roles for PKCε and PKD1 in tumor formation. Another highly expressed PKC in epidermis, PKCα, is a major target for TPA in differentiated keratinocytes (Dlugosz et al., 1994). Similar to PKD1, disruption of the PKCα gene in mice has been shown to result in impaired TPA- and wound-induced epidermal hyperplasia. However, PKCα-null mice were more susceptible to tumor formation (Hara et al., 2005). The apparent divergence of PKC and PKD1 signaling in response to TPA may be explained by distinct substrate-specificity of PKCs and PKDs or, by the time and context dependent activation of PKD1 by PKC-dependent and -independent mechanisms as previously described (Jacamo et al., 2008;Rybin et al., 2009). Clearly, further studies are necessary to delineate the mechanism by which PKD1 mediates its pro-proliferative effects in epidermis.
In summary, the results presented here underline the importance of PKD signaling in epidermal adaptive responses including wound healing and skin carcinogenesis, suggesting another therapeutic target to alter wound healing or suppress skin tumor formation. Consistent with our findings, PKD1 signaling has been suggested as a target for the cancer-preventive activity of green tea constituents in mouse skin (Chiou et al., 2013).
Wound healing analysis
For in vivo analysis, a well-established and reproducible excisional wound healing model was used (Galiano et al., 2004). Dorsal hair of 7- to 9-week-old mice (age- and sex-matched) was clipped and a full-thickness 6 mm circular wound was generated in the upper back. A 10 mm circular splint was placed around the wound perimeter and secured with Krazy glue and 6 interrupted sutures to fix the splint to the skin. Wounds were covered with sterile Tegaderm dressings (3M Healthcare, St. Paul, MN) which were changed every other day until wounds were closed. Digital images were obtained at the time of dressing changes. Wound area was quantified using the splint to normalize the wound size. Wound area was calculated as percent area of the original wound, as sketched below. Representative wounds were biopsied following a 2-hr BrdU pulse (50 μg/g body weight), bisected and fixed in 10% formalin for routine histological processing and immunostaining.
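The percent-of-original-wound calculation can be expressed compactly; the sketch below assumes areas measured in pixels from the digital images, with the splint area used to correct for differences in imaging scale, and all numbers are hypothetical.

```python
# Illustrative only: wound area expressed as a percentage of the original wound,
# using the splint area to normalize for differences in imaging distance/scale.
# All pixel areas below are hypothetical placeholders.
def percent_of_original(wound_px, splint_px, wound_px_day0, splint_px_day0):
    """Normalize each wound measurement by its splint, then scale to day 0."""
    normalized_now = wound_px / splint_px
    normalized_day0 = wound_px_day0 / splint_px_day0
    return 100.0 * normalized_now / normalized_day0

# Day 0 reference and a later time point
print(percent_of_original(wound_px=3200, splint_px=21000,
                          wound_px_day0=8000, splint_px_day0=20500))  # about 39%
```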
For ex vivo analysis, the quantitative explant outgrowth assay of mouse skin was used as previously described (Mazzalupo et al., 2002). Briefly, dorsal skin of 2-day-old pups was removed, and 4 mm punch biopsies were cultured for 7 days. To assess keratinocyte outgrowth, explants were immunostained using an antibody against K14 and the Supersensitive IHC Detection kit (Biogenex Laboratories, San Ramon, CA). Plates were photographed and the total area of outgrowth was measured using NIH Image J software. A subset of explants was treated with mitomycin C (5 μg/ml for 2 hrs; Sigma-Aldrich) or PBS (as controls) at 48 hours post-seeding and analyzed 5 days later as described above.
TPA-induction of epidermal hyperplasia
The dorsal skin of 7-9-week-old male mice was shaved and the next day was treated with either a single dose of 5 nmole TPA (LC laboratories, Woburn, MA) in 100 μl acetone or 100 μl acetone (carrier control). Two days later, mice were euthanized, and the treated skin was biopsied and fixed for histological processing. Skin samples were analyzed following H&E staining or immunostaining using antibodies specific for Ki67 (proliferation marker; Novocastra, New Castle, UK) or S100A9 (leukocyte marker; Axxzel Biosystem LLC, Houston, TX). The epidermal thickness was measured in H&E-stained sections at a minimum of 6 different regions in sections prepared from three different mice.
Two-stage chemical carcinogenesis experiments
A dorsal area of 7-8-week-old mice (10 males and 5 females/group) was shaved, and a day later treated with a single application of DMBA (100 μg in 200 μl acetone; Sigma Aldrich, St. Louis, MO) as an initiating agent. A week later, mice were treated with TPA (20 nmole/200 μl acetone) twice weekly for 20 weeks to promote tumor formation. There was no significant difference between the average number of tumors developed in males and females. Tumors, defined as raised lesions with a minimum diameter of 1.5 mm, were assessed weekly for 26 weeks. At this time, tumors were harvested for further analysis.
Statistical analysis
Statistical analysis was performed using GraphPad Prism version 5.0 (GraphPad Software). Student's t-test was used for comparing two groups of data. For analysis of tumor incidence, comparison of the curves showing the mice with tumors was performed using a log-rank χ2 test. Tumor multiplicity was analyzed using repeated measures analysis of variance (ANOVA) for overall differences between the two groups and the Mann-Whitney test for comparing differences at each week between PKD1 fl/fl and PKD1-cKO. Only values with p < 0.05 were accepted as significant.
Figure 1. (a) Primary cultures of epidermal cells isolated from PKD1 fl/fl or PKD1-cKO mice and cultured for 5 days were analyzed for expression of PKD isozymes by semi-quantitative RT-PCR using isozyme-specific primers at 32 cycles for PKD1, 28 cycles for PKD2 and PKD3, and 22 cycles for Actin; (b) Western blot analysis of cell lysates described in (a) using an antibody cross-reacting with PKD1/PKD2 or one specific to PKD3. Actin served as loading control. Shown is representative of at least three experiments. Asterisks in (a and b) indicate residual PKD1 expression likely contributed by melanocytes and fibroblasts contaminating the primary epidermal cultures. (c) Skin sections prepared from adult PKD1 fl/fl or PKD1-cKO mice were stained with either hematoxylin and eosin (H&E) for histology or by immunohistochemical staining with the proliferation marker Ki67 (peroxidase, brown nuclear staining). (d) Immunofluorescent staining of frozen skin sections with antibody against the basal cell marker (K14) or markers of early (K10), intermediate (INV) and late (LOR) epidermal differentiation followed by Alexa-594 conjugated secondary antibody (red). Sections were counterstained with DAPI (blue nuclear staining). YFP expression (green) in the LOR panel is included to show efficient and specific Cre-mediated recombination in keratinocytes. Scale bars = 50 μm.
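As an aside, the two-group comparisons described in the statistical analysis above can be reproduced outside GraphPad Prism; the SciPy sketch below uses placeholder measurements rather than the study's data.

```python
# Illustrative only: the kinds of two-group comparisons described above,
# run with SciPy instead of GraphPad Prism. Arrays are placeholder values.
import numpy as np
from scipy import stats

wt = np.array([52.0, 48.5, 55.2, 50.1])    # e.g., control outgrowth areas (mm2)
cko = np.array([25.3, 22.8, 30.1, 24.0])   # e.g., PKD1-cKO outgrowth areas (mm2)

# Student's t-test for comparing two groups of data
t_stat, p_ttest = stats.ttest_ind(wt, cko)

# Mann-Whitney U test, as used for week-by-week tumor multiplicity comparisons
u_stat, p_mwu = stats.mannwhitneyu(wt, cko, alternative="two-sided")

print(f"t-test p = {p_ttest:.4f}; Mann-Whitney p = {p_mwu:.4f}")
```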
Exploring the Molecular Mechanism of Sepal Formation in the Decorative Flowers of Hydrangea macrophylla ′Endless Summer′ Based on the ABCDE Model
With its large inflorescences and colorful flowers, Hydrangea macrophylla has been one of the most popular ornamental plants in recent years. However, the formation mechanism of its major ornamental part, the decorative floret sepals, is still not clear. In this study, we compared the transcriptome data of H. macrophylla ‘Endless Summer’ from the vegetative stage (BS1) to the blooming stage (BS5) and annotated them against the Kyoto Encyclopedia of Genes and Genomes (KEGG) and Gene Ontology (GO) databases. The 347 identified differentially expressed genes (DEGs) associated with flower development were subjected to a trend analysis and a protein–protein interaction analysis. The combined analysis of the two yielded 60 DEGs, including four MADS-box transcription factors (HmSVP-1, HmSOC1, HmAP1-2, and HmAGL24-3) and genes with strong connectivity (HmLFY and HmUFO). In addition, 17 transcription factors related to the ABCDE model were screened, and key candidate genes related to the development of decorative floret sepals in H. macrophylla were identified by phylogenetic and expression pattern analysis, including HmAP1-1, HmAP1-2, HmAP1-3, HmAP2-3, HmAP2-4, and HmAP2-5. On this basis, a gene regulatory network model of decorative sepal development was also postulated. Our results provide a theoretical basis for the study of the formation mechanism of decorative floret sepals and suggest a new direction for the molecular breeding of H. macrophylla.
Introduction
Hydrangea macrophylla is a perennial deciduous shrub of the genus Hydrangea in the family Hydrangeaceae, which is popular for its full flower shape and colorful flowers. It is not only a common potted ornamental flowering shrub, but it can also tolerate semishady environments and is an important part of the understory in many areas, with high ornamental and ecological values. The cymes of H. macrophylla include sterile flowers and fertile flowers. Sterile flowers, also known as decorative florets, are decorative mainly because of their large, showy sepals that help attract the attention of pollinators, which is also their main ornamental trait. Fertile flowers, on the other hand, also known as nondecorative florets, have small and inconspicuous sepals and are mainly responsible for seed production [1]. Since decorative florets greatly enhance the ornamental value of H. macrophylla, many breeders have made this a breeding and production goal, hoping to increase their number through various methods. The H. macrophylla variety 'Endless Summer' has remontancy characteristics and can be grown in relatively cold northern regions, making it one of the most common hydrangea varieties available at present. Moreover, its nondecorative flowers show an unstable withering phenomenon during flower development (personal observation), which also makes it a good material for studying the development of decorative flowers.
As an important reproductive organ of angiosperms and the main ornamental part of ornamental plants, flowers have always been the focus of research by botanists due to their diverse appearance and rich variation types [2,3]. Additionally, in the study of plant flower development, the morphogenesis process of flowering organs involves a complex multigene molecular genetic regulatory network, which is a hot field of research [4]. The floral organ identity genes were first identified in Arabidopsis thaliana and Antirrhinum majus [5]. An analysis of floral homeotic mutants in both led to the formation of the genetic ABC model in the early 1990s [6,7]. In terms of floral anatomy, complete flowers of dicotyledons generally consist of four whorls of concentric structures, in order from the outside in: the 1st whorl of sepals, the 2nd whorl of petals, the 3rd whorl of stamens, and the 4th whorl of carpels [5]. The ABC model predicts that the combined action of three classes of homeotic floral organ identity genes determines the identity of the four floral organ types [7]. Subsequently, researchers have identified class D genes, which are closely related to class C genes and have similar expression patterns and partially overlapping functions [8,9], as well as class E genes that are necessary for the development and formation of all four whorls of floral organs and have partial functional redundancy [3,9,10]. Therefore, the ABCDE model of flower development was formally formed.
Among the five classes of ABCDE genes, all share a commonly conserved region and belong to the MIKC C-type MADS-box genes in the MADS-box gene family, except APETALA2 (AP2), a class A gene [4]. The MADS-box gene family is an important family of transcription factors that are widely distributed in nature and whose members commonly play a role in growth and developmental regulation and signaling in eukaryotes [11]. The main role of these genes first identified and studied in plants was to regulate the development of floral organs and flowering time. The members of the MADS-box family all have a conserved domain composed of 56-58 amino acids in their protein structure, called the MADS-box [12]. MADS-box genes can be classified into two major types, Type I and Type II, based on phylogenetic relationships, gene structure, and protein structure. Type I MADS-box genes usually contain 1-2 exons, and the encoded protein contains only one highly conserved MADS domain [13], while Type II MADS-box genes generally include six introns and seven exons, and the encoded proteins are composed of four domains, M (MADS), I (Intervening), K (Keratin-like), and C (C-terminal); these are also known as MIKC-type genes [12,14,15]. In plants, Type II MADS-box genes can be subdivided into MIKC C-type genes and MIKC*-type genes [14], and Becker et al. [16], by phylogenetic analysis of MIKC C-type MADS-box genes in angiosperms, classified them into 12 major subfamilies, which include the five classes of floral homeotic genes that control organ homology during floral organ development in the ABCDE model. Subsequent experiments, such as gel retardation and yeast two-hybrid assays, showed that these five classes of genes form homo- or heterodimers through MIKC C-type MADS domain proteins, which then assemble into ternary or quaternary protein complexes to determine the formation of different floral organs [9]. Thus, the floral quartet model of floral organ development was proposed. This model not only validates and refines the ABCDE model but also explains the interactions between proteins related to floral organ development at the protein level, providing a favorable basis for the study of the molecular mechanism of floral organ development.
Sepal morphology is the most significant difference in appearance between decorative and nondecorative flowers of H. macrophylla. During sepal formation, all sepals of nondecorative flowers were initiated successively within a comparatively short period, while a single sepal of decorative flowers was initiated on the abaxial side first, and other sepals were initiated after the single sepal had developed to a certain extent [17]. In addition, differences in the expression of meristem characteristic genes, such as LEAFY (LFY), APETALA 1 (AP1), and TERMINAL FLOWER 1 (TFL1), may also lead to the formation of decorative and nondecorative florets [17]. In monocots, such as Tulipa spp. and Agapanthus praecox, in addition to whorls 2 and 3, class B genes are also expressed in whorl 1 and are involved in regulating tepal morphogenesis [18,19]. In Ranunculaceae plants, there are also class B genes expressed in the petaloid sepals [20]. Consequently, some studies speculate that the formation of decorative sepals in H. macrophylla may be jointly regulated by class A and class B genes [21,22]. Furthermore, to resolve controversies over the ill-defined function of class A genes, there was also a study that proposed the (A) B (C) model, where (A) = A + E and (C) = C + D [23]. This model both regains the simplicity of the ABC model and generalizes and simplifies the floral quartet model. Even so, there are still few studies related to the genes of floral organ development in H. macrophylla, and the formation mechanism of its decorative florets has still not been elucidated.
In this study, differential genes at five flowering stages were investigated using transcriptome sequencing with H. macrophylla as the plant material. We focused on analyzing the main biological processes associated with flower development and then screened the genes related to the formation of decorative floret sepals to reveal the intrinsic molecular mechanism of decorative floret formation and provide a theoretical basis for the breeding of H. macrophylla with more decorative flowers.
Phenotypic Observation
The morphological characteristics of the sampled flower buds and inflorescences were observed to determine their differentiation status, and we selected the developmental stages with distinct differentiation characteristics for comparison. As shown in Figure 1, the development of decorative flowers of H. macrophylla started from the vegetative bud ( Figure 1a) and gradually formed an inflorescence meristem dome (Figure 1b), which changed to the reproductive growth stage. Later, many inflorescence meristem domes were differentiated, forming a typical globular shape ( Figure 1c). Subsequently, the differentiation of the floral primordia ( Figure 1d) and the floral organs (Figure 1e) began. Figure 1f shows a part of the secondary inflorescence branching, when a clear spherical inflorescence formation could already be observed macroscopically. Figure 1g,h correspond to a nondecorative floret and a decorative floret in the inflorescence of Figure 1f, respectively. At this point, the sepals of the nondecorative floret were obviously shorter than the petals, and both could be observed and clearly distinguished at the same time, while the sepals of the decorative floret wrapped around the entire floret and the petals were located inside the sepals. Subsequently, the sepals of the decorative floret slowly elongated, changed color, and unfolded (Figure 1i-l), reaching the full flowering stage and completing the development of the floral organs.
Quality Analysis of Transcriptome Sequencing
Five cDNA libraries were constructed by comparative transcriptome sequencing of floral organs of ‘Endless Summer’ at different developmental stages. Of these, BS1 and BS2 were mixed samples of vegetative buds and flower buds, respectively, and BS3 to BS5 were the mixes of decorative flowers collected during that growth period (Figure 2). The results show (Table 1) that the raw reads of each sequencing library were above 7 G, and the high-quality clean reads obtained after filtering were not less than 6 G. The proportion of valid bases in all tested samples was more than 94%; the GC content was approximately 45%, and Q30 ≥ 88.66%. This indicates that the bases are of high quality and the transcriptome sequencing is of good quality, meeting the requirements of subsequent assembly and data analysis.
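For orientation, per-read GC content and the Q30 fraction reported in such quality tables can be computed directly from FASTQ records; the single read and Phred+33 quality string below are made-up placeholders.

```python
# Illustrative only: GC content and Q30 fraction for one FASTQ record.
# The sequence and Phred+33 quality string are hypothetical placeholders.
seq = "ATGCGCGTATTACGCGGCTA"
qual = "IIIIIIIIIIIII###IIII"  # Phred+33 encoded base qualities

gc_content = 100.0 * sum(base in "GC" for base in seq) / len(seq)
q30_fraction = 100.0 * sum((ord(c) - 33) >= 30 for c in qual) / len(qual)

print(f"GC = {gc_content:.1f}%, Q30 = {q30_fraction:.1f}%")
```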
Analysis of KEGG Metabolic Pathway
To understand the specific functions of the unigenes obtained by transcriptome sequencing during the development of H. macrophylla and the metabolic pathways in which they are involved, unigenes were mapped to the KEGG database (Figure 3a). The statistical results show that a total of 6881 unigenes were involved in 122 metabolic pathways, mapping to five types of branches of the KEGG pathway: metabolism, genetic information processing, environmental information processing, cellular processes, and organismal systems. The most active pathways in each branch include translation, carbohydrate metabolism, folding, sorting and degradation, amino acid metabolism, transport and catabolism, lipid metabolism, and signal transduction, which involve various aspects of plant life activities.
GO Function Classification
The classification results of the unigenes obtained from this study after GO function annotation are shown in Figure 3b. A total of 18,121 unigenes from 'Endless Summer' were successfully annotated to 47 functional terms in three categories: molecular function (MF), cellular component (CC), and biological process (BP). Among them, the largest number of terms was annotated in the biological processes. According to a GO analysis, cellular process (11,589 entries) was the largest term of biological processes, followed by
Analysis of KEGG Metabolic Pathway
To understand the specific function of unigenes obtained by transcriptome sequencing during the development of H. macrophylla and the metabolic pathways they involved, unigenes were mapped to the KEGG database ( Figure 3a). The statistical results show that a total of 6881 unigenes were involved in 122 metabolic pathways, mapping to five types of branches of the KEGG pathway: metabolism, genetic information processing, environmental information processing, cellular processes, and organismal systems. The most active pathways in each branch include translation, carbohydrate metabolism, folding, sorting and degradation, amino acid metabolism, transport and catabolism, lipid metabolism, and signal transduction, which involve various aspects of plant life activities. and organelles (12,248 entries) predominated. Among the molecular functional categories, genes were mainly involved in binding (10,753 entries) and catalytic activity (7826 entries).
(a) (b) Figure 3. Annotation of unigenes in KEGG and GO datasets. (a) KEGGs. The horizontal axis represents the percent of genes annotated to the pathway; the vertical axis represents the Level 2 pathway name, and the number on the right of the bar represents the number of genes annotated to this Level 2 pathway. (b) GO. The horizontal axis represents the GO functional classification; the left vertical axis represents the percentage of genes annotated to that class, and the right vertical axis represents the number of genes annotated to that class.
GO Function Classification
The classification results of the unigenes obtained from this study after GO function annotation are shown in Figure 3b. A total of 18,121 unigenes from 'Endless Summer' were successfully annotated to 47 functional terms in three categories: molecular function (MF), cellular component (CC), and biological process (BP). Among them, the largest number of terms was annotated in the biological processes. According to a GO analysis, cellular process (11,589 entries) was the largest term of biological processes, followed by metabolic process (9527 entries) and response to stimulus (5244 entries). Among the cellular components, genes associated with cells (15,109 entries), cell parts (15,081 entries), and organelles (12,248 entries) predominated. Among the molecular functional categories, genes were mainly involved in binding (10,753 entries) and catalytic activity (7826 entries).
Screening and Enrichment Analysis of Differentially Expressed Genes
Based on the results of the difference analysis, genes with p-value ≤ 0.05 and |log2Fold Change| ≥ 2 were screened as significantly differentially expressed genes. From Figure 4, the number of upregulated differential genes was significantly higher than the number downregulated in BS1 vs. BS2 at the initial stage of floral organ development, which was probably related to the transition from vegetative growth to reproductive growth. The period from BS2 to BS4 belonged to the flower organ development period, and the DEGs related to flower development gradually increased, and their expression levels increased. However, by late flower development, the expression of most DEGs related to floral organ development decreased, resulting in a higher number of differential genes being upregulated than downregulated in BS4 vs. BS5. With the continuous maturation of floral organs, the number of DEGs in the five developmental stages from leaf bud to decorative flower formation revealed a trend of first increasing and then decreasing, indicating that the DEGs screened in this study were related to flower organ development and could be used for a subsequent analysis of related gene identification. The up- and downregulated DEGs in the four comparison groups are shown in Supplementary Table S1.

The DEGs annotated with the GO function were enriched, and Figure 5 shows the most significantly enriched GO terms for biological process (Supplementary Table S2). As shown in the figure, the main GO terms related to flower development enriched by each differentially expressed gene set were maintenance of inflorescence meristem identity (GO:0010077), floral whorl development (GO:0048438), and pollen exine formation (GO:0010584). Moreover, there were also 36 related terms, such as flower development (GO:0009908), regulation of flower development (GO:0009909), specification of floral organ identity (GO:0010093), and floral organ development (GO:0048437), which were enriched by DEGs.
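The screening step above can be reproduced in a few lines. The following is a minimal pandas sketch under the thresholds stated in the text; the input file and column names ("pvalue", "log2FoldChange") are assumptions and should be adapted to the actual differential-expression output.

```python
import pandas as pd

def screen_degs(results_csv: str, p_cutoff: float = 0.05, lfc_cutoff: float = 2.0) -> pd.DataFrame:
    """Screen significantly differentially expressed genes.

    Column names are assumptions; adapt them to the real results table.
    """
    df = pd.read_csv(results_csv)
    mask = (df["pvalue"] <= p_cutoff) & (df["log2FoldChange"].abs() >= lfc_cutoff)
    degs = df[mask].copy()
    # Record the direction of change so up-/downregulated genes can be counted per comparison
    degs["direction"] = degs["log2FoldChange"].apply(lambda x: "up" if x > 0 else "down")
    return degs

# Example (hypothetical file name):
# degs = screen_degs("BS1_vs_BS2_results.csv")
# print(degs["direction"].value_counts())
```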
Analysis of Differentially Expressed Genes Related to the Development of Decorative Flowers
Starting from the DEGs related to the formation of decorative flowers in H. macrophylla, and combining the GO functional enrichment results with other genes showing significant differences or high expression, the functional annotation results, and previous research, this study screened out 347 DEGs related to flower development. Thereafter, through a trend analysis, PPI analysis, and phylogenetic analysis, we continued to narrow down the candidate genes and search for the relevant key genes.
Trend Analysis
We performed a trend analysis on the expression patterns of the 347 DEGs associated with flower development screened above. The trend analysis divided the five developmental periods into 50 modules (Supplementary Figure S1), while the DEGs occupied a total of 37 modules, of which 8 were significantly enriched ( Figure 6). A total of 11 MADS-box transcription factors and 78 other transcription factors from 23 transcription factor families were screened out from the eight significantly enriched modules (Supplementary Table S3).
In modules 44, 9, 37, 19, and 24, the gene expression was generally either upregulated and then downregulated or downregulated throughout, and all of these genes reached peak expression at the BS1 or BS2 stage. These genes may be the key genes involved in the formation of decorative sepals in H. macrophylla. One MADS-box transcription factor (HmAGL24-3) and five transcription factors from other families were identified in Module 24.
In summary, 5 MADS-box transcription factors and 55 other transcription factors were screened out from the five significantly enriched trend modules 44, 9, 37, 19, and 24. These DEGs identified from the significantly enriched modules can be used for further analysis to narrow down the candidates for genes related to sepal development in the decorative florets of H. macrophylla.
Protein-Protein Interaction (PPI) Analysis
Functional links between proteins can usually be inferred from genomic correlations between the genes encoding them, while a group of genes required for the same function tends to demonstrate similar species coverage.
We designed the PPI figure using the 347 DEGs associated with flower development. According to the obtained protein-protein interactions (Figure 7), there were 15 DEGs with strong connectivity in the comparison between the BS1 and BS2 phases, of which five MADS-box transcription factors (HmPI-1, HmMADS3, HmAP1-2, HmEJ2, and HmAGL19) and six other transcription factors from five transcription factor families were screened. A total of 49 DEGs with strong connectivity were identified in the BS2 and BS3 phase comparison, including 9 MADS-box transcription factors (HmSVP-1, HmSOC1, HmSEP3-1, HmPI-1, HmAG-1, HmAGL15, HmSEP1, HmMADS3, and HmAP3-2) and 22 other transcription factors from 16 transcription factor families. In the comparison between the BS3 and BS4 phases, in total, 72 DEGs were more connected, of which we screened 24 other transcription factors from 16 transcription factor families. In the comparison between the BS4 and BS5 phases, there were 43 DEGs with strong connectivity, among which 1 MADS-box transcription factor (HmAGL24-3) and 13 other transcription factors from nine transcription factor families were identified. The PPI result files are shown in Supplementary Table S4.

Figure 7. Protein-protein interaction network of differentially expressed genes. Red indicates differential genes with upregulated expression; blue indicates differential genes with downregulated expression, and the connecting lines between them indicate the presence of an interaction between genes. The more genes a gene is associated with, the larger its point in the network.

According to the above PPI analysis, 179 DEGs were identified in the four sets of comparisons (Supplementary Table S5), and after excluding 40 redundant genes, a total of 139 DEGs were obtained, including 13 MADS-box transcription factors and 51 other transcription factors. Combining them with the target modules in the trend analysis, we obtained a total of 60 DEGs, which included four MADS-box transcription factors (HmSVP-1, HmSOC1, HmAP1-2, and HmAGL24-3). Among the 60 DEGs screened, HmLFY (TRINITY_DN12572_c0_g1_i1_3) showed high connectivity in three groups. In BS1 vs. BS2, the four floral development-related genes, HmLFY, HmAP1-2 (TRINITY_DN20532_c0_g2_i1_2), HmUFO (TRINITY_DN15238_c0_g1_i1_3), and HmPI-1 (TRINITY_DN12279_c0_g1_i1_2), all had strong interactions with each other. In BS2 vs. BS3, HmLFY, HmUFO, HmPI-1, HmAP3-2 (TRINITY_DN18016_c0_g1_i1_3), HmSOC1 (TRINITY_DN11951_c0_g1_i2_1), HmAP2-1 (TRINITY_DN27371_c0_g3_i4_3), HmAG-1 (TRINITY_DN27929_c0_g5_i1_3), HmSEP3-1 (TRINITY_DN24713_c0_g2_i2_3), HmSVP-1 (TRINITY_DN10563_c0_g1_i1_1), HmANT-2 (TRINITY_DN27092_c0_g1_i3_3), and HmSEP1 (TRINITY_DN23823_c0_g1_i12_3) all showed strong correlations among the floral development genes. In BS3 vs. BS4, HmLFY, HmUFO, and HmANT-2 interacted with one another strongly. Therefore, these DEGs can be used as a candidate gene set for screening the genes related to sepal formation in decorative florets of H. macrophylla for further analysis and research.
Phylogenetic and Expression Pattern Analysis
Of the 347 DEGs mentioned above, 17 transcription factors associated with the ABCDE model were also screened (Supplementary Tables S6). A phylogenetic analysis of these 17 transcription factors with the ABCDE model-related genes of Arabidopsis thaliana, Antirrhinum majus and Vitis vinifera was performed, and the results are presented in Figure 8a. From the figure, the screened transcription factors were highly homologous with the ABCDE model-related genes of the representative plants, especially Vitis vinifera.
A hierarchical cluster analysis was performed on the screened transcription factors, and the genes with the same or similar expression profiles were clustered. The presented results are shown in Figure 8b. All DEGs were clustered into four groups, and the expression levels of the genes within the groups (i.e., the color in this figure, which represents the FPKM expression of the genes in the samples) were essentially similar. Class A genes were closely related to sepal development in the ABCDE model. A total of three AP1 homologs and five AP2 homologs were identified in H. macrophylla. Among them, HmAP1-1 and HmAP1-3 showed a similar expression pattern, both of which were elevated first and maintained at a high level after reaching a peak at the BS2 stage. The expression trends of HmAP2-1 and HmAP2-2 were similar and, in contrast to HmAP1-1 and HmAP1-3, decreased first and then increased and maintained at a high level after reaching a minimum at the BS2 stage. Both HmAP2-3 and HmAP2-4 basically showed a continuous decreasing expression pattern. Meanwhile, HmAP1-2 and HmAP2-5 both showed a trend of increasing and then decreasing, and their expression peaked in the BS2 stage.
Real-Time Quantitative PCR (RT-qPCR) Verification of Differentially Expressed Genes
To verify the accuracy of the transcriptome sequencing results, this study selected 14 genes (HmSOC1, HmAP2-5, HmAP2-4, HmAP1-2, HmANT-2, HmARF5, HmMSI4, HmC90A1, HmSEP3-1, HmSEP1, HmSVP-3, HmAP3-1, HmFRI3, and HmAP2-1) potentially related to flower development from the DEGs screened in the transcriptome data for RT-qPCR detection. The results show that the expression of the 14 genes measured by RT-qPCR at different flower developmental stages was basically consistent with the transcriptome sequencing results. Their relative expression levels were closely correlated with the FPKM values, and the Pearson's correlation coefficients were above 0.896, which proved the high reliability of this sequencing (Figure 9). In addition to the 7 genes associated with the ABCDE model already mentioned above, RT-qPCR validation was also performed for the remaining 10 genes (Supplementary Figure S2). The results are in general consistent with the transcriptome sequencing and can be used for subsequent studies.
Figure 9. RT-qPCR validation of the expression patterns of 14 DEGs. The x-axis represents the five developmental stages; the left y-axis represents the relative expression levels of the RT-qPCR results; the right y-axis represents the FPKM values from RNA-seq. Error bars show the standard error between three biological replicates (n = 3).
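The agreement between RT-qPCR and RNA-seq reported above can be quantified with Pearson's correlation. The sketch below illustrates the calculation for a single gene across the five stages; the numbers are placeholders, not data from this study.

```python
import numpy as np
from scipy import stats

# Placeholder per-stage values (BS1-BS5) for one gene, not measurements from this study
rtqpcr_relative_expression = np.array([1.0, 3.2, 2.1, 1.4, 0.8])  # 2^-ddCt values
rnaseq_fpkm = np.array([12.5, 40.3, 27.8, 16.9, 9.7])

r, p = stats.pearsonr(rtqpcr_relative_expression, rnaseq_fpkm)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")
# The paper reports r > 0.896 for all 14 validated genes, i.e., the RT-qPCR and
# RNA-seq expression patterns agree well.
```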
Discussion
In this study, combined with morphological observation, a total of 17 ABCDE model-related transcription factors, including eight class A genes, four class B genes, two class C/D genes, and three class E genes, were identified by comparative transcriptome analysis of different developmental stages of H. macrophylla.
Following the origin of core dicotyledons, the evolution of features, such as a clear distinction between the petals and sepals in the perianth and the arrangement of floral organs, all have been associated with the duplication of class A and E genes [24,25]. Arabidopsis thaliana has two class A genes: AP1 and AP2, of which only AP1 encodes the MADS-box transcription factor. In addition to being expressed in the first sepal and second petal whorls, AP1 also plays an important role in the formation of floral meristems, both as a characteristic gene for floral organs and for floral meristems [5,26]. In H. macrophylla, the AP1 homolog produced three genes of this type, HmAP1-1, HmAP1-2, and HmAP1-3, through two small-scale gene duplications. Among them, HmAP1-2 was highly expressed mainly in the BS2 stage, while HmAP1-1 and HmAP1-3 were expressed in both the BS2 to BS5 stages (Figure 8b). However, it was found that in the Arabidopsis ap1 mutant, no ectopic expression of AG occurred, whereas in the ap2 mutant, the expression of AG extended to whorls 1 and 2, demonstrating that there was no antagonistic relationship between AP1 and AG and only the AP2 gene had the function of inhibiting the AG gene [27]. It could be speculated that HmAP1-2 may be mainly involved in the development of the sepals of H. macrophylla, while HmAP1-1 and HmAP1-3 may be involved in petal development in addition to sepal development, and their expression in the BS4 and BS5 stages did not affect the expression of class C/D genes in the same stage. Furthermore, within the AP1 subfamily of the MADS-box gene family, there are two AP1 paralogous genes, CAL (CAULIFLOWER) and FUL (FRUITFULL), and all three together form the SQUA-like (SQUAMOSA) gene [11,28]. CAL can positively regulate AP1 expression, but all its functions are redundant with those of AP1 [16,26,29]. In contrast, FUL has evolved with a function in valve identity specification [28]. Phylogenetic reconstructions suggest that the ancestral function of SQUA-like genes was to specify the identity of floral meristems, whereas the function to specify the identity of sepals and petals or fruit petals was derived later [30,31]. Kitamura et al. also analyzed three hydrangea AP1 homologous genes in a study on morphological changes in anthurium flower organs induced by phytoplasma infection and speculated that the changes in their expression levels might contribute to bract formation in the pedicel [21]. However, no significant changes in class A genes were observed in that study, and it was also speculated that the class B gene, along with the class A gene, might jointly regulate the morphogenesis of decorative sepals in hydrangeas [21]. Additionally, the floral meristem determination gene LFY (LEAFY)/FLO (FLORICAULA) is also homologous to AP1/SQUA, with relatively high homology at both the DNA and protein levels, and LFY also activates AP1 for expression throughout the floral meristem [3,32]. In the study by Nashima et al., they assumed that LFY was a causative gene of the double flower phenotype of 'Sumidanohanabi' [33]. In the decorative flowers of hydrangea, sepals showed petaloid characteristics, including pigmentation and organ enlargement. Additionally, the double flowers derived from dsu exhibited male sterility and reduced female fertility similar to the Arabidopsis lfy mutant [33]. 
The absence of petals and stamens in the lfy mutant flowers in Arabidopsis was traced back to a failure to activate the petal-and stamen-specific B-class genes APETALA3 (AP3) and PISTILLATA (PI). To activate the gene expression of AP3, UNUSUAL FLORAL ORGANS (UFO) is required as a transcriptional cofactor of LFY. The ufo mutant exhibited a similar phenotype to the lfy mutant of Arabidopsis. The HmLFY and HmUFO screened in this study also interacted with HmAP1-2, HmPI-1, and HmAP3-2 ( Figure 7).
As the only gene in the ABCDE model that does not belong to the MADS-box family, AP2 has the specific AP2 domain that encodes the AP2/EREBP family of transcription factors. The AP2 domain consists of approximately 60-70 amino acids, is highly conserved, can form an amphipathic α-helix, which is associated with protein interactions, has a nuclear localization signal sequence, and functions as a transcription factor [34]. The transcription factors of the AP2/EREBP gene family can be divided into three categories depending on the number of AP2 structural domains they contain, with the AP2 subfamily containing two AP2 structural domains in the protein structure. The subfamily can be divided into two categories, AP2 and ANT, and almost all of its members are closely related to plant development, such as the determination of flower organ morphology and development, the control of inflorescence meristem formation, and the normal development of ovules and seeds [35]. In floral organ development, AP2 has similar functions to AP1, which belong to the same class A functional gene. They jointly control the development of calyx and petals and regulate genes related to floral meristem initiation, and AP1 acts downstream of AP2 [35]. As indicated by the similar phenotypes of the single mutants and the more pronounced tendency of the double mutants to shift from floral primordia to inflorescence, AP1 and AP2 have similar regulatory effects on floral primordia characteristics, with some superimposed effects, and together they influence the construction of floral organs [34]. Similarly, as an attribute gene of the floral meristem, AP2 has a degree of mutual positive regulation with LFY gene expression (Figure 7) [36,37]. Molecular and genetic studies in Arabidopsis thaliana have shown that AP2 was expressed at low levels in the stem apex, with an enhanced expression in the inflorescence meristem, and throughout the entire Arabidopsis floral development, i.e., immature buds, four whorls of floral organ primordia, and developed ovules and seeds [38]. In this study, a total of five AP2 genes were screened in H. macrophylla, of which HmAP2-1 and HmAP2-2 had similar change trends, both reaching peak expression at late floral organ development, while HmAP2-3, HmAP2-4, and HmAP2-5 demonstrated similar expression patterns, all expressed at high levels during the BS1 or BS2 stages and decreasing at later stages ( Figure 8b). Although the RNA of AP2 was detectable in both floral organs in whorls 3 and 4, it did not suppress AG expression in whorls 3 and 4. Chen et al. [39] suggested that this is because in whorls 3 and 4, AP2, a microRNA-mediated target site for gene regulation, is repressed at the translational level by miRNA172 and belongs to post-transcriptional silencing. Therefore, it is speculated that HmAP2-1 and HmAP2-2 may be responsible for regulating the normal development of ovules and seeds at a later stage and are sequence complementary to miRNA172 in whorls 3 and 4 and are repressed at the translational level, while HmAP2-3, HmAP2-4, and HmAP2-5 may be responsible for the formation of floral meristem and the differentiation of sepals and petals and repress the expression of AG in whorl 1 and 2 at the early stage of floral organ formation. 
In addition, another member of the AP2 subfamily, ANT, has a partial functional overlap with AP2 and is an important transcription factor that regulates ovule and female gamete development and can work in concert with AP2 to repress AG expression in whorls 1 and 2 [34,40,41]. The AIL5 gene within this group is also associated with flower development [35]. The protein-protein interaction network ( Figure 7) illustrated that AP2 interacted with the ANT, AIL5, SVP, and AP3 genes related to flower development.
Class E genes are cofactors in flower development, which regulate the development of floral organs together with other classes of transcription factors, and play a role in the development of all floral organs, i.e., sepals, petals, stamens, carpels, and ovules [3,28]. In a study by Tsai et al. [42], it was suggested that class E and class A MADS-box genes were grouped together in the AP1/AGL9 (AGAMOUS-like 9) clade and the two were closely related. This is the same as the results of the phylogenetic analysis in this study (Figure 8a). At present, four SEPALLATA (SEP) genes have been identified in Arabidopsis thaliana, namely SEP1 (AGL2), SEP2 (AGL4), SEP3 (AGL9), and SEP4 (AGL3). Neither single nor double mutants of the SEP gene differed much in the developmental phenotype [43]. In contrast, in the triple mutant of sep1/2/3, petals, stamens, and carpels were transformed into sepal-like floral organs, similar to the phenotype of the double mutant of the B- and C-class genes [43-45]; in the quadruple mutant of sep1/2/3/4, all floral organs were transformed into spiral leaf-like structures, consistent with the phenotype of the ABC-class triple mutant [24]. However, in the sep mutant, the expression of the class A, B, and C floral homeotic genes could be unaffected. These studies further reveal that class E genes are necessary to standardize the determinacy of all floral organs and floral meristems and that class A, B, and C genes play a key role in regulating floral organogenesis and development only when SEP-like gene expression is present in plants [32]. SEP is thus both a new floral organ determinant gene and an activator of ABC-like genes [32]. Furthermore, SEP genes are partially redundant in their function in controlling the identity of all floral organs, with SEP4 determining the identity of sepals. In this study, three class E genes, HmSEP1, HmSEP3-1, and HmSEP3-2, were identified, but the SEP4 gene, which regulates sepal development more obviously, was not screened. It is speculated that this gene is not differentially expressed and was therefore not included in the screening. Alternatively, it may be that the deletion of this gene did not affect sepal formation due to factors such as functional redundancy. Alternatively, it is also possible that a change in gene function occurred during the evolutionary process of H. macrophylla, resulting in the development of sepals that do not require the SEP4 gene or that the gene is expressed in the form of another gene. This also indicates that the regulatory network of floral organ development is complex and that the ABCDE model is universal in flowering plants while having numerous specificities.
In summary, this study postulated a gene regulatory network model for the development of decorative floret sepals in H. macrophylla ( Figure 10). First, the core transcription factors controlling the development of decorative floret sepals, namely HmAP1-2, HmAP1-1, HmAP1-3, HmAP2-3, HmAP2-4, HmAP2-5, HmSEP1, HmSEP3-1, and HmSEP3-2, were identified based on gene expression patterns, phylogenetic analysis, and the ABCDE model. Subsequently, according to the protein-protein interaction network and literature studies, the genes HmFT, HmSOC1, HmLFY, and HmLUH were hypothesized to positively regulate class A and E transcription factors, and the HmLFY, HmLUH, and ANT genes also synergistically repressed AG gene expression in whorls 1 and 2. In addition, there were a number of genes associated with plant hormones, such as gibberellins, abscisic acid, cytokinins, and auxins, which were also involved in the sepal formation process at the early stages of flower development. It was hypothesized that they affected the development of decorative floret sepals by positively or negatively regulating the hormones. However, there were some genes with unknown functions, which also joined the developmental process of decorative florets, but their regulatory relationships on the process were not clear and needed further study. Although the present study hypothesized the functions and interrelationships of genes related to the development of decorative floret sepals in H. macrophylla, the conclusion is only a speculation. Its regulatory system has not been fully elucidated, and more in-depth experimental exploration is still needed.
Plant Materials
Perennial H. macrophylla 'Endless Summer' planted on the campus of Beijing Forestry University was selected as the test material. We selected plants with comparable growth and collected buds and decorative flowers from the leaf bud stage (BS1) to the full bloom stage (BS5); the sampled organs had intact external morphology, were healthy, and were free from diseases and pests (Figure 2). Part of the fresh samples was used for morphological observation, and part was placed in centrifuge tubes, snap-frozen in liquid nitrogen, and stored in a −80 °C ultralow temperature freezer until use.
Extraction and Detection of Total RNA from Transcriptome Sequencing Samples
Total RNA was extracted from samples at different developmental stages using the Aidlab EASYspin Plus RNA Extraction Kit (Aidlab Biotech, Beijing, China). The extracted RNA was tested for integrity, concentration, and purity by agarose gel electrophoresis combined with an ultramicro UV spectrophotometer (Thermo Fisher Scientific, Sunnyvale, CA, USA), and the samples that qualified were used for subsequent RNA high-throughput sequencing library construction. The sequencing unit was Shanghai OEBiotechnology Co., Ltd (Shanghai, China).
Construction of cDNA Library and Transcriptome Sequencing
The mRNA was enriched from the total RNA by magnetic bead method and then broken into short fragments by adding interruption reagent, and the first strand of cDNA was synthesized by using short fragments of mRNA as the template with random primers. After synthesis of the second strand by DNA polymerase I, RNase H, dNTP, and buffer, the double-stranded cDNA was purified and end-repaired; poly A was added, and sequencing connectors were ligated. The library was prepared by PCR amplification after fragment size screening. The constructed libraries were quality-checked with an Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA) and then paired-end sequenced using an Illumina sequencer (Genedenovo Biotechnology, Guangzhou, China).
Sequencing Data Quality Monitoring and Sequence Assembly
The raw data obtained from high-throughput sequencing were filtered to remove reads containing adapters, reads with an N ratio greater than 10%, reads consisting entirely of A bases, and low-quality reads (reads in which bases with Q ≤ 10 accounted for more than 50% of the whole read) to obtain clean reads. The transcripts were reconstructed and assembled with Trinity software (The Broad Institute, Cambridge, MA, USA) and CD-HIT software (http://www.bioinformatics.org/cd-hit/ (accessed on 4 March 2022)) to obtain the final unigenes for subsequent information analysis.
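As an illustration of the read-level filters listed above, the sketch below applies the same criteria to a FASTQ file with Biopython. The adapter sequence and file names are assumptions, and paired-end data would additionally require keeping read pairs in sync.

```python
from Bio import SeqIO

ADAPTER = "AGATCGGAAGAGC"  # illustrative adapter sequence, not taken from the paper

def keep_read(rec) -> bool:
    seq = str(rec.seq).upper()
    quals = rec.letter_annotations["phred_quality"]
    has_adapter = ADAPTER in seq
    high_n = seq.count("N") / len(seq) > 0.10                      # N ratio > 10%
    all_a = set(seq) <= {"A"}                                      # read consists only of A bases
    low_quality = sum(q <= 10 for q in quals) / len(quals) > 0.50  # >50% of bases with Q <= 10
    return not (has_adapter or high_n or all_a or low_quality)

def filter_fastq(in_path: str, out_path: str) -> None:
    clean = (rec for rec in SeqIO.parse(in_path, "fastq") if keep_read(rec))
    SeqIO.write(clean, out_path, "fastq")

# Example (hypothetical file names): filter_fastq("BS1_raw.fastq", "BS1_clean.fastq")
```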
Bioinformatics Analysis and Screening of Differentially Expressed Genes
The obtained unigenes were annotated for gene function and aligned with the GO and KEGG databases to select the proteins and families with the highest sequence similarity. The sequence similarity alignment method was used to obtain the number of reads aligned to the unigene in each sample and calculate the expression abundance (FPKM value) of each unigene in each sample. Based on the magnitude of the FPKM value, the number of counts of each sample was standardized, and the difference fold was calculated and tested for significance. Eventually, the DEGs were screened with |log2Fold Change| ≥ 2 and the p-value ≤ 0.05 as the threshold. Trend analysis and PPI analysis were performed on the screened DEGs using the company's cloud platform program (https://cloud.oebiotech.cn/ (accessed on 4 March 2022)). The trend analysis constructed the corresponding change trends by time points and then transformed the gene expression data and calculated the similarity of the transformed data with the corresponding change trends. Meanwhile, the time points were randomly disrupted, and the trend analysis was reperformed to count the number of genes in each trend. After performing numerous random rearrangements, an expected number of genes could be obtained in each trend, and finally, the p-value of the trend was calculated using the hypergeometric distribution algorithm. PPI analysis is a blast comparison of unigenes with allied species in the STRING database to obtain the interaction relationship and combined score of differential unigenes by homologous alignment substitution. The protein (gene) interaction network took the obtained list of target proteins (genes) and the possible two-by-two interactions between all proteins (or corresponding genes) of the species as input files and visualized the interaction relationships between proteins (genes) with high connectivity in a set of proteins (genes) in the form of a ring network. The up-and downregulation were distinguished by different colors according to the information on expression level changes. Phylogenetic tree was drawn with MEGA11 software (https://www.megasoftware.net/ (accessed on 31 July 2022)).
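The trend and enrichment p-values mentioned above were computed on the provider's cloud platform. For readers unfamiliar with the statistic, the sketch below shows the generic hypergeometric enrichment test that such analyses rely on; all counts are placeholders rather than values from this study.

```python
from scipy.stats import hypergeom

# Placeholder counts, not values from this study
total_genes = 18000      # size of the background gene set
category_genes = 150     # background genes belonging to the trend/term of interest
selected_genes = 300     # genes in the tested set (e.g., a DEG list)
overlap = 12             # tested genes that fall into the category

# One-sided enrichment p-value: probability of observing `overlap` or more genes by chance
p_value = hypergeom.sf(overlap - 1, total_genes, category_genes, selected_genes)
print(f"hypergeometric enrichment p = {p_value:.3g}")
```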
RT-qPCR Verification
To verify the accuracy of the transcriptome data, we selected 13 DEGs that might be associated with decorative floret formation for real-time quantitative PCR analysis. RNA was reverse transcribed into cDNA using the HiScript III RT SuperMix for qPCR (+gDNA wiper) (Vazyme Biotech, Nanjing, China), which was used as a template for RT-qPCR. Gene-specific primers were designed using Primer Premier 5.0 software (Premier Biosoft, San Francisco, CA, USA) (Supplementary Table S7), and the reference gene was EF1-β [46]. The reaction system was prepared according to the instructions of TB Green® Premix Ex Taq™ II (Takara, Shiga, Japan), and PCR amplification was performed. The total reaction volume was 20 µL, containing 10 µL of TB Green Premix Ex Taq II, 4 µL each of the forward and reverse primers, and 2 µL of cDNA template. The amplification procedure was as follows: predenaturation at 95 °C for 30 s; then 40 cycles of denaturation at 95 °C for 5 s, annealing at 60 °C for 30 s, and extension at 72 °C for 30 s; followed by melting curve analysis. Each reaction was repeated three times. The relative expression of the target genes was calculated using the 2^−ΔΔCT method [47].
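A minimal sketch of the 2^−ΔΔCT calculation cited above is given below, with EF1-β as the reference gene as described in the text; the Ct values are placeholders.

```python
def relative_expression(ct_target: float, ct_reference: float,
                        ct_target_cal: float, ct_reference_cal: float) -> float:
    """2^-ddCt relative expression.

    ct_target / ct_reference: mean Ct of the target gene and the reference gene
    (EF1-beta) in the sample of interest; *_cal: the same values in the calibrator
    sample (e.g., the BS1 stage). All values here are placeholders.
    """
    dd_ct = (ct_target - ct_reference) - (ct_target_cal - ct_reference_cal)
    return 2.0 ** (-dd_ct)

# Example with made-up Ct values: roughly 3.2-fold expression relative to the calibrator
print(relative_expression(24.1, 20.3, 26.0, 20.5))
```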
Conclusions
In this study, the transcriptomes of floral organs from the nutritional stage (BS1) to the blooming stage (BS5) of H. macrophylla were sequenced and aligned with the KEGG and GO databases. The DEGs of each comparative group were mainly enriched in the GO terms related to flower development, such as the maintenance of inflorescence meristem identity, floral whorl development, and pollen exine formation. The 347 DEGs obtained by differential expression analysis and GO functional enrichment were subjected to a trend analysis and a PPI analysis, and 17 ABCDE model-related transcription factors were identified and subjected to phylogenetic analysis and expression pattern analysis. Ultimately, this study postulated a model for the gene regulatory network of decorative sepal development in H. macrophylla, which laid the foundation for the study of the molecular mechanism of sepal formation in decorative florets.

Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: All data generated or analyzed during this study are available.
Conflicts of Interest: The authors declare no conflict of interest.
Body mass index related to executive function and hippocampal subregion volume in subjective cognitive decline
Objective: This study aims to explore whether body mass index (BMI) level affects the executive function and hippocampal subregion volume of subjective cognitive decline (SCD).

Materials and methods: A total of 111 participants were included in the analysis, including SCD (38 of normal BMI, 27 of overweight and obesity) and normal cognitive control (NC) (29 of normal BMI, 17 of overweight and obesity). All subjects underwent the Chinese version of the Stroop Color-Word Test (SCWT) to measure executive function and a high-resolution 3D T1 structural image acquisition. Two-way ANOVA was used to examine the differences in executive function and gray matter volume in hippocampal subregions under different BMI levels between the SCD and NC groups.

Results: Significant population × BMI interactions were found for the inhibition control function [SCWT C-B reaction time (s): F(1,104) = 5.732, p = 0.018] and for the hippocampal subregion volumes of CA1 [F(1,99) = 8.607, p = 0.004], the hippocampal tail [F(1,99) = 4.077, p = 0.046], and the molecular layer [F(1,99) = 6.309, p = 0.014]. After correction by the Bonferroni method, the population × BMI interaction only had a significant effect on CA1 (p = 0.004). Further analysis found that the SCWT C-B reaction time of SCD was significantly longer than that of NC at both the normal BMI level [F(1,104) = 4.325, p = 0.040] and the high BMI level [F(1,104) = 21.530, p < 0.001], and the inhibitory control function of SCD was worse than that of NC. In the normal BMI group, gray matter volume in the hippocampal subregion (CA1) of SCD was significantly smaller than that of NC [F(1,99) = 4.938, p = 0.029]. For patients with SCD, the high BMI group had worse inhibitory control function [F(1,104) = 13.499, p < 0.001] and greater CA1 volume compared with the normal BMI group [F(1,99) = 7.619, p = 0.007].

Conclusion: The BMI level is related to the inhibition control function and the gray matter volume of the CA1 subregion in SCD. Overweight seems to increase the gray matter volume of CA1 in the elderly with SCD, but it is not enough to compensate for the damage to executive function caused by the disease. These data provide new insights into the relationship between BMI level and executive function in SCD from the perspective of imaging.
Introduction
Subjective cognitive decline (SCD) refers to the decline in subjective memory or cognitive function in the absence of obvious cognitive dysfunction or obvious impairment of daily living ability on objective behavioral examination (Jessen et al., 2014). SCD is a state between normal aging and mild cognitive impairment (MCI) and is considered to be one of the earliest cognitive changes in the pathogenesis of Alzheimer's disease (AD). A recent study found that the prevalence of SCD in the elderly population aged > 50 years was 26.6% (Liew, 2019), and SCD increased the risk of progression to MCI in the elderly by 1.73 times and the risk of progression to AD by 1.9 times (Pike et al., 2021).
One variable that may play an important role in the development of AD is obesity, which is associated with numerous deleterious health conditions (Mallorquí-Bagué et al., 2018;Piché et al., 2020) including late-life dementia (Kivipelto et al., 2018). Body mass index (BMI), one measure of obesity, has a complex relationship with cognitive function in the elderly. Previous studies found that the cognitive dimensions affected by BMI differ across the clinical stages of AD. For instance, in the dementia or MCI stage of AD, a higher BMI is related to worse overall cognitive function, memory, attention, and executive function in the elderly (Calderón-Garcidueñas et al., 2019;Sanchez-Flack et al., 2021). In the elderly population with normal cognitive status, higher BMI predicts worse executive function (Gunstad et al., 2007;Beyer et al., 2017), while BMI is not significantly associated with the attention and memory dimensions (Schmeidler et al., 2019). These studies suggest that, unlike other cognitive dimensions, the effect of BMI on executive function may be present throughout the different clinical stages of AD. However, the relationship between BMI and executive function in SCD (an early stage of AD) is unclear. In addition, our previous preliminary study found that overweight and obese patients with SCD had worse executive function compared with patients with SCD in the normal weight group. However, that study lacked a comparison with normal controls (NCs), so the interaction of BMI level and disease status on executive function in patients with SCD could not be examined. In addition, the underlying mechanism is also unclear.
Neuroimaging studies have suggested early AD-like structural brain alterations in SCD (Pini and Wennberg, 2021). The hippocampus plays a critical role in cognition (Lisman et al., 2017). A previous study found that SCD exhibits a consistent pattern of hippocampal atrophy (Chen et al., 2021). Some studies have observed a decreased hippocampal volume in individuals with SCD both at baseline and during a significant longitudinal decline (Scheef et al., 2012;Sánchez-Benavides et al., 2018;van Rooden et al., 2018;Yue et al., 2018), with an annual decrease of 1.9% (Cherbuin et al., 2015;Wang et al., 2020). The hippocampus is composed of multiple subregions such as the dentate gyrus (DG), cornu ammonis (CA) region, and subiculum (SUB), all of which play specific roles in the circuits of the hippocampus (O'Keefe et al., 2007). For example, the CA1 subregion and the entorhinal cortex can represent a variety of different information (time, space, etc.), and the SUB, as a transition region between the two subregions, can accept the direct input of synapses in CA1 subregion and project it to different cortex and subcortical regions (Matsumoto et al., 2019); CA1, CA2/3, and DG play complementary roles in supporting episodic memory by allowing one to remember specific items, as well as their relationships, within a shared context (Dimsdale-Zucker et al., 2018). With the progression of AD, the volume of hippocampal subregions shows an obvious decreasing trend . Studies have found that in people with risk of AD smaller volumes of the hippocampal fimbria, presubiculum, and SUB showed the associations with poor performance on executive function (Evans et al., 2018). While in patients with MCI, smaller hippocampal subregion (CA1) volume is associated with worse executive function (Suo et al., 2017). However, the relationship between the hippocampal subregion and the executive function subdimension in SCD is still not clear.
It is worth mentioning that the hippocampus is a key structure involved in body weight regulation (Davidson et al., 2007;Kanoski and Grill, 2017). Neuroimaging studies incorporating structural magnetic resonance imaging (MRI) reported that patients with AD with higher BMI levels have a smaller hippocampal volume (Ly et al., 2021). However, the relationship between BMI and cognitive function/hippocampal volume in different stages of AD is inconsistent. Kivimäki et al. (2018) reported that when BMI was assessed > 20 years before the diagnosis of dementia, a higher BMI was associated with an increased risk of dementia, whereas when BMI was assessed < 10 years before the diagnosis, a lower BMI predicted dementia. In addition, studies reported that overweight/obesity was positively correlated with the hippocampal volume of subjects (Widya et al., 2011;Ma et al., 2019). Animal experiments show that obesity affects the hippocampal subregion (CA1, CA3) of rats with pre-AD and MCI models (Ivanova et al., 2020). However, it is unclear what the effects are of different BMIs on the volume of the hippocampal subregion in the elderly with SCD and whether there is an interactive effect between the BMI and disease condition on the volume of hippocampal subregions.
This study aims to compare the difference in executive function of the elderly SCD and NC with different BMI levels and explore whether there are differences in hippocampal subregions related to BMI levels in different cognitive states. We hypothesized that the effect of BMI level on executive function in patients with SCD may be related especially to the hippocampal subregion gray matter volume.
Participants
In this study, we recruited 111 elderly subjects aged > 60 years, who voluntarily participated in a free questionnaire survey and physical examination in communities in Fuzhou, Fujian Province, including 65 elderly people with SCD and 46 elderly NC. The SCD and NC participants were divided into a normal BMI group and an overweight/obesity (high BMI) group according to the Chinese adult overweight and obesity prevention and control guidelines (China Obesity Working Group, 2004) (normal weight, BMI between 18.5 and 23.9 kg/m²; overweight and obese, BMI ≥ 24.0 kg/m²). This study was approved by the Medical Ethics Committee of the Affiliated Rehabilitation Hospital of Fujian University of Traditional Chinese Medicine. All participants signed an informed consent form before taking the tests.
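For clarity, the grouping rule described above can be written out as below; the treatment of underweight subjects (BMI < 18.5 kg/m²) is an assumption, since the text only defines the normal and high BMI groups.

```python
def bmi_group(bmi_kg_m2: float) -> str:
    """Group BMI using the Chinese adult guideline cutoffs cited in the text."""
    if bmi_kg_m2 >= 24.0:
        return "high BMI (overweight/obese)"
    if bmi_kg_m2 >= 18.5:
        return "normal BMI"
    return "underweight (not assigned to either study group)"  # assumed handling

# Example: bmi_group(25.3) -> 'high BMI (overweight/obese)'
```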
Subjective cognitive decline and NC exclusion criteria included (1) hypertensive patients with uncontrolled blood pressure; (2) history of alcohol and drug abuse; (3) severe anxiety and depression, as indicated by the Hamilton Depression Scale (HAMD) (Hamilton, 1960) score > 24 points or Hamilton Anxiety Scale (HAMA) (Hamilton, 1959) score > 29 points; (4) decline in cognitive function caused by other diseases (such as mental diseases and poisoning); and (5) unable to cooperate with the tester due to other physiological and psychological reasons.
Clinical assessment
In the form of a questionnaire, the basic demographic data of the subjects (age, gender, BMI, years of education) and medical history (hypertension, hyperlipidemia, type II diabetes mellitus [T2DM], etc.), as well as medication history, were recorded in detail by professionally trained assessors. Medication history refers to the last 3-month routine medication self-reported by subjects when receiving the questionnaire of this study. We mainly recorded the use of drugs that control hypertension, hyperlipidemia, and T2DM. In addition, any other medication the participant used within the last 3 months was also recorded.
Neuropsychological assessment
Montreal Cognitive Assessment was used to assess the global cognitive function of subjects, with a total score of 30 points (the higher the score, the better the global cognitive function). HAMD and HAMA were used to assess the severity of depression and anxiety.
The Stroop Color-Word Test (SCWT) (Golden and Golden, 1981) evaluates the executive function. The SCWT version adopted in this study consists of three cards, each with 24 characters. SCWT A is composed of red, yellow, blue, or green dots; SCWT B is composed of Chinese characters printed in the same color (red, yellow, blue, or green, the color of the characters is consistent with the meaning of the word); and SCWT C prints four kinds of Chinese characters with different colors (red, yellow, blue, or green, the color of the characters is inconsistent with the meaning of the word, such as "yellow" printed in red color). The longer the response time of each card, the worse the execution function (Wecker et al., 2000;Homack and Riccio, 2004). The analysis indicators are the reaction time of each card and the SCWT C-B reaction time. A larger SCWT C-B reaction time represents greater interference from conflicting response sets or poorer inhibitory control (Scarpina and Tagini, 2017;Rabi et al., 2020).
Brain imaging acquisition
The brain MRI data were acquired on a 3.0 T Prisma scanner system (Siemens Medical Solutions, Erlangen, Germany) with a 64-channel head coil. Before MRI scanning, the precautions for MRI scanning were explained again, and the subjects signed the consent form for MRI scanning. It was confirmed that the subjects had no scanning contraindications such as metal implants and claustrophobia. The subjects were asked to stay still during the scanning process; rubber earplugs and soft head pads were used to reduce noise and fix the position of the head, and if the subjects felt uncomfortable, they could press the alarm in their hand to instruct the staff to stop the scanning. The T1-MPRAGE images were collected using the following parameters: field of view, 256 mm × 256 mm; repetition time, 2,300 ms; echo time, 2.94 ms; flip angle, 15°; slice thickness, 1 mm; and slices, 160. In addition, all subjects in this study were screened with an appropriate MRI scan (T2-weighted sequence) to check for vascular injuries such as stroke and brain tumor before initiation of the study. None of the eligible subjects included had obvious vascular lesions.
Brain imaging processing
The T1 image data were preprocessed using MRIconvert (a toolbox for image data conversion, version 2.0 Rev. 235, https://www.softpedia.com/get/Science-CAD/MRIConvert.shtml) and FreeSurfer software (a toolbox for image data analysis, version 7.1.0, http://surfer.nmr.mgh.harvard.edu) under the Linux system. FreeSurfer 7.1.0 was used to extract the hippocampal volume from the subcortical nucleus segmentation file of each subject generated during preprocessing. During segmentation, the image of each subject is first converted from the individual space to the FreeSurfer standard space to ensure accurate segmentation, and then the hippocampus of each subject is divided into 19 regions according to the hippocampal subregion segmentation template on the official FreeSurfer website. According to Supplementary Table 1, the subdivided 19 regions were combined into the hippocampal tail, hippocampal fissure, fimbria, parasubiculum, subiculum, presubiculum, CA1, CA3, CA4, molecular layer, granule cell layer and molecular layer of the dentate gyrus (GC-ML-DG), and hippocampal amygdala transition area (HATA), i.e., 12 hippocampal subregions; CA2 is always included in CA3 (Fischl, 2012; Iglesias et al., 2015; Supplementary Figure 1).
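As a rough illustration of how the FreeSurfer subfield volumes could be collected and merged into the 12 subregions described above, a Python sketch follows. The output file name and subfield label spellings depend on the FreeSurfer version and are assumptions here; the grouping actually used in the study is defined in Supplementary Table 1.

```python
from pathlib import Path
from collections import defaultdict

def load_subfield_volumes(subject_dir: str, hemi: str = "lh") -> dict:
    """Read per-subfield volumes written by FreeSurfer's hippocampal segmentation.

    The file name below is an assumption for FreeSurfer 7.x output; verify it
    against your own $SUBJECTS_DIR before relying on it.
    """
    stats_file = Path(subject_dir) / "mri" / f"{hemi}.hippoSfVolumes-T1.v21.txt"
    volumes = {}
    for line in stats_file.read_text().splitlines():
        name, value = line.split()
        volumes[name] = float(value)
    return volumes

def combine_head_body(volumes: dict) -> dict:
    """Sum -head/-body parts (e.g., CA1-head + CA1-body -> CA1), mirroring the
    19-to-12 subregion grouping described in the text."""
    combined = defaultdict(float)
    for name, value in volumes.items():
        combined[name.replace("-head", "").replace("-body", "")] += value
    return dict(combined)

# Example (hypothetical subject directory):
# combine_head_body(load_subfield_volumes("/data/freesurfer/sub-001"))
```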
Statistical analysis

Behavioral data analysis
Analysis was performed using the SPSS software (version 26.0 for Windows, IL, United States). The categorical variables are expressed by the number of cases n (%), and the comparison between groups was performed using chi-square test. The continuous variables are expressed as mean ± SD, and the comparison between groups was performed using one-way ANOVA. If p < 0.05, the difference was considered to be statistically significant, and the post hoc comparison was performed using Fisher's least significant difference method.
Two-way ANOVA was used for the comparison between groups of SCWT test (the first factor was population grouping and the second factor was BMI grouping, to explore the main effects of population grouping and BMI grouping and the interaction between them). Age, gender, and years of education were used as covariates; p < 0.05 was considered to be statistically significant. When there is an interaction between population grouping and BMI grouping, simple main effect and paired comparative analysis were used to further study whether different populations and different BMI improve or reduce executive function.
Brain imaging analysis
The volume of each subject's hippocampal subregion was extracted and analyzed by two-way ANOVA based on region of interest (ROI) level (the first factor is population grouping and the second factor is BMI grouping) with the total hippocampal volume, intracranial volume, age/gender/years of education/hypertension/hyperlipidemia/T2DM as covariates. The brain regions with interactive gray matter volume differences were obtained, and the interaction effects between them were further analyzed using the SPSS 26.0 software. When there is an interaction between population grouping and BMI grouping, simple main effect and paired comparative analysis were used to further study whether different populations and different BMI increase or decrease the gray matter volume of hippocampal subregions. For the brain area with significant interaction, the association between the gray matter volume and the corresponding score of SCWT test was done using the partial correlation analysis with age, gender, and years of education as covariates. Multiple comparisons were corrected using Bonferroni method.
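The partial correlation step can likewise be sketched outside SPSS; below is an illustrative residual-based version with hypothetical column names, using the same covariates (age, gender, years of education) and a Bonferroni-style threshold.

```python
# Sketch: partial correlation between CA1 gray matter volume and an SCWT score,
# controlling for age, gender, and years of education. Column names are
# hypothetical; the p-value is approximate (df not reduced for covariates).
import pandas as pd
from scipy import stats
import statsmodels.api as sm

df = pd.read_csv("roi_and_behavior.csv")
covars = sm.add_constant(
    pd.get_dummies(df[["age", "gender", "edu_years"]], drop_first=True).astype(float)
)

def residualize(y):
    """Residuals of y after regressing out the covariates."""
    return sm.OLS(y.astype(float), covars).fit().resid

r, p = stats.pearsonr(residualize(df["ca1_volume"]), residualize(df["scwt_c_rt"]))
n_tests = 4  # four volume/score pairs were examined, hence p < 0.05/4
print(f"partial r = {r:.3f}, p = {p:.4f}, significant after Bonferroni: {p < 0.05 / n_tests}")
```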
Demographic characteristics
The comparison of general demographic data and personal medical history of each group (i.e., NC-normal BMI, NC-high BMI, SCD-normal BMI, SCD-high BMI) is shown in Table 1. The intergroup comparison showed no significant group differences in the general demographic data of the four groups except BMI. After adjusting for age, gender, and years of education, the mean MoCA score of subjects with SCD included in this study was 27.40 ± 1.07 ≥ 26, which suggested that there was no obvious abnormality in objective cognitive indices.
Neuropsychological characteristics
Montreal Cognitive Assessment (MoCA) score, HAMD score, and HAMA score of each group (NC-normal BMI, NC-high BMI, SCD-normal BMI, and SCD-high BMI) are compared in Table 1. The intergroup comparison showed no significant difference in MoCA score, HAMD score, or HAMA score among the four groups (p > 0.05).
The comparison of SCWT scores of each group is shown in Table 2. We observed significant interaction effects between population grouping and BMI grouping for SCWT C reaction time (s) and SCWT C-B reaction time (s). Post hoc analysis showed that the SCWT C reaction time (s) and SCWT C-B reaction time (s) of the SCD-normal BMI group were significantly larger than those of the NC-normal BMI group, which indicates that the inhibitory control function of the SCD-normal BMI group was worse than that of the NC-normal BMI group. For SCWT A reaction time (s) and SCWT B reaction time (s), we did not find significant differences in the interaction terms between population grouping and BMI grouping (p > 0.05; see Table 2).
Brain imaging characteristics
A comparison of hippocampal subregion volumes among the four groups is shown in Tables 4, 5. Gray matter volume in CA1 was smaller in the SCD-normal BMI group compared with the NC-normal BMI group. For the other hippocampal subregions (hippocampal tail, hippocampal fissure, fimbria, parasubiculum, subiculum, presubiculum, CA3, CA4, GC-ML-DG, HATA), we did not find significant differences in the interaction term between population grouping and BMI grouping (p > 0.05; see Table 5).
After correction by Bonferroni's method, ANOVA showed that the population × BMI interaction had a significant effect on CA1. Post hoc analysis showed that, in the normal BMI group, gray matter volume was decreased in CA1 of SCD compared with NC; in the SCD population, gray matter volume of CA1 in the high BMI group was significantly increased compared with normal BMI (see Figure 1). In the NC population, we did not observe any significant effect of changes in BMI level on CA1 gray matter volume (p > 0.05). In the high BMI population, we did not observe any significant difference in CA1 gray matter volume between the SCD and NC populations (p > 0.05). After correction by the Bonferroni method, we did not find a significant correlation between gray matter volume changes in CA1 and SCWT C reaction time (s) (p > 0.05/4) or between CA1 and SCWT C-B reaction time (s) (p > 0.05/4; see Supplementary Tables 2, 3).
FIGURE 1 | Significant population × BMI interaction effect on gray matter volume in the hippocampal subregion.
Notes to Tables 1, 2: Medicine (medication history) refers to the last 3-month routine medication self-reported by subjects when receiving the questionnaire of this study, and mainly records the use of drugs that control hypertension, hyperlipidemia, and T2DM; any other medications the participant had used within the last 3 months were also recorded. SCWT, the Stroop Color-Word Test. a Continuous variable; two-way ANOVA was adopted, and mean ± SD is used for statistical description; the unit is seconds. b There is an interaction between population grouping and BMI grouping after correction by the Bonferroni method. The two-way ANOVA was carried out with age, gender, and years of education as covariates.
Notes to Tables 4, 5: a Continuous variable; two-way ANOVA was adopted, and mean ± SD is used for statistical description; the unit of volume is cubic millimeters. b There is an interaction between population grouping and BMI grouping after correction by the Bonferroni method. HATA, hippocampal amygdala transition area; CA, cornu ammonis; GC-ML-DG, granule cell layer and molecular layer of the dentate gyrus. The two-way ANOVA was carried out with age, gender, years of education, hypertension, hyperlipidemia, type II diabetes mellitus, whole hippocampus volume, and intracranial volume as covariates.
Discussion
This study compared the effects of different BMI levels (high BMI and normal BMI) and different populations (SCD and NC) on executive function and hippocampal subregion volume. Our results showed significant interaction effects between the population group and the BMI group on executive function (inhibitory control function) and on hippocampal subregion (CA1) gray matter volume. In addition, we found significant main effects of the BMI group and the population group. Our results suggested that SCD has worse executive function (inhibitory control function) than NC regardless of BMI level, but overweight and obesity aggravate the degree of impairment of executive function in the elderly with SCD. For the normal BMI subgroup, the CA1 volume of SCD was smaller than that of NC. Furthermore, we found that, in the SCD population, the CA1 gray matter volume of the high BMI group was larger than that of the normal BMI group.
The present results suggest that SCD has worse executive function (inhibitory control function) than NC in both overweight/obese and normal-weight individuals, which is partly consistent with previous studies. A study from Spain (López-Higes et al., 2017) showed that executive function test performance (measured with the Stroop interference index) in the elderly with SCD was lower than in healthy controls. This study found that the worse executive function of SCD compared with NC is mainly manifested in inhibitory control function. Beyond replicating the results of previous studies, this study considered the effect of BMI on patients with SCD and found that elevated BMI aggravated the impairment of inhibitory control function in patients with SCD. A meta-analysis (Yang Y. et al., 2018) showed that individuals who are overweight have deficits in working memory and inhibitory control function. Inhibitory control is an important subdimension of executive function (Dempster, 1992; Bjorklund and Harnishfeger, 1995). The impairment of inhibitory control has been identified as the most affected function in the executive function subdomain in MCI (Traykov et al., 2007; Brandt et al., 2009; Johns et al., 2012). These results may be explained from a neurophysiological perspective. Inhibitory control behavior has electrophysiological correlates in patients with MCI, and the neurocognitive mechanisms associated with response inhibition (NoGo P300) are impaired in patients with MCI compared with healthy controls (López Zunini et al., 2016). In the preclinical stage of MCI, SCD may also show the same neurocognitive impairment as MCI, which may be why we observed worse inhibitory control function in SCD compared with healthy subjects. As for the impact of BMI on the executive function of SCD, studies have shown that overweight or obesity can significantly reduce cerebral blood flow and lead to insufficient cerebral perfusion, leaving the brain without enough oxygen and nutrients for a long time, which is an early mechanism of AD (Knight et al., 2021) and is also related to worse executive function (Alosco et al., 2012). This may explain why the increase in BMI leads to the aggravation of executive function impairment in patients with SCD.
In addition, we found a significant interaction between population grouping and BMI grouping in the hippocampal subregion (CA1), with CA1 showing significant atrophy in SCD compared with NC. A recent study (Worker et al., 2018) showed that patients with AD experience greater hippocampal subregional atrophy over time compared to NC subjects, including CA1, molecular layer, CA3, hippocampal tail, fissure, and presubiculum, among which the changes in CA1 and the molecular layer are most obvious. We also found atrophy in the molecular layer of the hippocampus in SCD before Bonferroni correction, which is partly consistent with this previous study. The results seem plausible, as the most distinctive AD-related neuron loss is seen in the CA1 region of the hippocampus, and the neuronal loss in CA1 is not an age-related phenomenon but rather characterizes an overt AD process (West et al., 1994). Some studies have described a sequential pattern of atrophy starting within entorhinal and transentorhinal areas and moving to CA1 and eventually to other hippocampal subregions (Csernansky et al., 2005; Apostolova et al., 2010). SCD shows a similar pattern of volume atrophy in the hippocampal subregions as AD, preferentially and mainly involving the CA1 region (Perrotin et al., 2015). These findings may be related to the unique structure and function of hippocampal CA1. The CA1 region of the hippocampus maintains its neuroplastic flexibility well into adulthood and plays an important role in responding to external and internal demands to serve cognitive processes (Walhovd et al., 2014b). However, studies have found that brain regions with high neuroplasticity are more prone to neurodegeneration (Neill, 2012; Bufill et al., 2013). This plasticity of CA1 may increase its vulnerability to neurotoxic effects, ultimately leading to structural atrophy and functional decline (Walhovd et al., 2014a; Nemeth et al., 2017). Deposition of Aβ occurs in the neocortex and hippocampus many years before the onset of clinical symptoms of AD (Sadigh-Eteghad et al., 2015), while the CA1 area is highly sensitive to pathological changes (Pluta et al., 2021). We therefore speculate that the loss of CA1 volume in subjects with SCD may be significantly associated with neuroplasticity in the hippocampal CA1 region.
It is worth mentioning that the result of this study also showed that the hippocampal subregion (CA1) volume of high BMI index in SCD is significantly higher than that of normal BMI population. A recent longitudinal imaging study found that higher BMI in AD populations was associated with larger hippocampus volumes, which is partly consistent with the present result. Sun et al. (2020) pointed out that subjects with higher BMI showed a significant lower Aβ load using PET imaging. Previous studies have suggested that Aβ peptide deposition triggers tau hyperphosphorylation and aggregation in the form of neurofibrillary tangles, and these aggregates lead to inflammation, synaptic damage, neuronal loss, and thus decrease the brain volume (Nelson et al., 2012;He et al., 2018;Vogel et al., 2020;Roda et al., 2022). The accumulation of Aβ pathology enhances hippocampal atrophy in pre-AD (Gordon et al., 2016;Wang et al., 2016). In addition, in contrast to previous study, we found that the increased hippocampal volume caused by the increase in BMI is mainly in the CA1 area. A previous study found that medium and large Aβ plaques are significantly more numerous in CA1 than in other hippocampal subregions (Ugolini et al., 2018), and CA1 may be the most vulnerable region of the hippocampus to neuronal loss (Padurariu et al., 2012;Yang X. et al., 2018). We therefore reasoned that the increased CA1 volume in high-BMI subjects with SCD might be partly mediated by obesity reducing the accumulation of Aβ in the hippocampus.
However, the compensatory increase in CA1 volume in patients with SCD in this study does not appear to be sufficient to compensate for the impairment of executive function in patients with SCD, which is manifested by worse performance on executive function tests in patients with SCD compared with NC. Recent studies have shown that the hippocampus plays an important role in appetite and weight regulation (Alosco et al., 2017;Hsu et al., 2018;Li et al., 2021), and is crucial in the mediation of executive function . Obesity, however, can lead to impaired executive function (Willeumier et al., 2011;Davidson et al., 2019). The results of this study may be explained by the leptin synthesized and secreted by adipocytes. Leptin is transported across the blood-brain barrier (BBB) via a saturable transport system (Banks, 2004) and is a potent modulator of excitatory synaptic transmission at hippocampal CA1 synapses Harvey, 2014, 2021). Consequently, the ability of leptin to regulate excitatory synaptic efficacy at CA1 synapses suggests that leptin is likely to influence cognition processes (Hamilton and Harvey, 2021;Harvey, 2022). However, leptin transport across the BBB is impaired in high BMI phenotypes (Banks et al., 1999), which has an adverse effect on the synaptic transmission of hippocampal CA1 (Grillo et al., 2011). Furthermore, circulating leptin levels were significantly reduced in patients with cognitive impairment (Power et al., 2001;Johnston et al., 2014), which may explain why patients with SCD have increased CA1 gray matter volume and impaired executive function.
In this study, we did not find significant association between gray matter volume changes in CA1 and inhibitory control function. Previous studies have found that the impairment of executive function is not always correlated with the change of hippocampal subregion volume that is partly consistent with our results. Ge et al. (2021) showed that the volume of hippocampal subregions such as CA1 gradually decreased from the amyloid-negative group to the amyloid-positive group in the elderly people (including 87 individuals with normal cognition, 46 with MCI, and 10 with AD), and as amyloid pathology persisted, impairment of executive function was more significantly associated with changes in hippocampal tau lesions/volume. There seems to be a threshold effect in the relationship between hippocampal atrophy and executive function (Oosterman et al., 2012), that is, severe to very severe but not moderate hippocampal atrophy is associated with lower executive function. We speculate that due to the SCD in the preclinical stage of AD, its behavioral or brain pathological changes are not very serious, resulting in the weak correlation between executive function and hippocampal CA1 volume. Further study is needed to confirm this hypothesis.
This study has several limitations. First, the subjects we included did not contain any BMI level such as lower than 18.5 kg/m 2 , so it was impossible to examine the effects of insufficient body mass on executive function and hippocampal subregion. Secondly, the sample size included in this study is insufficient, which may lead to the instability of the research results. Furthermore, this is a cross-sectional study, and it is impossible to follow the subjects for a long time to observe the changes, correlations, and possible causes of cognitive function and hippocampal subregions caused by different BMI with the development of disease and age. Further longitudinal study should be done.
Conclusion
Our results showed that higher BMI was associated with lower levels of executive function in the SCD and larger hippocampal subregion (CA1) gray matter volume. These associations suggest that obesity increases hippocampal gray matter volume in the elderly with SCD but is not sufficient to compensate for the impairment of executive function caused by the disease. Future studies are necessary to better elucidate these associations from the perspective of other mechanisms.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by Medical Ethics Committee of the Affiliated Rehabilitation Hospital of Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
Author contributions
JLi: experimental design, analysis, and manuscript preparation and revision. RC: data analysis and manuscript preparation and revision. GC and SX: data collection and data analysis. QS and JLu: data collection. YW, ML, and HL: data analysis. All authors contributed to drafting the manuscript and have read and approved the final manuscript.
|
2022-08-17T13:43:23.287Z
|
2022-08-17T00:00:00.000
|
{
"year": 2022,
"sha1": "473096e51329455e543e58ff68e4993cc8aef0e8",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "473096e51329455e543e58ff68e4993cc8aef0e8",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
268479233
|
pes2o/s2orc
|
v3-fos-license
|
Emerging therapeutic drug monitoring technologies: considerations and opportunities in precision medicine
In recent years, the development of sensor and wearable technologies have led to their increased adoption in clinical and health monitoring settings. One area that is in early, but promising, stages of development is the use of biosensors for therapeutic drug monitoring (TDM). Traditionally, TDM could only be performed in certified laboratories and was used in specific scenarios to optimize drug dosage based on measurement of plasma/blood drug concentrations. Although TDM has been typically pursued in settings involving medications that are challenging to manage, the basic approach is useful for characterizing drug activity. TDM is based on the idea that there is likely a clear relationship between plasma/blood drug concentration (or concentration in other matrices) and clinical efficacy. However, these relationships may vary across individuals and may be affected by genetic factors, comorbidities, lifestyle, and diet. TDM technologies will be valuable for enabling precision medicine strategies to determine the clinical efficacy of drugs in individuals, as well as optimizing personalized dosing, especially since therapeutic windows may vary inter-individually. In this mini-review, we discuss emerging TDM technologies and their applications, and factors that influence TDM including drug interactions, polypharmacy, and supplement use. We also discuss how using TDM within single subject (N-of-1) and aggregated N-of-1 clinical trial designs provides opportunities to better capture drug response and activity at the individual level. Individualized TDM solutions have the potential to help optimize treatment selection and dosing regimens so that the right drug and right dose may be matched to the right person and in the right context.
Introduction
TDM is the measurement of drug concentration(s) in blood, plasma, or other biosamples, in order to determine the optimal drug dosing regimen for an individual (Kang and Lee, 2009;Clarke, 2016;Ates et al., 2020).Its adoption has been historically limited due to challenges with available techniques, which include chromatographic strategies that may be coupled with immunoassays or other detection methods (Ates et al., 2020).While these approaches have utility, wider implementation has been hindered due to factors including issues with low throughput and inaccurate detection despite high sensitivity and specificity for chromatography methods; and low specificity, in spite of lower costs, simpler protocols, and highthroughput flexibility, for immunoassay approaches (Carlier et al., 2015;Ates et al., 2020;Tuzimski and Petruczynik, 2020).However, more recent technological developments will enable more widespread TDM applications in the clinic and in research.One area that will benefit from these developments is precision medicine, which holds promise towards better tailoring effective drug treatments to improve the health of patients, and also improving our understanding of drug pharmacokinetics (PK) and pharmacodynamics (PD) at the individual level.Finding solutions to effectively match drugs and doses to patients is needed, particularly due to the fact that although a large variety of drugs are routinely prescribed by physicians or are available as over the counter (OTC) drugs, there have not been improvements in health in the general population for some time (Sánchez-Sánchez et al., 2021); and further, more than 50% of prescribed or dispensed drugs are used inappropriately (Sánchez-Sánchez et al., 2021).In addition, an individual may take multiple drugs to treat different conditions, potentially creating problematic drug-drug interactions, or may take dietary supplements, which are not regulated by or registered with the Food and Drug Administration (FDA), but may be marketed to promote health benefits.New TDM technologies thus have the potential to enhance scientific understanding of which drugs truly benefit an individual's health.
While TDM has been used in specific contexts, there are opportunities to widen its scope of use.Drugs for which TDM are commonly used include anti-epileptic drugs (Patsalos et al., 2018), antibiotics (Muller et al., 2018;Wicha et al., 2021;Abdul-Aziz et al., 2022), anti-cancer drugs (Decosterd et al., 2015;Buclin et al., 2020), and others (Punyawudho et al., 2016;Imamura, 2019).Key criteria used to determine which drugs may be appropriate for, or benefit from, TDM include those that demonstrate: 1) inter-subject PK variability, 2) intra-subject PK stability over time, 3) a clear correlation between drug concentration and clinical response and/or toxicity, 4) a narrow therapeutic window, 5) in-availability of PD biomarkers of clinical response and/or drug toxicity, and 6) consistent treatment duration to enable dosage changes (Ates et al., 2020;Buclin et al., 2020).Meeting these criteria requires rigorous trials and analyses that are not performed for all prescription and OTC drugs, often due to significant time and cost investments, but could, in theory, be used to optimize the use of any drug.TDM would also benefit from the use of pharmacogenetic testing to improve drug prescription strategies.Such testing was implemented in, for example, the PREPARE (Preemptive Pharmacogenomic Testing for Preventing Adverse Drug Reactions) study, which utilized a 12-gene pharmacogenomic panel encompassing 50 germline variants to assess adverse reactions associated with a genotype-guided drug treatment compared to standard of care (Swen et al., 2023).Notably, using genotypeguided drug treatments resulted in a 30% decrease in clinically relevant drug reactions (Swen et al., 2023).Integrating strategies such as pharmacogenetic testing with TDM will be important to consider in order to maximize therapeutic benefits for patients.
Outlining efficient strategies to determine which drugs are suitable for TDM, which subjects would benefit from TDM, and how to appropriately apply TDM in these situations remains a challenge due to the unique comorbidity, genetic, epigenetic, behavioral, and environmental exposure profiles that each individual possesses (Kang and Lee, 2009;Landmark et al., 2016).The use of strategies such as biosensor and wearable technologies, as well as medical digital twins, computational simulations of realworld patients that utilize key features of an individual to forecast how they may respond to injury, infection, or treatments (Laubenbacher et al., 2022) and which have specifically shown promise for personalizing pain medication management (Bahrami et al., 2023), have the potential to address such challenges, alleviate the burden of implementing TDM strategies, and also enable the use of continuous drug monitoring.Continuous monitoring is particularly attractive for facilitating precision medicine as it: 1) creates a closed-loop system for real-time assessment of drug responses and fine tuning of doses; 2) can help to expedite drug development and clinical trials by quickly identifying clinically meaningful trends of a drug's effects; 3) enables the collection of longitudinal data, versus the collection of temporally fragmented data, to improve reliability of predictions and to strengthen data interpretation; and 4) can be used to delineate intra-and inter-individual variability in drug response and PK to ultimately improve individual treatment outcomes, which may be extrapolated to larger populations (Bian et al., 2021).Ideally, such efforts, in combination with pharmacogenetic testing, could decrease the incidence of adverse events (AEs), minimize drug toxicity, improve tolerability, reduce costs, decrease burden on both patient and clinical staff, and improve therapeutic outcomes (Bian et al., 2021).Such benefits may be further maximized using N-of-1 clinical trials, which treat each subject as an independent study, and which may be used to determine if a subject responds to an intervention, and to determine the most effective treatment for that subject (Schork and Goetz, 2017;Selker et al., 2022).Data from separate trials may in turn be aggregated to make broader claims across a population.By taking into account the unique nature of each individual, these designs differ from traditional trials, which are designed to evaluate interventions in the greater population and whose aim is not necessarily to find an effective treatment for each subject (Schork and Goetz, 2017;Selker et al., 2022).TDM also relies on the assumption that a drug's PK informs its PD, but this does not always hold true (Open Resources for Nursing and Ernstmeyer, 2023).N-of-1 analyses will thus help to characterize inter-individual variability in these associations.The strategic implementation of new TDM technologies within an N-of-1 framework has the potential to advance personalized medicine in novel ways.In this mini-review, we will discuss emerging TDM technologies and key factors that impact TDM, as well as opportunities to implement N-of-1 and aggregated N-of-1 designs, to maximize the benefits of TDM in the conduct of precision medicine.
Emerging TDM technologies
New technologies potentiating TDM include biosensors and wearables which can enable the translation of specific measurements on individuals into quantifiable drug-induced signals (Ates et al., 2020).Drug-induced signal detection from, e.g., plasma samples, typically occurs as a result of non-covalent binding of a recognition element (antibodies, enzymes such as cytochrome P450 (i.e., enzyme-linked assays (ELA)), membranes, polymers, or aptamers) to an analyte (Ates et al., 2020) and is performed most commonly using optical and electrochemical methods (Garzón et al., 2019;Ates et al., 2020;Pollard et al., 2021;Qian et al., 2021).With optical methods, a biorecognition event generates an optical signal, or elicits a change in environmental optical properties, which is subsequently captured by a photodetector (Dincer et al., 2019;Kim et al., 2019).This approach is used to measure concentrations of antibiotics (Zengin et al., 2014;Cappi et al., 2015;Losoya-Leal et al., 2015;Spiga et al., 2015;Tenaglia et al., 2018), anti-cancer drugs (Zhao et al., 2015;Yockell-Lelièvre et al., 2016), antifungals (Berger et al., 2017), anti-epileptic drugs (Yamada et al., 2015), therapeutic drug antibodies (Lu et al., 2017;Beeg et al., 2019), and others (Liu et al., 2020;Bian et al., 2021;Weber et al., 2021) (Table 1).With electrochemical methods, a biorecognition event generates an electrical signal proportional to the drug concentration (Dincer et al., 2019).Electrochemical biosensors have been used with antibiotics (Kling et al., 2016;Bruch et al., 2017;Yu et al., 2018;Dauphin-Ducharme et al., 2019), antiepileptics (Mobed et al., 2022), anti-cancer drugs (Tajik et al., 2015;Lima et al., 2018;Sukanya and Rath, 2022), as well as antifungals (Tuchiu et al., 2022) (Table 1).Both optical and electrochemical biosensors demonstrate similar advantages including high sensitivity, reliability, and multiplexing capabilities, with electrochemical solutions also enabling on-site monitoring and usage of small sample volumes (Dincer et al., 2019;Ates et al., 2020).Disadvantages associated with optical biosensors include their susceptibility to background noise and environmental interference, potential signal loss depending on the matrix that is used, the fragility of instrumentation, and high instrumentation costs (Dincer et al., 2019;Ates et al., 2020), while electrochemical approaches may harbor issues with non-specific binding of analytes (Ates et al., 2020).In general, biosensor utility for TDM is affected by factors such as the degree of invasiveness of sample collection for analyte analysis and signal amplification strategies which may increase the sensitivity and the selectivity of signal detection (Dincer et al., 2019;Ates et al., 2020).Additional factors influencing TDM include the sample matrix that is used and how samples are collected.The most commonly used matrices for TDM are plasma and whole blood and thus the relationships between matrix drug concentration and therapeutic response are best characterized for these sample types (Ates et al., 2020).However, variability in hematocrit across subjects may introduce bias in TDM (Ates et al., 2020;Sikma et al., 2020).Thus, use of other types of matrices, such as sweat, interstitial fluid (ISF) and oral fluids, are being explored (Kiang et al., 2012;Ghareeb and Akhlaghi, 2015;Gao et al., 2019) and will enable wider adoption of TDM studies and practices.The mode of sample collection may also influence the success of TDM.Microsampling technologies such as dried blood 
spots (Gaissmaier et al., 2016;Zakaria et al., 2016;Li et al., 2021) and remote collection alternatives such as those commercially available from Neoteryx (Gruzdys et al., 2019;Williams et al., 2021) or Tasso (Williams et al., 2021;Wan et al., 2022), may help decrease the invasiveness, burden, and cost of sample collection, but will also require testing and validation of reliability.Furthermore, timing of sample collection may introduce an incomplete picture of drug concentration levels, especially for drugs with long half-lives or if the subject has hepatic and/or renal insufficiency affecting drug metabolism (Ates et al., 2020).Solidifying continuous TDM solutions will aid in resolving these issues.
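Regardless of the transduction principle, most of these sensors report a raw optical or electrochemical signal that is mapped to a drug concentration through a calibration curve built from standards of known concentration. The sketch below is purely illustrative and is not tied to any device in Table 1; the linear response, the readings, and the therapeutic window are assumptions.

```python
# Sketch: calibration-curve workflow for a hypothetical biosensor whose signal
# is approximately proportional to drug concentration. Real assays often need
# non-linear (e.g., 4-parameter logistic) fits; a line keeps the sketch short.
import numpy as np

# Calibration standards: known concentrations (ug/mL) and measured signals (a.u.)
conc_std = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
signal_std = np.array([0.11, 0.20, 0.42, 1.01, 2.05])   # made-up readings

slope, intercept = np.polyfit(conc_std, signal_std, deg=1)

def signal_to_concentration(signal: float) -> float:
    """Invert the fitted calibration line to estimate concentration."""
    return (signal - intercept) / slope

# Convert a patient's sample reading and flag it against an assumed
# therapeutic window of 2-8 ug/mL.
sample_signal = 0.75
c = signal_to_concentration(sample_signal)
low, high = 2.0, 8.0
status = "sub-therapeutic" if c < low else "supra-therapeutic" if c > high else "therapeutic"
print(f"estimated concentration = {c:.2f} ug/mL ({status})")
```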
Continuous TDM solutions
Continuous TDM yields significant benefits over traditional TDM, whereby measurements have been typically collected only at single or specific time points (Hiemke, 2008).In addition to providing a more comprehensive view of drug concentration changes over an period of time, continuous TDM can also improve optimization of therapeutic dosing and treatment decision-making, reduce drug toxicity, enable characterization of PK dynamics within and across subjects to aid in creating more reliable PD and PK models, reduce burden on the subject and on clinical staff, and ultimately help to expedite clinical trials and drug development (Bian et al., 2021).To perform continuous TDM, electrochemical biosensors may be used as they can be modified using functional nanomaterials and immobilized antibodies or aptamers to improve matrix analysis and target capture, respectively; and can be integrated with microfluidic and wearable, or implantable, devices (Bian et al., 2021).
Both in vitro and ex vivo methods have demonstrated the utility of electrochemical biosensors for continuous TDM.In vitro methods include measurements on extracted blood or buffers, whereas ex vivo methods involve the use of a discrete substrate outside of the body (Bian et al., 2021).In vitro methods encompass approaches that modify electrode surfaces with nanomaterials to improve biosensing capabilities (Maduraiveeran et al., 2018;Bian et al., 2021;Vaneev et al., 2022), and have been used to monitor drugs including naproxen (Baj-Rossi et al., 2014;Stradolini et al., 2018;Sweilam et al., 2018), propofol and paracetamol (Stradolini et al., 2018), and lithium (Sweilam et al., 2018).Additional elements that may be used are aptamers, single-stranded DNA or RNA molecules that have high target binding affinity and specificity (Bian et al., 2021), which have been used for detection of drugs including tenofovir (Aliakbarinodehi et al., 2017;Tzouvadaki et al., 2017), vancomycin (Dauphin-Ducharme et al., 2019), imatinib (Tartaggia et al., 2021) and anti-fungals (Wiedman et al., 2017).In contrast to in vivo approaches, ex vivo methods utilize an external monitoring substrate such as a microfluidic device, or a wearable sensor (Bian et al., 2021).While microfluidic-based sensors have been primarily tested in animal models to continuously monitor drugs such as doxorubicin (Ferguson et al., 2013;Karnik, 2017;Mage et al., 2017;Bian et al., 2021), promising wearable options that utilize sweat or microneedle sensors are being explored for humans (Gao et al., 2016;Chung et al., 2019;Bian et al., 2021).These types of wearable biosensors can be placed on the epidermis to measure drugs and analytes in sweat, following physical activity or through sweat induction (Tai et al., 2018;Chung et al., 2019;Tai et al., 2019), or from ISF which is accessed from microneedle penetration into the dermal-interstitial space (Goud et al., 2019;Gowers et al., 2019;Rawson et al., 2019).Despite challenges around sufficient sample collection and validating blood versus sweat-based drug concentrations, wearable sweat biosensors have been used to perform real-time monitoring of caffeine (Tai et al., 2018) and levodopa (Tai et al., 2019).Microneedle-based sensors have similar challenges, such as the need to improve detection limits, as they capture measurements from ISF.This approach is most commonly used for monitoring plasma glucose for management of diabetes (Lee et al., 2016;Wang et al., 2016), but also has utility for continuous TDM of levodopa (Goud et al., 2019) and antibiotics (Gowers et al., 2019;Rawson et al., 2019).
In vivo biosensors, which are suitable for feedback-controlled closed-loop systems, can also be used for continuous TDM and drug administration.These solutions, which are commonly used in the form of implantable biosensors for measuring and maintaining normal plasma glucose levels in diabetic subjects, represent an optimal strategy towards precision drug management as they allow for a more complete view of PK changes within and across subjects (Bian et al., 2021).In vivo biosensors outside of glucose monitoring have been primarily explored in animal models to monitor doxorubicin and tobramycin (Arroyo-Currás et al., 2017), and feedback-controlled dosing of vancomycin (Dauphin-Ducharme et al., 2019).One notable observation from animal studies is the high level of interanimal variance (>50%) in PK-related measurements of drug distribution, excretion, and maximum plasma concentration, and the absence of an association between these factors and body surface area (Vieira et al., 2019).This is further exacerbated by metabolic variation across species (Bian et al., 2021) and emphasizes the need to develop and optimize TDM at the individual level.Overall, improvements in sensor technology, including smart bandages (Mostafalu et al., 2018;Dincer et al., 2019), disposable wearable sweat and ISF sensors (Zhang et al., 2016;Ainla et al., 2018;Kim et al., 2018;Dincer et al., 2019), voltammetry-based sensing modalities that do not rely on recognition elements (Lin et al., 2020), and integration of sensors into smartphonebased tools (Madrid et al., 2022), will pave the way for future adoption of these solutions.
Considerations and opportunities for TDM in precision medicine
Polypharmacy, supplement use, and drug/supplement interactions
One key challenge with traditional or continuous TDM is determining how to perform analyses and interpret data in the context of drug combinations, polypharmacy, and the use of dietary supplements. Polypharmacy is the simultaneous use of five or more prescription and non-prescription medications by one person (Masnoon et al., 2017). At least four out of ten older adults meet this definition and almost 20% take at least ten drugs (Brownlee and Garber, 2019). When including dietary supplements and OTCs, approximately 67% of older adults fulfill the definition of polypharmacy (Qato et al., 2016). Polypharmacy can lead to serious drug interactions, decreased adherence to medication (Elbeddini et al., 2021), suboptimal treatment (Darwich et al., 2017), and an increase in the risk of AEs by 7%-10% with each medication that is taken (Elbeddini et al., 2021). Oversight of dietary supplements is particularly challenging, since it is estimated that they are used by 80% of all adults (Levinson, 2012). However, only 23% do so based on the advice of their healthcare professional (Akabas et al., 2016). Furthermore, since quality standardization of supplements is minimal, there are significant safety, quality, and efficacy concerns (Ronis et al., 2018). Based on AEs submitted to the FDA, 40,546 AEs resulting from consumption of vitamins, minerals, proteins, and unconventional diets have been reported since 2004 (FDA, 2023). Although TDM has been primarily applied towards monitoring of prescription drugs, expanding its application to supplements is critical, especially given the possibility of synergistic or antagonistic effects of co-administered medications (Shipkova and Christians, 2019).
Since TDM is based on a relationship between drug concentration and a therapeutic effect, determining the clinical and biological impacts of drug and supplement interactions is needed.Although dosing adjustments may be used to counter PK interactions, drug-drug and drug-supplement interactions may still result in PK or PD effects (Asher et al., 2017).A PK interaction may occur, e.g., if a drug has the same mechanism of absorption, distribution, metabolism, or excretion (ADME) as a co-administered supplement, whereby competition at ADME processes can both influence the concentration of the drug or supplement at the site of action (Palleria et al., 2013;Asher et al., 2017;Grogan and Preuss, 2023) and affect the expected actions of the drug (Figure 1A).On the other hand, a PD effect may occur if one drug or supplement directly impacts the mechanism of another drug or supplement, and may alter the clinical efficacy of a drug without any associated changes in drug concentration (Palleria et al., 2013;Asher et al., 2017;Niu et al., 2019) (Figure 1B).With respect to drugdrug interactions, a data mining analysis of the FDA's AE Reporting System (AERS) for side effect profiles found that two highly prescribed drugs, the lipid-lowering agent pravastatin and the anti-depressant paroxetine, had synergistic effects on blood glucose levels only when taken together (Tatonetti et al., 2011).In a separate analysis of AERS, the co-administration of the diabetes drug rosiglitazone and the incretin mimetic exenatide dramatically decreased myocardial infarctions associated with rosiglitazone alone (Zhao et al., 2013).This study further found 19,133 drug combinations whereby one drug may reduce AEs associated with a second drug (Zhao et al., 2013).Another study that evaluated patients who received triple antiepileptic drug combinations, found that AEs and seizures occurred more often in patients taking three or four drugs together (Grundmann et al., 2017).Such compelling findings provide evidence of how drug interactions may yield both positive and negative impacts.Examples of known drug-supplement interactions include: goldenseal (Hydrastis canadensis) supplements which are recommended not to be administered in combination with the majority of OTC and prescription drugs; and St. John's wort (Hypericum perforatum), which can decrease the efficacy of numerous drugs including warfarin, protease inhibitors, irinotecan, theophylline, and oral contraceptives (Asher et al., 2017).Characterizing and predicting the effects of such interactions will be important for the development of feedback-controlled closed loop TDM solutions to maximize therapeutic benefits.
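A simple way to operationalize this kind of screening is to check every pair in a patient's combined drug and supplement list against an interaction table. The toy sketch below only reuses the examples mentioned in the text as placeholder entries; a real implementation would query a curated interaction database rather than a hand-written dictionary.

```python
# Toy sketch: screen a combined medication + supplement regimen against a
# small interaction table. Entries are illustrative placeholders drawn from
# the examples discussed in the text, not a clinical reference.
from itertools import combinations

INTERACTIONS = {
    frozenset({"pravastatin", "paroxetine"}): "possible synergistic effect on blood glucose",
    frozenset({"st johns wort", "warfarin"}): "supplement may reduce anticoagulant efficacy",
    frozenset({"metoclopramide", "digoxin"}): "PK interaction: reduced digoxin absorption",
}

def screen(regimen: list[str]) -> list[str]:
    """Return a warning for every interacting pair found in the regimen."""
    warnings = []
    for a, b in combinations(sorted(set(regimen)), 2):
        note = INTERACTIONS.get(frozenset({a, b}))
        if note:
            warnings.append(f"{a} + {b}: {note}")
    return warnings

print(screen(["warfarin", "st johns wort", "vitamin d", "pravastatin", "paroxetine"]))
```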
One strategy that will assist with the management of potential drug and supplement interactions are multiplexed TDM solutions to measure concentrations of multiple targets.ELAs are being developed to measure multiple antibiotics simultaneously (Kling et al., 2016) and mass spectrometry-based methods (e.g.,,LC-MS/ MS) have been developed to perform TDM of multiple immunosuppressant drugs (Yang and Wang, 2008;Seger et al., 2009), anti-viral drugs (Conti et al., 2018), antibiotics (Kling et al., 2016;Schuster et al., 2018), anti-depressants (Lindner et al., 2019), anti-psychotic medications (Patteet et al., 2014), and mono-clonal antibodies (Willeman et al., 2019).While these traditional TDM methods are accurate and precise, they suffer from high instrumentation costs, increased turnaround times, and the need for analyses to be performed in clinical laboratories.Although optical-based biosensor solutions for identification of multiple proteins (Spindel and Sapsford, 2014;Rafat et al., 2023) could ultimately be adopted for TDM, continued improvements in biosensors are needed to enable the detection and monitoring of multiple drugs and supplements from the same matrices.
TDM within N-of-1 trial designs
In the clinical setting, TDM categorizes drug concentrations as sub-therapeutic, therapeutic, supra-therapeutic, or toxic based on statistically determined ranges from clinical trials or healthy populations (Cooney et al., 2017; Ates et al., 2020), or on expert opinion (Cooney et al., 2017). However, such trials did not account for an individual's unique clinical, genetic, phenotypic, or other features, which may influence TDM measurements and interpretation of data. In other words, although the basic premise of TDM is that a drug's PK is informative of PD, this does not always hold true (Open Resources for Nursing and Ernstmeyer, 2023), but the use of N-of-1 trials will help to shed light on inter-individual PK and PD variability. Therapeutic ranges may also be modified by electrolyte balance, acid-base balance, age, bacterial resistance, plasma protein binding, or drug interactions (Aronson and Hardman, 1992). It is well known that people treated with drugs such as phenytoin, warfarin, digoxin, and fentanyl have interindividual PD variability at a given drug plasma concentration, as well as significant cross-subject differences in steady state plasma drug concentrations (Kang and Lee, 2009; Bahrami et al., 2023), which has also been observed for anti-cancer drugs (Cardoso et al., 2020). Further, studies may not repeat TDM measurements, but the predictive performance of model-informed precision dosing has been found to improve with the addition of longitudinal TDM data (Wicha et al., 2021).
FIGURE 1 | Co-administration of drugs and supplements may result in PK and/or PD-related interactions. During co-administration of drugs or supplements, PK drug-drug (DD) or drug-supplement (DS) interactions may occur if the individual compounds share mechanisms or impact processes across absorption (A), distribution (D), metabolism (M), and/or excretion (E) functions (A) (Palleria et al., 2013; Grogan and Preuss, 2023). Such interactions may cause a change in the concentration of the drug or supplement at its site of action. For example, metoclopramide, a dopamine receptor antagonist that treats nausea and vomiting in patients with gastroesophageal reflux disease, may activate gastric motility and decrease absorption of drugs such as digoxin, a heart failure medication (Johnson et al., 1984). Separately, PD (DD or DS) interactions may occur if one of the compounds has a direct effect on the mechanism of the other compound (B). For example, a synergistic interaction occurs when 5-fluorouracil (5-FU) is administered with folinic acid to increase inhibition of thymidylate synthase in order to kill cancer cells (Keyomarsi and Moran, 1986; Niu et al., 2019); alternatively, an antagonistic interaction occurs when angiotensin converting enzyme inhibitors (ACEi) are administered with thiazide diuretics, which are used to treat hypertension, resulting in increased hypotension and diuresis (Mignat and Unger, 1995; Niu et al., 2019).
One solution may be evaluating the efficacy and safety of drugs by incorporating TDM into an N-of-1 trial design alongside longitudinal biomonitoring and deep phenotyping of individuals (Lillie et al., 2011;Schork and Goetz, 2017).This design may be used to perform TDM, followed by aggregation of cross-trial data to identify potential sub-populations and TDM trends that may be associated with covariates such as genetic and pharmacogenomic variants or clinical characteristics, including sex, body weight, comorbidities, and other features (Buclin et al., 2020).Since bodily distribution of drugs exhibits both spatial and temporal differences, there may be differences in organ-specific drug kinetics after systemic drug administration (Weiss, 1999;Bian et al., 2021), and an N-of-1 approach will help to better characterize these nuances, as well as cross-subject PK and PD variability (Levy, 1994;Gross, 2001;Kang and Lee, 2009), in order to optimize PK/PD models for TDM.Moreover, longitudinal, and ideally continuous, single subject analyses that incorporate TDM, will help to better define the relationship between drug availability (i.e., dose), therapeutic impact, and physiological functions, while minimizing drug toxicity.
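As a concrete illustration of how longitudinal TDM samples from a single subject can feed an individual-level PK model within an N-of-1 design, the sketch below fits a one-compartment oral-absorption model to sparse concentration measurements; the model choice, dose, and observations are assumptions for illustration only.

```python
# Sketch: fit a one-compartment oral-absorption PK model to one subject's
# longitudinal TDM samples. Dose, sample times, and concentrations are invented.
import numpy as np
from scipy.optimize import curve_fit

DOSE = 500.0  # mg, single oral dose (assumed, bioavailability taken as 1)

def one_compartment(t, ka, ke, v):
    """Plasma concentration after a single oral dose with first-order absorption."""
    return (DOSE * ka / (v * (ka - ke))) * (np.exp(-ke * t) - np.exp(-ka * t))

# Sparse TDM samples for one subject: time (h) and concentration (mg/L)
t_obs = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])
c_obs = np.array([3.1, 5.0, 6.2, 4.8, 2.3, 1.1])

params, _ = curve_fit(one_compartment, t_obs, c_obs,
                      p0=[1.0, 0.2, 50.0], bounds=(1e-3, np.inf))
ka, ke, v = params
half_life = np.log(2) / ke
print(f"ka={ka:.2f}/h, ke={ke:.2f}/h, V={v:.1f} L, t1/2={half_life:.1f} h")

# Individual-level parameters like these can then drive dose adjustment, or be
# pooled across aggregated N-of-1 trials to study inter-individual variability.
```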
Another complication with TDM is improving drug dose optimization and treatment management in ways that are therapeutically beneficial for the patient.The use of populationbased PK data to determine dosing algorithms overlooks numerous factors unique to each person.However, strategies such as incorporating the use of patient-derived organoids to perform drug screening, dose optimization, and treatment holds promise for improving patient outcomes (Bose et al., 2021).Patient-derived organoids capture important patient-specific features, including the patient's physiology and tissue microenvironments, and is being explored for the treatment of different cancers (Zhou et al., 2021;Shin et al., 2023;Schmäche et al., 2024), including assessing cancer drug resistance (Sun et al., 2024), and digestive disorders (Wang et al., 2022), to name a few.Integrating this approach with patient-focused N-of-1 trials will help to identify, handle, and mitigate issues associated with using population-based data.They can also create opportunities to establish more effective dosing strategies and treatment regimens.
Future directions
The development of biosensor and TDM technologies creates a valuable opportunity to improve and expedite precision medicine and drug development with the goal of benefitting patients.The continued evolution of nanomaterials, manufacturing and preparation of both electrochemical and optical biosensors, and integration of biosensors into wearables, will have beneficial implications for TDM.The success of TDM relies on the accuracy of measuring drug concentrations in various contexts, and developing sensitive and precise PK/PD models and algorithms, which could in theory be expanded using digital twins and in silico clinical trials that are appropriately tailored to each person.The use of TDM as part of N-of-1 trials with longitudinal biomonitoring and deep phenotyping will also enable precision medicine in very appropriate ways.Such studies, in combination with strategies such as patient-derived organoid models, would provide a foundation to improve patient outcomes by optimizing drug dosing and treatment schedules.These studies would also help to identify which individual would benefit from which drugs, and aggregated analyses of N-of-1 studies could identify markers of drug response.Such analyses may also help with, e.g., the identification of molecular PD biomarkers, or drug-specific biomarkers, that reflect biochemical and functional changes in the body that occur in response to a drug (Shipkova and Christians, 2019); the characterization of ADME processes associated with drug-drug and drug-supplement interactions; improving our understanding of human biology (Schork et al., 2023); and validating drug repurposing opportunities and drug-patient matching (Cremers et al., 2016).
While the immense benefits of TDM technologies are apparent, TDM is not without its challenges.However, ongoing efforts across multiple areas will help to pave the way for wider and intelligent adoption of this technology.Implementing continuous TDM will result in the generation of massive amounts of data, which necessitates finding solutions to address data management and data confidentiality.This is exemplified by the use of continuous glucose monitoring, where methods are still evolving to best analyze continuous data (Rodbard, 2016).TDM analyses may also shed light into determining how a drug should be prescribed in order to maximize beneficial clinical outcomes, such that drug candidates that rely on TDM may have lower priority in development pipelines (Buclin et al., 2020).Additionally, TDM may show that patients benefit from lower or fewer doses, or even potentially reveal that certain therapeutics may not be effective.Further, TDM is currently costly, which has limited wider adoption (Buclin et al., 2020;Zhao et al., 2022) such that progress using TDM would benefit from investment from organizations and therapeutic developers.Despite these challenges, the continued development of biosensor technologies and integrating TDM into precision medicine approaches have the potential to significantly improve patient outcomes and positively change the way in which medicine is performed.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers.Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
TABLE 1
TDM biosensor technologies evaluated using human matrices.
|
2024-03-17T16:20:34.397Z
|
2024-03-13T00:00:00.000
|
{
"year": 2024,
"sha1": "7ca0d051c229715ec2e3325d70f73c524f2218da",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/journals/pharmacology/articles/10.3389/fphar.2024.1348112/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b1e8782315d8ba67f96e14443e1bdcca4be19154",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
216666923
|
pes2o/s2orc
|
v3-fos-license
|
The instruments development aassessment of the affective competence: values contained in history
This study aims to develop valid and reliable instruments of the affective competency assessment (historical values). The instrument trial involved 935 students of high school grade XI IPS in Medan. Respondents were selected by random sampling and quota sampling. Data collected is analyzed using Structural Equation Modeling (SEM) and second order in Confirmatory Factor Analysis (CFA) with the standard loading factor (SLF) value criteria. From CFA analysis with the Maximum Likelihood method on the first order, it is known that all instrument items have a factor load of> 0.32 and the results of the model match test with a significance level of 0.0001. The results of reliability testing with Construct Reliability (CR) and Variance Extracted (VE) show that the instruments developed have met the acceptance limit of the reliability coefficient of r_xx '≥ 0.70 for CR and rxx’ ≥ 0.50 for VE. In conclusion, the instrument of affective competency assessment on historical values developed includes 5 dimensions, 14 indicators and 84 items including valid and reliable categories.
Introduction
Assessment is an integral part of the learning process and the achievement of learning objectives in accordance with the established curriculum. This is in line with the opinion of Ducan and Criss assessment in learning is intended to determine the level of mastery and competence of students towards the material being studied, obtain information on the potential and motivation of students, materials to diagnose student learning difficulties [1] Assessment of learning history that takes place in schools has tended to be oriented only to the cognitive aspects of historical facts. Affective aspects related to the values contained in historical events are ignored and the assessment of the affective aspects tends to be done in writing so as not to give a factual picture of the affective dimension. This condition requires an improvement in the affective aspiration assessment system, so that the objectives and process of learning history are more meaningful. Efforts to improve the affective assessment system of history learning, require the development of instruments of affective assessment instruments in harmony. Departing from this thought, the aim of this study was to develop an instrument for evaluating valid and reliable affective competency learning outcomes that teachers can use to measure students' appreciation of the values contained in history. Instrument is a tool used to measure an object measuring Scriven, both natural phenomena and social phenomena, including measuring student achievement or other factors that have a relationship between [2]. Valid instruments will determine the quality of education [3]. The steps for developing instruments include : 1) theoretical study to strengthen the concept of variables and their opresional definitions, 2) compile dimensions and indicators, 3) make grilles and write instrument items, 4) theoretical validation for experts and panelists, 5) empirical trials, 6) validity analysis and empirical data reliability, 7) perfecting instruments with their completeness so that they can be used as standard instruments for affective competency assessment in history learning. [4] The development of affective values encompasses historical aspects of the concept, that history contains knowledge about past events, has a mission to give birth to educated generations who are soulful, passionate and live the noble values of the nation [5] [10]. So that the historical value becomes, aspirations, views of life and references in acting and behaving or becoming the basic values and operational value of the National Daily Council of Force '45 and the basis of personal formation and mental attitude of Meulen [11] [12]. In the formulation of Bloom and Krathwohl the affective dimensions of history learning include receiving, responding, valuing, organization, and characterization [13]
Methods
The focus of this study is the development of affective instruments, and the method used is the instrument development research method. It begins with a theoretical study and theoretical validation involving experts and panelists to assess the suitability of the dimensions with the variable, the suitability of the indicators with the dimensions, and the suitability of the items with the indicators. The quantitative analysis of the expert ratings uses a Likert scale of 1 to 5 with Aiken's validity coefficient, V = Σs / [n(c − 1)] with s = r − lo, where r is the rating given by a rater, lo is the lowest possible rating, c is the number of rating categories, and n is the number of raters; an item is considered valid if V ≥ 0.2, and Hoyt inter-rater reliability is considered acceptable if r_xx' ≥ 0.7. Furthermore, instrument testing was carried out in two stages (102 items in the first stage and 84 items in the second stage), involving 935 respondents from grade XI IPS (social sciences) high school students (515 respondents in stage 1 and 420 respondents in stage 2).
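For illustration, Aiken's V for a single item can be computed directly from the panelists' ratings as in the sketch below (generic code, not the authors' analysis; the example ratings are invented and the 0.2 cut-off follows the criterion stated above).

```python
# Generic computation of Aiken's V for one instrument item rated by a panel
# on a 1-5 scale. Example ratings are invented for illustration.
def aiken_v(ratings, lo=1, c=5):
    """V = sum(r - lo) / (n * (c - 1)); values near 1 indicate strong agreement
    among raters that the item is relevant/valid."""
    n = len(ratings)
    s = sum(r - lo for r in ratings)
    return s / (n * (c - 1))

panel_ratings = [4, 5, 4, 3, 5, 4, 4]  # seven hypothetical panelists
v = aiken_v(panel_ratings)
print(f"Aiken's V = {v:.2f} -> {'valid' if v >= 0.2 else 'revise item'}")
```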
The scale model used is a Likert scale with 4 (four) options, namely: strongly agree (SS), agree (S), disagree (TS), and strongly disagree (STS); the middle option was omitted so that respondents would not choose a neutral answer. Determination of validity in trial 1 and trial 2 uses Structural Equation Modeling (SEM) with first-order and second-order Confirmatory Factor Analysis (CFA), with the criteria of a Standard Loading Factor (SLF) ≥ 0.32 and a T-value ≥ 1.96 (or ≥ 2.00). Reliability testing in SEM used Construct Reliability (CR) and Variance Extracted (VE), computed through the formulas given below. The research procedure used is presented in Figure 1.
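In the standard SEM formulation assumed here, CR and VE are computed from the standardized loadings $\lambda_i$ and error variances $\delta_i$ of the $k$ items of a construct:

$$\mathrm{CR} = \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{k}\lambda_i\right)^{2} + \sum_{i=1}^{k}\delta_i}, \qquad \mathrm{VE} = \frac{\sum_{i=1}^{k}\lambda_i^{2}}{\sum_{i=1}^{k}\lambda_i^{2} + \sum_{i=1}^{k}\delta_i},$$

where $\delta_i = 1 - \lambda_i^{2}$; the acceptance thresholds used in this study are CR ≥ 0.70 and VE ≥ 0.50.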
Table 2. Recapitulation of Validity and Reliability of Affective Competency Dimensions
The factor analysis shows that the Standard Loading Factor (SLF) / factor loading is ≥ 0.32 with a T-value ≥ 1.96 or ≥ 2.00 for each dimension. This means that the dimensions tested in the phase 1 and phase 2 trials are valid. The reliability test of empirical trials 1 and 2 shows that all dimensions are reliable, with a reliability coefficient index above 0.70 for CR and above 0.50 for VE. The recapitulation of the validity and reliability of the affective competency variables is presented in Table 3.
Table 3. Recapitulation of Validity and Reliability of Affective Competency Variables
For the test of model fit, the goodness of fit (GOF) measures from the Lisrel output on the empirical data of trials 1 and 2, using second-order CFA, are shown in Table 4. The assessment using a criterion-referenced test (CRT) based on Gronlund and Linn obtained a highest score of 336 and a lowest score of 84, as listed in Table 6 [15]. This means that the developed affective instrument, which includes 5 dimensions, 14 indicators and 84 items to measure affective competency in history learning at the high school level, is categorized as close to the requirements of a Goodness of Fit model, because 5 GOF criteria are met. According to Hair et al., instrument development research that satisfies 5 goodness-of-fit criteria is considered sufficient to assess the feasibility of a valid and reliable instrument model [14].
Conclusions
The results of the development of the students' affective competency assessment instrument for high school history learning lead to the following conclusions: the construct validity testing of the developed instrument shows that all stages of testing, both the theoretical (construct) testing stage and the empirical testing stages 1 and 2, have met the criteria for validity and reliability. This can be seen from the results of the theoretical validation analysis using Aiken's validity, which obtained validity values above 0.2, and inter-rater reliability using Hoyt's coefficient above 0.7. For empirical testing stages 1 and 2, the valid items meet the criterion of a factor loading > 0.32, and reliability is indicated by the achievement of the reliability coefficients in each test: Construct Reliability (CR) met the established standard of rxx' ≥ 0.70 and Variance Extracted (VE) of rxx' ≥ 0.50. Based on the results of the model fit test using the goodness-of-fit test, five criteria produced good values because they were above the cut-off values. Thus, the developed affective assessment instrument, covering 5 dimensions, 14 indicators and 84 items, meets the criteria of a valid and reliable instrument.
Awards and Acknowledgments

On this occasion the research team thanks the Head of Universitas Negeri Medan, Prof. Syawal Gultom, who provided enormous support for the completion and smooth running of this research. Thanks are also extended to family and colleagues who provided critical input during the completion of this research.
The Euphrates Poplar Responses to Abiotic Stress and Its Unique Traits in Dry Regions of China (Xinjiang and Inner Mongolia): What Should We Know?
Drought, salinity, and low-temperature stress are currently ubiquitous environmental issues. In arid regions, including Xinjiang, Inner Mongolia, and other areas worldwide, the area of tree plantations appears to be rising, promoting tree growth. Water is a vital resource in the agricultural systems of countries impacted by aridity and salinity. Worldwide efforts have been made to reduce quantitative yield losses in Populus euphratica by adapting tree production to unfavorable environmental conditions and increasing the control of water stress. Although there has been much progress in identifying genes that confer resistance to abiotic stresses, little is known about how plants such as P. euphratica deal with multiple abiotic stresses. P. euphratica is a varied riparian plant that can tolerate drought, salinity, low temperatures, and climate change, and has a variety of adaptations to water stress. To conduct this review, we gathered all available information from the Web of Science, the Chinese National Knowledge Infrastructure, and the National Center for Biotechnology Information on the impact of abiotic stress on the molecular mechanisms and evolution of gene families at the transcription level. The data demonstrate that P. euphratica can gradually adjust its stomatal aperture, photosynthesis, antioxidant activities, xylem architecture, and hydraulic conductivity to endure extreme drought and salt stress. Our analyses give readers an understanding of gene families in desert trees and of the influence of abiotic stresses on the productivity of tree plants, as well as the knowledge needed to improve biotechnology-based tree plant stress tolerance for sustaining the yield and quality of trees in China's arid regions.
Introduction
P. euphratica Oliv. (P. euphratica), the Euphrates poplar or desert poplar, is a non-halophyte and mesophyte in its morphology, but has a high abiotic stress tolerance [1] and is an important component of riparian ecosystems in arid regions. This poplar tree species is mainly distributed in Southwestern Europe, including Spain [2]; Western Asian countries, including Iran [3], Iraq, Syria, and Turkey [4]; Central Asia, including Kazakhstan [5]; Pakistan [6]; India [7]; and China's western Inner Mongolia, Xinjiang, and other arid regions [8][9][10][11][12]. It is also found in many other countries outside of Asia, such as the Middle East. P. euphratica has also been studied [36,55,56]; the genome sequencing data were uploaded to the NCBI [56][57][58] and co-expressed the functions that respond to biotic stress. P. euphratica shows transcriptional modification in signaling, photoprotection, oxidative stress detoxification, and the suppression of stomatal closure, potentially changing drought stress responses (as illustrated in Figure 1) [47]. The expression of particular gene sets in plants determines their susceptibility to stress and their level of resistance to it. P. euphratica proteins' functions in plant growth, development, metabolic pathways, and stress responses require advanced study for their regulatory roles and functions. Here, we gathered all information by using the Web of Science, the Chinese National Knowledge Infrastructure, and the National Center for Biotechnology Information. This review provides a quick summary of recent studies examining the functions of P. euphratica in novel gene families and molecular networks that may control plant tolerance to abiotic stress. These investigations have created the groundwork for improving the stress tolerance of forest breeding.
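As a purely illustrative sketch of the NCBI retrieval step mentioned above (the review does not specify its query workflow; the search term, database, retmax and e-mail below are assumptions, not the authors' actual procedure), a minimal Biopython query for P. euphratica nucleotide records could look like this:

from Bio import Entrez

# NCBI requires a contact e-mail; this address is a placeholder.
Entrez.email = "your.name@example.org"

# Search the nucleotide database for Populus euphratica records.
# The search term and retmax are illustrative choices only.
handle = Entrez.esearch(db="nucleotide",
                        term="Populus euphratica[Organism]",
                        retmax=20)
record = Entrez.read(handle)
handle.close()

print("Records found:", record["Count"])

for uid in record["IdList"]:
    # Fetch a brief summary (title) for each hit.
    summary_handle = Entrez.esummary(db="nucleotide", id=uid)
    summary = Entrez.read(summary_handle)[0]
    summary_handle.close()
    print(uid, summary["Title"])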
P. euphratica Gene Family Identification and Characterization
Environmental selection is the dynamic force that underscores molecular and biological evolution in P. euphratica [59].The classification of plant tolerance may be based on its diversification, which includes obvious interspecific phenotypic and genomic changes [60].As a result, due to the nature of plants, gene quantities, composition, and existence are all factors that are considered during the plant evolution process [61][62][63].Importantly, this process of the evolutionary mechanism provides more information about the modifications in drought resistance, salt tolerance, and oxygen uptake as an outcome of the plant's capability for adaptation during its life in the soil, such as the formation of the rooting system, vascular structure, and evolution of metabolites in response to abiotic stresses.Recently, a clear and profound understanding has been gained of P. euphratica's genome analysis at the transcriptome level, which embodies conditional transcripts.The de novo assembly of P. euphratica reveals the most in-depth analysis of transcriptomic, gene annotation, profiling, gene expression, and transgenic lineages, providing an interesting concept for learning about their gene family.Therefore, the most prevalent TFs corresponding to the bZIP, bHLH, C3H, and NAC TF families were those differentially expressed in the MYB and MYB-related transcription factor families [47].For example, some genes are crucial in transport, cellular communication, and metabolism.A gene family helps us to identify and characterize the shared origin of proteins with related structures and metabolic activities.Based on the P. euphratica resistance gene family, it is evident that this model plant is tolerant not only to external stress, but also to the multiple environmental pressures during adaption.For example, the transcription factors (TF gene family) function as the regulators of P. euphratica, including plant growth, developmental, and reproductive functions.Studies have established that progress in P. euphratica can be realized based on the different numbers of resistance genes, comprising the PeDREB2 gene family [64].For example, downstream of the DREB2A transcription cascade system, where PeDREB2, PeCPK10, and HsfA3 are expressed in drought and salt stress [65]; they were also induced by cold, but not by abscisic acid (ABA) treatment [64][65][66].It is interesting that these gene families play particular roles in tree plants, such as molecular chaperones (heat shock proteins).This large family is a global salvation system in living organisms that prevents protein damage by stopping aggregation, refolding, and restoring cellular homeostasis through proteolysis and lysosomes/proteasomes.Additionally, some extremely hydrophilic proteins, such as the LEA and COR family members, may serve as chaperones to protect proteins and membranes from damage caused by stress [67].
Under abiotic stressors, the gene family contributes significantly more at the transcription level (Figure 1).For example, heat, drought, and salt stresses consequently significantly elevated the expression of PeuHsf genes in P. euphratica [68], as well as bHLH (PebHLH35 from P. euphratica offers drought resistance by controlling the growth, photosynthesis, and stomatal development of A. thaliana) [69]; R2R3-MYB [70]; WRKY [71]; WRKY1 (because PeWRKY1 was able to connect to the W-box in the PeHA1 promoter, salt stress increased the transcription of PeHA1) [72]; PeuSAP [73]; GASA [74]; TCP [75]; and the SPX gene family, which plays a role in the response mechanism to phosphorus stress [76].It has been witnessed that the transcription factors of the large family play a crucial role in the P. euphratica species [47].Data from high-throughput sequencing revealed that P. euphratica NAC (PeNAC) genes play a role in osmotic and salt stress responses.One study showed that distinct stress-responsive PeNAC transcription factors differentially control salt tolerance in transgenic plants [77].For example, the stress response gene PeNAC1 in P. euphratica was subjected to salt stress; despite that, transgenic A. thaliana that overexpressed PeNAC1 had greater salt tolerance.The cloning and functional analysis of PeNAC045 also demonstrated high responses to abiotic stress [78].The successful identification and isolation of the strongest genes have been experimented with, which suggests the crucial derivative stress tolerance of the gene family in P. euphratica and other model plants.In addition, the cloning gene has evidenced the critical role of P. euphratica tolerance to drought stress, including the PeSCL7 gene [79]; PeSTZ1, a C2H2-type zinc finger enhanced the freezing tolerance throughout the modulation of ROS regulation within the PeAPX2 expression [80].Increasing the water transport capacity in P. euphratica LAC2 (LACCASE) increases its drought tolerance [81].Importantly, the overexpression of PeuLAC2 in poplar plants alters the xylem structure, thickening the secondary cell walls and increasing the fiber cell length and stem tensile strength.Arabidopsis CBF gene overexpression induces orthologs and increases stress tolerance.In transgenic poplar leaves, drought, high salt, and cold stressors increase PeCBF4a expression, which enhances the photosynthetic capacity and PSII photosynthetic electron transport activity.
Additionally, most physiological mechanisms depend heavily on the photosynthetic activities of P. euphratica.For example, exogenous plant growth factors are essential for P. euphratica's ability to withstand drought by controlling photosynthesis and lowering salt stress [82].Additionally, P. euphratica demonstrates heat tolerance and leaf protein accumulation [83].What about the modifications made in reaction to salinity in the roots of P. euphratica, then?For instance, to respond to this query, extracellular factors, such as calmodulin, calmodulin-like, and calcineurin B-like proteins, encourage the production of calcium.The Ca 2+ -mediated CBL-CIPK network, which affects salt tolerance, depends on CIPKs [84,85].It has been well established that several CIPK genes are implicated in the SOS signaling pathway that mediates salt tolerance [86].The Na + /H + antiporter controls Na + efflux and root salt tolerance in A. thaliana [87,88], while the interaction between AtCBL4 and CIPK24 (SOS3-SOS2 complex) mostly controls NHX activity [87] as opposed to the shoots, where the AtCBL10-AtCIPK24 complex defends against salt stress [86,89].PeCBL1 interacts with CIPK24, CIPK25, and CIPK26 to improve P. euphratica's ion homeostasis and salt tolerance.These CDPKs are substrates for ion channel proteins and transporters, and are implicated in abiotic stress-induced Ca 2+ signaling [90,91].For instance, PeCIPK26 expression was found in the roots, stem, leaf, cell membrane, cytoplasm, and nucleus, possibly induced by salt stress [92].By overexpressing the PeCIPK26 gene in A. thaliana cipk24 mutants, the role of PeCIPK26 in salt tolerance was examined.The faster germination rate, lower Na(+) buildup, and greater capacity to release Na(+) when grown with NaCl of transgenic plants compared to mutants demonstrate their superior salt tolerance.These findings point to a role for PeCIPK26 in P. euphratica's response to salt stress.The model plant regulates genes involved in drought, salt, and freezing stress, indicating resistance to these environmental conditions.For tree-breeding programs, these genes work as abiotic stress markers and aid comparative genomics projects, revealing shared and distinctive transcriptional pathways between tree species [70].This poplar species revealed its transcriptional pathways that lead to the discovery of the variance in expression of H + -ATPase, rubisco activase, laccases, and expans in P. trichocarpa, but not in P. euphratica [93,94].A. thaliana's reactions to drought show clear species-dependent characteristics [95].This could indicate that angiosperm trees, independent of species or hybrids, will likely have various applications in response to drought stress, and that these processes are extremely varied among species and genotypes.Comparing tree transcriptome investigations' outcomes is challenging due to varying responses based on plant organs, developmental stages, and stress levels [96].In P. euphratica, extreme stress triggers quick reactions, while progressive water depletion enables plants to adjust by putting stress defense mechanisms in place.The intricacy of gene expression studies in response to abiotic stress is shown by the fact that various water depletion regimes activate various gene networks (as illustrated in Table 1).Identifying new genes for salt and drought tolerance in plants can advance tree breeding and ecosystem restoration in woody plants.
Effect of Salt Stress in P. euphratica
Around 60% of the world's land surface is affected by salt, which spreads due to ineffective irrigation practices or water contaminated by salt. The total amount of soluble salts in the soil is measured as soil salinity, and high levels can kill plants or induce wilting. Salts including NaCl, CaCl2, gypsum, magnesium sulfate, potassium chloride, and sodium sulfate are frequently found in saline soils. As a result of salt accumulation in the cytoplasm, high salinity can harm and even kill leaves. Halophytes are salt-tolerant plants that can survive in environments with salinities higher than 400 mM. Terrestrial vascular plants depend on xylem water transfer and stomatal evaporation to survive in stressful environments. The leaves of P. euphratica are wax-coated, rigid, and thick, preventing oxidation-related damage. They grow more robust xylems with drought-induced cavitation due to their adaptation to hydraulic conductivity and embolism. P. euphratica grows in dry, hot climes; increases its photosynthetic rate and evaporation; accumulates salts; deepens its tap roots; and increases its salt content. The primary root penetrates one meter of soil vertically, while the lower end develops lateral feeder roots. However, salt stress has a major negative impact on plant development, growth, and reproduction. Understanding salt tolerance pathways is essential, as rapid exposure activates genes involved in ribosome activities, photosynthesis, cell development, and transport. Reactive oxygen species (ROS) in excess can harm organisms by degrading chlorophyll and causing membrane leakage. It is necessary to find tree and woody plant species that can withstand salt and improve their resistance. The P. euphratica tree is a good example of a tree with salt tolerance.
P. euphratica has lengthy juvenile periods and frequently reproduces across several years.One of the main characteristics that sets P. euphratica apart from other plants is its secondary growth.It has the capacity to produce thickened vascular bundles that accumulate to create secondary xylem (dicots) or wood-like tissue (monocots), which allows them to improve their transport capacity when needed.This species has high outcrossing rates, longdistance pollen distribution, large effective populations, an arborescent stature, longevity, and late successional communities.These characteristics could make P. euphratica less susceptible to genetic bottlenecks and more resilient to habitat fragmentation and climatic changes.Tissue-specific differentially expressed genes (DEG) under salt stress has diverse functions, with membrane transporter activity being the most significant leaf function and the oxidation-reduction process being the most significant root function.Gene families like SOS, NHX, GolS, GPX, APX, RBHF, and CBL are involved in ionic homeostasis in P. euphratica seedling tissues [99].DEGs, such as antioxidant genes, contribute to ROS scavenging and plant salinity tolerance by maintaining ionic and ROS homeostasis in tissues and improving ion uptake, transport, and compartmentalization [100,101].The regulation of pathways, including plasma membrane and tonoplast Na + /H + transporters, is crucial for salt stress tolerance.P. euphratica halophytes maintain a low Na + influx and prevent Na + accumulation [102,103].Ionic homeostasis is accomplished through genes that code for pyrophosphatase, cation/proton antiporters, plasma membrane and vacuolar H + -ATPases, and salt tolerance systems (as illustrated in Figure 2) [104,105].P. euphratica studies reveal genes controlling the cells' salt tolerance, ion compartmentalization, xylem loading, and potassium levels [18,106,107].P. euphratica has intricate and interrelated defense mechanisms that help it avoid or lessen environmental harm.As part of these systems, transcription factors (TFs) bind to cis-elements in the promoters of target genes or other useful modular structures to regulate how genes are expressed in response to abiotic stress.In P. euphratica, 2382 TFs (2382 loci) have been discovered and categorized into 58 families following the family assignment guidelines.The P. euphratica genome has a high content of TFs (http://planttfdb.gao-lab.org/index.php?sp=Peu, accessed on 6 July 2023) and various transcription factors, including DREB, bZIP, AP2/ERF, WRKY, and bHLH, that regulate plant responses to stress (as mentioned in Figure 1), including salt stress [108,109].In non-woody plants, WRKY genes have just been discovered; nonetheless, the effects of salinity on these transcription factors in woody plants may be pertinent.Interestingly, researchers have revealed that salt stress inhibits the PalWRKY77 gene by reducing the salt tolerance in P alba var.pyramidalis [110].The PalWRKY77 pathway receives a bad signal from this ABA regulatory mechanism, making poplar trees more vulnerable to salinity.However, the salt-induced transcriptional response of PeWRKY1 in P. euphratica reveals that WRKY1 binds to H + -ATPase promoters, improving gene expression and salt tolerance [72].However, little is understood about the modifications that the P. euphratica xylem undergoes in response to salinity.Addressing the other transcription factor members is needed to facilitate a molecular revolution for tree breeding in other species.
Effect of Drought Stress on the Physiology of P. euphratica
According to different studies, typical abiotic stress, known as drought, impacts cellular homeostasis, gas exchange, seed generation, plant development, and water relations [111,112].Plants have evolved innate defenses against drought stress to adapt to harsh settings, such as closing stomata, decreasing transpiration, producing abscisic acid (ABA), and storing hydrogen peroxide (H 2 O 2 ) [113].Although drought can severely restrict a plant's ability to grow and develop, family genes are thought to have a key role in how the plant reacts to various conditions.Drought impacts the physiological environment of the soil microbiota and plants.Auxins, cytokinins, gibberellins, and abscisic acid, among other phytohormones, are produced by bacteria and have been shown to increase drought resilience in P. euphratica [114,115].As plant-growth factors, due to their capacity to produce endospores, which enable bacterial survival for lengthy periods under unfavorable environmental conditions, bacterium genera, including Bacillus sp., are frequently discovered in arid land [116].At the same time, the various metabolisms and strong physical tolerance of Pseudomonas sp.isolates in P.euphratica cultivated in desolate and saline soil may explain their large abundance [117].Plant rhizobacteria support plant development, biological regulation, and resistance to abiotic stress through direct and indirect processes [118].The direct mechanisms involve phytohormone regulation, the release of volatile compounds, and an enhanced plant uptake of nutrients [119].The indirectly beneficial effects include suppressing deleterious microorganisms and pathogens, competition for nutrients, inhibiting enzymes, and triggering host-induced systemic resistance [120,121].The rhizosphere of both sexes suggests the presence of sex-specific variation in bacterial communities and their relative abundances [122].In reaction to dryness, males have more drought-tolerant fungus and bacteria in their rhizospheres than females [123].The altered bacterial and fungal community composition increase soil ammonification in the rhizosphere of female plants.The contribution of Rhizobium in biocontrol activities against pathogens and the alleviation of stresses play a decisive role in the P. euphratica.For instance, recent discoveries discussed the significance of rhizobia, which promote plant growth by reducing salt and osmotic stressors in contaminated soils (see [124] and references therein).
Additionally, rhizobium occurrence and utilities in microbiomes of non-leguminous plants were reviewed to control the growth of various soilborne plant pathogens by focusing on the biological control of the different genera [125].Rhizobium populi sp.nov., an endophytic bacterium recovered from P. euphratica, was particularly isolated from the storage fluids in the stems of P. euphratica trees, which is an interesting development [126].The studies conducted on the pathogenic fungus in the pathogenic site of P. euphratica found that it increased its survival rate in arid regions [127].Microbiome research indicates that planting P. euphratica may influence bacterial communities, possibly resulting in more infections.However, research on the relationship between pathogenic bacteria and a high rate of plant mortality is lacking.Different research revealed that the synergistic actions of the dioecious P. euphratica roots and coexisting microorganisms allow them to respond to and survive drought stress [128,129].However, research on the identification of genes associated with microorganisms to cope with abiotic stresses remains scarce in this model tree plant.P. euphratica grows in deep water, relying on water table depth, but faces drought-induced cavitation, causing shoot cessation, stomatal closure, and reduced root growth [130,131].Recent findings have also unveiled that long-term irrigation in P. euphratica plantations affects soil phosphorus fractions and microbial communities [132].They found positive relationships between inorganic P and various bacteria, while negative associations were observed with Burkholderiaceae and soil phosphorus (soil P) in its inorganic form (Pi).The study suggests that water management techniques focusing on soil microbial recovery could improve soil quality.However, the increased mineralization of organic P in P. euphratica is linked to soil moisture, pH, and microorganism profiles, necessitating future research on foliar P fraction distribution.Based on the research conducted on the impact of cow dung and biochar on phosphorus efficiency in P. euphratica soil, bacterial communities, and functional genes (phoC, phoD, gcd, and pqqC; see [133] and references therein), the authors found that returning cow dung improves soil properties, seedling growth, and phosphorus availability, which are up-taken throughout the roots of P. euphratica.Biochar, a carbonized form of cow manure, has a more definite cumulative phosphorus content and promotion of bacterial diversity in arid regions.However, the study suggests increasing biochar use in plantation management and conducting a long-term analysis to discover the utmost scientific addition technique for P. euphratica seedlings and cow dung.In desert environments, mycorrhizal associations and symbiotic interactions between fungi and plant roots increase plant resilience to drought stress [134,135].These connections aid in the intake of nutrients, particularly phosphorus, which is necessary for plant growth [136].These plants survive in arid conditions by developing specialized mycorrhizal associations, which enhance the soil's surface area for the uptake of nutrients and water, as well as delivering carbon compounds from the host plant [137].It is interesting to note that P. euphratica may also help us to fully comprehend the intricate mechanisms underpinning P. 
euphratica's resilience to drought stress and salt stress, which are mediated via mycorrhizal connections in the desert.The study reveals that not all arbuscular mycorrhizal fungi (AMF) can infect and colonize plant roots.In extreme conditions, one species can become predominant.The alkaline P. euphratica rhizosphere soil favored G. mosseae growth, suggesting its selectivity and adaptability.Other AMF species may survive in the rhizosphere soil or colonize P. euphratica roots.There is, however, a small gap in this study that has to be filled [138].
Photosynthesis is one the most significant processes in a plant's life, but drought stress may affect its mechanisms, which stops plant growth and development under severe ecological conditions.The direct impacts of drought stress on photosynthesis include fluctuations in photosynthetic metabolism and constraints on diffusion via the stomata and mesophyll.Secondary effects include oxidative damage brought on by the superposition of various stresses.Interestingly, a thorough investigation comparing salt and drought stress found that both conditions resulted in the down-regulation of several photosynthetic genes.In P. euphratica, for example, the roots and leaves suggested that during stress, protein concentration can be altered without affecting gene expression [131,139].Importantly, at the transcriptome level, 27 photosynthesis genes were differentially expressed during drought stress, with the majority being down-regulated and six genes enhancing expression [47].For examining the processes of abiotic tolerance in woody plants, Populus euphratica is a potential candidate species.For instance, under salt and drought stress conditions, PeGSTU58 overexpression lines showed increased expression of various stress-responsive genes, such as DREB2A, COR47, RD22, CYP8D11, and SOD1.Additionally, PebHLH35 has been demonstrated to be able to directly bind to the promoter region of PeGSTU58 and stimulate its expression in yeast one-hybrid experiments and luciferase studies.These findings suggested that PeGSTU58, whose expression was favorably controlled by PebHLH35, had a role in the tolerance to salt and drought stress by maintaining ROS homeostasis [140].
Mechanism of Abiotic Stress Tolerance in P. euphratica
Developing mechanisms that allow plants to regulate their water loss while continuing to fix carbon dioxide (CO 2 ) through photosynthesis has been a critical step in plants' colonization of terrestrial environments [141,142].This is an important step, since, in a natural environment, water availability is undoubtedly the primary determinant of plant distribution and survival.Drought tolerance mechanisms are any processes that let plants continue to grow or produce under conditions of an insufficient soil water supply.The first tactic is to prevent a water deficit.Drought resistance entails finishing the life cycle under ideal conditions, limiting transpiration, and maximizing root uptake [143][144][145].It also involves increasing deciduous plants that hibernate during droughts, as well as species of arid environments with permanent access to the water table, like P. euphratica.By sacrificing biomass production, these strategies enable plant survival.One of the main issues restricting most plants' survival ability is water.Plants, particularly those found in arid regions, have developed a variety of tactics to stop water loss or adapt to growth in water-scarce environments.At the physiological level, for instance, P. euphratica demonstrates greater antioxidant enzyme activity and stronger root hydrotropic development than other poplar plants [146].The capacity of stems to carry water (called xylem pressure) and the closure or opening of stomata controlled by phytohormones support water balance in plants (Figure 1).Numerous studies on woody plants have demonstrated that drought tolerance is closely tied to xylem structure, which is connected to the ability of plants to transport water [147][148][149][150].The experimental evidence, however, is in favor of the associated evolutionary processes, particularly when it comes to the molecular control mechanisms of P. euphratica.It offers a variety of ecological services as a natural check on the spread of deserts, including preventing sandstorms, controlling oasis conditions, and even preserving the ecosystem balance.P. euphratica is therefore frequently used as a model woody plant for the investigation of trees' abiotic resistance mechanisms.P. euphratica has developed a variety of adaptations in arid environments.Of these, the polymorphisms in its leaves and its hard wood are thought to be two crucial adaptive features that may give P. euphratica an enhanced adaptability to desert conditions.For instance, the xylem of P. euphratica wood can store significant amounts of cellulose and lignin [151].The xylem's secondary cell wall (SCW) serves as a route for the long-distance movement of nutrients and water, supporting plants mechanically and as a barrier against disease and insect attacks [152][153][154].These characteristics and leaf polymorphisms enable the plants to show physiological responses that aid their adaptation to severely dry settings, such as reduced photosynthetic activity, altered cell wall flexibility, and stomatal aperture regulation.
Moreover, the ultimate impact of abiotic stress on plant growth depends not only on the severity of the damage but also on how well the plant can recover from the damage [155]. Photosynthetic electron transport and carbon metabolism rates often decline less in plants that can withstand cold temperatures [156]. Compared to cold-sensitive genotypes, these modifications let these plants recover more quickly from chilling stress [157].
6. P. euphratica, a Naturally Stress-Tolerant Tree, as a Source for Finding Stress-Adapted Genes

P. euphratica is the only arbor species among the toughest plants that can live in the deserts of the Inner Mongolia Autonomous Region in north-central China [158]. The Taklamakan Desert in Xinjiang is the second-largest sand desert with moving dunes and is known for its harsh conditions [159]. The Tarim River, the longest inland river in China, flows through the desert, fed by melting snow from the Tianshan Mountains [160]. Ejina Banner in Inner Mongolia has had a P. euphratica tree growing there for over 800 years [2,161,162]. The locals refer to it as a "sacred tree." A new plant species appeared along the ancient Mediterranean Sea some 65 million years ago, following the extinction of the dinosaurs [163,164]. Its scientific name is P. euphratica and its binomial name is P. euphratica Oliv.
The leaves are formed like willow leaves as a sapling to lessen evaporation.Its leaves enlarge and take on a rounded form as it becomes taller and requires more energy to sustain the trunk.In the Gobi Desert, the poplar thrives on water but is drought-resistant.The poplars can flourish happily as long as the groundwater stays at a depth of roughly four meters below the surface.An adult P. euphratica tree can release ten kilograms of salt and alkali annually.Water and soil are well preserved in the desert with a poplar forest.The Taklamakan Desert demonstrated that it has phreatophytic and deciduous character.Despite having a low drought tolerance, the ongoing contact with groundwater allows for significant water usage and transpiration through the mounting season.This method of avoiding drought makes accumulating a lot of biomass possible.Studies have shown that P. euphratica is a fast-growing tree species but is stress-sensitive, and our focus has increased due to its use in environmental protection, agroforestry on marginal soils, and afforestation on damaged soils [18,165].The study of the endophytic bacteria of P. euphratica in various saline-alkaline regions of Xinjiang revealed that the bacteria in the various tissues of P. euphratica changed with the change in soil salinity [166].Endogenous bacteria diversity is lower in P. euphratica sap tissue under high salinity.These unique bacteria under various salinities were primarily related to the host's stress tolerance [20].Understanding the connections between natural microorganisms and stress-tolerant trees can enhance plant performance and broaden the variety of tree plants.In climate change situations, integrated genomic, transcriptomic, proteomic, and metabolomics data can reveal mechanisms for stress tolerance [167,168].Creating DNA, RNA, protein, and metabolite separation procedures for woody plant biology and natively tolerant tree biology, including the recalcitrance of seeds, viability, and seed germination and acclimatization.In arid areas, woody species like Populus, Prosopis, Atriplex, and Eucalyptus species effectively use meager rainfall inputs [168,169].Plants avoid drainage during dry spells by utilizing water surpluses due to their large storage capacity and deep root development systems.Understanding natural adaptation to salinity and drought in woody plants can accelerate gene integration, producing salt-tolerant and drought-tolerant varieties and maintaining commercial woody species' productivity [170].Nevertheless, there are arguments against using and regulating genetically modified wood plants and crops in today's society.Countries have unique regulations on genome-editing technologies, potentially causing delays or prohibitions in their territories.For instance, the Chinese government has a keen eye for using genetically modified plants.The US Department of Agriculture has no regulations, while the EU requires the some regulation for genetically modified plants [171,172].Therefore, the commercialization of salt-tolerant and drought-tolerant plants depends on individual laws.Studies on woody plants in salinized soils are crucial for sustainable agro-ecosystem management and genetic information, as native stress-tolerant trees offer valuable biological resources.
Effect of Water Stress on Hydraulic Traits in P. euphratica
Due to their numerous physiological functions, including photosynthesis, transpiration, photoreception, and respiration, leaves are the most important organs for determining the overall growth of a plant [173,174].The leaves of plants are highly influenced by their environmental resilience [175].An important aspect of plant physiology and its functional ecology is shown in the connection between leaves and their environment.The way in which leaves develop and evolve could show how the environment has an array of effects on plants, ranging from gene to population [176].The distribution of water from leaves along the vertical heights of plants is most likely caused by some factors, one of which is likely to be the hydraulic characteristics.As plants grow taller, the leaf-to-atmosphere vapor pressure deficit increases, causing xylem tension and increased gravitational potential [177].Drought causes plants to lose moisture, which increases the stress in the hydraulic pathways, which can cause cavitation and the development of embolisms (air bubbles) in the water conductivity xylem [143].This phenomenon creates a blockage of water transport and, when prolonged, may cause hydraulic failure and plant death.To shrink xylem embolisms, which cause plant hydraulic dysfunction, the plant usually improves its water transport capacity by adjusting its water-related functional traits.The P. euphratica tree, a dominant species of the desert, is one tree species in the terrestrial ecosystem that develops strong leaves in the face of environmental changes, which subjects it to long-term drought and salt stress tolerance.Many researchers have found that P. euphratica has a significant strategy to alleviate the increase in hydraulic constraints along vertical heights [178].Importantly, poplar saplings exhibit a high degree of phenotypic plasticity in response to water-deficit growing conditions, with reductions in hydraulic stem sensitivity and leaf area being particularly important in postponing the onset of hydraulic failure during an induced drought event.It is interesting to note that PeGSTU30 increases the hydrophobicity of the active cavity while maintaining strong enzymatic activity [179,180].Leaf functional characteristics are essential for plants to survive and for ecosystems to function.Understanding how these characteristics coordinate under stress is essential since salinity impacts them.According to Li et al. (2023), P. euphratica builds in riparian forests and adjusts and coordinates the functional features of its leaves to survive in saline conditions.
Conclusions and Future Research Perspectives
In arid areas of China, the fast-growing deciduous tree P. euphratica is essential for producing timber.It produces wood and stabilizes sand dunes while tolerating salt, alkalinity, and dryness.This review examines gene family analyses, concentrating on salt and drought tolerance in the Xinjiang and Inner Mongolia sites.Research on P. euphratica stress-tolerance genes focuses on its ability to maintain a stable internal water environment under low groundwater levels.New studies are screening functional-coding genes involved in ion transport, antioxidant enzymes, and signal transduction processes.Research suggests that P. euphratica can adapt to severe atmospheres by regulating tolerance genes, particularly in leaves related to photosynthesis.Recent applied research, such as the improved tobacco drought tolerance, highlights the importance of gene applications and techniques in enhancing P. euphratica's resilience to drought [2].It has been demonstrated that genes increase stress tolerance by influencing physiological morphology through the study of the molecular control mechanisms of leaf morphology and physiological alterations in P. euphratica.However, the triggers or factors that regulate gene expression in response to stress are controversial among researchers.Some experts believe that chromosomal rearrangements help the tree to adapt, while others argue that environmental stress is the main driver of gene expression.This difference in opinion highlights the need for further investigation and confirmation [181].Even though various efforts have been made to construct the genome of P. euphratica as a model tree plant, there is still a gap in the complete genome for tree breeding.Future studies should focus on this model plant's identification and the characterization of genes linked to heavy metal pollution under low pH levels.It is important to gain a thorough understanding of the intricate mechanisms underlying P. euphratica's resilience to drought stress and salt stress caused by mycorrhizal connections (a symbiotic relationship between the tree and certain fungi) in desert regions.Furthermore, future studies should employ cutting-edge molecular analysis of gene expression patterns and how they govern mycorrhizal-mediated growth, nutrient uptake, and water uptake in stressful conditions.Future studies should also provide a clear investigation into the molecular mechanisms balancing stress tolerance and development in P. euphratica.
Figure 1. Abiotic stresses are perceived by P. euphratica throughout transcription regulation to ensure abiotic stress tolerance and resistance in arid areas.
Figure 2. A proposed diagram model of Na+/H+ homeostasis in P. euphratica during NaCl stress tolerance in arid areas.
Table 1. Genome-wide records of gene family in P. euphratica.
Formulation and Evaluation of Copper Nanoparticles Loaded Microsponges
Microsponges have become important in the field of targeted drug delivery and in other biomedical applications. There is a pressing need for designing microsponges incorporating green synthesised metal nanoparticles rather than a chemical drug, in order to reduce the side effects of the drug and thus increase the effectiveness of the whole material. This motivated us to design the novel approach of loading copper nanoparticles into microsponges. In this work, microsponges based on ethyl cellulose and polyvinyl alcohol were synthesised by the Quasi-Emulsion Solvent Diffusion method, into which copper nanoparticles procured from Hibiscus rosa-sinensis leaf extract were incorporated. The loaded microsponges were characterised by High Resolution Scanning Electron Microscopy (HR-SEM) and a Particle Size Distribution Analyzer (PSA). The drug content and entrapment efficiency of the microsponges were determined, and the antimicrobial and antioxidant activities of the loaded microsponges were evaluated.
INTRODUCTION
Microsponges are polymeric delivery systems composed of porous polymeric microspheres that can entrap active ingredients. They are tiny sponge-like spherical particles that consist of a myriad of interconnecting voids with a large porous surface. Usually the size of microsponges ranges from 5 to 300 µm. [1][2] Metal nanoparticles such as gold, silver and copper are reported as highly toxic to micro-organisms. [3][4] In recent years, they have been extensively used for the production of medical products like wound dressings because of their strong cytotoxicity. [4][5][6] In the current scenario, the development of microsponges loaded with a specific drug has been emphasised due to their controlled release of the drug. Since the microsponges are prepared from synthetic polymers, they protect the entrapped drug from any kind of degradation. Drugs encapsulated within a microsponge system can significantly reduce the irritation and side effects of the drug without decreasing its efficiency. [7][8] The current work involves the formulation and evaluation of copper nanoparticles loaded microsponges and their biomedical applications. Here, microsponges were synthesised by the Quasi-Emulsion Solvent Diffusion method using different proportions of ethyl cellulose and polyvinyl alcohol. Later, the green synthesised copper nanoparticles from the leaf extract of Hibiscus rosa-sinensis were incorporated into the microsponges.
The formulated and loaded microsponges were characterised by SEM and PSA. The drug content and entrapment efficiency of the loaded microsponges were studied, and the antimicrobial and antioxidant activities of the copper nanoparticles loaded microsponges were evaluated. Hence, in the present work an attempt was made, for the first time, to incorporate copper nanoparticles into microsponges rather than a chemical drug. These metal nanoparticle loaded microsponges will minimise the toxicity of the drug intake, prolong the pharmacological effect and thus improve the overall effect of the microsponges. The copper nanoparticles loaded microsponges will show enhanced activity towards biomedical applications compared with the copper nanoparticles alone. In future, this study could lead to a new scenario of introducing copper nanoparticles loaded microsponges for smarter applications.
MATERIALS AND METHODS Materials
Ethyl Cellulose (EC), Polyvinyl alcohol (PVA), Dichloromethane (DCM) of reagent grade were kindly purchased and used without purification. Copper nanoparticles were green synthesized from the leaf extract of Hibiscus rosa-sinensis. Double Distilled water was used throughout the study.
Green synthesis of copper nanoparticles
The copper nanoparticles (B) were synthesized from the leaf extract of Hibiscus rosa-sinensis. The fresh leaves were collected, washed with distilled water to remove dust and impurities, and shade-dried for 3-4 days at room temperature. About 100 g of dried and minced leaves was weighed and transferred to a beaker containing 100 mL of distilled water. It was then boiled at 60°C for 10-15 min. The prepared solution was filtered through Whatman No. 1 filter paper to get a clear solution. This filtrate was the Hibiscus rosa-sinensis leaf extract. 50 mL of this extract was added to 50 mL of 0.05 M CuSO4 and kept for incubation for 3 days.
After incubation, the precipitate settled down, which was confirmed by the colour change from green to black. This indicates the formation of copper nanoparticles, which were purified by repeated centrifugation at 6000 rpm for 10 min to remove unwanted materials. The synthesized CuNps were lyophilized and stored at 25°C for further use. [9]

Synthesis of copper nanoparticles loaded Microsponges

Copper nanoparticles loaded microsponges were formulated by the Quasi-Emulsion Solvent Diffusion method. Five batches of microsponges (NS0 - NS4CuB) with varying proportions of Ethyl Cellulose (EC) and Polyvinyl alcohol (PVA) were prepared. The dispersed phase consisted of copper nanoparticles (B - CuNps) and the required amount of EC dissolved in 20 mL of Dichloromethane (DCM). It was slowly added to PVA in 150 mL of aqueous continuous phase. The mixture was then stirred at 1000 rpm under a magnetic stirrer for 3 hours. The microsponges formed were filtered and dried in an oven at 40-50ºC for 24 hours. The dried microsponges were then stored in a vacuum desiccator to remove the residual solvent. The composition of the microsponge formulations is tabulated in Table 1. Figure 2 shows a schematic representation of microsponge formation. The prepared microsponges were characterized based on their entrapment efficiency and particle size. [8]
Characterisation of Copper nanoparticles loaded Microsponges Microscopic studies
The morphology of the loaded microsponges and unloaded microsponges was studied by using High Resolution Scanning Electron Microscopy (HRSEM).
A VEGA3 TESCAN instrument was used for this characterization work. A thin film of the sample was made by placing a pinch of the sample on a carbon-coated grid, and the film on the SEM grid was then dried under a mercury lamp for 5 minutes.
Particle size determination
The particle size of the copper nanoparticles loaded Microsponges was determined by using Particle Size Distribution Analyzer. Here the instrument used was HORIBA Laser Scattering Particle Size Distribution Analyzer LA-950.
Percentage Yield
The percentage yield of the copper nanoparticles loaded microsponges of the various batches was calculated using the weight of the final product after drying with respect to the initial total weight of drug and polymer used for the preparation. [8]

Drug Content and Entrapment Efficiency

About 10 mg of microsponges from each batch was accurately weighed, dissolved in methanol in a 50 mL standard flask and then made up to volume with phosphate buffer pH 7.4. After appropriate dilution, the amount of drug was determined by a UV spectrophotometric method at 650 nm, using blank microsponges treated in the same manner. Phosphate buffer was prepared and its pH was found to be 7.4 using a digital pH meter. [8]

Determination of λmax of copper nanoparticles

The absorption maximum for copper nanoparticles (B) was found to be 650 nm. [9]

Standard calibration curve of Copper nanoparticles (B)

The absorbance of copper nanoparticle standard solutions having a concentration range of 100-500 µg/mL in phosphate buffer pH 7.4 was plotted. The curve was found to be linear at λmax 650 nm. The calculation of drug content and entrapment efficiency was based on this calibration curve. [8]
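A minimal sketch of how the percentage yield, drug content and entrapment efficiency described above could be computed from a linear calibration curve A = slope·C + intercept at 650 nm (all numeric values below are illustrative placeholders, not data from this study):

def percentage_yield(dried_product_mg, drug_mg, polymer_mg):
    """Percentage yield = dried product / (initial drug + polymer) x 100."""
    return 100.0 * dried_product_mg / (drug_mg + polymer_mg)

def concentration_from_absorbance(absorbance, slope, intercept):
    """Invert a linear calibration curve A = slope * C + intercept (C in ug/mL)."""
    return (absorbance - intercept) / slope

def entrapment_efficiency(actual_drug_mg, theoretical_drug_mg):
    """Entrapment efficiency (%) = actual drug content / theoretical drug content x 100."""
    return 100.0 * actual_drug_mg / theoretical_drug_mg

# Illustrative placeholder numbers only.
yield_pct = percentage_yield(dried_product_mg=850.0, drug_mg=100.0, polymer_mg=900.0)   # 85.0 %

# 10 mg of microsponges dissolved and made up to 50 mL; absorbance read at 650 nm.
conc_ug_per_ml = concentration_from_absorbance(absorbance=0.045, slope=0.002, intercept=0.005)  # 20.0 ug/mL
actual_drug_mg = conc_ug_per_ml * 50.0 / 1000.0   # ug/mL x 50 mL = 1000 ug = 1.0 mg

ee_pct = entrapment_efficiency(actual_drug_mg, theoretical_drug_mg=1.25)   # 80.0 %
print(f"Yield: {yield_pct:.1f}%, drug content: {actual_drug_mg:.2f} mg, EE: {ee_pct:.1f}%")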
In-vitro antimicrobial study

Determination of Minimum inhibitory concentration (MIC) using Resazurin Microtitre Assay

Preparation of resazurin solution
The resazurin solution was prepared by dissolving 270 mg in 40 mL of sterile distilled water. A vortex mixer was used to ensure that it was a well-dissolved and homogenous solution.
The test was carried out in 96-well plates under aseptic conditions. A sterile 96-well plate was labelled. A volume of 100 μL of sample was pipetted into the first well of the plate. To all other wells 50 μL of nutrient broth was added and the sample was serially diluted. To each well 10 μL of resazurin indicator solution was added, followed by 10 μL of bacterial suspension. Similarly, the same set-up was performed for the antifungal activity, in which 50 μL of potato dextrose broth and 10 μL of fungal suspension were added to each well. Each plate was wrapped loosely with cling film to ensure that the bacteria did not become dehydrated. The plate was incubated at 37°C for 18-24 hours. The colour change was then assessed visually. Any colour change from purple to pink or colourless was recorded as positive. The lowest concentration at which a colour change occurred was taken as the MIC value. [9]

Antioxidant study

Determination of scavenging activity by DPPH assay

The percentage of antioxidant activity (AA %) of each substance was assessed by the DPPH free radical scavenging assay. Different concentrations of sample were added to all the tubes except the blank. Then 3 mL of ethanol and 0.3 mL of 0.5 mM DPPH radical solution in ethanol were added. The control solution was prepared by mixing ethanol (3.5 mL) and DPPH radical solution (0.3 mL). Absorbance was read at 517 nm after 30 min of reaction. [9] The scavenging activity percentage (AA %) was calculated using the formula below:

% Antioxidant activity = [(absorbance of blank − absorbance of test) / (absorbance of blank)] × 100
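A small worked sketch of the DPPH calculation above (the absorbance readings are illustrative placeholders, not measurements from this study):

def dpph_scavenging_percent(abs_blank, abs_test):
    """AA % = (A_blank - A_test) / A_blank x 100, with absorbances read at 517 nm."""
    return 100.0 * (abs_blank - abs_test) / abs_blank

# Placeholder absorbance readings after 30 min of reaction.
print(dpph_scavenging_percent(abs_blank=0.80, abs_test=0.32))   # -> 60.0 (%)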
RESULTS AND DISCUSSION Microscopic studies
From the SEM studies, it was found that the samples were porous and almost spherical, sponge-like in nature. The pores were induced by the diffusion of the solvent. [8] The SEM image of CuNps (B) showed spherical particles agglomerated into clusters (Fig. 3). [9] The unloaded microsponges show a shiny, smooth surface morphology (Fig. 4), while the loaded microsponges are spherical with a porous, smooth surface (Fig. 5). The SEM results suggest that this surface morphology is favourable for topical application in future studies.
Particle size
The particle size analysis of the loaded and unloaded microsponges (Fig. 6) revealed particle sizes ranging from 65 µm to 93 µm. NS4CuB was selected for further study on account of its lowest particle size (65 µm), as a smaller particle size is expected to give better entrapment efficiency.
Production yield
The production yields of all the microsponges were calculated and are shown in Fig. 7.
Drug Content and Entrapment Efficiency
The drug content and entrapment efficiency were calculated and are displayed in Table 2.
Standard Calibration Curve
The standard calibration curve for copper nanoparticles (B) in phosphate buffer pH 7.4 at 650 nm is shown in Fig. 8.
Antimicrobial activity by Resazurin microtitre assay
The synthesised NS4CuB microsponge formulation was selected for the biomedical applications due to its lowest particle size and better entrapment efficiency. The antimicrobial activity was assessed by the Resazurin Microtitre assay (Table 3). NS4CuB shows good antibacterial activity towards E. coli and B. subtilis, whose MIC values are 125 µg/mL and 31.2 µg/mL respectively; it is therefore more active towards B. subtilis. The MIC values of the copper nanoparticle-loaded microsponge NS4CuB are almost equal to those of CuNps(B). [9] Similarly, NS4CuB shows excellent antifungal activity towards C. albicans, with an MIC value of 62.5 µg/mL, whereas the MIC value of CuNps(B) was found to be 250 µg/mL. [9] Hence, the antifungal activity of the copper nanoparticle-loaded microsponge is enhanced. (Antibacterial activity standard: streptomycin; antifungal activity standard: amphotericin B.)

Antioxidant activity by DPPH assay
The copper nanoparticle-loaded microsponge formulation NS4CuB has an antioxidant potential of 59.5% (Table 4). The percentage scavenging activity of the copper nanoparticle-loaded microsponge is slightly lower than that of the standard BHT (Fig. 10). CuNps(B) showed 21.7% scavenging activity. [9] The results therefore show that the antioxidant activity increases in the copper nanoparticle-loaded microsponge formulation (NS4CuB), indicating the successful encapsulation of the drug (CuNPs) within the microsponge. The copper nanoparticle-loaded microsponge thus enhanced the activity of the CuNps, and the porous nature of the outer surface of the sponge offers control over the release of the drug.

CONCLUSION
Ethyl cellulose-based microsponges loaded with copper nanoparticles green-synthesised from the leaf extract of Hibiscus rosa-sinensis have been successfully prepared by the quasi-emulsion solvent diffusion method. The formulated batches of microsponges were characterised by SEM and PSA. The SEM results showed a smooth outer surface and a porous, spherical nature. NS4CuB, with the lowest particle size of 65 µm, was selected for the biomedical applications. The physicochemical parameters of the formulated microsponges, including production yield, drug content and entrapment efficiency, were determined. NS4CuB, with the lowest particle size, showed the better entrapment efficiency of 135%. The copper nanoparticle-loaded microsponge formulation NS4CuB shows good antibacterial activity against B. subtilis.
The MIC values of the CuNps-loaded microsponge are equivalent to those of the drug (CuNps). Similarly, the antifungal activity of NS4CuB towards C. albicans is increased compared to that of CuNps. The antioxidant activity of NS4CuB showed an enhanced value of 59.5% compared to that of the CuNps (21.7%).
In this work, we have made an attempt to incorporate copper nanoparticles in microsponge for the first time.
We have succeeded in our venture by encapsulating CuNps in the microsponge formulation, thereby enhancing the activity of the copper nanoparticles. The smooth and porous nature of the formulation offers good control over the release of the drug, and hence it can be used for topical application in the future.
Moho Interface Modeling Beneath the Himalayas, Tibet and Central Siberia Using GOCO02S and DTM2006.0
We apply a newly developed method to estimate the Moho depths and density contrast beneath the Himalayas, Tibet and Central Siberia. This method utilizes the combined least-squares approach based on solving the inverse problem of isostasy and using the constraining information from the seismic global crustal model (CRUST2.0). The gravimetric forward modeling is applied to compute the isostatic gravity anomalies using the global geopotential model (GOCO02S) and the global topographic/bathymetric model (DTM2006.0). The estimated Moho depths vary between 60 and 70 km beneath most of the Himalayas and Tibet and reach the maxima of ~79 km. The Moho depth under Central Siberia is typically 50 - 60 km. The Moho density contrast computed relative to the CRUST2.0 lower crustal densities has the maxima of ~300 kg m⁻³ under Central Tibet. It substantially decreases to 150 - 250 kg m⁻³ under the Himalayas and north Tibet. The estimated Moho density contrast under Central Siberia is within 100 - 200 kg m⁻³.
INTRODUCTION
Starting from the 1980s, systematic studies of the lithospheric structure in the Himalayas and Tibet were carried out in the frame of the GGT, IRIS/1991-92 PASSCAL and INDEPTH/GEDEPTH geophysical projects. Zhao et al. (1993) analyzed the seismic reflection data collected at the profile INDEPTH-I across the Himalayas. He estimated that the largest Moho depths reach ~75 km. This value is consistent with the values of 75 - 78 km along the INDEPTH-II seismic reflection profile obtained by Teng et al. (1983), Wu et al. (1995), and Gao et al. (2005). Zeng et al. (1994) reported the Moho depths to 80 - 84 km to the south of the Bangong-Nujiang suture. More recently, Schulte-Pelkum et al. (2005) reported the Moho depths to ~75 km beneath the Tethyan Himalayas. Kind et al. (2002) estimated, based on processing the seismic data collected along the profile INDEPTH-III, that the Tibetan crust varies in thickness from a maximum of about 78 ± 3 km to a minimum of about 65 ± 3 km, with the maximum thickness within the Lhasa terrane. Allègre et al. (1984), Wu et al. (1991), Nelson et al. (1996), and Kind et al. (1996) concluded that a typical crustal thickness under the Tibet plateau is 70 - 80 km with a probably partially molten crust beneath the depth of 20 - 30 km, characterized by high conductivity and a seismic low-velocity zone. Hirn et al. (1984) estimated that the average depth beneath the Lhasa terrane is ~55 km, while the average value of 70 km was suggested by Wu et al. (1995). Zhang et al. (2001) estimated the Moho depths in northern Tibet to be at least 80 km. Teleseismic receiver function analysis of seismograms recorded on a ~700 km long profile of 17 broadband seismographs traversing the north-west Himalayas conducted by Rai et al. (2006) revealed a progressive northward Moho deepening from ~40 km beneath Delhi south of the Himalayan foredeep to ~75 km beneath Taksha at the Karakoram fault. An earlier study by Wittlinger et al. (2004) to the north of the Karakoram fault showed that the Moho continues to deepen to ~90 km beneath western Tibet before decreasing substantially to 50 - 60 km at the Altyn Tagh fault. Bagherbandi (2012) applied and compared three different isostatic methods (based on solving the Vening Meinesz-Moritz models and using Parker-Oldenburg's algorithm) to estimate the Moho depths beneath Tibet and the Himalayas. According to his results, the maximum Moho depths reach 67 - 72 km depending on the method applied. The regional isostatic studies were conducted also by Lyon-Caen and Molnar (1983, 1984), Caporali (1995, 1998, 2000), Braitenberg et al. (2000a, b), Watts (2001), and others. The studies of the Siberian and Baikal crustal structures can be found, for instance, in Pavlenkova (1996), Zorin et al. (2002), and Pavlenkova and Pavlenkova (2006).
In this study we apply a novel approach developed by Sjöberg (2009) and Sjöberg and Bagherbandi (2011) to estimate the Moho depths and density contrast. The numerical realization at the study area of the Himalayas, Tibet and Central Siberia is done using recently released global models of the Earth's gravity, topography, bathymetry and crustal thickness. The gravimetric results are compared with the seismic model from the global crustal model CRUST2.0 as well as more detailed regional studies.

METHODOLOGY

Sjöberg and Bagherbandi (2011) developed and applied the least-squares method for a simultaneous estimation of the Moho depth and density contrast based on solving the inverse problem of isostasy and using the constraining information from seismic data. They formulated the linearized observation equations for the product TΔρ and for Δρ as follows
where T is the Moho depth; Δρ is the Moho density contrast; G = 6.674 × 10⁻¹¹ m³ kg⁻¹ s⁻² is the Newton gravitational constant; R = 6371 × 10³ m is the Earth's mean radius; Δg^i is the isostatic gravity anomaly; Y_{n,m} is the surface spherical harmonic function of degree n and order m; and N_max is the upper summation index of spherical harmonics. The 3-D position is defined in the system of spherical coordinates (r, Ω), where r is the spherical radius and Ω = (φ, λ) denotes the spherical direction with the spherical latitude φ and longitude λ.
As seen from Eq. (2), if T is known, the crust-mantle density contrast Δρ can be estimated from the spectrum of the isostatic gravity anomalies Δg^i_{n,m}. The isostatic gravity anomalies in Eqs. (1) and (2) are computed in the spectral domain using the expression given by Sjöberg (2009), where T_0 and Δρ_0 are the adopted nominal mean values of the Moho depth and density contrast, respectively.
The least-squares analysis combines the estimated product of T and Δρ with the a priori values of these parameters in order to obtain the improved estimates of T and Δρ. The system of observation equations, formulated for both parameters, is written in vector-matrix form in terms of the system matrix A, the parameter vector x, the observation vector l and the vector of residuals f. The elements l_1, l_2, and l_3 of the observation vector l are formed by the observables TΔρ, Δρ and T, respectively. The parameter vector x consists of the unknown corrections dT and dκ to the a priori (initial) values of T and Δρ. The solution is found by solving the system of normal equations x̂ = N⁻¹ AᵀQ⁻¹ l, where N = AᵀQ⁻¹A is the normal matrix and Q denotes the variance-covariance matrix.
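The short Python sketch below only illustrates the weighted least-squares step described above, i.e. solving the normal equations x̂ = (AᵀQ⁻¹A)⁻¹AᵀQ⁻¹l. The matrices are small placeholder arrays rather than the actual observation model of the paper, so this is a schematic of the adjustment, not a reproduction of the authors' implementation.

import numpy as np

# Placeholder design matrix A (3 observation types x 2 unknown corrections dT, d_kappa),
# variance-covariance matrix Q of the observations, and misclosure vector l.
# All numbers are illustrative only; they do not come from the paper.
A = np.array([[1.0, 30.0],   # observation type l1 = T*drho, linearized w.r.t. (dT, d_kappa)
              [0.0, 1.0],    # observation type l2 = drho
              [1.0, 0.0]])   # observation type l3 = T from CRUST2.0
Q = np.diag([4.0, 0.01, 400.0])   # assumed observation variances
l = np.array([2.1, 0.02, 5.0])    # assumed misclosures

# Normal equations: x_hat = (A^T Q^-1 A)^-1 A^T Q^-1 l
Qinv = np.linalg.inv(Q)
N = A.T @ Qinv @ A
x_hat = np.linalg.solve(N, A.T @ Qinv @ l)

dT, d_kappa = x_hat
print(f"correction dT = {dT:.3f}, correction d_kappa = {d_kappa:.5f}")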
ISOSTATIC GRAVITY ANOMALIES
The study area comprising the Himalayas, Tibet and Central Siberia is bounded by the parallels of 20 and 60 arc-deg northern latitude and the meridians of 60 and 120 arc-deg eastern longitude. The topography/bathymetry over the study area, including a description of the major geological regions, is shown in Fig. 1.
The global geopotential model (GOCO02S) and the global topographic/bathymetric model (DTM2006.0) were used to compute the isostatic gravity anomalies with a spectral resolution complete to degree 180 of the spherical harmonics. This computation was realized on a 1 × 1 arc-deg geographical grid of surface points. The coefficients of the combined GRACE and GOCE satellite global geopotential model GOCO02S (Goiginger et al. 2011) were used to generate the gravity anomalies. The normal gravity component was computed according to the GRS-80 parameters (Moritz 1980).
The recent studies based on a regional accuracy assessment of global geopotential models have shown that the combined satellite-only GRACE/GOCE solutions provide a substantial improvement of the Earth's gravity field at the medium-wavelength part of the gravity spectra (within the frequency band approximately between 100 and 250) when compared to satellite-only GRACE models (cf. e.g., Goiginger et al. 2011). A significant improvement of the gravity spectra at medium wavelengths by GOCE data was also demonstrated based on a comparison with the combined satellite-terrestrial gravitational model EGM08 (Pavlis et al. 2012). Test results (not shown herein) revealed that the differences between the GOCO02S and EGM08 gravity fields reach as much as ±60 mGal within our study area.
The refined Bouguer gravity anomalies were obtained after applying the Bouguer gravity reduction to the GOCO02S gravity anomalies. The Bouguer gravity reduction was computed using the coefficients of the global topographic/bathymetric model DTM2006.0 (Pavlis et al. 2007). The average density of the upper continental crust, 2670 kg m⁻³ (cf. Hinze 2003), was adopted as the topographic and reference crustal density. For the adopted values of the reference crustal density of 2670 kg m⁻³ and the mean seawater density of 1027 kg m⁻³, the ocean density contrast equals 1643 kg m⁻³.
The regional maps of the GOCO02S gravity anomalies and the refined Bouguer gravity anomalies, both computed with a spectral resolution complete to degree 180 of the spherical harmonics, are shown in Figs. 2 and 3. The orogenic belt corresponding to the convergence between the Indian and Eurasian continental tectonic plates is the most pronounced feature in the gravity field (see Fig. 2). Here we observe large horizontal spatial gravity anomaly variations, with positive values in the Himalayas and corresponding negative values along the Indus-Gangetic basin. Mostly positive gravity anomalies are seen also over Tibet. Further north, the Altyn Tagh fault and the Tarim and Qaidam basins are characterized mainly by negative gravity anomalies. The gravity signal over Central Siberia is more likely associated with the rock structures of the major geological provinces of shields, platforms and basins. Here the gravity anomalies have either small negative or positive values. The application of the Bouguer gravity reduction substantially changed the gravity field over the mountains (see Fig. 3). The continental margins of the Indian plate are characterized by positive gravity anomaly values. These gravity anomalies become negative over the continents, with minima below -500 mGal in the Himalayas and Tibet.
MOHO PARAMETERS
The isostatic gravity anomalies were used to estimate the Moho parameters over the study area. The system of (linearized) observation equations was solved by applying the least-squares adjustment using the elements method. The initial values of the Moho depths were taken from CRUST2.0 (Bassin et al. 2000). The Moho density contrast was determined relative to the adopted reference crustal density of 2670 kg m⁻³. The observation vector l in Eq. (6) was composed of three observation types, namely l_1 = TΔρ [Eq. (1)], l_2 = Δρ [Eq. (2)], and l_3 = T_S formed by the CRUST2.0 Moho depth values. The variance-covariance matrix Q in the least-squares estimation model is that of Sjöberg and Bagherbandi (2011), where σ_1 and σ_3 are the standard errors of the observables TΔρ and T, respectively. The standard error σ_1 of TΔρ was computed by error propagation from the error degree variances of the geopotential model, where γ_0 is the GRS-80 normal gravity, N_{n,m} = (2n+1)(n-1)/(n+1), and σ²_{n,m} are the error degree potential coefficients. The CRUST2.0 Moho depth data are not provided with a standard error model. Hence, we assumed representative uncertainties (i.e., the standard error σ_3 in the matrix Q) of the Moho depth data of ~20 km. This corresponds to relative Moho uncertainties of ~30% or more, depending on the actual Moho depths. This value was chosen empirically based on the range of differences in the estimated Moho depth values under the Himalayas and Tibetan Plateau, as summarized in Section 1. A realistic estimation of the Moho depth errors is obviously not simple. Čadek and Martinec (1991), for instance, estimated the uncertainties of the Moho depths in their global crustal thickness model to be about ~20% (5 km) for the oceanic crust and ~10% (3 km) for the continental crust. The results of more recent seismic and gravity studies, however, revealed that these error estimates are too optimistic. Grad et al. (2009) demonstrated that the Moho uncertainties (estimated based on processing the seismic data) under Europe regionally exceed 10 km, with an average error of more than 4 km. Much larger Moho uncertainties are to be expected over large parts of the world where seismic data are absent or insufficient (such as our study area).
The estimated Moho parameters on a 1 × 1 arc-deg grid within the study area are shown in Figs. 4 and 5. The Moho depths vary from 34 to 79 km. The Moho density contrast (determined relative to the adopted reference crustal density of 2670 kg m⁻³) varies between 380 and 710 kg m⁻³.
DISCUSSION
The largest continental crustal thickness is confirmed under the Himalayas; the Moho depths there reach 79 km. The locations of large crustal thickness further extend under the Tibetan plateau, with typical Moho depths of 70 - 75 km and the maxima found in northern Tibet. In Central Tibet, more shallow Moho depths of ~65 km correspond with the Bangong-Nujiang suture. These results agree with the findings of Braitenberg et al. (2000b), Kind et al. (2002), and others (see Section 1). There are several different theories explaining the large crustal thickness beneath the Tibetan plateau. The collision of the Indian and Eurasian plates, which began in the Paleogene and continues today (at a rate of about 5 cm yr⁻¹; cf. e.g., Bilham et al. 1997), has been forming the Himalayan and Tibetan orogenic belt. The geological structure of Tibet is characterized by several sub-plates that were successively accreted onto the Eurasian plate during the Paleozoic and Mesozoic periods. The results of paleomagnetic analysis indicate that these sub-plates moved northward from the southern hemisphere during the Paleozoic period as the intervening ocean subducted and subsequently accreted to the Eurasian plate. The resulting sutures are marked by distinctive geological formations and fault zones. For more information describing the geological structures and tectonic configuration we refer readers to studies, for instance, by Allègre et al. (1984) and Molnar (1986). This collision resulted in the subduction of a large part of the oceanic crust underneath the Tibetan plateau. Zeng et al. (2002) observed multiple crustal subduction features under the Himalayas and southern Tibet. Tilman et al. (2003) reported that the front of the Indian lithospheric mantle was detached below the Qiangtang block, where the asthenosphere ascended and was exchanged with the lithosphere. The geophysical evidence also indicates that the subducted crust of the Indian plate detached from its upper part while the Indian lithospheric mantle is assimilating into the upper mantle (cf. Wu et al. 2004). Xu et al. (2004) reported that the Indian lithospheric slab is being subducted underneath the Tanggula Mountains. A large high-velocity anomalous zone was split into separate high-velocity anomalous bodies, which may be considered geophysical evidence for the abruption caused by the subduction of the Indian lithospheric mantle. The studies by Wittlinger et al. (2004) and Rai et al. (2006) suggest that the Indian plate may penetrate as far as the Bangong suture, and possibly as far north as the Altyn Tagh. Alternative theories facilitate the hypothesis of crustal shortening and consequent crustal thickening attributed to the extrusion or escape tectonics mechanism (Molnar and Tapponnier 1975). According to these theories, the motion of the Indian plate pressed the Indochina block, and a proposed mechanism is that a large part of the crustal shortening was accommodated by thrusting and folding of the sediments of the passive Indian margin together with the deformation of the Tibetan crust (Dewey et al. 1989).
A large crustal thickness of 60 - 65 km was confirmed also beneath the Altay and Hindu Kush. These features are in contrast to the large basins to the south and southwest of the Himalayas as well as to the north of Tibet, which have a much thinner continental crust. The Moho depths beneath the Tarim and Qaidam basins were found to be below ~60 km. Similar Moho depth estimates under the Tarim basin were given, for instance, by Wittlinger et al. (2004). The Indo-Gangetic basin has a crustal thickness of 45 - 55 km. According to our estimates, the crustal thickness beneath Central Siberia is typically 50 - 60 km with some more detailed structures of deeper crustal roots. The crustal structure of Central Siberia consists of two distinctive tectonic regions, the Paleozoic West Siberian basin and the Precambrian Siberian Craton (which extends from the Ural orogen to the Lena river basin). The tectonic configuration indicates that the crustal evolution of these regions began approximately 4 Ga ago. The Moho depths beneath the Archean terranes were estimated to be 60 - 65 km. The crustal thickness slightly decreases under the Paleo-Mesoproterozoic terranes and the Mesozoic and Cenozoic regions, where the Moho depths are typically less than 60 km. The largest Moho depths of 61 - 64 km were found at the southern part of the Siberian Craton. The Moho depths beneath the Paleozoic West Siberian basin are, according to our estimates, ~53 km. Some more detailed structures of the crustal thickness can be recognized along the Baikal rift zone, which is the boundary between the Amur sub-plate and the Eurasian plate (Wei and Seno 1998). Here the Moho locally deepens to ~62 km.
We further compared our estimates (re-sampled to a 2 × 2 arc-deg grid) with the CRUST2.0 Moho depths. The differences between our and the CRUST2.0 Moho depths are shown in Fig. 6. These differences within the study area vary between -9.0 and 18.3 km with a mean of -3.4 km, and the RMS of the differences is 5.7 km. As seen from this comparison, the largest absolute differences are found in the Himalayas (differences are mostly > 10 km). Our results correspond more closely with the CRUST2.0 Moho depths under Siberia (differences are mainly within ±5 km). The CRUST2.0 and our estimates of the crustal thickness beneath Central Siberia are, however, substantially larger than the Moho depths derived from seismic data, for instance, by Pavlenkova (1996), Zorin et al. (2002) and Pavlenkova and Pavlenkova (2006). They reported a typical thickness of the Siberian crust of 35 - 45 km.
The largest values of the Moho density contrast (defined relative to the reference crustal density of 2670 kg m⁻³) are under the Himalayan and Tibetan orogens. Here the maxima exceed 550 kg m⁻³ and locally reach as much as ~700 kg m⁻³. Since the Moho density contrast was determined with respect to the reference crustal density (of 2670 kg m⁻³), the density of the upper mantle underlying the crust can be calculated from these values. The estimated upper mantle density under the Himalayas and Tibet is typically 3200 - 3400 kg m⁻³. The continental upper mantle density increases with depth. The largest values are thus under significant orogens with the largest crustal thickness. We further used these upper mantle density values to determine the Moho density contrast with respect to the CRUST2.0 lower crustal densities. These values should optimally represent the real Moho density contrast. The Moho density contrast under the continental crust in this case does not generally increase everywhere with depth. Its maxima are found beneath Central Tibet; here the density contrast is ~300 kg m⁻³. The Moho density contrast, however, substantially decreases to 150 - 250 kg m⁻³ under the deepest mountain roots of the Himalayas and north Tibet. The Moho density contrast in Central Siberia is typically within 100 - 200 kg m⁻³.
Fig. 6. Moho depth differences obtained from the combined approach and CRUST2.0. The units are in km.
CONCLUSIONS
The convergent tectonic plate boundaries, marked distinctively by the positive gravity anomalies along the orogens, are coupled with the negative gravity anomalies along the sides of the subducted crust. In the gravity map these features are seen along the continent-to-continent collision zone of the Indian and Eurasian tectonic plates (the Himalayan orogen and the Indo-Gangetic basin). The large positive gravity anomalies over the Tibetan, Altay and Hindu Kush orogens are coupled with the negative gravity disturbances over the Tarim and Qaidam basins.
The application of the Bouguer gravity reduction substantially changed the gravity signal over the Himalayas and Tibet, with gravity anomaly minima below -500 mGal. The resulting refined Bouguer gravity anomalies are significantly correlated with the Moho geometry. The largest crustal thickness was confirmed under the Himalayan and Tibetan orogens, with the Moho depths exceeding 65 km and reaching the maxima of ~79 km. This maximum Moho depth differs by ~10% from the corresponding maximum of 72 km estimated using EGM08 and DTM2006.0 by Bagherbandi (2012). The contrast between the crustal thickness beneath orogens and basins is clearly distinguished by the more shallow Moho depths (< 60 km) under the Indo-Gangetic, Tarim and Qaidam basins. Our estimates of the Siberian crustal thickness are similar to the CRUST2.0 Moho depths, but both are significantly larger than those obtained from regional seismic studies. This misfit between the regional and global seismic models might be explained by a low quality of CRUST2.0 in this part of the world. Consequently, our gravimetric solution, compiled using the CRUST2.0 Moho depths in forming the observation equations, agrees better with CRUST2.0 than with the regional seismic results.
The Moho density contrast typically increases with depth. However, this trend is not representative everywhere under the continental crust. Our results revealed that the density contrast of the deepest crustal structures is often much less pronounced compared to the upper mantle. When taking into consideration the Moho density contrast computed with respect to the CRUST2.0 lower crustal densities, the maxima of ~300 kg m⁻³ are found under Central Tibet. On the other hand, the Moho density contrast under the deepest crustal structures of the Himalayas and northern Tibet is only 150 - 250 kg m⁻³. A Moho density contrast of 100 - 200 kg m⁻³ was estimated over most of Central Siberia.
Fig. 1. Topography/bathymetry of the study area.
Fig. 2. GOCO02S gravity anomalies computed on a 1 × 1 arc-deg grid at the surface points with a spectral resolution complete to the spherical harmonic degree 180. The units are in mGal.
Fig. 3. Refined Bouguer gravity anomalies computed on a 1 × 1 arc-deg grid at the surface points with a spectral resolution complete to the spherical harmonic degree 180. The units are in mGal.
Fig. 5. Moho density contrast (defined relative to the adopted reference crustal density of 2670 kg m⁻³) computed on a 1 × 1 arc-deg grid. The units are in kg m⁻³.
Safety of Mealworm Meal in Layer Diets and their Influence on Gut Morphology
Simple Summary
There is limited research on the use of the mealworm meal in laying hens' diets and its effects on relative organ weights, caecum microbiota, ileum morphology and digesta viscosity. All these parameters can affect the performance of animals, i.e., the laying and quality of eggs. The mealworm meal is a relatively new feedstuff, for which a possible harmful effect must be excluded. Insect products have a beneficial nutrient content, but there are issues of stability, shelf life, storage and contamination, which could, in the case of negative properties, affect the morphology of the digestive tract, cause liver damage and, as a result, affect the animal performance parameters. The main objective of this study was to verify the safety of the mealworm meal in the feed of laying hens from 17-42 weeks of age. Therefore, the feed mixtures were tested in terms of microbiological stability, fungal and mycotoxin content, and selected parameters of the hens' intestinal morphology and physiology were monitored. Feed mixtures with proportions of insect products were microbially stable even after four months. Based on the results of this study, two to five percent of mealworm meal in the hens' diet may be used as a sustainable and safe protein feed.

Abstract
The main objective of this study was to verify the safety of mealworm meal in the feed of laying hens from 17 to 42 weeks of age. Therefore, the feed mixtures were tested in terms of microbiological stability, fungal and mycotoxin content, and selected parameters of the hens' intestinal morphology and physiology were monitored. The experiment was carried out with 30 Lohmann Brown Classic hens. Hens were divided by body mass into three equal groups with 10 replicates per treatment. The two experimental groups received feed mixtures containing 2% and 5% yellow mealworm (Tenebrio molitor L.) meal. The third group was a control group which had 0% of mealworm meal in the diet. Diets with 2% and 5% of mealworm meal did not affect the length of villi or the microbiome of the caecum. The highest digesta viscosity from the ileum was found in the group with 5% mealworm, which may indicate a slower passage of the digesta through the digestive tract. Based on our results, it may be concluded that the proportion of mealworm meal does not deteriorate the quality of feeds. Mealworm meal does not negatively affect microbial stability in experimental feeds. Therefore, the inclusion of two and (or) five percent of mealworm meal in the hens' diet can be recommended.
Introduction
Given the expected growth of the human population, a proportional increase in the production of animal products is unavoidable [1]. Poultry production is one of the cheapest and easiest ways to obtain a supply of animal protein, offering fast egg and meat production and beneficial feed conversion efficiency [2]. Eggs thus represent a low-cost, low-calorie source of high-quality protein, lipids, vitamins, minerals and various amino acids [3]. With regard to this, egg production has been increasing rapidly and is expected to reach 89 million tons by 2030 [4]. It is desirable to include readily available, low-cost feedstuffs in the diet of laying hens, which may provide the nutrients for optimizing egg production. Poultry production depends heavily on plant protein sources such as soybean (extracted) meal. However, the rising price of feedstuffs and potential shortages of ingredients in the future [5] due to adverse climatic conditions, land unavailability, and overexploitation of marine sources could lead to serious consequences regarding feed and food production, and the situation will be further worsened by the food-feed-fuel competition [6]. Moreover, proteins from insect products could become a source of animal protein for laying hens and thus ensure a quality nutrient composition of eggs [7]. In this respect, insects appear to be a suitable alternative feed (or food), which is considered to be of animal origin. Generally, insects have proven to be a good alternative feedstuff, especially for poultry, because insects are a part of the natural poultry diet [8,9]. It is currently being investigated whether insect products as a feedstuff may affect the microbial population of the digestive tract in animals.
It is well-known that optimal gastrointestinal health and functionality is important for sustainable animal performance, since it has direct effects on animal health and production [10]. It has been reported that the most predominant genera found in the chicken cecum are Clostridium, Ruminococcus, Lactobacillus and Bacteroides [11][12][13][14][15], with Alistipes and Faecalibacterium present in smaller proportions [11]. For example, the in vitro antimicrobial activities of two fats from Hermetia illucens and Tenebrio molitor were evaluated, together with their effect, as a total substitute for dietary soybean oil, on cecal fermentation and the gut microbiota of growing rabbits. The in vitro activities of Hermetia illucens or Tenebrio molitor fats against Pasteurella, Yersinia, and known pathogens of the rabbit gut indicate a potential for impairing their growth in vivo in rabbits. The inclusion of Hermetia illucens and Tenebrio molitor fats in the rabbits' diet stimulated volatile fatty acid production in the caecum and could positively modulate the caecal and fecal microbiota of the growing rabbits [16]. Morphometric measurements of villus height and crypt depth are usually used to evaluate intestinal development [17], since they represent useful indicators of gut proliferative and absorptive compartments [18].
The inclusion of 10-15% of mealworm meal into the broiler chickens' diet could negatively affect cecal microbiota and intestinal mucin dynamics. Therefore, it is recommended to include 5% of Tenebrio molitor (TM) meal into broilers' diet. The authors stated that yellow mealworm meal utilization at low inclusion rates (5%) represents the most feasible alternative in terms of gut microbiota characteristics and mucin dynamics in broiler chickens [19]. In another study, it was noted that TM could be successfully used to replace 4% of extracted soybean meal in laying hens' diet [20].
There is limited research on the use of the mealworm meal in laying hens' diets and effects on relative organ weights, caecum microbiota, ileum morphology and digesta viscosity. All these parameters can affect the performance of animals, i.e., the laying and quality of eggs. The mealworm meal is a relatively new feedstuff, where it is necessary to exclude a possibly harmful effect. Insect products have a beneficial nutrient content, but there are issues of stability, shelf life, storage and contamination, which could, in case of negative properties, affect the morphology of the digestive tract, cause liver damage and, as a result, affect the animal performance parameters. The main objective of this study was to verify the safety of the mealworm meal in the feed of laying hens from 17-42 weeks of age. Therefore, the feed mixtures were tested in terms of microbiological stability, fungal and mycotoxin content and selected parameters of hens' intestinal morphology and physiology were monitored.
Materials and Methods
The animal procedures were reviewed and approved by the Animal Care Committee of Mendel University in Brno and by the Ministry of Education, Youth and Sports (MSMT-22771/2019-5).
Animals and Diets
The experiment was carried out with 30 Lohmann Brown Classic hens. Layers were housed in balance cages and divided by body mass into three equal groups with 10 replicates per treatment. The two experimental groups received feed mixtures containing 2% (TM2; n = 10) and 5% (TM5; n = 10) of yellow mealworm (Tenebrio molitor L.) meal. The third group (TM0; n = 10) was a control group that had 0% of mealworm meal in the diet. The mealworm meal was mixed with the other components into homogeneous feed mixtures. The mealworm meal replaced the appropriate proportion of soybean extracted meal. The yellow mealworm meal was obtained from Underground Food, s.r.o. (Brno, Czech Republic).
From 17-18 weeks of hens' age, a preparatory period was carried out. Hens were fed with experimental and control diets in this period. The experiment was carried out from 18 to 42 weeks of age. Table 1 shows the composition and proximate analyses of the diets. The rations were calculated according to the Lohmann Tierzucht, Management Guide [21] as isonitrogenous and isoenergic ones. The mash form diets were offered to hens. The hens had free access to feed and water. The health status was evaluated daily. The chemical composition of the mealworm meal and diets were determined for dry matter, crude protein, ether extract, crude fiber and ash according to Commission regulation (EC) 152/2009 [22]. Room temperature, humidity and lighting regime were controlled according to the requirement for the current age of hens [21].
Measurement of Digestive Tract and Ileum Viscosity
In the 42nd week of age, seven average hens from each group were selected and slaughtered. The entire digestive tracts were removed and divided into the following sections: crop, proventriculus, gizzard, duodenum, jejunum, ileum, caeca and colon. The lengths (or width) and empty weights of each segment were recorded. The fresh cecal contents were transferred to a vial and kept refrigerated for subsequent microbiological analysis.
The fresh digesta (from each hen) was removed from the distal part of the ileum to determine viscosity according to Yasar [23]. The digesta was collected in tubes and then centrifuged for 10 min at 3000 rpm. The resulting supernatant was pipetted into Eppendorf tubes. The samples were analyzed for dynamic viscosity on an RST rheometer (Brookfield, MA, USA) at a constant shear strain rate of 50 s⁻¹ with a standard cone-plate geometric arrangement (RCT-50-2; α = 2°), including a temperature duplicator system. The measurement was performed in 10 replicates at 40 °C and the sample volume was 1.2 mL.
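For context, a cone-plate rheometer run at a fixed shear rate reports the apparent dynamic viscosity as the ratio of the measured shear stress to the imposed shear rate; this standard rheological relation is added here only for clarity and is not part of the cited protocol:

\[
\eta \;=\; \frac{\tau}{\dot{\gamma}}, \qquad \dot{\gamma} = 50\ \mathrm{s^{-1}}\ \text{in this set-up},
\]
where η is the apparent dynamic viscosity, τ the measured shear stress and γ̇ the shear rate.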
Histopathological Examination
Liver and ileum samples (seven replications in each group) fixed in 10% formalin were treated with a conventional paraffin method, stained with hematoxylin-eosin [24,25] and evaluated under a light microscope (Motic BA 310).
Morphometric Examination
Histopathological preparations stained with hematoxylin-eosin were further used for morphometric examination by the method of image computer analysis (Soft Imaging System Cell F Imaging Software for Life Science Microscopy, OLYMPUS Soft Imaging Solutions). The method of traditional and automatic computer morphometry in 2D projection was used to measure the length of the villi. Morphometric (histometric) measurements were performed on an image obtained from a classical histological specimen (slide) viewed with a light microscope (Olympus BX 51 with an Olympus DP 70 scanner) connected to a computer. The display resolution (1360 × 1024) was set in the image computer analysis program mentioned above, the analysis object was displayed on the computer monitor screen using the built-in video camera, and an area in the specimen (ROI) was selected in which the measurement objects (villi) were displayed most suitably. The selected area was then fixed, the most suitable magnification for the given measurement was chosen, and the analyzed objects were focused and then photographed with the built-in digital camera. The "Arbitrary line" (Al) function was selected on the menu bar for morphometry (i.e., the shortest distance between a selected fixed starting point and a final, variable, measured point, so that the measurement was taken along the longitudinal axes of the structures) and, using this function, the length of the villi was measured. The length of 3 villi was measured in each section (three sections were made), each from the base to the apex.
Microbiological Analysis
Microbiological analysis was performed by standard plate methods. Firstly, the mealworm meal and other feedstuffs (wheat, maize, soybean meal) were analyzed microbiologically. Secondly, final feed mixtures were analyzed to determine the quantities of microorganisms.
A 10-g sample of the feed mixture was weighed into a sterile Erlenmeyer flask and 90 mL of sterile saline was added. Each sample was prepared in duplicate, and the plates were done in duplicate for each dilution used, to improve accuracy. The samples were homogenized on the orbital shaker PSU-10i (Biosan, Latvia) for 10 min. The following groups of microorganisms were determined by standard procedures. The following constituents were determined in the feed mixture: total plate count on PCA (Biokar Diagnostics, Allonne, France) at 30 °C.

From the obtained samples of cecal chyme (n = 7 per group), 5 g were collected and placed into a sterile centrifuge tube, 45 mL of sterile saline solution was added and the mixture was shaken for 1 min on a Multi-speed Vortex MSV-3500 (Biosan, Latvia). The following constituents were determined in the cecal chyme: E. coli and other coliform bacteria on Rapid E. coli Agar (Bio Rad, Helsinki, Finland) at 37 °C for 24 h, Clostridium perfringens on TSN Agar (Biokar Diagnostics, Allonne, France), and Salmonella spp. by the double-enrichment method on Rapid Salmonella Agar (Bio Rad, Helsinki, Finland) at 37 °C for 24 h. After the incubation, the number of typical colonies was counted and the results were expressed in log CFU/g.
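The conversion of colony counts to log CFU/g follows the usual plate-count arithmetic; the worked numbers below are hypothetical and only illustrate the calculation, they are not data from this study:

\[
\mathrm{CFU/g} \;=\; \frac{N}{V_{\text{plated}} \cdot d},
\]
where N is the colony count, V_plated the plated volume in mL and d the dilution of the plated suspension expressed in grams of original sample per mL. For example, with the initial 10 g / 90 mL homogenate (≈ 10⁻¹ g/mL) diluted ten-fold twice more (10⁻³ g/mL), 0.1 mL plated and 120 colonies counted: CFU/g = 120 / (0.1 × 10⁻³) = 1.2 × 10⁶, i.e. log₁₀ CFU/g ≈ 6.1.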
The method for DON analysis was based on the standard method for trichothecene analysis [26]. In brief, the homogenous sample (12.5 g) was extracted with acetonitrile and water (84:16). The mixture was shaken intensively for 30 min, filtered through Whatman No. 1 filter paper and an 8 mL volume of the supernatant was passed through a MycoSep® 227 Trich+ cleanup column. The sample was dried with nitrogen gas and reconstituted in the mobile phase (acetonitrile and water; 90:10). The final sample was injected into the HPLC system; the HPLC conditions were as follows: mobile phase flow rate 1.0 mL/min, temperature 35 °C, injection volume 25 µL, UV detector (218 nm).

For OTA analyses, the homogenous sample (12.5 g) was extracted with acetonitrile and water (84:16). The mixture was shaken intensively for 30 min, filtered through Whatman No. 113V filter paper, and after the addition of glacial acetic acid to the supernatant, the mixture was passed through a MycoSep® 229 Ochra cleanup cartridge. The sample was dried with nitrogen gas and reconstituted with the mobile phase (water, acetonitrile and acetic acid, 60:40:1). The HPLC conditions were as follows: mobile phase flow rate 0.3 mL/min, temperature 30 °C, injection volume 25 µL, fluorescence detector (460 and 333 nm).

For aflatoxin analyses, the ground test sample (12.5 g) was extracted with methanol and water (80:20). The mixture was shaken for 30 min, filtered through Whatman No. 1 filter paper and mixed with acetonitrile (1:1); the mixture was then passed through a MycoSep® 226 AflaZon+ cleanup column. The sample was diluted (1:3) in the mobile phase (water and methanol, 55:45) and used for HPLC analyses. The HPLC conditions were as follows: mobile phase flow rate 0.8 mL/min, temperature 25 °C, injection volume 100 µL, fluorescence detector (440 and 360 nm).
Chemicals were of HPLC grade and they were purchased together with analytical standards from Merck (Merck-Sigma-Aldrich s.r.o., Prague, Czech Republic). Mycotoxins in mealworms larvae meal and feed mixtures were analyzed in duplicates. The detection limits were 40 µg/kg, 0.2 µg/kg and 0.1 µg/kg for DON, OTA and aflatoxins, respectively.
Statistical Analysis
The data were processed by Microsoft Excel (USA) and StatSoft Statistica version 12.0 (USA). The Shapiro-Wilk W test was used to test the normality of the data distribution. The data set was well modeled by a normal distribution. A one-way analysis of variance (ANOVA) was used to determine the differences between the groups. To assess the significance of differences between group means, Scheffé's post-hoc test was applied, and p < 0.05 was regarded as a statistically significant difference.
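A minimal Python sketch of the same analysis pipeline (normality check followed by one-way ANOVA) is given below for illustration. The data are random placeholders, the authors used Statistica rather than Python, and Scheffé's post-hoc test would require an additional package (e.g. scikit-posthocs), so only the first two steps are shown:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder ileal digesta viscosity values (mPa*s) for the three groups;
# these numbers are illustrative and not the data reported in the study.
tm0 = rng.normal(4.0, 0.5, 7)
tm2 = rng.normal(4.5, 0.5, 7)
tm5 = rng.normal(5.4, 0.5, 7)

# Shapiro-Wilk W test for normality within each group
for name, grp in [("TM0", tm0), ("TM2", tm2), ("TM5", tm5)]:
    w, p = stats.shapiro(grp)
    print(f"{name}: Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")

# One-way ANOVA across the three groups
f, p = stats.f_oneway(tm0, tm2, tm5)
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")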
Results
The mealworm meal used contained, on a dry matter basis, 532.5 g/kg of crude protein, 293.5 g/kg of ether extract, 62.1 g/kg of crude fiber and 39.0 g/kg of crude ash.
The results are presented in Tables 2-4 and Figure 1. Table 2 gives the relative sizes of the individual sections of the digestive tract. The morphology of the ileum and the viscosity of the ileal digesta are shown in Table 3. Figure 1 shows the histopathological examination of the liver. Table 4 shows the results of caecal microbial colonization.

Table 2 shows the mean weights and lengths of the individual sections of the digestive tract of the hens. In the control group of hens, a statistically significantly (p < 0.05) lower width and height of the gizzard were found. The highest gizzard muscle height was found in the control group compared to the TM2 group. A significantly (p < 0.05) longer colon was found in the control group compared to the group of hens receiving a 5% proportion of mealworms in the diet.
Histo-Morphological Examination and Digesta Viscosity
The morphology of the ileum and the viscosity of the ileal digesta were also measured in the experiment. No statistically significant differences were found in the length of villi between the groups (p > 0.05); see Table 3. A statistically significantly (p < 0.05) lower digesta viscosity was found in the control group compared to the TM5 group.
In the examined ileal samples of the TM0 group, the sites of vacuolation of the cytoplasm of epithelial cells and in the lamina propriae mucosae were detected. In the TM2 group, the sites of vacuolation of the cytoplasm of epithelial cells were detected. In the TM5 group, suspected lymphodepletion was found in some areas of the lymphatic tissue. Otherwise, the samples were without pathological changes. In the examined liver samples of the TM0 group (n = 7), small-capsule vacuolation of the cytoplasm (probably fat capsules), rather periportally, was found in some hepatocytes. The nuclei remain in the center of hepatocytes in many cells. In TM2 samples (n = 7), some hepatocytes had small-droplet cytoplasmic vacuolation (probably fat capsules), rather periportally. The nuclei remain in the center of hepatocytes in most cells. In TM5 samples (n = 7), some hepatocytes had small dose vacuolation of the cytoplasm (probably fat sacs), rather periportally. In the fourth examined sample of the TM5 group, the changes from all three groups were most intensely visible. The nuclei remain in the center of hepatocytes in most cells. In general, all examined samples were free of pathological changes. See Figure 1.
Microbial Colonization in Cecal Chyme
In the 42nd week of the hens' age, an analysis of the caecal chyme was performed. The results are shown in Table 4. Although no statistically significant differences were found, there was a decrease in the number of colonies of E. coli and other coliform bacteria. The presence of the genus Salmonella was also analyzed; all samples were negative with respect to the presence of this bacterium.
Microbiological Analysis of Experimental Feeds
Results are shown in Table 5. The presence of E. coli and Salmonella spp. was not found in the mealworm meal. Other monitored species of microorganisms were present at a very low level in the raw material. Likewise, the mycotoxin content was well below the limit values. The mycotoxin content was analyzed at 94.3% of dry matter. (Table abbreviations: TPC - total plate count; LAB - lactic acid bacteria; CFU - colony-forming unit; OTA - ochratoxin A; AF - aflatoxins; AFB1 - aflatoxin B1; DON - deoxynivalenol.)
The microbiological quality of the experimental feed mixtures was monitored for four months. As can be seen from the results in Tables 6 and 7, there was no deterioration in the microbiological parameters of the diets during the monitoring period. The feed mixtures were also tested for Salmonella spp. during the storage period; all tested samples were negative. The most problematic parameter is the mold count, as molds could completely degrade the feeds, especially through the production of mycotoxins. High mycotoxin production was, however, ruled out by subsequent analyses. The stored feed mixtures were tested for mycotoxin contamination at the end of the storage period. Approximately 338 µg/kg DON, 0.4 µg/kg OTA, <0.1 µg/kg AFB1 and <0.1 µg/kg AF were found in the experimental feed mixtures (at 92.79% dry matter).
Discussion
In the present study, gut morphology, ileal digesta viscosity and cecal chyme microbiology were evaluated with the inclusion of 2% and 5% of mealworm meal in the diets. Biasato et al. [27] fed female broilers 50 g/kg, 100 g/kg or 150 g/kg of Tenebrio molitor in the diets. They found no influence of the diets on the gut morphometric indices. Our results for ileum villus height corresponded to their findings. Biasato et al. [27] found that the abdominal fat weight showed a significantly linear response to increasing TM meal levels, with a maximum corresponding to the inclusion of 150 g/kg of TM meal. No significant effects related to TM meal utilization were observed for the weights of the other organs. Biasato et al. [27] also found histopathological alterations in the liver: a moderate (control, TM 50 g/kg and TM 100 g/kg = 50% of the broilers; TM 150 g/kg = 30%) to severe (control = 20%; TM 50 g/kg = 30%; TM 100 g/kg = 20%; TM 150 g/kg = 0%) perivascular lymphoid tissue activation. A normal liver was observed in 30% of the control group and in 20% (50 g/kg), 30% (100 g/kg) and 70% (150 g/kg) of the animals. In our trial, all examined liver samples were free of pathological changes even after 6 months of laying. This means that the inclusion of mealworm meal did not affect liver health or worsen the condition of the villi in the laying hen's ileum. In another experiment of Biasato et al. [28], a statistically significantly higher ileum villus height was found in the group receiving 75 g/kg of TM meal compared to the control group of slow-growing chickens. It was logical to perform the experiment on slow-growing chickens due to their longer fattening, i.e., longer exposure to the tested feed. With longer exposure to the tested feeds, it can be expected that any changes will take effect. Therefore, it makes sense to test insect products in laying hen or slow-growing chicken diets for an extended period.
In trial [29], defatted black soldier fly larvae meal feeding (BSFLM) had no effect on pancreas, small intestine, and gizzard weight of layers. Feeding this insect product quadratically increased the liver weight. The inclusion of BSFLM reduced empty ceca weight linearly and quadratically compared to the control group of layers. Generally, the high crude fiber content in poultry diets tends to increase the sizes of ceca and small intestine weight [29,30]. Other studies showed that chitin present in insects had a positive effect on the gastrointestinal physiology and metabolism of the Lohmann Brown Classic laying hens [31,32]. Chitin may be a substrate for microbial fermentation and therefore, it could have a positive effect on the microbial balance in the gastrointestinal tract similar to a probiotic [33]. We have not confirmed this phenomenon in our experiment.
It is well documented that many dietary non-starch polysaccharides (such as arabinoxylans) may increase digesta viscosity [34], which might lead to a slower passage rate and reduced absorption of nutrients and, in turn, to poorer growth in broilers [35,36]. Moreover, another negative effect of high viscosity is the ability to hold water in the digesta, producing adhesive excreta and increased moisture of the litter [37,38]. Increased intestinal viscosity may change the morphology of the ileal villi [34,39]. Due to a lack of endogenous enzymes that degrade dietary fiber, including soluble non-starch polysaccharides (NSP), the intestinal viscosity increases, which slows down the migration and absorption of nutrients [40]. For example, the ileal digesta viscosity reported in [41] (although measured at a temperature of 25 °C) was lower compared to our results (2.45 vs. 5.43 mPa·s). In our experiment, the ileal viscosity was higher with an increased proportion of TM in the diets. The reason for the increase of the ileal viscosity in our experiment, with a proportion of 5% TM in the diet, may be the presence of chitin, which has digestive properties similar to those of crude fiber. On the other hand, the crude fiber content was not particularly different between the experimental diets (Table 1). It should be noted that the increased ileal digesta viscosity in TM5 affected neither the length of the villi nor the length and weight of the ileum, and vice versa.
There are no studies to verify the stability of feed mixtures with the proportion of insect products. For this reason, it is difficult to compare our results. However, information about insect microbial flora may be helpful. Generally, the microbial flora of insects is composed of bacteria of different genera: Staphylococcus, Streptococcus, Bacillus, Proteus, Pseudomonas, Escherichia, Micrococcus, Lactobacillus and Acinetobacter [42][43][44][45]. The largest variations were found in numbers of bacterial endospores, psychrotrophs and fungi. Salmonella spp. and L. monocytogenes were not detected in any of the fresh mealworm larvae samples [46]. Insect microflora can affect the microbial stability of feeds and it may also affect the gut microbiome of animals. There was a decrease in the number of colonies of E. coli and other coliform bacteria (Table 4) in ceca in our trial. This decrease could be due to the antibacterial agents that Tenebrio molitor is able to produce. Shin et al. [47] observed in antimicrobial tests that chitosan (produced by alkaline deacetylation of chitin) from the larva of Mealworm Beetle showed about 1-2 mm inhibition zones against four strains of bacteria: S. aureus, B. cereus, L. monocytogenes, and E. coli, indicating antimicrobial activity. If we focus on the detection of E. coli, it is very positive that there were no findings in any sample of the tested feed during the storage period.
In the present study, the presence of mycotoxins was also tested, both in the mealworm meal and in the feed mixtures containing this insect product during storage. Under incorrect storage conditions, fungi may grow in the feeds and may produce mycotoxins. According to Commission recommendation 2006/576/EC [48] and Commission regulation 574/2011 [49], a content of 0.1 mg/kg OTA, 5 mg/kg DON and 0.02 mg/kg AFB1 is specified for feed mixtures for poultry (at 88% dry matter, i.e. as-fed basis). The mycotoxin levels found in our experimental feed mixtures containing mealworm meal are well below the limit values.
While replacing traditional sources of protein with insect-based protein in the layer diet, economic efficiency has to be taken into account. Recently, the prices of insect protein have been several times higher than sources of the traditional protein used in the layer diet. Therefore, it makes sense to keep the proportion of the insect protein at a lower level, which is not significantly affecting the price of the produced feed, but, at the same time, it has the potential to affect positively the production capacity of hens or the health of hens during their productive life. The inclusion of a small percentage of the insect protein can also be used in marketing activity related to selling final products because a part of the customers positively evaluates the replacement of the traditional sources of protein by the insect protein. The impact on the final price has always to be controlled, as it is a crucial indicator for the economic efficiency from the point of view of commercial producers of both feed and laying hens.
By increasing the scale of production, insect farmers will be able to increase the price competitiveness and stability of their products compared with other protein sources. Automation and controlled production systems will significantly help stakeholders to achieve this by making insect production less labor-intensive [50]. To establish the economic impact of adding insects into animal feeding and more cost-benefit analysis will have to be carried out to deeply investigate how these alternative ingredients effectively influence overall production costs. In particular, the offset of the extra costs of novel feeds by the improvement of animal health and performance, as well as the market premium potentially derived from higher welfare products, will have to be considered [51].
In the present study, the mealworm meal inclusion did not affect the morphology of the small intestine, thus suggesting no influence on nutrient metabolization or performance. The full-fat mealworm meal inclusion did not induce histological changes, thus suggesting no negative influence on animal health.
Conclusions
Mealworm meal was included in the layers' diet at 2% and 5%. Based on our results, it may be concluded that these proportions of mealworm meal do not deteriorate the quality of the feeds. The mealworm meal did not negatively affect the microbial stability of the experimental feeds or lead to mycotoxin production. In the experiment, it was found that the inclusion of 2% and 5% mealworm meal in the hens' diet did not affect the length of the villi or the microbiome of the caecum. At the same time, the highest ileal digesta viscosity was found in the TM5 group, which may indicate a slower passage of digesta through the digestive tract. Therefore, the inclusion of 2% or 5% mealworm meal in the hens' diet can be recommended.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ongoing patent proceedings.
|
2021-05-29T05:17:26.500Z
|
2021-05-01T00:00:00.000
|
{
"year": 2021,
"sha1": "9eab605a920055a0d03fd91438def922cb4a3f99",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2615/11/5/1439/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9eab605a920055a0d03fd91438def922cb4a3f99",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
271174270
|
pes2o/s2orc
|
v3-fos-license
|
Morphological and Functional Alterations in the CA1 Pyramidal Neurons of the Rat Hippocampus in the Chronic Phase of the Lithium–Pilocarpine Model of Epilepsy
Epilepsy is known to cause alterations in neural networks. However, many details of these changes remain poorly understood. The objective of this study was to investigate changes in the properties of hippocampal CA1 pyramidal neurons and their synaptic inputs in a rat lithium–pilocarpine model of epilepsy. In the chronic phase of the model, we found a marked loss of pyramidal neurons in the CA1 area. However, the membrane properties of the neurons remained essentially unaltered. The results of the electrophysiological and morphological studies indicate that the direct pathway from the entorhinal cortex to CA1 neurons is reinforced in epileptic animals, whereas the inputs to them from CA3 are either unaltered or even diminished. In particular, the dendritic spine density in the str. lacunosum moleculare, where the direct pathway from the entorhinal cortex terminates, was found to be 2.5 times higher in epileptic rats than in control rats. Furthermore, the summation of responses upon stimulation of the temporoammonic pathway was enhanced by approximately twofold in epileptic rats. This enhancement is believed to be a significant contributing factor to the heightened epileptic activity observed in the entorhinal cortex of epileptic rats using an ex vivo 4-aminopyridine model.
Introduction
Epilepsy is a common neurological disorder affecting approximately 50 million people worldwide [1].This disorder is characterized by recurrent spontaneous seizures resulting from hyperexcitability and hypersynchrony of brain neurons [2].Despite the availability of numerous antiseizure drugs, approximately 30% of patients with epilepsy, specifically temporal lobe epilepsy (TLE), continue to experience seizures [3].Consequently, the investigation of the mechanisms underlying the generation of ictal activity and the precise functions of distinct brain divisions in this process represents a pivotal objective within the field of experimental neurobiology.
In many cases of temporal lobe epilepsy, impaired interactions between the hippocampus and entorhinal cortex play an important role in the generation of ictal activity.The synaptic connections between the entorhinal cortex and the hippocampus can be described in a simplified manner as follows: there are two parallel excitatory pathways, namely the trisynaptic pathway (entorhinal cortex, layer 2 → dentate gyrus → CA3 → CA1 → entorhinal cortex, layer 5/6) and the monosynaptic direct pathway (entorhinal cortex, layer 3 → CA1) [4].The direct and trisynaptic pathways from the entorhinal cortex terminate at distinct locations within the dendrites of CA1 pyramidal neurons, as they traverse distinct layers of the hippocampus.Schaffer collaterals, as part of the trisynaptic pathway, are located in the radial layer, while the monosynaptic pathway is situated within the stratum lacunosum moleculare [4].Such an organization of connections in the normal state allows CA1 neurons to match and integrate information from the two pathways [5].However, in epilepsy, this is likely the reason for the excitotoxicity and increased vulnerability of this area.For instance, temporal lobe epilepsy (TLE) is frequently accompanied by hippocampal sclerosis, which has been demonstrated in MRI, surgical, and post-mortem studies [6,7] and in animal models [8,9].Such disorders are frequently characterized by cell loss and gliosis in the CA1 and CA3 subfields of the hippocampus, hilus [10,11].The death of some hippocampal neurons and the reorganization of connections within the hippocampus and other brain structures may be one of the reasons for the maintenance of pathological activity in the brain.
The present study focused on the properties of hippocampal CA1 neurons using a lithium-pilocarpine model of epilepsy in young rats.This model is capable of reproducing the same types of lesions observed in patients with temporal lobe epilepsy.These include significant neuronal death in the CA3 and CA1 regions of the hippocampus, gliosis, proliferation of granular neurons in the dentate gyrus, and mossy fiber sprouting.Furthermore, the phenomenon of pharmacoresistance to antiseizure drugs is frequently observed in this model, which mirrors the situation observed in human temporal lobe epilepsy [12].Previously, we have demonstrated a reduction in synaptic neurotransmission through Schaffer's collaterals between the CA3 and CA1 regions of the hippocampus in the lithium-pilocarpine model.Furthermore, we observed a high background activation of the glutamatergic system and an increase in the frequency of spontaneous events on CA1 pyramidal neurons [13].It is hypothesized that the death of a portion of hippocampal neurons may result in aberrant sprouting, which in turn may lead to altered afferent excitatory connectivity in the CA1 subfield.Therefore, the present study aimed to characterize the electrophysiological membrane properties of CA1 pyramidal neurons, their synaptic inputs from CA3 and the entorhinal cortex, as well as features of epileptiform activity in brain slices obtained from control and lithium-pilocarpine-treated epileptic rats.
In Epileptic Rats, There Is a Notable Loss of Pyramidal Neurons in the CA1 Region of the Hippocampus, Yet the Membrane Properties of the Neurons Remain Largely Unaltered
It has been demonstrated that the hippocampus is one of the most severely damaged brain structures in the chronic phase of the lithium-pilocarpine model [14].A distinctive feature of our experimental model is that pilocarpine was administered to rats at the age of three weeks.One month following the induction of status epilepticus by pilocarpine, a quantitative analysis of neuronal loss was conducted in different regions of the hippocampus (Figure 1).This period aligns with the chronic stage of the model, during which time spontaneous recurrent seizures were observed in the majority of experimental animals included in the study.
A statistically significant decrease in the number of neurons was observed in all analyzed areas.The CA1 area exhibited the greatest neuronal death, with 37% of cells lost, followed by the dentate gyrus (26%), CA3 area (19%), and hilus (18%).These findings indicate that the CA1 area is one of the most vulnerable regions of the hippocampus in the lithium-pilocarpine model of epilepsy.
A significant loss of neurons in the CA1 region may result in alterations to their biophysical characteristics. Given that the number of cells has decreased, it can be postulated that the remaining neurons assume the functions of the dead ones. As a result, it may be expected that the size of the surviving neurons may increase due to an expansion in the total length of dendrites and axons. This would consequently result in alterations to the passive membrane properties, specifically a reduction in input resistance and an elongation of τ. We therefore proceeded to record and analyze the membrane properties of pyramidal neurons via whole-cell patch clamp recording. However, the membrane input resistance and τ were unaltered (Figure 2a, Table 1), thereby refuting our hypothesis. It is also notable that the resting membrane potential of neurons from the epileptic hippocampus did not differ from that of controls.
Next, we compared the firing patterns of the CA1 neurons of the epileptic rats with those of the age-matched controls. Representative examples of the action potential (AP) trains generated in response to suprathreshold depolarizing current steps in CA1 neurons are shown in Figure 2b.
The overall response patterns were very similar between groups; however, CA1 neurons from epileptic rats showed an increased maximum AP firing rate compared to controls (control: 23.2 ± 0.7 Hz; epileptic: 25.7 ± 0.5 Hz; p < 0.01; Figures 2 and 3). This change was accompanied by an 18% reduction in late frequency adaptation of APs (control: 1.48 ± 0.05; epileptic: 1.25 ± 0.05; p < 0.01), but not in early adaptation (control: 1.06 ± 0.02; epileptic: 1.04 ± 0.03; p = 0.5). However, all other parameters of the firing pattern in the CA1 neurons of the epileptic rats were similar to the control. The maximum slope of the current-frequency curve, which characterizes the rate of increase in AP frequency with increasing depolarizing current amplitude, was 0.12 ± 0.01 in control and 0.12 ± 0.01 in epileptic rats (p = 0.47). The current amplitudes sufficient to evoke the first APs (control: 137 ± 8 pA; epileptic: 126 ± 7 pA; p = 0.32), the maximum frequency of the APs (control: 484 ± 26 pA; epileptic: 503 ± 23 pA; p = 0.6), and the depolarizing block (control: 597 ± 28 pA; epileptic: 590 ± 26 pA; p = 0.86) were also similar in these two groups of rats (Figure 3). Consequently, the altered excitability parameters observed in CA1 neurons of epileptic rats are predominantly associated with conditions of extreme excitation, where these neurons are capable of sustaining slightly elevated firing rates before entering depolarizing block.
Subsequently, the properties of individual APs in CA1 neurons were evaluated. The following parameters were measured: AP threshold, amplitude, half-width, and rise time from 10 to 90% of the amplitude of the AP; fast and medium afterhyperpolarization (AHP) amplitude and timing; as well as the amplitude of afterdepolarization (ADP). These parameters are largely defined by the kinetics of different sodium and potassium channels [15]. No significant alterations were observed in the action potential properties of CA1 pyramidal cells in the chronic phase of the lithium-pilocarpine model (Table 2). Given that no significant alterations were observed in the membrane properties of pyramidal neurons, while the functionality of neuronal networks in the epileptic hippocampus was found to be disrupted [13], it can be postulated that synaptic connections may be disrupted. Previously, we have demonstrated a reduction in synaptic neurotransmission through Schaffer's collaterals between the CA3 and CA1 regions of the hippocampus in the lithium-pilocarpine model. In addition, we observed a high background activation of the glutamatergic system and an increase in the frequency of spontaneous events on CA1 pyramidal neurons [13]. It is possible that the observed changes are caused by an increase in neurotransmission through the direct pathway from layer 3 of the entorhinal cortex to the CA1 region of the hippocampus [4,16]. This hypothesis is supported by evidence from previous studies [17]. Consequently, we conducted a study to examine the properties of synaptic neurotransmission between the CA3 and CA1 regions and between the entorhinal cortex and the CA1 region of the hippocampus in control rats and rats in the chronic phase of the lithium-pilocarpine model of temporal lobe epilepsy. Whole-cell patch clamp recordings were conducted on CA1 pyramidal neurons, with electrical extracellular stimulation applied to two distinct pathways to elicit evoked excitatory postsynaptic currents (eEPSCs).
In epileptic rats, the paired-pulse ratio during stimulation of Schaffer collaterals was found to be greater than in control animals (t = 2.13, p < 0.05). In contrast, this ratio did not change in the case of temporoammonic pathway stimulation. This finding indicates that in the epileptic group, the probability of transmitter release from the presynaptic terminals of CA3 neurons is reduced, whereas this probability remains unchanged in the terminals of entorhinal neurons.
Subsequently, we conducted an analysis of the summation of EPSCs evoked by a short train of five stimuli with a frequency of 50 Hz (Figure 4d,e). It was observed that in epileptic rats, the summation of responses evoked by stimulation of the temporoammonic pathway was significantly higher (F1,10 = 54.6, p < 0.001), whereas the summation of responses evoked by stimulation of the Schaffer's collaterals decreased (F1,14 = 5.28, p < 0.05). As Schaffer's collaterals terminate at apical dendrites, which are situated in the stratum radiatum, and the temporoammonic pathway terminates in the distal portion of the apical dendrites of CA1, which are located within the stratum lacunosum moleculare [4], it was deemed pertinent to ascertain whether there existed any structural correlates of the observed functional alterations.
The filling of neurons with biocytin during electrophysiological recordings permitted the morphological reconstruction of these cells (Figure 5). The density of dendritic spines in the stratum radiatum and stratum lacunosum moleculare of control and epileptic rats was analyzed. The results demonstrated that the spine density in the stratum lacunosum moleculare of chronic TLE rats was more than 2.5 times higher than that of control rats (t = 3.74, p < 0.01). However, no differences were observed in the stratum radiatum.
Consequently, both electrophysiological and morphological data indicate that the direct pathway from the entorhinal cortex to CA1 neurons is reinforced, whereas the inputs to them from CA3 are either unaltered or even diminished.
Epileptiform Activity Induced by 4-Aminopyridine in Hippocampus-Entorhinal Cortex Slices Differs between Control and Epileptic Rats
The subsequent step was to ascertain whether there were differences in the generation of epileptiform activity in brain slices of control and epileptic rats. To this end, a 4-aminopyridine ex vivo model was employed, and epileptiform activity was analyzed in brain slices containing the hippocampus and entorhinal cortex. Simultaneous recordings of local field potentials (LFPs) were obtained from the radial layer of the CA1 area of the hippocampus and the deep layers of the entorhinal cortex (Figure 6). Following the application of an epileptogenic solution, epileptiform activity rapidly developed with approximately the same delay in both groups in the hippocampus (control: 99.0 ± 15.4 s, n = 12; vs. epileptic: 92.6 ± 19.2 s, n = 9, t = 0.27, p = 0.79) and the entorhinal cortex (control: 183.7 ± 26.7 s, n = 12; vs. epileptic: 244.4 ± 51.4 s, n = 9, t = 1.13, p = 0.27). Visual analysis of the recordings revealed significant differences between the epileptic and control animals (Figure 6). For example, in the hippocampus, interictal discharges were more frequent in control rats than in epileptic animals. In most control animals, interictal activity was predominant in the entorhinal cortex, and when ictal discharges occurred, they were of relatively short duration. In the epileptic animals, ictal discharges predominated in the entorhinal cortex (Figure A1) and lasted longer than in the control group. Furthermore, these ictal discharges partially propagated from the entorhinal cortex to the hippocampus, but the amplitude of the "ictal" LFP in the hippocampus was relatively small (Figure 6, inset). The propagation of ictal discharges from the entorhinal cortex to the hippocampus was not observed in control animals.
The use of wavelet transformation in signal analysis enables a clearer visualization of the distinction between the control and epileptic groups.Specifically, the intensity of ictal activity in the entorhinal cortex is greater in epileptic rats.In control animals, the interictal activity is more prominent than in epileptic animals and propagates from the hippocampus to the entorhinal cortex; thus, the interictal discharges in the hippocampus and cortex are more synchronized in control animals than in epileptic animals (Figure 6).
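For readers who wish to reproduce this kind of time-frequency view, the following Python sketch outlines how a Morlet-wavelet spectrogram of an LFP trace could be computed with the PyWavelets package; the sampling rate, frequency range, and the synthetic trace are placeholders, and the exact wavelet implementation used in the study is not reproduced here.

```python
import numpy as np
import pywt
import matplotlib.pyplot as plt

fs = 1000.0                                  # placeholder sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
lfp = np.random.randn(t.size)                # stand-in for a recorded LFP trace

# Map the frequencies of interest onto scales of the Morlet ('morl') wavelet
freqs = np.linspace(1, 100, 100)             # Hz
scales = pywt.central_frequency("morl") * fs / freqs

coeffs, freqs_out = pywt.cwt(lfp, scales, "morl", sampling_period=1 / fs)
power = np.abs(coeffs) ** 2                  # time-frequency power, shape (n_freqs, n_samples)

# Quick visualization of the resulting spectrogram
plt.pcolormesh(t, freqs_out, power, shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```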
To compare patterns of epileptiform activity quantitatively, we binarized the signal and identified unitary epileptiform events (uEEs) in the hippocampus and entorhinal cortex throughout the recording using the algorithm described in the Methods section.We then created a cumulative plot of the number of uEEs within one hour of the change in the bath solution to a proepileptic one for each slice.The algorithm identifies ictal discharges as sets of closely spaced uEEs, while interictal discharges usually correspond to a single uEE or a short burst of uEEs.Therefore, on cumulative curves, the steeply rising part corresponds to an ictal discharge (Figure 7).As shown in the figure, these steps are noticeable in the entorhinal cortex curves and practically absent in the hippocampus.The total number of uEEs in the hippocampus (2337 ± 242 vs. 1625 ± 229) and in the cortex (925 ± 111 vs. 419 ± 48) was lower in epileptic rats than in controls.Thus, epileptiform activity in brain slices from rats with epilepsy differs both qualitatively and quantitatively from that in controls.
We then analyzed the distribution of interevent intervals (IEIs) in the hippocampus (Figure 8). In both control and epileptic rats, the distributions had a bimodal shape. The first mode is in the region of 2 s, which roughly corresponds to the frequency of interictal events. In control animals, this mode was shorter (1.5 ± 0.2 s) than that in epileptic rats (2.1 ± 0.3 s, t = 2.12, p < 0.05). The second mode occurs at approximately 0.3 s, which is more indicative of events that are part of the ictal discharge. It is important to note that while interictal events were more common in both groups, the proportion of events with short IEIs increased in the epilepsy group.
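A minimal sketch of this IEI analysis is given below, assuming that uEE onset times are already available from the detector described in the Methods; the bin width and the synthetic event times are illustrative choices, not parameters reported in the study.

```python
import numpy as np

def iei_distribution(event_times_s, bin_width_s=0.1):
    """Return inter-event intervals (IEIs) and the interval at the histogram peak.

    event_times_s: sorted 1-D array of uEE onset times in seconds.
    bin_width_s: histogram bin width (an illustrative choice, not from the study).
    """
    ieis = np.diff(np.asarray(event_times_s, dtype=float))
    bins = np.arange(0.0, ieis.max() + bin_width_s, bin_width_s)
    counts, edges = np.histogram(ieis, bins=bins)
    peak = np.argmax(counts)
    greatest_mode_s = 0.5 * (edges[peak] + edges[peak + 1])
    return ieis, greatest_mode_s

# Hypothetical onset times; real onsets would come from the uEE detector
times = np.cumsum(np.random.exponential(2.0, size=300))
ieis, mode_s = iei_distribution(times)
print(f"Greatest mode of the IEI distribution: {mode_s:.2f} s")
```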
The distribution of IEIs in the entorhinal cortex differed significantly between control and epileptic rats (Figure 9a).In control animals, the distribution of IEIs was similar to that observed in the hippocampus, with a predominant mode indicating interictal discharges.However, in epileptic animals, the distribution is different, and the proportion of interictal discharges is lower.To compare the features of epileptiform activity in the cortex of epileptic and control animals more accurately, we isolated areas with ictal and interictal activity on the recording and analyzed the distribution of IEIs (Figure 9b,c).Our findings indicate that ictal discharges in epileptic and control rats occurred with approximately equal latency after the application of the epileptic solution (300-400 s).However, ictal activity in the entorhinal cortex disappears rapidly in control rats.On average, we recorded 2.0 ± 0.9 ictal discharges per hour (n = 8).In epileptic rats, ictal activity was significantly greater, with 11.0 ± 2.0 discharges per hour (n = 9, t = 3.86, p < 0.01).In control rats, no ictal activity was recorded in 37.5% of the slices, while in epileptic rats, ictal activity was present in 100% of the slices.
Discussion
The present study focused on the properties of pyramidal neurons in the hippocampal CA1 region of epileptic rats in the lithium-pilocarpine model.Our findings indicate that in this model of epilepsy, the CA1 region is one of the most vulnerable, with maximal death of hippocampal neurons observed here.However, electrophysiological membrane properties of pyramidal neurons of this area are practically not disturbed.It can be observed that the altered excitability parameters observed in CA1 neurons of epileptic rats are predominantly associated with conditions of extreme excitation, where these neurons are capable of sustaining slightly elevated firing rates before entering the depolarizing block.Concurrently, the synaptic inputs of pyramidal neurons underwent a significant modification.The direct input from the entorhinal cortex increased, accompanied by the appearance of new synaptic contacts.In contrast, the input from the CA3 area remained unchanged or weakened slightly.These changes in synaptic inputs significantly altered the characteristics of epileptiform activity in an ex vivo 4-AP model.In the CA1 region of the hippocampus, interictal discharges were more frequent in control animals than in epileptic rats.In the entorhinal cortex of control rats, interictal activity was predominant, and ictal discharges, when they occurred, were relatively short in duration.In the entorhinal cortex of animals with epilepsy, ictal discharges occur more frequently and last longer than in the control group.
Temporal lobe epilepsy is frequently accompanied by damage to the CA1 region of the hippocampus, including neuronal loss and gliosis. This has been demonstrated in both patients and experimental studies [18][19][20][21]. Our data obtained in this work are fully consistent with this. Previous studies have demonstrated that the expression of many ion channels, including transient receptor potential vanilloid 1 (TRPV1) [22], hyperpolarization-activated cyclic nucleotide-gated channels (HCNs) [23][24][25], and small-conductance Ca2+-activated K+ (SK) channels [26], can be altered during epileptogenesis. Furthermore, specific changes in sodium channel expression [27][28][29] and A-type voltage-gated potassium (Kv) channels [30] were also identified in CA1 neurons. We hypothesized that changes in ion channel expression would be evident in the assessment of the biophysical membrane properties of pyramidal neurons, as these properties depend on different types of ion channels. For example, the AP threshold, amplitude, and 10-90% rise time depend mostly on voltage-gated sodium channels; the AP amplitude, half-width, and fast afterhyperpolarization (AHP) amplitude are largely defined by the kinetics of potassium channels; the amplitude and timing of the medium AHP are mediated by big-conductance (BK) K+ channels; and the ADP amplitude is also defined by Kv channel activity [15,31,32]. Nevertheless, no discernible distinctions were observed between the control and epileptic animals in terms of the biophysical membrane characteristics of pyramidal neurons.
One of the most interesting results of the present study is the increase in spine density on the distal portions of apical dendrites of CA1 pyramidal neurons.Dendritic spines that represent the morphological sites of the majority of excitatory synaptic inputs could be critically involved in the pathophysiology of epilepsy [33,34].Changes in distribution and number of dendritic spines in cortex and hippocampus have been identified both after acute convulsions induced by kainic acid [35], monosodium glutamate [36], and pilocarpine [37], and in models of chronic epilepsy [37][38][39].Loss of dendritic spines has been observed in most of the above-mentioned studies.Similar alterations in dendritic morphology and spine loss mainly in hippocampal neurons have been reported in human brain tissues from patients with epilepsy [40,41].
A greater number of synaptic contacts was observed in a smaller number of studies.For instance, researchers observed synaptic reorganization in the hippocampus, with excitatory connections between CA1 pyramidal cells increasing following kainate-induced status epilepticus [42].In patients with TLE, the proximal dendrites of dentate granule cells exhibit a greater spine density where the aberrant collaterals were densely localized [43].The increase in spine density on distal parts of dendrites of CA1 neurons indicates an increase in excitatory inputs from the entorhinal cortex.This was demonstrated directly by measuring the summation of responses evoked by stimulation of the temporoammonic pathway.The summation of responses was significantly higher in epileptic rats compared to controls.Previous studies have also shown that the temporoammonic pathway, a direct cortical input to hippocampal area CA1, is enhanced in both the lithium-pilocarpine model [17,44] and the kainate model [45] of epilepsy.EC layer 3 interneurons are known to inhibit the temporoammonic pathway [46], but their loss is thought to play an important role in the activation of the temporoammonic pathway and CA1 field [47].This transformation causes the pathway to become 10 times more effective as an excitatory projection.Thus, our results are consistent with the concept that epilepsy alters neural networks.
At the next stage of our study, we analyzed the features of ictogenesis in control and epileptic rats.The mechanism of transition to the epileptic state is often studied in ex vivo models.Many laboratories use combined hippocampal-entorhinal cortex slices from rodent brains for this purpose [48][49][50][51].Epileptiform activity is induced by perfusing brain slices with artificial cerebrospinal fluid (ACSF) containing one of several chemoconvulsants, such as the K + channel blocker 4-aminopyridine (4-AP) [52,53], the GABAa receptor antagonists bicuculline [54] or picrotoxin [55], the glutamate receptor agonist kainate [56], or by using a nominally zero Mg 2+ solution [57].Multiple studies have demonstrated that neuronal networks in brain slices interact to produce patterns of epileptiform activity that resemble the limbic seizures commonly observed in patients with TLE [50,[58][59][60].Here, a 4-AP ex vivo model of epileptiform activity was used.
At least two types of epileptiform discharges are typically distinguished based on differences in duration, spread, amplitude, initiation site, and response to antiepileptic drugs: interictal discharges and ictal or seizure-like events [51].Interictal discharges are significantly shorter in duration than ictal discharges.Interictal discharges have a heterogeneous origin.Some are caused primarily by the synchronous activity of inhibitory interneurons, while others are caused by the simultaneous activity of excitatory principal neurons [61].Interictal discharges can have either a proictogenic or anti-ictogenic effect [62,63].In the case of a proictogenic effect, they are referred to as preictal discharges [60].In contrast, interictal discharges originating in the CA3 region of the hippocampus and propagating through the CA1 region to the entorhinal cortex can effectively inhibit the generation of ictal activity in this region [64,65].
Various mechanisms of proictogenic action of interictal events have been described.For instance, hypersynchronization of interneurons in the entorhinal cortex can initiate ictal discharge [61, 66,67].Field potential recordings indicate that the ictal discharge in this case is preceded by an isolated 'slow' interictal discharge [63].In contrast, a specific type of interictal discharge, preictal discharge, has been described in the subiculum of tissues from individuals with TLE.Preictal discharges are dependent on glutamatergic activity rather than the more common mixed depolarizing GABA/glutamatergic processes that underlie most interictal discharges.Preictal discharges precede ictal events in vitro [60].In both cases, ictal discharges in vitro resembled electrographic limbic seizures.Although the current study did not specifically focus on which interictal discharges might trigger ictal events, it is likely that the ictal discharges occurring in the entorhinal cortex are due to 'slow' interictal discharge.
In control animals, ictal discharges in the entorhinal cortex, if any, occur only at the very beginning after the addition of the epileptogenic solution and then are replaced by interictal discharges.In vitro studies have shown that interictal activity generated in the hippocampal subfield CA3 has an anti-ictogenic effect when the hippocampal loop is intact [59,68].In this case, interictal discharges originating in CA3 spread through CA1 and the subiculum to the entorhinal cortex and then re-enter the hippocampus through the dentate gyrus [59].Transection of Schaffer collaterals, which connect the hippocampal CA3 and CA1 areas, can restore ictal activity in the entorhinal cortex [49,59].Moreover, lowfrequency electrical or optogenetic stimulation of the CA3 or CA1 regions can successfully suppress ictal activity in the entorhinal cortex [65,69,70].The anti-ictogenic effect of the CA3 area is a characteristic found in adult animals.In young animals, the entorhinal cortex retains ictal activity even when Schaffer collaterals are preserved [71].
Ictal discharges occur more frequently and persist longer in the entorhinal cortex of epileptic animals, which is consistent with the findings of a previous study on pilocarpine-induced epilepsy in mice [68]. In the pilocarpine model, pyramidal neurons and interneurons, particularly in the CA3 and CA1 regions, are lost [21,72], which was confirmed in the present study. The death of neurons in these areas reduces hippocampal output activity and releases its control over entorhinal cortex network excitability [68]. Additionally, dysregulation of excitatory inputs in the hippocampus has been observed. In support of this transformation, beyond the changes in summation during activation of temporoammonic inputs discussed above, we observed low-amplitude LFP changes in the hippocampus of epileptic animals that correlated with ictal discharges in the entorhinal cortex. Such a phenomenon was not observed in slices of control animals.
In conclusion, the characteristics of epileptiform activity in brain slices obtained from control and lithium-pilocarpine-treated epileptic rats differed.The increased ictal activity in the entorhinal cortex of epileptic animals was most likely due to morphological and functional abnormalities in the CA1 hippocampus and specific amplification of temporoammonic inputs.Furthermore, our findings lend support to the use of low-frequency stimulation of the hippocampus or subiculum, situated downstream of the hippocampus proper, for the control of seizures in drug-refractory epilepsy from a translational perspective [49].
Animals
Male Wistar rats were used in the present study.Rats were maintained under standard conditions at room temperature with ad libitum access to food and water.The animal experiments were conducted in accordance with the ARRIVE guidelines and were performed in accordance with the EU Directive 2010/63/EU on animal experiments.The experimental protocol was approved by the Ethics Committee of the Sechenov Institute of Evolutionary Physiology and Biochemistry (Protocol No. 1-7/2022, 27 January 2022).Every effort was made to minimize the number of animals used and their suffering.
Lithium-Pilocarpine Model of Temporal Lobe Epilepsy
A detailed description of the model has been provided previously [73].The procedure involved injecting 21-day-old rats intraperitoneally (i.p.) with 127 mg/kg LiCl.After 24 h, the rats were treated with pilocarpine (30 mg/kg, i.p.).To prevent peripheral effects of pilocarpine, (-)-scopolamine methylbromide (1 mg/kg, i.p.) was administered one hour before pilocarpine.Only rats with stage 4 or higher seizures on the Racine scale [74] lasting at least 90 min were included in the study.The control group received LiCl, scopolamine methylbromide, and saline.To confirm the spontaneous seizures, 11 rats from the experimental group, aged 1.5 months, were selected.A 48 h monitoring period was conducted to observe the animals' behavior and count the number of spontaneous recurrent seizures.Ten out of eleven rats exhibited an average of 2 to 3 clonic seizures during the monitoring period.The observed seizures were rated on the Racine scale, with an intensity of 3 to 4 points.
Slice Preparation and Electrophysiological Recordings
Electrophysiological experiments were performed 30-35 days after the injection of pilocarpine. Rats were anesthetized with isoflurane (Laboratorios Karizoo S.A., Barcelona, Spain), decapitated, and their brains quickly removed. Brain slices containing the hippocampus and entorhinal cortex were sectioned using a Microm International HM 650 V vibratome (Microm, Walldorf, Germany) in ice-cold carbogen-aerated ACSF containing 126 mM NaCl, 2.5 mM KCl, 1.25 mM NaH2PO4, 1 mM MgSO4, 2 mM CaCl2, 24 mM NaHCO3, and 10 mM glucose. The slices were then transferred to oxygenated ACSF and incubated at 35 °C for 1 h before electrophysiological recording. The brain slices were then transferred to a recording chamber and perfused with ACSF at a constant flow rate of 5 mL/min for 15-20 min at 30 °C. Local field potentials (LFPs) were recorded simultaneously from the CA1 stratum radiatum and deep layers of the entorhinal cortex using glass microelectrodes (0.2-1.0 MΩ). LFPs were recorded with a Model 1800 amplifier (A-M Systems, Carlsborg, WA, USA) and digitized with an ADC/DAC NI USB-6211 (National Instruments, Austin, TX, USA). Recording was performed using WinWCP v5.7.8 software (University of Strathclyde, Glasgow, UK).
Synaptic responses at a CA1 pyramidal neuron were elicited by stimulating two different inputs using bipolar stimulating electrodes placed at the Schaffer collaterals and the temporoammonic pathway (Figure 4a). An A365 stimulus isolator (World Precision Instruments, Sarasota, FL, USA) was used for extracellular stimulation. The NMDAR channel blocker AP-5 (50 mM, Sigma-Aldrich, St. Louis, MO, USA) and the GABAa blocker bicuculline (20 mM, Sigma-Aldrich) were applied in the bath.
The independence of the inputs was tested by stimulating the first and second pathways with 50 and 200 ms delay, respectively; if the amplitude of the responses at 50 ms did not exceed the amplitude at 200 ms by more than 15%, the stimulating inputs were considered independent.Next, two stimulation protocols were used: paired-pulse (50 ms interstimulus interval) and short-train (5 pulses at 50 Hz) protocols.Stimulation of Schaffer's collaterals and the temporoammonic pathway was applied alternately at 10 s intervals.
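The quantities derived from these protocols reduce to simple ratios; the Python sketch below illustrates the 15% independence criterion, the paired-pulse ratio, and the normalization of train responses used for summation plots. The function names and the example amplitudes are hypothetical, not the study's own analysis code.

```python
import numpy as np

def inputs_independent(amp_50ms, amp_200ms, tolerance=0.15):
    """Inputs are treated as independent if the response of the second pathway
    at a 50 ms delay does not exceed the 200 ms response by more than 15%."""
    return amp_50ms <= amp_200ms * (1 + tolerance)

def paired_pulse_ratio(eepsc_amplitudes):
    """Ratio of the second to the first eEPSC amplitude (50 ms interval)."""
    return eepsc_amplitudes[1] / eepsc_amplitudes[0]

def normalize_train(eepsc_amplitudes):
    """Normalize a 5-pulse, 50 Hz train to the first response for summation plots."""
    amps = np.asarray(eepsc_amplitudes, dtype=float)
    return amps / amps[0]

# Hypothetical eEPSC amplitudes (pA)
print(inputs_independent(115.0, 110.0))                 # True
print(paired_pulse_ratio([120.0, 150.0]))               # 1.25
print(normalize_train([120.0, 150.0, 170.0, 180.0, 185.0]))
```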
The resting membrane potential was measured as the voltage at zero input current. Input resistance was estimated as the slope of the linear regression of the current-voltage relationship for the subthreshold steps. Membrane τ was calculated as a parameter of a single exponential function fitted to the onset of the voltage response to a hyperpolarizing current step.
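A minimal Python sketch of these two estimates is shown below, assuming arrays of injected currents, steady-state voltages, and a voltage transient aligned to the step onset; the exact fitting routines used in the study are not specified, so SciPy's linear regression and curve fitting are used here purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def input_resistance_mohm(currents_pa, steady_voltages_mv):
    """Slope of the linear I-V relationship for subthreshold steps, in MOhm."""
    slope_mv_per_pa = linregress(currents_pa, steady_voltages_mv).slope
    return slope_mv_per_pa * 1e3          # 1 mV/pA = 1 GOhm = 1000 MOhm

def membrane_tau_ms(time_ms, voltage_mv):
    """Fit a single exponential to the onset of the response to a
    hyperpolarizing step (time assumed to start at the step onset)."""
    def mono_exp(t, v0, dv, tau):
        return v0 + dv * (1 - np.exp(-t / tau))
    p0 = (voltage_mv[0], voltage_mv[-1] - voltage_mv[0], 10.0)
    (_, _, tau), _ = curve_fit(mono_exp, time_ms, voltage_mv, p0=p0)
    return tau
```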
APs generated at the rheobase current were selected for analysis.The rheobase current was determined as the minimum current sufficient to induce AP generation.The voltage threshold was determined as the point at which the first derivative of the voltage (dV/dt) exceeded 5 mV/ms.The time of the first spike was measured from the start of the current step to the time of the first AP threshold.The amplitude was determined as the peak AP voltage relative to its threshold.The rise time was measured from 10% to 90% of the peak amplitude relative to the AP threshold.Half-width was determined as the AP width at the voltage level of its half amplitude relative to a threshold.The fAHP peak was determined as the point at which the voltage decay slowed to less than 5 mV/ms.The mAHP peak was measured as the lowest value of the voltage after the AP peak relative to its threshold.The time to mAHP was measured between the fAHP and mAHP peaks.ADP amplitude was measured as the peak voltage on the portion of the AP between the fAHP and mAHP peaks relative to the fAHP peak voltage.
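The sketch below illustrates how such AP features could be extracted from a single spike waveform, following the definitions above (threshold at dV/dt > 5 mV/ms, amplitude relative to threshold, 10-90% rise time, and half-width); it is a simplified illustration that omits the AHP/ADP measurements and edge-case handling, and is not the analysis code used in the study.

```python
import numpy as np

def ap_features(t_ms, v_mv, dvdt_thresh=5.0):
    """Extract basic features of a single AP waveform."""
    t_ms = np.asarray(t_ms, dtype=float)
    v_mv = np.asarray(v_mv, dtype=float)
    dvdt = np.gradient(v_mv, t_ms)

    thr_idx = int(np.argmax(dvdt > dvdt_thresh))      # first point with dV/dt > 5 mV/ms
    v_thr = v_mv[thr_idx]
    peak_idx = thr_idx + int(np.argmax(v_mv[thr_idx:]))
    amplitude = v_mv[peak_idx] - v_thr                # peak relative to threshold

    def first_at_or_above(level, start, stop):
        # index of the first sample >= level within [start, stop)
        return start + int(np.argmax(v_mv[start:stop] >= level))

    i10 = first_at_or_above(v_thr + 0.1 * amplitude, thr_idx, peak_idx + 1)
    i90 = first_at_or_above(v_thr + 0.9 * amplitude, thr_idx, peak_idx + 1)
    rise_time_ms = t_ms[i90] - t_ms[i10]              # 10-90% rise time

    half_level = v_thr + 0.5 * amplitude
    up_idx = first_at_or_above(half_level, thr_idx, peak_idx + 1)
    down_idx = peak_idx + int(np.argmax(v_mv[peak_idx:] < half_level))
    half_width_ms = t_ms[down_idx] - t_ms[up_idx]     # width at half amplitude

    return {"threshold_mV": v_thr, "amplitude_mV": amplitude,
            "rise_time_ms": rise_time_ms, "half_width_ms": half_width_ms}
```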
For each neuron, the maximum AP frequency and the maximum slope of the current-frequency curves were measured. The current that elicited the maximum AP frequency was determined as the minimum current sufficient to elicit AP generation at the maximum frequency. Early frequency adaptation was determined as the ratio of the second interspike interval to the first, and late adaptation was determined as the ratio of the last interspike interval to the first at the next current step after the rheobase (rheobase + 10 pA).
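A short sketch of these two computations is given below, assuming spike times for a given current step and a measured current-frequency curve are available; it is an illustration rather than the study's own analysis code.

```python
import numpy as np

def adaptation_ratios(spike_times_ms):
    """Early adaptation: 2nd ISI / 1st ISI; late adaptation: last ISI / 1st ISI."""
    isi = np.diff(np.asarray(spike_times_ms, dtype=float))
    if isi.size < 2:
        return None, None
    return isi[1] / isi[0], isi[-1] / isi[0]

def max_fi_slope(currents_pa, firing_rates_hz):
    """Maximum slope (Hz/pA) of the current-frequency curve."""
    slopes = np.diff(firing_rates_hz) / np.diff(currents_pa)
    return slopes.max()

# Hypothetical spike times (ms) and f-I curve values
print(adaptation_ratios([12.0, 52.0, 98.0, 150.0, 210.0]))
print(max_fi_slope([100, 200, 300, 400], [2.0, 10.0, 18.0, 23.0]))
```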
Induction of Epileptiform Activity Ex Vivo
Epileptiform activity was induced in a brain slice by administering a proepileptic solution [61,76].The solution consisted of 120 mM NaCl, 3.5 mM KCl, 1.25 mM NaH 2 PO 4 , 0.25 mM MgSO 4 , 2 mM CaCl 2 , 24 mM NaHCO 3 , 10 mM dextrose, and 0.1 mM 4-AP.LFP was recorded at least one hour after the induction of epileptiform activity.
Analysis of Epileptiform Activity
To automatically detect interictal and ictal discharges in LFP recordings, we first filtered the signal up to a sampling rate of 200 Hz using an 8th-order lowpass Butterworth filter.We chose the Butterworth filter because it preserves the shape of the AFC and avoids signal distortion in the passband.Next, we binarized the signal, with 1 representing the beginning of unitary epileptiform discharge and 0 representing the baseline.To determine the onset of uEEs, the threshold value was used.UEEs have amplitudes that are significantly greater than the average amplitude of the signal noise.This value was determined by analyzing the frequency histogram of signal amplitudes for each recording.According to the 4-AP model, the frequency of epileptiform discharge is approximately 1-2 Hz, so the threshold was set at the amplitude of the 99th percentile.The onset of uEEs was defined as the first time the signal amplitude exceeded the threshold.To avoid multiple determinations of uEEs within a single discharge, subsequent exceedances of the threshold within 50 ms within a single event were ignored.The proposed method accurately determines uEEs with a low false positive rate (less than 0.5%), as evaluated by experts.Identification of ictal discharges and the moments at which they occurred were determined by plotting the function of the cumulative number of interictal pulses.The spectrograms were obtained using a continuous wavelet transform.The Morlet wavelet was used because it is most appropriate for medical problems.The scales were chosen to resolve the entire frequency range of interest.
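A minimal Python sketch of this detection pipeline is shown below, assuming a raw LFP array and its sampling rate as inputs; the use of the absolute signal amplitude for thresholding and the naive downsampling step are simplifying assumptions, and the original analysis code is not reproduced here.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def detect_uees(lfp, fs, target_fs=200.0, percentile=99.0, refractory_s=0.05):
    """Detect unitary epileptiform event (uEE) onsets as outlined above.

    Steps: 8th-order low-pass Butterworth filter, downsample to ~200 Hz, set the
    threshold at the 99th percentile of the (absolute) signal amplitude, and
    ignore repeated crossings within 50 ms of a detected onset.
    """
    sos = butter(8, target_fs / 2, btype="low", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, lfp)

    step = max(1, int(round(fs / target_fs)))        # naive decimation after filtering
    x = filtered[::step]
    fs_ds = fs / step

    threshold = np.percentile(np.abs(x), percentile)
    onsets, last = [], -np.inf
    for i in np.flatnonzero(np.abs(x) > threshold):
        t = i / fs_ds
        if t - last >= refractory_s:
            onsets.append(t)
            last = t
    return np.asarray(onsets)

# Example on a synthetic trace (placeholder for a real recording)
fs = 10000.0
lfp = np.random.randn(int(60 * fs))
print(f"{detect_uees(lfp, fs).size} uEE onsets detected")
```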
Figure 1 .
Figure 1.Nissl staining of neurons in the hippocampus in control (n = 7) and epileptic (n = 8) rats.The diagrams show the average number of Nissl-stained neurons per 100 µm of the cell layer.The dots show the individual values for each rat.Asterisks denote significant differences between groups based on Student's t-test: * p < 0.05; *** p < 0.001.
Figure 2 .
Figure 2. Firing patterns of CA1 pyramidal neurons in control (Ctrl) and epileptic (Epil) rats.(a) Representative examples of the membrane responses to the steps of hyperpolarizing and subthreshold depolarizing current in CA1 neurons from control and epileptic animals showing that the membrane input resistance and τ are unaltered.(b) Representative examples of the membrane responses of CA1 neurons to the depolarizing steps of the rheobase current (bottom), 2 x rheobase current (middle), and current sufficient to elicit the depolarizing block (top).(c) Representative examples of the fast and medium AHP phases of the APs in CA1 neurons.(d) The current-frequency curves for the same neurons shown in (b).(e) Averaged current-frequency curves of CA1 neurons.
Figure 3 .
Figure 3. Firing properties in CA1 pyramidal cells from control (Ctrl) and epileptic (Epil) rats.The dots show the individual values for each neuron.Asterisks denote significant differences between groups based on Student's t-test: ** p < 0.01.
Figure 4 .
Figure 4.The inputs from the entorhinal cortex and the CA3 region of the hippocampus to CA1 pyramidal neurons are altered in epileptic rats.(a) Schematic representation of the location of the electrodes used for the stimulation of Schaffer's collaterals and the temporoammonic pathways.(b,d) Representative examples of recordings of two-pulse (b) and train (d) evoked excitatory postsynaptic currents (eEPSCs) of Shaffer collaterals (red) and temporoammonic pathway (blue) in control (ctrl) and epileptic (epil) rats.(c) Bar graphs illustrating the paired-pulse ratios in the various groups.The dots show the individual values for each neuron.* p < 0.05, Student's t-test.(e) Normalized amplitude of eEPSCs obtained during short-train stimulation.A repeated measures ANOVA was conducted, followed by the Šidák post hoc test; * p < 0.05, ** p < 0.01, and *** p < 0.001.
Figure 5 .
Figure 5.In epileptic rats, spine density is observed to increase on apical dendrites of pyramidal neurons in stratum lacunosum moleculare.The images above illustrate examples of biocytin-filled and confocal reconstructed pyramidal neurons in control and epileptic rats at different magnifications.The bottom bar diagrams illustrate the density of spines on dendrites of CA1 pyramidal neurons, with the data presented in different layers.The dots show the individual values for each neuron.** p < 0.01, Student's t-test.
Figure 6 .
Figure 6.Epileptiform activity induced by 4-aminopyridine in hippocampus-entorhinal cortex slices.(a) The drawing shows the position of the electrodes in the hippocampus and entorhinal cortex.Simultaneous LFP recordings in brain slices from control (b) and epileptic (c) rats showing the development of epileptiform activity after the application of a proepileptic solution.Expanded views of a representative period of epileptiform activity are displayed on a light brown background, with corresponding spectrograms shown on the right-hand side.Low-amplitude LFP changes correlating with ictal discharge are observed in the hippocampus of epileptic animals during ictal discharge in the entorhinal cortex (inset, light green background).
Figure 7 .
Figure 7. Cumulative plots of unitary epileptiform events (uEEs) in the hippocampus (a) and entorhinal cortex (c) of control and epileptic rats.The bar graphs on the right-hand side (b,d) display the average number of uEEs per hour of recording, along with their standard error of measurement.Each point on the graph represents one brain slice.Asterisks indicate significant differences between groups according to Student's test: * p < 0.05; *** p < 0.001.
Figure 8 .
Figure 8.The frequency of uEEs may be reduced due to neurodegeneration in the hippocampus.The interevent intervals (IEIs) distributions in the hippocampi of control and epileptic rats are shown in (a).The bar graphs in (b) display the averages of the greatest mode of distributions of IEIs in control and epileptic rats.Each point on the graph represents one brain slice.Asterisk indicates significant differences between groups according to Student's test: * p < 0.05.
Figure 9 .
Figure 9.The distribution of IEIs in the entorhinal cortex of control and epileptic rats for 1 h recordings (a) and for only ictal (b) and interictal (c) discharges.The bar graphs display the properties of the ictal discharges, including the latency of the first ictal discharge (d), the number of ictal discharges during 1 h recordings (e), the duration of ictal discharge (f), and the number of uEEs within an ictal discharge (g).Each dot on the graph represents one brain slice.Asterisks denote significant differences between groups based on Student's t-test: * p < 0.05; ** p < 0.01.
Figure 9 .
Figure 9.The distribution of IEIs in the entorhinal cortex of control and epileptic rats for 1 h recordings and for only ictal (b) and interictal (c) discharges.The bar graphs display the properties of the ictal discharges, including the latency of the first ictal discharge (d), the number of ictal discharges during 1 h recordings (e), the duration of ictal discharge (f), and the number of uEEs within an ictal discharge (g).Each dot on the graph represents one brain slice.Asterisks denote significant differences between groups based on Student's t-test: * p < 0.05; ** p < 0.01.
Figure A1. Epileptiform activity induced by 4-aminopyridine in cortex slices. (a) Simultaneous LFP recordings in brain slices from epileptic rats showing the development of epileptiform activity after application of a proepileptic solution. Expanded views of an initial (b) and a steady (c) period of epileptiform activity are shown in the frames.
Table 1. Passive membrane properties of CA1 neurons in control and epileptic rats.
Table 2. Properties of action potentials in CA1 neurons in control and epileptic rats.
|
2024-07-15T15:14:17.767Z
|
2024-07-01T00:00:00.000
|
{
"year": 2024,
"sha1": "b93156821b60670dbbf06bf94f33f2e092df0dff",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "fed4e083b5c84217a08c92c7164f53030f84d4cc",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
226054033
|
pes2o/s2orc
|
v3-fos-license
|
Associations of Vitamin D Deficiency, Parathyroid hormone, Calcium, and Phosphorus with Perinatal Adverse Outcomes. A Prospective Cohort Study
Vitamin D deficiency during pregnancy has been linked to perinatal adverse outcomes. Studies conducted to date have recommended assessing interactions with other vitamin D-related metabolites to clarify this subject. We aimed to evaluate the association of vitamin D deficiency during early pregnancy with preterm birth. Secondary outcomes included low birth weight and small for gestational age. Additionally, we explored the role that parathyroid hormone, calcium and phosphorus could play in the associations. We conducted a prospective cohort study comprising 289 pregnant women in a hospital in Granada, Spain. Participants were followed-up from weeks 10–12 of gestation to postpartum. Serum 25-hydroxyvitamin D, parathyroid hormone, calcium, and phosphorus were measured within the first week after recruitment. Pearson’s χ2 test, Mann–Whitney U test, binary and multivariable logistic regression models were used to explore associations between variables and outcomes. 36.3% of the participants were vitamin D deficient (<20 ng/mL). 25-hydroxyvitamin D concentration was inversely correlated with parathyroid hormone (ρ = −0.146, p = 0.013). Preterm birth was associated with vitamin D deficiency in the multivariable model, with this association being stronger amongst women with parathyroid hormone serum levels above the 80th percentile (adjusted odds ratio (aOR) = 6.587, 95% CI (2.049, 21.176), p = 0.002). Calcium and phosphorus were not associated with any studied outcome. Combined measurement of 25-hydroxyvitamin D and parathyroid hormone could be a better estimator of preterm birth than vitamin D in isolation.
Introduction
Vitamin D deficiency is considered to be a pandemic [1] whose global prevalence varies widely depending on the studied population, dietary intake, ultraviolet-B light exposure, ethnicity, and age, amongst other factors [2]. The severe deficiency of this secosteroid is associated with skeletal disorders as well as other pathologies outside bone metabolism [3]. During pregnancy, vitamin D deficiency has been linked to perinatal adverse outcomes. Participants in the present study were followed up from weeks 10–12 of gestation to one month postpartum. This study was approved by the Ethics Committee of the University of Granada, number 72-2015, and conducted in accordance with the principles of the Declaration of Helsinki, reviewed in Fortaleza, Brazil, in 2013. Results of the present study are reported following the STROBE statement guidelines for cohort studies [30].
Participants Data
Women were approached in their first prenatal visit at the obstetrics and gynecology services of the hospital complex. Inclusion criteria included pregnant women older than 16 years old, able to speak Spanish, and capable of signing for informed consent between 10-12 weeks of gestation determined by ultrasonography. Exclusion criteria at enrollment consisted of pregnant women with the intention to give birth in a different hospital. Other exclusion criteria consisted of women undergoing voluntary interruption of pregnancy, miscarriage, stillbirth, and multiple pregnancy. Previous history of pregnancy adverse outcomes was not an exclusion criterion for the present study.
The required sample size for the present study was calculated based on the results obtained in another study conducted by Perez-Ferre et al., who observed a prevalence of preterm birth amongst vitamin D deficient women (<20 ng/mL) of 22.9% and a prevalence of preterm birth amongst vitamin D sufficient women (>20 ng/mL) of 8.25%, with a vitamin D sufficiency/deficiency ratio of 0.69 [31]. To achieve a power of 80% to detect differences in the null hypothesis H0: p1 = p2, using the χ² test with a confidence level of 95%, we estimated a sample size of 203 participants. Given the prospective design of the study, we estimated a 20% loss to follow-up. Hence, the final calculated minimum sample size consisted of 244 participants to be included in the study.
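The authors do not state which software was used for this calculation, so the following is only a minimal sketch of a comparable two-proportion sample-size calculation in Python with statsmodels. The variable names, the handling of the allocation ratio, and the rounding are assumptions, and the result will not necessarily reproduce the 203 participants reported above.

```python
# Hedged sketch: approximate the sample-size calculation described above
# (two proportions, 80% power, 95% confidence, unequal group sizes).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_deficient, p_sufficient = 0.229, 0.0825   # PTB prevalence reported by Perez-Ferre et al.
ratio = 0.69                                # assumed sufficiency/deficiency allocation ratio

effect_size = proportion_effectsize(p_deficient, p_sufficient)  # Cohen's h

n_group1 = NormalIndPower().solve_power(effect_size=effect_size,
                                        alpha=0.05, power=0.80,
                                        ratio=ratio, alternative="two-sided")

n_total = n_group1 * (1 + ratio)            # participants in both groups combined
n_with_attrition = n_total * 1.20           # inflate for the anticipated 20% loss to follow-up
print(round(n_total), round(n_with_attrition))
```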
Sociodemographic characteristics of participants were collected at recruitment by researchers from self-report and medical records. Considered variables consisted of maternal age, pre-gestational body mass index (BMI), smoking habit during pregnancy (defined as >1 or 0 cigarettes per day), parity and gravidity, history of previous pregnancy, and perinatal adverse outcomes (LBW, SGA, PTB, pre-eclampsia, gestational diabetes mellitus, miscarriage, and stillbirth), ethnicity and seasonality of sampling. Women with pre-gestational BMI >30 were classified as obese. Data regarding vitamin D supplementation at recruitment was not collected. However, Spain is a country without vitamin D supplementation policy, and vitamin D supplementation among Spanish pregnant women is uncommon in comparison with other European countries [32].
Clinical and Biochemical Procedures
Fasting maternal blood samples were obtained during the week of enrolment. Sampling was performed by venipuncture in tubes containing anticoagulant (EDTA, Ethylenediaminetetraacetic acid) and were immediately transported to the laboratory for analysis.
25-hydroxyvitamin D and intact-PTH (1-84) were quantified by microparticle chemiluminescence immunoassay (CMIA) using an Alinity I ® analyzer (Abbott, Wiesbaden, Germany). Briefly, CMIA analysis is based on the use of paramagnetic microparticles coated with antibodies. Regarding 25-hydroxyvitamin D, it is first separated from the vitamin D-binding protein (DBP) to be mixed with the anti-vitamin D antibody-coated microparticle. The complex is labeled with acridinium afterwards. The reaction conjugate is incubated to be later washed-out, and the correlation between emitted chemiluminescence light measured in relative light units (RLU) and the 25-hydroxyvitamin D or intact-PTH concentration is calculated. According to the manufacturer, the method detection limit for the 25-hydroxyvitamin D assay is 3.5 ng/mL (8.85 nmol/L) and intra-assay coefficient of variation is 3.6% at 39.8 ng/mL (99.4 nmol/L) whilst the quoted PTH assay detection limit is 0.5 pg/mL (0.05 pmol) and intra-assay coefficient of variation is 2.6% at 63.8 pg/mL (6.76 pmol/L).
Calcium and phosphorus were analyzed using an Alinity C ® analyzer (Abbott, Wiesbaden, Germany). Calcium was analyzed by arsenazo-III colorimetric assay measuring absorbance at 660 nm whilst phosphorus was analyzed by phosphomolybdate assay measuring absorbance at 340 nm.
Vitamin D deficiency was defined as serum 25-hydroxyvitamin D concentrations <20 ng/mL (50 nmol/L), whilst vitamin D insufficiency was defined as serum 25-hydroxyvitamin D concentrations <30 ng/mL (75 nmol/L). These cut-off points were based on other studies [22]. The chosen cut-off points differed from those recommended by the American Institute of Medicine [33]; however, optimal vitamin D cut-off points during pregnancy remain controversial and consensus on this matter has not been reached to date [34]. The seasonality of sampling was considered as a potential confounder given the existing association between sun exposure and vitamin D concentration [35]. Due to a lack of consensus, we considered elevated PTH levels as concentrations above the 80th percentile, in line with another author [26]. Therefore, women with elevated PTH levels were those with PTH concentrations ≥31.9 pg/mL.
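As a rough illustration of how these thresholds translate into analysis variables, the snippet below classifies vitamin D status and flags elevated PTH. The column names and example values are hypothetical; in the study the 80th PTH percentile corresponded to 31.9 pg/mL.

```python
# Hypothetical sketch of the classification rules stated above.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "vitd_25oh_ng_ml": [15.2, 24.8, 31.0, 18.7],   # illustrative values only
    "pth_pg_ml":       [35.0, 20.1, 28.4, 33.2],
})

# <20 deficient, 20-29.9 insufficient, >=30 sufficient (ng/mL)
df["vitd_status"] = pd.cut(df["vitd_25oh_ng_ml"],
                           bins=[-np.inf, 20, 30, np.inf],
                           labels=["deficient", "insufficient", "sufficient"],
                           right=False)

# Elevated PTH defined as at or above the cohort's 80th percentile
pth_p80 = np.percentile(df["pth_pg_ml"], 80)
df["elevated_pth"] = df["pth_pg_ml"] >= pth_p80
print(df)
```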
Women were followed-up in subsequent prenatal visits, and cases of pre-eclampsia and gestational diabetes mellitus were diagnosed. Values of maternal diastolic and systolic blood pressure, proteinuria, and glucose tolerance test results were collected by researchers during routine controls. Blood pressure was measured using a validated automatic tensiometer and the measurement was repeated within 15 min. De novo systolic blood pressure >140 mmHg and diastolic blood pressure >90 mmHg measurements were considered as gestational hypertension and women were further evaluated by the obstetrician. Proteinuria was defined as urine protein-to-creatinine ratio above 0.3 mg/mg and was assessed in routine controls after week 20 of gestation. Proteins in urine were quantified using benzethonium chloride turbidimetric method and creatinine was analyzed using alkaline picrate colorimetric assay. An oral glucose tolerance test was performed between weeks 24-28 of gestation. Blood glucose was analyzed using a hexokinase/glyceraldehyde 3-phosphate dehydrogenase activity assay kit.
Cases of pre-eclampsia were defined according to the International Society for the Study of Hypertension in Pregnancy (ISSHP) 2018 classification [36], and GDM cases were defined in line with the American Diabetes Association criteria [37]. Cases of miscarriage and stillbirth, type of delivery, and values of gestational age at delivery and birth weight were documented from medical records. Low birth weight was defined as live birth with less than 2500 g at delivery in accordance with the International Classification of Diseases, 10th Edition [12]. Preterm birth was defined as live birth with less than 37 weeks of gestation [38]. Small for gestational age cases were considered as live births with weight below 10th percentile for the gestational age [13] and were calculated using Spanish reference percentile charts from 2010-2014, based on gender, parity, and type of delivery [39].
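Purely as a hypothetical illustration of the outcome definitions above (the 10th-percentile weight lookup stands in for the Spanish reference charts, which are not reproduced here):

```python
# Hypothetical outcome classification following the definitions above.
def classify_outcomes(gest_age_weeks, birth_weight_g, p10_weight_for_ga):
    """p10_weight_for_ga: 10th-percentile weight from the reference chart
    for this gestational age, sex, parity and delivery type (assumed input)."""
    preterm_birth = gest_age_weeks < 37
    low_birth_weight = birth_weight_g < 2500
    small_for_gestational_age = birth_weight_g < p10_weight_for_ga
    return preterm_birth, low_birth_weight, small_for_gestational_age

print(classify_outcomes(36.5, 2350, 2480))   # example: (True, True, True)
```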
Statistical Analysis
All statistical analyses were performed using the software SPSS version 25 (IBM Corp®, Armonk, NY, USA). Normality of continuous variables was examined using the Kolmogorov-Smirnov test. Categorical variables were reported as percentages, and continuous variables were reported as mean ± standard deviation or median and interquartile range based on normality test results. Differences between participants depending on vitamin D cut-off points were analyzed using Pearson's chi-square (χ²) test for categorical variables and the Mann-Whitney U test for continuous variables. The Spearman correlation test was used to evaluate the strength of association between vitamin D and concentrations of PTH, calcium, and phosphorus. A scatter plot was provided to graphically represent statistically significant correlations. For each outcome, bivariate analysis was performed to evaluate possible confounders based on the literature. Variables with p-values < 0.20 in bivariate analysis were chosen for adjustment in multivariable analysis. This cut-off is supported by the literature [40,41]. Other related variables strongly supported by the scientific literature were also considered for adjustment when applicable. Odds ratios (ORs) with 95% confidence intervals (95% CI) were calculated for each chosen outcome and biomarker using bivariate and multivariable logistic regression models. In logistic regression models, parathyroid hormone, calcium and phosphorus were analyzed as continuous variables whilst vitamin D deficiency was a categorical variable (<20 ng/mL/≥20 ng/mL). Finally, we provided binary logistic regression unadjusted and adjusted models to examine the associations between concentrations of vitamin D <20 ng/mL and <30 ng/mL along with the PTH 80th percentile and the odds of PTB, LBW, SGA in the cohort of study. A sensitivity analysis was conducted to evaluate consistency of the results using the PTH 75th percentile.
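The analyses were run in SPSS; purely as an illustration of the workflow described above, the sketch below reproduces the main steps (Spearman correlation, Mann-Whitney U test and an unadjusted logistic regression with odds ratios and 95% CIs) on synthetic data with SciPy and statsmodels. All values are simulated and the variable names are assumptions, so nothing here reproduces the actual dataset or results.

```python
# Illustrative reimplementation of the analysis steps on synthetic data.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 289                                          # cohort size reported in the study
vitd = rng.normal(22.4, 6.3, n)                  # synthetic 25(OH)D, ng/mL
pth = rng.gamma(4.0, 6.0, n)                     # synthetic PTH, pg/mL
preterm = rng.binomial(1, 0.06, n)               # synthetic outcome, ~6% prevalence

# Spearman correlation between 25(OH)D and PTH
rho, p_rho = stats.spearmanr(vitd, pth)

# Mann-Whitney U test comparing PTH across the vitamin D cut-off groups
deficient = (vitd < 20).astype(int)
u, p_u = stats.mannwhitneyu(pth[deficient == 1], pth[deficient == 0])

# Unadjusted logistic regression: odds of preterm birth by vitamin D deficiency
X = sm.add_constant(deficient)
fit = sm.Logit(preterm, X).fit(disp=False)
odds_ratios = np.exp(fit.params)                 # intercept and exposure OR
conf_int = np.exp(fit.conf_int())                # 95% CIs on the OR scale
print(rho, p_rho, odds_ratios, conf_int)
```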
Cohort of Study
We approached 500 women for study participation, 380 of whom signed informed consent and were enrolled in the study. After follow-up, a completed dataset from 303 women and their children was available (20.26% lost to follow-up). A final analytical sample of 289 women fulfilled inclusion criteria and was available for the present study Figure 1.
Characteristics of Participants
The sociodemographic characteristics of participants based on vitamin D cut-off points (<20 ng/mL or ≥20 ng/mL) are presented in Table 1, and concentrations of calcium, phosphorus and parathyroid hormone are reported in Table 2. Results of the Kolmogorov-Smirnov test showed that maternal age, BMI, calcium, phosphorus, and PTH concentrations were non-normally distributed across vitamin D cut-off points. All expected numbers were higher than five in Pearson's χ² test for categorical variables. Vitamin D levels were normally distributed amongst participants. Serum 25-hydroxyvitamin D mean concentration was 22.36 ± 6.3 ng/mL. Thirty-four women had sufficient levels of vitamin D (≥30 ng/mL) (11.76%), 150 were vitamin D insufficient (20-29.9 ng/mL) (51.9%), and the 105 remaining women suffered vitamin D deficiency (<20 ng/mL) (36.33%). Median maternal age was 33 (29-36) years old, whilst the median pre-pregnancy BMI was 25.1 (21.9-29.3). Fifty-two participants were obese (18%). With respect to the history of previous pregnancy adverse outcomes, 67 women had history of miscarriage or stillbirth; one had history of pre-eclampsia; five had history of gestational diabetes mellitus; and 12 women had history of preterm birth. Regarding ethnicity, three African women were lost to follow-up, and most of the ethnic women approached did not fulfill the inclusion criteria (speak Spanish). Therefore, all women who completed the study were Caucasian. Only obesity (pre-pregnancy BMI ≥30), preterm birth and maternal blood parathyroid hormone concentration varied significantly across the chosen vitamin D cut-off points (p < 0.05). The Spearman correlation test showed an inverse association between vitamin D and parathyroid hormone concentrations (ρ = −0.146, p = 0.013). This correlation was also evident in the scatter plot in Figure 2. On the other hand, neither calcium nor phosphorus were correlated with vitamin D in the Spearman's test (calcium: ρ = 0.022, p = 0.705, phosphorus: ρ = −0.024, p = 0.689).
Pregnancy and Perinatal Adverse Outcomes
Frequencies of pregnancy and perinatal adverse outcomes observed in the present study compared to estimated global frequencies and estimated frequencies in the USA and Europe are described in Table 3. One pre-eclampsia case was a twin pregnancy, thus being excluded from further analyses. We also excluded type I and pre-gestational type II diabetes cases (four cases) when describing the frequency of gestational diabetes mellitus in the cohort of study. Table 3. Frequency of pregnancy and perinatal adverse outcomes compared to global and regional frequencies. With the exception of LBW, frequencies of adverse outcomes in the cohort of study were lower than average estimated frequencies. Seventeen births were premature (<37 weeks of gestation) (5.9%) and 24 newborns had low birth weight (<2500 g) (8.3%). When comparing gestational age and birth weight data with the Spanish reference percentile charts [39], we obtained a total of 27 SGA cases in the cohort of study (birth weight < 10th percentile for their gestational age) (9.34%).
Associations of Vitamin D Deficiency, PTH, Calcium, and Phosphorus with Perinatal Adverse Outcomes
In Table 4, unadjusted and adjusted logistic regression models are presented to describe associations between vitamin D deficiency (<20 ng/mL/<50 nmol/L), parathyroid hormone, calcium, and phosphorus continuous concentrations, and perinatal outcomes. Covariables with p-values < 0.20 in bivariate analysis were selected for adjustment in multivariable analysis. Maternal first-trimester vitamin D deficiency showed higher odds of preterm birth in bivariate analysis, but the association was not statistically significant (OR = 2.662, 95% CI (0.982, 7.217), p = 0.054). Only after adjusting for history of PTB and pre-eclampsia did the association become statistically significant (OR = 3.529, 95% CI (1.159, 10.741), p = 0.026). PTH concentration and preterm birth were weakly associated only in bivariate analysis (OR = 1.030, 95% CI (1.002, 1.058), p = 0.035).
Regarding birth weight, there was a trend towards higher odds of low birth weight amongst the offspring of vitamin D deficient women. However, this association was not statistically significant either in bivariate analysis or after adjusting for confounders (OR = 2.222, 95% CI (0.958, 5.157), p = 0.06/aOR = 1.586, 95% CI (0.586, 4.336), p = 0.361). In the same fashion, the relationship between vitamin D deficiency and risk of SGA was not significant in either crude or adjusted models (OR = 2.024, 95% CI (0.912-4.488), p = 0.083/aOR = 1.794, 95% CI (0.786-4.093), p = 0.165). We did not observe any correlation of calcium or phosphorus concentrations with perinatal outcomes.
In Table 5, we present the associations of vitamin D deficiency and insufficiency, combined with the PTH 80th percentile, with perinatal outcomes. Data are reported as OR (95% CI). OR: odds ratio; aOR: adjusted odds ratio. 1 Adjusted for pre-eclampsia and history of preterm birth. 2 Adjusted for maternal age, smoking habit, pre-eclampsia, and preterm birth. 3 Adjusted for seasonality, smoking habit, and parity. * p-value < 0.05.
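The exact coding of the combined exposure in Table 5 is not spelled out in the text, so the following is only a hypothetical sketch of one way such a combined vitamin D/PTH category could be built and an adjusted model fitted. All data are simulated, the covariate is a stand-in, and the resulting numbers have no relation to the study's estimates.

```python
# Hypothetical coding of a combined "deficient vitamin D + elevated PTH" exposure
# versus a reference group, followed by an adjusted logistic regression on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 289
vitd = rng.normal(22.4, 6.3, n)
pth = rng.gamma(4.0, 6.0, n)
history_ptb = rng.binomial(1, 0.10, n)           # simulated covariate

pth_p80 = np.quantile(pth, 0.80)
combined = np.where((vitd < 20) & (pth >= pth_p80), "deficient_highPTH", "reference")

# Simulate a higher preterm risk in the combined-exposure group so the model has events
preterm = rng.binomial(1, np.where(combined == "deficient_highPTH", 0.25, 0.05))

data = pd.DataFrame({"preterm": preterm, "combined": combined, "history_ptb": history_ptb})
fit = smf.logit("preterm ~ C(combined, Treatment('reference')) + history_ptb",
                data=data).fit(disp=False)
print(np.exp(fit.params))        # adjusted odds ratios
print(np.exp(fit.conf_int()))    # 95% confidence intervals
```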
Finally, we did not find any association between SGA and vitamin D deficiency or insufficiency combined with the 80th PTH percentile in either crude or adjusted models. Sensitivity analyses using the 75th PTH percentile (≥29.25 pg/mL) were performed to evaluate the consistency of the results (Table S1). Overall, associations between studied outcomes and combinations of vitamin D deficiency/insufficiency with the 75th PTH percentile were similar to those shown in the main analysis.
Discussion
The literature about deficiency of vitamin D and perinatal outcomes is inconsistent, and several authors have suggested that interactions with metabolites linked to the metabolism of vitamin D could play an important role in the associations [24][25][26]. We conducted a prospective cohort study with 289 pregnant women recruited between weeks 10-12 of gestation in a hospital of Granada, Spain, and associations between 25-hydroxyvitamin D, PTH, calcium, phosphorus, and perinatal adverse outcomes, namely preterm birth, low birth weight and small for gestational age, were evaluated. We found a trend towards an association between lower maternal 25-hydroxyvitamin D serum levels in the first trimester of gestation and higher odds of preterm birth. This association was stronger amongst women with elevated levels of PTH (>80th percentile), and it was not attenuated after adjusting for preterm birth confounders. Although a similar association was observed for low birth weight, it was not statistically significant after confounder adjustment. SGA, defined based on weight and weeks of gestation at delivery using Spanish percentile charts [39], did not correlate with either vitamin D or related metabolites.
With the exception of low birth weight, the prevalence of pregnancy and perinatal adverse outcomes was lower than average estimates in Europe. Pre-eclampsia is a strong contributor to preterm birth [49]. The small number of pre-eclampsia cases could partially explain the low number of preterm birth cases observed in the cohort of study.
Limitations of the Study
The present study has some limitations to be acknowledged. The prevalence of the main outcome of the study, preterm birth, was more than 30% lower than average estimates in Europe. This could be the cause of the lack of significance observed in the association between vitamin D deficiency and preterm birth and might compromise extrapolation of our results to other populations. Regarding secondary outcomes (SGA and LBW), it is possible that the lack of statistical significance of the associations could be a consequence of sample size limitations, given that they were not included in sample size calculations. We did not use liquid chromatography-tandem mass spectrometry (LC-MS/MS), which is considered the gold-standard method by most authors to analyze 25-hydroxyvitamin D. Due to equipment limitations, we did not directly measure ionized calcium and we could not determine albumin levels; thus, we were not able to estimate ionized calcium concentration, which is the most active form of calcium. Additionally, we could not measure other important bone turnover biomarkers such as alkaline phosphatase, which would be of interest when assessing associations between 25-hydroxyvitamin D and PTH. Almost 75% of the samples were obtained during autumn, and all participants were Caucasian. Therefore, it was not possible to adjust the results for ethnicity, and seasonality adjustment could be inaccurate. These are important factors that can potentially influence maternal vitamin D blood levels [22].
Deficiency of Vitamin D and Preterm Birth
Spain is a Mediterranean country with high levels of sun exposure. Despite this fact, vitamin D deficiency is highly prevalent among Spanish pregnant women [50]. This situation is known as the "Mediterranean paradox," and it has been estimated that 41%-90% of all pregnant women living in Mediterranean countries have vitamin D levels below sufficiency [51]. In line with this data, only 11.76% of study participants had sufficient vitamin D levels (>30 ng/mL) whilst more than one-third of the women had levels below 20 ng/mL, which implies a high prevalence of vitamin D deficiency amongst participants. The observed ratio of vitamin D sufficiency/insufficiency is consistent with the results obtained by Perez-Ferre et al., who conducted a prospective cohort study in 266 pregnant women during weeks 24-28 of gestation in Madrid, Spain, finding a significant association between vitamin D deficiency and preterm birth using the same vitamin D cut-off points, in both unadjusted and adjusted logistic regression models (OR = 3.31, 95% CI (1.52, 7.19), p = 0.002/aOR = 3.80, 95% CI (1.32, 10.97), p = 0.013) [31]. However, in the present study, we only observed a statistically significant association between vitamin D deficiency (<20 ng/mL) and preterm birth after adjusting for confounders with statistical significance in the univariate model (p < 0.20). Differences between both studies could be attributed to our significantly smaller number of PTB cases and different sampling time. Another study conducted in a Spanish cohort of 2382 pregnant women could not find any association between 25-hydroxyvitamin D and perinatal outcomes, including PTB and SGA. However, almost 50% of the participants had sufficient levels of vitamin D, which implies a low rate of vitamin D insufficiency in comparison with average estimates [52].
Using similar study designs, several authors have explored the link between vitamin D deficiency and prematurity in other countries, yielding negative results [10,53], whilst other studies have found a positive association [11,54]. Authors of these studies state the necessity of conducting well-designed randomized clinical trials to further clarify this subject. However, meta-analyses of randomized clinical trials have failed to verify an association between vitamin D supplementation and lower odds of preterm birth [4,55]. In this sense, randomized clinical trials conducted to date not only have to face important ethical issues but also lack relevant criteria related to nutrient studies [56]. One important criterion that is usually overlooked is the optimization of the status of associated nutrients in order to ensure the causality of observed associations [57].
Vitamin D Associated Metabolites and Perinatal Outcomes
Vitamin D regulates calcium and phosphorus homeostasis, and its production is controlled by PTH [58]. Santorelli et al. measured 25-hydroxyvitamin D, PTH, and calcium in a heterogeneous population composed of 1010 pregnant women differentiating between white and Pakistani participants. They observed that higher calcium levels were associated with lower odds of PTB amongst white participants, whilst vitamin D exerted a protective effect on the overall risk of SGA. However, none of the studied metabolites were associated with SGA in white participants [17]. In the present study, we did not observe any significant association between calcium and preterm birth in Caucasian pregnant women. Nonetheless, due to sample limitations, we were not able to examine the impact that ethnicity could have on the analyses.
Other authors have explored the concept of functional vitamin D deficiency in pregnancy as a cause of calcium metabolic stress, which could ultimately lead to perinatal adverse outcomes associated with the deficiency of this secosteroid. This concept has been applied to examine the association between vitamin D deficiency, gestational hypertensive disorders, and fetal growth restriction [26,29]. Scholl et al. observed a higher incidence of SGA cases amongst pregnant women with PTH > 62 pg/mL in combination with 25-hydroxyvitamin D < 20 ng/mL or calcium intakes below 60% of the estimated average requirement (OR = 2.23, 95% CI (1.23, 4.33)) [29]. Along the same lines, Hemmingway et al. found a 2.38-fold increased risk of SGA amongst pregnant women with serum 25-hydroxyvitamin D levels < 12 ng/mL (<30 nmol/L) in combination with PTH > 80th percentile in the cohort of study (RR = 2.38, 95% CI (1.31, 4.33)). However, this association was not statistically significant after confounder adjustment [26]. More recently, Meng et al. prospectively measured PTH, calcium, and 25-hydroxyvitamin D in 3407 participants in China, finding that maternal 25-hydroxyvitamin D levels <12 ng/mL and <20 ng/mL (<50 nmol/L) along with PTH concentrations >75th percentile were associated with increased risk of SGA and lower mean birth weight compared to vitamin D sufficient women. This association was not attenuated in sensitivity analyses (PTH > 80th percentile) [25]. On the other hand, Tao et al. evaluated the effect of the duration of vitamin D supplementation (400-600 IU/d) on fetal growth, finding a direct association between more prolonged vitamin D supplementation and higher weeks of gestation and weight at delivery independently of calcium and phosphorus concentrations [59]. In the present study, we found a correlation between low birth weight and vitamin D < 20 ng/mL in combination with high levels of PTH (>80th percentile). However, this association was not significant after adjustment for confounders, which implies that gestational age at delivery was the main underlying factor for the association. In the same fashion, the risk of SGA was not correlated with vitamin D or PTH in any subgroup analysis. Nonetheless, we observed that women with PTH levels > 80th percentile and 25-hydroxyvitamin D < 20 ng/mL had more than five times higher odds of PTB compared to the reference group, and this relationship persisted after adjusting for confounders. These results were consistent with those obtained in the sensitivity analysis using the 75th PTH percentile instead (Table S1). It is possible that vitamin D deficiency could exert an effect on birth weight by influencing the length of gestation [23]. Finally, neither calcium nor phosphorus concentrations were associated with any studied outcome.
Our results do not support the hypothesis that elevated levels of PTH in combination with vitamin D deficiency are associated with fetal growth restriction. However, reference levels for PTH during pregnancy are not firmly established, and SGA is defined depending on specific reference charts and, thus, results could not be extrapolated to other populations.
Conclusions
In the present study, we observed that vitamin D deficiency defined as 25-hydroxyvitamin D concentrations below 20 ng/mL, in combination with parathyroid hormone maternal levels above the 80th percentile during the first trimester of gestation, was a better estimator of preterm birth than the assessment of vitamin D deficiency in isolation. However, we did not observe the same association with low birth weight after controlling for weeks of gestation or small for gestational age. Interventional studies with vitamin D supplementation would benefit from measuring parathyroid hormone in order to demonstrate a potential causal association between deficiency of vitamin D and perinatal adverse outcomes.
Supplementary Materials: The following are available online at http://www.mdpi.com/2072-6643/12/11/3279/s1, Table S1: Associations between combination of maternal serum 25-hydroxyvitamin D and PTH 75th percentile and perinatal adverse outcomes.
Funding: Research reported in this publication was funded by the Spanish Ministry of Science, Innovation, and Universities (Project FIS-ISCIII, PI17/02305) and co-funded by FEDER, "investing in your future".
|
2020-10-29T09:07:21.021Z
|
2020-10-26T00:00:00.000
|
{
"year": 2020,
"sha1": "709f874a48f47e9d20e30d513094c0cc887fbabb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/12/11/3279/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "991fcbcd92ff5806dfd2b9c38eee974296de5316",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
209747295
|
pes2o/s2orc
|
v3-fos-license
|
The transfer of knowledge on integrated care among five European regions: a qualitative multi-method study
Background To examine how the knowledge transfer processes unfolded within SCIROCCO, a EU funded project (3rd Health Programme (2014–2020)) that aimed to facilitate the process of knowledge sharing across five European regions, to speed up adoption and scaling-up of integrated care initiatives. Methods A qualitative multi-method design was used. Data collection methods included focus groups, project documents and action plans of the regions. The data was analysed using a qualitative content-analysis procedure, which was guided by the frameworks of knowledge exchange and the why, whose, what, how framework for knowledge mobilisers. Results All five components (including the themes) of knowledge exchange could be identified in the approach developed on the knowledge transfer processes. The four questions and accompanying categories of the framework of knowledge mobilisation were also identified to a large degree. Conclusions The observed incorporation of distinct forms of knowledge from multiple sources and the observed dynamic and fluid knowledge transfer processes both suggest that SCIROCCO developed a comprehensive knowledge transfer approach aiming to enable the adoption and scaling-up of integrated care. Overall, the multi-method qualitative nature of this research has allowed some new and practical insights in the knowledge transfer activities on integrated care between several European regions. To obtain a clear understanding of the content of the knowledge transfer approaches, which could assist the operationalising of models to support the evaluation of knowledge transfer activities, it is strongly recommended that further research of this type should be conducted in other research settings.
Introduction
An increasing amount of knowledge is obtained on different models and approaches of integrated care implemented across a variety of settings [1]. Many definitions of integrated care exist of which one defines integrated care as "patient-centred, proactive and well-coordinated multidisciplinary care, using new technologies to support patients' self-management and improve collaboration between caregivers." [2] Nolte and colleagues, who looked into the experiences of 12 countries in Europe in their efforts to enhance the care for people with chronic conditions, found that innovative care models -that challenge established ways of organising services -are often implemented as time-limited pilot or small-scale, localised projects [3]. To obtain more benefit for people from the advantages seen in promising innovative practices, it is important to learn from and share successful experiences. The spread of insights is believed to support the scaling-up of favourable initiatives and innovations. However, it is implausible that a pilot offers much support for the next implementation: as Pawson points out, the interaction between the intervention and its context determines outcome, meaning that the determinants of success change in another context [4]. The question of how to spread innovative initiatives and, at the same time, do justice to recognising the need, context, culture and resource availability of different settings remains for the most part unanswered.
To encourage knowledge transfer (KT) and scaling-up of innovative integrated care solutions, the SCaling IntegRated Care in COntext (SCIROCCO) project has tested and validated the SCIROCCO tool ('the tool'), which is argued to support the adoption of integrated care across Europe [5]. Using a step-based approach, the tool was developed and assessed in real-life settings within the project. The SCIROCCO project aimed to test an approach to transfer knowledge between five European regions using the tool to assist in this process of information sharing and KT across the regions.
In the literature, transfer of knowledge is known by a diversity of terms, e.g. knowledge transfer and exchange, knowledge translation or utilisation and knowledge mobilisation [6,7] and mostly considered to be an approach to exchange knowledge between research and practice. The process of KT is suggested to be a dynamic and iterative process taking place among a complex system of interactions. Hence, a frequently cited definition is the one of the Canadian Institutes of Health Research, which describes knowledge translation as "as a dynamic and iterative process that includes synthesis, dissemination, exchange and ethically sound application of knowledge to improve the health of Canadians, provide more effective health services and products and strengthen the health care system. This process takes place within a complex system of interactions between researchers and knowledge users which may vary in intensity, complexity and level of engagement depending on the nature of the research and the findings as well as the needs of the particular knowledge user [8].".
To ensure a solid and successful process of moving knowledge to practice, several scholars have recommended guiding the KT activity by using a model displaying how the process works and how it can support knowledge producers and users to plan and evaluate the activities [9][10][11][12]. Unfortunately, partly because there is such disparity across the field, there are relatively few tools and mechanisms for evaluating knowledge mobilisation projects [13]. Furthermore, there is a lack of sufficiently empirically tested and operationalised models to adequately guide KT activities. Davies et al. found in their "review of reviews" in the area of knowledge mobilisation "a bewildering variety" of models, theories and frameworks in health care, education and social care [14]. Many of the models they investigated "were primarily descriptive of the processes around knowledge creation/flow/application, and tend not to be explicit about the necessary configurations, actions or resources that will underpin successful knowledge mobilisation." Furthermore, with a few exceptions, the models were found to be subjected to limited empirical testing [14].
The SCIROCCO project tested a unique process of KT and information flow among five European regions using the tool to assist this process. Since the transfer of knowledge is suggested to be a complex process and SCIROCCO's approach took place between several regions, including different health systems, at the start of the project it was not known how the process would unfold in practice. Despite the lack of adequate models to guide the evaluation of the KT approach, it was considered important to gain insight into the way in which the processes of the KT activities in SCIROCCO would play out.
This study, therefore, has the objective to examine the KT process which was designed and tested within the SCIROCCO project to assess whether the approach is meaningful in light of two existing frameworks on KT. Providing an understanding of the processes involved in the SCIROCCO KT process is valuable, as this is expected to provide insight in how SCIROCCO's KT process intended to add value to the participating regions. Furthermore, the expected insights can support policymakers and stakeholders in other regions in terms of what issues to consider when they are interested in using the tool and processes in order to achieve KT with other regions and to eventually scale-up integrated care initiatives.
Conceptual frameworks
The framework of knowledge exchange (KE)
To provide insight into the complex process of KT that occurred during the SCIROCCO project, this evaluation study is guided by the framework for KE developed by Ward and colleagues [10]. The authors suggested that the framework can be used to gather evidence from case studies of KE interventions and recommended that it could also be used as a template for evaluating KE activities. The initial conceptual framework on KE was developed out of a review of 28 different models which focused on explaining the KT process. Five common components of the KT process were identified [15]:
-Identifying and communicating about the problem which the knowledge needs to address;
-Analysing the context which surrounds the producers and users of knowledge;
-Developing and selecting the knowledge to be transferred;
-Selecting specific KT activities or interventions;
-Considering how the knowledge will be used in practice.
Subsequently, the authors empirically tested the framework and refined it. As a result, the five components were found to "all be in play at any one time and do not occur in a set order [10]."
The framework of knowledge mobilisation
Effectively sharing knowledge requires different strategies depending on who is sharing the knowledge, what knowledge is being shared, how it is shared, and the purpose for which it is shared [16]. More recently, Ward developed a framework for knowledge mobilisers based on a review of 47 knowledge mobilisation models. The framework consists of four questions: Why is knowledge being mobilised? Whose knowledge is being mobilised? What type of knowledge is being mobilised? How is knowledge being mobilised? [7]. Ward argues that these questions and accompanying categories can help knowledge mobilisers reflect on, communicate, and evaluate their aims and objectives, increasing clarity and understanding [7]. The framework is designed to help those involved in knowledge mobilisation to reflect on their personal and/or project-related aims and objectives in a structured way. Therefore, this framework was also used in this study to examine the KT processes within the SCIROCCO project.
Methods
A qualitative multi-method study was undertaken as this was regarded to be best suited to obtain a detailed understanding of how the KT processes within the SCIROCCO project unfolded.
Setting
This study took place within the European-funded SCIROCCO project under the 3rd Health Programme (2014-2020). From its start (April 2016) till its end (November 2018), the SCIROCCO project tested a strategy which was designed to explore how available knowledge and experiences on integrated care models can be shared to enable "easier and faster" adaptation and implementation in other settings. Several work packages (WPs) within the project were responsible for implementing the project objective. Partners from five participating regions (the Basque Country, Norrbotten, Olomouc, Puglia, Scotland) and two organisations conducted the work in the WPs. This study was conducted by members of the project's third work package (WP3), who were responsible for evaluation activities within the project. The strategy included the following roughly described steps:
Step 1: In the project, a total of 15 good practices were selected as viable good practices in integrated care in the five participating European regions. These were selected by means of a viability assessment using a form which was developed during the implementation. Within SCIROCCO, good practices were defined as inspiring real-life examples of successfully applied innovations in integrated care. Second, the maturity requirements for transfer of these 15 good practices were assessed using the newly developed SCIROCCO tool (Footnote 1), resulting in an overview (radar diagram) of the requirements of the local context in which the practices have been developed.
Step 2: In the next step, the five regions assessed their maturity in the provision of integrated care by using the SCIROCCO tool. The outcome of the assessment was also a 'radar diagram' which presents areas of strength and weakness in each dimension of the tool.
Step 4: The SCIROCCO project facilitated the comparison of the radar diagrams and the matching of the participating regions for the purpose of the knowledge transfer process. The project then organised the twinning and coaching activities (KT process) between the matched regions. The knowledge transfer could flow between regions with the same strengths (twinning) as well as between regions scoring high on a particular dimension and regions scoring low on the same dimension (coaching). The sessions were intended to be organised as study visits, webinars, and through various online tools. In each participating region, two or three local project partners were involved in organising the activities and associated with the KT process for their regions. The project partners also recruited a maximum of five other local experts to participate in the twinning and coaching sessions for their region.
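Purely as an illustration of this matching logic (this is not project code), the sketch below pairs invented regions from their dimension scores: two regions both scoring high on a dimension suggest twinning, while a high scorer paired with a low scorer on the same dimension suggests coaching. The region names, scores and thresholds are all assumptions.

```python
# Hypothetical matching of regions by dimension scores (twinning vs. coaching).
maturity = {
    "RegionA": {"Citizen Empowerment": 4, "Information & eHealth": 5},
    "RegionB": {"Citizen Empowerment": 4, "Information & eHealth": 1},
    "RegionC": {"Citizen Empowerment": 1, "Information & eHealth": 2},
}

def suggest_pairs(scores, high=4, low=2):
    twinning, coaching = [], []
    regions = list(scores)
    for i, r1 in enumerate(regions):
        for r2 in regions[i + 1:]:
            for dim in scores[r1]:
                s1, s2 = scores[r1][dim], scores[r2][dim]
                if s1 >= high and s2 >= high:
                    twinning.append((r1, r2, dim))
                elif s1 >= high and s2 <= low:
                    coaching.append((r1, r2, dim))     # r1 coaches r2
                elif s2 >= high and s1 <= low:
                    coaching.append((r2, r1, dim))     # r2 coaches r1
    return twinning, coaching

print(suggest_pairs(maturity))
```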
The planned outline for the twinning and coaching sessions included the following steps. In the first step, the members of each participating region were asked to express their interest for the twinning and coaching activity, to be informed by either a) the assessment of the healthcare system (step 2 of the strategy) or b) a selected good practice within the project (step 1 of the strategy). One twinning and one coaching activity were envisaged for each of the five SCIROCCO regions. In the second step, the twinning and coaching process of the receiving and transferring region was intended to be initiated, including introductory webinar(s) between the transferring and receiving region(s). In the third step, a study visit to the transferring region was to be facilitated. The study visits were organised at a location in the transferring region, which was the region/authority acting as the "coaching" partner in the KT process. The receiving region was the region/authority seeking support and know-how in order to deploy a good practice and/or improve a specific aspect of integrated care and acted as the "learning" partner. In the final (fourth) step, a local meeting with the experts in the receiving regions was planned to be organised to reflect on the learnings to be drawn from the twinning exercise. These meetings were also held to agree on the priority actions for the ensuing improvement(s), including policy recommendations and potential impacts. These were then captured in an Action Plan by using a developed template that built on the outcomes of the study visit.
Footnote 1: The SCIROCCO tool consists of 12 dimensions which represent the range of activities that need to be managed in order to deliver integrated care. These 12 dimensions include: Readiness to Change, Structure & Governance, Information & eHealth, Standardisation & Simplification, Finance & Funding, Removal of Inhibitors, Population Approach, Citizen Empowerment, Evaluation Methods, Breadth of Ambition, Innovation Management and Capacity Building. The maturity of a health care system for integrated care or the maturity requirements of good practices in integrated care are assessed by considering each dimension and allocating a measure of progress or 'maturity' (on a 0-5 scale) to each dimension. The scales include the maturity indicators and reflect the basic indications to look for when assessing the current situation of the maturity of the health care system for integrated care or the maturity requirements of good practices. After the assessment, a simple graphical representation (i.e. a spider diagram) of status can be derived which reveals areas of strength, further attention and improvement in each of the 12 dimensions.
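The footnote above describes the tool's output as a spider (radar) diagram over the 12 dimensions, each scored 0-5. The snippet below is only an illustrative way to draw such a diagram with matplotlib; the maturity scores shown are hypothetical and do not correspond to any SCIROCCO region.

```python
# Illustrative radar (spider) diagram of hypothetical maturity scores.
import numpy as np
import matplotlib.pyplot as plt

dimensions = [
    "Readiness to Change", "Structure & Governance", "Information & eHealth",
    "Standardisation & Simplification", "Finance & Funding", "Removal of Inhibitors",
    "Population Approach", "Citizen Empowerment", "Evaluation Methods",
    "Breadth of Ambition", "Innovation Management", "Capacity Building",
]
scores = [3, 4, 5, 2, 3, 2, 4, 3, 2, 3, 4, 3]     # hypothetical maturity scores (0-5)

angles = np.linspace(0, 2 * np.pi, len(dimensions), endpoint=False)
closed_angles = np.concatenate([angles, angles[:1]])   # close the polygon
closed_scores = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(closed_angles, closed_scores)
ax.fill(closed_angles, closed_scores, alpha=0.25)
ax.set_xticks(angles)
ax.set_xticklabels(dimensions, fontsize=7)
ax.set_ylim(0, 5)
plt.show()
```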
A total of five twinning and coaching sessions, referred to as KT cases (in short, cases), were organised, including five study visits, of which one took place in Puglia, one in the Basque Country, two in Scotland, and one in Norrbotten. Three cases included one transferring region and one receiving region, and two cases included one transferring and two receiving regions. In Table 1, an overview and description of the focus of each of the five KT cases is provided.
Data collection
Several qualitative research methods were used to collect data from each of the five cases, involving five focus groups and several documents, including seven action plans, five study visit programmes and one project document.
Focus groups
Between June 2018 and September 2018, a total of five focus groups were conducted after the study visits took place. The purpose of the focus groups was to collect the (shared) experiences of the participants and details on the SCIROCCO KT activities in each of the five cases. By undertaking focus groups, several perspectives of the participants could be collected while encouraging the participants to question each other [17] and to explore each other's views, which can lead to a detailed exploration of ideas [18]. They were held in suitable venues in the specific locations where the study visits were organised. All the experts who participated in the study visits were invited to participate in the focus groups. These experts were recruited by the local partners based on their experience with the good practice or expertise that was the subject of the study visit. Some experts of the transferring regions were involved in the study visit only by providing presentations and were not involved in the complete study visit; those experts did not participate in the focus groups. Each participant received a detailed programme of the study visit, which included details of the planned focus group at the end of the visit. The characteristics of the focus groups are presented in Table 2.
The questionnaire for the semi-structured focus groups was developed in collaboration with members of two other work packages (WP5 and WP8) who were active partners in the SCIROCCO project. WP8 was focused on collecting the lessons learned on the process of KT using the SCIROCCO tool and WP5 was oriented towards the design of the SCIROCCO tool. The framework for KE of Ward et al. was used to guide the development of the topic list used to collect the experiences on the SCIROCCO KT activities for this study [10]. The full questionnaire is available via the corresponding author.
The focus groups were alternately facilitated by one member of WP3 and two members of WP8. The three moderators possessed a minimum of a master's degree and experience in qualitative research. At the start of the focus groups, the moderators provided an introduction to the focus group and themselves, explaining the purpose of the focus group, and requesting the participants to sign the informed consent form (see ethics statement). All participants received an overview of the focus group questions at the beginning of each study visit. Each focus group lasted approximately 1 hour and was audio-recorded and transcribed verbatim.
Documents
To examine the details on how the SCIROCCO KT process unfolded within the project, document data including the action plans, study visit programmes, and a project document on the twinning and coaching methodology were also collected between April 2018 and January 2019. The action plans contained descriptions on the outcomes of the study visits. The plans were codesigned by the transferring and receiving regions. The study visit programmes included the experts involved in the study visit, and the programme for the 1 or 1.5 day study visit. A total of seven action plans, five study visit programmes, and one project document were collected.
Data analysis
The analysis of the data was guided by the two frameworks of Ward (et al.) [7,10]. All data (the focus groups transcripts and collected documents) were analysed using a directed and conventional qualitative content objective is to reduce hospitalisation and re-hospitalisation and to improve the quality of care for patients at home. In addition, the objective is to: Reduce the number of patients with heart disease, diabetes and other chronic diseases in the process of instability Activate protected de-hospitalisation Optimise the therapy and diagnosis according to international guidelines.
The aim is to implement a new type of telemonitoring, based on continuous collaboration and patient monitoring by different professionals and different users. Patients are telemonitored by their General Practitioners by using the innovative home and health monitoring technological solution (H&H Hospital@Home). This solution is able to detect the main clinical and instrumental parameters in addition to the therapeutic administration, based on oxygen and bronco-aspiration. It is allocated at the patients' home and it is permanently interconnected with the General Practitioner and/or Specialist, by computer, telephone, tablet and other devices.
At the same time, there is a central monitoring room at the hospital for all patients and all devices located at their home. All clinical parameters of patients are stored on a dedicated server, respecting the rules for the respect of privacy. The system allows the healthcare professionals (neurologists, pulmonologist, cardiologists, diabetologists, etc) to monitor and speak with patients remotely. The patients can also activate the visit of the healthcare professionals in their homes. In addition to real-time monitoring of physiological parameters, the healthcare professionals can also monitor the physical and technical characteristics of home device. As a result, it is possible to deliver therapy to the patient remotely.
Basque Country
Good practice in advance care planning Advance care planning (ACP) is a voluntary process of discussion between an individual and care providers about future care, irrespective of discipline. The aim is to guarantee patients' right to take decisions about their own care as well as to have those decisions respected when time comes.
The goal of this Good Practice is to promote ACP approach to the Basque population, particularly to chronic conditions population. The idea is to adjust end of life care to meet patients' preferences and improve decision making processes.
Three stages were defined when designing the Good Practice: Diagnostic stage in order to identify the population that could benefit from the ACP Therapeutic stage in order to develop the intervention Evaluative stage in order to assess both the impact of the Good Practice and the Good Practice itself.
The core intervention consists of two individual semi structured interviews with the patient and one or two members of patient family and/or friends. The interviews are carried out by the patient's General Practitioner (GP) and the community nurse. The first meeting aims mainly at introducing the subject (Advanced Directives) and inviting the patient to reflect on his/her preferences regarding the care. The second interview then focuses on the discussions of the specific issues related to the patient and his/her clinical characteristics and situation. Participants write down an advance directive according to their values, health conditions and preferences. The GP and/or community nurse assist with this process. Every healthcare professional can access the Advance Directive using the Basque Country's Integrated Electronical Health Record. As ACP is considered an evolutive process, the patient can change opinion and modify its preferences whenever is needed.
Scotland
Dimension of the SCIROCCO tool: Innovation Management. Health innovation is an exciting and dynamic area with a range of stakeholders from all sectors working collaboratively to position Scotland as a world leader in health innovation, contribute to a thriving economy and support faster adoption of innovation across health and social care. Innovation is defined in Scotland as the invention, development, production and use of products, medicines, therapeutics, approaches and supporting services which create the opportunity to make major improvements to health and healthcare.
Scotland is already recognised as an innovation nation. The recently refreshed 2017 Scottish Life Sciences Strategy sets out strategic priorities for the sector to fulfil Scotland's ambitions to be a world-leading entrepreneurial and innovative nation.
The Scottish Government has outlined its commitment to innovation in the recently published Scotland Can Do framework. The "third sector" in Scotland is made up of non-governmental and non-profit organisations, from grassroots community groups and village hall committees to social enterprises and registered national charities. It is often also described as the voluntary sector, not-for-profit sector, charity sector, social economy, social enterprise sector, NGOs (non-governmental organisations) or civil society.
The traditional idea of charities as benevolent organisations simply there to help the poor is being replaced by a modern, progressive third sector which carries out an enormous range of activities to improve people's lives. It does this by:
- Supporting people through social care, health services and employability programmes;
- Empowering people by campaigning and advocating for minority and disadvantaged groups in our society;
- Bringing people together through social activities, local clubs and community centres;
- Enabling better health and wellbeing through medical research, addiction services, sport facilities and self-help groups;
- Improving our environment through conservation of our land and heritage, and regeneration of our communities.
In Scotland, there is a legal framework in place for the engagement of third sector in the provision of integrated care. The Public Bodies (Joint Working) (Scotland) Act 2014 is the legislative framework for the integration of health and social care services which requires the integration of the governance, planning and resourcing of adult social care services, adult primary care and community health services and some hospital services. Other areas such as children's health and social care services, and criminal justice social work can also be integrated.
Norrbotten
Dimension of the SCIROCCO tool: Information and eHealth. A need to introduce technology-enabled solutions is widely recognised among all stakeholders involved. There is a clear plan and strategy in place to support widespread implementation of eHealth services. Patients are widely supported and encouraged to manage their own care and participate actively in the decision-making process through access to electronic health records and relevant health information. National ICT solutions to increase patients' access to their medical records have been developed and implemented in Norrbotten Region on a large scale. There are also national ICT solutions to support patients' participation in the management of their own care, but these are not yet fully implemented. The sharing of patient-related information between different care providers is facilitated at the regional level. Norrbotten Region has also progressed very well with building ICT solutions on existing platforms and infrastructure, and has thus created new services to empower patients and ensure their ability to participate in decision-making on their care as well as to support self-care. However, the scalability of these solutions remains an issue.

The analysis of the data was guided by the two frameworks of Ward (et al.) [7,10]. All data (the focus group transcripts and collected documents) were analysed using a directed and conventional qualitative content analysis approach [19,20]. The aim of combining the two methods was triangulation for the convergence and confirmation of findings [21]. A coding scheme, including the five KE components and themes derived from the framework and the four knowledge mobilisation elements and categorisations, was used during the coding process. In Table 3, a short overview of the definitions used in the analysis of the components, including the themes and questions of mobilisation, is presented. Using the directed approach, all data were reviewed for content and deductively coded per KT case according to the categories of the coding scheme [22], to achieve fewer content-related categories [23]. To ensure a homogeneous interpretation of the data, a content check was performed: two researchers (LG, HV) coded one focus group independently and compared the results. The results from this coding process were discussed among the researchers and any disagreement was resolved until consensus was reached.
Although the analysis was primarily deductive, when the data did not fit the concepts of the coding list, the codes on the scheme were adjusted accordingly: where codes did not directly fit the data, the conventional approach was used to adapt them. With regard to the "knowledge" component, besides looking into the type of knowledge offered by the transferring regions, the type of knowledge needed by the receiving regions was also included as a code. In addition, for the intervention component, "to be used" was added after the type of intervention, as the actions were indicated as proposed actions.
Since this study took place within a project facilitating KT between known transferring and receiving regions, we chose to add an additional category under "Whose knowledge is being mobilised?" by including "knowledge receivers", referring to the knowledge recipients involved as experts of the receiving regions. The description of one group of knowledge donors/receivers was elaborated by adding policy makers to the category "Decision makers", to better match it to the interpretation used in this study. Furthermore, the categories of "Why knowledge is being mobilised" were slightly adjusted.
The coding process took place using QSR International's NVivo 12 software. After all the data were coded, the final data analysis phase was a cooperative effort between LG and HJMV. The analysis was an iterative process: it was made up of initial analyses by LG, followed by discussions between LG and HJMV, and further analyses and discussions in order to identify concordant and discordant themes. Any disagreement was resolved through discussion.
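As an illustration of how such a coding scheme can be represented, the sketch below lays out the five KE components with abbreviated themes and tags a transcript fragment with researcher-assigned themes. The category labels follow the frameworks referenced above; the data structure, function and matching logic are illustrative assumptions and not the actual procedure carried out in NVivo.

# Minimal sketch (assumed structure): the coding scheme combines the five KE components [10]
# with the four knowledge-mobilisation questions [7]; a fragment is deductively assigned codes.
CODING_SCHEME = {
    "Problem":      ["identifying", "clarifying", "focusing", "reviewing", "evolving"],
    "Context":      ["exploring context", "discovering context", "revealing context"],
    "Knowledge":    ["locating", "classifying", "assessing", "tailoring",
                     "whose knowledge?", "what knowledge?"],
    "Intervention": ["information management", "linkage", "decision/implementation support",
                     "capacity development", "why mobilise?", "how mobilise?"],
    "Use":          ["direct use", "conceptual use", "political use", "spreading", "sustaining"],
}

def code_fragment(fragment, assigned_themes):
    """Attach a transcript fragment to the components whose themes a researcher assigned."""
    coded = {}
    for component, themes in CODING_SCHEME.items():
        hits = [t for t in assigned_themes if t in themes]
        if hits:
            coded[component] = hits
    return coded

# Example: a researcher judges that a quote is about locating knowledge and about linkage.
print(code_fragment("we looked for a region that had solved this", ["locating", "linkage"]))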
The validity of the findings was ensured by the examination of several cases and the use of triangulation as data from focus groups and documents were collected to inform the concepts of the knowledge transfer process [24]. This established a more thorough and multifaceted examination of the KT processes in SCIROCCO than could be gained from any single method or single case.
Results
The structure of the findings follows the five components of KE, including the distinct themes describing the nature of KE. The four questions derived from the knowledge mobilisation framework clarify some of the components. The two questions on whose and what type of knowledge is being mobilised are included under knowledge. The why and how questions on knowledge mobilisation are included in the intervention component.
Problem
At several points in time during the KT process, attention was paid to the challenge that the receiving regions chose to address. Two different approaches were designed for carrying out the KT activity.
The first approach (a) was informed by the maturity assessment of the healthcare system using the SCIROCCO tool. The outcomes of the assessment provided insight into the strengths of, and challenges for, integrated care in the region. Hereafter the region chose one domain for improvement. It sought support from another region which had previously demonstrated significant progress in the corresponding domain (as shown in the outcomes of the maturity assessment). Within the project, two cases (3 and 4) focused on improving a specific domain/aspect of integrated care using the first approach.

Table 3 Descriptions of the five components of KE by Ward et al. [10], including the themes and elements of knowledge mobilisation [7]
Problem definition involved: identifying, clarifying, focusing, reviewing and evolving the problem over time.
Context involved: exploring, discovering and revealing context, which consists of the personal, interpersonal, organisational, and institutional characteristics relevant to transferring knowledge into action.
Knowledge involved: locating the knowledge, classifying the knowledge, assessing the knowledge, tailoring the knowledge, usability of the knowledge/practical limitations, and whose and what knowledge?
Intervention involved: negotiating KT roles and responsibilities, clarifying the type of intervention to be used (information management, linkage, decision/implementation support, capacity development), integrating the intervention, making the intervention iterative, and why and how to mobilise knowledge?
Use involved: deciding how the knowledge will be used (knowledge was used in a range of different ways: directly, i.e. with little modification; conceptually, i.e. to change opinions; or politically, i.e. to confirm or challenge practices or policies), considering the practicalities of use, spreading knowledge to others and sustaining knowledge use.
In the second approach (b), the problem identification focused on a strategic interest of a region in one of the good practices selected by other regions in the project. After a region expressed its interest, the requirements of that good practice, for it to be adopted and transferred, were assessed using the SCIROCCO tool. Then, the receiving region assessed the maturity of its healthcare system for the adoption of the good practice using the SCIROCCO tool. After the regions had explored the requirements of the healthcare system to adopt the good practice, the twinning and coaching process was initiated. Within the project, a total of three cases (1, 2 and 5) focused on the second approach.
After the regions were matched, they used different approaches to clarify the problem/challenge of the adopting regions before the study visit. One transferring region explicitly indicated being in contact with the adopting region prior to the visit (case 5). In the other cases, the regions involved did not provide details on how the preparations for clarifying the challenge of the receiving regions were conducted. Participants in case 2 mentioned that the study visit would have benefitted from more preparation.
Clarifying/focussing on the challenges of the regions occurred during the study visits. In the programme of four out of five study visits, explicit time was scheduled to discuss the rationale for the twinning and coaching between the transferring and receiving region(s). The challenges of the adopting and transferring regions were sometimes also mentioned during the focus groups. Some respondents talked about reviewing the challenge experienced by their region, based on the knowledge they had received during the study visit. Quotes are presented in Additional file 1.
In the final step of the KT process, the challenge of the regions was described in the action plans by the regional project leaders of SCIROCCO (details are provided in Additional file 1). For all five cases, the background of the problem was identified as being part of a broader process for change and/or improvement of the health and social care systems. Almost all representatives of the regions acknowledged that the sustainability of their health and social care systems is becoming a challenge. Hereafter, the problem was focused more towards the subject of the KT activity: a short description of these challenges is presented in Additional file 1.
No direct observations were made of the problems in the regions evolving over time. In two cases, however, respondents mentioned that the tool could be used to track progress over time (in cases 1 and 3).
Context
Exploring, discovering and revealing contextual characteristics was a central part of the five KT activities within the SCIROCCO project. This activity was supported by using the SCIROCCO tool. In the two study visits (in cases 3 and 4) using the first approach, a facilitated discussion was organised that compared the self-assessments of the transferring and receiving regions. For each dimension, this included the identified features of the health care system. The features were concrete attributes of the environment that are needed for improvement. The receiving regions explored what they needed to change in the local environment in order to enable the improvement of that domain in their local context. They also considered whether improvement in this specific aspect of integrated care related to other dimensions of the SCIROCCO tool. These aspects were later captured in the relevant action plan.
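As a rough illustration of the comparison underpinning such a facilitated discussion, the sketch below contrasts hypothetical self-assessment scores of a transferring and a receiving region for a few SCIROCCO dimensions and ranks the gaps. The scores, the 0-5 scale and the dimension subset are assumptions made for illustration only.

# Minimal sketch (hypothetical scores on an assumed 0-5 scale, and only a subset of the
# 12 SCIROCCO dimensions): compare two self-assessments and rank the gaps.
transferring = {"Readiness to Change": 4, "Innovation Management": 5,
                "Information and eHealth Services": 4, "Citizen Empowerment": 3}
receiving    = {"Readiness to Change": 2, "Innovation Management": 1,
                "Information and eHealth Services": 3, "Citizen Empowerment": 3}

gaps = sorted(((dim, transferring[dim] - receiving[dim]) for dim in transferring),
              key=lambda item: item[1], reverse=True)

for dim, gap in gaps:
    print(f"{dim}: gap of {gap}")
# The dimensions with the largest gaps would be natural candidates for the
# twinning discussion between the two regions.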
In the second approach, the KT was informed by the assessment of maturity requirements, first, of a good practice for adoption and, second, of the health care system of the receiving region. A maturity requirement is a feature that a good practice needs from the environment for it to be implemented. In two out of the three study visits, a facilitated discussion focused on what the requirements of the local health care systems would be to transfer the good practice (one did not take place because of lack of time). The outcomes of these discussions informed the development of the action plan.
In Additional file 2, the classification of contextual dimensions made by the receiving regions is presented, based on the assessment of the extent to which the transfer of knowledge per contextual dimension was regarded as feasible in the local context. The organisational or professional contextual structural characteristics were sometimes indicated by regions and are also presented in Additional file 2.
SCIROCCO's KT procedure
The SCIROCCO tool and project activities supported regions in locating the knowledge. The tool and project activities assisted in the matching of regions and the further KT processes. Assessing the relevance and usefulness of knowledge by the experts of the receiving regions occurred during facilitated discussions in six out of the seven study visits. Based on the contextual dimensions and features, the experts assessed whether transferring the learning about the good practice or the learning about a dimension was feasible in their region's context. This was done by indicating whether transferability was feasible (yes or no). When it was considered feasible, this was further assessed by indicating whether this required little or much effort or adaptations.
After looking into the feasibility, a further selection of the knowledge was made. In the action plans, the receiving regions listed a maximum of three prioritised features to be considered for the transferability of learning about a good practice, or an improvement in dimension in the receiving regions' local context. Hereafter, the adopting regions described per listed feature the suggested adaptations to their local context to enable the creation of conditions for the adoption of the learning from the good practice or dimension (Additional file 2). The suggested adaptations can be understood as tailoring knowledge.
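The two steps described above, assessing feasibility and then prioritising at most three features, can be sketched as follows; the feature names, effort labels and selection rule are illustrative assumptions rather than the regions' actual assessments.

# Minimal sketch (assumed feature names and labels): feasibility assessment followed by
# the selection of at most three prioritised features, as in the receiving regions' action plans.
assessments = [
    {"feature": "Shared electronic health record", "feasible": True,  "effort": "much"},
    {"feature": "Facilitated GP/nurse interviews",  "feasible": True,  "effort": "little"},
    {"feature": "Dedicated monitoring room",        "feasible": False, "effort": None},
    {"feature": "Training programme for staff",     "feasible": True,  "effort": "little"},
]

def prioritise(items, limit=3):
    """Keep feasible features, prefer those needing little effort, cap at `limit`."""
    feasible = [a for a in items if a["feasible"]]
    feasible.sort(key=lambda a: a["effort"] != "little")  # little-effort features first
    return [a["feature"] for a in feasible[:limit]]

print(prioritise(assessments))
# ['Facilitated GP/nurse interviews', 'Training programme for staff',
#  'Shared electronic health record']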
Type of knowledge
Transferring regions The knowledge shared by the transferring regions came from different sources, including presentations and discussions among the experts from the transferring and receiving regions. In four cases, knowledge also came from practical site visits. Furthermore, the transferring regions provided information in the action plan of the receiving regions. The knowledge shared among the regions came from a mix of "knowledge donors", who were involved in the KT activities and differed per case (Additional file 2). Only one transferring region, case 3, included members of the public who were acting as, or on behalf of, their communities and people in receipt of services (i.e., service users). Furthermore, the type of knowledge which was offered by the transferring regions during the five organised KT activities within the SCIROCCO project varied per case (Additional file 2). In three study visits, scientific/factual knowledge was shared in the form of data on the performance of the practice shown during presentations, or was described in the action plans. Technical knowledge was shared during the presentations in the study visits. The sharing of technical knowledge and practical wisdom was reflected in sharing experiences with the experts of the transferring regions during both the discussions and the demonstrations in the site visits.
Receiving regions The participants from the receiving regions, who were regarded as the "knowledge receivers", included a mix of experts, composed of project members of SCIROCCO and invited regional experts (Additional file 2). In all cases programme and programme developers were involved as experts. Only one receiving region included a member of the public acting as, or on behalf of, the person's community and people in receipt of services.
With regard to the type of knowledge, the adopting regions described in their action plans, per listed feature, the suggested adaptations/changes of the features to their local context. The type of knowledge which was of interest for the regions is shown in Additional file 2. The type of knowledge needed that was categorised as scientific/factual knowledge was described by two of the receiving regions and included the feature of the SCIROCCO tool entitled Evaluation Methods.
The need for technical knowledge was noted in all seven receiving regions. Technical knowledge was about developing (implementation) plans/mechanisms (enabling adoption), reforming/developing legislation, and embedding learning through education and training, and included different "dimensions/features". The last type of knowledge needed, practical wisdom, was found in five regions. The need for practical wisdom included raising awareness about a new way of working, increasing public awareness, and demonstrating the benefits of the good practice or of improvement in an aspect of integrated care. Features that emerged included Removal of Inhibitors, Citizen Empowerment, Readiness to Change, Innovation Management and Information and eHealth Services (these five are among the 12 dimensions that populate the SCIROCCO tool).
Intervention
For the intervention concept, a distinction is made between the intervention consisting of the SCIROCCO project itself and the priority actions of the adopting regions as described in the action plans. These two sorts of interventions are described separately below.
SCIROCCO intervention
When focussing on the SCIROCCO project, three types of KT interventions were reflected in the methodology for twinning and coaching sessions. Starting with information management, the SCIROCCO project supported the regions in finding the knowledge in another participating region. Linkage and exchange occurred as the five KT activities organised by SCIROCCO included study visits to bring together the matched regions. All study visits included presentations from the transferring region. Almost all accommodated discussions among the regions based on comparisons of the self-assessments, and some involved practical site visits. Finally, capacity building was facilitated by helping the regions to reflect on the possibility of transferring and adopting the learning about the good practice or dimensions to local settings, by drawing up an action plan following the study visit. Regarding the negotiation of KT roles and responsibilities within the SCIROCCO project, the SCIROCCO local project members were part of the KT activities representing their regions and they invited several types of regional experts to be part of the KT: details on these types of experts are presented in the "knowledge" section. The knowledge mobilisation technique used by SCIROCCO can be categorised as "making connections between knowledge stakeholders and actors by establishing and brokering relationships." The participants provided feedback on the SCIROCCO study visits: a short overview of their feedback on these visits is presented here.
The use of the SCIROCCO tool as part of the knowledge transfer activity was considered, according to respondents in cases 1, 4 and 5, as supportive in focussing/structuring the discussion between the regions during the study visit. Two experts (in cases 1 and 2) suggested editing the tool or adding elements to it. Some experts indicated experiencing issues with the language of the tool (in cases 2, 4 and 5). Overall, participants' observations covered the usefulness of study visits, especially when they included a practical (real-life) trip and/or presentations (in cases 1, 2 and 3); an appreciation of the collaboration and process involved (in cases 1 and 3); and of the role that the site visits played in mutual learning (in case 4). Organisationally, the time spent on a study visit or on its preparation could, in some cases, have been longer (in cases 1, 3 and 5).
Interventions to be used by receiving regions
During the study visits, there was time for the regions to discuss and clarify possible interventions to transfer the learning/knowledge to their local contexts. This means that the adopting regions discussed what changes/improvements were needed to enable the transfer of the good practice or the improvement in an aspect of integrated care in their local environments. (This is also reflected in the knowledge needed by the regions as presented under "knowledge" section). Once back home, these processes were further clarified and written down in the regional action plans in the form of priority actions. An iterative process of selecting an intervention by the regions could not be observed. At the end of their action plans, the regions listed the actions proposed to enable conditions for adopting the learning of the good practice or to enable conditions for improvement of innovation management in the local context. The actions included objectives, anticipated outcomes, and policy implications. The priority actions of the regions are categorised under the type of intervention to be used and are presented in Additional file 3.
The type of intervention categorised as capacity development was described by all the regions. It involved raising awareness among professionals or citizens when certain improvements are needed. It concerned e.g. engaging professionals or embedding/improving education and training. Strengthening/improving or positioning several roles as part of the intended change were also considered part of capacity development. Linkage, as intervention, emerged as engaging/involving several stakeholders or joining efforts among actors, and encouraging participation and partnership building in the intended change. Decision and implementation support were reflected when receiving regions referred to developing plans or strategies for implementation, extending or scaling-up initiatives, or embedding elements in regulations or policies. Information management came up in a few regions, indicating the collection of data/information on the change and publishing data.
The study team also looked at whether the actions could be categorised under the "How mobilise knowledge?" concept. However, since the action plans refer to "proposed" actions and policy "implications", the actual implementation of these plans was out of the scope of the project. As a result, it was not possible to categorise the "How mobilise knowledge?" concept for these actions.
Attention was paid to negotiating KT roles and responsibilities in the action plan, as the receiving regions were encouraged to think of who would be the (future) responsible actors for the priority actions. Six of the seven regions pointed out the responsible actors (see Additional file 3). Furthermore, the regions indicated policy implications for the intended actions, which can be considered as a form of integrating the intervention/ priority action in their local context.
Use
A range of ways of how the knowledge will be used could be retrieved from the action plans (see Additional file 3). The knowledge transferred during the twinning and coaching sessions is expected to be used mainly conceptually (i.e. to change opinions) and politically (i.e., to confirm or challenge policies) by the receiving regions. The receiving regions indicated policy implications for the proposed priority actions. Some regions indicated that they have a range of policies in place supporting the actions, while other regions were in the middle of developing them or opted for expressing the need for policies or strategies to support the action. The policy implications indicated are presented in Additional file 3 under "knowledge used politically." These policy implications, including the request to think of the responsible actor(s) and anticipated duration of the action, can be considered as SCIROCCO's way to support the receiving regions to think of sustaining and spreading knowledge.
The receiving regions also indicated the practicalities of knowledge use, as sometimes regions indicated during the assessment of knowledge, that the knowledge would not be feasible to transfer (see "context" section). Practicalities are also considered in the action plans, where the adopting regions described the benefits and opportunities of the adoption of the good practice or of improving a particular dimension in their region. These are summarised in Additional file 3.
The categories on "why knowledge is being mobilised", reflected in these practicalities, are also presented in Additional file 3. The reasons for mobilising knowledge between the regions are found mainly to be a mix of "To (further) develop new policies, programmes and/or recommendations", and "to change practices and behaviours." Also, a few regions were planning to use the knowledge "To adopt/implement transferring regions ideas on practices and policies.
Discussion
This multi-method study had the purpose of providing insights into how the processes of KT facilitated between five European regions unfolded as part of the SCIROCCO project, which aimed at the transfer and scaling-up of successful integrated care initiatives. To explore this aim, data were collected within the project by conducting focus groups and examining the content of project documents.
The two frameworks used to guide this study were found to be useful for analysing the KT processes. Moreover, the SCIROCCO project appeared to have designed an extensive approach for the KT process among five participating regions. It was found that the five components (including the themes) of KE [10] could to a large extent be identified in the developed methodology for the SCIROCCO twinning and coaching activities. Furthermore, the four questions and accompanying categories of the framework for knowledge mobilisers [7], were also identified to a large degree and provided additional insights in the SCIROCCO KT processes.
These key findings are discussed below in terms of the problem, context, knowledge, intervention and use before the strengths and limitations of the study are explored.
Problem
In all five cases, attention was paid to problem definition on several occasions in the KT process. Evolution of the problem definition over time could not be observed in the five cases of this study. There are two possible explanations for this. The first explanation may be the difference between the two studies in the type and intensity of interactions between the facilitating party (called the "knowledge broker" by Ward and colleagues [10]) and the receiving "team." In the study of Ward et al. the knowledge brokering activities "were driven by the teams' own problem-solving processes [10]." In contrast, the outlined step-based design of KT processes within SCIROCCO was implemented within the scope of a project, offering limited options within a time-bound and defined programme. Therefore, it might not have been possible to allow sufficient time and space for the problem to evolve. Indeed, some regions indicated the potential for the SCIROCCO tool to be used in the future to track changes over time. As Ward et al. observed "that an inability to revise and evolve KE problems can hamper the desired change process [10]", it is advised to allow for the evolution of the problem in the design of SCIROCCO-based KT processes in the future.
The second possible explanation is that, in the study of Ward et al., the knowledge broker participated in the KE process of three teams over a period of 10-15 months and collected observational field notes [10]. In our study of the experiences in SCIROCCO, data were collected on four occasions and no direct observations were made during the transfer of knowledge. This could mean that we were unable to detect the evolution of the problem over time. Furthermore, this could also explain the fact that insufficient data were gathered to enable a comprehensive insight in step 2 of the KT approach, when the regions were matched and prepared themselves for the knowledge transfer before the study visits took place.
Context
Contextual characteristics were specifically considered in the five KE cases by using the SCIROCCO tool to inform three steps in the KT processes. Not all themes identified by Ward et al. as being related to context were reflected in the SCIROCCO approach [10]. The contextual characteristics within the SCIROCCO project are more focused on the macro health care system related to integrated care; however, some organisational and professional structural characteristics were reflected in the suggested adaptations to context addressed by some receiving regions. Ward et al. indicated that their findings suggest "that KE is a social and political rather than behavioural phenomenon which involves professional identities and norms in addition to individual beliefs" and that "fractions within a group may instigate KE as part of a strategy of contesting professional norms and identities [10]." This phenomenon is observed in SCIROCCO in the fact that several receiving regions indicated the requirement to raise professionals' awareness of the need for change or of the benefits of change. Moreover, Ward et al. suggest that "knowledge translation approaches need to focus beyond individual behaviour or specific organisational characteristics [10]." Although they are focused on integrated care, the wide scope of the contextual dimensions which are part of the KT process in the SCIROCCO approach could be interesting to consider either in the framework of Ward et al. or in other KT processes [10]. Vice versa, the focus on personal, individual and interpersonal characteristics could be useful when using an approach like SCIROCCO, if receiving regions were interested in transferring the knowledge retrieved to the level of actual practice.
Knowledge
All five cases were actively supported by the SCIROCCO project in locating, assessing and tailoring knowledge during several steps of the knowledge transfer process. Ward et al. observed in their study that locating and tailoring knowledge was "rarely instigated by the knowledge broker" and that the teams "classified and selected knowledge in relation to their professional backgrounds and training and that these preferences are amenable to change through reflexive action by team members [10]." In contrast, these processes were outlined in the SCIROCCO approach. The professional background of experts could have played a role in the classification and selection of knowledge within SCIROCCO and, in the study visits, the facilitated discussions may have supported reflective action among the experts. However, since data for this study were only collected through focus groups, where reflection is an actual part of the data collection technique, and no observations were made during the discussions that took place in the study visits, the influence of the various professions on changing preferences and reflexive actions could not be observed. Nonetheless, in the SCIROCCO KT process, there was indeed time and space for discussions among different professionals, which suggests that support was offered for the "naturalistic processes of reflexivity and discrimination" advised by Ward et al. [10]. The knowledge offered by the transferring regions came from a mix of knowledge donors, which were identified according to the categories of Ward [7], and several types of knowledge were offered. In contrast to Ward, we did focus on knowledge receivers since this would provide insight into the type of experts involved in the knowledge change process within SCIROCCO [7]. Ward indicated that focusing on knowledge receivers "suggests that knowledge is a product which is to be translated into practice, […] and is at odds with observations of the fluid, multidirectional nature of knowledge mobilisation [7]." In the process of knowledge transfer between several international regions, designated experts are needed to participate in the process. This selection does not mean that the KT process of SCIROCCO did not consider the potential for a fluid, multidirectional nature to the KT process; rather, the diverse types of experts were encouraged to think of how to deploy knowledge in the regions and to name responsible actors.
Intervention
The themes identified by Ward et al. in the "intervention" component were found to be facilitated within the SCIROCCO project [10]. Discussions among regions on "making the intervention iterative" were not observed. Possible reasons, reflected by the scope of the SCIROCCO project or the lack of direct observations made, are elaborated on above under "Problem." Ward et al. found in their study that "many of the KE activities which we observed were an integral part of the process of change in which the teams were engaged" and argued that "the development of […] knowledge translation interventions could begin by focusing on these naturalistic KE activities [10]." They suggest that this could increase the willingness of members to engage with KE interventions and would also make them more easily conceivable in the absence of resources for external assistance.
All regions involved in the KT activities of SCIROCCO were found to be engaged in a broader process of change. However, the KT process of SCIROCCO can be regarded as an "add-on" intervention (using external resources and skills) to facilitate the KT process between regions. In addition, the SCIROCCO project also focused on the development of the tool to facilitate transfer among regions, which required additional resources. Nonetheless, the SCIROCCO approach seems to correspond to elements of the natural processes of KE and to be open to a wide range of sources and different types of knowledge, resulting in a variety of types of interventions intended to be used. The general trend among the participants in SCIROCCO was of a positive experience derived from the study visits. This suggests that the participants were willing to engage in SCIROCCO's KT process. The approach of matching regions will, however, require external resources. Furthermore, the SCIROCCO tool seems useful in providing support to the regions to identify the problem and to locate, clarify and assess knowledge and possible interventions during the KT processes. This indicates that the SCIROCCO tool has shown potential in the knowledge transfer process.
Use
Conceptualisations of knowledge use were found to be part of the SCIROCCO KT activities and included various forms of intended knowledge use in the five cases. Ward et al. suggest in their study that "KE can be understood as a dynamic and fluid process which incorporates distinct forms of knowledge from multiple sources [10]." The incorporation of distinct forms of knowledge from multiple sources is reflected in the SCIROCCO approach and the dynamic, fluid process could to a certain extent be observed. Some or sometimes all the components have been shown to occur simultaneously at different steps within the KT processes of SCIROCCO. However, a fluid process was not always reflected in the processes explored in this study. There are at least two potential explanations for this. This could be due to the limitations in data collection (as described under "Problem"). Another explanation lies in the outlined programme of SCIROCCO's KT process: this could have resulted in a more "linear" approach, which could have compromised the fluid processes.
Knowledge mobilisation models have been found to focus on how change occurs, to lack practical utility and not to focus on the content of change activities [14,25]. The findings of this study contribute to providing concrete evidence to counterbalance the previous lack of practical insight into the specific methods of KT initiatives. Altogether, we consider the insights gained into SCIROCCO's unique methodology for KT and how it unfolded in practice valuable.
Furthermore, it is questionable whether insights are available from other studies looking into practical knowledge transfer between international regions, as many studies focus on knowledge transfer between research and practice [26-28]. The insights obtained in this study are specifically compelling for other regions that are interested in SCIROCCO's KE process or, more generally, in the exchange of knowledge in the field of integrated care.
Strengths and limitations
There are three main strengths to this multi-method study. First, this study was guided by two frameworks which supported the data collection and analyses. This use of frameworks is important since the transfer of knowledge and the scalability of elements of integrated care initiatives to other organisations/regions lack clarity and pose great challenges, and the literature supported us in obtaining a better understanding. Second, the focus groups enabled a depth of coverage of knowledge transfer issues, and by conducting document analysis, breadth was also achieved. The multi-method qualitative nature of this research enabled some practical insights into a KT initiative and demonstrates what the approach yields for participants. Third, the data collected on the exchange of knowledge between several diverse European regions, in an international context, enabled insights to be obtained which are likely to be applicable to other contexts.
There are three main limitations to the study. First is the fact that data were collected on four occasions, and no direct observations were made during the exchange of knowledge within SCIROCCO. The methodology for the KT process was developed and planned on short notice within the framework of a wider project, which had dealt with delays and deadlines. Although we were partner members of the SCIROCCO project, which enabled us to follow the project closely, we had to consider the work timetabling of other project activities and thereby make instrumental choices about the data collection. This constrained the ability to collect data to cover all the potential components reflected in the KT process. The second limitation is that since the researchers were partner members of the SCIROCCO project, there is a risk of bias in the focus groups. Through the course of the project, the researchers continuously reflected on the methodological decisions and their own role in the research process and ensured their independent role as researchers by not interfering with any project activities except the evaluation activities. A third limitation which needs to be addressed is the practical use of the frameworks of Ward (et al.) [7,10]. Ward et al. stated that the framework of KE has "elements [that] need further examination" but suggested that it can "act as a starting point for exploration and evaluation [10]." For the second framework, Ward mentioned that "although the framework does not offer an easy set of methods or tools for evaluating knowledge mobilisation initiatives, it can provide some basic building blocks for determining and planning suitable evaluation strategies [7]." Despite the limitations of the practical applicability of the frameworks, given that there is a lack of available tools and mechanisms for evaluating knowledge mobilisation projects, to our understanding these were the most comprehensive KT frameworks available that fit the study. Some elements of the Ward et al. [10] and Ward [7] frameworks did not fit the specific knowledge change process of SCIROCCO, as the focus of the studies differed. Furthermore, some descriptions of concepts were broad. Therefore, we adjusted some concepts and provided our own interpretation of them.
Conclusion
This multi-method study provides new insights into how the KT processes unfolded as part of a European-funded project aimed at the transfer and scaling-up of successful integrated care initiatives.
When compared with two frameworks which focus on KE and knowledge mobilisation, the SCIROCCO project seems to have used an extensive approach to the KT processes implemented in several European regions. The insights obtained could support other regions interested in using the SCIROCCO tool and processes, in terms of what to consider during KT with another region, especially in order to improve the local conditions that may enable the adoption and scaling-up of integrated care. Due to its limited duration, the SCIROCCO project did not address the implementation and monitoring of the set of regional action plans, which were written by the five regions involved to capture the learning derived from the twinning and coaching sessions. The implementation of the action plans would benefit from an iterative implementation process, which could also be useful for interested regions. Furthermore, additional evaluation research is recommended to gain insight into the implementation processes of the plans and the monitoring of their implementation progress in the regions.